path | concatenated_notebook
---|---|
Homework/hw3/ac209a_hw3.ipynb | ###Markdown
CS109A Introduction to Data Science: Homework 3 AC 209 : Regularization**Harvard University****Fall 2019****Instructors**: Pavlos Protopapas, Kevin Rader and Chris Tanner
###Code
# RUN THIS CELL FOR FORMAT
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
# Imports
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet, RidgeCV, LassoCV, ElasticNetCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score
%matplotlib inline
###Output
_____no_output_____
###Markdown
Question 1 [12 pts] Ridge and LASSO regularizations are powerful tools that not only increase generalization, but also expand the range of problems that we can solve. We will study this statement in this question. **1.1** Let $X\in \mathbb{R}^{n\times p}$ be a matrix of observations, where each row corresponds an observation and each column corresponds to a predictor. Now consider the case $p > n$: explain why there is no unique solution to the OLS estimator. **1.2** Now consider the Ridge formulation. Show that finding the ridge estimator is equivalent to solving an OLS problem after adding p dummy observations with their X value equal to $\sqrt{\lambda}$ at the j-th component and zero everywhere else, and their Y value set to zero. In a nutshell, show that the ridge estimator can be found by getting the least squares estimator for the augmented problem:$$X^* = \begin{bmatrix} X \\ \sqrt{\lambda}I \end{bmatrix}$$$$Y^* = \begin{bmatrix} Y \\ \textbf{0} \end{bmatrix}$$**1.3** Can we now solve the $p > n$ situation? Explain why.**1.4** Take a look at the LASSO estimator expression that we derived when $X^TX=I$. What needs to happen for LASSO to nullify $\beta_i$?**1.5** Can LASSO be used when $p>n$? What important consideration, related to the number of predictors that LASSO chooses, do we have to keep in mind in that case?**1.6** Ridge and LASSO still have room for improvement. List two limitations of Ridge, and two limitations of LASSO.**1.7** Review the class slides and answer the following questions: When is Ridge preferred? When is LASSO preferred? When is Elastic Net preferred? Answers **1.1 Let $X\in \mathbb{R}^{n\times p}$ be a matrix of observations, where each row corresponds an observation and each column corresponds to a predictor. Now consider the case $p > n$: explain why there is no unique solution to the OLS estimator. ** *your answer here* **1.2 Now consider the Ridge formulation. Show that finding the ridge estimator is equivalent to solving an OLS problem after adding p dummy observations with their X value equal to $\sqrt{\lambda}$ at the j-th component and zero everywhere else, and their Y value set to zero. In a nutshell, show that the ridge estimator can be found by getting the least squares estimator for the augmented problem: **$$X^* = \begin{bmatrix} X \\ \sqrt{\lambda}I \end{bmatrix}$$$$Y^* = \begin{bmatrix} Y \\ \textbf{0} \end{bmatrix}$$ *your answer here* **1.3 Can we now solve the $p > n$ situation? Explain why. ** *your answer here* **1.4 Take a look at the LASSO estimator expression that we derived when $X^TX=I$. What needs to happen for LASSO to nullify $\beta_i$? ** *your answer here* **1.5 Can LASSO be used when $p>n$? What important consideration, related to the number of predictors that LASSO chooses, do we have to keep in mind in that case? ** *your answer here* **5.6 Ridge and LASSO still have room for improvement. List two limitations of Ridge, and two limitations of LASSO. ** *your answer here* **5.7 Review the class slides and answer the following questions: When is Ridge preferred? When is LASSO preferred? When is Elastic Net preferred? ** *your answer here* Question 2 [12pts]We want to analyze the behavior of our estimators in cases where p > n. We will generate dummy regression problems for this analysis, so that we have full control on the properties of the problem. Sklearn provides an easy to use function to generate regression problems: `sklearn.datasets.make_regression`. 
**2.1** Use the provided notebook cell to to build a dataset with 500 samples, 2500 features, 100 informative features and a noise sd of 10.0. The function will return the true coefficients in `true_coef`. Intercepts are not generated, so do not fit them in your regressions. Fit LinearRegression, LassoCV, RidgeCV and ElasticNetCV estimators on the traininig set with 5-fold crossvalidation.Test 100 lambda values from 0.01 to 1000, in logscale. For Elastic Net, also test the following L1 ratios: [.1, .5, .7, .9, .95, .99] (it is good practice to try more ratio values near the L1 term, as the ridge penalty tends to have higher absolute magnitude).**Do not change `random_state=209`, to facilitate grading.****2.2** As we used `n_informative = 100`, the true betas will contain 100 non-zero values. Let's see if our estimators picked up on that trend. Print the number of betas greater than $10^{-6}$ (non-zero values) for each estimator, and comment on the results.**2.3** Let's see how our estimators perform on the test set. Calculate $R^2$ for each estimator on the test set. Comment on the results.**2.4** Now, let's observe what happens when we increase the number of informative features. Generate another regression problem with the same parameters as before, but this time with an n_informative of 600. Finally, fit OLS, Ridge, LASSO and EN, and print the number of non-zero coefficients and R2 Scores.**2.5** Compare the results with the previous case and comment. What can we say about LASSO and Elastic Net in particular?
###Code
# Constants
n = 500
p = 2500
informative = 100
rs = 209
sd = 5
# Generate regression problem
X, y, true_coef = make_regression(n_samples = n, n_features = p, n_informative = informative,
coef = True, noise = sd)
# Get train test split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=rs)
###Output
_____no_output_____
###Markdown
Solutions **2.1 Use the provided notebook cell to build a dataset with 500 samples, 2500 features, 100 informative features and a noise sd of 10.0. The function will return the true coefficients in `true_coef`. Intercepts are not generated, so do not fit them in your regressions. Fit LinearRegression, LassoCV, RidgeCV and ElasticNetCV estimators on the training set with 5-fold cross-validation. **
###Code
# your code here
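# (Added sketch, not the official solution.) One plausible way to fit the four
# estimators described above: no intercept, 5-fold CV, and 100 log-spaced
# lambda values from 0.01 to 1000; reuses X_train/y_train from the starter cell.
lambdas = np.logspace(-2, 3, 100)
l1_ratios = [.1, .5, .7, .9, .95, .99]
ols = LinearRegression(fit_intercept=False).fit(X_train, y_train)
lasso = LassoCV(alphas=lambdas, cv=5, fit_intercept=False).fit(X_train, y_train)
ridge = RidgeCV(alphas=lambdas, cv=5, fit_intercept=False).fit(X_train, y_train)
en = ElasticNetCV(alphas=lambdas, l1_ratio=l1_ratios, cv=5, fit_intercept=False).fit(X_train, y_train)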
###Output
_____no_output_____
###Markdown
**2.2 As we used `n_informative = 100`, the true betas will contain 100 non-zero values. Let's see if our estimators picked up on that trend. Print the number of betas with absolute value greater than $10^{-6}$ (which will correspond to non-zero values) for each estimator, and comment on the results. **
###Code
# your code here
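# (Added sketch.) Count coefficients with absolute value above 1e-6, assuming
# ols/lasso/ridge/en were fitted as in the 2.1 sketch above.
for name, est in [("OLS", ols), ("LASSO", lasso), ("Ridge", ridge), ("ElasticNet", en)]:
    print(name, np.sum(np.abs(est.coef_) > 1e-6))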
###Output
_____no_output_____
###Markdown
*your answer here* **2.3** Let's see how our estimators perform on the test set. Calculate $R^2$ for each estimator on the test set. Comment on the results.
###Code
# your code here
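# (Added sketch.) Test-set R^2 for each estimator from the 2.1 sketch.
for name, est in [("OLS", ols), ("LASSO", lasso), ("Ridge", ridge), ("ElasticNet", en)]:
    print(name, r2_score(y_test, est.predict(X_test)))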
###Output
_____no_output_____
###Markdown
*your answer here* **2.4 Now, let's observe what happens when we increase the number of informative features. Generate another regression problem with the same parameters as before, but this time with an n_informative of 600. Finally, fit OLS, Ridge, LASSO and EN, and print the number of non-zero coefficients and R2 Scores. **
###Code
# your code here
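# (Added sketch.) Same pipeline with 600 informative features, reusing n, p, sd
# and rs from the constants cell; refit the four estimators as in 2.1 and report
# non-zero counts and test R^2 as in 2.2/2.3.
X2, y2, true_coef2 = make_regression(n_samples=n, n_features=p, n_informative=600,
                                     coef=True, noise=sd)
X2_train, X2_test, y2_train, y2_test = train_test_split(X2, y2, test_size=0.33, random_state=rs)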
###Output
_____no_output_____
###Markdown
**2.5 Compare the results with the previous case and comment. What can we say about LASSO and Elastic Net in particular? ** *your answer here* Question 3 [1pt] (for fun) We would like to visualize how Ridge, LASSO and Elastic Net behave. We will build a toy regression example to observe the behavior of the coefficients and loss function as lambda increases.**3.1** Use `sklearn.datasets.make_regression` to build a well-conditioned regression problem with 1000 samples, 5 features, noise standard deviation of 10 and random state 209.**3.2** Find the Ridge, LASSO and EN estimator for this problem, varying the regularization parameter in the interval $[0.1,100]$ for LASSO and EN, and $[0.1,10000]$ for Ridge. Plot the evolution of the 5 coefficients for each estimator in a 2D plot, where the X axis is the regularization parameter and the Y axis is the coefficient value. For Elastic Net, make 4 plots, each one with one of the following L1 ratios: $[0.1, 0.5, 0.8, 0.95]$ You should have 6 plots: one for Lasso, one for Ridge, and 4 for EN. Each plot should have 5 curves, one per coefficient. **3.3** Comment on this evolution. Does this make sense with what we've seen so far?**3.4** We're now interested in visualizing the behavior of the Loss functions. First, generate a regression problem with 1000 samples and 2 features. Then, use the provided "loss_3d_interactive" function to observe how the loss surface changes as the regularization parameter changes. Test the function with Ridge_loss, LASSO_loss and EN_loss. Comment on what you observe.****Note: for this to work, you have to install plotly. Go to https://plot.ly/python/getting-started/ and follow the steps. You don't need to make an account as we'll use the offline mode.** Solutions **3.1 Use `sklearn.datasets.make_regression` to build a well-conditioned regression problem with 1000 samples, 5 features, noise standard deviation of 10 and random state 209. **
###Code
# your code here
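# (Added sketch.) A well-conditioned toy problem as described in 3.1; the names
# X3/y3/coef3 are illustrative.
X3, y3, coef3 = make_regression(n_samples=1000, n_features=5, noise=10, random_state=209, coef=True)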
###Output
_____no_output_____
###Markdown
**3.2 Find the Ridge, LASSO and EN estimator for this problem, varying the regularization parameter in the interval $[0.1,100]$ for LASSO and EN, and $[0.1,10000]$ for Ridge. Plot the evolution of the 5 coefficients for each estimator in a 2D plot, where the X axis is the regularization parameter and the Y axis is the coefficient value. For Elastic Net, make 4 plots, each one with one of the following L1 ratios: $[0.1, 0.5, 0.8, 0.95]$ You should have 6 plots: one for Lasso, one for Ridge, and 4 for EN. Each plot should have 5 curves, one per coefficient. **
###Code
# your code here
#your code here
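# (Added sketch, assuming X3/y3 from the 3.1 sketch.) Coefficient paths for LASSO;
# Ridge and Elastic Net follow the same pattern with their own alpha ranges and,
# for EN, the l1_ratio values listed above.
alphas = np.linspace(0.1, 100, 200)
coefs = np.array([Lasso(alpha=a).fit(X3, y3).coef_ for a in alphas])
plt.plot(alphas, coefs)
plt.xlabel("regularization parameter")
plt.ylabel("coefficient value")
plt.title("LASSO coefficient paths")
plt.show()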
###Output
_____no_output_____
###Markdown
**3.3 Comment on this evolution. Does this make sense with what we've seen so far?** *your answer here* **3.4 We're now interested in visualizing the behavior of the Loss functions. First, generate a regression problem with 1000 samples and 2 features. Then, use the provided "loss_3d_interactive" function to observe how the loss surface changes as the regularization parameter changes. Test the function with Ridge_loss, LASSO_loss and EN_loss. Comment on what you observe.****Note: for this to work, you have to install plotly. Go to https://plot.ly/python/getting-started/ and follow the steps. You don't need to make an account as we'll use the offline mode.**
###Code
X,y,true_coef = make_regression(n_samples = 1000, n_features = 2, noise = 10, random_state=209, coef=True)
from ipywidgets import interactive, HBox, VBox
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import plotly.graph_objs as go
init_notebook_mode(connected=True)
def OLS_loss(X, y, beta, lbda=0):
y_hat = np.dot(X,beta)
return np.sum((y_hat-y)**2,axis=0)
def Ridge_loss(X, y, beta, lbda):
y_hat = np.dot(X,beta)
return np.sum((y_hat-y)**2,axis=0) + lbda*np.sum(beta**2, axis=0)
def LASSO_loss(X, y, beta, lbda):
y_hat = np.dot(X,beta)
return (1 / (2 * len(X)))*np.sum((y_hat-y)**2,axis=0) + lbda*np.sum(np.abs(beta), axis=0)
def EN_loss(X, y, beta, lbda):
ratio=0.1
y_hat = np.dot(X,beta)
return (1 / (2 * len(X)))*np.sum((y_hat-y)**2,axis=0) + lbda*(ratio*np.sum(beta**2, axis=0) + (1-ratio)*np.sum(np.abs(beta), axis=0))
def loss_3d_interactive(X, y, loss='Ridge'):
'''Uses plotly to draw an interactive 3D representation of the loss function,
with a slider to control the regularization factor.
Inputs:
X: predictor matrix for the regression problem. Has to be of dim n x 2
y: response vector
loss: string with the loss to plot. Options are 'Ridge', 'LASSO', 'EN'.
'''
if loss == 'Ridge':
loss_function = Ridge_loss
lbda_slider_min = 0
lbda_slider_max = 10000
lbda_step = 10
clf = Ridge()
elif loss == 'LASSO':
loss_function = LASSO_loss
lbda_slider_min = 1
lbda_slider_max = 150
lbda_step = 1
clf = Lasso()
elif loss == 'EN':
loss_function = EN_loss
lbda_slider_min = 1
lbda_slider_max = 150
lbda_step = 1
clf = ElasticNet()
else:
raise ValueError("Loss string not recognized. Available options are: 'Ridge', 'LASSO', 'EN'.")
# linspace for loss surface
L=20
lsp_b = np.linspace(-80,80,L)
lsp_b_x, lsp_b_y = np.meshgrid(lsp_b,lsp_b)
lsp_b_mat = np.column_stack((lsp_b_x.flatten(),lsp_b_y.flatten()))
# Get all optimal betas for current lambda range
precomp_coefs=[]
for l in range(lbda_slider_min,lbda_slider_max+1,lbda_step):
clf.set_params(alpha=l)
clf.fit(X, y)
precomp_coefs.append(clf.coef_)
f = go.FigureWidget(
data=[
go.Surface(
x=lsp_b_x,
y=lsp_b_y,
z=loss_function(X,y.reshape(-1,1), lsp_b_mat.T, 0).reshape((L,L)),
colorscale='Viridis',
opacity=0.7,
contours=dict(z=dict(show=True,
width=3,
highlight=True,
highlightcolor='orange',
project=dict(z=True),
usecolormap=True))
),
go.Scatter3d(
x=[p[0] for p in precomp_coefs],
y=[p[1] for p in precomp_coefs],
z=np.zeros(len(precomp_coefs)),
marker=dict(
size=1,
color='darkorange',
line=dict(
color='darkorange',
width=1
),
opacity=1
)
),
go.Scatter3d(
x=[0],
y=[0],
z=[0],
marker=dict(
size=10,
color='orange',
opacity=1
),
)
],
layout=go.Layout(scene=go.layout.Scene(
xaxis = dict(
title='Beta 1'),
yaxis = dict(
title='Beta 2'),
zaxis = dict(
title='Loss'),
camera=go.layout.scene.Camera(
up=dict(x=0, y=0, z=1),
center=dict(x=0, y=0, z=0),
eye=dict(x=1.25, y=1.25, z=1.25))
),
width=1000,
height=700,)
)
def update_z(lbda):
f.data[0].z = loss_function(X, y.reshape(-1,1), lsp_b_mat.T, lbda).reshape((L,L))
beta_opt = precomp_coefs[(lbda-lbda_slider_min)//(lbda_step)]
f.data[-1].x = [beta_opt[0]]
f.data[-1].y = [beta_opt[1]]
f.data[-1].z = [0]
lambda_slider = interactive(update_z, lbda=(lbda_slider_min, lbda_slider_max, lbda_step))
vb = VBox((f, lambda_slider))
vb.layout.align_items = 'center'
display(vb)
#your code here
#your code here
#your code here
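# (Added usage sketch.) With the helper above defined, the interactive surfaces
# can be drawn with, e.g.:
# loss_3d_interactive(X, y, loss='Ridge')
# loss_3d_interactive(X, y, loss='LASSO')
# loss_3d_interactive(X, y, loss='EN')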
###Output
_____no_output_____ |
divvy_station_status/visualize_live_data.ipynb | ###Markdown
Make Connection to Google cloud function
###Code
import os
import json
import urllib.request
import requests
import pandas as pd
import psycopg2
from datetime import datetime
from dateutil import tz
QUERY_FUNC = 'https://us-central1-divvy-bike-shari-1562130131708.cloudfunctions.net/query_from_divvy_cloudsql'
%%time
reslst = requests.post(QUERY_FUNC, json={'stationid': '35, 192'}).content.decode('utf-8')
def to_dataframe(raw):
lst_of_lst = [v.split(',') for v in raw.split('\n')]
df = pd.DataFrame(lst_of_lst, columns=['timestamp', 'stationid', 'bikes_avail', 'docks_avail'])
return df
reslst = requests.post(QUERY_FUNC, json={'stationid': '35,192,100,2'}).content.decode('utf-8')
df = to_dataframe(reslst)
df.shape
df.info()
tmp = df[df.stationid == '35'].copy()
tmp['timeindex'] = pd.to_datetime(df['timestamp']).dt.tz_localize('utc').dt.tz_convert('US/Central')
tmp['month'] = tmp.timeindex.apply(lambda x: x.month)
tmp['day'] = tmp.timeindex.apply(lambda x: x.day)
tmp['hour'] = tmp.timeindex.apply(lambda x: x.hour)
tmp.bikes_avail = tmp.bikes_avail.astype('int')
tmp.docks_avail = tmp.docks_avail.astype('int')
min_df = tmp.groupby(['month', 'day', 'hour'])[['timeindex', 'bikes_avail', 'docks_avail']].min()\
.reset_index().rename(columns={"bikes_avail": "min_bikes", "docks_avail": "min_docks"})
max_df = tmp.groupby(['month', 'day', 'hour'])[['bikes_avail', 'docks_avail']].max()\
.reset_index().rename(columns={"bikes_avail": "max_bikes", "docks_avail": "max_docks"})
ave_df = tmp.groupby(['month', 'day', 'hour'])[['bikes_avail', 'docks_avail']].mean()\
.reset_index().rename(columns={"bikes_avail": "ave_bikes", "docks_avail": "ave_docks"})
min_df.merge(max_df, on=['month', 'day', 'hour']).merge(ave_df, on=['month', 'day', 'hour'])
# df.bikes_avail.rolling(12).min()
###Output
_____no_output_____
###Markdown
Visualize data
###Code
import plotly
from plotly.offline import iplot, plot
plotly.__version__
###Output
_____no_output_____
###Markdown
Query Live Station Status (Up to now)
###Code
DIVVY_STATION_URL = 'https://gbfs.divvybikes.com/gbfs/en/station_information.json'
res = requests.get(DIVVY_STATION_URL)
jsonres = res.json()
station_json = json.dumps(jsonres['data']['stations'])
station_status_df = pd.read_json(station_json)
cleaned_stationdata = station_status_df[['capacity', 'lat', 'lon', 'station_id', 'name', 'short_name']]
cleaned_stationdata.to_csv('station.csv')
cleaned_stationdata.head(5)
###Output
_____no_output_____
###Markdown
Make Connection to postgres Initialize connection
###Code
gcp_sql_username = os.environ.get('gcp_sql_username')
gcp_sql_password = os.environ.get('gcp_sql_password')
conn = psycopg2.connect(user=gcp_sql_username, password=gcp_sql_password,
host='localhost', port='5432')
###Output
_____no_output_____
###Markdown
Query data
###Code
%%time
DISPLAY_ROWS = 10000
cur = conn.cursor()
cur.execute('SELECT * FROM divvylivedata WHERE stationid = %s;' %('192'))
print("Total rows: {}\nDisplayed rows: {}\n".format(cur.rowcount, DISPLAY_ROWS))
row_counter = 1
row = cur.fetchone()
while row is not None and row_counter <= DISPLAY_ROWS:
# print(','.join([str(v) for v in row]))
row = cur.fetchone()
row_counter += 1
print(row_counter)
cur.close()
###Output
Total rows: 296
Displayed rows: 10000
297
CPU times: user 3.53 ms, sys: 3.01 ms, total: 6.55 ms
Wall time: 232 ms
###Markdown
Convert unix timestamps into timestamps and consider timezone
###Code
utc_timestamp = datetime.utcfromtimestamp(1565246835).strftime('%Y-%m-%d %H:%M:%S')
print(utc_timestamp)
# METHOD 1: Hardcode zones:
from_zone = tz.gettz('UTC')
to_zone = tz.gettz('America/Chicago')
# # METHOD 2: Auto-detect zones:
# from_zone = tz.tzutc()
# to_zone = tz.tzlocal()
# utc = datetime.utcnow()
utc = datetime.strptime(utc_timestamp, '%Y-%m-%d %H:%M:%S')
# Tell the datetime object that it's in UTC time zone since
# datetime objects are 'naive' by default
utc = utc.replace(tzinfo=from_zone)
# Convert time zone
central = utc.astimezone(to_zone)
print("Local time in Chicago: ", central)
###Output
Local time in Chicago: 2019-08-08 01:47:15-05:00
###Markdown
Close connection
###Code
if conn:
conn.close()
###Output
_____no_output_____ |
10 Days of Statistics/Day_9. Multiple Linear Regression.ipynb | ###Markdown
Day_9. Multiple Linear Regression`Task`Andrea has a simple equation:`Input Format`The first line contains `2` space-separated integers, `m` (the number of observed features) and `n` (the number of feature sets Andrea studied), respectively. Each of the `n` subsequent lines contain `m+1` space-separated decimals; the first `m` elements are features `(f1, f2,...,fm)`, and the last element is the value of `Y` for the line's feature set.The next line contains a single integer, , denoting the number of feature sets Andrea wants to query for. Each of the `q` subsequent lines contains `m` space-separated decimals describing the feature sets.`Scoring``Output Format`For each of the `q` feature sets, print the value of `Y` on a new line (i.e., you must print a total of `q` lines).`Sample Input````2 70.18 0.89 109.851.0 0.26 155.720.92 0.11 137.660.07 0.37 76.170.85 0.16 139.750.99 0.41 162.60.87 0.47 151.7740.49 0.180.57 0.830.56 0.640.76 0.18````Sample Output````105.22142.68132.94129.71```
###Code
import numpy as np
n, m = map(int, input().split())
x_list, y_list = [], []
for _ in range(m):
val = list(map(float, input().split()))
x_list.append([1.0] + val[:-1])
y_list.append(val[-1])
X = np.matrix(x_list)
Y = np.matrix(y_list).reshape(m, 1)
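# Solve the normal equations for ordinary least squares: B = (X^T X)^{-1} X^T Y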
B = (X.T * X).I * X.T * Y
num = int(input())
n_x_list = []
for _ in range(num):
val2 = list(map(float, input().split()))
n_x_list.append([1.0] + val2)
n_X = np.matrix(n_x_list)
n_Y = n_X * B
for i in range(num):
print(f'{n_Y[i, 0]:.2f}')
###Output
_____no_output_____ |
AphabetSoupCode.ipynb | ###Markdown
Preprocessing
###Code
# Import our dependencies
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
import tensorflow as tf
# Import and read the charity_data.csv.
import pandas as pd
application_df = pd.read_csv("/Users/smith/Desktop/GitHub Repos/deep_learning_challenge/Resources/charity_data.csv")
application_df.head()
# Drop the non-beneficial ID columns, 'EIN' and 'NAME'.
application_df = application_df.drop(columns=['EIN','NAME'])
application_df
# Determine the number of unique values in each column.
application_df.nunique()
# Look at APPLICATION_TYPE value counts for binning
application_df['APPLICATION_TYPE'].value_counts()
# Choose a cutoff value and create a list of application types to be replaced
# use the variable name `application_types_to_replace`
#Chose 528 as cutoff per the visual below from starter code
application_types_to_replace = []
for app, cnt in application_df.APPLICATION_TYPE.value_counts().iteritems():
if cnt < 528:
application_types_to_replace.append(app)
# Replace in dataframe
for app in application_types_to_replace:
application_df['APPLICATION_TYPE'] = application_df['APPLICATION_TYPE'].replace(app,"Other")
# Check to make sure binning was successful
application_df['APPLICATION_TYPE'].value_counts()
# Look at CLASSIFICATION value counts for binning
application_df['CLASSIFICATION'].value_counts()
# You may find it helpful to look at CLASSIFICATION value counts >1
application_df['CLASSIFICATION'].value_counts()[application_df['CLASSIFICATION'].value_counts()>1]
# Choose a cutoff value and create a list of classifications to be replaced
# use the variable name `classifications_to_replace`
#Using 1883 as cutoff as per starter code
classifications_to_replace = []
for cls, cnt in application_df.CLASSIFICATION.value_counts().iteritems():
if cnt < 1883:
classifications_to_replace.append(cls)
# Replace in dataframe
for cls in classifications_to_replace:
application_df['CLASSIFICATION'] = application_df['CLASSIFICATION'].replace(cls,"Other")
# Check to make sure binning was successful
application_df['CLASSIFICATION'].value_counts()
#Finding categorical data to convert to numerical
convert_data = list(application_df.dtypes[application_df.dtypes == "object"].index)
convert_data
# Convert categorical data to numeric with `pd.get_dummies`
converted_df = pd.get_dummies(application_df, columns=convert_data)
converted_df.head()
# Split our preprocessed data into our features and target arrays
X = converted_df.drop('IS_SUCCESSFUL', axis=1)
y = converted_df['IS_SUCCESSFUL']
# Split the preprocessed data into a training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
# Create a StandardScaler instances
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Compile, Train and Evaluate the Model
###Code
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
# YOUR CODE GOES HERE
nn_model = tf.keras.models.Sequential()
# First hidden layer
nn_model.add(tf.keras.layers.Dense(units=64, activation="relu", input_dim=43))
# Second hidden layer
nn_model.add(tf.keras.layers.Dense(units=16, activation="relu"))
# Output layer
nn_model.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Check the structure of the model
nn_model.summary()
# Compile the model
nn_model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Train the model
fit_model = nn_model.fit(X_train_scaled, y_train, epochs=50)
# Evaluate the model using the test data
model_loss, model_accuracy = nn_model.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Export our model to HDF5 file
nn_model.save('AlphabetSoupCharity_Optimization.h5')
###Output
_____no_output_____ |
examples/Networks/Case study Chicago/Cross-Validation grid.ipynb | ###Markdown
Cross-Validation on a grid Continuing to work with the Rosser et al. paper, we also want to cross-validate the grid-based "prospective hotspotting".
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.collections
import numpy as np
import descartes
import os, sys
import zipfile, pickle
import open_cp.sources.chicago
import open_cp.geometry
import open_cp.prohotspot
import open_cp.predictors
data_path = os.path.join("/media", "disk", "Data")
#data_path = os.path.join("..", "..", "..", "..", "..", "..", "Data")
open_cp.sources.chicago.set_data_directory(data_path)
south_side = open_cp.sources.chicago.get_side("South")
grid = open_cp.data.Grid(xsize=150, ysize=150, xoffset=0, yoffset=0)
grid = open_cp.geometry.mask_grid_by_intersection(south_side, grid)
filename = open_cp.sources.chicago.get_default_filename()
timed_points = open_cp.sources.chicago.load(filename, ["BURGLARY"])
timed_points.number_data_points, timed_points.time_range
timed_points = open_cp.geometry.intersect_timed_points(timed_points, south_side)
timed_points.number_data_points
###Output
_____no_output_____
###Markdown
Use old data instead
###Code
filename = os.path.join(data_path, "chicago_all_old.csv")
timed_points = open_cp.sources.chicago.load(filename, ["BURGLARY"], type="all")
timed_points.number_data_points, timed_points.time_range
timed_points = open_cp.geometry.intersect_timed_points(timed_points, south_side)
timed_points.number_data_points
###Output
_____no_output_____
###Markdown
What do Rosser et al do?They seem to use a "hybrid" approach, which we have (fortuitously) implemented as `ProspectiveHotSpotContinuous`. That is, they use a continuous KDE method, with both a space and time component, and then convert this to a grid as a final step.The exact formula used is$$ \lambda(t,s) = \sum_{i : t_i<t} f(\|s-s_i\|) g(t-t_i) $$where$$ f(\Delta s) = \begin{cases} \frac{h_S - \Delta s}{h_S^2} & :\text{if } \Delta s \leq h_S, \\ 0 &:\text{otherwise.}\end{cases} \qquadg(\Delta t) = \frac{1}{h_T} \exp\Big( -\frac{\Delta t}{h_T} \Big). $$Notice that this is _not normalised_ because when converting from two dimensions to a (positive) number using the Euclidean norm $\|\cdot\|$ we map the infinitesimal annulus $r \leq \sqrt{x^2+y^2} \leq r+dr$ to the interval $[r, r+dr]$; the former has area $\pi((r+dr)^2 - r^2) = 2\pi r dr$ while the latter has length $dr$. NormalisationLet us think a bit harder about normalisation. We treat $\lambda$ as a "kernel" in time and space, we presumably, mathematically, we allow $s$ to vary over the whole plane, but constrain $t\geq 0$ (assuming all events occur in positive time; in which case $\lambda$ is identically zero for $t<0$ anyway). Thus, that $\lambda$ is "normalised" should mean that$$ \int_0^\infty \int_{\mathbb R^2} \lambda(t, s) \ ds \ dt = 1. $$How do we actually use $\lambda$? In Rosser et al. it is first used to find the "optimal bandwidth selection" by constructing $\lambda$ using all events up to time $T$ and then computing the log likelihood$$ \sum_{T \leq t_i < T+\delta} \log \lambda(t_i, s_i) $$where, if we're using a time unit of days, $\delta=1$ (i.e. we look at the events in the next day).To make predictions, we take point estimates of $\lambda$, or use the mean value. How we treat space versus time is a little unclear in the literature. There are perhaps two approaches:1. Fix a time $t$ and then compute the mean value of $\lambda(t, s)$ as $s$ varies across the grid cell.2. Compute the mean value of $\lambda(t, s)$ as $t$ varies across the day (or other time period) we are predicting for, and as $s$ varies across the grid cell.Typically we use a monte carlo approach to estimate the mean from point estimates. Currently our code implements the first method by fixing time at the start of the day. The example below shows a roughly 2% (maximum) difference between (1) and (2) with little change if we vary the fixed time $t$ in (1).Notice that this introduces a difference between finding the optimal bandwidth selection and making a prediction. The former uses the exact timestamps of events, while the latter makes one prediction for the whole day, and then "validates" this against all events which occur in that day.We thus have a number of different "normalisations" to consider. We could normalise $\lambda$ so that it is a probability kernel-- this is needed if we are to use point evaluations of $\lambda$. When forming a prediction, the resulting grid of values will (almost) never by normalised, as we are not integrating over all time. Thus we should almost certainly normalise the resulting grid based prediction, if we are to compare different predictions. Normalising $\lambda$After private communication with the authors, it appears they are well aware of the normalisation issue, and that this is due to a typo in the paper. 
Using [Polar coordinates](https://en.wikipedia.org/wiki/Polar_coordinate_systemIntegral_calculus_.28area.29) we wish to have that$$ 1 = \int_0^{2\pi} \int_0^\infty r f(r) \ dr \ d\theta = 2\pi \int_0^{h_S} rf(r) \ dr. $$The natural change to make is to define$$ f'(\Delta s) = \begin{cases} \frac{h_S - \Delta s}{\pi h_s^2\Delta s} & :\text{if } \Delta s \leq h_S, \\ 0 &:\text{otherwise.}\end{cases} $$However, this introduces a singularity at $\Delta s = 0$ which is computationally hard to deal with (essentially, the monte carlo approach to integration we use becomes much noisier, as some experiments show).An alternative is to simply divide $f$ by a suitable constant. In our case, the constant is$$ 2\pi \int_0^{h_S} \frac{h_S - r}{h_S^2} r \ dr = 2\pi \Big[ \frac{r^2}{2h_S} - \frac{r^3}{3h_S^2} \Big]_0^{h_S}= 2\pi \Big( \frac{h_S}{2} - \frac{h_S}{3} \Big)= 2\pi \Big( \frac{h_S}{2} - \frac{h_S}{3} \Big)= \pi h_S / 3. $$
###Code
predictor = open_cp.prohotspot.ProspectiveHotSpotContinuous(grid_size=150, time_unit=np.timedelta64(1, "D"))
predictor.data = timed_points[timed_points.timestamps >= np.datetime64("2013-01-01")]
class OurWeight():
def __init__(self):
self.time_bandwidth = 100
self.space_bandwidth = 10
def __call__(self, dt, dd):
kt = np.exp(-dt / self.time_bandwidth) / self.time_bandwidth
dd = np.atleast_1d(np.asarray(dd))
#ks = (self.space_bandwidth - dd) / (self.space_bandwidth * self.space_bandwidth * dd * np.pi)
ks = ((self.space_bandwidth - dd) / (self.space_bandwidth * self.space_bandwidth
* np.pi * self.space_bandwidth) * 3)
mask = dd > self.space_bandwidth
ks[mask] = 0
return kt * ks
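# (Added check, not in the original notebook.) Numerically confirm the rescaled
# spatial kernel integrates to ~1 over the disc of radius h_S, i.e.
# 2*pi * integral_0^{h_S} r * 3*(h_S - r)/(pi*h_S^3) dr = 1.
_hs = 10.0
_r = np.linspace(0.0, _hs, 100001)
_f = (_hs - _r) / (_hs * _hs * np.pi * _hs) * 3
print("kernel mass:", 2 * np.pi * np.trapz(_r * _f, _r))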
predictor.weight = OurWeight()
predictor.weight.space_bandwidth = 1
tend = np.datetime64("2013-01-01") + np.timedelta64(180, "D")
prediction = predictor.predict(tend, tend)
prediction.samples = 50
grid_pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(prediction, grid)
grid_pred.mask_with(grid)
grid_pred = grid_pred.renormalise()
fig, ax = plt.subplots(ncols=2, figsize=(16,8))
for a in ax:
a.set_aspect(1)
a.add_patch(descartes.PolygonPatch(south_side, fc="none", ec="Black"))
ax[0].pcolormesh(*grid_pred.mesh_data(), grid_pred.intensity_matrix, cmap="Blues")
ax[0].set_title("Prediction")
points = predictor.data.events_before(tend)
ax[1].scatter(points.xcoords, points.ycoords, marker="x", color="black", alpha=0.5)
None
grid_pred2 = predictor.grid_predict(tend, tend, tend + np.timedelta64(1, "D"), grid, samples=1)
grid_pred2.mask_with(grid)
grid_pred2 = grid_pred2.renormalise()
fig, ax = plt.subplots(ncols=3, figsize=(16,6))
for a in ax:
a.set_aspect(1)
a.add_patch(descartes.PolygonPatch(south_side, fc="none", ec="Black"))
mp = ax[0].pcolormesh(*grid_pred.mesh_data(), grid_pred.intensity_matrix, cmap="Blues")
ax[0].set_title("Point prediction")
fig.colorbar(mp, ax=ax[0])
mp = ax[1].pcolormesh(*grid_pred2.mesh_data(), grid_pred2.intensity_matrix, cmap="Blues")
ax[1].set_title("With meaned time")
fig.colorbar(mp, ax=ax[1])
mp = ax[2].pcolormesh(*grid_pred2.mesh_data(),
np.abs(grid_pred.intensity_matrix - grid_pred2.intensity_matrix), cmap="Blues")
ax[2].set_title("Difference")
fig.colorbar(mp, ax=ax[2])
fig.tight_layout()
None
###Output
_____no_output_____
###Markdown
Direct calculation of optimal bandwidthFollowing Rosser et al. closely, we don't need to form a grid prediction, and hence actually don't need to use (much of) our library code.- We find the maximum likelihood at 500m and 35--45 days, a tighter bandwidth than Rosser et al.- This mirrors what we saw for the network; perhaps because of using Chicago and not UK data
###Code
tstart = np.datetime64("2013-01-01")
tend = np.datetime64("2013-01-01") + np.timedelta64(180, "D")
def log_likelihood(start, end, weight):
data = timed_points[(timed_points.timestamps >= tstart) &
(timed_points.timestamps < start)]
validate = timed_points[(timed_points.timestamps >= start) &
(timed_points.timestamps <= end)]
dt = validate.timestamps[None, :] - data.timestamps[:, None]
dt = dt / np.timedelta64(1, "D")
dx = validate.xcoords[None, :] - data.xcoords[:, None]
dy = validate.ycoords[None, :] - data.ycoords[:, None]
dd = np.sqrt(dx*dx + dy*dy)
ll = np.sum(weight(dt, dd), axis=0)
ll[ll < 1e-30] = 1e-30
return np.sum(np.log(ll))
def score(weight):
out = 0.0
for day in range(60):
start = tend + np.timedelta64(1, "D") * day
end = tend + np.timedelta64(1, "D") * (day + 1)
out += log_likelihood(start, end, weight)
return out
time_lengths = list(range(5,100,5))
space_lengths = list(range(50, 2000, 50))
scores = {}
for sl in space_lengths:
for tl in time_lengths:
weight = OurWeight()
weight.space_bandwidth = sl
weight.time_bandwidth = tl
key = (sl, tl)
scores[key] = score(weight)
data = np.empty((39,19))
for i, sl in enumerate(space_lengths):
for j, tl in enumerate(time_lengths):
data[i,j] = scores[(sl,tl)]
ordered = data.copy().ravel()
ordered.sort()
cutoff = ordered[int(len(ordered) * 0.25)]
data = np.ma.masked_where(data<cutoff, data)
fig, ax = plt.subplots(figsize=(8,6))
mappable = ax.pcolor(range(5,105,5), range(50,2050,50), data, cmap="Blues")
ax.set(xlabel="Time (days)", ylabel="Space (meters)")
fig.colorbar(mappable, ax=ax)
None
print(max(scores.values()))
[k for k, v in scores.items() if v > -7775]
###Output
-7772.56122721
###Markdown
Scoring the gridWe'll now use the grid prediction; firstly using the "fully averaged" version.- We find the maximum likelihood at 500m and 80 days.- I wonder what explains the slight difference from above?
###Code
def log_likelihood(grid_pred, timed_points):
logli = 0
for x, y in zip(timed_points.xcoords, timed_points.ycoords):
risk = grid_pred.risk(x, y)
if risk < 1e-30:
risk = 1e-30
logli += np.log(risk)
return logli
tstart = np.datetime64("2013-01-01")
tend = np.datetime64("2013-01-01") + np.timedelta64(180, "D")
def score_grids(grids):
out = 0
for day in range(60):
start = tend + np.timedelta64(1, "D") * day
end = tend + np.timedelta64(1, "D") * (day + 1)
grid_pred = grids[start]
mask = (predictor.data.timestamps > start) & (predictor.data.timestamps <= end)
timed_points = predictor.data[mask]
out += log_likelihood(grid_pred, timed_points)
return out
def score(predictor):
grids = dict()
for day in range(60):
start = tend + np.timedelta64(1, "D") * day
end = tend + np.timedelta64(1, "D") * (day + 1)
grid_pred = predictor.grid_predict(start, start, end, grid, samples=5)
grid_pred.mask_with(grid)
grids[start] = grid_pred.renormalise()
return score_grids(grids), grids
time_lengths = list(range(5,100,5))
space_lengths = list(range(50, 2000, 50))
predictor = open_cp.prohotspot.ProspectiveHotSpotContinuous(grid_size=150, time_unit=np.timedelta64(1, "D"))
predictor.data = timed_points[timed_points.timestamps >= np.datetime64("2013-01-01")]
predictor.weight = OurWeight()
results = dict()
zp = zipfile.ZipFile("grids.zip", "w", compression=zipfile.ZIP_DEFLATED)
for sl in space_lengths:
for tl in time_lengths:
key = (sl, tl)
predictor.weight = OurWeight()
predictor.weight.space_bandwidth = sl / predictor.grid
predictor.weight.time_bandwidth = tl
results[key], grids = score(predictor)
with zp.open("{}_{}.grid".format(sl,tl), "w") as f:
f.write(pickle.dumps(grids))
print("Done", sl, tl, file=sys.__stdout__)
zp.close()
data = np.empty((39,19))
for i, sl in enumerate(space_lengths):
for j, tl in enumerate(time_lengths):
data[i,j] = results[(sl,tl)]
ordered = data.copy().ravel()
ordered.sort()
cutoff = ordered[int(len(ordered) * 0.25)]
data = np.ma.masked_where(data<cutoff, data)
fig, ax = plt.subplots(figsize=(8,6))
mappable = ax.pcolor(range(5,105,5), range(50,2050,50), data, cmap="Blues")
ax.set(xlabel="Time (days)", ylabel="Space (meters)")
fig.colorbar(mappable, ax=ax)
None
print(max(results.values()))
[k for k, v in results.items() if v > -3660]
###Output
-3658.71861229
###Markdown
Where did we get to?
###Code
zp = zipfile.ZipFile("grids.zip")
with zp.open("500_80.grid") as f:
grids = pickle.loads(f.read())
one = list(grids)[0]
one = grids[one]
with zp.open("500_90.grid") as f:
grids = pickle.loads(f.read())
two = list(grids)[0]
two = grids[two]
fig, ax = plt.subplots(ncols=2, figsize=(17,8))
for a in ax:
a.set_aspect(1)
a.add_patch(descartes.PolygonPatch(south_side, fc="none", ec="Black"))
for a, g in zip([0,1], [one,two]):
mp = ax[a].pcolormesh(*g.mesh_data(), g.intensity_matrix, cmap="Blues")
fig.colorbar(mp, ax=ax[a])
None
###Output
_____no_output_____
###Markdown
Again, with normal gridInstead of averaging in time, we just take a point estimate.- We find the maximum likelihood at 500m and 60--85 days.
###Code
def score(predictor):
grids = dict()
for day in range(60):
start = tend + np.timedelta64(1, "D") * day
end = tend + np.timedelta64(1, "D") * (day + 1)
prediction = predictor.predict(tend, tend)
prediction.samples = 5
grid_pred = open_cp.predictors.GridPredictionArray.from_continuous_prediction_grid(prediction, grid)
grid_pred.mask_with(grid)
grids[start] = grid_pred.renormalise()
return score_grids(grids), grids
results = dict()
zp = zipfile.ZipFile("grids.zip", "w", compression=zipfile.ZIP_DEFLATED)
for sl in space_lengths:
for tl in time_lengths:
key = (sl, tl)
predictor.weight = OurWeight()
predictor.weight.space_bandwidth = sl / predictor.grid
predictor.weight.time_bandwidth = tl
results[key], grids = score(predictor)
with zp.open("{}_{}.grid".format(sl,tl), "w") as f:
f.write(pickle.dumps(grids))
print("Done", sl, tl, file=sys.__stdout__)
zp.close()
data = np.empty((39,19))
for i, sl in enumerate(space_lengths):
for j, tl in enumerate(time_lengths):
data[i,j] = results[(sl,tl)]
ordered = data.copy().ravel()
ordered.sort()
cutoff = ordered[int(len(ordered) * 0.25)]
data = np.ma.masked_where(data<cutoff, data)
fig, ax = plt.subplots(figsize=(8,6))
mappable = ax.pcolor(range(5,105,5), range(50,2050,50), data, cmap="Blues")
ax.set(xlabel="Time (days)", ylabel="Space (meters)")
fig.colorbar(mappable, ax=ax)
None
print(max(results.values()))
[k for k, v in results.items() if v > -3680]
zp = zipfile.ZipFile("grids.zip")
with zp.open("500_80.grid") as f:
grids = pickle.loads(f.read())
one = list(grids)[0]
one = grids[one]
with zp.open("500_85.grid") as f:
grids = pickle.loads(f.read())
two = list(grids)[0]
two = grids[two]
fig, ax = plt.subplots(ncols=2, figsize=(17,8))
for a in ax:
a.set_aspect(1)
a.add_patch(descartes.PolygonPatch(south_side, fc="none", ec="Black"))
for a, g in zip([0,1], [one,two]):
mp = ax[a].pcolormesh(*g.mesh_data(), g.intensity_matrix, cmap="Blues")
fig.colorbar(mp, ax=ax[a])
None
###Output
_____no_output_____ |
Machine Learning/Week 2/Day8_Boston-Housing-Price-Predict/code/Boston Housing - Python.ipynb | ###Markdown
Boston Housing
###Code
# import packages
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_boston
###Output
_____no_output_____
###Markdown
Load data
###Code
# Load data
boston = load_boston()
# Boston Housing Dataset
print(boston['DESCR'])
# Boston Housing Dataset
df = pd.DataFrame(boston['data'], columns=boston["feature_names"]) # create new data frame
df["target"] = boston["target"] # add target as column in the data
df
###Output
_____no_output_____
###Markdown
Explore data
###Code
# plot the relations between columns
plt.figure(figsize=(14, 8))
sns.heatmap(df.corr(), annot=True)
###Output
_____no_output_____
###Markdown
**Cost Functions**
###Code
def cost_functions(true, pred):
"""
This function to calculate Cost Functions and print output
Input:
true: list of the true value for target
pred: list of the predict value for target
"""
result_dict = {}
mse = mean_squared_error(true, pred)
mae = mean_absolute_error(true, pred)
ls = [mse, mae]
ls2 = ["MSE", "MAE"]
for x in range(len(ls)):
print(f"The result for {ls2[x]}: {ls[x]}")
result_dict[ls2[x]] = ls[x]
return result_dict
###Output
_____no_output_____
###Markdown
**Create models** **Create model using all features**
###Code
X = df.drop("target", axis=1)
y = df[["target"]]
# split data
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, random_state=2)
X_train
# create the model using all features
lr = LinearRegression()
lr.fit(X_train,y_train)
lr.score(X_test, y_test)
# Coefficients
lr.coef_
# predictions
preds = lr.predict(pd.DataFrame(X_test))
res = cost_functions(y_test, preds)
###Output
The result for MSE: 22.160198304875582
The result for MAE: 3.2416565967950546
###Markdown
**Create model using the most affected features**
###Code
# feaures selection
X_train_2 = X_train[['ZN','RAD','PTRATIO','LSTAT','RM']]
X_test_2 = X_test[['ZN','RAD','PTRATIO','LSTAT','RM']]
# create the model using the most affected features
lr2 = LinearRegression()
lr2.fit(X_train_2,y_train)
lr2.score(X_test_2, y_test)
# Coefficients
lr2.coef_
# predictions
preds_2 = lr2.predict(pd.DataFrame(X_test_2))
res2 = cost_functions(y_test, preds_2)
###Output
The result for MSE: 25.216226086209712
The result for MAE: 3.4604287164521472
###Markdown
Finally, the first model achieves the lower error, so we can say it performs better than the second. For future improvements, we could use polynomial regression or cross-validation techniques. Save dataset as .csv file
###Code
df.to_csv('Boston_Housing.csv')
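# (Added sketch, not part of the original notebook.) The conclusion above suggests
# cross-validation as a future improvement; a minimal 5-fold cross-validated R^2
# for the full-feature linear model, reusing X and y from the earlier cells:
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring='r2')
print(cv_scores, cv_scores.mean())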
###Output
_____no_output_____ |
MTA data - Nonprofit outreach recommendation /Benson_geolocation_final.ipynb | ###Markdown
Get Station Zipcode
###Code
#fetch zipcode from geocode API for each station
stations = stations + " station, NY"
def getzipcode(ser):
station_dict = dict()
for station in ser:
try:
zipcode = gmaps.geocode(station)[0]["address_components"][-1]['long_name']
station_dict[station] = zipcode
if len(station_dict) % 50 == 0:
print("index =", len(station_dict), "zipcode =", zipcode)
except IndexError:
print("index error at index=", len(station_dict))
pass
return station_dict
station_dict = getzipcode(stations)
# save dictionary as text
import csv
f = open("station_dict.txt","w")
f.write( str(station_dict) )
f.close()
#Write Dictionary to CSV file
#Convert Dictionary to Dataframe, convert non-zipcodes to NaN, zipcodes to integers
zipcode_df = pd.DataFrame(list(station_dict.items()), columns=['STATION', 'zipcode'])
latlong_df = pd.DataFrame(list(latlong_dict.items()), columns=['STATION', 'latlong'])
# ****** where trouble begins
#zipcode_df["STATION"] = zipcode_df["STATION"].str.replace(" stations, NY","")
zipcode_df["STATION"] = zipcode_df["STATION"].replace("\sstation,\sNY","", regex = True)
zipcode_df.head()
#fix some of the missing/incorrect zipcodes
zipcode_df["zipcode"] = pd.to_numeric(zipcode_df["zipcode"],errors='coerce',downcast='integer')
# Find assigned values (google geocode could not locate the zipcode, or wrongly identifies the zipcode)
unassigned = zipcode_df[(zipcode_df.zipcode.isnull()) | (zipcode_df.zipcode < 10000)]
unassigned
#Assign mislocated or missing zipcodes manually
zipcode_df.iloc[0,1] = 11207.0
zipcode_df.iloc[4,1] = 10018.0
zipcode_df.iloc[8,1] = 10003.0
zipcode_df.iloc[10,1] = 10012.0
zipcode_df.iloc[16,1] = 10002.0
zipcode_df.iloc[41,1] = 11217.0
zipcode_df.iloc[61,1] = 11219.0
zipcode_df.iloc[67,1] = 10019.0
zipcode_df.iloc[78,1] = 11207.0
zipcode_df.iloc[87,1] = 11430.0
zipcode_df.iloc[107,1] = 11418.0
zipcode_df.iloc[129,1] = 10023.0
zipcode_df.iloc[152,1] = 11416.0
zipcode_df.iloc[190,1] = 11375.0
zipcode_df.iloc[193,1] = 11415.0
zipcode_df.iloc[229,1] = 11432.0
#skip new jersey stations & lackawanna
zipcode_df.iloc[245,1] = 10001.0
#new jersey ewark c
zipcode_df.iloc[258,1] = 10040.0
zipcode_df.iloc[324,1] = 11101.0
zipcode_df.iloc[330,1] = 11377.0
zipcode_df.iloc[334,1] = 11372.0
zipcode_df.iloc[352,1] = 11212.0
#RIT-MANHATTA don't know
# unassigned = zipcode_df[(zipcode_df.zipcode.isnull()) | (zipcode_df.zipcode < 10000)]
# unassigned
#write to csv
zipcode_df.zipcode = zipcode_df.zipcode.astype("int64")
#zipcode_df.info()
#Save the zipcode dataframe to CSV file
zipcode_df.to_csv("zipcode_df.csv")
###Output
_____no_output_____
###Markdown
Get station coordinates
###Code
# Fetch lat long data from goeocode API
stations = stations + " station, NY"
def getlatlong(ser):
latlong_dict = dict()
for station in ser:
try:
latlong = gmaps.geocode(station)[0]["geometry"]["location"]
latlong_dict[station] = latlong
if len(latlong_dict) % 50 == 0:
print("index =", len(latlong_dict), "latlong =", latlong)
except IndexError:
print("index error at index=", len(latlong_dict))
pass
return latlong_dict
latlong_dict = getlatlong(stations)
# save dictionary as text
import csv
f = open("latlong_dict.txt","w")
f.write( str(latlong_dict) )
f.close()
#Write Dictionary to CSV file
latlong_df = pd.DataFrame.from_dict(latlong_dict).T
#latlong_df["STATION"] = latlong_df.index
latlong_df.reset_index(level=latlong_df.index.names, inplace=True)
latlong_df = latlong_df.rename(columns={"index": "STATION"})
latlong_df["STATION"]= latlong_df["STATION"].replace("\sstation,\sNY","", regex = True)
latlong_df.lng.describe()
###Output
_____no_output_____ |
python/DeepFace-Analyze-V2.ipynb | ###Markdown
The DeepFace analyze method wraps age, gender and race prediction models. Those models build VGG-Face first, modify its output layer and load pre-trained weights for each sub-model. Each model is 500 MB in size. However, we can build VGG-Face once and compute its early-layer output once, then feed that output into a small 3-layer network. In this way, the size of each of the age, gender and race models would decrease from 500 MB to about 2 MB. This notebook converts those sub-models into smaller ones.
###Code
import numpy as np
import tensorflow as tf
import keras
from keras.preprocessing import image
from keras.callbacks import ModelCheckpoint,EarlyStopping
from keras.layers import Dense, Activation, Dropout, Flatten, Input, Convolution2D, ZeroPadding2D, MaxPooling2D, Activation
from keras.layers import Conv2D, AveragePooling2D
from keras.models import Model, Sequential
import keras.backend as K
from deepface import DeepFace
from deepface.commons import functions
from deepface.extendedmodels import Age
###Output
_____no_output_____
###Markdown
VGG-Face
###Code
#the both if and else blocks work well
if True:
model = DeepFace.build_model('VGG-Face')
else:
model = Sequential()
model.add(ZeroPadding2D((1,1),input_shape=(224,224, 3)))
model.add(Convolution2D(64, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(128, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(512, (3, 3), activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
model.add(Convolution2D(4096, (7, 7), activation='relu'))
model.add(Dropout(0.5))
model.add(Convolution2D(4096, (1, 1), activation='relu'))
model.add(Dropout(0.5))
model.add(Convolution2D(2622, (1, 1)))
model.add(Flatten())
model.add(Activation('softmax'))
model.load_weights('C:/Users/IS96273/.deepface/weights/vgg_face_weights.h5')
###Output
_____no_output_____
###Markdown
V1 model
###Code
if True:
classes = 101
base_model_output = Sequential()
base_model_output = Convolution2D(classes, (1, 1), name='predictions')(model.layers[-4].output)
base_model_output = Flatten()(base_model_output)
base_model_output = Activation('softmax')(base_model_output)
age_model = Model(inputs=model.input, outputs=base_model_output)
age_model.load_weights("C:/Users/IS96273/.deepface/weights/age_model_weights.h5")
else:
#else block causes trouble. I cannot understand why.
age_model = DeepFace.build_model('Age')
###Output
_____no_output_____
###Markdown
V2 model
###Code
common_model = Model(inputs = model.input, outputs = model.layers[-4].output)
age_model_v2 = Sequential()
age_model_v2.add(Convolution2D(101, (1, 1), input_shape=(1, 1, 4096)))
age_model_v2.add(Flatten())
age_model_v2.add(Activation('softmax'))
if True:
for i in range(0, 3):
age_model_v2.layers[i].set_weights(age_model.layers[-3+i].get_weights())
age_model_v2.save_weights("age_model_v2_weights.h5")
else:
age_model_v2.load_weights("age_model_v2_weights.h5")
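# (Added sketch, not in the original notebook.) The gender and race heads could be
# split the same way; assuming the gender model shares the VGG-Face trunk and has
# 2 output classes, its small head would look roughly like:
# gender_model_v2 = Sequential()
# gender_model_v2.add(Convolution2D(2, (1, 1), input_shape=(1, 1, 4096)))
# gender_model_v2.add(Flatten())
# gender_model_v2.add(Activation('softmax'))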
###Output
_____no_output_____
###Markdown
Test
###Code
img_path = "deepface/tests/dataset/img1.jpg"
img = functions.preprocess_face(img_path)
img.shape
probas = age_model.predict(img)[0]
print("v1 result: ", Age.findApparentAge(probas))
common_output = common_model.predict(img)
probas_v2 = age_model_v2.predict(common_output)
print("v2 result: ", Age.findApparentAge(probas_v2))
###Output
v2 result: 31.803729148267408
|
MNIST_first_ImageDataSet.ipynb | ###Markdown
Modeling using MNIST
###Code
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
import matplotlib.pyplot as plt
(X_train, y_train),(X_test, y_test) = keras.datasets.mnist.load_data()
len(X_train)
X_train.shape
X_train_flat = X_train.reshape(len(X_train), 28*28)
X_test_flat = X_test.reshape(len(X_test), 28*28)
X_train_flat[1]
model = keras.Sequential([
keras.layers.Dense(10, input_shape = (784,), activation = 'sigmoid')
])
model.compile(
optimizer = 'adam',
loss = 'sparse_categorical_crossentropy',
metrics =['accuracy']
)
model.fit(X_train_flat, y_train, epochs = 5)
#after scaling
X_train2 = X_train/255
X_test2 = X_test/255
X_train2_flat = X_train2.reshape(len(X_train2), 28*28)
X_test2_flat = X_test2.reshape(len(X_test2), 28*28)
model.fit(X_train2_flat, y_train, epochs = 5)
model.evaluate(X_test_flat, y_test)
model.evaluate(X_test2_flat, y_test)
plt.matshow(X_test[11])
y_pred = model.predict(X_test2_flat)
np.argmax(y_pred[11])
y_pred_lab = [np.argmax(i) for i in y_pred]
cm = tf.math.confusion_matrix(labels = y_test,predictions = y_pred_lab )
cm
plt.figure(figsize = (10,7))
sns.heatmap(cm, annot = True, fmt = 'd')
plt.xlabel('Predicted')
plt.ylabel('Truth')
# model 2
model2 = keras.Sequential([
keras.layers.Dense(100, input_shape = (784,), activation = 'relu'),
keras.layers.Dense(10, activation = 'relu'),
keras.layers.Dense(10, activation = 'sigmoid')
])
model2.compile(
optimizer = 'adam',
loss = 'sparse_categorical_crossentropy',
metrics =['accuracy']
)
model2.fit(X_train2_flat, y_train, epochs=5)
model2.evaluate(X_test2_flat, y_test)
# model 3
model3 = keras.Sequential([
keras.layers.Dense(100, input_shape = (784,), activation = 'relu'),
keras.layers.Dense(90, activation = 'relu'),
keras.layers.Dense(70, activation = 'relu'),
keras.layers.Dense(50, activation = 'relu'),
keras.layers.Dense(10, activation = 'sigmoid')
])
model3.compile(
optimizer = 'adam',
loss = 'Hinge',
metrics =['accuracy']
)
model3.fit(X_train2_flat, y_train, epochs=5)
model3.evaluate(X_test2_flat, y_test)
# model 4
model4 = keras.Sequential([
keras.layers.Dense(100, input_shape = (784,), activation = 'relu'),
keras.layers.Dense(10, activation = 'relu'),
keras.layers.Dense(10, activation = 'sigmoid')
])
model4.compile(
optimizer = 'adam',
loss = 'Huber',
metrics =['accuracy']
)
model4.fit(X_train2_flat, y_train, epochs=5)
model4.evaluate(X_test2_flat, y_test)
###Output
313/313 [==============================] - 1s 3ms/step - loss: 3.1962 - accuracy: 0.0980
|
_posts/scikit/classifier-comparisoon/Classifier-comparison.ipynb | ###Markdown
A comparison of a several classifiers in scikit-learn on synthetic datasets. The point of this example is to illustrate the nature of decision boundaries of different classifiers. This should be taken with a grain of salt, as the intuition conveyed by these examples does not necessarily carry over to real datasets.Particularly in high-dimensional spaces, data can more easily be separated linearly and the simplicity of classifiers such as naive Bayes and linear SVMs might lead to better generalization than is achieved by other classifiers.The plots show training points in solid colors and testing points semi-transparent. The lower right shows the classification accuracy on the test set. New to Plotly?Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online).We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! Version
###Code
import sklearn
sklearn.__version__
###Output
_____no_output_____
###Markdown
Imports
###Code
print(__doc__)
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
###Output
Automatically created module for IPython interactive environment
###Markdown
Calculations and Plots
###Code
fig = tools.make_subplots(rows=11, cols=3,
print_grid=False)
h = .02 # step size in the mesh
def matplotlib_to_plotly(cmap, pl_entries):
h = 1.0/(pl_entries-1)
pl_colorscale = []
for k in range(pl_entries):
C = list(map(np.uint8, np.array(cmap(k*h)[:3])*255))
pl_colorscale.append([k*h, 'rgb'+str((C[0], C[1], C[2]))])
return pl_colorscale
names = ["Input Data","Nearest Neighbors", "Linear SVM",
"RBF SVM", "Gaussian Process","Decision Tree",
"Random Forest", "Neural Net", "AdaBoost",
"Naive Bayes", "QDA"]
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
GaussianProcessClassifier(1.0 * RBF(1.0), warm_start=True),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1),
AdaBoostClassifier(),
GaussianNB(),
QuadraticDiscriminantAnalysis()]
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable
]
i = 1
j = 1
# iterate over datasets
for ds_cnt, ds in enumerate(datasets):
# preprocess dataset, split into training and test part
X, y = ds
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=.4, random_state=42)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# just plot the dataset first
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
# Plot the training points
training_points = go.Scatter(x=X_train[:, 0],y=X_train[:, 1],showlegend=False,
mode='markers', marker=dict(color='red'))
# and testing points
testing_points = go.Scatter(x=X_test[:, 0], y=X_test[:, 1],showlegend=False,
mode='markers', marker=dict(color='blue'))
fig.append_trace(training_points, 1, j)
fig.append_trace(testing_points, 1, j)
# iterate over classifiers
i=2
for name, clf in zip(names, classifiers):
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
        trace = go.Contour(x=xx[0], y=yy[:, 0], z=Z,
line=dict(width=0),
contours=dict( coloring='heatmap'),
colorscale= matplotlib_to_plotly(cm,300),
opacity = 0.7, showscale=False)
# Plot also the training points
training_points = go.Scatter(x=X_train[:, 0],y=X_train[:, 1],showlegend=False,
mode='markers', marker=dict(color='red'))
# and testing points
        testing_points = go.Scatter(x=X_test[:, 0], y=X_test[:, 1], showlegend=False,
                                    mode='markers', marker=dict(color='blue'))
fig.append_trace(training_points, i, j)
fig.append_trace(testing_points, i, j)
fig.append_trace(trace, i, j)
i=i+1
j+=1
for i in map(str, range(1,34)):
x='xaxis'+i
y='yaxis'+i
fig['layout'][y].update(showticklabels=False, ticks='',
showgrid=False, zeroline=False)
fig['layout'][x].update(showticklabels=False, ticks='',
showgrid=False, zeroline=False)
k=0
for x in map(str, range(1,32,3)):
y='yaxis'+x
fig['layout'][y].update(title=names[k])
k=k+1
fig['layout'].update(height=2000)
py.iplot(fig)
###Output
The draw time for this plot will be slow for all clients.
###Markdown
License

Code source: Gaël Varoquaux, Andreas Müller

Modified for documentation by Jaques Grobler

License: BSD 3 clause
###Code
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'Classifier-comparison.ipynb', 'scikit-learn/plot-classifier-comparison/', ' Classifier Comparison| plotly',
' ',
title = 'Classifier Comparison | plotly',
name = ' Classifier Comparison',
has_thumbnail='true', thumbnail='thumbnail/class-compare.jpg',
language='scikit-learn', page_type='example_index',
display_as='classification', order=4,
ipynb= '~Diksha_Gabha/2737')
###Output
_____no_output_____ |
data/7.2-Explore-NER-Dataset-Dev.ipynb | ###Markdown
The CoNLL-2003 dataset

To load the CoNLL-2003 dataset, we use the `load_dataset()` function from the 🤗 Datasets library:
###Code
from datasets import load_dataset
import datasets
raw_datasets = load_dataset("conll2003")
datasets.__version__
###Output
Reusing dataset conll2003 (/home/matthias/.cache/huggingface/datasets/conll2003/conll2003/1.0.0/63f4ebd1bcb7148b1644497336fd74643d4ce70123334431a3c053b7ee4e96ee)
###Markdown
Check the relevant (*version* of `datasets`) documentation:

- [`DatasetDict`](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasetdict[[datasets.datasetdict]])
- [`DatasetDict`](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.datasetdict)
- [`Dataset`](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.Dataset)

> **This [video](https://www.youtube.com/watch?v=iY2AZYdZAr0) is very relevant!**

> **Employ** `id2label` as shown in 7.2-Token_classification.ipynb!
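As a minimal sketch of that last suggestion (assuming the standard 🤗 Datasets feature API, in which `ner_tags` is a `Sequence` of `ClassLabel`; the variable names here are just illustrative), the label names can be read off the dataset itself and turned into `id2label` / `label2id` mappings:

```python
# Build id2label / label2id from the ClassLabel feature attached to the NER tags
ner_feature = raw_datasets["train"].features["ner_tags"]
label_names = ner_feature.feature.names          # e.g. ['O', 'B-PER', 'I-PER', ...]

id2label = {i: name for i, name in enumerate(label_names)}
label2id = {name: i for i, name in id2label.items()}

# Decode the integer tags of one validation instance back into readable labels
example = raw_datasets["validation"][0]
print([id2label[tag] for tag in example["ner_tags"]])
```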
###Code
def instance_details(datasets, shard, instance):
shards = list(datasets.keys())
print("dataset shards: {}\n".format(shards))
for shard_i in shards:
print("{} features: \t{}".format(shard_i, list(datasets[shard_i].features.keys())))
print("{} num_rows: \t{}\n".format(shard_i, datasets[shard_i].num_rows))
inst = datasets[shard][instance]
    feats = list(datasets[shard].features.keys())
for feat in feats:
print("{} \t{}".format(feat, inst[feat]))
pass
instance_details(raw_datasets, "validation", 0)
###Output
dataset shards: ['train', 'validation', 'test']
train features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags']
train num_rows: 14042
validation features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags']
validation num_rows: 3251
test features: ['id', 'tokens', 'pos_tags', 'chunk_tags', 'ner_tags']
test num_rows: 3454
id 0
tokens ['CRICKET', '-', 'LEICESTERSHIRE', 'TAKE', 'OVER', 'AT', 'TOP', 'AFTER', 'INNINGS', 'VICTORY', '.']
pos_tags [22, 8, 22, 22, 15, 22, 22, 22, 22, 21, 7]
chunk_tags [11, 0, 11, 12, 13, 11, 12, 12, 12, 12, 0]
ner_tags [0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0]
|
core-metrics.ipynb | ###Markdown
https://office.wikimedia.org/wiki/Quarters
###Code
all_funded = pd.read_csv('data/inputs/data-all-funded.csv')
# clean all_funded
#drop rows without USD amount
df = all_funded[all_funded['USD over grant life'] != 0]
df = df[df['USD over grant life'].notnull()]
#fix datatype for USD over grant life column
df['USD over grant life'] = df['USD over grant life'].str.replace(',', '')
df['USD over grant life'] = df['USD over grant life'].str.replace('$', '')
df['USD over grant life'] = df['USD over grant life'].astype('float')
#columns to datetime
df['Approved on'] = pd.to_datetime(df['Approved on'], errors = 'coerce')
df['Executed on'] = pd.to_datetime(df['Executed on'], errors = 'coerce')
df['grant_count'] = df.groupby('Grantee').cumcount() + 1
df["total_grantee_grants"] = df.groupby('Grantee')['Approved on'].transform('count') + 1  # +1 to start the count at 1
df['counter'] = range(len(df))
#assign unique IDs
df['id'] = df.groupby('Grantee').ngroup()
df['Program'].unique()
#grants_df = df[df['grant_count'] != 'Conference']
#do not include TPS, PEG, IEG, Partnership Grants
exclude = ['TPS', 'PEG', 'IEG', 'Partnership Grants', 'Conference', 'Wikicite', 'WMS']
grants_df = df[~df['Program'].isin(exclude)]
grants_df20 = grants_df[grants_df['Fiscal year ending'] == 2020]
###Output
_____no_output_____
###Markdown
KR 1: Ensure 65% of all grants are from outside well established communities so that grantmaking becomes a key mechanism to empower and welcome newcomers and increase diversity of content.
###Code
grants_by_community = grants_df20.groupby(['Fiscal year ending', 'Community type'])['counter'].nunique().to_frame().rename(columns={'counter': 'unique_grants'}).reset_index()
grants_by_community[grants_by_community['Community type']!='Developed']['unique_grants'].sum()/len(grants_df20)
###Output
_____no_output_____
###Markdown
% of grantees who are new
###Code
total_grants = grants_df20.groupby('Fiscal year ending').size().to_frame().reset_index().rename(columns={0: 'total_granted'})
unique_grantees = grants_df20.groupby('Fiscal year ending')['Grantee'].nunique().to_frame().reset_index().rename(columns={'Grantee': 'unique_grantees'})
unique_new_grantees = grants_df20[grants_df20['grant_count'] == 1].groupby('Fiscal year ending')['Grantee'].nunique().to_frame().reset_index().rename(columns={'Grantee': 'unique_n_grantees'})
#get a count of all grants awarded to a new grantee that has not received a grant in a previous year, not including those grantees that doubled up in their first
#year and received a follow-up grant
new_grantee_grants = pd.pivot_table(data=grants_df20[grants_df20['grant_count'] == 1], index='Fiscal year ending', values='counter', aggfunc='count').reset_index().rename(columns={'counter': 'total_n_grantee_grants'})
#create roll_up df combining the dfs above
year_roll_up = total_grants.merge(unique_grantees, on='Fiscal year ending', how='left').merge(unique_new_grantees, on='Fiscal year ending', how='left')
year_roll_up['n_grantee %'] = year_roll_up['unique_n_grantees'] / year_roll_up['unique_grantees']
year_roll_up
###Output
_____no_output_____
###Markdown
% of grantees who are new from emerging community
###Code
unique_new_grantees_by_comm = grants_df20[grants_df20['grant_count'] == 1].groupby(['Fiscal year ending', 'Community type'])['Grantee'].nunique().to_frame().rename(columns={'Grantee': 'unique_n_grantees'}).reset_index()
unique_new_grantees_by_comm[unique_new_grantees_by_comm['Community type'] == 'Emerging']['unique_n_grantees'].sum()/unique_new_grantees_by_comm['unique_n_grantees'].sum()
###Output
_____no_output_____
###Markdown
% of grantees who are new and outside of developed communities
###Code
unique_new_grantees_by_comm[unique_new_grantees_by_comm['Community type']!='Developed']['unique_n_grantees'].sum()/unique_new_grantees_by_comm['unique_n_grantees'].sum()
###Output
_____no_output_____
###Markdown
% of all funds for all grants in emerging and least developed communities
###Code
grants_df20[grants_df20['Community type'].str.match('Emerging|Least Developed')]['USD over grant life'].sum()/grants_df20['USD over grant life'].sum()
###Output
_____no_output_____
###Markdown
% gender focused grant funding
###Code
grants_df20[grants_df20['Gender gap (Y/N)'] == 'Yes']['USD over grant life'].sum()/grants_df20['USD over grant life'].sum()
###Output
_____no_output_____
###Markdown
% of rapid grantees that had more than one rapid grant in the reporting year
###Code
rgrbg_20 = grants_df20[grants_df20['Program'] == 'Rapid'].groupby('Grantee').size().to_frame().reset_index().rename(columns={0: 'rapid_grants_received'})
rgrbg_20_receiving_multiple = len(rgrbg_20[rgrbg_20['rapid_grants_received'] >= 2])/len(rgrbg_20)
rgrbg_20_receiving_multiple
###Output
_____no_output_____
###Markdown
% of rapid grant grantees that received a rapid grant in the year prior
###Code
rapid_20 = df[(df['Program']=='Rapid') & (df['Fiscal year ending']==2020)]
rapid_19 = df[(df['Program']=='Rapid') & (df['Fiscal year ending']==2019)]
r19list = list(rapid_19['Grantee'].unique())
r_grantees_2020_that_received_prior_r_grant = rapid_20[rapid_20['Grantee'].isin(r19list)]
print('r_grantees_2020_that_received_prior_r_grant:', len(r_grantees_2020_that_received_prior_r_grant))
print('2019 rapid Grantees:', len(r19list))
print('2020 rapid Grantees:', len(rapid_20['Grantee'].unique()))
print('2020 R grant Grantees that received prior rapid grants in prior year:',(len(r_grantees_2020_that_received_prior_r_grant)/len(rapid_20['Grantee'].unique())*100), '%')
###Output
_____no_output_____ |
4_4_3+Unsupervised+NLP.ipynb | ###Markdown
###Code
from google.colab import files
uploaded = files.upload()
!pip install nltk
import numpy as np
import pandas as pd
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
SemanticsWith all the information we were able to pull out of the text, one thing we didn't really use was semantics- the *meaning* of the words and sentences. Our supervised learning model 'knows' that Jane Austen tends to use the word 'lady' a lot in her writing, and it may know (if you included parts of speech as features) that 'lady' is a noun, but it doesn't know what a lady is. There is nothing in our work on NLP so far that would allow a model to say whether 'queen' or 'car' is more similar to 'lady.' This severely limits the applicability of our NLP skills! In the absence of semantic information, models can get tripped up on things like synonyms ('milady' and 'lady'). We could modify the spaCy dictionary to include 'lady' as the lemma of 'milady,' then use lemmas for all our analyses, but for this to be an effective approach we would have to go through our entire corpus and identify all synonyms for all words by hand. This approach would also discard subtle differences in the connotations of (words, concepts, ideas, or emotions associated with) 'lady' (elicits thoughts of formal manners and England) and 'milady' (elicits thoughts of medieval ages and Rennaissance Faires). Basically, language is complicated, and trying to explicitly model all the information encoded in language is nearly impossibly complicated. Fortunately, unsupervised modeling techniques, and particularly unsupervised neural networks, are perfect for this kind of task. Rather than us 'telling' the model how language works and what each sentence means, we can feed the model a corpus of text and have it 'learn' the rules by identifying recurring patterns within the corpus. Then we can use the trained unsupervised model to understand new sentences as well. As with supervised NLP, unsupervised models are limited by their corpus- an unsupervised model trained on a medical database is unlikely to know that 'lady' and 'milady' are similar, just as a model trained on Jane Austen wouldn't catch that 'Ehler-Danlos Syndrome' and 'joint hypermobility' describe the same medical condition. In this assignment, we are going to introduce Latent Semantic Analysis. In the next, we will discuss unsupervised neural network applications for NLP. Converting sentences to vectorsConsider the following sentences:1. "The best Monty Python sketch is the one about the dead parrot, I laughed so hard."2. "I laugh when I think about Python's Ministry of Silly Walks sketch, it is funny, funny, funny, the best!"3. "Chocolate is the best ice cream dessert topping, with a great taste."4. "The Lumberjack Song is the funniest Monty Python bit: I can't think of it without laughing."5. "I would rather put strawberries on my ice cream for dessert, they have the best taste."6. "The taste of caramel is a fantastic accompaniment to tasty mint ice cream."As a human being, it's easy to see that the sentences involve two topics, comedy and ice cream. One way to represent the sentences is in a term-document matrix, with a column for each sentence and a row for each word. 
Ignoring the stop words 'the', 'is','and', 'a', 'of,','I', and 'about,', discarding words that occur only once, and reducing words like 'laughing' to their root form ('laugh'), the term-document matrix for these sentences would be:| | 1 | 2 | 3 | 4 | 5 | 6 ||-----------|---|---|---|---|---|---|| Monty | 1 | 0 | 0 | 1 | 0 | 0 || Python | 1 | 1 | 0 | 1 | 0 | 0 || sketch | 1 | 1 | 0 | 0 | 0 | 0 || laugh | 1 | 1 | 0 | 1 | 0 | 0 || funny | 0 | 3 | 0 | 1 | 0 | 0 || best | 1 | 1 | 1 | 0 | 1 | 0 || ice cream | 0 | 0 | 1 | 0 | 1 | 1 || dessert | 0 | 0 | 1 | 0 | 1 | 0 || taste | 0 | 0 | 1 | 0 | 1 | 2 |Note that we use the term 'document' to refer to the individual text chunks we are working with. It can sometimes mean sentences, sometimes paragraphs, and sometimes whole text files. In our cases, each sentence is a document. Also note that, contrary to how we usually operate, a term-document matrix has words as rows and documents as columns.The comedy sentences use the words: Python (3), laugh (3), Monty (2), sketch (2), funny (2), and best (2).The ice cream sentences use the words: ice cream (3), dessert (3), taste (3), and best (2).The word 'best' stands out here- it appears in more sentences than any other word (4 of 6). It is used equally to describe Monty Python and ice cream. If we were to use this term-document matrix as-is to teach a computer to parse sentences, 'best' would end up as a significant identifier for both topics, and every time we gave the model a new sentence to identify that included 'best,' it would bring up both topics. Not very useful. To avoid this, we want to weight the matrix so that words that occur in many different sentences have lower weights than words that occur in fewer sentences. We do want to put a floor on this though-- words that only occur once are totally useless for finding associations between sentences. Another word that stands out is 'funny', which appears more often in the comedy sentences than any other word. This suggests that 'funny' is a very important word for defining the 'comedy' topic. Quantifying documents: Collection and document frequencies'Document frequency' counts how many sentences a word appears in. 'Collection frequency' counts how often a word appears, total, over all sentences. Let's calculate the df and cf for our sentence set:| |df |cf| |-----------|---|---|| Monty | 2 | 2 | | Python | 3 | 3 | | sketch | 2 | 2 | | laugh | 3 | 3 | | funny | 2 | 4 | | best | 4 | 4 | | ice cream | 3 | 3 | | dessert | 2 | 2 | | taste | 3 | 4 | Penalizing Indiscriminate Words: Inverse Document FrequencyNow let's weight the document frequency so that words that occur less often (like 'sketch' and 'dessert') are more influential than words that occur a lot (like 'best'). We will calculate the ratio of total documents (N) divided by df, then take the log (base 2) of the ratio, to get our inverse document frequency number (idf) for each term (t):$$idf_t=log \dfrac N{df_t}$$| |df |cf| idf ||-----------|---|---|| Monty | 2 | 2 | 1.585 || Python | 3 | 3 | 1 || sketch | 2 | 2 | 1.585 || laugh | 3 | 3 | 1 || funny | 2 | 4 | 1.585 || best | 4 | 4 | .585 || ice cream | 3 | 3 | 1 || dessert | 2 | 2 | 1.585 || taste | 3 | 4 | 1 |The idf weights tell the model to consider 'best' as less important than other terms. Term-frequency weightsThe next piece of information to consider for our weights is how frequently a term appears within a sentence. 
The word 'funny' appears three times in one sentence- it would be good if we were able to weight 'funny' so that the model knows that. We can accomplish this by creating unique weights for each sentence that combine the term frequency (how often a word appears within an individual document) with the idf, like so:$$tf-idf_{t,d}=(tf_{t,d})(idf_t)$$Now the term 'funny' in sentence 2, where it occurs three times, will be weighted more heavily than the term 'funny' in sentence 1, where it only occurs once. If 'best' had appeared multiple times in one sentence, it would also have a higher weight for that sentence, but the weight would be reduced by the idf term that takes into account that 'best' is a pretty common word in our collection of sentences.The tf_idf score will be highest for a term that occurs a lot within a small number of sentences, and lowest for a word that occurs in most or all sentences. Now we can represent each sentence as a vector made up of the tf-idf scores for each word:| | 1 | 2 | 3 | |-----------|---|---|---|| Monty | 1.585 | 0 | 0 || Python | 1 | 1 | 0 | | sketch | 1.585| 1.585 | 0 | | laugh | 1 | 1 | 0 | | funny | 0 | 4.755 | 0 | | best | .585 | .585 | .585 | | ice cream | 0 | 0 | 1 | | dessert | 0 | 0 | 1.585 | | taste | 0 | 0 | 1 | Drill: tf-idf scoresConverting sentences into numeric vectors is fundamental for a lot of unsupervised NLP tasks. To make sure you are solid on how these vectors work, please generate the vectors for the last three sentences. If you are feeling uncertain, have your mentor walk you through it.(solution for 4, 5, and 6:4. 1.585, 1, 0, 1, 1.585, 0,0,0,05. 0,0,0,0,0, .585, 1, 1.585, 16. 0,0,0,0,0,0, 1, 0, 2) You can think of the tf-idf vectors as a 'translation' from human-readable language to computer-usable numeric form. Some information is inevitably lost in translation, and the usefulness of any model we build from here on out depends on the decisions we made during the translation step. Possible decision-points include:* Which stop words to include or exclude* Should we use phrases ('Monty Python' instead of 'Monty' and 'Python') as terms* The threshold for infrequent words: Here, we excluded words that only occurred once. In longer documents, it may be a good idea to set a higher threshold.* How many terms to keep. We kept all the terms that fit our criteria (not a stop word, occurred more than once), but for bigger document collections or longer documents, this may create unfeasibly long vectors. We may want to decide to only keep the 10,000 words with the highest collection frequency scores, for example. Vector Space ModelOur vector representation of the text is referred to as a Vector Space Model. We can use this representation to compute the similarity between our sentences and a new phrase or sentence- this method is often used by search engines to match a query to possible results. By now, you've had some practice thinking of data as existing in multi-dimensional space. Our sentences exist in an n-dimensional space where n is equal to the number of terms in our term-document matrix. To compute the similarity of our sentences to a new sentence, we transform the new sentence into a vector and place it in the space. We can then calculate how different the angles are for our original vectors and the new vector, and identify the vector whose angle is closest to the new vector. Typically this is done by calculating the cosine of the angle between the vectors. 
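As a quick cross-check of these weights (and of the drill solutions above), here is a minimal sketch in plain NumPy that recomputes idf and tf-idf from the toy term-document matrix in this section; the 4.755 weight for 'funny' in sentence 2 falls out directly:

```python
import numpy as np

terms = ['Monty', 'Python', 'sketch', 'laugh', 'funny', 'best', 'ice cream', 'dessert', 'taste']
# term-document matrix from above: rows are terms, columns are sentences 1-6
td = np.array([
    [1, 0, 0, 1, 0, 0],   # Monty
    [1, 1, 0, 1, 0, 0],   # Python
    [1, 1, 0, 0, 0, 0],   # sketch
    [1, 1, 0, 1, 0, 0],   # laugh
    [0, 3, 0, 1, 0, 0],   # funny
    [1, 1, 1, 0, 1, 0],   # best
    [0, 0, 1, 0, 1, 1],   # ice cream
    [0, 0, 1, 0, 1, 0],   # dessert
    [0, 0, 1, 0, 1, 2],   # taste
])

N = td.shape[1]                   # number of documents (sentences)
df = (td > 0).sum(axis=1)         # document frequency of each term
idf = np.log2(N / df)             # idf_t = log2(N / df_t)
tf_idf = td * idf[:, None]        # tf-idf_{t,d} = tf_{t,d} * idf_t

print(dict(zip(terms, idf.round(3))))     # 'best' -> 0.585, 'funny' -> 1.585, ...
print(tf_idf[terms.index('funny'), 1])    # 'funny' in sentence 2 -> approximately 4.755
```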
If the two vectors are identical, the angle between them will be 0° and the cosine will be 1. If the two vectors are orthogonal, with an angle of 90°, the cosine will be 0. If we were running a search query, then, we would return sentences that were most similar to the query sentence, ordered from the highest similarity score (cosine) to the lowest. Pretty handy! Latent Semantic AnalysisCool as this is, there are limitations to the VSM. In particular, because it treats each word as distinct from every other word, it can run aground on *synonyms* (treating words that mean the same thing as though they are different, like big and large). Also, because it treats all occurrences of a word as the same regardless of context, it can run aground on *polysemy*, where there are different meanings attached to the same word: 'I need a break' vs 'I break things.' In addition, VSM has difficulty with very large documents because the more words a document has, the more opportunities it has to diverge from other documents in the space, making it difficult to see similarities.A solution to this problem is to reduce our tf-idf-weighted term-document matrix into a lower-dimensional space, that is, to express the information in the matrix using fewer rows by combining the information from multiple terms into one new row/dimension. We do this using Principal Components Analysis, which you may recall from [an earlier assignment](https://courses.thinkful.com/data-201v1/assignment/2.1.6). So Latent Semantic Analysis (also called Latent Semantic Indexing) is the process of applying PCA to a tf-idf term-document matrix. What we get, in the end, is clusters of terms that presumably reflect a topic. Each document will get a score for each topic, with higher scores indicating that the document is relevant to the topic. Documents can pertain to more than one topic.LSA is handy when your corpus is too large to topically annotate by hand, or when you don't know what topics characterize your documents. It is also useful as a way of creating features to be used in other models.Let's try it out! Once again, we'll use the gutenberg corpus. This time, we'll focus on comparing paragraphs within Emma by Jane Austen.
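Before turning to the Emma corpus, here is a purely illustrative sketch of the cosine-similarity comparison described in the Vector Space Model discussion above, using the toy tf-idf vectors for sentences 1-3 copied from the table:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# tf-idf vectors for sentences 1-3, copied from the table above
# (term order: Monty, Python, sketch, laugh, funny, best, ice cream, dessert, taste)
s1 = [1.585, 1, 1.585, 1, 0,     0.585, 0, 0,     0]
s2 = [0,     1, 1.585, 1, 4.755, 0.585, 0, 0,     0]
s3 = [0,     0, 0,     0, 0,     0.585, 1, 1.585, 1]

print(cosine_similarity(np.array([s1, s2, s3])).round(2))
# The two Monty Python sentences score noticeably higher against each other
# than either does against the ice cream sentence, which shares only 'best'.
```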
###Code
import nltk
from nltk.corpus import gutenberg
nltk.download('gutenberg')
import re
from sklearn.model_selection import train_test_split
nltk.download('punkt')
#reading in the data, this time in the form of paragraphs
emma=gutenberg.paras('austen-emma.txt')
#processing
emma_paras=[]
for paragraph in emma:
para=paragraph[0]
#removing the double-dash from all words
para=[re.sub(r'--','',word) for word in para]
#Forming each paragraph into a string and adding it to the list of strings.
emma_paras.append(' '.join(para))
print(emma_paras[0:4])
###Output
[nltk_data] Downloading package gutenberg to /root/nltk_data...
[nltk_data] Package gutenberg is already up-to-date!
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Package punkt is already up-to-date!
['[ Emma by Jane Austen 1816 ]', 'VOLUME I', 'CHAPTER I', 'Emma Woodhouse , handsome , clever , and rich , with a comfortable home and happy disposition , seemed to unite some of the best blessings of existence ; and had lived nearly twenty - one years in the world with very little to distress or vex her .']
###Markdown
tfidf in sklearnHappily for us, sklearn has a tfidf function that will do all our heavy lifting. It also has a [very long list of stop words](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/stop_words.py). Since we're going to be doing dimension reduction later on anyway, let's keep all the words for now.
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
X_train, X_test = train_test_split(emma_paras, test_size=0.4, random_state=0)
vectorizer = TfidfVectorizer(max_df=0.5, # drop words that occur in more than half the paragraphs
min_df=2, # only use words that appear at least twice
stop_words='english',
lowercase=True, #convert everything to lower case (since Alice in Wonderland has the HABIT of CAPITALIZING WORDS for EMPHASIS)
use_idf=True,#we definitely want to use inverse document frequencies in our weighting
norm=u'l2', #Applies a correction factor so that longer paragraphs and shorter paragraphs get treated equally
smooth_idf=True #Adds 1 to all document frequencies, as if an extra document existed that used every word once. Prevents divide-by-zero errors
)
#Applying the vectorizer
emma_paras_tfidf=vectorizer.fit_transform(emma_paras)
print("Number of features: %d" % emma_paras_tfidf.get_shape()[1])
#splitting into training and test sets
X_train_tfidf, X_test_tfidf= train_test_split(emma_paras_tfidf, test_size=0.4, random_state=0)
#Reshapes the vectorizer output into something people can read
X_train_tfidf_csr = X_train_tfidf.tocsr()
#number of paragraphs
n = X_train_tfidf_csr.shape[0]
#A list of dictionaries, one per paragraph
tfidf_bypara = [{} for _ in range(0,n)]
#List of features
terms = vectorizer.get_feature_names()
#for each paragraph, lists the feature words and their tf-idf scores
for i, j in zip(*X_train_tfidf_csr.nonzero()):
tfidf_bypara[i][terms[j]] = X_train_tfidf_csr[i, j]
#Keep in mind that the log base 2 of 1 is 0, so a tf-idf score of 0 indicates that the word was present once in that sentence.
print('Original sentence:', X_train[5])
print('Tf_idf vector:', tfidf_bypara[5])
###Output
Number of features: 1948
Original sentence: A very few minutes more , however , completed the present trial .
Tf_idf vector: {'minutes': 0.7127450310382584, 'present': 0.701423210857947}
###Markdown
Dimension reductionOkay, now we have our vectors, with one vector per paragraph. It's time to do some dimension reduction. We use the Singular Value Decomposition (SVD) function from sklearn rather than PCA because we don't want to mean-center our variables (and thus lose sparsity):
###Code
from sklearn.decomposition import TruncatedSVD
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
#Our SVD data reducer. We are going to reduce the feature space from 1379 to 130.
svd= TruncatedSVD(130)
lsa = make_pipeline(svd, Normalizer(copy=False))
# Run SVD on the training data, then project the training data.
X_train_lsa = lsa.fit_transform(X_train_tfidf)
variance_explained=svd.explained_variance_ratio_
total_variance = variance_explained.sum()
print("Percent variance captured by all components:",total_variance*100)
#Looking at what sorts of paragraphs our solution considers similar, for the first five identified topics
paras_by_component=pd.DataFrame(X_train_lsa,index=X_train)
for i in range(5):
print('Component {}:'.format(i))
print(paras_by_component.loc[:,i].sort_values(ascending=False)[0:10])
###Output
Percent variance captured by all components: 45.20126267087986
Component 0:
" Oh ! 0.999283
" Oh ! 0.999283
" Oh !" 0.999283
" Oh ! 0.999283
" Oh ! 0.999283
" Oh ! 0.999283
" Oh ! 0.999283
" Oh ! 0.999283
" Oh ! 0.999283
" Oh ! 0.999283
Name: 0, dtype: float64
Component 1:
" You have made her too tall , Emma ," said Mr . Knightley . 0.634199
" You get upon delicate subjects , Emma ," said Mrs . Weston smiling ; " remember that I am here . Mr . 0.591326
" I do not know what your opinion may be , Mrs . Weston ," said Mr . Knightley , " of this great intimacy between Emma and Harriet Smith , but I think it a bad thing ." 0.569111
" You are right , Mrs . Weston ," said Mr . Knightley warmly , " Miss Fairfax is as capable as any of us of forming a just opinion of Mrs . Elton . 0.564195
Mr . Knightley might quarrel with her , but Emma could not quarrel with herself . 0.528830
" There were misunderstandings between them , Emma ; he said so expressly . 0.528188
" Now ," said Emma , when they were fairly beyond the sweep gates , " now Mr . Weston , do let me know what has happened ." 0.508860
Emma found that it was not Mr . Weston ' s fault that the number of privy councillors was not yet larger . 0.508366
" In one respect , perhaps , Mr . Elton ' s manners are superior to Mr . Knightley ' s or Mr . Weston ' s . 0.505174
Mrs . Weston was acting no part , feigning no feelings in all that she said to him in favour of the event . She had been extremely surprized , never more so , than when Emma first opened the affair to her ; but she saw in it only increase of happiness to all , and had no scruple in urging him to the utmost . She had such a regard for Mr . Knightley , as to think he deserved even her dearest Emma ; and it was in every respect so proper , suitable , and unexceptionable a connexion , and in one respect , one point of the highest importance , so peculiarly eligible , so singularly fortunate , that now it seemed as if Emma could not safely have attached herself to any other creature , and that she had herself been the stupidest of beings in not having thought of it , and wished it long ago . How very few of those men in a rank of life to address Emma would have renounced their own home for Hartfield ! 0.501116
Name: 1, dtype: float64
Component 2:
CHAPTER X 0.998712
CHAPTER I 0.998712
CHAPTER I 0.998712
CHAPTER V 0.998712
CHAPTER X 0.998712
CHAPTER X 0.998712
CHAPTER V 0.998712
CHAPTER I 0.998712
CHAPTER V 0.998712
CHAPTER XVII 0.997666
Name: 2, dtype: float64
Component 3:
" Ah ! 0.99292
" Ah ! 0.99292
" Ah ! 0.99292
" Ah !" 0.99292
" Ah ! 0.99292
But ah ! 0.99292
" Ah ! 0.99292
" Ah ! 0.99292
" Ah ! 0.99292
" Ah ! 0.99292
Name: 3, dtype: float64
Component 4:
" There were misunderstandings between them , Emma ; he said so expressly . 0.650388
" Are you well , my Emma ?" 0.598897
Emma demurred . 0.598897
Emma was silenced . 0.587864
At first it was downright dulness to Emma . 0.587038
" Emma , my dear Emma " 0.576634
Emma could not resist . 0.568584
" It is not now worth a regret ," said Emma . 0.559902
" For shame , Emma ! 0.556508
" I am ready ," said Emma , " whenever I am wanted ." 0.512921
Name: 4, dtype: float64
###Markdown
From gazing at the most representative sample paragraphs, it appears that component 0 targets the exclamation 'Oh!', component 1 seems to largely involve critical dialogue directed at or about the main character Emma, component 2 is chapter headings, component 3 is exclamations involving 'Ah!, and component 4 involves actions by or directly related to Emma.What fun! Sentence similarityWe can also look at how similar various sentences are to one another. For example, here are the similarity scores (as a heatmap) of the first 10 sentences in the training set:
###Code
# Compute document similarity using LSA components
similarity = np.asarray(np.asmatrix(X_train_lsa) * np.asmatrix(X_train_lsa).T)
#Only taking the first 10 sentences
sim_matrix=pd.DataFrame(similarity,index=X_train).iloc[0:10,0:10]
#Making a plot
ax = sns.heatmap(sim_matrix,yticklabels=range(10))
plt.show()
#Generating a key for the plot.
print('Key:')
for i in range(10):
print(i,sim_matrix.index[i])
###Output
_____no_output_____
###Markdown
Not much similarity at all except between sentences 8 and 9, both of which seem to describe people getting along well. Drill 0: Test setNow it's your turn: Apply our LSA model to the test set. Does it identify similar sentences for components 0 through 4?
###Code
# Remember, you will use the same model, only with the test set data. Don't fit a new model by mistake!
X_test_lsa = lsa.transform(X_test_tfidf)  # transform only; the pipeline was already fit on the training data
variance_explained=svd.explained_variance_ratio_
total_variance = variance_explained.sum()
print("Percent variance captured by all components:",total_variance*100)
#Looking at what sorts of paragraphs our solution considers similar, for the first five identified topics
paras_by_component=pd.DataFrame(X_test_lsa,index=X_test)
for i in range(5):
print('Component {}:'.format(i))
print(paras_by_component.loc[:,i].sort_values(ascending=False)[0:10])
# Compute document similarity using LSA components
similarity = np.asarray(np.asmatrix(X_test_lsa) * np.asmatrix(X_test_lsa).T)
#Only taking the first 10 sentences
sim_matrix=pd.DataFrame(similarity,index=X_test).iloc[0:10,0:10]
#Making a plot
ax = sns.heatmap(sim_matrix,yticklabels=range(10))
plt.show()
#Generating a key for the plot.
print('Key:')
for i in range(10):
print(i,sim_matrix.index[i])
###Output
_____no_output_____
###Markdown
Drill 1: Tweaking tf-idfGo back up to the code where we originally translated the text from words to numbers. There are a lot of decision-points here, from the stop list to the thresholds for inclusion and exclusion, and many others as well. We also didn't integrate spaCy, and so don't have info on lemmas or Named Entities. Change things up a few times and see how that affects the results of the LSA. Write up your observations and share them with your mentor.
###Code
#Tweaks Go Here
import spacy
nlp = spacy.load('en')
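# One possible tweak for this drill (a sketch, not a definitive answer): have tf-idf
# operate on spaCy lemmas instead of raw tokens by passing a custom tokenizer.
# This assumes the 'en' model loaded above and reuses TfidfVectorizer and emma_paras
# from the earlier cells.
def spacy_lemmas(text):
    doc = nlp(text)
    return [tok.lemma_.lower() for tok in doc if not (tok.is_punct or tok.is_space)]

vectorizer_lemma = TfidfVectorizer(max_df=0.5, min_df=2,
                                   stop_words='english',
                                   tokenizer=spacy_lemmas,
                                   use_idf=True, norm=u'l2', smooth_idf=True)
emma_paras_tfidf_lemma = vectorizer_lemma.fit_transform(emma_paras)
print("Number of features with lemma tokenization: %d" % emma_paras_tfidf_lemma.get_shape()[1])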
###Output
_____no_output_____ |
Dog Breed Classifier - PyTorch/dog_app-cn.ipynb | ###Markdown
Convolutional Neural Networks

Project: Write an Algorithm for a Dog Identification App

---

In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections whose headers begin with **(IMPLEMENTATION)** indicate that you must provide additional functionality in the code block that follows. Instructions are provided for each section, and the specifics of the implementation are marked in the code blocks with "TODO" statements. Please read the instructions carefully!

> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, please run all of the code cells so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include both this notebook and the finished document in your submission.

In addition to implementing code, you will also need to answer questions that relate to the project and your implementation. Read each question carefully and provide your answer in the text box below **Answer:**. Your project submission will be evaluated based on your answers to each of the questions as well as your code implementation.

> **Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut, and Markdown cells can be edited by double-clicking the cell to enter edit mode.

The rubric also contains optional "stand out" suggestions that can guide you in improving the project beyond the minimum requirements. If you decide to pursue these suggestions, you should add the code to this Jupyter notebook.

---

Why We're Completing This Exercise

In this notebook, you will develop an algorithm that could be used in a mobile or web app. At the end of the project, your code will accept any user-supplied image as input. If a dog is detected in the image, the algorithm will provide an estimate of the dog's breed. If a human face is detected, it will provide an estimate of the dog breed that the face most resembles. The image below shows potential sample output of the finished project (although we expect every student's algorithm to behave differently!).

In this real-world application, you will need to piece together a series of models that perform different tasks; for instance, the algorithm that detects human faces in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no algorithm is perfect. Even if your answer is imperfect, it can still create a fun user experience!

The Road Ahead

We break this notebook into separate steps. You can use the links below to navigate the notebook.

* [Step 0](#step0): Import Datasets
* [Step 1](#step1): Detect Humans
* [Step 2](#step2): Detect Dogs
* [Step 3](#step3): Create a CNN to Classify Dog Breeds (from Scratch)
* [Step 4](#step4): Create a CNN to Classify Dog Breeds (using Transfer Learning)
* [Step 5](#step5): Write Your Algorithm
* [Step 6](#step6): Test Your Algorithm

---

Step 0: Import Datasets

Start by downloading the human and dog datasets:

**Note: if you are using the Udacity workspace, you *do not* need to re-download these; they can be found in the `/data` folder, as noted in the cell below.**

* Download the [dog dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/dogImages.zip). Unzip the folder and place it in this project's home directory, at the location `/dog_images`.
* Download the [human dataset](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/lfw.zip). Unzip the folder and place it in the home directory, at the location `/lfw`.

*Note: if you are using a Windows machine, you are encouraged to use [7zip](http://www.7-zip.org/) to extract the folders.*

In the code cell below, we save the file paths of the human (LFW) dataset and the dog dataset in the NumPy arrays `human_files` and `dog_files`.
###Code
import numpy as np
from glob import glob
# load filenames for human and dog images
human_files = np.array(glob("/data/lfw/*/*"))
dog_files = np.array(glob("/data/dog_images/*/*/*"))
# print number of images in each dataset
print('There are %d total human images.' % len(human_files))
print('There are %d total dog images.' % len(dog_files))
###Output
There are 13233 total human images.
There are 8351 total dog images.
###Markdown
Step 1: Detect Humans

In this section, we use OpenCV's implementation of [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images.

OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/opencv/opencv/tree/master/data/haarcascades). We have downloaded one of these detectors and stored it in the `haarcascades` directory. In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[0])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x,y,w,h) in faces:
# add bounding box to color image
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
###Output
Number of faces detected: 1
###Markdown
Before using any of the face detectors, it is standard procedure to convert the image to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter.

In the code above, `faces` is a numpy array of detected faces, where each row corresponds to one detected face. Each detected face is a 1D array with four entries that specify the bounding box of that face. The first two entries (extracted in the code above as `x` and `y`) give the horizontal and vertical positions of the top-left corner of the bounding box. The last two entries (extracted as `w` and `h`) give the width and height of the box.

Write a Human Face Detector

We can write a function that returns `True` if a human face is detected in an image and `False` otherwise. This function, named `face_detector`, takes a string-valued file path to an image as its argument and appears in the code block below.
###Code
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
img = cv2.imread(img_path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray)
return len(faces) > 0
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Assess the Human Face Detector

__Question 1:__ Use the code cell below to test the performance of the `face_detector` function.
- How many of the first 100 images in `human_files` have a detected human face?
- How many of the first 100 images in `dog_files` have a detected human face?

Ideally, we would like a face to be detected in every human image and in none of the dog images. Our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each dataset and store them in the numpy arrays `human_files_short` and `dog_files_short`.

__Answer:__ (Report your counts and/or percentages in this cell)

A human face was detected in 98 of the first 100 images in `human_files_short`.
A human face was detected in 17 of the first 100 images in `dog_files_short`.
###Code
from tqdm import tqdm
human_files_short = human_files[:100]
dog_files_short = dog_files[:100]
#-#-# Do NOT modify the code above this line. #-#-#
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
human_count, dog_count = 0, 0
for image in human_files_short:
if face_detector(image):
human_count += 1
for image in dog_files_short:
if face_detector(image):
dog_count += 1
print("There are " + str(human_count) + "images that detect the first 100 images of face in human_files_short.")
print("There are " + str(dog_count) + "images that detect the first 100 images of face in dog_files_short.")
###Output
There are 98images that detect the first 100 images of face in human_files_short.
There are 17images that detect the first 100 images of face in dog_files_short.
###Markdown
We suggest OpenCV's face detector as one way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`.
###Code
### (Optional)
### TODO: Test performance of another face detection algorithm.
### Feel free to use as many code cells as needed.
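# Optional sketch of a deep-learning-based detector. It assumes the third-party
# package facenet_pytorch is installed (it is NOT part of the original workspace):
#     pip install facenet-pytorch
from facenet_pytorch import MTCNN
from PIL import Image

mtcnn = MTCNN(keep_all=True)

def face_detector_mtcnn(img_path):
    # MTCNN.detect returns (boxes, probs); boxes is None when no face is found
    boxes, probs = mtcnn.detect(Image.open(img_path).convert('RGB'))
    return boxes is not None

# Performance on human_files_short and dog_files_short could then be reported
# with the same counting loop used for the Haar cascade detector above.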
###Output
_____no_output_____
###Markdown
---

Step 2: Detect Dogs

In this section, we use a [pre-trained model](http://pytorch.org/docs/master/torchvision/models.html) to detect dogs in images.

Obtain a Pre-trained VGG-16 Model

The code cell below downloads the VGG-16 model along with weights trained on [ImageNet](http://www.image-net.org/), a very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).
###Code
import torch
import torchvision.models as models
# define VGG16 model
VGG16 = models.vgg16(pretrained=True)
# check if CUDA is available
use_cuda = torch.cuda.is_available()
# move model to GPU if CUDA is available
if use_cuda:
VGG16 = VGG16.cuda()
###Output
Downloading: "https://download.pytorch.org/models/vgg16-397923af.pth" to /root/.torch/models/vgg16-397923af.pth
100%|██████████| 553433881/553433881 [00:34<00:00, 15907053.78it/s]
###Markdown
Given an image, this pre-trained VGG-16 model returns a prediction (one of the 1000 possible categories in ImageNet) for the object contained in the image.

(IMPLEMENTATION) Making Predictions with a Pre-trained Model

In the next code cell, you will write a function that accepts a path to an image (such as `'dogImages/train/001.Affenpinscher/Affenpinscher_00001.jpg'`) as input and returns the index of the ImageNet class predicted by the pre-trained VGG-16 model. The output should always be an integer between 0 and 999, inclusive. Before writing the function, please read this [PyTorch documentation](http://pytorch.org/docs/stable/torchvision/models.html) to learn how to pre-process tensors for pre-trained models.
###Code
from PIL import Image
import torchvision.transforms as transforms
def load_image(img_path):
image = Image.open(img_path).convert('RGB')
transform = transforms.Compose([transforms.Resize((244, 244)), transforms.ToTensor()])
image = transform(image)[:3,:,:].unsqueeze(0)
return image
def VGG16_predict(img_path):
'''
Use pre-trained VGG-16 model to obtain index corresponding to
predicted ImageNet class for image at specified path
Args:
img_path: path to an image
Returns:
Index corresponding to VGG-16 model's prediction
'''
## TODO: Complete the function.
## Load and pre-process an image from the given img_path
## Return the *index* of the predicted class for that image
image = load_image(img_path)
if use_cuda:
image = image.cuda()
predict = VGG16(image)
predict = predict.data.cpu().argmax()
return predict # predicted class index
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Write a Dog Detector

Looking at this [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence with keys 151-268, inclusive, covering every category from `'Chihuahua'` to `'Mexican hairless'`. Therefore, to check whether the pre-trained VGG-16 model predicts that an image contains a dog, we only need to check whether the predicted index falls between 151 and 268 (inclusive). Use this information to complete the `dog_detector` function below, which returns `True` if a dog is detected in the image (and `False` otherwise).
###Code
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
## TODO: Complete the function.
class_index = VGG16_predict(img_path)
result = (class_index >= 151) & (class_index <= 268)
return result # true/false
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Assess the Dog Detector

__Question 2:__ Use the code cell below to test the performance of `dog_detector`.
- How many of the images in `human_files_short` have a detected dog?
- How many of the images in `dog_files_short` have a detected dog?

__Answer:__
- A dog was detected in 0% of the images in `human_files_short`.
- A dog was detected in 98% of the images in `dog_files_short`.
###Code
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
human_count, dog_count = 0, 0
for index in tqdm(range(len(human_files_short))):
if dog_detector(human_files_short[index]):
human_count += 1
for index in tqdm(range(len(dog_files_short))):
if dog_detector(dog_files_short[index]):
dog_count += 1
print("There are " + str(human_count) + "% images that detect the face in human_files_short.")
print("There are " + str(dog_count) + "% images that detect the face in dog_files_short.")
###Output
100%|██████████| 100/100 [00:04<00:00, 23.97it/s]
100%|██████████| 100/100 [00:05<00:00, 19.17it/s]
###Markdown
We suggest VGG-16 as one way to detect dog images in your algorithm, but you are free to try other pre-trained networks (such as [Inception-v3](http://pytorch.org/docs/master/torchvision/models.html#inception-v3), [ResNet-50](http://pytorch.org/docs/master/torchvision/models.html#id3), etc.). Please use the code cell below to test other pre-trained PyTorch models. If you decide to pursue this _optional_ task, report performance on `human_files_short` and `dog_files_short`.
###Code
### (Optional)
### TODO: Report the performance of another pre-trained network.
### Feel free to use as many code cells as needed.
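# A sketch of the same check with a different pre-trained network (ResNet-50),
# mirroring VGG16_predict above; ImageNet dog classes occupy indices 151-268.
resnet50 = models.resnet50(pretrained=True)
if use_cuda:
    resnet50 = resnet50.cuda()
resnet50.eval()

def resnet50_dog_detector(img_path):
    image = load_image(img_path)
    if use_cuda:
        image = image.cuda()
    class_index = resnet50(image).data.cpu().argmax().item()
    return 151 <= class_index <= 268

# Performance on human_files_short and dog_files_short could then be measured
# with the same loops used for dog_detector in Question 2.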
###Output
_____no_output_____
###Markdown
---

Step 3: Create a CNN to Classify Dog Breeds (from Scratch)

Now that we have functions for detecting humans and dogs in images, we need to predict the dog's breed from the image. In this step, you will create a CNN that classifies dog breeds. You must create the CNN from scratch (so you cannot use transfer learning yet!), and your test accuracy must reach at least 10%. In Step 4 of this notebook, you will use transfer learning to create a CNN that attains much higher accuracy.

Predicting the breed of the dog in an image is an exceptionally difficult challenge. To be honest, even we humans have trouble distinguishing between a Brittany and a Welsh Springer Spaniel.

Brittany | Welsh Springer Spaniel
- | -

There are many other breed pairs that look just as similar (for example, Curly-Coated Retrievers and American Water Spaniels).

Curly-Coated Retriever | American Water Spaniel
- | -

Likewise, Labradors come in yellow, chocolate, and black varieties. A vision-based algorithm has to overcome this kind of high intra-class variation and decide how to classify all of these differently colored dogs as the same breed.

Yellow Labrador | Chocolate Labrador | Black Labrador
- | - |

Random guessing performs very poorly: quite apart from the fact that the classes are somewhat imbalanced, a random guess is correct only about 1 time in 133, which corresponds to an accuracy of less than 1%.

In the field of deep learning, practice is far more dependable than theory. Try many different architectures and trust your intuition. We hope you enjoy the learning process!

(IMPLEMENTATION) Specify Data Loaders for the Dog Dataset

In the code cell below, write three separate [data loaders](http://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dog_images/train`, `dog_images/valid`, and `dog_images/test`, respectively). [This documentation on custom datasets](http://pytorch.org/docs/stable/torchvision/datasets.html) may be helpful. If you would like to augment the training and/or validation data, check out the wide variety of [transforms](http://pytorch.org/docs/stable/torchvision/transforms.html?highlight=transform)!
###Code
import os
from torchvision import datasets
### TODO: Write data loaders for training, validation, and test sets
## Specify appropriate transforms, and batch_sizes
num_workers = 0
batch_size = 20
image_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(size=224),
transforms.RandomRotation(degrees=10),
transforms.RandomHorizontalFlip(),
transforms.CenterCrop(size=224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
]),
'valid':transforms.Compose([
transforms.Resize(size=224),
transforms.CenterCrop(size=224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'test':transforms.Compose([
transforms.Resize(size=224),
transforms.CenterCrop(size=224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
data_dir = '/data/dog_images/'
train_dir = os.path.join(data_dir, 'train/')
test_dir = os.path.join(data_dir, 'test/')
valid_dir = os.path.join(data_dir, 'valid/')
dataset_full = {'train' : datasets.ImageFolder(root=train_dir,transform=image_transforms['train']),
'valid' : datasets.ImageFolder(root=valid_dir,transform=image_transforms['valid']),
'test' : datasets.ImageFolder(root=test_dir,transform=image_transforms['test'])}
loaders_full = {'train' : torch.utils.data.DataLoader(dataset_full['train'], batch_size = batch_size, num_workers = num_workers, shuffle=True),
'valid' : torch.utils.data.DataLoader(dataset_full['valid'], batch_size = batch_size, num_workers = num_workers, shuffle = False),
'test' : torch.utils.data.DataLoader(dataset_full['test'], batch_size = batch_size, num_workers = num_workers, shuffle = False)}
###Output
_____no_output_____
###Markdown
**Question 3:** Describe your chosen procedure for preprocessing the data.
- How does your code resize the images (by cropping, stretching, etc.)? What input tensor size did you pick, and why?
- Did you decide to augment the dataset? If so, how (translations, flips, rotations, etc.)? If not, why not?

**Answer**: I center-crop the images with `transforms.CenterCrop()` and apply random cropping with a random scale and aspect ratio via `RandomResizedCrop()`. I chose an input tensor size of 224×224 because this size suits our dataset well and matches the standard input size of common ImageNet architectures. Because the dataset itself is not large, I decided to augment it: I rotate the images with `RandomRotation(degrees=10)` and flip them horizontally with `RandomHorizontalFlip()`.

(IMPLEMENTATION) Model Architecture

Create a CNN to classify dog breed. Use the template in the code cell below.
###Code
import torch.nn as nn
import torch.nn.functional as F
num_classes = len(dataset_full['train'].classes)
# define the CNN architecture
class Net(nn.Module):
### TODO: choose an architecture, and complete the class
def __init__(self):
super(Net, self).__init__()
## Define layers of a CNN
self.conv1 = nn.Conv2d(3, 16, 3, stride=1, padding=1)
self.conv2 = nn.Conv2d(16, 32, 3, stride=1, padding=1)
self.conv3 = nn.Conv2d(32, 64, 3, stride=1, padding=1)
self.pool = nn.MaxPool2d(2, 2)
self.fc1 = nn.Linear(28*28*64, 500)
self.fc2 = nn.Linear(500, num_classes)
self.dropout = nn.Dropout(p=0.25)
self.batn = nn.BatchNorm2d(16)
self.batn2 = nn.BatchNorm2d(32)
def forward(self, x):
## Define forward behavior
x = self.pool(F.relu(self.conv1(x)))
x = self.batn(x)
x = self.pool(F.relu(self.conv2(x)))
x = self.batn2(x)
x = self.pool(F.relu(self.conv3(x)))
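        # after three 2x2 max-pools the 224x224 input is 28x28, so x has shape (batch, 64, 28, 28)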
x = x.view(-1, 28*28*64)
x = self.dropout(x)
x = F.relu(self.fc1(x))
x = self.dropout(x)
x = F.relu(self.fc2(x))
return x
#-#-# You so NOT have to modify the code below this line. #-#-#
# instantiate the CNN
model_scratch = Net()
# move tensors to GPU if CUDA is available
if use_cuda:
model_scratch.cuda()
###Output
_____no_output_____
###Markdown
__Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step.

__Answer:__ My CNN architecture consists of three convolutional layers, one max-pooling layer (applied after each convolution), and two fully connected layers. The three convolutional layers are `nn.Conv2d(3, 16, 3, stride=1, padding=1)`, `nn.Conv2d(16, 32, 3, stride=1, padding=1)`, and `nn.Conv2d(32, 64, 3, stride=1, padding=1)`, with the stride set to 1 and the padding set to 1. During the forward pass, each convolutional layer slides (convolves) its filters across the width and height of the input and computes the dot product between each filter and the input at every position. The pooling layer uses a 2×2 window and processes each depth slice of the input independently, shrinking the spatial size of the representation; this reduces the number of parameters and the computational cost, and also helps control overfitting. After three pooling operations the 224×224 input is reduced to 28×28 (224 → 112 → 56 → 28), which is why the first fully connected layer takes 28·28·64 input features. Before the fully connected layers, dropout is applied to guard against overfitting. The fully connected layers act like a simple multi-class classifier, and the final output is produced after the last fully connected layer, completing the model.

(IMPLEMENTATION) Specify Loss Function and Optimizer

Use the next code cell to specify a [loss function](http://pytorch.org/docs/stable/nn.html#loss-functions) and an [optimizer](http://pytorch.org/docs/stable/optim.html). Save the chosen loss function as `criterion_scratch` and the optimizer as `optimizer_scratch` below.
###Code
import torch.optim as optim
### TODO: select loss function
criterion_scratch = nn.CrossEntropyLoss()
### TODO: select optimizer
optimizer_scratch = optim.SGD(model_scratch.parameters(), lr = 0.01)
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Train and Validate the Model

Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at the filepath `'model_scratch.pt'`.
###Code
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
def train(n_epochs, loaders, model, optimizer, criterion, use_cuda, save_path):
"""returns trained model"""
# initialize tracker for minimum validation loss
valid_loss_min = np.Inf
for epoch in range(1, n_epochs+1):
# initialize variables to monitor training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for batch_idx, (data, target) in enumerate(loaders['train']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## find the loss and update the model parameters accordingly
## record the average training loss, using something like
## train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
optimizer.step()
train_loss = train_loss + ((1 / (batch_idx + 1)) * (loss.data - train_loss))
######################
# validate the model #
######################
model.eval()
for batch_idx, (data, target) in enumerate(loaders['valid']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
## update the average validation loss
output = model(data)
loss = criterion(output, target)
valid_loss = valid_loss + ((1 / (batch_idx + 1)) * (loss.data - valid_loss))
train_loss = train_loss/len(loaders['train'].dataset)
valid_loss = valid_loss/len(loaders['valid'].dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch,
train_loss,
valid_loss
))
## TODO: save the model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), save_path)
valid_loss_min = valid_loss
# return trained model
return model
# train the model
model_scratch = train(60, loaders_full, model_scratch, optimizer_scratch,
criterion_scratch, use_cuda, 'model_scratch.pt')
# load the model that got the best validation accuracy
model_scratch.load_state_dict(torch.load('model_scratch.pt'))
###Output
Epoch: 1 Training Loss: 0.000729 Validation Loss: 0.005765
Validation loss decreased (inf --> 0.005765). Saving model ...
Epoch: 2 Training Loss: 0.000716 Validation Loss: 0.005622
Validation loss decreased (0.005765 --> 0.005622). Saving model ...
Epoch: 3 Training Loss: 0.000703 Validation Loss: 0.005515
Validation loss decreased (0.005622 --> 0.005515). Saving model ...
Epoch: 4 Training Loss: 0.000692 Validation Loss: 0.005413
Validation loss decreased (0.005515 --> 0.005413). Saving model ...
Epoch: 5 Training Loss: 0.000685 Validation Loss: 0.005332
Validation loss decreased (0.005413 --> 0.005332). Saving model ...
Epoch: 6 Training Loss: 0.000676 Validation Loss: 0.005228
Validation loss decreased (0.005332 --> 0.005228). Saving model ...
Epoch: 7 Training Loss: 0.000669 Validation Loss: 0.005143
Validation loss decreased (0.005228 --> 0.005143). Saving model ...
Epoch: 8 Training Loss: 0.000661 Validation Loss: 0.005093
Validation loss decreased (0.005143 --> 0.005093). Saving model ...
Epoch: 9 Training Loss: 0.000654 Validation Loss: 0.005008
Validation loss decreased (0.005093 --> 0.005008). Saving model ...
Epoch: 10 Training Loss: 0.000645 Validation Loss: 0.004952
Validation loss decreased (0.005008 --> 0.004952). Saving model ...
Epoch: 11 Training Loss: 0.000637 Validation Loss: 0.004925
Validation loss decreased (0.004952 --> 0.004925). Saving model ...
Epoch: 12 Training Loss: 0.000632 Validation Loss: 0.004893
Validation loss decreased (0.004925 --> 0.004893). Saving model ...
Epoch: 13 Training Loss: 0.000626 Validation Loss: 0.004798
Validation loss decreased (0.004893 --> 0.004798). Saving model ...
Epoch: 14 Training Loss: 0.000621 Validation Loss: 0.004822
Epoch: 15 Training Loss: 0.000615 Validation Loss: 0.004750
Validation loss decreased (0.004798 --> 0.004750). Saving model ...
Epoch: 16 Training Loss: 0.000611 Validation Loss: 0.004697
Validation loss decreased (0.004750 --> 0.004697). Saving model ...
Epoch: 17 Training Loss: 0.000605 Validation Loss: 0.004670
Validation loss decreased (0.004697 --> 0.004670). Saving model ...
Epoch: 18 Training Loss: 0.000601 Validation Loss: 0.004659
Validation loss decreased (0.004670 --> 0.004659). Saving model ...
Epoch: 19 Training Loss: 0.000595 Validation Loss: 0.004709
Epoch: 20 Training Loss: 0.000590 Validation Loss: 0.004637
Validation loss decreased (0.004659 --> 0.004637). Saving model ...
Epoch: 21 Training Loss: 0.000585 Validation Loss: 0.004553
Validation loss decreased (0.004637 --> 0.004553). Saving model ...
Epoch: 22 Training Loss: 0.000582 Validation Loss: 0.004546
Validation loss decreased (0.004553 --> 0.004546). Saving model ...
Epoch: 23 Training Loss: 0.000577 Validation Loss: 0.004540
Validation loss decreased (0.004546 --> 0.004540). Saving model ...
Epoch: 24 Training Loss: 0.000571 Validation Loss: 0.004476
Validation loss decreased (0.004540 --> 0.004476). Saving model ...
Epoch: 25 Training Loss: 0.000569 Validation Loss: 0.004566
Epoch: 26 Training Loss: 0.000568 Validation Loss: 0.004468
Validation loss decreased (0.004476 --> 0.004468). Saving model ...
Epoch: 27 Training Loss: 0.000561 Validation Loss: 0.004527
Epoch: 28 Training Loss: 0.000561 Validation Loss: 0.004431
Validation loss decreased (0.004468 --> 0.004431). Saving model ...
Epoch: 29 Training Loss: 0.000554 Validation Loss: 0.004439
Epoch: 30 Training Loss: 0.000550 Validation Loss: 0.004412
Validation loss decreased (0.004431 --> 0.004412). Saving model ...
Epoch: 31 Training Loss: 0.000546 Validation Loss: 0.004366
Validation loss decreased (0.004412 --> 0.004366). Saving model ...
Epoch: 32 Training Loss: 0.000545 Validation Loss: 0.004290
Validation loss decreased (0.004366 --> 0.004290). Saving model ...
Epoch: 33 Training Loss: 0.000543 Validation Loss: 0.004275
Validation loss decreased (0.004290 --> 0.004275). Saving model ...
Epoch: 34 Training Loss: 0.000538 Validation Loss: 0.004375
Epoch: 35 Training Loss: 0.000532 Validation Loss: 0.004272
Validation loss decreased (0.004275 --> 0.004272). Saving model ...
Epoch: 36 Training Loss: 0.000528 Validation Loss: 0.004421
Epoch: 37 Training Loss: 0.000531 Validation Loss: 0.004288
Epoch: 38 Training Loss: 0.000526 Validation Loss: 0.004248
Validation loss decreased (0.004272 --> 0.004248). Saving model ...
Epoch: 39 Training Loss: 0.000519 Validation Loss: 0.004253
Epoch: 40 Training Loss: 0.000517 Validation Loss: 0.004270
Epoch: 41 Training Loss: 0.000514 Validation Loss: 0.004207
Validation loss decreased (0.004248 --> 0.004207). Saving model ...
Epoch: 42 Training Loss: 0.000510 Validation Loss: 0.004216
Epoch: 43 Training Loss: 0.000503 Validation Loss: 0.004157
Validation loss decreased (0.004207 --> 0.004157). Saving model ...
Epoch: 44 Training Loss: 0.000502 Validation Loss: 0.004290
Epoch: 45 Training Loss: 0.000501 Validation Loss: 0.004184
Epoch: 46 Training Loss: 0.000496 Validation Loss: 0.004308
Epoch: 47 Training Loss: 0.000494 Validation Loss: 0.004163
Epoch: 48 Training Loss: 0.000494 Validation Loss: 0.004244
Epoch: 49 Training Loss: 0.000486 Validation Loss: 0.004197
Epoch: 50 Training Loss: 0.000483 Validation Loss: 0.004124
Validation loss decreased (0.004157 --> 0.004124). Saving model ...
Epoch: 51 Training Loss: 0.000484 Validation Loss: 0.004195
Epoch: 52 Training Loss: 0.000478 Validation Loss: 0.004063
Validation loss decreased (0.004124 --> 0.004063). Saving model ...
Epoch: 53 Training Loss: 0.000481 Validation Loss: 0.004157
Epoch: 54 Training Loss: 0.000472 Validation Loss: 0.004158
Epoch: 55 Training Loss: 0.000473 Validation Loss: 0.004148
Epoch: 56 Training Loss: 0.000469 Validation Loss: 0.004096
Epoch: 57 Training Loss: 0.000467 Validation Loss: 0.004159
Epoch: 58 Training Loss: 0.000457 Validation Loss: 0.004154
Epoch: 59 Training Loss: 0.000461 Validation Loss: 0.004155
Epoch: 60 Training Loss: 0.000459 Validation Loss: 0.004156
###Markdown
(IMPLEMENTATION) Test the Model. Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 10%.
###Code
def test(loaders, model, criterion, use_cuda):
# monitor test loss and accuracy
test_loss = 0.
correct = 0.
total = 0.
model.eval()
for batch_idx, (data, target) in enumerate(loaders['test']):
# move to GPU
if use_cuda:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update average test loss
test_loss = test_loss + ((1 / (batch_idx + 1)) * (loss.data - test_loss))
# convert output probabilities to predicted class
pred = output.data.max(1, keepdim=True)[1]
# compare predictions to true label
correct += np.sum(np.squeeze(pred.eq(target.data.view_as(pred))).cpu().numpy())
total += data.size(0)
print('Test Loss: {:.6f}\n'.format(test_loss))
print('\nTest Accuracy: %2d%% (%2d/%2d)' % (
100. * correct / total, correct, total))
# call test function
test(loaders_full, model_scratch, criterion_scratch, use_cuda)
###Output
Test Loss: 3.444776
Test Accuracy: 16% (140/836)
###Markdown
--- Step 4: Create a CNN to Classify Dog Breeds (using Transfer Learning). You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set. (IMPLEMENTATION) Specify Data Loaders for the Dog Dataset. Use the code cell below to write three separate [data loaders](http://pytorch.org/docs/master/data.htmltorch.utils.data.DataLoader) for the training, validation, and test datasets of dog images (located at `dogImages/train`, `dogImages/valid`, and `dogImages/test`, respectively). **You may also reuse the same data loaders created when building a CNN from scratch.**
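For reference, `loaders_full` was built earlier in the notebook; a minimal sketch of the usual `ImageFolder` + `DataLoader` pattern it relies on is shown below. The folder paths, crop size, batch size, and ImageNet normalization constants here are assumptions for illustration, not the notebook's exact settings.
```python
import torch
from torchvision import datasets, transforms

normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
train_tf = transforms.Compose([transforms.RandomResizedCrop(224),
                               transforms.RandomHorizontalFlip(),
                               transforms.ToTensor(), normalize])
eval_tf = transforms.Compose([transforms.Resize(256), transforms.CenterCrop(224),
                              transforms.ToTensor(), normalize])

dataset_sketch = {'train': datasets.ImageFolder('dogImages/train', transform=train_tf),
                  'valid': datasets.ImageFolder('dogImages/valid', transform=eval_tf),
                  'test': datasets.ImageFolder('dogImages/test', transform=eval_tf)}
loaders_sketch = {split: torch.utils.data.DataLoader(ds, batch_size=20, shuffle=(split == 'train'))
                  for split, ds in dataset_sketch.items()}
```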
###Code
## TODO: Specify data loaders
loaders_transfer = loaders_full
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Model Architecture. Use transfer learning to create a CNN that classifies dog breed. Fill in the code cell below and save your initialized model as the variable `model_transfer`.
###Code
import torchvision.models as models
import torch.nn as nn
## TODO: Specify model architecture
model_transfer = models.vgg16(pretrained=True)
num_features = model_transfer.classifier[6].in_features
model_transfer.classifier[6] = nn.Linear(num_features, num_classes)
if use_cuda:
model_transfer = model_transfer.cuda()
###Output
_____no_output_____
###Markdown
__Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Explain why this architecture is suitable for the problem at hand. __Answer:__ I used the VGG16 model for transfer learning. It contains 13 convolutional layers, 3 fully connected layers, and 5 pooling layers, and its defining idea is small filters with deeper networks. I set the number of outputs of the final fully connected layer to `num_classes`, the number of breeds in this dataset, which makes the network well suited to the problem at hand. (IMPLEMENTATION) Specify Loss Function and Optimizer. Specify a [loss function](http://pytorch.org/docs/master/nn.htmlloss-functions) and [optimizer](http://pytorch.org/docs/master/optim.html) in the next code cell. Save the chosen loss function as `criterion_transfer`, and the optimizer as `optimizer_transfer` below.
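A complementary step that is often paired with this setup (optional here, since the optimizer below is only given the classifier's parameters) is to freeze the convolutional feature extractor explicitly; a hedged sketch:
```python
# Freeze the VGG16 feature extractor so that only the new classifier head is trained
for param in model_transfer.features.parameters():
    param.requires_grad = False
```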
###Code
criterion_transfer = nn.CrossEntropyLoss()
optimizer_transfer = optim.SGD(model_transfer.classifier.parameters(), lr = 0.01)
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Train and Validate the Model. Train and validate your model in the code cell below. [Save the final model parameters](http://pytorch.org/docs/master/notes/serialization.html) at the filepath `'model_transfer.pt'`.
###Code
# train the model
n_epochs=10
model_transfer = train(n_epochs, loaders_transfer, model_transfer, optimizer_transfer, criterion_transfer, use_cuda, 'model_transfer.pt')
# load the model that got the best validation accuracy (uncomment the line below)
model_transfer.load_state_dict(torch.load('model_transfer.pt'))
###Output
Epoch: 1 Training Loss: 0.000339 Validation Loss: 0.000913
Validation loss decreased (inf --> 0.000913). Saving model ...
Epoch: 2 Training Loss: 0.000204 Validation Loss: 0.000644
Validation loss decreased (0.000913 --> 0.000644). Saving model ...
Epoch: 3 Training Loss: 0.000185 Validation Loss: 0.000630
Validation loss decreased (0.000644 --> 0.000630). Saving model ...
Epoch: 4 Training Loss: 0.000171 Validation Loss: 0.000576
Validation loss decreased (0.000630 --> 0.000576). Saving model ...
Epoch: 5 Training Loss: 0.000164 Validation Loss: 0.000546
Validation loss decreased (0.000576 --> 0.000546). Saving model ...
Epoch: 6 Training Loss: 0.000155 Validation Loss: 0.000527
Validation loss decreased (0.000546 --> 0.000527). Saving model ...
Epoch: 7 Training Loss: 0.000151 Validation Loss: 0.000532
Epoch: 8 Training Loss: 0.000147 Validation Loss: 0.000526
Validation loss decreased (0.000527 --> 0.000526). Saving model ...
Epoch: 9 Training Loss: 0.000144 Validation Loss: 0.000496
Validation loss decreased (0.000526 --> 0.000496). Saving model ...
Epoch: 10 Training Loss: 0.000145 Validation Loss: 0.000510
###Markdown
(IMPLEMENTATION) Test the Model. Try out your model on the test dataset of dog images. Use the code cell below to calculate and print the test loss and accuracy. Ensure that your test accuracy is greater than 60%.
###Code
test(loaders_transfer, model_transfer, criterion_transfer, use_cuda)
###Output
Test Loss: 0.521102
Test Accuracy: 85% (712/836)
###Markdown
(IMPLEMENTATION) Predict Dog Breed with the Model. Write a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan hound`, etc.) that is predicted by the model.
###Code
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
# list of class names by index, i.e. a name can be accessed like class_names[0]
class_names = [item[4:].replace("_", " ") for item in dataset_full['train'].classes]
def predict_breed_transfer(img_path):
    # load the image, run it through the transfer-learned model, and return the predicted breed
    img = load_image(img_path)
    if use_cuda:
        img = img.cuda()
    model_transfer.eval()
    with torch.no_grad():
        output = model_transfer(img)
    index = output.data.cpu().numpy().argmax()
    return class_names[index]
###Output
_____no_output_____
###Markdown
--- Step 5: Write your Algorithm. Write an algorithm that takes an image's file path as input and first determines whether the image contains a human face, a dog, or neither. Then, - if a __dog__ is detected in the image, return the predicted breed. - if a __human face__ is detected in the image, return the resembling dog breed. - if __neither__ is detected, print an error message. You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the `face_detector` and `dog_detector` functions developed above. You must use the CNN created in Step 4 to predict the dog breed. Some sample algorithm output is provided below, but feel free to design your own user experience. (IMPLEMENTATION) Write your Algorithm
###Code
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def run_app(img_path):
## handle cases for a human face, dog, and neither
image = Image.open(img_path)
if dog_detector(img_path):
print("The image is a dog.")
plt.imshow(image)
plt.show()
prediction = predict_breed_transfer(img_path)
print('The predicted breed is:', prediction)
elif face_detector(img_path):
print("The image is a human.")
plt.imshow(image)
plt.show()
prediction = predict_breed_transfer(img_path)
print('The predicted breed is:', prediction)
else:
print("This is neither human, nor dog.")
plt.imshow(image)
plt.show()
###Output
_____no_output_____
###Markdown
--- Step 6: Test Your Algorithm. In this section you get to take your new algorithm for a spin! What kind of dog does the algorithm think you look like? If you have a dog, does it predict its breed accurately? If you have a cat, does it mistakenly think your cat is a dog? (IMPLEMENTATION) Test Your Algorithm on Sample Images. Test your algorithm on at least six images on your computer; you may use any images you like. Use at least two human images and two dog images. __Question 6:__ Is the output better than you expected :)? Or worse :(? Suggest at least three possible points of improvement for your algorithm. __Answer:__ (three possible improvements) The results are better than I expected. Possible improvements: ① add more data so that the results become more accurate; ② randomly shuffle the images; ③ train the model for a larger number of epochs.```python TODO: Execute your algorithm from Step 6 on at least 6 images on your computer. Feel free to use as many code cells as needed. suggested code, belowfor file in np.hstack((human_files[:3], dog_files[:3])): run_app(file)```
###Code
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
## suggested code, below
for file in np.hstack((human_files[:3], dog_files[:3])):
run_app(file)
###Output
The image is a human.
|
notebooks/Day_4/Examples/Example_1_TensorRt.ipynb | ###Markdown
Install TensorRt and Torch2TRT on Google Colab. You have to restart the notebook for change to take effect. Ignore this part on Jetbot. Install TensorRt
1. Create new folder 'TRT' on Google drive
2. Download deb package for cuda 10 and place into TRT folder
3. Run the script below. Download
https://developer.nvidia.com/compute/machine-learning/tensorrt/secure/7.0/7.0.0.11/local_repo/nv-tensorrt-repo-ubuntu1804-cuda10.0-trt7.0.0.11-ga-20191216_1-1_amd64.deb and place it in your google drive under a folder "TRT".
###Code
import os
os.environ["os1"]="ubuntu1804"
os.environ["tag"]="cuda10.0-trt7.0.0.11-ga-20191216"
os.environ["version"]="7.0.0-1+cuda10.0"
os.chdir('/content/drive/MyDrive/TRT/')
!sudo dpkg -i nv-tensorrt-repo-${os1}-${tag}_1-1_amd64.deb
!sudo apt-key add /var/nv-tensorrt-repo-${tag}/7fa2af80.pub
!sudo apt-get update
!sudo apt-get install libnvinfer7=${version} libnvonnxparsers7=${version} libnvparsers7=${version} libnvinfer-plugin7=${version} libnvinfer-dev=${version} libnvonnxparsers-dev=${version} libnvparsers-dev=${version} libnvinfer-plugin-dev=${version} python-libnvinfer=${version} python3-libnvinfer=${version}
!sudo apt-mark hold libnvinfer7 libnvonnxparsers7 libnvparsers7 libnvinfer-plugin7 libnvinfer-dev libnvonnxparsers-dev libnvparsers-dev libnvinfer-plugin-dev python-libnvinfer python3-libnvinfer
!sudo apt-get install tensorrt=${version}
!sudo apt-get install python3-libnvinfer-dev=${version}
###Output
_____no_output_____
###Markdown
Install torch2trt
1. Create folder 'torch2trt' on Google drive
2. Run script below
3. Restart the notebook after installation success
###Code
!git clone https://github.com/NVIDIA-AI-IOT/torch2trt /drive/MyDrive/torch2trt
! cd /drive/MyDrive/torch2trt && pwd && python setup.py install
###Output
_____no_output_____
###Markdown
Convert PyTorch model using torch2trt
###Code
import os
import time
# Helper function to get size
def get_size(file_path):
b = os.path.getsize(file_path)
print("File size in byte: ", b)
# Helper function to measure inference time and return model predictions
def run_inference(model, data):
t0 = time.time()
output = model(data)
print("Inference Speed (s): {:.4f}".format(time.time() - t0))
return output
###Output
_____no_output_____
###Markdown
Use a pretrained resnet50 model from torchvision for example. Set to eval to change layer behavior to inference because we only want to use it for inference
###Code
import torchvision
import torch
model = torchvision.models.resnet50(pretrained=True).cuda().eval()
model_path = '/content/drive/MyDrive/JetBot/Day 4/Examples/resnet50.pth'
# model_path = 'resnet50.pth' # Use this on Jetbot
torch.save(model.state_dict(), model_path)
get_size(model_path)
for name, param in model.state_dict().items():
print("Layer: ", name)
print(param.type())
break
# model = model.half()
# for name, param in model.state_dict().items():
# print("Layer: ", name)
# print(param.type())
# break
###Output
_____no_output_____
###Markdown
Next, create some sample input that will be used to infer the shape and data types of our TensorRT engine
Make sure to set the right shape for input according to the model input shape.
###Code
data = torch.randn((1, 3, 224, 224)).cuda()
# If you encounter ModuleNotFoundError: No module named 'torch2trt' on Google Colab, restart the session
from torch2trt import torch2trt
model_trt = torch2trt(model, [data], fp16_mode=True)
###Output
_____no_output_____
###Markdown
Perform inference using converted model
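A note on the timing helper: a single forward pass on a GPU can be misleading to time, because CUDA kernels launch asynchronously and the first call pays one-off initialization costs. The sketch below (an illustrative addition assuming a CUDA device, not part of torch2trt) adds warm-up iterations and explicit synchronization.
```python
import time
import torch

def run_inference_benchmark(model, data, n_warmup=5, n_runs=20):
    with torch.no_grad():
        for _ in range(n_warmup):      # warm-up: exclude one-off CUDA setup from the timing
            model(data)
        torch.cuda.synchronize()
        t0 = time.time()
        for _ in range(n_runs):
            output = model(data)
        torch.cuda.synchronize()       # wait for all queued kernels before reading the clock
    print("Avg inference time (s): {:.5f}".format((time.time() - t0) / n_runs))
    return output
```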
###Code
output_trt = run_inference(model_trt, data)
output = run_inference(model, data)
###Output
_____no_output_____
###Markdown
Check predictions error of both model
###Code
print('max error: %f' % float(torch.max(torch.abs(output - output_trt))))
###Output
_____no_output_____
###Markdown
Check the size of trt model
###Code
model_path = '/content/drive/MyDrive/JetBot/Day 4/Examples/resnet50_trt.pth'
# model_path = 'resnet50_trt.pth' # Use this on Jetbot
torch.save(model_trt.state_dict(), model_path)
get_size(model_path)
67358703 > 102549933  # quick size comparison in bytes (presumably the TensorRT vs. original model sizes printed by get_size)
###Output
_____no_output_____
###Markdown
Load tensorrt model
###Code
from torch2trt import TRTModule
model_trt = TRTModule()
model_path = '/content/drive/MyDrive/JetBot/Day 4/Examples/resnet50_trt.pth'
# model_path = 'resnet50_trt.pth' # Use this on Jetbot
model_trt.load_state_dict(torch.load(model_path))
###Output
_____no_output_____ |
gradient-descent/10.b.1 Closed Form Solution.ipynb | ###Markdown
Linear Regression with closed form solution Import necessary libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.io as sio
%matplotlib inline
def computeCost(X, Y, theta):
"""
Calculates the overall cost and returns it
to the caller.
Parameters
----------
X : Nxd matrix
N is number of samples and d is number of params
Y : Nx1 matrix
The matrix storing the actual outputs
theta: 1xd matrix
The matrix storing regression parameters
Returns
-------
float
Overall cost for the current step calculated
as (sum of squared error) / (2 * N)
"""
inner = np.power((X * theta - Y), 2)
return np.sum(inner) / (2 * len(X))
def computeTheta(X, Y):
"""
Calculates theta using the closed form solution.
This method uses the following formula to
calculate the value of regression parameters:
theta = ((X.T*X)^(-1))*(X.T*Y)
where:
X is a matrix storing the input data points,
Y is the observed outputs,
X.T represents the transpose of a matrix X and
theta is a matrix storing the regression params
Parameters
----------
X : Nxd matrix
N is the number of input samples and d is
the number of features
Y : Nx1 matrix
The matrix storing the actual outputs
Returns
-------
dx1 matrix
The calculated regression parameters
"""
    inversePart = np.linalg.inv(X.T * X)  # (X.T*X)^(-1) as a true matrix inverse
rest = X.T * Y
return inversePart * rest
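# Hedged sanity check (illustrative, not part of the original analysis): with the matrix
# inverse above, computeTheta should agree with numpy's least-squares solver on random data.
# _A = np.matrix(np.random.randn(20, 3))
# _b = np.matrix(np.random.randn(20, 1))
# _theta_closed = computeTheta(_A, _b)
# _theta_lstsq, *_ = np.linalg.lstsq(np.asarray(_A), np.asarray(_b), rcond=None)
# assert np.allclose(_theta_closed, _theta_lstsq)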
def add_polynomial_cols(data, degree, result_col_name):
"""
Generate polynomial columns for all
columns in the data based on the
specified degree.
Parameters
----------
data : Pandas DataFrame
The data to be manipulated
degree : Integer
The polynomial degree
result_col_name : String
The name of the column that stores
the results
Returns
-------
Pandas DataFrame
storing the updated data
"""
# Fetch the list of column names
cols = list(data.columns.values)
# Create polynomial columns for all
# except the result column
for col in cols:
if (col != result_col_name):
for i in range(degree + 1):
if (i != 1):
new_col_name = col + str(i)
data[new_col_name] = data[col].apply(lambda x: pow(x, i))
return data
def pre_process(data, result_col_name, degree):
# Add polynomial columns
data = add_polynomial_cols(data, degree, result_col_name)
# Split data and result columns into
# X and Y
data_cols = list(data.columns.values)
data_cols.remove(result_col_name)
X = data[data_cols]
X = np.matrix(X.values)
Y = data[[result_col_name]]
Y = np.matrix(Y.values)
return X, Y
def load_train_data(path):
train_data = sio.loadmat(path)
train_data = pd.DataFrame(np.hstack((train_data['X_trn'], train_data['Y_trn'])))
train_data.columns = ['X_trn', 'Y_trn']
return train_data
def plot_training_data_fit(train_data, degree, theta):
x = np.linspace(train_data.X_trn.min(), train_data.X_trn.max(), 100)
f = 0
for i in range(degree + 1):
f += (theta[i, 0] * pow(x, i))
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(x, f, 'r', label='Prediction')
ax.scatter(train_data.X_trn, train_data.Y_trn, label='Traning Data')
ax.legend(loc=2)
ax.set_xlabel('X_trn')
ax.set_ylabel('Y_trn')
ax.set_title('Predicted X_trn vs. Y_trn')
def generate_model(path, result_col_name, degree):
train_data = load_train_data(path)
X, Y = pre_process(train_data, result_col_name, degree)
theta = computeTheta(X, Y)
print("Theta: ", theta)
train_error = computeCost(X, Y, theta)
print("Train error: ", train_error)
plot_training_data_fit(train_data, degree, theta)
return theta
def load_test_data(path):
test_data = sio.loadmat(path)
test_data = pd.DataFrame(np.hstack((test_data['X_tst'], test_data['Y_tst'])))
test_data.columns = ['X_tst', 'Y_tst']
return test_data
def plot_test_data_fit(test_data, degree, theta):
x = np.linspace(test_data.X_tst.min(), test_data.X_tst.max(), 100)
f = 0
for i in range(degree + 1):
f += (theta[i, 0] * pow(x, i))
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(x, f, 'r', label='Prediction')
ax.scatter(test_data.X_tst, test_data.Y_tst, label='Test Data')
ax.legend(loc=2)
ax.set_xlabel('X_tst')
ax.set_ylabel('Y_tst')
ax.set_title('Predicted X_tst vs. Y_tst')
def predict(path, result_col_name, degree, theta):
test_data = load_test_data(path)
X, Y = pre_process(test_data, result_col_name, degree)
test_error = computeCost(X, Y, theta)
print("Test error: ", test_error)
plot_test_data_fit(test_data, degree, theta)
path = "dataset1.mat"
result_col_name = "Y_trn"
degree = 5
theta = generate_model(path, result_col_name, degree)
result_col_name = "Y_tst"
predict(path, result_col_name, degree, theta)
###Output
Theta: [[ 381.22587064]
[ 883.56557018]
[ 99.8521827 ]
[ 43.81219824]
[ 10.97272652]
[ 4.76231624]]
Train error: 8745136.6456
Test error: 634885528.886
|
submodules/resource/d2l-zh/mxnet/chapter_convolutional-neural-networks/lenet.ipynb | ###Markdown
Convolutional Neural Networks (LeNet):label:`sec_lenet` Over the previous sections, we learned all the components needed to build a complete convolutional neural network. Recall that we previously applied the softmax regression model ( :numref:`sec_softmax_scratch`) and the multilayer perceptron model ( :numref:`sec_mlp_scratch`) to the clothing images in the Fashion-MNIST dataset. To be able to apply softmax regression and multilayer perceptrons, we first flattened each $28\times28$ image into a fixed-length one-dimensional vector of 784 dimensions and then processed it with fully connected layers. Now that we have mastered convolutional layers, we can retain the spatial structure of images. A further advantage of replacing fully connected layers with convolutional layers is a leaner model that needs far fewer parameters. In this section, we introduce LeNet, one of the earliest published convolutional neural networks, which attracted wide attention for its strong performance on computer vision tasks. The model was proposed (and named) in 1989 by Yann LeCun, then a researcher at AT&T Bell Labs, for the purpose of recognizing handwritten digits in images :cite:`LeCun.Bottou.Bengio.ea.1998`. At the time, Yann LeCun published the first study that successfully trained a convolutional neural network via backpropagation, work that represented the fruit of more than a decade of neural network research and development. Back then, LeNet achieved results on par with support vector machines, the dominant approach in supervised learning. LeNet was widely used in automatic teller machines (ATMs) to help recognize digits for processing checks. To this day, some ATMs still run the code that Yann LeCun and his colleague Leon Bottou wrote in the 1990s! LeNet At a high level, (**LeNet (LeNet-5) consists of two parts:**)(~~a convolutional encoder and a dense block of fully connected layers~~)* a convolutional encoder, consisting of two convolutional layers;* a dense block, consisting of three fully connected layers. The architecture is shown in :numref:`img_lenet`.:label:`img_lenet` The basic unit in each convolutional block is a convolutional layer, a sigmoid activation function, and an average pooling layer. Note that while ReLUs and max pooling work better, they had not yet appeared in the 1990s. Each convolutional layer uses a $5\times 5$ kernel and a sigmoid activation function. These layers map the input to multiple two-dimensional feature outputs, typically increasing the number of channels at the same time. The first convolutional layer has 6 output channels, while the second has 16. Each $2\times2$ pooling operation (stride 2) reduces dimensionality by a factor of 4 via spatial downsampling. The output shape of a convolution is determined by the batch size, number of channels, height, and width. To pass the output of the convolutional block to the dense block, we must flatten each example in the minibatch. In other words, we transform this four-dimensional input into the two-dimensional input expected by fully connected layers: the first dimension indexes the examples in the minibatch and the second gives the flat vector representation of each example. LeNet's dense block has three fully connected layers, with 120, 84, and 10 outputs, respectively. Because we are performing a classification task, the 10-dimensional output layer corresponds to the number of possible output classes. Through the LeNet code below, you will see that implementing such models with a deep learning framework is remarkably simple: we only need to instantiate a `Sequential` block and chain together the required layers.
###Code
from mxnet import autograd, gluon, init, np, npx
from mxnet.gluon import nn
from d2l import mxnet as d2l
npx.set_np()
net = nn.Sequential()
net.add(nn.Conv2D(channels=6, kernel_size=5, padding=2, activation='sigmoid'),
nn.AvgPool2D(pool_size=2, strides=2),
nn.Conv2D(channels=16, kernel_size=5, activation='sigmoid'),
nn.AvgPool2D(pool_size=2, strides=2),
        # By default, "Dense" automatically converts an input of shape (batch size, no. of channels, height, width)
        # into an input of shape (batch size, no. of channels * height * width)
nn.Dense(120, activation='sigmoid'),
nn.Dense(84, activation='sigmoid'),
nn.Dense(10))
###Output
_____no_output_____
###Markdown
We have taken a small liberty with the original model, removing the Gaussian activation in the final layer. Other than that, this network matches the original LeNet-5 architecture. Below, we pass a single-channel (black-and-white) image of size $28 \times 28$ through LeNet. By printing the shape of the output at each layer, we can [**inspect the model**] to make sure that its operations line up with what we expect from :numref:`img_lenet_vert`.:label:`img_lenet_vert`
###Code
X = np.random.uniform(size=(1, 1, 28, 28))
net.initialize()
for layer in net:
X = layer(X)
print(layer.name, 'output shape:\t', X.shape)
###Output
conv0 output shape: (1, 6, 28, 28)
pool0 output shape: (1, 6, 14, 14)
conv1 output shape: (1, 16, 10, 10)
pool1 output shape: (1, 16, 5, 5)
dense0 output shape: (1, 120)
dense1 output shape: (1, 84)
dense2 output shape: (1, 10)
###Markdown
Note that throughout the convolutional block, the height and width of each layer's representation shrink compared with the previous layer. The first convolutional layer uses 2 pixels of padding to compensate for the reduction in features that a $5 \times 5$ kernel would otherwise cause. In contrast, the second convolutional layer uses no padding, so both height and width shrink by 4 pixels. (As a quick check with the standard output-size formula $\lfloor(n + 2p - k)/s\rfloor + 1$: the first convolutional layer gives $(28 + 2\cdot2 - 5)/1 + 1 = 28$ and the following $2\times2$ pooling with stride 2 gives $14$, matching the shapes printed above.) Going up the stack of layers, the number of channels increases from 1 at the input, to 6 after the first convolutional layer, to 16 after the second, while each pooling layer halves the height and width. Finally, each fully connected layer reduces the dimensionality, ultimately emitting an output whose dimension matches the number of classes. Model Training Now that we have implemented LeNet, let us see [**how LeNet fares on the Fashion-MNIST dataset**].
###Code
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Although CNNs have fewer parameters, they can still be more expensive to compute than a comparably deep multilayer perceptron, because each parameter participates in many more multiplications. If you have access to a GPU, this is a good time to use it to speed up training. For evaluation, we need to [**make a slight modification to the `evaluate_accuracy` function**] described in :numref:`sec_softmax_scratch`. Since the full dataset lives in main memory, we need to copy it to GPU memory before the model can use the GPU to compute on it.
###Code
def evaluate_accuracy_gpu(net, data_iter, device=None): #@save
"""使用GPU计算模型在数据集上的精度"""
if not device: # 查询第一个参数所在的第一个设备
device = list(net.collect_params().values())[0].list_ctx()[0]
metric = d2l.Accumulator(2) # 正确预测的数量,总预测的数量
for X, y in data_iter:
X, y = X.as_in_ctx(device), y.as_in_ctx(device)
metric.add(d2l.accuracy(net(X), y), d2l.size(y))
return metric[0] / metric[1]
###Output
_____no_output_____
###Markdown
[**We also need a small change to make use of a GPU**]. Unlike `train_epoch_ch3` defined in :numref:`sec_softmax_scratch`, we now need to move each minibatch of data to our designated device (e.g., a GPU) before performing the forward and backward propagations. The training function `train_ch6` below is likewise similar to `train_ch3` defined in :numref:`sec_softmax_scratch`. Since we will be implementing networks with many layers going forward, we rely mainly on high-level APIs: the following training function assumes a model created with high-level APIs as input and optimizes it accordingly. We initialize the model parameters with the Xavier initialization introduced in :numref:`subsec_xavier`. Just as with fully connected layers, we use the cross-entropy loss function and minibatch stochastic gradient descent.
###Code
#@save
def train_ch6(net, train_iter, test_iter, num_epochs, lr, device):
"""用GPU训练模型(在第六章定义)"""
net.initialize(force_reinit=True, ctx=device, init=init.Xavier())
loss = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(),
'sgd', {'learning_rate': lr})
animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs],
legend=['train loss', 'train acc', 'test acc'])
timer, num_batches = d2l.Timer(), len(train_iter)
for epoch in range(num_epochs):
        metric = d2l.Accumulator(3)  # Sum of training loss, sum of training accuracy, no. of examples
for i, (X, y) in enumerate(train_iter):
timer.start()
            # Here is the major difference from `d2l.train_epoch_ch3`
X, y = X.as_in_ctx(device), y.as_in_ctx(device)
with autograd.record():
y_hat = net(X)
l = loss(y_hat, y)
l.backward()
trainer.step(X.shape[0])
metric.add(l.sum(), d2l.accuracy(y_hat, y), X.shape[0])
timer.stop()
train_l = metric[0] / metric[2]
train_acc = metric[1] / metric[2]
if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
animator.add(epoch + (i + 1) / num_batches,
(train_l, train_acc, None))
test_acc = evaluate_accuracy_gpu(net, test_iter)
animator.add(epoch + 1, (None, None, test_acc))
print(f'loss {train_l:.3f}, train acc {train_acc:.3f}, '
f'test acc {test_acc:.3f}')
print(f'{metric[2] * num_epochs / timer.sum():.1f} examples/sec '
f'on {str(device)}')
###Output
_____no_output_____
###Markdown
Now, let us [**train and evaluate the LeNet-5 model**].
###Code
lr, num_epochs = 0.9, 10
train_ch6(net, train_iter, test_iter, num_epochs, lr, d2l.try_gpu())
###Output
loss 0.473, train acc 0.823, test acc 0.786
40832.5 examples/sec on gpu(0)
|
face_mask_classifier.ipynb | ###Markdown
Load face detector
###Code
import numpy as np
import cv2
def mask_classifier(img, net, face_mask_classifier):
## get heigh and width
h, w = img.shape[0], img.shape[1]
## construct a blob of the image for the net
blob = cv2.dnn.blobFromImage(img,
1.0,
(300, 300),
(104.0, 177.0, 123.0))
## pass the blob through the net
net.setInput(blob)
detections = net.forward()
if detections.shape[2] > 0:
## loop in the detections
for i in range(0, detections.shape[2]):
## extract the confidence
confidence = detections[0, 0, i, 2]
## greater the minimun confidence
if confidence > 0.8:
## get the coordinates for bounding box
box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
start_x, start_y, end_x, end_y = box.astype("int")
## ensure the boundig box
start_x, start_y = max(0, start_x), max(0, start_y)
end_x, end_y = min(w-1, end_x), min(h-1, end_y)
## extract the face ROI
face = img[start_y:end_y, start_x:end_x]
#face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
face = cv2.resize(face, (224, 224))
face = img_to_array(face)
face = preprocess_input(face)
face = np.expand_dims(face, axis=0)
# pass the face through the model to determine if the face
# has a mask or not
(mask, withoutMask) = face_mask_classifier.predict(face)[0]
if max(mask, withoutMask) > 0.8:
# determine the class label and color we'll use to draw
# the bounding box and text
label = "Mask" if mask > withoutMask else "No Mask"
color = (0, 255, 0) if label == "Mask" else (255, 0, 0)
# include the probability in the label
label = "{}: {:.2f}%".format(label, max(mask, withoutMask) * 100)
# display the label and bounding box rectangle on the output
# frame
cv2.putText(img, label, (start_x, start_y - 10),
cv2.FONT_HERSHEY_SIMPLEX, 0.45, color, 2)
cv2.rectangle(img, (start_x, start_y), (end_x, end_y), color, 2)
return img
## get the architecture and weights of the model
proto = "face_detector/deploy.prototxt"
weights = "face_detector/res10_300x300_ssd_iter_140000.caffemodel"
## load model using opencv
net = cv2.dnn.readNet(proto, weights)
net
###Output
_____no_output_____
###Markdown
Load face mask classifier model
###Code
# import the necessary packages
from tensorflow.keras.models import load_model
face_mask_classifier = load_model("face_mask_classifier")
#face_mask_classifier.summary()
###Output
WARNING:tensorflow:From /home/neisser/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
WARNING:tensorflow:From /home/neisser/anaconda3/lib/python3.6/site-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
###Markdown
With images
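Note that `cv2.imread` returns images in BGR channel order while `plt.imshow` expects RGB, so the raw plots below can show swapped colors. A small display-only helper (an optional addition, assuming the images come from `cv2.imread`) could be:
```python
def show_bgr(img_bgr, title=None):
    # convert BGR (OpenCV) to RGB (matplotlib) purely for display
    plt.axis("off")
    if title:
        plt.title(title)
    plt.imshow(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB))
    plt.show()
```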
###Code
import matplotlib.pyplot as plt
import numpy as np
import cv2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.preprocessing.image import img_to_array
## read random image
img = cv2.imread("dataset/with_mask/0-with-mask.jpg")
plt.axis("off")
plt.imshow(img)
plt.show()
img_ = mask_classifier(img, net, face_mask_classifier)
plt.axis("off")
plt.imshow(img_)
plt.show()
## read random image
img = cv2.imread("dataset/without_mask/137.jpg")
plt.axis("off")
plt.imshow(img)
plt.show()
img_ = mask_classifier(img, net, face_mask_classifier)
plt.axis("off")
plt.imshow(img_)
plt.show()
## read random image
img = cv2.imread("person_with_mask.jpeg")
plt.axis("off")
plt.imshow(img)
plt.show()
img_ = mask_classifier(img, net, face_mask_classifier)
plt.axis("off")
plt.imshow(img_)
plt.show()
cv2.imwrite("example_classified.jpg", img_)
###Output
_____no_output_____
###Markdown
With Camera
###Code
## get camera
video_capture = cv2.VideoCapture(0)
while video_capture.isOpened():
# Capture frame-by-frame
ret, frame = video_capture.read()
## mask classifier
frame = mask_classifier(frame, net, face_mask_classifier)
# Display the resulting frame
cv2.imshow('Video', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
##When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
###Output
_____no_output_____ |
d2l/tensorflow/chapter_multilayer-perceptrons/dropout.ipynb | ###Markdown
暂退法(Dropout):label:`sec_dropout`在 :numref:`sec_weight_decay` 中,我们介绍了通过惩罚权重的$L_2$范数来正则化统计模型的经典方法。在概率角度看,我们可以通过以下论证来证明这一技术的合理性:我们已经假设了一个先验,即权重的值取自均值为0的高斯分布。更直观的是,我们希望模型深度挖掘特征,即将其权重分散到许多特征中,而不是过于依赖少数潜在的虚假关联。 重新审视过拟合当面对更多的特征而样本不足时,线性模型往往会过拟合。相反,当给出更多样本而不是特征,通常线性模型不会过拟合。不幸的是,线性模型泛化的可靠性是有代价的。简单地说,线性模型没有考虑到特征之间的交互作用。对于每个特征,线性模型必须指定正的或负的权重,而忽略其他特征。泛化性和灵活性之间的这种基本权衡被描述为*偏差-方差权衡*(bias-variance tradeoff)。线性模型有很高的偏差:它们只能表示一小类函数。然而,这些模型的方差很低:它们在不同的随机数据样本上可以得出了相似的结果。深度神经网络位于偏差-方差谱的另一端。与线性模型不同,神经网络并不局限于单独查看每个特征,而是学习特征之间的交互。例如,神经网络可能推断“尼日利亚”和“西联汇款”一起出现在电子邮件中表示垃圾邮件,但单独出现则不表示垃圾邮件。即使我们有比特征多得多的样本,深度神经网络也有可能过拟合。2017年,一组研究人员通过在随机标记的图像上训练深度网络。这展示了神经网络的极大灵活性,因为人类很难将输入和随机标记的输出联系起来,但通过随机梯度下降优化的神经网络可以完美地标记训练集中的每一幅图像。想一想这意味着什么?假设标签是随机均匀分配的,并且有10个类别,那么分类器在测试数据上很难取得高于10%的精度,那么这里的泛化差距就高达90%,如此严重的过拟合。深度网络的泛化性质令人费解,而这种泛化性质的数学基础仍然是悬而未决的研究问题。我们鼓励喜好研究理论的读者更深入地研究这个主题。本节,我们将着重对实际工具的探究,这些工具倾向于改进深层网络的泛化性。 扰动的稳健性在探究泛化性之前,我们先来定义一下什么是一个“好”的预测模型?我们期待“好”的预测模型能在未知的数据上有很好的表现:经典泛化理论认为,为了缩小训练和测试性能之间的差距,应该以简单的模型为目标。简单性以较小维度的形式展现,我们在 :numref:`sec_model_selection` 讨论线性模型的单项式函数时探讨了这一点。此外,正如我们在 :numref:`sec_weight_decay` 中讨论权重衰减($L_2$正则化)时看到的那样,参数的范数也代表了一种有用的简单性度量。简单性的另一个角度是平滑性,即函数不应该对其输入的微小变化敏感。例如,当我们对图像进行分类时,我们预计向像素添加一些随机噪声应该是基本无影响的。1995年,克里斯托弗·毕晓普证明了具有输入噪声的训练等价于Tikhonov正则化 :cite:`Bishop.1995`。这项工作用数学证实了“要求函数光滑”和“要求函数对输入的随机噪声具有适应性”之间的联系。然后在2014年,斯里瓦斯塔瓦等人 :cite:`Srivastava.Hinton.Krizhevsky.ea.2014`就如何将毕晓普的想法应用于网络的内部层提出了一个想法:在训练过程中,他们建议在计算后续层之前向网络的每一层注入噪声。因为当训练一个有多层的深层网络时,注入噪声只会在输入-输出映射上增强平滑性。这个想法被称为*暂退法*(dropout)。暂退法在前向传播过程中,计算每一内部层的同时注入噪声,这已经成为训练神经网络的常用技术。这种方法之所以被称为*暂退法*,因为我们从表面上看是在训练过程中丢弃(drop out)一些神经元。在整个训练过程的每一次迭代中,标准暂退法包括在计算下一层之前将当前层中的一些节点置零。需要说明的是,暂退法的原始论文提到了一个关于有性繁殖的类比:神经网络过拟合与每一层都依赖于前一层激活值相关,称这种情况为“共适应性”。作者认为,暂退法会破坏共适应性,就像有性生殖会破坏共适应的基因一样。那么关键的挑战就是如何注入这种噪声。一种想法是以一种*无偏向*(unbiased)的方式注入噪声。这样在固定住其他层时,每一层的期望值等于没有噪音时的值。在毕晓普的工作中,他将高斯噪声添加到线性模型的输入中。在每次训练迭代中,他将从均值为零的分布$\epsilon \sim \mathcal{N}(0,\sigma^2)$采样噪声添加到输入$\mathbf{x}$,从而产生扰动点$\mathbf{x}' = \mathbf{x} + \epsilon$,预期是$E[\mathbf{x}'] = \mathbf{x}$。在标准暂退法正则化中,通过按保留(未丢弃)的节点的分数进行规范化来消除每一层的偏差。换言之,每个中间活性值$h$以*暂退概率*$p$由随机变量$h'$替换,如下所示:$$\begin{aligned}h' =\begin{cases} 0 & \text{ 概率为 } p \\ \frac{h}{1-p} & \text{ 其他情况}\end{cases}\end{aligned}$$根据此模型的设计,其期望值保持不变,即$E[h'] = h$。 实践中的暂退法回想一下 :numref:`fig_mlp`中带有1个隐藏层和5个隐藏单元的多层感知机。当我们将暂退法应用到隐藏层,以$p$的概率将隐藏单元置为零时,结果可以看作是一个只包含原始神经元子集的网络。比如在 :numref:`fig_dropout2`中,删除了$h_2$和$h_5$,因此输出的计算不再依赖于$h_2$或$h_5$,并且它们各自的梯度在执行反向传播时也会消失。这样,输出层的计算不能过度依赖于$h_1, \ldots, h_5$的任何一个元素。:label:`fig_dropout2`通常,我们在测试时不用暂退法。给定一个训练好的模型和一个新的样本,我们不会丢弃任何节点,因此不需要标准化。然而也有一些例外:一些研究人员在测试时使用暂退法,用于估计神经网络预测的“不确定性”:如果通过许多不同的暂退法遮盖后得到的预测结果都是一致的,那么我们可以说网络发挥更稳定。 从零开始实现要实现单层的暂退法函数,我们从均匀分布$U[0, 1]$中抽取样本,样本数与这层神经网络的维度一致。然后我们保留那些对应样本大于$p$的节点,把剩下的丢弃。在下面的代码中,(**我们实现 `dropout_layer` 函数,该函数以`dropout`的概率丢弃张量输入`X`中的元素**),如上所述重新缩放剩余部分:将剩余部分除以`1.0-dropout`。
###Code
import tensorflow as tf
from d2l import tensorflow as d2l
def dropout_layer(X, dropout):
assert 0 <= dropout <= 1
# 在本情况中,所有元素都被丢弃
if dropout == 1:
return tf.zeros_like(X)
# 在本情况中,所有元素都被保留
if dropout == 0:
return X
mask = tf.random.uniform(
shape=tf.shape(X), minval=0, maxval=1) < 1 - dropout
return tf.cast(mask, dtype=tf.float32) * X / (1.0 - dropout)
###Output
_____no_output_____
###Markdown
我们可以通过下面几个例子来[**测试`dropout_layer`函数**]。我们将输入`X`通过暂退法操作,暂退概率分别为0、0.5和1。
###Code
X = tf.reshape(tf.range(16, dtype=tf.float32), (2, 8))
print(X)
print(dropout_layer(X, 0.))
print(dropout_layer(X, 0.5))
print(dropout_layer(X, 1.))
###Output
tf.Tensor(
[[ 0. 1. 2. 3. 4. 5. 6. 7.]
[ 8. 9. 10. 11. 12. 13. 14. 15.]], shape=(2, 8), dtype=float32)
tf.Tensor(
[[ 0. 1. 2. 3. 4. 5. 6. 7.]
[ 8. 9. 10. 11. 12. 13. 14. 15.]], shape=(2, 8), dtype=float32)
tf.Tensor(
[[ 0. 2. 4. 6. 0. 10. 0. 14.]
[ 0. 18. 20. 0. 24. 0. 0. 0.]], shape=(2, 8), dtype=float32)
tf.Tensor(
[[0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0.]], shape=(2, 8), dtype=float32)
###Markdown
定义模型参数同样,我们使用 :numref:`sec_fashion_mnist`中引入的Fashion-MNIST数据集。我们[**定义具有两个隐藏层的多层感知机,每个隐藏层包含256个单元**]。
###Code
num_outputs, num_hiddens1, num_hiddens2 = 10, 256, 256
###Output
_____no_output_____
###Markdown
定义模型我们可以将暂退法应用于每个隐藏层的输出(在激活函数之后),并且可以为每一层分别设置暂退概率:常见的技巧是在靠近输入层的地方设置较低的暂退概率。下面的模型将第一个和第二个隐藏层的暂退概率分别设置为0.2和0.5,并且暂退法只在训练期间有效。
###Code
dropout1, dropout2 = 0.2, 0.5
class Net(tf.keras.Model):
def __init__(self, num_outputs, num_hiddens1, num_hiddens2):
super().__init__()
self.input_layer = tf.keras.layers.Flatten()
self.hidden1 = tf.keras.layers.Dense(num_hiddens1, activation='relu')
self.hidden2 = tf.keras.layers.Dense(num_hiddens2, activation='relu')
self.output_layer = tf.keras.layers.Dense(num_outputs)
def call(self, inputs, training=None):
x = self.input_layer(inputs)
x = self.hidden1(x)
# 只有在训练模型时才使用dropout
if training:
# 在第一个全连接层之后添加一个dropout层
x = dropout_layer(x, dropout1)
x = self.hidden2(x)
if training:
# 在第二个全连接层之后添加一个dropout层
x = dropout_layer(x, dropout2)
x = self.output_layer(x)
return x
net = Net(num_outputs, num_hiddens1, num_hiddens2)
###Output
_____no_output_____
###Markdown
[**训练和测试**]这类似于前面描述的多层感知机训练和测试。
###Code
num_epochs, lr, batch_size = 10, 0.5, 256
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
trainer = tf.keras.optimizers.SGD(learning_rate=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
###Output
_____no_output_____
###Markdown
[**简洁实现**]对于深度学习框架的高级API,我们只需在每个全连接层之后添加一个`Dropout`层,将暂退概率作为唯一的参数传递给它的构造函数。在训练时,`Dropout`层将根据指定的暂退概率随机丢弃上一层的输出(相当于下一层的输入)。在测试时,`Dropout`层仅传递数据。
###Code
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
# 在第一个全连接层之后添加一个dropout层
tf.keras.layers.Dropout(dropout1),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
# 在第二个全连接层之后添加一个dropout层
tf.keras.layers.Dropout(dropout2),
tf.keras.layers.Dense(10),
])
###Output
_____no_output_____
###Markdown
接下来,我们[**对模型进行训练和测试**]。
###Code
trainer = tf.keras.optimizers.SGD(learning_rate=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
###Output
_____no_output_____
###Markdown
Dropout:label:`sec_dropout`In :numref:`sec_weight_decay`,we introduced the classical approachto regularizing statistical modelsby penalizing the $L_2$ norm of the weights.In probabilistic terms, we could justify this techniqueby arguing that we have assumed a prior beliefthat weights take values froma Gaussian distribution with mean zero.More intuitively, we might arguethat we encouraged the model to spread out its weightsamong many features rather than depending too muchon a small number of potentially spurious associations. Overfitting RevisitedFaced with more features than examples,linear models tend to overfit.But given more examples than features,we can generally count on linear models not to overfit.Unfortunately, the reliability with whichlinear models generalize comes at a cost.Naively applied, linear models do not takeinto account interactions among features.For every feature, a linear model must assigneither a positive or a negative weight, ignoring context.In traditional texts, this fundamental tensionbetween generalizability and flexibilityis described as the *bias-variance tradeoff*.Linear models have high bias: they can only represent a small class of functions.However, these models have low variance: they give similar resultsacross different random samples of the data.Deep neural networks inhabit the oppositeend of the bias-variance spectrum.Unlike linear models, neural networksare not confined to looking at each feature individually.They can learn interactions among groups of features.For example, they might infer that“Nigeria” and “Western Union” appearingtogether in an email indicates spambut that separately they do not.Even when we have far more examples than features,deep neural networks are capable of overfitting.In 2017, a group of researchers demonstratedthe extreme flexibility of neural networksby training deep nets on randomly-labeled images.Despite the absence of any true patternlinking the inputs to the outputs,they found that the neural network optimized by stochastic gradient descentcould label every image in the training set perfectly.Consider what this means.If the labels are assigned uniformlyat random and there are 10 classes,then no classifier can do betterthan 10% accuracy on holdout data.The generalization gap here is a whopping 90%.If our models are so expressive that theycan overfit this badly, then when shouldwe expect them not to overfit?The mathematical foundations forthe puzzling generalization propertiesof deep networks remain open research questions,and we encourage the theoretically-orientedreader to dig deeper into the topic.For now, we turn to the investigation ofpractical tools that tend toempirically improve the generalization of deep nets. 
Robustness through PerturbationsLet us think briefly about what weexpect from a good predictive model.We want it to peform well on unseen data.Classical generalization theorysuggests that to close the gap betweentrain and test performance,we should aim for a simple model.Simplicity can come in the formof a small number of dimensions.We explored this when discussing themonomial basis functions of linear modelsin :numref:`sec_model_selection`.Additionally, as we saw when discussing weight decay($L_2$ regularization) in :numref:`sec_weight_decay`,the (inverse) norm of the parameters alsorepresents a useful measure of simplicity.Another useful notion of simplicity is smoothness,i.e., that the function should not be sensitiveto small changes to its inputs.For instance, when we classify images,we would expect that adding some random noiseto the pixels should be mostly harmless.In 1995, Christopher Bishop formalizedthis idea when he proved that training with input noiseis equivalent to Tikhonov regularization :cite:`Bishop.1995`.This work drew a clear mathematical connectionbetween the requirement that a function be smooth (and thus simple),and the requirement that it be resilientto perturbations in the input.Then, in 2014, Srivastava et al. :cite:`Srivastava.Hinton.Krizhevsky.ea.2014`developed a clever idea for how to apply Bishop's ideato the internal layers of a network, too.Namely, they proposed to inject noiseinto each layer of the networkbefore calculating the subsequent layer during training.They realized that when traininga deep network with many layers,injecting noise enforces smoothness just on the input-output mapping.Their idea, called *dropout*, involvesinjecting noise while computingeach internal layer during forward propagation,and it has become a standard techniquefor training neural networks.The method is called *dropout* because we literally*drop out* some neurons during training.Throughout training, on each iteration,standard dropout consists of zeroing outsome fraction of the nodes in each layerbefore calculating the subsequent layer.To be clear, we are imposingour own narrative with the link to Bishop.The original paper on dropoutoffers intuition through a surprisinganalogy to sexual reproduction.The authors argue that neural network overfittingis characterized by a state in whicheach layer relies on a specifcpattern of activations in the previous layer,calling this condition *co-adaptation*.Dropout, they claim, breaks up co-adaptationjust as sexual reproduction is argued tobreak up co-adapted genes.The key challenge then is how to inject this noise.One idea is to inject the noise in an *unbiased* mannerso that the expected value of each layer---while fixingthe others---equals to the value it would have taken absent noise.In Bishop's work, he added Gaussian noiseto the inputs to a linear model.At each training iteration, he added noisesampled from a distribution with mean zero$\epsilon \sim \mathcal{N}(0,\sigma^2)$ to the input $\mathbf{x}$,yielding a perturbed point $\mathbf{x}' = \mathbf{x} + \epsilon$.In expectation, $E[\mathbf{x}'] = \mathbf{x}$.In standard dropout regularization,one debiases each layer by normalizingby the fraction of nodes that were retained (not dropped out).In other words,with *dropout probability* $p$,each intermediate activation $h$ is replaced bya random variable $h'$ as follows:$$\begin{aligned}h' =\begin{cases} 0 & \text{ with probability } p \\ \frac{h}{1-p} & \text{ otherwise}\end{cases}\end{aligned}$$By design, the expectation remains 
unchanged, i.e., $E[h'] = h$. Dropout in PracticeRecall the MLP with a hidden layer and 5 hidden unitsin :numref:`fig_mlp`.When we apply dropout to a hidden layer,zeroing out each hidden unit with probability $p$,the result can be viewed as a networkcontaining only a subset of the original neurons.In :numref:`fig_dropout2`, $h_2$ and $h_5$ are removed.Consequently, the calculation of the outputsno longer depends on $h_2$ or $h_5$and their respective gradient also vanisheswhen performing backpropagation.In this way, the calculation of the output layercannot be overly dependent on anyone element of $h_1, \ldots, h_5$.:label:`fig_dropout2`Typically, we disable dropout at test time.Given a trained model and a new example,we do not drop out any nodesand thus do not need to normalize.However, there are some exceptions:some researchers use dropout at test time as a heuristicfor estimating the *uncertainty* of neural network predictions:if the predictions agree across many different dropout masks,then we might say that the network is more confident. Implementation from ScratchTo implement the dropout function for a single layer,we must draw as many samplesfrom a Bernoulli (binary) random variableas our layer has dimensions,where the random variable takes value $1$ (keep)with probability $1-p$ and $0$ (drop) with probability $p$.One easy way to implement this is to first draw samplesfrom the uniform distribution $U[0, 1]$.Then we can keep those nodes for which the correspondingsample is greater than $p$, dropping the rest.In the following code, we (**implement a `dropout_layer` functionthat drops out the elements in the tensor input `X`with probability `dropout`**),rescaling the remainder as described above:dividing the survivors by `1.0-dropout`.
###Code
import tensorflow as tf
from d2l import tensorflow as d2l
def dropout_layer(X, dropout):
assert 0 <= dropout <= 1
# In this case, all elements are dropped out
if dropout == 1:
return tf.zeros_like(X)
# In this case, all elements are kept
if dropout == 0:
return X
mask = tf.random.uniform(
shape=tf.shape(X), minval=0, maxval=1) < 1 - dropout
return tf.cast(mask, dtype=tf.float32) * X / (1.0 - dropout)
###Output
_____no_output_____
###Markdown
We can [**test out the `dropout_layer` function on a few examples**].In the following lines of code,we pass our input `X` through the dropout operation,with probabilities 0, 0.5, and 1, respectively.
###Code
X = tf.reshape(tf.range(16, dtype=tf.float32), (2, 8))
print(X)
print(dropout_layer(X, 0.))
print(dropout_layer(X, 0.5))
print(dropout_layer(X, 1.))
###Output
tf.Tensor(
[[ 0. 1. 2. 3. 4. 5. 6. 7.]
[ 8. 9. 10. 11. 12. 13. 14. 15.]], shape=(2, 8), dtype=float32)
tf.Tensor(
[[ 0. 1. 2. 3. 4. 5. 6. 7.]
[ 8. 9. 10. 11. 12. 13. 14. 15.]], shape=(2, 8), dtype=float32)
tf.Tensor(
[[ 0. 0. 0. 0. 0. 10. 0. 14.]
[ 0. 0. 20. 0. 24. 26. 28. 30.]], shape=(2, 8), dtype=float32)
tf.Tensor(
[[0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0.]], shape=(2, 8), dtype=float32)
###Markdown
Defining Model ParametersAgain, we work with the Fashion-MNIST datasetintroduced in :numref:`sec_fashion_mnist`.We [**define an MLP withtwo hidden layers containing 256 units each.**]
###Code
num_outputs, num_hiddens1, num_hiddens2 = 10, 256, 256
###Output
_____no_output_____
###Markdown
Defining the ModelThe model below applies dropout to the outputof each hidden layer (following the activation function).We can set dropout probabilities for each layer separately.A common trend is to seta lower dropout probability closer to the input layer.Below we set it to 0.2 and 0.5 for the firstand second hidden layers, respectively.We ensure that dropout is only active during training.
###Code
dropout1, dropout2 = 0.2, 0.5
class Net(tf.keras.Model):
def __init__(self, num_outputs, num_hiddens1, num_hiddens2):
super().__init__()
self.input_layer = tf.keras.layers.Flatten()
self.hidden1 = tf.keras.layers.Dense(num_hiddens1, activation='relu')
self.hidden2 = tf.keras.layers.Dense(num_hiddens2, activation='relu')
self.output_layer = tf.keras.layers.Dense(num_outputs)
def call(self, inputs, training=None):
x = self.input_layer(inputs)
x = self.hidden1(x)
if training:
x = dropout_layer(x, dropout1)
x = self.hidden2(x)
if training:
x = dropout_layer(x, dropout2)
x = self.output_layer(x)
return x
net = Net(num_outputs, num_hiddens1, num_hiddens2)
###Output
_____no_output_____
###Markdown
[**Training and Testing**]This is similar to the training and testing of MLPs described previously.
###Code
num_epochs, lr, batch_size = 10, 0.5, 256
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
trainer = tf.keras.optimizers.SGD(learning_rate=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
###Output
_____no_output_____
###Markdown
[**Concise Implementation**]With high-level APIs, all we need to do is add a `Dropout` layerafter each fully-connected layer,passing in the dropout probabilityas the only argument to its constructor.During training, the `Dropout` layer will randomlydrop out outputs of the previous layer(or equivalently, the inputs to the subsequent layer)according to the specified dropout probability.When not in training mode,the `Dropout` layer simply passes the data through during testing.
###Code
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
# Add a dropout layer after the first fully connected layer
tf.keras.layers.Dropout(dropout1),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
# Add a dropout layer after the second fully connected layer
tf.keras.layers.Dropout(dropout2),
tf.keras.layers.Dense(10),
])
###Output
_____no_output_____
###Markdown
Next, we [**train and test the model**].
###Code
trainer = tf.keras.optimizers.SGD(learning_rate=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
###Output
_____no_output_____ |
Building_a_binary_image_classifier.ipynb | ###Markdown
Import Packages* numpy - package for scientific computing with Python* matplotlib.pyplot - plotting framework
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Import Keras models* Sequential - basic keras model composed of a linear stack of layers.* model_from_json - Loads a saved model from a json file
###Code
from keras.models import Sequential
from keras.models import model_from_json
###Output
_____no_output_____
###Markdown
Import Keras Layers* Conv2D- 2D convolution layer* MaxPooling2D - Max pooling operation for spatial data.* Flatten - Flattens the input. Does not affect the batch size.* Dense - regular densely-connected NN layer.
###Code
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
###Output
_____no_output_____
###Markdown
Import preprocessing packages* image - image preprocessing package* ImageDataGenerator - Generate batches of tensor image data with real-time data augmentation.
###Code
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
###Output
_____no_output_____
###Markdown
Build the classifier model* Build a CNN. A CNN has mostly four functions: * Convolution: Add the first layer, which is a convolutional layer. Set the number of filters to 32, the shape of each filter to 3x3, and the input shape and type of image to 50,50,3, i.e. the input is a 50x50 RGB image, with relu as the activation function. * Pooling: Add a pooling layer to reduce the total number of nodes for the upcoming layers. It takes a 2x2 matrix, thus giving minimum pixel loss and a precise region where the features are located. * Flatten : Flattens the pooled images. * Dense : add a fully connected layer to feed the images to the output layer. Set the number of nodes to 256, as it is common practice to use a power of 2, and use a rectifier function, relu, as the activation function.* Define the output layer. Set the number of units to 1, as this is a binary classifier, and sigmoid as the activation function* Compile the model. Set adam as the optimizer and binary_crossentropy as the loss function, as this is a binary classifier.
###Code
classifier = Sequential()
classifier.add(Conv2D(32, (3, 3), input_shape = (50, 50, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Conv2D(64, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))
classifier.add(Flatten())
classifier.add(Dense(units = 256, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
###Output
_____no_output_____
###Markdown
Fitting the CNN to the images* Improve the dataset using the ImageDataGenerator method which generate batches of tensor image data with real-time data augmentation. * rescale: rescaling factor. If None or 0, no rescaling is applied, otherwise the data is multiplied by the value provided. * shear_range: Shear Intensity * zoom_range: Range for random zoom. * horizontal_flip: Randomly flip inputs horizontally if true.* Define the training and test datasets using the flow_from_directory which takes the path to a directory, and generates batches of augmented/normalized data. * directory: path to the target directory. It should contain one subdirectory per class. * target_size: The dimensions to which all images found will be resized. * class_mode: one of "categorical", "binary", "sparse", "input" or None. Determines the type of label arrays that are returned * batch_size: size of the batches of data
###Code
train_datagen = ImageDataGenerator(rescale = 1./255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(rescale = 1./255)
training_set = train_datagen.flow_from_directory('dataset/training_set',
target_size = (50, 50),
batch_size = 32,
class_mode = 'binary')
test_set = test_datagen.flow_from_directory('dataset/test_set',
target_size = (50, 50),
batch_size = 32,
class_mode = 'binary')
###Output
_____no_output_____
###Markdown
Explore the dataset* Print a preprocessed image from the dataset
###Code
x,y = training_set.next()
for i in range(0,1):
random_image = x[i]
plt.imshow(random_image)
plt.show()
###Output
_____no_output_____
###Markdown
Define an early-stopping callback* Import EarlyStopping - a method to stop training when a monitored quantity has stopped improving.* Define a callback. Set monitor as val_acc, patience as 5 and mode as max, so that if val_acc does not improve over 5 epochs, the training process is terminated.
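The callback definition below is left commented out; if enabled, it would be passed to `fit_generator` roughly as in this sketch (shown only for illustration, reusing the names from the commented code and the training call further down):
```python
from keras.callbacks import EarlyStopping

acc_callback = [EarlyStopping(monitor='val_acc', patience=5, mode='max')]
# classifier.fit_generator(training_set, steps_per_epoch=8000, epochs=25,
#                          callbacks=acc_callback,
#                          validation_data=test_set, validation_steps=2000)
```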
###Code
# from keras.callbacks import EarlyStopping
# acc_callback = [EarlyStopping(monitor='val_acc', patience=5, mode='max')]
###Output
_____no_output_____
###Markdown
Fit the model* Invoke fit_generator to fit the model on data generated batch-by-batch by a Python generator. * steps_per_epoch holds the number of batches of samples to draw from the generator in each epoch; here it is set to 8000. * A single epoch is a single pass of training for the neural network; set epochs to 25. * callbacks: List of callbacks to apply during training. * validation_data: test data * validation_steps: Total number of steps (batches of samples) to yield from the validation_data generator before stopping at the end of every epoch. It should typically be equal to the number of samples of your validation dataset divided by the batch size.
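As a hedged aside, since steps are counted in batches, another common choice (an assumption about the intended epoch length, not what this notebook does) is to derive the step counts from the generators:
```python
steps_per_epoch = training_set.samples // training_set.batch_size
validation_steps = test_set.samples // test_set.batch_size
```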
###Code
classifier.fit_generator(training_set,
steps_per_epoch = 8000,
epochs = 25,
#callbacks = acc_callback,
validation_data = test_set,
validation_steps = 2000)
###Output
_____no_output_____
###Markdown
Evaluate the model* Load model from disk.* Preprocess and feed a random input image to the model for prediction.* Test the accuracy and loss using the evaluate_generator method.
###Code
json_file = open('saved_models/cnn_base_model.json', 'r')
loaded_classifier_json = json_file.read()
json_file.close()
loaded_classifier = model_from_json(loaded_classifier_json)
loaded_classifier.load_weights("saved_models/cnn_base_model.h5")
print("Loaded model from disk")
loaded_classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
test_image = image.load_img('dataset/single_prediction/cat_or_dog_1.jpg', target_size = (50, 50))
plt.imshow(test_image)
plt.show()
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = loaded_classifier.predict(test_image)
if result[0][0] == 1:
prediction = 'This is a dog'
else:
prediction = 'This is a cat'
print (prediction)
loss,accuracy = loaded_classifier.evaluate_generator(test_set)
print("Accuracy = {:.2f}".format(accuracy))
###Output
_____no_output_____
###Markdown
Visualization tools* plot_model : The keras.utils.vis_utils module provides utility functions to plot a Keras model (using graphviz).This will plot a graph of the model and save it to a file.* quiver_engine : Interactive deep convolutional networks features visualization.
###Code
from keras.utils import plot_model
plot_model(classifier, to_file='classifier.png')
from quiver_engine import server
server.launch(loaded_classifier)
###Output
_____no_output_____ |
phase2/2.4/Feature_Engineering.ipynb | ###Markdown
install
###Code
# Important library for many geopython libraries
!apt install gdal-bin python-gdal python3-gdal
# Install rtree - Geopandas requirment
!apt install python3-rtree
# Install Geopandas
!pip install git+git://github.com/geopandas/geopandas.git
# Install descartes - Geopandas requirment
!pip install descartes
# Install Folium for Geographic data visualization
!pip install folium
# Install plotlyExpress
!pip install plotly_express
!pip install Shapely
!pip install plotly_express
from shapely.geometry import Point,Polygon
import pandas as pd
import numpy as np
import geopandas as gpd
import matplotlib
import matplotlib.pyplot as plt
import folium
from folium import plugins
from folium.plugins import HeatMap
import plotly_express as px
from google.colab import drive
from IPython.display import display
import glob
import re
import plotly_express as px
import seaborn as sns
import os
from os.path import exists
drive.mount('/content/drive')
###Output
Mounted at /content/drive
###Markdown
Read
###Code
!ls '/content/drive/My Drive/2021 Route Prediction/Project-1/Source-Code/data/3.1_cluster_on_trips'
tripPath = '/content/drive/My Drive/2021 Route Prediction/Project-1/Source-Code/data/trips_and_stops_NE_2019'
districtPath = '/content/drive/My Drive/2021 Route Prediction/Project-1/Source-Code/data/District_NE.geojson'
district_NE = gpd.read_file(districtPath)
district_NE
###Output
_____no_output_____
###Markdown
Algorithm
###Code
def check_district(lon,lat):
data = zip(district_NE.geometry,district_NE.AMP_NAME_T)
point = Point(lon,lat)
for index in data:
if point.within(index[0]):
return index[1]
return False
check_district(102.151459,14.915597)
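# Hedged alternative (not used below): a vectorised point-in-district lookup with geopandas,
# avoiding a per-row call to check_district. Column names (lon, lat) are assumptions, and
# older geopandas versions take op='within' instead of predicate='within'.
# points = gpd.GeoDataFrame(trip, geometry=gpd.points_from_xy(trip.lon, trip.lat), crs=district_NE.crs)
# joined = gpd.sjoin(points, district_NE[['AMP_NAME_T', 'geometry']], how='left', predicate='within')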
def vehicle_group(df):
    # Map the raw unit_type codes onto three vehicle groups in a single pass:
    # 3 -> 0, {5, 8, 9} -> 1, {6, 7} -> 2; any other code is kept unchanged.
    mapping = {3: 0, 5: 1, 8: 1, 9: 1, 6: 2, 7: 2}
    df['veh_group'] = df['unit_type'].apply(lambda x: mapping.get(x, x))
def days_group(df):
df.time_stamp = df.time_stamp.astype("datetime64")
if df.loc[0].time_stamp.day_name() == 'Monday':
df['day_group'] = 0
if (df.loc[0].time_stamp.day_name() == 'Tuesday') or (df.loc[0].time_stamp.day_name() == 'Wednesday') or (df.loc[0].time_stamp.day_name() == 'Thursday'):
df['day_group'] = 1
if df.loc[0].time_stamp.day_name() == 'Friday':
df['day_group'] = 2
if df.loc[0].time_stamp.day_name() == 'Saturday' or (df.loc[0].time_stamp.day_name() == 'Sunday'):
df['day_group'] = 3
# df['day_group']=df['time_stamp'].apply(lambda x: 0 if x.day_name() == 'Monday' else x)
# df['day_group']=df['time_stamp'].apply(lambda x: 1 if x.day_name() == 'Tuesday' or x.day_name() == 'Wednesday' or x.day_name() == 'Thursday' else x)
# df['day_group']=df['time_stamp'].apply(lambda x: 2 if x.day_name() == 'Friday' else x)
# df['day_group']=df['time_stamp'].apply(lambda x: 3 if x.day_name() == 'Saturday' or x.day_name() == 'Sunday'else x)
###Output
_____no_output_____
###Markdown
Feature
###Code
Path = '/content/drive/My Drive/2021 Route Prediction/Project-1/Source-Code/data/Feature_trips'
days = range(1,32)
month = 3
for day in days:
export_filename = Path + f'/trips_2019_{month}_{day}.csv'
#### check file exists
if os.path.exists(export_filename):
print(f'trips_2019_{month}_{day}.csv already taken!!')
continue
else:
with open(export_filename, 'w') as fp:
pass
if os.path.exists(export_filename):
print(f'trips_2019_{month}_{day}.csv creating...')
pass
filenames = '/content/drive/My Drive/2021 Route Prediction/Project-1/Source-Code/data/3.1_cluster_on_trips/' + f'trips_2019_{month}_{day}.csv'
trip = pd.read_csv(filenames)
vehicle_group(trip)
days_group(trip)
trip.to_csv(export_filename, encoding='utf-8', index = False)
print(f'trips {month} {day} done')
###Output
trips_2019_3_1.csv creating...
trips 3 1 done
trips_2019_3_2.csv creating...
trips 3 2 done
trips_2019_3_3.csv creating...
trips 3 3 done
trips_2019_3_4.csv creating...
trips 3 4 done
trips_2019_3_5.csv creating...
trips 3 5 done
trips_2019_3_6.csv creating...
trips 3 6 done
trips_2019_3_7.csv creating...
trips 3 7 done
trips_2019_3_8.csv creating...
trips 3 8 done
trips_2019_3_9.csv creating...
trips 3 9 done
trips_2019_3_10.csv creating...
trips 3 10 done
trips_2019_3_11.csv creating...
trips 3 11 done
trips_2019_3_12.csv creating...
trips 3 12 done
trips_2019_3_13.csv creating...
trips 3 13 done
trips_2019_3_14.csv creating...
trips 3 14 done
trips_2019_3_15.csv creating...
trips 3 15 done
trips_2019_3_16.csv creating...
trips 3 16 done
trips_2019_3_17.csv creating...
trips 3 17 done
trips_2019_3_18.csv creating...
trips 3 18 done
trips_2019_3_19.csv creating...
trips 3 19 done
trips_2019_3_20.csv creating...
trips 3 20 done
trips_2019_3_21.csv creating...
trips 3 21 done
trips_2019_3_22.csv creating...
trips 3 22 done
trips_2019_3_23.csv creating...
trips 3 23 done
trips_2019_3_24.csv creating...
trips 3 24 done
trips_2019_3_25.csv creating...
trips 3 25 done
trips_2019_3_26.csv creating...
trips 3 26 done
trips_2019_3_27.csv creating...
trips 3 27 done
trips_2019_3_28.csv creating...
trips 3 28 done
trips_2019_3_29.csv creating...
trips 3 29 done
trips_2019_3_30.csv creating...
trips 3 30 done
trips_2019_3_31.csv creating...
trips 3 31 done
###Markdown
trip
###Code
day = 5
month = 2
filenames = '/content/drive/My Drive/2021 Route Prediction/Project-1/Source-Code/data/3.1_cluster_on_trips/' + f'trips_2019_{month}_{day}.csv'
trip = pd.read_csv(filenames)
###Output
_____no_output_____
###Markdown
vehicle type
###Code
trip
vehicle_group(trip)
trip
days_group(trip)
trip
###Output
_____no_output_____ |
notebooks/Ideation LIDAR point data.ipynb | ###Markdown
Intro. Notebook for analyzing spatial clusters and ideation for extracting the data.
###Code
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv("data\extracted\dom1l-fp_32349_5660_1_nw.csv")
len(df)
df.head()
df.z.describe()
###Output
_____no_output_____
###Markdown
Get only high-altitude points. Assumption: 40 m above sea level + 5 m for buildings.
###Code
# filter out everything under 45m
df_filter = df[df.z > 45]
len(df_filter)
df_filter.z = df_filter.z-45
f, ax = plt.subplots(figsize=(15,15), facecolor="w")
df_filter.sample(200000)[["x", "y", "z"]].plot.scatter(x = "x", y = "y", c="z", s=0.1, ax=ax, cmap="autumn")
ax.set_axis_off()
ax.set_title("Aerial Laser Scanning (height) - Monheim am Rhein\ndom1l-fp_32349_5660_1_nw");
###Output
_____no_output_____ |
PN.py/03.PN digits type.ipynb | ###Markdown
Floating-point operations *(this chapter is not important on first contact with Polynomial Numbers)*. The `PolyNum` class is ready to use different floating-point formats for PN digits. See `PolyNumConf.py`. To keep all float calculations in a homogeneous type, for example `mpf` from mpmath[1](#mpmath10), use the same type for scalars by initializing all scalars and all PNs from strings. Recommended starting precision: mpmath -> mp.prec = 128 (38 significant decimal digits), max_N = 64 or 128 (number of significant PN digits). 1: Fredrik Johansson and others. [mpmath](http://mpmath.org/): a Python library for arbitrary-precision floating-point arithmetic (version 1.0.0), 2017. http://mpmath.org/.
###Code
from PNlib import PolyNumConf
PolyNumConf.max_N=80 # PN significant digits number (restart jupyter kernel on change conf.)
PolyNumConf.FLOAT_TYPE = 'FLOAT-MPMATH-MPF'
PolyNumConf.MPMATH_PREC = 150
#PolyNumConf const can be changed efficiently before loading any other PNlib files
%matplotlib inline
from PNlib.digitPN import flt
#example: h = flt('0.1') - Use it for all scalars.
print(repr(flt('0.1'))) # ...9982')
from PNlib.PolyNum import PolyNum
print(repr(PolyNum().mantissa[0]))
###Output
mpf('0.0')
###Markdown
Example
###Code
# Z-transform live example of homogeneous type of float calculations (to set in PolyNumConf.py)
h = flt('0.01') # sampling period
p = 1/h * PolyNum('const:(~2~,-4~4~-4~4~...~)') #intiger can be mixed with mpf float
E_0 = flt('10')
E = E_0/2 * PolyNum('const:(~1~,2~2~2~2~...~)') # flt('10')/2, but not 10/2
R, L, C = flt('20'), flt('2'), flt('1e-3')
Z_C = 1 / (p*C)
Z = R + p*L + Z_C
U_C = E * Z_C / Z
# plt.plot(E, 'r--',U_C, 'b-')
import matplotlib.pyplot as plt
plt.plot(E, 'r--',U_C, 'b-')
plt.grid(b=True)
plt.text(0.6,10.4,"$ e(t) $", fontsize=16, color='red')
plt.text(21,12.2,"$ u_C(t) $", fontsize=16, color='blue')
plt.show();
print(repr(U_C.mantissa[0])) #to see float type
###Output
mpf('0.058823529411764705882352941176470588235294117724')
|
examples/06_example_clustering/Clustering_by_quality.ipynb | ###Markdown
Data analysis. Perform clustering
###Code
freq, nd, sample_list = PaSDqc.extra_tools.mk_ndarray("../data/cluster/")
link = PaSDqc.extra_tools.hclust(nd, method='ward')
###Output
_____no_output_____
###Markdown
Curve fitting
###Code
lp_bad = fit_curve(freq, nd[3:])
lp_good = fit_curve(freq, nd[:3])
###Output
_____no_output_____
###Markdown
Amplicon size density estimation
###Code
freq_eval = np.arange(3, 5.5, 0.01)
l_pdf_good = gaussian2(lp_good, freq_eval)
freq_eval2 = np.arange(2.5, 5.5, 0.01)
l_pdf_bad = gaussian2(lp_bad, freq_eval2)
###Output
_____no_output_____
###Markdown
Plot
###Code
sns.set_context('poster')
sns.set_style("ticks", {'ytick.minor.size': 0.0, 'xtick.minor.size': 0.0})
sns.set_palette('colorblind')
cp = sns.color_palette()
fig, ax = plt.subplots(nrows=3, ncols=1, figsize=(10, 12))
for avg, s, c in zip(nd[:3], sample_list[:3], sns.color_palette("Greens_r", 8)):
ax[0].plot(1/freq, 10*np.log10(avg/norm), label=s, color=c)
for pdf, s ,c in zip(l_pdf_good, sample_list[:3], sns.color_palette("Greens_r", 8)):
ax[1].plot(10**freq_eval, pdf, color=c, label=s)
for avg, s, c in zip(nd[3:], sample_list[3:], cp[3:]):
ax[0].plot(1/freq, 10*np.log10(avg/norm), label=s, color=c)
cp = sns.color_palette()
for pdf, s, c in zip(l_pdf_bad, sample_list[3:], cp[3:]):
ax[1].plot(10**freq_eval2, pdf, label=s, color=c)
ax[1].set_xscale('log')
ax[1].set_xlabel('Amplicon size (log)')
ax[1].set_ylabel('Density')
ax[1].set_xticklabels(["0", "100 bp", "1 kb", "10 kb", "100 kb", "1 mb"])
ax[0].set_xlabel('Genomic scale')
ax[0].legend(bbox_to_anchor=(0., 0.8, 0.55, .102), loc=(0, 0), ncol=2, mode="expand", borderaxespad=0.)
ax[0].set_xscale('log')
ax[0].set_xticklabels(["0", "100 bp", "1 kb", "10 kb", "100 kb", "1 mb"])
ax[0].set_ylabel("PSD (dB)")
hc.dendrogram(link, labels=sample_list, ax=ax[2], leaf_font_size=16, orientation='right', distance_sort='ascending', color_threshold=0.1)
ax[2].set_xlabel('Symmetric KL divergence')
sns.despine(ax=ax[0])
sns.despine(ax=ax[1])
sns.despine(ax=ax[2])
fig.text(0.01, 0.98, "A", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
fig.text(0.01, 0.65, "B", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
fig.text(0.01, 0.33, "C", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
plt.tight_layout(h_pad=0.5)
###Output
_____no_output_____ |
Survival of Haberman's Cancer Patients after surgery.ipynb | ###Markdown
1. Survival of Haberman's Cancer Patients - EDA. Details: The dataset contains cases from a study conducted between 1958 and 1970 at the University of Chicago's Billings Hospital on the survival of patients who had undergone surgery for breast cancer. Attributes: 1. age: age of the patient at the time of operation (numerical); 2. year: patient's year of operation (year - 1900, numerical); 3. nodes: number of positive axillary nodes detected (numerical); 4. status: survival status (class attribute), 1 = the patient survived 5 years or longer, 2 = the patient died within 5 years. Objective: given the age, year and nodes, classify/predict the survival of a patient who had undergone surgery for breast cancer. **Importing packages and libraries**
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import os
print(os.listdir('../input')) #checking input dataset
###Output
['haberman.csv']
###Markdown
**Loading Dataset**
###Code
haberman_df = pd.read_csv('../input/haberman.csv/haberman.csv')
###Output
_____no_output_____
###Markdown
**Understanding the data** The top 5 rows of data set can be seen by the head() function.
###Code
haberman_df.head()
print (haberman_df.shape) #shows datapoints and features
print (haberman_df.columns) #displays column names in our dataset
haberman_df["status"].value_counts()
haberman_df.dtypes
###Output
_____no_output_____
###Markdown
**Observations:** 1. There are 306 data points and 4 features. 2. The Haberman dataset is imbalanced, as the two classes have different numbers of data points (225 patients survived 5 years or longer, 81 died within 5 years). 3. The datatype of the survival status is an integer, which is not meaningful; it has to be converted to a categorical datatype.
###Code
print(list(haberman_df['status'].unique())) # print the unique values of the target column(status)
###Output
[1, 2]
###Markdown
There are two unique values, '1' and '2' in the status column. So the value '1' can be mapped to ‘YES’ which means the patient survived 5 years or longer and the value '2' can be mapped to ‘NO’ which means the patient died within 5 years.
###Code
haberman_df['status'] = haberman_df['status'].map({1:'YES', 2:'NO'}) #mapping the value '1' to 'YES'and value '2' to 'NO'
haberman_df.head() #printing the first 5 records from the dataset.
###Output
_____no_output_____
###Markdown
Scatter plots**1-D scatter plot**
###Code
one = haberman_df.loc[haberman_df["status"] == "YES"]
two = haberman_df.loc[haberman_df["status"] == "NO"]
plt.plot(one["age"], np.zeros_like(one["age"]), 'o',label='YES')
plt.plot(two["age"], np.zeros_like(two["age"]), 'o',label='NO')
plt.title("1-D scatter plot for age")
plt.xlabel("age")
plt.legend(title="survival_status")
plt.show()
###Output
_____no_output_____
###Markdown
**Observation:**1. Since a lot of overlapping is seen here, we can't infer much from this 1-D scatter plot **2-D Scatter Plot**
###Code
sns.set_style("whitegrid");
sns.FacetGrid(haberman_df, hue="status", height=6) \
.map(plt.scatter, "age", "nodes") \
.add_legend();
plt.show();
###Output
_____no_output_____
###Markdown
**Observations:** 1. Separating the patients who survived from those who died is hard, as the classes overlap considerably (they are not linearly separable).
###Code
sns.set_style("whitegrid")
sns.pairplot(haberman_df, diag_kind="kde", hue="status", height=4)
plt.show()
###Output
_____no_output_____
###Markdown
**Observations:** 1. Not very informative, as there is too much overlap; classification is not possible. 2. The plot between year and nodes is comparatively better. Univariate Analysis **PDF (Probability Density Function)**
###Code
sns.FacetGrid(haberman_df, hue="status", height=5) \
.map(sns.distplot, "age") \
.add_legend();
plt.title("PDF of age")
plt.show();
###Output
_____no_output_____
###Markdown
**Observations:** The PDF of patient age shows major overlap. This tells us that the survival chance of a patient is largely independent of age, but we can roughly tell that patients in the 30-40 age group are more likely to survive.
###Code
sns.FacetGrid(haberman_df, hue="status", height=5) \
.map(sns.distplot, "year") \
.add_legend();
plt.title("PDF of year")
plt.show();
###Output
_____no_output_____
###Markdown
**Observations:** Major overlap is seen here as well; the year of operation alone cannot be used as a parameter to determine the patient's survival chance.
###Code
sns.FacetGrid(haberman_df, hue="status", height=5) \
.map(sns.distplot, "nodes") \
.add_legend();
plt.title("PDF of nodes")
plt.show();
###Output
_____no_output_____
###Markdown
**Observations:** 1. Overlap is observed, hence it is difficult to separate the two classes. 2. Still, we can vaguely say that patients with 0 or 1 nodes are more likely to survive.
###Code
# the patient survived 5 years or longer
counts, bin_edges = np.histogram(one['nodes'], bins=10, density = True)
pdf1 = counts/(sum(counts))
print(pdf1);
print(bin_edges)
cdf1 = np.cumsum(pdf1)
plt.plot(bin_edges[1:],pdf1)
plt.plot(bin_edges[1:], cdf1)
# the patient dies within 5 years
counts, bin_edges = np.histogram(two['nodes'], bins=10, density = True)
pdf2 = counts/(sum(counts))
print(pdf2)
print(bin_edges)
cdf2 = np.cumsum(pdf2)
plt.plot(bin_edges[1:],pdf2)
plt.plot(bin_edges[1:], cdf2)
label = ["pdf of patient_survived", "cdf of patient_survived", "pdf of patient_died", "cdf of patient_died"]
plt.legend(label)
plt.xlabel("positive_lymph_node")
plt.title("pdf and cdf for positive_lymph_node")
plt.show();
###Output
[0.83555556 0.08 0.02222222 0.02666667 0.01777778 0.00444444
0.00888889 0. 0. 0.00444444]
[ 0. 4.6 9.2 13.8 18.4 23. 27.6 32.2 36.8 41.4 46. ]
[0.5625 0.15 0.1375 0.05 0.075 0. 0.0125 0. 0. 0.0125]
[ 0. 5.2 10.4 15.6 20.8 26. 31.2 36.4 41.6 46.8 52. ]
###Markdown
**Observations:** 1. About 84% of the patients who survived have nodes <= 4. 2. About 56% of the patients who died have nodes <= 4.5.
###Code
# the patient survived 5 years or longer
counts, bin_edges = np.histogram(one['age'], bins=10, density = True)
pdf1 = counts/(sum(counts))
print(pdf1);
print(bin_edges)
cdf1 = np.cumsum(pdf1)
plt.plot(bin_edges[1:],pdf1)
plt.plot(bin_edges[1:], cdf1)
# the patient dies within 5 years
counts, bin_edges = np.histogram(two['age'], bins=10, density = True)
pdf2 = counts/(sum(counts))
print(pdf2)
print(bin_edges)
cdf2 = np.cumsum(pdf2)
plt.plot(bin_edges[1:],pdf2)
plt.plot(bin_edges[1:], cdf2)
label = ["pdf of patient_survived", "cdf of patient_survived", "pdf of patient_died", "cdf of patient_died"]
plt.legend(label)
plt.xlabel("age")
plt.title("pdf and cdf for age")
plt.show();
###Output
[0.05333333 0.10666667 0.12444444 0.09333333 0.16444444 0.16444444
0.09333333 0.11111111 0.06222222 0.02666667]
[30. 34.7 39.4 44.1 48.8 53.5 58.2 62.9 67.6 72.3 77. ]
[0.0375 0.075 0.2125 0.1125 0.2 0.1 0.0875 0.1125 0.0375 0.025 ]
[34. 38.4 42.8 47.2 51.6 56. 60.4 64.8 69.2 73.6 78. ]
###Markdown
**Observations:** 1. About 20% of the patients who survived were younger than 41.
###Code
sns.boxplot(x='status',y='age', data=haberman_df)
plt.title("Box_plot for age and survival status")
plt.show()
sns.boxplot(x='status',y='year', data=haberman_df)
plt.title("Box_plot for year and survival status")
plt.show()
sns.boxplot(x='status',y='nodes', data=haberman_df)
plt.title("Box_plot for nodes and survival status")
plt.show()
###Output
_____no_output_____
###Markdown
**Violin Plots**
###Code
sns.violinplot(x="status", y="age", data=haberman_df, size=8)
plt.title("Violin plot for age and survival status")
plt.show()
sns.violinplot(x="status", y="year", data=haberman_df, size=8)
plt.title("Violin plot for year and survival status")
plt.show()
sns.violinplot(x="status", y="nodes", data=haberman_df, size=8)
plt.title("Violin plot for nodes and survival status")
plt.show()
###Output
_____no_output_____
###Markdown
**Observations:** 1. More of the patients who survived had 0 to 1 positive axillary nodes, but a small number of patients with no nodes still died within 5 years of the operation; thus the absence of positive axillary nodes does not guarantee survival. 2. More of the patients aged 50-60 survived, yet a large share of the patients who died also lie in the 45-55 age range; thus age is not an important feature for determining a person's survival chance. **Contour Plot**
###Code
sns.jointplot(x="age", y="year", data=haberman_df, kind="kde");
plt.show();
###Output
_____no_output_____
###Markdown
**Observation:** 1. The years 1960 to 1964 saw more operations on patients aged between 45 and 55. Conclusions: 1. The Haberman dataset is not linearly separable since the data points overlap too much, which makes the classes difficult to separate. 2. The dataset is imbalanced, containing an unequal number of data points for each class, so it is difficult to classify a patient's survival chance from the given features. 3. The number of positive axillary nodes gave us some insight into the survival chance: zero or very few nodes indicated a higher chance of survival, but the absence of nodes still cannot guarantee survival. 2. Survival of Haberman's Cancer Patients - MODELLING
###Code
# check how imbalanced the dataset actually is
from collections import Counter
# summarize the class distribution
target = haberman_df['status'].values
counter = Counter(target)
for k,v in counter.items():
per = v / len(target) * 100
print('Class=%s, Count=%s, Percentage=%.3f%%' % (k, v, per))
# retrieve numpy array
haberman_df = haberman_df.values
# split into input and output elements
X, y = haberman_df[:, :-1], haberman_df[:, -1]
# label encode the target variable to have the classes 0 and 1
from sklearn.preprocessing import LabelEncoder
y = LabelEncoder().fit_transform(y)
from sklearn.metrics import brier_score_loss
from numpy import mean
from numpy import std
# calculate brier skill score (BSS)
def brier_skill_score(y_true, y_prob):
    # reference: a no-skill forecast that always predicts the minority-class rate (81/306 ≈ 0.26471)
    ref_probs = [0.26471 for _ in range(len(y_true))]
    bs_ref = brier_score_loss(y_true, ref_probs)
    # calculate model brier score
    bs_model = brier_score_loss(y_true, y_prob)
    # calculate skill score: 1.0 is perfect, 0.0 is no better than the reference
    return 1.0 - (bs_model / bs_ref)
from sklearn.metrics import make_scorer
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
# evaluate a model
def evaluate_model(X, y, model):
# define evaluation procedure
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
# define the model evaluation metric
metric = make_scorer(brier_skill_score, needs_proba=True)
# evaluate model
scores = cross_val_score(model, X, y, scoring=metric, cv=cv, n_jobs=-1)
return scores
# summarize the loaded dataset
print(X.shape, y.shape, Counter(y))
from sklearn.dummy import DummyClassifier
# define the reference model
model = DummyClassifier(strategy='prior')
# evaluate the model
scores = evaluate_model(X, y, model)
print('Mean BSS: %.3f (%.3f)' % (mean(scores), std(scores)))
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.gaussian_process import GaussianProcessClassifier
# define models to test
def get_models():
models, names = list(), list()
# LR
models.append(LogisticRegression(solver='lbfgs'))
names.append('LR')
# LDA
models.append(LinearDiscriminantAnalysis())
names.append('LDA')
# QDA
models.append(QuadraticDiscriminantAnalysis())
names.append('QDA')
# GNB
models.append(GaussianNB())
names.append('GNB')
# MNB
models.append(MultinomialNB())
names.append('MNB')
# GPC
models.append(GaussianProcessClassifier())
names.append('GPC')
return models, names
# define models
models, names = get_models()
results = list()
# evaluate each model
for i in range(len(models)):
# evaluate the model and store results
scores = evaluate_model(X, y, models[i])
results.append(scores)
# summarize and store
print('>%s %.3f (%.3f)' % (names[i], mean(scores), std(scores)))
# plot the results
plt.boxplot(results, labels=names, showmeans=True)
plt.show()
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
# define models to test
def get_models():
models, names = list(), list()
# LR
models.append(LogisticRegression(solver='lbfgs'))
names.append('LR')
# LDA
models.append(LinearDiscriminantAnalysis())
names.append('LDA')
# QDA
models.append(QuadraticDiscriminantAnalysis())
names.append('QDA')
# GNB
models.append(GaussianNB())
names.append('GNB')
# GPC
models.append(GaussianProcessClassifier())
names.append('GPC')
return models, names
# define models
models, names = get_models()
results = list()
# evaluate each model
for i in range(len(models)):
# create a pipeline
pipeline = Pipeline(steps=[('t', StandardScaler()),('m',models[i])])
# evaluate the model and store results
scores = evaluate_model(X, y, pipeline)
results.append(scores)
# summarize and store
print('>%s %.3f (%.3f)' % (names[i], mean(scores), std(scores)))
# plot the results
plt.boxplot(results, labels=names, showmeans=True)
plt.show()
###Output
>LR 0.566 (0.053)
>LDA 0.566 (0.061)
>QDA 0.549 (0.091)
>GNB 0.544 (0.090)
>GPC 0.579 (0.056)
###Markdown
Model Evaluation With Power Transform. **Power transforms, such as the Box-Cox and Yeo-Johnson transforms, are designed to change the distribution to be more Gaussian. We can use the PowerTransformer scikit-learn class to perform the Yeo-Johnson transform and automatically determine the best parameters to apply based on the dataset. Importantly, this transformer will also standardize the dataset as part of the transform.** **We have zero values in our dataset, therefore we will scale the dataset prior to the power transform using a MinMaxScaler. Again, we can use this transform in a Pipeline to ensure it is fit on the training dataset and applied to the train and test datasets correctly, without data leakage.**
###Code
from sklearn.preprocessing import PowerTransformer
from sklearn.preprocessing import MinMaxScaler
# define models to test
def get_models():
models, names = list(), list()
# LR
models.append(LogisticRegression(solver='lbfgs'))
names.append('LR')
# LDA
models.append(LinearDiscriminantAnalysis())
names.append('LDA')
# GPC
models.append(GaussianProcessClassifier())
names.append('GPC')
return models, names
# define models
models, names = get_models()
results = list()
# evaluate each model
for i in range(len(models)):
# create a pipeline
steps = [('t1', MinMaxScaler()), ('t2', PowerTransformer()),('m',models[i])]
pipeline = Pipeline(steps=steps)
# evaluate the model and store results
scores = evaluate_model(X, y, pipeline)
results.append(scores)
# summarize and store
print('>%s %.3f (%.3f)' % (names[i], mean(scores), std(scores)))
# plot the results
plt.boxplot(results, labels=names, showmeans=True)
plt.show()
###Output
>LR 0.588 (0.058)
>LDA 0.587 (0.067)
>GPC 0.581 (0.058)
###Markdown
**We can see a further lift in model skill for the three models that were evaluated. We can see that the LR appears to have out-performed the other two methods** **Make Prediction on New Data** We will select the Logistic Regression model with a power transform on the input data as our final model. We can define and fit this model on the entire training dataset
###Code
# fit the model
steps = [('t1', MinMaxScaler()),('t2', PowerTransformer()),('m',LogisticRegression(solver='lbfgs'))]
model = Pipeline(steps=steps)
model.fit(X, y)
# some survival cases
print('Survival Cases:')
data = [[31,59,2], [31,65,4], [34,60,1]]
for row in data:
# make prediction
yhat = model.predict_proba([row])
# get percentage of survival
p_survive = yhat[0, 0] * 100
# summarize
print('>data=%s, Survival=%.3f%%' % (row, p_survive))
# some non-survival cases
print('Non-Survival Cases:')
data = [[44,64,6], [34,66,9], [38,69,21]]
for row in data:
# make prediction
yhat = model.predict_proba([row])
# get percentage of survival
p_survive = yhat[0, 0] * 100
# summarize
print('>data=%s, Survival=%.3f%%' % (row, p_survive))
###Output
Survival Cases:
>data=[31, 59, 2], Survival=17.021%
>data=[31, 65, 4], Survival=24.197%
>data=[34, 60, 1], Survival=13.686%
Non-Survival Cases:
>data=[44, 64, 6], Survival=37.409%
>data=[34, 66, 9], Survival=38.234%
>data=[38, 69, 21], Survival=48.403%
|
doc/source/.ipynb_checkpoints/Example_tasks-checkpoints.ipynb | ###Markdown
Example_autoclean
###Code
import datacleanbot.dataclean as dc
import openml as oml
import numpy as np
# acquire data
data = oml.datasets.get_dataset(51)
X, y, categorical_indicator, features = data.get_data(target=data.default_target_attribute, dataset_format='array')
Xy = np.concatenate((X,y.reshape((y.shape[0],1))), axis=1)
# input openml dataset id
Xy = dc.autoclean(Xy, data.name, features)
###Output
_____no_output_____ |
2-pandas-examples/basic-pandas.ipynb | ###Markdown
Pandas basics. This part mainly draws on the [pandas Chinese site](https://www.pypandas.cn/docs/) and the [official user guide](https://pandas.pydata.org/pandas-docs/stable/index.html), as well as the [pandas tutorial](https://www.yiibai.com/pandas). Pandas is an open-source library that provides high-performance, easy-to-use **data structures** and **data analysis tools** for the Python programming language. Install pandas in the hydrus environment: ```Shell conda install -c conda-forge pandas ``` Pandas is suited to many different kinds of data: tabular data with columns of different types; ordered and unordered time-series data; matrix data with row and column labels, of the same or different types; and any form of observational or statistical data, which does not need to be labeled before being placed into a pandas data structure. Its operations revolve around two data structures, Series and DataFrame, covered next. Series (1-D) and DataFrame (2-D) handle the vast majority of typical use cases in finance, statistics, social science, engineering and other fields. For R users, DataFrame provides everything R's data.frame provides, and more. pandas is built on top of NumPy and integrates with many third-party libraries, making for a good scientific-computing environment. Classic problems pandas handles include, but are not limited to: handling missing data, represented as NaN; size mutability — columns can be inserted into and deleted from DataFrame and higher-dimensional objects; automatic and explicit data alignment to a set of labels; flexible grouping for easy aggregation and transformation of data; easy conversion of ragged, differently-labeled Python and NumPy data structures into DataFrame objects; easy label-based slicing, indexing and subsetting of large data sets; easy merging and joining of data sets; flexible reshaping and pivoting of data sets; hierarchical labeling; loading data from all kinds of files, including HDF5; and time-series-specific functionality such as date-range generation and frequency conversion, moving-window statistics, moving-window linear regression, and date shifting and lagging. Throughout the data-processing workflow — transforming and cleaning data, analysis and modeling, organizing the results into a presentable form, visualization and output — pandas does the job well. The notes below first cover pandas' basic data structures and then collect common data-processing operations as they come up. Pandas basic data structures. The best way to understand the pandas data structures is as containers for lower-dimensional data: a Series is a container of scalars and a DataFrame is a container of Series. Objects can be inserted into and removed from these containers in a dict-like way. To describe the dimensions of its data structures more intuitively, pandas prefers index and columns over axis 0 and axis 1. For example: ``` python for col in df.columns: series = df[col] # do something with series ``` All pandas data structures are value-mutable, but not always size-mutable: the length of a Series cannot be changed, whereas columns can be inserted into a DataFrame. The vast majority of methods produce new objects and leave the input data untouched; in general, pandas favors immutability. Essentially, values are backed by NumPy arrays, with columns and index naming the columns and rows.
###Code
import numpy as np
import pandas as pd
# 初始化Series
k = pd.Series({'a':np.random.randint(10,size=5),'b':0})
k
# 检索series的项
print(k.iloc[1:])
print(k["b"])
# 另一种常用的初始化Series的方式
s = pd.Series([1, 2, 3, 4], index=['A', 'B', 'C', 'D'])
s
# 创建一个空的 DataFrame
df_empty = pd.DataFrame(columns=['A', 'B', 'C', 'D'])
print(df_empty)
print("列名:",df_empty.columns.values)
# 创建一个只有一行的,不能使用下面的方式
# df_1r = pd.DataFrame(np.array([[1],[2],[3],[4]]),columns=['A', 'B', 'C', 'D'])
# 需要使用append:先定一个空的,再来添加
df_1r = df_empty
df_1r.loc[0] = [1,2,3,4]
print("有一行:\n",df_1r)
# 创建有行有列的
df = pd.DataFrame({"a": range(3), "b": range(3), "c": range(3)})
print(df)
# 使用numpy数组,命名行列进行初始化
df2 = pd.DataFrame(np.arange(16).reshape((4,4)), columns=['one', 'two', 'three', 'four'], index=['a', 'b', 'c','d'])
print(df2)
# 带时间序列的dataframe的初始化
time_range = pd.date_range('2011-1-1', periods=4, freq='D')
df2.index = time_range
print(df2)
print('列:\n',df2.columns)
print('列名称:\n',df2.columns.values)
time_range = pd.date_range('2011-1-1', '2011-1-4', freq='D')
df2.columns = time_range
print(df2)
###Output
Empty DataFrame
Columns: [A, B, C, D]
Index: []
列名: ['A' 'B' 'C' 'D']
有一行:
A B C D
0 1 2 3 4
a b c
0 0 0 0
1 1 1 1
2 2 2 2
one two three four
a 0 1 2 3
b 4 5 6 7
c 8 9 10 11
d 12 13 14 15
one two three four
2011-01-01 0 1 2 3
2011-01-02 4 5 6 7
2011-01-03 8 9 10 11
2011-01-04 12 13 14 15
列:
Index(['one', 'two', 'three', 'four'], dtype='object')
列名称:
['one' 'two' 'three' 'four']
2011-01-01 2011-01-02 2011-01-03 2011-01-04
2011-01-01 0 1 2 3
2011-01-02 4 5 6 7
2011-01-03 8 9 10 11
2011-01-04 12 13 14 15
###Markdown
You can initialize an empty DataFrame that only has column names and then assign new values to its columns, as follows:
###Code
import pandas as pd
df0 = pd.DataFrame(columns=['A', 'B', 'C', 'D'])
df0["A"] = np.arange(5)
df0
###Output
_____no_output_____
###Markdown
To assign values to all columns in a batch, you can do this:
###Code
for column in df0.columns:
df0[column]=np.arange(5)
df0
###Output
_____no_output_____
###Markdown
Sometimes you need to specify the data types of certain columns; this can be done as follows:
###Code
a = [['a', '1.2', '4.2'], ['b', '70', '0.03'], ['x', '5', '0']]
df = pd.DataFrame(a, columns=['one', 'two', 'three'])
df
df.dtypes
df[['two', 'three']] = df[['two', 'three']].astype(float)
df.dtypes
###Output
_____no_output_____
###Markdown
pandas and NumPy data structures can be converted back and forth easily. Going from pandas to NumPy is simple: accessing .values returns an ndarray. The other direction is just as easy: pass the array to the pandas constructor, which amounts to a direct initialization.
###Code
# pandas dataframe与numpy array之间的转换
df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]})
print(df)
# df转换为ndarray
array_from_df=df.values
print(array_from_df)
print(np.array(df))
# 读取某一列数据,两种方式均可
print(df['A'])
print(df.loc[:, 'A'])
# 一列数据转换为ndarray
print(np.array(df['A']))
# Series转ndarray
# Creating the Series
sr = pd.Series(['New York', 'Chicago', 'Toronto', 'Lisbon', 'Rio'])
# Create the Index
index_ = ['City 1', 'City 2', 'City 3', 'City 4', 'City 5']
# set the index
sr.index = index_
# return numpy array representation
result = sr.values
# Print the result
print("series转为ndarray:")
print(result)
# Print the series
print(sr)
###Output
A B C
0 1 4 7
1 2 5 8
2 3 6 9
[[1 4 7]
[2 5 8]
[3 6 9]]
[[1 4 7]
[2 5 8]
[3 6 9]]
0 1
1 2
2 3
Name: A, dtype: int64
0 1
1 2
2 3
Name: A, dtype: int64
[1 2 3]
series转为ndarray:
['New York' 'Chicago' 'Toronto' 'Lisbon' 'Rio']
City 1 New York
City 2 Chicago
City 3 Toronto
City 4 Lisbon
City 5 Rio
dtype: object
###Markdown
IO tools. Very often, a DataFrame starts out by reading a file. Both csv and txt files can be read with pandas' read_csv() function.
###Code
from io import StringIO, BytesIO
import pandas as pd
data = ('col1,col2,col3\n'
'a,b,01\n'
'a,b,02\n'
'c,d,03')
print("原数据:\n",data)
print("pandas读取默认设置:\n",pd.read_csv(StringIO(data)))
# 读取数据的时候即进行slice
print("pandas读取前两列:\n",pd.read_csv(StringIO(data), usecols=range(0,2)))
print("pandas读取某两列:\n",pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ['COL1', 'COL3']))
print("pandas读取某些行:\n",pd.read_csv(StringIO(data), skiprows=lambda x: x % 2 != 0))
print("读取数据之后,重新命名列名:\n")
df_example=pd.read_csv(StringIO(data))
df_example.columns = ['A','B','C']
df_example
###Output
原数据:
col1,col2,col3
a,b,01
a,b,02
c,d,03
pandas读取默认设置:
col1 col2 col3
0 a b 1
1 a b 2
2 c d 3
pandas读取前两列:
col1 col2
0 a b
1 a b
2 c d
pandas读取某两列:
col1 col3
0 a 1
1 a 2
2 c 3
pandas读取某些行:
col1 col2 col3
0 a b 2
读取数据之后,重新命名列名:
###Markdown
In addition, you sometimes need to designate a column as the DataFrame index, e.g. using the first column as the index, which can be done like this:
###Code
data = ('col1,col2,col3\n'
'a,b,01\n'
'a,b,02\n'
'c,d,03')
print("原数据:\n",data)
print("pandas读取默认设置:\n",pd.read_csv(StringIO(data), index_col=0))
###Output
原数据:
col1,col2,col3
a,b,01
a,b,02
c,d,03
pandas读取默认设置:
col2 col3
col1
a b 1
a b 2
c d 3
###Markdown
In our field, a lot of data uses year-month, year-dekad or year-day as rows and columns. For computation this has to be reshaped so that time becomes a single column or row, i.e. the arrays from several columns or rows are concatenated, turning data like [1 2; 2 3] into [1 2 2 3]; pandas' concat is similar to numpy's concatenate. Finally, the index is replaced by dates, giving a time-series Series.
###Code
# if you hit a "UnicodeDecodeError: 'utf8' codec can't decode byte..." error, re-save the csv from Notepad with the encoding set to 'UTF-8'
dataset = pd.read_csv('Sheet1.csv')
# header picks the row whose values become the column names; 1 means the second row, and None means the data starts from the first row
dataset = pd.read_csv('Sheet1.csv', header=1)
# every data source has its own layout, so handle it case by case; here the first column of dataset can be dropped to get all the data — use drop with axis=1 to delete a column (without it, a row is deleted)
dataset1 = dataset.drop(['旬平均'], axis=1)
df = pd.DataFrame({"a": range(3), "b": range(3), "c": range(3)})
# iloc returns the data as a Series
se = df.iloc[:, 0]
# number of columns of the dataframe: df.shape[1]
for i in range(1, df.shape[1]):
se_temp = df.iloc[:, i]
se = pd.concat([se, se_temp])
# if the Series was not named at the start (e.g. it came from a csv as a dataframe), it cannot be renamed after concatenation; if a name is still needed, the only option is to rebuild a new Series with the desired name
se = pd.Series(se, name="aaaaaaa")
# replace the row labels with dates
rng = pd.date_range('2011-1-1', periods=9, freq='H')
se.index = rng
se = pd.Series(se, name="bbbbb")
se.head()
###Output
_____no_output_____
###Markdown
When a csv contains lots of ID codes, e.g. codes starting with 002 such as 002111, pd.read_csv('text.csv') turns every 002xxx into 2xxx — the two leading zeros disappear, because as a numeric type there is no reason to keep them. To preserve the codes as strings, just set the dtype parameter: df = pd.read_csv('text.csv', dtype={'code': str}). Specify the column you want converted — the data in the 'code' column is then read as str; either the column name or the column position works. The data read in then matches what we want.
###Code
dataset = pd.read_csv(StringIO(data),dtype={2:str})
print(dataset)
dataset = pd.read_csv(StringIO(data),dtype={'col3':str})
print(dataset)
###Output
col1 col2 col3
0 a b 01
1 a b 02
2 c d 03
col1 col2 col3
0 a b 01
1 a b 02
2 c d 03
###Markdown
If the file has comment lines, they can be skipped with the comment parameter when reading:
###Code
import pandas as pd
file= 'test-comment.csv'
data_wo_comment = pd.read_csv(file, comment='#')
data_wo_comment
###Output
_____no_output_____
###Markdown
Reading Excel files is slightly different; here we need the read_excel function, which requires the extra openpyxl package, so install it: ```Shell conda install -c conda-forge openpyxl```
###Code
import pandas as pd
df = pd.read_excel('AK_U.xlsx',sheet_name='AK')
# df = pd.read_excel('NID2018_U.xlsx')
df.head()
###Output
_____no_output_____
###Markdown
Sometimes feather files are used as well; feather is a file format for efficiently reading and writing binary data. However, following the advice in https://zhuanlan.zhihu.com/p/69221436, pandas' built-in to_feather and read_feather can run into version-compatibility problems, so it is better to use feather's own API (a minimal sketch follows below). Install feather: ```Shell conda install -c conda-forge feather-format``` First, an example converting a csv file to a feather file.
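Since the cells below stick to the pandas methods, here is a minimal sketch of what using the feather-format package's own reader/writer would look like (the frame and file name are illustrative, not from the original notebook):

```python
import feather
import pandas as pd

df_demo = pd.DataFrame({"a": [1, 2, 3]})
feather.write_dataframe(df_demo, "demo.feather")  # feather-format's own writer
df_back = feather.read_dataframe("demo.feather")  # and its own reader
```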
###Code
import numpy as np
import pandas as pd
import os
import feather
PATH = 'attr_temp99%_days_99sites.csv'
df_temp = pd.read_csv(PATH)
df_temp.head()
df_temp.to_feather('attr_temp99%_days_99sites.feather')
###Output
_____no_output_____
###Markdown
Now let's look at reading the feather data back:
###Code
train_data = pd.read_feather("attr_temp99%_days_99sites.feather")
train_data.head()
###Output
_____no_output_____
###Markdown
Another commonly used data file format is JSON; in pandas, JSON files can be read with read_json().
###Code
import pandas as pd
strtext='[{"ttery":"min","issue":"20130801-3391","code":"8,4,5,2,9","code1":"297734529","code2":null,"time":1013395466000},\
{"ttery":"min","issue":"20130801-3390","code":"7,8,2,1,2","code1":"298058212","code2":null,"time":1013395406000},\
{"ttery":"min","issue":"20130801-3389","code":"5,9,1,2,9","code1":"298329129","code2":null,"time":1013395346000},\
{"ttery":"min","issue":"20130801-3388","code":"3,8,7,3,3","code1":"298588733","code2":null,"time":1013395286000},\
{"ttery":"min","issue":"20130801-3387","code":"0,8,5,2,7","code1":"298818527","code2":null,"time":1013395226000}]'
text_data = pd.read_json(strtext)
text_data
json_file = "test.json"
# when the json file is just a single dict of key-value pairs, use typ='series' so it is read correctly
text_json = pd.read_json(json_file, typ='series')
print(type(text_json.index[0]))
text_json
###Output
<class 'numpy.int64'>
###Markdown
But here the index type silently defaults to int. According to [How to read index data as string with pandas.read_csv()?](https://stackoverflow.com/questions/35058435/how-to-read-index-data-as-string-with-pandas-read-csv) (which is about read_csv, but the situation should be the same), this is a small bug in pandas: there is no one-liner to specify the index dtype. If you read first and convert afterwards, the leading 0s cannot be recovered:
###Code
text_json.index = text_json.index.map(str)
print(type(text_json.index[0]))
text_json
###Output
<class 'str'>
###Markdown
You can use this approach instead: first read into a dict, then convert to a Series or DataFrame; the index is then str.
###Code
import json
with open(json_file, 'r') as fp:
my_object = json.load(fp)
all_sites_purposes = pd.Series(my_object)
all_sites_purposes
###Output
_____no_output_____
###Markdown
When reading data, or when working with it afterwards, you sometimes want to convert data types; see [Pandas数据类型转换的几个小技巧](https://zhuanlan.zhihu.com/p/35287822). There are three basic ways to convert types in pandas: use astype() for forced conversion, e.g. dataset['col3'].astype('float'); use a custom function; or use the helper functions pandas provides, such as to_numeric() and to_datetime(). astype() fails when the column to be converted contains special values that cannot be converted (e.g. ¥, ErrorValue, 14n). astype() works when every value in the column can be simply interpreted as a number (2, 2.12, ...), or when every value is numeric and is being converted to the string object type. Between astype() and complicated custom functions sits a middle ground: pandas' helper functions, which are very useful for certain conversions (to_numeric(), to_datetime()); a small sketch follows below. If you want to configure a few things when reading, such as skipping certain rows or choosing the separator, set the corresponding parameters, for example:
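A minimal sketch of the three approaches mentioned above (the column values are made up for illustration):

```python
import pandas as pd

df_conv = pd.DataFrame({"col1": ["1", "2", "bad"], "col2": ["2018-01-01", "2018-01-02", "oops"]})
# astype only works when every value converts cleanly; 'bad' would raise a ValueError here
# df_conv["col1"].astype(float)
# the helper functions can coerce unconvertible values to NaN / NaT instead
df_conv["col1_num"] = pd.to_numeric(df_conv["col1"], errors="coerce")
df_conv["col2_date"] = pd.to_datetime(df_conv["col2"], errors="coerce")
```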
###Code
import os
import pandas as pd
fn = "TMaxMon.csv"
df = pd.read_csv(fn, engine='python', index_col=0, header=None,
skiprows=2, sep=',')
df.drop(list(df.columns)[0], axis=1, inplace=True)
df
###Output
_____no_output_____
###Markdown
Note that a pandas DataFrame has no uniqueness constraint on rows — two rows can be completely identical — so be careful: after writing into a dict below, one row is lost.
###Code
d = {}
for node_id, row in df.iterrows():
print(node_id)
d[node_id] = row.values.tolist()
print(d.keys())
###Output
11120301
11120201
11120202
11130101
11120202
11120304
11120302
11120303
dict_keys([11120301, 11120201, 11120202, 11130101, 11120304, 11120302, 11120303])
###Markdown
Besides reading data, the other side is writing files; the most commonly used writer is the to_csv function.
###Code
import pandas as pd
import numpy as np
df = pd.DataFrame({
'col1': ['A', 'A', 'B', np.nan, 'D', 'C'],
'col2': [2, 1, 9, 8, 7, 4],
'col3': [0, 1, 9, 4, 2, 3],
})
file_name = "test.csv"
df.to_csv(file_name)
###Output
_____no_output_____
###Markdown
If you don't want the index written out, just set index to False:
###Code
df.to_csv(file_name,index=False)
###Output
_____no_output_____
###Markdown
Date and time handling. As with base Python and NumPy, pandas has its own date-handling facilities; dates are always a bit troublesome.
###Code
import pandas as pd
# date / string conversion
strtime=['2000-01-31', '2000-02-29', '2000-03-31', '2000-04-30',
'2000-05-31', '2000-06-30', '2000-07-31', '2000-08-31',
'2000-09-30', '2000-10-31']
time=pd.to_datetime(strtime)
[[dt.year, dt.month, dt.day, dt.hour] for dt in time]
my_time = pd.DataFrame([[dt.year, dt.month, dt.day, dt.hour] for dt in time], columns=['Year', 'Mnth', 'Day', 'Hr'])
my_time.head()
###Output
_____no_output_____
###Markdown
Next, converting numbers to dates. In the example data, the first three columns are year, month and day, and the last column holds the values:
###Code
# number / date conversion
import pandas as pd
data_temp = pd.DataFrame([[1952,1,1,3950],
[1952,1,2,3920],
[1952,1,3,3960],
[1952,1,4,3970]])
data_temp
df_date = data_temp[[0, 1, 2]]
print(df_date)
df_date.columns = ['year', 'month', 'day']
date = pd.to_datetime(df_date).values.astype('datetime64[D]')
date
###Output
0 1 2
0 1952 1 1
1 1952 1 2
2 1952 1 3
3 1952 1 4
###Markdown
Sorting. Sorting the data in a DataFrame is another common operation. For example:
###Code
import pandas as pd
import numpy as np
df = pd.DataFrame({
'col1': ['A', 'A', 'B', np.nan, 'D', 'C'],
'col2': [2, 1, 9, 8, 7, 4],
'col3': [0, 1, 9, 4, 2, 3],
})
df
###Output
_____no_output_____
###Markdown
Sort by col1
###Code
df.sort_values(by=['col1'])
###Output
_____no_output_____
###Markdown
Sort by multiple columns
###Code
df.sort_values(by=['col1', 'col2'])
###Output
_____no_output_____
###Markdown
One thing to note about sorting is that the position of NaN can be surprising: NaN rows end up as if they were the largest values, because by default sort_values places them last regardless of the numeric values (a small na_position sketch follows below).
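A small sketch of the na_position parameter of sort_values, which controls where the NaN rows go (the frame here is made up for illustration):

```python
import numpy as np
import pandas as pd

df_na = pd.DataFrame({"col1": [0, 2, np.nan, 10]})
df_na.sort_values(by=["col1"], na_position="first")  # NaN row first; the default 'last' puts it at the end
```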
###Code
import pandas as pd
import numpy as np
df_nan = pd.DataFrame({'col1': [0, 2, 5, np.nan, 7, 10]})
df_nan
df_nan.sort_values(by=['col1'])
###Output
_____no_output_____
###Markdown
Adding, deleting and concatenating. For example, dropping columns that are entirely NaN.
###Code
import pandas as pd
import numpy as np
df0 = pd.DataFrame({"a":np.arange(3),"b":[1,np.nan,3],"c":[np.nan,np.nan,np.nan]})
df0
###Output
_____no_output_____
###Markdown
For direct deletion, as mentioned earlier, just drop the corresponding column.
###Code
df1 = df0.drop(['c'], axis=1)
df1
###Output
_____no_output_____
###Markdown
But if you want to first detect such columns in bulk and then delete them, you can do it like this:
###Code
# find which columns are entirely NaN
is_all_nan = df0.apply(lambda x: all(pd.isna(x)), axis=0)
is_all_nan
# drop the columns flagged as True
df2=df0.drop(is_all_nan[is_all_nan==True].index,axis=1)
df2
###Output
_____no_output_____
###Markdown
See: [Merge, join, and concatenate](https://www.pypandas.cn/docs/user_guide/merging.html#concatenating-objects). Concatenation mainly uses the concat function; axis=0 (the default) stacks vertically, axis=1 horizontally.
###Code
import pandas as pd
df0 = pd.DataFrame()
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
'B': ['B8', 'B9', 'B10', 'B11'],
'C': ['C8', 'C9', 'C10', 'C11'],
'D': ['D8', 'D9', 'D10', 'D11']},
index=[8, 9, 10, 11])
frames = [df0, df1, df2, df3]
result = pd.concat(frames)
print(result)
df4 = pd.DataFrame({'B': ['B2', 'B3', 'B6', 'B7'],
'D': ['D2', 'D3', 'D6', 'D7'],
'F': ['F2', 'F3', 'F6', 'F7']},
index=[2, 3, 6, 7])
result = pd.concat([df1, df4], axis=1)
result
###Output
_____no_output_____
###Markdown
Here is another example of adding empty columns, based mainly on: [pandas dataframe在指定的位置添加一列, 或者一次性添加几列,reindex,pd.concat的使用](https://blog.csdn.net/AlanGuoo/article/details/76522429)
###Code
df5 = pd.concat([df4, pd.DataFrame(columns=['flow', 'mode'])])
df5
###Output
_____no_output_____
###Markdown
reindex can be used to re-index; as shown below, only df1's index is used, so just 4 rows remain.
###Code
df5.reindex(df1.index)
###Output
_____no_output_____
###Markdown
Here is another small example of joining several columns into one: concatenating a few string columns into a single string column.
###Code
df = pd.DataFrame({"a": range(3), "b": range(3), "c": range(3)})
# the idea is simple: convert each column to str and join them (first_column.str.cat(second_column) would also work)
df['a'] = df.iloc[:, 0].apply(str) + "-" + df['b'].apply(str) + "-" + df['b'].apply(str)
# after joining, keep only certain columns
df = df[['a', 'c']]
print(df)
###Output
a c
0 0-0-0 0
1 1-1-1 1
2 2-2-2 2
###Markdown
Concatenating multiple Series
###Code
import pandas as pd
# 初始化的时候不起名,后面rename没有用
a = pd.Series([1, 2], name='aa')
rng1 = pd.date_range('2011-1-1', periods=2, freq='D')
print(type(rng1))
a.index = rng1
print("构建一个序列,并以时间做index:",a)
b = pd.Series([2, 3, 4])
rng2 = pd.date_range('2011-1-2', periods=3, freq='D')
b.index = rng2
df1 = pd.concat([a, b], axis=1)
print("拼接a和b:",df1)
c = pd.Series([5, 6])
rng3 = pd.date_range('2011-1-1', periods=2, freq='D')
c.index = rng3
df2 = pd.concat([df1, c], axis=1)
print("拼接df1和c:",df2)
# 尝试直接拼接多个:
df3 = pd.concat([df1, c, c], axis=1)
print("拼接多个目标:",df3)
###Output
<class 'pandas.core.indexes.datetimes.DatetimeIndex'>
构建一个序列,并以时间做index: 2011-01-01 1
2011-01-02 2
Freq: D, Name: aa, dtype: int64
拼接a和b: aa 0
2011-01-01 1.0 NaN
2011-01-02 2.0 2.0
2011-01-03 NaN 3.0
2011-01-04 NaN 4.0
拼接df1和c: aa 0 0
2011-01-01 1.0 NaN 5.0
2011-01-02 2.0 2.0 6.0
2011-01-03 NaN 3.0 NaN
2011-01-04 NaN 4.0 NaN
拼接多个目标: aa 0 0 1
2011-01-01 1.0 NaN 5.0 5.0
2011-01-02 2.0 2.0 6.0 6.0
2011-01-03 NaN 3.0 NaN NaN
2011-01-04 NaN 4.0 NaN NaN
###Markdown
Concatenation can also be done with append:
###Code
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']},
index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],
'D': ['D4', 'D5', 'D6', 'D7']},
index=[4, 5, 6, 7])
result = df1.append(df2)
result
###Output
_____no_output_____
###Markdown
Missing values come up all the time in data processing; we usually interpolate them, and pandas provides very convenient interpolation methods.
###Code
import pandas as pd
import numpy as np
s = pd.Series([0, 1, np.nan, 3])
s.interpolate()
s
###Output
_____no_output_____
###Markdown
Note that interpolate is not an in-place operation. For smooth interpolation you can use spline interpolation, which relies on the scipy package; scipy is introduced later, but if needed here you can install it first: ```Shell conda install -c conda-forge scipy```
###Code
s = pd.Series([0, 2, np.nan, 8])
s.interpolate(method='polynomial', order=2)
###Output
_____no_output_____
###Markdown
Now, for a DataFrame:
###Code
df = pd.DataFrame([(0.0, np.nan, -1.0, 1.0),
(np.nan, 2.0, np.nan, np.nan),
(2.0, 3.0, np.nan, 9.0),
(np.nan, 4.0, -4.0, 16.0)],
columns=list('abcd'))
df
df.interpolate(method='linear', limit_direction='forward', axis=0)
df.interpolate(method='linear', limit_direction='forward', axis=1)
###Output
_____no_output_____
###Markdown
Modification operations include renaming rows/columns, swapping columns and so on. Rows are usually referred to by the index; column names are also called fields. Broadly, you can modify while reading the data or after reading it. There are several ways to rename the index, see [重命名dataframe的index](https://blog.csdn.net/sinat_35930259/article/details/79872577); there are also several ways to rename columns, see [Pandas中修改DataFrame列名](http://www.voidcn.com/article/p-wycobfgd-bqs.html). A small sketch of renaming the index itself follows below.
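A minimal sketch of renaming the index (the labels are made up for illustration):

```python
import pandas as pd

df_idx = pd.DataFrame({"a": [1, 2]}, index=["r1", "r2"])
df_idx.rename(index={"r1": "row1"}, inplace=True)  # rename a single index label
df_idx.index = ["first", "second"]                 # or replace the whole index at once
```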
###Code
# 给出所有列名,这里类型是Index,可以直接转为list
import pandas as pd
from io import StringIO, BytesIO
data = ('col1,col2,col3\n'
'a,b,01\n'
'a,b,02\n'
'c,d,03')
print("原数据:\n",data)
dataset = pd.read_csv(StringIO(data), index_col=0)
print("pandas读取默认设置:\n",dataset)
print(dataset.columns)
columns=dataset.columns.tolist()
print(columns)
# 转换后,每个元素直接就是字符串了
print(type(columns[0]))
# 修改列名的常用方式,inplace为false的话,还需要赋值到dataset,因此直接设置为true
dataset.rename(columns={'col1':'a', 'col2':'b', 'col3':'c'}, inplace = True)
print(dataset)
# 修改index行名
"""重新命名各列"""
new_col = ['new1', 'new2', 'new3', 'new4']
df2.columns = new_col
print(df2)
"""交换列的位置"""
order = ['new2', 'new1', 'new3', 'new4']
df2 = df2[order]
print(df2)
###Output
new1 new2 new3 new4
4 A4 B4 C4 D4
5 A5 B5 C5 D5
6 A6 B6 C6 D6
7 A7 B7 C7 D7
new2 new1 new3 new4
4 B4 A4 C4 D4
5 B5 A5 C5 D5
6 B6 A6 C6 D6
7 B7 A7 C7 D7
###Markdown
Splitting rows and columns, see [DataFrame一列拆成多列以及一行拆成多行](https://blog.csdn.net/Asher117/article/details/84346073). In data analysis you often need to split one DataFrame column into several columns, or split one row into several rows based on a column.
###Code
import pandas as pd
df= pd.DataFrame({'Country': ['China', 'America', 'Japan'],
'City': ['Shanghai|Shenzhen', 'New York|State College', 'Tokyo|Osaka']},
index=[0, 1, 2])
df
df['Citys']=df['City'].map(lambda x: x.split('|'))
df
df['City1']=df['City'].map(lambda x: x.split('|')[0])
df['City2']=df['City'].map(lambda x: x.split('|')[1])
df
###Output
_____no_output_____
###Markdown
Besides the splitting above, there is also gathering columns together and reorganizing the table into a database-normalized (long) form; this uses the very common melt function. In the resulting format, one or more columns are identifier variables, while all other columns (considered measured variables) are unpivoted to the row axis, **leaving just two non-identifier columns** (variable and value).
###Code
import pandas as pd
df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
'B': {0: 1, 1: 3, 2: 5},
'C': {0: 2, 1: 4, 2: 6}})
df
pd.melt(df, id_vars=['A'], value_vars=['B'])
pd.melt(df, id_vars=['A'], value_vars=['B', 'C'])
pd.melt(df, id_vars=['A'], value_vars=['B'], var_name='myVarname', value_name='myValname')
###Output
_____no_output_____
###Markdown
Indexing and slicing. First there is direct [] indexing and the use of loc and iloc, the most common indexing methods in pandas. Slicing includes slicing horizontally and vertically, i.e. selecting certain rows or certain columns. Slicing a DataFrame is easy once you remember to use its index flexibly: the first dimension is rows, the second is columns; label-based selection uses loc and positional selection uses iloc. Let's first look at indexing a Series.
###Code
import numpy as np
import pandas as pd
se = pd.Series([1, 2, 3, 4], index=['A', 'B', 'C', 'D'])
se
###Output
_____no_output_____
###Markdown
Items of a Series that satisfy a condition can be selected directly as follows:
###Code
se[se>2]
###Output
_____no_output_____
###Markdown
Get the index of the items that satisfy the condition:
###Code
se[se==2].index.tolist()
###Output
_____no_output_____
###Markdown
Another example: to find the index of the string items containing 'b', the following approach will raise an error.
###Code
s1= pd.Series(['a', 'bv', "b", "d"], index=['A', 'B', 'C', 'D'])
s1["b" in s1].index.tolist()
# or
# s1[s1.find('b')].index.tolist()
###Output
_____no_output_____
###Markdown
In this case you need to use apply to run the check over each item:
###Code
include_b = s1.apply(lambda x:"b" in x)
s1[include_b==True].index.tolist()
data = pd.DataFrame(np.arange(16).reshape(4,4),index=list('abcd'),columns=list('wxyz'))
print(data)
print("选择表格中的'w'列,使用类字典属性,返回的是Series类型:", data['w'])
print("选择表格中的'w'列,使用点属性,返回的是Series类型:",data.w)
print("选择表格中的'w'列,返回的是DataFrame属性:",data[['w']])
print("选择表格中的'w'、'z'列:",data[['w','z']])
print("返回第1行到第2行的所有行,前闭后开,包括前不包括后:",data[0:2])
print(" #返回第2行,从0计,返回的是单行,通过有前后值的索引形式:",data[1:2])
#如果采用data[1]则报错
print("#返回第2行的第三种方法,返回的是DataFrame,跟data[1:2]同:",data.iloc[1:2])
print(" #利用index值进行切片,返回的是**前闭后闭**的DataFrame:",data['a':'b'])
print("#返回data的前几行数据,默认为前五行,需要前十行则dta.head(10):",data.head())
print("#返回data的后几行数据,默认为后五行,需要后十行则data.tail(10):",data.tail())
print("#选取DataFrame最后一行,返回的是Series:",data.iloc[-1])
print("#选取DataFrame最后一行,返回的是DataFrame:",data.iloc[-1:])
print("#返回‘a’行'w'、'x'列,这种用于选取行索引列索引已知",data.loc['a',['w','x']])
print("#选取第二行第二列,用于已知行、列位置的选取:",data.iat[1,1])
###Output
w x y z
a 0 1 2 3
b 4 5 6 7
c 8 9 10 11
d 12 13 14 15
选择表格中的'w'列,使用类字典属性,返回的是Series类型: a 0
b 4
c 8
d 12
Name: w, dtype: int32
选择表格中的'w'列,使用点属性,返回的是Series类型: a 0
b 4
c 8
d 12
Name: w, dtype: int32
选择表格中的'w'列,返回的是DataFrame属性: w
a 0
b 4
c 8
d 12
选择表格中的'w'、'z'列: w z
a 0 3
b 4 7
c 8 11
d 12 15
返回第1行到第2行的所有行,前闭后开,包括前不包括后: w x y z
a 0 1 2 3
b 4 5 6 7
#返回第2行,从0计,返回的是单行,通过有前后值的索引形式: w x y z
b 4 5 6 7
#返回第2行的第三种方法,返回的是DataFrame,跟data[1:2]同: w x y z
b 4 5 6 7
#利用index值进行切片,返回的是**前闭后闭**的DataFrame: w x y z
a 0 1 2 3
b 4 5 6 7
#返回data的前几行数据,默认为前五行,需要前十行则dta.head(10): w x y z
a 0 1 2 3
b 4 5 6 7
c 8 9 10 11
d 12 13 14 15
#返回data的后几行数据,默认为后五行,需要后十行则data.tail(10): w x y z
a 0 1 2 3
b 4 5 6 7
c 8 9 10 11
d 12 13 14 15
#选取DataFrame最后一行,返回的是Series: w 12
x 13
y 14
z 15
Name: d, dtype: int32
#选取DataFrame最后一行,返回的是DataFrame: w x y z
d 12 13 14 15
#返回‘a’行'w'、'x'列,这种用于选取行索引列索引已知 w 0
x 1
Name: a, dtype: int32
#选取第二行第二列,用于已知行、列位置的选取: 5
###Markdown
Inspect the DataFrame's row index:
###Code
print(data.index)
data.index.tolist()
###Output
Index(['a', 'b', 'c', 'd'], dtype='object')
###Markdown
Inspect the DataFrame's columns:
###Code
print(type(data.columns))
data.columns.tolist()
###Output
<class 'pandas.core.indexes.base.Index'>
###Markdown
If you want to index a column by its string label and the row by its integer position, you need to do it like this:
###Code
data = pd.DataFrame(np.arange(16).reshape(4,4),index=list('abcd'),columns=list('wxyz'))
print(data)
data['x'][0]
"""取dataframe指定多行列 slice操作"""
# 取多行
print(df2.iloc[0:2])
# 取多列
print(df2.iloc[:, 0:2])
# 取多行多列
print(df2.iloc[0:2, 0:2])
###Output
new2 new1 new3 new4
4 B4 A4 C4 D4
5 B5 A5 C5 D5
new2 new1
4 B4 A4
5 B5 A5
6 B6 A6
7 B7 A7
new2 new1
4 B4 A4
5 B5 A5
###Markdown
Slicing a Series
###Code
"""给Series作slice操作"""
arr = [1, 2, 3, 4] # 创建数组
series_1 = pd.Series(arr)
series_1.index = ['a', 'b', 'c', 'd']
print("------------------Series查询操作----------------------")
print(series_1['a'])
print(series_1[['a', 'b']])
print(series_1[series_1 > 2])
print(series_1[:2])
print(series_1['a':'c'])
###Output
------------------Series查询操作----------------------
1
a 1
b 2
dtype: int64
c 3
d 4
dtype: int64
a 1
b 2
dtype: int64
a 1
b 2
c 3
dtype: int64
###Markdown
Conditional filtering is also commonly used on a DataFrame. See [30分钟带你入门数据分析工具 Pandas(上篇),果断收藏](https://zhuanlan.zhihu.com/p/44174554): the square-bracket [] syntax, besides directly selecting certain columns, can also take a boolean condition and filter the rows/columns that satisfy it. For example:
###Code
import pandas as pd
import numpy as np
df=pd.DataFrame(np.random.randn(5,4),['A','B','C','D','E'],['W','X','Y','Z'])
print(df)
# select the rows where 'W' > 0
print(df[df['W']>0])
print('\n')
# look only at column 'X' for the rows where 'W' > 0
print(df[df['W']>0]['X'])
print('\n')
print(df[df['W']>0][['X','Y']])
###Output
W X Y Z
A 1.444328 -1.473487 -1.659399 0.298594
B 0.530866 0.566232 0.987480 -0.931774
C 0.143968 0.958281 0.786524 -0.032190
D 0.620018 -0.192263 -0.175855 -1.118276
E -0.180265 0.438621 2.173577 -2.866212
W X Y Z
A 1.444328 -1.473487 -1.659399 0.298594
B 0.530866 0.566232 0.987480 -0.931774
C 0.143968 0.958281 0.786524 -0.032190
D 0.620018 -0.192263 -0.175855 -1.118276
A -1.473487
B 0.566232
C 0.958281
D -0.192263
Name: X, dtype: float64
X Y
A -1.473487 -1.659399
B 0.566232 0.987480
C 0.958281 0.786524
D -0.192263 -0.175855
###Markdown
To assign a value to a column for the rows that satisfy a condition, use the following form — and note how loc works: its first argument is the row indexer and the second the column indexer. This is an in-place operation:
###Code
df.loc[df['W']>0, 'W'] = 1
df
###Output
_____no_output_____
###Markdown
You can also chain multiple conditions with the logical operators & (and) and | (or) to apply several filters to the DataFrame at once, e.g. selecting the rows that satisfy both 'W'>0 and 'X'<1:
###Code
df[(df['W']>0) & (df['X']<1)]
###Output
_____no_output_____
###Markdown
Getting the index of the rows that satisfy the condition is also simple, as shown below:
###Code
df[(df['W']>0) & (df['X']<1)].index.tolist()
###Output
_____no_output_____
###Markdown
As another example, you can compare columns against each other and use that as the selection condition:
###Code
df
df[df['W']<df['X']]
###Output
_____no_output_____
###Markdown
A common form of indexing is selecting the rows whose value in some column is contained in a given list:
###Code
import pandas as pd
df = pd.DataFrame({'A': [5,6,3,4], 'B': [1,2,3,5]})
df
df[df['A'].isin([3, 6])]
###Output
_____no_output_____
###Markdown
And for the items not in the list:
###Code
df[~df['A'].isin([3, 6])]
###Output
_____no_output_____
###Markdown
Be careful when using pandas' loc: with missing labels, pandas 1.0 and later raises an error; the recommended approach is reindex().
###Code
import pandas as pd
s = pd.Series([1, 2, 3])
s.loc[[1, 2]]
s.loc[[1, 2, 3]]  # loc with a missing label raises a KeyError
###Output
_____no_output_____
###Markdown
Now let's look at using reindex; for more details see: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike
###Code
s.reindex([1, 2, 3])
###Output
_____no_output_____
###Markdown
If you just want the values corresponding to the valid keys, you can use:
###Code
labels = [1, 2, 3]
s.index.intersection(labels)
# s.loc[s.index.intersection(labels)]
###Output
_____no_output_____
###Markdown
Another thing to watch out for is checking for NaN; it is easy to make mistakes in pandas. See: https://blog.csdn.net/S_o_l_o_n/article/details/100661937. pandas is built on numpy, so its missing value nan is equivalent to numpy.nan. numpy's nan is not an empty object — it is actually a numpy.float64 — so don't mistake it for one and use bool(np.nan) to test for missingness; that is wrong. So how should missing values be checked in pandas, and which traps must be avoided? Ways that work for a single missing value: 1. pd.isnull(), pd.isna(); 2. np.isnan(); 3. the `is` expression; 4. the `in` expression. Ways that must NOT be used for a single missing value: 1. the == expression; 2. a bool expression; 3. an if statement. For handling several missing values at once: 1. use the any() or all() methods of a Series or DataFrame; 2. use numpy's any() or all(); 3. do not use Python's built-in any() and all(); 4. use dropna() on a Series or DataFrame to drop missing values; 5. use fillna() to fill them (a small fillna sketch follows below).
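A small fillna() sketch, since item 5 above is not demonstrated in the cell below (the values are illustrative):

```python
import numpy as np
import pandas as pd

s_na = pd.Series([1.0, np.nan, 3.0])
s_na.fillna(0)            # fill missing values with a constant
s_na.fillna(s_na.mean())  # or with a statistic such as the mean
```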
###Code
import pandas as pd
import numpy as np
na=np.nan
# ways that can be used to check for a missing value
print(pd.isnull(na)) # True
print(pd.isna(na)) # True
print(np.isnan(na)) # True
print(na is np.nan) # True
print(na in [np.nan]) # True
# ways that must NOT be used directly, i.e. the results below are not what we expect
print(na == np.nan) # False
print(bool(na)) # True
if na:
print('na is not null') # Output: na is not null
# the Python built-ins any and all must not be used directly either
print(any([na])) # True
print(all([na])) #True
df_nan = pd.DataFrame({"a":[1,2,3,np.nan,4,5,6,7,8,9]})
df_nan.dropna() # not an in-place operation, i.e. the original dataframe is unchanged
df_nan
###Output
_____no_output_____
###Markdown
Find where the NaN values are:
###Code
df_nan[df_nan.values!=df_nan.values]
# give the index positions
df_nan[df_nan.values!=df_nan.values].index.tolist()
###Output
_____no_output_____
###Markdown
Grouping. Examples of grouping a DataFrame. Any groupby operation involves one of the following on the original object: splitting the object, applying a function, and combining the results. In many cases we split the data into several sets and apply some function to each subset. In the apply step you can perform: aggregation — computing summary statistics; transformation — performing group-specific operations; filtration — discarding data under certain conditions. (A minimal sketch of the transform and filter steps follows below.)
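The cells below mainly demonstrate splitting and aggregation, so here is a minimal sketch of the transform and filter steps (the small frame is made up for illustration):

```python
import pandas as pd

df_g = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 30]})
df_g.groupby("key")["val"].transform("mean")               # broadcast the group mean back to every row
df_g.groupby("key").filter(lambda g: g["val"].mean() > 5)  # keep only the groups whose mean exceeds 5
```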
###Code
salaries = pd.DataFrame({
'name': ['BOSS', 'Lilei', 'Lilei', 'Han', 'BOSS', 'BOSS', 'Han', 'BOSS'],
'Year': [2016, 2016, 2016, 2016, 2017, 2017, 2017, 2017],
'Salary': [999999, 20000, 25000, 3000, 9999999, 999999, 3500, 999999],
'Bonus': [100000, 20000, 20000, 5000, 200000, 300000, 3000, 400000]
})
print(salaries.columns)
print(salaries.info())
print(salaries.describe())
salaries = salaries[['name', 'Year', 'Salary', 'Bonus']]
# fix the column order
print(salaries)
# group the dataframe by name
group_by_name = salaries.groupby('name')
# get one particular group after grouping
se_temp = group_by_name.get_group('Lilei')
print(se_temp)
print(se_temp.describe())
# loop over the groups and collect those whose name is in the given list
config = pd.Series({'names': ['Lilei', 'BOSS']})
dfs = []
for name, group in group_by_name:
print(name)
if name in config['names']:
dfs.append(group)
for i in range(len(dfs)):
# 如果没有把name和group分开,那么dfs[i]的类型会是tuple,key为name,value是group,可参考接下来的输出
print(type(dfs[i]))
print(dfs[i])
# if name and group are not unpacked separately, each element is a (name, group) tuple
dfs_s = []
for group in group_by_name:
dfs_s.append(group)
for i in range(len(dfs_s)):
print(type(dfs_s[i]))
print(dfs_s[i])
###Output
Index(['name', 'Year', 'Salary', 'Bonus'], dtype='object')
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 8 entries, 0 to 7
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 name 8 non-null object
1 Year 8 non-null int64
2 Salary 8 non-null int64
3 Bonus 8 non-null int64
dtypes: int64(3), object(1)
memory usage: 384.0+ bytes
None
Year Salary Bonus
count 8.000000 8.000000e+00 8.000000
mean 2016.500000 1.631437e+06 131000.000000
std 0.534522 3.416521e+06 152851.935826
min 2016.000000 3.000000e+03 3000.000000
25% 2016.000000 1.587500e+04 16250.000000
50% 2016.500000 5.124995e+05 60000.000000
75% 2017.000000 9.999990e+05 225000.000000
max 2017.000000 9.999999e+06 400000.000000
name Year Salary Bonus
0 BOSS 2016 999999 100000
1 Lilei 2016 20000 20000
2 Lilei 2016 25000 20000
3 Han 2016 3000 5000
4 BOSS 2017 9999999 200000
5 BOSS 2017 999999 300000
6 Han 2017 3500 3000
7 BOSS 2017 999999 400000
name Year Salary Bonus
1 Lilei 2016 20000 20000
2 Lilei 2016 25000 20000
Year Salary Bonus
count 2.0 2.000000 2.0
mean 2016.0 22500.000000 20000.0
std 0.0 3535.533906 0.0
min 2016.0 20000.000000 20000.0
25% 2016.0 21250.000000 20000.0
50% 2016.0 22500.000000 20000.0
75% 2016.0 23750.000000 20000.0
max 2016.0 25000.000000 20000.0
BOSS
Han
Lilei
<class 'pandas.core.frame.DataFrame'>
name Year Salary Bonus
0 BOSS 2016 999999 100000
4 BOSS 2017 9999999 200000
5 BOSS 2017 999999 300000
7 BOSS 2017 999999 400000
<class 'pandas.core.frame.DataFrame'>
name Year Salary Bonus
1 Lilei 2016 20000 20000
2 Lilei 2016 25000 20000
<class 'tuple'>
('BOSS', name Year Salary Bonus
0 BOSS 2016 999999 100000
4 BOSS 2017 9999999 200000
5 BOSS 2017 999999 300000
7 BOSS 2017 999999 400000)
<class 'tuple'>
('Han', name Year Salary Bonus
3 Han 2016 3000 5000
6 Han 2017 3500 3000)
<class 'tuple'>
('Lilei', name Year Salary Bonus
1 Lilei 2016 20000 20000
2 Lilei 2016 25000 20000)
###Markdown
To convert categorical data to numbers, you can use the factorize function:
###Code
# number of columns:
print(dataset.shape[1])
# number of rows:
print(dataset.shape[0])
# factorize(values[, sort, order, …]) Encode the object as an enumerated type or categorical variable.
labels, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
print(labels)
uniques
###Output
2
3
[0 0 1 2 0]
###Markdown
An aggregation function returns a single aggregated value for each group. Once a groupby object has been created, several aggregation operations can be performed on the grouped data (a sketch of passing several functions to agg follows below).
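A sketch of computing several aggregations at once by passing a list of functions to agg (the frame is made up for illustration):

```python
import numpy as np
import pandas as pd

df_agg = pd.DataFrame({"Team": ["A", "A", "B"], "Points": [10, 20, 30]})
df_agg.groupby("Team")["Points"].agg([np.mean, np.sum, np.std])
```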
###Code
import pandas as pd
import numpy as np
ipl_data = {'Team': ['Riders', 'Riders', 'Devils', 'Devils', 'Kings',
'kings', 'Kings', 'Kings', 'Riders', 'Royals', 'Royals', 'Riders'],
'Rank': [1, 2, 2, 3, 3,4 ,1 ,1,2 , 4,1,2],
'Year': [2014,2015,2014,2015,2014,2015,2016,2017,2016,2014,2015,2017],
'Points':[876,789,863,673,741,812,756,788,694,701,804,690]}
df = pd.DataFrame(ipl_data)
df
grouped = df.groupby('Year')
# .round(2) keeps two decimal places
grouped['Points'].agg(np.mean).round(2)
###Output
_____no_output_____
###Markdown
Batch operations. In pandas, computations are usually vectorized as much as possible to speed them up. For batch operations on a DataFrame, use pandas' apply function.
###Code
import pandas as pd
import numpy as np
df = pd.DataFrame([[4, 9]] * 3, columns=['A', 'B'])
print(df)
df.apply(np.sqrt, axis=1)
###Output
A B
0 4 9
1 4 9
2 4 9
###Markdown
By default apply works column-wise on the DataFrame (axis=0):
###Code
df.apply(np.sum)
df.apply(lambda x:sum(x))
df.apply(lambda x: x**2)
###Output
_____no_output_____
###Markdown
To apply it only to certain columns, e.g. to everything after the first column, you can do the following:
###Code
df = pd.DataFrame([[1,2,3,4,5,6]] * 3, columns=['ID', 'A', 'B', 'C', 'D', 'E'])
df
df.iloc[:,1:].apply(lambda x: x**2)
###Output
_____no_output_____
###Markdown
You can also specify the columns directly:
###Code
df[["A","B","C"]].apply(lambda x:sum(x))
###Output
_____no_output_____
###Markdown
To operate on each row instead, specify axis=1:
###Code
df[["A","B","C"]].apply(lambda x:sum(x),axis=1)
###Output
_____no_output_____
###Markdown
To apply a function to every value of a single column, you can do:
###Code
df["A"].apply(np.sqrt)
###Output
_____no_output_____
###Markdown
But as you can see, this is not an in-place operation — df is unchanged:
###Code
df
###Output
_____no_output_____
###Markdown
To make it effectively in-place, reassign the result:
###Code
df["A"]=df["A"].apply(np.sqrt)
df
###Output
_____no_output_____
###Markdown
If there are NaN values, you can use the NaN-ignoring operations:
###Code
df = pd.DataFrame({'A':[1,2,3,np.nan], 'B':[np.nan,2,np.nan,6]})
df.apply(lambda x:np.nansum(x), axis=1).values
###Output
_____no_output_____
###Markdown
If the function you apply returns something with a different shape from the original DataFrame, you can fall back on the corresponding numpy functions to keep the code vectorized and fast:
###Code
import numpy as np
from scipy import interpolate
num_of_years = 20
df = pd.DataFrame([[1,2,3,4,5,6]] * 3, columns=['A', 'B', 'C', 'D', 'E', 'F'])
data_np=df.values
data_np
def interpolate_myself(y):
x = np.linspace(0, num_of_years - 1, num=6)
xs = np.linspace(0, num_of_years - 1, num=num_of_years)
ys = interpolate.UnivariateSpline(x, y, s=0)(xs)
return ys
np_new=np.apply_along_axis(interpolate_myself, 1, data_np)
np_new
###Output
_____no_output_____
###Markdown
Finally, if you need to get back to a DataFrame, just run the DataFrame constructor:
###Code
df2 = pd.DataFrame(np_new)
time_range = np.arange(1985,2005)
df2.columns = time_range
df2
###Output
_____no_output_____ |
Multi-label CNN-PS.ipynb | ###Markdown
Final Experiments - Multi-label CNN. Text Problem Statement. Utilities and Imports
###Code
%reload_ext autoreload
%autoreload 2
import itertools
import random
from collections import Counter
import numpy as np
import pickle
from operator import itemgetter
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
# matplotlib.rcParams['figure.figsize'] = [5, 10]
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix, f1_score
from sklearn.metrics import precision_recall_fscore_support, hamming_loss
from sklearn.svm import LinearSVC, SVC
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from fastai import text as ft
from fastai import dataloader as fd
from fastai import dataset as fs
from fastai import learner as fl
from fastai import core as fc
from fastai import metrics as fm
from skai.runner import TextRunner, Adam_lambda
from skai.mwrapper import MWrapper, SKModel
from skai.utils import multi_to_text_out, vote_pred
from skai.utils import get_classification_type, weights_init, multilabel_prediction, prf_report
from skai.dataset import TokenDataset, SimpleDataset
from skai.metrics import f1_micro_skai
def mapt(f, *iters):
return tuple(map(f, *iters))
def mapl(f, *iters):
return list(map(f, *iters))
def manually_remove_problems(data):
""" remove problem from data if it has a certain tag"""
final_data = {}
remove = ['*special']
for i in data:
if set(data[i][1][0]).intersection(set(remove)) == set():
if data[i][0][0] != '':
final_data[i] = data[i]
return final_data
def get_single_label_problems(data):
'''returns a dict of all problems which only have one label'''
single_label_problems = {}
for i in data:
if len(data[i][1][0]) == 1:
single_label_problems[i] = data[i]
return single_label_problems
def get_classwise_distribution(data):
class_count = {}
for i in data:
for cls in data[i][1][0]:
if cls in class_count:
class_count[cls] +=1
else:
class_count[cls] = 1
return class_count
def get_topk_single_label_problems(data,k):
""" get top k by frequency single label problems"""
class_dict = get_classwise_distribution(data)
print(class_dict)
class_dict = dict(sorted(class_dict.items(), key=itemgetter(1), reverse=True)[:k])
print(set(class_dict.keys()))
topk_data = {}
for i in data:
if set(data[i][1][0]).intersection(set(class_dict.keys())) != set():
topk_data[i] = data[i]
return topk_data
def make_text_dataset(rdata):
Xtext, ytext = [], []
for url, data in rdata.items():
try:
ytext.append(data[1][0][0])
except IndexError:
continue
Xtext.append(data[0][0])
return Xtext, ytext
def make_multi_text_dataset(rdata):
Xtext, ytext = [], []
for url, data in rdata.items():
try:
ytext.append(data[1][0])
except IndexError:
continue
Xtext.append(data[0][0])
return Xtext, ytext
def make_statement_dataset(rdata):
Xtext, ytext = [], []
for url, data in rdata.items():
try:
ytext.append(data[1][0][0])
except IndexError:
continue
Xtext.append(data[0][2])
return Xtext, ytext
def make_non_statement_dataset(rdata):
Xtext, ytext = [], []
for url, data in rdata.items():
try:
ytext.append(data[1][0][0])
except IndexError:
continue
Xtext.append(f'{data[0][3]}\n{data[0][4]}\n{data[0][5]}')
return Xtext, ytext
def make_multi_statement_dataset(rdata):
Xtext, ytext = [], []
for url, data in rdata.items():
try:
ytext.append(data[1][0])
except IndexError:
continue
Xtext.append(data[0][2])
return Xtext, ytext
def make_multi_io_dataset(rdata):
Xtext, ytext = [], []
for url, data in rdata.items():
try:
ytext.append(data[1][0])
except IndexError:
continue
Xtext.append(f'{data[0][3]}\n{data[0][4]}\n{data[0][5]}')
return Xtext, ytext
def get_class_list(labels):
return list(set(labels))
def plot_confusion_matrix(y_true, y_pred, classes,
normalize=True,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
cm = confusion_matrix(y_true, y_pred, labels=classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
fig = plt.gcf()
fig.set_size_inches(22, 16)
plt.imshow(cm, interpolation='nearest', cmap=cmap, vmin=0.0, vmax=1.0)
# plt.title(title, fontsize)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, fontsize=32)
plt.yticks(tick_marks, classes, fontsize=32)
print(cm.max())
fmt = '.2f' if normalize else 'd'
thresh = 0.5
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black",
fontsize=32)
plt.tight_layout()
plt.ylabel('True label', fontsize=32)
plt.xlabel('Predicted label', fontsize=32)
###Output
/home/aayn/anaconda3/envs/fastai/lib/python3.6/site-packages/sklearn/ensemble/weight_boosting.py:29: DeprecationWarning: numpy.core.umath_tests is an internal NumPy module and should not be imported. It will be removed in a future NumPy release.
from numpy.core.umath_tests import inner1d
###Markdown
Load data
###Code
top10m = pickle.load(open('data/10multi_26aug.pkl', 'rb'))
top20m = pickle.load(open('data/20multi_26aug.pkl', 'rb'))
top10pm, top20pm = mapt(make_multi_statement_dataset, [top10m, top20m])
print(len(top10pm[0]))
print(top20pm[1][0])
print(top10pm[1][0])
###Output
['binary search', 'implementation', 'data structures']
['binary search', 'data structures', 'brute force', 'dp']
###Markdown
CNN Experiments
###Code
class CNN_Text(nn.Module):
def __init__(self, embed_num, class_num, channel_in=1,
kernel_sizes=[3, 4, 5], kernel_num=512, embed_dim=300):
super().__init__()
self.kernel_num = kernel_num
self.embed = nn.Embedding(embed_num, embed_dim)
convs = [nn.Conv1d(1, kernel_num, (ks, embed_dim))
for ks in kernel_sizes]
self.convs = nn.ModuleList(convs)
# self.bn1 = nn.BatchNorm2d(kernel_num)
self.fc1 = nn.Linear(len(kernel_sizes) * kernel_num, class_num)
def conv_and_pool(self, x, conv):
x = F.relu(conv(x)).squeeze(3) # (N, Co, W)
x = F.max_pool1d(x, x.size(2)).squeeze(2)
return x
def forward(self, x):
x = self.embed(x)
x = x.unsqueeze(1)
x = [F.relu(conv(x)).squeeze(3) for conv in self.convs]
x = [F.max_pool1d(i, i.size(2)).squeeze(2) for i in x]
x = torch.cat(x, 1)
out = self.fc1(x)
return out
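# Hedged shape walk-through of the forward pass above (assuming the 2-tuple kernel
# makes each conv act as a 2-D convolution over (time, embedding), which is what
# the squeeze(3) implies): token ids x of shape (N, W) -> embed (N, W, 300) ->
# unsqueeze (N, 1, W, 300) -> each conv (N, 512, W-ks+1, 1) -> squeeze + max-pool
# over time (N, 512) -> concat over the 3 kernel sizes (N, 1536) -> fc1 (N, class_num).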
###Output
_____no_output_____
###Markdown
20-multi
###Code
trunner = TextRunner([None], top20pm[0], top20pm[1], 'top20pm')
in_dim = len(trunner.alldata.tvectorizer.itos)
Xall, yall = trunner.dataset
runs = 1
out_dim = 20
all_preds, all_targs = [], []
for i in range(runs):
outer_cv = KFold(n_splits=10, shuffle=True, random_state=i+41)
outer_cv.get_n_splits(Xall, yall)
for j, (nontest_i, test_i) in enumerate(outer_cv.split(Xall, yall)):
X_train, y_train = Xall[nontest_i], yall[nontest_i]
X_test, y_test = Xall[test_i], yall[test_i]
textcnn = MWrapper(CNN_Text(in_dim, out_dim),
f'{i}_cnntext20pm_{j}')
textcnn.model.apply(weights_init)
dl_train = fd.DataLoader(SimpleDataset(X_train, y_train),
batch_size=32, num_workers=1,
pad_idx=1, transpose=False)
dl_val = fd.DataLoader(SimpleDataset(X_test, y_test),
batch_size=32, num_workers=1,
pad_idx=1, transpose=False)
modeldata = fs.ModelData(str(textcnn.path), dl_train, dl_val)
learner = fl.Learner.from_model_data(textcnn.model,
modeldata,
opt_fn=Adam_lambda())
learner.metrics = [f1_micro_skai]
learner.fit(5e-4, 10, best_save_name='best')
dl_test = fd.DataLoader(SimpleDataset(X_test, y_test),
batch_size=32, num_workers=2,
pad_idx=1, transpose=False)
learner.load('best')
preds, targs = learner.predict_dl(dl_test)
preds = multilabel_prediction(preds, 0.5)
all_preds.append(preds)
all_targs.append(targs)
print(f1_score(np.concatenate(np.array(all_targs), axis=0),
np.concatenate(np.array(all_preds), axis=0), average='micro'))
all_preds = np.array(all_preds)
all_targs = np.array(all_targs)
all_preds = np.concatenate(all_preds, axis=0)
all_targs = np.concatenate(all_targs, axis=0)
# pickle.dump([all_preds, all_targs], open('data/results/cnn-ps_20m.pkl', 'wb'))
all_preds, all_targs = pickle.load(open('data/results/cnn-ps_20m.pkl', 'rb'))
print(all_preds[7])
hl = hamming_loss(all_targs, all_preds)
micro_f1 = f1_score(all_targs, all_preds, average='micro')
macro_f1 = f1_score(all_targs, all_preds, average='macro')
# prf_report(all_targs, all_preds, labels=m20_labels)
print(f'Hamming loss = {hl}\nMicro_F1 = {micro_f1}\nMacro_F1 = {macro_f1}')
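# Hedged cross-check (not in the original notebook): for multi-label arrays the
# Hamming loss is simply the fraction of label slots where prediction and target
# disagree, so this should reproduce sklearn's value above.
manual_hl = (all_targs != all_preds).mean()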
###Output
Hamming loss = 0.10835858585858586
Micro_F1 = 0.33596409780253794
Macro_F1 = 0.2834606852644749
###Markdown
Problem-Algorithm Separate Analyis
###Code
# Demo of how to separate tags into categories
exm = all_preds[0:5]
print(exm)
prob_idxs = (2, 5, 10, 11, 12, 13, 14 , 15, 19)
alg_idxs = (0, 1, 3, 4, 6, 7, 8, 9, 16, 17, 18)
prob_targs = [exm[:, i] for i in prob_idxs]
prob_targs = np.concatenate([prob_targs]).transpose(1, 0)
print(prob_targs)
# np.concatenate([[exm[:, 1]], [exm[:, 3]]]).transpose(1, 0)
# Actual problem-algorithm splitting
prob_idxs = (2, 5, 10, 11, 12, 13, 14 , 15, 19)
alg_idxs = (0, 1, 3, 4, 6, 7, 8, 9, 16, 17, 18)
probcat_targs = [all_targs[:, i] for i in prob_idxs]
probcat_targs = np.concatenate([probcat_targs]).transpose(1, 0)
print(probcat_targs)
probcat_preds = [all_preds[:, i] for i in prob_idxs]
probcat_preds = np.concatenate([probcat_preds]).transpose(1, 0)
print(probcat_preds)
probcat_hl = hamming_loss(probcat_targs, probcat_preds)
probcat_micro_f1 = f1_score(probcat_targs, probcat_preds, average='micro')
probcat_macro_f1 = f1_score(probcat_targs, probcat_preds, average='macro')
print(f'Hamming loss = {probcat_hl}\nMicro_F1 = {probcat_micro_f1}\nMacro_F1 = {probcat_macro_f1}')
# Actual problem-algorithm splitting
prob_idxs = (2, 5, 10, 11, 12, 13, 14 , 15, 19)
alg_idxs = (0, 1, 3, 4, 6, 7, 8, 9, 16, 17, 18)
algcat_targs = [all_targs[:, i] for i in alg_idxs]
algcat_targs = np.concatenate([algcat_targs]).transpose(1, 0)
print(algcat_targs)
algcat_preds = [all_preds[:, i] for i in alg_idxs]
algcat_preds = np.concatenate([algcat_preds]).transpose(1, 0)
print(algcat_preds)
algcat_hl = hamming_loss(algcat_targs, algcat_preds)
algcat_micro_f1 = f1_score(algcat_targs, algcat_preds, average='micro')
algcat_macro_f1 = f1_score(algcat_targs, algcat_preds, average='macro')
print(f'Hamming loss = {algcat_hl}\nMicro_F1 = {algcat_micro_f1}\nMacro_F1 = {algcat_macro_f1}')
###Output
Hamming loss = 0.13264462809917354
Micro_F1 = 0.30835527890830744
Macro_F1 = 0.17317862239168946
|
sandbox/jupyter notebooks/Test bed.ipynb | ###Markdown
Test Bed
###Code
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
module_path = os.path.abspath(os.path.join('../..'))
if module_path not in sys.path:
sys.path.append(module_path)
import numpy as np
from src.gen_spectra import gen_spectrum, get_precursor
from src.objects import Spectrum
from src.utils import insort_by_index, make_sparse_array, ppm_to_da
from src.scoring import scoring
from src import main
from src.params import OUTPUT_DIRECTORY
from collections import namedtuple
# run hyped search with the params
parameters = namedtuple('parameteres', 'params')
main.main(parameters('True'))
# load the values
import json
all_results = json.load(open(OUTPUT_DIRECTORY + 'summary.json', 'r'))
for i, entry in all_results.items():
print(f'---------- Alignments for spectrum {i} ----------')
for a in entry['alignments']:
print(f'{a["sequence"]} \t b score: {a["b_score"]} \t y score: {a["y_score"]}')
gen_spectrum('MALWAR', ion='y', charge=1)['spectrum']
###Output
_____no_output_____ |
demos/demo_reactive_planners_solo12_step_adjustment_walk.ipynb | ###Markdown
Writing the same setup as DGH code
###Code
import dynamic_graph_head as dgh
from dynamic_graph_head import ThreadHead, SimHead, SimVicon, HoldPDController
bullet_env = BulletEnvWithGround()
# Create a robot instance. This initializes the simulator as well.
robot = Solo12Robot()
bullet_env.add_robot(robot)
import pinocchio as pin
import mim_control_cpp
class CentroidalController:
def __init__(self, head, vicon_name, mu, kp, kd, kc, dc, kb, db, qp_weights=[5e5, 5e5, 5e5, 1e6, 1e6, 1e6]):
self.set_k(kp, kd)
self.config = Solo12Config
self.robot = Solo12Config.buildRobotWrapper()
self.vicon_name = vicon_name
self.x_com = [0.0, 0.0, 0.20]
self.xd_com = [0.0, 0.0, 0.0]
self.x_des = np.array([
0.2, 0.142, 0.015, 0.2, -0.142, 0.015,
-0.2, 0.142, 0.015, -0.2, -0.142, 0.015
])
self.xd_des = np.array(4*[0., 0., 0.])
self.x_ori = [0., 0., 0., 1.]
self.x_angvel = [0., 0., 0.]
self.cnt_array = 4*[1,]
self.w_com = np.zeros(6)
q_init = np.zeros(19)
q_init[7] = 1
self.centrl_pd_ctrl = mim_control_cpp.CentroidalPDController()
self.centrl_pd_ctrl.initialize(2.5, np.diag(self.robot.mass(q_init)[3:6, 3:6]))
self.force_qp = mim_control_cpp.CentroidalForceQPController()
self.force_qp.initialize(4, mu, np.array(qp_weights))
root_name = 'universe'
endeff_names = ['FL_ANKLE', 'FR_ANKLE', 'HL_ANKLE', 'HR_ANKLE']
self.imp_ctrls = [mim_control_cpp.ImpedanceController() for eff_name in endeff_names]
for i, c in enumerate(self.imp_ctrls):
c.initialize(self.robot.model, root_name, endeff_names[i])
self.kc = np.array(kc)
self.dc = np.array(dc)
self.kb = np.array(kb)
self.db = np.array(db)
self.joint_positions = head.get_sensor('joint_positions')
self.joint_velocities = head.get_sensor('joint_velocities')
self.slider_positions = head.get_sensor('slider_positions')
self.imu_gyroscope = head.get_sensor('imu_gyroscope')
def set_k(self, kp, kd):
self.kp = 4 * [kp, kp, kp, 0, 0, 0]
self.kd = 4 * [kd, kd, kd, 0, 0, 0]
def warmup(self, thread_head):
thread_head.vicon.bias_position(self.vicon_name)
self.zero_sliders = self.slider_positions.copy()
def get_base(self, thread_head):
base_pos, base_vel = thread_head.vicon.get_state(self.vicon_name)
base_vel[3:] = self.imu_gyroscope
return base_pos, base_vel
def compute_F(self, thread_head):
ext_cnt_array = [1., 1., 1., 1.]
self.force_qp.run(self.w_com, self.rel_eff, ext_cnt_array)
self.F = self.force_qp.get_forces()
def run(self, thread_head):
base_pos, base_vel = self.get_base(thread_head)
self.q = np.hstack([base_pos, self.joint_positions])
self.dq = np.hstack([base_vel, self.joint_velocities])
self.centrl_pd_ctrl.run(
self.kc, self.dc, self.kb, self.db,
self.q[:3], self.x_com, self.dq[:3], self.xd_com,
self.q[3:7], self.x_ori, self.dq[3:6], self.x_angvel
)
self.w_com = self.centrl_pd_ctrl.get_wrench()
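# add the robot's weight (m * g) to the vertical wrench component as gravity feed-forward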
self.w_com[2] += 9.81 * Solo12Config.mass
# distributing forces to the active end effectors
pin_robot = self.robot
pin_robot.framesForwardKinematics(self.q)
com = self.com = pin_robot.com(self.q)
self.rel_eff = np.array([
pin_robot.data.oMf[i].translation - com for i in Solo12Config.end_eff_ids
]).reshape(-1)
self.compute_F(thread_head)
# passing forces to the impedance controller
self.tau = np.zeros(18)
for i, c in enumerate(self.imp_ctrls):
c.run(self.q, self.dq,
np.array(self.kp[6*i:6*(i+1)]),
np.array(self.kd[6*i:6*(i+1)]),
1.,
pin.SE3(np.eye(3), np.array(self.x_des[3*i:3*(i+1)])),
pin.Motion(self.xd_des[3*i:3*(i+1)], np.zeros(3)),
pin.Force(self.F[3*i:3*(i+1)], np.zeros(3))
)
self.tau += c.get_torques()
head.set_control('ctrl_joint_torques', self.tau[6:])
class ReactiveStepperController(CentroidalController):
def __init__(self, head):
super().__init__(
head, 'solo12/solo12', 0.6,
50, 0.7,
[0, 0, 200], [10, 10, 10], [25, 25, 25.], [22.5, 22.5, 22.5],
qp_weights=[1e0, 1e0, 1e6, 1e6, 1e6, 1e6]
)
base_pos, _ = self.get_base(thread_head)
q = np.hstack([base_pos, self.joint_positions])
is_left_leg_in_contact = True
l_min = -0.1
l_max = 0.1
w_min = -0.08
w_max = 0.2
t_min = 0.2
t_max = 1.0
l_p = 0.00 # Pelvis width
com_height = 0.25
weight = [1, 1, 5, 1000, 1000, 100000, 100000, 100000, 100000]
mid_air_foot_height = 0.05
control_period = 0.001
planner_loop = 0.010
# init poses
self.robot.framesForwardKinematics(q)
base_pose = q[:7]
front_left_foot_position = self.robot.data.oMf[
self.config.end_eff_ids[0]].translation
front_right_foot_position = self.robot.data.oMf[
self.config.end_eff_ids[1]].translation
hind_left_foot_position = self.robot.data.oMf[
self.config.end_eff_ids[2]].translation
hind_right_foot_position = self.robot.data.oMf[
self.config.end_eff_ids[3]].translation
self.stepper = QuadrupedDcmReactiveStepper()
self.stepper.initialize(
is_left_leg_in_contact,
l_min,
l_max,
w_min,
w_max,
t_min,
t_max,
l_p,
com_height,
weight,
mid_air_foot_height,
control_period,
planner_loop,
base_pose,
front_left_foot_position,
front_right_foot_position,
hind_left_foot_position,
hind_right_foot_position,
)
self.stepper.set_dynamical_end_effector_trajectory()
self.stepper.set_desired_com_velocity(np.array([0.0, 0.0, 0.0]))
self.x_com[2] = com_height
def warmup(self, thread_head):
super().warmup(thread_head)
self.stepper.start()
self.control_time = 0.
def compute_F(self, thread_head):
config = self.config
robot = self.robot
x_com, xd_com = self.robot.com(self.q, self.dq)
robot.forwardKinematics(self.q, self.dq)
robot.framesForwardKinematics(self.q)
# Define left as front left and back right leg
front_left_foot_position = robot.data.oMf[config.end_eff_ids[0]].translation
front_right_foot_position = robot.data.oMf[config.end_eff_ids[1]].translation
hind_left_foot_position = robot.data.oMf[config.end_eff_ids[2]].translation
hind_right_foot_position = robot.data.oMf[config.end_eff_ids[3]].translation
front_left_foot_velocity = pin.getFrameVelocity(
robot.model, robot.data, config.end_eff_ids[0], pin.LOCAL_WORLD_ALIGNED).linear
front_right_foot_velocity = pin.getFrameVelocity(
robot.model, robot.data, config.end_eff_ids[1], pin.LOCAL_WORLD_ALIGNED).linear
hind_left_foot_velocity = pin.getFrameVelocity(
robot.model, robot.data, config.end_eff_ids[2], pin.LOCAL_WORLD_ALIGNED).linear
hind_right_foot_velocity = pin.getFrameVelocity(
robot.model, robot.data, config.end_eff_ids[3], pin.LOCAL_WORLD_ALIGNED).linear
open_loop = True
self.stepper.run(
self.control_time,
front_left_foot_position,
front_right_foot_position,
hind_left_foot_position,
hind_right_foot_position,
front_left_foot_velocity,
front_right_foot_velocity,
hind_left_foot_velocity,
hind_right_foot_velocity,
x_com,
xd_com,
yaw(self.q),
not open_loop,
)
cnt_array = self.stepper.get_contact_array()
self.force_qp.run(self.w_com, self.rel_eff, cnt_array)
self.F = self.force_qp.get_forces()
# dcm_forces = self.stepper.get_forces()
# if cnt_array[0] == 1:
# self.F[3:6] = -dcm_forces[6:9]
# self.F[6:9] = -dcm_forces[6:9]
# else:
# self.F[0:3] = -dcm_forces[:3]
# self.F[9:12] = -dcm_forces[:3]
self.x_des = np.hstack([
self.stepper.get_front_left_foot_position(),
self.stepper.get_front_right_foot_position(),
self.stepper.get_hind_left_foot_position(),
self.stepper.get_hind_right_foot_position()
])
self.dx_des = np.hstack([
self.stepper.get_front_left_foot_velocity(),
self.stepper.get_front_right_foot_velocity(),
self.stepper.get_hind_left_foot_velocity(),
self.stepper.get_hind_right_foot_velocity(),
])
self.control_time += 0.001
head = SimHead(robot, vicon_name='solo12')
thread_head = ThreadHead(
0.001, # dt.
HoldPDController(head, 3., 0.05, True), # Safety controllers.
head, # Heads to read / write from.
[ # Utils.
('vicon', SimVicon(['solo12/solo12']))
],
bullet_env # Environment to step.
)
q, dq = np.array(Solo12Config.initial_configuration), np.array(Solo12Config.initial_velocity)
q[0] = 0.
head.reset_state(q, dq)
thread_head.sim_run(1)
thread_head.switch_controllers(thread_head.safety_controllers)
thread_head.safety_controllers[0].warmup(thread_head)
thread_head.sim_run(1000)
centroidal_controller = CentroidalController(head, 'solo12/solo12', 0.2, 50., 0.7,
[100., 100., 100.], [15., 15., 15.], [25., 25., 25.], [22.5, 22.5, 22.5]
)
thread_head.switch_controllers(centroidal_controller)
ctrl = ReactiveStepperController(head)
thread_head.switch_controllers(ctrl)
thread_head.start_streaming()
thread_head.sim_run(60000)
###Output
Not logging 'kp' as field type '<class 'list'>' is unsupported
Not logging 'kd' as field type '<class 'list'>' is unsupported
Not logging 'config' as field type '<class 'type'>' is unsupported
Not logging 'robot' as field type '<class 'pinocchio.robot_wrapper.RobotWrapper'>' is unsupported
Not logging 'vicon_name' as field type '<class 'str'>' is unsupported
Not logging 'x_com' as field type '<class 'list'>' is unsupported
Not logging 'xd_com' as field type '<class 'list'>' is unsupported
Not logging 'x_ori' as field type '<class 'list'>' is unsupported
Not logging 'x_angvel' as field type '<class 'list'>' is unsupported
Not logging 'cnt_array' as field type '<class 'list'>' is unsupported
Not logging 'centrl_pd_ctrl' as field type '<class 'mim_control_cpp.CentroidalPDController'>' is unsupported
Not logging 'force_qp' as field type '<class 'mim_control_cpp.CentroidalForceQPController'>' is unsupported
Not logging 'imp_ctrls' as field type '<class 'list'>' is unsupported
Not logging 'stepper' as field type '<class 'reactive_planners_cpp.QuadrupedDcmReactiveStepper'>' is unsupported
!!! ThreadHead: Start streaming data.
|
homeworks/D029/Day_029_HW.ipynb | ###Markdown
Homework: (Kaggle) Titanic Survival Prediction [Homework Goal] - Try to follow the example and practice computing and observing feature importance for the Titanic survival prediction [Key Points] - Following the example, complete the feature-importance calculation and observe its effect on the prediction results (In[3]~[5], Out[3]~[5]) - Following the example, combine the two features with the highest importance into a new feature and observe its effect on the prediction results (In[8], Out[8])
###Code
# All preparation before feature engineering (same as the previous example)
import pandas as pd
import numpy as np
import copy
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
data_path = 'data/'
df = pd.read_csv(data_path + 'titanic_train.csv')
train_Y = df['Survived']
df = df.drop(['PassengerId', 'Survived'] , axis=1)
df.head()
# Since both categorical and numeric features need to be included, use the simplest version of feature engineering
LEncoder = LabelEncoder()
MMEncoder = MinMaxScaler()
for c in df.columns:
df[c] = df[c].fillna(-1)
if df[c].dtype == 'object':
df[c] = LEncoder.fit_transform(list(df[c].values))
df[c] = MMEncoder.fit_transform(df[c].values.reshape(-1, 1))
df.head()
# After fitting the gradient boosting tree, sort the features by importance from high to low (note: in the D27 homework, 'Ticket' was the top feature and 'Age' was the highest-ranked numeric feature)
estimator = GradientBoostingClassifier()
estimator.fit(df.values, train_Y)
feats = pd.Series(data=estimator.feature_importances_, index=df.columns)
feats = feats.sort_values(ascending=False)
feats
###Output
_____no_output_____
###Markdown
First train a gradient boosting machine on the Titanic survival prediction, then use its feature importances to answer the questions below. Homework 1* After deleting the half of the features with lower importance, redo the survival prediction; does the accuracy change?
###Code
# Original features + gradient boosting tree
train_X = MMEncoder.fit_transform(df)
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
# High-importance features + gradient boosting tree
"""
Your Code Here
"""
high_feature = list(feats[:5].index)
train_X = MMEncoder.fit_transform(df[high_feature])
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
###Output
_____no_output_____
###Markdown
Homework 2* Combine the two features with the highest importance into new features; can this further improve the predictive power?
###Code
# Observe the distribution of the important features against the target
# Top feature: Ticket
import seaborn as sns
import matplotlib.pyplot as plt
sns.regplot(x=df['Ticket'], y=train_Y, fit_reg=False)
plt.show()
# Second feature: Name
sns.regplot(x=df['Name'], y=train_Y, fit_reg=False)
plt.show()
# Create new features and check the effect
"""
Your Code Here
"""
df['Add_char'] = (df['Ticket'] + df['Name']) / 2
df['Multi_char'] = df['Ticket'] * df['Name']
df['TN_div1p'] = df['Ticket'] / (df['Name']+1) * 2
df['NT_div1p'] = df['Name'] / (df['Ticket']+1) * 2
train_X = MMEncoder.fit_transform(df)
cross_val_score(estimator, train_X, train_Y, cv=5).mean()
###Output
_____no_output_____ |
Codes/02Crab_Age_Prediction_EDA2.ipynb | ###Markdown
Crab Age Prediction EDA Problem Statement: The file data.csv contains the observations of a study on crabs found around the Boston Area. The challenge consists of making some sense out of those data. We're notably looking for a method to predict the age of a crab given its features. Part 2: EDA: Exploratory Data Analysis (Part 2)
###Code
# Import required libraries
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
np.random.seed(0)
os.getcwd()
# Read the csv file data
os.chdir('..\\Data\\')
df = pd.read_csv('data_treated.csv')
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
2.1: Box Plots for All Variables
###Code
# Change directory to Images
os.chdir("..\\Images")
###Output
_____no_output_____
###Markdown
###Code
# Draw box plots for the treated dataset to visualize whether any outliers are left
variables = list(df.columns)
no_rows = int(np.ceil(len(variables)/3))
plt.rcParams['figure.figsize'] = [15, 5*no_rows]
for i in range(0,len(variables)):
plt.subplot(no_rows,3,(i+1))
plt.boxplot(df[variables[i]])
plt.title("Box plot:" + str(variables[i]))
plt.savefig("BoxPlots_All.png")
plt.show()
###Output
_____no_output_____
###Markdown
2.2: Scatter Plot for all the Variables
###Code
# Draw scatter plot for the treated dataset
variables = list(df.columns)
variables.remove("Age")
no_rows = int(np.ceil(len(variables)/3))
plt.rcParams['figure.figsize'] = [15, 5*no_rows]
for i in range(0,len(variables)):
plt.subplot(no_rows,3,(i+1))
plt.scatter(df[variables[i]], df["Age"])
plt.xlabel(variables[i])
plt.ylabel("Age")
plt.title(str(variables[i]) + " Vs Age")
plt.savefig("ScatterPlotsAll.png")
plt.show()
###Output
_____no_output_____
###Markdown
Comments: Based on the plots, all variables are within the limits (with no extreme observations). Other than the gender variables (Sex_I, Sex_F, Sex_M), all variables have a positive linear relationship with Age (based on the scatter plots). 2.3: One-on-One Trend Relationship with Age and Gender
###Code
def plot_gender_diff(df, var1):
# subset the data
f_df = df[df["Sex_F"]==1][[var1, "Age"]]
m_df = df[df["Sex_M"]==1][[var1, "Age"]]
i_df = df[df["Sex_I"]==1][[var1, "Age"]]
# Group by Age
f_df = f_df.groupby("Age").mean()
m_df = m_df.groupby("Age").mean()
i_df = i_df.groupby("Age").mean()
plt.rcParams['figure.figsize'] = [8, 5]
# Plot the figures
plt.plot(i_df.index, i_df[var1], 'b-', label="Gender: I")
plt.plot(f_df.index, f_df[var1], 'r*', label="Gender: F")
plt.plot(m_df.index, m_df[var1], 'yo', label="Gender: M")
plt.xlabel("Age")
plt.ylabel(str(var1))
plt.title("Average {} observed for different genders by age".format(var1))
plt.legend()
plt.savefig(f'Age_Gender_{var1}.png')
plt.show()
return
###Output
_____no_output_____
###Markdown
2.3.1: Length
###Code
plot_gender_diff(df, "Length")
###Output
_____no_output_____
###Markdown
__Comments:__ The average length of a Male and Female crab is very similar to one another. The crabs with "Intermediate" gender have a lower length compared to the M & F genders. 2.3.2: Diameter
###Code
plot_gender_diff(df, "Diameter")
###Output
_____no_output_____
###Markdown
__Comments:__ The average diameter of a Male and Female crab is very similar to one another. The crabs with "Intermediate" gender have a lower diameter compared to the M & F genders. 2.3.3: Height
###Code
plot_gender_diff(df, "Height")
###Output
_____no_output_____
###Markdown
__Comments:__ The average height of a Male and Female crab is very similar to one another. The crabs with "Intermediate" gender have a lower height compared to the M & F genders. 2.3.4: Weight
###Code
plot_gender_diff(df, "Weight")
###Output
_____no_output_____
###Markdown
__Comments:__ The average weight of a Male and Female crab is very similar to one another. The crabs with "Intermediate" gender have a lower weight compared to the M & F genders. 2.3.5: Shell Weight
###Code
plot_gender_diff(df, "Shell Weight")
###Output
_____no_output_____ |
_doc/notebooks/sklearn/visualize_pipeline.ipynb | ###Markdown
Visualize a scikit-learn pipelinePipelines can be big with *scikit-learn*; let's dig into a visual way to look at them.
###Code
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
import warnings
warnings.simplefilter("ignore")
###Output
_____no_output_____
###Markdown
Simple modelLet's visualize a simple pipeline, a single model that is not even trained.
###Code
import pandas
from sklearn import datasets
from sklearn.linear_model import LogisticRegression
iris = datasets.load_iris()
X = iris.data[:, :4]
df = pandas.DataFrame(X)
df.columns = ["X1", "X2", "X3", "X4"]
clf = LogisticRegression()
clf
###Output
_____no_output_____
###Markdown
The trick consists of converting the pipeline into a graph through the [DOT](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) language.
###Code
from mlinsights.plotting import pipeline2dot
dot = pipeline2dot(clf, df)
print(dot)
###Output
digraph{
orientation=portrait;
nodesep=0.05;
ranksep=0.25;
sch0[label="<f0> X1|<f1> X2|<f2> X3|<f3> X4",shape=record,fontsize=8];
node1[label="union",shape=box,style="filled,rounded",color=cyan,fontsize=12];
sch0:f0 -> node1;
sch0:f1 -> node1;
sch0:f2 -> node1;
sch0:f3 -> node1;
sch1[label="<f0> -v-0",shape=record,fontsize=8];
node1 -> sch1:f0;
node2[label="LogisticRegression",shape=box,style="filled,rounded",color=yellow,fontsize=12];
sch1:f0 -> node2;
sch2[label="<f0> PredictedLabel|<f1> Probabilities",shape=record,fontsize=8];
node2 -> sch2:f0;
node2 -> sch2:f1;
}
###Markdown
It is a lot better with an image.
###Code
dot_file = "graph.dot"
with open(dot_file, "w", encoding="utf-8") as f:
f.write(dot)
# might be needed on windows
import sys
import os
if sys.platform.startswith("win") and "Graphviz" not in os.environ["PATH"]:
os.environ['PATH'] = os.environ['PATH'] + r';C:\Program Files (x86)\Graphviz2.38\bin'
from pyquickhelper.loghelper import run_cmd
cmd = "dot -G=300 -Tpng {0} -o{0}.png".format(dot_file)
run_cmd(cmd, wait=True, fLOG=print);
from PIL import Image
img = Image.open("graph.dot.png")
img
###Output
_____no_output_____
###Markdown
Complex pipeline*scikit-learn* introduced a couple of transforms to play with features in a single pipeline. The following example is taken from [Column Transformer with Mixed Types](https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.htmlsphx-glr-auto-examples-compose-plot-column-transformer-mixed-types-py).
###Code
from sklearn import datasets
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.linear_model import LogisticRegression
columns = ['pclass', 'name', 'sex', 'age', 'sibsp', 'parch', 'ticket', 'fare',
'cabin', 'embarked', 'boat', 'body', 'home.dest']
numeric_features = ['age', 'fare']
numeric_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])
categorical_features = ['embarked', 'sex', 'pclass']
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))])
preprocessor = ColumnTransformer(
transformers=[
('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categorical_features),
])
clf = Pipeline(steps=[('preprocessor', preprocessor),
('classifier', LogisticRegression(solver='lbfgs'))])
clf
###Output
_____no_output_____
###Markdown
Let's see it first as a simplified text.
###Code
from mlinsights.plotting import pipeline2str
print(pipeline2str(clf))
dot = pipeline2dot(clf, columns)
dot_file = "graph2.dot"
with open(dot_file, "w", encoding="utf-8") as f:
f.write(dot)
cmd = "dot -G=300 -Tpng {0} -o{0}.png".format(dot_file)
run_cmd(cmd, wait=True, fLOG=print);
img = Image.open("graph2.dot.png")
img
###Output
_____no_output_____
###Markdown
With javascript
###Code
from jyquickhelper import RenderJsDot
RenderJsDot(dot)
###Output
_____no_output_____
###Markdown
Example with FeatureUnion
###Code
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures
model = Pipeline([('poly', PolynomialFeatures()),
('union', FeatureUnion([
('scaler2', MinMaxScaler()),
('scaler3', StandardScaler())]))])
dot = pipeline2dot(model, columns)
RenderJsDot(dot)
###Output
_____no_output_____
###Markdown
Compute intermediate outputsIt is difficult to access intermediate outputs with *scikit-learn* but it may be interesting to do so. The method [alter_pipeline_for_debugging](find://alter_pipeline_for_debugging) modifies the pipeline to intercept intermediate outputs.
###Code
from numpy.random import randn
model = Pipeline([('scaler1', StandardScaler()),
('union', FeatureUnion([
('scaler2', StandardScaler()),
('scaler3', MinMaxScaler())])),
('lr', LinearRegression())])
X = randn(4, 5)
y = randn(4)
model.fit(X, y)
print(pipeline2str(model))
###Output
Pipeline
StandardScaler
FeatureUnion
StandardScaler
MinMaxScaler
LinearRegression
###Markdown
Let's now modify the pipeline to get the intermediate outputs.
###Code
from mlinsights.helpers.pipeline import alter_pipeline_for_debugging
alter_pipeline_for_debugging(model)
###Output
_____no_output_____
###Markdown
The function adds a member ``_debug`` which stores inputs and outputs in every piece of the pipeline.
###Code
model.steps[0][1]._debug
model.predict(X)
###Output
_____no_output_____
###Markdown
The member was populated with inputs and outputs.
###Code
model.steps[0][1]._debug
###Output
_____no_output_____
###Markdown
Every piece behaves the same way.
###Code
from mlinsights.helpers.pipeline import enumerate_pipeline_models
for coor, model, vars in enumerate_pipeline_models(model):
print(coor)
print(model._debug)
###Output
(0,)
BaseEstimatorDebugInformation(Pipeline)
predict(
shape=(4, 5) type=<class 'numpy.ndarray'>
[[ 1.22836841 2.35164607 -0.37367786 0.61490475 -0.45377634]
[-0.77187962 0.43540786 0.20465106 0.8910651 -0.23104796]
[-0.36750208 0.35154324 1.78609517 -1.59325463 1.51595267]
[ 1.37547609 1.59470748 -0.5932628 0.57822003 0.56034736]]
) -> (
shape=(4,) type=<class 'numpy.ndarray'>
[ 0.73619378 0.87936142 -0.56528874 -0.2675163 ]
)
(0, 0)
BaseEstimatorDebugInformation(StandardScaler)
transform(
shape=(4, 5) type=<class 'numpy.ndarray'>
[[ 1.22836841 2.35164607 -0.37367786 0.61490475 -0.45377634]
[-0.77187962 0.43540786 0.20465106 0.8910651 -0.23104796]
[-0.36750208 0.35154324 1.78609517 -1.59325463 1.51595267]
[ 1.37547609 1.59470748 -0.5932628 0.57822003 0.56034736]]
) -> (
shape=(4, 5) type=<class 'numpy.ndarray'>
[[ 0.90946066 1.4000516 -0.67682808 0.49311806 -1.03765861]
[-1.20030006 -0.89626498 -0.05514595 0.76980985 -0.7493565 ]
[-0.77378303 -0.99676381 1.64484777 -1.71929067 1.51198065]
[ 1.06462242 0.49297719 -0.91287374 0.45636275 0.27503446]]
)
(0, 1)
BaseEstimatorDebugInformation(FeatureUnion)
transform(
shape=(4, 5) type=<class 'numpy.ndarray'>
[[ 0.90946066 1.4000516 -0.67682808 0.49311806 -1.03765861]
[-1.20030006 -0.89626498 -0.05514595 0.76980985 -0.7493565 ]
[-0.77378303 -0.99676381 1.64484777 -1.71929067 1.51198065]
[ 1.06462242 0.49297719 -0.91287374 0.45636275 0.27503446]]
) -> (
shape=(4, 10) type=<class 'numpy.ndarray'>
[[ 0.90946066 1.4000516 -0.67682808 0.49311806 -1.03765861 0.93149357
1. 0.09228748 0.88883864 0. ]
[-1.20030006 -0.89626498 -0.05514595 0.76980985 -0.7493565 0.
0.04193015 0.33534839 1. 0.11307564]
[-0.77378303 -0.99676381 1.64484777 -1.71929067 1.51198065 0.18831419
...
)
(0, 1, 0)
BaseEstimatorDebugInformation(StandardScaler)
transform(
shape=(4, 5) type=<class 'numpy.ndarray'>
[[ 0.90946066 1.4000516 -0.67682808 0.49311806 -1.03765861]
[-1.20030006 -0.89626498 -0.05514595 0.76980985 -0.7493565 ]
[-0.77378303 -0.99676381 1.64484777 -1.71929067 1.51198065]
[ 1.06462242 0.49297719 -0.91287374 0.45636275 0.27503446]]
) -> (
shape=(4, 5) type=<class 'numpy.ndarray'>
[[ 0.90946066 1.4000516 -0.67682808 0.49311806 -1.03765861]
[-1.20030006 -0.89626498 -0.05514595 0.76980985 -0.7493565 ]
[-0.77378303 -0.99676381 1.64484777 -1.71929067 1.51198065]
[ 1.06462242 0.49297719 -0.91287374 0.45636275 0.27503446]]
)
(0, 1, 1)
BaseEstimatorDebugInformation(MinMaxScaler)
transform(
shape=(4, 5) type=<class 'numpy.ndarray'>
[[ 0.90946066 1.4000516 -0.67682808 0.49311806 -1.03765861]
[-1.20030006 -0.89626498 -0.05514595 0.76980985 -0.7493565 ]
[-0.77378303 -0.99676381 1.64484777 -1.71929067 1.51198065]
[ 1.06462242 0.49297719 -0.91287374 0.45636275 0.27503446]]
) -> (
shape=(4, 5) type=<class 'numpy.ndarray'>
[[0.93149357 1. 0.09228748 0.88883864 0. ]
[0. 0.04193015 0.33534839 1. 0.11307564]
[0.18831419 0. 1. 0. 1. ]
[1. 0.62155016 0. 0.87407214 0.51485443]]
)
(0, 2)
BaseEstimatorDebugInformation(LinearRegression)
predict(
shape=(4, 10) type=<class 'numpy.ndarray'>
[[ 0.90946066 1.4000516 -0.67682808 0.49311806 -1.03765861 0.93149357
1. 0.09228748 0.88883864 0. ]
[-1.20030006 -0.89626498 -0.05514595 0.76980985 -0.7493565 0.
0.04193015 0.33534839 1. 0.11307564]
[-0.77378303 -0.99676381 1.64484777 -1.71929067 1.51198065 0.18831419
...
) -> (
shape=(4,) type=<class 'numpy.ndarray'>
[ 0.73619378 0.87936142 -0.56528874 -0.2675163 ]
)
|
paper_figures/fig-12-reconstruction-losses-cat.ipynb | ###Markdown
CGAN
###Code
fig, ax = plot_shaded_fidelities(fdict['fidelities-gan_l1_0'], x=x, color=color_dict['fidelities-gan_l1_0'],
label=labels['fidelities-gan_l1_0'])
for key in ['fidelities-gan_l1_1', 'fidelities-gan_l1_10', 'fidelities-gan_l1_100']:
fig, ax = plot_shaded_fidelities(fdict[key], x=x, fig=fig, ax=ax, color=color_dict[key], label=labels[key])
plt.legend(frameon=False)
plt.title("QST-CGAN + L1 loss")
plt.show()
# plt.savefig(figpath+"fig-12-gan-cat.pdf", bbox_inches = "tight", pad_inches=0)
bmat = loadmat("data/cat-2-0-0/matlab-data/cat_reconstructed.mat")
pgd_fidelities = bmat['fmatrix']
key = 'fidelities-imle'
fig, ax = plot_shaded_fidelities(fdict[key], x=x, color=color_dict[key], label=labels[key])
ax.semilogx(x, np.mean(pgd_fidelities, 0), "--", color="black", label="APG-MLE")
plt.title("MLE")
plt.legend(loc="lower left")
plt.show()
# plt.savefig(figpath+"fig-12-imle-apg-cat.pdf", bbox_inches = "tight", pad_inches=0)
###Output
/Users/shahnawaz/Dropbox/phd/tomography/manuscript/code/qst-nn/qst_nn/utils.py:599: RuntimeWarning: Mean of empty slice
return np.nanmean(arr, axis=0), np.nanstd(arr, axis=0)
/Users/shahnawaz/miniconda3/envs/qst-nn/lib/python3.9/site-packages/numpy/lib/nanfunctions.py:1664: RuntimeWarning: Degrees of freedom <= 0 for slice.
var = nanvar(a, axis=axis, dtype=dtype, out=out, ddof=ddof,
###Markdown
Show the state
###Code
hilbert_size = 32
# Betas can be selected in a grid or randomly in a circle
num_grid = 32
num_points = num_grid*num_grid
beta_max_x = 5
beta_max_y = 5
xvec = np.linspace(-beta_max_x, beta_max_x, num_grid)
yvec = np.linspace(-beta_max_y, beta_max_y, num_grid)
X, Y = np.meshgrid(xvec, yvec)
betas = (X + 1j*Y).ravel()
# betas = [random_alpha(5) for i in range(num_grid*num_grid)]
psi = coherent(32, 2) + coherent(32, -2)
psi = psi.unit()
rho = psi*psi.dag()
x = qfunc(rho, xvec, yvec, g=2)
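# qutip's qfunc evaluates the Husimi Q function of rho on the (xvec, yvec) phase-space grid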
fig, ax = plt.subplots(1, 1, figsize=(2*fig_width/5, 2*fig_width/5))
im = ax.pcolor(xvec, yvec, x/np.max(x), cmap="hot", vmin=0, vmax=1)
ax.set_aspect("equal")
ax.set_yticks([-5, 0, 5])
ax.set_xlabel(r"Re$(\beta)$", labelpad=0)
ax.set_ylabel(r"Im$(\beta)$", labelpad=-8)
cbar = plt.colorbar(im, fraction=0.046, ticks=[0, 0.5, 1])
cbar.ax.set_yticklabels(["0", "0.5", "1"])
# plt.savefig(figpath+"fig-12-cat-data.pdf", bbox_inches = "tight", pad_inches=0)
plt.show()
def psi_cat(hilbert_size, alpha=None, S=None, mu=None):
"""
Generates a cat state. For a detailed discussion on the definition
see `Albert, Victor V. et al. “Performance and Structure of Single-Mode Bosonic Codes.” Physical Review A 97.3 (2018) <https://arxiv.org/abs/1708.05010>`_
and `Ahmed, Shahnawaz et al., “Classification and reconstruction of quantum states with neural networks.” Journal <https://arxiv.org/abs/1708.05010>`_
Args:
-----
N (int): Hilbert size dimension.
alpha (complex64): Complex number determining the amplitude.
S (int): An integer >= 0 determining the number of coherent states used
to generate the cat superposition. S = {0, 1, 2, ...}.
corresponds to {2, 4, 6, ...} coherent state superpositions.
mu (int): An integer 0/1 which generates the logical 0/1 encoding of
a computational state for the cat state.
Returns:
-------
cat (:class:`qutip.Qobj`): Cat state density matrix
"""
if alpha == None:
alpha = random_alpha(2, 3)
if S == None:
S = np.random.randint(0, 3)
if mu is None:
mu = np.random.randint(0, 2)
kend = 2 * S + 1
cstates = 0 * (coherent(hilbert_size, 0))
for k in range(0, int((kend + 1) / 2)):
sign = 1
if k >= S:
sign = (-1) ** int(mu > 0.5)
prefactor = np.exp(1j * (np.pi / (S + 1)) * k)
cstates += sign * coherent(hilbert_size, prefactor * alpha * (-((1j) ** mu)))
cstates += sign * coherent(hilbert_size, -prefactor * alpha * (-((1j) ** mu)))
return cstates.unit()
a = destroy(hilbert_size)
psi = psi_cat(hilbert_size, alpha=2, S=0, mu=0)
psi2 = a*psi
rho_2 = psi2*psi2.dag()
rho_2 = rho_2.unit()
rho2 = a*rho*a.dag()
rho2 = rho2.unit()
plot_wigner_fock_distribution(rho2)
hilbert_size = 32
# Betas can be selected in a grid or randomly in a circle
num_grid = 32
num_points = num_grid*num_grid
beta_max_x = 5
beta_max_y = 5
xvec = np.linspace(-beta_max_x, beta_max_x, num_grid)
yvec = np.linspace(-beta_max_y, beta_max_y, num_grid)
x = qfunc(rho_2/rho_2.tr(), xvec, yvec, g=2)
fig, ax = plt.subplots(1, 1, figsize=(2*fig_width/5, 2*fig_width/5))
im = ax.pcolor(xvec, yvec, x/np.max(x), cmap="hot", vmin=0, vmax=1)
ax.set_aspect("equal")
ax.set_yticks([-5, 0, 5])
ax.set_xlabel(r"Re$(\beta)$", labelpad=0)
ax.set_ylabel(r"Im$(\beta)$", labelpad=-8)
cbar = plt.colorbar(im, fraction=0.046, ticks=[0, 0.5, 1])
cbar.ax.set_yticklabels(["0", "0.5", "1"])
# plt.savefig(figpath+"fig-12-cat-orthogonal-data.pdf", bbox_inches = "tight", pad_inches=0)
plt.show()
###Output
/var/folders/8s/tfpsk_fx609f8w7z__yzz9vh0000gn/T/ipykernel_29328/2910761354.py:16: MatplotlibDeprecationWarning: shading='flat' when X and Y have the same dimensions as C is deprecated since 3.3. Either specify the corners of the quadrilaterals with X and Y, or pass shading='auto', 'nearest' or 'gouraud', or set rcParams['pcolor.shading']. This will become an error two minor releases later.
im = ax.pcolor(xvec, yvec, x/np.max(x), cmap="hot", vmin=0, vmax=1)
|
SparseChem_Train_Synt.ipynb | ###Markdown
Output to `experiments/SparseChem`
###Code
# from IPython.core.display import display, HTML
# display(HTML("<style>.container { width:90% !important; }</style>"))
%load_ext autoreload
%autoreload 2
# Copyright (c) 2020 KU Leuven
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
import sparsechem as sc
import scipy.io
import scipy.sparse
import numpy as np
import pandas as pd
import torch
import argparse
import os
import sys
import os.path
import time
import json
import functools
from datetime import datetime
import pprint
import csv
#from apex import amp
from contextlib import redirect_stdout
from sparsechem import Nothing
from torch.utils.data import DataLoader
from torch.optim.lr_scheduler import MultiStepLR
from torch.utils.tensorboard import SummaryWriter
from pytorch_memlab import MemReporter
from pynvml import *
pp = pprint.PrettyPrinter(indent=4)
np.set_printoptions(edgeitems=3, infstr='inf', linewidth=150, nanstr='nan')
torch.set_printoptions( linewidth=132)
# import multiprocessing
# multiprocessing.set_start_method('fork', force=True)
if torch.cuda.is_available():
nvmlInit()
parser = argparse.ArgumentParser(description="Training a multi-task model.")
parser.add_argument("--x", help="Descriptor file (matrix market, .npy or .npz)", type=str, default=None)
parser.add_argument("--y_class", "--y", "--y_classification", help="Activity file (matrix market, .npy or .npz)", type=str, default=None)
parser.add_argument("--y_regr", "--y_regression", help="Activity file (matrix market, .npy or .npz)", type=str, default=None)
parser.add_argument("--y_censor", help="Censor mask for regression (matrix market, .npy or .npz)", type=str, default=None)
parser.add_argument("--weights_class", "--task_weights", "--weights_classification", help="CSV file with columns task_id, training_weight, aggregation_weight, task_type (for classification tasks)", type=str, default=None)
parser.add_argument("--weights_regr", "--weights_regression", help="CSV file with columns task_id, training_weight, censored_weight, aggregation_weight, aggregation_weight, task_type (for regression tasks)", type=str, default=None)
parser.add_argument("--censored_loss", help="Whether censored loss is used for training (default 1)", type=int, default=1)
parser.add_argument("--folding", help="Folding file (npy)", type=str, required=True)
parser.add_argument("--fold_va", help="Validation fold number", type=int, default=0)
parser.add_argument("--fold_te", help="Test fold number (removed from dataset)", type=int, default=None)
parser.add_argument("--batch_ratio", help="Batch ratio", type=float, default=0.02)
parser.add_argument("--internal_batch_max", help="Maximum size of the internal batch", type=int, default=None)
parser.add_argument("--normalize_loss", help="Normalization constant to divide the loss (default uses batch size)", type=float, default=None)
parser.add_argument("--normalize_regression", help="Set this to 1 if the regression tasks should be normalized", type=int, default=0)
parser.add_argument("--normalize_regr_va", help="Set this to 1 if the regression tasks in validation fold should be normalized together with training folds", type=int, default=0)
parser.add_argument("--inverse_normalization", help="Set this to 1 if the regression tasks in validation fold should be inverse normalized at validation time", type=int, default=0)
parser.add_argument("--hidden_sizes", nargs="+", help="Hidden sizes of trunk", default=[], type=int, required=True)
parser.add_argument("--last_hidden_sizes", nargs="+", help="Hidden sizes in the head (if specified , class and reg heads have this dimension)", default=None, type=int)
#parser.add_argument("--middle_dropout", help="Dropout for layers before the last", type=float, default=0.0)
#parser.add_argument("--last_dropout", help="Last dropout", type=float, default=0.2)
parser.add_argument("--weight_decay", help="Weight decay", type=float, default=0.0)
parser.add_argument("--last_non_linearity", help="Last layer non-linearity (depecrated)", type=str, default="relu", choices=["relu", "tanh"])
parser.add_argument("--middle_non_linearity", "--non_linearity", help="Before last layer non-linearity", type=str, default="relu", choices=["relu", "tanh"])
parser.add_argument("--input_transform", help="Transformation to apply to inputs", type=str, default="none", choices=["binarize", "none", "tanh", "log1p"])
parser.add_argument("--lr", help="Learning rate", type=float, default=1e-3)
parser.add_argument("--lr_alpha", help="Learning rate decay multiplier", type=float, default=0.3)
parser.add_argument("--lr_steps", nargs="+", help="Learning rate decay steps", type=int, default=[10])
parser.add_argument("--input_size_freq", help="Number of high importance features", type=int, default=None)
parser.add_argument("--fold_inputs", help="Fold input to a fixed set (default no folding)", type=int, default=None)
parser.add_argument("--epochs", help="Number of epochs", type=int, default=20)
parser.add_argument("--pi_zero", help="Reference class ratio to be used for calibrated aucpr", type=float, default=0.1)
parser.add_argument("--min_samples_class", help="Minimum number samples in each class and in each fold for AUC calculation (only used if aggregation_weight is not provided in --weights_class)", type=int, default=5)
parser.add_argument("--min_samples_auc", help="Obsolete: use 'min_samples_class'", type=int, default=None)
parser.add_argument("--min_samples_regr", help="Minimum number of uncensored samples in each fold for regression metric calculation (only used if aggregation_weight is not provided in --weights_regr)", type=int, default=10)
parser.add_argument("--dev", help="Device to use", type=str, default="cuda:0")
parser.add_argument("--run_name", help="Run name for results", type=str, default=None)
parser.add_argument("--output_dir", help="Output directory, including boards (default 'models')", type=str, default="models")
parser.add_argument("--prefix", help="Prefix for run name (default 'run')", type=str, default='sc')
parser.add_argument("--verbose", help="Verbosity level: 2 = full; 1 = no progress; 0 = no output", type=int, default=2, choices=[0, 1, 2])
parser.add_argument("--save_model", help="Set this to 0 if the model should not be saved", type=int, default=1)
parser.add_argument("--save_board", help="Set this to 0 if the TensorBoard should not be saved", type=int, default=1)
parser.add_argument("--profile", help="Set this to 1 to output memory profile information", type=int, default=0)
parser.add_argument("--mixed_precision", help="Set this to 1 to run in mixed precision mode (vs single precision)", type=int, default=0)
parser.add_argument("--eval_train", help="Set this to 1 to calculate AUCs for train data", type=int, default=0)
parser.add_argument("--enable_cat_fusion", help="Set this to 1 to enable catalogue fusion", type=int, default=0)
parser.add_argument("--eval_frequency", help="The gap between AUC eval (in epochs), -1 means to do an eval at the end.", type=int, default=1)
#hybrid model features
parser.add_argument("--regression_weight", help="between 0 and 1 relative weight of regression loss vs classification loss", type=float, default=0.5)
parser.add_argument("--scaling_regularizer", help="L2 regularizer of the scaling layer, if inf scaling layer is switched off", type=float, default=np.inf)
parser.add_argument("--class_feature_size", help="Number of leftmost features used from the output of the trunk (default: use all)", type=int, default=-1)
parser.add_argument("--regression_feature_size", help="Number of rightmost features used from the output of the trunk (default: use all)", type=int, default=-1)
parser.add_argument("--last_hidden_sizes_reg", nargs="+", help="Hidden sizes in the regression head (overwritten by last_hidden_sizes)", default=None, type=int)
parser.add_argument("--last_hidden_sizes_class", nargs="+", help="Hidden sizes in the classification head (overwritten by last_hidden_sizes)", default=None, type=int)
parser.add_argument("--dropouts_reg" , nargs="+", help="List of dropout values used in the regression head (needs one per last hidden in reg head, ignored if last_hidden_sizes_reg not specified)", default=[], type=float)
parser.add_argument("--dropouts_class", nargs="+", help="List of dropout values used in the classification head (needs one per last hidden in class head, ignored if no last_hidden_sizes_class not specified)", default=[], type=float)
parser.add_argument("--dropouts_trunk", nargs="+", help="List of dropout values used in the trunk", default=[], type=float)
dev = "gpu"
rstr = datetime.now().strftime("%m%d_%H%M")
# data_dir="chembl23_data"
# data_dir="chembl23_run_01152022"
data_dir = "../MLDatasets/chembl23_synthetic"
output_dir = f"../experiments/synt-SparseChem/{rstr}"
print(output_dir)
rm_output=False
# rstr = "synthetic_data_model" ##random_str(12)
# rstr = "synthetic_data_model_03042022" ##random_str(12)
# output_dir = f"./models-{rstr}/"
# output_dir = f"./{data_dir}/models-{rstr}/"
# output dir kbardool/kusanagi/experiments/SparseChem/0116_0843
###Output
../experiments/synt-SparseChem/0409_1753
###Markdown
Two layer network as specified in `https://git.infra.melloddy.eu/wp2/sparsechem/-/blob/master/docs/main.md`
###Code
cmd = (
f" --x {data_dir}/chembl_23mini_x.npy " +
f" --y_class {data_dir}/chembl_23mini_adashare_y_all_bin_sparse.npy " +
f" --folding {data_dir}/chembl_23mini_folds.npy " +
f" --output_dir {output_dir}" +
F" --dev cpu "
f" --fold_va 0 " +
f" --fold_inputs 32000" +
f" --batch_ratio 0.01 " +
f" --hidden_sizes 50 50" +
f" --dropouts_trunk 0 0 " +
f" --weight_decay 1e-4 "
f" --epochs 100 " +
f" --lr 1e-3 " +
f" --lr_steps 10 " +
f" --lr_alpha 0.3" +
f" --prefix sc " +
f" --min_samples_class 1"
)
# f" --eval_train 0 " +
## f" --last_hidden_sizes ?? ""
# cmd = (
# f" --x {data_dir}/chembl_23mini_x.npy " +
# f" --y_class {data_dir}/chembl_23mini_adashare_y_all_bin_sparse.npy " +
# f" --folding {data_dir}/chembl_23mini_folds.npy " +
# f" --output_dir {output_dir}" +
# f" --fold_va 0 " +
# f" --batch_ratio 0.02 " +
# f" --hidden_sizes 40 40 " +
# f" --dropouts_trunk 0 0 " +
# f" --weight_decay 1e-4 " +
# f" --epochs 20 " +
# f" --lr 1e-3 " +
# f" --lr_steps 10 " +
# f" --lr_alpha 0.3 "
# )
# f" --hidden_sizes 25 25 25 25 25 25 " +
# f" --dropouts_trunk 0 0 0 0 0 0 " +
# f" --hidden_sizes 400 400 " +
# f" --last_dropout 0.2 " +
# f" --middle_dropout 0.2 " +
# f" --x ./{data_dir}/chembl_23_x.mtx " +
# f" --y_class ./{data_dir}/chembl_23_y.mtx " +
# f" --folding ./{data_dir}/folding_hier_0.6.npy " +
#### copied from SparseChemDev
# cmd = (
# f" --x ./{data_dir}/chembl_23mini_x.npy" +
# f" --y_class ./{data_dir}/chembl_23mini_y.npy" +
# f" --folding ./{data_dir}/chembl_23mini_folds.npy" +
# f" --hidden_sizes 20 30 40 " +
# f" --output_dir {output_dir}" +
# f" --batch_ratio 0.1" +
# f" --epochs 2" +
# f" --lr 1e-3" +
# f" --lr_steps 1" +
# f" --dev {dev}" +
# f" --verbose 1")
# f" --input_size_freq 40"
# f" --tail_hidden_size 10"
###Output
_____no_output_____
###Markdown
Initializations
###Code
args = parser.parse_args(cmd.split())
# %tb
# args = parser.parse_args()
def vprint(s=""):
if args.verbose:
print(s)
pp.pprint(vars(args))
if args.run_name is not None:
name = args.run_name
else:
name = f"{args.prefix}"
name += f"_{'.'.join([str(h) for h in args.hidden_sizes])}"
# name += f"_do{'.'.join([str(d) for d in args.dropouts_trunk])}"
name += f"_lr{args.lr}"
name += f"_do{args.dropouts_trunk[0]}"
# name += f"_wd{args.weight_decay}"
# name += f"_hs{'.'.join([str(h) for h in args.hidden_sizes])}"
# name += f"_lrsteps{'.'.join([str(s) for s in args.lr_steps])}_ep{args.epochs}"
# name += f"_fva{args.fold_va}_fte{args.fold_te}"
if args.mixed_precision == 1:
name += f"_mixed_precision"
vprint(f"Run name is '{name}'.")
###Output
Run name is 'sc_50.50_lr0.001_do0.0'.
###Markdown
Assertions
###Code
if (args.last_hidden_sizes is not None) and ((args.last_hidden_sizes_class is not None) or (args.last_hidden_sizes_reg is not None)):
raise ValueError("Head specific and general last_hidden_sizes argument were both specified!")
if (args.last_hidden_sizes is not None):
args.last_hidden_sizes_class = args.last_hidden_sizes
args.last_hidden_sizes_reg = args.last_hidden_sizes
if args.last_hidden_sizes_reg is not None:
assert len(args.last_hidden_sizes_reg) == len(args.dropouts_reg), "Number of hiddens and number of dropout values specified must be equal in the regression head!"
if args.last_hidden_sizes_class is not None:
assert len(args.last_hidden_sizes_class) == len(args.dropouts_class), "Number of hiddens and number of dropout values specified must be equal in the classification head!"
if args.hidden_sizes is not None:
assert len(args.hidden_sizes) == len(args.dropouts_trunk), "Number of hiddens and number of dropout values specified must be equal in the trunk!"
if args.class_feature_size == -1:
args.class_feature_size = args.hidden_sizes[-1]
if args.regression_feature_size == -1:
args.regression_feature_size = args.hidden_sizes[-1]
assert args.regression_feature_size <= args.hidden_sizes[-1], "Regression feature size cannot be larger than the trunk output"
assert args.class_feature_size <= args.hidden_sizes[-1], "Classification feature size cannot be larger than the trunk output"
assert args.regression_feature_size + args.class_feature_size >= args.hidden_sizes[-1], "Unused features in the trunk! Set regression_feature_size + class_feature_size >= trunk output!"
#if args.regression_feature_size != args.hidden_sizes[-1] or args.class_feature_size != args.hidden_sizes[-1]:
# raise ValueError("Hidden spliting not implemented yet!")
assert args.input_size_freq is None, "Using tail compression not yet supported."
if (args.y_class is None) and (args.y_regr is None):
raise ValueError("No label data specified, please add --y_class and/or --y_regr.")
###Output
_____no_output_____
###Markdown
Summary writer
###Code
if args.profile == 1:
assert (args.save_board==1), "Tensorboard should be enabled to be able to profile memory usage."
if args.save_board:
tb_name = os.path.join(args.output_dir, "", name)
writer = SummaryWriter(tb_name)
else:
writer = Nothing()
###Output
_____no_output_____
###Markdown
Load Datasets
###Code
ecfp = sc.load_sparse(args.x)
y_class = sc.load_sparse(args.y_class)
y_regr = sc.load_sparse(args.y_regr)
y_censor = sc.load_sparse(args.y_censor)
if (y_regr is None) and (y_censor is not None):
raise ValueError("y_censor provided please also provide --y_regr.")
if y_class is None:
y_class = scipy.sparse.csr_matrix((ecfp.shape[0], 0))
if y_regr is None:
y_regr = scipy.sparse.csr_matrix((ecfp.shape[0], 0))
if y_censor is None:
y_censor = scipy.sparse.csr_matrix(y_regr.shape)
folding = np.load(args.folding)
assert ecfp.shape[0] == folding.shape[0], "x and folding must have same number of rows"
## Loading task weights
tasks_class = sc.load_task_weights(args.weights_class, y=y_class, label="y_class")
tasks_regr = sc.load_task_weights(args.weights_regr, y=y_regr, label="y_regr")
## Input transformation
ecfp = sc.fold_transform_inputs(ecfp, folding_size=args.fold_inputs, transform=args.input_transform)
print(f"count non zero:{ecfp[0].count_nonzero()}")
num_pos = np.array((y_class == +1).sum(0)).flatten()
num_neg = np.array((y_class == -1).sum(0)).flatten()
num_class = np.array((y_class != 0).sum(0)).flatten()
if (num_class != num_pos + num_neg).any():
raise ValueError("For classification all y values (--y_class/--y) must be 1 or -1.")
num_regr = np.bincount(y_regr.indices, minlength=y_regr.shape[1])
assert args.min_samples_auc is None, "Parameter 'min_samples_auc' is obsolete. Use '--min_samples_class' that specifies how many samples a task needs per FOLD and per CLASS to be aggregated."
if tasks_class.aggregation_weight is None:
## using min_samples rule
fold_pos, fold_neg = sc.class_fold_counts(y_class, folding)
n = args.min_samples_class
tasks_class.aggregation_weight = ((fold_pos >= n).all(0) & (fold_neg >= n)).all(0).astype(np.float64)
if tasks_regr.aggregation_weight is None:
if y_censor.nnz == 0:
y_regr2 = y_regr.copy()
y_regr2.data[:] = 1
else:
## only counting uncensored data
y_regr2 = y_censor.copy()
y_regr2.data = (y_regr2.data == 0).astype(np.int32)
fold_regr, _ = sc.class_fold_counts(y_regr2, folding)
del y_regr2
tasks_regr.aggregation_weight = (fold_regr >= args.min_samples_regr).all(0).astype(np.float64)
vprint(f"Input dimension: {ecfp.shape[1]}")
vprint(f"#samples: {ecfp.shape[0]}")
vprint(f"#classification tasks: {y_class.shape[1]}")
vprint(f"#regression tasks: {y_regr.shape[1]}")
vprint(f"Using {(tasks_class.aggregation_weight > 0).sum()} classification tasks for calculating aggregated metrics (AUCROC, F1_max, etc).")
vprint(f"Using {(tasks_regr.aggregation_weight > 0).sum()} regression tasks for calculating metrics (RMSE, Rsquared, correlation).")
if args.fold_te is not None and args.fold_te >= 0:
## removing test data
assert args.fold_te != args.fold_va, "fold_va and fold_te must not be equal."
keep = folding != args.fold_te
ecfp = ecfp[keep]
y_class = y_class[keep]
y_regr = y_regr[keep]
y_censor = y_censor[keep]
folding = folding[keep]
normalize_inv = None
if args.normalize_regression == 1 and args.normalize_regr_va == 1:
y_regr, mean_save, var_save = sc.normalize_regr(y_regr)
fold_va = args.fold_va
idx_tr = np.where(folding != fold_va)[0]
idx_va = np.where(folding == fold_va)[0]
y_class_tr = y_class[idx_tr]
y_class_va = y_class[idx_va]
y_regr_tr = y_regr[idx_tr]
y_regr_va = y_regr[idx_va]
y_censor_tr = y_censor[idx_tr]
y_censor_va = y_censor[idx_va]
if args.normalize_regression == 1 and args.normalize_regr_va == 0:
y_regr_tr, mean_save, var_save = sc.normalize_regr(y_regr_tr)
if args.inverse_normalization == 1:
normalize_inv = {}
normalize_inv["mean"] = mean_save
normalize_inv["var"] = var_save
num_pos_va = np.array((y_class_va == +1).sum(0)).flatten()
num_neg_va = np.array((y_class_va == -1).sum(0)).flatten()
num_regr_va = np.bincount(y_regr_va.indices, minlength=y_regr.shape[1])
pos_rate = num_pos_va/(num_pos_va+num_neg_va)
pos_rate_ref = args.pi_zero
pos_rate = np.clip(pos_rate, 0, 0.99)
cal_fact_aucpr = pos_rate*(1-pos_rate_ref)/(pos_rate_ref*(1-pos_rate))
#import ipdb; ipdb.set_trace()
vprint(f"Input dimension : {ecfp.shape[1]}")
vprint(f"Input dimension : {ecfp.shape[1]}")
vprint(f"Training dataset : {ecfp[idx_tr].shape}")
vprint(f"Validation dataset: {ecfp[idx_va].shape}")
vprint()
vprint(f"#classification tasks: {y_class.shape[1]}")
vprint(f"#regression tasks : {y_regr.shape[1]}")
vprint(f"Using {(tasks_class.aggregation_weight > 0).sum():3d} classification tasks for calculating aggregated metrics (AUCROC, F1_max, etc).")
vprint(f"Using {(tasks_regr.aggregation_weight > 0).sum():3d} regression tasks for calculating metrics (RMSE, Rsquared, correlation).")
num_int_batches = 1
batch_size = 128
# batch_size = int(np.ceil(args.batch_ratio * idx_tr.shape[0]))
print(f"orig batch size: {batch_size}")
print(f"orig num int batches: {num_int_batches}")
if args.internal_batch_max is not None:
if args.internal_batch_max < batch_size:
num_int_batches = int(np.ceil(batch_size / args.internal_batch_max))
batch_size = int(np.ceil(batch_size / num_int_batches))
print(f"batch size: {batch_size}")
print(f"num_int_batches: {num_int_batches}")
tasks_cat_id_list = None
select_cat_ids = None
if tasks_class.cat_id is not None:
tasks_cat_id_list = [[x,i] for i,x in enumerate(tasks_class.cat_id) if str(x) != 'nan']
tasks_cat_ids = [i for i,x in enumerate(tasks_class.cat_id) if str(x) != 'nan']
select_cat_ids = np.array(tasks_cat_ids)
cat_id_size = len(tasks_cat_id_list)
else:
cat_id_size = 0
###Output
_____no_output_____
###Markdown
Dataloaders
###Code
dataset_tr = sc.ClassRegrSparseDataset(x=ecfp[idx_tr], y_class=y_class_tr, y_regr=y_regr_tr, y_censor=y_censor_tr, y_cat_columns=select_cat_ids)
dataset_va = sc.ClassRegrSparseDataset(x=ecfp[idx_va], y_class=y_class_va, y_regr=y_regr_va, y_censor=y_censor_va, y_cat_columns=select_cat_ids)
loader_tr = DataLoader(dataset_tr, batch_size=batch_size, num_workers = 8, pin_memory=True, collate_fn=dataset_tr.collate, shuffle=True)
loader_va = DataLoader(dataset_va, batch_size=batch_size, num_workers = 4, pin_memory=True, collate_fn=dataset_va.collate, shuffle=False)
args.input_size = dataset_tr.input_size
args.output_size = dataset_tr.output_size
args.class_output_size = dataset_tr.class_output_size
args.regr_output_size = dataset_tr.regr_output_size
args.cat_id_size = cat_id_size
dev = torch.device(args.dev)
net = sc.SparseFFN(args).to(dev)
loss_class = torch.nn.BCEWithLogitsLoss(reduction="none")
loss_regr = sc.censored_mse_loss
if not args.censored_loss:
loss_regr = functools.partial(loss_regr, censored_enabled=False)
tasks_class.training_weight = tasks_class.training_weight.to(dev)
tasks_regr.training_weight = tasks_regr.training_weight.to(dev)
tasks_regr.censored_weight = tasks_regr.censored_weight.to(dev)
###Output
/data/leuven/326/vsc32647/miniconda3/envs/pyt-gpu/lib/python3.9/site-packages/torch/nn/init.py:388: UserWarning: Initializing zero-element tensors is a no-op
warnings.warn("Initializing zero-element tensors is a no-op")
###Markdown
Network
###Code
vprint("Network:")
vprint(net)
reporter = None
h = None
###Output
Network:
SparseFFN(
(net): Sequential(
(0): SparseInputNet(
(net_freq): SparseLinear(in_features=32000, out_features=50, bias=True)
)
(1): MiddleNet(
(net): Sequential(
(layer_0): Sequential(
(0): ReLU()
(1): Dropout(p=0.0, inplace=False)
(2): Linear(in_features=50, out_features=50, bias=True)
)
)
)
)
(classLast): LastNet(
(net): Sequential(
(initial_layer): Sequential(
(0): ReLU()
(1): Dropout(p=0.0, inplace=False)
(2): Linear(in_features=50, out_features=15, bias=True)
)
)
)
(regrLast): Sequential(
(0): LastNet(
(net): Sequential(
(initial_layer): Sequential(
(0): Tanh()
(1): Dropout(p=0.0, inplace=False)
(2): Linear(in_features=50, out_features=0, bias=True)
)
)
)
)
)
###Markdown
setup memory profiling reporter
###Code
if args.profile == 1:
torch_gpu_id = torch.cuda.current_device()
if "CUDA_VISIBLE_DEVICES" in os.environ:
ids = list(map(int, os.environ.get("CUDA_VISIBLE_DEVICES", "").split(",")))
nvml_gpu_id = ids[torch_gpu_id] # remap
else:
nvml_gpu_id = torch_gpu_id
h = nvmlDeviceGetHandleByIndex(nvml_gpu_id)
if args.profile == 1:
##### output saving #####
if not os.path.exists(args.output_dir):
os.makedirs(args.output_dir)
reporter = MemReporter(net)
with open(f"{args.output_dir}/memprofile.txt", "w+") as profile_file:
with redirect_stdout(profile_file):
profile_file.write(f"\nInitial model detailed report:\n\n")
reporter.report()
###Output
_____no_output_____
###Markdown
Optimizer, Scheduler, GradScaler
###Code
optimizer = torch.optim.Adam(net.parameters(), lr=args.lr, weight_decay=args.weight_decay)
scheduler = MultiStepLR(optimizer, milestones=args.lr_steps, gamma=args.lr_alpha, verbose = False)
scaler = torch.cuda.amp.GradScaler()
num_prints = 0
print(optimizer)
# args.eval_train = 0
# args.epochs = 5
print(f"dev : {dev}")
print(f"args.lr : {args.lr}")
print(f"args.weight_decay: {args.weight_decay}")
print(f"args.lr_steps : {args.lr_steps}")
print(f"args.lr_steps : {args.lr_steps}")
print(f"num_int_batches : {num_int_batches}")
print(f"batch_size : {batch_size}")
print(f"EPOCHS : {args.epochs}")
print(f"scaler : {scaler}")
print(f"args.normalize_loss : {args.normalize_loss}")
print(f"loss_class : {loss_class}")
print(f"mixed precision : {args.mixed_precision}")
print(args.eval_train)
current_epoch = 0
###Output
Adam (
Parameter Group 0
amsgrad: False
betas: (0.9, 0.999)
eps: 1e-08
initial_lr: 0.001
lr: 0.001
weight_decay: 0.0001
)
dev : cpu
args.lr : 0.001
args.weight_decay: 0.0001
args.lr_steps : [10]
args.lr_steps : [10]
num_int_batches : 1
batch_size : 128
EPOCHS : 100
scaler : <torch.cuda.amp.grad_scaler.GradScaler object at 0x2b36906286a0>
args.normalize_loss : None
loss_class : BCEWithLogitsLoss()
mixed precision : 0
0
###Markdown
Training Loop
###Code
end_epoch = current_epoch + args.epochs
for epoch in range(current_epoch, end_epoch, 1):
t0 = time.time()
sc.train_class_regr(
net, optimizer,
loader = loader_tr,
loss_class = loss_class,
loss_regr = loss_regr,
dev = dev,
weights_class = tasks_class.training_weight * (1-args.regression_weight) * 2,
weights_regr = tasks_regr.training_weight * args.regression_weight * 2,
censored_weight = tasks_regr.censored_weight,
normalize_loss = args.normalize_loss,
num_int_batches = num_int_batches,
progress = False,
reporter = reporter,
writer = writer,
epoch = epoch,
args = args,
scaler = scaler)
# nvml_handle = h)
if args.profile == 1:
with open(f"{args.output_dir}/memprofile.txt", "a+") as profile_file:
profile_file.write(f"\nAfter epoch {epoch} model detailed report:\n\n")
with redirect_stdout(profile_file):
reporter.report()
t1 = time.time()
eval_round = (args.eval_frequency > 0) and ((epoch + 1) % args.eval_frequency == 0)
last_round = epoch == args.epochs - 1
if eval_round or last_round:
results_va = sc.evaluate_class_regr(net, loader_va,
loss_class,
loss_regr,
tasks_class=tasks_class,
tasks_regr=tasks_regr,
dev=dev,
progress = False,
normalize_inv=normalize_inv,
cal_fact_aucpr=cal_fact_aucpr)
# import ipdb; ipdb.set_trace()
for key, val in results_va["classification_agg"].items():
writer.add_scalar("val_metrics:aggregated/"+key, val, epoch*batch_size)
# writer.add_scalar(key+"/va", val, epoch)
# for key, val in results_va["regression_agg"].items():
# writer.add_scalar(key+"/va", val, epoch)
if args.eval_train:
results_tr = sc.evaluate_class_regr(net, loader_tr, loss_class, loss_regr, tasks_class=tasks_class, tasks_regr=tasks_regr, dev=dev, progress = args.verbose >= 2)
for key, val in results_tr["classification_agg"].items():
writer.add_scalar("trn_metrics:aggregated/"+key, val, epoch *batch_size)
# writer.add_scalar(key+"/tr", val, epoch)
# for key, val in results_tr["regression_agg"].items():
# writer.add_scalar(key+"/tr", val, epoch)
else:
results_tr = None
if args.verbose:
## printing a new header every 20 lines
header = num_prints % 20 == 0
num_prints += 1
sc.print_metrics_cr(epoch, t1 - t0, results_tr, results_va, header)
scheduler.step()
#print("DEBUG data for hidden spliting")
#print (f"Classification mask: Sum = {net.classmask.sum()}\t Uniques: {np.unique(net.classmask)}")
#print (f"Regression mask: Sum = {net.regmask.sum()}\t Uniques: {np.unique(net.regmask)}")
#print (f"overlap: {(net.regmask * net.classmask).sum()}")
epoch
end_epoch
results_tr
writer.close()
vprint()
if args.profile == 1:
multiplexer = sc.create_multiplexer(tb_name)
# sc.export_scalars(multiplexer, '.', "GPUmem", "testcsv.csv")
data = sc.extract_scalars(multiplexer, '.', "GPUmem")
vprint(f"Peak GPU memory used: {sc.return_max_val(data)}MB")
vprint("Saving performance metrics (AUCs) and model.")
##### model saving #####
if not os.path.exists(args.output_dir):
os.makedirs(args.output_dir)
model_file = f"{args.output_dir}/{name}.pt"
out_file = f"{args.output_dir}/{name}.json"
if args.save_model:
torch.save(net.state_dict(), model_file)
vprint(f"Saved model weights into '{model_file}'.")
results_va["classification"]["num_pos"] = num_pos_va
results_va["classification"]["num_neg"] = num_neg_va
results_va["regression"]["num_samples"] = num_regr_va
if results_tr is not None:
results_tr["classification"]["num_pos"] = num_pos - num_pos_va
results_tr["classification"]["num_neg"] = num_neg - num_neg_va
results_tr["regression"]["num_samples"] = num_regr - num_regr_va
stats=None
if args.normalize_regression == 1 :
stats={}
stats["mean"] = mean_save
stats["var"] = np.array(var_save)[0]
sc.save_results(out_file, args, validation=results_va, training=results_tr, stats=stats)
vprint(f"Saved config and results into '{out_file}'.\nYou can load the results by:\n import sparsechem as sc\n res = sc.load_results('{out_file}')")
pp.pprint(results_va)
results_va['classification']
###Output
_____no_output_____
###Markdown
Results of run on synthetic data Run name is 'sc_run_ h40.40_ ldo_r_wd0.0001_ lr0.001_ lrsteps10_ ep20_ fva0_fteNone'
###Code
19 | 0.64258 0.64258 0.86670 0.86844 0.51818 0.79627 | nan nan nan | 1.1
###Output
_____no_output_____
###Markdown
Results of run on synthetic data Run name is 'sc_run_ h400.400_ ldo_r_wd0.0001_ lr0.001_ lrsteps10_ ep20_ fva: 0 fte: None'.
###Code
Epoch | logl bceloss aucroc aucpr aucpr_cal f1_max | rmse rsquared corrcoef | tr_time
0 | 0.34269 0.45776 0.80664 0.72646 0.48133 0.74813 | nan nan nan | 13.6
1 | 0.31554 0.42048 0.83178 0.75306 0.51796 0.76370 | nan nan nan | 12.9
2 | 0.31272 0.41163 0.84092 0.76037 0.53157 0.76873 | nan nan nan | 12.8
3 | 0.31439 0.41333 0.84125 0.76524 0.53492 0.77030 | nan nan nan | 12.9
4 | 0.31480 0.41191 0.84390 0.76670 0.53815 0.77059 | nan nan nan | 12.8
5 | 0.31809 0.41840 0.84434 0.76798 0.53912 0.77124 | nan nan nan | 12.9
6 | 0.32237 0.42103 0.84506 0.76737 0.53627 0.77180 | nan nan nan | 12.9
7 | 0.32203 0.42055 0.84465 0.76600 0.53617 0.77124 | nan nan nan | 13.1
8 | 0.32931 0.43144 0.84398 0.76671 0.53747 0.77013 | nan nan nan | 13.1
9 | 0.33162 0.42992 0.84499 0.76793 0.53587 0.77157 | nan nan nan | 13.1
10 | 0.32617 0.42194 0.84909 0.77293 0.54663 0.77418 | nan nan nan | 13.0
11 | 0.32782 0.42469 0.84891 0.77210 0.54406 0.77411 | nan nan nan | 12.9
12 | 0.33080 0.42809 0.84897 0.77252 0.54445 0.77391 | nan nan nan | 12.9
13 | 0.33451 0.43235 0.84840 0.77160 0.54306 0.77383 | nan nan nan | 13.1
14 | 0.33882 0.43660 0.84832 0.77141 0.54322 0.77295 | nan nan nan | 13.1
15 | 0.34108 0.43973 0.84803 0.77172 0.54287 0.77311 | nan nan nan | 13.1
16 | 0.34506 0.44437 0.84782 0.77086 0.54190 0.77230 | nan nan nan | 13.0
17 | 0.34866 0.44951 0.84697 0.77017 0.54043 0.77185 | nan nan nan | 13.0
18 | 0.35135 0.45143 0.84740 0.77087 0.54130 0.77260 | nan nan nan | 13.0
19 | 0.35432 0.45495 0.84742 0.77097 0.54075 0.77233 | nan nan nan | 13.0
###Output
_____no_output_____
###Markdown
Results of run on synthetic data Run name is 'sc_run_ h400.400_ ldo_r_wd0.0001_ lr0.001_ lrsteps10_ ep20_ fva: 0 fte: None'.
###Code
Epoch | logl bceloss aucroc aucpr aucpr_cal f1_max | rmse rsquared corrcoef | tr_time
0 | 0.48793 0.48793 0.84856 0.84747 0.46481 0.78310 | nan nan nan | 6.8
1 | 0.50662 0.50662 0.86016 0.86280 0.50705 0.79179 | nan nan nan | 5.8
2 | 0.57006 0.57006 0.86303 0.86548 0.51272 0.79318 | nan nan nan | 5.9
3 | 0.62017 0.62017 0.86448 0.86568 0.50931 0.79533 | nan nan nan | 5.9
4 | 0.67868 0.67868 0.86434 0.86611 0.51275 0.79361 | nan nan nan | 5.8
5 | 0.72765 0.72765 0.86344 0.86555 0.51187 0.79405 | nan nan nan | 5.9
6 | 0.78053 0.78053 0.86309 0.86354 0.50294 0.79417 | nan nan nan | 5.8
7 | 0.80909 0.80909 0.86289 0.86454 0.50737 0.79399 | nan nan nan | 5.8
8 | 0.84269 0.84269 0.86410 0.86583 0.51109 0.79424 | nan nan nan | 5.8
9 | 0.85616 0.85616 0.86439 0.86607 0.51266 0.79459 | nan nan nan | 5.9
10 | 0.86517 0.86517 0.86582 0.86714 0.51413 0.79524 | nan nan nan | 5.9
11 | 0.88092 0.88092 0.86604 0.86750 0.51514 0.79538 | nan nan nan | 5.9
12 | 0.89291 0.89291 0.86629 0.86786 0.51641 0.79579 | nan nan nan | 5.8
13 | 0.90424 0.90424 0.86645 0.86797 0.51655 0.79594 | nan nan nan | 5.8
14 | 0.91264 0.91264 0.86660 0.86817 0.51707 0.79587 | nan nan nan | 5.9
15 | 0.92194 0.92194 0.86690 0.86844 0.51773 0.79623 | nan nan nan | 5.8
16 | 0.92784 0.92784 0.86700 0.86855 0.51805 0.79619 | nan nan nan | 5.8
17 | 0.93727 0.93727 0.86712 0.86872 0.51868 0.79638 | nan nan nan | 5.9
18 | 0.94468 0.94468 0.86731 0.86897 0.51942 0.79633 | nan nan nan | 5.9
19 | 0.95230 0.95230 0.86728 0.86890 0.51925 0.79640 | nan nan nan | 5.8
###Output
_____no_output_____
###Markdown
Run results on original `chembl_23_mini` data
###Code
Epoch | logl bceloss aucroc aucpr aucpr_cal f1_max | rmse rsquared corrcoef | tr_time
0 | 0.34269 0.45776 0.80664 0.72646 0.48133 0.74813 | nan nan nan | 13.6
1 | 0.31554 0.42048 0.83178 0.75306 0.51796 0.76370 | nan nan nan | 12.9
2 | 0.31272 0.41163 0.84092 0.76037 0.53157 0.76873 | nan nan nan | 12.8
3 | 0.31439 0.41333 0.84125 0.76524 0.53492 0.77030 | nan nan nan | 12.9
4 | 0.31480 0.41191 0.84390 0.76670 0.53815 0.77059 | nan nan nan | 12.8
5 | 0.31809 0.41840 0.84434 0.76798 0.53912 0.77124 | nan nan nan | 12.9
6 | 0.32237 0.42103 0.84506 0.76737 0.53627 0.77180 | nan nan nan | 12.9
7 | 0.32203 0.42055 0.84465 0.76600 0.53617 0.77124 | nan nan nan | 13.1
8 | 0.32931 0.43144 0.84398 0.76671 0.53747 0.77013 | nan nan nan | 13.1
9 | 0.33162 0.42992 0.84499 0.76793 0.53587 0.77157 | nan nan nan | 13.1
10 | 0.32617 0.42194 0.84909 0.77293 0.54663 0.77418 | nan nan nan | 13.0
11 | 0.32782 0.42469 0.84891 0.77210 0.54406 0.77411 | nan nan nan | 12.9
12 | 0.33080 0.42809 0.84897 0.77252 0.54445 0.77391 | nan nan nan | 12.9
13 | 0.33451 0.43235 0.84840 0.77160 0.54306 0.77383 | nan nan nan | 13.1
14 | 0.33882 0.43660 0.84832 0.77141 0.54322 0.77295 | nan nan nan | 13.1
15 | 0.34108 0.43973 0.84803 0.77172 0.54287 0.77311 | nan nan nan | 13.1
16 | 0.34506 0.44437 0.84782 0.77086 0.54190 0.77230 | nan nan nan | 13.0
17 | 0.34866 0.44951 0.84697 0.77017 0.54043 0.77185 | nan nan nan | 13.0
18 | 0.35135 0.45143 0.84740 0.77087 0.54130 0.77260 | nan nan nan | 13.0
19 | 0.35432 0.45495 0.84742 0.77097 0.54075 0.77233 | nan nan nan | 13.0
###Output
_____no_output_____ |
apis/api_data_wrangling_mini_project.ipynb | ###Markdown
This exercise will require you to pull some data from the Quandl API. Quandl is currently the most widely used aggregator of financial market data. As a first step, you will need to register a free account on the http://www.quandl.com website. After you register, you will be provided with a unique API key that you should store:
###Code
# Store the API key as a string - according to PEP8, constants are always named in all upper case
API_KEY = 'r8hDAzBMTWp6BAmV4iZb'
###Output
_____no_output_____
###Markdown
Quandl has a large number of data sources, but, unfortunately, most of them require a Premium subscription. Still, there are also a good number of free datasets. For this mini project, we will focus on equities data from the Frankfurt Stock Exchange (FSE), which is available for free. We'll try and analyze the stock prices of a company called Carl Zeiss Meditec, which manufactures tools for eye examinations, as well as medical lasers for laser eye surgery: https://www.zeiss.com/meditec/int/home.html. The company is listed under the stock ticker AFX_X. You can find the detailed Quandl API instructions here: https://docs.quandl.com/docs/time-series While there is a dedicated Python package for connecting to the Quandl API, we would prefer that you use the *requests* package, which can be easily downloaded using *pip* or *conda*. You can find the documentation for the package here: http://docs.python-requests.org/en/master/ Finally, apart from the *requests* package, you are encouraged to not use any third-party Python packages, such as *pandas*, and instead focus on what's available in the Python Standard Library (the *collections* module might come in handy: https://pymotw.com/3/collections/). Also, since you won't have access to DataFrames, you are encouraged to use Python's native data structures - preferably dictionaries, though some questions can also be answered using lists. You can read more on these data structures here: https://docs.python.org/3/tutorial/datastructures.html Keep in mind that the JSON responses you will be getting from the API map almost one-to-one to Python's dictionaries. Unfortunately, they can be very nested, so make sure you read up on indexing dictionaries in the documentation provided above.
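For reference, a minimal sketch of what the *requests*-based call could look like (the cells below use `urllib` instead; this snippet assumes the `API_KEY` defined above and the same FSE/AFX_X endpoint used later in this notebook):

```python
import requests

url = "https://www.quandl.com/api/v3/datasets/FSE/AFX_X.json"
params = {"start_date": "2018-11-30", "end_date": "2018-11-30", "api_key": API_KEY}
r = requests.get(url, params=params)      # send the GET request
sample = r.json()                         # parse the JSON body into nested dicts/lists
print(sample["dataset"]["column_names"])  # the response maps to a nested dictionary
```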
###Code
# First, import the relevant modules
import json
import urllib.request
# Now, call the Quandl API and pull out a small sample of the data (only one day) to get a glimpse
# into the JSON structure that will be returned
url_01 = "https://www.quandl.com/api/v3/datasets/FSE/AFX_X.json?start_date=2018-11-30&end_date=2018-11-30&api_key="
data_01 = json.loads(urllib.request.urlopen(url_01+API_KEY).read().decode('utf8'))
# Inspect the JSON structure of the object you created, and take note of how nested it is,
# as well as the overall structure
# print(type(data_01))
# print(data)
data_01
###Output
_____no_output_____
###Markdown
These are your tasks for this mini project:1. Collect data from the Frankfurt Stock Exchange, for the ticker AFX_X, for the whole year 2017 (keep in mind that the date format is YYYY-MM-DD).2. Convert the returned JSON object into a Python dictionary.3. Calculate what the highest and lowest opening prices were for the stock in this period.4. What was the largest change in any one day (based on High and Low price)?5. What was the largest change between any two days (based on Closing Price)?6. What was the average daily trading volume during this year?7. (Optional) What was the median trading volume during this year? (Note: you may need to implement your own function for calculating the median.) STEP 1 Collect data from the Frankfurt Stock Exchange, for the ticker AFX_X, for the whole year 2017 Collecting data for the whole 2017 year: 2017-01-01 to 2017-12-31
###Code
url_2017 = "https://www.quandl.com/api/v3/datasets/FSE/AFX_X.json?start_date=2017-01-01&end_date=2017-12-31&api_key="
data_2017 = json.loads(urllib.request.urlopen(url_2017+API_KEY).read().decode('utf8'))
data_2017
# print(data_2017)
###Output
_____no_output_____
###Markdown
STEP 2 Convert the returned JSON object into a Python dictionary. The data is already stored as a dictionary
###Code
print(type(data_2017))
###Output
<class 'dict'>
###Markdown
STEP 3 Calculate what the highest and lowest opening prices were for the stock in this period. Getting to know the data structure (column names).
###Code
data_2017_columns = data_2017['dataset']['column_names']
for column in data_2017_columns:
print(column)
# get the actual data rows
data_2017_rows = data_2017['dataset']['data']
open_prices = []
#get the opening prices
for row in data_2017_rows:
if row[1] is not None:
open_prices.append(row[1])
print("TOTAL PRICES: "+str(len(open_prices)))
print("MINIMUM OPENING PRICE:"+str(min(open_prices)))
print("MAXIMUM OPENING PRICE:"+str(max(open_prices)))
###Output
TOTAL PRICES: 252
MINIMUM OPENING PRICE:34.0
MAXIMUM OPENING PRICE:53.11
###Markdown
STEP 4 What was the largest change in any one day (based on High and Low price)? "High" is stored in [2] while "Low" is in [3].
###Code
single_day_changes = []
for row in data_2017_rows:
if row[2] is not None and row[3] is not None:
single_day_changes.append(abs(float(row[2])-float(row[3])))
print("TOTAL PRICES: "+str(len(single_day_changes)))
print("LARGEST CHANGE IN ONE DAY PRICE:"+str(max(single_day_changes)))
###Output
TOTAL PRICES: 255
LARGEST CHANGE IN ONE DAY PRICE:2.8100000000000023
###Markdown
STEP 5 What was the largest change between any two days (based on Closing Price)? Closing cost is in [4]. I am not quite sure whether "the largest change between any two days" means any two days of the year or any two CONSECUTIVE days; as such, I will calculate both.
###Code
closing_costs = []
diffrence_between_two_consecutive_days = 0
count = 0
for row in data_2017_rows:
if row[4] is not None:
closing_costs.append(float(row[4]))
if count>0:
if abs(float(row[4])-float(closing_costs[count-1])) > diffrence_between_two_consecutive_days:
diffrence_between_two_consecutive_days = abs(float(row[4])-float(closing_costs[count-1]))
count +=1
print("The largest change between any two days (based on Closing Price):"+str(float(max(closing_costs)-min(closing_costs))))
print("The largest change between any two consecutive days (based on Closing Price):"+str(float(diffrence_between_two_consecutive_days)))
###Output
The largest change between any two days (based on Closing Price):19.03
The largest change between any two consecutive days (based on Closing Price):2.559999999999995
###Markdown
STEP 6 What was the average daily trading volume during this year? The "Traded Volume" is in position [6].
###Code
daily_volume = []
for row in data_2017_rows:
daily_volume.append(float(row[6]))
print("The average Traded Volume:"+str(float(sum(daily_volume)/len(daily_volume))))
###Output
The average Traded Volume:89124.33725490196
###Markdown
STEP 7 What was the median trading volume during this year. (Note: you may need to implement your own function for calculating the median.).
###Code
sorted_daily_volume = sorted(daily_volume)
mid_index = len(sorted_daily_volume) // 2
if len(sorted_daily_volume) %2:
median = sorted_daily_volume[mid_index]
else:
median = sum(sorted_daily_volume[mid_index - 1:mid_index + 1]) / 2
print("Median trading volume during this year:"+str(float(median)))
###Output
Median trading volume during this year:76286.0
|
artificial_intelligence/01 - ConsumptionRegression/.ipynb_checkpoints/3 - Linear regression (all variables)-checkpoint.ipynb | ###Markdown
Data import* Full-day data, aggregated every 1 h by the mean* Columns: * momento * temp_celsius * pressao * precipitacao * winddirection_deg * windspeed_mps * ta_real * tb_real * tc_real * ampa_real * ampb_real * ampc_real * pa * pb * pc * qa * qb * qc * sa * sb * sc * p3 * q3 * s3
###Code
# library imports used throughout this notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression

raw = pd.read_csv ('../datasets/clima3.csv', sep=',')
###Output
_____no_output_____
###Markdown
Exploratory data analysis
###Code
raw.head(100)
raw.describe()
corr = raw.corr(method='pearson')
print (corr)
ax = sns.heatmap(
corr,
vmin=-1, vmax=1, center=0,
cmap=sns.diverging_palette(20, 220, n=200),
square=True
)
pd.plotting.scatter_matrix(raw, alpha=0.5, figsize=(10, 10), diagonal='kde')
# voltage vs temperature and power
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,6))
ax1.scatter(raw['temp_celsius'], raw['ta_real'], alpha=0.5)
ax1.set_xlabel("temp_celsius")
ax1.set_ylabel("ta_real")
ax2.scatter(raw['p3'], raw['ta_real'], alpha=0.5)
ax2.set_xlabel("p3")
ax2.set_ylabel("ta_real")
# power vs temperature and current
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,6))
ax1.scatter(raw['temp_celsius'], raw['p3'], alpha=0.5)
ax1.set_xlabel("temp_celsius")
ax1.set_ylabel("p3")
ax2.scatter(raw['ampa_real'], raw['p3'], alpha=0.5)
ax2.set_xlabel("ampa_real")
ax2.set_ylabel("p3")
raw['momento'] = pd.to_datetime (raw['momento'])
raw = raw.set_index(raw.momento)
resampled = raw.resample('B').agg({
'ta_real': ['mean'],
'ampa_real': ['mean'],
'temp_celsius': ['mean', 'std'],
'p3': ['mean']
})
resampled.columns = resampled.columns.map('_'.join)
resampled = resampled.dropna()
plt.scatter(resampled['temp_celsius_std'], resampled['ta_real_mean'], s=resampled['p3_mean']/2, alpha=0.5)
plt.xlabel('temp_celsius')
plt.ylabel('ta_real')
###Output
_____no_output_____
###Markdown
Model Training
###Code
raw = raw.drop('momento', axis=1)
raw = raw.dropna()
from sklearn.model_selection import train_test_split
X = raw.drop('p3', axis=1)
y = raw ['p3']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)
lm = LinearRegression()
lm.fit (X_train, y_train)
pd.DataFrame(lm.coef_,X.columns,columns=['Coefficient'])
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
from sklearn import metrics
predictions = lm.predict(X_test)
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
plt.scatter(y_test,predictions)
###Output
_____no_output_____ |
homework/hw-for-wk-07_ingest_data_BLANK.ipynb | ###Markdown
Pandas - Importing data
###Code
## import the pandas library in this cell
###Output
_____no_output_____
###Markdown
Ingest the following dataset into this notebook:- ```ems_2019_first_6_months.csv``` into a dataframe named ```ems_19_first_df```
###Code
## import data and store in dataframe
## call the first 10 rows
## call the last 5 rows
## call a 20 random rows
## get basic overview of the data
## what datatypes do the different columns hold?
## how many rows and columns are there
###Output
_____no_output_____
###Markdown
In three separate cells, ingest the 3 other related datasets into this notebook:- ```ems_2019_last_6_months.csv``` into a dataframe named ```ems_19_last_df```- ```ems_2020_first_6_months.csv``` into a dataframe named ```ems_20_first_df```- ```ems_2020_last_6_months.csv``` into a dataframe named ```ems_20_last_df```We want to combine all 4 dataframes so we can run an analysis on the combined dataframes
###Code
### For last 6 months of 2019
## call the dataframe to see what you have
### For first 6 months of 2020
## call the dataframe to see what you have
### For last 6 months of 2020
## call the dataframe to see what you have
###Output
_____no_output_____
###Markdown
Check if all the column headers are identical (down to the same order)We need to confirm they are the same before we combine them
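(Sketch for reference only — comparing plain Python lists of the column names keeps the order check; the variable names below are illustrative:)

```python
# lists preserve order, so this is True only if the names AND their order match
headers_19_first = list(ems_19_first_df.columns)
headers_20_first = list(ems_20_first_df.columns)
print(headers_19_first == headers_20_first)
```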
###Code
## headers for first 6 months of 2019
## headers for first 6 months of 2020
## check if they are the same
## creater headers list for last 6 months of 2019 and 2020
## check if one of the first is equal to one of the last
## now confirm that the two lasts are the same
###Output
_____no_output_____
###Markdown
Combine the 4 dataframes
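(For reference only — a sketch of one common way to do this with `pandas.concat`, using the dataframe names from the earlier prompts; `combined_df` is an illustrative name, not the required solution:)

```python
import pandas as pd

# stack the four dataframes on top of each other and renumber the index
frames = [ems_19_first_df, ems_19_last_df, ems_20_first_df, ems_20_last_df]
combined_df = pd.concat(frames, ignore_index=True)
```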
###Code
## create a list to combine all
## combine the dfs
###Output
_____no_output_____
###Markdown
Confirm that their row totals add up to the correct number
###Code
## how many rows does the combined df have
## get total rows when you add up all the individual dataframes
###Output
_____no_output_____
###Markdown
Ingest Massive DatasetStep 1. Open this dataset in Excel first. What happened?Step 2. Open this data set in Pandas.
###Code
## import global_temps.csv into a df called temps_df
###Output
_____no_output_____
###Markdown
How many rows of data does this dataset have?
###Code
## 2_906_327
###Output
_____no_output_____
###Markdown
Targeting an Excel sheetWe need the Customs and Border Protection data in this ```multi-data.xlsx``` workbook.Note that this is an excerpt of the actual CBP data.
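(A sketch of how a single sheet is usually targeted with `pandas.read_excel`; the sheet name below is a placeholder, since the actual sheet name inside the workbook is not given here:)

```python
import pandas as pd

# "CBP_SHEET_NAME" is a placeholder — check multi-data.xlsx for the real sheet name
cbp_df = pd.read_excel("multi-data.xlsx", sheet_name="CBP_SHEET_NAME")
```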
###Code
### import it here
## call the top 15 rows
###Output
_____no_output_____ |
Worksheets/10_2_Movies_Project.ipynb | ###Markdown
Movies Mini-project---In the previous worksheet you converted an SQL relational database to a single pandas dataframe and downloaded it. You will be analysing it today.If you were unable to download the file, there is a copy located here: "https://github.com/lilaceri/Working-with-data-/blob/main/Data%20Sets%20for%20code%20divisio/movies.csv?raw=true" Inspect the dataset ---
###Code
url = 'https://github.com/lilaceri/Working-with-data-/blob/main/Data%20Sets%20for%20code%20divisio/movies.csv?raw=true'
def create_dataframe(url):
import pandas as pd
df = pd.read_csv(url)
return df
movies_df = create_dataframe(url)
display(movies_df.info())
display(movies_df.describe())
display(movies_df.head())
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 368894 entries, 0 to 368893
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 368894 non-null int64
1 first_name 368894 non-null object
2 last_name 368894 non-null object
3 name 368894 non-null object
4 year 368894 non-null int64
5 rank 113376 non-null float64
6 genre 368894 non-null object
dtypes: float64(1), int64(2), object(4)
memory usage: 19.7+ MB
###Markdown
Clean the dataset ---
###Code
movies_df = movies_df.drop(columns=['Unnamed: 0'])
movies_df.drop_duplicates(inplace=True)
movies_df.info()
movies_df.head()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 368893 entries, 0 to 368893
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 first_name 368893 non-null object
1 last_name 368893 non-null object
2 name 368893 non-null object
3 year 368893 non-null int64
4 rank 113376 non-null float64
5 genre 368893 non-null object
dtypes: float64(1), int64(1), object(4)
memory usage: 19.7+ MB
###Markdown
How many movies of each genre are there?---
###Code
def get_movies_by(df, column='genre'):
df_grouped = df.groupby(column)['name'].count()
return df_grouped
#display(get_movies_by(movies_df, ['year', 'rank']))
display(get_movies_by(movies_df))
###Output
_____no_output_____
###Markdown
Which director has the highest ranked movies?---
###Code
def get_highest_rank_movies(df):
max_rank = df['rank'].max(skipna=True)
df_filtered = df[df['rank'] == max_rank]
return df_filtered
display(get_highest_rank_movies(movies_df))
###Output
_____no_output_____
###Markdown
How many movies have ranks of over 9?---
###Code
def get_above_rank_9_movies(df, rank=9):
max_rank = rank
df_filtered = df[df['rank'] > max_rank]
return df_filtered
total_above_9_rank_movies = len(get_above_rank_9_movies(movies_df))
print(f'Total above 9 rank movies: {total_above_9_rank_movies}')
###Output
Total above 9 rank movies: 1483
###Markdown
Plot a bar chart of mean rank and genre---
###Code
def show_barchart_rank_genre(df):
import matplotlib.pyplot as plt
rank_genre_df = df[['rank', 'genre']]
df_grouped = rank_genre_df.groupby('genre')['rank'].mean()
# Draw the bar graph
df_grouped.plot(kind='bar')
plt.xlabel('Genre')
plt.ylabel('Rank')
plt.title('The Mean Rank')
#plt.xticks(rotation=45)
plt.show()
show_barchart_rank_genre(movies_df)
###Output
_____no_output_____
###Markdown
Plot a pie chart of how many movies of each genre there are ---
###Code
def show_pieplot_name_genre(df):
import matplotlib.pyplot as plt
rank_genre_df = df[['name', 'genre']]
df_grouped = rank_genre_df.groupby('genre')['name'].count()
# Draw the pie plot
plt.pie(df_grouped, labels=df_grouped.keys())
plt.title('Movies by Genre')
plt.show()
show_pieplot_name_genre(movies_df)
###Output
_____no_output_____
###Markdown
Plot a graph showing the mean Rank for each year
###Code
def show_barchart_rank_year(df):
import matplotlib.pyplot as plt
rank_year_df = df[['rank', 'year']]
df_grouped = rank_year_df.groupby('year')['rank'].mean()
# Draw the line graph
df_grouped.plot(kind='line')
plt.xlabel('Year')
plt.ylabel('Rank')
plt.title('The Mean Rank')
plt.show()
show_barchart_rank_year(movies_df)
###Output
_____no_output_____
###Markdown
What else can you find out from this dataset?---Make a plan of 3 further things you can do to interrogate and analyse this dataset Type your answer here 1. Highest ranked 'Comedy' movies?2. The year with the highest number of movies?3. Highest ranked 'Thriller' movies? Complete the tasks you have set out in the exercise above. ---
###Code
def get_highest_ranked_comedy_movies(df):
comedy_df = df[df['genre'] == 'Comedy']
max_rank = comedy_df['rank'].max(skipna=True)
df_filtered = comedy_df[comedy_df['rank'] == max_rank]
df_filtered = df_filtered.sort_values(by='year', ascending=False)
return df_filtered
display(get_highest_ranked_comedy_movies(movies_df))
def show_movie_count_year(df):
import matplotlib.pyplot as plt
movie_count_df = df[['name', 'year']]
df_grouped = movie_count_df.groupby('year')['name'].count()
# Draw the line graph
df_grouped.plot(kind='line')
plt.xlabel('Year')
plt.ylabel('Count')
plt.title('The Number Of Movies Released')
plt.show()
show_movie_count_year(movies_df)
def get_highest_ranked_thriller_movies(df):
thriller_df = df[df['genre'] == 'Thriller']
max_rank = thriller_df['rank'].max(skipna=True)
df_filtered = thriller_df[thriller_df['rank'] == max_rank]
df_filtered = df_filtered.sort_values(by='year', ascending=False)
return df_filtered
display(get_highest_ranked_thriller_movies(movies_df))
###Output
_____no_output_____ |
_build/jupyter_execute/Module3/Programming_Assignment_3.ipynb | ###Markdown
Week 3 Programming Assignment Remark: Please upload your solutions to this assignment to Canvas with a file named "Programming_Assignment_3 _yourname.ipynb" before the deadline. ================================================================================================================= **Problem 1.** Use the stochastic gradient descent method to train MNIST with a 1-hidden-layer neural network model to achieve at least 97% test accuracy. Print the results in the following format: "Epoch: i, Training accuracy: $a_i$, Test accuracy: $b_i$" where $i=1,2,3,...$ means the $i$-th epoch, and $a_i$ and $b_i$ are the training accuracy and test accuracy computed at the end of the $i$-th epoch.
###Code
# write your code for solving problem 1 in this cell
###Output
_____no_output_____
###Markdown
================================================================================================================= **Problem 2.** Use the stochastic gradient descent method to train CIFAR-10 with* (1) a logistic regression model to achieve at least 25% test accuracy * (2) a 2-hidden-layer neural network model to achieve at least 50% test accuracy. Print the results in the following format:* For the logistic regression model, print: "Logistic Regression Model, Epoch: i, Training accuracy: $a_i$, Test accuracy: $b_i$"* For the 2-hidden-layer neural network model, print: "DNN Model, Epoch: i, Training accuracy: $a_i$, Test accuracy: $b_i$" where $i=1,2,3,...$ means the $i$-th epoch, and $a_i$ and $b_i$ are the training accuracy and test accuracy computed at the end of the $i$-th epoch. Hint: (1) The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. (2) The input_size should be $3072=3*32*32$, where 3 is the number of channels (RGB image) and $32*32$ is the size of every image. (3) For the 2-hidden-layer neural network model, consider using $W^1\in \mathbb{R}^{3072\times3072}$ for the 1st hidden layer, $W^2 \in \mathbb{R}^{500\times 3072}$ for the 2nd hidden layer and $W^3 \in \mathbb{R}^{10\times 500}$ for the output layer.
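As a rough illustration of the layer sizes described in hint (3) only — this sketch is not a complete solution, and the class and variable names below are made up for the example:

```python
import torch
import torch.nn as nn

class TwoHiddenLayerNet(nn.Module):
    # Sketch of the dimensions from hint (3): 3072 -> 3072 -> 500 -> 10
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(3 * 32 * 32, 3072)  # W^1 in R^{3072 x 3072}
        self.fc2 = nn.Linear(3072, 500)          # W^2 in R^{500 x 3072}
        self.out = nn.Linear(500, 10)            # W^3 in R^{10 x 500}

    def forward(self, x):
        x = x.view(x.size(0), -1)                # flatten 3x32x32 images into 3072 features
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        return self.out(x)
```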
###Code
# write your code for solving problem 2 in this cell
# You can load CIFAR-10 dataset as follows:
import torch
import torchvision

CIFAR10_transform = torchvision.transforms.Compose([torchvision.transforms.ToTensor()])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=CIFAR10_transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=CIFAR10_transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=1, shuffle=False)
###Output
_____no_output_____ |
biobb_wf_amber_md_setup/html/mdsetup_lig/biobb_amber_complex_setup_notebook.web.ipynb | ###Markdown
Protein-ligand complex MD Setup tutorial using BioExcel Building Blocks (biobb) --***AmberTools package version***--**Based on the [MDWeb](http://mmb.irbbarcelona.org/MDWeb2/) [Amber FULL MD Setup tutorial](https://mmb.irbbarcelona.org/MDWeb2/help.php?id=workflowsAmberWorkflowFULL)*****This tutorial aims to illustrate the process of **setting up a simulation system** containing a **protein in complex with a ligand**, step by step, using the **BioExcel Building Blocks library (biobb)** wrapping the **AmberTools** utility from the **AMBER package**. The particular example used is the **T4 lysozyme** protein (PDB code [3HTB](https://www.rcsb.org/structure/3HTB)) with two residue modifications ***L99A/M102Q*** complexed with the small ligand **2-propylphenol** (3-letter code [JZ4](https://www.rcsb.org/ligand/JZ4)). *** Settings Biobb modules used - [biobb_io](https://github.com/bioexcel/biobb_io): Tools to fetch biomolecular data from public databases. - [biobb_amber](https://github.com/bioexcel/biobb_amber): Tools to setup and run Molecular Dynamics simulations with AmberTools. - [biobb_structure_utils](https://github.com/bioexcel/biobb_structure_utils): Tools to modify or extract information from a PDB structure file. - [biobb_analysis](https://github.com/bioexcel/biobb_analysis): Tools to analyse Molecular Dynamics trajectories. - [biobb_chemistry](https://github.com/bioexcel/biobb_chemistry): Tools to to perform chemical conversions. Auxiliar libraries used - [nb_conda_kernels](https://github.com/Anaconda-Platform/nb_conda_kernels): Enables a Jupyter Notebook or JupyterLab application in one conda environment to access kernels for Python, R, and other languages found in other environments. - [nglview](http://nglviewer.org/nglview): Jupyter/IPython widget to interactively view molecular structures and trajectories in notebooks. - [ipywidgets](https://github.com/jupyter-widgets/ipywidgets): Interactive HTML widgets for Jupyter notebooks and the IPython kernel. - [plotly](https://plot.ly/python/offline/): Python interactive graphing library integrated in Jupyter notebooks. - [simpletraj](https://github.com/arose/simpletraj): Lightweight coordinate-only trajectory reader based on code from GROMACS, MDAnalysis and VMD. Conda Installation and Launch```consolegit clone https://github.com/bioexcel/biobb_wf_amber_md_setup.gitcd biobb_wf_amber_md_setupconda env create -f conda_env/environment.ymlconda activate biobb_AMBER_MDsetup_tutorialsjupyter-nbextension enable --py --user widgetsnbextensionjupyter-nbextension enable --py --user nglviewjupyter-notebook biobb_wf_amber_md_setup/notebooks/mdsetup_lig/biobb_amber_complex_setup_notebook.ipynb``` *** Pipeline steps 1. [Input Parameters](input) 2. [Fetching PDB Structure](fetch) 3. [Preparing PDB file for AMBER](pdb4amber) 4. [Create ligand system topology](ligtop) 5. [Create Protein-Ligand Complex System Topology](top) 6. [Energetically Minimize the Structure](minv) 7. [Create Solvent Box and Solvating the System](box) 8. [Adding Ions](ions) 9. [Energetically Minimize the System](min) 10. [Heating the System](heating) 11. [Equilibrate the System (NVT)](nvt) 12. [Equilibrate the System (NPT)](npt) 13. [Free Molecular Dynamics Simulation](free) 14. [Post-processing and Visualizing Resulting 3D Trajectory](post) 15. [Output Files](output) 16. 
[Questions & Comments](questions) ***<img src="https://bioexcel.eu/wp-content/uploads/2019/04/Bioexcell_logo_1080px_transp.png" alt="Bioexcel2 logo" title="Bioexcel2 logo" width="400" />*** Input parameters**Input parameters** needed: - **pdbCode**: PDB code of the protein structure (e.g. 3HTB) - **ligandCode**: 3-letter code of the ligand (e.g. JZ4) - **mol_charge**: Charge of the ligand (e.g. 0)
###Code
import nglview
import ipywidgets
import plotly
from plotly import subplots
import plotly.graph_objs as go
pdbCode = "3htb"
ligandCode = "JZ4"
mol_charge = 0
###Output
_____no_output_____
###Markdown
*** Fetching PDB structureDownloading **PDB structure** with the **protein molecule** from the RCSB PDB database.Alternatively, a **PDB file** can be used as starting structure. Stripping from the **downloaded structure** any **crystallographic water** molecule or **heteroatom**. *****Building Blocks** used: - [pdb](https://biobb-io.readthedocs.io/en/latest/api.htmlmodule-api.pdb) from **biobb_io.api.pdb** - [remove_pdb_water](https://biobb-structure-utils.readthedocs.io/en/latest/utils.htmlmodule-utils.remove_pdb_water) from **biobb_structure_utils.utils.remove_pdb_water** - [remove_ligand](https://biobb-structure-utils.readthedocs.io/en/latest/utils.htmlmodule-utils.remove_ligand) from **biobb_structure_utils.utils.remove_ligand*****
###Code
# Import module
from biobb_io.api.pdb import pdb
# Create properties dict and inputs/outputs
downloaded_pdb = pdbCode+'.pdb'
prop = {
'pdb_code': pdbCode,
'filter': False
}
#Create and launch bb
pdb(output_pdb_path=downloaded_pdb,
properties=prop)
# Show protein
view = nglview.show_structure_file(downloaded_pdb)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein', color='sstruc')
view.add_representation(repr_type='ball+stick', radius='0.1', selection='water')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='ligand')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='ion')
view._remote_call('setSize', target='Widget', args=['','600px'])
view.render_image()
view.download_image(filename='ngl1.png')
view
###Output
_____no_output_____
###Markdown
###Code
# Import module
from biobb_structure_utils.utils.remove_pdb_water import remove_pdb_water
# Create properties dict and inputs/outputs
nowat_pdb = pdbCode+'.nowat.pdb'
#Create and launch bb
remove_pdb_water(input_pdb_path=downloaded_pdb,
output_pdb_path=nowat_pdb)
# Import module
from biobb_structure_utils.utils.remove_ligand import remove_ligand
# Removing PO4 ligands:
# Create properties dict and inputs/outputs
nopo4_pdb = pdbCode+'.noPO4.pdb'
prop = {
'ligand' : 'PO4'
}
#Create and launch bb
remove_ligand(input_structure_path=nowat_pdb,
output_structure_path=nopo4_pdb,
properties=prop)
# Removing BME ligand:
# Create properties dict and inputs/outputs
nobme_pdb = pdbCode+'.noBME.pdb'
prop = {
'ligand' : 'BME'
}
#Create and launch bb
remove_ligand(input_structure_path=nopo4_pdb,
output_structure_path=nobme_pdb,
properties=prop)
###Output
2021-06-22 14:33:16,341 [MainThread ] [INFO ] PDB format detected, removing all atoms from residues named PO4
2021-06-22 14:33:16,349 [MainThread ] [INFO ] PDB format detected, removing all atoms from residues named BME
###Markdown
Visualizing 3D structure
###Code
# Show protein
view = nglview.show_structure_file(nobme_pdb)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein', color='sstruc')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='hetero')
view._remote_call('setSize', target='Widget', args=['','600px'])
view.render_image()
view.download_image(filename='ngl2.png')
view
###Output
_____no_output_____
###Markdown
*** Preparing PDB file for AMBERBefore starting a **protein MD setup**, it is always strongly recommended to take a look at the initial structure and try to identify important **properties** and also possible **issues**. These properties and issues can be serious, as for example the definition of **disulfide bridges**, the presence of a **non-standard aminoacids** or **ligands**, or **missing residues**. Other **properties** and **issues** might not be so serious, but they still need to be addressed before starting the **MD setup process**. **Missing hydrogen atoms**, presence of **alternate atomic location indicators** or **inserted residue codes** (see [PDB file format specification](https://www.wwpdb.org/documentation/file-format-content/format33/sect9.htmlATOM)) are examples of these not so crucial characteristics. Please visit the [AMBER tutorial: Building Protein Systems in Explicit Solvent](http://ambermd.org/tutorials/basic/tutorial7/index.php) for more examples. **AmberTools** utilities from **AMBER MD package** contain a tool able to analyse **PDB files** and clean them for further usage, especially with the **AmberTools LEaP program**: the **pdb4amber tool**. The next step of the workflow is running this tool to analyse our **input PDB structure**.For the particular **T4 Lysosyme** example, the most important property that is identified by the **pdb4amber** utility is the presence of **disulfide bridges** in the structure. Those are marked changing the residue names **from CYS to CYX**, which is the code that **AMBER force fields** use to distinguish between cysteines forming or not forming **disulfide bridges**. This will be used in the following step to correctly form a **bond** between these cysteine residues. We invite you to check what the tool does with different, more complex structures (e.g. PDB code [6N3V](https://www.rcsb.org/structure/6N3V)). *****Building Blocks** used: - [pdb4amber_run](https://biobb-amber.readthedocs.io/en/latest/pdb4amber.htmlpdb4amber-pdb4amber-run-module) from **biobb_amber.pdb4amber.pdb4amber_run*****
###Code
# Import module
from biobb_amber.pdb4amber.pdb4amber_run import pdb4amber_run
# Create prop dict and inputs/outputs
output_pdb4amber_path = 'structure.pdb4amber.pdb'
# Create and launch bb
pdb4amber_run(input_pdb_path=nobme_pdb,
output_pdb_path=output_pdb4amber_path,
properties=prop)
###Output
/anaconda3/envs/biobb_AMBER_MDsetup_tutorials/lib/python3.7/site-packages/biobb_common/tools/file_utils.py:354: UserWarning: Warning: ligand is not a recognized property. The most similar property is: out_log
error_property, close_property))
2021-06-22 14:33:16,599 [MainThread ] [INFO ] Creating bdfb3702-88bc-4dcc-82dd-51082d0619f9 temporary folder
2021-06-22 14:33:16,603 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 14:33:17,750 [MainThread ] [INFO ] pdb4amber -i 3htb.noBME.pdb -o structure.pdb4amber.pdb
2021-06-22 14:33:17,752 [MainThread ] [INFO ] Exit code 0
2021-06-22 14:33:17,754 [MainThread ] [INFO ]
==================================================
Summary of pdb4amber for: 3htb.noBME.pdb
===================================================
----------Chains
The following (original) chains have been found:
A
---------- Alternate Locations (Original Residues!))
The following residues had alternate locations:
MET_1
THR_21
TYR_24
ASN_68
ASP_72
ARG_80
ARG_96
GLN_102
LYS_147
ARG_154
-----------Non-standard-resnames
JZ4
---------- Mising heavy atom(s)
None
The alternate coordinates have been discarded.
Only the first occurrence for each atom was kept.
2021-06-22 14:33:17,756 [MainThread ] [INFO ] Removed: bdfb3702-88bc-4dcc-82dd-51082d0619f9
###Markdown
*** Create ligand system topology**Building AMBER topology** corresponding to the ligand structure.Force field used in this tutorial step is **amberGAFF**: [General AMBER Force Field](http://ambermd.org/antechamber/gaff.html), designed for rational drug design.- [Step 1](ligandTopologyStep1): Extract **ligand structure**.- [Step 2](ligandTopologyStep2): Add **hydrogen atoms** if missing.- [Step 3](ligandTopologyStep3): **Energetically minimize the system** with the new hydrogen atoms. - [Step 4](ligandTopologyStep4): Generate **ligand topology** (parameters). *****Building Blocks** used: - [ExtractHeteroAtoms](https://biobb-structure-utils.readthedocs.io/en/latest/utils.htmlmodule-utils.extract_heteroatoms) from **biobb_structure_utils.utils.extract_heteroatoms** - [ReduceAddHydrogens](https://biobb-chemistry.readthedocs.io/en/latest/ambertools.htmlmodule-ambertools.reduce_add_hydrogens) from **biobb_chemistry.ambertools.reduce_add_hydrogens** - [BabelMinimize](https://biobb-chemistry.readthedocs.io/en/latest/babelm.htmlmodule-babelm.babel_minimize) from **biobb_chemistry.babelm.babel_minimize** - [AcpypeParamsAC](https://biobb-chemistry.readthedocs.io/en/latest/acpype.htmlmodule-acpype.acpype_params_ac) from **biobb_chemistry.acpype.acpype_params_ac** *** Step 1: Extract **Ligand structure**
###Code
# Create Ligand system topology, STEP 1
# Extracting Ligand JZ4
# Import module
from biobb_structure_utils.utils.extract_heteroatoms import extract_heteroatoms
# Create properties dict and inputs/outputs
ligandFile = ligandCode+'.pdb'
prop = {
'heteroatoms' : [{"name": "JZ4"}]
}
extract_heteroatoms(input_structure_path=output_pdb4amber_path,
output_heteroatom_path=ligandFile,
properties=prop)
###Output
2021-06-22 14:33:17,826 [MainThread ] [INFO ] File JZ4.pdb created
###Markdown
Step 2: Add **hydrogen atoms**
###Code
# Create Ligand system topology, STEP 2
# Reduce_add_hydrogens: add Hydrogen atoms to a small molecule (using Reduce tool from Ambertools package)
# Import module
from biobb_chemistry.ambertools.reduce_add_hydrogens import reduce_add_hydrogens
# Create prop dict and inputs/outputs
output_reduce_h = ligandCode+'.reduce.H.pdb'
prop = {
'nuclear' : 'true'
}
# Create and launch bb
reduce_add_hydrogens(input_path=ligandFile,
output_path=output_reduce_h,
properties=prop)
###Output
2021-06-22 14:33:17,857 [MainThread ] [INFO ] Not using any container
2021-06-22 14:33:17,972 [MainThread ] [INFO ] reduce -NUClear -OH -ROTNH3 -ALLALT JZ4.pdb > JZ4.reduce.H.pdb
2021-06-22 14:33:17,973 [MainThread ] [INFO ] Exit code 0
2021-06-22 14:33:17,974 [MainThread ] [INFO ] reduce: version 3.3 06/02/2016, Copyright 1997-2016, J. Michael Word
Processing file: "JZ4.pdb"
Database of HETATM connections: "/anaconda3/envs/biobb_AMBER_MDsetup_tutorials//dat/reduce_wwPDB_het_dict.txt"
VDW dot density = 16/A^2
Orientation penalty scale = 1 (100%)
Eliminate contacts within 3 bonds.
Ignore atoms with |occupancy| <= 0.01 during adjustments.
Waters ignored if B-Factor >= 40 or |occupancy| < 0.66
Aromatic rings in amino acids accept hydrogen bonds.
Building or keeping OH & SH Hydrogens.
Rotating NH3 Hydrogens.
Not processing Met methyls.
WARNING: atom H13A from JZ4 will be treated as hydrogen
WARNING: atom H14A from JZ4 will be treated as hydrogen
WARNING: atom H13A from JZ4 will be treated as hydrogen
WARNING: atom H13A from JZ4 will be treated as hydrogen
WARNING: atom H14A from JZ4 will be treated as hydrogen
WARNING: atom H14A from JZ4 will be treated as hydrogen
WARNING: atom H14A from JZ4 will be treated as hydrogen
WARNING: atom H14A from JZ4 will be treated as hydrogen
WARNING: atom H13A from JZ4 will be treated as hydrogen
WARNING: atom H13A from JZ4 will be treated as hydrogen
WARNING: atom H14A from JZ4 will be treated as hydrogen
WARNING: atom H13A from JZ4 will be treated as hydrogen
Singles(size 1): A 164 JZ4 OAB
orientation 1: A 164 JZ4 OAB : rot 120: bump=0.000, HB=0.000, total=0.000
Found 0 hydrogens (0 hets)
Standardized 0 hydrogens (0 hets)
Added 12 hydrogens (12 hets)
Removed 0 hydrogens (0 hets)
Adjusted 1 group(s)
If you publish work which uses reduce, please cite:
Word, et. al. (1999) J. Mol. Biol. 285, 1735-1747.
For more information see http://kinemage.biochem.duke.edu
###Markdown
Step 3: **Energetically minimize the system** with the new hydrogen atoms.
###Code
# Create Ligand system topology, STEP 3
# Babel_minimize: Structure energy minimization of a small molecule after being modified adding hydrogen atoms
# Import module
from biobb_chemistry.babelm.babel_minimize import babel_minimize
# Create prop dict and inputs/outputs
output_babel_min = ligandCode+'.H.min.mol2'
prop = {
'method' : 'sd',
'criteria' : '1e-10',
'force_field' : 'GAFF'
}
# Create and launch bb
babel_minimize(input_path=output_reduce_h,
output_path=output_babel_min,
properties=prop)
###Output
2021-06-22 14:33:17,994 [MainThread ] [INFO ] Hydrogens is not correct, assigned default value: False
2021-06-22 14:33:17,996 [MainThread ] [INFO ] Steps is not correct, assigned default value: 2500
2021-06-22 14:33:17,996 [MainThread ] [INFO ] Cut-off is not correct, assigned default value: False
2021-06-22 14:33:17,997 [MainThread ] [INFO ] Rvdw is not correct, assigned default value: 6.0
2021-06-22 14:33:17,998 [MainThread ] [INFO ] Rele is not correct, assigned default value: 10.0
2021-06-22 14:33:17,999 [MainThread ] [INFO ] Frequency is not correct, assigned default value: 10
2021-06-22 14:33:18,000 [MainThread ] [INFO ] Not using any container
2021-06-22 14:33:18,620 [MainThread ] [INFO ] obminimize -c 1e-10 -sd -ff GAFF -ipdb JZ4.reduce.H.pdb -omol2 > JZ4.H.min.mol2
2021-06-22 14:33:18,622 [MainThread ] [INFO ] Exit code 0
2021-06-22 14:33:18,623 [MainThread ] [INFO ]
A T O M T Y P E S
IDX TYPE RING
1 c3 NO
2 ca AR
3 ca AR
4 ca AR
5 ca AR
6 ca AR
7 ca AR
8 c3 NO
9 c3 NO
10 oh NO
11 ho NO
12 hc NO
13 hc NO
14 ha NO
15 ha NO
16 ha NO
17 hc NO
18 hc NO
19 hc NO
20 hc NO
21 hc NO
22 ha NO
C H A R G E S
IDX CHARGE
1 -0.064979
2 -0.061278
3 -0.058294
4 -0.019907
5 0.120013
6 -0.055115
7 -0.005954
8 -0.024481
9 -0.051799
10 -0.506496
11 0.292144
12 0.026577
13 0.031423
14 0.065403
15 0.061871
16 0.061769
17 0.022987
18 0.022987
19 0.022987
20 0.026577
21 0.031423
22 0.062142
S E T T I N G U P C A L C U L A T I O N S
SETTING UP BOND CALCULATIONS...
SETTING UP ANGLE CALCULATIONS...
SETTING UP TORSION CALCULATIONS...
SETTING UP IMPROPER TORSION CALCULATIONS...
SETTING UP VAN DER WAALS CALCULATIONS...
SETTING UP ELECTROSTATIC CALCULATIONS...
S T E E P E S T D E S C E N T
STEPS = 2500
STEP n E(n) E(n-1)
------------------------------------
0 50.048 ----
10 36.21724 36.73363
20 32.62519 32.90050
30 30.42176 30.60740
40 28.84022 28.98002
50 27.60107 27.71398
60 26.57419 26.66957
70 25.69242 25.77532
80 24.91809 24.99145
90 24.22825 24.29392
100 23.60848 23.66758
110 23.04960 23.10303
120 22.54147 22.59027
130 22.07638 22.12112
140 21.64899 21.69018
150 21.25485 21.29289
160 20.89018 20.92542
170 20.55172 20.58448
180 20.23672 20.26724
190 19.94279 19.97131
200 19.66791 19.69460
210 19.41032 19.43536
220 19.16851 19.19203
230 18.94116 18.96329
240 18.72713 18.74797
250 18.52541 18.54506
260 18.33510 18.35365
270 18.15542 18.17293
280 17.98563 18.00219
290 17.82510 17.84076
300 17.67323 17.68805
310 17.52948 17.54351
320 17.39335 17.40664
330 17.26438 17.27697
340 17.14215 17.15408
350 17.02625 17.03757
360 16.91633 16.92707
370 16.81204 16.82223
380 16.71305 16.72272
390 16.61907 16.62825
400 16.52981 16.53853
410 16.44502 16.45331
420 16.36444 16.37232
430 16.28785 16.29534
440 16.21503 16.22215
450 16.14577 16.15255
460 16.07989 16.08634
470 16.01721 16.02334
480 15.95755 15.96338
490 15.90076 15.90631
500 15.84668 15.85198
510 15.79519 15.80023
520 15.74614 15.75094
530 15.69941 15.70398
540 15.65488 15.65924
550 15.61245 15.61660
560 15.57200 15.57596
570 15.53344 15.53721
580 15.49667 15.50027
590 15.46160 15.46504
600 15.42816 15.43144
610 15.39626 15.39939
620 15.36583 15.36881
630 15.33679 15.33963
640 15.30908 15.31179
650 15.28263 15.28522
660 15.25739 15.25986
670 15.23329 15.23565
680 15.21029 15.21254
690 15.18832 15.19047
700 15.16735 15.16940
710 15.14731 15.14928
720 15.12818 15.13005
730 15.10990 15.11169
740 15.09243 15.09415
750 15.07575 15.07738
760 15.05980 15.06137
770 15.04456 15.04606
780 15.03000 15.03143
790 15.01607 15.01744
800 15.00276 15.00406
810 14.99003 14.99127
820 14.97785 14.97904
830 14.96620 14.96734
840 14.95506 14.95615
850 14.94439 14.94544
860 14.93419 14.93519
870 14.92442 14.92538
880 14.91507 14.91598
890 14.90611 14.90699
900 14.89753 14.89837
910 14.88931 14.89011
920 14.88143 14.88220
930 14.87388 14.87462
940 14.86663 14.86735
950 14.85969 14.86037
960 14.85302 14.85368
970 14.84662 14.84725
980 14.84048 14.84108
990 14.83458 14.83516
1000 14.82891 14.82946
1010 14.82345 14.82399
1020 14.81821 14.81873
1030 14.81316 14.81366
1040 14.80831 14.80878
1050 14.80362 14.80409
1060 14.79911 14.79956
1070 14.79476 14.79519
1080 14.79056 14.79098
1090 14.78650 14.78691
1100 14.78258 14.78297
1110 14.77879 14.77917
1120 14.77512 14.77548
1130 14.77156 14.77191
1140 14.76835 14.76859
1150 14.76655 14.76671
1160 14.76491 14.76508
1170 14.76332 14.76348
1180 14.76177 14.76192
1190 14.76025 14.76040
1200 14.75876 14.75891
1210 14.75731 14.75746
1220 14.75589 14.75603
1230 14.75450 14.75464
1240 14.75315 14.75328
1250 14.75181 14.75195
1260 14.75051 14.75064
1270 14.74924 14.74936
1280 14.74798 14.74811
1290 14.74676 14.74688
1300 14.74556 14.74568
1310 14.74438 14.74449
1320 14.74322 14.74333
1330 14.74208 14.74220
1340 14.74097 14.74108
1350 14.73987 14.73998
1360 14.73880 14.73890
1370 14.73774 14.73784
1380 14.73669 14.73680
1390 14.73567 14.73577
1400 14.73466 14.73476
1410 14.73367 14.73377
1420 14.73269 14.73279
1430 14.73173 14.73182
1440 14.73078 14.73087
1450 14.72984 14.72993
1460 14.72891 14.72901
1470 14.72800 14.72809
1480 14.72710 14.72719
1490 14.72621 14.72630
1500 14.72549 14.72554
1510 14.72504 14.72509
1520 14.72461 14.72466
1530 14.72419 14.72423
1540 14.72377 14.72381
1550 14.72336 14.72340
1560 14.72295 14.72299
1570 14.72255 14.72259
1580 14.72216 14.72220
1590 14.72177 14.72181
1600 14.72138 14.72142
1610 14.72100 14.72104
1620 14.72063 14.72067
1630 14.72026 14.72030
1640 14.71989 14.71993
1650 14.71953 14.71957
1660 14.71918 14.71921
1670 14.71883 14.71886
1680 14.71848 14.71851
1690 14.71813 14.71817
1700 14.71780 14.71783
1710 14.71746 14.71749
1720 14.71713 14.71716
1730 14.71680 14.71683
1740 14.71648 14.71651
1750 14.71616 14.71619
1760 14.71584 14.71587
1770 14.71553 14.71556
1780 14.71522 14.71525
1790 14.71491 14.71495
1800 14.71461 14.71464
1810 14.71431 14.71434
1820 14.71402 14.71405
1830 14.71372 14.71375
1840 14.71344 14.71346
1850 14.71323 14.71324
1860 14.71308 14.71309
1870 14.71293 14.71294
1880 14.71278 14.71280
1890 14.71264 14.71265
1900 14.71250 14.71251
1910 14.71236 14.71237
1920 14.71222 14.71223
1930 14.71208 14.71209
1940 14.71194 14.71196
1950 14.71181 14.71182
1960 14.71168 14.71169
1970 14.71155 14.71156
1980 14.71142 14.71144
1990 14.71130 14.71131
2000 14.71117 14.71119
2010 14.71105 14.71107
2020 14.71093 14.71095
2030 14.71082 14.71083
2040 14.71070 14.71071
2050 14.71058 14.71060
2060 14.71047 14.71048
2070 14.71036 14.71037
2080 14.71025 14.71026
2090 14.71014 14.71015
2100 14.71003 14.71005
2110 14.70993 14.70994
2120 14.70982 14.70983
2130 14.70972 14.70973
2140 14.70962 14.70963
2150 14.70952 14.70953
2160 14.70942 14.70943
2170 14.70933 14.70934
2180 14.70923 14.70924
2190 14.70914 14.70915
2200 14.70904 14.70905
2210 14.70896 14.70896
2220 14.70889 14.70890
2230 14.70884 14.70885
2240 14.70879 14.70880
2250 14.70874 14.70875
2260 14.70869 14.70870
2270 14.70865 14.70865
2280 14.70860 14.70860
2290 14.70855 14.70856
2300 14.70851 14.70851
2310 14.70846 14.70847
2320 14.70842 14.70842
2330 14.70837 14.70838
2340 14.70833 14.70833
2350 14.70829 14.70829
2360 14.70825 14.70825
2370 14.70820 14.70821
2380 14.70816 14.70817
2390 14.70812 14.70813
2400 14.70808 14.70809
2410 14.70805 14.70805
2420 14.70801 14.70801
2430 14.70797 14.70797
2440 14.70793 14.70794
2450 14.70790 14.70790
2460 14.70786 14.70786
2470 14.70783 14.70783
2480 14.70779 14.70779
2490 14.70776 14.70776
2500 14.70772 14.70773
Time: 0.563172seconds. Iterations per second: 4440.92
###Markdown
Visualizing 3D structuresVisualizing the small molecule generated **PDB structures** using **NGL**: - **Original Ligand Structure** (Left)- **Ligand Structure with hydrogen atoms added** (with Reduce program) (Center)- **Ligand Structure with hydrogen atoms added** (with Reduce program), **energy minimized** (with Open Babel) (Right)
###Code
# Show different structures generated (for comparison)
view1 = nglview.show_structure_file(ligandFile)
view1.add_representation(repr_type='ball+stick')
view1._remote_call('setSize', target='Widget', args=['350px','400px'])
view1.camera='orthographic'
view1.render_image()
view1.download_image(filename='ngl3.png')
view1
view2 = nglview.show_structure_file(output_reduce_h)
view2.add_representation(repr_type='ball+stick')
view2._remote_call('setSize', target='Widget', args=['350px','400px'])
view2.camera='orthographic'
view2.render_image()
view2.download_image(filename='ngl4.png')
view2
view3 = nglview.show_structure_file(output_babel_min)
view3.add_representation(repr_type='ball+stick')
view3._remote_call('setSize', target='Widget', args=['350px','400px'])
view3.camera='orthographic'
view3.render_image()
view3.download_image(filename='ngl5.png')
view3
ipywidgets.HBox([view1, view2, view3])
###Output
_____no_output_____
###Markdown
Step 4: Generate **ligand topology** (parameters).
###Code
# Create Ligand system topology, STEP 4
# Acpype_params_gmx: Generation of topologies for AMBER with ACPype
# Import module
from biobb_chemistry.acpype.acpype_params_ac import acpype_params_ac
# Create prop dict and inputs/outputs
output_acpype_inpcrd = ligandCode+'params.inpcrd'
output_acpype_frcmod = ligandCode+'params.frcmod'
output_acpype_lib = ligandCode+'params.lib'
output_acpype_prmtop = ligandCode+'params.prmtop'
output_acpype = ligandCode+'params'
prop = {
'basename' : output_acpype,
'charge' : mol_charge
}
# Create and launch bb
acpype_params_ac(input_path=output_babel_min,
output_path_inpcrd=output_acpype_inpcrd,
output_path_frcmod=output_acpype_frcmod,
output_path_lib=output_acpype_lib,
output_path_prmtop=output_acpype_prmtop,
properties=prop)
###Output
2021-06-22 14:33:18,871 [MainThread ] [INFO ] Running acpype, this execution can take a while
2021-06-22 14:33:18,872 [MainThread ] [INFO ] Not using any container
2021-06-22 14:33:21,690 [MainThread ] [INFO ] acpype -i /home/gbayarri_local/projects/BioBB/tutorials/biobb_wf_amber_md_setup/biobb_wf_amber_md_setup/notebooks/mdsetup_lig/JZ4.H.min.mol2 -b JZ4params.ZW79gA -n 0
2021-06-22 14:33:21,692 [MainThread ] [INFO ] Exit code 0
2021-06-22 14:33:21,692 [MainThread ] [INFO ] ========================================================================================
| ACPYPE: AnteChamber PYthon Parser interfacE v. 2019-11-07T23:16:00CET (c) 2021 AWSdS |
========================================================================================
==> ... charge set to 0
==> Executing Antechamber...
==> * Antechamber OK *
==> * Parmchk OK *
==> Executing Tleap...
++++++++++start_quote+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Checking 'JZ4'....
Checking parameters for unit 'JZ4'.
Checking for bond parameters.
Checking for angle parameters.
Unit is OK.
++++++++++end_quote+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
==> * Tleap OK *
==> Removing temporary files...
==> Writing NEW PDB file
==> Writing CNS/XPLOR files
==> Writing GROMACS files
==> Writing GMX dihedrals for GMX 4.5 and higher.
==> Writing CHARMM files
==> Writing pickle file JZ4params.ZW79gA.pkl
Total time of execution: 3s
2021-06-22 14:33:21,696 [MainThread ] [INFO ] File JZ4params.prmtop succesfully created
2021-06-22 14:33:21,698 [MainThread ] [INFO ] File JZ4params.frcmod succesfully created
2021-06-22 14:33:21,701 [MainThread ] [INFO ] File JZ4params.lib succesfully created
2021-06-22 14:33:21,703 [MainThread ] [INFO ] File JZ4params.inpcrd succesfully created
2021-06-22 14:33:21,705 [MainThread ] [INFO ] Removed temporary folder: JZ4params.ZW79gA.acpype
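###Markdown
The ACPype run above writes the ligand parameter files that will be fed to LEaP in the next section. As a quick optional check, the sketch below prints the first lines of the generated frcmod file; it only assumes the output_acpype_frcmod path defined above, and the file is consumed unchanged by tleap later on.
###Code
# Optional check: peek at the ligand frcmod (GAFF parameter modifications) written by ACPype
with open(output_acpype_frcmod) as frcmod_file:
    for i, line in enumerate(frcmod_file):
        print(line.rstrip())
        if i >= 20:  # only show the first ~20 lines
            break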
###Markdown
*** Create protein-ligand complex system topology**Building AMBER topology** corresponding to the protein-ligand complex structure.*IMPORTANT: the previous pdb4amber building block is changing the proper cysteines residue naming in the PDB file from CYS to CYX so that this step can automatically identify and add the disulfide bonds to the system topology.*The **force field** used in this tutorial is [**ff14SB**](https://doi.org/10.1021/acs.jctc.5b00255) for the **protein**, an evolution of the **ff99SB** force field with improved accuracy of protein side chains and backbone parameters; and the [**gaff**](https://doi.org/10.1002/jcc.20035) force field for the small molecule. **Water** molecules type used in this tutorial is [**tip3p**](https://doi.org/10.1021/jp003020w).Adding **side chain atoms** and **hydrogen atoms** if missing. Forming **disulfide bridges** according to the info added in the previous step. *NOTE: From this point on, the **protein-ligand complex structure and topology** generated can be used in a regular MD setup.*Generating three output files: - **AMBER structure** (PDB file)- **AMBER topology** (AMBER [Parmtop](https://ambermd.org/FileFormats.phptopology) file)- **AMBER coordinates** (AMBER [Coordinate/Restart](https://ambermd.org/FileFormats.phprestart) file) *****Building Blocks** used: - [leap_gen_top](https://biobb-amber.readthedocs.io/en/latest/leap.htmlmodule-leap.leap_gen_top) from **biobb_amber.leap.leap_gen_top*****
###Code
# Import module
from biobb_amber.leap.leap_gen_top import leap_gen_top
# Create prop dict and inputs/outputs
output_pdb_path = 'structure.leap.pdb'
output_top_path = 'structure.leap.top'
output_crd_path = 'structure.leap.crd'
prop = {
"forcefield" : ["protein.ff14SB","gaff"]
}
# Create and launch bb
leap_gen_top(input_pdb_path=output_pdb4amber_path,
input_lib_path=output_acpype_lib,
input_frcmod_path=output_acpype_frcmod,
output_pdb_path=output_pdb_path,
output_top_path=output_top_path,
output_crd_path=output_crd_path,
properties=prop)
###Output
2021-06-22 14:33:21,724 [MainThread ] [INFO ] Creating 47a0068a-6a8f-44cf-b009-eb08c11b5da8 temporary folder
2021-06-22 14:33:21,725 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 14:33:22,111 [MainThread ] [INFO ] tleap -f 47a0068a-6a8f-44cf-b009-eb08c11b5da8/leap.in
2021-06-22 14:33:22,113 [MainThread ] [INFO ] Exit code 0
2021-06-22 14:33:22,115 [MainThread ] [INFO ] -I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/prep to search path.
-I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib to search path.
-I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm to search path.
-I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd to search path.
-f: Source 47a0068a-6a8f-44cf-b009-eb08c11b5da8/leap.in.
Welcome to LEaP!
(no leaprc in search path)
Sourcing: ./47a0068a-6a8f-44cf-b009-eb08c11b5da8/leap.in
----- Source: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.protein.ff14SB
----- Source of /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.protein.ff14SB done
Log file: ./leap.log
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/parm10.dat
Reading title:
PARM99 + frcmod.ff99SB + frcmod.parmbsc0 + OL3 for RNA
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.ff14SB
Reading force field modification type file (frcmod)
Reading title:
ff14SB protein backbone and sidechain parameters
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/amino12.lib
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/aminoct12.lib
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/aminont12.lib
----- Source: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.gaff
----- Source of /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.gaff done
Log file: ./leap.log
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/gaff.dat
Reading title:
AMBER General Force Field for organic molecules (Version 1.81, May 2017)
Loading library: ./JZ4params.lib
Loading parameters: ./JZ4params.frcmod
Reading force field modification type file (frcmod)
Reading title:
Remark line goes here
Loading PDB file: ./structure.pdb4amber.pdb
Added missing heavy atom: .R<CASN 163>.A<OXT 15>
total atoms in file: 1310
Leap added 1326 missing atoms according to residue templates:
1 Heavy
1325 H / lone pairs
Writing pdb file: structure.leap.pdb
/anaconda3/envs/biobb_AMBER_MDsetup_tutorials/bin/teLeap: Warning!
Converting N-terminal residue name to PDB format: NMET -> MET
/anaconda3/envs/biobb_AMBER_MDsetup_tutorials/bin/teLeap: Warning!
Converting C-terminal residue name to PDB format: CASN -> ASN
Checking Unit.
/anaconda3/envs/biobb_AMBER_MDsetup_tutorials/bin/teLeap: Warning!
The unperturbed charge of the unit (5.999999) is not zero.
/anaconda3/envs/biobb_AMBER_MDsetup_tutorials/bin/teLeap: Note.
Ignoring the warning from Unit Checking.
Building topology.
Building atom parameters.
Building bond parameters.
Building angle parameters.
Building proper torsion parameters.
Building improper torsion parameters.
total 534 improper torsions applied
Building H-Bond parameters.
Incorporating Non-Bonded adjustments.
Not Marking per-residue atom chain types.
Marking per-residue atom chain types.
(Residues lacking connect0/connect1 -
these don't have chain types marked:
res total affected
CASN 1
JZ4 1
NMET 1
)
(no restraints)
Quit
Exiting LEaP: Errors = 0; Warnings = 3; Notes = 1.
2021-06-22 14:33:22,116 [MainThread ] [INFO ] Removed: 47a0068a-6a8f-44cf-b009-eb08c11b5da8
2021-06-22 14:33:22,122 [MainThread ] [INFO ] Removed: leap.log
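###Markdown
A quick, dependency-free way to inspect the structure written by LEaP is to count its atoms and residues and confirm that the JZ4 ligand is part of the complex. The cell below is an optional sketch based on standard PDB columns; it only uses the output_pdb_path and ligandCode variables already defined in this notebook.
###Code
# Optional check: count atoms/residues in the LEaP-generated PDB and locate the ligand
n_atoms = 0
residues = set()
with open(output_pdb_path) as pdb_file:
    for line in pdb_file:
        if line.startswith(("ATOM", "HETATM")):
            n_atoms += 1
            # (chain id, residue number, residue name) identifies a residue
            residues.add((line[21], line[22:26].strip(), line[17:20].strip()))

ligand_residues = [res for res in residues if res[2] == ligandCode]
print("Atoms:", n_atoms, "| Residues:", len(residues), "| Ligand residues:", ligand_residues)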
###Markdown
Visualizing 3D structureVisualizing the **PDB structure** using **NGL**. Try to identify the differences between the structure generated for the **system topology** and the **original one** (e.g. hydrogen atoms).
###Code
import nglview
import ipywidgets
# Show protein
view = nglview.show_structure_file(output_pdb_path)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein', opacity='0.4')
view.add_representation(repr_type='ball+stick', selection='protein')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='JZ4')
view._remote_call('setSize', target='Widget', args=['','600px'])
view.render_image()
view.download_image(filename='ngl6.png')
view
###Output
_____no_output_____
###Markdown
Energetically minimize the structure**Energetically minimize** the **protein-ligand complex structure** (in vacuo) using the **sander tool** from the **AMBER MD package**. This step is **relaxing the structure**, usually **constrained**, especially when coming from an X-ray **crystal structure**. The **minimization process** is done in two steps:- [Step 1](minv_1): **Hydrogen** minimization, applying **position restraints** (50 Kcal/mol.$Å^{2}$) to the **protein heavy atoms**.- [Step 2](minv_2): **System** minimization, with **restraints** applied only to the **small molecule**.*****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.htmlmodule-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_minout](https://biobb-amber.readthedocs.io/en/latest/process.htmlmodule-process.process_minout) from **biobb_amber.process.process_minout***** Step 1: Minimize Hydrogens**Hydrogen** minimization, applying **position restraints** (50 Kcal/mol.$Å^{2}$) to the **protein heavy atoms**.
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_h_min_traj_path = 'sander.h_min.x'
output_h_min_rst_path = 'sander.h_min.rst'
output_h_min_log_path = 'sander.h_min.log'
prop = {
'simulation_type' : "min_vacuo",
"mdin" : {
'maxcyc' : 500,
'ntpr' : 5,
'ntr' : 1,
'restraintmask' : '\":*&!@H=\"',
'restraint_wt' : 50.0
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_top_path,
input_crd_path=output_crd_path,
input_ref_path=output_crd_path,
output_traj_path=output_h_min_traj_path,
output_rst_path=output_h_min_rst_path,
output_log_path=output_h_min_log_path,
properties=prop)
###Output
2021-06-22 14:33:22,291 [MainThread ] [INFO ] Creating 552ce4cb-1feb-474b-9c35-2661dd87fec7 temporary folder
2021-06-22 14:33:22,293 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 14:33:46,284 [MainThread ] [INFO ] sander -O -i 552ce4cb-1feb-474b-9c35-2661dd87fec7/sander.mdin -p structure.leap.top -c structure.leap.crd -r sander.h_min.rst -o sander.h_min.log -x sander.h_min.x -ref structure.leap.crd
2021-06-22 14:33:46,285 [MainThread ] [INFO ] Exit code 0
2021-06-22 14:33:46,286 [MainThread ] [INFO ] Removed: mdinfo, 552ce4cb-1feb-474b-9c35-2661dd87fec7
###Markdown
Checking Energy Minimization resultsChecking **energy minimization** results. Plotting **potential energy** along time during the **minimization process**.
###Code
# Import module
from biobb_amber.process.process_minout import process_minout
# Create prop dict and inputs/outputs
output_h_min_dat_path = 'sander.h_min.energy.dat'
prop = {
"terms" : ['ENERGY']
}
# Create and launch bb
process_minout(input_log_path=output_h_min_log_path,
output_dat_path=output_h_min_dat_path,
properties=prop)
#Read data from file and filter out energy values higher than 1000 kcal/mol
with open(output_h_min_dat_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Energy Minimization",
xaxis=dict(title = "Energy Minimization Step"),
yaxis=dict(title = "Potential Energy kcal/mol")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
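###Markdown
The read-and-plot pattern used in the cell above is repeated after every minimization and equilibration step of this tutorial. As an optional refactoring, a small helper like the sketch below could be defined once and reused; it assumes that plotly and plotly.graph_objs (as go) are already imported, exactly as the existing plotting cells do, and it keeps the same "< 1000" filtering criterion.
###Code
# Optional helper: read a two-column .dat file from process_minout/process_mdout and plot it
def plot_dat(dat_path, title, xlabel, ylabel, ymax=1000):
    xs, ys = [], []
    with open(dat_path) as dat_file:
        for line in dat_file:
            if line.startswith(("#", "@")):
                continue
            step, value = float(line.split()[0]), float(line.split()[1])
            if value < ymax:  # same filtering used in the inline cells
                xs.append(step)
                ys.append(value)
    plotly.offline.init_notebook_mode(connected=True)
    fig = {
        "data": [go.Scatter(x=xs, y=ys)],
        "layout": go.Layout(title=title,
                            xaxis=dict(title=xlabel),
                            yaxis=dict(title=ylabel))
    }
    plotly.offline.iplot(fig)

# Example, equivalent to the cell above:
# plot_dat(output_h_min_dat_path, "Energy Minimization",
#          "Energy Minimization Step", "Potential Energy kcal/mol")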
###Markdown
Step 2: Minimize the system**System** minimization, with **restraints** only on the **small molecule**, to avoid a possible change in position due to **protein repulsion**.
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_n_min_traj_path = 'sander.n_min.x'
output_n_min_rst_path = 'sander.n_min.rst'
output_n_min_log_path = 'sander.n_min.log'
prop = {
'simulation_type' : "min_vacuo",
"mdin" : {
'maxcyc' : 500,
'ntpr' : 5,
'restraintmask' : '\":' + ligandCode + '\"',
'restraint_wt' : 500.0
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_top_path,
input_crd_path=output_h_min_rst_path,
output_traj_path=output_n_min_traj_path,
output_rst_path=output_n_min_rst_path,
output_log_path=output_n_min_log_path,
properties=prop)
###Output
2021-06-22 14:33:47,029 [MainThread ] [INFO ] Creating 7e579700-6ba4-40aa-853c-dafa2ef0f191 temporary folder
2021-06-22 14:33:47,031 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 14:34:10,096 [MainThread ] [INFO ] sander -O -i 7e579700-6ba4-40aa-853c-dafa2ef0f191/sander.mdin -p structure.leap.top -c sander.h_min.rst -r sander.n_min.rst -o sander.n_min.log -x sander.n_min.x
2021-06-22 14:34:10,097 [MainThread ] [INFO ] Exit code 0
2021-06-22 14:34:10,098 [MainThread ] [INFO ] Removed: mdinfo, 7e579700-6ba4-40aa-853c-dafa2ef0f191
###Markdown
Checking Energy Minimization resultsChecking **energy minimization** results. Plotting **potential energy** by time during the **minimization process**.
###Code
# Import module
from biobb_amber.process.process_minout import process_minout
# Create prop dict and inputs/outputs
output_n_min_dat_path = 'sander.n_min.energy.dat'
prop = {
"terms" : ['ENERGY']
}
# Create and launch bb
process_minout(input_log_path=output_n_min_log_path,
output_dat_path=output_n_min_dat_path,
properties=prop)
#Read data from file and filter out energy values higher than 1000 kcal/mol
with open(output_n_min_dat_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Energy Minimization",
xaxis=dict(title = "Energy Minimization Step"),
yaxis=dict(title = "Potential Energy kcal/mol")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Create solvent box and solvate the systemDefine the unit cell for the **protein-ligand complex structure MD system** to fill it with water molecules.A **truncated octahedron box** is used to define the unit cell, with a **distance from the protein to the box edge of 9.0 Angstroms**. The solvent type used is the default **TIP3P** water model, a generic 3-point solvent model.*****Building Blocks** used: - [amber_to_pdb](https://biobb-amber.readthedocs.io/en/latest/ambpdb.htmlmodule-ambpdb.amber_to_pdb) from **biobb_amber.ambpdb.amber_to_pdb** - [leap_solvate](https://biobb-amber.readthedocs.io/en/latest/leap.htmlmodule-leap.leap_solvate) from **biobb_amber.leap.leap_solvate** *** Getting minimized structureGetting the result of the **energetic minimization** and converting it to **PDB format** to then be used as input for the **water box generation**. This is achieved by converting from **AMBER topology + coordinates** files to a **PDB file** using the **ambpdb** tool from the **AMBER MD package**.
###Code
# Import module
from biobb_amber.ambpdb.amber_to_pdb import amber_to_pdb
# Create prop dict and inputs/outputs
output_ambpdb_path = 'structure.ambpdb.pdb'
# Create and launch bb
amber_to_pdb(input_top_path=output_top_path,
input_crd_path=output_h_min_rst_path,
output_pdb_path=output_ambpdb_path)
###Output
2021-06-22 14:34:10,210 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 14:34:10,244 [MainThread ] [INFO ] ambpdb -p structure.leap.top -c sander.h_min.rst > structure.ambpdb.pdb
2021-06-22 14:34:10,245 [MainThread ] [INFO ] Exit code 0
###Markdown
Create water boxDefine the **unit cell** for the **protein-ligand complex structure MD system** and fill it with **water molecules**.
###Code
# Import module
from biobb_amber.leap.leap_solvate import leap_solvate
# Create prop dict and inputs/outputs
output_solv_pdb_path = 'structure.solv.pdb'
output_solv_top_path = 'structure.solv.parmtop'
output_solv_crd_path = 'structure.solv.crd'
prop = {
"forcefield" : ["protein.ff14SB","gaff"],
"water_type": "TIP3PBOX",
"distance_to_molecule": "9.0",
"box_type": "truncated_octahedron"
}
# Create and launch bb
leap_solvate(input_pdb_path=output_ambpdb_path,
input_lib_path=output_acpype_lib,
input_frcmod_path=output_acpype_frcmod,
output_pdb_path=output_solv_pdb_path,
output_top_path=output_solv_top_path,
output_crd_path=output_solv_crd_path,
properties=prop)
###Output
2021-06-22 14:34:10,262 [MainThread ] [INFO ] Creating 2074c37b-6879-488a-8b34-027ffb15d8e9 temporary folder
2021-06-22 14:34:10,263 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 14:34:11,240 [MainThread ] [INFO ] tleap -f 2074c37b-6879-488a-8b34-027ffb15d8e9/leap.in
2021-06-22 14:34:11,242 [MainThread ] [INFO ] Exit code 0
2021-06-22 14:34:11,243 [MainThread ] [INFO ] -I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/prep to search path.
-I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib to search path.
-I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm to search path.
-I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd to search path.
-f: Source 2074c37b-6879-488a-8b34-027ffb15d8e9/leap.in.
Welcome to LEaP!
(no leaprc in search path)
Sourcing: ./2074c37b-6879-488a-8b34-027ffb15d8e9/leap.in
----- Source: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.protein.ff14SB
----- Source of /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.protein.ff14SB done
Log file: ./leap.log
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/parm10.dat
Reading title:
PARM99 + frcmod.ff99SB + frcmod.parmbsc0 + OL3 for RNA
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.ff14SB
Reading force field modification type file (frcmod)
Reading title:
ff14SB protein backbone and sidechain parameters
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/amino12.lib
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/aminoct12.lib
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/aminont12.lib
----- Source: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.gaff
----- Source of /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.gaff done
Log file: ./leap.log
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/gaff.dat
Reading title:
AMBER General Force Field for organic molecules (Version 1.81, May 2017)
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.ionsjc_tip3p
Reading force field modification type file (frcmod)
Reading title:
Monovalent ion parameters for Ewald and TIP3P water from Joung & Cheatham JPCB (2008)
----- Source: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.water.tip3p
----- Source of /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.water.tip3p done
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/atomic_ions.lib
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/solvents.lib
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.tip3p
Reading force field modification type file (frcmod)
Reading title:
This is the additional/replacement parameter set for TIP3P water
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.ions1lm_126_tip3p
Reading force field modification type file (frcmod)
Reading title:
Li/Merz ion parameters of monovalent ions for TIP3P water model (12-6 normal usage set)
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.ionsjc_tip3p
Reading force field modification type file (frcmod)
Reading title:
Monovalent ion parameters for Ewald and TIP3P water from Joung & Cheatham JPCB (2008)
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.ions234lm_126_tip3p
Reading force field modification type file (frcmod)
Reading title:
Li/Merz ion parameters of divalent to tetravalent ions for TIP3P water model (12-6 normal usage set)
Loading library: ./JZ4params.lib
Loading parameters: ./JZ4params.frcmod
Reading force field modification type file (frcmod)
Reading title:
Remark line goes here
Loading PDB file: ./structure.ambpdb.pdb
total atoms in file: 2636
Scaling up box by a factor of 1.185578 to meet diagonal cut criterion
Solute vdw bounding box: 37.464 39.384 60.640
Total bounding box for atom centers: 81.980 81.980 81.980
(box expansion for 'iso' is 88.2%)
Solvent unit box: 18.774 18.774 18.774
Volume: 285691.372 A^3 (oct)
Total mass 153246.106 amu, Density 0.891 g/cc
Added 7471 residues.
Writing pdb file: structure.solv.pdb
printing CRYST1 record to PDB file with box info
/anaconda3/envs/biobb_AMBER_MDsetup_tutorials/bin/teLeap: Warning!
Converting N-terminal residue name to PDB format: NMET -> MET
/anaconda3/envs/biobb_AMBER_MDsetup_tutorials/bin/teLeap: Warning!
Converting C-terminal residue name to PDB format: CASN -> ASN
Checking Unit.
/anaconda3/envs/biobb_AMBER_MDsetup_tutorials/bin/teLeap: Warning!
The unperturbed charge of the unit (5.999999) is not zero.
/anaconda3/envs/biobb_AMBER_MDsetup_tutorials/bin/teLeap: Note.
Ignoring the warning from Unit Checking.
Building topology.
Building atom parameters.
Building bond parameters.
Building angle parameters.
Building proper torsion parameters.
Building improper torsion parameters.
total 534 improper torsions applied
Building H-Bond parameters.
Incorporating Non-Bonded adjustments.
Not Marking per-residue atom chain types.
Marking per-residue atom chain types.
(Residues lacking connect0/connect1 -
these don't have chain types marked:
res total affected
CASN 1
JZ4 1
NMET 1
WAT 7471
)
(no restraints)
Quit
Exiting LEaP: Errors = 0; Warnings = 3; Notes = 1.
2021-06-22 14:34:11,253 [MainThread ] [INFO ] Removed: 2074c37b-6879-488a-8b34-027ffb15d8e9
2021-06-22 14:34:11,254 [MainThread ] [INFO ] Removed: leap.log
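###Markdown
The box information reported by LEaP is also stored in the CRYST1 record at the top of the solvated PDB. The short optional sketch below reads it back, using only the output_solv_pdb_path file produced by the cell above.
###Code
# Optional check: read the CRYST1 record of the solvated structure (unit cell written by LEaP)
with open(output_solv_pdb_path) as pdb_file:
    for line in pdb_file:
        if line.startswith("CRYST1"):
            print("Unit cell (a, b, c, alpha, beta, gamma):", line.split()[1:7])
            break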
###Markdown
Adding ions**Neutralizing** the system and adding an additional **ionic concentration** using the **leap tool** from the **AMBER MD package**. Using **Sodium (Na+)** and **Chloride (Cl-)** counterions and an **additional ionic concentration** of 150mM.*****Building Blocks** used: - [leap_add_ions](https://biobb-amber.readthedocs.io/en/latest/leap.htmlmodule-leap.leap_add_ions) from **biobb_amber.leap.leap_add_ions*****
###Code
# Import module
from biobb_amber.leap.leap_add_ions import leap_add_ions
# Create prop dict and inputs/outputs
output_ions_pdb_path = 'structure.ions.pdb'
output_ions_top_path = 'structure.ions.parmtop'
output_ions_crd_path = 'structure.ions.crd'
prop = {
"forcefield" : ["protein.ff14SB","gaff"],
"neutralise" : True,
"positive_ions_type": "Na+",
"negative_ions_type": "Cl-",
"ionic_concentration" : 150, # 150mM
"box_type": "truncated_octahedron"
}
# Create and launch bb
leap_add_ions(input_pdb_path=output_solv_pdb_path,
input_lib_path=output_acpype_lib,
input_frcmod_path=output_acpype_frcmod,
output_pdb_path=output_ions_pdb_path,
output_top_path=output_ions_top_path,
output_crd_path=output_ions_crd_path,
properties=prop)
###Output
2021-06-22 14:34:11,288 [MainThread ] [INFO ] Creating 350eea0d-4d40-495e-b8ab-eb8664e6e207 temporary folder
2021-06-22 14:34:11,383 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 14:34:13,439 [MainThread ] [INFO ] tleap -f 350eea0d-4d40-495e-b8ab-eb8664e6e207/leap.in
2021-06-22 14:34:13,440 [MainThread ] [INFO ] Exit code 0
2021-06-22 14:34:13,441 [MainThread ] [INFO ] -I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/prep to search path.
-I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib to search path.
-I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm to search path.
-I: Adding /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd to search path.
-f: Source 350eea0d-4d40-495e-b8ab-eb8664e6e207/leap.in.
Welcome to LEaP!
(no leaprc in search path)
Sourcing: ./350eea0d-4d40-495e-b8ab-eb8664e6e207/leap.in
----- Source: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.protein.ff14SB
----- Source of /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.protein.ff14SB done
Log file: ./leap.log
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/parm10.dat
Reading title:
PARM99 + frcmod.ff99SB + frcmod.parmbsc0 + OL3 for RNA
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.ff14SB
Reading force field modification type file (frcmod)
Reading title:
ff14SB protein backbone and sidechain parameters
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/amino12.lib
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/aminoct12.lib
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/aminont12.lib
----- Source: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.gaff
----- Source of /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.gaff done
Log file: ./leap.log
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/gaff.dat
Reading title:
AMBER General Force Field for organic molecules (Version 1.81, May 2017)
----- Source: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.water.tip3p
----- Source of /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/cmd/leaprc.water.tip3p done
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/atomic_ions.lib
Loading library: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/lib/solvents.lib
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.tip3p
Reading force field modification type file (frcmod)
Reading title:
This is the additional/replacement parameter set for TIP3P water
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.ions1lm_126_tip3p
Reading force field modification type file (frcmod)
Reading title:
Li/Merz ion parameters of monovalent ions for TIP3P water model (12-6 normal usage set)
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.ionsjc_tip3p
Reading force field modification type file (frcmod)
Reading title:
Monovalent ion parameters for Ewald and TIP3P water from Joung & Cheatham JPCB (2008)
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.ions234lm_126_tip3p
Reading force field modification type file (frcmod)
Reading title:
Li/Merz ion parameters of divalent to tetravalent ions for TIP3P water model (12-6 normal usage set)
Loading parameters: /anaconda3/envs/biobb_AMBER_MDsetup_tutorials/dat/leap/parm/frcmod.ionsjc_tip3p
Reading force field modification type file (frcmod)
Reading title:
Monovalent ion parameters for Ewald and TIP3P water from Joung & Cheatham JPCB (2008)
Loading library: ./JZ4params.lib
Loading parameters: ./JZ4params.frcmod
Reading force field modification type file (frcmod)
Reading title:
Remark line goes here
Loading PDB file: ./structure.solv.pdb
total atoms in file: 25049
6 Cl- ions required to neutralize.
Adding 6 counter ions to "mol". 7465 solvent molecules will remain.
0: Placed Cl- in mol at (7.65, 16.21, -18.24).
0: Placed Cl- in mol at (-17.57, 11.01, 14.09).
0: Placed Cl- in mol at (-24.22, -3.80, 19.96).
0: Placed Cl- in mol at (-11.43, 16.25, 9.67).
0: Placed Cl- in mol at (19.99, -17.48, -8.28).
0: Placed Cl- in mol at (-0.07, -21.16, 23.20).
0.000001 0 1 0
0 Na+ ion required to neutralize.
Adding 20 counter ions to "mol". 7445 solvent molecules will remain.
0: Placed Cl- in mol at (20.67, 9.89, -26.26).
0: Placed Cl- in mol at (-18.85, 23.84, -8.06).
0: Placed Cl- in mol at (7.42, 26.80, 6.98).
0: Placed Cl- in mol at (2.02, 9.29, -29.60).
0: Placed Cl- in mol at (4.68, -1.63, 35.32).
0: Placed Cl- in mol at (-24.31, -19.07, -6.70).
0: Placed Cl- in mol at (15.11, -12.21, 3.77).
0: Placed Cl- in mol at (15.24, -6.96, -9.29).
0: Placed Cl- in mol at (34.73, -19.31, 5.50).
0: Placed Cl- in mol at (-25.33, 17.29, -25.15).
0: Placed Cl- in mol at (17.16, -14.59, -0.50).
0: Placed Cl- in mol at (-24.37, -1.29, -13.63).
0: Placed Cl- in mol at (6.80, -23.84, 31.25).
0: Placed Cl- in mol at (-12.82, -7.73, 15.11).
0: Placed Cl- in mol at (-28.83, 4.42, 20.25).
0: Placed Cl- in mol at (9.94, 33.74, 11.09).
0: Placed Cl- in mol at (-29.64, -9.77, 13.26).
0: Placed Cl- in mol at (4.10, 6.84, 24.70).
0: Placed Cl- in mol at (-30.31, -19.31, 5.50).
0: Placed Cl- in mol at (-8.72, 25.37, 12.44).
Adding 20 counter ions to "mol". 7425 solvent molecules will remain.
0: Placed Na+ in mol at (4.61, -24.77, -10.33).
0: Placed Na+ in mol at (-7.96, 20.89, 13.26).
0: Placed Na+ in mol at (-16.81, -3.09, -21.32).
0: Placed Na+ in mol at (19.04, -3.74, 19.85).
0: Placed Na+ in mol at (33.03, 13.15, 12.31).
0: Placed Na+ in mol at (17.97, 34.89, 9.08).
0: Placed Na+ in mol at (-25.72, -3.99, 11.19).
0: Placed Na+ in mol at (-32.43, 3.96, 21.47).
0: Placed Na+ in mol at (-21.58, 4.89, -33.03).
0: Placed Na+ in mol at (-19.29, -2.79, 29.07).
0: Placed Na+ in mol at (27.01, 1.24, -21.05).
0: Placed Na+ in mol at (0.57, 13.88, -17.98).
0: Placed Na+ in mol at (1.44, 26.24, 17.09).
0: Placed Na+ in mol at (17.78, -9.65, -15.83).
0: Placed Na+ in mol at (19.33, -16.46, -27.27).
0: Placed Na+ in mol at (-7.15, 14.08, 20.31).
0: Placed Na+ in mol at (12.89, -24.01, 24.89).
0: Placed Na+ in mol at (-15.35, 28.72, 26.39).
0: Placed Na+ in mol at (-18.92, -15.13, -27.04).
0: Placed Na+ in mol at (22.25, -1.45, -17.98).
Box dimensions: 74.612300 85.514600 88.139600
Writing pdb file: structure.ions.pdb
printing CRYST1 record to PDB file with box info
/anaconda3/envs/biobb_AMBER_MDsetup_tutorials/bin/teLeap: Warning!
Converting N-terminal residue name to PDB format: NMET -> MET
/anaconda3/envs/biobb_AMBER_MDsetup_tutorials/bin/teLeap: Warning!
Converting C-terminal residue name to PDB format: CASN -> ASN
Checking Unit.
Building topology.
Building atom parameters.
Building bond parameters.
Building angle parameters.
Building proper torsion parameters.
Building improper torsion parameters.
total 534 improper torsions applied
Building H-Bond parameters.
Incorporating Non-Bonded adjustments.
Not Marking per-residue atom chain types.
Marking per-residue atom chain types.
(Residues lacking connect0/connect1 -
these don't have chain types marked:
res total affected
CASN 1
JZ4 1
NMET 1
WAT 7425
)
(no restraints)
Quit
Exiting LEaP: Errors = 0; Warnings = 2; Notes = 0.
2021-06-22 14:34:13,443 [MainThread ] [INFO ] Fixing truncated octahedron Box in the topology and coordinates files
2021-06-22 14:34:13,511 [MainThread ] [INFO ] Removed: 350eea0d-4d40-495e-b8ab-eb8664e6e207
2021-06-22 14:34:13,512 [MainThread ] [INFO ] Removed: leap.log
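###Markdown
To double-check the neutralization and ionic-concentration step, the counterions present in the final structure can be counted directly from the PDB. This optional sketch only assumes the output_ions_pdb_path file from the cell above and the AMBER residue names (Na+, Cl-) shown in the LEaP log.
###Code
# Optional check: count Na+ and Cl- residues in the ionized structure
from collections import Counter

ion_counts = Counter()
with open(output_ions_pdb_path) as pdb_file:
    for line in pdb_file:
        if line.startswith(("ATOM", "HETATM")):
            resname = line[17:20].strip()
            if resname in ("Na+", "Cl-"):
                ion_counts[resname] += 1  # one atom per monoatomic ion residue

print("Counterions found:", dict(ion_counts))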
###Markdown
Visualizing 3D structureVisualizing the **protein-ligand complex system** with the newly added **solvent box** and **counterions** using **NGL**. Note the **truncated octahedron box** filled with **water molecules** surrounding the **protein structure**, as well as the randomly placed **positive** (Na+, blue) and **negative** (Cl-, gray) **counterions**.
###Code
# Show protein
view = nglview.show_structure_file(output_ions_pdb_path)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein')
view.add_representation(repr_type='ball+stick', selection='solvent')
view.add_representation(repr_type='spacefill', selection='Cl- Na+')
view._remote_call('setSize', target='Widget', args=['','600px'])
view.render_image()
view.download_image(filename='ngl7.png')
view
###Output
_____no_output_____
###Markdown
Energetically minimize the system**Energetically minimize** the **system** (protein structure + ligand + solvent + ions) using the **sander tool** from the **AMBER MD package**. **Restraining heavy atoms** with a force constant of 15 Kcal/mol.$Å^{2}$ to their initial positions.- [Step 1](emStep1): Energetically minimize the **system** through 300 minimization cycles.- [Step 2](emStep2): Checking **energy minimization** results. Plotting energy by time during the **minimization** process. *****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.htmlmodule-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_minout](https://biobb-amber.readthedocs.io/en/latest/process.htmlmodule-process.process_minout) from **biobb_amber.process.process_minout***** Step 1: Running Energy MinimizationThe **minimization** type of the **simulation_type property** contains the main default parameters to run an **energy minimization**:- imin = 1 ; Minimization flag, perform an energy minimization.- maxcyc = 500; The maximum number of cycles of minimization.- ntb = 1; Periodic boundaries: constant volume.- ntmin = 2; Minimization method: steepest descent.In this particular example, the method used to run the **energy minimization** is the default **steepest descent**, with **periodic conditions** and the **maximum number of cycles reduced from 500 to 300** for the sake of time.
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_min_traj_path = 'sander.min.x'
output_min_rst_path = 'sander.min.rst'
output_min_log_path = 'sander.min.log'
prop = {
"simulation_type" : "minimization",
"mdin" : {
'maxcyc' : 300, # Reducing the number of minimization steps for the sake of time
'ntr' : 1, # Overwritting restrain parameter
'restraintmask' : '\"!:WAT,Cl-,Na+\"', # Restraining solute
        'restraint_wt' : 15.0 # With a force constant of 15 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_ions_crd_path,
input_ref_path=output_ions_crd_path,
output_traj_path=output_min_traj_path,
output_rst_path=output_min_rst_path,
output_log_path=output_min_log_path,
properties=prop)
###Output
2021-06-22 14:34:13,736 [MainThread ] [INFO ] Creating 7a502ae4-ffcb-462e-800c-38f0dceb6f42 temporary folder
2021-06-22 14:34:13,737 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 14:36:10,064 [MainThread ] [INFO ] sander -O -i 7a502ae4-ffcb-462e-800c-38f0dceb6f42/sander.mdin -p structure.ions.parmtop -c structure.ions.crd -r sander.min.rst -o sander.min.log -x sander.min.x -ref structure.ions.crd
2021-06-22 14:36:10,065 [MainThread ] [INFO ] Exit code 0
2021-06-22 14:36:10,067 [MainThread ] [INFO ] Removed: mdinfo, 7a502ae4-ffcb-462e-800c-38f0dceb6f42
###Markdown
Step 2: Checking Energy Minimization resultsChecking **energy minimization** results. Plotting **potential energy** along time during the **minimization process**.
###Code
# Import module
from biobb_amber.process.process_minout import process_minout
# Create prop dict and inputs/outputs
output_dat_path = 'sander.min.energy.dat'
prop = {
"terms" : ['ENERGY']
}
# Create and launch bb
process_minout(input_log_path=output_min_log_path,
output_dat_path=output_dat_path,
properties=prop)
#Read data from file and filter out energy values higher than 1000 kcal/mol
with open(output_dat_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Energy Minimization",
xaxis=dict(title = "Energy Minimization Step"),
yaxis=dict(title = "Potential Energy kcal/mol")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
Heating the system**Warming up** the **prepared system** using the **sander tool** from the **AMBER MD package**. Going from 0 to the desired **temperature**, in this particular example, 300K. **Solute atoms restrained** (force constant of 10 Kcal/mol). Length 5ps.***- [Step 1](heatStep1): Warming up the **system** through 500 MD steps.- [Step 2](heatStep2): Checking results for the **system warming up**. Plotting **temperature** along time during the **heating** process. *****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.htmlmodule-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.htmlmodule-process.process_mdout) from **biobb_amber.process.process_mdout***** Step 1: Warming up the systemThe **heat** type of the **simulation_type property** contains the main default parameters to run a **system warming up**:- imin = 0; Run MD (no minimization)- ntx = 5; Read initial coords and vels from restart file- cut = 10.0; Cutoff for non bonded interactions in Angstroms- ntr = 0; No restrained atoms- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms- ntf = 2; Bond interactions involving H omitted- ntt = 3; Constant temperature using Langevin dynamics- ig = -1; Seed for pseudo-random number generator- ioutfm = 1; Write trajectory in netcdf format- iwrap = 1; Wrap coords into primary box- nstlim = 5000; Number of MD steps - dt = 0.002; Time step (in ps)- tempi = 0.0; Initial temperature (0 K)- temp0 = 300.0; Final temperature (300 K)- irest = 0; No restart from previous simulation- ntb = 1; Periodic boundary conditions at constant volume- gamma_ln = 1.0; Collision frequency for Langevin dynamics (in 1/ps)In this particular example, the **heating** of the system is done in **2500 steps** (5ps) and is going **from 0K to 300K** (note that the number of steps has been reduced in this tutorial, for the sake of time).
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_heat_traj_path = 'sander.heat.netcdf'
output_heat_rst_path = 'sander.heat.rst'
output_heat_log_path = 'sander.heat.log'
prop = {
"simulation_type" : "heat",
"mdin" : {
'nstlim' : 2500, # Reducing the number of steps for the sake of time (5ps)
'ntr' : 1, # Overwritting restrain parameter
'restraintmask' : '\"!:WAT,Cl-,Na+\"', # Restraining solute
'restraint_wt' : 10.0 # With a force constant of 10 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_min_rst_path,
input_ref_path=output_min_rst_path,
output_traj_path=output_heat_traj_path,
output_rst_path=output_heat_rst_path,
output_log_path=output_heat_log_path,
properties=prop)
###Output
2021-06-22 14:36:10,186 [MainThread ] [INFO ] Creating 4dfd3a1c-b371-4152-8ac1-b7a85c976396 temporary folder
2021-06-22 14:36:10,189 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 14:57:48,864 [MainThread ] [INFO ] sander -O -i 4dfd3a1c-b371-4152-8ac1-b7a85c976396/sander.mdin -p structure.ions.parmtop -c sander.min.rst -r sander.heat.rst -o sander.heat.log -x sander.heat.netcdf -ref sander.min.rst
2021-06-22 14:57:48,865 [MainThread ] [INFO ] Exit code 0
2021-06-22 14:57:48,866 [MainThread ] [INFO ] Removed: mdinfo, 4dfd3a1c-b371-4152-8ac1-b7a85c976396
###Markdown
Step 2: Checking results from the system warming upChecking **system warming up** output. Plotting **temperature** along time during the **heating process**.
###Code
# Import module
from biobb_amber.process.process_mdout import process_mdout
# Create prop dict and inputs/outputs
output_dat_heat_path = 'sander.md.temp.dat'
prop = {
"terms" : ['TEMP']
}
# Create and launch bb
process_mdout(input_log_path=output_heat_log_path,
output_dat_path=output_dat_heat_path,
properties=prop)
#Read temperature data from file and filter out values higher than 1000 K
with open(output_dat_heat_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Heating process",
xaxis=dict(title = "Heating Step (ps)"),
yaxis=dict(title = "Temperature (K)")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Equilibrate the system (NVT)Equilibrate the **protein-ligand complex system** in **NVT ensemble** (constant Number of particles, Volume and Temperature). Protein **heavy atoms** will be restrained using position restraining forces: movement is permitted, but only after overcoming a substantial energy penalty. The utility of position restraints is that they allow us to equilibrate our solvent around our protein, without the added variable of structural changes in the protein.- [Step 1](eqNVTStep1): Equilibrate the **protein system** with **NVT** ensemble.- [Step 2](eqNVTStep2): Checking **NVT Equilibration** results. Plotting **system temperature** by time during the **NVT equilibration** process. *****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.htmlmodule-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.htmlmodule-process.process_mdout) from **biobb_amber.process.process_mdout** *** Step 1: Equilibrating the system (NVT)The **nvt** type of the **simulation_type property** contains the main default parameters to run a **system equilibration in NVT ensemble**:- imin = 0; Run MD (no minimization)- ntx = 5; Read initial coords and vels from restart file- cut = 10.0; Cutoff for non bonded interactions in Angstroms- ntr = 0; No restrained atoms- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms- ntf = 2; Bond interactions involving H omitted- ntt = 3; Constant temperature using Langevin dynamics- ig = -1; Seed for pseudo-random number generator- ioutfm = 1; Write trajectory in netcdf format- iwrap = 1; Wrap coords into primary box- nstlim = 5000; Number of MD steps - dt = 0.002; Time step (in ps)- irest = 1; Restart previous simulation- ntb = 1; Periodic boundary conditions at constant volume- gamma_ln = 5.0; Collision frequency for Langevin dynamics (in 1/ps)In this particular example, the **NVT equilibration** of the system is done in **500 steps** (note that the number of steps has been reduced in this tutorial, for the sake of time).
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_nvt_traj_path = 'sander.nvt.netcdf'
output_nvt_rst_path = 'sander.nvt.rst'
output_nvt_log_path = 'sander.nvt.log'
prop = {
"simulation_type" : 'nvt',
"mdin" : {
'nstlim' : 500, # Reducing the number of steps for the sake of time (1ps)
'ntr' : 1, # Overwritting restrain parameter
'restraintmask' : '\"!:WAT,Cl-,Na+ & !@H=\"', # Restraining solute heavy atoms
'restraint_wt' : 5.0 # With a force constant of 5 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_heat_rst_path,
input_ref_path=output_heat_rst_path,
output_traj_path=output_nvt_traj_path,
output_rst_path=output_nvt_rst_path,
output_log_path=output_nvt_log_path,
properties=prop)
###Output
2021-06-22 14:57:48,980 [MainThread ] [INFO ] Creating 071fcbc9-4a22-4537-a033-0c91c57cecd8 temporary folder
2021-06-22 14:57:48,981 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 15:02:12,764 [MainThread ] [INFO ] sander -O -i 071fcbc9-4a22-4537-a033-0c91c57cecd8/sander.mdin -p structure.ions.parmtop -c sander.heat.rst -r sander.nvt.rst -o sander.nvt.log -x sander.nvt.netcdf -ref sander.heat.rst
2021-06-22 15:02:12,765 [MainThread ] [INFO ] Exit code 0
2021-06-22 15:02:12,767 [MainThread ] [INFO ] Removed: mdinfo, 071fcbc9-4a22-4537-a033-0c91c57cecd8
###Markdown
Step 2: Checking NVT Equilibration resultsChecking **NVT Equilibration** results. Plotting **system temperature** by time during the NVT equilibration process.
###Code
# Import module
from biobb_amber.process.process_mdout import process_mdout
# Create prop dict and inputs/outputs
output_dat_nvt_path = 'sander.md.nvt.temp.dat'
prop = {
"terms" : ['TEMP']
}
# Create and launch bb
process_mdout(input_log_path=output_nvt_log_path,
output_dat_path=output_dat_nvt_path,
properties=prop)
#Read temperature data from file and filter out values higher than 1000 K
with open(output_dat_nvt_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="NVT equilibration",
xaxis=dict(title = "Equilibration Step (ps)"),
yaxis=dict(title = "Temperature (K)")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Equilibrate the system (NPT)Equilibrate the **protein-ligand complex system** in **NPT ensemble** (constant Number of particles, Pressure and Temperature). Protein **heavy atoms** will be restrained using position restraining forces: movement is permitted, but only after overcoming a substantial energy penalty. The utility of position restraints is that they allow us to equilibrate our solvent around our protein, without the added variable of structural changes in the protein.- [Step 1](eqNPTStep1): Equilibrate the **protein system** with **NPT** ensemble.- [Step 2](eqNPTStep2): Checking **NPT Equilibration** results. Plotting **system pressure and density** by time during the **NPT equilibration** process. *****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.htmlmodule-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.htmlmodule-process.process_mdout) from **biobb_amber.process.process_mdout** *** Step 1: Equilibrating the system (NPT)The **npt** type of the **simulation_type property** contains the main default parameters to run a **system equilibration in NPT ensemble**:- imin = 0; Run MD (no minimization)- ntx = 5; Read initial coords and vels from restart file- cut = 10.0; Cutoff for non bonded interactions in Angstroms- ntr = 0; No restrained atoms- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms- ntf = 2; Bond interactions involving H omitted- ntt = 3; Constant temperature using Langevin dynamics- ig = -1; Seed for pseudo-random number generator- ioutfm = 1; Write trajectory in netcdf format- iwrap = 1; Wrap coords into primary box- nstlim = 5000; Number of MD steps - dt = 0.002; Time step (in ps)- irest = 1; Restart previous simulation- gamma_ln = 5.0; Collision frequency for Langevin dynamics (in 1/ps)- pres0 = 1.0; Reference pressure- ntp = 1; Constant pressure dynamics: md with isotropic position scaling- taup = 2.0; Pressure relaxation time (in ps)In this particular example, the **NPT equilibration** of the system is done in **500 steps** (note that the number of steps has been reduced in this tutorial, for the sake of time).
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_npt_traj_path = 'sander.npt.netcdf'
output_npt_rst_path = 'sander.npt.rst'
output_npt_log_path = 'sander.npt.log'
prop = {
"simulation_type" : 'npt',
"mdin" : {
'nstlim' : 500, # Reducing the number of steps for the sake of time (1ps)
'ntr' : 1, # Overwritting restrain parameter
'restraintmask' : '\"!:WAT,Cl-,Na+ & !@H=\"', # Restraining solute heavy atoms
'restraint_wt' : 2.5 # With a force constant of 2.5 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_nvt_rst_path,
input_ref_path=output_nvt_rst_path,
output_traj_path=output_npt_traj_path,
output_rst_path=output_npt_rst_path,
output_log_path=output_npt_log_path,
properties=prop)
###Output
2021-06-22 15:02:12,874 [MainThread ] [INFO ] Creating 3607b942-e34a-437e-9a93-b6484675b35a temporary folder
2021-06-22 15:02:12,876 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 15:06:40,892 [MainThread ] [INFO ] sander -O -i 3607b942-e34a-437e-9a93-b6484675b35a/sander.mdin -p structure.ions.parmtop -c sander.nvt.rst -r sander.npt.rst -o sander.npt.log -x sander.npt.netcdf -ref sander.nvt.rst
2021-06-22 15:06:40,893 [MainThread ] [INFO ] Exit code 0
2021-06-22 15:06:40,894 [MainThread ] [INFO ] Removed: mdinfo, 3607b942-e34a-437e-9a93-b6484675b35a
###Markdown
Step 2: Checking NPT Equilibration resultsChecking **NPT Equilibration** results. Plotting **system pressure and density** by time during the **NPT equilibration** process.
###Code
# Import module
from biobb_amber.process.process_mdout import process_mdout
# Create prop dict and inputs/outputs
output_dat_npt_path = 'sander.md.npt.dat'
prop = {
"terms" : ['PRES','DENSITY']
}
# Create and launch bb
process_mdout(input_log_path=output_npt_log_path,
output_dat_path=output_dat_npt_path,
properties=prop)
# Read pressure and density data from file
with open(output_dat_npt_path,'r') as pd_file:
x,y,z = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]),float(line.split()[2]))
for line in pd_file
if not line.startswith(("#","@"))
])
)
plotly.offline.init_notebook_mode(connected=True)
trace1 = go.Scatter(
x=x,y=y
)
trace2 = go.Scatter(
x=x,y=z
)
fig = subplots.make_subplots(rows=1, cols=2, print_grid=False)
fig.append_trace(trace1, 1, 1)
fig.append_trace(trace2, 1, 2)
fig['layout']['xaxis1'].update(title='Time (ps)')
fig['layout']['xaxis2'].update(title='Time (ps)')
fig['layout']['yaxis1'].update(title='Pressure (bar)')
fig['layout']['yaxis2'].update(title='Density (Kg*m^-3)')
fig['layout'].update(title='Pressure and Density during NPT Equilibration')
fig['layout'].update(showlegend=False)
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Free Molecular Dynamics SimulationUpon completion of the **two equilibration phases (NVT and NPT)**, the system is now well-equilibrated at the desired temperature and pressure. The **position restraints** can now be released. The last step of the **protein** MD setup is a short, **free MD simulation**, to ensure the robustness of the system. - [Step 1](mdStep1): Run short MD simulation of the **protein system**.- [Step 2](mdStep2): Checking results for the final step of the setup process, the **free MD run**. Plotting **Root Mean Square deviation (RMSd)** and **Radius of Gyration (Rgyr)** by time during the **free MD run** step.*****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.htmlmodule-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.htmlmodule-process.process_mdout) from **biobb_amber.process.process_mdout** - [cpptraj_rms](https://biobb-analysis.readthedocs.io/en/latest/ambertools.htmlmodule-ambertools.cpptraj_rms) from **biobb_analysis.cpptraj.cpptraj_rms** - [cpptraj_rgyr](https://biobb-analysis.readthedocs.io/en/latest/ambertools.htmlmodule-ambertools.cpptraj_rgyr) from **biobb_analysis.cpptraj.cpptraj_rgyr***** Step 1: Creating portable binary run file to run a free MD simulationThe **free** type of the **simulation_type property** contains the main default parameters to run an **unrestrained MD simulation**:- imin = 0; Run MD (no minimization)- ntx = 5; Read initial coords and vels from restart file- cut = 10.0; Cutoff for non bonded interactions in Angstroms- ntr = 0; No restrained atoms- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms- ntf = 2; Bond interactions involving H omitted- ntt = 3; Constant temperature using Langevin dynamics- ig = -1; Seed for pseudo-random number generator- ioutfm = 1; Write trajectory in netcdf format- iwrap = 1; Wrap coords into primary box- nstlim = 5000; Number of MD steps - dt = 0.002; Time step (in ps)In this particular example, a short, **5ps-length** simulation (2500 steps) is run, for the sake of time.
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_free_traj_path = 'sander.free.netcdf'
output_free_rst_path = 'sander.free.rst'
output_free_log_path = 'sander.free.log'
prop = {
"simulation_type" : 'free',
"mdin" : {
'nstlim' : 2500, # Reducing the number of steps for the sake of time (5ps)
'ntwx' : 500 # Print coords to trajectory every 500 steps (1 ps)
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_npt_rst_path,
output_traj_path=output_free_traj_path,
output_rst_path=output_free_rst_path,
output_log_path=output_free_log_path,
properties=prop)
###Output
2021-06-22 15:06:41,080 [MainThread ] [INFO ] Creating 4fa1528a-9211-4a59-9648-8840346dfc54 temporary folder
2021-06-22 15:06:41,081 [MainThread ] [INFO ] Creating command line with instructions and required arguments
2021-06-22 15:28:10,365 [MainThread ] [INFO ] sander -O -i 4fa1528a-9211-4a59-9648-8840346dfc54/sander.mdin -p structure.ions.parmtop -c sander.npt.rst -r sander.free.rst -o sander.free.log -x sander.free.netcdf
2021-06-22 15:28:10,366 [MainThread ] [INFO ] Exit code 0
2021-06-22 15:28:10,368 [MainThread ] [INFO ] Removed: mdinfo, 4fa1528a-9211-4a59-9648-8840346dfc54
###Markdown
Step 2: Checking free MD simulation resultsChecking results for the final step of the setup process, the **free MD run**. Plotting **Root Mean Square deviation (RMSd)** and **Radius of Gyration (Rgyr)** by time during the **free MD run** step. **RMSd** against the **experimental structure** (input structure of the pipeline) and against the **minimized and equilibrated structure** (output structure of the NPT equilibration step).
###Code
# cpptraj_rms: Computing Root Mean Square deviation to analyse structural stability
# RMSd against minimized and equilibrated snapshot (backbone atoms)
# Import module
from biobb_analysis.ambertools.cpptraj_rms import cpptraj_rms
# Create prop dict and inputs/outputs
output_rms_first = pdbCode+'_rms_first.dat'
prop = {
'mask': 'backbone',
'reference': 'first'
}
# Create and launch bb
cpptraj_rms(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_rms_first,
properties=prop)
# cpptraj_rms: Computing Root Mean Square deviation to analyse structural stability
# RMSd against experimental structure (backbone atoms)
# Import module
from biobb_analysis.ambertools.cpptraj_rms import cpptraj_rms
# Create prop dict and inputs/outputs
output_rms_exp = pdbCode+'_rms_exp.dat'
prop = {
'mask': 'backbone',
'reference': 'experimental'
}
# Create and launch bb
cpptraj_rms(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_rms_exp,
input_exp_path=output_pdb_path,
properties=prop)
# Read RMS vs first snapshot data from file
with open(output_rms_first,'r') as rms_first_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in rms_first_file
if not line.startswith(("#","@"))
])
)
# Read RMS vs experimental structure data from file
with open(output_rms_exp,'r') as rms_exp_file:
x2,y2 = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in rms_exp_file
if not line.startswith(("#","@"))
])
)
trace1 = go.Scatter(
x = x,
y = y,
name = 'RMSd vs first'
)
trace2 = go.Scatter(
x = x,
y = y2,
name = 'RMSd vs exp'
)
data = [trace1, trace2]
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": data,
"layout": go.Layout(title="RMSd during free MD Simulation",
xaxis=dict(title = "Time (ps)"),
yaxis=dict(title = "RMSd (Angstrom)")
)
}
plotly.offline.iplot(fig)
# cpptraj_rgyr: Computing Radius of Gyration to measure the protein compactness during the free MD simulation
# Import module
from biobb_analysis.ambertools.cpptraj_rgyr import cpptraj_rgyr
# Create prop dict and inputs/outputs
output_rgyr = pdbCode+'_rgyr.dat'
prop = {
'mask': 'backbone'
}
# Create and launch bb
cpptraj_rgyr(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_rgyr,
properties=prop)
# Read Rgyr data from file
with open(output_rgyr,'r') as rgyr_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in rgyr_file
if not line.startswith(("#","@"))
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Radius of Gyration",
xaxis=dict(title = "Time (ps)"),
yaxis=dict(title = "Rgyr (Angstrom)")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Post-processing and Visualizing resulting 3D trajectoryPost-processing and Visualizing the **protein system** MD setup **resulting trajectory** using **NGL**- [Step 1](ppStep1): *Imaging* the resulting trajectory, **stripping out water molecules and ions** and **correcting periodicity issues**.- [Step 2](ppStep3): Visualizing the *imaged* trajectory using the *dry* structure as a **topology**. *****Building Blocks** used: - [cpptraj_image](https://biobb-analysis.readthedocs.io/en/latest/ambertools.htmlmodule-ambertools.cpptraj_image) from **biobb_analysis.cpptraj.cpptraj_image** *** Step 1: *Imaging* the resulting trajectory.Stripping out **water molecules and ions** and **correcting periodicity issues**
###Code
# cpptraj_image: "Imaging" the resulting trajectory
# Removing water molecules and ions from the resulting structure
# Import module
from biobb_analysis.ambertools.cpptraj_image import cpptraj_image
# Create prop dict and inputs/outputs
output_imaged_traj = pdbCode+'_imaged_traj.trr'
prop = {
'mask': 'solute',
'format': 'trr'
}
# Create and launch bb
cpptraj_image(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_imaged_traj,
properties=prop)
###Output
2021-06-22 15:28:11,231 [MainThread ] [INFO ] Not using any container
2021-06-22 15:28:11,448 [MainThread ] [INFO ] cpptraj -i 53bd0bea-bf45-459b-b93d-6857d9981d61/instructions.in
2021-06-22 15:28:11,449 [MainThread ] [INFO ] Exit code 0
2021-06-22 15:28:11,450 [MainThread ] [INFO ]
CPPTRAJ: Trajectory Analysis. V4.25.6
___ ___ ___ ___
| \/ | \/ | \/ |
_|_/\_|_/\_|_/\_|_
| Date/time: 06/22/21 15:28:11
| Available memory: 953.430 MB
INPUT: Reading input from '53bd0bea-bf45-459b-b93d-6857d9981d61/instructions.in'
[parm structure.ions.parmtop]
Reading 'structure.ions.parmtop' as Amber Topology
Radius Set: modified Bondi radii (mbondi)
[trajin sander.free.netcdf 1 -1 1]
Reading 'sander.free.netcdf' as Amber NetCDF
[center !@H*,1H*,2H*,3H* origin]
CENTER: Centering coordinates using geometric center of atoms in mask (!@H*,1H*,2H*,3H*) to
coordinate origin.
[autoimage]
AUTOIMAGE: To box center based on center of mass, anchor is first molecule.
[rms first !@H*,1H*,2H*,3H*]
RMSD: (!@H*,1H*,2H*,3H*), reference is first frame (!@H*,1H*,2H*,3H*).
Best-fit RMSD will be calculated, coords will be rotated and translated.
[strip :WAT,HOH,SOL,TIP3,TP3]
STRIP: Stripping atoms in mask [:WAT,HOH,SOL,TIP3,TP3]
[trajout 3htb_imaged_traj.trr trr]
Writing '3htb_imaged_traj.trr' as Gromacs TRX
---------- RUN BEGIN -------------------------------------------------
PARAMETER FILES (1 total):
0: structure.ions.parmtop, 24957 atoms, 7635 res, box: Trunc. Oct., 7473 mol, 7425 solvent
INPUT TRAJECTORIES (1 total):
0: 'sander.free.netcdf' is a NetCDF AMBER trajectory with coordinates, time, box, Parm structure.ions.parmtop (Trunc. Oct. box) (reading 5 of 5)
Coordinate processing will occur on 5 frames.
OUTPUT TRAJECTORIES (1 total):
'3htb_imaged_traj.trr' (5 frames) is a GROMACS TRR file, big-endian, single precision
BEGIN TRAJECTORY PROCESSING:
.....................................................
ACTION SETUP FOR PARM 'structure.ions.parmtop' (4 actions):
0: [center !@H*,1H*,2H*,3H* origin]
Mask [!@H*,1H*,2H*,3H*] corresponds to 8782 atoms.
1: [autoimage]
Original box is truncated octahedron, turning on 'familiar'.
Using first molecule as anchor.
1 molecules are fixed to anchor: 2
7471 molecules are mobile.
2: [rms first !@H*,1H*,2H*,3H*]
Target mask: [!@H*,1H*,2H*,3H*](8782)
Reference mask: [!@H*,1H*,2H*,3H*](8782)
Warning: Coordinates are being rotated and box coordinates are present.
Warning: Unit cell vectors are NOT rotated; imaging will not be possible
Warning: after the RMS-fit is performed.
3: [strip :WAT,HOH,SOL,TIP3,TP3]
Stripping 22275 atoms.
Stripped topology: 2682 atoms, 210 res, box: Trunc. Oct., 48 mol
.....................................................
ACTIVE OUTPUT TRAJECTORIES (1):
3htb_imaged_traj.trr (coordinates, time, box)
----- sander.free.netcdf (1-5, 1) -----
0% 25% 50% 75% 100% Complete.
Read 5 frames and processed 5 frames.
TIME: Avg. throughput= 148.3812 frames / second.
ACTION OUTPUT:
TIME: Analyses took 0.0000 seconds.
DATASETS (1 total):
RMSD_00001 "RMSD_00001" (double, rms), size is 5 (0.040 kB)
Total data set memory usage is at least 0.040 kB
RUN TIMING:
TIME: Init : 0.0001 s ( 0.19%)
TIME: Trajectory Process : 0.0337 s ( 98.68%)
TIME: Action Post : 0.0000 s ( 0.00%)
TIME: Analysis : 0.0000 s ( 0.00%)
TIME: Data File Write : 0.0000 s ( 0.07%)
TIME: Other : 0.0004 s ( 0.01%)
TIME: Run Total 0.0341 s
---------- RUN END ---------------------------------------------------
TIME: Total execution time: 0.1701 seconds.
--------------------------------------------------------------------------------
To cite CPPTRAJ use:
Daniel R. Roe and Thomas E. Cheatham, III, "PTRAJ and CPPTRAJ: Software for
Processing and Analysis of Molecular Dynamics Trajectory Data". J. Chem.
Theory Comput., 2013, 9 (7), pp 3084-3095.
2021-06-22 15:28:11,452 [MainThread ] [INFO ] Removed: [PurePosixPath('53bd0bea-bf45-459b-b93d-6857d9981d61')]
###Markdown
Step 2: Visualizing the generated dehydrated trajectory.Using the **imaged trajectory** (output of the [Post-processing step 1](ppStep1)) with the **dry structure** as a **topology**.
###Code
# Show trajectory
view = nglview.show_simpletraj(nglview.SimpletrajTrajectory(output_imaged_traj, output_ambpdb_path), gui=True)
view.clear_representations()
view.add_representation('cartoon', color='sstruc')
view.add_representation('licorice', selection='JZ4', color='element', radius=1)
view
###Output
_____no_output_____
###Markdown
###Code
from time import sleep
# range number of frames for the animated gif trajectory
for frame in range(0, 5):
# set frame to update coordinates
view.frame = frame
# make sure to let NGL spending enough time to update coordinates
sleep(0.5)
view.download_image(filename='trj_image{}.png'.format(frame))
# make sure to let NGL spending enough time to render before going to next frame
sleep(2.0)
import moviepy.editor as mpy
# go to folder where the images are stored
template = './trj_image{}.png'
# get all (sorted) image files
imagefiles = [template.format(str(i)) for i in range(0, 4, 1)]
# make a gif file
frame_per_second = 8
im = mpy.ImageSequenceClip(imagefiles, fps=frame_per_second)
im.write_gif('traj.gif', fps=frame_per_second)
###Output
t: 0%| | 0/4 [00:00<?, ?it/s, now=None] |
Math-Study/Function.ipynb | ###Markdown
Functions A **function** is a relationship that transforms an input value into an output value. **Domain**: the set of values that the input variable can take. **Range**: the set of values that the output variable can take. Variables A **variable** is a symbol that stands for some number. **Input variable**: a variable representing the input value. **Output variable**: a variable representing the output value. Discontinuous functions - Discontinuous functions that are frequently used in data analysis Sign function A function that outputs 1 for positive input, -1 for negative input, and 0 for an input of 0; it is discontinuous at $x=0$. In NumPy it is implemented with the `sign()` command.
###Code
import numpy as np

np.sign(-0.01), np.sign(0), np.sign(0.01)
###Output
_____no_output_____
###Markdown
Unit step function The unit step function (Heaviside step function) is also discontinuous at $x=0$. There is no dedicated NumPy implementation used here, so it has to be implemented directly (note that recent NumPy versions do provide `np.heaviside`). Indicator function When the input equals the value pre-specified as a subscript of the function name, the output is 1; otherwise the output is 0. Indicator functions are used to select only specific data points from a data set and count them. Inverse function - A function whose input/output relationship is exactly the reverse of that of another function $ y = f(x), \;\;\; \rightarrow \;\;\; x = f^{-1}(y) $ Functions frequently used in data analysis Polynomial function A function formed as a linear combination of power terms. $f(x) = c_0 + c_1 x + c_2 x^2 + \cdots + c_n x^n $ Maximum and minimum functions The maximum function outputs the larger of its two arguments.$ \begin{align}\max(x, y) =\begin{cases}x & \text{ if } x \geq y \\y & \text{ if } x < y \end{cases}\end{align}$ The minimum function, conversely, outputs the smaller of its two arguments.$ \begin{align}\min(x, y) =\begin{cases}x & \text{ if } x \leq y \\y & \text{ if } x > y \end{cases}\end{align}$ Exponential function A function obtained by raising Euler's number $e$ to a power is called the **exponential function**. $ y = e^x $$ y = \exp (x) =\exp x $
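As a quick illustration, here is a minimal sketch of such a direct implementation (the vectorized `np.where` approach and the function name are illustrative assumptions, not part of the original notebook):

```python
import numpy as np

def heaviside_step(x):
    # 1 where x >= 0, 0 otherwise (one common convention for the unit step)
    return np.where(np.asarray(x) >= 0, 1.0, 0.0)

heaviside_step([-1.0, -0.5, 0.0, 0.5, 1.0])  # -> array([0., 0., 1., 1., 1.])
```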
###Code
np.exp(-2), np.exp(0), np.exp(2)
###Output
_____no_output_____
###Markdown
Properties of the exponential function* It is always positive, since it is a positive number ($e$) raised to a power.* It equals 1 when $x=0$.* As $x$ goes to positive infinity ($x \rightarrow \infty$), it approaches positive infinity.* As $x$ goes to negative infinity ($x \rightarrow -\infty$), it approaches 0.* If $x_1 > x_2$, then $\exp{x_1} > \exp{x_2}$.
###Code
np.exp(5 + 3), np.exp(5) * np.exp(3)
###Output
_____no_output_____
###Markdown
Logistic function- A function obtained by transforming the exponential function- Frequently used in regression analysis and artificial neural networks $ \begin{align}\sigma(x) = \dfrac{1}{1 + \exp(-x)} \end{align}$ Logarithmic function
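A minimal sketch of the logistic function with NumPy (the function name and sample inputs are illustrative assumptions):

```python
import numpy as np

def logistic(x):
    # sigma(x) = 1 / (1 + exp(-x)); squashes any real number into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-np.asarray(x)))

logistic([-5.0, 0.0, 5.0])  # -> approximately array([0.0067, 0.5, 0.9933])
```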
###Code
np.log(10)
###Output
_____no_output_____ |
Day 5/Day 5 Normal Distribution II.ipynb | ###Markdown
Problem statement: https://www.hackerrank.com/challenges/s10-normal-distribution-2/problem
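The solution below relies on expressing the normal CDF through the error function. For reference, the identity used is $$\Phi(x) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right],$$ so $1-\Phi(x)$ gives the fraction of grades above $x$ and $\Phi(x)$ the fraction below.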
###Code
import math
def phi(x,mu,sig):
return 0.5 * (1.0+math.erf((x-mu)/(sig*2**0.5)))
mu, sig = list(map(int,input().split()))
x1 = float(input())
print("%.2f"%(100.0*(1-phi(x1,mu,sig))))
x2 = float(input())
print("%.2f"%(100.0*(1-phi(x2,mu,sig))))
print("%.2f"%(100.0*(phi(x2,mu,sig))))
###Output
_____no_output_____ |
notebooks/tables/glyf.ipynb | ###Markdown
glyf Table DescriptionThe glyf table contains glyph outline and instruction set definitions for a TrueType outline format font. Documentation- [Apple Specification](https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6glyf.html)- [Microsoft Specification](https://docs.microsoft.com/en-us/typography/opentype/spec/glyf) Source SettingsChange the paths below to view the table in a different font.
###Code
FONT_URL = "https://github.com/source-foundry/opentype-notes/raw/master/assets/fonts/roboto/Roboto-Regular.ttf"
FONT_PATH = "Roboto-Regular.ttf"
###Output
_____no_output_____
###Markdown
Setup
###Code
import os
try:
import fontTools
except ImportError:
!pip install fontTools
if not os.path.exists(FONT_PATH):
!curl -L -O {FONT_URL}
###Output
_____no_output_____
###Markdown
View Table
###Code
!ttx -t glyf -o - {FONT_PATH}
###Output
_____no_output_____
###Markdown
Read/Write Access to Table- [fontTools `_g_l_y_f.py` module](https://github.com/fonttools/fonttools/blob/master/Lib/fontTools/ttLib/tables/_g_l_y_f.py)
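Before the generic method listing below, here is a small sketch of reading per-glyph data from the table; the glyph name `"A"` is an assumption about this particular font's glyph set, so it is guarded with a membership check:

```python
from fontTools.ttLib import TTFont

tt = TTFont(FONT_PATH)        # FONT_PATH is defined in the settings cell above
glyf_table = tt["glyf"]

glyph_names = tt.getGlyphOrder()
print("number of glyphs:", len(glyph_names))

if "A" in glyph_names:        # "A" is an assumed glyph name
    glyph = glyf_table["A"]
    # numberOfContours is negative for composite glyphs and 0 for empty glyphs
    print("glyph 'A' has", glyph.numberOfContours, "contours")
```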
###Code
import inspect
from fontTools.ttLib import TTFont
# instantiate table object
tt = TTFont(FONT_PATH)
table = tt["glyf"]
# print table methods
print("Printing methods of {}:".format(table))
methods = inspect.getmembers(table, predicate=inspect.ismethod)
methods_list = [method[0] for method in methods]
for x in sorted(methods_list):
print(x)
###Output
_____no_output_____
###Markdown
Cleanup
###Code
!rm {FONT_PATH}
###Output
_____no_output_____ |
FDAWindTurbineProject2020.ipynb | ###Markdown
Introduction:For this project I was asked to perform and explain simple linear regression using Pythonon the powerproduction dataset available on Moodle. The goal is to accurately predict wind turbine power output from wind speed values using the data set as a basis. First I must import the dataset from Moodle into this notebook.I did this by first copying the data set from https://raw.githubusercontent.com/ianmcloughlin/2020A-machstat-project/master/dataset/powerproduction.csv to my project repository.Then used pandas to read the csv and print the data set in the Jupyter Notebook.
###Code
import pandas as pd
data = pd.read_csv ("data.csv")
print(data)
###Output
speed power
0 0.000 0.0
1 0.125 0.0
2 0.150 0.0
3 0.225 0.0
4 0.275 0.0
.. ... ...
495 24.775 0.0
496 24.850 0.0
497 24.875 0.0
498 24.950 0.0
499 25.000 0.0
[500 rows x 2 columns]
###Markdown
Next step is to import the libraries I'll need for plotting and data analysis.
###Code
# Make matplotlib show interactive plots in the notebook.
%matplotlib inline
import numpy as np # numpy efficiently deals with numerical multi-dimensional arrays.
import matplotlib.pyplot as plt # matplotlib is a plotting library, and pyplot is its easy-to-use module.
###Output
_____no_output_____
###Markdown
Then, I plot the data set and do a simple linear regression to find the line of best fit.
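For reference, the slope and intercept computed in the next cell follow the standard least-squares formulas $$m = \frac{\sum_i (x_i-\bar{x})(y_i-\bar{y})}{\sum_i (x_i-\bar{x})^2}, \qquad c = \bar{y} - m\,\bar{x},$$ where $x$ is wind speed and $y$ is power output.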
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
data = pd.read_csv("data.csv")
# w_avg = np.mean(w)
# d_avg = np.mean(d)
# # Subtract means from w and d.
# w_zero = w - w_avg
# d_zero = d - d_avg
# # The best m is found by the following calculation.
# m = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)
# # Use m from above to calculate the best c.
# c = d_avg - m * w_avg
# print("m is %8.6f and c is %6.6f." % (m, c))
speed = data.speed
power = data.power
average_speed = speed.mean()
average_power = power.mean()
speed_zero = speed - average_speed
power_zero = power - average_power
slope = np.sum(speed_zero * power_zero) / np.sum(speed_zero * speed_zero)
constant = average_power - slope * average_speed
print("m is %8.6f and c is %6.6f." % (slope, constant))
plt.plot(speed, power, 'k.', label="Original Data")
plt.plot(speed, slope * speed + constant, "b-", label="Best Fit Line")
plt.xlabel('speed')
plt.ylabel('power')
plt.legend()
plt.show()
### Relationship for linear regression
### https://www.w3schools.com/python/python_ml_linear_regression.asp
from scipy import stats
import numpy as np
from sklearn.metrics import r2_score
import pandas as pd
# Load a dataset.
data = pd.read_csv('data.csv')
x = data.speed.tolist()
y = data.power.tolist()
slope, intercept, r, p, std_err = stats.linregress(x, y)
print(r)
###Output
0.8537775037188597
###Markdown
Analysis: Looking at the graph, I can see that the best power output of the wind turbines is between 13 and 25 kmph. Speeds right at the top of the measured range (around 24 to 25 kmph) actually show zero power output, which was surprising to me, as I had predicted that the faster the wind turbines rotate, the more power would be produced. After plotting the data set and the line of best fit, I tested the strength of the linear relationship between wind speed and power output using the correlation coefficient $r$. This coefficient ranges from -1 to 1: a value of 0 means there is no linear relationship at all, while values close to 1 (or -1) indicate a strong positive (or negative) linear relationship. The correlation for the simple linear regression came out as 0.85, so the relationship is strongly positive. I was curious to see if there was another type of regression that would produce an even closer fit to the data set, so I then plotted the data set with polynomial regression. Plotting data set with polynomial regression
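To make the comparison with the polynomial fit below more direct, the $R^2$ of the straight-line fit itself can also be computed. A minimal sketch (reusing the least-squares slope and intercept idea from the earlier cell; variable names here are illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.metrics import r2_score

data = pd.read_csv("data.csv")
x = data.speed.values
y = data.power.values

# Least-squares slope and intercept, as computed in the earlier cell
m = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
c = y.mean() - m * x.mean()

print("R^2 of the linear fit:", r2_score(y, m * x + c))
```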
###Code
### Polynomial Regression
### https://www.w3schools.com/python/python_ml_polynomial_regression.asp
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy import stats
from sklearn.metrics import r2_score
# Load a dataset.
data = pd.read_csv('data.csv')
x = data.speed.tolist()
y = data.power.tolist()
mymodel = np.poly1d(np.polyfit(x, y, 3))
### Relationship of coefficients
print(r2_score(y, mymodel(x)))
myline = np.linspace(0, 25, 100)
plt.scatter(x, y)
plt.plot(myline, mymodel(myline), "r+")
plt.show()
###Output
0.8796883953739737
|
tutorials/3.4 Writing your own Extensions.ipynb | ###Markdown
Writing Your Own Extensions Writing your own extensions is easier than you'd expect, given some knowledge of javascript. I'll start with a very brief overview, and then go deeper in subsequent sections. Basic structure of an extension The docs for writing your own extensions are lacking, but data scientist [Will Koehrsen](https://towardsdatascience.com/@williamkoehrsen) wrote a [great intro to extensions](https://towardsdatascience.com/how-to-write-a-jupyter-notebook-extension-a63f9578a38c) which was very helpful to me in teaching myself to write extensions. In this post I will borrow heavily from it, as well as try to build on it, so thank you Will. Every extension has 3 parts: 1. main.js - The javascript code. This is where you write the functionality.2. description.yaml - A config file for the extension3. README.md - A readme file (displayed in the nbconfigurator tab for your users)4. CSS Files (optional) - You can link CSS files from your main.js, we'll cover this later These three files need to be together in a single directory. To install the extension use `jupyter nbextension install path/to/directory --user`All this command does is copy that directory to jupyter's data directory where nbextensions are stored. You can see your data directory by running `jupyter --data-dir` on the command line. The nbextensions configurator reads from that folder so you can now enable/disable your extension from there. Editing Existing Extensions I would not advise writing extensions from scratch, but instead using So far every extension we've seen comes from a single collection called "jupyter_contrib_nbextensions" [Github Link](https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tree/master/src/jupyter_contrib_nbextensions/nbextensions). They are maintained and updated in one place so that we don't have to pip install them one by one. The source code for these extensions is usually located in the site-packages folder of Anaconda3. To find it yourself run the command below and check the location section
###Code
!pip show jupyter_contrib_nbextensions
###Output
Name: jupyter-contrib-nbextensions
Version: 0.5.1
Summary: A collection of Jupyter nbextensions.
Home-page: https://github.com/ipython-contrib/jupyter_contrib_nbextensions.git
Author: ipython-contrib and jupyter-contrib developers
Author-email: [email protected]
License: BSD
Location: c:\users\rob\anaconda3\lib\site-packages
Requires: nbconvert, jupyter-contrib-core, ipython-genutils, tornado, notebook, jupyter-nbextensions-configurator, pyyaml, lxml, traitlets, jupyter-core, jupyter-highlight-selected-word, jupyter-latex-envs
Required-by:
###Markdown
**Windows:** `C:\Users\<username>\Anaconda3\Lib\site-packages\jupyter_contrib_nbextensions\nbextensions`**Linux:** `/opt/anaconda3/lib/python3.7/site-packages/jupyter_contrib_nbextensions/nbextensions/` or `/usr/local/lib/python3.7/site-packages/jupyter_contrib_nbextensions/nbextensions` Can't find this path? Run `jupyter --paths` on the command line. Check the first config folder for a file called "jupyter_nbconvert_config.json"; it will have a "template path" that leads to the folder where the extensions are hosted. Reload your extension
###Code
! jupyter contrib nbextension install --user
###Output
_____no_output_____
###Markdown
Unfortunately we only have access to [Font Awesome 4.7 icons](https://fontawesome.com/v4.7.0/icons/) Extending the notebook The most basic extension you can write: taken from [the docs](https://jupyter-notebook.readthedocs.io/en/latest/extending/frontend_extensions.html) ```javascriptdefine(function(){ function load_ipython_extension(){ console.info('this is my first extension'); } return { load_ipython_extension: load_ipython_extension };});``` (this part taken from the link above)Although for historical reasons the function is called load_ipython_extension, it does apply to the Jupyter notebook in general, and will work regardless of the kernel in use. *** The current namespace is exposed via base/js/namespace so we can require it and bind it to the name Jupyter to hook in. To see all available named actions, run this in the console `Object.keys(require('base/js/namespace').actions._actions);` *** Install your extension with the command `jupyter nbextension install path/to/my_extension/ --user` For development you can use the --symlink flag to symlink your extention so there's no need to reinstall after changes are made Enable your extension with the command `jupyter nbextension enable my_extension/main [--sys-prefix][--section='common']`the my_extension.main refers to the main.js file of your extension (without the js of course), if you host it elsewhere (i.e. cooleffect.js, you will need to reference that)
###Code
%%javascript
console.log(Jupyter.notebook.config["data"]["template_message"])
###Output
_____no_output_____
###Markdown
At the very top of load_ipython_extension, to include CSS:```// add css $('<link rel="stylesheet" type="text/css" />') .attr('href', requirejs.toUrl('./main.css')) .appendTo('head');```
###Code
<div class=
###Output
_____no_output_____ |
maven/Startnotebook.ipynb | ###Markdown
Import libraries
###Code
import pandas as pd
import numpy as np
import datetime
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Read data
###Code
train = pd.read_csv('Train.csv')
test = pd.read_csv('Test.csv')
ss = pd.read_csv('SampleSubmission.csv')
variable_def = pd.read_csv('VariableDefinitions.csv')
###Output
_____no_output_____
###Markdown
Simple EDA
###Code
train.head()
test.head()
ss.head()
variable_def
print('Train shape:',train.shape,'\nTest shape:', test.shape, '\nsamplesubmission shape:',ss.shape)
train.info()
test.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 5177 entries, 0 to 5176
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ID 5177 non-null object
1 Policy Start Date 5177 non-null object
2 Policy End Date 5177 non-null object
3 Gender 5021 non-null object
4 Age 5177 non-null int64
5 First Transaction Date 5177 non-null object
6 No_Pol 5177 non-null int64
7 Car_Category 3539 non-null object
8 Subject_Car_Colour 2172 non-null object
9 Subject_Car_Make 4116 non-null object
10 LGA_Name 2395 non-null object
11 State 2389 non-null object
12 ProductName 5177 non-null object
dtypes: int64(2), object(11)
memory usage: 525.9+ KB
###Markdown
Since the ratio of categorical variables to numerical variables is high, consider combining train and test for easier preprocessing
###Code
# join train and test together
ntrain = train.shape[0]
ntest = test.shape[0]
all_data = pd.concat((train, test)).reset_index(drop=True)
print("all_data size is : {}".format(all_data.shape))
all_data.tail()
date_col = ['Policy Start Date','Policy End Date','First Transaction Date']
num_col = ['Age']
cat_col = [col for col in test.columns if col not in date_col+num_col]
cat_col
cat_col.remove('ID')
train.describe()
test.describe()
sns.countplot(train.target)
###Output
_____no_output_____
###Markdown
The dataset is skewed towards class 0; consider balancing the dataset (one option is sketched below)
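One possible balancing approach, shown only as a hedged sketch and not applied later in this notebook: compute class weights with scikit-learn and pass them to the classifier (the `train` dataframe and `target` column are the ones defined above):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y = train["target"].values
classes = np.unique(y)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y)
class_weights = dict(zip(classes, weights))
print(class_weights)  # minority class gets a weight > 1, majority class < 1

# These could then be supplied to the model, e.g.
# CatBoostClassifier(class_weights=list(weights), cat_features=categorical_feat)
```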
###Code
print("Are There Missing value in train? :",train.isnull().any().any())
print((train.isnull().sum()/train.shape[0])*100)
print("Are There Missing value in test? :",test.isnull().any().any())
print((test.isnull().sum()/test.shape[0])*100)
###Output
Are There Missing value in test? : True
ID 0.000000
Policy Start Date 0.000000
Policy End Date 0.000000
Gender 3.013328
Age 0.000000
First Transaction Date 0.000000
No_Pol 0.000000
Car_Category 31.639946
Subject_Car_Colour 58.045200
Subject_Car_Make 20.494495
LGA_Name 53.737686
State 53.853583
ProductName 0.000000
dtype: float64
###Markdown
Remember to handle the missing values
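One illustrative way to handle them (the notebook itself simply fills every missing value with the sentinel 9999 further below; this mode/median sketch is only an alternative, with the column handling assumed from the `all_data` dataframe built above):

```python
# Alternative sketch: impute categorical columns with their mode and
# numeric feature columns with their median (not what this notebook does below).
feature_cols = [c for c in all_data.columns if c not in ("ID", "target")]
for col in feature_cols:
    if all_data[col].dtype == "object":
        all_data[col] = all_data[col].fillna(all_data[col].mode()[0])
    else:
        all_data[col] = all_data[col].fillna(all_data[col].median())
```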
###Code
f,ax=plt.subplots(figsize=(8,8))
sns.heatmap(train.corr(),annot=True,linewidth=.5,fmt='.1f',ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
Correlation might not be the best measure for this dataset, since most of the features are categorical
###Code
all_data.head()
def check_categorical_relationship(cat_col,y_col,df):
for feat in cat_col:
plt.figure(figsize=(20,5))
sns.barplot(df[feat],df[y_col])
plt.show()
print("\n \n \n ")
check_categorical_relationship(cat_col,'Age',all_data)
check_categorical_relationship(cat_col,'No_Pol',all_data)
# Gender distribution
sns.countplot(all_data.Gender)
all_data.Gender.unique()
###Output
_____no_output_____
###Markdown
Basic Data preprocessing
###Code
train.head()
###Output
_____no_output_____
###Markdown
Fill missing values
###Code
all_data = all_data.fillna(9999)
all_data.head()
print("Are There still Missing value in data? :",all_data.isnull().any().any())
print((all_data.isnull().sum()/all_data.shape[0])*100)
###Output
Are There still Missing value in data? : False
ID 0.0
Policy Start Date 0.0
Policy End Date 0.0
Gender 0.0
Age 0.0
First Transaction Date 0.0
No_Pol 0.0
Car_Category 0.0
Subject_Car_Colour 0.0
Subject_Car_Make 0.0
LGA_Name 0.0
State 0.0
ProductName 0.0
target 0.0
dtype: float64
###Markdown
date features
###Code
date_col
for feat in date_col:
all_data[feat] = pd.to_datetime(all_data[feat])
all_data.info()
all_data.head()
def extract_date_info(df, cols):
    # Extract year, day, month and quarter from each date column, then drop the original date columns
    for feat in cols:
        df[feat + '_year'] = df[feat].dt.year
        df[feat + '_day'] = df[feat].dt.day
        df[feat + '_month'] = df[feat].dt.month
        df[feat + '_quarter'] = df[feat].dt.quarter
    df.drop(columns=cols, inplace=True)

extract_date_info(all_data, date_col)
all_data.head()
all_data.Gender.unique()
mapper = {"Male":"M","Female":'F','Entity':'O','Joint Gender':'O',9999:'O','NO GENDER':'O','NOT STATED':'O','SEX':'O' }
all_data.Gender = all_data.Gender.map(mapper)
all_data.Gender.unique()
# pd.get_dummies(all_data)
###Output
_____no_output_____
###Markdown
Create Base model
###Code
all_data.target = all_data.target.astype(int)
all_data.drop(columns=['ID'],inplace=True)
#Get the new dataset
train_n = all_data[:ntrain]
test_n = all_data[ntrain:]
test_n.drop("target",axis = 1,inplace = True)
X= train_n.drop(columns=['target'])
y= train_n.target
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.33, random_state=42,)
test_n.columns
categorical_feat = ['Gender', 'Age', 'No_Pol', 'Car_Category', 'Subject_Car_Colour',
'Subject_Car_Make', 'LGA_Name', 'State', 'ProductName']
categorical_feat
from catboost import CatBoostClassifier
import catboost
model = CatBoostClassifier(cat_features=categorical_feat,verbose=50)
model.fit(X_train,y_train)
y_pred = model.predict(X_train)
from sklearn.metrics import classification_report
target_names = ['class 0', 'class 1']
print('*************** Classification report on training set ********************')
print(classification_report(y_train, y_pred, target_names=target_names))
print('*************** Classification report on testing set ********************')
print(classification_report(y_test, model.predict(X_test), target_names=target_names))
###Output
*************** Classification report on testing set ********************
precision recall f1-score support
class 0 0.89 0.99 0.94 3514
class 1 0.60 0.12 0.20 473
accuracy 0.89 3987
macro avg 0.75 0.55 0.57 3987
weighted avg 0.86 0.89 0.85 3987
###Markdown
Train on full train dataset
###Code
model.fit(X,y)
###Output
Learning rate set to 0.029851
0: learn: 0.6709212 total: 22.4ms remaining: 22.4s
50: learn: 0.3224438 total: 634ms remaining: 11.8s
100: learn: 0.2885096 total: 1.7s remaining: 15.1s
150: learn: 0.2793904 total: 2.71s remaining: 15.3s
200: learn: 0.2733042 total: 3.56s remaining: 14.1s
250: learn: 0.2687850 total: 4.47s remaining: 13.4s
300: learn: 0.2649334 total: 5.37s remaining: 12.5s
350: learn: 0.2615520 total: 6.21s remaining: 11.5s
400: learn: 0.2575390 total: 7.08s remaining: 10.6s
450: learn: 0.2542236 total: 7.94s remaining: 9.67s
500: learn: 0.2512264 total: 8.9s remaining: 8.86s
550: learn: 0.2470561 total: 9.91s remaining: 8.08s
600: learn: 0.2438768 total: 10.7s remaining: 7.13s
650: learn: 0.2409194 total: 11.8s remaining: 6.33s
700: learn: 0.2379106 total: 12.7s remaining: 5.42s
750: learn: 0.2351184 total: 13.7s remaining: 4.55s
800: learn: 0.2323654 total: 14.7s remaining: 3.64s
850: learn: 0.2300332 total: 15.6s remaining: 2.73s
900: learn: 0.2281375 total: 16.4s remaining: 1.81s
950: learn: 0.2255888 total: 17.4s remaining: 898ms
999: learn: 0.2232852 total: 18.3s remaining: 0us
###Markdown
First submission file
###Code
set(test.ID == ss.ID)
prediction = model.predict(test_n)
sns.countplot(prediction)
ss.head()
sub_file = ss.copy()
sub_file.target = prediction
sub_file.to_csv('base_model_pred_file.csv',index=False)
###Output
_____no_output_____ |
Rnn.ipynb | ###Markdown
>**Course No** : CSE4238>**Course Title** : Soft Computing Lab>**Assignment No** : 03>**Submitted By**: **`170104003`** Dip Chowdhury --- **Google Drive Connect & Navigate Directory**
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My Drive/SC/Assignment_3
###Output
/content/drive/My Drive/SC/Assignment_3
###Markdown
**Import Libraries**
###Code
import os, re, sys, pprint, string, math, time, copy
import warnings, random, helper, shutil, cv2
random.seed(5)
warnings.filterwarnings('ignore')
from datetime import datetime
import IPython.display as ipd
from IPython.display import IFrame, display, HTML
import pandas as pd
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('display.max_colwidth', -1)
# Plotting library
import matplotlib.pyplot as plt
# tells matplotlib to embed plots within the notebook
%matplotlib inline
plt.set_cmap('viridis')
plt.style.use('fivethirtyeight')
import matplotlib.cm as cm
# Scientific and vector computation for python
import numpy as np
np.random.seed(5)
# Import seaborn
import seaborn as sns
# Apply the default theme
sns.set_theme()
from PIL import ImageFile, Image
from textblob import TextBlob
from wordcloud import WordCloud, STOPWORDS, ImageColorGenerator
from sklearn.model_selection import KFold, cross_val_score, train_test_split, RepeatedStratifiedKFold, RepeatedKFold, RandomizedSearchCV
from sklearn import preprocessing
from sklearn.metrics import mean_squared_error, confusion_matrix, accuracy_score, f1_score, precision_score, recall_score, roc_auc_score
from sklearn import tree
from sklearn.tree import DecisionTreeRegressor,DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier,RandomForestRegressor
from sklearn.linear_model import Lasso,Ridge,BayesianRidge,ElasticNet,HuberRegressor,LinearRegression,LogisticRegression,SGDRegressor,LinearRegression,ElasticNet,BayesianRidge
from sklearn.ensemble import GradientBoostingRegressor,GradientBoostingClassifier
from sklearn.ensemble import AdaBoostRegressor,AdaBoostClassifier
from sklearn.ensemble import ExtraTreesRegressor,ExtraTreesClassifier
from sklearn.neighbors import KNeighborsRegressor,KNeighborsClassifier
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR,SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPRegressor
from sklearn.feature_extraction.text import TfidfVectorizer,CountVectorizer
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor
from mlxtend.regressor import StackingCVRegressor
# !pip install -Uqq fastbook
# import fastbook
# fastbook.setup_book()
# from fastai.vision.all import *
# from fastbook import *
import torch
import torchvision
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim import lr_scheduler
from torchvision import datasets, models, transforms
from torchsummary import summary
from torch.utils.data import Subset, DataLoader, ConcatDataset, Dataset
import librosa # for music and audio analysis
import librosa.display # for audio visualization
import soundfile as sf # librosa fails when reading files on Kaggle.
#df.tail()
import tensorflow as tf
import matplotlib.pyplot as plt
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras import layers
from keras import backend as K
from keras.utils.vis_utils import plot_model
###Output
_____no_output_____
###Markdown
Download/Import Dataset
###Code
# Dataset Link https://drive.google.com/file/d/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/view
# Download
# !gdown --id xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Unzip
# !unzip "/content/drive/My Drive/xx.zip" -d "/content/drive/My Drive/"
dir_path = '/content/drive/My Drive/SC/Assignment_3/'
df = pd.read_csv('Dataset 1.csv', encoding='ISO-8859-1')
df.head()
###Output
_____no_output_____
###Markdown
Pre-Processing
###Code
df.drop(10313,inplace=True)
df.tail()
import re
def remove(text):
return re.sub(r"[,.\"!@#$%^&*(){}?/;`~:<>+=-]", "", text)
def remove_username(text):
return re.sub(r"(?:@[\w_]+)", "", text)
def remove_url(text):
return re.sub(r"http[s]?://(?:[a-z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-f][0-9a-f]))+", "", text)
def remove_numbers(text):
return re.sub(r"(?:(?:\d+,?)+(?:\.?\d+)?)", "", text)
def remove_punctuation(text):
return re.sub(r"[-()\"#/@;:<>{}`+=~|.!?,]", "", text)
def remove_multiple_spaces(text):
return re.sub(r" +", " ", text)
def remove_newline_lowercase(text):
return re.sub(r"\n", " ", text).lower()
df['message'] = df['message'].apply(lambda x: remove_username(x))
df['message'] = df['message'].apply(lambda x: remove_url(x))
df['message'] = df['message'].apply(lambda x: remove_numbers(x))
df['message'] = df['message'].apply(lambda x: remove_punctuation(x))
df['message'] = df['message'].apply(lambda x: remove_multiple_spaces(x))
df['message'] = df['message'].apply(lambda x: remove_newline_lowercase(x))
df.head()
nan_value = float("NaN")
df.replace(" ", nan_value, inplace=True)
df.dropna(subset = ["message"], inplace=True)
df.head()
import spacy
nlp = spacy.load('en_core_web_sm')
spacy_stopwords = spacy.lang.en.stop_words.STOP_WORDS
df['message'] = df['message'].apply(lambda x: ' '.join([word for word in x.split() if word not in (spacy_stopwords)]))
df.tail()
def lemmatize_text(text):
text = nlp(text)
lemmatized = list()
for word in text:
lemma = word.lemma_.strip()
if lemma:
lemmatized.append(lemma)
return " ".join(lemmatized)
df['message'] = df['message'].apply(lemmatize_text)
df.head()
df = df.sample(frac = 1., random_state = 5).reset_index(drop = True)
df.to_csv('Cleaned.csv',index=False)
data = pd.read_csv('Dataset 1.csv', skip_blank_lines=True, engine = 'python')
data = data.sample(frac = 1., random_state = 5).reset_index(drop = True)
print(data['label'].value_counts(0))
EPOCH = 10
split_val = int(0.2 * data.shape[0])
test = data.iloc[-split_val :]
val = data.iloc[- 2 * split_val : -split_val]
train = data.iloc[: - 2 * split_val]
print(train['label'].value_counts())
# train
print(val['label'].value_counts())
# validX
print(test['label'].value_counts())
# test
trainX = np.array(train.iloc[:, 0])
trainY = np.array(train.iloc[:, 1])
valX = np.array(val.iloc[:, 0])
valY = np.array(val.iloc[:, 1])
testX = np.array(test.iloc[:, 0])
testY = np.array(test.iloc[:, 1])
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words = 10000, filters = '!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n')
tokenizer.fit_on_texts(trainX)
train_seqs = tokenizer.texts_to_sequences(trainX)
val_seqs = tokenizer.texts_to_sequences(valX)
test_seqs = tokenizer.texts_to_sequences(testX)
train_seqs = tf.keras.preprocessing.sequence.pad_sequences(train_seqs)
val_seqs = tf.keras.preprocessing.sequence.pad_sequences(val_seqs)
test_seqs = tf.keras.preprocessing.sequence.pad_sequences(test_seqs)
print(train_seqs.shape)
print(val_seqs.shape)
print(test_seqs.shape)
model = Sequential()
model.add(layers.Embedding(len(tokenizer.word_index), 128))
model.add(layers.SimpleRNN(256, return_sequences = True, dropout = 0.2))
model.add(layers.SimpleRNN(128, return_sequences = True, dropout = 0.2))
model.add(layers.SimpleRNN(64, return_sequences = True, dropout = 0.2))
model.add(layers.SimpleRNN(8, dropout = 0.2))  # final recurrent layer returns only the last state for the binary prediction
model.add(layers.Dense(1, activation = 'sigmoid'))
model.compile(loss = 'binary_crossentropy', optimizer = 'adam', metrics = ['accuracy'])
model.summary()
history = model.fit(train_seqs, trainY, epochs = EPOCH, validation_data = (val_seqs, valY), verbose = 1)
plt.plot(history.history["accuracy"])
plt.plot(history.history["val_accuracy"])
plt.xlabel("Epochs")
plt.ylabel("accuracy")
plt.legend(["accuracy", "val_accuracy"])
plt.show()
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.xlabel("Epochs")
plt.ylabel("Loss")
plt.legend(["Loss", "val_loss"])
plt.show()
y_pred = model.predict(val_seqs)
y_pred = (y_pred > 0.5)
len(y_pred)
len(valY)
loss, accuracy = model.evaluate(val_seqs, valY, verbose = 1)
print('Validation Loss:', loss)
print('Validation Accuracy:', accuracy)
y_pred = model.predict(test_seqs)
y_pred = (y_pred > 0.5).astype(int)
loss, accuracy = model.evaluate(test_seqs, testY, verbose = 1)
print('Test Loss:', loss)
print('Test Accuracy:', accuracy)
valY.shape
# Flatten the validation predictions so they line up with valY for the confusion matrix
y_pred = (model.predict(val_seqs) > 0.5).astype(int).ravel()
y_pred.shape
group_names = ['True Neg','False Pos','False Neg','True Pos']
group_counts = ['{0:0.0f}'.format(value) for value in
confusion_matrix(valY, y_pred).flatten()]
group_percentages = ['{0:.2%}'.format(value) for value in
confusion_matrix(valY, y_pred).flatten()/np.sum(confusion_matrix(valY, y_pred))]
labels = [f'{v1}\n{v2}\n{v3}' for v1, v2, v3 in
zip(group_names,group_counts,group_percentages)]
labels = np.asarray(labels).reshape(2,2)
sns.heatmap(confusion_matrix(valY, y_pred), annot=labels, fmt='', cmap='Blues')
data = pd.read_csv('Cleaned.csv', skip_blank_lines=True, engine = 'python', dtype=str)
data = data.sample(frac = 1., random_state = 5).reset_index(drop = True)
print(data['label'].value_counts(0))
data['message'].astype(str)
data['label'].astype(str)
EPOCH = 4
split_val = int(0.2 * data.shape[0])
test = data.iloc[-split_val :]
val = data.iloc[- 2 * split_val : -split_val]
train = data.iloc[: - 2 * split_val]
print(train['label'].value_counts())
print(val['label'].value_counts())
print(test['label'].value_counts())
trainX = np.array(train.iloc[:, 0])
trainY = np.array(train.iloc[:, 1])
valX = np.array(val.iloc[:, 0])
valY = np.array(val.iloc[:, 1])
testX = np.array(test.iloc[:, 0])
testY = np.array(test.iloc[:, 1])
print(trainX.shape)
print(trainY.shape)
# print(trainX)
# print(trainY)
print(valX.shape)
print(valY.shape)
# print(validX)
# print(validY)
print(testX.shape)
print(testY.shape)
# print(testX)
# print(testY)
tokenizer = tf.keras.preprocessing.text.Tokenizer(num_words = 10000, filters = '!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n')
tokenizer.fit_on_texts(trainX)
train_seqs = tokenizer.texts_to_sequences(trainX)
val_seqs = tokenizer.texts_to_sequences(valX)
test_seqs = tokenizer.texts_to_sequences(testX)
train_seqs = tf.keras.preprocessing.sequence.pad_sequences(train_seqs)
val_seqs = tf.keras.preprocessing.sequence.pad_sequences(val_seqs)
test_seqs = tf.keras.preprocessing.sequence.pad_sequences(test_seqs)
print(train_seqs.shape)
print(val_seqs.shape)
print(test_seqs.shape)
###Output
_____no_output_____ |
Research/Non-Abelian_Two-Level.ipynb | ###Markdown
Spin-1/2 Non-Abelian Geometric Phase via Floquet Engineering
###Code
import numpy as np
import qutip as qt
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
$$ \hat{\mathcal{H}} = \tilde{\Omega}\left( \sin\phi \hat{F}_x + \cos\phi \hat{F}_y \right) + \tilde{\delta} \hat{F}_z $$Recall the minus plus sign in front of $\tilde{\delta}$ here, should probably be a minus.$$ \tilde{\Omega} = \Omega_0 \sin\Omega t \cos\omega t $$$$ \tilde{\delta} = \Omega_0 \cos\Omega t \cos\omega t $$$$ \hat{\mathcal{H}}\left( t \right) = \Omega_0 \vec{r} \cdot \hat{\vec{\sigma}} \cos\omega t$$$$ \vec{r} = \left( \sin\Omega t \cos \Phi, \sin\Omega t \sin \Phi, \cos\Omega t \right)^T $$
###Code
#----- Global Settings -----
sx, sy, sz = qt.sigmax(), qt.sigmay(), qt.sigmaz()
#--- Projection Operators ---
psi1, psi2 = qt.basis(2,0), qt.basis(2,1) #Two-level basis states
p1, p2 = psi1.proj(), psi2.proj() #Project onto bare spins (z-basis)
eigx, eigy = sx.eigenstates(), sy.eigenstates() #eigenstates of sx, sy
px1, px2 = eigx[1][0].proj(), eigx[1][1].proj() #Corresponding proj. ops.
py1, py2 = eigy[1][0].proj(), eigy[1][1].proj() #Corresponding proj. ops.
proj_ops = [ p1, p2, px1, px2, py1, py2 ] #List of all proj. ops.
###Output
_____no_output_____
###Markdown
Extended Time-Evolution
###Code
#----- Input Parameters -----
Omega0 = 1 #Rabi frequency
Phi = 0 #Operator phase
n_cyc = 10 #Number of Rabi oscillations per op.
#--- Computed Values ---
delta = Omega0 #Field detuning
slow_f = (1/n_cyc)*Omega0 #Slow freq.
floq_f = 2*Omega0 #Floquet freq.
#--- Setup Evolution ---
periods = 4 #Number of periods of slow_f to simulate over
t = np.linspace(0, periods*(2*np.pi/slow_f), num=5000) #Time axis
#--- Initial States ---
psi = psi1 #Initial state
psi = psi.unit() #Force normalization
#----- Time-Dependent Operators -----
H0 = Omega0 * (np.sin(Phi)*sx + np.cos(Phi)*sy) #Coupling term coefficient
H1 = -1*delta*sz #Detuning term coefficient
def coeff0_t(t, args):
''' Time-dependent coefficient of H1 '''
w = args['floq_f'] #Floquet freq.
Om = args['slow_f'] #Adiabatic freq.
return np.cos(w*t)*np.sin(Om*t)
def coeff1_t(t, args):
''' Time-dependent coefficient of H1 '''
w = args['floq_f'] #Floquet freq.
Om = args['slow_f'] #Adiabatic freq.
return np.cos(w*t)*np.cos(Om*t)
#--- Solve SE ---
H = [ [H0, coeff0_t], [H1, coeff1_t] ] #Full Hamiltonian for func.-based approach
args = {'floq_f':floq_f, 'slow_f':slow_f} #Input params
Psi = qt.sesolve(H, psi, t, e_ops=[p1, p2, px1, px2, py1, py2], args=args)
#----- Plot Results -----
fig = plt.figure( figsize=(10,7) )
axz, axx, axy = fig.add_subplot(311), fig.add_subplot(312), fig.add_subplot(313)
fs = 16 #Label fontsize
labels = ['$|c_{1}|^2$', '$|c_{2}|^2$'] #Plot labels
#--- Draw Plots ---
#Bare spins
axz.plot( Psi.times*(slow_f/2/np.pi), Psi.expect[0], 'b-', lw=2, label=labels[0])
axz.plot( Psi.times*(slow_f/2/np.pi), Psi.expect[1], 'r-', lw=2, label=labels[1])
#x-spins
axx.plot( Psi.times*(slow_f/2/np.pi), Psi.expect[2], 'b-', lw=2, label=labels[0])
axx.plot( Psi.times*(slow_f/2/np.pi), Psi.expect[3], 'r-', lw=2, label=labels[1])
#y-spins
axy.plot( Psi.times*(slow_f/2/np.pi), Psi.expect[4], 'b-', lw=2, label=labels[0])
axy.plot( Psi.times*(slow_f/2/np.pi), Psi.expect[5], 'r-', lw=2, label=labels[1])
#--- Plot Settings ---
axz.set_ylabel('$z$-Populations', fontsize=fs)
axx.set_ylabel('$x$-Populations', fontsize=fs)
axy.set_ylabel('$y$-Populations', fontsize=fs)
axy.set_xlabel('$\Omega t/2\pi$', fontsize=fs) #Comman x-label
axx.legend(loc='best', fancybox=True, shadow=True, framealpha=1, fontsize=8)
for ax in axz, axx, axy:
ax.set_xlim([0,periods]) #Remove extra spaces at ends
ax.tick_params(direction='in') #Set grid-ticks inward
plt.show()
###Output
_____no_output_____
###Markdown
Loops: Evolution Operators
###Code
def hamiltonian(t, args):
''' Returns the Hamiltonian for qt.sesolve() '''
Omega0 = args['Omega0'] #Rabi freq.
delta = args['delta'] #Detuning
#----- Time-Dependent Operators -----
CX = Omega0*sx #Sigma-x coupling term coefficient
CY = Omega0*sy #Sigma-y coupling term coefficient
D = -1*delta*sz #Detuning term coefficient
def CX_t(t, args):
''' Time-dependent part of CX '''
w = args['floq_f'] #Floquet freq.
Theta = args['Theta'] #Theta loop parameter
Phi = args['Phi'] #Phi loop parameter
return np.cos(w*t)*np.sin(Theta[0]*t + Theta[1])*np.sin(Phi[0]*t + Phi[1])
def CY_t(t, args):
''' Time-dependent part of CY '''
w = args['floq_f'] #Floquet freq.
Theta = args['Theta'] #Theta loop parameter
Phi = args['Phi'] #Phi loop parameter
return np.cos(w*t)*np.sin(Theta[0]*t + Theta[1])*np.cos(Phi[0]*t + Phi[1])
def D_t(t, args):
''' Time-dependent coefficient of H1 '''
w = args['floq_f'] #Floquet freq.
Theta = args['Theta'] #Theta loop parameter
return np.cos(w*t)*np.cos(Theta[0]*t + Theta[1])
#--- Solve SE ---
H = [ [CX, CX_t], [CY, CY_t], [D, D_t] ] #Full Hamiltonian for func.-based approach
return H
def loop(psi0, t, Theta, Phi, args):
''' '''
args['Theta'], args['Phi'] = Theta, Phi #Add parameteres to args
H = hamiltonian(t, args) #compute Hamiltonian
Psi = qt.sesolve(H, psi0, t, args=args) #Solve TDSE
return Psi
#----- Input Parameters -----
Omega0 = 1 #Rabi frequency
n_cyc = 10 #Number of Rabi oscillations per op.
#Computed Values
delta = Omega0 #Field detuning
slow_f = (1/n_cyc)*Omega0 #Slow freq.
floq_f = 2*Omega0 #Floquet freq.
args = {'Omega0':Omega0, 'delta':delta, 'slow_f':slow_f, 'floq_f':floq_f} #Parameter list
#Setup Evolution
t = np.linspace(0, 2*np.pi/slow_f, num=1000) #Time axis
#Initial States
psi0 = psi1 #Initial state
psi0 = psi0.unit() #Force normalization
#----- Loops -----
Thetas = [ [slow_f,0], [slow_f,0], [0,np.pi/2] ] #Thetas for 3 loops of form sin( {0}*t + {1} )
Phis = [ [0,0], [0,np.pi/2], [slow_f,0] ] #Phis for 3 loops of form sin( {0}*t + {1} )
ans1 = loop(psi0, t, Thetas[0], Phis[0], args)
ans2 = loop(psi0, t, Thetas[1], Phis[1], args)
ans3 = loop(psi0, t, Thetas[2], Phis[2], args)
ans1_s, ans2_s, ans3_s = ans1.states[-1], ans2.states[-1], ans3.states[-1]
print( f'Simulation Result: {ans1_s}, {ans2_s}, {ans3_s}' )
from scipy.special import j0
def u_theory(Theta, Phi, phi_const=True):
''' '''
g = (1/2) * ( j0(2*Omega0/floq_f) -1 ) #Phase factor
if phi_const is True:
'Use phi=const. form'
U = ( -1j*g * 2*np.pi*(-1*np.cos(Phi[1])*sx + np.sin(Phi[1])*sy) ).expm()
elif phi_const is False:
'Use phi linear in t form, assuming no constant part & phi is harmonic of slow_f'
n = Phi[0]/slow_f
U = ( -1j*g * (2*np.pi*n * np.sin(Theta[1])**2)*sz ).expm()
return U
#----- Compare to Theory -----
U1, U2, U3 = u_theory(Thetas[0], Phis[0]), u_theory(Thetas[1], Phis[1]), \
u_theory(Thetas[2], Phis[2], phi_const=False) #Evolution operators
fl1, fl2, fl3 = U1*psi0, U2*psi0, U3*psi0 #Theory states
print( '--- Comparing Results ---' )
print( ans1_s.overlap(fl1) )
print( ans2_s.overlap(fl2) )
print( ans3_s.overlap(fl3) )
print('')
g = (1/2) * ( j0(2*Omega0/floq_f) -1 ) #Phase factor
print( '--- Wilson Loops ---' )
print( ( U3*(U2*U1 - U1*U2) ).tr() )
print( -4*( np.sin(2*np.pi*g)**3 ) )
###Output
--- Comparing Results ---
(0.9995458757825944-0.022293334719184144j)
(0.9995458757825952-0.022293334719199028j)
(0.9995460360488326-0.0008587832264496109j)
--- Wilson Loops ---
1.216857162205843
1.216857162205843
|
Human Resource dataset (Analysis and Predictions)/notebook.ipynb | ###Markdown
Human Resource Dataset (Analysis and Prediction) Dataset Column Description - satisfaction_level: Satisfaction level of a particular employee- last_evaluation: Last evaluation score of a particular employee- number_project: Number of projects handled by a particular employee- average_montly_hours: Average monthly hours worked by a particular employee- time_spend_company: Number of years the particular employee has spent in the company.- Work_accident: Whether the employee has had a work accident or not.- left: Whether an employee has left the company or not. Shows two values: 0 = not left, 1 = left- promotion_last_5years: Whether the employee has got any promotion in the last 5 years or not.- dept: Shows the department of the employee- salary: Shows the salary level of the employee Wrangling & EDA 1. Loading Packages
###Code
#Write code here
import numpy as np
import pandas as pd
import sklearn.preprocessing
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set()
###Output
_____no_output_____
###Markdown
2. Loading Data & Basic Analysis - **Task 1**:Load the data and after making a copy of it, find **shape, data types, basic statistics, and null values** from the data set
###Code
# Load the data
data= pd.read_csv('data/HR_data.csv')
df=data.copy()
# Find the shape
df.shape
# Display the top 5 rows.
df.head()
# Find the data types of columns
df.dtypes
# Find the basic statistics
df.describe(include='all')
# Find the null values
df.isnull().sum()
###Output
_____no_output_____
###Markdown
3. Exploration Before moving ahead, let us check the details of different variables in the data **Task 2: Find out the how many employees left the company?**
###Code
# Count of how many employees left the company
df.left.value_counts()
###Output
_____no_output_____
###Markdown
**Task 3: Find out the number of projects being handled.**
###Code
# Write code here
num_projects=df.groupby('number_project').count()
plt.bar(num_projects.index.values, num_projects['satisfaction_level'])
plt.xlabel('Number of Projects')
plt.ylabel('Number of Employees')
plt.show()
###Output
_____no_output_____
###Markdown
**Question: What insights can you infer from the above plot?** Answer: **Task 4: Find out how number of projects contributed to employee turn-over.***Hint:* For this purpose, we can do a groupby.
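Since the hint suggests a groupby, here is a minimal sketch of the per-project turnover rate (using the original column names, because the renaming only happens in the next cell):

```python
# Percentage of employees who left, for each project count
project_turnover = df.groupby('number_project')['left'].mean() * 100
print(project_turnover)
```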
###Code
# Renaming certain columns for better readability
df = df.rename(columns={'satisfaction_level': 'satisfaction',
'last_evaluation': 'evaluation',
'number_project': 'projectCount',
'average_montly_hours': 'averageMonthlyHours',
'time_spend_company': 'yearsAtCompany',
'Work_accident': 'workAccident',
'promotion_last_5years': 'promotion',
'sales' : 'department',
'left' : 'turnover'
})
turnover_Summary = df.groupby('turnover')
turnover_Summary.mean()
###Output
_____no_output_____
###Markdown
**Task 5:** Make a plot of your findings (only turn-over employees)
###Code
ax = sns.barplot(x="projectCount", y="projectCount", hue="turnover", data=df, estimator=lambda x: len(x) / len(df) * 100)
ax.set(ylabel="Percent")
###Output
_____no_output_____
###Markdown
**Question: What can you conclude from the above graph? Which people are leaving the company(as per number of projects)? What can be the reasons behind?** Answer: This graph is quite interesting as well. Here's what I found:- More than half of the employees with 2,6, and 7 projects left the company- Majority of the employees who did not leave the company had 3,4, and 5 projects- All of the employees with 7 projects left the company- There is an increase in employee turnover rate as project count increases **Time spent at the company** **Task 6: Find out how time spend at company can lead to employee turn over. Show the following plots.**- Count of Number of years spent by employees.- After how many years are mostly employees leaving the company? *Hint: For the second part do the similar procedure as done in case of 'number_projects' above. Try to find the percetage to show that after how much time/years did most of employees exactly leave.*
###Code
# Show the plot for the count of years here
ax = sns.barplot(x="yearsAtCompany", y="yearsAtCompany", hue="turnover", data=df, estimator=lambda x: len(x) / len(df) * 100)
ax.set(ylabel="Percent")
###Output
_____no_output_____
###Markdown
**Question: What is the maximum amount of time spent by the employees?** Answer: 10 years **Question: After what time period are employees most likely to leave the company?** Answer: Between 2 and 5 years **Employees engaged in any work accident** **Task 7: Find out how many employees were involved in a work accident and how many of them actually left. Use count plots to show your results**
###Code
# Number of employees involved in work accident
feature='Work_accident'
sns.countplot(x=feature, data = data)
plt.title("No. of Employees")
###Output
_____no_output_____
###Markdown
**Question: What can you conclude from the graph above?** Answer: Only a very small number of employees were involved in a work accident
###Code
# Number of employees involved in work accident and left or not left
ax = sns.barplot(x="workAccident", y="workAccident", hue="turnover", data=df, estimator=lambda x: len(x) / len(df) * 100)
ax.set(ylabel="Percent")
###Output
_____no_output_____
###Markdown
**Promotions in the last 5 years** **Task 8: How many employees got a promotion in the last 5 years and how many of them left?**
###Code
# Number of Employees Promoted
feature='promotion_last_5years'
sns.countplot(x=feature, data = data)
plt.title("No. of Employees")
# Number of employees involved in promotion and left or not left
ax = sns.barplot(x="promotion", y="promotion", hue="turnover", data=df, estimator=lambda x: len(x) / len(df) * 100)
ax.set(ylabel="Percent")
###Output
_____no_output_____
###Markdown
**Salary trends** **Task 9: What are the salary trends in the data? Use graphical representation for explanation**
###Code
#Write code here
feature='salary'
sns.countplot(x=feature, data = data)
plt.title("No. of Employees")
###Output
_____no_output_____
###Markdown
**Question: Which salary type holders are most likely to leave? Try to show the percentage of employees who left according to their salaries, using a bar plot or as you like.**
###Code
# Write code here
ax = sns.countplot(x="salary", hue="turnover", data=df)
ax.set(ylabel="Count")
###Output
_____no_output_____
###Markdown
**Question: What does the above plot show?** Answer: The employees with high salaries are less likely to leave **Employees per Department** **Task 10: Find out the number of employees per department and also see which department has the highest number of employees leaving the company.**
###Code
# Write the code here to check employee count in each department. You can use a graphical representation or use simple code to check.
df = df.rename(columns={'sales':'department'})
sns.countplot(x='department', data=df).set_title('Employee Department Distribution');
plt.xticks(rotation=-45)
###Output
_____no_output_____
###Markdown
**Question: Which department has the maximum number of employees?** Answer: Sales Department **Question: Which department has the highest percentage of turn-over? Use a graphical representation to find out.**
###Code
# Write code here
df = df.rename(columns={'left':'turnover'})
f, ax = plt.subplots(figsize=(15, 5))
sns.countplot(y="department", hue='turnover', data=df).set_title('Employee Department Turnover Distribution');
ax = sns.barplot(x="number_project", y="number_project", hue="turnover", data=df, estimator=lambda x: len(x) / len(df) * 100)
ax.set(ylabel="Percent")
###Output
_____no_output_____
###Markdown
Answer: More than half of the employees with 2, 6, and 7 projects left the company. The majority of the employees who did not leave the company had 3, 4, and 5 projects. All of the employees with 7 projects left the company. There is an increase in the employee turnover rate as the project count increases. **Satisfaction Level** **Task 11: Show the satisfaction level of employees who left the company and those who didn't leave, using a kde plot**
###Code
# Write the code here
fig = plt.figure(figsize=(15,4),)
ax=sns.kdeplot(df.loc[(df['turnover'] == 0),'satisfaction'] , color='b',shade=True,label='no turnover')
ax=sns.kdeplot(df.loc[(df['turnover'] == 1),'satisfaction'] , color='r',shade=True, label='turnover')
ax.set(xlabel='Satisfaction Level', ylabel='Density')
###Output
_____no_output_____
###Markdown
**Question: What can you conclude from the plot above?** Answer: Feature Engineering For feature engineering we will create two new features. Looking at the satisfaction levels, we can conclude that employees with a low satisfaction level (most likely below 0.5) are leaving, while employees with a high satisfaction level (most likely above 0.5) are likely to stay. **Task 12: Make a new feature 'satisfaction_level_type' using the following conditions:**- **satisfaction_level >= 0.5 then satisfaction_level_type = 'High'**- **satisfaction_level < 0.5 then satisfaction_level_type = 'Low'**
###Code
# Write the code here
satisfaction_level_type = []
sat = df['satisfaction'].tolist()
for i in sat:
if i >= 0.5:
satisfaction_level_type.append('High')
else:
satisfaction_level_type.append('Low')
df = df.assign(satisfaction_level_type = satisfaction_level_type)
df.head()
# Write the code here to make bins as mentioned above
fig = plt.figure(figsize=(15,4))
ax=sns.kdeplot(df.loc[(df['turnover'] == 1),'satisfaction'] , color='r',shade=True, label='turnover')
ax=sns.kdeplot(df.loc[(df['turnover'] == 0),'satisfaction'] , color='b',shade=True, label='no turnover')
plt.title('Employee Satisfaction Distribution - Turnover V.S. No Turnover')
###Output
_____no_output_____
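###Markdown
 A more concise alternative for Task 12 (a sketch, not required by the task): the same 'satisfaction_level_type' feature can be built in one line with numpy.where.
###Code
# Sketch: vectorized version of the satisfaction_level_type feature (same result as the loop above)
df['satisfaction_level_type'] = np.where(df['satisfaction'] >= 0.5, 'High', 'Low')
df['satisfaction_level_type'].value_counts()
###Output
_____no_output_____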
###Markdown
**Task 12: Make a count plot for satisfaction_level_type and see which type has more turn-over, using hue='turnover'**
###Code
# Write code here
ax = sns.countplot(x="satisfaction_level_type", hue="turnover", data=df)
ax.set(ylabel="Count")
###Output
_____no_output_____
###Markdown
Previously we saw that employees handling a high number of projects are leaving. We also saw that some employees with very few projects are also leaving the company. Let us see how the number of projects and the satisfaction level are related. We can see this by checking the number of projects within each satisfaction level type. **Make a plot of your findings**
###Code
import seaborn as sns
sns.boxplot(x='projectCount', y='satisfaction_level_type', hue='turnover', data=df)
###Output
_____no_output_____
###Markdown
**Question:** What did you infer from the above plot? **Answer:** **Task 13: Make a new column 'employee_type' and assign categories as follows:**- **If the number of projects is equal to 2 then employee_type='Unburdened'**- **If the number of projects is between 3 and 5 then employee_type = 'Satisfactory'**- **If the number of projects is 6 and above then employee_type='Burdened'**
###Code
employee_type = []
sat = df['projectCount'].tolist()
for i in sat:
if i == 2:
employee_type.append('Unburdened')
elif 3<= i <= 5:
employee_type.append('Satisfactory')
elif i >= 6:
employee_type.append('Burdened')
else:
employee_type.append('')
df = df.assign(employee_type = employee_type)
df.head()
###Output
_____no_output_____
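###Markdown
 An equivalent way to build the 'employee_type' bins (a sketch; it assumes project counts start at 2, as in this data set) is pandas.cut:
###Code
# Sketch: bin projectCount into the same three categories with pd.cut
df['employee_type'] = pd.cut(df['projectCount'], bins=[0, 2, 5, np.inf],
                             labels=['Unburdened', 'Satisfactory', 'Burdened'])
df['employee_type'].value_counts()
###Output
_____no_output_____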
###Markdown
**Task 14: Make a countplot to see which type of employee is leaving**
###Code
# Write code here
ax = sns.countplot(x="employee_type", hue="turnover", data=df)
ax.set(ylabel="Count")
###Output
_____no_output_____
###Markdown
Machine Learning Before moving further, we need to apply one-hot encoding on categorical variables i.e. **dept, salary, satisfaction_level_type,** and **employee_type** **Task 15: Do ONE HOT ENCODING of the above mentioned variables**
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, precision_score, recall_score, confusion_matrix, precision_recall_curve
from sklearn.preprocessing import RobustScaler
df = df.rename(columns={'satisfaction_level': 'satisfaction',
'last_evaluation': 'evaluation',
'number_project': 'projectCount',
'average_montly_hours': 'averageMonthlyHours',
'time_spend_company': 'yearsAtCompany',
'Work_accident': 'workAccident',
'promotion_last_5years': 'promotion',
'sales' : 'department',
'left' : 'turnover'
})
df.head()
# Write code here
# Convert these variables into integer category codes (label encoding)
df["dept"] = df["dept"].astype('category').cat.codes
df["salary"] = df["salary"].astype('category').cat.codes
# Move the response variable "turnover" to the front of the table
front = df['turnover']
df.drop(labels=['turnover'], axis=1,inplace = True)
df.insert(0, 'turnover', front)
###Output
_____no_output_____
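###Markdown
 Note that the cell above encodes 'dept' and 'salary' as integer category codes rather than one-hot vectors. A true one-hot encoding, as asked for in Task 15, could be obtained with pandas.get_dummies (a sketch; the column names are assumed to be the ones created earlier in this notebook):
###Code
# Sketch: one-hot encode the categorical columns instead of using integer codes
df_onehot = pd.get_dummies(data=df, columns=['dept', 'salary', 'satisfaction_level_type', 'employee_type'])
df_onehot.head()
###Output
_____no_output_____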
###Markdown
**Task 16: Creating Independent and Dependent Variables**
###Code
# Write code here
# Create an intercept term for the logistic regression equation
df['int'] = 1
indep_var = ['satisfaction', 'evaluation', 'yearsAtCompany', 'int', 'turnover']
df = df[indep_var]
###Output
_____no_output_____
###Markdown
**Task 17: Perform Train Test Split with test size 30 percent and random state = 100**
###Code
from sklearn.model_selection import train_test_split
#Write code here
# Create train and test splits
target_name = 'turnover'
X = df.drop('turnover', axis=1)
y=df[target_name]
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3, random_state=100, stratify=y)
X_train.head()
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
###Output
(10499, 4) (10499,)
(4500, 4) (4500,)
###Markdown
**Task 18: Get the predictions using the following models.**- Random Forest- Logistic Regression- Ada Boost- XG Boost **Also get the following scores for each of the above models**- Accuracy- Precision- Recall- F1-Score- Classification Report
###Code
# Importing the models from sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, precision_score, recall_score, confusion_matrix, precision_recall_curve
from sklearn.preprocessing import RobustScaler
from sklearn.metrics import roc_auc_score
from sklearn.metrics import classification_report
from sklearn.ensemble import RandomForestClassifier
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import VotingClassifier
###Output
_____no_output_____
###Markdown
Random Forest
###Code
# Making instance and training the model
# Random Forest Model
rf = RandomForestClassifier(
n_estimators=1000,
max_depth=None,
min_samples_split=10,
class_weight="balanced"
#min_weight_fraction_leaf=0.02
)
rf.fit(X_train, y_train)
# Get predictions
print ("\n\n ---Random Forest Model---")
rf_roc_auc = roc_auc_score(y_test, rf.predict(X_test))
###Output
---Random Forest Model---
###Markdown
**Precision**
###Code
# Write the code to import the function for calculation of the specific score
print ("Random Forest Model Precision Score is %2.2f" % precision_score(y_test,rf.predict(X_test)))
###Output
Random Forest Model Precision is 0.95
###Markdown
**Accuracy**
###Code
# Write the code to import the function for calculation of the specific score
print ("Random Forest Model Accuracy Score is %2.2f" % accuracy_score(y_test,rf.predict(X_test)))
###Output
Random Forest Model Accuracy is 0.98
###Markdown
**Recall**
###Code
# Write the code to import the function for calculation of the specific score
print ("Random Forest Model Recall Score is %2.2f" % recall_score(y_test,rf.predict(X_test)))
###Output
Random Forest Model Recall Score is 0.94
###Markdown
**F1-Score**
###Code
# Write the code to import the function for calculation of the specific score
from sklearn.metrics import f1_score
print ("Random Forest Model F1-Score is %2.2f" % f1_score(y_test,rf.predict(X_test)))
###Output
Random Forest Model F1-Score is 0.95
###Markdown
**Classification Report**
###Code
# Write the code to import the function for calculation of the specific score
print ("Random Forest AUC = %2.2f" % rf_roc_auc)
print(classification_report(y_test, rf.predict(X_test)))
###Output
Random Forest AUC = 0.96
precision recall f1-score support
0 0.98 0.98 0.98 3429
1 0.95 0.94 0.95 1071
accuracy 0.98 4500
macro avg 0.97 0.96 0.97 4500
weighted avg 0.98 0.98 0.98 4500
###Markdown
Logistic Regression
###Code
# Create instance and train, random _state=100
model = LogisticRegression(penalty='l2', C=1, random_state = 100)
# get the predictions
model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
**Accuracy**
###Code
#Write the code here
print ("Logistic accuracy is %2.2f" % accuracy_score(y_test, model.predict(X_test)))
###Output
Logistic accuracy is 0.77
###Markdown
**Precision**
###Code
#Write the code here
print ("Logistic precision is %2.2f" % precision_score(y_test, model.predict(X_test)))
###Output
Logistic precision is 0.54
###Markdown
**Recall**
###Code
#Write the code here
print ("Logistic recall is %2.2f" % recall_score(y_test, model.predict(X_test)))
###Output
Logistic recall is 0.26
###Markdown
**F1 Score**
###Code
#Write the code here
print ("Logistic F1-Score is %2.2f" % f1_score(y_test, model.predict(X_test)))
###Output
Logistic F1-Score is 0.35
###Markdown
**Classification Report**
###Code
#Write the code here
logit_roc_auc = roc_auc_score(y_test, model.predict(X_test))
print ("Logistic AUC = %2.2f" % logit_roc_auc)
print(classification_report(y_test, model.predict(X_test)))
###Output
Logistic AUC = 0.60
precision recall f1-score support
0 0.80 0.93 0.86 3429
1 0.54 0.26 0.35 1071
accuracy 0.77 4500
macro avg 0.67 0.60 0.61 4500
weighted avg 0.74 0.77 0.74 4500
###Markdown
Ada Boost
###Code
#Write the code here to make an instance and train the model with random state =100
ada = AdaBoostClassifier(n_estimators=400, learning_rate=0.1, random_state = 100)
# Get the predictions
ada.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
**Accuracy**
###Code
#Write code here
print ("Ada Boost accuracy is %2.2f" % accuracy_score(y_test, ada.predict(X_test)))
###Output
Ada Boost accuracy is 0.94
###Markdown
**Precision**
###Code
#Write code here
print ("Ada Boost precision is %2.2f" % precision_score(y_test, model.predict(X_test)))
###Output
Ada Boost precision is 0.54
###Markdown
**Recall**
###Code
#Write code here
print ("Ada Boost Recall is %2.2f" % recall_score(y_test, model.predict(X_test)))
###Output
Ada Boost Recall is 0.26
###Markdown
**F1-Score**
###Code
#Write code here
print ("Ada Boost F1-Score is %2.2f" % f1_score(y_test, model.predict(X_test)))
###Output
Ada Boost F1-Score is 0.35
###Markdown
**Classification Report**
###Code
#Write code here
ada_roc_auc = roc_auc_score(y_test, ada.predict(X_test))
print ("AdaBoost AUC = %2.2f" % ada_roc_auc)
print(classification_report(y_test, ada.predict(X_test)))
###Output
AdaBoost AUC = 0.91
precision recall f1-score support
0 0.95 0.97 0.96 3429
1 0.91 0.85 0.88 1071
accuracy 0.94 4500
macro avg 0.93 0.91 0.92 4500
weighted avg 0.94 0.94 0.94 4500
###Markdown
XG Boost
###Code
#Write the code here to import the model
import xgboost as xgb
#Write the code here to make an instance and train the model with random state =100
xgb_model = xgb.XGBClassifier(objective="binary:logistic", random_state=100)
xgb_model.fit(X_train, y_train)
# Get the predictions
pred_clf_xgb = xgb_model.predict(X_test)
###Output
_____no_output_____
###Markdown
**Accuracy**
###Code
#Write code here
print ("XG Boost accuracy is %2.2f" % accuracy_score(y_test, xgb_model.predict(X_test)))
###Output
XG Boost accuracy is 0.97
###Markdown
**Precision**
###Code
#Write code here
print ("XG Boost presicion is %2.2f" % precision_score(y_test, xgb_model.predict(X_test)))
###Output
XG Boost precision is 0.94
###Markdown
**Recall**
###Code
#Write code here
print ("XG Boost Recall is %2.2f" % recall_score(y_test, xgb_model.predict(X_test)))
###Output
XG Boost Recall is 0.92
###Markdown
**F1-Score**
###Code
#Write code here
print ("XG Boost F1-Score is %2.2f" % f1_score(y_test, xgb_model.predict(X_test)))
###Output
XG Boost F1-Score is 0.93
###Markdown
**Classification Report**
###Code
#Write code here
xgb_roc_auc = roc_auc_score(y_test, xgb_model.predict(X_test))
print ("AdaBoost AUC = %2.2f" % xgb_roc_auc)
print(classification_report(y_test, xgb_model.predict(X_test)))
###Output
XG Boost AUC = 0.95
precision recall f1-score support
0 0.97 0.98 0.98 3429
1 0.94 0.92 0.93 1071
accuracy 0.97 4500
macro avg 0.96 0.95 0.95 4500
weighted avg 0.97 0.97 0.97 4500
###Markdown
Result Comparisons **Task 19: Do the comparison of the models used above as per the scores found. Make a dataframe that shows the models and the scores for each model.**
###Code
# Write the code here
print ("Random Forest AUC = %2.2f" % rf_roc_auc)
print(classification_report(y_test, rf.predict(X_test)))
print ("Logistic AUC = %2.2f" % logit_roc_auc)
print(classification_report(y_test, model.predict(X_test)))
print ("AdaBoost AUC = %2.2f" % ada_roc_auc)
print(classification_report(y_test, ada.predict(X_test)))
print ("AdaBoost AUC = %2.2f" % xgb_roc_auc)
print(classification_report(y_test, xgb_model.predict(X_test)))
# Create ROC Graph
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_test, model.predict_proba(X_test)[:,1])
rf_fpr, rf_tpr, rf_thresholds = roc_curve(y_test, rf.predict_proba(X_test)[:,1])
dt_fpr, dt_tpr, dt_thresholds = roc_curve(y_test, xgb_model.predict_proba(X_test)[:,1])
ada_fpr, ada_tpr, ada_thresholds = roc_curve(y_test, ada.predict_proba(X_test)[:,1])
plt.figure()
# Plot Logistic Regression ROC
plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % logit_roc_auc)
# Plot Random Forest ROC
plt.plot(rf_fpr, rf_tpr, label='Random Forest (area = %0.2f)' % rf_roc_auc)
# Plot AdaBoost ROC
plt.plot(ada_fpr, ada_tpr, label='AdaBoost (area = %0.2f)' % ada_roc_auc)
# Plot XGBoost ROC
plt.plot(dt_fpr, dt_tpr, label='XG Boost (area = %0.2f)' % xgb_roc_auc)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Graph')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
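###Markdown
 A compact way to fulfil Task 19 (a sketch using the estimators fitted above) is to collect the test-set scores of every model into a single comparison DataFrame:
###Code
# Sketch: build a comparison DataFrame of the main scores for each fitted model
results = []
for name, clf in [('Random Forest', rf), ('Logistic Regression', model),
                  ('Ada Boost', ada), ('XG Boost', xgb_model)]:
    y_pred = clf.predict(X_test)
    results.append({'Model': name,
                    'Accuracy': accuracy_score(y_test, y_pred),
                    'Precision': precision_score(y_test, y_pred),
                    'Recall': recall_score(y_test, y_pred),
                    'F1-Score': f1_score(y_test, y_pred),
                    'AUC': roc_auc_score(y_test, y_pred)})
scores_df = pd.DataFrame(results).set_index('Model')
scores_df
###Output
_____no_output_____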
###Markdown
**Task 20: Which model has the best score? Do you think that you need to apply any sort of tuning on the selected model? If yes, then apply it and conclude with the final scores of the best model.** Answer:
###Code
# The Random Forest has the best score!
###Output
_____no_output_____ |
wandb/run-20210521_221750-36jm533o/tmp/code/train.ipynb | ###Markdown
Modelling
###Code
import torch
import torch.nn as nn
from torchvision import models
device = torch.device('cuda')
# model = models.resnet18(pretrained=True).to(device)
# inf = model.fc.in_features
# model.fc = nn.Linear(inf,2)
from models.baseline_model import BaseLine_Model
model = BaseLine_Model().to(device)
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=0.1)
BATCH_SIZE = 32
EPOCHS = 100
import wandb
PROJECT_NAME = 'Face-Mask-Detection'
from tqdm import tqdm
wandb.init(project=PROJECT_NAME,name='test')
for _ in tqdm(range(EPOCHS),leave=False):
for i in range(0,len(X_train),BATCH_SIZE):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,3,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
preds = model(X_batch)
preds.to(device)
print(preds.shape)
loss = criterion(preds,y_batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item()})
wandb.finish()
for index in range(len(preds)):
print(torch.argmax(torch.round(preds)[index]))
print(y_batch[index])
print('\n')
###Output
tensor(1, device='cuda:0')
tensor(1, device='cuda:0')
tensor(1, device='cuda:0')
tensor(1, device='cuda:0')
tensor(1, device='cuda:0')
tensor(1, device='cuda:0')
tensor(0, device='cuda:0')
tensor(0, device='cuda:0')
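###Markdown
 A possible sanity check (a sketch, not part of the original training script): compute the accuracy of the last batch directly from `preds` and `y_batch`.
###Code
# Sketch: fraction of correct predictions in the last training batch
batch_acc = (torch.argmax(preds, dim=1) == y_batch).float().mean()
print('last-batch accuracy:', batch_acc.item())
###Output
_____no_output_____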
|
analyses/2019-01-22-visualize-predictors-by-timepoints.ipynb | ###Markdown
Visualize predictors by timepoints
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
%matplotlib inline
plt.style.use("huddlej")
!pwd
df = pd.read_table("../results/builds/h3n2/5_viruses_per_month/sample_0/2005-10-01--2015-10-01/tip_attributes.tsv")
#df = pd.read_table("../results/builds/h3n2/5_viruses_per_month/sample_0/2005-10-01--2015-10-01/standardized_tip_attributes.tsv")
df["timepoint"] = pd.to_datetime(df["timepoint"])
df.head()
g = sns.FacetGrid(df, hue="timepoint", height=4, aspect=2., legend_out=True)
g = g.map(sns.distplot, "ep_x")
g = sns.FacetGrid(df, hue="timepoint", height=4, aspect=2., legend_out=True)
g = g.map(sns.distplot, "cTiterSub_x")
g = sns.FacetGrid(df, hue="timepoint", height=4, aspect=2., legend_out=True)
g = g.map(sns.distplot, "lbi")
g = sns.FacetGrid(df, hue="timepoint", height=4, aspect=2., legend_out=True)
g = g.map(sns.distplot, "ne_star")
np.log(df[df["ne_star"] > 0]["ne_star"]).value_counts()
g = sns.FacetGrid(df[np.abs(df["dms_star"]) < 10], hue="timepoint", height=4, aspect=2., legend_out=True)
g = g.map(sns.distplot, "dms_star", kde=False)
sns.lmplot("cTiterSub_x", "ep_x", df)
sns.lmplot("ne_star", "dms_star", df)
sns.lmplot("ep_x", "ne_star", df)
sns.lmplot("ep_x", "dms_star", df)
sns.lmplot("ep_x", "lbi", df)
sns.lmplot("cTiterSub_x", "lbi", df)
sns.pairplot(df.loc[:, ["ep_x", "cTiterSub_x", "ne_star", "dms_star", "lbi"]].dropna())
df.loc[:, ["ep_x", "cTiterSub_x", "ne_star", "dms_star", "lbi"]].shape
df.loc[:, ["ep_x", "cTiterSub_x", "ne_star", "dms_star", "lbi"]].dropna().shape
###Output
_____no_output_____ |
Classifying_datasets/Convolutional_Neural_Networks/scikit_tutorial.ipynb | ###Markdown
Classifying Images With Scikit_Learn
###Code
import sklearn as sk
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn .datasets import fetch_olivetti_faces
faces = fetch_olivetti_faces()
faces.DESCR
faces.keys()
faces.images.shape
faces.data.shape
faces.target.shape
np.max(faces.data)
np.min(faces.data)
np.median(faces.data)
def print_faces(images , target , top_n):
fig = plt.figure(figsize=(20,20))
for i in range(top_n):
p = fig.add_subplot(20,20,i+1,xticks=[],yticks=[])
p.imshow(images[i],cmap=plt.cm.bone)
p.text(0,14,str(target[i]))
p.text(0,59,str(i))
print_faces(faces.images,faces.target,20)
plt.show()
from sklearn.svm import SVC
from sklearn.cross_validation import cross_val_score,KFold
from scipy.stats import sem
svc_1 = SVC(kernel='linear')
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(faces.data, faces.target, test_size=0.25, random_state=0)
def evaluate_cross_validation(clf, X, y, K):
cv = KFold(len(y) , K, shuffle =True, random_state = 0)
scores = cross_val_score(clf,X,y,cv=cv)
print scores
evaluate_cross_validation(svc_1,X_train,y_train,5)
from sklearn import metrics
def train_and_test(clf, X_train, X_test, y_train, y_test):
clf.fit(X_train, y_train)
print "Accuracy on training Set"
print clf.score(X_train, y_train)
print "Accuracy on testing Set"
print clf.score(X_test, y_test)
y_pred = clf.predict(X_test)
print "Classification Report"
print metrics.classification_report(y_test, y_pred)
print "Confudion Matrix"
print metrics.confusion_matrix(y_test, y_pred)
train_and_test(svc_1, X_train, X_test, y_train, y_test)
glasses = [
(10, 19), (30, 32), (37, 38), (50, 59), (63, 64),
(69, 69), (120, 121), (124, 129), (130, 139), (160, 161),
(164, 169), (180, 182), (185, 185), (189, 189), (190, 192),
(194, 194), (196, 199), (260, 269), (270, 279), (300, 309),
(330, 339), (358, 359), (360, 369)]
def create_target(segments):
y = np.zeros(faces.target.shape[0])
for (start, end) in segments:
y[start:end+1] = 1
return y
target_glasses = create_target(glasses)
X_train, X_test, y_train, y_test = train_test_split(faces.data, target_glasses, test_size=0.25, random_state=0)
svc_2 = SVC(kernel='linear')
evaluate_cross_validation(svc_2, X_train, y_train, 5)
train_and_test(svc_2, X_train, X_test, y_train, y_test)
X_test = faces.data[30:40]
y_test = target_glasses[30:40]
y_test.shape
select = np.ones(target_glasses.shape[0])
select[30:40] = 0
X_train = faces.data[select == 1]
y_train = target_glasses[select == 1]
y_train.shape
svc_3 = SVC(kernel='linear')
train_and_test(svc_3, X_train, X_test, y_train, y_test)
###Output
Accuracy on training Set
1.0
Accuracy on testing Set
0.9
Classification Report
precision recall f1-score support
0.0 0.83 1.00 0.91 5
1.0 1.00 0.80 0.89 5
avg / total 0.92 0.90 0.90 10
Confusion Matrix
[[5 0]
[1 4]]
###Markdown
Naive Bayes Using Scikit_Learn
###Code
from sklearn.datasets import fetch_20newsgroups
news = fetch_20newsgroups(subset='all')
print type(news.data), type(news.target), type(news.target_names)
print news.target_names
len(news.data)
len(news.target)
news.data[0] #Content of the data at 0th index
news.target[0], news.target_names[news.target[0]] # Target_Name
###Output
_____no_output_____
###Markdown
Pre-Processing The Data Machine learning algorithms can work only on numeric data, so our next step will be to convert our text-based dataset to a numeric dataset
###Code
SPLIT_PERC = .75
split_size = int(len(news.data)*SPLIT_PERC)
X_train = news.data[:split_size]
X_test = news.data[split_size:]
y_train = news.target[:split_size]
y_test = news.target[split_size:]
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer, HashingVectorizer, CountVectorizer
clf_1 = Pipeline([('vect', CountVectorizer()), ('clf', MultinomialNB())])
clf_2 = Pipeline([('vect', HashingVectorizer(non_negative=True)), ('clf', MultinomialNB())])
clf_3 = Pipeline([('vect', TfidfVectorizer()), ('clf', MultinomialNB())])
###Output
_____no_output_____
###Markdown
We will use the function defined earlier (evaluate_cross_validation), which takes a classifier and performs K-fold cross-validation over the specified X and y values:
###Code
from sklearn.cross_validation import cross_val_score, KFold
from scipy.stats import sem
clfs = [clf_1, clf_2, clf_3]
for clf in clfs:
print clf
evaluate_cross_validation(clf, news.data, news.target, 5)
###Output
Pipeline(steps=[('vect', CountVectorizer(analyzer=u'word', binary=False, decode_error=u'strict',
dtype=<type 'numpy.int64'>, encoding=u'utf-8', input=u'content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern=u'(?u)\\b\\w\\w+\\b',
tokenizer=None, vocabulary=None)), ('clf', MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True))])
[ 0.85782493 0.85725657 0.84664367 0.85911382 0.8458477 ]
Pipeline(steps=[('vect', HashingVectorizer(analyzer=u'word', binary=False, decode_error=u'strict',
dtype=<type 'numpy.float64'>, encoding=u'utf-8', input=u'content',
lowercase=True, n_features=1048576, ngram_range=(1, 1),
non_negative=True, norm=u'l2', preprocessor=None, stop_words=None,
strip_accents=None, token_pattern=u'(?u)\\b\\w\\w+\\b',
tokenizer=None)), ('clf', MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True))])
[ 0.75543767 0.77659857 0.77049615 0.78508888 0.76200584]
Pipeline(steps=[('vect', TfidfVectorizer(analyzer=u'word', binary=False, decode_error=u'strict',
dtype=<type 'numpy.int64'>, encoding=u'utf-8', input=u'content',
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), norm=u'l2', preprocessor=None, smooth_idf=True...rue,
vocabulary=None)), ('clf', MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True))])
[ 0.84482759 0.85990979 0.84558238 0.85990979 0.84213319]
|
huanghaiguang_code/ex6-SVM/4- spam filter.ipynb | ###Markdown
4 - Spam Email Detection
###Code
from sklearn import svm
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
import scipy.io as sio
###Output
_____no_output_____
###Markdown
> I think the hard part is how to vectorize emails. Using this preprocessed data set is cheating XD
###Code
mat_tr = sio.loadmat('data/spamTrain.mat')
mat_tr.keys()
###Output
_____no_output_____
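###Markdown
 For reference, a sketch of how raw email text could be vectorized with scikit-learn (the emails below are made-up examples, since this data set only ships the preprocessed feature matrix):
###Code
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical raw emails, just to illustrate the vectorization step
raw_emails = ["Win a FREE prize now!!!",
              "Meeting rescheduled to Monday at 10am"]
vectorizer = CountVectorizer(lowercase=True, stop_words='english')
X_raw = vectorizer.fit_transform(raw_emails)   # sparse document-term matrix
print(X_raw.shape)
print(vectorizer.vocabulary_)
###Output
_____no_output_____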
###Markdown
> be careful with the column vector : `(4000, 1)` is not the same as `(4000, )`
###Code
X, y = mat_tr.get('X'), mat_tr.get('y').ravel()
X.shape, y.shape
mat_test = sio.loadmat('data/spamTest.mat')
mat_test.keys()
test_X, test_y = mat_test.get('Xtest'), mat_test.get('ytest').ravel()
test_X.shape, test_y.shape
###Output
_____no_output_____
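###Markdown
 A quick illustration of the warning above (a sketch): a `(n, 1)` column vector and a flat `(n,)` array are different objects, which is why `ravel()` is applied to the labels.
###Code
import numpy as np

col = np.zeros((4000, 1))   # column vector, as stored in the .mat file
print(col.shape, col.ravel().shape)
###Output
_____no_output_____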
###Markdown
fit SVM model
###Code
svc = svm.SVC()
svc.fit(X, y)
pred = svc.predict(test_X)
print(metrics.classification_report(test_y, pred))
###Output
precision recall f1-score support
0 0.94 0.99 0.97 692
1 0.98 0.87 0.92 308
avg / total 0.95 0.95 0.95 1000
###Markdown
what about linear logistic regression?
###Code
logit = LogisticRegression()
logit.fit(X, y)
pred = logit.predict(test_X)
print(metrics.classification_report(test_y, pred))
###Output
precision recall f1-score support
0 1.00 0.99 1.00 692
1 0.99 0.99 0.99 308
avg / total 0.99 0.99 0.99 1000
|
aas229_workshop/Tutorial_Notebooks/wcs/aas229_WCS_solutions.ipynb | ###Markdown
WCS solutions Exercise 1:- Create a WCS object for a different file. dist_file_name = os.path.join('../Supporting_Data', 'dist_lookup.fits.gz') This file contains all distortions typical for HST imaging data - SIP, lookup_table and det2im (detector to image - correcting detector irregularities). The lookup table and det2im distortions are stored in separate extensions so you will need to pass as a second argument to `wcs.WCS` the file object (already opened with astropy.io.fits).- Look at the file object with the `info()` method. The lookup_table and det2im distortions are saved in separate extensions.- Modify one of the WCS keywords and save it to a file. (As some of the distortion is saved in extensions, use the method `to_fits()` to save the entire WCS.)
###Code
import os.path
from astropy.io import fits
from astropy import wcs
dist_file_name = os.path.join('dist_lookup.fits.gz')
f = fits.open(dist_file_name)
w = wcs.WCS(f[1].header, f)
print(f.info())
###Output
Filename: dist_lookup.fits.gz
No. Name Type Cards Dimensions Format
0 PRIMARY PrimaryHDU 4 ()
1 SCI ImageHDU 170 (100, 100) float32
2 D2IMARR ImageHDU 15 (4096, 1) float32
3 WCSDVARR ImageHDU 15 (65, 33) float32
4 WCSDVARR ImageHDU 15 (65, 33) float32
None
###Markdown
The `WCSDVARR` extensions contain the lookup_table distortion and the `D2IMARR` extension contains the detector correction.
###Code
hdr = w.to_header(relax=True)
hdr
print(w.wcs.crpix)
w.wcs.crpix = [10, 20]
new_file = w.to_fits(relax=True)
new_file.info()
new_file[0].header['CRPIX*']
###Output
_____no_output_____
###Markdown
Exercise 2: Using the same file, create a WCS object for the alternate WCS in its 'SCI' header, by also passing `key='O'` to wcs.WCS. Compare the two WCSs using the `printwcs()` method.
###Code
f = fits.open(dist_file_name)
w = wcs.WCS(f[1].header, f)
walt = wcs.WCS(f[1].header, f, key='O')
# Primary WCS
w.printwcs()
# Alternate WCS
walt.printwcs()
###Output
WCS Keywords
Number of WCS axes: 2
CTYPE : 'RA---TAN-SIP' 'DEC--TAN-SIP'
CRVAL : 5.6305681061800001 -72.054571842800001
CRPIX : 2048.0 1024.0
CD1_1 CD1_2 : 1.29056256334e-05 5.9530912341999997e-06
CD2_1 CD2_2 : 5.0220581265600003e-06 -1.26447741482e-05
NAXIS : 100 100
|
notebook/intro-to-python1.ipynb | ###Markdown
Introduction to Python - part 1 Author: Manuel Dalcastagnè. This work is licensed under a CC Attribution 3.0 Unported license (http://creativecommons.org/licenses/by/3.0/). Original material, "Introduction to Python programming", was created by J.R. Johansson under the CC Attribution 3.0 Unported license (http://creativecommons.org/licenses/by/3.0/) and can be found at https://github.com/jrjohansson/scientific-python-lectures. Python program files and some general rules * Python code is usually stored in text files with the file ending "`.py`": myprogram.py* To run our Python program from the command line we use: $ python myprogram.py* Every line in a Python program file is assumed to be a Python statement, except comment lines which start with `#`: # this is a comment * Remark: **multiline comments do not exist in Python!** * Differently from other languages, statements of code do not require any punctuation like `;` at the end of rows* Code blocks of flow controls do not require curly brackets `{}`; in contrast, they are defined using code indentation (`tab`)* Conditions of flow controls do not require round brackets `()`* Python does not require a `main` function to run a program; we can define that, but it is not mandatory Variables and types Names Variable names in Python can contain alphanumerical characters `a-z`, `A-Z`, `0-9` and some special characters such as `_`. Normal variable names must start with a letter. By convention, variable names start with a lower-case letter, and Class names start with a capital letter. In addition, there are a number of Python keywords that cannot be used as variable names. These keywords are: and, as, assert, break, class, continue, def, del, elif, else, except, finally, for, from, global, if, import, in, is, lambda, not, or, pass, raise, return, try, while, with, yield Variable assignment The assignment operator in Python is `=`, and it can be used to create new variables or assign values to already existing ones:
###Code
# variable assignments
x = 1.0
my_variable = 12.2
###Output
_____no_output_____
###Markdown
However, a variable has a type associated with it. The type is derived from the assigned value.
###Code
type(x)
###Output
_____no_output_____
###Markdown
If we assign a new value to a variable, its type changes.
###Code
x = 1
type(x)
###Output
_____no_output_____
###Markdown
Fundamental data types
###Code
# integers
x = 1
type(x)
# float
x = 1.0
type(x)
# boolean
b1 = True
b2 = False
type(b1)
# complex numbers: note the use of `j` to specify the imaginary part
x = 1.0 - 1.0j
type(x)
# string
s = "Hello world"
type(s)
###Output
_____no_output_____
###Markdown
Strings can be seen as sequences of characters, although the character data type is not supported in Python.
###Code
# length of the string: the number of characters
len(s)
###Output
_____no_output_____
###Markdown
Data slicing We can index characters in a string using `[]`, but **heads up MATLAB users:** indexing starts at 0! Moreover, we can extract a part of a string using the syntax `[start:stop]` (data slicing), which extracts characters between index `start` and `stop` -1:
###Code
s[0:5]
###Output
_____no_output_____
###Markdown
If we omit either of `start` or `stop` from `[start:stop]`, the default is the beginning and the end of the string, respectively:
###Code
s[:5]
s[6:]
###Output
_____no_output_____
###Markdown
How to print and format strings
###Code
print("str1", 1.0, False, -1) # The print statement converts all arguments to strings
print("str1" + "str2", "is not", False) # strings added with + are concatenated without space
# alternative way to format a string
s3 = 'value1 = {0}, value2 = {1}'.format(3.1415, "Hello")
print(s3)
# how to print a float number up to a certain number of decimals
print("{0:.4f}".format(5.78678387))
###Output
5.7868
###Markdown
Python has a very rich set of functions for text processing. See for example http://docs.python.org/3/library/string.html for more information. Type utility functions The module `types` contains a number of functions that can be used to test if variables are of certain types:
###Code
import types
x = 1.0
# check if the variable x is a float
type(x) is float
# check if the variable x is an int
type(x) is int
###Output
_____no_output_____
###Markdown
Variables can also be converted from one type to another using builtin functions such as `int`, `float` and `str` (**type casting**):
###Code
x = 1.5
print(x, type(x))
x = int(x)
print(x, type(x))
###Output
1 <class 'int'>
###Markdown
Operators Arithmetic operators: `+`, `-`, `*`, `/`, `//` (integer division), `%` (modulo), `**` (power)
###Code
1 + 2, 1 - 2, 1 * 2, 2 / 4
1.0 + 2.0, 1.0 - 2.0, 1.0 * 2.0, 2.0 / 4.0
7.0 // 3.0, 7.0 % 3.0
7 // 3, 7 % 3
2 ** 2
###Output
_____no_output_____
###Markdown
Boolean operators: `and`, `not`, `or`
###Code
True and False
not False
True or False
###Output
_____no_output_____
###Markdown
Comparison operators: `>`, `<`, `>=`, `<=`, `==` (equality), `is` (identity)
###Code
2 > 1, 2 < 1
2 >= 2, 2 <= 1
# equality
[1,2] == [1,2]
# identity
l1 = l2 = [1,2]
l1 is l2
# identity
l1 = [1,2]
l2 = [1,2]
l1 is l2
###Output
_____no_output_____
###Markdown
When testing for identity, we are asking Python if an object is the same as another one. If you come from C or C++, you can think of the identity as an operator to check if the pointers of two objects are pointing to the same memory address. In Python identities of objects are integers which are guaranteed to be unique for the lifetime of objects, and they can be found by using the `id()` function.
###Code
id(l1), id(l2), id(10)
###Output
_____no_output_____
###Markdown
Basic data structures: List, Set, Tuple and Dictionary List Lists are collections of ordered elements, where elements can be of different types and duplicate elements are allowed.The syntax for creating lists in Python is `[...]`:
###Code
l = [1,2,3,4]
print(type(l))
print(l)
###Output
<class 'list'>
[1, 2, 3, 4]
###Markdown
We can use the same slicing techniques to manipulate lists as we could use on strings:
###Code
print(l)
print(l[1:3])
###Output
[1, 2, 3, 4]
[2, 3]
###Markdown
Lists play a very important role in Python. For example they are used in loops and other flow control structures (discussed below). There are a number of convenient functions for generating lists of various types, for example the `range` function:
###Code
start = 1
stop = 10
step = 2
# range generates an iterator, which can be converted to a list using 'list(...)'.
list(range(start, stop, step))
list(range(-10, 10))
###Output
_____no_output_____
###Markdown
Adding, modifying and removing elements in lists
###Code
# create a new empty list
l = []
# append an element to the end of a list using `append`
l.append("A")
l.append("B")
l.append("C")
print(l)
# modify lists by assigning new values to elements in the list.
l[1] = "D"
l[2] = "E"
print(l)
# remove first element with specific value using 'remove'
l.remove("A")
print(l)
###Output
_____no_output_____
###Markdown
See `help(list)` for more details, or read the online documentation. Set Sets are collections of unordered elements, where elements can be of different types and duplicate elements are not allowed.The syntax for creating sets in Python is `{...}`:
###Code
l = {1,2,3,4}
print(type(l))
print(l)
###Output
_____no_output_____
###Markdown
Set elements are not ordered, so we can not use slicing techniques or access elements using indexes. Adding and removing elements in sets
###Code
# create a new empty set
l = set()
# add an element to the set using `add`
l.add("A")
l.add("B")
l.add("C")
print(l)
# Remove a specific element from the set using 'remove'
l.remove("A")
print(l)
###Output
_____no_output_____
###Markdown
See `help(set)` for more details, or read the online documentation. Dictionary Dictionaries are collections of ordered elements, where each element is a key-value pair and keys-values can be of different types. The syntax for dictionaries is `{key1 : value1, ...}`:
###Code
params = {"parameter1" : 1.0,
"parameter2" : 2.0,
"parameter3" : 3.0,}
print(type(params))
print(params)
# add a new element with key = "parameter4" and value = 4.0
params["parameter4"] = 4.0
print(params)
###Output
_____no_output_____
###Markdown
Adding and removing elements in dictionaries
###Code
# create a new empty dictionary
params = {}
# add key-value pairs to the dictionary using `update`
params.update({"A": 1})
params.update({"B": 2})
params.update({"C": 3})
print(params)
# Remove the element with key = "C" using 'pop'
params.pop("C")
print(params)
###Output
_____no_output_____
###Markdown
See `help(dict)` for more details, or read the online documentation. Tuples Tuples are collections of ordered elements, where each element can be of different types. However, once created, tuples cannot be modified.In Python, tuples are created using the syntax `(..., ..., ...)`:
###Code
point = (10, 20)
print(type(point))
print(point[1])
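# Tuples are immutable: trying to modify an element raises a TypeError
# (a quick illustrative check, not part of the original cell)
try:
    point[0] = 99
except TypeError as error:
    print("TypeError:", error)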
###Output
_____no_output_____
###Markdown
See `help(tuple)` for more details, or read the online documentation. Control Flow Conditional statements: if, elif, else The Python syntax for conditional execution of code uses the keywords `if`, `elif` (else if), `else`:
###Code
x = 10
# round parenthesis for conditions are not necessary
if (x==5):
print("statement1 is True")
elif x==10:
print("statement2 is True")
else:
print("statement1 and statement2 are False")
###Output
_____no_output_____
###Markdown
For the first time, we found a peculiar aspect of the Python language: **blocks are defined by their indentation level (usually a tab).**In many languages blocks are defined by curly brakets `{ }`, and the level of indentation is optional. In contrast, in Python we have to be careful to indent our code correctly or else we will get syntax errors. Other examples:
###Code
statement1 = statement2 = True
if statement1:
if statement2:
print("both statement1 and statement2 are True")
statement1 = False
if statement1:
print("printed if statement1 is True")
print("still inside the if block")
if statement1:
print("printed if statement1 is True")
print("now outside the if block")
###Output
_____no_output_____
###Markdown
Loops: for, while In Python, there are two types of loops: `for` and `while`. `for` loop The `for` loop iterates over the elements of the list, and executes the code block once for each element of the list:
###Code
for x in [1,2,3]:
print(x)
###Output
_____no_output_____
###Markdown
Any kind of list can be used in the `for` loop. For example:
###Code
for x in range(4):
print(x)
for x in range(-1,3):
print(x)
###Output
_____no_output_____
###Markdown
To iterate over key-value pairs of a dictionary:
###Code
params = {"parameter1" : 1.0,
"parameter2" : 2.0,
"parameter3" : 3.0,}
for key, value in params.items():
print(key + " = " + str(value))
###Output
_____no_output_____
###Markdown
Sometimes it is useful to have access to the indices of the values when iterating over a list. We can use the `enumerate` function for this:
###Code
for idx, x in enumerate(range(-3,3)):
print(idx, x)
###Output
_____no_output_____
###Markdown
Loops can be interrupted, using the `break` command:
###Code
for i in range(1,5):
if i==3:
break
print(i)
###Output
_____no_output_____
###Markdown
`while` loop The `while` loop iterates as long as its boolean condition is satisfied (so equal to True), and it executes the code block once for each iteration. Be careful to write the code so that the condition eventually becomes False, otherwise it will loop forever!
###Code
i = 0
while i < 5:
print(i)
i = i + 1
print("done")
###Output
_____no_output_____
###Markdown
**If something goes wrong and you enter an infinite loop**, the only solution is to kill the process. In Jupyter Notebook, go to the main dashboard and select the Running tab: then pick the notebook which is stuck and press Shutdown. In the Python interpreter, press CTRL + C twice. EXERCISE 1: Given a list of integers, without using any package or built-in function, compute and print: - mean of the list - number of negative and positive numbers in the list - two lists that contain positives and negatives in the original list
###Code
input = [-2,2,-3,3,10]
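# One possible solution sketch for Exercise 1 (len and print are still used for output):
total = 0
positives = []
negatives = []
for x in input:
    total = total + x
    if x >= 0:
        positives.append(x)
    else:
        negatives.append(x)
mean = total / len(input)
print("mean =", mean)
print("positives:", len(positives), positives)
print("negatives:", len(negatives), negatives)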
###Output
_____no_output_____
###Markdown
EXERCISE 2: Given a list of integers, without using any package or built-in function, compute and print: - a dictionary where: - keys are unique numbers contained in the list - values count the occurrences of unique numbers in the list. TIP: you can use dictionary functions
###Code
input = [1,2,3,4,2,3,1,2,3,4,2,1,3]
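# One possible solution sketch for Exercise 2, using a plain loop and the dictionary itself:
counts = {}
for x in input:
    if x in counts:
        counts[x] = counts[x] + 1
    else:
        counts[x] = 1
print(counts)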
###Output
_____no_output_____ |
jupyter_notebooks/notebooks/NB19_CXVII-Keras_VAE_MNIST.ipynb | ###Markdown
Notebook 19: Variational Autoencoders with Keras and MNIST Learning Goals The goals of this notebook are to learn how to code a variational autoencoder in Keras. We will discuss hyperparameters, training, and loss-functions. In addition, we will familiarize ourselves with the Keras Sequential API as well as how to visualize results and make predictions using a VAE with a small number of latent dimensions. Overview This notebook teaches the reader how to build a Variational Autoencoder (VAE) with Keras. The code is a minimally modified, stripped-down version of the code from Louis Tiao in his wonderful [blog post](http://tiao.io/posts/implementing-variational-autoencoders-in-keras-beyond-the-quickstart-tutorial/) which the reader is strongly encouraged to also read. Our VAE will have Gaussian latent variables and a Gaussian posterior distribution $q_\phi({\mathbf z}|{\mathbf x})$ with a diagonal covariance matrix. Recall that a VAE consists of four essential elements:* A latent variable ${\mathbf z}$ drawn from a distribution $p({\mathbf z})$ which in our case will be a Gaussian with mean zero and standard deviation $\epsilon$.* A decoder $p(\mathbf{x}|\mathbf{z})$ that maps latent variables ${\mathbf z}$ to visible variables ${\mathbf x}$. In our case, this is just a Multi-Layer Perceptron (MLP) - a neural network with one hidden layer.* An encoder $q_\phi(\mathbf{z}|\mathbf{x})$ that maps examples to the latent space. In our case, this map is just a Gaussian with means and variances that depend on the input: $q_\phi({\bf z}|{\bf x})= \mathcal{N}({\bf z}, \boldsymbol{\mu}({\bf x}), \mathrm{diag}(\boldsymbol{\sigma}^2({\bf x})))$* A cost function consisting of two terms: the reconstruction error and an additional regularization term that minimizes the KL-divergence between the variational and true encoders. Mathematically, the reconstruction error is just the cross-entropy between the samples and their reconstructions. The KL-divergence term can be calculated analytically in this case and can be written as $$-D_{KL}(q_\phi({\bf z}|{\bf x})|p({\bf z}))={1 \over 2} \sum_{j=1}^J \left (1+\log{\sigma_j^2({\bf x})}-\mu_j^2({\bf x}) -\sigma_j^2({\bf x})\right).$$ Importing Data and specifying hyperparameters In the next section of code, we import the data and specify hyperparameters. The MNIST data are gray scale, ranging in values from 0 to 255 for each pixel. We normalize this range to lie between 0 and 1. The hyperparameters we need in order to specify the architecture and train the VAE are:* The dimension of the hidden layers for encoders and decoders (`intermediate_dim`)* The dimension of the latent space (`latent_dim`)* The standard deviation of latent variables (`epsilon_std`)* Optimization hyper-parameters: `batch_size`, `epochs`
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from keras import backend as K
from keras.layers import (Input, InputLayer, Dense, Lambda, Layer,
Add, Multiply)
from keras.models import Model, Sequential
from keras.datasets import mnist
import pandas as pd
#Load Data and map gray scale 256 to number between zero and 1
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = np.expand_dims(x_train, axis=-1) / 255.
x_test = np.expand_dims(x_test, axis=-1) / 255.
print(x_train.shape)
# Find dimensions of input images
img_rows, img_cols, img_chns = x_train.shape[1:]
# Specify hyperparameters
original_dim = img_rows * img_cols
intermediate_dim = 256
latent_dim = 2
batch_size = 100
epochs = 3
epsilon_std = 1.0
###Output
Using TensorFlow backend.
###Markdown
Specifying the loss function Here we specify the loss function. The first block of code is just the reconstruction error, which is given by the cross-entropy. The second block of code calculates the KL-divergence analytically and adds it to the loss function with the line `self.add_loss`. It represents the KL-divergence as just another layer in the neural network with the inputs equal to the outputs: the means and variances of the variational encoder (i.e. $\boldsymbol{\mu}({\bf x})$ and $\boldsymbol{\sigma}^2({\bf x})$).
###Code
def nll(y_true, y_pred):
""" Negative log likelihood (Bernoulli). """
# keras.losses.binary_crossentropy gives the mean
# over the last axis. we require the sum
return K.sum(K.binary_crossentropy(y_true, y_pred), axis=-1)
class KLDivergenceLayer(Layer):
""" Identity transform layer that adds KL divergence
to the final model loss.
"""
def __init__(self, *args, **kwargs):
self.is_placeholder = True
super(KLDivergenceLayer, self).__init__(*args, **kwargs)
def call(self, inputs):
mu, log_var = inputs
kl_batch = - .5 * K.sum(1 + log_var -
K.square(mu) -
K.exp(log_var), axis=-1)
self.add_loss(K.mean(kl_batch), inputs=inputs)
return inputs
###Output
_____no_output_____
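###Markdown
 As a quick sanity check of the analytic KL term (a sketch using NumPy, separate from the Keras graph): with $\mu=0$ and $\log\sigma^2=0$ a latent dimension contributes nothing, since $q$ then equals the standard-normal prior, while $\mu=1$ contributes $1/2$.
###Code
# Sketch: evaluate the analytic KL expression for two latent dimensions
mu = np.array([0.0, 1.0])
log_var = np.array([0.0, 0.0])
kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
print(kl)  # 0.0 from the first dimension + 0.5 from the second = 0.5
###Output
_____no_output_____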
###Markdown
Encoder and Decoder The following specifies both the encoder and the decoder. The encoder is an MLP with three layers that maps ${\bf x}$ to $\boldsymbol{\mu}({\bf x})$ and $\boldsymbol{\sigma}^2({\bf x})$, followed by the generation of a latent variable using the reparametrization trick (see main text). The decoder is specified as a single sequential Keras layer.
###Code
# Encoder
x = Input(shape=(original_dim,))
h = Dense(intermediate_dim, activation='relu')(x)
z_mu = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)
z_mu, z_log_var = KLDivergenceLayer()([z_mu, z_log_var])
# Reparametrization trick
z_sigma = Lambda(lambda t: K.exp(.5*t))(z_log_var)
eps = Input(tensor=K.random_normal(shape=(K.shape(x)[0],
latent_dim)))
z_eps = Multiply()([z_sigma, eps])
z = Add()([z_mu, z_eps])
# This defines the Encoder which takes noise and input and outputs
# the latent variable z
encoder = Model(inputs=[x, eps], outputs=z)
# Decoder is MLP specified as single Keras Sequential Layer
decoder = Sequential([
Dense(intermediate_dim, input_dim=latent_dim, activation='relu'),
Dense(original_dim, activation='sigmoid')
])
x_pred = decoder(z)
###Output
_____no_output_____
###Markdown
Training the model We now train the model. Even though the loss function passed to the compiler is the negative log likelihood (cross-entropy), recall that the KL layer adds the analytic form of the KL-divergence to the loss as well. We also have to reshape the data to make each image a vector, and specify an optimizer.
###Code
vae = Model(inputs=[x, eps], outputs=x_pred, name='vae')
vae.compile(optimizer='rmsprop', loss=nll)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, original_dim) / 255.
x_test = x_test.reshape(-1, original_dim) / 255.
hist = vae.fit(
x_train,
x_train,
shuffle=True,
epochs=epochs,
batch_size=batch_size,
validation_data=(x_test, x_test)
)
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/3
60000/60000 [==============================] - 10s 173us/step - loss: 190.8570 - val_loss: 172.5236
Epoch 2/3
60000/60000 [==============================] - 10s 173us/step - loss: 170.4935 - val_loss: 168.1820
Epoch 3/3
60000/60000 [==============================] - 12s 200us/step - loss: 166.9137 - val_loss: 165.7955
###Markdown
Visualizing the loss function We can automatically visualize the loss function as a function of the epoch using the standard Keras interface for fitting.
###Code
%matplotlib inline
#for pretty plots
golden_size = lambda width: (width, 2. * width / (1 + np.sqrt(5)))
fig, ax = plt.subplots(figsize=golden_size(6))
hist_df = pd.DataFrame(hist.history)
hist_df.plot(ax=ax)
ax.set_ylabel('NELBO')
ax.set_xlabel('# epochs')
ax.set_ylim(.99*hist_df[1:].values.min(),
1.1*hist_df[1:].values.max())
plt.show()
###Output
_____no_output_____
###Markdown
Visualizing the embedding in latent space Since our latent space is two dimensional, we can think of our encoder as defining a dimensionality reduction of the original 784-dimensional space to just two dimensions! We can visualize the structure of this mapping by plotting the MNIST dataset in the latent space, with each point colored by which digit it is $[0,1,\ldots,9]$.
###Code
x_test_encoded = encoder.predict(x_test, batch_size=batch_size)
plt.figure(figsize=golden_size(6))
plt.scatter(x_test_encoded[:, 0], x_test_encoded[:, 1], c=y_test, cmap='nipy_spectral')
plt.colorbar()
plt.savefig('VAE_MNIST_latent.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
Generating new examples One of the nice things about VAEs is that they are generative models. Thus, we can generate new examples or fantasy particles much like we did for RBMs and DBMs. We will generate the particles in two different ways:* Sampling uniformly in the latent space * Sampling accounting for the fact that the latent space is Gaussian, so that we expect most of the data points to be centered around (0,0) and to fall off exponentially in all directions. This is done by transforming the uniform grid using the inverse Cumulative Distribution Function (CDF) of the Gaussian.
###Code
# display a 2D manifold of the images
n = 5 # figure with 15x15 images
quantile_min = 0.01
quantile_max = 0.99
# Linear Sampling
# we will sample n points within [-15, 15] standard deviations
z1_u = np.linspace(5, -5, n)
z2_u = np.linspace(5, -5, n)
z_grid = np.dstack(np.meshgrid(z1_u, z2_u))
x_pred_grid = decoder.predict(z_grid.reshape(n*n, latent_dim)) \
.reshape(n, n, img_rows, img_cols)
# Plot figure
fig, ax = plt.subplots(figsize=golden_size(10))
ax.imshow(np.block(list(map(list, x_pred_grid))), cmap='gray')
ax.set_xticks(np.arange(0, n*img_rows, img_rows) + .5 * img_rows)
ax.set_xticklabels(map('{:.2f}'.format, z1_u), rotation=90)
ax.set_yticks(np.arange(0, n*img_cols, img_cols) + .5 * img_cols)
ax.set_yticklabels(map('{:.2f}'.format, z2_u))
ax.set_xlabel('$z_1$')
ax.set_ylabel('$z_2$')
ax.set_title('Uniform')
ax.grid(False)
plt.savefig('VAE_MNIST_fantasy_uniform.pdf')
plt.show()
# Inverse CDF sampling
z1 = norm.ppf(np.linspace(quantile_min, quantile_max, n))
z2 = norm.ppf(np.linspace(quantile_max, quantile_min, n))
z_grid2 = np.dstack(np.meshgrid(z1, z2))
x_pred_grid2 = decoder.predict(z_grid2.reshape(n*n, latent_dim)) \
.reshape(n, n, img_rows, img_cols)
# Plot figure Inverse CDF sampling
fig, ax = plt.subplots(figsize=golden_size(10))
ax.imshow(np.block(list(map(list, x_pred_grid2))), cmap='gray')
ax.set_xticks(np.arange(0, n*img_rows, img_rows) + .5 * img_rows)
ax.set_xticklabels(map('{:.2f}'.format, z1), rotation=90)
ax.set_yticks(np.arange(0, n*img_cols, img_cols) + .5 * img_cols)
ax.set_yticklabels(map('{:.2f}'.format, z2))
ax.set_xlabel('$z_1$')
ax.set_ylabel('$z_2$')
ax.set_title('Inverse CDF')
ax.grid(False)
plt.savefig('VAE_MNIST_fantasy_invCDF.pdf')
plt.show()
###Output
_____no_output_____ |
examples/automatic data correction.ipynb | ###Markdown
Load Demo Dataset A preprocessed data list is used.
###Code
with open("../data/example.pkl", "rb") as f:
datalist = pickle.load(f)
numstates = 9
###Output
_____no_output_____
###Markdown
Fit with uncorrected data
###Code
model1 = ctmc.Ctmc(numstates, transintv=1.0, toltime=1e-8, debug=False)
model1 = model1.fit(datalist)
mat1 = model1.transmat
mat1.round(2)
###Output
_____no_output_____
###Markdown
Fit with auto-corrected data
###Code
model2 = ctmc.Ctmc(numstates, transintv=1.0, toltime=1e-8, autocorrect=True, debug=False)
model2 = model2.fit(datalist)
mat2 = model2.transmat
mat2.round(2)
###Output
_____no_output_____
###Markdown
Differences
###Code
(mat2 - mat1).round(2)
###Output
_____no_output_____ |
RDD_interface.ipynb | ###Markdown
Find the best bandwidth around the threshold (cutoff point)
###Code
bandwidth_opt = rdd.optimal_bandwidth(data['y'], data['x'], cut=threshold)
print("Optimal bandwidth:", bandwidth_opt)
###Output
Optimal bandwidth: 0.7448859965965812
###Markdown
Preserve the data with 'x' within bandwidth of the cutoff point
###Code
data_rdd = rdd.truncated_data(data, 'x', bandwidth_opt, cut=threshold)
# directly plot the datapoints
plt.figure(figsize=(12, 8))
plt.scatter(data_rdd['x'], data_rdd['y'], facecolors='none', edgecolors='r')
plt.xlabel('x')
plt.ylabel('y')
plt.axvline(x=threshold, color='b')
plt.show()
plt.close()
data_rdd.head()
###Output
_____no_output_____
###Markdown
To better visualize, put data into bins (here 100 bins in total) and compute the mean
###Code
data_binned = rdd.bin_data(data_rdd, 'y', 'x', 100)
data_binned.head()
plt.figure(figsize=(12, 8))
plt.scatter(data_binned['x'], data_binned['y'],
s = data_binned['n_obs'], facecolors='none', edgecolors='r')
plt.axvline(x=threshold, color='b')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
plt.close()
###Output
_____no_output_____
###Markdown
Estimation Here the effect of the 'cutoff' event can be estimated using different formulas. The formulas can be specified in the class 'rdd'. In the following case, we use the regression y ~ TREATED + x. Here TREATED=1 if x >= cut, and 0 otherwise.
###Code
# estimate the model directly
# treatment = 1 if x>=threshold
model = rdd.rdd(data_rdd, 'x', 'y', cut=threshold)
print(model.fit().summary())
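# Illustrative sketch (not part of the rdd package's API): the TREATED dummy that
# rdd builds internally can be constructed by hand and fed to a plain OLS formula.
# The statsmodels import is an extra dependency assumed for this sketch only.
import statsmodels.formula.api as smf
sketch = data_rdd.copy()
sketch['TREATED'] = (sketch['x'] >= threshold).astype(int)   # 1 if x >= cut, else 0
print(smf.ols('y ~ TREATED + x', data=sketch).fit().params)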
###Output
Estimation Equation: y ~ TREATED + x
WLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.508
Model: WLS Adj. R-squared: 0.508
Method: Least Squares F-statistic: 2811.
Date: Mon, 20 Jan 2020 Prob (F-statistic): 0.00
Time: 23:20:31 Log-Likelihood: -7794.0
No. Observations: 5442 AIC: 1.559e+04
Df Residuals: 5439 BIC: 1.561e+04
Df Model: 2
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 1.0297 0.046 22.267 0.000 0.939 1.120
TREATED 0.4629 0.054 8.636 0.000 0.358 0.568
x 1.9944 0.065 30.776 0.000 1.867 2.121
==============================================================================
Omnibus: 2.452 Durbin-Watson: 2.036
Prob(Omnibus): 0.293 Jarque-Bera (JB): 2.429
Skew: -0.034 Prob(JB): 0.297
Kurtosis: 3.077 Cond. No. 10.3
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
This example includes other variables and interaction w1*w2
###Code
model = rdd.rdd(data_rdd, 'x', cut=threshold, equation='y ~ TREATED + x + w1*w2')
print(model.fit().summary())
###Output
Estimation Equation: y ~ TREATED + x + w1*w2
WLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.523
Model: WLS Adj. R-squared: 0.523
Method: Least Squares F-statistic: 1194.
Date: Mon, 20 Jan 2020 Prob (F-statistic): 0.00
Time: 23:21:35 Log-Likelihood: -7709.6
No. Observations: 5442 AIC: 1.543e+04
Df Residuals: 5436 BIC: 1.547e+04
Df Model: 5
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 1.0303 0.046 22.617 0.000 0.941 1.120
TREATED 0.4784 0.053 9.054 0.000 0.375 0.582
x 1.9828 0.064 31.054 0.000 1.858 2.108
w1 -0.1746 0.014 -12.831 0.000 -0.201 -0.148
w2 0.0080 0.003 2.362 0.018 0.001 0.015
w1:w2 -0.0025 0.003 -0.737 0.461 -0.009 0.004
==============================================================================
Omnibus: 2.725 Durbin-Watson: 2.031
Prob(Omnibus): 0.256 Jarque-Bera (JB): 2.732
Skew: -0.033 Prob(JB): 0.255
Kurtosis: 3.088 Cond. No. 26.9
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
This example involves interaction x*TREATED and allows different coefficients of x before and after the event.
###Code
model = rdd.rdd(data_rdd, 'x', cut=threshold, equation='y ~ TREATED + x + x*TREATED')
print(model.fit().summary())
###Output
Estimation Equation: y ~ TREATED + x + x*TREATED
WLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.508
Model: WLS Adj. R-squared: 0.508
Method: Least Squares F-statistic: 1874.
Date: Mon, 20 Jan 2020 Prob (F-statistic): 0.00
Time: 23:22:25 Log-Likelihood: -7793.7
No. Observations: 5442 AIC: 1.560e+04
Df Residuals: 5438 BIC: 1.562e+04
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 0.9931 0.063 15.792 0.000 0.870 1.116
TREATED 0.5740 0.140 4.104 0.000 0.300 0.848
x 2.0510 0.092 22.198 0.000 1.870 2.232
x:TREATED -0.1115 0.130 -0.860 0.390 -0.366 0.143
==============================================================================
Omnibus: 2.540 Durbin-Watson: 2.036
Prob(Omnibus): 0.281 Jarque-Bera (JB): 2.518
Skew: -0.035 Prob(JB): 0.284
Kurtosis: 3.078 Cond. No. 26.3
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
|
python_bootcamp/notebooks/00-Python Object and Data Structure Basics/02-Strings.ipynb | ###Markdown
Strings Strings are used in Python to record text information, such as names. Strings in Python are actually a *sequence*, which basically means Python keeps track of every element in the string as a sequence. For example, Python understands the string 'hello' to be a sequence of letters in a specific order. This means we will be able to use indexing to grab particular letters (like the first letter, or the last letter).This idea of a sequence is an important one in Python and we will touch upon it later on in the future.In this lecture we'll learn about the following: 1.) Creating Strings 2.) Printing Strings 3.) String Indexing and Slicing 4.) String Properties 5.) String Methods 6.) Print Formatting Creating a StringTo create a string in Python you need to use either single quotes or double quotes. For example:
###Code
# Single word
'hello'
# Entire phrase
'This is also a string'
# We can also use double quote
"String built with double quotes"
# Be careful with quotes!
' I'm using single quotes, but this will create an error'
###Output
_____no_output_____
###Markdown
The reason for the error above is because the single quote in I'm stopped the string. You can use combinations of double and single quotes to get the complete statement.
###Code
"Now I'm ready to use the single quotes inside a string!"
###Output
_____no_output_____
###Markdown
Now let's learn about printing strings! Printing a StringUsing Jupyter notebook with just a string in a cell will automatically output strings, but the correct way to display strings in your output is by using a print function.
###Code
# We can simply declare a string
'Hello World'
# Note that we can't output multiple strings this way
'Hello World 1'
'Hello World 2'
###Output
_____no_output_____
###Markdown
We can use a print statement to print a string.
###Code
print('Hello World 1')
print('Hello World 2')
print('Use \n to print a new line')
print('\n')
print('See what I mean?')
###Output
Hello World 1
Hello World 2
Use
to print a new line
See what I mean?
###Markdown
String Basics We can also use a function called len() to check the length of a string!
###Code
len('Hello World')
###Output
_____no_output_____
###Markdown
Python's built-in len() function counts all of the characters in the string, including spaces and punctuation. String IndexingWe know strings are a sequence, which means Python can use indexes to call parts of the sequence. Let's learn how this works.In Python, we use brackets [] after an object to call its index. We should also note that indexing starts at 0 for Python. Let's create a new object called s and then walk through a few examples of indexing.
###Code
# Assign s as a string
s = 'Hello World'
#Check
s
# Print the object
print(s)
###Output
Hello World
###Markdown
Let's start indexing!
###Code
# Show first element (in this case a letter)
s[0]
s[1]
s[2]
###Output
_____no_output_____
###Markdown
We can use a : to perform *slicing* which grabs everything up to a designated point. For example:
###Code
# Grab everything past the first term all the way to the length of s which is len(s)
s[1:]
# Note that there is no change to the original s
s
# Grab everything UP TO the 3rd index
s[:3]
###Output
_____no_output_____
###Markdown
Note the above slicing. Here we're telling Python to grab everything from 0 up to 3. It doesn't include the 3rd index. You'll notice this a lot in Python, where statements are usually in the context of "up to, but not including".
###Code
#Everything
s[:]
###Output
_____no_output_____
###Markdown
We can also use negative indexing to go backwards.
###Code
# Last letter (one index behind 0 so it loops back around)
s[-1]
# Grab everything but the last letter
s[:-1]
###Output
_____no_output_____
###Markdown
We can also use index and slice notation to grab elements of a sequence by a specified step size (the default is 1). For instance we can use two colons in a row and then a number specifying the frequency to grab elements. For example:
###Code
# Grab everything, but go in steps size of 1
s[::1]
# Grab everything, but go in step sizes of 2
s[::2]
# We can use this to print a string backwards
s[::-1]
###Output
_____no_output_____
###Markdown
String PropertiesIt's important to note that strings have an important property known as *immutability*. This means that once a string is created, the elements within it can not be changed or replaced. For example:
###Code
s
# Let's try to change the first letter to 'x'
s[0] = 'x'
###Output
_____no_output_____
###Markdown
Notice how the error tells us directly what we can't do, change the item assignment!Something we *can* do is concatenate strings!
###Code
s
# Concatenate strings!
s + ' concatenate me!'
# We can reassign s completely though!
s = s + ' concatenate me!'
print(s)
s
###Output
_____no_output_____
###Markdown
We can use the multiplication symbol to create repetition!
###Code
letter = 'z'
letter*10
###Output
_____no_output_____
###Markdown
Basic Built-in String methodsObjects in Python usually have built-in methods. These methods are functions inside the object (we will learn about these in much more depth later) that can perform actions or commands on the object itself.We call methods with a period and then the method name. Methods are in the form:object.method(parameters)Where parameters are extra arguments we can pass into the method. Don't worry if the details don't make 100% sense right now. Later on we will be creating our own objects and functions!Here are some examples of built-in methods in strings:
###Code
s
# Upper Case a string
s.upper()
# Lower case
s.lower()
# Split a string by blank space (this is the default)
s.split()
# Split by a specific element (doesn't include the element that was split on)
s.split('W')
###Output
_____no_output_____
###Markdown
There are many more methods than the ones covered here. Visit the Advanced String section to find out more! Print FormattingWe can use the .format() method to add formatted objects to printed string statements. The easiest way to show this is through an example:
###Code
'Insert another string with curly brackets: {}'.format('The inserted string')
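# A couple of extra illustrations (standard Python behaviour): .format accepts
# multiple positional and named placeholders, and f-strings do the same inline.
print('{0} {1} {0}'.format('fox', 'brown'))
print('{name} is {age} years old'.format(name='Sam', age=30))
name = 'Sam'
print(f'{name} is {30} years old')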
###Output
_____no_output_____ |
src_optimization/04_tiling/exe_runtime_by_tiling.ipynb | ###Markdown
Runtime by cells
###Code
import matplotlib.pyplot as plt
from matplotlib import rcParams
# from matplotlib.pyplot import figure
import pandas as pd
import seaborn as sns
def plotRoutine(p_path):
data_frame = pd.read_csv(p_path)
data_frame = data_frame[data_frame.n_tiling_cols >= 4]
data_frame['n_tiling_levels_rows'] = data_frame['n_tiling_levels_rows'].map(str)
rcParams['figure.figsize'] = 11.7,8.27
plot = sns.lineplot(x='n_tiling_cols',
y='runtime',
hue='n_tiling_levels_rows',
style='impl_id',
data=data_frame)
plot.set_yscale('log')
plot.set_xscale('log')
plot.set(xlabel='Number of cells', ylabel='Runtime [s]')
plt.figure(figsize=(1, 1), dpi=80)
plt.show()
###Output
_____no_output_____
###Markdown
Gauss3
###Code
path = './exe_runtime_by_tiling_gauss3.csv'
plotRoutine(path)
###Output
_____no_output_____ |
Image Classification/VGG16.ipynb | ###Markdown
Using Colab as the basic hardware platform, we need to change the Colab TensorFlow version to 2.1.0 and mount Google Drive.
###Code
import sys
sys.path[0] = '/tensorflow-2.1.0/python3.6'
from google.colab import drive
drive.mount('/drive')
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, Sequential, losses, optimizers, datasets
from tensorflow.keras.utils import to_categorical
tf.__version__
###Output
_____no_output_____
###Markdown
Read the cifar100 data from keras.datasets.
###Code
(x, y), (x_test, y_test) = datasets.cifar100.load_data()
print(x.shape,y.shape,x_test.shape,y_test.shape)
print(x.dtype,y.dtype)
###Output
_____no_output_____
###Markdown
Preprocess the data types and values: normalize x, encode y in one-hot format, and reshape the y dimensions into a shape TensorFlow can use.
###Code
def preprocess(x, y):
x = tf.cast(x, dtype=tf.float32) / 255.
y = tf.cast(to_categorical(tf.squeeze(tf.cast(y, dtype=tf.int32), axis=1), num_classes=100), dtype=tf.int32)
return x,y
class VGG16(keras.Model):
def __init__(self):
super(VGG16, self).__init__()
self.VGG16 = Sequential([
layers.Conv2D(64, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.Dropout(0.3),
layers.Conv2D(64, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.Dropout(rate=0.5),
layers.MaxPool2D(pool_size=[2,2], strides=2, padding="same"),
layers.Conv2D(128, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.Dropout(0.4),
layers.Conv2D(128, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.MaxPool2D(pool_size=[2,2], strides=2, padding="same"),
layers.Conv2D(256, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.Dropout(0.4),
layers.Conv2D(256, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.Dropout(0.4),
layers.Conv2D(256, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.MaxPool2D(pool_size=[2,2], strides=2, padding="same"),
layers.Conv2D(512, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.Dropout(0.4),
layers.Conv2D(512, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.Dropout(0.4),
layers.Conv2D(512, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.MaxPool2D(pool_size=[2,2], strides=2, padding="same"),
layers.Conv2D(512, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.Dropout(0.4),
layers.Conv2D(512, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.Dropout(0.4),
layers.Conv2D(512, kernel_size=[3,3], padding="same", activation=tf.nn.relu),
layers.BatchNormalization(),
layers.MaxPool2D(pool_size=[2,2], strides=2, padding="same"),
layers.Flatten(),
layers.Dense(4096, activation=tf.nn.relu),
layers.Dropout(rate=0.5),
layers.Dense(4096, activation=tf.nn.relu),
layers.Dropout(rate=0.5),
layers.Dense(100, activation=tf.nn.softmax)
])
def call(self, inputs, training=None):
x = inputs
prediction = self.VGG16(x)
return prediction
def main(x, y, x_test, y_test):
epochs = 1000
model = VGG16()
model.build(input_shape=(None, 32, 32, 3))
model.summary()
save_best = keras.callbacks.ModelCheckpoint('/drive/My Drive/Github/CNN/VGG16_best_model.h5', monitor='val_loss', verbose=1, save_best_only=True, mode='min')
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', verbose=1, min_delta=0, patience=100, mode='auto')
callbacks_list = [early_stop, save_best]
model.compile(optimizer=optimizers.Adam(),
loss=losses.categorical_crossentropy,
metrics=['accuracy'])
x, y = preprocess(x, y)
x_test, y_test = preprocess(x_test, y_test)
history = model.fit(x=x, y=y, epochs=epochs, batch_size=512, validation_data=(x_test, y_test), verbose=1, callbacks=callbacks_list)
return history
history = main(x, y, x_test, y_test)
###Output
_____no_output_____ |
Nengo.ipynb | ###Markdown
Neural simulation with Nengo. Nengo is a library for network-level neural simulation. A well-known application is the cognitive model SPA (also known as Spaun). You can install it on a PC and use the GUI, but here we run it from the CUI on Google Colab. Simply press Shift + Enter on each cell in order to run the simulations. References: 1. Official Nengo documentation https://www.nengo.ai/nengo/v2.8.0/index.html 2. How to use Nengo, parts 1-3 http://system-medicine.blog.jp/archives/7015961.html Installing Nengo
###Code
!pip install nengo
###Output
Requirement already satisfied: nengo in /usr/local/lib/python2.7/dist-packages (2.8.0)
Requirement already satisfied: numpy>=1.8 in /usr/local/lib/python2.7/dist-packages (from nengo) (1.16.4)
###Markdown
Nengo can now be used on Google Colab (session limit: 12 hours). Next we run the demos. Demo 1: Dynamics of a neural circuit. Here we build a neural network whose dynamics follow the Lorenz equations from chaos theory.
###Code
import nengo
# Define the model (a network of 100 neurons that outputs 3 variables)
tau = 0.1
sigma = 10
beta = 8.0 / 3
rho = 28
def feedback(x):
dx0 = -sigma * x[0] + sigma * x[1]
dx1 = -x[0] * x[2] - x[1]
dx2 = x[0] * x[1] - beta * (x[2] + rho) - rho
return [dx0 * tau + x[0], dx1 * tau + x[1], dx2 * tau + x[2]]
model = nengo.Network(label='Lorenz attractor', seed=100)
with model:
state = nengo.Ensemble(100, 3, radius=30)
nengo.Connection(state, state, function=feedback, synapse=tau)
state_probe = nengo.Probe(state, synapse=tau)
spike_probe = nengo.Probe(state.neurons)
# Run the simulation
with nengo.Simulator(model) as sim:
sim.run(10)
# Raster plot of the spikes
import matplotlib.pyplot as plt
from nengo.utils.matplotlib import rasterplot
rasterplot(sim.trange(), sim.data[spike_probe])
plt.xlabel('Time (s)')
plt.ylabel('Neuron Index');
# Plot the dynamics
from mpl_toolkits.mplot3d import Axes3D
plt.figure()
plt.plot(sim.trange(), sim.data[state_probe])
plt.xlabel('Time (s)')
plt.ylabel('Output Value');
ax = plt.figure().add_subplot(111, projection='3d')
ax.plot(*sim.data[state_probe].T)
###Output
_____no_output_____
###Markdown
The output shows chaotic behaviour. Changing the seed value of nengo.Network changes the result. Demo 2: A cognitive model with SPA. Using an SPA cognitive model, we make a neural circuit memorize pairs of words. First we feed in paired words such as 'One' and 'Uno' (here English and Spanish). Afterwards, when one word of a pair is given as input, the network recalls and outputs the other.
###Code
import nengo
from nengo import spa
# Define the model
model = spa.SPA(label="Question answering")
dimensions = 32
with model:
model.English_in = spa.State(dimensions=dimensions)
model.Spanish_in = spa.State(dimensions=dimensions)
model.conv = spa.State(dimensions=dimensions,
neurons_per_dimension=100,
feedback=1,
feedback_synapse=0.4)
model.cue = spa.State(dimensions=dimensions)
model.out = spa.State(dimensions=dimensions)
# Connect the state populations
cortical_actions = spa.Actions(
'conv = English_in * Spanish_in',
'out = conv * ~cue'
)
model.cortical = spa.Cortical(cortical_actions)
# Set up the inputs
input_time = 0.5
def English_input(t):
if t < input_time:
return 'One'
elif t < 2*input_time:
return 'Two'
else:
return '0'
def Spanish_input(t):
if t < input_time:
return 'Uno'
elif t < 2*input_time:
return 'Dos'
else:
return '0'
def cue_input(t):
if t < 2*input_time:
return '0'
sequence = ['0', 'Uno', 'One', '0', 'Dos', 'Two']
idx = int(((t - 2*input_time) // (1. / len(sequence))) % len(sequence))
return sequence[idx]
with model:
model.inp = spa.Input(English_in=English_input, Spanish_in=Spanish_input, cue=cue_input)
# Set up the outputs (probes)
with model:
model.config[nengo.Probe].synapse = nengo.Lowpass(0.03)
English_in = nengo.Probe(model.English_in.output)
Spanish_in = nengo.Probe(model.Spanish_in.output)
cue = nengo.Probe(model.cue.output)
conv = nengo.Probe(model.conv.output)
out = nengo.Probe(model.out.output)
# Run the simulation
with nengo.Simulator(model) as sim:
sim.run(5.)
# Plot
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 10))
vocab = model.get_default_vocab(dimensions)
plt.subplot(4, 1, 1)
plt.plot(sim.trange(), model.similarity(sim.data, English_in))
plt.legend(model.get_output_vocab('English_in').keys)
plt.ylabel("English")
plt.subplot(4, 1, 2)
plt.plot(sim.trange(), model.similarity(sim.data, Spanish_in))
plt.legend(model.get_output_vocab('Spanish_in').keys)
plt.ylabel("Spanish")
plt.subplot(4, 1, 3)
plt.plot(sim.trange(), model.similarity(sim.data, cue))
plt.legend(model.get_output_vocab('cue').keys)
plt.ylabel("Cue")
plt.subplot(4, 1, 4)
plt.plot(sim.trange(), spa.similarity(sim.data[out], vocab))
plt.legend(model.get_output_vocab('out').keys)
plt.ylabel("Output")
plt.xlabel("Time [s]");
###Output
_____no_output_____ |
Tareas_solved/Tarea-Python-01.ipynb | ###Markdown
Assignment 1 - Python *1. Write a sequence of instructions that reads a real number from the keyboard and shows whether the number is positive or not.*
###Code
numero = float(input("Escribe tú número real: "))  # the task asks for a real number, so read it as float
if numero > 0 :
print("El número digitado es positivo")
elif numero < 0 :
print("El número digitado es negativo")
else:
print("El número digitado es cero")
###Output
Escribe tú número real: 0
El número digitado es cero
###Markdown
*2. Write a sequence of instructions that reads a real number from the keyboard and shows whether the number lies in the range from -5 to 5, both included.*
###Code
from decimal import Decimal as D  # Decimal reads values exactly at the boundary
numero = D(input("Escribe el número real: "))
if numero >= -5 and numero <=5:
print("El número está en el rango entre -5 y 5")
else:
print("El número digitado no se encuentra en el rango entre -5 y 5")
numero = float(input("Escribe el número real: ")) # with float, boundary values may carry rounding errors
if numero >= -5 and numero <=5:
print("El número está en el rango entre -5 y 5")
else:
print("El número digitado no se encuentra en el rango entre -5 y 5")
###Output
Escribe el número real: 5.000000000000000000000000001
El número está en el rango entre -5 y 5
###Markdown
*3. Write a sequence of instructions that reads the coordinates of a point (x, y) and indicates in which of the four quadrants the point lies.* **i. If x = 0, report that the point lies on the vertical axis.** **ii. If y = 0, report that the point lies on the horizontal axis.** **iii. If both x = 0 and y = 0, report that the point is the origin of coordinates.**
###Code
# Read the point coordinates
x = float(input("Digite el punto x: "))
y = float(input("Digite el punto y: "))
print('El punto es', "(",x,",",y,")")
# Classify the point by quadrant / axis
if x > 0 and y > 0:
print("El punto se encuentra en el primer cuadrante")
elif x < 0 and y > 0:
print("El punto se encuentra en el segundo cuadrante")
elif x < 0 and y < 0:
print("El punto se encuentra en el tercer cuadrante")
elif x > 0 and y <0:
print("El punto se encuentra en el cuarto cuadrante")
elif x == 0 and y != 0:
print("El punto se encuentra sobre el eje vertical")
elif y == 0 and x != 0:
print("El punto se encuentra sobre el eje horizontal")
else:
print("El punto se encuentra sobre el origen")
###Output
Digite el punto x: 4
Digite el punto y: -inf
El punto es ( 4.0 , -inf )
El punto se encuentra en el cuarto cuadrante
###Markdown
*4. Write a sequence of instructions that reads two integers and shows the quotient and the remainder of the integer division.*
###Code
a = int(input("Ingrese el primer número entero (o dividendo)= "))
b = int(input("Ingrese el segundo número entero (o divisor) = "))
print("El cociente de la división entero es igual a ", a//b, "y el resto corresponde a ", a%b)
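# Equivalent one-liner (standard Python): divmod returns quotient and remainder in one call
cociente, resto = divmod(a, b)
print("divmod ->", cociente, resto)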
###Output
Ingrese el primer número entero (o dividendo)= 147
Ingrese el segundo número entero (o divisor) = 8
El cociente de la división entero es igual a 18 y el resto corresponde a 3
###Markdown
*5. Write a sequence of instructions that reads an integer and determines whether it is a perfect square or not (think about the best way to do it with what you have learned so far).*
###Code
# A perfect square is a natural number whose square root is also a natural number.
import math  # needed for math.sqrt below
numero = int(input("Digite un número positivo : "))
x = math.sqrt(numero)
if x >= 1 and x%1== 0 :
print("El número", numero, " es un cuadrado perfecto")
else:
print("El número", numero, " no es un cuadrado perfecto")
###Output
Digite un número positivo : 225
El número 225 es un cuadrado perfecto
###Markdown
*6. Write an expression that determines whether a positive integer can correspond to a leap year or not. Leap years are those whose number is divisible by four, except years that are multiples of 100, unless they are also multiples of 400 (for example, the year 2000 was a leap year but 2100 will not be).*
###Code
numero = int(input("Escribe el número del año :"))
# Remainders for divisibility by 4, 100 and 400
a = numero % 4
b = numero % 100
c = numero % 400
if a == 0:
# print(" Es divisible por 4")
if b==0:
# print("El año es divisible por 100")
if c == 0:
print("Es un año bisiesto")
else:
print("No es bisiesto")
else:
print("El año es bisiesto")
else:
print("No es un año bisiesto")
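# The task asks for a single expression; an equivalent boolean expression would be:
es_bisiesto = (numero % 4 == 0 and numero % 100 != 0) or numero % 400 == 0
print("Expression result:", es_bisiesto)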
###Output
Escribe el número del año :2000
Es un año bisiesto
###Markdown
*7. Look up an image of a chessboard on Google and note the naming of the squares. Write a sequence that reads a letter and a number from the keyboard, corresponding to a square of a chessboard, and indicates whether that square is black or white.*
###Code
# Store all the possible combinations
import itertools
data_casillas_impares_negras = [(1,3,5,7), ('b','d','f','h'), ('N')]
data_casillas_impares_negras = list(itertools.product(*data_casillas_impares_negras)) # build the list with all the squares
data_casillas_impares_negras_list = [(1,3,5,7), ('b','d','f','h')]
data_casillas_impares_negras_list = list(itertools.product(*data_casillas_impares_negras_list)) # build the list with all the squares
data_casillas_impares_blancas = [(1,3,5,7), ('a','c','e','g'), 'B']
data_casillas_impares_blancas = list(itertools.product(*data_casillas_impares_blancas))
data_casillas_impares_blancas_list = [(1,3,5,7), ('a','c','e','g')]
data_casillas_impares_blancas_list = list(itertools.product(*data_casillas_impares_blancas_list))
data_casillas_pares_blancas = [(2,4,6,8), ('b','d','f','h'), 'B']
data_casillas_pares_blancas = list(itertools.product(*data_casillas_pares_blancas))
data_casillas_pares_blancas_list = [(2,4,6,8), ('b','d','f','h')]
data_casillas_pares_blancas_list = list(itertools.product(*data_casillas_pares_blancas_list))
data_casillas_pares_negra = [(2,4,6,8), ('a','c','e','g'), 'N']
data_casillas_pares_negra = list(itertools.product(*data_casillas_pares_negra))
data_casillas_pares_negra_list = [(2,4,6,8), ('a','c','e','g')]
data_casillas_pares_negra_list = list(itertools.product(*data_casillas_pares_negra_list))
# build the list with all the squares, with their colour
d = data_casillas_pares_blancas + data_casillas_impares_blancas + data_casillas_pares_negra + data_casillas_impares_negras
# build the list with all the squares, without colour
d_list = data_casillas_pares_blancas_list + data_casillas_impares_blancas_list + data_casillas_pares_negra_list + data_casillas_impares_negras_list
ab = input("ingrese coordenada de la abcisa del tablero, es decir una letra de la 'a al h': ")
ord = int(input("ingrese coordenada de la ordenada del tablero, es decir un número del '1 al 8': "))
index = d_list.index((ord, ab))
if d[index][2] == 'B':
print("La casilla es Blanca")
else:
print("La casilla es Negra")
entrada = input("Introduce la Casilla (ej: A1): ")
letras = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8}
letra = entrada[0]
numero1 = int(entrada[1])
numero2 = letras[letra]
if (numero1 + numero2) % 2 == 0:
print("Negro")
else:
print("Blanco")
entrada = input("Introduce la Casilla (ej: A1): ")
letras = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8}
letra = entrada[0]
numero2 = letras[letra]
numero2
letras = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "H": 8}
letras
letras['A']
x = input(" :")
x[0]
###Output
:558
|
Wk09/Wk09_Dataset-preprocessing-in-class-solutions.ipynb | ###Markdown
Week 9 - Dataset preprocessing Before we utilize machine learning algorithms we must first prepare our dataset. This can often take a significant amount of time and can have a large impact on the performance of our models.We will be looking at four different types of data:* Tabular data* Image data* Text Tabular dataWe will look at three different steps we may need to take when handling tabular data: * Missing data* Normalization* Categorical data Image dataImage data can present a number of issues that we must address to maximize performance:* Histogram normalization* Windows* Pyramids (for detection at different scales)* Centering TextText can present a number of issues, mainly due to the number of words that can be found in our features. There are a number of ways we can convert from text to usable features:* Bag of words* Parsing
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
###Output
_____no_output_____
###Markdown
Tabular data* Missing data* Normalization* Categorical data Missing dataThere are a number of ways to handle missing data:* Drop all records with a value missing* Substitute all missing values with an average value* Substitute all missing values with some placeholder value, i.e. 0, 1e9, -1e9, etc* Predict missing values based on other attributes* Add additional feature indicating when a value is missingIf the machine learning model will be used with new data it is important to consider the possibility of receiving records with values missing that we have not observed previously in the training dataset.The simplest approach is to remove any records that have missing data. Unfortunately missing values are often not randomly distributed through a dataset and removing them can introduce bias.An alternative approach is to substitute the missing values. This can be with the mean of the feature across all the records or the value can be predicted based on the values of the other features in the dataset. Placeholder values can also be used with decision trees but do not work as well for most other algorithms.Finally, missing values can themselves be useful features. Adding an additional feature indicating when a value is missing is often used to include this information.
###Code
from sklearn import linear_model
x = np.array([[0, 0], [1, 1], [2, 2]])
y = np.array([0, 1, 2])
print(x,y)
clf = linear_model.LinearRegression()
clf.fit(x, y)
print(clf.coef_)
x_missing = np.array([[0, 0], [1, np.nan], [2, 2]])
print(x_missing, y)
clf = linear_model.LinearRegression()
clf.fit(x_missing, y)
print(clf.coef_)
import pandas as pd
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
[4,1,7,9,0,2,np.nan]], ).T
x.columns =['A', 'B', 'C', 'D', 'E']
y = pd.Series([29.0,
31.2,
63.25,
57.27,
66.3,
26.21,
48.24])
print(x, y)
x.dropna()
x.fillna(value={'A':1000,'B':2000,'C':3000,'D':4000,'E':5000})
x.fillna(value=x.mean())
###Output
_____no_output_____
###Markdown
NormalizationMany machine learning algorithms expect features to have similar distributions and scales.A classic example is gradient descent, if features are on different scales some weights will update faster than others because the feature values scale the weight updates.There are two common approaches to normalization:* Z-score standardization* Min-max scaling Z-score standardizationZ-score standardization rescales values so that they have a mean of zero and a standard deviation of 1. Specifically we perform the following transformation:$$z = \frac{x - \mu}{\sigma}$$ Min-max scalingAn alternative is min-max scaling that transforms data into the range of 0 to 1. Specifically:$$x_{norm} = \frac{x - x_{min}}{x_{max} - x_{min}}$$Min-max scaling is less commonly used but can be useful for image data and in some neural networks.
###Code
x_filled = x.fillna(value=x.mean())
print(x_filled)
x_norm = (x_filled - x_filled.min()) / (x_filled.max() - x_filled.min())
print(x_norm)
from sklearn import preprocessing
scaling = preprocessing.MinMaxScaler().fit(x_filled)
scaling.transform(x_filled)
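# Z-score standardization, sketched with sklearn's StandardScaler
# (the library equivalent of the (x - mean) / std formula shown above)
z_scaler = preprocessing.StandardScaler().fit(x_filled)
z_scaler.transform(x_filled)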
###Output
_____no_output_____
###Markdown
Categorical dataCategorical data can take one of a number of possible values. The different categories may be related to each other or be largely independent and unordered.Continuous variables can be converted to categorical variables by applying a threshold.
###Code
x = pd.DataFrame([[0,1,2,3,4,5,6],
[2,np.nan,7,4,9,1,3],
[0.1,0.12,0.11,0.15,0.16,0.11,0.14],
[100,120,np.nan,127,130,121,124],
['Green','Red','Blue','Blue','Green','Red','Green']], ).T
x.columns = ['A', 'B', 'C', 'D', 'E']
print(x)
x_cat = x.copy()
for val in x['E'].unique():
x_cat['E_{0}'.format(val)] = x_cat['E'] == val
x_cat
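# The same one-hot encoding can be produced in a single call with pandas:
# get_dummies adds one indicator column per category of 'E'.
pd.get_dummies(x, columns=['E'])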
###Output
_____no_output_____
###Markdown
Exercises1. Substitute missing values in `x` with the column mean and add an additional column to indicate when missing values have been substituted. The `isnull` method on the pandas dataframe may be useful.2. Convert `x` to the z-scaled values. The StandardScaler method in the preprocessing module can be used or the z-scaled values calculated directly.3. Convert `x['C']` into a categorical variable using a threshold of 0.125
###Code
x, x.isnull()
x['B_isnull'] = x['B'].isnull()
x
x_scaled = (x[['A', 'B', 'C', 'D', 'E']] - x[['A', 'B', 'C', 'D', 'E']].mean()) / \
    x[['A', 'B', 'C', 'D', 'E']].std()
x_scaled
x_scaled.mean(), x_scaled.std()
x['C_cat'] = x['C'] > 0.125
x
###Output
_____no_output_____
###Markdown
Image dataDepending on the type of task being performed there are a variety of steps we may want to take in working with images:* Histogram normalization* Windows and pyramids (for detection at different scales)* CenteringOccasionally the camera used to generate an image will use 10- to 14-bits while a 16-bit file format will be used. In this situation all the pixel intensities will be in the lower values. Rescaling to the full range (or to 0-1) can be useful.Further processing can be done to alter the histogram of the image.When looking for particular features in an image a sliding window can be used to check different locations. This can be combined with an image pyramid to detect features at different scales. This is often needed when objects can be at different distances from the camera.If objects are sparsely distributed in an image a faster approach than using sliding windows is to identify objects with a simple threshold and then test only the bounding boxes containing objects. Before running these through a model centering based on intensity can be a useful approach. Small offsets, rotations and skewing can be used to generate additional training data.
###Code
# http://scikit-image.org/docs/stable/auto_examples/color_exposure/plot_equalize.html#example-color-exposure-plot-equalize-py
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from skimage import data, img_as_float
from skimage import exposure
matplotlib.rcParams['font.size'] = 8
def plot_img_and_hist(img, axes, bins=256):
"""Plot an image along with its histogram and cumulative histogram.
"""
img = img_as_float(img)
ax_img, ax_hist = axes
ax_cdf = ax_hist.twinx()
# Display image
ax_img.imshow(img, cmap=plt.cm.gray)
ax_img.set_axis_off()
    ax_img.set_adjustable('box')  # 'box-forced' was removed in newer matplotlib
# Display histogram
ax_hist.hist(img.ravel(), bins=bins, histtype='step', color='black')
ax_hist.ticklabel_format(axis='y', style='scientific', scilimits=(0, 0))
ax_hist.set_xlabel('Pixel intensity')
ax_hist.set_xlim(0, 1)
ax_hist.set_yticks([])
# Display cumulative distribution
img_cdf, bins = exposure.cumulative_distribution(img, bins)
ax_cdf.plot(bins, img_cdf, 'r')
ax_cdf.set_yticks([])
return ax_img, ax_hist, ax_cdf
# Load an example image
img = data.moon()
# Contrast stretching
p2, p98 = np.percentile(img, (2, 98))
img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98))
# Equalization
img_eq = exposure.equalize_hist(img)
# Adaptive Equalization
img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)
# Display results
fig = plt.figure(figsize=(8, 5))
axes = np.zeros((2, 4), dtype=object)  # np.object is removed in newer NumPy
axes[0,0] = fig.add_subplot(2, 4, 1)
for i in range(1,4):
axes[0,i] = fig.add_subplot(2, 4, 1+i, sharex=axes[0,0], sharey=axes[0,0])
for i in range(0,4):
axes[1,i] = fig.add_subplot(2, 4, 5+i)
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img, axes[:, 0])
ax_img.set_title('Low contrast image')
y_min, y_max = ax_hist.get_ylim()
ax_hist.set_ylabel('Number of pixels')
ax_hist.set_yticks(np.linspace(0, y_max, 5))
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_rescale, axes[:, 1])
ax_img.set_title('Contrast stretching')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_eq, axes[:, 2])
ax_img.set_title('Histogram equalization')
ax_img, ax_hist, ax_cdf = plot_img_and_hist(img_adapteq, axes[:, 3])
ax_img.set_title('Adaptive equalization')
ax_cdf.set_ylabel('Fraction of total intensity')
ax_cdf.set_yticks(np.linspace(0, 1, 5))
# prevent overlap of y-axis labels
fig.tight_layout()
plt.show()
from sklearn.feature_extraction import image
img = data.page()
fig, ax = plt.subplots(1,1)
ax.imshow(img, cmap=plt.cm.gray)
ax.set_axis_off()
plt.show()
print(img.shape)
patches = image.extract_patches_2d(img, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
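# Sketch of an image pyramid (mentioned above for detection at different scales):
# pyramid_gaussian yields progressively downscaled copies of the page image.
from skimage.transform import pyramid_gaussian
for level, scaled in enumerate(pyramid_gaussian(img, downscale=2)):
    if scaled.shape[0] < 20:   # stop once the image is smaller than a 20x20 patch
        break
    print(level, scaled.shape)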
from sklearn import datasets
digits = datasets.load_digits()
#print(digits.DESCR)
fig, ax = plt.subplots(1,1, figsize=(1,1))
ax.imshow(digits.data[0].reshape((8,8)), cmap=plt.cm.gray, interpolation='nearest')
###Output
_____no_output_____
###Markdown
TextWhen working with text the simplest approach is known as bag of words. In this approach we simply count the number of instances of each word, and then adjust the values based on how commonly the word is used.The first task is to break a piece of text up into individual tokens. The number of occurrences of each word is then recorded. More rarely used words are likely to be more interesting and so word counts are scaled by the inverse document frequency.We can extend this to look at not just individual words but also bigrams and trigrams.
###Code
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train',
categories=['comp.graphics', 'sci.med'], shuffle=True, random_state=0)
print(twenty_train.target_names)
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data)
print(X_train_counts.shape)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
print(X_train_tfidf.shape, X_train_tfidf[:5,:15].toarray())
print(twenty_train.data[0])
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(twenty_train.data[0:1])
print(X_train_counts[0].toarray())
print(count_vect.vocabulary_.keys())
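# TfidfVectorizer (also in sklearn.feature_extraction.text) combines the two steps
# above - counting and tf-idf scaling - in a single transformer.
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vect = TfidfVectorizer()
X_train_tfidf_direct = tfidf_vect.fit_transform(twenty_train.data)
print(X_train_tfidf_direct.shape)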
###Output
[[1 1 1 1 1 1 1 1 1 1 1 1 1 3 1 1 2 1 2 3 4 3 2 3 1 5 1 3 1 1 2 2 2 1 3 1 1
1 1 2 2 1 2 1 1 1 1 1 1 3 1 1 1 1 1 1 1 1 1 1 1 1 1 2 1 2 1 5 1 3 3 1 1 1
1 2 1]]
dict_keys(['stefan', 'getting', 'is', 'best', 'ftp', 'mailgzrz', 'quit', 'wanna', 'now', 'subject', 'organization', 'how', '7900', 'up', 'further', '192', 'mget', 'from', 'any', 'please', 'have', 'graphics', 'question', 'the', 'me', 'here', 'site', '9ti', 'genoa', 'access', 'berlin', 'for', 'de', 'if', 'sequence', 'latest', 'ls', 'article', '109', 'hash', 'board', 'opened', 'password', 'email', '7000series', 'tuberlin', 'behse', 'to', 'mikro', 'tu', 'ee', 'software', 'posting', 'it', 'you', 'drivers', '1qpf1r', 'lines', 'hartmann', 'zrz', '42', 'host', 'cd', 'well', 'pub', '29', 'nntp', 'cards', 'this', 'hi', '11', 'binary', 'get', 'regards', 'login', 'prompt', 'harti'])
###Markdown
Exercises1. Choose one of the histogram processing methods and apply it to the page example.2. Take patches for the page example used above at different scales (10, 20 and 40 pixels). The resulting patches should be [rescaled](http://scikit-image.org/docs/stable/api/skimage.transform.htmlrescale) to have the same size.3. Change the vectorization approach to ignore very common words such as 'the' and 'a'. These are known as stop words. Reading the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.htmlsklearn.feature_extraction.text.CountVectorizer) should help.4. Change the vectorization approach to consider both single words and sequences of 2 words. Reading the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.htmlsklearn.feature_extraction.text.CountVectorizer) should help.
###Code
from sklearn.feature_extraction import image
img = data.page()
fig, ax = plt.subplots(1,1)
ax.imshow(img, cmap=plt.cm.gray)
ax.set_axis_off()
plt.show()
print(img.shape)
from skimage import exposure
# Adaptive Equalization
img_adapteq = exposure.equalize_adapthist(img, clip_limit=0.03)
plt.imshow(img_adapteq, cmap=plt.cm.gray)
plt.show()
patches = image.extract_patches_2d(img, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
from skimage.transform import rescale
im_small = rescale(img, 0.5)
patches = image.extract_patches_2d(im_small, (20, 20), max_patches=2, random_state=0)
patches.shape
plt.imshow(patches[0], cmap=plt.cm.gray)
plt.show()
count_vect = CountVectorizer(stop_words='english', ngram_range=(1,2))
X_train_counts = count_vect.fit_transform(twenty_train.data[0:1])
print(X_train_counts[0].toarray())
print(count_vect.vocabulary_.keys())
###Output
[[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 3 1 1 1 1 1 1 2 1 1 1 1 2
1 1 4 1 1 1 1 3 3 2 1 1 5 1 1 1 2 3 2 1 1 1 2 1 1 2 2 2 1 1 1 1 1 1 1 1 2
1 1 1 1 1 1 1 1 1 1 1 1 3 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2
1 1 1 1 2 2 1 1 3 3 1 1 1 1 1 1]]
dict_keys(['tuberlin zrz', '42 11', 'ftp 192', 'stefan', 'getting', 'binary prompt', 'board', 'ftp', 'mailgzrz', 'quit', 'wanna', '29 nntp', 'board cd', 'organization', 'software', '7900', 'ls binary', 'mailgzrz 1qpf1r', 'getting latest', 'mget', 'ftp cd', 'organization tuberlin', 'access', 'lines', '192', 'drivers genoa', 'drivers question', 'drivers ftp', 'ftp site', 'zrz lines', 'graphics', 'email best', 'graphics board', 'hartmann behse', 'question', 'best regards', 'berlin stefan', '192 109', 'host mikro', 'email', 'genoa', 'hi', 'behse subject', 'berlin', 'berlin hi', '9ti organization', '7900 board', 'latest software', 'tuberlin', 'latest', 'email harti', 'ee tu', 'ls', 'article', '109', 'prompt hash', 'hash', 'cards access', 'article mailgzrz', 'subject genoa', 'best', '109 42', 'ftp password', 'lines 29', 'zrz', 'cd 7000series', 'opened', 'password', 'nntp posting', '7000series', 'wanna latest', 'access ftp', 'regards stefan', 'host', 'site article', 'board drivers', 'quit sequence', 'subject', 'tu', 'ee', 'password ftp', 'tu berlin', 'latest drivers', 'opened ftp', 'posting', 'question email', 'drivers', '1qpf1r', 'sequence', 'hartmann', 'graphics cards', 'login ftp', 'harti mikro', 'mget quit', '42', 'behse', 'drivers 7900', 'genoa graphics', 'cd', 'hi opened', 'hash wanna', 'site getting', 'pub', '29', '1qpf1r 9ti', 'nntp', '7000series mget', 'cards', 'genoa ls', '9ti', 'hartmann email', '11 login', 'site', 'sequence drivers', 'mikro ee', '11', 'mikro', 'stefan hartmann', 'pub genoa', 'regards', 'login', 'software drivers', 'prompt', 'harti', 'posting host', 'binary', 'cd pub'])
|
Pandas-tutorials/pandas-110-exercises.ipynb | ###Markdown
Task 1:Check Pandas Version
###Code
# Imports assumed for this notebook (later tasks use the aliases pd, pandas, np, numpy, rn and functools)
import numpy
import numpy as np
import pandas
import pandas as pd
import random as rn
import functools
print('Task 1:')
print(pd.__version__)
###Output
_____no_output_____
###Markdown
Task 2:Create Numpy ArrayCreate three columns with Zero values
###Code
print('Task 2:')
dtype = [('Col1','int32'), ('Col2','float32'), ('Col3','float32')]
values = numpy.zeros(20, dtype=dtype)
index = ['Row'+str(i) for i in range(1, len(values)+1)]
df = pandas.DataFrame(values, index=index)
print(df)
print('--------')
df = pandas.DataFrame(values)
print(df)
###Output
_____no_output_____
###Markdown
Task 3:iLoc in PandasPrint first five rows
###Code
print('Task 3:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/data1.csv', sep=';', header=None)
print(df.iloc[:5])  # first five rows, as the task asks
###Output
_____no_output_____
###Markdown
Task 4:Create Random integer between 2 to 10 with 4 items
###Code
print('Task 4:')
values = np.random.randint(2, 10, size=4)
print(values)
###Output
_____no_output_____
###Markdown
Task 5:Create Random integer between 0 to 100
###Code
print('Task 5:')
df = pd.DataFrame(np.random.randint(0, 100, size=(3, 2)), columns=list('xy'))
print(df)
###Output
_____no_output_____
###Markdown
Task 6:Create Random integer between 2 to 10 with 4 columns
###Code
print('Task 6:')
df = pd.DataFrame(np.random.randint(0, 100, size=(2,4)), columns = ['A', 'B', 'C', 'D'])
print(df)
###Output
_____no_output_____
###Markdown
Task 7:2D array with random between 0 and 5
###Code
print('Task 7:')
values = np.random.randint(5, size=(2,4))
print(values)
print(type(values))
###Output
_____no_output_____
###Markdown
Task 8:Create Random integer between 0 to 100 with 10 itmes (2 rows, 5 columns)
###Code
print('Task 8:')
df = pd.DataFrame(np.random.randint(0, 100, size=(3, 5)), columns=['Toronto', 'Ottawa', 'Calgary', 'Montreal', 'Quebec'])
print(df)
###Output
_____no_output_____
###Markdown
Task 9:3 rows, 2 columns in pandas1st column = random between 10 to 202nd column = random between 80 and 903rd column = random between 40 and 50
###Code
print('Task 9:')
dtype = [('one', 'int32'), ('two', 'int32')]
values = np.zeros(3, dtype=dtype)
index = ['Row'+str(i) for i in range(1, 4)]
df = pandas.DataFrame(values, index=index)
print(df)
###Output
_____no_output_____
###Markdown
Task 10:Fill Random Science and Math Marks
###Code
print('Task 10:')
dtype = [('Science','int32'), ('Maths','int32')]
values = np.zeros(3, dtype=dtype)
df = pandas.DataFrame(values, index=index)
print(df)
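# The task title says "Fill Random ... Marks" while the frame above is filled with zeros;
# a sketch of the random version (marks between 0 and 100) would be:
df_random = pandas.DataFrame(np.random.randint(0, 100, size=(3, 2)),
                             columns=['Science', 'Maths'], index=index)
print(df_random)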
###Output
_____no_output_____
###Markdown
Task 11:CSV to DatRaframe (from_csv)Note: from_csv is Deprecated since version 0.21.0: Use pandas.read_csv() instead
###Code
print('Task 11:')
csv = pd.read_csv('/kaggle/input/datasets-for-pandas/uk-500.csv')
print(csv.head())
###Output
_____no_output_____
###Markdown
Task 12:CSV to Dataframe (from_csv
###Code
print('Task 12:')
#df = df.from_csv(path, header, sep, index_col, parse_dates, encoding, tupleize_cols, infer_datetime_format)
df = pd.read_csv('../input/datasets-for-pandas/uk-500.csv')
print(df.head())
###Output
_____no_output_____
###Markdown
Task 13:First 4 rows and 2 columns of CSV
###Code
print('Task 13:')
df = pandas.read_csv('../input/datasets-for-pandas/data1.csv', sep=',')
print(df.shape)
print(df.iloc[:4, :2])  # first 4 rows and 2 columns
###Output
_____no_output_____
###Markdown
Task 14:Show even rows and first three columns
###Code
print('Task 14:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc.csv', sep=',', encoding = "utf-8")
print(df.shape)
print(df.iloc[::2, 0:3])
###Output
_____no_output_____
###Markdown
Task 15:New columns as sum of all
###Code
print('Task 15:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc.csv', sep=',', encoding = "utf-8")
print(df.shape)
print(df)
df['total'] = df.sum(axis=1)
print(df)
###Output
_____no_output_____
###Markdown
Task 16:Delete Rows of one column where the value is less than 50
###Code
print('Task 16:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc.csv', sep=',', encoding = "utf-8")
print(df.shape)
print(df)
print('åååååååååå')
df = df[df.science > 50]
print(df)
###Output
_____no_output_____
###Markdown
Task 17:Delete with Query Note: query() cannot reference a column name that contains a space directly (newer pandas versions allow wrapping such names in backticks)
###Code
print('Task 17:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc.csv', sep=',', encoding = "utf-8")
print(df.shape)
print(df)
df = df.query('science > 45')
print(df)
###Output
_____no_output_____
###Markdown
Task 18:Skip single row
###Code
print('Task 18:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc.csv', sep=',', encoding = "utf-8", skiprows=[5])
print(df.shape)
print(df)
###Output
_____no_output_____
###Markdown
Task 19:Skip multiple rows
###Code
print('Task 19:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc.csv', sep=',', encoding = "utf-8", skiprows=[1, 5, 7])
print(df.shape)
#print(df)
#df = df[df[[1]] > 45]
print(df)
###Output
_____no_output_____
###Markdown
Task 20:Select Column by Index Note: df[[1]] doesn't work in updated Pandas versions (need to double check)
###Code
print('Task 20:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc.csv', sep=',', encoding = "utf-8")
print(df.shape)
print(df)
###Output
_____no_output_____
###Markdown
Task 21:Skip rowsNote:df[[1]] doesn't work in Pandas updated version (need to double check)
###Code
print('Task 21:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc.csv', sep=',', encoding = "utf-8", skiprows=[0])
print(df.shape)
print(df)
#df = df[int(df.columns[2]) > 45]
#print(df)
print(df.columns[2])
###Output
_____no_output_____
###Markdown
Task 22:String to DataframeNote:df[[1]] doesn't work in Pandas updated version (need to double check)
###Code
print('Task 22:')
from io import StringIO
s = """
1, 2
3, 4
5, 6
"""
df = pd.read_csv(StringIO(s), header=None)
print(df.shape)
print(df)
###Output
_____no_output_____
###Markdown
Task 23:New columns as max of other columnsfloat to int used
###Code
print('Task 23:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc.csv', sep=',', encoding = "utf-8")
print(df.shape)
df['sum'] = df.sum(axis=1)
df['max'] = df.max(axis=1)
df['average'] = df.mean(axis=1).astype(int)
df['min'] = df.min(axis=1)
print(df)
###Output
_____no_output_____
###Markdown
Task 24:New columns as max of other columnsfloat to int usedMath is considered more, so double the marks for maths
###Code
def apply_math_special(row):
return (row.maths * 2 + row.language / 2 + row.history / 3 + row.science) / 4
print('Task 24:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc.csv', sep=',', encoding = "utf-8")
print(df.shape)
df['sum'] = df.sum(axis=1)
df['max'] = df.max(axis=1)
df['min'] = df.min(axis=1)
df['average'] = df.mean(axis=1).astype(int)
df['math_special'] = df.apply(apply_math_special, axis=1).astype(int)
print(df)
###Output
_____no_output_____
###Markdown
Task 25:New columns as max of other columns35 marks considered as passIf the student fails in math, consider failIf the student passes in language and science, consider as pass
###Code
def pass_one_subject(row):
if(row.maths > 34):
return 'Pass'
if(row.language > 34 and row.science > 34):
return 'Pass'
return 'Fail'
print('Task 25:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc.csv', sep=',', encoding = "utf-8")
print(df.shape)
df['pass_one'] = df.apply(pass_one_subject, axis=1)
print(df)
###Output
_____no_output_____
###Markdown
Task 26:Fill with average
###Code
print('Task 26:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/abc2.csv', sep=',', encoding = "utf-8")
print(df.shape)
print(df)
df.fillna(df.mean(), inplace=True)
print(df)
###Output
_____no_output_____
###Markdown
Task 27:New columns as sum of all
###Code
print('Task 27:')
df = pd.DataFrame(np.random.rand(10, 5))
df.iloc[0:3, 0:4] = np.nan # throw in some na values
print(df)
df.loc[:, 'test'] = df.iloc[:, 2:].sum(axis=1)
print(df)
###Output
_____no_output_____
###Markdown
Task 28:Unicode issue and fix
###Code
print('Task 28:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/score.csv', sep=',', encoding = "ISO-8859-1")
print(df.shape)
###Output
_____no_output_____
###Markdown
Task 29:Fill with average
###Code
print('Task 29:')
df = pd.DataFrame(np.random.rand(3,4), columns=list("ABCD"))
print(df.shape)
print(df)
df.fillna(df.mean(), inplace=True)
print(df)
###Output
_____no_output_____
###Markdown
Task 30:Last 4 rows
###Code
print('Task 30:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/data1.csv', sep=';')
print(df[-4:])
###Output
_____no_output_____
###Markdown
Task 31:Expanding Apply
###Code
print('Task 31:')
series1 = pd.Series([i / 100.0 for i in range (1,6)])
print(series1)
def CumRet(x,y):
return x*(1 + y)
def Red(x):
return functools.reduce(CumRet, x, 1.0)
s2 = series1.expanding().apply(Red)
print(s2)
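# Equivalent shortcut: the expanding reduce above is just the cumulative
# product of (1 + x), which pandas provides directly.
print((1 + series1).cumprod())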
###Output
_____no_output_____
###Markdown
Task 32:Get 3 and 4th row
###Code
print('Task 32:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/data1.csv', sep=';')
print(df[2:4])
###Output
_____no_output_____
###Markdown
Task 33:Last 4th to 1st
###Code
print('Task 33:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/data1.csv', sep=';')
print(df[-4:-1])
###Output
_____no_output_____
###Markdown
Task 34:iloc position slice
###Code
print('Task 34:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/data1.csv', sep=';')
print(df.iloc[1:9])
###Output
_____no_output_____
###Markdown
Task 35:Loc - iloc - ix - at - iat
###Code
print('Task 35:')
df = pandas.read_csv('/kaggle/input/datasets-for-pandas/data1.csv', sep=';')
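# Minimal illustrations of the accessors named in the task title
# (column names of this CSV are unknown here, so positions are used):
print(df.iloc[0])                # first row by position
print(df.loc[0])                 # first row by index label (default RangeIndex)
print(df.iat[0, 0])              # single value by position
print(df.at[0, df.columns[0]])   # single value by label
# .ix was removed in recent pandas versions; use .loc / .iloc instead.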
###Output
_____no_output_____
###Markdown
Task 36:Random data
###Code
print('Task 36:')
def xrange(x):
return iter(range(x))
rnd_1 = [ rn.randrange ( 1 , 20 ) for x in xrange ( 1000 )]
rnd_2 = [ rn.randrange ( 1 , 20 ) for x in xrange ( 1000 )]
rnd_3 = [ rn.randrange ( 1 , 20 ) for x in xrange ( 1000 )]
date = pd . date_range ( '2012-4-10' , '2015-1-4' )
print(len(date))
data = pd . DataFrame ({ 'date' : date , 'rnd_1' : rnd_1 , 'rnd_2' : rnd_2 , 'rnd_3' : rnd_3 })
data.head()
###Output
_____no_output_____
###Markdown
Task 37:Filter with the value comparison
###Code
print('Task 37:')
below_20 = data[data['rnd_1'] < 20]
print(below_20)
###Output
_____no_output_____
###Markdown
Task 38:Filter between 5 and 10 on col 1
###Code
print('Task 38:')
def xrange(x):
return iter(range(x))
rnd_1 = [ rn.randrange ( 1 , 20 ) for x in xrange ( 1000 )]
rnd_2 = [ rn.randrange ( 1 , 20 ) for x in xrange ( 1000 )]
rnd_3 = [ rn.randrange ( 1 , 20 ) for x in xrange ( 1000 )]
date = pd . date_range ( '2012-4-10' , '2015-1-4' )
print(len(date))
data = pd . DataFrame ({ 'date' : date , 'rnd_1' : rnd_1 , 'rnd_2' : rnd_2 , 'rnd_3' : rnd_3 })
below_20 = data[data['rnd_1'] < 20]
ten_to_20 = data[(data['rnd_1'] >= 5) & (data['rnd_1'] < 10)]
print(ten_to_20)
###Output
_____no_output_____
###Markdown
Task 39:Filter between 15 to 20
###Code
print('Task 39:')
date = pd . date_range ( '2018-08-01' , '2018-08-15' )
date_count = len(date)
def fill_rand(start, end, count):
return [rn.randrange(1, 20 ) for x in xrange( count )]
rnd_1 = fill_rand(1, 20, date_count)
rnd_2 = fill_rand(1, 20, date_count)
rnd_3 = fill_rand(1, 20, date_count)
#print(len(date))
data = pd . DataFrame ({ 'date' : date , 'rnd_1' : rnd_1 , 'rnd_2' : rnd_2 , 'rnd_3' : rnd_3 })
#print(len(date))
ten_to_20 = data[(data['rnd_1'] >= 15) & (data['rnd_1'] < 20)]
print(ten_to_20)
###Output
_____no_output_____
###Markdown
Task 40:15 to 33
###Code
print('Task 40:')
date = pd . date_range ( '2018-08-01' , '2018-08-15' )
date_count = len(date)
def fill_rand(start, end, count):
return [rn.randrange(1, 20 ) for x in xrange( count )]
rnd_1 = fill_rand(1, 20, date_count)
rnd_2 = fill_rand(1, 20, date_count)
rnd_3 = fill_rand(1, 20, date_count)
data = pd . DataFrame ({ 'date' : date , 'rnd_1' : rnd_1 , 'rnd_2' : rnd_2 , 'rnd_3' : rnd_3 })
ten_to_20 = data[(data['rnd_1'] >= 15) & (data['rnd_1'] < 33)]
print(ten_to_20)
###Output
_____no_output_____
###Markdown
Task 41:Custom method and xrnage on dataframe
###Code
print('Task 41:')
date = pd . date_range ( '2018-08-01' , '2018-08-15' )
date_count = len(date)
def xrange(x):
return iter(range(x))
def fill_rand(start, end, count):
return [rn.randrange(1, 20 ) for x in xrange( count )]
rnd_1 = fill_rand(1, 20, date_count)
rnd_2 = fill_rand(1, 20, date_count)
rnd_3 = fill_rand(1, 20, date_count)
data = pd . DataFrame ({ 'date' : date , 'rnd_1' : rnd_1 , 'rnd_2' : rnd_2 , 'rnd_3' : rnd_3 })
filter_loc = data.loc[ 2 : 4 , [ 'rnd_2' , 'date' ]]
print(filter_loc)
###Output
_____no_output_____
###Markdown
Task 42:Set index with date column
###Code
print('Task 42:')
date_date = data.set_index('date')
print(date_date.head())
###Output
_____no_output_____
###Markdown
Task 43:Change columns based on other columns
###Code
print('Task 43:')
df = pd.DataFrame({
'a' : [1,2,3,4],
'b' : [9,8,7,6],
'c' : [11,12,13,14]
})
print(df)
print('changing on one column')
# change columns
df.loc[df.a >= 2, 'b'] = 9
print(df)
###Output
_____no_output_____
###Markdown
Task 44:Change multiple columns based on one column values
###Code
print('Task 44:')
print('changing on multiple columns')
df.loc[df.a>=1, ['b','c']] = 45
print(df)
###Output
_____no_output_____
###Markdown
Task 45:Pandas Mask
###Code
print('Task 45:')
print(df)
df_mask = pd.DataFrame({
'a' : [True]*4,
'b' : [False] *4,
'c' : [True, False]*2
})
print(df.where(df_mask, -1000))
###Output
_____no_output_____
###Markdown
Task 46:Check high or low comparing the column against 5
###Code
print('Task 46:')
print(df)
df['logic'] = np.where(df['a'] >= 3, 'high', 'low')
print(df)
###Output
_____no_output_____
###Markdown
Task 47:Student marks (Pass or Fail)
###Code
print('Task 47:')
marks_df = pd.DataFrame({
'Language' : [60, 45, 78, 4],
'Math' : [90, 80, 23, 60],
'Science' : [45, 90, 95, 20]
});
print(marks_df)
marks_df['language_grade'] = np.where(marks_df['Language'] >= 50, 'Pass', 'Fail')
marks_df['math_grade'] = np.where(marks_df['Math'] >= 50, 'Pass', 'Fail')
marks_df['science_grade'] = np.where(marks_df['Science'] >= 50, 'Pass', 'Fail')
print(marks_df)
###Output
_____no_output_____
###Markdown
Task 48:Get passed grades
###Code
print('Task 48:')
marks_df = pd.DataFrame({
'Language' : [60, 45, 78, 4],
'Math' : [90, 80, 23, 60],
'Science' : [45, 90, 95, 20]
});
print(marks_df)
marks_df_passed_in_language = marks_df[marks_df.Language >=50 ]
print(marks_df_passed_in_language)
###Output
_____no_output_____
###Markdown
Task 49:Students passed in Language and Math
###Code
print('Task 49:')
marks_df_passed_in_lang_math = marks_df[(marks_df.Language >=50) & (marks_df.Math >= 50)]
print(marks_df_passed_in_lang_math)
###Output
_____no_output_____
###Markdown
Task 50:Students passed in Language and Science
###Code
print('Task 50:')
marks_df_passed_in_lang_sc = marks_df.loc[(marks_df.Language >=50) & (marks_df.Science >= 50)]
print(marks_df_passed_in_lang_sc)
###Output
_____no_output_____
###Markdown
Task 51:loc with label-oriented slicing. Possible error: pandas.errors.UnsortedIndexError
###Code
print('Task 51:')
stars = {
'age' : [31, 23, 65, 50],
'movies' : [51, 23, 87, 200],
'awards' : [42, 12, 4, 78]
}
star_names = ['dhanush', 'simbu', 'kamal', 'vikram']
stars_df = pd.DataFrame(data=stars, index=[star_names])
print(stars_df)
###Output
_____no_output_____
###Markdown
Task 52:iloc with positional slicing
###Code
print('Task 52:')
print(stars_df.iloc[1:3])
###Output
_____no_output_____
###Markdown
Task 53:Label between numbers
###Code
print('Task 53:')
numbers = pd.DataFrame({
'one' : [10, 50, 80, 40],
'two' : [2, 6, 56, 45]
},
index = [12, 14, 16, 18])
print(numbers)
print('label between 12 and 16')
print(numbers.loc[12:16])
print('index between 1 and 3')
print(numbers.iloc[1:3])
###Output
_____no_output_____
###Markdown
Task 54:Stars with names
###Code
print('Task 54:')
stars = {
'age' : [31, 23, 65, 50],
'movies' : [51, 23, 87, 200],
'awards' : [42, 12, 4, 78]
}
star_names = ['dhanush', 'simbu', 'kamal', 'vikram']
stars_df = pd.DataFrame(data=stars, index=[star_names])
numbers = pd.DataFrame({
'one' : [10, 50, 80, 40],
'two' : [2, 6, 56, 45]
},
index = [12, 14, 16, 18])
print(numbers)
###Output
_____no_output_____
###Markdown
Task 55:Row label selection: age above 25 and movies above 25
###Code
print('Task 55:')
age_movies_25 = stars_df[(stars_df.movies > 25) &(stars_df.age > 25)]
print(age_movies_25)
###Output
_____no_output_____
###Markdown
Task 56:Stars in certain ages (isin)
###Code
print('Task 56:')
custom_stars = stars_df[stars_df.age.isin([31, 65])]
print(custom_stars)
###Output
_____no_output_____
###Markdown
Task 57:Inverse operator ~: exclude rows where 'one' > 45 and 'two' < 50
###Code
print('Task 57:')
print(numbers)
print(numbers[~( (numbers.one > 45) &(numbers.two < 50) )])
###Output
_____no_output_____
###Markdown
Task 58:Apply custom function
###Code
print('Task 58:')
def GrowUp(x):
    avg_weight = sum(x[x['size'] == 'S'].weight * 1.5)
avg_weight += sum(x[x['size'] == 'M'].weight * 1.25)
avg_weight += sum(x[x['size'] == 'L'].weight)
avg_weight /= len(x)
return pd.Series(['L',avg_weight,True], index=['size', 'weight', 'adult'])
animals_df = pd.DataFrame({'animal': 'cat dog cat fish dog cat cat'.split(),
'size': list('SSMMMLL'),
'weight': [8, 10, 11, 1, 20, 12, 12],
'adult' : [False] * 5 + [True] * 2})
gb = animals_df.groupby(['animal'])
expected_df = gb.apply(GrowUp)
print(expected_df)
###Output
_____no_output_____
###Markdown
Task 59:Group by single column
###Code
print('Task 59:')
weights = animals_df.groupby(['weight']).get_group(20)
print(weights)
###Output
_____no_output_____
###Markdown
Task 60:Creating new columns using applymap (sides example)
###Code
print('Task 60:')
sides_df = pd.DataFrame({
'a' : [1, 1, 2, 4],
'b' : [2, 1, 3, 4]
})
print(sides_df)
source_cols = sides_df.columns
print(source_cols)
new_cols = [str(x)+"_side" for x in source_cols]
side_category ={
1 : 'North',
2 : 'East',
3 : 'South',
4 : 'West'
}
sides_df[new_cols] = sides_df[source_cols].applymap(side_category.get)
print(sides_df)
###Output
_____no_output_____
###Markdown
Task 61:Replacing some values with mean of the rest of a group
###Code
print('Task 61:')
df = pd.DataFrame({'A' : [1, 1, 2, 2], 'B' : [1, -1, 1, 2]})
print(df)
gb = df.groupby('A')
def replace(g):
mask = g < 0
g.loc[mask] = g[~mask].mean()
return g
gbt = gb.transform(replace)
print(gbt)
###Output
_____no_output_____
###Markdown
Task 62:Students passed in Language or Science (any one subject)
###Code
print('Task 62:')
marks_df = pd.DataFrame({
'Language' : [60, 45, 78, 4],
'Math' : [90, 80, 23, 60],
'Science' : [45, 90, 95, 20]
});
print(marks_df)
marks_df_passed_in_lang_or_sc = marks_df.loc[(marks_df.Language >= 50) | (marks_df.Science >= 50)]
print(marks_df_passed_in_lang_or_sc)
###Output
_____no_output_____
###Markdown
Task 63:Set a Pass/Fail column with loc. Possible error: TypeError: 'Series' objects are mutable, thus they cannot be hashed
###Code
print('Task 63:')
marks_df['passed_one_subject'] = 'Fail'
marks_df.loc[(marks_df.Language >= 50) , 'passed_one_subject'] = 'Pass'
print(marks_df)
###Output
_____no_output_____
###Markdown
Task 64:argsort: select rows with data closest to a certain value using argsort
###Code
print('Task 64:')
df = pd.DataFrame({
"a": np.random.randint(0, 100, size=(5,)),
"b": np.random.randint(0, 70, size=(5,))
})
print(df)
par = 65
print('with argsort')
df1 = df.loc[(df.a-par).abs().argsort()]
print(df1)
print(df.loc[(df.b-2).abs().argsort()])
###Output
_____no_output_____
###Markdown
Task 65:argsort with stars: old stars (age near 50)
###Code
print('Task 65:')
stars = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"movies": [2, 3, 90, 45, 34, 2]
})
print(stars)
print(stars.loc[(stars.age - 50).abs().argsort()])
###Output
_____no_output_____
###Markdown
Task 66:argsort with actors: young stars (age near 17)
###Code
print('Task 66:')
print(stars.loc[(stars.age - 17).abs().argsort()])
###Output
_____no_output_____
###Markdown
Task 67:Binary operators: combine multiple boolean filters (young stars who acted in more movies)
###Code
print('Task 67:')
stars = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"movies": [22, 33, 90, 75, 34, 2]
})
print(stars)
print('Young and more movies acted')
young = stars.age < 30
more_movies = stars.movies > 30
young_more = [young, more_movies]
young_more_Criteria = functools.reduce(lambda x, y: x & y, young_more)
print(stars[young_more_Criteria])
###Output
_____no_output_____
###Markdown
Task 68:Young, Higher Salary, and Higher Position
###Code
print('Task 68:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8]
})
print(employees)
print('Young, Higher Salary, and Higher Position')
young = employees.age < 30
high_salary = employees.salary > 60
high_position = employees.grade > 6
young_salary_position = [young, high_salary, high_position]
young_salary_position_Criteria = functools.reduce(lambda x, y : x & y, young_salary_position)
print(employees[young_salary_position_Criteria])
###Output
_____no_output_____
###Markdown
Task 69:Rename columns
###Code
print('Task 69:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8]
})
print(employees)
employees.rename(columns={'age': 'User Age', 'salary': 'Salary 2018'}, inplace=True)
print(employees)
###Output
_____no_output_____
###Markdown
Task 70:Add a new column
###Code
print('Task 70:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8]
})
print(employees)
employees['group'] = pd.Series(np.random.randn(len(employees)))
print(employees)
###Output
_____no_output_____
###Markdown
Task 71:Drop a column
###Code
print('Task 71:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8]
})
print(employees)
employees['group'] = pd.Series(np.random.randn(len(employees)))
print(employees)
employees.drop(employees.columns[[0]], axis=1, inplace = True)
print(employees)
###Output
_____no_output_____
###Markdown
Task 72:Drop multiple columns
###Code
print('Task 72:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8]
})
print(employees)
employees['group'] = pd.Series(np.random.randn(len(employees)))
print(employees)
employees.drop(employees.columns[[1, 2]], axis=1, inplace = True)
print(employees)
###Output
_____no_output_____
###Markdown
Task 73:Drop first and last column
###Code
print('Task 73:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8],
"group" : [1, 1, 2, 2, 2, 1]
})
print(employees)
employees.drop(employees.columns[[0, len(employees.columns)-1]], axis=1, inplace = True)
print(employees)
###Output
_____no_output_____
###Markdown
Task 74:Delete by pop function
###Code
print('Task 74:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8],
"group" : [1, 1, 2, 2, 2, 1]
})
print(employees)
group = employees.pop('group')
print(employees)
print(group)
###Output
_____no_output_____
###Markdown
Task 75:DataFrame.from_items
###Code
print('Task 75:')
# DataFrame.from_items was removed in newer pandas; build the same frame with from_dict
df = pd.DataFrame.from_dict(
    dict([('A', [1, 2, 3]), ('B', [4, 5, 6]), ('C', [7, 8, 9])]),
    orient='index', columns=['one', 'two', 'three'])
print(df)
###Output
_____no_output_____
###Markdown
Task 76:Pandas to list
###Code
print('Task 76:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8],
"group" : [1, 1, 2, 2, 2, 1]
})
print(employees)
employees_list1 = list(employees.columns.values)
employees_list2 = employees.values.tolist()
print(employees_list1)
print(employees_list2)
###Output
_____no_output_____
###Markdown
Task 77:Pandas rows to list
###Code
print('Task 77:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8],
"group" : [1, 1, 2, 2, 2, 1]
})
print(employees)
employees_list2 = employees.values.tolist()
print(employees_list2)
print(type(employees_list2))
print(len(employees_list2))
###Output
_____no_output_____
###Markdown
Task 78:Pandas rows to array. Note: as_matrix is deprecated; use .values instead
###Code
print('Task 78:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8],
"group" : [1, 1, 2, 2, 2, 1]
})
print(employees)
employees_list2 = employees.values
print(employees_list2)
print(type(employees_list2))
print(employees_list2.shape)
###Output
_____no_output_____
###Markdown
Task 79:Pandas rows to map
###Code
print('Task 79:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8],
"group" : [1, 1, 2, 2, 2, 1]
})
print(employees)
employees_list2 = map(list, employees.values)
print(employees_list2)
print(type(employees_list2))
###Output
_____no_output_____
###Markdown
Task 80:Pandas rows to map
###Code
print('Task 80:')
employees = pd.DataFrame({
"age": [17, 50, 24, 45, 65, 18],
"salary": [75, 33, 90, 175, 134, 78],
"grade" : [7, 8, 9, 2, 7, 8],
"group" : [1, 1, 2, 2, 2, 1]
})
print(employees)
employees_list2 = list(map(list, employees.values))
print(employees_list2)
print(type(employees_list2))
###Output
_____no_output_____
###Markdown
Task 81:Drop duplicates
###Code
print('Task 81:')
users = pd.DataFrame({
"id": [1, 1, 2, 2, 3, 3],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa'],
"count" : [7, 8, 9, 2, 7, 8]
})
print(users)
users.drop_duplicates('city', inplace=True, keep='last')
print(users)
###Output
_____no_output_____
###Markdown
Task 82:Selecting multiple columns
###Code
print('Task 82:')
users = pd.DataFrame({
"id": [1, 1, 2, 2, 3, 3],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa'],
"count" : [7, 8, 9, 2, 7, 8]
})
print(users)
users1 = users[['id', 'city']]
print(users1)
###Output
_____no_output_____
###Markdown
Task 83:Selecting multiple columns
###Code
print('Task 83:')
users = pd.DataFrame({
"id": [1, 1, 2, 2, 3, 3],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa'],
"count" : [7, 8, 9, 2, 7, 8]
})
print(users)
columns = ['id', 'count']
users1 = pd.DataFrame(users, columns=columns)
print(users1)
###Output
_____no_output_____
###Markdown
Task 84:Row and Column Slicing
###Code
print('Task 84:')
users = pd.DataFrame({
"id": [1, 1, 2, 2, 3, 3],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa'],
"count" : [7, 8, 9, 2, 7, 8]
})
print(users)
users1 = users.iloc[0:2, 1:3]
print(users1)
###Output
_____no_output_____
###Markdown
Task 85:Iterating rows
###Code
print('Task 85:')
users = pd.DataFrame({
"id": [1, 1, 2, 2, 3, 3],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa'],
"count" : [7, 8, 9, 2, 7, 8]
})
print(users)
for index, row in users.iterrows():
print(row['city'], "==>", row['count'])
###Output
_____no_output_____
###Markdown
Task 86:Iterating tuples
###Code
print('Task 86:')
users = pd.DataFrame({
"id": [1, 1, 2, 2, 3, 3],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa'],
"count" : [7, 8, 9, 2, 7, 8]
})
print(users)
for row in users.itertuples(index=True, name='Pandas'):
print(getattr(row, 'city'))
for row in users.itertuples(index=True, name='pandas'):
print(row.count)
###Output
_____no_output_____
###Markdown
Task 87:Iterating rows and columns
###Code
print('Task 87:')
users = pd.DataFrame({
"id": [1, 1, 2, 2, 3, 3],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa'],
"count" : [7, 8, 9, 2, 7, 8]
})
print(users)
for i, row in users.iterrows():
for j, col in row.iteritems():
print(col)
###Output
_____no_output_____
###Markdown
Task 88:List of Dictionary to Dataframe
###Code
print('Task 88:')
pointlist = [
{'points': 50, 'time': '5:00', 'year': 2010},
{'points': 25, 'time': '6:00', 'month': "february"},
{'points':90, 'time': '9:00', 'month': 'january'},
{'points_h1':20, 'month': 'june'}
]
print(pointlist)
pointDf = pd.DataFrame(pointlist)
print(pointDf)
pointDf1 = pd.DataFrame.from_dict(pointlist)
print(pointDf1)
###Output
_____no_output_____
###Markdown
Task 89:NaN values
###Code
print('Task 89:')
df = pd.DataFrame(np.random.randn(10,6))
# make a few areas have nan values
df.iloc[1:3, 1] = np.nan
df.iloc[5,3] = np.nan
df.iloc[7:9,5] = np.nan
print(df)
df1 = df.isnull()
print(df1)
###Output
_____no_output_____
###Markdown
Task 90:Sum of all nan
###Code
print('Task 90:')
df = pd.DataFrame(np.random.randn(10,6))
# Make a few areas have NaN values
df.iloc[1:3,1] = np.nan
df.iloc[5,3] = np.nan
df.iloc[7:9,5] = np.nan
print(df)
print('wwwwwwwww')
print(df.isnull().sum())
print('wwwwwwwww')
print(df.isnull().sum(axis=1))
print('wwwwwwwww')
print(df.isnull().sum().tolist())
###Output
_____no_output_____
###Markdown
Task 91:Sum of all nan rowwise
###Code
print('Task 91:')
df = pd.DataFrame(np.random.randn(10,6))
# Make a few areas have NaN values
df.iloc[1:3,1] = np.nan
df.iloc[5,3] = np.nan
df.iloc[7:9,5] = np.nan
print(df)
print(df.isnull().sum(axis=1))
# sum(axis=0) aggregates over rows (one value per column); sum(axis=1) aggregates over columns (one value per row)
###Output
_____no_output_____
###Markdown
Task 92:Sum of all nan as list
###Code
print('Task 92:')
df = pd.DataFrame(np.random.randn(10,6))
# Make a few areas have NaN values
df.iloc[1:3,1] = np.nan
df.iloc[5,3] = np.nan
df.iloc[7:9,5] = np.nan
print(df)
print(df.isnull().sum().tolist())
###Output
_____no_output_____
###Markdown
Task 93:Change the order of columns Note: FutureWarning: '.reindex_axis' is deprecated and will be removed in a future version
###Code
print('Task 93:')
users = pd.DataFrame({
"id": [1, 1, 2, 2, 3, 3],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa'],
"count" : [7, 8, 9, 2, 7, 8]
})
print(users)
users2 = users.reindex(columns=['city', 'id', 'count'])
print(users2)
###Output
_____no_output_____
###Markdown
Task 94:Drop multiple rows
###Code
print('Task 94:')
numbers = pd.DataFrame({
"id": [1, 2, 3, 4, 5, 6],
"number": [10, 20, 30, 30, 23, 12]
})
print(numbers)
numbers.drop(numbers.index[[0, 3, 5]], inplace=True)
print(numbers)
###Output
_____no_output_____
###Markdown
Task 95:Drop multiple rows by row name
###Code
print('Task 95:')
numbers = pd.DataFrame({
"id": [1, 2, 3, 4, 5, 6],
"number": [10, 20, 30, 30, 23, 12]
}, index=['one', 'two', 'three', 'four', 'five', 'six'])
print(numbers)
numbers1 = numbers.drop(['two', 'six'])
print(numbers1)
numbers2 = numbers.drop('two')
print(numbers2)
###Output
_____no_output_____
###Markdown
Task 96:Get group
###Code
print('Task 96:')
cats = animals_df.groupby(['animal']).get_group('cat')
print(cats)
###Output
_____no_output_____
###Markdown
Task 97:Get the odd rows (every other row)
###Code
print('Task 97:')
x = numpy.array([
[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20]
]
)
print(x)
print(x[::2])
###Output
_____no_output_____
###Markdown
Task 98:Get the even columns
###Code
print('Task 98:')
x = numpy.array([
[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20]
]
)
print(x)
print(x[:, 1::2])
###Output
_____no_output_____
###Markdown
Task 99:Odd rows and even columns
###Code
print('Task 99:')
x = numpy.array([
[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20]
]
)
print(x)
print(x[::2, 1::2])
###Output
_____no_output_____
###Markdown
Task 100:Drop duplicates
###Code
print('Task 100:')
users = pd.DataFrame({
"id": [1, 1, 2, 2, 3, 3],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa'],
"count" : [7, 8, 9, 2, 7, 8]
})
print(users)
users.drop_duplicates('id', inplace=True)
print(users)
###Output
_____no_output_____
###Markdown
Task 101:Drop all duplicates
###Code
print('Task 101:')
users = pd.DataFrame({
"name": ['kevin', 'james', 'kumar', 'kevin', 'kevin', 'james'],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa'],
"count" : [7, 8, 9, 2, 7, 8]
})
print(users)
users.drop_duplicates('name', inplace=True, keep='last')
print(users)
users1 = users.drop_duplicates('name', keep=False)
print(users1)
###Output
_____no_output_____
###Markdown
Task 102:Basic group by
###Code
print('Task 102:')
animals_df1 = animals_df.groupby('animal').apply(lambda x: x['size'][x['weight'].idxmax()])
print(animals_df1)
###Output
_____no_output_____
###Markdown
Task 103:Missing data: set the 4th row (index position 3) of column 'A' to NaN
###Code
print('Task 103:')
df = pd.DataFrame(np.random.randn(6,1), index=pd.date_range('2013-08-01', periods=6, freq='B'), columns=list('A'))
print(df)
df.loc[df.index[3], 'A'] = np.nan
print(df)
###Output
_____no_output_____
###Markdown
Task 104:reindex
###Code
print('Task 104:')
df1 = df.reindex(df.index[::-1]).ffill()
print(df1)
###Output
_____no_output_____
###Markdown
Task 105:Recreate the animals DataFrame
###Code
print('Task 105:')
animals_df = pd.DataFrame({'animal': 'cat dog cat fish dog cat cat'.split(),
'size': list('SSMMMLL'),
'weight': [8, 10, 11, 1, 20, 12, 12],
'adult' : [False] * 5 + [True] * 2})
print(animals_df)
###Output
_____no_output_____
###Markdown
Task 106:Change columns
###Code
print('Task 106:')
users = pd.DataFrame({
"name": ['kevin', 'james', 'kumar', 'kevin', 'kevin', 'james'],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa']
})
print('Before changing columns : ')
print(users)
# change columns
users_new = users.rename({'name': 'first_name', 'city': 'current_city'}, axis = 1)
print('\nAfter changing columns : ')
print(users_new)
###Output
_____no_output_____
###Markdown
Task 107:Match with isin function
###Code
print('Task 107:')
users = pd.DataFrame({
"name": ['kevin', 'james', 'kumar', 'kevin', 'kevin', 'james'],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa']
})
print('Original Dataframe:')
print(users)
print('\nFinding `Montreal` in by using isin function :')
users.isin(['Montreal']).any()
###Output
_____no_output_____
###Markdown
Task 108:Finding specific items by using `isin` function
###Code
print('Task 108:')
users = pd.DataFrame({
"name": ['kevin', 'james', 'kumar', 'kevin', 'kevin', 'james'],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa']
})
print('Original Dataframe: ')
print(users)
print('\nFinding `Montreal` in using isin and stack them: ')
print(users[users.isin(['Montreal'])].stack())
###Output
_____no_output_____
###Markdown
Task 109:Exclude specific matching
###Code
print('Task 109:')
users = pd.DataFrame({
"name": ['kevin', 'james', 'kumar', 'kevin', 'kevin', 'james'],
"city": ['Toronto', 'Montreal', 'Calgary', 'Montreal', 'Montreal', 'Ottawa']
})
print('Original Dataframe: ')
print(users)
print('\nExcluding `Montreal` in using isin and stack them: ')
print(users[~users.isin(['Montreal'])].stack())
###Output
_____no_output_____
###Markdown
Task 110:Apply a custom function on multiple columns
###Code
print('Task 110:')
amounts = pd.DataFrame({
"CIBC": [200, 4200, 300, 300],
"TD": [1200, 800, 4000, 2000]
})
print('Original Dataframe: ')
print(amounts)
def get_total_amount(x):
# if the amount is less than 500,skip it
total_amount = 0
if(x['CIBC'] > 499):
total_amount += x['CIBC']
if(x['TD'] > 499):
total_amount += x['TD']
return total_amount
amounts['Total'] = amounts.apply(get_total_amount, axis = 1)
print('Dataframe after applying the custom function: ')
print(amounts)
###Output
_____no_output_____ |
notebooks/logging-01b.ipynb | ###Markdown
Start a basic logger: super simple
###Code
import logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(logging.DEBUG)
# log.setLevel(logging.WARNING)
log.debug('a debug message')
log.info('an info message')
log.error('an error message')
###Output
ERROR:root:an error message
|
notebooks/10_aes-statistics.ipynb | ###Markdown
Original Datasets
###Code
df = pd.read_csv('aes_train.csv')
print(f"Number of imbeauty_scoress: {df['beauty_scores'].values.shape[0]}")
print(f"beauty_scores label range: {df['beauty_scores'].values.min()} - {df['beauty_scores'].values.max() }")
print(f"Real beauty_scores range: {df['beauty_scores'].values.min() + 1} - {df['beauty_scores'].values.max() +1}")
plt.hist(df['beauty_scores'].values, bins=np.arange(0, df['beauty_scores'].values.max()+1))
plt.show()
df = pd.read_csv('aes_valid.csv')
print(f"Number of imbeauty_scoress: {df['beauty_scores'].values.shape[0]}")
print(f"beauty_scores label range: {df['beauty_scores'].values.min()} - {df['beauty_scores'].values.max() }")
print(f"Real beauty_scores range: {df['beauty_scores'].values.min() + 1} - {df['beauty_scores'].values.max() +1}")
plt.hist(df['beauty_scores'].values, bins=np.arange(0, df['beauty_scores'].values.max()+1))
plt.show()
df = pd.read_csv('aes_test.csv')
print(f"Number of imbeauty_scoress: {df['beauty_scores'].values.shape[0]}")
print(f"beauty_scores label range: {df['beauty_scores'].values.min()} - {df['beauty_scores'].values.max() }")
print(f"Real beauty_scores range: {df['beauty_scores'].values.min() + 1} - {df['beauty_scores'].values.max() +1}")
plt.hist(df['beauty_scores'].values, bins=np.arange(0, df['beauty_scores'].values.max()+1))
plt.show()
###Output
Number of images: 2081
beauty_scores label range: 0 - 4
Real beauty_scores range: 1 - 5
###Markdown
Balanced Datasets
###Code
df = pd.read_csv('aes_train_balanced.csv')
print(f"Number of imbeauty_scoress: {df['beauty_scores'].values.shape[0]}")
print(f"beauty_scores label range: {df['beauty_scores'].values.min()} - {df['beauty_scores'].values.max() }")
print(f"Real beauty_scores range: {df['beauty_scores'].values.min() + 1 +1} - {df['beauty_scores'].values.max() +1 +1}")
plt.hist(df['beauty_scores'].values, bins=np.arange(0, df['beauty_scores'].values.max()+1 +1))
plt.show()
df = pd.read_csv('aes_valid_balanced.csv')
print(f"Number of imbeauty_scoress: {df['beauty_scores'].values.shape[0]}")
print(f"beauty_scores label range: {df['beauty_scores'].values.min()} - {df['beauty_scores'].values.max() }")
print(f"Real beauty_scores range: {df['beauty_scores'].values.min() + 1 +1} - {df['beauty_scores'].values.max() +1 +1}")
plt.hist(df['beauty_scores'].values, bins=np.arange(0, df['beauty_scores'].values.max()+1 +1))
plt.show()
df = pd.read_csv('aes_test_balanced.csv')
print(f"Number of imbeauty_scoress: {df['beauty_scores'].values.shape[0]}")
print(f"beauty_scores label range: {df['beauty_scores'].values.min()} - {df['beauty_scores'].values.max() }")
print(f"Real beauty_scores range: {df['beauty_scores'].values.min() + 1 +1} - {df['beauty_scores'].values.max() +1 +1}")
plt.hist(df['beauty_scores'].values, bins=np.arange(0, df['beauty_scores'].values.max()+1 +1))
plt.show()
###Output
Number of images: 288
beauty_scores label range: 0 - 2
Real beauty_scores range: 2 - 4
|
term_term_pr_distr_and_inference.ipynb | ###Markdown
autoreload modules and utilities
###Code
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
import all necessary libraries/packages
###Code
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.pipeline import FeatureUnion
from sklearn.datasets import fetch_20newsgroups
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.naive_bayes import MultinomialNB, ComplementNB, BernoulliNB
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics import f1_score as calculate_f1_score
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix
###Output
_____no_output_____
###Markdown
Utility functions
###Code
## utilities
# from utils import clean_text
import string
from sklearn.base import TransformerMixin
import nltk
from nltk import word_tokenize
from nltk.stem import WordNetLemmatizer
nltk.download('stopwords')
nltk.download('wordnet')
def clean_text(text: str, lemmatizer = lambda x: x) -> str:
# removes upper cases
text = text.lower().strip()
# removes punctuation
for char in string.punctuation:
text = text.replace(char, " ")
#lematize the words and join back into string text
text = " ".join([lemmatizer(word) for word in word_tokenize(text)])
return text
def data_isvalid(text, analyser, min_character_size, max_character_size):
return min_character_size <= len(analyser(text)) <= max_character_size
def get_pipeline(vectorizer_type, classifier, use_t2pi, min_df=3, stop_words=None, lemmatizer = lambda x: x):
vectorizer = CountVectorizer if vectorizer_type == "count" else TfidfVectorizer
vect = vectorizer(stop_words=stop_words, min_df=min_df)
models = [
('clean_text', CleanTextTransformer(lemmatizer)),
]
if use_t2pi:
models.append(
("vectorizers", FeatureUnion([
('count_binary', CountVectorizer(stop_words=stop_words, binary=True, min_df=min_df)),
("count", vect)
])
)
)
models.append(('t2pi_transformer', T2PITransformer()))
else:
models.append(('vectorizer', vect))
# models.append(("scaler", StandardScaler(with_mean=False)))
models.append(('classifier', classifier))
return Pipeline(models)
def plot_bars(df, ylabel, ymin=0.77):
xlabels = ["count_model", "count_sw_model", "tfidf_model", "tfidf_sw_model"]
accuracy_means = df[["count_model", "count_sw_model", "tfidf_model", "tfidf_sw_model"]].loc["mean"]
t2pi_accuracy_means = df[["t2pi_count_model", "t2pi_count_sw_model", "t2pi_tfidf_model", "t2pi_tfidf_sw_model"]].loc["mean"]
xvalues = np.arange(len(xlabels)) # the label locations
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(xvalues - width/2, accuracy_means, width, label='Baseline')
rects2 = ax.bar(xvalues + width/2, t2pi_accuracy_means, width, label='T2PI')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel(ylabel.capitalize())
ax.set_title(f'{ylabel.capitalize()} of Baseline and T2PI')
ax.set_ylim(ymin=ymin)
ax.set_xticks(xvalues)
ax.set_xticklabels(xlabels)
ax.legend()
plt.show()
class CleanTextTransformer(TransformerMixin):
def __init__(self, lemmatizer):
self._lemmatizer = lemmatizer
def fit(self, X, y=None, **fit_params):
return self
def transform(self, X, y=None, **fit_params):
return np.vectorize(lambda x: clean_text(x, self._lemmatizer))(X)
def __str__(self):
return "CleanTextTransformer()"
def __repr__(self):
return self.__str__()
class T2PITransformer(TransformerMixin):
@staticmethod
def _max_weight(x, pbar, word_word_pr):
pbar.update(1)
return (word_word_pr.T * x).max(0)
def fit(self, X, y=None, **fit_params):
X = X[:, :int(X.shape[1]/2)].toarray()
print("creating term-term co-occurence pr matrix")
terms = np.arange(X.shape[1])
X = pd.DataFrame(X, columns=terms)
self.word_word_pr_distr = pd.DataFrame(data=0.0, columns=terms, index=terms)
for term in tqdm(terms):
self.word_word_pr_distr[term] = X[X[term] > 0].sum(0) / X.sum(0)
return self
def transform(self, X, y=None, **fit_params):
X = X[:, int(X.shape[1]/2):].toarray()
X = pd.DataFrame(X, columns=self.word_word_pr_distr.columns)
print("transforming ...")
with tqdm(total=X.shape[0]) as pbar:
X = X.apply(self._max_weight, axis=1, args=(pbar, self.word_word_pr_distr))
return X
def __str__(self):
return "T2PITransformer()"
def __repr__(self):
return self.__str__()
###Output
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\christian\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package wordnet to
[nltk_data] C:\Users\christian\AppData\Roaming\nltk_data...
[nltk_data] Package wordnet is already up-to-date!
###Markdown
Load Data
###Code
# whether to shuffle the dataset when it is fetched
randomize = False
# retrieve dataset
categories = ['rec.autos', 'talk.politics.mideast', 'alt.atheism', 'sci.space']
all_docs = fetch_20newsgroups(subset='train', shuffle=randomize, remove=('headers', 'footers', 'quotes'), categories=categories)
categories = all_docs.target_names
print(all_docs.data[0])
###Output
I think that domestication will change behavior to a large degree.
Domesticated animals exhibit behaviors not found in the wild. I
don't think that they can be viewed as good representatives of the
wild animal kingdom, since they have been bred for thousands of years
to produce certain behaviors, etc.
###Markdown
Create Dataframe
###Code
data = pd.DataFrame(
data={
"text":all_docs.data,
"label":all_docs.target
}
)
data.head()
###Output
_____no_output_____
###Markdown
Label Frequency
###Code
print(data["label"].value_counts())
print()
barlist = plt.bar(categories, data["label"].value_counts())
plt.title("Frequency of documents")
plt.xticks(categories, list(map(lambda x: x.split(".")[1], categories)))
plt.ylabel('Number of documents')
plt.xlabel('Newsgroup category')
barlist[0].set_color('red')
barlist[1].set_color('green')
barlist[2].set_color('blue')
barlist[3].set_color('grey')
plt.show()
###Output
1 594
2 593
3 564
0 480
Name: label, dtype: int64
###Markdown
The Dataset labels needs to be balanced Parameters
###Code
min_df = 3
stop_words = "english"
def get_classifier():
# return ComplementNB()
return MultinomialNB()
# return BernoulliNB()
# return LogisticRegression(random_state=0)
def get_lemmatizer():
# return lambda x: x
return WordNetLemmatizer().lemmatize
###Output
_____no_output_____
###Markdown
Select Valid Data
###Code
max_size_per_class = 500
# remove long text
min_chr_size = 128
max_chr_size = 2048
indices = data["text"].apply(data_isvalid, args=(lambda x: clean_text(x, get_lemmatizer()), min_chr_size, max_chr_size))
data = data[indices]
# make classes balanced
class_indices = []
for index in range(4):
class_indices.append(np.where((data["label"] == index))[0])
size_per_class = min(max_size_per_class, min(map(len, class_indices)))
indices = np.concatenate([class_ids[:size_per_class] for class_ids in class_indices])
data = data.iloc[indices]
data.head()
print(data.iloc[0]["text"])
print(data["label"].value_counts())
print()
barlist = plt.bar(categories, data["label"].value_counts())
plt.title("Frequency of documents")
plt.xticks(categories, list(map(lambda x: x.split(".")[1], categories)))
plt.ylabel('Number of documents')
plt.xlabel('Newsgroup category')
barlist[0].set_color('red')
barlist[1].set_color('green')
barlist[2].set_color('blue')
barlist[3].set_color('grey')
plt.show()
###Output
3 366
2 366
1 366
0 366
Name: label, dtype: int64
###Markdown
initialize input and output
###Code
X = data["text"]
y = data['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
###Output
_____no_output_____
###Markdown
initialize recursive word infer model
###Code
# initialize model
t2pi_model = get_pipeline("count", get_classifier(), use_t2pi=True, min_df=min_df, stop_words=None, lemmatizer = get_lemmatizer())
t2pi_model
# fit model
t2pi_model.fit(X_train, y_train)
y_pred = t2pi_model.predict(X_test) #predict testing data
print(classification_report(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Initialize models
###Code
# normal model
count_model = get_pipeline("count", get_classifier(), use_t2pi=False, min_df=min_df, stop_words=None, lemmatizer = get_lemmatizer())
count_sw_model = get_pipeline("count", get_classifier(), use_t2pi=False, min_df=min_df, stop_words=stop_words, lemmatizer = get_lemmatizer())
tfidf_model = get_pipeline("tfidf", get_classifier(), use_t2pi=False, min_df=min_df, stop_words=None, lemmatizer = get_lemmatizer())
tfidf_sw_model = get_pipeline("tfidf", get_classifier(), use_t2pi=False, min_df=min_df, stop_words=stop_words, lemmatizer = get_lemmatizer())
# model
t2pi_count_model = get_pipeline("count", get_classifier(), use_t2pi=True, min_df=min_df, stop_words=None, lemmatizer = get_lemmatizer())
t2pi_count_sw_model = get_pipeline("count", get_classifier(), use_t2pi=True, min_df=min_df, stop_words=stop_words, lemmatizer = get_lemmatizer())
t2pi_tfidf_model = get_pipeline("tfidf", get_classifier(), use_t2pi=True, min_df=min_df, stop_words=None, lemmatizer = get_lemmatizer())
t2pi_tfidf_sw_model = get_pipeline("tfidf", get_classifier(), use_t2pi=True, min_df=min_df, stop_words=stop_words, lemmatizer = get_lemmatizer())
models = {
"count_model": count_model,
"count_sw_model": count_model,
"tfidf_model": tfidf_model,
"tfidf_sw_model": tfidf_sw_model,
"t2pi_count_model": t2pi_count_model,
"t2pi_count_sw_model": t2pi_count_sw_model,
"t2pi_tfidf_model": t2pi_tfidf_model,
"t2pi_tfidf_sw_model": t2pi_tfidf_sw_model
}
###Output
_____no_output_____
###Markdown
Running Cross validation on all Models
###Code
split_size = 3
skf = StratifiedKFold(n_splits=split_size, shuffle=True, random_state=100)
index = 0
macro_f1_scores, weighted_f1_scores, accuracies = [], [], []
for train_index, test_index in skf.split(X, y):
index += 1
x_train_fold, x_test_fold = X.iloc[train_index], X.iloc[test_index]
y_train_fold, y_test_fold = y.iloc[train_index], y.iloc[test_index]
accuracies.append([])
macro_f1_scores.append([])
weighted_f1_scores.append([])
for model_name, model in models.items():
print(f'-> {index}. {model_name} \n{"="*100}\n')
model.fit(x_train_fold, y_train_fold)
y_pred = model.predict(x_test_fold)
accuracy = accuracy_score(y_test_fold, y_pred)
weighted_f1_score = calculate_f1_score(y_test_fold, y_pred, average='weighted')
macro_f1_score = calculate_f1_score(y_test_fold, y_pred, average='macro')
weighted_f1_scores[-1].append(weighted_f1_score)
macro_f1_scores[-1].append(macro_f1_score)
accuracies[-1].append(accuracy)
model_names = list(models.keys())
accuracy = pd.DataFrame(data=np.array(accuracies), columns=model_names)
weighted_f1_score = pd.DataFrame(data=np.array(weighted_f1_scores), columns=model_names)
macro_f1_score = pd.DataFrame(data=np.array(macro_f1_scores), columns=model_names)
accuracy.loc["mean"] = accuracy.mean(0)
weighted_f1_score.loc["mean"] = weighted_f1_score.mean(0)
macro_f1_score.loc["mean"] = macro_f1_score.mean(0)
accuracy.head(split_size+1)
plot_bars(accuracy, ylabel="accuracy", ymin=0.62)
weighted_f1_score.head(split_size+1)
plot_bars(weighted_f1_score, ylabel="weighted_f1_score", ymin=0.72)
macro_f1_score.head(split_size+1)
plot_bars(macro_f1_score, ylabel="macro_f1_score", ymin=0.72)
###Output
_____no_output_____ |
3 - Convolutional Neural Networks/Autoencoders/convolutional-autoencoder/Upsampling_Solution.ipynb | ###Markdown
Mounting GDrive & Installing Pytorch
###Code
import os
from google.colab import drive
# 1. Mounting GDrive
drive.mount('/content/drive/')
# 2. Installing Pytorch
!pip3 install torch torchvision
# 3. Other dependencies
!pip install helper
!pip install bokeh
!pip install opencv-contrib-python
# 4. changing our working directory
MyPath = '/content/drive/My Drive/Colab Notebooks/Python Notebooks/Tutorials/PyTorch Scholarship Challenge'
os.chdir(MyPath)
###Output
_____no_output_____
###Markdown
Convolutional AutoencoderSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. We'll build a convolutional autoencoder to compress the MNIST dataset. >The encoder portion will be made of convolutional and pooling layers and the decoder will be made of **upsampling and convolutional layers**. Compressed RepresentationA compressed representation can be great for saving and sharing any kind of data in a way that is more efficient than storing raw data. In practice, the compressed representation often holds key information about an input image and we can use it for denoising images or oher kinds of reconstruction and transformation!Let's get started by importing our libraries and getting the dataset.
###Code
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# load the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# Create training and test dataloaders
num_workers = 0
# how many samples per batch to load
batch_size = 20
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size, num_workers=num_workers)
###Output
_____no_output_____
###Markdown
Visualize the Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
###Output
_____no_output_____
###Markdown
--- Convolutional AutoencoderThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. Upsampling + Convolutions, DecoderThis decoder uses a combination of nearest-neighbor **upsampling and normal convolutional layers** to increase the width and height of the input layers.It is important to note that transpose convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. This is the approach we take, here. TODO: Build the network shown above. > Build the encoder out of a series of convolutional and pooling layers. > When building the decoder, use a combination of upsampling and normal, convolutional layers.
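As a quick shape sanity check, here is a minimal sketch (an illustration only; it assumes the compressed representation has 4 channels at 7x7 spatial size, matching the solution cell below) of how one nearest-neighbor upsampling step followed by a convolution changes the tensor shape:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# dummy compressed representation: batch of 1, 4 channels, 7x7 spatial size
z = torch.randn(1, 4, 7, 7)

# nearest-neighbor upsampling doubles height/width; the conv only changes the depth
conv = nn.Conv2d(4, 16, 3, padding=1)
out = conv(F.interpolate(z, scale_factor=2, mode='nearest'))  # F.upsample in older PyTorch
print(out.shape)  # torch.Size([1, 16, 14, 14])
```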
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class ConvAutoencoder(nn.Module):
def __init__(self):
super(ConvAutoencoder, self).__init__()
## encoder layers ##
# conv layer (depth from 1 --> 16), 3x3 kernels
self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
# conv layer (depth from 16 --> 8), 3x3 kernels
self.conv2 = nn.Conv2d(16, 4, 3, padding=1)
# pooling layer to reduce x-y dims by two; kernel and stride of 2
self.pool = nn.MaxPool2d(2, 2)
## decoder layers ##
self.conv4 = nn.Conv2d(4, 16, 3, padding=1)
self.conv5 = nn.Conv2d(16, 1, 3, padding=1)
def forward(self, x):
# add layer, with relu activation function
# and maxpooling after
x = F.relu(self.conv1(x))
x = self.pool(x)
# add hidden layer, with relu activation function
x = F.relu(self.conv2(x))
x = self.pool(x) # compressed representation
## decoder
# upsample, followed by a conv layer, with relu activation function
# this function is called `interpolate` in some PyTorch versions
x = F.upsample(x, scale_factor=2, mode='nearest')
x = F.relu(self.conv4(x))
# upsample again, output should have a sigmoid applied
x = F.upsample(x, scale_factor=2, mode='nearest')
x = F.sigmoid(self.conv5(x))
return x
# initialize the NN
model = ConvAutoencoder()
print(model)
###Output
ConvAutoencoder(
(conv1): Conv2d(1, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv4): Conv2d(4, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(16, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
###Markdown
--- TrainingHere I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. We are not concerned with labels in this case, just images, which we can get from the `train_loader`. Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use `MSELoss`. And compare output images and input images as follows:```loss = criterion(outputs, images)```Otherwise, this is pretty straightfoward training with PyTorch. We flatten our images, pass them into the autoencoder, and record the training loss as we go.
###Code
# specify loss function
criterion = nn.MSELoss()
# specify loss function
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# number of epochs to train the model
n_epochs = 30
for epoch in range(1, n_epochs+1):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data in train_loader:
# _ stands in for labels, here
# no need to flatten images
images, _ = data
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
outputs = model(images)
# calculate the loss
loss = criterion(outputs, images)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*images.size(0)
# print avg training statistics
train_loss = train_loss/len(train_loader)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(
epoch,
train_loss
))
###Output
Epoch: 1 Training Loss: 0.323222
Epoch: 2 Training Loss: 0.167930
Epoch: 3 Training Loss: 0.150233
Epoch: 4 Training Loss: 0.141811
Epoch: 5 Training Loss: 0.136143
Epoch: 6 Training Loss: 0.131509
Epoch: 7 Training Loss: 0.126820
Epoch: 8 Training Loss: 0.122914
Epoch: 9 Training Loss: 0.119928
Epoch: 10 Training Loss: 0.117524
Epoch: 11 Training Loss: 0.115594
Epoch: 12 Training Loss: 0.114085
Epoch: 13 Training Loss: 0.112878
Epoch: 14 Training Loss: 0.111946
Epoch: 15 Training Loss: 0.111153
Epoch: 16 Training Loss: 0.110411
Epoch: 17 Training Loss: 0.109753
Epoch: 18 Training Loss: 0.109152
Epoch: 19 Training Loss: 0.108625
Epoch: 20 Training Loss: 0.108119
Epoch: 21 Training Loss: 0.107637
Epoch: 22 Training Loss: 0.107156
Epoch: 23 Training Loss: 0.106703
Epoch: 24 Training Loss: 0.106221
Epoch: 25 Training Loss: 0.105719
Epoch: 26 Training Loss: 0.105286
Epoch: 27 Training Loss: 0.104917
Epoch: 28 Training Loss: 0.104582
Epoch: 29 Training Loss: 0.104284
Epoch: 30 Training Loss: 0.104016
###Markdown
Checking out the resultsBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# get sample outputs
output = model(images)
# prep images for display
images = images.numpy()
# output is resized into a batch of iages
output = output.view(batch_size, 1, 28, 28)
# use detach when it's an output that requires_grad
output = output.detach().numpy()
# plot the first ten input images and then reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))
# input images on top row, reconstructions on bottom
for images, row in zip([images, output], axes):
for img, ax in zip(images, row):
ax.imshow(np.squeeze(img), cmap='gray')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
###Output
_____no_output_____ |
PropagationMethods_Notebook.ipynb | ###Markdown
Importing Libraries Notice: The code works for TensorFlow version 1.2.1, where higher-order gradients are implemented. We run all models on a K80 GPU, and the PropagationMethodsAttack MUST run on a GPU (as max_pool_with_argmax is used).
###Code
%load_ext autoreload
%autoreload 2
from keras import backend as K
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import _pickle as pkl
import scipy.stats as stats
import tensorflow as tf
def get_session(number=None):
config_gpu = tf.ConfigProto()
config_gpu.gpu_options.allow_growth = True
return tf.Session(config=config_gpu)
###Output
Using TensorFlow backend.
###Markdown
Squeezenet Model:We slightly modified https://github.com/rcmalli/keras-squeezenet to allow changing the activation function. As described in the paper, when attacking integrated gradients and simple gradient saliency maps we replace ReLU activations with Softplus in the gradient of the saliency loss function (the perturbed image is then applied to the original ReLU network).
###Code
from modified_squeezenet import SqueezeNet
###Output
_____no_output_____
###Markdown
Load images:80 correctly classified ImageNet images. SqueezeNet accepts channel-mean-subtracted images, so we subtract the channel mean from each image.
###Code
from utils import dataReader
X_dic, y_dic, labels_dic = dataReader()
mean_image = np.zeros((227,227,3))
mean_image[:,:,0]=103.939
mean_image[:,:,1]=116.779
mean_image[:,:,2]=123.68
X = X_dic - mean_image #Mean Subtraction
y = y_dic
###Output
_____no_output_____
###Markdown
Loading squeezenet model:
###Code
tf.reset_default_graph()
sess = get_session()
K.set_session(sess)
K.set_learning_phase(0)
model = SqueezeNet("relu")
###Output
_____no_output_____
###Markdown
Saliency Map: (Takes a while)The chosen propagation method's saliency map tensor is created for the SqueezeNet model. As discussed in the paper, we define the saliency map to sum to one. Here, we multiply the sum-one saliency map by the image dimensions to avoid very small values. We have only implemented the chosen propagation methods specifically for SqueezeNet. (For a general library for the DeepLIFT method, please refer to https://github.com/kundajelab/deeplift. We used the channel-mean image as the reference image, which after channel mean subtraction is all-zero.)
###Code
mode = 'DeepLIFT' #One of the "DeepLIFT", "LP"(for layerwise propagation method),
# or "DTD" (for deep taylor decomposition)
from utils import propagetion
def create_saliency_ops(sess,model,reference_image, mode):
w = model.input.get_shape()[1].value
h = model.input.get_shape()[2].value
c = model.input.get_shape()[3].value
num_classes = model.output.get_shape()[-1].value
m = propagetion(w,h,c,num_classes,sess,model,reference_image,mode=mode)
model.m = m
if mode=='DeepLIFT':
relevance = m[-1]* (model.input[-1]-reference_image)
elif mode=='LP':
relevance = m[-1] * model.input[-1]
elif mode=='DTD':
relevance = m[-1]
saliency_simple= tf.reduce_sum(tf.abs(relevance), -1)
model.saliency = w*h*tf.divide(saliency_simple, tf.reduce_sum(saliency_simple))
model.saliency_flatten = tf.reshape(model.saliency, [w*h])
reference_image = np.zeros((227,227,3)) #Mean Subtracted Reference Image
create_saliency_ops(sess, model, reference_image=reference_image, mode=mode)
###Output
_____no_output_____
###Markdown
Test Image:A correctly classified ImageNET image is randomly chosen.
###Code
n = np.random.choice(80)
test_image = X[n]
original_label = y[n]
print("Image Label : {}".format(labels_dic[y[n]]))
%matplotlib inline
plt.imshow((X[n,:,:,::-1]+mean_image[:,:,::-1])/255)
###Output
Image Label : guinea_pig
###Markdown
Call the perturbation module: (Creating attack directions takes a long while)We create the attack object with our own parameters. The object is fed the mean-subtracted image. The recommended k_top parameter for ImageNet is 1000. (Refer to the paper for a description of this parameter.)
###Code
k_top = 1000 #Recommended for ImageNet
from utils import PropagationMethodsAttack
module = PropagationMethodsAttack(mean_image, sess, test_image, original_label, NET=model, k_top=k_top)
###Output
_____no_output_____
###Markdown
Attack! (Takes a while)
###Code
method = "mass_center" #Method should be one of "random", "mass_center", "topK", or "target"
epsilon = 16 #Maximum allowed perturbation for each pixel
output = module.iterative_attack(method, epsilon=epsilon, alpha=0.1, iters=300, measure="mass_center")
print("The prediction confidence changes from {} to {} after perturbation.".format(module.original_confidence,
output[-1]))
print('''{} % of the {} most salient pixels in the original image are among {} most salient pixels of the
perturbed image'''.format(output[0]*100,k_top,k_top))
print("The rank correlation between salieny maps is equal to {}".format(output[1]))
print("The L2 distance between mass centers of saliencies is {} pixels.".format(output[2]))
###Output
Iteration : 0
Iteration : 60
Iteration : 120
Iteration : 180
Iteration : 240
For maximum allowed perturbation size equal to 16, the resulting perturbation size was11.60000017285347
The prediction confidence changes from 0.9867644309997559 to 11.60000017285347 after perturbation.
9.2 % of the 1000 most salient pixels in the original image are among 1000 most salient pixels of the
perturbed image
The rank correlation between salieny maps is equal to 0.5356668993924382
The L2 distance between mass centers of saliencies is 56.04462507680822 pixels.
###Markdown
Time for depiction...
###Code
mpl.rcParams["figure.figsize"]=8,8
plt.rc("text",usetex=False)
plt.rc("font",family="sans-serif",size=12)
plt.subplot(2,2,1)
plt.title("Original Image")
plt.imshow((X[n,:,:,::-1]+mean_image[:,:,::-1])/255)
plt.subplot(2,2,2)
plt.title("Original Image Saliency Map")
plt.imshow(module.saliency1.clip(0, np.percentile(module.saliency1,100)),cmap="hot")
plt.subplot(2,2,3)
plt.title("Perturbed Image")
plt.imshow((module.perturbed_image[:,:,::-1]+mean_image[:,:,::-1])/255)
plt.subplot(2,2,4)
plt.title("Perturbed Image Saliency Map")
plt.imshow(module.saliency2,cmap="hot")
###Output
_____no_output_____ |
optim/sparsifiedSGD/notebooks/plots.ipynb | ###Markdown
Compare QSGD
###Code
def qsgd_bits_used(precision, d, d_eff=None, with_method=False):
"""
precision: bits used by QSGD
d: dimension of the vectors
d_eff: d * average density (only for sparse datasets)
with_method: if True, return the method used
"""
s = 2 ** precision
alternatives = []
bits = []
# naive encoding
alternatives.append('naive')
bits.append((precision + 1) * d)
# bound in QSGD with Elias coding
alternatives.append('QSGD elias boud')
bits.append(3 * s * (s + np.sqrt(d)) + 32)
# for sparse vectors send indices and values
if d_eff:
bits_to_encode_indices = 2 ** np.log2(np.log2(d))
alternatives.append('sparse with indices')
bits.append((precision + 1 + 32) * d_eff)
bits = np.ceil(np.array(bits))
best_idx = bits.argmin()
if with_method:
return int(bits[best_idx]), alternatives[best_idx]
else:
return int(bits[best_idx])
for name, d, d_eff in [('rcv', 47236, 70), ('epsilon', 2000, None)]:
for b in [2, 4, 8]:
bits, method = qsgd_bits_used(b, d, d_eff, with_method=True)
print("QSGD {}-bits on {}: {} bits per gradient sent computed using {}".format(b, name, bits, method))
eps_ylim = [baselines['epsilon'] - 0.005, .34]
def plot_qsgd_epsilon(ax):
baseline = baselines['epsilon']
data = unpickle_dir('../results/eps-quantized')
markers_every = 10
show = {
"full-sgd": ("C0-o", "SGD"),
"qsgd-8bit": ("C8-", "QSGD 8bits"),
"qsgd-4bit": ("C8--", "QSGD 4bits"),
"qsgd-2bit": ("C8:", "QSGD 2bits"),
"top1": ("C4*-", "top k=1"),
"rand1": ("C1s-", "rand k=1"),
}
for p, loss in zip(data['params'], map(lambda x: x[1], data['results'])):
loss =loss[1:-1]
if p.name in show:
style, label = show[p.name]
ax.plot(np.arange(1, len(loss) + 1) / 10, loss,
style, label=label, markevery=markers_every if 'top' not in p.name else markers_every - 1)
ax.axhline(baseline, color='black', linestyle=':', label='baseline')
ax.legend()
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
ax.set_ylim(eps_ylim);
ax.set_title('epsilon dataset');
ax.set_xlabel('epoch');
ax.set_ylabel('training loss');
if False: #LOG_SCALE:
ax.set_yscale("log")
fig, ax = plt.subplots(1)
plot_qsgd_epsilon(ax)
def plot_qsgd_epsilon_com(ax):
baseline = baselines['epsilon']
data = unpickle_dir('../results/eps-quantized')
num_samples = 400000
one_mb = 8 * (2 ** 20)
d = 2000
markers_every = 10
show = {
"full-sgd": ("C0-o", "SGD", 32 * 2 * d),
"qsgd-2bit": ("C8:", "QSGD 2bits", qsgd_bits_used(2, d)),
"qsgd-4bit": ("C8--", "QSGD 4bits", qsgd_bits_used(4, d)),
"qsgd-8bit": ("C8-", "QSGD 8bits", qsgd_bits_used(8, d)),
"top1": ("C4*-", "top k=1", 2 * 32),
"rand1": ("C1s-", "rand k=1", 2 * 32),
}
for p, loss in zip(data['params'], map(lambda x: x[1], data['results'])):
loss =loss[1:-1]
if p.name in show:
style, label, bits_per_update = show[p.name]
ax.plot(np.arange(1, len(loss) + 1) / 100 * num_samples * bits_per_update / one_mb, loss, style, label=label, markevery=markers_every)
ax.axhline(baseline, color='black', linestyle=':', label='baseline')
# ax.legend()
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
ax.set_ylim(eps_ylim);
# ax.set_title('epsilon dataset');
ax.set_xlabel('total size of communicated gradients (MB)');
ax.set_ylabel('training loss');
ax.set_xscale("log")
if False: # LOG_SCALE:
ax.set_yscale("log")
fig, ax = plt.subplots(1)
plot_qsgd_epsilon_com(ax)
rcv_ylim = [baselines['RCV1-test'] - 0.003, .11]
def plot_qsgd_rcv1(ax):
baseline = baselines['RCV1-test']
data = unpickle_dir('../results/rcv-quantized')
markers_every = 10
show = {
"full-sgd": ("C0-o", "SGD"),
"qsgd-8bit": ("C8-", "QSGD 8bits"),
"qsgd-4bit": ("C8--", "QSGD 4bits"),
"qsgd-2bit": ("C8:", "QSGD 2bits"),
"top1": ("C4*-", "top k=1"),
# "rand10": ("C68-", "rand k=10"),
"rand1": ("C1s-", "rand k=1"),
}
for p, loss in zip(data['params'], map(lambda x: x[1], data['results'])):
loss =loss[:-1]
if p.name in show:
style, label = show[p.name]
ax.plot(np.arange(1, len(loss) + 1) / 10, loss, style, label=label,
markevery=markers_every if 'top' not in p.name else markers_every - 1)
ax.axhline(baseline, color='black', linestyle=':', label='baseline')
ax.legend()
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
ax.set_ylim(rcv_ylim);
# ax.set_xlim([0, 1.]);
ax.set_title('RCV1 dataset');
ax.set_xlabel('epoch');
# ax.set_ylabel('training loss');
if False: #LOG_SCALE:
ax.set_yscale("log")
fig, ax = plt.subplots(1)
plot_qsgd_rcv1(ax)
def plot_qsgd_rcv1_com(ax):
baseline = baselines['RCV1-test']
data = unpickle_dir('../results/rcv-quantized')
num_samples = 677399
d = 47236
d_eff = 70
one_mb = 8 * (2 ** 20)
markers_every = 10
show = {
"full-sgd": ("C0-o", "SGD", 32 * 2 * d_eff),
"qsgd-2bit": ("C8:", "QSGD 2bits", qsgd_bits_used(2, d, d_eff)),
"qsgd-4bit": ("C8--", "QSGD 4bits", qsgd_bits_used(4, d, d_eff)),
"qsgd-8bit": ("C8-", "QSGD 8bits", qsgd_bits_used(8, d, d_eff)),
"top1": ("C4*-", "top k=1", 1 * 2 * 32),
"rand1": ("C1s-", "rand k=1", 1 * 2 * 32),
}
for p, loss in zip(data['params'], map(lambda x: x[1], data['results'])):
loss = loss[1:-1]
if p.name in show:
style, label, bits_per_update = show[p.name]
ax.plot(np.arange(1, len(loss) + 1) / 100 * num_samples * bits_per_update / one_mb, loss,
style, label=label, markevery=markers_every)
ax.axhline(baseline, color='black', linestyle=':', label='baseline')
# ax.legend()
ax.xaxis.set_major_locator(MaxNLocator(integer=True))
ax.set_ylim(rcv_ylim);
# ax.set_xlim([0, 1.]);
# ax.set_title('RCV1 dataset');
ax.set_xlabel('total size of communicated gradients (MB)');
# ax.set_ylabel('training loss');
ax.set_xscale("log")
if False: #LOG_SCALE:
ax.set_yscale("log")
fig, ax = plt.subplots(1)
plot_qsgd_rcv1_com(ax)
fig, axes = plt.subplots(2, 2, figsize=(10,6))
[ax1, ax2], [ax3, ax4] = axes
plot_qsgd_epsilon(ax1)
plot_qsgd_rcv1(ax2)
plot_qsgd_epsilon_com(ax3)
plot_qsgd_rcv1_com(ax4)
plt.subplots_adjust(left=None, bottom=None, right=None, top=None,
wspace=None, hspace=0.3)
plt.tight_layout()
fig.savefig('../figures/qsgd.pdf')
###Output
_____no_output_____ |
ospi/docs/models/decision_tree_train.ipynb | ###Markdown
Decision Tree Classifier ```{tip}It is recommended to use google colaboratory for running the notebook.```
###Code
# Extra libraries required
# Install ray tune
! pip install tune-sklearn ray[tune]
# Install shap
! pip install shap
###Output
_____no_output_____
###Markdown
Decision trees are non-parametric supervised algorithms which predict the target variable by learning simple decision rules inferred from the data. They are widely known for their interpretability and are highly reliable, even under certain violations of assumptions. However, they can be highly sensitive to the data and can also be unstable.
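To make "simple decision rules" concrete, the minimal sketch below fits a shallow tree on a toy dataset (scikit-learn's iris data, purely illustrative and unrelated to the OSI dataset used in this notebook) and prints the learned rules with `export_text`.

```python
# Toy illustration only: a shallow tree's rules are nested threshold tests.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
toy_tree = DecisionTreeClassifier(max_depth=2, random_state=0)
toy_tree.fit(iris.data, iris.target)

# Each line is a threshold test on one feature; leaves carry the predicted class.
print(export_text(toy_tree, feature_names=list(iris.feature_names)))
```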
###Code
# Import necessary packages
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report
import plotly.express as px
import plotly.io as pio
# Set default plotly renderer
pio.renderers.default = "notebook_connected" # set it to "colab" for working in google colaboratory
# Load data into dataframe
df = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/uci/ospi/datasets/preprocessed_osi.csv')
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
y = df['Revenue']
X = df.drop('Revenue', axis=1)
# Split data into training and testing set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
###Output
_____no_output_____
###Markdown
Since decision trees do not perform well with an imbalanced target variable, it is necessary to oversample the minority class in the dataset.
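A quick sanity check, assuming the `X_train`/`y_train` produced by the split above: compare the class counts before and after SMOTE to confirm the minority class really has been oversampled. The next cell performs the actual resampling used for training; this sketch is only a verification aid.

```python
# Hedged sketch: verify the class balance before/after oversampling.
from collections import Counter
from imblearn.over_sampling import SMOTE

print("before:", Counter(y_train))
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)
print("after: ", Counter(y_res))
```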
###Code
# Oversample the minority class in the target variable
oversample = SMOTE()
X_train, y_train = oversample.fit_resample(X_train, y_train)
###Output
_____no_output_____
###Markdown
Model Training
###Code
# Declare the model
estimator = DecisionTreeClassifier()
# Declare cross-validation method
cv = StratifiedKFold()
# Declare the parameter grid to be searched
param_grid = dict(
criterion = ['gini', 'entropy'],
splitter = ['best', 'random'],
max_depth = [20, 40, 60, None],
min_samples_split = [2, 10, 40],
max_features = ['auto', 'log2', None]
)
# Import grid search model from tune sklearn
from tune_sklearn import TuneGridSearchCV
# Train the model
dtree_clf = TuneGridSearchCV(estimator=estimator, param_grid=param_grid, scoring='f1', n_jobs=-1, cv=cv, use_gpu=True, verbose=2)
dtree_clf.fit(X_train, y_train)
# Get the best performing model
dtree_clf.best_estimator_
# Save and load the model if required
import joblib
joblib.dump(dtree_clf.best_estimator_, '/content/drive/MyDrive/Colab Notebooks/uci/ospi/models/dtree.pkl')
dtree_clf = joblib.load('/content/drive/MyDrive/Colab Notebooks/uci/ospi/models/dtree.pkl')
# Use model for prediction
y_pred = dtree_clf.predict(X_test)
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
False 0.93 0.91 0.92 2594
True 0.58 0.63 0.60 489
accuracy 0.87 3083
macro avg 0.75 0.77 0.76 3083
weighted avg 0.87 0.87 0.87 3083
###Markdown
The model seems too weak at identifying the visitors who will transact; the decision tree classifier performed close to the support vector classifier and logistic regression. Model Interpretation Since decision trees are white-box models, they are easy to interpret. The scikit-learn implementation gives us a feature importance score for each independent variable used.
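The impurity-based importances used in the next cell can favour high-cardinality features, so a common cross-check is permutation importance on held-out data. The sketch below is a hedged addition (not part of the original analysis) and assumes the `dtree_clf`, `X_test` and `y_test` objects defined above.

```python
# Hedged cross-check: how much does test F1 drop when one column is shuffled?
from sklearn.inspection import permutation_importance

perm = permutation_importance(dtree_clf, X_test, y_test,
                              scoring='f1', n_repeats=10, random_state=42)
top = perm.importances_mean.argsort()[::-1][:10]
for i in top:
    print(f"{X_test.columns[i]:<30} {perm.importances_mean[i]:.4f}")
```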
###Code
# Create a feature importance dataframe
feat_imp_data = zip(list(df.drop('Revenue', axis=1).columns), dtree_clf.feature_importances_)
feat_imp_df = pd.DataFrame(columns=['column', 'feature_importance'], data=feat_imp_data)
# Sort feature importance
feat_imp_df.sort_values('feature_importance', ascending=False, inplace=True)
fig = px.bar(feat_imp_df[:20], x='feature_importance', y='column', orientation='h')
fig.show()
###Output
_____no_output_____
###Markdown
Once again the feature page value came out to be the most important feature by a huge margin. The months of November and May also came out to be important, as was observed in the case of logistic regression. Surprisingly, this model considers the number of product-related pages visited by visitors more important than the time spent by the visitors on those pages. The other most important features are the bounce rate and the number of informational pages visited by the visitors. The SHAP package contains a tree explainer which can be used for tree-based algorithms such as decision trees.
###Code
# Import shap
import shap
# Compute SHAP values
explainer = shap.explainers.Tree(dtree_clf, X_train, feature_names=df.drop('Revenue', axis=1).columns)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values)
# Plot summary plot for class
shap.summary_plot(shap_values[1], X_test)
###Output
_____no_output_____ |
2-Terrorism.ipynb | ###Markdown
Terrorism detection and estimations

Terrorism is another interesting application of machine learning for crime detection and prevention. The topic is really vast, so we are going to discuss a few techniques and give some references for the details. We'll divide this notebook into three parts:

1. Definition of terrorism
2. **Estimating terrorism behaviour over time**
3. **Defeating terrorism**

Introduction

**Terrorism** is, in the broadest sense, the use of intentionally indiscriminate violence as a means to create terror among masses of people, or fear to achieve a financial, political, religious or ideological aim. (Wikipedia)

Terrorist groups usually differ in these respects:
- organisation (lone actor, family-based, supported by a larger network, etc.)
- ethnic composition
- strategies to conceal themselves
- ideology (rebellion, politics, religious orthodoxy, etc.)

Besides, collecting useful data to process may not be a trivial task: some terrorists are really good at hiding their tracks. Unlike other crimes, it is unacceptable not to prevent a terrorist attack in our modern, democratic society, which makes the task all the more complex.

Red Brigade Movement and the Volterra formula

According to Wikipedia, the **Lotka–Volterra equations**, also known as the **predator–prey equations**, are a pair of *first-order nonlinear differential equations*, frequently used to describe the dynamics of biological systems in which two species interact, one as a *predator* and the other as *prey*. (Bold and italics are ours.)

The solution of the Lotka–Volterra equations is usually the so-called logistic curve: \begin{equation}f(x) = \frac{L}{1 + e^{-k(x - x_0)}}\end{equation} where:
* e = the natural logarithm base (also known as Euler's number),
* $x_0$ = the x-value of the sigmoid's midpoint,
* L = the curve's maximum value, and
* k = the steepness of the curve.

Basically it is nothing but a dilated and translated sigmoid function: \begin{equation}g(x) = \frac{1}{1+e^{-x}} \end{equation}

Marchetti noticed how this formula fits fairly well even in radically different contexts, such as the artistic verve of William Shakespeare and Sandro Botticelli, the growth of the automobile population in Italy, and the cumulative number of murders committed by Red Brigades members.
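To make the "dilated and translated sigmoid" claim explicit, substituting $u = k(x - x_0)$ into $g$ gives
\begin{equation} L\,g\big(k(x - x_0)\big) = \frac{L}{1 + e^{-k(x - x_0)}} = f(x), \end{equation}
so $f$ is just $g$ rescaled vertically by $L$, steepened by $k$, and centred at $x_0$.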
###Code
%matplotlib inline
from __future__ import print_function
# for jupyter use mainly
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
from math import exp
import numpy as np
import matplotlib.pyplot as plt
# The purpose is to return a function that is then applied to a dataset.
# Unfortunately sklearn's LogisticRegression is indeed a classifier
# xmin and xmax are the minimum and the maximum X value;
# ticks lets you add the ticks to the graph
# plot lets you choose whether to plot the graph or not
def logistic_function(L, k, x_0, xmin, xmax, ticks=False, plot=True):
logit = np.vectorize(lambda x: L/(1+exp(-k*(x-x_0))))
if plot:
X_test = np.linspace(xmin, xmax)
y = logit(X_test)
if ticks:
%matplotlib inline
plt.xticks(np.unique(np.ceil(X_test)), rotation='vertical')
plt.plot(X_test, y, color='red')
return logit
interact(logistic_function, L = (0.5,5.), k = (0.05, 1), x_0 = (-5.,5.),
xmin=-20, xmax = +20)
###Output
_____no_output_____
###Markdown
We want to estimate the number of attacks made by Red Brigades members during the "Years of Lead" in Italy over the period 1971-1975, and we want to predict the **cumulative** number of attacks and the end of the group's activity. By cumulative we mean the sum of the attacks that happened from the first year of activity up to the current time.

| Year | No. of attacks (cumulative) |
|------|-----------------------------|
| 1971 | 5 |
| 1972 | 11 |
| 1973 | 19 |
| 1974 | 34 |
| 1975 | 69 |

Since the solution is: \begin{equation} f(t) = \frac{250}{1+e^{-0.75(t - 1976.1)}} \end{equation} we can try the following prediction.
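The next cell fixes $L$, $k$ and $x_0$ by hand. As a hedged alternative (not in the original analysis), the same parameters can be estimated directly from the five cumulative counts in the table with scipy; the starting guess `p0` below is an assumption and the fitted values depend on it.

```python
# Hedged sketch: estimate (L, k, x_0) from the table above instead of fixing them.
import numpy as np
from scipy.optimize import curve_fit

years = np.array([1971, 1972, 1973, 1974, 1975], dtype=float)
cumulative = np.array([5, 11, 19, 34, 69], dtype=float)

def logistic(x, L, k, x0):
    return L / (1 + np.exp(-k * (x - x0)))

params, _ = curve_fit(logistic, years, cumulative, p0=[250, 0.75, 1976], maxfev=10000)
print("fitted L, k, x0:", params)
```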
###Code
%matplotlib inline
# fix the parameters and get the function
regression = logistic_function(L=250, k=0.75, x_0 = 1976.1,
xmin=1965, xmax = +1985,
ticks=True, plot=True)
# our dataset does not include 1976, which we will use later to check the prediction
dataset = np.array([
[1971, 5],
[1972, 11],
[1973, 19],
[1974, 34],
[1975, 69]
])
# numpy's ravel "flattens" a vector of vectors into a single vector
# otherwise we would have dataset[...,0] = [[1971], [1972], ...]
features = dataset[...,0].ravel()
targets = dataset[...,1].ravel()
# plot the graph, black dots for the true values, blue dots for the predicted values
plt.plot(features, targets, 'ko')
plt.plot(features, regression(features), 'bo')
np.trunc(regression(features)), np.trunc(targets)
###Output
_____no_output_____
###Markdown
Good! Now we can estimate the cumulative number of attacks for any year we like. Let's consider 1976.
###Code
# transform 1976 into [1976], apply the function and truncate
# the result to the nearest integer
np.trunc(regression(np.array(1976)))
###Output
_____no_output_____ |
Textual analysis.ipynb | ###Markdown
Load precomputed emoji network
###Code
import json
with open("Networks/emoji_dict.json", 'r') as fp:
emoji_dict=json.load(fp)
emoji_dict_reverse = {v: k for k, v in emoji_dict.items()}
G=nx.read_weighted_edgelist("Networks/network_emoji_98.edgelist.gz")
import community as community_louvain
import networkx as nx
partion = community_louvain.best_partition(G)
partition_emoji={}
for key in partion:
emoji=emoji_dict_reverse[int(key)]
try:
partition_emoji[partion[key]].append(emoji)
except KeyError:
partition_emoji[partion[key]]=[emoji]
for key in partition_emoji:
print(partition_emoji[key])
###Output
['✊', '🚩', '🏹', '👊', '⛳', '🙏', '🤝', '🔱', '🕉', '🌞', '🍁', '🌷', '😊', '🐚', '🔥', '🌹', '🗣', '❗', '〰', '✅', '👆', '😢', '🌼', '💐', '🐆', '👍', '⏺']
['💪', '✔', '😭', '😰', '👏', '😬', '📱', '📲']
['👉', '👿', '😎', '👺', '📗', '✍', '❌', '📌', '⭕', '🐖', '🐷', '🕋', '😈', '❓', '🔰', '🌀']
['😡', '🛑', '✋', '⚫', '🔵', '⬇']
['⚔', '🗡', '🤺', '🦁', '💥', '💣', '🔫', '🐅', '☪']
['😱', '🤔', '👇', '😠', '😏', '👈', '📖', '☹', '😳', '😔', '🌐', '😤', '🔺', '🎀', '➡']
['📚', '✒', '🖌']
['♂', '🤷', '👹', '🧐', '🤫', '💁', '😞', '♀', '🧝', '🤨', '⁉']
['😀', '😁', '😂']
###Markdown
Topic analysis
###Code
from gensim.models.ldamodel import LdaModel
import gensim
from gensim.models import CoherenceModel
def handler_funtion(data):
processed_words=list(data['tokenized'])
dictionary = gensim.corpora.Dictionary(processed_words)
dictionary.filter_extremes(no_below=3, no_above=0.5, keep_n=50000)
list_docs=[]
for index,row in tqdm_notebook(data.iterrows(),total=len(data)):
list_docs.append(dictionary.doc2bow(row['tokenized']))
idxtoken={k:v for v,k in dictionary.token2id.items()}
return dictionary,idxtoken,list_docs
from gensim.test.utils import common_corpus, common_dictionary
from gensim.models.coherencemodel import CoherenceModel
from gensim.models.ldamodel import LdaModel
time_filtered_data=annotated_df
params={'remove_numbers':True,'remove_emoji':False,'remove_stop_words':True,'tokenize':True}
token_list=[preprocess_sent(ele,params) for ele in tqdm_notebook(list(time_filtered_data['message_text']),total=len(time_filtered_data))]
time_filtered_data['tokenized']=token_list
numtopics=10
dictionary,idxtoken,list_docs= handler_funtion(time_filtered_data)
model = LdaModel(list_docs, numtopics, dictionary,random_state=2020,chunksize=100,passes=20)
cm = CoherenceModel(model=model, texts=token_list, coherence='c_v')
coherence = cm.get_coherence()
print("Coherence score",coherence)
model.print_topics()
###Output
_____no_output_____ |
ch4_self_oil_store.ipynb | ###Markdown
Chapter 4 - Are self-service gas stations really cheaper? In this chapter we will try to answer a very simple question together: if someone asked whether self-service gas stations are really cheaper, what would people who work with data do? The answer is simply to survey gas station prices, separate self-service stations from the rest, and compare them. Earlier we used BeautifulSoup, and it can do a lot on its own. However, for several reasons there is internet content that BeautifulSoup alone cannot reach. That is exactly where Selenium comes in. First, we need to connect to Opinet, the government site that compares gas station prices, and collect the information.
###Code
from selenium import webdriver
###Output
_____no_output_____
###Markdown
First, import webdriver from selenium. Then let's connect to Naver (www.naver.com).
###Code
driver = webdriver.Chrome('C:/Users/이재윤/Downloads/chromedriver_win32/chromedriver')
driver.get("http://naver.com")
###Output
_____no_output_____
###Markdown
When you connect to Naver like this, another browser window will appear with the message 'Chrome is being controlled by automated test software'. That browser is the one we will drive with code, so you should avoid operating the Chrome window we created by hand; it can cause confusion while writing code. Open a separate Naver window for your own browsing.
###Code
driver.save_screenshot('C:/Users/이재윤/images/001.jpg')
###Output
_____no_output_____
###Markdown
Selenium can capture the screen with the save_screenshot command.
###Code
elem_login = driver.find_element_by_id("id")
elem_login.clear()
elem_login.send_keys("wodbsdbsk")
elem_login = driver.find_element_by_id("pw")
elem_login.clear()
elem_login.send_keys("sorkwodbs1!")
###Output
_____no_output_____
###Markdown
Naver has fields for entering login information. If you want to log in to Naver with the Chrome driver, you obviously need to enter an id and a password. If you inspect the HTML source of the id and password fields with the developer tools, the id= attribute is set to id or pw respectively. You can locate them with find_element_by_id, one of the commands Selenium provides. After running the code above, the login information will be filled in. Next, if you click on the login button area in the developer tools, the corresponding code is highlighted; right-click it, go to Copy, and choose Copy XPath to copy the login button's XPath.
###Code
xpath = '''//*[@id="frmNIDLogin"]/fieldset/input'''
driver.find_element_by_xpath(xpath).click()
###Output
_____no_output_____
###Markdown
Then, while writing the code above, paste the XPath you just copied after xpath =. After logging in this way, let's access Mail (mail.naver.com).
###Code
driver.get("http://mail.naver.com")
###Output
_____no_output_____
###Markdown
Once we have navigated to the page we want, we read its contents with BeautifulSoup.
###Code
from bs4 import BeautifulSoup
html = driver.page_source
soup = BeautifulSoup(html, 'html.parser')
###Output
_____no_output_____
###Markdown
Using driver.page_source, we can receive the source of the page Selenium is currently on. Now use the Chrome developer tools to note the tag where the sender of each mail appears.
###Code
raw_list = soup.find_all('div', 'name _ccr(lst.from)')
raw_list
###Output
_____no_output_____
###Markdown
As shown above, we can simply use the find_all command.
###Code
send_list = [raw_list[n].find('a').get_text() for n in range(0, len(raw_list))]
send_list
###Output
_____no_output_____
###Markdown
We can easily obtain the list of mail senders. Now we need to close the Chrome driver.
###Code
driver.close()
###Output
_____no_output_____
###Markdown
The close() command terminates the running Chrome driver. 4-2 Getting gas station price information for each district (gu) of Seoul With the Selenium knowledge learned above, let's connect to https://goo.gl/VH1A5t and retrieve the gas station information for each district of Seoul.
###Code
from selenium import webdriver
driver = webdriver.Chrome('C:/Users/이재윤/Downloads/chromedriver_win32/chromedriver')
driver.get('http://www.naver.com')
driver.get('http://www.opinet.co.kr/searRgSelect.do')
###Output
_____no_output_____
###Markdown
We will only search within Seoul, so we leave the city field as is; what we need to change is the district field that initially shows 'Jongno-gu'. It is a list box, so we can read the list contents and step through them one by one.
###Code
gu_list_raw = driver.find_element_by_xpath('''//*[@id="SIGUNGU_NM0"]''')
gu_list = gu_list_raw.find_elements_by_tag_name('option')
###Output
_____no_output_____
###Markdown
Using the Chrome developer tools, click on the place where the district name is located, right-click the highlighted code on the right, and copy its XPath. With that XPath we find the element and store it in the gu_list_raw variable. The list of districts can then be obtained with find_elements_by_tag_name on the option tag.
###Code
gu_names = [option.get_attribute("value") for option in gu_list]
gu_names.remove('')
gu_names
###Output
_____no_output_____
###Markdown
The result obtained this way is shown above; we now know all of the district names. As a test, let's enter the first entry of gu_names, saved in the code above, into the tag that holds the district name.
###Code
element = driver.find_element_by_id("SIGUNGU_NM0")
element.send_keys(gu_names[0])
driver.save_screenshot('C:/Users/이재윤/images/002.jpg')
###Output
_____no_output_____
###Markdown
Then you can see that the district-name field has changed. Next we just need to press the search button, and we can find that button's XPath in the same way.
###Code
xpath ='''//*[@id="searRgSelect"]/span'''
element_sel_gu = driver.find_element_by_xpath(xpath).click()
###Output
_____no_output_____
###Markdown
We locate that XPath and append click() to it. Below the results area there is a 'save to Excel' button that we need to press to save the contents as an Excel file. As before, find its XPath and press the Excel save button.
###Code
xpath = """//*[@id="glopopd_excel"]/span"""
element_get_excel = driver.find_element_by_xpath(xpath).click()
###Output
_____no_output_____
###Markdown
Once the Excel save button has been pressed, the browser run by the Chrome driver naturally downloads the file to the designated download folder.
###Code
import time
from tqdm import tqdm_notebook
for gu in tqdm_notebook(gu_names):
element = driver.find_element_by_id('SIGUNGU_NM0')
element.send_keys(gu)
time.sleep(2)
xpath ='''//*[@id="searRgSelect"]/span'''
element_sel_gu = driver.find_element_by_xpath(xpath).click()
time.sleep(1)
xpath = '''//*[@id="glopopd_excel"]/span'''
element_get_excel = driver.find_element_by_xpath(xpath).click()
time.sleep(1)
###Output
<ipython-input-79-da81b98703dc>:4: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0
Please use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook`
for gu in tqdm_notebook(gu_names):
###Markdown
Now we just run the loop over the 25 districts of Seoul. time.sleep commands are used at appropriate points to wait in between, and the Excel files containing gas prices for all 25 districts end up in the download folder.
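As a design note: instead of fixed time.sleep pauses, Selenium's explicit waits can block until an element is actually available, which tends to be more robust on slow pages. The sketch below is only illustrative and reuses the driver, element id and gu_names from the cells above.

```python
# Hedged sketch: explicit wait instead of fixed sleeps.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)  # give up after 10 seconds
gu_select = wait.until(EC.presence_of_element_located((By.ID, "SIGUNGU_NM0")))
gu_select.send_keys(gu_names[0])
```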
###Code
driver.close()
###Output
_____no_output_____
###Markdown
4-3 Organizing the district-level gas price data Move the 25 Excel files we downloaded into the data folder we are working with. Reading them one by one with the Excel read command we learned earlier would take 25 lines, but Python has a nice module that solves this. Let's import pandas together with glob, a module that makes it easy to work with file paths.
###Code
import pandas as pd
from glob import glob
glob('C:/Users/이재윤/DataScience/data/지역*xls')
###Output
_____no_output_____
###Markdown
As shown above, we can use a pattern like ../data/지역*.xls, which matches every xls file in the ./data folder whose name starts with 지역 (region).
###Code
stations_files = glob('C:/Users/이재윤/DataScience/data/지역*xls')
stations_files
###Output
_____no_output_____
###Markdown
Now the stations_files variable holds the path and name of each Excel file as a list.
###Code
tmp_raw = []
for file_name in stations_files:
tmp = pd.read_excel(file_name, header=2)
tmp_raw.append(tmp)
station_raw = pd.concat(tmp_raw)
###Output
_____no_output_____
###Markdown
Then each file is read with read_excel in a loop and appended to the tmp_raw variable. Once the loop is done, the concat command merges them easily into one DataFrame.
###Code
station_raw.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 537 entries, 0 to 45
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 지역 537 non-null object
1 상호 537 non-null object
2 주소 537 non-null object
3 상표 537 non-null object
4 전화번호 537 non-null object
5 셀프여부 537 non-null object
6 고급휘발유 537 non-null object
7 휘발유 537 non-null object
8 경유 537 non-null object
9 실내등유 537 non-null object
dtypes: object(10)
memory usage: 46.1+ KB
###Markdown
We can see that information for 537 gas stations has been stored in total (see the output above). However, the price columns are not numeric (int, float), so we will deal with that later.
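A hedged alternative to the manual float() conversion used later in this chapter: pandas can coerce the raw price column in one step, turning the '-' placeholders into NaN so they are easy to count and filter. This is only a sketch; the notebook keeps its original approach below.

```python
# Hedged sketch: coerce the raw gasoline price column to numeric in one step.
prices = pd.to_numeric(station_raw['휘발유'], errors='coerce')
print(prices.isna().sum(), "stations without a recorded price")
```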
###Code
station_raw.head()
stations = pd.DataFrame({'Oil_store': station_raw['상호'],
'주소': station_raw['주소'],
'가격': station_raw['휘발유'],
'셀프': station_raw['셀프여부'],
'상표': station_raw['상표']
})
stations.head()
###Output
_____no_output_____
###Markdown
As above, take only the columns we want, rename them, and store the result in a variable called stations. In addition, we extract just the district name from the address so that we can also examine gas prices by district.
###Code
stations['구'] = [eachAddress.split()[1] for eachAddress in stations['주소']]
stations.head()
###Output
_____no_output_____
###Markdown
Looking at the output above, if we split each address in the 주소 (address) column on whitespace and pick the second word, that should be the district name. Checking only head() everything looks fine, but it is hard to inspect all five-hundred-plus rows this way, so let's run a unique() check.
###Code
stations['구'].unique()
###Output
_____no_output_____
###Markdown
The result shows that the entries '서울특별시' (Seoul Special City) and '특별시' (Special City) appear even though they are not district names.
###Code
stations[stations['구']=='서울특별시']
###Output
_____no_output_____
###Markdown
Checking the '서울특별시' rows, an unexpected character slipped in when the address was originally entered, so the word positions do not line up.
###Code
stations.loc[stations['구']=='서울특별시','구'] = '성동구'
stations['구'].unique()
###Output
_____no_output_____
###Markdown
Let's fix it directly, as above.
###Code
stations[stations['구']=='특별시']
###Output
_____no_output_____
###Markdown
We also searched for the rows labeled '특별시'. This time the problem arose because the address was written as '서울 특별시', with a space between 서울 and 특별시, instead of '서울특별시'.
###Code
stations.loc[stations['구']=='특별시','구']= '도봉구'
stations['구'].unique()
stations[stations['가격']=='-']
###Output
_____no_output_____
###Markdown
There is one more problem: the column holding the price is not numeric. On inspection it appears that a '-' character was entered whenever a price was not recorded. Since we cannot look up prices for these stations one by one, let's exclude the stations without price information.
###Code
stations = stations[stations['가격'] != '-']
stations.head()
###Output
_____no_output_____
###Markdown
The price information has not yet been converted to a numeric type.
###Code
stations['가격'] = [float(value) for value in stations['가격']]
###Output
_____no_output_____
###Markdown
Now the price column of the stations variable has been converted to float.
###Code
stations.reset_index(inplace=True)
del stations['index']
###Output
_____no_output_____
###Markdown
Because we merged 25 Excel files, the index can contain duplicates, so we re-number it from the start with the reset_index command. That creates an extra column named index, which we then delete.
###Code
stations.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 533 entries, 0 to 532
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Oil_store 533 non-null object
1 주소 533 non-null object
2 가격 533 non-null float64
3 셀프 533 non-null object
4 상표 533 non-null object
5 구 533 non-null object
dtypes: float64(1), object(5)
memory usage: 25.1+ KB
###Markdown
The data is now more or less ready. 4-4 Checking with a boxplot whether self-service stations are really cheaper We have prepared all the data needed to check whether self-service gas stations are really cheaper. Now let's verify it very simply with a plot.
###Code
# import platform
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import platform
from matplotlib import font_manager, rc
plt.rcParams['axes.unicode_minus'] = False
if platform.system() == 'Darwin':
rc('font', family='AppleGothic')
elif platform.system() == 'Windows':
path = "c:/Windows/Fonts/malgun.ttf"
font_name = font_manager.FontProperties(fname=path).get_name()
rc('font', family=font_name)
elif platform.system() == 'Linux':
path = "/usr/share/fonts/NanumGothic.ttf"
font_name = font_manager.FontProperties(fname=path).get_name()
plt.rc('font', family=font_name)
else:
print('Unknown system... sorry~~~~')
###Output
_____no_output_____
###Markdown
We prepare the code that handles Korean (Hangul) fonts in matplotlib.
###Code
stations.boxplot(column='가격', by='셀프', figsize=(12,8));
###Output
_____no_output_____
###Markdown
With a boxplot we can easily see the price distribution grouped by the self-service column. Overall, self-service stations show lower prices.
###Code
plt.figure(figsize=(12,8))
sns.boxplot(x="상표", y="가격", hue="셀프", data=stations, palette="Set3")
plt.show()
###Output
_____no_output_____
###Markdown
For Hyundai Oilbank, GS Caltex, S-Oil and SK Energy alike, self-service stations are cheaper. Among them, SK Energy sits in the highest price range.
###Code
plt.figure(figsize=(12,8))
sns.boxplot(x="상표",y="가격", data=stations, palette="Set3")
sns.swarmplot(x="상표", y="가격", data=stations, color=".6")
plt.show()
###Output
C:\Users\이재윤\AppData\Roaming\Python\Python39\site-packages\seaborn\categorical.py:1296: UserWarning: 5.6% of the points cannot be placed; you may want to decrease the size of the markers or use stripplot.
warnings.warn(msg, UserWarning)
C:\Users\이재윤\AppData\Roaming\Python\Python39\site-packages\seaborn\categorical.py:1296: UserWarning: 5.5% of the points cannot be placed; you may want to decrease the size of the markers or use stripplot.
warnings.warn(msg, UserWarning)
###Markdown
Drawing a swarmplot on top makes the distribution of the data even clearer. Looking at the data by brand rather than by self-service status, SK Energy has many stations in the high price range, while overall Hyundai Oilbank is the cheapest of the four major brands. So we can say that self-service stations are generally cheaper. Going further, let's also look at gas prices by district in Seoul and at the most and least expensive stations in the city. 4-5 Checking gas prices by district in Seoul
###Code
import json
import folium
import googlemaps
import warnings
warnings.simplefilter(action = "ignore", category = FutureWarning)
###Output
_____no_output_____
###Markdown
First, import the modules needed for the map. Now let's look at the most expensive gas stations in Seoul.
###Code
stations.sort_values(by='가격', ascending=False).head(10)
stations.sort_values(by='가격', ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Above are the gas stations with the lowest prices in Seoul. Using pivot_table, let's reshape the data into per-district price information, aggregating prices with the mean.
###Code
import numpy as np
gu_data = pd.pivot_table(stations, index=['구'], values=['가격'],
aggfunc=np.mean)
gu_data.head()
geo_path = 'C:/Users/이재윤/DataScience/data/02. skorea_municipalities_geo_simple.json'
geo_str = json.load(open(geo_path, encoding='utf-8'))
map = folium.Map(location=[37.5502, 126.982], zoom_start=10.5,
tiles='Stamen Toner')
map.choropleth(geo_data = geo_str,
data = gu_data,
columns=[gu_data.index, '가격'],
fill_color='PuRd', #PuRd, YlGnBu
key_on='feature.id')
map
###Output
_____no_output_____
###Markdown
Let's display this district-level information for Seoul on a map. 4-6 Plotting Seoul's 10 most and 10 least expensive gas stations on a map Save the 10 stations with the highest prices under the name oil_price_top10.
###Code
oil_price_top10 = stations.sort_values(by='가격', ascending=False).head(10)
oil_price_top10
###Output
_____no_output_____
###Markdown
Likewise, save the bottom 10 into oil_price_bottom10.
###Code
oil_price_bottom10 = stations.sort_values(by='가격', ascending=True).head(10)
oil_price_bottom10
gmap_key = "AIzaSyCkSfQUN2VCuFMbYkgJ7UpRz4NJGpgD-uU"
gmaps = googlemaps.Client(key=gmap_key)
from tqdm import tqdm_notebook
lat = []
lng = []
for n in tqdm_notebook(oil_price_top10.index):
try:
tmp_add = str(oil_price_top10['주소'][n]).split('(')[0]
tmp_map = gmaps.geocode(tmp_add)
tmp_loc = tmp_map[0].get('geometry')
lat.append(tmp_loc['location']['lat'])
lng.append(tmp_loc['location']['lng'])
except:
lat.append(np.nan)
lng.append(np.nan)
print('Here is nan !')
oil_price_top10['lat']=lat
oil_price_top10['lng']=lng
oil_price_top10
###Output
<ipython-input-66-dc0998aef091>:6: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0
Please use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook`
for n in tqdm_notebook(oil_price_top10.index):
###Markdown
Now we read latitude and longitude for the 10 most expensive stations. A try-except block is used in case an unexpected error occurs, for example when Google Maps cannot find an address. If an error is raised while running the try block, the code in the except block runs instead; in this case it stores NaN.
###Code
from tqdm import tqdm
lat = []
lng = []
for n in tqdm(oil_price_bottom10.index):
try:
tmp_add = str(oil_price_bottom10['주소'][n]).split('(')[0]
tmp_map = gmaps.geocode(tmp_add)
tmp_loc = tmp_map[0].get('geometry')
lat.append(tmp_loc['location']['lat'])
lng.append(tmp_loc['location']['lng'])
except:
lat.append(np.nan)
lng.append(np.nan)
print('Here is nan !')
oil_price_bottom10['lat']=lat
oil_price_bottom10['lng']=lng
oil_price_bottom10
###Output
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 4.75it/s]
###Markdown
The same procedure was applied to the 10 stations with the lowest gas prices.
###Code
map = folium.Map(location=[37.5202, 126.975], zoom_start=10.5)
for n in oil_price_top10.index:
if pd.notnull(oil_price_top10['lat'][n]):
folium.CircleMarker([oil_price_top10['lat'][n], oil_price_top10['lng'][n]],
radius=15, color='#CD3181',
fill_color='#CD3181').add_to(map)
for n in oil_price_bottom10.index:
if pd.notnull(oil_price_bottom10['lat'][n]):
folium.CircleMarker([oil_price_bottom10['lat'][n],
oil_price_bottom10['lng'][n]],
radius=15, color='#3186cc',
fill_color='#3186cc').add_to(map)
map
###Output
_____no_output_____ |
docs/source/.ipynb_checkpoints/20210120-Bifurcation_model_dynamic_barcoding-checkpoint.ipynb | ###Markdown
Synthetic data (dynamic barcoding)We simulated a differentiation process over a bifurcation fork. In this simulation, cells are barcoded, and the barcodes can accumulate mutations, which we call *dynamic barcoding*. We resample clones over time, mirroring the experimental design used to obtain the hematopoietic and reprogramming datasets. The dataset has two time points.
###Code
import cospar as cs
###Output
_____no_output_____
###Markdown
Loading data
###Code
adata_orig=cs.datasets.synthetic_bifurcation_dynamic_BC()
adata_orig
cs.pl.embedding(adata_orig,color='state_info')
###Output
_____no_output_____
###Markdown
Raw clonal data analysis
###Code
cs.pl.clones_on_manifold(adata_orig,selected_clone_list=[1])
selected_time_point='2'
cs.pl.fate_coupling_from_clones(adata_orig,selected_time_point, selected_fates=[], color_bar=True)
selected_time_point='2'
cs.pl.barcode_heatmap(adata_orig,selected_time_point, selected_fates=[], color_bar=True)
clonal_fate_bias,clone_id=cs.pl.clonal_fate_bias(adata_orig,selected_fate='1',N_resampling=100)
###Output
Current clone id: 0
Current clone id: 50
Current clone id: 100
Current clone id: 150
Current clone id: 200
Current clone id: 250
Current clone id: 300
Current clone id: 350
Current clone id: 400
Current clone id: 450
Current clone id: 500
Current clone id: 550
Current clone id: 600
Current clone id: 650
Current clone id: 700
Current clone id: 750
Current clone id: 800
Current clone id: 850
Current clone id: 900
Current clone id: 950
Current clone id: 1000
Current clone id: 1050
Current clone id: 1100
Current clone id: 1150
###Markdown
Transition map inference Transition map from multiple clonal time points.
###Code
noise_threshold=0.2 #
selected_clonal_time_points=['1','2']
adata=cs.tmap.infer_Tmap_from_multitime_clones(adata_orig,selected_clonal_time_points,smooth_array=[10,10,10],
CoSpar_KNN=20,noise_threshold=noise_threshold)
###Output
-------Step 1: Select time points---------
--> Clonal cell fraction (day 1-2): 0.6891891891891891
--> Clonal cell fraction (day 2-1): 0.6954397394136808
--> Numer of cells that are clonally related -- day 1: 459 and day 2: 854
Valid clone number 'FOR' post selection 664
Cell number=1313, Clone number=1250
-------Step 2: Compute the full Similarity matrix if necessary---------
Compute similarity matrix: computing new; beta=0.1
Smooth round: 1
--> Time elapsed: 0.004083871841430664
Smooth round: 2
--> Time elapsed: 0.02873396873474121
--> Orignal sparsity=0.20817922210860954, Thresholding
--> Final sparsity=0.19417261646571343
similarity matrix truncated (Smooth round=2): 0.040075063705444336
Smooth round: 3
--> Time elapsed: 0.08621597290039062
--> Orignal sparsity=0.38404861012768604, Thresholding
--> Final sparsity=0.3351262643439127
similarity matrix truncated (Smooth round=3): 0.050195932388305664
Smooth round: 4
--> Time elapsed: 0.14152908325195312
--> Orignal sparsity=0.4768350897459771, Thresholding
--> Final sparsity=0.4027396023010474
similarity matrix truncated (Smooth round=4): 0.06666898727416992
Smooth round: 5
--> Time elapsed: 0.1745469570159912
--> Orignal sparsity=0.5275939469831369, Thresholding
--> Final sparsity=0.45277060109789263
similarity matrix truncated (Smooth round=5): 0.05336403846740723
Save the matrix at every 5 rounds
Smooth round: 6
--> Time elapsed: 0.1903398036956787
--> Orignal sparsity=0.5694367474010631, Thresholding
--> Final sparsity=0.4926809945038464
similarity matrix truncated (Smooth round=6): 0.055147647857666016
Smooth round: 7
--> Time elapsed: 0.240264892578125
--> Orignal sparsity=0.608515860121832, Thresholding
--> Final sparsity=0.5261845052848488
similarity matrix truncated (Smooth round=7): 0.05676889419555664
Smooth round: 8
--> Time elapsed: 0.2283029556274414
--> Orignal sparsity=0.6446768486935345, Thresholding
--> Final sparsity=0.5550340150466821
similarity matrix truncated (Smooth round=8): 0.05555582046508789
Smooth round: 9
--> Time elapsed: 0.22017908096313477
--> Orignal sparsity=0.6754060786633497, Thresholding
--> Final sparsity=0.5813643150325208
similarity matrix truncated (Smooth round=9): 0.05191326141357422
Smooth round: 10
--> Time elapsed: 0.2646317481994629
--> Orignal sparsity=0.7013976777663917, Thresholding
--> Final sparsity=0.6061248827788303
similarity matrix truncated (Smooth round=10): 0.05305814743041992
Save the matrix at every 5 rounds
-------Step 3: Optimize the transition map recursively---------
---------Compute the transition map-----------
Compute similarity matrix: load existing data
--> Time elapsed: 0.002672910690307617
--> Time elapsed: 0.010859012603759766
--> Time elapsed: 0.002843141555786133
--> Time elapsed: 0.00743412971496582
Compute similarity matrix: load existing data
--> Time elapsed: 0.002441883087158203
--> Time elapsed: 0.0065860748291015625
--> Time elapsed: 0.0036690235137939453
--> Time elapsed: 0.008401155471801758
Compute similarity matrix: load existing data
--> Time elapsed: 0.002808094024658203
--> Time elapsed: 0.009913921356201172
--> Time elapsed: 0.002871990203857422
--> Time elapsed: 0.0071430206298828125
Current iteration: 0
Use smooth_round=10
Clone normalization
--> Relative time point pair index: 0
--> Clone id: 0
--> Clone id: 1000
Start to smooth the refined clonal map
Phase I: time elapsed -- 0.0022797584533691406
Phase II: time elapsed -- 0.005179882049560547
Current iteration: 1
Use smooth_round=10
Clone normalization
--> Relative time point pair index: 0
--> Clone id: 0
--> Clone id: 1000
Start to smooth the refined clonal map
Phase I: time elapsed -- 0.001950979232788086
Phase II: time elapsed -- 0.005830049514770508
Current iteration: 2
Use smooth_round=10
Clone normalization
--> Relative time point pair index: 0
--> Clone id: 0
--> Clone id: 1000
Start to smooth the refined clonal map
Phase I: time elapsed -- 0.00185394287109375
Phase II: time elapsed -- 0.0042629241943359375
No need for Final Smooth (i.e., clonally states are the final state space for Tmap)
----Demultiplexed transition map----
Clone normalization
--> Relative time point pair index: 0
--> Clone id: 0
--> Clone id: 1000
-----------Total used time: 17.957317113876343 s ------------
###Markdown
Generate demultiplexed map within each clone (Optional, as this map has been generated already)
###Code
run_demultiplex=False
if run_demultiplex:
demulti_threshold=0.2 # This threshold should be smaller, ass the map has been further smoothed to expand to more states.
cs.tmap.infer_intraclone_Tmap(adata,demulti_threshold=demulti_threshold)
cs.pl.fate_bias_from_binary_competition(adata,selected_fates=['0','1'],used_map_name='transition_map',
plot_target_state=False,map_backwards=True,sum_fate_prob_thresh=0)
###Output
_____no_output_____
###Markdown
Transition map from a single clonal time point
###Code
initial_time_points=['1']
clonal_time_point='2'
adata=cs.tmap.infer_Tmap_from_one_time_clones(adata_orig,initial_time_points,clonal_time_point,
Clone_update_iter_N=1,initialize_method='OT',smooth_array=[10,10,10],
noise_threshold=0.2,compute_new=False)
cs.pl.fate_bias_from_binary_competition(adata,selected_fates=['0','1'],used_map_name='transition_map',
plot_target_state=False,map_backwards=True,sum_fate_prob_thresh=0)
###Output
_____no_output_____
###Markdown
Transition map from only the clonal information
###Code
cs.tmap.infer_Tmap_from_clonal_info_alone(adata)
cs.pl.fate_bias_from_binary_competition(adata,selected_fates=['0','1'],used_map_name='clonal_transition_map',
plot_target_state=False,map_backwards=True,sum_fate_prob_thresh=0)
###Output
Use all clones (naive method)
|
model/brian/EI_model-Single_pattern.ipynb | ###Markdown
EI balance in the CA3-CA1 feedforward network - tuned vs untuned weights
###Code
from brian2 import *
from brian_utils import *
import numpy as np
###Output
/usr/local/lib/python2.7/dist-packages/brian2/core/variables.py:174: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
return np.issubdtype(np.bool, self.dtype)
###Markdown
Parameterizing the CA3-CA1 network
###Code
## Network parameters
## Connectivity
p_CA3_CA1 = 0.05
p_CA3_I = 0.2
p_I_CA1 = 0.2
N_CA3 = 100 # CA3
N_I = 20 # Interneurons
N_CA1 = 100 #CA1 cells
# Cell intrinsic Parameters
Cm = 100*pF
gl = 6.25*nS # This was initially 5e-5 siemens
# Reversal potentials
El = -65*mV # Was -60mV
Ee = 0*mV
Ei = -80*mV
minThreshold, maxThreshold = -60, -30
CA1_VT = np.linspace(minThreshold, maxThreshold, N_CA1 ) *mV
## Synaptic parameters
# Synaptic time constants
taue = 10 *ms
taui = 20 *ms
we = 10*nS # excitatory synaptic weight
wi = 10*nS # inhibitory synaptic weight, was -67 nS
del_CA3_CA1 = 2*ms
del_CA3_I = 2*ms
del_I_CA1 = 2*ms
# The model
### CA3
spiking_eqs = Equations('''
dv_/dt = (-v_+inp)/(20*ms) : 1
inp : 1
''')
### Interneurons
eqs_I = Equations('''
dv/dt = (gl*(El-v)+ge*(Ee-v))/Cm : volt
dge/dt = -ge*(1./taue) : siemens
dgi/dt = -gi*(1./taui) : siemens
''')
### CA1
eqs_CA1 = Equations('''
dv/dt = (gl*(El-v)+ge*(Ee-v)+gi*(Ei-v))/Cm : volt
v_th : volt # neuron-specific threshold
dge/dt = -ge*(1./taue) : siemens
dgi/dt = -gi*(1./taui) : siemens
''')
## Simulation parameters
K = 10 # Number of neurons stimulated (based on fraction of area covered)
inp = randomInput(N=N_CA3,K=K)
active_indices = np.nonzero(inp)[0]
input_time = 1*ms
spike_times = K*[input_time]
runtime = 100 # milliseconds
waittime = 1000 # milliseconds
## Plasticity parameters
inhChange = 5*nS
### CA1 inh plasticity
inh_plasticity = 'w+=inhChange'
###Output
_____no_output_____
###Markdown
Setting up model
###Code
Pe = SpikeGeneratorGroup(N_CA3, active_indices, spike_times)
Pi = NeuronGroup(N_I, model=eqs_I, threshold='v>-45*mV', refractory=0.5*ms, reset='v=El',
method='exponential_euler')
P_CA1 = NeuronGroup(N_CA1, model=eqs_CA1, threshold='v > v_th', refractory=1*ms, reset='v=El',
method='exponential_euler')
P_CA1.v_th = CA1_VT
Ce = Synapses(Pe, P_CA1, 'w: siemens', on_pre='ge+=w')
Ce_i = Synapses(Pe, Pi, 'w: siemens', on_pre='ge+=w')
Ci = Synapses(Pi, P_CA1, 'w: siemens', on_pre='gi+=w', on_post=inh_plasticity)
Ce.connect(p=p_CA3_CA1)
Ce_i.connect(p=p_CA3_I)
Ci.connect(p=p_I_CA1)
W_CA3_CA1 = we*rand(len(Ce.w))
W_CA3_I = we*rand(len(Ce_i.w))
W_I_CA1 = wi*rand(len(Ci.w))
Ce.w = W_CA3_CA1
Ce_i.w = W_CA3_I
Ci.w = W_I_CA1
Ce.delay = del_CA3_CA1
Ce_i.delay = del_CA3_I
Ci.delay = del_I_CA1
# Initialization
Pi.v = El #+ (randn() * 5 - 5)*mV'
P_CA1.v = El #'El + (randn() * 5 - 5)*mV'
# Record a few traces
input_spikes = SpikeMonitor(Pe)
interneuron_volt = StateMonitor(Pi, 'v', record=True)
ca1_volt = StateMonitor(P_CA1, 'v', record=True)
conductances = StateMonitor(P_CA1, ['ge','gi'], record=True)
output_spikes = SpikeMonitor(P_CA1)
visualise_connectivity(Ce)
visualise_connectivity(Ce_i)
visualise_connectivity(Ci)
###Output
_____no_output_____
###Markdown
Plotting example active neurons on the grid
###Code
gridPlot(inp)
run(runtime * ms, report='text')
run(waittime * ms, report='text')
neuronIndex = 2
fig, ax = plt.subplots(nrows=5,sharex=True)
# ax.plot(trace.t/ms, trace.v/mV, label="CA3")
#raster_plot(inp)
for j in range(N_CA3):
ax[0].plot(input_spikes.t/ms, input_spikes.i, '|k', label="CA3")
# frac_inh_active = 0
# for j in range(N_I):
# if not np.sum(interneuron_volt[j].v-El) == 0:
# ax[1].plot(interneuron_volt.t/ms, interneuron_volt[j].v/mV, label="I")
# frac_inh_active+=1
# frac_inh_active/=float(N_I)
# print(frac_inh_active)
ax[2].plot(ca1_volt.t/ms, ca1_volt[neuronIndex].v/mV, label="CA1")
ax[3].plot(conductances.t/ms, conductances[neuronIndex].ge/nS, label="CA1_exc")
ax3_copy = ax[3].twinx()
ax3_copy.plot(conductances.t/ms, conductances[neuronIndex].gi/nS, color='orange', label="CA1_inh")
for j in range(N_CA1):
ax[4].plot(output_spikes.t/ms, output_spikes.i, '|k', label="CA1")
# ax1 = fig.add_axes()
# ax1.plot(trace.t/ms, trace.i, c='b', label="CA3")
# ax[0].set_xlabel('t (ms)')
ax[0].set_ylabel('Neuron #')
# ax[1].set_xlabel('t (ms)')
ax[1].set_ylabel('v (mV)')
ax[2].set_ylabel('v (mV)')
ax[3].set_xlabel('t (ms)')
ax[3].set_ylabel('g (nS)')
ax[4].set_ylabel('Neuron #')
plt.legend(loc = 'center right')
plt.show()
###Output
WARNING /usr/local/lib/python2.7/dist-packages/brian2/monitors/statemonitor.py:287: FutureWarning: Conversion of the second argument of issubdtype from `int` to `np.signedinteger` is deprecated. In future, it will be treated as `np.int64 == np.dtype(int).type`.
if np.issubdtype(dtype, np.int):
[py.warnings]
WARNING /usr/local/lib/python2.7/dist-packages/brian2/monitors/statemonitor.py:57: FutureWarning: Conversion of the second argument of issubdtype from `int` to `np.signedinteger` is deprecated. In future, it will be treated as `np.int64 == np.dtype(int).type`.
if np.issubdtype(dtype, np.int) and not isinstance(item, np.ndarray):
[py.warnings]
###Markdown
Giving repeated patterns
###Code
numInputPatterns = 1
numRepeats = 10
input_spikes = {}
interneuron_volt = {}
ca1_volt = {}
conductances = {}
output_spikes = {}
for j in range(numInputPatterns):
K = np.random.randint(1,20) # Number of neurons stimulated (based on fraction of area covered)
inp = randomInput(N=N_CA3,K=K)
active_indices = np.nonzero(inp)[0]
input_time = 1*ms
spike_times = K*[input_time]
for k in range(numRepeats):# Record a few traces
input_spikes[(j,k)] = SpikeMonitor(Pe)
interneuron_volt[(j,k)] = StateMonitor(Pi, 'v', record=True)
ca1_volt[(j,k)] = StateMonitor(P_CA1, 'v', record=True)
conductances[(j,k)] = StateMonitor(P_CA1, ['ge','gi'], record=True)
output_spikes[(j,k)] = SpikeMonitor(P_CA1)
Pe.set_spikes(active_indices, spike_times)
run(runtime * ms, report='text')
# run(waittime * ms, report='text')
# tuneSynapses()
neuronIndex = 2
fig, ax = plt.subplots(nrows=5,sharex=True)
# ax.plot(trace.t/ms, trace.v/mV, label="CA3")
#raster_plot(inp)
for iteration in range(N_CA3):
ax[0].plot(input_spikes.t/ms, input_spikes.i, '|k', label="CA3")
# frac_inh_active = 0
# for j in range(N_I):
# if not np.sum(interneuron_volt[j].v-El) == 0:
# ax[1].plot(interneuron_volt.t/ms, interneuron_volt[j].v/mV, label="I")
# frac_inh_active+=1
# frac_inh_active/=float(N_I)
# print(frac_inh_active)
ax[2].plot(ca1_volt.t/ms, ca1_volt[neuronIndex].v/mV, label="CA1")
ax[3].plot(conductances.t/ms, conductances[neuronIndex].ge/nS, label="CA1_exc")
ax3_copy = ax[3].twinx()
ax3_copy.plot(conductances.t/ms, conductances[neuronIndex].gi/nS, color='orange', label="CA1_inh")
for j in range(N_CA1):
ax[4].plot(output_spikes.t/ms, output_spikes.i, '|k', label="CA1")
# ax1 = fig.add_axes()
# ax1.plot(trace.t/ms, trace.i, c='b', label="CA3")
# ax[0].set_xlabel('t (ms)')
ax[0].set_ylabel('Neuron #')
# ax[1].set_xlabel('t (ms)')
ax[1].set_ylabel('v (mV)')
ax[2].set_ylabel('v (mV)')
ax[3].set_xlabel('t (ms)')
ax[3].set_ylabel('g (nS)')
ax[4].set_ylabel('Neuron #')
plt.legend(loc = 'center right')
plt.show()
neuronIndices = np.arange(N_CA1)
step = len(conductances.t /ms)/51
ie_ratio = []
fig, ax = plt.subplots()
color_cell = matplotlib.cm.plasma(np.linspace(0,1,N_CA1))
for neuronIndex in neuronIndices[99:100]:
for tslice in range(0,len(conductances.t /ms),step):
currentRatio = np.max(conductances[neuronIndex].gi [tslice:tslice+step]/nS) / np.max(conductances[neuronIndex].ge [tslice:tslice+step]/nS)
if 0 < currentRatio < 2e6:
ie_ratio.append(currentRatio)
else:
ie_ratio.append(np.nan)
ax.plot(ie_ratio, '.-', color=color_cell[neuronIndex])
plt.show()
###Output
WARNING /usr/local/lib/python2.7/dist-packages/ipykernel_launcher.py:8: RuntimeWarning: invalid value encountered in double_scalars
[py.warnings]
###Markdown
Giving multiple patterns
###Code
numInputPatterns = 1
numRepeats = 100
for j in range(numInputPatterns):
K = np.random.randint(1,20) # Number of neurons stimulated (based on fraction of area covered)
inp = randomInput(N=N_CA3,K=K)
active_indices = np.nonzero(inp)[0]
input_time = 1*ms
spike_times = K*[input_time]
for k in range(numRepeats):
Pe.set_spikes(active_indices, spike_times)
run(runtime * ms, report='text')
run(waittime * ms, report='text')
# tuneSynapses()
conductances[neuronIndex].gi[tslice:tslice+step] / nS
###Output
_____no_output_____
###Markdown
Tuning synapses
###Code
numInputPatterns = 30
for j in range(numInputPatterns):
K = np.random.randint(1,20) # Number of neurons stimulated (based on fraction of area covered)
inp = randomInput(N=N_CA3,K=K)
active_indices = np.nonzero(inp)[0]
input_time = 1*ms
spike_times = K*[input_time]
Pe.set_spikes(active_indices, spike_times)
run(runtime * ms, report='text')
run(waittime * ms, report='text')
###Output
_____no_output_____ |
notebooks/LRL_ASR_004.ipynb | ###Markdown
Low Resource Language ASR model Lars Ericson, Catskills Research Company, OpenASR20
###Code
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
%load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Language Analysis
###Code
from Language import Language
L=Language('amharic')
L.visualization()
dfL=L.sample_statistics()
dfL
###Output
_____no_output_____
###Markdown
Sub-splitting based on words per split and average samples per grapheme (imperfect, introduces error). Trim all samples aggressively and examine the longest trimmed 1-word sample and the resulting distribution.
###Code
fat=L.splits
skinny=fat.aggressive_clip_ends()
one_word_fat=[x for x in fat.artifacts if x.target.n_words==1]
one_word_skinny=[x for x in skinny.artifacts if x.target.n_words==1]
both=list(zip(one_word_fat, one_word_skinny))
one_word=sorted(both, key = lambda x: x[0].source.n_samples)
len(one_word)
(old,new)=one_word[-1]
old.display()
new.display()
fat.diff_sample_statistics(skinny)
fat.diff_visualization(skinny)
###Output
_____no_output_____
###Markdown
Split the longest sample by silence gaps, aggressively trim each, then allocate graphemes evenly over the clips, maximizing alignment of graphemes to word boundaries on silence
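The allocation step can be sketched as follows. This is a simplified, hypothetical illustration of the idea (split the transcription in proportion to clip length, then snap each cut to a word boundary), not the actual SubSplitCorpus implementation; the helper name and the toy example are assumptions.

```python
# Minimal sketch of proportional grapheme allocation with word-boundary snapping.
import numpy as np

def allocate_graphemes(transcript, clip_lengths):
    """Split `transcript` across clips in proportion to clip length,
    snapping each cut to the nearest space so words are not broken."""
    clip_lengths = np.asarray(clip_lengths, dtype=float)
    cuts = np.cumsum(clip_lengths / clip_lengths.sum()) * len(transcript)
    pieces, start = [], 0
    for cut in cuts[:-1]:
        idx = int(round(cut))
        left = transcript.rfind(' ', start + 1, idx)   # nearest space to the left
        right = transcript.find(' ', idx)              # nearest space to the right
        if right == -1 or (left != -1 and idx - left <= right - idx):
            idx = left
        else:
            idx = right
        if idx <= start:                               # fall back if no usable space
            idx = min(len(transcript), start + 1)
        pieces.append(transcript[start:idx].strip())
        start = idx
    pieces.append(transcript[start:].strip())
    return pieces

print(allocate_graphemes("the quick brown fox jumps", [3, 2, 5]))
# -> ['the quick', 'brown', 'fox jumps']
```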
###Code
df=skinny.sample_statistics()
max_words=int(df[(df.Corpus=="Split Transcription") & (df.Units=="Length in words") & (df.Measurement=="Max")].Value.values[0])
max_words
target_max_sample_length=int(df[(df.Corpus=="Split Speech") & (df.Units=="Length in samples") & (df.Measurement=="Median")].Value.values[0])
target_max_sample_length
df
###Output
_____no_output_____
###Markdown
Split corpus down to median length
###Code
del SubSplitCorpus
from SubSplitCorpus import SubSplitCorpus
from multiprocessing import Pool
if __name__ == '__main__':
with Pool(16) as pool:
subsplits=SubSplitCorpus(pool, skinny)
skinny.diff_sample_statistics(subsplits)
skinny.diff_visualization(subsplits)
###Output
_____no_output_____
###Markdown
Make a test training corpus
###Code
clips=[]
for i, sound in enumerate(sounds):
fn=f"frob/clip_{i}.wav"
sf.write(fn, sound, C.sample_rate)
clips.append(fn)
text='infer.txt'
manifest_fn='manifest.csv'
manifest='\n'.join([f'{audio},{text}' for audio in clips])
with open(manifest_fn, 'w') as f:
    f.write(manifest)
!cat manifest.csv
###Output
_____no_output_____
###Markdown
ASR end-to-end speech-to-grapheme model stacked on top of grapheme-to-grapheme corrector model
###Code
C.extension='_gradscaler'
C.batch_size=12
C.save_every = 5
C.start_from = 246
import json, sys, os, librosa, random, math, time, torch
sys.path.append('/home/catskills/Desktop/openasr20/end2end_asr_pytorch')
os.environ['IN_JUPYTER']='True'
import numpy as np
import pandas as pd
from itertools import groupby
from operator import itemgetter
import soundfile as sf
from utils import constant
from utils.functions import load_model
from utils.data_loader import SpectrogramDataset, AudioDataLoader, BucketingSampler
from clip_ends import clip_ends
import torch.optim as optim
import torchtext
from torchtext.data import Field, BucketIterator
from torchtext.data import TabularDataset
import matplotlib.ticker as ticker
from IPython.display import Audio
from unidecode import unidecode
from seq_to_seq import *
args=constant.args
args.continue_from=None
args.cuda = True
args.labels_path = C.grapheme_dictionary_fn
args.lr = 1e-4
args.name = C.model_name
args.save_folder = f'save'
args.epochs = 1000
args.save_every = 1
args.feat_extractor = f'vgg_cnn'
args.dropout = 0.1
args.num_layers = 4
args.num_heads = 8
args.dim_model = 512
args.dim_key = 64
args.dim_value = 64
args.dim_input = 161
args.dim_inner = 2048
args.dim_emb = 512
args.shuffle=True
args.min_lr = 1e-6
args.k_lr = 1
args.sample_rate=C.sample_rate
args.continue_from=C.best_model
args.augment=True
audio_conf = dict(sample_rate=args.sample_rate,
window_size=args.window_size,
window_stride=args.window_stride,
window=args.window,
noise_dir=args.noise_dir,
noise_prob=args.noise_prob,
noise_levels=(args.noise_min, args.noise_max))
with open(args.labels_path, 'r') as label_file:
labels = str(''.join(json.load(label_file)))
# add PAD_CHAR, SOS_CHAR, EOS_CHAR
labels = constant.PAD_CHAR + constant.SOS_CHAR + constant.EOS_CHAR + labels
label2id, id2label = {}, {}
count = 0
for i in range(len(labels)):
if labels[i] not in label2id:
label2id[labels[i]] = count
id2label[count] = labels[i]
count += 1
else:
print("multiple label: ", labels[i])
model, opt, epoch, metrics, loaded_args, label2id, id2label = load_model(constant.args.continue_from)
train_data = SpectrogramDataset(audio_conf, manifest_filepath_list=[manifest_fn],
label2id=label2id, normalize=True, augment=args.augment)
args.batch_size=1
train_sampler = BucketingSampler(train_data, batch_size=args.batch_size)
train_loader = AudioDataLoader(train_data, num_workers=args.num_workers, batch_sampler=train_sampler)
strs_hyps=[]
for i, (data) in enumerate(tqdm(train_loader)):
src, tgt, _, src_lengths, tgt_lengths = data
src = src.cuda()
tgt = tgt.cuda()
pred, gold, hyp_seq, gold_seq = model(src, src_lengths, tgt, verbose=False)
seq_length = pred.size(1)
for ut_hyp in hyp_seq:
str_hyp = ""
for x in ut_hyp:
if int(x) == constant.PAD_TOKEN:
break
str_hyp = str_hyp + id2label[int(x)]
strs_hyps.append(str_hyp)
for j in range(len(strs_hyps)):
strs_hyps[j] = strs_hyps[j].replace(constant.SOS_CHAR, '').replace(constant.EOS_CHAR, '')
gold_tgt = ' '.join([x.strip() for x in gold_tgt.split(' ') if x])
gold_tgt
pred=' '.join(strs_hyps)
pred
error_correction_training_fn='frob/pred_gold.tsv'
with open(error_correction_training_fn, 'w', encoding='utf-8') as f:
f.write(f"{gold_tgt}\t{pred}")
!cat frob/pred_gold.tsv
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
tokenize=lambda x: [y for y in x]
SRC = Field(tokenize = tokenize,
init_token = '<sos>',
eos_token = '<eos>',
lower = True,
batch_first = True)
TRG = Field(tokenize = tokenize,
init_token = '<sos>',
eos_token = '<eos>',
lower = True,
batch_first = True)
from torchtext.data import Iterator
train_data = TabularDataset(
path=error_correction_training_fn,
format='tsv',
fields=[('trg', TRG), ('src', SRC)])
train_iterator = Iterator(train_data, batch_size=1)
gold_fns=list(sorted(glob(f'{C.build_dir}/transcription_split/*.txt')))
len(gold_fns)
import os
goldrows=[]
for fn in gold_fns:
with open(fn, 'r', encoding='utf-8') as f:
goldrows.append(f.read())
MAX_LENGTH=max(len(x) for x in goldrows)+10
MAX_LENGTH
graphemes=''.join([x for x in C.grapheme_dictionary])
MIN_FREQ=1
SRC.build_vocab(graphemes, min_freq = MIN_FREQ)
TRG.build_vocab(graphemes, min_freq = MIN_FREQ)
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
INPUT_DIM, OUTPUT_DIM
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
BATCH_SIZE = 1
HID_DIM = 256
ENC_LAYERS = 3
DEC_LAYERS = 3
ENC_HEADS = 8
DEC_HEADS = 8
ENC_PF_DIM = 512
DEC_PF_DIM = 512
ENC_DROPOUT = 0.1
DEC_DROPOUT = 0.1
enc = Encoder(INPUT_DIM,
HID_DIM,
ENC_LAYERS,
ENC_HEADS,
ENC_PF_DIM,
ENC_DROPOUT,
device,
MAX_LENGTH)
dec = Decoder(OUTPUT_DIM,
HID_DIM,
DEC_LAYERS,
DEC_HEADS,
DEC_PF_DIM,
DEC_DROPOUT,
device,
MAX_LENGTH)
SRC_PAD_IDX = SRC.vocab.stoi[SRC.pad_token]
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
model = Seq2Seq(enc, dec, SRC_PAD_IDX, TRG_PAD_IDX, device).to(device)
model_fn='tut6-model.pt'
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
def initialize_weights(m):
if hasattr(m, 'weight') and m.weight.dim() > 1:
nn.init.xavier_uniform_(m.weight.data)
model.apply(initialize_weights);
if os.path.exists(model_fn):
model.load_state_dict(torch.load(model_fn))
LEARNING_RATE = 0.0005
optimizer = torch.optim.Adam(model.parameters(), lr = LEARNING_RATE)
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
model.train()
for j in range(100):
epoch_loss = 0
for i, batch in enumerate(train_iterator):
src = batch.src.to(device)
trg = batch.trg.to(device)
optimizer.zero_grad()
output, _ = model(src, trg[:,:-1])
output_dim = output.shape[-1]
output = output.contiguous().view(-1, output_dim)
trg = trg[:,1:].contiguous().view(-1)
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1)
optimizer.step()
epoch_loss += loss.item()
print(j, epoch_loss)
pred=output.argmax(1).cpu().detach().numpy()
pred
''.join([SRC.vocab.itos[x] for x in src.cpu().numpy()[0]])
silver=''.join([TRG.vocab.itos[x] for x in trg.cpu().numpy()]).split('<eos>')[0]
silver
pred=''.join([TRG.vocab.itos[x] for x in pred]).split('<eos>')[0]
pred
from utils.metrics import calculate_cer, calculate_wer
calculate_cer(pred, silver)
calculate_wer(pred, silver)
###Output
_____no_output_____ |
permamodel/notebooks/Ku_2D.ipynb | ###Markdown
This model was developed by the Permamodel working group. The basic theory is Kudryavtsev's method. Reference: Anisimov, O. A., Shiklomanov, N. I., & Nelson, F. E. (1997). Global warming and active-layer thickness: results from transient general circulation models. Global and Planetary Change, 15(3), 61-77.
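For orientation only: Kudryavtsev's method refines Stefan-type thaw solutions with corrections for snow, vegetation and soil properties. The sketch below computes the plain Stefan thaw depth as a rough order-of-magnitude cross-check; every number in it is an illustrative assumption, not a parameter of the Ku model run in this notebook.

```python
# Rough cross-check (NOT the Ku formulation): classic Stefan thaw depth.
import numpy as np

k_thawed = 1.0          # thermal conductivity of thawed soil [W m-1 K-1] (assumed)
thawing_index = 1000.0  # summer degree-days above 0 C [deg C day] (assumed)
water_content = 0.3     # volumetric water content [-] (assumed)
L_ice = 3.34e8          # volumetric latent heat of fusion of water [J m-3]

I_t = thawing_index * 86400.0  # degree-days -> degree-seconds
z_stefan = np.sqrt(2.0 * k_thawed * I_t / (water_content * L_ice))
print("Stefan thaw depth ~ %.2f m" % z_stefan)
```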
###Code
import os,sys
sys.path.append('../../permamodel/')
from permamodel.components import bmi_Ku_component
from permamodel import examples_directory
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap, addcyclic
import matplotlib as mpl
print examples_directory
cfg_file = os.path.join(examples_directory, 'Ku_method_2D.cfg')
x = bmi_Ku_component.BmiKuMethod()
x.initialize(cfg_file)
y0 = x.get_value('datetime__start')
y1 = x.get_value('datetime__end')
for i in np.linspace(y0,y1,y1-y0+1):
x.update()
print i
x.finalize()
ALT = x.get_value('soil__active_layer_thickness')
TTOP = x.get_value('soil__temperature')
LAT = x.get_value('latitude')
LON = x.get_value('longitude')
SND = x.get_value('snowpack__depth')
LONS, LATS = np.meshgrid(LON, LAT)
#print np.shape(ALT)
#print np.shape(LONS)
###Output
../../permamodel/permamodel/examples
Ku model component: Initializing...
2014.0
2015.0
2016.0
***
Writing output finished!
Please look at./NA_ALT.nc and ./NA_TPS.nc
###Markdown
Spatially visualize active layer thickness:
###Code
fig=plt.figure(figsize=(8,4.5))
ax = fig.add_axes([0.05,0.05,0.9,0.85])
m = Basemap(llcrnrlon=-145.5,llcrnrlat=1.,urcrnrlon=-2.566,urcrnrlat=46.352,\
rsphere=(6378137.00,6356752.3142),\
resolution='l',area_thresh=1000.,projection='lcc',\
lat_1=50.,lon_0=-107.,ax=ax)
X, Y = m(LONS, LATS)
m.drawcoastlines(linewidth=1.25)
# m.fillcontinents(color='0.8')
m.drawparallels(np.arange(-80,81,20),labels=[1,1,0,0])
m.drawmeridians(np.arange(0,360,60),labels=[0,0,0,1])
clev = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
cs = m.contourf(X, Y, ALT, clev, cmap=plt.cm.PuBu_r, extend='both')
cbar = m.colorbar(cs)
cbar.set_label('m')
plt.show()
# print x._values["ALT"][:]
ALT2 = np.reshape(ALT, np.size(ALT))
ALT2 = ALT2[np.where(~np.isnan(ALT2))]
print 'Simulated ALT:'
print 'Max:', np.nanmax(ALT2),'m', '75% = ', np.percentile(ALT2, 75)
print 'Min:', np.nanmin(ALT2),'m', '25% = ', np.percentile(ALT2, 25)
plt.hist(ALT2)
###Output
_____no_output_____
###Markdown
Spatially visualize mean annual ground temperature:
###Code
fig2=plt.figure(figsize=(8,4.5))
ax2 = fig2.add_axes([0.05,0.05,0.9,0.85])
m2 = Basemap(llcrnrlon=-145.5,llcrnrlat=1.,urcrnrlon=-2.566,urcrnrlat=46.352,\
rsphere=(6378137.00,6356752.3142),\
resolution='l',area_thresh=1000.,projection='lcc',\
lat_1=50.,lon_0=-107.,ax=ax2)
X, Y = m2(LONS, LATS)
m2.drawcoastlines(linewidth=1.25)
# m.fillcontinents(color='0.8')
m2.drawparallels(np.arange(-80,81,20),labels=[1,1,0,0])
m2.drawmeridians(np.arange(0,360,60),labels=[0,0,0,1])
clev = np.linspace(start=-10, stop=0, num =11)
cs2 = m2.contourf(X, Y, TTOP, clev, cmap=plt.cm.seismic, extend='both')
cbar2 = m2.colorbar(cs2)
cbar2.set_label('Ground Temperature ($^\circ$C)')
plt.show()
# # print x._values["ALT"][:]
TTOP2 = np.reshape(TTOP, np.size(TTOP))
TTOP2 = TTOP2[np.where(~np.isnan(TTOP2))]
# Hist plot:
plt.hist(TTOP2)
mask = x._model.mask
print np.shape(mask)
plt.imshow(mask)
print np.nanmin(x._model.tot_percent)
###Output
1.0
|
notebooks/language_popularity_2017_2018.ipynb | ###Markdown
Language Reality Check for 2017 with data from 2018 The questions we want to answer are: 1. How big was the change in language use between 2017 and 2018, actually? 2. How big is the difference between what developers wanted to work with in 2018 and what they actually worked with? Since we want to compare the data from 2018 with the data from 2017, we will first load the data from 2017 and process it as in the *language_popularity_2017* notebook.
###Code
from pathlib import Path
import pandas as pd
DATA_DIR = Path() / '..' / 'data'
FILE_NAME_2017 = 'survey_results_public_2017.csv'
df_2017 = pd.read_csv(str(DATA_DIR / FILE_NAME_2017))
df_q1 = df_2017[['Respondent', 'HaveWorkedLanguage']]
df_q1 = df_q1[df_q1['HaveWorkedLanguage'].notna()]
# Normalize the HaveWorkedLanguage column
df_q1 = df_q1.assign(HaveWorkedLanguage=df_q1['HaveWorkedLanguage'].str.split(';')).explode('HaveWorkedLanguage')
df_q1['HaveWorkedLanguage'] = df_q1['HaveWorkedLanguage'].apply(lambda x: str(x).strip())
counts_per_lang = df_q1['HaveWorkedLanguage'].value_counts()
percentage_per_lang = counts_per_lang / df_q1['Respondent'].nunique()
percentage_per_lang.head()
df_q2 = df_2017[['Respondent', 'WantWorkLanguage']]
df_q2 = df_q2[df_q2['WantWorkLanguage'].notna()]
# Normalize the WantWorkLanguage column
df_q2 = df_q2.assign(WantWorkLanguage=df_q2['WantWorkLanguage'].str.split(';')).explode('WantWorkLanguage')
df_q2['WantWorkLanguage'] = df_q2['WantWorkLanguage'].apply(lambda x: str(x).strip())
counts_want_per_lang = df_q2['WantWorkLanguage'].value_counts()
percentage_want_per_lang = counts_want_per_lang / df_q2['Respondent'].nunique()
percentage_want_per_lang.head()
###Output
_____no_output_____
###Markdown
Question 1 The data processing for the 2018 data is almost the same. The *HaveWorkedLanguage* column was just renamed to *LanguageWorkedWith*
###Code
from pathlib import Path
import pandas as pd
DATA_DIR = Path() / '..' / 'data'
FILE_NAME_2018 = 'survey_results_public_2018.csv'
df_2018 = pd.read_csv(str(DATA_DIR / FILE_NAME_2018), dtype=object)
df_q1_18 = df_2018[['Respondent', 'LanguageWorkedWith']]
df_q1_18 = df_q1_18[df_q1_18['LanguageWorkedWith'].notna()]
# Normalize the LanguageWorkedWith column
df_q1_18 = df_q1_18.assign(LanguageWorkedWith=df_q1_18['LanguageWorkedWith'].str.split(';')).explode('LanguageWorkedWith')
df_q1_18['LanguageWorkedWith'] = df_q1_18['LanguageWorkedWith'].apply(lambda x: str(x).strip())
counts_per_lang = df_q1_18['LanguageWorkedWith'].value_counts()
percentage_per_lang_18 = counts_per_lang / df_q1_18['Respondent'].nunique()
percentage_per_lang_18
###Output
_____no_output_____
###Markdown
Now we can compute the differences per language. While doing that we will disregard some values: 1. We will disregard all data for languages that less than 4% of the respondents worked with in 2017, because their relevance is very low and it makes the chart more concise. 2. We will disregard all data for languages that were only part of the survey in either 2017 or 2018, since we are calculating a difference here, which only makes sense when we have values for both years to compare.
###Code
diff_17_18 = (percentage_per_lang_18 - percentage_per_lang[percentage_per_lang >= 0.04]).dropna().sort_values()
diff_17_18.plot.barh(
x='Language',
color =(diff_17_18 > 0).map({True: 'g',
False: 'r'}),
title='Difference: Have worked with 2017/2018 per Language',
figsize=(16, 9))
###Output
_____no_output_____
###Markdown
Surprisingly, we can see that almost all languages have grown in usage. One factor is probably that the respondents in 2018 were different (and more numerous) people than in 2017, so the overall average respondent profile also changed. Nevertheless, we can see that the 5 languages that grew the most from 2017 to 2018 are TypeScript, JavaScript, Python, SQL and Java, and only Perl shrank. Question 2 To answer question 2 we compare the *WantWorkLanguage* data from 2017 with the *LanguageWorkedWith* data from 2018. While doing that we will disregard some values: 1. We will disregard all data for languages that less than 5% of the respondents wanted to work with in the next year in 2017, because their relevance is very low and it makes the chart more concise. 2. We will disregard all data for languages that were only part of the survey in either 2017 or 2018, since we are calculating a difference here, which only makes sense when we have values for both years to compare.
###Code
diff_17_want_18_have = (percentage_per_lang_18 - percentage_want_per_lang[percentage_want_per_lang >= 0.05]).dropna()
diff_17_want_18_have.plot.barh(
x='Language',
color =(diff_17_want_18_have > 0).map({True: 'g',
False: 'r'}),
title='Difference: Have worked with 2018 - Want to work with 2017 per Language',
figsize=(16, 9))
###Output
_____no_output_____ |
OSD/OSD_A2_Testing.ipynb | ###Markdown
Open Source Development - Assignment 2Click here for better notebook render. **Open Source Project: *pandas - Python Data Analysis Library (https://github.com/pandas-dev/pandas)*****Initial Proposal - Proposed Enhancements:**1. `read_csv()` function - add a pop-up GUI for the user to select the .csv file to be imported, increased convenience when files are located in another directory. [*[LINK]*](feature1)2. `to_csv()` function - add a pop-up GUI for the user to export their existing dataframe into a .csv file, increased convenience when the user wants to save the file in another directory as the user need not change directories via code or copy/paste the exported file. [*[LINK]*](feature2)3. `DataFrame.dropna()` function - add a function parameter to also drop columns/rows (depending on the selected axis) containing infinity values, instead of purely columns/rows containing NaN values. [*[LINK]*](feature3)Dataset Source: https://www.kaggle.com/agirlcoding/all-space-missions-from-1957 *Check current Conda environment and import necessary libraries (for installation and for the project).*
###Code
import sys
print("Conda environment directory:", sys.executable)
import os
###Output
Conda environment directory: C:\Users\Michael\Anaconda3\envs\OSD\python.exe
###Markdown
*Install pandas (original library) using pip.*
###Code
!pip3 install pandas
###Output
_____no_output_____
###Markdown
*Install pandas (modified library).*
###Code
library_path = "pandas-gh-repo"
current_path = os.getcwd()
os.chdir(library_path)
!python setup.py build_ext --inplace
!python setup.py install
os.chdir(current_path)
###Output
_____no_output_____
###Markdown
*Uninstall pandas using pip.*
###Code
!pip3 uninstall pandas --yes
###Output
_____no_output_____
###Markdown
*Import pandas and check version. If pandas is not installed, ModuleNotFoundError will be raised.*
###Code
import pandas as pd
print("pandas:", pd.__version__)
###Output
pandas: 0+unknown
###Markdown
Feature Enhancement 1: pandas.read_csv() **Original**
###Code
df1_original = pd.read_csv("E:\Desktop\PSB\- CU\Term 4 - 27Jul20 to 30Oct20\OSD\Assignment2\Space_Corrected.csv")
df1_original
df2_original = pd.read_csv("E:\Desktop\PSB\- CU\Term 4 - 27Jul20 to 30Oct20\OSD\Assignment2\Testing\initial.csv",
sep = '/',
header = [0, 1])
df2_original
###Output
_____no_output_____
###Markdown
**Modified**
###Code
df1_modified = pd.read_csv(gui=True)
df1_modified
df2_modified = pd.read_csv(gui=True)
df2_modified
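# Hedged sketch of how the added gui flag might be implemented inside the modified
# read_csv (an assumed reconstruction based on the behaviour shown above, not the
# actual patch): when gui=True and no filepath is given, a Tk file dialog supplies it.
from tkinter import Tk, filedialog
def _ask_for_csv_path():
    root = Tk()
    root.withdraw()                       # hide the empty root window
    path = filedialog.askopenfilename(filetypes=[("CSV files", "*.csv"), ("All files", "*.*")])
    root.destroy()
    return path
# inside the modified reader: filepath_or_buffer = _ask_for_csv_path() when gui=True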
###Output
_____no_output_____
###Markdown
**Verification**
###Code
df1_original.equals(df1_modified)
df2_original.equals(df2_modified)
###Output
_____no_output_____
###Markdown
Feature Enhancement 2: pandas.DataFrame.to_csv() **Original**
###Code
df1_modified.to_csv("df1_modified.csv")
df2_modified.to_csv("df2_modified.csv", sep = '!', header = ['Letter', 'Num'], index = False, encoding = 'utf-16')
###Output
_____no_output_____
###Markdown
**Modified**
###Code
df1_modified.to_csv(gui=True)
df2_modified.to_csv(gui=True)
###Output
Filepath: E:/Desktop/PSB/- CU/Term 4 - 27Jul20 to 30Oct20/OSD/Assignment2/Testing/df2_modified_x.csv
Syntax: DataFrame.to_csv("E:/Desktop/PSB/- CU/Term 4 - 27Jul20 to 30Oct20/OSD/Assignment2/Testing/df2_modified_x.csv", sep='!', header=['Letter', 'Num'], index=False, encoding='utf-16')
###Markdown
**Verification**
###Code
pd.read_csv("df1_modified.csv").equals(pd.read_csv("df1_modified_x.csv"))
pd.read_csv("df2_modified.csv", sep = '!', encoding = 'utf-16').equals(pd.read_csv("df2_modified_x.csv", sep = '!', encoding = 'utf-16'))
###Output
_____no_output_____
###Markdown
Feature Enhancement 3: pandas.DataFrame.dropna() **Original**
###Code
df2_original
df2_original.dropna()
###Output
_____no_output_____
###Markdown
**Modified**
###Code
df2_original.dropna(dropinf=True)
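# For reference (hedged sketch): the same result can be obtained with standard pandas
# by first converting infinities to NaN and then dropping them; the dropinf flag above
# presumably bundles these two steps.
import numpy as np
df2_original.replace([np.inf, -np.inf], np.nan).dropna()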
###Output
_____no_output_____ |
python/.ipynb_checkpoints/oop_pro-checkpoint.ipynb | ###Markdown
OOP
###Code
def pen():
tip_size=float(input("enter tip size"))
model=input('enter pen model')
cost=int(input('enter pen cost'))
print('the pen has ',tip_size,'tip size')
###Output
_____no_output_____ |
04_RandomSearchCV/Assignment_4_Reference.ipynb | ###Markdown
Implementing Custom GridSearchCV
###Code
# it will take a classifier and a set of values for the hyper parameter in dict form: dict({hyper parameter: [list of values]})
# we are implementing this only for KNN, so the hyper parameter should be n_neighbors
from sklearn.metrics import accuracy_score
def randomly_select_60_percent_indices_in_range_from_1_to_len(x_train):
return random.sample(range(0, len(x_train)), int(0.6*len(x_train)))
def GridSearch(x_train,y_train,classifier, params, folds):
trainscores = []
testscores = []
for k in tqdm(params['n_neighbors']):
trainscores_folds = []
testscores_folds = []
for j in range(0, folds):
# check this out: https://stackoverflow.com/a/9755548/4084039
train_indices = randomly_select_60_percent_indices_in_range_from_1_to_len(x_train)
test_indices = list(set(list(range(1, len(x_train)))) - set(train_indices))
# selecting the data points based on the train_indices and test_indices
X_train = x_train[train_indices]
Y_train = y_train[train_indices]
X_test = x_train[test_indices]
Y_test = y_train[test_indices]
classifier.n_neighbors = k
classifier.fit(X_train,Y_train)
Y_predicted = classifier.predict(X_test)
testscores_folds.append(accuracy_score(Y_test, Y_predicted))
Y_predicted = classifier.predict(X_train)
trainscores_folds.append(accuracy_score(Y_train, Y_predicted))
trainscores.append(np.mean(np.array(trainscores_folds)))
testscores.append(np.mean(np.array(testscores_folds)))
return trainscores,testscores
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
import matplotlib.pyplot as plt
import random
import warnings
warnings.filterwarnings("ignore")
neigh = KNeighborsClassifier()
params = {'n_neighbors':[3,5,7,9,11,13,15,17,19,21,23]}
folds = 3
trainscores,testscores = GridSearch(X_train, y_train, neigh, params, folds)
plt.plot(params['n_neighbors'],trainscores, label='train curve')
plt.plot(params['n_neighbors'],testscores, label='test curve')
plt.title('Hyper-parameter VS accuracy plot')
plt.legend()
plt.show()
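# For comparison (a hedged sketch, not part of the assignment): the same sweep of
# n_neighbors using scikit-learn's built-in GridSearchCV. The 3 folds and accuracy
# scoring roughly mirror the custom implementation above, although GridSearchCV uses
# proper k-fold splits rather than repeated random 60/40 splits.
from sklearn.model_selection import GridSearchCV
sk_grid = GridSearchCV(KNeighborsClassifier(), {'n_neighbors': [3, 5, 7, 9, 11, 13, 15]},
                       scoring='accuracy', cv=3)
sk_grid.fit(X_train, y_train)
print(sk_grid.best_params_, sk_grid.best_score_)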
# understanding this code line by line is not that important
def plot_decision_boundary(X1, X2, y, clf):
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
x_min, x_max = X1.min() - 1, X1.max() + 1
y_min, y_max = X2.min() - 1, X2.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02), np.arange(y_min, y_max, 0.02))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X1, X2, c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("2-Class classification (k = %i)" % (clf.n_neighbors))
plt.show()
from matplotlib.colors import ListedColormap
neigh = KNeighborsClassifier(n_neighbors = 21)
neigh.fit(X_train, y_train)
plot_decision_boundary(X_train[:, 0], X_train[:, 1], y_train, neigh)
###Output
_____no_output_____ |
NebraskaVoters.ipynb | ###Markdown
Local politics: small town Nebraska is further right than ever**By Matt Waite**In the 2016 election, 84 of Nebraska's 93 counties gave a higher percentage to a GOP candidate than in any election since 2000. It's the most visible sign of a shift in rural Nebraska politics that's been quietly unfolding for more than a decade. As most of the state's counties shrink, they're tilting to the right. And in the process, they're becoming less politically diverse, some dramatically so.
###Code
library(ggplot2)
library(dplyr)
library(scales)
pct <- read.csv("data/pctgop.csv")
head(pct)
###Output
_____no_output_____
###Markdown
The pctgop.csv file has a single record for each county and election year going back to 2000. The PCTGOP field is the percent of the vote that went to the GOP candidate for president that election. So in 2016, that was Donald J. Trump, in 2000 and 2004, it was George W. Bush, and so on. By charting each county, you can see the trend: With the exception of a noticeable dip for President Obama's 2008 election, the majority of counties bend up to the right, meaning a continuously higher percentage of the vote in those counties is going to the Republican candidate.
###Code
ggplot(pct, aes(x=Year, y=PCTGOP, group=County, colour = PCTGOP)) + geom_line() + theme(axis.text.x = element_blank(), axis.ticks = element_blank()) + scale_colour_gradient(low="blue", high="red") + facet_wrap(~County)
###Output
_____no_output_____
###Markdown
Viewed another way, we can see that Trump counties *vastly* outnumber counties where other GOP candidates did better.
###Code
tiltbar <- read.csv("data/tiltbar.csv")
ggplot(aes(x=Top), data=tiltbar) + geom_bar() + theme_bw()
###Output
_____no_output_____
###Markdown
So what is going on in these counties? A partial answer, statewide, is that the number of registered Democrats is slowly shrinking while the number of registered Non Partisans is rising quickly.
###Code
statewidevoters <- read.csv("data/statewidevoters.csv")
head(statewidevoters)
ggplot(statewidevoters, aes(x=Year, y=Registered, group=Party, colour = Party)) + geom_line() + scale_y_continuous(labels = comma) + theme_bw()
###Output
_____no_output_____
###Markdown
But that statewide trend masks a near collapse of the Democratic party in many counties. In the smaller places in Nebraska, the number of registered Democrats is plunging.
###Code
voters <- read.csv("data/voters.csv")
head(voters)
ggplot(voters, aes(x=Year, y=Voters, group=Party, colour = Party)) + geom_line() + theme(axis.text.x = element_blank(), axis.text.y = element_blank(), axis.ticks = element_blank()) + facet_wrap(~ County, scales = "free")
###Output
_____no_output_____
###Markdown
Since the 2000 presidential election in Nebraska, 67 counties have lost registered voters. It's no secret that parts of Nebraska are shrinking as people move away from rural areas toward cities. But of those 67 counties that shrank, 64 went stronger for Trump than for any other presidential candidate in the last five elections. And 61 of those shrinking counties became more Republican as a percentage over the same period of time. That shift to the right in smaller places has an impact on political discourse in small towns across the state. Measured with the USA Today Diversity Index -- which measures the probability that two people chosen at random will be different -- political diversity in Nebraska's smaller counties is plunging.
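One common way to compute such an index (a hedged reconstruction; the precomputed values in diversitychange.csv may differ in scaling or detail) is one minus the sum of squared party shares in a county: $$D = 1 - (p_{Dem}^2 + p_{Rep}^2 + p_{Other}^2)$$ which is the probability, drawing with replacement, that two randomly chosen registered voters belong to different parties.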
###Code
diversitychange <- read.csv("data/diversitychange.csv")
head(diversitychange)
diversitychangesorted <- arrange(diversitychange, desc(Change)) %>% mutate(County = factor(County, County)) %>% mutate(pos = Change >= 0)
ggplot(diversitychangesorted, aes(x=County, y=Change, fill=pos)) + geom_bar(stat='identity', position='identity') + coord_flip()
###Output
_____no_output_____ |
Machine Learning/1) Regression(all reg course)/4.Multiple Linear Regression/Code to predict University admission.ipynb | ###Markdown
CODE TO PREDICT ACCEPTANCE CHANCE OF UNIVERSITY ADMISSION Dr. Ryan Ahmed @STEMplicity PROBLEM STATEMENT - In this project, a regression model is developed to predict the probability of being accepted for Graduate school.- Data Source: https://www.kaggle.com/mohansacharya/graduate-admissions- Citation: Mohan S Acharya, Asfia Armaan, Aneeta S Antony: A Comparison of Regression Models for Prediction of Graduate Admissions, IEEE International Conference on Computational Intelligence in Data Science 2019- The dataset contains the following parameters: - GRE Scores (out of 340) - TOEFL Scores (out of 120) - University Rating (out of 5) - Statement of Purpose and Letter of Recommendation Strength (out of 5) - Undergraduate GPA (out of 10) - Research Experience (either 0 or 1) - Chance of Admit (ranging from 0 to 1) STEP 0: IMPORT LIBRARIES
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
STEP 1: IMPORT DATASET
###Code
admission_df = pd.read_csv('Admission.csv')
admission_df.head(5)
admission_df.tail(10)
admission_df.info()
admission_df.describe()
###Output
_____no_output_____
###Markdown
STEP 2: VISUALIZE DATASET
###Code
admission_df = admission_df.drop(['Serial No.'], axis = 1)
admission_df
column_headers = admission_df.columns.values
column_headers
i = 1
fig, ax = plt.subplots(2, 4, figsize = (20, 20))
for column_header in column_headers:
plt.subplot(2,4,i)
sns.distplot(admission_df[column_header])
i = i + 1
plt.figure(figsize = (10, 10))
sns.heatmap(admission_df.corr(), annot = True)
sns.pairplot(admission_df)
i = 1
fig, ax = plt.subplots(2, 4, figsize = (20, 10))
for column_header in column_headers:
plt.subplot(2,4,i)
sns.boxplot(admission_df[column_header])
i = i + 1
###Output
_____no_output_____
###Markdown
STEP 3: CREATE TESTING AND TRAINING DATASET/DATA CLEANING
###Code
sns.heatmap(admission_df.isnull())
X = admission_df.drop(['Admission Chance'], axis = 1)
X
y = admission_df['Admission Chance']
y
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
X_train.shape
X_test.shape
###Output
_____no_output_____
###Markdown
STEP 4: TRAINING THE MODEL
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression(fit_intercept = True)
regressor.fit(X_train, y_train)
print('Linear Model Coeff (m)', regressor.coef_)
print('Linear Model Coeff (b)', regressor.intercept_)
###Output
Linear Model Coeff (m) [ 0.00182934 0.00264915 0.00557509 -0.00025733 0.02125271 0.12610349
0.0176724 ]
Linear Model Coeff (b) -1.3252540092853387
###Markdown
STEP 5: EVALUATING THE MODEL
###Code
y_predict = regressor.predict(X_test)
plt.scatter(y_test, y_predict, color = 'r')
plt.ylabel('Model Predictions')
plt.xlabel('True (ground truth)')
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from math import sqrt
k = X_test.shape[1]
n = len(X_test)
k
n
RMSE = float(format(np.sqrt(mean_squared_error(y_test, y_predict)) , '.3f'))
MSE = mean_squared_error(y_test, y_predict)
MAE = mean_absolute_error(y_test, y_predict)
r2 = r2_score(y_test, y_predict)
adj_r2 = 1-(1-r2)*(n-1)/(n-k-1)
MAPE = np.mean( np.abs((y_test - y_predict) /y_test ) ) * 100
print('RMSE =',RMSE, '\nMSE =',MSE, '\nMAE =',MAE, '\nR2 =', r2, '\nAdjusted R2 =', adj_r2, '\nMean Absolute Percentage Error =', MAPE, '%')
###Output
RMSE = 0.067
MSE = 0.004510922165564735
MAE = 0.050929671418159364
R2 = 0.7477104063946121
Adjusted R2 = 0.7231822514607549
Mean Absolute Percentage Error = 7.9301531803655205 %
###Markdown
STEP 6 RETRAIN AND VISUALIZE THE RESULTS
###Code
X = admission_df[[ 'GRE Score', 'TOEFL Score' ]]
y = admission_df['Admission Chance']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2)
from sklearn.linear_model import LinearRegression
regressor = LinearRegression(fit_intercept = True)
regressor.fit(X_train, y_train)
y_predict = regressor.predict(X_test)
plt.scatter(y_test, y_predict, color = 'r')
plt.ylabel('Model Predictions')
plt.xlabel('True (ground truth)')
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
from math import sqrt
k = X_test.shape[1]
n = len(X_test)
RMSE = float(format(np.sqrt(mean_squared_error(y_test, y_predict)) , '.3f'))
MSE = mean_squared_error(y_test, y_predict)
MAE = mean_absolute_error(y_test, y_predict)
r2 = r2_score(y_test, y_predict)
adj_r2 = 1-(1-r2)*(n-1)/(n-k-1)
MAPE = np.mean( np.abs((y_test - y_predict) /y_test ) ) * 100
print('RMSE =',RMSE, '\nMSE =',MSE, '\nMAE =',MAE, '\nR2 =', r2, '\nAdjusted R2 =', adj_r2, '\nMean Absolute Percentage Error =', MAPE, '%')
from mpl_toolkits.mplot3d import Axes3D
x_surf, y_surf = np.meshgrid(np.linspace(admission_df['GRE Score'].min(), admission_df['GRE Score'].max(), 100) , np.linspace(admission_df['TOEFL Score'].min(), admission_df['TOEFL Score'].max(), 100) )
onlyX = pd.DataFrame({'GRE Score': x_surf.ravel(), 'TOEFL Score':y_surf.ravel()})
fittedY = regressor.predict(onlyX)
fittedY = fittedY.reshape(x_surf.shape)
fig = plt.figure()
ax = fig.add_subplot(111, projection = '3d')
ax.scatter(admission_df['GRE Score'], admission_df['TOEFL Score'], admission_df['Admission Chance'])
ax.plot_surface(x_surf, y_surf, fittedY, color = 'r', alpha = 0.3)
ax.set_xlabel('GRE Score')
ax.set_ylabel('TOEFL Score')
ax.set_zlabel('Acceptance Chance')
###Output
_____no_output_____ |
notebooks/ToolDevelopment_Harringtonine_Morisaki.ipynb | ###Markdown
Harringtonine CropArray Example --- Notebook summary - Load a microscope video image - Track spots in the image and generate a pandas dataframe with the spot locations - Create a croparray from the image and dataframe - Quantify and plot the signal - Visualize the croparray with Napari ---- Importing libraries----
###Code
# To manipulate arrays
import numpy as np
from skimage.io import imread
import matplotlib.pyplot as plt
from matplotlib.path import Path
import pylab as pyl
import seaborn as sns; sns.set()
import pathlib # for working with windows paths
import sys
import os  # needed below for os.path.join / os.path.exists when loading and saving files
import cv2
import trackpy as tp
#!pip install shapely
from shapely.geometry import Polygon
from shapely.geometry import Point
current_dir = pathlib.Path().absolute()
croparray_dir = current_dir.parents[0].joinpath('croparray')
sys.path.append(str(croparray_dir))
import crop_array_tools as ca
# %matplotlib inline
plt.style.use('dark_background')
# Napari
%gui qt5
import napari
from napari.utils import nbscreenshot
# Magicgui
from magicgui import magicgui
import datetime
import pathlib
import pandas as pd
###Output
_____no_output_____
###Markdown
Functions ___
###Code
def TrackWithMasks(img_max, particle_diameter ,min_m, mask_include, mask_exclude):
f = tp.batch(img_max, diameter=particle_diameter,minmass=min_m)
f_list = []
for i in np.arange(len(f['frame'].unique())):
f0 = f[f['frame']==i]
f1 = f0.copy()
# If no masks, include everything and exclude nothing
if len(mask_include) == 0 :
mask_include1 = [[0,0],[0,10000000],[10000000,10000000],[10000000,0]]
else:
mask_include1 = mask_include
if len(mask_exclude) == 0:
mask_exclude1 = [[0,0],[0,-10],[-10,-10]]
else:
mask_exclude1 = mask_exclude
mask_in = Polygon(mask_include1) # This is a polygon that defines the mask
mypts = np.transpose([f1.y,f1.x]) # These are the points detected by trackpy, notice the x/y inversion for napari
f1['Include']=[mask_in.contains(Point(mypts[i])) for i in np.arange(len(mypts))] # Check if pts are on/in polygon mask
# # Label points in nucleus if polygon mask exists
# if polygon != None:
mask_out = Polygon(mask_exclude1) # This is a polygon that defines the mask
f1['Exclude']=[mask_out.contains(Point(mypts[i])) for i in np.arange(len(mypts))] # Check if pts are on/in polygon mask
f_list.append(f1[(f1['Include']==True) & (f1['Exclude']==False)])
f_all = pd.concat(f_list)
return f_all
###Output
_____no_output_____
###Markdown
Loading data (and track file, if already available)----
###Code
#Video directory
img_4D = imread(os.path.join(dir,img_4D_filename))
img_4D.shape
# Converting the video to Croparray format
img_croparray = np.expand_dims(img_4D,axis=0) #
img_croparray.shape # dimensions MUST be (fov, f , z, y, x, ch)
img_croparray.shape
print("croparray format shape [fov, f , z, y, x, ch] = ", img_croparray.shape)
###Output
croparray format shape [fov, f , z, y, x, ch] = (1, 65, 13, 512, 512, 3)
###Markdown
Optional tracking: Max projection, masking, and tracking---- Just view video to determine what are the best z planes
###Code
viewer1 = napari.view_image(img_croparray[0,:,:,:,:,1])
###Output
_____no_output_____
###Markdown
Now do max-projection on best-z planes
###Code
best_zs = [1,10]
img_max = np.max(img_croparray[0,:,best_zs[0]:best_zs[1],:,:,1],axis=1)
img_max.shape
###Output
_____no_output_____
###Markdown
Now you can create a mask to exclude regions: Shapes will be excluded and Shapes [1] will be included in tracks
###Code
viewer2 = napari.view_image(np.max(img_max,axis=0)) # Max of max, should be a single frame
# Use this if you don't have a mask:
# f_all = TrackWithMasks(img_max,7,1000,[],[])
particle_diameter = 7
min_mass = 1000
viewer3 = napari.view_image(img_max)
f_all = TrackWithMasks(img_max,particle_diameter,min_mass,viewer.layers['Shapes [1]'].data[0][:,-2:],viewer.layers['Shapes'].data[0][:,-2:])
data = f_all[['frame','y','x']].values
layer_name = 'Spots all frames'
viewer3.add_points(data, size = 7, edge_color = 'yellow', symbol='ring', name = layer_name)
f_all
f_all.to_csv(os.path.join(dir,img_4D_filename[:-4]+'.csv'))
###Output
_____no_output_____
###Markdown
Convert f to crop_array format
###Code
#only if you didn't track:
spots = f_all.copy() # Nice to copy; seems it can cause to overwrite otherwise
spots['id']=spots.index
spots.rename(columns={'x': 'xc','y': 'yc', 'frame': 'f','signal':'signal_tp'},
inplace=True, errors='raise')
spots['fov']=0
spots.rename(columns={'particle':'id'})
spots = spots[['fov','id','f','yc','xc','signal_tp','Include','Exclude']] # keeping signal out of curiousity... want to compare to disk-donut measurements
spots.head()
###Output
_____no_output_____
###Markdown
Create Crop Array____ Create a crop array from 4D movie
###Code
my_ca = ca.create_crop_array(img_croparray,spots,xy_pad=5, dxy=130, dz=500, dt=1, units=['nm','min'], name = os.path.join(dir,img_4D_filename))
my_ca
###Output
Original video dimensions: (1, 65, 13, 512, 512, 3)
Padded video dimensions: (1, 65, 13, 524, 524, 3)
Max # of spots per frame: 36
Shape of numpy array to hold all crop intensity data: (1, 36, 65, 13, 11, 11, 3)
Shape of xc and yc numpy arrays: (1, 36, 65, 3)
Shape of extra my_layers numpy array: (4, 1, 36, 65)
###Markdown
Save the crop array____
###Code
my_ca.to_netcdf(os.path.join(dir,img_4D_filename[:-4]+'.nc') )
###Output
_____no_output_____
###Markdown
Quantify signal intensity through time____
###Code
# Measure signals and plot average signal through time, creating 'best_z' layer and 'signal' layer
ca.measure_signal(my_ca, ref_ch=1, disk_r=3, roll_n=3)
my_ca.best_z.mean('n').sel(fov=0,ch=1).rolling(t=3,min_periods=1).mean().plot.imshow(col='t',col_wrap=10,robust=True,xticks=[],yticks=[],size=1.5,cmap='gray', vmin=0, vmax =500)
###Output
_____no_output_____
###Markdown
Let's compare our disk-donut 'signal' layer (acquired from 3D image) to trackpy's (acquired from max-projection):
###Code
# Let's compare our intensity numbers to those from trackpy:
my_ca.where(my_ca.signal>0).plot.scatter(x='signal',y='signal_tp',col='ch',hue='ch',colors=['red','limegreen','blue'],levels=[0,1,2,3])
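# Hedged sketch of a disk-donut measurement on a single 2D crop, for illustration only;
# ca.measure_signal implements its own (possibly different) version internally.
# The spot signal is the mean inside a small disk around the crop centre minus the mean
# of a surrounding ring, which estimates and subtracts the local background.
def disk_donut_sketch(crop2d, r_disk=3, r_donut=5):
    import numpy as np
    yy, xx = np.indices(crop2d.shape)
    cy, cx = (np.array(crop2d.shape) - 1) / 2
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    disk_mean = crop2d[r <= r_disk].mean()
    ring_mean = crop2d[(r > r_disk) & (r <= r_donut)].mean()
    return disk_mean - ring_mean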
###Output
_____no_output_____
###Markdown
Let's look at average signal vs time
###Code
# Let's look at average signal vs time
start_sig = my_ca.signal.mean('n').sel(t=slice(0,4)).mean('t')
end_sig = 0# my_ca.signal.mean('n').sel(t=slice(15,20)).mean('t')
norm_sig = (my_ca.signal.mean('n') - end_sig)/(start_sig - end_sig)
sns.set_palette(['limegreen','limegreen','blue'])
norm_sig.sel(fov=0,ch=1).plot.line(x='t',hue='ch')
###Output
_____no_output_____
###Markdown
Now let's just use trackpy's values:
###Code
# Let's look at average signal vs time
start_sig = my_ca.signal_tp.mean('n').sel(t=slice(0,4)).mean('t')
end_sig = 0# my_ca.signal_tp.mean('n').sel(t=slice(15,20)).mean('t')
norm_sig = (my_ca.signal_tp.mean('n') - end_sig)/(start_sig - end_sig)
sns.set_palette(['limegreen','limegreen','blue'])
norm_sig.sel(fov=0).plot.line(x='t',hue='ch')
###Output
_____no_output_____
###Markdown
I guess trackpy and the disk donut method do a very good job at getting the intensities of spots. Although note that trackpy got the values from the max-intensity projection. Interesting. Visualize crop array montage with Napari___ Now let's see a montage of the selected spots' best-z planes:
###Code
# view the action of montage showing an n x n crop_array through time
viewer = napari.view_image(ca.montage(my_ca.sel(fov=0,ch=0).best_z,row='n',col='t'),contrast_limits=[60,800])
###Output
_____no_output_____
###Markdown
Optional: Create Track Array___
###Code
f_all
# Note, if you actually wanted to track, you could use the following:
# link tracks
max_distance_movement = 15
track_skip_frames = 3
min_trajectory_length = 10
t = tp.link(f_all, max_distance_movement, memory=track_skip_frames)
t1 = tp.filter_stubs(t, min_trajectory_length)
t1['particle'] = t1['particle']+1 # VERY IMPORTANT NOT TO HAVE TRACK IDs WITH VALUES = 0 WHEN MAKING CROP ARRAYS AS ZERO IS DEFAULT EMPTY VALUE
# Compare the number of particles in the unfiltered and filtered data.
print('Before:', t['particle'].nunique())
print('After:', t1['particle'].nunique())
# only if you tracked:
spots = t1.copy()
spots.rename(columns={'x': 'xc','y': 'yc', 'frame': 'f','signal':'signal_tp','particle':'id'},
inplace=True, errors='raise')
spots['fov']=0
spots.rename(columns={'particle':'id'})
spots = spots[['fov','id','f','yc','xc','signal_tp']] # keeping signal out of curiousity... want to compare to disk-donut measurements
spots.head()
my_ca2 = ca.create_crop_array(img_croparray,spots,xy_pad=particle_diameter//2+1, dxy=130, dz=500, dt=1, units=['nm','min'])
# Measure signals and plot average signal through time, creating 'best_z' layer and 'signal' layer
ca.measure_signal(my_ca2, ref_ch=1, disk_r=3, roll_n=3)
import xarray as xr
import pandas as pd
# Since ids correspond to tracks, we can organize tracks in rows
my_ids = np.unique(my_ca2.id) # Find all unique ids
my_ids = my_ids[1:] # remove the '0' ID used as filler in Crop Arrays
my_ids
# Get a list of xarrays for each unique id
my_das = []
for i in np.arange(len(my_ids)):
temp = my_ca2.groupby('id')[my_ids[i]].reset_index('stacked_fov_n_t').reset_coords('n',drop=True).reset_coords('fov',drop=True).swap_dims({'stacked_fov_n_t':'t'})
my_das.append(temp)
del temp
# Concatenate the xarrays together to make a new xarray dataset in a track array format (each track on separate row). Here 'n' is replaced by 'tracks'
my_taz = xr.concat(my_das, dim=pd.Index(my_ids, name='track_id'), fill_value=255) # fill_value=0 so keep int instead of moving to floats with NaNs
my_taz = my_taz.transpose('track_id','fov','n','t','z','y','x','ch', missing_dims='ignore') # reorder for napari
my_taz
# view the action of montage showing an n x n crop_array through time
viewer = napari.view_image(ca.montage(my_taz.sel(ch=1).best_z,row='track_id',col='t'),contrast_limits=[60,800])
my_taz.isel(track_id=[12,16,20]).sel(ch=1).signal.plot.line(x='t',col='track_id',col_wrap=3)
###Output
_____no_output_____
###Markdown
Filenames----
###Code
# Data filename and directory
dir = r'X:\Tim'
dir = r'Z:\galindo\1_Imaging_Data\20220210_metabolites\PEP_10mM'
#img_4D_max_filename = r'MAX_Chamber02_HT_Cell01.tif'
img_4D_filename = r'Chamber02_HT_Cell02.tif'
img_4D_filename = r'Cell02.tif'
img_4D_filename = r'Cell04.tif'
###Output
_____no_output_____
###Markdown
Trying to track in x, y, and z: Measure signals and plot average signal through time, creating 'best_z' layer and 'signal' layer
###Code
max_distance_movement = 5
track_skip_frames = 3
my_frame = 15
min_trajectory_length=2
my_list = []
for my_frame in np.arange(len(img_croparray[0,:])): #np.arange(15,16,1):
f = tp.batch(img_croparray[0,my_frame,:,:,:,1], diameter=7,minmass=1000)
t1 = tp.link(f, max_distance_movement, memory=track_skip_frames)
t = tp.filter_stubs(t1, min_trajectory_length)
sort_t = t.sort_values(['particle']).groupby('particle').aggregate('max').reset_index().rename(columns={'frame':'z'})
sort_t['t']=my_frame
my_list.append(sort_t)
my_df = pd.concat(my_list)
my_df.reset_index()
import pandas as pd
my_df = pd.concat(my_list)
my_df.reset_index()
my_df['raw_mass'].hist()
plt.figure(figsize=(20,10))
tp.annotate(my_df[(my_df['t']==28)&(my_df['raw_mass']<40000)], img_4D_max_real[28]);
np.array([len(t[t['particle']==i].x) for i in np.arange(len(t.particle))])
f[f['frame']==6]
###Output
_____no_output_____
###Markdown
GUI ___
###Code
from napari.layers import Image, Shapes
from napari.types import LabelsData, ImageData, ShapesData
import napari.types
import trackpy as tp
#viewer.dims.current_step[0] This is how you can access the slider slice in napari
# @magicgui(call_button = 'Load File')
# def open_file(
# number_of_channels: int,
# filename = pathlib.Path('/some/path.ext')
# ) -> napari.types.LayerDataTuple:
# img = imread(filename)
# my_channel_axis = np.where(np.array(img.shape) == number_of_channels)[0][0]
# img = np.moveaxis(img_4D,my_channel_axis,0)
# return [(img[i],{'colormap':'gray','name':'Channel'+str(i)}) for i in np.arange(len(img))]
@magicgui(call_button = 'Make Tracking Channel Layer')
def get_channel(
image: Image,
channel_axis: int,
channel_num: int,
) -> napari.types.LayerDataTuple:
return (np.expand_dims(np.moveaxis(image.data,channel_axis,0)[channel_num],axis=channel_axis),{'name':image.name+'Ch '+str(channel_num)})
@magicgui(call_button = 'Make Max Projection')
def max_project(
image1: Image,
projection_axis: int,
start_slice: int,
stop_slice: int,
) -> napari.types.LayerDataTuple:
return (np.expand_dims(np.max(image1.data, axis=projection_axis),axis=projection_axis),{'name':'Max of '+image1.name})
@magicgui(call_button = 'Detect Spots in Max Projection')
def detect_spots(
image: Image,
mask_include: Shapes,
mask_exclude: Shapes,
test_frame: int,
size = 9,
min_m = 1000,
detect_spots_in_all_frames = False,
) -> napari.types.LayerDataTuple:
if detect_spots_in_all_frames:
f_all = TrackWithMasks(np.squeeze(image.data),size,min_m,mask_include.data[0][:,-2:],mask_exclude.data[0][:,-2:])
print(f_all)
print(np.squeeze(image.data).shape)
data = f_all[['frame','y','x']].values
layer_name = 'Spots all frames'
else:
f = tp.locate(np.squeeze(image.data[test_frame]),diameter=size,minmass=min_m)
print(f)
data = f[['y','x']].values
layer_name = 'Spots frame '+str(test_frame)
#save_croparray.dir.value = 'test'
#save_croparray.show()
return [(data,{'size':size,'edge_color':'yellow','opacity':0.4,'symbol':'ring','name':layer_name}, 'points'),
(np.max(image,axis=0), {'name': 'Max Projection', 'blending': 'additive'})]
viewer = napari.Viewer()
#viewer = napari.view_image(img_max, name="My Image")
#viewer.window.add_dock_widget(detect_spots)
#viewer.window.add_dock_widget(max_project)
#viewer.window.add_dock_widget(open_file, name='Step 1',area='right')
viewer.window.add_dock_widget(get_channel, name='Get Video',area='right')
dw2 = viewer.window.add_dock_widget(max_project, name='Max Project',area='right')
dw3 = viewer.window.add_dock_widget(detect_spots, name ='Tracking',area='right')
viewer.window._qt_window.tabifyDockWidget(dw2, dw3)
#viewer.layers.events.changed.connect(detect_spots.reset_choices)
###Output
_____no_output_____ |
Projects-2018-2/Cancer/Tutorial.ipynb | ###Markdown
CLASSIFICATION OF BREAST TISSUE TUMORS USING MACHINE LEARNING TECHNIQUES. First, we install the library that allows Excel spreadsheets to be read in Python.
###Code
!pip install xlrd
###Output
Requirement already satisfied: xlrd in /usr/local/lib/python3.6/dist-packages (1.1.0)
###Markdown
Clone the GitHub repository to obtain the data.
###Code
!git clone 'https://github.com/vcadillog/Machine-Learning-MT616.git'
###Output
Cloning into 'Machine-Learning-MT616'...
remote: Enumerating objects: 14, done.[K
remote: Counting objects: 100% (14/14), done.[K
remote: Compressing objects: 100% (12/12), done.[K
remote: Total 14 (delta 1), reused 0 (delta 0), pack-reused 0[K
Unpacking objects: 100% (14/14), done.
###Markdown
DATA EXTRACTION. The data is hosted in the UCI repository and the table is converted using the pandas library. The data is labeled with 6 types of tumors present in breast tissue: 1. Carcinoma 2. Fibroadenoma 3. Mastopathy 4. Glandular 5. Connective 6. Adipose. Two features have been renamed because they contain characters that complicate operations with pandas.
###Code
from google.colab import drive
drive.mount('/content/drive_all')
import pandas as pd
datafile='/content/drive_all/My Drive/PYTHON/PC3-4/Cadillo/BreastTissue.xls'
data=pd.read_excel(datafile,'Data')
data.head()
data['A']=data['A/DA']
data['MaxIP']=data['Max IP']
data=data.drop(columns=['A/DA','Max IP','Case #'])
feature_names=['I0' ,'PA500', 'HFS' ,'DA' ,'Area' ,'A', 'MaxIP', 'DR', 'P']
###Output
_____no_output_____
###Markdown
Plot of the pairwise correlation of the features.
###Code
import seaborn as sns
g = sns.pairplot(data, hue="Class")
###Output
_____no_output_____
###Markdown
Split of the data into training and test sets for cross-validation, with a test set size of 40%.
###Code
import numpy as np
X = data.drop('Class',axis=1).values
y = data['Class'].values
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.4,random_state=42, stratify=y)
###Output
_____no_output_____
###Markdown
Pre-processing of the data using standard scaling.
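StandardScaler transforms each feature $x$ to $z = (x - \mu)/\sigma$, using the mean $\mu$ and standard deviation $\sigma$ learned from the training set only; this is why `fit_transform` is applied to the training data and `transform` to the test data below.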
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
K-NEAREST NEIGHBORS (KNN) CLASSIFICATION. A non-parametric classification method in which a point takes the label that occurs most often among the points in its neighborhood; it can also be used for regression.
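In scikit-learn's `KNeighborsClassifier`, the `p` parameter sets the order of the Minkowski distance: `p=2` is the Euclidean distance (used below via `distance=2`) and `p=1` the Manhattan distance.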
###Code
from sklearn.neighbors import KNeighborsClassifier
# Initialize the vectors
neighbors = np.arange(1,10)
knn_train_accuracy =np.empty(len(neighbors))
knn_test_accuracy = np.empty(len(neighbors))
distance=2
for i,k in enumerate(neighbors):
    # Update the classifier
knn = KNeighborsClassifier(n_neighbors=k,p=distance)
    # Train the model
knn.fit(X_train, y_train)
    # Compute the model accuracy on the training data
knn_train_accuracy[i] = knn.score(X_train, y_train)
    # Compute the model accuracy on the test data
knn_test_accuracy[i] = knn.score(X_test, y_test)
# Extract the index of the model that gives the highest accuracy
iknnmax=np.argmax(knn_test_accuracy)+1
###Output
_____no_output_____
###Markdown
Plot of model accuracy vs. the number of nearest neighbors.
###Code
import matplotlib.pyplot as plt
plt.title('K Nearest Neighbors')
plt.plot(neighbors, knn_test_accuracy, label='Accuracy on the test data')
plt.plot(neighbors, knn_train_accuracy, label='Accuracy on the training data')
plt.legend()
plt.xlabel('Number of neighbors')
plt.ylabel('Accuracy')
plt.show()
knn = KNeighborsClassifier(n_neighbors=iknnmax,p=distance)
knn.fit(X_train,y_train)
print('The accuracy with {0} neighbors is: {1}'.format(iknnmax,knn.score(X_test,y_test)))
###Output
The accuracy with 2 neighbors is: 0.813953488372093
###Markdown
DECISION TREES. A predictive model that organizes the data into sequential logical diagrams. The Gini impurity metric was chosen; Gini impurity indicates how often an element would be labeled incorrectly if the element and the label were chosen at random.
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
# Initialize the vectors
nodos = np.arange(2,11)
td_train_accuracy =np.empty(len(nodos))
td_test_accuracy = np.empty(len(nodos))
for i,k in enumerate(nodos):
    # Update the classifier
td_gini = DecisionTreeClassifier(criterion = "gini", random_state = 0,max_depth=10, min_samples_leaf=2,max_leaf_nodes=k)
    # Train the model
td_gini.fit(X_train, y_train)
    # Compute the model accuracy on the training data
td_train_accuracy[i] = td_gini.score(X_train, y_train)
    # Compute the model accuracy on the test data
td_test_accuracy[i] = td_gini.score(X_test, y_test)
itdmax=np.argmax(td_test_accuracy)+2
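# Illustrative sketch (an addition, not part of the original assignment): the Gini
# impurity of a label vector is 1 - sum_i p_i^2, the criterion selected above with
# criterion="gini"; a pure node has impurity 0.
def gini_impurity(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)
# gini_impurity(y_train) gives the impurity of the training labels before any split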
###Output
_____no_output_____
###Markdown
Plot of model accuracy vs. the number of tree nodes
###Code
import matplotlib.pyplot as plt
plt.title('Decision Tree Accuracy')
plt.plot(nodos, td_test_accuracy, label='Accuracy on the test data')
plt.plot(nodos, td_train_accuracy, label='Accuracy on the training data')
plt.legend()
plt.xlabel('Number of nodes')
plt.ylabel('Accuracy')
plt.show()
print('The decision tree accuracy with {0} nodes is: {1}'.format(itdmax,max(td_test_accuracy)))
###Output
The decision tree accuracy with 6 nodes is: 0.6976744186046512
###Markdown
SUPPORT VECTOR MACHINE (SVM). A method that constructs an optimal separating hyperplane based on the support-vector points closest to the boundary between the two classes; it can also be used for regression. When the data is not linearly separable, a kernel transformation is used to map the data into a higher-dimensional space where an optimal hyperplane can be found.
###Code
#Import svm model
from sklearn import svm
#Create a svm Classifier
sup = svm.SVC(kernel='linear') # Linear Kernel
#Train the model using the training sets
sup.fit(X_train, y_train)
# Model Accuracy: how often is the classifier correct?
print("Accuracy:",sup.score(X_test, y_test))
###Output
Accuracy: 0.6744186046511628
###Markdown
ACCURACY COMPARISON BETWEEN THE METHODS
###Code
print('Accuracy with SVM:', sup.score(X_test, y_test))
print('Accuracy with Decision Trees:', max(td_test_accuracy))
print('Accuracy with KNN:', max(knn_test_accuracy))
###Output
_____no_output_____ |
Diabetes Dataset/Improvements/Features IMprovement with Median/09_Pregnancies, Glucose, BloodPressure, SkinThickness, Insulin and Age.ipynb | ###Markdown
We can infer that even though we do not have NaN values, there are a lot of wrong values present in our data, like:- Glucose Level cannot be above 150 or below 70- Blood Pressure cannot be below 55- Skin thickness cannot be 0- BMI index cannot be 0
###Code
# Data Cleaning
df_improv = diabetesDF.copy()
# Calculate the median value for BMI
median_bmi = df_improv['BMI'].median()
# Substitute it in the BMI column of the
# dataset where values are 0
df_improv['BMI'] = df_improv['BMI'].replace(to_replace=0, value=median_bmi)
# Calculate the median value for BloodPressure
median_bloodp = df_improv['BloodPressure'].median()
# Substitute it in the BloodPressure column of the
# dataset where values are 0
df_improv['BloodPressure'] = df_improv['BloodPressure'].replace(to_replace=0, value=median_bloodp)
# Calculate the median value for Glucose
median_glucose = df_improv['Glucose'].median()
# Substitute it in the Glucose column of the
# dataset where values are 0
df_improv['Glucose'] = df_improv['Glucose'].replace(to_replace=0, value=median_glucose)
# Calculate the median value for SkinThickness
median_skinthick = df_improv['SkinThickness'].median()
# Substitute it in the SkinThickness column of the
# dataset where values are 0
df_improv['SkinThickness'] = df_improv['SkinThickness'].replace(to_replace=0, value=median_skinthick)
# Calculate the median value for Insulin
median_insulin = df_improv['Insulin'].median()
# Substitute it in the Insulin column of the
# dataset where values are 0
df_improv['Insulin'] = df_improv['Insulin'].replace(to_replace=0, value=median_insulin)
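# Equivalent, more compact cleaning (hedged sketch, shown for illustration on a throwaway
# copy): replace zeros with the column median for every column where zero is impossible.
df_sketch = diabetesDF.copy()
for col in ['Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI']:
    df_sketch[col] = df_sketch[col].replace(0, df_sketch[col].median())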
df_improv.head()
df_improv.describe()
df_improv.drop(['BMI', 'DiabetesPedigreeFunction'], axis=1, inplace=True)
df_improv.head()
# Total 768 patients record
# Using 650 data for training
# Using 100 data for testing
# Using 18 data for validation
dfTrain = df_improv[:650]
dfTest = df_improv[650:750]
dfCheck = df_improv[750:]
# Separating label and features and converting to numpy array to feed into our model
trainLabel = np.asarray(dfTrain['Outcome'])
trainData = np.asarray(dfTrain.drop('Outcome',1))
testLabel = np.asarray(dfTest['Outcome'])
testData = np.asarray(dfTest.drop('Outcome',1))
# Normalize the data
means = np.mean(trainData, axis=0)
stds = np.std(trainData, axis=0)
trainData = (trainData - means)/stds
testData = (testData - means)/stds
# models target t as sigmoid(w0 + w1*x1 + w2*x2 + ... + wd*xd)
diabetesCheck = LogisticRegression()
diabetesCheck.fit(trainData,trainLabel)
accuracy = diabetesCheck.score(testData,testLabel)
print("accuracy = ",accuracy * 100,"%")
# predict values using training data
predict_train = diabetesCheck.predict(trainData)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(trainLabel,predict_train)))
print()
# predict values using testing data
predict_train = diabetesCheck.predict(testData)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(testLabel,predict_train)))
print()
# Confusion Matrix
print("Confusion Matrix")
print("{0}".format(metrics.confusion_matrix(testLabel,predict_train)))
print("")
print("Classification Report")
print("{0}".format(metrics.classification_report(testLabel,predict_train)))
# K-nearest neighbors classifier: predicts the majority class among the nearest training points
diabetesCheck = KNeighborsClassifier()
diabetesCheck.fit(trainData,trainLabel)
accuracy = diabetesCheck.score(testData,testLabel)
print("accuracy = ",accuracy * 100,"%")
# predict values using training data
predict_train = diabetesCheck.predict(trainData)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(trainLabel,predict_train)))
print()
# predict values using testing data
predict_train = diabetesCheck.predict(testData)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(testLabel,predict_train)))
print()
# Confusion Matrix
print("Confusion Matrix")
print("{0}".format(metrics.confusion_matrix(testLabel,predict_train)))
print("")
print("Classification Report")
print("{0}".format(metrics.classification_report(testLabel,predict_train)))
# Support vector classifier: fits a maximum-margin decision boundary
diabetesCheck = SVC()
diabetesCheck.fit(trainData,trainLabel)
accuracy = diabetesCheck.score(testData,testLabel)
print("accuracy = ",accuracy * 100,"%")
# predict values using training data
predict_train = diabetesCheck.predict(trainData)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(trainLabel,predict_train)))
print()
# predict values using testing data
predict_train = diabetesCheck.predict(testData)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(testLabel,predict_train)))
print()
# Confusion Matrix
print("Confusion Matrix")
print("{0}".format(metrics.confusion_matrix(testLabel,predict_train)))
print("")
print("Classification Report")
print("{0}".format(metrics.classification_report(testLabel,predict_train)))
# Random forest classifier: averages an ensemble of decision trees
diabetesCheck = RandomForestClassifier()
diabetesCheck.fit(trainData,trainLabel)
accuracy = diabetesCheck.score(testData,testLabel)
print("accuracy = ",accuracy * 100,"%")
# predict values using training data
predict_train = diabetesCheck.predict(trainData)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(trainLabel,predict_train)))
print()
# predict values using testing data
predict_train = diabetesCheck.predict(testData)
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(testLabel,predict_train)))
print()
# Confusion Matrix
print("Confusion Matrix")
print("{0}".format(metrics.confusion_matrix(testLabel,predict_train)))
print("")
print("Classification Report")
print("{0}".format(metrics.classification_report(testLabel,predict_train)))
###Output
Classification Report
precision recall f1-score support
0 0.77 0.84 0.80 63
1 0.68 0.57 0.62 37
accuracy 0.74 100
macro avg 0.72 0.70 0.71 100
weighted avg 0.73 0.74 0.73 100
|
Time_Series_Data_Plot.ipynb | ###Markdown
Simple Graph
###Code
import matplotlib.pyplot as plt
import numpy as np
def plot():
plt.plot([1, 2, 3], [1, 2, 3], label='line-1')
plt.plot([1, 2, 3], [3, 2, 1], label='line-2')
plt.xlabel('Time')
plt.ylabel('Value')
plt.legend(fontsize=12)
plt.grid(True)
plot()
def plot_series(time,series, format ='-' ,start=0,end=None,label=None):
plt.plot(time[start:end] ,series[start:end] , label=label )
plt.xlabel('Time')
plt.ylabel('Value')
plt.legend(fontsize=12)
def trend(time, appreciation):
return time * appreciation
time =np.arange(365*4 )
series = trend(time, 0.2)
plot_series(time,series)
plt.show()
###Output
_____no_output_____
###Markdown
Noise plot
###Code
def generate_noise(time, noise_level, seed=None):
    random = np.random.RandomState(seed)
    # white Gaussian noise scaled to the requested noise level
    return random.randn(len(time)) * noise_level
noise = generate_noise(time, 10 , 42)
print (noise)
plt.plot(time, noise, label='line-1')
plt.xlabel('Time')
plt.ylabel('Value')
plt.legend(fontsize=12)
def seasonal_pattern(season_time):
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
print(time)
print(phase)
print(period)
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
baseline = 10
amplitude = 40
series = seasonality(time, period=365, amplitude=amplitude)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
slope = 0.05
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
def autocorrelation(time, amplitude, seed=None):
rnd = np.random.RandomState(seed)
φ1 = 0.5
φ2 = -0.1
ar = rnd.randn(len(time) + 50)
ar[:50] = 100
for step in range(50, len(time) + 50):
ar[step] += φ1 * ar[step - 50]
ar[step] += φ2 * ar[step - 33]
return ar[50:] * amplitude
series = autocorrelation(time, 10, seed=42)
plot_series(time[:200], series[:200])
plt.show()
series = autocorrelation(time, 10, seed=42) + trend(time, 2)
plot_series(time[:200], series[:200])
plt.show()
def autocorrelation(source, φs):
ar = source.copy()
max_lag = len(φs)
for step, value in enumerate(source):
for lag, φ in φs.items():
if step - lag > 0:
ar[step] += φ * ar[step - lag]
return ar
def impulses(time, num_impulses, amplitude=1, seed=None):
rnd = np.random.RandomState(seed)
impulse_indices = rnd.randint(len(time), size=10)
series = np.zeros(len(time))
for index in impulse_indices:
series[index] += rnd.rand() * amplitude
return series
signal = impulses(time, 10, seed=42)
series = autocorrelation(signal, {1: 0.99})
plot_series(time, series)
plt.plot(time, signal, "k-")
plt.show()
from pandas.plotting import autocorrelation_plot
series_diff = series
for lag in range(50):
series_diff = series_diff[1:] - series_diff[:-1]
autocorrelation_plot(series_diff)
import pandas as pd
series_diff1 = pd.Series(series[1:] - series[:-1])
autocorrs = [series_diff1.autocorr(lag) for lag in range(1, 60)]
plt.plot(autocorrs)
plt.show()
from statsmodels.tsa.arima_model import ARIMA
model = ARIMA(series, order=(5, 1, 0))
model_fit = model.fit(disp=0)
print(model_fit.summary())
###Output
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
|
Climate data.ipynb | ###Markdown
Climate Change Hackathon: Climate data preprocessing Libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import date
###Output
_____no_output_____
###Markdown
Import the raw Data
###Code
df = pd.read_csv('Data/climate-daily.csv')
df.columns
# keep only the data we want to use
df0 = df[['STATION_NAME','LOCAL_YEAR', 'LOCAL_MONTH', 'LOCAL_DAY',
'MEAN_TEMPERATURE', 'MIN_TEMPERATURE', 'MAX_TEMPERATURE',
'TOTAL_PRECIPITATION', 'TOTAL_RAIN', 'TOTAL_SNOW', 'SNOW_ON_GROUND',
'DIRECTION_MAX_GUST','SPEED_MAX_GUST', 'COOLING_DEGREE_DAYS',
'HEATING_DEGREE_DAYS','MIN_REL_HUMIDITY','MAX_REL_HUMIDITY']]
# extract each station's data
station_names = df0['STATION_NAME'].unique().tolist()
station_names
###Output
_____no_output_____
###Markdown
Create 3 separate DataFrame: one per station in Montreal
###Code
df1 = df0.loc[df0.STATION_NAME=='MONTREAL/PIERRE ELLIOTT TRUDEAU INTL']
df1 = df1[df1.columns[1:]]
df2 = df0.loc[df0.STATION_NAME=='MONTREAL/ST-HUBERT']
df2 = df2[df2.columns[1:]]
df3 = df0.loc[df0.STATION_NAME=='MONTREAL/PIERRE ELLIOTT TRUDEAU INTL A']
df3 = df3[df3.columns[1:]]
###Output
_____no_output_____
###Markdown
Aggregate the data from days to weeks
###Code
def aggregate_data_per_week(df):
# aggregate the lines per weeks
df_per_week = []
for year in df['LOCAL_YEAR'].unique():
df_y = df.loc[df.LOCAL_YEAR==year]
cpt=0
for month in df_y['LOCAL_MONTH'].unique():
df_y_m = df_y.loc[df_y.LOCAL_MONTH==month]
df_y_m_s = []
n_week_memory = None
for index, row in df_y_m.iterrows():
line = row.values.tolist()
day = line[2]
n_year, n_week = date(int(year), int(month), int(day)).isocalendar()[:2]
id_week = int(n_year)*100+int(n_week)
df_y_m_s.append([id_week] + line[3:])
if n_week_memory is None:
n_week_memory = n_week
elif n_week_memory<n_week:
df_per_week.append(df_y_m_s)
df_y_m_s = []
# make the summary of each week
# the columns are: 'MEAN_TEMPERATURE', 'MIN_TEMPERATURE', 'MAX_TEMPERATURE',
# 'TOTAL_PRECIPITATION', 'TOTAL_RAIN', 'TOTAL_SNOW', 'SNOW_ON_GROUND',
# 'DIRECTION_MAX_GUST','SPEED_MAX_GUST', 'COOLING_DEGREE_DAYS',
# 'HEATING_DEGREE_DAYS'
df_per_week_summary = []
for data_week in df_per_week:
data_week = np.array(data_week)
summary = [int(data_week[0,0])] # Id week
# summary of the week
summary.append(np.nanmean(data_week[:,1])) # Mean temp
summary.append(np.nanmin(data_week[:,2])) # min temp
summary.append(np.nanmax(data_week[:,3])) # max temp
summary.append(np.nansum(data_week[:,4])) # total precipitation
summary.append(np.nansum(data_week[:,5])) # total rain
summary.append(np.nansum(data_week[:,6])) # total snow
summary.append(np.nanmean(data_week[:,8])) # direction max gust
summary.append(np.nanmax(data_week[:,9])) # speed max gust
summary.append(np.nanmean(data_week[:,10])) # cooling degree days
summary.append(np.nanmean(data_week[:,11])) # heating degree days
summary.append(len(np.where(data_week[:,3]>20)[0])) # nb day per week where temp >20
summary.append(len(np.where(data_week[:,3]>25)[0])) # nb day per week where temp >25
summary.append(len(np.where(data_week[:,3]>30)[0])) # nb day per week where temp >30
summary.append(len(np.where(data_week[:,2]<10)[0])) # nb day per week where temp <10
summary.append(len(np.where(data_week[:,2]<0)[0])) # nb day per week where temp <0
summary.append(len(np.where(data_week[:,2]<-5)[0])) # nb day per week where temp <-5
summary.append(len(np.where(data_week[:,2]<-10)[0])) # nb day per week where temp <-10
summary.append(len(np.where(data_week[:,4]>5)[0])) # nb day day per week where precipitation > 5
df_per_week_summary.append(summary)
df_week = pd.DataFrame(df_per_week_summary, columns=['id_week',
'MEAN_TEMPERATURE', 'MIN_TEMPERATURE', 'MAX_TEMPERATURE',
'TOTAL_PRECIPITATION', 'TOTAL_RAIN', 'TOTAL_SNOW',
'DIRECTION_MAX_GUST','SPEED_MAX_GUST', 'COOLING_DEGREE_DAYS',
'HEATING_DEGREE_DAYS',
'nb_j_t_sup20_1w', 'nb_j_t_sup25_1w', 'nb_j_t_sup30_1w', 'nb_j_t_inf_10_1w', 'nb_j_t_inf_0_1w', 'nb_jt__inf_m5_1w',
'nb_j_t_inf_m10_1w', 'nb_j_precip_sup_5_1w'])
return(df_week)
df1_w = aggregate_data_per_week(df1)
df2_w = aggregate_data_per_week(df2)
df3_w = aggregate_data_per_week(df3)
###Output
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:36: RuntimeWarning: Mean of empty slice
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:37: RuntimeWarning: All-NaN slice encountered
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:38: RuntimeWarning: All-NaN slice encountered
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:44: RuntimeWarning: Mean of empty slice
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:45: RuntimeWarning: Mean of empty slice
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:46: RuntimeWarning: invalid value encountered in greater
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:47: RuntimeWarning: invalid value encountered in greater
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:48: RuntimeWarning: invalid value encountered in greater
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:49: RuntimeWarning: invalid value encountered in less
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:50: RuntimeWarning: invalid value encountered in less
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:51: RuntimeWarning: invalid value encountered in less
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:52: RuntimeWarning: invalid value encountered in less
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:42: RuntimeWarning: Mean of empty slice
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:43: RuntimeWarning: All-NaN slice encountered
L:\Anaconda3\lib\site-packages\ipykernel_launcher.py:53: RuntimeWarning: invalid value encountered in greater
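###Markdown
For reference, the `id_week` key built in `aggregate_data_per_week` combines the ISO calendar year and week number as `iso_year*100 + iso_week`. A quick illustrative check (not part of the pipeline):
###Code
from datetime import date

# ISO year and week for a sample day; id_week = iso_year*100 + iso_week
iso_year, iso_week = date(2009, 1, 15).isocalendar()[:2]
print(iso_year * 100 + iso_week)  # -> 200903
###Output
_____no_output_____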
###Markdown
Aggregate the data from the 3 stations
###Code
data_week = []
for week in df1_w.id_week.unique()[:]:
df1_week = df1_w.loc[df1_w.id_week==week].values
# if there is data for the 3 station
if week in df2_w.id_week.unique() and week in df3_w.id_week.unique():
df2_week = df2_w.loc[df2_w.id_week==week].values
df3_week = df3_w.loc[df3_w.id_week==week].values
# concatenate the 3 lines, take the mean
data_week.append(np.nanmean(np.concatenate((df1_week, df2_week, df3_week), axis=0), axis=0))
# if there is data for only station 1 and 2
elif week in df2_w.id_week.unique():
df2_week = df2_w.loc[df2_w.id_week==week].values
# take the mean of the 2 values
data_week.append(np.nanmean(np.concatenate((df1_week, df2_week), axis=0), axis=0))
# if there is data for only station 1 and 3
elif week in df3_w.id_week.unique():
df3_week = df3_w.loc[df3_w.id_week==week].values
# take the mean of the 2 values
data_week.append(np.nanmean(np.concatenate((df1_week, df3_week), axis=0), axis=0))
# else there is only data for station 1
else:
data_week.append(df1_week[0])
###Output
_____no_output_____
###Markdown
Augment the data with rolling averages/min/max etc...
###Code
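# week_m1..week_m3 hold the three previous weekly rows; each week below is augmented with
# 2-, 3- and 4-week rolling statistics computed over those memories plus the current week.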
all_data = []
# initialize the memories
week_m1 = np.reshape(data_week[0], (1, len(data_week[0])))
week_m2 = np.reshape(data_week[0], (1, len(data_week[0])))
week_m3 = np.reshape(data_week[0], (1, len(data_week[0])))
for week in data_week:
liste_week = week.tolist()
# data from last weeks
week0 = np.reshape(week, (1,len(week)))
weeks_4 = np.concatenate((week_m3, week_m2, week_m1, week0), axis=0)
weeks_3 = np.concatenate((week_m2, week_m1, week0), axis=0)
weeks_2 = np.concatenate((week_m1, week0), axis=0)
for weeks in [weeks_4, weeks_3, weeks_2]:
liste_week += [np.nanmean(weeks[:,1], axis=0)] # mean temp
liste_week += [np.nanmin(weeks[:,2], axis=0)] # min temp
liste_week += [np.nanmax(weeks[:,3], axis=0)] # max temp
liste_week += [np.nansum(weeks[:,4], axis=0)] # tot precipitation
liste_week += [np.nansum(weeks[:,5], axis=0)] # tot rain
liste_week += [np.nansum(weeks[:,6], axis=0)] # tot snow
liste_week += [np.nansum(weeks[:,11], axis=0)] # nb days per x weeks temp>20
liste_week += [np.nansum(weeks[:,12], axis=0)] # nb days per x weeks temp>25
liste_week += [np.nansum(weeks[:,13], axis=0)] # nb days per x weeks temp>30
liste_week += [np.nansum(weeks[:,14], axis=0)] # nb days per x weeks temp<10
liste_week += [np.nansum(weeks[:,15], axis=0)] # nb days per x weeks temp<0
liste_week += [np.nansum(weeks[:,16], axis=0)] # nb days per x weeks temp<-5
liste_week += [np.nansum(weeks[:,17], axis=0)] # nb days per x weeks temp<-10
liste_week += [np.nansum(weeks[:,18], axis=0)] # nb days per x weeks precipitation > 5
week_m3 = week_m2
week_m2 = week_m1
week_m1 = week0
all_data.append(liste_week)
Columns = ['id_week', 'MEAN_TEMPERATURE', 'MIN_TEMPERATURE', 'MAX_TEMPERATURE', 'TOTAL_PRECIPITATION',
'TOTAL_RAIN', 'TOTAL_SNOW', 'DIRECTION_MAX_GUST','SPEED_MAX_GUST', 'COOLING_DEGREE_DAYS',
'HEATING_DEGREE_DAYS', 'nb_j_t_sup20_1w', 'nb_j_t_sup25_1w', 'nb_j_t_sup30_1w', 'nb_j_t_inf_10_1w',
'nb_j_t_inf_0_1w', 'nb_jt__inf_m5_1w', 'nb_j_t_inf_m10_1w', 'nb_j_precip_sup_5_1w',
'mean_t_4w', 'min_t_4w', 'max_t_4w', 'tot_precip_4w', 'tot_rain_4w', 'tot_snow_4w',
'nb_j_t_sup20_4w', 'nb_j_t_sup25_4w', 'nb_j_t_sup30_4w', 'nb_j_t_inf_10_4w',
'nb_j_t_inf_0_4w', 'nb_jt__inf_m5_4w', 'nb_j_t_inf_m10_4w', 'nb_j_precip_sup_5_4w',
'mean_t_3w', 'min_t_3w', 'max_t_3w', 'tot_precip_3w', 'tot_rain_3w', 'tot_snow_3w',
'nb_j_t_sup20_3w', 'nb_j_t_sup25_3w', 'nb_j_t_sup30_3w', 'nb_j_t_inf_10_3w',
'nb_j_t_inf_0_3w', 'nb_jt__inf_m5_3w', 'nb_j_t_inf_m10_3w', 'nb_j_precip_sup_5_3w',
'mean_t_2w', 'min_t_4w', 'max_t_2w', 'tot_precip_2w', 'tot_rain_2w', 'tot_snow_2w',
'nb_j_t_sup20_2w', 'nb_j_t_sup25_2w', 'nb_j_t_sup30_2w', 'nb_j_t_inf_10_2w',
'nb_j_t_inf_0_2w', 'nb_jt__inf_m5_2w', 'nb_j_t_inf_m10_2w', 'nb_j_precip_sup_5_2w']
df_final = pd.DataFrame(all_data[4:], columns=Columns)
df_final.fillna(method='ffill', inplace=True)
df_final.to_csv('climate_per_week_final.csv')
###Output
_____no_output_____ |
src/log_anomaly_keras.ipynb | ###Markdown
Read data
###Code
import os
import re
import numpy as np
import pandas
from tqdm.notebook import tqdm
import signal
class TimeoutException(Exception): # Custom exception class
pass
def timeout_handler(signum, frame): # Custom signal handler
raise TimeoutException
# Change the behavior of SIGALRM
signal.signal(signal.SIGALRM, timeout_handler)
class DataImporter:
"""
loads data set from the raw dataset
"""
def __init__(self, log_template, dataset_folder_path, dataset_name, dataset_step=1,
dataset_limit=100000, dataset_type='main', normal_indicator:str='-', aux_count=50000):
self.log_template = log_template # a template containing <Token{n}> and <Message>
self.log_dataframe = None
self.dataset_folder_path: str = dataset_folder_path # path to the dataset folder
self.dataset_name: str = dataset_name # full name of raw dataset
self.step: int = dataset_step # step taken to sample auxiliary dataset
self.log_template_regex: re = re.compile(r'')
self.log_template_headers: list[str] = []
self.limit: int = dataset_limit # used for faster experiment only
self.dataset_type: str = dataset_type
self.normal_indicator: str = normal_indicator # a sign indicating the log line is anomaly
self.aux_count: int = aux_count
def log_loader(self):
"""
read from IO stream and only take the actual log message based on template
:return:
"""
log_messages = []
counter = 0
# there's uncommon encoding in dataset BG/P
with open(os.path.join(self.dataset_folder_path, self.dataset_name), 'r', encoding="latin-1") as ds:
for line_no, line in enumerate(tqdm(ds)):
if line_no % self.step == 0: # jump over steps
try:
#signal.alarm(30)
try:
match = self.log_template_regex.search(line.strip())
message = [match.group(header) for header in self.log_template_headers]
# if self.dataset_name=='Intrepid_RAS_0901_0908_scrubbed_small':
# print(message)
log_messages.append(message)
counter += 1
except Exception:
#print("Regex hang detected, skipping")
pass # catastrophic backtracking
except TimeoutException:
pass
if line_no == self.limit:
break
df = pandas.DataFrame(log_messages, columns=self.log_template_headers)
df.insert(0, 'LineId', None)
df['LineId'] = [i + 1 for i in range(counter)]
return df
def load(self):
self.log_template_matcher()
self.log_dataframe = self.log_loader()
# differentiate anomaly with normal log
log_messages= self.log_dataframe.Message
true_labels = np.where(self.log_dataframe.Token0.values == self.normal_indicator, 0, 1)
if self.dataset_type == 'auxiliary':
print(log_messages.iloc[true_labels.flatten() == 0].shape)
print(log_messages.iloc[true_labels.flatten() == 1])
df_normal = log_messages.iloc[true_labels.flatten() == 0].sample(n=self.aux_count).values
df_anomalies = log_messages.iloc[true_labels.flatten() == 1].sample(n=self.aux_count).values
return df_normal, df_anomalies
elif self.dataset_type == 'main':
return log_messages, true_labels
def load_special(self):
self.log_template_matcher()
self.log_dataframe = self.log_loader()
# differentiate anomaly with normal log
log_messages= self.log_dataframe.Message
true_labels = np.where(self.log_dataframe.Token0.values == self.normal_indicator, 0, 1)
if self.dataset_type == 'auxiliary':
df_normal = log_messages.iloc[true_labels.flatten() == 0].sample(n=self.aux_count).values
df_anomalies = log_messages.iloc[true_labels.flatten() == 1].sample(n=self.aux_count).values
return df_normal, df_anomalies
elif self.dataset_type == 'main':
return log_messages, true_labels
def log_template_matcher(self):
headers = []
template_chunks = re.split(r'(<[^<>]+>)', self.log_template)
expression = ''
for template_chunk_idx in range(len(template_chunks)):
if template_chunk_idx % 2 == 0:
splitter = re.sub(' +', '\\\s+', template_chunks[template_chunk_idx])
expression += splitter
else:
header = template_chunks[template_chunk_idx].strip('<').strip('>')
expression += '(?P<%s>.+?)' % header # change * from +
headers.append(header)
print(expression)
expression = re.compile('^' + expression + '$')
self.log_template_headers, self.log_template_regex = headers, expression
def pickle_processed(self, processed):
"""
pickle the df with only log message to a file
:return:
"""
import pickle
pickle_path = os.path.join(self.dataset_folder_path, f'{self.dataset_name}_processed.pkl')
        with open(pickle_path, 'wb') as cached:  # binary write mode is required for pickle.dump
print(f"Dumping processed dataset to pickle file path - {pickle_path}")
pickle.dump(processed, cached)
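# Illustrative check (not part of the original pipeline) of how a template is turned into a
# named-group regex by log_template_matcher; folder/name below are placeholders, no file is read.
_demo = DataImporter(log_template='<Token0> <Message>', dataset_folder_path='.', dataset_name='demo')
_demo.log_template_matcher()
print(_demo.log_template_headers)   # ['Token0', 'Message']
print(_demo.log_template_regex.pattern)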
###Output
_____no_output_____
###Markdown
Tokenize data Use NLTK and regex to remove HTTP endpoints, stopwords, and numerical tokens. Also convert to lower case. Finally, add [CLS]
###Code
# Get NLTK data dicts
import re
import nltk
from nltk import word_tokenize
from nltk.corpus import stopwords
nltk.download('stopwords')
nltk.download('punkt')
"""
Standard tokenizer + do what the paper says
"""
class DataTokenizer:
def __init__(self):
self.word2index = {'[PAD]': 0, '[CLS]': 1, '[MASK]': 2}
self.num_words = 3
self.stop_words = set(stopwords.words('english'))
def tokenize(self, message):
# paper section IV: Tokenization processing
message = message.lower()
message = re.sub(r'/.*:', '', message, flags=re.MULTILINE) # filter for endpoints
message = re.sub(r'/.*', '', message, flags=re.MULTILINE)
message = word_tokenize(message) # remove non words
message = [word for word in message if word.isalpha()] # remove numerical
message = [word for word in message if word not in self.stop_words] # remove nltk common stopwords
#message = ['[CLS]'] + message # add embedding token
for word_idx, word in enumerate(message): # convert to value
if word not in self.word2index:
self.word2index[word] = self.num_words
self.num_words += 1
message[word_idx] = self.word2index[word]
return message
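# Quick illustrative check of DataTokenizer (not part of the pipeline): endpoints after '/',
# numeric tokens and stopwords are dropped; remaining words are mapped to integer ids.
_toy = DataTokenizer()
print(_toy.tokenize("Kernel panic at /usr/lib/module: error code 42"))
print(_toy.word2index)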
from google.colab import drive
drive.mount('/content/drive')
# special to google colab
folder_path = 'drive/MyDrive/logsy_data/dataset'
# TODO move small dataset to same named but in dataset_small folder
## loading bgp https://www.usenix.org/sites/default/files/4372-intrepid_ras_0901_0908_scrubbed.zip.tar 1.0GB; this dataset uses anomaly indicator 'FATAL'
bgp_template = '<Token1> <Token2> <Token3> <Token4> <Token5> <Token0> <Message>'
name = 'Intrepid_RAS_0901_0908_scrubbed' # BG/P
# use special loader
special_d, special_l = DataImporter(log_template=bgp_template, dataset_folder_path=folder_path,
dataset_name=name, dataset_step=1, dataset_type='main',dataset_limit=11000000, normal_indicator='FATAL').load_special()
third_anomaly = special_d[special_l==0]
print(f'\nSuccessfully imported - {len(special_d)} => {len(third_anomaly)} Messages from dataset at {os.path.join(folder_path, name)}\n\n')
print(third_anomaly[1:5])
# third_normal = third_data[label==1]
# third_anomaly = third_data[label==0]
## Use others as auxiliary
### loading spirit http://0b4af6cdc2f0c5998459-c0245c5c937c5dedcca3f1764ecc9b2f.r43.cf2.rackcdn.com/hpc4/spirit2.gz
#### big one so step should be bigger, 39GB of data; this dataset uses anomaly indicator '-'
spirit_template = '<Token0> <Token1> <Token2> <Token3> <Token4> <Token5> <Token6> <Token7> <Message>'
#name = 'spirit2'
name = 'spirit_small'
dataset_limit = 6000000
first_normal, first_anomaly = DataImporter(log_template=spirit_template, dataset_folder_path=folder_path,
dataset_name=name, dataset_step=3, dataset_type='auxiliary',dataset_limit=dataset_limit, normal_indicator='-', aux_count=int(dataset_limit*0.02)).load()
print(f'\nSuccessfully imported - {len(first_normal)} => {len(first_anomaly)} Messages from dataset at {os.path.join(folder_path, name)}\n\n')
print(first_anomaly[1:5])
### loading liberty http://0b4af6cdc2f0c5998459-c0245c5c937c5dedcca3f1764ecc9b2f.r43.cf2.rackcdn.com/hpc4/liberty2.gz
### 30GB not used yet
...
# thunderbird_template = '<Token0> <Token1> <Token2> <Token3> <Token4> <Token5> <Token6> <Token7> <Token8>(\[<Token9>\])?: <Message>'
# name = 'tbird2_small' # original dataset is too big, limit to 5,000,000 rows
# second_normal, second_anomaly = DataImporter(log_template=thunderbird_template, dataset_folder_path=folder_path,
# dataset_name=name, dataset_step=1, dataset_type='auxiliary',dataset_limit=dataset_limit, normal_indicator='-', aux_count=int(dataset_limit*0.02)).load()
# print(f'\nSuccessfully imported - {len(second_normal)} => {len(second_anomaly)} Messages from dataset at {os.path.join(folder_path, name)}\n\n')
### BG/L http://0b4af6cdc2f0c5998459-c0245c5c937c5dedcca3f1764ecc9b2f.r43.cf2.rackcdn.com/hpc4/bgl2.gz 0.72GB
bgl_template = '<Token0> <Token1> <Token2> <Token3> <Token4> <Token5> <Token6> <Token7> <Token8> <Message>' # bgl style token template
name = 'bgl2' # BG/L
second_normal, second_anomaly = DataImporter(log_template=bgl_template, dataset_folder_path=folder_path,
dataset_name=name, dataset_step=1, dataset_type='auxiliary', dataset_limit=5000000, normal_indicator='-', aux_count=int(dataset_limit*0.02)).load()
print(f'\nSuccessfully imported - {len(second_normal)} => {len(second_anomaly)} Messages from dataset at {os.path.join(folder_path, name)}\n\n')
print(second_anomaly[1:5])
##################### < THIS PART WORKS PROPERLY
# concatenate the 3 auxiliary datasets
concat_normal = [] # not needed, auxiliary data are all treated as anomalies
print(third_anomaly)
print(len(first_anomaly), len(second_anomaly), len(third_anomaly))
concat_anomaly = np.append(first_anomaly, second_anomaly)
concat_anomaly = np.append(concat_anomaly, third_anomaly)
print(len(concat_anomaly))
# sampling auxiliary data from concat # we have 300000 =int(dataset_limit*0.05)
aux_anomalies = np.random.choice(concat_anomaly, size=250000, replace=False)
print(aux_anomalies.shape)
###
# 12.5% is anomaly aux
############################ Loading main data and auxiliary data (for testing purposes use small version)
## use tbird2 as main - currently using BG/L
# ### BG/L http://0b4af6cdc2f0c5998459-c0245c5c937c5dedcca3f1764ecc9b2f.r43.cf2.rackcdn.com/hpc4/bgl2.gz 0.72GB
# bgl_template = '<Token0> <Token1> <Token2> <Token3> <Token4> <Token5> <Token6> <Token7> <Token8> <Message>' # bgl style token template
# name = 'bgl'
# log_messages, labels = DataImporter(log_template=bgl_template, dataset_folder_path=folder_path,
# dataset_name=name, dataset_step=1, dataset_type='main', dataset_limit=5000000, normal_indicator='-', aux_count=int(dataset_limit*0.02)).load()
# print(f'\nSuccessfully imported - {len(second_normal)} => {len(second_anomaly)} Messages from dataset at {os.path.join(folder_path, name)}\n\n')
thunderbird_template = '<Token0> <Token1> <Token2> <Token3> <Token4> <Token5> <Token6> <Token7> <Token8>(\[<Token9>\])?: <Message>'
name = 'tbird2_medium_200m_40step' # original dataset is too big, limit to 5,000,000 rows
log_messages, labels = DataImporter(log_template=thunderbird_template, dataset_folder_path=folder_path,
dataset_name=name, dataset_step=1, dataset_type='main',dataset_limit=5000000, normal_indicator='-', aux_count=int(dataset_limit*0.02)).load()
print(f'\nSuccessfully imported - {len(log_messages)} => {len(labels)} Messages from dataset at {os.path.join(folder_path, name)}\n\n')
print(log_messages)
print(log_messages.shape)
print(labels.shape)
labels = labels.reshape(-1, 1) # reshape
print(labels.shape)
print(labels[0])
#append the anomalies to the full data
concat_messages = np.append(log_messages.values.reshape(-1,1), aux_anomalies.reshape(-1,1), axis=0)
print(labels.shape)
print(np.ones(len(aux_anomalies)).shape)
concat_labels = np.append(labels, np.ones(len(aux_anomalies)).reshape(-1,1), axis=0).flatten()
concat_messages.shape, concat_labels.shape
import collections
collections.Counter(concat_labels)
############################# Tokenize full data
from tqdm.notebook import trange
print(f'Starting to tokenize messages, pushing result to pickle(TODO)')
tokenizer = DataTokenizer()
data_tokenized = []
print("##################### Data Shape ##############")
print(concat_messages.shape, concat_labels.shape)
print("##################### Data Shape End ##############")
df_len = int(concat_messages.shape[0])
for i in trange(df_len):
tokenized = tokenizer.tokenize(concat_messages[i][0])
data_tokenized.append(tokenized)
data_tokenized = np.asanyarray(data_tokenized)
print(data_tokenized.shape)
import pickle
print(f"vocab size - {tokenizer.num_words}")
vocab_size = tokenizer.num_words
print(folder_path)
with open(f'{folder_path}/pickled_concat', 'wb') as message_file:
pickle.dump(data_tokenized, message_file)
###Output
Starting to tokenize messages, pushing result to pickle(TODO)
##################### Data Shape ##############
(109801, 1) (109801,)
##################### Data Shape End ##############
###Markdown
Split data into train and test sets. Variables: `data_tokenized`: all data; `labels`: all labels
###Code
# load from file directly without tokenization
with open(f'{folder_path}/pickled_concat','rb') as p:
data_tokenized = pickle.load(p)
ratio = 0.5
train_size = int(len(log_messages) * ratio)
test_size = int(len(log_messages) * (1-ratio))
print(train_size, test_size)
print(train_size/len(log_messages))
# def split_data(data, labels, train_size, test_size):
# print(train_size, test_size)
# x_train, = np.append(data[:train_size][labels[:train_size]==0], data[train_size:])
# y_train = labels[:train_size][labels[:train_size]==0]
# x_test = data[train_size:][labels[train_size:]==1]
# y_test = labels[train_size:][labels[train_size:]==1]
# print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
# return x_train, y_train, x_test, y_test
# #from sklearn.model_selection import train_test_split
# #x_train, y_train, x_test, y_test = train_test_split(pd, labels, test_size=0.2, random_state=42)
# x_train, y_train, x_test, y_test = split_data(pd, labels, train_size, test_size)
# len(x_train), len(x_val), len(y_train), len(y_val)
from collections import Counter
collections.Counter(concat_labels)
#print(Counter(labels)) # target set
print(data_tokenized.shape) # tokenized concat shape
print(len(log_messages))
print(len(concat_labels))
target_size = len(log_messages)
a = collections.Counter(concat_labels[target_size:])
print(a)
x_train = np.append(data_tokenized[:train_size][concat_labels[:train_size]==0], data_tokenized[target_size:],axis=0)
y_train = np.append(concat_labels[:train_size][concat_labels[:train_size]==0].flatten(), concat_labels[target_size:].flatten(),axis=0)
x_test = data_tokenized[train_size:target_size]
y_test = concat_labels[train_size:target_size]
# print(x_train, y_train, x_test, y_test )
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
print(collections.Counter(y_train))
print(collections.Counter(y_test))
import tensorflow as tf  # needed below for tf.equal
from keras.preprocessing.sequence import pad_sequences
x_train = pad_sequences(x_train, maxlen=50, truncating="post", padding="post")
x_test = pad_sequences(x_test, maxlen=50, truncating="post", padding="post")
print(x_train.shape, x_test.shape)
print(x_train[0])
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
## padding masks
x_train_masks = tf.equal(x_train, 0)
x_test_masks = tf.equal(x_test, 0)
print(x_train_masks,x_test_masks)
###Output
(59740, 50) (49901, 50)
[3 4 5 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0]
(59740, 50) (59740,) (49901, 50) (49901,)
tf.Tensor(
[[False False False ... True True True]
[False False False ... True True True]
[False False False ... True True True]
...
[False False False ... True True True]
[False False False ... True True True]
[False False False ... True True True]], shape=(59740, 50), dtype=bool) tf.Tensor(
[[False False False ... True True True]
[False False True ... True True True]
[False False False ... True True True]
...
[False False False ... True True True]
[False False False ... True True True]
[False False False ... True True True]], shape=(49901, 50), dtype=bool)
###Markdown
Transformer models An open-source implementation of standard transformer with multi-head attention, for comparison
###Code
import numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Layer
from tensorflow.keras.callbacks import Callback
import os
@tf.keras.utils.register_keras_serializable()
class Embedding(tf.keras.layers.Layer):
def __init__(self, vocab_size, model_dim, **kwargs):
self._vocab_size = vocab_size
self._model_dim = model_dim
super(Embedding, self).__init__(**kwargs)
def build(self, input_shape):
self.embeddings = self.add_weight(
shape=(self._vocab_size, self._model_dim),
initializer='glorot_uniform',
name="embeddings")
super(Embedding, self).build(input_shape)
def call(self, inputs, **kwargs):
if K.dtype(inputs) != 'int32':
inputs = K.cast(inputs, 'int32')
embeddings = K.gather(self.embeddings, inputs)
embeddings *= self._model_dim ** 0.5 # Scale
return embeddings
def compute_output_shape(self, input_shape):
return input_shape + (self._model_dim,)
@tf.keras.utils.register_keras_serializable()
class ScaledDotProductAttention(tf.keras.layers.Layer):
def __init__(self, masking=True, future=False, dropout_rate=0., **kwargs):
self._masking = masking
self._future = future
self._dropout_rate = dropout_rate
self._masking_num = -2**32+1
super(ScaledDotProductAttention, self).__init__(**kwargs)
def mask(self, inputs, masks):
masks = K.cast(masks, 'float32')
masks = K.tile(masks, [K.shape(inputs)[0] // K.shape(masks)[0], 1])
masks = K.expand_dims(masks, 1)
outputs = inputs + masks * self._masking_num
return outputs
def future_mask(self, inputs):
diag_vals = tf.ones_like(inputs[0, :, :])
tril = tf.linalg.LinearOperatorLowerTriangular(diag_vals).to_dense()
future_masks = tf.tile(tf.expand_dims(tril, 0), [tf.shape(inputs)[0], 1, 1])
paddings = tf.ones_like(future_masks) * self._masking_num
outputs = tf.where(tf.equal(future_masks, 0), paddings, inputs)
return outputs
def call(self, inputs, **kwargs):
if self._masking:
assert len(inputs) == 4, "inputs should be set [queries, keys, values, masks]."
queries, keys, values, masks = inputs
else:
assert len(inputs) == 3, "inputs should be set [queries, keys, values]."
queries, keys, values = inputs
if K.dtype(queries) != 'float32': queries = K.cast(queries, 'float32')
if K.dtype(keys) != 'float32': keys = K.cast(keys, 'float32')
if K.dtype(values) != 'float32': values = K.cast(values, 'float32')
matmul = K.batch_dot(queries, tf.transpose(keys, [0, 2, 1])) # MatMul
scaled_matmul = matmul / int(queries.shape[-1]) ** 0.5 # Scale
if self._masking:
scaled_matmul = self.mask(scaled_matmul, masks) # Mask(opt.)
if self._future:
scaled_matmul = self.future_mask(scaled_matmul)
softmax_out = K.softmax(scaled_matmul) # SoftMax
# Dropout
out = K.dropout(softmax_out, self._dropout_rate)
outputs = K.batch_dot(out, values)
return outputs
def compute_output_shape(self, input_shape):
return input_shape
@tf.keras.utils.register_keras_serializable()
class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, n_heads, head_dim, dropout_rate=.1, masking=True, future=False, trainable=True, **kwargs):
self._n_heads = n_heads
self._head_dim = head_dim
self._dropout_rate = dropout_rate
self._masking = masking
self._future = future
self._trainable = trainable
super(MultiHeadAttention, self).__init__(**kwargs)
def build(self, input_shape):
self._weights_queries = self.add_weight(
shape=(input_shape[0][-1], self._n_heads * self._head_dim),
initializer='glorot_uniform',
trainable=self._trainable,
name='weights_queries')
self._weights_keys = self.add_weight(
shape=(input_shape[1][-1], self._n_heads * self._head_dim),
initializer='glorot_uniform',
trainable=self._trainable,
name='weights_keys')
self._weights_values = self.add_weight(
shape=(input_shape[2][-1], self._n_heads * self._head_dim),
initializer='glorot_uniform',
trainable=self._trainable,
name='weights_values')
super(MultiHeadAttention, self).build(input_shape)
def call(self, inputs, **kwargs):
if self._masking:
assert len(inputs) == 4, "inputs should be set [queries, keys, values, masks]."
queries, keys, values, masks = inputs
else:
assert len(inputs) == 3, "inputs should be set [queries, keys, values]."
queries, keys, values = inputs
queries_linear = K.dot(queries, self._weights_queries)
keys_linear = K.dot(keys, self._weights_keys)
values_linear = K.dot(values, self._weights_values)
queries_multi_heads = tf.concat(tf.split(queries_linear, self._n_heads, axis=2), axis=0)
keys_multi_heads = tf.concat(tf.split(keys_linear, self._n_heads, axis=2), axis=0)
values_multi_heads = tf.concat(tf.split(values_linear, self._n_heads, axis=2), axis=0)
if self._masking:
att_inputs = [queries_multi_heads, keys_multi_heads, values_multi_heads, masks]
else:
att_inputs = [queries_multi_heads, keys_multi_heads, values_multi_heads]
attention = ScaledDotProductAttention(
masking=self._masking, future=self._future, dropout_rate=self._dropout_rate)
att_out = attention(att_inputs)
outputs = tf.concat(tf.split(att_out, self._n_heads, axis=0), axis=2)
return outputs
def compute_output_shape(self, input_shape):
return input_shape
@tf.keras.utils.register_keras_serializable()
class PositionEncoding(Layer):
def __init__(self, model_dim, **kwargs):
self._model_dim = model_dim
super(PositionEncoding, self).__init__(**kwargs)
def call(self, inputs, **kwargs):
seq_length = inputs.shape[1]
position_encodings = np.zeros((seq_length, self._model_dim))
for pos in range(seq_length):
for i in range(self._model_dim):
position_encodings[pos, i] = pos / np.power(10000, (i-i%2) / self._model_dim)
position_encodings[:, 0::2] = np.sin(position_encodings[:, 0::2]) # 2i
position_encodings[:, 1::2] = np.cos(position_encodings[:, 1::2]) # 2i+1
position_encodings = K.cast(position_encodings, 'float32')
return position_encodings
def compute_output_shape(self, input_shape):
return input_shape
@tf.keras.utils.register_keras_serializable()
class Add(Layer):
def __init__(self, **kwargs):
super(Add, self).__init__(**kwargs)
def call(self, inputs, **kwargs):
input_a, input_b = inputs
return input_a + input_b
def compute_output_shape(self, input_shape):
return input_shape[0]
@tf.keras.utils.register_keras_serializable()
class PositionWiseFeedForward(Layer):
def __init__(self, model_dim, inner_dim, trainable=True, **kwargs):
self._model_dim = model_dim
self._inner_dim = inner_dim
self._trainable = trainable
super(PositionWiseFeedForward, self).__init__(**kwargs)
def build(self, input_shape):
self.weights_inner = self.add_weight(
shape=(input_shape[-1], self._inner_dim),
initializer='glorot_uniform',
trainable=self._trainable,
name="weights_inner")
self.weights_out = self.add_weight(
shape=(self._inner_dim, self._model_dim),
initializer='glorot_uniform',
trainable=self._trainable,
name="weights_out")
self.bias_inner = self.add_weight(
shape=(self._inner_dim,),
initializer='uniform',
trainable=self._trainable,
name="bias_inner")
self.bias_out = self.add_weight(
shape=(self._model_dim,),
initializer='uniform',
trainable=self._trainable,
name="bias_out")
super(PositionWiseFeedForward, self).build(input_shape)
def call(self, inputs, **kwargs):
if K.dtype(inputs) != 'float32':
inputs = K.cast(inputs, 'float32')
inner_out = K.relu(K.dot(inputs, self.weights_inner) + self.bias_inner)
outputs = K.dot(inner_out, self.weights_out) + self.bias_out
return outputs
def compute_output_shape(self, input_shape):
return self._model_dim
@tf.keras.utils.register_keras_serializable()
class LayerNormalization(Layer):
def __init__(self, epsilon=1e-8, **kwargs):
self._epsilon = epsilon
super(LayerNormalization, self).__init__(**kwargs)
def build(self, input_shape):
self.beta = self.add_weight(
shape=(input_shape[-1],),
initializer='zero',
name='beta')
self.gamma = self.add_weight(
shape=(input_shape[-1],),
initializer='one',
name='gamma')
super(LayerNormalization, self).build(input_shape)
def call(self, inputs, **kwargs):
mean, variance = tf.nn.moments(inputs, [-1], keepdims=True)
normalized = (inputs - mean) / ((variance + self._epsilon) ** 0.5)
outputs = self.gamma * normalized + self.beta
return outputs
def compute_output_shape(self, input_shape):
return input_shape
@tf.keras.utils.register_keras_serializable()
class Transformer(Layer):
def __init__(self,
vocab_size,
model_dim,
n_heads=8,
encoder_stack=6,
decoder_stack=6,
feed_forward_size=2048,
dropout_rate=0.1,
**kwargs):
self._vocab_size = vocab_size
self._model_dim = model_dim
self._n_heads = n_heads
self._encoder_stack = encoder_stack
self._decoder_stack = decoder_stack
self._feed_forward_size = feed_forward_size
self._dropout_rate = dropout_rate
super(Transformer, self).__init__(**kwargs)
def build(self, input_shape):
self.embeddings = self.add_weight(
shape=(self._vocab_size, self._model_dim),
initializer='glorot_uniform',
trainable=True,
name="embeddings")
self.EncoderPositionEncoding = PositionEncoding(self._model_dim)
self.EncoderMultiHeadAttentions = [
MultiHeadAttention(self._n_heads, self._model_dim // self._n_heads)
for _ in range(self._encoder_stack)
]
self.EncoderLayerNorms0 = [
LayerNormalization()
for _ in range(self._encoder_stack)
]
self.EncoderPositionWiseFeedForwards = [
PositionWiseFeedForward(self._model_dim, self._feed_forward_size)
for _ in range(self._encoder_stack)
]
self.EncoderLayerNorms1 = [
LayerNormalization()
for _ in range(self._encoder_stack)
]
self.DecoderPositionEncoding = PositionEncoding(self._model_dim)
self.DecoderMultiHeadAttentions0 = [
MultiHeadAttention(self._n_heads, self._model_dim // self._n_heads, future=True)
for _ in range(self._decoder_stack)
]
self.DecoderLayerNorms0 = [
LayerNormalization()
for _ in range(self._decoder_stack)
]
self.DecoderMultiHeadAttentions1 = [
MultiHeadAttention(self._n_heads, self._model_dim // self._n_heads)
for _ in range(self._decoder_stack)
]
self.DecoderLayerNorms1 = [
LayerNormalization()
for _ in range(self._decoder_stack)
]
self.DecoderPositionWiseFeedForwards = [
PositionWiseFeedForward(self._model_dim, self._feed_forward_size)
for _ in range(self._decoder_stack)
]
self.DecoderLayerNorms2 = [
LayerNormalization()
for _ in range(self._decoder_stack)
]
super(Transformer, self).build(input_shape)
def encoder(self, inputs):
if K.dtype(inputs) != 'int32':
inputs = K.cast(inputs, 'int32')
masks = K.equal(inputs, 0)
# Embeddings
embeddings = K.gather(self.embeddings, inputs)
embeddings *= self._model_dim ** 0.5 # Scale
# Position Encodings
position_encodings = self.EncoderPositionEncoding(embeddings)
# Embeddings + Position-encodings
encodings = embeddings + position_encodings
# Dropout
encodings = K.dropout(encodings, self._dropout_rate)
for i in range(self._encoder_stack):
# Multi-head-Attention
attention = self.EncoderMultiHeadAttentions[i]
attention_input = [encodings, encodings, encodings, masks]
attention_out = attention(attention_input)
# Add & Norm
attention_out += encodings
attention_out = self.EncoderLayerNorms0[i](attention_out)
# Feed-Forward
ff = self.EncoderPositionWiseFeedForwards[i]
ff_out = ff(attention_out)
# Add & Norm
ff_out += attention_out
encodings = self.EncoderLayerNorms1[i](ff_out)
return encodings, masks
def decoder(self, inputs):
decoder_inputs, encoder_encodings, encoder_masks = inputs
if K.dtype(decoder_inputs) != 'int32':
decoder_inputs = K.cast(decoder_inputs, 'int32')
decoder_masks = K.equal(decoder_inputs, 0)
# Embeddings
embeddings = K.gather(self.embeddings, decoder_inputs)
embeddings *= self._model_dim ** 0.5 # Scale
# Position Encodings
position_encodings = self.DecoderPositionEncoding(embeddings)
# Embeddings + Position-encodings
encodings = embeddings + position_encodings
# Dropout
encodings = K.dropout(encodings, self._dropout_rate)
for i in range(self._decoder_stack):
# Masked-Multi-head-Attention
masked_attention = self.DecoderMultiHeadAttentions0[i]
masked_attention_input = [encodings, encodings, encodings, decoder_masks]
masked_attention_out = masked_attention(masked_attention_input)
# Add & Norm
masked_attention_out += encodings
masked_attention_out = self.DecoderLayerNorms0[i](masked_attention_out)
# Multi-head-Attention
attention = self.DecoderMultiHeadAttentions1[i]
attention_input = [masked_attention_out, encoder_encodings, encoder_encodings, encoder_masks]
attention_out = attention(attention_input)
# Add & Norm
attention_out += masked_attention_out
attention_out = self.DecoderLayerNorms1[i](attention_out)
# Feed-Forward
ff = self.DecoderPositionWiseFeedForwards[i]
ff_out = ff(attention_out)
# Add & Norm
ff_out += attention_out
encodings = self.DecoderLayerNorms2[i](ff_out)
# Pre-SoftMax Embeddings
linear_projection = K.dot(encodings, K.transpose(self.embeddings))
outputs = K.softmax(linear_projection)
return outputs
def call(self, encoder_inputs, decoder_inputs, **kwargs):
encoder_encodings, encoder_masks = self.encoder(encoder_inputs)
encoder_outputs = self.decoder([decoder_inputs, encoder_encodings, encoder_masks])
return encoder_outputs
def compute_output_shape(self, input_shape):
return input_shape[0][0], input_shape[0][1], self._vocab_size
def get_config(self):
config = {
"vocab_size": self._vocab_size,
"model_dim": self._model_dim,
"n_heads": self._n_heads,
"encoder_stack": self._encoder_stack,
"decoder_stack": self._decoder_stack,
"feed_forward_size": self._feed_forward_size,
"dropout_rate": self._dropout_rate
}
base_config = super(Transformer, self).get_config()
return {**base_config, **config}
class Noam(Callback):
def __init__(self, model_dim, step_num=0, warmup_steps=4000, verbose=False):
self._model_dim = model_dim
self._step_num = step_num
self._warmup_steps = warmup_steps
self.verbose = verbose
super(Noam, self).__init__()
def on_train_begin(self, logs=None):
logs = logs or {}
init_lr = self._model_dim ** -.5 * self._warmup_steps ** -1.5
K.set_value(self.model.optimizer.lr, init_lr)
def on_batch_end(self, epoch, logs=None):
logs = logs or {}
self._step_num += 1
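        # Noam schedule from "Attention Is All You Need": lr = d_model^-0.5 * min(step^-0.5, step * warmup^-1.5)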
lrate = self._model_dim ** -.5 * K.minimum(self._step_num ** -.5, self._step_num * self._warmup_steps ** -1.5)
K.set_value(self.model.optimizer.lr, lrate)
def on_epoch_begin(self, epoch, logs=None):
if self.verbose:
lrate = K.get_value(self.model.optimizer.lr)
print(f"epoch {epoch} lr: {lrate}")
def on_epoch_end(self, epoch, logs=None):
logs = logs or {}
logs['lr'] = K.get_value(self.model.optimizer.lr)
def label_smoothing(inputs, epsilon=0.1):
output_dim = inputs.shape[-1]
smooth_label = (1 - epsilon) * inputs + (epsilon / output_dim)
return smooth_label
###Output
_____no_output_____
###Markdown
Build Transformer
###Code
from keras import backend as K
# def recall_m(y_true, y_pred):
# y_pred = K.sum(K.square(y_pred), axis=1)
# true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
# possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
# recall = true_positives / (possible_positives + K.epsilon())
# return recall
def recall_m(y_true, y_pred):
# y_pred = K.sum(K.square(y_pred), axis=1)
# true_positives = K.sum(y_true)
# possible_positives = K.sum(K.round(K.clip(y_pred * y_true, 0 ,1)))
#y_pred = K.sum(K.square(y_pred), axis=1)
y_pred = K.sum(K.square(y_pred), axis=1)
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
return true_positives / (possible_positives + K.epsilon())
def precision_m(y_true, y_pred):
y_pred = K.sum(K.square(y_pred))
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
return true_positives / (predicted_positives + K.epsilon())
def f1_m(y_true, y_pred):
y_pred = K.sum(K.square(y_pred))
precision = precision_m(y_true, y_pred)
recall = recall_m(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
def accuracy_m(y_true, y_pred):
y_pred = K.sum(K.square(y_pred), axis=1)
acc = K.mean(y_true==K.round(y_pred))
return acc
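# The metrics above treat the squared L2 norm of the embedding, K.sum(K.square(y_pred), axis=1),
# as an anomaly score and round/clip it before comparing with the 0/1 labels, mirroring the
# distance-based scoring used in custom_loss_function below.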
def create_model():
model_dim = 16
batch_size = 256
epochs = 10
max_len = 50
encoder_inputs = tf.keras.Input(shape=(max_len,), name='encoder_inputs')
decoder_inputs = tf.keras.Input(shape=(max_len,), name='decoder_inputs')
  vocab_size = tokenizer.num_words  # the DataTokenizer defined above exposes num_words
outputs = Transformer(
vocab_size,
model_dim,
n_heads=2,
encoder_stack=2,
decoder_stack=2,
feed_forward_size=16
)(encoder_inputs, decoder_inputs)
outputs = tf.keras.layers.GlobalAveragePooling1D()(outputs)
#outputs = tf.keras.layers.
#outputs = K.sum(K.square(outputs), axis=1)
# function = lambda x: K.sum(x, axis=1)
# outputs = tf.keras.layers.Lambda(function, output_shape=(None,1))(outputs)
# model = tf.keras.Model(inputs=[encoder_inputs, decoder_inputs], outputs=outputs)
# outputs=tf.keras.activations.sigmoid(outputs)
model = tf.keras.Model(inputs=[encoder_inputs, decoder_inputs], outputs=outputs)
return model
# keract.get_activations(model, x, layer_names=None, nodes_to_evaluate=None, output_format='simple', nested=False, auto_compile=True)
# model.compile(optimizer=tf.keras.optimizers.Adam(beta_1=0.9, beta_2=0.98, epsilon=1e-9),
# loss='binary_crossentropy', metrics=['accuracy',recall_m, precision_m,f1_m], loss_weights = [0.3, 1.0])
# learning rate decay for optmimizer
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=0.001, # 0.0001,
decay_steps=10000,
decay_rate=1-0.001)
# optimizer
model_opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule,beta_1=0.9, beta_2=0.999)
def custom_loss_function(y_true, y_pred):
import torch
dist = K.sum(K.square(y_pred),axis=1)
# print(f"y_pred -==== {y_pred,y_true}")
# print(f"y_pred - 0.0 {y_pred - 0.0}")
#dist = torch.sum((y_pred[:,0,:] - 0) ** 2, dim=1)
# loss = K.mean(y_pred,axis=1)
# loss = K.mean((1-y_true)*K.sqrt(dist) - (y_true)*K.log(1-K.exp(-K.sqrt(dist))))
loss = K.mean((1-y_true)*K.square(dist) - (y_true)*K.log(1-K.exp(-K.square(dist))))
return loss
#model.compile(optimizer=model_opt, loss="binary_crossentropy", metrics=['accuracy',recall_m, precision_m,f1_m],loss_weights = [0.3, 1.0]) #, loss_weights = [0.3, 1.0]
use_tpu = True
if use_tpu:
# Create distribution strategy
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
strategy = tf.distribute.TPUStrategy(tpu)
# Create model
with strategy.scope():
model = create_model()
#model.compile(optimizer=model_opt, loss=custom_loss_function ,loss_weights = [0.3, 1.0]) #, loss_weights = [0.3, 1.0]
model.compile(optimizer=model_opt, loss=custom_loss_function, metrics=[accuracy_m,recall_m,precision_m],loss_weights = [0.3, 1.0]) #, loss_weights = [0.3, 1.0]
#es = EarlyStopping(patience=3)
print(model.summary())
batch_size = 256  # create_model's local batch_size is not visible here, so define it for the fit call
model.fit([x_train, x_train_masks], y_train,
batch_size=batch_size, epochs=30, validation_data=([x_test,x_test_masks],y_test)) # , callbacks=[es]
###Output
_____no_output_____
###Markdown
Define spherical loss function and optimizer; test and train
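The `custom_loss_function` defined above implements a hyperspherical objective: with $d_i=\lVert\phi(x_i)\rVert^2$ the squared norm of the pooled embedding, the code computes $\mathcal{L}=\frac{1}{N}\sum_i\big[(1-y_i)\,d_i^{2}-y_i\log\big(1-e^{-d_i^{2}}\big)\big]$, pulling normal samples ($y_i=0$) toward the origin and pushing auxiliary anomalies ($y_i=1$) away from it.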
###Code
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
"""## Implement a Transformer block as a layer"""
class TransformerBlock(layers.Layer):
def __init__(self, embed_dim, num_heads, ff_dim, rate=0.05):
super(TransformerBlock, self).__init__()
self.att = layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
self.ffn = keras.Sequential(
[layers.Dense(ff_dim, activation="relu"), layers.Dense(embed_dim),]
)
self.layernorm1 = layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = layers.Dropout(rate)
self.dropout2 = layers.Dropout(rate)
def call(self, inputs, training):
attn_output = self.att(inputs, inputs)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(inputs + attn_output)
ffn_output = self.ffn(out1)
ffn_output = self.dropout2(ffn_output, training=training)
return self.layernorm2(out1 + ffn_output)
"""## Implement embedding layer
Two seperate embedding layers, one for tokens, one for token index (positions).
"""
class TokenAndPositionEmbedding(layers.Layer):
def __init__(self, maxlen, vocab_size, embed_dim):
super(TokenAndPositionEmbedding, self).__init__()
self.token_emb = layers.Embedding(input_dim=vocab_size, output_dim=embed_dim)
self.pos_emb = layers.Embedding(input_dim=maxlen, output_dim=embed_dim)
def call(self, x):
maxlen = tf.shape(x)[-1]
positions = tf.range(start=0, limit=maxlen, delta=1)
positions = self.pos_emb(positions)
x = self.token_emb(x)
return x + positions
input_vocab_size = tokenizer.num_words #tokenizer.n_words # size of input data
output_vocab_size = 2 # binary classficiation output: normal or anomaly data
maxlen = 50 # max encoding position for encoder and decoder layer?
embed_dim = 16 # Embedding size for each token
num_heads = 2 # 2 Number of attention heads
ff_dim = 16 # 16 Hidden layer size in feed forward network inside transformer
dropout_rate = 0.05
inputs = layers.Input(shape=(maxlen,))
embedding_layer = TokenAndPositionEmbedding(maxlen, input_vocab_size, embed_dim)
x = embedding_layer(inputs)
transformer_block = TransformerBlock(embed_dim, num_heads, ff_dim)
x = transformer_block(x)
x = transformer_block(x)
x = layers.GlobalMaxPooling1D()(x)
# x = layers.Dropout(dropout_rate)(x)
# x = layers.Dense(20, activation="relu")(x)
# x = layers.Dropout(dropout_rate)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)
# transformer model
model = keras.Model(inputs=inputs, outputs=outputs)
# learning rate decay for optmimizer
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=0.0001, # 0.0001,
decay_steps=10000,
decay_rate=1-0.001)
# optimizer
model_opt = tf.keras.optimizers.Adam(learning_rate=lr_schedule,beta_1=0.9, beta_2=0.999)
#loss_fun = SimpleLossCompute(model,criterion,model_opt)
#model.compile(model_opt, "sparse_categorical_crossentropy", metrics=["accuracy", recall_m, precision_m, f1_m],) # use our own loss
model.compile(model_opt, loss = custom_loss_function, metrics=["accuracy", recall_m, precision_m, f1_m], loss_weights = [0.3, 1.0]) # use our own loss
#model.compile(model_opt, loss = 'binary_crossentropy', metrics=["accuracy", recall_m, precision_m, f1_m],) # use our own loss
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
# idx = np.random.choice(np.arange(len(x_train)), 1000000, replace=False)
# x_train_small = x_train[idx]
# y_train_small = y_train[idx]
# idx_test = np.random.choice(np.arange(len(x_test)), 60000, replace=False)
# x_test_small = x_test[idx_test]
# y_test_small = y_test[idx_test]
# print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
# print(collections.Counter(y_train_small))
history = model.fit(x_train, y_train,batch_size=256, epochs=30, validation_data=(x_test, y_test), shuffle=True ,)
#history = model.fit(x_train_small, y_train_small, batch_size=2048, epochs=30, validation_data=(x_test_small, y_test_small), shuffle=True ,)
###Output
_____no_output_____
###Markdown
Train and test
###Code
import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score

# Assumption: fpr, tpr, test_ground_labels and max_distances are produced by an evaluation cell not
# shown here, e.g. fpr, tpr, _ = sklearn.metrics.roc_curve(test_ground_labels, max_distances)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc_score(test_ground_labels.astype(np.int32), max_distances))
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____ |
ukpsummarizer-be/cplex/python/examples/cp/jupyter/sched_square.ipynb | ###Markdown
Sched SquareThis tutorial includes everything you need to set up decision optimization engines, build constraint programming models.When you finish this tutorial, you'll have a foundational knowledge of _Prescriptive Analytics_.>This notebook is part of the **[Prescriptive Analytics for Python](https://rawgit.com/IBMDecisionOptimization/docplex-doc/master/docs/index.html)**>It requires a valid subscription to **Decision Optimization on the Cloud** or a **local installation of CPLEX Optimizers**. Discover us [here](https://developer.ibm.com/docloud)Table of contents:- [Describe the business problem](Describe-the-business-problem)* [How decision optimization (prescriptive analytics) can help](How--decision-optimization-can-help)* [Use decision optimization](Use-decision-optimization) * [Step 1: Download the library](Step-1:-Download-the-library) * [Step 2: Set up the engines](Step-2:-Set-up-the-prescriptive-engine) - [Step 3: Model the Data](Step-3:-Model-the-data) - [Step 4: Set up the prescriptive model](Step-4:-Set-up-the-prescriptive-model) * [Define the decision variables](Define-the-decision-variables) * [Express the business constraints](Express-the-business-constraints) * [Express the search phase](Express-the-search-phase) * [Solve with Decision Optimization solve service](Solve-with-Decision-Optimization-solve-service) * [Step 5: Investigate the solution and run an example analysis](Step-5:-Investigate-the-solution-and-then-run-an-example-analysis)* [Summary](Summary)**** Describe the business problem* The aim of the square example is to place a set of small squares of different sizes into a large square. ***** How decision optimization can help* Prescriptive analytics technology recommends actions based on desired outcomes, taking into account specific scenarios, resources, and knowledge of past and current events. This insight can help your organization make better decisions and have greater control of business outcomes. * Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes. * Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage. + For example: + Automate complex decisions and trade-offs to better manage limited resources. + Take advantage of a future opportunity or mitigate a future risk. + Proactively update recommendations based on changing events. + Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes. Use decision optimization Step 1: Download the libraryRun the following code to install Decision Optimization CPLEX Modeling library. The *DOcplex* library contains the two modeling packages, Mathematical Programming and Constraint Programming, referred to earlier.
###Code
import sys
try:
import docplex.cp
except:
if hasattr(sys, 'real_prefix'):
#we are in a virtual env.
!pip install docplex
else:
!pip install --user docplex
###Output
_____no_output_____
###Markdown
Note that the more global package docplex contains another subpackage docplex.mp that is dedicated to Mathematical Programming, another branch of optimization. Step 2: Set up the prescriptive engine* Subscribe to the [Decision Optimization on Cloud solve service](https://developer.ibm.com/docloud).* Get the service URL and your personal API key. Set your DOcplexcloud credentials:0. A first option is to set the DOcplexcloud url and key directly in the model source file *(see below)*1. For a persistent setting, create a Python file __docloud_config.py__ somewhere that is visible from the __PYTHONPATH__
###Code
# Initialize IBM Decision Optimization credentials
SVC_URL = "ENTER YOUR URL HERE"
SVC_KEY = "ENTER YOUR KEY HERE"
from docplex.cp.model import *
###Output
_____no_output_____
###Markdown
Step 3: Model the data Size of the enclosing square
###Code
SIZE_SQUARE = 112
###Output
_____no_output_____
###Markdown
Sizes of the sub-squares
###Code
SIZE_SUBSQUARE = [50, 42, 37, 35, 33, 29, 27, 25, 24, 19, 18, 17, 16, 15, 11, 9, 8, 7, 6, 4, 2]
###Output
_____no_output_____
###Markdown
Step 4: Set up the prescriptive model
###Code
mdl = CpoModel(name="SchedSquare")
###Output
_____no_output_____
###Markdown
Define the decision variables Create array of variables for sub-squares
###Code
x = []
y = []
rx = pulse((0, 0), 0)
ry = pulse((0, 0), 0)
for i in range(len(SIZE_SUBSQUARE)):
sq = SIZE_SUBSQUARE[i]
vx = interval_var(size=sq, name="X" + str(i))
vx.set_end((0, SIZE_SQUARE))
x.append(vx)
rx += pulse(vx, sq)
vy = interval_var(size=sq, name="Y" + str(i))
vy.set_end((0, SIZE_SQUARE))
y.append(vy)
ry += pulse(vy, sq)
###Output
_____no_output_____
###Markdown
Express the business constraints Create dependencies between variables
###Code
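# For every pair of sub-squares, at least one of the four disjuncts below must hold, i.e. the two
# squares are separated along the x axis or along the y axis, so they cannot overlap.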
for i in range(len(SIZE_SUBSQUARE)):
for j in range(i):
mdl.add((end_of(x[i]) <= start_of(x[j]))
| (end_of(x[j]) <= start_of(x[i]))
| (end_of(y[i]) <= start_of(y[j]))
| (end_of(y[j]) <= start_of(y[i])))
###Output
_____no_output_____
###Markdown
Set other constraints
###Code
mdl.add(always_in(rx, (0, SIZE_SQUARE), SIZE_SQUARE, SIZE_SQUARE))
mdl.add(always_in(ry, (0, SIZE_SQUARE), SIZE_SQUARE, SIZE_SQUARE))
###Output
_____no_output_____
###Markdown
Express the search phase
###Code
mdl.set_search_phases([search_phase(x), search_phase(y)])
###Output
_____no_output_____
###Markdown
Solve with Decision Optimization solve service
###Code
msol = mdl.solve(url=SVC_URL,
key=SVC_KEY,
TimeLimit=20,
LogPeriod=50000)
###Output
_____no_output_____
###Markdown
Step 5: Investigate the solution and then run an example analysis Print Solution
###Code
print("Solution: ")
msol.print_solution()
###Output
_____no_output_____
###Markdown
Import graphical tools
###Code
import docplex.cp.utils_visu as visu
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
*You can set __POP\_UP\_GRAPHIC=True__ if you prefer a pop up graphic window instead of an inline one.*
###Code
POP_UP_GRAPHIC=False
if msol and visu.is_visu_enabled():
import matplotlib.cm as cm
from matplotlib.patches import Polygon
if not POP_UP_GRAPHIC:
%matplotlib inline
# Plot external square
print("Plotting squares....")
fig, ax = plt.subplots()
plt.plot((0, 0), (0, SIZE_SQUARE), (SIZE_SQUARE, SIZE_SQUARE), (SIZE_SQUARE, 0))
for i in range(len(SIZE_SUBSQUARE)):
# Display square i
(sx, sy) = (msol.get_var_solution(x[i]), msol.get_var_solution(y[i]))
(sx1, sx2, sy1, sy2) = (sx.get_start(), sx.get_end(), sy.get_start(), sy.get_end())
poly = Polygon([(sx1, sy1), (sx1, sy2), (sx2, sy2), (sx2, sy1)], fc=cm.Set2(float(i) / len(SIZE_SUBSQUARE)))
ax.add_patch(poly)
# Display identifier of square i at its center
ax.text(float(sx1 + sx2) / 2, float(sy1 + sy2) / 2, str(SIZE_SUBSQUARE[i]), ha='center', va='center')
plt.margins(0)
plt.show()
###Output
_____no_output_____ |
mini_book/_build/html/_sources/docs/python_by_example.ipynb | ###Markdown
(python_by_example)= An Introductory Example OverviewWe\'re now ready to start learning the Python language itself.In this lecture, we will write and then pick apart small Pythonprograms.The objective is to introduce you to basic Python syntax and datastructures.Deeper concepts will be covered in later lectures.You should have read the {ref}`lecture ` on getting started with Python before beginning this one. The Task: Plotting a White Noise ProcessSuppose we want to simulate and plot the white noise process$\epsilon_0, \epsilon_1, \ldots, \epsilon_T$, where each draw$\epsilon_t$ is independent standard normal.In other words, we want to generate figures that look something likethis:```{figure} /_static/lecture_specific/python_by_example/test_program_1_updated.png```(Here $t$ is on the horizontal axis and $\epsilon_t$ is on the verticalaxis.)We\'ll do this in several different ways, each time learning somethingmore about Python.We run the following command first, which helps ensure that plots appearin the notebook if you run it on your own machine.
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Version 1(ourfirstprog)=Here are a few lines of code that perform the task we set
###Code
import numpy as np
import matplotlib.pyplot as plt
ϵ_values = np.random.randn(100)
plt.plot(ϵ_values)
plt.show()
###Output
_____no_output_____
###Markdown
Let\'s break this program down and see how it works.(import)= ImportsThe first two lines of the program import functionality from externalcode libraries.The first line imports NumPy, afavorite Python package for tasks like- working with arrays (vectors and matrices)- common mathematical functions like `cos` and `sqrt`- generating random numbers- linear algebra, etc.After `import numpy as np` we have access to these attributes via thesyntax `np.attribute`.Here\'s two more examples
###Code
np.sqrt(4)
np.log(4)
###Output
_____no_output_____
###Markdown
We could also use the following syntax:
###Code
import numpy
numpy.sqrt(4)
###Output
_____no_output_____
###Markdown
But the former method (using the short name `np`) is convenient and morestandard. Why So Many Imports?Python programs typically require several import statements.The reason is that the core language is deliberately kept small, so thatit\'s easy to learn and maintain.When you want to do something interesting with Python, you almost alwaysneed to import additional functionality. PackagesAs stated above, NumPy is a Python *package*.Packages are used by developers to organize code they wish to share.In fact, a package is just a directory containing1. files with Python code --- called **modules** in Python speak2. possibly some compiled code that can be accessed by Python (e.g., functions compiled from C or FORTRAN code)3. a file called `__init__.py` that specifies what will be executed when we type `import package_name`In fact, you can find and explore the directory for NumPy on yourcomputer easily enough if you look around.On this machine, it\'s located in```{code-block} noneanaconda3/lib/python3.7/site-packages/numpy``` SubpackagesConsider the line `ϵ_values = np.random.randn(100)`.Here `np` refers to the package NumPy, while `random` is a**subpackage** of NumPy.Subpackages are just packages that are subdirectories of anotherpackage. Importing Names DirectlyRecall this code that we saw above
###Code
import numpy as np
np.sqrt(4)
###Output
_____no_output_____
###Markdown
Here\'s another way to access NumPy\'s square root function
###Code
from numpy import sqrt
sqrt(4)
###Output
_____no_output_____
###Markdown
This is also fine.The advantage is less typing if we use `sqrt` often in our code.The disadvantage is that, in a long program, these two lines might beseparated by many other lines.Then it\'s harder for readers to know where `sqrt` came from, shouldthey wish to. Random DrawsReturning to our program that plots white noise, the remaining threelines after the import statements are
###Code
ϵ_values = np.random.randn(100)
plt.plot(ϵ_values)
plt.show()
###Output
_____no_output_____
###Markdown
The first line generates 100 (quasi) independent standard normals andstores them in `ϵ_values`.The next two lines genererate the plot.We can and will look at various ways to configure and improve this plotbelow. Alternative ImplementationsLet\'s try writing some alternative versions of{ref}`our first program `, whichplotted IID draws from the normal distribution.The programs below are less efficient than the original one, and hencesomewhat artificial.But they do help us illustrate some important Python syntax andsemantics in a familiar setting. A Version with a For LoopHere\'s a version that illustrates `for` loops and Python lists.(firstloopprog)=
###Code
ts_length = 100
ϵ_values = [] # empty list
for i in range(ts_length):
e = np.random.randn()
ϵ_values.append(e)
plt.plot(ϵ_values)
plt.show()
###Output
_____no_output_____
###Markdown
In brief, - The first line sets the desired length of the time series. - The next line creates an empty *list* called `ϵ_values` that will store the $\epsilon_t$ values as we generate them. - The statement `# empty list` is a *comment*, and is ignored by Python\'s interpreter. - The next three lines are the `for` loop, which repeatedly draws a new random number $\epsilon_t$ and appends it to the end of the list `ϵ_values`. - The last two lines generate the plot and display it to the user. Let\'s study some parts of this program in more detail.(lists_ref)= Lists Consider the statement `ϵ_values = []`, which creates an empty list. Lists are a *native Python data structure* used to group a collection of objects. For example, try
###Code
x = [10, 'foo', False]
type(x)
###Output
_____no_output_____
###Markdown
The first element of `x` is an [integer](https://en.wikipedia.org/wiki/Integer_%28computer_science%29), the next is a [string](https://en.wikipedia.org/wiki/String_%28computer_science%29), and the third is a [Boolean value](https://en.wikipedia.org/wiki/Boolean_data_type). When adding a value to a list, we can use the syntax `list_name.append(some_value)`
###Code
x
x.append(2.5)
x
###Output
_____no_output_____
###Markdown
Here `append()` is what\'s called a *method*, which is a function \"attached to\" an object---in this case, the list `x`. We\'ll learn all about methods later on, but just to give you some idea, - Python objects such as lists, strings, etc. all have methods that are used to manipulate the data contained in the object. - String objects have [string methods](https://docs.python.org/3/library/stdtypes.html#string-methods), list objects have [list methods](https://docs.python.org/3/tutorial/datastructures.html#more-on-lists), etc. Another useful list method is `pop()`
###Code
x
x.pop()
x
###Output
_____no_output_____
###Markdown
Lists in Python are zero-based (as in C, Java or Go), so the first element is referenced by `x[0]`
###Code
x[0] # first element of x
x[1] # second element of x
###Output
_____no_output_____
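###Markdown
A few more list operations worth knowing (a quick hedged aside, not used in the program above): `len()` returns the number of elements, and negative indices count from the end.
###Code
len(x)     # number of elements in the list
x[-1]      # last element
x[0:2]     # a slice: the first two elements
###Output
_____no_output_____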
###Markdown
The For Loop Now let\'s consider the `for` loop from {ref}`the program above `, which was
###Code
for i in range(ts_length):
e = np.random.randn()
ϵ_values.append(e)
###Output
_____no_output_____
###Markdown
Python executes the two indented lines `ts_length` times before moving on. These two lines are called a `code block`, since they comprise the \"block\" of code that we are looping over. Unlike most other languages, Python knows the extent of the code block *only from indentation*. In our program, indentation decreases after line `ϵ_values.append(e)`, telling Python that this line marks the lower limit of the code block. More on indentation below---for now, let\'s look at another example of a `for` loop
###Code
animals = ['dog', 'cat', 'bird']
for animal in animals:
print("The plural of " + animal + " is " + animal + "s")
###Output
The plural of dog is dogs
The plural of cat is cats
The plural of bird is birds
###Markdown
This example helps to clarify how the `for` loop works: When we execute a loop of the form```{code-block} ---class: no-execute--- for variable_name in sequence: ``` the Python interpreter performs the following: - For each element of the `sequence`, it \"binds\" the name `variable_name` to that element and then executes the code block. The `sequence` object can in fact be a very general object, as we\'ll see soon enough. A Comment on Indentation In discussing the `for` loop, we explained that the code blocks being looped over are delimited by indentation. In fact, in Python, **all** code blocks (i.e., those occurring inside loops, if clauses, function definitions, etc.) are delimited by indentation. Thus, unlike most other languages, whitespace in Python code affects the output of the program. Once you get used to it, this is a good thing: It - forces clean, consistent indentation, improving readability - removes clutter, such as the brackets or end statements used in other languages On the other hand, it takes a bit of care to get right, so please remember: - The line before the start of a code block always ends in a colon - `for i in range(10):` - `if x > y:` - `while x < 100:` - etc., etc. - All lines in a code block **must have the same amount of indentation**. - The Python standard is 4 spaces, and that\'s what you should use. While Loops The `for` loop is the most common technique for iteration in Python. But, for the purpose of illustration, let\'s modify {ref}`the program above ` to use a `while` loop instead.(whileloopprog)=
###Code
ts_length = 100
ϵ_values = []
i = 0
while i < ts_length:
e = np.random.randn()
ϵ_values.append(e)
i = i + 1
plt.plot(ϵ_values)
plt.show()
###Output
_____no_output_____
###Markdown
Note that - the code block for the `while` loop is again delimited only by indentation - the statement `i = i + 1` can be replaced by `i += 1` Another Application Let\'s do one more application before we turn to exercises. In this application, we plot the balance of a bank account over time. There are no withdrawals over the time period, the last date of which is denoted by $T$. The initial balance is $b_0$ and the interest rate is $r$. The balance updates from period $t$ to $t+1$ according to ```{math} :label: ilom b_{t+1} = (1 + r) b_t``` In the code below, we generate and plot the sequence $b_0, b_1, \ldots, b_T$ generated by {eq}`ilom`. Instead of using a Python list to store this sequence, we will use a NumPy array.
###Code
r = 0.025 # interest rate
T = 50 # end date
b = np.empty(T+1) # an empty NumPy array, to store all b_t
b[0] = 10 # initial balance
for t in range(T):
b[t+1] = (1 + r) * b[t]
plt.plot(b, label='bank balance')
plt.legend()
plt.show()
###Output
_____no_output_____
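###Markdown
For comparison, here is a hedged sketch of the same balance path built with a Python list and `append` instead of a pre-allocated array; the next paragraph explains why the NumPy version above is preferred.
###Code
b_list = [10]                     # initial balance
for t in range(T):
    b_list.append((1 + r) * b_list[-1])
plt.plot(b_list, label='bank balance (list version)')
plt.legend()
plt.show()
###Output
_____no_output_____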
###Markdown
The statement `b = np.empty(T+1)` allocates storage in memory for `T+1` (floating point) numbers. These numbers are filled in by the `for` loop. Allocating memory at the start is more efficient than using a Python list and `append`, since the latter must repeatedly ask for storage space from the operating system. Notice that we added a legend to the plot --- a feature you will be asked to use in the exercises. Exercises Now we turn to exercises. It is important that you complete them before continuing, since they present new concepts we will need. Exercise 1 Your first task is to simulate and plot the correlated time series $$x_{t+1} = \alpha \, x_t + \epsilon_{t+1} \quad \text{where} \quad x_0 = 0 \quad \text{and} \quad t = 0,\ldots,T$$ The sequence of shocks $\{\epsilon_t\}$ is assumed to be IID and standard normal. In your solution, restrict your import statements to
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Set $T=200$ and $\alpha = 0.9$. Exercise 2 Starting with your solution to Exercise 1, plot three simulated time series, one for each of the cases $\alpha=0$, $\alpha=0.8$ and $\alpha=0.98$. Use a `for` loop to step through the $\alpha$ values. If you can, add a legend, to help distinguish between the three time series. Hints: - If you call the `plot()` function multiple times before calling `show()`, all of the lines you produce will end up on the same figure. - For the legend, note that the expression `'foo' + str(42)` evaluates to `'foo42'`. Exercise 3 Similar to the previous exercises, plot the time series $$x_{t+1} = \alpha \, |x_t| + \epsilon_{t+1} \quad \text{where} \quad x_0 = 0 \quad \text{and} \quad t = 0,\ldots,T$$ Use $T=200$, $\alpha = 0.9$ and $\{\epsilon_t\}$ as before. Search online for a function that can be used to compute the absolute value $|x_t|$. Exercise 4 One important aspect of essentially all programming languages is branching and conditions. In Python, conditions are usually implemented with if--else syntax. Here\'s an example that prints -1 for each negative number in an array and 1 for each nonnegative number
###Code
numbers = [-9, 2.3, -11, 0]
for x in numbers:
if x < 0:
print(-1)
else:
print(1)
###Output
-1
1
-1
1
###Markdown
Now, write a new solution to Exercise 3 that does not use an existing function to compute the absolute value. Replace this existing function with an if--else condition.(pbe_ex3)= Exercise 5 Here\'s a harder exercise that takes some thought and planning. The task is to compute an approximation to $\pi$ using [Monte Carlo](https://en.wikipedia.org/wiki/Monte_Carlo_method). Use no imports besides
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Your hints are as follows: - If $U$ is a bivariate uniform random variable on the unit square $(0, 1)^2$, then the probability that $U$ lies in a subset $B$ of $(0,1)^2$ is equal to the area of $B$. - If $U_1,\ldots,U_n$ are IID copies of $U$, then, as $n$ gets large, the fraction that falls in $B$ converges to the probability of landing in $B$. - For a circle, $\text{area} = \pi \cdot \text{radius}^2$. Solutions Exercise 1 Here\'s one solution.
###Code
α = 0.9
T = 200
x = np.empty(T+1)
x[0] = 0
for t in range(T):
x[t+1] = α * x[t] + np.random.randn()
plt.plot(x)
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 2
###Code
α_values = [0.0, 0.8, 0.98]
T = 200
x = np.empty(T+1)
for α in α_values:
x[0] = 0
for t in range(T):
x[t+1] = α * x[t] + np.random.randn()
plt.plot(x, label=f'$\\alpha = {α}$')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 3 Here\'s one solution:
###Code
α = 0.9
T = 200
x = np.empty(T+1)
x[0] = 0
for t in range(T):
x[t+1] = α * np.abs(x[t]) + np.random.randn()
plt.plot(x)
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 4 Here\'s one way:
###Code
α = 0.9
T = 200
x = np.empty(T+1)
x[0] = 0
for t in range(T):
if x[t] < 0:
abs_x = - x[t]
else:
abs_x = x[t]
x[t+1] = α * abs_x + np.random.randn()
plt.plot(x)
plt.show()
###Output
_____no_output_____
###Markdown
Here\'s a shorter way to write the same thing:
###Code
α = 0.9
T = 200
x = np.empty(T+1)
x[0] = 0
for t in range(T):
abs_x = - x[t] if x[t] < 0 else x[t]
x[t+1] = α * abs_x + np.random.randn()
plt.plot(x)
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 5 Consider the circle of diameter 1 embedded in the unit square. Let $A$ be its area and let $r=1/2$ be its radius. If we know $\pi$ then we can compute $A$ via $A = \pi r^2$. But here the point is to compute $\pi$, which we can do by $\pi = A / r^2$. Summary: If we can estimate the area of a circle with diameter 1, then dividing by $r^2 = (1/2)^2 = 1/4$ gives an estimate of $\pi$. We estimate the area by sampling bivariate uniforms and looking at the fraction that falls into the circle.
###Code
n = 100000
count = 0
for i in range(n):
u, v = np.random.uniform(), np.random.uniform()
d = np.sqrt((u - 0.5)**2 + (v - 0.5)**2)
if d < 0.5:
count += 1
area_estimate = count / n
print(area_estimate * 4) # dividing by radius**2
###Output
3.13584
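###Markdown
As an aside, the same estimate can be computed without an explicit Python loop by vectorizing the draws with NumPy (a hedged sketch; the loop above follows the spirit of the exercise more literally).
###Code
u = np.random.uniform(size=(n, 2))            # n points in the unit square
inside = ((u - 0.5)**2).sum(axis=1) < 0.25    # squared distance to center < radius**2
pi_estimate = 4 * inside.mean()               # should again be close to 3.14
###Output
_____no_output_____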
|
_notebooks/2021-06-07-linear.ipynb | ###Markdown
A real SGD update using linear regression> visualizing gradient descent- toc: true- branch: master- badges: true- image: images/sgd.png- comments: true- author: Sajjad Ayoubi- categories: [implementation]
###Code
import numpy as np
from IPython import display
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from torch import nn
import torch
###Output
_____no_output_____
###Markdown
helper functions
###Code
def set_default(figsize=(5, 5), dpi=100):
plt.style.use(['dark_background', 'bmh'])
plt.rc('axes', facecolor='k')
plt.rc('figure', facecolor='k')
plt.rc('figure', figsize=figsize, dpi=dpi)
set_default()
def plot_data(x, y):
plt.scatter(x[:, 0], x[:, 1], c=y)
R = np.array([[0, 1],
[-1, 0]])
def plot_update(x, y, w, g):
plt.quiver(w[:,0], w[:,1], g[:, 0], g[:, 1], color=['g'])
plt.quiver([0], [0], w[:,0], w[:,1], color=['r'])
plt.axline((0, 0), (([email protected]).T[0]))
plt.scatter(x[:, 0], x[:, 1], c=y)
###Output
_____no_output_____
###Markdown
binary classification dataset
###Code
x, y = make_blobs(1_000, centers=2)
x = x - x.mean(axis=0)
plot_data(x, y)
# Convert to Torch format
X = torch.tensor(x).float()
Y = torch.tensor(y).reshape(-1, 1).float()
class LinearLearning:
def __init__(self, input_dim=2, lr=1e-2):
self.model = nn.Sequential(nn.Linear(input_dim, 1), nn.Sigmoid())
self.loss_fn = nn.BCELoss()
self.optimizer = torch.optim.SGD(self.model.parameters(), lr=lr, momentum=0.9)
self.w = self.model[0].weight.detach().numpy()
self.lr = lr
def train_step(self, x, y, i):
self.optimizer.zero_grad()
y_hat = self.model(x)
loss = self.loss_fn(y_hat, y)
loss.backward()
self.optimizer.step()
self.g = - self.lr * self.model[0].weight.grad.detach().numpy()
# print accuracy and loss
acc = (y == (y_hat>0.5)).sum().float() / len(y)
display.clear_output(wait=True)
print("[STEP]: %i [LOSS]: %.6f, [ACCURACY]: %.3f" % (i, loss.item(), acc.item()))
def fit(self, x, y, steps=10, plot_step=1, batch_size=10):
ax = plt.figure(figsize=(20, 8))
c = 0
for i in range(1, steps+1):
self.train_step(x[c:c+batch_size], y[c:c+batch_size], i=i)
if i%plot_step==0:
ax = plt.subplot(2, (steps//plot_step)//2, i//plot_step)
plot_update(x[:c+batch_size], y[:c+batch_size], w=self.w, g=self.g)
ax.legend(['Boundry', 'Grad', 'W'])
plt.title(f'Training Step: {i}')
c += batch_size
plt.show()
learner = LinearLearning()
learner.fit(X, Y)
###Output
[STEP]: 10 [LOSS]: 0.035528, [ACCURACY]: 1.000
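###Markdown
The same class can be reused for a longer run; a hedged usage sketch based on the optional parameters of fit defined above (steps, plot_step and batch_size):
###Code
# Train a fresh learner for 20 steps, plotting every second step on a 2x5 grid
learner2 = LinearLearning(lr=1e-2)
learner2.fit(X, Y, steps=20, plot_step=2, batch_size=25)
###Output
_____no_output_____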
|
docs/source/user_guide/clean/clean_hu_anum.ipynb | ###Markdown
Hungarian ANUM Numbers Introduction The function `clean_hu_anum()` cleans a column containing Hungarian ANUM number (ANUM) strings and standardizes them in a given format. The function `validate_hu_anum()` validates either a single ANUM string, a column of ANUM strings or a DataFrame of ANUM strings, returning `True` if the value is valid, and `False` otherwise. ANUM strings can be converted to the following formats via the `output_format` parameter: * `compact`: only number strings without any separators or whitespace, like "12892312" * `standard`: ANUM strings with proper whitespace in the proper places. Note that in the case of ANUM, the compact format is the same as the standard one. Invalid parsing is handled with the `errors` parameter: * `coerce` (default): invalid parsing will be set to NaN * `ignore`: invalid parsing will return the input * `raise`: invalid parsing will raise an exception The following sections demonstrate the functionality of `clean_hu_anum()` and `validate_hu_anum()`. An example dataset containing ANUM strings
###Code
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"anum": [
'HU-12892312',
'HU-12892313',
'BE 428759497',
'BE431150351',
"002 724 334",
"hello",
np.nan,
"NULL",
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"1111 S Figueroa St, Los Angeles, CA 90015",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
###Output
_____no_output_____
###Markdown
1. Default `clean_hu_anum` By default, `clean_hu_anum` will clean ANUM strings and output them in the standard format with proper separators.
###Code
from dataprep.clean import clean_hu_anum
clean_hu_anum(df, column = "anum")
###Output
_____no_output_____
###Markdown
2. Output formats This section demonstrates the `output_format` parameter. `standard` (default)
###Code
clean_hu_anum(df, column = "anum", output_format="standard")
###Output
_____no_output_____
###Markdown
`compact`
###Code
clean_hu_anum(df, column = "anum", output_format="compact")
###Output
_____no_output_____
###Markdown
3. `inplace` parameter This deletes the given column from the returned DataFrame. A new column containing cleaned ANUM strings is added with a title in the format `"{original title}_clean"`.
###Code
clean_hu_anum(df, column="anum", inplace=True)
###Output
_____no_output_____
###Markdown
4. `errors` parameter `coerce` (default)
###Code
clean_hu_anum(df, "anum", errors="coerce")
###Output
_____no_output_____
###Markdown
`ignore`
###Code
clean_hu_anum(df, "anum", errors="ignore")
###Output
_____no_output_____
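###Markdown
`raise` For completeness: with `errors="raise"`, invalid parsing raises an exception instead of being coerced or ignored. A hedged sketch, wrapped in try/except since this DataFrame deliberately contains invalid values:
###Code
try:
    clean_hu_anum(df, "anum", errors="raise")
except Exception as e:
    print(type(e).__name__, e)
###Output
_____no_output_____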
###Markdown
4. `validate_hu_anum()` `validate_hu_anum()` returns `True` when the input is a valid ANUM. Otherwise it returns `False`.The input of `validate_hu_anum()` can be a string, a Pandas DataSeries, a Dask DataSeries, a Pandas DataFrame and a dask DataFrame.When the input is a string, a Pandas DataSeries or a Dask DataSeries, user doesn't need to specify a column name to be validated. When the input is a Pandas DataFrame or a dask DataFrame, user can both specify or not specify a column name to be validated. If user specify the column name, `validate_hu_anum()` only returns the validation result for the specified column. If user doesn't specify the column name, `validate_hu_anum()` returns the validation result for the whole DataFrame.
###Code
from dataprep.clean import validate_hu_anum
print(validate_hu_anum("HU-12892312"))
print(validate_hu_anum("HU-12892313"))
print(validate_hu_anum('BE 428759497'))
print(validate_hu_anum('BE431150351'))
print(validate_hu_anum("004085616"))
print(validate_hu_anum("hello"))
print(validate_hu_anum(np.nan))
print(validate_hu_anum("NULL"))
###Output
_____no_output_____
###Markdown
Series
###Code
validate_hu_anum(df["anum"])
###Output
_____no_output_____
###Markdown
DataFrame + Specify Column
###Code
validate_hu_anum(df, column="anum")
###Output
_____no_output_____
###Markdown
Only DataFrame
###Code
validate_hu_anum(df)
###Output
_____no_output_____ |
main_not_spin_resolved.ipynb | ###Markdown
Calculation of $G_\mathrm{ep}$ and heat capacities from DFT results (not spin resolved) This notebook calculates the electron-phonon coupling parameter $G_\mathrm{ep}$ and the electron and phonon heat capacities from DFT results (electronic density of states, phonon density of states, Fermi level, Eliashberg function). Note that this is the version for a non-spin-resolved DFT calculation. Use the notebook main_spin_resolved for a spin-resolved DFT calculation (e.g. for ferromagnetic materials). Load required modules, settings from config file, DFT results and other material parameters In order to perform the calculation for another material, create a .py file named material+'_not_spin_resolved' in the folder 'load_inputs' that loads all necessary material-specific data (see examples). Furthermore, change the material entry in the file config.cfg.
###Code
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
# import functions saved separately
from functions.chemical_potential import calculate_chemical_potential
from functions.electron_phonon_coupling import calculate_electron_phonon_coupling
from functions.electron_heat_capacity import calculate_electron_heat_capacity
from functions.phonon_heat_capacity import calculate_phonon_heat_capacity
# import settings from config file
import configparser
config = configparser.ConfigParser()
config.read('config.cfg')
# material (string), to load the corresponding material-specific data and to save the results in the right text file:
material=config['GENERAL']['Material']
# lattice temperature in K, for the calculation of the electron-phonon coupling parameter G:
lattice_temperature=float(config['GENERAL']['Lattice_temperature'])
# desired temperature range in which all quantities should be calculated, in K:
temperatures = np.linspace(int(config['TEMPERATURE']['Temperature_min']),\
int(config['TEMPERATURE']['Temperature_max']),\
int(config['TEMPERATURE']['Temperature_points']))
# run the script that imports all required material-specific data (DFT calculation results and unit cell volume)
input_script_name='load_inputs/'+material+'_not_spin_resolved'
%run $input_script_name
###Output
Material-specific data for nickel has been loaded.
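###Markdown
Based on the keys read above, the file config.cfg is expected to contain a GENERAL and a TEMPERATURE section roughly like the following (a hedged sketch; the actual values and material name depend on your calculation):
###Code
# Hypothetical example of config.cfg (shown here only as a comment, not executed):
#
# [GENERAL]
# Material = nickel
# Lattice_temperature = 300
#
# [TEMPERATURE]
# Temperature_min = 100
# Temperature_max = 5000
# Temperature_points = 200
###Output
_____no_output_____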
###Markdown
Chemical potential The chemical potential is required for calculating the electron-phonon coupling and the electronic heat capacity. The chemical potential ($\mu$) is temperature-dependent because the number of electrons per unit cell ($N_\mathrm{e}$) is constant: $ N_\mathrm{e}=\int_{-\infty}^{\infty} g_\mathrm{e}(\epsilon) \frac{1}{\mathrm{exp}\left( \frac{\epsilon-\mu}{k_\mathrm{B}T}\right)+1}d\epsilon=\int_{-\infty}^{\epsilon_\mathrm{F}} g_\mathrm{e}(\epsilon) d\epsilon.$ Here $g_\mathrm{e}$ is the electronic density of states, $T$ is the temperature, $\epsilon_\mathrm{F}$ is the Fermi energy and $\epsilon$ denotes the energy. Using this equation, the chemical potential as a function of temperature is calculated numerically.
###Code
mu = calculate_chemical_potential(temperatures,e_dos,fermi_energy)
# plot the result (optional)
plt.figure()
plt.plot(temperatures,mu)
plt.xlabel('Temperature [K]'), plt.ylabel('Chemical potential [eV]');
###Output
_____no_output_____
###Markdown
Electron-phonon coupling parameter ($G_\mathrm{ep}$) The electron-phonon coupling parameter is calculated as in Waldecker et al., Phys. Rev. X 6, 021003 (https://doi.org/10.1103/PhysRevX.6.021003), Equation 9, with the (small) difference that changes of the chemical potential with electron temperature are considered here. The corresponding equations are: $Z(T_\mathrm{e},T_\mathrm{l})=-\frac{2\pi}{g_\mathrm{e}(\mu)}\int_0^\infty\left(\hbar \omega\right)^2 \alpha^2F(\omega)\hspace{2pt} \left[ n_\mathrm{BE}(\omega,T_\mathrm{e})-n_\mathrm{BE}(\omega,T_\mathrm{l})\right]d\omega \hspace{2pt} \int_{-\infty}^\infty g_\mathrm{e}^2(\epsilon)\frac{\partial n_\mathrm{FD}(\epsilon,T_\mathrm{e},\mu)}{\partial \epsilon}d\epsilon.$ $G_\mathrm{ep}(T_\mathrm{e},T_\mathrm{l})=\frac{Z(T_\mathrm{e},T_\mathrm{l})}{T_\mathrm{e}-T_\mathrm{l}}.$ Here, $T_\mathrm{e}$ and $T_\mathrm{l}$ are the temperatures of the electrons and the lattice, respectively. $n_\mathrm{BE}$ is the Bose-Einstein distribution function (with $\mu=0$) and $n_\mathrm{FD}$ is the Fermi-Dirac distribution function. $\alpha^2F(\omega)$ denotes the Eliashberg function, $g_\mathrm{e}$ is the electronic density of states, and $\mu$ is the chemical potential (see above).
###Code
g = calculate_electron_phonon_coupling(temperatures,lattice_temperature,mu,e_dos,fermi_energy,eliashberg)
# convert from W/(unit cell K) to W/(m^3 K)
g = g/unit_cell_volume
# plot the result (optional)
plt.figure()
plt.plot(temperatures,g*1e-18)
plt.xlabel('Temperature [K]'), plt.ylabel('Electron-phonon coupling parameter G$_\mathrm{ep}$ [10$^{18}$ W/(m$^3$K)]')
# save the result as text file
np.savetxt('results/'+material+'_notSpinResolved_electronPhononCoupling.txt',np.transpose([temperatures,g]),fmt='%07.2f %e',\
header='material: '+material+' \n'
'electron-phonon coupling parameter G_ep '
'(DFT calculation not spin resolved) \n'
'temperature [K], g_ep [W/(m^3K)] ')
###Output
_____no_output_____
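###Markdown
If a single coupling value is needed, for example as input to a two-temperature-model simulation at one specific electron temperature, the tabulated result can be interpolated (a small hedged sketch, assuming the chosen temperature lies inside the temperature range defined above):
###Code
g_at_1000K = np.interp(1000, temperatures, g)  # G_ep in W/(m^3 K) at T_e = 1000 K (hypothetical query point)
print(g_at_1000K)
###Output
_____no_output_____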
###Markdown
Electron heat capacity The electron heat capacity, $c_\mathrm{e}$, is the derivative of the electron energy ($E_\mathrm{e}$) with respect to the temperature:\begin{equation} c_\mathrm{e}(T)=\left(\frac{\partial E_\mathrm{e}}{\partial T}\right)_V=\frac{d}{dT}\left[\int_{-\infty}^{\infty} g_\mathrm{e}(\epsilon) \frac{\epsilon}{\mathrm{exp}\left( \frac{\epsilon-\mu(T)}{k_\mathrm{B}T}\right)+1}d\epsilon\right]. \label{eq:c_e}\end{equation} $g_\mathrm{e}$ is the electronic density of states and $\mu$ is the chemical potential. The equation corresponds to the electron heat capacity at constant volume.
###Code
electron_heat_capacity = calculate_electron_heat_capacity(temperatures,mu,e_dos,fermi_energy)
# Due to the numerical differentiation, the temperature sampling changes. Therefore, the output has two colums
# with the first column corresponding to the new temperature points.
# convert from J/(unit cell K) to J/(m^3 K)
electron_heat_capacity[:,1] = electron_heat_capacity[:,1]/unit_cell_volume
# plot the result (optional)
plt.figure()
plt.plot(electron_heat_capacity[:,0],electron_heat_capacity[:,1]*1e-6)
plt.xlabel('Temperature [K]'), plt.ylabel('Electron heat capacity [10$^6$ J/(m$^3$K)]')
# save the result as text file
np.savetxt('results/'+material+'_notSpinResolved_electronHeatCapacity.txt',electron_heat_capacity,fmt='%07.2f %e',\
header='material: '+material+' \n'
'electron heat capacity (DFT calculation not spin resolved) \n'
'temperature [K], heat capacity [J/(m^3K)] ')
###Output
_____no_output_____
###Markdown
Phonon heat capacity The phonon heat capacity is calculated analogously to the electron heat capacity, with the difference that the Bose-Einstein distribution instead of the Fermi-Dirac distribution is required here:\begin{equation} c_\mathrm{l}(T)=\left(\frac{\partial E_\mathrm{l}}{\partial T}\right)_V=\frac{d}{dT}\left[\int_{-\infty}^{\infty} g_\mathrm{l}(\epsilon) \frac{\epsilon}{\mathrm{exp}\left( \frac{\epsilon}{k_\mathrm{B}T}\right)-1}d\epsilon\right]. \label{eq:c_l}\end{equation} $g_\mathrm{l}$ is the phonon density of states and $E_\mathrm{l}$ is the total energy in the phonon system. Also here, the equation corresponds to the heat capacity at constant volume.
###Code
phonon_heat_capacity = calculate_phonon_heat_capacity(temperatures,v_dos)
# Due to the numerical differentiation, the temperature sampling changes. Therefore, the output has two colums
# with the first column corresponding to the new temperature points.
# convert from J/(unit cell K) to J/(m^3 K)
phonon_heat_capacity[:,1] = phonon_heat_capacity[:,1]/unit_cell_volume
# plot the result (optional)
plt.figure()
plt.plot(phonon_heat_capacity[:,0],phonon_heat_capacity[:,1]*1e-6)
plt.xlabel('Temperature [K]'), plt.ylabel('Phonon heat capacity [10$^6$ J/(m$^3$K)]')
# save the result as text file
np.savetxt('results/'+material+'_notSpinResolved_phononHeatCapacity.txt',phonon_heat_capacity,fmt='%07.2f %e',\
header='material: '+material+' \n'
'phonon heat capacity (DFT calculation not spin resolved) \n'
'temperature [K], heat capacity [J/(m^3K)] ')
###Output
_____no_output_____ |
Classification Evaluation.ipynb | ###Markdown
Confusion Matrix
###Code
from sklearn.metrics import confusion_matrix

y_true = [1, 2, 2, 0, 0, 1]
y_pred = [1, 0, 2, 1, 2, 1]
print(confusion_matrix( y_true, y_pred ))
y_true = ['degree-1', 'degree-1', 'degree-2', 'degree-3', 'degree-3', 'degree-2']
y_pred = ['degree-2', 'degree-1', 'degree-3', 'degree-1', 'degree-3', 'degree-2']
lbls = ['degree-1', 'degree-2', 'degree-3']
print(confusion_matrix( y_true, y_pred, labels=lbls ))
import matplotlib.pyplot as plt

y_true = ['degree-1', 'degree-1', 'degree-2', 'degree-3', 'degree-3', 'degree-2']
y_pred = ['degree-2', 'degree-1', 'degree-3', 'degree-1', 'degree-3', 'degree-2']
lbls = ['degree-1', 'degree-2', 'degree-3']
cm = confusion_matrix( y_true, y_pred, labels=lbls )
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
cax = ax.matshow(cm)
fig.colorbar(cax)
ax.set_xticklabels([''] + lbls)
ax.set_yticklabels([''] + lbls)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
###Output
_____no_output_____
###Markdown
Accuracy
###Code
from sklearn.metrics import accuracy_score
y_true = ['degree-1', 'degree-1', 'degree-1', 'degree-3', 'degree-3', 'degree-2']
y_pred = ['degree-2', 'degree-1', 'degree-3', 'degree-3', 'degree-3', 'degree-2']
print(accuracy_score(y_true, y_pred))
from sklearn.metrics import accuracy_score
y_true = ['degree-1', 'degree-1', 'degree-1', 'degree-3', 'degree-3', 'degree-2']
y_pred = ['degree-1', 'degree-1', 'degree-1', 'degree-1', 'degree-1', 'degree-1']
print(accuracy_score(y_true, y_pred))
###Output
0.5
###Markdown
Precision/Recall/F1-Score Classification Report
###Code
from sklearn.metrics import classification_report
y_true = ['degree-1', 'degree-1', 'degree-1', 'degree-3', 'degree-3', 'degree-2', 'degree-3', 'degree-1']
y_pred = ['degree-2', 'degree-2', 'degree-1', 'degree-3', 'degree-2', 'degree-1', 'degree-3', 'degree-1']
print(classification_report(y_true, y_pred))
from sklearn.metrics import classification_report
y_true = ['degree-1', 'degree-1', 'degree-1', 'degree-3', 'degree-3', 'degree-2']
y_pred = ['degree-1', 'degree-1', 'degree-1', 'degree-1', 'degree-1', 'degree-1']
print(classification_report(y_true, y_pred))
###Output
precision recall f1-score support
degree-1 0.50 1.00 0.67 3
degree-2 0.00 0.00 0.00 1
degree-3 0.00 0.00 0.00 2
avg / total 0.25 0.50 0.33 6
|
notebooks/2020-05-08-visualization using matplotlib.ipynb | ###Markdown
Advanced visualization using matplotlib
###Code
import matplotlib.pyplot as plt
%matplotlib inline
from getDummiesFile import getDummies
df= getDummies()
# Histogram
plt.hist(df.Age)
# Suppress the printed return value
plt.show()
# Title, x-axis and y-axis labels
plt.hist(df.Age,bins=20,color='c')
plt.title('Histogram: Age')
plt.xlabel('Bins')
plt.ylabel('Counts')
plt.show()
# Show multiple plots in a single figure: add subplots
f,ax=plt.subplots(1,2,figsize=(14,3))
ax[0].hist(df.Age,bins=20,color='c')
ax[0].set_title('Histogram: age')
ax[0].set_xlabel('Bins')
ax[0].set_ylabel('Counts')
ax[1].hist(df.Fare,bins=20,color='tomato')
ax[1].set_title('Histogram: age')
ax[1].set_xlabel('Bins')
ax[1].set_ylabel('Counts')
plt.show()
f,ax_arr=plt.subplots(3,2,figsize=(14,5))
ax_arr[0,0].hist(df.Fare,bins=20,color='c')
ax_arr[0,0].set_title('Histogram Fare')
ax_arr[0,0].set_xlabel('Bins')
ax_arr[0,0].set_ylabel('Counts')
ax_arr[0,1].hist(df.Age,bins=20,color='c')
ax_arr[0,1].set_title('Histogram Age')
ax_arr[0,1].set_xlabel('Bins')
ax_arr[0,1].set_ylabel('Counts')
print(df.loc[df.Fare.notnull()]['Fare'].values)
ax_arr[1,0].boxplot(df.loc[df.Fare.notnull()]['Fare'].values)
ax_arr[1,0].set_title('Boxplot Fare')
ax_arr[1,0].set_xlabel('Fare')
ax_arr[1,0].set_ylabel('Fare')
ax_arr[1,1].boxplot(df.loc[df.Age.notnull()]['Age'].values)
ax_arr[1,1].set_title('Boxplot Age')
ax_arr[1,1].set_xlabel('Age')
ax_arr[1,1].set_ylabel('Age')
ax_arr[2,0].scatter(df.Age,df.Fare,alpha=0.15,color='c')
ax_arr[2,0].set_title('Histogram Fare')
ax_arr[2,0].set_xlabel('Bins')
ax_arr[2,0].set_ylabel('Counts')
# Avoid overlapping labels
plt.tight_layout()
# Turn the axis off
ax_arr[2,1].axis('off')
plt.show()
###Output
D:\ANACONDA\envs\ML\lib\site-packages\numpy\lib\histograms.py:839: RuntimeWarning: invalid value encountered in greater_equal
keep = (tmp_a >= first_edge)
D:\ANACONDA\envs\ML\lib\site-packages\numpy\lib\histograms.py:840: RuntimeWarning: invalid value encountered in less_equal
keep &= (tmp_a <= last_edge)
|
pairplotr_demo.ipynb | ###Markdown
Pairplotr introduction Here I introduce pairplotr, a tool I developed to do pairwise plots of features, including mixtures of numerical and categorical ones, starting from a cleaned Pandas dataframe with neither missing data nor data id columns. This demo imports an already cleaned Titanic dataset and demonstrates certain features of pairplotr. Plot description Plot details vary according to whether they are on- or off-diagonal and whether the intersecting rows and columns correspond to numerical or categorical variables. All descriptions assume the first row/column has index 1. Here's a description of the types of subplot encountered: - On-diagonal: - Categorical feature: - Horizontal bar chart of the counts of each feature value colored according to that value. - It, along with y-tick labels on the left of the grid, acts as a legend for the row feature values. - See example below for how this works. - Numerical feature: - Histogram of feature. - Off-diagonal: - Categorical feature row and categorical feature column: - Horizontal stacked bar chart of row feature for each value of the column feature and colored accordingly. - Categorical feature row and numerical feature column: - Overlapping histograms of column feature for each value of the row feature and colored accordingly. - Numerical feature row and numerical feature column: - Scatter plot of row feature versus column feature. - Optionally, colored by a feature dictated by the scatter_plot_filter keyword argument. Import dependencies
###Code
%matplotlib inline
import sys
import pairplotr.pairplotr as ppr
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Unpickle data
###Code
df = pd.read_pickle('trimmed_titanic_data.pkl')
###Output
_____no_output_____
###Markdown
Inspect data
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 9 columns):
Survived 891 non-null int64
Pclass 891 non-null int64
Sex 891 non-null object
Age 891 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Fare 891 non-null float64
Embarked 891 non-null object
Title 891 non-null object
dtypes: float64(2), int64(4), object(3)
memory usage: 62.7+ KB
###Markdown
Note how the data has no missing values. This is required for the current version of pairplotr.
###Code
df.head(10)
###Output
_____no_output_____
###Markdown
Additionally, the data must have no fields that could be considered an id. For instance, the Titanic survival dataset had a PassengerId field that I removed. The reason for this is to avoid a high number of categorical feature values, which causes the code to slow to a crawl. Set categorical features as that type The first step, starting from squeaky clean data, is to set categorical features as such:
###Code
visualize_df = df.copy()
categorical_features = ['Survived','Pclass','Sex','Embarked','Title','Parch','SibSp']
for feature in categorical_features:
visualize_df[feature] = visualize_df[feature].astype('category')
visualize_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 9 columns):
Survived 891 non-null category
Pclass 891 non-null category
Sex 891 non-null category
Age 891 non-null float64
SibSp 891 non-null category
Parch 891 non-null category
Fare 891 non-null float64
Embarked 891 non-null category
Title 891 non-null category
dtypes: category(7), float64(2)
memory usage: 20.3 KB
###Markdown
Note, Parch and SibSp are numerical, though I find it easier to visualize them as categories because there are so few values for them (max 8). Now that the desired types have been set on the DataFrame, we can move on to graphing the pair plot. Example pairplot and interpretation To plot all pair-wise features, simply run the compare_data() method like this:
###Code
%%time
ppr.compare_data(visualize_df,fig_size=16)
###Output
CPU times: user 8.23 s, sys: 161 ms, total: 8.39 s
Wall time: 8.57 s
###Markdown
We can also select specific features to graph using the plot_vars keyword argument:
###Code
%%time
ppr.compare_data(visualize_df,fig_size=16,plot_vars=['Survived','Sex','Pclass','Age','Fare'])
###Output
CPU times: user 2.18 s, sys: 36.2 ms, total: 2.22 s
Wall time: 2.24 s
###Markdown
We can zoom in on individual plots by using the zoom keyword argument:
###Code
%%time
ppr.compare_data(visualize_df,fig_size=16,zoom=['Sex','Pclass'])
%%time
ppr.compare_data(visualize_df,fig_size=16,zoom=['Pclass','Age'],plot_medians=True)
###Output
_____no_output_____
###Markdown
Note how there is now a scale for the Age feature and the frequencies corresponding to each bin. This currently only works for category vs category and category vs numerical comparisons, and only for different features. This will be changed soon. Additionally, we can make it so that numerical vs numerical feature comparisons color points according to a particular feature using the scatter_plot_filter keyword argument:
###Code
%%time
ppr.compare_data(visualize_df,fig_size=16,scatter_plot_filter='Survived')
###Output
CPU times: user 8.4 s, sys: 110 ms, total: 8.51 s
Wall time: 8.78 s
###Markdown
Here is an example interpretation using pairplotr: Row/column 1/1 indicates that survival (1) and death (0) are indicated by cyan and gray, respectively. Row/column 3/1 indicates that most women survived (I'd guess about ~80%). Row/column 3/2 indicates that more than half of all women were from Pclasses 1 and 2. This makes me curious about what characteristics women from Pclass 3 might have. We can slice the data using normal Pandas notation and use it with pairplotr. Here's an example that investigates women from Pclass 3:
###Code
%%time
where = (visualize_df['Sex']=='female')&(visualize_df['Pclass']==3) # Women from Pclass 3
ppr.compare_data(visualize_df[where],scatter_plot_filter='Survived')
###Output
CPU times: user 7.83 s, sys: 59.1 ms, total: 7.89 s
Wall time: 8 s
###Markdown
Row/column 1/1 automatically shows that only about half of Pclass 3 women survived. Row/column 8/1 is interesting. It seems to indicate that most women from Embarked values Q and C survived, while the bulk of Pclass 3 women from Embarked S died. Row/column pairs 8/5 and 8/6 seem to indicate that Embarked S had a higher concentration of larger numbers of Siblings/Spouses and Parents/Children. Additionally, row/column pairs 5/1 and 6/1 seem to indicate that women with fewer family members had a better chance to survive. Here I zoom in on these two figures to check:
###Code
%%time
where = (visualize_df['Sex']=='female')&(visualize_df['Pclass']==3) # Women from Pclass 3
ppr.compare_data(visualize_df[where],scatter_plot_filter='Survived',zoom=['SibSp','Survived'])
%%time
where = (visualize_df['Sex']=='female')&(visualize_df['Pclass']==3) # Women from Pclass 3
ppr.compare_data(visualize_df[where],scatter_plot_filter='Survived',zoom=['Parch','Survived'])
###Output
_____no_output_____ |