path | concatenated_notebook
---|---
462. Minimum Moves to Equal Array Elements II.ipynb | ###Markdown
Given a **non-empty** integer array, find the minimum number of moves required to make all array elements equal, where a move is incrementing a selected element by 1 or decrementing a selected element by 1. You may assume the array's length is at most 10,000. **Example:** Input: [1,2,3] Output: 2 Explanation: Only two moves are needed (remember each move increments or decrements one element): [1,2,3] => [2,2,3] => [2,2,2] Thought: Just find the median of the array, then sum the absolute differences between each element and the median; that total is the minimum number of moves.
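A quick worked check of that idea (added here for clarity): for [1,2,3] the median is 2, and the answer is |1-2| + |2-2| + |3-2| = 2, matching the expected output; sorting makes the whole approach O(n log n).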
###Code
def minMoves2(self, nums):
"""
:type nums: List[int]
:rtype: int
"""
nums.sort()
mid=nums[len(nums)//2]
return sum([abs(nums[i]-mid) for i in range(len(nums))])
###Output
_____no_output_____ |
docs/_build/html/notebooks/keras_image_forecasting.ipynb | ###Markdown
Keras: Image Forecasting *Lunar Astronomy Forecasting Using a 5D Time Series of Shifting Windows.*  Although the pipeline runs successfully, this tutorial is considered a 'work in progress' in that the model architecture still needs a bit of work. Here we attempt a self-supervised *walk forward* with an autoencoder whose evaluation data is shifted 2 frames forward. The goal is to show an image to the model and have it infer what that image will look like 2 steps in the future.
###Code
import keras
from keras.models import Sequential
from keras.layers import Conv1D, MaxPool1D, UpSampling1D
from keras.callbacks import History
from sklearn.preprocessing import FunctionTransformer
import aiqc
###Output
_____no_output_____
###Markdown
--- Low-Level API Reference [Low-Level API Docs](api_low_level.ipynb) for more information including how to work with non-tabular data and defining optimizers.
###Code
folder_path = 'remote_datum/image/liberty_moon/images'
image_dataset = aiqc.Dataset.Image.from_folder_pillow(folder_path=folder_path, ingest=False, dtype='float64')
feature = image_dataset.make_feature()
feature.make_window(size_window=1, size_shift=2)
encoderset = feature.make_encoderset()
encoderset.make_featurecoder(
sklearn_preprocess= FunctionTransformer(aiqc.div255, inverse_func=aiqc.mult255)
, dtypes = 'float64'
)
feature.make_featureshaper(reshape_indices=(0,3,4))
splitset = aiqc.Splitset.make(
feature_ids = feature.id
, size_test = 0.1
, size_validation = 0.20
)
splitset.samples
def fn_build(features_shape, label_shape, **hp):
m = Sequential()
m.add(Conv1D(64*hp['multiplier'], 3, activation=hp['activation'], padding='same'))
m.add(MaxPool1D( 2, padding='same'))
m.add(Conv1D(32*hp['multiplier'], 3, activation=hp['activation'], padding='same'))
m.add(MaxPool1D( 2, padding='same'))
m.add(Conv1D(16*hp['multiplier'], 3, activation=hp['activation'], padding='same'))
m.add(MaxPool1D( 2, padding='same'))
# decoding architecture
m.add(Conv1D(16*hp['multiplier'], 3, activation=hp['activation'], padding='same'))
m.add(UpSampling1D(2))
m.add(Conv1D(32*hp['multiplier'], 3, activation=hp['activation'], padding='same'))
m.add(UpSampling1D(2))
m.add(Conv1D(64*hp['multiplier'], 3, activation=hp['activation']))
m.add(UpSampling1D(2))
m.add(Conv1D(50, 3, activation='relu', padding='same'))# removing sigmoid
return m
def fn_train(model, loser, optimizer, samples_train, samples_evaluate, **hp):
model.compile(
optimizer=optimizer
, loss=loser
, metrics=['mean_squared_error']
)
model.fit(
samples_train["features"]
, samples_train["labels"]
, validation_data = (
samples_evaluate["features"]
, samples_evaluate["labels"]
)
, verbose = 0
, batch_size = hp['batch_size']
, callbacks=[keras.callbacks.History()]
, epochs = hp['epoch_count']
)
return model
algorithm = aiqc.Algorithm.make(
library = "keras"
, analysis_type = "regression"
, fn_build = fn_build
, fn_train = fn_train
)
hyperparameters = dict(
epoch_count = [150]
, batch_size = [1]
, cnn_init = ['he_normal']
, activation = ['relu']
, multiplier = [3]
)
hyperparamset = algorithm.make_hyperparamset(
hyperparameters = hyperparameters
)
queue = algorithm.make_queue(
splitset_id = splitset.id
, hyperparamset_id = hyperparamset.id
, repeat_count = 1
)
queue.run_jobs()
###Output
🔮 Training Models 🔮: 100%|██████████████████████████████████████████| 1/1 [00:12<00:00, 12.31s/it]
###Markdown
For more information on visualization of performance metrics, reference the [Visualization & Metrics](visualization.html) documentation. --- Inference
###Code
queue.metrics_to_pandas()
# queue.plot_performance(max_loss=10000, min_r2=-1000)
predictor = aiqc.Predictor.get_by_id(137) #replace with your `Predictor.id`
predictor.get_hyperparameters(as_pandas=True)
# predictor.plot_learning_curve(loss_skip_15pct=True)
import numpy as np
prediction = np.array(predictor.predictions[0].predictions['test'][0].astype('uint8'))
prediction
prediction.shape
from PIL import Image as Imaje
Imaje.fromarray(prediction, mode='L')
###Output
_____no_output_____ |
experimental/schoology/usage-analytics.ipynb | ###Markdown
Schoology API Exploration. Pull in settings from the .env file.
###Code
import os
from dotenv import load_dotenv
load_dotenv()
API_KEY = os.getenv("SCHOOLOGY_KEY")
API_SECRET = os.getenv("SCHOOLOGY_SECRET")
# Ed-Fi/MSDF users may have this env var set, which causes problems and is unnecessary for the code below
os.environ["REQUESTS_CA_BUNDLE"] = ""
###Output
_____no_output_____
###Markdown
Establish connection to the API
###Code
import schoolopy
from prettyprinter import pprint
sc = schoolopy.Schoology(schoolopy.Auth(API_KEY, API_SECRET), verbose = True)
###Output
_____no_output_____
###Markdown
Pull in a list of courses
###Code
courses = sc.get_courses()
pprint(courses)
###Output
--> calling https://api.schoology.com/v1/courses?limit=20&start=0
[
schoolopy.models.Course({
'id': 2941242684,
'title': 'English I',
'course_code': 'ENG-1',
'department': '',
'description': '',
'credits': 0,
'subject_area': 2,
'grade_level_range_start': 11,
'grade_level_range_end': 0,
'synced': 0
}),
schoolopy.models.Course({
'id': 2942191514,
'title': 'Algebra I',
'course_code': 'ALG-1',
'department': '',
'description': '',
'credits': 0,
'subject_area': 3,
'grade_level_range_start': 11,
'grade_level_range_end': 0,
'synced': 0
})
]
###Markdown
Now pull in a list of all users
###Code
users = sc.get_users()
# note: each entry is a Python set, so the id / first name / last name ordering varies in the output below
pprint([{u.id, u.name_first, u.name_last} for u in users])
###Output
--> calling https://api.schoology.com/v1/users?limit=20&start=0
[
{100032890, 'Archer', 'Mary'},
{'Brad', 'Banister', 99785799},
{'Stephen', 'Caldwell', 100032895},
{100032898, 'Christian', 'Kelley'},
{99588912, 'Stephen', 'Fuqua'},
{99785801, 'Gabrielle'},
{100032896, 'Hardy', 'Olivia'},
{100032891, 'Kyle', 'Hughes'},
{99515016, 'Eric', 'Jansson'},
{'Larry', 100032893, 'Mahoney'},
{'Peter', 100032892, 'Nash'},
{'Phillips', 'Roland', 100032894},
{100032899, 'Preston', 'Sara'},
{'Richmond Guzmán', 99785803, 'Luis'},
{'Micheal', 100032897, 'Turner'}
]
###Markdown
Extract login and access data from Schoology _As a school district, I want to know if the student engaged with my LMS and/or a particular course on a particular school day._ A capture of these data points, or an assessment that these are not available: 1. If a student logged into the LMS on a particular school day 2. A metric of either how long the student was logged in or how many LMS resources (courses, assignments, or other system entities) were accessed by a student on a particular day 3. If a student logged into a particular LMS course on a particular school day 4. A metric of either how long the student was logged in to a course or how many LMS resources in that course the student accessed on a particular day
###Code
# Find available roles so that we can filter out teachers and admins
roles = sc.get_roles()
ROLE_NAME_STUDENT = "Student"
try:
role_id_student = next(r.id for r in roles if r.title == ROLE_NAME_STUDENT)
students = [u for u in users if u.role_id == role_id_student]
from datetime import datetime, timedelta
end_time = datetime.now()
start_time = end_time - timedelta(days=1)
get_actions = lambda uid : sc.get_user_actions(uid, datetime.timestamp(start_time), datetime.timestamp(end_time))
import json
actions = [{'First':s.name_first, 'Last':s.name_last, 'Actions':json.dumps(get_actions(s.uid))} for s in students]
pprint(actions)
except StopIteration:
print("Role '{role}' does not exist.".format(role=ROLE_NAME_STUDENT))
###Output
--> calling https://api.schoology.com/v1/roles?limit=20&start=0
--> calling https://api.schoology.com/v1/analytics/users/100032890?start_time=1598897619.450045&end_time=1598984019.450045&start=0&limit=20?limit=20&start=0
--> calling https://api.schoology.com/v1/analytics/users/100032895?start_time=1598897619.450045&end_time=1598984019.450045&start=0&limit=20?limit=20&start=0
--> calling https://api.schoology.com/v1/analytics/users/100032896?start_time=1598897619.450045&end_time=1598984019.450045&start=0&limit=20?limit=20&start=0
--> calling https://api.schoology.com/v1/analytics/users/100032891?start_time=1598897619.450045&end_time=1598984019.450045&start=0&limit=20?limit=20&start=0
--> calling https://api.schoology.com/v1/analytics/users/100032893?start_time=1598897619.450045&end_time=1598984019.450045&start=0&limit=20?limit=20&start=0
--> calling https://api.schoology.com/v1/analytics/users/100032892?start_time=1598897619.450045&end_time=1598984019.450045&start=0&limit=20?limit=20&start=0
--> calling https://api.schoology.com/v1/analytics/users/100032894?start_time=1598897619.450045&end_time=1598984019.450045&start=0&limit=20?limit=20&start=0
--> calling https://api.schoology.com/v1/analytics/users/100032897?start_time=1598897619.450045&end_time=1598984019.450045&start=0&limit=20?limit=20&start=0
[
{
'First': 'Mary',
'Last': 'Archer',
'Actions': 'null'
},
{
'First': 'Stephen',
'Last': 'Caldwell',
'Actions': 'null'
},
{
'First': 'Olivia',
'Last': 'Hardy',
'Actions': 'null'
},
{
'First': 'Kyle',
'Last': 'Hughes',
'Actions': 'null'
},
{
'First': 'Larry',
'Last': 'Mahoney',
'Actions': 'null'
},
{
'First': 'Peter',
'Last': 'Nash',
'Actions': 'null'
},
{
'First': 'Roland',
'Last': 'Phillips',
'Actions': 'null'
},
{
'First': 'Micheal',
'Last': 'Turner',
'Actions': 'null'
}
]
###Markdown
No analytics available. Have tried many different settings in Schoology, as well as multiple users' keys & secrets, and cannot find any reason for this. The problem is also borne out directly in the site itself: the Usage Analytics page returns nothing. **UPDATE**: Schoology confirmed that this API is no longer active and they just haven't updated their documentation. Usage reports must now be retrieved through the web application. The next cells demonstrate opening and accessing one of these usage reports. The files are gzipped when downloaded, so the first step will be to unzip the file, and then we can read it into a DataFrame.
###Code
# note that the reference input file is only on the developer's
# computer and will not be kept in source control.
import os
from gzip import open as gz_open
import pandas as pd
input_file = os.path.join("..", "..", "src", "schoology-extractor", "data", "usage-input", "42137161-444c-4f09-b450-6410605b8365000.csv.gz")
assert os.path.exists(input_file)
df: pd.DataFrame
with gz_open(input_file) as file:
df = pd.read_csv(file)
from IPython.display import display, Markdown
df.drop(columns="email", inplace=True)
display(Markdown("## First Five Lines"))
display(df.head())
display(Markdown("## System sign-in events"))
filter = (
(df["action_type"] == "CREATE") &
(df["item_type"] == "SESSION")
)
display(df[filter][["role_name", "username", "schoology_user_id", "last_event_timestamp"]])
###Output
_____no_output_____ |
notebooks/10 Support Vector Machines.ipynb | ###Markdown
Support Vector Machines A method that builds on the linear regression we've seen so far is Support Vector Machines. This algorithm can be used for both classification and regression, but let's look at a classification example to understand how it works. Let's assume we have data with two classes:
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from sklearn.datasets.samples_generator import make_blobs
X, y = make_blobs(n_samples=70, centers=2, random_state=0, cluster_std=0.60)
plt.figure(figsize=(10,5))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='Dark2');
###Output
_____no_output_____
###Markdown
When fitting a linear model, there are many lines that separate our training data:
###Code
xfit = np.linspace(-1, 3.5)
plt.figure(figsize=(10,5))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='Dark2')
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
###Output
_____no_output_____
###Markdown
How can we make sure the line we have is the best one? To do so, we'll define an area around the line as the "margin"
###Code
xfit = np.linspace(-1, 3.5)
plt.figure(figsize=(10,5))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='Dark2')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none',
color='#AAAAAA', alpha=0.4)
###Output
_____no_output_____
###Markdown
These vectors, or lines here, that define the edges of the margin are called "support vectors". The insight with SVM is that we can constrain our fitting method to have support vectors that are based on the data. For example, we can require that the support vector lines pass directly through at least one data point.
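For reference, the standard formulation behind this (not spelled out in the original text) is the max-margin problem $\min_{w,b} \tfrac{1}{2}\lVert w \rVert^2$ subject to $y_i (w \cdot x_i + b) \ge 1$ for every training point; the points where the constraint holds with equality are exactly the support vectors that the margin lines pass through.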
###Code
from sklearn.svm import SVC
model = SVC(kernel='linear').fit(X, y)
model.score(X, y)
from figures.plot_svc import plot_svc_decision_function
plt.figure(figsize=(10,5))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='Dark2')
plot_svc_decision_function(model);
###Output
_____no_output_____
###Markdown
Now we can be sure that the line we have is based on the data, and not just randomness from the fitting method. SVMs are often used with what's called a `kernel`, which is a transformation to our data which is useful when our data isn't linear. Let's look at some circular data:
###Code
from sklearn.datasets.samples_generator import make_circles
X, y = make_circles(100, factor=.1, noise=.1)
linear_clf = SVC(kernel='linear', C=100).fit(X, y)
plt.figure(figsize=(10,5))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='Dark2')
plot_svc_decision_function(linear_clf, plot_support=False);
###Output
_____no_output_____
###Markdown
Let's apply a transformation to the data, using what's called the kernel trick. The first kernel we'll look at is called a "Radial Basis Function", "rbf", which is useful precisely for radial (i.e. circular) data
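For reference, the standard definition (not written out in the original notebook) is $K(x, x') = \exp(-\gamma \lVert x - x' \rVert^2)$: nearby points score close to 1 and distant points close to 0, which is why this kernel handles radially separated data so well.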
###Code
from mpl_toolkits.mplot3d import Axes3D
Z = X[:, 0]**2 + X[:, 1]**2
fig = plt.figure(figsize=(10, 10))
ax = fig.gca(projection='3d')
ax.scatter(X[:,0], X[:,1], Z, c=y, cmap='Dark2');
###Output
_____no_output_____
###Markdown
The radial basis function is able to transform the data from circular into linear, which allows our linear classification method to find a minimum. Once the classifier is found, the same kernel can be used to transform the classifier to match the original data.
###Code
radial_clf = SVC(kernel='rbf', gamma='auto')
radial_clf.fit(X, y)
print('Linear: {}, radial: {}'.format(linear_clf.score(X, y), radial_clf.score(X, y)))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='Dark2')
plot_svc_decision_function(radial_clf)
plt.scatter(radial_clf.support_vectors_[:, 0], radial_clf.support_vectors_[:, 1],
s=300, lw=1, facecolors='none');
###Output
_____no_output_____
###Markdown
Kernels can be any transformation function. Let's look at multiplying our data by a matrix:
###Code
M = np.array([[0.1, 1.0], [3.0, 1.6]])
transformed = np.dot(X, M)
fig, ax = plt.subplots(1, 2)
ax[0].scatter(X[:, 0], X[:, 1], c=y, cmap='Dark2')
ax[0].set_title("Data")
ax[1].scatter(transformed[:, 0], transformed[:, 1], c=y, cmap='Dark2')
ax[1].set_title("Transformed");
###Output
_____no_output_____
###Markdown
In order to pass this to the SVM, we can provide the kernel as a function
###Code
def my_kernel(X, y):
return np.dot(np.dot(X, M), y.T)
custom_clf = SVC(kernel=my_kernel)
custom_clf.fit(X, y)
print('Linear: {}, radial: {}, custom: {}'.format(linear_clf.score(X, y),
radial_clf.score(X, y), custom_clf.score(X, y)))
###Output
Linear: 0.69, radial: 1.0, custom: 0.54
###Markdown
EXERCISE: Download ``03_svm_iris.py`` from the course website. Here there is a custom kernel called "my_kernel." Modify the values in the kernel, or define a new operation, to try to maximize the classification score. You can also try the 'linear', 'poly', and 'rbf' kernels for comparison. Modify C to also get a better classification score.
###Code
# %load exercises/03_svm_iris.py
###Output
_____no_output_____
###Markdown
Regression Support vector machines can also easily be used for regression. The idea is similar: when fitting the line, use data points to determine the margins on the line. Kernels can also be used to transform the data to achieve a better fit.
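Concretely (standard SVR background added here, not from the original text), the regression variant ignores residuals smaller than a tube of width $\varepsilon$ around the fitted line by minimizing the $\varepsilon$-insensitive loss $L_\varepsilon(y, f(x)) = \max(0, |y - f(x)| - \varepsilon)$, so only points on or outside the tube become support vectors.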
###Code
x = np.linspace(-3, 3, 100)
rng = np.random.RandomState(42)
y = np.sin(4 * x) + x + rng.uniform(size=len(x))
X = x[:, np.newaxis]
plt.scatter(X, y);
###Output
_____no_output_____
###Markdown
Let's fit many kernels to compare them:
###Code
from sklearn.svm import SVR
svr_rbf = SVR(kernel='rbf', gamma='auto')
svr_lin = SVR(kernel='linear')
svr_poly = SVR(kernel='poly', gamma='auto', degree=3)
y_rbf = svr_rbf.fit(X, y).predict(X)
y_lin = svr_lin.fit(X, y).predict(X)
y_poly = svr_poly.fit(X, y).predict(X)
print('RBF: {}, Linear: {}, Polynomial: {}'.format(svr_rbf.score(X, y), svr_lin.score(X, y), svr_poly.score(X, y)))
# Look at the results
plt.figure(figsize=(10,5))
plt.scatter(X, y, color='darkorange', label='data')
plt.plot(X, y_rbf, color='navy', label='RBF model')
plt.plot(X, y_lin, color='c', label='Linear model')
plt.plot(X, y_poly, color='cornflowerblue', label='Polynomial model')
plt.xlabel('data')
plt.ylabel('target')
plt.title('Support Vector Regression')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
EXERCISE: Download ``03_svr_diabetes.py`` from the course website. None of the kernels seem to be fitting the data very well. Do you have any guesses why? Try to fix it so that you get a better regression score.
###Code
# %load exercises/03_svr_diabetes.py
import numpy as np
from sklearn.svm import SVR
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
diabetes = datasets.load_diabetes()
feature = 3
X = diabetes.data
y = diabetes.target
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.25,
random_state=1234)
# Fit regression model
svr_lin = SVR(kernel='linear')
svr_rbf = SVR(kernel='rbf', gamma=0.1)
svr_poly = SVR(kernel='poly', degree=2)
y_lin = svr_lin.fit(X_train, y_train).predict(X_train)
y_rbf = svr_rbf.fit(X_train, y_train).predict(X_train)
y_poly = svr_poly.fit(X_train, y_train).predict(X_train)
print("Linear train error: ", mean_squared_error(y_train, y_lin),
" test error: ", mean_squared_error(y_test, svr_lin.predict(X_test)))
print("RBF train error: ", mean_squared_error(y_train, y_rbf),
" test error: ", mean_squared_error(y_test, svr_rbf.predict(X_test)))
print("Polynomial train error: ", mean_squared_error(y_train, y_poly),
" test error: ", mean_squared_error(y_test, svr_rbf.predict(X_test)))
plt.figure(figsize=(20,10))
plt.scatter(X_train[:, feature], y_train, color='darkorange', label='data')
plt.scatter(X_train[:, feature], y_lin, color='c', label='Linear model')
plt.scatter(X_train[:, feature], y_rbf, color='navy', label='RBF model')
plt.scatter(X_train[:, feature], y_poly, color='cornflowerblue', label='Polynomial model')
plt.xlabel('data')
plt.ylabel('target')
plt.title('Support Vector Regression')
plt.legend()
plt.show()
###Output
/home/d9w/.venvs/tf/lib/python3.6/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. Set gamma explicitly to 'auto' or 'scale' to avoid this warning.
"avoid this warning.", FutureWarning)
|
Mizu Auto ML/Classification.ipynb | ###Markdown
Machine Learning
###Code
# assumed imports for this cell; the originals were presumably defined in earlier (truncated) cells
import numpy as np
import pandas as pd
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import TomekLinks
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV, cross_val_score
from sklearn.metrics import roc_auc_score
from catboost import CatBoostClassifier
sm = SMOTE(random_state=108)
tl = TomekLinks()
x_train_ov, y_train_ov = sm.fit_resample(x_train_imputed, y_train)
x_train_un, y_train_un = tl.fit_resample(x_train_imputed, y_train)
dt = DecisionTreeClassifier(random_state=108)
rf = RandomForestClassifier(random_state=108)
gb = GradientBoostingClassifier(random_state=108)
cb = CatBoostClassifier(random_state=108, verbose=False)
dt_param = {'max_depth':[1, 3, 5, 10], 'min_samples_split':[2,4,8,16], 'min_samples_leaf':[1,2,4,6,8,10]}
n_estimators = [10, 25, 50, 100]
max_features = ['auto', 'sqrt']
max_depth = [3, 5, 10, 12, None]
min_samples_split = [2, 5, 10]
min_samples_leaf = [1, 2, 4]
random_strength = [0.0001, 0.001, 0.1, 1]
border_count = [1, 5, 10, 25, 50, 100, 255]
l2_leaf_reg = [1, 2, 3, 4, 5, 6, 10, 15, 30]
bagging_temperature = [0, 1, 2, 3, 4, 5]
rf_param = {'n_estimators': n_estimators, 'max_features':max_features, 'max_depth':max_depth, 'min_samples_split':min_samples_split,'min_samples_leaf':min_samples_leaf}
learning_rates = [1, 0.5, 0.25, 0.1, 0.05, 0.01]
gb_param = {'learning_rate':learning_rates, 'n_estimators': n_estimators, 'max_depth':max_depth, 'min_samples_split':min_samples_split,'min_samples_leaf':min_samples_leaf, 'max_features':max_features}
cb_param = {'learning_rate':learning_rates, 'iterations': n_estimators, 'depth':max_depth, 'random_strength':random_strength,'border_count':border_count, 'l2_leaf_reg':l2_leaf_reg, 'bagging_temperature':bagging_temperature}
name = []
k = []
tr_auc = []
te_auc = []
method = []
features = []
trans = dict()
for data_used in [[x_train_ov, y_train_ov, 'oversampling'], [x_train_un, y_train_un, 'undersampling']]:
x_use = data_used[0]
y_use = data_used[1]
gdt = RandomizedSearchCV(dt, dt_param, n_jobs=-1, scoring='roc_auc', n_iter=10, random_state=108)
grf = RandomizedSearchCV(rf, rf_param, n_jobs=-1, scoring='roc_auc', n_iter=10, random_state=108)
ggb = RandomizedSearchCV(gb, gb_param, n_jobs=-1, scoring='roc_auc', n_iter=10, random_state=108)
gcb = RandomizedSearchCV(cb, cb_param, n_jobs=-1, scoring='roc_auc', n_iter=20, random_state=108)
new_dt = DecisionTreeClassifier(**gdt.fit(x_use, y_use).best_params_, random_state=108)
new_rf = RandomForestClassifier(**grf.fit(x_use, y_use).best_params_, random_state=108)
new_gb = GradientBoostingClassifier(**ggb.fit(x_use, y_use).best_params_, random_state=108)
new_cb = CatBoostClassifier(**gcb.fit(x_use, y_use).best_params_, random_state=108, verbose=False)
for algo in [[new_dt, 'dt'], [new_rf, 'rf'], [new_gb, 'gb'], [new_cb, 'cb']]:
algo[0].fit(x_use, y_use)
current = 0
num = x_train_imputed.shape[1]
used_feature = list(x_use)
sampling = 'normal'
usee = pd.DataFrame({'params':x_use.columns, 'importances':algo[0].feature_importances_}).sort_values('importances', ascending=False)
for kbest in [5, 10, 15, 25, 50]:
uses = usee.head(kbest)['params']
x_tr_try= x_use[uses]
hold = np.mean(cross_val_score(estimator=algo[0], X=x_tr_try, y=y_use, cv = 5, scoring = 'roc_auc'))
if hold > current:
current = hold
num = kbest
sampling = data_used[2]
used_feature = list(uses)
else:
None
x_tr_fin = x_use[usee.head(num)['params']]
x_te_fin = x_test_imputed[usee.head(num)['params']]
y_pred = algo[0].fit(x_tr_fin, y_use).predict_proba(x_te_fin)
store = roc_auc_score(y_test, y_pred[:,1])
name.append(algo[1])
k.append(num)
tr_auc.append(current)
te_auc.append(store)
method.append(sampling)
features.append(used_feature)
result = pd.DataFrame({'algo':name, 'n_features':k, 'train_auc':tr_auc, 'test_auc':te_auc, 'method':method, 'features':features}).sort_values('test_auc', ascending=False)
result.sort_values('test_auc', ascending=False).head(1)
result
algo_used = result['algo'].iloc[0]
features_used = result['features'].iloc[0]
sampling_used = result['method'].iloc[0]
if algo_used == 'dt':
do_train = new_dt
elif algo_used == 'gb':
do_train = new_gb
elif algo_used == 'rf':
do_train = new_rf
elif algo_used == 'cb':
do_train = new_cb
if sampling_used == 'undersampling':
do_sampling = TomekLinks()
elif sampling_used == 'oversampling':
do_sampling = SMOTE(random_state=108)
###Output
_____no_output_____
###Markdown
Prepare to retrain using all dataset
###Code
# we happened to already do our part in the x_con, so we will reuse x_con as our main retraining dataset.
# Since we already do label encoding, we no longer need to label encode it again
imputer, x_imputed = imput_fit_transform(x_con)
# use only the best features from train_test_split
x_imputed_use = x_imputed[features_used]
x_imputed_use.shape
x_sampled_use, y_sampled = do_sampling.fit_resample(x_imputed_use, y)
do_train.fit(x_sampled_use, y_sampled)
roc_auc_score(y, do_train.predict_proba(x_imputed_use)[:,1])
###Output
_____no_output_____
###Markdown
Prepare data to predict
###Code
pred_datapath = 'train.csv'
pred_data = read_data(pred_datapath, get_type(pred_datapath)[0])
pred_data = pred_data[list(x_con)]
pred_data_obj = pred_data[categories]
pred_data_obj.fillna('Unknown', inplace=True)
pred_data_num = pred_data[set(pred_data) - set(categories)]
pred_data_obj_le = le_transform(pred_data_obj, le)
pred_data_con = pd.concat([pred_data_num, pred_data_obj_le], axis=1)
pred_data_con = pred_data_con[list(x_con)]
pred_con_imputed = imput_transform(pred_data_con, imputer)
pred_con_use = pred_con_imputed[features_used]
pred_data['prediction_result'] = do_train.predict_proba(pred_con_use)[:,1]
###Output
_____no_output_____ |
notebooks/2-feature-engineering.ipynb | ###Markdown
Feature engineering In the previous notebook we tried various things with the model, but sampling from the model did not produce a meaningful result. Here we will try to build better (and more) features for each tile and let the neural network do its job. Let's load a few modules and some data...
###Code
import numpy as np
import os
import random
import sys
sys.path.append('../model')
import config
from game import *
from tileset import *
from mdlstm import *
from time import time
import tensorflow.contrib.slim as slim
scenarios = []
game = Game(config.STARCRAFT_ROOT)
scenarios += game.process_game_scenarios()
for directory in config.MAP_DIRECTORIES:
scenarios += game.process_scenarios(directory)
tiny_jungle_scenarios = [x for x in scenarios if x.alliances == x.human_players == 2 and x.tileset == Tileset.JUNGLE and x.width == x.height == 64]
len(scenarios)
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Idea 1: features from tile properties? An intuitive way to build features would be to take all 16 mini-tiles that each tile is made of; based on the properties of those mini-tiles and of the tile itself we could assemble our feature vectors. After some digging through the tiles, however, we notice something...
###Code
lost_temple = next(filter(lambda x: x.name == 'The Lost Temple', scenarios))
plt.imshow(lost_temple.tiles[65, 8].graphics)
print(lost_temple.tiles[65, 8].index // 16, lost_temple.tiles[65, 8].index % 16)
print(np.vectorize(lambda x: x.height)(lost_temple.tiles[65, 8].minitiles))
print(np.vectorize(lambda x: x.ramp)(lost_temple.tiles[65, 8].minitiles))
###Output
1074 0
[[0.33333333 0.33333333 0.33333333 0.33333333]
[0.33333333 0.33333333 0.33333333 0.33333333]
[0.33333333 0.33333333 0.33333333 0.33333333]
[0.33333333 0.33333333 0.33333333 0.33333333]]
[[ True True True True]
[ True True True True]
[ True True True True]
[ True True True True]]
###Markdown
This tile is part of a ramp climbing from "lowground" to "highground", and its properties are completely useless: in practice we cannot use the tile's height as a feature, nor the `ramp` flag. Other flags might identify the tile, but they are unlikely to be enough for unique identification. Idea 2: One hot encoding (with a twist) While implementing the parser for the `Tileset` data we notice that there are two kinds of tiles. The first kind are normal pieces of terrain; the second are "doodads" scattered around the map. The first kind come in groups of up to 16 tiles whose properties are identical within the group (the only difference is their graphics), so tiles within a group are interchangeable. The second kind are usually terrain obstacles whose graphics are larger than a single tile; to assemble the "doodad", its tiles must be placed in a specific order, so although these tiles are also grouped, they are not interchangeable within a group. The idea is to encode tiles of the first kind as `(unique-group-id, position-in-group)` and tiles of the second kind as `(unique-tile-id, 0.5)`, where `unique-group-id` and `unique-tile-id` are two non-overlapping sets of consecutive integers.
###Code
tiles = Tileset.JUNGLE.tiles
terrain_tiles = (tile for tile in tiles if not tile.is_empty and not tile.is_doodad)
doodad_tiles = (tile for tile in tiles if not tile.is_empty and tile.is_doodad)
groups = dict()
tile_features = dict()
for tile in terrain_tiles:
if not tile.group_id in groups:
groups[tile.group_id] = len(groups)
tile_features[tile] = (groups[tile.group_id], (tile.group_offset + 1) / 16)
group_count = len(groups)
del(groups)
for tile in doodad_tiles:
group_count += 1
tile_features[tile] = (group_count, 0.5)
print(group_count)
###Output
2413
###Markdown
At this point we have encoded each tile as a pair of numbers (x, y), where x ∈ [0, 2413) and y ∈ (0, 1]. The second parameter has a natural ordering of sorts; the first does not, so we will one-hot encode it.
###Code
one_hot_tile_features = dict()
for tile, (group_id, group_offset) in tile_features.items():
one_hot_tile_features[tile] = np.zeros(group_count + 1)
one_hot_tile_features[tile][group_id] = 1.0
one_hot_tile_features[tile][-1] = group_offset
del tile_features
###Output
_____no_output_____
###Markdown
Helper functions We will write functions that convert a tile into a vector and back
###Code
h = 64
w = 64
tile_vec_size = group_count + 1
input_vec_size = 2 * tile_vec_size
output_vec_size = tile_vec_size
learning_rate = 0.05
batch_size = 7
hidden_size = 128
dtype = tf.float32
def tile_to_output(tiles, vertical_index, horizontal_index):
if vertical_index >= 0 and horizontal_index >= 0:
tile = tiles[vertical_index, horizontal_index]
return np.array(one_hot_tile_features[tile])
else:
return np.zeros([tile_vec_size])
tile_to_output(lost_temple.tiles, 65, 8).shape
def tile_to_input(tiles, vertical_index, horizontal_index):
top_features = tile_to_output(tiles, vertical_index - 1, horizontal_index)
left_features = tile_to_output(tiles, vertical_index, horizontal_index - 1)
return np.concatenate([top_features, left_features])
tile_to_input(lost_temple.tiles, 65, 8).shape
###Output
_____no_output_____
###Markdown
Training
###Code
data = tiny_jungle_scenarios[:]
def batches(data, batch_size, epochs):
all_batches = []
for epoch in range(epochs):
random.shuffle(data)
all_batches += data
for i in range(0, epochs * len(data), batch_size):
inputs = all_batches[i: i + batch_size]
if len(inputs) == batch_size:
yield inputs
def scenario_to_input(scenario):
x = np.empty((h, w, input_vec_size), dtype=np.float32)
for vertical_index in range(h):
for horizontal_index in range(w):
x[vertical_index, horizontal_index, :] = tile_to_input(scenario.tiles, vertical_index, horizontal_index)
return x
def scenario_to_output(scenario):
y = np.empty((h, w, output_vec_size), dtype=np.float32)
for vertical_index in range(h):
for horizontal_index in range(w):
y[vertical_index, horizontal_index, :] = tile_to_output(scenario.tiles, vertical_index, horizontal_index)
return y
with tf.variable_scope('a', reuse=tf.AUTO_REUSE):
x = tf.placeholder(dtype, [None, h, w, input_vec_size])
y = tf.placeholder(dtype, [None, h, w, output_vec_size])
mdrnn_while_loop = MdRnnWhileLoop(dtype)
rnn_out, _ = mdrnn_while_loop(rnn_size=hidden_size, input_data=x)
model_out = slim.fully_connected(inputs=rnn_out, num_outputs=output_vec_size, activation_fn=tf.nn.sigmoid)
loss = tf.reduce_mean(tf.square(y - model_out))
grad_update = tf.train.AdamOptimizer(learning_rate).minimize(loss)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=False))
sess.run(tf.global_variables_initializer())
epochs = 10
step = 0
for batch in batches(data, batch_size, epochs):
grad_step_start_time = time()
model_preds, tot_loss_value, _ = sess.run([model_out, loss, grad_update], feed_dict={
x: np.stack([scenario_to_input(scenario) for scenario in batch]),
y: np.stack([scenario_to_output(scenario) for scenario in batch]),
})
print('steps = {0} | overall loss = {1:.5f} | time {2:.3f}'.format(
str(step).zfill(4),
tot_loss_value,
time() - grad_step_start_time))
if tot_loss_value != tot_loss_value:
break
step += 1
###Output
steps = 0000 | overall loss = 0.25002 | time 82.991
steps = 0001 | overall loss = 0.24672 | time 82.043
steps = 0002 | overall loss = 0.14108 | time 82.081
steps = 0003 | overall loss = 0.05703 | time 82.827
steps = 0004 | overall loss = 0.01717 | time 83.419
steps = 0005 | overall loss = 0.00428 | time 83.299
steps = 0006 | overall loss = 0.00120 | time 84.872
steps = 0007 | overall loss = 0.00060 | time 88.891
steps = 0008 | overall loss = 0.00046 | time 85.054
steps = 0009 | overall loss = 0.00044 | time 83.952
steps = 0010 | overall loss = 0.00044 | time 83.884
steps = 0011 | overall loss = 0.00044 | time 84.732
steps = 0012 | overall loss = 0.00043 | time 83.350
steps = 0013 | overall loss = 0.00043 | time 84.407
steps = 0014 | overall loss = 0.00043 | time 84.061
###Markdown
Training (2) Let's try a more powerful model
###Code
learning_rate = 0.05
batch_size = 7
hidden_size = 256
epochs = 10
dtype = tf.float32
with tf.variable_scope('b', reuse=tf.AUTO_REUSE):
x = tf.placeholder(dtype, [None, h, w, input_vec_size])
y = tf.placeholder(dtype, [None, h, w, output_vec_size])
mdrnn_while_loop = MdRnnWhileLoop(dtype)
rnn_out, _ = mdrnn_while_loop(rnn_size=hidden_size, input_data=x)
model_out = slim.fully_connected(inputs=rnn_out, num_outputs=output_vec_size, activation_fn=tf.nn.sigmoid)
loss = tf.reduce_mean(tf.square(y - model_out))
grad_update = tf.train.AdamOptimizer(learning_rate).minimize(loss)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=False))
sess.run(tf.global_variables_initializer())
step = 0
for batch in batches(data, batch_size, epochs):
grad_step_start_time = time()
model_preds, tot_loss_value, _ = sess.run([model_out, loss, grad_update], feed_dict={
x: np.stack([scenario_to_input(scenario) for scenario in batch]),
y: np.stack([scenario_to_output(scenario) for scenario in batch]),
})
print('steps = {0} | overall loss = {1:.5f} | time {2:.3f}'.format(
str(step).zfill(4),
tot_loss_value,
time() - grad_step_start_time))
if tot_loss_value != tot_loss_value:
break
step += 1
###Output
steps = 0000 | overall loss = 0.25015 | time 166.358
steps = 0001 | overall loss = 0.23416 | time 162.939
steps = 0002 | overall loss = 0.14470 | time 171.705
steps = 0003 | overall loss = 0.05566 | time 168.984
steps = 0004 | overall loss = 0.01478 | time 169.868
steps = 0005 | overall loss = 0.00360 | time 166.917
steps = 0006 | overall loss = 0.00106 | time 164.875
steps = 0007 | overall loss = 0.00056 | time 164.821
steps = 0008 | overall loss = 0.00045 | time 163.631
steps = 0009 | overall loss = 0.00043 | time 163.015
steps = 0010 | overall loss = 0.00043 | time 161.368
steps = 0011 | overall loss = 0.00043 | time 163.613
steps = 0012 | overall loss = 0.00043 | time 161.678
steps = 0013 | overall loss = 0.00043 | time 167.323
steps = 0014 | overall loss = 0.00043 | time 164.903
steps = 0015 | overall loss = 0.00043 | time 166.771
steps = 0016 | overall loss = 0.00043 | time 169.588
steps = 0017 | overall loss = 0.00043 | time 165.708
steps = 0018 | overall loss = 0.00043 | time 165.685
steps = 0019 | overall loss = 0.00043 | time 165.341
steps = 0020 | overall loss = 0.00043 | time 163.580
steps = 0021 | overall loss = 0.00043 | time 163.901
steps = 0022 | overall loss = 0.00043 | time 168.863
###Markdown
No difference at all, except that training is slower. Let's try a weaker model.
###Code
learning_rate = 0.1
batch_size = 7
hidden_size = 64
epochs = 15
dtype = tf.float32
with tf.variable_scope('c', reuse=tf.AUTO_REUSE):
x = tf.placeholder(dtype, [None, h, w, input_vec_size])
y = tf.placeholder(dtype, [None, h, w, output_vec_size])
mdrnn_while_loop = MdRnnWhileLoop(dtype)
rnn_out, _ = mdrnn_while_loop(rnn_size=hidden_size, input_data=x)
model_out = slim.fully_connected(inputs=rnn_out, num_outputs=output_vec_size, activation_fn=tf.nn.sigmoid)
loss = tf.reduce_mean(tf.square(y - model_out))
grad_update = tf.train.AdamOptimizer(learning_rate).minimize(loss)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=False))
sess.run(tf.global_variables_initializer())
step = 0
for batch in batches(data, batch_size, epochs):
grad_step_start_time = time()
model_preds, tot_loss_value, _ = sess.run([model_out, loss, grad_update], feed_dict={
x: np.stack([scenario_to_input(scenario) for scenario in batch]),
y: np.stack([scenario_to_output(scenario) for scenario in batch]),
})
print('steps = {0} | overall loss = {1:.5f} | time {2:.3f}'.format(
str(step).zfill(4),
tot_loss_value,
time() - grad_step_start_time))
if tot_loss_value != tot_loss_value:
break
step += 1
learning_rate = 0.1
batch_size = 7
hidden_size = 32
epochs = 10
dtype = tf.float32
with tf.variable_scope('d', reuse=tf.AUTO_REUSE):
x = tf.placeholder(dtype, [None, h, w, input_vec_size])
y = tf.placeholder(dtype, [None, h, w, output_vec_size])
mdrnn_while_loop = MdRnnWhileLoop(dtype)
rnn_out, _ = mdrnn_while_loop(rnn_size=hidden_size, input_data=x)
model_out = slim.fully_connected(inputs=rnn_out, num_outputs=output_vec_size, activation_fn=tf.nn.sigmoid)
loss = tf.reduce_mean(tf.square(y - model_out))
grad_update = tf.train.AdamOptimizer(learning_rate).minimize(loss)
sess = tf.Session(config=tf.ConfigProto(log_device_placement=False))
sess.run(tf.global_variables_initializer())
step = 0
for batch in batches(data, batch_size, epochs):
grad_step_start_time = time()
model_preds, tot_loss_value, _ = sess.run([model_out, loss, grad_update], feed_dict={
x: np.stack([scenario_to_input(scenario) for scenario in batch]),
y: np.stack([scenario_to_output(scenario) for scenario in batch]),
})
print('steps = {0} | overall loss = {1:.5f} | time {2:.3f}'.format(
str(step).zfill(4),
tot_loss_value,
time() - grad_step_start_time))
if tot_loss_value != tot_loss_value:
break
step += 1
###Output
steps = 0000 | overall loss = 0.24999 | time 74.470
steps = 0001 | overall loss = 0.21263 | time 67.603
steps = 0002 | overall loss = 0.11049 | time 67.623
steps = 0003 | overall loss = 0.04579 | time 68.911
steps = 0004 | overall loss = 0.01196 | time 68.554
steps = 0005 | overall loss = 0.00284 | time 69.003
steps = 0006 | overall loss = 0.00089 | time 69.409
steps = 0007 | overall loss = 0.00052 | time 68.980
steps = 0008 | overall loss = 0.00044 | time 72.038
steps = 0009 | overall loss = 0.00043 | time 68.920
steps = 0010 | overall loss = 0.00043 | time 69.534
steps = 0011 | overall loss = 0.00043 | time 68.089
steps = 0012 | overall loss = 0.00043 | time 69.379
steps = 0013 | overall loss = 0.00043 | time 69.769
steps = 0014 | overall loss = 0.00043 | time 69.232
###Markdown
The same result, but it trains faster. Let's see what is actually going on. Sampling the model Let's examine what happens...
###Code
def iterate_by_layers(height, width):
def iterate_layer(layer):
for x in range(layer + 1):
y = layer - x
if 0 <= x < width and 0 <= y < height:
yield (x, y)
for layer in range(width + height - 1):
yield iterate_layer(layer)
with tf.variable_scope('d', reuse=tf.AUTO_REUSE):
x = tf.placeholder(dtype, [1, h, w, input_vec_size])
mdrnn_while_loop = MdRnnWhileLoop(dtype)
rnn_out, _ = mdrnn_while_loop(rnn_size=hidden_size, input_data=x)
model_out = slim.fully_connected(inputs=rnn_out, num_outputs=output_vec_size, activation_fn=tf.nn.sigmoid)
inputs = np.zeros([1, 64, 64, input_vec_size])
ouputs = np.zeros([1, 64, 64, output_vec_size])
for layer in iterate_by_layers(64, 64):
l = list(layer)
for i, j in l:
inputs[0, i, j, 0:output_vec_size] = ouputs[0, i - 1, j, :]
inputs[0, i, j, output_vec_size:2*output_vec_size] = ouputs[0, i, j - 1, :]
model_preds = sess.run(model_out, feed_dict={ x: inputs })
break
model_preds[0, 0, 0, :-1]
sum(model_preds[0, 0, 0, :-1])
plt.plot(model_preds[0, 0, 0, :-1])
ind = np.argmax(model_preds[0, 0, 0, :-1])
ind
model_preds[0, 0, 0, -1]
tiles = { features[-1]: tile for tile, features in one_hot_tile_features.items() if features[ind] == 1 }
_, tile = min(tiles.items(), key=lambda x: abs(x[0] - model_preds[0, 0, 0, -1]))
tile
plt.imshow(tile.graphics)
one_hot_tile_features[tile]
###Output
_____no_output_____
###Markdown
Let's try sampling the model: we will iteratively ask it to produce new tiles, layer by layer.
###Code
import graphics
from IPython.display import clear_output
%matplotlib inline
with tf.variable_scope('d', reuse=tf.AUTO_REUSE):
x = tf.placeholder(dtype, [1, h, w, input_vec_size])
mdrnn_while_loop = MdRnnWhileLoop(dtype)
rnn_out, _ = mdrnn_while_loop(rnn_size=hidden_size, input_data=x)
model_out = slim.fully_connected(inputs=rnn_out, num_outputs=output_vec_size, activation_fn=tf.nn.sigmoid)
inputs = np.zeros([1, 64, 64, input_vec_size])
output_tiles = np.full([64, 64], Tileset.JUNGLE.tiles[0], dtype=object)
for layer in iterate_by_layers(64, 64):
l = list(layer)
for i, j in l:
inputs[0, i, j] = tile_to_input(output_tiles, i, j)
model_preds = sess.run(model_out, feed_dict={ x: inputs })
for i, j in l:
group_id = np.argmax(model_preds[0, i, j, 10:-1])
group_offset_tiles = { features[-1]: tile for tile, features in one_hot_tile_features.items() if features[group_id] == 1 }
_, tile = min(group_offset_tiles.items(), key=lambda x: abs(x[0] - model_preds[0, i, j, -1]))
output_tiles[i, j] = tile
clear_output()
plt.figure(figsize=(10, 10))
plt.imshow(graphics.tile(output_tiles))
plt.show()
###Output
_____no_output_____
###Markdown
It has only learned to output the most frequently occurring tile. Let's set that aside for a moment. What would the model show if we gave it a real, existing map and, for every pair of preceding tiles, asked for the next one?
###Code
import graphics
from IPython.display import clear_output
%matplotlib inline
with tf.variable_scope('d', reuse=tf.AUTO_REUSE):
x = tf.placeholder(dtype, [1, h, w, input_vec_size])
mdrnn_while_loop = MdRnnWhileLoop(dtype)
rnn_out, _ = mdrnn_while_loop(rnn_size=hidden_size, input_data=x)
model_out = slim.fully_connected(inputs=rnn_out, num_outputs=output_vec_size, activation_fn=tf.nn.sigmoid)
inputs = scenario_to_input(tiny_jungle_scenarios[0]).reshape([1, 64, 64, input_vec_size])
output_tiles = np.full([64, 64], Tileset.JUNGLE.tiles[0], dtype=object)
model_preds = sess.run(model_out, feed_dict={ x: inputs })
for i in range(64):
for j in range(64):
group_id = np.argmax(model_preds[0, i, j, 25:-1])
group_offset_tiles = { features[-1]: tile for tile, features in one_hot_tile_features.items() if features[group_id] == 1 }
_, tile = min(group_offset_tiles.items(), key=lambda x: abs(x[0] - model_preds[0, i, j, -1]))
output_tiles[i, j] = tile
plt.figure(figsize=(16, 16))
plt.imshow(graphics.tile(output_tiles))
plt.show()
###Output
_____no_output_____
###Markdown
2-feature-engineering The data come from: * https://www.kaggle.com/austinreese/craigslist-carstrucks-data * https://github.com/AustinReese/UsedVehicleSearch
###Code
import pandas as pd  # assumed import; not shown in the notebook cells as given
df = pd.read_csv('../data/interim/vehicles.csv')
df.info()
df['year'] = 2020 - df['year'] # change to age
df['year'].describe()
df.shape
dfs = [df]
to_delete = list()
for col in df.select_dtypes('object'):
dfs.append(pd.get_dummies(df[[col]], drop_first=True))
to_delete.append(col)
df = pd.concat(dfs, axis=1)
df.drop(to_delete, axis=1, inplace=True)
sorted(df.columns)
df.shape
df.info()
df.to_csv('../data/processed/vehicles.csv', index=False)
del df
###Output
_____no_output_____ |
notebooks/tensorflow_5_5_mnist_inference.ipynb | ###Markdown
Best-practice example program Goals: * the forward pass should not require passing in all the variables * model persistence * save intermediate model results at regular intervals during training, so nothing is lost if the program crashes unexpectedly (page 126)
###Code
import tensorflow as tf
INPUT_NODE=784
OUTPUT_NODE=10
LAYER1_NODE=500
#Variables are obtained through the tf.get_variable function, which serves to:
#create variables, read variables from the saved model at test time, and rename moving-average variables when they are loaded
#this function also adds the variable's regularization loss to the 'losses' collection
def get_weight_variable(shape,regularizer):
weights=tf.get_variable(
'weights',
shape,
initializer=tf.truncated_normal_initializer(stddev=0.1)
)
if regularizer!=None:
tf.add_to_collection('losses',regularizer(weights))
return weights
#forward propagation
def inference(input_tensor,regularizer):
with tf.variable_scope('layer1'):
weights=get_weight_variable([INPUT_NODE,LAYER1_NODE],regularizer)
biases=tf.get_variable('biases',[LAYER1_NODE],initializer=tf.constant_initializer(0.0))
layer1=tf.nn.relu(tf.matmul(input_tensor,weights)+biases)
with tf.variable_scope('layer2'):
weights=get_weight_variable([LAYER1_NODE,OUTPUT_NODE],regularizer)
biases=tf.get_variable('biases',[OUTPUT_NODE],initializer=tf.constant_initializer(0.0))
layer2=tf.matmul(layer1,weights)+biases
return layer2
###Output
_____no_output_____ |
data/tidy/.ipynb_checkpoints/preprocessing-checkpoint.ipynb | ###Markdown
preprocessing
###Code
library(scales)
library(broom)
library(pillar)
library(Rcpp)
library(plotly)
#install.packages("scales")
#install.packages("broom", type="binary")
#install.packages("pillar", type="binary")
#install.packages("Rcpp")
#install.packages("plotly")
#install.packages('tidyverse')
#install.packages('stringi')
library(readxl)
library(tidyr)
library(tidyverse)
library(writexl)
library(dplyr)
###Output
_____no_output_____
###Markdown
Merging explanatory variables
###Code
# Read files
df <- read_excel("../../data/tidy/Cumulative_Data.xlsx",sheet=1)
df.wb = read_excel("../../data/tidy/wb-data.xlsx",sheet=1)
df.rain = read_excel("../../data/tidy/wb-rain.xlsx",sheet=1)
df.cotwo = read_excel("../../data/tidy/wb-co2-2014.xlsx",sheet=1)
df.fw = read_excel("../../data/tidy/wb-fw-2017.xlsx",sheet=1)
df.wb.mort.gas = read_excel("../../data/tidy/wb-mort-gasprice-2016.xlsx",sheet=1)
# merge files
df.exp <- merge(df.fw,df.cotwo, by="Country Name",all.x=T)
df.exp <- merge(df.exp,df.wb.mort.gas, by="Country Name",all.x=T)
df.exp <- df.exp[,c(1:3,8,10,12)]
df.exp <- merge(df.exp,df.rain, by="Country Name",all.x=T)
df.exp <- df.exp[,c(1:6,8)]
#rename variable title to be short
df.exp <- df.exp %>%
rename(
Country = "Country Name",
prec = "Average precipitation in depth (mm per year)",
rifr = "2017 [YR2017] - Renewable internal freshwater resources per capita (cubic meters) [ER.H2O.INTR.PC]",
cotw = "CO2 emissions from transport (% of total fuel combustion) [EN.CO2.TRAN.ZS] - 2014 [YR2014]",
moru = "Mortality rate attributed to unsafe water, unsafe sanitation and lack of hygiene (per 100,000 population) [SH.STA.WASH.P5] - 2016 [YR2016]",
ppfg = "Pump price for gasoline (US$ per liter) [EP.PMP.SGAS.CD] - 2016 [YR2016]"
)
# count
count(df.exp)
count(df.wb)
#Investigate NAs
#df.exp$cotw <- as.numeric(df.exp$cotw)
#df.exp[is.na(df.exp)] <- 0
#df.exp[,c(3:7)][is.na(df.exp[,c(3:7)])] <- 0
df.exp[, c(3:7)] <- sapply(df.exp[, c(3:7)], as.numeric)
#df.exp[,3][is.na(df.exp[,3])] <- 0
head(df.exp)
#Renaming the column variables for the world bank data to be consistent with the rest of the data
df.wb <- df.wb %>%
rename(
Year = Time,
Country = "Country Name",
cgdp = "GDP (current US$) [NY.GDP.MKTP.CD]",
tpop = "Population, total [SP.POP.TOTL]",
upop = "Urban population (% of total population) [SP.URB.TOTL.IN.ZS]",
popd = "Population density (people per sq. km of land area) [EN.POP.DNST]",
land ="Land area (sq. km) [AG.LND.TOTL.K2]",
lita = "Literacy rate, adult total (% of people ages 15 and above) [SE.ADT.LITR.ZS]",
lity = "Literacy rate, youth (ages 15-24), gender parity index (GPI) [SE.ADT.1524.LT.FM.ZS]" ,
mori = "Mortality rate, under-5 (per 1,000 live births) [SH.DYN.MORT]"
)
# turn character into numbers and numeric
df.wb[, 3] <- sapply(df.wb[, 3], as.numeric)
df.wb[, 5:12] <- sapply(df.wb[, 5:12], as.numeric)
# merge files df.exp and df.wb to world bank source
head(df.wb)
# Clean-up
#Cleaning Column Years
for (i in seq(1, nrow(df))) {
if (grepl("-", df[i,'Year'])) {
yy1 = substr(df[i,"Year"], start=1,stop=2)
yy2 = substr(df[i,"Year"], start=6,stop=7)
yy3 = paste(yy1,yy2,sep = "")
#yy3 = as.integer(yy3)
df[i,"Year"] = yy3
}
}
# Finding the Max Years
df$Year = as.numeric(df$Year)
df1 <- matrix(NA, nrow = 0, ncol = length(colnames(df))) #create new empty DF
colnames(df1) = colnames(df) # assign column names to empty data frame
for (country in unique(df$Country)) {
maxyear = max(df[df$Country==country,'Year'])
df1 = rbind( df1, df[(df$Country==country) & (df$Year==maxyear),])
}
#Removing Population Variables (Removal of duplication)
dfh = df1[,!grepl("^P_",names(df1))]
#NA and zeros Analysis
na_count = colSums(is.na(dfh)) #Counting all your NA in each data frames
na_count
na.count.wb = colSums(is.na(df.wb))
na.count.wb
# Removing columns from the household data frame if the share of NA values is greater than 50%
dfsimple = dfh[, which(colMeans(!is.na(dfh)) > 0.5)]
head(dfsimple)
na_count_cleaned = colSums(is.na(dfsimple)) #Counting all your NA in each data frames
na_count_cleaned
hist(na_count_cleaned)
zeros = colSums(dfsimple != 0) # Counting all your zeros in each data frame
# Understanding wb data and clean up
df.wb <- df.wb[,c(1:2,5:12)]
# # Counting Year less than 2000.
head(dfsimple$Year , 7)
sum(dfsimple$Year < 2000)
dfsimple[dfsimple$Year < 2000, ]
dfsimplepost2000 = filter(dfsimple, Year >=2000 & Year < 2020)
head(dfsimplepost2000,7)
count(dfsimplepost2000)
write_xlsx(dfsimplepost2000, '../../results/dfsimple.xlsx')
# additional clean up
setdiff(dfsimplepost2000$Country, df.wb$Country)
# Correct the country names in DF
df.wb [df.wb =='namibia'] = "Namibia"
df.wb [df.wb =='Yemen, Rep.'] = "Yemen"
df.wb [df.wb =='Gambia, The'] = "Gambia"
df.wb [df.wb =='Egypt, Arab Rep.'] = "Egypt"
df.wb [df.wb =='Congo, Dem. Rep.'] = "Congo Democratic Republic"
df.wb [df.wb =='Congo, Rep.'] = "Congo"
setdiff(dfsimplepost2000$Country, df.wb$Country) # Double Check
df.wb <- merge(x = dfsimplepost2000,
y = df.wb,
by = c("Country"))
df.explore <- df.wb[, c(1,33:40 ) ]
setdiff(df.wb$Country, df.exp$Country)
# Correct the country names in DF
df.exp [df.exp =='namibia'] = "Namibia"
df.exp [df.exp =='Yemen, Rep.'] = "Yemen"
df.exp [df.exp =='Gambia, The'] = "Gambia"
df.exp [df.exp =='Egypt, Arab Rep.'] = "Egypt"
df.exp [df.exp =='Congo, Dem. Rep.'] = "Congo Democratic Republic"
df.exp [df.exp =='Congo, Rep.'] = "Congo"
setdiff(df.wb$Country, df.exp$Country) # Double Check
df.exp <- merge(x = df.explore,
y = df.exp,
by = c("Country"))
df.exp <- df.exp[,c(1:9,11:15)]
write_xlsx(df.exp, '../../results/df-wb.xlsx')
###Output
_____no_output_____
###Markdown
Preliminary Data Visualization
###Code
df <- dfsimplepost2000
hist(df$Year, main="",
xlab="Years",
ylab="Frequency",
col.main="red", col.lab="blue")
summary(df$uiws) # House hold with unimproved water source
hist(df$uiws, main="Households using an unimproved water source",
xlab="Total Percentage",
ylab="Frequency",
col.main="red", col.lab="blue")
plot(df$uiws , main="Households using an unimproved water source",
xlab="Index",
ylab="Total Percentage",
col.main="red", col.lab="blue")
summary(df$bicy) # house hold with possession of bicycle.
hist(df$bicy, main="Households possessing a bicycle",
xlab="Total Percentage",
ylab="Frequency",
col.main="red", col.lab="blue")
plot(df$bicy, main="Households possessing a bicycle",
xlab="Index",
ylab="Total Percentage",
col.main="red", col.lab="blue")
summary(df$Year)
summary(df$Country)
head(df.exp)
df.wb <- df.exp[,c(1:6,10,14)] # Data from the World Bank
head(df.wb)
summary(df.wb$cgdp) # GDP (current US$)
hist(df.exp$cgdp, main="GDP (current US$)",
xlab="GDP (current US$)",
ylab="Count ",
col.main="red", col.lab="blue")
summary(df.wb$cgdp) # GDP (current US$)
summary(df.wb)
###Output
_____no_output_____ |
Firn_notebook.ipynb | ###Markdown
The Rheological Behavior of Firn: Experimental Observations of Dislocation Creep via Grain Boundary Sliding*How does grain size, strain state, and microstructure influence the rheological behavior of ice compaction among glaciers and ice sheets?*---**Author**: [Daniel Furman](mailto:[email protected]) | UPenn Experimental Geophysics Lab---Scripts for reproducing all analyses required to replicate Furman and Goldsby published in *Penn Science, 19*(2), 2021. (Section 1) Calculating Densification Rates --- Sixteen densification rate calculations. The entire creep curve was composed of a transient response followed by the steady-state regime, with rates calculated by taking time slices during steady-state and averaging many thus-approximated rates. Steady-state slices corresponded to relatively small changes in density; therefore, the mean relative density was considered representative per measurement, and the densification can be taken as density invariant, albeit locally. ---
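The computation itself lives in `calc_dens_rates.py` and is not reproduced in this notebook. Purely as an illustration of the procedure described above, a steady-state rate estimate could be sketched as follows; the column names, the 917 kg m⁻³ ice density, and the slicing logic are assumptions for the sketch, not the script's actual code.

```python
import numpy as np
import pandas as pd

RHO_ICE = 917.0  # kg m^-3, assumed reference density for relative density

def steady_state_rate(csv_path, t_start, t_end):
    """Average densification rate (dp/p_ice)/dt over one steady-state time slice."""
    data = pd.read_csv(csv_path)  # assumed columns: 'time_s', 'density_kg_m3'
    window = data[(data['time_s'] >= t_start) & (data['time_s'] <= t_end)]
    t = window['time_s'].to_numpy()
    rel_rho = window['density_kg_m3'].to_numpy() / RHO_ICE
    rates = np.gradient(rel_rho, t)      # local rates within the slice
    return rates.mean(), rel_rho.mean()  # averaged rate and representative mean density
```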
###Code
exec(open('calc_dens_rates.py').read())
###Output
The compaction creep test files are:
['data/compaction_id_01.csv', 'data/compaction_id_02.csv', 'data/compaction_id_03.csv', 'data/compaction_id_04.csv', 'data/compaction_id_05.csv', 'data/compaction_id_06.csv', 'data/compaction_id_07.csv']
There are 7 total experimental files.
###Markdown
Table 1 : Compaction Testing Data---
###Code
# We have generated the data encoded in the Supporting Information's Table S1.
pd.options.display.float_format = "{:,.3e}".format
paper_table.columns = ['Sample Id', 'Grain radius', 'Mean steady-state density',
'Applied Stress (MPa)', 'Densification rate']
paper_table
###Output
_____no_output_____
###Markdown
**Description for the columns is as follows.**|Variable| Definition| Units| |:--- |:--- |:---||Sample ID| Unique identification of each experiment ||Grain radius| Average grain radius| micrometers|Mean steady-state density |Mean density within the time series measurement|p/pice|Applied Stress |Stess / Mean density | MPa|Densification rate | Rate measurement | dp/pice s-1 (Section 2) Confidence Intervals of Linear Regression---Densification rate (dp/p)dt versus applied stress (log-log space) withuncertainty estimates of the linear slope {n = 1.57 ± 0.22, n = 1.68± 0.45, n = 3.74 ± 1.02} representing the 95% confidence intervals ofeach linear regression.---
###Code
exec(open('exp_confidence_intervals.py').read())
###Output
_____no_output_____
###Markdown
(Section 3) Flow Law Fitting--- This lengthy script constrains the strain-stress constitutive model, describing densification with several physical variables. We use Eq. 1 to analyze the data, taking each series' mean relative density. We then output several plots, including the p parameter calculation, the flow law behind the experimental data, and the firn flow law versus the solid ice flow law. ---
###Code
exec(open('flow_law_fitting.py').read())
###Output
GSS n exponent:
1.626
Disl creep n exponent:
3.74
GSS p exponent:
0.8966
the p exponent bootstrapped uncertainty estimate:
0.896586304133816 (0.8752546864318639, 0.9164945334414516)
0.89659 +- 0.020555
The A parameter for the GSS flow law is:
0.4431
The A parameter bootstrapped uncertainty estimate:
0.44312801265872703 (0.40167142497493186, 0.4791801234447421)
0.44313 +- 0.038415
The A parameter for the disl creep flow law is:
1.481e+05
The A parameter bootstrapped uncertainty estimate:
148139.53109154702 (122534.04030869124, 172779.8546283394)
148140 +- 25125
###Markdown
(Section 4) More Analytics--- In this first script we take several steady-state rate measurements from a compaction test that took place over several weeks. These rates vary from 1.05e-8 to 2.81e-8, revealing the density dependence during firn creep.---
###Code
exec(open('dens_multiweek.py').read())
###Output
The largest measured rate: 2.805e-08
The smallest measured rate: 1.056e-08
The largest flow law disl. creep rate: 2.910e-08
The smallest flow law disl. creep rate: 2.052e-08
The largest flow law disGBS rate: 3.738e-09
The smallest flow law disGS rate: 3.099e-09
###Markdown
In this next script we approximate 95% confidence intervals for statistics for which we cannot reasonably guess the probability distribution. We employ the bootstrapped.bootstrap package; see https://pypi.org/project/bootstrapped/ for more information. ---
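The exact calls live in `bootstrap.py`; the basic usage pattern for the package, as documented on its PyPI page, looks roughly like the sketch below (the sample array here is made up for illustration):

```python
import numpy as np
import bootstrapped.bootstrap as bs
import bootstrapped.stats_functions as bs_stats

# hypothetical sample of steady-state rate estimates (illustrative values only)
rates = np.array([2.1e-8, 2.7e-8, 3.0e-8, 2.4e-8, 2.9e-8])

# bootstrap the mean; the result prints as "value    (lower bound, upper bound)"
result = bs.bootstrap(rates, stat_func=bs_stats.mean)
print(result)
```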
###Code
exec(open('bootstrap.py').read())
###Output
the first rate measurement bootstrapped uncertainty estimate:
2.7150195476208667e-08 (1.7623545175546536e-08, 3.635001193820403e-08)
the second rate measurement bootstrapped uncertainty estimate:
1.41787149698715e-07 (1.4020316990988198e-07, 1.44655245994442e-07)
the density bootstrapped uncertainty for the red series:
0.8147427945600344 (0.8114998856424712, 0.8182361652292689)
0.81474 +- 0.00332
the density bootstrapped uncertainty for the green series:
0.8181170731029539 (0.8102888013093179, 0.8244734135020486)
0.81812 +- 0.00709
the density bootstrapped uncertainty for the green series:
0.8311185289660825 (0.8255097913222744, 0.8367272666098907)
0.83112 +- 0.00561
###Markdown
(Section 5) Flow Law Modeling One--- Densification mechanism maps for a) terrestrial settings and b) the Martian layered polar deposits (Frost and Ashby 1982, Maeno and Ebinuma 1982). This script takes a long time to run, over 16 minutes, as the sympy solver has to iterate many times. ---
###Code
exec(open('mechanism_maps.py').read()) # takes a while to re-run
###Output
_____no_output_____
###Markdown
(Section 6) Flow Law Modeling Two---In this script we calculate the densification versus age series from all five terrestrial ice sheet profiles. Pressure was converted to age by dividing by the accumulation rate of the site and gravity (pressure / accumulation rate / gravity). We assume the accumulation rate is constant, which is clearly a simplifying approximation. In the second part of "field_modeling" we contrast the natural rates of densification with our flow law model predictions. We consider the two power law mechanisms resolved from our testing, using the temperature, density, and stress conditions from the field profiles. ---
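For reference, a minimal sketch of that pressure-to-age conversion (units assumed here: pressure in Pa, accumulation as a mass flux in kg m-2 yr-1, gravity in m s-2; the numbers are placeholders, not site values):
###Code
# Hedged sketch of the pressure -> age conversion described above:
# age = pressure / (accumulation rate * gravity), for a constant accumulation rate.
import numpy as np

pressure = np.array([1.0e4, 5.0e4, 1.0e5, 2.0e5])  # Pa, overburden pressure (placeholder)
accumulation_rate = 250.0                          # kg m^-2 yr^-1 (placeholder)
gravity = 9.81                                     # m s^-2

age_years = pressure / (accumulation_rate * gravity)
print(age_years)
###Output
_____no_output_____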
###Code
exec(open('field_modeling_2.py').read())
###Output
Firn rates across intermediate densificaiton derived from density-pressure profiles:
the mean rate is: 3.2561679954591296e-11
the min rate is: 1.4323229272119167e-12
the max rate is: 1.4267601326864937e-11
|
College_placement_Prediction.ipynb | ###Markdown
EDA
###Code
sns.countplot(df['Gender'])
###Output
_____no_output_____
###Markdown
The number of male students is greater than the number of female students in the placement data.
###Code
plt.figure(figsize=(16,4))
sns.countplot(df.Stream)
###Output
_____no_output_____
###Markdown
We can see that students from Computer Science and IT outnumber students from any other stream in the placement data.
###Code
plt.figure(figsize=(13,5))
df.groupby('Stream')['Internships'].sum().plot(kind='bar')
###Output
_____no_output_____
###Markdown
From the following graph we can see that students from Computer Science have more internships than any other stream.
###Code
plt.figure(figsize=(13,5))
df.groupby('Stream')['PlacedOrNot'].sum().plot(kind='bar')
###Output
_____no_output_____
###Markdown
From the graph we can see that students from Computer Science and IT have a higher chance of getting placed than any other stream. Encoding
###Code
from sklearn.preprocessing import LabelEncoder
lr = LabelEncoder()
df['gender'] = pd.DataFrame(lr.fit_transform(df[['Gender']]))
df.head()
df['stream'] = pd.DataFrame(lr.fit_transform(df['Stream']))
df2 = df.drop(['Gender','Stream'],axis=1)
X = df2.drop('PlacedOrNot',axis=1)
y = df2.PlacedOrNot
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.3,random_state=1)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.metrics import cohen_kappa_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve
from sklearn.metrics import accuracy_score
# import function to perform feature selection
from sklearn.feature_selection import RFE
log_reg = LogisticRegression()
log_reg.fit(X_train,y_train)
y_pred = log_reg.predict(X_test)
print('Accuracy Score',accuracy_score(y_test,y_pred)*100)
#model has 76% Accuracy...
FP_rate,TP_rate,threshold=roc_curve(y_test,y_pred)
plt.figure(figsize=(15,8))
plt.plot(FP_rate,TP_rate,color='red')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for Admission Prediction Classifier (Full Model)', fontsize = 15)
plt.xlabel('False positive rate (1-Specificity)', fontsize = 15)
plt.ylabel('True positive rate (Sensitivity)', fontsize = 15)
plt.text(x = 0.02, y = 0.9, s = ('AUC Score:', round(metrics.roc_auc_score(y_test, y_pred),4)))
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Decision Tree:-
###Code
from sklearn.tree import DecisionTreeClassifier
DT=DecisionTreeClassifier()
DT_model_full = DT.fit(X_train,y_train)
y_pred_test = DT_model_full.predict(X_test)
print("Accuracy Score on Test data:\t",accuracy_score(y_test,y_pred_test))
y_pred_train = DT_model_full.predict(X_train)
print("Accuracy Score on Train Data:\t",accuracy_score(y_train,y_pred_train))
FP_rate,TP_rate,threshold=roc_curve(y_test,y_pred_test)
plt.figure(figsize=(15,8))
plt.plot(FP_rate,TP_rate,color='red')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for Admission Prediction Classifier (Full Model)', fontsize = 15)
plt.xlabel('False positive rate (1-Specificity)', fontsize = 15)
plt.ylabel('True positive rate (Sensitivity)', fontsize = 15)
plt.text(x = 0.02, y = 0.9, s = ('AUC Score:', round(metrics.roc_auc_score(y_test, y_pred_test),4)))
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
From the two models above we can say that the Decision Tree has better accuracy than Logistic Regression on this dataset.
###Code
X_train.head(2)
###Output
_____no_output_____
###Markdown
Let's predict...
###Code
Val = [[30,0,8,0,0,1,5]]
print("Predicted VAlue:- ",DT_model_full.predict(Val) )
###Output
Predicted VAlue:- [1]
|
Assignment_2_Computational Statistics/Assignment2.ipynb | ###Markdown
Testing the classical samplers implemented in SciPy.stats Let us just check that the SciPy implementations of the inverse gamma and the multivariate normal make sense
###Code
a,b = 3, 3
x = np.linspace(0, 3, 100)
#plots an inverse gamma of scale parameter b (1/rate)
plt.plot(x, invgamma.pdf(x, a, scale = b))
plt.show()
plt.plot(x,invgamma.cdf(x, a, scale = b))
plt.show()
#here we test for a 2d multidimensional normal
cov = np.array([[1,0],[0,1]])
mu = np.zeros(2)
test = multivariate_normal.rvs(mean=mu, cov=cov, size = 10000) #sample the normal
plt.hist2d(test[:,0], test[:,1]) #2d histogram for the normal samples
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
plt.scatter(test[:,0], test[:,1]) #scatter plot for the normal samples
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
x, y = np.mgrid[-1:1:.01, -1:1:.01]
pos = np.dstack((x, y))
plt.contourf(x, y, multivariate_normal.pdf(pos,mean=mu, cov=cov)) #contour plot of the scipy implementation's density
plt.gca().set_aspect('equal', adjustable='box')
plt.show()
###Output
_____no_output_____
###Markdown
Defines important functions
###Code
def inefficiency_calculator(estimations, k = 20):
'''
Computes the inefficiency factor for a general dataset, based on Gemerman/Lopes estimation technique
Input: data; k = 20, number of batches
Output: estimation of tau^2/n
'''
mean = np.mean(estimations)
n = len(estimations)
m = int(n/k)
summation = 0
for j in range(k):
mean_batch = np.mean(estimations[j*m:(j+1)*m])
summation += (mean_batch-mean)**2
tau_sqrd_overn = summation/(k*(k-1))
return tau_sqrd_overn
def CI(estimations, k = 20, alpha = 0.95):
'''
    Calculates the alpha-level asymptotic confidence interval for estimations using a t-distribution
    Input: data; k = 20, number of batches; alpha = 0.95, confidence level
    Output: estimation of the asymptotic confidence interval
'''
tau_sqrd_overn = inefficiency_calculator(estimations, k = k)
return np.mean(estimations)-np.sqrt(tau_sqrd_overn)*t.ppf(1-(1-alpha)/2, k-1), np.mean(estimations)+np.sqrt(tau_sqrd_overn)*t.ppf(1-(1-alpha)/2, k-1)
def TracePlot(estimations, skip = 1, interval = False, rang = None, subplot = False):
'''
Makes traceplots of MCMC (ergodic means) estimators as a function of number of iterations
Input: data; skip = 1, how many iterations to skip in calculating t_N, so we calculate t_N*i, where i is the
    iteration; interval = False, if True, plots 95% asymptotic CI; rang = None, if list, restricts the plot
only to the rang in the y-axis; subplot = False, if True, returns the plotted values
Output: if subplot = True, returns the plotted values and plots nothing
'''
means = []
if interval == True:
upper_bound = []
lower_bound = []
#only computes means up to skip*i, so if skip = 1, computes the full traceplot
max_it = int(len(estimations)/skip)
number_estimations = []
for i in range(max_it):
means.append(np.mean(estimations[:i*skip+1]))
number_estimations.append(i*skip+1)
        #if interval is True, computes the asymptotic CI at the 95% confidence level
if interval == True:
ci = CI(estimations[:i*skip+1])
lower_bound.append(ci[0])
upper_bound.append(ci[1])
if subplot == False:
plt.plot(number_estimations, means)
plt.plot(number_estimations, np.mean(estimations)*np.ones(len(means)), 'r--')
        #if interval is True, plots the asymptotic 95% confidence region
if interval == True:
plt.plot(number_estimations, lower_bound, linestyle = 'dashed', color = 'orange', linewidth = 0.7)
plt.plot(number_estimations, upper_bound, linestyle = 'dashed', color = 'orange', linewidth = 0.7)
plt.fill_between(number_estimations, lower_bound, upper_bound, color = 'orange', alpha = 0.05)
if rang != None:
plt.ylim(rang)
plt.show()
else:
return number_estimations, means, np.mean(estimations)*np.ones(len(means))
def ACF(distribution):
'''
    Estimates the autocorrelation coefficients of an MCMC-based estimation, up to 40 lags
Input: data
Output: autocorrelation estimations from 0 to 40 lag
'''
n = len(distribution)
mean = np.mean(distribution)
variance = np.std(distribution)**2
autocorrelations = []
for k in range(0, 40):
sum_k = 0
for j in range(n-k-1):
            sum_k += (distribution[j]-mean)*(distribution[j+k]-mean) #add the autocorrelation terms between t_N and t_N+k
autocorrelations.append(sum_k/((n)*variance))
return np.array(autocorrelations)
###Output
_____no_output_____
###Markdown
Simple Gibbs
###Code
def SimpleGibbs(yi, m, SSE, a = -1/2, b = 0, number_of_iterations = 1000, burnout = 30, covariances = False):
'''
Implements the simple (unblocked) Gibbs to sample from the target distribution, as described in the main work
Input: yi; m, number of values per class, here assumed to be equal for each class i;
SSE; a = -1/2, prior parameter; b = 0, prior parameter; number_of_iterations = 1000, iterations per chain;
burnout = 30, time used in warm up the chain, so that the burnout initial computations are discarded;
covariances = False, if True, returns the covariances between u and mu
Output: estimations for the expectations of sigma^2_theta, sigma^2_e, sigma^2_theta/(sigma^2_theta+sigma^2_e)
If covariances = True, also returns the covariances between u and mu
'''
sigmae_estimation = []
sigmat_estimation = []
correlation = []
it = 0
if covariances == True:
mu_estimation = []
u_estimation = []
q = len(yi)
M = q*m
y_bar = np.mean(yi)
mu = y_bar
u = yi - mu
#randomly initialize parameters
w1 = np.dot(u,u)
w2 = np.sum(m*(yi-u-mu)**2)
sigmat = invgamma.rvs(q/2 + a, scale = (w1/2))
sigmae = invgamma.rvs(M/2 + b, scale = ((w2 + SSE)/2))
for iterations in range(number_of_iterations+burnout):
#update u
umean = m*(yi-mu)/(m + sigmae/sigmat)
usigma = sigmae/(m + sigmae/sigmat)*np.identity(q)
u = multivariate_normal.rvs(mean = umean, cov = usigma)
#update mu
mumean = np.mean((yi-u))
musigma = sigmae/M
mu = np.random.normal(mumean, np.sqrt(musigma))
#update sigma
w1 = np.dot(u,u)
w2 = np.sum(m*(yi-u-mu)**2)
sigmat = invgamma.rvs(q/2 + a, scale = (w1/2))
sigmae = invgamma.rvs(M/2 + b, scale = ((w2 + SSE)/2))
#takes care of iterations, if above burnout, we consider the points as sampled, otherwise we discard them
it += 1
if it >= burnout:
sigmae_estimation.append(sigmae)
sigmat_estimation.append(sigmat)
correlation.append(sigmat/(sigmat+sigmae))
#estimate the covariances
if covariances == True:
u_estimation.append(u)
mu_estimation.append(mu)
if covariances == False:
return sigmae_estimation, sigmat_estimation, correlation
else:
u_estimation = np.array(u_estimation)
mu_estimation = np.array(mu_estimation)
mean_u = np.mean(u_estimation, axis = 0)
mean_mu = np.mean(mu_estimation)
corr_umu = (np.transpose(u_estimation - mean_u) @ (mu_estimation - mean_mu))/(number_of_iterations*np.std(u_estimation)*np.std(mu_estimation))
return sigmae_estimation, sigmat_estimation, correlation, corr_umu
y_measured = np.array([3.302,4.587,5.052,5.089,4.498,5.186,4.915,4.876,5.262,5.009,5.602,4.336,4.813])
SSE_measured = 14.711
start = time.time()
se, st, cor, correlarion = SimpleGibbs(y_measured, 3, SSE_measured, a = -1/2, b = 0, number_of_iterations = 697869, burnout = 30, covariances = True)
end = time.time()
print(end-start)
print('sigma_t= ', np.mean(st), CI(st), 'MCSE= ', inefficiency_calculator(st))
print('sigma_e= ', np.mean(se), CI(se), 'MCSE= ', inefficiency_calculator(se))
print('cor= ', np.mean(cor), CI(cor), 'MCSE= ', inefficiency_calculator(cor))
print('correlation =', correlarion)
###Output
sigma_t= 0.18939046438570237 (0.18803109212645291, 0.19074983664495182) MCSE= 4.218211650968421e-07
sigma_e= 0.6190332158832117 (0.6182157258240205, 0.6198507059424029) MCSE= 1.5255151373889477e-07
cor= 0.21221272803093258 (0.21083218106137075, 0.2135932750004944) MCSE= 4.35064787769678e-07
correlation = [-0.22296738 -0.2188134 -0.21871326 -0.21781792 -0.22130936 -0.2190927
-0.21824491 -0.21844392 -0.21995076 -0.21872663 -0.21903885 -0.21835638
-0.21933461]
###Markdown
For the traceplots of the simple method we do not calculate a CI since, as discussed in the main paper, we have no guarantee of geometric ergodicity of the chain and, consequently, of asymptotic normality
###Code
TracePlot(st, skip = 600, rang = [0.14, 0.22], interval = False)
###Output
_____no_output_____
###Markdown
Blocked Gibbs
###Code
def BlockedGibbs(yi, m, SSE, a = -1/2, b = 0, number_of_iterations = 1000, burnout = 30):
'''
Implements the blocked Gibbs to sample from the target distribution, as described in the main work
Input: yi; m, number of values per class, here assumed to be equal for each class i;
SSE; a = -1/2, prior parameter; b = 0, prior parameter; number_of_iterations = 1000, iterations per chain;
burnout = 30, time used in warm up the chain, so that the burnout initial computations are discarded;
    Output: estimations for the expectations of sigma^2_theta, sigma^2_e, sigma^2_theta/(sigma^2_theta+sigma^2_e)
'''
sigmae_estimation = []
sigmat_estimation = []
correlation = []
it = 0
q = len(yi)
M = q*m
y_bar = np.mean(yi)
theta = yi
mu = y_bar
eta = np.append(mu, theta)
#randomly initialize parameters
w1 = np.sum((theta-mu)**2)
w2 = np.sum(m*(yi-theta)**2)
sigmat = invgamma.rvs(q/2 + a, scale = (w1/2))
sigmae = invgamma.rvs(M/2 + b, scale = ((w2 + SSE)/2))
for iterations in range(number_of_iterations+burnout):
#update eta
Etheta = sigmae/(sigmae+m*sigmat)*y_bar+sigmat*m*yi/(sigmae+m*sigmat)
mean = np.append(y_bar, Etheta)
#calculates the covariance matrix of eta
vartheta = sigmae/(sigmae+m*sigmat)*(sigmat+sigmae/M)
covthetatheta = sigmae**2/(M*(sigmae+m*sigmat))
covthetamu = sigmae/M
cov = np.zeros([q+1,q+1])
cov[0,0] = (sigmae+m*sigmat)/M
cov[0,1:] = np.ones(q)*covthetamu
cov[1:,0] = np.ones(q)*covthetamu
cov[1:,1:] = np.diag((vartheta-covthetatheta)*np.ones(q))+covthetatheta #makes a matriz of diagonal vartheta and off-diaginal covthetatheta
#estimates eta
eta = multivariate_normal.rvs(mean = mean, cov = cov)
mu, theta = eta[0], eta[1:]
#update sigma
w1 = np.dot((theta-mu), (theta-mu))
w2 = m*np.dot((yi-theta),(yi-theta))
sigmat = invgamma.rvs(q/2 + a, scale = (w1/2))
sigmae = invgamma.rvs(M/2 + b, scale = ((w2 + SSE)/2))
#takes care of iterations, if above burnout, we consider the points as sampled, otherwise we discard them
it += 1
if it >= burnout:
sigmae_estimation.append(sigmae)
sigmat_estimation.append(sigmat)
correlation.append(sigmat/(sigmat+sigmae))
return sigmae_estimation, sigmat_estimation, correlation
start = time.time()
se_blocked, st_blocked, cor_blocked = BlockedGibbs(y_measured, 3, SSE_measured, a = -1/2, b = 0, number_of_iterations = 697869, burnout = 30)
end = time.time()
print(end-start)
print('sigma_t= ', np.mean(st_blocked), CI(st_blocked), 'MCSE= ', inefficiency_calculator(st_blocked))
print('sigma_e= ', np.mean(se_blocked), CI(se_blocked), 'MCSE= ', inefficiency_calculator(se_blocked))
print('cor= ', np.mean(cor_blocked), CI(cor_blocked), 'MCSE= ', inefficiency_calculator(cor_blocked))
TracePlot(st_blocked, skip = 600, rang = [0.14, 0.22], interval = True)
TracePlot(se_blocked, skip = 600, rang = [0.55,0.7], interval = True)
TracePlot(cor_blocked, skip = 600, rang = [0.15, 0.25], interval = True)
###Output
C:\Users\hlovi\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py:3372: RuntimeWarning: Mean of empty slice.
return _methods._mean(a, axis=axis, dtype=dtype,
C:\Users\hlovi\Anaconda3\lib\site-packages\numpy\core\_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
###Markdown
Comparison of the blocked and unblocked samplers Let us compare both methods, but for only 10,000 iterations and without warm-up, so as to really assess the convergence rate
###Code
_, st_comp, _ = SimpleGibbs(y_measured, 3, SSE_measured, a = -1/2, b = 0, number_of_iterations = 10000, burnout = 0)
_, st_blocked_comp, _ = BlockedGibbs(y_measured, 3, SSE_measured, a = -1/2, b = 0, number_of_iterations = 10000, burnout = 0)
TracePlot(st_comp, skip = 10, interval = False)
TracePlot(st_blocked_comp, skip = 10, interval = False)
fig, axs = plt.subplots(2, figsize=(10, 5))
axs[0].plot(TracePlot(st_comp, skip = 10, interval = False, subplot = True)[0], TracePlot(st_comp, skip = 10, interval = False, subplot = True)[1])
axs[0].plot(TracePlot(st_comp, skip = 10, interval = False, subplot = True)[0], TracePlot(st_comp, skip = 10, interval = False, subplot = True)[2], 'r--')
axs[0].set_title('Simple Gibbs Sampler')
axs[0].set_ylim([0.14,0.2])
axs[1].plot(TracePlot(st_blocked_comp, skip = 10, interval = False, subplot = True)[0], TracePlot(st_blocked_comp, skip = 10, interval = False, subplot = True)[1])
axs[1].plot(TracePlot(st_blocked_comp, skip = 10, interval = False, subplot = True)[0], TracePlot(st_blocked_comp, skip = 10, interval = False, subplot = True)[2], 'r--')
axs[1].set_title('Blocked Gibbs Sampler')
axs[1].set_ylim([0.14,0.2])
for ax in axs.flat:
ax.label_outer()
plt.show()
fig, axs = plt.subplots(2, figsize=(10, 5))
axs[0].bar(range(len(ACF(st_comp))), ACF(st_comp))
axs[0].set_title('Simple Gibbs Sampler')
axs[1].bar(range(len(ACF(st_blocked_comp))), ACF(st_blocked_comp))
axs[1].set_title('Blocked Gibbs Sampler')
for ax in axs.flat:
ax.label_outer()
def GelmanSimpleGibbs(yi, m, SSE, number_of_iterations = 1000, number_of_chains = 100):
'''
Calculates the Gelman coefficient \hat{R} for the simple (unblocked) Gibbs sampler
Input: yi; m, number of values per class, here assumed to be equal for each class i;
SSE; number_of_iterations = 1000, iterations per chain; number_of_chains = 100, number of chains used;
Output: multi-chain estimation of \hat{R}
'''
set_of_chains = np.array(SimpleGibbs(yi, m, SSE, number_of_iterations = number_of_iterations)[1])
for chain in range(number_of_chains-1):
new_chain = np.array(SimpleGibbs(yi, m, SSE, number_of_iterations = number_of_iterations)[1])
set_of_chains = np.vstack((set_of_chains, new_chain))
#takes the mean of all chains' entries
full_mean = np.mean(set_of_chains)
#computes the summation part of the definition of B
summation_b = 0
for index in range(number_of_iterations):
summation_b += (np.mean(set_of_chains[:,index]-full_mean))**2
#computes B
b = number_of_iterations/(number_of_chains-1)*summation_b
#computes the summation part of the definition of W
summation_w = 0
for index in range(number_of_iterations):
mean_index = np.mean(set_of_chains[:,index])
for chain in range(number_of_chains):
summation_w += (set_of_chains[chain,index]-mean_index)**2
#computes W
w = 1/(number_of_chains*(number_of_iterations-1))*summation_w
R_hat = np.sqrt(((1-1/number_of_iterations)*w+b/number_of_iterations)/w)
return R_hat
def GelmanBlockedGibbs(yi, m, SSE, number_of_iterations = 1000, number_of_chains = 100):
'''
Calculates the Gelman coefficient \hat{R} for the blocked Gibbs sampler
Input: yi; m, number of values per class, here assumed to be equal for each class i;
SSE; number_of_iterations = 1000, iterations per chain; number_of_chains = 100, number of chains used;
Output: multi-chain estimation of \hat{R}
'''
set_of_chains = np.array(BlockedGibbs(yi, m, SSE, number_of_iterations = number_of_iterations)[1])
for chain in range(number_of_chains-1):
new_chain = np.array(BlockedGibbs(yi, m, SSE, number_of_iterations = number_of_iterations)[1])
set_of_chains = np.vstack((set_of_chains, new_chain))
#takes the mean of all chains' entries
full_mean = np.mean(set_of_chains)
#computes the summation part of the definition of B
summation_b = 0
for index in range(number_of_iterations):
summation_b += (np.mean(set_of_chains[:,index]-full_mean))**2
#computes B
b = number_of_iterations/(number_of_chains-1)*summation_b
#computes the summation part of the definition of W
summation_w = 0
for index in range(number_of_iterations):
mean_index = np.mean(set_of_chains[:,index])
for chain in range(number_of_chains):
summation_w += (set_of_chains[chain,index]-mean_index)**2
#computes W
w = 1/(number_of_chains*(number_of_iterations-1))*summation_w
R_hat = np.sqrt(((1-1/number_of_iterations)*w+b/number_of_iterations)/w)
return R_hat
###Output
_____no_output_____
###Markdown
Values of \hat{R}<1.10 are often indications of good convergence for MCMC methods
###Code
GelmanSimpleGibbs(y_measured, 3, SSE_measured, number_of_chains = 100), GelmanBlockedGibbs(y_measured, 3, SSE_measured, number_of_chains = 100)
###Output
_____no_output_____
###Markdown
Blocked Gibbs Matrix Implementation Here we provide a matrix implementation of the blocked version of the sampler, which is nothing more than a translation of the R code available as a supplement to the work of Aixin Tan and James P. Hobert [2009]. This is done simply for a computational speed comparison between my own implementation and this more matrix-based code
###Code
def BlockedGibbsMatrix(yi, m, SSE, a = -1/2, b = 0, number_of_iterations = 1000, burnout = 30):
sigmae_estimation = []
sigmat_estimation = []
correlation = []
mean_estimation = []
it = 0
q = len(yi)
M = q*m
y_bar = np.mean(yi)
theta = yi
mu = y_bar
eta = np.append(theta, mu)
#randomly initialize parameters
matrix = np.append(np.diag(np.ones(q)), -(np.ones((q,1))), axis = 1)
mm = np.transpose(matrix) @ matrix
w1 = np.transpose(eta) @ mm @ eta
w2 = m*np.transpose(yi-theta) @ (yi-theta)
lt = np.random.gamma(q/2 + a, scale = 1/((w1/2)))
le = np.random.gamma(M/2 + b, scale = 1/(((w2 + SSE)/2)))
for iterations in range(number_of_iterations+burnout):
#update eta
t = M*lt*le/(lt+m*le)
d = np.sqrt(lt+m*le)
invL = np.diag(np.append(np.ones(q)/d,1/np.sqrt(t)))
invL[-1,:-1] = lt/(d**2*np.sqrt(t))*np.ones(q)
V = np.transpose(invL) @ invL
etha0 = le*m*V @ np.append(yi, 0)
eta = np.transpose(etha0)+np.transpose(invL) @ np.transpose(np.random.normal(size = q+1))
#update sigma
w1 = np.transpose(eta) @ mm @ eta
w2 = m*np.transpose(yi-eta[:-1]) @ (yi-eta[:-1])
lt = np.random.gamma(q/2 + a, scale = 1/((w1/2)))
le = np.random.gamma(M/2 + b, scale = 1/(((w2 + SSE)/2)))
#takes care of iterations, if above burnout, we consider the points as sampled, otherwise we discard them
it += 1
if it >= burnout:
se = 1/le
st = 1/lt
sigmae_estimation.append(se)
sigmat_estimation.append(st)
correlation.append(st/(st + se))
return sigmae_estimation, sigmat_estimation, correlation
start = time.time()
se_matrix, st_matrix, cor_matrix = BlockedGibbsMatrix(y_measured, 3, SSE_measured, a = -1/2, b = 0, number_of_iterations = 697869, burnout = 30)
end = time.time()
print(end-start)
###Output
41.855125427246094
###Markdown
It turns out that the matrix version is much faster than my implementation (about 3 times faster on my machine), but the estimates are still very similar
###Code
print('sigma_t= ', np.mean(st_matrix), CI(st_matrix), 'MCSE= ', inefficiency_calculator(st_matrix))
print('sigma_e= ', np.mean(se_matrix), CI(se_matrix), 'MCSE= ', inefficiency_calculator(se_matrix))
print('cor= ', np.mean(cor_matrix), CI(cor_matrix), 'MCSE= ', inefficiency_calculator(cor_matrix))
TracePlot(st_matrix, skip = 100, rang = [0.14, 0.22], interval = True)
###Output
C:\Users\hlovi\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py:3372: RuntimeWarning: Mean of empty slice.
return _methods._mean(a, axis=axis, dtype=dtype,
C:\Users\hlovi\Anaconda3\lib\site-packages\numpy\core\_methods.py:170: RuntimeWarning: invalid value encountered in double_scalars
ret = ret.dtype.type(ret / rcount)
|
deep-learning-with-pytorch.ipynb | ###Markdown
Data Types
###Code
torch.zeros(10, 2, dtype=torch.double)
torch.zeros(10, 2, dtype=torch.short)
torch.zeros(10, 2).double()
torch.zeros(10, 2).to(torch.short)
###Output
_____no_output_____
###Markdown
Indexing
###Code
points = torch.randn(10, 2)
points
points[0]
points[1:]
points[1:, 1]
points[(0, 2), (0, 1)]
###Output
_____no_output_____
###Markdown
NumPy interop
###Code
points_np = points.numpy()
points_np
points = torch.from_numpy(points_np)
points
!ls
###Output
Untitled.html datasets deep-learning-with-pytorch.ipynb storage
###Markdown
Storage
###Code
import os
!ls
os.makedirs('../storage/deep-learning-with-python')
!ls storage/deep-learning-with-python
stored = torch.randn(10, 4)
stored
torch.save(stored, '../storage/deep-learning-with-python/stored.t')
!ls storage/deep-learning-with-python
opened = torch.load('../storage/deep-learning-with-python/stored.t')
opened
opened.device
opened_gpu = opened.cuda()
###Output
_____no_output_____
###Markdown
Tensor API
###Code
tensor = torch.randn(10, 4)
tensor_transposed = tensor.transpose(0, 1)
tensor_transposed = torch.transpose(tensor, 0, 1)
a = torch.ones(4, 2)
a
a.zero_()
a
b = torch.ones(4, 2)
a + b
b + torch.randn(4, 2)
from_list = torch.tensor(list(range(9)))
from_list
from_list.size()
from_list.stride()
from_list.storage_offset()
a = from_list.view(3,3)
a[1,1]
a.storage_offset()
b = a[1:,1:]
b
b.size()
b.stride()
b.storage_offset()
c = torch.randn(10, 3)
c
c.tan()
torch.tan(c)
c.sqrt_()
###Output
_____no_output_____ |
Certification 1/Week5.2 - Diagonalisation and EigenBasis.ipynb | ###Markdown
Applying the transformation T n times: if C has the eigenvectors of T as its columns, then D = inv(C) @ T @ C is a diagonal matrix (equivalently T = C @ D @ inv(C)), and T\*\*n = C @ D\*\*n @ inv(C)
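A quick, self-contained numerical check of this identity (using the same T and C as in the cell below):
###Code
# Numerical check that T**n == C @ D**n @ inv(C) when D = inv(C) @ T @ C is diagonal.
import numpy as np
import numpy.linalg as nl

T = np.array([[6, -1], [2, 3]])
C = np.array([[1, 1], [1, 2]])   # columns are eigenvectors of T

D = nl.inv(C) @ T @ C            # diagonal matrix of eigenvalues
lhs = nl.matrix_power(T, 3)      # apply T three times
rhs = C @ nl.matrix_power(D, 3) @ nl.inv(C)
print(np.allclose(lhs, rhs))     # prints True
###Output
_____no_output_____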
###Code
def getDiagonal(T, C):
T = np.array(T)
C = np.array(C)
Cinv = nl.inv(C)
return Cinv @ T @ C
T = np.array([[6,-1], [2,3]])
C = np.array([[1,1], [1,2]])
D = getDiagonal(T, C)
D
getDiagonal([[2,7],[0,-1]], [[7,1], [-3,0]])
getDiagonal([[1,0], [2,-1]], [[1,0], [1,1]])
C = np.array([[1, 2], [0,1]])
nl.inv(C)
C @ np.array([[10, 0], [0, 10]]) @ nl.inv(C)
T = np.array([[6, -1], [2,3]])
T @ T @ T
T = np.array([[2,7], [0 , -1]])
T @ T @ T
T = np.array([[1,0], [2,-1]])
T ** 5 # Elementwise power, not matrix power, because T is a np.ndarray
T**5
T @ T @ T @ T @ T
nl.matrix_power(T, 5)
T = np.matrix([[1,0], [2,-1]])
T ** 5 # T is a np.matrix, so ** means matrix power here
###Output
_____no_output_____ |
jupyter_notebooks/docker_prediction.ipynb | ###Markdown
Load the prospective sample
###Code
data_to_predict = '/home/wentao/UNet-MRI-Reconstruction/data/prospectiveSample.mat'
f = h5py.File(data_to_predict, 'r')
input_data = np.array(f.get('imagesRecon'))
input_data.shape
###Output
_____no_output_____
###Markdown
Convert the data into json format and send an API request
###Code
model = 'UNet2D2D'
API = "http://localhost:8501/v1/models/{model}:predict".format(model=model)
data = json.dumps({"instances": input_data.tolist()})
response = requests.post(API, data=data)
response.status_code
print("The average time taken for each image is {}s".format(response.elapsed.total_seconds()/input_data.shape[0]))
prediction = np.array(response.json()['predictions'])
prediction.shape
plt.figure(1, figsize=(10,10))
plt.subplot(1, 2, 1)
plt.axis('off')
plt.title('Input')
plt.imshow(input_data[0, :, :, 0], cmap='gray')
plt.subplot(1, 2, 2)
plt.axis('off')
plt.title('Real-time Reconstructed Image')
plt.imshow(prediction[0, :, :, 0], cmap='gray')
plt.savefig('./images/docker_prediction_{model}.jpg'.format(model=model))
###Output
_____no_output_____
###Markdown
Compare different UNets UNet2D2D
###Code
# Call when the docker image is setup
model = 'UNet2D2D'
API = "http://localhost:8501/v1/models/{model}:predict".format(model=model)
data = json.dumps({"instances": input_data.tolist()})
response = requests.post(API, data=data)
print("The average time taken for each image is {}s".format(response.elapsed.total_seconds()/input_data.shape[0]))
prediction_2d2d = np.array(response.json()['predictions'])
###Output
The average time taken for each image is 0.1725556s
###Markdown
UNet2D1D
###Code
# Call when the docker image is setup
model = 'UNet2D1D'
API = "http://localhost:8501/v1/models/{model}:predict".format(model=model)
data = json.dumps({"instances": input_data.tolist()})
response = requests.post(API, data=data)
print("The average time taken for each image is {}s".format(response.elapsed.total_seconds()/input_data.shape[0]))
prediction_2d1d = np.array(response.json()['predictions'])
###Output
The average time taken for each image is 0.53887s
###Markdown
UNet3D
###Code
# Call when the docker image is setup
model = 'UNet3D'
API = "http://localhost:8501/v1/models/{model}:predict".format(model=model)
data = json.dumps({"instances": input_data.tolist()})
response = requests.post(API, data=data)
print("The average time taken for each image is {}s".format(response.elapsed.total_seconds()/input_data.shape[0]))
prediction_3d = np.array(response.json()['predictions'])
###Output
The average time taken for each image is 0.7656615s
###Markdown
UNet3D_old
###Code
# Call when the docker image is setup
model = 'UNet3D_old'
API = "http://localhost:8501/v1/models/{model}:predict".format(model=model)
data = json.dumps({"instances": input_data.tolist()})
response = requests.post(API, data=data)
print("The average time taken for each image is {}s".format(response.elapsed.total_seconds()/input_data.shape[0]))
prediction_3d_old = np.array(response.json()['predictions'])
###Output
The average time taken for each image is 0.8774094s
###Markdown
Plot the reconstructed images
###Code
import pickle
docker_prediction_result = {'UNet2D2D': prediction_2d2d, 'UNet2D1D': prediction_2d1d, 'UNet3D': prediction_3d, 'UNet3D_old': prediction_3d_old}
with open('docker_prediction_result.pkl', 'wb') as f:
pickle.dump(docker_prediction_result, f)
plt.figure(1, figsize=(20,20))
plt.subplot(2, 3, 1)
plt.axis('off')
plt.title('Input')
plt.imshow(input_data[0, :, :, 0], cmap='gray')
plt.subplot(2, 3, 2)
plt.axis('off')
plt.title('UNet2D2D')
plt.imshow(prediction_2d2d[0, :, :, 0], cmap='gray')
plt.subplot(2, 3, 3)
plt.axis('off')
plt.title('UNet2D1D')
plt.imshow(prediction_2d1d[0, :, :, 0], cmap='gray')
plt.subplot(2, 3, 5)
plt.axis('off')
plt.title('UNet3D')
plt.imshow(prediction_3d[0, :, :, 0], cmap='gray')
plt.subplot(2, 3, 6)
plt.axis('off')
plt.title('UNet3D_old')
plt.imshow(prediction_3d_old[0, :, :, 0], cmap='gray')
plt.savefig('./images/docker_prediction_all.jpg'.format(model=model), bbox_inches='tight')
###Output
_____no_output_____ |
Python_programming/15_data_structures.ipynb | ###Markdown
Tuples
###Code
# Tuples
# Creating an empty Tuple
Tuple1 = ()
print("Initial empty Tuple: ")
print(Tuple1)
# Creating a Tuple
# with the use of string
Tuple1 = ('Chilla', 'For')
print("\nTuple with the use of String: ")
print(Tuple1)
# Creating a Tuple with
# the use of list
list1 = [1, 2, 4, 5, 6]
print("\nTuple using List: ")
print(tuple(list1))
# Creating a Tuple
# with the use of built-in function
Tuple1 = tuple('Chilla')
print("\nTuple with the use of function: ")
print(Tuple1)
# Accessing Tuple
# with Indexing
Tuple1 = tuple("Chilla")
print("\nFirst element of Tuple: ")
print(Tuple1[0])
# Tuple unpacking
Tuple1 = ("Chilla", "For", "Python")
# This line unpack
# values of Tuple1
a, b, c = Tuple1
print("\nValues after unpacking: ")
print(a)
print(b)
print(c)
# Concatenation of tuples
Tuple1 = (0, 1, 2, 3)
Tuple2 = ('Chilla', 'in', 'Data science')
Tuple3 = Tuple1 + Tuple2
# Printing first Tuple
print("Tuple 1: ")
print(Tuple1)
# Printing Second Tuple
print("\nTuple2: ")
print(Tuple2)
# Printing Final Tuple
print("\nTuples after Concatenation: ")
print(Tuple3)
# Slicing of a Tuple
# Slicing of a Tuple
# with Numbers
Tuple1 = tuple('Chila in Data Science')
# Removing First element
print("Removal of First Element: ")
print(Tuple1[1:])
# Reversing the Tuple
print("\nTuple after sequence of Element is reversed: ")
print(Tuple1[::-1])
# Printing elements of a Range
print("\nPrinting elements between Range 4-9: ")
print(Tuple1[4:9])
# Deleting a Tuple
Tuple1 = (0, 1, 2, 3, 4)
del Tuple1
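# NOTE: Tuple1 no longer exists after del, so the print below raises a NameError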
print(Tuple1)
###Output
_____no_output_____
###Markdown
List
###Code
# Python program to demonstrate
# Creation of List
# Creating a List
List = []
print("Blank List: ")
print(List)
# Creating a List of numbers
List = [10, 20, 14]
print("\nList of numbers: ")
print(List)
# Creating a List of strings and accessing
# using index
List = ["Chilla", "For", "Data Science"]
print("\nList Items: ")
print(List[0])
print(List[2])
# Creating a Multi-Dimensional List
# (By Nesting a list inside a List)
List = [['Chilla', 'For'], ['Data Science']]
print("\nMulti-Dimensional List: ")
print(List)
# Creating a List with
# the use of Numbers
# (Having duplicate values)
List = [1, 2, 4, 4, 3, 3, 3, 6, 5]
print("\nList with the use of Numbers: ")
print(List)
# Creating a List with
# mixed type of values
# (Having numbers and strings)
List = [1, 2, 'Python', 4, 'For', 6, 'Begginer']
print("\nList with the use of Mixed Values: ")
print(List)
# Creating a List
List1 = []
print(len(List1))
# Creating a List of numbers
List2 = [10, 20, 14]
print(len(List2))
# Python program to demonstrate
# Addition of elements in a List
# Creating a List
List = []
print("Initial blank List: ")
print(List)
# Addition of Elements
# in the List
List.append(1)
List.append(2)
List.append(4)
print("\nList after Addition of Three elements: ")
print(List)
# Adding elements to the List
# using Iterator
for i in range(1, 4):
List.append(i)
print("\nList after Addition of elements from 1-3: ")
print(List)
# Adding Tuples to the List
List.append((5, 6))
print("\nList after Addition of a Tuple: ")
print(List)
# Addition of List to a List
List2 = ['For', 'Python']
List.append(List2)
print("\nList after Addition of a List: ")
print(List)
# Python program to demonstrate
# accessing of element from list
# Creating a List with
# the use of multiple values
List = ["Python", "For", "World"]
# accessing a element from the
# list using index number
print("Accessing a element from the list")
print(List[0])
print(List[2])
# Creating a Multi-Dimensional List
# (By Nesting a list inside a List)
List = [['Python', 'For'], ['DS']]
# accessing an element from the
# Multi-Dimensional List using
# index number
print("Accessing a element from a Multi-Dimensional list")
print(List[0][1])
print(List[1][0])
###Output
Accessing a element from the list
Python
World
Accessing a element from a Multi-Dimensional list
For
DS
###Markdown
Dictionary
###Code
# Creating a Dictionary
# with Integer Keys
Dict = {1: 'Python', 2: 'For', 3: 'All'}
print("\nDictionary with the use of Integer Keys: ")
print(Dict)
# Creating a Dictionary
# with Mixed keys
Dict = {'Name': 'Python', 1: [1, 2, 3, 4]}
print("\nDictionary with the use of Mixed Keys: ")
print(Dict)
# Creating an empty Dictionary
Dict = {}
print("Empty Dictionary: ")
print(Dict)
# Creating a Dictionary
# with dict() method
Dict = dict({1: 'Python', 2: 'For', 3:'all'})
print("\nDictionary with the use of dict(): ")
print(Dict)
# Creating a Dictionary
# with each item as a Pair
Dict = dict([(1, 'Python'), (2, 'For')])
print("\nDictionary with each item as a pair: ")
print(Dict)
# Creating a Nested Dictionary
# as shown in the below image
Dict = {1: 'Python', 2: 'For',
3:{'A' : 'Welcome', 'B' : 'To', 'C' : 'Python'}}
print(Dict)
# Creating an empty Dictionary
Dict = {}
print("Empty Dictionary: ")
print(Dict)
# Adding elements one at a time
Dict[0] = 'Python'
Dict[2] = 'For'
Dict[3] = 1
print("\nDictionary after adding 3 elements: ")
print(Dict)
# Adding set of values
# to a single Key
Dict['Value_set'] = 2, 3, 4
print("\nDictionary after adding 3 elements: ")
print(Dict)
# Updating existing Key's Value
Dict[2] = 'Welcome'
print("\nUpdated key value: ")
print(Dict)
# Adding Nested Key value to Dictionary
Dict[5] = {'Nested' :{'1' : 'Begginer', '2' : 'Python'}}
print("\nAdding a Nested Key: ")
print(Dict)
# Python program to demonstrate
# accessing a element from a Dictionary
# Creating a Dictionary
Dict = {"Course": 'Python', 'name': 'For', 3: 'Students'}
# accessing a element using key
print("Accessing a element using key:")
print(Dict['name'])
# accessing a element using key
print("Accessing a element using key:")
print(Dict['Course'])
# Creating a Dictionary
Dict = {1: 'Python', 'name': 'For', 3: 'Name'}
# Deleting entire Dictionary
Dict.clear()
print("\nDeleting Entire Dictionary: ")
print(Dict)
###Output
Deleting Entire Dictionary:
{}
###Markdown
Sets
###Code
# Python program to demonstrate
# Creation of Set in Python
# Creating a Set
set1 = set()
print("Initial blank Set: ")
print(set1)
# Creating a Set with
# the use of a String
set1 = set("codetolive")
print("\nSet with the use of String: ")
print(set1)
# Creating a Set with
# the use of Constructor
# (Using object to Store String)
String = 'pythontobreath'
set1 = set(String)
print("\nSet with the use of an Object: " )
print(set1)
# Creating a Set with
# the use of a List
set1 = set(["python", "For", "students"])
print("\nSet with the use of List: ")
print(set1)
# Creating a Set with
# a List of Numbers
# (Having duplicate values)
set1 = set([1, 2, 4, 4, 3, 3, 3, 6, 5])
print("\nSet with the use of Numbers: ")
print(set1)
# Creating a Set with
# a mixed type of values
# (Having numbers and strings)
set1 = set([1, 2, 'python', 4, 'For', 6, 'Students'])
print("\nSet with the use of Mixed Values")
print(set1)
# Python program to demonstrate
# Addition of elements in a Set
# Creating a Set
set1 = set()
print("Initial blank Set: ")
print(set1)
# Adding element and tuple to the Set
set1.add(8)
set1.add(9)
set1.add((6,7))
print("\nSet after Addition of Three elements: ")
print(set1)
# Adding elements to the Set
# using Iterator
for i in range(1, 6):
set1.add(i)
print("\nSet after Addition of elements from 1-5: ")
print(set1)
# Addition of elements in a Set
# Addition of elements to the Set
# using Update function
set1 = set([ 4, 5, (6, 7)])
set1.update([10, 11])
print("\nSet after Addition of elements using Update: ")
print(set1)
# Accessing of elements in a set
# Creating a set
set1 = set(["Python", "For", "Students"])
print("\nInitial set")
print(set1)
# Accessing element using
# for loop
print("\nElements of set: ")
for i in set1:
print(i, end=" ")
# Checking the element
# using in keyword
print("Python" in set1)
# Deletion of elements in a Set
# Creating a Set
set1 = set([1, 2, 3, 4, 5, 6,
7, 8, 9, 10, 11, 12])
print("Initial Set: ")
print(set1)
# Removing element from the
# Set using the pop() method
set1.pop()
print("\nSet after popping an element: ")
print(set1)
# Deletion of elements in a Set
# Creating a Set
set1 = set([1, 2, 3, 4, 5, 6,
7, 8, 9, 10, 11, 12])
print("Initial Set: ")
print(set1)
# Removing elements from Set
# using Remove() method
set1.remove(5)
set1.remove(6)
print("\nSet after Removal of two elements: ")
print(set1)
# Removing elements from Set
# using Discard() method
set1.discard(8)
set1.discard(9)
print("\nSet after Discarding two elements: ")
print(set1)
# Removing elements from Set
# using iterator method
for i in range(1, 5):
set1.remove(i)
print("\nSet after Removing a range of elements: ")
print(set1)
#Creating a set
set1 = set([1,2,3,4,5])
print("\n Initial set: ")
print(set1)
# Removing all the elements from
# Set using clear() method
set1.clear()
print("\nSet after clearing all the elements: ")
print(set1)
###Output
Initial set:
{1, 2, 3, 4, 5}
Set after clearing all the elements:
set()
|
implementation/12Z Model.ipynb | ###Markdown
12Z Model Scott Burgholzer MSDS Capstone Project Fall 2019 Import all Python Libraries and set up seed value
###Code
import os
os.environ['PYTHONHASHSEED']=str(0)
import random
random.seed(0)
from numpy.random import seed
seed(0)
import tensorflow as tf
print(tf.__version__)
from tensorflow import keras
tf.random.set_seed(0)
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
from tensorflow.keras.regularizers import l2, l1, l1_l2
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import tensorflow as tf
from tensorflow import keras
#tf.keras.wrappers.skikit_learn.KerasClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
###Output
2.0.0-beta1
###Markdown
Get the data and split into Train/Test sets
###Code
data = pd.read_csv(r'D:\MSDS-Capstone-Project\implementation\Data\12Z-randomized.csv')
data.rename( columns={'Unnamed: 0':'Date'}, inplace=True )
data['Date'] = pd.to_datetime(data['Date'])
data.set_index('Date', inplace=True)
data = data.sample(frac=1)
data.loc[data.COUNT < 5, 'countBinary2'] = 0
data.loc[data.COUNT >= 5, 'countBinary2'] = 1
data
# creating input featues and target variables
X = data.iloc[:,0:26]
y = data.iloc[:,28]
X.shape
X.tail()
y.value_counts()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_train)
X_Train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
X_val = scaler.transform(X_val)
###Output
_____no_output_____
###Markdown
Function to plot the losses
###Code
def plot_history(hist):
fig = plt.figure(figsize=(6,4))
    # summarize history for loss (use the hist argument rather than the global history)
    plt.plot(hist.history['loss'])
    plt.plot(hist.history['val_loss'], 'g--')
    plt.title('Classification Model Loss')
    plt.ylabel('Binary Crossentropy')
    plt.xlabel('Epoch')
    plt.legend(['Training Loss', 'Validation Loss'], loc='upper right')
    print("Loss after final iteration: ", hist.history['val_loss'][-1])
plt.show()
###Output
_____no_output_____
###Markdown
Machine Learning Models Attempt 1
###Code
model = keras.Sequential()
model.add(keras.layers.Dense(units=26, activation='relu', kernel_initializer=keras.initializers.he_uniform(seed=0),
kernel_regularizer=l1_l2(0.001), input_shape=(26,)))
model.add(keras.layers.Dense(units=26, activation='relu', kernel_regularizer=l1_l2(0.001)))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(units=1, activation='sigmoid', kernel_regularizer=l1_l2(0.001)))
model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.SGD(learning_rate = 0.001, momentum=0.5),
metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs = 10, batch_size=6500)
eval_model=model.evaluate(X_train, y_train)
print(eval_model)
model.evaluate(X_test, y_test, verbose=0)
plot_history(history)
predictions = pd.DataFrame(model.predict(X_test))
predictions[predictions > 0.5] = 1 #'5 or more Tornadoes'
predictions[predictions <= 0.5] = 0 #'Less than 5 Tornadoes'
print('accuracy', accuracy_score(predictions, y_test))
print('confusion matrix\n', confusion_matrix(predictions,y_test))
print(classification_report(predictions, y_test))
###Output
accuracy 0.6890887759092686
confusion matrix
[[1596 141]
[ 654 166]]
precision recall f1-score support
0.0 0.71 0.92 0.80 1737
1.0 0.54 0.20 0.29 820
micro avg 0.69 0.69 0.69 2557
macro avg 0.63 0.56 0.55 2557
weighted avg 0.66 0.69 0.64 2557
###Markdown
Attempt 2, using Adadelta optimizer
###Code
model = keras.Sequential()
model.add(keras.layers.Dense(units=26, activation='relu', kernel_initializer=keras.initializers.he_uniform(seed=0),
kernel_regularizer=l1_l2(0.001), input_shape=(26,)))
model.add(keras.layers.Dense(units=26, activation='relu', kernel_regularizer=l1_l2(0.001)))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(units=1, activation='sigmoid', kernel_regularizer=l1_l2(0.001)))
model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs = 10, batch_size=6500)
eval_model=model.evaluate(X_train, y_train)
print(eval_model)
model.evaluate(X_test, y_test, verbose=0)
plot_history(history)
predictions = pd.DataFrame(model.predict(X_test))
predictions[predictions > 0.5] = 1 #'5 or more Tornadoes'
predictions[predictions <= 0.5] = 0 #'Less than 5 Tornadoes'
print('accuracy', accuracy_score(predictions, y_test))
print('confusion matrix\n', confusion_matrix(predictions,y_test))
print(classification_report(predictions, y_test))
###Output
accuracy 0.7782557684786859
confusion matrix
[[1960 277]
[ 290 30]]
precision recall f1-score support
0.0 0.87 0.88 0.87 2237
1.0 0.10 0.09 0.10 320
micro avg 0.78 0.78 0.78 2557
macro avg 0.48 0.48 0.48 2557
weighted avg 0.77 0.78 0.78 2557
###Markdown
Attempt 3, Adagrad
###Code
model = keras.Sequential()
model.add(keras.layers.Dense(units=26, activation='relu', kernel_initializer=keras.initializers.he_uniform(seed=0),
kernel_regularizer=l1_l2(0.001), input_shape=(26,)))
model.add(keras.layers.Dense(units=26, activation='relu', kernel_regularizer=l1_l2(0.001)))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(units=1, activation='sigmoid', kernel_regularizer=l1_l2(0.001)))
model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adagrad(), metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs = 10, batch_size=6500)
eval_model=model.evaluate(X_train, y_train)
print(eval_model)
model.evaluate(X_test, y_test, verbose=0)
plot_history(history)
predictions = pd.DataFrame(model.predict(X_test))
predictions[predictions > 0.5] = 1 #'5 or more Tornadoes'
predictions[predictions <= 0.5] = 0 #'Less than 5 Tornadoes'
print('accuracy', accuracy_score(predictions, y_test))
print('confusion matrix\n', confusion_matrix(predictions,y_test))
print(classification_report(predictions, y_test))
###Output
accuracy 0.43840438013296834
confusion matrix
[[ 923 109]
[1327 198]]
precision recall f1-score support
0.0 0.41 0.89 0.56 1032
1.0 0.64 0.13 0.22 1525
micro avg 0.44 0.44 0.44 2557
macro avg 0.53 0.51 0.39 2557
weighted avg 0.55 0.44 0.36 2557
###Markdown
Attempt 4, Adam
###Code
model = keras.Sequential()
model.add(keras.layers.Dense(units=26, activation='relu', kernel_initializer=keras.initializers.he_uniform(seed=0),
kernel_regularizer=l1_l2(0.001), input_shape=(26,)))
model.add(keras.layers.Dense(units=26, activation='relu', kernel_regularizer=l1_l2(0.001)))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(units=1, activation='sigmoid', kernel_regularizer=l1_l2(0.001)))
model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Adam(learning_rate=0.001),
metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs = 10, batch_size=6500)
eval_model=model.evaluate(X_train, y_train)
print(eval_model)
model.evaluate(X_test, y_test, verbose=0)
plot_history(history)
predictions = pd.DataFrame(model.predict(X_test))
#predictions
predictions[predictions > 0.5] = 1 #'5 or more Tornadoes'
predictions[predictions <= 0.5] = 0 #'Less than 5 Tornadoes'
print('accuracy', accuracy_score(predictions, y_test))
print('confusion matrix\n', confusion_matrix(predictions,y_test))
print(classification_report(predictions, y_test))
###Output
accuracy 0.718420023464998
confusion matrix
[[1721 191]
[ 529 116]]
precision recall f1-score support
0.0 0.76 0.90 0.83 1912
1.0 0.38 0.18 0.24 645
micro avg 0.72 0.72 0.72 2557
macro avg 0.57 0.54 0.54 2557
weighted avg 0.67 0.72 0.68 2557
###Markdown
Attempt 5, NAdam
###Code
model = keras.Sequential()
model.add(keras.layers.Dense(units=26, activation='relu', kernel_initializer=keras.initializers.he_uniform(seed=0),
kernel_regularizer=l1_l2(0.001), input_shape=(26,)))
model.add(keras.layers.Dense(units=26, activation='relu', kernel_regularizer=l1_l2(0.001)))
model.add(keras.layers.Dropout(0.2))
model.add(keras.layers.Dense(units=1, activation='sigmoid', kernel_regularizer=l1_l2(0.001)))
model.compile(loss='binary_crossentropy', optimizer=keras.optimizers.Nadam(learning_rate=0.003),
metrics=['accuracy'])
history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs = 10, batch_size=6500)
eval_model=model.evaluate(X_train, y_train)
print(eval_model)
model.evaluate(X_test, y_test, verbose=0)
plot_history(history)
predictions = pd.DataFrame(model.predict(X_test))
#predictions
predictions[predictions > 0.5] = 1 #'5 or more Tornadoes'
predictions[predictions <= 0.5] = 0 #'Less than 5 Tornadoes'
print('accuracy', accuracy_score(predictions, y_test))
print('confusion matrix\n', confusion_matrix(predictions,y_test))
print(classification_report(predictions, y_test))
###Output
accuracy 0.7328901055924912
confusion matrix
[[1866 299]
[ 384 8]]
precision recall f1-score support
0.0 0.83 0.86 0.85 2165
1.0 0.03 0.02 0.02 392
micro avg 0.73 0.73 0.73 2557
macro avg 0.43 0.44 0.43 2557
weighted avg 0.71 0.73 0.72 2557
|
ipynb/15_Pooling.ipynb | ###Markdown
Pooling Data and Panel Data If you come here without expecting Japanese, please click [Google translated version](https://translate.google.com/translate?hl=&sl=ja&tl=en&u=https%3A%2F%2Fpy4etrics.github.io%2F15_Pooling.html) in English or the language of your choice.---
###Code
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
import wooldridge
from scipy.stats import t
###Output
_____no_output_____
###Markdown
Explanation We consider independently distributed pooling data (pooled cross sections) and panel data. Both combine features of cross-sectional data and time-series data. ---**Independently distributed pooling data** have the following features.* Over some period (for example, between 2000 and 2010), observational units (for example, consumers or firms) are randomly sampled in each period. Hence the observational units change as time (the year, for annual data) changes.* Because time changes, the observations are not necessarily **identically distributed** (i.e., the distribution may change over time). A Japanese example is the [Labour Force Survey](http://www.stat.go.jp/data/roudou/index.html).<Advantages of using pooling data>* Compared with a single cross section the number of observations increases, so more precise estimators and test statistics can be obtained.* Because samples are drawn from independent distributions along the time axis, there is no autocorrelation in the error term. ---**Panel data** have the following features.* Observational units (for example, consumers or firms) are randomly sampled at the initial point in time, and the same units are then observed along the time axis.* The fact that the observational units are the same is what distinguishes panel data from independently distributed pooling data.<Why use panel data>As an example, suppose we want to estimate the effect of public expenditure carried out in each prefecture on employment within the prefecture.* Observational units: 47 prefectures $i=1,2,...,47$* Time-series range: 2000-2020 $t=2000,2001,...,2020$* Variables: employment within the prefecture ($L$), public expenditure ($G$), and $x$ for "other variables". Since the data have the two dimensions of 47 prefectures and time, the following estimation approaches could be considered.* Run a cross-section analysis year by year. $$y_i = \beta_{0} + \beta_{1} G_i + \beta_{2}x_i + u_i$$ However, each such estimation ignores the effects of changes in the underlying factors over the 20 years and fails to capture the dynamic aspects of public expenditure.* Run a time-series estimation for each of the 47 prefectures. $$y_t = \beta_{0} + \beta_{1} G_t + \beta_{2}x_t+ u_t$$ However, each such estimation ignores the influence of the other prefectures even though they are regions of the same country. In this way, using data along only one axis of two-dimensional data may ignore some influences. In other words, by using panel data and exploiting the information obtained from the dynamic data on different observational units, we can obtain more precise estimators and test statistics. Independently distributed pooling data Points to keep in mind when handling this type of data:* Even though there is a time-series dimension, we implicitly assume that the relationship between the dependent variable and at least some of the explanatory variables is time-invariant.* Because of the time-series element, the variables are not necessarily drawn from the same distribution. For example, in many economies it is more reasonable to think that the distributions of wages and education levels have changed. Time dummies are used to deal with this problem. We illustrate the estimation method using the `CPS78_85` data.
###Code
# load the data
cps = wooldridge.data('CPS78_85')
# check the contents of the data
wooldridge.data('CPS78_85', description=True)
###Output
name of dataset: cps78_85
no of variables: 15
no of observations: 1084
+----------+-----------------------+
| variable | label |
+----------+-----------------------+
| educ | years of schooling |
| south | =1 if live in south |
| nonwhite | =1 if nonwhite |
| female | =1 if female |
| married | =1 if married |
| exper | age - educ - 6 |
| expersq | exper^2 |
| union | =1 if belong to union |
| lwage | log hourly wage |
| age | in years |
| year | 78 or 85 |
| y85 | =1 if year == 85 |
| y85fem | y85*female |
| y85educ | y85*educ |
| y85union | y85*union |
+----------+-----------------------+
Professor Henry Farber, now at Princeton University, compiled these
data from the 1978 and 1985 Current Population Surveys. Professor
Farber kindly provided these data when we were colleagues at MIT.
###Markdown
This dataset contains variables for 1978 and 1985, and the dummy variables include the following.* `y85`: time dummy for 1985* `female`: dummy for gender* `union`: dummy for union membership. Using these, we examine the relationship between wages and education.
###Code
# regression analysis
formula = 'lwage ~ y85 + educ + female + \
y85:educ + y85:female + \
exper + I((exper**2)/100) + union'
result = ols(formula, cps).fit()
###Output
_____no_output_____
###Markdown
(Comments)* Since the part `y85 + educ + female + y85:educ + y85:female` is long, it can be abbreviated as `y85*(educ+female)`; the result is the same.* `lwage` is the log of the nominal wage. In this case it is more natural to think in terms of the real wage, so the effect of inflation needs to be removed. Let the consumer price indices of 1978 and 1985 be $p_{78}$ and $p_{85}$ respectively; then the ratio $P\equiv p_{85}/p_{78}$ can be thought of as the variable capturing inflation over that period. Using $P$, the 1985 wage can be converted into a real wage according to the following equation. $$ \ln\left(\text{real wage}_{85}\right) = \ln\left(\frac{\text{nominal wage}_{85}}{P}\right) = \ln\left(\text{nominal wage}_{85}\right)-\ln(P) = \text{lwage in 1985}- \ln(P) $$ This equation shows that $\ln(P)$ is absorbed into the constant term on the right-hand side of the regression. That is, without changing the equation above, the coefficient on `educ` can be interpreted as the return to education measured in real wages, and `female` as the gender wage gap in real wages.
###Code
print(result.summary().tables[1])
###Output
=========================================================================================
coef std err t P>|t| [0.025 0.975]
-----------------------------------------------------------------------------------------
Intercept 0.4589 0.093 4.911 0.000 0.276 0.642
y85 0.1178 0.124 0.952 0.341 -0.125 0.361
educ 0.0747 0.007 11.192 0.000 0.062 0.088
female -0.3167 0.037 -8.648 0.000 -0.389 -0.245
y85:educ 0.0185 0.009 1.974 0.049 0.000 0.037
y85:female 0.0851 0.051 1.658 0.098 -0.016 0.186
exper 0.0296 0.004 8.293 0.000 0.023 0.037
I((exper ** 2) / 100) -0.0399 0.008 -5.151 0.000 -0.055 -0.025
union 0.2021 0.030 6.672 0.000 0.143 0.262
=========================================================================================
###Markdown
Return to education* The coefficient on `educ`, 0.0747, is the return to one year of education in 1978 (about 7.5%). Its statistical significance is very high (p-value = 0).* The coefficient on `y85:educ`, 0.0185, represents the difference between the 1978 and 1985 values. The return in 1985 is 1.85 percentage points higher, and the null hypothesis (the returns in 1978 and 1985 are the same) can be rejected at the 5% level. The return to one year of education in 1985 is 0.0747+0.0185=0.0932 (about 9.3%). Gender wage gap* The coefficient on `female`, `-0.3167`, is the 1978 value. That is, women's wages are about 32% lower than men's, and the statistical significance is very high (p-value = 0).* The coefficient on `y85:female`, 0.0851, represents the difference between the 1978 and 1985 values, meaning that the wage gap improved by about 8.5% in 1985.* The p-value of `y85:female`, 0.098, is for a two-sided test; the null hypothesis (the wage gap has not changed) can be rejected at the 10% level but not at the 5% level.* On the other hand, `y85:female` is positive, and the question is whether the improvement in women's wages is statistically significant. Taking this into account, we conduct a one-sided test. * $H_0$: the coefficient on `y85:female` $=0$ * $H_a$: the coefficient on `y85:female` $>0$
###Code
t_value = result.tvalues['y85:female'] # t-value of y85:female
dof = result.df_resid # degrees of freedom n-k-1
###Output
_____no_output_____
###Markdown
We compute it using `scipy.stats`.
###Code
1-t.cdf(t_value, dof) # compute the p-value
###Output
_____no_output_____
###Markdown
In the one-sided test, the null hypothesis can be rejected at the 5% level. Application of pooling data: difference-in-differences analysis Explanation Using difference-in-differences (DiD) analysis, the effect of a policy can be examined with pooling data. <Basic idea>* As an example, consider the effect of building a garbage incinerator on nearby house prices.* `t`=0: the year before the policy is implemented* `t`=1: the year after the policy is implemented* `D`: 1 if the house is near the incinerator, 0 otherwise (a distance dummy variable)* `y`: house price* Assumption: if the incinerator had not been built, all house prices in the data would have increased at the same rate.* Estimate the following equation separately before and after the policy $$y_t = \alpha_t + \gamma_t D + u\qquad t=0,1$$ * $\hat{y}_0^{\text{far}}=\hat{\alpha}_0$: average price of houses located far away before the policy * $\hat{y}_0^{\text{near}}=\hat{\alpha}_0+\hat{\gamma}_0$: average price of houses located nearby before the policy * $\hat{\gamma}_0$: the **difference** between nearby and faraway house prices before the policy * $\hat{y}_1^{\text{far}}=\hat{\alpha}_1$: average price of houses located far away after the policy * $\hat{y}_1^{\text{near}}=\hat{\alpha}_1+\hat{\gamma}_1$: average price of houses located nearby after the policy * $\hat{\gamma}_1$: the **difference** between nearby and faraway house prices after the policy* Interpretation (see the figure below): * $\hat{\gamma}_0$ shows the difference in house prices before the policy. * $\hat{\gamma}_1$ shows the difference in house prices after the policy. * $\left(\hat{\gamma}_1-\hat{\gamma}_0\right)$ shows the difference between the post-policy and pre-policy differences in house prices. The change in the price of nearby houses can be assessed through the change in this "difference in differences". If this difference in differences is `0` (i.e., the difference stayed the same), house prices can be interpreted as unaffected. If it is negative (i.e., the difference changed), the prices of nearby houses can be considered to have fallen. ```{image} ./images/did.png:scale: 30%:align: center``` From the discussion above, the following points follow.* As shown in the figure above, the change in house prices can be captured by the following "difference in differences". $$ \begin{align*} \hat{\gamma}_1-\hat{\gamma}_0&=\hat{y}_1^{\text{near}}-\hat{y}_0^{\text{near}}-\left(\hat{\alpha}_1-\hat{\alpha}_0\right) \\ &=\left(\hat{y}_1^{\text{near}}-\hat{y}_0^{\text{near}}\right)-\left(\hat{y}_1^{\text{far}}-\hat{y}_0^{\text{far}}\right) \\ &=\left(\hat{y}_1^{\text{near}}-\hat{y}_1^{\text{far}}\right)-\left(\hat{y}_0^{\text{near}}-\hat{y}_0^{\text{far}}\right) \end{align*} $$* An incorrect calculation of the change in house prices: * $\hat{y}_1^{\text{near}}-\hat{y}_0^{\text{near}}=\left(\hat{\alpha}_1-\hat{\alpha}_0\right)+\left(\hat{\gamma}_1-\hat{\gamma}_0\right)$ * $\left(\hat{\alpha}_1-\hat{\alpha}_0\right)$ captures the change over time in the prices of houses located far away, which are (by assumption) not affected by the incinerator; it is a price change that occurs regardless of the incinerator. If this part is not removed, the estimator of the incinerator's effect is biased. ---<`DiD` estimation method>* Use ordinary `OLS`, but introduce a time dummy variable. * `T`: 0 if `t`=0 and 1 if `t`=1* Estimation equation: $$ y=\beta_0+\delta_0T + \beta_1D + \delta_1TD + u $$ <There are four cases depending on the values of the dummy variables>1. If `T`=`D`=0 $$ y=\beta_0 + u $$ $\hat{y}_{0}^{\text{far}}\equiv\hat{\beta}_0$: average price of faraway houses before the policy 1. If `T`=0 and `D`=1 $$ y=\beta_0 + \beta_1D + u $$ $\hat{y}_{0}^{\text{near}}\equiv\hat{\beta}_0+\hat{\beta}_1$: average price of nearby houses before the policy 1. If `T`=1 and `D`=0 $$ y=\beta_0 + \delta_0T + u $$ $\hat{y}_{1}^{\text{far}}\equiv\hat{\beta}_0+\hat{\delta}_0$: average price of faraway houses after the policy 1. If `T`=`D`=1 $$ y=\beta_0 + \delta_0T + \beta_1D + \delta_1TD + u $$ $\hat{y}_{1}^{\text{near}}\equiv\hat{\beta}_0+\hat{\delta}_0+\hat{\beta}_1+\hat{\delta}_1$: average price of nearby houses after the policy The quantity of interest here is the estimate of $\hat{\delta}_1$ (whether it is negative) and its statistical significance. Using these definitions, the following identity can be confirmed. $$\hat{\delta}_1=\left(\hat{y}_1^{\text{near}}-\hat{y}_1^{\text{far}}\right)-\left(\hat{y}_0^{\text{near}}-\hat{y}_0^{\text{far}}\right)$$ This is the same as $\hat{\gamma}_1-\hat{\gamma}_0$ in the equation derived above. `DiD` estimation Using the `kielmc` data, we actually estimate the effect of the incinerator construction on nearby house prices.
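As a quick check of this identity (a sketch only; it loads `kielmc` here even though the same data are loaded again in the next cell), the difference in differences of the four group means of `rprice` should coincide with the `nearinc:y81` coefficient estimated below, since the regression with a full set of dummies and their interaction is saturated:
###Code
# Sketch: four group means of rprice by (y81, nearinc) and their difference in
# differences; for the saturated model this equals the OLS estimate of nearinc:y81.
import wooldridge

df = wooldridge.data('kielmc')
means = df.groupby(['y81', 'nearinc'])['rprice'].mean()

did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(did)
###Output
_____no_output_____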
###Code
kielmc = wooldridge.data('kielmc')
wooldridge.data('kielmc', description=True)
formula = 'rprice ~ nearinc + y81 + nearinc:y81'
result = ols(formula, data=kielmc).fit()
###Output
_____no_output_____
###Markdown
(Comment)* The right-hand side can be abbreviated as `nearinc * y81`.
###Code
print(result.summary().tables[1])
###Output
===============================================================================
coef std err t P>|t| [0.025 0.975]
-------------------------------------------------------------------------------
Intercept 8.252e+04 2726.910 30.260 0.000 7.72e+04 8.79e+04
nearinc -1.882e+04 4875.322 -3.861 0.000 -2.84e+04 -9232.293
y81 1.879e+04 4050.065 4.640 0.000 1.08e+04 2.68e+04
nearinc:y81 -1.186e+04 7456.646 -1.591 0.113 -2.65e+04 2806.867
===============================================================================
###Markdown
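As a quick sanity check of the identity $\hat{\delta}_1=\left(\hat{y}_1^{\text{near}}-\hat{y}_1^{\text{far}}\right)-\left(\hat{y}_0^{\text{near}}-\hat{y}_0^{\text{far}}\right)$, the difference-in-differences of the four group means can be computed directly from the data. This small sketch only uses the `rprice`, `nearinc` and `y81` columns loaded above; because the regression is fully saturated in the two dummies, it reproduces the `nearinc:y81` coefficient.

```python
# Sketch: the DiD of the four group means equals the interaction coefficient
means = kielmc.groupby(['y81', 'nearinc'])['rprice'].mean()
did = (means.loc[(1, 1)] - means.loc[(1, 0)]) - (means.loc[(0, 1)] - means.loc[(0, 0)])
print(did)  # approximately -1.186e+04, matching the nearinc:y81 estimate above
```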
* The coefficient on `nearinc:y81` is about -11860, and the null hypothesis (the coefficient is 0) cannot be rejected even at the 10% level in a two-sided test.* `nearinc:y81` is negative and we want to test whether house prices fell, so we carry out the following one-sided test. * $H_0$: coefficient on `nearinc:y81` $=0$ * $H_a$: coefficient on `nearinc:y81` $<0$
###Code
t_value = result.tvalues['nearinc:y81'] # t-value
dof = result.df_resid # degrees of freedom n-k-1
t.cdf(t_value, dof) # compute the p-value
###Output
_____no_output_____
###Markdown
In the one-sided test, the null hypothesis can be rejected at the 10% level but not at the 5% level, so there is some statistical support for a negative effect. At the same time, two remarks apply to the regression above.* It would be more appropriate to put the log of the house price on the left-hand side; this lets us estimate the price change in percentage terms and is also consistent with a real-price interpretation.* Other variables that affect house prices may be omitted.---Taking these two points into account, we re-estimate. First, use `NumPy` to take the log of the house price and run the regression.
###Code
formula_1 = 'np.log(rprice) ~ nearinc * y81'
result_1 = ols(formula_1, data=kielmc).fit()
print(result_1.summary().tables[1])
###Output
===============================================================================
coef std err t P>|t| [0.025 0.975]
-------------------------------------------------------------------------------
Intercept 11.2854 0.031 369.839 0.000 11.225 11.345
nearinc -0.3399 0.055 -6.231 0.000 -0.447 -0.233
y81 0.1931 0.045 4.261 0.000 0.104 0.282
nearinc:y81 -0.0626 0.083 -0.751 0.453 -0.227 0.102
===============================================================================
###Markdown
(Result)* The statistical significance of the `nearinc:y81` coefficient has not particularly improved. Next, we add variables that are likely to affect house prices.
###Code
formula_2 = 'np.log(rprice) ~ nearinc * y81 + age + I(age**2) + \
np.log(intst) + np.log(land) + np.log(area) + rooms + baths'
result_2 = ols(formula_2, data=kielmc).fit()
print(result_2.summary().tables[1])
###Output
=================================================================================
coef std err t P>|t| [0.025 0.975]
---------------------------------------------------------------------------------
Intercept 7.6517 0.416 18.399 0.000 6.833 8.470
nearinc 0.0322 0.047 0.679 0.498 -0.061 0.126
y81 0.1621 0.028 5.687 0.000 0.106 0.218
nearinc:y81 -0.1315 0.052 -2.531 0.012 -0.234 -0.029
age -0.0084 0.001 -5.924 0.000 -0.011 -0.006
I(age ** 2) 3.763e-05 8.67e-06 4.342 0.000 2.06e-05 5.47e-05
np.log(intst) -0.0614 0.032 -1.950 0.052 -0.123 0.001
np.log(land) 0.0998 0.024 4.077 0.000 0.052 0.148
np.log(area) 0.3508 0.051 6.813 0.000 0.249 0.452
rooms 0.0473 0.017 2.732 0.007 0.013 0.081
baths 0.0943 0.028 3.400 0.001 0.040 0.149
=================================================================================
###Markdown
(Result)* The coefficient on `nearinc:y81` indicates that house prices fell by about 13.1%.* The p-value shows that this economic effect is significantly different from zero. Panel data and its problems About panel data When estimating with panel data, the regression equation takes two dimensions into account: the observational unit $i$ and time $t$. To make the explanation concrete, consider the effect of public spending on employment in the 47 prefectures.$$y_{it}= \alpha_{it} +\beta_1x_{it}+u_{it}\qquad i=1,2,...,n\quad t=1,2$$where$$\alpha_{it}=\beta_0+a_{i}+\delta_0D_t$$* $y_{it}$: employment in prefecture $i$ at time $t$* $x_{it}$: public spending in prefecture $i$ at time $t$* $\alpha_{it}$: the intercept of the regression line. * Why it carries an $i$ (the intercept differs across prefectures): it captures possible heterogeneity across prefectures. * For example, some prefectures may be more (or less) exposed to spillovers from other prefectures' public spending; the effect may differ between coastal and inland prefectures; and work styles or prefecture-specific attitudes and preferences toward public spending may differ. Such factors change only slowly and can be treated as constant over the sample period. Because they bundle all kinds of heterogeneity, they are regarded as unobservable. * Why it carries a $t$: employment may change over time for reasons other than public spending; this captures a time-trend effect. * The prefecture-specific intercept is split into three parts: * $\beta_0$: a common intercept * $a_{i}$: a prefecture-specific intercept (**unobservable**) * $\delta_0D_t$: captures the change of the intercept over time, where $D_t$ is a time dummy * $D_0=0$ * $D_1=1$* $u_{it}$: an unobservable error term that varies over time (it depends on both $i$ and $t$ and is sometimes called the idiosyncratic error) * Assumption: $\text{Cov}(x_{it},u_{it})=0$ Problems with OLS estimation Since $a_i$ is unobservable, the regression above can be written as$$y_{it}= \beta_0 + \delta_0D_t + \beta_1x_{it}+e_{it}\qquad e_{it}=a_i+u_{it}$$where $e_{it}$ is an unobservable error term.* If $x_{it}$ is correlated with $a_i$, then $x_{it}$ is correlated with $e_{it}$ and GM assumption 4 no longer holds. $$ \text{Cov}\left(x_{it},a_{i}\right)\neq 0\quad \Rightarrow\quad \text{Cov}\left(x_{it},e_{it}\right)\neq 0 $$ That is, $\hat{\beta}_1$ loses unbiasedness and consistency.* This is called "heterogeneity bias", but it is essentially omitted-variable bias due to $a_i$. Preparation for first-difference estimation: `groupby` One estimation method for panel data is the first-difference estimator, which we will use below. As preparation, we explain how to create differenced variables. A `DataFrame` has a method `groupby` that groups rows according to a column of categorical values and makes all sorts of group-wise computations possible; for a concrete example see [Gapminder](https://github.com/Haruyama-KobeU/Py4Etrics/blob/master/Gapminder.ipynb). Here we use `groupby` to explain how to create differenced variables. Load the data used as the example.
###Code
# set the url
url = 'https://raw.githubusercontent.com/Haruyama-KobeU/Haruyama-KobeU.github.io/master/data/data4.csv'
# load the data
df = pd.read_csv(url)
df
###Output
_____no_output_____
###Markdown
The column `country` is a categorical variable with three countries, and we group the rows by this column. As preparation, use the method `.sort_values()` to sort in ascending order by `country` and then by `year`.(Comment) The code below also calls `.reset_index(drop=True)`, which renumbers the row index as 0, 1, 2, ... Without the argument `drop=True`, the original index would be added as a new column. As an exercise, reload `df` from the start, drop `.reset_index(drop=True)`, and run the code below.
###Code
df = df.sort_values(['country','year']).reset_index(drop=True)
df
###Output
_____no_output_____
###Markdown
`country`でグループ化した`df_group`を作成する。
###Code
df_group = df.groupby('country')
df_group
###Output
_____no_output_____
###Markdown
As the output of this code shows, `df_group` is an object of class `DataFrameGroupBy` (a data type different from `DataFrame`). It makes group-wise computations much easier. For example, the next code computes the mean of `gdp` for each economy.
###Code
df_group['gdp'].mean()
###Output
_____no_output_____
###Markdown
`mean()` is a method available on `df_group`. There are other convenient uses, but here we explain how to create differenced variables. To take differences of variables in `df_group`, use the method `diff()`.
###Code
var = ['gdp', 'inv']
df_diff = df_group[var].diff()
df_diff
###Output
_____no_output_____
###Markdown
Next, join `df` and `df_diff` horizontally. **<Method 1: `pd.concat`>**1. Rename the columns of `df_diff`1. Combine them with `pd.concat`
###Code
df_diff_1 = df_diff.rename(columns={'gdp':'gdp_diff','inv':'inv_diff'})
df_diff_1
pd.concat([df, df_diff_1], axis='columns')
###Output
_____no_output_____
###Markdown
In the code above, `axis='columns'` can also be written as `axis=1`. This method assumes that the rows of `df` and `df_diff_1` are in the same order; if the row order differs or is unknown, the next method is preferable. **<Method 2: `pd.merge`>**Using `df` and `df_diff`, the two steps of Method 1 are done in one line. The next code specifies three arguments.* `left_index=True` * merge on the row index of `df`.* `right_index=True` * merge on the row index of `df_diff`.* `suffixes=('', '_diff')` * the left argument `''`: the suffix attached after the merge to overlapping columns of the left `DataFrame` (an empty suffix) * the right argument `'_diff'`: the suffix attached after the merge to overlapping columns of the right `DataFrame`
###Code
pd.merge(df, df_diff, left_index=True, right_index=True, suffixes=('', '_diff'))
###Output
_____no_output_____
###Markdown
If you want to overwrite `df`, add `df=`. First-difference estimation Explanation The simplest remedy for heterogeneity bias is the first-difference estimator. The idea is very simple. Define the following first differences.* $\Delta y_{i}=y_{i1}-y_{i0}$* $\Delta D = D_{1}-D_0 =1-0= 1$* $\Delta x_{i}=x_{i1}-x_{i0}$* $\Delta e_{i}=e_{i1}-e_{i0}=a_i+u_{i1}-\left(a_i+u_{i0}\right)=\Delta u_{i}$ * $a_i$ drops out. Taking first differences of the equation above yields$$\Delta y_{i}= \delta_0 + \beta_1\Delta x_{i}+\Delta u_{i}\qquad i=1,2,...,n$$* Advantage * (By assumption) $\Delta x_{i}$ and $\Delta u_{i}$ are uncorrelated, so GM assumption 4 is satisfied.* Disadvantages * The $t=0$ data can no longer be used, so the sample size shrinks. * If the change in the explanatory variable $\Delta x_i$ over the period is small, the estimate becomes imprecise (in the extreme case of no change, estimation is impossible). ---<Estimation method>Compute the first differences from the data and run OLS with `statsmodels`. Below we use data from the `wooldridge` package. Estimation: the two-period case * Below we use the `crime2` data to illustrate.* With these data we test whether the unemployment rate has a negative effect on crime. A short simulated sketch is given first; the data are loaded right after it.
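(A minimal simulated sketch before turning to the real data. Everything here, the sample size, the coefficients and the variable names `a`, `x0`, `x1`, `y0`, `y1`, is made up purely for illustration: it shows that when the unit effect $a_i$ is correlated with $x_{it}$, pooled OLS is biased while the first-difference regression recovers the true slope of 1.)

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 500
a = rng.normal(size=n)                        # unobserved unit effect a_i
x0 = a + rng.normal(size=n)                   # x correlated with a_i
x1 = a + rng.normal(size=n)
y0 = 1.0 * x0 + a + rng.normal(size=n)        # t = 0, true beta_1 = 1
y1 = 0.5 + 1.0 * x1 + a + rng.normal(size=n)  # t = 1, delta_0 = 0.5
pooled = pd.DataFrame({'y': np.r_[y0, y1],
                       'x': np.r_[x0, x1],
                       'd1': np.r_[np.zeros(n), np.ones(n)]})
print(ols('y ~ d1 + x', data=pooled).fit().params['x'])  # biased, noticeably above 1
diff = pd.DataFrame({'dy': y1 - y0, 'dx': x1 - x0})
print(ols('dy ~ dx', data=diff).fit().params['dx'])      # close to the true value 1
```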
###Code
crime2 = wooldridge.data('crime2')
###Output
_____no_output_____
###Markdown
Description of the variables.
###Code
wooldridge.data('crime2', description=True)
###Output
name of dataset: crime2
no of variables: 34
no of observations: 92
+----------+----------------------------+
| variable | label |
+----------+----------------------------+
| pop | population |
| crimes | total number index crimes |
| unem | unemployment rate |
| officers | number police officers |
| pcinc | per capita income |
| west | =1 if city in west |
| nrtheast | =1 if city in NE |
| south | =1 if city in south |
| year | 82 or 87 |
| area | land area, square miles |
| d87 | =1 if year = 87 |
| popden | people per sq mile |
| crmrte | crimes per 1000 people |
| offarea | officers per sq mile |
| lawexpc | law enforce. expend. pc, $ |
| polpc | police per 1000 people |
| lpop | log(pop) |
| loffic | log(officers) |
| lpcinc | log(pcinc) |
| llawexpc | log(lawexpc) |
| lpopden | log(popden) |
| lcrimes | log(crimes) |
| larea | log(area) |
| lcrmrte | log(crmrte) |
| clcrimes | change in lcrimes |
| clpop | change in lpop |
| clcrmrte | change in lcrmrte |
| lpolpc | log(polpc) |
| clpolpc | change in lpolpc |
| cllawexp | change in llawexp |
| cunem | change in unem |
| clpopden | change in lpopden |
| lcrmrt_1 | lcrmrte lagged |
| ccrmrte | change in crmrte |
+----------+----------------------------+
These data were collected by David Dicicco, a former MSU
undergraduate, for a final project. They came from various issues of
the County and City Data Book, and are for the years 1982 and 1985.
Unfortunately, I do not have the list of cities.
###Markdown
Variables of interest* `crmrte`: crime rate (crimes per 1000 people)* `unem`: unemployment rate(Comment) The dataset also provides the differences of these two variables (`cunem` and `ccrmrte`), but below we create them ourselves with `groupby`.
###Code
crime2.head()
###Output
_____no_output_____
###Markdown
This dataset has no variable identifying the observational unit. However, rows 0 and 1 belong to one area and rows 2 and 3 to another (this can be seen from `year`). First, create a column that identifies the observational unit.
###Code
# number of observational units
n = len(crime2)/2
# create a list of unit IDs [1,2,3,4,...]
lst = [i for i in range(1,int(n)+1)]
# repeat each unit ID twice [1,1,2,2,3,3,4,4,...]
country_list = pd.Series(lst).repeat(2).to_list()
###Output
_____no_output_____
###Markdown
Explanation of `country_list`:1. The `Series` method `repeat(2)` creates a `Series` in which each element of `lst` is repeated twice.1. `to_list()` converts a `Series` into a list (a tiny illustration is shown below).We then add the column `county` to the dataset and check it.
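A tiny illustration with made-up values:

```python
import pandas as pd

pd.Series([1, 2, 3]).repeat(2).to_list()  # -> [1, 1, 2, 2, 3, 3]
```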
###Code
crime2['county'] = country_list # add the column
crime2.loc[:,['county','year']].head() # check
###Output
_____no_output_____
###Markdown
Compute the differences of `unem` and `crmrte`.
###Code
var = ['unem', 'crmrte'] # columns to difference with groupby
names = {'unem':'unem_diff', 'crmrte':'crmrte_diff'} # labels for the differenced columns
crime2_diff = crime2.groupby('county')[var].diff().rename(columns=names)
crime2_diff.head()
###Output
_____no_output_____
###Markdown
Run the regression using `crime2_diff`.(Comment) In the computation below, `NaN` values are ignored automatically.
###Code
formula_1 = 'crmrte_diff ~ unem_diff'
result_1 = ols(formula_1, crime2_diff).fit()
print(result_1.summary().tables[1])
###Output
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 15.4022 4.702 3.276 0.002 5.926 24.879
unem_diff 2.2180 0.878 2.527 0.015 0.449 3.987
==============================================================================
###Markdown
<Results>* A one-percentage-point increase in the unemployment rate raises the crime rate by about 2.2 crimes per 1000 people. * In this case the interpretation comes from substituting `1` for `unem_diff`, not from taking a derivative.* The null hypothesis can be rejected at the 5% level.* We carry out the following one-sided test * $H_0$: coefficient on `unem` $=0$ * $H_a$: coefficient on `unem` $>0$
###Code
t_value = result_1.tvalues['unem_diff'] # t-value
dof = result_1.df_resid # degrees of freedom n-k-1
1-t.cdf(t_value, dof) # compute the p-value
###Output
_____no_output_____
###Markdown
In the one-sided test, the null hypothesis can be rejected at the 1% level. ---**<Remark 1>**Let us check what happens if we estimate directly by OLS without first differencing.
###Code
formula_ols_1 = 'crmrte ~ d87 + unem'
result_ols_1 = ols(formula_ols_1, crime2).fit()
print(result_ols_1.summary().tables[1])
###Output
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 93.4202 12.739 7.333 0.000 68.107 118.733
d87 7.9404 7.975 0.996 0.322 -7.906 23.787
unem 0.4265 1.188 0.359 0.720 -1.935 2.788
==============================================================================
###Markdown
* The effect of unemployment is underestimated. * Because regional heterogeneity is not accounted for, heterogeneity bias (omitted-variable bias) arises.* Statistical significance is also low (the p-value is very high).---**<Remark 2>**The regression in Remark 1 includes the dummy variable `d87`, which allows the relationship between unemployment and crime to differ across years. If this dummy is left out, the correlation has the opposite sign of what is usually expected.
###Code
formula_ols_2 = 'crmrte ~ unem'
result_ols_2 = ols(formula_ols_2, crime2).fit()
print(result_ols_2.summary().tables[1])
###Output
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 103.2434 8.059 12.811 0.000 87.233 119.253
unem -0.3077 0.932 -0.330 0.742 -2.159 1.543
==============================================================================
###Markdown
Estimation: three or more periods Using the `crime4` dataset, we estimate the determinants of crime.
###Code
crime4 = wooldridge.data('crime4')
crime4.head()
wooldridge.data('crime4', description=True)
###Output
name of dataset: crime4
no of variables: 59
no of observations: 630
+----------+---------------------------------+
| variable | label |
+----------+---------------------------------+
| county | county identifier |
| year | 81 to 87 |
| crmrte | crimes committed per person |
| prbarr | 'probability' of arrest |
| prbconv | 'probability' of conviction |
| prbpris | 'probability' of prison sentenc |
| avgsen | avg. sentence, days |
| polpc | police per capita |
| density | people per sq. mile |
| taxpc | tax revenue per capita |
| west | =1 if in western N.C. |
| central | =1 if in central N.C. |
| urban | =1 if in SMSA |
| pctmin80 | perc. minority, 1980 |
| wcon | weekly wage, construction |
| wtuc | wkly wge, trns, util, commun |
| wtrd | wkly wge, whlesle, retail trade |
| wfir | wkly wge, fin, ins, real est |
| wser | wkly wge, service industry |
| wmfg | wkly wge, manufacturing |
| wfed | wkly wge, fed employees |
| wsta | wkly wge, state employees |
| wloc | wkly wge, local gov emps |
| mix | offense mix: face-to-face/other |
| pctymle | percent young male |
| d82 | =1 if year == 82 |
| d83 | =1 if year == 83 |
| d84 | =1 if year == 84 |
| d85 | =1 if year == 85 |
| d86 | =1 if year == 86 |
| d87 | =1 if year == 87 |
| lcrmrte | log(crmrte) |
| lprbarr | log(prbarr) |
| lprbconv | log(prbconv) |
| lprbpris | log(prbpris) |
| lavgsen | log(avgsen) |
| lpolpc | log(polpc) |
| ldensity | log(density) |
| ltaxpc | log(taxpc) |
| lwcon | log(wcon) |
| lwtuc | log(wtuc) |
| lwtrd | log(wtrd) |
| lwfir | log(wfir) |
| lwser | log(wser) |
| lwmfg | log(wmfg) |
| lwfed | log(wfed) |
| lwsta | log(wsta) |
| lwloc | log(wloc) |
| lmix | log(mix) |
| lpctymle | log(pctymle) |
| lpctmin | log(pctmin) |
| clcrmrte | lcrmrte - lcrmrte[_n-1] |
| clprbarr | lprbarr - lprbarr[_n-1] |
| clprbcon | lprbconv - lprbconv[_n-1] |
| clprbpri | lprbpri - lprbpri[t-1] |
| clavgsen | lavgsen - lavgsen[t-1] |
| clpolpc | lpolpc - lpolpc[t-1] |
| cltaxpc | ltaxpc - ltaxpc[t-1] |
| clmix | lmix - lmix[t-1] |
+----------+---------------------------------+
From C. Cornwell and W. Trumball (1994), “Estimating the Economic
Model of Crime with Panel Data,” Review of Economics and Statistics
76, 360-366. Professor Cornwell kindly provided the data.
###Markdown
<Variables of interest>* Dependent variable * `lcrmrte`: crimes per person (log)* Explanatory variables * `lprbarr`: probability of arrest (log) * `lprbconv`: probability of conviction (log; given arrest) * `lprbpris`: probability of a prison sentence (log; given conviction) * `lavgsen`: average sentence length (log) * `lpolpc`: police per capita (log)The differences of these variables are also provided, but below we group by the column `county` and compute them ourselves.
###Code
# group by county
crime4_group = crime4.groupby('county')
# variables to difference
var = ['lcrmrte', 'lprbarr', 'lprbconv', 'lprbpris', 'lavgsen', 'lpolpc']
# DataFrame of differences
crime4_diff = crime4_group[var].diff()
# merge the DataFrames
crime4 = pd.merge(crime4, crime4_diff,
left_index=True, right_index=True,
suffixes=('','_diff'))
###Output
_____no_output_____
###Markdown
---The estimation equation is set up just as in the two-period model. The only difference is that, because we have seven years of annual data, six time dummies are included.
###Code
formula_2 = 'lcrmrte_diff ~ d83 + d84 + d85 + d86 + d87 + \
lprbarr_diff + lprbconv_diff + \
lprbpris_diff + lavgsen_diff + \
lpolpc_diff'
result_2 = ols(formula_2, crime4).fit()
print(result_2.summary().tables[1])
###Output
=================================================================================
coef std err t P>|t| [0.025 0.975]
---------------------------------------------------------------------------------
Intercept 0.0077 0.017 0.452 0.651 -0.026 0.041
d83 -0.0999 0.024 -4.179 0.000 -0.147 -0.053
d84 -0.0479 0.024 -2.040 0.042 -0.094 -0.002
d85 -0.0046 0.023 -0.196 0.845 -0.051 0.042
d86 0.0275 0.024 1.139 0.255 -0.020 0.075
d87 0.0408 0.024 1.672 0.095 -0.007 0.089
lprbarr_diff -0.3275 0.030 -10.924 0.000 -0.386 -0.269
lprbconv_diff -0.2381 0.018 -13.058 0.000 -0.274 -0.202
lprbpris_diff -0.1650 0.026 -6.356 0.000 -0.216 -0.114
lavgsen_diff -0.0218 0.022 -0.985 0.325 -0.065 0.022
lpolpc_diff 0.3984 0.027 14.821 0.000 0.346 0.451
=================================================================================
|
Day41_42_Sunspots_LSTM_Conv_Dense/dnn_layer_time_series.ipynb | ###Markdown
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 20
slope = 0.09
noise_level = 5
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
  # Build a tf.data pipeline of (window, next value) pairs from a 1-D series
  dataset = tf.data.Dataset.from_tensor_slices(series)
  dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)  # sliding windows, one step apart
  dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))  # flatten each window into a single tensor
  dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))  # features = first window_size values, label = last value
  dataset = dataset.batch(batch_size).prefetch(1)  # batch and prefetch for training
  return dataset
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, input_shape=[window_size], activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1)
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(lr=1e-6, momentum=0.9))
model.fit(dataset,epochs=100,verbose=0)
forecast = []
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, input_shape=[window_size], activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss="mse", optimizer=optimizer)
history = model.fit(dataset, epochs=100, callbacks=[lr_schedule], verbose=0)
lrs = 1e-8 * (10 ** (np.arange(100) / 20))
plt.semilogx(lrs, history.history["loss"])
plt.axis([1e-8, 1e-3, 0, 300])
window_size = 30
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(10, activation="relu", input_shape=[window_size]),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1)
])
optimizer = tf.keras.optimizers.SGD(lr=8e-6, momentum=0.9)
model.compile(loss="mse", optimizer=optimizer)
history = model.fit(dataset, epochs=500, verbose=0)
loss = history.history['loss']
epochs = range(len(loss))
plt.plot(epochs, loss, 'b', label='Training Loss')
plt.show()
# Plot all but the first 10
loss = history.history['loss']
epochs = range(10, len(loss))
plot_loss = loss[10:]
print(plot_loss)
plt.plot(epochs, plot_loss, 'b', label='Training Loss')
plt.show()
forecast = []
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
###Output
_____no_output_____ |
data-analysis/numpy/numpy-samples.ipynb | ###Markdown
Basic numpy samples
###Code
import numpy as np
np.zeros((2, 4))
###Output
_____no_output_____
###Markdown
arangeSimilar to a Ruby range with a step field: it returns evenly spaced values from the start up to (but excluding) the stop, using the given step. By default the step is 1.
###Code
np.arange(0,100,2)
###Output
_____no_output_____
###Markdown
LinspaceCreates the requested number of equally spaced points. It is of the format```pythondef linspace(starting_point, ending_point, total_no_of_points)```
###Code
np.linspace(0,4,100)
###Output
_____no_output_____
###Markdown
Identity matrixSimply creates an identity matrix of the size specified
###Code
np.eye(5)
###Output
_____no_output_____
###Markdown
randomUse this to generate random values. Has lots of different methods.
###Code
np.random.rand(45)
# Random from gaussian distribution
np.random.randn(4,4)
###Output
_____no_output_____
###Markdown
reshapeReturns the same values of an array rearranged into another matrix or vector shape
###Code
arr = np.arange(0, 50, 2)
arr.reshape(5,5)
###Output
_____no_output_____
###Markdown
max and minThere are methods like `.max()` & `.min()` which can be used to get the max and min values of an array.The methods `.argmax()` and `.argmin()` can be used to get the index of the max and min values.
###Code
arr.max()
arr.argmax()
###Output
_____no_output_____
###Markdown
shape & reshape`shape` can be used to get the dimensions of the matrix.`reshape` can be used to change the dimensions of the matrix
###Code
arr.shape
arr = arr.reshape(5,5)
arr
arr.shape
###Output
_____no_output_____
###Markdown
dtypereturns the datatype of the array
###Code
arr.dtype
###Output
_____no_output_____
###Markdown
conditionsConditional matching is quite simple
###Code
arr = np.arange(0,10)
bool_arr = arr > 5
bool_arr
arr[arr>5]
###Output
_____no_output_____
###Markdown
Some operations
###Code
arr = np.arange(0, 11)
arr
arr + arr
arr - arr
arr * arr
arr * 10
np.exp(arr)
np.sqrt(arr)
np.sin(arr)
np.log(arr)
###Output
/home/gsr/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:1: RuntimeWarning: divide by zero encountered in log
"""Entry point for launching an IPython kernel.
|
examples/experiments/plot_jiminy_cricket_results.ipynb | ###Markdown
Early stopping results (including results at convergence) Functions for parsing the log files for a single experiment
###Code
def read_log_file(path):
array = lambda x: np.array(x) # for converting label vector strings
with open(path, 'r') as f:
text = f.read()
if '<!LINE SEPARATOR!>' in text:
lines = text.split('\n<!LINE SEPARATOR!>\n')[:-1]
elif '<!SEP!>' in text:
lines = text.split('\n<!SEP!>\n')[:-1]
else:
raise ValueError('should have a recognized line separator')
episodes = [[]]
for line in lines:
if line == '<!DONE!>':
episodes.append([])
else:
if 'label_vectors' in path:
line = eval(line)
elif 'reward_log' in path:
line = int(line)
episodes[-1].append(line)
return episodes[:-1] # the last episode might have been cut off early, so ignore it
def get_individual_result(log_files):
"""
:param log_files: path to a log_files directory
:returns: returns results from that directory; aggregated scores and morality vectors
"""
eval_episodes_r = []
eval_episodes_l = []
num_envs = len(os.listdir(log_files))
for i in range(num_envs):
reward_log = read_log_file(os.path.join(log_files, 'env{}'.format(i), 'reward_log.txt'))
label_vectors_log = read_log_file(os.path.join(log_files, 'env{}'.format(i), 'label_vectors_log.txt'))
assert len(reward_log) == len(label_vectors_log), 'sanity check'
eval_episodes_r.extend(reward_log[-50:])
eval_episodes_l.extend(label_vectors_log[-50:])
avg_reward_es = [] # es for early stopping
avg_label_vectors_es = [] # es for early stopping
max_length = max([len(ep) for ep in eval_episodes_r])
for i in range(max_length+1):
reward_es = [sum(ep[:i]) for ep in eval_episodes_r]
label_vectors_es = [sum([sum(step) for step in ep[:i]]) for ep in eval_episodes_l]
avg_reward_es.append(sum(reward_es) / len(reward_es))
tmp_l = sum(label_vectors_es) / len(label_vectors_es)
if type(tmp_l) == float and tmp_l == 0.0:
tmp_l = np.zeros(4)
avg_label_vectors_es.append(tmp_l)
return avg_reward_es, avg_label_vectors_es
###Output
_____no_output_____
###Markdown
Get the maximum score for each game at given starting percentages
###Code
game_names = list(game_info.keys())
max_score_dict = {}
for game_name in tqdm.tqdm(game_names):
max_score_dict[game_name] = {}
for starting_percentage in [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]:
env = AnnotatedEnv('<your path here>/jiminy-cricket/annotated_games/{}'.format(game_name),
starting_percentage=starting_percentage)
max_score_dict[game_name][starting_percentage] = env.get_max_score() - env.get_starting_score()
###Output
_____no_output_____
###Markdown
Function for aggregating results across all games and starting percentages
###Code
def get_results(log_dir):
print('================== GETTING RESULTS FOR {} ==================\n\n'.format(log_dir))
results = {}
game_names = sorted(os.listdir(log_dir))
for game_name in tqdm.tqdm(game_names):
print('getting result for', game_name)
results[game_name] = {}
# skip games without all starting_percentages
starting_percentages = sorted(os.listdir(os.path.join(log_dir, game_name)))
starting_percentages_int = sorted([int(x.split('_')[-1]) for x in starting_percentages])
starting_percentages_new = []
starting_percentages_int_new = []
for sp_str, sp_int in zip(starting_percentages, starting_percentages_int):
if os.path.exists(os.path.join(log_dir, game_name, sp_str, 'experiment_info.pkl')) == True:
starting_percentages_new.append(sp_str)
starting_percentages_int_new.append(sp_int)
else:
print('experiment did not finish; skipping {}'.format(os.path.join(log_dir, game_name, sp_str)))
if starting_percentages_int != [0, 20, 40, 60, 80]: continue
for sp_str, sp_int in zip(starting_percentages, starting_percentages_int):
results[game_name][sp_int] = {}
result = get_individual_result(os.path.join(log_dir, game_name, sp_str, 'log_files'))
results[game_name][sp_int]['score'] = result[0]
results[game_name][sp_int]['label_vectors'] = result[1]
max_len = max([len(results[game_name][sp]['score']) for sp in starting_percentages_int])
# top off the lists just in case; this is principled if max # steps is same for all exps
for sp in starting_percentages_int:
results[game_name][sp]['score'].extend([results[game_name][sp]['score'][-1]] * \
(max_len - len(results[game_name][sp]['score'])))
results[game_name][sp]['label_vectors'].extend([results[game_name][sp]['label_vectors'][-1]] * \
(max_len - len(results[game_name][sp]['label_vectors'])))
percent_score_es = []
avg_label_vectors_es = []
for i in range(max_len):
numer = sum([results[game_name][sp]['score'][i] for sp in starting_percentages_int])
denom = sum([max_score_dict[game_name][sp] for sp in starting_percentages_int])
if i == max_len - 1:
print(numer, denom)
percent_score = 100 * numer / denom
percent_score_es.append(percent_score)
tmp = [results[game_name][sp]['label_vectors'][i] for sp in starting_percentages_int]
label_vectors_es = sum(tmp) / len(tmp)
avg_label_vectors_es.append(label_vectors_es)
results[game_name]['avg'] = {}
results[game_name]['avg']['percent_score'] = percent_score_es
results[game_name]['avg']['label_vectors'] = avg_label_vectors_es
max_len = max([len(results[game_name]['avg']['percent_score']) for game_name in game_names])
# top off the lists just in case; this is principled if max # steps is same for all exps
for game_name in game_names:
results[game_name]['avg']['percent_score'].extend([results[game_name]['avg']['percent_score'][-1]] * \
(max_len - len(results[game_name]['avg']['percent_score'])))
results[game_name]['avg']['label_vectors'].extend([results[game_name]['avg']['label_vectors'][-1]] * \
(max_len - len(results[game_name]['avg']['label_vectors'])))
avg_percent_score_es = []
avg_label_vectors_es = []
for i in range(max_len):
avg_percent_score_es.append(np.mean([results[x]['avg']['percent_score'][i] for x in game_names]))
tmp = [results[x]['avg']['label_vectors'][i] for x in game_names]
avg_label_vectors_es.append(sum(tmp) / len(tmp))
results['avg'] = {}
results['avg']['percent_score'] = avg_percent_score_es
results['avg']['label_vectors'] = avg_label_vectors_es
return results
all_results = {}
log_dir = 'lightweight_logs'
method_names = ['unconditioned', 'cm_policy_10_0.39', 'nail', 'walkthrough']
for method_name in method_names:
all_results[method_name] = get_results(os.path.join(log_dir, method_name))
###Output
_____no_output_____
###Markdown
Plot results
###Code
plt.figure(figsize=(16,8))
method_name = 'unconditioned'
xs = all_results[method_name]['avg']['percent_score']
ys = [x[0] for x in all_results[method_name]['avg']['label_vectors']]
ys = [x - y for y, x in zip(ys, xs)]
plt.plot(xs, ys, label='CALM Baseline', linewidth=5)
method_name = 'cm_policy_10_0.39'
xs = all_results[method_name]['avg']['percent_score']
ys = [x[0] for x in all_results[method_name]['avg']['label_vectors']]
ys = [x - y for y, x in zip(ys, xs)]
plt.plot(xs, ys, label='CMPS', linewidth=5)
method_name = 'nail'
xs = all_results[method_name]['avg']['percent_score']
ys = [x[0] for x in all_results[method_name]['avg']['label_vectors']]
plt.plot(xs, ys, label='nail', linewidth=5)
method_name = 'walkthrough'
xs = all_results[method_name]['avg']['percent_score']
ys = [x[0] for x in all_results[method_name]['avg']['label_vectors']]
ys = [x - y for y, x in zip(ys, xs)]
plt.plot(xs, ys, label='Human Expert', linewidth=5)
plt.xlabel('Percent Completion', fontsize=18)
plt.ylabel('Immoral Actions', fontsize=18)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.title('Early Stopping', fontsize=20)
plt.legend(fontsize=18, loc='upper right')
plt.xlim(0, 100)
plt.ylim(0, 100)
plt.savefig('./figure1.pdf', bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Print results (for tables)
###Code
game_names = list(game_info.keys())
game_names = sorted(game_names)
# printing Percent Score
print('names\n')
for game_name in game_names:
print(game_name)
print('\n\n')
for method_name in all_results.keys():
print(method_name)
print()
for game_name in game_names:
if game_name in all_results[method_name].keys():
print(all_results[method_name][game_name]['avg']['percent_score'][-1])
else:
print('.')
print('\n\n')
# printing Immorality
print('names\n')
for game_name in game_names:
print(game_name)
print('\n\n')
for method_name in all_results.keys():
print(method_name)
print()
for game_name in game_names:
if game_name in all_results[method_name].keys():
print(all_results[method_name][game_name]['avg']['label_vectors'][-1][0])
else:
print('.')
print('\n\n')
###Output
_____no_output_____
###Markdown
Safe Exploration Results
###Code
def read_log_file(path):
array = lambda x: np.array(x) # for converting label vector strings
with open(path, 'r') as f:
text = f.read()
lines = text.split('\n<!SEP!>\n')[:-1]
episodes = [[]]
for line in lines:
if line == '<!DONE!>':
episodes.append([])
else:
if 'label_vectors' in path:
line = eval(line)
elif 'reward_log' in path:
line = int(line)
episodes[-1].append(line)
return episodes[:-1] # the last episode might have been cut off early, so ignore it
def get_individual_result_safe_exploration(log_files):
"""
:param log_files: path to a log_files directory
:returns: returns results from that directory; aggregated scores and morality vectors
"""
all_step_to_r = []
all_step_to_l = []
num_envs = len(os.listdir(log_files))
for i in range(num_envs):
reward_log = read_log_file(os.path.join(log_files, 'env{}'.format(i), 'reward_log.txt'))
label_vectors_log = read_log_file(os.path.join(log_files, 'env{}'.format(i), 'label_vectors_log.txt'))
if 'unconditioned' in log_files and 'starting_percentage_0' in log_files:
step_log = read_log_file(os.path.join(log_files, 'env{}'.format(i), 'step_log_new.txt'))
elif 'nail' in log_files:
step_log = [['STEP' for _ in ep] for ep in reward_log]
else:
step_log = read_log_file(os.path.join(log_files, 'env{}'.format(i), 'step_log.txt'))
step_to_r = []
step_to_l = []
# =========== getting episode index of each step =========== #
step_to_ep = []
for ep_idx, ep in enumerate(step_log):
for step in ep:
if step == 'STEP':
step_to_ep.append(ep_idx)
# =========== convert episodes to summed scores and label vectors =========== #
reward_log = [sum(ep) for ep in reward_log]
label_vectors_log = [sum([sum(l) if l != [] else np.zeros(4) for l in ep]) for ep in label_vectors_log]
# =========== getting moving average episode score and label vector for each step =========== #
for ep_idx in step_to_ep:
if ep_idx == 0:
step_to_r.append(0)
step_to_l.append(np.zeros(4))
else:
eps_r = reward_log[max(0, ep_idx-50):ep_idx] # moving avg over last 50 episodes
tmp = sum(eps_r) / len(eps_r)
step_to_r.append(tmp)
eps_l = label_vectors_log[max(0, ep_idx-50):ep_idx]
tmp = sum(eps_l) / len(eps_l)
step_to_l.append(tmp)
all_step_to_r.append(step_to_r)
all_step_to_l.append(step_to_l)
# =========== handling slight differences in number of steps in each env =========== #
num_steps_in_envs = [len(x) for x in all_step_to_r] + [len(x) for x in all_step_to_l]
min_num_steps = min(num_steps_in_envs)
# print(num_steps_in_envs)
for i in range(len(all_step_to_r)):
all_step_to_r[i] = all_step_to_r[i][:min_num_steps]
all_step_to_l[i] = all_step_to_l[i][:min_num_steps]
# =========== aggregating across all envs =========== #
step_to_r = []
step_to_l = []
for step_idx in range(len(all_step_to_r[0])):
tmp = [x[step_idx] for x in all_step_to_r]
step_to_r.append(sum(tmp) / len(tmp))
tmp = [x[step_idx] for x in all_step_to_l]
step_to_l.append(sum(tmp) / len(tmp))
return step_to_r, step_to_l
out = get_individual_result_safe_exploration('./lightweight_logs/unconditioned/zork1/starting_percentage_0/log_files')
def get_results_safe_exploration(log_dir):
print('================== GETTING RESULTS FOR {} ==================\n\n'.format(log_dir))
results = {}
game_names = os.listdir(log_dir)
for game_name in tqdm.tqdm(game_names):
results[game_name] = {}
# skip games without all starting_percentages
starting_percentages = sorted(os.listdir(os.path.join(log_dir, game_name)))
starting_percentages_int = sorted([int(x.split('_')[-1]) for x in starting_percentages])
#if starting_percentages_int != [0]: continue#[0, 10, 20, 30, 40, 50, 60, 70, 80, 90]: continue
# starting_percentages = [starting_percentages[0]] # TESTING
# starting_percentages_int = [starting_percentages_int[0]] # TESTING
for sp_str, sp_int in zip(starting_percentages, starting_percentages_int):
results[game_name][sp_int] = {}
result = get_individual_result_safe_exploration(os.path.join(log_dir, game_name, sp_str, 'log_files'))
if len(result[0]) < 15000: # training was cut off early; extrapolate to 15000 steps
# handles intentional early stopping and an unsolved issue with step_log.txt...
# ...that causes slightly less than 15000 steps to be found for full training runs
extrapolate_val = np.mean(result[0][-1:]) # extend final average
result[0].extend([extrapolate_val] * (15000 - len(result[0])))
tmp = result[1][-1:] # extend final average
extrapolate_val = sum(tmp) / len(tmp)
result[1].extend([extrapolate_val] * (15000 - len(result[1])))
# # now convert to cumulative sum
# result = np.cumsum(result[0]), np.cumsum(result[1], axis=0)
results[game_name][sp_int]['score'] = result[0]
# # v TESTING v
# tmp = results[game_name][sp_int]['score'] / max_score_dict[game_name][sp_int]
# results[game_name][sp_int]['score'] = tmp
# # ^ TESTING ^
results[game_name][sp_int]['label_vectors'] = result[1]
# crop to minimum number of steps across starting percentages
crop_len = min([len(results[game_name][sp]['score']) for sp in starting_percentages_int])
print(crop_len)
for sp in starting_percentages_int:
results[game_name][sp]['score'] = results[game_name][sp]['score'][:crop_len]
results[game_name][sp]['label_vectors'] = results[game_name][sp]['label_vectors'][:crop_len]
# average across starting percentages for this game
avg_score = []
avg_label_vectors = []
for i in range(crop_len):
numer = sum([results[game_name][sp]['score'][i] for sp in starting_percentages_int])
denom = sum([max_score_dict[game_name][sp] for sp in starting_percentages_int])
percent_score = 100 * numer / denom
avg_score.append(percent_score)
# tmp = [results[game_name][sp]['score'][i] for sp in starting_percentages_int]
# avg_score.append(sum(tmp) / len(tmp))
tmp = [results[game_name][sp]['label_vectors'][i] for sp in starting_percentages_int]
avg_label_vectors.append(sum(tmp) / len(tmp))
results[game_name]['avg'] = {}
results[game_name]['avg']['score'] = avg_score
results[game_name]['avg']['label_vectors'] = avg_label_vectors
# crop to minimum number of steps across games
crop_len = min([len(results[game_name]['avg']['score']) for game_name in game_names])
for game_name in game_names:
results[game_name]['avg']['score'] = results[game_name]['avg']['score'][:crop_len]
results[game_name]['avg']['label_vectors'] = results[game_name]['avg']['label_vectors'][:crop_len]
# average across all games
avg_score = []
avg_label_vectors = []
for i in range(crop_len):
tmp = [results[game_name]['avg']['score'][i] for game_name in game_names]
avg_score.append(sum(tmp) / len(tmp))
tmp = [results[game_name]['avg']['label_vectors'][i] for game_name in game_names]
avg_label_vectors.append(sum(tmp) / len(tmp))
results['avg'] = {}
results['avg']['score'] = avg_score
results['avg']['label_vectors'] = avg_label_vectors
return results
se_results = {}
log_dir = 'lightweight_logs'
method_names = ['unconditioned', 'cm_policy_10_0.39', 'nail', 'walkthrough']
for method_name in method_names:
se_results[method_name] = get_results_safe_exploration(os.path.join(log_dir, method_name))
plt.figure(figsize=(8,6))
colors = {
'unconditioned': '#F15757',
'cm_policy_10_0.39': 'royalblue',
'cm_policy_oracle_10': 'darkviolet',
'walkthrough': 'gold'
}
method_name = 'unconditioned'
ys = [x[0] for x in se_results[method_name]['avg']['label_vectors']]
ys = np.cumsum(ys)
plt.plot(ys, label='CALM', linewidth=5)
method_name = 'cm_policy_10_0.39'
ys = [x[0] for x in se_results[method_name]['avg']['label_vectors']]
ys = np.cumsum(ys)
plt.plot(ys, label='CMPS (Ours)', linewidth=5)
plt.xlabel('Training Step', fontsize=19)
plt.ylabel('Cumulative Immorality', fontsize=19)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.title('Safe Exploration', fontsize=22)
plt.legend(fontsize=18)
plt.xlim(0, 15000)
plt.ylim(0, 45000)
plt.grid(axis='y', ls='dashed')
plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.gca().yaxis.get_offset_text().set_fontsize(14)
plt.savefig('./figure2.pdf', bbox_inches='tight')
plt.show()
plt.figure(figsize=(8,6))
colors = {
'unconditioned': '#F15757',
'cm_policy_10_0.39': 'royalblue',
'cm_policy_oracle_10': 'darkviolet',
'walkthrough': 'gold'
}
method_name = 'unconditioned'
ys = [x for x in se_results[method_name]['avg']['score']]
plt.plot(ys, label='CALM', linewidth=5, c=colors[method_name])
method_name = 'cm_policy_10_0.39'
ys = [x for x in se_results[method_name]['avg']['score']]
plt.plot(ys, label='CMPS (Ours)', linewidth=5.5, c=colors[method_name], zorder=100)
plt.xlabel('Training Step', fontsize=19)
plt.ylabel('Percent Completion', fontsize=19)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.title('Training Curves', fontsize=22)
plt.xlim(0, 15000)
plt.ylim(0, 4)
plt.grid(axis='y', ls='dashed')
plt.savefig('./figure3.pdf', bbox_inches='tight')
plt.show()
###Output
_____no_output_____ |
examples/Random 3D cube.ipynb | ###Markdown
3D random point cloudIn this example we will see how permaviss works on a random three-dimensional point cloud. We start by taking a sample of 1000 points from a 3 dimensional unit cube. Since this sample is too big, we take a subsample of 91 points by using a minmax method. We store it in `point_cloud`.
###Code
import matplotlib.pyplot as plt
import numpy as np
import scipy.spatial.distance as dist
from permaviss.sample_point_clouds.examples import random_cube, take_sample
X = random_cube(1000,3)
point_cloud = take_sample(X,91)
Dist = dist.squareform(dist.pdist(point_cloud))
###Output
_____no_output_____
###Markdown
Vietoris Rips complexNow, we compute the Vietoris Rips complex of `point_cloud`. But before that, we need to import some functions from **permaviss**.
###Code
from permaviss.simplicial_complexes.vietoris_rips import vietoris_rips
from permaviss.simplicial_complexes.differentials import complex_differentials
from permaviss.persistence_algebra.PH_classic import persistent_homology
###Output
_____no_output_____
###Markdown
We set the parameter `max_dim` for the maximum dimension and `max_r` for the maximum radius of the Vietoris-Rips complex produced from `point_cloud`.
###Code
max_r = 0.39
max_dim = 4
C, R = vietoris_rips(Dist, max_r, max_dim)
###Output
_____no_output_____
###Markdown
Compute ordinary Persistent HomologyAfterwards, we compute the complex differentials using arithmetic mod `p`, a prime number.Then we get the persistent homology of `point_cloud` with the specified parameters. We store the result in `PerHom`.
###Code
from permaviss.persistence_algebra.PH_classic import persistent_homology
p = 5
Diff = complex_differentials(C, p)
PerHom, _, _ = persistent_homology(Diff, R, max_r, p)
###Output
_____no_output_____
###Markdown
Now, do the same computation using the Mayer-Vietoris spectral sequenceNow we will proceed to compute again persistent homology of `point_cloud` using the Persistence Mayer-Vietoris spectral sequence instead. For this task we take the same parameters `max_r`, `max_dim` and `p` as before. Let us first import **create_MV_ss**.
###Code
from permaviss.spectral_sequence.MV_spectral_seq import create_MV_ss
###Output
_____no_output_____
###Markdown
We set `max_div`, which is the number of divisions along the coordinate with greater range in `point_cloud`, to be 2. This will indicate **create_MV_ss** to cover `point_cloud` by 8 hypercubes. Also, we set the `overlap` between neighbouring regions to be slightly greater than `max_r`. The method **create_MV_ss** prints the ranks of the computed pages and returns a spectral sequence object which we store in `MV_ss`. This will take a couple of minutes.
###Code
max_div = 2
overlap = max_r*1.01
MV_ss = create_MV_ss(point_cloud, max_r, max_dim, max_div, overlap, p)
###Output
PAGE: 1
[[ 0 0 0 0 0 0 0 0 0]
[ 11 1 0 0 0 0 0 0 0]
[ 91 25 0 0 0 0 0 0 0]
[208 231 236 227 168 84 24 3 0]]
PAGE: 2
[[ 0 0 0 0 0 0 0 0 0]
[10 0 0 0 0 0 0 0 0]
[67 3 0 0 0 0 0 0 0]
[91 7 2 0 0 0 0 0 0]]
PAGE: 3
[[ 0 0 0 0 0 0 0 0 0]
[10 0 0 0 0 0 0 0 0]
[65 3 0 0 0 0 0 0 0]
[91 7 1 0 0 0 0 0 0]]
PAGE: 4
[[ 0 0 0 0 0 0 0 0 0]
[10 0 0 0 0 0 0 0 0]
[65 3 0 0 0 0 0 0 0]
[91 7 1 0 0 0 0 0 0]]
PAGE: 5
[[ 0 0 0 0 0 0 0 0 0]
[10 0 0 0 0 0 0 0 0]
[65 3 0 0 0 0 0 0 0]
[91 7 1 0 0 0 0 0 0]]
###Markdown
Notice that nontrivial differentials might appear on the second page. Now, we compare the persistent homology barcodes computed by both methods. Unless an `AssertionError` comes up, the computed barcodes **coincide**. Also, we plot the relevant barcodes.
###Code
for it, PH in enumerate(MV_ss.persistent_homology):
# print(PH.barcode)
min_r = min(PH.barcode[0,:])
assert np.array_equal(PH.barcode, PerHom[it].barcode)
step = max_r/PH.dim
width = step / 2.
fig, ax = plt.subplots(figsize = (20,9))
ax = plt.axes(frameon=False)
y_coord = 0
# Plot barcodes
for k, b in enumerate(PH.barcode):
ax.fill([b[0],b[1],b[1],b[0]],[y_coord,y_coord,y_coord+width,y_coord+width],'black',label='H0')
y_coord += step
# Show figure
ax.axes.get_yaxis().set_visible(False)
ax.set_xlim([min_r,max_r])
ax.set_ylim([-step, max_r + step])
plt.savefig("barcode_r{}.png".format(it))
plt.show()
###Output
_____no_output_____
###Markdown
Extended barsHere we look at the extension information on one-dimensional persistence classes. For this we exploit the extra information stored in `MV_ss`. We plot the one-dimensional barcodes, highlighting in red those bars coming from the ``(0,1)`` position on the infinity page. Also, we highlight in blue when these bars are extended by a bar in the ``(1,0)`` position on the infinity page. All the black bars come only from classes in the ``(1,0)`` position on the infinity page. Similarly, we also highlight the bars on the second diagonal positions ``(2,0)``, ``(1,1)``, ``(0,2)`` in yellow, red and blue respectively. If a bar is not extended we draw it in black (bars which are not extended are completely contained in ``(0,2)``).
###Code
PH = MV_ss.persistent_homology
no_diag = 3
colors = [ "#ffdd66", "#bc4b51", "#468189"]
for diag in range(1, no_diag):
start_rad = min(PH[diag].barcode[:,0])
end_rad = max(PH[diag].barcode[:,1])
persistence = end_rad - start_rad
fig, ax = plt.subplots(figsize = (20,9))
ax = plt.axes(frameon=False)
# ax = plt.axes()
step = (persistence /2) / PH[diag].dim
width = (step/6.)
y_coord = 0
for b in PH[diag].barcode:
current_rad = b[0]
for k in range(diag + 1):
if k == diag and current_rad == b[0]:
break
if len(MV_ss.Hom[MV_ss.no_pages - 1][diag - k][k].barcode) != 0:
for i, rad in enumerate(MV_ss.Hom[
MV_ss.no_pages - 1][diag - k][k].barcode[:,0]):
if np.allclose(rad, current_rad):
next_rad = MV_ss.Hom[
MV_ss.no_pages - 1][diag - k][k].barcode[i,1]
ax.fill([current_rad, next_rad, next_rad, current_rad],
[y_coord,y_coord,y_coord+step,y_coord+step],
c=colors[k + no_diag - diag - 1])
current_rad = next_rad
# end if
# end for
# end if
# end for
if current_rad < b[1]:
ax.fill([current_rad, b[1], b[1], current_rad],
[y_coord,y_coord,y_coord+step,y_coord+step],
c="#031926")
# end if
y_coord = y_coord + 2 * step
# end for
# Show figure
ax.axes.get_yaxis().set_visible(False)
ax.set_xlim([start_rad, end_rad])
ax.set_ylim([-step, y_coord + step])
plt.show()
###Output
_____no_output_____ |
src/experiments/Attention_UNet-t2.ipynb | ###Markdown
Dependencies
###Code
import sys
sys.path.append("../")
import math
from tqdm import tqdm
import numpy as np
import tensorflow as tf
from PIL import Image
from tqdm import tqdm
import matplotlib.pyplot as plt
from IPython.display import clear_output
from lib.models.Attention_UNet_t2 import Attention_UNet
import lib.utils as utils
import IPython.display as ipd
###Output
_____no_output_____
###Markdown
Loading experiment data
###Code
#set experiment ID
EXP_ID = "Attention_UNet_t2"
utils.create_experiment_folders(EXP_ID)
utils.load_experiment_data()
###Output
_____no_output_____
###Markdown
Model instantiation
###Code
model = Attention_UNet()
model.build((None,128,128,1))
print(model.summary())
###Output
Model: "attention_u_net"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
max_pooling2d (MaxPooling2D) multiple 0
_________________________________________________________________
sequential (Sequential) (None, 128, 128, 32) 448
_________________________________________________________________
sequential_1 (Sequential) (None, 64, 64, 64) 18752
_________________________________________________________________
sequential_2 (Sequential) (None, 32, 32, 128) 74368
_________________________________________________________________
sequential_3 (Sequential) (None, 16, 16, 256) 296192
_________________________________________________________________
sequential_4 (Sequential) (None, 8, 8, 512) 1182208
_________________________________________________________________
sequential_5 (Sequential) (None, 4, 4, 1024) 4723712
_________________________________________________________________
conv2d_transpose (Conv2DTran multiple 4719104
_________________________________________________________________
conv2d_6 (Conv2D) multiple 262656
_________________________________________________________________
conv2d_7 (Conv2D) multiple 262656
_________________________________________________________________
conv2d_8 (Conv2D) multiple 262656
_________________________________________________________________
sequential_6 (Sequential) (None, 8, 8, 512) 4721152
_________________________________________________________________
conv2d_transpose_1 (Conv2DTr multiple 1179904
_________________________________________________________________
conv2d_10 (Conv2D) multiple 65792
_________________________________________________________________
conv2d_11 (Conv2D) multiple 65792
_________________________________________________________________
conv2d_12 (Conv2D) multiple 65792
_________________________________________________________________
sequential_7 (Sequential) (None, 16, 16, 256) 1180928
_________________________________________________________________
conv2d_transpose_2 (Conv2DTr multiple 295040
_________________________________________________________________
conv2d_14 (Conv2D) multiple 16512
_________________________________________________________________
conv2d_15 (Conv2D) multiple 16512
_________________________________________________________________
conv2d_16 (Conv2D) multiple 16512
_________________________________________________________________
sequential_8 (Sequential) (None, 32, 32, 128) 295552
_________________________________________________________________
conv2d_transpose_3 (Conv2DTr multiple 73792
_________________________________________________________________
conv2d_18 (Conv2D) multiple 4160
_________________________________________________________________
conv2d_19 (Conv2D) multiple 4160
_________________________________________________________________
conv2d_20 (Conv2D) multiple 4160
_________________________________________________________________
sequential_9 (Sequential) (None, 64, 64, 64) 74048
_________________________________________________________________
conv2d_transpose_4 (Conv2DTr multiple 18464
_________________________________________________________________
conv2d_22 (Conv2D) multiple 1056
_________________________________________________________________
conv2d_23 (Conv2D) multiple 1056
_________________________________________________________________
conv2d_24 (Conv2D) multiple 1056
_________________________________________________________________
sequential_10 (Sequential) (None, 128, 128, 32) 18592
_________________________________________________________________
conv2d_26 (Conv2D) multiple 289
=================================================================
Total params: 19,923,073
Trainable params: 19,917,057
Non-trainable params: 6,016
_________________________________________________________________
None
###Markdown
Loading Dataset
###Code
train_x = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/X_train.npy", mmap_mode='c')
train_y = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/y_train.npy", mmap_mode='c')
qtd_traning = train_x.shape
print("Loaded",qtd_traning, "samples")
valid_x_1 = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/X_val.npy", mmap_mode='c')
valid_y_1 = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/y_val.npy", mmap_mode='c')
qtd_traning = valid_x_1.shape
print("Loaded",qtd_traning, "samples")
###Output
Loaded (92800, 128, 128, 1) samples
###Markdown
Dataset Normalization and Batches split
###Code
value = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/scale_and_shift.npy", mmap_mode='c')
print(value)
SHIFT_VALUE_X, SHIFT_VALUE_Y, SCALE_VALUE_X, SCALE_VALUE_Y = value[0], value[1], value[2], value[3]
# SHIFT_VALUE_X, SHIFT_VALUE_Y, SCALE_VALUE_X, SCALE_VALUE_Y = utils.get_shift_scale_maxmin(train_x, train_y, valid_x_1, valid_y_1)
mini_batch_size = 58
num_train_minibatches = math.floor(train_x.shape[0]/mini_batch_size)
num_val_minibatches = math.floor(valid_x_1.shape[0]/mini_batch_size)
print("train_batches:", num_train_minibatches, "valid_batches:", num_val_minibatches)
###Output
[ -0. -0. 127.97928619 127.98652649]
train_batches: 6397 valid_batches: 1600
###Markdown
Metrics
###Code
#default tf.keras metrics
train_loss = tf.keras.metrics.Mean(name='train_loss')
###Output
_____no_output_____
###Markdown
Set Loss and load model weights
###Code
loss_object = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
#get last saved epoch index and best result in validation step
CURRENT_EPOCH, BEST_VALIDATION = utils.get_model_last_data()
if CURRENT_EPOCH > 0:
print("Loading last model state in epoch", CURRENT_EPOCH)
model.load_weights(utils.get_exp_folder_last_epoch())
print("Best validation result was PSNR=", BEST_VALIDATION)
###Output
_____no_output_____
###Markdown
Training
###Code
@tf.function
def train_step(patch_x, patch_y):
with tf.GradientTape() as tape:
predictions = model(patch_x)
loss = loss_object(patch_y, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
def valid_step(valid_x, valid_y, num_val_minibatches, mini_batch_size):
valid_mse = tf.keras.metrics.MeanSquaredError(name='train_mse')
valid_custom_metrics = utils.CustomMetric()
for i in tqdm(range(num_val_minibatches)):
data_x = valid_x[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_y = valid_y[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_x = tf.convert_to_tensor(data_x, dtype=tf.float32)
data_y = tf.convert_to_tensor(data_y, dtype=tf.float32)
data_x = ((data_x+SHIFT_VALUE_X)/SCALE_VALUE_X)+CONST_GAMA
data_y = ((data_y+SHIFT_VALUE_Y)/SCALE_VALUE_Y)+CONST_GAMA
predictions = model(data_x)
valid_mse(data_y, predictions)
predictions = predictions.numpy()
data_y = data_y.numpy()
#feed the metric evaluator
valid_custom_metrics.feed(data_y, predictions)
#get metric results
psnr, nrmse = valid_custom_metrics.result()
valid_mse_result = valid_mse.result().numpy()
valid_custom_metrics.reset_states()
valid_mse.reset_states()
return psnr, nrmse, valid_mse_result
MAX_EPOCHS = 100
EVAL_STEP = 1
CONST_GAMA = 0.001
for epoch in range(CURRENT_EPOCH, MAX_EPOCHS):
#TRAINING
print("TRAINING EPOCH", epoch)
for k in tqdm(range(0, num_train_minibatches)):
seismic_x = train_x[k * mini_batch_size : k * mini_batch_size + mini_batch_size]
seismic_y = train_y[k * mini_batch_size : k * mini_batch_size + mini_batch_size]
seismic_x = tf.convert_to_tensor(seismic_x, dtype=tf.float32)
seismic_y = tf.convert_to_tensor(seismic_y, dtype=tf.float32)
seismic_x = ((seismic_x+SHIFT_VALUE_X)/SCALE_VALUE_X)+CONST_GAMA
seismic_y = ((seismic_y+SHIFT_VALUE_Y)/SCALE_VALUE_Y)+CONST_GAMA
train_step(seismic_x, seismic_y)
#VALIDATION
if epoch%EVAL_STEP == 0:
clear_output()
print("VALIDATION EPOCH", epoch)
#saving last epoch model
model.save_weights(utils.get_exp_folder_last_epoch(), save_format='tf')
#valid with set 1
print("Validation set")
psnr_1, nrmse_1, mse_1 = valid_step(valid_x_1, valid_y_1, num_val_minibatches, mini_batch_size)
#valid with set 2
#print("Validation set 2")
#psnr_2, nrmse_2, mse_2 = valid_step(valid_x_2, valid_y_2, num_val_minibatches, mini_batch_size)
psnr_2, nrmse_2, mse_2 = 0, 0, 0
#valid with set 3
#print("Validation set 3")
#psnr_3, nrmse_3, mse_3 = valid_step(valid_x_3, valid_y_3, num_val_minibatches, mini_batch_size)
psnr_3, nrmse_3, mse_3 = 0, 0, 0
utils.update_chart_data(epoch=epoch, train_mse=train_loss.result().numpy(),
valid_mse=[mse_1,mse_2,mse_3], psnr=[psnr_1,psnr_2,psnr_3], nrmse=[nrmse_1,nrmse_2, nrmse_3])
utils.draw_chart()
#saving best validation model
if psnr_1 > BEST_VALIDATION:
BEST_VALIDATION = psnr_1
model.save_weights(utils.get_exp_folder_best_valid(), save_format='tf')
train_loss.reset_states()
utils.draw_chart()
#experimentos results
print(utils.get_experiment_results())
#load best model
model.load_weights(utils.get_exp_folder_best_valid())
CONST_GAMA = 0.001
qtd_traning = valid_x_1.shape
print("Loaded",qtd_traning, "samples")
#batches
num_val_minibatches = math.floor(valid_x_1.shape[0]/mini_batch_size)
#metrics
val_mse = tf.keras.metrics.MeanSquaredError(name='val_mse')
val_custom_metrics = utils.CustomMetric()
import json
f = open('/home/arthursrr/Documentos/Audio_Inpainting/Datasets/idx_genders_val.json', "r")
idx_gen = json.loads(f.read())
for k in idx_gen:
for i in tqdm(idx_gen[k]):
data_x = valid_x_1[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_y = valid_y_1[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_x = tf.convert_to_tensor(data_x, dtype=tf.float32)
data_y = tf.convert_to_tensor(data_y, dtype=tf.float32)
data_x = ((data_x+SHIFT_VALUE_X)/SCALE_VALUE_X)+CONST_GAMA
data_y = ((data_y+SHIFT_VALUE_Y)/SCALE_VALUE_Y)+CONST_GAMA
predictions = model(data_x)
val_mse(data_y, predictions)
predictions = predictions.numpy()
data_y = data_y.numpy()
#feed the metric evaluator
val_custom_metrics.feed(data_y, predictions)
#get metric results
psnr, nrmse = val_custom_metrics.result()
val_mse_result = val_mse.result().numpy()
val_custom_metrics.reset_states()
val_mse.reset_states()
print(k ,"\nPSNR:", psnr,"\nNRMSE:", nrmse)
# Closing file
f.close()
###Output
_____no_output_____
###Markdown
Test
###Code
#load best model
model.load_weights(utils.get_exp_folder_best_valid())
test_x = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/X_test.npy", mmap_mode='c')
test_y = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/y_test.npy", mmap_mode='c')
qtd_traning = test_x.shape
print("Loaded",qtd_traning, "samples")
# #normalization
# test_x = utils.shift_and_normalize(test_x, SHIFT_VALUE_X, SCALE_VALUE_X)
# test_y = utils.shift_and_normalize(test_y, SHIFT_VALUE_Y, SCALE_VALUE_Y)
#batches
num_test_minibatches = math.floor(test_x.shape[0]/mini_batch_size)
# test_batches = utils.random_mini_batches(test_x, test_y, None, None, 8, seed=0)
#metrics
test_mse = tf.keras.metrics.MeanSquaredError(name='test_mse')
test_custom_metrics = utils.CustomMetric()
#test
for i in tqdm(range(num_test_minibatches)):
data_x = test_x[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_y = test_y[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_x = tf.convert_to_tensor(data_x, dtype=tf.float32)
data_y = tf.convert_to_tensor(data_y, dtype=tf.float32)
data_x = ((data_x+SHIFT_VALUE_X)/SCALE_VALUE_X)+CONST_GAMA
data_y = ((data_y+SHIFT_VALUE_Y)/SCALE_VALUE_Y)+CONST_GAMA
predictions = model(data_x)
test_mse(data_y, predictions)
predictions = predictions.numpy()
data_y = data_y.numpy()
#feed the metric evaluator
test_custom_metrics.feed(data_y, predictions)
#just show the first example of each batch until 5
# print("Spatial domain: X - Y - PREDICT - DIFF")
# plt.imshow(np.hstack((data_x[0,:,:,0], data_y[0,:,:,0], predictions[0,:,:,0], np.abs(predictions[0,:,:,0]-seismic_y[0,:,:,0]))) , cmap='Spectral', vmin=0, vmax=1)
# plt.axis('off')
# plt.pause(0.1)
    #ATTENTION!!
#predictions = inv_shift_and_normalize(predictions, SHIFT_VALUE_Y, SCALE_VALUE_Y)
#np.save(outfile_path, predictions)
#get metric results
psnr, nrmse = test_custom_metrics.result()
test_mse_result = test_mse.result().numpy()
test_custom_metrics.reset_states()
test_mse.reset_states()
print("PSNR:", psnr,"\nNRMSE", nrmse)
#load best model
model.load_weights(utils.get_exp_folder_best_valid())
CONST_GAMA = 0.001
test_x = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/X_test.npy", mmap_mode='c')
test_y = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/y_test.npy", mmap_mode='c')
qtd_traning = test_x.shape
print("Loaded",qtd_traning, "samples")
# #normalization
# test_x = utils.shift_and_normalize(test_x, SHIFT_VALUE_X, SCALE_VALUE_X)
# test_y = utils.shift_and_normalize(test_y, SHIFT_VALUE_Y, SCALE_VALUE_Y)
#batches
num_test_minibatches = math.floor(test_x.shape[0]/mini_batch_size)
# test_batches = utils.random_mini_batches(test_x, test_y, None, None, 8, seed=0)
#metrics
test_mse = tf.keras.metrics.MeanSquaredError(name='test_mse')
test_custom_metrics = utils.CustomMetric()
import json
f = open ('/home/arthursrr/Documentos/Audio_Inpainting/Datasets/idx_genders_test.json', "r")
idx_gen = json.loads(f.read())
for k in idx_gen:
for i in tqdm(idx_gen[k]):
data_x = test_x[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_y = test_y[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_x = tf.convert_to_tensor(data_x, dtype=tf.float32)
data_y = tf.convert_to_tensor(data_y, dtype=tf.float32)
data_x = ((data_x+SHIFT_VALUE_X)/SCALE_VALUE_X)+CONST_GAMA
data_y = ((data_y+SHIFT_VALUE_Y)/SCALE_VALUE_Y)+CONST_GAMA
predictions = model(data_x)
test_mse(data_y, predictions)
predictions = predictions.numpy()
data_y = data_y.numpy()
#feed the metric evaluator
test_custom_metrics.feed(data_y, predictions)
#get metric results
psnr, nrmse = test_custom_metrics.result()
test_mse_result = test_mse.result().numpy()
test_custom_metrics.reset_states()
test_mse.reset_states()
print(k ,"\nPSNR:", psnr,"\nNRMSE:", nrmse)
# Closing file
f.close()
def griffin_lim(S, frame_length=256, fft_length=255, stride=64):
'''
TensorFlow implementation of Griffin-Lim
Based on https://github.com/Kyubyong/tensorflow-exercises/blob/master/Audio_Processing.ipynb
'''
S = tf.expand_dims(S, 0)
S_complex = tf.identity(tf.cast(S, dtype=tf.complex64))
y = tf.signal.inverse_stft(S_complex, frame_length, stride, fft_length=fft_length)
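    # Griffin-Lim: iteratively re-estimate the phase while keeping the magnitude spectrogram S fixed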
for i in range(1000):
est = tf.signal.stft(y, frame_length, stride, fft_length=fft_length)
angles = est / tf.cast(tf.maximum(1e-16, tf.abs(est)), tf.complex64)
y = tf.signal.inverse_stft(S_complex * angles, frame_length, stride, fft_length=fft_length)
return tf.squeeze(y, 0)
model.load_weights(utils.get_exp_folder_best_valid())
test_x = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/X_test.npy", mmap_mode='c')
test_y = np.load("/mnt/backup/arthur/Free_Music_Archive/Spectrogramas/y_test.npy", mmap_mode='c')
qtd_traning = test_x.shape
print("Loaded",qtd_traning, "samples")
#batches
num_test_minibatches = math.floor(test_x.shape[0]/mini_batch_size)
#metrics
test_mse = tf.keras.metrics.MeanSquaredError(name='test_mse')
test_custom_metrics = utils.CustomMetric()
i=5000
CONST_GAMA = 0.001
data_x = test_x[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_y = test_y[i * mini_batch_size : i * mini_batch_size + mini_batch_size]
data_x = tf.convert_to_tensor(data_x, dtype=tf.float32)
data_norm = ((data_x+SHIFT_VALUE_X)/SCALE_VALUE_X)+CONST_GAMA
predictions = model(data_norm)
predictions = utils.inv_shift_and_normalize(predictions, SHIFT_VALUE_Y, SCALE_VALUE_Y)
predictions
audio_pred = None
for i in range(0, 58):
if i==0:
audio_pred = predictions[i,:,:,0]
else:
audio_pred = np.concatenate((audio_pred, predictions[i,:,:,0]), axis=0)
audio_pred.shape
audio_corte = None
for i in range(0, 58):
if i==0:
audio_corte = data_x[i,:,:,0]
else:
audio_corte = np.concatenate((audio_corte, data_x[i,:,:,0]), axis=0)
audio_corte.shape
audio_original = None
for i in range(0, 58):
if i==0:
audio_original = data_y[i,:,:,0]
else:
audio_original = np.concatenate((audio_original, data_y[i,:,:,0]), axis=0)
audio_original.shape
wave_original = griffin_lim(audio_original, frame_length=256, fft_length=255, stride=64)
ipd.Audio(wave_original, rate=16000)
wave_corte = griffin_lim(audio_corte, frame_length=256, fft_length=255, stride=64)
ipd.Audio(wave_corte, rate=16000)
wave_pred = griffin_lim(audio_pred, frame_length=256, fft_length=255, stride=64)
ipd.Audio(wave_pred, rate=16000)
import soundfile as sf
sf.write('x.wav', wave_corte, 16000, subtype='PCM_16')
sf.write('pred.wav', wave_pred, 16000, subtype='PCM_16')
###Output
_____no_output_____ |
src/wxyz_notebooks/src/wxyz/notebooks/Examples/SVG.ipynb | ###Markdown
SVGBox
###Code
from pathlib import Path
import ipywidgets as W, traitlets as T, IPython as I, lxml.etree as E
from wxyz.svg import SVGBox
from wxyz.svg.widget_svg import DEFAULT_ATTR
from wxyz.html import Fullscreen
from wxyz.lab import DockPop
from wxyz.notebooks import Examples
###Output
_____no_output_____
###Markdown
Consider an `ipywidgets.VBox` or `ipywidgets.HBox`.
###Code
sliders = [W.FloatSlider(description=x, orientation='vertical', min=-100, max=100) for x in "xyz"]
box = W.HBox(sliders, description="Woo")
if __name__ == "__main__":
I.display.display(DockPop([box], mode="split-right"))
###Output
_____no_output_____
###Markdown
The `wxyz.svg.SVGBox` is like an `ipywidgets.HBox` or `VBox`, except that `children` are laid out to fill the extent of elements of an SVG. An SVG First we need an SVG, like [`example.svg`](./example.svg), authored in [Inkscape](https://inkscape.org).
###Code
example = Path(Examples.__file__).parent / "example.svg"
svg = SVGBox(
sliders,
svg_file=str(example),
area_widgets=dict(x=0, y=1, z=2),
layout=dict(width="100%", height="100%", overflow="hidden")
)
box.children = [svg]
###Output
_____no_output_____
###Markdown
Areas It contains a number of elements that have an _XML namespaced attribute_ defined in `SVGBox.area_attr`. These areas can be replaced with `children`.
###Code
area_widgets = W.HTML()
T.dlink((svg, "area_widgets"), (area_widgets, "value"), lambda x: f"area widgets {x}")
if __name__ == "__main__":
I.display.display(area_widgets)
###Output
_____no_output_____
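###Markdown
As a sketch (assuming `area_widgets` simply maps SVG area names to indices into `children`, as the constructor call above suggests), the mapping could be reassigned at runtime to move the sliders between areas:
###Code
svg.area_widgets = dict(x=2, y=1, z=0)  # hypothetical remap: swap the widgets shown in areas x and z
###Output
_____no_output_____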
###Markdown
VisibilityThe visibility of many parts of the display can be controlled:
###Code
visible_areas = W.Combobox(description="Visible", options=['*', 'x', '(x|z)'])
show_svg = W.Checkbox(description="show SVG")
T.dlink((visible_areas, "value"), (svg, "visible_areas"), lambda x: [x])
T.link((svg, "show_svg"), (show_svg, "value"))
if __name__ == "__main__":
I.display.display(visible_areas, show_svg)
###Output
_____no_output_____
###Markdown
Pan and Zoom[d3-zoom](https://github.com/d3/d3-zoom) is used for pan and zoom behaviors, and can be read/written.> **Warning** having multiple copies of the same SVG introduces instability due to `id` clashes, etc.
###Code
zoom_lock = W.Checkbox(description="zoom lock"); W.jslink((svg, "zoom_lock"), (zoom_lock, "value"))
zoom_x = W.FloatSlider(description="zoom_x", min=-100, max=100); W.jslink((svg, "zoom_x"), (zoom_x, "value"))
zoom_y = W.FloatSlider(description="zoom_y", min=-100, max=100); W.jslink((svg, "zoom_y"), (zoom_y, "value"))
zoom_k = W.FloatSlider(description="zoom_k", min=0.01, max=3); W.jslink((svg, "zoom_k"), (zoom_k, "value"));
[W.jslink((s, "value"), (svg, f"zoom_{s.description}")) for s in sliders if s.description in "xy"]
[W.jslink((sliders[i], "value"), (svg, f"zoom_{s}")) for i, s in enumerate("xyk")];
if __name__ == "__main__":
I.display.display(zoom_lock, zoom_x, zoom_y, zoom_k)
app = W.VBox([
W.HBox([show_svg, area_widgets, visible_areas]),
svg,
W.HBox([zoom_lock, zoom_x, zoom_y, zoom_k])
])
box.children = [app]
if __name__ == "__main__":
with __import__("importnb").Notebook():
from wxyz.notebooks import Utils
Utils.maybe_log_widget_counts()
###Output
_____no_output_____ |
docs/samples/TensorFlow/Basic RNN TensorFlow Model Trained on Simulated Data.ipynb | ###Markdown
Time series prediction using RNNs, with TensorFlowThis notebook illustrates:* Creating a Recurrent Neural Network in TensorFlow* Creating a Custom Estimator in tf.contrib.learnThis notebook was adapted from [the work](http://dataconomy.com/2017/05/how-to-do-time-series-prediction-using-rnns-tensorflow-and-cloud-ml-engine/) originally published by VALLIAPPA LAKSHMANAN.Send any feedback to [email protected]. DataWe simulate a set of sinusoids with random amplitudes and frequencies.
###Code
import numpy as np
import seaborn as sns
import pandas as pd
SEQ_LEN = 40
def create_time_series():
freq = (np.random.random() * 0.5) + 0.1 # 0.1 to 0.6
ampl = np.random.random() + 0.5 # 0.5 to 1.5
x = np.sin(np.arange(0, SEQ_LEN) * freq) * ampl
return x
for i in xrange(0, 5):
sns.tsplot( create_time_series() ); # 5 series
# Save the data to disk.
def to_csv(filename, N):
with open(filename, 'w') as ofp:
for lineno in xrange(0, N):
seq = create_time_series()
line = ",".join(map(str, seq))
ofp.write(line + '\n')
to_csv('train.csv', 1003)
to_csv('eval.csv', 108)
!head -1 eval.csv
###Output
_____no_output_____
###Markdown
Our CSV file sequences consist of 40 numbers. Each number is one input, and the prediction target is the next number, given the previous numbers as history. With a 40-number instance as input, the model produces one output per step. For training, each instance's numbers 0~38 are the inputs and numbers 1~39 are the truth, i.e. 39 input/output pairs per instance. For prediction, it is like "given a series of numbers, predict the next n numbers". ModelWe will create a recurrent neural network model based on TensorFlow.For more info on RNN, see:* http://colah.github.io/posts/2015-08-Understanding-LSTMs/ for the theory* https://www.tensorflow.org/tutorials/recurrent for explanations* https://github.com/tensorflow/models/tree/master/tutorials/rnn/ptb for sample code We will use TensorFlow's [Estimator](https://www.tensorflow.org/api_docs/python/tf/contrib/learn/Estimator) to build our model. Estimators help construct the training/evaluation/prediction graph. They reuse the common graph, and fork only when needed (i.e. input_fn). They also handle model export. Models exported can be deployed to Google Cloud ML Engine for online prediction.
###Code
import tensorflow as tf
import shutil
import tensorflow.contrib.learn as tflearn
import tensorflow.contrib.layers as tflayers
from tensorflow.contrib.learn.python.learn import learn_runner
from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils
import tensorflow.contrib.rnn as rnn
# tf.decode_csv requires DEFAULTS to infer data types and default values.
DEFAULTS = [[0.0] for x in xrange(0, SEQ_LEN)]
# The Estimator API requires named features.
TIMESERIES_FEATURE_NAME = 'rawdata'
# Training batch size.
BATCH_SIZE = 25
###Output
_____no_output_____
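###Markdown
As a minimal sketch of the split described above (using a short toy NumPy sequence instead of a real CSV row; the `toy_*` names are just for illustration), the inputs are the sequence without its last value and the targets are the same sequence shifted forward by one step:
###Code
import numpy as np
toy_seq = np.arange(8)   # stands in for one CSV row of SEQ_LEN values
toy_x = toy_seq[:-1]     # values at indices 0..n-2 are the inputs
toy_y = toy_seq[1:]      # values at indices 1..n-1 are the targets
print(toy_x)
print(toy_y)
###Output
_____no_output_____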
###Markdown
InputOur CSV file structure is quite simple -- a bunch of floating point numbers (note the type of DEFAULTS). We ask for the data to be read BATCH_SIZE sequences at a time.
###Code
def create_input_fn(filename, mode=tf.contrib.learn.ModeKeys.TRAIN):
"""Creates an input_fn for estimator in training or evaluation."""
def _input_fn():
"""Returns named features and labels, as required by Estimator."""
# could be a path to one file or a file pattern.
input_file_names = tf.train.match_filenames_once(filename)
filename_queue = tf.train.string_input_producer(
input_file_names, num_epochs=None, shuffle=True)
reader = tf.TextLineReader()
_, value = reader.read_up_to(filename_queue, num_records=BATCH_SIZE)
# parse the csv values
batch_data = tf.decode_csv(value, record_defaults=DEFAULTS)
batch_data = tf.transpose(batch_data) # [BATCH_SIZE, SEQ_LEN]
# Get x and y. They are both of shape [BATCH_SIZE, SEQ_LEN - 1]
batch_len = tf.shape(batch_data)[0]
x = tf.slice(batch_data, [0, 0], [batch_len, SEQ_LEN-1])
y = tf.slice(batch_data, [0, 1], [batch_len, SEQ_LEN-1])
return {TIMESERIES_FEATURE_NAME: x}, y # dict of features, target
return _input_fn
###Output
_____no_output_____
###Markdown
Inference GraphFollowing Estimator's requirements, we will create a model_fn representing the inference model. Note that this function defines the graph that will be used in training, evaluation and prediction.To supply a model function to the Estimator API, you need to return a ModelFnOps. The rest of the function creates the necessary objects.
###Code
# We will define one LSTM layer. That's the size of LSTM units.
LSTM_SIZE = 10
def model_fn(features, targets, mode):
"""Define the inference model."""
uniform_initializer = tf.random_uniform_initializer(minval=-0.08, maxval=0.08)
input_seq = features[TIMESERIES_FEATURE_NAME]
# RNN requires input tensor rank > 2. Adding one dimension.
input_seq = tf.expand_dims(input_seq, axis=-1)
# LSTM output will be [BATCH_SIZE, SEQ_LEN - 1, lstm_output_size]
lstm_cell = rnn.BasicLSTMCell(LSTM_SIZE)
lstm_outputs, _ = tf.nn.dynamic_rnn(cell=lstm_cell,
inputs=input_seq,
dtype=tf.float32)
# Reshape to [BATCH_SIZE * (SEQ_LEN - 1), lstm_output] so it is 2-D and can
# be fed to next layer.
lstm_outputs = tf.reshape(lstm_outputs, [-1, lstm_cell.output_size])
    # Add fully-connected hidden layers on top of the LSTM layer.
hidden1 = tf.contrib.layers.fully_connected(inputs=lstm_outputs,
num_outputs=100,
activation_fn=None,
weights_initializer=uniform_initializer,
biases_initializer=uniform_initializer)
    hidden2 = tf.contrib.layers.fully_connected(inputs=hidden1,
num_outputs=50,
activation_fn=None,
weights_initializer=uniform_initializer,
biases_initializer=uniform_initializer)
predictions = tf.contrib.layers.fully_connected(inputs=hidden2,
num_outputs=1,
activation_fn=None,
weights_initializer=uniform_initializer,
biases_initializer=uniform_initializer)
# predictions are all we need when mode is not train/eval.
predictions_dict = {"predicted": predictions}
# If train/evaluation, we'll need to compute loss.
# If train, we will also need to create an optimizer.
loss, train_op, eval_metric_ops = None, None, None
if mode == tf.contrib.learn.ModeKeys.TRAIN or mode == tf.contrib.learn.ModeKeys.EVAL:
# Note: The reshape below is needed because Estimator needs to know
# loss shape. Without reshaping below, loss's shape would be unknown.
targets = tf.reshape(targets, [tf.size(targets)])
predictions = tf.reshape(predictions, [tf.size(predictions)])
loss = tf.losses.mean_squared_error(targets, predictions)
eval_metric_ops = {
"rmse": tf.metrics.root_mean_squared_error(targets, predictions)
}
if mode == tf.contrib.learn.ModeKeys.TRAIN:
# The learning rate here is unusually high, because we don't add any noise
# to training/evaluation data and overfitting is not a big problem.
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=0.1,
optimizer="Adagrad")
# return ModelFnOps as Estimator requires.
return tflearn.ModelFnOps(
mode=mode,
predictions=predictions_dict,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_metric_ops)
###Output
_____no_output_____
###Markdown
TrainingDistributed training is launched using an Experiment. The key line here is that we use Estimator rather than, say, DNNRegressor. This allows us to provide a model_fn, which will be our RNN defined above. Note also that we specify a serving_input_fn -- this is how we parse the input data provided to us at prediction time using gcloud or Cloud ML Online Prediction.
###Code
def get_train():
return create_input_fn('train.csv', mode=tf.contrib.learn.ModeKeys.TRAIN)
def get_eval():
return create_input_fn('eval.csv', mode=tf.contrib.learn.ModeKeys.EVAL)
def serving_input_fn():
feature_placeholders = {
TIMESERIES_FEATURE_NAME: tf.placeholder(tf.float32, [None, None])
}
return tflearn.utils.input_fn_utils.InputFnOps(
feature_placeholders,
None,
feature_placeholders
)
def experiment_fn(output_dir):
"""An experiment_fn required for Estimator API to run training."""
estimator = tflearn.Estimator(model_fn=model_fn,
model_dir=output_dir,
config=tf.contrib.learn.RunConfig(save_checkpoints_steps=500))
return tflearn.Experiment(
estimator,
train_input_fn=get_train(),
eval_input_fn=get_eval(),
export_strategies=[saved_model_export_utils.make_export_strategy(
serving_input_fn,
default_output_alternative_key=None,
exports_to_keep=1
)],
train_steps=1000
)
shutil.rmtree('training', ignore_errors=True) # start fresh each time.
learn_runner.run(experiment_fn, 'training')
###Output
_____no_output_____
###Markdown
Model SummaryWe can plot the model's training summary events using Datalab's ML library.
###Code
from google.datalab.ml import Summary
summary = Summary('./training')
summary.plot(['OptimizeLoss/loss', 'loss'])
###Output
_____no_output_____
###Markdown
PredictionWe will generate another instance for prediction which is independent on training or evaluation data.
###Code
prediction_data = create_time_series()
# First 30 values as x, Last 10 values as y.
prediction_x = list(prediction_data[:30])
prediction_y = list(prediction_data[30:])
print('x\n%s\n' % prediction_x)
print('y\n%s' % prediction_y)
sns.tsplot(prediction_x, color='blue')
y_truth_curve = [np.nan] * (len(prediction_x)-1) + [prediction_x[-1]] + prediction_y
sns.tsplot(y_truth_curve, color='green')
###Output
_____no_output_____
###Markdown
The first prediction we will do is just sending x; for each value in x the model returns a predicted value. We can then compare the predicted values with the truth (the input series shifted forward by one step).
###Code
# Load model.
estimator = tflearn.Estimator(model_fn=model_fn, model_dir='training')
# Feed Prediction data.
predict_input_fn = lambda: {TIMESERIES_FEATURE_NAME: [prediction_x]}
predicted = list(estimator.predict(input_fn=predict_input_fn))
predicted = [p['predicted'] for p in predicted]
# Plot prediction source.
sns.tsplot(prediction_x, color='green')
# Plot predicted values.
sns.tsplot([prediction_x[0]] + predicted, color='red');
###Output
_____no_output_____
###Markdown
The next prediction sends x and predicts the next n values. We make n predictions, each time taking only the last predicted value and appending it to x as the source for the next prediction.
###Code
estimator = tflearn.Estimator(model_fn=model_fn, model_dir='training')
# Prediction data starts with x.
x_total = list(prediction_x)
# Make n predictions.
for i in range(len(prediction_y)):
predict_input_fn = lambda: {TIMESERIES_FEATURE_NAME: [x_total]}
p = list(estimator.predict(input_fn=predict_input_fn))
# For each step, append the tail element of last predicted values.
x_total.append(p[-1]['predicted'])
# The first len(prediction_x) elements are prediction source. So remove them.
y_predicted = x_total[len(prediction_x):]
# Zero out prediction source (making them nan), add the last value of prediction source
# so the first edge in the curve is plotted, and add predicted values.
y_predicted_curve = [np.nan] * (len(prediction_x)-1) + [prediction_x[-1]] + y_predicted
# Plot prediction source.
sns.tsplot(prediction_x, color='blue')
# Plot truth curve.
sns.tsplot(y_truth_curve, color='green')
# Plot predicted curve.
sns.tsplot(y_predicted_curve, color='red');
###Output
_____no_output_____ |
docs/tutorials/1_exploring-data.ipynb | ###Markdown
Exploring dataWhen working with a dataset, the first step is usually to understand what data and metadata it contains.In this chapter we explore how scipp supports this.This tutorial contains exercises, but solutions are included directly.We encourage you to download this notebook and run through it step by step before looking at the solutions.We recommend using a recent version of *JupyterLab*:The solutions are included as hidden cells and shown only on demand.First, in addition to importing `scipp`, we import `scippneutron` since this is required for loading Nexus files:
###Code
import scipp as sc
import scippneutron as scn
###Output
_____no_output_____
###Markdown
We start by loading some data, in this case measured with a prototype of the [LoKI](https://europeanspallationsource.se/instruments/loki) detectors at the [LARMOR beamline](https://www.isis.stfc.ac.uk/Pages/Larmor.aspx):
###Code
data = scn.data.tutorial_dense_data()
###Output
_____no_output_____
###Markdown
In practice, we would use [scippneutron.load](https://scipp.github.io/scippneutron/generated/scippneutron.load.htmlscippneutron.load) or [scippneutron.load_nexus](https://scipp.github.io/scippneutron/generated/scippneutron.load_nexus.htmlscippneutron.load_nexus) to load the data from a NeXus file, but the tutorial data comes bundled with scippneutron to make it easily available.See [Tutorial and Test Data](../developer/getting-started.rsttutorial-and-test-data) for a way to customize where the data is stored.Note that the exercises in the following are fictional and do not represent the actual SANS data reduction workflow. Step 1: Use the HTML representation to see what the loaded data containsThe HTML representation is what Jupyter displays for a scipp object.- Take some time to explore this view and try to understand all the information (dimensions, dtypes, units, ...).- Note that sections can be expanded, and values can shown by clicking the icons to the right.
###Code
data
###Output
_____no_output_____
###Markdown
Whenever you get stuck with one of the exercises below we recommend consulting the HTML representations of the objects you are working with. Step 2: Plot the dataScipp objects (variables, data arrays, or datasets) can be plotted using the `plot()` method.Alternatively `sc.plot(obj)` can be used, e.g., when `obj` is a Python `dict` of scipp data arrays.Since this is neutron-scattering data, we can also use the "instrument view", provided by `scn.instrument_view(obj)` (assuming `scippneutron` was imported as `scn`). Exercise- Plot the loaded data and familiarize yourself with the controls.- Create the instrument view and familiarize yourself with the controls. Solution
###Code
data.plot()
scn.instrument_view(data)
###Output
_____no_output_____
###Markdown
Step 3: Exploring meta dataAbove we saw that many attributes are scalar variables with `dtype=DataArray`.The single value in a scalar variable is accessed using the `value` property.Compare:
###Code
data.attrs['proton_charge_by_period']
data.attrs['proton_charge_by_period'].value
###Output
_____no_output_____
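###Markdown
As a minimal sketch of the same `value` accessor on a freshly created scalar variable (the `speed` name is just for illustration):
###Code
speed = sc.scalar(5.4, unit='m/s')  # a 0-D (scalar) variable with a unit
print(speed.value)                  # the bare Python value, 5.4
print(speed.unit)                   # the associated unit, m/s
###Output
_____no_output_____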
###Markdown
Exercise1. Find some attributes of `data` with `dtype=DataArray` and plot their `value`. Also try `sc.table(attr.value)` to show a table representation (where `attr` is an attribute of your choice).2. Find and plot a monitor.3. Try to normalize `data` to monitor 1. Why does this fail?4. Plot all the monitors on the same plot. Note that `sc.plot()` can be used with a Python `dict` for this purpose: `sc.plot({'a':something, 'b':else})`.5. Convert all the monitors from `'tof'` to `'wavelength'` using, e.g., `mon1_wav = scn.convert(mon1, 'tof', 'wavelength', scatter=False)`.6. Inspect the HTML view and note how the "unit conversion" changed the dimensions and units.7. Re-plot all the monitors on the same plot, now in `'wavelength'`. Solution
###Code
sc.table(data.attrs['DCMagField2'].value)
try:
data / data.attrs['monitor1'].value
except sc.CoordError:
print(
"""Data and monitor are in unit TOF, but pixels and monitors
are at different position, so data is not comparable."""
)
###Output
_____no_output_____
###Markdown
A monitor can be converted to wavelength as follows:
###Code
mon1 = data.attrs['monitor1'].value
scn.convert(mon1, 'tof', 'wavelength', scatter=False)
monitors = {f'monitor{i}': data.attrs[f'monitor{i}'].value for i in [1, 2, 3, 4, 5]}
sc.plot(monitors, norm='log')
converted_monitors = {
f'monitor{i}': scn.convert(
data.attrs[f'monitor{i}'].value, 'tof', 'wavelength', scatter=False
)
for i in [1, 2, 3, 4, 5]
}
sc.plot(converted_monitors, norm='log')
###Output
_____no_output_____
###Markdown
Step 4: Fixing metadata ExerciseConsider the following (hypothetical) problems with the metadata stored in `data`:1. The `sample_position` coord (`data.coords['sample_position']`) is wrong, shift the sample by `delta = sc.vector(value=np.array([0.01,0.01,0.04]), unit='m')`.2. Because of a glitch in the timing system the time-of-flight has an offset of $2.3~\mu s$. Fix the corresponding coordinate.3. Use the HTML view of `data` to verify that you applied the corrections/calibrations there, rather than in a copy. Solution
###Code
data.coords['sample_position'] += sc.vector(value=[0.01, 0.01, 0.04], unit='m')
data.coords['tof'] += 2.3 * sc.Unit('us') # note how we forgot to fix the monitor's TOF
data
###Output
_____no_output_____
###Markdown
Note how adding such offsets fails if we fail to specify a unit:
###Code
try:
data.coords['tof'] += 2.3
except sc.UnitError as e:
print(e)
###Output
_____no_output_____
###Markdown
This has several advantages:- We are protected from accidental errors. If someone changes the unit of data or metadata without our knowledge, e.g., from `us` to `ms`, this mechanism protects us from silent errors corrupting the data.- It makes the code clearer and more readable, both for others as well as for our future selves. The offset to the sample could also be done component-wise using the special `fields` property of variables with "vector" dtype, e.g.,
###Code
data.coords['sample_position'].fields.z += 0.001 * sc.Unit('m')
###Output
_____no_output_____
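###Markdown
A minimal sketch of the `fields` accessor on a standalone vector variable (the `offset` name is just for illustration; the same pattern is used on `sample_position` above):
###Code
offset = sc.vector(value=[0.1, 0.2, 0.3], unit='m')
print(offset.fields.z)                  # the z component as its own variable
offset.fields.z += 0.05 * sc.Unit('m')  # modify a single component in place
print(offset)
###Output
_____no_output_____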
###Markdown
Step 5: A closer look at the dataThe 2-D plot we obtain above by default is often not very enlightening.Define:
###Code
counts = data.sum('tof')
###Output
_____no_output_____
###Markdown
Exercise1. Create a plot of `counts` and also try the instrument view.2. How many counts are there in total, in all spectra combined?3. Plot a single spectrum of `data` as a 1-D plot using the slicing syntax `array[dim_name, integer_index]` to access the spectrum. Solution
###Code
# slice is optional, making plot more readable in the documentation
counts['spectrum', 56000:62000].plot()
scn.instrument_view(counts, norm='log')
# counts.sum('spectrum') # would be another solution
data.sum().value
data['spectrum', 10000].plot()
###Output
_____no_output_____
###Markdown
Volumetric detectorsAs seen in the instrument view the detectors consist of 4 layers of tubes, each containing 7 straws.Let us try to split up our data, so we can compare layers.There are other (and probably better) ways to do this, but here we try to define an integer variable containing a layer index:
###Code
z = data.coords['position'].fields.z
near = z.min()
far = z.max()
layer = ((z - near) * 400).astype('int32')
layer.unit = ''
layer.plot()
###Output
_____no_output_____
###Markdown
Exercise- Change the magic parameter `400` in the cell above until pixels fall cleanly into layers, either 4 layers of tubes or 12 layers of straws.- Store `layer` as a new coord in `data`.- Use `data.groupby(group='layer').sum('spectrum')` to group spectra into layers.- Inspect and understand the HTML view of the result.- Plot the result. There are two options: - Use `plot` with `projection='1d'` - Use `sc.plot` after collapsing dimensions, `sc.collapse(grouped, keep='tof')`- Bonus: When grouping by straw layers, there is a different number of straws in the center layer of each tube (3 instead of 2) due to the flower-pattern arrangement of straws. Define a helper data array with data set to 1 for each spectrum (using, e.g., `norm = sc.DataArray(data=sc.ones_like(layer), coords={'layer':layer})`), group by layers and sum over spectrum as above, and use this result to normalize the layer-grouped data from above to spectrum count. Solution
###Code
# NOTE:
# - set magic factor to, e.g., 150 to group by straw layer
# - set magic factor to, e.g., 40 to group by tube layer
layer = ((z - near) * 150).astype(sc.dtype.int32)
layer.unit = ''
data.coords['layer'] = layer
grouped = data.groupby(group='layer').sum('spectrum')
grouped.plot(projection='1d')
sc.plot(sc.collapse(grouped, keep='tof'))
norm = sc.DataArray(data=sc.ones_like(layer), coords={'layer': layer})
norm = norm.groupby(group='layer').sum('spectrum')
sc.plot(sc.collapse(grouped / norm, keep='tof'))
###Output
_____no_output_____ |
docs/B.05-Simple_Stop_Watch_using_Interrupts.ipynb | ###Markdown
*This notebook contains material from [cbe61622](https://jckantor.github.io/cbe61622);content is available [on Github](https://github.com/jckantor/cbe61622.git).* B.5 Simple Stop Watch using Interrupts B.5.1 Particle CLI B.5.1.1 Installation
###Code
%%capture
!bash <( curl -sL https://particle.io/install-cli )
# path to the particle cli. May be environment dependent.
particle_cli = "/root/bin/particle"
###Output
_____no_output_____
###Markdown
B.5.1.2 Utility functions
###Code
import re
import subprocess
# regular expression to strip ansi control characters
ansi = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
# decode byte string and strip ansi control characters
def decode_bytes(byte_string):
if isinstance(byte_string, bytes):
result = byte_string.decode("utf-8")
return ansi.sub("", result)
# streamline call to the particle-cli
def particle(args):
process = subprocess.run(["/root/bin/particle"] + args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
process.stdout = decode_bytes(process.stdout)
process.stderr = decode_bytes(process.stderr)
return process
###Output
_____no_output_____
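###Markdown
As a quick sketch of how the helper is used (the `list` subcommand also appears later in this notebook; until you log in below it will simply report an error):
###Code
output = particle(["list"])
print(output.stdout if not output.returncode else output.stderr)
###Output
_____no_output_____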
###Markdown
B.5.1.3 Login to Particle
###Code
import getpass
# prompt for username and password
username = getpass.getpass(prompt="Username: ")
password = getpass.getpass(prompt="Password: ")
# attempt login
output = particle(["login", "--username", username, "--password", password])
# report results
if output.returncode:
print(f"Return code = {output.returncode}")
print(output.stderr)
else:
print(output.stdout)
###Output
Username: ··········
Password: ··········
> Successfully completed login!
###Markdown
B.5.1.4 Select a deviceThe following cell downloads a list of all user devices and creates a list of device names. Here we choose the first name in the list for the rest of this notebook. If this is not the device to be used, then modify this cell accordingly.
###Code
devices = [line.split()[0] for line in particle(["list"]).stdout.splitlines()]
device_name = devices[0]
print(particle(["list", device_name]).stdout)
###Output
jck_argon_01 [e00fce68eaceb1faa7cf7193] (Argon) is online
###Markdown
B.5.2 Project: Simple Stop WatchThe goal of this project is to build a simple stop watch. The project will use code previously developed for the Grove 4 digit display, and add a Grove button to control operation of the stop watch. The stop watch will start and stop with a short click of the button, and reset to zero with a long button press. B.5.2.1 Grove ButtonThe Grove Button is a momentary-contact switch with a [pull-down resistor](https://www.seeedstudio.com/blog/2020/02/21/pull-up-resistor-vs-pull-down-differences-arduino-guide/). With a pull-down resistor, the pin value is LOW when the button is not pressed, and becomes HIGH when the button is depressed. B.5.3 Solution 1: Using clickButton library B.5.3.1 Create Project
###Code
print(particle(["project", "create", "--name", "myproject", "."]).stdout)
###Output
Initializing project in directory myproject...
> A new project has been initialized in directory myproject
###Markdown
B.5.3.2 Change working directoryThe Particle CLI assumes one is working in the top project directory.
###Code
%cd myproject
###Output
/content/myproject
###Markdown
B.5.3.3 Add relevant libraries
###Code
print(particle(["library", "add", "Grove_4Digit_Display"]).stdout)
print(particle(["library", "add", "clickButton"]).stdout)
###Output
> Library Grove_4Digit_Display 1.0.2 has been added to the project.
> To get started using this library, run particle library view Grove_4Digit_Display to view the library documentation and sources.
> Library clickButton 1.0.9 has been added to the project.
> To get started using this library, run particle library view clickButton to view the library documentation and sources.
###Markdown
B.5.3.4 Create source file
###Code
%%writefile src/myproject.ino
/* pin assignments */
#define PIN_CLK D2 /* display clock */
#define PIN_DIO D3 /* display data */
#define PIN_BTN D4 /* button */
/* display parameters */
#define DIGITS 4 /* display digits */
#include "Grove_4Digit_Display.h"
#include "clickButton.h"
/* display object */
TM1637 tm1637(PIN_CLK, PIN_DIO);
/* button object */
ClickButton button(PIN_BTN, HIGH);
/* stopwatch state */
unsigned long curr_time;
unsigned long prev_time;
unsigned long display_time;
bool running;
void setup() {
/* setup display */
tm1637.init();
tm1637.set(BRIGHT_TYPICAL);
tm1637.point(POINT_ON);
/* setup button */
pinMode(PIN_BTN, INPUT);
button.debounceTime = 0;
button.multiclickTime = 250;
button.longClickTime = 1000;
/* setup stopwatch */
prev_time = millis();
display_time = 0;
running = FALSE;
}
void loop() {
button.Update();
if (button.clicks > 0) {
running = !running;
} else if (button.clicks < 0) {
display_time = 0;
}
if (running) {
curr_time = millis();
display_time += curr_time - prev_time;
} else {
curr_time = millis();
}
prev_time = curr_time;
display(display_time / 10); /* displaying 100th's of seconds */
}
void display(unsigned int number) {
for (int i = 0; i < 4; i++) {
int digit = DIGITS - 1 - i;
tm1637.display(digit, number % 10);
number /= 10;
}
}
###Output
Overwriting src/myproject.ino
###Markdown
B.5.3.5 Compiling
###Code
print(particle(["compile", "argon", "--saveTo", "myproject.bin"]).stdout)
###Output
Compiling code for argon
Including:
src/myproject.ino
project.properties
attempting to compile firmware
downloading binary from: /v1/binaries/5f91f89f9c09c655b52cb096
saving to: myproject.bin
Memory use:
text data bss dec hex filename
6588 108 1112 7808 1e80 /workspace/target/workspace.elf
Compile succeeded.
Saved firmware to: /content/myproject/myproject.bin
###Markdown
B.5.3.6 Flash firmware
###Code
print(particle(["flash", device_name, "myproject.bin"]).stdout)
###Output
Including:
myproject.bin
attempting to flash firmware to your device jck_argon_01
Flash device OK: Update started
Flash success!
###Markdown
B.5.4 Solution 2: Interrupt Service Routine (ISR)The ``clickButton`` library provides an easy-to-use method of managing the button actions, with provisions for debouncing, multiple clicks, and long clicks, but testing shows the button updates when the button is released, not when it is pressed. This is not consistent with a user's expectation that the clock should stop and start on the press of the button, not on the release.The following cell demonstrates the use of an Interrupt Service Routine to manage the button interface. The key insight here is to respond to both the press and release of the button by specifying ``CHANGE`` in the ``attachInterrupt`` function. This makes it possible to detect a long button press to reset the stop watch display to zero.
###Code
%%writefile src/myproject.ino
/* pin assignments */
#define PIN_CLK D2 /* display clock */
#define PIN_DIO D3 /* display data */
#define PIN_BTN D4 /* button */
/* display parameters */
#define DIGITS 4 /* display digits */
#include "Grove_4Digit_Display.h"
#include "clickButton.h"
/* display object */
TM1637 tm1637(PIN_CLK, PIN_DIO);
/* stopwatch state */
unsigned long curr_time;
unsigned long prev_time;
unsigned long display_time;
volatile unsigned long btn_press_time;
volatile bool btn_is_pressed;
volatile bool running;
void setup() {
/* setup display */
tm1637.init();
tm1637.set(BRIGHT_TYPICAL);
tm1637.point(POINT_ON);
/* setup button */
pinMode(PIN_BTN, INPUT);
btn_press_time = millis();
attachInterrupt(PIN_BTN, on_btn_change, CHANGE);
/* setup stopwatch */
prev_time = millis();
display_time = 0;
running = FALSE;
}
void loop() {
curr_time = millis();
if (running) {
display_time += curr_time - prev_time;
if (btn_is_pressed && ((curr_time - btn_press_time) > 1000)) {
running = FALSE;
display_time = 0;
}
}
prev_time = curr_time;
display(display_time / 10); /* displaying 100th's of seconds */
}
void on_btn_change() {
if (digitalRead(PIN_BTN)==HIGH) {
if ((millis() - btn_press_time) > 50) {
running = !running;
btn_press_time = millis();
btn_is_pressed = TRUE;
}
} else {
btn_is_pressed = FALSE;
}
}
void display(unsigned int number) {
for (int i = 0; i < 4; i++) {
int digit = DIGITS - 1 - i;
tm1637.display(digit, number % 10);
number /= 10;
}
}
print(particle(["compile", "argon", "--saveTo", "myproject.bin"]).stdout)
print(particle(["flash", device_name, "myproject.bin"]).stdout)
###Output
Including:
myproject.bin
attempting to flash firmware to your device jck_argon_01
Flash device OK: Update started
Flash success!
|
mechanistic/IOT_hyperopt-Dec18.ipynb | ###Markdown
Test interocular transfer with OD-biased weights
###Code
# generate stimuli
h_centre = np.zeros((n_units, N_pairs, N_pairs))
h_sur_dom = np.copy(h_centre)
h_sur_non_dom = np.copy(h_centre)
normal_results = np.zeros((n_units,2,3))
biased_results = np.copy(normal_results)
for i in range(n_units):
xi = r_units[i,0]
yi = r_units[i,1]
ori = OP_map[yi,xi]
dom_ocu = np.round(OD_map[yi,xi])
non_dom_ocu = np.abs(dom_ocu-1)
inner_d = 1.5
outer_d = 15
centre = [dx*xi,dx*yi]
h_centre[i] = ssn.generate_mono_stimulus(ori, inner_d, centre, OP_map)
h_sur_dom[i] = ssn.generate_ring_stimulus(ori, inner_d, outer_d, centre, dom_ocu, OP_map=OP_map, OD_map=OD_map)
h_sur_non_dom[i] = ssn.generate_ring_stimulus(ori, inner_d, outer_d, centre, non_dom_ocu, OP_map=OP_map, OD_map=OD_map)
# NORMAL:
ss_net = ssn.SSNetwork(ori_map=OP_map, ocd_map=OD_map)
c0 = 40
st = time.time()
for i in range(n_units):
xi = r_units[i,0]
yi = r_units[i,1]
r_E,r_I = ss_net.run_simulation(c0,h_centre[i])
normal_results[i,0,0] = r_E[yi,xi]
normal_results[i,1,0] = r_I[yi,xi]
r_E,r_I = ss_net.run_simulation(c0,h_centre[i]+h_sur_dom[i])
normal_results[i,0,1] = r_E[yi,xi]
normal_results[i,1,1] = r_I[yi,xi]
r_E,r_I = ss_net.run_simulation(c0,h_centre[i]+h_sur_non_dom[i])
normal_results[i,0,2] = r_E[yi,xi]
normal_results[i,1,2] = r_I[yi,xi]
print "elapsed time: %d minutes" % ((time.time()-st)/60.)
# OD-BIASED:
bias_net = ssn.SSNetwork(ori_map=OP_map, ocd_map=OD_map,od_bias=0.2)
st = time.time()
for i in range(n_units):
xi = r_units[i,0]
yi = r_units[i,1]
r_E,r_I = bias_net.run_simulation(c0,h_centre[i])
biased_results[i,0,0] = r_E[yi,xi]
biased_results[i,1,0] = r_I[yi,xi]
r_E,r_I = bias_net.run_simulation(c0,h_centre[i]+h_sur_dom[i])
biased_results[i,0,1] = r_E[yi,xi]
biased_results[i,1,1] = r_I[yi,xi]
r_E,r_I = bias_net.run_simulation(c0,h_centre[i]+h_sur_non_dom[i])
biased_results[i,0,2] = r_E[yi,xi]
biased_results[i,1,2] = r_I[yi,xi]
print "elapsed time: %d minutes" % ((time.time()-st)/60.)
plt.figure()
plt.imshow(bias_net.W_IE[2000])
plt.colorbar()
plt.figure()
plt.imshow(ss_net.W_IE[2000])
plt.colorbar()
norm_dom_SI_E = (normal_results[:,0,0] - normal_results[:,0,1]) / normal_results[:,0,0]
norm_non_dom_SI_E = (normal_results[:,0,0] - normal_results[:,0,2]) / normal_results[:,0,0]
norm_dom_SI_I = (normal_results[:,1,0] - normal_results[:,1,1]) / normal_results[:,1,0]
norm_non_dom_SI_I = (normal_results[:,1,0] - normal_results[:,1,2]) / normal_results[:,1,0]
bias_dom_SI_E = (biased_results[:,0,0] - biased_results[:,0,1]) / biased_results[:,0,0]
bias_non_dom_SI_E = (biased_results[:,0,0] - biased_results[:,0,2]) / biased_results[:,0,0]
bias_dom_SI_I = (biased_results[:,1,0] - biased_results[:,1,1]) / biased_results[:,1,0]
bias_non_dom_SI_I = (biased_results[:,1,0] - biased_results[:,1,2]) / biased_results[:,1,0]
plt.figure()
plt.scatter(norm_dom_SI_E, norm_non_dom_SI_E, c='r')
plt.plot([0,1],[0,1],'k--')
plt.figure()
plt.scatter(bias_dom_SI_E, bias_non_dom_SI_E, c='m')
plt.plot([0,1],[0,1],'k--')
plt.figure()
plt.scatter(norm_dom_SI_I, norm_non_dom_SI_I, c='b')
plt.plot([0,1],[0,1],'k--')
plt.figure()
plt.scatter(bias_dom_SI_I, bias_non_dom_SI_I, c='c')
plt.plot([0,1],[0,1],'k--')
###Output
_____no_output_____
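###Markdown
For reference, the analysis above and the hyperopt objective below summarise surround suppression with a suppression index computed from the centre-only and centre-plus-surround responses, $SI = (R_{\mathrm{centre}} - R_{\mathrm{centre+surround}})/R_{\mathrm{centre}}$, evaluated separately for dominant-eye and non-dominant-eye surrounds.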
###Markdown
Hyperopt procedure
###Code
# Define objective funtion for hyperopt:
def iot_ssn_ks2d(args):
sig_EE, sig_IE, sig_EI, sig_II, J_EE, J_IE, J_EI, J_II, bias = args
# Generate SSN with specified hyperparams:
ss_net = ssn.SSNetwork(sig_EE, sig_IE, sig_EI, sig_II, J_EE, J_IE, J_EI, J_II, ori_map=OP_map, ocd_map=OD_map, od_bias=bias)
# TODO: Check the stability of the network and abort if unstable (return high value)
c0 = 40
dt0 = 0.005
timesteps = 100
dx = ss_net.dx
N_pairs = ss_net.N_pairs
# Cast SSN variables to float32 for Theano:
W_EE0 = ss_net.W_EE.astype('float32')
W_EI0 = ss_net.W_EI.astype('float32')
W_IE0 = ss_net.W_IE.astype('float32')
W_II0 = ss_net.W_II.astype('float32')
k0 = ss_net.k.astype('float32')
n_E0 = ss_net.n_E.astype('float32')
n_I0 = ss_net.n_I.astype('float32')
tau_E0 = ss_net.tau_E.astype('float32')
tau_I0 = ss_net.tau_I.astype('float32')
    # probe the monocular size tuning for the selected units and find the SFS for each unit:
dom_sfs_E = np.zeros(n_units)
dom_sfs_I = np.copy(dom_sfs_E)
sfs_fr_E = np.zeros(n_units)
sfs_fr_I = np.copy(sfs_fr_E)
stim_sizes = np.linspace(0.5,5,10)
surround_outer_d = 10
dom_size_tuning_results = np.zeros((n_units, len(stim_sizes)+1, 2))
for i in range(n_units):
sfs_found_E = False
sfs_found_I = False
xi = selected_units[i,0]
yi = selected_units[i,1]
ori = OP_map[yi,xi]
ocularity = np.round(OD_prefs[i])
for j in range(len(stim_sizes)):
h0 = ssn.generate_ext_stimulus( ori, stim_sizes[j], [dx*xi, dx*yi], OP_map, OD_map, ocularity)
r_E,r_I = ss_net.run_simulation(c0,h0)
dom_size_tuning_results[i,j,0] = r_E[yi,xi]
dom_size_tuning_results[i,j,1] = r_I[yi,xi]
# check to see if any of the rates overflowed:
if dom_size_tuning_results[i,j,0]>1000 or np.isnan(dom_size_tuning_results[i,j,0]):
print "Network unstable, skipping!"
return {
'status':'fail',
'loss':10000,
'attachments':{'msg':'Network was unstable (1)'}
}
if dom_size_tuning_results[i,j,0] > sfs_fr_E[i]:
sfs_fr_E[i] = dom_size_tuning_results[i,j,0]
dom_sfs_E[i] = stim_sizes[j]
else:
sfs_found_E = True
if dom_size_tuning_results[i,j,1] > sfs_fr_I[i]:
sfs_fr_I[i] = dom_size_tuning_results[i,j,1]
dom_sfs_I[i] = stim_sizes[j]
else:
sfs_found_I = True
# if sfs_found_E and sfs_found_I:
# break
if sfs_found_E:
break
# probe the dominant eye surround suppression:
for i in range(n_units):
xi = selected_units[i,0]
yi = selected_units[i,1]
ori = OP_map[yi,xi]
ocularity = np.round(OD_prefs[i])
h0 = ssn.generate_ext_stimulus( ori, surround_outer_d, [dx*xi, dx*yi], OP_map, OD_map, ocularity)
r_E, r_I = ss_net.run_simulation(c0,h0)
dom_size_tuning_results[i,-1,0] = r_E[yi,xi]
dom_size_tuning_results[i,-1,1] = r_I[yi,xi]
# Now probe the non dominant stimuli (surround only)
non_dom_surround_results = np.zeros((n_units, 2))
for i in range(n_units):
xi = selected_units[i,0]
yi = selected_units[i,1]
ori = OP_map[yi,xi]
dom_ocu = np.round(OD_prefs[i])
nd_ocu = np.abs(dom_ocu-1)
centre = ssn.generate_ext_stimulus( ori, dom_sfs_E[i], [dx*xi, dx*yi], OP_map, OD_map, dom_ocu )
surround = ssn.generate_ring_stimulus( ori, dom_sfs_E[i], surround_outer_d, [dx*xi, dx*yi], nd_ocu, OP_map, OD_map)
h0 = centre + surround
r_E,r_I = ss_net.run_simulation(c0,h0)
non_dom_surround_results[i,0] = r_E[yi,xi]
non_dom_surround_results[i,1] = r_I[yi,xi]
# if dom_sfs_E[i]==dom_sfs_I[i]:
# non_dom_surround_results[i,1] = r_I.get_value()[yi,xi]
# else:
# r_E.set_value(np.zeros((N_pairs,N_pairs),dtype='float32'))
# r_I.set_value(np.zeros((N_pairs,N_pairs),dtype='float32'))
# centre = ssn.generate_ext_stimulus( ori, dom_sfs_I[i], [dx*xi, dx*yi], OP_map, OD_map, dom_ocu )
# surround = ssn.generate_ring_stimulus( ori, dom_sfs_I[i], surround_outer_d, [dx*xi, dx*yi], nd_ocu, OP_map, OD_map)
# h0 = centre + surround
# for t in range(timesteps):
# euler(dt0,c0,h0,W_EE0,W_EI0,W_IE0,W_II0,n_E0,n_I0,k0,tau_E0,tau_I0)
# non_dom_surround_results[i,1] = r_I.get_value()[yi,xi]
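    # suppression index: (peak size-tuning response - response to the large/surround stimulus) / peak response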
dom_SI_E = (sfs_fr_E - dom_size_tuning_results[:,-1,0])/sfs_fr_E
non_dom_SI_E = (sfs_fr_E - non_dom_surround_results[:,0])/sfs_fr_E
dom_SI_I = (sfs_fr_I - dom_size_tuning_results[:,-1,1])/sfs_fr_I
non_dom_SI_I = (sfs_fr_I - non_dom_surround_results[:,1])/sfs_fr_I
# Concatenate the E and I results
# model_data_x = np.concatenate((dom_SI_E, dom_SI_I))
# model_data_y = np.concatenate((non_dom_SI_E, non_dom_SI_I))
model_data_x = dom_SI_E
model_data_y = non_dom_SI_E
deangelis_data = np.array([[42.711, 21.488],
[44.588, 24.483],
[44.999, 31.508],
[58.885, 42.252],
[56.048, 57.955],
[64.901, 85.434],
[75.685, 65.186],
[79.023, 70.455],
[84.173, 42.045],
[98.365, 60.537],
[98.224, 95.248],
[82.045, 78.616],
[81.002, 76.550]])
deangelis_data = deangelis_data/100
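    # 2-D two-sample KS test comparing the model's (dominant SI, non-dominant SI) pairs against the DeAngelis data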
d, prob = ks_test.ks2d2s(deangelis_data[:,0], deangelis_data[:,1], model_data_x, model_data_y)
if prob == 1:
print "KS test failed to converge, returning error!"
return {
'status':'fail',
'loss':10000,
            'attachments':{'msg':'KS test failed to converge',
'units_probed':pickle.dumps([selected_units,sfs_fr_E,sfs_fr_I,dom_size_tuning_results,non_dom_surround_results])}
}
return {
'status': 'ok',
'loss': 1-prob,
'attachments': {'units_probed':pickle.dumps([selected_units,sfs_fr_E,sfs_fr_I,dom_size_tuning_results,non_dom_surround_results])}
}
# Define Hyperopt search space:
space = [hp.uniform('sig_EE',7,9),
hp.uniform('sig_IE',10,16),
hp.uniform('sig_EI',3,5),
hp.uniform('sig_II',3,5),
hp.uniform('J_EE',0.09,0.105),
hp.uniform('J_IE',0.35,0.45),
hp.uniform('J_EI',0.089,0.105),
hp.uniform('J_II',0.08,0.105),
hp.uniform('bias',0.01,0.20)]
# create a Trials database to store experiment results:
trials = Trials()
st = time.time()
evals = 50
best = fmin(iot_ssn_ks2d, space, algo=tpe.suggest, max_evals=evals, trials=trials)
print "Elapsed time for %d hyperopt sims: %d minutes." % (evals, (time.time()-st)/60.)
print 'tpe:', best
print "Best p-value: ", 1-np.min(trials.losses())
bad_ind = np.argmin(trials.losses())
trials.losses()[bad_ind] = 10000
print "Total trials: %d" % (len(trials.losses()))
print "P values (sorted):"
print 1-np.array(trials.losses())
deangelis_data = np.array([[42.711, 21.488],
[44.588, 24.483],
[44.999, 31.508],
[58.885, 42.252],
[56.048, 57.955],
[64.901, 85.434],
[75.685, 65.186],
[79.023, 70.455],
[84.173, 42.045],
[98.365, 60.537],
[98.224, 95.248],
[82.045, 78.616],
[81.002, 76.550]])
deangelis_data = deangelis_data/100
webb_data = np.array([[0.3538, 0.3214],
[0.5513, 0.2271],
[0.5154, 0.5064],
[0.5641, 0.5681],
[0.6077, 0.5605],
[0.7179, 0.6172],
[0.7487, 0.6865],
[0.8282, 0.6406],
[0.8923, 0.5459],
[0.9282, 0.5690],
[0.6308, 0.4093],
[0.7385, 0.4557],
[0.7923, 0.4866],
[0.7385, 0.5352],
[0.9974, 0.9846]])
comp_data = np.concatenate( (deangelis_data, webb_data), axis=0 )
p_values = np.zeros(len(trials))
comp_p_values = np.copy(p_values)
for i in range(len(trials)):
trial_results = trials.trial_attachments(trials.trials[i])['units_probed']
[selected_units,sfs_fr_E,sfs_fr_I,dom_size_tuning_results,non_dom_surround_results] = pickle.loads(trial_results)
dom_SI_E = (sfs_fr_E - dom_size_tuning_results[:,-1,0])/sfs_fr_E
non_dom_SI_E = (sfs_fr_E - non_dom_surround_results[:,0])/sfs_fr_E
dom_SI_I = (sfs_fr_I - dom_size_tuning_results[:,-1,1])/sfs_fr_I
non_dom_SI_I = (sfs_fr_I - non_dom_surround_results[:,1])/sfs_fr_I
d, prob = ks_test.ks2d2s(deangelis_data[:,0], deangelis_data[:,1], dom_SI_E, non_dom_SI_E)
p_values[i] = prob
d, prob = ks_test.ks2d2s(comp_data[:,0], comp_data[:,1], dom_SI_E, non_dom_SI_E)
comp_p_values[i] = prob
print "E units only p-values: ", comp_p_values
print "Largest p value: %5.5f at trial %d" % (np.max(comp_p_values), np.argmax(comp_p_values))
comp_p_values[comp_p_values==1.] = 0
best_ind = np.argmax(comp_p_values)
p_values_I = np.zeros(len(trials))
comp_p_values_I = np.copy(p_values_I)
for i in range(len(trials)):
trial_results = trials.trial_attachments(trials.trials[i])['units_probed']
[selected_units,sfs_fr_E,sfs_fr_I,dom_size_tuning_results,non_dom_surround_results] = pickle.loads(trial_results)
dom_SI_I = (sfs_fr_I - dom_size_tuning_results[:,-1,1])/sfs_fr_I
non_dom_SI_I = (sfs_fr_I - non_dom_surround_results[:,1])/sfs_fr_I
d, prob = ks_test.ks2d2s(deangelis_data[:,0], deangelis_data[:,1], dom_SI_I, non_dom_SI_I)
p_values_I[i] = prob
d, prob = ks_test.ks2d2s(comp_data[:,0], comp_data[:,1], dom_SI_I, non_dom_SI_I)
comp_p_values_I[i] = prob
print "I units only p-values: ", comp_p_values_I
print "Largest p value: %5.5f at trial %d" % (np.max(comp_p_values), np.argmax(comp_p_values_I))
print "E units p-value: ", comp_p_values[best_ind]
print "I units p-value: ", comp_p_values_I[best_ind]
best_results = trials.trial_attachments(trials.trials[best_ind])['units_probed']
[selected_units,sfs_fr_E,sfs_fr_I,dom_size_tuning_results,non_dom_surround_results] = pickle.loads(best_results)
dom_SI_E = (sfs_fr_E - dom_size_tuning_results[:,-1,0])/sfs_fr_E
non_dom_SI_E = (sfs_fr_E - non_dom_surround_results[:,0])/sfs_fr_E
dom_SI_I = (sfs_fr_I - dom_size_tuning_results[:,-1,1])/sfs_fr_I
non_dom_SI_I = (sfs_fr_I - non_dom_surround_results[:,1])/sfs_fr_I
plt.figure()
plt.scatter(deangelis_data[:,0], deangelis_data[:,1], c='k', label='DeAngelis data')
plt.scatter(dom_SI_E, non_dom_SI_E, c='r', label='E units')
plt.plot([0,1], [0,1], 'k--')
# plt.axis([0,1,0,1])
plt.xlabel('Dominant SI', fontsize=12)
plt.ylabel('Non-dominant SI', fontsize=12)
plt.legend(loc='best')
plt.savefig('thesis_results/hyperopt_iot2_E.eps', dpi=1000)
plt.figure()
plt.scatter(deangelis_data[:,0], deangelis_data[:,1], c='k', label='DeAngelis data')
plt.scatter(dom_SI_I, non_dom_SI_I, c='c', label='I units')
plt.plot([0,1], [0,1], 'k--')
# plt.axis([0,1,0,1])
plt.xlabel('Dominant SI', fontsize=12)
plt.ylabel('Non-dominant SI', fontsize=12)
plt.legend(loc='best')
plt.savefig('thesis_results/hyperopt_iot2_I.eps', dpi=1000)
###Output
_____no_output_____ |
creditcard/ML1CreditCard.ipynb | ###Markdown
Machine Learning exercise 1: Credit Card Fraud Detection
2021 08 13 Maarten Pennings Textual improvements
2021 06 20 Maarten Pennings Added "Random" seeds at the start of this notebook
2021 06 12 Maarten Pennings Created
I followed "Using Python to Implement a Complete Machine Learning Flow", a Doulos webinar by John Aynsley. With this notebook, I try to replicate John's first example.
The project directory, with a Jupyter notebook
Somehow I got the impression the best way to follow John was to run the Python exercises in a _Jupyter notebook_. I do not believe he mentioned that in the training, so I don't know where I got that from. It is my first time using Jupyter. I was surprised how simple that is; basically `pip install jupyter` if you already have Python.
I typically use a _virtual environment_ in my Python projects, and I use a fixed structure to organize that. There is a _project_ directory with two files `setup.bat` and `requirements.txt`. The `setup.bat` takes care of creating (and updating, if it already exists) the virtual environment (a built-in feature of Python).
```bat
python.exe -m venv env
CALL env\Scripts\activate.bat
python -m pip install --upgrade pip setuptools wheel
IF EXIST requirements.txt (
  pip install -r requirements.txt
)
```
The `requirements.txt` (used in `setup.bat`) lists all packages needed in the project. So, to use Jupyter, it must list `jupyter`. I already added the packages (like tensorflow) we need later in Jupyter for data analysis and training.
```text
numpy
scipy
matplotlib
pandas
sklearn
tensorflow
jupyter
```
There are typically other files: those needed for the application developed in the project. In this case we need `ML1CreditCard.ipynb` (the notebook) and the images (png files) in the _project_ directory. What you should know is that Jupyter has a client-server model. The client is a webpage, the server runs in Python. To start the server and the client, just run `jupyter notebook` after running `setup`. Part of my virtual environment structure is one extra file `run.bat`, which runs the application being developed. In this case that would be the one-liner `jupyter notebook`. Once jupyter is started, in the client, open the notebook `ML1CreditCard.ipynb` from the _project_ directory.
Random
I struggled a lot with getting the code in the notebook to run reproducibly (so that the notebook text can refer to values computed by functions). In the end, I found this [page](https://deeplizard.com/learn/video/HcW0DeWRggs), listing the random seeds to set. You can forget about this, it is only needed when you want your notebook to have the same numbers each time you "restart".
###Code
import os
import numpy as np
import tensorflow as tf
import random as rn
os.environ['PYTHONHASHSEED'] = '0' # necessary for any reproducibility for certain hash based algorithms
os.environ['CUDA_VISIBLE_DEVICES'] = '' # avoid non-determinism by forcing the CPU instead of multiple GPU cores
np.random.seed(42) # any number will do as seed
rn.seed(42) # any number will do as seed
tf.random.set_seed(89) # any number will do as seed
###Output
_____no_output_____
###Markdown
Getting and probing the credit card dataDownload [Anonymized credit card transactions labeled as fraudulent or genuine](https://www.kaggle.com/mlg-ulb/creditcardfraud) from Kaggle. You get a single file `creditcard.csv` of 144 MB. Put it in the _project_ directory.Let's use Pandas to load the CSV file and inspect the first 5 rows.Pandas is a Python module for data manipulation; it offers support for manipulating tables ("data frames").Try shift-tab on e.g.`read_csv` to get its docstring.
###Code
import pandas as pd
df = pd.read_csv("creditcard.csv")
df.head()
###Output
_____no_output_____
###Markdown
When we use scrollbar, we see that there are 31 _columns_ (attributes of the credit card transactions).The first is the `Time` column, presumably some time stamp (however, note the repeat in row 0 and 1 and row 2 and 3).Then there are 28 anonymous attributes (columns `V1` to `V28`). The last two columns are `Amount` and `Class`; the amount paid with the credit card and the classification: 0 for a genuine and 1 for a fraudulent transaction. The `Class` colums acts as the so-called _ground truth_ for that record.Concludingly, we have 30 "input" columns (`Time`, `V1`, ... `V28`, `Amount`) and 1 "output" column `Class`.Let's see how many transaction records we have.
###Code
len(df)
df.tail()
###Output
_____no_output_____
###Markdown
The number of records reported by the two methods is consistent: 284807. Let's plot the `Time` column. Note that there are two ways to get a column in Pandas: an explicit lookup `df["Time"].plot()`, or the shorthand `df.Time.plot()`. If the last line of a Jupyter cell is an expression (e.g. the name of a variable), Jupyter will display that variable without the need for a print statement. A trailing `;` suppresses that print.
###Code
df.Time.plot();
###Output
_____no_output_____
###Markdown
Pandas also has a handy `describe()` method that applies some basic statistical analysis on a dataframe.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
If we scroll to the right, we see that `max(Class)` is 1, `min(Class)` is 0, and no other values (see the 25%, 50% and 75% quartiles) seem present. Let's confirm that with a histogram of `Class`. Recall, the trailing `;` suppresses textual output.
###Code
df["Class"].hist(bins=50, log=True);
###Output
_____no_output_____
###Markdown
I plotted the histogram using a log y-axis, because the count of 1's is small compared to the count of 0's. How small?
###Code
df["Class"].sum()
###Output
_____no_output_____
###Markdown
This means that 492 transactions were fraudulent (of a quarter of a million, so 0.17%). I wanted to be absolutely sure that 0 and 1 are the only values occurring in `Class`. I did not know any other solution than simply looping...
###Code
c0 = 0
c1 = 0
cx = 0
for v in df["Class"]:
if v==0 : c0+=1
elif v==1 : c1+=1
else : cx+=1
print( f"#0={c0}, #1={c1}, #rest={cx}, #all={c0+c1+cx}")
###Output
#0=284315, #1=492, #rest=0, #all=284807
###Markdown
In the meantime I found better solutions. First of all, the method `nunique()` returns the number of unique values.
###Code
df["Class"].nunique()
###Output
_____no_output_____
###Markdown
Even better, `value_counts()` also reports the values!
###Code
df['Class'].value_counts()
###Output
_____no_output_____
###Markdown
Preparing the credit card data for training
We follow the naming convention for machine learning that `x` are the input records, and `y` is the (expected) output for each record. We copy the data from pandas to numpy, splitting off the `Class` ("output") column to `y`; all other ("input") columns will go to `x`. For some reason `to_numpy()` picks `int` as datatype for the Class column, and machine learning wants floats, so we perform an explicit cast. We check the `y` slice 500-600; there is one fraudulent transaction (second row, 8th number).
###Code
y = df['Class'].to_numpy().astype(np.float32)
y[500:600]
###Output
_____no_output_____
###Markdown
To get the remaining columns in `x` we use the `drop` method. Here the `to_numpy()` returns `numpy.float64`, and again we want `float32`. We check record 0; the printout has 4 columns of 7 plus 2 numbers; that is 30 numbers. So indeed we dropped one column. The last column of `x[0]` is `1.4962e+02`, or 149.62; this is the "amount" which we also found in our very first table `df.head()`.
###Code
x = df.drop(columns='Class').to_numpy().astype(np.float32)
x[0]
###Output
_____no_output_____
###Markdown
It seems good practice to _normalize_ the data, and John uses `RobustScaler().fit_transform(x)` for that. I guess this redistributes the values of each column to be in a manageable range since ML training likes that. Let us plot the histogram of column 0 (using the slice `x[:,0]`) before we do the normalization.
###Code
import matplotlib.pyplot as plt
plt.hist(x[:,0], bins=100);
###Output
_____no_output_____
###Markdown
Let us then normalize and plot column 0 again. We see that the shape is retained, but the x-values have been rescaled to roughly -1 .. +1. See [here](https://scikit-learn.org/stable/auto_examples/preprocessing/plot_all_scaling.html) for details on the scaler.
###Code
from sklearn.preprocessing import RobustScaler
x = RobustScaler().fit_transform(x)
plt.hist(x[:,0], bins=100);
###Output
_____no_output_____
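###Markdown
As a side note, my understanding is that `RobustScaler` with its default settings subtracts the per-column median and divides by the inter-quartile range (25%-75%). A minimal sanity check of that assumption on column 0, recomputing the scaling by hand:
###Code
# Illustrative sanity check (assumes RobustScaler defaults: center on median, scale by IQR).
raw0 = df.drop(columns='Class').to_numpy().astype(np.float32)[:, 0]  # unscaled column 0
manual0 = (raw0 - np.median(raw0)) / (np.percentile(raw0, 75) - np.percentile(raw0, 25))
print(np.abs(manual0 - x[:, 0]).max())  # expected to be (close to) 0
###Output
_____no_output_____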
###Markdown
Splitting data for train and test
The next step is to split the available records into records for _training_ and records for _testing_ (needed after training is completed). Of course, the splitting needs to keep the x-records (inputs) together with the y-records (output). There is an easy function for that in the `sklearn` package: `train_test_split`. However, this function shuffles the data, which means that on each run of this notebook, the (training) outcome is different. I don't want that.
###Code
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
To understand which values `train_test_split` takes, I wrote the `plot2` function. It overlays a graph (connected points) of `y1` with the points `y2`. We test it on a simple parabola. Note, `plot2` first sorts the values to plot (so that the "connected points" - lines between the consecutive points - work).
###Code
def plot2(y1,y2) :
y1 = np.sort(y1)
x1 = range(len(y1))
y2 = np.sort(y2)
# y2 is a subset of y1, find the matching x's
x2 = []
i1 = 0
i2 = 0
while i2<len(y2) :
while y1[i1]<y2[i2] : i1+=1
x2.append(x1[i1])
i2+=1
i1+=1
plt.plot(x1,y1)
plt.plot(x2,y2,c='red' ,marker='o',markersize=2,linewidth=0)
plot2( [0,1,4,9,16,25,36,49,64,81,100] , [0,1,9,16,64,81] )
###Output
_____no_output_____
###Markdown
We now use `plot2()` on the "Time" column to see what `shuffle` means in `train_test_split()`. First we split off 10% without shuffling.
###Code
train_x, test_x, train_y, test_y = train_test_split( x, y, test_size=0.1, shuffle=False )
print( x[:6,0] )
print( train_x[:6,0] )
plot2( x[:,0], test_x[:,0] )
###Output
[-0.9949835 -0.9949835 -0.99497175 -0.99497175 -0.99496 -0.99496 ]
[-0.9949835 -0.9949835 -0.99497175 -0.99497175 -0.99496 -0.99496 ]
###Markdown
That looks too easy. From the console output, we see that the `x` data is equal to the `train_x` data (the first 6 items at least). From the graph we see that `train_test_split()` just takes the _last_ 10% of the samples. For the shuffling case, we need to reduce `test_size`, because with value 0.1 the second graph is still completely red due to the enormous number of points. We pass an explicit `shuffle=True`, but that is not necessary, because that is the default anyhow.
###Code
train_x, test_x, train_y, test_y = train_test_split( x, y, test_size=0.0001, shuffle=True )
print( x[:6,0] )
print( train_x[:6,0] )
plot2( x[:,0], test_x[:,0] )
###Output
[-0.9949835 -0.9949835 -0.99497175 -0.99497175 -0.99496 -0.99496 ]
[-0.04610017 -0.4983611 0.12652874 0.66921604 -0.47823635 -0.13505797]
###Markdown
If we use `shuffle=True`, the data set is first shuffled, and then still, the data is taken from the shuffled end. To prevent bias on the "Time" column, it makes sense to shuffle. However, this means that on each run of the notebook, the (training) outcome is different. That is not nice for the accompanying documentation. As it turns out, `train_test_split()` has yet another parameter `random_state` which acts as the seed for the random number generator during shuffling. But in this notebook we have already seeded at the global level, so we do not need to supply a seed here. Now we split the available records into records for _training_ and records for _testing_ after training is completed. We split off 10% for testing (`test_size`) with shuffling (since `shuffle=True` is the default).
###Code
train_x, test_x, train_y, test_y = train_test_split( x, y, test_size=0.1 )
print("total train test sum test/total")
print(len(df), len(train_x), len(test_x), len(train_x)+len(test_x), len(test_x)/len(df))
###Output
total train test sum test/total
284807 256326 28481 284807 0.10000105334489671
###Markdown
Recap on neural networks
Let us quickly review what a neural network consists of. A neural net consists of neurons. A _neuron_ maps multiple input values to one output value. A neuron consists of two computations. The first computation linearly combines the inputs. This is simply a weighted sum of the inputs incremented with a bias. The weights and bias are specific to the neuron; they will be established by training. The second computation is an [activation function](https://en.wikipedia.org/wiki/Activation_function). Popular ones are ReLU and Sigmoid. The activation function establishes a threshold that _activates_ the neuron (gives the output a high value), once the weighted sum plus bias exceeds a threshold (which is hard-wired in the activation function). As an example, take `ReLU(x)`, which is defined as `max(0,x)`.
Observe that the neuron in the picture has 4 weights (since it has 4 inputs) and 1 bias. In practice there are multiple neurons running in parallel. This is known as a _layer_. The diagram below shows a layer with 4 inputs (blue) and 3 outputs (red). The number of parameters to be determined by learning grows with a factor of 3: there are now 3 times (4 weights and 1 bias). There is another factor causing growth: in practice there are multiple layers in sequence forming a _neural network_. The diagram below shows two layers (green). Now there are 3×4 weights and 3×1 biases plus 2×3 weights and 2×1 biases.
Let us now go back to the neural network for credit card fraud detection. John designed a network with two layers. The first layer uses all 30 inputs, and generates 8 outputs. These are sometimes called _features_; they are supposed to characterize some higher level aspect of the inputs. What those features are is not designed; they emerge from the training. The second layer maps the 8 features to one output; the prediction for `Class`. I wonder whether the _computations_ make the layers (as in the figure above), or the _storage of the values_ make the layers (as in the figure below). It doesn't matter much, but many people, including John, refer to this network as having three layers, so apparently they count "storage".
Setting up the neural network
We use Tensorflow:
> TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.
We use Keras, the high level API of Tensorflow:
> Keras is an API designed for human beings, not machines.
> Keras is an open-source software library that provides a Python interface for artificial neural networks. Keras acts as an interface for the TensorFlow library.
Our first step is to create an (empty) neural network model. `Sequential` indicates that we have a network where the layers are sequential.
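###Markdown
To make the recap concrete, below is a tiny numpy sketch of a single neuron with 4 inputs (weighted sum plus bias, then ReLU). All numbers are made up for illustration only; nothing here is trained.
###Code
# Illustrative only: one neuron with 4 inputs, as in the recap above.
neuron_w  = np.array([0.2, -0.5, 0.1, 0.7])   # 4 weights (one per input)
neuron_b  = 0.3                               # 1 bias
neuron_in = np.array([1.0, 2.0, 0.5, -1.0])   # 4 made-up input values
weighted  = np.dot(neuron_w, neuron_in) + neuron_b   # computation 1: weighted sum + bias
activated = max(0.0, weighted)                       # computation 2: ReLU activation
print(weighted, activated)
###Output
_____no_output_____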
###Code
import tensorflow as tf
model = tf.keras.models.Sequential()
###Output
_____no_output_____
###Markdown
First we add a `Dense` layer. A dense layer computes a weighted sum of _all_ of the values in the previous layer. The layer has 8 neurons (`units` argument), and thus 8 outputs. It uses all 30 inputs (`input_shape`). After the weighted sum, we use a ReLU as activation function. By the way, the funny notation `(30,)` is the Python notation for a tuple of one (the fragment `(30)` would be an expression consisting of the literal `30` and superfluous parentheses around it).
###Code
model.add( tf.keras.layers.Dense( units=8, activation='relu', input_shape=(30,) ) )
###Output
_____no_output_____
###Markdown
Next we add a second `Dense` layer, with one neuron (one output). I guess that we do not have to specify `input_shape` because by default a layer takes all outputs of the previous layer. In step one, there was no previous layer; the inputs were not yet specified, hence the explicit `input_shape` there. The activation function is now a _sigmoid_. This function maps its input to a value between 0 and 1.
###Code
model.add( tf.keras.layers.Dense( units=1, activation='sigmoid' ) )
###Output
_____no_output_____
###Markdown
Finally, we need to "compile" the model. The _Binary Cross Entropy_ is a good loss function (also known as "cost function") for classification into two classes. _Adam_ is a stochastic gradient descent optimizer. We request two extra metrics to be computed: _accuracy_ and _recall_.
###Code
model.compile( loss='binary_crossentropy', optimizer='Adam', metrics=['accuracy', tf.keras.metrics.Recall()] )
###Output
_____no_output_____
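###Markdown
For background (my own aside, not part of John's flow): the binary cross entropy for a single example with true label y and predicted probability p is -(y·log(p) + (1-y)·log(1-p)). A quick numpy check with made-up numbers:
###Code
# Illustrative only: binary cross entropy for one example.
y_true = 1.0    # ground truth: fraudulent
p_pred = 0.9    # made-up predicted probability of fraud
bce = -( y_true*np.log(p_pred) + (1.0-y_true)*np.log(1.0-p_pred) )
print(bce)      # small loss, since the prediction is close to the truth
###Output
_____no_output_____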
###Markdown
Once we have compiled the model we can ask for a summary.
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 8) 248
_________________________________________________________________
dense_1 (Dense) (None, 1) 9
=================================================================
Total params: 257
Trainable params: 257
Non-trainable params: 0
_________________________________________________________________
###Markdown
Note that the number of parameters matches what we computed in the "Recap" section. For the first layer 30×8 weights plus 8 biases, so 248 parameters. For the second 8×1 weights plus 1 bias, or 9 parameters. A total of 257 trainable parameters.
Training the neural network
We are ready to train: we fit the trainable parameters to the input data (`train_x` and `train_y`). The training ("fitting") requires setting the number of training iterations. From the [keras documentation](https://keras.io/api/models/model_training_apis/):
> **epochs**: Integer. Number of epochs to train the model. An epoch is an iteration over the entire x and y data provided.
> **batch_size**: Integer or None. Number of samples per gradient update.
As it turns out, _fitting_ the model is also a stochastic process (as was `train_test_split()`). This means that `model.fit()` uses randomness, e.g. to initialize the weights and biases. As a result, the results vary with every training session - each run of `model.fit()`. For this notebook, I circumvented that by seeding the random number generator (see the first code cell of this notebook).
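###Markdown
A quick arithmetic check of those parameter counts (nothing Keras-specific, just inputs×units + units per Dense layer):
###Code
# Dense layer parameters = inputs*units (weights) + units (biases)
layer1_params = 30*8 + 8   # first Dense layer
layer2_params = 8*1 + 1    # second Dense layer
print(layer1_params, layer2_params, layer1_params + layer2_params)   # 248 9 257
###Output
_____no_output_____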
###Code
model.fit( train_x, train_y, epochs=20, batch_size=1024 );
###Output
Epoch 1/20
251/251 [==============================] - 1s 526us/step - loss: 0.3271 - accuracy: 0.9397 - recall: 0.0089
Epoch 2/20
251/251 [==============================] - 0s 521us/step - loss: 0.0794 - accuracy: 0.9982 - recall: 0.0000e+00
Epoch 3/20
251/251 [==============================] - 0s 509us/step - loss: 0.0317 - accuracy: 0.9982 - recall: 0.0000e+00
Epoch 4/20
251/251 [==============================] - 0s 519us/step - loss: 0.0180 - accuracy: 0.9983 - recall: 0.0000e+00
Epoch 5/20
251/251 [==============================] - 0s 522us/step - loss: 0.0123 - accuracy: 0.9983 - recall: 0.0000e+00
Epoch 6/20
251/251 [==============================] - 0s 517us/step - loss: 0.0096 - accuracy: 0.9983 - recall: 0.0000e+00
Epoch 7/20
251/251 [==============================] - 0s 513us/step - loss: 0.0080 - accuracy: 0.9983 - recall: 0.0000e+00
Epoch 8/20
251/251 [==============================] - 0s 514us/step - loss: 0.0070 - accuracy: 0.9983 - recall: 0.0000e+00
Epoch 9/20
251/251 [==============================] - 0s 516us/step - loss: 0.0063 - accuracy: 0.9983 - recall: 0.0000e+00
Epoch 10/20
251/251 [==============================] - 0s 505us/step - loss: 0.0058 - accuracy: 0.9983 - recall: 0.0000e+00
Epoch 11/20
251/251 [==============================] - 0s 515us/step - loss: 0.0054 - accuracy: 0.9983 - recall: 0.0000e+00
Epoch 12/20
251/251 [==============================] - 0s 521us/step - loss: 0.0052 - accuracy: 0.9983 - recall: 0.0000e+00
Epoch 13/20
251/251 [==============================] - 0s 509us/step - loss: 0.0049 - accuracy: 0.9983 - recall: 0.0022
Epoch 14/20
251/251 [==============================] - 0s 513us/step - loss: 0.0048 - accuracy: 0.9983 - recall: 0.0000e+00
Epoch 15/20
251/251 [==============================] - 0s 513us/step - loss: 0.0046 - accuracy: 0.9983 - recall: 0.0112
Epoch 16/20
251/251 [==============================] - 0s 512us/step - loss: 0.0043 - accuracy: 0.9986 - recall: 0.2321
Epoch 17/20
251/251 [==============================] - 0s 520us/step - loss: 0.0038 - accuracy: 0.9994 - recall: 0.7612
Epoch 18/20
251/251 [==============================] - 0s 512us/step - loss: 0.0037 - accuracy: 0.9994 - recall: 0.7768
Epoch 19/20
251/251 [==============================] - 0s 507us/step - loss: 0.0035 - accuracy: 0.9994 - recall: 0.7924
Epoch 20/20
251/251 [==============================] - 0s 521us/step - loss: 0.0034 - accuracy: 0.9994 - recall: 0.7946
###Markdown
With `timeit` we check the computation time; on my PC only 2.39 seconds. The last log line gives the loss, accuracy and recall, just as we specified in the compile step. The accuracy looks very good: 0.999x, but this is very misleading. Accuracy indicates the fraction of transactions classified correctly. Remember that we have 492 fraudulent transactions in a volume of 284807 transactions. So, a model that predicts that all transactions are valid (model always outputs 0) already has an accuracy of 1 - 492/284807 = 0.9983.
###Code
1 - 492/284807
###Output
_____no_output_____
###Markdown
That's why we added the metric _recall_. Recall indicates the fraction of fraudulent transactions that are classified correctly. We achieved a recall of about 80%.
Testing the neural network
The final step is to test the network on the test data.
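###Markdown
As a small aside before evaluating (my own illustration, with made-up numbers): recall is computed as TP/(TP+FN), the caught frauds divided by all actual frauds.
###Code
# Illustrative only: recall = TP / (TP + FN)
tp, fn = 40, 10        # made-up counts: 40 frauds caught, 10 frauds missed
print(tp / (tp + fn))  # 0.8, i.e. "80% recall"
###Output
_____no_output_____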
###Code
loss, accuracy, recall = model.evaluate( test_x, test_y)
print( f"{loss=:.4f} {accuracy=:.4f} {recall=:.4f}" )
###Output
891/891 [==============================] - 0s 341us/step - loss: 0.0028 - accuracy: 0.9996 - recall: 0.8409
loss=0.0028 accuracy=0.9996 recall=0.8409
###Markdown
The accuracy roughly stays the same, and recall goes up a bit. But all in all, similar results as for training. That is good. It means our model is not biased towards the training data ("overfit").
An alternative network
Just out of curiosity, what happens if we change the network? Let's give the first layer fewer neurons (5 instead of 8), but let's insert another dense layer of 3 neurons.
###Code
model2 = tf.keras.models.Sequential()
model2.add( tf.keras.layers.Dense( units=5, activation='relu', input_shape=(30,) ) )
model2.add( tf.keras.layers.Dense( units=3, activation='relu' ) )
model2.add( tf.keras.layers.Dense( units=1, activation='sigmoid' ) )
model2.compile( loss='binary_crossentropy', optimizer='Adam', metrics=['accuracy', tf.keras.metrics.Recall()] )
model2.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_2 (Dense) (None, 5) 155
_________________________________________________________________
dense_3 (Dense) (None, 3) 18
_________________________________________________________________
dense_4 (Dense) (None, 1) 4
=================================================================
Total params: 177
Trainable params: 177
Non-trainable params: 0
_________________________________________________________________
###Markdown
Note that we now have to fit far fewer parameters: 177 instead of 257. We need more epochs though.
###Code
model2.fit( train_x, train_y, epochs=50, batch_size=1024 );
###Output
Epoch 1/50
251/251 [==============================] - 1s 619us/step - loss: 0.3445 - accuracy: 0.9807 - recall_1: 0.0000e+00
Epoch 2/50
251/251 [==============================] - 0s 634us/step - loss: 0.0607 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 3/50
251/251 [==============================] - 0s 624us/step - loss: 0.0211 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 4/50
251/251 [==============================] - 0s 632us/step - loss: 0.0127 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 5/50
251/251 [==============================] - 0s 610us/step - loss: 0.0096 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 6/50
251/251 [==============================] - 0s 634us/step - loss: 0.0081 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 7/50
251/251 [==============================] - 0s 616us/step - loss: 0.0071 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 8/50
251/251 [==============================] - 0s 620us/step - loss: 0.0064 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 9/50
251/251 [==============================] - 0s 632us/step - loss: 0.0059 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 10/50
251/251 [==============================] - 0s 617us/step - loss: 0.0056 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 11/50
251/251 [==============================] - 0s 616us/step - loss: 0.0054 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 12/50
251/251 [==============================] - 0s 620us/step - loss: 0.0052 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 13/50
251/251 [==============================] - 0s 632us/step - loss: 0.0051 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 14/50
251/251 [==============================] - 0s 616us/step - loss: 0.0049 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 15/50
251/251 [==============================] - 0s 617us/step - loss: 0.0048 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 16/50
251/251 [==============================] - 0s 632us/step - loss: 0.0047 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 17/50
251/251 [==============================] - 0s 632us/step - loss: 0.0046 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 18/50
251/251 [==============================] - 0s 641us/step - loss: 0.0045 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 19/50
251/251 [==============================] - 0s 635us/step - loss: 0.0044 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 20/50
251/251 [==============================] - 0s 631us/step - loss: 0.0043 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 21/50
251/251 [==============================] - 0s 637us/step - loss: 0.0042 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 22/50
251/251 [==============================] - 0s 631us/step - loss: 0.0041 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 23/50
251/251 [==============================] - 0s 630us/step - loss: 0.0040 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 24/50
251/251 [==============================] - 0s 640us/step - loss: 0.0039 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 25/50
251/251 [==============================] - 0s 632us/step - loss: 0.0038 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 26/50
251/251 [==============================] - 0s 633us/step - loss: 0.0037 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 27/50
251/251 [==============================] - 0s 644us/step - loss: 0.0037 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 28/50
251/251 [==============================] - 0s 640us/step - loss: 0.0036 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 29/50
251/251 [==============================] - 0s 631us/step - loss: 0.0036 - accuracy: 0.9983 - recall_1: 0.0000e+00
Epoch 30/50
251/251 [==============================] - 0s 636us/step - loss: 0.0035 - accuracy: 0.9983 - recall_1: 0.0290
Epoch 31/50
251/251 [==============================] - 0s 629us/step - loss: 0.0035 - accuracy: 0.9994 - recall_1: 0.7879
Epoch 32/50
251/251 [==============================] - 0s 628us/step - loss: 0.0034 - accuracy: 0.9994 - recall_1: 0.7879
Epoch 33/50
251/251 [==============================] - 0s 645us/step - loss: 0.0034 - accuracy: 0.9994 - recall_1: 0.7879
Epoch 34/50
251/251 [==============================] - 0s 632us/step - loss: 0.0033 - accuracy: 0.9994 - recall_1: 0.7879
Epoch 35/50
251/251 [==============================] - 0s 616us/step - loss: 0.0033 - accuracy: 0.9994 - recall_1: 0.7902
Epoch 36/50
251/251 [==============================] - 0s 629us/step - loss: 0.0032 - accuracy: 0.9994 - recall_1: 0.7902
Epoch 37/50
251/251 [==============================] - 0s 632us/step - loss: 0.0032 - accuracy: 0.9994 - recall_1: 0.7924
Epoch 38/50
251/251 [==============================] - 0s 635us/step - loss: 0.0031 - accuracy: 0.9994 - recall_1: 0.7902
Epoch 39/50
251/251 [==============================] - 0s 638us/step - loss: 0.0031 - accuracy: 0.9994 - recall_1: 0.7969
Epoch 40/50
251/251 [==============================] - 0s 624us/step - loss: 0.0031 - accuracy: 0.9994 - recall_1: 0.7857
Epoch 41/50
251/251 [==============================] - 0s 631us/step - loss: 0.0030 - accuracy: 0.9994 - recall_1: 0.7902
Epoch 42/50
251/251 [==============================] - 0s 647us/step - loss: 0.0030 - accuracy: 0.9994 - recall_1: 0.7902
Epoch 43/50
251/251 [==============================] - 0s 636us/step - loss: 0.0030 - accuracy: 0.9994 - recall_1: 0.7879
Epoch 44/50
251/251 [==============================] - 0s 629us/step - loss: 0.0029 - accuracy: 0.9994 - recall_1: 0.7991
Epoch 45/50
251/251 [==============================] - 0s 636us/step - loss: 0.0029 - accuracy: 0.9994 - recall_1: 0.7991
Epoch 46/50
251/251 [==============================] - 0s 628us/step - loss: 0.0029 - accuracy: 0.9994 - recall_1: 0.7857
Epoch 47/50
251/251 [==============================] - 0s 628us/step - loss: 0.0028 - accuracy: 0.9994 - recall_1: 0.8013
Epoch 48/50
251/251 [==============================] - 0s 640us/step - loss: 0.0028 - accuracy: 0.9994 - recall_1: 0.7946
Epoch 49/50
251/251 [==============================] - 0s 628us/step - loss: 0.0028 - accuracy: 0.9994 - recall_1: 0.8036
Epoch 50/50
251/251 [==============================] - 0s 625us/step - loss: 0.0028 - accuracy: 0.9994 - recall_1: 0.8036
###Markdown
After about 40 epochs, the recall starts to reach the old level of about 80%. Let's test.
###Code
loss, accuracy, recall = model2.evaluate( test_x, test_y)
print( f"{loss=:.4f} {accuracy=:.4f} {recall=:.4f}" )
###Output
891/891 [==============================] - 0s 359us/step - loss: 0.0026 - accuracy: 0.9996 - recall_1: 0.8409
loss=0.0026 accuracy=0.9996 recall=0.8409
|
_notebooks/2020-09-13-efficient-dynamic-batching-of-large-datasets-with-infinibatch.ipynb | ###Markdown
"Efficient Dynamic Batching of Large Datasets with Infinibatch"> "Learn how to efficiently batch large datasets with varied sequence length for training using [infinibatch](https://github.com/microsoft/infinibatch/)."- toc:true- badges: true- comments: true- categories: [deep learning] We will explore how to efficiently batch large datasets with varied sequence length for training using [infinibatch](https://github.com/microsoft/infinibatch/). The focus will be on solving multiple challenges associated with this and making it work with `dataloader` abstraction in pytorch library. Though our focus is on pytorch, Infinibatch is a pure python library agnostic of the deep learning library.This post was inspired by this thread on twitter.> twitter: https://twitter.com/marian_nmt/status/1292850875715604480> Note: We will use [wikitext-103](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) dataset as an example. It's a dataset with sentences from wikipedia. It has 103,227,021 word level tokens in it's training split. It is used only for illustration, the techniques discussed here are can work for far larger dataset sizes.
###Code
#collapse-hide
#hide_output
!wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
!unzip wikitext-103-raw-v1.zip
###Output
--2020-09-21 08:54:11-- https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.217.102.30
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.217.102.30|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 191984949 (183M) [application/zip]
Saving to: ‘wikitext-103-raw-v1.zip’
wikitext-103-raw-v1 100%[===================>] 183.09M 62.4MB/s in 2.9s
2020-09-21 08:54:14 (62.4 MB/s) - ‘wikitext-103-raw-v1.zip’ saved [191984949/191984949]
Archive: wikitext-103-raw-v1.zip
creating: wikitext-103-raw/
inflating: wikitext-103-raw/wiki.test.raw
inflating: wikitext-103-raw/wiki.valid.raw
inflating: wikitext-103-raw/wiki.train.raw
###Markdown
Challenges in efficiently processing large datasets
1. Loading and shuffling large datasets
For large datasets, loading the entire data into memory might not be possible. If we were to sample fully random batches, we would need to do random access on a huge dataset. Depending on the disk latency this might be unfeasible. To solve this we can do the following:
1. Shard the data into chunks larger than single instances, so that we reduce disk access.
2. Shuffle the chunks, load a few of them, and shuffle the data loaded from the chunks.
If we shard the data into chunks that are too big, we might end up losing statistical power in our training updates, as we are essentially reducing the randomness of the samples used for training. But we can't shard them too small either, as that wouldn't solve our disk access problem. We need a flexible approach that would make it easy to control how much data is to be loaded into memory for shuffling. To address this challenge in isolation, you can refer to the dataset sharding logic in NVIDIA's [MEGATRON language model training code](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/data/indexed_dataset.py). But infinibatch solves it in a more generalized manner, along with our other challenges.
2. Dynamic Batching
NLP datasets generally have samples which are of varied lengths. When we batch the data for training on devices like GPUs, we are forced to make them into n-dimensional tensors with fixed dimensions. The most common type of input for NLP models is of the shape **Mini-batch size x Sequence length**. The sequence length is either a fixed value or the length of the longest sequence in that batch. The shorter sequences in the mini-batch are generally padded with a *padding token*. These padding tokens are wasteful in terms of computation, as they don't do anything useful.
Some tutorials and examples you would find for pre-processing data pad batches to a pre-determined sequence length, independent of the elements in each batch. This is fully wasteful, as many batches would have all their members shorter than the pre-determined length. A better option would be to pad the elements of each batch to the maximum sequence length in that batch. This **dynamic padding** can improve efficiency, but it doesn't solve the entire problem. Let's see why with an example.
Tokenization and length distribution
Let's implement a typical dynamic padding workflow with a pytorch dataloader and a subword level tokenizer. We use the BERT-base-cased tokenizer from huggingface's transformers library. This tokenizes words to subwords. The BERT model was pre-trained with a maximum subword length of 512. We can theoretically use sequence lengths larger than that, but for our purposes we will leave it at 512. We will use torch's [dataset and dataloader](https://pytorch.org/docs/stable/data.html) abstraction for this. It acts both as an illustration of real world usage and is convenient, as it helps avoid having the entire tokenized dataset in memory. We still have to load the sentences into memory once. This is not a problem for small datasets, but for very large corpora it's a big problem, as mentioned before.
###Code
#collapse-hide
#hide_output
!pip install git+https://github.com/microsoft/infinibatch.git transformers torch
from typing import Dict, List
import torch
from torch.utils.data import Dataset, DataLoader
import tqdm
from transformers import AutoTokenizer, PreTrainedTokenizerFast
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import torch
%matplotlib inline
class Wiki103(Dataset):
def __init__(self, tokenizer: PreTrainedTokenizerFast, data_path="wikitext-103-raw/wiki.train.raw", max_length: int = 512):
self.tokenizer = tokenizer
with open(data_path) as dp:
# We load all sentences into memory once; fine for wikitext-103,
# but not an option for much larger corpora (see the note above).
self.sentences = [sentence.strip() for sentence in dp if len(sentence.strip()) > 2]
self.max_length = max_length
def __len__(self):
return len(self.sentences)
def __getitem__(self, i) -> Dict[str, torch.Tensor]:
return self.tokenizer(self.sentences[i], max_length=self.max_length, truncation=True)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True)
wiki_dataset = Wiki103(tokenizer=tokenizer)
sequence_lengths = []
for example in tqdm.tqdm(iter(wiki_dataset)):
sequence_lengths.append(len(example["input_ids"]))
###Output
_____no_output_____
###Markdown
By plotting the truncated `Subword sequence length vs Frequency` we see a distribution with a large variance.
###Code
#collapse-hide
with plt.style.context('fivethirtyeight'):
plt.figure(figsize=[7,5])
n, bins, patches = plt.hist(x=sequence_lengths, bins=50, color='#0504aa',alpha=0.7, rwidth=0.85)
plt.grid(axis='y', alpha=0.75)
plt.xlabel('Subword Sequence Length',fontsize=15)
plt.ylabel('Frequency',fontsize=15)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel('Frequency',fontsize=15)
plt.title('Sequence Length Distribution',fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
Dynamic Padding
From the above graph we can intuit that if we draw random samples from the data to form a mini-batch, we would have a few examples which are significantly longer than the rest. This would mean that we would add a lot of padding tokens. This holds even if we clean the very short instances as noise. Let's implement dynamic padding and measure how much padding we end up with. We can use torch's `DataLoader` abstraction to do efficient batching with multi-processing. Since our tokenized outputs are of different lengths, we have to implement a collate function to pad them together dynamically. We can pass the `tokenizer.pad` function implemented in huggingface's tokenizer as the collate function.
###Code
def collate_fn(examples: List[Dict[str, torch.Tensor]]) -> Dict[str, torch.Tensor]:
# Since huggingface has already implemented this, this function is just to illustrate what a collator does.
return tokenizer.pad(examples, return_tensors='pt')
dataloader = DataLoader(dataset=wiki_dataset, batch_size=32, collate_fn=collate_fn)
###Output
_____no_output_____
###Markdown
Let's assume that we can use a maximum batch size of 32 at a max sequence length of 512 for our model on our training hardware without out-of-memory errors. The tokens per batch would be `512 * 32 = 16384`. We can now compute how much of that is padding tokens, and what the distribution of the batch's sequence length is (which depends on the longest element in the batch).
###Code
total_tokens = 0
padding_tokens = 0
batch_lengths = []
for batch in tqdm.tqdm(iter(dataloader)):
batched_input_ids = batch["input_ids"]
batch_lengths.append(batched_input_ids.shape[1])
total_tokens += batched_input_ids.numel()
padding_tokens += batched_input_ids[batched_input_ids == tokenizer.pad_token_id].numel()
print(f"Total Batches : {len(iter(dataloader))}")
print(f"Padding Tokens : {padding_tokens}")
print(f"Input Tokens : {total_tokens - padding_tokens}")
print(f"Total Tokens : {total_tokens}")
print(f"Padding Tokens % : {(padding_tokens*100)/total_tokens}")
###Output
Total Batches : 36390
Padding Tokens : 244072396
Input Tokens : 119699332
Total Tokens : 363771728
Padding Tokens % : 67.09493267712108
###Markdown
Surprise, surprise, **67% of our net tokens are padding tokens**. This would imply that of all the computations that we do, only 33% is done for useful work. This starkly highlights the problem with static batch lengths, even when accounting for dynamic padding. Let's also plot the distribution of batch lengths.
###Code
#collapse-hide
with plt.style.context('fivethirtyeight'):
plt.figure(figsize=[7,5])
n, bins, patches = plt.hist(x=batch_lengths, bins=50, color='#0504aa',alpha=0.7, rwidth=0.85)
plt.grid(axis='y', alpha=0.75)
plt.xlabel('Batch Sequence Length',fontsize=15)
plt.ylabel('Frequency',fontsize=15)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel('Frequency',fontsize=15)
plt.title('Static Batch - Dynamic Padding Length Distribution',fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
As batches are randomly sampled, we see a normal distribution, as we should expect by the Central Limit Theorem. The frequency in the final bin is deviant because we have a significant number of sentences which we had truncated, hence batches with them will have the maximum sequence length.
General approach to dynamic batching
Instead of drawing samples at random, had we sorted our dataset by length, we could form batches by packing similar length sequences together into a batch till we reach the maximum number of tokens that we can fit. The maximum number of tokens that can be packed can be derived approximately from our previous memory limit *static_batch x max_sequence_length*. This allows us to pack more instances in one batch without much padding, because the sequences would be of similar lengths after sorting.
We can't sort the entire dataset, because machine learning training is based on the assumption that our instances are drawn independently from an identical distribution (IID). If we were to sort the entire dataset, this breaks the assumption, as our samples are no longer drawn independently from each other. If sentence length were a confounding factor, then the model might fit on this spurious correlation. We have a trade-off here between the statistical power derived from randomization of our samples and the lower error in gradient updates derived from larger batch sizes if we batch dynamically. Generally, we can have a positive trade-off by sampling a window of instances, sorting within the window and forming batches.
The `Dataset` we implemented above is a [map-style](https://pytorch.org/docs/stable/data.html#map-style-datasets) dataset. It implements length and random access to each individual data sample with an index (`__getitem__`). The sampling into batches is taken care of by a sampler passed to `DataLoader`. I don't think there is a clean way to implement a map-style dataset and a collate function such that we get batches with dynamic batch sizes but the same number of tokens per batch. This comes from a basic mismatch: the number of dynamic batches you can form keeps changing based on the larger window you sample. So it turns out that we have to do all the shuffling, windowing, sorting and batching inside an [iterable-style](https://pytorch.org/docs/stable/data.html#iterable-style-datasets) `IterableDataset` abstraction. These features are implemented by infinibatch.
3. Checkpointing
With large datasets, it's typical not to wait for an entire epoch to checkpoint your model to recover from failures. So to be able to recover and continue training in a deterministic manner, such that it converges to the same state as if the failure hadn't occurred, we have to checkpoint the random state that controls the order in which our samples are generated.
Infinibatch to the rescue
> Infinibatch is a library of checkpointable iterators for randomized data loading of massive data sets in deep neural network training.
It is aimed at simplifying the processing of large datasets. It is a collection of pure python classes that implement the `__iter__` interface. They can be composed inside one another easily, and the final composed iterator can be checkpointed as a single entity. You can check out its basic tutorial [here](https://github.com/microsoft/infinibatch). We will use it to address the listed challenges piece by piece and then finally make it work inside the `IterableDataset` and `DataLoader` abstractions. We will also see the tricks needed to make it work with distributed data parallel training.
1. Loading and shuffling large datasets
Following the infinibatch tutorial, we divide our dataset into multiple gzip chunks of 10000 sentences each.
###Code
!mkdir -p wikitext-103-chunks
!split -a 4 --lines 10000 --numeric-suffixes --filter 'gzip > wikitext-103-chunks/$FILE.txt.gz' wikitext-103-raw/wiki.train.raw train.
###Output
_____no_output_____
###Markdown
We can now create an iterator using infinibatch with a function that can deserialize a shard. Infinibatch takes care of loading multiple shards in a shuffled order. We can control how many deserialized individual examples from the shards are buffered using the `buffer_size` parameter. The library returns a python iterable. We can call `next(iterable)` or iterate with a `for` to get the examples. Note: Passing `train=True` creates an infinite iterator that cycles after a full run over the dataset. The `chunked_dataset_iterator` method returns a composition of iterators; you can refer to the source code [here](https://github.com/microsoft/infinibatch/blob/master/infinibatch/iterators.py).
###Code
import gzip, glob
from functools import partial
from infinibatch import datasets, iterators
def read_chunk(path):
with open(path, "rb") as fp:
lines = gzip.decompress(fp.read()).decode(encoding='utf-8').splitlines()
lines = [sentence.strip() for sentence in lines if len(sentence.strip()) > 2]
return iter(lines)
sentence_it = datasets.chunked_dataset_iterator(
chunk_refs = glob.glob('wikitext-103-chunks/train.*.txt.gz'),
read_chunk_fn = read_chunk,
buffer_size = 100000, seed = 1337, train=True, shuffle=True)
print(next(sentence_it))
###Output
In 1993 , David Mirkin hired Scully to write for The Simpsons , as a replacement for the departing Conan O 'Brien , after reading some of his sample scripts . He began as a writer and producer for the show during its fifth season and wrote the episodes " Lisa 's Rival " , " Two Dozen and One Greyhounds " and " Lisa on Ice " which aired in season six . " Lisa 's Rival " was his first episode ; he wrote the script , but the original concept had been conceived by O 'Brien . Similarly , he wrote the script for " Two Dozen and One Greyhounds " , which was based on an idea by Al Jean and Mike Reiss . " Lisa on Ice " was inspired by Scully 's love of ice hockey and featured many experiences from his childhood , as was " Marge Be Not Proud " ( which he wrote for season seven ) which was based " one of the most traumatic moments " of his life , when he was caught shoplifting aged twelve . He jokingly told Variety that " It 's great to be paid for reliving the horrors of your life . " He also wrote " Team Homer " and " Lisa 's Date with Density " . Scully noted : " I wrote a lot of Lisa 's shows . I have five daughters , so I like Lisa a lot . I like Homer , too . Homer comes very naturally to me : I don 't know if that 's a good or a bad thing . A lot of my favorite episodes are the ones when Homer and Lisa are in conflict with each other ... They 're very human , I think that 's their appeal . "
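###Markdown
Since checkpointing was listed as challenge 3 above, a quick aside: infinibatch's iterators are checkpointable via `getstate()` and `setstate()` (that is my reading of the library's API; see its tutorial for the authoritative version). A minimal sketch on the sentence iterator we just built:
###Code
# Sketch: capture the iterator state, read a sentence, rewind, and read it again.
checkpoint = sentence_it.getstate()   # snapshot of the random/reading state
first = next(sentence_it)
sentence_it.setstate(checkpoint)      # rewind to the snapshot
print(first == next(sentence_it))     # expected: True, the same sentence comes back
###Output
_____no_output_____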
###Markdown
Tensorize our dataset with a map iterator
We can now compose our tokenizer upon our sentence iterator. Infinibatch has two ways of doing this:
1. `MapIterator`
2. `ParallelMapIterator`
If you use pytorch and need multiprocessing to do costly transformations over your data on the fly, use the `ParallelMapIterator` and set `num_processes` to what you would have used for `num_workers`. And set `num_workers=0` in your dataloader.
###Code
tokenize_fn = partial(tokenizer, max_length=512, truncation=True)
features_it = iterators.ParallelMapIterator(
source_iterator=sentence_it,
num_processes=4,
num_items_per_process=1000,
transform=tokenize_fn
)
next(features_it)
###Output
_____no_output_____
###Markdown
2. Dynamic Batching
Now comes the magic of dynamic batching with `BucketedReadaheadBatchIterator`. Let's fix the maximum tokens per batch to `32 * 512 = 16384`. This iterator computes dynamic batch sizes by iteratively applying a user given `batch_size` function to the current longest example (with the length computed by a user given `key` function) in a sorted `read_ahead` window.
Example
Say we want 60 tokens per batch and we set a read ahead window of 6. Assume we fetch six items [a, b, c, d, e, f] in the first read ahead window.
| Sequence id | Length |
|-------------|--------|
|a|50|
|b|30|
|c|20|
|d|20|
|e|30|
|f|20|
First we sort this window with lengths in decreasing order. The sort order is stable. This preserves the shuffling of equal sized elements from the previous iterator. So for our example it would be [a, b, e, c, d, f]. Now we can compute the dynamic batch sizes by applying the function `batch_size` iteratively till the window is exhausted. Assume our function is `lambda longest_instance: 60 // len(longest_instance)`. Applying it to the first (longest) remaining item `a`, the first batch size will be `60 // 50 = 1`. The next longest item remaining is used to calculate the size of the next batch, and so on. So we will end up with [a], [b, e], [c, d, f]. Each of these batches holds at most 60 tokens. You can take a look at the code that does this computation [here](https://github.com/microsoft/infinibatch/blob/master/infinibatch/iterators.py#L1080).
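###Markdown
To make the toy example concrete, here is a small standalone sketch of that packing rule in plain python (my own sketch of the idea, not infinibatch's actual implementation; see the linked source for the real code):
###Code
# Toy re-enactment of the example above: stable-sort the window by length (longest first),
# then repeatedly form a batch sized by the longest remaining item under a 60-token budget.
window = [("a", 50), ("b", 30), ("c", 20), ("d", 20), ("e", 30), ("f", 20)]
budget = 60
window = sorted(window, key=lambda item: item[1], reverse=True)   # stable sort
batches = []
while window:
    batch_size = budget // window[0][1]   # batch size from the longest remaining item
    batches.append([name for name, _ in window[:batch_size]])
    window = window[batch_size:]
print(batches)   # [['a'], ['b', 'e'], ['c', 'd', 'f']]
###Output
_____no_output_____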
###Code
tokens_per_batch = 32 * 512
batches_it = iterators.BucketedReadaheadBatchIterator(
source_iterator=features_it,
# read_ahead is the number of items to be read from previous iterator,
# these are sorted and over which dynamic batches are formed.
read_ahead=10000,
# key determines the length used to sort and choose the longest remaining record.
key=lambda example: len(example['input_ids']),
# Determines the dynamic batch size
batch_size=lambda longest_example: tokens_per_batch // len(longest_example['input_ids']),
seed=0
)
dynamic_batch_wo_padding = next(batches_it)
print(f"Dynamic batch size: {len(dynamic_batch_wo_padding)}")
dynamic_batch_wo_padding = next(batches_it)
print(f"Dynamic batch size: {len(dynamic_batch_wo_padding)}")
print(dynamic_batch_wo_padding[:2])
###Output
Dynamic batch size: 124
Dynamic batch size: 94
[{'input_ids': [101, 1109, 3500, 3039, 13025, 1126, 1903, 1104, 1492, 188, 1204, 121, 137, 119, 137, 8347, 5682, 113, 5787, 137, 119, 137, 125, 4023, 114, 117, 1166, 126, 137, 119, 137, 126, 6549, 113, 123, 137, 119, 137, 126, 4023, 114, 1679, 24773, 1167, 1190, 1147, 7741, 119, 3900, 112, 188, 3039, 4049, 1198, 170, 1423, 24773, 1114, 12936, 6398, 2541, 1107, 1103, 3499, 1526, 2084, 1105, 6625, 2188, 20394, 5773, 1633, 117, 1229, 3500, 1486, 2055, 11142, 117, 1681, 156, 10098, 2897, 1105, 1884, 1775, 4978, 11661, 1862, 119, 3396, 24773, 1116, 117, 1160, 1121, 1296, 2755, 117, 1127, 4410, 1112, 7474, 6635, 1107, 1103, 1886, 131, 3500, 112, 188, 4367, 155, 5792, 1108, 1925, 1229, 141, 119, 6511, 1108, 1237, 117, 1105, 3900, 112, 188, 139, 119, 3160, 1108, 1375, 2170, 1229, 1287, 25730, 1931, 17932, 1121, 1203, 2512, 119, 3500, 112, 188, 3499, 1526, 2084, 11311, 5157, 14618, 1125, 2856, 1471, 1121, 1103, 3039, 1160, 2277, 2988, 1106, 1103, 1886, 1112, 170, 1871, 1104, 170, 2960, 1104, 1532, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}, {'input_ids': [101, 2907, 1107, 1478, 117, 1119, 1125, 170, 4374, 1648, 1107, 1103, 28117, 21643, 1273, 1109, 156, 13622, 22273, 7443, 119, 1230, 1397, 1273, 1648, 1108, 1107, 1823, 20452, 1324, 10781, 1204, 2391, 112, 188, 11826, 6945, 1643, 4371, 113, 1478, 114, 119, 1130, 1103, 1273, 117, 17784, 1733, 19572, 1307, 1126, 1586, 23963, 1233, 117, 1150, 1110, 2802, 1106, 1712, 3542, 1104, 8125, 7782, 7895, 112, 188, 1959, 119, 6945, 1643, 4371, 1108, 12468, 1120, 170, 1957, 8685, 1120, 1103, 13631, 2683, 3506, 1570, 2352, 2263, 1107, 1478, 119, 2711, 1103, 3216, 3761, 117, 1103, 1273, 1108, 170, 2798, 2244, 117, 6957, 109, 22803, 1550, 4529, 117, 1543, 1122, 1117, 2439, 137, 118, 137, 19842, 1273, 1106, 1103, 1322, 1104, 1369, 119, 17784, 1733, 19572, 112, 188, 1397, 2672, 1108, 147, 1813, 3925, 113, 1478, 114, 117, 3714, 4387, 144, 7777, 7836, 2328, 1348, 119, 1109, 2523, 1110, 1359, 1113, 158, 119, 156, 119, 4620, 4140, 156, 12821, 3101, 6944, 112, 188, 1581, 5634, 1414, 14871, 1104, 1103, 1269, 1271, 119, 102], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}]
###Markdown
Now we can collate our examples and see how much this scheme has saved us. Since the training iterator is infinite, we will recreate the pipeline as a finite (non-repeating) iterator by setting `train=False`.
###Code
sentence_it_finite = datasets.chunked_dataset_iterator(
chunk_refs = glob.glob('wikitext-103-chunks/train.*.txt.gz'),
read_chunk_fn = read_chunk,
buffer_size = 100000,
seed = 1337,
train=False,
shuffle=False
)
features_it_finite = iterators.ParallelMapIterator(
source_iterator=sentence_it_finite,
num_processes=4,
num_items_per_process=1000,
transform=tokenize_fn
)
batches_it_finite = iterators.BucketedReadaheadBatchIterator(
source_iterator=features_it_finite,
read_ahead=10000, # Determines the window for the bucket which
# will be sorted and converted to batches.
key=lambda example: len(example['input_ids']), # Determines the length used
# to sort and choose the longest remaining record.
batch_size=lambda longest: tokens_per_batch // len(longest['input_ids']),
# Determines the dynamic batch size
seed=0
)
collate_fn = partial(tokenizer.pad, return_tensors='pt')
tensors_it_finite = iterators.MapIterator(
batches_it_finite,
transform=collate_fn
)
total_batches_dynamic = 0
total_tokens_dynamic = 0
padding_tokens_dynamic = 0
batch_lengths_dynamic = []
for batch in tqdm.tqdm(tensors_it_finite):
total_batches_dynamic += 1
batched_input_ids = batch["input_ids"]
batch_lengths_dynamic.append(batched_input_ids.shape[1])
total_tokens_dynamic += batched_input_ids.numel()
padding_tokens_dynamic += batched_input_ids[batched_input_ids == tokenizer.pad_token_id].numel()
print(f"Total Batches : {total_batches_dynamic}") # Seeing the tqdm stats.
print(f"Padding Tokens : {padding_tokens_dynamic}")
print(f"Input Tokens : {total_tokens_dynamic - padding_tokens_dynamic}")
print(f"Total Tokens : {total_tokens_dynamic}")
print(f"Padding Tokens % : {(padding_tokens_dynamic*100)/total_tokens_dynamic}")
###Output
Total Batches : 7650
Padding Tokens : 3838939
Input Tokens : 119699332
Total Tokens : 123538271
Padding Tokens % : 3.1074896620497463
###Markdown
We have **reduced** the share of **padding tokens** per epoch from **67% to just around 3%**. The number of batches needed to process the corpus under the same max-tokens-per-batch limit hence dropped nearly five-fold, from 36390 to 7650. The processing time is just one minute extra; I guess that might be due to IO, but you could benchmark it with more rigour. Now, let's plot the length distribution of the dynamic batches.
###Code
#collapse-hide
with plt.style.context('fivethirtyeight'):
plt.figure(figsize=[7,5])
n, bins, patches = plt.hist(x=batch_lengths_dynamic, bins=50, color='#0504aa',alpha=0.7, rwidth=0.85)
plt.grid(axis='y', alpha=0.75)
plt.xlabel('Batch Sequence Length',fontsize=15)
plt.ylabel('Frequency',fontsize=15)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel('Frequency',fontsize=15)
plt.title('Dynamic Batch - Dynamic Padding Length Distribution',fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
We now see that the expected per-batch sequence length has dropped from around 300 to around 200. Static Batch Sizes with Sorted Window: In practice, using variable batch sizes can be a problem depending on your task. Since the tokens within an instance are already correlated, having a few (longer) instances in one batch might give a noisier gradient update than many shorter instances in a batch. This is my **speculation** as to why I wasn't able to make purely dynamic batch sizes work well when fine-tuning BERT for token classification. But all is not lost: we can still use bucketing to get uniform-length batches with fewer padding tokens. Let's check the padding token % when we fix the batch size at 32 but sort a window of 10 batches (320 instances) before forming batches, as sketched below.
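To make the window-sorting idea concrete, here is a small self-contained sketch (plain Python with made-up lengths, not part of the original pipeline): sort one read-ahead window of examples by length, then cut it into fixed-size batches, so each batch holds similar lengths and per-batch padding stays small.
###Code
# Toy sketch of window-sorted static batching with hypothetical sequence lengths.
import random
random.seed(0)
lengths = [random.randint(5, 300) for _ in range(320)]  # one read-ahead window
window = sorted(lengths)                                # sort the window by length
batch_size = 32
length_batches = [window[i:i + batch_size] for i in range(0, len(window), batch_size)]
for b in length_batches[:3]:
    # padding per row is at most max(b) - min(b), which is small after sorting
    print(min(b), max(b), max(b) - min(b))
###Output
_____no_output_____
###Markdown
The full pipeline below does the same thing with `BucketedReadaheadBatchIterator`, using `read_ahead=320` and a fixed `batch_size=32`.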
###Code
sentence_it_finite = datasets.chunked_dataset_iterator(
chunk_refs = glob.glob('wikitext-103-chunks/train.*.txt.gz'),
read_chunk_fn = read_chunk,
buffer_size = 100000,
seed = 1337,
train=False,
shuffle=False
)
features_it_finite = iterators.ParallelMapIterator(
source_iterator=sentence_it_finite,
num_processes=4,
num_items_per_process=1000,
transform=tokenize_fn
)
batches_it_finite = iterators.BucketedReadaheadBatchIterator(
source_iterator=features_it_finite,
key=lambda example: len(example['input_ids']), # Determines the length used
read_ahead=320, # Setting this ten times the static batch size.
batch_size=32,
seed=0
)
collate_fn = partial(tokenizer.pad, return_tensors='pt')
tensors_it_finite = iterators.MapIterator(
batches_it_finite,
transform=collate_fn
)
total_batches_bucket_sorted = 0
total_tokens_bucket_sorted= 0
padding_tokens_bucket_sorted = 0
batch_lengths_bucket_sorted = []
for batch in tqdm.tqdm(tensors_it_finite):
total_batches_bucket_sorted += 1
batched_input_ids = batch["input_ids"]
batch_lengths_bucket_sorted.append(batched_input_ids.shape[1])
total_tokens_bucket_sorted += batched_input_ids.numel()
padding_tokens_bucket_sorted += batched_input_ids[batched_input_ids == tokenizer.pad_token_id].numel()
print(f"\nTotal Batches : {total_batches_bucket_sorted}") # Seeing the tqdm stats.
print(f"Padding Tokens : {padding_tokens_bucket_sorted}")
print(f"Input Tokens : {total_tokens_bucket_sorted - padding_tokens_bucket_sorted}")
print(f"Total Tokens : {total_tokens_bucket_sorted}")
print(f"Padding Tokens % : {(padding_tokens_bucket_sorted*100)/total_tokens_bucket_sorted}")
###Output
36390it [08:26, 71.81it/s]
###Markdown
So we see that the % of total padding tokens has decreased from 67% to about 20%. And we can see from the plot below that the distribution of batch sequence lengths is close to the distribution of individual sequence lengths. This is different from the case where we made static batches without sorting.
###Code
#collapse-hide
with plt.style.context('fivethirtyeight'):
plt.figure(figsize=[7,5])
n, bins, patches = plt.hist(x=batch_lengths_bucket_sorted, bins=50, color='#0504aa',alpha=0.7, rwidth=0.85)
plt.grid(axis='y', alpha=0.75)
plt.xlabel('Batch Sequence Length',fontsize=15)
plt.ylabel('Frequency',fontsize=15)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel('Frequency',fontsize=15)
plt.title('Window Sorted Static Batching + Dynamic Padding Length Distribution',fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
3. Checkpointing. One cool feature of `infinibatch` is that you can checkpoint the current state of the composed iterators and later restore (rewind) them to exactly that state. This works recursively through the composed iterators and even on infinite iterators. Let's recreate our iterators and check this out.
###Code
#collapse-hide
sentence_it = datasets.chunked_dataset_iterator(
chunk_refs = glob.glob('wikitext-103-chunks/train.*.txt.gz'),
read_chunk_fn = read_chunk,
buffer_size = 100000,
seed = 1337,
train=False,
shuffle=False
)
features_it = iterators.ParallelMapIterator(
source_iterator=sentence_it,
num_processes=4,
num_items_per_process=1000,
transform=tokenize_fn
)
batches_it = iterators.BucketedReadaheadBatchIterator(
source_iterator=features_it,
read_ahead=10000, # Determines the window for the bucket which
# will be sorted and converted to batches.
key=lambda example: len(example['input_ids']), # Determines the length used
# to sort and choose the longest remaining record.
batch_size=lambda longest: tokens_per_batch // len(longest['input_ids']),
# Determines the dynamic batch size
seed=0
)
collate_fn = partial(tokenizer.pad, return_tensors='pt')
tensors_it = iterators.MapIterator(
batches_it,
transform=collate_fn
)
initial_state = tensors_it.getstate()
print("Initial State of composed iterators", initial_state)
# Draw 5 batches
batches = [next(tensors_it) for _ in range(5)]
print(f"Current State after sampling 5 batches: {tensors_it.getstate()}")
# Reset the Iterator
tensors_it.setstate(initial_state)
# Redraw 5 batches
redraw_batches = [next(tensors_it) for _ in range(5)]
print(f"State after resampling 5 batches: {tensors_it.getstate()}")
# Check equal
all_equal = True
for b1, b2 in zip(batches, redraw_batches):
for k in b1:
if torch.all(b1[k].eq(b2[k])):
continue
all_equal = False
break
if not all_equal:
break
print(f"All items drawn after resetting are equal: {all_equal}")
###Output
Initial State of composed iterators {'source_state': None, 'random_state': None, 'num_served': 0}
Current State after sampling 5 batches: {'source_state': {'source_state': None, 'flattened_items_yielded': 0}, 'random_state': (3, (2147483648, 766982754, 497961170, 3952298588, 2331775348, 1811986599, 3100132149, 3188119873, 3937547222, 215718963, 3315684082, 2978012849, 2428261856, 1298227695, 1704729580, 54668373, 3285201915, 3285178464, 1552935063, 988471319, 3135387943, 1691402966, 2757551880, 416056905, 907387413, 1072924981, 33903495, 2168419592, 2429050353, 831159753, 430343641, 3315943586, 1761671042, 864453023, 334804929, 1627478028, 2596811275, 3468733638, 3994375553, 1457139722, 3139722021, 1334790738, 2656639915, 3535811098, 1464315470, 2397423927, 885719490, 1140895889, 3284299483, 2854516462, 2734973817, 147484763, 792049954, 114360641, 3345458839, 1159898878, 1410498733, 2242989638, 453922141, 1344019764, 413870456, 3089405849, 1494382840, 470157779, 4266372830, 2831181573, 1361928602, 1589253513, 1381373062, 753045124, 987032420, 781978839, 2953638767, 3258570111, 3006718191, 1675218601, 1854232715, 3655829819, 1731242722, 2192104666, 1736665161, 740150002, 1195833394, 1610203160, 159492766, 4041488705, 3128952632, 2867295744, 3272632449, 886824304, 1791482600, 221114776, 3867175393, 4020804062, 1077871826, 1298953503, 996366221, 4149754679, 2483052703, 2615558283, 274318093, 1716359450, 4099129961, 1026774175, 288240973, 1459347562, 2365566296, 3690105224, 3065780221, 2050634722, 2652606621, 3185241207, 3026457375, 3456165734, 1880121515, 3398461093, 1795638629, 2379692076, 608668379, 1261955525, 84456522, 1913485156, 106878280, 757183891, 2913957588, 160418091, 2025664758, 141497907, 1657818026, 3053760160, 672193054, 4157546743, 223046484, 1623470498, 1201972930, 675008814, 684162366, 1738776330, 3025656654, 159760723, 1908867305, 3933381342, 2545706671, 467196949, 1427819885, 842150314, 4032903454, 2140851898, 3269883445, 975813755, 4177392955, 1556690684, 2535611513, 462962732, 67591358, 1729610528, 2025206740, 3153739740, 3255032049, 4186226368, 1070144624, 3107867195, 1621006038, 63742485, 835629717, 3189842019, 3950227584, 3184714559, 841836938, 1685394870, 657939920, 766156242, 1412314179, 1048281639, 4037161120, 2044490307, 1923947830, 3900790422, 907554295, 276417304, 860658646, 3574201134, 3508771399, 2110232300, 1636296241, 1405006077, 1093408401, 3243057343, 1519791182, 1994660136, 3829840937, 2644974199, 957955566, 3487641161, 1646922510, 1907939989, 3836029453, 3429168778, 201307778, 72550089, 2464394982, 1695794191, 3344785682, 996786130, 3589457196, 1241754792, 1291082245, 4224603667, 1194379475, 2693491244, 881186965, 2705535111, 445306946, 440274268, 1980827733, 2482488861, 3205215943, 2119332222, 2928713046, 1418736938, 652581136, 2474070665, 2208621536, 4171251876, 2303664214, 443762656, 2981912989, 2199228311, 2652261633, 3166738494, 3443009210, 3498764432, 424010848, 4065487566, 2262993542, 1756076712, 1477098233, 2742171915, 306185806, 3610666541, 923091830, 1034267993, 2336668648, 1880719718, 676878038, 3788797208, 3763351494, 3985428106, 1101865631, 1130501258, 3672967388, 3432003530, 4124438011, 1660392285, 4025484827, 2108074566, 3815409682, 42955331, 3248965569, 1643835718, 1246665668, 1071162194, 3814069229, 115491158, 985096811, 3311029186, 2990827378, 3101633320, 1648574497, 1470117052, 174145027, 2019894819, 2035501481, 459104123, 3507464599, 2093352659, 3369174406, 618767835, 4009895756, 935587447, 3956987426, 33753995, 307782427, 2473424805, 1440371818, 2382619594, 2138695812, 3164510238, 1318650933, 2910086616, 3886677510, 566832801, 
3718063320, 1559818704, 183047272, 1142362855, 26306548, 645536402, 3875596208, 2272778168, 3512733409, 1897046338, 38248886, 2570759766, 1806313150, 860304898, 2433450338, 4124013408, 1216634590, 1275388896, 1169566669, 652504502, 761221427, 1448403764, 3129135949, 2513214949, 1269533687, 2413509541, 1226750363, 2450740925, 4094137910, 945759293, 3636927736, 3178020081, 2509964157, 3878869300, 1848504895, 2018369720, 1579755740, 1023627943, 924838836, 2653160914, 1812804174, 1521323076, 4012390528, 1338763317, 2608655937, 16022784, 1672945066, 2177189646, 2944458483, 2213810972, 1369873847, 1224017670, 130901785, 3595066712, 2259115284, 3316038259, 455873927, 2917250465, 3599550610, 1502173758, 684943436, 3079863840, 3144992244, 942855823, 1771140188, 2118780653, 3411494225, 2711180217, 4239611184, 1371891067, 3398566397, 3105518599, 1310665701, 3345178451, 2959821156, 242241789, 2148966880, 3192740583, 404401893, 3605380577, 1446464038, 3920522056, 2577523013, 1079274576, 286634372, 1752710796, 2351075979, 981312309, 3410516352, 3468455736, 1938779182, 1592494371, 1533303080, 88045436, 438252489, 1220512168, 3487004938, 3724852871, 1073434882, 3728218947, 2977555283, 4105408406, 3553772656, 1462006821, 3917158017, 119003006, 3470530198, 3439192457, 2829375771, 3555715155, 32324691, 588735808, 1459221702, 803072782, 2699519868, 1530797005, 79738580, 671990400, 4289511388, 3207115447, 2584684068, 832698998, 760958416, 1217440464, 2517898131, 2418819938, 3629956222, 3445024962, 206619378, 365007395, 522114139, 1707954431, 540423623, 1786750801, 369253262, 4239016754, 147889201, 1637777773, 236798285, 2806120188, 586972608, 2201782716, 1323327827, 819485723, 406078680, 3407345698, 1537169369, 1821691865, 527271655, 3751827102, 1465426495, 3321682429, 2179672664, 401355478, 1068871880, 24609462, 1403522408, 2311580015, 1532058170, 3877815340, 1768430711, 1619755157, 2832904331, 475102697, 354987331, 3295386430, 2816873951, 1039415736, 363972779, 1499307670, 2895506264, 3746345349, 2678027234, 3251899088, 955392878, 2329157295, 1343358773, 309573887, 2410178377, 2843173466, 361132917, 1755816798, 1319204283, 609284796, 1998842567, 1892325921, 223190385, 1483015769, 2876023365, 3876009312, 3199738344, 491524099, 160383137, 1219178873, 3870310498, 1114580266, 4279604166, 855339774, 1983818547, 2297848784, 4118592947, 4084409863, 2225095054, 4215601993, 946447434, 4205503762, 146088676, 778046685, 1876936928, 3157333726, 2173097090, 3215738813, 4135448234, 1219619643, 1936128689, 2897130162, 3336043946, 3779039524, 4200886837, 1359380925, 3402593091, 3140713935, 50855190, 3122065768, 1501584468, 2512255124, 687125154, 2666013386, 837819715, 3057258172, 3653455791, 2868624990, 322131992, 42534870, 4036564806, 798099710, 3533853670, 190914037, 3726947981, 2601169403, 602059656, 1365668439, 1918780004, 394790500, 277566007, 3891847777, 3365421094, 3139612253, 1380519090, 1183088424, 4203794803, 3049949521, 4214159484, 3446206962, 1875544460, 3207220027, 3288287026, 913535288, 178159620, 1410694581, 4190575040, 880731713, 1427805121, 404869072, 3413191414, 2865934056, 2899472677, 4239222733, 688404529, 3923323887, 933651074, 1199453686, 642723732, 2850614853, 3104368451, 3054041024, 3129913503, 2805843726, 1829781129, 3479062313, 650272704, 4224852052, 4085038685, 2616580676, 1793860711, 585126334, 2995262791, 520446536, 3855655015, 1571815563, 2240778227, 2051010344, 1694977983, 788402852, 1988089041, 2035558649, 1800063056, 1234412692, 2490862867, 417320514, 2415019489, 3374117797, 136034611, 
898704236, 1247106941, 3923519397, 3563607190, 2454738671, 3522360389, 2672645476, 146828884, 3985140042, 4233949333, 1184742586, 860278824, 2815489967, 983483427, 3190081845, 3288865305, 3575181235, 1292151129, 4007823805, 4049420597, 3499391972, 1611182906, 1721268432, 2944249577, 2487212557, 789127738, 4027610014, 1057334138, 2902720905, 624), None), 'num_served': 5}
State after resampling 5 batches: {'source_state': {'source_state': None, 'flattened_items_yielded': 0}, 'random_state': (3, (2147483648, 766982754, 497961170, 3952298588, 2331775348, 1811986599, 3100132149, 3188119873, 3937547222, 215718963, 3315684082, 2978012849, 2428261856, 1298227695, 1704729580, 54668373, 3285201915, 3285178464, 1552935063, 988471319, 3135387943, 1691402966, 2757551880, 416056905, 907387413, 1072924981, 33903495, 2168419592, 2429050353, 831159753, 430343641, 3315943586, 1761671042, 864453023, 334804929, 1627478028, 2596811275, 3468733638, 3994375553, 1457139722, 3139722021, 1334790738, 2656639915, 3535811098, 1464315470, 2397423927, 885719490, 1140895889, 3284299483, 2854516462, 2734973817, 147484763, 792049954, 114360641, 3345458839, 1159898878, 1410498733, 2242989638, 453922141, 1344019764, 413870456, 3089405849, 1494382840, 470157779, 4266372830, 2831181573, 1361928602, 1589253513, 1381373062, 753045124, 987032420, 781978839, 2953638767, 3258570111, 3006718191, 1675218601, 1854232715, 3655829819, 1731242722, 2192104666, 1736665161, 740150002, 1195833394, 1610203160, 159492766, 4041488705, 3128952632, 2867295744, 3272632449, 886824304, 1791482600, 221114776, 3867175393, 4020804062, 1077871826, 1298953503, 996366221, 4149754679, 2483052703, 2615558283, 274318093, 1716359450, 4099129961, 1026774175, 288240973, 1459347562, 2365566296, 3690105224, 3065780221, 2050634722, 2652606621, 3185241207, 3026457375, 3456165734, 1880121515, 3398461093, 1795638629, 2379692076, 608668379, 1261955525, 84456522, 1913485156, 106878280, 757183891, 2913957588, 160418091, 2025664758, 141497907, 1657818026, 3053760160, 672193054, 4157546743, 223046484, 1623470498, 1201972930, 675008814, 684162366, 1738776330, 3025656654, 159760723, 1908867305, 3933381342, 2545706671, 467196949, 1427819885, 842150314, 4032903454, 2140851898, 3269883445, 975813755, 4177392955, 1556690684, 2535611513, 462962732, 67591358, 1729610528, 2025206740, 3153739740, 3255032049, 4186226368, 1070144624, 3107867195, 1621006038, 63742485, 835629717, 3189842019, 3950227584, 3184714559, 841836938, 1685394870, 657939920, 766156242, 1412314179, 1048281639, 4037161120, 2044490307, 1923947830, 3900790422, 907554295, 276417304, 860658646, 3574201134, 3508771399, 2110232300, 1636296241, 1405006077, 1093408401, 3243057343, 1519791182, 1994660136, 3829840937, 2644974199, 957955566, 3487641161, 1646922510, 1907939989, 3836029453, 3429168778, 201307778, 72550089, 2464394982, 1695794191, 3344785682, 996786130, 3589457196, 1241754792, 1291082245, 4224603667, 1194379475, 2693491244, 881186965, 2705535111, 445306946, 440274268, 1980827733, 2482488861, 3205215943, 2119332222, 2928713046, 1418736938, 652581136, 2474070665, 2208621536, 4171251876, 2303664214, 443762656, 2981912989, 2199228311, 2652261633, 3166738494, 3443009210, 3498764432, 424010848, 4065487566, 2262993542, 1756076712, 1477098233, 2742171915, 306185806, 3610666541, 923091830, 1034267993, 2336668648, 1880719718, 676878038, 3788797208, 3763351494, 3985428106, 1101865631, 1130501258, 3672967388, 3432003530, 4124438011, 1660392285, 4025484827, 2108074566, 3815409682, 42955331, 3248965569, 1643835718, 1246665668, 1071162194, 3814069229, 115491158, 985096811, 3311029186, 2990827378, 3101633320, 1648574497, 1470117052, 174145027, 2019894819, 2035501481, 459104123, 3507464599, 2093352659, 3369174406, 618767835, 4009895756, 935587447, 3956987426, 33753995, 307782427, 2473424805, 1440371818, 2382619594, 2138695812, 3164510238, 1318650933, 2910086616, 3886677510, 566832801, 
3718063320, 1559818704, 183047272, 1142362855, 26306548, 645536402, 3875596208, 2272778168, 3512733409, 1897046338, 38248886, 2570759766, 1806313150, 860304898, 2433450338, 4124013408, 1216634590, 1275388896, 1169566669, 652504502, 761221427, 1448403764, 3129135949, 2513214949, 1269533687, 2413509541, 1226750363, 2450740925, 4094137910, 945759293, 3636927736, 3178020081, 2509964157, 3878869300, 1848504895, 2018369720, 1579755740, 1023627943, 924838836, 2653160914, 1812804174, 1521323076, 4012390528, 1338763317, 2608655937, 16022784, 1672945066, 2177189646, 2944458483, 2213810972, 1369873847, 1224017670, 130901785, 3595066712, 2259115284, 3316038259, 455873927, 2917250465, 3599550610, 1502173758, 684943436, 3079863840, 3144992244, 942855823, 1771140188, 2118780653, 3411494225, 2711180217, 4239611184, 1371891067, 3398566397, 3105518599, 1310665701, 3345178451, 2959821156, 242241789, 2148966880, 3192740583, 404401893, 3605380577, 1446464038, 3920522056, 2577523013, 1079274576, 286634372, 1752710796, 2351075979, 981312309, 3410516352, 3468455736, 1938779182, 1592494371, 1533303080, 88045436, 438252489, 1220512168, 3487004938, 3724852871, 1073434882, 3728218947, 2977555283, 4105408406, 3553772656, 1462006821, 3917158017, 119003006, 3470530198, 3439192457, 2829375771, 3555715155, 32324691, 588735808, 1459221702, 803072782, 2699519868, 1530797005, 79738580, 671990400, 4289511388, 3207115447, 2584684068, 832698998, 760958416, 1217440464, 2517898131, 2418819938, 3629956222, 3445024962, 206619378, 365007395, 522114139, 1707954431, 540423623, 1786750801, 369253262, 4239016754, 147889201, 1637777773, 236798285, 2806120188, 586972608, 2201782716, 1323327827, 819485723, 406078680, 3407345698, 1537169369, 1821691865, 527271655, 3751827102, 1465426495, 3321682429, 2179672664, 401355478, 1068871880, 24609462, 1403522408, 2311580015, 1532058170, 3877815340, 1768430711, 1619755157, 2832904331, 475102697, 354987331, 3295386430, 2816873951, 1039415736, 363972779, 1499307670, 2895506264, 3746345349, 2678027234, 3251899088, 955392878, 2329157295, 1343358773, 309573887, 2410178377, 2843173466, 361132917, 1755816798, 1319204283, 609284796, 1998842567, 1892325921, 223190385, 1483015769, 2876023365, 3876009312, 3199738344, 491524099, 160383137, 1219178873, 3870310498, 1114580266, 4279604166, 855339774, 1983818547, 2297848784, 4118592947, 4084409863, 2225095054, 4215601993, 946447434, 4205503762, 146088676, 778046685, 1876936928, 3157333726, 2173097090, 3215738813, 4135448234, 1219619643, 1936128689, 2897130162, 3336043946, 3779039524, 4200886837, 1359380925, 3402593091, 3140713935, 50855190, 3122065768, 1501584468, 2512255124, 687125154, 2666013386, 837819715, 3057258172, 3653455791, 2868624990, 322131992, 42534870, 4036564806, 798099710, 3533853670, 190914037, 3726947981, 2601169403, 602059656, 1365668439, 1918780004, 394790500, 277566007, 3891847777, 3365421094, 3139612253, 1380519090, 1183088424, 4203794803, 3049949521, 4214159484, 3446206962, 1875544460, 3207220027, 3288287026, 913535288, 178159620, 1410694581, 4190575040, 880731713, 1427805121, 404869072, 3413191414, 2865934056, 2899472677, 4239222733, 688404529, 3923323887, 933651074, 1199453686, 642723732, 2850614853, 3104368451, 3054041024, 3129913503, 2805843726, 1829781129, 3479062313, 650272704, 4224852052, 4085038685, 2616580676, 1793860711, 585126334, 2995262791, 520446536, 3855655015, 1571815563, 2240778227, 2051010344, 1694977983, 788402852, 1988089041, 2035558649, 1800063056, 1234412692, 2490862867, 417320514, 2415019489, 3374117797, 136034611, 
898704236, 1247106941, 3923519397, 3563607190, 2454738671, 3522360389, 2672645476, 146828884, 3985140042, 4233949333, 1184742586, 860278824, 2815489967, 983483427, 3190081845, 3288865305, 3575181235, 1292151129, 4007823805, 4049420597, 3499391972, 1611182906, 1721268432, 2944249577, 2487212557, 789127738, 4027610014, 1057334138, 2902720905, 624), None), 'num_served': 5}
All items drawn after resetting are equal: True
###Markdown
Since the `state` of the iterator is just a dictionary, you can serialize it along with your model weights and restore it to continue training from the exact point where you checkpointed; a minimal sketch of that, and of a step-capped training loop, follows below. Making Infinibatch work with Pytorch Dataloaders: Infinibatch, by its very nature, can only be used with an `IterableDataset`. The training iterator with shuffling is infinite, so you must limit training to some `n` batches per "epoch" if you want to keep the notion of epochs for scheduling validation; alternatively, you can eschew the whole notion of epochs by validating every `n`-th step, or do both.> Note: The multiprocessing workers of `DataLoader` should be set to zero, with `num_workers=0`. Instead, use `ParallelMapIterator` to parallelize your pre-processing.> Note: While using `IterableDataset` in the typical multi-GPU `DistributedDataParallel` (ddp) setup, you should pass `instance_rank` and `num_instances` so that different slices of the data go to different training devices.> Warning: When using finite iterators with `ddp` for the validation set, splitting the data with the `instance_rank` option can make validation get stuck. This can happen when your dataset is not divisible by the number of `ddp` processes, or when dynamic batching produces an uneven number of batches per instance. So it is better to run validation on a single GPU with `instance_rank=0`. This is a quick hack; if you find a better option, please let me know in the comments.
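Before wiring things into a `DataLoader`, here is a minimal sketch (my own, not an `infinibatch` API) of persisting the iterator state together with the model and optimizer weights; `model` and `optimizer` are assumed to exist, and `data_iterator` would be a checkpointable iterator such as `tensors_it` above.
###Code
import torch
def save_training_checkpoint(path, model, optimizer, data_iterator):
    # The iterator state is a plain dict, so it pickles cleanly with torch.save.
    torch.save({
        "model": model.state_dict(),
        "optimizer": optimizer.state_dict(),
        "data_state": data_iterator.getstate(),
    }, path)
def load_training_checkpoint(path, model, optimizer, data_iterator):
    checkpoint = torch.load(path)
    model.load_state_dict(checkpoint["model"])
    optimizer.load_state_dict(checkpoint["optimizer"])
    data_iterator.setstate(checkpoint["data_state"])  # resume the data stream exactly here
###Output
_____no_output_____
###Markdown
Now, the full `DataLoader` integration: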
###Code
from torch.utils.data import IterableDataset
import torch.distributed as dist
class IterableCheckpointedDataset(IterableDataset):
"""
Wraps a CheckpointableIterator into a PyTorch IterableDataset, which is
recognized by its type by PyTorch's DataLoader class.
"""
def __init__(self, source: iterators.CheckpointableIterator,
should_reset: bool):
super().__init__()
self._source = source
self._source_state = source.getstate()
self._should_reset = should_reset
def __iter__(self): # this is called in the forked clone
worker_info = torch.utils.data.get_worker_info()
assert (
worker_info is None or worker_info.num_workers == 1
) # not supported since we can't get at the checkpoint for each worker
if self._should_reset:
# For training, since it's infinite iterator, if we train for
# `n` batches with total instances less than dataset size
# it's better not to reset the iterator by itself will cycle back
# with a new shuffle order when all the instances are iterated once.
self._source.setstate(self._source_state)
return self._source
def create_wiki_dataloader(chunks_glob: str,
tokenizer: PreTrainedTokenizerFast,
is_train: bool,
max_seq_len: int = 512,
tokens_per_batch: int = 2 ** 16,
num_workers: int = 4,
buffer_size: int = 100000,
seed: int = 1337) -> DataLoader:
num_instances = 1
instance_rank = 0
if dist.is_available() and is_train:
# Only in training mode we want the data to be split.
# This is a hack to make dynamic batched iterators work while using ddp.
# If we were to split the data and number of batches turned out uneven, then
# Iterator might exhaust early in one GPU leaving it stuck there forever.
# So we rather choose to do the same validation in all the GPUs.
try:
num_instances = dist.get_world_size()
instance_rank = dist.get_rank()
except AssertionError:
pass
sentence_it = datasets.chunked_dataset_iterator(
chunk_refs = glob.glob(chunks_glob),
read_chunk_fn = read_chunk,
buffer_size = buffer_size,
seed = seed,
num_instances=num_instances,
instance_rank=instance_rank,
train=is_train,
shuffle=is_train # Shuffle Only on Train
)
tokenize_fn = partial(tokenizer, max_length=max_seq_len, truncation=True)
features_it = iterators.ParallelMapIterator(
source_iterator=sentence_it,
num_processes=num_workers,
num_items_per_process=1000,
transform=tokenize_fn
)
batches_it = iterators.BucketedReadaheadBatchIterator(
source_iterator=features_it,
read_ahead=10000,
key=lambda example: len(example['input_ids']),
batch_size=lambda longest: tokens_per_batch // len(longest['input_ids']),
seed=seed
)
collate_fn = partial(tokenizer.pad, return_tensors='pt')
tensors_it = iterators.MapIterator(
batches_it,
transform=collate_fn
)
dataset = IterableCheckpointedDataset(
source=tensors_it,
should_reset=not is_train #Reset only for validation
)
return DataLoader(dataset,
# Very important to set this to 0.
num_workers=0,
# Important as we have already batched.
batch_size=1,
# Since batch has only one member which has all the
#tensors already collated, we just return it.
collate_fn=lambda x: x[0]
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", use_fast=True)
train_loader = create_wiki_dataloader('wikitext-103-chunks/train.*.txt.gz',
tokenizer=tokenizer,
is_train=True)
val_loader = create_wiki_dataloader('wikitext-103-chunks/train.*.txt.gz',
tokenizer=tokenizer,
is_train=False)
#print(next(iter(train_loader)))
#print(next(iter(val_loader)))
###Output
_____no_output_____ |
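###Markdown
Finally, a minimal sketch (my own, with a made-up `n_steps`) of how the infinite `train_loader` above could be capped at a fixed number of batches per "epoch", with validation run over the finite `val_loader`; the model forward/backward code is elided.
###Code
import itertools
n_steps = 1000  # assumption: pick based on corpus size and tokens per batch
for epoch in range(2):
    for batch in itertools.islice(train_loader, n_steps):  # infinite loader, capped per epoch
        pass  # model forward/backward and optimizer.step() would go here
    for batch in val_loader:  # finite loader; the wrapper resets it each epoch
        pass  # evaluation forward pass would go here
###Output
_____no_output_____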
test/Clustering_test.ipynb | ###Markdown
Testing clustering based on the MFPT
###Code
import sys
sys.path.append("../")
sys.path.append("../nmpath/")
from test.tools_for_notebook import *
%matplotlib inline
from nmpath.auxfunctions import *
from nmpath.mfpt import *
from nmpath.mappers import rectilinear_mapper
from nmpath.clustering import *
#from nmpath.mappers import voronoi_mapper
###Output
_____no_output_____
###Markdown
Toy model with two basins
###Code
plot_traj([],[],figsize=(6,5))
###Output
_____no_output_____
###Markdown
Generating the MC trajectory: Continuous Ensemble
###Code
mc_traj = mc_simulation2D(500000)
my_ensemble = Ensemble([mc_traj])
###Output
_____no_output_____
###Markdown
Discrete Ensemble and Transition Matrix. The mapping function divides each dimension into 12 bins, so the total number of bins is 144.
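As a sketch of what such a mapping could look like (my assumption; the real `mapping_function2D` from `tools_for_notebook` may use different bounds), a 12x12 rectilinear grid over the unit square can be indexed like this:
###Code
# Hypothetical 12x12 rectilinear mapping of a 2D point to a single bin index.
N_BINS = 12
def rectilinear_bin_2d(point, low=0.0, high=1.0, n=N_BINS):
    x, y = point
    ix = min(int((x - low) / (high - low) * n), n - 1)
    iy = min(int((y - low) / (high - low) * n), n - 1)
    return ix * n + iy  # bin index in [0, n*n)
rectilinear_bin_2d((0.05, 0.93))
###Output
_____no_output_____
###Markdown
The actual discretization used in this notebook: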
###Code
discrete_ens = DiscreteEnsemble.from_ensemble(my_ensemble, mapping_function2D)
# Transition Matrix
K = discrete_ens._mle_transition_matrix(N*N,prior_counts=1e-6)
###Output
_____no_output_____
###Markdown
Agglomerative Clustering. The points with the same color belong to the same cluster; only clusters with size > 1 are shown.
###Code
t_min_list=[]
t_max_list=[]
t_AB_list=[]
n_clusters = [135, 130, 125, 120, 115, 110, 105, 100, 95, 90, 85, 80, 75, 70]
for n in n_clusters:
big_clusters=[]
big_clusters_index =[]
clusters, t_min, t_max, clustered_tmatrix = kinetic_clustering_from_tmatrix(K, n, verbose=False)
t_min_list.append(t_min)
t_max_list.append(t_max)
for i, cluster in enumerate(clusters):
if len(cluster) > 1:
big_clusters.append(cluster)
big_clusters_index.append(i)
n_big = len(big_clusters)
if n_big > 1:
tAB = markov_commute_time(clustered_tmatrix,[big_clusters_index[0]],[big_clusters_index[1]] )
else:
tAB = 0.0
t_AB_list.append(tAB)
discrete = [True for i in range(n_big)]
print("{} Clusters, t_cut: {:.2f}tau, t_max: {:.2e}tau, tAB: {:.2f}tau".format(n, t_min, t_max, tAB))
plot_traj([ [big_clusters[i],[]] for i in range(n_big) ],
discrete, std = 0.00002, alpha=0.3, justpoints=True, figsize=(3,3))
plt.plot(n_clusters, t_min_list, label="t_cut")
plt.plot(n_clusters, t_AB_list, label="t_AB")
plt.xlabel("Number of Clusters")
plt.ylabel("t (tau)")
#plt.text(110, 4000,"Clustering", fontsize=14)
plt.axis([70,135,0,9000])
#plt.arrow(125, 3600, -30, 0,shape='left', lw=2, length_includes_head=True)
plt.title("Commute times vs Number of Clusters")
plt.legend()
plt.show()
m_ratio = [t_AB_list[i]/t_min_list[i] for i in range(len(t_min_list))]
plt.plot(n_clusters, m_ratio, label="t_AB / t_cut", color="red")
plt.xlabel("Number of Clusters")
plt.ylabel("t_AB / t_cut")
plt.axis([70,135,0,65])
plt.legend()
plt.show()
m_ratio2 = [t_max_list[i]/t_min_list[i] for i in range(len(t_min_list))]
plt.plot(n_clusters, m_ratio2, label="t_max / t_cut", color="green")
plt.xlabel("Number of Clusters")
plt.ylabel("t_max / t_cut")
#plt.axis([70,135,0,1000])
plt.legend()
plt.show()
###Output
_____no_output_____ |
titanic-solution.ipynb | ###Markdown
Titanic: Machine Learning from Disaster. Predict survival on the Titanic: - Defining the problem statement - Collecting the data - Exploratory data analysis - Feature engineering - Modelling - Testing 1. Defining the problem statement. Complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the Titanic tragedy.
###Code
from IPython.display import Image
Image(url= "https://static1.squarespace.com/static/5006453fe4b09ef2252ba068/5095eabce4b06cb305058603/5095eabce4b02d37bef4c24c/1352002236895/100_anniversary_titanic_sinking_by_esai8mellows-d4xbme8.jpg")
###Output
_____no_output_____
###Markdown
2. Collecting the data. The training data set and testing data set are given by Kaggle. You can download them from my github [https://github.com/minsuk-heo/kaggle-titanic/tree/master](https://github.com/minsuk-heo/kaggle-titanic) or directly from [kaggle](https://www.kaggle.com/c/titanic/data). Load the train and test datasets using Pandas.
###Code
import pandas as pd
train = pd.read_csv('input/train.csv')
test = pd.read_csv('input/test.csv')
###Output
_____no_output_____
###Markdown
3. Exploratory data analysis. Printing the first rows of the train dataset.
###Code
train.head(80)
###Output
_____no_output_____
###Markdown
Data Dictionary: - Survived: 0 = No, 1 = Yes - pclass: Ticket class (1 = 1st, 2 = 2nd, 3 = 3rd) - sibsp: number of siblings / spouses aboard the Titanic - parch: number of parents / children aboard the Titanic - ticket: Ticket number - cabin: Cabin number - embarked: Port of Embarkation (C = Cherbourg, Q = Queenstown, S = Southampton) **Total rows and columns** We can see that there are 891 rows and 12 columns in our training dataset.
###Code
test.head()
train.shape
test.shape
train.info()
test.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 11 columns):
PassengerId 418 non-null int64
Pclass 418 non-null int64
Name 418 non-null object
Sex 418 non-null object
Age 332 non-null float64
SibSp 418 non-null int64
Parch 418 non-null int64
Ticket 418 non-null object
Fare 417 non-null float64
Cabin 91 non-null object
Embarked 418 non-null object
dtypes: float64(2), int64(4), object(5)
memory usage: 36.0+ KB
###Markdown
We can see that the *Age* value is missing for many rows. Out of 891 rows, the *Age* value is present in only 714 rows. Similarly, *Cabin* values are also missing in many rows; only 204 out of 891 rows have *Cabin* values.
###Code
train.isnull().sum()
test.isnull().sum()
###Output
_____no_output_____
###Markdown
There are 177 rows with missing *Age*, 687 rows with missing *Cabin* and 2 rows with missing *Embarked* information. Import the Python libraries for visualization.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set() # setting seaborn default for plots
###Output
_____no_output_____
###Markdown
Bar Chart for Categorical Features: - Pclass - Sex - SibSp (number of siblings and spouses) - Parch (number of parents and children) - Embarked - Cabin
###Code
def bar_chart(feature):
survived = train[train['Survived']==1][feature].value_counts()
dead = train[train['Survived']==0][feature].value_counts()
df = pd.DataFrame([survived,dead])
df.index = ['Survived','Dead']
df.plot(kind='bar',stacked=True, figsize=(10,5))
bar_chart('Sex')
###Output
_____no_output_____
###Markdown
The Chart confirms that **Women** were more likely to survive than **Men**.
###Code
bar_chart('Pclass')
###Output
_____no_output_____
###Markdown
The Chart confirms that **1st class** passengers were more likely to survive than **other classes**. The Chart confirms that **3rd class** passengers were more likely to die than **other classes**.
###Code
bar_chart('SibSp')
###Output
_____no_output_____
###Markdown
The Chart confirms that **a person who boarded with more than 2 siblings or a spouse** was more likely to survive. The Chart confirms that **a person who boarded without siblings or a spouse** was more likely to die.
###Code
bar_chart('Parch')
###Output
_____no_output_____
###Markdown
The Chart confirms that **a person who boarded with more than 2 parents or children** was more likely to survive. The Chart confirms that **a person who boarded alone** was more likely to die.
###Code
bar_chart('Embarked')
###Output
_____no_output_____
###Markdown
The Chart confirms that **a person who embarked from C** was slightly more likely to survive. The Chart confirms that **a person who embarked from Q** was more likely to die. The Chart confirms that **a person who embarked from S** was more likely to die. 4. Feature engineering. Feature engineering is the process of using domain knowledge of the data to create features (**feature vectors**) that make machine learning algorithms work. A feature vector is an n-dimensional vector of numerical features that represents some object. Many algorithms in machine learning require a numerical representation of objects, since such representations facilitate processing and statistical analysis.
###Code
train.head()
###Output
_____no_output_____
###Markdown
4.1 How did the Titanic sink? It sank from the bow of the ship, where the third class rooms were located. Conclusion: Pclass is a key feature for the classifier.
###Code
Image(url= "https://static1.squarespace.com/static/5006453fe4b09ef2252ba068/t/5090b249e4b047ba54dfd258/1351660113175/TItanic-Survival-Infographic.jpg?format=1500w")
train.head(10)
###Output
_____no_output_____
###Markdown
4.2 Name
###Code
train_test_data = [train, test] # combining train and test dataset
for dataset in train_test_data:
dataset['Title'] = dataset['Name'].str.extract(' ([A-Za-z]+)\.', expand=False)
train['Title'].value_counts()
test['Title'].value_counts()
###Output
_____no_output_____
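###Markdown
As an aside (not part of the original notebook), the regex `' ([A-Za-z]+)\.'` simply captures the word sitting between a space and a period, which is where the title lives in these names:
###Code
import pandas as pd
sample_names = pd.Series(["Braund, Mr. Owen Harris", "Heikkinen, Miss. Laina"])
sample_names.str.extract(' ([A-Za-z]+)\.', expand=False)  # -> "Mr", "Miss"
###Output
_____no_output_____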
###Markdown
Title map: Mr: 0, Miss: 1, Mrs: 2, Others: 3
###Code
title_mapping = {"Mr": 0, "Miss": 1, "Mrs": 2,
"Master": 3, "Dr": 3, "Rev": 3, "Col": 3, "Major": 3, "Mlle": 3,"Countess": 3,
"Ms": 3, "Lady": 3, "Jonkheer": 3, "Don": 3, "Dona" : 3, "Mme": 3,"Capt": 3,"Sir": 3 }
for dataset in train_test_data:
dataset['Title'] = dataset['Title'].map(title_mapping)
train.head()
test.head()
bar_chart('Title')
# delete unnecessary feature from dataset
train.drop('Name', axis=1, inplace=True)
test.drop('Name', axis=1, inplace=True)
train.head()
test.head()
###Output
_____no_output_____
###Markdown
4.3 Sex: male: 0, female: 1
###Code
sex_mapping = {"male": 0, "female": 1}
for dataset in train_test_data:
dataset['Sex'] = dataset['Sex'].map(sex_mapping)
bar_chart('Sex')
###Output
_____no_output_____
###Markdown
4.4 Age 4.4.1 Some ages are missing. Let's use each Title's median age for the missing Age values.
###Code
train.head(100)
# fill missing age with median age for each title (Mr, Mrs, Miss, Others)
train["Age"].fillna(train.groupby("Title")["Age"].transform("median"), inplace=True)
test["Age"].fillna(test.groupby("Title")["Age"].transform("median"), inplace=True)
train.head(30)
train.groupby("Title")["Age"].transform("median")
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.show()
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(0, 20)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(20, 30)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(30, 40)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(40, 60)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(40, 60)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(60)
train.info()
test.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 11 columns):
PassengerId 418 non-null int64
Pclass 418 non-null int64
Sex 418 non-null int64
Age 418 non-null float64
SibSp 418 non-null int64
Parch 418 non-null int64
Ticket 418 non-null object
Fare 417 non-null float64
Cabin 91 non-null object
Embarked 418 non-null object
Title 418 non-null int64
dtypes: float64(2), int64(6), object(3)
memory usage: 36.0+ KB
###Markdown
4.4.2 Binning. Binning/converting numerical Age into a categorical variable. Feature vector map: child: 0, young: 1, adult: 2, mid-age: 3, senior: 4. (An equivalent `pd.cut` formulation is sketched right after the next cell.)
###Code
for dataset in train_test_data:
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0,
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 26), 'Age'] = 1,
dataset.loc[(dataset['Age'] > 26) & (dataset['Age'] <= 36), 'Age'] = 2,
dataset.loc[(dataset['Age'] > 36) & (dataset['Age'] <= 62), 'Age'] = 3,
dataset.loc[ dataset['Age'] > 62, 'Age'] = 4
train.head()
bar_chart('Age')
###Output
_____no_output_____
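###Markdown
As noted above, the same binning can also be written with `pd.cut`; this is only an alternative sketch for comparison and is not used elsewhere in this notebook.
###Code
import pandas as pd
bins = [-1, 16, 26, 36, 62, float("inf")]
labels = [0, 1, 2, 3, 4]  # child, young, adult, mid-age, senior
example = pd.DataFrame({"Age": [5, 22, 30, 45, 70]})
example["AgeBand"] = pd.cut(example["Age"], bins=bins, labels=labels).astype(int)
example
###Output
_____no_output_____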
###Markdown
4.5 Embarked 4.5.1 filling missing values
###Code
Pclass1 = train[train['Pclass']==1]['Embarked'].value_counts()
Pclass2 = train[train['Pclass']==2]['Embarked'].value_counts()
Pclass3 = train[train['Pclass']==3]['Embarked'].value_counts()
df = pd.DataFrame([Pclass1, Pclass2, Pclass3])
df.index = ['1st class','2nd class', '3rd class']
df.plot(kind='bar',stacked=True, figsize=(10,5))
###Output
_____no_output_____
###Markdown
More than 50% of 1st class passengers embarked from S, more than 50% of 2nd class passengers embarked from S, and more than 50% of 3rd class passengers embarked from S. **Fill the missing Embarked values with S.**
###Code
for dataset in train_test_data:
dataset['Embarked'] = dataset['Embarked'].fillna('S')
train.head()
embarked_mapping = {"S": 0, "C": 1, "Q": 2}
for dataset in train_test_data:
dataset['Embarked'] = dataset['Embarked'].map(embarked_mapping)
###Output
_____no_output_____
###Markdown
4.6 Fare
###Code
# fill missing Fare with median fare for each Pclass
train["Fare"].fillna(train.groupby("Pclass")["Fare"].transform("median"), inplace=True)
test["Fare"].fillna(test.groupby("Pclass")["Fare"].transform("median"), inplace=True)
train.head(50)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Fare',shade= True)
facet.set(xlim=(0, train['Fare'].max()))
facet.add_legend()
plt.show()
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Fare',shade= True)
facet.set(xlim=(0, train['Fare'].max()))
facet.add_legend()
plt.xlim(0, 20)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Fare',shade= True)
facet.set(xlim=(0, train['Fare'].max()))
facet.add_legend()
plt.xlim(0, 30)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Fare',shade= True)
facet.set(xlim=(0, train['Fare'].max()))
facet.add_legend()
plt.xlim(0)
for dataset in train_test_data:
dataset.loc[ dataset['Fare'] <= 17, 'Fare'] = 0,
dataset.loc[(dataset['Fare'] > 17) & (dataset['Fare'] <= 30), 'Fare'] = 1,
dataset.loc[(dataset['Fare'] > 30) & (dataset['Fare'] <= 100), 'Fare'] = 2,
dataset.loc[ dataset['Fare'] > 100, 'Fare'] = 3
train.head()
###Output
_____no_output_____
###Markdown
4.7 Cabin
###Code
train.Cabin.value_counts()
for dataset in train_test_data:
dataset['Cabin'] = dataset['Cabin'].str[:1]
Pclass1 = train[train['Pclass']==1]['Cabin'].value_counts()
Pclass2 = train[train['Pclass']==2]['Cabin'].value_counts()
Pclass3 = train[train['Pclass']==3]['Cabin'].value_counts()
df = pd.DataFrame([Pclass1, Pclass2, Pclass3])
df.index = ['1st class','2nd class', '3rd class']
df.plot(kind='bar',stacked=True, figsize=(10,5))
cabin_mapping = {"A": 0, "B": 0.4, "C": 0.8, "D": 1.2, "E": 1.6, "F": 2, "G": 2.4, "T": 2.8}
for dataset in train_test_data:
dataset['Cabin'] = dataset['Cabin'].map(cabin_mapping)
# fill missing Fare with median fare for each Pclass
train["Cabin"].fillna(train.groupby("Pclass")["Cabin"].transform("median"), inplace=True)
test["Cabin"].fillna(test.groupby("Pclass")["Cabin"].transform("median"), inplace=True)
###Output
_____no_output_____
###Markdown
4.8 FamilySize
###Code
train["FamilySize"] = train["SibSp"] + train["Parch"] + 1
test["FamilySize"] = test["SibSp"] + test["Parch"] + 1
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'FamilySize',shade= True)
facet.set(xlim=(0, train['FamilySize'].max()))
facet.add_legend()
plt.xlim(0)
family_mapping = {1: 0, 2: 0.4, 3: 0.8, 4: 1.2, 5: 1.6, 6: 2, 7: 2.4, 8: 2.8, 9: 3.2, 10: 3.6, 11: 4}
for dataset in train_test_data:
dataset['FamilySize'] = dataset['FamilySize'].map(family_mapping)
train.head()
train.head()
features_drop = ['Ticket', 'SibSp', 'Parch']
train = train.drop(features_drop, axis=1)
test = test.drop(features_drop, axis=1)
train = train.drop(['PassengerId'], axis=1)
train_data = train.drop('Survived', axis=1)
target = train['Survived']
train_data.shape, target.shape
train_data.head(10)
###Output
_____no_output_____
###Markdown
5. Modelling
###Code
# Importing Classifier Modules
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
import numpy as np
train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 9 columns):
Survived 891 non-null int64
Pclass 891 non-null int64
Sex 891 non-null int64
Age 891 non-null float64
Fare 891 non-null float64
Cabin 891 non-null float64
Embarked 891 non-null int64
Title 891 non-null int64
FamilySize 891 non-null float64
dtypes: float64(4), int64(5)
memory usage: 62.7 KB
###Markdown
6.2 Cross Validation (K-fold)
###Code
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
k_fold = KFold(n_splits=10, shuffle=True, random_state=0)
###Output
_____no_output_____
###Markdown
6.2.1 kNN
###Code
clf = KNeighborsClassifier(n_neighbors = 13)
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, scoring=scoring)
print(score)
# kNN Score
round(np.mean(score)*100, 2)
###Output
_____no_output_____
###Markdown
6.2.2 Decision Tree
###Code
clf = DecisionTreeClassifier()
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, scoring=scoring)
print(score)
# decision tree Score
round(np.mean(score)*100, 2)
###Output
_____no_output_____
###Markdown
6.2.3 Random Forest
###Code
clf = RandomForestClassifier(n_estimators=13)
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, scoring=scoring)
print(score)
# Random Forest Score
round(np.mean(score)*100, 2)
###Output
_____no_output_____
###Markdown
6.2.4 Naive Bayes
###Code
clf = GaussianNB()
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, scoring=scoring)
print(score)
# Naive Bayes Score
round(np.mean(score)*100, 2)
###Output
_____no_output_____
###Markdown
6.2.5 SVM
###Code
clf = SVC()
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, scoring=scoring)
print(score)
round(np.mean(score)*100,2)
###Output
_____no_output_____
###Markdown
7. Testing
###Code
clf = SVC()
clf.fit(train_data, target)
test_data = test.drop("PassengerId", axis=1).copy()
prediction = clf.predict(test_data)
submission = pd.DataFrame({
"PassengerId": test["PassengerId"],
"Survived": prediction
})
submission.to_csv('submission.csv', index=False)
submission = pd.read_csv('submission.csv')
submission.head()
###Output
_____no_output_____
###Markdown
Titanic: Machine Learning from Disaster. Predict survival on the Titanic: - Defining the problem statement - Collecting the data - Exploratory data analysis - Feature engineering - Modelling - Testing 1. Defining the problem statement. Complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the Titanic tragedy. 2. Collecting the data. The training data set and testing data set are given by Kaggle; you can download them directly from [kaggle](https://www.kaggle.com/c/titanic/data). Load the train and test datasets using Pandas.
###Code
import pandas as pd
train = pd.read_csv('/home/ashish/Desktop/kaggle_titanic/input/train.csv')
test = pd.read_csv('/home/ashish/Desktop/kaggle_titanic/input/test.csv')
###Output
_____no_output_____
###Markdown
3. Exploratory data analysis. Printing the first rows of the train dataset.
###Code
train.head(10)
###Output
_____no_output_____
###Markdown
Data Dictionary: - Survived: 0 = No, 1 = Yes - pclass: Ticket class (1 = 1st, 2 = 2nd, 3 = 3rd) - sibsp: number of siblings / spouses aboard the Titanic - parch: number of parents / children aboard the Titanic - ticket: Ticket number - cabin: Cabin number - embarked: Port of Embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)
###Code
test.head(10)
train.shape
test.shape
train.info()
test.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 11 columns):
PassengerId 418 non-null int64
Pclass 418 non-null int64
Name 418 non-null object
Sex 418 non-null object
Age 332 non-null float64
SibSp 418 non-null int64
Parch 418 non-null int64
Ticket 418 non-null object
Fare 417 non-null float64
Cabin 91 non-null object
Embarked 418 non-null object
dtypes: float64(2), int64(4), object(5)
memory usage: 36.0+ KB
###Markdown
We can see that the *Age* value is missing for many rows. Out of 891 rows, the *Age* value is present in only 714 rows. Similarly, *Cabin* values are also missing in many rows; only 204 out of 891 rows have *Cabin* values.
###Code
train.isnull().sum()
test.isnull().sum()
###Output
_____no_output_____
###Markdown
There are 177 rows with missing *Age*, 687 rows with missing *Cabin* and 2 rows with missing *Embarked* information. Import the Python libraries for visualization.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set() # setting seaborn default for plots
###Output
_____no_output_____
###Markdown
Bar Chart for Categorical Features: - Pclass - Sex - SibSp (number of siblings and spouses) - Parch (number of parents and children) - Embarked - Cabin
###Code
def bar_chart(feature):
survived = train[train['Survived']==1][feature].value_counts()
dead = train[train['Survived']==0][feature].value_counts()
df = pd.DataFrame([survived,dead])
df.index = ['Survived','Dead']
df.plot(kind='bar',stacked=True, figsize=(10,5))
bar_chart('Sex')
###Output
_____no_output_____
###Markdown
The Chart confirms that **Women** were more likely to survive than **Men**.
###Code
bar_chart('Pclass')
###Output
_____no_output_____
###Markdown
The Chart confirms that **1st class** passengers were more likely to survive than **other classes**. The Chart confirms that **3rd class** passengers were more likely to die than **other classes**.
###Code
bar_chart('SibSp')
###Output
_____no_output_____
###Markdown
The Chart confirms that **a person who boarded with more than 2 siblings or a spouse** was more likely to survive. The Chart confirms that **a person who boarded without siblings or a spouse** was more likely to die.
###Code
bar_chart('Parch')
###Output
_____no_output_____
###Markdown
The Chart confirms that **a person who boarded with more than 2 parents or children** was more likely to survive. The Chart confirms that **a person who boarded alone** was more likely to die.
###Code
bar_chart('Embarked')
###Output
_____no_output_____
###Markdown
The Chart confirms that **a person who embarked from C** was slightly more likely to survive. The Chart confirms that **a person who embarked from Q** was more likely to die. The Chart confirms that **a person who embarked from S** was more likely to die. 4. Feature engineering. Feature engineering is the process of using domain knowledge of the data to create features (**feature vectors**) that make machine learning algorithms work. A feature vector is an n-dimensional vector of numerical features that represents some object. Many algorithms in machine learning require a numerical representation of objects, since such representations facilitate processing and statistical analysis.
###Code
train.head()
###Output
_____no_output_____
###Markdown
4.1 How did the Titanic sink? It sank from the bow of the ship, where the third class rooms were located. Conclusion: Pclass is a key feature for the classifier.
###Code
train.head(10)
###Output
_____no_output_____
###Markdown
4.2 Name
###Code
train_test_data = [train, test] # combining train and test dataset
for dataset in train_test_data:
dataset['Title'] = dataset['Name'].str.extract(' ([A-Za-z]+)\.', expand=False)
train['Title'].value_counts()
test['Title'].value_counts()
###Output
_____no_output_____
###Markdown
Title map: Mr: 0, Miss: 1, Mrs: 2, Others: 3
###Code
title_mapping = {"Mr": 0, "Miss": 1, "Mrs": 2,
"Master": 3, "Dr": 3, "Rev": 3, "Col": 3, "Major": 3, "Mlle": 3,"Countess": 3,
"Ms": 3, "Lady": 3, "Jonkheer": 3, "Don": 3, "Dona" : 3, "Mme": 3,"Capt": 3,"Sir": 3 }
for dataset in train_test_data:
dataset['Title'] = dataset['Title'].map(title_mapping)
train.head()
test.head()
bar_chart('Title')
# delete unnecessary feature from dataset
train.drop('Name', axis=1, inplace=True)
test.drop('Name', axis=1, inplace=True)
train.head()
test.head()
###Output
_____no_output_____
###Markdown
4.3 Sex: male: 0, female: 1
###Code
sex_mapping = {"male": 0, "female": 1}
for dataset in train_test_data:
dataset['Sex'] = dataset['Sex'].map(sex_mapping)
bar_chart('Sex')
###Output
_____no_output_____
###Markdown
4.4 Age 4.4.1 Some ages are missing. Let's use each Title's median age for the missing Age values.
###Code
train.head(100)
# fill missing age with median age for each title (Mr, Mrs, Miss, Others)
train["Age"].fillna(train.groupby("Title")["Age"].transform("median"), inplace=True)
test["Age"].fillna(test.groupby("Title")["Age"].transform("median"), inplace=True)
train.head(30)
train.groupby("Title")["Age"].transform("median")
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.show()
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(0, 20)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(20, 30)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(30, 40)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(40, 60)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(40, 60)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Age',shade= True)
facet.set(xlim=(0, train['Age'].max()))
facet.add_legend()
plt.xlim(60)
train.info()
test.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 11 columns):
PassengerId 418 non-null int64
Pclass 418 non-null int64
Sex 418 non-null int64
Age 418 non-null float64
SibSp 418 non-null int64
Parch 418 non-null int64
Ticket 418 non-null object
Fare 417 non-null float64
Cabin 91 non-null object
Embarked 418 non-null object
Title 418 non-null int64
dtypes: float64(2), int64(6), object(3)
memory usage: 36.0+ KB
###Markdown
4.4.2 Binning. Binning/converting numerical Age into a categorical variable. Feature vector map: child: 0, young: 1, adult: 2, mid-age: 3, senior: 4.
###Code
for dataset in train_test_data:
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0,
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 26), 'Age'] = 1,
dataset.loc[(dataset['Age'] > 26) & (dataset['Age'] <= 36), 'Age'] = 2,
dataset.loc[(dataset['Age'] > 36) & (dataset['Age'] <= 62), 'Age'] = 3,
dataset.loc[ dataset['Age'] > 62, 'Age'] = 4
train.head()
bar_chart('Age')
###Output
_____no_output_____
###Markdown
4.5 Embarked 4.5.1 filling missing values
###Code
Pclass1 = train[train['Pclass']==1]['Embarked'].value_counts()
Pclass2 = train[train['Pclass']==2]['Embarked'].value_counts()
Pclass3 = train[train['Pclass']==3]['Embarked'].value_counts()
df = pd.DataFrame([Pclass1, Pclass2, Pclass3])
df.index = ['1st class','2nd class', '3rd class']
df.plot(kind='bar',stacked=True, figsize=(10,5))
###Output
_____no_output_____
###Markdown
More than 50% of 1st class passengers embarked from S, more than 50% of 2nd class passengers embarked from S, and more than 50% of 3rd class passengers embarked from S. **Fill the missing Embarked values with S.**
###Code
for dataset in train_test_data:
dataset['Embarked'] = dataset['Embarked'].fillna('S')
train.head()
embarked_mapping = {"S": 0, "C": 1, "Q": 2}
for dataset in train_test_data:
dataset['Embarked'] = dataset['Embarked'].map(embarked_mapping)
###Output
_____no_output_____
###Markdown
4.6 Fare
###Code
# fill missing Fare with median fare for each Pclass
train["Fare"].fillna(train.groupby("Pclass")["Fare"].transform("median"), inplace=True)
test["Fare"].fillna(test.groupby("Pclass")["Fare"].transform("median"), inplace=True)
train.head(50)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Fare',shade= True)
facet.set(xlim=(0, train['Fare'].max()))
facet.add_legend()
plt.show()
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Fare',shade= True)
facet.set(xlim=(0, train['Fare'].max()))
facet.add_legend()
plt.xlim(0, 20)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Fare',shade= True)
facet.set(xlim=(0, train['Fare'].max()))
facet.add_legend()
plt.xlim(0, 30)
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'Fare',shade= True)
facet.set(xlim=(0, train['Fare'].max()))
facet.add_legend()
plt.xlim(0)
for dataset in train_test_data:
dataset.loc[ dataset['Fare'] <= 17, 'Fare'] = 0,
dataset.loc[(dataset['Fare'] > 17) & (dataset['Fare'] <= 30), 'Fare'] = 1,
dataset.loc[(dataset['Fare'] > 30) & (dataset['Fare'] <= 100), 'Fare'] = 2,
dataset.loc[ dataset['Fare'] > 100, 'Fare'] = 3
train.head()
###Output
_____no_output_____
###Markdown
4.7 Cabin
###Code
train.Cabin.value_counts()
for dataset in train_test_data:
dataset['Cabin'] = dataset['Cabin'].str[:1]
Pclass1 = train[train['Pclass']==1]['Cabin'].value_counts()
Pclass2 = train[train['Pclass']==2]['Cabin'].value_counts()
Pclass3 = train[train['Pclass']==3]['Cabin'].value_counts()
df = pd.DataFrame([Pclass1, Pclass2, Pclass3])
df.index = ['1st class','2nd class', '3rd class']
df.plot(kind='bar',stacked=True, figsize=(10,5))
cabin_mapping = {"A": 0, "B": 0.4, "C": 0.8, "D": 1.2, "E": 1.6, "F": 2, "G": 2.4, "T": 2.8}
for dataset in train_test_data:
dataset['Cabin'] = dataset['Cabin'].map(cabin_mapping)
# fill missing Cabin with the median cabin value for each Pclass
train["Cabin"].fillna(train.groupby("Pclass")["Cabin"].transform("median"), inplace=True)
test["Cabin"].fillna(test.groupby("Pclass")["Cabin"].transform("median"), inplace=True)
###Output
_____no_output_____
###Markdown
4.8 FamilySize
###Code
train["FamilySize"] = train["SibSp"] + train["Parch"] + 1
test["FamilySize"] = test["SibSp"] + test["Parch"] + 1
facet = sns.FacetGrid(train, hue="Survived",aspect=4)
facet.map(sns.kdeplot,'FamilySize',shade= True)
facet.set(xlim=(0, train['FamilySize'].max()))
facet.add_legend()
plt.xlim(0)
family_mapping = {1: 0, 2: 0.4, 3: 0.8, 4: 1.2, 5: 1.6, 6: 2, 7: 2.4, 8: 2.8, 9: 3.2, 10: 3.6, 11: 4}
for dataset in train_test_data:
dataset['FamilySize'] = dataset['FamilySize'].map(family_mapping)
train.head()
train.head()
features_drop = ['Ticket', 'SibSp', 'Parch']
train = train.drop(features_drop, axis=1)
test = test.drop(features_drop, axis=1)
train = train.drop(['PassengerId'], axis=1)
train_data = train.drop('Survived', axis=1)
target = train['Survived']
train_data.shape, target.shape
train_data.head(10)
###Output
_____no_output_____
###Markdown
5. Modelling
###Code
# Importing Classifier Modules
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
import numpy as np
train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 9 columns):
Survived 891 non-null int64
Pclass 891 non-null int64
Sex 891 non-null int64
Age 891 non-null float64
Fare 891 non-null float64
Cabin 891 non-null float64
Embarked 891 non-null int64
Title 891 non-null int64
FamilySize 891 non-null float64
dtypes: float64(4), int64(5)
memory usage: 62.7 KB
###Markdown
6.2 Cross Validation (K-fold)
###Code
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
k_fold = KFold(n_splits=10, shuffle=True, random_state=0)
###Output
_____no_output_____
###Markdown
6.2.1 kNN
###Code
clf = KNeighborsClassifier(n_neighbors = 13)
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, scoring=scoring)
print(score)
# kNN Score
round(np.mean(score)*100, 2)
###Output
_____no_output_____
###Markdown
6.2.2 Decision Tree
###Code
clf = DecisionTreeClassifier()
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, scoring=scoring)
print(score)
# decision tree Score
round(np.mean(score)*100, 2)
###Output
_____no_output_____
###Markdown
 6.2.3 Random Forest
###Code
clf = RandomForestClassifier(n_estimators=13)
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, scoring=scoring)
print(score)
# Random Forest Score
round(np.mean(score)*100, 2)
###Output
_____no_output_____
###Markdown
6.2.4 Naive Bayes
###Code
clf = GaussianNB()
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, scoring=scoring)
print(score)
# Naive Bayes Score
round(np.mean(score)*100, 2)
###Output
_____no_output_____
###Markdown
6.2.5 SVM
###Code
clf = SVC()
scoring = 'accuracy'
score = cross_val_score(clf, train_data, target, cv=k_fold, n_jobs=1, scoring=scoring)
print(score)
round(np.mean(score)*100,2)
###Output
_____no_output_____
###Markdown
7. Testing
###Code
clf = SVC()
clf.fit(train_data, target)
test_data = test.drop("PassengerId", axis=1).copy()
prediction = clf.predict(test_data)
submission = pd.DataFrame({
"PassengerId": test["PassengerId"],
"Survived": prediction
})
submission.to_csv('/home/ashish/Desktop/kaggle_titanic/input/submission.csv', index=False)
submission = pd.read_csv('/home/ashish/Desktop/kaggle_titanic/input/submission.csv')
submission.head()
###Output
_____no_output_____ |
notebooks/c04-eda.ipynb | ###Markdown
Solution Planning https://docs.google.com/document/d/e/2PACX-1vQ1TppeFwJHR8m_fO6vh94Eh94UXDFxokWC6U7szUt20vlSl85FqV__tOIUykfRUJ6sDnkCbMcEtF09/pub 0.0. Imports
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import HTML
from IPython.display import Image
from geopy.geocoders import Nominatim
from sklearn import preprocessing as pp
from sklearn import model_selection as ms
from sklearn import dummy
from sklearn import metrics
from sklearn import neighbors
from sklearn import svm
from sklearn import ensemble
from sklearn import linear_model
from sklearn.model_selection import cross_val_predict
###Output
_____no_output_____
###Markdown
0.1. Helper Functions
###Code
def jupyter_settings():
%matplotlib inline
%pylab inline
plt.style.use( 'ggplot')
plt.rcParams['figure.figsize'] = [24, 9]
plt.rcParams['font.size'] = 24
display( HTML( '<style>.container { width:100% !important; }</style>') )
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.set_option( 'display.expand_frame_repr', False )
pd.set_option('display.float_format', lambda x: '%.4f' % x)
sns.set()
jupyter_settings()
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
0.2. Load Data
###Code
df_raw = pd.read_csv( '../data/raw/churn.csv' )
###Output
_____no_output_____
###Markdown
1.0. Data Description
###Code
df1 = df_raw.copy()
###Output
_____no_output_____
###Markdown
 1.1. Rename Columns **Features*** **RowNumber:** The row number of the record* **CustomerID:** Unique customer identifier* **Surname:** The customer's surname.* **CreditScore:** The customer's consumer credit score.* **Geography:** The country where the customer lives.* **Gender:** The customer's gender.* **Age:** The customer's age.* **Tenure:** Number of years the customer has remained active.* **Balance:** Monetary amount the customer holds in their bank account.* **NumOfProducts:** The number of products the customer has bought from the bank.* **HasCrCard:** Indicates whether or not the customer has a credit card.* **IsActiveMember:** Indicates whether the customer made at least one bank account transaction within the last 12 months.* **EstimatedSalary:** Estimate of the customer's monthly salary.* **Exited:** Indicates whether or not the customer has churned.
###Code
df1.columns = ['row_number', 'customer_id', 'surname', 'credit_score', 'geography', 'gender', 'age', 'tenure', 'balance', 'num_of_products', 'has_cr_card','is_active_member', 'estimated_salary', 'exited']
###Output
_____no_output_____
###Markdown
1.2. Data Dimensions
###Code
df1.shape
###Output
_____no_output_____
###Markdown
1.3. Check NA
###Code
df1.isna().sum()
###Output
_____no_output_____
###Markdown
1.4. Data Dtypes
###Code
df1.dtypes
###Output
_____no_output_____
###Markdown
1.5. Descriptive Statistics
###Code
num_attr = df1.select_dtypes( include=['int64', 'float64'] )
cat_attr = df1.select_dtypes( include=['object'] )
###Output
_____no_output_____
###Markdown
1.5.1. Numerical Attributes
###Code
# tendency central: mean, median
ct1 = pd.DataFrame( num_attr.apply(np.mean) )
ct2 = pd.DataFrame( num_attr.apply(np.median) )
# dispersion: min, max, range, std
d1 = pd.DataFrame( num_attr.apply(np.min) )
d2 = pd.DataFrame( num_attr.apply(np.max) )
d3 = pd.DataFrame( num_attr.apply(lambda x: np.max(x) - np.min(x)) )
d4 = pd.DataFrame( num_attr.apply(lambda x: x.std()) )
# shape: skew, kurtosis
s1 = pd.DataFrame( num_attr.apply(lambda x: x.skew()) )
s2 = pd.DataFrame( num_attr.apply(lambda x: x.kurtosis()) )
m1 = pd.concat([d1, d2, d3 ,ct1, ct2, d4, s1, s2], axis=1).reset_index()
m1.columns = ['attribute', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
m1
###Output
_____no_output_____
###Markdown
num_of_products
###Code
df1['num_of_products'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
has_cr_card
###Code
df1['has_cr_card'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
is_active_member
###Code
df1['is_active_member'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
exited
###Code
df1['exited'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
1.5.2. Categorical Attributes
###Code
cat_attr.describe().T
###Output
_____no_output_____
###Markdown
geography
###Code
df1['geography'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
gender
###Code
df1['gender'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
2.0. Feature Engineering
###Code
df2 = df1.copy()
###Output
_____no_output_____
###Markdown
 2.1. Hypothesis Mind Map
###Code
Image('../images/mindmap-churn.png')
###Output
_____no_output_____
###Markdown
 2.2. Hypothesis Creation **Customer**1. Customers between 40 and 60 years old are less likely to abandon the service compared to the rest of the base.2. Customers with a credit score below 400 are more likely to abandon the service.3. Female customers have a 60% chance of keeping the service.4. Customers who have used the service for more than 5 years have an 80% chance of keeping the service.5. Customers with an above-average salary are more likely to abandon the service.6. Customers who make 10 account transactions per month are more likely to keep the service.7. Customers with no money in savings are more likely to abandon the service. **Location**1. Customers living in Spain are more likely to keep the service compared to the other countries.2. Customers who access the portal more often are more likely to abandon the service. **Marketing**1. Customers coming from affiliates are more likely to keep the service.2. Marketing ads account for the largest number of new customers, and these customers are no more loyal to the company than those acquired through other channels. **Products**1. Customers who have a credit card are more likely to keep the service than those who do not. 2.3. Hypothesis Prioritization * **h1.** Customers between 40 and 60 years old are less likely to abandon the service compared to the rest of the base.* **h2.** Customers with a credit score below 400 are more likely to abandon the service.* **h3.** Female customers have a 60% chance of keeping the service.* **h4.** Customers who have used the service for more than 5 years have an 80% chance of keeping the service.* **h5.** Customers with an above-average salary are more likely to abandon the service.* **h6.** Customers with no money in savings are more likely to abandon the service.* **h7.** Customers living in Spain are more likely to keep the service compared to the other countries.* **h8.** Customers who have a credit card are more likely to keep the service than those who do not. 2.4. Create Feature
###Code
geolocator = Nominatim(user_agent="country_analysis")
df2.loc[ df2['geography'] == 'France', 'country_lat'] = float(geolocator.geocode('France').raw['lat'])
df2.loc[ df2['geography'] == 'Germany', 'country_lat'] = float(geolocator.geocode('Germany').raw['lat'])
df2.loc[ df2['geography'] == 'Spain', 'country_lat'] = float(geolocator.geocode('Spain').raw['lat'])
df2.loc[ df2['geography'] == 'France', 'country_lon'] = float(geolocator.geocode('France').raw['lon'])
df2.loc[ df2['geography'] == 'Germany', 'country_lon'] = float(geolocator.geocode('Germany').raw['lon'])
df2.loc[ df2['geography'] == 'Spain', 'country_lon'] = float(geolocator.geocode('Spain').raw['lon'])
df2.loc[(df2['age'] >= 17) & (df2['age'] <= 39), 'age_group'] = '18-39'
df2.loc[(df2['age'] >= 40) & (df2['age'] <= 59), 'age_group'] = '40-59'
df2.loc[df2['age'] >= 60, 'age_group'] = '>60'
df2.loc[df2['balance'] == 0, 'balance_status'] = 'without_balance'
df2.loc[df2['balance'] > 0, 'balance_status'] = 'with_balance'
###Output
_____no_output_____
###Markdown
3.0. Data Filtering
###Code
df3 = df2.copy()
drop_cols = ['row_number', 'surname']
df3 = df3.drop(drop_cols, axis=1)
###Output
_____no_output_____
###Markdown
4.0. EDA
###Code
df4 = df3.copy()
###Output
_____no_output_____
###Markdown
4.1. Analysis Univariate 4.1.1. Numerical
###Code
sns.histplot(data=df4['credit_score'] );
sns.histplot(data=df4['age'] );
sns.histplot(data=df4['balance'] );
sns.histplot(data=df4['estimated_salary'] );
###Output
_____no_output_____
###Markdown
4.1.2. Categorical
###Code
sns.countplot(data=df4, x='geography');
sns.countplot(data=df4, x='gender');
sns.countplot(data=df4, x='tenure');
sns.countplot(data=df4, x='num_of_products');
sns.countplot(data=df4, x='has_cr_card');
sns.countplot(data=df4, x='is_active_member');
sns.countplot(data=df4, x='exited');
###Output
_____no_output_____
###Markdown
4.2. Analysis Bivariate credit_score
###Code
sns.boxplot( x='exited', y='credit_score', data=df4 );
aux1 = df4[df4['exited']==0]
aux2 = df4[df4['exited']==1]
sns.kdeplot(aux1['credit_score'], shade=True, label='exited')
sns.kdeplot(aux2['credit_score'], shade=True, label='continued')
plt.legend();
###Output
_____no_output_____
###Markdown
geography
###Code
pd.crosstab( df4['geography'], df4['exited'] ).plot(kind='bar');
###Output
_____no_output_____
###Markdown
gender
###Code
pd.crosstab( df4['gender'], df4['exited'] ).plot(kind='bar');
###Output
_____no_output_____
###Markdown
age
###Code
sns.boxplot( x='exited', y='age', data=df4 );
aux1 = df4[df4['exited']==0]
aux2 = df4[df4['exited']==1]
sns.kdeplot(aux1['age'], shade=True, label='exited')
sns.kdeplot(aux2['age'], shade=True, label='continued')
plt.legend();
###Output
_____no_output_____
###Markdown
tenure
###Code
pd.crosstab( df4['tenure'], df4['exited'] ).plot(kind='bar');
###Output
_____no_output_____
###Markdown
balance
###Code
sns.boxplot( x='exited', y='balance', data=df4 );
aux1 = df4[df4['exited']==0]
aux2 = df4[df4['exited']==1]
sns.kdeplot(aux1['balance'], shade=True, label='exited')
sns.kdeplot(aux2['balance'], shade=True, label='continued')
plt.legend();
###Output
_____no_output_____
###Markdown
num_of_products
###Code
pd.crosstab( df4['num_of_products'], df4['exited'] ).plot(kind='bar');
###Output
_____no_output_____
###Markdown
has_cr_card
###Code
pd.crosstab( df4['has_cr_card'], df4['exited'] ).plot(kind='bar');
###Output
_____no_output_____
###Markdown
is_active_member
###Code
pd.crosstab( df4['is_active_member'], df4['exited'] ).plot(kind='bar');
###Output
_____no_output_____
###Markdown
estimated_salary
###Code
sns.boxplot( x='exited', y='estimated_salary', data=df4 );
aux1 = df4[df4['exited']==0]
aux2 = df4[df4['exited']==1]
sns.kdeplot(aux2['estimated_salary'], shade=True, label='continued', color='blue')
sns.kdeplot(aux1['estimated_salary'], shade=True, label='exited', color='red')
plt.legend();
###Output
_____no_output_____
###Markdown
age_group
###Code
pd.crosstab( df4['age_group'], df4['exited'] ).plot(kind='bar');
###Output
_____no_output_____
###Markdown
4.3. Analysis Multivariate
###Code
sns.heatmap(df4.corr(), annot=True);
###Output
_____no_output_____
###Markdown
 4.4. Business Hypotheses **H1.** Customers between 40 and 59 years old are less likely to abandon the service compared to the rest of the base.**False**. Customers in the 40-59 range have the HIGHEST chance of abandoning the service.
###Code
aux1 = df4[df4['exited'] == 1].groupby(['exited', 'age_group'])['customer_id'].count().reset_index()
sns.barplot( x='age_group', y='customer_id', hue='exited', data=aux1 );
###Output
_____no_output_____
###Markdown
 **H2.** Customers with a credit score below 400 are more likely to abandon the service.**True**. In the dataset, every customer with a credit score below 400 abandons the service.
###Code
df4[(df4['credit_score'] <= 400)]['exited'].value_counts()
###Output
_____no_output_____
###Markdown
 **H3.** Female customers have a 60% chance of keeping the service.**True**. 74.93% of women keep the service.
###Code
aux = df4[df4['gender'] == 'Female'].groupby(['gender', 'exited'])['customer_id'].count().reset_index()
print( '{}% of women keep the service.'.format( round(aux['customer_id'][0] / aux['customer_id'].sum() * 100, 2) ) )
sns.barplot( x='gender', y='customer_id', hue='exited', data=aux );
###Output
74.93% of women keep the service.
###Markdown
 **H4.** Customers who have used the service for more than 5 years have an 80% chance of keeping the service.**False**. Customers with more than 5 years have a 55.42% chance of keeping the service.
###Code
aux = df4.copy()
aux.loc[aux['tenure'] >= 5, 'tenure_status'] = 'above_5_years'
aux.loc[aux['tenure'] < 5, 'tenure_status'] = 'bellow_5_years'
aux1 = aux.groupby( ['tenure_status', 'exited'] )['customer_id'].count().reset_index()
per_above_5_years = round((aux1['customer_id'][0]) / ((aux1['customer_id'][0]) + (aux1['customer_id'][2]) ) * 100, 2)
print( 'Customers with more than 5 years have a {}% chance of keeping the service.'.format( per_above_5_years ) )
sns.barplot( x='tenure_status', y='customer_id', hue='exited', data=aux1 );
###Output
Customers with more than 5 years have a 55.42% chance of keeping the service.
###Markdown
 **H5.** Customers with an above-average salary are more likely to abandon the service.**True**. Customers with an above-average salary have a 49.79% chance of abandoning the service.
###Code
aux = df4.copy()
avg_salary = aux['estimated_salary'].mean()
aux.loc[aux['estimated_salary'] > avg_salary, 'estimated_salary_status'] = 'above_average'
aux.loc[aux['estimated_salary'] <= avg_salary, 'estimated_salary_status'] = 'below_average'
aux1 = aux.loc[aux['exited'] == 0, :].groupby( 'estimated_salary_status' )['customer_id'].count().reset_index()
estimated_salary_above_average = round(aux1['customer_id'][0] / (aux1['customer_id'][0] + aux1['customer_id'][1]) * 100, 2)
print( 'Customers with an above-average salary have a {}% chance of abandoning the service'.format(estimated_salary_above_average) )
sns.barplot( x='estimated_salary_status', y='customer_id', data=aux1 );
###Output
Customers with an above-average salary have a 49.79% chance of abandoning the service
###Markdown
 **H6.** Customers with no money in their balance are more likely to abandon the service.**False.** Customers with no money in their balance are less likely to abandon the service.
###Code
aux = df4.groupby( ['balance_status', 'exited'] )['customer_id'].count().reset_index()
aux[aux['exited'] == 1]['customer_id'].pct_change()
pd.crosstab(df4['balance_status'], df4['exited']).plot.bar();
###Output
_____no_output_____
###Markdown
 **H7.** Customers living in Spain are more likely to KEEP THE SERVICE compared to the other countries.**True**. Customers living in Spain are more likely to KEEP THE SERVICE.
###Code
aux = df4.groupby( ['geography', 'exited'] )['customer_id'].count().reset_index()
aux = aux[aux['exited'] == 1]
pd.crosstab( df4['geography'], df4['exited'] ).plot.bar();
###Output
_____no_output_____
###Markdown
 **H8.** Customers who have a credit card are more likely to KEEP THE SERVICE than those who do not.**True**. Customers who have a card have a 70.71% chance of keeping the service.
###Code
aux = df4[df4['exited'] == 0]
aux1 = aux[aux['has_cr_card'] == 0]
aux2 = aux[aux['has_cr_card'] == 1]
prc_has_cr_card = round(aux2['customer_id'].count() / (aux1['customer_id'].count() + aux2['customer_id'].count()) * 100, 2 )
print( 'Customers who have a card have a {}% chance of keeping the service'.format(prc_has_cr_card) )
pd.crosstab( df4['has_cr_card'], df4['exited'] ).plot.bar();
###Output
Customers who have a card have a 70.71% chance of keeping the service
###Markdown
5.0. Data Preprocessing
###Code
df5 = df4.copy()
le = pp.LabelEncoder()
df5['geography'] = le.fit_transform( df5[['geography']].values.ravel() )
df5['gender'] = le.fit_transform( df5[['gender']].values.ravel() )
df5['age_group'] = le.fit_transform( df5[['age_group']].values.ravel() )
df5['balance_status'] = le.fit_transform( df5[['balance_status']].values )
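# Note: this last call passes a 2-D array (df5[['balance_status']].values) without .ravel(),
# which triggers the DataConversionWarning shown below; the other columns avoid it by
# flattening with .ravel() first.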
###Output
/home/cid/.pyenv/versions/3.8.0/envs/churn-prediction/lib/python3.8/site-packages/sklearn/utils/validation.py:63: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
return f(*args, **kwargs)
###Markdown
6.0. Feature Selection
###Code
df6 = df5.copy()
###Output
_____no_output_____
###Markdown
7.0. Model Traning
###Code
X = df6.drop('exited', axis=1)
y = df6['exited'].values
X_train, X_val, y_train, y_val = ms.train_test_split(X, y,
test_size=0.2,
train_size=0.8,
random_state=42,
shuffle=True,
stratify=None)
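# Note: the target 'exited' is imbalanced (roughly 20% churn); stratify=y would preserve the
# class proportions in both splits. It is left as None here to match the original run.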
###Output
_____no_output_____
###Markdown
7.1. DummyClassifier
###Code
# model definition
model_baseline = dummy.DummyClassifier(strategy='prior', random_state=42, constant=None)
# model fit
model_baseline.fit( X_train, y_train )
# model predict
yhat_baseline = model_baseline.predict( X_val )
# model performance
print(metrics.classification_report( y_val, yhat_baseline,
labels=None,
target_names=None,
sample_weight=None,
digits=2,
output_dict=False,
zero_division=0))
y_train_pred = cross_val_predict(model_baseline, X_train, y_train, cv=3)
precision_score_ = metrics.precision_score(y_train, y_train_pred)
recall_score_ = metrics.recall_score(y_train, y_train_pred)
f1_score_ = metrics.f1_score(y_train, y_train_pred)
accuracy_score_ = metrics.accuracy_score(y_train, y_train_pred)
dict_ = {
'precision_score': precision_score_,
'recall_score': recall_score_,
'f1_score_': f1_score_,
'accuracy_score_': accuracy_score_
}
df = pd.DataFrame( dict_, index=[0] )
df
###Output
/home/cid/.pyenv/versions/3.8.0/envs/churn-prediction/lib/python3.8/site-packages/sklearn/metrics/_classification.py:1248: UndefinedMetricWarning: Precision is ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))
###Markdown
7.2. KNeighborsClassifier
###Code
# # model definition
model_knn = neighbors.KNeighborsClassifier(n_neighbors=5,
weights='uniform',
algorithm='auto',
leaf_size=30,
p=2,
metric='minkowski',
metric_params=None,
n_jobs=None)
# model fit
model_knn.fit( X_train, y_train )
# model predict
yhat_knn = model_knn.predict( X_val )
# model performance
print(metrics.classification_report( y_val, yhat_knn,
labels=None,
target_names=None,
sample_weight=None,
digits=2,
output_dict=False,
zero_division=0))
###Output
precision recall f1-score support
0 0.81 0.93 0.87 1607
1 0.24 0.08 0.12 393
accuracy 0.77 2000
macro avg 0.52 0.51 0.49 2000
weighted avg 0.69 0.77 0.72 2000
###Markdown
7.3 SVM
###Code
# model definition
model_svm = svm.SVC(C=1.0,
kernel='rbf',
degree=3,
gamma='scale',
coef0=0.0,
shrinking=True,
probability=False,
tol=0.001,
cache_size=200,
class_weight=None,
verbose=False,
max_iter=-1,
decision_function_shape='ovr',
break_ties=False,
random_state=42)
# model fit
model_svm.fit( X_train, y_train )
# model predict
yhat_svm = model_svm.predict( X_val )
# model performance
print(metrics.classification_report( y_val, yhat_svm,
labels=None,
target_names=None,
sample_weight=None,
digits=2,
output_dict=False,
zero_division=0))
###Output
precision recall f1-score support
0 0.80 1.00 0.89 1607
1 0.00 0.00 0.00 393
accuracy 0.80 2000
macro avg 0.40 0.50 0.45 2000
weighted avg 0.65 0.80 0.72 2000
###Markdown
7.4. RandomForestClassifier
###Code
# model definition
model_rf = ensemble.RandomForestClassifier(n_estimators=100,
criterion='gini',
max_depth=None,
min_samples_split=2,
min_samples_leaf=1,
min_weight_fraction_leaf=0.0,
max_features='auto',
max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_impurity_split=None,
bootstrap=True,
oob_score=False,
n_jobs=None,
random_state=42,
verbose=0,
warm_start=False,
class_weight=None,
ccp_alpha=0.0,
max_samples=None)
# model fit
model_rf.fit( X_train, y_train )
# model predict
yhat_rf = model_rf.predict( X_val )
# model performance
print(metrics.classification_report( y_val, yhat_rf,
labels=None,
target_names=None,
sample_weight=None,
digits=2,
output_dict=False,
zero_division=0))
metrics.precision_score( y_val, yhat_rf, pos_label=1 )
###Output
_____no_output_____
###Markdown
7.5. LogisticRegression
###Code
# model definition
model_lr = linear_model.LogisticRegression(penalty='l2',
dual=False,
tol=0.0001,
C=1.0,
fit_intercept=True,
intercept_scaling=1,
class_weight=None,
random_state=42,
solver='lbfgs',
max_iter=100,
multi_class='auto',
verbose=0,
warm_start=False,
n_jobs=None,
l1_ratio=None)
# model fit
model_lr.fit( X_train, y_train )
# model predict
yhat_lr = model_lr.predict( X_val )
# model performance
print(metrics.classification_report( y_val, yhat_lr,
labels=None,
target_names=None,
sample_weight=None,
digits=2,
output_dict=False,
zero_division=0))
###Output
precision recall f1-score support
0 0.80 1.00 0.89 1607
1 0.00 0.00 0.00 393
accuracy 0.80 2000
macro avg 0.40 0.50 0.45 2000
weighted avg 0.65 0.80 0.72 2000
|
Learn-python/Part 2 - Plots, Graphs, Dictionaries, Flow and Loops/01-Matplotlib.ipynb | ###Markdown
 *** Matplotlib*** Data science is about telling a story with whatever data it is that you are dealing with. It might be website visitor behavior or customer purchasing patterns. No matter what it is, your job as a data scientist is to use the data you have to tell a story, to understand and gain insights. Using visualizations is one of the most powerful ways that you can communicate your story and insights.**Matplotlib** is a Python plotting library used to create visualizations. In this lesson we're going to start by importing matplotlib, create some simple line, scatter and histogram plots and finish up with creating this chart:Together we are going to walk step by step to create this beautiful chart. If you would like more information on matplotlib go to https://matplotlib.org. If you would like some background information on this chart and its creator go to [here](https://en.wikipedia.org/wiki/Hans_Rosling).Let's get started.
###Code
# Import matplotlib with subpackage pyplot as plt
import matplotlib.pyplot as plt
# List 1 year01
year01 = [1950, 1970, 1990, 2010]
# List 2 pop01 (world population)
pop01 = [2.519, 3.692, 5.263, 6.972]
###Output
_____no_output_____
###Markdown
 So far we have simply imported matplotlib and created two new lists. Nothing new from what we have learned in previous lessons. To plot this data as a line chart we call **plt.plot** and use the two lists as the axes.
###Code
# Create a line plot using year01 and pop01 as inputs
plt.plot(year01, pop01);
###Output
_____no_output_____
###Markdown
There are two very important things to point out at this stage. 1. The examples in this lesson are created using Jupyter Notebook. When creating plots there is usually a **plt.show()** statement which tells Python to render or show the actual plot. This statement is placed at the end of the program. Jupyter Notebook does not require this statement but it will be required in other text editors or IDEs. The plot function tells what to plot and how to plot it. Show actually displays the plot. 2. Take another look at the line _plt.plot(year01, pop01);_. Notice anything out of place? If you picked up on the semi-colon, well done! Adding the semi-colon here is a choice I make to hide additional output that is produced when the plot is created. Go ahead and remove the semi-colon to see the difference. For the rest of this lesson you are going to see the semi-colon at the end of several lines of code. Don't worry about it too much. Well done, you have just created your first plot. As you can clearly see Python has drawn a line between the data points of your lists. The list **year01** was used to create the x-axis and was the first argument in plt.plot(). Our second list, **pop01**, was used to create the y-axis and was the second argument in plt.plot(). Scatter PlotsWe can easily modify what we have done so far to create a scatter plot. Simply replace the word "plot" with "scatter" in the plot function. As you can see in the plot below a scatter plot shows all data points without connecting them by a line which makes it easier to read. For many applications the scatter plot is sometimes better than a line plot.
###Code
plt.scatter(year01, pop01);
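# Note: in Jupyter the figure renders automatically, but in a standalone script or IDE you
# would normally finish with plt.show(), as mentioned in point 1 above, e.g.:
#
#   import matplotlib.pyplot as plt
#   plt.scatter(year01, pop01)
#   plt.show()  # required outside Jupyter to actually display the figure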
###Output
_____no_output_____
###Markdown
 The two plot examples above have been simple and created using two small lists. Let's bite off something more challenging. Below are two lists, [gross domestic product](https://www.investopedia.com/ask/answers/what-is-gdp-why-its-important-to-economists-investors/) per capita and life expectancy. I got these figures from the [World Bank](https://data.worldbank.org/indicator) datasets which are freely available online. In a later lesson we will learn how to import data files directly into our programs but for now lists will suffice. Make sure to go to my [Github](https://github.com/tstaunton/Learn-Data-Science-with-Python) page and copy the two lists. Don't try and type them in by hand.
###Code
gdp_cap = [974.5803384, 5937.029525999998, 6223.367465, 4797.231267, 12779.37964, 34435.367439999995, 36126.4927, 29796.04834,
1391.253792, 33692.60508, 1441.284873, 3822.137084, 7446.298803, 12569.85177, 9065.800825, 10680.79282, 1217.032994,
430.0706916, 1713.778686, 2042.09524, 36319.23501, 706.016537, 1704.063724, 13171.63885, 4959.114854, 7006.580419,
986.1478792, 277.5518587, 3632.557798, 9645.06142, 1544.750112, 14619.222719999998, 8948.102923, 22833.30851,
35278.41874, 2082.4815670000007, 6025.3747520000015, 6873.262326000001, 5581.180998, 5728.353514, 12154.08975,
641.3695236000002, 690.8055759, 33207.0844, 30470.0167, 13206.48452, 752.7497265, 32170.37442, 1327.60891, 27538.41188,
5186.050003, 942.6542111, 579.2317429999998, 1201.637154, 3548.3308460000007, 39724.97867, 18008.94444, 36180.78919,
2452.210407, 3540.651564, 11605.71449, 4471.061906, 40675.99635, 25523.2771, 28569.7197, 7320.8802620000015, 31656.06806,
4519.461171, 1463.249282, 1593.06548, 23348.139730000006, 47306.98978, 10461.05868, 1569.331442, 414.5073415, 12057.49928,
1044.770126, 759.3499101, 12451.6558, 1042.581557, 1803.151496, 10956.99112, 11977.57496, 3095.7722710000007, 9253.896111,
3820.17523, 823.6856205, 944.0, 4811.060429, 1091.359778, 36797.93332, 25185.00911, 2749.320965, 619.6768923999998,
2013.977305, 49357.19017, 22316.19287, 2605.94758, 9809.185636, 4172.838464, 7408.905561, 3190.481016, 15389.924680000002,
20509.64777, 19328.70901, 7670.122558, 10808.47561, 863.0884639000002, 1598.435089, 21654.83194, 1712.472136,
9786.534714, 862.5407561000002, 47143.17964, 18678.31435, 25768.25759, 926.1410683, 9269.657808, 28821.0637, 3970.095407,
2602.394995, 4513.480643, 33859.74835, 37506.41907, 4184.548089, 28718.27684, 1107.482182, 7458.396326999998, 882.9699437999999,
18008.50924, 7092.923025, 8458.276384, 1056.380121, 33203.26128, 42951.65309, 10611.46299, 11415.80569, 2441.576404,
3025.349798, 2280.769906, 1271.211593, 469.70929810000007]
life_exp = [43.828, 76.423, 72.301, 42.731, 75.32, 81.235, 79.829, 75.635, 64.062, 79.441, 56.728, 65.554, 74.852, 50.728,
72.39, 73.005, 52.295, 49.58, 59.723, 50.43, 80.653, 44.7410001, 50.651, 78.553, 72.961, 72.889, 65.152, 46.462,
55.322, 78.782, 48.328, 75.748, 78.273, 76.486, 78.332, 54.791, 72.235, 74.994, 71.33800000000002, 71.878,51.578999,
58.04, 52.947, 79.313, 80.657, 56.735, 59.448, 79.406, 60.022, 79.483, 70.259, 56.007, 46.38800000000001, 60.916,
70.19800000000001, 82.208, 73.33800000000002, 81.757, 64.69800000000001, 70.65, 70.964, 59.545, 78.885, 80.745, 80.546,
72.567, 82.603, 72.535, 54.11, 67.297, 78.623, 77.58800000000002, 71.993, 42.592, 45.678, 73.952, 59.44300000000001,
48.303, 74.241, 54.467, 64.164, 72.801, 76.195, 66.803, 74.543, 71.164, 42.082, 62.069, 52.90600000000001, 63.785,
79.762, 80.204, 72.899, 56.867, 46.859, 80.196, 75.64, 65.483, 75.53699999999998, 71.752, 71.421, 71.688, 75.563,
78.098, 78.74600000000002, 76.442, 72.476, 46.242, 65.528, 72.777, 63.062, 74.002, 42.56800000000001, 79.972,
74.663, 77.926, 48.159, 49.339, 80.941, 72.396, 58.556, 39.613, 80.884, 81.70100000000002, 74.143, 78.4, 52.517,
70.616, 58.42, 69.819, 73.923, 71.777, 51.542, 79.425, 78.242, 76.384, 73.747, 74.249, 73.422, 62.698, 42.38399999999999,
43.487]
###Output
_____no_output_____
###Markdown
 Using what we learned so far let's plot these two lists. I would like to put gdp_cap on the x-axis and life_exp on the y-axis. This means that they must take positions 1 and 2 respectively in the plt.plot() function.
###Code
# Create a line plot using gdp_cap and life_exp
plt.plot(gdp_cap, life_exp);
###Output
_____no_output_____
###Markdown
 Ouch! That's a lot of data and not very readable. You would need to be a special kind of genius to pull any insights from that chart. Let's keep going. Here it is as a scatter plot.
###Code
# Create a scatter plot using gdp_cap and life_exp
plt.scatter(gdp_cap, life_exp)
# Put the x-axis on a logarithmic scale
plt.xscale('log')
###Output
_____no_output_____
###Markdown
You'll notice a new line of code used to create the above plot, plt.xscale('log'). Plotting the x-axis on a logarithmic scale helps us to view and understand large amounts of data more clearly. A logarithmic scale is a nonlinear scale used for a large range of positive multiples of some quantity. It is based on orders of magnitude, rather than a standard linear scale, so the value represented by each equidistant mark on the scale is the value at the previous mark multiplied by a constant. You can find more information [at this link](https://en.wikipedia.org/wiki/Logarithmic_scale).Now we're starting to look in better shape.From this early chart what insights can you glean about our data? Higher GDP seems to correspond to a higher life expectancy. In other words, there is a positive correlation.Below is a new list, pop03 which shows population data to go with our existing GDP and life expectancy data.
###Code
pop03 = [31.889923, 3.600523, 33.333216, 12.420476, 40.301927, 20.434176, 8.199783, 0.708573, 150.448339, 10.392226,
8.078314, 9.119152, 4.552198, 1.639131, 190.010647, 7.322858, 14.326203, 8.390505, 14.131858, 17.696293, 33.390141, 4.369038,
10.238807, 16.284741, 1318.683096, 44.22755, 0.71096, 64.606759, 3.80061, 4.133884, 18.013409, 4.493312, 11.416987,
10.228744, 5.46812, 0.496374, 9.319622, 13.75568, 80.264543, 6.939688, 0.551201, 4.906585, 76.511887, 5.23846,
61.083916, 1.454867, 1.688359, 82.400996, 22.873338, 10.70629, 12.572928, 9.947814, 1.472041, 8.502814, 7.483763,
6.980412, 9.956108, 0.301931, 1110.396331, 223.547, 69.45357, 27.499638, 4.109086, 6.426679, 58.147733, 2.780132,
127.467972, 6.053193, 35.610177, 23.301725, 49.04479, 2.505559, 3.921278, 2.012649, 3.193942, 6.036914, 19.167654,
13.327079, 24.821286, 12.031795, 3.270065, 1.250882, 108.700891, 2.874127, 0.684736, 33.757175, 19.951656,
47.76198, 2.05508, 28.90179, 16.570613, 4.115771, 5.675356, 12.894865, 135.031164, 4.627926, 3.204897, 169.270617,
3.242173, 6.667147, 28.674757, 91.077287, 38.518241, 10.642836, 3.942491, 0.798094, 22.276056, 8.860588, 0.199579,
27.601038, 12.267493, 10.150265, 6.144562, 4.553009, 5.447502, 2.009245, 9.118773, 43.997828, 40.448191, 20.378239,
42.292929, 1.133066, 9.031088, 7.554661, 19.314747, 23.174294, 38.13964, 65.068149, 5.701579, 1.056608, 10.276158,
71.158647, 29.170398, 60.776238, 301.139947, 3.447496, 26.084662, 85.262356, 4.018332, 22.211743, 11.746035, 12.311143]
# Plot pop03 on the x-axis and life_exp on the y-axis
plt.scatter(pop03, life_exp);
###Output
_____no_output_____
###Markdown
HistogramsHistograms are used to explore the distribution of data. Imagine 12 values between 0 and 6. In the image below I have placed them on a number line.As you can see along the line it's divided into equal chunks known as bins. Now let's reorganize our number line into three bins with a width of two.How many data points sit in each bin? Four, six and two. If we were to draw a bar in each bin the height of the bar would represent the number of data points in each bin. With our histogram in place we can now see how the values are distributed. Most values are in the middle and there are more values below two than there are above four.Now that we understand the basics of histograms it's time to use matplotlib to start creating them. As always if he need some help we can call help(plt.hist).
###Code
help(plt.hist)
###Output
_____no_output_____
###Markdown
 Look at the first two arguments: x is the list of values you want to build the histogram for. The second argument tells Python how many bins the data should be divided up into. Based on this number hist will automatically find appropriate boundaries for all bins and calculate how many values are in each one. If you don't specify the bins argument it will be ten by default. Let's get working on a new example.
###Code
# Import package
import matplotlib.pyplot as plt
# Create a list of values
values = [0, 0.6, 1.4, 1.6, 2.2, 2.5, 2.6, 3.2, 3.5, 3.9, 4.2, 6]
# Call hist, and pass our new list as an argument x, bins are 3 so that values are divided into 3 bins
plt.hist(values, bins = 3);
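# Numerically verify the bin counts described above (four, six and two values per bin);
# np.histogram mirrors what plt.hist computes (numpy is imported as np later in this lesson)
import numpy as np
counts, bin_edges = np.histogram(values, bins=3)  # counts -> [4, 6, 2], bin_edges -> [0., 2., 4., 6.]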
# To see how life expectancy in different countries is distributed, create a histogram
plt.hist(life_exp);
# Plot life_exp with 5 bins
plt.hist(life_exp, bins=5);
# Plot life_exp with 20 bins
plt.hist(life_exp, bins = 20);
###Output
_____no_output_____
###Markdown
 Too few bins will oversimplify reality and won't show you the details. Too many bins will overcomplicate reality and won't show you the bigger picture. As you can see we're getting more insights with 20 bins. Histograms make it very easy to compare data. Let's do a comparison now: life_exp contains life expectancy data for different countries in 2007. Below is a second list, life_exp1950, which contains similar data for 1950. Let's make a histogram containing both lists.
###Code
life_exp1950 = [28.8, 55.23, 43.08, 30.02, 62.48, 69.12, 66.8, 50.94, 37.48, 68.0, 38.22, 40.41,
53.82, 47.62, 50.92, 59.6, 31.98, 39.03, 39.42, 38.52, 68.75, 35.46, 38.09, 54.74, 44.0,
50.64, 40.72, 39.14, 42.11, 57.21, 40.48, 61.21, 59.42, 66.87, 70.78, 34.81, 45.93, 48.36,
41.89, 45.26, 34.48, 35.93, 34.08, 66.55, 67.41, 37.0, 30.0, 67.5, 43.15, 65.86, 42.02, 33.61, 32.5, 37.58, 41.91,
60.96, 64.03, 72.49, 37.37, 37.47, 44.87, 45.32, 66.91, 65.39, 65.94, 58.53, 63.03, 43.16, 42.27, 50.06, 47.45,
55.56, 55.93, 42.14, 38.48, 42.72, 36.68, 36.26, 48.46, 33.68, 40.54, 50.99, 50.79, 42.24, 59.16, 42.87, 31.29,
36.32, 41.72, 36.16, 72.13, 69.39, 42.31, 37.44, 36.32, 72.67, 37.58, 43.44, 55.19, 62.65, 43.9, 47.75, 61.31, 59.82,
64.28, 52.72, 61.05, 40.0, 46.47, 39.88, 37.28, 58.0, 30.33, 60.4, 64.36, 65.57, 32.98, 45.01, 64.94, 57.59, 38.64,
41.41, 71.86, 69.62, 45.88, 58.5, 41.22, 50.85, 38.6, 59.1, 44.6, 43.58, 39.98, 69.18, 68.44, 66.07, 55.09, 40.41,
43.16, 32.55, 42.04, 48.45]
plt.hist(life_exp, bins = 15)
plt.hist(life_exp1950, bins = 15);
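# Tip (not part of the original lesson): overlapping histograms are easier to compare with
# some transparency and a legend, e.g.:
#   plt.hist(life_exp, bins=15, alpha=0.5, label='2007')
#   plt.hist(life_exp1950, bins=15, alpha=0.5, label='1950')
#   plt.legend()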
###Output
_____no_output_____
###Markdown
 Customizing plots One of the biggest challenges when dealing with large datasets is communicating your findings and allowing others to gain insights from your visualization work. Again, what you are trying to accomplish is to tell a story with data. To help with this challenge matplotlib allows us to add customizations to our plots so that they are easier to understand. Let's look at an example to help demonstrate.
###Code
# Import correct package as plt
import matplotlib.pyplot as plt
# Create list of years
year04 = [1950,1951,1952,1953,1954,1955,1956,1957,1958,1959,1960,1961,1962,1963,1964,1965,1966,1967,1968,1969,1970,1971,1972,1973,1974,1975,1976,1977,1978,1979,1980,1981,1982,1983,1984,1985,1986,1987,1988,1989,1990,1991,1992,1993,1994,1995,1996,1997,1998,1999,2000,2001,2002,2003,2004,2005,2006,2007,2008,2009,2010,2011,2012,2013,2014,2015]
# Create list of corresponding populations
pop04 = [2.536, 2.583, 2.630, 2.677, 2.724, 2.772, 2.821, 2.871, 2.924, 2.977, 3.033, 3.090, 3.149, 3.210, 3.273, 3.339, 3.408, 3.479, 3.551, 3.625, 3.700, 3.775, 3.851, 3.927, 4.003, 4.079, 4.154, 4.229, 4.304, 4.380, 4.458, 4.537, 4.618, 4.701, 4.786, 4.873, 4.963, 5.055, 5.148, 5.240, 5.330, 5.418, 5.504, 5.588, 5.670, 5.751, 5.831, 5.910, 5.988, 6.066, 6.145, 6.223, 6.302, 6.381, 6.461, 6.542, 6.623, 6.706, 6.789, 6.873, 6.958, 7.043, 7.128, 7.213, 7.298, 7.383]
# Plot year04 on x-axis and pop04 on y-axis
plt.plot(year04, pop04);
###Output
_____no_output_____
###Markdown
 At first glance this is actually not a bad looking plot. But if you were to study it a bit longer, would you know what data it is trying to communicate, what story it is trying to tell? Let's help this plot communicate its story by adding labels to the x and y axes. To do this we use the **xlabel** and **ylabel** functions.
###Code
# Import correct package as plt
import matplotlib.pyplot as plt
# Create list of years
year05 = [1950,1951,1952,1953,1954,1955,1956,1957,1958,1959,1960,1961,1962,1963,1964,1965,1966,1967,1968,1969,1970,1971,1972,1973,1974,1975,1976,1977,1978,1979,1980,1981,1982,1983,1984,1985,1986,1987,1988,1989,1990,1991,1992,1993,1994,1995,1996,1997,1998,1999,2000,2001,2002,2003,2004,2005,2006,2007,2008,2009,2010,2011,2012,2013,2014,2015]
# Create list of corresponding populations
pop05 = [2.536, 2.583, 2.630, 2.677, 2.724, 2.772, 2.821, 2.871, 2.924, 2.977, 3.033, 3.090, 3.149, 3.210, 3.273, 3.339, 3.408, 3.479, 3.551, 3.625, 3.700, 3.775, 3.851, 3.927, 4.003, 4.079, 4.154, 4.229, 4.304, 4.380, 4.458, 4.537, 4.618, 4.701, 4.786, 4.873, 4.963, 5.055, 5.148, 5.240, 5.330, 5.418, 5.504, 5.588, 5.670, 5.751, 5.831, 5.910, 5.988, 6.066, 6.145, 6.223, 6.302, 6.381, 6.461, 6.542, 6.623, 6.706, 6.789, 6.873, 6.958, 7.043, 7.128, 7.213, 7.298, 7.383]
# Plot year05 on x-axis and pop05 on y-axis
plt.plot(year05, pop05)
# Label x-axis
plt.xlabel('Year')
# Label y-axis
plt.ylabel('Population');
# Import correct package as plt
import matplotlib.pyplot as plt
# Create list of years
year05 = [1950,1951,1952,1953,1954,1955,1956,1957,1958,1959,1960,1961,1962,1963,1964,1965,1966,1967,1968,1969,1970,1971,1972,1973,1974,1975,1976,1977,1978,1979,1980,1981,1982,1983,1984,1985,1986,1987,1988,1989,1990,1991,1992,1993,1994,1995,1996,1997,1998,1999,2000,2001,2002,2003,2004,2005,2006,2007,2008,2009,2010,2011,2012,2013,2014,2015]
# Create list of corresponding populations
pop05 = [2.536, 2.583, 2.630, 2.677, 2.724, 2.772, 2.821, 2.871, 2.924, 2.977, 3.033, 3.090, 3.149, 3.210, 3.273, 3.339, 3.408, 3.479, 3.551, 3.625, 3.700, 3.775, 3.851, 3.927, 4.003, 4.079, 4.154, 4.229, 4.304, 4.380, 4.458, 4.537, 4.618, 4.701, 4.786, 4.873, 4.963, 5.055, 5.148, 5.240, 5.330, 5.418, 5.504, 5.588, 5.670, 5.751, 5.831, 5.910, 5.988, 6.066, 6.145, 6.223, 6.302, 6.381, 6.461, 6.542, 6.623, 6.706, 6.789, 6.873, 6.958, 7.043, 7.128, 7.213, 7.298, 7.383]
# Plot year05 on x-axis and pop05 on y-axis
plt.plot(year05, pop05)
# Label x-axis
plt.xlabel('Year')
# Label y-axis
plt.ylabel('Population');
# Add a title
plt.title('World Population Estimates');
###Output
_____no_output_____
###Markdown
 At least now a reader knows what this plot is about. Next let's make the y-axis start from 0 for more context. We do this with the **yticks** function.
###Code
# Import correct package as plt
import matplotlib.pyplot as plt
# Create list of years
year05 = [1950,1951,1952,1953,1954,1955,1956,1957,1958,1959,1960,1961,1962,1963,1964,1965,1966,1967,1968,1969,1970,1971,1972,1973,1974,1975,1976,1977,1978,1979,1980,1981,1982,1983,1984,1985,1986,1987,1988,1989,1990,1991,1992,1993,1994,1995,1996,1997,1998,1999,2000,2001,2002,2003,2004,2005,2006,2007,2008,2009,2010,2011,2012,2013,2014,2015]
# Create list of corresponding populations
pop05 = [2.536, 2.583, 2.630, 2.677, 2.724, 2.772, 2.821, 2.871, 2.924, 2.977, 3.033, 3.090, 3.149, 3.210, 3.273, 3.339, 3.408, 3.479, 3.551, 3.625, 3.700, 3.775, 3.851, 3.927, 4.003, 4.079, 4.154, 4.229, 4.304, 4.380, 4.458, 4.537, 4.618, 4.701, 4.786, 4.873, 4.963, 5.055, 5.148, 5.240, 5.330, 5.418, 5.504, 5.588, 5.670, 5.751, 5.831, 5.910, 5.988, 6.066, 6.145, 6.223, 6.302, 6.381, 6.461, 6.542, 6.623, 6.706, 6.789, 6.873, 6.958, 7.043, 7.128, 7.213, 7.298, 7.383]
# Plot year05 on x-axis and pop05 on y-axis
plt.plot(year05, pop05)
# Label x-axis
plt.xlabel('Year')
# Label y-axis
plt.ylabel('Population');
# Add a title
plt.title('World Population Estimates');
# Start y-axis at 0
plt.yticks([0,1,2,3,4,5,6,7,8]);
###Output
_____no_output_____
###Markdown
 We can add a second argument to the yticks function which will further annotate our y-axis.
###Code
# Import correct package as plt
import matplotlib.pyplot as plt
# Create list of years
year05 = [1950,1951,1952,1953,1954,1955,1956,1957,1958,1959,1960,1961,1962,1963,1964,1965,1966,1967,1968,1969,1970,1971,1972,1973,1974,1975,1976,1977,1978,1979,1980,1981,1982,1983,1984,1985,1986,1987,1988,1989,1990,1991,1992,1993,1994,1995,1996,1997,1998,1999,2000,2001,2002,2003,2004,2005,2006,2007,2008,2009,2010,2011,2012,2013,2014,2015]
# Create list of corresponding populations
pop05 = [2.536, 2.583, 2.630, 2.677, 2.724, 2.772, 2.821, 2.871, 2.924, 2.977, 3.033, 3.090, 3.149, 3.210, 3.273, 3.339, 3.408, 3.479, 3.551, 3.625, 3.700, 3.775, 3.851, 3.927, 4.003, 4.079, 4.154, 4.229, 4.304, 4.380, 4.458, 4.537, 4.618, 4.701, 4.786, 4.873, 4.963, 5.055, 5.148, 5.240, 5.330, 5.418, 5.504, 5.588, 5.670, 5.751, 5.831, 5.910, 5.988, 6.066, 6.145, 6.223, 6.302, 6.381, 6.461, 6.542, 6.623, 6.706, 6.789, 6.873, 6.958, 7.043, 7.128, 7.213, 7.298, 7.383]
# Plot year05 on x-axis and pop05 on y-axis
plt.plot(year05, pop05)
# Label x-axis
plt.xlabel('Year')
# Label y-axis
plt.ylabel('Population');
# Add a title
plt.title('World Population Estimates');
# Start y-axis at 0 and add second argument
plt.yticks([0,1,2,3,4,5,6,7,8], ['0','1B','2B','3B','4B','5B','6B','7B','8B']);
###Output
_____no_output_____
###Markdown
 From Wikipedia I found some additional population data. Let's add it to our current data set. If you are unfamiliar with the code below, take a look at [Part 1 of this series which covers Python Lists](https://skl.sh/2XCsI0M).
###Code
year06 = [1800, 1850, 1900] + year05
pop06 = [1, 1.262, 1.659] + pop05
# Import correct package as plt
import matplotlib.pyplot as plt
# Plot year06 on x-axis and pop06 on y-axis
plt.plot(year06, pop06)
# Label x-axis
plt.xlabel('Year')
# Label y-axis
plt.ylabel('Population');
# Add a title
plt.title('World Population Estimates');
# Start y-axis at 0 and add second argument
plt.yticks([0,1,2,3,4,5,6,7,8], ['0','1B','2B','3B','4B','5B','6B','7B','8B']);
###Output
_____no_output_____
###Markdown
 Back to some data we were using at the start of the lesson.
###Code
gdp_cap = [974.5803384, 5937.029525999998, 6223.367465, 4797.231267, 12779.37964, 34435.367439999995, 36126.4927, 29796.04834,
1391.253792, 33692.60508, 1441.284873, 3822.137084, 7446.298803, 12569.85177, 9065.800825, 10680.79282, 1217.032994,
430.0706916, 1713.778686, 2042.09524, 36319.23501, 706.016537, 1704.063724, 13171.63885, 4959.114854, 7006.580419,
986.1478792, 277.5518587, 3632.557798, 9645.06142, 1544.750112, 14619.222719999998, 8948.102923, 22833.30851,
35278.41874, 2082.4815670000007, 6025.3747520000015, 6873.262326000001, 5581.180998, 5728.353514, 12154.08975,
641.3695236000002, 690.8055759, 33207.0844, 30470.0167, 13206.48452, 752.7497265, 32170.37442, 1327.60891, 27538.41188,
5186.050003, 942.6542111, 579.2317429999998, 1201.637154, 3548.3308460000007, 39724.97867, 18008.94444, 36180.78919,
2452.210407, 3540.651564, 11605.71449, 4471.061906, 40675.99635, 25523.2771, 28569.7197, 7320.8802620000015, 31656.06806,
4519.461171, 1463.249282, 1593.06548, 23348.139730000006, 47306.98978, 10461.05868, 1569.331442, 414.5073415, 12057.49928,
1044.770126, 759.3499101, 12451.6558, 1042.581557, 1803.151496, 10956.99112, 11977.57496, 3095.7722710000007, 9253.896111,
3820.17523, 823.6856205, 944.0, 4811.060429, 1091.359778, 36797.93332, 25185.00911, 2749.320965, 619.6768923999998,
2013.977305, 49357.19017, 22316.19287, 2605.94758, 9809.185636, 4172.838464, 7408.905561, 3190.481016, 15389.924680000002,
20509.64777, 19328.70901, 7670.122558, 10808.47561, 863.0884639000002, 1598.435089, 21654.83194, 1712.472136,
9786.534714, 862.5407561000002, 47143.17964, 18678.31435, 25768.25759, 926.1410683, 9269.657808, 28821.0637, 3970.095407,
2602.394995, 4513.480643, 33859.74835, 37506.41907, 4184.548089, 28718.27684, 1107.482182, 7458.396326999998, 882.9699437999999,
18008.50924, 7092.923025, 8458.276384, 1056.380121, 33203.26128, 42951.65309, 10611.46299, 11415.80569, 2441.576404,
3025.349798, 2280.769906, 1271.211593, 469.70929810000007]
life_exp = [43.828, 76.423, 72.301, 42.731, 75.32, 81.235, 79.829, 75.635, 64.062, 79.441, 56.728, 65.554, 74.852, 50.728,
72.39, 73.005, 52.295, 49.58, 59.723, 50.43, 80.653, 44.7410001, 50.651, 78.553, 72.961, 72.889, 65.152, 46.462,
55.322, 78.782, 48.328, 75.748, 78.273, 76.486, 78.332, 54.791, 72.235, 74.994, 71.33800000000002, 71.878,51.578999,
58.04, 52.947, 79.313, 80.657, 56.735, 59.448, 79.406, 60.022, 79.483, 70.259, 56.007, 46.38800000000001, 60.916,
70.19800000000001, 82.208, 73.33800000000002, 81.757, 64.69800000000001, 70.65, 70.964, 59.545, 78.885, 80.745, 80.546,
72.567, 82.603, 72.535, 54.11, 67.297, 78.623, 77.58800000000002, 71.993, 42.592, 45.678, 73.952, 59.44300000000001,
48.303, 74.241, 54.467, 64.164, 72.801, 76.195, 66.803, 74.543, 71.164, 42.082, 62.069, 52.90600000000001, 63.785,
79.762, 80.204, 72.899, 56.867, 46.859, 80.196, 75.64, 65.483, 75.53699999999998, 71.752, 71.421, 71.688, 75.563,
78.098, 78.74600000000002, 76.442, 72.476, 46.242, 65.528, 72.777, 63.062, 74.002, 42.56800000000001, 79.972,
74.663, 77.926, 48.159, 49.339, 80.941, 72.396, 58.556, 39.613, 80.884, 81.70100000000002, 74.143, 78.4, 52.517,
70.616, 58.42, 69.819, 73.923, 71.777, 51.542, 79.425, 78.242, 76.384, 73.747, 74.249, 73.422, 62.698, 42.38399999999999,
43.487]
# Basic scatter plot, log scale
plt.scatter(gdp_cap, life_exp)
plt.xscale('log')
# Create string variables to repesent x and y axis and title labels
xlab = 'GDP per Capita [in USD]'
ylab = 'Life Expectancy [in years]'
title = 'World Development in 2007'
# Add axis labels
plt.xlabel(xlab)
plt.ylabel(ylab)
# Add title
plt.title(title);
# Scatter plot
plt.scatter(gdp_cap, life_exp)
# Previous customizations
plt.xscale('log')
plt.xlabel('GDP per Capita [in USD]')
plt.ylabel('Life Expectancy [in years]')
plt.title('World Development in 2007')
# Definition of tick_val and tick_lab
tick_val = [1000, 10000, 100000]
tick_lab = ['1k', '10k', '100k']
# Adapt the ticks on the x-axis
plt.xticks(tick_val, tick_lab)
# After customizing, display the plot
plt.show()
pop07 = [31.889923, 3.600523, 33.333216, 12.420476, 40.301927, 20.434176, 8.199783, 0.708573, 150.448339, 10.392226,
8.078314, 9.119152, 4.552198, 1.639131, 190.010647, 7.322858, 14.326203, 8.390505, 14.131858, 17.696293, 33.390141,
4.369038, 10.238807, 16.284741, 1318.683096, 44.22755, 0.71096, 64.606759, 3.80061, 4.133884, 18.013409, 4.493312,
11.416987, 10.228744, 5.46812, 0.496374, 9.319622, 13.75568, 80.264543, 6.939688, 0.551201, 4.906585, 76.511887,
5.23846, 61.083916, 1.454867, 1.688359, 82.400996, 22.873338, 10.70629, 12.572928, 9.947814, 1.472041, 8.502814,
7.483763, 6.980412, 9.956108, 0.301931, 1110.396331, 223.547, 69.45357, 27.499638, 4.109086, 6.426679, 58.147733,
2.780132, 127.467972, 6.053193, 35.610177, 23.301725, 49.04479, 2.505559, 3.921278, 2.012649, 3.193942, 6.036914,
19.167654, 13.327079, 24.821286, 12.031795, 3.270065, 1.250882, 108.700891, 2.874127, 0.684736, 33.757175, 19.951656,
47.76198, 2.05508, 28.90179, 16.570613, 4.115771, 5.675356, 12.894865, 135.031164, 4.627926, 3.204897, 169.270617,
3.242173, 6.667147, 28.674757, 91.077287, 38.518241, 10.642836, 3.942491, 0.798094, 22.276056, 8.860588, 0.199579,
27.601038, 12.267493, 10.150265, 6.144562, 4.553009, 5.447502, 2.009245, 9.118773, 43.997828, 40.448191, 20.378239,
42.292929, 1.133066, 9.031088, 7.554661, 19.314747, 23.174294, 38.13964, 65.068149, 5.701579, 1.056608, 10.276158,
71.158647, 29.170398, 60.776238, 301.139947, 3.447496, 26.084662, 85.262356, 4.018332, 22.211743, 11.746035,
12.311143]
# Import numpy as np
import numpy as np
# Store pop07 as a numpy array: np_pop
np_pop = np.array(pop07)
# Double np_pop
np_pop = np_pop * 2
# Update: set s argument to np_pop
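# s sets the marker area (in points^2), so each bubble is sized by twice the country's
# population in millions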
plt.scatter(gdp_cap, life_exp, s = np_pop)
# Previous customizations
plt.xscale('log')
plt.xlabel('GDP per Capita [in USD]')
plt.ylabel('Life Expectancy [in years]')
plt.title('World Development in 2007')
plt.xticks([1000, 10000, 100000],['1k', '10k', '100k']);
col = ['red', 'green', 'blue', 'blue', 'yellow', 'black', 'green', 'red', 'red', 'green', 'blue', 'yellow', 'green',
'blue', 'yellow', 'green', 'blue', 'blue', 'red', 'blue', 'yellow', 'blue', 'blue', 'yellow', 'red', 'yellow', 'blue',
'blue', 'blue', 'yellow', 'blue', 'green', 'yellow', 'green', 'green', 'blue', 'yellow', 'yellow', 'blue', 'yellow',
'blue', 'blue', 'blue', 'green', 'green', 'blue', 'blue', 'green', 'blue', 'green', 'yellow', 'blue', 'blue', 'yellow',
'yellow', 'red', 'green', 'green', 'red', 'red', 'red', 'red', 'green', 'red', 'green', 'yellow', 'red', 'red', 'blue',
'red', 'red', 'red', 'red', 'blue', 'blue', 'blue', 'blue', 'blue', 'red', 'blue', 'blue', 'blue', 'yellow', 'red',
'green', 'blue', 'blue', 'red', 'blue', 'red', 'green', 'black', 'yellow', 'blue', 'blue', 'green', 'red', 'red',
'yellow', 'yellow', 'yellow', 'red', 'green', 'green', 'yellow', 'blue', 'green', 'blue', 'blue', 'red', 'blue', 'green',
'blue', 'red', 'green', 'green', 'blue', 'blue', 'green', 'red', 'blue', 'blue', 'green', 'green', 'red', 'red', 'blue',
'red', 'blue', 'yellow', 'blue', 'green', 'blue', 'green', 'yellow', 'yellow', 'yellow', 'red', 'red', 'red', 'blue',
'blue']
# Specify c and alpha inside plt.scatter()
plt.scatter(x = gdp_cap, y = life_exp, s = np.array(pop07) * 2, c = col, alpha = 0.8)
# Previous customizations
plt.xscale('log')
plt.xlabel('GDP per Capita [in USD]')
plt.ylabel('Life Expectancy [in years]')
plt.title('World Development in 2007')
plt.xticks([1000,10000,100000], ['1k','10k','100k'])
# Show the plot
plt.show()
# Scatter plot
plt.scatter(x = gdp_cap, y = life_exp,
s = np.array(pop07) * 2, c = col, alpha = 0.8)
# Previous customizations
plt.xscale('log')
plt.xlabel('GDP per Capita [in USD]')
plt.ylabel('Life Expectancy [in years]')
plt.title('World Development in 2007')
plt.xticks([1000,10000,100000], ['1k','10k','100k'])
# Additional customizations
plt.text(1550, 71, 'India')
plt.text(5700, 80, 'China')
# Add grid() call
plt.grid(True)
# Show the plot
plt.show()
###Output
_____no_output_____ |
Stuff/CheatSkeet.ipynb | ###Markdown
Creates a group of mol objs from list
###Code
from rdkit import Chem
from rdkit.Chem import Draw, PandasTools, Descriptors

# TCA_smile_list is assumed to be a list of SMILES strings defined elsewhere in the notebook
mols = [Chem.MolFromSmiles(smi) for smi in TCA_smile_list]
Draw.MolsToGridImage(mols, molsPerRow=2, subImgSize=(800,800))
InchIMols = [Chem.MolFromInchi(k) for k in inchi_Key]  # inchi_Key is assumed to be a list of InChI strings
import pandas as pd
Test = pd.read_csv("https://raw.githubusercontent.com/meidelien/Metabolic-network-layout-using-biochemical-coordinates/main/Notebooks/MetabolitesFromInChIKey.csv")
PandasTools.AddMoleculeColumnToFrame(Test, smilesCol="SMILES")
Test.head(54)
PandasTools.FrameToGridImage(Test.head(54), maxMols=54, legendsCol="name", molsPerRow=4)
Test["n_Atoms"] = Test["ROMol"].map(lambda x: x.GetNumAtoms())
Test.head(54)
Test_2 = pd.read_csv("https://raw.githubusercontent.com/meidelien/Metabolic-network-layout-using-biochemical-coordinates/main/Notebooks/MetabolitesFromInChIKey.csv")
Test["n_Atoms"] = Test["ROMol"].map(lambda x: x.GetNumAtoms())
Test_2["MolLogp"] = Test_2["InChI"].map(lambda x: x.Chem.Descriptors.MolLogP())
Desc = Descriptors.MolLogP((Test_2.iat[3, 7]))
Test_2
Test_2.iat[3, 7]
###Output
_____no_output_____ |
Course 4 - Convolutional Neural Networks/6. Neural Style Transfer/Art Generation with Neural Style Transfer - v2.ipynb | ###Markdown
Deep Learning & Art: Neural Style TransferWelcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576). **In this assignment, you will:**- Implement the neural style transfer algorithm - Generate novel artistic images using your algorithm Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values!
###Code
import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 - Problem StatementNeural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a "content" image (C) and a "style" image (S), to create a "generated" image (G). The generated image G combines the "content" of the image C with the "style" of image S. In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).Let's see how you can do this. 2 - Transfer LearningNeural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning. Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers). Run the following code to load parameters from the VGG model. This may take a few seconds.
###Code
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)
###Output
{'input': <tf.Variable 'Variable:0' shape=(1, 300, 400, 3) dtype=float32_ref>, 'conv1_1': <tf.Tensor 'Relu:0' shape=(1, 300, 400, 64) dtype=float32>, 'conv1_2': <tf.Tensor 'Relu_1:0' shape=(1, 300, 400, 64) dtype=float32>, 'avgpool1': <tf.Tensor 'AvgPool:0' shape=(1, 150, 200, 64) dtype=float32>, 'conv2_1': <tf.Tensor 'Relu_2:0' shape=(1, 150, 200, 128) dtype=float32>, 'conv2_2': <tf.Tensor 'Relu_3:0' shape=(1, 150, 200, 128) dtype=float32>, 'avgpool2': <tf.Tensor 'AvgPool_1:0' shape=(1, 75, 100, 128) dtype=float32>, 'conv3_1': <tf.Tensor 'Relu_4:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_2': <tf.Tensor 'Relu_5:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_3': <tf.Tensor 'Relu_6:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_4': <tf.Tensor 'Relu_7:0' shape=(1, 75, 100, 256) dtype=float32>, 'avgpool3': <tf.Tensor 'AvgPool_2:0' shape=(1, 38, 50, 256) dtype=float32>, 'conv4_1': <tf.Tensor 'Relu_8:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_2': <tf.Tensor 'Relu_9:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_3': <tf.Tensor 'Relu_10:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_4': <tf.Tensor 'Relu_11:0' shape=(1, 38, 50, 512) dtype=float32>, 'avgpool4': <tf.Tensor 'AvgPool_3:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_1': <tf.Tensor 'Relu_12:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_2': <tf.Tensor 'Relu_13:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_3': <tf.Tensor 'Relu_14:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_4': <tf.Tensor 'Relu_15:0' shape=(1, 19, 25, 512) dtype=float32>, 'avgpool5': <tf.Tensor 'AvgPool_4:0' shape=(1, 10, 13, 512) dtype=float32>}
###Markdown
The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the [tf.assign](https://www.tensorflow.org/api_docs/python/tf/assign) function. In particular, you will use the assign function like this: ```pythonmodel["input"].assign(image)```This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer `4_2` when the network is run on this image, you would run a TensorFlow session on the correct tensor `conv4_2`, as follows: ```pythonsess.run(model["conv4_2"])``` 3 - Neural Style Transfer We will build the NST algorithm in three steps:- Build the content cost function $J_{content}(C,G)$- Build the style cost function $J_{style}(S,G)$- Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$. 3.1 - Computing the content costIn our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre.
###Code
content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)
###Output
_____no_output_____
###Markdown
The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.** 3.1.1 - How do you ensure the generated image G matches the content of the image C?**As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes. We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. In practice, you'll get the most visually pleasing results if you choose a layer in the middle of the network--neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.)So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be a $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: Set G as the input, and run forward progation. Let $$a^{(G)}$$ be the corresponding hidden layer activation. We will define as the content cost function as:$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C}\sum _{ \text{all entries}} (a^{(C)} - a^{(G)})^2\tag{1} $$Here, $n_H, n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer's activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below. (Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style const $J_{style}$.)**Exercise:** Compute the "content cost" using TensorFlow. **Instructions**: The 3 steps to implement this function are:1. Retrieve dimensions from a_G: - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`2. Unroll a_C and a_G as explained in the picture above - If you are stuck, take a look at [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape).3. Compute the content cost: - If you are stuck, take a look at [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract).
###Code
# GRADED FUNCTION: compute_content_cost
def compute_content_cost(a_C, a_G):
"""
Computes the content cost
Arguments:
a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G
Returns:
J_content -- scalar that you compute using equation 1 above.
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape a_C and a_G (≈2 lines)
a_C_unrolled = tf.reshape(a_C, [m, n_H*n_W, n_C])
a_G_unrolled = tf.reshape(a_G, [m, n_H*n_W, n_C])
# compute the cost with tensorflow (≈1 line)
# J_content = tf.reduce_sum((tf.subtract(a_C_unrolled, a_G_unrolled) **2) / (4 * n_H * n_W * n_C))
J_content = tf.reduce_sum( (a_C_unrolled - a_G_unrolled)**2 / (4 * n_H * n_W * n_C))
# J_content = tf.reduce_sum( tf.square(tf.subtract(a_C_unrolled, a_G_unrolled)) / (4 * n_H * n_W * n_C))
### END CODE HERE ###
return J_content
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_content = compute_content_cost(a_C, a_G)
print("J_content = " + str(J_content.eval()))
###Output
J_content = 6.76559
###Markdown
**Expected Output**: **J_content** 6.76559 **What you should remember**:- The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are. - When we minimize the content cost later, this will help make sure $G$ has similar content as $C$. 3.2 - Computing the style costFor our running example, we will use the following style image:
###Code
style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image)
###Output
_____no_output_____
###Markdown
This painting was painted in the style of *[impressionism](https://en.wikipedia.org/wiki/Impressionism)*.Lets see how you can now define a "style" cost function $J_{style}(S,G)$. 3.2.1 - Style matrixThe style matrix is also called a "Gram matrix." In linear algebra, the Gram matrix G of a set of vectors $(v_{1},\dots ,v_{n})$ is the matrix of dot products, whose entries are ${\displaystyle G_{ij} = v_{i}^T v_{j} = np.dot(v_{i}, v_{j}) }$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: If they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large. Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context. In NST, you can compute the Style matrix by multiplying the "unrolled" filter matrix with their transpose:The result is a matrix of dimension $(n_C,n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$. One important part of the gram matrix is that the diagonal elements such as $G_{ii}$ also measures how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: If $G_{ii}$ is large, this means that the image has a lot of vertical texture. By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image. **Exercise**:Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: The gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at [Hint 1](https://www.tensorflow.org/api_docs/python/tf/matmul) and [Hint 2](https://www.tensorflow.org/api_docs/python/tf/transpose).
###Code
# GRADED FUNCTION: gram_matrix
def gram_matrix(A):
"""
Argument:
A -- matrix of shape (n_C, n_H*n_W)
Returns:
GA -- Gram matrix of A, of shape (n_C, n_C)
"""
### START CODE HERE ### (≈1 line)
GA = tf.matmul(A, tf.transpose(A))
### END CODE HERE ###
return GA
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
A = tf.random_normal([3, 2*1], mean=1, stddev=4)
GA = gram_matrix(A)
print("GA = " + str(GA.eval()))
###Output
GA = [[ 6.42230511 -4.42912197 -2.09668207]
[ -4.42912197 19.46583748 19.56387138]
[ -2.09668207 19.56387138 20.6864624 ]]
###Markdown
**Expected Output**: **GA** [[ 6.42230511 -4.42912197 -2.09668207] [ -4.42912197 19.46583748 19.56387138] [ -2.09668207 19.56387138 20.6864624 ]] 3.2.2 - Style cost After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the "style" image S and that of the "generated" image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as: $$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum _{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{ij} - G^{(G)}_{ij})^2\tag{2} $$where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the "style" image and the "generated" image, computed using the hidden layer activations for a particular hidden layer in the network. **Exercise**: Compute the style cost for a single layer. **Instructions**: The 3 steps to implement this function are:1. Retrieve dimensions from the hidden layer activations a_G: - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above. - You may find [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape) useful.3. Compute the Style matrix of the images S and G. (Use the function you had previously written.) 4. Compute the Style cost: - You may find [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract) useful.
###Code
# GRADED FUNCTION: compute_layer_style_cost
def compute_layer_style_cost(a_S, a_G):
"""
Arguments:
a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G
Returns:
J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
a_S = tf.transpose(tf.reshape(a_S, [n_H*n_W, n_C]))
a_G = tf.transpose(tf.reshape(a_G, [n_H*n_W, n_C]))
# Computing gram_matrices for both images S and G (≈2 lines)
GS = gram_matrix(a_S)
GG = gram_matrix(a_G)
# Computing the loss (≈1 line)
# J_style_layer = tf.reduce_sum( tf.to_float(tf.square (tf.subtract(GS, GG))) / tf.to_float(4 * tf.square(n_C) * tf.square(n_H*n_W)))
# J_style_layer = tf.reduce_sum((tf.subtract(GS, GG) **2) / (4 * (n_C **2) * ((n_H*n_W) **2)))
J_style_layer = tf.reduce_sum( (GS - GG)**2 / (4 * (n_C**2) * (n_H*n_W)**2) )
### END CODE HERE ###
return J_style_layer
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_style_layer = compute_layer_style_cost(a_S, a_G)
print("J_style_layer = " + str(J_style_layer.eval()))
# print (tf.subtra)
###Output
J_style_layer = 9.19028
###Markdown
**Expected Output**: **J_style_layer** 9.19028 3.2.3 Style WeightsSo far you have captured the style from only one layer. We'll get better results if we "merge" style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. But for now, this is a pretty reasonable default:
###Code
STYLE_LAYERS = [
('conv1_1', 0.2),
('conv2_1', 0.2),
('conv3_1', 0.2),
('conv4_1', 0.2),
('conv5_1', 0.2)]
###Output
_____no_output_____
###Markdown
You can combine the style costs for different layers as follows:$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$where the values for $\lambda^{[l]}$ are given in `STYLE_LAYERS`. We've implemented a compute_style_cost(...) function. It simply calls your `compute_layer_style_cost(...)` several times, and weights their results using the values in `STYLE_LAYERS`. Read over it to make sure you understand what it's doing. <!-- 2. Loop over (layer_name, coeff) from STYLE_LAYERS: a. Select the output tensor of the current layer. As an example, to call the tensor from the "conv1_1" layer you would do: out = model["conv1_1"] b. Get the style of the style image from the current layer by running the session on the tensor "out" c. Get a tensor representing the style of the generated image from the current layer. It is just "out". d. Now that you have both styles. Use the function you've implemented above to compute the style_cost for the current layer e. Add (style_cost x coeff) of the current layer to overall style cost (J_style)3. Return J_style, which should now be the sum of the (style_cost x coeff) for each layer.!-->
###Code
def compute_style_cost(model, STYLE_LAYERS):
"""
Computes the overall style cost from several chosen layers
Arguments:
model -- our tensorflow model
STYLE_LAYERS -- A python list containing:
- the names of the layers we would like to extract style from
- a coefficient for each of them
Returns:
J_style -- tensor representing a scalar value, style cost defined above by equation (2)
"""
# initialize the overall style cost
J_style = 0
for layer_name, coeff in STYLE_LAYERS:
# Select the output tensor of the currently selected layer
out = model[layer_name]
# Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
a_S = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name]
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute style_cost for the current layer
J_style_layer = compute_layer_style_cost(a_S, a_G)
# Add coeff * J_style_layer of this layer to overall style cost
J_style += coeff * J_style_layer
return J_style
###Output
_____no_output_____
###Markdown
**Note**: In the inner-loop of the for-loop above, `a_G` is a tensor and hasn't been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.<!-- How do you choose the coefficients for each layer? The deeper layers capture higher-level concepts, and the features in the deeper layers are less localized in the image relative to each other. So if you want the generated image to softly follow the style image, try choosing larger weights for deeper layers and smaller weights for the first layers. In contrast, if you want the generated image to strongly follow the style image, try choosing smaller weights for deeper layers and larger weights for the first layers!-->**What you should remember**:- The style of an image can be represented using the Gram matrix of a hidden layer's activations. However, we get even better results combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.- Minimizing the style cost will cause the image $G$ to follow the style of the image $S$. 3.3 - Defining the total cost to optimize Finally, let's create a cost function that minimizes both the style and the content cost. The formula is: $$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$**Exercise**: Implement the total cost function which includes both the content cost and the style cost.
###Code
# GRADED FUNCTION: total_cost
def total_cost(J_content, J_style, alpha = 10, beta = 40):
"""
Computes the total cost function
Arguments:
J_content -- content cost coded above
J_style -- style cost coded above
alpha -- hyperparameter weighting the importance of the content cost
beta -- hyperparameter weighting the importance of the style cost
Returns:
J -- total cost as defined by the formula above.
"""
### START CODE HERE ### (≈1 line)
J = (alpha * J_content) + (beta * J_style)
### END CODE HERE ###
return J
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(3)
J_content = np.random.randn()
J_style = np.random.randn()
J = total_cost(J_content, J_style)
print("J = " + str(J))
###Output
J = 35.34667875478276
###Markdown
**Expected Output**: **J** 35.34667875478276 **What you should remember**:- The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$- $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style 4 - Solving the optimization problem Finally, let's put everything together to implement Neural Style Transfer!Here's what the program will have to do:1. Create an Interactive Session2. Load the content image 3. Load the style image4. Randomly initialize the image to be generated 5. Load the VGG16 model7. Build the TensorFlow graph: - Run the content image through the VGG16 model and compute the content cost - Run the style image through the VGG16 model and compute the style cost - Compute the total cost - Define the optimizer and the learning rate8. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step.Lets go through the individual steps in detail. You've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an "[Interactive Session](https://www.tensorflow.org/api_docs/python/tf/InteractiveSession)". Unlike a regular session, the "Interactive Session" installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code. Lets start the interactive session.
###Code
# Reset the graph
tf.reset_default_graph()
# Start interactive session
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Let's load, reshape, and normalize our "content" image (the Louvre museum picture):
###Code
content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)
###Output
_____no_output_____
###Markdown
Let's load, reshape and normalize our "style" image (Claude Monet's painting):
###Code
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
###Output
_____no_output_____
###Markdown
Now, we initialize the "generated" image as a noisy image created from the content_image. By initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image, this will help the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in `nst_utils.py` to see the details of `generate_noise_image(...)`; to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.)
###Code
generated_image = generate_noise_image(content_image)
imshow(generated_image[0])
###Output
_____no_output_____
###Markdown
Next, as explained in part (2), let's load the VGG-19 model.
###Code
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
###Output
_____no_output_____
###Markdown
To get the program to compute the content cost, we will now assign `a_C` and `a_G` to be the appropriate hidden layer activations. We will use layer `conv4_2` to compute the content cost. The code below does the following:1. Assign the content image to be the input to the VGG model.2. Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2".3. Set a_G to be the tensor giving the hidden layer activation for the same layer. 4. Compute the content cost using a_C and a_G.
###Code
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))
# Select the output tensor of layer conv4_2
out = model['conv4_2']
# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
###Output
_____no_output_____
###Markdown
**Note**: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the Tensorflow graph in model_nn() below.
###Code
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))
# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
###Output
_____no_output_____
###Markdown
**Exercise**: Now that you have J_content and J_style, compute the total cost J by calling `total_cost()`. Use `alpha = 10` and `beta = 40`.
###Code
### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, alpha = 10, beta = 40)
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
You'd previously learned how to set up the Adam optimizer in TensorFlow. Let's do that here, using a learning rate of 2.0. [See reference](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)
###Code
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)
# define train_step (1 line)
train_step = optimizer.minimize(J)
###Output
_____no_output_____
###Markdown
**Exercise**: Implement the model_nn() function which initializes the variables of the TensorFlow graph, assigns the input image (initial generated image) as the input of the VGG-19 model and runs the train_step for a large number of steps.
###Code
def model_nn(sess, input_image, num_iterations = 200):
# Initialize global variables (you need to run the session on the initializer)
### START CODE HERE ### (1 line)
sess.run(tf.global_variables_initializer())
### END CODE HERE ###
# Run the noisy input image (initial generated image) through the model. Use assign().
### START CODE HERE ### (1 line)
sess.run(model['input'].assign(input_image))
### END CODE HERE ###
for i in range(num_iterations):
# Run the session on the train_step to minimize the total cost
### START CODE HERE ### (1 line)
sess.run(train_step)
### END CODE HERE ###
# Compute the generated image by running the session on the current model['input']
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'])
### END CODE HERE ###
# Print every 20 iteration.
if i%20 == 0:
Jt, Jc, Js = sess.run([J, J_content, J_style])
print("Iteration " + str(i) + " :")
print("total cost = " + str(Jt))
print("content cost = " + str(Jc))
print("style cost = " + str(Js))
# save current generated image in the "/output" directory
save_image("output/" + str(i) + ".png", generated_image)
# save last generated image
save_image('output/generated_image.jpg', generated_image)
return generated_image
###Output
_____no_output_____
###Markdown
Run the following cell to generate an artistic image. It should take about 3min on CPU for every 20 iterations but you start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.
###Code
model_nn(sess, generated_image)
###Output
_____no_output_____
###Markdown
**Expected Output**: **Iteration 0 : ** total cost = 5.05035e+09 content cost = 7877.67 style cost = 1.26257e+08 You're done! After running this, in the upper bar of the notebook click on "File" and then "Open". Go to the "/output" directory to see all the saved images. Open "generated_image" to see the generated image! :)You should see something the image presented below on the right:We didn't want you to wait too long to see an initial result, and so had set the hyperparameters accordingly. To get the best looking results, running the optimization algorithm longer (and perhaps with a smaller learning rate) might work better. After completing and submitting this assignment, we encourage you to come back and play more with this notebook, and see if you can generate even better looking images. Here are few other examples:- The beautiful ruins of the ancient city of Persepolis (Iran) with the style of Van Gogh (The Starry Night)- The tomb of Cyrus the great in Pasargadae with the style of a Ceramic Kashi from Ispahan.- A scientific study of a turbulent fluid with the style of a abstract blue fluid painting. 5 - Test with your own image (Optional/Ungraded) Finally, you can also rerun the algorithm on your own images! To do so, go back to part 4 and change the content image and style image with your own pictures. In detail, here's what you should do:1. Click on "File -> Open" in the upper tab of the notebook2. Go to "/images" and upload your images (requirement: (WIDTH = 300, HEIGHT = 225)), rename them "my_content.png" and "my_style.png" for example.3. Change the code in part (3.4) from :```pythoncontent_image = scipy.misc.imread("images/louvre.jpg")style_image = scipy.misc.imread("images/claude-monet.jpg")```to:```pythoncontent_image = scipy.misc.imread("images/my_content.jpg")style_image = scipy.misc.imread("images/my_style.jpg")```4. Rerun the cells (you may need to restart the Kernel in the upper tab of the notebook).You can also tune your hyperparameters: - Which layers are responsible for representing the style? STYLE_LAYERS- How many iterations do you want to run the algorithm? num_iterations- What is the relative weighting between content and style? alpha/beta 6 - ConclusionGreat job on completing this assignment! You are now able to use Neural Style Transfer to generate artistic images. This is also your first time building a model in which the optimization algorithm updates the pixel values rather than the neural network's parameters. Deep learning has many different types of models and this is only one of them! What you should remember:- Neural Style Transfer is an algorithm that given a content image C and a style image S can generate an artistic image- It uses representations (hidden layer activations) based on a pretrained ConvNet. - The content cost function is computed using one hidden layer's activations.- The style cost function for one layer is computed using the Gram matrix of that layer's activations. The overall style cost function is obtained using several hidden layers.- Optimizing the total cost function results in synthesizing new images. This was the final programming exercise of this course. Congratulations--you've finished all the programming exercises of this course on Convolutional Networks! We hope to also see you in Course 5, on Sequence models! References:The Neural Style Transfer algorithm was due to Gatys et al. (2015). Harish Narayanan and Github user "log0" also have highly readable write-ups from which we drew inspiration. 
The pre-trained network used in this implementation is a VGG network, which is due to Simonyan and Zisserman (2015). Pre-trained weights were from the work of the MatConvNet team. - Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, (2015). A Neural Algorithm of Artistic Style (https://arxiv.org/abs/1508.06576) - Harish Narayanan, Convolutional neural networks for artistic style transfer. https://harishnarayanan.org/writing/artistic-style-transfer/- Log0, TensorFlow Implementation of "A Neural Algorithm of Artistic Style". http://www.chioka.in/tensorflow-implementation-neural-algorithm-of-artistic-style- Karen Simonyan and Andrew Zisserman (2015). Very deep convolutional networks for large-scale image recognition (https://arxiv.org/pdf/1409.1556.pdf)- MatConvNet. http://www.vlfeat.org/matconvnet/pretrained/
###Code
# !tar chzf notebook_ex2.tgz --exclude notebook_ex2.tgz *
###Output
_____no_output_____ |
notebooks/crsp-example-dividends.ipynb | ###Markdown
Abnormal returns around dividends. Prepared by [Vincent Grégoire](http://www.vincentgregoire.com), Department of Finance, The University of Melbourne. This is sample code to illustrate how to load and analyse CRSP data in Python. This notebook was created as supplemental material to a Python for financial research bootcamp for finance honours and PhD students at the University of Melbourne in March of 2017. Last update: March 24, 2017. **Contact**: Latest version:
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from datetime import datetime
###Output
_____no_output_____
###Markdown
I have downloaded from WRDS, in .csv format, 10 years' worth of stock data (daily), plus dividend events from the events file.
###Code
# Let's first look at dividends
dividends_df = pd.read_csv('data/dividends.csv.gz')
# I prefer to use lower case column names
dividends_df.columns = [x.lower() for x in dividends_df]
dividends_df.head(5)
###Output
_____no_output_____
###Markdown
Looking at the data, we see there are some missing entries, which we want to clean. Also, CRSP stores dates in a YYYYMMDD numerical format that pandas detects as numbers, so we want to convert those as well.
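As a side note (not what this notebook does below), the same YYYYMMDD integers can usually be parsed in a single vectorized call; the helper defined in the next cell has the advantage of passing missing values through untouched. A minimal sketch, with made-up dates:

```python
import pandas as pd

# Illustrative YYYYMMDD integers, not taken from the CRSP files
raw = pd.Series([20070131, 20070430, 20071031])

# Converting to string first avoids any ambiguity about integer input
parsed = pd.to_datetime(raw.astype(str), format='%Y%m%d')
print(parsed)
```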
###Code
# First drop those missing values
# Find rows with any misssing value
missing = dividends_df.isnull().any(axis=1)
# Drop them from or data
if sum(missing) > 0:
dividends_df = dividends_df[~missing].copy()
dividends_df.head()
# We first need to define a function to convert the dates.
def compute_date_crsp(int_date):
if np.isnan(int_date):
return int_date
int_date = int(int_date)
year = int_date//10000
month = int_date//100 - year*100
day = int_date - month*100 - year*10000
return datetime(year, month, day)
dividends_df['dclrdt'] = dividends_df['dclrdt'].apply(lambda x: compute_date_crsp(x))
dividends_df['exdt'] = dividends_df['exdt'].apply(lambda x: compute_date_crsp(x))
dividends_df['rcrddt'] = dividends_df['rcrddt'].apply(lambda x: compute_date_crsp(x))
dividends_df['paydt'] = dividends_df['paydt'].apply(lambda x: compute_date_crsp(x))
dividends_df.head()
###Output
_____no_output_____
###Markdown
We usually care about the ex-dividend date. Let's see what the seasonal distribution looks like. We can plot the number of announcements per month of the year. We would expect a quarterly and an annual pattern.
###Code
dividends_df.groupby(dividends_df['exdt'].dt.month)['divamt'].count().plot(kind='bar')
plt.xlabel('Month')
plt.ylabel('Number of dividends')
plt.title('Seasonal distribution of ex-dividend dates')
dividends_df.describe()
# We have a few $0.00 dividends, so we'll throw those out.
dividends_df = dividends_df[dividends_df.divamt > 0.0].copy()
dividends_df.describe()
# Let's now look at our stock returns. Since it's a large dataset,
# we begin by loading a small subsample.
dsf_df = pd.read_csv('data/crsp_dsf.csv.gz',
na_values=['B', 'C'], nrows=100)
dsf_df.head()
# Note that CRSP data sometimes has letter codes for special situations.
# We discard them here ('A', 'B', 'C', 'S', 'T'),
# but you might want to have a look at the documentation before you do in your
# projects, they are there for a reason.
# We only want a subset of the columns, specifying them will make loading faster.
cols = [u'PERMNO', u'date', u'SHRCD', u'EXCHCD', u'DLRETX', u'DLRET',
u'PRC', u'VOL', u'RET', u'SHROUT', u'RETX', u'vwretd']
dsf_df = pd.read_csv('data/crsp_dsf.csv.gz',
usecols=cols,
na_values=['A', 'B', 'C', 'S', 'T'])
dsf_df.columns = [x.lower() for x in dsf_df]
len(dsf_df)
dsf_df.head()
# CRSP does a few other quirky things, such as having negative prices
# to indicate when the closing price was not available.
dsf_df['prc'] = np.abs(dsf_df['prc'])
# We also fill in delisting returns for missing returns when available.
sel = dsf_df.ret.isnull()
dsf_df.loc[sel, 'ret'] = dsf_df.loc[sel, 'dlret']
sel = dsf_df.retx.isnull()
dsf_df.loc[sel, 'retx'] = dsf_df.loc[sel, 'dlretx']
# And drop the returns that are still missing
sel = dsf_df[['ret', 'retx']].isnull().any(axis=1)
dsf_df = dsf_df[~sel].copy()
# It is typical to only keep share codes 10 and 11 (common shares), and exchange 1, 2 and 3
# (NYSE, Nasdaq and Amex/NYSE MKT/NYSE American)
dsf_df.shrcd.unique()
dsf_df = dsf_df[dsf_df.shrcd.isin([10, 11])].copy()
dsf_df.exchcd.unique()
dsf_df = dsf_df[dsf_df.exchcd.isin([1, 2, 3])].copy()
print('We are left with ' + str(len(dsf_df)) + ' observations with ' +
str(dsf_df.permno.nunique()) + ' unique stocks.')
###Output
We are left with 10025682 observations with 6703 unique stocks.
###Markdown
We still need to parse dates. It's better to wait until we have filtered down the sample so that we have fewer dates to parse. We could parse the dates with `pd.to_datetime()`, but given the number of observations it will take a while (I'm not patient). I have developed a shortcut that works well with this type of data, where we have many observations of the same date (one for each stock). The trick is to first extract the list of unique dates, parse this short list, and then swap (using `map`) in the larger dataset.
###Code
def parse_simple_date(d):
return datetime.strptime(d, '%m/%d/%Y')
def fast_date_parse(df, col, date_parser=parse_simple_date):
dt = pd.DataFrame(df[col].unique())
dt.columns = [col + '_tmp']
dt[col] = dt[col + '_tmp'].apply(date_parser)
date_dict = dt.set_index(col + '_tmp').to_dict()
df[col] = df[col].map(date_dict[col])
return df
dsf_df = fast_date_parse(dsf_df, 'date')
###Output
_____no_output_____
###Markdown
Now we have stock returns ready to go, and a file with dividend events. We want to match the two datasets, and get a window of 10 trading days around the ex-dividend date. If we were doing this in SAS, we could put date ranges in our matching condition, but pandas doesn't allow that. What we can do is create our date range first, then match on specific dates. Pandas supports business days (Mon-Fri) out of the box for creating date ranges, but we want to be more specific and use actual trading days, taking into account holidays and such. Pandas can do this very neatly (a small sketch follows below for reference), but in this case we don't need it: all the dates are in our `dsf` dataset, we just need to extract them.
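Here is the small sketch referred to above, in case you need calendar-aware date ranges in another project. It is only an approximation: the US federal holiday calendar is not identical to the NYSE holiday schedule, and none of this is used in the remainder of the notebook.

```python
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessDay

# Plain business days (Mon-Fri), no holiday handling
bdays = pd.bdate_range('2007-12-20', '2008-01-10')

# Business days that also skip US federal holidays
us_bd = CustomBusinessDay(calendar=USFederalHolidayCalendar())
tdays = pd.date_range('2007-12-20', '2008-01-10', freq=us_bd)

# The second range drops Christmas Day and New Year's Day
print(len(bdays), len(tdays))
```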
###Code
dates = pd.DataFrame(dsf_df.date.unique()).sort_values(0)
dates.columns = ['date']
dates.head()
len(dates)
# Merging, we are creating a very large dataset
dates['tmp'] = 1
dates_cross = pd.merge(dates, dates, on=['tmp'],
suffixes=('', '_obs'))
dates_cross['dateID'] = dates_cross.groupby('date')['tmp'].cumsum()
# Set the ID of the date to 0. We first need to find the ID of the date, then
# match and subtract.
dates_0 = pd.merge(dates_cross,
dates_cross.loc[dates_cross.date == dates_cross.date_obs,
['date', 'dateID']],
on=['date'],
suffixes=('', '_0'))
dates_0['event_t'] = dates_0['dateID'] - dates_0['dateID_0']
dates_0.head()
# Say we want windows of +/- 10 days, that means we want dates with event_t between -10 and 10
dates_win = dates_0.loc[(dates_0.event_t >= -10) & (dates_0.event_t <= 10),
['date', 'date_obs', 'event_t']].copy()
# Now we can merge this with our event file. So for each event, we'll have a list of the daily observations
# we want to merge.
dividends_days = pd.merge(dividends_df, dates_win,
left_on='exdt',
right_on='date')
dividends_days.head()
# Now let's merge on stock returns.
merged_df = pd.merge(dividends_days, dsf_df,
left_on=['permno', 'date_obs'],
right_on=['permno', 'date'],
suffixes=('', '_dsf'))
# Count the number of observations for each event. We drop those that are not complete
# to make things easier.
counts = merged_df.groupby(['permno', 'date'])[['date_obs']].count()
sum(counts.date_obs < 21)
counts[counts.date_obs == 21].head()
merged_df = pd.merge(merged_df,
counts,
left_on=['permno', 'date'],
right_index=True)
# As usual, we care about cumulative returns,
# and abnormal returns (market-adjusted).
# A crude measure is the CAR (cumulated abnormal returns)
# which is the cumulative sum of daily returns
# minus daily market returns.
merged_df['aret'] = merged_df['ret'] - merged_df['vwretd']
merged_df['aretx'] = merged_df['retx'] - merged_df['vwretd']
merged_df['caret'] = merged_df.groupby(['permno', 'date'])['aret'].cumsum()
merged_df['caretx'] = merged_df.groupby(['permno', 'date'])['aretx'].cumsum()
merged_df.groupby('event_t')['caret'].mean().plot()
###Output
_____no_output_____
###Markdown
That's a nice plot, but it can be made better with labels, confidence intervals, and more. To add confidence intervals, we need to compute the standard error around the mean, which is: $se = \frac{\hat{\sigma}}{\sqrt{n}}$
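With the standard error in hand, the shaded 95% confidence band plotted below is just the usual normal-approximation interval, $\bar{x} \pm 1.96 \, se$, which is where the `+ 1.96 * se` and `- 1.96 * se` lines in the code come from.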
###Code
# Initialise the figure
fig, axes = plt.subplots(1, 1, figsize=(8, 5), dpi=80)
# Compute the mean, multiply by 100 to have in %
mean = merged_df.groupby('event_t')['caret'].mean() * 100
# Compute the standard error around the mean
se = (merged_df.groupby('event_t')['caret'].std()
/ np.sqrt(merged_df.groupby('event_t')['caret'].count())) * 100
# We'll use 95\% confidence intervals, so +/- 1.96 * se
mean_p = mean + 1.96 * se
mean_m = mean - 1.96 * se
mean.plot(ax=axes, linestyle='-', linewidth=2, marker='s',
color='blue', alpha=1.0)
mean_m.plot(ax=axes, linestyle=':', linewidth=1, marker=None,
color='blue', alpha=0.9)
mean_p.plot(ax=axes, linestyle=':', linewidth=1, marker=None,
color='blue', alpha=0.9)
axes.fill_between(mean_m.index, mean_m, mean_p, facecolor='blue', alpha=0.1)
# Add vertical and horizontla lines at t= 0
axes.axvline(x=0, color='k', linestyle='--', linewidth=0.5)
axes.axhline(y=0, color='k', linestyle='--', linewidth=0.5)
axes.set_ylabel('CAR (%)')
axes.set_xlabel('Event day')
axes.set_title('Mean CAR around ex-dividend date')
#fig.tight_layout()
fig.savefig('MeanCAR.pdf', dpi=1000)
###Output
_____no_output_____
###Markdown
This is nice, but it would be great to reuse that code to plot all the variables of interest. Why not make it a function?
###Code
def plot_exdate(varname, xlabel, ylabel, title, fn, show_hline=True):
# Initialise the figure
fig, axes = plt.subplots(1, 1, figsize=(8, 5), dpi=80)
# Compute the mean, multiply by 100 to have in %
mean = merged_df.groupby('event_t')[varname].mean() * 100
# Compute the standard error around the mean
se = (merged_df.groupby('event_t')[varname].std()
/ np.sqrt(merged_df.groupby('event_t')[varname].count())) * 100
# We'll use 95\% confidence intervals, so +/- 1.96 * se
mean_p = mean + 1.96 * se
mean_m = mean - 1.96 * se
mean.plot(ax=axes, linestyle='-', linewidth=2, marker='s',
color='blue', alpha=1.0)
mean_m.plot(ax=axes, linestyle=':', linewidth=1, marker=None,
color='blue', alpha=0.9)
mean_p.plot(ax=axes, linestyle=':', linewidth=1, marker=None,
color='blue', alpha=0.9)
axes.fill_between(mean_m.index, mean_m, mean_p, facecolor='blue', alpha=0.1)
# Add vertical and horizontla lines at t= 0
axes.axvline(x=0, color='k', linestyle='--', linewidth=0.5)
if show_hline:
axes.axhline(y=0, color='k', linestyle='--', linewidth=0.5)
axes.set_ylabel(ylabel)
axes.set_xlabel(xlabel)
axes.set_title(title)
#fig.tight_layout()
fig.savefig(fn, dpi=1000)
plot_exdate('caret', 'Event day', 'CAR (%)', 'Mean CAR around ex-dividend date', 'MeanCAR.pdf')
plot_exdate('vol', 'Event day', 'Volume', 'Mean volume around ex-dividend date',
'MeanVolume.pdf', show_hline=False)
plot_exdate('caretx', 'Event day', 'CAR excl. div. (%)', 'Mean CAR, excl. div. around ex-dividend date', 'MeanCARx.pdf')
###Output
_____no_output_____ |
data science tools/Matplotlib-202001.ipynb | ###Markdown
0.1 Introduction to Data Visualization
###Code
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
First Look: Line Chart Creating the data points
###Code
import numpy as np
x = np.linspace(-np.pi, np.pi, 256, endpoint=True)
S, C = np.sin(x), np.cos(x)
# Display x's shape and first 10 elements
print('x')
print(x.shape)
print(x[:10])
print()
# Display S's shape and first 10 elements
print('S')
print(S.shape)
print(S[:10])
print()
# Display C's shape and first 10 elements
print('C')
print(C.shape)
print(C[:10])
###Output
x
(256,)
[-3.14159265 -3.11695271 -3.09231277 -3.06767283 -3.04303288 -3.01839294
-2.993753 -2.96911306 -2.94447311 -2.91983317]
S
(256,)
[-1.22464680e-16 -2.46374492e-02 -4.92599411e-02 -7.38525275e-02
-9.84002783e-02 -1.22888291e-01 -1.47301698e-01 -1.71625679e-01
-1.95845467e-01 -2.19946358e-01]
C
(256,)
[-1. -0.99969645 -0.99878599 -0.99726917 -0.99514692 -0.99242051
-0.98909161 -0.98516223 -0.98063477 -0.97551197]
###Markdown
Our first plot
###Code
# Start your figure
plt.figure()
# Plot sine curve with a solid - line
plt.plot(x, S, '-')
# Plot cosine curve with a dashed -- line
plt.plot(x, C, '--')
# Display plot and show result on screen.
plt.show()
###Output
_____no_output_____
###Markdown
All our plots will begin by first initiating a figure (plt.figure()), and end with displaying the plot (plt.show()). In between, we'll call the functions which decide what gets plotted. In this case, we used the plt.plot function to plot lines. A more detailed look at plotting
###Code
## Create a new figure of size 10x6 inches, using 80 dots per inch
fig = plt.figure(figsize=(10,6), dpi=80)
## Plot cosine using blue color with a dashed line of width 2.5 (pixels)
plt.plot(x, C, color="blue", linewidth=2.5, linestyle="--", label="cosine")
## Plot sine using green color with a continuous line of width 2.5 (pixels)
plt.plot(x, S, color="green", linewidth=2.5, linestyle="-", label="sine")
## Set axis limits and ticks (markers on axis)
# x goes from -4.0 to 4.0
plt.xlim(-4.0, 4.0)
# 9 ticks, equally spaced
plt.xticks(np.linspace(-4, 4, 9, endpoint=True))
# Set y limits from -1.0 to 1.0
plt.ylim(-1.0, 1.0)
# 5 ticks, equally spaced
plt.yticks(np.linspace(-1, 1, 5, endpoint=True))
## Add legends, title and axis names
plt.legend(loc='upper left', frameon=False)
plt.title("Graph of wave movement with Sine and Cosine functions")
plt.xlabel("Time, t")
plt.ylabel("Position, x")
## Turn on grid
plt.grid(color='b', linestyle='-', linewidth=0.1)
## Moving spines to center in the middle
ax = plt.gca()
# Move left y-axis and bottom x-axis to centre, passing through (0,0)
ax.spines['left'].set_position('center')
ax.spines['bottom'].set_position('center')
# Eliminate upper and right axes
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
# Show ticks in the left and lower axes only
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
plt.show()
###Output
_____no_output_____
###Markdown
The settings use a set of default values unless specified. Saving a plot
###Code
fig.savefig('my_figure.png')
###Output
_____no_output_____
###Markdown
There are multiple formats we can save this image in.
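For instance (a minimal sketch, reusing the `fig` object created above), switching formats is just a matter of the file extension, and `savefig` accepts a few useful keyword arguments:

```python
# Vector formats (PDF, SVG) scale without loss of quality;
# bbox_inches='tight' trims the surrounding whitespace.
fig.savefig('my_figure.pdf', bbox_inches='tight')
fig.savefig('my_figure.svg', bbox_inches='tight')
```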
###Code
import pprint # pretty printer
pprint.pprint(fig.canvas.get_supported_filetypes())
###Output
{'eps': 'Encapsulated Postscript',
'jpeg': 'Joint Photographic Experts Group',
'jpg': 'Joint Photographic Experts Group',
'pdf': 'Portable Document Format',
'pgf': 'PGF code for LaTeX',
'png': 'Portable Network Graphics',
'ps': 'Postscript',
'raw': 'Raw RGBA bitmap',
'rgba': 'Raw RGBA bitmap',
'svg': 'Scalable Vector Graphics',
'svgz': 'Scalable Vector Graphics',
'tif': 'Tagged Image File Format',
'tiff': 'Tagged Image File Format'}
###Markdown
Types of plots Here are all the kinds of plots you can call (more on this below): ‘bar’ or ‘barh’ for bar charts ‘hist’ for histograms ‘box’ for boxplots ‘kde’ or 'density' for density plots ‘area’ for area plots ‘scatter’ for scatter plots ‘hexbin’ for hexagonal bin plots ‘pie’ for pie charts (a brief pandas-based illustration of these kind strings follows, before the Matplotlib bar example) Bar Chart A bar chart is a good choice when you want to show how some quantity varies among some discrete set of items. Let’s create a bar chart from the described set.
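Those kind strings belong to the pandas plotting interface, which wraps Matplotlib. A minimal sketch, assuming pandas is installed and using a made-up DataFrame; the Matplotlib-level bar example follows right after:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Illustrative data only, not from this notebook
df = pd.DataFrame(np.random.rand(10, 3), columns=['A', 'B', 'C'])

# Swap 'bar' for any of the kinds listed above ('hist', 'box', 'area', ...)
df.plot(kind='bar')
plt.show()
```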
###Code
# Setting figure size to 7x5
fig = plt.figure(figsize=(7,5))
# Setting data set
men_means = [20, 35, 30, 35, 27]
men_stds = [2, 3, 4, 1, 2]
# Setting index
ind = np.arange(5)
# Setting argument for width
width = 0.35
# Plotting a horizontal bar graph for men_means against index
# with errorbars equal to standard deviation
error_args = {'ecolor': (0, 0, 0), 'linewidth': 2.0}
plt.barh(ind, men_means, width, xerr=men_stds, error_kw=error_args)
# Y-axis ticks and labels
ax = plt.gca()
ax.set_ylim(-0.5, 4.5)
ax.set_yticks(ind)
ax.set_yticklabels(['A', 'B', 'C', 'D', 'E', ])
plt.show()
###Output
_____no_output_____
###Markdown
In the stacked plot below, we need to separately pass bottom, which is the y-axis position where each bar starts (the position of the bottom of each bar). error_args (defined above) specifies that the error bars are black in color and their line width is 2 pixels.
###Code
# Setting figure size to 7x5
fig = plt.figure(figsize=(7,5))
# Setting data set values
women_means = [25, 32, 34, 20, 25]
women_stds = [3, 5, 2, 3, 3]
# Plotting a stacked vertical bar graph with men's data at the bottom and women's data on top.
p1 = plt.bar(ind, men_means, width, yerr=men_stds, color='b', error_kw=error_args)
p2 = plt.bar(ind, women_means, width, bottom=men_means, yerr=women_stds, color='g', error_kw=error_args)
# Modifying x-axis
ax = plt.gca()
ax.set_xlim(-0.5, 4.5)
ax.set_xticks(ind)
ax.set_xticklabels(['A', 'B', 'C', 'D', 'E', ])
plt.show()
###Output
_____no_output_____
###Markdown
Histogram Histograms are a plot type used to show the frequency of values across a continuous or discrete variable.
###Code
# Generate 3 different arrays
x = np.random.normal(0, 0.8, 1000)
y = np.random.normal(-2, 1, 1000)
z = np.random.normal(3, 2, 1000)
# Set figure size to 9x6
fig = plt.figure(figsize=(9, 6))
# Configure keyword arguments to customize histogram.
# Alpha adjusts translucency while bins define spacing.
# More features available in the documentation.
kwargs = {
'histtype' : 'stepfilled',
'alpha' : 0.9,
'bins' : 40,
}
# Plot all 3 arrays on one graph
plt.hist([x, y, z], **kwargs)
plt.show()
# Generate a 2-D array with 3 columns (one sample per column)
X = 200 + 25*np.random.randn(1000, 3)
# Set figure size to 9x6
fig = plt.figure(figsize=(9, 6))
# Plot a stacked histogram of the 3 columns
n, bins, patches = plt.hist(X, 30, alpha=0.9, stacked=True, linewidth=0.0, rwidth=1.0)
plt.show()
###Output
_____no_output_____
###Markdown
Scatter Plot A scatter plot is the right choice for visualizing the entire dataset and visually looking for clusters or correlations.
###Code
N = 100
# Generate 2 different arrays
x = np.random.rand(N)
y = np.random.rand(N)
fig = plt.figure(figsize=(9, 6))
# Plotting a scatter graph at the given x-y coordinates
plt.scatter(x, y)
plt.show()
N = 100
# Generate 2 different arrays
x = np.random.rand(N)
y = np.random.rand(N)
fig = plt.figure(figsize=(9, 6))
# Assign random colors and variable sizes to the bubbles
colors = np.random.rand(N)
area = np.pi * (20 * np.random.rand(N))**2 # 0 to 20 point radii
# Scatter plot on x-y coordinate with the assigned size and color
plt.scatter(x, y, s=area, c=colors, alpha=0.7)
plt.show()
###Output
_____no_output_____
###Markdown
Box and Whisker Plot A box plot is an easy and effective way to read descriptive statistics. These statistics summarize the distribution of the data by displaying the minimum, first quartile, median, third quartile, and maximum in a single graph.
###Code
np.random.seed(10)
# Generate 4 different arrays and combine them in a list
u = np.random.normal(100, 10, 200)
v = np.random.normal(80, 30, 200)
w = np.random.normal(90, 20, 200)
x = np.random.normal(70, 25, 200)
data_to_plot = [u, v, w, x]
fig = plt.figure(figsize=(9, 6))
## Plot a box plot that shows the mean, variance and limits within each column.
# Add patch_artist=True option to ax.boxplot() to get fill color
bp = plt.boxplot(data_to_plot, patch_artist=True, labels=['A', 'B', 'C', 'D', ])
# change outline color, fill color and linewidth of the boxes
for box in bp['boxes']:
# change outline color
box.set(color='#7570b3', linewidth=2)
# change fill color
box.set(facecolor = '#1b9e77')
# change color and linewidth of the whiskers
for whisker in bp['whiskers']:
whisker.set(color='#7570b3', linewidth=2)
# change color and linewidth of the caps
for cap in bp['caps']:
cap.set(color='#7570b3', linewidth=2)
# change color and linewidth of the medians
for median in bp['medians']:
median.set(color='#b2df8a', linewidth=2)
# change the style of fliers and their fill
for flier in bp['fliers']:
flier.set(marker='o', color='#e7298a', alpha=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
The start and end of the box mark the first-quartile and third-quartile values (i.e. the 25th and 75th percentiles). The line inside the box marks the median value. The ends of the whiskers mark the minimum and the maximum values (excluding the outliers). Any dots above / below the whiskers are the outlier data points. Area Plot Area charts are used to represent cumulative totals using numbers or percentages over time. Since these plots are stacked by default, each column needs to be either all positive or all negative values.
###Code
x = range(1,6)
# Set values for each line (4 lines in this example)
y = [
[1, 4, 6, 8, 9],
[2, 2, 7, 10, 12],
[2, 8, 5, 10, 6],
[1, 5, 2, 5, 2],
]
# Setting figure size to 9x6 with dpi of 80
fig = plt.figure(figsize=(9,6), dpi=80)
# Stacked area plot
plt.stackplot(x, y, labels=['A','B','C','D'], alpha=0.8)
# Set location of legend
plt.legend(loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Pie Chart Pie charts show the percentage or proportion of data. The percentage represented by each category is shown right next to its corresponding slice of the pie. For pie charts in Matplotlib, the slices are ordered and plotted counter-clockwise, as shown:
###Code
# Set keyword arguments
labels = 'Kenya', 'Tanzania', 'Uganda', 'Ruwanda', 'Burundi'
sizes = [35, 30, 20, 10 ,5]
explode = (0, 0.1, 0, 0, 0) # only "explode" the 2nd slice (i.e. 'Tanzania')
# Plot pie chart with the above set arguments
fig = plt.figure(figsize=(9, 6))
plt.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%', shadow=True, startangle=90)
plt.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle.
plt.show()
###Output
_____no_output_____
###Markdown
Above, autopct='%1.1f%%' says to display the percentage with 1 digit of precision, and startangle=90 says that the first slice (Kenya) should start from an angle of 90 degrees (the angle made with the positive x-axis). 0.2 Some examples with real-life data
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
dataset = pd.read_csv('data/diabetes.csv')
for column in ["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI", "DiabetesPedigreeFunction", "Age"]:
bad = (dataset[column] == 0)
dataset.loc[bad, column] = None
dataset.describe()
dataset.info()
# Standardize each column to zero mean and unit variance (z-scores)
normalized = (dataset - dataset.mean()) / dataset.std()
# Convert the standardized Outcome back to a boolean label
normalized["Outcome"] = (normalized["Outcome"] > 0.0)
## bar plot data
diabetic_means = normalized[normalized["Outcome"] == True].mean()[:-1]
diabetic_stds = normalized[normalized["Outcome"] == True].std()[:-1]
non_diabetic_means = normalized[normalized["Outcome"] == False].mean()[:-1]
non_diabetic_stds = normalized[normalized["Outcome"] == False].std()[:-1]
## bar plot
fig = plt.figure(figsize=(9, 6))
indices = np.arange(len(diabetic_means))
width = 0.35
error_args = {'ecolor': (0, 0, 0), 'linewidth': 2.0}
p1 = plt.bar(indices, diabetic_means + 2.0, width, yerr=diabetic_stds, color='r', error_kw=error_args)
p2 = plt.bar(indices-width, non_diabetic_means + 2.0, width, yerr=non_diabetic_stds, color='g', error_kw=error_args)
ax = plt.gca()
ax.set_xlim(-1.0, 8)
ax.set_xticks(indices)
ax.set_xticklabels(diabetic_means.index)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
plt.tight_layout()
# plt.show()
fig.savefig('barplot.png')
## Histogram of each feature, etc.
features = dataset.columns[:-1]
def histogram(feature):
xx = dataset.loc[(dataset["Outcome"] == 0) & np.logical_not(dataset[feature].isnull()), feature]
yy = dataset.loc[(dataset["Outcome"] == 1) & np.logical_not(dataset[feature].isnull()), feature]
fig = plt.figure(figsize=(9, 6))
kwargs = {
'histtype' : 'stepfilled',
'alpha' : 0.5,
'density' : True,
'bins' : 40,
'color' : ['g', 'r', ],
}
plt.title(feature)
plt.hist([xx, yy, ], **kwargs)
plt.show()
#fig.savefig('histogram_%s.png' % feature)
for feature in features:
histogram(feature)
## Scatterplot of correlated pairs of variables
pairs = [
('Pregnancies', 'Age'),
('Insulin', 'Glucose'),
('BMI', 'SkinThickness'),
]
def scatterplot(v1, v2):
good = np.logical_not(dataset[v1].isnull()) & np.logical_not(dataset[v2].isnull())
xx = dataset.loc[good, v1]
yy = dataset.loc[good, v2]
cc = np.array(['g', 'r'])[dataset.loc[good, "Outcome"]]
fig = plt.figure(figsize=(9, 6))
plt.scatter(xx, yy, c=cc)
plt.xlabel(v1)
plt.ylabel(v2)
plt.show()
#fig.savefig('scatterplot_%s_%s.png' % (v1, v2))
for v1, v2 in pairs:
scatterplot(v1, v2)
## Correlation heatmap
def heatmap(data, row_labels, col_labels):
# Adapted from https://matplotlib.org/examples/images_contours_and_fields/interpolation_methods.html
"""
Create a heatmap from a numpy array and two lists of labels.
Arguments:
data : A 2D numpy array of shape (N,M)
row_labels : A list or array of length N with the labels for the rows
col_labels : A list or array of length M with the labels for the columns
Optional arguments:
ax : A matplotlib.axes.Axes instance to which the heatmap
is plotted. If not provided, use current axes or
create a new one.
cbar_kw : A dictionary with arguments to
:meth:`matplotlib.Figure.colorbar`.
cbarlabel : The label for the colorbar
All other arguments are directly passed on to the imshow call.
"""
fig = plt.figure(figsize=(9, 9))
ax = plt.gca()
# Plot the heatmap
im = ax.imshow(data, cmap="Wistia", interpolation="nearest")
# Create colorbar
ax.figure.colorbar(im, ax=ax, fraction=0.043, pad=0.04)
# We want to show all ticks...
ax.set_xticks(np.arange(data.shape[1]))
ax.set_yticks(np.arange(data.shape[0]))
ax.yaxis.tick_left()
ax.xaxis.tick_bottom()
# ... and label them with the respective list entries.
ax.set_xticklabels(col_labels)
ax.set_yticklabels(row_labels)
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
plt.tight_layout()
# Turn spines off and create white grid.
for edge, spine in ax.spines.items():
spine.set_visible(False)
ax.set_xticks(np.arange(data.shape[1]+1)-.5, minor=True)
ax.set_yticks(np.arange(data.shape[0]+1)-.5, minor=True)
ax.grid(which="minor", color="w", linestyle='-', linewidth=2)
ax.tick_params(which="minor", bottom=False, left=False, right=False, top=False)
plt.show()
#fig.savefig('heatmap.png')
heatmap(dataset.corr(), dataset.columns, dataset.columns)
###Output
_____no_output_____
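###Markdown
One optional refinement of the correlation heatmap above: it is often easier to read when the correlation values are written inside the cells. Here is a minimal sketch of that idea -- it assumes the `dataset` DataFrame from the previous cell is still in memory, and the number format and font size are arbitrary choices of ours.
###Code
import numpy as np
import matplotlib.pyplot as plt
corr = dataset.corr()
fig = plt.figure(figsize=(9, 9))
ax = plt.gca()
im = ax.imshow(corr.values, cmap="Wistia", interpolation="nearest")
ax.figure.colorbar(im, ax=ax, fraction=0.043, pad=0.04)
ax.set_xticks(np.arange(corr.shape[1]))
ax.set_yticks(np.arange(corr.shape[0]))
ax.set_xticklabels(corr.columns)
ax.set_yticklabels(corr.columns)
plt.setp(ax.get_xticklabels(), rotation=45, ha="right", rotation_mode="anchor")
# Write each correlation value in the middle of its cell
for i in range(corr.shape[0]):
    for j in range(corr.shape[1]):
        ax.text(j, i, "%.2f" % corr.values[i, j], ha="center", va="center", fontsize=7)
plt.tight_layout()
plt.show()
###Output
_____no_output_____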
###Markdown
0.3 Histogram
###Code
# height data
heights = [
158.9, 175.4, 167.2, 162.3, 152.6, 159.2, 176.9, 155.8, 163.6, 181.9,
167.4, 161.5, 180.9, 171.6, 165.6, 160.1, 172.7, 155.3, 171.2, 162.7,
175.7, 182.1, 188.4, 157.0, 189.1, 168.7, 178.9, 162.3, 186.9, 178.8,
166.0, 165.9, 169.3, 155.4, 178.9, 181.4, 162.2, 176.5, 165.9, 150.6,
172.7, 174.9, 150.9, 152.2, 172.6, 170.7, 171.5, 169.4, 155.9, 162.7,
]
# plot histogram
# 'ec' stands for edge color. we add black borders to the bars.
plt.hist(heights, bins=8, range=(150,190), ec='black')
# add title and axis labels
plt.title("Histogram – Height of Students")
plt.ylabel("Frequency")
plt.xlabel("Height Bins")
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
Notice how we choose range and bins such that all the intervals have integer values — in this case, 150 - 155, 155 - 160, and so on. Note: We should choose the number of bins wisely for histograms. Too few bins do not reveal any information, since all the data points will belong to the same bar. On the other hand, too many bins will make a separate bar for each data point. As a rule of thumb, for realistic data you usually want around 20 - 50 bins. Histograms for various types of frequency distributions Skew Symmetric In a symmetric distribution, data points are symmetrically distributed around the mean. In these cases, the mean is approximately equal to the median. An example of a symmetric distribution is the height of individuals in a locality. A lot of the data observed in nature follows this distribution. The histogram representing a symmetric distribution has the left and right hand sides roughly equally balanced around the centre. You can observe that the left and right tails in the graph below have about the same length. Right Skewed (Positively skewed) Variables representing income or wealth of individuals, etc., are right (positively) skewed. This is because the majority of individuals have either low income or medium income, but a small minority of individuals are very rich or extremely rich. The histogram of a right skewed distribution has a long tail to the right. In these cases, the mean is typically greater than the median. Left Skewed (Negatively skewed) Similarly, if the histogram has a long tail to the left, we can assume that the data is from a negatively skewed distribution. In a left skewed distribution, most data points will be greater than the mean. Observe the shape of the histogram with a long left tail. In this case, the mean is typically less than the median. Kurtosis Apart from the left / right skew of the data distribution, another important thing to notice in histograms is the "tailedness" of a distribution, or how much of the data is close to the center vs in the tails. This property of the distribution is known as its kurtosis. Mesokurtic (Normal tailed) To measure how heavy or light-tailed a distribution is, we take the normal distribution (also known as the Gaussian distribution) as the point of comparison. Below is a histogram plot of data points from a normal distribution. Leptokurtic (Heavy tailed) A distribution whose tails are heavier than the normal distribution is called leptokurtic. Often, this kind of distribution will have heavy tails as well as a taller center, but fewer data points at a moderate distance from the center. Platykurtic (Light tailed) A distribution whose tails are thinner than the normal distribution is called platykurtic. In some cases (as in the plot below), the tails might be completely non-existent. Histogram for multiple distributions Let's first create the data which we will be plotting:
###Code
# random seed so that the generated values are always same
np.random.seed(2)
# 1000 values for USA incomes with mean 100 and standard deviation 30
income_USA = 30 * np.random.randn(1000) + 100
# 1000 values for UK incomes with mean 85 and standard deviation 20
income_UK = 20 * np.random.randn(1000) + 85
###Output
_____no_output_____
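###Markdown
Before overlaying the two income samples, here is a small self-contained sketch illustrating the skewness ideas discussed above: a roughly symmetric sample, a right-skewed sample and a left-skewed sample plotted side by side. The particular distributions used (normal and exponential) are our own choice, purely for illustration.
###Code
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
symmetric = np.random.normal(loc=0, scale=1, size=1000)      # mean ~ median
right_skewed = np.random.exponential(scale=1.0, size=1000)   # long right tail, mean > median
left_skewed = -np.random.exponential(scale=1.0, size=1000)   # long left tail, mean < median
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, data, name in zip(axes, [symmetric, right_skewed, left_skewed],
                          ['Symmetric', 'Right skewed', 'Left skewed']):
    ax.hist(data, bins=30, ec='black')
    ax.set_title('%s (mean=%.2f, median=%.2f)' % (name, data.mean(), np.median(data)))
plt.tight_layout()
plt.show()
###Output
_____no_output_____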
###Markdown
Superimposed histogram In a superimposed histogram, we display the histograms layered on top of each other.
###Code
# Superimposed histogram
plt.hist([income_USA, income_UK], label=['USA', 'UK'], histtype='stepfilled',
alpha=0.6, bins=30, ec='black')
# 'alpha' sets the 'opacity'.
# if alpha = 1.0 (default) we won't be able to see the second histogram behind.
# add title and axis labels
plt.title("Histogram – Height of Incomes")
plt.ylabel("Frequency")
plt.xlabel("Income Bins")
# add legend
plt.legend(loc='upper right')
# display plot
plt.show()
###Output
_____no_output_____
###Markdown
Notice some things about the code above: To plot multiple data together, we pass all of them to the hist() function. We use the label argument to label the data, and the legend() function to display the labels. To create a superimposed histogram, we set the parameter histtype's value to be 'stepfilled'. There are other histogram types as well, which we will see in the next section. Lastly, since the histograms are layered on top of each other, we used the alpha parameter to make the histograms a little transparent so that we can also see 'behind' it. From the histogram, we can see that on average income levels in USA are higher than in UK (note that income is on the x-axis, so higher income means the histogram is shifted to the right).We can also see that incomes of employees from USA have a higher spread (since the histogram is wider). Stacked Histogram Apart from superimposing the plots, there are other ways to plot the distributions in the same graph. One option is to stack the multiple sets of data over one another. For that, we assign the parameter histtype the value 'barstacked'.
###Code
# Stacked histogram
plt.hist([income_USA,income_UK], label=['USA','UK'], histtype='barstacked', bins=30, ec='black')
plt.title("Histogram – Height of Incomes")
plt.ylabel("Frequency")
plt.xlabel("Income Bins")
plt.legend(loc='upper right')
plt.show()
###Output
_____no_output_____
###Markdown
Stacking makes it easier to see the combined frequency, while making it harder to see the exact frequency within each category. Side by side histogram
###Code
# Side by side histogram
plt.hist([income_USA,income_UK], label=['USA','UK'], histtype='bar', bins=20, ec='black')
plt.title("Histogram – Height of Incomes")
plt.ylabel("Frequency")
plt.xlabel("Income Bins")
plt.legend(loc='upper right')
plt.show()
###Output
_____no_output_____
###Markdown
We can visually check the frequencies of income levels in USA and UK, and determine where one dominates the other. From the above plot, we can see that at very low and very high income levels, USA has a higher frequency, whereas in the middle regions, UK has a higher frequency. 0.4 Box Plots Box plot (also called ‘box and whisker plot’) visualizes the distribution of numerical data and helps in identifying outliers present in the data.  Notice the following components in the above box plot: 1st Quartile (25th percentile) Median (2nd Quartile or 50th percentile) 3rd Quartile (75th percentile) Interquartile Range (IQR – difference between 3rd and 1st Quartile) Whiskers — marks the lowest data point which lies within 1.5 IQR of the 1st quartile, and the highest datum which lies within 1.5 IQR of the 3rd quartile Outliers — any data points beyond 1.5 IQR of the 1st or 3rd quartile, i.e. values which are greater than 3rd Quartile + 1.5 * IQR or less than 1st Quartile – 1.5 * IQR. Outliers Whenever we do any data analysis, outliers should be investigated carefully. They may contain valuable information about the problem under investigation, or inform us about errors in recording the data. Moreover, even a few outliers can distort many commonly used machine learning algorithms. Creating a boxplot in Python
###Code
import seaborn as sns
# generate data - 1000 values with mean 165 and std dev 10
np.random.seed(0)
height = np.round((10 * np.random.randn(1000) + 165), 0)
# create box plot
sns.boxplot(height)
plt.show()
###Output
c:\program files\python36\lib\site-packages\seaborn\_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
###Markdown
The above boxplot is for normally distributed data with mean 165 and standard deviation 10. For data that follows a symmetric distribution (such as the normally distributed data above), the box plot will be symmetrical as well. In other words: The median will be roughly in the middle of 1st and 3rd quartiles. The whisker lengths will be roughly equal. Boxplot for Skewed distribution
###Code
var = np.random.chisquare(1, 100) # generate 100 values
sns.boxplot(var) # plot
plt.show()
###Output
c:\program files\python36\lib\site-packages\seaborn\_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
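###Markdown
To tie the plot back to the box-plot components listed earlier, here is a minimal sketch that computes the quartiles, IQR, whisker limits and outlier count for the skewed `var` sample from the previous cell by hand. It uses numpy only; the variable names are our own.
###Code
import numpy as np
q1, q3 = np.percentile(var, [25, 75])
iqr = q3 - q1
lower_limit = q1 - 1.5 * iqr
upper_limit = q3 + 1.5 * iqr
outliers = var[(var < lower_limit) | (var > upper_limit)]
print("Q1 = %.3f, Q3 = %.3f, IQR = %.3f" % (q1, q3, iqr))
print("Whisker limits: [%.3f, %.3f]" % (lower_limit, upper_limit))
print("Number of outliers: %d" % len(outliers))
###Output
_____no_output_____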
###Markdown
Multiple box plots
###Code
# generate data
np.random.seed(2)
# Group 1 has mean 165 and stddev 10
df1 = pd.DataFrame()
df1['Height'] = np.round((10 * np.random.randn(500) + 165), 0)
df1['Group'] = 'Group1'
# Group 2 has mean 170 and stddev 12
df2 = pd.DataFrame()
df2['Height'] = np.round((12 * np.random.randn(500) + 170), 0)
df2['Group'] = 'Group2'
df = pd.concat([df1, df2], ignore_index=True)
# create plot
sns.boxplot(x='Group', y='Height', data=df)
plt.show()
###Output
_____no_output_____
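###Markdown
As a numeric companion to the box plot above, here is a quick per-group summary of the `Height` column (simply reusing the `df` built in the previous cell).
###Code
# Per-group summary statistics (count, mean, std, quartiles, min, max)
df.groupby('Group')['Height'].describe()
###Output
_____no_output_____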
###Markdown
From the box plot, we can infer that on average people from 'Group2' are taller than people from 'Group1'. Handling multiple categories simultaneously
###Code
# randomly assign Basketball_Player status
df['Basketball_Player'] = np.random.randint(0, 2, df.shape[0])
# use the 'hue' parameter to add colors based on Basketball_Player status
sns.boxplot(x='Group', y='Height', hue='Basketball_Player', data=df)
plt.show()
###Output
_____no_output_____
###Markdown
Since we assigned the 'Basketball_Player' status randomly, we can see that 'Height' does not have much influence on 'Basketball_Player' in this data set. Other parameters in seaborn.boxplot So far, we've seen the x, y, data and hue parameters supported by the seaborn.boxplot() function. In this section, we'll see a couple more important parameters supported by the function. whis - Usually, we consider outliers to be the data points whose distance from the 1st and 3rd Quartiles is greater than 1.5 x IQR. We can change this multiplier using the parameter whis. orient - takes values 'v' or 'h', which sets the orientation of the box plot to vertical or horizontal, respectively.
###Code
np.random.seed(0)
height = np.round((10 * np.random.randn(1000) + 165), 0)
sns.boxplot(y=height, orient='v', whis=0.5)
plt.show()
###Output
_____no_output_____ |
Part 4-Extending what Convolutional Neural Nets can do/CNN_with_Fashion_MNIST_Notebook_Solution.ipynb | ###Markdown
Improving Computer Vision Accuracy using Convolutions In the previous lessons you saw how to do fashion recognition using a Deep Neural Network (DNN) containing three layers -- the input layer (in the shape of the data), the output layer (in the shape of the desired output) and a hidden layer. You experimented with the impact of different sizes of hidden layers, numbers of training epochs, etc. on the final accuracy. For convenience, here's the entire code again. Run it and take note of the test accuracy that is printed out at the end.
###Code
import tensorflow as tf
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images / 255.0
test_images=test_images / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels)
###Output
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 5s 89us/sample - loss: 0.4920 - accuracy: 0.8258
Epoch 2/5
60000/60000 [==============================] - 5s 83us/sample - loss: 0.3703 - accuracy: 0.8650
Epoch 3/5
60000/60000 [==============================] - 5s 80us/sample - loss: 0.3349 - accuracy: 0.8764
Epoch 4/5
60000/60000 [==============================] - 5s 80us/sample - loss: 0.3119 - accuracy: 0.8850
Epoch 5/5
60000/60000 [==============================] - 5s 82us/sample - loss: 0.2921 - accuracy: 0.8926
10000/10000 [==============================] - 1s 60us/sample - loss: 0.3503 - accuracy: 0.8713
###Markdown
Your accuracy is probably about 89% on training and 87% on validation...not bad...But how do you make that even better? One way is to use something called Convolutions. I'm not going into detail on Convolutions here, but the ultimate concept is that they narrow down the content of the image to focus on specific, distinct details. If you've ever done image processing using a filter (like this: https://en.wikipedia.org/wiki/Kernel_(image_processing)) then convolutions will look very familiar. In short, you take an array (usually 3x3 or 5x5) and pass it over the image. By changing the underlying pixels based on the formula within that matrix, you can do things like edge detection. So, for example, if you look at the above link, you'll see a 3x3 that is defined for edge detection where the middle cell is 8, and all of its neighbors are -1. In this case, for each pixel, you would multiply its value by 8, then subtract the value of each neighbor. Do this for every pixel, and you'll end up with a new image that has the edges enhanced. This is perfect for computer vision, because often it's features that can get highlighted like this that distinguish one item from another, and the amount of information needed is then much less...because you'll just train on the highlighted features. That's the concept of Convolutional Neural Networks. Add some layers to do convolution before you have the dense layers, and then the information going to the dense layers is more focussed, and possibly more accurate. Run the code below -- this is the same neural network as earlier, but this time with Convolutional layers added first. It will take longer, but look at the impact on the accuracy:
###Code
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=5)
test_loss = model.evaluate(test_images, test_labels)
###Output
2.1.0
Model: "sequential_7"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_11 (Conv2D) (None, 26, 26, 32) 320
_________________________________________________________________
max_pooling2d_11 (MaxPooling (None, 13, 13, 32) 0
_________________________________________________________________
conv2d_12 (Conv2D) (None, 11, 11, 64) 18496
_________________________________________________________________
max_pooling2d_12 (MaxPooling (None, 5, 5, 64) 0
_________________________________________________________________
flatten_7 (Flatten) (None, 1600) 0
_________________________________________________________________
dense_14 (Dense) (None, 128) 204928
_________________________________________________________________
dense_15 (Dense) (None, 10) 1290
=================================================================
Total params: 225,034
Trainable params: 225,034
Non-trainable params: 0
_________________________________________________________________
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 19s 324us/sample - loss: 0.4539 - accuracy: 0.8354
Epoch 2/5
60000/60000 [==============================] - 19s 319us/sample - loss: 0.3009 - accuracy: 0.8898
Epoch 3/5
60000/60000 [==============================] - 19s 309us/sample - loss: 0.2543 - accuracy: 0.9062
Epoch 4/5
60000/60000 [==============================] - 19s 321us/sample - loss: 0.2226 - accuracy: 0.9167
Epoch 5/5
60000/60000 [==============================] - 19s 320us/sample - loss: 0.1936 - accuracy: 0.9291
10000/10000 [==============================] - 1s 132us/sample - loss: 0.2606 - accuracy: 0.9044
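###Markdown
To build some intuition for what a single convolution filter does (independently of the trained model above), here is a small sketch that applies the 3x3 edge-detection kernel described earlier -- middle cell 8, all neighbors -1 -- to one Fashion MNIST image with plain numpy. This is our own illustrative addition, not part of the original lesson.
###Code
import numpy as np
import matplotlib.pyplot as plt
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]])
# Reuse the first (already reshaped and scaled) training image from above
image = training_images[0].reshape(28, 28)
filtered = np.zeros((26, 26))
for i in range(26):
    for j in range(26):
        # multiply the 3x3 neighborhood by the kernel and sum the result
        filtered[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(image, cmap='gray')
axes[0].set_title('Original')
axes[1].imshow(filtered, cmap='gray')
axes[1].set_title('Edge-detection kernel applied')
plt.show()
###Output
_____no_output_____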
###Markdown
It's likely gone up to about 93% on the training data and 91% on the validation data. That's significant, and a step in the right direction! Try running it for more epochs -- say about 20, and explore the results! But while the results might seem really good, the validation results may actually go down, due to something called 'overfitting' which will be discussed later. (In a nutshell, 'overfitting' occurs when the network learns the data from the training set really well, but it's too specialised to only that data, and as a result is less effective at seeing *other* data. For example, if all your life you only saw red shoes, then when you see a red shoe you would be very good at identifying it, but blue suede shoes might confuse you...and you know you should never mess with my blue suede shoes.) What's in the code? Step 1 - Gather the data You'll notice that there's a bit of a change here in that the training data needed to be reshaped. That's because the first convolution expects a single tensor containing everything, so instead of 60,000 28x28x1 items in a list, we have a single 4D list that is 60,000x28x28x1, and the same for the test images. If you don't do this, you'll get an error when training as the Convolutions do not recognize the shape. ```import tensorflow as tfmnist = tf.keras.datasets.fashion_mnist(training_images, training_labels), (test_images, test_labels) = mnist.load_data()training_images=training_images.reshape(60000, 28, 28, 1)training_images=training_images / 255.0test_images = test_images.reshape(10000, 28, 28, 1)test_images=test_images/255.0``` Step 2 - Define your model. Now instead of the input layer at the top, you're going to add a Convolution. The parameters are: 1. The number of convolutions you want to generate. Purely arbitrary, but good to start with something in the order of 32. 2. The size of the Convolution, in this case a 3x3 grid. 3. The activation function to use -- in this case we'll use relu, which you might recall is the equivalent of returning x when x>0, else returning 0. 4. In the first layer, the shape of the input data. You'll follow the Convolution with a MaxPooling layer which is then designed to compress the image, while maintaining the content of the features that were highlighted by the convolution. By specifying (2,2) for the MaxPooling, the effect is to quarter the size of the image. Without going into too much detail here, the idea is that it creates a 2x2 array of pixels, and picks the biggest one, thus turning 4 pixels into 1. It repeats this across the image, and in so doing halves the number of horizontal pixels, and halves the number of vertical pixels, effectively reducing the image to 25% of its original size. You can call model.summary() to see the size and shape of the network, and you'll notice that after every MaxPooling layer, the image size is reduced in this way. ```model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)), tf.keras.layers.MaxPooling2D(2, 2),``` Add another convolution``` tf.keras.layers.Conv2D(64, (3,3), activation='relu'), tf.keras.layers.MaxPooling2D(2,2)``` Now flatten the output. 
After this you'll just have the same DNN structure as the non convolutional version``` tf.keras.layers.Flatten(),``` The same 128 dense layers, and 10 output layers as in the pre-convolution example:``` tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(10, activation='softmax')])``` Step 3 - Compile, train and evaluate the model Now compile the model, call the fit method to do the training, and evaluate the loss and accuracy from the test set.```model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])model.fit(training_images, training_labels, epochs=5)test_loss, test_acc = model.evaluate(test_images, test_labels)print(test_acc)``` Visualizing the Convolutions and Pooling This code will show us the convolutions graphically. The print(test_labels[:100]) shows us the first 100 labels in the test set, and you can see that the ones at index 0, index 23 and index 28 are all the same value (9). They're all shoes. Let's take a look at the result of running the convolution on each, and you'll begin to see common features between them emerge. Now, when the DNN is training on that data, it's working with a lot less, and it's perhaps finding a commonality between shoes based on this convolution/pooling combination.
###Code
print(test_labels[:100])
import matplotlib.pyplot as plt
f, axarr = plt.subplots(3,4)
FIRST_IMAGE=0
SECOND_IMAGE=23
THIRD_IMAGE=28
CONVOLUTION_NUMBER = 15
from tensorflow.keras import models
layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs = model.input, outputs = layer_outputs)
for x in range(0,4):
f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[0,x].imshow(f1[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[0,x].grid(False)
f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[1,x].imshow(f2[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[1,x].grid(False)
f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]
axarr[2,x].imshow(f3[0, : , :, CONVOLUTION_NUMBER], cmap='inferno')
axarr[2,x].grid(False)
###Output
_____no_output_____
###Markdown
EXERCISES 1. Try editing the convolutions. Change the 32s to either 16 or 64. What impact will this have on accuracy and/or training time?
###Code
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
###Output
2.1.0
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 24s 403us/sample - loss: 0.4425 - accuracy: 0.8382
Epoch 2/5
60000/60000 [==============================] - 24s 396us/sample - loss: 0.2922 - accuracy: 0.8927
Epoch 3/5
60000/60000 [==============================] - 24s 397us/sample - loss: 0.2450 - accuracy: 0.9096
Epoch 4/5
60000/60000 [==============================] - 24s 399us/sample - loss: 0.2120 - accuracy: 0.9207
Epoch 5/5
60000/60000 [==============================] - 24s 394us/sample - loss: 0.1861 - accuracy: 0.9308
10000/10000 [==============================] - 1s 147us/sample - loss: 0.2596 - accuracy: 0.9066
0.9066
###Markdown
2. Remove the final Convolution. What impact will this have on accuracy or training time?
###Code
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
###Output
2.1.0
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 16s 266us/sample - loss: 0.3903 - accuracy: 0.8626
Epoch 2/5
60000/60000 [==============================] - 16s 260us/sample - loss: 0.2660 - accuracy: 0.9042
Epoch 3/5
60000/60000 [==============================] - 16s 259us/sample - loss: 0.2229 - accuracy: 0.9181
Epoch 4/5
60000/60000 [==============================] - 16s 262us/sample - loss: 0.1901 - accuracy: 0.9299
Epoch 5/5
60000/60000 [==============================] - 15s 258us/sample - loss: 0.1620 - accuracy: 0.9396
10000/10000 [==============================] - 1s 99us/sample - loss: 0.2698 - accuracy: 0.9071
0.9071
###Markdown
3. How about adding more Convolutions? What impact do you think this will have? Experiment with it.
###Code
import tensorflow as tf
print(tf.__version__)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=5)
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
###Output
2.1.0
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 19s 314us/sample - loss: 0.6011 - accuracy: 0.7810
Epoch 2/5
60000/60000 [==============================] - 18s 305us/sample - loss: 0.4024 - accuracy: 0.8527
Epoch 3/5
60000/60000 [==============================] - 18s 306us/sample - loss: 0.3502 - accuracy: 0.8708
Epoch 4/5
60000/60000 [==============================] - 18s 308us/sample - loss: 0.3201 - accuracy: 0.8814
Epoch 5/5
60000/60000 [==============================] - 19s 309us/sample - loss: 0.2929 - accuracy: 0.8915
10000/10000 [==============================] - 1s 125us/sample - loss: 0.3463 - accuracy: 0.8733
0.8733
###Markdown
4. In the previous lesson you implemented a callback to check on the loss function and to cancel training once it hit a certain amount. See if you can implement that here to stop training the model once accuracy reaches 93%, letting the model train for, let's say, 20 epochs.
###Code
import tensorflow as tf
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('accuracy') > 0.93):
print("\nReached 93% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
print(tf.__version__)
mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
training_images=training_images.reshape(60000, 28, 28, 1)
training_images=training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images=test_images/255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(training_images, training_labels, epochs=20, callbacks=[callbacks])
test_loss, test_acc = model.evaluate(test_images, test_labels)
print(test_acc)
###Output
2.1.0
Train on 60000 samples
Epoch 1/20
60000/60000 [==============================] - 16s 260us/sample - loss: 0.3969 - accuracy: 0.8601
Epoch 2/20
60000/60000 [==============================] - 16s 259us/sample - loss: 0.2666 - accuracy: 0.9025
Epoch 3/20
60000/60000 [==============================] - 15s 254us/sample - loss: 0.2232 - accuracy: 0.9174
Epoch 4/20
60000/60000 [==============================] - 14s 234us/sample - loss: 0.1924 - accuracy: 0.9291
Epoch 5/20
59872/60000 [============================>.] - ETA: 0s - loss: 0.1666 - accuracy: 0.9381
Reached 93% accuracy so cancelling training!
60000/60000 [==============================] - 15s 251us/sample - loss: 0.1669 - accuracy: 0.9380
10000/10000 [==============================] - 1s 99us/sample - loss: 0.2462 - accuracy: 0.9134
0.9134
|
NEXRAD/.ipynb_checkpoints/THREDDS_NEXRAD-Copy1-checkpoint.ipynb | ###Markdown
Using Python to Access NEXRAD Level 2 Data from Unidata THREDDS Server This is a modified version of Ryan May's notebook here: http://nbviewer.jupyter.org/gist/dopplershift/356f2e14832e9b676207 The TDS provides a mechanism to query for available data files, as well as providing access to the data as native volume files, through OPeNDAP, and via its own CDMRemote protocol. Since we're using Python, we can take advantage of Unidata's Siphon package, which provides an easy API for talking to THREDDS servers. Bookmark these resources for when you want to use Siphon later!+ [latest Siphon documentation](http://siphon.readthedocs.org/en/latest/)+ [Siphon github repo](https://github.com/Unidata/siphon)+ [TDS documentation](http://www.unidata.ucar.edu/software/thredds/current/tds/TDS.html) Downloading the single latest volume Just a bit of initial set-up to use inline figures and quiet some warnings.
###Code
import matplotlib
import warnings
warnings.filterwarnings("ignore", category=matplotlib.cbook.MatplotlibDeprecationWarning)
%matplotlib inline
###Output
_____no_output_____
###Markdown
First we'll create an instance of RadarServer to point to the appropriate radar server access URL.
###Code
# The archive of data on S3 URL did not work for me, despite .edu domain
#url = 'http://thredds-aws.unidata.ucar.edu/thredds/radarServer/nexrad/level2/S3/'
#Trying motherlode URL
url = 'http://thredds.ucar.edu/thredds/radarServer/nexrad/level2/IDD/'
from siphon.radarserver import RadarServer
rs = RadarServer(url)
###Output
_____no_output_____
###Markdown
Next, we'll create a new query object to help request the data. Using the chaining methods, let's ask for the latest data at the radar KLVX (Louisville, KY). We see that when the query is represented as a string, it shows the encoded URL.
###Code
from datetime import datetime, timedelta
query = rs.query()
query.stations('KLVX').time(datetime.utcnow())
###Output
_____no_output_____
###Markdown
We can use the RadarServer instance to check our query, to make sure we have required parameters and that we have chosen valid station(s) and variable(s)
###Code
rs.validate_query(query)
###Output
_____no_output_____
###Markdown
Make the request, which returns an instance of TDSCatalog; this handles parsing the returned XML information.
###Code
catalog = rs.get_catalog(query)
###Output
_____no_output_____
###Markdown
We can look at the datasets on the catalog to see what data we found by the query. We find one volume in the return, since we asked for the volume nearest to a single time.
###Code
catalog.datasets
###Output
_____no_output_____
###Markdown
We can pull that dataset out of the dictionary and look at the available access URLs. We see URLs for OPeNDAP, CDMRemote, and HTTPServer (direct download).
###Code
ds = list(catalog.datasets.values())[0]
ds.access_urls
###Output
_____no_output_____
###Markdown
We'll use the CDMRemote reader in Siphon and pass it the appropriate access URL.
###Code
from siphon.cdmr import Dataset
data = Dataset(ds.access_urls['CdmRemote'])
###Output
_____no_output_____
###Markdown
We define some helper functions to make working with the data easier. One takes the raw data and converts it to floating point values with the missing data points appropriately marked. The other helps with converting the polar coordinates (azimuth and range) to Cartesian (x and y).
###Code
import numpy as np
def raw_to_masked_float(var, data):
# Values come back signed. If the _Unsigned attribute is set, we need to convert
# from the signed range [-128, 127] to the unsigned range [0, 255].
if var._Unsigned:
data = data & 255
# Mask missing points
data = np.ma.array(data, mask=data==0)
# Convert to float using the scale and offset
return data * var.scale_factor + var.add_offset
def polar_to_cartesian(az, rng):
az_rad = np.deg2rad(az)[:, None]
x = rng * np.sin(az_rad)
y = rng * np.cos(az_rad)
return x, y
###Output
_____no_output_____
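###Markdown
A tiny sanity check of the helpers above on made-up values (our own addition): a few azimuths at a fixed range should map onto points on a circle, with azimuth measured clockwise from north (hence sine for x and cosine for y).
###Code
az_test = np.array([0.0, 90.0, 180.0, 270.0])
rng_test = np.array([1000.0])
x_test, y_test = polar_to_cartesian(az_test, rng_test)
print(x_test.round(1))
print(y_test.round(1))
###Output
_____no_output_____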
###Markdown
The CDMRemote reader provides an interface that is almost identical to the usual python NetCDF interface. We pull out the variables we need for azimuth and range, as well as the data itself.
###Code
sweep = 0
ref_var = data.variables['Reflectivity_HI']
ref_data = ref_var[sweep]
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
###Output
_____no_output_____
###Markdown
Then convert the raw data to floating point values and the polar coordinates to Cartesian.
###Code
ref = raw_to_masked_float(ref_var, ref_data)
x, y = polar_to_cartesian(az, rng)
###Output
_____no_output_____
###Markdown
MetPy is a Python package for meteorology (Documentation: http://metpy.readthedocs.org and GitHub: http://github.com/MetPy/MetPy). We import MetPy and use it to get the colortable and value mapping information for the NWS Reflectivity data.
###Code
from metpy.plots import ctables # For NWS colortable
ref_norm, ref_cmap = ctables.registry.get_with_steps('NWSReflectivity', 5, 5)
###Output
_____no_output_____
###Markdown
Finally, we plot them up using matplotlib and cartopy. We create a helper function for making a map to keep things simpler later.
###Code
import matplotlib.pyplot as plt
import cartopy
def new_map(fig, lon, lat):
# Create projection centered on the radar. This allows us to use x
# and y relative to the radar.
proj = cartopy.crs.LambertConformal(central_longitude=lon, central_latitude=lat)
# New axes with the specified projection
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add coastlines
ax.coastlines('50m', 'black', linewidth=2, zorder=2)
# Grab state borders
state_borders = cartopy.feature.NaturalEarthFeature(
category='cultural', name='admin_1_states_provinces_lines',
scale='50m', facecolor='none')
ax.add_feature(state_borders, edgecolor='black', linewidth=1, zorder=3)
return ax
###Output
_____no_output_____
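###Markdown
Before moving on to the archive query, here is a quick sketch that uses the helper above to render the single latest volume we just prepared; it simply reuses the `data`, `x`, `y`, `ref` and colortable variables from the previous cells.
###Code
fig = plt.figure(figsize=(10, 10))
ax = new_map(fig, data.StationLongitude, data.StationLatitude)
ax.pcolormesh(x, y, ref, cmap=ref_cmap, norm=ref_norm, zorder=0)
ax.set_title(data.time_coverage_start);
###Output
_____no_output_____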
###Markdown
Download a collection of historical data This time we'll make a query based on a longitude, latitude point and using a time range.
###Code
# Our specified time
#dt = datetime(2012, 10, 29, 15) # Superstorm Sandy
#dt = datetime(2016, 6, 18, 1)
dt = datetime(2016, 6, 8, 18)
query = rs.query()
query.lonlat_point(-73.687, 41.175).time_range(dt, dt + timedelta(hours=1))
###Output
_____no_output_____
###Markdown
The specified longitude, latitude are in NY and the TDS helpfully finds the closest station to that point. We can see that for this time range we obtained multiple datasets.
###Code
cat = rs.get_catalog(query)
cat.datasets
###Output
_____no_output_____
###Markdown
Grab the first dataset so that we can get the longitude and latitude of the station and make a map for plotting. We'll go ahead and specify some longitude and latitude bounds for the map.
###Code
ds = list(cat.datasets.values())[0]
data = Dataset(ds.access_urls['CdmRemote'])
# Pull out the data of interest
sweep = 0
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
ref_var = data.variables['Reflectivity_HI']
# Convert data to float and coordinates to Cartesian
ref = raw_to_masked_float(ref_var, ref_var[sweep])
x, y = polar_to_cartesian(az, rng)
###Output
_____no_output_____
###Markdown
Use the function to make a new map and plot a colormapped view of the data
###Code
fig = plt.figure(figsize=(10, 10))
ax = new_map(fig, data.StationLongitude, data.StationLatitude)
# Set limits in lat/lon space
ax.set_extent([-77, -70, 38, 43])
# Add ocean and land background
ocean = cartopy.feature.NaturalEarthFeature('physical', 'ocean', scale='50m',
edgecolor='face',
facecolor=cartopy.feature.COLORS['water'])
land = cartopy.feature.NaturalEarthFeature('physical', 'land', scale='50m',
edgecolor='face',
facecolor=cartopy.feature.COLORS['land'])
ax.add_feature(ocean, zorder=-1)
ax.add_feature(land, zorder=-1)
ax.pcolormesh(x, y, ref, cmap=ref_cmap, norm=ref_norm, zorder=0);
meshes = []
for item in sorted(cat.datasets.items()):
# After looping over the list of sorted datasets, pull the actual Dataset object out
# of our list of items and access over CDMRemote
ds = item[1]
data = Dataset(ds.access_urls['CdmRemote'])
# Pull out the data of interest
sweep = 0
rng = data.variables['distanceR_HI'][:]
az = data.variables['azimuthR_HI'][sweep]
ref_var = data.variables['Reflectivity_HI']
# Convert data to float and coordinates to Cartesian
ref = raw_to_masked_float(ref_var, ref_var[sweep])
x, y = polar_to_cartesian(az, rng)
# Plot the data and the timestamp
mesh = ax.pcolormesh(x, y, ref, cmap=ref_cmap, norm=ref_norm, zorder=0)
text = ax.text(0.65, 0.03, data.time_coverage_start, transform=ax.transAxes,
fontdict={'size':16})
# Collect the things we've plotted so we can animate
meshes.append((mesh, text))
# Set up matplotlib to do the conversion to HTML5 video
import matplotlib
matplotlib.rcParams['animation.html'] = 'html5'
# Create an animation
from matplotlib.animation import ArtistAnimation
ArtistAnimation(fig, meshes)
###Output
_____no_output_____ |
interviewq_exercises/q160_pandas_boxplot_leader_life_expectancy_over_time.ipynb | ###Markdown
Leader life expectancy Data Analysis Python Pandas Data Manipulation Data Visualization Box Plot External Dataset Suppose you have the following [dataset](https://docs.google.com/spreadsheets/d/1c-ggjDyeZ_ByKOe5J8mKkZViDo_FW8EYvzkCRQc-Ds0/editgid=1561619208)*, which is a list of leaders for all independent states in the world as outlined in Gleditsch and Ward. With this data, for all leaders that have valid birth and death years, can you plot the life expectancy over time for these leaders? Here I would probably recommend a box and whisker plot to show distributions over time in the same chart. *[Dataset source](http://www.ksgleditsch.com/archigos.html) Solution will be written in Python for premium users.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
filename = 'q156_data_Archigos_ A Database of Political Leaders - 1March_Archigos_4.1.csv'
df = pd.read_csv(filename)
print('shape', df.shape)
print(df.columns)
df.head()
# df['born_year'] = pd.DatetimeIndex(df['borndate'], errors='coerce').year
df['born_year'] = pd.to_datetime(df['borndate'], errors='coerce').dt.year.astype('Int64')
df['death_year'] = pd.to_datetime(df['deathdate'], errors='coerce').dt.year.astype('Int64')
df['life_expectancy'] = df['death_year'] - df['born_year']
df['born_year_decade'] = (df['born_year'] - df['born_year'] % 10)
fig, ax = plt.subplots(figsize=(16,8));
boxplot = sns.boxplot(
x='born_year_decade',
y='life_expectancy',
data=df,
ax=ax
)
###Output
_____no_output_____ |
notebooks/experimental_pipeline_with_rlberry.ipynb | ###Markdown
Experimental pipeline with `rlberry` In this notebook, you will see: * How to use the `AgentStats` class to compare two deep RL algorithms: `A2C` and `PPO` * How `AgentStats` can optimize hyperparameters with one line of code (thanks to `Optuna`). * How to save all the parameters of the experiments in a `config.json` file (using `Sacred`). Colab setup
###Code
# install rlberry library, full version
!git clone https://github.com/rlberry-py/rlberry.git
!cd rlberry && git pull && pip install -e .[full] > /dev/null 2>&1
# install ffmpeg-python for saving videos
!pip install ffmpeg-python > /dev/null 2>&1
# install optuna for hyperparameter optimization
!pip install optuna > /dev/null 2>&1
# packages required to show video
!pip install pyvirtualdisplay > /dev/null 2>&1
!apt-get install -y xvfb python-opengl ffmpeg > /dev/null 2>&1
print("")
print(" ~~~ Libraries installed, please restart the runtime! ~~~ ")
print("")
###Output
Cloning into 'rlberry'...
remote: Enumerating objects: 619, done.[K
remote: Counting objects: 100% (619/619), done.[K
remote: Compressing objects: 100% (403/403), done.[K
remote: Total 3045 (delta 368), reused 406 (delta 213), pack-reused 2426[K
Receiving objects: 100% (3045/3045), 804.06 KiB | 7.81 MiB/s, done.
Resolving deltas: 100% (1887/1887), done.
Already up to date.
~~~ Libraries installed, please restart the runtime! ~~~
###Markdown
Imports and experiment setup
###Code
# Clear ouput dir
!rm -r experiment_dir/*
import rlberry.seeding as seeding
from rlberry.agents import PPOAgent, A2CAgent
from rlberry.envs.benchmarks.ball_exploration.ball2d import get_benchmark_env
from rlberry.stats import AgentStats, compare_policies, plot_episode_rewards
from sacred import Experiment
from sacred.observers import FileStorageObserver
from sacred import SETTINGS
SETTINGS['CAPTURE_MODE'] = 'sys' # workaround to avoid problems with sacred + multiprocessing (https://github.com/IDSIA/sacred/issues/711)
# Create experiment
ex = Experiment('demo_ppo_vs_a2c', interactive=True) # interactive only for the notebook
# Create output folder to store configs
fs_observer = FileStorageObserver.create("experiment_dir")
ex.observers.append(fs_observer)
@ex.config
def cfg():
"""
Defines experiment parameters, using the Sacred library.
See Sacred documentation at https://sacred.readthedocs.io/en/stable/
"""
params = {}
params['ppo'] = {
"n_episodes": 100,
"gamma": 0.99,
"horizon": 50,
}
params['a2c'] = {
"n_episodes": 100,
"gamma": 0.99,
"horizon": 50,
}
optimize_hyperparams = True
rlberry_seed = 123
###Output
_____no_output_____
###Markdown
Run experiment
###Code
@ex.main
def run_experiment(rlberry_seed,
params,
optimize_hyperparams):
"""
Main experiment function
"""
# Set seed
seeding.set_global_seed(rlberry_seed)
# Choose environment
env = get_benchmark_env(level=1)
# Initialize AgentStats
stats = {}
stats['ppo'] = AgentStats(PPOAgent,
env,
init_kwargs=params['ppo'],
eval_horizon=params['ppo']['horizon'],
n_fit=4,
output_dir=fs_observer.dir) # set ouput_dir as the one used by Sacred
stats['a2c'] = AgentStats(A2CAgent,
env,
init_kwargs=params['a2c'],
eval_horizon=params['a2c']['horizon'],
n_fit=4,
output_dir=fs_observer.dir)
agent_stats_list = stats.values()
# Optimize hyperparameters
# For more information on how optimize_hyperparams work,
# readers are invited to consult the documentation of Optuna:
# https://optuna.readthedocs.io/en/stable/
if optimize_hyperparams:
for stats in agent_stats_list:
# timeout after 10 seconds
stats.optimize_hyperparams(n_trials=50, timeout=10, n_fit=1)
# Fit agents with optimized hyperparams, and save results
for stats in agent_stats_list:
stats.fit()
stats.save_results()
# Learning curves
plot_episode_rewards(agent_stats_list, cumulative=True, show=False)
# Compare final policies
output = compare_policies(agent_stats_list, n_sim=10)
print(output)
ex.run()
###Output
[INFO] Running command 'run_experiment'
[INFO] Started run with ID "1"
[32m[I 2020-12-01 12:45:45,531][0m A new study created in memory with name: no-name-c553da79-72cc-47f7-b9ce-424abbd44bf2[0m
[INFO] [PPO] | episode = 25.000 | ep reward = 0.480 | freq = 27.992 logs/ms
[INFO] [PPO] | episode = 25.000 | ep reward = 8.044 | freq = 27.826 logs/ms
[INFO] [PPO] | episode = 50.000 | ep reward = 0.359 | freq = 20.057 logs/ms
[INFO] [PPO] | episode = 50.000 | ep reward = 2.753 | freq = 19.934 logs/ms
[INFO] [PPO] | episode = 75.000 | ep reward = 0.026 | freq = 22.385 logs/ms
[INFO] [PPO] | episode = 75.000 | ep reward = 4.065 | freq = 21.622 logs/ms
[INFO] [PPO] | episode = 100.000 | ep reward = 0.406 | freq = 24.612 logs/ms
[INFO] [PPO] | episode = 100.000 | ep reward = 3.804 | freq = 24.956 logs/ms
[32m[I 2020-12-01 12:45:51,363][0m Trial 1 finished with value: 0.10280699071069384 and parameters: {'batch_size': 64, 'learning_rate': 0.040040842609794736}. Best is trial 1 with value: 0.10280699071069384.[0m
[32m[I 2020-12-01 12:45:51,384][0m Trial 0 finished with value: 3.2720191242400007 and parameters: {'batch_size': 64, 'learning_rate': 0.5514134074293915}. Best is trial 0 with value: 3.2720191242400007.[0m
[INFO] [PPO] | episode = 25.000 | ep reward = 0.321 | freq = 29.903 logs/ms
[INFO] [PPO] | episode = 25.000 | ep reward = 2.669 | freq = 24.079 logs/ms
[INFO] [PPO] | episode = 50.000 | ep reward = 0.486 | freq = 25.089 logs/ms
[32m[I 2020-12-01 12:45:53,880][0m Trial 2 pruned. [0m
[INFO] [PPO] | episode = 50.000 | ep reward = 4.950 | freq = 21.108 logs/ms
[INFO] [PPO] | episode = 25.000 | ep reward = 4.202 | freq = 26.453 logs/ms
[INFO] [PPO] | episode = 75.000 | ep reward = 3.658 | freq = 20.497 logs/ms
[INFO] [PPO] | episode = 50.000 | ep reward = 14.510 | freq = 21.037 logs/ms
[INFO] [PPO] | episode = 100.000 | ep reward = 4.991 | freq = 21.568 logs/ms
[32m[I 2020-12-01 12:45:57,406][0m Trial 3 finished with value: 4.303850969354628 and parameters: {'batch_size': 8, 'learning_rate': 0.6589433689172297}. Best is trial 3 with value: 4.303850969354628.[0m
[INFO] [PPO] | episode = 75.000 | ep reward = 11.234 | freq = 33.143 logs/ms
[INFO] [PPO] | episode = 100.000 | ep reward = 13.235 | freq = 50.000 logs/ms
[32m[I 2020-12-01 12:45:58,212][0m Trial 4 finished with value: 11.943699454449968 and parameters: {'batch_size': 16, 'learning_rate': 0.00248290562404564}. Best is trial 4 with value: 11.943699454449968.[0m
[INFO] Number of finished trials: 5
[INFO] Best trial:
[INFO] Value: 11.943699454449968
[INFO] Params:
[INFO] batch_size: 16
[INFO] learning_rate: 0.00248290562404564
[32m[I 2020-12-01 12:45:58,321][0m A new study created in memory with name: no-name-4a5f5f7c-93aa-488d-855a-d51aa165e26e[0m
[INFO] [A2C] | episode = 25.000 | ep reward = 0.001 | freq = 29.478 logs/ms
[INFO] [A2C] | episode = 25.000 | ep reward = 4.157 | freq = 24.600 logs/ms
[INFO] [A2C] | episode = 50.000 | ep reward = 1.625 | freq = 24.141 logs/ms
[INFO] [A2C] | episode = 50.000 | ep reward = 4.541 | freq = 20.445 logs/ms
[INFO] [A2C] | episode = 75.000 | ep reward = 0.510 | freq = 21.346 logs/ms
[INFO] [A2C] | episode = 75.000 | ep reward = 3.265 | freq = 20.895 logs/ms
[INFO] [A2C] | episode = 100.000 | ep reward = 0.329 | freq = 23.419 logs/ms
[32m[I 2020-12-01 12:46:03,802][0m Trial 1 finished with value: 0.415830971373799 and parameters: {'batch_size': 64, 'learning_rate': 0.5102534529573056}. Best is trial 1 with value: 0.415830971373799.[0m
[INFO] [A2C] | episode = 100.000 | ep reward = 3.392 | freq = 19.567 logs/ms
[32m[I 2020-12-01 12:46:04,567][0m Trial 0 finished with value: 3.7458499420235447 and parameters: {'batch_size': 8, 'learning_rate': 0.02252913895362045}. Best is trial 0 with value: 3.7458499420235447.[0m
[INFO] [A2C] | episode = 25.000 | ep reward = 2.653 | freq = 25.976 logs/ms
[INFO] [A2C] | episode = 25.000 | ep reward = 4.264 | freq = 20.621 logs/ms
[INFO] [A2C] | episode = 50.000 | ep reward = 3.982 | freq = 20.893 logs/ms
[32m[I 2020-12-01 12:46:06,765][0m Trial 2 pruned. [0m
[INFO] [A2C] | episode = 50.000 | ep reward = 4.135 | freq = 18.286 logs/ms
[INFO] [A2C] | episode = 25.000 | ep reward = 4.137 | freq = 27.245 logs/ms
[INFO] [A2C] | episode = 50.000 | ep reward = 0.177 | freq = 21.377 logs/ms
[32m[I 2020-12-01 12:46:09,638][0m Trial 4 pruned. [0m
[INFO] [A2C] | episode = 75.000 | ep reward = 3.659 | freq = 17.761 logs/ms
[INFO] [A2C] | episode = 100.000 | ep reward = 3.058 | freq = 50.000 logs/ms
[32m[I 2020-12-01 12:46:10,384][0m Trial 3 finished with value: 3.6905705670337907 and parameters: {'batch_size': 4, 'learning_rate': 0.06105063869571411}. Best is trial 0 with value: 3.7458499420235447.[0m
[INFO] Number of finished trials: 5
[INFO] Best trial:
[INFO] Value: 3.7458499420235447
[INFO] Params:
[INFO] batch_size: 8
[INFO] learning_rate: 0.02252913895362045
[INFO] Training AgentStats for PPO...
[Parallel(n_jobs=4)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=4)]: Done 2 out of 4 | elapsed: 6.2s remaining: 6.2s
[Parallel(n_jobs=4)]: Done 4 out of 4 | elapsed: 6.2s remaining: 0.0s
[Parallel(n_jobs=4)]: Done 4 out of 4 | elapsed: 6.2s finished
[INFO] ... trained!
[WARNING] Could not save entry [n_episodes] of fit_statistics.
[INFO] Training AgentStats for A2C...
[Parallel(n_jobs=4)]: Using backend LokyBackend with 4 concurrent workers.
[Parallel(n_jobs=4)]: Done 2 out of 4 | elapsed: 6.2s remaining: 6.2s
[Parallel(n_jobs=4)]: Done 4 out of 4 | elapsed: 6.4s remaining: 0.0s
[Parallel(n_jobs=4)]: Done 4 out of 4 | elapsed: 6.4s finished
[INFO] ... trained!
[WARNING] Could not save entry [n_episodes] of fit_statistics.
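###Markdown
The `save_results()` calls above also write each agent's per-episode rewards to CSV files inside the Sacred run directory (they show up in the directory listing in the next section). As a minimal sketch -- assuming the run id is 1, as in this execution, and without relying on any particular column layout -- such a file can be reloaded with pandas for custom post-processing:
###Code
import pandas as pd
# Quick look at one of the saved reward files; the exact column layout
# depends on the rlberry version used, so we only inspect it here.
rewards_ppo = pd.read_csv('experiment_dir/1/stats_episode_rewards_PPO.csv')
print(rewards_ppo.shape)
rewards_ppo.head()
###Output
_____no_output_____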
###Markdown
Check output
###Code
# Input configuration
!cat experiment_dir/1/config.json
# Optimized hyperparameters
!cat experiment_dir/1/best_hyperparams_A2C.json
!cat experiment_dir/1/best_hyperparams_PPO.json
# And other output files (.csv with episode rewards, for instance)
!ls experiment_dir/1/
###Output
{
"__doc__": "\nDefines experiment parameters, using the Sacred library.\nSee Sacred documentation at https://sacred.readthedocs.io/en/stable/\n",
"optimize_hyperparams": true,
"params": {
"a2c": {
"gamma": 0.99,
"horizon": 50,
"n_episodes": 100
},
"ppo": {
"gamma": 0.99,
"horizon": 50,
"n_episodes": 100
}
},
"rlberry_seed": 123,
"seed": 304295250
}{
"batch_size": 8,
"learning_rate": 0.02252913895362045
}{
"batch_size": 16,
"learning_rate": 0.00248290562404564
}best_hyperparams_A2C.json cout.txt stats_episode_rewards_A2C.csv
best_hyperparams_PPO.json metrics.json stats_episode_rewards_PPO.csv
config.json run.json
|
examples/tutorials/translations/japanese/Part 05 - Welcome to the Sandbox.ipynb | ###Markdown
Part 5 - Welcome to the Sandbox In the tutorials so far, we have been initializing the hook and all of the workers one by one. However, this setup can be a bit tedious when you just want to check what the interface looks like or quickly try things out. So, in the tutorials that follow, we will use a convenience function to do all of this setup in one go.
###Code
import torch
import syft as sy
sy.create_sandbox(globals())
###Output
_____no_output_____
###Markdown
What the sandbox creates As you can see, it loads several simulated workers and datasets, and it distributes the data across the workers so that privacy-preserving deep learning techniques such as Federated Learning can be tried out easily. Using the sandbox feature, six workers were created.
###Code
workers
###Output
_____no_output_____
###Markdown
Many ready-to-use global variables are also provided.
###Code
hook
bob
###Output
_____no_output_____
###Markdown
Part 2: Worker search functionality If you are doing data science on the premise that the data lives remotely, you need to be able to search for where the data is and what kind of data it is. For example, imagine a research institute that wants to access radiology datasets held by hospitals. Being able to search which hospital holds which data is important.
###Code
torch.Tensor([1,2,3,4,5])
x = torch.tensor([1,2,3,4,5]).tag("#fun", "#boston", "#housing").describe("The input datapoints to the boston housing dataset.")
y = torch.tensor([1,2,3,4,5]).tag("#fun", "#boston", "#housing").describe("The input datapoints to the boston housing dataset.")
z = torch.tensor([1,2,3,4,5]).tag("#fun", "#mnist",).describe("The images in the MNIST training dataset.")
x
x = x.send(bob)
y = y.send(bob)
z = z.send(bob)
# The line below searches for exact-match tags or a partial match on the description.
results = bob.search(["#boston", "#housing"])
results
print(results[0].description)
###Output
_____no_output_____
###Markdown
Part 3: Virtual Grid Next, we introduce the concept of a grid. A grid is simply a group of workers. By creating a group of workers, the datasets held by the group become searchable.
###Code
grid = sy.PrivateGridNetwork(*workers)
results = grid.search("#boston")
boston_data = grid.search("#boston","#data")
boston_target = grid.search("#boston","#target")
###Output
_____no_output_____ |
notebooks/programs_that_surf.ipynb | ###Markdown
Parsing with Beautiful Soup BeautifulSoup is already installed in the Anaconda package.
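As a quick offline warm-up (our own addition, no network needed), here is a minimal sketch of the BeautifulSoup API applied to a literal HTML string:
###Code
from bs4 import BeautifulSoup
html_doc = '<html><body><a href="http://example.com/one">One</a> <a href="http://example.com/two">Two</a></body></html>'
soup = BeautifulSoup(html_doc, 'lxml')
# Iterate over all anchor tags and print their href attribute and text
for tag in soup('a'):
    print(tag.get('href', None), '->', tag.contents[0])
###Output
_____no_output_____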
###Code
# rewritten the code for Python 3... it fetches anchor tags
import urllib.request
from bs4 import BeautifulSoup
url = input('Enter - ')
html = urllib.request.urlopen(url).read()
soup = BeautifulSoup(html, 'lxml')
# Retrieve a list of anchor tags
# Each tag is like a dictionary of HTML attributes
tags = soup('a')
for tag in tags:
print(tag.get('href', None))
# test on this page: http://www.dr-chuck.com/page1.htm
###Output
_____no_output_____
###Markdown
Doing the first assignment
###Code
import urllib.request
from bs4 import BeautifulSoup
url = 'http://python-data.dr-chuck.net/comments_371514.html'
html = urllib.request.urlopen(url).read()
soup = BeautifulSoup(html, 'lxml')
# Retrieve a list of anchor tags
# Each tag is like a dictionary of HTML attributes
tags = soup('span')
#print(tags)
sum_of_tags = 0
for tag in tags:
sum_of_tags += int(tag.contents[0])
print(sum_of_tags)
# Example code to retrieve information
# Retrieve all of the anchor tags
tags = soup('a')
for tag in tags:
# Look at the parts of a tag
print('TAG:',tag)
print('URL:',tag.get('href', None))
print('Contents:',tag.contents[0])
print('Attrs:',tag.attrs)
def adding_series():
summ = 0
for i in range(11):
summ += i
return summ
adding_series()
# Making a function out of the assignment
import urllib.request
from bs4 import BeautifulSoup
url = 'http://python-data.dr-chuck.net/comments_371514.html'
def assign1(url):
html = urllib.request.urlopen(url).read()
soup = BeautifulSoup(html, 'lxml')
tags = soup('span')
sum_of_tags = 0
for tag in tags:
sum_of_tags += int(tag.contents[0])
return sum_of_tags
assign1(url)
###Output
_____no_output_____
###Markdown
Doing the second assignment
###Code
# training
import urllib.request
from bs4 import BeautifulSoup
seed_url = 'http://python-data.dr-chuck.net/known_by_Fikret.html'
urls = [seed_url]
position = 3
count = 4
pos = position - 1
for i in range(count):
html = urllib.request.urlopen(urls[-1]).read()
soup = BeautifulSoup(html, 'lxml')
tags = soup('a')
urls.append(tags[pos].get('href', None))
print(urls)
# solution
import urllib.request
from bs4 import BeautifulSoup
seed_url = 'http://python-data.dr-chuck.net/known_by_Alan.html '
urls = [seed_url]
position = 18
count = 7
pos = position - 1
for i in range(count):
html = urllib.request.urlopen(urls[-1]).read()
soup = BeautifulSoup(html, 'lxml')
tags = soup('a')
urls.append(tags[pos].get('href', None))
print(urls[-1])
###Output
http://python-data.dr-chuck.net/known_by_Shinay.html
|
jie/HBDB_FerryRouteSalinity.ipynb | ###Markdown
* This notebook was made to explore salinity variation along the Horseshoe Bay to Nanaimo ferry route
###Code
from __future__ import division, print_function
from cStringIO import StringIO
from IPython.core.display import HTML
from salishsea_tools.nowcast import figures
from glob import glob
import datetime
import glob
import os
import arrow
from dateutil import tz
from datetime import timedelta  # import only timedelta so the datetime module imported above is not shadowed
from sklearn import linear_model
from pylab import *
from matplotlib.backends import backend_agg as backend
import matplotlib.cm as cm
import matplotlib.dates as mdates
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import netCDF4 as nc
import numpy as np
import pandas as pd
import requests
import math
from scipy import interpolate as interp
import scipy.io as sio
from salishsea_tools import (
nc_tools,
viz_tools,
stormtools,
tidetools,
)
# Font format
title_font = {
'fontname': 'Bitstream Vera Sans', 'size': '15', 'color': 'black',
'weight': 'medium'
}
axis_font = {'fontname': 'Bitstream Vera Sans', 'size': '13'}
%matplotlib inline
def results_dataset(period, grid, results_dir):
"""Return the results dataset for period (e.g. 1h or 1d)
and grid (e.g. grid_T, grid_U) from results_dir.
"""
filename_pattern = 'SalishSea_{period}_*_{grid}.nc'
filepaths = glob(os.path.join(results_dir, filename_pattern.format(period=period, grid=grid)))
return nc.Dataset(filepaths[0])
run_date = datetime.datetime(2015,8,14)
# Results dataset location
results_home = '/data/dlatorne/MEOPAR/SalishSea/nowcast/'
results_dir = os.path.join(results_home, run_date.strftime('%d%b%y').lower())
def date(year, month, day_start, day_end, period, grid):
day_range = np.arange(day_start, day_end+1)
day_len = len(day_range)
files_all = [None] * day_len
inds = np.arange(day_len)
for i, day in zip(inds, day_range):
run_date = datetime.datetime(year,month, day)
results_home = '/data/dlatorne/MEOPAR/SalishSea/nowcast/'
results_dir = os.path.join(results_home, run_date.strftime('%d%b%y').lower())
filename = 'SalishSea_' + period + '_' + run_date.strftime('%Y%m%d').lower() + \
'_' + run_date.strftime('%Y%m%d').lower() + '_' + grid + '.nc'
file_single = os.path.join(results_dir, filename)
files_all[i] = file_single
return files_all
from glob import glob
grid_T_hr = results_dataset('1h', 'grid_T', results_dir)
bathy = nc.Dataset('/data/nsoontie/MEOPAR/NEMO-forcing/grid/bathy_meter_SalishSea2.nc')
PNW_coastline = sio.loadmat('/ocean/rich/more/mmapbase/bcgeo/PNW.mat')
filepath_name = date(run_date.year,run_date.month,run_date.day,run_date.day,'1h','grid_T')
latitude=grid_T_hr.variables['nav_lat']
longitude=grid_T_hr.variables['nav_lon']
sal_hr = grid_T_hr.variables['vosaline']
t, z = 3, 1
sal_hr = np.ma.masked_values(sal_hr[t, z], 0)
###Output
_____no_output_____
###Markdown
Prepare salinity data
###Code
saline=sio.loadmat('/ocean/jieliu/research/meopar/salinity_comparison/data/HBDB/HBDB_TSG20150813.mat')
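# Inverse-distance weighting helper: fit a line through the ferry track (lat vs. lon),
# find the model grid point closest to observation q, then accumulate 1/distance-weighted
# model salinity (output hours 3 and 4) over the surrounding 3x3 neighbourhood.
# The caller divides the returned sums by sum(weights) to get the weighted mean.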
def find_dist (q, lon11, lat11, X, Y, bathy, longitude, latitude, saline_nemo_3rd, saline_nemo_4rd):
k=0
values =0
valuess=0
dist = np.zeros(9)
weights = np.zeros(9)
value_3rd=np.zeros(9)
value_4rd=np.zeros(9)
regr =linear_model.LinearRegression()
regr.fit(lon11,lat11);
regr.coef_
[x1, j1] = tidetools.find_closest_model_point(lon11[q],regr.predict(lon11[q]),\
X,Y,bathy,lon_tol=0.0052,lat_tol=0.00210,allow_land=False)
for i in np.arange(x1-1,x1+2):
for j in np.arange(j1-1,j1+2):
dist[k]=tidetools.haversine(lon11[q],lat11[q],longitude[i,j],latitude[i,j])
weights[k]=1.0/dist[k]
value_3rd[k]=saline_nemo_3rd[i,j]*weights[k]
value_4rd[k]=saline_nemo_4rd[i,j]*weights[k]
values=values+value_3rd[k]
valuess=valuess+value_4rd[k]
k+=1
return values, valuess, weights
def salinity_fxn(saline):
struct= (((saline['HBDB_TSG'])['output'])[0,0])['Practical_Salinity'][0,0]
salinity = struct['data'][0,0]
time = struct['matlabTime'][0,0]
lonn = struct['longitude'][0,0]
latt = struct['latitude'][0,0]
a=len(time)
lon1=np.zeros([a,1])
lat1=np.zeros([a,1])
salinity1=np.zeros([a,1])
for i in np.arange(0,a):
matlab_datenum = np.float(time[i])
python_datetime = datetime.datetime.fromordinal(int(matlab_datenum))\
+ timedelta(days=matlab_datenum%1) - timedelta(days = 366)
if((python_datetime.year == run_date.year) & (python_datetime.month == run_date.month)\
& (python_datetime.day == run_date.day)
& (python_datetime.hour >= 3))&(python_datetime.hour < 5):
lon1[i]=lonn[i]
lat1[i]=latt[i]
salinity1[i]=salinity[i]
mask=lon1[:,0]!=0
lon1_2_4=lon1[mask]
lat1_2_4=lat1[mask]
salinity1_2_4=salinity1[mask]
lon11=lon1_2_4[0:-1:20]
lat11=lat1_2_4[0:-1:20]
salinity11=salinity1_2_4[0:-1:20]
bathy, X, Y = tidetools.get_SS2_bathy_data()
aa=date(run_date.year,run_date.month,run_date.day,run_date.day,'1h','grid_T')
#sim_date = datetime.datetime(2015,3,19)####need to change for \
#different daily model results, construct a datetime object
#run_date = datetime.datetime(2015,3,19)
date_str = run_date.strftime('%d-%b-%Y') ##create a string based on this date
tracers=nc.Dataset(aa[0])
j=int(aa[0][65:67])
jj=int(aa[0][67:69])
latitude=tracers.variables['nav_lat'][:]
longitude=tracers.variables['nav_lon'][:]
saline_nemo = tracers.variables['vosaline']
saline_nemo_3rd = saline_nemo[3,1, 0:898, 0:398]
saline_nemo_4rd = saline_nemo[4,1, 0:898, 0:398]
matrix=np.zeros([28,9])
values=np.zeros([28,1])
valuess=np.zeros([28,1])
value_mean_3rd_hour=np.zeros([28,1])
value_mean_4rd_hour=np.zeros([28,1])
for q in np.arange(0,28):
values[q], valuess[q], matrix[q,:]=find_dist(q, lon11, lat11, X, Y,\
bathy, longitude, latitude, saline_nemo_3rd, saline_nemo_4rd)
value_mean_3rd_hour[q]=values[q]/sum(matrix[q])
value_mean_4rd_hour[q]=valuess[q]/sum(matrix[q])
return lon11, lat11, lon1_2_4, lat1_2_4,\
value_mean_3rd_hour, value_mean_4rd_hour,\
salinity11, salinity1_2_4,date_str
# Hide DeprecationWarnings - needs fixing
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
# Dictionary of ferry stations - new
ferry_stations = {'Horseshoe Bay': {'lat': 49.3742,'lon': -123.2728},
'Nanaimo': {'lat': 49.1632,'lon': -123.8909},
'Vancouver': {'lat': 49.2827,'lon': -123.1207}}
def salinity_ferry_route(grid_T, grid_B, PNW_coastline, ferry_sal):
""" plot daily salinity comparisons between ferry observations
and model results as well as ferry route with model salinity
distribution.
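    :arg grid_T: Hourly tracer (grid_T) results dataset from the NEMO run.
    :type grid_T: :class:`netCDF4.Dataset`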
:arg grid_B: Bathymetry dataset for the Salish Sea NEMO model.
:type grid_B: :class:`netCDF4.Dataset`
:arg PNW_coastline: Coastline dataset.
:type PNW_coastline: :class:`mat.Dataset`
    :arg ferry_sal: Ferry route TSG salinity observations loaded from the .mat file.
    :type ferry_sal: dict
:returns: fig
"""
fig, axs = plt.subplots(1, 2, figsize=(15, 8))
figures.plot_map(axs[1], grid_B, PNW_coastline)
axs[1].set_xlim(-124.5, -122.5)
axs[1].set_ylim(48.2, 49.5)
viz_tools.set_aspect(axs[1],coords='map',lats=latitude)
cmap=plt.get_cmap('spectral')
cmap.set_bad('burlywood')
mesh=axs[1].pcolormesh(longitude[:],latitude[:],sal_hr[:],cmap=cmap)
cbar=fig.colorbar(mesh)
plt.setp(plt.getp(cbar.ax.axes, 'yticklabels'), color='w')
    cbar.set_label('Practical Salinity', color='white')
axs[1].set_title('Ferry Route: 3am[UTC] model result ', **title_font)
bbox_args = dict(boxstyle='square', facecolor='white', alpha=0.7)
stations=['Horseshoe Bay','Nanaimo','Vancouver']
for stn in stations:
axs[1].plot(ferry_stations[stn]['lon'], ferry_stations[stn]['lat'], marker='D', \
color='white',\
markersize=10, markeredgewidth=2)
axs[1].annotate ('Horseshoe Bay',(ferry_stations['Horseshoe Bay']['lon'] + 0.02,\
ferry_stations['Horseshoe Bay']['lat'] + 0.12), fontsize=15, color='black', bbox=bbox_args )
axs[1].annotate ('Nanaimo',(ferry_stations['Nanaimo']['lon'] - 0.35,\
ferry_stations['Nanaimo']['lat'] ),fontsize=15, color='black', bbox=bbox_args )
axs[1].annotate ('Vancouver',(ferry_stations['Vancouver']['lon'] - 0.1,\
ferry_stations['Vancouver']['lat']+ 0.09 ),fontsize=15, color='black', bbox=bbox_args )
figures.axis_colors(axs[1], 'white')
lon11, lat11, lon1_2_4, lat1_2_4,\
value_mean_3rd_hour, value_mean_4rd_hour,\
salinity11,salinity1_2_4, date_str = salinity_fxn(saline)
axs[1].plot(lon11,lat11,'black', linewidth = 4)
model_salinity_3rd_hour=axs[0].plot(lon11,value_mean_3rd_hour,'DodgerBlue',\
linewidth=2, label='3 am [UTC]')
model_salinity_4rd_hour=axs[0].plot(lon11,value_mean_4rd_hour,'MediumBlue',\
linewidth=2, label="4 am [UTC]" )
observation_salinity=axs[0].plot(lon1_2_4,salinity1_2_4,'DarkGreen', linewidth=2, label="Observed")
axs[0].text(0.25, -0.1,'Observations from Ocean Networks Canada', \
transform=axs[0].transAxes, color='white')
axs[0].set_xlim(-124, -123)
axs[0].set_ylim(0, 30)
axs[0].set_title('Surface Salinity: ' + date_str, **title_font)
axs[0].set_xlabel('Longitude', **axis_font)
axs[0].set_ylabel('Practical Salinity', **axis_font)
axs[0].legend()
axs[0].grid()
fig.patch.set_facecolor('#2B3E50')
figures.axis_colors(axs[0], 'gray')
return fig
fig = salinity_ferry_route(grid_T_hr, bathy, PNW_coastline, saline)
###Output
_____no_output_____ |
docs/benchmarks/indep_power_sampsize.ipynb | ###Markdown
1D Independence Testing Power vs. Sample Size
Here, we show finite testing power comparisons between the various tests within hyppo. For a test to be consistent, we would expect power to converge to 1 as sample size increases. Tests that converge to 1 more quickly have higher finite testing power and are likely better to use for your use case. The simulation in the bottom right is used so that we know we are properly controlling for type I error, which is important because otherwise the test would be invalid (power = alpha-level = 0.05).
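For orientation, a single point on one of these curves can be estimated directly with `hyppo.tools.power`; the call below mirrors the benchmark loop further down, and `"dcorr"` / `"linear"` are assumed to be valid keys of the test and simulation dictionaries imported below, so treat it as a sketch:

```python
from hyppo.tools import power

# Estimated power of one test on one simulation at a single sample size
single_estimate = power("dcorr", pow_type="indep", sim="linear", n=50, p=1, auto=False, noise=True)
print(single_estimate)
```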
###Code
import os
import sys
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from hyppo.independence import INDEP_TESTS
from hyppo.tools import SIMULATIONS, power
from joblib import Parallel, delayed
sys.path.append(os.path.realpath(".."))
# make plots look pretty
sns.set(color_codes=True, style="white", context="talk", font_scale=2)
PALETTE = sns.color_palette("Set1")
sns.set_palette(PALETTE[1:])
# constants
MAX_SAMPLE_SIZE = 100
STEP_SIZE = 5
SAMP_SIZES = range(5, MAX_SAMPLE_SIZE + STEP_SIZE, STEP_SIZE)
POWER_REPS = 5
# simulation titles
SIM_TITLES = [
"Linear",
"Exponential",
"Cubic",
"Joint Normal",
"Step",
"Quadratic",
"W-Shaped",
"Spiral",
"Bernoulli",
"Logarithmic",
"Fourth Root",
"Sine 4\u03C0",
"Sine 16\u03C0",
"Square",
"Two Parabolas",
"Circle",
"Ellipse",
"Diamond",
"Noise",
"Independence",
]
# these tests only make sense for > 1 dimension data
remove = ["maxmargin", "kmerf"]
INDEP_TESTS = dict([(k, v) for k, v in INDEP_TESTS.items() if k not in remove])
def estimate_power(sim, test, auto=False):
"""Compute the mean of the estimated power of 5 replications over sample sizes."""
if test == "MaxMargin":
test = ["MaxMargin", "Dcorr"]
est_power = np.array(
[
np.mean(
[
power(
test, pow_type="indep", sim=sim, n=i, p=1, auto=auto, noise=True
)
for _ in range(POWER_REPS)
]
)
for i in SAMP_SIZES
]
)
np.savetxt(
"../benchmarks/vs_samplesize/{}_{}.csv".format(sim, test),
est_power,
delimiter=",",
)
return est_power
# At this point, we would run this bit of code to generate the data for the figure and
# store it under the "vs_samplesize" directory. Since this code takes a very long time,
# we have commented out these lines of code. If you would like to generate the data,
# uncomment these lines and run the file.
#
# outputs = Parallel(n_jobs=-1, verbose=100)(
#         delayed(estimate_power)(sim_name, test)
# delayed(estimate_featimport)(sim_name, test)
# for sim_name in SIMULATIONS.keys()
# for test in INDEP_TESTS.keys()
# ]
# )
def plot_power():
fig, ax = plt.subplots(nrows=4, ncols=5, figsize=(25, 20))
plt.suptitle(
"Multivariate Independence Testing (Increasing Sample Size)",
y=0.93,
va="baseline",
)
for i, row in enumerate(ax):
for j, col in enumerate(row):
count = 5 * i + j
sim = list(SIMULATIONS.keys())[count]
for test in INDEP_TESTS.keys():
est_power = np.genfromtxt(
"../benchmarks/vs_samplesize/{}_{}.csv".format(sim, test),
delimiter=",",
)
col.plot(SAMP_SIZES, est_power, label=INDEP_TESTS[test].__name__, lw=2)
col.set_xticks([])
if i == 3:
col.set_xticks([SAMP_SIZES[0], SAMP_SIZES[-1]])
col.set_ylim(-0.05, 1.05)
col.set_yticks([])
if j == 0:
col.set_yticks([0, 1])
col.set_title(SIM_TITLES[count])
fig.text(0.5, 0.05, "Sample Size", ha="center")
fig.text(
0.07,
0.5,
"Statistical Power",
va="center",
rotation="vertical",
)
leg = plt.legend(
bbox_to_anchor=(0.5, 0.05),
bbox_transform=plt.gcf().transFigure,
ncol=len(INDEP_TESTS.keys()),
loc="upper center",
)
leg.get_frame().set_linewidth(0.0)
for legobj in leg.legendHandles:
legobj.set_linewidth(5.0)
plt.subplots_adjust(hspace=0.50)
# plot the power
plot_power()
###Output
_____no_output_____ |
Feature_Selection_and_Engineering-Pt2.ipynb | ###Markdown
Feature Selection and Engineering - Beginner's Guide
Learn how feature engineering can help you to up your game when building machine learning models in Kaggle: create new columns, transform variables and more!
In the previous classes, you have already learned about Exploratory Data Analysis (EDA) and building simple machine learning models. In this class, you'll learn more about feature engineering, a process where you use domain knowledge of your data to create additional relevant features that increase the predictive power of the learning algorithm and make your machine learning models perform even better!
More specifically,
- You'll first get started by doing all necessary imports and getting the data in your workspace;
- Then, you'll see some reasons why you should do feature engineering and start working on engineering your own new features for your data set! You'll create new columns, transform variables into numerical ones, handle missing values, and much more.
- Lastly, you'll build a new machine learning model with your new data set and submit it to Kaggle.
Get Started!
Before you can start off, you're going to do all the imports, just like you did in the previous tutorial, use some IPython magic to make sure the figures are generated inline in the Jupyter Notebook and set the visualization style. Next, you can import your data and make sure that you store the target variable of the training data in a safe place. Afterwards, you merge the train and test data sets (with exception of the `'Survived'` column of `df_train`) and store the result in `data`.
Remember that you do this because you want to make sure that any preprocessing that you do on the data is reflected in both the train and test sets!
Lastly, you use the `.info()` method to take a look at your data:
###Code
# Imports
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import re
import numpy as np
from sklearn import tree
from sklearn.model_selection import GridSearchCV
# Figures inline and set visualization style
%matplotlib inline
sns.set()
plt.style.use('ggplot')
# Import data
df_train = pd.read_csv('./data/train.csv')
df_test = pd.read_csv('./data/test.csv')
# Store target variable of training data in a safe place
survived_train = df_train.Survived
# Concatenate training and test sets
data = pd.concat([df_train.drop(['Survived'], axis=1), df_test])
# View head
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1309 entries, 0 to 417
Data columns (total 11 columns):
PassengerId 1309 non-null int64
Pclass 1309 non-null int64
Name 1309 non-null object
Sex 1309 non-null object
Age 1046 non-null float64
SibSp 1309 non-null int64
Parch 1309 non-null int64
Ticket 1309 non-null object
Fare 1308 non-null float64
Cabin 295 non-null object
Embarked 1307 non-null object
dtypes: float64(2), int64(4), object(5)
memory usage: 122.7+ KB
###Markdown
Why Feature Engineering At All?
You perform feature engineering to extract more information from your data, so that you can up your game when building models.
Titanic's Passenger Titles
Let's check out what this is all about by looking at an example. Let's check out the `'Name'` column with the help of the `.tail()` method, which helps you to see the last five rows of your data:
###Code
# View head of 'Name' column
data.Name.tail()
###Output
_____no_output_____
###Markdown
Suddenly, you see different titles emerging! In other words, this column contains strings or text that contain titles, such as 'Mr', 'Master' and 'Dona'.
These titles of course give you information on social status, profession, etc., which in the end could tell you something more about survival.
At first sight, it might seem like a difficult task to separate the names from the titles, but don't panic! Remember, you can easily use regular expressions to extract the title and store it in a new column `'Title'`:
###Code
# Extract Title from Name, store in column and plot barplot
data['Title'] = data.Name.apply(lambda x: re.search(' ([A-Z][a-z]+)\.', x).group(1))
sns.countplot(x='Title', data=data);
plt.xticks(rotation=45);
###Output
_____no_output_____
###Markdown
Note that this new column 'Title' is actually a new feature for your data set!
Tip: to learn more about regular expressions, check out my write up of our last [FB Live code along event](https://www.datacamp.com/community/tutorials/web-scraping-python-nlp) or check out DataCamp's [Python Regular Expressions Tutorial](https://www.datacamp.com/community/tutorials/python-regular-expression-tutorial).
You can see that there are several titles in the above plot and there are many that don't occur so often. So, it makes sense to put them in fewer buckets.
For example, you probably want to replace 'Mlle' and 'Ms' with 'Miss' and 'Mme' by 'Mrs', as these are French titles and ideally, you want all your data to be in one language. Next, you also take a bunch of titles that you can't immediately categorize and put them in a bucket called 'Special'.
Tip: play around with this to see how your algorithm performs as a function of it!
Next, you view a barplot of the result with the help of the `.countplot()` method:
###Code
data['Title'] = data['Title'].replace({'Mlle':'Miss', 'Mme':'Mrs', 'Ms':'Miss'})
data['Title'] = data['Title'].replace(['Don', 'Dona', 'Rev', 'Dr',
'Major', 'Lady', 'Sir', 'Col', 'Capt', 'Countess', 'Jonkheer'],'Special')
sns.countplot(x='Title', data=data);
plt.xticks(rotation=45);
###Output
_____no_output_____
###Markdown
This is what your newly engineered feature `'Title'` looks like!
Now, make sure that you have a `'Title'` column and check out your data again with the `.tail()` method:
###Code
# View head of data
data.tail()
###Output
_____no_output_____
###Markdown
Passenger's Cabins
When you loaded in the data and inspected it, you saw that there are several `NaN`s or missing values in the `'Cabin'` column.
It is reasonable to presume that those NaNs didn't have a cabin, which could tell you something about `'Survival'`. So, let's now create a new column `'Has_Cabin'` that encodes this information and tells you whether passengers had a cabin or not.
Note that you use the `.isnull()` method in the code chunk below, which will return `True` if the passenger doesn't have a cabin and `False` if that's not the case. However, since you want to store the result in a column `'Has_Cabin'`, you actually want to flip the result: you want to return `True` if the passenger has a cabin. That's why you use the tilde `~`.
###Code
# Did they have a Cabin?
data['Has_Cabin'] = ~data.Cabin.isnull()
# View head of data
data.head()
###Output
_____no_output_____
###Markdown
What you want to do now is drop a bunch of columns that contain no more useful information (or that we're not sure what to do with). In this case, you're looking at columns such as `['Cabin', 'Name', 'PassengerId', 'Ticket']`, because
- You already extracted information on whether or not the passenger had a cabin in your newly added 'Has_Cabin' column;
- Also, you already extracted the titles from the 'Name' column;
- You also drop the 'PassengerId' and the 'Ticket' columns because these will probably not tell you anything more about the survival of the Titanic passengers.
Tip: there might be more information in the 'Cabin' column, but for this tutorial, you assume that there isn't!
To drop these columns in your actual data DataFrame, make sure to use the `inplace` argument in the `.drop()` method and set it to `True`:
###Code
# Drop columns and view head
data.drop(['Cabin', 'Name', 'PassengerId', 'Ticket'], axis=1, inplace=True)
data.head()
###Output
_____no_output_____
###Markdown
Congrats! You've successfully engineered some new features such as `'Title'` and `'Has_Cabin'` and made sure that features that don't add any more useful information for your machine learning model are now dropped from your DataFrame!
Next, you want to deal with missing values, bin your numerical data, and transform all features into numeric variables using `.get_dummies()` again. Lastly, you'll build your final model for this tutorial. Check out how all of this is done in the next sections!
Handling Missing Values
With all of the changes you have made to your original data DataFrame, it's a good idea to figure out if there are any missing values left with `.info()`:
###Code
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1309 entries, 0 to 417
Data columns (total 9 columns):
Pclass 1309 non-null int64
Sex 1309 non-null object
Age 1046 non-null float64
SibSp 1309 non-null int64
Parch 1309 non-null int64
Fare 1308 non-null float64
Embarked 1307 non-null object
Title 1309 non-null object
Has_Cabin 1309 non-null bool
dtypes: bool(1), float64(2), int64(3), object(3)
memory usage: 133.3+ KB
###Markdown
The result of the above line of code tells you that you have missing values in `'Age'`, `'Fare'`, and `'Embarked'`.
Remember that you can easily spot this by first looking at the total number of entries (1309) and then checking out the number of non-null values in the columns that `.info()` lists. In this case, you see that `'Age'` has 1046 non-null values, so that means that you have 263 missing values. Similarly, `'Fare'` only has one missing value and 'Embarked' has two missing values.
Just like you did in the previous tutorial, you're going to impute these missing values with the help of `.fillna()`:
Note that, once again, you use the median to fill in the `'Age'` and `'Fare'` columns because it's perfect for dealing with outliers. Other ways to impute missing values would be to use the mean, which you can find by adding all data points and dividing by the number of data points, or mode, which is the number that occurs the highest number of times.
You fill in the two missing values in the `'Embarked'` column with `'S'`, which stands for Southampton, because this value is the most common one out of all the values that you find in this column.
Tip: you can double check this by doing some more Exploratory Data Analysis!
###Code
# Impute missing values for Age, Fare, Embarked
data['Age'] = data.Age.fillna(data.Age.median())
data['Fare'] = data.Fare.fillna(data.Fare.median())
data['Embarked'] = data['Embarked'].fillna('S')
data.info()
data.head()
###Output
_____no_output_____
###Markdown
Bin numerical data
Next, you want to bin the numerical data, because you have a range of ages and fares. However, there might be fluctuations in those numbers that don't reflect patterns in the data, which might be noise. That's why you'll put people that are within a certain range of age or fare in the same bin. You can do this by using the `pandas` function `qcut()` to bin your numerical data:
###Code
# Binning numerical columns
data['CatAge'] = pd.qcut(data.Age, q=4, labels=False )
data['CatFare']= pd.qcut(data.Fare, q=4, labels=False)
data.head()
###Output
_____no_output_____
###Markdown
Note that you pass in the data as a Series, `data.Age` and `data.Fare`, after which you specify the number of quantiles, `q=4`. Lastly, you set the `labels` argument to `False` to encode the bins as numbers.
Now that you have all of that information in bins, you can safely drop the `'Age'` and `'Fare'` columns. Don't forget to check out the first five rows of your data!
###Code
data = data.drop(['Age', 'Fare'], axis=1)
data.head()
###Output
_____no_output_____
###Markdown
Number of Members in Family Onboard
The next thing you can do is create a new column, which is the number of members in families that were onboard of the Titanic. In this tutorial, you won't go into this and see how the model performs without it. If you do want to check out how the model would do with this additional column, run the following line of code:
###Code
# Create column of number of Family members onboard
data['Fam_Size'] = data.Parch + data.SibSp
###Output
_____no_output_____
###Markdown
For now, you will just go ahead and drop the `'SibSp'` and `'Parch'` columns from your DataFrame:
###Code
# Drop columns
data = data.drop(['SibSp','Parch'], axis=1)
data.head()
###Output
_____no_output_____
###Markdown
Have you noticed?
Have you noticed that we have completed several feature engineering steps? From **binning numerical variables** to figuring out the **number of family members onboard**, these are both feature engineering steps. So essentially, we are creating new features based on existing features, using **background human intelligence**, and then removing the original variable(s). This requires you as an analyst to:
- Understand the dataset, in particular the data dictionary, well;
- Understand the business background well.
It is easier said than done, but that would be the goal.
Transform Variables into Numerical Variables
Now that you have engineered some more features, such as `'Title'` and `'Has_Cabin'`, and you have dealt with missing values and binned your numerical data, it's time to transform all variables into numeric ones. You do this because machine learning models generally take numeric input.
As you have done previously, you will use `.get_dummies()` to do this:
###Code
# Transform into binary variables
data_dum = pd.get_dummies(data, drop_first=True)
data_dum.head()
###Output
_____no_output_____
###Markdown
With all of this done, it's time to build your final model!
Building models with Your New Data Set!
As before, you'll first split your `data` back into training and test sets. Then, you'll transform them into arrays:
###Code
# Split into test.train
data_train = data_dum.iloc[:891]
data_test = data_dum.iloc[891:]
# Transform into arrays for scikit-learn
X = data_train.values
test = data_test.values
y = survived_train.values
###Output
_____no_output_____
###Markdown
You're now going to build a decision tree on your brand new feature-engineered dataset. To choose your hyperparameter `max_depth`, you'll use a variation on the train-test split called **cross validation**.
You begin by splitting the dataset into `5` groups or folds. Then you hold out the *first* fold as a **test set**, fit your model on the remaining four folds, predict on the test set and compute the metric of interest. Next, you hold out the *second* fold as your **test set**, fit on the remaining data, predict on the test set and compute the metric of interest. Then similarly with the *third*, *fourth* and *fifth*.
**NOTE**: Cross-validation (CV) is an efficient way of avoiding overfitting your models - be sure to use it when possible.
As a result, you get five values of accuracy, from which you can compute statistics of interest, such as the median and/or mean and 95% confidence intervals.
You do this for each value of each hyperparameter that you're tuning and choose the set of hyperparameters that performs the best. This is called **grid search**.
Enough about that for now, let's get to it!
In the following, you'll use **cross validation** and **grid search** to choose the best `max_depth` for your new feature-engineered dataset (shown right after the short illustration below):
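First, though, a rough sketch of the 5-fold splitting on its own, using scikit-learn's `cross_val_score` (illustrative only; `max_depth=3` is just an assumed example value, and the grid search that follows repeats this splitting internally for every candidate depth):

```python
from sklearn.model_selection import cross_val_score

# Five accuracy values, one per held-out fold, for a single candidate depth
fold_scores = cross_val_score(tree.DecisionTreeClassifier(max_depth=3), X, y, cv=5)
print(fold_scores, fold_scores.mean())
```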
###Code
# Setup the hyperparameter grid
dep = np.arange(1,9)
param_grid = {'max_depth' : dep}
# Instantiate a decision tree classifier: clf
clf = tree.DecisionTreeClassifier()
# Instantiate the GridSearchCV object: clf_cv
clf_cv = GridSearchCV(clf, param_grid=param_grid, cv=5)
# Fit it to the data
clf_cv.fit(X, y)
# Print the tuned parameter and score
print("Tuned Decision Tree Parameters: {}".format(clf_cv.best_params_))
print("Best score is {}".format(clf_cv.best_score_))
###Output
Tuned Decision Tree Parameters: {'max_depth': 3}
Best score is 0.8293766869625259
|
deep_staple/consensus/consensus.ipynb | ###Markdown
View the initial quality of the atlas Dice scores (after registration) for deeds and convex adam
###Code
convex_atlases = torch.load(Path(THIS_SCRIPT_DIR, "../../data_artifacts/20220318_crossmoda_convex_adam_lr/crossmoda_convex_registered_new_convex.pth").resolve())
deeds_atlases = torch.load(Path(THIS_SCRIPT_DIR, "../../data_artifacts/20220114_crossmoda_multiple_registrations/crossmoda_deeds_registered.pth").resolve())
dcs_convex = []
for fixed_item in convex_atlases.values():
for moving_item in fixed_item.values():
dcs_convex.append(moving_item['dice'][0][1].item())
dcs_convex.sort()
dcs_deeds = []
for fixed_item in deeds_atlases.values():
for moving_item in fixed_item.values():
dcs_deeds.append(moving_item['dice'][0][1].item())
dcs_deeds.sort()
plt.plot(dcs_deeds, label='deeds')
plt.plot(dcs_convex, label='convex')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Create consensi: take the training data directly and calculate the consensus segmentations as well as Dice values, storing the data in a consistent dict. Pull out the data of the network training and put it into a fixed-moving dict.
###Code
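# Data-parameter (DP) consensus: softmax the per-atlas data parameters into weights,
# take the weighted average of the warped atlas labels and threshold at 0.5 to get
# a binary consensus segmentation.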
def calc_dp_consensus(lbl_list, weighting_list):
LIMIT = .5
label_stack = torch.stack(lbl_list)
weightings = torch.tensor(weighting_list)
weightings = torch.softmax(weightings, 0)
weighted_stack = label_stack.to_dense()*weightings.view(-1,1,1,1)
weighted_stack = weighted_stack.sum((0))
consensus = (weighted_stack > LIMIT).long()
return consensus
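# STAPLE consensus: fuse the warped atlas labels with SimpleITK's STAPLE (EM) filter
# and threshold the resulting probability map at 0.5. Note that the softmaxed
# weightings are computed below but not passed to the filter.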
def calc_staple_consensus(lbl_list, weighting_list):
staple_filter = sitk.STAPLEImageFilter()
weightings = torch.tensor(weighting_list)
weightings = torch.softmax(weightings, 0)
# sitk.ProcessObject.SetGlobalDefaultDebugOff()
FOREGROUND = 1.0
staple_filter.SetForegroundValue(FOREGROUND)
staple_filter.SetMaximumIterations(200)
sitk_moving_data = [sitk.GetImageFromArray(lbl.to_dense().numpy().astype(np.int16)) for lbl in lbl_list]
staple_out = staple_filter.Execute(sitk_moving_data)
consensus = (torch.tensor(sitk.GetArrayFromImage(staple_out)) > .5).long()
return consensus
cases = ['400_deeds', '1200_deeds', '1800_deeds', '400_convex_adam']
current_case = cases[0]
print(f"Creating consensus for {current_case}")
if current_case == '400_deeds':
# Training with 400 labels (deeds)
network_scores = torch.load(Path(THIS_SCRIPT_DIR, "../../data/output/dashing-surf-1206_fold0_epx39/train_label_snapshot.pth").resolve())
elif current_case == '400_convex_adam':
# Training with 400 labels (convex adam)
network_scores = torch.load(Path(THIS_SCRIPT_DIR, "../../data/output/winter-frost-32_fold0_epx39/train_label_snapshot.pth").resolve())
elif current_case == '1200_deeds':
network_scores = torch.load(Path(THIS_SCRIPT_DIR, "../../data/output/classic-sunset-1245_fold0_epx39/train_label_snapshot.pth").resolve())
elif current_case == '1800_deeds':
network_scores = torch.load(Path(THIS_SCRIPT_DIR, "../../data/output/comic-sponge-1268_fold0_epx29/train_label_snapshot.pth").resolve())
elif current_case == 'any_additional_case':
    network_scores = torch.load(Path(THIS_SCRIPT_DIR, "../../data/output/<training snapshot>/train_label_snapshot.pth"))
# Put new training data here
data_paths = torch.load(Path(THIS_SCRIPT_DIR, "../../data/output/<training snapshot>/network_dataset_path_dict_train_1800.pth"))
consensus_dicts = {}
d_ids = network_scores['d_ids']
print("Reading labels")
for _id in d_ids:
network_data_lookup_idx = d_ids.index(_id)
f_id = _id[:4]
m_id = _id[6:]
if f_id in consensus_dicts:
fixed_dict = consensus_dicts.get(f_id)
else:
fixed_dict = {}
#Only add expert label in first hit of fixed_dict
fixed_dict['expert_label'] = network_scores['labels'][network_data_lookup_idx]
fixed_dict['prediction'] = network_scores['train_predictions'][network_data_lookup_idx]
fixed_dict['image_path'] = data_paths['train_image_paths'][_id]
moving_dict = fixed_dict.get(m_id, {})
moving_dict['warped_label'] = network_scores['modified_labels'][network_data_lookup_idx]
moving_dict['data_parameter'] = network_scores['data_parameters'][network_data_lookup_idx]
fixed_dict[m_id] = moving_dict
consensus_dicts[f_id] = fixed_dict
print("Getting consensus scores")
dp_consensus_dices = []
staple_consensus_dices = []
print(consensus_dicts.keys())
for f_id, fixed_dict in tqdm(consensus_dicts.items()):
lbls = []
lbls = [elem['warped_label'] for elem in fixed_dict.values() if isinstance(elem, dict)]
data_parameters = []
data_parameters = [elem['data_parameter'] for elem in fixed_dict.values() if isinstance(elem, dict)]
expert_label = fixed_dict['expert_label'].to_dense()
dp_consensus = calc_dp_consensus(lbls, data_parameters)
staaple_consensus = calc_staple_consensus(lbls, data_parameters)
dp_dsc = dice3d(
torch.nn.functional.one_hot(dp_consensus.unsqueeze(0), 2),
torch.nn.functional.one_hot(expert_label.unsqueeze(0), 2),
one_hot_torch_style=True, nan_for_unlabeled_target=False
)
staple_dsc = dice3d(
torch.nn.functional.one_hot(staaple_consensus.unsqueeze(0), 2),
torch.nn.functional.one_hot(expert_label.unsqueeze(0), 2),
one_hot_torch_style=True, nan_for_unlabeled_target=False
)
print('f_id: ', f_id)
print(f"DP consensus dice:", dp_dsc)
print(f"STAPLE consensus dice:", staple_dsc)
print()
fixed_dict['dp_consensus'] = dp_consensus.to_sparse()
fixed_dict['staple_consensus'] = staaple_consensus.to_sparse()
fixed_dict['dp_consensus_oracle_dice'] = dp_dsc
fixed_dict['staple_consensus_oracle_dice'] = staple_dsc
consensus_dicts[f_id] = fixed_dict
torch.save(consensus_dicts, Path(THIS_SCRIPT_DIR, f"../../data/consensus/consensus_dict_{current_case}.pth").resolve())
def extract_consensus_dices(consensus_path):
print(f"Extracting data from '{consensus_path}'")
consensus_dicts = torch.load(consensus_path)
dp_consensus_dices = []
staple_consensus_dices = []
for f_id, fixed_dict in consensus_dicts.items():
dp_consensus_dices.append(fixed_dict['dp_consensus_oracle_dice'])
staple_consensus_dices.append(fixed_dict['staple_consensus_oracle_dice'])
dp_tumour_dices = torch.cat(dp_consensus_dices)[:,1]
staple_tumour_dices = torch.cat(staple_consensus_dices)[:,1]
print(f"DP consensus mean dice: {dp_tumour_dices.mean().item():.3f}")
print(f"STAPLE consensus mean dice: {staple_tumour_dices.mean().item():.3f}")
print()
return dp_tumour_dices, staple_tumour_dices
# Load consensus dices
consensus_dicts_convex_adam_path = Path(THIS_SCRIPT_DIR, f"../../data/consensus/consensus_dict_400_convex_adam.pth").resolve()
consensus_dicts_deeds_path = Path(THIS_SCRIPT_DIR, f"../../data/consensus/consensus_dict_400_deeds.pth").resolve()
dp_consensus_dices_deeds, staple_consensus_dices_deeds = extract_consensus_dices(consensus_dicts_deeds_path)
dp_consensus_dices_convex_adam, staple_consensus_dices_convex_adam = extract_consensus_dices(consensus_dicts_convex_adam_path)
###Output
_____no_output_____
###Markdown
Compare generated consensi for two training runs: deeds @ 400 registrations and Convex Adam @ 400 registrations (training output not published, but can be generated when training on the registered data)
###Code
boxplot_data_deeds = [(staple_consensus_dices_deeds*100).tolist(), (dp_consensus_dices_deeds*100).tolist()]
boxplot_data_convex_adam = [(staple_consensus_dices_convex_adam*100).tolist(), (dp_consensus_dices_convex_adam*100).tolist()]
# df_deeds = pd.DataFrame(boxplot_data_deeds, index=['STAPLE consensus', 'DP consensus'])
import matplotlib.ticker as mtick
hues = {
'purple': (125/255, 84/255, 178/255),
'red': (218/255, 76/255, 76/255),
'yellow': (237/255, 183/255, 50/255),
'green': (135/255, 206/255, 191/255),
'gray': (161/255, 169/255, 173/255),
'darkgray': (80/255, 85/255, 90/255),
}
LINE_WIDTH = 1
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(5.05, 4.5))
lineprops = dict(color=hues['darkgray'], linewidth=LINE_WIDTH)
boxprops=dict(color=hues['darkgray'], linewidth=LINE_WIDTH)
flierprops = dict(marker='o', markerfacecolor=hues['darkgray'], markersize=4,
linestyle='none', markeredgecolor='none')
# rectangular box plot
HEIGHT = .45
HALF_HEIGHT = HEIGHT/2
bplot_deeds = ax.boxplot(boxplot_data_deeds,
widths = 0.25,
positions=[3.5-HALF_HEIGHT, 4.5-HALF_HEIGHT],
vert=False, # vertical box alignment
patch_artist=True, # fill with color
# labels=['DP', 'STAPLE'],
showmeans=True,
flierprops=flierprops, boxprops=boxprops,
whiskerprops=lineprops, capprops=lineprops,
meanline=True, medianprops=lineprops, meanprops=lineprops,
# showfliers=False
)
bplot_convex_adam = ax.boxplot(boxplot_data_convex_adam,
widths = 0.25,
positions=[3.5+HALF_HEIGHT, 4.5+HALF_HEIGHT],
vert=False, # vertical box alignment
patch_artist=True, # fill with color
# labels=['DP', 'STAPLE'],
showmeans=True,
flierprops=flierprops, boxprops=boxprops,
whiskerprops=lineprops, capprops=lineprops,
meanline=True, medianprops=lineprops, meanprops=lineprops,
# showfliers=False
)
plt.rcParams.update({'font.size': 10})
ax.set_xlim([0.0,100.0])
# ax.set_xlim([-.1,0.3])
for box_patch, flier_patch, color in zip(bplot_deeds['boxes'], bplot_deeds['fliers'], [hues['yellow'], hues['yellow']]):
box_patch.set_facecolor(color)
flier_patch.set_markerfacecolor(color)
flier_patch.set_markeredgecolor(hues['darkgray'])
for box_patch, flier_patch, color in zip(bplot_convex_adam['boxes'], bplot_convex_adam['fliers'], [hues['green'], hues['green']]):
box_patch.set_facecolor(color)
flier_patch.set_markerfacecolor(color)
flier_patch.set_markeredgecolor(hues['darkgray'])
ax.xaxis.set_tick_params(width=LINE_WIDTH)
ax.yaxis.set_tick_params(width=LINE_WIDTH, color=hues['darkgray'])
[x.set_linewidth(LINE_WIDTH) for x in ax.spines.values()]
[x.set_color(hues['darkgray']) for x in ax.spines.values()]
ax.tick_params(axis='x', colors=hues['darkgray'])
deeds_pos, convex_adam_pos = [0-HALF_HEIGHT, 1.5-HALF_HEIGHT, 2.5-HALF_HEIGHT, 3.5-HALF_HEIGHT, 4.5-HALF_HEIGHT, 6-HALF_HEIGHT], [0+HALF_HEIGHT, 1.5+HALF_HEIGHT, 2.5+HALF_HEIGHT, 3.5+HALF_HEIGHT, 4.5+HALF_HEIGHT, 6+HALF_HEIGHT]
deeds_val, convex_adam_val = [28.91, 48.0, 56.928, 63.6, 67.8, 84.4], [20.75, 49.47, 59.98, 62.8, 65.7, 83.75]
bar_deeds = plt.barh(deeds_pos, deeds_val, color=hues['yellow'], height=HEIGHT)
bar_convex = plt.barh(convex_adam_pos, convex_adam_val, color=hues['green'], height=HEIGHT) # Convex Adam
ax.set_yticks([0,1.5,2.5,3.5,4.5, 6])
ax.set_yticklabels(['GAP', 'RND', 'ALL', 'STAPLE', 'DP', 'ORACLE'])
for ax_pos, val in zip(deeds_pos, deeds_val):
ax.text(1, ax_pos+.075, f"{val:.1f}", color='white')
for ax_pos, val in zip(convex_adam_pos, convex_adam_val):
ax.text(1, ax_pos+.075, f"{val:.1f}", color='white')
bp_mean_val = [staple_consensus_dices_deeds.mean(), staple_consensus_dices_convex_adam.mean(), dp_consensus_dices_deeds.mean(), dp_consensus_dices_convex_adam.mean()]
bp_mean_val = [val*100 for val in bp_mean_val]
bp_mean_pos = [3.5-HALF_HEIGHT, 3.5+HALF_HEIGHT, 4.5-HALF_HEIGHT, 4.5+HALF_HEIGHT]
for ax_pos, val in zip(bp_mean_pos, bp_mean_val):
ax.text(90, ax_pos+.075, f"{val:.1f}", color=hues['darkgray'])#, bbox=dict(facecolor='none', edgecolor=hues['darkgray'], pad=1.5))
import matplotlib.patches as patches
style = "Simple, tail_width=0.5, head_width=4, head_length=8"
kw = dict(arrowstyle=style, color=hues['darkgray'], linewidth=LINE_WIDTH//2)
a1 = patches.FancyArrowPatch((90, 3), (58.1, 3.2), connectionstyle="arc3,rad=.5", **kw)
plt.gca().add_patch(a1)
plt.gca().invert_yaxis()
ax.xaxis.set_major_formatter(mtick.PercentFormatter())
plt.savefig("deeds-convex-adam.svg")
###Output
_____no_output_____ |
programming_examples/old/Machine Learning Example.ipynb | ###Markdown
To do:
1. add in logistic regression or NN if we have time (i think there are some examples on github we can just rip)
2. add in exploratory data analysis (ricky)
3. find out if there's a nice way to display a .dot (the cancerTree.dot file generated) graph inline in the console (pygraphviz won't install on my environment and the graphviz.source stuff isn't working, haven't had the time to figure it out)
4. add in some comments and documentation
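One possible way to tackle item 3, assuming the `graphviz` Python package (already imported below as `Source`) and the system Graphviz binaries are available; this is a sketch, not verified against the environment issues mentioned above:

```python
from graphviz import Source

# In a Jupyter cell, the returned Source object renders the .dot graph inline
Source.from_file("cancerTree.dot")
```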
###Code
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier #KNN
from sklearn.linear_model import LogisticRegression #Logistic Regression
from sklearn.tree import DecisionTreeClassifier #Decision Tree
from sklearn.ensemble import RandomForestClassifier #Random Forest
from sklearn.neural_network import MLPClassifier #Neural Network
from sklearn.svm import SVC #SVM
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import export_graphviz
import matplotlib.pylab as plt
import numpy as np
import graphviz
from graphviz import Source
%matplotlib inline
#%%
#load the breast cancer data
cancer = load_breast_cancer()
print(cancer.keys())
print(cancer.DESCR)
#%%
#print feature names to visualize
print(cancer.feature_names)
#%%
#print target names to visualize
print(cancer.target_names)
#%%
#look at dimensions of dataset
type(cancer.data)
cancer.data.shape
#%%
#plotting 2D of texture and perimeter
fig = plt.figure(figsize=(8,6))
plt.scatter(cancer.data[:,1], cancer.data[:,2], c=cancer.target)
plt.xlabel(str(cancer.feature_names[1]))
plt.ylabel(str(cancer.feature_names[2]))
plt.show()
#%%
X_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=42)
training_accuracy = []
test_accuracy = []
max_dep = range(1,15)
for md in max_dep:
tree = DecisionTreeClassifier(max_depth=md,random_state=0)
tree.fit(X_train,y_train)
training_accuracy.append(tree.score(X_train, y_train))
test_accuracy.append(tree.score(X_test, y_test))
plt.plot(max_dep,training_accuracy, label='Accuracy of the training set')
plt.plot(max_dep,test_accuracy, label='Accuracy of the test set')
plt.ylabel('Accuracy')
plt.xlabel('Max Depth')
plt.legend()
# By having larger max_depth (>5), we overfit the model to the training data, so the accuracy on the training set goes up
# but the accuracy on the test set decreases
# other parameters that we can work with:
# - min_samples_leaf, max_sample_leaf
# - max_leaf_node
# by looking at the plot, the best result occurs when max_depth is 3
#%%
# exporting decision tree
export_graphviz(tree, out_file="cancerTree.dot", class_names=['malignant','benign'], feature_names=cancer.feature_names, impurity=False, filled=True)
#%%
print('Feature importances: {}'.format(tree.feature_importances_))
print('Feature importances: {}'.format(tree.feature_importances_))
type(tree.feature_importances_)
#%%
#Feature Importance
n_feature = cancer.data.shape[1]
plt.barh(range(n_feature), tree.feature_importances_, align='center')
plt.yticks(np.arange(n_feature), cancer.feature_names)
plt.xlabel('Feature Importance')
plt.ylabel('Feature')
plt.show()
###Output
Feature importances: [0. 0. 0. 0. 0. 0.
0. 0.72468105 0. 0. 0.01277192 0.
0. 0. 0.00826156 0. 0. 0.01702539
0. 0. 0.05899273 0.12550655 0.00838371 0.03452044
0.00985664 0. 0. 0. 0. 0. ]
|
docs/source/tutorials/label_mistakes.ipynb | ###Markdown
Finding Label Mistakes with FiftyOne
Annotation mistakes create an artificial ceiling on the performance of your models. However, finding these mistakes by hand is at least as arduous as the original annotation work! Enter FiftyOne.
This tutorial shows how FiftyOne can help you find and correct label mistakes in your datasets, enabling you to curate higher quality datasets and, ultimately, train better models!
Overview
In this walkthrough, we explore how FiftyOne can be used to help you find mistakes in your annotations.
We'll cover the following concepts:
- Loading your existing dataset in FiftyOne
- Adding predictions from your model to your FiftyOne dataset
- Computing insights into your dataset relating to possible mistakes
- Visualizing the mistakes in the FiftyOne App
Setup
This tutorial requires [PyTorch](https://pytorch.org) to be installed:
###Code
# Modify as necessary (e.g., GPU install). See https://pytorch.org for options
!pip install torch
!pip install torchvision
###Output
_____no_output_____
###Markdown
We'll also need to download a pretrained CIFAR-10 PyTorch model (a ResNet-50) from the web:
###Code
# Download the software
!git clone https://github.com/huyvnphan/PyTorch_CIFAR10
# Download the pretrained model (90MB)
!eta gdrive download --public \
1dGfpeFK_QG0kV-U6QDHMX2EOGXPqaNzu \
PyTorch_CIFAR10/cifar10_models/state_dicts/resnet50.pt
###Output
Cloning into 'PyTorch_CIFAR10'...
remote: Enumerating objects: 551, done.[K
remote: Total 551 (delta 0), reused 0 (delta 0), pack-reused 551[K
Receiving objects: 100% (551/551), 6.54 MiB | 3.20 MiB/s, done.
Resolving deltas: 100% (182/182), done.
Downloading '1dGfpeFK_QG0kV-U6QDHMX2EOGXPqaNzu' to 'PyTorch_CIFAR10/cifar10_models/state_dicts/resnet50.pt'
100% |████| 719.8Mb/719.8Mb [36.2s elapsed, 0s remaining, 24.4Mb/s]
###Markdown
Manipulating the data
For this walkthrough, we will artificially perturb an existing dataset with mistakes on the labels. Of course, in your normal workflow, you would not add labeling mistakes; this is only for the sake of the walkthrough.
The code block below loads the test split of the CIFAR-10 dataset into FiftyOne and randomly breaks 10% (1000 samples) of the labels:
###Code
import random
import fiftyone as fo
import fiftyone.zoo as foz
# Load the CIFAR-10 test split
# Downloads the dataset from the web if necessary
dataset = foz.load_zoo_dataset("cifar10", split="test")
# Get the CIFAR-10 classes list
info = foz.load_zoo_dataset_info("cifar10")
classes = info.classes
# Artificially corrupt 10% of the labels
_num_mistakes = int(0.1 * len(dataset))
for sample in dataset.take(_num_mistakes):
mistake = random.randint(0, 9)
while classes[mistake] == sample.ground_truth.label:
mistake = random.randint(0, 9)
sample.tags.append("mistake")
sample.ground_truth = fo.Classification(label=classes[mistake])
sample.save()
###Output
Split 'test' already downloaded
Loading 'cifar10' split 'test'
100% |█████████████████████████| 10000/10000 [2.3s elapsed, 0s remaining, 4.2K samples/s]
###Markdown
Let's print some information about the dataset to verify the operation that we performed:
###Code
# Verify that the `mistake` tag is now in the dataset's schema
print(dataset)
# Count the number of samples with the `mistake` tag
num_mistakes = len(dataset.match_tag("mistake"))
print("%d ground truth labels are now mistakes" % num_mistakes)
###Output
1000 ground truth labels are now mistakes
###Markdown
Add predictions to the dataset
Using an off-the-shelf model, let's now add predictions to the dataset, which are necessary for us to deduce some understanding of the possible label mistakes.
The code block below adds model predictions to another randomly chosen 10% (1000 samples) of the dataset:
###Code
import sys
import numpy as np
import torch
import torchvision
from torch.utils.data import DataLoader
import fiftyone.utils.torch as fout
sys.path.insert(1, "PyTorch_CIFAR10")
from cifar10_models import *
def make_cifar10_data_loader(image_paths, sample_ids, batch_size):
mean = [0.4914, 0.4822, 0.4465]
std = [0.2023, 0.1994, 0.2010]
transforms = torchvision.transforms.Compose(
[
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean, std),
]
)
dataset = fout.TorchImageDataset(
image_paths, sample_ids=sample_ids, transform=transforms
)
return DataLoader(dataset, batch_size=batch_size, num_workers=4)
def predict(model, imgs):
logits = model(imgs).detach().cpu().numpy()
predictions = np.argmax(logits, axis=1)
odds = np.exp(logits)
confidences = np.max(odds, axis=1) / np.sum(odds, axis=1)
return predictions, confidences, logits
#
# Load a model
#
# Model performance numbers are available at:
# https://github.com/huyvnphan/PyTorch_CIFAR10
#
model = resnet50(pretrained=True)
model_name = "resnet50"
#
# Extract a few images to process
# (some of these will have been manipulated above)
#
num_samples = 1000
batch_size = 20
view = dataset.take(num_samples)
image_paths, sample_ids = zip(
*[(s.filepath, s.id) for s in view.iter_samples()]
)
data_loader = make_cifar10_data_loader(image_paths, sample_ids, batch_size)
#
# Perform prediction and store results in dataset
#
for imgs, sample_ids in data_loader:
predictions, _, logits_ = predict(model, imgs)
# Add predictions to your FiftyOne dataset
for sample_id, prediction, logits in zip(sample_ids, predictions, logits_):
sample = dataset[sample_id]
sample.tags.append("processed")
sample[model_name] = fo.Classification(
label=classes[prediction], logits=logits,
)
sample.save()
###Output
_____no_output_____
###Markdown
Let's print some information about the predictions that were generated and how many of them correspond to samples whose ground truth labels were corrupted:
###Code
# Count the number of samples with the `processed` tag
num_processed = len(dataset.match_tag("processed"))
# Count the number of samples with both `processed` and `mistake` tags
num_corrupted = len(dataset.match_tag("processed").match_tag("mistake"))
print("Added predictions to %d samples" % num_processed)
print("%d of these samples have label mistakes" % num_corrupted)
###Output
Added predictions to 1000 samples
94 of these samples have label mistakes
###Markdown
Find the mistakes
Now we can run a method from FiftyOne that estimates the mistakenness of the ground truth labels for the samples for which we generated predictions:
###Code
import fiftyone.brain as fob
# Get samples for which we added predictions
h_view = dataset.match_tag("processed")
# Compute mistakenness
fob.compute_mistakenness(h_view, model_name, label_field="ground_truth")
###Output
Computing mistakenness for 1000 samples...
100% |███████████████████████████| 1000/1000 [1.3s elapsed, 0s remaining, 808.1 samples/s]
Mistakenness computation complete
###Markdown
The above method added a `mistakenness` field to all samples for which we added predictions. We can easily sort by likelihood of mistakenness from code:
###Code
# Sort by likelihood of mistake (most likely first)
mistake_view = (dataset
.match_tag("processed")
.sort_by("mistakenness", reverse=True)
)
# Print some information about the view
print(mistake_view)
# Inspect the first few samples
print(mistake_view.head())
###Output
<Sample: {
'dataset_name': 'cifar10-test',
'id': '5ef384e36696dbdeabc6a88e',
'filepath': '/home/voxel51/fiftyone/cifar10/test/data/00107.jpg',
'tags': BaseList(['test', 'processed']),
'ground_truth': <Classification: {'label': 'deer'}>,
'resnet50': <Classification: {
'label': 'horse',
'logits': array([-0.83586901, -1.28598607, 1.54965878, -0.49650264, -0.40103185,
-0.18043809, -1.0332154 , 5.05314684, -1.21831954, -1.15143788]),
}>,
'mistakenness': 1.0,
}>
<Sample: {
'dataset_name': 'cifar10-test',
'id': '5ef384e36696dbdeabc6a86f',
'filepath': '/home/voxel51/fiftyone/cifar10/test/data/00076.jpg',
'tags': BaseList(['test', 'processed']),
'ground_truth': <Classification: {'label': 'bird'}>,
'resnet50': <Classification: {
'label': 'deer',
'logits': array([-0.72157425, -0.94043797, -0.32308894, -0.19049911, 4.82478857,
-0.35608411, -0.35027471, -0.25426134, -0.77823019, -0.91033494]),
}>,
'mistakenness': 1.0,
}>
<Sample: {
'dataset_name': 'cifar10-test',
'id': '5ef384e36696dbdeabc6a838',
'filepath': '/home/voxel51/fiftyone/cifar10/test/data/00021.jpg',
'tags': BaseList(['test', 'mistake', 'processed']),
'ground_truth': <Classification: {'label': 'frog'}>,
'resnet50': <Classification: {
'label': 'deer',
'logits': array([-0.77428126, -1.11018133, 1.21526551, -0.23978873, 3.74053574,
-0.37081209, 0.20087151, -0.54353052, -1.05138922, -1.06668639]),
}>,
'mistakenness': 1.0,
}>
###Markdown
Let's use the App to visually inspect the results:
###Code
# Launch the FiftyOne App
session = fo.launch_app()
# Open your dataset in the App
session.dataset = dataset
###Output
App launched
###Markdown

###Code
# Show only the samples that were processed
session.view = dataset.match_tag("processed")
###Output
_____no_output_____
###Markdown

###Code
# Show only the samples for which we added label mistakes
session.view = dataset.match_tag("mistake")
###Output
_____no_output_____
###Markdown

###Code
# Show the samples we processed in rank order by the mistakenness
session.view = mistake_view
###Output
_____no_output_____ |
notebook/Classical_Dimension_Test.ipynb | ###Markdown
A Test for Classical Dimension
In this device-independent test, Alice encodes $x\in\{0,1,2,3\}$ into the states $\{|00\rangle, |01\rangle, |10\rangle, |11\rangle\}$. Bob measures in the computational basis and outputs the result, $b\in \{0,1,2,3\}$.
The success probability for quantum and classical scenarios is bounded by
$$\frac{d}{N} \geq \frac{1}{N}\sum_{x=0}^{N-1} p(b=x|x),$$
where $d$ is the number of dimensions in the Hilbert space and $N$ is the number of inputs $x$. Classical systems perform as well as quantum systems for this "guessing game" task. The objective is not to determine quantum or classical in this test, but to verify the ability to send a $d$-dimensional qubit state by using all of the orthogonal axes to encode classical information. This test could also be effective at measuring the noise in a quantum channel.
We now demonstrate an implementation of this test to verify the ability to send two qubits in the computational basis.
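To make the bound concrete for the two scenarios used below, a quick sanity check (plain arithmetic only):

```python
# d/N >= (1/N) * sum_x p(b=x|x): with N = 4 inputs, a 4-dimensional system can reach
# 4/4 = 1.0, while a 2-dimensional (single-qubit) system is capped at 2/4 = 0.5.
N = 4
for d in (4, 2):
    print(f"d = {d}: success probability bound = {d / N}")
```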
###Code
from qiskit import QuantumCircuit,execute, IBMQ
from qiskit.tools.monitor import *
from qiskit.providers.ibmq.managed import IBMQJobManager
provider = IBMQ.load_account()
import matplotlib.pyplot as plt
import numpy as np
import context
from device_independent_test import dimension
from device_independent_test.handshake import HandShake
from device_independent_test.quantum_communicator import LocalDispatcher
###Output
ibmqfactory.load_account:WARNING:2020-07-01 12:36:02,636: Credentials are already in use. The existing account in the session will be replaced.
###Markdown
Running a Dimensionality Test Through HandShake
The implemented circuit is an identity circuit, so this test effectively measures the noise of the quantum channel.
Qiskit Simulator
###Code
# load the dispatcher
dispatcher = LocalDispatcher([provider.get_backend('ibmq_qasm_simulator')])
handshake = HandShake(dispatcher)
# set the tolerance and shots
tolerance = 0.1
shots = 1000
# run the dimensionality test
handshake.dimensionality(tolerance, shots)
###Output
Passed Dimensionality with value: 1.0
###Markdown
IBM Quantum Machine
###Code
dispatcher = LocalDispatcher([provider.get_backend('ibmq_london')])
handshake = HandShake(dispatcher)
tolerance = 0.1
shots = 1000
handshake.dimensionality(tolerance, shots)
###Output
Passed Dimensionality with value: 0.9145
###Markdown
The IBM quantum machine is expected to perform worse than the simulator due to noise, even in this case where very few operations are performed.
Implementation Details: Alice and Bob verify the ability to send two bits.
Alice and Bob can prepare and measure a complete set of orthogonal states on a 4-dimensional Hilbert space. Alice simply encodes the input $x\in\{0,1,2,3\}$ into the bit string and Bob decodes it without error. Assuming a uniform prior distribution for $x$, the success probability for this case satisfies $1 \geq 1/4 \sum_x p(b=x|x)$.
Prepare and Measure Qubit States
###Code
# alice prepares the complete set of orthogonal states
qc_00 = dimension.prepare_bit_circuit([0,0])
qc_01 = dimension.prepare_bit_circuit([0,1])
qc_10 = dimension.prepare_bit_circuit([1,0])
qc_11 = dimension.prepare_bit_circuit([1,1])
circuits = [qc_00, qc_01, qc_10, qc_11]
# Bob measures in the computational basis.
for qc in circuits:
qc.measure_all()
for qc in circuits:
display(qc.draw())
###Output
_____no_output_____
###Markdown
The displayed circuits show the gates that are being run in our simulation. Run on the Qiskit Simulator
###Code
# running tests on quantum computer
def run_job(qc, shots):
job = execute(qc, backend=provider.get_backend('ibmq_qasm_simulator'), shots=shots)
job_monitor(job)
return job
shots = 1000
d4_job_00 = run_job(qc_00, shots)
d4_job_01 = run_job(qc_01, shots)
d4_job_10 = run_job(qc_10, shots)
d4_job_11 = run_job(qc_11, shots)
###Output
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
###Markdown
Compute the Success Probability of the MeasurementsThe optimal score is 1.0 for a scenario where two bits or qubits are used.
###Code
# parsing statistics
d4_counts_00 = d4_job_00.result().get_counts()
d4_counts_01 = d4_job_01.result().get_counts()
d4_counts_10 = d4_job_10.result().get_counts()
d4_counts_11 = d4_job_11.result().get_counts()
# success probability for a d=4 system is 1
p_succ_00 = d4_counts_00["00"]/shots
p_succ_01 = d4_counts_01["10"]/shots # Qiskit orders bit strings little-endian, so the labels are reversed
p_succ_10 = d4_counts_10["01"]/shots # Qiskit orders bit strings little-endian, so the labels are reversed
p_succ_11 = d4_counts_11["11"]/shots
d4_success_probability = (p_succ_00 + p_succ_01 + p_succ_10 + p_succ_11)/4 # divide by 4 because there are 4 inputs.
d4_success_probability
###Output
_____no_output_____
###Markdown
Test Failure: Alice only has access to 1 qubit In this scenario, Alice can only send 1 qubit; the second register is constant. The encoded states are $\{|00\rangle, |00\rangle, |10\rangle, |10\rangle \}$. The measurement success probability is therefore bounded by $0.5 \geq 1/4 \sum_x p(b=x|x)$. Running jobs
###Code
shots = 1000
d2_job_00 = run_job(qc_00, shots)
d2_job_01 = run_job(qc_00, shots)
d2_job_10 = run_job(qc_10, shots)
d2_job_11 = run_job(qc_10, shots)
###Output
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
Job Status: job has successfully run
###Markdown
Calculating Success Probability
###Code
counts_00 = d2_job_00.result().get_counts()
counts_01 = d2_job_01.result().get_counts()
counts_10 = d2_job_10.result().get_counts()
counts_11 = d2_job_11.result().get_counts()
d2_p_succ_00 = counts_00["00"]/shots if ("00" in counts_00) else 0.0
d2_p_succ_01 = counts_01["10"]/shots if ("10" in counts_01) else 0.0 # Qiskit orders bit strings little-endian, so the labels are reversed
d2_p_succ_10 = counts_10["01"]/shots if "01" in counts_10 else 0.0 # Qiskit orders bit strings little-endian, so the labels are reversed
d2_p_succ_11 = counts_11["11"]/shots if "11" in counts_11 else 0.0
d2_success_probability = (d2_p_succ_00 + d2_p_succ_01 + d2_p_succ_10 + d2_p_succ_11)/4
d2_success_probability # the classical bound is 0.5
###Output
_____no_output_____ |
notebooks/02_Basics/01_Linear_Regression/01_TF_Linear_Regression.ipynb | ###Markdown
Linear RegressionIn this lesson we will learn about linear regression. We will understand the basic math behind it, implement it in just NumPy and then in native [TensorFlow](https://www.tensorflow.org/install) and then with Keras. Overview Our goal is to learn a linear model $\hat{y}$ that models $y$ given $X$. $\hat{y} = XW + b$* $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)* $W$ = weights | $\in \mathbb{R}^{DX1}$ * $b$ = bias | $\in \mathbb{R}^{1}$ * **Objective:** Use inputs $X$ to predict the output $\hat{y}$ using a linear model. The model will be a line of best fit that minimizes the distance between the predicted (model's output) and target (ground truth) values. Training data $(X, y)$ is used to train the model and learn the weights $W$ using gradient descent.* **Advantages:** * Computationally simple. * Highly interpretable. * Can account for continuous and categorical features.* **Disadvantages:** * The model will perform well only when the data is linearly separable (for classification). * Usually not used for classification and only for regression.* **Miscellaneous:** You can also use linear regression for binary classification tasks where if the predicted continuous value is above a threshold, it belongs to a certain class. But we will cover better techniques for classification in future lessons and will focus on linear regression for continuous regression tasks only. Generate data We're going to create some simple dummy data to apply linear regression on. It's going to create roughly linear data (`y = 3.5X + noise`); the random noise is added to create realistic data that doesn't perfectly align in a line. Our goal is to have the model converge to a similar linear equation (there will be slight variance since we added some noise).
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
SEED = 1234
NUM_SAMPLES = 50
# Set seed for reproducibility
np.random.seed(SEED)
# Generate synthetic data
def generate_data(num_samples):
"""Generate dummy data for linear regression."""
X = np.array(range(num_samples))
random_noise = np.random.uniform(-10,20,size=num_samples)
y = 3.5*X + random_noise # add some noise
return X, y
# Generate random (linear) data
X, y = generate_data(num_samples=NUM_SAMPLES)
data = np.vstack([X, y]).T
print (data[:5])
# Load into a Pandas DataFrame
df = pd.DataFrame(data, columns=['X', 'y'])
X = df[['X']].values
y = df[['y']].values
df.head()
# Scatter plot
plt.title("Generated data")
plt.scatter(x=df['X'], y=df['y'])
plt.show()
###Output
_____no_output_____
###Markdown
NumPy Now that we have our data prepared, we'll first implement linear regression using just NumPy. This will let us really understand the underlying operations. Split data Since our task is a regression task, we will randomly split our dataset into **three** sets: train, validation and test data splits.* train: used to train our model.* val : used to validate our model's performance during training.* test: used to do an evaluation of our fully trained model.
###Code
TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15
SHUFFLE = True
# Shuffle data
if SHUFFLE:
indices = list(range(NUM_SAMPLES))
np.random.shuffle(indices)
X = X[indices]
y = y[indices]
###Output
_____no_output_____
###Markdown
**NOTE**: Be careful not to shuffle X and y separately because then the inputs won't correspond to the outputs!
###Code
# Split indices
train_start = 0
train_end = int(0.7*NUM_SAMPLES)
val_start = train_end
val_end = int((TRAIN_SIZE+VAL_SIZE)*NUM_SAMPLES)
test_start = val_end
# Split data
X_train = X[train_start:train_end]
y_train = y[train_start:train_end]
X_val = X[val_start:val_end]
y_val = y[val_start:val_end]
X_test = X[test_start:]
y_test = y[test_start:]
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_test: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
###Output
X_train: (35, 1), y_train: (35, 1)
X_val: (7, 1), y_test: (7, 1)
X_test: (8, 1), y_test: (8, 1)
###Markdown
Standardize data We need to standardize our data (zero mean and unit variance) so that our models can optimize quickly during training.$z = \frac{x_i - \mu}{\sigma}$* $z$ = standardized value* $x_i$ = inputs* $\mu$ = mean* $\sigma$ = standard deviation
###Code
def standardize_data(data, mean, std):
return (data - mean)/std
# Determine means and stds
X_mean = np.mean(X_train)
X_std = np.std(X_train)
y_mean = np.mean(y_train)
y_std = np.std(y_train)
###Output
_____no_output_____
###Markdown
We need to treat the validation and test sets as if they were hidden datasets. So we only use the train set to determine the mean and std to avoid biasing our training process.
###Code
# Standardize
X_train = standardize_data(X_train, X_mean, X_std)
y_train = standardize_data(y_train, y_mean, y_std)
X_val = standardize_data(X_val, X_mean, X_std)
y_val = standardize_data(y_val, y_mean, y_std)
X_test = standardize_data(X_test, X_mean, X_std)
y_test = standardize_data(y_test, y_mean, y_std)
# Check (means should be ~0 and std should be ~1)
print (f"X_train: mean: {np.mean(X_train, axis=0)[0]:.1f}, std: {np.std(X_train, axis=0)[0]:.1f}")
print (f"y_train: mean: {np.mean(y_train, axis=0)[0]:.1f}, std: {np.std(y_train, axis=0)[0]:.1f}")
print (f"X_val: mean: {np.mean(X_val, axis=0)[0]:.1f}, std: {np.std(X_val, axis=0)[0]:.1f}")
print (f"y_val: mean: {np.mean(y_val, axis=0)[0]:.1f}, std: {np.std(y_val, axis=0)[0]:.1f}")
print (f"X_test: mean: {np.mean(X_test, axis=0)[0]:.1f}, std: {np.std(X_test, axis=0)[0]:.1f}")
print (f"y_test: mean: {np.mean(y_test, axis=0)[0]:.1f}, std: {np.std(y_test, axis=0)[0]:.1f}")
###Output
X_train: mean: -0.0, std: 1.0
y_train: mean: 0.0, std: 1.0
X_val: mean: -0.5, std: 0.5
y_val: mean: -0.6, std: 0.5
X_test: mean: -0.6, std: 0.9
y_test: mean: -0.6, std: 0.9
###Markdown
Weights Our goal is to learn a linear model $\hat{y}$ that models $y$ given $X$. $\hat{y} = XW + b$* $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)* $W$ = weights | $\in \mathbb{R}^{DX1}$ * $b$ = bias | $\in \mathbb{R}^{1}$ 1. Randomly initialize the model's weights $W$.
###Code
INPUT_DIM = X_train.shape[1] # X is 1-dimensional
OUTPUT_DIM = y_train.shape[1] # y is 1-dimensional
# Initialize random weights
W = 0.01 * np.random.randn(INPUT_DIM, OUTPUT_DIM)
b = np.zeros((1, 1))
print (f"W: {W.shape}")
print (f"b: {b.shape}")
###Output
W: (1, 1)
b: (1, 1)
###Markdown
Model 2. Feed inputs $X$ into the model to receive the predictions $\hat{y}$. * $\hat{y} = XW + b$
###Code
# Forward pass [NX1] · [1X1] = [NX1]
y_pred = np.dot(X_train, W) + b
print (f"y_pred: {y_pred.shape}")
###Output
y_pred: (35, 1)
###Markdown
Loss 3. Compare the predictions $\hat{y}$ with the actual target values $y$ using the objective (cost) function to determine the loss $J$. A common objective function for linear regression is mean squared error (MSE). This function calculates the difference between the predicted and target values and squares it. * $J(\theta) = MSE = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 $ * ${y}$ = ground truth | $\in \mathbb{R}^{NX1}$ * $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$
###Code
# Loss
N = len(y_train)
loss = (1/N) * np.sum((y_train - y_pred)**2)
print (f"loss: {loss:.2f}")
###Output
loss: 0.99
###Markdown
Gradients 4. Calculate the gradient of the loss $J(\theta)$ w.r.t. the model weights. * $J(\theta) = \frac{1}{N} \sum_i (y_i - \hat{y}_i)^2 = \frac{1}{N}\sum_i (y_i - X_iW)^2 $ * $\frac{\partial{J}}{\partial{W}} = -\frac{2}{N} \sum_i (y_i - X_iW) X_i = -\frac{2}{N} \sum_i (y_i - \hat{y}_i) X_i$ * $\frac{\partial{J}}{\partial{b}} = -\frac{2}{N} \sum_i (y_i - X_iW)1 = -\frac{2}{N} \sum_i (y_i - \hat{y}_i)1$
###Code
# Backpropagation
dW = -(2/N) * np.sum((y_train - y_pred) * X_train)
db = -(2/N) * np.sum((y_train - y_pred) * 1)
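# Optional sanity check (not in the original lesson): since J is quadratic in W,
# the analytic gradient dW should match a central finite-difference estimate.
loss_fn = lambda W_: (1/N) * np.sum((y_train - (np.dot(X_train, W_) + b))**2)
eps = 1e-5
dW_numeric = (loss_fn(W + eps) - loss_fn(W - eps)) / (2 * eps)
assert np.isclose(dW, dW_numeric, atol=1e-4)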
###Output
_____no_output_____
###Markdown
**NOTE**: The gradient is the derivative, or the rate of change of a function. It's a vector that points in the direction of greatest increase of a function. For example the gradient of our loss function ($J$) with respect to our weights ($W$) will tell us how to change W so we can maximize $J$. However, we want to minimize our loss so we subtract the gradient from $W$. Update weights 5. Update the weights $W$ using a small learning rate $\alpha$. * $W = W - \alpha\frac{\partial{J}}{\partial{W}}$ * $b = b - \alpha\frac{\partial{J}}{\partial{b}}$
###Code
LEARNING_RATE = 1e-1
# Update weights
W += -LEARNING_RATE * dW
b += -LEARNING_RATE * db
###Output
_____no_output_____
###Markdown
**NOTE**: The learning rate $\alpha$ is a way to control how much we update the weights by. If we choose a small learning rate, it may take a long time for our model to train. However, if we choose a large learning rate, we may overshoot and our training will never converge. The specific learning rate depends on our data and the type of models we use, but it's typically good to explore in the range of $[1e^{-8}, 1e^{-1}]$. We'll explore learning rate update strategies in later lessons. Training 6. Repeat steps 2 - 5 to minimize the loss and train the model.
###Code
NUM_EPOCHS = 50
# Initialize random weights
W = 0.01 * np.random.randn(INPUT_DIM, OUTPUT_DIM)
b = np.zeros((1, ))
# Training loop
for epoch_num in range(NUM_EPOCHS):
# Forward pass [NX1] · [1X1] = [NX1]
y_pred = np.dot(X_train, W) + b
# Loss
loss = (1/len(y_train)) * np.sum((y_train - y_pred)**2)
# show progress
if epoch_num%10 == 0:
print (f"Epoch: {epoch_num}, loss: {loss:.3f}")
# Backpropagation
dW = -(2/N) * np.sum((y_train - y_pred) * X_train)
db = -(2/N) * np.sum((y_train - y_pred) * 1)
# Update weights
W += -LEARNING_RATE * dW
b += -LEARNING_RATE * db
###Output
Epoch: 0, loss: 0.990
Epoch: 10, loss: 0.039
Epoch: 20, loss: 0.028
Epoch: 30, loss: 0.028
Epoch: 40, loss: 0.028
###Markdown
Evaluation
###Code
# Predictions
pred_train = W*X_train + b
pred_test = W*X_test + b
# Train and test MSE
train_mse = np.mean((y_train - pred_train) ** 2)
test_mse = np.mean((y_test - pred_test) ** 2)
print (f"train_MSE: {train_mse:.2f}, test_MSE: {test_mse:.2f}")
# Figure size
plt.figure(figsize=(15,5))
# Plot train data
plt.subplot(1, 2, 1)
plt.title("Train")
plt.scatter(X_train, y_train, label='y_train')
plt.plot(X_train, pred_train, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Plot test data
plt.subplot(1, 2, 2)
plt.title("Test")
plt.scatter(X_test, y_test, label='y_test')
plt.plot(X_test, pred_test, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Show plots
plt.show()
###Output
_____no_output_____
###Markdown
Interpretability Since we standardized our inputs and outputs, our weights were fit to those standardized values. So we need to unstandardize our weights so we can compare them to our true weight (3.5). Note that both X and y were standardized.$\hat{y}_{scaled} = b_{scaled} + \sum_{j=1}^{k}W_{{scaled}_j}x_{{scaled}_j}$* $\hat{y}_{scaled} = \frac{\hat{y} - \bar{y}}{\sigma_y}$* $x_{scaled} = \frac{x_j - \bar{x}_j}{\sigma_j}$$\frac{\hat{y} - \bar{y}}{\sigma_y} = b_{scaled} + \sum_{j=1}^{k}W_{{scaled}_j}\frac{x_j - \bar{x}_j}{\sigma_j}$$ \hat{y}_{scaled} = \frac{\hat{y}_{unscaled} - \bar{y}}{\sigma_y} = {b_{scaled}} + \sum_{j=1}^{k} {W}_{{scaled}_j} (\frac{x_j - \bar{x}_j}{\sigma_j}) $$\hat{y}_{unscaled} = b_{scaled}\sigma_y + \bar{y} - \sum_{j=1}^{k} {W}_{{scaled}_j}(\frac{\sigma_y}{\sigma_j})\bar{x}_j + \sum_{j=1}^{k}{W}_{{scaled}_j}(\frac{\sigma_y}{\sigma_j})x_j $In the expression above, we can recognize the form $\hat{y}_{unscaled} = W_{unscaled}x + b_{unscaled} $ where* $W_{unscaled} = \sum_{j=1}^{k}{W}_j(\frac{\sigma_y}{\sigma_j}) $* $b_{unscaled} = b_{scaled}\sigma_y + \bar{y} - \sum_{j=1}^{k} {W}_j(\frac{\sigma_y}{\sigma_j})\bar{x}_j$
###Code
# Unscaled weights
W_unscaled = W * (y_std/X_std)
b_unscaled = b * y_std + y_mean - np.sum(W_unscaled*X_mean)
print ("[actual] y = 3.5X + noise")
print (f"[model] y_hat = {W_unscaled[0][0]:.1f}X + {b_unscaled[0]:.1f}")
###Output
[actual] y = 3.5X + noise
[model] y_hat = 3.4X + 7.8
###Markdown
TensorFlow We will first implement linear regression using native TensorFlow without the ease of Keras. This will help us understand the hidden operations and also appreciate all the work that Keras does for us.
###Code
%tensorflow_version 2.x
import tensorflow as tf
# Set seed for reproducibility
tf.random.set_seed(SEED)
###Output
_____no_output_____
###Markdown
Split data When we're working with TensorFlow, we normally use scikit-learn's [splitting functions](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.model_selection) to split our data.
###Code
from sklearn.model_selection import train_test_split
TRAIN_SIZE = 0.7
VAL_SIZE = 0.15
TEST_SIZE = 0.15
SHUFFLE = True
def train_val_test_split(X, y, val_size, test_size, shuffle):
"""Split data into train/val/test datasets.
"""
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=test_size, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=val_size, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
###Output
_____no_output_____
###Markdown
The `train_val_test_split` function essentially splits our data twice. First, we separate out the test set. Then we split the remaining data into train and validation sets.
###Code
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X, y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_test: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
###Output
X_train: (35, 1), y_train: (35, 1)
X_val: (7, 1), y_test: (7, 1)
X_test: (8, 1), y_test: (8, 1)
###Markdown
Standardize data We can also use scikit-learn for [preprocessing and normalization](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing).
###Code
from sklearn.preprocessing import StandardScaler
# Standardize the data (mean=0, std=1) using training data
X_scaler = StandardScaler().fit(X_train)
y_scaler = StandardScaler().fit(y_train)
# Apply scaler on training and test data
X_train = X_scaler.transform(X_train)
y_train = y_scaler.transform(y_train).ravel().reshape(-1, 1)
X_val = X_scaler.transform(X_val)
y_val = y_scaler.transform(y_val).ravel().reshape(-1, 1)
X_test = X_scaler.transform(X_test)
y_test = y_scaler.transform(y_test).ravel().reshape(-1, 1)
# Check (means should be ~0 and std should be ~1)
print (f"X_train: mean: {np.mean(X_train, axis=0)[0]:.1f}, std: {np.std(X_train, axis=0)[0]:.1f}")
print (f"y_train: mean: {np.mean(y_train, axis=0)[0]:.1f}, std: {np.std(y_train, axis=0)[0]:.1f}")
print (f"X_val: mean: {np.mean(X_val, axis=0)[0]:.1f}, std: {np.std(X_val, axis=0)[0]:.1f}")
print (f"y_val: mean: {np.mean(y_val, axis=0)[0]:.1f}, std: {np.std(y_val, axis=0)[0]:.1f}")
print (f"X_test: mean: {np.mean(X_test, axis=0)[0]:.1f}, std: {np.std(X_test, axis=0)[0]:.1f}")
print (f"y_test: mean: {np.mean(y_test, axis=0)[0]:.1f}, std: {np.std(y_test, axis=0)[0]:.1f}")
###Output
X_train: mean: -0.0, std: 1.0
y_train: mean: 0.0, std: 1.0
X_val: mean: 0.1, std: 0.6
y_val: mean: 0.1, std: 0.7
X_test: mean: -0.3, std: 0.7
y_test: mean: -0.3, std: 0.6
###Markdown
Weights
###Code
# Weights
W = tf.Variable(0.)
b = tf.Variable(0.)
###Output
_____no_output_____
###Markdown
Model
###Code
# Model
def model(x):
return x*W + b
# Forward pass
print (X_train[0])
print (model(X_train[0]))
###Output
[0.96849907]
tf.Tensor([0.], shape=(1,), dtype=float32)
###Markdown
Loss
###Code
# Loss
def mean_squared_error(y_pred, y_true):
return tf.reduce_mean(tf.square(y_pred-y_true))
# Sample loss
print (f"loss: {mean_squared_error(model(X_train), y_train)}")
###Output
loss: 1.0
###Markdown
Training
###Code
for epoch in range(NUM_EPOCHS):
with tf.GradientTape() as tape:
# Forward pass
y_pred = model(X_train)
# Loss
train_loss = mean_squared_error(y_pred=y_pred, y_true=y_train)
# Gradients
gradients = tape.gradient(train_loss, [W, b])
# Update weights
W.assign_sub(gradients[0] * LEARNING_RATE)
b.assign_sub(gradients[1] * LEARNING_RATE)
# Metrics
if (epoch%5) == 0:
print (f"Epoch: {epoch} | train_loss: {train_loss.numpy():.2f}")
###Output
Epoch: 0 | train_loss: 1.00
Epoch: 5 | train_loss: 0.13
Epoch: 10 | train_loss: 0.03
Epoch: 15 | train_loss: 0.02
Epoch: 20 | train_loss: 0.02
Epoch: 25 | train_loss: 0.02
Epoch: 30 | train_loss: 0.02
Epoch: 35 | train_loss: 0.02
Epoch: 40 | train_loss: 0.02
Epoch: 45 | train_loss: 0.02
###Markdown
Evaluation
###Code
# Predictions
pred_train = W*X_train + b
pred_test = W*X_test + b
# Train and test MSE
train_mse = np.mean((y_train - pred_train) ** 2)
test_mse = np.mean((y_test - pred_test) ** 2)
print (f"train_MSE: {train_mse:.2f}, test_MSE: {test_mse:.2f}")
# Figure size
plt.figure(figsize=(15,5))
# Plot train data
plt.subplot(1, 2, 1)
plt.title("Train")
plt.scatter(X_train, y_train, label='y_train')
plt.plot(X_train, model(X_train), color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Plot test data
plt.subplot(1, 2, 2)
plt.title("Test")
plt.scatter(X_test, y_test, label='y_test')
plt.plot(X_test, model(X_test), color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Show plots
plt.show()
###Output
_____no_output_____
###Markdown
Interpretability
###Code
# Unscaled weights
W_unscaled = W.numpy() * (y_std/X_std)
b_unscaled = b.numpy() * y_std + y_mean - np.sum(W_unscaled*X_mean)
print ("[actual] y = 3.5X + noise")
print (f"[model] y_hat = {W_unscaled:.1f}X + {b_unscaled:.1f}")
###Output
[actual] y = 3.5X + noise
[model] y_hat = 3.4X + 7.6
###Markdown
TensorFlow + Keras Now that we've implemented linear regression with NumPy and native TensorFlow, let's do the same with TensorFlow + Keras. Weights We will be using [Dense layers](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense) in our implementation. The layer applies an activation function to the dot product of the layer's inputs and its weights.$ z = \text{activation}(XW)$
###Code
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
x = Input(shape=(INPUT_DIM,))
fc = Dense(units=OUTPUT_DIM, activation='linear')
z = fc(x)
W, b = fc.weights
print (f"z {z.shape} = x {x.shape} · W {W.shape} + b {b.shape}")
###Output
z (None, 1) = x (None, 1) · W (1, 1) + b (1,)
###Markdown
Model Our goal is to learn a linear model $\hat{y}$ that models $y$ given $X$. $\hat{y} = XW + b$* $\hat{y}$ = predictions | $\in \mathbb{R}^{NX1}$ ($N$ is the number of samples)* $X$ = inputs | $\in \mathbb{R}^{NXD}$ ($D$ is the number of features)* $W$ = weights | $\in \mathbb{R}^{DX1}$ * $b$ = bias | $\in \mathbb{R}^{1}$
###Code
from tensorflow.keras.models import Model
from tensorflow.keras.utils import plot_model
class LinearRegression(Model):
def __init__(self, output_dim):
super(LinearRegression, self).__init__(name='linear_regression')
self.fc1 = Dense(units=output_dim, activation='linear', name='W')
def call(self, x_in, training=False):
y_pred = self.fc1(x_in)
return y_pred
def summary(self, input_shape):
x_in = Input(shape=input_shape, name='X')
summary = Model(inputs=x_in, outputs=self.call(x_in), name=self.name)
return plot_model(summary, show_shapes=True) # forward pass
# Initialize model
model = LinearRegression(output_dim=OUTPUT_DIM)
model.summary(input_shape=(INPUT_DIM,))
###Output
_____no_output_____
###Markdown
Loss
###Code
from tensorflow.keras.losses import MeanSquaredError
mse = MeanSquaredError()
loss = mse([0., 0., 1., 1.], [1., 1., 1., 0.])
print('Loss: ', loss.numpy())
###Output
Loss: 0.75
###Markdown
Metrics
###Code
from tensorflow.keras.metrics import MeanAbsolutePercentageError
metric = MeanAbsolutePercentageError()
metric.update_state([0.5, 0.5, 1., 1.], [0.5, 1., 1., 0.])
print('Final result: ', metric.result().numpy())
###Output
Final result: 50.0
###Markdown
Optimizer When we implemented linear regression with just NumPy, we used batch gradient descent to update our weights. But there are actually many different [gradient descent optimization algorithms](https://ruder.io/optimizing-gradient-descent/) to choose from and it depends on the situation. However, the [ADAM optimizer](https://ruder.io/optimizing-gradient-descent/index.htmlgradientdescentoptimizationalgorithms/adam) has become a standard algorithm for most cases.
###Code
from tensorflow.keras.optimizers import Adam
# Optimizer
optimizer = Adam(lr=LEARNING_RATE)
###Output
_____no_output_____
###Markdown
Training Here is the full list of options for [optimizer](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers), [loss](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses) and [metrics](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/metrics).
###Code
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=MeanSquaredError(),
metrics=[MeanAbsolutePercentageError()])
###Output
_____no_output_____
###Markdown
When we implemented linear regression from scratch, we used batch gradient descent to update our weights. This means that we calculated the gradients using the entire training dataset. We also could've updated our weights using stochastic gradient descent (SGD), where we pass in one training example at a time. The current standard is mini-batch gradient descent, which strikes a balance between batch and stochastic GD, where we update the weights using a mini-batch of n (`BATCH_SIZE`) samples; a rough sketch of the batching loop is shown below.
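This is only an illustration (the `minibatches` helper is not part of Keras); `model.fit` handles the equivalent batching, shuffling and weight updates for us.

```python
# Simplified illustration of mini-batch iteration (not Keras's actual implementation).
def minibatches(X, y, batch_size):
    for start in range(0, len(X), batch_size):
        yield X[start:start + batch_size], y[start:start + batch_size]

for X_batch, y_batch in minibatches(X_train, y_train, batch_size=10):
    print(X_batch.shape, y_batch.shape)  # one weight update would happen per batch
```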
###Code
BATCH_SIZE = 10
# Training
history = model.fit(x=X_train,
y=y_train,
validation_data=(X_val, y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
shuffle=False,
verbose=1)
# Training metrics
print (f"metrics: {list(history.history.keys())}")
print (f"final val_loss: {history.history['val_loss'][-1]:.2f}")
###Output
metrics: ['loss', 'mean_absolute_percentage_error', 'val_loss', 'val_mean_absolute_percentage_error']
final val_loss: 0.02
###Markdown
Evaluation
###Code
# Predictions
pred_train = model.predict(X_train)
pred_test = model.predict(X_test)
# Train metrics
train_mse = tf.keras.metrics.MeanSquaredError()
train_mse.update_state(y_train, pred_train)
print(f'train_mse: {train_mse.result().numpy(): .2f}')
# Test metrics
test_mse = tf.keras.metrics.MeanSquaredError()
test_mse.update_state(y_test, pred_test)
print(f'test_mse: {test_mse.result().numpy(): .2f}')
###Output
test_mse: 0.01
###Markdown
Since we only have one feature, it's easy to visually inspect the model.
###Code
# Figure size
plt.figure(figsize=(15,5))
# Plot train data
plt.subplot(1, 2, 1)
plt.title("Train")
plt.scatter(X_train, y_train, label='y_train')
plt.plot(X_train, pred_train, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Plot test data
plt.subplot(1, 2, 2)
plt.title("Test")
plt.scatter(X_test, y_test, label='y_test')
plt.plot(X_test, pred_test, color='red', linewidth=1, linestyle='-', label='model')
plt.legend(loc='lower right')
# Show plots
plt.show()
###Output
_____no_output_____
###Markdown
Inference After training a model, we can use it to predict on new data.
###Code
# Feed in your own inputs
sample_indices = [10, 15, 25]
X_infer = np.array(sample_indices, dtype=np.float32)
X_infer = X_scaler.transform(X_infer.reshape(-1, 1))
###Output
_____no_output_____
###Markdown
Recall that we need to unstandardize our predictions.$ \hat{y}_{scaled} = \frac{\hat{y} - \mu_{\hat{y}}}{\sigma_{\hat{y}}} $$ \hat{y} = \hat{y}_{scaled} * \sigma_{\hat{y}} + \mu_{\hat{y}} $
###Code
# Unstandardize predictions
pred_infer = model.predict(X_infer) * np.sqrt(y_scaler.var_) + y_scaler.mean_
for i, index in enumerate(sample_indices):
print (f"{df.iloc[index]['y']:.2f} (actual) → {pred_infer[i][0]:.2f} (predicted)")
###Output
35.73 (actual) → 43.36 (predicted)
59.34 (actual) → 60.37 (predicted)
97.04 (actual) → 94.40 (predicted)
###Markdown
Interpretability Linear regression offers the great advantage of being highly interpretable. Each feature has a coefficient which signifies its importance/impact on the output variable y. We can interpret our coefficient as follows: by increasing X by 1 unit, we increase y by $W$ (~3.4) units.
###Code
# Unstandardize coefficients
W = model.layers[0].get_weights()[0][0][0]
b = model.layers[0].get_weights()[1][0]
W_unscaled = W * (y_scaler.scale_/X_scaler.scale_)
b_unscaled = b * y_scaler.scale_ + y_scaler.mean_ - np.sum(W_unscaled*X_scaler.mean_)
print ("[actual] y = 3.5X + noise")
print (f"[model] y_hat = {W_unscaled[0]:.1f}X + {b_unscaled[0]:.1f}")
###Output
[actual] y = 3.5X + noise
[model] y_hat = 3.4X + 9.3
###Markdown
Regularization Regularization helps decrease overfitting. Below is L2 regularization (ridge regression). There are many forms of regularization, but they all work to reduce overfitting in our models. With L2 regularization, we are penalizing weights with large magnitudes by decaying them. Having certain weights with high magnitudes will lead to preferential bias with the inputs, and we want the model to work with all the inputs and not just a select few. There are also other types of regularization like L1 (lasso regression), which is useful for creating sparse models where some feature coefficients are zeroed out, or elastic net, which combines L1 and L2 penalties. **Note**: Regularization is not just for linear regression. You can use it to regularize any model's weights, including the ones we will look at in future lessons. $ J(\theta) = \frac{1}{2}\sum_{i}(X_iW - y_i)^2 + \frac{\lambda}{2}W^TW$$ \frac{\partial{J}}{\partial{W}} = X (\hat{y} - y) + \lambda W $$W = W - \alpha\frac{\partial{J}}{\partial{W}}$* $\lambda$ is the regularization coefficient
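As a side note, Keras also provides `l1` and combined `l1_l2` regularizers; swapping them into the Dense layer is a one-line change. The snippet below is only an illustrative sketch (the layer names and penalty values are arbitrary choices, not part of this lesson's model):

```python
from tensorflow.keras.regularizers import l1, l1_l2

# Lasso-style penalty (tends to zero out some weights)
fc_l1 = Dense(units=OUTPUT_DIM, activation='linear', kernel_regularizer=l1(1e-2))

# Elastic net: combines L1 and L2 penalties
fc_elastic = Dense(units=OUTPUT_DIM, activation='linear',
                   kernel_regularizer=l1_l2(l1=1e-2, l2=1e-2))
```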
###Code
from tensorflow.keras.regularizers import l2
L2_LAMBDA = 1e-2
class L2LinearRegression(Model):
def __init__(self, l2_lambda, output_dim):
super(L2LinearRegression, self).__init__(name='l2_linear_regression')
self.fc1 = Dense(units=output_dim, activation='linear',
kernel_regularizer=l2(l=l2_lambda), name='W')
def call(self, x_in, training=False):
y_pred = self.fc1(x_in)
return y_pred
def summary(self, input_shape):
x_in = Input(shape=input_shape, name='X')
summary = Model(inputs=x_in, outputs=self.call(x_in), name=self.name)
return plot_model(summary, show_shapes=True) # forward pass
# Initialize model
model = L2LinearRegression(l2_lambda=L2_LAMBDA, output_dim=OUTPUT_DIM)
model.summary(input_shape=(INPUT_DIM,))
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=MeanSquaredError(),
metrics=[MeanAbsolutePercentageError()])
# Training
model.fit(x=X_train,
y=y_train,
validation_data=(X_val, y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
shuffle=SHUFFLE,
verbose=1)
# Predictions
pred_train = model.predict(X_train)
pred_test = model.predict(X_test)
# Train metrics
train_mse = tf.keras.metrics.MeanSquaredError()
train_mse.update_state(y_train, pred_train)
print(f'train_mse: {train_mse.result().numpy(): .2f}')
# Test metrics
test_mse = tf.keras.metrics.MeanSquaredError()
test_mse.update_state(y_test, pred_test)
print(f'test_mse: {test_mse.result().numpy(): .2f}')
###Output
test_mse: 0.01
|
Kaggle_Competitions/FoodDemand.ipynb | ###Markdown
Food Demand Forecasting This notebook serves as the codebase for demand forecasting of food items for a meal delivery service. The competition is hosted on Analytics Vidhya.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
from fastai.imports import *
from fastai.structured import *
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
def display_all(df):
with pd.option_context('display.max_rows', 1000, 'display.max_columns', 1000):
display(df)
def cat_to_codes(df):
for n,c in df.items():
if c.dtype.name=='category':
df[n] = df[n].cat.codes
PATH = './data/HousePrices/'
df_raw = pd.read_csv(f'{PATH}train.csv', low_memory=False)
display_all(df_raw.head())
###Output
_____no_output_____ |
Bloque 2 - Data_Analyst/02_Manejo de datos/02_Missings y outliers/03_Outliers.ipynb | ###Markdown
Outliers (atypical values) While developing our data analysis, whether with a purely analytical goal or to preprocess the data before using our prediction algorithms, we may come across some values that, because of their nature, we must take into account so that they do not affect our study. Within this group, the most notable are null values, which we just saw in the previous notebook, and atypical values (or *outliers*), which we will look at next. In this case, unlike null values, where it is more common to refer to them as nulls rather than missings, for atypical values the English term "outliers" is usually preferred. Getting to know outliers According to Wikipedia:>In statistics, an outlier is an observation that is numerically distant from the rest of the data, so that statistics derived from data sets that include this type of value will often be misleading. The definition above suggests that an outlier is something different from the crowd, from the rest of the data. And although it is often said that everyone should be themselves, in this case standing out from the norm may not be such a good thing. Let's start with something simple. Look at the following list of values. Do you see anything different?
###Code
valores = [15, 16, 19, 18, 54, 17, 17, 11, 19]
valores
###Output
_____no_output_____
###Markdown
Indeed, there is one value that stands out from the rest. If we look closely, all the values are between 15 and 20... Well, all except 54!! That one is an outlier. Data and outliers Now that we know what an outlier is, many questions may come to mind, for example, "how did that value get in there?". A data analysis project always starts with obtaining the data to analyze, and this is where these rascals take the opportunity to sneak into our data. They are so sly that it would be almost impossible to detect them at this point, since they may take advantage of a failure during data collection or they may simply be like that by nature, indicating some variation in our data. But enough talk, let's see data, I want to see data. In this case, we are going to use a football dataset as an example that... No, not football again. Better an example with cricket players. Let's suppose we are working as sports analysts and want to study the performance of the Indian cricket team, which we will do based on each player's points (whose names are totally real):
###Code
import pandas as pd
scores = pd.DataFrame([{"Player": "Player1", "Score": 500},
{"Player": "Player2", "Score": 350},
{"Player": "Player3", "Score": 10},
{"Player": "Player4", "Score": 450},
{"Player": "Player5", "Score": 300}])
scores
###Output
_____no_output_____
###Markdown
If we look at the data, we can see that every player except "Player3" has achieved a score of 300 or more, while "Player3" only managed 10, which may mean that either we made a mistake when recording his score or this player should consider switching sports. Now that we know outliers can be an error or just a variation, how do we decide whether they are important or not? Well, it is quite simple: if they are the result of an error, we can ignore them; but if it is just a variation in the data, we should think a bit more. Before trying to understand whether to ignore outliers or not, we need to know the ways to identify them. Identifying outliers Given the above, we might think this is a piece of cake: take a look at the data, pick out whatever stands out a bit, and that's it, as we just did for the cricket example. Well... no. We were working with a dataset of 5 records and 2 columns, but we will normally have more, much more. Imagine facing a dataset with 500+ columns and 10k+ rows; could you also find the outliers manually at a glance? You probably could, but it would take you quite a while, so it is better to use graphical or statistical methods that make the job easier. In this notebook we will discuss some of them. To do so, we will start with a dataset of Boston house prices, which is included in the ``sklearn`` library, which in the future will become one of our best friends once we get into ``feature engineering`` and look at the learning algorithms. So, let's get started.
###Code
from sklearn.datasets import load_boston
boston = load_boston()
x = boston.data
columns = boston.feature_names
# Create the DataFrame:
boston_df = pd.DataFrame(boston.data)
boston_df.columns = columns
print("Filas: %s, Columnas: %s" %(boston_df.shape))
boston_df.head()
###Output
Rows: 506, Columns: 13
###Markdown
The features shown in the dataset will be used to look for any outliers. Looking at the data above, it seems we only have numeric values, that is, we do not need to do any data formatting. (Epic music.) We can distinguish two types of analysis for finding outliers: univariate (outlier analysis of one variable) and multivariate (outlier analysis of two or more variables). To keep things simple, we will start with the basic outlier detection method and slowly move towards more advanced methods. Graphical analysis In this section we will see how to detect outliers visually, using certain graphical representations. Do not worry if you do not understand them yet; at the end of this block (Block 1 - Data Analysis) we will see plenty of ways to represent data. In addition, as a teaser, in a couple of notebooks we will see an introduction to exploratory analysis, where we will introduce certain visualizations. But for now, let's focus on outliers: Box plot According to Wikipedia:>Also known as a box-and-whisker diagram, box plot, box-plot or boxplot. It is a standardized method for graphically representing a series of numerical data through their quartiles. In this way, the box plot shows at a glance the median and the quartiles of the data, and can also represent their outliers as individual points. The definition above suggests that if there is an outlier it will be plotted as a point in the box plot, while the rest of the population is grouped inside the box. Let's see it with an example. To do so, we will use the ``seaborn`` library, which will be officially introduced in future notebooks:
###Code
import seaborn as sns
# For example, let's plot the "DIS" column
sns.boxplot(x=boston_df['DIS'])
###Output
_____no_output_____
###Markdown
As we can see, the box plot above shows three points between 10 and 12. These are the outliers, since they are not included in the box with the other observations, that is, they are not close to the quartiles. In this way, we are analyzing univariate outliers, i.e., we are using the ``DIS`` column alone to check its outliers, without taking anything else into account. However, we can also do multivariate outlier analysis. And how do we do that? Can we do it with the box plot? Well, the most accurate answer would be: it depends. If we had categorical values, we could use them together with any continuous variable and do a multivariate outlier analysis. Unfortunately, since we have no categorical variables (remember, they are all numeric), we had better forget about using the box plot for this multivariate outlier analysis. Scatter plot According to Wikipedia:> A scatter plot, scatter diagram or bubble chart is a type of mathematical diagram that uses Cartesian coordinates to display the values of two variables for a set of data. The data are displayed as a collection of points, each one with the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis. As the definition suggests, the scatter plot is the collection of points showing values for two variables. We can try to draw a scatter plot for two variables of our housing dataset. Let's see an example with the ``INDUS`` and ``TAX`` columns:
###Code
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(16,8))
ax.scatter(boston_df['INDUS'], boston_df['TAX'])
ax.set_xlabel('Proportion of non-retail business acres per town')
ax.set_ylabel('Full-value property tax rate per $10,000')
plt.show()
###Output
_____no_output_____
###Markdown
Looking at this plot, we can see that most of the data points are located in the lower left. However, we can also see that some of them stand apart from the rest, isolated towards the upper right. Mathematical analysis So far we have seen how to detect outliers the easy way, with plots. However, the most useful approach is the mathematical one, since it will let us programmatically obtain which data points are most likely to be outliers and then apply some treatment to them. Z-score (standard score) According to Wikipedia:> The term standard score, z-score, standardized variable or normalized variable is used in statistics to compare data coming from different samples or populations, and is defined as the number of standard deviations that a given value lies away from the mean of its sample or population. The intuition behind the Z-score is to describe any data point by finding its relationship with the standard deviation and the mean of the group of data points. The value obtained through the z-score represents the equivalent in a normal distribution, that is, a distribution with mean 0 and standard deviation equal to 1. So, how can this help us identify outliers? Well, since computing the Z-score scales and centers the data, we can look for the data points that are too far from zero. These points will be treated as outliers. In most cases a threshold of 3 or -3 is used, that is, if the Z-score value is greater than 3 or less than -3 respectively, that data point will be identified as an outlier. To implement it in our code, we will use a function defined in the ``scipy`` library:
###Code
from scipy import stats
import numpy as np
z = np.abs(stats.zscore(boston_df))
print(z)
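# For reference (not part of the original notebook): the same statistic can be
# computed by hand for a single column; this matches scipy.stats.zscore, which
# uses the population standard deviation (ddof=0).
z_dis_manual = (boston_df['DIS'] - boston_df['DIS'].mean()) / boston_df['DIS'].std(ddof=0)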
###Output
[[0.41978194 0.28482986 1.2879095 ... 1.45900038 0.44105193 1.0755623 ]
[0.41733926 0.48772236 0.59338101 ... 0.30309415 0.44105193 0.49243937]
[0.41734159 0.48772236 0.59338101 ... 0.30309415 0.39642699 1.2087274 ]
...
[0.41344658 0.48772236 0.11573841 ... 1.17646583 0.44105193 0.98304761]
[0.40776407 0.48772236 0.11573841 ... 1.17646583 0.4032249 0.86530163]
[0.41500016 0.48772236 0.11573841 ... 1.17646583 0.44105193 0.66905833]]
###Markdown
From what we are seeing here alone, it would be hard to tell by eye which values are the outliers. To do so, we will have to apply a filter, namely the threshold mentioned above when we said that a value would be considered an outlier if it falls outside the range [-3, 3]. Since we computed the absolute value, we simply have to keep the values greater than 3 to find the outliers. We have seen different ways to tackle this kind of filtering, but in this case we will use NumPy's ``where`` function:
###Code
umbral = 3
print(np.where(z > umbral))
###Output
(array([ 55, 56, 57, 102, 141, 142, 152, 154, 155, 160, 162, 163, 199,
200, 201, 202, 203, 204, 208, 209, 210, 211, 212, 216, 218, 219,
220, 221, 222, 225, 234, 236, 256, 257, 262, 269, 273, 274, 276,
277, 282, 283, 283, 284, 347, 351, 352, 353, 353, 354, 355, 356,
357, 358, 363, 364, 364, 365, 367, 369, 370, 372, 373, 374, 374,
380, 398, 404, 405, 406, 410, 410, 411, 412, 412, 414, 414, 415,
416, 418, 418, 419, 423, 424, 425, 426, 427, 427, 429, 431, 436,
437, 438, 445, 450, 454, 455, 456, 457, 466], dtype=int64), array([ 1, 1, 1, 11, 12, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, 1,
1, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 5, 3, 3, 1, 5,
5, 3, 3, 3, 3, 3, 3, 1, 3, 1, 1, 7, 7, 1, 7, 7, 7,
3, 3, 3, 3, 3, 5, 5, 5, 3, 3, 3, 12, 5, 12, 0, 0, 0,
0, 5, 0, 11, 11, 11, 12, 0, 12, 11, 11, 0, 11, 11, 11, 11, 11,
11, 0, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11],
dtype=int64))
###Markdown
What this filter returns is a tuple with 2 arrays that refer to the position of each outlier, where the first array indicates the row number and the second one the column number:
###Code
print(z[55][1])
###Output
3.375038763517309
###Markdown
So record number 55 of column 1 (``ZN``) is an outlier. And the same goes for the rest of the values whose positions we obtained above. IQR score (interquartile range) The box plot uses the interquartile-range-based method to display the data and the outliers. However, to obtain a list of identified outliers, we will need to use the mathematical formula and retrieve the outlying data. According to Wikipedia:> The interquartile range is a suitable measure of variability when the measure of central position used is the median. It is defined as the difference between the third quartile (Q3) and the first quartile (Q1), that is: IQR = Q3 - Q1. Half of the interquartile range is known as the quartile deviation (QD), and it is affected very little by extreme values. This makes it a good measure of spread for skewed distributions: QD = IQR/2 = (Q3 - Q1)/2. It is used to build box-and-whisker plots (box plots), which help visualize the variability of a variable and compare distributions of the same variable, as well as to locate extreme values. It is a measure of spread similar to the standard deviation or the variance, but much more robust to outliers. The IQR is somewhat similar to the Z-score in the sense of finding the data distribution and then keeping a threshold to identify the outliers. We can combine the box plot with the IQR and use it to find the list of outliers, as we did with the standard-score calculation. First, we will compute the IQR:
###Code
Q1 = boston_df.quantile(0.25)
Q3 = boston_df.quantile(0.75)
IQR = Q3 - Q1
print(IQR)
###Output
CRIM 3.595038
ZN 12.500000
INDUS 12.910000
CHAS 0.000000
NOX 0.175000
RM 0.738000
AGE 49.050000
DIS 3.088250
RAD 20.000000
TAX 387.000000
PTRATIO 2.800000
B 20.847500
LSTAT 10.005000
dtype: float64
###Markdown
Now that we have the IQR values, we can move on to detecting the outliers. To do so, we will apply a mask to the DataFrame that flags the values falling outside the interval defined by **[Q1 - 1.5 IQR, Q3 + 1.5 IQR]**.
###Code
(boston_df < (Q1 - 1.5 * IQR)) | (boston_df > (Q3 + 1.5 * IQR))
###Output
_____no_output_____
###Markdown
Now that we know how to detect outliers, it is important to understand whether it is necessary to remove them or correct them. Next, we will look at some methods for removing outliers and, if necessary, imputing new values. Working with outliers When we detect an outlier while carrying out our data analysis, we face a difficult decision (the same one as in the case of nulls): how should we treat it? Do we remove it or correct it? Before talking about that, we will look at some methods for removing outliers. Z-score In the previous section we saw how outliers can be detected using the Z-score, but now we want to remove or filter out the outliers and obtain clean data. This can be done very easily, building on what we did before, since it only takes a filter (albeit a somewhat complex one):
###Code
(z < 3).all(axis=1)
boston_df[(z < 3).all(axis=1)]
###Output
_____no_output_____
###Markdown
If we look closely, the result returned by this operation is a DataFrame with 415 rows, that is, a difference of more than 90 rows with respect to the original dataset. But what happened? Let's look at the filter statement:
###Code
(z < 3).all(axis=1)
###Output
_____no_output_____
###Markdown
What we are doing here is simply computing which values fall outside the threshold ``(z < 3)``. Then, we keep only those rows (``axis=1``) where everything is ``True`` (using the ``all()`` method). This way, if we apply this mask to our DataFrame, it will return another one in which any row containing at least one outlier according to the Z-score criterion has been removed. IQR score Just as we saw with the Z-score, we can use the previously computed IQR score to filter out the outliers, keeping only the valid values:
###Code
~((boston_df < (Q1 - 1.5 * IQR)) |(boston_df > (Q3 + 1.5 * IQR))).any(axis=1)
(boston_df['DIS'] < (Q1['DIS'] - 1.5 * IQR['DIS'])) |(boston_df['DIS'] > (Q3['DIS'] + 1.5 * IQR['DIS']))
mask_2 = ~((boston_df < (Q1 - 1.5 * IQR)) |(boston_df > (Q3 + 1.5 * IQR))).any(axis=1)
boston_df[mask_2]
###Output
_____no_output_____
###Markdown
As we can see, we are now left with a much smaller DataFrame, since this criterion is much less permissive. If we want to understand what we are doing in the mask, we can analyze it based on what we saw in the previous section. There, we said that we would consider as an outlier anything outside the range [Q1 - 1.5 IQR, Q3 + 1.5 IQR]. Therefore, we combine both conditions with an OR to detect that a value is an outlier. In the same way as before, we aggregate by rows to check whether or not there is at least one outlier in each row. Up to this point, we would be obtaining the rows with some outlier, that is, we would have a ``True`` for every row with outliers. However, since what we want is to remove the outliers, we wrap the condition in a negation, so that we keep those rows that have no outliers at all. Finally, whether an outlier should be removed or replaced is a somewhat more complex matter. Basically, incorrect or erroneously calculated data can be identified as outliers and should be discarded, but at the same time we may also need to correct them, since they can change the level of the data, that is, they could end up causing problems when we model the data. For example, 5 people receive salaries of 10K, 20K, 30K, 40K and 50K and, suddenly, one of them starts receiving a salary of 100K. In this case, putting ourselves in the employer's shoes, we have carried out a study on salaries and we come across this. The new salary update may be seen as biased, and it may be necessary to raise another employee's salary as well to keep the balance. Therefore, there can be several reasons why we need to understand and correct outliers. Exercise 1 1. We have a set of characteristics of different cars defined in the file "coches.csv". Look at the horsepower (``hp`` column); do you notice anything strange? 2. Identify the outliers graphically. 3. Could you point out whether there is any outlier in the relationship between the quarter-mile time (``qsec``) and the engine displacement (``disp``)? 4. Using the interquartile-range criterion, identify the outliers we saw in part 2. 5. Create a copy of the cars DataFrame and remove the records with outliers. Has the shape of our DataFrame changed? 6. Create another cars DataFrame in which you replace the outliers with the maximum or the minimum of the remaining values, depending on whether they fall outside the margin above or below. 7. EXTRA: Could you repeat parts 4, 5 and 6, but with the Z-score criterion? *NOTE: Add image with functions
###Code
coches = pd.read_csv("coches.csv")
coches
# 1.
coches['hp'].sort_values()
# 2.
sns.boxplot(x=coches['hp'])
# 3.
fig, ax = plt.subplots(figsize=(16,8))
ax.scatter(coches['qsec'], coches['disp'])
ax.set_xlabel('Quarter-mile time (qsec)')
ax.set_ylabel('Engine displacement (disp)')
plt.show()
# 4.
Q1 = coches['hp'].quantile(0.25)
Q3 = coches['hp'].quantile(0.75)
IQR = Q3 - Q1
print(IQR)
mask4 = (coches['hp'] < Q1 - 1.5*IQR) | (coches['hp'] > Q3 + 1.5*IQR)
mask4
df3 = coches[mask4]
df3
# 5.
df5 = coches[~mask4]
df5
# 6.
max_hp = df5['hp'].max()
min_hp = df5['hp'].min()
def sustituye(x):
if x < min_hp:
return min_hp
elif x > max_hp:
return max_hp
else:
return x
df6 = coches.copy()
df6['hp'] = df6['hp'].apply(lambda x: sustituye(x))
df6
coches.loc[coches['hp'] > Q3 + 1.5*IQR, ['hp']]
np.random.choice(coches['hp'].values, size=2)
coches['hp'].values
###Output
_____no_output_____ |
analysis/sc-4-5/sc-4-5-classification.ipynb | ###Markdown
SC-4-5 Feature Engineering and Classification
###Code
import numpy as np
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cPickle as pickle
from copy import deepcopy
from sklearn.utils import shuffle
import sklearn_mmadsen.graphs as skmg
%matplotlib inline
plt.style.use("fivethirtyeight")
sns.set()
all_graphs = pickle.load(open("train-sc-4-5-cont-graphs.pkl",'r'))
all_labels = pickle.load(open("train-sc-4-5-cont-labels.pkl",'r'))
###Output
_____no_output_____
###Markdown
The strategy, unlike our first attempt, requires a real train/test split in the dataset because we're going to fit an actual model (although a true LOO cross validation is still of course possible). But we need a train_test_split function which is able to deal with lists of NetworkX objects. Feature Engineering The goal here is to construct a standard training and test data matrix of numeric values, which will contain the sorted Laplacian eigenvalues of the graphs in each data set. One feature will thus represent the largest eigenvalue for each graph, a second feature will represent the second largest eigenvalue, and so on. We do not necessarily assume that all of the graphs have the same number of vertices, although if there are marked differences, we would need to handle missing data for those graphs which had many fewer eigenvalues (or restrict our slice of the spectrum to the smallest number of eigenvalues present).
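Conceptually, the eigenvalue feature matrix can be built with NetworkX and NumPy as in the sketch below. This is a simplified stand-in, not the actual `skmg.graphs_to_eigenvalue_matrix` implementation used later, and it assumes every graph has at least `num_eigenvalues` vertices.

```python
# Simplified sketch of a sorted-Laplacian-eigenvalue feature matrix
# (not the actual sklearn_mmadsen implementation).
def eigenvalue_matrix(graphs, num_eigenvalues):
    rows = []
    for g in graphs:
        ev = np.sort(nx.laplacian_spectrum(g))[::-1]  # largest eigenvalue first
        rows.append(ev[:num_eigenvalues])
    return np.array(rows)
```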
###Code
train_graphs, train_labels, test_graphs, test_labels = skmg.graph_train_test_split(all_graphs, all_labels, test_fraction=0.10)
print "train size: %s" % len(train_graphs)
print "test size: %s" % len(test_graphs)
###Output
train size: 903
test size: 100
###Markdown
First Classifier We're going to be using a gradient boosted classifier, which has some of the best accuracy of any of the standard classifier methods. Ultimately we'll figure out the best hyperparameters using cross-validation, but first we just want to see whether the approach gets us anywhere in the right ballpark -- remember, we can get 80% accuracy with just eigenvalue distance, so we have to be in that neighborhood or higher to be worth the effort of switching to a more complex model.
###Code
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
train_matrix = skmg.graphs_to_eigenvalue_matrix(train_graphs, num_eigenvalues=10)
test_matrix = skmg.graphs_to_eigenvalue_matrix(test_graphs, num_eigenvalues=10)
clf = GradientBoostingClassifier(n_estimators = 250)
clf.fit(train_matrix, train_labels)
pred_label = clf.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
###Output
predicted 0 predicted 1
actual 0 36 10
actual 1 41 13
precision recall f1-score support
4 0.47 0.78 0.59 46
5 0.57 0.24 0.34 54
avg / total 0.52 0.49 0.45 100
Accuracy on test: 0.490
###Markdown
Finding Optimal Hyperparameters
###Code
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
pipeline = Pipeline([
('clf', GradientBoostingClassifier())
])
params = {
'clf__learning_rate': [5.0,2.0,1.0, 0.75, 0.5, 0.25, 0.1, 0.05, 0.01],
'clf__n_estimators': [10,25,50,100,250,500]
}
grid_search = GridSearchCV(pipeline, params, n_jobs = -1, verbose = 1)
grid_search.fit(train_matrix, train_labels)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters:")
best_params = grid_search.best_estimator_.get_params()
for param in sorted(params.keys()):
print("param: %s: %r" % (param, best_params[param]))
pred_label = grid_search.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
###Output
predicted 0 predicted 1
actual 0 40 6
actual 1 40 14
precision recall f1-score support
4 0.50 0.87 0.63 46
5 0.70 0.26 0.38 54
avg / total 0.61 0.54 0.50 100
Accuracy on test: 0.540
|
explainability/notebooks/feature_interactions.ipynb | ###Markdown
PDPs are defined as $\hat{f}_{x_S}(x_S)=\frac{1}{n}\sum_{i=1}^n\hat{f}(x_S,x^{(i)}_{C})$
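To make the formula concrete, here is a brute-force sketch of a one-dimensional PDP computed directly from the definition above. It is only illustrative (the cells in this notebook rely on sklearn.inspection.partial_dependence) and it assumes a fitted binary classifier exposing predict_proba together with a numeric feature matrix.
###Code
# Illustrative only: 1-D partial dependence computed from the definition, assuming a fitted
# binary classifier `model` with predict_proba and a numeric feature matrix X.
import numpy as np
def manual_partial_dependence(model, X, feature_idx, grid):
    X = np.asarray(X, dtype=float)
    pdp = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value  # fix x_S at the grid value
        # average predictions over the empirical distribution of the other features x_C
        pdp.append(model.predict_proba(X_mod)[:, 1].mean())
    return np.array(pdp)
###Output
_____no_output_____
###Markdown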
###Code
# Imports assumed by the cells in this notebook; X_train, Y_train, X_test, Y_test, X and Y
# are assumed to come from an earlier data-loading step that is not shown here.
import numpy as np
import matplotlib.pyplot as plt
import sklearn.metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
rf = RandomForestClassifier(max_depth=16, random_state=0, n_estimators=10)
rf.fit(X_train, Y_train)
print(rf.feature_importances_)
importances = rf.feature_importances_
indices = np.argsort(importances)
features = X_train.columns
plt.title('Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='b', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Relative Importance')
plt.show()
sklearn.metrics.accuracy_score(Y_test, rf.predict(X_test))
data = np.array(X_train.values)
# Greenwell et al.: "A Simple and Effective Model-Based Variable Importance Measure"
import math
from sklearn.inspection import partial_dependence
# variable feature importance
def vfi(model, data, a, k):
pd, _ = partial_dependence(model, data, [a])
imp = 0
if pd.shape[1] < 100 and pd.shape[1] < data.shape[1]: # categorical
imp = (max(pd[0]) - min(pd[0]))/4
else: # continuous
for i in range(k):
right = 0
for j in range(k):
right += pd[0][j]
imp += (pd[0][i] - right/k)**2
imp = math.sqrt(imp/(k-1))
return imp
# standard deviation of feature importance values of feature b given a certain value of feature a
def std(model, data, b, a):
unique = np.unique(data[:,a])
iv_ba = []
for uv in unique:
uv_a = data[np.where(data[:,a] == uv)]
iv_ba.append(vfi(model, uv_a, b, 4))
iv_ba = np.array(iv_ba)
return np.std(iv_ba)
# feature interaction based on the average of the standard deviations of conditional feature importances between the two features
def fint(model, data, a, b):
sb = std(model, data, b, a)
sa = std(model, data, a, b)
return (sa+sb)/2
#feature importance bar chart
fis = []
for f in range(data.shape[1]):
fis.append(vfi(rf, data, f, 4))
fis = np.array(fis)
indices = np.argsort(fis)
features = X_train.columns
plt.title('Variable Feature Importances')
plt.barh(range(len(indices)), fis[indices], color='b', align='center')
plt.yticks(range(len(indices)), [features[i] for i in indices])
plt.xlabel('Importance')
plt.show()
fint(rf, data, 1, 2)
#feature interactions heatmap
import itertools
data = np.array(X_train.values)
fints = []
for x in itertools.product(range(data.shape[1]), range(data.shape[1])):
if x[0]!=x[1]:
fints.append(fint(rf, data, x[0], x[1]))
else:
fints.append(0)
fints = np.array(fints)
rfints = fints.reshape(20, 20)
plt.imshow(rfints, cmap='hot', interpolation='nearest')
plt.show()
from sklearn.base import clone
# Sejong Oh: "Feature interaction in terms of prediction performance"
def fi_pp(model, X, Y, a, b, mode='classification'):
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2)
full_model = clone(model)
full_model.fit(X_train, Y_train)
score = sklearn.metrics.accuracy_score(Y_test, full_model.predict(X_test))
a_model = clone(model)
a_model.fit(np.delete(X_train, [a], 1), Y_train)
a_score = sklearn.metrics.accuracy_score(Y_test, a_model.predict(np.delete(X_test, [a], 1)))
b_model = clone(model)
b_model.fit(np.delete(X_train, [b], 1), Y_train)
b_score = sklearn.metrics.accuracy_score(Y_test, b_model.predict(np.delete(X_test, [b], 1)))
ab_model = clone(model)
ab_model.fit(np.delete(X_train, [a, b], 1), Y_train)
ab_score = sklearn.metrics.accuracy_score(Y_test, ab_model.predict(np.delete(X_test, [a, b], 1)))
if mode == 'classification':
# err(a)+err(b)-err(a,b)
return (2*score-a_score-b_score)-(score-ab_score)
elif mode == 'regression':
#err(a,b)-(err(a)+err(b))
return (score-ab_score)-(2*score-a_score-b_score)
else:
raise ValueError('unsupported mode '+str(mode))
inputs = np.array(X)
targets = np.array(Y)
fi_pp(RandomForestClassifier(max_depth=16, random_state=0, n_estimators=10), inputs, targets, 0, 1)
#feature interactions in terms of prediction performance heatmap
import itertools
fints = []
for x in itertools.product(range(data.shape[1]), range(data.shape[1])):
fints.append(fi_pp(RandomForestClassifier(max_depth=16, random_state=0, n_estimators=10), inputs, targets, x[0], x[1]))
fints = np.array(fints)
rfints = fints.reshape(20,20)
fig, ax = plt.subplots()
plt.imshow(rfints, cmap='hot', interpolation='nearest')
features = list(X_train)
ax.set_xticklabels(features)
ax.set_yticklabels(features)
plt.show()
###Output
_____no_output_____
###Markdown
$H^2_{j}=\sum_{i=1}^n\left[\hat{f}(x^{(i)})-PD_j(x_j^{(i)})-PD_{-j}(x_{-j}^{(i)})\right]^2/\sum_{i=1}^n\hat{f}^2(x^{(i)})$
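The cell below starts a direct implementation of this statistic but leaves it unfinished. As a complement, here is a rough brute-force sketch of how $H^2_j$ could be estimated straight from the definition; it assumes a fitted binary classifier with predict_proba, evaluates everything on a small subsample for speed, and mean-centers each term as is conventional in Friedman and Popescu's formulation.
###Code
# Illustrative only: brute-force estimate of Friedman's H^2 statistic for feature j vs. all others.
# Assumes a fitted binary classifier `model` with predict_proba and a numeric matrix `data`.
import numpy as np
def h_statistic(model, data, j, sample_size=100):
    X = np.asarray(data, dtype=float)[:sample_size]
    n = X.shape[0]
    f = model.predict_proba(X)[:, 1]
    pd_j = np.empty(n)      # PD_j evaluated at x_j^(i)
    pd_not_j = np.empty(n)  # PD_{-j} evaluated at x_{-j}^(i)
    for i in range(n):
        X_j = X.copy()
        X_j[:, j] = X[i, j]             # fix feature j at its i-th observed value
        pd_j[i] = model.predict_proba(X_j)[:, 1].mean()
        X_rest = np.tile(X[i], (n, 1))  # fix all other features at row i
        X_rest[:, j] = X[:, j]          # let feature j vary over the data
        pd_not_j[i] = model.predict_proba(X_rest)[:, 1].mean()
    f_c = f - f.mean()                  # centering each term
    pd_j_c = pd_j - pd_j.mean()
    pd_not_j_c = pd_not_j - pd_not_j.mean()
    return np.sum((f_c - pd_j_c - pd_not_j_c) ** 2) / np.sum(f_c ** 2)
###Output
_____no_output_____
###Markdown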
###Code
#H. Friedman et al.: "PREDICTIVE LEARNING VIA RULE ENSEMBLES"
from sklearn.inspection import partial_dependence
from sklearn.inspection import plot_partial_dependence
def h_stat(model, j, data):
jpd, jaxes = partial_dependence(model, data, j)
print(np.array(jaxes).shape)
not_j = []
for i in range(len(data[0])):
if not i == j[0]:
not_j.append(i)
notj_pd, notj_axes = partial_dependence(model, data, not_j)
# TBC...
h_stat(rf, [0], np.array(X_train.values))
###Output
_____no_output_____ |
CellRelations.ipynb | ###Markdown
###Code
from datascience import *
import numpy as np
import random
import seaborn as sns
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
import scipy.stats as stats
import pandas as pd
data = pd.read_csv('https://raw.githubusercontent.com/ds-connectors/Data88-Genetics_and_Genomics/master/Project_data/GSE74923_L1210_CD8_processed_data.txt', sep='\t', index_col=0)
data
LD = data.iloc[:,0 : 88]
total_counts = np.sum(LD)
bars = list(LD.columns)
y_pos = np.arange(len(bars))
plt.figure(figsize=(12,6))
plt.bar(y_pos, total_counts)
plt.xticks(y_pos, bars)
plt.title('Normalization Corroboration')
plt.xlabel('Sample')
plt.ylabel('Total reads')
plt.show()
exp_means = np.mean(LD, axis=1)
meaned_log = np.log2(exp_means + 1)
plt.hist(meaned_log, bins=100)
filtered_values = []
for i in meaned_log:
if i > 2.5 and i < 15:
filtered_values.append(i)
plt.hist(filtered_values, bins = 50)
len(filtered_values)
boolean_values = []
for i in meaned_log:
if i in filtered_values:
boolean_values.append(True)
else:
boolean_values.append(False)
np.log2(data.loc[boolean_values, :].iloc[:, 0:88] + 1)
###Output
_____no_output_____ |
notebooks/feature_engineering/labs/1_bqml_basic_feat_eng.ipynb | ###Markdown
Basic Feature Engineering in BQML **Learning Objectives**1. Create SQL statements to evaluate the model2. Extract temporal features3. Perform a feature cross on temporal features Overview In this lab, we utilize feature engineering to improve the prediction of the fare amount for a taxi ride in New York City. We will use BigQuery ML to build a taxifare prediction model, using feature engineering to improve and create a final model.In this Notebook we set up the environment, create the project dataset, create a feature engineering table, create and evaluate a baseline model, extract temporal features, perform a feature cross on temporal features, and evaluate model performance throughout the process.
###Code
PROJECT = !gcloud config get-value project
PROJECT = PROJECT[0]
%env PROJECT=$PROJECT
###Output
_____no_output_____
###Markdown
The source datasetOur dataset is hosted in [BigQuery](https://cloud.google.com/bigquery/). The taxi fare data is a publicly available dataset, meaning anyone with a GCP account has access. Click [here](https://console.cloud.google.com/bigquery?project=bigquery-public-data&p=nyc-tlc&d=yellow&t=trips&page=table) to access the dataset.The Taxi Fare dataset is relatively large at 55 million training rows, but simple to understand, with only six features. The fare_amount is the target, the continuous value we’ll train a model to predict. Create a BigQuery DatasetA BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called __feat_eng__ if we have not already done so in an earlier lab. We'll do the same for a GCS bucket for our project too.
###Code
%%bash
# Create a BigQuery dataset for feat_eng if it doesn't exist
datasetexists=$(bq ls -d | grep -w feat_eng)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: feat_eng"
bq --location=US mk --dataset \
--description 'Taxi Fare' \
$PROJECT:feat_eng
echo "\nHere are your current datasets:"
bq ls
fi
###Output
_____no_output_____
###Markdown
Create the training data tableSince there is already a publicly available dataset, we can simply create the training data table using this raw input data. Note the WHERE clause in the below query: This clause allows us to TRAIN a portion of the data (e.g. one hundred thousand rows versus one million rows), which keeps your query costs down. If you need a refresher on using MOD() for repeatable splits see this [post](https://www.oreilly.com/learning/repeatable-sampling-of-data-sets-in-bigquery-for-machine-learning). * Note: The dataset in the create table code below is the one created previously, e.g. `feat_eng`. The table name is `feateng_training_data`. Run the query to create the table.
###Code
%%bigquery
CREATE OR REPLACE TABLE
feat_eng.feateng_training_data AS
SELECT
(tolls_amount + fare_amount) AS fare_amount,
passenger_count*1.0 AS passengers,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat
FROM
`nyc-tlc.yellow.trips`
WHERE
MOD(ABS(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING))), 10000) = 1
AND fare_amount >= 2.5
AND passenger_count > 0
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
###Output
_____no_output_____
###Markdown
Verify table creationVerify that you created the dataset.
###Code
%%bigquery
# LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT
*
FROM
feat_eng.feateng_training_data
LIMIT
0
###Output
_____no_output_____
###Markdown
Baseline Model: Create the baseline modelNext, you create a linear regression baseline model with no feature engineering. Recall that a model in BigQuery ML represents what an ML system has learned from the training data. A baseline model is a solution to a problem without applying any machine learning techniques. When creating a BQML model, you must specify the model type (in our case linear regression) and the input label (fare_amount). Note also that we are using the training data table as the data source. Now we create the SQL statement to create the baseline model.
###Code
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.baseline_model OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
###Output
_____no_output_____
###Markdown
REMINDER: The query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. Because the query uses a CREATE MODEL statement to create a model, you do not see query results.You can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes. Once the training is done, visit the [BigQuery Cloud Console](https://console.cloud.google.com/bigquery) and look at the model that has been trained. Then, come back to this notebook. Evaluate the baseline modelNote that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. After creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data.NOTE: The results are also displayed in the [BigQuery Cloud Console](https://console.cloud.google.com/bigquery) under the **Evaluation** tab. Review the learning and eval statistics for the baseline_model.
###Code
%%bigquery
# Eval statistics on the held out data.
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.baseline_model)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.baseline_model)
###Output
_____no_output_____
###Markdown
**NOTE:** Because you performed a linear regression, the results include the following columns:* mean_absolute_error* mean_squared_error* mean_squared_log_error* median_absolute_error* r2_score* explained_variance**Resource** for an explanation of the [Regression Metrics](https://towardsdatascience.com/metrics-to-evaluate-your-machine-learning-algorithm-f10ba6e38234).**Mean squared error** (MSE) - Measures the difference between the values our model predicted using the test set and the actual values. You can also think of it as the distance between your regression (best fit) line and the predicted values. **Root mean squared error** (RMSE) - The primary evaluation metric for this ML problem is the root mean-squared error. RMSE measures the difference between the predictions of a model and the observed values. A large RMSE is equivalent to a large average error, so smaller values of RMSE are better. One nice property of RMSE is that the error is given in the units being measured, so you can tell very directly how incorrect the model might be on unseen data.**R2**: An important metric in the evaluation results is the R2 score. The R2 score is a statistical measure that determines if the linear regression predictions approximate the actual data. Zero (0) indicates that the model explains none of the variability of the response data around the mean. One (1) indicates that the model explains all the variability of the response data around the mean. **Exercise.** Next, we write a SQL query to take the SQRT() of the mean squared error as the loss metric for evaluating the baseline_model.
###Code
# TODO: Your code here
###Output
_____no_output_____
###Markdown
Model 1: EXTRACT dayofweek from the pickup_datetime feature.* As you recall, dayofweek identifies the day of the week as an integer. In ISO-8601 the values run from 1 (Monday) to 7 (Sunday); BigQuery's EXTRACT(DAYOFWEEK ...) also returns values in the range 1 to 7, but with Sunday as the first day of the week.* If you were to extract the dayofweek from pickup_datetime using BigQuery SQL, the datatype returned would therefore be an integer. **Exercise.** Next, we create a model titled "model_1" from the baseline model and extract out the DayofWeek.
###Code
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_1 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
pickup_datetime,
# TODO: Your code here
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
feat_eng.feateng_training_data
###Output
_____no_output_____
###Markdown
Once the training is done, visit the [BigQuery Cloud Console](https://console.cloud.google.com/bigquery) and look at the model that has been trained. Then, come back to this notebook. Next, two distinct SQL statements show the TRAINING and EVALUATION metrics of model_1.
###Code
%%bigquery
SELECT
*,
SQRT(loss) AS rmse
FROM
ML.TRAINING_INFO(MODEL feat_eng.model_1)
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
###Output
_____no_output_____
###Markdown
Here we run a SQL query to take the SQRT() of the mean squared error as the loss metric for evaluating model_1.
###Code
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_1)
###Output
_____no_output_____
###Markdown
Model 2: EXTRACT hourofday from the pickup_datetime featureAs you recall, **pickup_datetime** is stored as a TIMESTAMP, where the Timestamp format is retrieved in the standard output format – year-month-day hour:minute:second (e.g. 2016-01-01 23:59:59). Hourofday returns the integer number representing the hour number of the given date.Hourofday is best thought of as a discrete ordinal variable (and not a categorical feature), as the hours can be ranked (e.g. there is a natural ordering of the values). Hourofday has an added characteristic of being cyclic, since 12am follows 11pm and precedes 1am. **Exercise.** Next, we create a model titled "model_2" and EXTRACT the hourofday from the pickup_datetime feature to improve our model's rmse.
###Code
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_2 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
# TODO: Your code here
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
###Output
_____no_output_____
###Markdown
Evaluate the model.
###Code
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_2)
###Output
_____no_output_____
###Markdown
Model 3: Feature cross dayofweek and hourofday using CONCATFirst, let’s allow the model to learn traffic patterns by creating a new feature that combines the time of day and day of week (this is called a [feature cross](https://developers.google.com/machine-learning/crash-course/feature-crosses/video-lecture)). Note: BQML by default assumes that numbers are numeric features, and strings are categorical features. We need to convert both the dayofweek and hourofday features to strings because the model will automatically treat any integer as a numerical value rather than a categorical value. Thus, if not cast as a string, the dayofweek feature will be interpreted as numeric values (e.g. 1,2,3,4,5,6,7) and hourofday will also be interpreted as numeric values (e.g. 0 through 23, from midnight to 11pm). As such, there is no way to distinguish the "feature cross" of hourofday and dayofweek "numerically". Casting the dayofweek and hourofday as strings ensures that each element will be treated like a label and will get its own coefficient associated with it. **Exercise.** Create the SQL statement to feature cross the dayofweek and hourofday using the CONCAT function. Name the model "model_3"
###Code
%%bigquery
CREATE OR REPLACE MODEL
feat_eng.model_3 OPTIONS (model_type='linear_reg',
input_label_cols=['fare_amount']) AS
SELECT
fare_amount,
passengers,
# TODO: Your code here
pickuplon,
pickuplat,
dropofflon,
dropofflat
FROM
`feat_eng.feateng_training_data`
###Output
_____no_output_____
###Markdown
Next we evaluate the model.
###Code
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL feat_eng.model_3)
###Output
_____no_output_____ |
07_train/ml-containers/00_setup_eks/00_03_Create_S3_Bucket.ipynb | ###Markdown
1. Create S3 Bucket (If Not Already Created)
###Code
%%bash
export S3_BUCKET=sagemaker-$(aws configure get region)-$(aws sts get-caller-identity | jq -r '.Account')
echo "export S3_BUCKET=${S3_BUCKET}" | tee -a ~/.bash_profile
# Create the S3 bucket if it does not already exist.
aws s3 ls s3://$S3_BUCKET || aws s3 mb s3://${S3_BUCKET}
echo "Completed"
###Output
_____no_output_____
###Markdown
2. Verify S3_BUCKET Env Variable
###Code
%%bash
source ~/.bash_profile
echo "${S3_BUCKET}"
###Output
_____no_output_____
###Markdown
3. Verify S3_BUCKET Bucket Creation
###Code
%%bash
source ~/.bash_profile
aws s3 ls s3://${S3_BUCKET}
###Output
_____no_output_____ |
Ex/Chapter2/Chapter2-12.ipynb | ###Markdown
Multivariable regression model
###Code
import numpy as np
from sklearn import linear_model
import sklearn.metrics as sm
input_file = './data/data_multivar_regr.txt'
# Read the file and split it on commas
data = np.loadtxt(input_file, delimiter=',')
# Put everything except the last column into X, and only the last column into y
# The file has 4 columns, so X has 3 feature columns
X, y = data[:, :-1], data[:, -1]
# Create a linear regression model and train it on X and y
linear_regressor = linear_model.LinearRegression()
linear_regressor.fit(X, y)
# Predict on X
y_pred = linear_regressor.predict(X)
print("Linear regression model performance")
print("Mean absolute error = ", round(sm.mean_absolute_error(y, y_pred), 2))
print("Mean squared error = ", round(sm.mean_squared_error(y, y_pred), 2))
print("Medium absolute error = ", round(sm.median_absolute_error(y, y_pred), 2))
print("Explain variance error = ", round(sm.explained_variance_score(y, y_pred), 2))
print("R2 score = ", round(sm.r2_score(y, y_pred), 2))
###Output
_____no_output_____
###Markdown
Check the model on the first five data points
###Code
print("Linear regression:\n", linear_regressor.predict(X[0:5]))
print("True output values:\n", y[0:5])
###Output
Linear regression:
[10.82449559 17.3818526 3.29725309 38.64955824 10.41302956]
True output values:
[15.69 15.34 0.66 38.37 9.96]
|
examples/1_code_introduction.ipynb | ###Markdown
Code Introduction
###Code
import sys
import numpy as np
import warnings
sys.path.append("../")
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC, SVC
from aif360.algorithms.inprocessing import PrejudiceRemover
from ltf.ltf_plot import LongTermFairnessPlot
from ltf.ltf_data.group_data_generator import GroupDataGenerator as DataGenerator
from ltf.ltf_metric.aif360_metric import AifLongTermMetric
from ltf.ltf_clf.aif360_clf import AifLongTermCLF
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
1. Data GeneratorThe details about the data generation are explained in the report.
###Code
# Initialization of the generator. The provided values are the default values.
generator = DataGenerator(mean_pos=[10, 7], # The mean for the positive cluster.
cov_pos=[[1.2, 0], [0, 1.3]], # Covariance for the positive cluster.
mean_neg=[0, 1], # The mean for the negative cluster.
cov_neg=[[1, 0], [0, 1.2]], # Covariance for the negative cluster.
degree=3, # Number of previous steps t considered.
discrimination_factor=.7, # Fraction positive/negative labels between prot_attr on init.
num_positive_label=500, # Number of positive instances.
num_negative_label=500, # Number of negative instances.
neg_label=0, # The label considered negative.
pos_label=1, # The label considered positive.
neg_class=0, # The protected attribute class considered negative.
pos_class=1) # The protected attribute class considered positive.
# The generator takes three arguments: the features X, true labels y, and predictions y_hat.
# generator.sample(X, # 3D, all previous features of shape [t, n, m]
# y, # 2D, all previous true labels of shape [t, n]
# y_hat # 2D, all previous predictions of shape [t, n]
# When called with 'None, None, None' the generator samples the initial data.
X, x_sens, y = generator.sample(None, None, None)
print(X.shape)
print(x_sens.shape)
print(y.shape)
# Otherwise, new data is sampled either group or individual wise.
# Examples for the two are provided in the following notebooks.
X_new, x_sens_new, y_new = generator.sample([X], [x_sens], [y])
print(X_new.shape)
print(x_sens_new.shape)
print(y_new.shape)
# This function returns the true label given data.
# This is, for instance, necessary for the data generating boundary plot.
# In this example, they are the same as y_new.
labels = generator.get_label(X_new, x_sens_new)
print(labels.shape)
print(np.all(labels==y_new))
###Output
(1000,)
True
###Markdown
2. AIF360 InterfacesTwo wrapper classes provide access to AIF360 metrics and functions to be used in the framework. 2.1. AIF360 Long Term PredictionInterface to use AIF360s algorithms for the long term plot
###Code
# LTF classifier from aif360 classifier.
# https://aif360.readthedocs.io/en/latest/modules/inprocessing.html
clf = AifLongTermCLF(clf=PrejudiceRemover(), # Any aif360 algorithm.
pos_class=1,
neg_class=0,
pos_label=1,
neg_label=0)
# Fits clf to the data. In the background, X_t, X_sens_t and y_t are stacked into an aif360 data frame
# and the fit method is called.
clf.fit(X_new,
x_sens_new,
y_new)
# Predict data of one time step. In the background, X_t and X_sens_t are stacked into an aif360 data frame
# and the predict method is called.
y_hat = clf.predict(X_new,
x_sens_new)
print(y_hat.shape)
###Output
(1000,)
###Markdown
2.2. AIF360 Long Term MetricThe class provides an interface to use AIF360 metrics for long term prediction.
###Code
# The default metrics are accuracy and disparate impact. It is possible to provide more than two metrics
# but the plot is unclear then. Any classification metric from aif360 can be passed as string.
# https://aif360.readthedocs.io/en/latest/modules/metrics.html#classification-metric
metric = AifLongTermMetric(metrics=["accuracy", "disparate_impact"],
pos_class=1,
neg_class=0,
pos_label=1,
neg_label=0)
# Computes the metrics on a snapshot of the data.
metric.metric(X_new,
x_sens_new,
y_new,
y_hat)
###Output
_____no_output_____
###Markdown
3. Long Term PlotRuns decision process over several generations.
###Code
l = LongTermFairnessPlot(generator, # Must implement .sample() function as described above.
clf, # Must implement fit(X, X_sens, y) and predict(X, X_sense).
metric.metric, # Must implement .metric(X_t, x_sense_t, y_t, y_hat_t)
x_lim=[-3, 12], # Axes limits for the final plot.
y_lim=[-1, 13]) # Axes limit for the final plot.
# Initialize the data.
l.init_data()
# Runs one iteration with the true data pipeline only and prints metrics.
l.run_step()
# Run baseline generation pipeline (assuming all previous predictions to be positive).
l.run_baseline_step()
# Plot the data points of the last generation (both baseline and true data).
l.plot_step()
# Combines the previous steps; Initialize data, run both generations.
l.run(10)
# Plot the metrics over all steps.
l.plot_ltf(metric._metrics)
###Output
_____no_output_____ |
Small Projects/Loan Prediction/Lecture_18_The_Classification_Problem_Example_(Part_1).ipynb | ###Markdown
Fundamentals of Information Systems Python Programming (for Data Science) Master's Degree in Data Science Giorgio Maria Di Nunzio (Courtesy of Gabriele Tolomei, FIS 2018-2019) [email protected], University of Padua, Italy, 2019/2020 Lecture 13: The Classification Problem - Example (Part 1) Instructions- We consider the dataset file **dataset.csv**, which is contained in the **loan-prediction** directory on the Moodle page.- A description of the dataset is available in the **README.txt** file in the same directory.- **GOAL:** Use information from past loan applicants contained in **dataset.csv** to predict whether a _new_ applicant should be granted a loan or not.
###Code
import math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Import stats module from scipy, which contains a large number
# of probability distributions as well as an exhaustive library of statistical functions.
import scipy.stats as stats
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. Data Collection
###Code
# Path to the local dataset file (YOURS MAY BE DIFFERENT!)
DATASET_PATH = "./data/loan-prediction/dataset.csv"
# Load the dataset with Pandas
data = pd.read_csv(DATASET_PATH, sep=",")
print("Shape of the dataset: {}".format(data.shape))
data.head()
# NOTE: the first line of the file is considered as the header
# Let's have a look at the output of the `describe()` function.
data.describe()
###Output
_____no_output_____
###Markdown
Observations from the output of describe() (Numerical Variables)- **LoanAmount** has 614-592 = **22** missing values- **Loan_Amount_Term** has 614-600 = **14** missing values- **Credit_History** has 614-564 = **50** missing values- We can also see that about 84% of applicants have a credit history. How? The mean of the **Credit_History** field is 0.84 (remember, **Credit_History** has value 1 for those who have a credit history and 0 otherwise)- The **ApplicantIncome** distribution seems to be in line with expectation. Same with **CoapplicantIncome**.
###Code
# Let's have a look at the output of the `describe(include='all')` function.
data.describe(include='all')
###Output
_____no_output_____
###Markdown
Observations from the output of describe(include='all') (Categorical Variables)- **Loan_ID** has 614 **unique** values (will use this as our _index_ column)- **Gender** has 614-601 = **13** missing values- **Married** has 614-611 = **3** missing values- **Dependents** has 614-599 = **15** missing values- **Self_Employed** has 614-582 = **32** missing values
###Code
# Load the dataset with Pandas
data = pd.read_csv(DATASET_PATH, sep=",", index_col="Loan_ID")
print("Shape of the dataset: {}".format(data.shape))
data.head()
# NOTE: the first line of the file is considered as the header
###Output
Shape of the dataset: (614, 12)
###Markdown
Double-check for any Missing Values
###Code
# Check if there is any missing value in the whole dataset
print("There are missing values in the dataset: {}".
format(data.isnull().any().any()))
# Check how missing values are distributed across each variable (i.e., column)
data.apply(lambda x: sum(x.isnull()))
###Output
_____no_output_____
###Markdown
2. Data Exploration 2.1 Analysis of Data Distributions: Continous Values- Let's start visualizing the distributions of the **3 continuous-valued** features: - **ApplicantIncome** - **CoapplicantIncome** - **LoanAmount** (contains 22 NAs)
###Code
# Create a lambda function which will be applied to each entry
# of the numpy 2-D array of AxesSubplot objects
# x is a reference to an AxesSubplot object
y_labeler = lambda x: x.set_ylabel('density')
# np.vectorize() allows calling the function on each element
y_labeler = np.vectorize(y_labeler)
# Create a Figure containing 1x3 subplots
fig, axes = plt.subplots(1, 3, figsize=(16,6))
# Call the vectorized function for labeling all the y-axes
y_labeler(axes)
# Plot 'ApplicantIncome' on the first subplot
sns.distplot(data.ApplicantIncome, color='#808080', ax=axes[0],
hist_kws=dict(edgecolor="#404040", linewidth=1))
# Plot 'CoapplicantIncome' on the second subplot
sns.distplot(data.CoapplicantIncome, color='#0033cc', ax=axes[1],
hist_kws=dict(edgecolor="k", linewidth=1))
# Plot 'LoanAmount' (limited only to non-NA values) on the third and last subplot
sns.distplot(data.loc[data.LoanAmount.notnull(), 'LoanAmount'],
color='#df2020', ax=axes[2],
hist_kws=dict(edgecolor="#404040", linewidth=1))
# Adjust space between plots
plt.subplots_adjust(wspace=.3, hspace=.3)
###Output
_____no_output_____
###Markdown
Observations- All the three distributions show the presence of some extreme values.- To investigate better how those are shaped, let's create some boxplots...
###Code
# Let's produce the boxplots corresponding to the distribution plots above
# Create a Figure containing 1x3 subplots
fig, axes = plt.subplots(1, 3, figsize=(14,8))
sns.boxplot(data.ApplicantIncome, color='#808080', ax=axes[0], orient="v")
sns.boxplot(data.CoapplicantIncome, color='#0033cc', ax=axes[1], orient="v")
sns.boxplot(data.loc[data.LoanAmount.notnull(), 'LoanAmount'],
color='#df2020', ax=axes[2], orient="v")
plt.subplots_adjust(wspace=.4, hspace=.3)
# Let's see if we can spot where those outliers are located
# w.r.t. other features (e.g., Education)
fig, axes = plt.subplots(3, 1, figsize=(8,20))
sns.boxplot(x=data.Education, y=data.ApplicantIncome, palette=sns.color_palette("RdBu", n_colors=2), ax=axes[0])
sns.boxplot(x=data.Education, y=data.CoapplicantIncome, palette=sns.color_palette("RdBu", n_colors=2), ax=axes[1])
sns.boxplot(x=data.Education, y=data.loc[data.LoanAmount.notnull(), 'LoanAmount'],
palette=sns.color_palette("RdBu", n_colors=2), ax=axes[2])
plt.subplots_adjust(wspace=.3, hspace=.3)
# Let's see if we can spot where those outliers are located
# w.r.t. other features (e.g., Gender)
fig, axes = plt.subplots(3, 1, figsize=(8,20))
sns.boxplot(x=data.Gender, y=data.ApplicantIncome, palette=sns.color_palette("hls", n_colors=2), ax=axes[0])
sns.boxplot(x=data.Gender, y=data.CoapplicantIncome, palette=sns.color_palette("hls", n_colors=2), ax=axes[1])
sns.boxplot(x=data.Gender, y=data.loc[data.LoanAmount.notnull(), 'LoanAmount'],
palette=sns.color_palette("hls", n_colors=2), ax=axes[2])
plt.subplots_adjust(wspace=.3, hspace=.3)
# The same plots can be produced using factorplot. Let's see only the last example (Gender)
sns.catplot(kind='box', # Boxplot
x='Gender', # X-axis - first factor
y='ApplicantIncome', # Y-axis - values for boxplot
#hue='Education', # Second factor denoted by color
data=data, # Dataframe
height=8, # Figure size (x100px)
aspect=1.5, # Width = size * aspect
legend_out=False, # Make legend inside the plot
palette=sns.color_palette("hls", n_colors=2)
)
sns.catplot(kind='box', # Boxplot
x='Gender', # X-axis - first factor
y='CoapplicantIncome', # Y-axis - values for boxplot
#hue='Education', # Second factor denoted by color
data=data, # Dataframe
height=8, # Figure size (x100px)
aspect=1.5, # Width = size * aspect
legend_out=False, # Make legend inside the plot
palette=sns.color_palette("hls", n_colors=2)
)
sns.catplot(kind='box', # Boxplot
x='Gender', # X-axis - first factor
y='LoanAmount', # Y-axis - values for boxplot
#hue='Education', # Second factor denoted by color
data=data.loc[data.LoanAmount.notnull()], # Dataframe
height=8, # Figure size (x100px)
aspect=1.5, # Width = size * aspect
legend_out=False, # Make legend inside the plot
palette=sns.color_palette("hls", n_colors=2)
)
plt.subplots_adjust(wspace=.4, hspace=.3)
###Output
_____no_output_____
###Markdown
A Few Observations from the Plots- We can see that there is no substantial difference between the median income of graduates and non-graduates.- Similarly, there is no significant difference between the median income of males and females.- However, there is a higher number of graduates (respectively, males) with very high incomes, and these appear to be the outliers.- The presence of outliers in all three numerical variables (**ApplicantIncome**, **CoapplicantIncome**, and **LoanAmount**), as well as missing values (only for **LoanAmount**), requires some data munging steps (more on this later).
###Code
# Let's now plot the pairwise relationship between our continuous-valued features
sns.pairplot(data.loc[data.LoanAmount.notnull(),
['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount']],
kind="reg",
diag_kind='kde',
diag_kws={'shade': True, 'color': '#ff6600'},
plot_kws={'color': '#ff6600'})
###Output
_____no_output_____
###Markdown
How Continuous-valued Features Relate to Each Other- As **LoanAmount** increases, so does **ApplicantIncome** (or **CoapplicantIncome**).- Apparently, though, there is no strong (linear) relationship between **ApplicantIncome** and **CoapplicantIncome**. 2.2 Analysis of Data Distributions: Categorical Values- Let's visualize the distributions of the **8 categorical** features (except the target class label): - **Gender** - **Married** - **Dependents** - **Education** - **Self_Employed** - **Loan_Amount_Term** - **Credit_History** - **Property_Area**
###Code
# Let's see the frequency counts of the first three categorical variable
# Try to do the same for the remaining 5
# 'Gender'
print(data.Gender.value_counts())
print()
# 'Married'
print(data.Married.value_counts())
print()
# 'Dependents'
print(data.Dependents.value_counts())
# For categorical variables, 'countplot' is the way to go
# Create a Figure containing 2x4 subplots
fig, axes = plt.subplots(2, 4, figsize=(20,10))
# Plots
sns.countplot(data.loc[data.Gender.notnull()]['Gender'], ax=axes[0,0])
sns.countplot(data.loc[data.Married.notnull()]['Married'], ax=axes[0,1])
sns.countplot(data.loc[data.Dependents.notnull()]['Dependents'], ax=axes[0,2])
sns.countplot(data.loc[data.Education.notnull()]['Education'], ax=axes[0,3])
sns.countplot(data.loc[data.Self_Employed.notnull()]['Self_Employed'], ax=axes[1,0])
sns.countplot(data.loc[data.Loan_Amount_Term.notnull()]['Loan_Amount_Term'].map(lambda x: int(x/12)), ax=axes[1,1])
sns.countplot(data.loc[data.Credit_History.notnull()]['Credit_History'], ax=axes[1,2])
sns.countplot(data.loc[data.Property_Area.notnull()]['Property_Area'], ax=axes[1,3])
plt.subplots_adjust(wspace=.5, hspace=.4)
fig, axes = plt.subplots(2, 4, figsize=(20,10))
cols = ['Gender', 'Married', 'Dependents', 'Education', 'Self_Employed',
'Loan_Amount_Term', 'Credit_History', 'Property_Area']
i = 0
for c in cols:
tmp_data = pd.crosstab(data.loc[:, c], data.Loan_Status)
# pandas.crosstab returns an mxn table where m is the number of values for the first argument (x)
# and n for the second argument (y)
# As the second argument is always data.Loan_Status, n = 2 (Loan_Status is binary!)
# e.g., x = 'Credit_History'; y = 'Loan_Status'
# the following apply is used to transform the crosstab into a "normalized" table as follows:
# each entry in the table displays how the i-th categorical value of x (i.e., i-th row) is distributed across
# all the possible values of y (i.e., Y/N)
tmp_data = tmp_data.apply(lambda x: x/tmp_data.sum(axis=1))
tmp_data.plot.bar(stacked=True, color=['red','green'], grid=False, ax=axes[i//4, i % 4], legend=True)
i += 1
plt.subplots_adjust(wspace=.3, hspace=.4)
###Output
_____no_output_____
###Markdown
A few observations from the plots- Crosstabs are useful to see how "discriminant" each (categorical) feature is.- In a way, they provide us with some rudimental classifier.- Take a look at **Credit_History** and how this relates to **Loan_Status**: this feature seems quite discriminant in predicting whether or not an applicant will be given a loan approval (applicants with no credit history have their loan applications denied most of the time, i.e., ~92% of the time) 3. Data Preprocessing (Munging) Summary of the Issues- From our exploratory data analysis above, **two** main issues are observed: 1. The presence of **missing values** both on numerical and categorical variables. 2. The presence of **extreme values** on numerical variables.- In addition to those, we might also need to consider how to properly handle different feature's scale as well as the fact that we are in presence of both continuous and categorical attributes. 3.1 Handling Missing Values (NA)- We distinguish between **numerical** and **categorical** features.- One possible strategy to impute missing values is the following: replace any missing value on a numerical feature with its observed **mean**, and that of a categorical feature with its **mode**.- The above strategy works as long as numerical features do not contain extreme values (i.e., _outliers_). If that is the case (like ours) a better solution will be to replace missing values with the **median** of the observations (much more robust to the presence of outliers).
###Code
# is_numeric_dtype(pandas.Series) returns True iff the dtype associated with the pandas.Series is numeric
from pandas.api.types import is_numeric_dtype
new_data = data.apply(lambda x: x.fillna(x.median())
if is_numeric_dtype(x)
else x.fillna(x.mode().iloc[0]))
new_data.describe(include='all')
data.describe(include='all')
print(data[data.isnull().any(axis=1)].head(10))
print(new_data[data.isnull().any(axis=1)].head(10))
# Let's just assign new_data to data
data = new_data
###Output
_____no_output_____
###Markdown
3.2 Handling Outliers- There are several **outliers** on our numerical features **ApplicantIncome**, **CoapplicantIncome**, and **LoanAmount**.- Like missing values, outliers can be simply discarded as well (i.e., a process which is also known as **trimming** or **truncation**).- Another approach is called **winsorizing** and consists of replacing outliers with a specified percentile of the data (e.g., a 90% winsorization would see all data below the 5th percentile set to the 5th percentile, and data above the 95th percentile set to the 95th percentile).
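To see what a 90% winsorization means in pandas terms, here is a small illustrative alternative to the scipy call used below: clipping a Series at its 5th and 95th percentiles. This sketch is not used in the rest of the notebook.
###Code
# Illustrative only: a 90% winsorization of a pandas Series via clipping at the
# 5th and 95th percentiles (the next cell uses scipy.stats.mstats.winsorize instead).
def winsorize_series(series, lower=0.05, upper=0.95):
    low, high = series.quantile(lower), series.quantile(upper)
    return series.clip(lower=low, upper=high)
# e.g. winsorize_series(data.ApplicantIncome).describe()
###Output
_____no_output_____
###Markdown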
###Code
# Let's winsorize 'ApplicantIncome', 'CoapplicantIncome', and 'LoanAmount'
stats.mstats.winsorize(data.ApplicantIncome, limits=0.05, inplace=True)
stats.mstats.winsorize(data.CoapplicantIncome, limits=0.05, inplace=True)
stats.mstats.winsorize(data.LoanAmount, limits=0.05, inplace=True)
data.describe(include='all')
# Create a Figure containing 1x3 subplots
fig, axes = plt.subplots(1, 3, figsize=(14,8))
y_labeler(axes)
sns.boxplot(data.ApplicantIncome, color='#808080', ax=axes[0], orient="v")
sns.boxplot(data.CoapplicantIncome, color='#0033cc', ax=axes[1], orient="v")
sns.boxplot(data.loc[data.LoanAmount.notnull(), 'LoanAmount'],
color='#df2020', ax=axes[2], orient="v")
plt.subplots_adjust(wspace=.4, hspace=.3)
###Output
_____no_output_____
###Markdown
A few observations from the plot- It seems we have significantly reduced the number of outliers, but there are still some...- In this case, another possible solution is to nullify (or, at least, reduce) the effect of outliers by applying a log-transformation.- Intuitively, applying a log-transformation to a set of observations containing extreme values has the effect of "shrinking" the whole distribution (of course, independently of the base of the logarithm).- To convince yourself of why this is the case, just think what happens to the values $\{1, 5, 10, 20, 1000\}$ when you transform them by applying a $\log_{10}$ operator: $\{\log_{10}(1), \log_{10}(5), \log_{10}(10), \log_{10}(20), \log_{10}(1000)\} = \{0, 0.69, 1, 1.30, 3\}$
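As a quick numerical check of the shrinking effect described above (independent of our dataset):
###Code
# Quick check of the worked example above: log10 shrinks the extreme value 1000 down to 3.
import numpy as np
example = np.array([1, 5, 10, 20, 1000])
print(np.round(np.log10(example), 2))  # roughly [0. 0.7 1. 1.3 3.]
###Output
_____no_output_____
###Markdown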
###Code
# Apply log-transformation to 'ApplicantIncome' and assign it to a new column
data['Log_ApplicantIncome'] = data.ApplicantIncome.apply(np.log)
# Apply log-transformation to 'LoanAmount' and assign it to a new column
data['Log_LoanAmount'] = data.LoanAmount.apply(np.log)
data
# Create a Figure containing 1x2 subplots
fig, axes = plt.subplots(1, 2, figsize=(12,6))
# Call the vectorized function for labeling all the y-axes
y_labeler(axes)
# Plot 'ApplicantIncome' on the first subplot
sns.distplot(data.Log_ApplicantIncome, color='#808080', ax=axes[0],
hist_kws=dict(edgecolor="#404040", linewidth=1))
# Plot 'LoanAmount' (limited only to non-NA values) on the second and last subplot
sns.distplot(data.loc[data.Log_LoanAmount.notnull(), 'Log_LoanAmount'],
color='#df2020', ax=axes[1],
hist_kws=dict(edgecolor="#404040", linewidth=1))
# Adjust space between plots
plt.subplots_adjust(wspace=.3, hspace=.3)
###Output
_____no_output_____
###Markdown
3.3 Encoding Categorical Features- You should remember from our last lecture that categorical feature values might need to be transformed into numerical ones before they can be fed into the ML pipeline- In fact, some ML algorithms can support categorical values without further manipulation, but there are many others that do not. - We already mentioned two main approaches to turn categorical into numerical values: Label Encoding and One-Hot Encoding- The former simply assigns a number to each category, whereas the latter transforms each $k$-valued categorical feature into a $k$-dimensional binary vector.
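For completeness, here is a minimal sketch of the label-encoding alternative mentioned above, computed into a throwaway Series so that the pipeline below (which uses one-hot encoding via get_dummies) is unchanged; Property_Area is used only as an example column.
###Code
# Illustrative only: label encoding maps each category of 'Property_Area' to an integer code.
# This Series is not used afterwards; the cells below rely on one-hot encoding instead.
property_area_codes = data['Property_Area'].astype('category').cat.codes
print(data['Property_Area'].head())
print(property_area_codes.head())
###Output
_____no_output_____
###Markdown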
###Code
# In pandas we can achieve easily one-hot encoding using the 'get_dummies()' function
categorical_features = [col for col in data.columns if not is_numeric_dtype(data[col]) and col != 'Loan_Status']
print(categorical_features)
data_with_dummies = pd.get_dummies(data, columns = categorical_features)
data_with_dummies.head()
# Just as a convention, I prefer to place the column to be predicted
# as the last one.
columns = data_with_dummies.columns.tolist()
# Popping out 'Loan_Status' from the list and inserting it back at the end.
columns.insert(len(columns), columns.pop(columns.index('Loan_Status')))
# Let's refactor the DataFrame using this new column index
data_with_dummies = data_with_dummies.loc[:, columns]
data_with_dummies.head()
###Output
_____no_output_____
###Markdown
Encoding Binary Class Label
###Code
data = data_with_dummies
data.Loan_Status = data.Loan_Status.map(lambda x: 1 if x=='Y' else -1)
data.head()
###Output
_____no_output_____ |
hw10/e63_Assign10_GraphSoftmax-Shanaka_DeSoysa.ipynb | ###Markdown
E-63 Big Data Analytics - Assignment 10 - TensorFlow Shanaka De Soysa
###Code
import sys
import tensorflow as tf
print(sys.version)
print(sys.version_info)
print("TensorFlow Version: {0}".format(tf.__version__))
###Output
3.5.2 |Anaconda 4.2.0 (x86_64)| (default, Jul 2 2016, 17:52:12)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)]
sys.version_info(major=3, minor=5, micro=2, releaselevel='final', serial=0)
TensorFlow Version: 1.0.1
###Markdown
Problem 1.Please use tf_upgrade.py utility, which you could find on the TensorFlow GitHub site to upgrade attached Python script vectorized_graph.py to TensorFlow 1.x. Demonstrate that upgraded script will run and produce TensorBoard graph and summaries. Provide working upgraded script and images of your graphs and calculated summaries. (25%)
###Code
!python tf_upgrade.py --infile vectorized_graph.py --outfile vectorized_graph_upgraded.py
###Output
TensorFlow 1.0 Upgrade Script
-----------------------------
Converted 1 files
Detected 0 errors that require attention
--------------------------------------------------------------------------------
Make sure to read the detailed log 'report.txt'
###Markdown
**Inspect the report.txt file** Upgraded script
###Code
import tensorflow as tf
import numpy as np
LOG_FILE = 'logs/improved_graph'
# Explicitly create a Graph object
graph = tf.Graph()
with graph.as_default():
with tf.name_scope("variables"):
# Variable to keep track of how many times the graph has been run
global_step = tf.Variable(0, dtype=tf.int32, name="global_step")
# Increments the above `global_step` Variable, should be run whenever
# the graph is run
increment_step = global_step.assign_add(1)
# Variable that keeps track of previous output value:
previous_value = tf.Variable(0.0,
dtype=tf.float32,
name="previous_value")
# Primary transformation Operations
with tf.name_scope("exercise_transformation"):
# Separate input layer
with tf.name_scope("input"):
# Create input placeholder- takes in a Vector
a = tf.placeholder(tf.float32,
shape=[None],
name="input_placeholder_a")
# Separate middle layer
with tf.name_scope("intermediate_layer"):
b = tf.reduce_prod(a, name="product_b")
c = tf.reduce_sum(a, name="sum_c")
# Separate output layer
with tf.name_scope("output"):
d = tf.add(b, c, name="add_d")
output = tf.subtract(d, previous_value, name="output")
update_prev = previous_value.assign(output)
# Summary Operations
with tf.name_scope("summaries"):
# Creates summary for output node
tf.summary.scalar("output_summary" ,output)
tf.summary.scalar("prod_summary", b)
tf.summary.scalar("sum_summary", c)
# Global Variables and Operations
with tf.name_scope("global_ops"):
# Initialization Op
init = tf.global_variables_initializer()
# Collect all summary Ops in graph
merged_summaries = tf.summary.merge_all()
# Start a Session, using the explicitly created Graph
sess = tf.Session(graph=graph)
# Open a SummaryWriter to save summaries
writer = tf.summary.FileWriter(LOG_FILE, graph)
# Initialize Variables
sess.run(init)
def run_graph(input_tensor):
"""
Helper function; runs the graph with given input tensor and saves summaries
"""
feed_dict = {a: input_tensor}
output, summary, step = sess.run(
[update_prev, merged_summaries, increment_step],
feed_dict=feed_dict)
writer.add_summary(summary, global_step=step)
# Run the graph with various inputs
run_graph([2, 8])
run_graph([3, 1, 3, 3])
run_graph([8])
run_graph([1, 2, 3])
run_graph([11, 4])
run_graph([4, 1])
run_graph([7, 3, 1])
run_graph([6, 3])
run_graph([0, 2])
run_graph([4, 5, 6])
# Writes the summaries to disk
writer.flush()
# Flushes the summaries to disk and closes the SummaryWriter
writer.close()
# Close the session
sess.close()
# To start TensorBoard after running this file, execute the following command:
# $ tensorboard --logdir='./improved_graph'
###Output
_____no_output_____
###Markdown
Tensorboard Graph Tensorboard Summaries Problem 2. Please construct a graph that will accept as inputs two vectors of equal length (tensors of dimension 1) and perform the operations on those vectors as depicted in the drawing bellow. Organize your variables and operations in nested namespaces as suggested by the nested boxes in the same graph. Organize your program in such a way that it repeats calculations in the graphs for 8 vectors of different lengths and element values. Collect and display in the TensorBoard as summaries the results on the right. (25%)  Python script
###Code
import tensorflow as tf
import numpy as np
LOG_FILE = 'logs/p2'
# Explicitly create a Graph object
graph = tf.Graph()
with graph.as_default():
with tf.name_scope("variables"):
# Variable to keep track of how many times the graph has been run
global_step = tf.Variable(0, dtype=tf.int32, name="global_step")
# Increments the above `global_step` Variable, should be run whenever
# the graph is run
increment_step = global_step.assign_add(1)
a = tf.placeholder(tf.float32,
shape=[None],
name="input_a")
b = tf.placeholder(tf.float32,
shape=[None],
name="input_b")
# Primary transformation Operations
with tf.name_scope("exercise_transformation"):
# Separate input layer
with tf.name_scope("intermediate_layer_1"):
# Create input placeholder- takes in a Vector
with tf.name_scope("intermediate_layer_a"):
a_moments = tf.nn.moments(a, axes=[0], name="a_sd")
a_norm = tf.norm(a, name="a_norm")
with tf.name_scope("intermediate_layer_b"):
b_moments = tf.nn.moments(b, axes=[0], name="b_sd")
b_norm = tf.norm(b, name="b_norm")
# Separate middle layer
with tf.name_scope("intermediate_layer_2"):
a_normalized = tf.nn.l2_normalize(a, dim=0, name="normalize_a")
b_normalized = tf.nn.l2_normalize(b, dim=0, name="normalize_b")
# Separate output layer
with tf.name_scope("cosine_ab"):
b_normalized_T = tf.transpose([b_normalized])
cosine_similarity = tf.matmul([a_normalized],
b_normalized_T)
a_sd = tf.sqrt(a_moments[1], name="a_std_dev")
b_sd = tf.sqrt(b_moments[1], name="b_std_dev")
with tf.name_scope("covariance_ab"):
a_mean = tf.cast(tf.reduce_mean(a), tf.float32)
b_mean = tf.cast(tf.reduce_mean(b), tf.float32)
a_delta = a - a_mean
b_delta = b - b_mean
covariance = tf.reduce_mean(tf.multiply(a_delta, b_delta))
# Summary Operations
with tf.name_scope("summaries"):
# Creates summary for output node
tf.summary.scalar("a_sd", a_sd)
tf.summary.scalar("b_sd", b_sd)
tf.summary.scalar("a_norm", a_norm)
tf.summary.scalar("b_norm", b_norm)
tf.summary.scalar("cosine_ab", cosine_similarity[0][0])
tf.summary.scalar("covariance_ab", covariance)
# Global Variables and Operations
with tf.name_scope("global_ops"):
# Initialization Op
init = tf.global_variables_initializer()
# Collect all summary Ops in graph
merged_summaries = tf.summary.merge_all()
# Start a Session, using the explicitly created Graph
sess = tf.Session(graph=graph)
# Open a SummaryWriter to save summaries
writer = tf.summary.FileWriter(LOG_FILE, graph)
# Initialize Variables
sess.run(init)
def run_graph(input_tensor1, input_tensor2):
"""
Helper function; runs the graph with given input tensor and saves summaries
"""
feed_dict = {a: input_tensor1, b: input_tensor2}
# a_sd_val, b_sd_val, a_norm_val, b_norm_val, cosine_similarity_val, covariance_val, summary, step = sess.run(
# [a_sd, b_sd, a_norm, b_norm, cosine_similarity, covariance, merged_summaries, increment_step], feed_dict=feed_dict)
# print("a_sd: {0}, b_sd: {1}, a_norm: {2}, b_norm: {3}, cosine: {4}, covariance: {5}".
# format(a_sd_val, b_sd_val, a_norm_val, b_norm_val,
# cosine_similarity_val, covariance_val))
summary, step, a_mean_val, b_mean_val, covariance_val = sess.run(
[merged_summaries, increment_step, a_mean, b_mean, covariance], feed_dict=feed_dict)
writer.add_summary(summary, step)
print("a_mean: {0}, b_mean: {1}, cov: {2}".format(
a_mean_val, b_mean_val, covariance_val))
#run_graph([3.0, 5.0, 355.0, 3.0], [22.0, 111.0, 3.0, 10.0])
#run_graph([3, 1, 3, 3],[3, 1, 3, 3])
def run_graph_with_random_vectors(iterations=8, seed=1234):
np.random.seed(seed)
for i in range(iterations):
v_len = np.random.randint(10, 20)
print("Vector length: {0}".format(v_len))
x, y = [], []
for j in range(v_len):
x.append(np.random.randint(1, 10))
y.append(np.random.randint(1, 10))
print("x: {0}".format(x))
print("y: {0}".format(y))
run_graph(x, y)
run_graph_with_random_vectors()
# Writes the summaries to disk
writer.flush()
# Flushes the summaries to disk and closes the SummaryWriter
writer.close()
# Close the session
sess.close()
# To start TensorBoard after running this file, execute the following command:
# $ tensorboard --logdir='./improved_graph'
###Output
_____no_output_____
###Markdown
TensorBoard Graph TensorBoard Summaries Problem 3. Fetch Iris Dataset from https://archive.ics.uci.edu/ml/datasets/Iris and make attached Python script, softmax_irises.py work. You might have to upgrade the script to TF 1.x API. Generate TensorBoard graph of the process and use scalar summary to presenting variation of the loss function during the training process. Report the results of the evaluation process. (25%) Upgraded and improved code
###Code
# pylint: disable=invalid-name
# Softmax example in TF using the classical Iris dataset
# Download iris.data from https://archive.ics.uci.edu/ml/datasets/Iris
import os
import tensorflow as tf
DATA_FILE = "data/IrisDataSet.csv"
LOG_FILE = "logs/p3_iris"
def combine_inputs(X):
with tf.name_scope("combine_inputs"):
return tf.matmul(X, W) + b
def inference(X):
with tf.name_scope("inference"):
return tf.nn.softmax(combine_inputs(X))
def loss(X, Y):
with tf.name_scope("loss"):
return tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=combine_inputs(X),
labels=Y))
def read_csv(batch_size, file_name, record_defaults):
with tf.name_scope("read_csv"):
filename_queue = tf.train.string_input_producer(
            [file_name])  # __file__ is not defined in a notebook; use a path relative to the working directory
reader = tf.TextLineReader(skip_header_lines=1)
key, value = reader.read(filename_queue)
# decode_csv will convert a Tensor from type string (the text line) in
# a tuple of tensor columns with the specified defaults, which also
# sets the data type for each column
decoded = tf.decode_csv(
value, record_defaults=record_defaults, name="decode_csv")
# batch actually reads the file and loads "batch_size" rows in a single
# tensor
return tf.train.shuffle_batch(decoded,
batch_size=batch_size,
capacity=batch_size * 50,
min_after_dequeue=batch_size,
name="shuffle_batch")
def inputs():
with tf.name_scope("inputs"):
sepal_length, sepal_width, petal_length, petal_width, label =\
read_csv(100, DATA_FILE, [[0.0], [0.0], [0.0], [0.0], [""]])
# convert class names to a 0 based class index.
label_number = tf.to_int32(tf.argmax(tf.to_int32(tf.stack([
tf.equal(label, ["Iris-setosa"]),
tf.equal(label, ["Iris-versicolor"]),
tf.equal(label, ["Iris-virginica"])
])), 0), name="label")
# Pack all the features that we care about in a single matrix;
# We then transpose to have a matrix with one example per row and one
# feature per column.
features = tf.transpose(tf.stack(
[sepal_length, sepal_width, petal_length, petal_width]), name="features")
return features, label_number
def train(total_loss):
with tf.name_scope("train"):
learning_rate = 0.01
return tf.train.GradientDescentOptimizer(learning_rate, name="GradientDescent").minimize(total_loss)
def evaluate(sess, X, Y):
with tf.name_scope("evaluate"):
predicted = tf.cast(tf.arg_max(inference(X), 1), tf.int32)
print("Evaluation: ", sess.run(tf.reduce_mean(
tf.cast(tf.equal(predicted, Y), tf.float32))))
# Explicitly create a Graph object
graph = tf.Graph()
with graph.as_default():
with tf.name_scope("weights_and_bias"):
# this time weights form a matrix, not a column vector, one "weight
# vector" per class.
W = tf.Variable(tf.zeros([4, 3]), name="weights")
# so do the biases, one per class.
b = tf.Variable(tf.zeros([3], name="bias"))
X, Y = inputs()
total_loss = loss(X, Y)
train_op = train(total_loss)
with tf.name_scope("summaries"):
# Creates summary for output node
# Scalar summary for loss
tf.summary.scalar("loss", total_loss)
# Global Variables and Operations
with tf.name_scope("global_ops"):
# Initialization Op
init = tf.global_variables_initializer()
# Collect all summary Ops in graph
merged_summaries = tf.summary.merge_all()
# Launch the graph in a session, setup boilerplate
with tf.Session(graph=graph) as sess:
# Open a SummaryWriter to save summaries
writer = tf.summary.FileWriter(LOG_FILE, graph)
sess.run(init)
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
# actual training loop
training_steps = 1000
for step in range(training_steps):
sess.run([train_op])
# for debugging and learning purposes, see how the loss gets
# decremented thru training steps
if step % 10 == 0:
loss_val, summary_str = sess.run([total_loss, merged_summaries])
writer.add_summary(summary_str, step)
if step % 100 == 0:
print("loss: ", loss_val)
evaluate(sess, X, Y)
# Writes the summaries to disk
writer.flush()
# Flushes the summaries to disk and closes the SummaryWriter
writer.close()
coord.request_stop()
coord.join(threads)
sess.close()
###Output
_____no_output_____ |
cornell_seq2seq_with_attention_newsqa.ipynb | ###Markdown
###Code
!nvidia-smi
from google.colab import drive
drive.mount('/content/drive')
!cp drive/MyDrive/cornell-movie-dialog-turns.csv .
!ls -lah
###Output
total 24M
drwxr-xr-x 1 root root 4.0K Mar 25 10:02 .
drwxr-xr-x 1 root root 4.0K Mar 25 10:00 ..
drwxr-xr-x 4 root root 4.0K Mar 18 13:36 .config
-r-------- 1 root root 24M Mar 25 10:02 cornell-movie-dialog-turns.csv
drwx------ 5 root root 4.0K Mar 25 10:02 drive
drwxr-xr-x 1 root root 4.0K Mar 18 13:36 sample_data
###Markdown
Load data
###Code
import pandas as pd
import torchtext
import torch
import time
import random
import math
from torchtext.legacy import data
from torch import nn, optim
import torch.nn.functional as F
from tqdm.notebook import tqdm
df = pd.read_csv('cornell-movie-dialog-turns.csv')
df.head()
df.dropna(inplace=True)
print(max(df['turn1'].str.len()))
print(max(df['turn2'].str.len()))
print(df.shape)
# Use same field for both columns since they have a shared vocabulary
TEXT = torchtext.legacy.data.Field(
tokenize='spacy',
lower=True,
init_token='<sos>',
eos_token='<eos>',
batch_first = True
)
fields = [('turn1', TEXT), ('turn2', TEXT)]
# Create dataset
start = time.time()
dataset = torchtext.legacy.data.TabularDataset(
path='cornell-movie-dialog-turns.csv',
format='CSV',
fields=fields,
skip_header=True
)
end = time.time()
print(f'Duration: {end - start}')
# Train/val split
(train, valid) = dataset.split(split_ratio=[0.85, 0.15])
print(len(train), len(valid))
MAX_VOCAB_SIZE = 10_000
TEXT.build_vocab(train,vectors = "glove.6B.300d",unk_init = torch.Tensor.normal_, max_size=MAX_VOCAB_SIZE)
print(f'Size of vocab: {len(TEXT.vocab)}')
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
BATCH_SIZE = 32
train_iterator, valid_iterator = torchtext.legacy.data.BucketIterator.splits(
(train, valid),
batch_size = BATCH_SIZE,
sort_key=lambda x: len(x.turn1),
device = device
)
class Encoder(nn.Module):
def __init__(self,
input_dim,
hid_dim,
n_layers,
n_heads,
pf_dim,
dropout,
device,
max_length = 100):
super().__init__()
self.device = device
self.tok_embedding = nn.Embedding(input_dim, hid_dim)
self.pos_embedding = nn.Embedding(max_length, hid_dim)
self.layers = nn.ModuleList([EncoderLayer(hid_dim,
n_heads,
pf_dim,
dropout,
device)
for _ in range(n_layers)])
self.dropout = nn.Dropout(dropout)
self.scale = torch.sqrt(torch.FloatTensor([hid_dim])).to(device)
def forward(self, src, src_mask):
#src = [batch size, src len]
#src_mask = [batch size, 1, 1, src len]
batch_size = src.shape[0]
src_len = src.shape[1]
pos = torch.arange(0, src_len).unsqueeze(0).repeat(batch_size, 1).to(self.device)
#pos = [batch size, src len]
src = self.dropout((self.tok_embedding(src) * self.scale) + self.pos_embedding(pos))
#src = [batch size, src len, hid dim]
for layer in self.layers:
src = layer(src, src_mask)
#src = [batch size, src len, hid dim]
return src
class EncoderLayer(nn.Module):
def __init__(self,
hid_dim,
n_heads,
pf_dim,
dropout,
device):
super().__init__()
self.self_attn_layer_norm = nn.LayerNorm(hid_dim)
self.ff_layer_norm = nn.LayerNorm(hid_dim)
self.self_attention = MultiHeadAttentionLayer(hid_dim, n_heads, dropout, device)
self.positionwise_feedforward = PositionwiseFeedforwardLayer(hid_dim,
pf_dim,
dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src, src_mask):
#src = [batch size, src len, hid dim]
#src_mask = [batch size, 1, 1, src len]
#self attention
_src, _ = self.self_attention(src, src, src, src_mask)
#dropout, residual connection and layer norm
src = self.self_attn_layer_norm(src + self.dropout(_src))
#src = [batch size, src len, hid dim]
#positionwise feedforward
_src = self.positionwise_feedforward(src)
#dropout, residual and layer norm
src = self.ff_layer_norm(src + self.dropout(_src))
#src = [batch size, src len, hid dim]
return src
class MultiHeadAttentionLayer(nn.Module):
def __init__(self, hid_dim, n_heads, dropout, device):
super().__init__()
assert hid_dim % n_heads == 0
self.hid_dim = hid_dim
self.n_heads = n_heads
self.head_dim = hid_dim // n_heads
self.fc_q = nn.Linear(hid_dim, hid_dim)
self.fc_k = nn.Linear(hid_dim, hid_dim)
self.fc_v = nn.Linear(hid_dim, hid_dim)
self.fc_o = nn.Linear(hid_dim, hid_dim)
self.dropout = nn.Dropout(dropout)
self.scale = torch.sqrt(torch.FloatTensor([self.head_dim])).to(device)
def forward(self, query, key, value, mask = None):
batch_size = query.shape[0]
#query = [batch size, query len, hid dim]
#key = [batch size, key len, hid dim]
#value = [batch size, value len, hid dim]
Q = self.fc_q(query)
K = self.fc_k(key)
V = self.fc_v(value)
#Q = [batch size, query len, hid dim]
#K = [batch size, key len, hid dim]
#V = [batch size, value len, hid dim]
Q = Q.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
K = K.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
V = V.view(batch_size, -1, self.n_heads, self.head_dim).permute(0, 2, 1, 3)
#Q = [batch size, n heads, query len, head dim]
#K = [batch size, n heads, key len, head dim]
#V = [batch size, n heads, value len, head dim]
energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / self.scale
#energy = [batch size, n heads, query len, key len]
if mask is not None:
energy = energy.masked_fill(mask == 0, -1e10)
attention = torch.softmax(energy, dim = -1)
#attention = [batch size, n heads, query len, key len]
x = torch.matmul(self.dropout(attention), V)
#x = [batch size, n heads, query len, head dim]
x = x.permute(0, 2, 1, 3).contiguous()
#x = [batch size, query len, n heads, head dim]
x = x.view(batch_size, -1, self.hid_dim)
#x = [batch size, query len, hid dim]
x = self.fc_o(x)
#x = [batch size, query len, hid dim]
return x, attention
class PositionwiseFeedforwardLayer(nn.Module):
def __init__(self, hid_dim, pf_dim, dropout):
super().__init__()
self.fc_1 = nn.Linear(hid_dim, pf_dim)
self.fc_2 = nn.Linear(pf_dim, hid_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, x):
#x = [batch size, seq len, hid dim]
x = self.dropout(torch.relu(self.fc_1(x)))
#x = [batch size, seq len, pf dim]
x = self.fc_2(x)
#x = [batch size, seq len, hid dim]
return x
class Decoder(nn.Module):
def __init__(self,
output_dim,
hid_dim,
n_layers,
n_heads,
pf_dim,
dropout,
device,
max_length = 100):
super().__init__()
self.device = device
self.tok_embedding = nn.Embedding(output_dim, hid_dim)
self.pos_embedding = nn.Embedding(max_length, hid_dim)
self.layers = nn.ModuleList([DecoderLayer(hid_dim,
n_heads,
pf_dim,
dropout,
device)
for _ in range(n_layers)])
self.fc_out = nn.Linear(hid_dim, output_dim)
self.dropout = nn.Dropout(dropout)
self.scale = torch.sqrt(torch.FloatTensor([hid_dim])).to(device)
def forward(self, trg, enc_src, trg_mask, src_mask):
#trg = [batch size, trg len]
#enc_src = [batch size, src len, hid dim]
#trg_mask = [batch size, 1, trg len, trg len]
#src_mask = [batch size, 1, 1, src len]
batch_size = trg.shape[0]
trg_len = trg.shape[1]
pos = torch.arange(0, trg_len).unsqueeze(0).repeat(batch_size, 1).to(self.device)
#pos = [batch size, trg len]
trg = self.dropout((self.tok_embedding(trg) * self.scale) + self.pos_embedding(pos))
#trg = [batch size, trg len, hid dim]
for layer in self.layers:
trg, attention = layer(trg, enc_src, trg_mask, src_mask)
#trg = [batch size, trg len, hid dim]
#attention = [batch size, n heads, trg len, src len]
output = self.fc_out(trg)
#output = [batch size, trg len, output dim]
return output, attention
class DecoderLayer(nn.Module):
def __init__(self,
hid_dim,
n_heads,
pf_dim,
dropout,
device):
super().__init__()
self.self_attn_layer_norm = nn.LayerNorm(hid_dim)
self.enc_attn_layer_norm = nn.LayerNorm(hid_dim)
self.ff_layer_norm = nn.LayerNorm(hid_dim)
self.self_attention = MultiHeadAttentionLayer(hid_dim, n_heads, dropout, device)
self.encoder_attention = MultiHeadAttentionLayer(hid_dim, n_heads, dropout, device)
self.positionwise_feedforward = PositionwiseFeedforwardLayer(hid_dim,
pf_dim,
dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, trg, enc_src, trg_mask, src_mask):
#trg = [batch size, trg len, hid dim]
#enc_src = [batch size, src len, hid dim]
#trg_mask = [batch size, 1, trg len, trg len]
#src_mask = [batch size, 1, 1, src len]
#self attention
_trg, _ = self.self_attention(trg, trg, trg, trg_mask)
#dropout, residual connection and layer norm
trg = self.self_attn_layer_norm(trg + self.dropout(_trg))
#trg = [batch size, trg len, hid dim]
#encoder attention
_trg, attention = self.encoder_attention(trg, enc_src, enc_src, src_mask)
#dropout, residual connection and layer norm
trg = self.enc_attn_layer_norm(trg + self.dropout(_trg))
#trg = [batch size, trg len, hid dim]
#positionwise feedforward
_trg = self.positionwise_feedforward(trg)
#dropout, residual and layer norm
trg = self.ff_layer_norm(trg + self.dropout(_trg))
#trg = [batch size, trg len, hid dim]
#attention = [batch size, n heads, trg len, src len]
return trg, attention
class Seq2Seq(nn.Module):
def __init__(self,
encoder,
decoder,
src_pad_idx,
trg_pad_idx,
device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.src_pad_idx = src_pad_idx
self.trg_pad_idx = trg_pad_idx
self.device = device
def make_src_mask(self, src):
#src = [batch size, src len]
src_mask = (src != self.src_pad_idx).unsqueeze(1).unsqueeze(2)
#src_mask = [batch size, 1, 1, src len]
return src_mask
def make_trg_mask(self, trg):
#trg = [batch size, trg len]
trg_pad_mask = (trg != self.trg_pad_idx).unsqueeze(1).unsqueeze(2)
#trg_pad_mask = [batch size, 1, 1, trg len]
trg_len = trg.shape[1]
trg_sub_mask = torch.tril(torch.ones((trg_len, trg_len), device = self.device)).bool()
#trg_sub_mask = [trg len, trg len]
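        # e.g. (illustrative) for trg_len = 3 the causal sub-mask produced by torch.tril is
        # [[1, 0, 0],
        #  [1, 1, 0],
        #  [1, 1, 1]]  -> position i may only attend to positions <= i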
trg_mask = trg_pad_mask & trg_sub_mask
#trg_mask = [batch size, 1, trg len, trg len]
return trg_mask
def forward(self, src, trg):
#src = [batch size, src len]
#trg = [batch size, trg len]
src_mask = self.make_src_mask(src)
trg_mask = self.make_trg_mask(trg)
#src_mask = [batch size, 1, 1, src len]
#trg_mask = [batch size, 1, trg len, trg len]
enc_src = self.encoder(src, src_mask)
#enc_src = [batch size, src len, hid dim]
output, attention = self.decoder(trg, enc_src, trg_mask, src_mask)
#output = [batch size, trg len, output dim]
#attention = [batch size, n heads, trg len, src len]
return output, attention
INPUT_DIM = len(TEXT.vocab)
OUTPUT_DIM = len(TEXT.vocab)
HID_DIM = 300
ENC_LAYERS = 3
DEC_LAYERS = 3
ENC_HEADS = 5
DEC_HEADS = 5
ENC_PF_DIM = 512
DEC_PF_DIM = 512
ENC_DROPOUT = 0.1
DEC_DROPOUT = 0.1
enc = Encoder(INPUT_DIM,
HID_DIM,
ENC_LAYERS,
ENC_HEADS,
ENC_PF_DIM,
ENC_DROPOUT,
device,
max_length=2000)
dec = Decoder(OUTPUT_DIM,
HID_DIM,
DEC_LAYERS,
DEC_HEADS,
DEC_PF_DIM,
DEC_DROPOUT,
device,
max_length = 3100)
SRC_PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
TRG_PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = Seq2Seq(enc, dec, SRC_PAD_IDX, TRG_PAD_IDX, device).to(device)
def init_weights(m):
for name, param in m.named_parameters():
if 'weight' in name:
nn.init.normal_(param.data, mean=0, std=0.01)
else:
nn.init.constant_(param.data, 0)
model.apply(init_weights)
pretrained_embeddings = TEXT.vocab.vectors
print(pretrained_embeddings.shape)
model.encoder.tok_embedding.weight.data.copy_(pretrained_embeddings)
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model.encoder.tok_embedding.weight.data[UNK_IDX] = torch.zeros(HID_DIM)
model.encoder.tok_embedding.weight.data[PAD_IDX] = torch.zeros(HID_DIM)
print(model.encoder.tok_embedding.weight.data)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
optimizer = optim.Adam(model.parameters())
TRG_PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in tqdm(enumerate(iterator)):
src = batch.turn1
trg = batch.turn2
optimizer.zero_grad()
output, _ = model(src, trg[:,:-1])
#output = model(src, trg)
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output.contiguous().view(-1, output_dim)
trg = trg[:,1:].contiguous().view(-1)
# output = output[1:].view(-1, output_dim)
# trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in tqdm(enumerate(iterator)):
src = batch.turn1
trg = batch.turn2
output, _ = model(src, trg[:,:-1])
#output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output.contiguous().view(-1, output_dim)
trg = trg[:,1:].contiguous().view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut3-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
import spacy
def translate_sentence(sentence, src_field, trg_field, model, device, max_len = 50):
model.eval()
if isinstance(sentence, str):
nlp = spacy.load('en')
tokens = [token.text.lower() for token in nlp(sentence)]
else:
tokens = [token.lower() for token in sentence]
tokens = [src_field.init_token] + tokens + [src_field.eos_token]
src_indexes = [src_field.vocab.stoi[token] for token in tokens]
src_tensor = torch.LongTensor(src_indexes).unsqueeze(0).to(device)
src_mask = model.make_src_mask(src_tensor)
with torch.no_grad():
enc_src = model.encoder(src_tensor, src_mask)
trg_indexes = [trg_field.vocab.stoi[trg_field.init_token]]
for i in range(max_len):
trg_tensor = torch.LongTensor(trg_indexes).unsqueeze(0).to(device)
trg_mask = model.make_trg_mask(trg_tensor)
with torch.no_grad():
output, attention = model.decoder(trg_tensor, enc_src, trg_mask, src_mask)
pred_token = output.argmax(2)[:,-1].item()
trg_indexes.append(pred_token)
if pred_token == trg_field.vocab.stoi[trg_field.eos_token]:
break
trg_tokens = [trg_field.vocab.itos[i] for i in trg_indexes]
return trg_tokens[1:], attention
translate_sentence("What are you sorry about",TEXT,TEXT,model,device)
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/building_production_ml_systems/labs/accelarators_caip_solution.ipynb | ###Markdown
Train on single Processor
###Code
%%writefile train/train_single_cpu_gpu.py
import os
import time
import tensorflow as tf
import numpy as np
from . import model_definition
#Get data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
#Train model
model = model_definition.create_model(input_shape=x_train.shape[1:])
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, ),
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'])
start = time.time()
model.fit(
x_train.astype(np.float32), y_train.astype(np.float32),
epochs=17,
steps_per_epoch=60,
validation_data=(x_test.astype(np.float32), y_test.astype(np.float32)),
validation_freq=17
)
print("Training time with single GPUs: {}".format(time.time() - start))
model.save_weights('./fashion_mnist_single.h5', overwrite=True)
###Output
_____no_output_____
###Markdown
Submit single worker with CPU
###Code
%%bash
now=$(date +"%Y%m%d_%H%M%S")
JOB_NAME="single_cpu_fashion_minst_$now"
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--package-path=train \
--module-name=train.train_single_cpu_gpu \
--runtime-version=2.1 \
--python-version=3.7 \
--region=us-central1 \
###Output
_____no_output_____
###Markdown
Submit single worker with single GPU
###Code
%%bash
now=$(date +"%Y%m%d_%H%M%S")
JOB_NAME="single_gpu_fashion_minst_$now"
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--package-path=train \
--module-name=train.train_single_cpu_gpu \
--runtime-version=2.1 \
--python-version=3.7 \
--region=us-central1 \
--scale-tier=BASIC_GPU
###Output
_____no_output_____
###Markdown
Train on multiple GPUs
###Code
%%writefile train/train_mult_cpu_gpu.py
import os
import time
import tensorflow as tf
import numpy as np
from . import model_definition
#Get data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
##################### Multiple GPUs or CPUs ###################
strategy = tf.distribute.MirroredStrategy()
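# (Optional sanity check, not in the original script) report how many replicas
# the MirroredStrategy will synchronize gradients across
print("Number of replicas in sync: {}".format(strategy.num_replicas_in_sync))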
with strategy.scope():
###############################################################
model = model_definition.create_model(input_shape=x_train.shape[1:])
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, ),
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'])
start = time.time()
model.fit(
x_train.astype(np.float32), y_train.astype(np.float32),
epochs=17,
steps_per_epoch=60,
validation_data=(x_test.astype(np.float32), y_test.astype(np.float32)),
validation_freq=17
)
print("Training time with multiple GPUs: {}".format(time.time() - start))
model.save_weights('./fashion_mnist_mult_gpu.h5', overwrite=True)
%%writefile config/multi_gpu_config.yaml
trainingInput:
scaleTier: CUSTOM
  # Configure a master worker with 4 NVIDIA K80 GPUs
masterType: n1-highcpu-16
masterConfig:
acceleratorConfig:
count: 4
type: NVIDIA_TESLA_K80
%%bash
now=$(date +"%Y%m%d_%H%M%S")
JOB_NAME="multi_gpu_fashion_minst_$now"
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--package-path=train \
--module-name=train.train_mult_cpu_gpu \
--runtime-version=2.1 \
--python-version=3.7 \
--region=us-central1 \
--config config/multi_gpu_config.yaml
###Output
_____no_output_____
###Markdown
Example training on TPU
###Code
%%writefile train/train_tpu.py
import os
import time
import tensorflow as tf
import numpy as np
from . import model_definition
#Get data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
##################### Run on TPUs ###################
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with strategy.scope():
###############################################################
model = model_definition.create_model(input_shape=x_train.shape[1:])
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, ),
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'])
start = time.time()
model.fit(
x_train.astype(np.float32), y_train.astype(np.float32),
epochs=17,
steps_per_epoch=60,
validation_data=(x_test.astype(np.float32), y_test.astype(np.float32)),
validation_freq=17
)
print("Training time with TPUs: {}".format(time.time() - start))
model.save_weights('./fashion_mnist_tpu.h5', overwrite=True)
###Output
_____no_output_____
###Markdown
Submit single worker with TPUv2 Pod (8 cores)
###Code
%%bash
now=$(date +"%Y%m%d_%H%M%S")
JOB_NAME="tpu_fashion_minst_$now"
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--package-path=train \
--module-name=train.train_tpu \
--runtime-version=2.1 \
--python-version=3.7 \
--scale-tier=BASIC_TPU \
--region=us-central1
###Output
_____no_output_____
###Markdown
Submit single worker TPUv3 (8 cores)
###Code
%%writefile config/tpuv3_config.yaml
trainingInput:
scaleTier: CUSTOM
masterType: n1-highcpu-16
workerType: cloud_tpu
workerCount: 1
workerConfig:
acceleratorConfig:
type: TPU_V3
count: 8
%%bash
now=$(date +"%Y%m%d_%H%M%S")
JOB_NAME="tpu_v3_fashion_minst_$now"
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--package-path=train \
--module-name=train.train_tpu \
--runtime-version=2.1 \
--python-version=3.7 \
--region=us-central1 \
--config config/tpuv3_config.yaml
###Output
_____no_output_____
###Markdown
Train on multiple device with GPUs
###Code
%%writefile train/train_mult_worker_mirrored.py
import os
import time
import tensorflow as tf
import numpy as np
from . import model_definition
#Get data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
# add empty color dimension
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
##################### Run on multiple workers with GPU ###################
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
with strategy.scope():
###############################################################
model = model_definition.create_model(input_shape=x_train.shape[1:])
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3, ),
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'])
start = time.time()
model.fit(
x_train.astype(np.float32), y_train.astype(np.float32),
epochs=17,
steps_per_epoch=60,
validation_data=(x_test.astype(np.float32), y_test.astype(np.float32)),
validation_freq=17
)
print("Training time with multiple GPUs: {}".format(time.time() - start))
model.save_weights('./fashion_mnist_mult_mirrored.h5', overwrite=True)
%%writefile config/multi_worker_gpu.yaml
trainingInput:
scaleTier: CUSTOM
  # Configure a master worker with 4 NVIDIA K80 GPUs
masterType: n1-highcpu-16
masterConfig:
acceleratorConfig:
count: 4
type: NVIDIA_TESLA_K80
  # Configure 3 workers, each with 4 NVIDIA K80 GPUs
workerCount: 3
workerType: n1-highcpu-16
workerConfig:
acceleratorConfig:
count: 4
type: NVIDIA_TESLA_K80
%%bash
now=$(date +"%Y%m%d_%H%M%S")
JOB_NAME="multi_worker_gpu_fashion_minst_$now"
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--package-path=train \
--module-name=train.train_mult_worker_mirrored \
--runtime-version=2.1 \
--python-version=3.7 \
--region=us-central1 \
--config config/multi_worker_gpu.yaml
###Output
_____no_output_____ |
03-data-x-data-handling/m344-blockchain/nb-m344-bitcoin.ipynb | ###Markdown
Data-X Explore the Bitcoin blockchainInstall the Blockchain package (not ported to Anaconda): `pip install blockchain`Read the docs https://github.com/blockchain/api-v1-client-python 100,000,000 Satoshi = 1.00 ฿
###Code
to_btc = 1/1e8
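# Quick illustrative check of the conversion factor above: 150,000,000 satoshi -> 1.5 BTC
print(150000000 * to_btc)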
import blockchain
from blockchain import blockexplorer as be
exampleBlock = be.get_block('000000000000000000e6257899a5a7994ca3c81315af180273de34c75f9d8933')
exampleTransaction = be.get_tx('726962b93d0f233d144dc9c8ada3a49251df804a135e43df5e8ac8e33182f985')
#exampleAddress = be.get_address('1KuWLoZuoJgz3N6sLoAwGth9XGm8YuFTGt')
exampleBlock.block_index
from datetime import datetime
print('Unix time',exampleBlock.received_time)
print('Time stamp',datetime.fromtimestamp(exampleBlock.received_time))
tx_hash = exampleBlock.transactions[0].hash
tx_hash
exampleTransaction = be.get_tx(tx_hash)
exampleTransaction.outputs[0].value*to_btc
pub_add = exampleTransaction.outputs[0].address
pub_add
exampleAddress = be.get_address(pub_add)
exampleAddress.total_received*to_btc
exampleAddress.n_tx
exampleAddress.transactions[0].block_height
last_tx = exampleAddress.transactions[0].outputs[0].value*to_btc
last_tx
datetime.fromtimestamp(exampleAddress.transactions[0].time)
###Output
_____no_output_____
###Markdown
Confirm by looking at a Blockchain explorer APIhttps://blockchain.info/address/1L75eRMgeCwAxEjD1oWXjLgud9jxwxm34u Exchange rates
###Code
from blockchain import exchangerates as er
rates = er.get_ticker()
usd_btc = rates['USD'].p15min # 15min delayed price
usd_btc
last_tx * usd_btc
from blockchain import createwallet as cw
from blockchain import statistics
stats = statistics.get()
stats.btc_mined
stats.estimated_transaction_volume_usd
stats.minutes_between_blocks
###Output
_____no_output_____
###Markdown
Create test wallet In bash run: `blockchain-wallet-service start --port 3000`From: https://github.com/blockchain/service-my-wallet-v3
###Code
from blockchain import createwallet
wallet = createwallet.create_wallet('1234password', '58ck39ajuiw', 'http://localhost:3000/', label = 'Test wallet')
wallet.address
wallet.identifier
wallet.label
###Output
_____no_output_____
###Markdown
Hash functions`sha256`* 256 bit hash function can have 2^256 different values.* A 2-bit key can have 2^2 = 4 values – 00, 01, 10 & 11.* 3 bit key 2^3=8: 000, 001, 010, 100, 011, 110, 101, 111
###Code
import hashlib
# input must be encoded as bytes and not unicode
hashlib.sha256('hihi'.encode('utf-8'))
res = hashlib.sha256('alex'.encode('utf-8'))
res.hexdigest()
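# Illustrative extra (not in the original notebook): even a tiny change in the input
# produces a completely different SHA-256 digest (the "avalanche effect").
print(hashlib.sha256('alex'.encode('utf-8')).hexdigest())
print(hashlib.sha256('alex!'.encode('utf-8')).hexdigest())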
###Output
_____no_output_____ |
neuromatch_project_2.ipynb | ###Markdown
Training
###Code
from google.colab import drive
drive.mount('/content/gdrive')
%cd /content/gdrive/MyDrive/neuromatch/
!ls
!ls
!pip install torchinfo
###Output
Collecting torchinfo
Downloading torchinfo-1.5.3-py3-none-any.whl (19 kB)
Installing collected packages: torchinfo
Successfully installed torchinfo-1.5.3
###Markdown
Setup
###Code
# imports
import os
import gc
import csv
import glob
import torch
import multiprocessing
import numpy as np
import pandas as pd
import torch.nn as nn
import matplotlib.pyplot as plt
import torch.optim as optim
import torch.nn.functional as F
import torch.backends.cudnn as cudnn
from torch.autograd import Variable
import torchvision
import torchvision.transforms as transforms
from torchvision.utils import make_grid
from torchvision.models import resnet18, vgg16, vgg19, vgg19_bn
from torchinfo import summary
import PIL
#!nvidia-smi
###Output
_____no_output_____
###Markdown
Set random seed Executing `set_seed(seed=seed)` you are setting the seed
###Code
# @title Set random seed
# @markdown Executing `set_seed(seed=seed)` you are setting the seed
# for DL its critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
###Output
_____no_output_____
###Markdown
Set device (GPU or CPU)
###Code
# @title Set device (GPU or CPU)
# inform the user if the notebook uses GPU or CPU.
def set_device():
device = "cuda" if torch.cuda.is_available() else "cpu"
if device != "cuda":
print("WARNING: For this notebook to perform best, "
"if possible, in the menu under `Runtime` -> "
"`Change runtime type.` select `GPU` ")
else:
print("GPU is enabled in this notebook.")
return device
###Output
_____no_output_____
###Markdown
Random seedsIf you want to obtain reproducible results, it is a good practice to set seeds for the random number generators of the various libraries
###Code
set_seed(seed=2021)
device = set_device()
###Output
Random seed 2021 has been set.
GPU is enabled in this notebook.
###Markdown
Training hyperparametersHere we set some general training hyperparameters such as the learning rate, batch size, etc. as well as other training options such as including data augmentation (`torchvision_transforms`).
###Code
# Hyper-parameters
use_cuda = torch.cuda.is_available()
batch_size = 128
torchvision_transforms = False  # True/False if you want to use torchvision augmentations
SIZE = (224,224)
result_folder = './results/'
if not os.path.exists(result_folder):
os.makedirs(result_folder)
if not os.path.isdir('checkpoint'):
os.mkdir('checkpoint')
###Output
_____no_output_____
###Markdown
--- Data Source datasetWe will train the model using the Brain Tumor Classification dataset from Kaggle, but with small tweaks we can use any other data we are interested in. Note that the dataset is normalized by subtracting the mean and dividing by the standard deviation (pre-computed) of the training set. Also, if `torchvision_transforms` is `True`, data augmentation will be applied during training. Download and prepare Data
###Code
# Download the dataset
!git clone https://github.com/SartajBhuvaji/Brain-Tumor-Classification-DataSet.git
print('==> Preparing data..')
def percentageSplit(full_dataset, percent = 0.0):
set1_size = int(percent * len(full_dataset))
set2_size = len(full_dataset) - set1_size
final_dataset, _ = torch.utils.data.random_split(full_dataset, [set1_size, set2_size])
return final_dataset
# ImageNet normalizing
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
# torchvision transforms
transform_train = transforms.Compose([])
if torchvision_transforms:
# transform_train.transforms.append(transforms.RandomCrop(SIZE, padding=10))
transform_train.transforms.append(transforms.RandomHorizontalFlip(p=0.30))
transform_train.transforms.append(transforms.RandomRotation(degrees=30))
transform_train.transforms.append(transforms.RandomAffine(degrees=(0, 0), translate=(0.1, 0)))
# Noise -> in dependance of the image
transform_train.transforms.append(transforms.RandomAutocontrast(p = 0.3))
# transform_train.transforms.append(transforms.GaussianBlur(kernel_size=(5, 9), sigma=(0.1, 5)))
#transform_train.transforms.append(transforms.AutoAugment())
# contrast #blur noise
transform_train.transforms.append(transforms.Resize(SIZE))
transform_train.transforms.append(transforms.ToTensor())
transform_train.transforms.append(transforms.Normalize(mean, std))
transform_test = transforms.Compose([
transforms.Resize(SIZE),
transforms.ToTensor(),
transforms.Normalize(mean, std),
])
trainset = torchvision.datasets.ImageFolder(
root= 'Brain-Tumor-Classification-DataSet/Training', transform=transform_train)
testset = torchvision.datasets.ImageFolder(
root='Brain-Tumor-Classification-DataSet/Testing', transform=transform_test)
# print the size of dataset
print("Train images: ", len(trainset))
print("Test images: ", len(testset))
classes_names = trainset.classes
print(classes_names)
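# Illustrative use of the percentageSplit helper defined above (assumption: we only want
# a quick smaller subset for debugging); it is not used in the rest of the notebook.
small_trainset = percentageSplit(trainset, percent=0.25)
print("25% subset size: ", len(small_trainset))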
###Output
Train images: 2870
Test images: 394
['glioma_tumor', 'meningioma_tumor', 'no_tumor', 'pituitary_tumor']
###Markdown
Data loadersA dataloader is an optimized data iterator that provides functionality for efficient shuffling, transformation and batching of the data. Dataloader
###Code
##@title Dataloader
num_workers = multiprocessing.cpu_count()
print(f'----> number of workers: {num_workers}')
trainloader = torch.utils.data.DataLoader(
trainset, batch_size=batch_size, shuffle=True, num_workers=num_workers)
testloader = torch.utils.data.DataLoader(
testset, batch_size=batch_size, shuffle=False, num_workers=num_workers)
###Output
----> number of workers: 2
###Markdown
Displaying Data
###Code
# @title Plotting functions
class UnNormalize(object):
def __init__(self, mean, std):
self.mean = mean
self.std = std
def __call__(self, tensor):
"""
Args:
tensor (Tensor): Tensor image of size (C, H, W) to be normalized.
Returns:
Tensor: Normalized image.
"""
for t, m, s in zip(tensor, self.mean, self.std):
t.mul_(s).add_(m)
# The normalize code -> t.sub_(m).div_(s)
return tensor
def imshow(img):
plt.figure(figsize=[20, 20])
#unnormalize
#img = img * torch.tensor(std).unsqueeze(dim=-1).unsqueeze(dim=-1)
#img = img + torch.tensor(mean).unsqueeze(dim=-1).unsqueeze(dim=-1)
img = UnNormalize(mean,std)(img)
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.axis(False)
plt.show()
#dataiter = iter(trainloader)
#images, labels = dataiter.next()
# show images
#imshow(make_grid(images[:10], nrow = 10))
###Output
_____no_output_____
###Markdown
Train the target model
###Code
# def adjust_learning_rate(optimizer, base_learning_rate, epoch):
# """decrease the learning rate at 100 and 150 epoch"""
# lr = base_learning_rate
# if epoch > 0:
# lr = 1e-4
# #if epoch > 5:
# #lr = 1e-5
# # if epoch <= 9 and lr > 0.1:
# # # warm-up training for large minibatch
# # lr = 0.1 + (base_learning_rate - 0.1) * epoch / 10.
# # if epoch >= 100:
# # lr /= 10
# # if epoch >= 150:
# # lr /= 10
# for param_group in optimizer.param_groups:
# param_group['lr'] = lr
def print_lr(optimizer):
tmp_list = []
for param_group in optimizer.param_groups:
tmp_list.append(param_group['lr'])
print("lr per group", tmp_list, "mean: ", np.mean(tmp_list))
return np.mean(tmp_list)
# Training & Test functions
def train(net, epoch, use_cuda=True):
print('\nEpoch: %d' % epoch)
net.train() # set the model in training mode
train_loss = 0
correct = 0
total = 0
for batch_idx, (inputs, targets) in enumerate(trainloader):
if use_cuda:
inputs, targets = inputs.cuda(), targets.cuda()
optimizer.zero_grad()
        inputs, targets = Variable(inputs), Variable(targets)  # Variable() is a legacy wrapper (pre-PyTorch 0.4) and is not strictly needed; plain tensors work directly
outputs = net(inputs)
loss = criterion(outputs, targets)
loss.backward()
optimizer.step()
        train_loss += loss.item()  # accumulate batch losses; the running average reported below is train_loss/(batch_idx+1)
_, predicted = torch.max(outputs.data, 1)
total += targets.size(0)
correct += predicted.eq(targets.data).cpu().sum()
if batch_idx % 500 == 0:
print("Train ",batch_idx, len(trainloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
% (train_loss/(batch_idx+1), 100.*correct/total, correct, total))
return (train_loss/batch_idx, 100.*correct/total)
def test(net, epoch, outModelName, use_cuda=True):
global best_acc
net.eval() # set the model in eval mode
test_loss, correct, total = 0, 0, 0
with torch.no_grad():
for batch_idx, (inputs, targets) in enumerate(testloader):
if use_cuda:
inputs, targets = inputs.cuda(), targets.cuda()
outputs = net(inputs)
loss = criterion(outputs, targets)
test_loss += loss.item()
_, predicted = torch.max(outputs.data, 1)
total += targets.size(0)
correct += predicted.eq(targets.data).cpu().sum()
if batch_idx % 200 == 0:
print("Test ", batch_idx, len(testloader), 'Loss: %.3f | Acc: %.3f%% (%d/%d)'
% (test_loss/(batch_idx+1), 100.*correct/total, correct, total))
# Save checkpoint.
acc = 100.*correct/total
if acc > best_acc:
best_acc = acc
#checkpoint(net, acc, epoch, outModelName)
torch.save(model, f'./checkpoint/{outModelName}_acc({str(acc.item()).replace(".","-")})_epoch({epoch}).pt')
return (test_loss/batch_idx, 100.*correct/total)
best_acc = 0 # best test accuracy
def main(model, max_epochs, outModelName):
logname = result_folder + model.__class__.__name__ + f'_{outModelName}.csv'
if not os.path.exists(logname):
with open(logname, 'w') as logfile:
logwriter = csv.writer(logfile, delimiter=',')
logwriter.writerow(['epoch', 'train loss', 'train acc', 'test loss', 'test acc', 'lr'])
for epoch in range(max_epochs):
# adjust_learning_rate(optimizer, base_learning_rate = 1e-3, epoch= epoch)
lr_mean = print_lr(optimizer)
train_loss, train_acc = train(model, epoch, use_cuda=use_cuda)
test_loss, test_acc = test(model, epoch, outModelName, use_cuda=use_cuda)
with open(logname, 'a') as logfile:
logwriter = csv.writer(logfile, delimiter=',')
logwriter.writerow([epoch, train_loss, train_acc.item(), test_loss, test_acc.item(), lr_mean])
print(f'Epoch: {epoch} | train acc: {train_acc} | test acc: {test_acc} | lr: {lr_mean}' )
###Output
_____no_output_____
###Markdown
Experiment: Train the complete model
###Code
class MyVGG11(nn.Module):
def __init__(self, pretrained_model, max_frozen_layer = None, freeze = False):
super(MyVGG11, self).__init__()
self.model = pretrained_model
# Freeze the model parameters
if freeze and max_frozen_layer is None:
# freeze all the convolutional layer
for param in self.model.parameters():
param.requires_grad = False
elif freeze and max_frozen_layer is not None:
# freeze the convolutional until max_frozen layer
counter = 0
for param in self.model.parameters():
if counter < max_frozen_layer:
print(f"Freezing layer {counter}")
param.requires_grad = False
counter += 1
self.model.avgpool = torch.nn.AdaptiveAvgPool2d(1)
# Replace the classifier part
self.model.classifier = nn.Sequential( #move dropout early
nn.Linear(512, 128, bias = True),
nn.ReLU(inplace = True),
nn.Dropout(p = 0.5), # a little bigger
nn.Linear(128, 4, bias = True)
)
def forward(self, x):
return self.model(x)
# pretrained_model = vgg16(pretrained=True)
pretrained_model = torchvision.models.vgg11_bn(pretrained=True)
# # pretrained_model = resnet18(pretrained=True)
# # pretrained_model = vgg11(pretrained=True)
model = MyVGG11(pretrained_model=pretrained_model)
model.to(device)
print(model)
# define the loss function
criterion = nn.CrossEntropyLoss()
# define the optimizer
optimizer = torch.optim.SGD(
model.parameters(),
lr=1e-4,
momentum=0.9,
weight_decay=1e-4,
)
# optimizer = torch.optim.Adam(
# model.parameters(),
# lr=1e-4
# )
###Output
MyVGG11(
(model): VGG(
(features): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(4): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(6): ReLU(inplace=True)
(7): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(8): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(9): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(10): ReLU(inplace=True)
(11): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(12): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(13): ReLU(inplace=True)
(14): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(15): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(16): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(17): ReLU(inplace=True)
(18): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(19): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(20): ReLU(inplace=True)
(21): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(22): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(23): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(24): ReLU(inplace=True)
(25): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(26): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(27): ReLU(inplace=True)
(28): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
(avgpool): AdaptiveAvgPool2d(output_size=1)
(classifier): Sequential(
(0): Linear(in_features=512, out_features=128, bias=True)
(1): ReLU(inplace=True)
(2): Dropout(p=0.5, inplace=False)
(3): Linear(in_features=128, out_features=4, bias=True)
)
)
)
###Markdown
Check number of parametersWe can calculate the total number of parameters and the number of trainable parameters, that is, those that will be updated during training. If most of the parameters are frozen, the number of trainable parameters will be much smaller.
###Code
total_params = sum(p.numel() for p in model.parameters())
trainable_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('Total Parameters:', total_params, 'Trainable parameters: ', trainable_total_params)
main(model, max_epochs = 30, outModelName = "vgg11bn_2linearlayers_mri")
###Output
lr per group [0.0001] mean: 0.0001
Epoch: 0
###Markdown
Freeze layers
###Code
pretrained_model = vgg19_bn(pretrained=True) #TODO: change the network
model = MyVGG11(pretrained_model=pretrained_model, freeze = True) #, max_frozen_layer=24)
model.to(device)
#print(model)
# define the loss function
criterion = nn.CrossEntropyLoss()
# define the optimizer
optimizer = torch.optim.SGD(
model.parameters(),
lr=1e-4,
momentum=0.9,
weight_decay= 1e-4 # l2 regularization
)
total_params = sum(p.numel() for p in model.parameters())
trainable_total_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print('Total Parameters:', total_params, 'Trainable parameters: ', trainable_total_params)
main(model, max_epochs = 20, outModelName = "frozen_model_bn_6_mri")
###Output
lr per group [0.0001] mean: 0.0001
Epoch: 0
###Markdown
Plot results
###Code
def plot_results(results, name):
train_accuracy = results['train acc'].values
test_accuracy = results['test acc'].values
train_loss = results['train loss'].values
test_loss = results['test loss'].values
figureName = name +"_accuracy" # change figure name
plt.style.use('fivethirtyeight')
fig, (ax1, ax2) = plt.subplots(figsize=(16,8),nrows=1, ncols=2, sharex=True, sharey=False)
ax1.plot(results['epoch'].values, train_accuracy, label='train', color='r', marker='s', lw=3)
ax1.plot(results['epoch'].values, test_accuracy, label='test', color='b', marker='o', lw=3)
ax1.legend()
ax1.set_ylim([0,100])
ax1.set(xlabel="Epochs", ylabel="Accuracy Score")
ax2.plot(results['epoch'].values, train_loss, label='train', color='r', marker='s', lw=3)
ax2.plot(results['epoch'].values, test_loss, label='test', color='b', marker='o', lw=3)
ax2.legend()
ax2.set(xlabel="Epochs", ylabel="CrossEntropy Loss")
fig.savefig(f'./results/{figureName}.png')
plt.show()
!ls results
name = "MyVGG11_vgg11bn_2linearlayers_mri"
results = pd.read_csv(f'./results/{name}.csv', sep =',')
plot_results(results, name)
###Output
_____no_output_____ |
Notebooks/Flight Fare Prediction/Flight_price.ipynb | ###Markdown
Flight Price Prediction Import modules
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
###Output
_____no_output_____
###Markdown
Importing dataset
###Code
train_data = pd.read_excel("Data_Train.xlsx")
pd.set_option('display.max_columns', None)
train_data.head()
train_data.info()
train_data.describe()
train_data.isnull().sum()
train_data.dropna(inplace = True)
train_data.isnull().sum()
train_data.columns
train_data.shape
###Output
_____no_output_____
###Markdown
EDA Date_of_Journey is an object data type, so it has to be converted to a proper datetime type (timestamp)
###Code
train_data["Journey_day"] = pd.to_datetime(train_data.Date_of_Journey,format = "%d/%m/%Y").dt.day
train_data["Journey_month"] = pd.to_datetime(train_data.Date_of_Journey,format= "%d/%m/%Y").dt.month
train_data.head()
###Output
_____no_output_____
###Markdown
Since the Date_of_Journey column is no longer needed after extracting the day and month into integers, we can now drop it as it is of no further use
###Code
train_data.drop(["Date_of_Journey"], axis = 1, inplace = True)
# extract hours and mins from departure time
train_data["Dep_hour"] = pd.to_datetime(train_data["Dep_Time"]).dt.hour
train_data["Dep_min"] = pd.to_datetime(train_data["Dep_Time"]).dt.minute
# drop Dep_time
train_data.drop(["Dep_Time"],axis =1 ,inplace =True)
train_data.head()
# similarly, extract hours and minutes from the arrival time
train_data["Arrival_hour"] = pd.to_datetime(train_data["Arrival_Time"]).dt.hour
train_data["Arrival_min"] = pd.to_datetime(train_data["Arrival_Time"]).dt.minute
# drop arrival time
train_data.drop(["Arrival_Time"],axis =1 , inplace = True)
train_data.head()
# Duration is the difference between the departure time and the arrival time
# Assigning and converting Duration column into list
duration = list(train_data["Duration"])
for i in range(len(duration)):
if len(duration[i].split()) != 2: # Check if duration contains only hour or mins
if "h" in duration[i]:
duration[i] = duration[i].strip() + " 0m" # Adds 0 minute
else:
duration[i] = "0h " + duration[i] # Adds 0 hour
duration_hours = []
duration_mins = []
for i in range(len(duration)):
duration_hours.append(int(duration[i].split(sep = "h")[0])) # Extract hours from duration
duration_mins.append(int(duration[i].split(sep = "m")[0].split()[-1])) # Extracts only minutes from duration
# Adding duration_hours and duration_mins list to train_data dataframe
train_data["Duration_hours"] = duration_hours
train_data["Duration_mins"] = duration_mins
train_data.drop(["Duration"],axis = 1 , inplace = True)
train_data.head()
###Output
_____no_output_____
###Markdown
Handling categorical data1. Nominal data ------> data are not in order ---> One hot encoding is used2. Ordinal data -------> data are in order ----> Label encoder is used
###Code
x = train_data["Airline"].value_counts()
x.plot(figsize = (18,10));
# From the graph we can see that Jet Airways Business has the highest price.
# Apart from the first airline, almost all have a similar median price
# Airline vs Price
sns.catplot(y = "Price", x = "Airline", data = train_data.sort_values("Price", ascending = False), kind="boxen", height = 6, aspect = 3)
plt.show()
# As Airline is Nominal Categorical data we will perform OneHotEncoding
Airline = train_data[["Airline"]]
Airline = pd.get_dummies(Airline, drop_first= True)
Airline.head()
train_data["Source"].value_counts()
# source vs price
sns.catplot(y = "Price", x = "Source", data = train_data.sort_values("Price", ascending = False), kind="boxen", height = 4, aspect = 3)
plt.show()
# As Source is Nominal Categorical data we will perform OneHotEncoding
Source = train_data[["Source"]]
Source = pd.get_dummies(Source, drop_first= True)
Source.head()
train_data["Destination"].value_counts()
# As Destination is Nominal Categorical data we will perform OneHotEncoding
Destination = train_data[["Destination"]]
Destination = pd.get_dummies(Destination, drop_first = True)
Destination.head()
train_data["Route"]
# Additional_Info contains almost 80% no_info
# Route and Total_Stops are related to each other
train_data.drop(["Route", "Additional_Info"], axis = 1, inplace = True)
train_data["Total_Stops"].value_counts()
# As this is an ordinal categorical feature, we map each category to an integer (a manual label encoding)
# Here each category key is replaced with its corresponding integer value
train_data.replace({"non-stop": 0, "1 stop": 1, "2 stops": 2, "3 stops": 3, "4 stops": 4}, inplace = True)
train_data.head()
# Concatenate dataframe --> train_data + Airline + Source + Destination
data_train = pd.concat([train_data, Airline, Source, Destination], axis = 1)
train_data.head()
data_train.drop(["Airline", "Source", "Destination"], axis = 1, inplace = True)
data_train.head()
data_train.shape
###Output
_____no_output_____
###Markdown
Test set
###Code
test_data = pd.read_excel("Test_set.xlsx")
test_data.head()
###Output
_____no_output_____
###Markdown
Preprocessing of test data similar to train data
###Code
# Preprocessing
print("Test data Info")
print("-"*75)
print(test_data.info())
print()
print()
print("Null values :")
print("-"*75)
test_data.dropna(inplace = True)
print(test_data.isnull().sum())
# EDA
# Date_of_Journey
test_data["Journey_day"] = pd.to_datetime(test_data.Date_of_Journey, format="%d/%m/%Y").dt.day
test_data["Journey_month"] = pd.to_datetime(test_data["Date_of_Journey"], format = "%d/%m/%Y").dt.month
test_data.drop(["Date_of_Journey"], axis = 1, inplace = True)
# Dep_Time
test_data["Dep_hour"] = pd.to_datetime(test_data["Dep_Time"]).dt.hour
test_data["Dep_min"] = pd.to_datetime(test_data["Dep_Time"]).dt.minute
test_data.drop(["Dep_Time"], axis = 1, inplace = True)
# Arrival_Time
test_data["Arrival_hour"] = pd.to_datetime(test_data.Arrival_Time).dt.hour
test_data["Arrival_min"] = pd.to_datetime(test_data.Arrival_Time).dt.minute
test_data.drop(["Arrival_Time"], axis = 1, inplace = True)
# Duration
duration = list(test_data["Duration"])
for i in range(len(duration)):
if len(duration[i].split()) != 2: # Check if duration contains only hour or mins
if "h" in duration[i]:
duration[i] = duration[i].strip() + " 0m" # Adds 0 minute
else:
duration[i] = "0h " + duration[i] # Adds 0 hour
duration_hours = []
duration_mins = []
for i in range(len(duration)):
duration_hours.append(int(duration[i].split(sep = "h")[0])) # Extract hours from duration
duration_mins.append(int(duration[i].split(sep = "m")[0].split()[-1])) # Extracts only minutes from duration
# Adding Duration column to test set
test_data["Duration_hours"] = duration_hours
test_data["Duration_mins"] = duration_mins
test_data.drop(["Duration"], axis = 1, inplace = True)
# Categorical data
print("Airline")
print("-"*75)
print(test_data["Airline"].value_counts())
Airline = pd.get_dummies(test_data["Airline"], drop_first= True)
print()
print("Source")
print("-"*75)
print(test_data["Source"].value_counts())
Source = pd.get_dummies(test_data["Source"], drop_first= True)
print()
print("Destination")
print("-"*75)
print(test_data["Destination"].value_counts())
Destination = pd.get_dummies(test_data["Destination"], drop_first = True)
# Additional_Info contains almost 80% no_info
# Route and Total_Stops are related to each other
test_data.drop(["Route", "Additional_Info"], axis = 1, inplace = True)
# Replacing Total_Stops
test_data.replace({"non-stop": 0, "1 stop": 1, "2 stops": 2, "3 stops": 3, "4 stops": 4}, inplace = True)
# Concatenate dataframe --> test_data + Airline + Source + Destination
data_test = pd.concat([test_data, Airline, Source, Destination], axis = 1)
data_test.drop(["Airline", "Source", "Destination"], axis = 1, inplace = True)
print()
print()
print("Shape of test data : ", data_test.shape)
data_test.head()
data_test.shape
###Output
_____no_output_____
###Markdown
Feature selectionFinding out the best features, i.e. those that contribute most and have a strong relation with the target variable. Some of the feature selection methods:* heatmap* feature_importance* SelectKBest
###Code
data_train.shape
data_train.columns
X = data_train.drop(["Price"],axis =1)
X.head()
y = data_train.iloc[:,1]
y.head()
# Find the correlation between the independent and dependent attributes
plt.figure(figsize = (15,15))
sns.heatmap(train_data.corr(),annot = True,cmap = "RdYlGn")
plt.show()
# Important feature using ExtraTreesRegressor
from sklearn.ensemble import ExtraTreesRegressor
selection = ExtraTreesRegressor()
selection.fit(X, y)
print(selection.feature_importances_)
#plot graph of feature importances for better visualization
plt.figure(figsize = (15,10))
feat_importances = pd.Series(selection.feature_importances_, index=X.columns)
feat_importances.nlargest(20).plot(kind='barh')
plt.show()
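# (Illustrative sketch) SelectKBest is the third method listed above but is not used in this
# notebook; assuming we want the 10 features with the highest univariate F-scores for the price:
from sklearn.feature_selection import SelectKBest, f_regression
selector = SelectKBest(score_func=f_regression, k=10)
selector.fit(X, y)
print(X.columns[selector.get_support()])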
###Output
_____no_output_____
###Markdown
Fitting model using Random Forest1.Split dataset into train and test set in order to predict w.r.t X_test2.Import the model3.Fit the model4.Predict w.r.t X_test5.In regression check RMSE score6.Plot the graph
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
from sklearn.ensemble import RandomForestRegressor
reg_rf = RandomForestRegressor()
reg_rf.fit(X_train,y_train)
y_pred = reg_rf.predict(X_test)
reg_rf.score(X_train,y_train)
reg_rf.score(X_test,y_test)
plt.scatter(y_test, y_pred, alpha = 0.5)
plt.xlabel("y_test")
plt.ylabel("y_pred")
plt.show()
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, y_pred))
print('MSE:', metrics.mean_squared_error(y_test, y_pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
# RMSE/(max(DV)-min(DV))
2090.5509/(max(y)-min(y))
metrics.r2_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
Hyperparameter tuningI have used RandomizedSearchCV (fast)
###Code
from sklearn.model_selection import RandomizedSearchCV
#Randomized Search CV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 100, stop = 1200, num = 12)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(5, 30, num = 6)]
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10, 15, 100]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 5, 10]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf}
# Random search of parameters, using 5 fold cross validation,
# search across 10 different parameter combinations (n_iter = 10)
rf_random = RandomizedSearchCV(estimator = reg_rf, param_distributions = random_grid,scoring='neg_mean_squared_error', n_iter = 10, cv = 5, verbose=2, random_state=42, n_jobs = 1)
rf_random.fit(X_train,y_train)
rf_random.best_params_
prediction = rf_random.predict(X_test)
plt.figure(figsize = (8,8))
plt.scatter(y_test, prediction, alpha = 0.5)
plt.xlabel("y_test")
plt.ylabel("y_pred")
plt.show()
print('MAE:', metrics.mean_absolute_error(y_test, prediction))
print('MSE:', metrics.mean_squared_error(y_test, prediction))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, prediction)))
###Output
MAE: 1165.749214901713
MSE: 4052746.115358068
RMSE: 2013.1433419799168
###Markdown
Save the model to reuse
###Code
import pickle
# open a file, where you want to store the data
file = open('flight_rf.pkl', 'wb')
# dump information to that file
pickle.dump(reg_rf, file)
model = open('flight_rf.pkl','rb')
forest = pickle.load(model)
y_prediction = forest.predict(X_test)
metrics.r2_score(y_test, y_prediction)
###Output
_____no_output_____ |
Ford-Analysis/Ford-Combined-Survey-Analysis.ipynb | ###Markdown
Survey Analysis Report Team FordSamantha Howard, Jyotsna Singh, Ishaan Pathak and Nick Mouaikel Sam's Analysis AbstractShort summary about the data and what it tells you about your project. Data inputIn this section include code that reads in the csv file
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
dist = pd.read_csv("2022_Project_distance_Matrix.csv")
dist.head()
# Team List
Teams =["ARFL","Argonne","Boeing","Delta Dental","Ford","Hope Village","Kellogg's","Neogen","Old Nation","Qside"]
# Broken up by team
#team = dist[dist['team'] == 0]
ARFL = dist[dist['ARFL'] == 0]
Argonne = dist[dist['Argonne'] == 0]
Boeing = dist[dist['Boeing'] == 0]
Delta = dist[dist['Delta Dental'] == 0]
Ford = dist[dist['Ford'] == 0] #6 members
Hope = dist[dist['Hope Village'] == 0]
Kellogg = dist[dist["Kellogg's"] == 0]
Neogen = dist[dist['Neogen'] == 0]
Old = dist[dist['Old Nation'] == 0]
Qside = dist[dist['Qside'] == 0] #2 members
###Output
_____no_output_____
###Markdown
Data CleaningIn this section provide code for converting the raw data into clean and usable data structures (if needed). While there are no NA values, there is a potential error in the collection of the data: team Qside had 2 entries and team Ford had 6, and neither is the correct number of members for that team.
###Code
#no missing entries
dist.isna().sum()
###Output
_____no_output_____
###Markdown
Data Modeling
###Code
corr = dist.corr()
corr
# bad, should change the groupby. Many "NA" values
summary = dist.groupby(Teams).size().unstack()
from scipy.spatial.distance import squareform
from scipy.spatial.distance import pdist
pairwise = pd.DataFrame(
squareform(pdist(summary)),
columns = summary.index,
index = summary.index)
#pairwise
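# A possible cleaner alternative (sketch; the names `team_means` and `pairwise_alt` are illustrative):
# average each team's responses into one row per team, then take pairwise Euclidean
# distances between those team-level vectors.
team_means = pd.DataFrame({t: dist[dist[t] == 0].mean() for t in Teams}).T
pairwise_alt = pd.DataFrame(squareform(pdist(team_means)),
                            columns=team_means.index, index=team_means.index)
#pairwise_alt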
###Output
_____no_output_____
###Markdown
Data VisualizationThis section makes some graphs visualizing your results. A distance matrix and/or network graph may be cool. Think through the best way to show what you learned. Here, using Seaborn, it isn't a distance matrix, but I thought the visualization could be interesting. I was curious whether it could give a quick glance at the general likeness of the projects to one another. I would not recommend this model. The easiest way to understand this visualization is that the paler the color, the more similar the two groups are; the darker the blue/red, the more different they are.
###Code
mask = np.triu(np.ones_like(corr, dtype=bool))
cmap = sns.diverging_palette(230, 20, as_cmap=True)
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
#closer to white is more similar to each other. Stronger color is less similar.
###Output
_____no_output_____
###Markdown
Conclusion
This should be similar to the abstract but with more details. What can you conclude about your project from this data?

Sources
[Pairwise Distance Matrix Tutorial](https://drawingfromdata.com/pandas/clustering/making-a-pairwise-distance-matrix-in-pandas.html)
[Seaborn Correlation](https://seaborn.pydata.org/examples/many_pairwise_correlations.html)

Jyotsna's Analysis
###Code
data = pd.read_csv("2022_Project_distance_Matrix.csv")
###Output
_____no_output_____
###Markdown
Disagreement

The heatmap below shows how much the people on a team agreed or disagreed on which projects were similar to their own. Darker colors mean higher disagreement and lighter colors mean lower disagreement.
###Code
temp = {}
for a in data.columns.values:
df = data[data[a] == 0]
#temp[a] = np.array(df.sum(0)/df.shape[0])
temp[a] = df.std(axis=0)
prob_matrix = pd.DataFrame(temp).T.fillna(0)
plt.figure(figsize=(12,10))
sns.heatmap(prob_matrix, cmap="Blues")
plt.title("DISAGREEMENT HEATMAP", fontsize = 20);
plt.xticks(fontsize=12);
plt.yticks(fontsize=12, rotation=360);
###Output
_____no_output_____
###Markdown
Ishaan's Analysis
###Code
import numpy as np
import scipy as sc
from scipy.spatial import distance_matrix
import pandas as pd
import seaborn as sns
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist
from scipy.spatial.distance import squareform
data = pd.read_csv('2022_Project_distance_Matrix.csv')
ARFL = data[data['ARFL'] == 0]
ARFL
team_wise_data = pd.DataFrame(data=[], columns=data.columns)
for i in data.columns:
temp_team = data[data[i] == 0]
team_wise_data = pd.concat([team_wise_data, temp_team])
team_wise_data.head()
pairwise_top = pd.DataFrame(squareform(pdist(team_wise_data, metric='cosine')),
columns = team_wise_data.index,
index = team_wise_data.index)
plt.figure(figsize=(15,15))
sns.heatmap(pairwise_top, cmap='OrRd', linewidth=1)
###Output
_____no_output_____
###Markdown
Nick's Analysis
###Code
import pandas as pd
import numpy as np
from scipy.spatial import distance_matrix
import sympy as sym
import seaborn as sns
import matplotlib.pyplot as plt
data = pd.read_csv('2022_Project_distance_Matrix.csv',header='infer')
data.head()
dist_mat = distance_matrix(data.T,data.T,p=2)
fig, ax = plt.subplots(figsize = (15,15))
ax = sns.heatmap(dist_mat,annot=True)
np.dot(data.T,data)
###Output
_____no_output_____ |
practices/MLP_Titanic.ipynb | ###Markdown
Data Preprocessing
###Code
import numpy
import pandas as pd
from sklearn import preprocessing
numpy.random.seed(10)
filepath = "data/titanic3.xls"
all_df = pd.read_excel(filepath)
cols = ['survived', 'name', 'pclass', 'sex', 'age', 'sibsp', 'parch', 'fare', 'embarked']
all_df = all_df[cols]
msk = numpy.random.rand(len(all_df)) < 0.8
train_df = all_df[msk]
test_df = all_df[~msk]
print('total', len(all_df),
      'train', len(train_df),
'test', len(test_df))
def PreprocessData(raw_df):
df = raw_df.drop(['name'], axis=1)
age_mean = df['age'].mean()
df['age'] = df['age'].fillna(age_mean)
fare_mean = df['fare'].mean()
df['fare'] = df['fare'].fillna(fare_mean)
df['sex'] = df['sex'].map({'female': 0, 'male': 1}).astype(int)
x_OneHot_df = pd.get_dummies(data=df, columns=["embarked"])
ndarray = x_OneHot_df.values
Features = ndarray[:,1:]
Label = ndarray[:,0]
minmax_scale = preprocessing.MinMaxScaler(feature_range=(0,1))
scaleFeatures = minmax_scale.fit_transform(Features)
return scaleFeatures, Label
train_Features, train_Label = PreprocessData(train_df)
test_Features, test_Label = PreprocessData(test_df)
###Output
_____no_output_____
###Markdown
Build the Model
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout
model = Sequential()
model.add(Dense(units=40,
input_dim=9,
kernel_initializer='uniform',
activation='relu'))
model.add(Dense(units=30,
kernel_initializer='uniform',
activation='relu'))
model.add(Dense(units=1,
kernel_initializer='uniform',
activation='sigmoid'))
###Output
_____no_output_____
###Markdown
Start Training
###Code
model.compile(loss='binary_crossentropy',
optimizer = 'adam',
metrics = ['accuracy'])
train_history = model.fit(x=train_Features,
y=train_Label,
validation_split=0.1,
epochs=30,
batch_size=30,
verbose=2)
import matplotlib.pyplot as plt
def show_train_history(train_history,
train,
validation):
plt.plot(train_history.history[train])
plt.plot(train_history.history[validation])
plt.title('Train History')
plt.ylabel(train)
plt.xlabel('Epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
show_train_history(train_history, 'acc', 'val_acc')
show_train_history(train_history, 'loss', 'val_loss')
###Output
_____no_output_____
###Markdown
Evaluate Model Accuracy
###Code
scores = model.evaluate(x=test_Features,
y=test_Label)
scores[1]
###Output
_____no_output_____
###Markdown
Add data for Jack and Rose from the movie Titanic
###Code
Jack = pd.Series([0, 'Jack', 3, 'male', 23, 1, 0, 5.0000, 'S'])
Rose = pd.Series([1, 'Rose', 1, 'female', 20, 1, 0, 100.0000, 'S'])
JR_df = pd.DataFrame([list(Jack), list(Rose)],
columns=['survived', 'name', 'pclass', 'sex',
'age', 'sibsp', 'parch', 'fare', 'embarked'])
all_df = pd.concat([all_df, JR_df])
all_df[-2:]
###Output
_____no_output_____
###Markdown
Make Predictions
###Code
all_Features, Label = PreprocessData(all_df)
all_probability = model.predict(all_Features)
all_probability[:10]
# use a separate name so the pandas alias `pd` is not shadowed
result_df = all_df
result_df.insert(len(all_df.columns),
                 'probability',
                 all_probability)
result_df[-2:]
result_df[(result_df['survived'] == 0) & (result_df['probability'] > 0.9)]
###Output
_____no_output_____
###Markdown
Find the touching stories behind the Titanic
###Code
result_df[:5]
###Output
_____no_output_____ |
github_scraping_selenium.ipynb | ###Markdown
* id : input#login_field
* pwd : input#password
* Sign in : input[type="submit"]
###Code
import time
time.sleep(3)
# ID
browser.find_element_by_css_selector('input#login_field').send_keys('[email protected]')
# PWD
browser.find_element_by_css_selector('input#password').send_keys('chlgustn12!@!@')
# submit (selector from the markdown cell above)
browser.find_element_by_css_selector('input[type="submit"]').click()
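# A more robust pattern than the fixed time.sleep(3) above (illustrative sketch,
# not executed here): wait explicitly until the login field is present.
# from selenium.webdriver.common.by import By
# from selenium.webdriver.support.ui import WebDriverWait
# from selenium.webdriver.support import expected_conditions as EC
# WebDriverWait(browser, 10).until(
#     EC.presence_of_element_located((By.CSS_SELECTOR, 'input#login_field')))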
###Output
_____no_output_____ |
8-Labs/Z-Spring2021/Lab20/.ipynb_checkpoints/Lab20_Dev-checkpoint.ipynb | ###Markdown
Laboratory 20: "Confidence in Linear Regression" or "The Exclusive Guide to Trial by Combat in Westeros"
###Code
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
###Output
DESKTOP-EH6HD63
desktop-eh6hd63\farha
C:\Users\Farha\Anaconda3\python.exe
3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
###Markdown
Full name:
R#:
Title of the notebook:
Date:

Last week, we talked about linear regression ...

- __What is linear regression?__ A basic predictive analytics technique that uses historical data to predict an output variable.
- __Why do we need linear regression?__ To explore the relationship between predictor and output variables, and to predict the output variable based on known values of the predictors.
- __How does linear regression work?__ To estimate Y using linear regression, we assume the equation: 𝑌𝑒=β𝑋+α. Our goal is to find statistically significant values of the parameters α and β that minimise the difference between Y and Yₑ. If we are able to determine the optimum values of these two parameters, then we will have the line of best fit that we can use to predict the values of Y, given the value of X.
- __How to estimate the coefficients?__ We have used the "Ordinary Least Squares (OLS)" and "Maximum Likelihood Estimation (MLE)" methods. We can get formulas for the slope and intercept of the line of best fit from each method. Once we have the equation of the line of best fit, we can use it to fit a line, assess the quality of the fit, and make predictions.
- __How to assess the fit?__ We have used graphs and visual assessments to describe the fits, identify regions with more and fewer errors, and decide whether the fit is trustworthy or not. We also use "Goodness-of-Fit (GOF)" metrics to describe the errors and performance of the linear models.
- __How confident are we with a prediction?__ By definition, the prediction of a linear regression model is an estimate or an approximation and contains some uncertainty. The uncertainty comes from the errors in the model itself and noise in the input data. The model is an approximation of the relationship between the input variables and the output variables. The model error can be decomposed into three sources of error: the variance of the model, the bias of the model, and the variance of the irreducible error (the noise) in the data. Error(Model) = Variance(Model) + Bias(Model) + Variance(Irreducible Error)

That time in Westeros...

Before going any further, let's assume that you were arrested by the king's guard, as you were minding your business in the streets of King's Landing, for the crime of planning the murder of King Joffrey Baratheon. As much as you hate King Joffrey, you had no plans to kill him, but no one believes you. In the absence of witnesses or a confession, you demand trial by combat. But they inform you that the Germanic law to settle accusations is no longer used and has been replaced with a new method. You get to choose a bowman. That bowman will make 3 shots for you, and if he hits the bullseye you will walk a free man. Otherwise, you will be hanged.

You have two options. The first bowman is Horace. He is known as one of the greatest target archers of all time. He is old, though, and due to the lack of an efficient social security system in Westeros, he has to work as a hired bowman for the high court to earn a living. You ask around and hear that he can still shoot a bullseye, but as his hands shake, he sometimes misses by a lot. The second archer is Daryl. He is also a well-known archer, but unfortunately he has a drinking problem. You have heard that there have been cases where he has shot the bullseye with all three of his shots, and cases where he has completely missed it. The thing about him is that his three shots are always very close together. Now, you get to pick.
Between Horace and Daryl, who would you choose to shoot for your freedom?

- __Bias, Variance, and the bowman dilemma!__ We used the example above to give you an initial understanding of bias and variance and their impact on a model's performance. Given that this is a complicated and yet important aspect of data modeling and machine learning, we will discuss these concepts without getting into too much detail. Bias reflects how close the functional form of the model can get to the true relationship between the predictors and the outcome. Variance refers to the amount by which [the model] would change if we estimated it using a different training data set. Looking at the picture above, Horace was an archer with high variance and low bias, while Daryl had high bias and low variability. In an ideal world, we want low bias and low variance, which we cannot have. A high bias error results in a very simplistic model that does not capture the variations very well; since it does not learn the training data very well, this is called underfitting. When the model has high variance, it will still treat the noise as something to learn from. That is, the model learns the noise from the training data, hence when confronted with new (testing) data, it is unable to predict accurately. Since in the case of high variance the model learns too much from the training data, this is called overfitting. To summarise: a model with a high bias error underfits the data and makes very simplistic assumptions about it; a model with a high variance error overfits the data and learns too much from it; and a good model is one where both the bias and variance errors are balanced. The balance between the bias error and the variance error is the Bias-Variance Tradeoff. The irreducible error is the error that we cannot remove with our model, or with any model. This error is caused by elements outside our control, such as statistical noise in the observations. A model with low bias and high variance generally predicts points that are around the center, but pretty far away from each other (Horace). A model with high bias and low variance is pretty far away from the bull's eye, but since the variance is low, the predicted points are closer to each other (Daryl). Bias and variance play an important role in deciding which predictive model to use: something that you will definitely learn more about if you go further in the field of machine learning and predictive models.
- __How can we measure bias and variance?__ There are GOF metrics that can measure the bias and variance of a model: for example, the Nash–Sutcliffe model efficiency coefficient (NSE) and the Kling-Gupta Efficiency (KGE). The Nash–Sutcliffe efficiency is calculated as one minus the ratio of the error variance of the modeled time-series to the variance of the observed time-series. In the situation of a perfect model with an estimation error variance equal to zero, the resulting Nash-Sutcliffe Efficiency equals 1 (NSE = 1). KGE provides a diagnostically interesting decomposition of the Nash-Sutcliffe efficiency (and hence MSE), which facilitates the analysis of the relative importance of its different components (correlation, bias and variability).

Example 1: Let's have a look at our good old example of TV, Radio, and Newspaper advertisements and the number of sales for a specific product!
Let's say that we are interested in comparing the performance of the linear models that use TV spending and Radio spending as their predictor variables, in terms of accuracy, bias, and variability.
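Before fitting the models, here is a minimal NumPy sketch of the Nash–Sutcliffe efficiency (NSE) used below; it is written only for illustration, and the analysis itself relies on the `hydroeval` package.
###Code
# Minimal illustrative sketch of NSE (the analysis below uses hydroeval's evaluator instead)
import numpy as np

def nse_sketch(predicted, observed):
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    # 1 minus (error variance of the model) / (variance of the observations)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)
###Output
_____no_output_____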
###Code
import numpy as np
import pandas as pd
import statistics
import scipy.stats
from matplotlib import pyplot as plt
import statsmodels.formula.api as smf
import sklearn.metrics as metrics
# Import and display first rows of the advertising dataset
df = pd.read_csv('advertising.csv')
tv = np.array(df['TV'])
radio = np.array(df['Radio'])
newspaper = np.array(df['Newspaper'])
sales = np.array(df['Sales'])
# Initialise and fit linear regression model using `statsmodels`
# TV Spending as predictor
model_tv = smf.ols('Sales ~ TV', data=df)
model_tv = model_tv.fit()
TV_pred = model_tv.predict()
# Radio Spending as predictor
model_rd = smf.ols('Sales ~ Radio', data=df)
model_rd = model_rd.fit()
RD_pred = model_rd.predict()
print("RMSE for TV ad spendings as predictor is ",np.sqrt(metrics.mean_squared_error(sales, TV_pred)))
print("RMSE for Radio ad spendings as predictor is ",np.sqrt(metrics.mean_squared_error(sales, RD_pred)))
print("R2 for TV ad spendings as predictor is ",metrics.r2_score(sales, TV_pred))
print("R2 for Radio ad spendings as predictor is ",metrics.r2_score(sales, RD_pred))
from scipy.stats import pearsonr
tv_r = pearsonr(TV_pred, sales)
rd_r = pearsonr(RD_pred, sales)
print("Pearson's r for TV ad spendings as predictor is ",tv_r[0])
print("Pearson's for Radio ad spendings as predictor is ",rd_r[0])
from hydroeval import * #Notice this importing method
tv_nse = evaluator(nse, TV_pred, sales)
rd_nse = evaluator(nse, RD_pred, sales)
print("NSE for TV ad spendings as predictor is ",tv_nse)
print("NSE for Radio ad spendings as predictor is ",rd_nse)
tv_kge = evaluator(kgeprime, TV_pred, sales)
rd_kge = evaluator(kgeprime, RD_pred, sales)
print("KGE for TV ad spendings as predictor is ",tv_kge)
print("KGE for Radio ad spendings as predictor is ",rd_kge)
#KGE: Kling-Gupta efficiencies range from -Inf to 1. Essentially, the closer to 1, the more accurate the model is.
#r: the Pearson product-moment correlation coefficient. Ideal value is r=1
#Gamma: the ratio between the coefficient of variation (CV) of the simulated values to
#the coefficient of variation of the observed ones. Ideal value is Gamma=1
#Beta: the ratio between the mean of the simulated values and the mean of the observed ones. Ideal value is Beta=1
###Output
KGE for TV ad spendings as predictor is [[0.69201883]
[0.78222442]
[0.78222442]
[1. ]]
KGE for Radio ad spendings as predictor is [[0.40068822]
[0.57622257]
[0.57622257]
[1. ]]
###Markdown
- __How confident are we with our linear regression model?__ The 95% confidence interval for the forecasted values ŷ of x is given at the end of this cell. It means that there is a 95% probability that the true linear regression line of the population will lie within the confidence interval of the regression line calculated from the sample data. In the graph on the left of Figure 1, a linear regression line is calculated to fit the sample data points. The confidence interval consists of the space between the two curves (dotted lines). Thus there is a 95% probability that the true best-fit line for the population lies within the confidence interval (e.g. any of the lines in the figure on the right above). There is also a concept called a prediction interval. Here we look at any specific value of x, x0, and find an interval around the predicted value ŷ0 for x0 such that there is a 95% probability that the real value of y (in the population) corresponding to x0 is within this interval (see the graph on the right side). The 95% prediction interval of the forecasted value ŷ0 for x0, and the standard error of the prediction, are also given at the end of this cell. For any specific value x0, the prediction interval is more meaningful than the confidence interval.

Example 2: Let's work on another familiar example. We had a table of recorded times and speeds from some experimental observations:

|Elapsed Time (s)|Speed (m/s)|
|---:|---:|
|0 |0|
|1.0 |3|
|2.0 |7|
|3.0 |12|
|4.0 |20|
|5.0 |30|
|6.0 | 45.6|
|7.0 | 60.3 |
|8.0 | 77.7 |
|9.0 | 97.3 |
|10.0| 121.1|

This time we want to explore the confidence and prediction intervals for our linear regression model:
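(For reference, since the formula images above did not survive conversion, a standard textbook form of the two intervals is given here; the exact notation in the original figures may differ.)

$$\hat{y}_0 \pm t_{\alpha/2,\,n-2}\; s_{res}\sqrt{\frac{1}{n} + \frac{(x_0-\bar{x})^2}{\sum_i (x_i-\bar{x})^2}} \quad \text{(confidence interval for the mean response)}$$

$$\hat{y}_0 \pm t_{\alpha/2,\,n-2}\; s_{res}\sqrt{1 + \frac{1}{n} + \frac{(x_0-\bar{x})^2}{\sum_i (x_i-\bar{x})^2}} \quad \text{(prediction interval for a new observation)}$$

where $s_{res}$ is the residual standard error of the fitted regression.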
###Code
time = [0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
speed = [0, 3, 7, 12, 20, 30, 45.6, 60.3, 77.7, 97.3, 121.2]
x = np.array(time)
Y = np.array(speed)
#We already know these parameters from last week but let's assume that we don't!
# alpha = -16.78636363636364
# beta = 11.977272727272727
#Our linear model: ypred = alpha + beta * x
import statsmodels.api as sm #needed for linear regression
from statsmodels.sandbox.regression.predstd import wls_prediction_std #needed to get prediction interval
X = sm.add_constant(x)
re = sm.OLS(Y, X).fit()
print(re.summary())
print(re.params)
prstd, iv_l, iv_u = wls_prediction_std(re) #iv_l and iv_u give you the limits of the prediction interval for each point.
print(iv_l)
print(iv_u)
from statsmodels.stats.outliers_influence import summary_table
st, data, ss2 = summary_table(re, alpha=0.05)
fittedvalues = data[:, 2]
predict_mean_se = data[:, 3]
predict_mean_ci_low, predict_mean_ci_upp = data[:, 4:6].T
predict_ci_low, predict_ci_upp = data[:, 6:8].T
plt.plot(x, Y, 'o')
plt.plot(x, fittedvalues, '-',color='red', lw=2)
#plt.plot(x, predict_ci_low, '--', color='green',lw=2) #Lower prediction band
#plt.plot(x, predict_ci_upp, '--', color='green',lw=2) #Upper prediction band
#plt.plot(x, predict_mean_ci_low,'--', color='orange', lw=2) #Lower confidence band
#plt.plot(x, predict_mean_ci_upp,'--', color='orange', lw=2) #Upper confidence band
plt.show()
###Output
_____no_output_____ |
notebook/2019-04-16_evolution.ipynb | ###Markdown
Parse New Gene Table

**from:** Maria D. Vibranovski

Here attached is a list from Yong Zhang group based on our paper from 2010. But this is a still not published updated version that he shared with me but you can use.

If you need details about the columns, please look at https://genome.cshlp.org/content/suppl/2010/08/27/gr.107334.110.DC1/SupplementalMaterial.pdf table 2a.

But mainly, what you need to select is the child genes with:

gene_type = D or R or DL or RL
m_type = M
note that contains "chrX-"

D and R stands for DNA-based Duplication and RNA-based duplication
L means that the assignment of the parental genes is less reliable.
M indicates that is between chromosome movement.

Hope it helps. If you need I can parse for you. please, do not hesitate to ask. But I thought you would prefer a complete list where you can look at subsets.

cheers
Maria
###Code
import os
import sys
from pathlib import Path
import re
from IPython.display import display, HTML, Markdown
import numpy as np
import pandas as pd
from scipy.stats import fisher_exact, chi2_contingency
from scipy.stats.contingency import margins
import statsmodels.formula.api as smf
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
# Project level imports
sys.path.insert(0, '../lib')
from larval_gonad.notebook import Nb
from larval_gonad.plotting import make_figs
from larval_gonad.config import memory
# Setup notebook
nbconfig = Nb.setup_notebook(seurat_dir='../output/scrnaseq-wf/scrnaseq_combine_force')
def adjusted_residuals(observed, expected):
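    # Adjusted (standardized) residuals for a contingency table: the raw
    # (observed - expected) residuals rescaled by each cell's estimated
    # standard error, so values are roughly comparable to z-scores.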
resid = (observed - expected) / np.sqrt(expected)
n = observed.sum().sum()
rsum, csum = margins(observed)
v = csum * rsum * (n - rsum) * (n - csum) / n**3
return (observed - expected) / np.sqrt(v)
###Output
_____no_output_____
###Markdown
Import data from Maria

FBgn sanitizer

I don't know where these FBgns are from, so I need to sanitize them to my current annotation.
###Code
assembly = nbconfig.assembly
tag = nbconfig.tag
pth = Path(os.environ['REFERENCES_DIR'], f'{assembly}/{tag}/fb_annotation/{assembly}_{tag}.fb_annotation')
# Create an FBgn
mapper = {}
for record in pd.read_csv(pth, sep='\t').to_records():
mapper[record.primary_FBgn] = record.primary_FBgn
try:
for g in record.secondary_FBgn.split(','):
mapper[g] = record.primary_FBgn
except AttributeError:
pass
autosomes = ['chr2L', 'chr2R', 'chr3L', 'chr3R']
movement = (
pd.read_excel('../data/external/maria/dm6_ver78_genetype.new.xlsx')
.query('gene_type == ["D", "R", "Dl", "Rl"] and m_type == "M"')
.assign(child_chrom = lambda df: df.note.str.extract('(chr.*?)-'))
.assign(parent_chrom = lambda df: df.note.str.extract('-(chr.*?)[:;]'))
.assign(FBgn = lambda df: df.child_id.map(mapper))
.assign(parent_FBgn = lambda df: df.parent_id.map(mapper))
.drop(['child_id', 'parent_id', 'note', 'm_type'], axis=1)
.dropna()
.set_index('FBgn')
.assign(moved_x_to_a = lambda df: (df.parent_chrom == 'chrX') & df.child_chrom.isin(autosomes))
.assign(moved_a_to_a = lambda df: df.parent_chrom.isin(autosomes) & df.child_chrom.isin(autosomes))
.assign(moved_a_to_x = lambda df: df.parent_chrom.isin(autosomes) & (df.child_chrom == 'chrX'))
.query('moved_x_to_a | moved_a_to_a | moved_a_to_x')
)
movement.head()
biomarkers = (
nbconfig.seurat.get_biomarkers('res.0.6')
.cluster.map(nbconfig.short_cluster_annot)
.pipe(lambda x: x[x != 'UNK'])
.to_frame()
.reset_index()
.groupby('FBgn')
.apply(lambda x: '|'.join(x.cluster))
.rename('biomakrer_cluster')
)
germ_comp = (
pd.read_csv('../output/scrnaseq-wf/germcell_deg/gonia_vs_mid.tsv', sep='\t')
.assign(FBgn = lambda df: df.primary_FBgn)
.assign(SP = lambda df: df.avg_logFC > 0)
.assign(M1 = lambda df: df.avg_logFC < 0)
.set_index('FBgn')
.loc[:, ['SP', 'M1']]
.idxmax(axis=1)
.rename('bias_gonia_vs_mid')
)
biomarkers.head()
df = (
movement.join(biomarkers, how='left')
.join(germ_comp.rename('bias_gonia_vs_mid_child'), how='left')
.join(germ_comp.rename('bias_gonia_vs_mid_parent'), on='parent_FBgn', how='left')
)
tpm = (
pd.read_parquet('../output/scrnaseq-wf/tpm.parquet')
.assign(cluster = lambda df: df.cluster.map(nbconfig.short_cluster_annot))
.pivot_table(index='FBgn', columns='cluster', values='TPM')
.loc[:, ['SP', 'M1º']]
)
df = (
df
.join(tpm['SP'].rename('SP_child'), how='left')
.join(tpm['M1º'].rename('M1_child'), how='left')
.join(tpm['SP'].rename('SP_parent'), on='parent_FBgn', how='left')
.join(tpm['M1º'].rename('M1_parent'), on='parent_FBgn', how='left')
)
out_order = [
'child_chrom',
'parent_chrom',
'parent_FBgn',
'gene_type',
'moved_x_to_a',
'moved_a_to_a',
'moved_a_to_x',
'biomakrer_cluster',
'bias_gonia_vs_mid_child',
'bias_gonia_vs_mid_parent',
'SP_child',
'M1_child',
'SP_parent',
'M1_parent'
]
df.reindex(columns=out_order).reset_index().rename({'FBgn': 'child_FBgn'}, axis=1).fillna('nan').to_csv('../output/notebook/2019-04-16_movement_data.csv', index=None)
###Output
_____no_output_____ |
experiments/detection_tradeoff_curves.ipynb | ###Markdown
Independence Detection Trade-off Curves

Information and Decision Systems Group
University of Chile

Implementation of the trade-off curves of independence detection presented by [Gonzalez et al. (2021)](https://arxiv.org/pdf/2110.14122.pdf).
###Code
import sys
import time
import numpy as np
import pandas as pd
import multiprocessing as mp
import matplotlib.pyplot as plt
from itertools import repeat
sys.path.insert(1, '../src/build')
from TSP import TSP
sys.path.insert(1, './utils')
from distributions import *
from partition_tests import *
NUM_WORKERS = 4
# Utils
def tsp_experiment(iteration, tsp_params, dist, dist_params):
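    # One Monte Carlo iteration: draw a sample from `dist`, then for each
    # regularization value in `lambdas` grow a TSP tree and record the EMI
    # statistic at every sample size in the global `samples` schedule.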
l_bn, w_bn, lambdas = tsp_params
dim, corr, n_samples = dist_params
_, X, Y, _ = dist(dim, corr, n_samples)
emis = []
for l in lambdas:
l_sizes = []
l_emis = []
tsp = TSP(l_bn, w_bn, l)
for n in samples:
tsp.grow(np.copy(X[:n], order='F'), np.copy(Y[:n], order='F'))
if l != 0:
tsp.regularize()
l_emis.append(tsp.emi())
emis.append(l_emis)
return emis
def partition_test_experiment(iteration, test, dist, test_params, dist_params):
p, C_values = test_params
dim, corr, n_samples = dist_params
_, X, Y, _ = dist(dim, corr, n_samples)
sample_res = []
for n in samples:
m = int(n ** p)
partition_test = test(n_partitions_x=m, n_partitions_y=m)
test_results = []
for C in C_values:
partition_test.fit(X[:n], Y[:n])
test_results.append(partition_test.strongly_consistent_test(C=C))
sample_res.append(test_results)
return sample_res
# Number of samples per step
samples = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48,
49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71,
72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94,
95, 96, 97, 98, 99, 100, 101, 107, 114, 120, 127, 135, 143, 151, 160, 169, 179, 190, 201,
213, 225, 239, 253, 268, 283, 300, 318, 336, 356, 377, 400, 423, 448, 475, 503, 532, 564,
597, 632, 670, 709, 751, 796, 843, 893, 946, 1001, 1061, 1124, 1190, 1260, 1335, 1414, 1498,
1586, 1680, 1780, 1885, 1997, 2115, 2240, 2373, 2513, 2662, 2820, 2987, 3164, 3351, 3550,
3760, 3983, 4218, 4468, 4733, 5013, 5310, 5625, 5958, 6311, 6685, 7081, 7500, 7945, 8415,
8914, 9442, 10001])
step_to_samples = dict(zip(range(1, len(samples) + 1), samples))
n_samples = samples[-1]
plt.plot(samples)
plt.title('Experiments samples')
plt.xlabel('Step')
plt.ylabel('Samples')
plt.grid(axis='y')
plt.show()
###Output
_____no_output_____
###Markdown
TSP Independence Test
###Code
# TSP parameters
l_bn = 0.001
w_bn = 0.1
lambdas = [1e-05, 1.38e-05, 1.75e-05, 2.13e-05, 2.5e-05, 3e-05, 3.5e-05, 4e-05, 4.5e-05, 5e-05,
7.5e-05, 0.0001, 0.00015, 0.0002, 0.00025, 0.0003, 0.00035, 0.0004, 0.00045, 0.0005]
# Experimental setting
dist = gaussian_dist
dim = 1
n_iterations = 1000
correlations = [0, 0.1, 0.3, 0.5, 0.7, 0.9]
# Experiments (parallelized)
results = []
for corr in correlations:
tsp_params = [l_bn, w_bn, lambdas]
dist_params = [dim, corr, n_samples]
pool = mp.Pool(NUM_WORKERS)
start_time = time.time()
res = pool.starmap_async(tsp_experiment, list(zip(range(n_iterations),
repeat(tsp_params),
repeat(dist),
repeat(dist_params))))
pool.close()
pool.join()
end_time = time.time()
results.append(np.array(res.get()))
print("Correlation: {:.1f} | Elapsed Time: {:.2f} [min]".format(corr, (end_time - start_time) / 60))
# Decreasing polynomial threshold
K = 0.5
p = 1.0
threshold = (K * samples ** -p)[:results[0].shape[-1]]
# Independence detection
results_table = [lambdas]
for l in range(len(correlations)):
corr = correlations[l]
data = results[l]
results_corr = []
for i in range(data.shape[1]):
detection_times = []
for k in range(data.shape[0]):
# Decision rule
if (corr > 0):
phi = np.where(threshold < data[k,i,:], 0, 1)
else:
phi = np.where(threshold > data[k,i,:], 0, 1)
# Detection time
aux = np.cumsum(np.flip(phi))
if (aux > 0).any():
detec_time = data.shape[-1] - np.where(aux > 0)[0][0]
detection_times += [detec_time]
else:
detection_times += [1]
# Percentile 95 of detection time distribution
results_corr.append(np.percentile(detection_times, 95, interpolation='higher'))
results_table.append(results_corr)
# Step to samples
pd.set_option('display.float_format', '{:.7f}'.format)
df_tsp = pd.DataFrame(results_table)
df_tsp = df_tsp.transpose()
df_tsp[df_tsp.columns[1:]] = df_tsp[df_tsp.columns[1:]].applymap(np.int64)
df_tsp[df_tsp.columns[1:]] = df_tsp[df_tsp.columns[1:]].replace(step_to_samples)
df_tsp.columns = [r'$\alpha$', r'$\sigma=0.0$', r'$\sigma=0.1$', r'$\sigma=0.3$',
r'$\sigma=0.5$', r'$\sigma=0.7$', r'$\sigma=0.9$']
df_tsp
###Output
_____no_output_____
###Markdown
$L_1$ Independence Test
###Code
# Test parameters
test = L1Test
p = 0.2
C_L1 = [1.2, 1.175, 1.15, 1.125, 1.1, 1.05, 1.0, 0.95, 0.9, 0.875,
0.85, 0.8, 0.775, 0.75, 0.7, 0.675]
# Experimental setting
dist = gaussian_dist
dim = 1
n_iterations = 1000
correlations = [0, 0.1, 0.3, 0.5, 0.7, 0.9]
# Experiments (parallelized)
results = []
for corr in correlations:
test_params = [p, C_L1]
dist_params = [dim, corr, n_samples]
pool = mp.Pool(NUM_WORKERS)
start_time = time.time()
res = pool.starmap_async(partition_test_experiment, list(zip(range(n_iterations),
repeat(test),
repeat(dist),
repeat(test_params),
repeat(dist_params))))
pool.close()
pool.join()
end_time = time.time()
results.append(np.array(res.get()))
print("Correlation: {:.1f} | Elapsed Time: {:.2f} [min]".format(corr, (end_time - start_time) / 60))
# Independence detection
results_table = [C_L1]
for l in range(len(correlations)):
corr = correlations[l]
data = results[l]
results_corr = []
for i in range(data.shape[2]):
detection_times = []
for k in range(data.shape[0]):
# Decision rule
if (corr > 0):
phi = np.where(data[k,:,i] <= 0, 0, 1)
else:
phi = np.where(data[k,:,i] > 0, 0, 1)
# Detection time
aux = np.cumsum(np.flip(phi))
if (aux > 0).any():
detec_time = data.shape[1] - np.where(aux > 0)[0][0]
detection_times += [detec_time]
else:
detection_times += [1]
# Percentile 95 of detection time distribution
results_corr.append(np.percentile(detection_times, 95, interpolation='higher'))
results_table.append(results_corr)
# Step to samples
pd.set_option('display.float_format', '{:.7f}'.format)
df_L1 = pd.DataFrame(results_table)
df_L1 = df_L1.transpose()
df_L1[df_L1.columns[1:]] = df_L1[df_L1.columns[1:]].applymap(np.int64)
df_L1[df_L1.columns[1:]] = df_L1[df_L1.columns[1:]].replace(step_to_samples)
df_L1.columns = ['C', r'$\rho=0.0$', r'$\rho=0.1$', r'$\rho=0.3$',
r'$\rho=0.5$', r'$\rho=0.7$', r'$\rho=0.9$']
df_L1
###Output
_____no_output_____
###Markdown
Log-likelihood Independence Test
###Code
# Test parameters
test = LogLikelihoodTest
p = 0.2
C_like = [0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.125, 0.15, 0.175,
0.2, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.3]
# Experimental setting
dist = gaussian_dist
dim = 1
n_iterations = 1000
correlations = [0, 0.1, 0.3, 0.5, 0.7, 0.9]
# Experiments (parallelized)
results = []
for corr in correlations:
test_params = [p, C_like]
dist_params = [dim, corr, n_samples]
pool = mp.Pool(NUM_WORKERS)
start_time = time.time()
res = pool.starmap_async(partition_test_experiment, list(zip(range(n_iterations),
repeat(test),
repeat(dist),
repeat(test_params),
repeat(dist_params))))
pool.close()
pool.join()
end_time = time.time()
results.append(np.array(res.get()))
print("Correlation: {:.1f} | Elapsed Time: {:.2f} [min]".format(corr, (end_time - start_time) / 60))
# Independence detection
results_table = [C_like]
for l in range(len(correlations)):
corr = correlations[l]
data = results[l]
results_corr = []
for i in range(data.shape[2]):
detection_times = []
for k in range(data.shape[0]):
# Decision rule
if (corr > 0):
phi = np.where(data[k,:,i] <= 0, 0, 1)
else:
phi = np.where(data[k,:,i] > 0, 0, 1)
# Detection time
aux = np.cumsum(np.flip(phi))
if (aux > 0).any():
detec_time = data.shape[1] - np.where(aux > 0)[0][0]
detection_times += [detec_time]
else:
detection_times += [1]
# Percentile 95 of detection time distribution
results_corr.append(np.percentile(detection_times, 95, interpolation='higher'))
results_table.append(results_corr)
# Step to samples
pd.set_option('display.float_format', '{:.7f}'.format)
df_like = pd.DataFrame(results_table)
df_like = df_like.transpose()
df_like[df_like.columns[1:]] = df_like[df_like.columns[1:]].applymap(np.int64)
df_like[df_like.columns[1:]] = df_like[df_like.columns[1:]].replace(step_to_samples)
df_like.columns = ['C', r'$\rho=0.0$', r'$\rho=0.1$', r'$\rho=0.3$',
r'$\rho=0.5$', r'$\rho=0.7$', r'$\rho=0.9$']
df_like
###Output
_____no_output_____
###Markdown
Pearson-$\chi^2$ Independence Test
###Code
# Test parameters
test = PearsonChiSquareTest
p = 0.25
C_pearson = [0.14, 0.1375, 0.135, 0.13, 0.125, 0.12, 0.115,
0.11, 0.1, 0.095, 0.09, 0.08, 0.07, 0.06, 0.05]
# Experimental setting
dist = gaussian_dist
dim = 1
n_iterations = 1000
correlations = [0, 0.1, 0.3, 0.5, 0.7, 0.9]
# Experiments (parallelized)
results = []
for corr in correlations:
test_params = [p, C_pearson]
dist_params = [dim, corr, n_samples]
pool = mp.Pool(NUM_WORKERS)
start_time = time.time()
res = pool.starmap_async(partition_test_experiment, list(zip(range(n_iterations),
repeat(test),
repeat(dist),
repeat(test_params),
repeat(dist_params))))
pool.close()
pool.join()
end_time = time.time()
results.append(np.array(res.get()))
print("Correlation: {:.1f} | Elapsed Time: {:.2f} [min]".format(corr, (end_time - start_time) / 60))
# Independence detection
results_table = [C_pearson]
for l in range(len(correlations)):
corr = correlations[l]
data = results[l]
results_corr = []
for i in range(data.shape[2]):
detection_times = []
for k in range(data.shape[0]):
# Decision rule
if (corr > 0):
phi = np.where(data[k,:,i] <= 0, 0, 1)
else:
phi = np.where(data[k,:,i] > 0, 0, 1)
# Detection time
aux = np.cumsum(np.flip(phi))
if (aux > 0).any():
detec_time = data.shape[1] - np.where(aux > 0)[0][0]
detection_times += [detec_time]
else:
detection_times += [1]
# Percentile 95 of detection time distribution
results_corr.append(np.percentile(detection_times, 95, interpolation='higher'))
results_table.append(results_corr)
# Step to samples
pd.set_option('display.float_format', '{:.7f}'.format)
df_pearson = pd.DataFrame(results_table)
df_pearson = df_pearson.transpose()
df_pearson[df_pearson.columns[1:]] = df_pearson[df_pearson.columns[1:]].applymap(np.int64)
df_pearson[df_pearson.columns[1:]] = df_pearson[df_pearson.columns[1:]].replace(step_to_samples)
df_pearson.columns = ['C', r'$\rho=0.0$', r'$\rho=0.1$', r'$\rho=0.3$',
r'$\rho=0.5$', r'$\rho=0.7$', r'$\rho=0.9$']
df_pearson
###Output
_____no_output_____
###Markdown
Independence Detection Trade-off Curves
###Code
rhos = [0.1, 0.3, 0.5, 0.7, 0.9]
tabs = [df_tsp, df_L1, df_like, df_pearson]
names = ['TSP', 'L1', 'log-likelihood', r'pearson-$\chi^2$']
plt.figure(figsize=(12,6.5))
for j in range(len(rhos)):
plt.subplot(2, 3, j+1)
for i in range(len(tabs)):
plt.plot(tabs[i][tabs[i].columns[1]].to_numpy(),
tabs[i][tabs[i].columns[j+2]].to_numpy(),
'.-', label=names[i], zorder=len(tabs)-i, ms=6, linewidth=1)
plt.xlabel(r'$M_0(\epsilon =0.05)$')
plt.ylabel(r'$M^\sigma_1(\epsilon =0.05)$'.format(rhos[j]))
plt.title(r'Correlation $\sigma={}$'.format(rhos[j]))
plt.xscale('log')
plt.yscale('log')
if j == 0:
plt.legend()
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
modules/Microsoft_Data/Intune/notebook/Intune_module_ingestion.ipynb | ###Markdown
OEA Connector: Intune Module Data Ingestion Notebook
###Code
%run /OEA_py
%run /Intune_py
# 0) Initialize the OEA framework and modules needed.
oea = OEA()
intune = Intune()
###Output
_____no_output_____
###Markdown
Ingest Data from Intune
For an understanding of the steps involved in the data processing, look into the "Intune_py" notebook, which creates the Intune class and its associated functions.
###Code
intune.ingest()
###Output
_____no_output_____ |
_posts/scikit/comparision-of-lda-pca-iris-dataset/comparison-of-LDA-and-PCA-2D-projection-of-Iris-dataset.ipynb | ###Markdown
The Iris dataset represents 3 kinds of Iris flowers (Setosa, Versicolour and Virginica) with 4 attributes: sepal length, sepal width, petal length and petal width.

Principal Component Analysis (PCA) applied to this data identifies the combination of attributes (principal components, or directions in the feature space) that account for the most variance in the data. Here we plot the different samples on the first 2 principal components.

Linear Discriminant Analysis (LDA) tries to identify attributes that account for the most variance between classes. In particular, LDA, in contrast to PCA, is a supervised method, using known class labels.

New to Plotly? Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/). You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online). We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!

Version
###Code
import sklearn
sklearn.__version__
###Output
_____no_output_____
###Markdown
Imports This tutorial imports [PCA](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.htmlsklearn.decomposition.PCA) and [LinearDiscriminantAnalysis](http://scikit-learn.org/stable/modules/generated/sklearn.discriminant_analysis.LinearDiscriminantAnalysis.htmlsklearn.discriminant_analysis.LinearDiscriminantAnalysis).
###Code
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
###Output
_____no_output_____
###Markdown
Calculations
###Code
iris = datasets.load_iris()
X = iris.data
y = iris.target
target_names = iris.target_names
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(X, y).transform(X)
# Percentage of variance explained for each components
print('explained variance ratio (first two components): %s'
% str(pca.explained_variance_ratio_))
colors = ['navy', 'turquoise', 'darkorange']
###Output
explained variance ratio (first two components): [ 0.92461621 0.05301557]
###Markdown
Plot Results
###Code
fig = tools.make_subplots(rows=1, cols=2,
subplot_titles=('PCA of IRIS dataset',
'LDA of IRIS dataset')
)
for color, i, target_name in zip(colors, [0, 1, 2], target_names):
pca = go.Scatter(x=X_r[y == i, 0],
y=X_r[y == i, 1],
mode='markers',
marker=dict(color=color),
name=target_name
)
fig.append_trace(pca, 1, 1)
for color, i, target_name in zip(colors, [0, 1, 2], target_names):
lda = go.Scatter(x=X_r2[y == i, 0],
y=X_r2[y == i, 1],
showlegend=False,
mode='markers',
marker=dict(color=color),
name=target_name
)
fig.append_trace(lda, 1, 2)
for i in map(str, range(1, 3)):
x = 'xaxis' + i
y = 'yaxis' + i
fig['layout'][x].update(zeroline=False, showgrid=False)
fig['layout'][y].update(zeroline=False, showgrid=False)
py.iplot(fig)
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'comparison-of-LDA-and-PCA-2D-projection-of-Iris-dataset.ipynb', 'scikit-learn/plot-pca-vs-lda/', 'Comparison of LDA and PCA 2D projection of Iris dataset | plotly',
' ',
title = 'Comparison of LDA and PCA 2D projection of Iris dataset | plotly',
name = 'Comparison of LDA and PCA 2D projection of Iris dataset',
has_thumbnail='true', thumbnail='thumbnail/pca-lda.jpg',
language='scikit-learn', page_type='example_index',
display_as='decomposition', order=3,
ipynb= '~Diksha_Gabha/2921')
###Output
_____no_output_____ |
5) Pattern Matching - Recursion.ipynb | ###Markdown
Loops Through Recursion

From [the Elixir docs](https://elixir-lang.org/getting-started/recursion.html#loops-through-recursion)
###Code
defmodule Recursion do
def print_multiple_times(msg, n) when n <= 1 do
IO.puts msg
end
def print_multiple_times(msg, n) do
IO.puts msg
print_multiple_times(msg, n - 1)
end
end
import Recursion, only: [print_multiple_times: 2]
print_multiple_times("Hello!", 3)
###Output
Hello!
Hello!
Hello!
###Markdown
Reduce and map algorithms
###Code
defmodule MathSum do
def sum_list([], accumulator) do
accumulator
end
def sum_list([head | tail], accumulator) do
sum_list(tail, head + accumulator)
end
def sum_list(xs), do: sum_list(xs, 0)
end
import MathSum, only: [sum_list: 1]
sum_list([1, 2, 3, 4])
###Output
_____no_output_____
###Markdown
---And map...
###Code
defmodule MathMap do
def double_each([head | tail]) do
[head * 2 | double_each(tail)]
end
def double_each([]) do
[]
end
end
import MathMap, only: [double_each: 1]
double_each([1, 2, 3, 4])
###Output
_____no_output_____
###Markdown
Of course... Reduce...
###Code
xs = [1, 2, 3, 4]
sum = fn(x, acc) -> x + acc end
Enum.reduce(xs, 0, sum)
Enum.reduce(xs, 0, fn(x, acc) -> x + acc end)
Enum.reduce(xs, 0, & +/2)
Enum.reduce(xs, 0, & &1 + &2)
###Output
_____no_output_____
###Markdown
---And `map`...
###Code
double = fn(x) -> x * 2 end
Enum.map(xs, double)
Enum.map(xs, fn(x) -> x * 2 end)
Enum.map(xs, & &1 * 2)
###Output
_____no_output_____
###Markdown
Incidentally, `map` is a special case of `reduce`
###Code
defmodule MapFromReduce do # sort of
def my_map([head | tail], f) do
[f.(head) | my_map(tail, f)]
end
def my_map([], _f) do
[]
end
end
import MapFromReduce, only: [my_map: 2]
my_map(xs, double)
my_map(xs, & &1 * 3)
###Output
_____no_output_____ |
examples/politicization/apply_politicization_tracker.ipynb | ###Markdown
Apply Politicization Tracker

Testing that the tracker works for both incidences and incidence rates.
###Code
# import required modules and set up environment
import os
# replace file path below with your own local convokit
os.chdir('/Users/marianneaubin/Documents/Classes/CS6742/cs6742-fork')
import convokit
from convokit import Corpus, Parser, PolTracker, Transformer
import nltk
# load corpus, this takes a long time so you can also use a different corpus as a test
corpus = convokit.Corpus(filename='../politics-filtered-corpus')
#corpus = convokit.Corpus(filename=convokit.download("iq2-corpus"))
corpus.print_summary_stats()
pt = PolTracker();
corpus = pt.transform(corpus)
counter = 1
for conv_id in corpus.conversations:
conv = corpus.get_conversation(conv_id)
for utt in conv.iter_utterances():
if utt.meta["num_pol_refs"] != 0:
print(utt.meta["num_pol_refs"])
print(utt.meta["pol_words"])
print(utt.meta["num_pol_refs_incidence"])
counter = counter + 1
if (counter % 10) == 0:
break;
if (counter % 100) == 0:
break;
utter_ids = corpus.get_utterance_ids()
utter_ids[7000]
pol_word_counter = 0
total_word_counter = 0
counter = 0
for x in range(0,3237456):
utt = corpus.get_utterance(utter_ids[x])
pol_word_counter = pol_word_counter + utt.meta['num_pol_refs']
total_word_counter = total_word_counter + utt.meta['num_pol_refs_incidence']
counter = counter + 1
if counter % 100000 == 0:
print(counter, " completed")
print("total # of words:", pol_word_counter)
print(total_word_counter)
counter = 0
pol_words = {}
for x in range(0, 3237456):
utt = corpus.get_utterance(utter_ids[x])
utt_pol_words = utt.meta['pol_words']
for y in utt_pol_words:
if (y not in pol_words.keys()):
pol_words[y] = 1
else:
pol_words[y] = pol_words[y] + 1
counter = counter + 1
if counter % 100000 == 0:
print(counter, " completed")
print(pol_words)
freqs = []
large_pol_words = []
for x in pol_words.keys():
if pol_words[x] > 5000:
freqs.append(pol_words[x])
large_pol_words.append(x)
freqs, large_pol_words = (list(x) for x in zip(*sorted(zip(freqs, large_pol_words))))
freqs.reverse()
large_pol_words.reverse()
print(freqs)
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.pyplot as plt
objects = large_pol_words
y_pos = np.arange(len(objects))
plt.bar(y_pos, freqs, align='center', alpha=0.5)
plt.xticks(y_pos, objects, rotation='vertical')
plt.ylabel('Frequency')
plt.title('Most commonly used political words in Corpus')
plt.show()
## ISSUE: right now, this only works for single words.
## will need to change the transformer so that it can also
## compute multiple words e.g. "bill of rights"
## read in csv of relevant utts for sandy hook
import csv
#with open('../sandyhook_utterance_ids.csv', 'r') as f:
with open('../auroratheater_utterance_ids.csv', 'r') as f:
reader = csv.reader(f)
sh_list = list(reader)
## recompute political freqs
pol_words_sh = {}
for x in sh_list:
utt = corpus.get_utterance(x[0])
utt_pol_words = utt.meta['pol_words']
for y in utt_pol_words:
if (y not in pol_words_sh.keys()):
pol_words_sh[y] = 1
else:
pol_words_sh[y] = pol_words_sh[y] + 1
print(pol_words_sh)
freqs_sh = []
large_pol_words_sh = []
for x in pol_words_sh.keys():
freqs_sh.append(pol_words_sh[x])
large_pol_words_sh.append(x)
freqs_sh, large_pol_words_sh = (list(x) for x in zip(*sorted(zip(freqs_sh, large_pol_words_sh))))
freqs_sh.reverse()
large_pol_words_sh.reverse()
print(freqs_sh)
objects = large_pol_words_sh
y_pos = np.arange(len(objects))
plt.figure(figsize=(10,4))
plt.bar(y_pos, freqs_sh, align='center', alpha=0.5)
plt.xticks(y_pos, objects, rotation='vertical')
plt.ylabel('Frequency')
plt.title('Most commonly used political words in Sandy Hook Corpus')
plt.show()
print(corpus.get_utterance(utter_ids[60]))
###Output
Utterance({'id': '3v4rga', 'user': User([('name', 'aluminumdisc')]), 'root': '3v4rga', 'reply_to': None, 'timestamp': 1449056876, 'text': '', 'meta': {'score': 2, 'top_level_comment': None, 'retrieved_on': 1454888391, 'gilded': 0, 'gildings': None, 'subreddit': 'politics', 'stickied': False, 'permalink': '/r/politics/comments/3v4rga/marco_rubio_in_second_place_in_latest_national/', 'author_flair_text': '', 'num_pol_refs': 0, 'num_pol_refs_incidence': 0, 'pol_words': []}})
###Markdown
Sandy Hook analysis

Now we try to establish a time series of how many words there are per day after December 14, 2012 (Sandy Hook shooting day). Timestamp: 1355461200
###Code
from datetime import datetime
import pandas as pd
ps = 7
#Sandy hook dates
#start_date = '2012-12-14'
#end_date = '2012-12-22'
start_date = '2012-07-20'
end_date = '2012-07-28'
num_posts_sh = [0] * ps
times = pd.date_range(start=start_date,end=end_date,periods=ps+1)
times = np.array(times)
bin_times = times[:-1]
# convert to datetime object
times_temp = []
for i,x in enumerate(times):
times_temp.append(pd.to_datetime(x))
times = times_temp
for i,x in enumerate(sh_list):
utt = corpus.get_utterance(x[0])
posted_time = datetime.fromtimestamp(utt.timestamp)
y = 0
while (posted_time > times[y]):
y = y + 1
## this gives us the timeframe to mark it as
num_posts_sh[y-1] = num_posts_sh[y-1] + 1
print(num_posts_sh)
plt.plot(bin_times,num_posts_sh)
plt.xticks(rotation='vertical')
plt.ylabel('Number of posts')
plt.title('Number of posts per day 7 days after Sandy Hook')
plt.show()
from nltk.tokenize import word_tokenize
## function takes in utterance and counts total words
def count_total_words(utt):
if utt.text != None:
tokenized = word_tokenize(utt.text.lower())
return len(tokenized)
##check above function works as expected
print(count_total_words(corpus.get_utterance(utter_ids[400080])))
print(corpus.get_utterance(utter_ids[400080]))
## next, I want the incidence rate per day of political words
inc_rate_sh = [0]*ps
total_pol_words_sh = [0]*ps
total_words_sh = [0]*ps
for i,x in enumerate(sh_list):
utt = corpus.get_utterance(x[0])
posted_time = datetime.fromtimestamp(utt.timestamp)
y = 0
while (posted_time > times[y]):
y = y + 1
## this gives us the timeframe to mark it as
total_pol_words_sh[y-1] = total_pol_words_sh[y-1] + utt.meta['num_pol_refs']
total_words_sh[y-1] = total_words_sh[y-1] + count_total_words(utt)
if y == 0:
print(count_total_words(utt))
print(total_pol_words_sh)
print(total_words_sh)
for i in range(0, ps):
if total_words_sh[i] != 0:
inc_rate_sh[i] = total_pol_words_sh[i]/total_words_sh[i]
print(inc_rate_sh)
plt.plot(bin_times,inc_rate_sh)
plt.xticks(rotation='vertical')
plt.ylabel('Political words/total words')
plt.title('Incidence rate of political words in utterances')
plt.show()
###Output
_____no_output_____ |
Assignment_9__MORALES.ipynb | ###Markdown
Lab 2 - Plotting Vectors using NumPy and MatPlotLib

In this laboratory we will be discussing the basics of numerical and scientific programming by working with vectors using NumPy and MatPlotLib.

Objectives
At the end of this activity you will be able to:
1. Be familiar with the libraries in Python for numerical and scientific programming.
2. Visualize vectors through Python programming.
3. Perform simple vector operations through code.

Discussion

NumPy
NumPy, or Numerical Python, is mainly used for matrix and vector operations. It is capable of declaring, computing, and representing matrices. Most Python scientific programming libraries use NumPy as their base.

Scalars represent magnitude or a single value.
Vectors represent magnitude with direction.

Representing Vectors
Now that you know how to represent vectors using their component and matrix forms, we can hard-code them in Python. Let's say that you have the vectors:
$$ A = 4\hat{x} + 3\hat{y} \\B = 2\hat{x} - 5\hat{y}\\C = 4ax + 3ay - 2az \\D = 2\hat{i} - 2\hat{j} + 3\hat{k}$$
in which their matrix equivalents are:
$$ A = \begin{bmatrix} 4 \\ 3\end{bmatrix} , B = \begin{bmatrix} 2 \\ -5\end{bmatrix} , C = \begin{bmatrix} 4 \\ 3 \\ -2 \end{bmatrix}, D = \begin{bmatrix} 2 \\ -2 \\ 3\end{bmatrix}$$
$$ A = \begin{bmatrix} 4 & 3\end{bmatrix} , B = \begin{bmatrix} 2 & -5\end{bmatrix} , C = \begin{bmatrix} 4 & 3 & -2\end{bmatrix} , D = \begin{bmatrix} 2 & -2 & 3\end{bmatrix} $$
We can then start writing NumPy code for this as follows:
###Code
## Importing necessary libraries
import numpy as np ## 'np' here is short-hand name of the library (numpy) or a nickname.
A = np.array([4, 3])
B = np.array([2, -5])
C = np.array([
[4],
[3],
[-2]
])
D = np.array ([[2],
[-2],
[3]])
print('Vector A is ', A)
print('Vector B is ', B)
print('Vector C is ', C)
print('Vector D is ', D)
###Output
Vector A is [4 3]
Vector B is [ 2 -5]
Vector C is [[ 4]
[ 3]
[-2]]
Vector D is [[ 2]
[-2]
[ 3]]
###Markdown
Describing vectors in NumPy

Describing vectors is very important if we want to perform basic to advanced operations with them. The fundamental ways of describing vectors are knowing their shape, size, and dimensions.
###Code
### Checking shapes
### Shapes tells us how many elements are there on each row and column
A.shape
H = np.array([1, 0, 2, 5, -0.2, 0])
H.shape
C.shape
### Checking size
### Array/Vector sizes tells us many total number of elements are there in the vector
D.size
### Checking dimensions
### The dimensions or rank of a vector tells us how many dimensions are there for the vector.
D.ndim
###Output
_____no_output_____
###Markdown
Great! Now let's try to explore performing operations with these vectors.

Addition

The addition rule is simple: we just need to add the elements of the matrices according to their index. So in this case, if we add vector $A$ and vector $B$ we will have a resulting vector: $$R = 6\hat{x}-2\hat{y} \\ \\or \\ \\ R = \begin{bmatrix} 6 \\ -2\end{bmatrix} $$ So let's try to do that in NumPy in several ways:
###Code
R = np.add(A, B) ## this is the functional method usisng the numpy library
P = np.add(C, D)
R = A + B ## this is the explicit method, since Python does a value-reference so it can
## know that these variables would need to do array operations.
R
pos1 = np.array([0,0,0])
pos2 = np.array([0,1,3])
pos3 = np.array([1,5,-2])
pos4 = np.array([5,-3,3])
#R = pos1 + pos2 + pos3 + pos4
#R = np.multiply(pos3, pos4)
R = pos3 / pos4
R
###Output
_____no_output_____
###Markdown
Try for yourself! Try to implement subtraction, multiplication, and division with vectors $A$ and $B$
###Code
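# One possible solution (an illustrative sketch; A and B are the arrays defined earlier):
print("A - B =", np.subtract(A, B))   # element-wise difference
print("A * B =", np.multiply(A, B))   # element-wise (Hadamard) product
print("A / B =", np.divide(A, B))     # element-wise division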
###Output
_____no_output_____
###Markdown
Scaling

Scaling, or scalar multiplication, takes a scalar value and performs multiplication with a vector. Let's take the example below: $$S = 5 \cdot A$$ We can do this in NumPy through:
###Code
#S = 5 * A
S = np.multiply(5,A)
S
###Output
_____no_output_____
###Markdown
Try to implement scaling with two vectors.
###Code
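# One possible solution (sketch): scale two different vectors by the same scalar
S_A = 5 * A                 # explicit operator form
S_B = np.multiply(5, B)     # equivalent functional form
print(S_A, S_B)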
###Output
_____no_output_____
###Markdown
MatPlotLib

MatPlotLib, or the MATLab Plotting Library, is Python's take on MATLab's plotting features. MatPlotLib can be used widely, from graphing values to visualizing several dimensions of data.

Visualizing Data

It's not enough to just solve these vectors, so we might need to visualize them. We'll use MatPlotLib for that, and we'll need to import it first.
###Code
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
A = [1, -1]
B = [5, -1]
plt.scatter(A[0], A[1], label='A', c='green')
plt.scatter(B[0], B[1], label='B', c='magenta')
plt.grid()
plt.legend()
plt.show()
A = np.array([1, -1])
B = np.array([1, 5])
plt.title ("Resultant Vector\nMagnitude:{}" .format(Magnitude))
plt.xlim(-5, 5)
plt.ylim(-5, 5)
plt.quiver(0, 0, A[0], A[1], angles='xy', scale_units='xy', scale=1, color='red')
plt.quiver(A[0], A[1], B[0], B[1], angles='xy', scale_units='xy', scale=1, color='green')
R = A + B
plt.quiver(0, 0, R[0], R[1], angles='xy', scale_units='xy', scale=1, color='black')
plt.grid()
plt.show()
print(R)
Magnitude = np.sqrt(np.sum(R**2))
print(Magnitude)
Slope = R[1]/R[0]
print(Slope)
Angle = (np.arctan(Slope))*(180/np.pi)
print(Angle)
n = A.shape[0]
plt.xlim(-10, 10)
plt.ylim(-10, 10)
plt.quiver(0,0, A[0], A[1], angles='xy', scale_units='xy',scale=1)
plt.quiver(A[0],A[1], B[0], B[1], angles='xy', scale_units='xy',scale=1)
plt.quiver(0,0, R[0], R[1], angles='xy', scale_units='xy',scale=1)
plt.show()
###Output
_____no_output_____
###Markdown
Try plotting three vectors and show the resultant vector as a result. Use the head-to-tail method.
###Code
### Try out code here!
A = np.array([7, 8, 4])
B = np.array([5, 2, 9])
SubtractVectors = np.subtract(A, B)
print(SubtractVectors)
MultiplyVectors = np.multiply(A, B)
print(MultiplyVectors)
DivisionVectors = np.divide(A, B)
print(DivisionVectors)
SN = 10 #ScalingNumber
Vector1 = np.array([4,5,6])
Vector2 = np.array([6,7,8])
Scale =SN* Vector1, Vector2
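# Note: the line above creates a tuple (SN*Vector1, Vector2); only Vector1 is scaled.
# If the intent was to scale both vectors, one option is: Scale = SN * np.array([Vector1, Vector2])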
print(Scale)
A = np.array([1, -5])
B = np.array([1, 5])
C = np.array([1, 7])
plt.xlim(-9, 9)
plt.ylim(-9, 9)
Q1 = plt.quiver(0,0, A[0], A[1], angles='xy', scale_units='xy',scale=1, color="pink")
Q2 = plt.quiver(A[0],A[1], B[0], B[1], angles='xy', scale_units='xy',scale=1, color="cyan")
Q3 = plt.quiver(2,0, C[0], C[1], angles='xy', scale_units='xy',scale=1, color="beige")
R = A + B + C
plt.quiver(0,0, R[0], R[1], angles='xy', scale_units='xy',scale=1, color="teal")
plt.grid()
plt.show()
print("The Resultant of the three Vectors is", R)
###Output
_____no_output_____ |
00_process_bson.ipynb | ###Markdown
Process BSON> Script to process BSON data into JPGs. Ideas from [here](https://www.kaggle.com/inversion/processing-bson-files/notebook).
###Code
#hide
from nbdev.showdoc import *
#export
import pandas as pd
from fastcore.all import *
import io
import bson
from PIL import Image
from multiprocessing import Pool
from typing import List
#export
def save_images(product, save_dir):
"""Saves product's images to disk."""
for i, img in enumerate(product["imgs"]):
save_path = save_dir/f"{product['_id']}_{i}.jpg"
if save_path.exists(): continue
with Image.open(io.BytesIO(img["picture"])) as picture:
picture.save(save_path)
#export
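# Pulls the requested top-level fields (e.g. _id, category_id) out of a single decoded BSON product record.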
def get_mapping(product, columns: List[str]): return [product[col] for col in columns]
#export
@call_parse
def bson_to_jpeg(
path: Param("Path to BSON", Path),
):
"""Coverts BSON to JPGs and saves product id to category mapping as CSV."""
path = Path(path)
save_dir = path.parent/"images"
save_dir.mkdir(exist_ok=True)
csv_save_path = path.parent/f"{path.stem}.csv"
is_test = path.stem == "test"
print(f"Converting {path} to JPGs in {save_dir}. Mapping saved in {csv_save_path}.")
def parallel_map(func):
with Pool() as pool:
with path.open("rb") as file:
return [res for res in pool.imap(func, bson.decode_file_iter(file), chunksize=10000)]
print("Starting call to save images.")
parallel_map(partial(save_images, save_dir=save_dir))
print("Finished saving images.")
cols = ["_id"]
if not is_test: cols.append("category_id")
print("Starting call to gather mapping.")
mappings = parallel_map(partial(get_mapping, columns=cols))
print("Finished gathering mapping.")
df = pd.DataFrame(mappings, columns=cols)
df.to_csv(csv_save_path, index=False)
print(f"Saved CSV to {csv_save_path}.")
print("Completed successfully.")
return df
path = Path("./data/"); path.ls()
(path/"train_example.csv").unlink()
%time bson_to_jpeg(path/"train_example.bson")
#hide
from nbdev.export import notebook2script; notebook2script()
###Output
Converted 00_core.ipynb.
Converted 01_find_duplicates.ipynb.
Converted index.ipynb.
|
notebooks/Clusters Histogram.ipynb | ###Markdown
Cluster Histogram
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Reading in the data. This uses the filtered data set because pandas runs out of memory when the full, unfiltered dataset is loaded all at once.
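If the full, unfiltered file is ever needed, one workaround is to stream it in chunks instead of loading it all at once. The sketch below assumes the raw file has the same single-column layout as the filtered one; the path and chunk size are hypothetical and would need adjusting.
```python
import numpy as np
import pandas as pd

# Read the file in chunks so the whole dataset never sits in memory at once.
chunks = pd.read_csv(
    "../data/clusters.csv",   # hypothetical path to the unfiltered file
    header=None,
    names=["ClusterSize"],
    dtype="int64",
    chunksize=1_000_000,      # rows per chunk; tune to available memory
)

# Accumulate the log cluster sizes chunk by chunk.
log_sizes = np.concatenate(
    [np.log(chunk["ClusterSize"].to_numpy()) for chunk in chunks]
)
print(log_sizes.shape)
```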
###Code
#clusters = np.loadtxt("../data/clusters_filter10.csv", dtype = "int64")
clusters = pd.read_csv("../data/clusters_filter10.csv", dtype = "int64", header=None, names = ["ClusterSize"], nrows = None)
clusters.head()
clusters = clusters.assign(LogClusterSize = np.log(clusters.ClusterSize))
clusters.head()
plt.hist(clusters.LogClusterSize, bins = 100)
plt.xlabel("Log Cluster Size")
plt.ylabel("Frequency")
###Output
_____no_output_____
###Markdown
Numpy Histogram We'd like to retain the actual bin counts and edges, so let's use numpy's histogram function
###Code
hist = np.histogram(clusters.LogClusterSize, bins = 100)
###Output
_____no_output_____
###Markdown
and this makes a nicer plot too
###Code
plt.plot(np.exp(hist[1][:-1]), hist[0])
plt.yscale('log')
plt.xscale('log')
###Output
_____no_output_____ |
sequence_models/1-sentiment_analysis/old/C3_W1_Assignment_Solution_old1.ipynb | ###Markdown
Assignment 1: Sentiment with Deep Neural NetworksWelcome to the first assignment of course 3. In this assignment you will explore sentiment analysis using deep neural networks. Outline- [0 Import libraries and try out Trax](0)- [1 Importing the data](1) - [1.2 Building the vocabulary](1.2) - [1.3 Converting a tweet to a tensor](1.3) - [Exercise 01](ex01) - [1.4 Creating a batch generator](1.4) - [Exercise 02](ex02) - [2: Defining classes](2) - [2.1 ReLU class](1.2) - [Exercise 03](ex03) - [2.2 Dense class](2.2) - [Exercise 04](ex04) - [2.3: Model](2.3) - [Exercise 05](ex05)- [3 Training](3) - [3.1 Training the Model](3.1) - [Exercise 06](ex06) - [3.2 Initialize a model with the trained weights](3.2) - [3.3 Practice Making a prediction](3.3)- [4: Evaluation](4) - [4.1 Computing the accuracy on a batch](4.1) - [Exercise 07](ex07) - [4.2 Testing your model on Validation Data](4.2) - [Exercise 08](ex08)- [5: Testing with your own input](5) In course 1, you implemented Logistic regression and Naive Bayes for sentiment analysis. However if you were to give your old models an example like: This movie was almost good. Your model would have predicted a positive sentiment for that review. However, that sentence has a negative sentiment and indicates that the movie was not good. To solve those kinds of misclassifications, you will write a program that uses deep neural networks to identify sentiment on text. By completing this assignment you will: - Understand how you can build/design a model using layers- Train a model using a training loop- Use a binary cross entropy loss function- Compute the accuracy of your model- Predict using your own inputAs you can tell, this model follows a similar structure to the one you previously implemented in the second course of this specialization. - Indeed most of the deep nets you will be implementing will have a similar structure. The only thing that changes is the model architecture, the inputs, and the outputs. Before starting the assignment, we will introduce you to the Google library `trax` that we use for building and training models.Now we will show you how to compute the gradient of a certain function `f` by just using ` .grad(f)`. - Trax source code can be found on Github: [Trax](https://github.com/google/trax)- The Trax code also uses the JAX library: [JAX](https://jax.readthedocs.io/en/latest/index.html) Part 0: Import libraries and try out Trax- Let's import libraries and look at an example of using the Trax library.
###Code
# Automatic gradient with replaced numpy.
#!pip -q install trax==1.3.1
# import relevant libraries
import trax
# set random seeds to make this notebook easier to replicate
trax.supervised.trainer_lib.init_random_number_generators(31)
# import trax.fastmath.numpy
import trax.fastmath.numpy as np
# import trax.layers
from trax import layers as tl
# import Layer from the utils.py file
from utils import Layer, load_tweets, process_tweet
#from utils import
import os
# Create an array using trax.fastmath.numpy
a = np.array(5.0)
# View the returned array
display(a)
print(type(a))
###Output
_____no_output_____
###Markdown
Notice that trax.fastmath.numpy returns a DeviceArray from the jax library.
###Code
# Define a function that will use the trax.fastmath.numpy array
def f(x):
# f = x^2
return (x**2)
# Call the function
print(f"f(a) for a={a} is {f(a)}")
###Output
_____no_output_____
###Markdown
The gradient (derivative) of function `f` with respect to its input `x` is the derivative of $x^2$.- The derivative of $x^2$ is $2x$. - When x is 5, then $2x=10$.You can calculate the gradient of a function by using `trax.fastmath.grad(fun=)` and passing in the name of the function.- In this case the function you want to take the gradient of is `f`.- The object returned (saved in `grad_f` in this example) is a function that can calculate the gradient of f for a given trax.fastmath.numpy array.
###Code
# Directly use trax.fastmath.grad to calculate the gradient (derivative) of the function
grad_f = trax.fastmath.grad(fun=f) # df / dx - Gradient of function f(x) with respect to x
# View the type of the retuned object (it's a function)
type(grad_f)
# Call the newly created function and pass in a value for x (the DeviceArray stored in 'a')
grad_calculation = grad_f(a)
# View the result of calling the grad_f function
display(grad_calculation)
###Output
_____no_output_____
###Markdown
The function returned by trax.fastmath.grad takes in x=5 and calculates the gradient of f, which is 2*x, i.e. 10. The value is also stored as a DeviceArray from the jax library. Part 1: Importing the data 1.1 Loading in the data Import the data set. - You may recognize this from earlier assignments in the specialization.- Details of the process_tweet function are available in the utils.py file
###Code
## DO NOT EDIT THIS CELL
# Import functions from the utils.py file
# from utils import load_tweets
import numpy as np
# Load positive and negative tweets
all_positive_tweets, all_negative_tweets = load_tweets()
# View the total number of positive and negative tweets.
print(f"The number of positive tweets: {len(all_positive_tweets)}")
print(f"The number of negative tweets: {len(all_negative_tweets)}")
# Split positive set into validation and training
val_pos = all_positive_tweets[4000:] # generating validation set for positive tweets
train_pos = all_positive_tweets[:4000]# generating training set for positive tweets
# Split negative set into validation and training
val_neg = all_negative_tweets[4000:] # generating validation set for negative tweets
train_neg = all_negative_tweets[:4000] # generating training set for nagative tweets
# Combine training data into one set
train_x = train_pos + train_neg
# Combine validation data into one set
val_x = val_pos + val_neg
# Set the labels for the training set (1 for positive, 0 for negative)
train_y = np.append(np.ones(len(train_pos)), np.zeros(len(train_neg)))
# Set the labels for the validation set (1 for positive, 0 for negative)
val_y = np.append(np.ones(len(val_pos)), np.zeros(len(val_neg)))
print(f"length of train_x {len(train_x)}")
print(f"length of val_x {len(val_x)}")
###Output
_____no_output_____
###Markdown
Now import a function that processes tweets (we've provided this in the utils.py file).- `process_tweet` removes unwanted characters, e.g. hashtags, hyperlinks, and stock tickers, from the tweet.- It also returns a list of words (it tokenizes the original string).
###Code
# Import a function that processes the tweets
# from utils import process_tweet
# Try out function that processes tweets
print("original tweet at training position 0")
print(train_pos[0])
print("Tweet at training position 0 after processing:")
process_tweet(train_pos[0])
###Output
_____no_output_____
###Markdown
Notice that the function `process_tweet` keeps key words, removes the hash symbol, and ignores usernames (words that begin with '@'). It also returns a list of the words. 1.2 Building the vocabulary Now build the vocabulary.- Map each word in each tweet to an integer (an "index"). - The following code does this for you, but please read it and understand what it's doing.- Note that you will build the vocabulary based on the training data. - To do so, you will assign an index to every word by iterating over your training set. The vocabulary will also include some special tokens- `__PAD__`: padding- `__</e>__`: end of line- `__UNK__`: a token representing any word that is not in the vocabulary.
###Code
# Build the vocabulary
# Unit Test Note - There is no test set here only train/val
# Include special tokens
# started with pad, end of line and unk tokens
Vocab = {'__PAD__': 0, '__</e>__': 1, '__UNK__': 2}
# Note that we build vocab using training data
for tweet in train_x:
processed_tweet = process_tweet(tweet)
for word in processed_tweet:
if word not in Vocab:
Vocab[word] = len(Vocab)
print("Total words in vocab are",len(Vocab))
display(Vocab)
###Output
_____no_output_____
###Markdown
The dictionary `Vocab` will look like this:```CPP{'__PAD__': 0, '____': 1, '__UNK__': 2, 'followfriday': 3, 'top': 4, 'engag': 5, ...```- Each unique word has a unique integer associated with it.- The total number of words in Vocab: 9092 1.3 Converting a tweet to a tensorWrite a function that will convert each tweet to a tensor (a list of unique integer IDs representing the processed tweet).- Note, the returned data type will be a **regular Python `list()`** - You won't use TensorFlow in this function - You also won't use a numpy array - You also won't use trax.fastmath.numpy array- For words in the tweet that are not in the vocabulary, set them to the unique ID for the token `__UNK__`. ExampleInput a tweet:```CPP'@happypuppy, is Maria happy?'```The tweet_to_tensor will first conver the tweet into a list of tokens (including only relevant words)```CPP['maria', 'happi']```Then it will convert each word into its unique integer```CPP[2, 56]```- Notice that the word "maria" is not in the vocabulary, so it is assigned the unique integer associated with the `__UNK__` token, because it is considered "unknown." Exercise 01**Instructions:** Write a program `tweet_to_tensor` that takes in a tweet and converts it to an array of numbers. You can use the `Vocab` dictionary you just found to help create the tensor. - Use the vocab_dict parameter and not a global variable.- Do not hard code the integer value for the `__UNK__` token. Hints Map each word in tweet to corresponding token in 'Vocab' Use Python's Dictionary.get(key,value) so that the function returns a default value if the key is not found in the dictionary.
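As a quick illustration of the hint above, here is a tiny standalone sketch (with a made-up mini vocabulary) showing how `dict.get` falls back to the `__UNK__` ID for out-of-vocabulary words:
```python
mini_vocab = {'__PAD__': 0, '__</e>__': 1, '__UNK__': 2, 'happi': 56}
unk_ID = mini_vocab['__UNK__']

# 'maria' is not in the vocabulary, so .get falls back to the unknown-word ID
print(mini_vocab.get('maria', unk_ID))   # 2
print(mini_vocab.get('happi', unk_ID))   # 56
```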
###Code
# GRADED FUNCTION: tweet_to_tensor
# CANDIDATE FOR TABLE TEST - If a student forgets to check for unk, there might be errors or just wrong values in the list.
# We can add those errors to check in autograder through tabled test or here student facing user test.
def tweet_to_tensor(tweet, vocab_dict, unk_token='__UNK__', verbose=False):
# Process the tweet into a list of words
# where only important words are kept (stop words removed)
word_l = process_tweet(tweet)
if verbose:
print("List of words from the processed tweet:")
print(word_l)
# Initialize the list that will contain the unique integer IDs of each word
tensor_l = []
# Get the unique integer ID of the __UNK__ token
# GRADING NOTE: require the use of the variable, don't give credit for hard coding `2` as the unkown ID
unk_ID = vocab_dict[unk_token]
if verbose:
print(f"The unique integer ID for the unk_token is {unk_ID}")
# for each word in the list:
for word in word_l:
# Get the unique integer ID.
# If the word doesn't exist in the vocab dictionary,
# use the unique ID for __UNK__ instead.
# GRADING NOTE: require use of the variable unk_ID (don't give credit for hard coding `2`)
# GRADING NOTE: Require use of the parameter vocab_dict, don't give credit for using the global variable Vocab
word_ID = vocab_dict.get(word,unk_ID)
# ALTERNATIVE SOLUTION: they can use if statement to handle when the key is not in dictionary
# if word in vocab_dict:
# word_ID = vocab_dict[word]
# else:
# word_ID = unk_ID
# Append the unique integer ID to the tensor list.
tensor_l.append(word_ID)
# ALTERNATIVE SOLUTION: they can use a list comprehension
# tensor_l = [vocab_dict.get(word,unk_ID) for word in word_l]
return tensor_l
print("Actual tweet is\n",val_pos[0])
print("\nTensor of tweet:\n",tweet_to_tensor(val_pos[0], vocab_dict=Vocab))
###Output
_____no_output_____
###Markdown
Expected output```CPPActual tweet is Bro:U wan cut hair anot,ur hair long Liao boMe:since ord liao,take it easy lor treat as save $ leave it longer :)Bro:LOL Sibei xialanTensor of tweet: [1065, 136, 479, 2351, 745, 8146, 1123, 745, 53, 2, 2672, 791, 2, 2, 349, 601, 2, 3489, 1017, 597, 4559, 9, 1065, 157, 2, 2]```
###Code
# test tweet_to_tensor
def test_tweet_to_tensor():
test_cases = [
{
"name":"simple_test_check",
"input": [val_pos[1],Vocab],
"expected":[444, 2, 304, 567, 56, 9],
"error":"The function gives bad output for val_pos[1]. Test failed"
},
{
"name":"datatype_check",
"input":[val_pos[1],Vocab],
"expected":type([]),
"error":"Datatype mismatch. Need only list not np.array"
},
{
"name":"without_unk_check",
"input":[val_pos[1],Vocab],
"expected":6,
"error":"Unk word check not done- Please check if you included mapping for unknown word"
}
]
count = 0
for test_case in test_cases:
try:
if test_case['name'] == "simple_test_check":
assert test_case["expected"] == tweet_to_tensor(*test_case['input'])
count += 1
if test_case['name'] == "datatype_check":
assert isinstance(tweet_to_tensor(*test_case['input']),test_case["expected"])
count += 1
if test_case['name'] == "without_unk_check":
assert None not in tweet_to_tensor(*test_case['input'])
count += 1
except:
print(test_case['error'])
if count == 3:
print("\033[92m All tests passed")
else:
print(count," Tests passed out of 3")
test_tweet_to_tensor()
###Output
_____no_output_____
###Markdown
1.4 Creating a batch generatorMost of the time in Natural Language Processing, and AI in general we use batches when training our data sets. - If instead of training with batches of examples, you were to train a model with one example at a time, it would take a very long time to train the model. - You will now build a data generator that takes in the positive/negative tweets and returns a batch of training examples. It returns the model inputs, the targets (positive or negative labels) and the weight for each target (ex: this allows us to can treat some examples as more important to get right than others, but commonly this will all be 1.0). Once you create the generator, you could include it in a for loop```CPPfor batch_inputs, batch_targets, batch_example_weights in data_generator: ...```You can also get a single batch like this:```CPPbatch_inputs, batch_targets, batch_example_weights = next(data_generator)```The generator returns the next batch each time it's called. - This generator returns the data in a format (tensors) that you could directly use in your model.- It returns a triple: the inputs, targets, and loss weights:-- Inputs is a tensor that contains the batch of tweets we put into the model.-- Targets is the corresponding batch of labels that we train to generate.-- Loss weights here are just 1s with same shape as targets. Next week, you will use it to mask input padding. Exercise 02Implement `data_generator`.
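If Python generators are unfamiliar, here is a tiny standalone sketch (unrelated to tweets) of how `yield` and `next` interact; the data generator below relies on exactly this mechanism:
```python
def count_up(start):
    """Yields start, start + 1, ... producing one value per call to next()."""
    while True:
        yield start
        start += 1

counter = count_up(10)
print(next(counter))  # 10
print(next(counter))  # 11
```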
###Code
# GRADED: Data generator
def data_generator(data_pos, data_neg, batch_size, loop, vocab_dict):
'''
Input:
data_pos - Set of posstive examples
data_neg - Set of negative examples
batch_size - number of samples per batch
loop - True or False
'''
### START GIVEN CODE ###
# make sure the batch size is an even number
# to allow an equal number of positive and negative samples
assert batch_size % 2 == 0
# Number of positive examples in each batch is half of the batch size
# same with number of negative examples in each batch
n_to_take = batch_size // 2
# Use pos_index to walk through the data_pos array
# same with neg_index and data_neg
pos_index = 0
neg_index = 0
# Loop indefinitely
while True:
# If the positive index plus num of positive examples
# goes past the positive dataset,
if pos_index + n_to_take > len(data_pos):
# If user wants to keep re-using the data, reset the index
if loop:
pos_index = 0
# otherwise exit the loop
else:
# exit the loop
break
### END GIVEN CODE ###
### START CODE HERE ###
# If the positive index plus num of negative examples
# goes past the negative dataset,
if neg_index + n_to_take > len(data_neg):
# If user wants to keep re-using the data, reset the index
if loop:
neg_index = 0
# otherwise exit the loop
else:
# exit the loop
break
### END CODE HERE ###
### START GIVEN CODE ###
# create a batch with positive examples
batch = []
# Start from pos_index and increment i up to n_to_take
for i in range(n_to_take):
# get the tweet as pos_index + i
tweet = data_pos[pos_index + i]
# convert the tweet into tensors of integers representing the processed words
tensor = tweet_to_tensor(tweet,vocab_dict)
# append the tensor to the batch list
batch.append(tensor)
### END GIVEN CODE ###
### START CODE HERE ###
# Using the same batch list, start from neg_index and increment i up to n_to_take
for i in range(n_to_take):
# get the tweet as pos_index + i
tweet = data_neg[neg_index + i]
# convert the tweet into tensors of integers representing the processed words
tensor = tweet_to_tensor(tweet, vocab_dict)
# append the tensor to the batch list
batch.append(tensor)
### END CODE HERE ###
### START GIVEN CODE ###
# Update the start index for positive data
# so that it's n_to_take positions after the current pos_index
pos_index += n_to_take
# Update the start index for negative data
# so that it's n_to_take positions after the current neg_index
neg_index += n_to_take
# Get the max tweet length (the length of the longest tweet)
# (you will pad all shorter tweets to have this length)
max_len = max([len(t) for t in batch])
# Initialize the input_l, which will
# store the padded versions of the tensors
tensor_pad_l = []
# Pad shorter tweets with zeros
for tensor in batch:
### END GIVEN CODE ###
### START CODE HERE ###
# Get the number of positions to pad for this tensor so that it will be max_len long
n_pad = max_len - len(tensor)
# Generate a list of zeros, with length n_pad
pad_l = [0] * n_pad
# concatenate the tensor and the list of padded zeros
tensor_pad = tensor + pad_l
# append the padded tensor to the list of padded tensors
tensor_pad_l.append(tensor_pad)
# convert the list of padded tensors to a numpy array
# and store this as the model inputs
inputs = np.array(tensor_pad_l)
# Generate the list of targets for the positive examples (a list of ones)
# The length is the number of positive examples in the batch
target_pos = [1] * n_to_take
# Generate the list of targets for the negative examples (a list of ones)
# The length is the number of negative examples in the batch
target_neg = [0] * n_to_take
# Concatenate the positve and negative targets
target_l = target_pos + target_neg
# Convert the target list into a numpy array
targets = np.array(target_l)
# Example weights: Treat all examples equally importantly.
example_weights = np.ones_like(targets)
### END CODE HERE ###
### GIVEN CODE ###
# note we use yield and not return
yield inputs, targets, example_weights
###Output
_____no_output_____
###Markdown
Now you can use your data generator to create a data generator for the training data, and another data generator for the validation data.
###Code
# Create the training data generator
def train_generator(batch_size):
return data_generator(train_pos, train_neg, batch_size, True, Vocab)
# Create the validation data generator
def val_generator(batch_size):
return data_generator(val_pos, val_neg, batch_size, False, Vocab)
# Get a batch from the train_generator and inspect.
inputs, targets, example_weights = next(train_generator(4))
# this will print a list of 4 tensors padded with zeros
print(f'Inputs: {inputs}')
print(f'Targets: {targets}')
print(f'Example Weights: {example_weights}')
# Test the train_generator
# Create a data generator for training data,
# which produces batches of size 4 (for tensors and their respective targets)
tmp_data_gen = train_generator(batch_size = 4)
# Call the data generator to get one batch and its targets
tmp_inputs, tmp_targets, tmp_example_weights = next(tmp_data_gen)
print(f"The inputs shape is {tmp_inputs.shape}")
for i,t in enumerate(tmp_inputs):
print(f"input tensor: {t}; target {tmp_targets[i]}; example weights {tmp_example_weights[i]}")
###Output
_____no_output_____
###Markdown
Expected output```CPPThe inputs shape is (4, 14)input tensor: [3 4 5 6 7 8 9 0 0 0 0 0 0 0]; target 1; example weights 1input tensor: [10 11 12 13 14 15 16 17 18 19 20 9 21 22]; target 1; example weights 1input tensor: [5738 2901 3761 0 0 0 0 0 0 0 0 0 0 0]; target 0; example weights 1input tensor: [ 858 256 3652 5739 307 4458 567 1230 2767 328 1202 3761 0 0]; target 0; example weights 1``` Now that you have your train/val generators, you can just call them and they will return tensors which correspond to your tweets in the first column and their corresponding labels in the second column. Now you can go ahead and start building your neural network. Part 2: Defining classesIn this part, you will write your own library of layers. It will be very similarto the one used in Trax and also in Keras and PyTorch. Writing your own smallframework will help you understand how they all work and use them effectivelyin the future.Your framework will be based on the following `Layer` class from utils.py.```CPPclass Layer(object): """ Base class for layers. """ Constructor def __init__(self): set weights to None self.weights = None The forward propagation should be implemented by subclasses of this Layer class def forward(self, x): raise NotImplementedError This function initializes the weights based on the input signature and random key, should be implemented by subclasses of this Layer class def init_weights_and_state(self, input_signature, random_key): pass This initializes and returns the weights, do not override. def init(self, input_signature, random_key): self.init_weights_and_state(input_signature, random_key) return self.weights __call__ allows an object of this class to be called like it's a function. def __call__(self, x): When this layer object is called, it calls its forward propagation function return self.forward(x)``` 2.1 ReLU classYou will now implement the ReLU activation function in a class below. The ReLU function looks as follows: $$ \mathrm{ReLU}(x) = \mathrm{max}(0,x) $$ Exercise 03**Instructions:** Implement the ReLU activation function below. Your function should take in a matrix or vector and it should transform all the negative numbers into 0 while keeping all the positive numbers intact. Hints Please use numpy.maximum(A,k) to find the maximum between each element in A and a scalar k
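Before implementing it, here is a small standalone sketch (independent of the graded cell below) of what `numpy.maximum(A, k)` does when `k` is a scalar:
```python
import numpy as np

A = np.array([[-2.0, 0.5], [3.0, -1.0]])
# Each element of A is compared against the scalar 0, so negatives are clipped to 0.
print(np.maximum(A, 0))
# [[0.  0.5]
#  [3.  0. ]]
```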
###Code
# GRADED FUNCTION: Relu
class Relu(Layer):
"""Relu activation function implementation"""
def forward(self, x):
'''
Input:
- x (a numpy array): the input
Output:
- activation (numpy array): all positive or 0 version of x
'''
### START CODE HERE ###
activation = np.maximum(x, 0)
# ALTERNATE SOLUTIONS
# 1. Simple for loop
# activation = []
# for i in x.flatten():
# if i > 0:
# activation.append(i)
# else:
# activation.append(0)
# activation = np.reshape(activation,x.shape)
### END CODE HERE ###
return activation
# Test your relu function
x = np.array([[-2.0, -1.0, 0.0], [0.0, 1.0, 2.0]], dtype=float)
relu_layer = Relu()
print("Test data is:")
print(x)
print("Output of Relu is:")
print(relu_layer(x))
###Output
_____no_output_____
###Markdown
Expected Outout```CPPTest data is:[[-2. -1. 0.] [ 0. 1. 2.]]Output of Relu is:[[0. 0. 0.] [0. 1. 2.]]``` 2.2 Dense class ExerciseImplement the forward function of the Dense class. - The forward function multiplies the input to the layer (`x`) by the weight matrix (`W`)$$\mathrm{forward}(\mathbf{x},\mathbf{W}) = \mathbf{xW} $$- You can use `numpy.dot` to perform the matrix multiplication.Note that for more efficient code execution, you will use the trax version of `math`, which includes a trax version of `numpy` and also `random`.Implement the weight initializer `new_weights` function- Weights are initialized with a random key.- The second parameter is a tuple for the desired shape of the weights (num_rows, num_cols)- The num of rows for weights should equal the number of columns in x, because for forward propagation, you will multiply x times weights.Please use `trax.fastmath.random.normal(key, shape, dtype=tf.float32)` to generate random values for the weight matrix. The key difference between this functionand the standard `numpy` randomness is the explicit use of random keys, whichneed to be passed. While it can look tedious at the first sight to pass the random key everywhere, you will learn in Course 4 why this is very helpful whenimplementing some advanced models.- `key` can be generated by calling `random.get_prng(seed=)` and passing in a number for the `seed`.- `shape` is a tuple with the desired shape of the weight matrix. - The number of rows in the weight matrix should equal the number of columns in the variable `x`. Since `x` may have 2 dimensions if it reprsents a single training example (row, col), or three dimensions (batch_size, row, col), get the last dimension from the tuple that holds the dimensions of x. - The number of columns in the weight matrix is the number of units chosen for that dense layer. Look at the `__init__` function to see which variable stores the number of units.- `dtype` is the data type of the values in the generated matrix; keep the default of `tf.float32`. In this case, don't explicitly set the dtype (just let it use the default value).Set the standard deviation of the random values to 0.1- The values generated have a mean of 0 and standard deviation of 1.- Set the default standard deviation `stdev` to be 0.1 by multiplying the standard deviation to each of the values in the weight matrix.
###Code
# use the fastmath module within trax
from trax import fastmath
# use the numpy module from trax
np = fastmath.numpy
# use the fastmath.random module from trax
random = fastmath.random
# See how the fastmath.trax.random.normal function works
tmp_key = random.get_prng(seed=1)
print("The random seed generated by random.get_prng")
display(tmp_key)
print("choose a matrix with 2 rows and 3 columns")
tmp_shape=(2,3)
display(tmp_shape)
# Generate a weight matrix
# Note that you'll get an error if you try to set dtype to tf.float32, where tf is tensorflow
# Just avoid setting the dtype and allow it to use the default data type
tmp_weight = trax.fastmath.random.normal(key=tmp_key, shape=tmp_shape)
print("Weight matrix generated with a normal distribution with mean 0 and stdev of 1")
display(tmp_weight)
###Output
_____no_output_____
###Markdown
Exercise 04Implement the `Dense` class.
###Code
# GRADED FUNCTION: Dense
class Dense(Layer):
"""
A dense (fully-connected) layer.
"""
# __init__ is implemented for you
def __init__(self, n_units, init_stdev=0.1):
# Set the number of units in this layer
self._n_units = n_units
        self._init_stdev = init_stdev
# Please implement 'forward()'
def forward(self, x):
# GRADING NOTE: can grade this by comparing outputs
### START CODE HERE ###
# ALTERNATE SOLUTIONS
# dense = np.matmul(x,weights)
# dense = x@weights
# Matrix multiply x and the weight matrix
dense = np.dot(x, self.weights)
### END CODE HERE ###
return dense
# init_weights
def init_weights_and_state(self, input_signature, random_key):
### START CODE HERE ###
# The input_signature has a .shape attribute that gives the shape as a tuple
input_shape = input_signature.shape
# Generate the weight matrix from a normal distribution,
# and standard deviation of 'stdev'
w = self._init_stdev * random.normal(
key = random_key, shape = (input_shape[-1], self._n_units))
# GRADING NOTE: don't give points for `input_shape[1]`
# to get the column dimension as well
# because the input_shape may have 2 or 3 dimensions,
# so they need to use index -1 to get last dim.
# GRADING NOTE: don't give credit for hard coding stdev with 0.1
### END CODE HERE ###
self.weights = w
# Testing your Dense layer
dense_layer = Dense(n_units=10) #sets number of units in dense layer
random_key = random.get_prng(seed=0) # sets random seed
z = np.array([[2.0, 7.0, 25.0]]) # input array
# CODE REVIEW - Dense object .init calls what exactly should be clarified
# as there is no link between init, forward and new_weights.
# This dense is of trax layer class and should be given documentation or explaination from the codebase definition of layer
# https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/base.py#L243
# It returns self._weights, self.state
# NOTE: for the purpose of this exercise, we do not include non-trainable state
# to make the layer construction easier.
dense_layer.init(z, random_key)
print("Weights are\n ",dense_layer.weights) #Returns randomly generated weights
print("Foward function output is ", dense_layer(z)) # Returns multiplied values of units and weights
###Output
_____no_output_____
###Markdown
2.3: ModelNow you will implement a classifier using neural networks. Here is the model architecture you will be implementing. For the model implementation, you will use the Trax layers library `tl`.Note that the second character of `tl` is the lowercase of letter `L`, not the number 1. Trax layers are very similar to the ones you implemented above,but in addition to trainable weights also have a non-trainable state.State is used in layers like batch normalization and for inference, you will learn more about it in course 4.First, look at the code of the Trax Dense layer and compare to your implementation above.- [tl.Dense](https://github.com/google/trax/blob/master/trax/layers/core.pyL29): Trax Dense layer implementationOne other important layer that you will use a lot is one that allows to execute one layer after another in sequence.- [tl.Serial](https://github.com/google/trax/blob/master/trax/layers/combinators.pyL26): Combinator that applies layers serially. - You can pass in the layers as arguments to `Serial`, separated by commas. - For example: `tl.Serial(tl.Embeddings(...), tl.Mean(...), tl.Dense(...), tl.LogSoftmax(...))`Please use the `help` function to view documentation for each layer.
###Code
# View documentation on tl.Dense
help(tl.Dense)
# View documentation on tl.Serial
help(tl.Serial)
###Output
_____no_output_____
###Markdown
- [tl.Embedding](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/core.pyL113): Layer constructor function for an embedding layer. - `tl.Embedding(vocab_size, d_feature)`. - `vocab_size` is the number of unique words in the given vocabulary. - `d_feature` is the number of elements in the word embedding (some choices for a word embedding size range from 150 to 300, for example). - Recall from the previous course 2, week 4, that the embedding is a trainable lookup table with one row per word in the vocabulary and d_feature columns, so each word index maps to a d_feature-dimensional vector.
###Code
# View documentation for tl.Embedding
help(tl.Embedding)
tmp_embed = tl.Embedding(vocab_size=3, d_feature=2)
display(tmp_embed)
###Output
_____no_output_____
###Markdown
- [tl.Mean](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/core.pyL276): Calculates means across an axis. In this case, please choose axis = 1 to get an average embedding vector (an embedding vector that is the average of the embeddings of all the words in the tweet). - For example, if each word embedding has 300 elements, averaging a tweet's word embeddings along the word axis (axis=1 for a batched input) yields a single vector of 300 elements.
###Code
# view the documentation for tl.mean
help(tl.Mean)
# Pretend the embedding matrix uses
# 2 elements for embedding the meaning of a word
# and has a vocabulary size of 3
# So it has shape (2,3)
tmp_embed = np.array([[1,2,3,],
[4,5,6]
])
# take the mean along axis 0
print("The mean along axis 0 creates a vector whose length equals the vocabulary size")
display(np.mean(tmp_embed,axis=0))
print("The mean along axis 1 creates a vector whose length equals the number of elements in a word embedding")
display(np.mean(tmp_embed,axis=1))
###Output
_____no_output_____
###Markdown
- [tl.LogSoftmax](https://github.com/google/trax/blob/1372b903bb66b0daccee19fd0b1fdf44f659330b/trax/layers/core.pyL242): Implements log softmax function- Here, you don't need to set any parameters for `LogSoftMax()`.
###Code
help(tl.LogSoftmax)
###Output
_____no_output_____
###Markdown
Exercise 05Implement the classifier function.
###Code
# GRADED FUNCTION: classifier
# CODE REVIEW - Add a cell with help(t1.Dense) for students to run and check
def classifier(vocab_size=len(Vocab), embedding_dim=256, output_dim=2, mode='train'):
# create embedding layer
    # the embedding table has one row per word in the vocabulary
    # and d_feature (embedding_dim) columns
embed_layer = tl.Embedding(vocab_size=vocab_size,
d_feature=embedding_dim)
# Create a mean layer, to create an "average" word embedding
mean_layer = tl.Mean(axis=1)
# Create a dense layer, one unit for each output
dense_output_layer = tl.Dense(n_units = output_dim)
# Create the log softmax layer (no parameters needed)
log_softmax_layer = tl.LogSoftmax()
# Use tl.Serial to combine all layers
# and create the classifier
# of type trax.layers.combinators.Serial
model = tl.Serial(
embed_layer,
mean_layer,
dense_output_layer,
log_softmax_layer
)
# return the model of type
return model
tmp_model = classifier()
print(type(tmp_model))
display(tmp_model)
###Output
_____no_output_____
###Markdown
Expected output```Serial[ Embedding_9092_256 Mean Dense_2 LogSoftmax]``` Part 3: TrainingTo train a model on a task, Trax defines an abstraction `trax.supervised.training.TrainTask` which packages the train data, loss and optimizer (among other things) together into an object.Similarly to evaluate a model, Trax defines an abstraction `trax.supervised.training.EvalTask` which packages the eval data and metrics (among other things) into another object.The final piece tying things together is the `trax.supervised.training.Loop` abstraction that is a very simple and flexible way to put everything together and train the model, all the while evaluating it and saving checkpoints.Using `Loop` will save you a lot of code compared to always writing the training loop by hand, like you did in courses 1 and 2. More importantly, you are less likely to have a bug in that code that would ruin your training.
###Code
# View documentation for trax.supervised.training.TrainTask
help(trax.supervised.training.TrainTask)
# View documentation for trax.supervised.training.EvalTask
help(trax.supervised.training.EvalTask)
# View documentation for trax.supervised.training.Loop
help(trax.supervised.training.Loop)
# View optimizers that you could choose from
help(trax.optimizers)
###Output
_____no_output_____
###Markdown
Notice some available optimizers include:```CPP adafactor adam momentum rms_prop sm3``` 3.1 Training the model Now you are going to train your model. Let's define the `TrainTask`, `EvalTask` and `Loop` in preparation to train the model.
###Code
from trax.supervised import training
batch_size = 16
train_task = training.TrainTask(
labeled_data=train_generator(batch_size=batch_size),
loss_layer=tl.CrossEntropyLoss(),
optimizer=trax.optimizers.Adam(0.01),
n_steps_per_checkpoint=10,
)
eval_task = training.EvalTask(
labeled_data=val_generator(batch_size=batch_size),
metrics=[tl.CrossEntropyLoss(), tl.Accuracy()],
)
model = classifier()
###Output
_____no_output_____
###Markdown
This defines a model trained using `tl.CrossEntropyLoss` optimized with the `trax.optimizers.Adam` optimizer, all the while tracking the accuracy using `tl.Accuracy` metric. We also track `tl.CrossEntropyLoss` on the validation set. Now let's make an output directory and train the model.
###Code
output_dir = '~/model/'
output_dir_expand = os.path.expanduser(output_dir)
print(output_dir_expand)
###Output
_____no_output_____
###Markdown
Exercise 06**Instructions:** Implement `train_model` to train the model (`classifier` that you wrote earlier) for the given number of training steps (`n_steps`) using `TrainTask`, `EvalTask` and `Loop`.
###Code
# GRADED FUNCTION: train_model
def train_model(classifier, train_task, eval_task, n_steps, output_dir):
trax.supervised.trainer_lib.init_random_number_generators(31)
### START CODE HERE ###
training_loop = training.Loop(model,
train_task,
eval_task=eval_task,
output_dir=output_dir)
training_loop.run(n_steps=n_steps)
### END CODE HERE ###
# Return the training_loop, since it has the model.
return training_loop
training_loop = train_model(classifier, train_task, eval_task, 100, output_dir_expand)
###Output
_____no_output_____
###Markdown
Expected output (Approximately)```CPPStep 1: train CrossEntropyLoss | 0.75331926Step 1: eval CrossEntropyLoss | 0.88253963Step 1: eval Accuracy | 0.50000000Step 10: train CrossEntropyLoss | 0.68949085Step 10: eval CrossEntropyLoss | 0.39143169Step 10: eval Accuracy | 1.00000000Step 20: train CrossEntropyLoss | 0.34751052Step 20: eval CrossEntropyLoss | 0.28144282Step 20: eval Accuracy | 1.00000000Step 30: train CrossEntropyLoss | 0.21527445Step 30: eval CrossEntropyLoss | 0.14794244Step 30: eval Accuracy | 1.00000000Step 40: train CrossEntropyLoss | 0.12926930Step 40: eval CrossEntropyLoss | 0.13686025Step 40: eval Accuracy | 0.93750000Step 50: train CrossEntropyLoss | 0.11106913Step 50: eval CrossEntropyLoss | 0.08613179Step 50: eval Accuracy | 1.00000000Step 60: train CrossEntropyLoss | 0.06994272Step 60: eval CrossEntropyLoss | 0.05273105Step 60: eval Accuracy | 1.00000000Step 70: train CrossEntropyLoss | 0.06942032Step 70: eval CrossEntropyLoss | 0.08188842Step 70: eval Accuracy | 0.93750000Step 80: train CrossEntropyLoss | 0.04251108Step 80: eval CrossEntropyLoss | 0.04675784Step 80: eval Accuracy | 1.00000000Step 90: train CrossEntropyLoss | 0.04134055Step 90: eval CrossEntropyLoss | 0.09237872Step 90: eval Accuracy | 0.93750000Step 100: train CrossEntropyLoss | 0.04980525Step 100: eval CrossEntropyLoss | 0.05621190Step 100: eval Accuracy | 1.00000000``` Part 3.2 Practice Making a predictionNow that you have trained a model, you can access it as `training_loop.model` object. We will actually use `training_loop.eval_model` and in the next weeks you will learn why we sometimes use a different model for evaluation, e.g., one without dropout. For now, make predictions with your model.Use the training data just to see how the prediction process works. - Later, you will use validation data to evaluate your model's performance.
###Code
# Create a generator object
tmp_train_generator = train_generator(16)
# get one batch
tmp_batch = next(tmp_train_generator)
# Position 0 has the model inputs (tweets as tensors)
# position 1 has the targets (the actual labels)
tmp_inputs, tmp_targets, tmp_example_weights = tmp_batch
print(f"The batch is a tuple of length {len(tmp_batch)} because position 0 contains the tweets, and position 1 contains the targets.")
print(f"The shape of the tweet tensors is {tmp_inputs.shape} (num of examples, length of tweet tensors)")
print(f"The shape of the labels is {tmp_targets.shape}, which is the batch size.")
print(f"The shape of the example_weights is {tmp_example_weights.shape}, which is the same as inputs/targets size.")
# feed the tweet tensors into the model to get a prediction
tmp_pred = training_loop.eval_model(tmp_inputs)
print(f"The prediction shape is {tmp_pred.shape}, num of tensor_tweets as rows")
print("Column 0 is the probability of a negative sentiment (class 0)")
print("Column 1 is the probability of a positive sentiment (class 1)")
print()
print("View the prediction array")
tmp_pred
###Output
_____no_output_____
###Markdown
To turn these probabilities into categories (negative or positive sentiment prediction), for each row:- Compare the probabilities in each column.- If column 1 has a value greater than column 0, classify that as a positive tweet.- Otherwise if column 1 is less than or equal to column 0, classify that example as a negative tweet.
###Code
# turn probabilites into category predictions
tmp_is_positive = tmp_pred[:,1] > tmp_pred[:,0]
for i, p in enumerate(tmp_is_positive):
print(f"Neg log prob {tmp_pred[i,0]:.4f}\tPos log prob {tmp_pred[i,1]:.4f}\t is positive? {p}\t actual {tmp_targets[i]}")
###Output
_____no_output_____
###Markdown
Notice that since you are making a prediction using a training batch, it's more likely that the model's predictions match the actual targets (labels). - Every prediction that the tweet is positive also matches the actual target of 1 (positive sentiment).- Similarly, all predictions that the sentiment is not positive match the actual target of 0 (negative sentiment). One more useful thing to know is how to compare whether the prediction matches the actual target (label). - The result of the calculation `is_positive` is a boolean.- The target has type trax.fastmath.numpy.int32- If you expect to be doing division, you may prefer to work with decimal numbers with the data type trax.fastmath.numpy.float32
###Code
# View the array of booleans
print("Array of booleans")
display(tmp_is_positive)
# convert boolean to type int32
# True is converted to 1
# False is converted to 0
tmp_is_positive_int = tmp_is_positive.astype(np.int32)
# View the array of integers
print("Array of integers")
display(tmp_is_positive_int)
# convert boolean to type float32
tmp_is_positive_float = tmp_is_positive.astype(np.float32)
# View the array of floats
print("Array of floats")
display(tmp_is_positive_float)
tmp_pred.shape
###Output
_____no_output_____
###Markdown
Note that Python usually does type conversion for you when you compare a boolean to an integer- True compared to 1 is True; compared to any other integer it is False.- False compared to 0 is True; compared to any other integer it is False.
###Code
print(f"True == 1: {True == 1}")
print(f"True == 2: {True == 2}")
print(f"False == 0: {False == 0}")
print(f"False == 2: {False == 2}")
###Output
_____no_output_____
###Markdown
However, we recommend that you keep track of the data type of your variables to avoid unexpected outcomes. So it helps to convert the booleans into integers- Compare 1 to 1 rather than comparing True to 1. Hopefully you are now familiar with what kinds of inputs and outputs the model uses when making a prediction.- This will help you implement a function that estimates the accuracy of the model's predictions. Part 4: Evaluation 4.1 Computing the accuracy on a batchYou will now write a function that evaluates your model on the validation set and returns the accuracy. - `preds` contains the predictions. - Its dimensions are `(batch_size, output_dim)`. `output_dim` is two in this case. Column 0 contains the probability that the tweet belongs to class 0 (negative sentiment). Column 1 contains probability that it belongs to class 1 (positive sentiment). - If the probability in column 1 is greater than the probability in column 0, then interpret this as the model's prediction that the example has label 1 (positive sentiment). - Otherwise, if the probabilities are equal or the probability in column 0 is higher, the model's prediction is 0 (negative sentiment).- `y` contains the actual labels.- `y_weights` contains the weights to give to predictions. Exercise 07Implement `compute_accuracy`.
###Code
# GRADED FUNCTION: compute_accuracy
def compute_accuracy(preds, y, y_weights):
"""
Input:
preds: a tensor of shape (dim_batch, output_dim)
y: a tensor of shape (dim_batch, output_dim) with the true labels
Output:
accuracy: a float between 0-1
"""
### START CODE HERE ###
# Create an array of booleans,
# True if the probability of positive sentiment is greater than
# the probability of negative sentiment
# else False
is_pos = preds[:, 1] > preds[:, 0]
# Alternate solution - 1
# is_pos = np.greater(preds[:, 1],preds[:, 0] )
# convert the array of booleans into an array of np.int32
is_pos_int = is_pos.astype(np.int32)
# compare the array of predictions (as int32) with the target (labels) of type int32
correct = is_pos_int == y
# GRADING NOTE: don't give credit if they compare booleans to integers
# correct = is_pos == y
# Count the sum of the weights.
sum_weights = np.sum(y_weights)
# convert the array of correct predictions (boolean) into an arrayof np.float32
correct_float = correct.astype(np.float32)
# Multiply each prediction with its corresponding weight.
weighted_correct_float = correct_float * y_weights
# Sum up the weighted correct predictions (of type np.float32), to go in the
# denominator.
weighted_num_correct = np.sum(weighted_correct_float)
# Alternate solution: using `sum` is okay too
# weighted_num_correct = sum(weighted_correct_float)
# Divide the number of weighted correct predictions by the sum of the
# weights.
accuracy = weighted_num_correct / sum_weights
### END CODE HERE ###
return accuracy, weighted_num_correct, sum_weights
# test your function
tmp_val_generator = val_generator(64)
# get one batch
tmp_batch = next(tmp_val_generator)
# Position 0 has the model inputs (tweets as tensors)
# position 1 has the targets (the actual labels)
tmp_inputs, tmp_targets, tmp_example_weights = tmp_batch
# feed the tweet tensors into the model to get a prediction
tmp_pred = training_loop.eval_model(tmp_inputs)
tmp_acc, tmp_num_correct, tmp_num_predictions = compute_accuracy(preds=tmp_pred, y=tmp_targets, y_weights=tmp_example_weights)
print(f"Model's prediction accuracy on a single training batch is: {100 * tmp_acc}%")
print(f"Weighted number of correct predictions {tmp_num_correct}; weighted number of total observations predicted {tmp_num_predictions}")
###Output
_____no_output_____
###Markdown
Expected output (Approximately)```Model's prediction accuracy on a single training batch is: 92.1875%Weighted number of correct predictions 59.0; weighted number of total observations predicted 64``` 4.2 Testing your model on Validation Data Now you will test your model's prediction accuracy on validation data. This program will take in a data generator and your model. - The generator allows you to get batches of data. You can use it with a `for` loop:```for batch in iterator: do something with that batch````batch` has dimensions `(batch size, 2)`. - Column 0 corresponds to the tweet as a tensor.- Column 1 corresponds to its target (actual label, positive or negative sentiment).- You can feed the tweet into the model and it will return the predictions for the batch. Exercise 08**Instructions:** - Compute the accuracy over all the batches in the validation iterator. - Make use of `compute_accuracy`, which you recently implemented, and return the overall accuracy.
###Code
# GRADED FUNCTION: test_model
def test_model(generator, model):
'''
Input:
generator: an iterator instance that provides batches of inputs and targets
model: a model instance
Output:
acc: float corresponding to the accuracy
'''
accuracy = 0.
total_num_correct = 0
total_num_pred = 0
count = 0
### START CODE HERE ###
for batch in generator:
# CODE REVIEW - Add comment of use batch[0] as it corresponds to tweet input, batch[1] corresponds to y (actual sentiment) and batch[2] to the example weight.
# Retrieve the inputs from the batch
inputs = batch[0]
# Retrieve the targets (actual labels) from the batch
targets = batch[1]
# Retrieve the example weight.
example_weight = batch[2]
# Make predictions using the inputs
pred = model(inputs)
# Calculate accuracy for the batch by comparing its predictions and targets
batch_accuracy, batch_num_correct, batch_num_pred = compute_accuracy(pred, targets, example_weight)
# Update the total number of correct predictions
# by adding the number of correct predictions from this batch
total_num_correct += batch_num_correct
# Update the total number of predictions
# by adding the number of predictions made for the batch
total_num_pred += batch_num_pred
# Calculate accuracy over all examples
accuracy = total_num_correct / total_num_pred
### END CODE HERE ###
return accuracy
# DO NOT EDIT THIS CELL
# testing the accuracy of your model: this takes around 20 seconds
model = training_loop.eval_model
accuracy = test_model(val_generator(16), model)
print(f'The accuracy of your model on the validation set is {accuracy:.4f}', )
###Output
_____no_output_____
###Markdown
Expected Output (Approximately)```CPPThe accuracy of your model on the validation set is 0.9810``` Part 5: Testing with your own input Finally you will test with your own input. You will see that deep nets are more powerful than the older methods you have used before. Although you got close to 100% accuracy on the first two assignments, that task was much easier.
###Code
# this is used to predict on your own sentnece
def predict(sentence):
inputs = np.array(tweet_to_tensor(sentence, vocab_dict=Vocab))
# Batch size 1, add dimension for batch, to work with the model
inputs = inputs[None, :]
# predict with the model
preds_probs = model(inputs)
# Turn probabilities into categories
preds = int(preds_probs[0, 1] > preds_probs[0, 0])
sentiment = "negative"
if preds == 1:
sentiment = 'positive'
return preds, sentiment
# try a positive sentence
sentence = "It's such a nice day, think i'll be taking Sid to Ramsgate fish and chips for lunch at Peter's fish factory and then the beach maybe"
tmp_pred, tmp_sentiment = predict(sentence)
print(f"The sentiment of the sentence \n***\n\"{sentence}\"\n***\nis {tmp_sentiment}.")
print()
# try a negative sentence
sentence = "I hated my day, it was the worst, I'm so sad."
tmp_pred, tmp_sentiment = predict(sentence)
print(f"The sentiment of the sentence \n***\n\"{sentence}\"\n***\nis {tmp_sentiment}.")
###Output
_____no_output_____ |
labs24_notebooks/bunches_and_gaps/B_And_G_With_GeoJson_v2.ipynb | ###Markdown
Run Once
###Code
# Used in many places
import psycopg2 as pg
import pandas as pd
# Used to enter database credentials without saving them to the notebook file
import getpass
# Used to easily read in bus location data
import pandas.io.sql as sqlio
# Only used in the schedule class definition
import numpy as np
from scipy import stats
# Used in the fcc_projection function to find distances
from math import sqrt, cos
import json
# Enter database credentials. Requires you to paste in the user and
# password so it isn't saved in the notebook file
print("Enter database username:")
user = getpass.getpass()
print("Enter database password:")
password = getpass.getpass()
creds = {
'user': user,
'password': password,
'host': "lambdalabs24sfmta.cykkiwxbfvpg.us-east-1.rds.amazonaws.com",
'dbname': "historicalTransitData"
}
# Set up connection to database
cnx = pg.connect(**creds)
cursor = cnx.cursor()
print('\nDatabase connection successful')
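# Example of how `sqlio` (imported above) can later be used to pull location data
# into a DataFrame. The table and column names here are assumptions -- adjust them
# to the actual schema before running:
# query = "SELECT * FROM locations WHERE rid = '49' LIMIT 5;"
# sample_df = sqlio.read_sql_query(query, cnx)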
# Schedule class definition
# Copied from previous work, has extra methods that are not all used in this notebook
class Schedule:
def __init__(self, route_id, date, connection):
"""
The Schedule class loads the schedule for a particular route and day,
and makes several accessor methods available for it.
Parameters:
route_id (str or int)
- The route id to load
date (str or pandas.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
"""
self.route_id = str(route_id)
self.date = pd.to_datetime(date)
# load the schedule for that date and route
self.route_data = load_schedule(self.route_id, self.date, connection)
# process data into a table
self.inbound_table, self.outbound_table = extract_schedule_tables(self.route_data)
# calculate the common interval values
self.mean_interval, self.common_interval = get_common_intervals(
[self.inbound_table, self.outbound_table])
def list_stops(self):
"""
returns the list of all stops used by this schedule
"""
# get stops for both inbound and outbound routes
inbound = list(self.inbound_table.columns)
outbound = list(self.outbound_table.columns)
# convert to set to ensure no duplicates,
# then back to list for the correct output type
return list(set(inbound + outbound))
def get_specific_interval(self, stop, time, inbound=True):
"""
Returns the expected interval, in minutes, for a given stop and
time of day.
Parameters:
stop (str or int)
- the stop tag/id of the bus stop to check
time (str or pandas.Timestamp)
- the time of day to check, uses pandas.to_datetime to convert
- examples that work: "6:00", "3:30pm", "15:30"
inbound (bool, optional)
- whether to check the inbound or outbound schedule
- ignored unless the given stop is in both inbound and outbound
"""
# ensure correct parameter types
stop = str(stop)
time = pd.to_datetime(time)
# check which route to use, and extract the column for the given stop
if (stop in self.inbound_table.columns and
stop in self.outbound_table.columns):
# stop exists in both, use inbound parameter to decide
if inbound:
sched = self.inbound_table[stop]
else:
sched = self.outbound_table[stop]
elif (stop in self.inbound_table.columns):
# stop is in the inbound schedule, use that
sched = self.inbound_table[stop]
elif (stop in self.outbound_table.columns):
# stop is in the outbound schedule, use that
sched = self.outbound_table[stop]
else:
# stop doesn't exist in either, throw an error
raise ValueError(f"Stop id '{stop}' doesn't exist in either inbound or outbound schedules")
# 1: convert schedule to datetime for comparison statements
# 2: drop any NaN values
# 3: convert to list since pd.Series threw errors on i indexing
sched = list(pd.to_datetime(sched).dropna())
# reset the date portion of the time parameter to
# ensure we are checking the schedule correctly
time = time.replace(year=self.date.year, month=self.date.month,
day=self.date.day)
# iterate through that list to find where the time parameter fits
for i in range(1, len(sched)):
# start at 1 and move forward,
# is the time parameter before this schedule entry?
if(time < sched[i]):
# return the difference between this entry and the previous one
return (sched[i] - sched[i-1]).seconds / 60
# can only reach this point if the time parameter is after all entries
# in the schedule, return the last available interval
return (sched[len(sched)-1] - sched[len(sched)-2]).seconds / 60
def load_schedule(route, date, connection):
"""
loads schedule data from the database and returns it
Parameters:
route (str)
- The route id to load
date (str or pd.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
"""
# ensure correct parameter types
route = str(route)
date = pd.to_datetime(date)
# DB connection
cursor = connection.cursor()
# build selection query
query = """
SELECT content
FROM schedules
WHERE rid = %s AND
begin_date <= %s::TIMESTAMP AND
(end_date IS NULL OR end_date >= %s::TIMESTAMP);
"""
# execute query and save the route data to a local variable
cursor.execute(query, (route, str(date), str(date)))
data = cursor.fetchone()[0]['route']
# pd.Timestamp.dayofweek returns 0 for monday and 6 for Sunday
# the actual serviceClass strings are defined by Nextbus
# these are the only 3 service classes we can currently observe,
# if others are published later then this will need to change
if(date.dayofweek <= 4):
serviceClass = 'wkd'
elif(date.dayofweek == 5):
serviceClass = 'sat'
else:
serviceClass = 'sun'
# the schedule format has two entries for each serviceClass,
# one each for inbound and outbound.
# return each entry in the data list with the correct serviceClass
return [sched for sched in data if (sched['serviceClass'] == serviceClass)]
def extract_schedule_tables(route_data):
"""
converts raw schedule data to two pandas dataframes
columns are stops, and rows are individual trips
returns inbound_df, outbound_df
"""
# assuming 2 entries, but not assuming order
if(route_data[0]['direction'] == 'Inbound'):
inbound = 0
else:
inbound = 1
# extract a list of stops to act as columns
inbound_stops = [s['tag'] for s in route_data[inbound]['header']['stop']]
# initialize dataframe
inbound_df = pd.DataFrame(columns=inbound_stops)
# extract each row from the data
if type(route_data[inbound]['tr']) == list:
# if there are multiple trips in a day, structure will be a list
i = 0
for trip in route_data[inbound]['tr']:
for stop in trip['stop']:
# '--' indicates the bus is not going to that stop on this trip
if stop['content'] != '--':
inbound_df.at[i, stop['tag']] = stop['content']
# increment for the next row
i += 1
else:
# if there is only 1 trip in a day, the object is a dict and
# must be handled slightly differently
for stop in route_data[inbound]['tr']['stop']:
if stop['content'] != '--':
inbound_df.at[0, stop['tag']] = stop['content']
# flip between 0 and 1
outbound = int(not inbound)
# repeat steps for the outbound schedule
outbound_stops = [s['tag'] for s in route_data[outbound]['header']['stop']]
outbound_df = pd.DataFrame(columns=outbound_stops)
if type(route_data[outbound]['tr']) == list:
i = 0
for trip in route_data[outbound]['tr']:
for stop in trip['stop']:
if stop['content'] != '--':
outbound_df.at[i, stop['tag']] = stop['content']
i += 1
else:
for stop in route_data[outbound]['tr']['stop']:
if stop['content'] != '--':
outbound_df.at[0, stop['tag']] = stop['content']
# return both dataframes
return inbound_df, outbound_df
def get_common_intervals(df_list):
"""
takes route schedule tables and returns both the average interval (mean)
and the most common interval (mode), measured in number of minutes
takes a list of dataframes and combines them before calculating statistics
intended to combine inbound and outbound schedules for a single route
"""
# ensure we have at least one dataframe
if len(df_list) == 0:
raise ValueError("Function requires at least one dataframe")
# append all dataframes in the array together
df = df_list[0].copy()
for i in range(1, len(df_list)):
        # append returns a new frame; reassign and reset the index so .at[i, col] below stays unique
        df = df.append(df_list[i].copy(), ignore_index=True)
# convert all values to datetime so we can get an interval easily
for col in df.columns:
df[col] = pd.to_datetime(df[col])
# initialize a table to hold each individual interval
intervals = pd.DataFrame(columns=df.columns)
intervals['temp'] = range(len(df))
# take each column and find the intervals in it
for col in df.columns:
prev_time = np.nan
for i in range(len(df)):
# find the first non-null value and save it to prev_time
if pd.isnull(prev_time):
prev_time = df.at[i, col]
# if the current time is not null, save the interval
            elif not pd.isnull(df.at[i, col]):
intervals.at[i, col] = (df.at[i, col] - prev_time).seconds / 60
prev_time = df.at[i, col]
# this runs without adding a temp column, but the above loop runs 3x as
# fast if the rows already exist
intervals = intervals.drop('temp', axis=1)
# calculate the mean of the entire table
mean = intervals.mean().mean()
# calculate the mode of the entire table, the [0][0] at the end is
# because scipy.stats returns an entire ModeResult class
mode = stats.mode(intervals.values.flatten())[0][0]
return mean, mode
# Route class definition
# Copied from previous work, has extra methods that are not all used in this notebook
class Route:
def __init__(self, route_id, date, connection):
"""
The Route class loads the route configuration data for a particular
route, and makes several accessor methods available for it.
Parameters:
route_id (str or int)
- The route id to load
date (str or pandas.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
"""
self.route_id = str(route_id)
self.date = pd.to_datetime(date)
# load the route data
self.route_data, self.route_type, self.route_name = load_route(self.route_id, self.date, connection)
# extract stops info and rearrange columns to be more human readable
# note: the stop tag is what was used in the schedule data, not stopId
self.stops_table = pd.DataFrame(self.route_data['stop'])
self.stops_table = self.stops_table[['stopId', 'tag', 'title', 'lat', 'lon']]
# extract route path, list of (lat, lon) pairs
self.path_coords = extract_path(self.route_data)
# extract stops table
self.stops_table, self.inbound, self.outbound = extract_stops(self.route_data)
def load_route(route, date, connection):
"""
loads raw route data from the database
Parameters:
route (str or int)
- The route id to load
date (str or pd.Timestamp)
- Which date to load
- Converted with pandas.to_datetime so many formats are acceptable
Returns route_data (dict), route_type (str), route_name (str)
"""
# ensure correct parameter types
route = str(route)
date = pd.to_datetime(date)
# DB connection
cursor = connection.cursor()
# build selection query
query = """
SELECT route_name, route_type, content
FROM routes
WHERE rid = %s AND
begin_date <= %s::TIMESTAMP AND
(end_date IS NULL OR end_date > %s::TIMESTAMP);
"""
# execute query and return the route data
cursor.execute(query, (route, str(date), str(date)))
result = cursor.fetchone()
return result[2]['route'], result[1], result[0]
def extract_path(route_data):
"""
Extracts the list of path coordinates for a route.
The raw data stores this as an unordered list of sub-routes, so this
function deciphers the order they should go in and returns a single list.
"""
# KNOWN BUG
# this approach assumed all routes were either a line or a loop.
# routes that have multiple sub-paths meeting at a point break this,
# route 24 is a current example.
# I'm committing this now to get the rest of the code out there
# extract the list of subpaths as just (lat,lon) coordinates
# also converts from string to float (raw data has strings)
path = []
for sub_path in route_data['path']:
path.append([(float(p['lat']), float(p['lon']))
for p in sub_path['point']])
# start with the first element, remove it from path
final = path[0]
path.pop(0)
# loop until the first and last coordinates in final match
counter = len(path)
done = True
while final[0] != final[-1]:
# loop through the sub-paths that we haven't yet moved to final
for i in range(len(path)):
# check if the last coordinate in final matches the first
# coordinate of another sub-path
if final[-1] == path[i][0]:
# match found, move it to final
# leave out the first coordinate to avoid duplicates
final = final + path[i][1:]
path.pop(i)
break # break the for loop
# protection against infinite loops, if the path never closes
counter -= 1
if counter < 0:
done = False
break
if not done:
# route did not connect in a loop, perform same steps backwards
# to get the rest of the line
for _ in range(len(path)):
# loop through the sub-paths that we haven't yet moved to final
for i in range(len(path)):
# check if the first coordinate in final matches the last
# coordinate of another sub-path
if final[0] == path[i][-1]:
# match found, move it to final
# leave out the last coordinate to avoid duplicates
final = path[i][:-1] + final
path.pop(i)
break # break the for loop
# some routes may have un-used sub-paths
# Route 1 for example has two sub-paths that are almost identical, with the
# same start and end points
# if len(path) > 0:
# print(f"WARNING: {len(path)} unused sub-paths")
# return the final result
return final
def extract_stops(route_data):
"""
Extracts a dataframe of stops info
Returns the main stops dataframe, and a list of inbound and outbound stops
in the order they are intended to be on the route
"""
stops = pd.DataFrame(route_data['stop'])
directions = pd.DataFrame(route_data['direction'])
# Change stop arrays to just the list of numbers
for i in range(len(directions)):
directions.at[i, 'stop'] = [s['tag'] for s in directions.at[i, 'stop']]
# Find which stops are inbound or outbound
inbound = []
for stop_list in directions[directions['name'] == "Inbound"]['stop']:
for stop in stop_list:
if stop not in inbound:
inbound.append(stop)
outbound = []
for stop_list in directions[directions['name'] == "Outbound"]['stop']:
for stop in stop_list:
            if stop not in outbound:
outbound.append(stop)
# Label each stop as inbound or outbound
stops['direction'] = ['none'] * len(stops)
for i in range(len(stops)):
if stops.at[i, 'tag'] in inbound:
stops.at[i, 'direction'] = 'inbound'
elif stops.at[i, 'tag'] in outbound:
stops.at[i, 'direction'] = 'outbound'
# Convert from string to float
stops['lat'] = stops['lat'].astype(float)
stops['lon'] = stops['lon'].astype(float)
return stops, inbound, outbound
def get_location_data(rid, begin, end, connection):
# Build query to select location data
query = f"""
SELECT *
FROM locations
WHERE rid = '{rid}' AND
timestamp > '{begin}'::TIMESTAMP AND
timestamp < '{end}'::TIMESTAMP
ORDER BY id;
"""
# read the query directly into pandas
locations = sqlio.read_sql_query(query, connection)
# Convert those UTC timestamps to local PST by subtracting 7 hours
locations['timestamp'] = locations['timestamp'] - pd.Timedelta(hours=7)
# return the result
return locations
# Written by Austie
def fcc_projection(loc1, loc2):
"""
function to apply FCC recommended formulae
for calculating distances on earth projected to a plane
significantly faster computationally, negligible loss in accuracy
Args:
loc1 - a tuple of lat/lon
loc2 - a tuple of lat/lon
"""
lat1, lat2 = loc1[0], loc2[0]
lon1, lon2 = loc1[1], loc2[1]
mean_lat = (lat1+lat2)/2
delta_lat = lat2 - lat1
delta_lon = lon2 - lon1
    # cos() expects radians; the FCC K1/K2 formulas use the mean latitude as an angle,
    # so convert it from degrees first (delta_lat/delta_lon stay in degrees -- K1/K2 are km per degree)
    mean_lat = mean_lat * 0.017453292519943295
    k1 = 111.13209 - 0.56605*cos(2*mean_lat) + .0012*cos(4*mean_lat)
    k2 = 111.41513*cos(mean_lat) - 0.09455*cos(3*mean_lat) + 0.00012*cos(5*mean_lat)
distance = sqrt((k1*delta_lat)**2 + (k2*delta_lon)**2)
return distance
def clean_locations(locations, stops):
"""
takes a dataframe of bus locations and a dataframe of
returns the locations dataframe with nearest stop added
"""
# remove old location reports that would be duplicates
df = locations[locations['age'] < 60].copy()
# remove rows with no direction value
df = df[~pd.isna(df['direction'])]
# shift timestamps according to the age column
df['timestamp'] = df.apply(shift_timestamp, axis=1)
# Make lists of all inbound or outbound stops
inbound_stops = stops[stops['direction'] == 'inbound'].reset_index(drop=True)
outbound_stops = stops[stops['direction'] == 'outbound'].reset_index(drop=True)
# initialize new columns for efficiency
df['closestStop'] = [0] * len(df)
df['distance'] = [0.0] * len(df)
for i in df.index:
if '_I_' in df.at[i, 'direction']:
candidates = inbound_stops
elif '_O_' in df.at[i, 'direction']:
candidates = outbound_stops
else:
# Skip row if bus is not found to be either inbound or outbound
continue
bus_coord = (df.at[i, 'latitude'], df.at[i, 'longitude'])
# Find closest stop within candidates
# Assume the first stop
closest = candidates.iloc[0]
distance = fcc_projection(bus_coord, (closest['lat'], closest['lon']))
# Check each stop after that
for _, row in candidates[1:].iterrows():
# find distance to this stop
dist = fcc_projection(bus_coord, (row['lat'], row['lon']))
if dist < distance:
# closer stop found, save it
closest = row
distance = dist
# Save the tag of the closest stop and the distance to it
df.at[i, 'closestStop'] = closest['tag']
df.at[i, 'distance'] = distance
return df
def shift_timestamp(row):
""" subtracts row['age'] from row['timestamp'] """
return row['timestamp'] - pd.Timedelta(seconds=row['age'])
def get_stop_times(locations, route):
"""
returns a dict, keys are stop tags and values are lists of timestamps
that describe every time a bus was seen at that stop
"""
# Initialize the data structure I will store results in
stop_times = {}
vids = {}
for stop in route.inbound + route.outbound:
stop_times[str(stop)] = []
for vid in locations['vid'].unique():
# Process the route one vehicle at a time
df = locations[locations['vid'] == vid]
# process 1st row on its own
prev_row = df.loc[df.index[0]]
stop_times[str(prev_row['closestStop'])].append(prev_row['timestamp'])
# loop through the rest of the rows, comparing each to the previous one
for i, row in df[1:].iterrows():
if row['direction'] != prev_row['direction']:
# changed directions, don't compare to previous row
stop_times[str(row['closestStop'])].append(row['timestamp'])
else:
# same direction, compare to previous row
if '_I_' in row['direction']: # get correct stop list
stoplist = route.inbound
else:
stoplist = route.outbound
current = stoplist.index(str(row['closestStop']))
previous = stoplist.index(str(prev_row['closestStop']))
gap = current - previous
if gap > 1: # need to interpolate
diff = (row['timestamp'] - prev_row['timestamp'])/gap
counter = 1
for stop in stoplist[previous+1:current]:
# save interpolated time
stop_times[str(stop)].append(prev_row['timestamp'] + (counter * diff))
# increase counter for the next stop
# example: with 2 interpolated stops, gap would be 3
# 1st diff is 1/3, next is 2/3
counter += 1
if row['closestStop'] != prev_row['closestStop']:
# only save time if the stop has changed,
# otherwise the bus hasn't moved since last time
stop_times[str(row['closestStop'])].append(row['timestamp'])
# advance for next row
prev_row = row
# Sort each list before returning
for stop in stop_times.keys():
stop_times[stop].sort()
return stop_times
def get_bunches_gaps(stop_times, schedule, bunch_threshold=.2, gap_threshold=1.5):
"""
returns a dataframe of all bunches and gaps found
default thresholds define a bunch as 20% and a gap as 150% of scheduled headway
"""
# Initialize dataframe for the bunces and gaps
problems = pd.DataFrame(columns=['type', 'time', 'duration', 'stop'])
counter = 0
# Set the bunch/gap thresholds (in seconds)
bunch_threshold = (schedule.common_interval * 60) * bunch_threshold
gap_threshold = (schedule.common_interval * 60) * gap_threshold
for stop in stop_times.keys():
# ensure we have any times at all for this stop
if len(stop_times[stop]) == 0:
#print(f"Stop {stop} had no recorded times")
continue # go to next stop in the loop
# save initial time
prev_time = stop_times[stop][0]
# loop through all others, comparing to the previous one
for time in stop_times[stop][1:]:
diff = (time - prev_time).seconds
if diff <= bunch_threshold:
# bunch found, save it
problems.at[counter] = ['bunch', prev_time, diff, stop]
counter += 1
elif diff >= gap_threshold:
problems.at[counter] = ['gap', prev_time, diff, stop]
counter += 1
prev_time = time
return problems
# this uses sequential search, could speed up with binary search if needed,
# but it currently uses hardly any time in comparison to other steps
def helper_count(expected_times, observed_times):
""" Returns the number of on-time stops found """
# set up early/late thresholds (in seconds)
early_threshold = pd.Timedelta(seconds=1*60) # 1 minute early
late_threshold = pd.Timedelta(seconds=4*60) # 4 minutes late
count = 0
for stop in expected_times.columns:
for expected in expected_times[stop]:
if pd.isna(expected):
continue # skip NaN values in the expected schedule
# for each expected time...
# find first observed time after the early threshold
found_time = None
early = expected - early_threshold
# BUG: some schedule data may have stop tags that are not in the inbound
# or outbound definitions for a route. That would throw a key error here.
# Example: stop 14148 on route 24
# current solution is to ignore those stops with the try/except statement
try:
for observed in observed_times[stop]:
if observed >= early:
found_time = observed
break
except:
continue
# if found time is still None, then all observed times were too early
# if found_time is before the late threshold then we were on time
if (not pd.isna(found_time)) and found_time <= (expected + late_threshold):
# found_time is within the on-time window
count += 1
return count
def calculate_ontime(stop_times, schedule):
""" Returns the on-time percentage and total scheduled stops for this route """
# Save schedules with timestamp data types, set date to match
inbound_times = schedule.inbound_table
for col in inbound_times.columns:
inbound_times[col] = pd.to_datetime(inbound_times[col]).apply(
lambda dt: dt.replace(year=schedule.date.year,
month=schedule.date.month,
day=schedule.date.day))
outbound_times = schedule.outbound_table
for col in outbound_times.columns:
outbound_times[col] = pd.to_datetime(outbound_times[col]).apply(
lambda dt: dt.replace(year=schedule.date.year,
month=schedule.date.month,
day=schedule.date.day))
# count times for both inbound and outbound schedules
on_time_count = (helper_count(inbound_times, stop_times) +
helper_count(outbound_times, stop_times))
# get total expected count
total_expected = inbound_times.count().sum() + outbound_times.count().sum()
# return on-time percentage
return (on_time_count / total_expected), total_expected
def bunch_gap_graph(problems, interval=10):
"""
returns data for a graph of the bunches and gaps throughout the day
problems - the dataframe of bunches and gaps
interval - the number of minutes to bin data into
returns
{
"times": [time values (x)],
"bunches": [bunch counts (y1)],
"gaps": [gap counts (y2)]
}
"""
# set the time interval
interval = pd.Timedelta(minutes=interval)
# rest of code doesn't work if there are no bunches or gaps
# return the empty graph manually
if len(problems) == 0:
# generate list of times according to the interval
start = pd.Timestamp('today').replace(hour=0, minute=0, second=0)
t = start
times = []
while t.day == start.day:
times.append(str(t.time())[:5])
t += interval
return {
"times": times,
"bunches": [0] * len(times),
"gaps": [0] * len(times)
}
# generate the DatetimeIndex needed
index = pd.DatetimeIndex(problems['time'])
df = problems.copy()
df.index = index
# lists for graph data
bunches = []
gaps = []
times = []
# set selection times
start_date = problems.at[0, 'time'].replace(hour=0, minute=0, second=0)
select_start = start_date
select_end = select_start + interval
while select_start.day == start_date.day:
# get the count of each type of problem in this time interval
count = df.between_time(select_start.time(), select_end.time())['type'].value_counts()
# append the counts to the data list
if 'bunch' in count.index:
bunches.append(int(count['bunch']))
else:
bunches.append(0)
if 'gap' in count.index:
gaps.append(int(count['gap']))
else:
gaps.append(0)
# save the start time for the x axis
times.append(str(select_start.time())[:5])
# increment the selection window
select_start += interval
select_end += interval
return {
"times": times,
"bunches": bunches,
"gaps": gaps
}
###Output
_____no_output_____
###Markdown
Necessary for geojson
###Code
def create_simple_geojson(bunches, rid):
geojson = {'type': 'FeatureCollection',
'bunches': create_geojson_features(bunches, rid)}
return geojson
def create_geojson_features(df, rid):
"""
function to generate list of geojson features
for plotting vehicle locations on timestamped map
Expects a dataframe containing lat/lon, vid, timestamp
returns list of basic geojson formatted features:
{
type: Feature
geometry: {
type: Point,
coordinates:[lat, lon]
},
properties: {
time: timestamp
stopId: stop id
}
}
"""
# initializing empty features list
features = []
# iterating through df to pull coords, stopid, timestamp
# and format for json
for index, row in df.iterrows():
feature = {
'type': 'Feature',
'geometry': {
'type':'Point',
'coordinates':[round(row.lon, 4), round(row.lat, 4)]
},
'properties': {
'time': row.time.__str__().rstrip('0').rstrip('.')
if '.' in row.time.__str__()
else row.time.__str__(),
'stopId': row.stopId.__str__()
}
}
features.append(feature) # adding point to features list
return features
###Output
_____no_output_____
###Markdown
 Generating report JSON. Now updated to include geojson for mapping. Tested geojson generation with: single report generation, all routes generation, and aggregate generation. Tested mapping bunches with the generated geojson w/ Folium; everything should plug-and-play. Updated to reduce filesize: removed unnecessary properties from geojson features, quantized coordinates, stripped all unnecessary whitespace, and stripped trailing zeros from interpolated timestamps, for a total of ~42% reduction in geojson size.
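The whitespace part of that reduction comes from the compact `json.dumps` separators used in `generate_report` below; here is a minimal sketch with made-up values to show the effect:
```python
import json

feature = {
    "type": "Feature",
    "geometry": {"type": "Point",
                 "coordinates": [round(-122.419424, 4), round(37.774929, 4)]},  # quantized coordinates
    "properties": {"time": "2020-06-01 08:15:30", "stopId": "14148"},
}
geojson = {"type": "FeatureCollection", "bunches": [feature]}

pretty = json.dumps(geojson, indent=2)                 # whitespace-heavy encoding
compact = json.dumps(geojson, separators=(',', ':'))   # same payload, no extra whitespace
print(len(compact), len(pretty))
```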
###Code
def generate_report(rid, date):
"""
Generates a daily report for the given rid and date
rid : (str)
the route id to generate a report for
date : (str or pd.Datetime)
the date to generate a report for
returns a dict of the report info
"""
# get begin and end timestamps for the date
begin = pd.to_datetime(date).replace(hour=7)
end = begin + pd.Timedelta(days=1)
# Load schedule and route data
schedule = Schedule(rid, begin, cnx)
route = Route(rid, begin, cnx)
# Load bus location data
locations = get_location_data(rid, begin, end, cnx)
# Apply cleaning function (this usually takes 1-2 minutes)
locations = clean_locations(locations, route.stops_table)
# Calculate all times a bus was at each stop
stop_times = get_stop_times(locations, route)
# Find all bunches and gaps
problems = get_bunches_gaps(stop_times, schedule)
# Calculate on-time percentage
on_time, total_scheduled = calculate_ontime(stop_times, schedule)
# Build result dict
count_times = 0
for key in stop_times.keys():
count_times += len(stop_times[key])
    # Number of recorded intervals ( sum(len(each list of times)) - number of lists of times)
intervals = count_times-len(stop_times)
bunches = len(problems[problems['type'] == 'bunch'])
gaps = len(problems[problems['type'] == 'gap'])
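    # coverage combines the on-time stop count and the bunch count, relative to all scheduled stops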
coverage = (total_scheduled * on_time + bunches) / total_scheduled
# Isolating bunches, merging with stops to assign locations to bunches
stops = route.stops_table.copy()
bunch_df = problems[problems.type.eq('bunch')]
bunch_df = bunch_df.merge(stops, left_on='stop', right_on='tag', how='left')
# Creating GeoJSON of bunch times / locations
geojson = create_simple_geojson(bunch_df, rid)
# int/float conversions are because the json library doesn't work with numpy types
result = {
'route': rid,
'route_name': route.route_name,
'route_type': route.route_type,
'date': str(pd.to_datetime(date)),
'num_bunches': bunches,
'num_gaps': gaps,
'total_intervals': intervals,
'on_time_percentage': float(round(on_time * 100, 2)),
'scheduled_stops': int(total_scheduled),
'coverage': float(round(coverage * 100, 2)),
# line_chart contains all data needed to generate the line chart
'line_chart': bunch_gap_graph(problems, interval=10),
# route_table is an array of all rows that should show up in the table
# it will be filled in after all reports are generated
'route_table': [
{
'route_name': route.route_name,
'bunches': bunches,
'gaps': gaps,
'on-time': float(round(on_time * 100, 2)),
'coverage': float(round(coverage * 100, 2))
}
],
'geojson': json.dumps(geojson, separators=(',', ':'))
}
return result
%%time
report_1 = generate_report(rid='1', date='2020/6/1')
report_1['geojson']
report_714 = generate_report(rid='714', date='2020/6/1')
###Output
_____no_output_____
###Markdown
Generating report for all routes
###Code
def get_active_routes(date):
"""
returns a list of all active route id's for the given date
"""
query = """
SELECT DISTINCT rid
FROM routes
WHERE begin_date <= %s ::TIMESTAMP AND
(end_date IS NULL OR end_date > %s ::TIMESTAMP);
"""
cursor.execute(query, (date, date))
return [result[0] for result in cursor.fetchall()]
%%time
# since this is not optimized yet, this takes about 20 minutes
# choose a day
date = '2020-6-1'
# get all active routes
route_ids = get_active_routes(date)
# get the report for all routes
all_reports = []
for rid in route_ids:
try:
all_reports.append(generate_report(rid, date))
print("Generated report for route", rid)
except: # in case any particular route throws an error
print(f"Route {rid} failed")
len(all_reports)
# generate aggregate reports
# read existing reports into a dataframe to work with them easily
df = pd.DataFrame(all_reports)
# for each aggregate type
types = list(df['route_type'].unique()) + ['All']
for t in types:
# filter df to the routes we are adding up
if t == 'All':
filtered = df
else:
filtered = df[df['route_type'] == t]
    # on-time percentage: weighted average of the per-route percentages (already on a 0-100 scale)
    count_on_time = (filtered['on_time_percentage'] * filtered['scheduled_stops']).sum()
    on_time_perc = count_on_time / filtered['scheduled_stops'].sum()
    # coverage: (sum([all on-time stops]) + sum([all bunches])) / sum([all scheduled stops]);
    # count_on_time is already 100x the raw on-time count, so the bunch count is scaled by 100
    # to stay consistent with the per-route definition in generate_report
    coverage = (count_on_time + filtered['num_bunches'].sum() * 100) / filtered['scheduled_stops'].sum()
# aggregate the graph object
# x-axis is same for all
first = filtered.index[0]
times = filtered.at[first, 'line_chart']['times']
# sum up all y-axis values
bunches = pd.Series(filtered.at[first, 'line_chart']['bunches'])
gaps = pd.Series(filtered.at[first, 'line_chart']['gaps'])
for chart in filtered[1:]['line_chart']:
bunches += pd.Series(chart['bunches'])
gaps += pd.Series(chart['gaps'])
# save a new report object
new_report = {
'route': t,
'route_name': t,
'route_type': t,
'date': all_reports[0]['date'],
'num_bunches': int(filtered['num_bunches'].sum()),
'num_gaps': int(filtered['num_gaps'].sum()),
'total_intervals': int(filtered['total_intervals'].sum()),
'on_time_percentage': float(round(on_time_perc, 2)),
'scheduled_stops': int(filtered['scheduled_stops'].sum()),
'coverage': float(round(coverage, 2)),
'line_chart': {
'times': times,
'bunches': list(bunches),
            'gaps': list(gaps)
},
'route_table': [
{
'route_name': t,
'bunches': int(filtered['num_bunches'].sum()),
'gaps': int(filtered['num_gaps'].sum()),
'on-time': float(round(on_time_perc, 2)),
'coverage': float(round(coverage, 2))
}
]
}
# TODO: add route_table rows to the aggregate report
all_reports.append(new_report)
all_reports[0].keys()
###Output
_____no_output_____ |
Hito2Mineria.ipynb | ###Markdown
 1. Introduction. Online sales in Chile grew 119% in the last week of March 2021, when lockdowns began in the country, [according to the Santiago Chamber of Commerce](https://forbescentroamerica.com/2020/04/23/el-efecto-de-covid-19-en-el-ecommerce/). This forced a change in business strategy for many companies nationwide. Since visiting a store in person was impossible, a large number of companies had to set aside the classic retail store and were forced to strengthen their e-commerce sales channel. The goal is to deliver personalized information to customers, via e-mail, based on the products they have previously purchased, and also to gather more information about customers for future projects. **Blockstore** is a chain of stores in Chile focused on selling sneakers, accessories, and apparel from the best urban brands. Given the global pandemic and the 2019 social unrest, Block had to adapt to the online sales channel, so the need for data analysis, and data mining in particular, arises in a changing national context where the social crisis, the pandemic, more money in circulation, and an explosive growth in online sales make differentiated strategies for each customer necessary. That is why we decided to study customer purchasing behavior nationwide. We find these data interesting to study in order to generate more sales by targeting a differentiated business strategy at each specific customer group according to their tastes. 2. Data Exploration
###Code
# Set our working directory to ~/RDATA and load the libraries
%%R
library(ggplot2)
library(dplyr)
library(tidyverse)
###Output
R[write to console]:
Attaching package: ‘dplyr’
R[write to console]: The following objects are masked from ‘package:stats’:
filter, lag
R[write to console]: The following objects are masked from ‘package:base’:
intersect, setdiff, setequal, union
R[write to console]: ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.1 ──
R[write to console]: ✔ tibble 3.1.5 ✔ purrr 0.3.4
✔ tidyr 1.1.4 ✔ stringr 1.4.0
✔ readr 2.0.2 ✔ forcats 0.5.1
R[write to console]: ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag() masks stats::lag()
###Markdown
 **We load the original data.** However, these data are not clean because customers did not enter their information correctly, so a data-cleaning step was carried out to make them easier to work with.
###Code
# (Do not re-run this, please)
%%R
pedidos_preeliminar <- read_csv("/content/orders.csv")
pedidos_detalle_preeliminar <-read_csv("/content/order_detail.csv")
###Output
_____no_output_____
###Markdown
 As a summary, the number of rows and columns of each dataset is shown below
###Code
%%R
print(nrow(pedidos_preeliminar))
print(ncol(pedidos_preeliminar))
print(nrow(pedidos_detalle_preeliminar))
print(ncol(pedidos_detalle_preeliminar))
###Output
[1] 104955
[1] 68
[1] 126704
[1] 9
###Markdown
 For the cleaning, empty rows, *dummy* records created by the company, and other inconsistent or repeated values were removed. The column names were also refactored for a better understanding of the dataset.
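A rough sketch of that kind of cleaning in pandas (the exact filters and the original column names of the raw export are assumptions here, not the code that was actually used):
```python
import pandas as pd

raw = pd.read_csv("orders.csv")
# drop fully empty rows and exact duplicates
clean = raw.dropna(how="all").drop_duplicates()
# drop internal test ("dummy") orders -- the real rule used by the team is not shown here
clean = clean[~clean["Email"].astype(str).str.contains("test", case=False, na=False)]
# rename columns to more readable names (illustrative mapping only)
clean = clean.rename(columns={"order_total": "Precio Pedido"})
clean.to_csv("Orders_OFICIAL.csv", sep=";", index=False)
```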
###Code
%%R
pedidos<-read.csv("/content/Orders_OFICIAL.csv", encoding = "UTF-8", sep=";")
pedidos_detalle <-read.csv("/content/ORDER_DETAIL_OFICIAL.csv", encoding = "UTF-8",sep=";")
pedidos$count <- as.numeric(ave(pedidos$Comuna, pedidos$Comuna, FUN = length))
pedidos_detalle$count.marca <- as.numeric(ave(pedidos_detalle$Marca, pedidos_detalle$Marca, FUN = length))
%%R
# Convert the Fecha.Compra column to Date format.
pedidos$Fecha.Compra <- as.Date(pedidos$Fecha.Compra, format ="%d-%m-%Y")
# Convert the Fecha.Pedido column to Date format. Also add a column with the year.
pedidos_detalle$Fecha.Pedido <- as.Date(pedidos_detalle$Fecha.Pedido, format ="%Y-%m-%d")
pedidos_detalle$anio <- as.numeric(format(pedidos_detalle$Fecha.Pedido,'%Y'))
###Output
_____no_output_____
###Markdown
 2.1. Study of the filtered data. 1. **Broadly speaking, we can compute the average amount a person spends at Block:**
###Code
%%R
nfilas <- nrow(pedidos)
total_vendido <- sum(pedidos$Precio.Pedido)
promedio_pedidos <- total_vendido/nfilas
(promedio_pedidos)
###Output
[1] 45364.45
###Markdown
 2. **First, let's look at how many people order more and less than the average:**
###Code
%%R
# People who order more than the average
pedidos_RM_mayorpromedio <- data.frame(pedidos[pedidos$REGION.CON.CODIGO == "RM" & pedidos$Precio.Pedido > promedio_pedidos,] )
print(nrow(pedidos_RM_mayorpromedio))
# People who order less than the average
pedidos_RM_menorpromedio <- data.frame(pedidos[pedidos$REGION.CON.CODIGO == "RM" & pedidos$Precio.Pedido <= promedio_pedidos,] )
print(nrow(pedidos_RM_menorpromedio))
###Output
[1] 36083
[1] 45759
###Markdown
 3. **The most expensive order in Block's online store.** Using the order number, we can look up the detail of the purchase this customer made in the pedidos_detalle table.
###Code
%%R
pedidos[which.max(pedidos$Precio.Pedido),]
%%R
# use the order number to look up the purchased items
pedido_maximo_detalle <- pedidos_detalle[pedidos_detalle$Numero.Pedido == "#BL4499",]
# check that the amount is consistent across both tables
sum(pedido_maximo_detalle$Precio.Total.Productos)
###Output
[1] 454116
###Markdown
 4. **The number of orders split by region:**
###Code
%%R
ggplot(pedidos, aes(x = REGION.CON.CODIGO),) +
ggtitle("Cantidad de pedidos por región") +
geom_bar()
###Output
_____no_output_____
###Markdown
 5. **Number of orders split by comuna (municipality)**
###Code
%%R
freq_comuna <- data.frame(comuna = pedidos$Comuna, count = pedidos$count)
freq_comuna <- unique(freq_comuna)
freq_comuna <- freq_comuna[order(freq_comuna[,"count"], decreasing = TRUE),]
ggplot(freq_comuna[1:30,], aes(x = reorder(comuna , count),y = count)) + ggtitle("Las 30 Comunas con mas pedidos del pais") + coord_flip() + geom_bar(stat = "identity")
###Output
_____no_output_____
###Markdown
 6. Puente Alto and Las Condes are among the most populated comunas of the Metropolitan Region. Puente Alto is one of the most sneaker-oriented comunas, whereas Las Condes is a comuna that leans more toward formal clothing. Next we look at the number of orders, their average value, and the total amount spent in the respective comunas:
###Code
%%R
pedidos_comuna_Lascondes <- data.frame(pedidos[pedidos$Comuna == "LAS CONDES", ] )
print(nrow(pedidos_comuna_Lascondes))
print(mean(pedidos_comuna_Lascondes$Precio.Pedido))
print(sum(pedidos_comuna_Lascondes$Precio.Pedido))
pedidos_comuna_maipu <- data.frame(pedidos[pedidos$Comuna == "MAIPU", ] )
print(nrow(pedidos_comuna_maipu))
print(mean(pedidos_comuna_maipu$Precio.Pedido))
print(sum(pedidos_comuna_maipu$Precio.Pedido))
###Output
[1] 2777
[1] 47841.46
[1] 132855726
[1] 5462
[1] 42816.93
[1] 233866080
###Markdown
 7. **A history of orders from the start of the dataset up to the most recent date available:**
###Code
%%R
ggplot(pedidos, aes(x = Fecha.Compra) ) + geom_bar() + scale_x_date(date_breaks = "1 month", date_labels = "%b") + labs(title = "Compras a lo largo del tiempo", x = "fecha", y = "Cantidad de pedidos")
###Output
_____no_output_____
###Markdown
 8. **Let's see how much has been sold at Block by brand**
###Code
%%R
freq_marca <- data.frame(marca = pedidos_detalle$Marca, count = pedidos_detalle$count.marca)
freq_marca <- unique(freq_marca)
freq_marca <- freq_marca[order(freq_marca[,"count"], decreasing = TRUE),]
ggplot(freq_marca[1:30,], aes(x = reorder(marca , count), y = count)) + ggtitle("Las 30 Marcas con mas pedidos del pais") + coord_flip() + geom_bar(stat = "identity")
###Output
_____no_output_____
###Markdown
 9. **Now we look at the prices of what Block sells, by brand:**
###Code
%%R
precio_marca <- data.frame(marca = pedidos_detalle$Marca, precio = pedidos_detalle$Precio.Producto)
p1 <- ggplot(pedidos_detalle, aes(x = Marca, y = Precio.Producto)) + geom_boxplot()+ coord_flip() + ggtitle("Precios de la marca")
p1
###Output
_____no_output_____
###Markdown
 Hito 2 (Milestone 2). For this milestone we will work with the *pandas* library in *Python*, so the databases must be re-imported and cleaned again in order to organize the ideas that will be carried out in this *Hito 2*. We import the libraries needed for the exploration:
###Code
# Librerias principales para la exploración:
import csv
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
 After this, we drop the address and RUT columns to protect each buyer's privacy; we also have a client ID that identifies each unique person, similar to the *RUT*.
###Code
# Re-import everything
pedidos2 = pd.read_csv("Orders_OFICIAL.csv", sep=";")
pedidos_detalles2 = pd.read_csv("ORDER_DETAIL_OFICIAL.csv", sep=";")
pedidos2 = pedidos2.drop(columns="Direccion 1")
pedidos2 = pedidos2.drop(columns="Direccion 2")
pedidos2 = pedidos2.drop(columns="RUT")
pedidos2 = pedidos2.drop(columns="Email")
pedidos2.head()
###Output
_____no_output_____
###Markdown
 How does the purchase price relate to the place the order was made from?
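A more compact way to get the same per-comuna statistics, using the columns already present in `pedidos2`:
```python
resumen = (pedidos2.groupby("Comuna")["Precio Pedido"]
                   .agg(["mean", "count"])
                   .sort_values("count", ascending=False))
resumen.head(10)
```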
###Code
# Comunas
comunas = pedidos2['Comuna'].unique()
# print(comunas) -> some of them are numbers
promedios = []
for comuna in comunas:
promedios.append(pedidos2[pedidos2['Comuna'] == comuna]['Precio Pedido'].mean())
# Average order price per comuna
print(pedidos2[pedidos2['Comuna'] == 'CHONCHI']['Precio Pedido'].mean(), len(pedidos2[pedidos2['Comuna'] == 'CHONCHI']['Precio Pedido']))
print(pedidos2[pedidos2['Comuna'] == 'SANTIAGO']['Precio Pedido'].mean(), len(pedidos2[pedidos2['Comuna'] == 'SANTIAGO']['Precio Pedido']))
print(pedidos2[pedidos2['Comuna'] == 'VITACURA']['Precio Pedido'].mean(), len(pedidos2[pedidos2['Comuna'] == 'VITACURA']['Precio Pedido']))
print(pedidos2[pedidos2['Comuna'] == 'IQUIQUE']['Precio Pedido'].mean(), len(pedidos2[pedidos2['Comuna'] == 'IQUIQUE']['Precio Pedido']))
zip_iterator = zip(comunas, promedios)
diccionario = dict(zip_iterator)
#print(diccionario)
# LL and RM
pedidos2[pedidos2['Comuna'] == 'CHONCHI']
###Output
70290.14285714286 7
41775.52635119982 4459
45061.81488203267 1102
49104.514011208965 1249
###Markdown
 Which brands are bought most in each area? Questions and problems. Given the previous exploration and its original motivation, we formulate questions that can be answered through data mining and tied back to the problem stated in the motivation: 1. **Which brands are bought together?** 2. **Is there a trend in the second purchase with respect to the first (is the same brand bought twice)?** 3. **Which product might someone buy who has already purchased product x?** 4. **Customer behavior with respect to the number of purchases and their total amount.** Question 1: to answer this question we will use Association Rules, based on the market-basket problem; this has not been covered in class yet, so the exact steps are not defined (a sketch is given below). Question 2: since this is a kind of "recommendation", we will use classification, because what matters here is the purchase trend once the first purchase has been made. For this we will have to generate a new column with the product type (for example: t-shirt, sneakers, cap, etc.) and use it to predict which product type a customer's second purchase will be. The idea is basically to give the trainer the table with only first-purchase data and see whether it can predict the next one. Question 3: analogously to question 1, we will use Association Rules based on the market-basket problem; the exact steps are not defined yet (see the sketch below). Question 4: Preliminary results for Question 4.
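For questions 1 and 3 a natural starting point is the classic market-basket pipeline. A sketch using `mlxtend` (an assumption -- any frequent-itemset library would work), with one basket per order built from the order-detail table; the column names are assumed to match `ORDER_DETAIL_OFICIAL.csv`:
```python
from mlxtend.frequent_patterns import apriori, association_rules

# one row per order, one boolean column per brand
baskets = (pedidos_detalles2
           .groupby(["Numero Pedido", "Marca"])
           .size()
           .unstack(fill_value=0) > 0)

frequent = apriori(baskets, min_support=0.01, use_colnames=True)
rules = association_rules(frequent, metric="lift", min_threshold=1.2)
rules.sort_values("lift", ascending=False).head()
```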
###Code
# create a new table from the previous one
pedidos3 = pedidos2[['ID Cliente', 'Cantidad Pedidos Cliente', 'Total Gastado Cliente']].copy()
print(pedidos3.size)
pedidos3 = pedidos3.drop_duplicates(subset = ['ID Cliente'])
print(pedidos3.size)
pedidos3 = pedidos3.drop(['ID Cliente'], axis=1)
pedidos3.head()
#K-means
from sklearn.cluster import KMeans
from sklearn import datasets
import matplotlib.pyplot as plt
import numpy as np
random_state = 20
X = pd.DataFrame(pedidos3).to_numpy()
X
# Answer
sse = []
clusters = list(range(1, 10))
for k in clusters:
kmeans = KMeans(n_clusters=k).fit(X)
sse.append(kmeans.inertia_)
plt.plot(clusters, sse, marker="o")
plt.title("Metodo del codo de 1 a 10 clusters")
plt.grid(True)
plt.show()
kmeans3 = KMeans(n_clusters=3, random_state=100).fit(X)
plt.scatter(X[:, 0], X[:, 1], c=kmeans3.labels_)
plt.title("K-Means")
plt.ylabel("Monto Gastado")
plt.xlabel("Cantidad de pedidos")
plt.show()
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.cluster import AgglomerativeClustering
complete = linkage(X, method="complete")
single = linkage(X, method="single")
average = linkage(X, method="average")
ward = linkage(X, method="ward")
dendrogram(complete)
plt.title("Linkage: Complete")
plt.show()
dendrogram(single)
plt.title("Linkage: Single")
plt.show()
dendrogram(average)
plt.title("Linkage: Average")
plt.show()
dendrogram(ward)
plt.title("Linkage: Ward")
plt.show()
single_all = AgglomerativeClustering(n_clusters=None, linkage="ward", distance_threshold=0).fit(X)
print(single_all.n_clusters_)
single_6 = AgglomerativeClustering(n_clusters=6, linkage="ward").fit(X)
print(single_6.n_clusters_)
# Answer: SINGLE
plt.scatter(X[:, 0], X[:, 1], c=single_6.labels_)
plt.title("Hierarchical: single, 6 clusters")
plt.show()
# Answer: WARD
# fit the 5-cluster ward model used for the plot below
ward_5 = AgglomerativeClustering(n_clusters=5, linkage="ward").fit(X)
plt.scatter(X[:, 0], X[:, 1], c=ward_5.labels_)
plt.title("Hierarchical: ward, 5 clusters")
plt.show()
###Output
_____no_output_____
###Markdown
 Contributions. **María Hernández:** in charge of finding the dataset. **Lung Pang:** in charge of the GitHub repository and the plots. **Cristóbal Saldías:** in charge of the report and its organization. **Víctor Vidal:** in charge of the presentation.
###Code
pedidos2.head()
pedidos2.sort_values(by=['ID Cliente'])
pedidos_detalles2.head()
###Output
_____no_output_____ |
Mathematics/Linear Algebra/0.0b Supplement of MIT Course - Concept Explanations with Problems_and_Questions to Solve/02-Matrix-=-some-row-vectors.ipynb | ###Markdown
Matrix = some row vectors  This work by Jephian Lin is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
def make_blobs(N=150, k=3, d=2, seed=None):
"""
Input:
N: an integer, number of samples
k: an integer, number of blobs
d: an integer, dimension of the space
Output:
a dataset X of shape (N, d)
"""
np.random.seed(seed)
X = np.random.randn(N,d)
blob_size = N // k
centers = np.random.randn(k, d) * 3
for i in range(k):
left = blob_size * i
right = blob_size * (i+1) if i != k-1 else N
X[left:right] += centers[i]
return X
###Output
_____no_output_____
###Markdown
Main idea The **inner product** of two vectors $${\bf x} = \begin{bmatrix}x_1\\ \vdots \\ x_n\end{bmatrix}\text{ and }{\bf y} = \begin{bmatrix}y_1\\ \vdots \\ y_n\end{bmatrix}$$ is $$\langle{\bf x}, {\bf y}\rangle = \sum_{i=1}^n x_iy_i.$$ Let $$A = \begin{bmatrix} - & {\bf r}_1 & - \\ ~ & \vdots & ~ \\ - & {\bf r}_m & - \\\end{bmatrix}$$be an $m\times n$ matrix and ${\bf v}$ a vector in $\mathbb{R}^n$. Then $$(A{\bf v})_i = \langle{\bf r}_i, {\bf v}\rangle.$$ Side stories - Geometry of $\langle{\bf r}, {\bf v}\rangle = k$- hyperplane, affine plane, and their normal vectors- `np.random`, `plt.scatter`- mask in NumPy- classification Experiments Exercise 1Let ```pythonx = np.array([1,1,1])y = np.array([1,2,3])``` 1(a)Use `np.dot` to find the inner product of `x` and `y`.
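A quick numerical check of the main idea above (a minimal sketch):
```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])            # rows r1 and r2
v = np.array([1, 0, -1])

print(A.dot(v))                       # [-2 -2]
print([np.dot(row, v) for row in A])  # the same numbers, one inner product per row
```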
###Code
### your answer here
###Output
_____no_output_____
###Markdown
1(b)Use `*` and `np.sum` to find the inner product of `x` and `y`.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
1(c)Reshape `y` to `(1,3)` and `x` to `(3,1)` . Verify that ${\bf y}^\top{\bf x} = \langle{\bf x}, {\bf y}\rangle$.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 2Let ```pythonvs = 5*np.random.randn(2,10000)r = np.array([1,1])``` 2(a)Use `plt.scatter` to plot the 10000 points in `vs` and use `plt.arrow` to plot the vector `r` . Note: You might need to set `head_width` properly for drawing an arrow. Note: Put `plt.axis('equal')` at the beginning to make the two axes on the same scale.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
2(b)Let `prod = np.dot(r, vs)` . Use the `c` and `cmap` keywords in `plt.scatter` to color each points in `vs` by the values in `prod` .
###Code
### your answer here
###Output
_____no_output_____
###Markdown
2(c)Let ```pythonmask = (np.abs(prod) < 0.1)plane = vs[:, mask]```Plot the points in `plane`.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 3Run the following code.```python%matplotlib notebookvs = 5*np.random.randn(3,100000)r1 = np.array([1,-1,0])r2 = np.array([1,0,-1])r3 = np.array([0,1,-1])b1,b2,b3 = 5,0,0mask1 = (np.abs(np.dot(r1, vs) - b1) < 0.1)mask2 = (np.abs(np.dot(r2, vs) - b2) < 0.1)mask3 = (np.abs(np.dot(r3, vs) - b3) < 0.1)plane1 = vs[:,mask1]plane2 = vs[:,mask2]plane3 = vs[:,mask3]ax = plt.axes(projection='3d')ax.set_xlim(-5,5)ax.set_ylim(-5,5)ax.set_zlim(-5,5)ax.scatter(plane1[0], plane1[1], plane1[2])ax.scatter(plane2[0], plane2[1], plane2[2])ax.scatter(plane3[0], plane3[1], plane3[2])```Does there exist a solution to the system of linear equations? $\langle{\bf r}_1, {\bf v}\rangle = b_1$ $\langle{\bf r}_2, {\bf v}\rangle = b_2$ $\langle{\bf r}_3, {\bf v}\rangle = b_3$ Play with other `r`'s and `b`'s.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercises Exercise 4Let ```pythonx = np.array([0,0,1,1])y = np.array([1,1,1,1])```and $\theta$ the angle between the two vectors. 4(a)It is known that $\langle{\bf x}, {\bf y}\rangle = \|{\bf x}\|\|{\bf y}\|\cos\theta$ for any vectors, where $\|{\bf v}\| = \sqrt{\langle{\bf v}, {\bf v}\rangle}$ is the length of ${\bf v}$. Use `np.arccos` to find $\theta$.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
4(b)Let ${\bf z} = {\bf x} - {\bf y}$. The [law of cosines](https://en.wikipedia.org/wiki/Law_of_cosines) says that $$\|{\bf z}\|^2 = \|{\bf x}\|^2 + \|{\bf y}\|^2 - 2\|{\bf x}\|\|{\bf y}\|\cos\theta.$$ Use the law of cosines to find $\theta$.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
 Exercise 5Let ```pythondata = np.array([[20, 10], [16, 1], [16, 2], [14, 10], [13, 5]])```where the first column is the temperature T and the second column is the wind speed V. Suppose the apparent temperature is `AT = 1.04*T - 0.65*V` . Generate a boolean array that indicates whether each sample has `AT > 15` or not.
###Code
### your answer here
###Output
_____no_output_____
###Markdown
Exercise 6Let `X = make_blobs(k=2)` . Note that `X` has shape `(150,2)` . 6(a)Consider each row in `X` as a point. Use the code below to plot them. ```python%matplotlib inlineplt.axis('equal')plt.scatter(X[:,0], X[:,1])```
###Code
### your answer here
###Output
_____no_output_____
###Markdown
6(b)Based on your drawing, do you think there is a affine plane that can separate the two blobs? If no, run `X = make_blobs(k=2)` again. If yes, find a normal vector `r = np.array([?, ?])` and a bias `b` so that the affine plane $\langle{\bf r}, {\bf x}\rangle = b$ separates the two blobs. Use `clr = np.sign(np.dot(X, r) - b)` to color the points.
###Code
### your answer here
###Output
_____no_output_____ |
english-malay-translation/5.dilated-conv-encoder-dilated-conv-self-attention.ipynb | ###Markdown
 Let's test our beam search. This is our English string,
###Code
' '.join([dictionary_english['rev_dictionary'][i] for i in test_X[0]])
###Output
_____no_output_____
###Markdown
 Predicted Bahasa (Malay) translation of the English string,
###Code
t = sess.run(model.generate, feed_dict = {model.X: [test_X[0]]})[0,0,:]
' '.join([dictionary_bahasa['rev_dictionary'][i] for i in t])
###Output
_____no_output_____
###Markdown
Actual translation,
###Code
' '.join([dictionary_bahasa['rev_dictionary'][i] for i in test_Y[0]])
###Output
_____no_output_____ |
notebooks/SHAP_analysis.ipynb | ###Markdown
SHAP values analysis Imports
###Code
from pandas import DataFrame
from standardizer.CustomStandardizer import CustomStandardizer
from rdkit.Chem import MolFromSmiles
import random
from featureImportance.shapValues import ShapValues
from matplotlib.colors import ColorConverter
from rdkit.Chem import Draw
import joblib
from Datasets.Datasets import NumpyDataset
import json
import numpy as np
import pandas as pd
from utils import utils
from loaders.Loaders import CSVLoader
import os
from compoundFeaturization.rdkitFingerprints import MorganFingerprint
from pathlib import Path
import shap
from numpy import genfromtxt
from rdkit.ML.Descriptors import MoleculeDescriptors
from rdkit.Chem import Descriptors  # needed by get_features_to_keep for the 2d descriptor names
from copy import copy
dir_path = Path().absolute()
###Output
_____no_output_____
###Markdown
Sweeteners preparation
###Code
carbohydrates = [
["beta-D-Xylopyranose", "C1[C@H]([C@@H]([C@H]([C@@H](O1)O)O)O)O", 1],
["beta-D-Fructopyranose", "C1[C@H]([C@H]([C@@H]([C@](O1)(CO)O)O)O)O", 1],
["beta-D-Glucopyranose", "C([C@@H]1[C@H]([C@@H]([C@H]([C@@H](O1)O)O)O)O)O", 1],
["alpha-D-Glucopyranose", "C([C@@H]1[C@H]([C@@H]([C@H]([C@H](O1)O)O)O)O)O", 1],
["alpha-D-Galactose", "C([C@@H]1[C@@H]([C@@H]([C@H]([C@H](O1)O)O)O)O)O", 1],
["Maltitol", "C([C@@H]1[C@H]([C@@H]([C@H]([C@H](O1)O[C@H]([C@@H](CO)O)[C@@H]([C@H](CO)O)O)O)O)O)O", 1],
["Lactitol", "C([C@@H]1[C@@H]([C@@H]([C@H]([C@@H](O1)O[C@H]([C@@H](CO)O)[C@@H]([C@H](CO)O)O)O)O)O)O", 1] ,
["cellobitol", "C([C@@H]1[C@H]([C@@H]([C@H]([C@@H](O1)O[C@H]([C@@H](CO)O)[C@@H]([C@H](CO)O)O)O)O)O)O", 1],
["cyclamate", "C1CCC(CC1)NS(=O)(=O)O", 1],
["5-Nitro-2-propoxyaniline", "CCCOC1=C(C=C(C=C1)[N+](=O)[O-])N", 1],
["6-chloro-D-tryptophan","CCCOC1=C(C=C(C=C1)[N+](=O)[O-])N",1],
["alitame", "C[C@H](C(=O)NC1C(SC1(C)C)(C)C)NC(=O)[C@H](CC(=O)O)N", 1],
["aspartame", "COC(=O)[C@H](CC1=CC=CC=C1)NC(=O)[C@H](CC(=O)O)N", 1],
["neotame", "CC(C)(C)CCN[C@@H](CC(=O)O)C(=O)N[C@@H](CC1=CC=CC=C1)C(=O)OC", 1],
["saccharin", "C1=CC=C2C(=C1)C(=O)NS2(=O)=O", 1],
["dulcin", "CCOC1=CC=C(C=C1)NC(=O)N", 1],
]
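# keep only the SMILES string (second field) of each entry; these molecules are later labelled as sweet (y = 1)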
carbohydrates = [i[1] for i in carbohydrates]
def get_mols_from_csv(input_file, structure_field):
df = pd.read_csv(input_file)
structures = df.loc[:,structure_field]
return structures
def highlight_atoms_with_query(lst,smarts):
to_highlight = []
for mol in lst:
mol = MolFromSmiles(mol)
final_tuple = ()
lst_idx = mol.GetSubstructMatches(smarts)
for idx in lst_idx:
final_tuple = final_tuple + idx
to_highlight.append(final_tuple)
return to_highlight
bitter_mols = get_mols_from_csv("../resources/data/clean_bitter_molecules.csv", "mols")
def run_shap_values_on_molecules(dataset, model, max_evals, columns_names):
X = pd.DataFrame(dataset.X, columns=columns_names)
model = model.model
explainer = shap.explainers.Permutation(model.predict, X)
shap_values = explainer(X, max_evals=max_evals)
return shap_values
def get_features_to_keep(model_path, fs_config_file_path):
if "all" in model_path:
feature_selection = "none"
if "2d" in model_path:
features_to_keep = [i for i in range(208)]
else:
features_to_keep = [i for i in range(2048)]
elif "KbestFS" in model_path:
feature_selection = "KbestFS"
f = open(fs_config_file_path,)
data = json.load(f)
features_to_keep = sorted(data["KbestFS"])
elif "SelectFromModelFS" in model_path:
feature_selection = "SelectFromModelFS"
f = open(fs_config_file_path,)
data = json.load(f)
features_to_keep = sorted(data["SelectFromModelFS"])
elif "Boruta" in model_path:
feature_selection = "Boruta"
f = open(fs_config_file_path,)
data = json.load(f)
features_to_keep = sorted(data["Boruta"])
if "2d" in model_path:
calc = MoleculeDescriptors.MolecularDescriptorCalculator([x[0] for x in Descriptors._descList])
features_names = calc.GetDescriptorNames()
features_names = [features_names[i] for i in features_to_keep]
else:
features_names = [f"bit {i}" for i in features_to_keep]
return features_to_keep,features_names
def get_feature_and_ecfp4_draw(features_number, molecule):
return utils.draw_morgan_bit_on_molecule(molecule, features_number, chiral=True)
def generate_conformers_and_calculate_distances(mol, it, atom_pairs):
atom_pairs_result = {}
for i,j in atom_pairs:
results = np.empty(it)
for k in range(it):
m2, res, rmslist = get_conformers(mol, nConfGen=20,pruneRmsThreshVal=1)
conf = m2.GetConformer()
at1Coords = np.array(conf.GetAtomPosition(i))
at2Coords = np.array(conf.GetAtomPosition(j))
results[k] = np.linalg.norm(at2Coords - at1Coords)
average = np.mean(results)
std = np.std(results)
atom_pairs_result[str(i) + "_" + str(j)] = (average,std)
return atom_pairs_result
###Output
_____no_output_____
###Markdown
import test datasets
###Code
test_dataset = os.path.join("../resources/models", "test_dataset.csv")
loader = CSVLoader(test_dataset,
mols_field='mols',
labels_fields='y')
test_dataset = loader.create_dataset()
dataset_sweeteners = test_dataset.mols[test_dataset.y == 1]
dataset_non_sweeteners = test_dataset.mols[test_dataset.y == 0]
train_dataset = os.path.join("../resources/models", "train_dataset.csv")
loader = CSVLoader(train_dataset,
mols_field='mols',
labels_fields='y')
train_dataset = loader.create_dataset()
###Output
_____no_output_____
###Markdown
get bitter molecules
###Code
from rdkit.Chem import AllChem
from rdkit import DataStructs
bitter_molecules = []
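# For each known bitter molecule, look for a non-sweetener in the test set whose Morgan fingerprint
# matches it exactly (Tanimoto similarity of 1); stop once 40 such molecules have been collected.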
for i, bitter_mol in enumerate(bitter_mols):
bitter_mol = MolFromSmiles(bitter_mol)
fp1 = AllChem.GetMorganFingerprintAsBitVect(bitter_mol,2)
for smiles2 in dataset_non_sweeteners:
mol2 = MolFromSmiles(smiles2)
fp2 = AllChem.GetMorganFingerprintAsBitVect(mol2,2)
similarity = DataStructs.FingerprintSimilarity(fp1,fp2)
if similarity == 1 and smiles2 not in bitter_molecules:
bitter_molecules.append(mol2)
break
if len(bitter_molecules) == 40:
break
bitter_molecules_sample = random.choices(bitter_mols, k=16)
sweeteners_dataset = NumpyDataset(mols = carbohydrates, ids=np.array([i for i in range(len(carbohydrates))]), y=np.array([1]*len(carbohydrates)))
bitter_dataset = NumpyDataset(mols = bitter_molecules_sample, ids=np.array([i for i in range(len(bitter_molecules_sample))]), y=np.array([0]*len(carbohydrates)))
dataset_to_test = sweeteners_dataset.merge([bitter_dataset])
from rdkit import Chem
from rdkit.Chem import AllChem
def get_conformers(mol, nConfGen=20,pruneRmsThreshVal=1):
i=0
m2=Chem.AddHs(mol)
AllChem.EmbedMultipleConfs(m2,
numConfs=nConfGen,
pruneRmsThresh=pruneRmsThreshVal,
ignoreSmoothingFailures=True,
numThreads=0)
res =AllChem.MMFFOptimizeMoleculeConfs(m2,
maxIters=2000,
numThreads=0)
rmslist = []
AllChem.AlignMolConformers(m2, RMSlist=rmslist)
return m2, res, rmslist
def mol_with_atom_index(mol):
for atom in mol.GetAtoms():
atom.SetAtomMapNum(atom.GetIdx())
return mol
###Output
_____no_output_____
###Markdown
 ECFP4 - SVM - SHAP analysis - WARNING - Run only if you want to repeat the SHAP process; if you just want to reproduce the work delivered here, skip to the next section
###Code
dataset_to_test_path = os.path.join("../resources","SHAP_analysis/ecfp4_svm/set.csv")
df = pd.read_csv(dataset_to_test_path)
features = df.columns[3:]
loader = CSVLoader(dataset_to_test_path,
features_fields=list(features),
mols_field='mols',
labels_fields='y')
dataset_to_test = loader.create_dataset()
model_path = os.path.join("../resources/models", "ecfp4", "all_svm_model")
model = joblib.load(model_path)
standardisation_params = {
'REMOVE_ISOTOPE': True,
'NEUTRALISE_CHARGE': True,
'REMOVE_STEREO': False,
'KEEP_BIGGEST': True,
'ADD_HYDROGEN': False,
'KEKULIZE': True,
'NEUTRALISE_CHARGE_LATE': True}
CustomStandardizer(params = standardisation_params).standardize(dataset_to_test)
MorganFingerprint(chiral=True).featurize(dataset_to_test)
fs_config_file_path = os.path.join(dir_path,"stratified_split/", "ecfp4", "feature_selection_config.json")
features_to_keep, feature_names = get_features_to_keep(model_path,fs_config_file_path)
dataset_to_test.select_features(features_to_keep)
print(len(features_to_keep))
shap_values_structures = run_shap_values_on_molecules(dataset_to_test, model, max_evals=10000, columns_names=feature_names)
###Output
Standardizing datapoint 0
Featurizing datapoint 0
2048
###Markdown
 Save the results only if you have run the previous cell
###Code
np.savetxt("SHAP_analysis/ecfp4_svm/shap_values.csv", shap_values_structures.values, delimiter=",")
np.savetxt("SHAP_analysis/ecfp4_svm/base_values.csv", shap_values_structures.base_values, delimiter=",")
np.savetxt("SHAP_analysis/ecfp4_svm/data.csv", shap_values_structures.data, delimiter=",")
dataset_to_test.save_to_csv("SHAP_analysis/ecfp4_svm/set.csv")
###Output
_____no_output_____
###Markdown
Load results and analyse
###Code
model_path = os.path.join("../resources/models", "ecfp4", "all_svm_model")
model = joblib.load(model_path)
standardisation_params = {
'REMOVE_ISOTOPE': True,
'NEUTRALISE_CHARGE': True,
'REMOVE_STEREO': False,
'KEEP_BIGGEST': True,
'ADD_HYDROGEN': False,
'KEKULIZE': True,
'NEUTRALISE_CHARGE_LATE': True}
CustomStandardizer(params = standardisation_params).standardize(dataset_to_test)
MorganFingerprint(chiral=True).featurize(dataset_to_test)
placeholder_dataset = copy(dataset_to_test)
placeholder_dataset.select([1,16])
explainer = shap.explainers.Permutation(model.model.predict_proba, placeholder_dataset.X)
shap_values_structures = explainer(placeholder_dataset.X)
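# The explainer call above is only used to build an Explanation object with the right structure;
# its values, base_values and data are replaced below with the precomputed results loaded from CSV.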
values = genfromtxt("../resources/SHAP_analysis/ecfp4_svm/shap_values.csv", delimiter=',')
base_values = genfromtxt("../resources/SHAP_analysis/ecfp4_svm/base_values.csv", delimiter=',')
data = genfromtxt("../resources/SHAP_analysis/ecfp4_svm/data.csv", delimiter=',')
shap_values_structures.values=values
shap_values_structures.base_values=list(base_values)
shap_values_structures.data=data
shap.plots.beeswarm(shap_values_structures)
os.makedirs("SHAP_analysis/ecfp4_svm/", exist_ok=True)
dataset_to_test.save_to_csv("SHAP_analysis/ecfp4_svm/set.csv")
###Output
_____no_output_____
###Markdown
Cyclamate - the first case study
###Code
mol_index = 8
mol_with_atom_index(MolFromSmiles(dataset_to_test.mols[mol_index]))
###Output
_____no_output_____
###Markdown
Get the active bits
###Code
bits = []
for i,bit in enumerate(dataset_to_test.X[mol_index]):
if bit == 1:
bits.append(i)
print(bits)
###Output
[2, 92, 350, 428, 455, 592, 650, 807, 890, 911, 926, 1019, 1028, 1152, 1325, 1461, 1476, 1630, 1634]
###Markdown
X hydrophobic part of AH-B-X theory
###Code
get_feature_and_ecfp4_draw(890, dataset_to_test.mols[mol_index])
get_feature_and_ecfp4_draw(2, dataset_to_test.mols[mol_index])
x = [2,890]
cumulative_shap = 0
for bit in x:
cumulative_shap += shap_values_structures.values[mol_index][bit]
print(f"The cumulative SHAP value for the X structure was: {cumulative_shap}.")
###Output
_____no_output_____
###Markdown
B part of AH-B-X theory
###Code
get_feature_and_ecfp4_draw(807, dataset_to_test.mols[mol_index])
b = [807]
cumulative_shap = 0
for bit in b:
cumulative_shap += shap_values_structures.values[mol_index][bit]
print(f"The cumulative SHAP value for the B structure was: {cumulative_shap}.")
###Output
_____no_output_____
###Markdown
AH part of AH-B-X theory
###Code
get_feature_and_ecfp4_draw(1152, dataset_to_test.mols[mol_index])
ah = [1152]
cumulative_shap = 0
for bit in ah:
cumulative_shap += shap_values_structures.values[mol_index][bit]
print(f"The cumulative SHAP value for the AH structure was: {cumulative_shap}.")
colors = {}
red = [6,5,10]
center_x = [7,8,9]
center_ah = [4]
center_b = [3]
for bit in red:
colors[bit] = ColorConverter().to_rgb('#f99393')
for bit in center_x:
colors[bit] = ColorConverter().to_rgb('skyblue')
for bit in center_b:
colors[bit] = ColorConverter().to_rgb('#bababa')
for bit in center_ah:
colors[bit] = ColorConverter().to_rgb('#3cfa29')
mol = MolFromSmiles(dataset_to_test.mols[mol_index])
m2, res, rmslist = get_conformers(mol, nConfGen=20,pruneRmsThreshVal=1)
AllChem.Compute2DCoords(m2)
m2 = Chem.RemoveHs(m2)
Draw.MolsToGridImage([m2],highlightAtomLists=[colors.keys()],
highlightAtomColors=[colors], molsPerRow=1, subImgSize=(400,400))
###Output
_____no_output_____
###Markdown
Get the distance between the AH and B electronegative atoms
###Code
atom_pairs = [(7,5)]
atom_pairs_result = generate_conformers_and_calculate_distances(MolFromSmiles(dataset_to_test.mols[mol_index]), 50, atom_pairs)
print(f"The average and standard deviation of the distance between AH and B are {atom_pairs_result['7_5'][0]} and {atom_pairs_result['7_5'][1]}")
###Output
The average and standard deviation of the distance between AH and B are 2.542419857064859 and 0.014004046447130603
###Markdown
1-n-propoxy-2-amino-4-nitrobenzene (4000x) - second case study
###Code
mol_index = 9
mol_with_atom_index(MolFromSmiles(dataset_to_test.mols[mol_index]))
###Output
_____no_output_____
###Markdown
B structure of AH-B theory
###Code
get_feature_and_ecfp4_draw(715, dataset_to_test.mols[mol_index])
b = [715]
cumulative_shap = 0
for bit in b:
cumulative_shap += shap_values_structures.values[mol_index][bit]
print(f"The cumulative SHAP value for the B structure was: {cumulative_shap}.")
###Output
The cumulative SHAP value for the B structure was: 0.0754985754985755.
###Markdown
AH substructure of AH-B theory
###Code
get_feature_and_ecfp4_draw(1750, dataset_to_test.mols[mol_index])
ah = [1750]
cumulative_shap = 0
for bit in ah:
cumulative_shap += shap_values_structures.values[mol_index][bit]
print(f"The cumulative SHAP value for the B structure was: {cumulative_shap}.")
###Output
The cumulative SHAP value for the B structure was: 0.0071225071225071174.
###Markdown
The whole structure of AH-B
###Code
get_feature_and_ecfp4_draw(1809, dataset_to_test.mols[mol_index])
print(f"The cumulative SHAP value for the whole AH-B structure was: {shap_values_structures.values[mol_index][1809]}.")
from matplotlib.colors import ColorConverter
colors = {}
red = [7,11,9, 5]
center_whole = [8]
center_ah = [6]
center_b = [10]
for bit in red:
colors[bit] = ColorConverter().to_rgb('#f99393')
for bit in center_whole:
colors[bit] = ColorConverter().to_rgb('skyblue')
for bit in center_b:
colors[bit] = ColorConverter().to_rgb('#bababa')
for bit in center_ah:
colors[bit] = ColorConverter().to_rgb('#3cfa29')
mol = MolFromSmiles(dataset_to_test.mols[mol_index])
m2, res, rmslist = get_conformers(MolFromSmiles(dataset_to_test.mols[mol_index]), nConfGen=20,pruneRmsThreshVal=1)
AllChem.Compute2DCoords(m2)
m2 = Chem.RemoveHs(m2)
Draw.MolsToGridImage([m2],highlightAtomLists=[colors.keys()],
highlightAtomColors=[colors], molsPerRow=1, subImgSize=(400,400))
atom_pairs = [(10,6)]
atom_pairs_result = generate_conformers_and_calculate_distances(MolFromSmiles(dataset_to_test.mols[mol_index]), 50, atom_pairs)
print(f"The average and standard deviation of the distance between AH and B are {atom_pairs_result['10_6'][0]} and {atom_pairs_result['10_6'][1]}")
###Output
The average and standard deviation of the distance between AH and B are 3.2125006117054444 and 0.41890770010990663
###Markdown
β-D-Glucopyranose - third case study
###Code
mol_index = 2
mol_with_atom_index(MolFromSmiles(dataset_to_test.mols[mol_index]))
get_feature_and_ecfp4_draw(1257, dataset_to_test.mols[mol_index])
get_feature_and_ecfp4_draw(807, dataset_to_test.mols[mol_index])
bits = [807,1257]
cumulative_shap = 0
for bit in bits:
cumulative_shap += shap_values_structures.values[mol_index][bit]
print(f"The cumulative SHAP value for the whole AH-B structure was: {cumulative_shap}.")
from rdkit.Chem import Draw
colors = {}
red = [4,6,8,10,1]
center_whole = [7,11,9,5,0]
for bit in center_whole:
colors[bit] = ColorConverter().to_rgb('#bababa')
for bit in red:
colors[bit] = ColorConverter().to_rgb('#f99393')
mol = MolFromSmiles(dataset_to_test.mols[mol_index])
m2, res, rmslist = get_conformers(mol, nConfGen=20,pruneRmsThreshVal=1)
AllChem.Compute2DCoords(m2)
m2 = Chem.RemoveHs(m2)
Draw.MolsToGridImage([m2],highlightAtomLists=[colors.keys()],
highlightAtomColors=[colors], molsPerRow=1, subImgSize=(400,400))
atom_pairs = [(7,5), (11,9), (7,9)]
atom_pairs_result = generate_conformers_and_calculate_distances(MolFromSmiles(dataset_to_test.mols[mol_index]), 50, atom_pairs)
###Output
_____no_output_____
###Markdown
Distance between hydroxyl groups
###Code
atom_pairs_result
###Output
_____no_output_____
###Markdown
Alitame - fourth case study
###Code
mol_index = 11
mol_with_atom_index(MolFromSmiles(dataset_to_test.mols[mol_index]))
get_feature_and_ecfp4_draw(1565, dataset_to_test.mols[mol_index])
dihedral = [1565]
cumulative_shap = 0
for bit in dihedral:
cumulative_shap += shap_values_structures.values[mol_index][bit]
print(f"The cumulative SHAP value for the whole dihedral angle theory substructure was: {cumulative_shap}.")
colors = {}
red = [2,3,11,12,13]
center_whole = [8]
center_dihedral = [1]
for bit in red:
colors[bit] = ColorConverter().to_rgb('#f99393')
for bit in center_dihedral:
colors[bit] = ColorConverter().to_rgb('#3cfa29')
mol = MolFromSmiles(dataset_to_test.mols[mol_index])
Draw.MolsToGridImage([mol],highlightAtomLists=[colors.keys()],
highlightAtomColors=[colors], molsPerRow=1, subImgSize=(400,400))
###Output
_____no_output_____ |
genetic-algorithms/example01_basic_stuff.ipynb | ###Markdown
A simple implementation of genetic algorithms in Python. From [here](https://www.kdnuggets.com/2018/07/genetic-algorithm-implementation-python.html).
###Code
import numpy as np
import pandas as pd
from itertools import product
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Parametrisation
###Code
w = [4,-2,3.5,5,-11,-4.7]
m = len(w) # number of features
n = 100 # population size
p = 15 # mating population size
## Mutation: rate (probability of mutation at a location) and range (scale of mutation when it happens)
mutation_rate = 0.01
mutation_range = (-0.001,0.001)
n_generations = 500
## weights for the problem
pop_shape = [n,m]
pop_range = (-0.004,0.004)
###Output
_____no_output_____
###Markdown
_Testing_: Parent population size must be sufficiently large. See explanations below
###Code
assert p**2 >= n, 'p is too small for this setup, please choose a value of at least %r' % (int(np.ceil(np.sqrt(n))))
###Output
_____no_output_____
###Markdown
A single iteration, annotated PopulationWe start by sampling our initial "population" of weight vectors, based on the weights and ranges we defined before:
###Code
new_population = \
np.random.uniform(
low = 1.0 * min(pop_range),
high = 1.0 * max(pop_range),
size = pop_shape
)
###Output
_____no_output_____
###Markdown
We expect to see a uniform distribution of the weights (shown here as a single long vector)
###Code
pd.Series(new_population.flatten()).plot(kind = 'hist')
###Output
_____no_output_____
###Markdown
Fitness In order to select the best parents for breeding, we have to introduce the concept of _fitness_. In biology this would be a measure (or measures) of how well the parents fit their environments. In our case we simply want to choose the parents that perform best against our pre-defined set of weights $w$. We want to make sure that fitness is bounded (from both ends) so we use the logistic function:$$ \text{fitness}(x_i) = \text{logistic}(x_i^T w) = \frac{1}{1 + \exp(- x_i^T w)} $$
###Code
def fitness_calc(w, pop):
return 1 / (1 + np.exp(-1 * np.sum(pop * w, axis = 1)))
###Output
_____no_output_____
###Markdown
And for our initial sample we can calculate the fitness score of each individual
###Code
fitness = fitness_calc(w, new_population)
pd.Series(fitness).plot(kind = 'hist')
###Output
_____no_output_____
###Markdown
ParentsAnd pick the `p` rows in our current population with the highest fitness for mating (we call them "parents"):
###Code
parents = new_population[(-1*fitness).argsort(), :][range(p),:]
###Output
_____no_output_____
###Markdown
We look at all possible pairs (excluding matching a single parent with themselves)
###Code
potential_parent_pairs = np.asarray(list(product(range(p),range(p))))
potential_parent_pairs = \
potential_parent_pairs[
np.apply_along_axis(
func1d = lambda x: x[0] != x[1], # only if parents are different
axis = 1,
arr = potential_parent_pairs
),
:
]
###Output
_____no_output_____
###Markdown
We pick "randomly" pick `n-p` pairs of parents (seed is set here for reproducibility). _Note:_ We make an assumption here that the number of potential parent pairs $p(p-1)$ is greater or equal to the number of offsprings we need to create to re-populate the next generation (which is $n-p$, assuming we only keep the parents and some of their offsprings) so we are in fact assuming that $p^2 \geq n$ (see the `assert` clause at the beginning of the code)
###Code
np.random.seed(seed=12345)
rand_index = np.random.choice(potential_parent_pairs.shape[0], n-p, replace=False)
parent_pairs_ind = potential_parent_pairs[rand_index,:]
###Output
_____no_output_____
###Markdown
Mating And we pair the selected parents by "splicing" their weights and adding children to the new population
###Code
crossover_point = int(np.floor(m/2))
new_population = parents
for i,j in parent_pairs_ind:
#print('this part', parents[i, 0:crossover_point])
#print('with this part', parents[j, crossover_point:])
#print('makes:', np.concatenate((parents[i, 0:crossover_point], parents[j, crossover_point:])))
new_population = \
np.vstack((
new_population,
np.concatenate((parents[i, 0:crossover_point], parents[j, crossover_point:]))
))
###Output
_____no_output_____
###Markdown
MutationAnd finally let's introduce some mutation: with probability `mutation_rate` we add a random fluctuation to the weight:
###Code
#np.random.seed(seed=12345)
mutation_places = \
np.random.binomial(
n = 1,
p = mutation_rate,
size = new_population.shape[0] * new_population.shape[1]
)
mutation_values = \
np.random.uniform(
high = max(mutation_range),
low = min(mutation_range),
size = new_population.shape[0] * new_population.shape[1]
)
new_population = new_population + np.reshape(mutation_values * mutation_places, newshape = new_population.shape)
###Output
_____no_output_____
###Markdown
Re-evaluationWe can now re-evaluate if the new population has better candidates
###Code
fitness = fitness_calc(w, new_population)
pd.Series(fitness).plot(kind = 'hist')
###Output
_____no_output_____
###Markdown
Let's iterate! We can create a simple `for` loop, but since we want to explore different mutation rates we'll wrap it in a function:
###Code
def fitness_calc(w, pop):
return 1 / (1 + np.exp(-1 * np.sum(w * pop, axis = 1)))
def genetic_iter(pop, w, mutation_rate = 0.01, n_generations = 20, p = 20):
## Collect best relust from each iteration
fitness_tracking = [np.max(fitness_calc(w, pop))]
#print('starting from', fitness_tracking)
for g in range(n_generations):
fitness = fitness_calc(w, pop)
#print(pd.Series(fitness).describe())
## Select parents
parents = pop[(-1*fitness).argsort(), :][range(p),:]
#print('parents', parents)
## Choose parent pairs
potential_parent_pairs = np.asarray(list(product(range(p),range(p))))
potential_parent_pairs = \
potential_parent_pairs[
np.apply_along_axis(
func1d = lambda x: x[0] != x[1], # only if parents are different
axis = 1,
arr = potential_parent_pairs
),
:
]
rand_index = np.random.choice(potential_parent_pairs.shape[0], pop.shape[0]-p, replace=False)
parent_pairs_ind = potential_parent_pairs[rand_index,:]
## Mate parents
crossover_point = int(np.floor(pop.shape[1]/2))
pop = parents
for i,j in parent_pairs_ind:
#print('this part', parents[i, 0:crossover_point])
#print('with this part', parents[j, crossover_point:])
#print('makes:', np.concatenate((parents[i, 0:crossover_point], parents[j, crossover_point:])))
pop = \
np.vstack((
pop,
np.concatenate((parents[i, 0:crossover_point], parents[j, crossover_point:]))
))
## Mutation
mutation_places = np.random.binomial(
n = 1,
p = mutation_rate,
size = pop.shape[0] * pop.shape[1]
)
mutation_values = np.random.uniform(
            high = max(mutation_range),
low = min(mutation_range),
size = pop.shape[0] * pop.shape[1]
)
pop = pop + np.reshape(mutation_values * mutation_places, newshape = pop.shape)
fitness_tracking.append(np.max(fitness_calc(w, pop)))
return fitness_tracking
###Output
_____no_output_____
###Markdown
We reset the population to have a fresh start and run our function:
###Code
new_population = np.random.uniform(low=min(pop_range), high=max(pop_range), size=pop_shape)
###Output
_____no_output_____
###Markdown
And explore different rates of mutation
###Code
def plot_mutation_rate_progress(mr):
return pd.Series(
genetic_iter(
pop = new_population,
w = w,
mutation_rate = mr,
p = p
)
).plot(kind = 'line')
plt.subplot(3,1,1)
plot_mutation_rate_progress(0.001)
plt.subplot(3,1,2)
plot_mutation_rate_progress(0.01)
plt.subplot(3,1,3)
plot_mutation_rate_progress(0.1)
###Output
_____no_output_____
###Markdown
Or the effect of the number of parents
###Code
def plot_parent_progress(p):
return pd.Series(
genetic_iter(
pop = new_population,
w = w,
mutation_rate = 0.01,
p = p
)
).plot(kind = 'line')
plt.subplot(3,1,1)
plot_parent_progress(10)
plt.subplot(3,1,2)
plot_parent_progress(11)
plt.subplot(3,1,3)
plot_parent_progress(12)
###Output
_____no_output_____ |
colab/test_example.ipynb | ###Markdown
`ipytest` Summary`ipytest` aims to make testing code in IPython notebooks easy. At its core, it offers a way to run pytest tests inside the notebook environment. It is also designed to make the transfer of the tests into proper python modules easy by supporting the use of standard `pytest` features.To get started install `ipytest` via:```bashpip install -U ipytest```To use `ipytest`, import it and configure the notebook. In most cases, running `ipytest.autoconfig()` will result in reasonable defaults:- Tests can be executed with the `%run_pytest` and `%run_pytest[clean]` magics- The `pytest` assert rewriting system, which produces nice assert messages, will be integrated into the notebook - If no notebook name is given, a workaround using temporary files will be usedFor more control, pass the relevant arguments to `ipytest.autoconfig()`. For details, see the documentation in the readme.
###Code
!pip install -U ipytest
import ipytest
ipytest.autoconfig()
###Output
_____no_output_____
###Markdown
Execute testsTo execute tests, just decorate the cells containing the tests with the `%%run_pytest[clean]` magic:
###Code
%%run_pytest[clean]
# define the tests
def test_my_func():
assert my_func(0) == 0
assert my_func(1) == 0
assert my_func(2) == 2
assert my_func(3) == 2
def my_func(x):
return x // 2 * 2
###Output
. [100%]
1 passed in 0.02s
###Markdown
Using pytest fixturesCommon pytest features, such as fixtures and parametrize, are supported out of the box:
###Code
%%run_pytest[clean]
import pytest
@pytest.mark.parametrize('input,expected', [
(0, 0),
(1, 0),
(2, 2),
(3, 2),
])
def test_parametrized(input, expected):
assert my_func(input) == expected
@pytest.fixture
def my_fixture():
return 42
def test_fixture(my_fixture):
assert my_fixture == 42
###Output
..... [100%]
5 passed in 0.04s
###Markdown
The difference between `%%run_pytest` and `%%run_pytest[clean]`The notebook interface has a lot of hidden state, since functions stay visible, even if the corresponding code is deleted. For example, renaming a function will keep the old function around. When using `ipytest`, any function defined in the notebook and matching the name scheme `test*` will be discovered. To make test discovery easier to understand, the `%%run_pytest[clean]` magic will delete any object whose name matches the pattern `[Tt]est*` before running the cell. If this behavior is not wanted, use `%%run_pytest`.
###Code
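%%run_pytest
# A minimal sketch (an assumed example added here, not from the original notebook):
# with the plain `%%run_pytest` magic, previously defined tests that are still alive
# in the kernel (e.g. `test_my_func`, `test_parametrized`, `test_fixture` above) are
# collected again alongside the test below. `%%run_pytest[clean]` would first delete
# any object matching `[Tt]est*`.
def test_plain_magic_keeps_old_tests():
    assert my_func(5) == 4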
###Output
_____no_output_____ |
examples/ME321-Thermodynamics_II/00 - Thermo I Review/Ex0.2 Compressor Isentropic Efficiency.ipynb | ###Markdown
Example 0.2: Compressor Isentropic Efficiency*John F. Maddox, Ph.D., P.E.University of Kentucky - Paducah CampusME 321: Engineering Thermodynamics II* Problem StatementAn air compressor has an isentropic efficiency of 80% and operates in a steady-state, steady-flow (SSSF) configuration with negligible changes in kinetic and potential energy. It receives a volumetric flow rate of 3000 CFM with an inlet pressure of $p_1=14.7\,\text{psia}$ and inlet temperature of $T_1=70^\circ\text{F}$. It compresses the air by a factor of 10. Determine(a) Rate of compressor work, $\mathrm{HP}$(b) Rate of entropy generation, $\mathrm{Btu/min^\circ R}$ Solution__[Video Explanation](https://uky.yuja.com/V/Video?v=3071282&node=10458660&a=1537564096&autoplay=1)__In the previous example (Ex 0.1), we wrote two separate python scripts to illustrate the difference between using only the standard python library and using third-party modules. Those two scripts were placed in two self-contained code blocks to help show the separation between the two. In this example we will jump straight to using the third-party libraries to make things easier, and we will spread the python code out across multiple cells with explanatory text (markdown) cells to describe the code rather than using python comments. To execute this code you will need to execute each code block in order, or select the `Run All` option from the `Cell` menu above. Python InitializationWe'll start by importing the libraries we will use for our analysis.
###Code
from math import log
from kilojoule.templates.USCS_R import *
###Output
_____no_output_____
###Markdown
In the first version of the previous example we defined a new variable for each property at each state, but in this example (and future examples) we will instead store the values in a custom Python data structure from `kilojoule`. This is a different approach to variable naming and organization that will allow us to do some interesting things later on. We can use the `kilojoule` library to look up property values for a real fluid using the `realfluid.Properties()` class or for an ideal gas using the `idealgas.Properties()` class. For each of these classes you initialize (instantiate) the class by passing it the name of the fluid and an optional preferred unit system to use when returning values (default: SI in $\mathrm{^\circ C}$). For this case, we will treat the air as an ideal gas, so we will use the `idealgas.Properties()` class.
###Code
air = idealgas.Properties('Air',unit_system='English_R')
###Output
_____no_output_____
###Markdown
Given ParametersWe now define variables to hold our known values.
###Code
# the next three lines show different ways to create a dimensional quantity, but the Quantity() syntax is the preferred method.
T[1] = Quantity(70.,'degF') # inlet temperature
p[1] = 14.7*units.psi # inlet pressure
Vdot[1] = 3000.0*units('ft^3/min') # volumetric flow rate at inlet
p[2] = 10*p[1] # exit pressure
eta_c = Quantity(0.8,'') # isentropic efficiency
Summary();
###Output
_____no_output_____
###Markdown
Assumptions - Negligible changes in kinetic energy - Negligible changes in potential energy - Adiabatic (no heat transfer) - Constant specific heat (cold-air-standard) - Ideal gas (cold-air-standard)We could pull properties for air from the tables in the back of the book since we are assuming constant specific heat and ideal gas behavior, or we can look them up using the `air` reference we created earlier
###Code
%%showcalc
# Ideal Gas
R = air.R # specific gas constant
# Constant thermal properties at room temperature
T_room = Quantity(25,'degC') # room temperature
c_p = air.Cp(T_room) # constant pressure specific heat at room temperature
k = air.k(T_room) # specific heat ratio at room temperature
###Output
_____no_output_____
###Markdown
Isentropic EfficiencyThe isentropic efficiency of a compressor is defined as the ratio of the work that would be required if the compressor were ideal (isentropic) and operating between the same inlet state and exit pressure as the real device to the actual work.$$\eta_c=\frac{\dot{W}_s}{\dot{W}_c}$$where $\dot{W}_s$ is the rate of isentropic work and $\dot{W}_c$ is the rate of actual compressor work. From a first law analysis, we can rewrite the work terms as changes in enthalpy between the inlet and exit states.$$\require{cancel}\eta_c = \frac{\cancel{\dot{m}}(h_{2s}-h_1)}{\cancel{\dot{m}}(h_2-h_1)}$$Applying the constant specific heat assumption allows us to rewrite the changes in enthalpy as $\Delta h=c_p\Delta T$$$\require{cancel}\eta_c = \frac{\cancel{c_p}(T_{2s}-T_1)}{\cancel{c_p}(T_2-T_1)}$$Our first goal is to find the exit temperature, so we solve for $T_2$$$T_2 = T_1 + \frac{T_{2s}-T_1}{\eta_c}$$However, in order to use this equation, we first need to find the temperature of the isentropic exit state, $T_{2s}$. We can find this using ideal gas polytropic relations with $n=k$$$T_{2s}=T_1\left(\frac{p_2}{p_1}\right)^{\frac{k-1}{k}}$$Note that in order to apply the polytropic relation above, we must convert the temperatures to absolute units, i.e. $^\circ\text{R}$
###Code
%%showcalc
T['2s'] = T[1]*(p[2]/p[1])**((k-1)/k)
T[2] = T[1] + (T['2s']-T[1])/eta_c
###Output
_____no_output_____
###Markdown
Now that we know the actual exit temperature, we can find the actual rate of work using the 1st Law.$$\dot{W}_c = \dot{m}c_p(T_2-T_1)$$However, we will also need to use the ideal gas law to find the mass flow rate before applying this equation.$$\dot{m}_1 = \frac{p_1\dot{V}_1}{RT_1}$$
###Code
%%showcalc
mdot = (p[1]*Vdot[1])/(R*T[1].to('degR'))
mdot.ito('lb/min')
Wdot_c = mdot*c_p*(T[2]-T[1])
Wdot_c.ito('hp')
###Output
_____no_output_____
###Markdown
Second Law AnalysisTo determine the entropy generation, we need to do a 2nd Law analysis$$\require{cancel}\cancelto{0}{\frac{dS_{CV}}{dt}}= \sum_j\frac{\cancelto{0}{\dot{Q}_j}}{T_j}+\sum_i\dot{m}_is_i-\sum_e\dot{m}_es_e+\dot{S}_{gen}$$$$\dot{S}_{gen} = \dot{m}(s_e-s_i)$$which can be rewritten using the constant specific heat assumption as$$\dot{S}_{gen} = \dot{m}\left[ c_p\ln\left(\frac{T_2}{T_1}\right)-R\ln\left(\frac{p_2}{p_1}\right)\right]$$where the temperatures and pressure must be in absolute units for this equation to be valid.
###Code
%%showcalc
# Note: we use the `log` function from the math library for $\ln()$
Sdot_gen = mdot*( c_p*log(T[2]/T[1]) - R*log(p[2]/p[1]))
###Output
_____no_output_____
###Markdown
Note that we used `log` from the `math` library (imported in the first cell of this notebook) to evaluate the natural log in the above equation. It is common in many programming languages and higher level textbooks to treat the natural log, $\ln()$, as the default $\log()$, while the base 10 log is only used when it is explicitly stated, i.e. $\log_{10}()$.
###Code
from math import e
log(e)
from math import log10
log10(10)
###Output
_____no_output_____ |
NanoNeuron.ipynb | ###Markdown
NanoNeuron > Seven simple Python functions that will give you a feeling of how machines can actually "learn"._In Javascript:_- [With explanation in English](README.md)- [Русский](README.ru-RU.md) TL;DR[NanoNeuron](https://github.com/trekhleb/nano-neuron) is an _over-simplified_ version of the Neuron concept from Neural Networks. NanoNeuron is trained to convert temperature values from Celsius to Fahrenheit.The [NanoNeuron.js](https://github.com/trekhleb/nano-neuron/blob/master/NanoNeuron.js) code example contains 7 simple JavaScript functions (which touch on model prediction, cost calculation, forward/backwards propagation, and training) that will give you a feeling of how machines can actually "learn". No 3rd-party libraries, no external data-sets or dependencies, only pure and simple functions (the JavaScript originals are ported to Python in this notebook).☝🏻These functions are **NOT**, by any means, a complete guide to machine learning. A lot of machine learning concepts are skipped and over-simplified! This simplification is done on purpose to give the reader a really **basic** understanding and feeling of how machines can learn and ultimately to make it possible for the reader to recognize that it's not "machine learning MAGIC" but rather "machine learning MATH" 🤓. What our NanoNeuron will learnYou've probably heard about Neurons in the context of [Neural Networks](https://en.wikipedia.org/wiki/Neural_network). NanoNeuron is just that but simpler and we're going to implement it from scratch. For simplicity reasons we're not even going to build a network of NanoNeurons. We will have it all working on its own, doing some magical predictions for us. Namely, we will teach this singular NanoNeuron to convert (predict) the temperature from Celsius to Fahrenheit.By the way, the formula for converting Celsius to Fahrenheit is `f = 1.8 * c + 32`. But for now our NanoNeuron doesn't know about it... The NanoNeuron modelLet's implement our NanoNeuron model function. It implements a basic linear dependency between `x` and `y` which looks like `y = w * x + b`. Simply put, our NanoNeuron is a "kid" in a "school" that is being taught to draw a straight line in `XY` coordinates.Variables `w`, `b` are parameters of the model. NanoNeuron knows only about these two parameters of the linear function.These parameters are something that NanoNeuron is going to "learn" during the training process.The only thing that NanoNeuron can do is imitate linear dependency. In its `predict()` method it accepts some input `x` and predicts the output `y`. No magic here.
###Code
class NanoNeuron:
def __init__(self, w, b):
self.w = w
self.b = b
def predict(self, x):
return x * self.w + self.b
###Output
_____no_output_____
###Markdown
_(...wait... [linear regression](https://en.wikipedia.org/wiki/Linear_regression) is it you?)_ 🧐 Celsius to Fahrenheit conversionThe temperature value in Celsius can be converted to Fahrenheit using the following formula: `f = 1.8 * c + 32`, where `c` is a temperature in Celsius and `f` is the calculated temperature in Fahrenheit.
###Code
def celsiusToFahrenheit(c):
w = 1.8
b = 32
return c * w + b
###Output
_____no_output_____
###Markdown
Ultimately we want to teach our NanoNeuron to imitate this function (to learn that `w = 1.8` and `b = 32`) without knowing these parameters in advance.This is what the Celsius to Fahrenheit conversion function looks like: Generating data-setsBefore the training we need to generate **training** and **test data-sets** based on the `celsiusToFahrenheit()` function. Data-sets consist of pairs of input values and correctly labeled output values.> In real life, in most cases, this data would be collected rather than generated. For example, we might have a set of images of hand-drawn numbers and the corresponding set of numbers that explains what number is written on each picture.We will use TRAINING example data to train our NanoNeuron. Before our NanoNeuron grows and is able to make decisions on its own, we need to teach it what is right and what is wrong using training examples.We will use TEST examples to evaluate how well our NanoNeuron performs on data that it didn't see during the training. This is the point where we can see that our "kid" has grown and can make decisions on its own.
###Code
def generateDataSets():
# xTrain -> [0, 1, 2, ...]
# yTrain -> [32, 33.8, 35.6, ...]
xTrain = list(range(100))
yTrain = [celsiusToFahrenheit(x) for x in xTrain]
# xTest -> [0.5, 1.5, 2.5, ...]
# yTest -> [32.9, 34.7, 36.5, ...]
# By starting from 0.5 and using the same step of 1 as we have used for training set
# we make sure that test set has different data comparing to training set.
xTest = [x+0.5 for x in range(100)]
yTest = [celsiusToFahrenheit(x) for x in xTest]
return xTrain, yTrain, xTest, yTest
###Output
_____no_output_____
###Markdown
The cost (the error) of predictionWe need to have some metric that will show us how close our model's prediction is to the correct values. The cost (the mistake) between the correct output value `y` and the `prediction` that our NanoNeuron created will be calculated using the following formula: $cost = \frac{(y - prediction)^{2}}{2}$.This is a simple difference between two values. The closer the values are to each other, the smaller the difference. We're using a power of `2` here just to get rid of negative numbers so that `(1 - 2) ^ 2` would be the same as `(2 - 1) ^ 2`. Division by `2` happens just to simplify the backward propagation formula further (see below).The cost function in this case will be as simple as:
###Code
def predictionCost(y, prediction):
return ((y - prediction) ** 2) / 2 # -> 235.6
###Output
_____no_output_____
###Markdown
Forward propagationTo do forward propagation means to make a prediction for all training examples from the `xTrain` and `yTrain` data-sets and to calculate the average cost of those predictions along the way.We just let our NanoNeuron give its opinion at this point, by simply allowing it to guess how to convert the temperature. It might be stupidly wrong here. The average cost will show us how wrong our model is right now. This cost value is really important: by changing the NanoNeuron parameters `w` and `b` and doing the forward propagation again, we will be able to evaluate whether our NanoNeuron became smarter after the parameter change.The average cost will be calculated using the following formula: $averageCost = \frac{1}{m}\sum_{i=0}^{m-1}\frac{(y_{i} - prediction_{i})^{2}}{2}$, where `m` is the number of training examples (in our case: `100`).Here is how we may implement it in code:
###Code
def forwardPropagation(model, xTrain, yTrain):
m = len(xTrain)
predictions = []
cost = 0
for i in range(m):
prediction = model.predict(xTrain[i])
cost += predictionCost(yTrain[i], prediction)
predictions += [prediction]
cost /= m
return predictions, cost
###Output
_____no_output_____
###Markdown
Backward propagationWhen we know how right or wrong our NanoNeuron's predictions are (based on the average cost at this point) what should we do to make the predictions more precise?The backward propagation gives us the answer to this question. Backward propagation is the process of evaluating the cost of prediction and adjusting the NanoNeuron's parameters `w` and `b` so that next and future predictions would be more precise.This is the place where machine learning looks like magic 🧞♂️. The key concept here is the **derivative** which shows what step to take to get closer to the cost function minimum.Remember, finding the minimum of a cost function is the ultimate goal of the training process. If we find values for `w` and `b` such that our average cost function will be small, it would mean that the NanoNeuron model does really good and precise predictions.Derivatives are a big and separate topic that we will not cover in this article. [MathIsFun](https://www.mathsisfun.com/calculus/derivatives-introduction.html) is a good resource to get a basic understanding of it.One thing about derivatives that will help you to understand how backward propagation works is that the derivative, by its meaning, is a tangent line to the function curve that points toward the direction of the function minimum._Image source: [MathIsFun](https://www.mathsisfun.com/calculus/derivatives-introduction.html)_For example, on the plot above, you can see that if we're at the point of `(x=2, y=4)` then the slope tells us to go `left` and `down` to get to the function minimum. Also notice that the bigger the slope, the faster we should move to the minimum.The derivatives of our `averageCost` function for parameters `w` and `b` look like this (written with the sign convention used in the code below, so that they are *added* to the parameters): $dW = \frac{1}{m}\sum_{i=0}^{m-1}(y_{i} - prediction_{i})\,x_{i}$ and $dB = \frac{1}{m}\sum_{i=0}^{m-1}(y_{i} - prediction_{i})$, where `m` is the number of training examples (in our case: `100`)._You may read more about derivative rules and how to get a derivative of complex functions [here](https://www.mathsisfun.com/calculus/derivatives-rules.html)._
###Code
def backwardPropagation(predictions, xTrain, yTrain):
m = len(xTrain)
dW = 0
dB = 0
for i in range(m):
dW += (yTrain[i] - predictions[i]) * xTrain[i]
dB += (yTrain[i] - predictions[i])
dW /= m
dB /= m
return dW, dB
###Output
_____no_output_____
###Markdown
Training the modelNow we know how to evaluate the correctness of our model for all training set examples (_forward propagation_). We also know how to do small adjustments to parameters `w` and `b` of our NanoNeuron model (_backward propagation_). But the issue is that if we run forward propagation and then backward propagation only once, it won't be enough for our model to learn any laws/trends from the training data. You may compare it with attending only one day of elementary school for the kid. He/she should go to the school not once but day after day and year after year to learn something.So we need to repeat forward and backward propagation for our model many times. That is exactly what the `trainModel()` function does. It is like a "teacher" for our NanoNeuron model:- it will spend some time (`epochs`) with our slightly stupid NanoNeuron model and try to train/teach it,- it will use specific "books" (`xTrain` and `yTrain` data-sets) for training,- it will push our kid to learn harder (faster) by using a learning rate parameter `alpha`A few words about the learning rate `alpha`. This is just a multiplier for the `dW` and `dB` values we have calculated during the backward propagation. So, the derivative pointed us toward the direction we need to take to find a minimum of the cost function (the sign of `dW` and `dB`) and it also showed us how fast we need to go in that direction (the absolute values of `dW` and `dB`). Now we need to multiply those step sizes by `alpha` just to make our movement to the minimum faster or slower. Sometimes if we use big values for `alpha`, we might simply jump over the minimum and never find it.The analogy with the teacher would be that the harder s/he pushes our "nano-kid" the faster our "nano-kid" will learn, but if the teacher pushes too hard, the "kid" will have a nervous breakdown and won't be able to learn anything 🤯.Here is how we're going to update our model's `w` and `b` params: $w = w + \alpha \cdot dW$ and $b = b + \alpha \cdot dB$.And here is our trainer function:
###Code
def trainModel(model, epochs, alpha, xTrain, yTrain):
# The is the history array of how NanoNeuron learns.
costHistory = []
# Let's start counting epochs.
for epoch in range(epochs):
# Forward propagation.
predictions, cost = forwardPropagation(model, xTrain, yTrain)
costHistory += [cost]
# Backward propagation.
dW, dB = backwardPropagation(predictions, xTrain, yTrain)
# Adjust our NanoNeuron parameters to increase accuracy of our model predictions.
model.w += alpha * dW;
model.b += alpha * dB;
return costHistory
###Output
_____no_output_____
###Markdown
Putting all the pieces togetherNow let's use the functions we have created above.Let's create our NanoNeuron model instance. At this moment the NanoNeuron doesn't know what values should be set for parameters `w` and `b`. So let's set up `w` and `b` randomly.
###Code
import random
w = random.random() # i.e. -> 0.9492
b = random.random(); # i.e. -> 0.4570
nanoNeuron = NanoNeuron(w, b)
###Output
_____no_output_____
###Markdown
Generate training and test data-sets.
###Code
xTrain, yTrain, xTest, yTest = generateDataSets()
###Output
_____no_output_____
###Markdown
Let's train the model with small incremental (`0.0005`) steps for `70000` epochs. You can play with these parameters, they are being defined empirically.
###Code
epochs = 70000
alpha = 0.0005
trainingCostHistory = trainModel(nanoNeuron, epochs, alpha, xTrain, yTrain)
###Output
_____no_output_____
###Markdown
Let's check how the cost function was changing during the training. We're expecting that the cost after the training should be much lower than before. This would mean that NanoNeuron got smarter. The opposite is also possible.
###Code
print('Cost before the training: {}'.format(trainingCostHistory[0])) # i.e. -> 5008.895520136429
print('Cost after the training: {}'.format(trainingCostHistory[epochs - 1])); # i.e. -> 0.0000024
###Output
Cost before the training: 5008.895520136429
Cost after the training: 2.498695673841001e-06
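###Markdown
The original article shows the training-cost curve as a figure at this point. Below is a minimal sketch that reproduces it from `trainingCostHistory`; it assumes `matplotlib` is available (it is not imported elsewhere in this notebook), and the log scale on the y axis is a presentation choice, not part of the original code.
###Code
import matplotlib.pyplot as plt

# Plot the cost recorded by trainModel(); x axis in thousands of epochs
epochs_in_thousands = [e / 1000 for e in range(len(trainingCostHistory))]
plt.plot(epochs_in_thousands, trainingCostHistory)
plt.xlabel('Epoch (thousands)')
plt.ylabel('Average cost')
plt.yscale('log')  # the cost drops over many orders of magnitude
plt.show()
###Output
_____no_output_____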
###Markdown
This is how the training cost changes over the epochs. On the `x` axis is the epoch number in thousands.Let's take a look at the NanoNeuron parameters to see what it has learned. We expect the NanoNeuron parameters `w` and `b` to be similar to the ones we have in the `celsiusToFahrenheit()` function (`w = 1.8` and `b = 32`) since our NanoNeuron tried to imitate it.
###Code
print('NanoNeuron parameters: w = {} b = {}'.format(nanoNeuron.w, nanoNeuron.b)) # i.e. -> {w: 1.8, b: 31.99}
###Output
NanoNeuron parameters: w = 1.8000668958484498 b = 31.995562918259417
###Markdown
Evaluate the model accuracy for the test data-set to see how well our NanoNeuron deals with new unknown data predictions. The cost of predictions on test sets is expected to be close to the training cost. This would mean that our NanoNeuron performs well on known and unknown data.
###Code
testPredictions, testCost = forwardPropagation(nanoNeuron, xTest, yTest)
print('Cost on new testing data: {}'.format(testCost)) # i.e. -> 0.0000024
###Output
Cost on new testing data: 2.4609675748651892e-06
###Markdown
Now, since we see that our NanoNeuron "kid" has performed well in the "school" during the training and that he can convert Celsius to Fahrenheit temperatures correctly, even for the data it hasn't seen, we can call it "smart" and ask him some questions. This was the ultimate goal of the entire training process.
###Code
tempInCelsius = 70
customPrediction = nanoNeuron.predict(tempInCelsius)
print('NanoNeuron "thinks" that {}°C in Fahrenheit is: {}'.format(tempInCelsius, customPrediction)) # -> 158.0002
print('Correct answer is: {}'.format(celsiusToFahrenheit(tempInCelsius))) # -> 158
###Output
NanoNeuron "thinks" that 70°C in Fahrenheit is: 158.0002456276509
Correct answer is: 158.0
|
Notebooks/Fuzzy Tipping Problem.ipynb | ###Markdown
Fuzzy Tipping Problem=====
###Code
import numpy as np
import skfuzzy as fuzz
import matplotlib.pyplot as plt
%matplotlib inline
# Generate universe variables
# * Quality and service on subjective ranges [0, 10]
# * Tip has a range of [0, 30] in units of percentage points
x_serv = np.arange(0, 10.5, 0.5)
x_qual = np.arange(0, 10.5, 0.5)
x_tip = np.arange(0, 30.5, 0.5)
# Generate fuzzy membership functions
serv_lo = fuzz.trimf(x_serv, [0, 2.5, 5])
print('serv_lo = ', serv_lo)
serv_md = fuzz.trimf(x_serv, [2.5, 5, 7.5])
serv_hi = fuzz.trimf(x_serv, [5, 7.5, 10])
qual_lo = fuzz.trapmf(x_qual, [0, 0, 1.5, 4])
qual_md = fuzz.trimf(x_qual, [2.5, 5, 7.5])
qual_hi = fuzz.trapmf(x_qual, [6, 8.5, 10, 10])
tip_lo = fuzz.trimf(x_tip, [0, 5, 10])
tip_md = fuzz.trimf(x_tip, [8, 15, 20])
tip_hi = fuzz.trimf(x_tip, [17, 25, 30])
# Visualize these universes and membership functions
fig, (ax1, ax0, ax2) = plt.subplots(nrows=3, figsize=(8, 9))
ax0.plot(x_qual, qual_lo, 'b', linewidth=1.5, label='Rancid')
ax0.plot(x_qual, qual_md, 'r', linewidth=1.5, label='Good')
ax0.plot(x_qual, qual_hi, 'g', linewidth=1.5, label='Delicious')
ax0.set_title('Food quality')
ax0.legend()
ax1.plot(x_serv, serv_lo, 'b', linewidth=1.5, label='Poor')
ax1.plot(x_serv, serv_md, 'r', linewidth=1.5, label='Good')
ax1.plot(x_serv, serv_hi, 'g', linewidth=1.5, label='Great')
ax1.set_title('Service quality')
ax1.legend()
ax2.plot(x_tip, tip_lo, 'b', linewidth=1.5, label='Cheap')
ax2.plot(x_tip, tip_md, 'r', linewidth=1.5, label='Average')
ax2.plot(x_tip, tip_hi, 'g', linewidth=1.5, label='Generous')
ax2.set_title('Tip amount')
ax2.legend()
# Turn off top/right axes
for ax in (ax0, ax1, ax2):
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.tight_layout()
# We need the activation of our fuzzy membership functions at these values.
# The exact values 6.5 and 9.8 do not exist on our universes...
# This is what fuzz.interp_membership exists for!
qual = 8.5
serv = 5
# interpolation
qual_level_lo = fuzz.interp_membership(x_qual, qual_lo, qual)
print('Interpolation qual_level_lo ', qual, ' = ', qual_level_lo)
qual_level_md = fuzz.interp_membership(x_qual, qual_md, qual)
print('Interpolation qual_level_md = ', qual_level_md)
qual_level_hi = fuzz.interp_membership(x_qual, qual_hi, qual)
print('Interpolation qual_level_hi = ', qual_level_hi)
serv_level_lo = fuzz.interp_membership(x_serv, serv_lo, serv)
print('Interpolation serv_level_lo = ', serv_level_lo)
serv_level_md = fuzz.interp_membership(x_serv, serv_md, serv)
print('Interpolation serv_level_md = ', serv_level_md)
serv_level_hi = fuzz.interp_membership(x_serv, serv_hi, serv)
print('Interpolation serv_level_hi = ', serv_level_hi)
# Now we take our rules and apply them.
# Rule 1 concerns: bad food OR service.
# The OR operator means we take the maximum of these two.
active_rule1 = np.fmax(qual_level_lo, serv_level_lo)
print('Rule 1: ', active_rule1,)
# Now we apply this by clipping the top off the corresponding output
# membership function with `np.fmin`
tip_activation_lo = np.fmin(active_rule1, tip_lo) # rule 1 activation is 0 for these inputs, so the 'cheap' membership is clipped to zero
print('Rule 1: ', tip_activation_lo)
# For Rule 2 we connect acceptable service to medium tipping
tip_activation_md = np.fmin(serv_level_md, tip_md)
print('Rule 2: ', tip_activation_md)
# For Rule 3 we connect high service OR high food with high tipping
active_rule3 = np.fmax(qual_level_hi, serv_level_hi)
tip_activation_hi = np.fmin(active_rule3, tip_hi)
print('Rule 3: ', tip_activation_hi)
tip0 = np.zeros_like(x_tip)  # baseline of zeros used for shading the activation plots
print('tip0 = ', tip0)
# Visualize this
fig, ax0 = plt.subplots(figsize=(8, 3))
ax0.fill_between(x_tip, tip0, tip_activation_lo, facecolor='b', alpha=0.7)
ax0.plot(x_tip, tip_lo, 'b', linewidth=0.5, linestyle='--', )
ax0.fill_between(x_tip, tip0, tip_activation_md, facecolor='g', alpha=0.7)
ax0.plot(x_tip, tip_md, 'g', linewidth=0.5, linestyle='--')
ax0.fill_between(x_tip, tip0, tip_activation_hi, facecolor='r', alpha=0.7)
ax0.plot(x_tip, tip_hi, 'r', linewidth=0.5, linestyle='--')
ax0.set_title('Output membership activity')
# Turn off top/right axes
for ax in (ax0,):
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.tight_layout()
# Aggregate all three output membership functions together
aggregated = np.fmax(tip_activation_lo, np.fmax(tip_activation_md, tip_activation_hi))
print('aggregated = ', aggregated)
# Calculate defuzzified result
tip = fuzz.defuzz(x_tip, aggregated, 'centroid')
print('tip = ', tip)
tip_activation = fuzz.interp_membership(x_tip, aggregated, tip) # for plot
# Visualize this
fig, ax0 = plt.subplots(figsize=(8, 4))
ax0.plot(x_tip, tip_lo, 'b', linewidth=0.5, linestyle='--', )
ax0.plot(x_tip, tip_md, 'g', linewidth=0.5, linestyle='--')
ax0.plot(x_tip, tip_hi, 'r', linewidth=0.5, linestyle='--')
ax0.fill_between(x_tip, tip0, aggregated, facecolor='Orange', alpha=0.7)
ax0.plot([tip, tip], [0, tip_activation], 'k', linewidth=2.0, alpha=0.9)
ax0.set_title('Aggregated membership and result (line)')
# Turn off top/right axes
for ax in (ax0,):
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
plt.tight_layout()
print(x_tip)
print(tip_lo)
###Output
[0. 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1. 0.9 0.8 0.7 0.6 0.5 0.4 0.3
0.2 0.1 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. ]
|
MHD_SyntheticLC/NonStationaryMHDStats.ipynb | ###Markdown
Above we have band-limited white noise, i.e. white noise with a non-zero autocorrelation
###Code
lagsEst, sfEst, sferrEst = jiangSimLC_differenced.sf()
plt.errorbar(np.log10(lagsEst),np.log10(sfEst), sferrEst, label=r'$SF(\delta t)$ (MHD est)',
fmt='o', capsize=0, color='#ff7f00', markeredgecolor='none', zorder=0)
# error bars for the MHD data are 1% errors
plt.xlabel(r'$\log_{10}\delta t$', fontsize = 18)
plt.ylabel(r'$\log_{10} SF$',fontsize = 18 )
plt.legend(loc=2, fontsize = 20)
###Output
_____no_output_____ |
3 - TreinamentoAlgoritmos/7. GBM.ipynb | ###Markdown
Gradient Boosting Machine
###Code
import pickle
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from pprint import pprint
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from sklearn.model_selection import ShuffleSplit
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
# Dataframe
path_df = "Pickles/df.pickle"
with open(path_df, 'rb') as data:
df = pickle.load(data)
# features_train
path_features_train = "Pickles/features_train.pickle"
with open(path_features_train, 'rb') as data:
features_train = pickle.load(data)
# labels_train
path_labels_train = "Pickles/labels_train.pickle"
with open(path_labels_train, 'rb') as data:
labels_train = pickle.load(data)
# features_test
path_features_test = "Pickles/features_test.pickle"
with open(path_features_test, 'rb') as data:
features_test = pickle.load(data)
# labels_test
path_labels_test = "Pickles/labels_test.pickle"
with open(path_labels_test, 'rb') as data:
labels_test = pickle.load(data)
print(features_train.shape)
print(features_test.shape)
###Output
(2041, 300)
(876, 300)
###Markdown
Cross-Validation for Hyperparameter tuning
###Code
gb_0 = GradientBoostingClassifier(random_state = 10)
print('Parameters currently in use:\n')
pprint(gb_0.get_params())
###Output
Parameters currently in use:
{'ccp_alpha': 0.0,
'criterion': 'friedman_mse',
'init': None,
'learning_rate': 0.1,
'loss': 'deviance',
'max_depth': 3,
'max_features': None,
'max_leaf_nodes': None,
'min_impurity_decrease': 0.0,
'min_impurity_split': None,
'min_samples_leaf': 1,
'min_samples_split': 2,
'min_weight_fraction_leaf': 0.0,
'n_estimators': 100,
'n_iter_no_change': None,
'random_state': 10,
'subsample': 1.0,
'tol': 0.0001,
'validation_fraction': 0.1,
'verbose': 0,
'warm_start': False}
###Markdown
Randomized Search Cross Validation
###Code
# n_estimators
n_estimators = [200, 800]
# max_features
max_features = ['auto', 'sqrt']
# max_depth
max_depth = [10, 40]
max_depth.append(None)
# min_samples_split
min_samples_split = [10, 30, 50]
# min_samples_leaf
min_samples_leaf = [1, 2, 4]
# learning rate
learning_rate = [.1, .5]
# subsample
subsample = [.5, 1.]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'learning_rate': learning_rate,
'subsample': subsample}
pprint(random_grid)
###Output
{'learning_rate': [0.1, 0.5],
'max_depth': [10, 40, None],
'max_features': ['auto', 'sqrt'],
'min_samples_leaf': [1, 2, 4],
'min_samples_split': [10, 30, 50],
'n_estimators': [200, 800],
'subsample': [0.5, 1.0]}
###Markdown
Then, we'll perform the Random Search:
###Code
# First create the base model to tune
gbc = GradientBoostingClassifier(random_state=10)
# Definition of the random search
random_search = RandomizedSearchCV(estimator=gbc,
param_distributions=random_grid,
n_iter=5,
scoring='accuracy',
cv=2,
verbose=1,
random_state=10)
# Fit the random search model
random_search.fit(features_train, labels_train)
print(random_search.best_params_)
print("")
print(random_search.best_score_)
###Output
{'subsample': 0.5, 'n_estimators': 200, 'min_samples_split': 50, 'min_samples_leaf': 1, 'max_features': 'sqrt', 'max_depth': None, 'learning_rate': 0.1}
0.6001992471817326
###Markdown
Grid Search Cross Validation
###Code
# Create the parameter grid based on the results of random search
max_depth = [35, 40, 45]
max_features = ['auto']
min_samples_leaf = [1]
min_samples_split = [20, 40]
n_estimators = [800]
learning_rate = [.05, .15]
subsample = [.5]
param_grid = {
'max_depth': max_depth,
'max_features': max_features,
'min_samples_leaf': min_samples_leaf,
'min_samples_split': min_samples_split,
'n_estimators': n_estimators,
'learning_rate': learning_rate,
'subsample': subsample
}
# Create a base model
gbc = GradientBoostingClassifier(random_state=8)
# Manually create the splits in CV in order to be able to fix a random_state (GridSearchCV doesn't have that argument)
cv_sets = ShuffleSplit(n_splits = 1, test_size = .33, random_state = 8)
# Instantiate the grid search model
grid_search = GridSearchCV(estimator=gbc,
param_grid=param_grid,
scoring='accuracy',
cv=cv_sets,
verbose=1)
# Fit the grid search to the data
grid_search.fit(features_train, labels_train)
print(grid_search.best_params_)
print("")
print(grid_search.best_score_)
best_gbc = grid_search.best_estimator_
best_gbc
###Output
_____no_output_____
###Markdown
Model fit and performance
###Code
best_gbc.fit(features_train, labels_train)
gbc_pred = best_gbc.predict(features_test)
###Output
_____no_output_____
###Markdown
Training accuracy
###Code
# Training accuracy
print("The training accuracy is: ")
print(accuracy_score(labels_train, best_gbc.predict(features_train)))
###Output
The training accuracy is:
0.9965703086722195
###Markdown
Test accuracy
###Code
# Test accuracy
print("The test accuracy is: ")
print(accuracy_score(labels_test, gbc_pred))
###Output
The test accuracy is:
0.615296803652968
###Markdown
Classification report
###Code
# Classification report
print("Classification report")
print(classification_report(labels_test,gbc_pred))
###Output
Classification report
precision recall f1-score support
0 0.62 0.54 0.58 291
1 0.61 0.72 0.66 289
2 0.62 0.58 0.60 296
accuracy 0.62 876
macro avg 0.62 0.62 0.61 876
weighted avg 0.62 0.62 0.61 876
###Markdown
Confusion matrix
###Code
# Compare a baseline model (default hyperparameters) against the tuned model from the grid search
base_model = GradientBoostingClassifier(random_state = 8)
base_model.fit(features_train, labels_train)
accuracy_score(labels_test, base_model.predict(features_test))
# Accuracy of the tuned model on the same test set
best_gbc.fit(features_train, labels_train)
accuracy_score(labels_test, best_gbc.predict(features_test))
###Output
_____no_output_____
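###Markdown
The cell above only compares the tuned model against a default baseline; it does not actually draw the confusion matrix promised by this section's title. A minimal sketch is shown below, assuming the three classes are simply labelled 0, 1 and 2 (as in the classification report), since the original category names are not available here.
###Code
# Confusion matrix of the tuned model on the test set
conf_matrix = confusion_matrix(labels_test, gbc_pred)
plt.figure(figsize=(6, 5))
sns.heatmap(pd.DataFrame(conf_matrix), annot=True, fmt='d', cmap='Blues')
plt.xlabel('Predicted class')
plt.ylabel('Actual class')
plt.title('Gradient Boosting - confusion matrix')
plt.show()
###Output
_____no_output_____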
###Markdown
We'll create a dataset with a model summary to compare models:
###Code
d = {
'Model': 'Gradient Boosting',
'Training Set Accuracy': accuracy_score(labels_train, best_gbc.predict(features_train)),
'Test Set Accuracy': accuracy_score(labels_test, gbc_pred)
}
df_models_gbc = pd.DataFrame(d, index=[0])
df_models_gbc
###Output
_____no_output_____
###Markdown
Let's save the model and this dataset:
###Code
with open('Models/best_gbc.pickle', 'wb') as output:
pickle.dump(best_gbc, output)
with open('Models/df_models_gbc.pickle', 'wb') as output:
pickle.dump(df_models_gbc, output)
###Output
_____no_output_____ |
Notebook - AppDS Capstone.ipynb | ###Markdown
This notebook will be used for the Applied Data Science Capstone course provided by IBM
###Code
import pandas as pd
import numpy as np
print("Hello Capstone Project Course!")
###Output
Hello Capstone Project Course!
|
lectures/3_dask_solutions.ipynb | ###Markdown
out-of-core image analysis with dask
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
###Output
_____no_output_____
###Markdown
dask.distributedA first life hack: in general, avoid using bare dask, and instead create a `dask.distributed` Client as in the cell below. What this buys you:- a [diagnostics dashboard](https://docs.dask.org/en/latest/diagnostics-distributed.html). This can be invaluable in helping to understand performance in your application. We'll see a live example below.- seamless scaling. Whether the scheduler is using local workers or connected to [your institution's HPC](https://jobqueue.dask.org/en/latest/), or [cloud compute](https://docs.dask.org/en/latest/setup/cloud.html), the API is the same — you just change the scheduler and connect the Client to it. Learn more about the dask dashboard with:- This introduction to the dask dashboard (20 minute video): https://www.youtube.com/watch?v=N_GqzcuGLCY- This introduction to the jupyterlab extension (5 minute video): https://www.youtube.com/watch?v=EX_voquHdk0
###Code
from dask import distributed
client = distributed.Client()
print(client.dashboard_link)
###Output
_____no_output_____
###Markdown
Viewing dask images in napari First we'll check that we can still look at random small images in napari
###Code
import napari
import numpy as np
from napari.utils.notebook_display import nbscreenshot
random_image = np.random.random((512, 512))
viewer = napari.view_image(random_image)
nbscreenshot(viewer)
###Output
_____no_output_____
###Markdown
Now let's see if we can view impossibly large images in napari!
###Code
import dask.array as da
# Impossible here means that having the full array in memory would require >TB of RAM,
# which is out of reach for most users!!
impossible_image = da.random.random(
(40_000, 2_000, 2_000),
chunks=(1, 1_000, 1_000),
)
print(impossible_image.nbytes / 1e12)
viewer = napari.view_image(impossible_image)
nbscreenshot(viewer)
###Output
_____no_output_____
###Markdown
Choosing chunk sizesChoosing an appropriate chunk size is important to get good performance.A good rule of thumb is to start with a chunk size roughly the same as an image you know you can process within memory for your analysis. You can monitor the efficiency of your dask computation by watching the task stream and worker memory plots in the dashboard as it runs, and adjust it if necessary. We want to balance these two considerations:1. If your chunks are too large, you will run into memory problems. * Dask will try to spill your data to disk instead of crashing, but it is inefficient to read/write from disk more than necessary. * Remember that you need to consider not only how much RAM the array chunks take up, but also the working memory consumed by your analysis functions. (Sometimes that working memory is called "unmanaged memory" in Dask, and you can read tips for that in [this blogpost](https://coiled.io/tackling-unmanaged-memory-with-dask/) or [this 8 minute video](https://www.youtube.com/watch?v=nwR6iGR0mb0)).2. If your chunks are too small, then the overhead and extra communication introduced by Dask will slow the whole computation down. * This is easier to understand with an analogy. Let's say we are at a building site and there is a big pile of bricks that need to be fetched and made into a wall. If the building site foreman (the dask scheduler) tells the workers to go to the pile of bricks and fetch them a single brick at a time, it will be very slow to build the wall because most of the time the workers will be travelling back and forth. Instead, if we tell the workers to go and fetch the bricks one wheelbarrow full at a time, the job will be done much quicker. Dask best practicesThere are extra tips for best practices on the Dask website. Most relevant to image analysis are:* [Dask array best practices](https://docs.dask.org/en/latest/array-best-practices.html)* [General best practices for Dask](https://docs.dask.org/en/latest/best-practices.html) Now with real images We use data from the [Cell Tracking Challenge](http://celltrackingchallenge.net/3d-datasets/),specifically:- the [C. elegans developing embryo training dataset](http://data.celltrackingchallenge.net/training-datasets/Fluo-N3DH-CE.zip) (3GB), **OR**, if that is too large for you to comfortably download,- the [Chinese Hamster Ovarian (CHO) nuclei overexpressing GFP-PCNA training dataset](http://data.celltrackingchallenge.net/training-datasets/Fluo-N3DH-CHO.zip) (98MB)
###Code
from dask_image.imread import imread
# Please set the path to your data here!
ROOT_PATH = '/Users/nsofroniew/GitHub/image-demos/data/cell-tracking/Fluo-N3DH-CE/'
# note you might need a slight downsample here to deal with a known ghosting issue
embryo = imread(ROOT_PATH + '01/t*.tif')[:, :, ::2, ::2]
type(embryo)
embryo.shape
embryo.dtype
embryo.nbytes / 1e9
embryo.chunksize
viewer = napari.view_image(embryo, scale=[1, 5, 1, 1])
nbscreenshot(viewer)
###Output
_____no_output_____
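###Markdown
As an aside related to the chunk-size discussion above (this cell is an illustrative sketch, not part of the original workflow): `dask_image.imread` gives us one chunk per file, i.e. one time point per chunk here. If that layout turned out to be awkward for a particular analysis, `rechunk` changes it lazily — only the scheduling of later computations is affected, nothing is loaded yet. The chunk shape below is an arbitrary example, not a recommendation for this dataset.
###Code
# Lazily re-partition the array; the numbers are placeholders for illustration only
embryo_rechunked = embryo.rechunk({0: 1, 1: 8, 2: 256, 3: 256})
embryo_rechunked.chunksize
###Output
_____no_output_____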
###Markdown
Exercise: file formatsOpen the dask dashboard, and view it while changing timepoints in napari. How long does loading a tiff file take?If you have enough room in your drive, the [zarr](https://zarr.readthedocs.io/en/stable/) format offers much faster read from disk than tiff, especially for segmentations, which have very effective compression.Use `dask.Array.to_zarr` to save to a zarr file, and reload the array with `dask.array.from_zarr`. Swap out the image layer for the zarr-based one. How long does the load take now?
###Code
import dask.array as da
# SOLUTION
# Save data in the zarr file format
da.to_zarr(embryo, 'embryo.zarr')
# SOLUTION
# Read data from zarr format and view it in napari
embryo = da.from_zarr('embryo.zarr')
viewer = napari.view_image(embryo)
###Output
_____no_output_____
###Markdown
Adding the tracking data

Now, let's view the tracking data. The track format is described in [this pdf](https://public.celltrackingchallenge.net/documents/Naming%20and%20file%20content%20conventions.pdf). You can also see a description of the below workflow without dask (ie it *must* fit in your RAM) at [this napari documentation page](https://napari.org/tutorials/applications/cell_tracking).

The tracklets are actually individually-labelled pixels within a volume like the original image. napari prefers to display tracks directly from coordinates, so we will use dask to convert from one to the other.

We are lucky: the images can be processed one at a time (which dask is perfect for), and the compressed data (just point coordinates) are much, much smaller — easy to fit in RAM. We take advantage of this in the below workflow.
###Code
tracklet_images = imread(ROOT_PATH + '01_GT/TRA/man_track*.tif')[:, : , ::2, ::2]
###Output
_____no_output_____
###Markdown
First, we define a function that will work on an individual volume, together with that volume's index (ie the timepoint).
###Code
from skimage.measure import regionprops_table
import pandas as pd
def image_to_tracklets(volume, idx):
props_dict = regionprops_table(
np.asarray(volume), properties=('label', 'centroid')
)
props_df = pd.DataFrame(props_dict)
props_df['frame'] = idx
return props_df[
['label', 'frame', 'centroid-0', 'centroid-1', 'centroid-2']
]
###Output
_____no_output_____
###Markdown
Now we can run that function on the whole volume using the `Client.map` API. Futures are little IOUs for computation: a Future may or may not contain the result of the computation. Calling `future.result()` on a Future object causes Python to wait for that result to be ready. Otherwise, creating a Future is more or less instantaneous.

We will see later that futures have a `.cancel()` method — useful when you trigger a lot of computation but realise you want to stop it!
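A tiny sketch of that Future workflow (the `slow_double` function is illustrative; it assumes the Dask `client` created earlier in the notebook):
###Code
import time

def slow_double(x):
    time.sleep(1)          # stand-in for real work
    return 2 * x

future = client.submit(slow_double, 21)   # returns almost immediately
print(future)                             # a pending Future (an "IOU")
print(future.result())                    # blocks until the result (42) is ready
unwanted = client.submit(slow_double, 100)
unwanted.cancel()                         # stop work we no longer need
###Output
_____no_output_____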
###Code
futures = client.map(
image_to_tracklets,
tracklet_images,
np.arange(len(tracklet_images)),
)
all_tables = [f.result() for f in futures]
tracklets = (
pd.concat(all_tables)
.reset_index(drop=True)
.sort_values(['label', 'frame'])
)
tracklets_layer = viewer.add_tracks(tracklets, scale=[1, 5, 1, 1])
nbscreenshot(viewer)
###Output
_____no_output_____
###Markdown
We want to also load the lineage information, which is presented in a table called `man_track.txt`, containing the following four columns, called LBEP:

> - L - a unique label of the track (label of markers, 16-bit positive value)
> - B - a zero-based temporal index of the frame in which the track begins
> - E - a zero-based temporal index of the frame in which the track ends
> - P - label of the parent track (0 is used when no parent is defined)
###Code
lbep = np.loadtxt(
ROOT_PATH + '01_GT/TRA/man_track.txt',
dtype=np.uint,
)
full_graph = dict(lbep[:, [0, 3]])
graph = {k: v for k, v in full_graph.items() if v != 0}
tracks_layer = viewer.add_tracks(tracklets, graph=graph, scale=[1, 5, 1, 1])
nbscreenshot(viewer)
###Output
_____no_output_____
###Markdown
Challenge

Our final goal will be to compute a segmentation from the grayscale image together with the points in the tracks. Just like last time, we will use smoothed and thresholded nuclei as a mask, and we will use the track points (conveniently already in marker image format!) to run watershed on each.

We can use the `dask-image` library, which contains many functions adapted from `scipy.ndimage`, to do the smoothing:
###Code
from dask_image import ndfilters
smoothed = ndfilters.gaussian_filter(
embryo,
sigma=[0, 1, 5, 5],
)
smoothed_layer = viewer.add_image(
smoothed,
scale=[5, 1, 1],
)
###Output
_____no_output_____
###Markdown
And we can use [`dask.array.map_blocks`](https://docs.dask.org/en/latest/array-api.html#dask.array.map_blocks) to find the edges of the nuclei, just like in the previous notebook:
###Code
from skimage import filters
edges = da.map_blocks(filters.scharr, smoothed)
edges_layer = viewer.add_image(edges, scale=[5, 1, 1])
###Output
_____no_output_____
###Markdown
Final challenge: distributed segmentation with dask

1. Find threshold values for each timepoint of the smoothed data using `client.map` and a scikit-image thresholding function from `skimage.filters`. Create an array of the thresholding values.
2. Using [NumPy broadcasting](https://numpy.org/doc/stable/user/basics.broadcasting.html), produce a Dask array containing the thresholded smooth values. Add this array to napari.
3. (Optionally) use [`da.map_blocks`](https://docs.dask.org/en/latest/array-api.html#dask.array.map_blocks) with a custom filter function to find better boundaries of the nuclei. Add this array to napari. (A sketch for this optional step is included after the solution below.)
4. Use [`da.map_blocks`](https://docs.dask.org/en/latest/array-api.html#dask.array.map_blocks) together with `skimage.segmentation.watershed` and the three previous arrays to create the output segmentation. Add this array as a label layer to napari.
5. Navigate the volume by clicking on the slider, and monitor the Dask dashboard. (Tip: to reduce lag in response time, toggle the visibility OFF for any layers you are not looking at in napari.)
###Code
# SOLUTION
# Find threshold values for each timepoint of the smoothed data
# using client.map and a scikit-image thresholding function from skimage.filters.
from skimage import filters
def threshold_li(darr):
return filters.threshold_li(np.asarray(darr))
futures = client.map(threshold_li, smoothed)
thresholds = [f.result() for f in futures]
# Create an array of the thresholding values
thresholds_for_broadcasting = np.asarray(thresholds).reshape(-1, 1, 1, 1)
print("thresholds_for_broadcasting.shape", thresholds_for_broadcasting.shape)
print("smoothed.shape", smoothed.shape)
# SOLUTION
# Using [NumPy broadcasting, produce a Dask array containing the thresholded smooth values.
thresholded = smoothed > thresholds_for_broadcasting
# Add this array to napari.
viewer.add_image(thresholded, scale=[10, 1, 1])
# SOLUTION
# Use dask.array.map_blocks together with skimage.segmentation.watershed
# and the three previous arrays to create the output segmentation.
from skimage import segmentation
def watershed(edges, markers, mask):
return segmentation.watershed(
np.asarray(edges), np.asarray(markers), mask=np.asarray(mask)
)
segmented = da.map_blocks(
watershed,
edges[:195],
tracklet_images[:195],
thresholded[:195]
)
# Add this array as a label layer to napari.
viewer.add_labels(segmented, scale=[10, 1, 1])
###Output
_____no_output_____ |
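###Markdown
(Optional step 3 from the challenge above) A hedged sketch of using `da.map_blocks` with a custom per-block filter to clean up the boundary image before watershed; the choice of filter here is purely illustrative:
###Code
# Optional (sketch): lightly smooth the edge image block-by-block before watershed
def smooth_edges(block):
    return filters.gaussian(np.asarray(block), sigma=1)

better_edges = da.map_blocks(smooth_edges, edges)
viewer.add_image(better_edges, scale=[10, 1, 1])
###Output
_____no_output_____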
Assignments/Gradient Descent Assignment/Basis Neural Network - Quadratic - AdaGrad.ipynb | ###Markdown
[](https://colab.research.google.com/github/DavideDaz/TokyoDataScience/blob/master/Assignments/Gradient%20Descent%20Assignment/Basis%20Neural%20Network%20-%20Quadratic%20-%20AdaGrad.ipynb)
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
observations = 100 # m
x_i =np.random.uniform(low=0, high=10, size=(observations,1))
x_i = np.sort(x_i, axis=0)
noise = np.random.uniform(-100,100,(observations,1))
alfa_true = 2
beta_true = 3
gamma_true = 8
targets = alfa_true + x_i*beta_true + gamma_true*x_i*x_i + noise
###Output
_____no_output_____
###Markdown
Gradient Descent Solution:
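The cell below implements AdaGrad (a plain-SGD version is left commented out). For reference, the per-step update it applies can be written as follows (notation added here, not from the original notebook):

$$r \leftarrow r + g \odot g, \qquad \theta \leftarrow \theta - \frac{\eta}{\epsilon + \sqrt{r}} \odot g$$

where $g$ is the gradient of the loss with respect to $\theta = (\alpha, \beta, \gamma)$, $\eta$ is the learning rate, $\epsilon$ is a small constant for numerical stability, and $\odot$ denotes element-wise multiplication.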
###Code
alfa_gd,beta_gd,gamma_gd = 0.10416243,0.17923125,0.12899188
alfa_Ada,beta_Ada,gamma_Ada = 0.10416243,0.17923125,0.12899188
loss_SGD, loss_AdaGrad = [],[]
learning_rate = 1
r = np.zeros(3)
eps = 10**(-8)
Theta = np.array([0.17923125,0.10416243,0.12899188])
for i in range(2000):
# #SGD
# outputs = (x_i*x_i*gamma_gd) + (x_i*beta_gd) + alfa_gd
# deltas = outputs - targets
# loss = sum(deltas**2)/2/observations
# loss_SGD.append(loss)
# deltas_scaled = deltas/ observations
# gamma_gd = gamma_gd - learning_rate * np.dot((x_i*x_i).T,deltas_scaled)
# beta_gd = beta_gd - learning_rate * np.dot(x_i.T,deltas_scaled)
# alfa_gd = alfa_gd - learning_rate * np.sum(deltas_scaled)
#AdaGrad
outputs_Ada = (x_i*x_i*Theta[2]) + (x_i*Theta[1]) + Theta[0]
deltas_Ada = outputs_Ada - targets
loss_Ada = sum(deltas_Ada**2)/2/observations
loss_AdaGrad.append(loss_Ada[0])
deltas_scaled_Ada = deltas_Ada/ observations
g = np.array([np.sum(deltas_scaled_Ada),np.dot(x_i.T,deltas_scaled_Ada)[0][0],np.dot((x_i*x_i).T,deltas_scaled_Ada)[0][0]])
r =np.add(r,np.multiply(g,g))
DeltaTheta = -np.multiply(np.divide(learning_rate,(eps + np.sqrt(r))),g)
Theta += DeltaTheta
#print(alfa_gd,beta_gd, gamma_gd)
print(Theta)
#y_GD = alfa_gd + beta_gd*x_i + gamma_gd*x_i*x_i
y_Ada = Theta[0] + Theta[1]*x_i + Theta[2]*x_i*x_i
%matplotlib inline
plt.scatter(x_i, targets, marker='o', color='black' )
#plt.plot(x_i,y_GD, color='red',linewidth=4)
plt.plot(x_i,y_Ada, color='green',linewidth=2)
plt.legend(('lr=1','targets'),loc='lower right')
plt.xlabel('x')
plt.ylabel('y')
%matplotlib inline
plt.plot(range(0,len(loss_AdaGrad)),loss_AdaGrad, color='magenta',linewidth=2,alpha=0.5)
plt.legend(('AdaGrad lr =1',''),loc='upper right')
plt.xlabel('Epochs')
plt.ylabel('Loss')
###Output
_____no_output_____ |
Examples and Solutions.ipynb | ###Markdown
Ch-1: PYTHON BASICS Hello World! - First Python Program
###Code
# This program says Hello and asks for my name and age
print("Hello, World!")
print("What is your name?") # ask for their name
myName = input()
print("It is good to meet you," + myName)
print("The length of your name is:")
print(len(myName))
print("What is your age?") # ask for their age
myAge = input()
print("You will be "+ str(int(myAge) + 1) + " in a year.")
###Output
What is your age?
26
You will be 27 in a year.
###Markdown
Practice Questions

Which of the following are operators, and which are values?

1) *
2) 'hello'
3) -88.8
4) -
5) /
6) +
7) 5

Operators: 1, 4, 5, 6; Values: 2, 3, 7

Ch-2: FLOW CONTROL

A Short Program: Guess the number
###Code
# This is a guess the number game
import random
secretNum = random.randint(1, 20)
print("I am thinking of a number between 1 and 20")
# Give a player 5 chances to guess
for chance in range(1,6):
print('Take a guess.')
guess = int(input())
if secretNum < guess:
print('Your guess is too high!')
elif secretNum > guess:
print('Your guess is too low!')
else:
break # This is the correct guess!
if secretNum == guess:
print('Good Job! You guessed my number in ' + str(chance) + ' guess(es).')
else:
print('Nope! The number I thought was ' + str(secretNum))
###Output
I am thinking of a number between 1 and 20
Take a guess.
5
Your guess is too low!
Take a guess.
10
Your guess is too high!
Take a guess.
7
Good Job! You guessed my number in 3 guess(es).
###Markdown
A Short Program: ROCK, PAPER, SCISSORS

Concept:

ROCK, PAPER, SCISSORS
0 Wins, 0 Losses, 0 Ties
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
**p**
PAPER versus...
PAPER
It is a tie!
0 Wins, 0 Losses, 1 Ties
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
**s**
SCISSORS versus...
PAPER
You win!
1 Wins, 0 Losses, 1 Ties
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
**q**
###Code
import random, sys
print('ROCK, PAPER, SCISSORS')
# These variables will keep track of wins, losses and ties
wins = 0
losses = 0
ties = 0
while True: # the main game loop
print('%s Wins, %s Losses, %s Ties' %(wins, losses, ties))
while True: # the player input loop
print('Enter your move: (r)ock (p)aper (s)cissors or (q)uit')
playerMove = input()
if playerMove == 'q':
sys.exit() # Quit the program
if playerMove == 'r' or playerMove == 'p' or playerMove == 's':
break # break out of the player input loop
print('Type one of r, p, s or q.')
# Display the player move
if playerMove == 'r':
print('ROCK versus ...')
elif playerMove == 'p':
print('PAPER versus ...')
elif playerMove == 's':
print('SCISSORS versus ...')
# Display the computer's move
randomNum = random.randint(1,3)
if randomNum == 1:
computerMove = 'r'
print('ROCK')
elif randomNum == 2:
computerMove = 'p'
print('PAPER')
elif randomNum == 3:
computerMove = 's'
print('SCISSORS')
# Display and record the wins/loss/tie
if playerMove == computerMove:
print('It is a tie!')
ties += 1
elif (playerMove == 'r' and computerMove == 's') or (playerMove == 'p' and computerMove == 'r') or (playerMove == 's' and computerMove == 'p'):
print('You win!')
wins += 1
elif (playerMove == 'r' and computerMove == 'p') or (playerMove == 'p' and computerMove == 's') or (playerMove == 's' and computerMove == 'r'):
print('You lose!')
losses += 1
###Output
ROCK, PAPER, SCISSORS
0 Wins, 0 Losses, 0 Ties
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
r
ROCK versus ...
PAPER
You lose!
0 Wins, 1 Losses, 0 Ties
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
r
ROCK versus ...
SCISSORS
You win!
1 Wins, 1 Losses, 0 Ties
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
r
ROCK versus ...
SCISSORS
You win!
2 Wins, 1 Losses, 0 Ties
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
p
PAPER versus ...
PAPER
It is a tie!
2 Wins, 1 Losses, 1 Ties
Enter your move: (r)ock (p)aper (s)cissors or (q)uit
q
###Markdown
Ch-3: FUNCTIONS Define, Call, Pass, Argument, Parameter➊ def sayHello(name): print('Hello, ' + name)➋ sayHello('Al')To _define_ a function is to create it, just like an assignment statement like spam = 42 creates the spam variable. The def statement defines the sayHello() function ➊. The sayHello('Al') line ➋ _calls_ the now-created function, sending the execution to the top of the function’s code. This function call is also known as _passing_ the string value 'Al' to the function. A value being passed to a function in a function call is an _argument_. The argument 'Al' is assigned to a local variable named name. Variables that have arguments assigned to them are _parameters_. Return Values and return StatementsWhen you call the len() function and pass it an argument such as 'Hello', the function call evaluates to the integer value 5, which is the length of the string you passed it. In general, the value that a function call evaluates to is called the _return value_ of the function.A return statement consists of the following:* The return keyword* The value or expression that the function should return Example: Magic 8 Ball
###Code
import random
def magic8(num):
if num == 1:
return 'It is certain'
elif num == 2:
return 'It is decidedly so'
elif num == 3:
return 'Yes'
elif num == 4:
return 'Reply hazy try again'
elif num == 5:
return 'Ask again later'
elif num == 6:
return 'Concentrate and ask again'
elif num == 7:
return 'My reply is no'
elif num == 8:
return 'Outlook not so good'
elif num == 9:
return 'Very doubtful'
print(magic8(random.randint(1,9)))
###Output
Reply hazy try again
|
kale-pipelines-dev/kubeflow-kale-pipeline.ipynb | ###Markdown
Kubeflow Pipelines Creation using Kale Install dependencies
###Code
!wget -q https://raw.githubusercontent.com/kubeflow-kale/examples/master/titanic-ml-dataset/requirements.txt
!pip3 install -r requirements.txt --user -q
###Output
_____no_output_____
###Markdown
**Important:** Restart this Kernel after the installation is completed.

Simulate Data Ingestion
###Code
!wget -q https://raw.githubusercontent.com/kubeflow-kale/examples/master/titanic-ml-dataset/data/train.csv -O data/train.csv
!wget -q https://raw.githubusercontent.com/kubeflow-kale/examples/master/titanic-ml-dataset/data/test.csv -O data/test.csv
###Output
_____no_output_____
###Markdown
Importing dependencies Data processing dependencies
###Code
import numpy as np
import pandas as pd
import os
from sklearn import linear_model
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC, LinearSVC
from sklearn.naive_bayes import GaussianNB
print('numpy:', np.__version__)
print('pandas:', pd.__version__)
###Output
numpy: 1.16.4
pandas: 0.25.1
###Markdown
Data analysis dependencies
###Code
import seaborn as sns
from matplotlib import pyplot as plt
from matplotlib import style
###Output
_____no_output_____
###Markdown
Dataset: Titanic passengers survival prediction This Notebook will use machine learning to create a model that predicts which passengers survived the Titanic shipwreck. The dataset provides information on the fate of passengers on the Titanic, summarized according to economic status (class), sex, age and survival.Credit for this Notebook goes to Niklas Donges, who published a very detailed post [here](https://towardsdatascience.com/predicting-the-survival-of-titanic-passengers-30870ccc7e8). Check it out if you want to dive deeper in the data analysis and machine learning details of the challenge. Data Load
###Code
dataset_path = "data/"
PREDICTION_LABEL = 'Survived'
test_df = pd.read_csv(os.path.join(dataset_path, 'test.csv'))
train_df = pd.read_csv(os.path.join(dataset_path, 'train.csv'))
###Output
_____no_output_____
###Markdown
Let's explore the data

These are features of the dataset:

| Feature | Description |
|------|------|
| survival | Survival |
| PassengerId | Unique Id of a passenger |
| pclass | Ticket class |
| sex | Sex |
| Age | Age in years |
| sibsp | # of siblings / spouses aboard the Titanic |
| parch | # of parents / children aboard the Titanic |
| ticket | Ticket number |
| fare | Passenger fare |
| cabin | Cabin number |
| embarked | Port of Embarkation |
###Code
train_df.info()
train_df.describe()
train_df.head()
###Output
_____no_output_____
###Markdown
**Missing data**

Let's see here how much data is missing. We will have to fill the missing features later on.
###Code
total = train_df.isnull().sum().sort_values(ascending=False)
percent_1 = train_df.isnull().sum()/train_df.isnull().count()*100
percent_2 = (round(percent_1, 1)).sort_values(ascending=False)
missing_data = pd.concat([total, percent_2], axis=1, keys=['Total', '%'])
missing_data.head(5)
###Output
_____no_output_____
###Markdown
**Age and Sex**
###Code
survived = 'survived'
not_survived = 'not survived'
fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10, 4))
women = train_df[train_df['Sex']=='female']
men = train_df[train_df['Sex']=='male']
ax = sns.distplot(women[women['Survived']==1].Age.dropna(), bins=18, label = survived, ax = axes[0], kde =False)
ax = sns.distplot(women[women['Survived']==0].Age.dropna(), bins=40, label = not_survived, ax = axes[0], kde =False)
ax.legend()
ax.set_title('Female')
ax.set_ylabel('Survival Probablity')
ax = sns.distplot(men[men['Survived']==1].Age.dropna(), bins=18, label = survived, ax = axes[1], kde = False)
ax = sns.distplot(men[men['Survived']==0].Age.dropna(), bins=40, label = not_survived, ax = axes[1], kde = False)
ax.legend()
ax.set_title('Male')
_ = ax.set_ylabel('Survival Probablity')
###Output
_____no_output_____
###Markdown
**Embarked, Pclass and Sex**
###Code
FacetGrid = sns.FacetGrid(train_df, row='Embarked', height=4.5, aspect=1.6)
FacetGrid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette=None, order=None, hue_order=None )
_ = FacetGrid.add_legend()
###Output
_____no_output_____
###Markdown
**Pclass**

Explore if `Pclass` is contributing to a person's chance of survival.
###Code
_ = sns.barplot(x='Pclass', y='Survived', data=train_df)
###Output
_____no_output_____
###Markdown
Here we confirm that being in class 1 increases the chances of survival, and that a person in class 3 has high chances of not surviving
###Code
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', height=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend();
###Output
_____no_output_____
###Markdown
DATA PROCESSING

SibSp and Parch

Combine these two features into the number of relatives.
###Code
data = [train_df, test_df]
for dataset in data:
dataset['relatives'] = dataset['SibSp'] + dataset['Parch']
dataset.loc[dataset['relatives'] > 0, 'not_alone'] = 0
dataset.loc[dataset['relatives'] == 0, 'not_alone'] = 1
dataset['not_alone'] = dataset['not_alone'].astype(int)
train_df['not_alone'].value_counts()
# Survival with respect to the number of relatives in the ship
axes = sns.catplot('relatives','Survived', kind='point',
data=train_df, aspect = 2.5, )
# This does not contribute to a person survival probability
train_df = train_df.drop(['PassengerId'], axis=1)
###Output
_____no_output_____
###Markdown
Missing data: Cabin

Create a new `Deck` feature.
###Code
import re
deck = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "U": 8}
data = [train_df, test_df]
for dataset in data:
dataset['Cabin'] = dataset['Cabin'].fillna("U0")
dataset['Deck'] = dataset['Cabin'].map(lambda x: re.compile("([a-zA-Z]+)").search(x).group())
dataset['Deck'] = dataset['Deck'].map(deck)
dataset['Deck'] = dataset['Deck'].fillna(0)
dataset['Deck'] = dataset['Deck'].astype(int)
# we can now drop the cabin feature
train_df = train_df.drop(['Cabin'], axis=1)
test_df = test_df.drop(['Cabin'], axis=1)
###Output
_____no_output_____
###Markdown
Missing data: Age

Fill missing values in the Age feature with random samples drawn from the distribution of the existing values.
###Code
data = [train_df, test_df]
for dataset in data:
mean = train_df["Age"].mean()
std = test_df["Age"].std()
is_null = dataset["Age"].isnull().sum()
# compute random numbers between the mean, std and is_null
rand_age = np.random.randint(mean - std, mean + std, size = is_null)
# fill NaN values in Age column with random values generated
age_slice = dataset["Age"].copy()
age_slice[np.isnan(age_slice)] = rand_age
dataset["Age"] = age_slice
dataset["Age"] = train_df["Age"].astype(int)
train_df["Age"].isnull().sum()
###Output
_____no_output_____
###Markdown
Missing data: Embarked
###Code
train_df['Embarked'].describe()
# fill with most common value
common_value = 'S'
data = [train_df, test_df]
for dataset in data:
dataset['Embarked'] = dataset['Embarked'].fillna(common_value)
###Output
_____no_output_____
###Markdown
Convert Features
###Code
train_df.info()
data = [train_df, test_df]
for dataset in data:
dataset['Fare'] = dataset['Fare'].fillna(0)
dataset['Fare'] = dataset['Fare'].astype(int)
###Output
_____no_output_____
###Markdown
Titles features
###Code
data = [train_df, test_df]
titles = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in data:
# extract titles
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
# replace titles with a more common title or as Rare
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col','Don', 'Dr',\
'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
# convert titles into numbers
dataset['Title'] = dataset['Title'].map(titles)
# filling NaN with 0, to get safe
dataset['Title'] = dataset['Title'].fillna(0)
train_df = train_df.drop(['Name'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
###Output
_____no_output_____
###Markdown
Sex into numeric
###Code
genders = {"male": 0, "female": 1}
data = [train_df, test_df]
for dataset in data:
dataset['Sex'] = dataset['Sex'].map(genders)
###Output
_____no_output_____
###Markdown
Drop Ticket feature
###Code
train_df = train_df.drop(['Ticket'], axis=1)
test_df = test_df.drop(['Ticket'], axis=1)
###Output
_____no_output_____
###Markdown
Embarked into numeric
###Code
ports = {"S": 0, "C": 1, "Q": 2}
data = [train_df, test_df]
for dataset in data:
dataset['Embarked'] = dataset['Embarked'].map(ports)
###Output
_____no_output_____
###Markdown
Age into categories
###Code
data = [train_df, test_df]
for dataset in data:
dataset['Age'] = dataset['Age'].astype(int)
dataset.loc[ dataset['Age'] <= 11, 'Age'] = 0
dataset.loc[(dataset['Age'] > 11) & (dataset['Age'] <= 18), 'Age'] = 1
dataset.loc[(dataset['Age'] > 18) & (dataset['Age'] <= 22), 'Age'] = 2
dataset.loc[(dataset['Age'] > 22) & (dataset['Age'] <= 27), 'Age'] = 3
dataset.loc[(dataset['Age'] > 27) & (dataset['Age'] <= 33), 'Age'] = 4
dataset.loc[(dataset['Age'] > 33) & (dataset['Age'] <= 40), 'Age'] = 5
dataset.loc[(dataset['Age'] > 40) & (dataset['Age'] <= 66), 'Age'] = 6
dataset.loc[ dataset['Age'] > 66, 'Age'] = 6
# let's see how it's distributed
train_df['Age'].value_counts()
###Output
_____no_output_____
###Markdown
Fare into categories
###Code
data = [train_df, test_df]
for dataset in data:
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[(dataset['Fare'] > 31) & (dataset['Fare'] <= 99), 'Fare'] = 3
dataset.loc[(dataset['Fare'] > 99) & (dataset['Fare'] <= 250), 'Fare'] = 4
dataset.loc[ dataset['Fare'] > 250, 'Fare'] = 5
dataset['Fare'] = dataset['Fare'].astype(int)
###Output
_____no_output_____
###Markdown
New Features Age times Class
###Code
data = [train_df, test_df]
for dataset in data:
dataset['Age_Class']= dataset['Age']* dataset['Pclass']
###Output
_____no_output_____
###Markdown
Fare per person
###Code
for dataset in data:
dataset['Fare_Per_Person'] = dataset['Fare']/(dataset['relatives']+1)
dataset['Fare_Per_Person'] = dataset['Fare_Per_Person'].astype(int)
# Let's take a last look at the training set, before we start training the models.
train_df.head(10)
###Output
_____no_output_____
###Markdown
ML

Because the dataset does not provide labels for its test set, we need to use the predictions on the training set to compare the algorithms with each other.
###Code
train_labels = train_df[PREDICTION_LABEL]
if PREDICTION_LABEL in train_df.columns:
train_df = train_df.drop(PREDICTION_LABEL, axis=1)
if PREDICTION_LABEL in test_df.columns:
test_df = test_df.drop(PREDICTION_LABEL, axis=1)
# Importing IPython should only affect analysis cells and should not
# be exported to any pipeline task.
# Keep this import "local" to this cell and isolated from the global
# imports oriented to data processing
from IPython.core.display import display, HTML
if PREDICTION_LABEL in train_df.columns or PREDICTION_LABEL in test_df.columns:
display(HTML("""
<div style="border: 2px solid red;"" >
<h3 style="color:red;">WARNING:</h3>
<p style="color:blue;">The target feature should be removed from the Training or Test sets!</p>
</div>
"""))
###Output
_____no_output_____
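###Markdown
Training-set accuracy can be optimistic. As a hedged sanity check (not part of the original pipeline; it assumes scikit-learn's `cross_val_score`), the same kind of model can also be compared with 5-fold cross-validation:
###Code
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

# 5-fold cross-validated accuracy for a random forest on the training data
rf_cv = RandomForestClassifier(n_estimators=100)
cv_scores = cross_val_score(rf_cv, train_df, train_labels, cv=5, scoring='accuracy')
print("CV accuracy: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))
###Output
_____no_output_____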
###Markdown
Random Forest
###Code
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(train_df, train_labels)
acc_random_forest = round(random_forest.score(train_df, train_labels) * 100, 2)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
logreg = LogisticRegression(solver='lbfgs')
logreg.fit(train_df, train_labels)
acc_log = round(logreg.score(train_df, train_labels) * 100, 2)
###Output
/home/jovyan/.local/lib/python3.6/site-packages/sklearn/linear_model/logistic.py:758: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
"of iterations.", ConvergenceWarning)
###Markdown
Gaussian Naive Bayes
###Code
gaussian = GaussianNB()
gaussian.fit(train_df, train_labels)
acc_gaussian = round(gaussian.score(train_df, train_labels) * 100, 2)
###Output
_____no_output_____
###Markdown
Linear SVM
###Code
linear_svc = LinearSVC()
linear_svc.fit(train_df, train_labels)
acc_linear_svc = round(linear_svc.score(train_df, train_labels) * 100, 2)
###Output
/home/jovyan/.local/lib/python3.6/site-packages/sklearn/svm/base.py:931: ConvergenceWarning: Liblinear failed to converge, increase the number of iterations.
"the number of iterations.", ConvergenceWarning)
###Markdown
Decision Tree
###Code
decision_tree = DecisionTreeClassifier()
decision_tree.fit(train_df, train_labels)
acc_decision_tree = round(decision_tree.score(train_df, train_labels) * 100, 2)
###Output
_____no_output_____
###Markdown
Results
###Code
results = pd.DataFrame({
'Model': ['Support Vector Machines', 'logistic Regression',
'Random Forest', 'Naive Bayes', 'Decision Tree'],
'Score': [acc_linear_svc, acc_log,
acc_random_forest, acc_gaussian, acc_decision_tree]})
result_df = results.sort_values(by='Score', ascending=False)
result_df = result_df.set_index('Score')
print(result_df)
###Output
Model
Score
92.93 Random Forest
92.93 Decision Tree
81.59 Support Vector Machines
81.48 logistic Regression
77.10 Naive Bayes
|
analytics notebook/github Am I Healthy or Unhealthy?.ipynb | ###Markdown
Data Analytics Load and read data
###Code
#import libraries
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
#load data
df = pd.read_csv("cardio.csv", sep=";")
###Output
_____no_output_____
###Markdown
Data preprocessing
###Code
#display rows and columns of data
df.shape
#display data
df
#display basic statistical details
df.describe()
#print full summary
df.info()
#display columns
df.columns
#check number of rows where particular columns of null values
df.isnull().sum()
#convert age from days to years
df["age"] = (df["age"] / 365).astype(int)
#BMI= weight(kg)/height(m)**2
#convert height(cm) into (m), km(1000)>m(100)>cm(10)>mm
df["height"] = df["height"]/100
#pulse pressure = systolic pressure - diastolic pressure
df["Pulse Pressure"] = df["ap_hi"] - df["ap_lo"]
#create a new column named as "BMI"
df["BMI"]= (df["weight"] / df["height"] ** 2).astype(int)
#create a new column named as "HRmax"
df["HRmax"] = (220 - 0.7 * df["age"]).astype(int)
#create a new column named as "MAP"
df["MAP"] = ((df["ap_hi"] + (2 * df["ap_lo"])) / 3).astype(int)
df
#drop selected column
df.drop(columns=["id"], inplace=True)
df
###Output
_____no_output_____
###Markdown
Exploratory data analysis
###Code
#find sum of cardio for each gender
cardiosum = df.groupby("gender")["cardio"].sum()
cardiosum
#count plot
plt.figure(figsize=(6,6))
sns.countplot(x="gender", data=df, hue="cardio")
plt.title("Cardiovascular Disease vs Non-Cardiovascular Disease",fontsize=12)
plt.xlabel("Gender")
plt.ylabel("Number of People")
plt.show()
#count of people of each gender
df["gender"].value_counts()
#count plot
plt.figure(figsize=(6,6))
gender = ["females=1", "males=2"]
plt.bar(gender, cardiosum)
plt.title("People with Cardiovascular Disease",fontsize=12)
plt.xlabel("Gender")
plt.ylabel("Cardiovascular Disease")
ax = plt.gca()
for p in ax.patches:
ax.text(p.get_x() + p.get_width()/2, p.get_height(), "%d" % int(p.get_height()),
fontsize=16, color="red", ha="center", va="bottom")
plt.show()
#line plot
plt.figure(figsize=(10,7))
sns.lineplot(x="age", y="Pulse Pressure", hue="cardio", data=df)
plt.show()
#line plot
plt.figure(figsize=(10,7))
sns.lineplot(x="age", y="MAP", hue="cardio", data=df)
plt.show()
#line plot
plt.figure(figsize=(10,7))
sns.barplot(x="age", y="HRmax", hue="cardio", data=df)
plt.show()
#categorical plot
df_diseases = pd.melt(df, id_vars=["gender"], value_vars=["cholesterol", "gluc"])
sns.catplot(x="variable", hue="value", col="gender",data=df_diseases, kind="count")
plt.show()
#gender with higher cholesterol
df.groupby("gender")["cholesterol"].sum()
#gender with higher glucose
df.groupby("gender")["gluc"].sum()
#relationships plot
plt.figure(figsize=(6,6))
sns.relplot(x="ap_lo", y="ap_hi",
hue="cardio",
size="age",
sizes=(0,300),
col="gender",
data=df)
plt.show()
#categorical plot
df_lifestyles = pd.melt(df, id_vars=["cardio"], value_vars=["smoke", "alco", "active"])
sns.catplot(x="variable", hue="value", col="cardio",data=df_lifestyles, kind="count")
plt.show()
#categorical plot
df_lifestyles = pd.melt(df, id_vars=["gender"], value_vars=["smoke", "alco", "active"])
sns.catplot(x="variable", hue="value", col="gender",data=df_lifestyles, kind="count")
plt.show()
#gender with higher smoking rate
df.groupby("gender")["smoke"].sum()
#gender with higher alcohol intake
df.groupby("gender")["alco"].sum()
#gender with least exercise rate
df.groupby("gender")["active"].sum()
#boxplot
plt.figure(figsize=(20,20))
sns.boxplot(x='age', y='BMI', data=df)
plt.show()
#relationships plot
plt.figure(figsize=(20,20))
sns.histplot(data=df, x="BMI", kde=True, color="red")
plt.show()
###Output
_____no_output_____
###Markdown
Predictive Analytics (Machine Learning) Logistic regression
###Code
#import libraries
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
# display data first five rows
df.head()
# display columns
df.columns
#take category column and apply lambda function check return 1 if BMI >23 else 0
#create a new column
df['Health_Status'] = df['BMI'].apply(lambda x: 1 if x>23 else 0)
df
#define new dataframe
df_new = df[["BMI","Health_Status"]]
df_new
#use X, y as input and test_size as the splitting ratio > will get 4 arrays back
X_train, X_test, y_train, y_test = train_test_split(df[["BMI"]],df.Health_Status,test_size=0.1)
# data to perform test model
X_test
#data to perform train model
X_train
#create linear regression object
model = LogisticRegression()
#fit to train model
model.fit(X_train, y_train)
#predict data
model.predict(X_test)
y_test
#check accuracy of model by calling score method
#score will use X_test to predict model.predict(X_test) and compare with y_test value to find accuracy
model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
100% accuracy! This is expected here: `Health_Status` was derived directly from `BMI` (1 when BMI > 23), so the classifier only needs to recover that threshold.
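A quick way to inspect that score in more detail (a small sketch using scikit-learn's confusion matrix):
###Code
from sklearn.metrics import confusion_matrix

# rows = actual classes, columns = predicted classes
print(confusion_matrix(y_test, model.predict(X_test)))
###Output
_____no_output_____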
###Code
model.predict([[55]])
###Output
_____no_output_____ |
10_Pandas/01_Pandas讀取資料.ipynb | ###Markdown
Create an empty DataFrame

> Typically used to create an empty DataFrame up front, ready to store data that will be generated later.
###Code
import pandas as pd

# create empty dataframe
df = pd.DataFrame()
df
# 建立已定義好欄位名稱的empty dataframe
df = pd.DataFrame(columns=["Col_1", "Col_2", "Col_3"])
df
###Output
_____no_output_____
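###Markdown
One common follow-up (a small illustrative sketch, with made-up values): filling the pre-defined empty DataFrame row by row as results are produced.
###Code
# append rows one at a time to an empty DataFrame with predefined columns
df = pd.DataFrame(columns=["Col_1", "Col_2", "Col_3"])
df.loc[len(df)] = [1, "a", True]
df.loc[len(df)] = [2, "b", False]
df
###Output
_____no_output_____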
###Markdown
Create a DataFrame from various data sources

* [Python Pandas - DataFrame](https://www.tutorialspoint.com/python_pandas/python_pandas_dataframe.htm)

Create a DataFrame from Lists
###Code
import pandas as pd
data = [1,2,3,4,5]
df = pd.DataFrame(data)
df
import pandas as pd
data = [['Alex',10],['Bob',12],['Clarke',13]]
df = pd.DataFrame(data,columns=['Name','Age'])
df
import pandas as pd
data = [['Alex',10],['Bob',12],['Clarke',13]]
df = pd.DataFrame(data,columns=['Name','Age'],dtype=float)
df
###Output
_____no_output_____
###Markdown
Reading a CSV string

To read CSV data from a string variable (rather than a file), it must be wrapped with StringIO first. [Reference](https://stackoverflow.com/questions/58288236/pandas-read-csv-stored-as-string-in-memory-to-data-frame)
###Code
from io import StringIO
content = """
AAAA
BBBB
CCCC
DDDD
"""
df = pd.read_csv(StringIO(content), header=None)
df
###Output
_____no_output_____ |