path | concatenated_notebook
---|---
notebooks/MS2DeepScore_tutorial.ipynb | ###Markdown
Using ms2deepscore: How to load data, train a model, and compute similarities.
###Code
from pathlib import Path
from matchms.importing import load_from_mgf
from tensorflow import keras
import pandas as pd
from ms2deepscore import SpectrumBinner
from ms2deepscore.data_generators import DataGeneratorAllSpectrums
from ms2deepscore.models import SiameseModel
from ms2deepscore import MS2DeepScore
###Output
_____no_output_____
###Markdown
Data loading Here we load in a small sample of test spectrums as well as reference score data.
###Code
TEST_RESOURCES_PATH = Path.cwd().parent / 'tests' / 'resources'
spectrums_filepath = str(TEST_RESOURCES_PATH / "pesticides_processed.mgf")
score_filepath = str(TEST_RESOURCES_PATH / "pesticides_tanimoto_scores.json")
###Output
_____no_output_____
###Markdown
Load processed spectrums from .mgf file. For processing itself see [matchms](https://github.com/matchms/matchms) documentation.
###Code
spectrums = list(load_from_mgf(spectrums_filepath))
###Output
_____no_output_____
###Markdown
Load reference scores from a .json file. This is a Pandas DataFrame with reference similarity scores (=labels) for compounds identified by inchikeys. Columns and index should both be inchikeys, with the value at row x column giving the similarity score for that pair. The DataFrame must be symmetric (reference_scores_df[i,j] == reference_scores_df[j,i]) and the column names must be identical to the index.
###Code
tanimoto_scores_df = pd.read_json(score_filepath)
###Output
_____no_output_____
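###Markdown
For reference, a DataFrame in the expected format could be built by hand as in the purely illustrative sketch below (the inchikeys and scores are made up; only the symmetric layout matters).
###Code
import pandas as pd

# Hypothetical inchikeys and pairwise similarity scores, for illustration only
inchikeys = ["INCHIKEYAAA", "INCHIKEYBBB", "INCHIKEYCCC"]
scores = [[1.0, 0.3, 0.7],
          [0.3, 1.0, 0.5],
          [0.7, 0.5, 1.0]]
example_scores_df = pd.DataFrame(scores, index=inchikeys, columns=inchikeys)

# The matrix is symmetric and the columns match the index
assert (example_scores_df.values == example_scores_df.values.T).all()
assert list(example_scores_df.columns) == list(example_scores_df.index)
###Output
_____no_output_____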
###Markdown
Data preprocessing Bin the spectrums using `ms2deepscore.SpectrumBinner`. In this binned form we can feed spectra to the model.
###Code
spectrum_binner = SpectrumBinner(1000, mz_min=10.0, mz_max=1000.0, peak_scaling=0.5)
binned_spectrums = spectrum_binner.fit_transform(spectrums)
###Output
Spectrum binning: 100%|██████████| 76/76 [00:00<00:00, 1366.15it/s]
Create BinnedSpectrum instances: 100%|██████████| 76/76 [00:00<00:00, 69478.44it/s]
###Markdown
Create a data generator that will generate batches of training examples. Each training example consists of a pair of binned spectra and the corresponding reference similarity score.
###Code
dimension = len(spectrum_binner.known_bins)
data_generator = DataGeneratorAllSpectrums(binned_spectrums, tanimoto_scores_df,
dim=dimension)
###Output
_____no_output_____
###Markdown
Model training Initialize a SiameseModel. It consists of a dense 'base' network that produces an embedding for each of the 2 inputs. The 'head' model computes the cosine similarity between the embeddings.
###Code
model = SiameseModel(spectrum_binner, base_dims=(200, 200, 200), embedding_dim=200,
dropout_rate=0.2)
model.compile(loss='mse', optimizer=keras.optimizers.Adam(lr=0.001))
model.summary()
###Output
Model: "base"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
base_input (InputLayer) [(None, 543)] 0
_________________________________________________________________
dense1 (Dense) (None, 200) 108800
_________________________________________________________________
normalization1 (BatchNormali (None, 200) 800
_________________________________________________________________
dropout1 (Dropout) (None, 200) 0
_________________________________________________________________
dense2 (Dense) (None, 200) 40200
_________________________________________________________________
normalization2 (BatchNormali (None, 200) 800
_________________________________________________________________
dropout2 (Dropout) (None, 200) 0
_________________________________________________________________
dense3 (Dense) (None, 200) 40200
_________________________________________________________________
normalization3 (BatchNormali (None, 200) 800
_________________________________________________________________
dropout3 (Dropout) (None, 200) 0
_________________________________________________________________
embedding (Dense) (None, 200) 40200
=================================================================
Total params: 231,800
Trainable params: 230,600
Non-trainable params: 1,200
_________________________________________________________________
Model: "head"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_a (InputLayer) [(None, 543)] 0
__________________________________________________________________________________________________
input_b (InputLayer) [(None, 543)] 0
__________________________________________________________________________________________________
base (Functional) (None, 200) 231800 input_a[0][0]
input_b[0][0]
__________________________________________________________________________________________________
cosine_similarity (Dot) (None, 1) 0 base[0][0]
base[1][0]
==================================================================================================
Total params: 231,800
Trainable params: 230,600
Non-trainable params: 1,200
__________________________________________________________________________________________________
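###Markdown
As a purely numerical illustration of what the 'head' computes, the cosine similarity between two hypothetical (random) embedding vectors can be written with plain numpy:
###Code
import numpy as np

# Two hypothetical 200-dimensional embeddings, as produced by the 'base' network
a = np.random.rand(200)
b = np.random.rand(200)

# Cosine similarity = dot product of the L2-normalized vectors
cosine_similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cosine_similarity)
###Output
_____no_output_____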
###Markdown
Train the model on the data. For the sake of simplicity we use the same dataset for training and validation.
###Code
model.fit(data_generator,
validation_data=data_generator,
epochs=2)
###Output
Epoch 1/2
2/2 [==============================] - 2s 413ms/step - loss: 0.0799 - val_loss: 0.0490
Epoch 2/2
2/2 [==============================] - 0s 167ms/step - loss: 0.1049 - val_loss: 0.0576
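###Markdown
For a real training run you would typically keep separate training and validation data. A minimal sketch, assuming `binned_spectrums` behaves like a list and reusing the same generator class as above, could look like this:
###Code
# Sketch only: split the binned spectra 80/20 and build two generators
n_train = int(0.8 * len(binned_spectrums))
training_generator = DataGeneratorAllSpectrums(binned_spectrums[:n_train],
                                               tanimoto_scores_df, dim=dimension)
validation_generator = DataGeneratorAllSpectrums(binned_spectrums[n_train:],
                                                 tanimoto_scores_df, dim=dimension)
# model.fit(training_generator, validation_data=validation_generator, epochs=2)
###Output
_____no_output_____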
###Markdown
Model inference Calculate similarities for a pair of spectra
###Code
similarity_measure = MS2DeepScore(model)
score = similarity_measure.pair(spectrums[0], spectrums[1])
print(score)
###Output
Spectrum binning: 100%|██████████| 1/1 [00:00<00:00, 1144.11it/s]
Create BinnedSpectrum instances: 100%|██████████| 1/1 [00:00<00:00, 9532.51it/s]
Spectrum binning: 100%|██████████| 1/1 [00:00<00:00, 870.91it/s]
Create BinnedSpectrum instances: 100%|██████████| 1/1 [00:00<00:00, 8830.11it/s]
###Markdown
Calculate similarities for a 3x3 matrix of spectra
###Code
scores = similarity_measure.matrix(spectrums[:3], spectrums[:3])
print(scores)
###Output
Spectrum binning: 100%|██████████| 3/3 [00:00<00:00, 1661.99it/s]
Create BinnedSpectrum instances: 100%|██████████| 3/3 [00:00<00:00, 14074.85it/s]
Calculating vectors of reference spectrums: 100%|██████████| 3/3 [00:00<00:00, 21.24it/s]
Spectrum binning: 100%|██████████| 3/3 [00:00<00:00, 1515.83it/s]
Create BinnedSpectrum instances: 100%|██████████| 3/3 [00:00<00:00, 11949.58it/s]
Calculating vectors of reference spectrums: 100%|██████████| 3/3 [00:00<00:00, 19.07it/s] |
ImageClassification/LeNet_MNIST.ipynb | ###Markdown
Image Classification using LeNet CNN MNIST Dataset - Handwritten Digits (0-9)
###Code
# import the necessary packages
from LeNet import LeNet
from sklearn.model_selection import train_test_split
from keras.datasets import mnist
from keras.optimizers import SGD
from keras.utils import np_utils
from keras import backend as K
import numpy as np
import argparse
import cv2
###Output
_____no_output_____
###Markdown
Load the data
###Code
# grab the MNIST dataset (may take time the first time)
print("[INFO] downloading MNIST...")
((trainData, trainLabels), (testData, testLabels)) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Prepare the data
###Code
# parameters for MNIST data set
num_classes = 10
image_width = 28
image_height = 28
image_channels = 1
# shape the input data using "channels last" ordering
# num_samples x rows x columns x depth
trainData = trainData.reshape(
(trainData.shape[0], image_height, image_width, image_channels))
testData = testData.reshape(
(testData.shape[0], image_height, image_width, image_channels))
# scale data to the range of [0.0, 1.0]
trainData = trainData.astype("float32") / 255.0
testData = testData.astype("float32") / 255.0
# transform the training and testing labels into vectors in the
# range [0, classes] -- this generates a vector for each label,
# where the index of the label is set to `1` and all other entries
# to `0`; in the case of MNIST, there are 10 class labels
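# (for example, the label 3 becomes the vector [0, 0, 0, 1, 0, 0, 0, 0, 0, 0])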
trainLabels = np_utils.to_categorical(trainLabels, num_classes) # one hot encoding
testLabels = np_utils.to_categorical(testLabels, num_classes)
###Output
_____no_output_____
###Markdown
Train Model
###Code
# initialize the model
print("[INFO] compiling model...")
model = LeNet.build(numChannels=image_channels,
imgRows=image_height, imgCols=image_width,
numClasses=num_classes,
weightsPath=None)
# initialize the optimizer
opt = SGD(lr=0.01) # Stochastic Gradient Descent
# build the model
model.compile(loss="categorical_crossentropy", # Soft-Max
optimizer=opt, metrics=["accuracy"])
# initialize hyper parameters
batch_size = 128
epochs = 1
print("[INFO] training...")
model.fit(trainData, trainLabels, batch_size=batch_size,
epochs=epochs, verbose=1)
# show the accuracy on the testing set
print("[INFO] evaluating...")
(loss, accuracy) = model.evaluate(testData, testLabels,
batch_size=batch_size, verbose=1)
print("[INFO] accuracy: {:.2f}%".format(accuracy * 100))
model.save_weights("lenet_mnist_test.hdf5", overwrite=True)
###Output
_____no_output_____
###Markdown
Evaluate Pre-trained Model
###Code
# load the model weights
print("[INFO] compiling model...")
model = LeNet.build(numChannels=image_channels,
imgRows=image_height, imgCols=image_width,
numClasses=num_classes,
weightsPath="weights/lenet_weights_mnist.hdf5")
# initialize the optimizer
opt = SGD(lr=0.01) # Stochastic Gradient Descent
# build the model
model.compile(loss="categorical_crossentropy", # Soft-Max
optimizer=opt, metrics=["accuracy"])
# show the accuracy on the testing set
print("[INFO] evaluating...")
(loss, accuracy) = model.evaluate(testData, testLabels,
batch_size=batch_size, verbose=1)
print("[INFO] accuracy: {:.2f}%".format(accuracy * 100))
###Output
_____no_output_____
###Markdown
Model Predictions
###Code
# set prediction parameters
num_predictions = 10
# randomly select a few testing digits
for i in np.random.choice(np.arange(0, len(testLabels)), size=(num_predictions,)):
# classify the digit
probs = model.predict(testData[np.newaxis, i])
prediction = probs.argmax(axis=1)
image = (testData[i] * 255).astype("uint8")
# merge the channels into one image
image = cv2.merge([image] * 3)
# resize the image from a 28 x 28 image to a 96 x 96 image so we
# can better see it
image = cv2.resize(image, (96, 96), interpolation=cv2.INTER_LINEAR)
print("[INFO] Predicted: {}, Actual: {}".format(
prediction[0], np.argmax(testLabels[i])))
# show the image and prediction
cv2.putText(image, str(prediction[0]), (5, 20),
cv2.FONT_HERSHEY_SIMPLEX, 0.75, (0, 255, 0), 2)
cv2.imshow("Digit", image)
cv2.waitKey(0)
# close the display window
cv2.destroyAllWindows()
###Output
_____no_output_____ |
product_c_3d_mode.ipynb | ###Markdown
###Code
!git clone https://github.com/ultralytics/yolov5 # clone
%cd yolov5
%pip install -qr requirements.txt # install
import torch
from yolov5 import utils
from IPython.display import Image, clear_output # to display images
display = utils.notebook_init() # checks
print('Setup complete. Using torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))
%cd /content/yolov5
!pip install roboflow
from roboflow import Roboflow
rf = Roboflow(api_key="9u8eLhhbftnfbyLtXg8t")
project = rf.workspace("3d-model-realworld-evalution").project("product-c")
dataset = project.version(1).download("yolov5")
# this is the YAML file Roboflow wrote for us that we're loading into this notebook with our data
%cat {dataset.location}/data.yaml
!python train.py --img 640 --batch 64 --epochs 110 --data {dataset.location}/data.yaml --weights yolov5s.pt --cache
!python detect.py --weights runs/train/exp/weights/best.pt --img 416 --conf 0.1 --source {dataset.location}/test/images
#display inference on ALL test images
import glob
from IPython.display import Image, display
for imageName in glob.glob('/content/yolov5/runs/detect/exp/*.jpg'): #assuming JPG
display(Image(filename=imageName))
print("\n")
!python export.py --weights /content/yolov5/runs/train/exp/weights/best.pt --include tfjs
cd ../..
! git clone https://github.com/mdhasanali3/3d-model-yolov5.git
!git config --global user.email "[email protected]"
!git config --global user.name "mdhasanali3"
!git pull origin
pwd
%cd /content/3d-model-yolov5
%mkdir product_C_64b_110e
%cp -r /content/yolov5/runs/train/exp/weights/best.pt /content/3d-model-yolov5/product_C_64b_110e
%cp -r /content/yolov5/runs/train/exp/weights/best_web_model /content/3d-model-yolov5/product_C_64b_110e
!git status
!git add -A
!git commit -m "product C model"
!git remote -v
!git remote rm origin
!git remote add origin https://[email protected]/mdhasanali3/3d-model-yolov5.git
!git push -u origin main
###Output
_____no_output_____ |
docs/ml/iris_LogisticRegression.ipynb | ###Markdown
Logistic regression model Using `scikit-learn`, a Python machine learning library, let's tackle a simple classification problem with a logistic regression model. --- 0. Importing the libraries
###Code
import numpy as np
import pandas as pd
import sklearn
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
np.set_printoptions(precision=4)
print("numpy :", np.__version__)
print("pandas :", pd.__version__)
print("sklearn :", sklearn.__version__)
print("seaborn :", sns.__version__)
print("matplotlib :", matplotlib.__version__)
###Output
numpy : 1.16.1
pandas : 0.24.2
sklearn : 0.20.2
seaborn : 0.9.0
matplotlib : 3.0.2
###Markdown
1. Loading and preparing the data Let's load the Iris dataset from `sklearn.datasets`.
###Code
# make data samples
from sklearn.datasets import load_iris
iris = load_iris()
###Output
_____no_output_____
###Markdown
Next, we define the variables `df_feature`, `df_target`, and `df` as instances of the pandas DataFrame() class. Reference: [pandas.DataFrame — pandas 1.0.1 documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html)
###Code
df_feature = pd.DataFrame(iris.data, columns=iris.feature_names)
df_target = pd.DataFrame(iris.target, columns=["target"])
df_target.loc[df_target['target'] == 0, 'target_name'] = "setosa"
df_target.loc[df_target['target'] == 1, 'target_name'] = "versicolor"
df_target.loc[df_target['target'] == 2, 'target_name'] = "virginica"
df = pd.concat([df_target, df_feature], axis=1)
df.head(10)
###Output
_____no_output_____
###Markdown
Let's look at the summary statistics of the data (number of samples, mean, standard deviation, quartiles, median, minimum, maximum, etc.).
###Code
df.describe().T
###Output
_____no_output_____
###Markdown
We display the correlation matrix of the data. Each diagonal element represents the correlation of a variable with itself, so it is always 1.0.
###Code
df.corr()
###Output
_____no_output_____
###Markdown
Let's visualize the correlation matrix using seaborn.
###Code
# Correlation matrix
sns.set()
cols = ['target', 'sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)'] # features to plot
plt.figure(figsize=(12,10))
plt.title('Pearson Correlation of Iris Features', y=1.01, fontsize=14)
sns.heatmap(df[cols].astype(float).corr(),
linewidths=0.1,
vmax=1.0,
cmap=sns.diverging_palette(220, 10, as_cmap=True),
square=True,
linecolor='white',
annot=True)
###Output
_____no_output_____
###Markdown
We draw the scatter plot matrix of the data. For pairs of explanatory variables with a strong correlation, multicollinearity should be taken into account.
###Code
# pairplot
sns.set()
sns.pairplot(df, diag_kind='hist', height=2.0)
plt.show()
###Output
_____no_output_____
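###Markdown
As a quick numeric companion to the multicollinearity remark above, one could flag feature pairs whose absolute correlation exceeds some threshold (the value 0.8 below is an arbitrary choice):
###Code
# Flag highly correlated feature pairs (threshold chosen arbitrarily for illustration)
corr = df_feature.corr().abs()
high_corr_pairs = [(a, b, round(corr.loc[a, b], 3))
                   for i, a in enumerate(corr.columns)
                   for b in corr.columns[i + 1:]
                   if corr.loc[a, b] > 0.8]
print(high_corr_pairs)
###Output
_____no_output_____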
###Markdown
In a classification dataset, each data point comes with a corresponding class label. Let's color each point of the scatter plot matrix above according to which of the three classes it belongs to.
###Code
sns.set()
sns.pairplot(df, hue='target', diag_kind='hist', height=2.0)
plt.show()
###Output
_____no_output_____
###Markdown
2. Splitting the data From the variable `iris`, we extract the data corresponding to the explanatory variables and the target variable, and store them in the numpy.ndarray() variables `X` and `y`.
###Code
X = iris.data
y = iris.target
###Output
_____no_output_____
###Markdown
We split the whole dataset into training data and test data. That is, the variable `X` is split into `X_train` and `X_test`, and the variable `y` into `y_train` and `y_test`.
###Code
# split data by Hold-out-method
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Let's check the shapes of the arrays with `print()`.
###Code
print("X_train: ", X_train.shape)
print("y_train: ", y_train.shape)
print("X_test: ", X_test.shape)
print("y_test: ", y_test.shape)
###Output
X_train: (120, 4)
y_train: (120,)
X_test: (30, 4)
y_test: (30,)
###Markdown
- X_train: stores 120 four-dimensional data points. - y_train: stores 120 one-dimensional values. - X_test: stores 30 four-dimensional data points. - y_test: stores 30 one-dimensional values. 3. Creating the model
###Code
# Logistic Regression
from sklearn.linear_model import LogisticRegression
clf_lr = LogisticRegression(random_state=0,
solver='lbfgs',
multi_class='auto')
###Output
_____no_output_____
###Markdown
4. Fitting the model to the data
###Code
# fit
clf_lr.fit(X_train, y_train)
###Output
/usr/local/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:758: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
"of iterations.", ConvergenceWarning)
###Markdown
Model evaluation
###Code
# predictions
y_train_pred = clf_lr.predict(X_train)
y_test_pred = clf_lr.predict(X_test)
# Accuracy
from sklearn.metrics import accuracy_score
print('Accuracy (train) : {:>.4f}'.format(accuracy_score(y_train, y_train_pred)))
print('Accuracy (test) : {:>.4f}'.format(accuracy_score(y_test, y_test_pred)))
# Confusion matrix
from sklearn.metrics import confusion_matrix
cmat_train = confusion_matrix(y_train, y_train_pred)
cmat_test = confusion_matrix(y_test, y_test_pred)
def print_confusion_matrix(confusion_matrix, class_names, plt_title='Confusion matrix: ', cmap='BuGn', figsize = (6.25, 5), fontsize=10):
df_cm = pd.DataFrame(confusion_matrix, index=class_names, columns=class_names)
fig = plt.figure(figsize=figsize)
heatmap = sns.heatmap(df_cm, annot=True, fmt="d", cmap=cmap)
heatmap.yaxis.set_ticklabels(heatmap.yaxis.get_ticklabels(), rotation=0, ha='right', fontsize=fontsize)
heatmap.xaxis.set_ticklabels(heatmap.xaxis.get_ticklabels(), rotation=45, ha='right', fontsize=fontsize)
plt.xlabel('Predicted label')
plt.ylabel('True label')
plt.title(plt_title, fontsize=fontsize*1.25)
plt.show()
print_confusion_matrix(cmat_train,
iris.target_names,
plt_title='Confusion matrix (train, 120 samples)')
print_confusion_matrix(cmat_test,
iris.target_names,
plt_title='Confusion matrix (test, 30 samples)')
###Output
_____no_output_____ |
.ipynb_checkpoints/dis.008-checkpoint.ipynb | ###Markdown
dis.008 Import libraries
###Code
# Libraries for downloading data from remote server (may be ftp)
import requests
from urllib.request import urlopen
from contextlib import closing
import shutil
# Library for uploading/downloading data to/from S3
import boto3
# Libraries for handling data
import rasterio as rio
import numpy as np
# from netCDF4 import Dataset
# import pandas as pd
# import scipy
# Libraries for various helper functions
# from datetime import datetime
import os
import threading
import sys
from glob import glob
from matplotlib import pyplot
%matplotlib inline
###Output
_____no_output_____
###Markdown
s3 tools
###Code
s3_upload = boto3.client("s3")
s3_download = boto3.resource("s3")
s3_bucket = "wri-public-data"
s3_folder = "resourcewatch/raster/ene_019_wind_energy_potential/"
s3_file = "ene_018_wind_energy_potential.tif"
s3_key_orig = s3_folder + s3_file
s3_key_edit = s3_key_orig[0:-4] + "_edit.tif"
os.environ["Zs3_key1"] = "s3://wri-public-data/" + s3_key_orig
os.environ["Zs3_key2"] = "s3://wri-public-data/" + s3_key_edit
class ProgressPercentage(object):
def __init__(self, filename):
self._filename = filename
self._size = float(os.path.getsize(filename))
self._seen_so_far = 0
self._lock = threading.Lock()
def __call__(self, bytes_amount):
# To simplify we'll assume this is hooked up
# to a single filename.
with self._lock:
self._seen_so_far += bytes_amount
percentage = (self._seen_so_far / self._size) * 100
sys.stdout.write("\r%s %s / %s (%.2f%%)"%(
self._filename, self._seen_so_far, self._size,
percentage))
sys.stdout.flush()
###Output
_____no_output_____
###Markdown
Define local file locations
###Code
local_folder = "C:/Users/Max81007/Desktop/Python/Resource_Watch/Raster/cit.018/"
file_name = "cit_018_monthly_no2_concentrations_in_atmosphere_201701.tif"
local_orig = local_folder + file_name
orig_extension_length = 4 #4 for each char in .tif
local_edit = local_orig[:-orig_extension_length] + "_edit.tif"
files = [local_orig, local_edit]
for file in files:
with rio.open(file, 'r') as src:
profile = src.profile
print(profile)
###Output
{'driver': 'AAIGrid', 'dtype': 'float32', 'nodata': -1.2676499957653196e+30, 'width': 3600, 'height': 1800, 'count': 1, 'crs': None, 'transform': Affine(0.1, 0.0, -180.0,
0.0, -0.1, 90.0), 'tiled': False}
###Markdown
Reproject and compress with gdalwarp, then check the result with rasterio
###Code
os.getcwd()
os.chdir(local_folder)
os.environ["local_orig"] =local_orig
os.environ["local_edit"] =local_edit
!gdalwarp -overwrite -t_srs epsg:4326 -srcnodata none -co compress=lzw %local_orig% %local_edit%
files = [local_orig, local_edit]
for file in files:
with rio.open(file, 'r') as src:
profile = src.profile
print(profile)
###Output
{'driver': 'AAIGrid', 'dtype': 'float32', 'nodata': -1.2676499957653196e+30, 'width': 3600, 'height': 1800, 'count': 1, 'crs': None, 'transform': Affine(0.1, 0.0, -180.0,
0.0, -0.1, 90.0), 'tiled': False}
{'driver': 'GTiff', 'dtype': 'float32', 'nodata': None, 'width': 3600, 'height': 1800, 'count': 1, 'crs': CRS({'init': 'epsg:4326'}), 'transform': Affine(0.1, 0.0, -180.0,
0.0, -0.1, 90.0), 'tiled': False, 'compress': 'lzw', 'interleave': 'band'}
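###Markdown
For reference, the same step can be sketched with rasterio alone. This is an illustration under assumptions, not what was run here: it assumes the source grid is already a regular lat/lon raster (as its Affine transform suggests), so the work reduces to tagging the CRS and rewriting the band as an LZW-compressed GeoTIFF. The output path `rio_edit` is hypothetical.
###Code
import rasterio as rio
from rasterio.crs import CRS

# Hypothetical output path for this sketch
rio_edit = local_orig[:-orig_extension_length] + "_rio_edit.tif"

with rio.open(local_orig) as src:
    data = src.read(1)                 # single-band raster
    profile = src.profile.copy()
    # switch driver, declare the CRS and add LZW compression
    profile.update(driver="GTiff", crs=CRS.from_epsg(4326), compress="lzw")

with rio.open(rio_edit, "w", **profile) as dst:
    dst.write(data, 1)
###Output
_____no_output_____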
###Markdown
Upload orig and edit files to s3
###Code
# Original
s3_upload.upload_file(local_orig, s3_bucket, s3_key_orig,
Callback=ProgressPercentage(local_orig))
# Edit
s3_upload.upload_file(local_edit, s3_bucket, s3_key_edit,
Callback=ProgressPercentage(local_edit))
os.environ["Zgs_key"] = "gs://resource-watch-public/" + s3_key_orig
!echo %Zs3_key2%
!echo %Zgs_key%
!gsutil cp %Zs3_key2% %Zgs_key%
with rio.open(local_orig) as src:
data = src.read(indexes=1)
pyplot.imshow(data)
with rio.open(local_edit) as src:
data = src.read(indexes=1)
pyplot.imshow(data)
os.environ["asset_id"] = "users/resourcewatch/cit_018_monthly_no2_concentrations_in_atmosphere_201701"
!earthengine upload image --asset_id=%asset_id% %Zgs_key%
!earthengine task info F7ZP3YOHXBMERJK2KRG4C5M2
###Output
F7ZP3YOHXBMERJK2KRG4C5M2:
State: COMPLETED
Type: Upload
Description: Asset ingestion: users/resourcewatch/ene_018_wind_energy_potential
Created: 2017-10-05 16:37:23.361000
Started: 2017-10-05 16:37:26.531000
Updated: 2017-10-05 16:39:03.039000
|
Labs/Lab2_Regression/.ipynb_checkpoints/Lab 2 (Part C) - Linear regression with multiple features-checkpoint.ipynb | ###Markdown
Lab 2 (Part C) - Linear regression with multiple features __IMPORTANT__ Please complete this Jupyter Notebook file and upload it to blackboard __before 05 February 2020__. In this part of the lab, you will implement linear regression with multiple variables to predict the price of houses. Suppose you are selling your house and you want to know what a good market price would be. One way to do this is to first collect information on recent houses sold and make a model of housing prices. 1. Loading the dataset The file `housing-dataset.csv` contains a training set of housing prices in Portland, Oregon. The first column is the size of the house (in square feet), the second column is the number of bedrooms, and the third column is the price of the house. The following Python code helps you load the dataset from the data file into the variables $X$ and $y$. Read the code and print a small subset of $X$ and $y$ to see what they look like.
###Code
%matplotlib inline
import numpy as np
filename = "datasets/housing-dataset.csv"
mydata = np.genfromtxt(filename, delimiter=",")
# We have n data-points (houses)
n = len(mydata)
# X is a matrix of two column, i.e. an array of n 2-dimensional data-points
X = mydata[:, :2].reshape(n, 2)
# y is the vector of outputs, i.e. an array of n scalar values
y = mydata[:, -1]
""" TODO:
You can print a small subset of X and y to see what it looks like.
"""
print(X[:10])
print(y[:10])
###Output
[[2.104e+03 3.000e+00]
[1.600e+03 3.000e+00]
[2.400e+03 3.000e+00]
[1.416e+03 2.000e+00]
[3.000e+03 4.000e+00]
[1.985e+03 4.000e+00]
[1.534e+03 3.000e+00]
[1.427e+03 3.000e+00]
[1.380e+03 3.000e+00]
[1.494e+03 3.000e+00]]
[399900. 329900. 369000. 232000. 539900. 299900. 314900. 198999. 212000.
242500.]
###Markdown
2. Data normalization By looking at the values, note that house sizes are about 1000 times the number of bedrooms. When features differ by orders of magnitude, first performing feature scaling can make gradient descent converge much more quickly. Your task here is to write the following code to: - Subtract the mean value of each feature from the dataset. - After subtracting the mean, additionally scale (divide) the feature values by their respective *standard deviations*. In Python, you can use the numpy function `np.mean(..)` to compute the mean. This function can directly be used on a $d$-dimensional dataset to compute a $d$-dimensional mean vector `mu` where each value `mu[j]` is the mean of the $j^{th}$ feature. This is done by setting the $2^{nd}$ argument `axis` of this function to `0`. For example, consider the following matrix `A` where each line corresponds to one data-point and each column corresponds to one feature: `A = [[100, 10], [30, 10], [230, 25]]`. In this case, `np.mean(A, axis=0)` will give `[120, 15]` where 120 is the mean of the 1st column (1st feature) and 15 is the mean of the 2nd column (2nd feature). Another function `np.std(..)` exists to compute the standard deviation. The standard deviation is a way of measuring how much variation there is in the range of values of a particular feature (usually, most data points will lie within the interval: mean $\pm$ 2 standard deviations). Once the features are normalized, you can do a scatter plot of the original dataset `X` (size of the house vs. number of bedrooms) and a scatter plot of the normalized dataset `X_normalized`. You will notice that the normalized dataset still has the same shape as the original one; the difference is that the new feature values have a similar scale and are centred around the origin. **Implementation Note**: When normalizing the features, it is important to store the values used for normalization (the mean and the standard deviation used for the computations). Indeed, after learning the parameters of a model, we often want to predict the prices of houses we have not seen before. Given a new $x$ value (living room area and number of bedrooms), we must first normalize $x$ using the mean and standard deviation that we had previously computed from the training set.
###Code
import matplotlib.pylab as plt
""" TODO:
Complete the following code to compute a normalized version of X called: X_normalized
"""
# TODO: compute mu, the mean vector from X
mu = X.mean(axis=0)
# TODO: compute std, the standard deviation vector from X
std = X.std(axis=0)
# X_normalized = (X - mu) / std
X_normalized = (X-mu)/std
""" TODO:
- Do a scatter plot of the original dataset X
- Do a scatter plot of the normalized dataset X_normalized
"""
fig, ax = plt.subplots()
ax.set_xlabel('Size')
ax.set_ylabel('Rooms')
ax.scatter(X[:,0],X[:,1], color="red", marker='o', label='Data points')
fig, ax = plt.subplots()
ax.set_xlabel('Size')
ax.set_ylabel('Rooms')
ax.scatter(X_normalized[:,0],X_normalized[:,1], color="red", marker='x', label='Data points')
###Output
_____no_output_____
###Markdown
Similar to what you did in Lab2 Part B, you can simplify your implementation of linear regression by adding an additional first column to `X_normalized` with all the values of this column set to $1$. To do this you can re-use the function `add_all_ones_column(..)` defined in Lab2 Part B, which takes a matrix as argument and returns a new matrix with an additional first column (of ones).
###Code
""" TODO:
Copy-past here the definition of the function add_all_ones_column(...) that
you have see in Lab 2 (Part B).
"""
# definition of the function add_all_ones_column() here ...
def add_all_ones_column(X):
n, d = X.shape # dimension of the matrix X (n lines, d columns)
XX = np.ones((n, d+1)) # new matrix of all ones with one additional column
XX[:, 1:] = X # set X starting from column 1 (keep only column 0 unchanged)
return XX
""" TODO:
Just uncomment the following lines to create a matrix
X_normalized_new with an additional first column (of ones).
"""
X_normalized_new = add_all_ones_column(X_normalized)
print("Subset of X_normalized_new")
print(X_normalized_new[:10])
###Output
Subset of X_normalized_new
[[ 1. 0.13141542 -0.22609337]
[ 1. -0.5096407 -0.22609337]
[ 1. 0.5079087 -0.22609337]
[ 1. -0.74367706 -1.5543919 ]
[ 1. 1.27107075 1.10220517]
[ 1. -0.01994505 1.10220517]
[ 1. -0.59358852 -0.22609337]
[ 1. -0.72968575 -0.22609337]
[ 1. -0.78946678 -0.22609337]
[ 1. -0.64446599 -0.22609337]]
###Markdown
You are now ready to implement the linear regression using gradient descent (with more than one feature). In this multivariate case, you can further simplify your implementation by writing the cost function in the following vectorized form:$$E(\theta) = \frac{1}{2n} (X \theta - y)^T (X \theta - y)$$$$\text{where }\quad X = \begin{bmatrix}-- ~ {x^{(1)}}^T ~ -- \\ -- ~ {x^{(2)}}^T ~ -- \\ \vdots \\ -- ~ {x^{(n)}}^T ~ --\end{bmatrix}\quad \quad \quad y = \begin{bmatrix}y^{(1)} \\ y^{(2)} \\ \vdots \\ y^{(n)} \end{bmatrix}$$The vectorized form of the gradient of $E(\theta)$ is a vector denoted as $\nabla E(\theta)$ and defined as follows:$$\nabla E(\theta) = \left ( \frac{\partial E}{\partial \theta_0}, \frac{\partial E}{\partial \theta_1}, \dots, \frac{\partial E}{\partial \theta_d} \right ) = \frac{1}{n} X^T (X \theta - y)$$This is a **vector** where each $j^{th}$ value corresponds to $\frac{\partial E}{\partial \theta_j}$ (the derivative of the function $E$ with respect to the parameter $\theta_j$). Once your code is finished, you will get to try out different learning rates $\alpha$ for the dataset and find a learning rate that converges quickly. To do so, you can plot the history of the cost $E(\theta)$ with respect to the number of iterations at the end of your code. For example, for alpha values of 0.01, 0.05 and 0.1, the plot should look as follows. If your learning rate is too large, $E(\theta)$ can diverge and *blow up*, resulting in values which are too large for computer calculations. In these situations, Python will tend to return `NaN` or `inf` (NaN stands for "*not a number*" and is often caused by undefined operations that involve $-\inf$ and $+\inf$). If your value of $E(\theta)$ increases or even blows up, adjust your learning rate and try again.
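For completeness, each gradient descent iteration then updates the whole parameter vector at once using this gradient (this is exactly the update you will implement inside the loop below): $$\theta := \theta - \alpha \, \nabla E(\theta)$$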
###Code
""" TODO:
Write the cost function E using the vectorized form
"""
def E(theta, X, y):
return (1/(2*len(X)))*np.transpose(X@theta - y)@(X@theta - y)
""" TODO:
Define the function grad_E (the gradient of E) using the vectorized form.
This should return a vector of the same dimension as theta
"""
def grad_E(theta, X, y):
return 1/len(X)*np.transpose(X)@(X@theta-y)
""" TODO:
Complete the definition of the function LinearRegressionWithGD(...) below
Note: don't forget to call the functions E(..) and grad_E(..) with X_normalized_new instead of X
The arguments of LinearRegressionWithGD(..) are:
*** theta: vector of initial parameter values
*** alpha: the learning rate (used by gradient descent)
*** max_iterations: maximum number of iterations to perform
*** epsilon: to stop iterating if the cost decreases by less than epsilon
The function returns:
*** errs: a list corresponding to the historical cost values
*** theta: the final parameter values
"""
def LinearRegressionWithGD(theta, alpha, max_iterations, epsilon):
errs = []
cost_list = []
for itr in range(max_iterations):
mse = E(theta, X_normalized_new, y)
errs.append(mse)
# TODO: take a gradient descent step to adapt the vector of parameters theta
theta = theta - alpha*grad_E(theta, X_normalized_new,y) # Vectorized Gradient descent
# TODO: test if the cost decreases by less than epsilon (to stop iterating)
CONDITION = mse - E(theta, X_normalized_new, y) < epsilon
if CONDITION:
break
return errs, theta
""" TODO:
Here you will call LinearRegressionWithGD(..) in a loop with different values of alpha,
and plot the cost history (errs) returned by each call of LinearRegressionWithGD(..)
"""
fig, ax = plt.subplots()
ax.set_xlabel("Number of Iterations")
ax.set_ylabel(r"Cost $E(\theta)$")
theta_init = np.array([0, 0, 0])
max_iterations = 100
epsilon = 0.000000000001
for alpha in [0.01, 0.05, 0.1]:
# TODO: call LinearRegressionWithGD(...) using the current alpha, to get errs and theta
errs, theta = LinearRegressionWithGD(theta_init, alpha, max_iterations, epsilon)
print("alpha = {}, theta = {}".format(alpha, theta))
# plot the errs using ax.plot(..)
ax.plot(errs)
plt.legend()
fig.show()
###Output
No handles with labels found to put in legend.
###Markdown
Now, once you have found a good $\theta$ using gradient descent, use it to make a price prediction for a new 1650-square-foot house with 3 bedrooms. **Note**: since the parameter vector $\theta$ was learned using the normalized dataset, you will need to normalize the new data-point corresponding to this new house before predicting its price.
###Code
""" TODO:
Use theta to predict the price of a 1650-square-foot house with 3 bedrooms
Don't forget to normalize the feature values of this new house first.
"""
# Create a data-point x corresponding to the new house
x = (np.array([[1650,3]]))
# Normalize the feature values of x
x_normalized = (x-mu)/std
x_normalized = add_all_ones_column(x_normalized)
# Use the vector of parameters theta to predict the price of x
predict1 = x_normalized @ theta
print("Prediction", predict1)
"""
HINT: if you are not able to compute the dot product between x and theta, then
make sure that the arrays have the same size. Did you forget something?
"""
###Output
Prediction [293214.16354571]
###Markdown
Normal Equation: Linear regression without gradient descent As you know from the lecture, the MSE cost function $E(\theta)$ that we are trying to minimize is a convex function, and its derivative at the optimal $\theta$ (that minimizes $E(\theta)$) is equal to $0$. Therefore, to find the optimal $\theta$, one can simply compute the derivative of $E(\theta)$ with respect to $\theta$, set it equal to $0$, and solve for $\theta$. We have seen in the lecture that, by doing this, the closed-form solution is given as follows:$$\theta = (X^T X)^{-1} X^T y$$Using this formula does not require any feature scaling, and you will get an exact solution in one calculation: there is no "*loop until convergence*" like in gradient descent. You are asked to implement this equation to directly compute the best parameter vector $\theta$ for the linear regression. In Python, you can use the `inv` function from `numpy.linalg` to compute the inverse of a matrix. Remember that while you don't need to scale your features, we still need to add a column of 1's to the $X$ matrix to have an intercept term ($\theta_0$).
###Code
from numpy.linalg import inv
""" TODO:
Use the function add_all_ones_column(..) to add a column of 1's to X.
Let's call the returned dataset X_new.
"""
new_X = add_all_ones_column(X)
""" TODO:
Compute the optimal theta using new_X and y (without using gradient descent).
Use the normal equation shown above. You can use the function inv (imported above)
to compute the inverse of a matrix.
"""
theta = np.linalg.inv(np.transpose(new_X)@new_X)@np.transpose(new_X)@y
print("With the original (non-normalized) dataset: theta = {}".format(theta))
###Output
With the original (non-normalized) dataset: theta = [89597.9095428 139.21067402 -8738.01911233]
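###Markdown
Side note (an alternative, not part of the assignment): the same least-squares solution can be obtained without explicitly forming the matrix inverse, which is numerically more robust; `np.linalg.lstsq` solves the system directly.
###Code
# For comparison only: least-squares solution without computing an explicit inverse
theta_lstsq, *_ = np.linalg.lstsq(new_X, y, rcond=None)
print(theta_lstsq)
###Output
_____no_output_____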
###Markdown
Now, once you have computed the optimal $\theta$, use it to make a price prediction for the new 1650-square-foot house with 3 bedrooms. Remember that $\theta$ was computed above based on the original dataset (without normalization); so, you do not need to normalize the feature values of the new house to make the prediction in this case.
###Code
""" TODO:
Use theta to predict the price of a 1650-square-foot house with 3 bedrooms
"""
x = add_all_ones_column(np.array([[1650,3]]))
prediction = x @ theta
print(prediction)
###Output
[293081.46433489]
###Markdown
Using the previous formula does not require any feature normalization or scaling. However, you can still compute the optimal $\theta$ again using `X_normalized_new` instead of `new_X`. By doing this, you will be able to compare the $\theta$ that you compute here with the one you got previously when you used gradient descent. The two parameter vectors should be quite similar (but not necessarily exactly the same).
###Code
""" TODO:
Compute the optimal theta using X_normalized_new and y (without using gradient descent).
Use the normal equation (shown previously).
"""
theta = np.linalg.inv(np.transpose(X_normalized_new)@X_normalized_new)@np.transpose(X_normalized_new)@y
print("With the normalized dataset: theta = {}".format(theta))
###Output
With the normalized dataset: theta = [340412.65957447 109447.79646964 -6578.35485416]
###Markdown
Again, now that you have computed the optimal $\theta$ based on `X_normalized_new`, use it to make a price prediction for the new 1650-square-foot house with 3 bedrooms. Do you need to normalize the feature values of the new house here? Remember that $\theta$ was computed here based on the normalized dataset. You should find that this predicted price is similar to the price you predicted previously for the same house.
###Code
""" TODO:
Use theta to predict the price of a 1650-square-foot house with 3 bedrooms
"""
# Create a data-point x corresponding to the new house
x = np.array([[1650,3]])
# Normalize the feature values of x
x_normalized = (x-mu)/std
x_normalized = add_all_ones_column(x_normalized)
predict1 = x_normalized @ theta
# Use the vector of parameters theta to predict the price of x
print("prediction:", predict1)
###Output
prediction: [293081.4643349]
|
HeroesOfPymoli/Solved - HeroesOfPymoli.ipynb | ###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
#############################
# #
# Homework 4 - Pandas #
# Student - Matheus Gratz #
# #
#############################
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data.head(5)
# Cast the df types, just in case :)
purchase_data.dtypes
###Output
_____no_output_____
###Markdown
Player Count * Display the total number of players
###Code
# Set the empty list
total_players = []
# Calculate the total number of players
total_players.append(len(purchase_data['SN'].unique()))
# Set de Data Frame
total_players_df = pd.DataFrame(total_players, columns = ['Total Players'])
# Display the Data Frame
total_players_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Total) * Run basic calculations to obtain number of unique items, average price, etc.* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Set the empty dictionary
purchasing_analysis_dict = {}
# Number of Unique Items
num_unique_items = len(purchase_data['Item ID'].unique())
purchasing_analysis_dict['Number of Unique Items'] = num_unique_items
# Average Purchase Price
mean_price = purchase_data['Price'].mean(axis=0)
purchasing_analysis_dict['Average Price'] = mean_price
# Total Number of Purchases
total_num_purchases = purchase_data['Item ID'].count()
purchasing_analysis_dict['Total Number of Purchases'] = total_num_purchases
# Total Revenue
total_revenue = purchase_data['Price'].sum(axis=0)
purchasing_analysis_dict['Total Revenue'] = total_revenue
# Set the summary data frame
purchasing_analysis_df = pd.DataFrame(list(purchasing_analysis_dict.values()))
purchasing_analysis_df = purchasing_analysis_df.transpose()
purchasing_analysis_df.columns = purchasing_analysis_dict.keys()
# Format fields
purchasing_analysis_df['Number of Unique Items'] = purchasing_analysis_df['Number of Unique Items'].map("{:.0f}".format)
purchasing_analysis_df['Total Number of Purchases'] = purchasing_analysis_df['Total Number of Purchases'].map("{:.0f}".format)
purchasing_analysis_df['Average Price'] = purchasing_analysis_df['Average Price'].map("${:.2f}".format)
purchasing_analysis_df['Total Revenue'] = purchasing_analysis_df['Total Revenue'].map("${:,.2f}".format)
# Display the summary data frame
purchasing_analysis_df
###Output
_____no_output_____
###Markdown
Gender Demographics * Percentage and Count of Male Players* Percentage and Count of Female Players* Percentage and Count of Other / Non-Disclosed
###Code
# Set a data frame with unique Player Names
unique_players_df = purchase_data.drop_duplicates(subset=['SN', 'Gender'])
# Create a count column for each gender
count_gender = unique_players_df["Gender"].value_counts()
# Set the total
gender_demographics_df = pd.DataFrame(count_gender)
gender_demographics_df.columns = ["Total Count"]
# Calculate the sum
sum_players = gender_demographics_df['Total Count'].sum()
# Generate te final output
gender_demographics_df['Percentage of Players'] = gender_demographics_df['Total Count'] / sum_players * 100
# Format fields
gender_demographics_df['Percentage of Players'] = gender_demographics_df['Percentage of Players'].map("{:.2f}%".format)
# Display the summary data frame
gender_demographics_df
###Output
_____no_output_____
###Markdown
Purchasing Analysis (Gender) * Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. by gender* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
#Generate the calculations
purchase_analysis_gender_df = purchase_data.groupby('Gender').agg(
total_users = ('SN', 'nunique'),
total_orders = ('Purchase ID', 'count'),
avg_price = ('Price', 'mean'),
total_revenue = ('Price', 'sum')
)
# Calculate the average per person
purchase_analysis_gender_df['Average Purchase Total per Person'] = purchase_analysis_gender_df['total_revenue'] / purchase_analysis_gender_df['total_users']
# Rename Columns
purchase_analysis_gender_df = purchase_analysis_gender_df.rename(columns={
'total_users' : 'Total Users',
'total_orders' : 'Purchase Count',
'avg_price' : 'Average Purchase Price',
'total_revenue' : 'Total Purchase Value',
})
# Format fields
purchase_analysis_gender_df['Average Purchase Price'] = purchase_analysis_gender_df['Average Purchase Price'].map("${:,.2f}".format)
purchase_analysis_gender_df['Total Purchase Value'] = purchase_analysis_gender_df['Total Purchase Value'].map("${:,.2f}".format)
purchase_analysis_gender_df['Average Purchase Total per Person'] = purchase_analysis_gender_df['Average Purchase Total per Person'].map("${:,.2f}".format)
# Display the summary data frame
purchase_analysis_gender_df
###Output
_____no_output_____
###Markdown
Age Demographics * Establish bins for ages* Categorize the existing players using the age bins. Hint: use pd.cut()* Calculate the numbers and percentages by age group* Create a summary data frame to hold the results* Optional: round the percentage column to two decimal points* Display Age Demographics Table
###Code
# Generate min and max ages, stablishing min and max bins.
min_age = purchase_data['Age'].min()
max_age = purchase_data['Age'].max()
# Generate bins by list comprehension
bins = [x for x in range(0, int(max_age)+1, int(max_age/9))]
# Create bin labels
labels = [f"from {round(bins[x])} to {round(bins[x+1])}" for x in range(len(bins)-1)]
# Cut the dataframe in bins
purchase_data_groups = purchase_data
purchase_data_groups['Age Group'] = pd.cut(purchase_data_groups['Age'], bins, labels = labels)
# Calculate fields
age_demographics_df = purchase_data.groupby('Age Group').agg(total_users = ('SN', 'nunique'))
# Create sum measure
sum_ages = age_demographics_df['total_users'].sum()
# Calculate percentages
age_demographics_df['Percentage of Players'] = age_demographics_df['total_users'] / sum_ages * 100
# Format fields
age_demographics_df['Percentage of Players'] = age_demographics_df['Percentage of Players'].map("{:.2f}%".format)
age_demographics_df = age_demographics_df.rename(columns={'total_users' : 'Total Users'})
# Display the summary data frame
age_demographics_df
###Output
_____no_output_____
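###Markdown
For reference, here is a tiny standalone illustration of `pd.cut()` as used above; the ages, bin edges and labels are made up:
###Code
import pandas as pd

example_ages = pd.Series([7, 15, 22, 38])
pd.cut(example_ages, bins=[0, 10, 20, 30, 40],
       labels=["<10", "10-19", "20-29", "30-39"])
###Output
_____no_output_____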
###Markdown
Purchasing Analysis (Age) * Bin the purchase_data data frame by age* Run basic calculations to obtain purchase count, avg. purchase price, avg. purchase total per person etc. in the table below* Create a summary data frame to hold the results* Optional: give the displayed data cleaner formatting* Display the summary data frame
###Code
# Calculate fields
purchase_data_groups = purchase_data.groupby('Age Group').agg(
total_users = ('SN', 'nunique'),
total_orders = ('Purchase ID', 'count'),
avg_price = ('Price', 'mean'),
total_revenue = ('Price', 'sum')
)
purchase_data_groups['Average Purchase Total per Person'] = purchase_data_groups['total_revenue'] / purchase_data_groups['total_users']
# Rename Columns
purchase_data_groups = purchase_data_groups.rename(columns={
'total_users' : 'Total Users',
'total_orders' : 'Purchase Count',
'avg_price' : 'Average Purchase Price',
'total_revenue' : 'Total Purchase Value',
})
# Format fields
purchase_data_groups['Average Purchase Price'] = purchase_data_groups['Average Purchase Price'].map("${:,.2f}".format)
purchase_data_groups['Total Purchase Value'] = purchase_data_groups['Total Purchase Value'].map("${:,.2f}".format)
purchase_data_groups['Average Purchase Total per Person'] = purchase_data_groups['Average Purchase Total per Person'].map("${:,.2f}".format)
# Display the summary data frame
purchase_data_groups
###Output
_____no_output_____
###Markdown
Top Spenders * Run basic calculations to obtain the results in the table below* Create a summary data frame to hold the results* Sort the total purchase value column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
# Calculate fields
spenders_df = purchase_data.groupby('SN').agg(
total_orders = ('Purchase ID', 'count'),
avg_price = ('Price', 'mean'),
total_revenue = ('Price', 'sum')
)
spenders_df = spenders_df.sort_values('total_orders', ascending=False)
# Rename Columns
spenders_df = spenders_df.rename(columns={
'total_orders' : 'Purchase Count',
'avg_price' : 'Average Purchase Price',
'total_revenue' : 'Total Purchase Value',
})
# Format fields
spenders_df['Average Purchase Price'] = spenders_df['Average Purchase Price'].map("${:,.2f}".format)
spenders_df['Total Purchase Value'] = spenders_df['Total Purchase Value'].map("${:,.2f}".format)
# Display the summary data frame
spenders_df.head(5)
###Output
_____no_output_____
###Markdown
Most Popular Items * Retrieve the Item ID, Item Name, and Item Price columns* Group by Item ID and Item Name. Perform calculations to obtain purchase count, item price, and total purchase value* Create a summary data frame to hold the results* Sort the purchase count column in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the summary data frame
###Code
most_popular_items_df = purchase_data[['Item ID', 'Item Name', 'Price']]
most_popular_items_gp = most_popular_items_df.groupby(['Item ID', 'Item Name']).agg(
total_orders = ('Item ID', 'count'),
avg_price = ('Price', 'mean'),
total_revenue = ('Price', 'sum'),
)
most_popular_items_gp = most_popular_items_gp.rename(columns={
'total_orders' : 'Purchase Count',
'avg_price' : 'Item Price',
'total_revenue' : 'Total Purchase Value',
})
most_popular_items_gp.sort_values('Purchase Count', ascending=False, inplace=True)
# Format fields
most_popular_items_gp['Item Price'] = most_popular_items_gp['Item Price'].map("${:,.2f}".format)
most_popular_items_gp['Total Purchase Value'] = most_popular_items_gp['Total Purchase Value'].map("${:,.2f}".format)
# Display the summary data frame
most_popular_items_gp.head(5)
###Output
_____no_output_____
###Markdown
Most Profitable Items * Sort the above table by total purchase value in descending order* Optional: give the displayed data cleaner formatting* Display a preview of the data frame
###Code
# Unformat fields, replacing the currency symbol and converting to a float
most_popular_items_gp['Total Purchase Value'] = most_popular_items_gp['Total Purchase Value'].replace('[\$,]','',regex=True).astype(float)
# Sort the data
most_popular_items_gp.sort_values('Total Purchase Value', ascending=False, inplace=True)
# Format fields
most_popular_items_gp['Total Purchase Value'] = most_popular_items_gp['Total Purchase Value'].map("${:,.2f}".format)
# Display the summary data frame
most_popular_items_gp.head(5)
###Output
_____no_output_____ |
docs/notebooks/visualisation.ipynb | ###Markdown
Visualisation This notebook showcases different ways of visualizing lig-prot and prot-prot interactions, either with atomistic details or simply at the residue level.
###Code
import MDAnalysis as mda
import prolif as plf
import numpy as np
# load topology
u = mda.Universe(plf.datafiles.TOP, plf.datafiles.TRAJ)
lig = u.select_atoms("resname LIG")
prot = u.select_atoms("protein")
# create RDKit-like molecules for visualisation
lmol = plf.Molecule.from_mda(lig)
pmol = plf.Molecule.from_mda(prot)
# get lig-prot interactions with atom info
fp = plf.Fingerprint(["HBDonor", "HBAcceptor", "Cationic", "PiStacking"])
fp.run(u.trajectory[0:1], lig, prot)
df = fp.to_dataframe(return_atoms=True)
df.T
###Output
_____no_output_____
###Markdown
py3Dmol (3Dmol.js) With py3Dmol we can easily display the interactions. For interactions involving a ring (pi-cation, pi-stacking, etc.) ProLIF returns the index of one of the ring atoms, but for visualisation having the centroid of the ring looks nicer. We'll start by writing a function to find the centroid, given the index of one of the ring atoms.
###Code
from rdkit import Chem
from rdkit import Geometry
def get_ring_centroid(mol, index):
# find ring using the atom index
Chem.SanitizeMol(mol, Chem.SanitizeFlags.SANITIZE_SETAROMATICITY)
ri = mol.GetRingInfo()
for r in ri.AtomRings():
if index in r:
break
else:
raise ValueError("No ring containing this atom index was found in the given molecule")
# get centroid
coords = mol.xyz[list(r)]
ctd = plf.utils.get_centroid(coords)
return Geometry.Point3D(*ctd)
###Output
_____no_output_____
###Markdown
Finally, the actual visualisation code. The API of py3Dmol is exactly the same as the GLViewer class of 3Dmol.js, for which the documentation can be found [here](https://3dmol.csb.pitt.edu/doc/$3Dmol.GLViewer.html).
###Code
import py3Dmol
colors = {
"HBAcceptor": "blue",
"HBDonor": "red",
"Cationic": "green",
"PiStacking": "purple",
}
# JavaScript functions
resid_hover = """function(atom,viewer) {{
if(!atom.label) {{
atom.label = viewer.addLabel('{0}:'+atom.atom+atom.serial,
{{position: atom, backgroundColor: 'mintcream', fontColor:'black'}});
}}
}}"""
hover_func = """
function(atom,viewer) {
if(!atom.label) {
atom.label = viewer.addLabel(atom.interaction,
{position: atom, backgroundColor: 'black', fontColor:'white'});
}
}"""
unhover_func = """
function(atom,viewer) {
if(atom.label) {
viewer.removeLabel(atom.label);
delete atom.label;
}
}"""
v = py3Dmol.view(650, 600)
v.removeAllModels()
models = {}
mid = -1
for i, row in df.T.iterrows():
lresid, presid, interaction = i
lindex, pindex = row[0]
lres = lmol[lresid]
pres = pmol[presid]
# set model ids for reusing later
for resid, res, style in [(lresid, lres, {"colorscheme": "cyanCarbon"}),
(presid, pres, {})]:
if resid not in models.keys():
mid += 1
v.addModel(Chem.MolToMolBlock(res), "sdf")
model = v.getModel()
model.setStyle({}, {"stick": style})
# add residue label
model.setHoverable({}, True, resid_hover.format(resid), unhover_func)
models[resid] = mid
# get coordinates for both points of the interaction
if interaction in ["PiStacking", "EdgeToFace", "FaceToFace", "PiCation"]:
p1 = get_ring_centroid(lres, lindex)
else:
p1 = lres.GetConformer().GetAtomPosition(lindex)
if interaction in ["PiStacking", "EdgeToFace", "FaceToFace", "CationPi"]:
p2 = get_ring_centroid(pres, pindex)
else:
p2 = pres.GetConformer().GetAtomPosition(pindex)
# add interaction line
v.addCylinder({"start": dict(x=p1.x, y=p1.y, z=p1.z),
"end": dict(x=p2.x, y=p2.y, z=p2.z),
"color": colors[interaction],
"radius": .15,
"dashed": True,
"fromCap": 1,
"toCap": 1,
})
# add label when hovering the middle of the dashed line by adding a dummy atom
c = Geometry.Point3D(*plf.utils.get_centroid([p1, p2]))
modelID = models[lresid]
model = v.getModel(modelID)
model.addAtoms([{"elem": 'Z',
"x": c.x, "y": c.y, "z": c.z,
"interaction": interaction}])
model.setStyle({"interaction": interaction}, {"clicksphere": {"radius": .5}})
model.setHoverable(
{"interaction": interaction}, True,
hover_func, unhover_func)
# show protein:
# first we need to reorder atoms as in the original MDAnalysis file.
# needed because the RDKitConverter reorders them when infering bond order
# and 3Dmol.js doesn't like when atoms from the same residue are spread accross the whole file
order = np.argsort([atom.GetIntProp("_MDAnalysis_index") for atom in pmol.GetAtoms()])
mol = Chem.RenumberAtoms(pmol, order.astype(int).tolist())
mol = Chem.RemoveAllHs(mol)
pdb = Chem.MolToPDBBlock(mol, flavor=0x20 | 0x10)
v.addModel(pdb, "pdb")
model = v.getModel()
model.setStyle({}, {"cartoon": {"style":"edged"}})
v.zoomTo({"model": list(models.values())})
###Output
_____no_output_____
###Markdown
Ligand Interaction Network (LigPlot) Protein-ligand interactions are typically represented with the ligand in atomic detail, residues as nodes, and interactions as edges. Such a diagram can easily be displayed by calling ProLIF's builtin class `prolif.plotting.network.LigNetwork`. This diagram is interactive and allows moving the residues around, as well as clicking the legend to toggle the display of specific residue types or interactions. LigNetwork can generate two kinds of depictions: - Based on a single specific frame - By aggregating results from several frames. In the latter case, the frequency with which an interaction is seen controls the width of the corresponding edge. You can hide the least frequent interactions by using a threshold, *i.e.* `threshold=0.3` will hide interactions that occur in less than 30% of frames.
###Code
from prolif.plotting.network import LigNetwork
fp = plf.Fingerprint()
fp.run(u.trajectory[::10], lig, prot)
df = fp.to_dataframe(return_atoms=True)
net = LigNetwork.from_ifp(df, lmol,
# replace with `kind="frame", frame=0` for the other depiction
kind="aggregate", threshold=.3,
rotation=270)
net.display()
###Output
_____no_output_____
###Markdown
You can further customize the diagram by changing the colors in `LigNetwork.COLORS` or the residues types in `LigNetwork.RESIDUE_TYPES`. Type `help(LigNetwork)` for more details. The diagram can be saved as an HTML file by calling `net.save("output.html")`. It is not currently possible to export it as an image, so please make a screenshot instead. You can combine both saving and displaying the diagram with `net.show("output.html")`. NetworkX and pyvisNetworkX is a great library for working with graphs, but the drawing options are quickly limited so we will use networkx to create a graph, and pyvis to create interactive plots. The following code snippet will calculate the IFP, each residue (ligand or protein) is converted to a node, each interaction to an edge, and the occurence of each interaction between residues will be used to control the weight and thickness of each edge.
###Code
import networkx as nx
from pyvis.network import Network
from tqdm.auto import tqdm
from matplotlib import cm, colors
from IPython.display import IFrame
# get lig-prot interactions and distance between residues
fp = plf.Fingerprint()
fp.run(u.trajectory[::10], lig, prot)
df = fp.to_dataframe()
df.head()
def make_graph(values, df=None,
node_color=["#FFB2AC", "#ACD0FF"], node_shape="dot",
edge_color="#a9a9a9", width_multiplier=1):
"""Convert a pandas DataFrame to a NetworkX object
Parameters
----------
values : pandas.Series
Series with 'ligand' and 'protein' levels, and a unique value for
each lig-prot residue pair that will be used to set the width and weigth
of each edge. For example:
ligand protein
LIG1.G ALA216.A 0.66
ALA343.B 0.10
df : pandas.DataFrame
DataFrame obtained from the fp.to_dataframe() method
Used to label each edge with the type of interaction
node_color : list
Colors for the ligand and protein residues, respectively
node_shape : str
One of ellipse, circle, database, box, text or image, circularImage,
diamond, dot, star, triangle, triangleDown, square, icon.
edge_color : str
Color of the edge between nodes
width_multiplier : int or float
Each edge's width is defined as `width_multiplier * value`
"""
lig_res = values.index.get_level_values("ligand").unique().tolist()
prot_res = values.index.get_level_values("protein").unique().tolist()
G = nx.Graph()
# add nodes
# https://pyvis.readthedocs.io/en/latest/documentation.html#pyvis.network.Network.add_node
for res in lig_res:
G.add_node(res, title=res, shape=node_shape,
color=node_color[0], dtype="ligand")
for res in prot_res:
G.add_node(res, title=res, shape=node_shape,
color=node_color[1], dtype="protein")
for resids, value in values.items():
label = "{} - {}<br>{}".format(*resids, "<br>".join([f"{k}: {v}"
for k, v in (df.xs(resids,
level=["ligand", "protein"],
axis=1)
.sum()
.to_dict()
.items())]))
# https://pyvis.readthedocs.io/en/latest/documentation.html#pyvis.network.Network.add_edge
G.add_edge(*resids, title=label, color=edge_color,
weight=value, width=value*width_multiplier)
return G
###Output
_____no_output_____
###Markdown
Regrouping all interactionsWe will regroup all interactions as if they were equivalent.
###Code
data = (df.groupby(level=["ligand", "protein"], axis=1)
.sum()
.astype(bool)
.mean())
G = make_graph(data, df, width_multiplier=3)
# display graph
net = Network(width=600, height=500, notebook=True, heading="")
net.from_nx(G)
net.write_html("lig-prot_graph.html")
IFrame("lig-prot_graph.html", width=610, height=510)
###Output
_____no_output_____
###Markdown
Only plotting a specific interactionWe can also plot a specific type of interaction.
###Code
data = (df.xs("Hydrophobic", level="interaction", axis=1)
.mean())
G = make_graph(data, df, width_multiplier=3)
# display graph
net = Network(width=600, height=500, notebook=True, heading="")
net.from_nx(G)
net.write_html("lig-prot_hydrophobic_graph.html")
IFrame("lig-prot_hydrophobic_graph.html", width=610, height=510)
###Output
_____no_output_____
###Markdown
Protein-protein interactionThis kind of "residue-level" visualisation is especially suitable for protein-protein interactions. Here we'll show the interactions between one helix of our G-Protein coupled receptor (transmembrane helix 3, or TM3) in red and the rest of the protein in blue.
###Code
tm3 = u.select_atoms("resid 119:152")
prot = u.select_atoms("protein and not group tm3", tm3=tm3)
fp = plf.Fingerprint()
fp.run(u.trajectory[::10], tm3, prot)
df = fp.to_dataframe()
df.head()
data = (df.groupby(level=["ligand", "protein"], axis=1, sort=False)
.sum()
.astype(bool)
.mean())
G = make_graph(data, df, width_multiplier=8)
# color each node based on its degree
max_nbr = len(max(G.adj.values(), key=lambda x: len(x)))
blues = cm.get_cmap('Blues', max_nbr)
reds = cm.get_cmap('Reds', max_nbr)
for n, d in G.nodes(data=True):
n_neighbors = len(G.adj[n])
# show TM3 in red and the rest of the protein in blue
palette = reds if d["dtype"] == "ligand" else blues
d["color"] = colors.to_hex( palette(n_neighbors / max_nbr) )
# convert to pyvis network
net = Network(width=640, height=500, notebook=True, heading="")
net.from_nx(G)
net.write_html("prot-prot_graph.html")
IFrame("prot-prot_graph.html", width=650, height=510)
###Output
_____no_output_____
###Markdown
Residue interaction network
Another possible application is the visualisation of the residue interaction network of the whole protein. Since this protein is a GPCR, the graph will mostly display the HBond interactions responsible for the secondary structure of the protein (7 alpha-helices). It would also show hydrophobic interactions between neighbouring residues, so I'm simply going to disable them in the Fingerprint.
###Code
prot = u.select_atoms("protein")
fp = plf.Fingerprint(['HBDonor', 'HBAcceptor', 'PiStacking', 'Anionic', 'Cationic', 'CationPi', 'PiCation'])
fp.run(u.trajectory[::10], prot, prot)
df = fp.to_dataframe()
df.head()
###Output
_____no_output_____
###Markdown
To hide most of the HBond interactions responsible for the alpha-helix structure, I will show how to do it on the pandas DataFrame for simplicity; ideally, though, you should copy-paste the source code of the `fp.run` method, add the condition shown below before calculating the bitvector for a residue pair, and then use the custom function instead of `fp.run`. This would make the analysis faster and more memory efficient.
###Code
# remove interactions between residues i and i±4 or less
mask = []
for l, p, interaction in df.columns:
lr = plf.ResidueId.from_string(l)
pr = plf.ResidueId.from_string(p)
if (pr == lr) or (abs(pr.number - lr.number) <= 4
and interaction in ["HBDonor", "HBAcceptor", "Hydrophobic"]):
mask.append(False)
else:
mask.append(True)
df = df[df.columns[mask]]
df.head()
data = (df.groupby(level=["ligand", "protein"], axis=1, sort=False)
.sum()
.astype(bool)
.mean())
G = make_graph(data, df, width_multiplier=5)
# color each node based on its degree
max_nbr = len(max(G.adj.values(), key=lambda x: len(x)))
palette = cm.get_cmap('YlGnBu', max_nbr)
for n, d in G.nodes(data=True):
n_neighbors = len(G.adj[n])
d["color"] = colors.to_hex( palette(n_neighbors / max_nbr) )
# convert to pyvis network
net = Network(width=640, height=500, notebook=True, heading="")
net.from_nx(G)
# use specific layout
layout = nx.circular_layout(G)
for node in net.nodes:
node["x"] = layout[node["id"]][0] * 1000
node["y"] = layout[node["id"]][1] * 1000
net.toggle_physics(False)
net.write_html("residue-network_graph.html")
IFrame("residue-network_graph.html", width=650, height=510)
###Output
_____no_output_____ |
alejogm0520/Repaso_Estadistico.ipynb | ###Markdown
Network Analysis: Statistical Review Exercise 1: Reproduce this plot in Python.
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stas
%matplotlib inline
x = np.arange(0.01, 1, 0.01)
values = [(0.5, 0.5),(5, 1),(1, 3),(2, 2),(2, 5)]
for i, j in values:
y = stas.beta.pdf(x,i,j)
plt.plot(x,y)
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 2: Using random data drawn from beta distributions, compute and plot their descriptive properties.
###Code
md = []
mn = []
mo = []
kur = []
ske = []
for i, j in values:
r = stas.beta.rvs(i, j, size=1000000)
md.append(np.median(r))
mn.append(np.mean(r))
mo.append(stas.mode(r)[0][0])
kur.append(stas.kurtosis(r))
ske.append(stas.skew(r))
fig = plt.figure()
ax1 = fig.add_subplot(151)
ax1.set_title('Median')
ax1.plot(md)
ax2 = fig.add_subplot(152)
ax2.set_title('Mean')
ax2.plot(mn)
ax3 = fig.add_subplot(153)
ax3.set_title('Mode')
ax3.plot(mo)
ax4 = fig.add_subplot(154)
ax4.set_title('Kurtosis')
ax4.plot(kur)
ax5 = fig.add_subplot(155)
ax5.set_title('Skewness')
ax5.plot(ske)
axes = [ax1, ax2, ax3, ax4, ax5]
for i in axes:
plt.setp(i.get_xticklabels(), visible=False)
plt.setp(i.get_yticklabels(), visible=False)
###Output
_____no_output_____ |
notebook/2_data-wrangling.ipynb | ###Markdown
Data Wrangling
This chapter introduces data wrangling, i.e. preparing data for analysis, using the `R` language. Data wrangling covers the following steps:
- import data: reading data in from a variety of sources
- tidy data: putting the data into a consistent structure, e.g. reshaping it
- transform data: transforming the data, feature engineering, and so on
Install and Load packages
###Code
options(
repr.plot.width=10,
repr.plot.height=6,
repr.plot.res = 300,
repr.matrix.max.rows = 10,
repr.matrix.max.cols = Inf
)
# if running on google colab
options("repos" = "https://packagemanager.rstudio.com/cran/__linux__/bionic/latest/")
install.packages("RPostgres")
install.packages("writexl")
library(tidyverse)
library(DBI)
library(RPostgres)
library(httr)
library(vroom)
library(readxl)
library(writexl)
###Output
-- [1mAttaching packages[22m ------------------------------------------------------------------------------------------------------------------------------------------------ tidyverse 1.3.1 --
[32mv[39m [34mggplot2[39m 3.3.5 [32mv[39m [34mpurrr [39m 0.3.4
[32mv[39m [34mtibble [39m 3.1.2 [32mv[39m [34mdplyr [39m 1.0.7
[32mv[39m [34mtidyr [39m 1.1.3 [32mv[39m [34mstringr[39m 1.4.0
[32mv[39m [34mreadr [39m 1.4.0 [32mv[39m [34mforcats[39m 0.5.1
-- [1mConflicts[22m --------------------------------------------------------------------------------------------------------------------------------------------------- tidyverse_conflicts() --
[31mx[39m [34mdplyr[39m::[32mfilter()[39m masks [34mstats[39m::filter()
[31mx[39m [34mdplyr[39m::[32mlag()[39m masks [34mstats[39m::lag()
###Markdown
Load Data setting path
###Code
getwd()
setwd("..")
getwd()
###Output
_____no_output_____
###Markdown
csv
###Code
df_csv <- read_csv("data/production_oae.csv")
df_csv
df_vroom <- vroom("data/production_oae.csv")
df_vroom
###Output
[1m[1mRows: [1m[22m[34m[34m18685[34m[39m [1m[1mColumns: [1m[22m[34m[34m18[34m[39m
[36m--[39m [1m[1mColumn specification[1m[22m [36m-----------------------------------------------------------------------------------------------------------------------------------------------------------------[39m
[1mDelimiter:[22m ","
[31mchr[39m (10): commod, subcommod, variety, year_crop, province, province_code, r...
[32mdbl[39m (7): season, year, area_plant, area_harvest, production, yield_plant, ...
[34mdate[39m (1): update_date
[36mi[39m Use [30m[47m[30m[47m`spec()`[47m[30m[49m[39m to retrieve the full column specification for this data.
[36mi[39m Specify the column types or set [30m[47m[30m[47m`show_col_types = FALSE`[47m[30m[49m[39m to quiet this message.
###Markdown
excel
###Code
df_excel <- read_excel("data/production_oae.xlsx")
df_excel
###Output
_____no_output_____
###Markdown
rds
###Code
df_rds <- readRDS("data/production_oae.rds")
df_rds
###Output
_____no_output_____
###Markdown
database
###Code
conn <- dbConnect(
RPostgres::Postgres(),
host = "192.168.4.133",
user = "oae_user",
password = "",
dbname = "db_nabc",
options="-c search_path=definition"
)
dbListTables(conn)
ref <- list()
ref$tha1 <- dbReadTable(conn, "tha1")
ref$tha1
###Output
_____no_output_____
###Markdown
API (Application Programming Interface)
###Code
# https://data.moc.go.th/OpenData/GISProductPrice
httr::set_config(config(ssl_verifypeer = 0L))
res <- httr::GET(
"https://dataapi.moc.go.th/gis-product-prices?",
query = list(
product_id = "R11029",
from_date = "2021-01-01",
to_date = Sys.Date()
)
)
res %>% httr::content("text") %>% jsonlite::fromJSON()
res <- res %>% httr::content("text") %>% jsonlite::fromJSON()
res$price_list
###Output
_____no_output_____
###Markdown
Tidy Data Concept
Tidy data means organizing the data into a tabular structure in which:
1. column = a variable
2. row = an observation
3. cell = a measured value
> Tidy data is a way to describe data that’s organized with a particular structure – a rectangular structure, where each variable has its own column, and each observation has its own row (Wickham 2014).
Data Wrangling / Data Manipulation
The main data-manipulation operations are:
- select
- filter
- mutate
- summarize
- arrange
- join
select
Selects the variables (columns) you want; you can also rename them at the same time.
**Syntax:**
```{r}
df %>%
  select(
    column_x,
    new_column_y = column_y,
    ...
  )
```
Note: `%>%` is the pipe operator: it passes the object on its left as the first argument of the next function. Using `%>%` makes the code easier to read; without it you end up with nested function calls.
###Code
df_rds %>%
select(subcommod,
year_crop,
province_name = province, # เปลี่ยนชื่อจาก province เป็น province_name
status,
area_plant, area_harvest, production, yield_plant, yield_harvest
)
###Output
_____no_output_____
###Markdown
filter
Selects the rows of a data frame that satisfy a given condition, for example:
**Syntax:**
```{r}
df %>%
  filter(
    logical_expression(column_x)   # e.g. column_x > 10
  )
```
###Code
df_rds %>%
filter(province == "กรุงเทพมหานคร")
###Output
_____no_output_____
###Markdown
mutate
Creates a new variable (column); the result keeps the same number of rows, e.g. computing the yield per rai.
**Syntax:**
```{r}
df %>%
  mutate(
    new_x = expression
  )
```
###Code
df_rds %>%
mutate(area_diff = area_plant - area_harvest)
###Output
_____no_output_____
###Markdown
summarize
Collapses the data, e.g. into means or sums, optionally by group — for example, computing planted area, production, and yield at the regional level. The result of summarize is a data frame with fewer rows.
**Syntax:**
```{r}
df %>%
  group_by(column1, column2, ...) %>%
  summarize(
    new_x = FUN(column_x)
  ) %>%
  ungroup()
```
###Code
df_rds %>%
filter(subcommod == "ข้าวนาปี", is.na(reg_oae)) %>% tail(5) %>%
select(subcommod, province, area_plant:yield_harvest)
df_rds %>%
filter(subcommod == "ข้าวนาปี", !is.na(reg_oae)) %>%
group_by(year_crop, reg_oae) %>%
summarize(
area_plant = sum(area_plant, na.rm = TRUE),
area_harvest = sum(area_harvest, na.rm = TRUE),
production = sum(production, na.rm = TRUE),
yield_plant = mean(yield_plant, na.rm = TRUE),
yield_harvest = mean(yield_harvest, na.rm = TRUE),
) %>% tail(4)
df_rds %>%
filter(subcommod == "ข้าวนาปี", !is.na(reg_oae)) %>%
group_by(year_crop, reg_oae) %>%
summarize(
yield_plant = weighted.mean(yield_plant, area_plant, na.rm = TRUE),
yield_harvest = weighted.mean(yield_harvest, area_harvest, na.rm = TRUE),
area_plant = sum(area_plant, na.rm = TRUE),
area_harvest = sum(area_harvest, na.rm = TRUE),
production = sum(production, na.rm = TRUE)
) %>% tail(4) %>%
select(year_crop, reg_oae, area_plant, area_harvest, production, yield_plant, yield_harvest)
###Output
`summarise()` has grouped output by 'year_crop'. You can override using the `.groups` argument.
###Markdown
arrange
###Code
df_rds %>%
filter(year_crop == "2563", subcommod == "ทุเรียน") %>%
select(province, reg_oae, area_harvest, production, yield_harvest) %>%
arrange(area_harvest)
df_rds %>%
filter(year_crop == "2563", subcommod == "ทุเรียน", !is.na(reg_oae)) %>%
select(province, reg_oae, area_harvest, production, yield_harvest) %>%
arrange(-area_harvest) %>% head(10)
###Output
_____no_output_____
###Markdown
join
Combines two tables using a column as the join key. There are several kinds of joins:
- left_join
- right_join
- inner_join
- full_join
- semi_join
- anti_join
###Code
ref$tha1
df_rds_subset <- df_rds %>%
select(commod, subcommod, season, variety, province, area_harvest, production, yield_harvest)
df_rds_subset
df_rds_subset %>%
left_join(ref$tha1, by = c("province" = "adm1_name_th"))
###Output
_____no_output_____
###Markdown
Reshape Data pivot wider
Converts data from long format to wide format.
###Code
df_rds %>%
filter(subcommod == "ข้าวนาปี", !is.na(reg_oae)) %>%
pivot_wider(
province,
names_prefix = "year_",
names_from = year,
values_from = production
) %>%
arrange(-year_2564)
###Output
_____no_output_____
###Markdown
pivot longer
Converts data from wide format to long format.
###Code
df_rds %>%
filter(subcommod == "ข้าวนาปี") %>%
pivot_longer(
area_plant:yield_harvest,
names_to = "attribute",
values_to = "value"
) %>%
select(year_crop, subcommod, province, attribute, value)
###Output
_____no_output_____
###Markdown
Other useful commands - `dplyr::count`- `tidyr::fill`- etc. cheatsheet Save
###Code
df_rds %>% write_xlsx("data/df.xlsx")
df_rds %>% saveRDS("data/df.rds")
###Output
_____no_output_____ |
CONTRIBUTING.ipynb | ###Markdown
PyPRECIS Notebook Style GuideThanks for showing the enthusiasm to help develop the PyPRECIS notebooks. Please use this style guide as a reference when creating or modifying content... Worksheet TitleAll worksheets should start with a title formatted as a level 1 heading:```md Worksheet ?: All Worksheets Should Have a Clear Title```Worksheet titles should be followed with a short description of the worksheet. Learning AimsThis followed by a list of 3 to 4 learning aims for the worksheet. We use the HTML `div class="alert alert-block alert-warning"` to colour this is a nice way:```mdBy the end of this worksheet you should be able to: - Identify and list the names of PRECIS output data in PP format using standard Linux commands.- Use basic Iris commands to load data files, and view Iris cubes. - Use Iris commands to remove the model rim, select data variables and save the output as NetCDF files.```When rendered, it looks like this:By the end of this worksheet you should be able to: - Identify and list the names of PRECIS output data in PP format using standard Linux commands.- Use basic Iris commands to load data files, and view Iris cubes. - Use Iris commands to remove the model rim, select data variables and save the output as NetCDF files.Remember to start each learning aim with a verb. Keep them short and to the point. If you have more than 3 to 4 learning aims, consider whether there is too much content in the workbook. NotesYou may wish to use a Note box to draw the learners attention to particular actions or points to note. Note boxes are created using `div class="alert alert-block alert-info"````mdNote: In the boxes where there is code or where you are asked to type code, click in the box, then press Ctrl + Enter to run the code. Note: An percentage sign % is needed to run some commands on the shell. It is noted where this is needed.Note: A hash denotes a comment; anything written after this character does not affect the command being run. ```Which looks like:Note: In the boxes where there is code or where you are asked to type code, click in the box, then press Ctrl + Enter to run the code. Note: An percentage sign % is needed to run some commands on the shell. It is noted where this is needed.Note: A hash denotes a comment; anything written after this character does not affect the command being run. ContentsImmediately following the Learning Aims (or Note box if used) add a list of contents.```md Contents [1.1: Data locations and file names](1.1) ...additional headings```Items in the contents list are formatted as level 3 headings. Note the `[Link Name](Link location)` syntax. Each subsequent heading in the notebook needs to have a `id` tag associated with it for the links to work. These are formatted like this:```md 1.1 Data locations and file names```Remember that the `id` string must match the link location otherwise the link won't work. Remember to update both the link title numbering and the link id numbering if you are reordering content. Section HeadingsTo help users navigate round the document use section headings to break the content into sections. As detailed above, each section heading needs to have an `id` tag associated with it to build the Contents links.If you want to further subdivide each section, use bold letters with a parentheses:```md**a)** Ordinary section text continues...``` General FormattingUse links to point learners to additional learning resources. 
These follow the standard markdown style: `[Link text](Link location)`, eg.```md[Iris](http://scitools.org.uk/iris/docs/latest/index.html)```gives[Iris](http://scitools.org.uk/iris/docs/latest/index.html)Format key commands using bold back-ticks: ```md**`cd`**```Where certain keyboard combinations are necessary to execute commands, use the `` html formatting.```mdCtrl + Enter```which gives:Ctrl + EnterCode blocks are entered in new notebook cells, with the `Code` style. Remember, all python should be **Python 3**.
###Code
# This is a code block
# Make sure you include comments with your code to help explain what you are doing
# Leave space if you want learners to complete portions of code
###Output
_____no_output_____ |
HCDR_Notebook.ipynb | ###Markdown
Home Credit Default Risk (HCDR) Dataset and how to download
Background
Home Credit Group
Many people struggle to get loans due to insufficient or non-existent credit histories. And, unfortunately, this population is often taken advantage of by untrustworthy lenders. Home Credit strives to broaden financial inclusion for the unbanked population by providing a positive and safe borrowing experience. In order to make sure this underserved population has a positive loan experience, Home Credit makes use of a variety of alternative data--including telco and transactional information--to predict their clients' repayment abilities. While Home Credit is currently using various statistical and machine learning methods to make these predictions, they're challenging Kagglers to help them unlock the full potential of their data. Doing so will ensure that clients capable of repayment are not rejected and that loans are given with a principal, maturity, and repayment calendar that will empower their clients to be successful.
Background on the dataset
Home Credit is a non-banking financial institution, founded in 1997 in the Czech Republic. The company operates in 14 countries (including the United States, Russia, Kazakhstan, Belarus, China, and India) and focuses on lending primarily to people with little or no credit history, who would otherwise either not obtain loans or become victims of untrustworthy lenders. Home Credit Group has over 29 million customers, total assets of 21 billion euros, and over 160 million loans, with the majority in Asia and almost half of them in China (as of 19-05-2018).
Data files overview
There are 7 different sources of data:
* __application_train/application_test:__ the main training and testing data with information about each loan application at Home Credit. Every loan has its own row and is identified by the feature SK_ID_CURR. The training application data comes with the TARGET indicating __0: the loan was repaid__ or __1: the loan was not repaid__. The target variable defines if the client had payment difficulties, meaning he/she had a late payment of more than X days on at least one of the first Y installments of the loan. Such cases are marked as 1, while all other cases are marked as 0.
* __bureau:__ data concerning the client's previous credits from other financial institutions. Each previous credit has its own row in bureau, but one loan in the application data can have multiple previous credits.
* __bureau_balance:__ monthly data about the previous credits in bureau. Each row is one month of a previous credit, and a single previous credit can have multiple rows, one for each month of the credit length.
* __previous_application:__ previous applications for loans at Home Credit of clients who have loans in the application data. Each current loan in the application data can have multiple previous loans. Each previous application has one row and is identified by the feature SK_ID_PREV.
* __POS_CASH_BALANCE:__ monthly data about previous point of sale or cash loans clients have had with Home Credit. Each row is one month of a previous point of sale or cash loan, and a single previous loan can have many rows.
* __credit_card_balance:__ monthly data about previous credit cards clients have had with Home Credit. Each row is one month of a credit card balance, and a single credit card can have many rows.
* __installments_payment:__ payment history for previous loans at Home Credit. There is one row for every made payment and one row for every missed payment.
Imports
###Code
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
import os
import zipfile
from sklearn.base import BaseEstimator, TransformerMixin
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline, FeatureUnion
from pandas.plotting import scatter_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
import warnings
warnings.filterwarnings('ignore')
from google.colab import drive
drive.mount('/content/drive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code
Enter your authorization code:
··········
Mounted at /content/drive
###Markdown
Application train
###Code
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
import os
import zipfile
from sklearn.base import BaseEstimator, TransformerMixin
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline, FeatureUnion
from pandas.plotting import scatter_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import OneHotEncoder
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Application test
* __application_train/application_test:__ the main training and testing data with information about each loan application at Home Credit. Every loan has its own row and is identified by the feature SK_ID_CURR. The training application data comes with the TARGET indicating __0: the loan was repaid__ or __1: the loan was not repaid__. The target variable defines if the client had payment difficulties, meaning he/she had a late payment of more than X days on at least one of the first Y installments of the loan. Such cases are marked as 1, while all other cases are marked as 0.
###Code
app_train = pd.read_csv(r'/content/drive/My Drive/HCDR Project/application_train.csv')
app_test = pd.read_csv(r'/content/drive/My Drive/HCDR Project/application_test.csv')
###Output
_____no_output_____
###Markdown
The application dataset has the most information about the client: Gender, income, family status, education ... The Other datasets* __bureau:__ data concerning client's previous credits from other financial institutions. Each previous credit has its own row in bureau, but one loan in the application data can have multiple previous credits.* __bureau_balance:__ monthly data about the previous credits in bureau. Each row is one month of a previous credit, and a single previous credit can have multiple rows, one for each month of the credit length.* __previous_application:__ previous applications for loans at Home Credit of clients who have loans in the application data. Each current loan in the application data can have multiple previous loans. Each previous application has one row and is identified by the feature SK_ID_PREV.* __POS_CASH_BALANCE:__ monthly data about previous point of sale or cash loans clients have had with Home Credit. Each row is one month of a previous point of sale or cash loan, and a single previous loan can have many rows.* credit_card_balance: monthly data about previous credit cards clients have had with Home Credit. Each row is one month of a credit card balance, and a single credit card can have many rows.* __installments_payment:__ payment history for previous loans at Home Credit. There is one row for every made payment and one row for every missed payment.
###Code
ds_name = 'application_train'
# note: the original call used an undefined `load_data`/`DATA_DIR`; plain pandas does the same job
app_train = pd.read_csv(f"datasets/{ds_name}.csv")
bureau = pd.read_csv("datasets/bureau.csv")
bureau_balance = pd.read_csv("datasets/bureau_balance.csv")
credit_card_balance = pd.read_csv("datasets/credit_card_balance.csv")
installments_payments = pd.read_csv("datasets/installments_payments.csv")
previous_application = pd.read_csv("datasets/previous_application.csv")
POS_CASH_balance = pd.read_csv("datasets/POS_CASH_balance.csv")
print("bureau - rows:",bureau.shape[0]," columns:", bureau.shape[1])
print("bureau_balance - rows:",bureau_balance.shape[0]," columns:", bureau_balance.shape[1])
print("credit_card_balance - rows:",credit_card_balance.shape[0]," columns:", credit_card_balance.shape[1])
print("installments_payments - rows:",installments_payments.shape[0]," columns:", installments_payments.shape[1])
print("previous_application - rows:",previous_application.shape[0]," columns:", previous_application.shape[1])
print("POS_CASH_balance - rows:",POS_CASH_balance.shape[0]," columns:", POS_CASH_balance.shape[1])
###Output
bureau - rows: 1716428 columns: 17
bureau_balance - rows: 27299925 columns: 3
credit_card_balance - rows: 3840312 columns: 23
installments_payments - rows: 13605401 columns: 8
previous_application - rows: 1670214 columns: 37
POS_CASH_balance - rows: 10001358 columns: 8
###Markdown
* __Bureau__
###Code
bureau.head()
###Output
_____no_output_____
###Markdown
* __Bureau Balance__
###Code
bureau_balance.head()
###Output
_____no_output_____
###Markdown
* __Credit card balance__
###Code
credit_card_balance.head()
###Output
_____no_output_____
###Markdown
* __Installment payments__
###Code
installments_payments.head()
###Output
_____no_output_____
###Markdown
* __Previous application__
###Code
previous_application.head()
###Output
_____no_output_____
###Markdown
* __POS_CASH_balance__
###Code
POS_CASH_balance.head()
###Output
_____no_output_____
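###Markdown
The secondary tables link back to the application table through `SK_ID_CURR`, so each of them can be summarised to one row per applicant and joined onto `app_train`. The cell below is a minimal sketch of that idea (it is not part of the original baseline, and the derived column name is arbitrary):
```python
# count how many bureau credits each applicant has, then left-join onto the main table
bureau_counts = (bureau.groupby('SK_ID_CURR')
                       .size()
                       .rename('bureau_loan_count')
                       .reset_index())
app_train_aug = app_train.merge(bureau_counts, on='SK_ID_CURR', how='left')
app_train_aug['bureau_loan_count'] = app_train_aug['bureau_loan_count'].fillna(0)
app_train_aug[['SK_ID_CURR', 'TARGET', 'bureau_loan_count']].head()
```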
###Markdown
Exploratory Data Analysis Summary of Application train
###Code
app_train.info()
app_train.describe()
###Output
_____no_output_____
###Markdown
Missing data for application train
###Code
total = app_train.isnull().sum().sort_values(ascending = False)
percent = (app_train.isnull().sum()/app_train.isnull().count()*100).sort_values(ascending = False).round(2)
missing_application_train_data = pd.concat([total, percent], axis=1, keys=['Total', 'Percent'])
missing_application_train_data.head(20)
###Output
_____no_output_____
###Markdown
Distribution of the target column
###Code
app_train['TARGET'].astype(int).plot.hist();
###Output
_____no_output_____
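###Markdown
The histogram shows that the two classes are heavily imbalanced. A quick way to quantify this (not in the original notebook) is to look at the normalized class counts:
```python
# fraction of repaid (0) vs. not repaid (1) loans
app_train['TARGET'].value_counts(normalize=True)
```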
###Markdown
Correlation with the target column
###Code
correlations = app_train.corr()['TARGET'].sort_values()
print('Most Positive Correlations:\n', correlations.tail(10))
print('\nMost Negative Correlations:\n', correlations.head(10))
###Output
Most Positive Correlations:
FLAG_DOCUMENT_3 0.044346
REG_CITY_NOT_LIVE_CITY 0.044395
FLAG_EMP_PHONE 0.045982
REG_CITY_NOT_WORK_CITY 0.050994
DAYS_ID_PUBLISH 0.051457
DAYS_LAST_PHONE_CHANGE 0.055218
REGION_RATING_CLIENT 0.058899
REGION_RATING_CLIENT_W_CITY 0.060893
DAYS_BIRTH 0.078239
TARGET 1.000000
Name: TARGET, dtype: float64
Most Negative Correlations:
EXT_SOURCE_3 -0.178919
EXT_SOURCE_2 -0.160472
EXT_SOURCE_1 -0.155317
DAYS_EMPLOYED -0.044932
FLOORSMAX_AVG -0.044003
FLOORSMAX_MEDI -0.043768
FLOORSMAX_MODE -0.043226
AMT_GOODS_PRICE -0.039645
REGION_POPULATION_RELATIVE -0.037227
ELEVATORS_AVG -0.034199
Name: TARGET, dtype: float64
###Markdown
Applicants Age
###Code
plt.hist(app_train['DAYS_BIRTH'] / -365, edgecolor = 'k', bins = 25)
plt.title('Age of Client'); plt.xlabel('Age (years)'); plt.ylabel('Count');
###Output
_____no_output_____
###Markdown
Applicants occupations
###Code
sns.countplot(x='OCCUPATION_TYPE', data=app_train);
plt.title('Applicants Occupation');
plt.xticks(rotation=90);
###Output
_____no_output_____
###Markdown
Processing pipeline
###Code
# Create a class to select numerical or categorical columns
# since Scikit-Learn doesn't handle DataFrames yet
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names].values
# Identify the numeric features we wish to consider.
num_attribs = [
'AMT_INCOME_TOTAL', 'AMT_CREDIT','DAYS_EMPLOYED','DAYS_BIRTH','EXT_SOURCE_1',
'EXT_SOURCE_2','EXT_SOURCE_3']
num_pipeline = Pipeline([
('selector', DataFrameSelector(num_attribs)),
('imputer', SimpleImputer(strategy='mean')),
('std_scaler', StandardScaler()),
])
# Identify the categorical features we wish to consider.
cat_attribs = ['CODE_GENDER', 'FLAG_OWN_REALTY','FLAG_OWN_CAR','NAME_CONTRACT_TYPE',
'NAME_EDUCATION_TYPE','OCCUPATION_TYPE','NAME_INCOME_TYPE']
cat_pipeline = Pipeline([
('selector', DataFrameSelector(cat_attribs)),
('imputer', SimpleImputer(strategy='most_frequent')),
('ohe', OneHotEncoder(sparse=False, handle_unknown="ignore"))
])
full_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline),
])
app_train_subset= app_train[['AMT_INCOME_TOTAL', 'AMT_CREDIT','DAYS_EMPLOYED','DAYS_BIRTH','EXT_SOURCE_1',
'EXT_SOURCE_2','EXT_SOURCE_3','CODE_GENDER', 'FLAG_OWN_REALTY','FLAG_OWN_CAR','NAME_CONTRACT_TYPE',
'NAME_EDUCATION_TYPE','OCCUPATION_TYPE','NAME_INCOME_TYPE']]
test= app_test[['AMT_INCOME_TOTAL', 'AMT_CREDIT','DAYS_EMPLOYED','DAYS_BIRTH','EXT_SOURCE_1',
'EXT_SOURCE_2','EXT_SOURCE_3','CODE_GENDER', 'FLAG_OWN_REALTY','FLAG_OWN_CAR','NAME_CONTRACT_TYPE',
'NAME_EDUCATION_TYPE','OCCUPATION_TYPE','NAME_INCOME_TYPE']]
###Output
_____no_output_____
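###Markdown
As a side note (not part of the original notebook): recent scikit-learn versions ship `ColumnTransformer`, which covers the same use case as the custom `DataFrameSelector` above without a custom class. A minimal sketch using the same column lists and steps:
```python
from sklearn.compose import ColumnTransformer

preprocess = ColumnTransformer([
    ("num", Pipeline([("imputer", SimpleImputer(strategy="mean")),
                      ("std_scaler", StandardScaler())]), num_attribs),
    ("cat", Pipeline([("imputer", SimpleImputer(strategy="most_frequent")),
                      ("ohe", OneHotEncoder(sparse=False, handle_unknown="ignore"))]), cat_attribs),
])
# X_prepared = preprocess.fit_transform(app_train_subset)
```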
###Markdown
Baseline Model
To get a baseline, we will use some of the features after preprocessing them through the pipeline. The baseline model is a logistic regression model.
###Code
def pct(x):
return round(100*x,3)
results = pd.DataFrame(columns=["ExpID", "Cross fold train accuracy", "Experiment description"])
%%time
full_pipeline_with_predictor = Pipeline([
("preparation", full_pipeline),
("linear", LogisticRegression(n_jobs=-1))
])
train_labels = app_train['TARGET']
fit_pipeline= full_pipeline_with_predictor.fit(app_train_subset, train_labels)
np.random.seed(42)
%%time
cv = KFold(n_splits=5, shuffle=False)  # splits are deterministic when shuffle=False (random_state only applies when shuffle=True)
logit_scores = cross_val_score(fit_pipeline, app_train_subset, train_labels, cv=cv,n_jobs = -1)
log_reg_pred = fit_pipeline.predict_proba(test)[:, 1]
logit_score_train = logit_scores.mean()
results.loc[0] = ["Baseline", pct(logit_score_train),"Untuned LogisticRegression"]
results
# Submission dataframe
submit = app_test[['SK_ID_CURR']]
submit['TARGET'] = log_reg_pred
submit.head()
###Output
_____no_output_____
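###Markdown
The cross-validation score above is plain accuracy, while the Kaggle competition is scored on ROC AUC. A minimal sketch (not part of the original baseline) of evaluating the same pipeline with the competition metric:
```python
# cross-validated ROC AUC for the same pipeline
auc_scores = cross_val_score(full_pipeline_with_predictor, app_train_subset, train_labels,
                             cv=cv, scoring='roc_auc', n_jobs=-1)
print(f"Mean CV ROC AUC: {auc_scores.mean():.4f}")
```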
###Markdown
Kaggle submission
###Code
submit.to_csv("submission.csv",index=False)
! kaggle competitions submit -c home-credit-default-risk -f submission.csv -m "baseline submission"
!pip install kaggle
!ls -a
!mkdir .kaggle
###Output
_____no_output_____ |
posts/using-pipelines-for-multiple-preprocessing-steps.ipynb | ###Markdown
Using Pipelines for Multiple Preprocessing Steps Pipelines are not used all that often, but they are very useful: they combine multiple steps into a single object that is executed as one. This makes it much more convenient and flexible to tune and control the configuration of the whole model, rather than adjusting each step one at a time. Getting ready This is the first part of combining multiple data-processing steps into a single object, which scikit-learn calls a `Pipeline`. Here we first impute the missing values, then scale the data to zero mean and unit standard deviation. Let's create a dataset with missing values and then demonstrate how a `Pipeline` is used:
###Code
from sklearn import datasets
import numpy as np
mat = datasets.make_spd_matrix(10)
masking_array = np.random.binomial(1, .1, mat.shape).astype(bool)
mat[masking_array] = np.nan
mat[:4, :4]
###Output
_____no_output_____
###Markdown
How to do it... Without a pipeline, we might implement it like this:
###Code
from sklearn import preprocessing
impute = preprocessing.Imputer()  # note: on scikit-learn >= 0.22 use sklearn.impute.SimpleImputer instead
scaler = preprocessing.StandardScaler()
mat_imputed = impute.fit_transform(mat)
mat_imputed[:4, :4]
mat_imp_and_scaled = scaler.fit_transform(mat_imputed)
mat_imp_and_scaled[:4, :4]
###Output
_____no_output_____
###Markdown
Now let's do the same thing with a `Pipeline`:
###Code
from sklearn import pipeline
pipe = pipeline.Pipeline([('impute', impute), ('scaler', scaler)])
###Output
_____no_output_____
###Markdown
Let's look at what `pipe` contains. Consistent with the description above, the pipeline defines the processing steps:
###Code
pipe
###Output
_____no_output_____
###Markdown
Then, by calling the pipeline's `fit_transform` method, the combined steps are executed as a single object:
###Code
new_mat = pipe.fit_transform(mat)
new_mat[:4, :4]
###Output
_____no_output_____
###Markdown
We can verify the result with NumPy:
###Code
np.array_equal(new_mat, mat_imp_and_scaled)
###Output
_____no_output_____
###Markdown
Exactly right! In later topics of this book we will further demonstrate the power of pipelines — they are convenient not only for preprocessing, but also for dimensionality reduction and model fitting. How it works... As mentioned earlier, every scikit-learn estimator exposes a similar interface. The most important pipeline methods are just these three: - `fit` - `transform` - `fit_transform` Specifically, if a pipeline has `N` objects, the first `N-1` must implement both `fit` and `transform`, and the `N`-th must implement at least `fit`; otherwise an error is raised. Even when these conditions are met, not every method is guaranteed to be available on the pipeline. For example, this is the case for the pipeline's `inverse_transform` method: because the imputation step has no `inverse_transform`, calling it fails immediately:
###Code
pipe.inverse_transform(new_mat)
###Output
_____no_output_____
###Markdown
However, the `scaler` object works fine on its own:
###Code
scaler.inverse_transform(new_mat)[:4, :4]
###Output
_____no_output_____ |
5kb_DNA_analysis/20200827_batch_IgH_batch1_proB_DMSO_Chem.ipynb | ###Markdown
0. required packages for h5py
###Code
import h5py
from ImageAnalysis3.classes import _allowed_kwds
import ast
###Output
_____no_output_____
###Markdown
1. Create field-of-view class
###Code
reload(ia)
reload(classes)
reload(classes.batch_functions)
reload(classes.field_of_view)
reload(io_tools.load)
reload(visual_tools)
reload(ia.correction_tools)
reload(ia.correction_tools.alignment)
reload(ia.spot_tools.matching)
reload(ia.segmentation_tools.chromosome)
reload(ia.spot_tools.fitting)
fov_param = {'data_folder':r'\\10.245.74.158\Chromatin_NAS_6\20200827-B_DMSO_CTP-08_IgH',
'save_folder':r'G:\Pu_Temp\2020827_proB_DMSO',
'experiment_type': 'DNA',
'num_threads': 6,
'correction_folder': r'\\10.245.74.158\Chromatin_NAS_0\Corrections\20200807-Corrections_3color',
'shared_parameters':{
'single_im_size':[30,2048,2048],
'corr_channels':['750', '647', '561'],
'num_empty_frames': 0,
'corr_hot_pixel':True,
'corr_Z_shift':False,
'min_num_seeds':200,
'max_num_seeds': 2500,
'spot_seeding_th':125,
'normalize_intensity_local':False,
'normalize_intensity_background':False,
},
}
fov_ids = np.arange(3,23)
reload(io_tools.load)
from ImageAnalysis3.spot_tools.picking import assign_spots_to_chromosomes
overwrite=False
intensity_th = 200
spots_list_list = []
chrom_coords_list = []
cand_chr_spots_list = []
for _fov_id in fov_ids:
# create fov class
fov = classes.field_of_view.Field_of_View(fov_param, _fov_id=_fov_id,
_color_info_kwargs={
'_color_filename':'Color_Usage',
},
_prioritize_saved_attrs=False,
)
# process image into spots
id_list, spot_list = fov._process_image_to_spots('unique',
_load_common_reference=True,
_load_with_multiple=False,
_save_images=True,
_warp_images=False,
_overwrite_drift=False,
_overwrite_image=False,
_overwrite_spot=overwrite,
_verbose=True)
# identify chromosomes
chrom_im = fov._load_chromosome_image(_overwrite=overwrite)
chrom_coords = fov._find_candidate_chromosomes_by_segmentation(_filt_size=4,
_binary_per_th=99.5,
_morphology_size=2,
_overwrite=overwrite)
fov._load_from_file('unique')
chrom_coords = fov._select_chromosome_by_candidate_spots(_good_chr_loss_th=0.5,
_cand_spot_intensity_th=intensity_th,
_save=True,
_overwrite=overwrite)
# append
spots_list_list.append(fov.unique_spots_list)
chrom_coords_list.append(fov.chrom_coords)
fov_cand_chr_spots_list = [[] for _ct in fov.chrom_coords]
# finalize candidate spots
for _spots in fov.unique_spots_list:
_cands_list = assign_spots_to_chromosomes(_spots, fov.chrom_coords)
for _i, _cands in enumerate(_cands_list):
fov_cand_chr_spots_list[_i].append(_cands)
cand_chr_spots_list += fov_cand_chr_spots_list
print(f"kept chromosomes: {len(fov.chrom_coords)}")
# combine acquired spots and chromosomes
chrom_coords = np.concatenate(chrom_coords_list)
from ImageAnalysis3.spot_tools.picking import convert_spots_to_hzxys
dna_cand_hzxys_list = [convert_spots_to_hzxys(_spots, fov.shared_parameters['distance_zxy'])
for _spots in cand_chr_spots_list]
dna_reg_ids = fov.unique_ids
print(f"{len(chrom_coords)} are found.")
# select_hzxys close to the chromosome center
dist_th = 3000 # upper limit is 5000nm
intensity_th = 500
sel_dna_cand_hzxys_list = []
for _cand_hzxys, _chrom_coord in zip(dna_cand_hzxys_list, chrom_coords):
_sel_cands_list = []
for _cands in _cand_hzxys:
if len(_cands) == 0:
_sel_cands_list.append([])
else:
_dists = np.linalg.norm(_cands[:,1:4] - _chrom_coord*np.array([200,108,108]), axis=1)
_sel_cands_list.append(_cands[(_dists < dist_th) & (_cands[:,0]>=intensity_th)])
# append
sel_dna_cand_hzxys_list.append(_sel_cands_list)
###Output
_____no_output_____
###Markdown
EM pick spots
###Code
# load functions
reload(ia.spot_tools.picking)
from ImageAnalysis3.spot_tools.picking import Pick_spots_by_intensity, EM_pick_scores_in_population, generate_reference_from_population,evaluate_differences
%matplotlib inline
niter= 10
nkeep = len(sel_dna_cand_hzxys_list)
num_threads = 12
# initialize
init_dna_hzxys = Pick_spots_by_intensity(sel_dna_cand_hzxys_list[:nkeep])
# set save list
sel_dna_hzxys_list, sel_dna_scores_list, all_dna_scores_list = [init_dna_hzxys], [], []
for _iter in range(niter):
print(f"- iter:{_iter}")
# generate reference
ref_ct_dists, ref_local_dists, ref_ints = generate_reference_from_population(
sel_dna_hzxys_list[-1], dna_reg_ids,
sel_dna_hzxys_list[-1][:nkeep], dna_reg_ids,
num_threads=num_threads,
collapse_regions=True,
)
plt.figure(figsize=(4,2))
plt.hist(np.ravel(ref_ints), bins=np.arange(0,5000,100))
plt.figure(figsize=(4,2))
plt.hist(np.ravel(ref_ct_dists), bins=np.arange(0,3000,100))
plt.figure(figsize=(4,2))
plt.hist(np.ravel(ref_local_dists), bins=np.arange(0,3000,100))
plt.show()
# scoring
sel_hzxys, sel_scores, all_scores = EM_pick_scores_in_population(
sel_dna_cand_hzxys_list[:nkeep], dna_reg_ids, sel_dna_hzxys_list[-1],
ref_ct_dists, ref_local_dists, ref_ints,
sel_dna_hzxys_list[-1], dna_reg_ids, num_threads=num_threads,
)
update_rate = evaluate_differences(sel_hzxys, sel_dna_hzxys_list[-1])
print(f"-- region kept: {update_rate:.4f}")
sel_dna_hzxys_list.append(sel_hzxys)
sel_dna_scores_list.append(sel_scores)
all_dna_scores_list.append(all_scores)
if update_rate > 0.995:
break
np.ravel(sel_dna_scores_list[-1][:10000]).shape
scores = np.array(sel_dna_scores_list[-1])[np.isnan(sel_dna_scores_list[-1])==False]
plt.figure(dpi=100)
plt.hist(np.log(scores), 40, range=(-20,0))
plt.show()
from scipy.spatial.distance import pdist, squareform
sel_iter = -1
final_dna_hzxys_list = []
kept_chr_ids = []
distmap_list = []
score_th = np.exp(-10)
int_th = 500
bad_spot_percentage = 0.5
for _hzxys, _scores in zip(sel_dna_hzxys_list[sel_iter], sel_dna_scores_list[sel_iter]):
_kept_hzxys = np.array(_hzxys).copy()
_bad_inds = _kept_hzxys[:,0] < int_th
_kept_hzxys[_bad_inds] = np.nan
#_kept_hzxys[_scores < score_th] = np.nan
if np.mean(np.isnan(_kept_hzxys).sum(1)>0)<bad_spot_percentage:
kept_chr_ids.append(True)
final_dna_hzxys_list.append(_kept_hzxys)
distmap_list.append(squareform(pdist(_kept_hzxys[:,1:4])))
else:
kept_chr_ids.append(False)
kept_chr_ids = np.array(kept_chr_ids, dtype=bool)  # np.bool is deprecated/removed in recent NumPy
distmap_list = np.array(distmap_list)
median_distmap = np.nanmedian(distmap_list, axis=0)
loss_rates = np.mean(np.sum(np.isnan(final_dna_hzxys_list), axis=2)>0, axis=0)
fig, ax = plt.subplots(figsize=(4,2),dpi=200)
ax.plot(loss_rates, '.-')
ax.set_xticks(np.arange(0,150,20))
plt.show()
kept_inds = np.where(loss_rates<0.5)[0]
fig, ax = plt.subplots(figsize=(4,3),dpi=200)
ax = ia.figure_tools.distmap.plot_distance_map(median_distmap,
median_distmap[kept_inds][:,kept_inds],
color_limits=[0,600],
ax=ax,
ticks=np.arange(0,150,20),
figure_dpi=200)
ax.axvline(x=74, color=[1,1,0])
ax.axhline(y=74, color=[1,1,0])
ax.set_title(f"proB DMSO, n={len(distmap_list)}", fontsize=7.5)
plt.show()
###Output
_____no_output_____
###Markdown
###Code
# generate full distmap
full_size = np.max(dna_reg_ids) - np.min(dna_reg_ids)+1
full_median_distmap = np.ones([full_size, full_size])*np.nan
full_median_distmap[np.arange(full_size), np.arange(full_size)] = np.zeros(len(full_median_distmap))
for _i, _id in enumerate(dna_reg_ids-np.min(dna_reg_ids)):
full_median_distmap[_id, dna_reg_ids-np.min(dna_reg_ids)] = median_distmap[_i]
import matplotlib
median_cmap = matplotlib.cm.get_cmap('seismic_r')
median_cmap.set_bad(color=[0.4,0.4,0.4,1])
fig, ax = plt.subplots(figsize=(4,3),dpi=200)
ax = ia.figure_tools.distmap.plot_distance_map(full_median_distmap,
#median_distmap[kept_inds][:,kept_inds],
cmap=median_cmap,
color_limits=[0,600],
ax=ax,
ticks=np.arange(0, np.max(dna_reg_ids)-np.min(dna_reg_ids), 50),
tick_labels=np.arange(np.min(dna_reg_ids), np.max(dna_reg_ids),50),
figure_dpi=200)
ax.set_title(f"proB bone marrow IgH+/+, n={len(distmap_list)}", fontsize=7.5)
ax.set_xlabel(f"5kb region ids", fontsize=7.5)
plt.show()
###Output
_____no_output_____
###Markdown
quality check
###Code
with h5py.File(fov.save_filename, "r", libver='latest') as _f:
_grp = _f['unique']
_ind = list(_grp['ids'][:]).index(41)
_im = _grp['ims'][_ind]
sel_drifts = _grp['drifts'][:,:]
sel_flags = _grp['flags'][:]
sel_ids = _grp['ids'][:]
sel_spots = _grp['spots'][:,:,:]
print(_ind, np.sum(_grp['spots'][1]))
fov.unique_spots_list[100]
%matplotlib notebook
from matplotlib.cm import Spectral
plt.figure(figsize=(5,5),dpi=150)
for _id, _s in zip(sel_ids, sel_spots):  # assuming the per-region spots loaded above; the original referenced an undefined kept_spots_list
plt.plot(_s[:,2],_s[:,3], '.', label=f'{_id}',
markersize=1.5, color=Spectral(_id/len(sel_ids)), alpha=0.5)
#plt.legend()
plt.ylim([0,2048])
plt.xlim([0,2048])
#plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
visualize picked hzxys
###Code
%matplotlib notebook
from matplotlib.cm import Spectral
plt.figure(figsize=(5,5),dpi=150)
for _i, _id in enumerate(sel_ids):
plt.plot([_spots[_i,2] for _spots in final_dna_hzxys_list],
[_spots[_i,3] for _spots in final_dna_hzxys_list],
'.', markersize=2, color=Spectral(_id/(len(sel_ids)+1)), alpha=0.7)
#for _id,_s in zip(sel_ids, kept_spots_list):
# plt.plot(_s[:,2],_s[:,3], '.', label=f'{_id}',
# markersize=1.5, color=Spectral(_id/len(sel_ids)), alpha=0.5)
#plt.legend()
#plt.ylim([0,2048])
#plt.xlim([0,2048])
#plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
visualize fitted spots
###Code
plt.figure(figsize=(4,4),dpi=150)
plt.plot(fov.chrom_coords[:,1], fov.chrom_coords[:,2], 'r.', markersize=2)
plt.plot(fov.unique_spots_list[0][:,2], fov.unique_spots_list[0][:,3], 'b.', markersize=2)
plt.plot(fov.unique_spots_list[-1][:,2], fov.unique_spots_list[-1][:,3], 'g.', markersize=2)
sel_drifts[kept_inds]
sel_drifts[kept_inds]
fov.fov_id
pickle.load(open(fov.drift_filename, 'rb'))
bead_im, _ = io_tools.load.correct_fov_image(os.path.join(fov.data_folder[0], 'H29R29\\Conv_zscan_05.dax'),
[fov.channels[fov.bead_channel_index]],
correction_folder=fov.correction_folder,
single_im_size=fov.shared_parameters['single_im_size'],
all_channels=fov.channels,
illumination_corr=True,
warp_image=False, calculate_drift=False, return_drift=False,
verbose=True,
)
correction_tools.alignment.cross_correlation_align_single_image(bead_im[0], fov.ref_im,
single_im_size=fov.shared_parameters['single_im_size'])
visual_tools.imshow_mark_3d_v2([bead_im[0], fov.ref_im])
###Output
_____no_output_____ |
Projects/Project 2 Image Captioning/2.Training.ipynb | ###Markdown
Computer Vision Nanodegree Project: Image Captioning---In this notebook, you will train your CNN-RNN model. You are welcome and encouraged to try out many different architectures and hyperparameters when searching for a good model.This does have the potential to make the project quite messy! Before submitting your project, make sure that you clean up:- the code you write in this notebook. The notebook should describe how to train a single CNN-RNN architecture, corresponding to your final choice of hyperparameters. You should structure the notebook so that the reviewer can replicate your results by running the code in this notebook. - the output of the code cell in **Step 2**. The output should show the output obtained when training the model from scratch.This notebook **will be graded**. Feel free to use the links below to navigate the notebook:- [Step 1](step1): Training Setup- [Step 2](step2): Train your Model- [Step 3](step3): (Optional) Validate your Model Step 1: Training SetupIn this step of the notebook, you will customize the training of your CNN-RNN model by specifying hyperparameters and setting other options that are important to the training procedure. The values you set now will be used when training your model in **Step 2** below.You should only amend blocks of code that are preceded by a `TODO` statement. **Any code blocks that are not preceded by a `TODO` statement should not be modified**. Task 1Begin by setting the following variables:- `batch_size` - the batch size of each training batch. It is the number of image-caption pairs used to amend the model weights in each training step. - `vocab_threshold` - the minimum word count threshold. Note that a larger threshold will result in a smaller vocabulary, whereas a smaller threshold will include rarer words and result in a larger vocabulary. - `vocab_from_file` - a Boolean that decides whether to load the vocabulary from file. - `embed_size` - the dimensionality of the image and word embeddings. - `hidden_size` - the number of features in the hidden state of the RNN decoder. - `num_epochs` - the number of epochs to train the model. We recommend that you set `num_epochs=3`, but feel free to increase or decrease this number as you wish. [This paper](https://arxiv.org/pdf/1502.03044.pdf) trained a captioning model on a single state-of-the-art GPU for 3 days, but you'll soon see that you can get reasonable results in a matter of a few hours! (_But of course, if you want your model to compete with current research, you will have to train for much longer._)- `save_every` - determines how often to save the model weights. We recommend that you set `save_every=1`, to save the model weights after each epoch. This way, after the `i`th epoch, the encoder and decoder weights will be saved in the `models/` folder as `encoder-i.pkl` and `decoder-i.pkl`, respectively.- `print_every` - determines how often to print the batch loss to the Jupyter notebook while training. Note that you **will not** observe a monotonic decrease in the loss function while training - this is perfectly fine and completely expected! 
You are encouraged to keep this at its default value of `100` to avoid clogging the notebook, but feel free to change it.
- `log_file` - the name of the text file containing - for every step - how the loss and perplexity evolved during training.
If you're not sure where to begin to set some of the values above, you can peruse [this paper](https://arxiv.org/pdf/1502.03044.pdf) and [this paper](https://arxiv.org/pdf/1411.4555.pdf) for useful guidance! **To avoid spending too long on this notebook**, you are encouraged to consult these suggested research papers to obtain a strong initial guess for which hyperparameters are likely to work best. Then, train a single model, and proceed to the next notebook (**3_Inference.ipynb**). If you are unhappy with your performance, you can return to this notebook to tweak the hyperparameters (and/or the architecture in **model.py**) and re-train your model.
Question 1
**Question:** Describe your CNN-RNN architecture in detail. With this architecture in mind, how did you select the values of the variables in Task 1? If you consulted a research paper detailing a successful implementation of an image captioning model, please provide the reference.
**Answer:** The CNN model is based on the ResNet architecture, which is a robust model with a low error rate. I have implemented most things from the paper "Show and Tell: A Neural Image Caption Generator". `embed_size` and `hidden_size` are set to 512 in the paper. A batch size of 64 was giving good results, so I haven't changed it. To keep training fast, I trained for only 3 epochs.
(Optional) Task 2
Note that we have provided a recommended image transform `transform_train` for pre-processing the training images, but you are welcome (and encouraged!) to modify it as you wish. When modifying this transform, keep in mind that:
- the images in the dataset have varying heights and widths, and
- if using a pre-trained model, you must perform the corresponding appropriate normalization.
Question 2
**Question:** How did you select the transform in `transform_train`? If you left the transform at its provided value, why do you think that it is a good choice for your CNN architecture?
**Answer:** I left the transform at its provided value because I think the operations performed are sufficient to obtain a good result.
Task 3
Next, you will specify a Python list containing the learnable parameters of the model. For instance, if you decide to make all weights in the decoder trainable, but only want to train the weights in the embedding layer of the encoder, then you should set `params` to something like:
```params = list(decoder.parameters()) + list(encoder.embed.parameters())```
Question 3
**Question:** How did you select the trainable parameters of your architecture? Why do you think this is a good choice?
**Answer:** As I am using a pre-trained ResNet-50 model, only the embedding layer of the encoder needs to be trained. No layer in the decoder is pre-trained, so we should train all of its layers.
Task 4
Finally, you will select an [optimizer](http://pytorch.org/docs/master/optim.html#torch.optim.Optimizer).
Question 4
**Question:** How did you select the optimizer used to train your model?
**Answer:** I used the Adam optimizer as it converges faster.
###Code
# all imports
import torch
import torch.nn as nn
from torchvision import transforms
import sys
sys.path.append('/opt/cocoapi/PythonAPI')
from pycocotools.coco import COCO
from data_loader import get_loader
from model import EncoderCNN, DecoderRNN
import math
## All Variables to set
## TODO #1: Select appropriate values for the Python variables below.
batch_size = 64 # batch size
vocab_threshold = 4 # minimum word count threshold
vocab_from_file = True # if True, load existing vocab file
embed_size = 512 # dimensionality of image and word embeddings
hidden_size = 512 # number of features in hidden state of the RNN decoder
num_epochs = 3             # number of training epochs (matches the 3-epoch run recorded in the training log below)
save_every = 1 # determines frequency of saving model weights
print_every = 100 # determines window for printing average loss
log_file = 'training_log.txt' # name of file with saved training loss and perplexity
# (Optional) TODO #2: Amend the image transform below.
transform_train = transforms.Compose([
transforms.Resize(256), # smaller edge of image resized to 256
transforms.RandomCrop(224), # get 224x224 crop from random location
transforms.RandomHorizontalFlip(), # horizontally flip image with probability=0.5
transforms.ToTensor(), # convert the PIL Image to a tensor
transforms.Normalize((0.485, 0.456, 0.406), # normalize image for pre-trained model
(0.229, 0.224, 0.225))])
# Build data loader.
data_loader = get_loader(transform=transform_train,
mode='train',
batch_size=batch_size,
vocab_threshold=vocab_threshold,
vocab_from_file=vocab_from_file)
# The size of the vocabulary.
vocab_size = len(data_loader.dataset.vocab)
# Initialize the encoder and decoder.
encoder = EncoderCNN(embed_size)
decoder = DecoderRNN(embed_size, hidden_size, vocab_size)
# Move models to GPU if CUDA is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
encoder.to(device)
decoder.to(device)
# Define the loss function.
criterion = nn.CrossEntropyLoss().cuda() if torch.cuda.is_available() else nn.CrossEntropyLoss()
# TODO #3: Specify the learnable parameters of the model.
params = list(decoder.parameters()) + list(encoder.embed.parameters())
# TODO #4: Define the optimizer.
optimizer = torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999))
# Set the total number of training steps per epoch.
total_step = math.ceil(len(data_loader.dataset.caption_lengths) / data_loader.batch_sampler.batch_size)
###Output
_____no_output_____
###Markdown
Step 2: Train your Model
Once you have executed the code cell in **Step 1**, the training procedure below should run without issue. It is completely fine to leave the code cell below as-is without modifications to train your model. However, if you would like to modify the code used to train the model below, you must ensure that your changes are easily parsed by your reviewer. In other words, make sure to provide appropriate comments to describe how your code works!
You may find it useful to load saved weights to resume training. In that case, note the names of the files containing the encoder and decoder weights that you'd like to load (`encoder_file` and `decoder_file`). Then you can load the weights by using the lines below:
```python
# Load pre-trained weights before resuming training.
encoder.load_state_dict(torch.load(os.path.join('./models', encoder_file)))
decoder.load_state_dict(torch.load(os.path.join('./models', decoder_file)))
```
While trying out parameters, make sure to take extensive notes and record the settings that you used in your various training runs. In particular, you don't want to encounter a situation where you've trained a model for several hours but can't remember what settings you used :).
A Note on Tuning Hyperparameters
To figure out how well your model is doing, you can look at how the training loss and perplexity evolve during training - and for the purposes of this project, you are encouraged to amend the hyperparameters based on this information. However, this will not tell you if your model is overfitting to the training data, and, unfortunately, overfitting is a problem that is commonly encountered when training image captioning models.
For this project, you need not worry about overfitting. **This project does not have strict requirements regarding the performance of your model**, and you just need to demonstrate that your model has learned **_something_** when you generate captions on the test data. For now, we strongly encourage you to train your model for the suggested 3 epochs without worrying about performance; then, you should immediately transition to the next notebook in the sequence (**3_Inference.ipynb**) to see how your model performs on the test data. If your model needs to be changed, you can come back to this notebook, amend hyperparameters (if necessary), and re-train the model.
That said, if you would like to go above and beyond in this project, you can read about some approaches to minimizing overfitting in section 4.3.1 of [this paper](http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7505636). In the next (optional) step of this notebook, we provide some guidance for assessing the performance on the validation dataset.
###Code
import torch.utils.data as data
import numpy as np
import os
import requests
import sys
import time
# Open the training log file.
f = open(log_file, 'w')
old_time = time.time()
response = requests.request("GET",
"http://metadata.google.internal/computeMetadata/v1/instance/attributes/keep_alive_token",
headers={"Metadata-Flavor":"Google"})
for epoch in range(1, num_epochs+1):
for i_step in range(1, total_step+1):
if time.time() - old_time > 60:
old_time = time.time()
requests.request("POST",
"https://nebula.udacity.com/api/v1/remote/keep-alive",
headers={'Authorization': "STAR " + response.text})
# Randomly sample a caption length, and sample indices with that length.
indices = data_loader.dataset.get_train_indices()
# Create and assign a batch sampler to retrieve a batch with the sampled indices.
new_sampler = data.sampler.SubsetRandomSampler(indices=indices)
data_loader.batch_sampler.sampler = new_sampler
# Obtain the batch.
images, captions = next(iter(data_loader))
# Move batch of images and captions to GPU if CUDA is available.
images = images.to(device)
captions = captions.to(device)
# Zero the gradients.
decoder.zero_grad()
encoder.zero_grad()
# Pass the inputs through the CNN-RNN model.
features = encoder(images)
outputs = decoder(features, captions)
# Calculate the batch loss.
loss = criterion(outputs.view(-1, vocab_size), captions.view(-1))
# Backward pass.
loss.backward()
# Update the parameters in the optimizer.
optimizer.step()
# Get training statistics.
stats = 'Epoch [%d/%d], Step [%d/%d], Loss: %.4f, Perplexity: %5.4f' % (epoch, num_epochs, i_step, total_step, loss.item(), np.exp(loss.item()))
# Print training statistics (on same line).
print('\r' + stats, end="")
sys.stdout.flush()
# Print training statistics to file.
f.write(stats + '\n')
f.flush()
# Print training statistics (on different line).
if i_step % print_every == 0:
print('\r' + stats)
# Save the weights.
if epoch % save_every == 0:
torch.save(decoder.state_dict(), os.path.join('./models', 'decoder-%d.pkl' % epoch))
torch.save(encoder.state_dict(), os.path.join('./models', 'encoder-%d.pkl' % epoch))
# Close the training log file.
f.close()
###Output
Epoch [1/3], Step [100/6471], Loss: 4.5218, Perplexity: 92.0008
Epoch [1/3], Step [200/6471], Loss: 3.7283, Perplexity: 41.60764
Epoch [1/3], Step [300/6471], Loss: 3.7600, Perplexity: 42.9495
Epoch [1/3], Step [400/6471], Loss: 3.5082, Perplexity: 33.3866
Epoch [1/3], Step [500/6471], Loss: 3.3675, Perplexity: 29.0047
Epoch [1/3], Step [600/6471], Loss: 3.3409, Perplexity: 28.2457
Epoch [1/3], Step [700/6471], Loss: 3.2590, Perplexity: 26.0226
Epoch [1/3], Step [800/6471], Loss: 3.1923, Perplexity: 24.3448
Epoch [1/3], Step [900/6471], Loss: 3.4864, Perplexity: 32.66809
Epoch [1/3], Step [1000/6471], Loss: 3.2017, Perplexity: 24.5745
Epoch [1/3], Step [1100/6471], Loss: 2.8471, Perplexity: 17.23760
Epoch [1/3], Step [1200/6471], Loss: 3.3957, Perplexity: 29.83429
Epoch [1/3], Step [1300/6471], Loss: 2.8143, Perplexity: 16.6808
Epoch [1/3], Step [1400/6471], Loss: 2.7412, Perplexity: 15.5054
Epoch [1/3], Step [1500/6471], Loss: 2.7068, Perplexity: 14.9807
Epoch [1/3], Step [1600/6471], Loss: 2.8094, Perplexity: 16.5994
Epoch [1/3], Step [1700/6471], Loss: 2.7322, Perplexity: 15.3664
Epoch [1/3], Step [1800/6471], Loss: 2.7412, Perplexity: 15.5063
Epoch [1/3], Step [1900/6471], Loss: 3.0192, Perplexity: 20.4749
Epoch [1/3], Step [2000/6471], Loss: 2.7178, Perplexity: 15.1469
Epoch [1/3], Step [2100/6471], Loss: 2.5075, Perplexity: 12.2740
Epoch [1/3], Step [2200/6471], Loss: 2.6035, Perplexity: 13.5108
Epoch [1/3], Step [2300/6471], Loss: 3.2503, Perplexity: 25.7984
Epoch [1/3], Step [2400/6471], Loss: 2.4187, Perplexity: 11.2318
Epoch [1/3], Step [2500/6471], Loss: 2.6615, Perplexity: 14.3180
Epoch [1/3], Step [2600/6471], Loss: 2.4829, Perplexity: 11.9763
Epoch [1/3], Step [2700/6471], Loss: 2.2754, Perplexity: 9.73180
Epoch [1/3], Step [2800/6471], Loss: 2.4736, Perplexity: 11.8657
Epoch [1/3], Step [2900/6471], Loss: 2.3892, Perplexity: 10.9049
Epoch [1/3], Step [3000/6471], Loss: 2.5187, Perplexity: 12.4121
Epoch [1/3], Step [3100/6471], Loss: 2.3921, Perplexity: 10.9368
Epoch [1/3], Step [3200/6471], Loss: 2.4762, Perplexity: 11.8963
Epoch [1/3], Step [3300/6471], Loss: 2.3599, Perplexity: 10.5903
Epoch [1/3], Step [3400/6471], Loss: 2.4169, Perplexity: 11.2109
Epoch [1/3], Step [3500/6471], Loss: 2.3541, Perplexity: 10.5291
Epoch [1/3], Step [3600/6471], Loss: 2.9826, Perplexity: 19.7382
Epoch [1/3], Step [3700/6471], Loss: 2.3861, Perplexity: 10.8710
Epoch [1/3], Step [3800/6471], Loss: 2.3646, Perplexity: 10.6396
Epoch [1/3], Step [3900/6471], Loss: 2.3194, Perplexity: 10.1692
Epoch [1/3], Step [4000/6471], Loss: 2.3880, Perplexity: 10.8912
Epoch [1/3], Step [4100/6471], Loss: 2.3221, Perplexity: 10.1966
Epoch [1/3], Step [4200/6471], Loss: 2.3396, Perplexity: 10.3772
Epoch [1/3], Step [4300/6471], Loss: 2.3701, Perplexity: 10.6987
Epoch [1/3], Step [4400/6471], Loss: 2.6036, Perplexity: 13.5127
Epoch [1/3], Step [4500/6471], Loss: 2.2085, Perplexity: 9.10242
Epoch [1/3], Step [4600/6471], Loss: 3.3022, Perplexity: 27.1733
Epoch [1/3], Step [4700/6471], Loss: 2.7579, Perplexity: 15.7665
Epoch [1/3], Step [4800/6471], Loss: 2.5946, Perplexity: 13.3910
Epoch [1/3], Step [4900/6471], Loss: 2.8834, Perplexity: 17.8745
Epoch [1/3], Step [5000/6471], Loss: 2.2033, Perplexity: 9.05531
Epoch [1/3], Step [5100/6471], Loss: 2.2761, Perplexity: 9.73856
Epoch [1/3], Step [5200/6471], Loss: 2.6565, Perplexity: 14.2464
Epoch [1/3], Step [5300/6471], Loss: 2.4118, Perplexity: 11.1544
Epoch [1/3], Step [5400/6471], Loss: 2.4533, Perplexity: 11.6267
Epoch [1/3], Step [5500/6471], Loss: 2.0934, Perplexity: 8.11257
Epoch [1/3], Step [5600/6471], Loss: 2.3873, Perplexity: 10.8845
Epoch [1/3], Step [5700/6471], Loss: 2.1308, Perplexity: 8.421650
Epoch [1/3], Step [5800/6471], Loss: 2.4807, Perplexity: 11.9499
Epoch [1/3], Step [5900/6471], Loss: 2.2764, Perplexity: 9.74208
Epoch [1/3], Step [6000/6471], Loss: 2.4219, Perplexity: 11.2678
Epoch [1/3], Step [6100/6471], Loss: 2.0704, Perplexity: 7.92820
Epoch [1/3], Step [6200/6471], Loss: 2.2936, Perplexity: 9.91053
Epoch [1/3], Step [6300/6471], Loss: 2.2755, Perplexity: 9.73290
Epoch [1/3], Step [6400/6471], Loss: 2.6223, Perplexity: 13.7680
Epoch [2/3], Step [100/6471], Loss: 2.0691, Perplexity: 7.917986
Epoch [2/3], Step [200/6471], Loss: 2.2568, Perplexity: 9.55278
Epoch [2/3], Step [300/6471], Loss: 2.1678, Perplexity: 8.73924
Epoch [2/3], Step [400/6471], Loss: 2.1480, Perplexity: 8.56788
Epoch [2/3], Step [500/6471], Loss: 2.3019, Perplexity: 9.99277
Epoch [2/3], Step [600/6471], Loss: 2.2467, Perplexity: 9.45640
Epoch [2/3], Step [700/6471], Loss: 2.3325, Perplexity: 10.3041
Epoch [2/3], Step [800/6471], Loss: 2.2613, Perplexity: 9.59580
Epoch [2/3], Step [900/6471], Loss: 2.3086, Perplexity: 10.0604
Epoch [2/3], Step [1000/6471], Loss: 2.0802, Perplexity: 8.0061
Epoch [2/3], Step [1100/6471], Loss: 1.9997, Perplexity: 7.38687
Epoch [2/3], Step [1200/6471], Loss: 2.1149, Perplexity: 8.28851
Epoch [2/3], Step [1300/6471], Loss: 2.1374, Perplexity: 8.47731
Epoch [2/3], Step [1400/6471], Loss: 2.6317, Perplexity: 13.8978
Epoch [2/3], Step [1500/6471], Loss: 2.3402, Perplexity: 10.3838
Epoch [2/3], Step [1600/6471], Loss: 2.1557, Perplexity: 8.63413
Epoch [2/3], Step [1700/6471], Loss: 1.9921, Perplexity: 7.33107
Epoch [2/3], Step [1800/6471], Loss: 2.0854, Perplexity: 8.04818
Epoch [2/3], Step [1900/6471], Loss: 2.2936, Perplexity: 9.91088
Epoch [2/3], Step [2000/6471], Loss: 2.1862, Perplexity: 8.90100
Epoch [2/3], Step [2100/6471], Loss: 2.8537, Perplexity: 17.3522
Epoch [2/3], Step [2200/6471], Loss: 2.2995, Perplexity: 9.96886
Epoch [2/3], Step [2300/6471], Loss: 2.1319, Perplexity: 8.43075
Epoch [2/3], Step [2400/6471], Loss: 2.0518, Perplexity: 7.78208
Epoch [2/3], Step [2500/6471], Loss: 2.0349, Perplexity: 7.65116
Epoch [2/3], Step [2600/6471], Loss: 2.2648, Perplexity: 9.62892
Epoch [2/3], Step [2700/6471], Loss: 2.1298, Perplexity: 8.41290
Epoch [2/3], Step [2800/6471], Loss: 2.1986, Perplexity: 9.01254
Epoch [2/3], Step [2900/6471], Loss: 2.3103, Perplexity: 10.0773
Epoch [2/3], Step [3000/6471], Loss: 2.1846, Perplexity: 8.88720
Epoch [2/3], Step [3100/6471], Loss: 2.1580, Perplexity: 8.65340
Epoch [2/3], Step [3200/6471], Loss: 2.2292, Perplexity: 9.29246
Epoch [2/3], Step [3300/6471], Loss: 1.8819, Perplexity: 6.56583
Epoch [2/3], Step [3400/6471], Loss: 2.3803, Perplexity: 10.8081
Epoch [2/3], Step [3500/6471], Loss: 2.4078, Perplexity: 11.1094
Epoch [2/3], Step [3600/6471], Loss: 2.2128, Perplexity: 9.14174
Epoch [2/3], Step [3700/6471], Loss: 2.0495, Perplexity: 7.76377
Epoch [2/3], Step [3800/6471], Loss: 2.1137, Perplexity: 8.27880
Epoch [2/3], Step [3900/6471], Loss: 1.9873, Perplexity: 7.29551
Epoch [2/3], Step [4000/6471], Loss: 2.1141, Perplexity: 8.28184
Epoch [2/3], Step [4100/6471], Loss: 2.0982, Perplexity: 8.15194
Epoch [2/3], Step [4200/6471], Loss: 1.9257, Perplexity: 6.85990
Epoch [2/3], Step [4300/6471], Loss: 2.2778, Perplexity: 9.75528
Epoch [2/3], Step [4400/6471], Loss: 2.3145, Perplexity: 10.1204
Epoch [2/3], Step [4500/6471], Loss: 2.0857, Perplexity: 8.05058
Epoch [2/3], Step [4600/6471], Loss: 2.1453, Perplexity: 8.54506
Epoch [2/3], Step [4700/6471], Loss: 2.4439, Perplexity: 11.5181
Epoch [2/3], Step [4800/6471], Loss: 2.4398, Perplexity: 11.47123
Epoch [2/3], Step [4900/6471], Loss: 2.0735, Perplexity: 7.95266
Epoch [2/3], Step [5000/6471], Loss: 2.2301, Perplexity: 9.30038
Epoch [2/3], Step [5100/6471], Loss: 2.0144, Perplexity: 7.49646
Epoch [2/3], Step [5200/6471], Loss: 1.9276, Perplexity: 6.87286
Epoch [2/3], Step [5300/6471], Loss: 2.1884, Perplexity: 8.92124
Epoch [2/3], Step [5400/6471], Loss: 2.2629, Perplexity: 9.61060
Epoch [2/3], Step [5500/6471], Loss: 2.0748, Perplexity: 7.96294
Epoch [2/3], Step [5600/6471], Loss: 2.1107, Perplexity: 8.25371
Epoch [2/3], Step [5700/6471], Loss: 2.5667, Perplexity: 13.0221
Epoch [2/3], Step [5800/6471], Loss: 1.9837, Perplexity: 7.26987
Epoch [2/3], Step [5900/6471], Loss: 2.1704, Perplexity: 8.76209
Epoch [2/3], Step [6000/6471], Loss: 1.7752, Perplexity: 5.90156
Epoch [2/3], Step [6100/6471], Loss: 2.6451, Perplexity: 14.0843
Epoch [2/3], Step [6200/6471], Loss: 1.9639, Perplexity: 7.12730
Epoch [2/3], Step [6300/6471], Loss: 2.0161, Perplexity: 7.50875
Epoch [2/3], Step [6400/6471], Loss: 2.4282, Perplexity: 11.3384
Epoch [3/3], Step [100/6471], Loss: 2.0140, Perplexity: 7.493059
Epoch [3/3], Step [159/6471], Loss: 2.5616, Perplexity: 12.9565
###Markdown
Step 3: (Optional) Validate your ModelTo assess potential overfitting, one approach is to assess performance on a validation set. If you decide to do this **optional** task, you are required to first complete all of the steps in the next notebook in the sequence (**3_Inference.ipynb**); as part of that notebook, you will write and test code (specifically, the `sample` method in the `DecoderRNN` class) that uses your RNN decoder to generate captions. That code will prove incredibly useful here. If you decide to validate your model, please do not edit the data loader in **data_loader.py**. Instead, create a new file named **data_loader_val.py** containing the code for obtaining the data loader for the validation data. You can access:- the validation images at filepath `'/opt/cocoapi/images/train2014/'`, and- the validation image caption annotation file at filepath `'/opt/cocoapi/annotations/captions_val2014.json'`.The suggested approach to validating your model involves creating a json file such as [this one](https://github.com/cocodataset/cocoapi/blob/master/results/captions_val2014_fakecap_results.json) containing your model's predicted captions for the validation images. Then, you can write your own script or use one that you [find online](https://github.com/tylin/coco-caption) to calculate the BLEU score of your model. You can read more about the BLEU score, along with other evaluation metrics (such as METEOR and CIDEr) in section 4.1 of [this paper](https://arxiv.org/pdf/1411.4555.pdf). For more information about how to use the annotation file, check out the [website](http://cocodataset.org/download) for the COCO dataset. A rough sketch of assembling such a results file is given below.
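For reference, here is a rough sketch of how such a results file could be assembled once the `sample` method from **3_Inference.ipynb** exists. The names `val_data_loader` and `clean_sentence` are placeholders for a validation loader (from your **data_loader_val.py**) and a helper that turns predicted token ids back into a sentence; they are assumptions, not part of the provided project code.
```python
import json
import torch

results = []
encoder.eval()
decoder.eval()
with torch.no_grad():
    for images, image_ids in val_data_loader:            # placeholder: loader yielding (images, COCO ids)
        features = encoder(images.to(device))
        for feature, image_id in zip(features, image_ids):
            tokens = decoder.sample(feature.unsqueeze(0).unsqueeze(0))  # `sample` is written in 3_Inference.ipynb
            caption = clean_sentence(tokens)                            # placeholder: token ids -> sentence string
            results.append({"image_id": int(image_id), "caption": caption})

with open("captions_val2014_results.json", "w") as f:
    json.dump(results, f)
```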
###Code
# (Optional) TODO: Validate your model.
###Output
_____no_output_____ |
notebooks/analysis_zebrafish-related_chebi.ipynb | ###Markdown
Demonstration of pyMultiOmics Load the processed Zebrafish data from [1][1] [Rabinowitz, Jeremy S., et al. "Transcriptomic, proteomic, and metabolomic landscape of positional memory in the caudal fin of zebrafish." Proceedings of the National Academy of Sciences 114.5 (2017): E717-E726.](https://www.pnas.org/content/114/5/E717.short)
###Code
# core imports for this notebook (assumed; the project-specific helpers used further down,
# e.g. set_log_level_info, save_object, prepare_input and get_related_chebi, are expected
# to come from the pyMultiOmics package)
import os
import traceback

import pandas as pd
from loguru import logger

DATA_FOLDER = os.path.abspath(os.path.join('test_data', 'zebrafish_data'))
DATA_FOLDER
###Output
_____no_output_____
###Markdown
Read metabolomics data
###Code
compound_data = pd.read_csv(os.path.join(DATA_FOLDER, 'compound_data_chebi.csv'), index_col='Identifier')
compound_design = pd.read_csv(os.path.join(DATA_FOLDER, 'compound_design.csv'), index_col='sample')
compound_data
compound_data.loc[18139]
fly_new_data = pd.read_csv(os.path.join(DATA_FOLDER, '../fly_data/fly_metabolomics_no_dupes.csv'), index_col='Identifier')
fly_data = prepare_input(fly_new_data)
fly_data
r_chebi = get_related_chebi(fly_data)
remove_dupes(r_chebi)
fly_data
remove_dupes(fly_data)
no_dupes
os.getcwd()
set_log_level_info()
type(compound_design)
print(compound_design.head())
compound_design
compound_data.head()
###Output
_____no_output_____
###Markdown
Methods for adding related chebi IDs
###Code
# This method is pretty inefficient with the use of iterrows but I'm not sure of another way to run this
# All attempts at vectorisation failed - help Joe?
def get_related_chebi_data(cmpd_data):
# dont want to modify the original df
cmpd_data = cmpd_data.copy()
# ensure index type is set to string, since get_chebi_relation_dict also returns string as the keys
cmpd_data.index = cmpd_data.index.map(str)
chebi_rel_dict = get_chebi_relation_dict()
with_related = list(chebi_rel_dict.keys())
cmpd_data.loc[cmpd_data.index.isin(with_related), 'related']= 'Yes'
cmpd_data = cmpd_data.reset_index()
# We use this related_df so that we are not looking at all rows, only those with related chebi_ids
related_df = cmpd_data[cmpd_data.related=='Yes']
# print(related_df)
for ix, row in related_df.iterrows():
print (ix)
chebi_list = chebi_rel_dict[str(row.Identifier)]
for c in chebi_list:
#Check if the duplicate row with that chebi exists in the DF
current_row = row
current_row.Identifier = int(c)
matches = cmpd_data[(cmpd_data==current_row).all(axis=1)]
if len(matches) == 0:
# print ("no matching rows, appending")
cmpd_data = cmpd_data.append(current_row)
# else:
# print ("row found in DF therefore skipping")
c_data = cmpd_data.drop(['related'], axis=1)
c_data = c_data.set_index(['Identifier'])
return c_data
def get_related_chebi_data_v2(cmpd_data):
cmpd_data = cmpd_data.copy()
# ensure index type is set to string, since get_chebi_relation_dict also returns string as the keys
cmpd_data.index = cmpd_data.index.map(str)
cmpd_data = cmpd_data.reset_index()
original_cmpds = set(cmpd_data['Identifier']) # used for checking later
# construct the related chebi dict
chebi_rel_dict = get_chebi_relation_dict()
# loop through each row in cmpd_data
with_related_data = []
for ix, row in cmpd_data.iterrows():
# add the current row we're looping
current_identifier = row['Identifier']
with_related_data.append(row)
# check if there are related compounds to add
if current_identifier in chebi_rel_dict:
# if yes, get the related compounds
chebi_list = chebi_rel_dict[current_identifier]
for c in chebi_list:
# add the related chebi, but only if it's not already present in the original compound
if c not in original_cmpds:
current_row = row.copy()
current_row['Identifier'] = c
with_related_data.append(current_row)
# combine all the rows into a single dataframe
df = pd.concat(with_related_data, axis=1).T
df = df.set_index('Identifier')
logger.info('Inserted %d related compounds' % (len(df) - len(cmpd_data)))
return df
def remove_dupes(df):
df = df.reset_index()
# group df by the 'Identifier' column
to_delete = []
grouped = df.groupby(df['Identifier'])
for identifier, group_df in grouped:
# if there are multiple rows sharing the same identifier
if len(group_df) > 1:
# remove 'Identifier' column from the grouped df since it can't be summed
group_df = group_df.drop('Identifier', axis=1)
# find the row with the largest sum across the row in the group
idxmax = group_df.sum(axis=1).idxmax()
# mark all the rows in the group for deletion, except the one with the largest sum
temp = group_df.index.tolist()
temp.remove(idxmax)
to_delete.extend(temp)
# actually do the deletion here
logger.info('Removing %d rows with duplicate identifiers' % (len(to_delete)))
df = df.drop(to_delete)
df = df.set_index('Identifier')
return df
def get_chebi_relation_dict():
"""
A method to parse the chebi relation tsv and store the relationship we want in a dictionary
:return: Dict with structure Chebi_id: [related_chebi_ids]
"""
CHEBI_BFS_RELATION_DICT = 'chebi_bfs_relation_dict.pkl'
try:
chebi_bfs_relation_dict = load_object("../pyMultiOmics/data/" + CHEBI_BFS_RELATION_DICT)
except Exception as e:
logger.info("Constructing %s " % CHEBI_BFS_RELATION_DICT)
try:
chebi_relation_df = pd.read_csv("data/relation.tsv", delimiter="\t")
except FileNotFoundError as e:
logger.error("data/relation.tsv must be present")
raise e
# List of relationship we want in the dictionary
select_list = ["is_conjugate_base_of", "is_conjugate_acid_of", "is_tautomer_of"]
chebi_select_df = chebi_relation_df[chebi_relation_df.TYPE.isin(select_list)]
chebi_relation_dict = {}
# Gather all the INIT_IDs into a dictionary so that each INIT_ID is unique
for ix, row in chebi_select_df.iterrows():
init_id = str(row.INIT_ID)
final_id = str(row.FINAL_ID)
if init_id in chebi_relation_dict.keys():
# Append the final_id onto the existing values
id_1 = chebi_relation_dict[init_id]
joined_string = ", ".join([id_1, final_id])
chebi_relation_dict[init_id] = joined_string
else: # make a new key entry for the dict
chebi_relation_dict[init_id] = final_id
# Change string values to a list.
graph = {k: v.replace(" ", "").split(",") for k, v in chebi_relation_dict.items()}
chebi_bfs_relation_dict = {}
for k, v in graph.items():
r_chebis = bfs_get_related(graph, k)
r_chebis.remove(k) #remove original key from list
chebi_bfs_relation_dict[k] = r_chebis
try:
logger.info("saving chebi_relation_dict")
save_object(chebi_bfs_relation_dict, "./data/" + CHEBI_BFS_RELATION_DICT + ".pkl")
except Exception as e:
logger.error("Pickle didn't work because of %s " % e)
traceback.print_exc()
pass
return chebi_bfs_relation_dict
import gzip
import pickle
def load_object(filename):
"""
Load saved object from file
:param filename: The file to load
:return: the loaded object
"""
with gzip.GzipFile(filename, 'rb') as f:
return pickle.load(f)
def bfs_get_related(graph_dict, node):
"""
:param graph: Dictionary of key: ['value'] pairs
:param node: the key for which all related values should be returned
:return: All related keys as a list
"""
visited = [] # List to keep track of visited nodes.
queue = [] #Initialize a queue
related_keys = []
visited.append(node)
queue.append(node)
while queue:
k = queue.pop(0)
related_keys.append(k)
for neighbour in graph_dict[k]:
if neighbour not in visited:
visited.append(neighbour)
queue.append(neighbour)
return related_keys
def get_related_chebi_ids(chebi_ids):
"""
:param chebi_ids: A list of chebi IDS
:return: A set of related chebi_IDs that are not already in the list
"""
chebi_relation_dict = get_chebi_relation_dict()
related_chebis = set()
for c_id in chebi_ids:
if c_id in chebi_relation_dict:
related_chebis.update(chebi_relation_dict[c_id])
return related_chebis
###Output
_____no_output_____
###Markdown
For each chebi_id in the DF that has other related Chebi_ids, add a duplicate row for each of them. For the Zebrafish DF we expect the input and output to be the same, because all of the related Chebi_ids are already present in the DF (a toy illustration of the relation lookup is shown below).
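As a toy illustration only (not part of the analysis), this is what `bfs_get_related` defined above returns for a small relation graph:
```python
toy_graph = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}
print(bfs_get_related(toy_graph, 'A'))   # ['A', 'B', 'C']
# get_chebi_relation_dict() then removes the starting key itself,
# so 'A' would map to ['B', 'C'] in the final lookup dictionary.
```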
###Code
zebra_f_related_chebi = get_related_chebi_data_v2(compound_data)
compound_data
zebra_f_related_chebi
zebra_f_related_chebi_no_dupes = remove_dupes(zebra_f_related_chebi)
zebra_f_related_chebi_no_dupes
###Output
2021-03-29 13:21:43.472 | INFO | __main__:remove_dupes:24 - Removing 0 rows with duplicate identifiers
###Markdown
For the Fly DF we expect the input and output to be the same as all the related Chebi_ids are already present in the DF
###Code
fly_related_chebi = get_related_chebi_data_v2(fly_compound_data)
fly_related_chebi
fly_compound_data
fly_r_chebi_new = get_related_chebi_data_v2(fly_new_data)
fly_r_chebi_new
fly_new_data
fly_r_chebi_new[fly_r_chebi_new['CAR_F1.mzXML']==78386424.0]
new_fly_no_dupes = remove_dupes(fly_r_chebi_new)
new_fly_no_dupes
fly_new_data
hc_chebi_int = list(map(int, hc_chebi))
fly_new_data.loc[hc_chebi_int]
new_fly_no_dupes.loc[hc_chebi]
## Difference between two databases.
from pandas._testing import assert_frame_equal
test1 = new_fly_no_dupes.loc[hc_chebi]
fly_new_data.index = fly_new_data.index.map(str)
test2 = fly_new_data.loc[hc_chebi]
t1 = test1.astype(object)
t2 = test2.astype(object)
assert_frame_equal(t1, t2)
hc_chebi = ['17203',
'17750',
'16414',
'16704',
'18132',
'27596',
'17533',
'32796',
'18049',
'28483',
'17256',
'6032',
'17015',
'46905',
'16168',
'15603',
'37023',
'28587',
'15978',
'35704',
'17215',
'45133',
'47977',
'18050',
'27570',
'16958',
'18183',
'32816',
'506227',
'16347',
'1547',
'17380',
'29069',
'18123',
'18344',
'10072',
'16283',
'17895',
'16785',
'16828',
'16349',
'17154',
'17747',
'16015',
'73685',
'17981',
'18019',
'28123',
'38571',
'70744',
'17549',
'18095',
'42111',
'33198',
'4167',
'16865',
'17587',
'16946',
'17310',
'16856',
'17992',
'17521',
'17515',
'16467',
'17562',
'16020',
'16708',
'4170',
'15354',
'16899',
'18300',
'19062',
'16643',
'17295',
'17368',
'15746',
'17489',
'17858',
'17196',
'15676',
'17482',
'17351',
'30769',
'16335',
'16870',
'17596',
'16027',
'18295',
'15891',
'21547',
'30797',
'27781',
'16040',
'84543',
'17345',
'73124',
'45658',
'17713',
'16742',
'16610',
'73238',
'73882',
'30836',
'17148',
'17361',
'16695',
'52742']
fly_related_chebi_no_dupes = remove_dupes(fly_related_chebi)
fly_related_chebi_no_dupes
all_rows = len(fly_related_chebi_no_dupes)
unique_rows = len(fly_related_chebi_no_dupes['sak_h_3.mzXML'].unique())
unique_rows
print ("There are", unique_rows, "unique rows out of", all_rows, "total rows in the FlyMet data")
###Output
There are 892 unique rows out of 9244 total rows in the FlyMet data
|
code/analysis/3_figures_and_tables/supplementary_figs/FigS17/cnn_vs_MOSAIKS_scatter.ipynb | ###Markdown
This notebook plots errors in MOSAIKS predictions against errors in CNN predictions
###Code
from mosaiks import config as c
import os
import pickle
import pandas as pd
import seaborn as sns
from scipy.stats import pearsonr
from mosaiks.utils.imports import *
%matplotlib inline
# plot settings
plt.rcParams["pdf.fonttype"] = 42
sns.set(context="paper", style="ticks")
###Output
_____no_output_____
###Markdown
Get task names in the specified order
###Code
# get task names
c_by_app = [getattr(c, i) for i in c.app_order]
num_tasks = len(c.app_order)
disp_names = [config["disp_name"] for config in c_by_app]
###Output
_____no_output_____
###Markdown
Grab primary MOSAIKS analysis predictions and labels
###Code
# get variables and determine if sampled UAR or POP in main analysis
variables = [config["variable"] for config in c_by_app]
sample_types = [config["sampling"] for config in c_by_app]
# get filepaths for data
file_paths_local = []
filetype = ["testset", "scatter"]
for tx, t in enumerate(c.app_order):
c = io.get_filepaths(c, t)
for ft in filetype:
this_filename = f"outcomes_{ft}_obsAndPred_{t}_{variables[tx]}_CONTUS_16_640_{sample_types[tx]}_100000_0_random_features_3_0.data"
this_filepath_local = os.path.join(c.fig_dir_prim, this_filename)
file_paths_local.append(this_filepath_local)
mos_dfs = []
for tx, t in enumerate(c.app_order):
file1 = file_paths_local[tx * 2]
file2 = file_paths_local[tx * 2 + 1]
dfs_task = []
# grab the test set and validation/training set; concatenate to match test set for CNN
for fidx in [0, 1]:
with open(file_paths_local[tx * 2 + fidx], "rb") as file_this:
data_this = pickle.load(file_this)
mos_dfs.append(
pd.DataFrame(
{
"truth": np.squeeze(data_this["truth"]),
"preds": data_this["preds"],
"lat": data_this["lat"],
"lon": data_this["lon"],
},
index=[t] * len(data_this["lat"]),
)
)
mos_df = pd.concat(mos_dfs)
mos_df.index.name = "task"
mos_df["errors"] = mos_df["truth"] - mos_df["preds"]
###Output
_____no_output_____
###Markdown
Grab CNN predictions
###Code
file_paths_local = [
os.path.join(c.data_dir, "output", "cnn_comparison", f"resnet18_{t}.pickle")
for t in c.app_order
]
cnn_dfs = []
for tx, t in enumerate(c.app_order):
with open(file_paths_local[tx], "rb") as file_this:
data_this = pickle.load(file_this)
cnn_dfs.append(
pd.DataFrame(
{
"truth": np.squeeze(data_this["y_test"]),
"preds": np.squeeze(data_this["y_test_pred"]),
"test_r2": data_this["test_r2"],
},
index=pd.MultiIndex.from_product(
[[t], np.squeeze(data_this["ids_test"])], names=["task", "ID"]
),
)
)
cnn_df = pd.concat(cnn_dfs)
cnn_df["errors"] = cnn_df.truth - cnn_df.preds
###Output
_____no_output_____
###Markdown
Merge CNN errors to MOSAIKS errors
###Code
latlons = {}
for s in ["UAR", "POP"]:
_, latlons[s] = io.get_X_latlon(c, s)
latlons = pd.concat(latlons.values())
latlons = latlons.drop_duplicates()
cnn_df = (
cnn_df.join(latlons, on="ID", how="left")
.reset_index()
.set_index(["task", "lat", "lon"])
)
mos_df = mos_df.set_index(["lat", "lon"], append=True)
merged_df = mos_df.join(cnn_df, lsuffix="_mos", rsuffix="_cnn")
# keep only matched labels
merged_df = merged_df[merged_df.truth_cnn.notnull()]
###Output
_____no_output_____
###Markdown
Compute R2s between CNN and MOSAIKS predictions and errors
###Code
r2s = []
for t in c.app_order:
r2s.append(
pd.DataFrame(
{
"R2preds": pearsonr(
merged_df.loc[t]["preds_cnn"], merged_df.loc[t]["preds_mos"]
)[0]
** 2,
"R2errors": pearsonr(
merged_df.loc[t]["errors_cnn"], merged_df.loc[t]["errors_mos"]
)[0]
** 2,
},
index=[t],
)
)
r2s_df = pd.concat(r2s)
###Output
_____no_output_____
###Markdown
Plot CNN vs MOSAIKS predictions and errors
###Code
# settings for text formatting
yloc = np.linspace(1, 1 / 6, 7) - 0.06
fig, ax = plt.subplots(7, 2, figsize=(6, 10))
for tx, t in enumerate(c.app_order):
# simplify ticks
maxerr = round(merged_df.loc[t].filter(like="errors").abs().max().max())
maxpred = round(merged_df.loc[t].filter(like="preds").abs().max().max())
minpred = round(merged_df.loc[t].filter(like="preds").abs().min().min())
if t in ["elevation", "income", "roads"]:
maxerr = round(maxerr / 10) * 10
maxpred = round(maxpred / 10) * 10
minpred = round(minpred / 10) * 10
errticks = np.linspace(-1 * maxerr, maxerr, 3)
predticks = np.linspace(minpred, maxpred, 3)
for jx, j in enumerate([("preds", predticks), ("errors", errticks)]):
kind = j[0]
ticks = j[1]
ax[tx, jx].plot(
merged_df.loc[t][f"{kind}_mos"],
merged_df.loc[t][f"{kind}_cnn"],
"o",
color=c_by_app[tx]["color"],
alpha=0.2,
markersize=1,
)
x = np.linspace(*ax[tx, jx].get_xlim())
ax[tx, jx].plot(x, x, color="grey")
# force tick marks to be the same
ax[tx, jx].set_xticks(ticks)
ax[tx, jx].set_yticks(ticks)
# force equality of lines
ax[tx, jx].set_aspect("equal")
# kill left and top lines
sns.despine(ax=ax[tx, jx])
# add R2
r2 = r2s_df.loc[t].filter(like=kind)[0].round(2)
txt = fr"$\rho^2 = {r2:.2f}$"
ax[tx, jx].annotate(
txt, xy=(6, 72), xycoords="axes points", size=9, ha="left", va="top"
)
# add evenly spaced y labels
fig.text(
0.55,
yloc[tx],
"CNN errors",
rotation="vertical",
rotation_mode="anchor",
va="center",
ha="center",
)
fig.text(
0.07,
yloc[tx],
"CNN predictions",
rotation="vertical",
rotation_mode="anchor",
va="center",
ha="center",
)
fig.text(
0.01,
yloc[tx],
c_by_app[tx]["disp_name"].capitalize().replace(" ", "\n"),
weight="bold",
rotation="vertical",
rotation_mode="anchor",
va="bottom",
ha="center",
)
ax[6, 0].set_xlabel("MOSAIKS predictions", ha="center", va="top", rotation="horizontal")
ax[6, 1].set_xlabel("MOSAIKS errors", ha="center", va="top", rotation="horizontal")
fig.tight_layout(pad=0.5)
# Save
save_dir = os.path.join(c.res_dir, "figures", "FigS17")
os.makedirs(save_dir, exist_ok=True)
fig.savefig(
os.path.join(save_dir, "cnn_mosaiks_predictions_errors_scatter.png"),
dpi=300,
tight_layout=True,
bbox_inches="tight",
)
###Output
_____no_output_____ |
2018_05_28_Pandas_Pivot_review.ipynb | ###Markdown
Pandas - Pivot- A way to build a data frame by selecting index, columns, and values from the columns of a DataFrame- Parameters are passed in as index, columns, values, in the form below - df.pivot(index, columns, values)- An error is raised if two or more values correspond to the same index and columns combination (a small illustration follows below).
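A small self-contained illustration with toy data (not the Titanic set) of both the normal case and the duplicate-entry error:
```python
import pandas as pd

toy = pd.DataFrame({'Sex': ['male', 'male', 'female', 'female'],
                    'Pclass': [1, 2, 1, 2],
                    'Counts': [10, 20, 30, 40]})
print(toy.pivot(index='Sex', columns='Pclass', values='Counts'))

# with a duplicated (index, columns) pair, pivot raises
# "ValueError: Index contains duplicate entries, cannot reshape"
dup = pd.concat([toy, toy.iloc[[0]]])
# dup.pivot(index='Sex', columns='Pclass', values='Counts')  # would raise
```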
###Code
# imports (assumed; this notebook uses pandas and numpy throughout)
import numpy as np
import pandas as pd

# titanic data load
# https://www.kaggle.com/c/titanic/data
# Survived: 0 - no, 1 - yes
titanic = pd.read_csv('train.csv')
titanic.tail()
# group by Sex and Pclass and add the number of rows as a Counts column
titanic_f1 = pd.DataFrame(titanic, columns=['Sex', 'Pclass'])
titanic_f1 = titanic.groupby(['Sex', 'Pclass']).size().reset_index(name='Counts')
titanic_f1
# pivot: number of rows by cabin class and sex
# df.pivot(index, columns, values)
titanic_f1.pivot('Sex', 'Pclass', 'Counts')
titanic_f1.pivot('Pclass','Sex','Counts')
# group by Survived and Sex and add the number of rows as a Counts column
titanic.tail(5)
titanic_df2 = pd.DataFrame(titanic, columns=['Survived', 'Sex'])
titanic_df2 = titanic.groupby(['Survived','Sex']).size().reset_index(name='Counts')
titanic_df2
# pivot: number of rows by sex and survival
# df.pivot(index, columns, values)
titanic_df2.pivot('Sex', 'Survived', 'Counts')
###Output
_____no_output_____
###Markdown
Pivot_table- pivot_table(values, index, columns, aggfunc) - values: the values to aggregate - index: list-like data for the index - columns: list-like data for the columns - aggfunc: groupby aggregate function - fill_value: the value filled in where data is missing - dropna: drop columns that contain no data (a small illustration follows below)
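A short illustration with toy data of how pivot_table aggregates duplicate (index, columns) pairs and how fill_value works:
```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'Sex': ['male', 'male', 'female', 'female', 'female'],
                    'Pclass': [1, 1, 1, 2, 3],
                    'Count': [1, 1, 1, 1, 1]})
# duplicate (Sex, Pclass) pairs are fine here: aggfunc sums them,
# and fill_value=0 fills combinations that do not occur
print(toy.pivot_table(values='Count', index='Sex', columns='Pclass',
                      aggfunc=np.sum, fill_value=0))
```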
###Code
titanic_df3 = pd.DataFrame(titanic)
titanic_df3['Count'] = 1
titanic_df3.tail()
# pivot_table: number of rows by cabin class (Pclass) and sex (Sex)
titanic_df3.pivot_table(values='Count', index='Sex', columns='Pclass', aggfunc=np.sum)
# survival counts (Survived) by cabin class (Pclass) and sex (Sex)
titanic_df3.pivot_table(index=['Pclass', 'Sex'], columns='Survived', values='Count', aggfunc=np.sum)
# Exercise
# Using pivot_table, compute the survival data by sex shown in the data frame below.
# index = Survived, columns = Sex, values = Count
result = titanic_df3.pivot_table(index='Survived', columns='Sex', values='Count', aggfunc=np.sum)
result
# total column
result['total'] = result['female'] + result['male']
result
# total row
result.loc['total'] = result.loc[0] + result.loc[1]
result
# delete row
result.drop('total', inplace=True)
result
result.drop('total', axis=1,inplace=True)
result
# SibSp: siblings/spouses aboard
# Parch: parents/children aboard
result1 = titanic_df3.pivot_table(index='Survived', columns = ['Pclass', 'Parch'], values = 'Count', aggfunc=sum)
result1  # NaN values are visible, and numbers that are not in Parch also appear
# fill_value: the value that is filled in when data is missing
result1 = titanic_df3.pivot_table(index='Survived', columns=['Pclass', 'Parch'], values='Count', aggfunc=np.sum, fill_value = 0)
result1
# dropna: drop columns that contain no data (dropna=False keeps them)
df = titanic_df3.pivot_table(index='Survived', columns=['Parch', 'Pclass'], values='Count', aggfunc=np.sum, fill_value=0, dropna=False)
df
###Output
_____no_output_____ |
Build, train, and deploy a machine learning model with Amazon SageMaker.ipynb | ###Markdown
Build, train, and deploy a machine learning model with Amazon SageMaker Source: https://aws.amazon.com/getting-started/hands-on/build-train-deploy-machine-learning-model-sagemaker/?trk=el_a134p000003yWILAA2&trkCampaign=DS_SageMaker_Tutorial&sc_channel=el&sc_campaign=Data_Scientist_Hands-on_Tutorial&sc_outcome=Product_Marketing&sc_geo=mult&p=gsrc&c=lp_ds 1. Imports the required libraries and defines the environment variables you need to prepare the data, train the ML model, and deploy the ML model.
###Code
# import libraries
import boto3, re, sys, math, json, os, sagemaker, urllib.request
from sagemaker import get_execution_role
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import Image
from IPython.display import display
from time import gmtime, strftime
from sagemaker.predictor import csv_serializer
# Define IAM role
role = get_execution_role()
prefix = 'sagemaker/DEMO-xgboost-dm'
my_region = boto3.session.Session().region_name # set the region of the instance
# this line automatically looks for the XGBoost image URI and builds an XGBoost container.
xgboost_container = sagemaker.image_uris.retrieve("xgboost", my_region, "latest")
print("Success - the MySageMakerInstance is in the " + my_region + " region. You will use the " + xgboost_container + " container for your SageMaker endpoint.")
###Output
Success - the MySageMakerInstance is in the eu-central-1 region. You will use the 813361260812.dkr.ecr.eu-central-1.amazonaws.com/xgboost:latest container for your SageMaker endpoint.
###Markdown
2. Create the S3 bucket to store your data. The bucket name in the cell below must be changed to a globally unique name.
###Code
bucket_name = 'your-s3-bucket-moanesga' # <--- CHANGE THIS VARIABLE TO A UNIQUE NAME FOR YOUR BUCKET
s3 = boto3.resource('s3')
try:
if my_region == 'us-east-1':
s3.create_bucket(Bucket=bucket_name)
else:
s3.create_bucket(Bucket=bucket_name, CreateBucketConfiguration={ 'LocationConstraint': my_region })
print('S3 bucket created successfully')
except Exception as e:
print('S3 error: ',e)
###Output
S3 bucket created successfully
###Markdown
3. Download the data to your SageMaker instance and load the data into a dataframe
###Code
try:
urllib.request.urlretrieve ("https://d1.awsstatic.com/tmt/build-train-deploy-machine-learning-model-sagemaker/bank_clean.27f01fbbdf43271788427f3682996ae29ceca05d.csv", "bank_clean.csv")
print('Success: downloaded bank_clean.csv.')
except Exception as e:
print('Data load error: ',e)
try:
model_data = pd.read_csv('./bank_clean.csv',index_col=0)
print('Success: Data loaded into dataframe.')
except Exception as e:
print('Data load error: ',e)
###Output
Success: downloaded bank_clean.csv.
Success: Data loaded into dataframe.
###Markdown
4. Shuffle and split the data into training data and test data. The training data (70% of customers) is used during the model training loop. You use gradient-based optimization to iteratively refine the model parameters. Gradient-based optimization is a way to find model parameter values that minimize the model error, using the gradient of the model loss function.The test data (remaining 30% of customers) is used to evaluate the performance of the model and measure how well the trained model generalizes to unseen data.
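As a purely illustrative aside (not part of the SageMaker workflow), gradient-based optimization can be sketched in a few lines for a one-parameter least-squares model: the parameter is repeatedly moved against the gradient of the loss.
```python
import numpy as np

x = np.linspace(0, 1, 100)
y = 3.0 * x + np.random.normal(scale=0.1, size=x.size)  # "true" slope is 3

w, lr = 0.0, 0.1
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)   # d/dw of the mean squared error
    w -= lr * grad                        # step against the gradient
print(w)  # close to 3
```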
###Code
train_data, test_data = np.split(model_data.sample(frac=1, random_state=1729), [int(0.7 * len(model_data))])
print(train_data.shape, test_data.shape)
###Output
(28831, 61) (12357, 61)
###Markdown
5. Train the ML model This code reformats the header and first column of the training data and then loads the data from the S3 bucket. This step is required to use the Amazon SageMaker pre-built XGBoost algorithm.
###Code
pd.concat([train_data['y_yes'], train_data.drop(['y_no', 'y_yes'], axis=1)], axis=1).to_csv('train.csv', index=False, header=False)
boto3.Session().resource('s3').Bucket(bucket_name).Object(os.path.join(prefix, 'train/train.csv')).upload_file('train.csv')
s3_input_train = sagemaker.inputs.TrainingInput(s3_data='s3://{}/{}/train'.format(bucket_name, prefix), content_type='csv')
###Output
_____no_output_____
###Markdown
6. Set up the Amazon SageMaker session, create an instance of the XGBoost model (an estimator), and define the model’s hyperparameters.
###Code
sess = sagemaker.Session()
xgb = sagemaker.estimator.Estimator(xgboost_container,role, instance_count=1, instance_type='ml.m4.xlarge',output_path='s3://{}/{}/output'.format(bucket_name, prefix),sagemaker_session=sess)
xgb.set_hyperparameters(max_depth=5,eta=0.2,gamma=4,min_child_weight=6,subsample=0.8,silent=0,objective='binary:logistic',num_round=100)
###Output
_____no_output_____
###Markdown
7. Start the training job. This code trains the model using gradient optimization on a ml.m4.xlarge instance. After a few minutes, you should see the training logs being generated in your Jupyter notebook.
###Code
xgb.fit({'train': s3_input_train})
###Output
2021-06-30 07:52:36 Starting - Starting the training job...
2021-06-30 07:52:59 Starting - Launching requested ML instancesProfilerReport-1625039555: InProgress
......
2021-06-30 07:54:00 Starting - Preparing the instances for training......
2021-06-30 07:55:00 Downloading - Downloading input data...
2021-06-30 07:55:31 Training - Training image download completed. Training in progress..[34mArguments: train[0m
[34m[2021-06-30:07:55:32:INFO] Running standalone xgboost training.[0m
[34m[2021-06-30:07:55:32:INFO] Path /opt/ml/input/data/validation does not exist![0m
[34m[2021-06-30:07:55:32:INFO] File size need to be processed in the node: 3.38mb. Available memory size in the node: 8392.67mb[0m
[34m[2021-06-30:07:55:32:INFO] Determined delimiter of CSV input is ','[0m
[34m[07:55:32] S3DistributionType set as FullyReplicated[0m
[34m[07:55:33] 28831x59 matrix with 1701029 entries loaded from /opt/ml/input/data/train?format=csv&label_column=0&delimiter=,[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[0]#011train-error:0.100482[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[1]#011train-error:0.099858[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[2]#011train-error:0.099754[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[3]#011train-error:0.099095[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[4]#011train-error:0.098991[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[5]#011train-error:0.099303[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[6]#011train-error:0.099684[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[7]#011train-error:0.09906[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[8]#011train-error:0.098852[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 36 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[9]#011train-error:0.098679[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[10]#011train-error:0.098748[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[11]#011train-error:0.098748[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[12]#011train-error:0.098748[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[13]#011train-error:0.09854[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[14]#011train-error:0.098574[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[15]#011train-error:0.098609[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[16]#011train-error:0.098817[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[17]#011train-error:0.098817[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[18]#011train-error:0.098679[0m
[34m[19]#011train-error:0.098679[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[20]#011train-error:0.098713[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[21]#011train-error:0.098505[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[22]#011train-error:0.098401[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[23]#011train-error:0.098332[0m
[34m[07:55:33] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 34 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[24]#011train-error:0.098332[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[25]#011train-error:0.09795[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[26]#011train-error:0.098262[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[27]#011train-error:0.098193[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 24 pruned nodes, max_depth=3[0m
[34m[28]#011train-error:0.097985[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[29]#011train-error:0.097499[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[30]#011train-error:0.097638[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[31]#011train-error:0.097395[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 28 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[32]#011train-error:0.097222[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[33]#011train-error:0.097118[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[34]#011train-error:0.097014[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[35]#011train-error:0.09684[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[36]#011train-error:0.096667[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[37]#011train-error:0.096736[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 30 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[38]#011train-error:0.096563[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[39]#011train-error:0.096355[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 36 pruned nodes, max_depth=3[0m
[34m[40]#011train-error:0.096285[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 12 extra nodes, 38 pruned nodes, max_depth=4[0m
[34m[41]#011train-error:0.096528[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 16 pruned nodes, max_depth=4[0m
[34m[42]#011train-error:0.096355[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 26 pruned nodes, max_depth=5[0m
[34m[43]#011train-error:0.096459[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 36 pruned nodes, max_depth=5[0m
[34m[44]#011train-error:0.096355[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 4 extra nodes, 34 pruned nodes, max_depth=2[0m
[34m[45]#011train-error:0.096216[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[46]#011train-error:0.096077[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[47]#011train-error:0.0958[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[48]#011train-error:0.095904[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[49]#011train-error:0.095904[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 34 pruned nodes, max_depth=4[0m
[34m[50]#011train-error:0.095834[0m
[34m[07:55:34] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[51]#011train-error:0.095765[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 8 pruned nodes, max_depth=5[0m
[34m[52]#011train-error:0.095904[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 6 pruned nodes, max_depth=5[0m
[34m[53]#011train-error:0.095834[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[54]#011train-error:0.095834[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[55]#011train-error:0.09573[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 30 pruned nodes, max_depth=5[0m
[34m[56]#011train-error:0.095626[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[57]#011train-error:0.095696[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 28 pruned nodes, max_depth=5[0m
[34m[58]#011train-error:0.095661[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 24 pruned nodes, max_depth=4[0m
[34m[59]#011train-error:0.095592[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 24 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[60]#011train-error:0.095522[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=4[0m
[34m[61]#011train-error:0.095383[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 20 pruned nodes, max_depth=4[0m
[34m[62]#011train-error:0.095314[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 26 pruned nodes, max_depth=5[0m
[34m[63]#011train-error:0.095661[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 32 pruned nodes, max_depth=4[0m
[34m[64]#011train-error:0.095661[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 32 extra nodes, 18 pruned nodes, max_depth=5[0m
[34m[65]#011train-error:0.095418[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 36 pruned nodes, max_depth=3[0m
[34m[66]#011train-error:0.095314[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[67]#011train-error:0.095349[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 6 extra nodes, 28 pruned nodes, max_depth=3[0m
[34m[68]#011train-error:0.095314[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 28 pruned nodes, max_depth=0[0m
[34m[69]#011train-error:0.095314[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 32 pruned nodes, max_depth=3[0m
[34m[70]#011train-error:0.095383[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[71]#011train-error:0.095453[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 12 pruned nodes, max_depth=5[0m
[34m[72]#011train-error:0.095349[0m
[34m[07:55:35] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 24 pruned nodes, max_depth=4[0m
[34m[73]#011train-error:0.095245[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[74]#011train-error:0.095175[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 18 pruned nodes, max_depth=4[0m
[34m[75]#011train-error:0.095071[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[76]#011train-error:0.095175[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 18 extra nodes, 30 pruned nodes, max_depth=5[0m
[34m[77]#011train-error:0.095002[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 20 pruned nodes, max_depth=4[0m
[34m[78]#011train-error:0.095037[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 32 pruned nodes, max_depth=5[0m
[34m[79]#011train-error:0.095037[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[80]#011train-error:0.095002[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[81]#011train-error:0.094794[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[82]#011train-error:0.094759[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[83]#011train-error:0.094933[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 20 extra nodes, 10 pruned nodes, max_depth=5[0m
[34m[84]#011train-error:0.09469[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 22 pruned nodes, max_depth=5[0m
[34m[85]#011train-error:0.094759[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 22 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[86]#011train-error:0.094482[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 16 pruned nodes, max_depth=5[0m
[34m[87]#011train-error:0.094447[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 20 pruned nodes, max_depth=0[0m
[34m[88]#011train-error:0.094482[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 24 pruned nodes, max_depth=5[0m
[34m[89]#011train-error:0.094378[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 28 pruned nodes, max_depth=0[0m
[34m[90]#011train-error:0.094343[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 8 extra nodes, 28 pruned nodes, max_depth=4[0m
[34m[91]#011train-error:0.094274[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[92]#011train-error:0.094239[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 16 extra nodes, 32 pruned nodes, max_depth=5[0m
[34m[93]#011train-error:0.094169[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 14 extra nodes, 28 pruned nodes, max_depth=5[0m
[34m[94]#011train-error:0.094169[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 10 extra nodes, 14 pruned nodes, max_depth=5[0m
[34m[95]#011train-error:0.094204[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 28 pruned nodes, max_depth=0[0m
[34m[96]#011train-error:0.094204[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 26 extra nodes, 20 pruned nodes, max_depth=5[0m
[34m[97]#011train-error:0.093927[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 38 pruned nodes, max_depth=0[0m
[34m[98]#011train-error:0.093927[0m
[34m[07:55:36] src/tree/updater_prune.cc:74: tree pruning end, 1 roots, 0 extra nodes, 32 pruned nodes, max_depth=0[0m
[34m[99]#011train-error:0.093892[0m
2021-06-30 07:56:00 Uploading - Uploading generated training model
2021-06-30 07:56:00 Completed - Training job completed
Training seconds: 52
Billable seconds: 52
###Markdown
8. Deploy the model.This code deploys the model on a server and creates a SageMaker endpoint that you can access. This step may take a few minutes to complete
###Code
xgb_predictor = xgb.deploy(initial_instance_count=1,instance_type='ml.m4.xlarge')
###Output
-------------!
###Markdown
9. To predict whether customers in the test data enrolled for the bank product or not, run the code below.
###Code
from sagemaker.serializers import CSVSerializer
test_data_array = test_data.drop(['y_no', 'y_yes'], axis=1).values #load the data into an array
xgb_predictor.serializer = CSVSerializer() # set the serializer type
predictions = xgb_predictor.predict(test_data_array).decode('utf-8') # predict!
predictions_array = np.fromstring(predictions[1:], sep=',') # and turn the prediction into an array
print(predictions_array.shape)
###Output
(12357,)
###Markdown
10. Evaluate model performance. Evaluate the performance and accuracy of the machine learning model.This code compares the actual vs. predicted values in a table called a confusion matrix.Based on the prediction below, the model identifies whether a customer will enroll for a certificate of deposit correctly for about 90% of customers in the test data, with a precision of roughly 63% (288/455) for enrolled and 90% (10,769/11,902) for didn’t enroll. A short worked check of these numbers follows.
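As a quick worked check (illustrative arithmetic only, using the counts from the confusion matrix printed below), precision for a class is the share of correct predictions among everything predicted as that class:
```python
# values taken from the confusion matrix output shown below
tp, fp = 288, 167        # predicted "purchase": correct, incorrect
tn, fn = 10769, 1133     # predicted "no purchase": correct, incorrect
print(tp / (tp + fp))                    # ~0.63 precision for "enrolled"
print(tn / (tn + fn))                    # ~0.90 precision for "didn't enroll"
print((tp + tn) / (tp + tn + fp + fn))   # ~0.895 overall classification rate
```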
###Code
cm = pd.crosstab(index=test_data['y_yes'], columns=np.round(predictions_array), rownames=['Observed'], colnames=['Predicted'])
tn = cm.iloc[0,0]; fn = cm.iloc[1,0]; tp = cm.iloc[1,1]; fp = cm.iloc[0,1]; p = (tp+tn)/(tp+tn+fp+fn)*100
print("\n{0:<20}{1:<4.1f}%\n".format("Overall Classification Rate: ", p))
print("{0:<15}{1:<15}{2:>8}".format("Predicted", "No Purchase", "Purchase"))
print("Observed")
print("{0:<15}{1:<2.0f}% ({2:<}){3:>6.0f}% ({4:<})".format("No Purchase", tn/(tn+fn)*100,tn, fp/(tp+fp)*100, fp))
print("{0:<16}{1:<1.0f}% ({2:<}){3:>7.0f}% ({4:<}) \n".format("Purchase", fn/(tn+fn)*100,fn, tp/(tp+fp)*100, tp))
###Output
Overall Classification Rate: 89.5%
Predicted No Purchase Purchase
Observed
No Purchase 90% (10769) 37% (167)
Purchase 10% (1133) 63% (288)
###Markdown
11. Clean up. In this step, you terminate the resources you used in this lab.Important: Terminating resources that are not actively being used reduces costs and is a best practice. Not terminating your resources will result in charges to your account. Delete your endpoint:
###Code
xgb_predictor.delete_endpoint(delete_endpoint_config=True)
###Output
_____no_output_____
###Markdown
12. Delete your training artifacts and S3 bucket
###Code
bucket_to_delete = boto3.resource('s3').Bucket(bucket_name)
bucket_to_delete.objects.all().delete()
###Output
_____no_output_____ |
covid_19_version_iii.ipynb | ###Markdown
Make sure to open in colab to see the plots!You might want to change the plot sizes; just ctrl+f for "figsize" and change them all (ex.: (20,4) to (10,2)) Imports
###Code
import numpy as np
import pandas as pd
pd.options.mode.chained_assignment = None # default='warn'
import datetime as dt
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
%matplotlib inline
!pip install mpld3
import mpld3
mpld3.enable_notebook()
from scipy.integrate import odeint
!pip install lmfit
import lmfit
from lmfit.lineshapes import gaussian, lorentzian
import warnings
warnings.filterwarnings('ignore')
###Output
Requirement already satisfied: mpld3 in /usr/local/lib/python3.6/dist-packages (0.3)
Requirement already satisfied: lmfit in /usr/local/lib/python3.6/dist-packages (1.0.1)
Requirement already satisfied: scipy>=1.2 in /usr/local/lib/python3.6/dist-packages (from lmfit) (1.4.1)
Requirement already satisfied: uncertainties>=3.0.1 in /usr/local/lib/python3.6/dist-packages (from lmfit) (3.1.2)
Requirement already satisfied: numpy>=1.16 in /usr/local/lib/python3.6/dist-packages (from lmfit) (1.18.4)
Requirement already satisfied: asteval>=0.9.16 in /usr/local/lib/python3.6/dist-packages (from lmfit) (0.9.18)
###Markdown
We want to fit the model's predicted death curve (defined in the Model section below) to the reported cumulative deaths. Supplemental and Coronavirus Data
###Code
# !! if you get a timeout-error, just click on the link and download the data manually !!
# read the data
beds = pd.read_csv("https://raw.githubusercontent.com/alamgirm/infectious_disease_modelling/master/data/beds.csv", header=0)
agegroups = pd.read_csv("https://raw.githubusercontent.com/hf2000510/infectious_disease_modelling/master/data/agegroups.csv")
probabilities = pd.read_csv("https://raw.githubusercontent.com/hf2000510/infectious_disease_modelling/master/data/probabilities.csv")
#covid_data = pd.read_csv("https://tinyurl.com/t59cgxn", parse_dates=["Date"], skiprows=[1])
#covid_data["Location"] = covid_data["Country/Region"]
# create some dicts for fast lookup
# 1. beds
beds_lookup = dict(zip(beds["Country"], beds["ICU_Beds"]))
# 2. agegroups
agegroup_lookup = dict(zip(agegroups['Location'], agegroups[['0_9', '10_19', '20_29', '30_39', '40_49', '50_59', '60_69', '70_79', '80_89', '90_100']].values))
# store the probabilities collected
prob_I_to_C_1 = list(probabilities.prob_I_to_ICU_1.values)
prob_I_to_C_2 = list(probabilities.prob_I_to_ICU_2.values)
prob_C_to_Death_1 = list(probabilities.prob_ICU_to_Death_1.values)
prob_C_to_Death_2 = list(probabilities.prob_ICU_to_Death_2.values)
###Output
_____no_output_____
###Markdown
Plotting
###Code
plt.gcf().subplots_adjust(bottom=0.15)
def plotter(t, S, E, I, C, R, D, R_0, B, S_1=None, S_2=None, x_ticks=None, isLog=False):
if S_1 is not None and S_2 is not None:
print(f"percentage going to ICU: {S_1*100}; percentage dying in ICU: {S_2 * 100}")
f, ax = plt.subplots(1,1,figsize=(20,6))
ax.xaxis.set_major_formatter(mdates.DateFormatter("%B-%d"))
ax.xaxis.set_minor_formatter(mdates.DateFormatter("%B-%d"))
if x_ticks is None:
ax.set_xlabel('Time (days)')
if isLog == True:
#ax.semilogy(t, S, 'b', alpha=0.7, linewidth=2, label='Susceptible')
ax.semilogy(t, E, 'y', alpha=0.7, linewidth=2, label='Exposed')
ax.semilogy(t, I, 'r', alpha=0.7, linewidth=2, label='Infected')
ax.semilogy(t, C, 'r--', alpha=0.7, linewidth=2, label='Critical')
ax.semilogy(t, R, 'g', alpha=0.7, linewidth=2, label='Recovered')
ax.semilogy(t, D, 'k', alpha=0.7, linewidth=2, label='Dead')
else:
#ax.plot(t, S, 'b', alpha=0.7, linewidth=2, label='Susceptible')
ax.plot(t, E, 'y', alpha=0.7, linewidth=2, label='Exposed')
ax.plot(t, I, 'r', alpha=0.7, linewidth=2, label='Infected')
ax.plot(t, C, 'r--', alpha=0.7, linewidth=2, label='Critical')
ax.plot(t, R, 'g', alpha=0.7, linewidth=2, label='Recovered')
ax.plot(t, D, 'k', alpha=0.7, linewidth=2, label='Dead')
else:
ax.set_xlabel('Date')
if isLog == True:
ax.semilogy(x_ticks, E, 'y', alpha=0.7, linewidth=2, label='Exposed')
ax.semilogy(x_ticks, I, 'r', alpha=0.7, linewidth=2, label='Infected')
ax.semilogy(x_ticks, C, 'r--', alpha=0.7, linewidth=2, label='Critical')
#ax.semilogy(x_ticks, R, 'g', alpha=0.7, linewidth=2, label='Recovered')
ax.semilogy(x_ticks, D, 'k', alpha=0.7, linewidth=2, label='Dead')
else:
#ax.plot(x_ticks, S, 'b', alpha=0.7, linewidth=2, label='Susceptible')
ax.plot(x_ticks, E, 'y', alpha=0.7, linewidth=2, label='Exposed')
ax.plot(x_ticks, I, 'r', alpha=0.7, linewidth=2, label='Infected')
ax.plot(x_ticks, C, 'r--', alpha=0.7, linewidth=2, label='Critical')
#ax.plot(x_ticks, R, 'g', alpha=0.7, linewidth=2, label='Recovered')
ax.plot(x_ticks, D, 'k', alpha=0.7, linewidth=2, label='Dead')
ax.title.set_text('extended SEIR-Model')
ax.yaxis.set_tick_params(length=0)
#ax.grid(b='True', which='minor', )
legend = ax.legend()
legend.get_frame().set_alpha(0.5)
for spine in ('top', 'right', 'bottom', 'left'):
ax.spines[spine].set_visible(False)
plt.minorticks_on()
plt.grid(b=True, which='minor', linestyle='dotted')
plt.show();
f = plt.figure(figsize=(20,6))
# sp1
ax1 = f.add_subplot(131)
if x_ticks is None:
ax1.plot(t, R_0, 'b--', alpha=0.7, linewidth=2, label='R_0')
else:
ax1.plot(x_ticks, R_0, 'b--', alpha=0.7, linewidth=2, label='R_0')
ax1.set_xlabel('Date')
ax1.xaxis.set_major_formatter(mdates.DateFormatter("%B-%d"))
ax1.xaxis.set_minor_formatter(mdates.DateFormatter("%B-%d"))
ax1.title.set_text('R_0 over time')
ax1.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax1.legend()
legend.get_frame().set_alpha(0.5)
for spine in ('top', 'right', 'bottom', 'left'):
ax.spines[spine].set_visible(False)
# sp2
ax2 = f.add_subplot(132)
total_CFR = [0] + [100 * D[i] / sum(sigma*E[:i]) if sum(sigma*E[:i])>0 else 0 for i in range(1, len(t))]
daily_CFR = [0] + [100 * ((D[i]-D[i-1]) / ((R[i]-R[i-1]) + (D[i]-D[i-1]))) if max((R[i]-R[i-1]), (D[i]-D[i-1]))>10 else 0 for i in range(1, len(t))]
if x_ticks is None:
ax2.plot(t, total_CFR, 'r--', alpha=0.7, linewidth=2, label='total')
ax2.plot(t, daily_CFR, 'b--', alpha=0.7, linewidth=2, label='daily')
else:
ax2.plot(x_ticks, total_CFR, 'r--', alpha=0.7, linewidth=2, label='total')
ax2.plot(x_ticks, daily_CFR, 'b--', alpha=0.7, linewidth=2, label='daily')
ax2.set_xlabel('Date')
ax2.xaxis.set_major_formatter(mdates.DateFormatter("%B-%d"))
ax2.xaxis.set_minor_formatter(mdates.DateFormatter("%B-%d"))
ax2.title.set_text('Fatality Rate (%)')
ax2.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax2.legend()
legend.get_frame().set_alpha(0.5)
for spine in ('top', 'right', 'bottom', 'left'):
ax.spines[spine].set_visible(False)
# sp3
ax3 = f.add_subplot(133)
newDs = [0] + [D[i]-D[i-1] for i in range(1, len(t))]
if x_ticks is None:
ax3.plot(t, newDs, 'r--', alpha=0.7, linewidth=2, label='total')
ax3.plot(t, [max(0, C[i]-B(i)) for i in range(len(t))], 'b--', alpha=0.7, linewidth=2, label="over capacity")
else:
ax3.plot(x_ticks, newDs, 'r--', alpha=0.7, linewidth=2, label='total')
ax3.plot(x_ticks, [max(0, C[i]-B(i)) for i in range(len(t))], 'b--', alpha=0.7, linewidth=2, label="over capacity")
ax3.set_xlabel('Date')
ax3.xaxis.set_major_formatter(mdates.DateFormatter("%B-%d"))
ax3.xaxis.set_minor_formatter(mdates.DateFormatter("%B-%d"))
ax3.title.set_text('Deaths per day')
#ax3.yaxis.set_tick_params(length=0)
#ax3.xaxis.set_tick_params(length=0)
ax3.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax3.legend()
legend.get_frame().set_alpha(0.5)
for spine in ('top', 'right', 'bottom', 'left'):
ax.spines[spine].set_visible(False)
plt.show();
###Output
_____no_output_____
###Markdown
Model
###Code
def deriv(y, t, beta, gamma, sigma, N, p_I_to_C, p_C_to_D, Beds):
S, E, I, C, R, D = y
dSdt = -beta(t) * I * S / N
dEdt = beta(t) * I * S / N - sigma * E
dIdt = sigma * E - 1/12.0 * p_I_to_C * I - gamma * (1 - p_I_to_C) * I
dCdt = 1/12.0 * p_I_to_C * I - 1/7.5 * p_C_to_D * min(Beds(t), C) - max(0, C-Beds(t)) - (1 - p_C_to_D) * 1/6.5 * min(Beds(t), C)
dRdt = gamma * (1 - p_I_to_C) * I + (1 - p_C_to_D) * 1/6.5 * min(Beds(t), C)
dDdt = 1/7.5 * p_C_to_D * min(Beds(t), C) + max(0, C-Beds(t))
return dSdt, dEdt, dIdt, dCdt, dRdt, dDdt
gamma = 1.0/9.0
sigma = 1.0/3.0
def logistic_R_0(t, R_0_start, k, x0, R_0_end):
return (R_0_start-R_0_end) / (1 + np.exp(-k*(-t+x0))) + R_0_end
def Model(days, agegroups, beds_per_100k, R_0_start, k, x0, R_0_end, prob_I_to_C, prob_C_to_D, s):
def beta(t):
return logistic_R_0(t, R_0_start, k, x0, R_0_end) * gamma
N = sum(agegroups)
def Beds(t):
beds_0 = beds_per_100k / 100_000 * N
return beds_0 + s*beds_0*t # 0.003
y0 = N-1.0, 1.0, 0.0, 0.0, 0.0, 0.0
t = np.linspace(0, days-1, days)
ret = odeint(deriv, y0, t, args=(beta, gamma, sigma, N, prob_I_to_C, prob_C_to_D, Beds))
S, E, I, C, R, D = ret.T
R_0_over_time = [beta(i)/gamma for i in range(len(t))]
#R_0_over_time = [3.5 for i in range(len(t))]
return t, S, E, I, C, R, D, R_0_over_time, Beds, prob_I_to_C, prob_C_to_D
###Output
_____no_output_____
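###Markdown
Before fitting, the model can be sanity-checked with hand-picked parameter values; the numbers below are purely illustrative assumptions, not fitted estimates. The run also shows the declining logistic R_0 curve defined above.
###Code
# Illustrative sanity check with hypothetical parameters (not fitted values).
# Model returns t, S, E, I, C, R, D, R_0 over time, the Beds function and the
# two probabilities, which is exactly what plotter expects.
plotter(*Model(days=180,
               agegroups=agegroup_lookup["Bangladesh"],
               beds_per_100k=beds_lookup["Bangladesh"],
               R_0_start=3.0, k=0.5, x0=40, R_0_end=1.0,
               prob_I_to_C=0.05, prob_C_to_D=0.5, s=0.003))
###Output
_____no_output_____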
###Markdown
Fitting
###Code
# parameters
file_url="https://docs.google.com/spreadsheets/u/0/d/1742jLWWYbjFdNn2IcPGzHM6UCNnuLrWq9b4xbBzfP_M/export?format=csv"
df = pd.read_csv(file_url)
bddata = df["DeathCum"]
#data = covid_data[covid_data["Location"] == "Bangladesh"]["Value"].values[::-1]
data = df.iloc[:,8].values
agegroups = agegroup_lookup["Bangladesh"]
beds_per_100k = beds_lookup["Bangladesh"]
# most sensitive parameter now
# actual date of first infection - first reporting
# 30 means the infection started 30 days prior to the first reported case
# fit by visual trial and error
outbreak_shift = 20
params_init_min_max = {"R_0_start": (3.0, 1.0, 5.0), "k": (1.1, 0.01, 5.0), "x0": (50, 0, 150), "R_0_end": (0.9, 0.3, 4.5),
"prob_I_to_C": (0.05, 0.01, 0.1), "prob_C_to_D": (0.5, 0.05, 0.8),
"s": (0.003, 0.001, 0.01)} # form: {parameter: (initial guess, minimum value, max value)}
days = outbreak_shift + len(data)
if outbreak_shift >= 0:
y_data = np.concatenate((np.zeros(outbreak_shift), data))
else:
    y_data = data[-outbreak_shift:]  # drop the first -outbreak_shift data points
x_data = np.linspace(0, days - 1, days, dtype=int) # x_data is just [0, 1, ..., max_days] array
def fitter(x, R_0_start, k, x0, R_0_end, prob_I_to_C, prob_C_to_D, s):
ret = Model(days, agegroups, beds_per_100k, R_0_start, k, x0, R_0_end, prob_I_to_C, prob_C_to_D, s)
return ret[6][x]
mod = lmfit.Model(fitter)
for kwarg, (init, mini, maxi) in params_init_min_max.items():
mod.set_param_hint(str(kwarg), value=init, min=mini, max=maxi, vary=True)
params = mod.make_params()
fit_method = "least_squares"
result = mod.fit(y_data, params, method=fit_method, x=x_data)
result.plot_fit(datafmt="-");
result.best_values
full_days = 180
first_date = np.datetime64(dt.datetime.strptime(df.iloc[:,0].values.min(),"%m/%d/%Y")) - np.timedelta64(outbreak_shift,'D')
x_ticks = pd.date_range(start=first_date, periods=full_days, freq="D")
base = dt.datetime(2020,3,8)
xticks = [base + dt.timedelta(days=x) for x in range(0,len(x_ticks))]
print("Prediction for Bangladesh")
plotter(*Model(full_days, agegroup_lookup["Bangladesh"], beds_lookup["Bangladesh"], **result.best_values), x_ticks=xticks, isLog=False);
from datetime import datetime
start_date = datetime.strptime(df.iloc[:,0].values.min(),"%m/%d/%Y")
start_date
###Output
_____no_output_____ |
notebooks/en-gb/Communication - Send mails.ipynb | ###Markdown
Requirement: For sending mail you need an outgoing mail server (one that, in the case of this script, also allows unauthenticated outgoing communication). Fill out the required credentials in the following variables:
###Code
MAIL_SERVER = "mail.****.com"
FROM_ADDRESS = "noreply@****.com"
TO_ADDRESS = "my_friend@****.com"
###Output
_____no_output_____
###Markdown
Sending a mail is, with the proper library, a piece of cake...
###Code
from sender import Mail
mail = Mail(MAIL_SERVER)
mail.fromaddr = ("Secret admirer", FROM_ADDRESS)
mail.send_message("Raspberry Pi has a soft spot for you", to=TO_ADDRESS, body="Hi sweety! Grab a smoothie?")
###Output
_____no_output_____
###Markdown
... but if we take it a little further, we can connect our doorbell project to the sending of mail! APPKEY is the Application Key for a (free) http://www.realtime.co/ "Realtime Messaging Free" subscription. See "[104 - Remote door bell - Using a cloud API to send messages](104%20-%20Remote%20door%20bell%20-%20Using%20a%20cloud%20API%20to%20send%20messages.ipynb)" for more detailed info.
###Code
APPKEY = "******"
mail.fromaddr = ("Your doorbell", FROM_ADDRESS)
mail_to_addresses = {
"Donald Duck":"dd@****.com",
"Maleficent":"mf@****.com",
"BigBadWolf":"bw@****.com"
}
def on_message(sender, channel, message):
mail_message = "{}: Call for {}".format(channel, message)
print(mail_message)
mail.send_message("Raspberry Pi alert!", to=mail_to_addresses[message], body=mail_message)
import ortc
oc = ortc.OrtcClient()
oc.cluster_url = "http://ortc-developers.realtime.co/server/2.1"
def on_connected(sender):
print('Connected')
oc.subscribe('doorbell', True, on_message)
oc.set_on_connected_callback(on_connected)
oc.connect(APPKEY)
###Output
_____no_output_____ |
Starbucks_Capstone_Challenge/Starbucks_Capstone_notebook.ipynb | ###Markdown
Project Description: Mining Starbucks customer data - predicting offer success**[BLOGPOST](https://gonzalo-munillag.medium.com/starbucks-challenge-accepted-ded225a0867)** Table of Contents 1. [Introduction and motivation](Introduction_and_motivation) 2. [Installation](Installation) 3. [Files in the repository](files) 4. [Results](Results) 5. [Details](Details) 6. [Data Sets](Data) Introduction and motivation This project aims to answer a set of questions based on the provided datasets from Starbucks: transactions, customer profiles and offer types. The main question we will ask, and around which the whole project revolves, is: What is the likelihood that a customer will respond to a certain offer? Other questions to be answered are: About the offers: - Which one is the longest offer duration? - Which one is the most rewarding offer? About the customers: - What is the gender distribution? - How are different genders distributed with respect to income? - How are different genders distributed with respect to age? - What is the distribution of new memberships along time? About the transactions: - Which offers are preferred according to gender? - Which offers are preferred according to income? - Which offers are preferred according to age? - Which offers are preferred according to date of becoming a member? - Which are the most successful offers? - Which are the most profitable offers? - Which are the most profitable offers among the informational ones? - How much money was earned in total with offers vs. without offers? **The motivation is to improve targeting of offers to Starbucks' customers to increase revenue.** **Process and results are presented in this [blogpost](https://gonzalo-munillag.medium.com/starbucks-challenge-accepted-ded225a0867).** We will follow the [CRISP-DM](https://en.wikipedia.org/wiki/Cross-industry_standard_process_for_data_mining) data science process standard for accomplishing the data analysis at hand. Installation **Packages needed** Wrangling and cleansing: pandas, json, pickle. Math: numpy, math, scipy. Visualization: matplotlib, IPython. Progress bar: time, progressbar. ML: sklearn. Files in the repository 1. data folder: 1.1 portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.) 1.2 profile.json - demographic data for each customer 1.3 transcript.json - records for transactions, offers received, offers viewed, and offers completed 2. Starbucks_Capstone_notebook.ipynb: contains an executable Python notebook for you to execute and modify as you wish. 3. Starbucks_Capstone_notebook.html: if you are not interested in extending or executing the code yourself, you may open this file and read through the analysis. 4. Other pickle files saving the datasets and models. Results The best model to predict if an offer will be successful is Gradient Boosting. However, 70% is not such a high accuracy, though better than a human guess. Grid search did not show much improvement, so further tuning should be carried out. We saw that the learning rate went from 0.1 to 0.5, while the rest of the parameters stayed the same. The next logical step would be to try a learning rate of 0.75 (as 1 was not chosen) and to vary other parameters. Details This data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). 
Some users might not receive any offer during certain weeks. Not all users receive the same offer, and that is the challenge to solve with this data set.Your task is to combine transaction, demographic and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app because the underlying simulator only has one product whereas Starbucks actually sells dozens of products.Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You'll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement.You'll be given transactional data showing user purchases made on the app including the timestamp of purchase and the amount of money spent on a purchase. This transactional data also has a record for each offer that a user receives as well as a record for when a user actually views the offer. There are also records for when a user completes an offer. Keep in mind as well that someone using the app might make a purchase through the app without having received an offer or seen an offer. ExampleTo give an example, a user could receive a discount offer buy 10 dollars get 2 off on Monday. The offer is valid for 10 days from receipt. If the customer accumulates at least 10 dollars in purchases during the validity period, the customer completes the offer.However, there are a few things to watch out for in this data set. Customers do not opt into the offers that they receive; in other words, a user can receive an offer, never actually view the offer, and still complete the offer. For example, a user might receive the "buy 10 dollars get 2 dollars off offer", but the user never opens the offer during the 10 day validity period. The customer spends 15 dollars during those ten days. There will be an offer completion record in the data set; however, the customer was not influenced by the offer because the customer never viewed the offer. CleaningThis makes data cleaning especially important and tricky.You'll also want to take into account that some demographic groups will make purchases even if they don't receive an offer. From a business perspective, if a customer is going to make a 10 dollar purchase without an offer anyway, you wouldn't want to send a buy 10 dollars get 2 dollars off offer. You'll want to try to assess what a certain demographic group will buy when not receiving any offers. AdviceBecause this is a capstone project, you are free to analyze the data any way you see fit. For example, you could build a machine learning model that predicts how much someone will spend based on demographics and offer type. Or you could build a model that predicts whether or not someone will respond to an offer. Or, you don't need to build a machine learning model at all. You could develop a set of heuristics that determine what offer you should send to each customer (i.e., 75 percent of women customers who were 35 years old responded to offer A vs 40 percent from the same demographic to offer B, so send offer A). Data Sets This data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. 
An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offer during certain weeks. The data is contained in three files: portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.); profile.json - demographic data for each customer; transcript.json - records for transactions, offers received, offers viewed, and offers completed. Here is the schema and explanation of each variable in the files: Mining Starbucks customer data - predicting purchasing likelihood We will follow the [CRISP-DM](https://en.wikipedia.org/wiki/Cross-industry_standard_process_for_data_mining) data science process standard for accomplishing the data analysis at hand. 1. Business Understanding The motivation is to improve targeting of offers to Starbucks customers to increase revenue. Our goal therefore is to find a relationship between customers and Starbucks offers based on the customers' purchasing patterns. Thus, we need to understand the customers included in the datasets, identify groups within them, and assign the best matching offers to these groups. Therefore, the main question we should answer is: **What is the likelihood that a customer will respond to a certain offer?** Having a model that predicts a customer's behavior will accomplish the goal of this project. During the data exploration, other questions related to customers and offers will be formulated, as our understanding of the data increases. 2. Data Understanding The goal of data understanding is to have an overview of what is in the datasets and to already filter the data we need to answer the main question. After we filter the needed data, we will proceed to wrangle and clean the data to make modelling possible. After wrangling and cleaning, we will explore the data further to extract additional questions we could answer based on its new form. Metrics We first need to define a set of metrics to be able to assess whether an offer suits a particular customer (assessing whether we answered the question correctly with our model). We have a classification problem (customer-offer) and data to train a model. Thus, we will use supervised learning models and evaluate them with: 1. **Accuracy** (number of correct predictions divided by the total number of predictions), 2. **F-Score** with beta=0.5, F_beta = (1+beta^2)*Precision*Recall / (beta^2*Precision + Recall), where precision = True_Positive / (True_Positive + False_Positive) and recall = True_Positive / (True_Positive + False_Negative). The data seems balanced, but nonetheless the F-score might come in handy to choose between top models in case accuracies are similar. Definitions from this [Medium blogpost](https://towardsdatascience.com/20-popular-machine-learning-metrics-part-1-classification-regression-evaluation-metrics-1ca3e282a2ce). And for a better understanding of precision and recall, [wikipedia](https://en.wikipedia.org/wiki/Precision_and_recall) does the job great! Imports for data handling
###Code
# Data cleaning and wrangling
import pandas as pd
import json
import pickle
# Math
import numpy as np
import math
from scipy.stats import wasserstein_distance
# Visualization
import matplotlib.pyplot as plt
from IPython.display import display, Math, Latex
%matplotlib inline
%pylab inline
# progress bar
import progressbar
from time import sleep
from time import time
# ML
from sklearn.model_selection import train_test_split
# disable warning for mapping
pd.options.mode.chained_assignment = None # default='warn'
###Output
Populating the interactive namespace from numpy and matplotlib
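###Markdown
As a quick reference for the two metrics defined above, here is a minimal sketch using scikit-learn (already a project dependency); the y_true and y_pred arrays are hypothetical placeholders, not project data:
###Code
# Minimal illustration of the chosen metrics (accuracy and F-beta with beta=0.5)
# on hypothetical labels.
from sklearn.metrics import accuracy_score, fbeta_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F-0.5 score:", fbeta_score(y_true, y_pred, beta=0.5))
###Output
_____no_output_____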
###Markdown
DataSetsThe data is contained in three files:* portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.)* profile.json - demographic data for each customer* transcript.json - records for transactions, offers received, offers viewed, and offers completed Offer types: portfolio.json* id (string) - offer id* offer_type (string) - type of offer ie BOGO, discount, informational* difficulty (int) - minimum required spend to complete an offer* reward (int) - reward given for completing an offer* duration (int) - time for offer to be open, in days* channels (list of strings)
###Code
# read in the json files
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
portfolio
print('Size:', portfolio.size, 'Shape:', portfolio.shape)
print('\n')
print('Portfolio Information')
print(portfolio.info())
print('\n')
print('Null values [%]')
print(portfolio.isnull().sum()/portfolio.shape[0])
print('\n')
print('Duplicated values')
# We first need toconvert the channels attribute temporaly to a string insead of a list of strings. Otherwsie "unhashable"
df_temp = portfolio
df_temp = df_temp.astype({"channels": str})
print(df_temp.duplicated().sum())
print('\n')
print('Portfolio Description')
print(portfolio.describe())
print('\n')
print('Skewness: ', portfolio.skew())
###Output
Size: 60 Shape: (10, 6)
Portfolio Information
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 10 entries, 0 to 9
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 reward 10 non-null int64
1 channels 10 non-null object
2 difficulty 10 non-null int64
3 duration 10 non-null int64
4 offer_type 10 non-null object
5 id 10 non-null object
dtypes: int64(3), object(3)
memory usage: 608.0+ bytes
None
Null values [%]
reward 0.0
channels 0.0
difficulty 0.0
duration 0.0
offer_type 0.0
id 0.0
dtype: float64
Duplicated values
0
Portfolio Description
reward difficulty duration
count 10.000000 10.000000 10.000000
mean 4.200000 7.700000 6.500000
std 3.583915 5.831905 2.321398
min 0.000000 0.000000 3.000000
25% 2.000000 5.000000 5.000000
50% 4.000000 8.500000 7.000000
75% 5.000000 10.000000 7.000000
max 10.000000 20.000000 10.000000
Skewness: reward 0.665459
difficulty 0.669945
duration 0.233151
dtype: float64
###Markdown
1. There are 10 types of offers for one product (as specified in the description) and they are characterized by 6 attributes. 2. They are a mixture of integers (3), strings (2) and arrays of strings (1). 3. There are no null values. 4. The offers have an average reward of 4, a duration of 6.5 days and a difficulty of 7.7. 5. The domain of the integer attributes is small (0-20). 6. The median (50% percentile) is not too far from the mean, thus the integer columns should be somewhat balanced. 7. Cross-checking with the skewness, we see that duration is balanced, while reward and difficulty are somewhat imbalanced, but not extremely so. 8. It is a very small dataset in terms of bytes. 9. We clearly see the types of categories that channels and offer_type have. 10. There are no duplicated values. To answer the proposed question, we need every data point of this dataset, as it characterizes all types of offers. Customer demographics: profile.json* age (int) - age of the customer * became_member_on (int) - date when customer created an app account* gender (str) - gender of the customer (note some entries contain 'O' for other rather than M or F)* id (str) - customer id* income (float) - customer's income
###Code
# read in the json files
profile = pd.read_json('data/profile.json', orient='records', lines=True)
profile.head()
print('Size:', profile.size, 'Shape:', profile.shape)
print('\n')
print('Portfolio Information')
print(profile.info())
print('\n')
print('Null values [%]')
print(profile.isnull().sum()/profile.shape[0])
print('\n')
print('Duplicated values')
print(profile.duplicated().sum())
print('\n')
print('Portfolio Description')
print(profile.describe())
print('\n')
print('Skewness: ', profile.skew())
###Output
Size: 85000 Shape: (17000, 5)
Portfolio Information
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 17000 entries, 0 to 16999
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 gender 14825 non-null object
1 age 17000 non-null int64
2 id 17000 non-null object
3 became_member_on 17000 non-null int64
4 income 14825 non-null float64
dtypes: float64(1), int64(2), object(2)
memory usage: 664.2+ KB
None
Null values [%]
gender 0.127941
age 0.000000
id 0.000000
became_member_on 0.000000
income 0.127941
dtype: float64
Duplicated values
0
Portfolio Description
age became_member_on income
count 17000.000000 1.700000e+04 14825.000000
mean 62.531412 2.016703e+07 65404.991568
std 26.738580 1.167750e+04 21598.299410
min 18.000000 2.013073e+07 30000.000000
25% 45.000000 2.016053e+07 49000.000000
50% 58.000000 2.017080e+07 64000.000000
75% 73.000000 2.017123e+07 80000.000000
max 118.000000 2.018073e+07 120000.000000
Skewness: age 0.761858
became_member_on -0.900457
income 0.402005
dtype: float64
###Markdown
1. There are 17000 customer records (one per customer) and there are 5 attributes to characterize each. 2. They are a mixture of numeric values (2 ints and a float) and strings (2). 3. There are some null values in gender and income. The number is the same, so most probably they are paired in the same records. ~13% is not a considerable share, but nonetheless we will consider them as part of the analysis and see if this group of people who do not share their gender has a particular preference for a type of offer. It is also interesting to see that these records apparently have an age of 118, therefore something went wrong during collection. 4. The average salary is about $65,405. 5. The domain of the integer attributes is reasonable, being the largest for the income column. 6. The median (50% percentile) is not too far from the mean, thus the integer columns should be somewhat balanced. 7. Cross-checking with the skewness, we see that income is balanced, while age and became_member_on are somewhat imbalanced, but not extremely so. 8. It is a relatively small dataset in terms of bytes. 9. No duplicated values. As noted for the previous dataset, it is clear that we will need all these data to cluster customers and find their best fitting offer matches. Transactions: transcript.json* event (str) - record description (ie transaction, offer received, offer viewed, etc.)* person (str) - customer id* time (int) - time in hours since start of test. The data begins at time t=0* value - (dict of strings) - either an offer id or transaction amount depending on the record
###Code
# read in the json files
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
transcript.head()
print('Size:', transcript.size, 'Shape:', transcript.shape)
print('\n')
print('Portfolio Information')
print(transcript.info())
print('\n')
print('Null values [%]')
print(transcript.isnull().sum()/transcript.shape[0])
print('\n')
print('Duplicated values')
print(profile.duplicated().sum())
print('\n')
print('Portfolio Description')
print(transcript.describe())
print('\n')
print('Skewness: ', transcript.skew())
###Output
Size: 1226136 Shape: (306534, 4)
Portfolio Information
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 306534 entries, 0 to 306533
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 person 306534 non-null object
1 event 306534 non-null object
2 value 306534 non-null object
3 time 306534 non-null int64
dtypes: int64(1), object(3)
memory usage: 9.4+ MB
None
Null values [%]
person 0.0
event 0.0
value 0.0
time 0.0
dtype: float64
Duplicated values
0
Portfolio Description
time
count 306534.000000
mean 366.382940
std 200.326314
min 0.000000
25% 186.000000
50% 408.000000
75% 528.000000
max 714.000000
Skewness: time -0.318927
dtype: float64
###Markdown
1. There are 306534 event records in the dataset (the reported size of 1226136 counts cells, i.e. records times the 4 columns). 2. The attributes are strings, plus one int. 3. There are no null values. 4. The domain for time is around 30 days, which is larger than the longest offer duration of 10 days; this indicates that we are also measuring purchases past the offer time. 5. The time attribute is balanced. 6. It is a medium-size dataset in terms of bytes, much larger than the rest of the datasets. 7. No duplicated values. Let us check the category types of event, as they are not completely described in the project description.
###Code
transcript['event'].unique()
###Output
_____no_output_____
###Markdown
The chronological order therefore would be: 1. offer received 2. offer viewed 3. transaction 4. offer completed. With respect to 'value', we will need to clean the data to extract more information. But the amount is most probably attached when the event is 'transaction', and an offer 'id' otherwise. As with the rest of the datasets, we need all information from this one as well in order to answer the main question. Overall Observations Given that we need all data points to answer our question (there is no attribute that we could delete without further knowledge), and that everything has the potential to be correlated, we will merge the 3 datasets into one after wrangling and cleaning. 3. Data Preparation We will follow [Tidy Data from Hadley Wickham](https://vita.had.co.nz/papers/tidy-data.pdf) to prepare the datasets. We will visualize each dataset further and come up with further questions that we could also answer. We will also indicate the data we need to answer each. There will be an exploration section after wrangling and cleansing. PORTFOLIO
###Code
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
portfolio
###Output
_____no_output_____
###Markdown
Data wrangling 1. reward: no change 2. channels: create 4 new columns with binary values for the training of the model 3. difficulty: no change 4. duration: change to hours to be in the same units as the other dataset 5. offer_type: create 3 new columns with binary values for the training of the model 6. id: convert it into an increasing integer ID for easier representation later. Common to all: for better understandability, we will rename the attributes so that we know the units of each column and can link the datasets as we already surmised. Rename columns
###Code
# Renaming columns
portfolio.columns = ['reward_[$]', 'channels', 'difficulty_[$]', 'duration_[h]', 'offer_type', 'offer_id']
###Output
_____no_output_____
###Markdown
convert to hours the duration column
###Code
portfolio['duration_[h]'] = portfolio['duration_[h]'].apply(lambda x: x * 24)
###Output
_____no_output_____
###Markdown
Rename the offer ids
###Code
# We create a dictionary with offer ids we will need to save for later use
offer_id_dict = portfolio['offer_id'].to_dict()
# We invert the key-value pairs so the hash is in the key position
offer_id_dict = {v: k for k, v in offer_id_dict.items()}
# We save it as a pickle file
with open('offer_id_dict.pickle', 'wb') as handle:
pickle.dump(offer_id_dict, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Now we convert the column: https://stackoverflow.com/questions/20250771/remap-values-in-pandas-column-with-a-dict
portfolio = portfolio.replace({"offer_id": offer_id_dict})
###Output
_____no_output_____
###Markdown
Create new columns based on offer_type
###Code
# We save the dummy column of offer type (convinient for later analysis)
offer_type_dummy = portfolio['offer_type']
# Get dummies: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html
portfolio = pd.get_dummies(portfolio, prefix=['offer_type'], columns=['offer_type'])
# We concat the dummy column
portfolio = pd.concat([portfolio, offer_type_dummy], axis=1)
###Output
_____no_output_____
###Markdown
Create new columns based on channels
###Code
# Get the dummy variables: https://stackoverflow.com/questions/29034928/pandas-convert-a-column-of-list-to-dummies
channel_dummies = pd.get_dummies(portfolio['channels'].apply(pd.Series).stack(), prefix='channel').sum(level=0)
# We drop the old column and concat the new one
portfolio.drop(columns=['channels'], inplace=True)
# We concat the new set of columns
portfolio = pd.concat([portfolio, channel_dummies], axis=1)
###Output
_____no_output_____
###Markdown
Data cleansing Undesirable value detection: 1. Missing values: No 3. Duplicates: No 4. Incorrect values: No. We trust Starbucks that the offer portfolio is correct, as there is no way for us to verify it. 5. Irrelevant: Each row is relevant because it belongs to a distinct offer we will have to match with customers. The dataset is not large, and we do not need PCA to see that the channel_email column does not explain any variability (all its values are the same), so we can drop it. Measures: 1. Replace: No 2. Modify: No 3. Delete: channel_email Irrelevancy
###Code
# We delete the email channel column
portfolio.drop(columns=['channel_email'], inplace=True)
###Output
_____no_output_____
###Markdown
We save the dataframe as a pickle file
###Code
portfolio.to_pickle("./portfolio.pkl")
###Output
_____no_output_____
###Markdown
Data exploration
###Code
portfolio
###Output
_____no_output_____
###Markdown
We now have tidy data: each record is an observation, we only have one observation type (offer), the values are in the cells and all variable names are in the columns (except for the column offer_type, which is left there for visualization purposes later; it will be deleted). This is the portfolio dataset by itself, so the questions we could ask are solely related to the types of offers Starbucks has. It is not so interesting given the small size of the dataset; looking at the layout, one can already get a feeling for the data. There are 4 bogo offer types, 4 discount types and 2 informational. There is only one offer that uses one channel and it is of discount type. The rest use at least 2 channels. Let us set a few further questions: 1. Which one is the longest offer duration? 2. Which one is the most rewarding offer? (There is one more interesting question, but we cannot answer it yet; see below.) Which one is the longest offer duration?
###Code
max_duration_offers = portfolio.loc[portfolio['duration_[h]'] == portfolio['duration_[h]'].max()].index
print('The maximum duration offers are', portfolio.iloc[max_duration_offers]['offer_id'].iloc[0], 'and', \
portfolio.iloc[max_duration_offers]['offer_id'].iloc[1])
print('With a duration of:', portfolio['duration_[h]'].max(), 'h')
###Output
The maximum duration offers are 4 and 6
With a duration of: 240 h
###Markdown
Which one is the most rewarding offer?
###Code
max_reward_offers = portfolio.loc[portfolio['reward_[$]'] == portfolio['reward_[$]'].max()].index
print('The most rewarding offers are', portfolio.iloc[max_reward_offers]['offer_id'].iloc[0], 'and', \
portfolio.iloc[max_reward_offers]['offer_id'].iloc[1])
print('With a reward of:', portfolio['reward_[$]'].max(), '$')
###Output
The most rewarding offers are 0 and 1
With a reward of: 10 $
###Markdown
However, a question that is interesting but that we cannot yet answer with this data (we need to use the other datasets as well), is what are the features of an offer that explain better which offer aligns better with a user. PROFILE
###Code
# read in the json files
profile = pd.read_json('data/profile.json', orient='records', lines=True)
profile.head()
###Output
_____no_output_____
###Markdown
Data wrangling 1. age: No changes 2. became_member_on: transform into date and time3. gender: Create dummy variables with M, F, O, and missing_gender. We keep the missing values because they do not seem random, as income has the same number of missing values4. id: transform it into an easier id to read5. income: No changesWe will change the column names to add the units and proper names. Understanding NaN records
###Code
# Let us verify that the missing values of gender pair with the ones from income
(profile['gender'].isnull() == profile['income'].isnull()).value_counts()
###Output
_____no_output_____
###Markdown
There are no False values, therefore they match. Now let us check what types of values in age and became_member_on these outliers have.
###Code
print('Here is the age from the discussed outliers')
print(profile[profile['gender'].isnull()]['age'].unique())
print('Here is the number of unique became_member_on values of the discussed outliers')
print(profile[profile['gender'].isnull()]['became_member_on'].nunique())
# Let us check how many unique values does age have
print('Age unique values', profile['age'].nunique())
# Let us check what is the maximum age of the records that do not have missing values
print('Maximum age of not outliers', profile[~profile['gender'].isnull()]['age'].max())
###Output
Age unique values 85
Maximum age of not outliers 101
###Markdown
From these data we understand that the age of the outliers (records of missing valzes) is in itself an outlier (duh!). Because the oldest person is 101. In data cleansing, we will replace 118 by the average age of the customers as age is not too skewed. If we leave it at 118, these records would be weighted higher, which is not desirable. We will do the same for the income, we will substitute the nan values with the average income, as that column was not too skewed as well. What is important for data wrangling (and this overlaps with cleansing) is that we will replace the missing values with a gender sampled from a distribution equal to the distribution of the non_missing values. This way we do not skew the column. Additionally, we will add another column called 'missing_gender', to add another layer of information about these outliers.
###Code
# Let us check the number of became_member_on unique values
profile['became_member_on'].nunique()
###Output
_____no_output_____
###Markdown
There are 950 unique values of became_member_on in the outlier records, which is more than half of the total unique values in the dataset. Considering that the outliers constitute about 12% of the dataset, this diversity hints that these values are true. Change column names
###Code
profile.columns = ['gender', 'age', 'customer_id', 'became_member_on_[y-m-d]', 'income_[$]']
###Output
_____no_output_____
###Markdown
Transform become_member_on date and time
###Code
profile['became_member_on_[y-m-d]'] = pd.to_datetime(profile['became_member_on_[y-m-d]'], format='%Y%m%d')
###Output
_____no_output_____
###Markdown
Transform id into an easier id form
###Code
# We create a dictionary with offer ids we will need to save for later use
customer_id_dict = profile['customer_id'].to_dict()
# We invert the key-value pairs so the hash is in the key position
customer_id_dict = {v: k for k, v in customer_id_dict.items()}
# We save it as a pickle file
with open('customer_id_dict.pickle', 'wb') as handle:
pickle.dump(customer_id_dict, handle, protocol=pickle.HIGHEST_PROTOCOL)
# Now we convert the column: https://stackoverflow.com/questions/20250771/remap-values-in-pandas-column-with-a-dict
profile = profile.replace({"customer_id": customer_id_dict})
###Output
_____no_output_____
###Markdown
Gender wrangling First we create a new binary column indicating if the record had missing values
###Code
# first we identify which type of missing values has gender
# We know the first value of gender is missing
print(type(profile['gender'].iloc[0]))
# Thus, we know we have to check with None
print(profile['gender'].iloc[0] is None)
print(profile['gender'].iloc[0] is np.nan)
profile['missing_values'] = profile['gender'].apply(lambda x: 1 if x is None else 0)
###Output
_____no_output_____
###Markdown
Second, we assign M,F or O to the missing values according to the underlying distribution of gender
###Code
# We also get the frequencies and categories. It is a nice trick because value_counts does not consider None values
number_None_values = profile['gender'].isnull().sum()
total_gender_counts = profile.shape[0] - number_None_values
frecuencies_gender = profile['gender'].value_counts() / total_gender_counts
print('Frecuencies in %:')
print(frecuencies_gender)
# Second, we replicate the distribution
profile['gender'] = profile['gender'].apply \
(lambda x: np.random.choice(frecuencies_gender.index, p=frecuencies_gender.values) if x is None else x)
###Output
_____no_output_____
###Markdown
Third, we perform one-hot encoding on gender
###Code
# Save dummy variable for later (visualization)
gender_dummy = profile['gender']
# Get dummies: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html
profile = pd.get_dummies(profile, prefix=['gender'], columns=['gender'])
profile = pd.concat([profile, gender_dummy], axis=1)
###Output
_____no_output_____
###Markdown
Data cleansing Undesirable value detection: 1. Missing values: income. Gender was taken care of by the wrangling step. 3. Duplicates: No 4. Incorrect values: age 5. Irrelevant: none Measures: 1. Replace: replace income NaNs with the average of the column; replace the age of 118 with the average age. 2. Modify: no 3. Delete: gender_dummy column (AFTER EXPLORATION OF ALL THE DATASETS COMBINED) We replace income nan values with the average income
###Code
# We calculate the average income
mean_income = profile['income_[$]'].mean()
# We know the first value of income is missing
print(type(profile['income_[$]'].iloc[0]))
# Let us check how we can identify this value
print(profile['income_[$]'].iloc[0] is None)
print(profile['income_[$]'].iloc[0] is np.nan)
print(pd.isna(profile['income_[$]'].iloc[0]))
# We replace the nan values with the mean income
profile['income_[$]'] = profile['income_[$]'].apply(lambda x: mean_income if pd.isna(x) else x)
###Output
_____no_output_____
###Markdown
We replace the 118 values with the average age (we truncate it)
###Code
# We get the mean age
mean_age = int(profile['age'].mean())
# We replace the 118 values by the mean
profile['age'] = profile['age'].apply(lambda x: mean_age if x == 118 else x)
###Output
_____no_output_____
###Markdown
Data exploration
###Code
profile
###Output
_____no_output_____
###Markdown
We now have tidy data: each record is an observation, we only have one observation type (customer), the values are in the cells and all variable names are in the columns (except for the column gender, which is left there for visualization purposes later; it will be deleted). Questions we could answer: 1. What is the gender distribution? 2. How are different genders distributed with respect to income? 3. How are different genders distributed with respect to age? 4. What is the distribution of new memberships along time? What is the gender distribution?
###Code
profile['gender'].hist()
###Output
_____no_output_____
###Markdown
There are more males in the dataset How different genders are distributed with respect to income?
###Code
profile['income_[$]'].hist(by=profile['gender'])
###Output
_____no_output_____
###Markdown
It seems that females and males have more or less the same income in this dataset. But it also seems that there are more women who earn above the average than men. (The comparison is not perfect because there are around 3000 fewer women than men, so the sample of women is less representative.) It is worth extracting more insights, as the distributions seem different. Let us check the [wasserstein_distance](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.wasserstein_distance.html)
###Code
wasserstein_distance(profile.loc[profile['gender'] == 'F']['income_[$]'], profile.loc[profile['gender'] == 'M']['income_[$]'])
###Output
_____no_output_____
###Markdown
Indeed, the difference between the income distributions of men and women is pretty high.
###Code
profile.loc[profile['gender'] == 'F']['income_[$]'].mean()
women_mean_income = profile.loc[profile['gender'] == 'F']['income_[$]'].mean()
men_mean_income = profile.loc[profile['gender'] == 'M']['income_[$]'].mean()
print('Mean salary for women is', women_mean_income)
print('Mean salary for men is', men_mean_income)
print('In this datase, women earn more money than men in average, by: ', women_mean_income-men_mean_income, '$')
women_median_income = profile.loc[profile['gender'] == 'F']['income_[$]'].median()
men_median_income = profile.loc[profile['gender'] == 'M']['income_[$]'].median()
print('Median salary for women is', women_median_income)
print('Median salary for men is', men_median_income)
print('In this dataset, however, the median is not so far from genders: ', women_median_income-men_median_income, '$')
###Output
Median salary for women is 66000.0
Median salary for men is 63000.0
In this datase, however, the median is not so far from genders: 3000.0 $
###Markdown
There are around 5000 men earning above $63k and around 3500 women earning more than $66k.
###Code
women_std_income = profile.loc[profile['gender'] == 'F']['income_[$]'].std()
men_std_income = profile.loc[profile['gender'] == 'M']['income_[$]'].std()
print('The std of the salary for women is', women_std_income)
print('The std of the salary for men is', men_std_income)
print('In this datase, however, the median is not so far from genders: ', women_std_income-men_std_income, '$')
###Output
The std of the salary for women is 20981.542952480755
The std of the salary for men is 18774.535245943607
In this datase, however, the median is not so far from genders: 2207.007706537148 $
###Markdown
The stds are high, and the one for women is higher (reflecting that there are fewer women than men in the sample). How are different genders distributed with respect to age?
###Code
profile['age'].hist(by=profile['gender'])
###Output
_____no_output_____
###Markdown
The histograms here are very similar. We could compute the same statistics as with income, but the histograms seem pretty similar. Let us check the statistical distance we used before:
###Code
wasserstein_distance(profile.loc[profile['gender'] == 'F']['age'], profile.loc[profile['gender'] == 'M']['age'])
###Output
_____no_output_____
###Markdown
Indeed they are very similar. What is the distribution of new memberships along time?
###Code
profile['became_member_on_[y-m-d]'].value_counts().plot.line()
###Output
_____no_output_____
###Markdown
There are noticeable jumps every 2 years (half of 2015 and half 2017). Perhaps they correspond to new campaigns or improvements in the app. It is also interesting that in 2018, the number of new memberships dropped (first time), thus perhaps new competitors arrived into the market. We save the dataframe as a pickle file
###Code
profile.to_pickle("./profile.pkl")
###Output
_____no_output_____
###Markdown
TRANSCRIPT
###Code
# read in the json files
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
transcript
###Output
_____no_output_____
###Markdown
Data wrangling 1. person: replace the ids with the ones from the previous datasets. It can be connected to customer id. 2. event: no changes. We will not use this column for prediction, and it is useful to have it in this format for cleaning, wrangling and visualization. 3. time: no changes. 4. value: make the dict keys columns and the dict values the values within. Once we do that, we transform offer ids into the easier-to-read ids defined before and leave the NaNs as they are (dealt with in cleansing). For the transaction amount, we can replace the NaNs with 0. For the offer ids with NaNs (meaning there is only an amount), we can replace the NaN value with a higher number than the last offer id, indicating that there was no offer. With reward, we set it to 0 for NaNs; we must check if this coincides with offer completed. We will rename the columns so they can be combined later on. We have to think about how to collapse the records from a user into one row, so we can join the 3 datasets. This will require feature engineering. This last point is related to the open questions in points 2 and 4. **In essence our final dataset should have pairs of customers and offers, together with a score for how well that offer did with the customer. The score can be binary, whether it worked or not. This score must be distilled from this transcript dataset** Change column names
###Code
# Renaming specific column: https://stackoverflow.com/questions/20868394/changing-a-specific-column-name-in-pandas-dataframe
transcript = transcript.rename(columns = {'person':'customer_id'})
###Output
_____no_output_____
###Markdown
Replace the person id with the ones from the dictionary mapping hashes to ints
###Code
# Read back the dictionary
with open('customer_id_dict.pickle', 'rb') as handle:
customer_id_dict = pickle.load(handle)
# Now we convert the column: https://stackoverflow.com/questions/20250771/remap-values-in-pandas-column-with-a-dict
transcript = transcript.replace({"customer_id": customer_id_dict})
# It took a long time to execute the previous cell, let us do a checkpoint
transcript.to_pickle("./transcript.pkl")
###Output
_____no_output_____
###Markdown
We transform the 'value' column into new columns for better wrangling
###Code
# Ref: https://www.codegrepper.com/code-examples/python/dict+column+to+be+in+multiple+columns+python
value_dum = transcript['value'].apply(pd.Series)
transcript.to_pickle("./transcript.pkl")
value_dum
###Output
_____no_output_____
###Markdown
offer id and offer_id should be the same. We will combine both columns.Reward might be a new column we have not accounted for in the beginning.
###Code
# Let us check if this attribute is empty - It is not
value_dum['offer_id'].isnull().value_counts()
# Let us check if the values that are NOT missing in offer_id overlap with the ones NOT missing in offer id
print('Number of records where offer_id is NOT null', value_dum[~value_dum['offer_id'].isnull()].shape[0])
# If the number of missing values in 'offer id' is equal to the following number, then, while there is an overlap
# of both attributes in missing values, there is no overlap when there is content (we can assure this by means of
# the mehtod used to find this value thorugh pandas) - We count nan values of 'offer id' in a df where there are NO
# nans in 'offer_id'
value_dum[~value_dum['offer_id'].isnull()]['offer id'].isnull().value_counts()
###Output
Number of records where offer_id is NOT null 33579
###Markdown
We thus conclude that the columns 'offer id' and 'offer_id' are the same (duh! - but one needs to cross check). We need to merge them.
###Code
# Let us check how to identify the nan value
print(value_dum['offer_id'].iloc[-1] is None)
print(value_dum['offer_id'].iloc[-1] is np.nan)
print(pd.isna(value_dum['offer_id'].iloc[-1]))
# Let us check if the values that are NOT missing in offer_id overlap with the ones NOT missing in offer id
value_dum['offer_id'] = value_dum.apply(lambda row: row['offer_id'] if pd.isna(row['offer id']) else row['offer id'], axis=1)
# We drop the column from value_dum that is not useful
value_dum.drop(columns=['offer id'], inplace=True)
# We drop the old column and concat the new one
transcript.drop(columns=['value'], inplace=True)
# We concat the new set of columns
transcript = pd.concat([transcript, value_dum], axis=1)
###Output
_____no_output_____
###Markdown
We convert the offer id into the id we created before
###Code
# Read back the dictionary
with open('offer_id_dict.pickle', 'rb') as handle:
offer_id_dict = pickle.load(handle)
# Now we convert the column: https://stackoverflow.com/questions/20250771/remap-values-in-pandas-column-with-a-dict
transcript = transcript.replace({"offer_id": offer_id_dict})
# It took a long time to execute the previous cell, let us do a checkpoint
transcript.to_pickle("./transcript.pkl")
###Output
_____no_output_____
###Markdown
Data cleansing Undesirable value detection: 1. Missing values: offer_id, amount, reward. 3. Duplicates: No 4. Incorrect values: offer ids are floats and not ints (probably because there are NaNs in the same column and somehow it affected the conversion) 5. Irrelevant: none Measures: 1. Replace: offer_id NaNs with the value 10 (one above the last offer id); amount and reward NaNs will be replaced with a 0. 2. Modify: offer_id into int again 3. Delete: none Replace offer_id nans with the value 10
###Code
# Let us check how to identify the nan value
print(transcript['offer_id'].iloc[-1] == None)
print(transcript['offer_id'].iloc[-1] is np.nan)
print(pd.isna(transcript['offer_id'].iloc[-1]))
print(transcript['offer_id'].iloc[-1] == 'nan')
# We perform the change
transcript['offer_id'] = transcript['offer_id'].apply(lambda x: 10 if pd.isna(x) else x)
###Output
_____no_output_____
###Markdown
Convert offer_id into ints
###Code
transcript = transcript.astype({"offer_id": int})
###Output
_____no_output_____
###Markdown
Replace amount nans with 0
###Code
# Let us check how to identify the nan value
print(transcript['amount'].iloc[0] == None)
print(transcript['amount'].iloc[0] is np.nan)
print(pd.isna(transcript['amount'].iloc[0]))
print(transcript['amount'].iloc[0] == 'nan')
# We perform the change
transcript['amount'] = transcript['amount'].apply(lambda x: 0 if pd.isna(x) else x)
###Output
_____no_output_____
###Markdown
Replace reward nans with 0
###Code
# Let us check how to identify the nan value
print(transcript['reward'].iloc[0] == None)
print(transcript['reward'].iloc[0] is np.nan)
print(pd.isna(transcript['reward'].iloc[0]))
print(transcript['reward'].iloc[0] == 'nan')
# We perform the change
transcript['reward'] = transcript['reward'].apply(lambda x: 0 if pd.isna(x) else x)
###Output
_____no_output_____
###Markdown
Let us change the column names to include the appropriate units
###Code
transcript.columns = ['customer_id', 'event', 'time_[h]', 'amount_[$]', 'offer_id', 'reward_[$]']
# It took a long time to execute the previous cell, so let us create a checkpoint
transcript.to_pickle("./transcript.pkl")
###Output
_____no_output_____
###Markdown
Feature engineering
###Code
transcript = pd.read_pickle("./transcript.pkl")
transcript
###Output
_____no_output_____
###Markdown
We would like to combine all datasets to feed them to a model. The transcript dataset contains information about successful and unsuccessful offers, about the purchases of the customers and about the rewards they have retrieved. We have two id columns, which we can use as foreign keys for the primary keys in the other two datasets to combine them. However, we cannot do that yet. We first have to distill the transcript data to obtain the information that will allow us to predict whether an offer will be accepted by an individual in the future. So, first of all, this is what the ideal dataset would look like:

Offer id | ...offer properties... | customer id | ...customer qualities... | success/no_success | profit | viewed/not_viewed | received/not_received

In order to get this dataset, we need to assess the success of each offer. For that, we need to attach to the transcript dataset information from the portfolio: **offer duration, reward, difficulty** and type (the latter just for data exploration).
- Success column: an offer is successful if a user has purchased the amount of the difficulty before the offer expires; thus, we need the duration and difficulty.
- Profit column: we could make predictions and comparisons between groups with this column; for this we need difficulty - reward.
- Viewed column: we need the event column of transcript.
- Received column: we need the event column of transcript.

Now let us start joining the transcript and the portfolio datasets
###Code
# First, let us read again the portfolio
portfolio = pd.read_pickle("./portfolio.pkl")
###Output
_____no_output_____
###Markdown
We will add a virtual row for an offer with id 10, meaning that there is no offer. This will be attached to the rows of transcript where no offer was made but there was nonetheless a transaction.
###Code
new_row = {'reward_[$]':[0], 'difficulty_[$]':[0], 'duration_[h]':[0], 'offer_id':[10], 'offer_type_bogo':[0], 'offer_type_discount':[0], \
'offer_type_informational':[0], 'offer_type':'none', 'channel_mobile':[0], 'channel_social':[0], 'channel_web':[0]}
new_row = pd.DataFrame.from_dict(new_row)
portfolio = portfolio.append(new_row)
portfolio.to_pickle("./portfolio.pkl")
transcript = pd.merge(transcript, portfolio, on='offer_id', how='left', sort=True, suffixes=('_trans', '_port'))
transcript = transcript.drop(columns=['offer_type_bogo', 'offer_type_discount', 'offer_type_informational', 'channel_mobile', 'channel_social', 'channel_web'])
###Output
_____no_output_____
###Markdown
(The offer_type attribute is only used for exploring the raw data more easily; it is not needed to distill information.) Let us check that we still have all the rows from the transcript data, i.e. that the left join was successful - we see it is, it contains all the information.
###Code
transcript.shape[0]
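# A stricter check (sketch): the merged frame should have exactly as many rows as
# the transcript we pickled just before the merge - a left join on a unique
# offer_id key cannot drop or duplicate rows.
transcript.shape[0] == pd.read_pickle("./transcript.pkl").shape[0]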
###Output
_____no_output_____
###Markdown
We create a groupby object based on customer:
###Code
customer_transactions = transcript.groupby(['customer_id'])
###Output
_____no_output_____
###Markdown
**Build dictionary to store the distilled values following:** Offer id |...offer properties...| customer id | ...customer qualities... | success/failure/Not_applicable | profit | Viewed/Not viewed | Received/not_received | effective_time
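As a purely hypothetical illustration, a single distilled entry appended to this ledger could look like the dictionary below (made-up values, keys follow the schema above):
###Code
# Illustration only - the values are invented, the keys follow the ledger schema
example_entry = {'customer_id': 42, 'offer_id': 3, 'received': 1, 'viewed': 1,
                 'completed': 1, 'success': 1, 'profit': 5.0}
example_entry
###Output
_____no_output_____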
###Code
ledger = {'customer_id': [], 'offer_id': [], 'received':[], 'viewed':[], 'completed':[], 'success': [], 'profit': []}
###Output
_____no_output_____
###Markdown
We will append the values to this ledger sequentially. 'success' will be categorical for now, as there could be 3 possibilities. Feature engineering algorithm
###Code
# Let us get every customer id in a list
customer_ids = np.sort(transcript['customer_id'].unique())
# Let us get every offer id in a list
offer_ids = np.sort(portfolio['offer_id'].unique())
# Let us get a list of informational offer ids, as they behave differently
informational_offer_ids = [2, 7]
# this is the id for the added offer which is not an offer
non_offer_id = 10
###Output
_____no_output_____
###Markdown
We keep in mind that the last offer id is a non-offer, and that bogos and discounts act essentially in the same manner. Something to note is that a customer might not spend exactly the amount of money needed to fulfill the offer; that is interesting, but unfortunately, because offers overlap, you cannot really assign a profit to an offer-customer pair aside from the obvious one of difficulty - reward. What we will do for the profit attribute:
- bogo and discount profit = difficulty - reward (note that bogos will have a profit of 0).
- informational = the amount of dollars transacted in its period. We could also subtract the rewards obtained in that period, but it would only be useful if we added the other offers that were completed at the same time. This is out of scope for my questions, so the word profit for informational is not completely right.
- non_offer = the amount of dollars spent outside any offer period.

Further considerations:
- For viewed or completed to happen, at least received has had to happen.
- There is no offer completed event outside the limit of time (the offer leaves the app).
- There is no offer viewed event after the limit of time (the offer leaves the app).
- There can be a viewed event after the complete event, in which case the offer is a failure.
- Offers, be they the same or different, can overlap, which makes calculating which offer was successful trickier.
- You can get the same offer in the same interval of time, e.g. the combination: received offer, received offer, complete offer, view offer, complete offer, view offer. You might think that at least one of the offers was successful, as a view happens before a complete at least once. That is in my view wrong. We assume that copies of the same offer type are handled sequentially, so the first view belongs to the first copy you received; that is why, in that sequence, both offers of the same type failed. We would need an identifier saying which copy each event belongs to in order to distinguish between views and completions of the same offer type.
- For the non-offers, we count only the gaps not influenced by an offer. Even if an offer was completed, we still consider its influence.

The algorithm to find successful offers per customer (feature engineering):
1. Group by customer.
2. Loop through each customer (for).
3. Loop through each offer (for).
4. Bogos and discounts: distill information about success, viewed, received, effective time and profit.
   1. Get the number of received offers.
   2. Iterate ordered by time and event, and find viewed or completed events. Depending on what was seen before, offers will have been successful or not.
5. Informational: idem, but there is no completion event; success is defined as buying something while the offer is active.
6. Non-offer: find the gaps where no offer was active and use these gaps to add the profit.

*Profit is viewed from the end-customer side of course, not considering how much it costs to sell the product.
*Transaction periods can overlap.
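To make the sequencing rule above concrete, here is a toy sketch (hypothetical event list, not part of the pipeline) that applies the same first-unmatched-copy logic to the received, received, complete, view, complete, view case:
###Code
# Toy illustration of the sequencing rule: views/completions are assigned to the
# earliest received copy that does not have one yet, so in R, R, C, V, C, V both
# copies are completed before they are viewed and neither counts as a success.
events = ['offer received', 'offer received', 'offer completed',
          'offer viewed', 'offer completed', 'offer viewed']
copies = [{'viewed': 0, 'completed': 0, 'success': 0}
          for _ in range(events.count('offer received'))]
for event in events:
    if event == 'offer viewed':
        for c in copies:
            if c['viewed'] == 0:
                c['viewed'] = 1
                break
    elif event == 'offer completed':
        for c in copies:
            if c['completed'] == 0:
                c['completed'] = 1
                if c['viewed'] == 1:
                    c['success'] = 1
                break
copies  # both copies end up viewed and completed, but neither is a success
###Output
_____no_output_____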
###Code
ledger = {'customer_id': [], 'offer_id': [], 'received':[], 'viewed':[], 'completed':[], 'success': [], 'profit': []}
# initialize the progress bar
bar = progressbar.ProgressBar(maxval=len(customer_ids), \
widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
# For sorting the events later, as receive and view can happen in the same hour
event_sort = {'offer received':0,'offer viewed':1,'offer completed':2}
# start the loop through the customer transaction entries
bar.start()
for customer_id in customer_transactions.groups.keys():
# We isolate the customers
customer = customer_transactions.get_group(customer_id)
# We get the offers received by the customer
customer_offer_ids = customer['offer_id'].unique()
    # Loop through the offers of this customer
for customer_offer_id in customer_offer_ids:
# we filter the offer currently on in the loop
offers = customer.loc[(customer['offer_id'] == customer_offer_id)]
# The events at the same hour are not ordered like R - V - C, so we must order them
offers['name_sort'] = offers['event'].map(event_sort)
offers = offers.sort_values(['time_[h]', 'name_sort'], ascending=[True, True])
# We focus in BOGOS AND DISCOUNTS
if (customer_offer_id not in informational_offer_ids) and (customer_offer_id != non_offer_id):
ledger_temp = {}
received_offers = offers.loc[offers['event'] == 'offer received'].shape[0]
for i in range(0, received_offers):
ledger_temp[i] = {'received':1, 'viewed':0, 'completed':0, 'success': 0, 'profit': 0}
# We loop through each row of the customer offer sub dataframe
for index, row in offers.iterrows():
if (row['event'] == 'offer viewed'):
for i in range(0, received_offers):
if (ledger_temp[i]['viewed'] == 0):
ledger_temp[i]['viewed'] = 1
break
elif (row['event'] == 'offer completed'):
for i in range(0, received_offers):
if (ledger_temp[i]['completed'] == 0):
ledger_temp[i]['completed'] = 1
# only if it was viewed before was successful
if (ledger_temp[i]['viewed'] == 1):
ledger_temp[i]['success'] = 1
ledger_temp[i]['profit'] = row['difficulty_[$]'] - row['reward_[$]_port']
break
for i in range(0, received_offers):
ledger['customer_id'].append(customer_id)
ledger['offer_id'].append(customer_offer_id)
ledger['received'].append(ledger_temp[i]['received'])
ledger['viewed'].append(ledger_temp[i]['viewed'])
ledger['completed'].append(ledger_temp[i]['completed'])
ledger['success'].append(ledger_temp[i]['success'])
ledger['profit'].append(ledger_temp[i]['profit'])
# We focus on INFORMATIONAL
elif (customer_offer_id != non_offer_id):
ledger_temp = {}
received_offer_counter = 0
# We loop through each row of the customer offer sub dataframe
for index, row in offers.iterrows():
if (row['event'] == 'offer received'):
ledger_temp[received_offer_counter] = {'received':1, 'time_received':row['time_[h]'], 'viewed':0, 'completed':0, 'success': 0, 'profit': 0}
received_offer_counter += 1
if (row['event'] == 'offer viewed'):
# Calculate profit
for i in range(0, received_offer_counter):
if ledger_temp[i]['viewed'] == 0:
ledger_temp[i]['viewed'] = 1
time_of_view = row['time_[h]']
expire_offer_time = ledger_temp[i]['time_received'] + row['duration_[h]']
profit = customer.loc[(customer['time_[h]'] >= time_of_view) & \
(customer['time_[h]'] <= expire_offer_time)]['amount_[$]'].sum()
ledger_temp[i]['profit'] = profit
# We consider it successful if the customer bought something
if profit > 0:
ledger_temp[i]['success'] = 1
break
for i in range(0, received_offer_counter):
ledger['customer_id'].append(customer_id)
ledger['offer_id'].append(customer_offer_id)
ledger['received'].append(ledger_temp[i]['received'])
ledger['viewed'].append(ledger_temp[i]['viewed'])
ledger['completed'].append(ledger_temp[i]['completed'])
ledger['success'].append(ledger_temp[i]['success'])
ledger['profit'].append(ledger_temp[i]['profit'])
# Order the offers properly
# The events at the same hour are not ordered like R - V - C, so we must order them
temp_customer = customer
temp_customer['name_sort'] = temp_customer['event'].map(event_sort)
temp_customer = temp_customer.sort_values(['time_[h]', 'name_sort'], ascending=[True, True])
# We get the amount spent without offer influence
    # We collect the start and end time of every offer received
start_times = []
end_times = []
offers_received = False
for index, row in temp_customer.iterrows():
if row['event'] == 'offer received':
start_times.append(row['time_[h]'])
end_times.append(row['time_[h]'] + row['duration_[h]'])
offers_received = True
if offers_received:
time_gap_start = []
time_gap_start.append(0)
time_gap_end = []
time_gap_end.append(start_times[0])
for index in range(0, len(start_times)-1):
if end_times[index] < start_times[index+1]:
time_gap_start.append(end_times[index])
time_gap_end.append(start_times[index+1])
time_gap_start.append(end_times[-1])
time_gap_end.append(temp_customer.iloc[-1]['time_[h]'])
# Initialize the amount
total_profit = 0
for index in range(0, len(time_gap_start)):
total_profit += temp_customer.loc[(temp_customer['time_[h]'] >= time_gap_start[index]) & \
(temp_customer['time_[h]'] <= time_gap_end[index])]['amount_[$]'].sum()
else:
total_profit = temp_customer.loc[temp_customer['offer_id'] == non_offer_id]['amount_[$]'].sum()
ledger['customer_id'].append(customer_id)
ledger['offer_id'].append(non_offer_id)
ledger['received'].append(0)
ledger['viewed'].append(0)
ledger['completed'].append(0)
ledger['success'].append(0)
ledger['profit'].append(total_profit)
# initialize bool and total profit again
offers_received = False
total_profit = 0
# progress bar
bar.update(customer_id+1)
sleep(0.1)
bar.finish()
###Output
[========================================================================] 100%
###Markdown
We save the ledger
###Code
with open('ledger.pickle', 'wb') as handle:
pickle.dump(ledger, handle, protocol=pickle.HIGHEST_PROTOCOL)
Starbucks_ledger = pd.DataFrame.from_dict(ledger)
Starbucks_ledger.to_pickle("./Starbucks_ledger.pkl")
###Output
_____no_output_____
###Markdown
FINAL STEP: Join all three datasets, Portfolio, profile and ledger
###Code
# We read the wrangled and clean datasets
portfolio = pd.read_pickle("./portfolio.pkl")
profile = pd.read_pickle("./profile.pkl")
Starbucks_ledger = pd.read_pickle("./Starbucks_ledger.pkl")
Starbucks_ledger
Starbucks_final_df = pd.merge(Starbucks_ledger, profile, on='customer_id', how='left', sort=True, suffixes=('_led', '_pro'))
Starbucks_final_df = pd.merge(Starbucks_final_df, portfolio, on='offer_id', how='left', sort=True, suffixes=('_led', '_port'))
###Output
_____no_output_____
###Markdown
We save the FINAL dataframe as a pickle file
###Code
Starbucks_final_df.to_pickle("./Starbucks_final_df.pkl")
Starbucks_final_df = pd.read_pickle("./Starbucks_final_df.pkl")
###Output
_____no_output_____
###Markdown
HERE IS THE BEAUTY:
###Code
Starbucks_final_df
Starbucks_final_df.describe()
Starbucks_final_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 93277 entries, 0 to 93276
Data columns (total 25 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 customer_id 93277 non-null int64
1 offer_id 93277 non-null int64
2 received 93277 non-null int64
3 viewed 93277 non-null int64
4 completed 93277 non-null int64
5 success 93277 non-null int64
6 profit 93277 non-null float64
7 age 93277 non-null int64
8 became_member_on_[y-m-d] 93277 non-null datetime64[ns]
9 income_[$] 93277 non-null float64
10 missing_values 93277 non-null int64
11 gender_F 93277 non-null uint8
12 gender_M 93277 non-null uint8
13 gender_O 93277 non-null uint8
14 gender 93277 non-null object
15 reward_[$] 93277 non-null int64
16 difficulty_[$] 93277 non-null int64
17 duration_[h] 93277 non-null int64
18 offer_type_bogo 93277 non-null int64
19 offer_type_discount 93277 non-null int64
20 offer_type_informational 93277 non-null int64
21 offer_type 93277 non-null object
22 channel_mobile 93277 non-null int64
23 channel_social 93277 non-null int64
24 channel_web 93277 non-null int64
dtypes: datetime64[ns](1), float64(2), int64(17), object(2), uint8(3)
memory usage: 16.6+ MB
###Markdown
Data wrangling We will convert some of the columns into categorical values for better representation and to input them into the model later. We can delete the received column, as it does not provide useful information that we do not already have: if received were 0, that would mean there was no offer in the first place, but we have that covered with offer id 10.
###Code
Starbucks_final_df.drop(columns=['received'], inplace=True)
###Output
_____no_output_____
###Markdown
Now we will convert age into categories and then make dummy variables:
1. <=25: young
2. 26-50: young_adult
3. 51-75: senior_adult
4. >75: senior
###Code
def select_age(age):
if age <= 25:
return 'young'
elif age <= 50:
return 'young_adult'
elif age <= 75:
return 'senior_adult'
else:
return 'senior'
Starbucks_final_df['age'] = Starbucks_final_df['age'].apply(lambda x: select_age(x))
# Get dummies: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html
dummy_age = Starbucks_final_df['age']
Starbucks_final_df = pd.get_dummies(Starbucks_final_df, prefix=['age'], columns=['age'])
Starbucks_final_df['dummy_age'] = dummy_age
###Output
_____no_output_____
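###Markdown
As an aside, the same age binning could also be expressed with pd.cut; a minimal sketch on hypothetical ages (not used in the pipeline):
###Code
import pandas as pd
import numpy as np
demo_ages = pd.Series([22, 37, 64, 81])
pd.cut(demo_ages, bins=[-np.inf, 25, 50, 75, np.inf],
       labels=['young', 'young_adult', 'senior_adult', 'senior'])
###Output
_____no_output_____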
###Markdown
We do the same with the income (considering that the minimum income was 30k):
1. <=50k: low
2. 50k-75k: medium_low
3. 75k-100k: medium_high
4. >100k: high
###Code
def select_income(income):
if income <= 50000:
return 'low'
elif income <= 75000:
return 'medium_low'
elif income <= 100000:
return 'medium_high'
else:
return 'high'
Starbucks_final_df['income_[$]'] = Starbucks_final_df['income_[$]'].apply(lambda x: select_income(x))
# Get dummies: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html
dummy_income = Starbucks_final_df['income_[$]']
Starbucks_final_df = pd.get_dummies(Starbucks_final_df, prefix=['income'], columns=['income_[$]'])
Starbucks_final_df['dummy_income'] = dummy_income
###Output
_____no_output_____
###Markdown
Now let us divide the members in groups depending on how early or late they became an app customer.
###Code
earliest = Starbucks_final_df['became_member_on_[y-m-d]'].min()
latest = Starbucks_final_df['became_member_on_[y-m-d]'].max()
print(earliest)
print(latest)
time_period = Starbucks_final_df['became_member_on_[y-m-d]'].max() - Starbucks_final_df['became_member_on_[y-m-d]'].min()
time_period/4
###Output
_____no_output_____
###Markdown
We will divide the customers into 4 groups:
1. <455 days: early adopters
2. <910 days: early majority
3. <1365 days: late majority
4. >1365 days: laggards
###Code
def select_period(date):
if date <= earliest + time_period/4:
return 'early_adopter'
elif date <= earliest + time_period/4*2:
return 'early_majority'
elif date <= earliest + time_period/4*3:
return 'late_majority'
else:
return 'laggard'
Starbucks_final_df['became_member_on_[y-m-d]'] = Starbucks_final_df['became_member_on_[y-m-d]'].apply(lambda x: select_period(x))
# Get dummies: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.get_dummies.html
dummy_period = Starbucks_final_df['became_member_on_[y-m-d]']
Starbucks_final_df = pd.get_dummies(Starbucks_final_df, prefix=['member_'], columns=['became_member_on_[y-m-d]'])
Starbucks_final_df['dummy_became_member_on_[y-m-d]'] = dummy_period
###Output
_____no_output_____
###Markdown
Data cleansing We will delete the dummy attributes after the exploration; aside from that, there is nothing to cleanse. Data Exploration Here are the questions we can answer:
- Which offers are preferred according to gender?
- Which offers are preferred according to income?
- Which offers are preferred according to age?
- Which offers are preferred according to date of becoming a member?
- Which are the most successful offers (most completed) between discounts and bogos?
- Which are the most profitable offers among discount offers?
- Which are the most profitable offers among informational offers?
- How much money was earned in total with offers vs. without offers?

Which offers are preferred according to gender? We will show a top list of successful offers per gender. Let us remember that informational offers' success cannot be measured through completions; we defined it as any purchase made while the offer was active.
###Code
groups_gender_offers = Starbucks_final_df[Starbucks_final_df['offer_id'] != 10].groupby(['gender', 'offer_id'])\
['success'].sum().unstack('gender').plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
Most males and females prefer offer 6, which is a discount of 2 dollars after buying products worth 10 dollars, with the longest duration of 10 days, and it reaches customers through all media: mobile, social and web. The second most liked is offer 5, which is also a discount. The third place goes to offer 1 for males (bogo) and offer 8 for females (bogo). But the differences between female and male preferences are not that large. Here is a more summarized plot:
###Code
groups_gender_offers = Starbucks_final_df[Starbucks_final_df['offer_type'] != 'none'].groupby(['gender', 'offer_type'])\
['success'].sum().unstack('gender').plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
Which offers are preferred according to income?
###Code
groups_gender_offers = Starbucks_final_df[Starbucks_final_df['offer_id'] != 10].groupby(['dummy_income', 'offer_id'])\
['success'].sum().unstack('dummy_income').plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
There are not so many customers with high income. The target group is people who earn between 50 and 75k. For them, the most preferred offer ids are 5 and 6; in the plot below we see they prefer discount offers. Here is a more summarized plot:
###Code
groups_gender_offers = Starbucks_final_df[Starbucks_final_df['offer_id'] != 10].groupby(['dummy_income', 'offer_type'])\
['success'].sum().unstack('dummy_income').plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
Which offers are preferred according to age?
###Code
groups_gender_offers = Starbucks_final_df[Starbucks_final_df['offer_id'] != 10].groupby(['dummy_age', 'offer_id'])\
['success'].sum().unstack('dummy_age').plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
Senior adults are the biggest clientele, and they prefer offer ids 5 and 6 (discount type in the plot below). This leads me to think that most of them also have a medium-low income.
###Code
groups_gender_offers = Starbucks_final_df[Starbucks_final_df['offer_id'] != 10].groupby(['dummy_age', 'offer_type'])\
['success'].sum().unstack('dummy_age').plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
Which offers are preferred according to date of becoming a member?
###Code
groups_gender_offers = Starbucks_final_df[Starbucks_final_df['offer_id'] != 10].groupby(['dummy_became_member_on_[y-m-d]', 'offer_id'])\
['success'].sum().unstack('dummy_became_member_on_[y-m-d]').plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
This is something we could have expected: each offer has the same distribution, which leads us to think that the time of becoming a member does not have an effect on which offers customers might prefer.
###Code
groups_gender_offers = Starbucks_final_df[Starbucks_final_df['offer_id'] != 10].groupby(['dummy_became_member_on_[y-m-d]', 'offer_type'])\
['success'].sum().unstack('dummy_became_member_on_[y-m-d]').plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
Which are the most successful offers?
###Code
groups_gender_offers = Starbucks_final_df[Starbucks_final_df['offer_type'] != 'none'].groupby(['offer_type'])\
['success'].sum()\
.plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
The most successful offer type is discount. Which are the most profitable offers?
###Code
groups_gender_offers = Starbucks_final_df[Starbucks_final_df['offer_type'] != 'none'].groupby(['offer_id'])\
['profit'].sum()\
.plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
This plot must be taken with a grain of salt. Bogos are not profitable under our definition, as they produce 0 profit; but they serve other purposes, and their success is not measured by profit. The most profitable offers are 7 and 2, which are informational. However, informational offers' profit is based on the spending in a period of time, during which there were other offers as well. We can clearly conclude that offer 6 is the most profitable one among discounts.
###Code
groups_gender_offers = Starbucks_final_df[Starbucks_final_df['offer_type'] != 'none'].groupby(['offer_type'])\
['profit'].sum()\
.plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
Which are the most profitable offers between informational?
###Code
groups_gender_offers = Starbucks_final_df[(Starbucks_final_df['offer_id'] == 2) | \
(Starbucks_final_df['offer_id'] == 7)].groupby(['offer_id'])\
['profit'].sum()\
.plot.bar(figsize = (12,8), rot=0)
###Output
_____no_output_____
###Markdown
How much money was earned in total with offers Vs. without offers?
###Code
offer_profit = Starbucks_final_df[Starbucks_final_df['offer_id'] != 10].groupby(['offer_id'])['profit'].sum().sum()
none_offer_profit = Starbucks_final_df[Starbucks_final_df['offer_id'] == 10].groupby(['offer_id'])['profit'].sum().sum()
y_values = [offer_profit, none_offer_profit]
x_values = ['offers', 'no_offers']
plt.bar(x_values, y_values)
###Output
_____no_output_____
###Markdown
They have made more money without the offers
###Code
Starbucks_final_df.to_pickle("./Starbucks_plotting.pkl")
###Output
_____no_output_____
###Markdown
Data cleansing We will transform offer_id into dummies, as it is a categorical attribute. Reward_[$], difficulty_[$] and duration_[h] will be scaled between 0 and 1.
###Code
Starbucks_final_df['reward_[$]'] = (Starbucks_final_df['reward_[$]'] - Starbucks_final_df['reward_[$]'].min())/(Starbucks_final_df['reward_[$]'].max() - Starbucks_final_df['reward_[$]'].min())
Starbucks_final_df['difficulty_[$]'] = (Starbucks_final_df['difficulty_[$]'] - Starbucks_final_df['difficulty_[$]'].min())/(Starbucks_final_df['difficulty_[$]'].max() - Starbucks_final_df['difficulty_[$]'].min())
Starbucks_final_df['duration_[h]'] = (Starbucks_final_df['duration_[h]'] - Starbucks_final_df['duration_[h]'].min())/(Starbucks_final_df['duration_[h]'].max() - Starbucks_final_df['duration_[h]'].min())
Starbucks_final_df.shape
Starbucks_final_df = pd.get_dummies(Starbucks_final_df, prefix=['offer_id'], columns=['offer_id'])
###Output
_____no_output_____
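###Markdown
The same min-max scaling could also be done with scikit-learn's MinMaxScaler; a minimal sketch on hypothetical values (not part of the pipeline):
###Code
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
demo = pd.DataFrame({'duration_[h]': [72, 120, 168, 240]})
MinMaxScaler().fit_transform(demo)
###Output
_____no_output_____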
###Markdown
Alright, so the data has been scaled and the categorical variables have been converted to dummies. But there are further considerations. We are focused on offers, so we should eliminate the rows with offer id = 10. We will focus on the success variable, which will be our label for training. We will delete viewed and completed because they would directly explain success. We also delete all the dummy helper columns.
###Code
Starbucks_final_df = Starbucks_final_df.loc[Starbucks_final_df['offer_id_10'] != 1]
Starbucks_final_df = Starbucks_final_df.drop(columns=['viewed'])
Starbucks_final_df = Starbucks_final_df.drop(columns=['completed'])
Starbucks_final_df = Starbucks_final_df.drop(columns=['offer_id_10'])
Starbucks_final_df = Starbucks_final_df.drop(columns=['dummy_age'])
Starbucks_final_df = Starbucks_final_df.drop(columns=['offer_type'])
Starbucks_final_df = Starbucks_final_df.drop(columns=['dummy_became_member_on_[y-m-d]'])
Starbucks_final_df = Starbucks_final_df.drop(columns=['dummy_income'])
Starbucks_final_df = Starbucks_final_df.drop(columns=['gender'])
###Output
_____no_output_____
###Markdown
Delete columns which in my opinion will not bring any value to the prediction. The year of becoming a member does not say much about the person, other than perhaps that he/she was an early adopter, and I will assume that this personal feature does not impact advertisement. Customer id does not help either; it does not add any value, since we already have the demographics and know that each record is an individual. Profit could be the ground truth used to predict how much an offer will make if sent to an individual, but that is not the question at hand.
###Code
Starbucks_final_df = Starbucks_final_df.drop(columns=['customer_id'])
Starbucks_final_df = Starbucks_final_df.drop(columns=['profit'])
###Output
_____no_output_____
###Markdown
Save the final dataset into a pickle file
###Code
Starbucks_final_df.to_pickle("./Starbucks_modelling_df.pkl")
###Output
_____no_output_____
###Markdown
4. Modeling
###Code
Starbucks_modelling_df = pd.read_pickle("./Starbucks_modelling_df.pkl")
Starbucks_modelling_df
###Output
_____no_output_____
###Markdown
Everything is ready to input our dataset into the model.
###Code
labels = Starbucks_modelling_df['success']
features = Starbucks_modelling_df.drop(columns=['success'], inplace=False)
# Split the 'features' and 'labels' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features,
labels,
test_size = 0.3,
random_state = 42)
# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))
###Output
Training set has 53393 samples.
Testing set has 22884 samples.
###Markdown
For classification, we will use:
- SVM
- Random forests
- Logistic regression
- Gradient Boosting
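We will compare them on accuracy and on the F-beta score, where F_beta = (1 + beta^2) * precision * recall / (beta^2 * precision + recall); with beta = 0.5, as used below, precision is weighted more heavily than recall.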
###Code
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
# REF: udacity course
# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
       - X_train: features training set
       - y_train: labels (success) training set
       - X_test: features testing set
       - y_test: labels (success) testing set
'''
results = {}
# TODO: Fit the learner to the training data using slicing with 'sample_size' using .fit(training_features[:], training_labels[:])
start = time() # Get start time
learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
end = time() # Get end time
# TODO: Calculate the training time
results['train_time'] = end - start
# TODO: Get the predictions on the test set(X_test),
# then get predictions on the first 300 training samples(X_train) using .predict()
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
# TODO: Calculate the total prediction time
results['pred_time'] = end - start
# TODO: Compute accuracy on the first 300 training samples which is y_train[:300]
results['acc_train'] = accuracy_score(y_train[:300], predictions_train)
# TODO: Compute accuracy on test set using accuracy_score()
results['acc_test'] = accuracy_score(y_test, predictions_test)
# TODO: Compute F-score on the the first 300 training samples using fbeta_score()
results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta=0.5)
# TODO: Compute F-score on the test set which is y_test
    # I use beta = 0.5 as we are focusing on precision
results['f_test'] = fbeta_score(y_test, predictions_test, beta=0.5)
# Success
print("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return results
clf_A = SVC(random_state=0)
clf_B = RandomForestClassifier(random_state=0)
clf_C = LogisticRegression(random_state=0)
clf_D = GradientBoostingClassifier(random_state=0)
clfs = [clf_A, clf_B, clf_C, clf_D]
# initialize the progress bar
bar_1 = progressbar.ProgressBar(maxval=len(clfs), \
widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
# initialize the progress bar
bar_2 = progressbar.ProgressBar(maxval=2, \
widgets=[progressbar.Bar('=', '[', ']'), ' ', progressbar.Percentage()])
# REF: udacity course
# It is interesting to see how they perform with fewer samples
samples_100 = int(len(y_train))
samples_10 = int(round(0.1 * samples_100))
# Collect results on the learners
results = {}
bar_1.start()
clf_counter = 0
for clf in clfs:
clf_name = clf.__class__.__name__
results[clf_name] = {}
bar_2.start()
for i, samples in enumerate([samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# progress bar
bar_2.update(i+1)
sleep(0.1)
bar_2.finish()
# progress bar
clf_counter += 1
bar_1.update(clf_counter)
sleep(0.1)
bar_1.finish()
# Run metrics visualization for the three supervised learning models chosen
ALL_results = {}
ALL_results = results
###Output
_____no_output_____
###Markdown
Performing algorithm comparisons
###Code
# We print the different metrics for all the tested algorithms.
print('Training with all samples')
for clf in clfs:
clf_name = clf.__class__.__name__
print(clf_name)
print('Training time = ', ALL_results[clf_name][1]['train_time'])
print('Testing time = ', ALL_results[clf_name][1]['pred_time'])
print('Test Accuracy = ', ALL_results[clf_name][1]['acc_test'])
print('Test Fscore = ', ALL_results[clf_name][1]['f_test'])
print('\n')
###Output
Training with all samples
SVC
Training time = 164.17853903770447
Testing time = 33.015408754348755
Test Accuracy = 0.6959884635553225
Test Fscore = 0.582545208095869
RandomForestClassifier
Training time = 0.2846059799194336
Testing time = 0.0351099967956543
Test Accuracy = 0.7047718930256948
Test Fscore = 0.5984216117772044
LogisticRegression
Training time = 0.16101694107055664
Testing time = 0.0060138702392578125
Test Accuracy = 0.7007953155042824
Test Fscore = 0.5910921218455107
GradientBoostingClassifier
Training time = 4.785567045211792
Testing time = 0.03660392761230469
Test Accuracy = 0.7081803880440483
Test Fscore = 0.6026470491641762
###Markdown
- Best train time: LogisticRegression
- Best test time: LogisticRegression
- Best Accuracy: GradientBoostingClassifier (by a hair)
- Best Fscore (beta = 0.5): GradientBoostingClassifier (by a hair)
###Code
# We print the different metrics for all the tested algorithms.
print('Training with 10% of samples')
for clf in clfs:
clf_name = clf.__class__.__name__
print(clf_name)
print('Training time = ', ALL_results[clf_name][0]['train_time'])
print('Testing time = ', ALL_results[clf_name][0]['pred_time'])
print('Test Accuracy = ', ALL_results[clf_name][0]['acc_test'])
print('Test Fscore = ', ALL_results[clf_name][0]['f_test'])
print('\n')
###Output
Training with 10% of samples
SVC
Training time = 1.0258140563964844
Testing time = 3.1299290657043457
Test Accuracy = 0.6952455864359378
Test Fscore = 0.5777104623680969
RandomForestClassifier
Training time = 0.0488131046295166
Testing time = 0.033370018005371094
Test Accuracy = 0.673527355357455
Test Fscore = 0.5574165715010785
LogisticRegression
Training time = 0.021611928939819336
Testing time = 0.008096933364868164
Test Accuracy = 0.6955514770145079
Test Fscore = 0.5839501477428981
GradientBoostingClassifier
Training time = 0.4492828845977783
Testing time = 0.0361332893371582
Test Accuracy = 0.701800384548156
Test Fscore = 0.5943984313266412
###Markdown
- Best train time: LogisticRegression
- Best test time: LogisticRegression
- Best Accuracy: GradientBoostingClassifier (by a hair)
- Best Fscore (beta = 0.5): GradientBoostingClassifier (by a hair)
###Code
# We save the model
filename = 'trained_classifier.sav'
pickle.dump(clf_D, open(filename, 'wb'))
loaded_model = pickle.load(open(filename, 'rb'))
###Output
_____no_output_____
###Markdown
The most performant classifier is GradientBoosting. We will use it and now try to optimize its parameters to get better accuracy and F-score. Before we do that, let us get some intuition about the internals of the model after training. This is the feature that has the biggest impact on the classification of success:
###Code
best_feature_index = np.where(loaded_model.feature_importances_ == loaded_model.feature_importances_.max())
features.columns[best_feature_index]
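# Optionally (a quick sketch, not required for the analysis), rank all features by
# their learned importance instead of looking only at the single maximum:
importance_ranking = pd.Series(loaded_model.feature_importances_,
                               index=features.columns).sort_values(ascending=False)
importance_ranking.head(10)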
###Output
_____no_output_____
###Markdown
Grid search on best performing algorithm
###Code
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
loaded_model
# TODO: Initialize the classifier again
GradientBoostingClassifier_optimal = GradientBoostingClassifier()
# Create the parameters list you wish to tune, using a dictionary if needed.
def grid_search(model, scorer):
# specify parameters for grid search
parameters = {
'learning_rate': [0.1, 0.5, 1],
'max_depth': [3, 4],
'n_estimators': [100, 125, 150],
}
# create grid search object
cv = GridSearchCV(model, param_grid=parameters, scoring=scorer)
return cv
# fbeta_score scoring object using make_scorer()
scorer = make_scorer(fbeta_score, beta=0.5)
# Perform grid search on the classifier using 'scorer' as the scoring method using GridSearchCV()
grid_object = grid_search(GradientBoostingClassifier_optimal, scorer)
# Fit the grid search object to the training data
grid_fit = grid_object.fit(X_train, y_train)
# Get the estimator
best_clf = grid_fit.best_estimator_
# We save the model
filename = 'best_trained_classifier.sav'
pickle.dump(best_clf, open(filename, 'wb'))
best_clf = pickle.load(open(filename, 'rb'))
# Make predictions using the unoptimized and model
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-after scores
print("Unoptimized model\n------")
print("Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5)))
print("\nOptimized Model\n------")
print("Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
best_clf
###Output
Unoptimized model
------
Accuracy score on testing data: 0.7082
F-score on testing data: 0.6026
Optimized Model
------
Final accuracy score on the testing data: 0.7105
Final F-score on the testing data: 0.6086
test/testdata/GenerateConvolutedData.ipynb
###Markdown
Generate data with small beads and Poisson noise from experimental PSF
###Code
import numpy as np
import tifffile
import napari
# Create a 3D volume in which to place synthetic beads
ashape = (256,256,256)
a = np.zeros(ashape, dtype=float)
#Add a few cubes in grid-like locations
cubesize=2
cubespacing=60
for iz in range(int(cubespacing/2),ashape[0],cubespacing):
for iy in range(int(cubespacing/2),ashape[1],cubespacing):
for ix in range(int(cubespacing/2),ashape[2],cubespacing):
a[iz:iz+cubesize , iy:iy+cubesize , ix:ix+cubesize] = np.ones((cubesize,cubesize,cubesize))
nview_data = napari.view_image(a, ndisplay=3)
#OK
#Optionally save the data
tifffile.imsave('test/gendata1_raw.tif', a)
###Output
_____no_output_____
###Markdown
Convolve data with the experimental 'Rosalind' PSF. Read the data first
###Code
psfdata=tifffile.imread('PSF_RFI_8bit.tif')
type(psfdata)
psfdata.dtype
psfdata.shape
psfdata_norm = (psfdata.astype(float) - psfdata.min() ) / (psfdata.max() - psfdata.min())
nview_psf = napari.view_image(psfdata_norm, ndisplay=3)
###Output
Exception in callback BaseAsyncIOLoop._handle_events(4036, 1)
handle: <Handle BaseAsyncIOLoop._handle_events(4036, 1)>
Traceback (most recent call last):
File "C:\Users\Luis\miniconda3\envs\dev\lib\asyncio\events.py", line 81, in _run
self._context.run(self._callback, *self._args)
File "C:\Users\Luis\miniconda3\envs\dev\lib\site-packages\tornado\platform\asyncio.py", line 189, in _handle_events
handler_func(fileobj, events)
File "C:\Users\Luis\miniconda3\envs\dev\lib\site-packages\zmq\eventloop\zmqstream.py", line 452, in _handle_events
self._handle_recv()
File "C:\Users\Luis\miniconda3\envs\dev\lib\site-packages\zmq\eventloop\zmqstream.py", line 481, in _handle_recv
self._run_callback(callback, msg)
File "C:\Users\Luis\miniconda3\envs\dev\lib\site-packages\zmq\eventloop\zmqstream.py", line 431, in _run_callback
callback(*args, **kwargs)
File "C:\Users\Luis\miniconda3\envs\dev\lib\site-packages\jupyter_client\threaded.py", line 121, in _handle_recv
msg_list = self.ioloop._asyncio_event_loop.run_until_complete(get_msg(future_msg))
File "C:\Users\Luis\miniconda3\envs\dev\lib\asyncio\base_events.py", line 592, in run_until_complete
self._check_running()
File "C:\Users\Luis\miniconda3\envs\dev\lib\asyncio\base_events.py", line 554, in _check_running
raise RuntimeError(
RuntimeError: Cannot run the event loop while another loop is running
###Markdown
Convolve
###Code
import scipy.signal
data_convolved = scipy.signal.convolve(a, psfdata_norm, mode='same')
data_convolved.shape
#normalises to 0-255 range
data_convolved = (data_convolved - data_convolved.min()) / (data_convolved.max() - data_convolved.min())*255
print(data_convolved.max())
print(data_convolved.min())
nview_dataconv = napari.view_image(data_convolved,ndisplay=3)
###Output
_____no_output_____
###Markdown
Add Poisson noise
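Recall that for Poisson noise the variance equals the mean, so using the voxel intensity as the lambda parameter makes brighter voxels noisier in absolute terms, while their relative noise scales as 1/sqrt(intensity).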
###Code
#data_convolved_noised = data_convolved + np.random.poisson(256 , size=ashape).astype(np.float32)/80
#This method of adding does not look right. The original intensity should be the lambda poisson parameter in the function
rng = np.random.default_rng()
data_convolved_noised = rng.poisson(lam = data_convolved)
nview_data_noised = napari.view_image(data_convolved_noised,ndisplay=3)
data_convolved_noised_uint8 = ((data_convolved_noised - data_convolved_noised.min()) / ( data_convolved_noised.max() - data_convolved_noised.min() ) *255 ).astype(np.uint8)
tifffile.imsave('test/gendata_psfconv_poiss.tif', data_convolved_noised_uint8)
###Output
_____no_output_____
###Markdown
Create large data
###Code
import numpy as np
import tifffile
import napari
import scipy.signal
# Create a 3D volume in which to place synthetic beads
ashape = (60,1026,1544) # Casper LM size
a = np.zeros(ashape, dtype=float)
#a = np.random.poisson(256 , size=(size0,size0,size0)).astype(np.float32)/2000
#Add a few cubes in grid-like locations
cubesize=2
cubespacing=67
for iz in range(5,ashape[0],cubespacing):
for iy in range(5,ashape[1],cubespacing):
for ix in range(5,ashape[2],cubespacing):
a[iz:iz+cubesize , iy:iy+cubesize , ix:ix+cubesize] = np.ones((cubesize,cubesize,cubesize))
#Read psf
psfdata=tifffile.imread('PSF_RFI_8bit.tif')
psfdata_norm = (psfdata.astype(float) - psfdata.min() ) / (psfdata.max() - psfdata.min())
#Convolve
data_convolved = scipy.signal.convolve(a, psfdata_norm, mode='same')
#Adjust max/min and intensity
data_convolved = (data_convolved - data_convolved.min()) / (data_convolved.max() - data_convolved.min())*255
#Noisify with Poisson
rng = np.random.default_rng()
data_convolved_noised = rng.poisson(lam = data_convolved)
data_convolved_noised_uint8 = ((data_convolved_noised - data_convolved_noised.min()) / ( data_convolved_noised.max() - data_convolved_noised.min() ) *255 ).astype(np.uint8)
tifffile.imsave('gendata_psfconv_poiss_large.tif', data_convolved_noised_uint8)
###Output
_____no_output_____
###Markdown
Create very large data
###Code
import numpy as np
import tifffile
import napari
import scipy.signal
# Create a 3D volume in which to place synthetic beads
ashape = (51,2048,2048) # Jeonyoon Choi
a = np.zeros(ashape, dtype=float)
#a = np.random.poisson(256 , size=(size0,size0,size0)).astype(np.float32)/2000
#Add a few cubes in grid-like locations
cubesize=2
cubespacing=67
for iz in range(5,ashape[0],cubespacing):
for iy in range(5,ashape[1],cubespacing):
for ix in range(5,ashape[2],cubespacing):
a[iz:iz+cubesize , iy:iy+cubesize , ix:ix+cubesize] = np.ones((cubesize,cubesize,cubesize))
#Read psf
psfdata=tifffile.imread('PSF_RFI_8bit.tif')
psfdata_norm = (psfdata.astype(float) - psfdata.min() ) / (psfdata.max() - psfdata.min())
#Convolve
data_convolved = scipy.signal.convolve(a, psfdata_norm, mode='same')
#Adjust max/min and intensity
data_convolved = (data_convolved - data_convolved.min()) / (data_convolved.max() - data_convolved.min())*255
#Noisify with Poisson
rng = np.random.default_rng()
data_convolved_noised = rng.poisson(lam = data_convolved)
data_convolved_noised_uint8 = ((data_convolved_noised - data_convolved_noised.min()) / ( data_convolved_noised.max() - data_convolved_noised.min() ) *255 ).astype(np.uint8)
nview_data_noised = napari.view_image(data_convolved_noised_uint8,ndisplay=3)
tifffile.imsave('gendata_psfconv_poiss_vlarge.tif', data_convolved_noised_uint8)
###Output
_____no_output_____
classifier/notebooks/.ipynb_checkpoints/very-simple-pytorch-training-0-59-checkpoint.ipynb
###Markdown
Cool Imports
###Code
import pandas as pd
import time
import torchvision
import torch.nn as nn
from tqdm import tqdm_notebook as tqdm
from PIL import Image, ImageFile
from torch.utils.data import Dataset
import torch
import torch.optim as optim
from torchvision import transforms
from torch.optim import lr_scheduler
import os
device = torch.device("cuda:0")
ImageFile.LOAD_TRUNCATED_IMAGES = True
###Output
_____no_output_____
###Markdown
Dataset Class
###Code
class RetinopathyDatasetTrain(Dataset):
def __init__(self, csv_file):
self.data = pd.read_csv(csv_file)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
img_name = os.path.join('../data/train_images', self.data.loc[idx, 'id_code'] + '.png')
image = Image.open(img_name)
image = image.resize((256, 256), resample=Image.BILINEAR)
label = torch.tensor(self.data.loc[idx, 'diagnosis'])
return {'image': transforms.ToTensor()(image),
'labels': label
}
###Output
_____no_output_____
###Markdown
Get the model
###Code
model = torchvision.models.resnet101(pretrained=True)
#model.load_state_dict(torch.load("../data/resnet101-5d3b4d8f.pth"))
num_features = model.fc.in_features
model.fc = nn.Linear(num_features, 1)  # num_features == 2048 for resnet101
model = model.to(device)
###Output
Downloading: "https://download.pytorch.org/models/resnet101-5d3b4d8f.pth" to /home/ags/.cache/torch/checkpoints/resnet101-5d3b4d8f.pth
100%|██████████| 178728960/178728960 [00:55<00:00, 3221921.18it/s]
###Markdown
Create dataset + optimizer
###Code
train_dataset = RetinopathyDatasetTrain(csv_file='../input/aptos2019-blindness-detection/train.csv')
data_loader = torch.utils.data.DataLoader(train_dataset, batch_size=16, shuffle=True, num_workers=4)
plist = [
{'params': model.layer4.parameters(), 'lr': 1e-4, 'weight': 0.001},
{'params': model.fc.parameters(), 'lr': 1e-3}
]
optimizer = optim.Adam(plist, lr=0.001)
scheduler = lr_scheduler.StepLR(optimizer, step_size=10)
###Output
_____no_output_____
###Markdown
Training Loop
###Code
since = time.time()
criterion = nn.MSELoss()
num_epochs = 15
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
scheduler.step()
model.train()
running_loss = 0.0
tk0 = tqdm(data_loader, total=int(len(data_loader)))
counter = 0
for bi, d in enumerate(tk0):
inputs = d["image"]
labels = d["labels"].view(-1, 1)
inputs = inputs.to(device, dtype=torch.float)
labels = labels.to(device, dtype=torch.float)
optimizer.zero_grad()
with torch.set_grad_enabled(True):
outputs = model(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
running_loss += loss.item() * inputs.size(0)
counter += 1
tk0.set_postfix(loss=(running_loss / (counter * data_loader.batch_size)))
epoch_loss = running_loss / len(data_loader)
print('Training Loss: {:.4f}'.format(epoch_loss))
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
torch.save(model.state_dict(), "model.bin")
###Output
Epoch 0/14
----------
Lec11_Recurrent Neural Networks/Lec11_Many to One Classification by Stacked Bi-directional GRU with Drop out.ipynb
###Markdown
CS 20 : TensorFlow for Deep Learning Research Lecture 11 : Recurrent Neural Networks
Simple example for Many to One Classification (word sentiment classification) by Stacked Bi-directional Gated Recurrent Unit with Drop out.
Many to One Classification by Stacked Bi-directional GRU with Drop out
- Creating the **data pipeline** with `tf.data`
- Preprocessing word sequences (variable input sequence length) using `padding technique` by `user function (pad_seq)`
- Using `tf.nn.embedding_lookup` for getting vector of tokens (eg. word, character)
- Creating the model as **Class**
- Applying **Drop out** to model by `tf.contrib.rnn.DropoutWrapper`
- Applying **Stacking** and **dynamic rnn** to model by `tf.contrib.rnn.stack_bidirectional_dynamic_rnn`
- Reference
  - https://github.com/golbin/TensorFlow-Tutorials/blob/master/10%20-%20RNN/02%20-%20Autocomplete.py
  - https://github.com/aisolab/TF_code_examples_for_Deep_learning/blob/master/Tutorial%20of%20implementing%20Sequence%20classification%20with%20RNN%20series.ipynb
  - https://pozalabs.github.io/blstm/
Setup
###Code
import os, sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
import string
%matplotlib inline
slim = tf.contrib.slim
print(tf.__version__)
###Output
1.8.0
###Markdown
Prepare example data
###Code
words = ['good', 'bad', 'amazing', 'so good', 'bull shit', 'awesome']
y = [[1.,0.], [0.,1.], [1.,0.], [1., 0.],[0.,1.], [1.,0.]]
# Character quantization
char_space = string.ascii_lowercase
char_space = char_space + ' ' + '*'
char_space
char_dic = {char : idx for idx, char in enumerate(char_space)}
print(char_dic)
###Output
{'a': 0, 'b': 1, 'c': 2, 'd': 3, 'e': 4, 'f': 5, 'g': 6, 'h': 7, 'i': 8, 'j': 9, 'k': 10, 'l': 11, 'm': 12, 'n': 13, 'o': 14, 'p': 15, 'q': 16, 'r': 17, 's': 18, 't': 19, 'u': 20, 'v': 21, 'w': 22, 'x': 23, 'y': 24, 'z': 25, ' ': 26, '*': 27}
###Markdown
Create pad_seq function
###Code
def pad_seq(sequences, max_len, dic):
seq_len, seq_indices = [], []
for seq in sequences:
seq_len.append(len(seq))
seq_idx = [dic.get(char) for char in seq]
seq_idx += (max_len - len(seq_idx)) * [dic.get('*')] # 27 is idx of meaningless token "*"
seq_indices.append(seq_idx)
return seq_len, seq_indices
###Output
_____no_output_____
###Markdown
Apply pad_seq function to data
###Code
max_length = 10
X_length, X_indices = pad_seq(sequences = words, max_len = max_length, dic = char_dic)
print(X_length)
print(np.shape(X_indices))
###Output
[4, 3, 7, 7, 9, 7]
(6, 10)
###Markdown
Define CharStackedBiGRU class
###Code
class CharStackedBiGRU:
def __init__(self, X_length, X_indices, y, n_of_classes, hidden_dims, dic):
# data pipeline
with tf.variable_scope('input_layer'):
self._X_length = X_length
self._X_indices = X_indices
self._y = y
one_hot = tf.eye(len(dic), dtype = tf.float32)
self._one_hot = tf.get_variable(name='one_hot_embedding', initializer = one_hot,
trainable = False) # because we will not train the embedding vector
self._X_batch = tf.nn.embedding_lookup(params = self._one_hot, ids = self._X_indices)
self._keep_prob = tf.placeholder(dtype = tf.float32)
# Stacked Bi-directional GRU with Drop out
with tf.variable_scope('stacked_bi-directional_gru'):
# forward
gru_fw_cells = []
for hidden_dim in hidden_dims:
gru_fw_cell = tf.contrib.rnn.GRUCell(num_units = hidden_dim, activation = tf.nn.tanh)
gru_fw_cell = tf.contrib.rnn.DropoutWrapper(cell = gru_fw_cell, output_keep_prob = self._keep_prob)
gru_fw_cells.append(gru_fw_cell)
# backword
gru_bw_cells = []
for hidden_dim in hidden_dims:
gru_bw_cell = tf.contrib.rnn.GRUCell(num_units = hidden_dim, activation = tf.nn.tanh)
gru_bw_cell = tf.contrib.rnn.DropoutWrapper(cell = gru_bw_cell, output_keep_prob = self._keep_prob)
gru_bw_cells.append(gru_bw_cell)
_, output_state_fw, output_state_bw = \
tf.contrib.rnn.stack_bidirectional_dynamic_rnn(cells_fw = gru_fw_cells, cells_bw = gru_bw_cells,
inputs = self._X_batch,
sequence_length = self._X_length,
dtype = tf.float32)
final_state = tf.concat([output_state_fw[-1], output_state_bw[-1]], axis = 1)
with tf.variable_scope('output_layer'):
self._score = slim.fully_connected(inputs = final_state, num_outputs = n_of_classes, activation_fn = None)
with tf.variable_scope('loss'):
self.ce_loss = tf.losses.softmax_cross_entropy(onehot_labels = self._y, logits = self._score)
with tf.variable_scope('prediction'):
self._prediction = tf.argmax(input = self._score, axis = -1, output_type = tf.int32)
def predict(self, sess, X_length, X_indices, keep_prob = 1.):
feed_prediction = {self._X_length : X_length, self._X_indices : X_indices, self._keep_prob : keep_prob}
return sess.run(self._prediction, feed_dict = feed_prediction)
###Output
_____no_output_____
###Markdown
Create a model of CharStackedBiGRU
###Code
# hyper-parameters
lr = .003
epochs = 10
batch_size = 2
total_step = int(np.shape(X_indices)[0] / batch_size)
print(total_step)
## create data pipeline with tf.data
tr_dataset = tf.data.Dataset.from_tensor_slices((X_length, X_indices, y))
tr_dataset = tr_dataset.shuffle(buffer_size = 20)
tr_dataset = tr_dataset.batch(batch_size = batch_size)
tr_iterator = tr_dataset.make_initializable_iterator()
print(tr_dataset)
X_length_mb, X_indices_mb, y_mb = tr_iterator.get_next()
char_stacked_bi_gru = CharStackedBiGRU(X_length = X_length_mb, X_indices = X_indices_mb,
y = y_mb, n_of_classes = 2, hidden_dims = [16,16], dic = char_dic)
###Output
_____no_output_____
###Markdown
Create training op and train model
###Code
## create training op
opt = tf.train.AdamOptimizer(learning_rate = lr)
training_op = opt.minimize(loss = char_stacked_bi_gru.ce_loss)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
tr_loss_hist = []
for epoch in range(epochs):
avg_tr_loss = 0
tr_step = 0
sess.run(tr_iterator.initializer)
try:
while True:
_, tr_loss = sess.run(fetches = [training_op, char_stacked_bi_gru.ce_loss],
feed_dict = {char_stacked_bi_gru._keep_prob : .5})
avg_tr_loss += tr_loss
tr_step += 1
except tf.errors.OutOfRangeError:
pass
avg_tr_loss /= tr_step
tr_loss_hist.append(avg_tr_loss)
print('epoch : {:3}, tr_loss : {:.3f}'.format(epoch + 1, avg_tr_loss))
plt.plot(tr_loss_hist, label = 'train')
yhat = char_stacked_bi_gru.predict(sess = sess, X_length = X_length, X_indices = X_indices)
print('training acc: {:.2%}'.format(np.mean(yhat == np.argmax(y, axis = -1))))
###Output
training acc: 83.33%
Week3/Day2.ipynb
###Markdown
Week 3 Intro to NLP - Day 2
###Code
%matplotlib inline
%pprint
import nltk
import matplotlib
import matplotlib.pyplot as plt
from nltk.corpus import gutenberg
from nltk.corpus import brown
nltk.corpus.gutenberg.fileids()
emma = nltk.corpus.gutenberg.words('austen-emma.txt')
len(emma)
###Output
_____no_output_____
###Markdown
Tokenization
###Code
gutenberg.fileids()
for fileid in gutenberg.fileids():
    print(fileid, gutenberg.raw(fileid)[:65], '...')
from nltk.corpus import webtext
for fileid in webtext.fileids():
print(fileid, webtext.raw(fileid)[:65], '...')
###Output
firefox.txt Cookie Manager: "Don't allow sites that set removed cookies to se ...
grail.txt SCENE 1: [wind] [clop clop clop]
KING ARTHUR: Whoa there! [clop ...
overheard.txt White guy: So, do you have any plans for this evening?
Asian girl ...
pirates.txt PIRATES OF THE CARRIBEAN: DEAD MAN'S CHEST, by Ted Elliott & Terr ...
singles.txt 25 SEXY MALE, seeks attrac older single lady, for discreet encoun ...
wine.txt Lovely delicate, fragrant Rhone wine. Polished leather and strawb ...
###Markdown
Brown Corpus
###Code
brown
brown.categories()
brown.words()
brown.words(categories="humor")
brown.fileids()
brown.sents(categories=['adventure','humor','mystery']) #sentences
humor_text = brown.words(categories='humor')
fdist = nltk.FreqDist(word.lower() for word in humor_text)
modals = ['can','could','may','must']
for m in modals:
print(m + ":" , fdist[m],end = ' ')
cfd = nltk.ConditionalFreqDist(
(genre,word)
for genre in brown.categories()
for word in brown.words(categories=genre)
)
genres = ['humor','news','hobbies']
pronouns = ['she','her','hers','he','him','his','it','its','they','them','theirs']
cfd.tabulate(conditions=genres,samples=pronouns)
# We create a matrix with genres in rows and words in columns
###Output
she her hers he him his it its they them theirs
humor 58 62 0 146 48 137 162 16 70 49 2
news 42 103 0 451 93 399 363 174 205 96 0
hobbies 21 16 0 155 49 238 476 150 177 127 0
inference_exploration/colab/tfhub_image_inference.ipynb
###Markdown
###Code
%tensorflow_version 2.x
import numpy as np
import PIL.Image as Image
import matplotlib.pylab as plt
import time
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers
#classifier_url ="https://tfhub.dev/google/tf2-preview/mobilenet_v2/classification/2"
#classifier_url ="https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/classification/4"
#classifier_url ="https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/classification/4"
#classifier_url = "https://tfhub.dev/google/imagenet/mobilenet_v2_035_224/classification/4"
classifier_url = "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/classification/4"
IMAGE_SHAPE = (224, 224)
classifier = tf.keras.Sequential([
hub.KerasLayer(classifier_url, input_shape=IMAGE_SHAPE+(3,))
])
img_file = tf.keras.utils.get_file('image1.jpg','https://storage.googleapis.com/demostration_images/2.jpg')
img = Image.open(img_file).resize(IMAGE_SHAPE)
img_array = np.array(img)/255.0
img_array.shape
result = classifier.predict(img_array[np.newaxis, ...])
result.shape
predicted_class = np.argmax(result[0], axis=-1)
predicted_class
labels_path = tf.keras.utils.get_file('ImageNetLabels.txt','https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_labels = np.array(open(labels_path).read().splitlines())
plt.imshow(img_array)
plt.axis('off')
predicted_class_name = imagenet_labels[predicted_class]
_ = plt.title("Prediction: " + predicted_class_name.title())
start = time.time()
result = classifier.predict(img_array[np.newaxis, ...])
predicted_class = np.argmax(result[0], axis=-1)
predicted_class_name = imagenet_labels[predicted_class]
end = time.time()
print(end - start)
###Output
_____no_output_____ |
Deep-Learning/Multiple-Class/Letter.ipynb | ###Markdown
Code for the Big Data final-term project - Amin Zayeromali. Importing the required Python libraries
###Code
# first neural network with keras tutorial with letter Dataset
# Dataset Link : https://archive.ics.uci.edu/ml/datasets/Letter+Recognition
# Powered By Amin Zayeromali - [email protected]
from numpy import loadtxt
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.utils.vis_utils import plot_model
import tensorflow as tf
from keras.models import Model,load_model
from sklearn import preprocessing
import math
###Output
_____no_output_____
###Markdown
Loading the dataset and a description of its characteristics
###Code
# DataSet Information
# Author: David J. Slate
# Source: [UCI](https://archive.ics.uci.edu/ml/datasets/Letter+Recognition) - 01-01-1991
# Please cite: P. W. Frey and D. J. Slate. "Letter Recognition Using Holland-style Adaptive Classifiers". Machine Learning 6(2), 1991
# TITLE: Letter Image Recognition Data
#
# The objective is to identify each of a large number of black-and-white
# rectangular pixel displays as one of the 26 capital letters in the English
# alphabet. The character images were based on 20 different fonts and each
# letter within these 20 fonts was randomly distorted to produce a file of
# 20,000 unique stimuli. Each stimulus was converted into 16 primitive
# numerical attributes (statistical moments and edge counts) which were then
# scaled to fit into a range of integer values from 0 through 15. We
# typically train on the first 16000 items and then use the resulting model
# to predict the letter category for the remaining 4000. See the article
# cited above for more details.
# load the dataset
dataset = loadtxt('dataset_6_letter.txt', dtype=str, delimiter=',')
# split into input (X) and output (y) variables
X = dataset[:10000,0:16] # select 10,000 records from the data for training
X = X.astype(float)
y = dataset[:10000,16]
le = preprocessing.LabelEncoder()
y = le.fit_transform(y).astype(int)
print("Number Of Unique Label on Target :", len(set(y))) # determine the number of unique classes
y = tf.keras.utils.to_categorical(y, num_classes=len(set(y))) # convert the integer labels to binary (one-hot) vectors
# show the number of dataset records (features and target labels)
print(X.shape)
print(y.shape)
print(X)
print(y) # output class labels encoded in one-hot (binary) format
###Output
[[0. 0. 0. ... 0. 0. 1.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 1.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Building the model for this multi-class dataset. In the first step we build and fit a model with just two dense layers, a 16-dimensional input and a 26-class output. param_number = output_channel_number * (input_channel_number + 1)
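Applying this formula to the model defined below: the first dense layer has 32 * (16 + 1) = 544 parameters and the output layer has 26 * (32 + 1) = 858 parameters, giving the total of 1,402 shown in the model summary.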
###Code
# define the keras model
model = Sequential()
model.add(Dense(32, input_dim=16, activation='relu')) # the number of input neurons equals the number of features (16)
#model.add(Dense(64, activation='relu'))
model.add(Dense(26, activation='sigmoid')) # the number of output neurons equals the number of unique target class labels
# compile the keras model
#from keral.optimizer import adam
model.compile(loss='categorical_crossentropy', optimizer='RMSProp' , metrics=['accuracy'])#categorical_crossentropy or #binary_crossentropy
print(model.summary())
plot_model(model, show_shapes=True, to_file='mymodel.png')
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_2 (Dense) (None, 32) 544
dense_3 (Dense) (None, 26) 858
=================================================================
Total params: 1,402
Trainable params: 1,402
Non-trainable params: 0
_________________________________________________________________
None
('You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) ', 'for plot_model/model_to_dot to work.')
###Markdown
We build and fit the model with epochs=50 and batch_size=10
###Code
# fit the keras model on the dataset
history= model.fit(X, y, epochs=50, batch_size=10)
import matplotlib.pyplot as plt
plt.figure()
plt.plot(history.history['loss'])
import matplotlib.pyplot as plt
plt.figure()
plt.plot(history.history['accuracy'])
# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
predictions = model.predict(X)
# summarize the first 5 cases
for i in range(10):
print("\n------------------------------- Case ",i+1," ---------------------------------------------------\n")
print("Data is :\n", X[i].tolist(),"\nPredicted Value is :\n" , predictions[i],"\nReal Value is :\n", y[i])
###Output
------------------------------- Case 1 ---------------------------------------------------
Data is :
[2.0, 4.0, 4.0, 3.0, 2.0, 7.0, 8.0, 2.0, 9.0, 11.0, 7.0, 7.0, 1.0, 8.0, 5.0, 6.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 1.]
------------------------------- Case 2 ---------------------------------------------------
Data is :
[4.0, 7.0, 5.0, 5.0, 5.0, 5.0, 9.0, 6.0, 4.0, 8.0, 7.0, 9.0, 2.0, 9.0, 7.0, 10.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 3 ---------------------------------------------------
Data is :
[7.0, 10.0, 8.0, 7.0, 4.0, 8.0, 8.0, 5.0, 10.0, 11.0, 2.0, 8.0, 2.0, 5.0, 5.0, 10.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 4 ---------------------------------------------------
Data is :
[4.0, 9.0, 5.0, 7.0, 4.0, 7.0, 7.0, 13.0, 1.0, 7.0, 6.0, 8.0, 3.0, 8.0, 0.0, 8.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 5 ---------------------------------------------------
Data is :
[6.0, 7.0, 8.0, 5.0, 4.0, 7.0, 6.0, 3.0, 7.0, 10.0, 7.0, 9.0, 3.0, 8.0, 3.0, 7.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 6 ---------------------------------------------------
Data is :
[4.0, 7.0, 5.0, 5.0, 3.0, 4.0, 12.0, 2.0, 5.0, 13.0, 7.0, 5.0, 1.0, 10.0, 1.0, 7.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 7 ---------------------------------------------------
Data is :
[6.0, 10.0, 8.0, 8.0, 4.0, 7.0, 8.0, 2.0, 5.0, 10.0, 7.0, 8.0, 5.0, 8.0, 1.0, 8.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 8 ---------------------------------------------------
Data is :
[1.0, 0.0, 2.0, 0.0, 1.0, 6.0, 10.0, 7.0, 2.0, 7.0, 5.0, 8.0, 2.0, 7.0, 4.0, 9.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 9 ---------------------------------------------------
Data is :
[5.0, 9.0, 7.0, 6.0, 7.0, 7.0, 7.0, 2.0, 4.0, 9.0, 8.0, 9.0, 7.0, 6.0, 2.0, 8.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 10 ---------------------------------------------------
Data is :
[1.0, 0.0, 2.0, 1.0, 1.0, 5.0, 7.0, 8.0, 6.0, 7.0, 6.0, 6.0, 2.0, 8.0, 3.0, 8.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
###Markdown
We build and fit the model with epochs=100 and batch_size=50
###Code
# define the keras model
model = Sequential()
model.add(Dense(32, input_dim=16, activation='relu')) # the number of input neurons equals the number of features (16)
#model.add(Dense(64, activation='relu'))
model.add(Dense(26, activation='sigmoid')) # the number of output neurons equals the number of unique target class labels
# compile the keras model
#from keral.optimizer import adam
model.compile(loss='categorical_crossentropy', optimizer='RMSProp' , metrics=['accuracy'])#categorical_crossentropy or #binary_crossentropy
print(model.summary())
plot_model(model, show_shapes=True, to_file='mymodel.png')
# fit the keras model on the dataset
history= model.fit(X, y, epochs=100, batch_size=50)
import matplotlib.pyplot as plt
plt.figure()
plt.plot(history.history['loss'])
import matplotlib.pyplot as plt
plt.figure()
plt.plot(history.history['accuracy'])
# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
predictions = model.predict(X)
# summarize the first 5 cases
for i in range(10):
print("\n------------------------------- Case ",i+1," ---------------------------------------------------\n")
print("Data is :\n", X[i].tolist(),"\nPredicted Value is :\n" , predictions[i],"\nReal Value is :\n", y[i])
###Output
------------------------------- Case 1 ---------------------------------------------------
Data is :
[2.0, 4.0, 4.0, 3.0, 2.0, 7.0, 8.0, 2.0, 9.0, 11.0, 7.0, 7.0, 1.0, 8.0, 5.0, 6.0]
Predicted Value is :
[5.0791823e-12 9.4748664e-10 1.2060948e-11 5.8751726e-10 6.6218621e-07
2.9167737e-08 3.4936717e-12 4.5911338e-13 1.2157548e-07 2.5409574e-06
5.9689021e-13 4.7515596e-09 2.1567156e-20 5.1742665e-19 2.7170157e-16
2.7021717e-13 6.1899943e-13 6.1158577e-14 2.2441856e-05 1.9091628e-09
7.1929551e-16 2.6599331e-17 1.8550800e-37 1.6649376e-07 4.2550276e-11
1.7629040e-03]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 1.]
------------------------------- Case 2 ---------------------------------------------------
Data is :
[4.0, 7.0, 5.0, 5.0, 5.0, 5.0, 9.0, 6.0, 4.0, 8.0, 7.0, 9.0, 2.0, 9.0, 7.0, 10.0]
Predicted Value is :
[5.70015226e-13 7.28875715e-09 1.03611715e-08 2.05981135e-11
1.02846037e-07 1.97222967e-08 2.32125394e-08 1.26638406e-08
5.16257349e-12 1.43038983e-10 2.23486607e-09 3.46163764e-10
2.37588117e-14 1.34143369e-12 2.55289956e-10 9.90668969e-09
1.25811128e-09 1.51890465e-08 2.61430855e-09 7.10842366e-12
1.61457727e-14 1.65369749e-11 2.36387064e-18 3.23924672e-11
1.95336748e-13 2.10108640e-15]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 3 ---------------------------------------------------
Data is :
[7.0, 10.0, 8.0, 7.0, 4.0, 8.0, 8.0, 5.0, 10.0, 11.0, 2.0, 8.0, 2.0, 5.0, 5.0, 10.0]
Predicted Value is :
[8.7941819e-17 1.5716913e-15 1.7086447e-14 1.3494807e-13 3.3105397e-11
2.0074209e-13 5.2398201e-13 8.2592031e-15 1.7060531e-13 1.4810406e-13
6.1072952e-12 5.8614158e-14 4.5985034e-21 5.6963884e-18 1.1110523e-13
5.3210964e-17 1.9942994e-14 4.6521983e-12 1.0758917e-09 4.7131179e-16
1.7413522e-20 2.5038672e-20 0.0000000e+00 9.9386211e-12 1.2857707e-22
2.5440672e-14]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 4 ---------------------------------------------------
Data is :
[4.0, 9.0, 5.0, 7.0, 4.0, 7.0, 7.0, 13.0, 1.0, 7.0, 6.0, 8.0, 3.0, 8.0, 0.0, 8.0]
Predicted Value is :
[1.02211783e-16 0.00000000e+00 9.03740056e-20 1.43331789e-17
1.98094571e-38 4.97769113e-28 3.64443823e-21 1.64161740e-11
1.42167053e-24 6.36054897e-19 1.34151325e-19 2.78852950e-19
1.23931349e-15 5.43787064e-12 1.44690087e-13 7.78477577e-22
3.28556579e-19 1.65173250e-20 9.93893301e-29 2.15342880e-26
1.08865743e-16 4.76463312e-18 2.12057072e-24 1.01803463e-26
4.09532430e-25 0.00000000e+00]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 5 ---------------------------------------------------
Data is :
[6.0, 7.0, 8.0, 5.0, 4.0, 7.0, 6.0, 3.0, 7.0, 10.0, 7.0, 9.0, 3.0, 8.0, 3.0, 7.0]
Predicted Value is :
[1.1706129e-12 2.7711992e-21 8.4702449e-13 1.8940207e-12 1.9752535e-18
1.2497628e-15 8.7255671e-12 1.6998430e-09 5.7274246e-15 2.2939346e-12
1.1515475e-09 1.3189306e-11 9.1088958e-15 5.3814849e-15 1.5070153e-12
1.9413922e-17 7.9546807e-16 1.0557829e-16 6.2205899e-13 4.8565578e-15
2.1460401e-11 8.7943247e-15 1.3875023e-26 4.4869605e-10 5.1545567e-15
8.9797106e-18]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 6 ---------------------------------------------------
Data is :
[4.0, 7.0, 5.0, 5.0, 3.0, 4.0, 12.0, 2.0, 5.0, 13.0, 7.0, 5.0, 1.0, 10.0, 1.0, 7.0]
Predicted Value is :
[4.67624779e-22 1.90015813e-23 4.39008468e-20 2.55727612e-21
4.72962423e-20 7.50199103e-09 9.54649819e-33 5.02205341e-21
4.18764720e-14 4.05101421e-16 6.96862831e-21 1.91938764e-26
1.24624110e-28 3.81936473e-15 1.28459150e-26 9.97044600e-15
1.13637272e-31 2.25727340e-26 3.18431737e-14 1.24017914e-11
1.34760832e-24 4.73127829e-15 1.23660755e-30 1.14864140e-16
2.12671830e-14 2.27379031e-32]
Real Value is :
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 7 ---------------------------------------------------
Data is :
[6.0, 10.0, 8.0, 8.0, 4.0, 7.0, 8.0, 2.0, 5.0, 10.0, 7.0, 8.0, 5.0, 8.0, 1.0, 8.0]
Predicted Value is :
[2.7237194e-16 5.0053619e-34 2.6539287e-20 1.6252307e-21 2.8186905e-33
1.7867945e-19 5.3766914e-27 1.1284633e-12 3.0108195e-20 8.9439890e-17
4.4030102e-11 4.2178361e-25 1.3706058e-14 6.6549759e-11 5.1205604e-20
1.2218595e-24 4.6991709e-30 3.0853253e-27 4.2553896e-20 3.9413349e-18
2.4292519e-14 8.1491124e-13 1.4974244e-17 1.8362031e-12 1.5255543e-18
4.4747886e-37]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 8 ---------------------------------------------------
Data is :
[1.0, 0.0, 2.0, 0.0, 1.0, 6.0, 10.0, 7.0, 2.0, 7.0, 5.0, 8.0, 2.0, 7.0, 4.0, 9.0]
Predicted Value is :
[7.76053267e-12 6.42372613e-12 4.64162432e-12 1.33093363e-12
1.25546830e-11 1.35013931e-10 5.14945975e-10 8.10619198e-08
1.33956075e-14 1.05922148e-13 3.18219451e-09 4.97246016e-14
5.96874841e-11 1.51634227e-09 2.20012222e-08 7.33375849e-09
2.88145757e-11 4.17596966e-05 1.30348995e-11 1.03573469e-14
3.14494927e-15 1.25454907e-11 1.10571623e-17 6.25636088e-12
1.00536207e-16 3.96807410e-22]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 9 ---------------------------------------------------
Data is :
[5.0, 9.0, 7.0, 6.0, 7.0, 7.0, 7.0, 2.0, 4.0, 9.0, 8.0, 9.0, 7.0, 6.0, 2.0, 8.0]
Predicted Value is :
[2.25077498e-13 2.95288736e-21 1.16579485e-17 7.15564897e-17
1.37176724e-25 2.92966826e-17 5.92893706e-25 6.05022889e-12
5.08465079e-19 4.58448383e-13 1.12940863e-12 2.69217660e-20
1.84450855e-08 1.29173450e-10 2.48257400e-19 3.72088749e-20
4.37024011e-25 8.45571144e-22 2.21973312e-21 2.30151705e-15
9.77987222e-12 5.86769348e-13 1.39885421e-11 5.94309087e-17
1.23471612e-17 2.29903411e-30]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 10 ---------------------------------------------------
Data is :
[1.0, 0.0, 2.0, 1.0, 1.0, 5.0, 7.0, 8.0, 6.0, 7.0, 6.0, 6.0, 2.0, 8.0, 3.0, 8.0]
Predicted Value is :
[4.48093452e-16 1.32554371e-14 3.52346659e-11 1.39929583e-07
1.16374778e-15 6.11002221e-12 1.67357156e-10 3.16687085e-08
8.73233885e-13 5.84096486e-12 9.81727026e-12 1.53747307e-10
4.26072141e-14 5.89674403e-12 1.56296469e-08 2.36489966e-10
1.02679094e-10 5.93845473e-10 9.89620712e-13 3.96246196e-11
2.49676235e-12 2.70283817e-12 5.75308685e-23 1.58805293e-12
5.41706057e-13 7.33919876e-21]
Real Value is :
[0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
###Markdown
We now build a model with three hidden layers and fit and train it with tuned, optimized values
###Code
# define the keras model
model = Sequential()
model.add(Dense(32, input_dim=16, activation='relu')) # the number of input neurons equals the number of features (16)
model.add(Dense(64, activation='relu'))
model.add(Dense(26, activation='sigmoid')) # the number of output neurons equals the number of unique target class labels
# compile the keras model
#from keral.optimizer import adam
model.compile(loss='categorical_crossentropy', optimizer='RMSProp' , metrics=['accuracy'])#categorical_crossentropy or #binary_crossentropy
print(model.summary())
plot_model(model, show_shapes=True, to_file='mymodel.png')
# fit the keras model on the dataset
history= model.fit(X, y, epochs=100, batch_size=50)
import matplotlib.pyplot as plt
plt.figure()
plt.plot(history.history['loss'])
import matplotlib.pyplot as plt
plt.figure()
plt.plot(history.history['accuracy'])
# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
predictions = model.predict(X)
# summarize the first 5 cases
for i in range(10):
print("\n------------------------------- Case ",i+1," ---------------------------------------------------\n")
print("Data is :\n", X[i].tolist(),"\nPredicted Value is :\n" , predictions[i],"\nReal Value is :\n", y[i])
###Output
------------------------------- Case 1 ---------------------------------------------------
Data is :
[2.0, 4.0, 4.0, 3.0, 2.0, 7.0, 8.0, 2.0, 9.0, 11.0, 7.0, 7.0, 1.0, 8.0, 5.0, 6.0]
Predicted Value is :
[3.3699525e-27 1.7475883e-27 3.2621699e-29 3.0790030e-25 2.9524769e-17
2.4167027e-23 1.2374475e-30 7.9540427e-31 1.3211787e-19 2.0308410e-23
8.4793450e-32 4.3147547e-29 0.0000000e+00 0.0000000e+00 1.8100696e-29
7.4487800e-35 8.1395273e-32 6.3023114e-36 2.6254349e-18 2.9661869e-21
1.1971732e-28 0.0000000e+00 0.0000000e+00 1.6084132e-19 1.7477472e-28
5.1523476e-14]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 1.]
------------------------------- Case 2 ---------------------------------------------------
Data is :
[4.0, 7.0, 5.0, 5.0, 5.0, 5.0, 9.0, 6.0, 4.0, 8.0, 7.0, 9.0, 2.0, 9.0, 7.0, 10.0]
Predicted Value is :
[9.54505350e-23 1.11601105e-14 1.19543337e-15 1.67069814e-21
9.99040125e-14 8.54327568e-13 9.47142183e-16 1.33617383e-15
6.72420600e-19 5.01013733e-19 2.41475880e-15 2.27642990e-16
1.51307245e-23 8.91949406e-24 1.52977283e-19 1.92084434e-12
3.45720223e-22 1.87472108e-15 7.90446752e-16 2.85328756e-18
2.55369909e-28 1.32631562e-17 2.84748337e-24 3.32210429e-25
1.16954199e-23 2.32232999e-23]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 3 ---------------------------------------------------
Data is :
[7.0, 10.0, 8.0, 7.0, 4.0, 8.0, 8.0, 5.0, 10.0, 11.0, 2.0, 8.0, 2.0, 5.0, 5.0, 10.0]
Predicted Value is :
[2.1290878e-26 3.9985319e-26 1.1112808e-25 1.3999086e-22 1.5544681e-19
1.1875343e-22 1.0592612e-22 1.0206702e-20 3.6537701e-18 2.6337177e-20
1.0320877e-19 9.8553101e-21 0.0000000e+00 2.1422706e-25 1.9047226e-23
2.1148409e-26 5.5063703e-23 8.6989171e-26 3.2602742e-17 9.1371560e-22
2.1224798e-25 5.6928198e-35 0.0000000e+00 1.3272158e-18 8.8345341e-31
1.8791279e-19]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 4 ---------------------------------------------------
Data is :
[4.0, 9.0, 5.0, 7.0, 4.0, 7.0, 7.0, 13.0, 1.0, 7.0, 6.0, 8.0, 3.0, 8.0, 0.0, 8.0]
Predicted Value is :
[1.4455014e-33 0.0000000e+00 4.7806239e-38 5.4463784e-36 0.0000000e+00
0.0000000e+00 0.0000000e+00 1.3159717e-28 0.0000000e+00 1.8585466e-34
0.0000000e+00 0.0000000e+00 0.0000000e+00 4.6123765e-31 9.3699672e-32
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
8.0751747e-38 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 5 ---------------------------------------------------
Data is :
[6.0, 7.0, 8.0, 5.0, 4.0, 7.0, 6.0, 3.0, 7.0, 10.0, 7.0, 9.0, 3.0, 8.0, 3.0, 7.0]
Predicted Value is :
[5.34932541e-17 6.28375619e-28 8.98112012e-17 3.69132988e-18
1.69050320e-19 1.03454348e-19 1.22183531e-18 1.93545180e-12
5.46930399e-20 6.61115992e-16 1.01973454e-12 1.43764541e-15
3.01818738e-22 7.24759275e-18 2.35731960e-22 1.32103724e-27
1.49455923e-25 1.02293227e-25 5.37785030e-16 5.41811605e-20
3.20527458e-17 3.37730641e-24 4.38512055e-32 2.11548037e-13
2.10903395e-17 2.10566135e-21]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 6 ---------------------------------------------------
Data is :
[4.0, 7.0, 5.0, 5.0, 3.0, 4.0, 12.0, 2.0, 5.0, 13.0, 7.0, 5.0, 1.0, 10.0, 1.0, 7.0]
Predicted Value is :
[9.0671031e-26 8.0128804e-31 2.9025539e-28 1.7947017e-30 6.1530646e-33
1.4984086e-15 1.3486983e-32 2.7785385e-26 3.5643770e-21 7.2633431e-25
3.2114452e-23 7.9614344e-35 1.0574570e-36 4.1997210e-27 9.6073413e-30
1.3445288e-22 0.0000000e+00 5.9205697e-34 5.0460966e-21 1.0660580e-17
2.2842275e-28 1.2684607e-23 2.8637091e-35 3.7209598e-27 1.1066486e-25
0.0000000e+00]
Real Value is :
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 7 ---------------------------------------------------
Data is :
[6.0, 10.0, 8.0, 8.0, 4.0, 7.0, 8.0, 2.0, 5.0, 10.0, 7.0, 8.0, 5.0, 8.0, 1.0, 8.0]
Predicted Value is :
[3.9831663e-21 0.0000000e+00 2.3378357e-27 1.0043137e-27 0.0000000e+00
1.3618503e-25 2.1738416e-32 2.5167530e-23 3.5469825e-24 1.5614770e-24
5.5300421e-19 1.0130400e-28 4.9534439e-21 1.1764645e-15 1.7788496e-28
5.9974934e-34 0.0000000e+00 3.1075001e-32 1.1832334e-23 2.9697315e-28
1.4812176e-22 2.9631161e-25 5.4693319e-25 8.3448398e-22 2.9078998e-25
0.0000000e+00]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 8 ---------------------------------------------------
Data is :
[1.0, 0.0, 2.0, 0.0, 1.0, 6.0, 10.0, 7.0, 2.0, 7.0, 5.0, 8.0, 2.0, 7.0, 4.0, 9.0]
Predicted Value is :
[1.95754587e-23 1.19302314e-23 1.25759736e-21 7.23955032e-26
1.11616435e-29 1.08370682e-22 4.02747098e-23 2.54101178e-18
1.30083924e-27 6.33300414e-24 9.44539344e-19 2.05195126e-24
3.57671333e-23 6.85295227e-21 1.95829034e-18 1.33112376e-20
6.21923778e-24 9.65733502e-14 3.45521577e-27 5.04173876e-32
4.22932664e-31 7.97414499e-26 9.31786499e-30 1.79296644e-27
1.65048870e-31 2.29914211e-35]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 9 ---------------------------------------------------
Data is :
[5.0, 9.0, 7.0, 6.0, 7.0, 7.0, 7.0, 2.0, 4.0, 9.0, 8.0, 9.0, 7.0, 6.0, 2.0, 8.0]
Predicted Value is :
[1.3609261e-17 3.3205230e-31 4.4220988e-22 9.7380872e-19 1.3829874e-29
9.2180227e-23 4.0151640e-25 2.1818076e-18 2.6464222e-24 1.2696219e-24
1.8455407e-18 5.0652562e-25 9.2080096e-15 1.1469233e-19 9.3476172e-25
3.6040911e-31 4.7811857e-32 5.7680954e-23 7.6299018e-28 3.8097919e-29
4.6552372e-18 2.2748831e-25 4.3599062e-22 2.6835571e-25 6.1984727e-22
2.3007214e-35]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 10 ---------------------------------------------------
Data is :
[1.0, 0.0, 2.0, 1.0, 1.0, 5.0, 7.0, 8.0, 6.0, 7.0, 6.0, 6.0, 2.0, 8.0, 3.0, 8.0]
Predicted Value is :
[3.15353049e-35 1.08060628e-30 1.23884804e-26 1.61656820e-21
1.63326308e-33 3.38937983e-28 3.17318258e-31 1.02698428e-30
8.90864043e-33 5.94773463e-34 3.88607699e-35 2.52965021e-31
0.00000000e+00 2.55096897e-31 5.22464673e-26 1.55064594e-29
1.25410910e-37 8.55607257e-29 1.80731189e-34 1.04848144e-25
2.84395065e-27 0.00000000e+00 0.00000000e+00 0.00000000e+00
5.34057536e-33 1.68551085e-36]
Real Value is :
[0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
###Markdown
This time we run the model for 250 epochs
###Code
# define the keras model
model = Sequential()
model.add(Dense(32, input_dim=16, activation='relu')) # the number of input neurons equals the number of features (16)
model.add(Dense(64, activation='relu'))
model.add(Dense(26, activation='sigmoid')) # the number of output neurons equals the number of unique target class labels
# compile the keras model
#from keral.optimizer import adam
model.compile(loss='categorical_crossentropy', optimizer='RMSProp' , metrics=['accuracy'])#categorical_crossentropy or #binary_crossentropy
print(model.summary())
plot_model(model, show_shapes=True, to_file='mymodel.png')
# fit the keras model on the dataset
history= model.fit(X, y, epochs=250, batch_size=50)
import matplotlib.pyplot as plt
plt.figure()
plt.plot(history.history['loss'])
import matplotlib.pyplot as plt
plt.figure()
plt.plot(history.history['accuracy'])
# evaluate the keras model
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
predictions = model.predict(X)
# summarize the first 5 cases
for i in range(10):
print("\n------------------------------- Case ",i+1," ---------------------------------------------------\n")
print("Data is :\n", X[i].tolist(),"\nPredicted Value is :\n" , predictions[i],"\nReal Value is :\n", y[i])
###Output
------------------------------- Case 1 ---------------------------------------------------
Data is :
[2.0, 4.0, 4.0, 3.0, 2.0, 7.0, 8.0, 2.0, 9.0, 11.0, 7.0, 7.0, 1.0, 8.0, 5.0, 6.0]
Predicted Value is :
[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
2.9780912e-36]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 1.]
------------------------------- Case 2 ---------------------------------------------------
Data is :
[4.0, 7.0, 5.0, 5.0, 5.0, 5.0, 9.0, 6.0, 4.0, 8.0, 7.0, 9.0, 2.0, 9.0, 7.0, 10.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 3 ---------------------------------------------------
Data is :
[7.0, 10.0, 8.0, 7.0, 4.0, 8.0, 8.0, 5.0, 10.0, 11.0, 2.0, 8.0, 2.0, 5.0, 5.0, 10.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 4 ---------------------------------------------------
Data is :
[4.0, 9.0, 5.0, 7.0, 4.0, 7.0, 7.0, 13.0, 1.0, 7.0, 6.0, 8.0, 3.0, 8.0, 0.0, 8.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 5 ---------------------------------------------------
Data is :
[6.0, 7.0, 8.0, 5.0, 4.0, 7.0, 6.0, 3.0, 7.0, 10.0, 7.0, 9.0, 3.0, 8.0, 3.0, 7.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 6 ---------------------------------------------------
Data is :
[4.0, 7.0, 5.0, 5.0, 3.0, 4.0, 12.0, 2.0, 5.0, 13.0, 7.0, 5.0, 1.0, 10.0, 1.0, 7.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 7 ---------------------------------------------------
Data is :
[6.0, 10.0, 8.0, 8.0, 4.0, 7.0, 8.0, 2.0, 5.0, 10.0, 7.0, 8.0, 5.0, 8.0, 1.0, 8.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 8 ---------------------------------------------------
Data is :
[1.0, 0.0, 2.0, 0.0, 1.0, 6.0, 10.0, 7.0, 2.0, 7.0, 5.0, 8.0, 2.0, 7.0, 4.0, 9.0]
Predicted Value is :
[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 1.1148285e-36 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 9 ---------------------------------------------------
Data is :
[5.0, 9.0, 7.0, 6.0, 7.0, 7.0, 7.0, 2.0, 4.0, 9.0, 8.0, 9.0, 7.0, 6.0, 2.0, 8.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 10 ---------------------------------------------------
Data is :
[1.0, 0.0, 2.0, 1.0, 1.0, 5.0, 7.0, 8.0, 6.0, 7.0, 6.0, 6.0, 2.0, 8.0, 3.0, 8.0]
Predicted Value is :
[0.0000000e+00 0.0000000e+00 0.0000000e+00 9.0426407e-35 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00]
Real Value is :
[0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
###Markdown
Building the model with validation and early stopping on the loss, to obtain a better-optimized model
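In the run below, EarlyStopping(monitor='val_loss', patience=20) halts training once the validation loss has gone 20 consecutive epochs without improving (the best val_loss occurs at epoch 5, so training stops at epoch 25), while ModelCheckpoint(..., save_best_only=True) writes best_model.h5 only when val_accuracy improves, so the saved weights come from epoch 2 (val_accuracy 0.958).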
###Code
import matplotlib.pyplot as plt
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=20)
mc = ModelCheckpoint('best_model.h5', monitor='val_accuracy', mode='max', verbose=1, save_best_only=True)
history=model.fit(X, y, epochs=250, batch_size=10, verbose=1, validation_split=0.2,callbacks=[mc,es]) #validation_data=[test_x, test_y]
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['loss','val_loss'], loc='upper right')
plt.figure()
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.legend(['acc','val_acc'], loc='upper right')
###Output
Epoch 1/250
790/800 [============================>.] - ETA: 0s - loss: 0.1239 - accuracy: 0.9580
Epoch 00001: val_accuracy improved from -inf to 0.93250, saving model to best_model.h5
800/800 [==============================] - 3s 3ms/step - loss: 0.1244 - accuracy: 0.9578 - val_loss: 0.2222 - val_accuracy: 0.9325
Epoch 2/250
781/800 [============================>.] - ETA: 0s - loss: 0.1337 - accuracy: 0.9553
Epoch 00002: val_accuracy improved from 0.93250 to 0.95800, saving model to best_model.h5
800/800 [==============================] - 2s 3ms/step - loss: 0.1336 - accuracy: 0.9550 - val_loss: 0.1394 - val_accuracy: 0.9580
Epoch 3/250
796/800 [============================>.] - ETA: 0s - loss: 0.1396 - accuracy: 0.9541
Epoch 00003: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1397 - accuracy: 0.9541 - val_loss: 0.1647 - val_accuracy: 0.9475
Epoch 4/250
795/800 [============================>.] - ETA: 0s - loss: 0.1400 - accuracy: 0.9540
Epoch 00004: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1393 - accuracy: 0.9542 - val_loss: 0.1809 - val_accuracy: 0.9445
Epoch 5/250
798/800 [============================>.] - ETA: 0s - loss: 0.1289 - accuracy: 0.9578
Epoch 00005: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1294 - accuracy: 0.9576 - val_loss: 0.1346 - val_accuracy: 0.9550
Epoch 6/250
790/800 [============================>.] - ETA: 0s - loss: 0.1407 - accuracy: 0.9542
Epoch 00006: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1409 - accuracy: 0.9540 - val_loss: 0.1633 - val_accuracy: 0.9445
Epoch 7/250
795/800 [============================>.] - ETA: 0s - loss: 0.1329 - accuracy: 0.9574
Epoch 00007: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1326 - accuracy: 0.9572 - val_loss: 0.1849 - val_accuracy: 0.9390
Epoch 8/250
780/800 [============================>.] - ETA: 0s - loss: 0.1265 - accuracy: 0.9610
Epoch 00008: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1259 - accuracy: 0.9613 - val_loss: 0.1922 - val_accuracy: 0.9470
Epoch 9/250
781/800 [============================>.] - ETA: 0s - loss: 0.1329 - accuracy: 0.9575
Epoch 00009: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1321 - accuracy: 0.9575 - val_loss: 0.1756 - val_accuracy: 0.9510
Epoch 10/250
779/800 [============================>.] - ETA: 0s - loss: 0.1335 - accuracy: 0.9573
Epoch 00010: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1329 - accuracy: 0.9575 - val_loss: 0.2220 - val_accuracy: 0.9440
Epoch 11/250
799/800 [============================>.] - ETA: 0s - loss: 0.1366 - accuracy: 0.9544
Epoch 00011: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1368 - accuracy: 0.9544 - val_loss: 0.2591 - val_accuracy: 0.9240
Epoch 12/250
796/800 [============================>.] - ETA: 0s - loss: 0.1415 - accuracy: 0.9558
Epoch 00012: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1415 - accuracy: 0.9557 - val_loss: 0.2218 - val_accuracy: 0.9400
Epoch 13/250
779/800 [============================>.] - ETA: 0s - loss: 0.1359 - accuracy: 0.9573
Epoch 00013: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1350 - accuracy: 0.9571 - val_loss: 0.2032 - val_accuracy: 0.9460
Epoch 14/250
795/800 [============================>.] - ETA: 0s - loss: 0.1405 - accuracy: 0.9566
Epoch 00014: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1418 - accuracy: 0.9564 - val_loss: 0.1947 - val_accuracy: 0.9490
Epoch 15/250
782/800 [============================>.] - ETA: 0s - loss: 0.1430 - accuracy: 0.9552
Epoch 00015: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1414 - accuracy: 0.9556 - val_loss: 0.2182 - val_accuracy: 0.9325
Epoch 16/250
799/800 [============================>.] - ETA: 0s - loss: 0.1408 - accuracy: 0.9546
Epoch 00016: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1406 - accuracy: 0.9546 - val_loss: 0.2221 - val_accuracy: 0.9385
Epoch 17/250
799/800 [============================>.] - ETA: 0s - loss: 0.1478 - accuracy: 0.9514
Epoch 00017: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1485 - accuracy: 0.9514 - val_loss: 0.2192 - val_accuracy: 0.9355
Epoch 18/250
797/800 [============================>.] - ETA: 0s - loss: 0.1343 - accuracy: 0.9576
Epoch 00018: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1350 - accuracy: 0.9574 - val_loss: 0.3035 - val_accuracy: 0.9290
Epoch 19/250
782/800 [============================>.] - ETA: 0s - loss: 0.1526 - accuracy: 0.9545
Epoch 00019: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1543 - accuracy: 0.9541 - val_loss: 0.2586 - val_accuracy: 0.9295
Epoch 20/250
795/800 [============================>.] - ETA: 0s - loss: 0.1426 - accuracy: 0.9558
Epoch 00020: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1428 - accuracy: 0.9559 - val_loss: 0.2952 - val_accuracy: 0.9290
Epoch 21/250
796/800 [============================>.] - ETA: 0s - loss: 0.1444 - accuracy: 0.9552
Epoch 00021: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1464 - accuracy: 0.9549 - val_loss: 0.2777 - val_accuracy: 0.9325
Epoch 22/250
785/800 [============================>.] - ETA: 0s - loss: 0.1484 - accuracy: 0.9550
Epoch 00022: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1497 - accuracy: 0.9550 - val_loss: 0.2827 - val_accuracy: 0.9320
Epoch 23/250
784/800 [============================>.] - ETA: 0s - loss: 0.1456 - accuracy: 0.9560
Epoch 00023: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1459 - accuracy: 0.9555 - val_loss: 0.2657 - val_accuracy: 0.9330
Epoch 24/250
797/800 [============================>.] - ETA: 0s - loss: 0.1527 - accuracy: 0.9541
Epoch 00024: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1528 - accuracy: 0.9540 - val_loss: 0.2737 - val_accuracy: 0.9370
Epoch 25/250
784/800 [============================>.] - ETA: 0s - loss: 0.1439 - accuracy: 0.9551
Epoch 00025: val_accuracy did not improve from 0.95800
800/800 [==============================] - 2s 3ms/step - loss: 0.1457 - accuracy: 0.9550 - val_loss: 0.2334 - val_accuracy: 0.9405
Epoch 00025: early stopping
###Markdown
Using the best saved model, we test 5 samples and report the results
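The cell below prints the raw 26-way probability vectors, which are hard to read at a glance. As a minimal sketch (not part of the original notebook, and assuming the LabelEncoder le, the one-hot matrix y, np, and the predictions array produced by the cell below are still in memory), the vectors could be mapped back to letter names like this:
predicted_letters = le.inverse_transform(np.argmax(predictions[:5], axis=1))  # predicted class index -> original letter
actual_letters = le.inverse_transform(np.argmax(y[:5], axis=1))  # true class index -> original letter
print(predicted_letters, actual_letters)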
###Code
model=load_model('best_model.h5')
_, accuracy = model.evaluate(X, y)
print('Accuracy: %.2f' % (accuracy*100))
predictions = model.predict(X)
# summarize the first 5 cases
for i in range(5):
print("\n------------------------------- Case ",i+1," ---------------------------------------------------\n")
print("Data is :\n", X[i].tolist(),"\nPredicted Value is :\n" , predictions[i],"\nReal Value is :\n", y[i])
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1135 - accuracy: 0.9628
Accuracy: 96.28
------------------------------- Case 1 ---------------------------------------------------
Data is :
[2.0, 4.0, 4.0, 3.0, 2.0, 7.0, 8.0, 2.0, 9.0, 11.0, 7.0, 7.0, 1.0, 8.0, 5.0, 6.0]
Predicted Value is :
[0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00
1.3256963e-35]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 1.]
------------------------------- Case 2 ---------------------------------------------------
Data is :
[4.0, 7.0, 5.0, 5.0, 5.0, 5.0, 9.0, 6.0, 4.0, 8.0, 7.0, 9.0, 2.0, 9.0, 7.0, 10.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 3 ---------------------------------------------------
Data is :
[7.0, 10.0, 8.0, 7.0, 4.0, 8.0, 8.0, 5.0, 10.0, 11.0, 2.0, 8.0, 2.0, 5.0, 5.0, 10.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 4 ---------------------------------------------------
Data is :
[4.0, 9.0, 5.0, 7.0, 4.0, 7.0, 7.0, 13.0, 1.0, 7.0, 6.0, 8.0, 3.0, 8.0, 0.0, 8.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
------------------------------- Case 5 ---------------------------------------------------
Data is :
[6.0, 7.0, 8.0, 5.0, 4.0, 7.0, 6.0, 3.0, 7.0, 10.0, 7.0, 9.0, 3.0, 8.0, 3.0, 7.0]
Predicted Value is :
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
Real Value is :
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0.]
|
examples/4_model-training.ipynb | ###Markdown
Model training This notebook aims at illustrating the way we can train simple neural network models in the current framework. `Mapillary` data will be used as an illustration of this process. Some details about the [dataset labels](./1b_mapillary-label-analysis.ipynb), as well as about the [dataset creation](./1a_mapillary-dataset-presentation.ipynb), are available in previous notebooks. Moreover, the model of interest here will be *semantic segmentation*. To get some more details about model handling, please refer to the [model creation notebook](./3_neural-network-model-creation.ipynb). Introduction As usual, some dependencies must be loaded before we begin:
###Code
import os
from keras.models import Model
from keras.optimizers import Adam
from keras import backend, callbacks
import matplotlib.pyplot as plt
%matplotlib inline
from deeposlandia import utils, dataset, generator, semantic_segmentation
###Output
_____no_output_____
###Markdown
As in previous notebooks, a range of variables is declared to make further developments easier:
###Code
DATAPATH = "../data"
DATASET = "mapillary"
MODEL = "semantic_segmentation"
IMG_SIZE = 128
BATCH_SIZE = 10
NB_CHANNELS = 3
LR_RATE = 1e-3
LR_DECAY = 1e-5
NB_EPOCHS=10
INSTANCE_NAME = "demo"
INPUT_FOLDER = utils.prepare_input_folder(DATAPATH, DATASET)
INPUT_CONFIG = os.path.join(INPUT_FOLDER, "config_aggregate.json")
PREPROCESS_FOLDER = utils.prepare_preprocessed_folder(DATAPATH, DATASET, IMG_SIZE, "aggregated")
###Output
_____no_output_____
###Markdown
Dataset recovering Here we recover an existing dataset generated in the [dedicated notebook](./2_generator-creation.ipynb):
###Code
training_dataset = dataset.MapillaryDataset(IMG_SIZE, INPUT_CONFIG)
training_dataset.load(PREPROCESS_FOLDER["training_config"])
training_dataset.get_nb_images()
###Output
2018-08-22 16:29:24,115 :: INFO :: dataset :: load : The dataset has been loaded from ../data/mapillary/preprocessed/128_aggregated/training.json
###Markdown
As we need the dataset's first two components, *i.e.* training and validation, we still need to generate a brand-new validation dataset:
###Code
validation_dataset = dataset.MapillaryDataset(IMG_SIZE, INPUT_CONFIG)
validation_dataset.populate(PREPROCESS_FOLDER["validation"],
os.path.join(INPUT_FOLDER, "validation"),
nb_images=10,
aggregate=True)
validation_dataset.save(PREPROCESS_FOLDER["validation_config"])
validation_dataset.get_nb_images()
###Output
2018-08-22 16:29:25,258 :: INFO :: dataset :: save : The dataset has been saved into ../data/mapillary/preprocessed/128_aggregated/validation.json
###Markdown
Build on-the-fly data generator Starting from this dataset, we can build the generators that will be used during training.
###Code
training_config = utils.read_config(PREPROCESS_FOLDER["training_config"])
validation_config = utils.read_config(PREPROCESS_FOLDER["validation_config"])
train_generator = generator.create_generator(DATASET,
MODEL,
PREPROCESS_FOLDER["training"],
IMG_SIZE,
BATCH_SIZE,
training_config["labels"])
validation_generator = generator.create_generator(DATASET,
MODEL,
PREPROCESS_FOLDER["validation"],
IMG_SIZE,
BATCH_SIZE,
validation_config["labels"])
###Output
Found 100 images belonging to 1 classes.
Found 100 images belonging to 1 classes.
Found 10 images belonging to 1 classes.
Found 10 images belonging to 1 classes.
###Markdown
Model creation From now on, the data are ready to use. We only have to initialize a neural network model. This step is described in a [dedicated notebook](./3_neural-network-model-creation.ipynb). First, an instance of the `SemanticSegmentationNetwork` class must be declared.
###Code
nb_labels = len(validation_config['labels'])
network = semantic_segmentation.SemanticSegmentationNetwork(INSTANCE_NAME,
IMG_SIZE,
NB_CHANNELS,
nb_labels,
architecture="simple")
###Output
_____no_output_____
###Markdown
Then a Keras model is instantiated from the built architecture. The model has to be compiled with a given loss function, optimizer and metrics before any training process can be launched.
###Code
model = Model(network.X, network.Y)
model.compile(loss="categorical_crossentropy",
optimizer=Adam(lr=LR_RATE, decay=LR_DECAY),
metrics=["acc"])
###Output
_____no_output_____
###Markdown
Model training Some parameter settings are necessary before training begins. Namely, we have to define the number of training and validation steps (basically the number of images divided by the batch size, so that an epoch corresponds to evaluating every image once).
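With the demo setup used here (100 preprocessed training images, 10 validation images and BATCH_SIZE = 10), this gives steps = 100 // 10 = 10 and val_steps = 10 // 10 = 1, which is why each epoch below runs 10 steps.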
###Code
steps = training_dataset.get_nb_images() // BATCH_SIZE
val_steps = validation_dataset.get_nb_images() // BATCH_SIZE
###Output
_____no_output_____
###Markdown
Then we define a [Keras checkpoint callback](https://keras.io/callbacks/) to save the result of the optimization in a place of our choice.
###Code
output_folder = utils.prepare_output_folder(DATAPATH, DATASET, MODEL, INSTANCE_NAME)
output_folder
checkpoint_filename = os.path.join(output_folder, "checkpoint-epoch-{epoch:03d}.h5")
checkpoint = callbacks.ModelCheckpoint(checkpoint_filename,
monitor="val_loss",
save_best_only=True,
save_weights_only=False,
mode='auto',
period=1)
###Output
_____no_output_____
###Markdown
And finally the training process itself is run with the training (and optionally validation) generator(s).
###Code
hist = model.fit_generator(train_generator,
epochs=NB_EPOCHS,
steps_per_epoch=steps,
validation_data=validation_generator,
validation_steps=val_steps,
callbacks=[checkpoint])
###Output
Epoch 1/10
10/10 [==============================] - 26s 3s/step - loss: 2.3551 - acc: 0.2951 - val_loss: 1.9213 - val_acc: 0.4626
Epoch 2/10
10/10 [==============================] - 25s 2s/step - loss: 1.7571 - acc: 0.5056 - val_loss: 1.4701 - val_acc: 0.6066
Epoch 3/10
10/10 [==============================] - 25s 2s/step - loss: 1.5579 - acc: 0.5450 - val_loss: 1.3500 - val_acc: 0.6595
Epoch 4/10
10/10 [==============================] - 26s 3s/step - loss: 1.4474 - acc: 0.5639 - val_loss: 1.4252 - val_acc: 0.5931
Epoch 5/10
10/10 [==============================] - 26s 3s/step - loss: 1.4562 - acc: 0.5498 - val_loss: 1.3519 - val_acc: 0.6067
Epoch 6/10
10/10 [==============================] - 25s 3s/step - loss: 1.4033 - acc: 0.5759 - val_loss: 1.2806 - val_acc: 0.6210
Epoch 7/10
10/10 [==============================] - 25s 3s/step - loss: 1.3953 - acc: 0.5688 - val_loss: 1.2571 - val_acc: 0.6322
Epoch 8/10
10/10 [==============================] - 26s 3s/step - loss: 1.3783 - acc: 0.5845 - val_loss: 1.3158 - val_acc: 0.6214
Epoch 9/10
10/10 [==============================] - 26s 3s/step - loss: 1.3812 - acc: 0.5758 - val_loss: 1.2610 - val_acc: 0.6442
Epoch 10/10
10/10 [==============================] - 26s 3s/step - loss: 1.3359 - acc: 0.5924 - val_loss: 1.1913 - val_acc: 0.6628
###Markdown
At this point, we have trained a model and stored the corresponding checkpoints on the file system. We may display some learning curves:
###Code
f, ax = plt.subplots(1, 2, figsize=(8, 4))
ax[0].plot(hist.history['loss'])
ax[0].plot(hist.history['val_loss'])
ax[0].set_xlabel("Training epochs")
ax[0].set_ylabel("Categorical crossentropy")
ax[1].plot(hist.history['acc'])
ax[1].plot(hist.history['val_acc'])
ax[1].set_xlabel("Training epochs")
ax[1].set_ylabel("Accuracy (%)")
ax[1].legend(["Training", "Validation"])
plt.tight_layout()
###Output
_____no_output_____ |
pandasplotcode.ipynb | ###Markdown
An Introduction to Data Visualization with Pandas Importing the libraries
###Code
# The first line is only required if you are using a Jupyter Notebook
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Getting the data
###Code
weather = pd.read_csv('https://raw.githubusercontent.com/alanjones2/dataviz/master/london2018.csv')
print(weather)
###Output
Year Month Tmax Tmin Rain Sun
0 2018 1 9.7 3.8 58.0 46.5
1 2018 2 6.7 0.6 29.0 92.0
2 2018 3 9.8 3.0 81.2 70.3
3 2018 4 15.5 7.9 65.2 113.4
4 2018 5 20.8 9.8 58.4 248.3
5 2018 6 24.2 13.1 0.4 234.5
6 2018 7 28.3 16.4 14.8 272.5
7 2018 8 24.5 14.5 48.2 182.1
8 2018 9 20.9 11.0 29.4 195.0
9 2018 10 16.5 8.5 61.0 137.0
10 2018 11 12.2 5.8 73.8 72.9
11 2018 12 10.7 5.2 60.6 40.3
###Markdown
A first Pandas Plot
###Code
weather.plot(y='Tmax', x='Month')
plt.show()
###Output
_____no_output_____
###Markdown
Simple charts
###Code
weather.plot(y=['Tmax','Tmin'], x='Month')
plt.show()
weather['Tmed'] = (weather['Tmax'] + weather['Tmin'])/2
weather.plot(y=['Tmax','Tmin','Tmed'], x='Month')
plt.show()
###Output
_____no_output_____
###Markdown
Bar Charts
###Code
weather.plot(kind='bar', y='Rain', x='Month')
plt.show()
weather.plot(kind='barh', y='Rain', x='Month')
plt.show()
weather.plot(kind='bar', y=['Tmax','Tmin'], x='Month')
plt.show()
weather.plot(kind='bar', y=['Tmax','Tmed','Tmin'], x='Month')
plt.show()
###Output
_____no_output_____
###Markdown
Scatter Plot
###Code
weather.plot(kind='scatter', x='Sun', y='Rain')
plt.show()
###Output
_____no_output_____
###Markdown
Pie charts
###Code
weather.plot(kind='pie', y='Sun')
plt.show()
weather.index=['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
weather.plot(kind='pie', y = 'Sun', legend=False)
plt.show()
###Output
_____no_output_____
###Markdown
Statistical charts and spotting unusual events
###Code
more_weather = pd.read_csv('https://raw.githubusercontent.com/alanjones2/dataviz/master/londonweather.csv')
print(more_weather[0:48])
print(more_weather.Rain.describe())
more_weather.plot.box(y='Rain')
plt.show()
###Output
_____no_output_____
###Markdown
Histograms
###Code
more_weather.plot(kind='hist', y='Rain')
plt.show()
###Output
_____no_output_____
###Markdown
More bins
###Code
more_weather.plot(kind='hist', y='Rain', bins=[0,25,50,75,100,125,150,175])
plt.show()
more_weather.plot.hist(y='Rain', bins=[0,25,75,175])
plt.show()
###Output
_____no_output_____
###Markdown
Pandas Plot utilities Multiple charts
###Code
weather.plot(y=['Tmax', 'Tmin','Rain','Sun'], subplots=True, layout=(2,2), figsize=(10,5))
plt.show()
weather.plot(kind='bar', y=['Tmax', 'Tmin','Rain','Sun'], subplots=True, layout=(2,2), figsize=(10,5))
plt.show()
weather.plot(kind='pie', y=['Tmax', 'Tmin','Rain','Sun'], subplots=True, legend=False, layout=(2,2), figsize=(10,10))
plt.show()
###Output
_____no_output_____
###Markdown
Saving the Charts
###Code
weather.plot(kind='pie', y='Rain', legend=False)
# save the figure before calling plt.show(), otherwise an empty canvas is written to disk
plt.savefig("pie.png")
plt.show()
###Output
_____no_output_____ |
EDA E-commerce data.ipynb | ###Markdown
Context of Data: Company - UK-based and registered non-store online retail. Products for selling - mainly all-occasion gifts. Customers - most are wholesalers (local or international). Transactions period - 1st Dec 2010 - 9th Dec 2011 (one year). Results obtained from Exploratory Data Analysis (EDA): The customer with the highest number of orders comes from the United Kingdom (UK). The customer with the highest money spent on purchases comes from the Netherlands. The company receives the highest number of orders from customers in the UK (since it is a UK-based company); therefore, the TOP 5 countries (including the UK) that place the highest number of orders are: United Kingdom, Germany, France, Ireland (EIRE), Spain. As the company receives the highest number of orders from customers in the UK, customers in the UK also spend the most on their purchases; therefore, the TOP 5 countries (including the UK) that spend the most money on purchases are: United Kingdom, Netherlands, Ireland (EIRE), Germany, France. November 2011 has the highest sales. The month with the lowest sales is undetermined, as the dataset only contains transactions up to 9th December 2011 for December. There are no transactions on Saturday between 1st Dec 2010 and 9th Dec 2011. The number of orders received by the company tends to increase from Monday to Thursday and decrease afterwards. The company receives the highest number of orders at 12:00pm; possibly most customers make purchases during lunch hour, between 12:00pm and 2:00pm. The company tends to give out FREE items with purchases occasionally each month (except June 2011); however, it is not clear what factors contribute to giving out the FREE items to particular customers.
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
color=sns.color_palette()
pd.set_option("display.max_columns",100)
import warnings
import datetime
import gc
warnings.filterwarnings("ignore")
sns.set_style("whitegrid")
import missingno as msno
#module for Python
import pandas_profiling
import os
os.chdir("E:\PYTHON NOTES\EDA\Ecommerce data")
df=pd.read_csv("data.csv",encoding="ISO-8859-1")
df.head()
df.rename(index=str,columns={'InvoiceNo': 'invoice_num','StockCode' : 'stock_code','Description' : 'description','Quantity' : 'quantity','InvoiceDate' : 'invoice_date','UnitPrice' : 'unit_price','CustomerID' : 'cust_id','Country' : 'country'},inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Data Cleaning
###Code
df.info()
df.isnull().sum().sort_values(ascending=False)
df.columns[df.isna().any()]
# check out the rows with missing values
df[df.isnull().any(axis=1)].head()
# change the invoice_date format - String to Timestamp format
df['invoice_date']=pd.to_datetime(df["invoice_date"],format='%m/%d/%Y %H:%M')
# change description - UPPER case to LOWER case
df["description"]=df["description"].str.lower()
df.head()
df.shape
df_new=df.dropna()
df_new.isnull().sum()
df_new.columns[df_new.isna().any()]
df_new.info()
# change the column type
df_new["cust_id"]=df_new["cust_id"].astype("int64")
df_new.describe().round(2)
df_new=df_new[df_new.quantity>0]
df_new.describe().round(2)
###Output
_____no_output_____
###Markdown
Add the column - amount_spent
###Code
df_new["amount_spent"]=df_new["quantity"]*df_new["unit_price"]
# rearrange all the columns for easy reference
df_new = df_new[['invoice_num','invoice_date','stock_code','description','quantity','unit_price','amount_spent','cust_id','country']]
###Output
_____no_output_____
###Markdown
Add the columns - Month, Day and Hour for the invoice
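The year_month column packs the year and month into a single integer via 100 * year + month; for example, an invoice dated December 2010 becomes 100 * 2010 + 12 = 201012.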
###Code
df_new.insert(loc=2,column="year_month",value=df_new["invoice_date"].map(lambda x:100*x.year+x.month))
df_new.insert(loc=3,column="month",value=df_new["invoice_date"].dt.month)
# +1 to make Monday=1.....until Sunday=7
df_new.insert(loc=4,column="day",value=(df_new["invoice_date"].dt.dayofweek)+1)
df_new.insert(loc=5, column='hour', value=df_new.invoice_date.dt.hour)
df_new.head()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis (EDA) How many orders were made by the customers?
###Code
df_new['cust_id'].value_counts().sort_values(ascending=False)
df_new.groupby(["cust_id","country"],as_index=False)["invoice_num"].count().sort_values("invoice_num",ascending=False).head()
orders = df_new.groupby(by=['cust_id','country'], as_index=False)['invoice_num'].count()
plt.subplots(figsize=(15,6))
plt.plot(orders.cust_id, orders.invoice_num)
plt.xlabel('Customers ID')
plt.ylabel('Number of Orders')
plt.title('Number of Orders for different Customers')
plt.show()
###Output
_____no_output_____
###Markdown
Check TOP 5 most number of orders
###Code
orders.sort_values(by='invoice_num', ascending=False).head()
###Output
_____no_output_____
###Markdown
How much money was spent by the customers?
###Code
money_spent=df_new.groupby(["cust_id","country"],as_index=False)["amount_spent"].sum()
###Output
_____no_output_____
###Markdown
Check TOP 5 highest money spent
###Code
money_spent.sort_values("amount_spent",ascending=False).head()
money_spent=df_new.groupby(["cust_id","country"],as_index=False)["amount_spent"].sum()
plt.subplots(figsize=(15,6))
plt.plot(money_spent.cust_id,money_spent.amount_spent)
plt.xlabel("cust_id")
plt.ylabel("amount_spent")
plt.title("Money Spent for different Customers")
plt.show()
sns.palplot(color)
###Output
_____no_output_____
###Markdown
How many orders (per month)?
###Code
ax = df_new.groupby('invoice_num')['year_month'].unique().value_counts().sort_index().plot('bar',color=color[0],figsize=(15,6))
ax.set_xlabel('Month',fontsize=15)
ax.set_ylabel('Number of Orders',fontsize=15)
ax.set_title('Number of orders for different Months (1st Dec 2010 - 9th Dec 2011)',fontsize=15)
ax.set_xticklabels(('Dec_10','Jan_11','Feb_11','Mar_11','Apr_11','May_11','Jun_11','July_11','Aug_11','Sep_11','Oct_11','Nov_11','Dec_11'), rotation='horizontal', fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
How many orders (per day)?
###Code
df_new.groupby("invoice_num")["day"].unique().value_counts().sort_index()
ax = df_new.groupby('invoice_num')['day'].unique().value_counts().sort_index().plot('bar',color=color[0],figsize=(15,6))
ax.set_xlabel('Day',fontsize=15)
ax.set_ylabel('Number of Orders',fontsize=15)
ax.set_title('Number of orders for different Days',fontsize=15)
ax.set_xticklabels(('Mon','Tue','Wed','Thur','Fri','Sun'), rotation='horizontal', fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
How many orders (per hour)?
###Code
df_new.groupby('invoice_num')['hour'].unique().value_counts().iloc[:-1].sort_index()
ax = df_new.groupby('invoice_num')['hour'].unique().value_counts().iloc[:-1].sort_index().plot('bar',color=color[0],figsize=(15,6))
ax.set_xlabel('Hour',fontsize=15)
ax.set_ylabel('Number of Orders',fontsize=15)
ax.set_title('Number of orders for different Hours',fontsize=15)
ax.set_xticklabels(range(6,21), rotation='horizontal', fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
Discover patterns for Unit Price
###Code
df_new.unit_price.describe()
# We see that there are unit prices equal to 0 (FREE items)
#check the distribution of unit price
plt.subplots(figsize=(12,6))
sns.boxplot(df_new.unit_price)
plt.show()
df_free=df_new[df_new["unit_price"]==0]
df_free.year_month.value_counts().sort_index()
ax = df_free.year_month.value_counts().sort_index().plot('bar',figsize=(12,6), color=color[0])
ax.set_xlabel('Month',fontsize=15)
ax.set_ylabel('Frequency',fontsize=15)
ax.set_title('Frequency for different Months (Dec 2010 - Dec 2011)',fontsize=15)
ax.set_xticklabels(('Dec_10','Jan_11','Feb_11','Mar_11','Apr_11','May_11','July_11','Aug_11','Sep_11','Oct_11','Nov_11'), rotation='horizontal', fontsize=13)
plt.show()
###Output
_____no_output_____
###Markdown
How much money spent by each country?
###Code
group_country_amount_spent = df_new.groupby('country')['amount_spent'].sum().sort_values()
###Output
_____no_output_____
###Markdown
How many orders for each country?
###Code
group_country_orders = df_new.groupby('country')['invoice_num'].count().sort_values()
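# One possible way to visualise this aggregate (illustrative addition, not prescribed above):
# a horizontal bar chart of the number of orders placed by customers in each country.
group_country_orders.plot(kind='barh', figsize=(12,10), color=color[0])
plt.xlabel('Number of Orders')
plt.ylabel('Country')
plt.title('Number of Orders per Country')
plt.show()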
###Output
_____no_output_____ |
notebooks/japanese/graph_coloring.ipynb | ###Markdown
Graph Coloring Problem. Given a graph $G=(V,E)$ and a number of colors $K$, we color the vertices of the graph with $K$ colors, under the constraint that adjacent vertices (i.e., vertices connected by an edge) must not share the same color; we want to find such a coloring. This problem is formulated as a QUBO as follows.\begin{eqnarray*}H &=& \alpha H_{A} + H_{B} \\H_{A} &=& \sum_{i \in V} \left( 1 - \sum_{k = 1}^{K} x_{i,k}\right )^2 \\H_{B} &=& \sum_{(i, j) \in E} \sum_{k = 1}^{K} x_{i,k} x_{j,k}\end{eqnarray*}$H_{A}$ represents the constraint that each vertex is assigned exactly one color. For every vertex in $V$, $H_{A}=0$ (its minimum) when exactly one of the $K$ binary variables associated with that vertex is 1 and the rest are 0. $H_{B}$ represents the constraint that adjacent vertices are colored differently. For every pair of adjacent vertices (i.e., every edge in $E$), $H_{B}=0$ (its minimum) when no adjacent pair shares a color. $\alpha$ is a parameter that adjusts the strength of the constraint.
###Code
# imports needed by this cell
import networkx as nx
import matplotlib.pyplot as plt

def plot_graph(N, E, colors=None):
G = nx.Graph()
G.add_nodes_from([n for n in range(N)])
for (i, j) in E:
G.add_edge(i, j)
plt.figure(figsize=(4,4))
pos = nx.circular_layout(G)
colorlist = ['#e41a1c', '#377eb8', '#4daf4a', '#984ea3', '#ff7f00', '#ffff33', '#a65628', '#f781bf']
if colors:
nx.draw_networkx(G, pos, node_color=[colorlist[colors[node]] for node in G.nodes], node_size=400, font_weight='bold', font_color='w')
else:
nx.draw_networkx(G, pos, node_color=[colorlist[0] for _ in G.nodes], node_size=400, font_weight='bold', font_color='w')
plt.axis("off")
plt.show()
# number of vertices and number of colors
N = 6
K = 3
# the edges are given as follows
E = {(0, 1), (0, 2), (0, 3), (1, 2), (2, 3), (3, 4)}
plot_graph(N, E)
###Output
_____no_output_____
###Markdown
Prepare a binary array $x$ of dimension (number of vertices) $\times$ (number of colors) $= 6 \times 3$. $x[i, k]=1$ expresses that vertex $i$ is colored with color $k$ (one-hot encoding).
###Code
# imports for the QUBO model: pyqubo provides Array, Constraint, Sum and Placeholder;
# solve_qubo was shipped by early pyqubo releases (newer versions use an external sampler such as neal)
from pyqubo import Array, Constraint, Placeholder, Sum, solve_qubo

x = Array.create('x', (N, K), 'BINARY')
# constraint that each vertex i is assigned exactly one color
onecolor_const = 0.0
for i in range(N):
onecolor_const += Constraint((Sum(0, K, lambda j: x[i, j]) - 1)**2, label="onecolor{}".format(i))
# constraint that adjacent vertices are colored with different colors
adjacent_const = 0.0
for (i, j) in E:
for k in range(K):
adjacent_const += Constraint(x[i, k] * x[j, k], label="adjacent({},{})".format(i, j))
# build the energy (Hamiltonian)
alpha = Placeholder("alpha")
H = alpha * onecolor_const + adjacent_const
# compile the model
model = H.compile()
# create the QUBO
feed_dict = {'alpha': 1.0}
qubo, offset = model.to_qubo(feed_dict=feed_dict)
# find the optimal solution
solution = solve_qubo(qubo)
decoded_solution, broken, energy = model.decode_solution(solution, vartype="BINARY", feed_dict=feed_dict)
print("number of broken constarint = {}".format(len(broken)))
# 各頂点の色を取得する
colors = [0 for i in range(N)]
for i in range(N):
for k in range(K):
if decoded_solution['x'][i][k] == 1:
colors[i] = k
break
# display the colored graph
plot_graph(N, E, colors)
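# Optional sanity check (illustrative addition): confirm that no edge connects two
# vertices that received the same color in the decoded solution.
conflicts = [(i, j) for (i, j) in E if colors[i] == colors[j]]
print("edges with identical colors:", conflicts)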
###Output
_____no_output_____ |
labs/lab_06 (1).ipynb | ###Markdown
MAT281 - Laboratory N°06 Problem 01 The **Iris dataset** is a dataset containing samples of three Iris species (Iris setosa, Iris virginica and Iris versicolor). Four traits were measured for each sample: the length and width of the sepal and petal, in centimeters. The first step is to load the dataset and look at the first rows that make it up:
###Code
# libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
pd.set_option('display.max_columns', 500) # show more dataframe columns
# show matplotlib plots inline in jupyter notebook/lab
%matplotlib inline
# load data
df = pd.read_csv(os.path.join("iris_contaminados.csv"))
df.columns = ['sepalLength',
'sepalWidth',
'petalLength',
'petalWidth',
'species']
df.head()
###Output
_____no_output_____
###Markdown
Experimental setup
The first step is to identify the variables that influence the study and their nature.
* **species**:
  * Description: name of the Iris species.
  * Data type: *string*
  * Constraints: only three types exist (setosa, virginica and versicolor).
* **sepalLength**:
  * Description: sepal length.
  * Data type: *integer*.
  * Constraints: values lie between 4.0 and 7.0 cm.
* **sepalWidth**:
  * Description: sepal width.
  * Data type: *integer*.
  * Constraints: values lie between 2.0 and 4.5 cm.
* **petalLength**:
  * Description: petal length.
  * Data type: *integer*.
  * Constraints: values lie between 1.0 and 7.0 cm.
* **petalWidth**:
  * Description: petal width.
  * Data type: *integer*.
  * Constraints: values lie between 0.1 and 2.5 cm.

Your objective is to carry out a proper **E.D.A.**; to do so, follow these instructions: 1. Count the elements of the **species** column and correct them according to your own criteria. Replace nan values with "default".
###Code
df.species.unique()
df['species']=df['species'].str.lower().str.strip().fillna('default')
df.species.unique()
###Output
_____no_output_____
###Markdown
What was done here is to treat names written in upper case, lower case, or with extra spaces as the same value. 2. Make a box plot of the length and width of the petals and sepals. Replace nan values with **0**.
###Code
fig = plt.figure(figsize=(10, 8))
plt.boxplot([df['sepalLength'].fillna(0),df['sepalWidth'].fillna(0),df['petalLength'].fillna(0),df['petalWidth'].fillna(0)],labels=['sepalLength','sepalWidth','petalLength','petalWidth'])
plt.title('Petal and Sepal Measurements', size=18)
plt.show()
###Output
_____no_output_____
###Markdown
3. A range of valid values for the petal and sepal lengths and widths was defined above. Add a column named **label** that identifies which of these values fall outside the valid range.
###Code
lista=[[],[],[],[],[]]
label=[None]*len(df['species'])
for i in range(len(df['species'])):
if (df.species[i] in ['setosa','virginica', 'versicolor'])and (4<=df.sepalLength[i]<=7) and (2<=df.sepalWidth[i]<=4.5) and (1<=df.petalLength[i]<=7) and (0.1<=df.petalWidth[i]<=2.5):
label[i]=True
lista[0].append(df.species[i])
lista[1].append(df.sepalLength[i])
lista[2].append(df.sepalWidth[i])
lista[3].append(df.petalLength[i])
lista[4].append(df.petalWidth[i])
else:
label[i]=False
df['label']=label
df.tail()
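# Note (illustrative, equivalent to the loop above): the label column could also be
# built in a vectorized way with pandas, e.g.
#   valid_species = df['species'].isin(['setosa', 'virginica', 'versicolor'])
#   in_range = (df['sepalLength'].between(4, 7) & df['sepalWidth'].between(2, 4.5) &
#               df['petalLength'].between(1, 7) & df['petalWidth'].between(0.1, 2.5))
#   df['label'] = valid_species & in_range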
###Output
_____no_output_____
###Markdown
4. Make a plot of *sepalLength* vs *petalLength* and another of *sepalWidth* vs *petalWidth*, categorized by the **label** column. Draw conclusions from your results.
###Code
y1=df.sepalLength
y2=df.petalLength
x=df.label
plt.figure(figsize=(10, 5))
plt.bar(x,y1,0.35,color='red')
plt.bar(x,y2,0.35,color='black')
plt.legend(['sepalLength','petalLength'])
plt.title("Sepal Length vs Petal Length")
plt.show()
###Output
_____no_output_____
###Markdown
We conclude that petalLength has more admissible values
###Code
y1=df.sepalWidth
y2=df.petalWidth
x=df.label
plt.figure(figsize=(10, 5))
plt.bar(x,y1,0.35,color='red')
plt.bar(x,y2,0.35,color='black')
plt.legend(['sepalWidth','petalWidth'])
plt.title("sepalWidth vs petalWidth")
plt.show()
###Output
_____no_output_____
###Markdown
5. Filter the valid data and make a plot of *sepalLength* vs *petalLength* categorized by the **species** label. The data correction was already done in part 3
###Code
l=['species', 'sepalLength', 'sepalWidth', 'petalLength', 'petalWidth']
for i in range(5):
    print('The corrected values for:', l[i], 'are', list(set(lista[i])))
print('')
y1=lista[1]
y2=lista[3]
x=lista[0]
plt.figure(figsize=(10, 5))
plt.bar(x,y1,0.35,color='red')
plt.bar(x,y2,0.35,color='black')
plt.legend(['sepalLength','petalLength'])
plt.title("Sepal Length vs Petal Length")
plt.show()
###Output
_____no_output_____ |
Notebooks/other-attempts/spaCy/data_preparation/vectorizer.ipynb | ###Markdown
VectorizerThis notebook takes all preprocessings and vectorizes them, in order to be classified with the MLP. As an exploration, we used spaCy's pre-trained vectors. Note that the docuemnt vectors are obtained from the word vectors via an average.
###Code
import spacy
import pandas as pd
import numpy as np
from progressbar import ProgressBar, Bar, Percentage
from os import listdir
from os.path import isfile, join
###Output
_____no_output_____
###Markdown
Load the big model (as per the [documentation](https://spacy.io/usage/vectors-similarity))
###Code
nlp = spacy.load(r"Q:\anaconda\Lib\site-packages\en_core_web_lg\en_core_web_lg-2.2.5")
%%time
def_str = r"Q:\\tooBigToDrive\data-mining\kaggle\data\csv"
path = r"Q:\tooBigToDrive\data-mining\kaggle\data\csv"
files = listdir(def_str)
files = [f.replace(".csv","") for f in files if "Agg" in f]
for s in files:
csvPath = def_str +"\\"+ s + ".csv"
npyPath = def_str +"\\"+ s +"sSub"+ ".npy"
train = pd.read_csv(csvPath)
train.replace(to_replace = "empty", value = "", inplace = True)
train["body"].fillna("",inplace = True)
# prepend the subreddit name to the body (comment the next line out to vectorize the body text only)
train["body"] = train["subreddit"]+" "+train["body"]
to_be_vectorized = train["body"].tolist()
vectorsl = []
print("doing"+" "+s+".csv ...", "len(to_be_vectorized) = ",len(to_be_vectorized) )
pbar = ProgressBar(widgets=[Percentage(), Bar()], maxval=len(to_be_vectorized)).start()
i = 0
# disable parser and ner pipes to have better performance
with nlp.disable_pipes():
for tex in to_be_vectorized:
vectorsl.append(nlp(tex).vector)
i += 1
pbar.update(i)
pbar.finish()
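    # Illustrative alternative: spaCy's nlp.pipe streams texts in batches and is usually
    # faster than calling nlp() on each text separately, e.g.
    #   vectorsl = [doc.vector for doc in nlp.pipe(to_be_vectorized, batch_size=256)]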
vectors = np.array(vectorsl)
np.save(npyPath,vectors)
print("done")
###Output
_____no_output_____ |
convolutional-neural-networks/cifar-cnn/box_conv.ipynb | ###Markdown
Convolutional Neural Networks---In this notebook, we train a **CNN** to classify images from the CIFAR-10 database.The images in this database are small color images that fall into one of ten classes; some example images are pictured below. Test for [CUDA](http://pytorch.org/docs/stable/cuda.html)Since these are larger (32x32x3) images, it may prove useful to speed up your training time by using a GPU. CUDA is a parallel computing platform and CUDA Tensors are the same as typical Tensors, only they utilize GPU's for computation.
###Code
import torch
import numpy as np
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is available! Training on GPU ...
###Markdown
--- Load the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)Downloading may take a minute. We load in the training and test data, split the training data into a training and validation set, then create DataLoaders for each of these sets of data.
###Code
from torchvision import datasets
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# percentage of training set to use as validation
valid_size = 0.2
# convert data to a normalized torch.FloatTensor
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
# choose the training and test datasets
train_data = datasets.CIFAR10('data', train=True,
download=True, transform=transform)
test_data = datasets.CIFAR10('data', train=False,
download=True, transform=transform)
# obtain training indices that will be used for validation
num_train = len(train_data)
indices = list(range(num_train))
np.random.shuffle(indices)
split = int(np.floor(valid_size * num_train))
train_idx, valid_idx = indices[split:], indices[:split]
# define samplers for obtaining training and validation batches
train_sampler = SubsetRandomSampler(train_idx)
valid_sampler = SubsetRandomSampler(valid_idx)
# prepare data loaders (combine dataset and sampler)
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
# specify the image classes
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Visualize a Batch of Training Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# helper function to un-normalize and display an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
plt.imshow(np.transpose(img, (1, 2, 0))) # convert from Tensor image
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
# display 20 images
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
View an Image in More DetailHere, we look at the normalized red, green, and blue (RGB) color channels as three separate, grayscale intensity images.
###Code
rgb_img = np.squeeze(images[3])
channels = ['red channel', 'green channel', 'blue channel']
fig = plt.figure(figsize = (36, 36))
for idx in np.arange(rgb_img.shape[0]):
ax = fig.add_subplot(1, 3, idx + 1)
img = rgb_img[idx]
ax.imshow(img, cmap='gray')
ax.set_title(channels[idx])
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center', size=8,
color='white' if img[x][y]<thresh else 'black')
###Output
_____no_output_____
###Markdown
--- Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)This time, you'll define a CNN architecture. Instead of an MLP, which used linear, fully-connected layers, you'll use the following:* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), which can be thought of as a stack of filtered images.* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), which reduce the x-y size of an input, keeping only the most _active_ pixels from the previous layer.* The usual Linear + Dropout layers to avoid overfitting and produce a 10-dim output.A network with 2 convolutional layers is shown in the image below and in the code, and you've been given starter code with one convolutional and one maxpooling layer. TODO: Define a model with multiple convolutional layers, and define the feedforward network behavior.The more convolutional layers you include, the more complex patterns in color and shape a model can detect. It's suggested that your final model include 2 or 3 convolutional layers as well as linear layers + dropout in between to avoid overfitting. It's good practice to look at existing research and implementations of related models as a starting point for defining your own models. You may find it useful to look at [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py) to help decide on a final structure. Output volume for a convolutional layerTo compute the output size of a given convolutional layer we can perform the following calculation (taken from [Stanford's cs231n course](http://cs231n.github.io/convolutional-networks/#layers)):> We can compute the spatial size of the output volume as a function of the input volume size (W), the kernel/filter size (F), the stride with which they are applied (S), and the amount of zero padding used (P) on the border. The correct formula for calculating how many neurons define the output_W is given by `(W−F+2P)/S+1`. For example for a 7x7 input and a 3x3 filter with stride 1 and pad 0 we would get a 5x5 output. With stride 2 we would get a 3x3 output.
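As a quick, hypothetical helper (not part of the given starter code), the following cell applies that `(W - F + 2P)/S + 1` formula to the example above and to settings used later in this notebook.
###Code
def conv_output_size(W, F, S=1, P=0):
    # spatial output size of a conv/pool layer for input size W, kernel F, stride S, padding P
    return (W - F + 2 * P) // S + 1

print(conv_output_size(7, 3, S=1, P=0))   # 7x7 input, 3x3 filter, stride 1, pad 0 -> 5
print(conv_output_size(32, 3, S=1, P=1))  # 3x3 conv, stride 1, pad 1 on 32x32 -> 32
print(conv_output_size(32, 2, S=2, P=0))  # 2x2 max-pool, stride 2 -> 16
###Output
_____no_output_____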
###Code
import torch.nn as nn
from box_convolution import BoxConv2d
import torch.nn.functional as F
from torchsummary import summary
class Flatten(nn.Module):
def forward(self, x):
return x.view(x.size()[0], -1)
# define the CNN architecture
class Net(nn.Module):
"""
[3@32x32] Input
[16@32x32] CONV1 (3x3), stride 1, pad 1
[16@16x16] POOL1 (2x2) stride 2
[32@16x16] CONV2 (5x5), stride 1, pad 2
[32@8x8] POOL2 (2x2) stride 2
[64@8x8] CONV3 (5x5), stride 1, pad 2
[64@4x4] POOL3 (2x2) stride 2
[128] FC
[10] FC
[10] Softmax
"""
def __init__(self):
super(Net, self).__init__()
self.model = nn.Sequential(
BoxConv2d(3, 16, 240, 320),
nn.ReLU(),
nn.MaxPool2d(2, 2),
BoxConv2d(16, 32, 240, 320),
nn.ReLU(),
nn.MaxPool2d(2, 2),
BoxConv2d(32, 64, 240, 320),
nn.ReLU(),
nn.MaxPool2d(2, 2),
Flatten(),
nn.Linear(64 * 4 * 4, 500),
nn.Dropout(p=0.2),
nn.Linear(500, 10),
nn.LogSoftmax(dim=1)
)
def forward(self, x):
# add sequence of convolutional and max pooling layers
x = self.model(x)
return x
# create a complete CNN
model = Net()
print(model)
# move tensors to GPU if CUDA is available
if train_on_gpu:
model.cuda()
print(summary(model, (3, 32, 32)))
###Output
Net(
(model): Sequential(
(0): BoxConv2d()
(1): ReLU()
(2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(3): BoxConv2d()
(4): ReLU()
(5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(6): BoxConv2d()
(7): ReLU()
(8): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(9): Flatten()
(10): Linear(in_features=1024, out_features=500, bias=True)
(11): Dropout(p=0.2)
(12): Linear(in_features=500, out_features=10, bias=True)
(13): LogSoftmax()
)
)
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Decide on a loss and optimization function that is best suited for this classification task. The linked code examples from above may be a good starting point; [this PyTorch classification example](https://github.com/pytorch/tutorials/blob/master/beginner_source/blitz/cifar10_tutorial.py) or [this, more complex Keras example](https://github.com/keras-team/keras/blob/master/examples/cifar10_cnn.py). Pay close attention to the value for **learning rate** as this value determines how your model converges to a small error. TODO: Define the loss and optimizer and see how these choices change the loss over time.
###Code
import torch.optim as optim
# specify loss function
criterion = nn.NLLLoss()
# specify optimizer
optimizer = optim.SGD(model.parameters(), lr=1e-2, momentum=0.1)
###Output
_____no_output_____
###Markdown
--- Train the NetworkRemember to look at how the training and validation loss decreases over time; if the validation loss ever increases it indicates possible overfitting.
###Code
# number of epochs to train the model
n_epochs = 30 # you may increase this number to train a final model
valid_loss_min = np.Inf # track change in validation loss
for epoch in range(1, n_epochs+1):
# keep track of training and validation loss
train_loss = 0.0
valid_loss = 0.0
###################
# train the model #
###################
model.train()
for data, target in train_loader:
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update training loss
train_loss += loss.item()*data.size(0)
######################
# validate the model #
######################
model.eval()
for data, target in valid_loader:
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update average validation loss
valid_loss += loss.item()*data.size(0)
# calculate average losses
train_loss = train_loss/len(train_loader.dataset)
valid_loss = valid_loss/len(valid_loader.dataset)
# print training/validation statistics
print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
epoch, train_loss, valid_loss))
# save model if validation loss has decreased
if valid_loss <= valid_loss_min:
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model ...'.format(
valid_loss_min,
valid_loss))
torch.save(model.state_dict(), 'model_cifar.pt')
valid_loss_min = valid_loss
###Output
Epoch: 1 Training Loss: 1.567243 Validation Loss: 0.318992
Validation loss decreased (inf --> 0.318992). Saving model ...
Epoch: 2 Training Loss: 1.173796 Validation Loss: 0.269032
Validation loss decreased (0.318992 --> 0.269032). Saving model ...
Epoch: 3 Training Loss: 1.028757 Validation Loss: 0.246770
Validation loss decreased (0.269032 --> 0.246770). Saving model ...
Epoch: 4 Training Loss: 0.921441 Validation Loss: 0.216404
Validation loss decreased (0.246770 --> 0.216404). Saving model ...
Epoch: 5 Training Loss: 0.834351 Validation Loss: 0.207697
Validation loss decreased (0.216404 --> 0.207697). Saving model ...
Epoch: 6 Training Loss: 0.760644 Validation Loss: 0.200960
Validation loss decreased (0.207697 --> 0.200960). Saving model ...
Epoch: 7 Training Loss: 0.701266 Validation Loss: 0.182290
Validation loss decreased (0.200960 --> 0.182290). Saving model ...
Epoch: 8 Training Loss: 0.651235 Validation Loss: 0.175738
Validation loss decreased (0.182290 --> 0.175738). Saving model ...
Epoch: 9 Training Loss: 0.604201 Validation Loss: 0.179084
Epoch: 10 Training Loss: 0.563799 Validation Loss: 0.176997
Epoch: 11 Training Loss: 0.526861 Validation Loss: 0.167888
Validation loss decreased (0.175738 --> 0.167888). Saving model ...
Epoch: 12 Training Loss: 0.495276 Validation Loss: 0.166380
Validation loss decreased (0.167888 --> 0.166380). Saving model ...
Epoch: 13 Training Loss: 0.459740 Validation Loss: 0.167014
Epoch: 14 Training Loss: 0.431169 Validation Loss: 0.167763
Epoch: 15 Training Loss: 0.402207 Validation Loss: 0.183173
Epoch: 16 Training Loss: 0.375668 Validation Loss: 0.169221
Epoch: 17 Training Loss: 0.347971 Validation Loss: 0.188686
Epoch: 18 Training Loss: 0.326553 Validation Loss: 0.212313
Epoch: 19 Training Loss: 0.305025 Validation Loss: 0.191567
Epoch: 20 Training Loss: 0.283192 Validation Loss: 0.208092
Epoch: 21 Training Loss: 0.261318 Validation Loss: 0.212983
Epoch: 22 Training Loss: 0.242415 Validation Loss: 0.220880
Epoch: 23 Training Loss: 0.229040 Validation Loss: 0.226403
Epoch: 24 Training Loss: 0.213435 Validation Loss: 0.239513
Epoch: 25 Training Loss: 0.201142 Validation Loss: 0.265936
Epoch: 26 Training Loss: 0.185206 Validation Loss: 0.250317
Epoch: 27 Training Loss: 0.175800 Validation Loss: 0.264945
Epoch: 28 Training Loss: 0.162568 Validation Loss: 0.282741
Epoch: 29 Training Loss: 0.156265 Validation Loss: 0.284597
Epoch: 30 Training Loss: 0.147730 Validation Loss: 0.304910
Epoch: 31 Training Loss: 0.141637 Validation Loss: 0.337037
Epoch: 32 Training Loss: 0.132603 Validation Loss: 0.316942
Epoch: 33 Training Loss: 0.126549 Validation Loss: 0.339389
Epoch: 34 Training Loss: 0.126471 Validation Loss: 0.333883
###Markdown
Load the Model with the Lowest Validation Loss
###Code
model.load_state_dict(torch.load('model_cifar.pt'))
###Output
_____no_output_____
###Markdown
--- Test the Trained NetworkTest your trained model on previously unseen data! A "good" result will be a CNN that gets around 70% (or more, try your best!) accuracy on these test images.
###Code
# track test loss
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval()
# iterate over test data
for data, target in test_loader:
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# average test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.859250
Test Accuracy of airplane: 75% (757/1000)
Test Accuracy of automobile: 88% (889/1000)
Test Accuracy of bird: 65% (657/1000)
Test Accuracy of cat: 56% (568/1000)
Test Accuracy of deer: 59% (598/1000)
Test Accuracy of dog: 51% (518/1000)
Test Accuracy of frog: 80% (800/1000)
Test Accuracy of horse: 81% (812/1000)
Test Accuracy of ship: 81% (813/1000)
Test Accuracy of truck: 69% (692/1000)
Test Accuracy (Overall): 71% (7104/10000)
###Markdown
Question: What are your model's weaknesses and how might they be improved? **Answer**: (double-click to edit and add an answer) Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
images = images.cpu()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20/2, idx+1, xticks=[], yticks=[])
imshow(images[idx])
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____ |
S2/RITAL/TAL/TME/TME2/UnsupervisedClustering.ipynb | ###Markdown
Exploration des techniques de clusteringLe but de ce tp est de faire face à la problématique: Voici XXX documents -bruts, non étiquetés-... Comment les valoriser? Les exploiter? Les comprendre? Les résumer? Nous avons vu dans les séances précédentes comment représenter les données textuelles sous forme de sacs de mots:$$X = \begin{matrix} & \textbf{t}_j \\ & \downarrow \\ \textbf{d}_i \rightarrow & \begin{pmatrix} x_{1,1} & \dots & x_{1,D} \\ \vdots & \ddots & \vdots \\ x_{N,1} & \dots & x_{N,D} \\ \end{pmatrix} \end{matrix} $$A partir de cette représentation, les questions qui se posent sont les suivantes:1. Quel algorithme de clustiering choisir? - K-means, LSA, pLSA, LDA1. Quels résultats attendre? - qualité, bruit, exploitabilité immédiate etc...1. Quelles analyses qualitatives effectuer pour comprendre les groupes?1. Comment boucler, itérer pour améliorer la qualité du processus?
###Code
import numpy as np
import matplotlib.pyplot as plt
import codecs
import re
import os.path
import sklearn
###Output
_____no_output_____
###Markdown
Data loading
###Code
from sklearn.datasets import fetch_20newsgroups
newsgroups_train = fetch_20newsgroups(subset='train')
# conversion to BoW + tf-idf
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer() # TfidfVectorizer(max_features=500)
vectors = vectorizer.fit_transform(newsgroups_train.data)
print(vectors.shape)
# measure of sparsity = 157 active words per document out of 130000 !!
print(vectors.nnz / float(vectors.shape[0]))
# map the indices back to the words
print([(i,vectorizer.get_feature_names_out()[i]) \
for i in np.random.randint(vectors.shape[1], size=10)])
# handling of the labels (used for evaluation only, since the setting is unsupervised)
Y = newsgroups_train.target
print(Y[:10]) # numeric labels
print([newsgroups_train.target_names[i] for i in Y[:10]]) # true class names
###Output
[ 7 4 4 1 14 16 13 3 2 4]
['rec.autos', 'comp.sys.mac.hardware', 'comp.sys.mac.hardware', 'comp.graphics', 'sci.space', 'talk.politics.guns', 'sci.med', 'comp.sys.ibm.pc.hardware', 'comp.os.ms-windows.misc', 'comp.sys.mac.hardware']
###Markdown
Preliminary tests. Let's start at the beginning: (almost) every unsupervised problem should first be analysed with $k$-means!
###Code
from sklearn.cluster import KMeans
# Algorithm => may take a long time if the vocabulary has not been reduced !!
# Note: the algorithm seems to convert the data to dense vectors => a disaster for us !!!
# => arbitrary cap on the number of iterations + vocabulary limitation
from time import time
t0 = time()
kmeans = KMeans(n_clusters=20, random_state=0, max_iter=10).fit(vectors)
print("done in %0.3fs" % (time() - t0))
from wordcloud import WordCloud
# retrieve the prototypes (cluster centroids):
print(kmeans.cluster_centers_.shape)
n_class = kmeans.cluster_centers_.shape[0]
# most important words per cluster => TODO
# print version / word-cloud version
pred = kmeans.predict(vectors)
features_vocab = np.array(vectorizer.get_feature_names_out())
print(features_vocab[52367])
for i in range(n_class):
    # use the highest-weighted terms of each cluster centroid as word frequencies
    top = kmeans.cluster_centers_[i].argsort()[::-1][:20]
    freqs = {features_vocab[j]: kmeans.cluster_centers_[i][j] for j in top}
    wordcloud = WordCloud(background_color='white', max_words=10).generate_from_frequencies(freqs)
plt.imshow(wordcloud)
plt.axis("off")
plt.show()
###Output
(20, 130107)
errands
###Markdown
Limitations
- Limitations related to the data representation
  - too many words
  - too many frequent words that mislead the algorithm
  - ...
- Limitations related to the algorithm
  - the Euclidean distance is ill-suited here

The algorithmic limitations will be addressed by changing the algorithm... The data-representation limitations will be addressed by your engineering skills.
Algorithms to try:
- LSA https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html#sklearn.decomposition.TruncatedSVD
- LDA https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.LatentDirichletAllocation.html

**Note:** for quick tests, it is easier to stay within scikit-learn... Nevertheless, in an industrial setting you should use more efficient tools such as those available in the ```gensim``` library. If you feel comfortable with textual data, go straight to these tools: https://radimrehurek.com/gensim/models/ldamodel.html
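As a starting point, here is a minimal LSA sketch using scikit-learn's `TruncatedSVD` on the tf-idf matrix built above; the number of components (100) and the L2 normalisation step are illustrative choices, not requirements.
###Code
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.pipeline import make_pipeline

# project the sparse tf-idf matrix onto 100 latent dimensions, then L2-normalise
lsa = make_pipeline(TruncatedSVD(n_components=100, random_state=0), Normalizer(copy=False))
vectors_lsa = lsa.fit_transform(vectors)
print(vectors_lsa.shape)

# k-means generally behaves much better in this dense, low-dimensional space
kmeans_lsa = KMeans(n_clusters=20, random_state=0).fit(vectors_lsa)
###Output
_____no_output_____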
###Code
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import TfidfVectorizer
limit = int(0.8*len(newsgroups_train.data))
vectorizer = TfidfVectorizer()
X_train, X_test = newsgroups_train.data[:limit], newsgroups_train.data[limit:]
X_train_vector = vectorizer.fit_transform(X_train)
X_test_vector = vectorizer.transform(X_test)
lda = LatentDirichletAllocation(n_components=20)
lda.fit(X_train_vector)
out_lda = lda.transform(X_test_vector)
pred = np.argmax(out_lda, axis=1)
# note: LDA topic indices are not aligned with the newsgroup labels, so this direct
# comparison (restricted to the held-out test labels) is only a rough sanity check
print(np.where(pred == Y[limit:], 1, 0).mean())
print(pred.mean())
print(Y.mean())
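# Illustrative follow-up: print the 10 highest-weighted words of each LDA topic to get
# a qualitative feel for the clusters (uses the fitted lda and vectorizer from this cell).
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(k, " ".join(terms[j] for j in topic.argsort()[::-1][:10]))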
###Output
1.0883782589482986
9.29299982322786
|
Notebook/Intro to my research.ipynb | ###Markdown
Different APIs for text analytics and semantic analysis using machine learning were tried, including:

Algorithmia - many text analytics, NLP and entity extraction algorithms are available as part of their cloud-based offering. Algorithmia algorithms tried out include:
- Part-of-speech tagging using OpenNLP: http://opennlp.apache.org/ The Part of Speech Tagger marks tokens with their corresponding word type based on the token itself and the context of the token. A token might have multiple POS tags depending on the token and the context. The OpenNLP POS Tagger uses a probability model to predict the correct POS tag out of the tag set. To limit the possible tags for a token, a tag dictionary can be used, which increases the tagging and runtime performance of the tagger. Parts are tagged according to the conventions of the Penn Treebank Project (https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html). For example, a plural noun is denoted NNS, a singular or mass noun NN, and a determiner (such as a/an, every, no, the, another, any, some, each, etc.) DT.
- Tokenizer: https://algorithmia.com/algorithms/ApacheOpenNLP/TokenizeBySentence
- Auto tagging of text: the algorithm uses a variant of nlp/LDA to extract tags / keywords - https://algorithmia.com/algorithms/nlp/AutoTag

Aylien - Classification by Taxonomy: https://developer.aylien.com/

Use LDA to Classify Text Documents - LDA is an algorithm that can be used to generate topics to understand a document's general theme: http://blog.algorithmia.com/lda-algorithm-classify-text-documents/

MonkeyLearn: Taxonomy Classifier: https://app.monkeylearn.com/main/classifiers/cl_b7qAkDMz/tab/tree-sandbox/

Output - Python dictionary data structure inside Algorithmia

Tesseract OCR in Algorithmia: https://algorithmia.com/algorithms/tesseractocr/OCR

Create PDF using ReportLab PLUS: https://www.reportlab.com/reportlabplus/
###Code
#Text Analysis or Natural Language Processing (NLP) - Algorithmia API
import Algorithmia
client = Algorithmia.client('sim3x6PzEv6m2icRR+23rqTTcOo1')
#from Algorithmia.acl import ReadAcl, AclType
#Next create a data collection called nlp_directory:
import os
os.listdir(".")
# Set your Data URI -- a jpg
input = {"src":"data://shamitb/ocr/ai.jpg"}
#setting up a client object
client_algo = Algorithmia.client('sim3x6PzEv6m2icRR+23rqTTcOo1')
#passing the algo name for OCR detection
algo = client_algo.algo('tesseractocr/OCR/0.1.0')
#applying the algorithm
response = algo.pipe(input).result
#input object after being processsed by algorithm is produced
print(response['result'])
print('\n*******************************\n TAXONOMY : \n')
#classification of data takes place here into categories, by the algorithm
# **************** Aylien - Taxonomy ****************
from aylienapiclient import textapi
client_aylien = textapi.Client("a19bb245", "2623b77754833e2711998a0b0bdad9db")
#response object from requested api is made the input for algo.
text = "Predicting future stock prices has become an infamous topic in the fields of finance and trading. There is a growing interest among investors, financiers, and researchers about the strong prediction of future prices so that stocks can be traded profitably. Professionals today use technical analysis along with fundamental analysis to analyse stocks for making uncanny investment choices. Fundamental analysis is the traditional approach that studies for contributing factors like the company’s revenues, expenses, market position, and annual growth rates. Technical analysis (Chen, 2014), on the other hand, is completely based on the study of market fluctuations. Technical analysts study market patterns and use price data in different mathematical computations to forecast future prices. In the prediction process described in paper, six different technical indicators i.e. relative strength index (RSI) (Wu and Diao, 2015), simple moving average (SMA) (Lauren and Harlili, 2014), average directional index (ADX) (Creighton and Zulkernine, 2017), correlation (Li et al., 2016), parabolic stop and reverse (SAR) (Putra, Permanasari and Fauziati, 2016), and the return which is the dividend paid by the stock for that particular time are used. The complete analysis is broken down into subparts of implementation. First, an unsupervised algorithm to predict the regimes is implemented, followed by the visualization of regimes, then training a support vector classifier and using it to predict the current day’s trend is performed. The standardscaler function is instantiated and created an unsupervised learning algorithm to make the regime prediction. There are four different regimes that are developed for which, stock market returns are calculated through a single Gaussian distribution particle swarm optimization (PSO) technique which is applied to these regimes for training an adaptive linear combiner to form a prediction model. The mean and covariance values are used for regime plotting. The data from data split is fed in Support Vector Machine (SVM) given by sklearn without hyper-parameters tuning for training. The prediction made by SVM is used to create prediction signals. These signals are used to calculate the returns of the strategy. The cumulative strategy returns and the cumulative market returns are used to calculate the Sharpe ratio (Hao Li, 2017) to measure performance."
classifications = client_aylien.ClassifyByTaxonomy({"text": text, "taxonomy": "iab-qag"})
for category in classifications['categories']:
print(category['label'])
print('\n*******************************\n AUTO TAGS : \n')
# tags are being generated from the result of the previous step
# ************** Algorithmia - Auto - tag *******************
input = "Predicting future stock prices has become an infamous topic in the fields of finance and trading. There is a growing interest among investors, financiers, and researchers about the strong prediction of future prices so that stocks can be traded profitably. Professionals today use technical analysis along with fundamental analysis to analyse stocks for making uncanny investment choices. Fundamental analysis is the traditional approach that studies for contributing factors like the company’s revenues, expenses, market position, and annual growth rates. Technical analysis (Chen, 2014), on the other hand, is completely based on the study of market fluctuations. Technical analysts study market patterns and use price data in different mathematical computations to forecast future prices. In the prediction process described in paper, six different technical indicators i.e. relative strength index (RSI) (Wu and Diao, 2015), simple moving average (SMA) (Lauren and Harlili, 2014), average directional index (ADX) (Creighton and Zulkernine, 2017), correlation (Li et al., 2016), parabolic stop and reverse (SAR) (Putra, Permanasari and Fauziati, 2016), and the return which is the dividend paid by the stock for that particular time are used. The complete analysis is broken down into subparts of implementation. First, an unsupervised algorithm to predict the regimes is implemented, followed by the visualization of regimes, then training a support vector classifier and using it to predict the current day’s trend is performed. The standardscaler function is instantiated and created an unsupervised learning algorithm to make the regime prediction. There are four different regimes that are developed for which, stock market returns are calculated through a single Gaussian distribution particle swarm optimization (PSO) technique which is applied to these regimes for training an adaptive linear combiner to form a prediction model. The mean and covariance values are used for regime plotting. The data from data split is fed in Support Vector Machine (SVM) given by sklearn without hyper-parameters tuning for training. The prediction made by SVM is used to create prediction signals. These signals are used to calculate the returns of the strategy. The cumulative strategy returns and the cumulative market returns are used to calculate the Sharpe ratio (Hao Li, 2017) to measure performance."
#tags are created over results from the text read by ocr algo over words which occur the most
algo = client.algo('nlp/AutoTag/1.0.0')
response2 = algo.pipe(input)
for category in response2.result:
print(category)
print(response2.result)
print('\n*******************************\n ENTITIES : \n')
# **************** Algorithmia - Entities ****************
text ="Predicting future stock prices has become an infamous topic in the fields of finance and trading. There is a growing interest among investors, financiers, and researchers about the strong prediction of future prices so that stocks can be traded profitably. Professionals today use technical analysis along with fundamental analysis to analyse stocks for making uncanny investment choices. Fundamental analysis is the traditional approach that studies for contributing factors like the company’s revenues, expenses, market position, and annual growth rates. Technical analysis (Chen, 2014), on the other hand, is completely based on the study of market fluctuations. Technical analysts study market patterns and use price data in different mathematical computations to forecast future prices. In the prediction process described in paper, six different technical indicators i.e. relative strength index (RSI) (Wu and Diao, 2015), simple moving average (SMA) (Lauren and Harlili, 2014), average directional index (ADX) (Creighton and Zulkernine, 2017), correlation (Li et al., 2016), parabolic stop and reverse (SAR) (Putra, Permanasari and Fauziati, 2016), and the return which is the dividend paid by the stock for that particular time are used. The complete analysis is broken down into subparts of implementation. First, an unsupervised algorithm to predict the regimes is implemented, followed by the visualization of regimes, then training a support vector classifier and using it to predict the current day’s trend is performed. The standardscaler function is instantiated and created an unsupervised learning algorithm to make the regime prediction. There are four different regimes that are developed for which, stock market returns are calculated through a single Gaussian distribution particle swarm optimization (PSO) technique which is applied to these regimes for training an adaptive linear combiner to form a prediction model. The mean and covariance values are used for regime plotting. The data from data split is fed in Support Vector Machine (SVM) given by sklearn without hyper-parameters tuning for training. The prediction made by SVM is used to create prediction signals. These signals are used to calculate the returns of the strategy. The cumulative strategy returns and the cumulative market returns are used to calculate the Sharpe ratio (Hao Li, 2017) to measure performance."
text = text.encode('ascii', 'ignore').decode()  # drop non-ASCII characters before tagging
# the OCR result is again used to identify entities like numbers, organizations and dates in the data by classifying them
algo = client.algo('StanfordNLP/NamedEntityRecognition/0.2.0')
entities = algo.pipe(text)
print(entities.result)
entities = entities.result
print('\n*******************************\n DOCUMENT SIMILARITY : \n')
# **************** Algorithmia - TextSimilarity ****************
input = {"files": [["doc1", "Predicting future stock prices has become an infamous topic in the fields of finance and trading. There is a growing interest among investors, financiers, and researchers about the strong prediction of future prices so that stocks can be traded profitably. Professionals today use technical analysis along with fundamental analysis to analyse stocks for making uncanny investment choices. Fundamental analysis is the traditional approach that studies for contributing factors like the company’s revenues, expenses, market position, and annual growth rates. Technical analysis (Chen, 2014), on the other hand, is completely based on the study of market fluctuations. Technical analysts study market patterns and use price data in different mathematical computations to forecast future prices. In the prediction process described in paper, six different technical indicators i.e. relative strength index (RSI) (Wu and Diao, 2015), simple moving average (SMA) (Lauren and Harlili, 2014), average directional index (ADX) (Creighton and Zulkernine, 2017), correlation (Li et al., 2016), parabolic stop and reverse (SAR) (Putra, Permanasari and Fauziati, 2016), and the return which is the dividend paid by the stock for that particular time are used. The complete analysis is broken down into subparts of implementation. First, an unsupervised algorithm to predict the regimes is implemented, followed by the visualization of regimes, then training a support vector classifier and using it to predict the current day’s trend is performed. The standardscaler function is instantiated and created an unsupervised learning algorithm to make the regime prediction. There are four different regimes that are developed for which, stock market returns are calculated through a single Gaussian distribution particle swarm optimization (PSO) technique which is applied to these regimes for training an adaptive linear combiner to form a prediction model. The mean and covariance values are used for regime plotting. The data from data split is fed in Support Vector Machine (SVM) given by sklearn without hyper-parameters tuning for training. The prediction made by SVM is used to create prediction signals. These signals are used to calculate the returns of the strategy. The cumulative strategy returns and the cumulative market returns are used to calculate the Sharpe ratio (Hao Li, 2017) to measure performance."], ["doc2", "the movie about cars"], ["doc3", "the document about cats"]]}
#print(input)
# document similarity is checked between our text (doc1) and two other documents (doc2, doc3)
algo = client.algo('PetiteProgrammer/TextSimilarity/0.1.2')
print(algo.pipe(input).result)
print('\n*******************************\n SENTENCE PARSING : \n')
"""
Parsing is a traditional grammatical exercise that involves breaking down a text into its component
parts of speech with an explanation of the form, function, and syntactic relationship of each part.
"""
# **************** Algorithmia - SENTENCE PARSING ****************
input = {
"src":"Predicting future stock prices has become an infamous topic in the fields of finance and trading. There is a growing interest among investors, financiers, and researchers about the strong prediction of future prices so that stocks can be traded profitably. Professionals today use technical analysis along with fundamental analysis to analyse stocks for making uncanny investment choices. Fundamental analysis is the traditional approach that studies for contributing factors like the company’s revenues, expenses, market position, and annual growth rates. Technical analysis (Chen, 2014), on the other hand, is completely based on the study of market fluctuations. Technical analysts study market patterns and use price data in different mathematical computations to forecast future prices. In the prediction process described in paper, six different technical indicators i.e. relative strength index (RSI) (Wu and Diao, 2015), simple moving average (SMA) (Lauren and Harlili, 2014), average directional index (ADX) (Creighton and Zulkernine, 2017), correlation (Li et al., 2016), parabolic stop and reverse (SAR) (Putra, Permanasari and Fauziati, 2016), and the return which is the dividend paid by the stock for that particular time are used. The complete analysis is broken down into subparts of implementation. First, an unsupervised algorithm to predict the regimes is implemented, followed by the visualization of regimes, then training a support vector classifier and using it to predict the current day’s trend is performed. The standardscaler function is instantiated and created an unsupervised learning algorithm to make the regime prediction. There are four different regimes that are developed for which, stock market returns are calculated through a single Gaussian distribution particle swarm optimization (PSO) technique which is applied to these regimes for training an adaptive linear combiner to form a prediction model. The mean and covariance values are used for regime plotting. The data from data split is fed in Support Vector Machine (SVM) given by sklearn without hyper-parameters tuning for training. The prediction made by SVM is used to create prediction signals. These signals are used to calculate the returns of the strategy. The cumulative strategy returns and the cumulative market returns are used to calculate the Sharpe ratio (Hao Li, 2017) to measure performance.",
"format":"conll",
"language":"english"
}
#prepositions, nouns, verbs have been omitted from the text.
algo = client.algo('deeplearning/Parsey/1.0.2')
print(algo.pipe(input).result)
print('\n*******************************\n CO-REFERENCE : \n')
# ****************** CO REFERENCE **********************
algo = client.algo('StanfordNLP/DeterministicCoreferenceResolution/0.1.1')
"""
This algorithm resolves coreferences: it groups together expressions that refer to the same entity, yielding meaningful details from the data, e.g.
{'terra Solutions lwww.entenasnlutinns.com': ['it']}
"""
print(algo.pipe(text).result)
print('\n*******************************\n PART-OF-SPEECH (POS) TAGGER : \n')
# ****************** PART-OF-SPEECH (POS) TAGGER **********************
algo = client.algo('ApacheOpenNLP/POSTagger/0.1.1')
print(text)
# tags parts of speech and returns an array, but this throws an error - all inputs must be JSON, while our result is a plain string; no fix found for that yet
#text = response['result']
print(algo.pipe(text).result)
print('\n*******************************\n TOKENIZE : \n')
# ****************** TOKENIZE **********************
algo = client.algo('ApacheOpenNLP/TokenizeBySentence/0.1.0')
print(algo.pipe(text))
print('\n*******************************\n LDA : \n')
# ****************** LDA **********************
#classify text in a document to a particular topic.
algo = client.algo('ApacheOpenNLP/SentenceDetection/0.1.0')
sentences = algo.pipe(text)
#print(sentences)
algo = client.algo('nlp/LDA/1.0.0')
input = {
"docsList": sentences.result,
"mode": "quality"
}
print(input)
LDA = algo.pipe(input).result
print(LDA)
#Summary
print('\n*******************************\n Summary : \n')
summ_text = "Predicting future stock prices has become an infamous topic in the fields of finance and trading. There is a growing interest among investors, financiers, and researchers about the strong prediction of future prices so that stocks can be traded profitably. Professionals today use technical analysis along with fundamental analysis to analyse stocks for making uncanny investment choices. Fundamental analysis is the traditional approach that studies for contributing factors like the company’s revenues, expenses, market position, and annual growth rates. Technical analysis (Chen, 2014), on the other hand, is completely based on the study of market fluctuations. Technical analysts study market patterns and use price data in different mathematical computations to forecast future prices. In the prediction process described in paper, six different technical indicators i.e. relative strength index (RSI) (Wu and Diao, 2015), simple moving average (SMA) (Lauren and Harlili, 2014), average directional index (ADX) (Creighton and Zulkernine, 2017), correlation (Li et al., 2016), parabolic stop and reverse (SAR) (Putra, Permanasari and Fauziati, 2016), and the return which is the dividend paid by the stock for that particular time are used. The complete analysis is broken down into subparts of implementation. First, an unsupervised algorithm to predict the regimes is implemented, followed by the visualization of regimes, then training a support vector classifier and using it to predict the current day’s trend is performed. The standardscaler function is instantiated and created an unsupervised learning algorithm to make the regime prediction. There are four different regimes that are developed for which, stock market returns are calculated through a single Gaussian distribution particle swarm optimization (PSO) technique which is applied to these regimes for training an adaptive linear combiner to form a prediction model. The mean and covariance values are used for regime plotting. The data from data split is fed in Support Vector Machine (SVM) given by sklearn without hyper-parameters tuning for training. The prediction made by SVM is used to create prediction signals. These signals are used to calculate the returns of the strategy. The cumulative strategy returns and the cumulative market returns are used to calculate the Sharpe ratio (Hao Li, 2017) to measure performance."
#summarizes the text - result of OCR
algo = client.algo('nlp/Summarizer/0.1.8')
summ = algo.pipe(summ_text).result
print(algo.pipe(summ_text).result)
#Sentiment Analysis
print('\n*******************************\n Sentiments : \n')
algo = client.algo('nlp/SentimentAnalysis/1.0.5')
sentiment = []
for category in response2.result:
s = algo.pipe(category).result
print("Sentiment Score (",category,"): ", s)
sentiment.append(s)
#checking the level of sentiment
import numpy
sentiment = numpy.asarray(sentiment)
how_much_senti = sentiment.var()
#Var returns the variance of the array elements, a measure of the spread of a distribution.
#The variance is computed for the flattened array by default, otherwise over the specified axis.
print(how_much_senti)
#this variance value can affect the result of our classification
###Output
*******************************
Sentiments :
Sentiment Score ( analysis ): 2
Sentiment Score ( data ): 2
Sentiment Score ( market ): 2
Sentiment Score ( prediction ): 2
Sentiment Score ( regimes ): 2
Sentiment Score ( returns ): 2
Sentiment Score ( technical ): 2
Sentiment Score ( training ): 2
0.0
|
doc/_build/Notebooks/2.Preprocess/2.2Mask-Copy1.ipynb | ###Markdown
Event definition Time selection For the UK, the event of interest is UK February average precipitation. Since we download monthly averages, we do not have to do any preprocessing along the time dimension here. For the Siberian heatwave, we are interested in the March-May average, so there we need to take the seasonal average of the monthly timeseries (a small sketch of that step is included after the SEAS5 data is loaded below). Spatial selection From grid to country-averaged timeseries. In this notebook we explore how to best extract area-averaged precipitation and test this for UK precipitation within SEAS5 and EOBS, as part of our UNSEEN-open [workflow](../Workflow.ipynb). The code is inspired by Matteo De Felice's [blog](http://www.matteodefelice.name/post/aggregating-gridded-data/) -- credits to him! We create a mask for all 241 countries within [Regionmask](https://regionmask.readthedocs.io/en/stable/), which has predefined countries from [Natural Earth datasets](http://www.naturalearthdata.com) (shapefiles). We use the mask to go from gridded precipitation to country-averaged timeseries. We start with the UK, number 31 within the country mask. Import packages We need the packages regionmask for masking and xesmf for regridding. I cannot install xesmf into the UNSEEN-open environment without breaking my environment, so in this notebook I use a separate 'upscale' environment, as suggested by this [issue](https://github.com/JiaweiZhuang/xESMF/issues/47#issuecomment-582421822). I use the packages esmpy=7.1.0 xesmf=0.2.1 regionmask cartopy matplotlib xarray numpy netcdf4.
###Code
##This is so variables get printed within jupyter
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
##import packages
import os
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
import cartopy
import cartopy.crs as ccrs
import matplotlib.ticker as mticker
import regionmask # Masking
import xesmf as xe # Regridding
##We want the working directory to be the UNSEEN-open directory
pwd = os.getcwd() ##current working directory is UNSEEN-open/Notebooks/1.Download
pwd #print the present working directory
os.chdir(pwd+'/../../') # Change the working directory to UNSEEN-open
os.getcwd() #print the working directory
###Output
_____no_output_____
###Markdown
Load SEAS5 and EOBS From CDS, we retrieve SEAS5 in notebook [1.2 Retrieve](1.Download/1.2Retrieve.ipynb) and concatenate the retrieved files in notebook [1.3 Merge](1.Download/1.3Merge.ipynb). We create a netcdf file containing the dimensions lat, lon, time (35 years), number (25 ensembles) and leadtime (5 initialization months).
###Code
SEAS5 = xr.open_dataset('../UK_example/SEAS5/SEAS5.nc')
SEAS5
###Output
_____no_output_____
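###Markdown
As noted in the introduction, for the Siberian heatwave the monthly series would first be reduced to a March-May (MAM) seasonal mean. That step is not needed for the UK February data loaded here; a minimal, hypothetical xarray sketch of it (assuming a monthly dataset `SEAS5_siberia`) is left as a comment below.
###Code
# Hypothetical sketch of the MAM seasonal-averaging step (not needed for the UK data):
# MAM = (SEAS5_siberia
#        .sel(time=SEAS5_siberia['time.month'].isin([3, 4, 5]))
#        .groupby('time.year')
#        .mean('time'))
###Output
_____no_output_____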
###Markdown
And load EOBS netcdf with only February precipitation, resulting in 71 values, one for each year within 1950 - 2020 over the European domain (25N-75N x 40W-75E).
###Code
EOBS = xr.open_dataset('../UK_example/EOBS/EOBS.nc')
EOBS
###Output
_____no_output_____
###Markdown
MaskingHere we load the countries and create a mask for SEAS5 and for EOBS. Regionmask has predefined countries from [Natural Earth datasets](http://www.naturalearthdata.com) (shapefiles).
###Code
countries = regionmask.defined_regions.natural_earth.countries_50
countries
###Output
_____no_output_____
###Markdown
Now we create the mask for the SEAS5 grid. Only one timestep is needed to create the mask. This mask will later on be used to mask all the timesteps.
###Code
SEAS5_mask = countries.mask(SEAS5.sel(leadtime=2, number=0, time='1982'),
lon_name='longitude',
lat_name='latitude')
###Output
_____no_output_____
###Markdown
And create a plot to illustrate what the mask looks like. The mask just indicates for each gridcell what country the gridcell belongs to.
###Code
SEAS5_mask
SEAS5_mask.plot()
###Output
_____no_output_____
###Markdown
Extract spatial average And now we can extract the UK-averaged precipitation within SEAS5 by using the mask index of the UK: `where(SEAS5_mask == UK_index)`. So we need to find the UK's index among the 241 abbreviations; for the UK use 'GB'. Additionally, if you can't find a country, use `countries.regions` to get the full names of the countries.
###Code
countries.abbrevs.index('GB')
###Output
_____no_output_____
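###Markdown
Rather than hard-coding the index (31) in the cells below, the lookup above can be stored in a variable and reused in the masking step -- a small sketch:
###Code
# store the UK index once and reuse it instead of the hard-coded 31
UK_index = countries.abbrevs.index('GB')
UK_index
###Output
_____no_output_____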
###Markdown
To select the UK average, we select SEAS5 precipitation (tprate), select the gridcells that are within the UK and take the mean over those gridcells. This results in a dataset of February precipitation for 35 years (1981-2016), with 5 leadtimes and 25 ensemble members.
###Code
SEAS5_UK = (SEAS5['tprate']
.where(SEAS5_mask == 31)
.mean(dim=['latitude', 'longitude']))
SEAS5_UK
###Output
_____no_output_____
###Markdown
However, xarray does not take into account the area of the gridcells in taking the average. Therefore, we have to calculate the [area-weighted mean](http://xarray.pydata.org/en/stable/examples/area_weighted_temperature.html) of the gridcells. To calculate the area of each gridcell, I use cdo `cdo gridarea infile outfile`. Here I load the generated file:
###Code
Gridarea_SEAS5 = xr.open_dataset('../UK_example/Gridarea_SEAS5.nc')
Gridarea_SEAS5['cell_area'].plot()
SEAS5_UK_weighted = (SEAS5['tprate']
.where(SEAS5_mask == 31)
.weighted(Gridarea_SEAS5['cell_area'])
.mean(dim=['latitude', 'longitude'])
)
SEAS5_UK_weighted
###Output
_____no_output_____
###Markdown
What is the difference between the weighted and non-weighted average? I plot the UK average for ensemble member 0 and leadtime 2.
###Code
SEAS5_UK.sel(leadtime=2,number=0).plot()
SEAS5_UK_weighted.sel(leadtime=2,number=0).plot()
###Output
_____no_output_____
###Markdown
And a scatter plot of all ensemble members, leadtimes and years also shows little influence
###Code
plt.scatter(SEAS5_UK.values.flatten(),SEAS5_UK_weighted.values.flatten())
###Output
_____no_output_____
###Markdown
EOBS Same for EOBS. Because this is a larger domain at a higher resolution, there are more countries and they look more realistic.
###Code
EOBS_mask = countries.mask(EOBS.sel(time='1982'),
lon_name='longitude',
lat_name='latitude')
EOBS_mask.plot()
EOBS_mask
Gridarea_EOBS = xr.open_dataset('../UK_example/Gridarea_EOBS.nc')
Gridarea_EOBS['cell_area'].plot()
EOBS_UK_weighted = (EOBS['rr']
.where(EOBS_mask == 31)
.weighted(Gridarea_EOBS['cell_area'])
.mean(dim=['latitude', 'longitude'])
)
EOBS_UK_weighted
EOBS_UK_weighted.plot()
###Output
_____no_output_____
###Markdown
Save the UK weighted average datasets
###Code
SEAS5_UK_weighted.to_netcdf('Data/SEAS5_UK_weighted.nc')
EOBS_UK_weighted.to_netcdf('Data/EOBS_UK_weighted.nc') ## save as netcdf
EOBS_UK_weighted.to_pandas().to_csv('Data/EOBS_UK_weighted.csv') ## and save as csv.
SEAS5_UK_weighted.close()
EOBS_UK_weighted.close()
###Output
_____no_output_____
###Markdown
Illustrate the SEAS5 and EOBS masks for the UK Here I plot the masked mean SEAS5 and EOBS precipitation. EOBS is averaged over 71 years; SEAS5 is averaged over years, leadtimes and ensemble members.
###Code
fig, axs = plt.subplots(1, 2, subplot_kw={'projection': ccrs.OSGB()})
SEAS5['tprate'].where(SEAS5_mask == 31).mean(
dim=['time', 'leadtime', 'number']).plot(
transform=ccrs.PlateCarree(),
vmin=0,
vmax=8,
cmap=plt.cm.Blues,
ax=axs[0])
EOBS['rr'].where(EOBS_mask == 31).mean(dim='time').plot(
transform=ccrs.PlateCarree(),
vmin=0,
vmax=8,
cmap=plt.cm.Blues,
ax=axs[1])
for ax in axs.flat:
ax.coastlines(resolution='10m')
axs[0].set_title('SEAS5')
axs[1].set_title('EOBS')
###Output
/soge-home/users/cenv0732/.conda/envs/upscale/lib/python3.7/site-packages/xarray/core/nanops.py:142: RuntimeWarning: Mean of empty slice
return np.nanmean(a, axis=axis, dtype=dtype)
###Markdown
Illustrate the SEAS5 and EOBS UK average And here I plot the area-weighted average UK precipitation for SEAS5 and EOBS. For SEAS5 I plot the range, both the min/max and the 2.5/97.5 % percentiles over all ensemble members and leadtimes for each year.
###Code
ax = plt.axes()
Quantiles = SEAS5_UK_weighted.quantile([0,2.5/100, 0.5, 97.5/100,1], dim=['number','leadtime'])
ax.plot(Quantiles.time, Quantiles.sel(quantile=0.5), color='orange',label = 'SEAS5 median')
ax.fill_between(Quantiles.time.values, Quantiles.sel(quantile=0.025), Quantiles.sel(quantile=0.975), color='orange', alpha=0.2,label = '95% / min max')
ax.fill_between(Quantiles.time.values, Quantiles.sel(quantile=0), Quantiles.sel(quantile=1), color='orange', alpha=0.2)
EOBS_UK_weighted.plot(ax=ax,x='time',label = 'E-OBS')
# Quantiles_EOBS = EOBS['rr'].where(EOBS_mask == 143).mean(dim = ['latitude','longitude']).quantile([2.5/100, 0.5, 97.5/100], dim=['time'])#.plot()
# ax.plot(EOBS.time, np.repeat(Quantiles_EOBS.sel(quantile=0.5).values,71), color='blue',linestyle = '--',linewidth = 1)
# ax.plot(EOBS.time, np.repeat(Quantiles_EOBS.sel(quantile=2.5/100).values,71), color='blue',linestyle = '--',linewidth = 1)
# ax.plot(EOBS.time, np.repeat(Quantiles_EOBS.sel(quantile=97.5/100).values,71), color='blue',linestyle = '--',linewidth = 1)
plt.legend(loc = 'lower left', ncol=2 )#loc = (0.1, 0) upper left
###Output
_____no_output_____ |
Clase8_Taller2.ipynb | ###Markdown
**Practice exercise**--- The analyst is asked to make a population estimate based on relevant descriptive information about the population of developing countries. As a starting point, a program is requested that allows entering 5 countries and their respective populations as a test run, and that identifies which country has the largest number of inhabitants.
###Code
def poblacion():
pais=[]
for i in range(5):
nombre = input("ingresa el nombre del pais: ")
cant= int(input("ingrese la cantidad de habitantes: "))
pais.append((nombre,cant))
return pais
def view(pais):
print("habitantes por país: ")
for x in range(len(pais)): #para separar cada elemento de la lista
print(pais[x][0], pais[x][1])
def mayor(pais):
p=0
for x in range(1,len(pais)):
if pais[x][1]>pais[p][1]:
p=x
print("el país con la más alta población registrada es: ",pais[p][0])
pais=poblacion()
view(pais)
mayor(pais)
se=[1,10,7,9,3,15]
sorted(se) # sorts the elements of the list
###Output
_____no_output_____
###Markdown
**Workshop 2**---A program is requested that identifies the level of performance of the sales department of the company AVA. The main requirements are the following:1. Allow entering the number of salespeople in the sales department.2. Allow entering a score for each of them (scale from 1 - worst to 10 - best).3. The program must identify the salespeople with the worst and the best performance.4. Obtain the overall average performance level of the sales department.**NOTE:** if there is a tie in the score when deciding the first and last place, it is decided by alphabetical order.
###Code
print(" Empresa AVA")
print(" ")
def cantvend():
vendedor=[]
n=0
c =int(input("Ingrese la cantidad de vendedores del área de venta: "))
for i in range(c):
n=str(i+1)
vend= input("ingrese el nombre del vendedor "+n+" es: ")
puntuacion=int(input("Ingrese su puntuación (escala de 1 al 10): "))
vendedor.append((vend,puntuacion))
return vendedor
def viewvend(vendedor):
print(" ")
print("puntuación de cada vendedor: ")
print(" ")
ord=sorted(vendedor, key=lambda sub:(sub[1], sub[0]))
for x in range(len(ord)):
print(ord[x][0], ord[x][1])
def mayormenor(vendedor):
p=0
w=0
for x in range(1,len(vendedor)):
if vendedor[x][1]>vendedor[p][1]:
p=x
if vendedor[x][1]<vendedor[w][1]:
w=x
print(" ")
print("El vendedor con el mejor rendimiento es: ",vendedor[p][0])
print("El vendedor con el menor rendimiento es: ",vendedor[w][0])
def promedio(vendedor):
suma=0
for x in range(0,len(vendedor)):
suma=suma+vendedor[x][1]
prom=suma/len(vendedor)
print("el promedio general del nivel de cumplimiento del area de ventas es: ",prom)
vendedor=cantvend()
viewvend(vendedor)
mayormenor(vendedor)
promedio(vendedor)
###Output
Empresa AVA
Ingrese la cantidad de vendedores del área de venta: 5
ingrese el nombre del vendedor 1 es: Daniel
Ingrese su puntuación (escala de 1 al 10): 6
ingrese el nombre del vendedor 2 es: Natalia
Ingrese su puntuación (escala de 1 al 10): 8
ingrese el nombre del vendedor 3 es: Diego
Ingrese su puntuación (escala de 1 al 10): 6
ingrese el nombre del vendedor 4 es: Andrea
Ingrese su puntuación (escala de 1 al 10): 9
ingrese el nombre del vendedor 5 es: Fabian
Ingrese su puntuación (escala de 1 al 10): 2
puntuación de cada vendedor:
Fabian 2
Daniel 6
Diego 6
Natalia 8
Andrea 9
El vendedor con el mejor rendimiento es: Andrea
El vendedor con el menor rendimiento es: Fabian
el promedio general del nivel de cumplimiento del area de ventas es: 6.2
|
cifar_10/main.ipynb | ###Markdown
Data preprocess
###Code
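# Note: the helpers used below (cli, unzip, call, mkdir, pjoin) and the path
# variables (data_path, sample_path, model_path) are assumed to be defined in
# earlier cells / project utilities not shown in this excerpt.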
cli.download_data()
print os.getcwd()
unzip(pjoin(data_path, 'test.7z'), data_path)
unzip(pjoin(data_path, 'train.7z'), data_path)
for base_path in [data_path, sample_path]:
for category in ['dogs', 'cats']:
for folder in ['train', 'valid']:
mkdir(os.path.join(base_path, folder, category))
mkdir(pjoin(base_path, 'test', 'unknown'))
cwd = os.getcwd()
os.chdir('data/train/')
call("find . -name 'cat.*' | xargs -J ^ mv ^ cats")
call("find . -name 'dog.*' | xargs -J ^ mv ^ dogs")
os.chdir(cwd)
os.chdir('data/test/')
call("find . -name '*.jpg' | xargs -J ^ mv ^ unknown")
os.chdir(cwd)
cwd = os.getcwd()
os.chdir('data/')
train_cats, valid_cats, train_dogs, valid_dogs = train_test_split(os.listdir('train/cats'), os.listdir('train/dogs'), test_size=0.2)
# train_cats, test_cats, train_dogs, test_dogs = train_test_split(train_cats, train_dogs, test_size=0.1)
# training data
for d in valid_dogs:
call("mv train/dogs/{} valid/dogs".format(d))
for c in valid_cats:
call("mv train/cats/{} valid/cats".format(c))
# for d in test_dogs:
# call("mv train/dogs/{} test/dogs".format(d))
# for c in test_cats:
# call("mv train/cats/{} test/cats".format(c))
# sample data
for d in train_dogs[:20]:
call("cp train/dogs/{} sample/train/dogs".format(d))
for c in train_cats[:20]:
call("cp train/cats/{} sample/train/cats".format(c))
for d in valid_dogs[:5]:
call("cp valid/dogs/{} sample/valid/dogs".format(d))
for c in valid_cats[:5]:
call("cp valid/cats/{} sample/valid/cats".format(c))
from random import sample
for d in sample(os.listdir('test/unknown'), 10):
call("cp test/unknown/{} sample/test/unknown/".format(d))
os.chdir(cwd)
###Output
_____no_output_____
###Markdown
Fine tune VGG
###Code
data_path = sample_path
from utils.pretrained_models import VGG16
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint
vgg_model = VGG16.get_model(2).model
train_datagen = ImageDataGenerator()
valid_datagen = ImageDataGenerator()
test_datagen = ImageDataGenerator()
train_flow = train_datagen.flow_from_directory(
os.path.join(data_path, 'train'),
target_size=(224, 224),
batch_size=5,
class_mode='categorical')
valid_flow = valid_datagen.flow_from_directory(
os.path.join(data_path, 'valid'),
target_size=(224, 224),
batch_size=5,
class_mode='categorical')
test_flow = test_datagen.flow_from_directory(
os.path.join(data_path, 'test'),
target_size=(224, 224),
batch_size=5,
class_mode='categorical',
shuffle=False)
for l in vgg_model.layers[:-1]:
l.trainable = False
vgg_model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
checkpointer = ModelCheckpoint(pjoin(model_path, 'weights_best.hdf5'), save_best_only=True)
vgg_model.fit_generator(
train_flow,
steps_per_epoch=10,
epochs=1,
validation_data=valid_flow,
validation_steps=10,
callbacks=[checkpointer])
vgg_model.load_weights(pjoin(model_path, 'weights_best.hdf5'))
vgg_model.evaluate_generator(valid_flow)
preds = vgg_model.predict_generator(test_flow)
isdog = preds[:,1]
ids = np.array([int(f.split('.')[-2].split('/')[1]) for f in test_flow.filenames])
submission = np.stack([ids, isdog], axis=1)
result_path = pjoin(data_path, 'submission.csv')
np.savetxt(result_path, submission, fmt='%d, %.5f', header='id,label', comments="")
cli.submit_result(result_path)
###Output
_____no_output_____ |
notebooks/introduction_to_copulas.ipynb | ###Markdown
Introduction to Copulas To keep the following discussion simple, we will focus on continuous random variables. This will really help facilitate subsequent discussions. Probability density function Consider a random variable $X$ with realisation $x$. A probability density function is a special type of function that takes $x$ and maps it to the likelihood that $X=x$. An example is the standard normal density function, given as $$f(x) = \frac{1}{\sqrt{2\pi}}\exp\{-\frac{x^2}{2}\}.$$Many people tend to confuse a density function with an actual probability. A density function rather gives the likelihood/tendency that a random variable $X$ takes the value $x$. Note that there is an additional constraint that the integral over a density function must be one; the density values themselves might be larger than one. Cumulative distribution function As we saw above, $f(x)$ represents the probability density function of $X$ at $x$. The cumulative distribution function, on the other hand, is defined as $$F(x)=\int_{-\infty}^x f(t)\,dt$$ Probability Integral Transform The probability integral transform is a very simple concept which is central to copula theory. Assume that we have a random variable $X$ that comes from a distribution with cumulative distribution function $F$. Then we can define a random variable $Y$ as $$Y = F(X).$$$Y$ then follows a uniform distribution over the interval [0,1]. Can we show that $Y$ is uniform on [0,1]?$$ P(Y\leq y) =\begin{cases} 1, & \text{if } y\geq 1\\ 0, & \text{if } y\leq 0 \\ y, & \text{otherwise}\end{cases}$$A short derivation is given below, and then we demonstrate the concept in code.
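For the middle case, a short derivation (assuming $F$ is continuous and strictly increasing, so that $F^{-1}$ exists): for $0 < y < 1$,$$P(Y \leq y) = P(F(X) \leq y) = P(X \leq F^{-1}(y)) = F(F^{-1}(y)) = y,$$which is exactly the CDF of the uniform distribution on $[0,1]$.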
###Code
from scipy import stats
from matplotlib import pyplot as plt
import plotly.express as px
# Sample standard random values generated
X = stats.norm.rvs(size=10000)
# Compute the comulative probability of each value
X_trans = stats.norm.cdf(X)
# plot the results
px.histogram(X,title="Original Samples")
px.histogram(X_trans,title="Transformed Samples")
###Output
_____no_output_____
###Markdown
Copulas Multivariate data is often hard to model; the key intuition underlying copulas is that the marginal distributions can be modeled independently from the joint distribution. Let's take an example: consider a dataset with two variables, $age$ and $income$, where our goal is to model their joint distribution. Here is the data:
###Code
from copulas.datasets import sample_bivariate_age_income
df = sample_bivariate_age_income()
df.head()
###Output
_____no_output_____
###Markdown
The copula approach to modelling their joint distribution goes as follows:* Model age and income independently, i.e., get their univariate cumulative distribution functions* Transform them into a uniform distribution using the probability integral transform explained above* Model the relationship between the transformed variables using the copula function (a small, hypothetical sketch of this final fitting step is included as a comment at the end of the next code cell).Now we have used the term copula again without really telling you what it means. We will make things clearer as we proceed. Let's not lose track of the fact that our goal is to model the joint distribution of age and income. Let's start by looking at their marginal distributions.
###Code
from copulas.visualization import hist_1d, side_by_side
side_by_side(hist_1d, {'Age': df['age'], 'Income': df['income']})
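# The remaining step of the workflow sketched above -- fitting a copula to the
# transformed variables -- is not shown in this excerpt. A minimal, hypothetical
# sketch using the same `copulas` library could look like:
#
#   from copulas.multivariate import GaussianMultivariate
#   model = GaussianMultivariate()     # estimates the marginals + a Gaussian copula
#   model.fit(df)                      # fit to the age/income data above
#   synthetic = model.sample(len(df))  # draw synthetic (age, income) pairs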
###Output
_____no_output_____ |
examples/cgcnn_sklearn_tests_qingyanz_for-background-use_no-dropout.ipynb | ###Markdown
05/06/2019 To-do list:1. Calibration curve analog for regression (edited) 2. Indicate how many points are below/above each parity line3. Plot multiple cases side by side on the same graph (e.g. test set with 50 pts & 1500 pts)4. Manipulating dropout 06/14/2019 To-do list:1. If process gets stuck on SDT transform (tqdm) step: a. Cache This document demonstrates the making, training, saving, loading, and usage of a sklearn-compliant CGCNN model.
###Code
%load_ext ipycache
import os
import sys
#Comment/add these
sys.path.insert(0,'../')
sys.path.insert(0,'/home/zulissi/software/adamwr/')
import numpy as np
import cgcnn
import time
#Select which GPU to use if necessary
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0
###Output
env: CUDA_DEVICE_ORDER=PCI_BUS_ID
env: CUDA_VISIBLE_DEVICES=0
###Markdown
Load the dataset as mongo docs
###Code
import random
import pickle
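# Note: time.clock() used below was deprecated in Python 3.3 and removed in 3.8;
# time.perf_counter() is the modern replacement.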
starttime = time.clock()
#Load a selection of documents
docs = pickle.load(open('/pylon5/ch5fq5p/zulissi/CO_docs.pkl','rb'))
random.seed(42)
random.shuffle(docs)
docs = [doc for doc in docs if -3<doc['energy']<1.0]
docs = docs[:6000]
endtime = time.clock()
print('This operation took', endtime - starttime, 's.')
docs[0]
###Output
_____no_output_____
###Markdown
Get the size of the features from the data transformer, to be used in setting up the net model
###Code
# %%cache SDT_list.pkl SDT_list
from torch.utils.data import Dataset, DataLoader
import mongo
from cgcnn.data import StructureData, ListDataset, StructureDataTransformer
import numpy as np
import tqdm
from sklearn.preprocessing import StandardScaler
SDT = StructureDataTransformer(atom_init_loc='../atom_init.json',
max_num_nbr=12,
step=0.2,
radius=1,
use_tag=False,
use_fixed_info=False,
use_distance=True)
SDT_out = SDT.transform(docs)
structures = SDT_out[0]
# Settings necessary to build the model (since they are size of vectors as inputs)
orig_atom_fea_len = structures[0].shape[-1]
nbr_fea_len = structures[1].shape[-1]
SDT_out[4]
import multiprocess as mp
from sklearn.model_selection import ShuffleSplit
SDT_out = SDT.transform(docs)
with mp.Pool(4) as pool:
SDT_list = list(tqdm.tqdm(pool.imap(lambda x: SDT_out[x],range(len(SDT_out)),chunksize=40),total=len(SDT_out)))
starttime = time.clock()
with open('distance_all_docs.pkl','wb') as fhandle:
pickle.dump(SDT_list,fhandle)
endtime = time.clock()
print('This step took', endtime - starttime, 's to complete.')
###Output
This step took 1.0800000000000054 s to complete.
###Markdown
CGCNN model with skorch to make it sklearn compliant
###Code
with open('distance_all_docs.pkl','rb') as opensdtlist:
SDT_list = pickle.load(opensdtlist)
print(SDT_list[0])
from torch.optim import Adam, SGD
from sklearn.model_selection import ShuffleSplit
from skorch.callbacks import Checkpoint, LoadInitState #needs skorch 0.4.0, conda-forge version at 0.3.0 doesn't cut it
from cgcnn.data import collate_pool
from skorch import NeuralNetRegressor
from cgcnn.model_no_dropout import CrystalGraphConvNet
import torch
from cgcnn.data import MergeDataset
import skorch.callbacks.base
cuda = torch.cuda.is_available()
if cuda:
device = torch.device("cuda")
else:
device='cpu'
starttime = time.clock()
#Make a checkpoint to save parameters every time there is a new best validation loss
cp = Checkpoint(monitor='valid_loss_best',fn_prefix='valid_best_')
#Callback to load the checkpoint with the best validation loss at the end of training
class train_end_load_best_valid_loss(skorch.callbacks.base.Callback):
def on_train_end(self, net, X, y):
net.load_params('valid_best_params.pt')
load_best_valid_loss = train_end_load_best_valid_loss()
endtime = time.clock()
print('This step takes', endtime - starttime, 's to complete.')
###Output
This step takes 0.0 s to complete.
###Markdown
\color{red}{This seems to be a time consuming step.} Example converting all the documents up front
###Code
starttime = time.clock()
#Make the target list
target_list = np.array([doc['energy'] for doc in docs]).reshape(-1,1)
endtime = time.clock()
print('This step takes', endtime - starttime, 's to complete.')
###Output
This step takes 0.020000000000003126 s to complete.
###Markdown
Shuffle and Split
###Code
from sklearn.model_selection import train_test_split
starttime = time.clock()
SDT_training, SDT_test, target_training, target_test = train_test_split(SDT_list, target_list, test_size=0.2)
endtime = time.clock()
print('This step takes', endtime - starttime, 's to complete.')
SDT_training[0]
###Output
_____no_output_____
###Markdown
Fit the model
###Code
from skorch.dataset import CVSplit
from skorch.callbacks.lr_scheduler import WarmRestartLR, LRScheduler
train_test_splitter = ShuffleSplit(test_size=0.25) # , random_state=42)
LR_schedule = LRScheduler('MultiStepLR',milestones=[100],gamma=0.1)
net = NeuralNetRegressor(
CrystalGraphConvNet,
module__orig_atom_fea_len = orig_atom_fea_len,
module__nbr_fea_len = nbr_fea_len,
# module__dropout = 0.2,
batch_size=214,
module__classification=False,
lr=0.0056,
max_epochs=188, # 292
module__atom_fea_len=46,
module__h_fea_len=83,
module__n_conv=8,
module__n_h=4,
optimizer=Adam,
iterator_train__pin_memory=True,
iterator_train__num_workers=0,
iterator_train__collate_fn = collate_pool,
iterator_train__shuffle=True,
iterator_valid__pin_memory=True,
iterator_valid__num_workers=0,
iterator_valid__collate_fn = collate_pool,
device=device,
criterion=torch.nn.MSELoss,
# criterion=torch.nn.L1Loss,
dataset=MergeDataset,
train_split = CVSplit(cv=train_test_splitter),
callbacks=[cp, load_best_valid_loss, LR_schedule]
)
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
def plot(df_training, df_validation, df_test):
f, ax = plt.subplots(figsize=(8,8))
ax.scatter(df_training['actual_value'], df_training['predicted_value'], color='orange',
marker='o', alpha=0.5, label='train\nMAE=%0.2f, RMSE=%0.2f, R$^2$=%0.2f'\
%(mean_absolute_error(df_training['actual_value'], df_training['predicted_value']),
np.sqrt(mean_squared_error(df_training['actual_value'], df_training['predicted_value'])),
r2_score(df_training['actual_value'], df_training['predicted_value'])))
ax.scatter(df_validation['actual_value'], df_validation['predicted_value'], color='blue',
marker='o', alpha=0.5, label='valid\nMAE=%0.2f, RMSE=%0.2f, R$^2$=%0.2f'\
%(mean_absolute_error(df_validation['actual_value'], df_validation['predicted_value']),
np.sqrt(mean_squared_error(df_validation['actual_value'], df_validation['predicted_value'])),
r2_score(df_validation['actual_value'], df_validation['predicted_value'])))
ax.scatter(df_test['actual_value'], df_test['predicted_value'], color='green',
marker='o', alpha=0.5, label='test\nMAE=%0.2f, RMSE=%0.2f, R$^2$=%0.2f'\
%(mean_absolute_error(df_test['actual_value'], df_test['predicted_value']),
np.sqrt(mean_squared_error(df_test['actual_value'], df_test['predicted_value'])),
r2_score(df_test['actual_value'], df_test['predicted_value'])))
ax.plot([min(df_training['actual_value']), max(df_training['actual_value'])],
[min(df_training['actual_value']), max(df_training['actual_value'])], 'k--')
# format graph
ax.tick_params(labelsize=14)
ax.set_xlabel('DFT E (eV)', fontsize=14)
ax.set_ylabel('CGCNN predicted E (eV)', fontsize=14)
ax.set_title('Multi-element ', fontsize=14)
ax.legend(fontsize=12)
plt.show()
def train(SDT_training, SDT_test, target_training, target_test, net):
iters = 6
tr_vl_len = len(SDT_training) # 4800
""" VERY IMPORTANT: PICK TR_VL_LEN & ITERS s.t. ITERS DIVIDE TR_VL_LEN"""
batchsize = tr_vl_len // iters
splitter = KFold(iters, shuffle=False)
arr_training = [[] for _ in range(iters)]
arr_validation = []
TrainingData = []
ValidationData = []
TestData = []
for i, (train_indices, valid_indices) in enumerate(splitter.split(SDT_training)):
net.initialize()
net.fit(SDT_training, target_training)
subdiv = [j for j in range(iters) if i != j]
arr_validation.extend(net.predict(SDT_training)[valid_indices].reshape(-1))
for k, j in enumerate(subdiv):
train_segment = train_indices[(k*batchsize):((k+1)*batchsize)]
arr_training[j].append(net.predict(SDT_training)[train_segment].reshape(-1))
"""
validation_data = {'actual_value':np.array(target_training.reshape(-1))[valid_indices],
'predicted_value':net.predict(SDT_training)[valid_indices].reshape(-1)}
dfvalidation = pd.DataFrame(validation_data)
ValidationData.append(dfvalidation)
"""
test_data = {'actual_value':np.array(target_test).reshape(-1),
'predicted_value':net.predict(SDT_test).reshape(-1)}
dftest = pd.DataFrame(test_data)
TestData.append(dftest)
try:
crude_training_data = {'actual_value':np.array(target_training).reshape(-1)[train_indices],
'predicted_value':net.predict(SDT_training)[train_indices].reshape(-1)}
crude_validation_data = {'actual_value':np.array(target_training).reshape(-1)[valid_indices],
'predicted_value':net.predict(SDT_training)[valid_indices].reshape(-1)}
dfcrudetraining = pd.DataFrame(crude_training_data)
dfcrudevalidation = pd.DataFrame(crude_validation_data)
plot(dfcrudetraining, dfcrudevalidation, dftest)
except:
print("Error in plotting")
print("arr_training:")
print(arr_training)
print("arr_validation:")
print(arr_validation)
"""
training_data = {'actual_value':np.array(target_training.reshape(-1))[train_indices],
'predicted_value':net.predict(SDT_training)[train_indices].reshape(-1)}
test_data = {'actual_value':np.array(target_test).reshape(-1),
'predicted_value':net.predict(SDT_test).reshape(-1)}
validation_data = {'actual_value':np.array(target_training.reshape(-1))[valid_indices],
'predicted_value':net.predict(SDT_training)[valid_indices].reshape(-1)}
"""
arr_training = np.array(arr_training)
arr_training = np.transpose(arr_training, (1, 0, 2))
arr_training = np.reshape(arr_training, (iters-1, tr_vl_len))
validation_data = {'actual_value':np.array(target_training).reshape(-1),
'predicted_value':arr_validation}
dfvalidation = pd.DataFrame(validation_data)
ValidationData.append(dfvalidation)
for line in arr_training:
training_data = {'actual_value':np.array(target_training).reshape(-1),
'predicted_value':line}
dftraining = pd.DataFrame(training_data)
TrainingData.append(dftraining)
return TrainingData, ValidationData, TestData
from sklearn.model_selection import KFold
starttime = time.clock()
TrainingData, ValidationData, TestData = train(SDT_training,
SDT_test,
target_training,
target_test, net)
TrainingData = pd.concat(TrainingData, axis=1)
ValidationData = pd.concat(ValidationData, axis=1)
TestData = pd.concat(TestData, axis=1)
endtime = time.clock()
print("Calculating the same points takes {} s.".format(endtime-starttime,))
"""
from sklearn.model_selection import train_test_split
TrainingData = []
ValidationData = []
TestData = []
iters = 7
starttime = time.clock()
for i in range(iters):
# net()
net.initialize()
train_test_splitter = ShuffleSplit(test_size=0.25, random_state=42)
train_indices, valid_indices = next(train_test_splitter.split(SDT_training))
print("train_indices:", train_indices)
print("valid_indices:", valid_indices)
with open('no-dropout_log.txt', 'a') as logfile:
logfile.write("Iter: %s" % (i,))
logfile.write("train_indices: %s" % (train_indices,))
logfile.write("train_indices: %s\n" % (train_indices,))
net.fit(SDT_training, target_training)
dftraining, dfvalidation, dftest = plot(SDT_training,
SDT_test,
target_training,
target_test,
train_indices,
valid_indices, net)
print('dftraining.type', dftraining.dtype)
print('dftraining.type',dftraining.size)
print(dftraining)
TrainingData.append(dftraining)
ValidationData.append(dfvalidation)
TestData.append(dftest)
TrainingData = pd.concat(TrainingData, axis=1)
ValidationData = pd.concat(ValidationData, axis=1)
TestData = pd.concat(TestData, axis=1)
endtime = time.clock()
print("Calculating the same points {} times takes {} s.".format(iters, endtime-starttime))
"""
"""
# The d20 suffix means a droupout of 20% is applied
TrainingData.to_pickle('TrData_7iters_d20.pkl')
ValidationData.to_pickle('VlData_7iters_d20.pkl')
TestData.to_pickle('TsData_7iters_d20.pkl')
"""
"""
# The d30 suffix means a droupout of 30% is applied
TrainingData.to_pickle('TrData_7iters_d30.pkl')
ValidationData.to_pickle('VlData_7iters_d30.pkl')
TestData.to_pickle('TsData_7iters_d30.pkl')
"""
TrainingData.to_pickle('TrData_7iters_vanilla.pkl')
ValidationData.to_pickle('VlData_7iters_vanilla.pkl')
TestData.to_pickle('TsData_7iters_vanilla.pkl')
###Output
_____no_output_____ |
Chi^2 (Kai-Square) Algorithm/Feature Selection Using Chi^2 (Kai-Square).ipynb | ###Markdown
Select top 10 features with highest chi-squared statistics
###Code
# Note: the feature matrix X and target y are assumed to be defined in earlier
# preprocessing cells not shown in this excerpt.
import pandas as pd
from sklearn.feature_selection import SelectKBest, chi2

chi2_selector = SelectKBest(chi2, k=10) # K = select top 10
X_kbest = chi2_selector.fit_transform(X, y)
X_kbest
###Output
_____no_output_____
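###Markdown
The chi-squared scores (and p-values) computed by SelectKBest for every feature can also be inspected directly -- a small sketch:
###Code
# per-feature chi2 scores and p-values from the fitted selector
print(chi2_selector.scores_)
print(chi2_selector.pvalues_)
###Output
_____no_output_____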
###Markdown
Highest chi-squared feature ranking
###Code
from scipy.stats import chisquare
import numpy as np
result = pd.DataFrame(columns=["Features", "Chi2Weights"])
for i in range(len(X.columns)):
chi2, p = chisquare(X[X.columns[i]])
result = result.append([pd.Series([X.columns[i], chi2], index = result.columns)], ignore_index=True)
result = result.sort_values(by="Chi2Weights", ascending=False)
result.head(10)
###Output
_____no_output_____ |
Doc/Notebooks/GephiStreaming_UserGuide.ipynb | ###Markdown
Introduction NetworKit provides an easy interface to Gephi that uses the Gephi graph streaming plugin. To be able to use it, install the Graph Streaming plugin using the **Gephi plugin manager**. Afterwards, open the Streaming window by selecting **Windows/Streaming** in the menu. Workflow Once the plugin is installed in gephi, create a new project and start the **Master Server** in the Streaming tab within gephi. The running server will be indicated by a green dot. As an example, we generate a random graph...
###Code
# Assumes NetworKit has been imported in the usual user-guide style:
from networkit import *

G = generators.ErdosRenyiGenerator(300, 0.2).generate()
G.addEdge(0, 1) #We want to make sure this specific edge exists, for usage in an example later.
###Output
_____no_output_____
###Markdown
... and export it directly into the active gephi workspace. After executing the following code, the graph should be available in the first gephi workspace. **Attention**: Any graph currently contained in that workspace will be overwritten.
###Code
client = gephi.streaming.GephiStreamingClient()
client.exportGraph(G)
###Output
_____no_output_____
###Markdown
Exporting node values We now apply a community detection algorithm to the generated random graph and export the community as a node attribute to gephi. Any python list or Partition object can be exported. Please note that only the attribute itself is transfered, so make sure you called exportGraph(graph) first.
###Code
communities = community.detectCommunities(G)
client.exportNodeValues(G, communities, "community")
###Output
_____no_output_____
###Markdown
The node attribute can now be selected and used in gephi, for partitioning or any other desired scenario. Exporting edge scores Just like node values, we can export edge values. After graph creation, each edge is assigned an integer id that is then used to access arbitrary attribute vectors, so any python list can be exported to gephi. In the following example, we assign an even number to each edge and export those scores to gephi.
###Code
edgeScore = [2*x for x in range(0, G.upperEdgeIdBound())]
client.exportEdgeValues(G, edgeScore, "myEdgeScore")
###Output
_____no_output_____
###Markdown
Changing the server URL By default, the streaming client in NetworKit connects to http://localhost:8080/workspace0, i.e. the first workspace of the local gephi instance. One might want to connect to a gephi instance running on a remote host or change the used port (this can be done in gephi by selecting **Settings** within the Streaming tab). To change the url in NetworKit, simply pass it upon the creation of the client object:
###Code
client = gephi.streaming.GephiStreamingClient(url='http://localhost:8080/workspace0')
###Output
_____no_output_____ |
Loan_status_prediction_v2.ipynb | ###Markdown
Random Forest Classifier
###Code
# # Random Forest Classification
from sklearn.ensemble import RandomForestClassifier
random = RandomForestClassifier(n_estimators = 10, criterion = 'entropy')
random.fit(X_train, y_train)
y_pred = random.predict(X_test)
accuracy = accuracy_score(np.array(y_test).flatten(), y_pred)
print("Accuracy: %.10f%%" % (accuracy * 100.0))
accuracy_per_roc_auc = roc_auc_score(np.array(y_test).flatten(), y_pred)
print("ROC-AUC: %.10f%%" % (accuracy_per_roc_auc * 100))
final_pred = pd.DataFrame(random.predict_proba(np.array(finalTest)))
dfSub = pd.concat([test_member_id, final_pred.iloc[:, 1:2]], axis=1)
submission_file_name = "submission_randomforest"
dfSub.rename(columns={1:'loan_status'}, inplace=True)
dfSub.to_csv((('%s.csv') % (submission_file_name)), index=False)
###Output
_____no_output_____
###Markdown
Deep Neural Network
###Code
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.layers.advanced_activations import PReLU
model = Sequential()
model.add(Dense(units=40, input_dim=33))
model.add(Activation('relu'))
model.add(Dense(units=40))
model.add(Activation('relu'))
model.add(Dense(units=40))
model.add(Activation('relu'))
model.add(Dense(units=40))
model.add(Activation('relu'))
model.add(Dense(units=40))
model.add(Activation('relu'))
model.add(Dense(units=2))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
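# Note: with a 2-unit softmax output and categorical_crossentropy, y_train must be
# one-hot encoded (e.g. via keras.utils.to_categorical); for integer labels,
# sparse_categorical_crossentropy would be the alternative.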
model.fit(X_train, y_train, epochs=10, batch_size=64)
###Output
_____no_output_____ |
training/2021_Fully3D/Week2/02_tikhonov_block_framework.ipynb | ###Markdown
Tikhonov regularisation using CGLS and block framework This exercise introduces Tikhonov regularisation and explains how this is implemented in the CIL framework using the so-called block framework.In a previous exercise, it was seen how CGLS could be used to determine a reconstruction based on the least squares reconstruction problem. It was seen that in case of noisy data, the least squares solution obtained by running until convergence is not desirable due to a high amount of noise. The number of iterations was seen to have a regularising effect, with the smooth, low-frequency components of the image recovered in the first iterations, while high-frequency components of the image such as edges were recovered later. Unfortunately, noise also kicks in, and one needs to pick the number of iterations that best balances the sharpness and amount of noise. As such, the regularising effect is implicitly obtained by choosing the number of iterations to run and never actually running until converged to the least squares solution.Tikhonov regularisation is more explicit in that a regularisation term is added to the least squares fitting term, specifically a squared 2-norm. This problem should now be solved to convergence instead of using the number of iterations as implicit regularising effect. Instead, a parameter, the regularisation parameter, balances the emphasis on fitting the data and enforcing the regularity and must be chosen to provide the best trade-off.Tikhonov regularisation tends to offer reduction of noise in the reconstruction, at the price of some blurring. This will be seen in what follows.To set up Tikhonov problems we need to represent block matrices and concatenate data. In CIL we can do this using BlockOperator and BlockDataContainer as demonstrated in the exercise. **Learning objectives:**1. Construct and manipulate BlockOperators and BlockDataContainer, including direct and adjoint operations and algebra.2. Use Block Framework to solve Tikhonov regularisation with CGLS algorithm.3. Apply Tikhonov regularisation to tomographic reconstruction and explain the effect of regularisation parameter and operator in regulariser. First, all imports required are carried out. This includes tools from the cil.framework and cil.optimisation modules, as well as test image generation tools in the tomophantom library and standard imports such as numpy.
###Code
# CIL core components needed
from cil.framework import ImageGeometry, ImageData, AcquisitionGeometry, AcquisitionData, BlockDataContainer
# CIL optimisation algorithms and linear operators
from cil.optimisation.algorithms import CGLS
from cil.optimisation.operators import BlockOperator, GradientOperator, IdentityOperator, FiniteDifferenceOperator
# CIL example synthetic test image
from cil.utilities.dataexample import SHAPES
# CIL display tools
from cil.utilities.display import show2D, show_geometry
# Forward/backprojector from CIL ASTRA plugin
from cil.plugins.astra import ProjectionOperator
# For shepp-logan test image in CIL tomophantom plugin
import cil.plugins.TomoPhantom as TP
# Third-party imports
import numpy as np
import matplotlib.pyplot as plt
import os
###Output
_____no_output_____
###Markdown
Setting up a simulated 2D dataset A 2D parallel beam case will be simulated. We start by creating a test image and will use the classic Shepp-Logan phantom with 256x256 pixels (matching `n = 256` below) on the square domain [-1,1]x[-1,1]. We set up the `ImageGeometry` to specify the dimensions and pixel size of the image:
###Code
# Set up image geometry
n = 256
ig = ImageGeometry(voxel_num_x=n,
voxel_num_y=n,
voxel_size_x=2/n,
voxel_size_y=2/n)
print(ig)
###Output
_____no_output_____
###Markdown
Using the CIL tomophantom plugin we can create a CIL `ImageData` holding the Shepp-Logan image of the desired size:
###Code
phantom2D = TP.get_ImageData(num_model=1, geometry=ig)
show2D(phantom2D)
###Output
_____no_output_____
###Markdown
Next, we specify the acquisition parameters and store them in an `AcquisitionGeometry` object. We use a parallel-beam geometry with 180 projections, and a detector with the same of number and size of pixels as the image:
###Code
num_angles = 180
ag = AcquisitionGeometry.create_Parallel2D() \
.set_angles(np.linspace(0, 180, num_angles, endpoint=False)) \
.set_panel(n, 2/n)
print(ag)
###Output
_____no_output_____
###Markdown
We illustrate the geometry:
###Code
show_geometry(ag)
###Output
_____no_output_____
###Markdown
To simulate a sinogram we set up a ProjectionOperator using GPU-acceleration using the ASTRA plugin:
###Code
device = "gpu"
A = ProjectionOperator(ig, ag, device)
###Output
_____no_output_____
###Markdown
The ideal noise-free sinogram is created by forward-projecting the phantom:
###Code
sinogram = A.direct(phantom2D)
###Output
_____no_output_____
###Markdown
The generated test image and sinogram are displayed as images:
###Code
plots = [phantom2D, sinogram]
titles = ["Ground truth", "sinogram"]
show2D(plots, titles)
###Output
_____no_output_____
###Markdown
Next, Poisson noise will be applied to this noise-free sinogram. The severity of the noise can be adjusted by changing the background_counts variable.
###Code
# Incident intensity: lower counts will increase the noise
background_counts = 5000
# Convert the simulated absorption sinogram to transmission values using Lambert-Beer.
# Use as mean for Poisson data generation.
# Convert back to absorption sinogram.
counts = background_counts * np.exp(-sinogram.as_array())
noisy_counts = np.random.poisson(counts)
sino_out = -np.log(noisy_counts/background_counts)
# Create new AcquisitionData object with same geometry and fill with noisy data.
sinogram_noisy = ag.allocate()
sinogram_noisy.fill(sino_out)
###Output
_____no_output_____
###Markdown
The simulated clean and noisy sinograms are displayed side by side as images:
###Code
plots = [sinogram, sinogram_noisy]
titles = ["sinogram", "sinogram noisy"]
show2D(plots, titles)
###Output
_____no_output_____
###Markdown
Reconstruct using CGLS Before describing Tikhonov regularisation, we recall the problem solved by CGLS:$$\underset{u}{\mathrm{argmin}}\begin{Vmatrix}A u - b\end{Vmatrix}^2_2$$where,- $A$ is the projection operator- $b$ is the acquired data- $u$ is the unknown image to be determined. In the solution provided by CGLS, the low frequency components tend to converge faster than the high frequency components. This means we need to control the number of iterations carefully to select the optimal solution. Set up the CGLS algorithm, including specifying its initial point to start from, and an upper bound on the number of iterations to run:
###Code
x_init = ig.allocate(0)
cgls_simple = CGLS(x_init=x_init, operator=A, data=sinogram_noisy)
cgls_simple.max_iteration = 1000
###Output
_____no_output_____
###Markdown
Once set up, we can run the algorithm for a specified number of iterations:
###Code
cgls_simple.run(5, verbose = True)
###Output
_____no_output_____
###Markdown
Display the resulting image from CGLS, along with its difference image with the original ground truth image:
###Code
plots = [cgls_simple.solution, cgls_simple.solution - phantom2D]
titles = ["CGLS reconstruction","Difference from ground truth" ]
show2D(plots, titles, fix_range=[(-0.2,1.2),(-0.2,0.2)])
###Output
_____no_output_____
###Markdown
Plot central vertical line profile of CGLS and ground truth:
###Code
plt.figure(figsize=(10,5))
plt.plot(cgls_simple.solution.get_slice(horizontal_y=n/2).as_array(),label="CGLS",color='dodgerblue')
plt.plot(phantom2D.get_slice(horizontal_y=n/2).as_array(),label="Ground Truth",color='black')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Exercise 1:** Try running fewer and more iterations to see how the image and line profile change. Try also with noisier data, by specifying a smaller value of background_counts. Remember you can change the number of iterations to run between outputs. Also note that the algorithm will continue from the point it stopped and run more iterations from that point if `run` is called again. If you want to run from the beginning, the algorithm needs to be re-initialised. Try to stop the algorithm before the solution starts to diverge. [go to section start](section_CGLS_simple) Tikhonov regularisation using CGLS Regularisation Noisy datasets are problematic with an ill-posed problem such as tomographic reconstruction. If we try to solve these using CGLS we end up with an unstable solution. Regularisation adds information in order for us to solve the problem. Tikhonov regularisation We can add a regularisation term to the problem solved by CGLS; this gives us the minimisation problem in the following form, which is known as Tikhonov regularisation:$$\underset{u}{\mathrm{argmin}}\begin{Vmatrix}A u - b \end{Vmatrix}^2_2 + \alpha^2\|Lu\|^2_2$$where,- $A$ is the projection operator- $b$ is the acquired data- $u$ is the unknown image to be solved for- $\alpha$ is the regularisation parameter- $L$ is a regularisation operatorThe first term measures the fidelity of the solution to the data. The second term measures the fidelity to the prior knowledge we have imposed on the system through the operator $L$. $\alpha$ controls the trade-off between these terms. $L$ is often chosen to be a smoothing operator like the identity matrix, or a gradient operator, **combined with the squared L2-norm**. This can be re-written equivalently in the block matrix form:$$\underset{u}{\mathrm{argmin}}\begin{Vmatrix}\binom{A}{\alpha L} u - \binom{b}{0}\end{Vmatrix}^2_2$$With the definitions:- $\tilde{A} = \binom{A}{\alpha L}$- $\tilde{b} = \binom{b}{0}$this can now be recognised as a least squares problem:$$\underset{u}{\mathrm{argmin}}\begin{Vmatrix}\tilde{A} u - \tilde{b}\end{Vmatrix}^2_2$$and being a least squares problem, it can be solved using CGLS with $\tilde{A}$ as operator and $\tilde{b}$ as data. Introducing the block framework We can construct $\tilde{A}$ and $\tilde{b}$ using the BlockFramework in CIL.$\tilde{A}$ is a (column) BlockOperator of size 2x1 and can be set up by`BlockOperator(op0,op1)`The right hand side $\tilde{b}$ is a BlockDataContainer and can be set up by`BlockDataContainer(DataContainer0, DataContainer1)` Reconstruct using CGLS and the identity operator The simplest form of Tikhonov uses the identity matrix as the regularisation operator. When we use an identity matrix as our regularisation operator, we penalise the magnitude of the solution $u$, which will tend to reduce the pixel values of $u$.
###Code
L = IdentityOperator(ig)
alpha = 0.1
operator_block = BlockOperator(A, alpha*L)
###Output
_____no_output_____
###Markdown
In the formulation of Tikhonov as a least squares problem, we need to set up the right hand side vector $\tilde{b}$ holding both the $b$ and a zero-filled `ImageData` of the right size, matching the range of the regularising operator. The operator allows us to query the geometry of its range and allocate a zero-filled `ImageData` of that geometry. We combine both into a `BlockDataContainer`:
###Code
zero_data = L.range.allocate(0)
data_block = BlockDataContainer(sinogram_noisy, zero_data)
###Output
_____no_output_____
###Markdown
Run CGLS as before, but passing the BlockOperator and BlockDataContainer
###Code
#setup CGLS with the Block Operator and Block DataContainer
x_init = ig.allocate(0)
cgls_tikh = CGLS(x_init=x_init, operator=operator_block, data=data_block, update_objective_interval = 10)
cgls_tikh.max_iteration = 1000
#run the algorithm
cgls_tikh.run(100)
###Output
_____no_output_____
###Markdown
Display results as images and plot central vertical line profile of the Tikhonov with Identity:
###Code
plots = [cgls_tikh.solution, cgls_tikh.solution - phantom2D]
titles = ["Tikhonov with Identity regularisation","Difference from ground truth" ]
show2D(plots, titles, fix_range=[(-0.2,1.2),(-0.2,0.2)])
###Output
_____no_output_____
###Markdown
Let's compare the reconstructions from CGLS and Tikhonov with identity regularisation.
###Code
plots = [cgls_simple.solution, cgls_tikh.solution]
titles = ["CGLS", "Tikhonov with Identity regularisation" ]
show2D(plots, titles, fix_range=(-0.2,1.2))
#compare the vertical line profiles
plt.figure(figsize=(10,5))
plt.plot(cgls_simple.solution.get_slice(horizontal_y=n/2).as_array(),label="CGLS",color='dodgerblue')
plt.plot(cgls_tikh.solution.get_slice(horizontal_y=n/2).as_array(),label="Tikhonov with Identity regularisation",color='firebrick')
plt.plot(phantom2D.get_slice(horizontal_y=n/2).as_array(),label="Ground Truth",color='black')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Exercise 2:** Try running Tikhonov with a range of $\alpha$ values from very small to very large, display reconstruction and line profile and describe the effect of $\alpha$. Find the value of $\alpha$ that gives you the best solution. Then change how much noise you add to the data by going back here: [set noise](section_noise) and run through the notebook again. Try with `background_counts` set to 5000, 10000 and 1000 remember to find an appropriate value of alpha for each run. With Tikhonov regularisation the problem should now be solved to convergence instead of using the number of iterations as implicit regularising effect. By increasing the regularisation parameter $\alpha$ we balance the emphasis on fitting the data and enforcing the regularity. A low value of $\alpha$ will give you the CGLS solution, a higher value will reduce the noise in the reconstruction but at the cost of some blurring. Using the BlockFramework to build a gradient operator The basic Tikhonov with the identity operator provided perhaps a bit of improvement compared to just CGLS, but there was still a lot of noise in the reconstruction and the pixel values had been reduced. Using the identity as regularising operator means that we penalise pixel values that are non-zero, which may not be what we want. Instead, we want to encourage similar values of neighboring pixels to smooth out the noise. This can be achieved by using the gradient as the smoothing operator. To do that we will again need to use the BlockFramework, which is now demonstrated in a bit more detail.A discrete gradient operator (using finite differences) can be constructed using BlockOperators.The direct gradient operator $\nabla$ acts on an image $u$ and returns a BlockDataContainer $\textbf{w}$, holding finite differences in the $x$ and $y$ directions:$$ \nabla(u) = \begin{bmatrix} \nabla_x\\ \nabla_y\\\end{bmatrix}*u =\begin{bmatrix} \nabla_xu\\ \nabla_yu\\\end{bmatrix}= \begin{bmatrix}w_{x}\\w_{y}\end{bmatrix}= \textbf{w}$$The adjoint gradient operator $\nabla^*$ acts on the BlockDataContainer $\textbf{y}$ and returns an image $\rho$$$ \nabla^*(\textbf w) = \begin{bmatrix} \nabla^*_x & \nabla^*_y\end{bmatrix}*\begin{bmatrix} w_{x}\\ w_{y}\\\end{bmatrix} =\begin{bmatrix} \nabla^*_x w_x + \nabla^*_y w_y\end{bmatrix} = \rho$$ We load a test image to demonstrate how the gradient operator works:
###Code
shapes = SHAPES.get()
show2D(shapes, "shapes")
###Output
_____no_output_____
###Markdown
The finite difference operator can be called from the framework. This returns the difference between each pair of pixels along one direction. We need to initialise it with the image geometry, the direction of the calculation and the boundary conditions to use: `FiniteDifferenceOperator(gm_domain, direction, bnd_cond='Neumann' or 'Periodic')`
###Code
#define the operator FiniteDiff - needs the image geometry, the direction and the boundary conditions
fdx = FiniteDifferenceOperator(shapes.geometry, direction='horizontal_x', bnd_cond='Neumann')
#run it over the input image
image_2D_dx = fdx.direct(shapes)
#plot the results
show2D(image_2D_dx, "dx")
###Output
_____no_output_____
###Markdown
Note how all vertical edges have been picked up (and their sign) applying this operator doing finite differences in the horizontal direction. To set up a gradient in both $x$ and $y$ directions, we can create a BlockOperator to contain a finite difference operator for each of the $x$ and $y$ directions. We can apply it (using its `direct` method) to the test image and visualise the result.
###Code
# Define the x and y operators
fdx = FiniteDifferenceOperator(shapes.geometry, direction='horizontal_x', bnd_cond='Neumann')
fdy = FiniteDifferenceOperator(shapes.geometry, direction='horizontal_y', bnd_cond='Neumann')
# Construct the BlockOperator combining the two operators
FD = BlockOperator(fdx, fdy)
#run it on the test image
fd_out = FD.direct(shapes)
###Output
_____no_output_____
###Markdown
Display output:
###Code
plots = [fd_out.get_item(0), fd_out.get_item(1)]
titles = ["dx","dy" ]
show2D(plots,titles,fix_range=(-1,1))
###Output
_____no_output_____
###Markdown
To see what is going on, we take a closer look at data types. First, the input is an `ImageData` and its shape is a 2-element vector with the number of pixels in each direction:
###Code
print(type(shapes))
print(shapes)
###Output
_____no_output_____
###Markdown
The output, however, is a `BlockDataContainer`, essentially a list (with additional functionality) holding two `ImageData` elements, one for each direction in which we have taken finite differences. We can pick out each element of the `BlockDataContainer`, see that they are indeed `ImageData`, and print their shapes (number of pixels in each direction):
###Code
#output is BlockDataContainer
print(type(fd_out))
print(fd_out.shape)
print("\tDataContainer 0")
print(type(fd_out.get_item(0)))
print(fd_out.get_item(0))
print("\tDataContainer 1")
print(type(fd_out.get_item(1)))
print(fd_out.get_item(1))
###Output
_____no_output_____
###Markdown
The BlockFramework provides basic algebra between BlockDataContainers, numpy arrays, lists of numbers, DataContainers, subclasses and scalars, provided the shapes of the containers are compatible:- add- subtract- multiply- divide- power- squared_norm A small sketch of this algebra is included after the adjoint example below. The `BlockOperator` is a special kind of `Operator`, and being an `Operator` it should have an adjoint method. This is automatically provided from the adjoints of the operators. In the present case our `BlockOperator` will take a `BlockDataContainer` as input to its adjoint and return an `ImageData`, as visualised below:
###Code
# Run the adjoint method
adjoint_output = FD.adjoint(fd_out)
show2D(adjoint_output, "adjoint gradient")
###Output
_____no_output_____
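###Markdown
The basic algebra listed above can be tried directly on the BlockDataContainer from before -- a small sketch (scalar multiplication and addition return new BlockDataContainers, while squared_norm returns a number):
###Code
# element-wise algebra on a BlockDataContainer
scaled = fd_out * 2.0         # multiply every block by a scalar
summed = fd_out + fd_out      # add two BlockDataContainers of matching shape
print(summed.shape)
print(fd_out.squared_norm())  # squared L2-norm over all blocks
###Output
_____no_output_____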
###Markdown
A deeper look at the BlockFramework BlockDataContainer BlockDataContainer holds datacontainers as a column vector$$\textbf{x} = \begin{bmatrix}x_{1}\\ x_{2}\end{bmatrix}$$$$\textbf{y} = \begin{bmatrix}y_{1}\\ y_{2} \\ y_{3}\end{bmatrix}$$ BlockOperator: BlockOperator is a matrix of operators.$$ K = \begin{bmatrix}A_{1} & A_{2} \\A_{3} & A_{4} \\A_{5} & A_{6}\end{bmatrix}_{(3,2)} * \quad \underbrace{\begin{bmatrix}x_{1} \\x_{2} \end{bmatrix}_{(2,1)}}_{\textbf{x}} = \begin{bmatrix}A_{1}x_{1} + A_{2}x_{2}\\A_{3}x_{1} + A_{4}x_{2}\\A_{5}x_{1} + A_{6}x_{2}\\\end{bmatrix}_{(3,1)} = \begin{bmatrix}y_{1}\\y_{2}\\y_{3}\end{bmatrix}_{(3,1)} = \textbf{y}$$Column: Share the same domains $X_{1}, X_{2}$Rows: Share the same ranges $Y_{1}, Y_{2}, Y_{3}$$$ K : (X_{1}\times X_{2}) \rightarrow (Y_{1}\times Y_{2} \times Y_{3})$$$$ A_{1}, A_{3}, A_{5}: \text{share the same domain } X_{1}$$$$ A_{2}, A_{4}, A_{6}: \text{share the same domain } X_{2}$$$$A_{1}: X_{1} \rightarrow Y_{1}, \quad A_{3}: X_{1} \rightarrow Y_{2}, \quad A_{5}: X_{1} \rightarrow Y_{3}$$$$A_{2}: X_{2} \rightarrow Y_{1}, \quad A_{4}: X_{2} \rightarrow Y_{2}, \quad A_{6}: X_{2} \rightarrow Y_{3}$$ Reconstruct using Tikhonov by CGLS with the gradient operator Tikhonov regularisation Now we go back to our Tikhonov reconstruction, this time using the gradient operator in the regulariser.$$\underset{u}{\mathrm{argmin}}\begin{Vmatrix}\binom{A}{\alpha \nabla} u - \binom{b}{0}\end{Vmatrix}^2_2$$With the definitions:- $\tilde{A} = \binom{A}{\alpha \nabla}$- $\tilde{b} = \binom{b}{0}$And solve using CGLS:$$\underset{u}{\mathrm{argmin}}\begin{Vmatrix}\tilde{A} u - \tilde{b}\end{Vmatrix}^2_2$$We'll use the framework's `GradientOperator` - this is an optimised form of FD over the space dimensions (or even space+channels in the case of multiple channels).**Exercise 3:** Set up the BlockOperator $\tilde{A}$ and the BlockDataContainer $\tilde{b}$ as before but with the Gradient operator. Outline code is given in the next code cell. Once set up, run the following cells to execute CGLS with these as input. Run Tikhonov reconstruction using gradient regularisation. Try a range of $\alpha$ values from very small to very large, visualise the resulting image and central line profiles, and describe the effect of the regularisation parameter choice. Find the $\alpha$ that (visually) gives you the best solution.
###Code
L = GradientOperator(ig)
alpha = 0.01
operator_block = BlockOperator( ... )
#define the data b
data_block = BlockDataContainer( ... )
#setup CGLS with the block operator and block data
x_init = ig.allocate(0)
cgls_tikh_g = CGLS(x_init=x_init, operator=operator_block, data=data_block, update_objective_interval = 10)
cgls_tikh_g.max_iteration = 1000
#run the algorithm
cgls_tikh_g.run(200, verbose = True)
#plot the results
plots = [cgls_tikh_g.solution, cgls_tikh_g.solution - phantom2D]
titles = ["Tikhonov with gradient regularisation","Difference from ground truth" ]
show2D(plots,titles,fix_range=[(-0.2,1.2),(-0.2,0.2)])
###Output
_____no_output_____
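###Markdown
For reference, one possible completion of the exercise cell above is sketched below. The names `A` (the projection operator defined earlier in the notebook) and `data` (the measured acquisition data) are assumptions made purely for illustration and may differ from the names used in your copy of the notebook:
###Code
# One possible completion (hedged sketch) -- the names `A` and `data` are assumed placeholders:
# operator_block = BlockOperator(A, alpha * L)
# data_block = BlockDataContainer(data, L.range_geometry().allocate(0))
###Output
_____no_output_____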
###Markdown
Central vertical line profiles of ground truth and Tikhonov with Gradient operator:
###Code
#compare the vertical line profiles
plt.figure(figsize=(10,5))
plt.plot(cgls_tikh_g.solution.get_slice(horizontal_y=n/2).as_array(),label="Tikhonov with Gradient regularisation",color='purple')
plt.plot(phantom2D.get_slice(horizontal_y=n/2).as_array(),label="Ground Truth",color='black')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Summary Comparison of the outputs of each reconstructionTo wrap up we compare the reconstructions produced by all reconstruction methods considered in this notebook: Simple CGLS, Tikhonov with Identity regularisation and Tikhonov with Gradient regularisation, along with the ground truth image. We display images and central vertical line profiles:
###Code
plots = [phantom2D, cgls_simple.solution, cgls_tikh.solution, cgls_tikh_g.solution]
titles = ["Ground truth", "CGLS simple", "Tikhonov with Identity regularisation", "Tikhonov with gradient regularisation" ]
show2D(plots, titles, fix_range=(-0.2,1.2), num_cols=4)
plt.figure(figsize=(10,5))
plt.plot(cgls_simple.solution.get_slice(horizontal_y=n/2).as_array(),label="CGLS",color='dodgerblue')
plt.plot(cgls_tikh.solution.get_slice(horizontal_y=n/2).as_array(),label="Tikhonov with Identity regularisation",color='firebrick')
plt.plot(cgls_tikh_g.solution.get_slice(horizontal_y=n/2).as_array(),label="Tikhonov with Gradient regularisation",color='purple')
plt.plot(phantom2D.get_slice(horizontal_y=n/2).as_array(),label="Ground Truth",color='black')
plt.legend()
plt.show()
###Output
_____no_output_____ |
smart_queueing_system/Create_Job_Submission_Script.ipynb | ###Markdown
Step 2: Create Job Submission ScriptThe next step is to create our job submission script. In the cell below, you will need to complete the job submission script and run the cell to generate the file using the magic `%%writefile` command. Your main task is to complete the following items of the script:* Create a variable `MODEL` and assign it the value of the first argument passed to the job submission script.* Create a variable `DEVICE` and assign it the value of the second argument passed to the job submission script.* Create a variable `VIDEO` and assign it the value of the third argument passed to the job submission script.* Create a variable `PEOPLE` and assign it the value of the sixth argument passed to the job submission script.
###Code
%%writefile queue_job.sh
#!/bin/bash
exec 1>/output/stdout.log 2>/output/stderr.log
# TODO: Create MODEL variable
MODEL=$1
# TODO: Create DEVICE variable
DEVICE=$2
# TODO: Create VIDEO variable
VIDEO=$3
QUEUE=$4
OUTPUT=$5
# TODO: Create PEOPLE variable
PEOPLE=$6
mkdir -p $5
if echo "$DEVICE" | grep -q "FPGA"; then # if device passed in is FPGA, load bitstream to program FPGA
#Environment variables and compilation for edge compute nodes with FPGAs
export AOCL_BOARD_PACKAGE_ROOT=/opt/intel/openvino/bitstreams/a10_vision_design_sg2_bitstreams/BSP/a10_1150_sg2
source /opt/altera/aocl-pro-rte/aclrte-linux64/init_opencl.sh
aocl program acl0 /opt/intel/openvino/bitstreams/a10_vision_design_sg2_bitstreams/2020-3_PL2_FP16_MobileNet_Clamp.aocx
export CL_CONTEXT_COMPILER_MODE_INTELFPGA=3
fi
python3 person_detect.py --model ${MODEL} \
--device ${DEVICE} \
--video ${VIDEO} \
--queue_param ${QUEUE} \
--output_path ${OUTPUT} \
--max_people ${PEOPLE}
cd /output
tar zcvf output.tgz *
###Output
Overwriting queue_job.sh
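###Markdown
A hedged usage sketch (an editorial addition): the script expects the six positional arguments in the order described above. Every path and value below is a made-up placeholder, not a file from this workspace.
###Code
# Hypothetical invocation -- all paths and values here are placeholder assumptions:
# !./queue_job.sh /data/models/person-detection.xml CPU /data/video/retail.mp4 queue_param.npy /output 2
###Output
_____no_output_____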
|
Code/TensorFlow Basics/Notebook.ipynb | ###Markdown
TensorFlow Basics--- Importing TensorFlowTo use TensorFlow, we need to import the library. We imported it and optionally gave it the name "tf", so the modules can be accessed by tf.module-name:
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Building a GraphAs we said before, TensorFlow works as a graph computational model. Let's create our first graph, which we name graph1.
###Code
graph1 = tf.Graph()
###Output
_____no_output_____
###Markdown
Now we call the TensorFlow functions that construct new tf.Operation and tf.Tensor objects and add them to graph1. As mentioned, each tf.Operation is a node and each tf.Tensor is an edge in the graph. Let's add 2 constants to our graph. For example, calling tf.constant([2], name = 'constant_a') adds a single tf.Operation to the default graph. This operation produces the value 2, and returns a tf.Tensor that represents the value of the constant. Notice: tf.constant([2], name="constant_a") creates a new tf.Operation named "constant_a" and returns a tf.Tensor named "constant_a:0".
###Code
with graph1.as_default():
a = tf.constant([2], name = 'constant_a')
b = tf.constant([3], name = 'constant_b')
###Output
_____no_output_____
###Markdown
Let's look at the tensor __a__.
###Code
a
###Output
_____no_output_____
###Markdown
As you can see, this just shows the name, shape and type of the tensor in the graph. We will see its value when we run it in a TensorFlow session.
###Code
# Printing the value of a
sess = tf.Session(graph = graph1)
result = sess.run(a)
print(result)
sess.close()
###Output
[2]
###Markdown
After that, let's make an operation over these tensors. The function tf.add() adds two tensors (you could also use `c = a + b`).
###Code
with graph1.as_default():
c = tf.add(a, b)
#c = a + b is also a way to define the sum of the terms
###Output
_____no_output_____
###Markdown
Then TensorFlow needs to initialize a session to run our code. Sessions are, in a way, a context for creating a graph inside TensorFlow. Let's define our session:
###Code
sess = tf.Session(graph = graph1)
###Output
_____no_output_____
###Markdown
Let's run the session to get the result from the previous defined 'c' operation:
###Code
result = sess.run(c)
print(result)
###Output
[5]
###Markdown
Close the session to release resources:
###Code
sess.close()
###Output
_____no_output_____
###Markdown
To avoid having to close sessions every time, we can define them in a with block, so after running the with block the session will close automatically:
###Code
with tf.Session(graph = graph1) as sess:
result = sess.run(c)
print(result)
###Output
[5]
###Markdown
Even this silly example of adding 2 constants to reach a simple result defines the basis of TensorFlow. Define your operations (in this case our constants and _tf.add_), and start a session to run the graph. Defining multidimensional arrays using TensorFlowNow we will define scalars, vectors, matrices and higher-dimensional tensors using TensorFlow:
###Code
graph2 = tf.Graph()
with graph2.as_default():
Scalar = tf.constant(2)
Vector = tf.constant([5,6,2])
Matrix = tf.constant([[1,2,3],[2,3,4],[3,4,5]])
Tensor = tf.constant( [ [[1,2,3],[2,3,4],[3,4,5]] , [[4,5,6],[5,6,7],[6,7,8]] , [[7,8,9],[8,9,10],[9,10,11]] ] )
with tf.Session(graph = graph2) as sess:
result = sess.run(Scalar)
print ("Scalar (1 entry):\n %s \n" % result)
result = sess.run(Vector)
print ("Vector (3 entries) :\n %s \n" % result)
result = sess.run(Matrix)
print ("Matrix (3x3 entries):\n %s \n" % result)
result = sess.run(Tensor)
print ("Tensor (3x3x3 entries) :\n %s \n" % result)
###Output
Scalar (1 entry):
2
Vector (3 entries) :
[5 6 2]
Matrix (3x3 entries):
[[1 2 3]
[2 3 4]
[3 4 5]]
Tensor (3x3x3 entries) :
[[[ 1 2 3]
[ 2 3 4]
[ 3 4 5]]
[[ 4 5 6]
[ 5 6 7]
[ 6 7 8]]
[[ 7 8 9]
[ 8 9 10]
[ 9 10 11]]]
###Markdown
The `shape` attribute returns the shape of our data structure.
###Code
Scalar.shape
Tensor.shape
###Output
_____no_output_____
###Markdown
Now that you understand these data structures, I encourage you to play with them using some previous functions to see how they will behave, according to their structure types:
###Code
graph3 = tf.Graph()
with graph3.as_default():
Matrix_one = tf.constant([[1,2,3],[2,3,4],[3,4,5]])
Matrix_two = tf.constant([[2,2,2],[2,2,2],[2,2,2]])
add_1_operation = tf.add(Matrix_one, Matrix_two)
add_2_operation = Matrix_one + Matrix_two
with tf.Session(graph =graph3) as sess:
result = sess.run(add_1_operation)
print ("Defined using tensorflow function :")
print(result)
result = sess.run(add_2_operation)
print ("Defined using normal expressions :")
print(result)
###Output
Defined using tensorflow function :
[[3 4 5]
[4 5 6]
[5 6 7]]
Defined using normal expressions :
[[3 4 5]
[4 5 6]
[5 6 7]]
###Markdown
With the regular symbol and also the TensorFlow function we were able to get an element-wise operation (here an addition; the element-wise multiplication, also known as the Hadamard product, works the same way with `*` or tf.multiply()). But what if we want the regular matrix product? We then need to use another TensorFlow function called tf.matmul():
###Code
graph4 = tf.Graph()
with graph4.as_default():
Matrix_one = tf.constant([[2,3],[3,4]])
Matrix_two = tf.constant([[2,3],[3,4]])
mul_operation = tf.matmul(Matrix_one, Matrix_two)
with tf.Session(graph = graph4) as sess:
result = sess.run(mul_operation)
print ("Defined using tensorflow function :")
print(result)
###Output
Defined using tensorflow function :
[[13 18]
[18 25]]
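###Markdown
For contrast with tf.matmul() above, here is a small illustration (an editorial addition, not part of the original notebook) of the element-wise (Hadamard) product, which uses tf.multiply() or the * operator:
###Code
graph_hadamard = tf.Graph()
with graph_hadamard.as_default():
    Matrix_one = tf.constant([[2,3],[3,4]])
    Matrix_two = tf.constant([[2,3],[3,4]])
    # element-wise product; equivalent to Matrix_one * Matrix_two
    hadamard_operation = tf.multiply(Matrix_one, Matrix_two)

with tf.Session(graph = graph_hadamard) as sess:
    result = sess.run(hadamard_operation)
    print ("Element-wise (Hadamard) product:")
    print(result)
###Output
_____no_output_____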
###Markdown
We could also define this multiplication ourselves, but there is a function that already does that, so no need to reinvent the wheel! Variables: TensorFlow variables hold state in the graph that can be read and updated as it runs. Let's first define a variable:
###Code
v = tf.Variable(0)
###Output
_____no_output_____
###Markdown
Now let's create a simple counter, a variable that increases by one unit at a time. To do this we use the tf.assign(reference_variable, value_to_update) operation: tf.assign takes two arguments, the reference_variable to update and the value_to_update to assign to it.
###Code
update = tf.assign(v, v+1)
###Output
_____no_output_____
###Markdown
Variables must be initialized by running an initialization operation after having launched the graph. We first have to add the initialization operation to the graph:
###Code
init_op = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
We then start a session to run the graph, first initialize the variables, then print the initial value of the state variable, and then run the operation of updating the state variable and printing the result after each update:
###Code
with tf.Session() as session:
session.run(init_op)
print(session.run(v))
for _ in range(3):
session.run(update)
print(session.run(v))
###Output
0
1
2
3
###Markdown
Placeholders act as 'holes' in the graph through which we feed data at run time. So we create a placeholder:
###Code
a = tf.placeholder(tf.float32)
###Output
_____no_output_____
###Markdown
And define a simple multiplication operation:
###Code
b = a * 2
###Output
_____no_output_____
###Markdown
Now we need to define and run the session, but since we created a "hole" in the model to pass the data, when we initialize the session we are obligated to pass an argument with the data, otherwise we would get an error.To pass the data into the model we call the session with an extra argument feed_dict in which we should pass a dictionary with each placeholder name followed by its respective data, just like this:
###Code
with tf.Session() as sess:
result = sess.run(b,feed_dict={a:3.5})
print (result)
###Output
7.0
###Markdown
Since data in TensorFlow is passed in form of multidimensional arrays we can pass any kind of tensor through the placeholders to get the answer to the simple multiplication operation:
###Code
dictionary={a: [ [ [1,2,3],[4,5,6],[7,8,9],[10,11,12] ] , [ [13,14,15],[16,17,18],[19,20,21],[22,23,24] ] ] }
with tf.Session() as sess:
result = sess.run(b,feed_dict=dictionary)
print (result)
graph5 = tf.Graph()
with graph5.as_default():
a = tf.constant([5])
b = tf.constant([2])
c = tf.add(a,b)
d = tf.subtract(a,b)
with tf.Session(graph = graph5) as sess:
result = sess.run(c)
print ('c =: %s' % result)
result = sess.run(d)
print ('d =: %s' % result)
###Output
c =: [7]
d =: [3]
|
session2/perceptron.ipynb | ###Markdown
Ens'IA - Session 2 - Intro to neural networks 1/2 Welcome to the **second session** of Ens'IA. Today, things are gonna get interesting! We are gonna focus on **neural networks**. But first (what a surprise), neural networks are made of neurons! So what is a neuron? To make sure you understand what it is, we will go back to 1958 and take a look at **the perceptron**. During the oral presentation, you should have seen that a perceptron has one or more inputs and a single output. The output is given by: \begin{equation} s = \left\{ \begin{array}{ll} 1 & \mbox{if } \sum_{i=0}^{n} a_{i} \times w_{i} + b > 0 \\ 0 & \mbox{otherwise} \end{array} \right.\end{equation}Now it's time for you to *create your own perceptron*. Perceptron implementation
###Code
class Perceptron:
"""
Build of a perceptron
weights : List of weights
bias : The bias.
"""
def __init__(self,weights,bias):
#TODO
self.bias = bias
self.weights = weights
"""
Function called when you want to get the output from the input
input : List of input values.
"""
def forward(self,input):
assert(len(input)==len(self.weights))
#TODO
sum = 0
for inp,w in zip(input,self.weights):
sum += inp*w
if sum + self.bias > 0:
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
Test Now, let's **test** it !
###Code
#TODO
perceptron = Perceptron([1,1],-1)
assert(perceptron.forward([1,1])==1)
assert(perceptron.forward([1,0])==0)
assert(perceptron.forward([0,1])==0)
assert(perceptron.forward([0,0])==0)
###Output
_____no_output_____
###Markdown
If you get no error message, then it must work! Here we have created a perceptron with weights 1 and 1 and a bias of -1. Do you notice any link between the inputs and the output? Maybe something you have seen in your processor architecture class... NAND implementation Your next mission will be to create a perceptron that reproduces a NAND gate. It will have 2 inputs and a bias, but you have to find which values are the right ones ...
###Code
perceptron_nand = Perceptron([-2,-2],3)
assert(perceptron_nand.forward([1,1])==0)
assert(perceptron_nand.forward([1,0])==1)
assert(perceptron_nand.forward([0,1])==1)
assert(perceptron_nand.forward([0,0])==1)
#If you don't get any error messages when running this code, then you have found the right weights and bias!
###Output
_____no_output_____
###Markdown
XOR implementation And if you now try to find a perceptron that reproduces an XOR gate...?
###Code
#TODO
#Spoil : It's impossible :p
###Output
_____no_output_____ |
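###Markdown
As a sketch of how the limitation above is overcome (an addition for illustration, not part of the original exercise): XOR can be built by combining several NAND perceptrons into two layers, using the classic four-NAND-gate construction and the perceptron_nand defined above.
###Code
# Two-layer construction of XOR from the NAND perceptron defined earlier
def xor(a, b):
    n1 = perceptron_nand.forward([a, b])
    n2 = perceptron_nand.forward([a, n1])
    n3 = perceptron_nand.forward([n1, b])
    return perceptron_nand.forward([n2, n3])

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor(a, b))
###Output
_____no_output_____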
src/plotting/TATA_enrichmentold.ipynb | ###Markdown
Housekeeping genes have significantly fewer TATA boxes than variable genes. Now I need to rerun the analyses using GAT enrichment. If the binding sites you're mapping are small, you need the mappability genome containing all regions that are uniquely mappable with reads of 24 bases (https://genome.ucsc.edu/cgi-bin/hgTrackUi?db=hg38&g=mappability). See https://gat.readthedocs.io/en/latest/tutorialGenomicAnnotation.html. Downloaded TATA_boxes.bed and TATA_boxes.fps (both the same, different formats) from EPD. Used the following search parameters for the download: FindM Genome Assembly: A. thaliana (Feb 2011 TAIR10/araTha1); Series: EPDnew, the Arabidopsis Curated Promoter Database; Sample: TSS from EPDnew rel 004; Repeat masking: off; 5' border: -50; 3' border: 0; Search mode: forward; Selection mode: all matches. In the end I didn't need these files, since the existing TATA-box files for the specific genes of interest (responsive_housekeeping_TATA_box_positive.bed) can be used. Copied chromsizes.chr to data/EPD_promoter_analysis/TATA and converted it into a BED file for the workspace.
###Code
#create a bed file containing all 100 constitutive/responsive promoters with the fourth column annotating whether it's constitutive or responsive
promoters_no_random = promoters.copy()
#drop randCont rows
promoters_no_random = promoters_filtered[~(promoters.gene_type == 'randCont')]
promoters_no_random
promoterbedfile = '../../data/FIMO/responsivepromoters.bed'
promoters = pd.read_table(promoterbedfile, sep='\t', header=None)
cols = ['chr', 'start', 'stop', 'promoter_AGI', 'score', 'strand', 'source', 'feature_name', 'dot2', 'attributes']
promoters.columns = cols
merged = pd.merge(promoters,promoters_no_random, on='promoter_AGI')
merged
merged_reordered = merged[['chr','start','stop','gene_type', 'strand', 'source', 'attributes','promoter_AGI']]
sorted_motifs = merged_reordered.sort_values(['chr','start'])
bed = BedTool.from_dataframe(sorted_motifs).saveas('../../data/EPD_promoter_analysis/TATA/promoters_norandom.bed')
def add_chr_linestart(input_location,output_location):
"""this function removes characters from the start of each line in the input file and sends modified lines to output"""
output = open(output_location, 'w') #make output file with write capability
#open input file
with open(input_location, 'r') as infile:
#iterate over lines in file
for line in infile:
line = line.strip() # removes hidden characters/spaces
if line[0].isdigit():
line = 'chr' + line #prepend chr to the beginning of line if starts with a digit
output.write(line + '\n') #output to new file
output.close()
add_chr_linestart('../../data/EPD_promoter_analysis/TATA/promoters_norandom.bed', '../../data/EPD_promoter_analysis/TATA/promoters_norandom_renamed.bed')
# #In bash I ran this:
# gat-run.py --ignore-segment-tracks --segments=../../data/EPD_promoter_analysis/responsive_housekeeping_TATA_box_positive.bed `#TATA box annotations` \
# --annotations=../../data/EPD_promoter_analysis/TATA/promoters_norandom.bed `#100 constitutive/responsive promoter annotations` \
# --workspace=../../data/EPD_promoter_analysis/TATA/chromsizes.bed `#Arabidopsis chromosome bed file` \
# --num-samples=1000 --log=../../data/EPD_promoter_analysis/TATA/gat.log > ../../data/EPD_promoter_analysis/TATA/gat_TATA.out
# # note, --num-threads=7 is currently broken`
# #test run
# gat-run.py --ignore-segment-tracks --segments=../../data/EPD_promoter_analysis/responsive_housekeeping_TATA_box_positive.bed `#TATA box annotations` \
# --annotations=../../data/EPD_promoter_analysis/TATA/promoters_norandom_renamed.bed `#100 constitutive/responsive promoter annotations` \
# --workspace=../../data/EPD_promoter_analysis/TATA/chromsizes.bed `#Arabidopsis chromosome bed file` \
# --num-samples=1000 --log=../../data/EPD_promoter_analysis/TATA/gat.log > ../../data/EPD_promoter_analysis/TATA/gat_TATA.out
###Output
_____no_output_____
###Markdown
Calculate distance of TATA box from TSS
###Code
cols = ['chrTATA', 'startTATA', 'stopTATA', 'gene_IDTATA','number','strandTATA','TATA_present','promoter_AGI']
TATA.columns = cols
TATA
#merge TATA bed with promoters
sorted_motifs
TATA_distance = pd.merge(TATA,sorted_motifs, how='inner', on='promoter_AGI')
TATA_distance
#calculate distance between TATA and TSS
TATA_distance.loc[TATA_distance.strand =='+', 'TATAdistance(bp)'] = TATA_distance.startTATA - TATA_distance.stop
TATA_distance.loc[TATA_distance.strand =='-', 'TATAdistance(bp)'] = TATA_distance.start - TATA_distance.startTATA
TATA_distance
###Output
_____no_output_____
###Markdown
Create distribution plotNote:The y axis is a density, not a probability. The normalized histogram does not show a probability mass function, where the sum the bar heights equals 1; the normalization ensures that the sum of the bar heights times the bar widths equals 1. This is what ensures that the normalized histogram is comparable to the kernel density estimate, which is normalized so that the area under the curve is equal to 1.
###Code
dist_plot = TATA_distance['TATAdistance(bp)']
#create figure with no transparency
dist_plot_fig = sns.distplot(dist_plot).get_figure()
dist_plot_fig.savefig('../../data/plots/TATAbox/TATA_distance_from_extracted_promoters.pdf', format='pdf')
TATA_distance['TATAdistance(bp)']
#Make TATA box segment the actual size - I will set all to 15 bp
TATA_15bp = TATA.copy()
TATA_15bp
#Make TATA box segment the actual size - I will set all to 15 bp
TATA_15bp.loc[TATA_15bp.strandTATA =='+', 'stopTATA'] = TATA_15bp.stopTATA + 14
TATA_15bp.loc[TATA_15bp.strandTATA =='-', 'startTATA'] = TATA_15bp.startTATA - 14
TATA_15bp
#make into bed file
sorted_TATA = TATA_15bp.sort_values(['chrTATA','startTATA'])
bed = BedTool.from_dataframe(sorted_TATA).saveas('../../data/EPD_promoter_analysis/TATA/TATA_15bp.bed')
#extend promoter 3' end by 661 bp (to furthest registered TATA box)
responsive_constitutive_promoters_extended = sorted_motifs.copy()
responsive_constitutive_promoters_extended.loc[responsive_constitutive_promoters_extended.strand =='+', 'stop'] = responsive_constitutive_promoters_extended.stop + 675
responsive_constitutive_promoters_extended.loc[responsive_constitutive_promoters_extended.strand =='-', 'start'] = responsive_constitutive_promoters_extended.start - 675
sorted_proms = responsive_constitutive_promoters_extended.sort_values(['chr','start'])
bed = BedTool.from_dataframe(sorted_proms).saveas('../../data/EPD_promoter_analysis/TATA/responsive_constitutive_promoters_extended.bed')
#add chr to chromosome name
add_chr_linestart('../../data/EPD_promoter_analysis/TATA/responsive_constitutive_promoters_extended.bed', '../../data/EPD_promoter_analysis/TATA/responsive_constitutive_promoters_extended_renamed.bed')
#rerun analysis using nonbidirectional promoters
nonbidirectional_proms_file = '../../data/FIMO/nonbidirectional_proms.bed'
nonbidirectional_proms = pd.read_table(nonbidirectional_proms_file, sep='\t', header=None)
cols3 = ['chr', 'start', 'stop','promoter_AGI','dot1', 'strand','source_bi', 'type','dot2', 'attributes']
nonbidirectional_proms.columns = cols3
nonbidir_const_var_proms = pd.merge(sorted_motifs, nonbidirectional_proms[['promoter_AGI','source_bi']], how='left', on='promoter_AGI')
nonbidir_const_var_proms = nonbidir_const_var_proms[~nonbidir_const_var_proms['source_bi'].isnull()]
nonbidir_const_var_proms
#number of nonbidirectional housekeeping genes
len(nonbidir_const_var_proms[nonbidir_const_var_proms.gene_type == 'housekeeping'])
#number of nonbidirectional variable genes
len(nonbidir_const_var_proms[nonbidir_const_var_proms.gene_type == 'highVar'])
# gat-run.py --ignore-segment-tracks --segments=../../data/EPD_promoter_analysis/TATA/TATA_15bp.bed `#TATA box annotations` \
# --annotations=../../data/EPD_promoter_analysis/TATA/responsive_constitutive_promoters_extended_renamed.bed `#100 constitutive/responsive promoter annotations` \
# --workspace=../../data/EPD_promoter_analysis/TATA/chromsizes.bed `#Arabidopsis chromosome bed file` \
# --num-samples=1000 --log=../../data/EPD_promoter_analysis/TATA/gat.log > ../../data/EPD_promoter_analysis/TATA/gat_TATA.out
#Create file with only the variable promoters
extended_promoters_file = '../../data/EPD_promoter_analysis/TATA/responsive_constitutive_promoters_extended_renamed.bed'
extended_promoters = pd.read_table(extended_promoters_file, sep='\t', header=None)
#make a new gat workspace file with all promoters (first 3 columns)
bed = BedTool.from_dataframe(extended_promoters[[0,1,2]]).saveas('../../data/EPD_promoter_analysis/TATA/responsive_constitutive_promoters_extended_workspace.bed')
#select only variable promoters
variable_promoters_extended = extended_promoters[extended_promoters[3] == 'highVar']
sorted_variable = variable_promoters_extended.sort_values([0,1])
bed = BedTool.from_dataframe(sorted_variable).saveas('../../data/EPD_promoter_analysis/TATA/variable_promoters_extended.bed')
#make a constitutive only file for completness sake
constitutive_promoters_extended = extended_promoters[extended_promoters[3] == 'housekeeping']
sorted_constitutive = constitutive_promoters_extended.sort_values([0,1])
bed = BedTool.from_dataframe(sorted_constitutive).saveas('../../data/EPD_promoter_analysis/TATA/constitutive_promoters_extended.bed')
log2fold = pd.read_csv('../../data/EPD_promoter_analysis/TATA/TATAlogfold.csv', header=0)
log2fold
#rename
log2fold.Gene_type.replace('Variable','variable', inplace=True)
log2fold.Gene_type.replace('Constitutive','constitutive', inplace=True)
#set style to ticks
sns.set(style="ticks", color_codes=True)
#bar chart, 95% confidence intervals
plot = sns.barplot(x="Gene_type", y="Log2-fold", data=log2fold)
plot.axhline(0, color='black')
plt.xlabel("Gene type")
plt.ylabel("Log2-fold enrichment over background").get_figure().savefig('../../data/plots/TATAbox/log2fold.pdf', format='pdf')
###Output
_____no_output_____ |
深度学习/d2l-zh-1.1/chapter_convolutional-neural-networks/resnet.ipynb | ###Markdown
Residual Networks (ResNet)Let us first consider a question: if we add new layers to a neural network model, can the fully trained model only become better at reducing the training error? In theory, the solution space of the original model is a subspace of that of the new model. That is, if we could train the newly added layers into an identity mapping $f(x) = x$, the new model would be at least as effective as the original. Since the new model may find a better solution to fit the training data, adding layers seems to make it easier to reduce the training error. In practice, however, the training error often rises rather than falls after too many layers are added. Even with the numerical stability brought by batch normalization, which makes deep models easier to train, the problem persists. To address it, Kaiming He et al. proposed the residual network (ResNet) [1]. It won the ImageNet image-recognition challenge in 2015 and has deeply influenced the design of later deep neural networks. Residual blocksLet us focus on a local part of a neural network. As shown in Figure 5.9, let the input be $\boldsymbol{x}$. Suppose the ideal mapping we want to learn is $f(\boldsymbol{x})$, which serves as the input to the activation function at the top of Figure 5.9. The part inside the dashed box on the left needs to fit the mapping $f(\boldsymbol{x})$ directly, while the part inside the dashed box on the right needs to fit the residual mapping $f(\boldsymbol{x})-\boldsymbol{x}$ with respect to the identity mapping. The residual mapping is often easier to optimize in practice. Take the identity mapping mentioned at the beginning of this section as the ideal mapping $f(\boldsymbol{x})$: we only need to learn the weights and biases of the weighted operations (such as the affine transformation) in the upper part of the right-hand dashed box of Figure 5.9 to be zero, and $f(\boldsymbol{x})$ becomes the identity mapping. In practice, when the ideal mapping $f(\boldsymbol{x})$ is very close to the identity mapping, the residual mapping also makes it easier to capture small fluctuations around the identity. The right part of Figure 5.9 is also the basic building block of ResNet: the residual block. In a residual block, the input can propagate forward faster through the cross-layer data path. ResNet follows VGG's design of full $3\times 3$ convolutional layers. A residual block first has two $3\times 3$ convolutional layers with the same number of output channels, each followed by a batch-normalization layer and a ReLU activation function. We then skip these two convolution operations and add the input directly before the final ReLU activation function. This design requires the output of the two convolutional layers to have the same shape as the input so that they can be added. If we want to change the number of channels, we need to introduce an extra $1\times 1$ convolutional layer to transform the input into the required shape before the addition. The residual block is implemented below. It lets us set the number of output channels, whether to use an extra $1\times 1$ convolutional layer to modify the number of channels, and the stride of the convolution.
###Code
import d2lzh as d2l
from mxnet import gluon, init, nd
from mxnet.gluon import nn
class Residual(nn.Block): # 本类已保存在d2lzh包中方便以后使用
def __init__(self, num_channels, use_1x1conv=False, strides=1, **kwargs):
super(Residual, self).__init__(**kwargs)
self.conv1 = nn.Conv2D(num_channels, kernel_size=3, padding=1,
strides=strides)
self.conv2 = nn.Conv2D(num_channels, kernel_size=3, padding=1)
if use_1x1conv:
self.conv3 = nn.Conv2D(num_channels, kernel_size=1,
strides=strides)
else:
self.conv3 = None
self.bn1 = nn.BatchNorm()
self.bn2 = nn.BatchNorm()
def forward(self, X):
Y = nd.relu(self.bn1(self.conv1(X)))
Y = self.bn2(self.conv2(Y))
if self.conv3:
X = self.conv3(X)
return nd.relu(Y + X)
###Output
_____no_output_____
###Markdown
Now let us check the case where the input and output shapes are the same.
###Code
blk = Residual(3)
blk.initialize()
X = nd.random.uniform(shape=(4, 3, 6, 6))
blk(X).shape
###Output
_____no_output_____
###Markdown
We can also halve the output height and width while increasing the number of output channels.
###Code
blk = Residual(6, use_1x1conv=True, strides=2)
blk.initialize()
blk(X).shape
###Output
_____no_output_____
###Markdown
The ResNet modelThe first two layers of ResNet are the same as those of the GoogLeNet described earlier: a $7\times 7$ convolutional layer with 64 output channels and a stride of 2, followed by a $3\times 3$ max-pooling layer with a stride of 2. The difference is that ResNet adds a batch-normalization layer after each convolutional layer.
###Code
net = nn.Sequential()
net.add(nn.Conv2D(64, kernel_size=7, strides=2, padding=3),
nn.BatchNorm(), nn.Activation('relu'),
nn.MaxPool2D(pool_size=3, strides=2, padding=1))
###Output
_____no_output_____
###Markdown
GoogLeNet is followed by four modules made up of Inception blocks. ResNet instead uses four modules made up of residual blocks, each of which uses several residual blocks with the same number of output channels. The number of channels in the first module is the same as the number of input channels. Since a max-pooling layer with a stride of 2 has already been used, there is no need to reduce the height and width. In each of the subsequent modules, the first residual block doubles the number of channels relative to the previous module and halves the height and width. We now implement this module. Note that special handling is applied to the first module.
###Code
def resnet_block(num_channels, num_residuals, first_block=False):
blk = nn.Sequential()
for i in range(num_residuals):
if i == 0 and not first_block:
blk.add(Residual(num_channels, use_1x1conv=True, strides=2))
else:
blk.add(Residual(num_channels))
return blk
###Output
_____no_output_____
###Markdown
Next we add all the residual blocks to ResNet. Here each module uses two residual blocks.
###Code
net.add(resnet_block(64, 2, first_block=True),
resnet_block(128, 2),
resnet_block(256, 2),
resnet_block(512, 2))
###Output
_____no_output_____
###Markdown
Finally, as in GoogLeNet, we add a global average pooling layer followed by a fully connected layer for the output.
###Code
net.add(nn.GlobalAvgPool2D(), nn.Dense(10))
###Output
_____no_output_____
###Markdown
Each module here has 4 convolutional layers (not counting the $1\times 1$ convolutional layers). Together with the first convolutional layer and the final fully connected layer, that makes 18 layers in total, so this model is usually called ResNet-18. By configuring different numbers of channels and of residual blocks per module, we can obtain different ResNet models, such as the deeper 152-layer ResNet-152. Although the main architecture of ResNet is similar to that of GoogLeNet, the ResNet structure is simpler and easier to modify. These factors have led to ResNet being quickly and widely adopted. Before training ResNet, let us observe how the input shape changes between the different ResNet modules.
###Code
X = nd.random.uniform(shape=(1, 1, 224, 224))
net.initialize()
for layer in net:
X = layer(X)
print(layer.name, 'output shape:\t', X.shape)
###Output
conv5 output shape: (1, 64, 112, 112)
batchnorm4 output shape: (1, 64, 112, 112)
relu0 output shape: (1, 64, 112, 112)
pool0 output shape: (1, 64, 56, 56)
sequential1 output shape: (1, 64, 56, 56)
sequential2 output shape: (1, 128, 28, 28)
sequential3 output shape: (1, 256, 14, 14)
sequential4 output shape: (1, 512, 7, 7)
pool1 output shape: (1, 512, 1, 1)
dense0 output shape: (1, 10)
###Markdown
Training the modelNext we train ResNet on the Fashion-MNIST dataset.
###Code
lr, num_epochs, batch_size, ctx = 0.05, 5, 256, d2l.try_gpu()
net.initialize(force_reinit=True, ctx=ctx, init=init.Xavier())
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': lr})
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size, resize=96)
d2l.train_ch5(net, train_iter, test_iter, batch_size, trainer, ctx,
num_epochs)
###Output
training on gpu(0)
|
prediction/multitask/fine-tuning/function documentation generation/ruby/small_model.ipynb | ###Markdown
**Predict the documentation for ruby code using the CodeTrans multitask fine-tuned model**You can make free predictions online through this link (when using the online prediction, you need to parse and tokenize the code first.) **1. Load necessary libraries including huggingface transformers**
###Code
!pip install -q transformers sentencepiece
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline
###Output
_____no_output_____
###Markdown
**2. Load the summarization pipeline and load it into the GPU if available**
###Code
pipeline = SummarizationPipeline(
model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask_finetune"),
tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_documentation_generation_ruby_multitask_finetune", skip_special_tokens=True),
device=0
)
###Output
/usr/local/lib/python3.6/dist-packages/transformers/models/auto/modeling_auto.py:852: FutureWarning: The class `AutoModelWithLMHead` is deprecated and will be removed in a future version. Please use `AutoModelForCausalLM` for causal language models, `AutoModelForMaskedLM` for masked language models and `AutoModelForSeq2SeqLM` for encoder-decoder models.
FutureWarning,
###Markdown
**3. Give the code for summarization, parse and tokenize it**
###Code
code = "def add(severity, progname, &block)\n return true if io.nil? || severity < level\n message = format_message(severity, progname, yield)\n MUTEX.synchronize { io.write(message) }\n true\n end" #@param {type:"raw"}
!pip install tree_sitter
!git clone https://github.com/tree-sitter/tree-sitter-ruby
from tree_sitter import Language, Parser
Language.build_library(
'build/my-languages.so',
['tree-sitter-ruby']
)
RUBY_LANGUAGE = Language('build/my-languages.so', 'ruby')
parser = Parser()
parser.set_language(RUBY_LANGUAGE)
def get_string_from_code(node, lines):
line_start = node.start_point[0]
line_end = node.end_point[0]
char_start = node.start_point[1]
char_end = node.end_point[1]
if line_start != line_end:
code_list.append(' '.join([lines[line_start][char_start:]] + lines[line_start+1:line_end] + [lines[line_end][:char_end]]))
else:
code_list.append(lines[line_start][char_start:char_end])
def my_traverse(node, code_list):
lines = code.split('\n')
if node.child_count == 0:
get_string_from_code(node, lines)
elif node.type == 'string':
get_string_from_code(node, lines)
else:
for n in node.children:
my_traverse(n, code_list)
return ' '.join(code_list)
tree = parser.parse(bytes(code, "utf8"))
code_list=[]
tokenized_code = my_traverse(tree.root_node, code_list)
print("Output after tokenization: " + tokenized_code)
###Output
Output after tokenization: def add ( severity , progname , & block ) return true if io . nil? || severity < level message = format_message ( severity , progname , yield ) MUTEX . synchronize { io . write ( message ) } true end
###Markdown
**4. Make Prediction**
###Code
pipeline([tokenized_code])
###Output
Your max_length is set to 512, but you input_length is only 57. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)
|
notebooks/Add_numbers_to_csv_without_missing_values_for_shiny.ipynb | ###Markdown
Create .csv File to Use For Shiny AppOnce the decision was made to use a rules based model, a .csv file was put together that only had the necessary information for the shiny app. When making the Shiny App it was found that missing values were causing problems. Schools with missing values in the key variables were dropped from the final .csv and app. The total of schools missing enrollment data was one, schools missing data on absenteeism 18, schools missing data on days missed due to suspensions 19, and schools without data on sports participation was 3,461. The final .csv file had 63 columns.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import math
numbers = pd.read_csv('/Users/flatironschool/Absenteeism_Project/data/processed/combo_cleaned.csv')
numbers.tail()
###Output
_____no_output_____
###Markdown
Clean up graduation rates and add grad rate bins
###Code
#need to keep original reported grade and need column to modify and clean data
numbers['grad_slice'] = numbers['ALL_RATE_1516']
#remove "GE" and "LE" from ranges
numbers['grad_slice'].replace(['GE99'], '+100', inplace=True) # use '+100' as a marker so it can be restored to 100 after taking the first two characters
numbers['grad_slice'].replace(['GE95'], '95', inplace=True)
numbers['grad_slice'].replace(['GE90'], '90', inplace=True)
numbers['grad_slice'].replace(['LE10'], '10', inplace=True)
numbers['grad_slice'].replace(['LE1'], '1', inplace=True)
numbers['grad_slice'].replace(['LE5'], '05', inplace=True)
#smallest range needs to be dealt with, has one digit before '-'
numbers['grad_slice'].replace(['6-9'], '6', inplace=True)
#take first two digits of rates to get rid of ranges
numbers['grad_slice'] = numbers['grad_slice'].str[:2]
#fix 100
numbers['grad_slice'].replace(['+1'], '100', inplace=True)
#get rid of very small schools
grad_num = numbers[numbers['ALL_COHORT_1516'] >= 31]
#create the binned categories
grad_num['grad_rate_bin'] = pd.cut(grad_num['grad_slice'].astype(int), [0, 59, 79, 89, 99, 100],
labels = ['0-59%', '60-79%', '80-89%', '90-99%', '100%'])
grad_num.tail()
grad_num['grad_rate_bin'].value_counts()
grad_num.columns.to_list()
###Output
_____no_output_____
###Markdown
Create Level Up Bins
###Code
#level up bins
#create the binned categories
grad_num['level_up_bins'] = pd.cut(grad_num['grad_slice'].astype(int),
[0, 59, 79, 89, 99, 100], labels = ['60-79% Level Up Rate', '80-89% Level Up Rate', '90-99% Level Up Rate', '100% Level Up Rate', '100% Top Rate'])
grad_num.head()
###Output
_____no_output_____
###Markdown
Calculate Quantiles and Add to Data Frame
###Code
quantile_df_25 = grad_num.groupby('grad_rate_bin')['non_cert_rate', 'sports_rate', 'chronic_absent_rate', 'suspensed_day_rate'].quantile(.25).reset_index()
quantile_df_75 = grad_num.groupby('grad_rate_bin')['non_cert_rate', 'sports_rate', 'chronic_absent_rate', 'suspensed_day_rate'].quantile(.75).reset_index()
quantile_df_25.head()
quantile_df_75.head()
grad_num = grad_num.merge(quantile_df_25, on='grad_rate_bin', suffixes=('_x', '_25th'))
grad_num = grad_num.merge(quantile_df_75, on='grad_rate_bin', suffixes=('_y', '_75th'))
grad_num.head()
###Output
_____no_output_____
###Markdown
Change column names to final names for shiny app
###Code
grad_num.rename(columns={'total_enrollment':'Total_Enrollment','total_chronic_absent':'Number_of_Chronically_Absent_Students', 'sports_part':'Number_of_Student_Athletes'},inplace=True)
grad_num.rename(columns={'ALL_RATE_1516':'Graduation_Rate_2015_16'}, inplace=True)
grad_num.rename(columns={'SCH_FTETEACH_NOTCERT':'Number_of_Non_Certified_Teachers'}, inplace=True)
grad_num.rename(columns={'level_up_bins':'Level_Up_Graduation_Rate'}, inplace=True)
grad_num.rename(columns={'STNAM':'State', 'LEANM':'District', 'SCHNAM':'High_School', 'ALL_RATE_1516':'Graduation_Rate_2015_16'},inplace=True)
grad_num.rename(columns={'total_suspension_days':'Number_of_Days_Missed_to_Suspensions'}, inplace=True)
grad_num.rename(columns={'SCH_FTETEACH_TOT':'Number_of_Total_Teachers'}, inplace=True)
###Output
_____no_output_____
###Markdown
Calculate Middle 50% Range for App
###Code
grad_num['Level_Up_25th_Percentile_Number_Chronic_Absent_Students'] = grad_num['Total_Enrollment'] * grad_num['chronic_absent_rate_25th']
grad_num['Level_Up_75th_Percentile_Number_Chronic_Absent_Students'] = round(grad_num['Total_Enrollment'] * grad_num['chronic_absent_rate_75th'],0)
grad_num['Level_Up_25th_Percentile_Student_Athletes'] = round(grad_num['Total_Enrollment'] * grad_num['sports_rate_25th'],0)
grad_num['Level_Up_75th_Percentile_Student_Athletes'] = round(grad_num['Total_Enrollment'] * grad_num['sports_rate_75th'],0)
grad_num['Level_Up_25th_Percentile_Days_Missed_due_to_Suspension'] = round(grad_num['Total_Enrollment'] * grad_num['suspensed_day_rate_25th'],0)
grad_num['Level_Up_75th_Percentile_Days_Missed_due_to_Suspension'] = round(grad_num['Total_Enrollment'] * grad_num['suspensed_day_rate_75th'],0)
grad_num['Level_Up_25th_Percentile_Non_Certified_Teachers'] = round(grad_num['Total_Enrollment'] * grad_num['non_cert_rate_25th'],0)
grad_num['Level_up_75th_Percentile_Non_Certified_Teachers'] = round(grad_num['Total_Enrollment'] * grad_num['non_cert_rate_75th'],0)
grad_num.head()
###Output
_____no_output_____
###Markdown
Add back in the Chronic Absentee Rate and Sports Participation Rate for each school
###Code
grad_num['Chronic_Absent_Rate'] = grad_num['Number_of_Chronically_Absent_Students']/grad_num['Total_Enrollment']
grad_num['Sports_Participant_Rate'] = grad_num['Number_of_Student_Athletes']/grad_num['Total_Enrollment']
###Output
_____no_output_____
###Markdown
Clean up final data frame and save to csv
###Code
grad_num.drop(['Unnamed: 0', 'Unnamed: 0.1'], axis=1, inplace=True)
grad_num.drop(['LEA_STATE_NAME', 'SCH_NAME', 'SCH_MAGNETDETAIL','SCH_ALTFOCUS', 'TOT_GTENR_M', 'TOT_GTENR_F'], axis=1, inplace=True)
grad_num.drop(grad_num.columns.to_series()['TOT_ALGENR_GS0910_M':'TOT_SATACT_F'], axis=1, inplace=True)
grad_num.drop(grad_num.columns.to_series()['SCH_HBALLEGATIONS_SEX':'SCH_HBALLEGATIONS_REL'], axis=1, inplace=True)
grad_num.drop(['SCH_NPE_WOFED', 'SCH_NPE_WFED', 'SCH_FTECOUNSELORS', 'SCH_FTETEACH_ABSENT'], axis=1, inplace=True)
grad_num.drop(grad_num.columns.to_series()['total_ap_ib_de':'calc_rate'], axis=1, inplace=True)
grad_num.drop(grad_num.columns.to_series()['harassed':'activities_funds_rate'], axis=1, inplace=True)
grad_num.drop(['counselor_rate', 'absent_teacher_rate'], axis=1, inplace=True)
grad_num.drop(['TOT_DUAL_M', 'TOT_DUAL_F'], axis=1, inplace=True)
#delete rows no longer needed for shiny app
#delete rows with NANs
#no missing values
grad_num.isna().sum()
#delete rows with NANs in enrollment, absenteeism, sports participation, non-certified teachers,
#and days missed to suspension
grad_num.dropna(subset=['Total_Enrollment', 'TOT_ENR_M', 'TOT_ENR_F'], inplace=True)
grad_num.dropna(subset=['Number_of_Chronically_Absent_Students','TOT_ABSENT_M', 'TOT_ABSENT_F'], inplace=True)
grad_num.dropna(subset=['Number_of_Student_Athletes', 'SCH_SSPART_M', 'SCH_SSPART_F', 'TOT_SSPART'], inplace=True)
grad_num.dropna(subset=['Number_of_Non_Certified_Teachers', 'non_cert_rate_x', 'Number_of_Total_Teachers'], inplace=True)
grad_num.dropna(subset=['TOT_DAYSMISSED_M', 'TOT_DAYSMISSED_F', 'Number_of_Days_Missed_to_Suspensions', 'suspensed_day_rate_x'], inplace=True)
###Output
_____no_output_____
###Markdown
Save the final data frame to csv for future use
###Code
grad_num2 = grad_num
#data frame deleting NANs
grad_num2.to_csv('grad_num2.csv')
grad_num2.head()
grad_num2['Number_of_Days_Missed_to_Suspensions'].mean()
grad_num2['Total_Enrollment'].max()
# completing the truncated plot call; the choice of columns here is an assumption
sns.lmplot(data=grad_num2, x='Chronic_Absent_Rate', y='Sports_Participant_Rate')
###Output
_____no_output_____ |
notebooks/03-DTW-measure.ipynb | ###Markdown
Dynamic Time Warping measureAfter visualising the forecasts in notebook `02-LSTM-experiment`, a delayed forecast can clearly be observed in the plotted storms; however, the metrics used there do not show the model giving bad performance. We introduce a new measure based on dynamic time warping to detect this kind of error.__Remark: Make sure that the previous notebooks have been run at least once to ensure the necessary files exist.__
###Code
import sys
import pandas as pd
import numpy as np
sys.path.append('../')
from src.dtw.dtw_measure import dtw_measure
import h5py
# Import the data
def load_testing_sets(fname='../data/processed/datasets.h5'):
with h5py.File(fname, 'r') as f:
test_in = f['test_sets/test_in'][:]
test_out = f['test_sets/test_out'][:]
predict = f['test_sets/prediction'][:]
lookup = f['test_sets/lookup'][:]
return test_in, test_out, predict, lookup.astype('datetime64[s]')
test_in, test_out, predict, lookup = load_testing_sets()
time_forward = 6
###Output
_____no_output_____
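###Markdown
As a quick, hedged sanity check (an editorial addition, not part of the original analysis): applying `dtw_measure` to a signal and a copy of it delayed by two steps should give a warping path whose index differences concentrate around 2. The series below is synthetic and purely illustrative.
###Code
# Synthetic illustration: a sine wave versus the same wave delayed by 2 samples
t = np.arange(200)
signal = np.sin(0.1 * t)
delayed = np.roll(signal, 2)   # circular shift; the wrap-around only affects the edges
_, path, _ = dtw_measure(delayed, signal, 6)
bins, counts = np.unique(abs(path[0, :] - path[1, :]), return_counts=True)
print(dict(zip(bins, counts)))
###Output
_____no_output_____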
###Markdown
An important condition for DTW is that each time series is continuous, e.g. combining independent time series into one and evaluating this will give incorrect results. In notebook `01-data-preparation`, invalid measurements were removed, breaking the test data into a set of continous time series. All of these series must first be identified.
###Code
def extract_continuous_intervals(table):
r'''Check lookup table for time discontinuities
output:
Returns list of continouos times inside the lookup table
'''
lookup = pd.DataFrame(data=np.arange(table.shape[0]), index=pd.to_datetime(table[:,0]))
lookup.index = pd.DatetimeIndex(lookup.index)
# split = [g for n,g in lookup.groupby(pd.Grouper(freq='M')) if g.shape[0] != 0]
min_size = 10
timeseries = []
#for month in split:
series = lookup.index
while len(series) > 0:
# We can assume that the series starts from non-missing values, so the first diff gives sizes of continous intervals
diff = pd.date_range(series[0], series[-1], freq='H').difference(series)
if len(diff) > 0:
if pd.Timedelta(diff[0] - pd.Timedelta('1h') - series[0])/pd.Timedelta('1h') > min_size:
v1 = lookup.loc[series[0]][0]
v2 = lookup.loc[diff[0] - pd.Timedelta('1h')][0]
timeseries.append([v1, v2])
if pd.Timedelta(series[-1] - diff[-1] - pd.Timedelta('1h'))/pd.Timedelta('1h') > min_size:
v1 = lookup.loc[diff[-1] + pd.Timedelta('1h')][0]
v2 = lookup.loc[series[-1]][0]
timeseries.append([v1, v2])
diff = pd.date_range(diff[0], diff[-1], freq='H').difference(diff)
else:
# Only when diff is empty
v1 = lookup.loc[series[0]][0]
v2 = lookup.loc[series[-1]][0]
timeseries.append([v1, v2])
series = diff
return np.array(timeseries)
intervals = extract_continuous_intervals(lookup)
###Output
_____no_output_____
###Markdown
Now that we have continuous intervals, the DTW measure is applied to each interval. From the resulting warping path, we measure the time shift between mapped points. The total counts are summarized in a pandas DataFrame, which is then normalized over the rows with `reformat_dtw_res` to give proportions.
###Code
bincounts = np.zeros((time_forward,7))
counter = 0
for start, stop in intervals:
counter += 1
for i in range(time_forward):
_, path, _ = dtw_measure(predict[start:stop, 0, i], test_out[start:stop, 0, i], time_forward)
bins, counts = np.unique(abs(path[0, :] - path[1, :]), return_counts=True)
bincounts[i, bins] += counts
lat_res = pd.DataFrame(data=bincounts, index=np.arange(1, time_forward+1), columns=np.arange(7))
print(lat_res)
def reformat_dtw_res(df, filename=None):
'''Normalize the result from the dtw measure
'''
res = df.div(df.sum(axis=1), axis=0)
shifts = np.array(['t+{}h'.format(i+1) for i in np.arange(res.shape[0])])
res['Prediction'] = shifts.T
res = res.set_index('Prediction')
res.columns = ['{}h'.format(i) for i in res.columns]
res = res.apply(lambda x: round(x, 3))
    if filename:
        res.to_csv('reformated_{0}'.format(filename))
return res
reformat_dtw_res(lat_res)
###Output
_____no_output_____ |
student_intervention/student_intervention_stratified.ipynb | ###Markdown
Project 2: Supervised Learning Building a Student Intervention System 1. Classification vs RegressionYour goal is to identify students who might need early intervention - which type of supervised machine learning problem is this, classification or regression? Why? Identifying students who might need early intervention is a *classification* problem as you are sorting students into classes (*needs intervention*, *doesn't need intervention*) rather than trying to predict a quantitative value. 2. Exploring the DataLet's go ahead and read in the student dataset first._To execute a code cell, click inside it and press **Shift+Enter**._
###Code
# Import libraries
import numpy as np
import pandas as pd
# additional imports
import matplotlib.pyplot as plot
import seaborn
from sklearn.cross_validation import train_test_split
%matplotlib inline
RANDOM_STATE = 100
REPETITIONS = 1
RUN_PLOTS = True
# Read student data
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
# Note: The last column 'passed' is the target/label, all other are feature columns
###Output
Student data read successfully!
###Markdown
Now, can you find out the following facts about the dataset?- Total number of students- Number of students who passed- Number of students who failed- Graduation rate of the class (%)- Number of features_Use the code block below to compute these values. Instructions/steps are marked using **TODO**s._
###Code
n_students = student_data.shape[0]
assert n_students == student_data.passed.count()
n_features = student_data.shape[1] - 1
assert n_features == len(student_data.columns[student_data.columns != 'passed'])
n_passed = sum(student_data.passed.map({'no': 0, 'yes': 1}))
assert n_passed == len(student_data[student_data.passed == 'yes'].passed)
n_failed = n_students - n_passed
grad_rate = n_passed/float(n_students)
print "Total number of students: {}".format(n_students)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Number of features: {}".format(n_features)
print "Graduation rate of the class: {:.2f}%".format(100 * grad_rate)
passing_rates = student_data.passed.value_counts()/student_data.passed.count()
print(passing_rates)
seaborn.set_style('whitegrid')
axe = seaborn.barplot(x=passing_rates.index, y=passing_rates.values)
title = axe.set_title("Proportion of Passing Students")
###Output
_____no_output_____
###Markdown
3. Preparing the DataIn this section, we will prepare the data for modeling, training and testing. Identify feature and target columnsIt is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.Let's first separate our data into feature and target columns, and see if any features are non-numeric.**Note**: For this dataset, the last column (`'passed'`) is the target or label we are trying to predict.
###Code
# Extract feature (X) and target (y) columns
feature_cols = list(student_data.columns[:-1]) # all columns but last are features
target_col = student_data.columns[-1] # last column is the target/label
print "Feature column(s):-\n{}".format(feature_cols)
print "Target column: {}".format(target_col)
X_all = student_data[feature_cols] # feature values for all students
y_all = student_data[target_col] # corresponding targets/labels
print "\nFeature values:-"
print X_all.head() # print the first 5 rows
print(len(X_all.columns))
###Output
30
###Markdown
Preprocess feature columnsAs you can see, there are several non-numeric columns that need to be converted! Many of them are simply `yes`/`no`, e.g. `internet`. These can be reasonably converted into `1`/`0` (binary) values.Other columns, like `Mjob` and `Fjob`, have more than two values, and are known as _categorical variables_. The recommended way to handle such a column is to create as many columns as possible values (e.g. `Fjob_teacher`, `Fjob_other`, `Fjob_services`, etc.), and assign a `1` to one of them and `0` to all others.These generated columns are sometimes called _dummy variables_, and we will use the [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummiespandas.get_dummies) function to perform this transformation.
###Code
# Preprocess feature columns
def preprocess_features(X):
outX = pd.DataFrame(index=X.index) # output dataframe, initially empty
# Check each column
for col, col_data in X.iteritems():
# If data type is non-numeric, try to replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# Note: This should change the data type for yes/no columns to int
# If still non-numeric, convert to one or more dummy variables
if col_data.dtype == object:
col_data = pd.get_dummies(col_data, prefix=col) # e.g. 'school' => 'school_GP', 'school_MS'
outX = outX.join(col_data) # collect column(s) in output dataframe
return outX
X_all = preprocess_features(X_all)
y_all = y_all.replace(['yes', 'no'], [1, 0])
print "Processed feature columns ({}):-\n{}".format(len(X_all.columns), list(X_all.columns))
len(X_all.columns)
###Output
_____no_output_____
###Markdown
Split data into training and test setsSo far, we have converted all _categorical_ features into numeric values. In this next step, we split the data (both features and corresponding labels) into training and test sets.
###Code
# First, decide how many training vs test samples you want
num_all = student_data.shape[0] # same as len(student_data)
num_train = 300 # about 75% of the data
num_test = num_all - num_train
# TODO: Then, select features (X) and corresponding labels (y) for the training and test sets
# Note: Shuffle the data or randomly select samples to avoid any bias due to ordering in the dataset
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all,
test_size=num_test,
train_size=num_train,
random_state=RANDOM_STATE,
stratify=y_all)
assert len(y_train) == 300
assert len(y_test) == 95
print "Training set: {} samples".format(X_train.shape[0])
print "Test set: {} samples".format(X_test.shape[0])
# Note: If you need a validation set, extract it from within training data
###Output
Training set: 300 samples
Test set: 95 samples
###Markdown
4. Training and Evaluating ModelsChoose 3 supervised learning models that are available in scikit-learn, and appropriate for this problem. For each model:- What are the general applications of this model? What are its strengths and weaknesses?- Given what you know about the data so far, why did you choose this model to apply?- Fit this model to the training data, try to predict labels (for both training and test sets), and measure the F1 score. Repeat this process with different training set sizes (100, 200, 300), keeping test set constant.Produce a table showing training time, prediction time, F1 score on training set and F1 score on test set, for each training set size.Note: You need to produce 3 such tables - one for each model. Train a model
###Code
import time
def train_classifier(clf, X_train, y_train, verbose=True):
if verbose:
print "Training {}...".format(clf.__class__.__name__)
times = []
for repetition in range(REPETITIONS):
start = time.time()
clf.fit(X_train, y_train)
times.append(time.time() - start)
if verbose:
print "Done!\nTraining time (secs): {:.3f}".format(min(times))
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
classifiers = [LogisticRegression(),
RandomForestClassifier(),
KNeighborsClassifier()]
for clf in classifiers:
# Fit model to training data
train_classifier(clf, X_train, y_train) # note: using entire training set here
# Predict on training set and compute F1 score
from sklearn.metrics import f1_score
def predict_labels(clf, features, target, verbose=True):
if verbose:
print "Predicting labels using {}...".format(clf.__class__.__name__)
times = []
scores = []
for repetition in range(REPETITIONS):
start = time.time()
y_pred = clf.predict(features)
times.append(time.time() - start)
scores.append(f1_score(target.values, y_pred, pos_label=1))
if verbose:
print "Done!\nPrediction time (secs): {:.3f}".format(min(times))
return np.median(scores)
# Predict on test data
for classifier in classifiers:
print "F1 score for test set: {}".format(predict_labels(classifier,
X_test, y_test))
class ClassifierData(object):
"""A Container for classifire performance data"""
def __init__(self, classifier, f1_test_score, f1_train_score):
"""
:param:
- `classifier`: classifier object (e.g. LogisticRegression())
- `f1_test_score`: score for the classifier on the test set
- `f1_train_score`: score for the classifier on the training set
"""
self.classifier = classifier
self.f1_test_score = f1_test_score
self.f1_train_score = f1_train_score
return
from collections import defaultdict
# Train and predict using different training set sizes
def train_predict(clf, X_train, y_train, X_test, y_test, verbose=True):
if verbose:
print "------------------------------------------"
print "Training set size: {}".format(len(X_train))
train_classifier(clf, X_train, y_train, verbose)
f1_train_score = predict_labels(clf, X_train, y_train, verbose)
f1_test_score = predict_labels(clf, X_test, y_test, verbose)
if verbose:
print "F1 score for training set: {}".format(f1_train_score)
print "F1 score for test set: {}".format(f1_test_score)
return ClassifierData(clf, f1_test_score, f1_train_score)
# TODO: Run the helper function above for desired subsets of training data
# Note: Keep the test set constant
def train_by_size(sizes = [100, 200, 300], verbose=True):
classifier_containers = {}
for classifier in classifiers:
name = classifier.__class__.__name__
if verbose:
print(name)
print("=" * len(name))
classifier_containers[name] = defaultdict(lambda: {})
for size in sizes:
x_train_sub, y_train_sub = X_train[:size], y_train[:size]
assert len(x_train_sub) == size
assert len(y_train_sub) == size
classifier_data = train_predict(classifier, x_train_sub, y_train_sub, X_test, y_test, verbose)
classifier_containers[name][size] = classifier_data
if verbose:
print('')
return classifier_containers
_ = train_by_size()
if RUN_PLOTS:
# this takes a long time, don't run if not needed
sizes = range(10, 310, 10)
classifier_containers = train_by_size(sizes=sizes,
verbose=False)
color_map = {'LogisticRegression': 'b',
'KNeighborsClassifier': 'r',
'RandomForestClassifier': 'm'}
def plot_scores(containers, which_f1='test', color_map=color_map):
"""
Plot the f1 scores for the models
:param:
- `containers`: dict of <name><size> : classifier data
- `which_f1`: 'test' or 'train'
- `color_map`: dict of <model name> : <color string>
"""
sizes = sorted(containers['LogisticRegression'].keys())
figure = plot.figure()
axe = figure.gca()
for model in containers:
scores = [getattr(containers[model][size], 'f1_{0}_score'.format(which_f1)) for size in sizes]
axe.plot(sizes, scores, label=model, color=color_map[model])
axe.legend(loc='lower right')
axe.set_title("{0} Set F1 Scores by Training-Set Size".format(which_f1.capitalize()))
axe.set_xlabel('Training Set Size')
axe.set_ylabel('F1 Score')
axe.set_ylim([0, 1.0])
if RUN_PLOTS:
for f1 in 'train test'.split():
plot_scores(classifier_containers, f1)
def plot_test_train(containers, model_name, color_map=color_map):
"""
Plot testing and training plots for each model
:param:
- `containers`: dict of <model name><size>: classifier data
- `model_name`: class name of the model
- `color_map`: dict of <model name> : color string
"""
sizes = sorted(containers['LogisticRegression'].keys())
figure = plot.figure()
axe = figure.gca()
    test_scores = [containers[model_name][size].f1_test_score for size in sizes]
    train_scores = [containers[model_name][size].f1_train_score for size in sizes]
    axe.plot(sizes, test_scores, label="Test", color=color_map[model_name])
    axe.plot(sizes, train_scores, '--', label="Train", color=color_map[model_name])
    axe.legend(loc='lower right')
    axe.set_title("{0} F1 Scores by Training-Set Size".format(model_name))
axe.set_xlabel('Training Set Size')
axe.set_ylabel('F1 Score')
axe.set_ylim([0, 1.0])
return
if RUN_PLOTS:
for model in color_map.keys():
plot_test_train(classifier_containers, model)
###Output
_____no_output_____
###Markdown
5. Choosing the Best Model- Based on the experiments you performed earlier, in 1-2 paragraphs explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?- In 1-2 paragraphs explain to the board of supervisors in layman's terms how the final model chosen is supposed to work (for example if you chose a Decision Tree or Support Vector Machine, how does it make a prediction).- Fine-tune the model. Use Gridsearch with at least one important parameter tuned and with at least 3 settings. Use the entire training set for this.- What is the model's final F1 score? Based on the previous experiments I chose *Logistic Regression* as the classifier to use. Given the data available, all three models have comparable F1 scores (on the test data) but the Logistic Regression classifier is the fastest for both training and prediction when compared to *K-Nearest Neighbor* and *Random Forests*. In addition, the Logistic Regression classifier offers readily interpretable coefficients and L1 regression to sparsify the data, allowing us to see the most important of the variables when deciding who will pass their final exam.Logistic Regression works by estimating the probability that a student's attributes - such as their age, how often they go out, etc. - predicts that they will pass. It does this using the *logistic function* which creates an S-shaped curve which goes to 0 at negative infinity and 1 at positive infinity:
###Code
%%latex
P(passed=yes \mid x) = \frac{1}{1+e^{-\text{weights}^T \cdot \text{attributes}}}\\
###Output
_____no_output_____
###Markdown
Here *attributes* is the vector of student attributes and *weights* is the vector of weights that the Logistic Regression algorithm finds. To see what this function looks like, I'll plot its output with a weight of one and a single attribute whose values are centered around 0; since this is a fictional attribute created only for plotting, I'll call it *x*.
###Code
x = np.linspace(-6, 7, 100)
y = 1/(1 + np.exp(-x))
figure = plot.figure()
axe = figure.gca()
axe.plot(x, y)
title = axe.set_title("Sigmoid Function")
axe.set_ylabel(r"P(passed=yes|x)")
label = axe.set_xlabel("x")
###Output
_____no_output_____
###Markdown
To clarify the previous equation, if we only had two attributes, *age* and *school* to predict if a student passed, then it could be written as:
###Code
%%latex
\textit{probability student passed given age and school} = \frac{1}{1+e^{-(\text{intercept} + w_1 \times \text{age} + w_2 \times \text{school})}}\\
###Output
_____no_output_____
###Markdown
The goal of the Logistic Regression algorithm is to find the weights that most accurately predict whether a given student passes or not. In other words, it seeks the weight values for which the logistic function produces a probability greater than $\frac{1}{2}$ if the student passed and a probability less than $\frac{1}{2}$ if the student did not pass. Set up the parameters
###Code
from sklearn.metrics import f1_score, make_scorer
scorer = make_scorer(f1_score)
passing_ratio = (sum(y_test) +
sum(y_train))/float(len(y_test) +
len(y_train))
assert abs(passing_ratio - .67) < .01
model = LogisticRegression()
# python standard library
import warnings
# third party
import numpy as np
from sklearn.grid_search import GridSearchCV
parameters = {'penalty': ['l1', 'l2'],
'C': np.arange(.01, 1., .01),
'class_weight': [None, 'balanced', {1: passing_ratio, 0: 1 - passing_ratio}]}
###Output
_____no_output_____
###Markdown
Grid search
###Code
grid = GridSearchCV(model, param_grid=parameters, scoring=scorer, cv=10, n_jobs=-1)
with warnings.catch_warnings():
warnings.simplefilter('ignore')
grid.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
The best parameters
###Code
grid.best_params_
###Output
_____no_output_____
###Markdown
The Coefficients
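For interpreting the table printed below: in logistic regression, a coefficient $w_i$ means that a one-unit increase in feature $i$ multiplies the odds of passing by $e^{w_i}$, which is why the helper prints both the raw coefficient and its exponential (an odds factor above 1 pushes toward passing, below 1 toward not passing).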
###Code
def print_coefficients(grid, column_names=None):
    """Print the non-zero coefficients and their odds ratios, sorted by coefficient."""
    if column_names is None:
        column_names = X_train.columns
coefficients = grid.best_estimator_.coef_[0]
odds = np.exp(coefficients)
    sorted_coefficients = sorted(coefficients, reverse=True)
non_zero_coefficients = [coefficient for coefficient in sorted_coefficients
if coefficient != 0]
non_zero_indices = [np.where(coefficients==coefficient)[0][0] for coefficient in non_zero_coefficients]
non_zero_variables = [column_names[index] for index in non_zero_indices]
non_zero_odds = [odds[index] for index in non_zero_indices]
for column, coefficient, odds_ in zip(non_zero_variables, non_zero_coefficients, non_zero_odds):
print('{0: <10}{1: >5.2f}\t{2: >8.2f}'.format(column, coefficient, odds_))
return non_zero_variables
non_zero_variables = print_coefficients(grid)
feature_map = {"sex_M": "male student",
"age": "student's age",
"Medu": "mother's education",
"traveltime": "home to school travel time",
"studytime": "weekly study time",
"failures": "number of past class failures",
"schoolsup": "extra educational support",
"famsup": "family educational support",
"paid": "extra paid classes within the course subject (Math or Portuguese)",
"activities": "extra-curricular activities",
"nursery": "attended nursery school",
"higher": "wants to take higher education",
"internet": "Internet access at home",
"romantic": "within a romantic relationship",
"famrel": "quality of family relationships",
"freetime": "free time after school",
"goout": "going out with friends",
"Dalc": "workday alcohol consumption",
"Walc": "weekend alcohol consumption",
"health": "current health status",
"absences": "number of school absences",
"passed": "did the student pass the final exam"}
###Output
_____no_output_____
###Markdown
The plots were originally created separately for the write-up but I'm putting the code here too to show how they were made.
###Code
data_all = X_all.copy()
data_all['passed'] = y_all.values
def plot_counts(x_name, hue='passed'):
"""
plot counts for a given variable
:param:
- `x_name`: variable name in student data
    - `hue`: correlating variable
"""
title = "{0} vs Passing".format(feature_map[x_name].title())
figure = plot.figure()
axe = figure.gca()
axe.set_title(title)
lines = seaborn.countplot(x=x_name, hue=hue, data=data_all)
count_plot_variables = [name for name in non_zero_variables
if name not in ('age', 'absences')]
for variable in count_plot_variables:
plot_counts(variable)
plot_counts('passed', 'age')
axe = seaborn.kdeplot(student_data[student_data.passed=='yes'].absences, label='passed')
axe.set_title('Distribution of Absences')
axe.set_xlim([0, 80])
axe = seaborn.kdeplot(student_data[student_data.passed=='no'].absences, ax=axe, label="didn't pass")
###Output
_____no_output_____
###Markdown
Final F1 Score
###Code
with warnings.catch_warnings():
warnings.simplefilter('ignore')
print("{0:.2f}".format(grid.score(X_test, y_test)))
###Output
0.79
###Markdown
Re-do with only significant columns.
###Code
X_train_trimmed = X_train[non_zero_variables]
X_test_trimmed = X_test[non_zero_variables]
grid_2 = GridSearchCV(model, param_grid=parameters, scoring=scorer, cv=10, n_jobs=-1)
with warnings.catch_warnings():
warnings.simplefilter('ignore')
    grid_2.fit(X_train_trimmed, y_train)
grid_2.best_params_
print_coefficients(grid_2, non_zero_variables)
with warnings.catch_warnings():
warnings.simplefilter('ignore')
print("{0:.2f}".format(grid_2.score(X_test, y_test)))
###Output
0.79
|
005 - Softmax.ipynb | ###Markdown
TensorFlow softmax
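For reference, softmax exponentiates each logit and normalizes by the sum: $\sigma(z)_i = \dfrac{e^{z_i}}{\sum_j e^{z_j}}$. For the logits $[2.0, 1.0, 0.1]$ used below this works out to roughly $[0.659, 0.242, 0.099]$, which matches the printed output.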
###Code
import tensorflow as tf
logit_data = [2.0, 1.0, 0.1]
logits = tf.placeholder(tf.float32)
softmax = tf.nn.softmax(logits)
with tf.Session() as sess:
output = sess.run(softmax, feed_dict={logits : logit_data })
print(output)
###Output
[ 0.65900117 0.24243298 0.09856589]
|
Fig 5b Data - Trials vs Coeff Error - Sys3.ipynb | ###Markdown
Number of Trials vs Coefficient Estimate Error. Tests the effect of Gaussian white noise on the estimated coefficients.
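For context, a minimal sketch of what "1% Gaussian white noise" typically means; this is an illustration only, and `NoiseMaker`'s exact convention may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.sin(np.linspace(0, 2 * np.pi, 200))    # stand-in signal
noise_magnitude = 0.01                        # "1% noise"
# one common convention: noise std = noise_magnitude * std of the clean signal
u_noisy = u + noise_magnitude * np.std(u) * rng.standard_normal(u.shape)
```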
###Code
%load_ext autoreload
%autoreload 2
# Import Python packages
import pickle
# Package Imports
from sindy_bvp import SINDyBVP
from sindy_bvp.differentiators import PolyInterp
from sindy_bvp.library_builders import NoiseMaker
# Set file to load and stem for saving
load_stem = "./data/S3-P2-"
save_stem = "./data/Fig5b-S3-"
%%time
# Set a range of trial counts to test
num_trials = [10, 20, 30, 40, 50,
60, 70, 80, 90, 100,
110, 120, 130, 140, 150,
160, 170, 180, 190, 200]
# Since the data is noisy, we'll use a Polynomial Interpolation derivative method
poly = PolyInterp(diff_order=2, width=20, degree=5)
# Initialize NoiseMaker, which adds 1% noise then filters noisy signal
nm = NoiseMaker(noise_magnitude=0.01)
# Create an empty list to collect the results from each trial
results_list = []
print("Completed:", end=" ")
for trial_count in num_trials:
# Initialize SINDyBVP object
sbvp = SINDyBVP(file_stem = load_stem,
num_trials = trial_count,
differentiator = poly,
outcome_var = 'd^{2}u/dx^{2}',
noisemaker = nm,
known_vars = ['u', 'u^{2}', 'du/dx', 'f'],
dep_var_name = 'u',
ind_var_name = 'x')
# Execute the optimization
coeffs, plotter = sbvp.sindy_bvp()
# Compute the S-L coeffs with Plotter analysis tool
plotter.compute_sl_coeffs()
# gather the learned coefficients and relevant metrics
# And place into the results_list
results_list.append({'num_trials': trial_count,
'loss': min(sbvp.groupreg.Losses),
'p': plotter.inferred_phi,
'q': plotter.inferred_q,
'coeffs': coeffs})
print(trial_count, end=" | ")
## Pickle the results
pickle.dump(results_list, open(save_stem+"results.pickle", "wb"))
###Output
_____no_output_____ |
churn-model-training-hosting.ipynb | ###Markdown
Train, host, and optimize 50+ XGBoost models in a multi-model endpoint for millisecond latency. This example demonstrates hosting 51 State-wise ML models in a SageMaker Multi-Model Endpoint to predict customer churn based on account usage. The models are trained using a synthetic telecommunication customer churn dataset and SageMaker's built-in XGBoost algorithm. We will host this multi-model endpoint on two instance types, `ml.c5.xlarge` and `ml.c5.2xlarge`, and compare their performance with a load test in order to find an optimal hosting architecture. We will analyze the load-testing results in Amazon CloudWatch. Instead of hosting 51 models in 51 endpoints as illustrated below, we can host 51 models in one endpoint and load models dynamically from S3. An Amazon CloudWatch dashboard is used to show endpoint performance. This notebook is developed in SageMaker Studio using the `Python 3 (Data Science)` kernel with an ml.t3.medium instance. First we install the `sagemaker-experiments` library to manage the training jobs.
###Code
!pip install -q sagemaker-experiments
###Output
_____no_output_____
###Markdown
Import the libraries and set up the SageMaker resources.
###Code
import sagemaker
import os, sys
import json
import boto3
import numpy as np
import pandas as pd
role = sagemaker.get_execution_role()
sess = sagemaker.Session()
region = sess.boto_region_name
bucket = sess.default_bucket()
prefix = 'sagemaker/reinvent21-aim408/churn-mme'
###Output
_____no_output_____
###Markdown
The dataset is a customer churn dataset from a synthetic telecommunication use case. We download the data from the public SageMaker sample-files S3 bucket.
###Code
sagemaker.s3.S3Downloader.download('s3://sagemaker-sample-files/datasets/tabular/synthetic/churn.txt', './')
df=pd.read_csv('churn.txt')
df['CustomerID']=df.index
df.head()
###Output
_____no_output_____
###Markdown
We perform minimal data preprocessing: 1. converting the binary columns from strings to integers (0 and 1); 2. setting CustomerID as the dataframe index and moving the target column to the first column for XGBoost training.
###Code
binary_columns=["Int'l Plan", "VMail Plan"]
df[binary_columns] = df[binary_columns].replace(to_replace=['yes', 'no'],
value=[1, 0])
df['Churn?'] = df['Churn?'].replace(to_replace=['True.', 'False.'],
value=[1, 0])
columns=['Churn?', 'State', 'Account Length', "Int'l Plan",
'VMail Plan', 'VMail Message', 'Day Mins', 'Day Calls', 'Day Charge',
'Eve Mins', 'Eve Calls', 'Eve Charge', 'Night Mins', 'Night Calls',
'Night Charge', 'Intl Mins', 'Intl Calls', 'Intl Charge',
'CustServ Calls']
df.index = df['CustomerID']
df_processed = df[columns]
###Output
_____no_output_____
###Markdown
The processed data shown below.
###Code
df_processed.head()
###Output
_____no_output_____
###Markdown
We hold out 10% of the data as a test set, stratified by `State`. The remaining data will be further split into train and validation sets later, right before training.
###Code
from sklearn.model_selection import train_test_split
df_train, df_test = train_test_split(df_processed, test_size=0.1, random_state=42,
shuffle=True, stratify=df_processed['State'])
###Output
_____no_output_____
###Markdown
Save the test data to the S3 bucket. Two versions of the test data are saved: one with the complete data, and one without the target and index, for inference purposes.
###Code
columns_no_target=['Account Length', "Int'l Plan", 'VMail Plan', 'VMail Message',
'Day Mins', 'Day Calls', 'Day Charge', 'Eve Mins', 'Eve Calls',
'Eve Charge', 'Night Mins', 'Night Calls', 'Night Charge',
'Intl Mins', 'Intl Calls', 'Intl Charge', 'CustServ Calls']
df_test.to_csv('churn_test.csv')
df_test[columns_no_target].to_csv('churn_test_no_target.csv',
index=False)
sagemaker.s3.S3Uploader.upload('churn_test.csv',
f's3://{bucket}/{prefix}/churn_data')
sagemaker.s3.S3Uploader.upload('churn_test_no_target.csv',
f's3://{bucket}/{prefix}/churn_data')
###Output
_____no_output_____
###Markdown
We set up an experiment in SageMaker to hold all the training job information.
###Code
from sagemaker.amazon.amazon_estimator import image_uris
from smexperiments.experiment import Experiment
from smexperiments.trial import Trial
from botocore.exceptions import ClientError
import time
from time import gmtime, strftime
dict_estimator = {}
experiment_name = 'churn-prediction'
try:
experiment = Experiment.create(
experiment_name=experiment_name,
description='Training churn prediction models based on telco churn dataset.')
except ClientError as e:
experiment = Experiment.load(experiment_name)
print(f'{experiment_name} experiment already exists! Reusing the existing experiment.')
###Output
_____no_output_____
###Markdown
For convenience, we create a function `launch_training_job()` so that later we can reuse it in a loop through the States. The training algorithm used here is SageMaker's built-in XGBoost algorithm, with the binary logistic objective and 20 training rounds as the only hyperparameters we specify.
###Code
image = image_uris.retrieve(region=region, framework='xgboost', version='1.3-1')
train_instance_type = 'ml.m5.xlarge'
train_instance_count = 1
s3_output = f's3://{bucket}/{prefix}/churn_data/training'
def launch_training_job(state, train_data_s3, val_data_s3):
exp_datetime = strftime('%Y-%m-%d-%H-%M-%S', gmtime())
jobname = f'churn-xgb-{state}-{exp_datetime}'
# Creating a new trial for the experiment
exp_trial = Trial.create(experiment_name=experiment_name,
trial_name=jobname)
experiment_config={'ExperimentName': experiment_name,
'TrialName': exp_trial.trial_name,
'TrialComponentDisplayName': 'Training'}
xgb = sagemaker.estimator.Estimator(image,
role,
instance_count=train_instance_count,
instance_type=train_instance_type,
output_path=s3_output,
enable_sagemaker_metrics=True,
sagemaker_session=sess)
xgb.set_hyperparameters(objective='binary:logistic',
num_round=20)
train_input = sagemaker.inputs.TrainingInput(s3_data=train_data_s3,
content_type='csv')
val_input = sagemaker.inputs.TrainingInput(s3_data=val_data_s3,
content_type='csv')
data_channels={'train': train_input, 'validation': val_input}
xgb.fit(inputs=data_channels,
job_name=jobname,
experiment_config=experiment_config,
wait=False)
return xgb
###Output
_____no_output_____
###Markdown
We isolate the data points by `State`, create train and validation sets for each `State`, and train models by `State` using `launch_training_job()`. Again we hold out 10% as a validation set within each `State`. We save the estimators in a dictionary `dict_estimator`. Execute the next four cells to launch the training jobs if this is the first time running the demo. There will be 51 training jobs submitted. We implemented a function `wait_for_training_quota()` to check the current job count and limit the total number of running training jobs in this experiment to `job_limit`. If the job count is at the limit, the function waits the number of seconds specified in the `wait` argument and then checks the job count again. This accounts for account-level SageMaker quotas that could otherwise cause errors in the for loop. The default service quota for *Number of instances across training jobs* and *number of ml.m5.xlarge instances* is 4, as documented in the [Service Quota page](https://docs.aws.amazon.com/general/latest/gr/sagemaker.html#limits_sagemaker). If your account has a higher limit, you may change `job_limit` to a higher number to allow more simultaneous training jobs (and therefore faster completion). You can also [request a quota increase](https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html). If you have already run the training jobs from this notebook and have completed trials in SageMaker Experiments, you can proceed to [loading the existing estimators](loading-estimators).
###Code
def wait_for_training_quota(dict_estimator, job_limit = 4, wait = 30):
def query_jobs(dict_estimator):
counter=0
for key, estimator in dict_estimator.items():
status = estimator.latest_training_job.describe()["TrainingJobStatus"]
time.sleep(2)
if status == "InProgress":
counter+=1
return counter
job_count = query_jobs(dict_estimator)
if job_count < job_limit:
print(f'Current total running jobs {job_count} is below {job_limit}. Proceeding...')
return
while job_count >= job_limit:
print(f'Current total running jobs {job_count} is reaching the limit {job_limit}. Waiting {wait} seconds...')
time.sleep(wait)
job_count = query_jobs(dict_estimator)
print(f'Current total running jobs {job_count} is below {job_limit}. Proceeding...')
os.makedirs('churn_data_by_state', exist_ok=True)
for state in df_processed.State.unique():
print(state)
output_dir = f's3://{bucket}/{prefix}/churn_data/by_state'
out_train_csv_s3 = f's3://{bucket}/{prefix}/churn_data/by_state/churn_{state}_train.csv'
out_val_csv_s3 = f's3://{bucket}/{prefix}/churn_data/by_state/churn_{state}_val.csv'
# create train/val split for each State
df_state = df_train[df_train['State']==state].drop(labels='State', axis=1)
df_state_train, df_state_val = train_test_split(df_state,
test_size=0.1,
random_state=42,
shuffle=True,
stratify=df_state['Churn?'])
df_state_train.to_csv(f'churn_data_by_state/churn_{state}_train.csv', index=False)
df_state_val.to_csv(f'churn_data_by_state/churn_{state}_val.csv', index=False)
sagemaker.s3.S3Uploader.upload(f'churn_data_by_state/churn_{state}_train.csv', output_dir)
sagemaker.s3.S3Uploader.upload(f'churn_data_by_state/churn_{state}_val.csv', output_dir)
wait_for_training_quota(dict_estimator, job_limit=4, wait=30)
dict_estimator[state] = launch_training_job(state, out_train_csv_s3, out_val_csv_s3)
time.sleep(2)
###Output
_____no_output_____
###Markdown
Wait for all jobs to complete.
###Code
def wait_for_training_job_to_complete(estimator):
job = estimator.latest_training_job.job_name
print(f"Waiting for job: {job}")
status = estimator.latest_training_job.describe()["TrainingJobStatus"]
while status == "InProgress":
time.sleep(45)
status = estimator.latest_training_job.describe()["TrainingJobStatus"]
if status == "InProgress":
print(f"{job} job status: {status}")
print(f"DONE. Status for {job} is {status}\n")
for est in list(dict_estimator.values()):
wait_for_training_job_to_complete(est)
###Output
_____no_output_____
###Markdown
The code snippet below retrieves the estimators from the experiment trials. It is useful when you have already trained the models but somehow lost the dictionary `dict_estimator` and want to resume the work.

```python
dict_estimator = {}
experiment = Experiment.load(experiment_name)
for i, j in enumerate(experiment.list_trials()):
    print(i, j.trial_name)
    jobname = j.trial_name
    state = jobname.split('-')[2]
    print(state)
    try:
        dict_estimator[state] = sagemaker.estimator.Estimator.attach(jobname)
    except:
        pass
```
###Code
## Uncomment this part to load the estimators if you already have trained them.
# dict_estimator={}
# experiment = Experiment.load(experiment_name)
# for i, j in enumerate(experiment.list_trials()):
# print(i, j.trial_name)
# jobname=j.trial_name
# state=jobname.split('-')[2]
# print(state)
# try:
# dict_estimator[state]=sagemaker.estimator.Estimator.attach(jobname)
# except:
# pass
###Output
_____no_output_____
###Markdown
Once the training jobs are completed, we can start hosting our multi-model endpoint. We host our State-wise multi-model endpoint on two different instance types, `ml.c5.xlarge` and `ml.c5.2xlarge`, and we will conduct load testing to profile the performance.
###Code
print(len(dict_estimator))
print(dict_estimator.keys())
###Output
_____no_output_____
###Markdown
Here we designate an S3 location to hold all the model artifacts we would like to host. At any time (before or after the endpoint is created), we can dynamically add models to the designated model artifact folder, making the multi-model endpoint a flexible tool for serving models at scale.
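For reference, once the endpoint is in service, a specific model is selected per request with the `TargetModel` argument; SageMaker loads that artifact from the prefix on first use and caches it on the instance. This is only a sketch (the payload below is a placeholder, e.g. one comma-separated row from `churn_test_no_target.csv`); the actual load testing is run from a separate client later on.

```python
import boto3

runtime = boto3.client('sagemaker-runtime')

csv_payload = '...'  # placeholder: one feature row without the target column

response = runtime.invoke_endpoint(
    EndpointName=endpoint_name,          # the endpoint created later in this notebook
    ContentType='text/csv',
    TargetModel='churn-xgb-WA.tar.gz',   # any artifact name under model_data_prefix
    Body=csv_payload)
print(response['Body'].read())
```

First, though, we copy each trained model artifact into the designated S3 prefix: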
###Code
model_data_prefix = f's3://{bucket}/{prefix}/churn_data/multi_model_artifacts/'
for state, est in dict_estimator.items():
artifact_path = est.model_data
state_model_name = f'churn-xgb-{state}.tar.gz'
print(f'Copying {state_model_name} to multi_model_artifacts folder')
# This is copying over the model artifact to the S3 location for the MME.
!aws s3 --quiet cp {artifact_path} {model_data_prefix}{state_model_name}
###Output
_____no_output_____
###Markdown
Endpoint creation is a three-step process with the API: `create_model()` ==> `create_endpoint_config()` ==> `create_endpoint()`. Create our first endpoint with an `ml.c5.xlarge` instance, which has 4 vCPUs and 8 GB of RAM.
###Code
exp_datetime = strftime('%Y-%m-%d-%H-%M-%S', gmtime())
model_name = f'churn-xgb-mme-{exp_datetime}'
hosting_instance_type = 'ml.c5.xlarge'
hosting_instance_count = 1
endpoint_name = f'{model_name}-c5-xl'
# image = image_uris.retrieve(region=region, framework='xgboost', version='1.3-1')
container = {'Image': image,
'ModelDataUrl': model_data_prefix,
'Mode': 'MultiModel'}
response1 = sess.sagemaker_client.create_model(ModelName = model_name,
ExecutionRoleArn = role,
Containers = [container])
response2 = sess.sagemaker_client.create_endpoint_config(
EndpointConfigName = endpoint_name,
ProductionVariants = [{'InstanceType': hosting_instance_type,
'InitialInstanceCount': hosting_instance_count,
'InitialVariantWeight': 1,
'ModelName': model_name,
'VariantName': 'AllTraffic'}])
response3 = sess.sagemaker_client.create_endpoint(EndpointName = endpoint_name,
EndpointConfigName = endpoint_name)
print(endpoint_name)
###Output
_____no_output_____
###Markdown
We create another endpoint with `ml.c5.2xlarge` which has 8 vCPU and 16 GB RAM.
###Code
hosting_instance_type = 'ml.c5.2xlarge'
hosting_instance_count = 1
endpoint_name_2 = f'{model_name}-c5-2xl'
response4 = sess.sagemaker_client.create_endpoint_config(
EndpointConfigName = endpoint_name_2,
ProductionVariants = [{'InstanceType': hosting_instance_type,
'InitialInstanceCount': hosting_instance_count,
'InitialVariantWeight': 1,
'ModelName': model_name, # re-using the model
'VariantName': 'AllTraffic'}])
response5 = sess.sagemaker_client.create_endpoint(EndpointName = endpoint_name_2,
EndpointConfigName = endpoint_name_2)
print(endpoint_name_2)
waiter = sess.sagemaker_client.get_waiter('endpoint_in_service')
print(f'Waiting for endpoint {endpoint_name} to create...')
waiter.wait(EndpointName=endpoint_name)
print(f'Waiting for endpoint {endpoint_name_2} to create...')
waiter.wait(EndpointName=endpoint_name_2)
###Output
_____no_output_____
###Markdown
Let's move our load testing to [AWS Cloud9](https://console.aws.amazon.com/cloud9/home?region=us-east-1). You could also use your local computer to run the load testing. (Optional) Enable autoscaling. We have verified the baseline single-instance performance, so let's apply an autoscaling policy to allow scaling in/out between 2 and 5 instances under variable traffic to ensure performance. Here we use the predefined metric `SageMakerVariantInvocationsPerInstance` with a `TargetValue` of 4,000 to balance the load at about 4,000 invocations per minute per instance.
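As a rough worked example of the target-tracking math (using the documented semantics of `SageMakerVariantInvocationsPerInstance`, i.e. the average number of invocations per minute per instance): if the variant receives about 10,000 invocations per minute in total, the policy steers toward $10000 / 4000 = 2.5$, i.e. 3 instances, clamped to the 2-5 range registered below.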
###Code
# Common class representing Application Auto Scaling for SageMaker amongst other services
client = boto3.client('application-autoscaling')
# This is the format in which application autoscaling references the endpoint
resource_id=f'endpoint/{endpoint_name_2}/variant/AllTraffic'
response = client.register_scalable_target(
ServiceNamespace='sagemaker',
ResourceId=resource_id,
ScalableDimension='sagemaker:variant:DesiredInstanceCount',
MinCapacity=2,
MaxCapacity=5
)
response = client.put_scaling_policy(
PolicyName='Invocations-ScalingPolicy',
ServiceNamespace='sagemaker', # The namespace of the AWS service that provides the resource.
ResourceId=resource_id, # Endpoint name
ScalableDimension='sagemaker:variant:DesiredInstanceCount', # SageMaker supports only Instance Count
PolicyType='TargetTrackingScaling', # 'StepScaling'|'TargetTrackingScaling'
TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 4000, # The target value for the metric: SageMakerVariantInvocationsPerInstance.
'PredefinedMetricSpecification': {
'PredefinedMetricType': 'SageMakerVariantInvocationsPerInstance',
},
'ScaleInCooldown': 600, # The cooldown period helps you prevent your Auto Scaling group from launching or terminating
# additional instances before the effects of previous activities are visible.
# You can configure the length of time based on your instance startup time or other application needs.
# ScaleInCooldown - The amount of time, in seconds, after a scale in activity completes before another scale in activity can start.
'ScaleOutCooldown': 300,# ScaleOutCooldown - The amount of time, in seconds, after a scale out activity completes before another scale out activity can start.
'DisableScaleIn': False,# Indicates whether scale in by the target tracking policy is disabled.
# If the value is true , scale in is disabled and the target tracking policy won't
# remove capacity from the scalable resource.
}
)
###Output
_____no_output_____
###Markdown
After you are done with the load testing, uncomment and run the next cell to delete the endpoints and stop incurring costs.
###Code
# sess.sagemaker_client.delete_endpoint(EndpointName=endpoint_name)
# sess.sagemaker_client.delete_endpoint(EndpointName=endpoint_name_2)
###Output
_____no_output_____ |
lec1_MLP/lec1_mnist.ipynb | ###Markdown
###Code
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense , Activation
from tensorflow.keras.optimizers import RMSprop
%tensorflow_version 2.x
import tensorflow as tf
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
# download dataset
(train_images, train_labels), (test_images,test_labels) = mnist.load_data()
# Data preparation
train_images = train_images.reshape(train_images.shape[0],train_images.shape[1]**2)
train_images = train_images/255.0
test_images = test_images.reshape(test_images.shape[0],test_images.shape[1]**2)
test_images = test_images/255.0
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# Model Settings
print(train_images.shape[1])
model = Sequential(Dense(512,input_shape = (train_images.shape[1],)))
model.add(Activation('relu'))
model.add(Dense(10))
model.add(Activation('softmax'))
# Optimizer Settings
rms = RMSprop(lr=0.1)
model.compile(loss = 'categorical_crossentropy', optimizer=rms,metrics=['accuracy'] )
def train(model):
with tf.device('/device:GPU:0'):
history = model.fit(train_images,train_labels,
verbose = 1,
batch_size = 128,
epochs = 5
)
return history
train(model)
import matplotlib.pyplot as plt
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
plt.imshow(test_images [0])
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
img = test_images [0].reshape ((1, 28*28))
print (model.predict(img))
###Output
_____no_output_____ |
experiments/tl_3v2/A/cores-oracle.run1.framed/trials/9/trial.ipynb | ###Markdown
Transfer Learning Template
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Allowed Parameters. These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
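For context, a minimal sketch (not part of this experiment's driver code) of how papermill injects such parameters when executing this template; the paths and values here are placeholders, not the real trial settings:

```python
import papermill as pm

# Every key supplied here ends up in the injected "# Parameters" cell further below.
pm.execute_notebook(
    'trial.ipynb',       # this template
    'trial_out.ipynb',   # executed copy with outputs
    parameters={'experiment_name': 'example', 'lr': 1e-4, 'seed': 1337},
)
```

The next cell simply lists which parameter names must be supplied.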
###Code
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_3Av2:cores -> oracle.run1.framed",
"device": "cuda",
"lr": 0.0001,
"x_shape": [2, 200],
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 200]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 16000, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_power", "take_200"],
"episode_transforms": [],
"domain_prefix": "C_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "take_200", "resample_20Msps_to_25Msps"],
"episode_transforms": [],
"domain_prefix": "O_",
},
],
"seed": 7,
"dataset_seed": 7,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we dont supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
    # NOTE: this lambda overwrites the episode_transform set just above; it prefixes each
    # episode's domain with domain_prefix so source and target domains stay distinguishable.
    episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easyfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
altair/preprocess00/uncompressed_py3.ipynb | ###Markdown
---
###Code
notutf8_rdd_success = j_rdd.filter(is_not_utf8)
notutf8_rdd_error = j_rdd.filter(is_utf8)
notutf8_rdd_success.saveAsTextFile("hdfs://namenode/datasets/github/uncompressed/01_notutf8/success")
notutf8_rdd_error.saveAsTextFile("hdfs://namenode/datasets/github/uncompressed/01_notutf8/error")
###Output
_____no_output_____
###Markdown
---
###Code
syntax_rdd_success = notutf8_rdd_success.filter(is_valid_syntax)
syntax_rdd_error = notutf8_rdd_success.filter(is_invalid_syntax)
syntax_rdd_success.saveAsTextFile("hdfs://namenode/datasets/github/uncompressed/02_syntax/success")
syntax_rdd_error.saveAsTextFile("hdfs://namenode/datasets/github/uncompressed/02_syntax/error")
syntax_rdd_success.count() # should be 4023537
j_rdd.count() # should be 5267543
###Output
_____no_output_____
###Markdown
---
###Code
from lib2to3.refactor import RefactoringTool, get_fixers_from_package
def convert_python3(x):
try:
fixers = get_fixers_from_package('lib2to3.fixes')
refactoring_tool = RefactoringTool(fixer_names=fixers)
node3 = refactoring_tool.refactor_string(x["content"], 'script')
py3_str = str(node3)
x["content"] = py3_str
return (True, x)
except:
return (False, x)
def is_success(x):
return x[0] # Key is True if success
py3_rdd = syntax_rdd_success.map(convert_python3)
py3_rdd_success = py3_rdd.filter(is_success)
py3_rdd_success = py3_rdd_success.map(lambda x: x[1])
py3_rdd_success.map(dump_json).saveAsTextFile("hdfs://namenode/datasets/github/uncompressed/03_py3/success")
py3_rdd_error = py3_rdd.map(to_json_string).subtract(py3_rdd_success.map(to_json_string)).map(convert_json)
py3_rdd_error.map(dump_json).saveAsTextFile("hdfs://namenode/datasets/github/uncompressed/03_py3/errors")
###Output
_____no_output_____ |
Bronze/classical-systems/CS16_Probabilistic_States.ipynb | ###Markdown
$ \newcommand{\bra}[1]{\langle 1|} $$ \newcommand{\ket}[1]{|1\rangle} $$ \newcommand{\braket}[2]{\langle 1|2\rangle} $$ \newcommand{\dot}[2]{ 1 \cdot 2} $$ \newcommand{\biginner}[2]{\left\langle 1,2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{1} 2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{1}} $$ \newcommand{\mypar}[1]{\left( 1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( 1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{1}2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\I}{ \mymatrix{rr}{1 & 0 \\ 0 & 1} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert 1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} 1 \mspace{-1.5mu} \rfloor } $$ \newcommand{\greenbit}[1] {\mathbf{{\color{green}1}}} $$ \newcommand{\bluebit}[1] {\mathbf{{\color{blue}1}}} $$ \newcommand{\redbit}[1] {\mathbf{{\color{red}1}}} $$ \newcommand{\brownbit}[1] {\mathbf{{\color{brown}1}}} $$ \newcommand{\blackbit}[1] {\mathbf{{\color{black}1}}} $ Probabilistic States _prepared by Abuzer Yakaryilmaz_[](https://youtu.be/tJjrF7WgT1g) Suppose that Asja tosses a fair coin secretly.As we do not see the result, our information about the outcome will be probabilistic:$\rightarrow$ The outcome is heads with probability $0.5$ and the outcome will be tails with probability $0.5$.If the coin has a bias $ \dfrac{Pr(Head)}{Pr(Tail)} = \dfrac{3}{1}$, then our information about the outcome will be as follows:$\rightarrow$ The outcome will be heads with probability $ 0.75 $ and the outcome will be tails with probability $ 0.25 $. Explanation: The probability of getting heads is three times of the probability of getting tails. The total probability is 1. We divide the whole probability 1 into four parts (three parts are for heads and one part is for tail), one part is $ \dfrac{1}{4} = 0.25$, and then give three parts for heads ($0.75$) and one part for tails ($0.25$). Listing probabilities as a column We have two different outcomes: heads (0) and tails (1).We use a column of size 2 to show the probabilities of getting heads and getting tails.For the fair coin, our information after the coin-flip will be $ \myvector{0.5 \\ 0.5} $. For the biased coin, it will be $ \myvector{0.75 \\ 0.25} $.The first entry shows the probability of getting heads, and the second entry shows the probability of getting tails. $ \myvector{0.5 \\ 0.5} $ and $ \myvector{0.75 \\ 0.25} $ are two examples of 2-dimensional (column) vectors. 
Task 1 Suppose that Balvis secretly flips a coin having the bias $ \dfrac{Pr(Heads)}{Pr(Tails)} = \dfrac{1}{4}$. Represent your information about the outcome as a column vector. Task 2 Suppose that Fyodor secretly rolls a loaded (tricky) dice with the bias $$ Pr(1):Pr(2):Pr(3):Pr(4):Pr(5):Pr(6) = 7:5:4:2:6:1 . $$ Represent your information about the result as a column vector. Remark that the size of your column vector should be 6. You may use python for your calculations.
###Code
#
# your code is here
#
###Output
_____no_output_____
###Markdown
click for our solution Vector representation Suppose that we have a system with 4 distinguishable states: $ s_1 $, $s_2 $, $s_3$, and $s_4$. We expect the system to be in one of them at any moment. Speaking in terms of probabilities, we say that the system is in one of the states with probability 1, and in any other state with probability 0. By using our column representation, we can show each state as a column vector (by using the vectors in the standard basis of $ \mathbb{R}^4 $):$ e_1 = \myvector{1\\ 0 \\ 0 \\ 0}, e_2 = \myvector{0 \\ 1 \\ 0 \\ 0}, e_3 = \myvector{0 \\ 0 \\ 1 \\ 0}, \mbox{ and } e_4 = \myvector{0 \\ 0 \\ 0 \\ 1}.$ This representation helps us to represent our information on a system when it is in more than one state with certain probabilities. Remember the case in which the coins are tossed secretly. For example, suppose that the system is in states $ s_1 $, $ s_2 $, $ s_3 $, and $ s_4 $ with probabilities $ 0.20 $, $ 0.25 $, $ 0.40 $, and $ 0.15 $, respectively. (The total probability must be 1, i.e., $ 0.20+0.25+0.40+0.15 = 1.00 $.) Then, we can say that the system is in the following probabilistic state: $ 0.20 \cdot e_1 + 0.25 \cdot e_2 + 0.40 \cdot e_3 + 0.15 \cdot e_4 $$ = 0.20 \cdot \myvector{1\\ 0 \\ 0 \\ 0} + 0.25 \cdot \myvector{0\\ 1 \\ 0 \\ 0} + 0.40 \cdot \myvector{0\\ 0 \\ 1 \\ 0} + 0.15 \cdot \myvector{0\\ 0 \\ 0 \\ 1} $$ = \myvector{0.20\\ 0 \\ 0 \\ 0} + \myvector{0\\ 0.25 \\ 0 \\ 0} + \myvector{0\\ 0 \\0.40 \\ 0} + \myvector{0\\ 0 \\ 0 \\ 0.15 } = \myvector{ 0.20 \\ 0.25 \\ 0.40 \\ 0.15 }, $ where the summation of entries must be 1. Probabilistic state A probabilistic state is a linear combination of the vectors in the standard basis. Here the coefficients (scalars) must satisfy certain properties: Each coefficient is non-negative The summation of coefficients is 1 Alternatively, we can say that a probabilistic state is a probability distribution over deterministic states. We can show all information as a single mathematical object, which is called a stochastic vector. Remark that the state of any linear system is a linear combination of the vectors in the basis. Task 3 For a system with 4 states, randomly create a probabilistic state, and print its entries, e.g., $ 0.16~~0.17~~0.02~~0.65 $. Hint: You may pick your random numbers between 0 and 100 (or 1000), and then normalize each value by dividing by the summation of all numbers.
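A tiny illustration of the normalization described in the hint, using fixed values instead of random ones:

```python
values = [20, 25, 40, 15]                    # fixed example values; Task 3 asks for random ones
total = sum(values)
probabilistic_state = [v / total for v in values]
print(probabilistic_state)                   # [0.2, 0.25, 0.4, 0.15] -- entries sum to 1
```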
###Code
#
# your solution is here
#
###Output
_____no_output_____
###Markdown
click for our solution Task 4 [extra] As given in the hint for Task 3, you may pick your random numbers between 0 and $ 10^k $. For better precision, you may take bigger values of $ k $. Write a function that randomly creates a probabilistic state of size $ n $ with a precision up to $ k $ digits. Test your function.
###Code
#
# your solution is here
#
###Output
_____no_output_____ |
deep_learning_v2_pytorch/autoencoder/convolutional-autoencoder/Upsampling_Solution.ipynb | ###Markdown
Convolutional Autoencoder. Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. We'll build a convolutional autoencoder to compress the MNIST dataset. >The encoder portion will be made of convolutional and pooling layers and the decoder will be made of **upsampling and convolutional layers**. Compressed Representation. A compressed representation can be great for saving and sharing any kind of data in a way that is more efficient than storing raw data. In practice, the compressed representation often holds key information about an input image and we can use it for denoising images or other kinds of reconstruction and transformation! Let's get started by importing our libraries and getting the dataset.
###Code
import torch
import numpy as np
from torchvision import datasets
import torchvision.transforms as transforms
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# load the training and test datasets
train_data = datasets.MNIST(root="data", train=True, download=True, transform=transform)
test_data = datasets.MNIST(root="data", train=False, download=True, transform=transform)
# Create training and test dataloaders
num_workers = 0
# how many samples per batch to load
batch_size = 20
# prepare data loaders
train_loader = torch.utils.data.DataLoader(
train_data, batch_size=batch_size, num_workers=num_workers
)
test_loader = torch.utils.data.DataLoader(
test_data, batch_size=batch_size, num_workers=num_workers
)
###Output
_____no_output_____
###Markdown
Visualize the Data
###Code
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111)
ax.imshow(img, cmap="gray")
###Output
_____no_output_____
###Markdown
--- Convolutional Autoencoder. The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder, though, might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out of the decoder, so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below. Upsampling + Convolutions, Decoder. This decoder uses a combination of nearest-neighbor **upsampling and normal convolutional layers** to increase the width and height of the input layers. It is important to note that transpose convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels, which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. This is the approach we take here. TODO: Build the network shown above. > Build the encoder out of a series of convolutional and pooling layers. > When building the decoder, use a combination of upsampling and normal convolutional layers.
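Before the solution below, a minimal standalone sketch (separate from the exercise) showing that nearest-neighbor upsampling doubles only the spatial dimensions and leaves the channel depth alone:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 7, 7)                        # e.g. a compressed representation
up = F.interpolate(x, scale_factor=2, mode="nearest")
print(up.shape)                                     # torch.Size([1, 4, 14, 14])
```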
###Code
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class ConvAutoencoder(nn.Module):
def __init__(self):
super(ConvAutoencoder, self).__init__()
## encoder layers ##
# conv layer (depth from 1 --> 16), 3x3 kernels
self.conv1 = nn.Conv2d(1, 16, 3, padding=1)
# conv layer (depth from 16 --> 8), 3x3 kernels
self.conv2 = nn.Conv2d(16, 4, 3, padding=1)
# pooling layer to reduce x-y dims by two; kernel and stride of 2
self.pool = nn.MaxPool2d(2, 2)
## decoder layers ##
self.conv4 = nn.Conv2d(4, 16, 3, padding=1)
self.conv5 = nn.Conv2d(16, 1, 3, padding=1)
def forward(self, x):
# add layer, with relu activation function
# and maxpooling after
x = F.relu(self.conv1(x))
x = self.pool(x)
# add hidden layer, with relu activation function
x = F.relu(self.conv2(x))
x = self.pool(x) # compressed representation
## decoder
# upsample, followed by a conv layer, with relu activation function
        # `F.upsample` has been renamed `F.interpolate` in newer PyTorch versions
        x = F.interpolate(x, scale_factor=2, mode="nearest")
x = F.relu(self.conv4(x))
        # upsample again, output should have a sigmoid applied
        x = F.interpolate(x, scale_factor=2, mode="nearest")
        x = torch.sigmoid(self.conv5(x))  # F.sigmoid is deprecated; use torch.sigmoid
return x
# initialize the NN
model = ConvAutoencoder()
print(model)
###Output
ConvAutoencoder(
(conv1): Conv2d(1, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv4): Conv2d(4, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv5): Conv2d(16, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
###Markdown
--- TrainingHere I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards. We are not concerned with labels in this case, just images, which we can get from the `train_loader`. Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use `MSELoss`. And compare output images and input images as follows:```loss = criterion(outputs, images)```Otherwise, this is pretty straightforward training with PyTorch. Unlike the fully-connected autoencoder, we do not need to flatten the images; we pass them straight into the autoencoder and record the training loss as we go.
###Code
# specify loss function
criterion = nn.MSELoss()
# specify optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# number of epochs to train the model
n_epochs = 30
for epoch in range(1, n_epochs + 1):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data in train_loader:
# _ stands in for labels, here
# no need to flatten images
images, _ = data
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
outputs = model(images)
# calculate the loss
loss = criterion(outputs, images)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item() * images.size(0)
# print avg training statistics
train_loss = train_loss / len(train_loader)
print("Epoch: {} \tTraining Loss: {:.6f}".format(epoch, train_loss))
###Output
Epoch: 1 Training Loss: 0.323222
Epoch: 2 Training Loss: 0.167930
Epoch: 3 Training Loss: 0.150233
Epoch: 4 Training Loss: 0.141811
Epoch: 5 Training Loss: 0.136143
Epoch: 6 Training Loss: 0.131509
Epoch: 7 Training Loss: 0.126820
Epoch: 8 Training Loss: 0.122914
Epoch: 9 Training Loss: 0.119928
Epoch: 10 Training Loss: 0.117524
Epoch: 11 Training Loss: 0.115594
Epoch: 12 Training Loss: 0.114085
Epoch: 13 Training Loss: 0.112878
Epoch: 14 Training Loss: 0.111946
Epoch: 15 Training Loss: 0.111153
Epoch: 16 Training Loss: 0.110411
Epoch: 17 Training Loss: 0.109753
Epoch: 18 Training Loss: 0.109152
Epoch: 19 Training Loss: 0.108625
Epoch: 20 Training Loss: 0.108119
Epoch: 21 Training Loss: 0.107637
Epoch: 22 Training Loss: 0.107156
Epoch: 23 Training Loss: 0.106703
Epoch: 24 Training Loss: 0.106221
Epoch: 25 Training Loss: 0.105719
Epoch: 26 Training Loss: 0.105286
Epoch: 27 Training Loss: 0.104917
Epoch: 28 Training Loss: 0.104582
Epoch: 29 Training Loss: 0.104284
Epoch: 30 Training Loss: 0.104016
###Markdown
Checking out the resultsBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in places.
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)  # `.next()` was removed from DataLoader iterators in newer PyTorch
# get sample outputs
output = model(images)
# prep images for display
images = images.numpy()
# output is resized into a batch of images
output = output.view(batch_size, 1, 28, 28)
# use detach when it's an output that requires_grad
output = output.detach().numpy()
# plot the first ten input images and then reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25, 4))
# input images on top row, reconstructions on bottom
for images, row in zip([images, output], axes):
for img, ax in zip(images, row):
ax.imshow(np.squeeze(img), cmap="gray")
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
###Output
_____no_output_____ |
.ipynb_checkpoints/cnt_tda-checkpoint.ipynb | ###Markdown
CNT orientation detection using TDA The following shows scanning electron microscopy (SEM) images of carbon nanotube (CNT) samples of different alignment degree. Samples with high alignment degree usually have stronger physical properties.
###Code

###Output
'[Kernel' is not recognized as an internal or external command,
operable program or batch file.
'front-end' is not recognized as an internal or external command,
operable program or batch file.
###Markdown
Preprocessing of SEM imagesBefore applying TDA, we can preprocess the SEM images using Canny edge detection:
###Code
import scipy.io
import matplotlib.pyplot as plt
import cv2
# mentioning path of the image
img_path = "SEM\\40.PNG"
# read/load an SEM image
image = cv2.imread(img_path)
# detection of the edges
img_edge = cv2.Canny(image, 100, 200, apertureSize = 7)
plt.imshow(img_edge, cmap=plt.cm.gray)
###Output
_____no_output_____
###Markdown
Derive the variation function We define the variation function $V(X;\theta)$ as follows to measure the total length of extension for $X$ in direction $\theta$. Let the band containing $X$ and orthogonal to direction $\theta$ be bounded by the lines $x = m$ and $x = M$, $m < M$:$$V(X;\theta):=\sum_{i=1}^m l(I_i)+\sum_{j=1}^n l(J_j)-b_0(X)(M-m),$$where $I_i$ and $J_j$ are the intervals comprising the barcodes for the sub-level and super-level set filtrations of $X$ along the direction $0\leq \theta\leq \pi$.
###Code
import numpy as np
import gudhi as gd
from math import cos
from math import sin
from math import sqrt
from numpy import inf
from numpy import NaN
def variation(BW,slices):
theta = np.linspace(0, 2*np.pi, 2*slices+1) # divide [0,2*pi] evenly into 2*slices slices
BW = np.float_(BW)
[r,c] = np.shape(BW)
M = np.ceil(1/2*sqrt(r**2+c**2))
    f = np.zeros(len(theta)-1) # since 0 and 2*pi represent the same direction, we don't need to calculate 2*pi
# Now calculate the 0-th Betti number
cc = gd.CubicalComplex(dimensions=[c,r], top_dimensional_cells = -BW.reshape((-1,)))
p = cc.persistence()
pers = cc.persistence_intervals_in_dimension(0)
bars = np.array(pers)
betti = np.shape(bars)[0]
for i in range(len(f)):
x = np.ones((r,1)) * cos(theta[i]) * (np.arange(c).reshape([1,c])-1/2*(c-1))
y = np.ones((1,c)) * sin(theta[i]) * (np.arange(r).reshape([r,1])-1/2*(r-1))
dist = (x+y)*BW # the distance of coordinates to center of BW
dist[BW==0] = M
        cc = gd.CubicalComplex(dimensions=[c,r], top_dimensional_cells = dist.reshape((-1,)))  # be careful about dim
p = cc.persistence()
pers = cc.persistence_intervals_in_dimension(0)
bars = np.array(pers)
bars[bars == inf] = M
f[i] = np.sum(bars[:,1]-bars[:,0])
variation = f[0:slices]+f[slices:2*slices]-betti*np.ones(slices)*2*M
return variation
slices = 20
bw = img_edge/np.amax(img_edge)
v = variation(bw, slices)
### Plot the variation function using polar coordinates and mark the maximum direction/angle
# function - the array of variation function values
# center - the coordinates of the center
# linewidth - the linewidth of the graph curve
def polarplot(function, center, linewidth):
v0 = np.append(np.concatenate((function,function)),function[0])
t0 = np.linspace(0,2*np.pi,2*len(function)+1)
x = v0*np.cos(t0) + center[0]
y = v0*np.sin(t0) + center[1]
plt.plot(x,y, linewidth=linewidth)
ind_of_max = np.argmax(function)
xval = v0[ind_of_max]*np.cos(t0[ind_of_max]) + center[0]
yval = v0[ind_of_max]*np.sin(t0[ind_of_max]) + center[1]
plt.plot([center[0], xval], [center[1], yval], linewidth = linewidth)
vec = v/max(v) # Normalize variation function
plt.imshow(img_edge, cmap=plt.cm.gray)
[r,c] = np.shape(img_edge)
polarplot(min(r/2,c/2)*vec, [c/2,r/2], 3)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Alignment degreeThe idea to derive the alignment degree is that: if the CNT fibers are well aligned along a direction $\theta$,$V(X;\theta)$ should be large and $V(X;\theta^\perp)$ small, where $\theta^\perp$ is the direction orthogonal to $\theta$. Let $\theta_{max}$ be the direction that maximizes $V(X;\theta)$.Then, we define the alignment degree as the ratio$$\zeta:=\frac{V(X;\theta_{max})-V(X;\theta_{max}^\perp)}{V(X;\theta_{max})}\approx \frac{\max V - \min V}{\max V}.$$
###Code
ali_degree = (max(v) - min(v))/max(v)
print(ali_degree)
###Output
0.7417307611482369
|
Feature-Engineering/Outliers.ipynb | ###Markdown
Discussion Related With Outliers And Impact On Machine Learning!! Which Machine Learning Models Are Sensitive To Outliers?1. Naive Bayes Classifier--- Not Sensitive To Outliers2. SVM-------- Not Sensitive To Outliers 3. Linear Regression---------- Sensitive To Outliers4. Logistic Regression------- Sensitive To Outliers5. Decision Tree Regressor or Classifier---- Not Sensitive6. Ensemble(RF,XGBoost,GB)------- Not Sensitive7. KNN--------------------------- Not Sensitive 8. K-Means------------------------ Sensitive9. Hierarchical------------------- Sensitive 10. PCA-------------------------- Sensitive 11. Neural Networks-------------- Sensitive
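A quick, hypothetical illustration of the underlying issue (the numbers below are made up): a single extreme value drags the mean far more than the median, which is why distance- and squared-error-based models react strongly to outliers.
```python
import numpy as np

values = np.array([24, 26, 25, 27, 26, 25, 500])  # one extreme outlier
print(values.mean())      # ~93.3 -> pulled strongly toward the outlier
print(np.median(values))  # 26.0  -> barely affected
```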
###Code
import pandas as pd
df=pd.read_csv('titanic.csv')
df.head()
df['Age'].isnull().sum()
import seaborn as sns
sns.distplot(df['Age'].dropna())
sns.distplot(df['Age'].fillna(100))
###Output
_____no_output_____
###Markdown
Gaussian Distributed
###Code
figure=df.Age.hist(bins=50)
figure.set_title('Age')
figure.set_xlabel('Age')
figure.set_ylabel('No of passenger')
figure=df.boxplot(column="Age")
df['Age'].describe()
###Output
_____no_output_____
###Markdown
If The Data Is Normally Distributed We use this
###Code
##### Assuming Age follows A Gaussian Distribution we will calculate the boundaries which differentiates the outliers
uppper_boundary=df['Age'].mean() + 3* df['Age'].std()
lower_boundary=df['Age'].mean() - 3* df['Age'].std()
print(lower_boundary), print(uppper_boundary),print(df['Age'].mean())
###Output
-13.880374349943303
73.27860964406094
29.69911764705882
###Markdown
If Features Are Skewed We Use the below Technique
###Code
figure=df.Fare.hist(bins=50)
figure.set_title('Fare')
figure.set_xlabel('Fare')
figure.set_ylabel('No of passenger')
df.boxplot(column="Fare")
df['Fare'].describe()
#### Lets compute the Interquantile range to calculate the boundaries
IQR=df.Fare.quantile(0.75)-df.Fare.quantile(0.25)
lower_bridge=df['Fare'].quantile(0.25)-(IQR*1.5)
upper_bridge=df['Fare'].quantile(0.75)+(IQR*1.5)
print(lower_bridge), print(upper_bridge)
#### Extreme outliers
lower_bridge=df['Fare'].quantile(0.25)-(IQR*3)
upper_bridge=df['Fare'].quantile(0.75)+(IQR*3)
print(lower_bridge), print(upper_bridge)
data=df.copy()
data.loc[data['Age']>=73,'Age']=73
data.loc[data['Fare']>=100,'Fare']=100
figure=data.Age.hist(bins=50)
figure.set_title('Fare')
figure.set_xlabel('Fare')
figure.set_ylabel('No of passenger')
figure=data.Fare.hist(bins=50)
figure.set_title('Fare')
figure.set_xlabel('Fare')
figure.set_ylabel('No of passenger')
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(data[['Age','Fare']].fillna(0),data['Survived'],test_size=0.3)
### Logistic Regression
from sklearn.linear_model import LogisticRegression
classifier=LogisticRegression()
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
y_pred1=classifier.predict_proba(X_test)
from sklearn.metrics import accuracy_score,roc_auc_score
print("Accuracy_score: {}".format(accuracy_score(y_test,y_pred)))
print("roc_auc_score: {}".format(roc_auc_score(y_test,y_pred1[:,1])))
###Output
Accuracy_score: 0.6716417910447762
roc_auc_score: 0.7115520907158045
|
data-science/metrics/MetricsARIMA_PvA.ipynb | ###Markdown
###Code
# Imports
import pandas as pd
import numpy as np
# Load the raw data
w1_results_df = pd.read_csv('https://raw.githubusercontent.com/JimKing100/NFL-Live/master/data-science/data/arima-combined/predictions-week1.csv')
#### The week 1 predictions
# week1-cur = 2018 total points
# week1-pred = predicted points for the season
# week1-act = actual points for the season
# weekn-cur = week (n-1) actual points
# weekn-pred = predicted points for the rest of the season (n-17)
# weekn-act = actual points for the rest of the season (n-17)
w1_results_df.head()
# Calculate the metrics for ARIMA vs Baseline Average
column_names = ['week', 'avg-total', 'avg-pct', 'arima-total', 'arima-pct', 'ab-correct']
metric_df = pd.DataFrame(columns = column_names)
for i in range(1, 18):
filename = 'https://raw.githubusercontent.com/JimKing100/NFL-Live/master/data-science/data/arima-combined/predictions-week' + str(i) + '.csv'
# Column names
week_cur = 'week' + str(i) + '-cur'
week_pred = 'week' + str(i) + '-pred'
week_act = 'week' + str(i) + '-act'
# Weekly predictions
results_df = pd.read_csv(filename)
# Create the current points list using 2018 points in week 1 and average points going forward
if i == 1:
week_current = results_df['week1-cur'].tolist()
else:
# for each player (element) calculate the average points (element/(i-1)) and multiply by remaining games (17-(i-1))
# the 17th week is 0 and represents the bye week (17 weeks and 16 games)
week_list = results_df[week_cur].tolist()
week_current = [((element / (i -1)) * (17 - (i -1))) for element in week_list]
# Create the prediction and actual lists
week_pred = results_df[week_pred].tolist()
week_act = results_df[week_act].tolist()
    # Calculate prediction vs. actual for the baseline average
cur_total_correct = 0
num_players = len(week_current)
for j in range(0, num_players):
cur_count = 0
act_count = 0
for k in range(0, num_players):
if j != k:
if week_current[j] < week_current[k]:
cur_count = cur_count + 1
if week_act[j] < week_act[k]:
act_count = act_count + 1
cur_total_correct = cur_total_correct + (num_players - abs(act_count - cur_count))
# print(cur_count, act_count, num_players, '# correct predictions = ', (num_players - abs(act_count - cur_count)), cur_total_correct)
print('Total correct and % correct - Baseline Average', cur_total_correct, cur_total_correct/(653 * 652))
    # Calculate prediction vs. actual for ARIMA
pred_total_correct = 0
num_players = len(week_pred)
for j in range(0, num_players):
pred_count = 0
act_count = 0
for k in range(0, num_players):
if j != k:
if week_pred[j] < week_pred[k]:
pred_count = pred_count + 1
if week_act[j] < week_act[k]:
act_count = act_count + 1
pred_total_correct = pred_total_correct + (num_players - abs(act_count - pred_count))
# print(pred_count, act_count, num_players, '# correct predictions = ', (num_players - abs(act_count - pred_count)), pred_total_correct)
print('Total correct and % correct - ARIMA', pred_total_correct, pred_total_correct/(653 * 652))
print('Total additional correct using ARIMA = ', pred_total_correct - cur_total_correct)
metric_df = metric_df.append({'week': i, 'avg-total': cur_total_correct, 'avg-pct': cur_total_correct/(653 * 652),
'arima-total': pred_total_correct, 'arima-pct': pred_total_correct/(653 * 652),
'ab-correct': pred_total_correct - cur_total_correct}, ignore_index=True)
file_name = '/content/ab_metrics.csv'
metric_df.to_csv(file_name, index=False)
###Output
Total correct and % correct - Baseline Average 350701 0.8237135824274937
Total correct and % correct - ARIMA 353497 0.8302807241706517
Total additional correct using ARIMA = 2796
Total correct and % correct - Baseline Average 341356 0.8017643908717669
Total correct and % correct - ARIMA 357868 0.8405471678614042
Total additional correct using ARIMA = 16512
Total correct and % correct - Baseline Average 350501 0.8232438297992277
Total correct and % correct - ARIMA 358132 0.8411672413307153
Total additional correct using ARIMA = 7631
Total correct and % correct - Baseline Average 355994 0.8361455857345522
Total correct and % correct - ARIMA 357600 0.8399176993395278
Total additional correct using ARIMA = 1606
Total correct and % correct - Baseline Average 357082 0.8387010400323189
Total correct and % correct - ARIMA 357377 0.8393939251590112
Total additional correct using ARIMA = 295
Total correct and % correct - Baseline Average 357546 0.839790866129896
Total correct and % correct - ARIMA 357995 0.840845460780353
Total additional correct using ARIMA = 449
Total correct and % correct - Baseline Average 360267 0.8461818506374543
Total correct and % correct - ARIMA 357893 0.8406058869399374
Total additional correct using ARIMA = -2374
Total correct and % correct - Baseline Average 360363 0.846407331899022
Total correct and % correct - ARIMA 357398 0.8394432491849791
Total additional correct using ARIMA = -2965
Total correct and % correct - Baseline Average 360401 0.8464965848983925
Total correct and % correct - ARIMA 358692 0.84248254868986
Total additional correct using ARIMA = -1709
Total correct and % correct - Baseline Average 360028 0.8456204962466765
Total correct and % correct - ARIMA 360060 0.845695656667199
Total additional correct using ARIMA = 32
Total correct and % correct - Baseline Average 359741 0.8449464012251149
Total correct and % correct - ARIMA 360623 0.8470180103157677
Total additional correct using ARIMA = 882
Total correct and % correct - Baseline Average 358688 0.8424731536372946
Total correct and % correct - ARIMA 361371 0.8487748851454824
Total additional correct using ARIMA = 2683
Total correct and % correct - Baseline Average 357439 0.8395395484737737
Total correct and % correct - ARIMA 362309 0.8509780249720498
Total additional correct using ARIMA = 4870
Total correct and % correct - Baseline Average 356793 0.8380222474844746
Total correct and % correct - ARIMA 364171 0.8553514219412057
Total additional correct using ARIMA = 7378
Total correct and % correct - Baseline Average 352646 0.8282819267373801
Total correct and % correct - ARIMA 365714 0.8589755634682776
Total additional correct using ARIMA = 13068
Total correct and % correct - Baseline Average 347005 0.8150325538571388
Total correct and % correct - ARIMA 367992 0.8643260459042268
Total additional correct using ARIMA = 20987
Total correct and % correct - Baseline Average 333462 0.7832232546341097
Total correct and % correct - ARIMA 373364 0.8769436014994504
Total additional correct using ARIMA = 39902
|
Learn/Fundamental/Exploratory Data Analysis with Python for Beginner.ipynb | ###Markdown
Exploratory Data Analysis with Pandas Reading a CSV file
###Code
import pandas as pd
order_df = pd.read_csv('../../Dataset/order.csv')
###Output
_____no_output_____
###Markdown
Inspecting the data frame structure After loading the dataframe into Python, the next step before starting the analysis is to understand the structure of the dataset. Viewing the column and row structure of the data frame The first thing to understand about a dataframe is its size: how many columns and how many rows it contains.
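As an optional extra check alongside the shape, `info()` lists every column together with its dtype and non-null count:
```python
order_df.info()
```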
###Code
order_df.shape
###Output
_____no_output_____
###Markdown
Previewing the data frame Next, to get a picture of the dataframe's contents, we can use the `head` and `tail` functions.
###Code
order_df.head()
###Output
_____no_output_____
###Markdown
Descriptive Statistics of the Data Frame
###Code
# Descriptive statistics / summary in Python - Pandas
order_df.describe()
# By default, describe() automatically ignores category columns and only returns summary statistics for numeric columns.
# The argument include = "all" returns summary / descriptive statistics for both numeric and character columns.
order_df.describe(include='all')
# To get summary statistics only for non-numeric columns, add the argument include=["object"]
order_df.describe(include=['object'])
# To find the central tendency of a dataframe column, use the mean, median, and mode syntax from Pandas.
print(order_df.loc[:, 'price'].mean())
print(order_df.loc[:, 'freight_value'].median())
###Output
2607783.9156783135
104000.0
###Markdown
Understanding and Building Data Distributions with a Histogram A histogram is one way to identify how the data are distributed. It is a chart that summarizes the spread (dispersion or variation) of the data. In a histogram there are no gaps between the bars, because class data points can appear anywhere in the range covered by the chart, and the height of a bar corresponds to the frequency (or relative frequency) of the data in that class: the taller the bar, the higher the frequency; the shorter the bar, the lower the frequency. **Common syntax:** 1. bins = the number of bins used in the histogram; if not specified, the function defaults to 10 bins. 2. by = the name of a DataFrame column to group by (the value is a column name of that dataframe). 3. alpha = the opacity of the histogram plot (a value in the range 0.0 - 1.0, where smaller values give lower opacity). 4. figsize = a tuple that sets the size of the histogram plot, e.g. figsize=(10,12)
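For example, the `by` and `alpha` arguments described above can be combined like this (a sketch that assumes the `payment_type` column used later in this notebook):
```python
import matplotlib.pyplot as plt

order_df.hist(column='price', by='payment_type', bins=20, alpha=0.5, figsize=(10, 12))
plt.show()
```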
###Code
import pandas as pd
import matplotlib.pyplot as plt
order_df = pd.read_csv('../../Dataset/order.csv')
# plot histogram kolom: price
order_df[['price']].hist(figsize=(4, 5), bins=10, xlabelsize=8, ylabelsize=8)
plt.show() # Untuk menampilkan histogram plot
###Output
_____no_output_____
###Markdown
Standard Deviation and Variance in Pandas Variance and standard deviation are also measures of dispersion or variation. Standard deviation is the most widely used measure of dispersion, probably because it has the same unit as the original data, whereas the variance is expressed in the squared unit of the original data (e.g. cm^2).
###Code
print(order_df.loc[:, 'price'].std())
print(order_df.loc[:, 'freight_value'].var())
###Output
1388311.591031153
3044815290.0703516
###Markdown
Finding Outliers Using Pandas Before going step by step through finding **outliers**, a short aside on what **outliers** are. **Outliers** are observations with extreme values, i.e. values that are far away from, or entirely different to, most of the other values in their group. In general, outliers can be identified with the IQR (interquartile range) metric. The basic formula is IQR = Q3 - Q1, and an observation is considered an outlier if it satisfies either of the two conditions below (see the sketch that follows): 1. data < Q1 - 1.5 * IQR 2. data > Q3 + 1.5 * IQR
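Putting both conditions together, a sketch of how the rule can be applied to flag and separate outliers (using the same `product_weight_gram` column as in the next cell):
```python
Q1 = order_df['product_weight_gram'].quantile(0.25)
Q3 = order_df['product_weight_gram'].quantile(0.75)
IQR = Q3 - Q1

is_outlier = (order_df['product_weight_gram'] < Q1 - 1.5 * IQR) | \
             (order_df['product_weight_gram'] > Q3 + 1.5 * IQR)
outliers = order_df[is_outlier]      # rows flagged as outliers
clean_df = order_df[~is_outlier]     # rows kept for further analysis
```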
###Code
# Hitung quartile 1
Q1 = order_df[['product_weight_gram']].quantile(0.25)
# Hitung quartile 3
Q3 = order_df[['product_weight_gram']].quantile(0.75)
# Hitung inter quartile range dan cetak ke console
IQR = Q3 - Q1
print(IQR)
###Output
product_weight_gram 1550.0
dtype: float64
###Markdown
Renaming Data Frame Columns Renaming columns in Pandas can be done in two ways: 1. using the column name, or 2. using the column index.
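The second option, renaming by column index, can be sketched as follows (the position used here is only an example and must match the actual position of the column being renamed):
```python
# rename the column at position 5 (hypothetical) using its index
order_df.rename(columns={order_df.columns[5]: 'new_name'}, inplace=True)
```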
###Code
order_df.rename(columns={'freight_value': 'shipping_cost'}, inplace=True)
order_df.head()
###Output
_____no_output_____
###Markdown
.groupby in Pandas The purpose of .groupby is to summarize a data frame by computing an aggregate over a particular column.
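Beyond a single mean, `groupby` can also return several aggregates at once (a small optional example using the same columns):
```python
order_df.groupby('payment_type')['price'].agg(['mean', 'median', 'count'])
```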
###Code
# Compute the average price per payment_type
rata_rata = order_df['price'].groupby(order_df['payment_type']).mean()
###Output
_____no_output_____
###Markdown
Sorting Using Pandas **Sorting** orders the data according to one or more columns and is typically used to look at the maximum and minimum values of a dataset. Pandas provides sorting as a fundamental part of exploratory data analysis. By default the function sorts in ascending order (smallest value first); to sort in descending order (largest value first) an extra argument is used. Sorting in Pandas can also be done on more than one column at a time, as shown in the sketch below:
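A sketch of the multi-column case using columns from this order dataset:
```python
# sort by quantity (descending), breaking ties by price (ascending)
sort_multi = order_df.sort_values(by=['quantity', 'price'], ascending=[False, True])
sort_multi.head()
```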
###Code
sort_harga = order_df.sort_values(by='price', ascending=False)
sort_harga.head()
###Output
_____no_output_____
###Markdown
Mini Project
###Code
import pandas as pd
import matplotlib.pyplot as plt
order_df = pd.read_csv('../../Dataset/order.csv')
# Median price paid by customers for each payment method.
median_price = order_df['price'].groupby(order_df['payment_type']).median()
median_price.head()
# Rename freight_value to shipping_cost and find the most expensive
# shipping_cost in this sales data using sort.
order_df.rename(columns={"freight_value": "shipping_cost"}, inplace=True)
sort_value = order_df.sort_values(by="shipping_cost", ascending=0)
sort_value.head()
# For each product_category_name, what is the average product weight,
# and which category has the smallest standard deviation of that weight?
mean_value = order_df["product_weight_gram"].groupby(order_df["product_category_name"]).mean()
print(mean_value.sort_values())
std_value = order_df["product_weight_gram"].groupby(order_df["product_category_name"]).std()
print(std_value.sort_values())
# Build a histogram of the sales quantity in this dataset to see how the
# quantity is distributed, using bins = 5 and figsize = (4,5)
order_df[["quantity"]].hist(figsize=(4, 5), bins=5)
plt.show()
###Output
_____no_output_____ |
src/Transformada de Gabor - Haralick.ipynb | ###Markdown
**1. Connect Colab to Drive**
###Code
from google.colab import drive
drive.mount('/content/drive')
import os
PATH_ORIGEN = "/content/drive/MyDrive/Proyectos-independientes/Proyecto-MINSA/Dataset/Clasificacion/HGG-LGG"
os.chdir(PATH_ORIGEN)
%matplotlib inline
import cv2
import os
import numpy as np
import keras
import matplotlib.pyplot as plt
from random import shuffle
from tensorflow.keras.applications import VGG16
from tensorflow.keras import backend as K
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import Dense, Activation
import sys
import h5py
import utils
import math
from fractions import Fraction
from tqdm.auto import tqdm
from skimage.feature import greycomatrix, greycoprops
import pandas as pd
import time
import torch
sys.path.append(os.path.abspath(PATH_ORIGEN))
!nvidia-smi
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
# Frame size
img_size = 224
img_size_touple = (img_size, img_size)
# Number of channels (RGB)
num_channels = 3
# Flat frame size
img_size_flat = img_size * img_size * num_channels
# Number of classes for classification (HGG-LGG)
num_classes = 2
# Number of files to train
_num_files_train = 1
# Number of frames per video
_images_per_file = 155
# Number of frames per training set
_num_images_train = _num_files_train * _images_per_file
# Video extension
video_exts = ".mp4"
in_dir = "/content/drive/MyDrive/Proyectos-independientes/Proyecto-MINSA/Dataset/Clasificacion/HGG-LGG/AVI"
###Output
_____no_output_____
###Markdown
**2. Calling functions from utils.py**
###Code
names, labels = utils.label_video_names(in_dir)
print(names[0])
print(len(names))
print(labels[0])
print(len(labels))
frames = utils.get_frames(in_dir, names[12])
print(frames.shape)
visible_frame = (frames*255).astype('uint8')
img = visible_frame[80][:,:,2]
plt.figure(1,figsize = (10,10))
plt.imshow(img,cmap = 'gray')
plt.show()
###Output
_____no_output_____
###Markdown
**2.1. Preprocessing**
###Code
# P1: Filtro LoG
blur = cv2.GaussianBlur(img,(3,3),0)
laplacian = cv2.Laplacian(blur,cv2.CV_8UC1)
#laplacian1 = laplacian/laplacian.max()
plt.figure(1,figsize = (10,10))
plt.imshow(laplacian,cmap = 'gray')
plt.show()
# P2: Umbralizacion
aux = np.zeros((img.shape[0],img.shape[1]),dtype= np.uint8)
for i in range(img.shape[0]):
for j in range(img.shape[1]):
if img[i,j] != 0:
aux[i,j] = 1
else:
aux[i,j] = 0
plt.figure(1,figsize = (10,10))
plt.imshow(aux,cmap = 'gray')
plt.show()
# Contorno más grande
cnts,_ = cv2.findContours(laplacian,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
#cnts,_ = cv2.findContours(aux,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
contour_sizes = [(cv2.contourArea(cnt), cnt) for cnt in cnts]
biggest_contour = max(contour_sizes, key=lambda x: x[0])[1]
# Coordenadas que encierran al contorno más grande
x,y,w,h = cv2.boundingRect(biggest_contour)
print("Coordenadas: " + " \n x1: " + str(x) ," \n x2:" , str(x + w) , "\n y1: ", str(y) , "\n y2:", str(y + h))
# Cropped --> LoG
crop = img[y:y+h,x:x+w]
plt.figure(1,figsize = (10,10))
plt.imshow(crop,cmap = "gray")
plt.show()
print(crop.shape)
# Cropped --> Umbralizacion
crop = img[y:y+h,x:x+w]
plt.figure(1,figsize = (10,10))
plt.imshow(crop,cmap = "gray")
plt.show()
print(crop.shape)
def LoG(image):
blur = cv2.GaussianBlur(image,(3,3),0)
laplacian = cv2.Laplacian(blur,cv2.CV_8UC1)
cnts,_ = cv2.findContours(laplacian,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
contour_sizes = [(cv2.contourArea(cnt), cnt) for cnt in cnts]
biggest_contour = max(contour_sizes, key=lambda x: x[0])[1]
x,y,w,h = cv2.boundingRect(biggest_contour)
crop = image[y:y+h,x:x+w]
return crop
def cropped(image):
    aux = np.zeros((image.shape[0], image.shape[1]), dtype=np.uint8)  # use the `image` argument rather than the global `img`
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            if image[i, j] != 0:
                aux[i, j] = 1
            else:
                aux[i, j] = 0
cnts,_ = cv2.findContours(aux,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
contour_sizes = [(cv2.contourArea(cnt), cnt) for cnt in cnts]
if len(contour_sizes) > 0:
biggest_contour = max(contour_sizes, key=lambda x: x[0])[1]
x,y,w,h = cv2.boundingRect(biggest_contour)
crop = image[y:y+h,x:x+w]
return crop
len(crop.shape)
###Output
_____no_output_____
###Markdown
**3. Feature Extraction** **3.1. Gabor Transform**
###Code
# Diccionario de parámetros
thetas = np.arange(0, np.pi, np.pi/4) # range of theta
lambds = np.array([ 2 * pow(math.sqrt(2), i + 1) for i in range(5)], dtype = 'float32') # range of lambda
sigmas = np.array([1.5,2.5]) # range de desviacion estandar
gamma = 1
psis = np.array([0,np.pi/2], dtype = 'float32')
## Creacion de banco de gabor
gaborFilterBank0 = []
gaborFilterBank90 = []
gaborParams0 = []
gaborParams90 = []
## Agregando valores al banco de gabor
for theta in thetas:
for lambd in lambds:
for sigma in sigmas:
gaborParam0 = {'ksize':(20, 20),'sigma':sigma,'theta':theta,
'lambd':lambd,'gamma':gamma,'psi':0,'ktype':cv2.CV_32F}
gaborParam90 = {'ksize':(20, 20),'sigma':sigma,'theta':theta,
'lambd':lambd,'gamma':gamma,'psi':90,'ktype':cv2.CV_32F}
Gabor0 = cv2.getGaborKernel(**gaborParam0)
Gabor90 = cv2.getGaborKernel(**gaborParam90)
gaborFilterBank0.append(Gabor0)
gaborFilterBank90.append(Gabor90)
gaborParams0.append(gaborParam0)
gaborParams90.append(gaborParam90)
# Plot
print("Banco de funciones de Gabor para distintos angulos con psi = 0")
fig = plt.figure(1,figsize=(20,20))
n0 = len(gaborFilterBank0)
for i in range(n0):
ang= gaborParams0[i]['theta'] / np.pi
a = Fraction(ang)
plt.subplot(4,n0//4, i+1)
plt.title("{} $\pi$".format(a))
plt.axis('off')
plt.imshow(gaborFilterBank0[i],cmap='gray')
plt.show()
# Plot
print("Banco de funciones de Gabor para distintos angulos con psi = 90")
fig = plt.figure(1,figsize=(20,20))
n90 = len(gaborFilterBank90)
for i in range(n90):
ang= gaborParams90[i]['theta'] / np.pi
a = Fraction(ang)
plt.subplot(4,n90//4, i+1)
plt.title("{} $\pi$".format(a))
plt.axis('off')
plt.imshow(gaborFilterBank90[i],cmap='gray')
plt.show()
def EuclideanDistanceMatrix(M1,M2):
shape = np.dot(M1,M2.T).shape
result = np.zeros(shape,dtype = np.float32)
for i in range(M1.shape[0]):
for j in range(M2.shape[0]):
a = M1[i,:] # vector fila
b = M2[j,:] # Vector fila
dist = np.linalg.norm(a-b)
#dist = torch.norm(a - b) # escalar
result[i,j] = dist
return result
def gabor_features(image,gaborFilterBank0,gaborFilterBank90):
GaborFeatures = np.zeros((1,40),dtype = np.float32)
for count,(mask0,mask90) in enumerate(zip(gaborFilterBank0,gaborFilterBank90)):
#count = count + 1
g0 = cv2.filter2D(image,-1,mask0)
# convertir a tensor
#g0_ = torch.from_numpy(g0).float().to(device)
#g0 = pow(g0,2)
g90 = cv2.filter2D(image,-1,mask90)
# convertir a tensor
#g90_ = torch.from_numpy(g90).float().to(device)
#g90 = pow(g90,2)
#g_T = math.sqrt(g0 + g90)
### Distancia euclidiana entre 2 matrices
g_T = EuclideanDistanceMatrix(g0,g90)
### Valor de Gabor
suma = np.sum(g_T,axis = 0)
suma = np.sum(suma)
GaborFeatures[0,count] = suma
#count = count + 1
return GaborFeatures
def glcm_features(image):
GLCMFeatures = np.zeros((1,6),dtype = np.float32)
dst = [1]
ang = [np.pi/2] # (np.pi/2 --> (dx =0 y dy = dst))
## Matriz GLCM nivel 1
co_matriz_1 = greycomatrix(image, dst, ang).astype('uint8')
co_matriz_1 = co_matriz_1[:,:,0,0]
#print("O.o:",co_matriz_1.shape)
## Matriz GLCM nivel 2
co_matriz_2 = greycomatrix(co_matriz_1, dst, ang).astype('uint8')
#co_matriz_2 = co_matriz_2[:,:,0,0]
# Indicadores
properties = ['ASM', 'correlation','contrast','dissimilarity','energy','homogeneity']
## Indicadores
"""glcm = greycomatrix(co_matriz_2, distances = dst, angles = ang,
symmetric = True,normed = True)"""
for i,prop in enumerate(properties):
GLCMFeatures[0,i] = greycoprops(co_matriz_2, prop)
#print(GLCMFeatures.shape)
#GLCMFeatures[] = np.hstack([greycoprops(co_matriz_2, prop).ravel() for prop in properties])
return GLCMFeatures
help(greycoprops)
!ls
# Contenedores
K = 369
N = 155
gab = 40
glc = 6
Xgab = np.zeros((K*N,gab + glc)) # K x N muestras (filas), y Gab características (columnas)
y = np.zeros((K*N),dtype ='int')
t = 0
columns_gab = [ 'GAB' + str(i + 1) for i in range(gab)]
columns_glc = [ 'GLC' + str(i + 1) for i in range(glc)]
X = []
X.extend(columns_gab)
X.extend(columns_glc)
df = pd.DataFrame(Xgab, columns = X)
dfy = pd.DataFrame(y,columns = ['clase'])
df = pd.concat([df, dfy], axis=1)
df.head()
df.shape
# Proceso en batch
for i in tqdm(range(len(names))):
frames = utils.get_frames(in_dir, names[i])
visible_frame = (frames*255).astype('uint8')
for j in range(50, 130 + 1):
img = visible_frame[j][:,:,2]
img = cropped(img)
#print(img.shape)
example_gab = gabor_features(img,gaborFilterBank0,gaborFilterBank90)
example_glc = glcm_features(img)
if len(example_glc.shape) == 2:
df.iloc[t,0:40] = [i for i in example_gab[0]]
df.iloc[t,40:46] = [i for i in example_glc[0]]
df.iloc[t,46] = labels[i][0]
df.to_csv('./features_total.csv', index=False)
#Xgab[t,:] = example
t = t + 1
else:
df.iloc[t,0:40] = [0 for i in range(40)]
df.iloc[t,40:46] = [0 for i in range(6)]
df.iloc[t,46] = labels[i][0]
df.to_csv('./features_total.csv', index=False)
t = t + 1
###Output
_____no_output_____
###Markdown
**3.2. GLCM**
###Code
## Matriz GLCM nivel 1
dst = [1]
ang = [np.pi/2] # (np.pi/2 --> (dx =0 y dy = dst))
co_matrices = greycomatrix(crop, dst, ang).astype('float')
print("Matriz GLCM: \n", co_matrices[:,:,0,0])
print(crop.shape)
## Matriz GLCM nivel 2
## Indicadores
dissimilarity = greycoprops(co_matrices, 'dissimilarity')[0][0]
correlation = greycoprops(co_matrices, 'correlation')[0][0]
print("Disimilaridad: ",dissimilarity)
print("Correlacion:", correlation)
# Note: `co_matriz_2` (the level-2 GLCM) was never built in this cell, so construct it
# here the same way glcm_features() does before computing the level-2 indicators.
co_matriz_1 = greycomatrix(crop, dst, ang).astype('uint8')[:, :, 0, 0]
co_matriz_2 = greycomatrix(co_matriz_1, dst, ang)
angular_moment = greycoprops(co_matriz_2, 'ASM')[0][0]
correlation = greycoprops(co_matriz_2, 'correlation')[0][0]
contrast = greycoprops(co_matriz_2, 'contrast')[0][0]
# greycoprops has no 'entropy' property; compute it directly from the normalised GLCM
p = co_matriz_2[:, :, 0, 0] / co_matriz_2[:, :, 0, 0].sum()
entropy = -np.sum(p * np.log2(p + 1e-12))
energy = greycoprops(co_matriz_2, 'energy')[0][0]
homogeneity = greycoprops(co_matriz_2, 'homogeneity')[0][0]
###Output
_____no_output_____
###Markdown
**4. Train / Test**
###Code
# Training
training_set = int(len(names)*0.8)
names_training = names[0:training_set]
labels_training = labels[0:training_set]
# Test
test_set = int(len(names)*0.2)
names_test = names[training_set:]
labels_test = labels[training_set:]
# Generando Prueba.h5
utils.make_files(training_set, names_training, in_dir, labels_training,transfer_values_size,image_model_transfer)
# # Generando Pruebavalidation.h5
utils.make_files_test(test_set, names_test, in_dir, labels_test, transfer_values_size, image_model_transfer)
data, target = utils.process_alldata_training()
print(data[0].shape)
print(target[0].shape)
print(len(data))
print(len(target))
data_test, target_test = utils.process_alldata_test()
print(data_test[0].shape)
print(target_test[0].shape)
print(len(data_test))
print(len(target_test))
###Output
73
73
###Markdown
**5. LSTM Architecture**
###Code
chunk_size = 4096
n_chunks = 155
rnn_size = 512 # number of LSTM units
model = Sequential()
model.add(LSTM(rnn_size, input_shape=(n_chunks, chunk_size))) # RNN,GRU
model.add(Dense(1024))
model.add(Activation('relu'))
model.add(Dense(50))
model.add(Activation('sigmoid'))
model.add(Dense(2))
model.add(Activation('softmax'))
model.compile(loss = 'categorical_crossentropy', optimizer = 'adam',metrics = ['accuracy'])
model.summary()
# Split Prueba.h5 into train and validation sets
print(len(data))
print(len(target))
print(data[0].shape)
print(data[1].shape)
print(len(data[0:5])) # 5 MRI records
print(len(data[0:5][0])) # size of one of these records
print(len(data[0:])) # total number of MRI records in Prueba.h5
# Numero de registros en Pruebavalidation.h5
total_train = len(data[0:])
train = int(total_train*0.8)
print("Numero de registros totales en Prueba.h5",total_train)
print("Numero de registros para entrenamiento",train)
print("Numero de registros totales en validation",total_train - train )
# Entrenando
epoch = 20
batchS = 50
history = model.fit(np.array(data[0:train]), np.array(target[0:train]), epochs=epoch,
validation_data=(np.array(data[train:]), np.array(target[train:])),
batch_size=batchS, verbose=1)
###Output
Epoch 1/20
5/5 [==============================] - 5s 522ms/step - loss: 0.5460 - accuracy: 0.7966 - val_loss: 0.5121 - val_accuracy: 0.7966
Epoch 2/20
5/5 [==============================] - 2s 383ms/step - loss: 0.5154 - accuracy: 0.7966 - val_loss: 0.5220 - val_accuracy: 0.7966
Epoch 3/20
5/5 [==============================] - 2s 380ms/step - loss: 0.5218 - accuracy: 0.7966 - val_loss: 0.5051 - val_accuracy: 0.7966
Epoch 4/20
5/5 [==============================] - 2s 374ms/step - loss: 0.5060 - accuracy: 0.7966 - val_loss: 0.5052 - val_accuracy: 0.7966
Epoch 5/20
5/5 [==============================] - 2s 384ms/step - loss: 0.5071 - accuracy: 0.7966 - val_loss: 0.5051 - val_accuracy: 0.7966
Epoch 6/20
5/5 [==============================] - 2s 379ms/step - loss: 0.5066 - accuracy: 0.7966 - val_loss: 0.5051 - val_accuracy: 0.7966
Epoch 7/20
5/5 [==============================] - 2s 376ms/step - loss: 0.5138 - accuracy: 0.7966 - val_loss: 0.5055 - val_accuracy: 0.7966
Epoch 8/20
5/5 [==============================] - 2s 378ms/step - loss: 0.5091 - accuracy: 0.7966 - val_loss: 0.5062 - val_accuracy: 0.7966
Epoch 9/20
5/5 [==============================] - 2s 382ms/step - loss: 0.5161 - accuracy: 0.7966 - val_loss: 0.5076 - val_accuracy: 0.7966
Epoch 10/20
5/5 [==============================] - 2s 378ms/step - loss: 0.5144 - accuracy: 0.7966 - val_loss: 0.5111 - val_accuracy: 0.7966
Epoch 11/20
5/5 [==============================] - 2s 379ms/step - loss: 0.5075 - accuracy: 0.7966 - val_loss: 0.5056 - val_accuracy: 0.7966
Epoch 12/20
5/5 [==============================] - 2s 378ms/step - loss: 0.5066 - accuracy: 0.7966 - val_loss: 0.5081 - val_accuracy: 0.7966
Epoch 13/20
5/5 [==============================] - 2s 377ms/step - loss: 0.5083 - accuracy: 0.7966 - val_loss: 0.5051 - val_accuracy: 0.7966
Epoch 14/20
5/5 [==============================] - 2s 378ms/step - loss: 0.5094 - accuracy: 0.7966 - val_loss: 0.5057 - val_accuracy: 0.7966
Epoch 15/20
5/5 [==============================] - 2s 384ms/step - loss: 0.5059 - accuracy: 0.7966 - val_loss: 0.5053 - val_accuracy: 0.7966
Epoch 16/20
5/5 [==============================] - 2s 377ms/step - loss: 0.5071 - accuracy: 0.7966 - val_loss: 0.5053 - val_accuracy: 0.7966
Epoch 17/20
5/5 [==============================] - 2s 376ms/step - loss: 0.5059 - accuracy: 0.7966 - val_loss: 0.5060 - val_accuracy: 0.7966
Epoch 18/20
5/5 [==============================] - 2s 371ms/step - loss: 0.5076 - accuracy: 0.7966 - val_loss: 0.5052 - val_accuracy: 0.7966
Epoch 19/20
5/5 [==============================] - 2s 378ms/step - loss: 0.5069 - accuracy: 0.7966 - val_loss: 0.5053 - val_accuracy: 0.7966
Epoch 20/20
5/5 [==============================] - 2s 380ms/step - loss: 0.5050 - accuracy: 0.7966 - val_loss: 0.5053 - val_accuracy: 0.7966
###Markdown
**6. Metrics**
###Code
result = model.evaluate(np.array(data_test), np.array(target_test))
for name, value in zip(model.metrics_names, result):
print(name, value)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.savefig('destination_path.eps', format='eps', dpi=1000)
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.savefig('destination_path1.eps', format='eps', dpi=1000)
plt.show()
###Output
_____no_output_____ |
progs/.ipynb_checkpoints/global-spin-of-crf-from-bootstrap-checkpoint.ipynb | ###Markdown
In this notebook, I used the bootstrap resampling method to give a robust estimate of the global spin of the VLBI CRF.
###Code
import matplotlib.pyplot as plt
# from matplotlib.ticker import MultipleLocator
import numpy as np
np.random.seed(28)
# Used for ECDF estimate
import statsmodels.api as sm
from astropy.table import Table, join
from astropy.stats import bootstrap
from tool_func import vsh_fit_for_pm
###Output
_____no_output_____
###Markdown
Load the table for fitted APM and convert the unit of APM.
###Code
apm_tab = Table.read("../data/ts_nju_pm_fit_3sigma-10step.dat", format="ascii.csv")
# convert mas/yr into muas/yr
apm_tab["pmra"] = apm_tab["pmra"] * 1e3
apm_tab["pmra_err"] = apm_tab["pmra_err"] * 1e3
apm_tab["pmdec"] = apm_tab["pmdec"] * 1e3
apm_tab["pmdec_err"] = apm_tab["pmdec_err"] * 1e3
###Output
_____no_output_____
###Markdown
ICRF3 defining source table.
###Code
icrf3_def = Table.read("../data/icrf3sx-def-sou.txt", format="ascii")
###Output
_____no_output_____
###Markdown
Remove sources without an apparent proper motion estimate.
###Code
mask = apm_tab["num_cln"] >= 5
apm_tab = apm_tab[mask]
apm_def = join(icrf3_def, apm_tab, keys="iers_name")
###Output
_____no_output_____
###Markdown
Generate an array of indices for the bootstrap resampling.
###Code
idx = np.arange(len(apm_def), dtype=int)
###Output
_____no_output_____
###Markdown
Create 1000 resampled index arrays (one per bootstrap sample, set by `sample_num`).
###Code
sample_num = 1000
resample_idx = bootstrap(idx, sample_num)
###Output
_____no_output_____
###Markdown
Create empty arrays to store the results.
###Code
resample_wx = np.zeros(sample_num)
resample_wy = np.zeros(sample_num)
resample_wz = np.zeros(sample_num)
resample_w = np.zeros(sample_num)
resample_ra = np.zeros(sample_num)
resample_dec = np.zeros(sample_num)
###Output
_____no_output_____
###Markdown
Do the LSQ fit.
###Code
for i, new_idx in enumerate(resample_idx):
new_table = apm_def[np.array(new_idx, dtype=int)]
pmt, sig, output = vsh_fit_for_pm(new_table)
resample_wx[i] = pmt[0]
resample_wy[i] = pmt[1]
resample_wz[i] = pmt[2]
resample_w[i] = pmt[3]
resample_ra[i] = output["R_ra"]
resample_dec[i] = output["R_dec"]
###Output
_____no_output_____
###Markdown
Assuming that $\omega_x$, $\omega_y$, $\omega_z$, and $\omega$ each follow a Gaussian distribution, I estimate the mean and sigma of each.
###Code
from scipy.stats import norm
mu_wx, std_wx = norm.fit(resample_wx)
mu_wy, std_wy = norm.fit(resample_wy)
mu_wz, std_wz = norm.fit(resample_wz)
mu_w, std_w = norm.fit(resample_w)
mu_ra, std_ra = norm.fit(resample_ra)
mu_dec, std_dec = norm.fit(resample_dec)
# Distribution
rvs_wx = norm(mu_wx, std_wx)
rvs_wy = norm(mu_wy, std_wy)
rvs_wz = norm(mu_wz, std_wz)
rvs_w = norm(mu_w, std_w)
rvs_ra = norm(mu_ra, std_ra)
rvs_dec = norm(mu_dec, std_dec)
###Output
_____no_output_____
###Markdown
Plot the distribution of $\omega_x$, $\omega_y$, $\omega_z$, and $\omega$.
###Code
bin_size = 0.1
bin_array = np.arange(-2.5, 1.5, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_wx,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_wx.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(-2.5, 55, "$\mu={:+.2f}$".format(mu_wx), fontsize=15)
ax.text(-2.5, 45, "$\sigma={:.2f}$".format(std_wx), fontsize=15)
ax.set_xlabel("$\\omega_{\\rm x}$ ($\\mu$as$\,$yr$^{-1}$)", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout
plt.savefig("../plots/spin-x-from-resampled-apm.eps")
bin_size = 0.1
bin_array = np.arange(-2.0, 2.0, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_wy,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_wy.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(-2., 50, "$\mu={:+.2f}$".format(mu_wy), fontsize=15)
ax.text(-2., 40, "$\sigma={:.2f}$".format(std_wy), fontsize=15)
ax.set_xlabel("$\\omega_{\\rm y}$ ($\\mu$as$\,$yr$^{-1}$)", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout()
plt.savefig("../plots/spin-y-from-resampled-apm.eps")
bin_size = 0.1
bin_array = np.arange(-2.0, 2.0, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_wz,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_wz.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(-2., 55, "$\mu={:+.2f}$".format(mu_wz), fontsize=15)
ax.text(-2., 45, "$\sigma={:.2f}$".format(std_wz), fontsize=15)
ax.set_xlabel("$\\omega_{\\rm z}$ ($\\mu$as$\,$yr$^{-1}$)", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout()
plt.savefig("../plots/spin-z-from-resampled-apm.eps")
bin_size = 0.1
bin_array = np.arange(0, 4.1, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_w,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_w.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(0.1, 65, "$\mu={:.2f}$".format(mu_w), fontsize=15)
ax.text(0.1, 55, "$\sigma={:.2f}$".format(std_w), fontsize=15)
ax.set_xlabel("$\\omega$ ($\\mu$as$\,$yr$^{-1}$)", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout()
bin_size = 10
bin_array = np.arange(0, 361, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_ra,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_ra.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(0, 75, "$\mu={:.0f}$".format(mu_ra), fontsize=15)
ax.text(0, 65, "$\sigma={:.0f}$".format(std_ra), fontsize=15)
ax.set_xlabel("$\\alpha_{\\rm apex}$ (degree))", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout()
bin_size = 5
bin_array = np.arange(-90, 91, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_dec,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_dec.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(-80, 85, "$\mu={:-.0f}$".format(mu_dec), fontsize=15)
ax.text(-80, 75, "$\sigma={:.0f}$".format(std_dec), fontsize=15)
ax.set_xlabel("$\\delta_{\\rm apex}$ (degree))", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
All of this was done for the ICRF3 defining source subset. I now repeat the same procedure for all sources with APM estimates. Generate an array of indices for the bootstrap resampling.
###Code
idx = np.arange(len(apm_tab), dtype=int)
###Output
_____no_output_____
###Markdown
Create 1000 resampled index arrays (one per bootstrap sample, set by `sample_num`).
###Code
sample_num = 1000
resample_idx = bootstrap(idx, sample_num)
###Output
_____no_output_____
###Markdown
Create empty arrays to store the results.
###Code
resample_wx = np.zeros(sample_num)
resample_wy = np.zeros(sample_num)
resample_wz = np.zeros(sample_num)
resample_w = np.zeros(sample_num)
resample_ra = np.zeros(sample_num)
resample_dec = np.zeros(sample_num)
###Output
_____no_output_____
###Markdown
Do the LSQ fit.
###Code
for i, new_idx in enumerate(resample_idx):
new_table = apm_tab[np.array(new_idx, dtype=int)]
pmt, sig, output = vsh_fit_for_pm(new_table)
resample_wx[i] = pmt[0]
resample_wy[i] = pmt[1]
resample_wz[i] = pmt[2]
resample_w[i] = pmt[3]
resample_ra[i] = output["R_ra"]
resample_dec[i] = output["R_dec"]
###Output
_____no_output_____
###Markdown
Assuming that $\omega_x$, $\omega_y$, $\omega_z$, and $\omega$ each follow a Gaussian distribution, I estimate the mean and sigma of each.
###Code
mu_wx, std_wx = norm.fit(resample_wx)
mu_wy, std_wy = norm.fit(resample_wy)
mu_wz, std_wz = norm.fit(resample_wz)
mu_w, std_w = norm.fit(resample_w)
mu_ra, std_ra = norm.fit(resample_ra)
mu_dec, std_dec = norm.fit(resample_dec)
# Distribution
rvs_wx = norm(mu_wx, std_wx)
rvs_wy = norm(mu_wy, std_wy)
rvs_wz = norm(mu_wz, std_wz)
rvs_w = norm(mu_w, std_w)
rvs_ra = norm(mu_ra, std_ra)
rvs_dec = norm(mu_dec, std_dec)
###Output
_____no_output_____
###Markdown
Plot the distribution of $\omega_x$, $\omega_y$, $\omega_z$, and $\omega$.
###Code
bin_size = 0.1
bin_array = np.arange(-2., 2, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_wx,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_wx.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(-2., 55, "$\mu={:+.2f}$".format(mu_wx), fontsize=15)
ax.text(-2., 45, "$\sigma={:.2f}$".format(std_wx), fontsize=15)
ax.set_xlabel("$\\omega_{\\rm x}$ ($\\mu$as$\,$yr$^{-1}$)", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout()
bin_size = 0.1
bin_array = np.arange(-2.0, 2.0, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_wy,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_wy.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(-2., 70, "$\mu={:+.2f}$".format(mu_wy), fontsize=15)
ax.text(-2., 60, "$\sigma={:.2f}$".format(std_wy), fontsize=15)
ax.set_xlabel("$\\omega_{\\rm y}$ ($\\mu$as$\,$yr$^{-1}$)", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout()
bin_size = 0.1
bin_array = np.arange(-2.0, 2.0, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_wz,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_wz.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(-2., 70, "$\mu={:+.2f}$".format(mu_wz), fontsize=15)
ax.text(-2., 60, "$\sigma={:.2f}$".format(std_wz), fontsize=15)
ax.set_xlabel("$\\omega_{\\rm z}$ ($\\mu$as$\,$yr$^{-1}$)", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout()
bin_size = 0.1
bin_array = np.arange(0, 2.1, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_w,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_w.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(0.1, 90, "$\mu={:.2f}$".format(mu_w), fontsize=15)
ax.text(0.1, 80, "$\sigma={:.2f}$".format(std_w), fontsize=15)
ax.set_xlabel("$\\omega$ ($\\mu$as$\,$yr$^{-1}$)", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout()
bin_size = 10
bin_array = np.arange(0, 361, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_ra,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_ra.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(0, 40, "$\mu={:.0f}$".format(mu_ra), fontsize=15)
ax.text(0, 30, "$\sigma={:.0f}$".format(std_ra), fontsize=15)
ax.set_xlabel("$\\alpha_{\\rm apex}$ (degree))", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout()
bin_size = 5
bin_array = np.arange(-90, 91, bin_size)
fig, ax = plt.subplots()
ax.hist(resample_dec,
bins=bin_array,
color="grey",
fill=False,
label="All")
ax.plot(bin_array, rvs_dec.pdf(bin_array)*sample_num*bin_size, "r--")
ax.text(-80, 60, "$\mu={:-.0f}$".format(mu_dec), fontsize=15)
ax.text(-80, 50, "$\sigma={:.0f}$".format(std_dec), fontsize=15)
ax.set_xlabel("$\\delta_{\\rm apex}$ (degree))", fontsize=15)
ax.set_ylabel("Nb sources in bins", fontsize=15)
plt.tight_layout()
###Output
_____no_output_____ |
eu_lstm.model_train.ipynb | ###Markdown
Model Train
###Code
lstm_history = lstm.fit(X_train, y_train, epochs=5, batch_size=batch_size, shuffle=True, validation_data=(X_val, y_val))
lr_model = pipeline.fit(mlr_train)
###Output
_____no_output_____ |
Model/RandomForest.ipynb | ###Markdown
Importing libraries
###Code
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from collections import Counter
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
###Output
_____no_output_____
###Markdown
Reading dataset
###Code
df = pd.read_csv('./cleaned_tweets.csv')
df.head()
###Output
_____no_output_____
###Markdown
Drop text
###Code
df = df[['sentiment', 'Snowball_Stem']]
df.head()
###Output
_____no_output_____
###Markdown
Removing rows with nan
###Code
df.isna().sum()
df = df.dropna()
df.isna().sum()
###Output
_____no_output_____
###Markdown
Splitting into test and train
###Code
y= df.iloc[:,0:1].values
x = df.iloc[:,1].values
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=0)
###Output
_____no_output_____
###Markdown
Reducing dataframe size
###Code
reduced_df = pd.concat([df[df.sentiment != 0][:50000], df[df.sentiment == 0][:50000]])
reduced_df.shape
x=reduced_df['Snowball_Stem']
y=reduced_df['sentiment']
x_train,x_test,y_train,y_test = train_test_split(x, y)
x_train.shape, x_test.shape,y_train.shape,y_test.shape
###Output
_____no_output_____
###Markdown
Tf-Idf unigram
###Code
v1 = TfidfVectorizer()
v1.fit(x)
x1_train = v1.transform(x_train)
x1_test = v1.transform(x_test)
###Output
_____no_output_____
###Markdown
Tf-Idf bigram
###Code
v2 = TfidfVectorizer(ngram_range = (2, 2))
v2.fit(x)
x2_train = v2.transform(x_train)
x2_test = v2.transform(x_test)
###Output
_____no_output_____
###Markdown
Tf-Idf unigram+bigram
###Code
X = df["Snowball_Stem"]
len(X)
v3 = TfidfVectorizer(ngram_range = (1, 2))
v3.fit(X)
x3_train = v3.transform(x_train)
x3_test = v3.transform(x_test)
###Output
_____no_output_____
###Markdown
Encoding labels
###Code
Encoder = LabelEncoder()
y_train = Encoder.fit_transform(y_train)
y_test = Encoder.fit_transform(y_test)
x3_test
rfc=RandomForestClassifier(n_estimators=10,random_state=0)
rfc.fit(x3_train,y_train)
rfc_pred=rfc.predict(x3_test)
accuracy_score(rfc_pred,y_test)
###Output
_____no_output_____
###Markdown
Saving model
###Code
import pickle
RFC_model_path = "./RFC_UnigramBigram_72.pickle"
vectorizer_path ="./UnigramBigram_vectorizer2.pickle"
pickle.dump(rfc, open(RFC_model_path, 'wb'))
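# `vectorizer_path` is defined above but never used; assuming the intent was to also
# persist the fitted TF-IDF vectorizer (v3) so new text can be transformed later:
pickle.dump(v3, open(vectorizer_path, 'wb'))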
###Output
_____no_output_____ |
ProblemSet3.ipynb | ###Markdown
Problem Set 3
###Code
NUM_CASES = 2000
#Setup
import warnings; warnings.simplefilter('ignore')
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
import os
import numpy as np
df1 = pd.read_csv('data/cases_metadata.csv')[['caseid','case_reversed','judge_id','year','x_republican','log_cites']]
df1.dropna(subset=['x_republican'], inplace=True)
df1.dropna(subset=['log_cites'],inplace=True)
print(df1.isnull().sum())
from random import shuffle
keep = [True] * NUM_CASES + [False] * (len(df1) - NUM_CASES)
shuffle(keep)
df1 = df1[keep]
print('Number of rows: ',len(df1))
df1.head()
# load text documents
tmp=[]
for i in range(len(df1)):
caseid=df1.iloc[i][0]
caseid=caseid+'.txt'
txt_file = [f for f in os.listdir('data/cases/') if f.endswith(caseid)]
path='data/cases/'+txt_file[0]
txt = open(path, 'r').read() # open a document
tmp.append(txt)
df1['text']=tmp
df1.head()
# Capitalization
def capitalization(doc):
return doc.lower()
df1['doc'] = df1['text'].apply(capitalization) # go to lower-case
#####
# Punctuation
#####
# recipe for fast punctuation removal
from string import punctuation
def remove_punctuation(doc):
translator = str.maketrans('','',punctuation)
return doc.translate(translator)
df1['doc'] = df1['doc'].apply(remove_punctuation)
# Tokens
def tokenize(doc):
return doc.split()
df1['doc'] = df1['doc'].apply(tokenize)
# remove numbers (keep if not a digit)
def remove_numbers(doc):
return [t for t in doc if not t.isdigit()]
df1['doc'] = df1['doc'].apply(remove_numbers)
df1.head()
# Stopwords
from nltk.corpus import stopwords
stoplist = stopwords.words('english')
def remove_stopwords(doc):
return [t for t in doc if t not in stoplist]
df1['doc'] = df1['doc'].apply(remove_stopwords)
# Stemming
from nltk.stem import SnowballStemmer
stemmer = SnowballStemmer('english') # snowball stemmer, english
def stemming(doc):
return [stemmer.stem(t) for t in doc]
df1['doc'] = df1['doc'].apply(stemming)
# Lemmatizing
#import nltk
#from nltk.stem import WordNetLemmatizer
#nltk.download('wordnet')
#wnl = WordNetLemmatizer()
#def lemmatizing(doc):
# return [wnl.lemmatize(t) for t in doc]
#wnl.lemmatize('corporation'), wnl.lemmatize('corporations')
#df1['doc'] = df1['doc'].apply(lemmatizing)
# join the cleaned tokens back into a single string
def remove_tokens(doc):
return " ".join(doc)
df1['doc'] = df1['doc'].apply(remove_tokens)
df1.head()
###Output
_____no_output_____
###Markdown
1) Train a word embedding (Word2Vec, GloVe, ELMo, BERT, etc) on your corpus, once with a small window (e.g. 2) and again with a long window (e.g. 16). What do you expect to change for the different window sizes? Pick a sample of 100 words and visualize them in two dimensions, to demonstrate the difference between the models.
###Code
###
# Word2Vec in gensim (short window)
###
# word2vec requires sentences as input
from txt_utils import get_sentences
sentences = []
for doc in df1['doc']:
sentences += get_sentences(doc)
from random import shuffle
shuffle(sentences) # stream in sentences in random order
# train the model
from gensim.models import Word2Vec
w2v_short = Word2Vec(sentences, # list of tokenized sentences
workers = 8, # Number of threads to run in parallel
size=300, # Word vector dimensionality
min_count = 20, # Minimum word count
window = 2, # Context window size
sample = 1e-3, # Downsample setting for frequent words
)
# done training, so delete context vectors
w2v_short.init_sims(replace=True)
w2v_short.save('w2v-vectors-short_window.pkl')
#w2v.wv['judg'] # vector for "judge"
###
# Word2Vec in gensim (long window)
###
# word2vec requires sentences as input
from txt_utils import get_sentences
sentences = []
for doc in df1['doc']:
sentences += get_sentences(doc)
from random import shuffle
shuffle(sentences) # stream in sentences in random order
# train the model
from gensim.models import Word2Vec
w2v_long = Word2Vec(sentences, # list of tokenized sentences
workers = 8, # Number of threads to run in parallel
size=300, # Word vector dimensionality
min_count = 20, # Minimum word count
window = 16, # Context window size
sample = 1e-3, # Downsample setting for frequent words
)
# done training, so delete context vectors
w2v_long.init_sims(replace=True)
w2v_long.save('w2v-vectors-long_window.pkl')
#w2v.wv['judg'] # vector for "judge"
w2v_short.wv.most_similar('judg') # most similar words
w2v_long.wv.most_similar('judg') # most similar words
from sklearn.manifold import TSNE
import re
vocab_long = list(dict(list(w2v_long.wv.vocab.items())[:100]))#select just 100 words #w2v_long.wv.vocab
X = w2v_long[vocab_long]
tsne = TSNE(n_components=2)
X_tsne = tsne.fit_transform(X)
df_long = pd.DataFrame(X_tsne, columns=['x', 'y'])
df_long['word']=vocab_long
df_long.head()
vocab_short = list(dict(list(w2v_short.wv.vocab.items())[:100]))  # select just 100 words, this time from the short-window model's vocabulary
X = w2v_short[vocab_short]
tsne = TSNE(n_components=2)
X_tsne = tsne.fit_transform(X)
df_short = pd.DataFrame(X_tsne, columns=['x', 'y'])
df_short['word']=vocab_short
df_short.head()
import ggplot as gg
chart = gg.ggplot( df_long, gg.aes(x='x', y='y', label='word') ) \
+ gg.geom_text(size=10, alpha=.8, label='word')
chart.show()
chart = gg.ggplot( df_short, gg.aes(x='x', y='y', label='word') ) \
+ gg.geom_text(size=10, alpha=.8, label='word')
chart.show()
###Output
_____no_output_____
###Markdown
In the model with the smaller window, semantically similar words such as 'base' and 'ground' are closer to each other, because a window size of 2 only considers the immediately neighbouring words in a sentence. With a bigger window size the words are analysed in a larger context, which is why more loosely related words like 'reject' and 'regard' end up closer to 'base' in the model with window size 16 (a quick way to check this is sketched in the comments at the top of the next code cell). 2) Train separate word embeddings for Republican and Democrat judges. Use your word embeddings to list the adjectives most associated with a social group or concept of your choice, and analyze differences by judge party.
###Code
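# Added check (not part of the original notebook): the window-size claim in the markdown above
# could be verified directly, assuming both stems are in each model's vocabulary, e.g.:
#   w2v_short.wv.similarity(stemmer.stem('base'), stemmer.stem('ground'))
#   w2v_long.wv.similarity(stemmer.stem('base'), stemmer.stem('ground'))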
df_rep=df1[df1['x_republican']==1]
print(len(df_rep))
df_rep.head()
df_dem=df1[df1['x_republican']==0]
print(len(df_dem))
df_dem.head()
###
# Word2Vec in gensim for republican
###
# word2vec requires sentences as input
from txt_utils import get_sentences
sentences = []
for doc in df_rep['doc']:
sentences += get_sentences(doc)
from random import shuffle
shuffle(sentences) # stream in sentences in random order
# train the model
from gensim.models import Word2Vec
w2v_rep = Word2Vec(sentences, # list of tokenized sentences
workers = 8, # Number of threads to run in parallel
size=300, # Word vector dimensionality
min_count = 20, # Minimum word count
window = 6, # Context window size
sample = 1e-3, # Downsample setting for frequent words
)
# done training, so delete context vectors
w2v_rep.init_sims(replace=True)
w2v_rep.save('w2v-vectors-republican.pkl')
#w2v.wv['judg'] # vector for "judge"
###
# Word2Vec in gensim for democrats
###
# word2vec requires sentences as input
from txt_utils import get_sentences
sentences = []
for doc in df_dem['doc']:
sentences += get_sentences(doc)
from random import shuffle
shuffle(sentences) # stream in sentences in random order
# train the model
from gensim.models import Word2Vec
w2v_dem = Word2Vec(sentences, # list of tokenized sentences
workers = 8, # Number of threads to run in parallel
size=300, # Word vector dimensionality
min_count = 20, # Minimum word count
window = 6, # Context window size
sample = 1e-3, # Downsample setting for frequent words
)
# done training, so delete context vectors
w2v_dem.init_sims(replace=True)
w2v_dem.save('w2v-vectors-democrats.pkl')
#w2v.wv['judg'] # vector for "judge"
w2v_rep.wv.most_similar('judg') # most similar words
import spacy
nlp = spacy.load('en_core_web_sm')
i=0
republican=[]
for tmp in w2v_rep.wv.most_similar(stemmer.stem('negro'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
republican.append(tmp[0])
i=i+1
i=0
democrat=[]
for tmp in w2v_dem.wv.most_similar(stemmer.stem('negro'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
democrat.append(tmp[0])
i=i+1
print('adjectives most associated with the word','\033[1m','negro','\033[0m','\n')
print('republican: democrats: \n')
for i in range(len(democrat)):
print("%-20s %-20s" % (republican[i], democrat[i]))
i=0
republican=[]
for tmp in w2v_rep.wv.most_similar(stemmer.stem('communist'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
republican.append(tmp[0])
i=i+1
i=0
democrat=[]
for tmp in w2v_dem.wv.most_similar(stemmer.stem('communist'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
democrat.append(tmp[0])
i=i+1
print('adjectives most associated with the word','\033[1m','communist','\033[0m','\n')
print('republican: democrats: \n')
for i in range(len(democrat)):
print("%-20s %-20s" % (republican[i], democrat[i]))
i=0
republican=[]
for tmp in w2v_rep.wv.most_similar(stemmer.stem('poor'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
republican.append(tmp[0])
i=i+1
i=0
democrat=[]
for tmp in w2v_dem.wv.most_similar(stemmer.stem('poor'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
democrat.append(tmp[0])
i=i+1
print('adjectives most associated with the word','\033[1m','poor','\033[0m','\n')
print('republican: democrats: \n')
for i in range(len(democrat)):
print("%-20s %-20s" % (republican[i], democrat[i]))
i=0
republican=[]
for tmp in w2v_rep.wv.most_similar(stemmer.stem('christian'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
republican.append(tmp[0])
i=i+1
i=0
democrat=[]
for tmp in w2v_dem.wv.most_similar(stemmer.stem('christian'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
democrat.append(tmp[0])
i=i+1
print('adjectives most associated with the word','\033[1m','christian','\033[0m','\n')
print('republican: democrats: \n')
for i in range(len(democrat)):
print("%-20s %-20s" % (republican[i], democrat[i]))
i=0
republican=[]
for tmp in w2v_rep.wv.most_similar(stemmer.stem('indian'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
republican.append(tmp[0])
i=i+1
i=0
democrat=[]
for tmp in w2v_dem.wv.most_similar(stemmer.stem('indian'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
democrat.append(tmp[0])
i=i+1
print('adjectives most associated with the word','\033[1m','indian','\033[0m','\n')
print('republican: democrats: \n')
for i in range(len(democrat)):
print("%-20s %-20s" % (republican[i], democrat[i]))
i=0
republican=[]
for tmp in w2v_rep.wv.most_similar(stemmer.stem('american'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
republican.append(tmp[0])
i=i+1
i=0
democrat=[]
for tmp in w2v_dem.wv.most_similar(stemmer.stem('american'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
democrat.append(tmp[0])
i=i+1
print('adjectives most associated with the word','\033[1m','american','\033[0m','\n')
print('republican: democrats: \n')
for i in range(len(democrat)):
print("%-20s %-20s" % (republican[i], democrat[i]))
i=0
republican=[]
for tmp in w2v_rep.wv.most_similar(stemmer.stem('banker'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
republican.append(tmp[0])
i=i+1
i=0
democrat=[]
for tmp in w2v_dem.wv.most_similar(stemmer.stem('banker'),topn=100)[:][:]:
doc = nlp(tmp[0])
if doc[0].pos_=='ADJ' and i < 15:
democrat.append(tmp[0])
i=i+1
print('adjectives most associated with the word','\033[1m','banker','\033[0m','\n')
print('republican: democrats: \n')
for i in range(len(democrat)):
print("%-20s %-20s" % (republican[i], democrat[i]))
###Output
adjectives most associated with the word [1m banker [0m
republican: democrats:
louisvil manhattan
kan reynold
lincoln louisvil
cas vend
lawrenc midwest
southwest wet
columbus red
nashvil sidney
lloyd syndic
vincent theatr
canadian bondhold
reynold roy
bros lawrenc
stuart reinsur
aerospac plumb
|
.ipynb_checkpoints/01_gsw_tools-checkpoint.ipynb | ###Markdown
01_gsw_tools.ipynb---This notebook uses the Gibbs-SeaWater (GSW) Oceanographic Toolbox containing the TEOS-10 subroutines for evaluating the thermodynamic properties of seawater. Specifically we are going to calculate: **1.** Absolute Salinity ($S_A$) from Argo practical salinity **2.** Conservative Temperature ($\theta$) from Argo in-situ temperature **3.** Potential Density Anomaly ($\sigma_t$) with reference pressure of 0 dbar using the calculated $S_A$ and $\theta$Anomalies in $S_A$, $\theta$, and $\sigma_t$ are computed by removing the long-term (January 2004 – December 2020) monthly mean at each space and pressure point. Roemmich and Gilson Argo ClimatologyThe data we are using is from the updated **[Roemmich and Gilson (RG) Argo Climatology](http://sio-argo.ucsd.edu/RG_Climatology.html)** from the Scripps Instition of Oceanography. This data contains monthly ocean temperature and salinity on 58 levels from 2.5 to 2000 dbars from January 2004 through present. The RG climatology has a regular global 1 degree grid and is available as NetCDF only. It contains only data from Argo floats using optimal interpolation. This data product is described in further detail in [Roemmich and Gilson (2009)](https://www.sciencedirect.com/science/article/abs/pii/S0079661109000160?via%3Dihub)
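In formula form, the de-seasonalised anomaly used below is $X'(t, p, \phi, \lambda) = X(t, p, \phi, \lambda) - \overline{X}_{m(t)}(p, \phi, \lambda)$, where $X$ stands for any of $S_A$, $\theta$ or $\sigma_t$ and $\overline{X}_{m(t)}$ is the January 2004 – December 2020 mean over all time steps that share the calendar month $m(t)$.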
###Code
import xarray as xr
import dask
import gsw
import numpy as np
import numpy.matlib
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
%%time
# Import the netCDF files that are downloaded when running 01_get_data.sh from the command line
data_path = '/glade/scratch/scanh/RG-argo-climatology/*'
dask.config.set({"array.slicing.split_large_chunks": False})
ds = xr.open_mfdataset(data_path, decode_times=False)
dyr = ds.TIME
dates = np.arange('2004-01', '2021-01', dtype='datetime64[M]')
ds = ds.assign(TIME = dates)
def monthly_anomaly_noseason(time, t_anom, t_mean, s_anom, s_mean):
t_tot = t_anom + t_mean
s_tot = s_anom + s_mean
t_clim = t_tot.groupby('TIME.month').mean()
s_clim = s_tot.groupby('TIME.month').mean()
t_anom_noseason = np.empty(t_tot.shape)
t_anom_noseason[:] = np.nan
for i in enumerate(t_clim.month.values):
I = np.where(time.dt.month == i[1])[0]
t_anom_noseason[I,:,:,:] = t_tot[I,:,:,:] - t_clim[i[0],:,:,:]
t_anom_noseason = xr.DataArray(t_anom_noseason, dims=t_tot.dims, coords=t_tot.coords)
s_anom_noseason = np.empty(s_tot.shape)
s_anom_noseason[:] = np.nan
for i in enumerate(s_clim.month.values):
I = np.where(time.dt.month == i[1])[0]
s_anom_noseason[I] = s_tot[I,:,:,:] - s_clim[i[0],:,:,:]
s_anom_noseason = xr.DataArray(s_anom_noseason, dims=s_tot.dims, coords=s_tot.coords)
return t_tot, s_tot, t_anom_noseason, s_anom_noseason
%%time
t_tot, s_tot, t_anom_noseason, s_anom_noseason = monthly_anomaly_noseason(ds.TIME, ds.ARGO_TEMPERATURE_ANOMALY, ds.ARGO_TEMPERATURE_MEAN, ds.ARGO_SALINITY_ANOMALY, ds.ARGO_SALINITY_MEAN)
blob1 = t_anom_noseason.sel(TIME='2019-12-01')
plt.pcolormesh(blob1.LONGITUDE, blob1.LATITUDE, blob1[0,:,:], vmin=-3, vmax=3, cmap='RdBu_r'); plt.colorbar()
plt.plot(215.5, 45.5,'k*',ms=12)
blob1_ts = t_anom_noseason.sel(LONGITUDE=215.5, LATITUDE=45.5)
plt.plot(t_anom_noseason.TIME,blob1_ts[:,0])
s_tot.load();
ds.load();
###Output
_____no_output_____
###Markdown
--- **1**. Absolute Salinity ($S_A$) from Practical Salinity. gsw.SA_from_SP(SP, p, lon, lat) Calculates Absolute Salinity from Practical Salinity. Since SP is non-negative by definition, this function changes any negative input values of SP to be zero.**Parameters**:- *SP* array-like Practical Salinity (PSS-78), unitless- $p$ array-like Sea pressure (absolute pressure minus 10.1325 dbar), dbar- $lon$ array-like Longitude, -360 to 360 degrees- $lat$ array-like Latitude, -90 to 90 degrees**Returns**:- Absolute Salinity ($S_A$) array-like, g/kg Absolute Salinity
###Code
%%time
SA_tot = np.empty(s_tot.shape)
SA_tot[:] = np.nan
# Compute Absolute Salinity
for la in np.arange(0, ds.LATITUDE.shape[0]):
for lo in np.arange(0, ds.LONGITUDE.shape[0]):
SA_tot[:,:,la,lo] = gsw.SA_from_SP(s_tot[:,:,la,lo].values, ds.PRESSURE[:].values, 360-ds.LONGITUDE[lo].values, ds.LATITUDE[la].values)
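# Added note (not part of the original notebook): GSW-Python is assumed here to broadcast NumPy
# arrays, in which case the double loop above could be replaced by a single vectorized call, e.g.
#   lon2d, lat2d = np.meshgrid(360 - ds.LONGITUDE.values, ds.LATITUDE.values)
#   SA_tot = gsw.SA_from_SP(s_tot.values,
#                           ds.PRESSURE.values[None, :, None, None],
#                           lon2d[None, None, :, :],
#                           lat2d[None, None, :, :])
# This broadcasting behaviour is untested here, so the explicit loop is kept as the reference.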
###Output
CPU times: user 2min, sys: 1.04 s, total: 2min 1s
Wall time: 2min 2s
###Markdown
--- **2**. Conservative Temperature ($\Theta$) from in-situ temperature. gsw.CT_from_t(SA, t, p)Calculates Conservative Temperature of seawater from in-situ temperature.**Parameters**: - $S_A$ array-like Absolute Salinity, g/kg- $t$ array-like In-situ temperature (ITS-90), degrees C- $p$ array-like Sea pressure (absolute pressure minus 10.1325 dbar), dbar**Returns**:- Conservative Temperature ($\Theta$) array-like, deg C Conservative Temperature (ITS-90)
###Code
t_tot.load();
%%time
CT_tot = np.empty(t_tot.shape)
CT_tot[:] = np.nan
for p in np.arange(0, ds.PRESSURE.shape[0]):
CT_tot[:,p,:,:] = gsw.CT_from_t(SA_tot[:,p,:,:], t_tot[:,p,:,:].values, ds.PRESSURE[p].values)
###Output
CPU times: user 1min 10s, sys: 3.14 s, total: 1min 13s
Wall time: 1min 13s
###Markdown
--- **3**. Potential Density Anomaly ($\sigma_t$) from absolute salinity and conservative temperature gsw.density.sigma0(SA, CT)Calculates potential density anomaly with reference pressure of 0 dbar, this being this particular potential density minus 1000 kg/m^3. This function has inputs of Absolute Salinity and Conservative Temperature. This function uses the computationally-efficient expression for specific volume in terms of SA, CT and p (Roquet et al., 2015).**Parameters**:- $S_A$ array-like Absolute Salinity, g/kg- $CT$ array-like Conservative Temperature (ITS-90), degrees C**Returns**:- Potential Density Anomaly ($\sigma_t$) array-like, kg/m$^{3}$ potential density anomaly with respect to a reference pressure of 0 dbar, that is, this potential density - 1000 kg/m$^{3}$.
###Code
%%time
sigmaT_tot = np.empty(t_tot.shape)
sigmaT_tot[:] = np.nan
for p in np.arange(0, ds.PRESSURE.shape[0]):
sigmaT_tot[:,p,:,:] = gsw.density.sigma0(SA_tot[:,p,:,:], CT_tot[:,p,:,:])
###Output
CPU times: user 11 s, sys: 2.08 s, total: 13.1 s
Wall time: 13.1 s
###Markdown
---**Compute anomalies in $S_A$, $\theta$, and $\sigma_t$** by removing the long-term (January 2004 – December 2020) monthly mean at each space and pressure point.
###Code
# Initialize numpy matrices
SA_anom = np.empty(s_tot.shape)
SA_anom[:] = np.nan
CT_anom = np.empty(t_tot.shape)
CT_anom[:] = np.nan
sigmaT_anom = np.empty(t_tot.shape)
sigmaT_anom[:] = np.nan
for i in np.arange(1,13):
I = np.where(ds.TIME.dt.month==i)[0]
CT_mn = CT_tot[I,:,:,:].mean(axis=0)
CT_anom[I,:,:,:] = CT_tot[I,:,:,:] - CT_mn
SA_mn = SA_tot[I,:,:,:].mean(axis=0)
SA_anom[I,:,:,:] = SA_tot[I,:,:,:] - SA_mn
sigt_mn = sigmaT_tot[I,:,:,:].mean(axis=0)
sigmaT_anom[I,:,:,:] = sigmaT_tot[I,:,:,:] - sigt_mn
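# Added note (not part of the original notebook): if CT_tot / SA_tot / sigmaT_tot were wrapped as
# xarray DataArrays carrying the TIME coordinate, the same de-seasonalisation could likely be
# written with groupby arithmetic, e.g.
#   anom = tot.groupby('TIME.month') - tot.groupby('TIME.month').mean('TIME')
# The explicit month loop above is kept as the reference implementation.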
###Output
_____no_output_____
###Markdown
---**Save** the total and anomaly fields of $S_A$, $\theta$, and $\sigma_t$ to a NetCDF file
###Code
RG_GSW_anoms = xr.Dataset({'SA_tot': (('TIME', 'PRESSURE', 'LATITUDE', 'LONGITUDE'), SA_tot),
'CT_tot': (('TIME', 'PRESSURE', 'LATITUDE', 'LONGITUDE'), CT_tot),
'sigmaT_tot': (('TIME','PRESSURE', 'LATITUDE', 'LONGITUDE'), sigmaT_tot),
'SA_anom': (('TIME', 'PRESSURE', 'LATITUDE', 'LONGITUDE'), SA_anom),
'CT_anom': (('TIME', 'PRESSURE', 'LATITUDE', 'LONGITUDE'), CT_anom),
'sigmaT_anom': (('TIME','PRESSURE', 'LATITUDE', 'LONGITUDE'), sigmaT_anom),
'mask': (('PRESSURE', 'LATITUDE', 'LONGITUDE'), ds.MAPPING_MASK.values)},
coords={'TIME': ds.TIME,
'PRESSURE': ds.PRESSURE,
'LATITUDE': ds.LATITUDE,
'LONGITUDE': ds.LONGITUDE})
RG_GSW_anoms.to_netcdf("/glade/scratch/scanh/climate2020/CT_SA_sigmaT_RG09_d18_m01_y2021.nc")
###Output
_____no_output_____ |
cry101.ipynb | ###Markdown
Let's try some code 1. Install stuff here - pandas - sqlalchemy - requests
###Code
#!pip install python-binance
import pandas as pd
import sqlalchemy
from binance import Client, ThreadedWebsocketManager, ThreadedDepthCacheManager, BinanceSocketManager
# load the Binance API keys (api_key / api_secret are assumed to be provided by the local binance_keys module)
from addons import binance_keys
client = Client(api_key, api_secret)
bsm = BinanceSocketManager(client)
socket = bsm.trade_socket('BTCUSDT')
await socket.__aenter__()
msg = await socket.recv()
print(msg)
###Output
_____no_output_____ |
LSTM-testing.ipynb | ###Markdown
Let's start by downloading the data:
###Code
## Note: Linux bash commands start with a "!" inside those "ipython notebook" cells
#
DATA_PATH = "data/"
#
#!pwd && ls
#os.chdir(DATA_PATH)
#!pwd && ls
#
#!python download_dataset.py
#
#!pwd && ls
#os.chdir("..")
#!pwd && ls
#
DATASET_PATH = DATA_PATH + "UCI HAR Dataset/"
print("\n" + "Dataset is now located at: " + DATASET_PATH)
#
###Output
Dataset is now located at: data/UCI HAR Dataset/
###Markdown
Preparing dataset:
###Code
TRAIN = "train/"
TEST = "test/"
X_train_signals_paths = [DATASET_PATH + TRAIN + "Inertial Signals/" + signal + "train.txt" for signal in INPUT_SIGNAL_TYPES]
'''
['data/UCI HAR Dataset/train/Inertial Signals/body_acc_x_train.txt',
'data/UCI HAR Dataset/train/Inertial Signals/body_acc_y_train.txt',
'data/UCI HAR Dataset/train/Inertial Signals/body_acc_z_train.txt',
'data/UCI HAR Dataset/train/Inertial Signals/body_gyro_x_train.txt',
'data/UCI HAR Dataset/train/Inertial Signals/body_gyro_y_train.txt',
'data/UCI HAR Dataset/train/Inertial Signals/body_gyro_z_train.txt',
'data/UCI HAR Dataset/train/Inertial Signals/total_acc_x_train.txt',
'data/UCI HAR Dataset/train/Inertial Signals/total_acc_y_train.txt',
'data/UCI HAR Dataset/train/Inertial Signals/total_acc_z_train.txt']
'''
X_test_signals_paths = [DATASET_PATH + TEST + "Inertial Signals/" + signal + "test.txt" for signal in INPUT_SIGNAL_TYPES]
'''
['data/UCI HAR Dataset/test/Inertial Signals/body_acc_x_test.txt',
'data/UCI HAR Dataset/test/Inertial Signals/body_acc_y_test.txt',
'data/UCI HAR Dataset/test/Inertial Signals/body_acc_z_test.txt',
'data/UCI HAR Dataset/test/Inertial Signals/body_gyro_x_test.txt',
'data/UCI HAR Dataset/test/Inertial Signals/body_gyro_y_test.txt',
'data/UCI HAR Dataset/test/Inertial Signals/body_gyro_z_test.txt',
'data/UCI HAR Dataset/test/Inertial Signals/total_acc_x_test.txt',
'data/UCI HAR Dataset/test/Inertial Signals/total_acc_y_test.txt',
'data/UCI HAR Dataset/test/Inertial Signals/total_acc_z_test.txt']
'''
##################################################
# Load "X" (the neural network's training and testing inputs)
##################################################
def load_X(X_signals_paths):
X_signals = []
for signal_type_path in X_signals_paths:
file = open(signal_type_path, 'r')
# Read dataset from disk, dealing with text files' syntax
X_signals.append([np.array(serie, dtype=np.float32) for serie in [row.replace('  ', ' ').strip().split(' ') for row in file]])  # collapse the double spaces used as separators in the UCI files
file.close()
return np.transpose(np.array(X_signals), (1, 2, 0))
X_train = load_X(X_train_signals_paths) # (7352, 128, 9)
X_test = load_X(X_test_signals_paths) # (2947, 128, 9)
##################################################
# Load "y" (the neural network's training and testing outputs)
##################################################
def load_y(y_path):
file = open(y_path, 'r')
# Read dataset from disk, dealing with text file's syntax
y_ = np.array([elem for elem in [row.replace('  ', ' ').strip().split(' ') for row in file]], dtype=np.int32)  # collapse double spaces before splitting
file.close()
# Substract 1 to each output class for friendly 0-based indexing
return y_ - 1
y_train_path = DATASET_PATH + TRAIN + "y_train.txt"
''' data/UCI HAR Dataset/train/y_train.txt '''
y_test_path = DATASET_PATH + TEST + "y_test.txt"
''' data/UCI HAR Dataset/test/y_test.txt '''
y_train = load_y(y_train_path) # (7352, 1)
y_test = load_y(y_test_path) # (2947, 1)
###Output
_____no_output_____
###Markdown
Additional Parameters: Here are some core parameter definitions for the training. The whole neural network's structure can be summarised by these parameters plus the fact that two LSTMs are stacked one on top of the other (the output of the first feeding the input of the second) as hidden layers through the time steps.
###Code
# Input Data
training_data_count = len(X_train) # 7352 training series (with 50% overlap between each serie)
test_data_count = len(X_test) # 2947 testing series
n_steps = len(X_train[0]) # 128 timesteps per series
n_input = len(X_train[0][0]) # 9 input parameters per timestep
# LSTM Neural Network's internal structure
n_hidden = 32 # Hidden layer num of features
n_classes = 6 # Total classes (the 6 activities: WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING)
# Training
learning_rate = 0.0025
lambda_loss_amount = 0.0015
training_iters = training_data_count * 300 # Loop 300 times on the dataset
batch_size = 1500
display_iter = 30000 # To show test set accuracy during training
# Some debugging info
print("Some useful info to get an insight on dataset's shape and normalisation:")
print("(X shape, y shape, every X's mean, every X's standard deviation)")
print(X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))
print("The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.")
print(('X_train: {}').format(X_train.shape))
print(('X_test: {}').format(X_test.shape))
print(('y_train: {}').format(y_train.shape))
print(('y_test: {}').format(y_test.shape))
##128: readings/window
##[acc_x", "acc_y", "acc_z", "gyro_x", "gyro_y", "gyro_z", "total_acc_x", "total_acc_y" , "total_acc_z"]
##
##["WALKING", "WALKING_UPSTAIRS", "WALKING_DOWNSTAIRS", "SITTING", "STANDING", "LAYING"]
X_train[0][0]
X_train[0][1]
X_train[0][0]
###Output
_____no_output_____
###Markdown
Utility functions for training:
###Code
def LSTM_RNN(_X, _weights, _biases):
# Function returns a tensorflow LSTM (RNN) artificial neural network from given parameters.
# Moreover, two LSTM cells are stacked which adds deepness to the neural network.
# Note, some code of this notebook is inspired from an slightly different
# RNN architecture used on another dataset, some of the credits goes to
# "aymericdamien" under the MIT license.
# (NOTE: this step could be greatly optimised by shaping the dataset once instead of reshaping it at every call)
# input shape: (batch_size, n_steps, n_input)
_X = tf.transpose(_X, [1, 0, 2]) # permute n_steps and batch_size
# Reshape to prepare input to hidden activation
_X = tf.reshape(_X, [-1, n_input])
# new shape: (n_steps*batch_size, n_input)
# ReLU activation, thanks to Yu Zhao for adding this improvement here:
_X = tf.nn.relu(tf.matmul(_X, _weights['hidden']) + _biases['hidden'])
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(_X, n_steps, 0)
# new shape: n_steps * (batch_size, n_hidden)
# Define two stacked LSTM cells (two recurrent layers deep) with tensorflow
lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)
# Get LSTM cell output
outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)
# Get last time step's output feature for a "many-to-one" style classifier,
# as in the image describing RNNs at the top of this page
lstm_last_output = outputs[-1]
# Linear activation
return tf.matmul(lstm_last_output, _weights['out']) + _biases['out']
def extract_batch_size(_train, step, batch_size):
# Function to fetch a "batch_size" amount of data from "(X|y)_train" data.
shape = list(_train.shape)
shape[0] = batch_size
batch_s = np.empty(shape)
for i in range(batch_size):
# Loop index
index = ((step-1)*batch_size + i) % len(_train)
batch_s[i] = _train[index]
return batch_s # (1500, 128, 9)
def one_hot(y_, n_classes=n_classes):
# Function to encode neural one-hot output labels from number indexes
# e.g.:
# one_hot(y_=[[5], [0], [3]], n_classes=6):
# return [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]
y_ = y_.reshape(len(y_))
return np.eye(n_classes)[np.array(y_, dtype=np.int32)] # Returns FLOATS
###Output
_____no_output_____
###Markdown
Let's get serious and build the neural network:
###Code
################
# n_steps: 128 readings / window
# n_input: 9 [acc_x", "acc_y", "acc_z", "gyro_x", "gyro_y", "gyro_z", "total_acc_x", "total_acc_y" , "total_acc_z"]
# n_classes: 6 ["WALKING", "WALKING_UPSTAIRS", "WALKING_DOWNSTAIRS", "SITTING", "STANDING", "LAYING"]
# n_hidden: 32
#training_data_count: 7352
#test_data_count: 2947
#learning_rate: 0.0025
#lambda_loss_amount: 0.0015
# training_iters: 2205600
#batch_size: 1500
#display_iter: 30000
################
# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
# Graph weights
weights = {
'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights
'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
'hidden': tf.Variable(tf.random_normal([n_hidden])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
weights
# prediction
pred = LSTM_RNN(x, weights, biases)
# Loss, optimizer and evaluation:
#################################
# L2 loss prevents this overkill neural network to overfit the data
l2 = lambda_loss_amount * sum(tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables())
# Softmax loss
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2
# Adam Optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
step = 1
batch_xs = extract_batch_size(X_train, step, batch_size)
batch_ys = one_hot(extract_batch_size(y_train, step, batch_size))
# extract_batch_size(X_train, 1, 1500) >>>>> shape of output: (7352, 128, 9)
# extract_batch_size(y_train, 1, 1500) >>>>> shape of output: (1500, 1)
# one_hot(extract_batch_size(y_train, 1, 1500)) >>>>> shape of output: (1500, 6)
print(X_train.shape) # (7352, 128, 9)
print(y_train.shape) # (7352, 1)
print(batch_size) # 1500
print(batch_xs.shape) # (1500, 128, 9)
print(batch_ys.shape) # (1500, 6)
print(extract_batch_size(y_train, step, batch_size).shape)
###Output
(7352, 128, 9)
(7352, 1)
1500
(1500, 128, 9)
(1500, 6)
(1500, 1)
|
notebooks/test_active_learning_deepweeds_entropy_no_dropout.ipynb | ###Markdown
###Code
!git clone --single-branch --branch cassava-deepweeds https://github.com/ravindrabharathi/fsdl-active-learning2.git
%cd fsdl-active-learning2
from google.colab import drive
drive.mount('/gdrive')
!mkdir './data/deepweeds/'
!cp '/gdrive/MyDrive/LiveAI/AgriAI/images.zip' './data/deepweeds/'
!unzip -q './data/deepweeds/images.zip' -d './data/deepweeds/images'
!cp '/gdrive/MyDrive/LiveAI/AgriAI/labels_deep_weeds.csv' './data/deepweeds/'
# alternative way: if you cloned the repository to your GDrive account, you can mount it here
#from google.colab import drive
#drive.mount('/content/drive', force_remount=True)
#%cd /content/drive/MyDrive/fsdl-active-learning
!pip3 install PyYAML==5.3.1
!pip3 install boltons wandb pytorch_lightning==1.2.8
!pip3 install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio==0.7.2 torchtext==0.8.1 -f https://download.pytorch.org/whl/torch_stable.html # general lab / pytorch installs
!pip3 install modAL tensorflow # active learning project
!pip install hdbscan
%env PYTHONPATH=.:$PYTHONPATH
#!python training/run_experiment.py --wandb --gpus=1 --max_epochs=1 --num_workers=4 --data_class=DroughtWatch --model_class=ResnetClassifier --batch_size=32 --sampling_method="random"
!python training/run_experiment.py --gpus=1 --max_epochs=10 --num_workers=4 --data_class=DeepweedsDataModule --model_class=ResnetClassifier3 --sampling_method="entropy" --batch_size=128
###Output
2021-05-14 11:03:11.719403: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
INIT SETUP CALLED!!
___________________
[34m[1mwandb[0m: (1) Create a W&B account
[34m[1mwandb[0m: (2) Use an existing W&B account
[34m[1mwandb[0m: (3) Don't visualize my results
[34m[1mwandb[0m: Enter your choice: 2
[34m[1mwandb[0m: You chose 'Use an existing W&B account'
[34m[1mwandb[0m: You can find your API key in your browser here: https://wandb.ai/authorize
[34m[1mwandb[0m: Paste an API key from your profile and hit enter:
[34m[1mwandb[0m: Appending key for api.wandb.ai to your netrc file: /root/.netrc
2021-05-14 11:04:35.042771: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
[34m[1mwandb[0m: Tracking run with wandb version 0.10.30
[34m[1mwandb[0m: Syncing run [33mfsdl-active-learning_DeepweedsDataModule_entropy_multi-class_all-channels[0m
[34m[1mwandb[0m: ⭐️ View project at [34m[4mhttps://wandb.ai/ravindra/fsdl-active-learning2-training[0m
[34m[1mwandb[0m: 🚀 View run at [34m[4mhttps://wandb.ai/ravindra/fsdl-active-learning2-training/runs/686kvaz6[0m
[34m[1mwandb[0m: Run data is saved locally in /content/fsdl-active-learning2/wandb/run-20210514_110433-686kvaz6
[34m[1mwandb[0m: Run `wandb offline` to turn off syncing.
Initializing model for active learning iteration 0
setting n_channels to 3
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to /root/.cache/torch/hub/checkpoints/resnet50-19c8e357.pth
100%|███████████████████████████████████████| 97.8M/97.8M [00:00<00:00, 171MB/s]
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 0%| | 0/39 [00:00<?, ?it/s]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 51%|█████████████████▍ | 20/39 [00:04<00:04, 4.55it/s]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.14it/s][A
Epoch 0: 100%|███████████| 39/39 [00:12<00:00, 3.06it/s, loss=1.37, v_num=vaz6]
Epoch 1: 51%|█████▋ | 20/39 [00:03<00:03, 5.04it/s, loss=1.37, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.34it/s][A
Epoch 1: 100%|██████████| 39/39 [00:11<00:00, 3.33it/s, loss=0.771, v_num=vaz6]
Epoch 2: 51%|█████▏ | 20/39 [00:03<00:03, 5.15it/s, loss=0.771, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.43it/s][A
Epoch 2: 100%|███████████| 39/39 [00:11<00:00, 3.38it/s, loss=0.31, v_num=vaz6]
Epoch 3: 51%|█████▋ | 20/39 [00:03<00:03, 5.19it/s, loss=0.31, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.32it/s][A
Epoch 3: 100%|███████████| 39/39 [00:11<00:00, 3.38it/s, loss=0.18, v_num=vaz6]
Epoch 4: 51%|█████▋ | 20/39 [00:03<00:03, 5.09it/s, loss=0.18, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.35it/s][A
Epoch 4: 100%|██████████| 39/39 [00:11<00:00, 3.37it/s, loss=0.104, v_num=vaz6]
Epoch 5: 51%|█████▏ | 20/39 [00:03<00:03, 5.16it/s, loss=0.104, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.32it/s][A
Epoch 5: 100%|█████████| 39/39 [00:11<00:00, 3.37it/s, loss=0.0574, v_num=vaz6]
Epoch 6: 51%|████▌ | 20/39 [00:03<00:03, 5.08it/s, loss=0.0574, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.29it/s][A
Epoch 6: 100%|██████████| 39/39 [00:11<00:00, 3.34it/s, loss=0.034, v_num=vaz6]
Epoch 7: 51%|█████▏ | 20/39 [00:03<00:03, 5.25it/s, loss=0.034, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.33it/s][A
Epoch 7: 100%|█████████| 39/39 [00:11<00:00, 3.40it/s, loss=0.0225, v_num=vaz6]
Epoch 8: 51%|████▌ | 20/39 [00:03<00:03, 5.17it/s, loss=0.0225, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.26it/s][A
Epoch 8: 100%|█████████| 39/39 [00:11<00:00, 3.34it/s, loss=0.0169, v_num=vaz6]
Epoch 9: 51%|████▌ | 20/39 [00:03<00:03, 5.20it/s, loss=0.0169, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.43it/s][A
Epoch 9: 100%|█████████| 39/39 [00:11<00:00, 3.42it/s, loss=0.0137, v_num=vaz6]
Epoch 9: 100%|█████████| 39/39 [00:11<00:00, 3.42it/s, loss=0.0137, v_num=vaz6]
Total Unlabelled Pool Size 12607
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 99/99 [00:26<00:00, 3.72it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.7720314264297485,
'test_f1': 0.6901328563690186,
'train_size': 1400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "entropy":
-----------------
[12092 8400 1402 ... 5471 5148 2938]
-----------------
New train set size 3400
New unlabelled pool size 10607
Initializing model for active learning iteration 1
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 36%|████ | 20/55 [00:06<00:11, 3.06it/s, loss=1.47, v_num=vaz6]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 73%|████████ | 40/55 [00:08<00:03, 4.92it/s, loss=1.47, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.46it/s][A
Epoch 0: 100%|███████████| 55/55 [00:15<00:00, 3.51it/s, loss=1.21, v_num=vaz6]
Epoch 1: 73%|███████▎ | 40/55 [00:08<00:03, 4.99it/s, loss=0.622, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.41it/s][A
Epoch 1: 100%|██████████| 55/55 [00:15<00:00, 3.52it/s, loss=0.616, v_num=vaz6]
Epoch 2: 73%|████████ | 40/55 [00:08<00:03, 4.91it/s, loss=0.37, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.29it/s][A
Epoch 2: 100%|██████████| 55/55 [00:15<00:00, 3.46it/s, loss=0.368, v_num=vaz6]
Epoch 3: 73%|███████▎ | 40/55 [00:08<00:03, 4.99it/s, loss=0.209, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.42it/s][A
Epoch 3: 100%|██████████| 55/55 [00:15<00:00, 3.51it/s, loss=0.215, v_num=vaz6]
Epoch 4: 73%|████████ | 40/55 [00:07<00:02, 5.04it/s, loss=0.13, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.21it/s][A
Epoch 4: 100%|██████████| 55/55 [00:15<00:00, 3.47it/s, loss=0.134, v_num=vaz6]
Epoch 5: 73%|██████▌ | 40/55 [00:08<00:03, 4.93it/s, loss=0.0956, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.35it/s][A
Epoch 5: 100%|█████████| 55/55 [00:15<00:00, 3.48it/s, loss=0.0953, v_num=vaz6]
Epoch 6: 73%|███████▎ | 40/55 [00:08<00:03, 4.98it/s, loss=0.054, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.43it/s][A
Epoch 6: 100%|█████████| 55/55 [00:15<00:00, 3.50it/s, loss=0.0597, v_num=vaz6]
Epoch 7: 73%|██████▌ | 40/55 [00:08<00:03, 4.91it/s, loss=0.0417, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.31it/s][A
Epoch 7: 100%|█████████| 55/55 [00:15<00:00, 3.46it/s, loss=0.0451, v_num=vaz6]
Epoch 8: 73%|██████▌ | 40/55 [00:08<00:03, 4.96it/s, loss=0.0267, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.33it/s][A
Epoch 8: 100%|██████████| 55/55 [00:15<00:00, 3.49it/s, loss=0.029, v_num=vaz6]
Epoch 9: 73%|███████▎ | 40/55 [00:07<00:02, 5.02it/s, loss=0.019, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.35it/s][A
Epoch 9: 100%|█████████| 55/55 [00:15<00:00, 3.52it/s, loss=0.0187, v_num=vaz6]
Epoch 9: 100%|█████████| 55/55 [00:15<00:00, 3.52it/s, loss=0.0187, v_num=vaz6]
Total Unlabelled Pool Size 10607
Query Sample size 2000
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Resetting Predictions
Testing: 100%|██████████████████████████████████| 83/83 [00:21<00:00, 3.94it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.8643348813056946,
'test_f1': 0.8090488314628601,
'train_size': 3400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "entropy":
-----------------
[ 1418 383 6289 ... 8266 9274 10214]
-----------------
New train set size 5400
New unlabelled pool size 8607
Initializing model for active learning iteration 2
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 56%|██████▏ | 40/71 [00:11<00:08, 3.50it/s, loss=1.05, v_num=vaz6]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 85%|█████████▎ | 60/71 [00:12<00:02, 4.98it/s, loss=1.05, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.29it/s][A
Epoch 0: 100%|███████████| 71/71 [00:19<00:00, 3.59it/s, loss=1.03, v_num=vaz6]
Epoch 1: 85%|████████▍ | 60/71 [00:12<00:02, 4.94it/s, loss=0.629, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.36it/s][A
Epoch 1: 100%|██████████| 71/71 [00:19<00:00, 3.59it/s, loss=0.642, v_num=vaz6]
Epoch 2: 85%|████████▍ | 60/71 [00:12<00:02, 4.92it/s, loss=0.431, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.33it/s][A
Epoch 2: 100%|███████████| 71/71 [00:19<00:00, 3.56it/s, loss=0.45, v_num=vaz6]
Epoch 3: 85%|████████▍ | 60/71 [00:11<00:02, 5.01it/s, loss=0.311, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.33it/s][A
Epoch 3: 100%|███████████| 71/71 [00:19<00:00, 3.61it/s, loss=0.32, v_num=vaz6]
Epoch 4: 85%|████████▍ | 60/71 [00:12<00:02, 4.96it/s, loss=0.231, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.38it/s][A
Epoch 4: 100%|██████████| 71/71 [00:19<00:00, 3.60it/s, loss=0.261, v_num=vaz6]
Epoch 5: 85%|████████▍ | 60/71 [00:12<00:02, 4.96it/s, loss=0.167, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.41it/s][A
Epoch 5: 100%|██████████| 71/71 [00:19<00:00, 3.62it/s, loss=0.182, v_num=vaz6]
Epoch 6: 85%|████████▍ | 60/71 [00:12<00:02, 4.99it/s, loss=0.119, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.31it/s][A
Epoch 6: 100%|██████████| 71/71 [00:19<00:00, 3.59it/s, loss=0.131, v_num=vaz6]
Epoch 7: 85%|████████▍ | 60/71 [00:12<00:02, 4.94it/s, loss=0.087, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.31it/s][A
Epoch 7: 100%|█████████| 71/71 [00:19<00:00, 3.57it/s, loss=0.0868, v_num=vaz6]
Epoch 8: 85%|███████▌ | 60/71 [00:12<00:02, 4.88it/s, loss=0.0506, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:05<00:02, 3.39it/s][A
Epoch 8: 100%|█████████| 71/71 [00:19<00:00, 3.57it/s, loss=0.0544, v_num=vaz6]
Epoch 9: 85%|███████▌ | 60/71 [00:12<00:02, 4.94it/s, loss=0.0438, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Validating: 71%|██████████████████████▏ | 20/28 [00:06<00:02, 3.31it/s][A
Epoch 9: 100%|█████████| 71/71 [00:19<00:00, 3.57it/s, loss=0.0464, v_num=vaz6]
Epoch 9: 100%|█████████| 71/71 [00:19<00:00, 3.56it/s, loss=0.0464, v_num=vaz6]
Total Unlabelled Pool Size 8607
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 68/68 [00:17<00:00, 3.89it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.9172766208648682,
'test_f1': 0.8816561698913574,
'train_size': 5400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "entropy":
-----------------
[3721 5540 5793 ... 2830 6010 6984]
-----------------
New train set size 7400
New unlabelled pool size 6607
Initializing model for active learning iteration 3
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 47%|█████ | 40/86 [00:11<00:13, 3.42it/s, loss=0.99, v_num=vaz6]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 70%|███████▋ | 60/86 [00:16<00:06, 3.73it/s, loss=0.99, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 0: 93%|██████████▏| 80/86 [00:22<00:01, 3.61it/s, loss=0.99, v_num=vaz6]
Epoch 0: 100%|██████████| 86/86 [00:23<00:00, 3.61it/s, loss=0.881, v_num=vaz6]
Epoch 1: 70%|██████▉ | 60/86 [00:16<00:07, 3.65it/s, loss=0.579, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 1: 93%|█████████▎| 80/86 [00:22<00:01, 3.53it/s, loss=0.579, v_num=vaz6]
Epoch 1: 100%|██████████| 86/86 [00:24<00:00, 3.52it/s, loss=0.595, v_num=vaz6]
Epoch 2: 70%|██████▉ | 60/86 [00:16<00:07, 3.70it/s, loss=0.405, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 2: 93%|█████████▎| 80/86 [00:22<00:01, 3.63it/s, loss=0.405, v_num=vaz6]
Epoch 2: 100%|██████████| 86/86 [00:23<00:00, 3.60it/s, loss=0.411, v_num=vaz6]
Epoch 3: 70%|██████▉ | 60/86 [00:16<00:07, 3.68it/s, loss=0.265, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 3: 93%|█████████▎| 80/86 [00:22<00:01, 3.59it/s, loss=0.265, v_num=vaz6]
Epoch 3: 100%|███████████| 86/86 [00:24<00:00, 3.57it/s, loss=0.29, v_num=vaz6]
Epoch 4: 70%|██████▉ | 60/86 [00:16<00:06, 3.72it/s, loss=0.201, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 4: 93%|█████████▎| 80/86 [00:22<00:01, 3.63it/s, loss=0.201, v_num=vaz6]
Epoch 4: 100%|██████████| 86/86 [00:23<00:00, 3.62it/s, loss=0.222, v_num=vaz6]
Epoch 5: 70%|██████▉ | 60/86 [00:16<00:07, 3.71it/s, loss=0.127, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 5: 93%|█████████▎| 80/86 [00:22<00:01, 3.60it/s, loss=0.127, v_num=vaz6]
Epoch 5: 100%|██████████| 86/86 [00:23<00:00, 3.60it/s, loss=0.144, v_num=vaz6]
Epoch 6: 70%|██████▎ | 60/86 [00:16<00:06, 3.72it/s, loss=0.0808, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 6: 93%|████████▎| 80/86 [00:22<00:01, 3.61it/s, loss=0.0808, v_num=vaz6]
Epoch 6: 100%|███████████| 86/86 [00:23<00:00, 3.61it/s, loss=0.11, v_num=vaz6]
Epoch 7: 70%|██████▎ | 60/86 [00:15<00:06, 3.76it/s, loss=0.0734, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 7: 93%|████████▎| 80/86 [00:22<00:01, 3.63it/s, loss=0.0734, v_num=vaz6]
Epoch 7: 100%|█████████| 86/86 [00:23<00:00, 3.63it/s, loss=0.0802, v_num=vaz6]
Epoch 8: 70%|██████▎ | 60/86 [00:16<00:06, 3.74it/s, loss=0.0546, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 8: 93%|████████▎| 80/86 [00:22<00:01, 3.61it/s, loss=0.0546, v_num=vaz6]
Epoch 8: 100%|█████████| 86/86 [00:23<00:00, 3.61it/s, loss=0.0574, v_num=vaz6]
Epoch 9: 70%|██████▉ | 60/86 [00:16<00:06, 3.73it/s, loss=0.041, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 9: 93%|█████████▎| 80/86 [00:22<00:01, 3.61it/s, loss=0.041, v_num=vaz6]
Epoch 9: 100%|█████████| 86/86 [00:23<00:00, 3.60it/s, loss=0.0393, v_num=vaz6]
Epoch 9: 100%|█████████| 86/86 [00:23<00:00, 3.60it/s, loss=0.0393, v_num=vaz6]
Total Unlabelled Pool Size 6607
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Query Sample size 2000
Resetting Predictions
Testing: 100%|██████████████████████████████████| 52/52 [00:13<00:00, 3.78it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.9667019844055176,
'test_f1': 0.9525201916694641,
'train_size': 7400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "entropy":
-----------------
[3797 3648 3632 ... 5571 3813 3475]
-----------------
New train set size 9400
New unlabelled pool size 4607
Initializing model for active learning iteration 4
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Epoch 0: 59%|█████▎ | 60/102 [00:16<00:11, 3.58it/s, loss=0.808, v_num=vaz6]/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
warnings.warn(*args, **kwargs)
Epoch 0: 78%|███████ | 80/102 [00:20<00:05, 3.98it/s, loss=0.808, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 0: 98%|███████▊| 100/102 [00:26<00:00, 3.83it/s, loss=0.808, v_num=vaz6]
Epoch 0: 100%|████████| 102/102 [00:27<00:00, 3.64it/s, loss=0.749, v_num=vaz6]
Epoch 1: 78%|███████ | 80/102 [00:20<00:05, 3.97it/s, loss=0.519, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 1: 98%|███████▊| 100/102 [00:26<00:00, 3.84it/s, loss=0.519, v_num=vaz6]
Epoch 1: 100%|████████| 102/102 [00:27<00:00, 3.68it/s, loss=0.519, v_num=vaz6]
Epoch 2: 78%|███████ | 80/102 [00:20<00:05, 3.97it/s, loss=0.357, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 2: 98%|███████▊| 100/102 [00:26<00:00, 3.83it/s, loss=0.357, v_num=vaz6]
Epoch 2: 100%|████████| 102/102 [00:27<00:00, 3.66it/s, loss=0.387, v_num=vaz6]
Epoch 3: 78%|███████ | 80/102 [00:20<00:05, 3.96it/s, loss=0.245, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 3: 98%|███████▊| 100/102 [00:26<00:00, 3.80it/s, loss=0.245, v_num=vaz6]
Epoch 3: 100%|████████| 102/102 [00:27<00:00, 3.64it/s, loss=0.274, v_num=vaz6]
Epoch 4: 78%|███████ | 80/102 [00:20<00:05, 3.95it/s, loss=0.193, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 4: 98%|███████▊| 100/102 [00:26<00:00, 3.79it/s, loss=0.193, v_num=vaz6]
Epoch 4: 100%|████████| 102/102 [00:28<00:00, 3.63it/s, loss=0.202, v_num=vaz6]
Epoch 5: 78%|███████ | 80/102 [00:20<00:05, 3.96it/s, loss=0.133, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 5: 98%|███████▊| 100/102 [00:26<00:00, 3.81it/s, loss=0.133, v_num=vaz6]
Epoch 5: 100%|████████| 102/102 [00:27<00:00, 3.65it/s, loss=0.134, v_num=vaz6]
Epoch 6: 78%|██████▎ | 80/102 [00:20<00:05, 3.97it/s, loss=0.0807, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 6: 98%|██████▊| 100/102 [00:26<00:00, 3.83it/s, loss=0.0807, v_num=vaz6]
Epoch 6: 100%|███████| 102/102 [00:27<00:00, 3.65it/s, loss=0.0905, v_num=vaz6]
Epoch 7: 78%|██████▎ | 80/102 [00:20<00:05, 3.96it/s, loss=0.0692, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 7: 98%|██████▊| 100/102 [00:26<00:00, 3.82it/s, loss=0.0692, v_num=vaz6]
Epoch 7: 100%|███████| 102/102 [00:27<00:00, 3.65it/s, loss=0.0771, v_num=vaz6]
Epoch 8: 78%|██████▎ | 80/102 [00:20<00:05, 3.97it/s, loss=0.0602, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 8: 98%|██████▊| 100/102 [00:26<00:00, 3.79it/s, loss=0.0602, v_num=vaz6]
Epoch 8: 100%|███████| 102/102 [00:28<00:00, 3.64it/s, loss=0.0607, v_num=vaz6]
Epoch 9: 78%|██████▎ | 80/102 [00:20<00:05, 3.99it/s, loss=0.0329, v_num=vaz6]
Validating: 0it [00:00, ?it/s][A
Validating: 0%| | 0/28 [00:00<?, ?it/s][A
Epoch 9: 98%|██████▊| 100/102 [00:26<00:00, 3.82it/s, loss=0.0329, v_num=vaz6]
Epoch 9: 100%|███████| 102/102 [00:27<00:00, 3.66it/s, loss=0.0317, v_num=vaz6]
Epoch 9: 100%|███████| 102/102 [00:27<00:00, 3.66it/s, loss=0.0317, v_num=vaz6]
Total Unlabelled Pool Size 4607
Query Sample size 2000
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Resetting Predictions
Testing: 100%|██████████████████████████████████| 36/36 [00:09<00:00, 3.69it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.9924028515815735,
'test_f1': 0.9859451651573181,
'train_size': 9400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "entropy":
-----------------
[ 866 3659 4577 ... 4042 1930 3785]
-----------------
New train set size 11400
New unlabelled pool size 2607
Initializing model for active learning iteration 5
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
  warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
  warnings.warn(*args, **kwargs)
Epoch 0: 100%|████████| 118/118 [00:31<00:00, 3.69it/s, loss=0.666, v_num=vaz6]
Epoch 1: 100%|████████| 118/118 [00:32<00:00, 3.68it/s, loss=0.444, v_num=vaz6]
Epoch 2: 100%|████████| 118/118 [00:31<00:00, 3.70it/s, loss=0.347, v_num=vaz6]
Epoch 3: 100%|████████| 118/118 [00:31<00:00, 3.71it/s, loss=0.251, v_num=vaz6]
Epoch 4: 100%|████████| 118/118 [00:32<00:00, 3.68it/s, loss=0.162, v_num=vaz6]
Epoch 5: 100%|████████| 118/118 [00:32<00:00, 3.68it/s, loss=0.141, v_num=vaz6]
Epoch 6: 100%|█████████| 118/118 [00:31<00:00, 3.72it/s, loss=0.12, v_num=vaz6]
Epoch 7: 100%|████████| 118/118 [00:31<00:00, 3.70it/s, loss=0.114, v_num=vaz6]
Epoch 8: 100%|███████| 118/118 [00:31<00:00, 3.71it/s, loss=0.0844, v_num=vaz6]
Epoch 9: 100%|████████| 118/118 [00:31<00:00, 3.71it/s, loss=0.142, v_num=vaz6]
Total Unlabelled Pool Size 2607
Query Sample size 2000
Resetting Predictions
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Testing: 100%|██████████████████████████████████| 21/21 [00:06<00:00, 3.43it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 0.9980821013450623,
'test_f1': 0.9953089952468872,
'train_size': 11400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "entropy":
-----------------
[2560 2228 2283 ... 1809 112 350]
-----------------
New train set size 13400
New unlabelled pool size 607
Initializing model for active learning iteration 6
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric Accuracy was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
  warnings.warn(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/prints.py:36: UserWarning: The ``compute`` method of metric F1_Score was called before the ``update`` method which may lead to errors, as metric states have not yet been updated.
  warnings.warn(*args, **kwargs)
Epoch 0: 100%|████████| 133/133 [00:36<00:00, 3.69it/s, loss=0.513, v_num=vaz6]
Epoch 1: 100%|████████| 133/133 [00:35<00:00, 3.70it/s, loss=0.363, v_num=vaz6]
Epoch 2: 100%|████████| 133/133 [00:35<00:00, 3.70it/s, loss=0.292, v_num=vaz6]
Epoch 3: 100%|████████| 133/133 [00:35<00:00, 3.70it/s, loss=0.183, v_num=vaz6]
Epoch 4: 100%|████████| 133/133 [00:35<00:00, 3.71it/s, loss=0.122, v_num=vaz6]
Epoch 5: 100%|███████| 133/133 [00:35<00:00, 3.71it/s, loss=0.0924, v_num=vaz6]
Epoch 6: 100%|███████| 133/133 [00:35<00:00, 3.74it/s, loss=0.0738, v_num=vaz6]
Epoch 7: 100%|███████| 133/133 [00:35<00:00, 3.71it/s, loss=0.0534, v_num=vaz6]
Epoch 8: 100%|█████████| 133/133 [00:35<00:00, 3.73it/s, loss=0.04, v_num=vaz6]
Epoch 9: 100%|███████| 133/133 [00:35<00:00, 3.70it/s, loss=0.0328, v_num=vaz6]
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Total Unlabelled Pool Size 607
Query Sample size 607
Resetting Predictions
Testing: 100%|████████████████████████████████████| 5/5 [00:02<00:00, 2.36it/s]
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_acc': 1.0, 'test_f1': 1.0, 'train_size': 13400.0}
--------------------------------------------------------------------------------
Indices selected for labelling via method "entropy":
-----------------
[ 56 280 217 326 536 123 7 94 358 42 365 593 218 363 426 447 16 197
381 525 183 163 188 463 1 418 568 127 322 371 581 288 38 589 396 318
155 430 553 19 399 182 266 331 578 158 539 113 383 408 85 62 25 362
235 293 268 439 427 521 292 180 343 369 453 547 23 392 312 492 353 529
348 227 238 380 240 440 54 39 71 464 213 57 565 387 438 448 395 151
606 340 273 313 420 150 187 588 115 398 255 96 520 117 254 513 220 360
282 459 111 272 122 130 504 590 496 499 269 141 306 535 26 462 579 334
284 320 178 37 530 281 548 489 73 324 277 307 485 70 524 149 466 415
58 159 274 295 507 214 17 556 55 384 186 84 207 443 165 20 354 79
47 15 323 510 119 59 140 172 24 97 75 145 146 72 364 316 299 558
6 379 388 410 407 591 199 181 596 236 102 116 400 421 251 341 500 222
243 92 270 486 484 557 69 431 495 184 406 373 78 82 10 560 262 533
265 416 541 531 107 471 594 574 247 329 605 342 48 545 602 460 245 368
193 419 223 441 278 361 356 564 475 283 257 244 546 366 162 391 423 567
481 176 494 83 576 586 385 509 230 60 126 100 287 442 394 195 357 35
36 29 435 538 397 4 276 249 411 95 14 200 203 580 61 351 519 237
8 167 139 5 157 2 493 68 518 135 376 315 461 192 201 87 497 540
66 143 478 12 132 455 216 386 101 291 206 52 477 208 34 550 600 98
328 359 204 261 528 144 337 483 467 93 498 333 286 479 129 597 304 50
570 44 598 215 103 413 482 175 134 446 30 142 503 229 563 577 604 506
451 148 166 372 221 336 32 480 317 309 45 319 508 429 552 452 11 154
89 402 585 425 502 0 414 248 434 253 109 511 587 382 263 250 260 345
105 314 231 344 527 46 173 378 544 9 120 465 559 233 285 41 86 210
573 370 110 562 432 405 428 76 224 136 575 133 583 271 234 124 33 512
437 279 472 347 31 202 275 543 501 99 514 335 205 160 584 532 412 436
569 296 241 566 298 297 338 444 375 63 252 603 108 445 239 232 302 374
458 377 152 476 300 90 246 303 571 77 106 161 349 259 321 582 74 112
22 212 174 264 118 267 401 487 450 537 169 551 417 3 65 209 104 599
473 131 332 390 121 43 403 190 505 515 80 219 468 526 422 367 389 138
49 114 449 488 424 301 13 469 168 327 289 555 595 53 194 196 454 185
308 456 18 310 28 156 393 27 128 350 164 542 179 125 177 330 601 91
21 491 51 290 549 470 81 211 517 554 242 534 67 523 171 325 346 64
490 409 88 404 352 592 198 225 522 305 170 137 516 294 189 339 311 228
433 474 256 561 191 457 355 40 153 147 258 226 572]
-----------------
New train set size 14007
New unlabelled pool size 0
Initializing model for active learning iteration 7
setting n_channels to 3
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
wandb: Waiting for W&B process to finish, PID 506
wandb: Program ended successfully.
wandb:
wandb: Find user logs for this run at: /content/fsdl-active-learning2/wandb/run-20210514_110433-686kvaz6/logs/debug.log
wandb: Find internal logs for this run at: /content/fsdl-active-learning2/wandb/run-20210514_110433-686kvaz6/logs/debug-internal.log
wandb: Run summary:
wandb:   train_loss 0.03116
wandb:   train_acc 0.99224
wandb:   train_f1 0.99031
wandb:   train_size 13400.0
wandb:   epoch 9
wandb:   trainer/global_step 1050
wandb:   _runtime 1818
wandb:   _timestamp 1620992091
wandb:   _step 153
wandb:   val_loss 0.4695
wandb:   val_acc 0.87179
wandb:   val_f1 0.82956
wandb:   train_acc_max 0.99224
wandb:   val_acc_max 0.87179
wandb:   train_f1_max 0.99031
wandb:   val_f1_max 0.82956
wandb:   train_acc_best 0.99224
wandb:   val_acc_best 0.87179
wandb:   train_f1_best 0.99031
wandb:   val_f1_best 0.82956
wandb:   test_acc 1.0
wandb:   test_f1 1.0
wandb: Run history:
wandb:   train_loss █▃▂▁▁▁█▃▂▁▁▁▄▂▂▂▁▇▄▂▂▁▁▆▃▂▁▁▁▃▂▂▂▂▅▃▂▁▁▁
wandb:   train_acc ▂▆████▁▆████▄▇▇▇█▂▅▇▇██▃▆▇███▆▇▇▇▇▅▆▇███
wandb:   train_f1 ▁▆████▁▇████▅▇▇██▂▅▇▇██▃▆▇███▆▇▇▇▇▄▆▇███
wandb:   train_size ▁▁▁▁▁▁▂▂▂▂▂▂▃▃▃▃▃▅▅▅▅▅▅▆▆▆▆▆▆▇▇▇▇▇██████
wandb:   epoch ▁▂▃▅▆█▂▃▄▆▇█▂▃▄▆▇▁▃▄▅▆█▁▃▄▆▆█▂▃▅▆▇█▂▃▅▆█
wandb:   trainer/global_step ▁▁▁▁▂▂▁▁▂▂▃▃▂▂▃▃▄▁▂▃▃▄▅▁▂▃▄▅▆▂▃▅▆▇▂▃▄▆▇█
wandb:   _runtime ▁▁▁▁▁▁▂▂▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▆▆▆▆▇▇▇▇███
wandb:   _timestamp ▁▁▁▁▁▁▂▂▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▆▆▆▆▇▇▇▇███
wandb:   _step ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███
wandb:   val_loss █▆▅▅▅▅▄▃▄▄▂▃▂▂▂▃▂▃▂▂▁▂▂▂▂▂▁▃▁▂▁▃▃▂▂▃▁▂▂▂
wandb:   val_acc ▁▃▄▅▅▅▅▆▆▆▆▆▆▇▆▇▇▆▆▇██▇▆▇▇█▇█▇▇▇▇█▆▆▇▇██
wandb:   val_f1 ▁▃▄▅▅▅▅▆▆▆▇▇▆▇▆▇█▆▆▆█▇█▆▇▇█▇█▇▇▆▆█▆▆▇███
wandb:   train_acc_max ▂▆████▁▆████▄▇▇▇█▂▅▇▇██▃▆▇███▆▇▇██▅▆▇███
wandb:   val_acc_max ▁▃▄▅▅▅▅▆▆▆▆▇▆▇▇▇▇▆▆▇███▆▇▇███▇▇▇▇█▆▆▇▇██
wandb:   train_f1_max ▁▆████▁▇████▅▇▇██▂▅▇▇██▃▆▇███▆▇▇██▄▆▇███
wandb:   val_f1_max ▁▃▄▅▅▅▅▆▆▆▇▇▆▇▇▇█▆▆▇███▆▇▇███▇▇▇▇█▆▆▇███
wandb:   train_acc_best ██▅▅▄▁▄
wandb:   val_acc_best ▁▅▇▇█▇█
wandb:   train_f1_best ██▅▄▄▁▄
wandb:   val_f1_best ▁▅▇▇█▇█
wandb:   test_acc ▁▄▅▇███
wandb:   test_f1 ▁▄▅▇███
wandb:
wandb: Synced 5 W&B file(s), 7 media file(s), 0 artifact file(s) and 1 other file(s)
wandb:
wandb: Synced fsdl-active-learning_DeepweedsDataModule_entropy_multi-class_all-channels: https://wandb.ai/ravindra/fsdl-active-learning2-training/runs/686kvaz6
code/201801_jupyter_pandas_tutorial.ipynb | ###Markdown
Genome data analysis in Python A brief tutorial on the use of *jupyter notebooks* and the python data analysis library *pandas* for genomic data analysis. Workshop on Population and Speciation Genomics, Český Krumlov, January 2018. By Hannes Svardal () This is a jupyter notebook running a Python 2 kernel. The Jupyter Notebook App (formerly IPython Notebook) is an application running inside the browser. Jupyter notebooks can run different kernels: Python 2/3, R, Julia, bash, ... Further resources about jupyter notebooks can be found here: - https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/ - https://www.datacamp.com/community/tutorials/tutorial-jupyter-notebook Jupyter notebooks can run locally or on a server. You access them in your browser. To start the jupyter server - Log into your amazon cloud instance: ```ssh [email protected]``` (replace my-ip-here with your instance's address) - Navigate into the tutorial directory: ```cd ~/workshop_materials/03a_jupyter_notebooks/``` - Start the notebook server: ```jupyter notebook --no-browser --port=7000 --ip=0.0.0.0``` - In your local browser, navigate to the web address: http://my-ip-here.compute-1.amazonaws.com:7000 - On the web page, type in the password *evomics2018* Now you should have this notebook in front of you. - At the top of the webpage, the notebook environment has a **header** and a **toolbar**, which can be used to change settings, formatting, and interrupt or restart the kernel that interprets the notebook cells. - The body of the notebook is built up of cells of two major types: markdown cells and code cells. You can set the type for each cell either using the toolbar or with keyboard commands. The right-most button in the toolbar shows all keyboard shortcuts. - **Markdown cells** (this cell and all above) contain text that can be formatted using html-like syntax http://jupyter-notebook.readthedocs.io/en/stable/examples/Notebook/Working%20With%20Markdown%20Cells.html Double-click into a markdown cell (like this one) to get into *edit mode* - **Code cells** contain computer code (in our case written in python 2). Code cells have an **input field** in which you type code. Cells are evaluated by pressing *shift + return* with the cursor being in the cell. This produces an **output field** with the result of the evaluation that would be returned to std-out in a normal python (or R) session. Below are a few examples of input cells and the output. Note that by default only the result of the last operation will be output, and that only if it is not assigned to a variable, but all lines will be evaluated. Here are some very basic operations. Evaluate the cells below and check the results.
###Code
# This is a code cell.
# Evaluate it by moving the cursor in the cell and pressing <shift + return>.
1+1
# This is another code cell.
# There is no output because the last operation is assigned to a variable.
# However, the operations are performed and c is now assigned a value.
# Evaluate this cell!
a = 5
b = 3
c = a * b
# The variables should now be assigned. Evaluate.
print 'a is', a
print 'b is', b
print 'c is a*b, which is', c
###Output
a is 5
b is 3
c is a*b, which is 15
###Markdown
Try to create more cells using either the "plus" button in the toolbar above or the keyboard combination (Ctrl + M) + B (First Ctrl + M together, then B). Try to define variables and do calculations in these cells. Python basics This is very basic python stuff. People who are familiar with python can skip this part. loading modules
###Code
# Load some packages that we will need below
# by evaluating this cell
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Refer to objects (e.g. functions) from these packages with module.object
print np.sqrt(10)
print np.pi
###Output
3.16227766017
3.14159265359
###Markdown
lists, list comprehension, and numpy arrays Lists are a very basic data type in python. Elements can be accessed by index. **Attention:** Different from R, Python data structures are generally zero indexed. The first element has index 0.
###Code
list0 = [1, 2, 3, 4]
print list0
print list0[1]
# the last element
print list0[-1]
# elements 2 to 3
print list0[1:3]
###Output
4
[2, 3]
###Markdown
*List comprehensions* are a very useful feature in Python. They provide an in-line way of iterating through a list.
###Code
# This is the syntax of a so-called list comprehension.
# A very useful feature to create a new list by iterating through other list(s).
squares = [i*i for i in list0]
print squares
# Doing this in conventional syntax would be more verbose:
squares = []
for i in list0:
squares.append(i*i)
print squares
###Output
[1, 4, 9, 16]
[1, 4, 9, 16]
###Markdown
A numpy array is a vector-like object of arbitrary dimensions. Operations on numpy arrays are generally faster than iterating through lists.
###Code
array0 = np.array(list0)
print array0
# Operations on an array are usually element-wise.
# Square the array elements.
print array0**2
# Instantiate array 0 .. 19
x = np.arange(20)
print x
# 2D array
array2d = np.array(
[[1,2,3],
[4,5,6]]
)
print array2d
print array2d*2
print 'number of rows and columns:', array2d.shape
###Output
[[1 2 3]
[4 5 6]]
[[ 2 4 6]
[ 8 10 12]]
number of rows and columns: (2, 3)
###Markdown
anonymous (lambda) function A lambda function is a function that is not bound to a name at creation time.
###Code
# This is a regular function
def square(x):
"""
This is a regular function
definition. Defined in evomics2018.
This function takes a number (int or float)
and returns the square of it.
"""
return x*x
print 'This is a regular function:', square
print square(5)
# A lambda function is defined in-line; here it is bound to a name,
# but that is not necessary
square2 = lambda x: x*x
print 'This is an anonymous function:', square2
print square2(5)
###Output
This is an anonymous function: <function <lambda> at 0x7f451adc60c8>
25
###Markdown
The advantage of an anonymous function is that you can define it on the go.
###Code
#For this you must pre-define the function 'square'
map(square, list0)
#Here the same but defining the function on the go.
#This is very useful when we apply functions to data frames below
map(lambda x:x*x, list0)
###Output
_____no_output_____
###Markdown
Ipython Ipython is an interactive interface for python. Jupyter notebooks that run a python kernel use Ipython. It basically is a wrapper around python that adds some useful features and commands. A tutorial can be found here: https://ipython.org/ipython-doc/3/interactive/tutorial.html The four most helpful commands (type in a code cell and evaluate):

|command| description|
|------|------|
|?| Introduction and overview of IPython’s features.|
|%quickref| Quick reference.|
|help| Python’s own help system.|
|object?| Details about ‘object’, use ‘object??’ for extra details.|
###Code
# Evaluate this to get the documentation of the function **map** as a popup below.
map?
# Get the docstring of your own function defined above.
square?
###Output
_____no_output_____
###Markdown
Ipython magic IPython *magic commands* are tools that simplify various tasks. They are prefixed by the % character. Magic commands come in two flavors: line magics, which are denoted by a single % prefix and operate on a single line of input, and cell magics, which are denoted by a double %% prefix and operate on multiple lines of input. Examples
###Code
# Time a command with %timeit
%timeit 1+1
%%timeit
#Time a cell operation
x = range(10000)
max(x)
# This is a very useful magic that allows us to create plots inside the jupyter notebook
# EVALUATE THIS CELL!!!
%matplotlib inline
# Make a basic plot
plt.plot(np.random.randn(10))
###Output
_____no_output_____
###Markdown
running shell commands You can use ipython magic to run a command using the system shell and return the output. Simply prepend a command with "!" or start a cell with %%bash for a multi line command.
###Code
!ls
files = !ls
print files
!msmc2
%%bash
cd ~
ls
echo ----------------
echo $PATH
###Output
bin
Desktop
dlang
Downloads
miniconda3
R
software
workshop_materials
----------------
/usr/local/bin:/home/wpsg/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin:/home/wpsg/miniconda3/bin:/home/wpsg/software/hmmer-3.1b2-linux-intel-x86_64/binaries/:/home/wpsg/software/partitionfinder-2.1.1/:/home/wpsg/software/.source/jmodeltest-2.1.10:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin:/home/wpsg/software/EIG-6.1.3/bin:/home/wpsg/software/plink:/home/wpsg/software/SLiM/bin:/home/wpsg/software/msms/bin:/home/wpsg/software/WFABC_v1.1/binaries/Linux:/home/wpsg/software/beast/bin:/home/wpsg/software/.source/pcangsd:/home/wpsg/software/msmc2/build/release:/home/wpsg/software/msmc-tools:/home/wpsg/software/LFMM_CL_v1.5/bin
###Markdown
Pandas https://pandas.pydata.org/ *pandas* is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. List of tutorials - https://pandas.pydata.org/pandas-docs/stable/tutorials.html 10 minutes quick start guide - https://pandas.pydata.org/pandas-docs/stable/10min.html Installation (just like other python modules) ```pip install pandas```
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
The two most important data structures in pandas are **Series** and **DataFrames**. pandas Series A Series is a 1D-array-like object where each element has an index.
###Code
s = pd.Series([1, 3, 5, np.nan, 6, 8])
###Output
_____no_output_____
###Markdown
In this case the index of s consists of the integers 0,1,... but it could be strings, floats, ...
###Code
s
# for operations like additions, elements are matched by index
s + s
# elements are matched by index, even if they are in a different order
s1 = pd.Series([1, 3, 5, np.nan, 6, 8],index=['F','E','D','C','B','A'])
s2 = pd.Series([1, 3, 5, np.nan, 6, 8],
index=['A','B','C','D','E','F'])
print s1
print '-------------'
print s2
# what do you expect the result of this to be?
s1 + s2
###Output
_____no_output_____
###Markdown
Do you understand the result above?
###Code
# Access an element using the index
s.loc[2]
# Access an element using the position
s.iloc[2]
###Output
_____no_output_____
###Markdown
In the above case, the two are trivially the same, but for s1 and s2 it is very different. Try both ways of accessing elements on s1 and s2. pandas DataFrame Pandas data frames are similar to R data frames. A DataFrame is a 2D-array-like object where each element has a row index and a column index. The row index is called 'index', the column index is called 'columns'. In the following, create a simple data frame and inspect its elements. Try to modify the code in this section.
###Code
df = pd.DataFrame([[1,2,3],
[4,5,6]],
index=[100,200],
columns=['A','B','C'])
df
df.index
df.columns
df.index.values
# access by position
df.iloc[1, 2]
# access an element by index
df.loc[200, 'C']
#access a row
df.loc[200,:]
#access a column
df.loc[:,'C']
#logical indexing
df.loc[df['A']>2,]
df.loc[:, df.loc[200]>4]
df + df
# mean of rows
print df.mean()
# mean of columns
print df.mean(axis=1)
# If a function on a data frame returns a 1D object, the results is a pd.Series
print type(df.loc[100,:])
###Output
<class 'pandas.core.series.Series'>
###Markdown
Apply operations Data Frames have many handy methods built in for applying functions, grouping elements, and plotting. We will see several of them below. Here are the simplest apply operations.
###Code
df.apply(square)
# apply along rows (column-wise)
df.apply(np.sum, axis=0)
# apply along columns (row-wise)
df.apply(np.sum, axis=1)
###Output
_____no_output_____
###Markdown
For the above there exists a shortcut. You can directly use df.sum(axis=...)
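For instance (a quick sketch using the `df` defined above):

    df.sum(axis=0)  # column sums, same result as df.apply(np.sum, axis=0)
    df.sum(axis=1)  # row sums, same result as df.apply(np.sum, axis=1)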
###Code
# apply element-wise
df.applymap(lambda i:'ABCDEFGHIJKLMN'[i])
###Output
_____no_output_____
###Markdown
What does the above code cell do? Play around with it to understand what is happening. Tip: It helps to look at each of the parts in turn.
###Code
'ABCDEFGHIJKLMN'[2]
df.applymap?
###Output
_____no_output_____
###Markdown
Working with SNP calls Here it gets interesting. How can we use pandas to analyse genome data? Note that some of the below is a bit simplified and you would do things slightly differently in a production pipeline. The combination of jupyter notebooks and pandas is great for quick exploration of data. But using ipython parallel one can also handle demanding analyses. We will be using a cichlid fish VCF file with bi-allelic SNP calls.
###Code
#check which files are in the folder
!ls
vcf_fn = 'cichlid_data_outgroup.vcf.gz'
# Use bash magic to take a look at the file contents
%%bash
gzip -dc "cichlid_data_outgroup.vcf.gz" | head -n 18
###Output
##fileformat=VCFv4.1
##FILTER=<ID=PASS,Description="All filters passed">
##fileDate=13092017_10h46m48s
##source=SHAPEIT2.v837
##log_file=shapeit_13092017_10h46m48s_a225583f-ce12-4530-881d-63b6e20bb1ee.log
##FORMAT=<ID=GT,Number=1,Type=String,Description="Phased Genotype">
##contig=<ID=Contig237>
##contig=<ID=Contig262>
##contig=<ID=Contig263>
##bcftools_concatVersion=1.3.1+htslib-1.3.1
##bcftools_concatCommand=concat -O z -o /lustre/scratch113/projects/cichlid/analyses/20170704_variant_calling_malombe/_data/cichlid_data_outgroup.vcf.gz /lustre/scratch113/projects/cichlid/analyses/20170704_variant_calling_malombe/_data/cichlid_data_Contig237_phased_outgroup.vcf.gz /lustre/scratch113/projects/cichlid/analyses/20170704_variant_calling_malombe/_data/cichlid_data_Contig262_phased_outgroup.vcf.gz /lustre/scratch113/projects/cichlid/analyses/20170704_variant_calling_malombe/_data/cichlid_data_Contig263_phased_outgroup.vcf.gz
#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT VirSWA1 VirSWA2 VirSWA3 VirSWA4 VirSWA5 VirSWA6 VirSWA7 VirSWA8 VirSWA9 VirSWA10 VirSWA11 VirSWA12 VirSWA13 VirSWA14 VirSWA15 VirSWA16 VirSWA17 VirSWA18 VirSWA19 VirSWA20 VirSWA21 VirSWA22 VirSWA23 VirSWA24 VirSWA25 VirSWA26 VirSWA27 VirMAL1 VirMAL2 VirMAL3 VirMAL4 VirMAL5 VirMAL6 VirMAL7 VirMAL8 VirMAL9 VirMAL10 VirMAL11 VirMAL12 VirMAL13 VirMAL14 VirMAL15 VirMAL16 VirMAL17 VirMAL18 VirMAL19 VirMAL20 VirMAL21 VirMAL22 VirMAL23 VirMAL24 VirSEA1 VirSEA2 VirSEA3 VirSEA4 VirSEA5 VirSEA6 VirSEA7 VirSEA8 VirSEA9 VirSEA10 VirSEA11 VirSEA12 VirSEA13 VirSEA14 VirSEA15 VirSEA16 VirSEA17 VirSEA18 VirSEA19 VirSEA20 VirSEA21 VirSEA22 VirSEA23 VirSEA24 OreSqu1
Contig237 3190 . G A . PASS . GT 0|0 0|1 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|1 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0
Contig237 3203 . T C . PASS . GT 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 0|1 1|1 1|1 1|1 1|1 1|1 1|1 0|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 0|1 1|1 1|1 0|1 1|1 0|1 1|1 0|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 0|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1 1|1
Contig237 3230 . A G . PASS . GT 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 1|0 0|0 0|0 0|0 0|0 0|0 0|0 1|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 1|0 0|0 0|0 1|0 0|0 1|0 0|0 1|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0
Contig237 3310 . G T . PASS . GT 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|1 0|1 0|0 0|0 0|0 0|0
Contig237 3311 . G T . PASS . GT 0|1 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|1 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0
Contig237 3313 . C T . PASS . GT 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|1 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0 0|0
###Markdown
Parse in the header line of the file
###Code
## parse the header line starting with "#CHROM"
import gzip
with gzip.open(vcf_fn) as f:
for line in f:
if line[:6] == '#CHROM':
vcf_header = line.strip().split('\t')
vcf_header[0] = vcf_header[0][1:]
break
# The first entries of the VCF header
print vcf_header[:20]
# Read a tsv or csv file into a data frame
pd.read_csv?
# Here we read in the vcf file, which basically is a tab-separated value file.
gen_df = pd.read_csv(vcf_fn,
sep='\t',
comment='#',
header=None,
names=vcf_header,
index_col=['CHROM','POS'])
gen_df.head()
# Convert the GT strings into data frames with integers for the first and second haplotype
first_haplotype = gen_df.iloc[:, 9:].applymap(lambda s: int(s.split('|')[0]))
second_haplotype = gen_df.iloc[:, 9:].applymap(lambda s: int(s.split('|')[1]))
first_haplotype.head()
# Create a second level in the column index that specifies the haplotype
first_haplotype.columns = pd.MultiIndex.from_product([first_haplotype.columns, [0]])
second_haplotype.columns = pd.MultiIndex.from_product([second_haplotype.columns, [1]])
first_haplotype.head()
# Create a haplotype dataframe with all the data
hap_df = pd.concat([first_haplotype, second_haplotype], axis=1).sort_index(axis=1)
hap_df.head()
import subprocess
def read_hap_df(vcf_fn, chrom=None, start=None, end=None, samples=None, **kwa):
"""
A slightly more advanced vcf parser.
Reads in haplotypes from a vcf file.
Basically does the same as done in the
    cells above, but allows the user to
specify the range of the genome that
should be read in. Also allows to specify
which samples should be used.
Parameters:
vcf_fn : file path of the VCF to be read
chrom : specify which chromosome (or scaffold)
to read from the file
(only works on bgzipped, tabix-indexed files)
default ... read whole file
start: specify the start nucleotide position
(only works if chrom given on bgzipped,
tabix-indexed files); default=1
    end: specify the end nucleotide position
(only works if chrom given on bgzipped,
tabix-indexed files); default=chrom_end
samples: list of sample names to read;
default ... all samples
returns:
Pandas dataframe of index (chrom, pos)
and columns (sample, haplotype). Values
are 0 for first and 1 for second allele.
"""
# parse header
with gzip.open(vcf_fn) as f:
for line in f:
if line[:6] == '#CHROM':
vcf_header = line.strip().split('\t')
vcf_header[0] = vcf_header[0][1:]
break
# determine genomic region to read in
if chrom is not None:
assert vcf_fn[-3:] == ".gz", "Only supply chrom if vcf is bgzipped and tabix indexed"
region = chrom
if end is not None and start is None:
start = 0
if start is not None:
region += ':' + str(start)
if end is not None:
region += '-' + str(end)
else:
region = None
# If no specific samples given, use all samples in the VCF
if samples is None:
samples = vcf_header[9:]
# Either use regional input or input whole VCF
if region is None:
stdin = vcf_fn
else:
tabix_stream = subprocess.Popen(['tabix', vcf_fn, region],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdin = tabix_stream.stdout
gen_df = pd.read_csv(stdin,
sep='\t',
comment='#',
names=vcf_header,
usecols=['CHROM','POS']+samples,
index_col=['CHROM','POS'], **kwa)
first_haplotype = gen_df.applymap(lambda s: int(s.split('|')[0]))
second_haplotype = gen_df.applymap(lambda s: int(s.split('|')[1]))
first_haplotype.columns = pd.MultiIndex.from_product([first_haplotype.columns, [0]])
second_haplotype.columns = pd.MultiIndex.from_product([second_haplotype.columns, [1]])
hap_df = pd.concat([first_haplotype, second_haplotype], axis=1).sort_index(axis=1)
return hap_df
small_hap_df = read_hap_df(vcf_fn,
chrom='Contig237',
start=3000,
end=5000,
samples=['VirSWA1', 'VirSWA2', 'VirSWA3',
'VirSWA4', 'VirSWA5', 'VirSWA6'])
small_hap_df
###Output
_____no_output_____
###Markdown
indexing
###Code
# access a cell by index
# gt_df.loc['row index', 'column index']
print 'Get the second haplotype of individual',
print 'VirSWA6 for position 3203 on Contig237:',
print hap_df.loc[('Contig237', 3203), ('VirSWA6', 1)]
hap_df.loc['Contig237'].loc[ 3000:3400, 'VirSWA6']
###Output
_____no_output_____
###Markdown
investigate haplotype data frame
###Code
# get the number of SNPs and number of samples
hap_df.shape
# get the names of the sequences (contigs) in this data frame
hap_df.index.droplevel(1).unique()
###Output
_____no_output_____
###Markdown
load sample metadata
###Code
meta_df = pd.read_csv('cichlid_sample_metadata.csv', index_col=0)
meta_df.head()
###Output
_____no_output_____
###Markdown
Group a data frame using groupby Groupby groups a data frame into sub data frames. You can group on values in a specific column (or row) or by applying a function or dictionary to column or row indices. This is very handy.
###Code
# group individuals by sampling location
place_groups = meta_df.groupby('place')
# iterate through groups
for group_name, group_df in place_groups:
print group_name
print group_df
print '-----------------------------------------'
###Output
Malembo
genus species place fishing_pressure
id
OreSqu1 Oreochromis squamipinnis Malembo NaN
-----------------------------------------
Malombe
genus species place fishing_pressure
id
VirMAL1 Copadichromis virginalis Malombe 4.0
VirMAL2 Copadichromis virginalis Malombe 4.0
VirMAL3 Copadichromis virginalis Malombe 4.0
VirMAL4 Copadichromis virginalis Malombe 4.0
VirMAL5 Copadichromis virginalis Malombe 4.0
VirMAL6 Copadichromis virginalis Malombe 4.0
VirMAL7 Copadichromis virginalis Malombe 4.0
VirMAL8 Copadichromis virginalis Malombe 4.0
VirMAL9 Copadichromis virginalis Malombe 4.0
VirMAL10 Copadichromis virginalis Malombe 4.0
VirMAL11 Copadichromis virginalis Malombe 4.0
VirMAL12 Copadichromis virginalis Malombe 4.0
VirMAL13 Copadichromis virginalis Malombe 4.0
VirMAL14 Copadichromis virginalis Malombe 4.0
VirMAL15 Copadichromis virginalis Malombe 4.0
VirMAL16 Copadichromis virginalis Malombe 4.0
VirMAL17 Copadichromis virginalis Malombe 4.0
VirMAL18 Copadichromis virginalis Malombe 4.0
VirMAL19 Copadichromis virginalis Malombe 4.0
VirMAL20 Copadichromis virginalis Malombe 4.0
VirMAL21 Copadichromis virginalis Malombe 4.0
VirMAL22 Copadichromis virginalis Malombe 4.0
VirMAL23 Copadichromis virginalis Malombe 4.0
VirMAL24 Copadichromis virginalis Malombe 4.0
-----------------------------------------
South East Arm
genus species place fishing_pressure
id
VirSEA1 Copadichromis virginalis South East Arm 2.0
VirSEA2 Copadichromis virginalis South East Arm 2.0
VirSEA3 Copadichromis virginalis South East Arm 2.0
VirSEA4 Copadichromis virginalis South East Arm 2.0
VirSEA5 Copadichromis virginalis South East Arm 2.0
VirSEA6 Copadichromis virginalis South East Arm 2.0
VirSEA7 Copadichromis virginalis South East Arm 2.0
VirSEA8 Copadichromis virginalis South East Arm 2.0
VirSEA9 Copadichromis virginalis South East Arm 2.0
VirSEA10 Copadichromis virginalis South East Arm 2.0
VirSEA11 Copadichromis virginalis South East Arm 2.0
VirSEA12 Copadichromis virginalis South East Arm 2.0
VirSEA13 Copadichromis virginalis South East Arm 2.0
VirSEA14 Copadichromis virginalis South East Arm 2.0
VirSEA15 Copadichromis virginalis South East Arm 2.0
VirSEA16 Copadichromis virginalis South East Arm 2.0
VirSEA17 Copadichromis virginalis South East Arm 2.0
VirSEA18 Copadichromis virginalis South East Arm 2.0
VirSEA19 Copadichromis virginalis South East Arm 2.0
VirSEA20 Copadichromis virginalis South East Arm 2.0
VirSEA21 Copadichromis virginalis South East Arm 2.0
VirSEA22 Copadichromis virginalis South East Arm 2.0
VirSEA23 Copadichromis virginalis South East Arm 2.0
VirSEA24 Copadichromis virginalis South East Arm 2.0
-----------------------------------------
South West Arm
genus species place fishing_pressure
id
VirSWA3 Copadichromis virginalis South West Arm 1.0
VirSWA4 Copadichromis virginalis South West Arm 1.0
VirSWA5 Copadichromis virginalis South West Arm 1.0
VirSWA6 Copadichromis virginalis South West Arm 1.0
VirSWA7 Copadichromis virginalis South West Arm 1.0
VirSWA8 Copadichromis virginalis South West Arm 1.0
VirSWA9 Copadichromis virginalis South West Arm 1.0
VirSWA10 Copadichromis virginalis South West Arm 1.0
VirSWA11 Copadichromis virginalis South West Arm 1.0
VirSWA12 Copadichromis virginalis South West Arm 1.0
VirSWA13 Copadichromis virginalis South West Arm 1.0
VirSWA14 Copadichromis virginalis South West Arm 1.0
VirSWA15 Copadichromis virginalis South West Arm 1.0
VirSWA16 Copadichromis virginalis South West Arm 1.0
VirSWA17 Copadichromis virginalis South West Arm 1.0
VirSWA18 Copadichromis virginalis South West Arm 1.0
VirSWA19 Copadichromis virginalis South West Arm 1.0
VirSWA20 Copadichromis virginalis South West Arm 1.0
VirSWA21 Copadichromis virginalis South West Arm 1.0
VirSWA22 Copadichromis virginalis South West Arm 1.0
VirSWA23 Copadichromis virginalis South West Arm 1.0
VirSWA24 Copadichromis virginalis South West Arm 1.0
VirSWA25 Copadichromis virginalis South West Arm 1.0
VirSWA26 Copadichromis virginalis South West Arm 1.0
VirSWA27 Copadichromis virginalis South West Arm 1.0
-----------------------------------------
###Markdown
You can apply functions to the groups. These are applied to each group data frame. Pandas will try to give a series or data frame as result where the index contains the group names.
###Code
place_groups.apply(len)
# here individuals are grouped by the columns genus and species
meta_df.groupby(['genus', 'species']).apply(len)
# This is a Series with the same index as meta_df.
# The values are True/False depending on whether the species name is virginalis.
is_virginalis = (meta_df['species']=='virginalis')
###Output
_____no_output_____
###Markdown
What length do you expect is_virginalis to be? How many True and False entries? The above can be used for logical indexing.
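One quick way to check your answer (a short sketch using the objects defined above):

    print len(is_virginalis)
    print is_virginalis.value_counts()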
###Code
# Logical indexing. Select virginalis samples only.
meta_df[is_virginalis].groupby('place').apply(len)
###Output
_____no_output_____
###Markdown
apply operations Apply functions to our haplotype data frame.
###Code
allele_frequency = hap_df.mean(axis=1)
allele_frequency.head()
###Output
_____no_output_____
###Markdown
Plot the site frequency spectrum.
###Code
allele_frequency.hist(bins=20)
###Output
_____no_output_____
###Markdown
What is on the x and y axis? Does this spectrum look neutral to you? restrict to samples of species Copadichromis virginalis
###Code
virginalis_samples = meta_df[meta_df['species']=='virginalis'].index.values
# only virginalis samples
hap_df_vir = hap_df.loc[:, list(virginalis_samples)]
# the list conversion above is not needed in newer pandas versions
af_virginalis = hap_df_vir.mean(axis=1)
af_virginalis_variable = af_virginalis[(af_virginalis>0)&(af_virginalis<1)]
###Output
_____no_output_____
###Markdown
What does the above line of code do?
###Code
# restrict haplotype data frame to sites that are variable in virginalis
hap_df_vir = hap_df_vir.loc[af_virginalis_variable.index, :]
# or equivalently
#hap_df_vir = hap_df_vir[(af_virginalis>0)&(af_virginalis<1)]
###Output
_____no_output_____
###Markdown
Check how the number of SNPs was reduced by removing non-variable sites
###Code
print af_virginalis.shape
print af_virginalis_variable.shape
af_virginalis_variable.hist(bins=20)
###Output
_____no_output_____
###Markdown
remove low frequency variants
###Code
allele_count = hap_df.sum(axis=1)
# This is the number of non-missing entries per row.
# our data has no missing values, so it is just the row length
n_alleles = hap_df.notnull().sum(axis=1)
min_allele_count = 4
hap_min_ac = hap_df[(allele_count >= min_allele_count) & (allele_count <= n_alleles - min_allele_count)]
print hap_df.shape
print hap_min_ac.shape
(hap_min_ac.mean(axis=1)).hist(bins=20)
###Output
_____no_output_____
###Markdown
grouping by sample
###Code
# grouping can be done by a dictionary that is applied to index or columns
sample_groups = {'VirMAL1':'Malombe',
'VirMAL2':'Malombe',
'VirMAL3':'Malombe',
'VirSWA1':'South West Arm',
'VirSWA2':'South West Arm',
'VirSWA3':'South West Arm'}
sample_groups0 = hap_df_vir.groupby(sample_groups, axis=1, level=0)
sample_groups0.mean()
# group using a function
def get_location(sample_id):
return meta_df.loc[sample_id, 'place']
location_groups = hap_df_vir.groupby(get_location, axis=1, level=0)
# equivalent to above but using a lambda function
location_groups = hap_df_vir.groupby(lambda id: meta_df.loc[id, 'place'], axis=1, level=0)
###Output
_____no_output_____
###Markdown
Calculate the allele frequency for each local population.
###Code
population_af = location_groups.mean()
fig = plt.figure(figsize=(16,10))
ax = plt.gca()
axes = population_af.hist(bins=20, ax=ax)
###Output
/home/wpsg/.local/lib/python2.7/site-packages/IPython/core/interactiveshell.py:2869: UserWarning: To output multiple subplots, the figure containing the passed axes is being cleared
exec(code_obj, self.user_global_ns, self.user_ns)
###Markdown
Do the allele frequency spectra look different in the different populations? Calculate nucleotide diversity $\pi$ and divergence $d_{xy}$
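As a reminder of the estimators used in the code below: for a biallelic site with alternative allele frequency $p$ within a population, the per-site diversity is $\pi = 2p(1-p)$, and for two populations with frequencies $p_1$ and $p_2$ the per-site divergence is $d_{xy} = p_1(1-p_2) + (1-p_1)p_2$; both quantities are then averaged over the sites in a window.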
###Code
# pi = 2*p*(1-p)
# dxy = p1*(1-p2) + (1-p1)*p2
# apply a function in rolling windows of 100 SNPs
window_size = 100
rolling_window_df = population_af.loc['Contig237', 'Malombe'].rolling(window_size,
center=True, axis=0)
pi_rolling = rolling_window_df.apply(lambda s:(2*s*(1-s)).mean())
pi_rolling.plot(style='.')
def get_dxy(af):
"""
Get dxy between Malombe and
South East Arm
"""
dxy = af['Malombe']*(1-af['South East Arm']) + (1-af['Malombe'])*af['South East Arm']
return dxy.mean()
# apply function in non-overlapping 100 bp windows
window_size = 100
dxy = population_af.loc['Contig237'].groupby(lambda ix: ix // window_size).apply(get_dxy)
dxy.plot(style='.')
###Output
_____no_output_____
###Markdown
Vary the parameters of the above functions. Try to plot pi for the different chromosomes and the different populations. A more general function to calculate dxy across multiple populations
###Code
def get_divergence(af):
"""
Takes a allele frequency df
returns nucleotide diversity (diagonal)
and dxy (off-diagonal).
Attention! The estimator for pi
is biased because it does not take
resampling of the same allele into account.
For small populations pi will be downward biased.
"""
    # This looks complicated. It basically
    # uses tensor multiplication to efficiently
    # calculate all pairwise comparisons.
divergence = np.einsum('ij,ik->jk',af, 1-af) \
+ np.einsum('ij,ik->jk',1-af, af)
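    # i.e. divergence[j, k] = sum over sites i of
    # af[i, j] * (1 - af[i, k]) + (1 - af[i, j]) * af[i, k]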
# the above results in a numpy array
# put it into a data frame
divergence = pd.DataFrame(divergence,
index=af.columns,
columns=af.columns)
return divergence
get_divergence(population_af)
individual_af = hap_df_vir.groupby(axis=1, level=0).mean()
individual_dxy = get_divergence(individual_af)
#Be aware of the biased single-individual pi estimated on the diagonal.
individual_dxy
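# A possible next step (a rough sketch, not part of the original tutorial):
# turn the pairwise divergence matrix into a quick tree. A true neighbour-joining
# tree would need an extra package (e.g. scikit-bio); as an approximation we use
# scipy's average-linkage (UPGMA-like) hierarchical clustering on individual_dxy.
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform
dxy_matrix = individual_dxy.values.copy()
np.fill_diagonal(dxy_matrix, 0)  # zero the (biased) within-individual diagonal to get a valid distance matrix
tree = linkage(squareform(dxy_matrix), method='average')
dendrogram(tree, labels=list(individual_dxy.index))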
###Output
_____no_output_____
###Markdown
The above could be used to construct a neighbour-joining tree. Ipython parallel Ipython parallel is very handy to use multiple local or remote cores to do calculations. It is surprisingly easy to set up, even on a compute cluster. (However, the ipyparallel package is not installed for python 2.7 on this amazon cloud instance and I realised it too late to fix it.) Here are more resources for the parallel setup: - - A minimal example (that would work for me): In a terminal execute ```ipcluster start -n 4``` to start an ipython cluster with 4 engines
###Code
from ipyparallel import Client
rc = Client(profile="default")
lv = rc.load_balanced_view()
map_obj = lv.map_async(lambda x: x*x, range(20))
###Output
_____no_output_____
###Markdown
The above is the parallel equivalent of ```map(lambda x: x*x, range(20))``` but using the 4 engines started above.
###Code
# retrieve the result
result = map_obj.result()
###Output
_____no_output_____ |
NoteBook/.ipynb_checkpoints/Model-checkpoint.ipynb | ###Markdown
Model notebook This notebook explains the theory behind model stacking, a poor man's form of ensembling. It is an advanced technique that improves the results of traditional models. For this example a validation strategy will be defined, the base models will be set up so that they are robust to outliers, and the learning rate will be tuned to optimize their results.
###Code
# Import the required libraries
from sklearn.linear_model import ElasticNet, Lasso, BayesianRidge, LassoLarsIC
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV
import xgboost as xgb
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Validation strategy The validation strategy is fundamental for determining how valid the fit of the algorithms used is. In this case cross-validation will be used, adding a line of code that guarantees that the data are shuffled for better results. Since this is a regression problem, NMSE (Negative Mean Squared Error) will be used as the scoring metric; for classification problems r2 or BCE (Binary Cross Entropy) can be used.
###Code
n_folds = 5
def rmsle_cv(model):
kf = KFold(n_folds, shuffle=True, random_state=42).get_n_splits(train.values)
rmse= np.sqrt(-cross_val_score(model, train.values, target, scoring="neg_mean_squared_error", cv = kf))
return(rmse)
def rmsle(y, y_pred):
return np.sqrt(mean_squared_error(y, y_pred))
###Output
_____no_output_____
###Markdown
Importing the data with pandas
###Code
train = pd.read_csv('../csv/clean_train.csv')
test = pd.read_csv('../csv/clean_test.csv')
target = pd.read_csv('../csv/target.csv')
print(train.shape)
print(test.shape)
print(target.shape)
test_id = test['Id']
test.drop('Id', axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Base models The base models are the ones that will be stacked; in this case you can work with the stacked model alone or average it with other boosting-type models such as XGB or LightGBM. In this notebook it is averaged with XGB.----- Lasso This model is very sensitive to outliers, so to make it more robust it is wrapped in a pipeline with Scikit-Learn's RobustScaler(). As a refresher, a pipeline lets you combine two methods to obtain a single result; mastering this concept is incredibly useful in ML. More information here: https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.make_pipeline.html ElasticNet Like Lasso, this model is susceptible to outliers, so a pipeline with RobustScaler() is also necessary. KernelRidgeRegression This method is already robust to outliers by itself; it combines Ridge regression with the kernel trick. In this case RobustScaler() is not needed. Gradient Boost Regressor When there are outliers, it is recommended to use Gradient Boosting Regressor with a Huber-type loss function. Supporting models These models add weight to the ensemble on their own; they get a vote outside of the stacking. In this case only XGBoost will be used, but LightGBM is another good option.---- XGBoost A variant of Gradient Boosting Regressor that focuses on combining different decision-tree architectures.
###Code
class Models:
def __init__(self):
self.reg = {
'ELASTIC_NET': ElasticNet(l1_ratio=.9, random_state=3),
'GRADIENT': GradientBoostingRegressor(n_estimators=3000,
max_depth=4, max_features='sqrt',
min_samples_leaf=15, min_samples_split=10,
loss='huber', random_state =5),
'LASSO': Lasso(random_state=1),
'KERNEL_RIDGE': KernelRidge(kernel='polynomial', degree=2, coef0=2.5),
'XGB': xgb.XGBRegressor(colsample_bytree=0.4603, gamma=0.0468, max_depth=3,
min_child_weight=1.7817, n_estimators=2200,
reg_alpha=0.4640, reg_lambda=0.8571,
subsample=0.5213, silent=1,
random_state =7, nthread = -1)
}
self.params = {
'ELASTIC_NET': {
'alpha': [0.0005, 0.005, 1]
},
'GRADIENT': {
'learning_rate': [0.01, 0.05, 0.1]
},
'LASSO': {
'alpha': [0.0005, 0.005, 1]
},
'KERNEL_RIDGE': {
'alpha': [0.1, 0.5, 0.6]
},
'XGB': {
'learning_rate': [0.05, 0.06, 0.07]
}
}
def grid_training(self, X, y, name):
best_model = None
reg_dic = self.reg[name]
grid_reg = GridSearchCV(reg_dic, self.params[name], cv=3)
grid_reg.fit(X, y.values.ravel())
        # Base models made more robust to outliers using RobustScaler: Lasso and ElasticNet.
if name == 'ELASTIC_NET' or name == 'LASSO':
best_model = make_pipeline(RobustScaler(), grid_reg.best_estimator_)
else:
best_model = grid_reg.best_estimator_
return best_model
models = ['ELASTIC_NET', 'GRADIENT', 'LASSO', 'KERNEL_RIDGE', 'XGB']
base_models = []
for model in models:
base_model = Models().grid_training(train,target,model)
base_models.append(base_model)
###Output
_____no_output_____
###Markdown
Ensembling with stacked models The step-by-step behind this technique is the following: * Split the training set into two parts, train and holdout. For this the models are cloned * Train these models on the first part * Test these models on the second part * Use the fold predictions as the inputs and the correct answers (the target variable) as the high-level output Image taken from https://www.kaggle.com/getting-started/18153post103381 This method can be made more robust by adding meta-models in the last step, adding a loop that repeats the last three steps iteratively, and then averaging the base models over the test data and using these averages as meta-features in the meta-models. Since the first option already achieves a result of around 90% accuracy, I will not implement the meta-model in the solution; however, I will leave the class structure in case anyone is interested in doing so.
###Code
class AveragingModels(BaseEstimator, RegressorMixin, TransformerMixin):
def __init__(self, models):
self.models = models
    # Define clones of the base models to fit on the data
def fit(self, X, y):
self.models_ = [clone(x) for x in self.models]
        # Fit the cloned base models
for model in self.models_:
model.fit(X, y)
return self
    # Predict with the base models and average their predictions
def predict(self, X):
predictions = np.column_stack([
model.predict(X) for model in self.models_
])
return np.mean(predictions, axis=1)
averaged_models = AveragingModels(models = (base_models[0],
base_models[1],
base_models[2],
base_models[3]))
score = rmsle_cv(averaged_models)
print(" Averaged base models score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))
averaged_models.fit(train, target)
pred = averaged_models.predict(test)
xgb = base_models[4]
xgb.fit(train, target)
xgb_pred_train = xgb.predict(train)
xgb_pred = xgb.predict(test)
rmsle(target, xgb_pred_train)
enssemble = pred*0.6 + xgb_pred*0.3
sub = pd.DataFrame()
sub['Id'] = test_id
sub['SalePrice'] = np.expm1(enssemble)
print(sub.head())
###Output
Id SalePrice
0 1461 37096.338440
1 1462 49087.601088
2 1463 54685.353372
3 1464 57276.752902
4 1465 56916.214770
###Markdown
Bonus: class with meta-models
###Code
class StackingAveragedModels(BaseEstimator, RegressorMixin, TransformerMixin):
def __init__(self, base_models, meta_model, n_folds=5):
self.base_models = base_models
self.meta_model = meta_model
self.n_folds = n_folds
    # Again we fit clones of the original models
def fit(self, X, y):
self.base_models_ = [list() for x in self.base_models]
self.meta_model_ = clone(self.meta_model)
kfold = KFold(n_splits=self.n_folds, shuffle=True, random_state=156)
        # Fit the clones and create the out-of-fold predictions
        # that are needed to train the meta-model
out_of_fold_predictions = np.zeros((X.shape[0], len(self.base_models)))
for i, model in enumerate(self.base_models):
for train_index, holdout_index in kfold.split(X, y):
instance = clone(model)
self.base_models_[i].append(instance)
instance.fit(X[train_index], y[train_index])
y_pred = instance.predict(X[holdout_index])
out_of_fold_predictions[holdout_index, i] = y_pred
        # Now train the cloned meta-model using the out-of-fold predictions as new features
self.meta_model_.fit(out_of_fold_predictions, y)
return self
    # Make predictions with all the base models on the test data and use the averaged predictions as
    # meta-features for the final prediction, which is made by the meta-model
def predict(self, X):
meta_features = np.column_stack([
np.column_stack([model.predict(X) for model in base_models]).mean(axis=1)
for base_models in self.base_models_ ])
return self.meta_model_.predict(meta_features)
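# A hypothetical usage sketch (left commented out, not part of the original notebook):
# stack three of the base models defined earlier with, for example, the Lasso pipeline
# as meta-model, and score it with the same rmsle_cv helper. Note that this class
# indexes X and y positionally (X[train_index]), so plain numpy arrays should be passed.
# stacked_averaged_models = StackingAveragedModels(base_models=(base_models[0],
#                                                               base_models[1],
#                                                               base_models[3]),
#                                                  meta_model=base_models[2])
# score = rmsle_cv(stacked_averaged_models)
# print("Stacking averaged models score: {:.4f} ({:.4f})".format(score.mean(), score.std()))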
###Output
_____no_output_____ |
.ipynb_checkpoints/Building+your+Deep+Neural+Network+-+Step+by+Step+v3-checkpoint.ipynb | ###Markdown
Building your Deep Neural Network: Step by StepWelcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!- In this notebook, you will implement all the functions required to build a deep neural network.- In the next assignment, you will use these functions to build a deep neural network for image classification.**After this assignment you will be able to:**- Use non-linear units like ReLU to improve your model- Build a deeper neural network (with more than 1 hidden layer)- Implement an easy-to-use neural network class**Notation**:- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example.- Lowerscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).Let's get started! 1 - PackagesLet's first import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the main package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- dnn_utils provides some necessary functions for this notebook.- testCases provides some test cases to assess the correctness of your functions- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
###Code
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
###Output
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
/opt/conda/lib/python3.5/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
###Markdown
2 - Outline of the AssignmentTo build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:- Initialize the parameters for a two-layer network and for an $L$-layer neural network.- Implement the forward propagation module (shown in purple in the figure below). - Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$). - We give you the ACTIVATION function (relu/sigmoid). - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function. - Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.- Compute the loss.- Implement the backward propagation module (denoted in red in the figure below). - Complete the LINEAR part of a layer's backward propagation step. - We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward) - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function. - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function- Finally update the parameters. **Figure 1****Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. 3 - InitializationYou will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers. 3.1 - 2-layer Neural Network**Exercise**: Create and initialize the parameters of the 2-layer neural network.**Instructions**:- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*. - Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape.- Use zero initialization for the biases. Use `np.zeros(shape)`.
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x)*0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h)*0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(2,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0.01624345 -0.00611756]
[-0.00528172 -0.01072969]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0.00865408 -0.02301539]]
b2 = [[ 0.]]
###Markdown
**Expected output**: **W1** [[ 0.01624345 -0.00611756] [-0.00528172 -0.01072969]] **b1** [[ 0.] [ 0.]] **W2** [[ 0.00865408 -0.02301539]] **b2** [[ 0.]] 3.2 - L-layer Neural NetworkThe initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then: **Shape of W** **Shape of b** **Activation** **Shape of Activation** **Layer 1** $(n^{[1]},12288)$ $(n^{[1]},1)$ $Z^{[1]} = W^{[1]} X + b^{[1]} $ $(n^{[1]},209)$ **Layer 2** $(n^{[2]}, n^{[1]})$ $(n^{[2]},1)$ $Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ $(n^{[2]}, 209)$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ **Layer L-1** $(n^{[L-1]}, n^{[L-2]})$ $(n^{[L-1]}, 1)$ $Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ $(n^{[L-1]}, 209)$ **Layer L** $(n^{[L]}, n^{[L-1]})$ $(n^{[L]}, 1)$ $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$ $(n^{[L]}, 209)$ Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: $$ W = \begin{bmatrix} j & k & l\\ m & n & o \\ p & q & r \end{bmatrix}\;\;\; X = \begin{bmatrix} a & b & c\\ d & e & f \\ g & h & i \end{bmatrix} \;\;\; b =\begin{bmatrix} s \\ t \\ u\end{bmatrix}\tag{2}$$Then $WX + b$ will be:$$ WX + b = \begin{bmatrix} (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\ (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\ (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u\end{bmatrix}\tag{3} $$ **Exercise**: Implement initialization for an L-layer Neural Network. **Instructions**:- The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.- Use random initialization for the weight matrices. Use `np.random.rand(shape) * 0.01`.- Use zeros initialization for the biases. Use `np.zeros(shape)`.- We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers! - Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).```python if L == 1: parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01 parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))```
###Code
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1])* 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]
b2 = [[ 0.]
[ 0.]
[ 0.]]
###Markdown
**Expected output**: **W1** [[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388] [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218] [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034] [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.01185047 -0.0020565 0.01486148 0.00236716] [-0.01023785 -0.00712993 0.00625245 -0.00160513] [-0.00768836 -0.00230031 0.00745056 0.01976111]] **b2** [[ 0.] [ 0.] [ 0.]] 4 - Forward propagation module 4.1 - Linear Forward Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:- LINEAR- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. - [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)The linear forward module (vectorized over all the examples) computes the following equations:$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$where $A^{[0]} = X$. **Exercise**: Build the linear part of forward propagation.**Reminder**:The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.
###Code
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
Z = W.dot(A)+b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
###Output
Z = [[ 3.26295337 -1.23429987]]
###Markdown
**Expected output**: **Z** [[ 3.26295337 -1.23429987]] 4.2 - Linear-Activation ForwardIn this notebook, you will use two activation functions:- **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value "`a`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call: ``` pythonA, activation_cache = sigmoid(Z)```- **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:``` pythonA, activation_cache = relu(Z)``` For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
###Code
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
###Output
With sigmoid: A = [[ 0.96890023 0.11013289]]
With ReLU: A = [[ 3.43896131 0. ]]
###Markdown
**Expected output**: **With sigmoid: A ** [[ 0.96890023 0.11013289]] **With ReLU: A ** [[ 3.43896131 0. ]] **Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers. d) L-Layer Model For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID. **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model**Exercise**: Implement the forward propagation of the above model.**Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.) **Tips**:- Use the functions you had previously written - Use a for loop to replicate [LINEAR->RELU] (L-1) times- Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`.
###Code
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W'+str(l)], parameters['b'+str(l)], "relu")
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W'+str(L)], parameters['b'+str(L)], "sigmoid")
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
###Output
AL = [[ 0.17007265 0.2524272 ]]
Length of caches list = 2
###Markdown
**AL** [[ 0.17007265 0.2524272 ]] **Length of caches list ** 2 Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions. 5 - Cost functionNow you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
    cost = -np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1-Y, np.log(1-AL)))/m  # cross-entropy averaged over the m examples
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
###Output
cost = 0.414931599615
###Markdown
**Expected Output**: **cost** 0.41493159961539694 6 - Backward propagation moduleJust like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. **Reminder**: **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* <!-- For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.This is why we talk about **backpropagation**.!-->Now, similar to forward propagation, you are going to build the backward propagation in three steps:- LINEAR backward- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model) 6.1 - Linear backwardFor layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]} dA^{[l-1]})$. **Figure 4** The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need:$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$ **Exercise**: Use the 3 formulas above to implement linear_backward().
###Code
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = dZ.dot(A_prev.T)/m
db = dZ.sum(axis=1, keepdims=True)/m
dA_prev = W.T.dot(dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
###Output
dA_prev = [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]]
dW = [[-0.10076895 1.40685096 1.64992505]]
db = [[ 0.50629448]]
###Markdown
**Expected Output**: **dA_prev** [[ 0.51822968 -0.19517421] [-0.40506361 0.15255393] [ 2.37496825 -0.89445391]] **dW** [[-0.10076895 1.40685096 1.64992505]] **db** [[ 0.50629448]] 6.2 - Linear-Activation backwardNext, you will create a function that merges the two helper functions: **`linear_backward`** and the backward step for the activation **`linear_activation_backward`**. To help you implement `linear_activation_backward`, we provided two backward functions:- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:```pythondZ = sigmoid_backward(dA, activation_cache)```- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:```pythondZ = relu_backward(dA, activation_cache)```If $g(.)$ is the activation function, `sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$. **Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer.
###Code
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
# A_p, W, b = linear_cache
# Z = W.dot(A_p)+b
# g_prime = np.zeros_like(Z)
# g_prime[Z>=0]=1
# print(g_prime)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
# print(np.multiply(activation_cache, 1-activation_cache))
# dZ = np.multiply(dA, np.multiply(activation_cache, 1-activation_cache))
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
###Output
sigmoid:
dA_prev = [[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]]
dW = [[ 0.10266786 0.09778551 -0.01968084]]
db = [[-0.05729622]]
relu:
dA_prev = [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]]
dW = [[ 0.44513824 0.37371418 -0.10478989]]
db = [[-0.20837892]]
###Markdown
**Expected output with sigmoid:** dA_prev [[ 0.11017994 0.01105339] [ 0.09466817 0.00949723] [-0.05743092 -0.00576154]] dW [[ 0.10266786 0.09778551 -0.01968084]] db [[-0.05729622]] **Expected output with relu** dA_prev [[ 0.44090989 0. ] [ 0.37883606 0. ] [-0.2298228 0. ]] dW [[ 0.44513824 0.37371418 -0.10478989]] db [[-0.20837892]] 6.3 - L-Model Backward Now you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. **Figure 5** : Backward pass ** Initializing backpropagation**:To backpropagate through this network, we know that the output is, $A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):```pythondAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) derivative of cost with respect to AL```You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : $$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`.**Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
###Code
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = np.divide((1-Y), 1-AL)-np.divide(Y, AL)
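    # Note: this is algebraically identical to dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)),
    # the derivative of the cross-entropy cost with respect to AL given in the instructions above.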
### END CODE HERE ###
    # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "dAL, current_cache". Outputs: "grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)]"
### START CODE HERE ### (approx. 2 lines)
# print(len(caches[0][0]))
current_cache = caches[L-1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, "sigmoid")
#dAL, np.dot(AL-Y, current_cache[0][0].T), AL-Y
#linear_activation_backward()
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
        # Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)], grads["dW" + str(l + 1)], grads["db" + str(l + 1)]"
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l+2)], current_cache, "relu")
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
###Output
dW1 = [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]]
db1 = [[-0.22007063]
[ 0. ]
[-0.02835349]]
dA1 = [[ 0. 0.52257901]
[ 0. -0.3269206 ]
[ 0. -0.32070404]
[ 0. -0.74079187]]
###Markdown
**Expected Output** dW1 [[ 0.41010002 0.07807203 0.13798444 0.10502167] [ 0. 0. 0. 0. ] [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] db1 [[-0.22007063] [ 0. ] [-0.02835349]] dA1 [[ 0. 0.52257901] [ 0. -0.3269206 ] [ 0. -0.32070404] [ 0. -0.74079187]] 6.4 - Update ParametersIn this section you will update the parameters of the model, using gradient descent: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. **Exercise**: Implement `update_parameters()` to update your parameters using gradient descent.**Instructions**:Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
###Code
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] -= np.multiply(learning_rate,grads["dW" + str(l+1)])
parameters["b" + str(l+1)] -= np.multiply(learning_rate,grads["db" + str(l+1)])
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
###Output
W1 = [[-0.59562069 -0.09991781 -2.14584584 1.82662008]
[-1.76569676 -0.80627147 0.51115557 -1.18258802]
[-1.0535704 -0.86128581 0.68284052 2.20374577]]
b1 = [[-0.04659241]
[-1.28888275]
[ 0.53405496]]
W2 = [[-0.55569196 0.0354055 1.32964895]]
b2 = [[-0.84610769]]
|
Fase 2 - Manejo de datos y optimizacion/Tema 04 - Colecciones de datos/Apuntes/Leccion 1 plantilla- Tuplas.ipynb | ###Markdown
TuplesCollections similar to lists, with the special property that they are immutable.
###Code
tupla = (100,"Hola",[1,2,3],-50)
tupla
###Output
_____no_output_____
###Markdown
Indexing and slicing
###Code
tupla[0]
tupla[-1]
tupla[2:]
tupla[2][-1]
###Output
_____no_output_____
###Markdown
Immutability
###Code
tupla[0] = 50
###Output
_____no_output_____
###Markdown
The len() function
###Code
len(tupla)
len(tupla[2])
###Output
_____no_output_____
###Markdown
Built-in methods index()Searches for an element and returns its position in the tuple. Raises an error if the element is not found.
###Code
tupla.index(100)
tupla
tupla.index('Hola')
tupla.index('Otro')
###Output
_____no_output_____
###Markdown
count()Counts how many times an element appears in a tuple.
###Code
tupla.count(100)
tupla.count('Algo')
tupla = (100,100,100,50,10)
tupla.count(100)
###Output
_____no_output_____
###Markdown
append()?Being immutable, tuples __do not have__ methods to modify their contents.
###Code
tupla.append(10)
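# Tuples have no mutating methods; a common workaround (just a sketch) is to go through a list:
# lista = list(tupla); lista.append(10); tupla = tuple(lista)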
###Output
_____no_output_____ |
meshnet_v17.ipynb | ###Markdown
MeshNet architecture based on https://arxiv.org/pdf/1612.00940.pdf: "End-to-end learning of brain tissue segmentation from imperfect labeling", Jun 2017, by Alex Fedorov∗†, Jeremy Johnson‡, Eswar Damaraju∗†, Alexei Ozerin§, Vince Calhoun∗†, Sergey Plis∗†. Libraries and Global Parameters
###Code
import os
import cv2
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from skimage.io import imread, imshow, imread_collection, concatenate_images
from skimage.util import img_as_bool, img_as_uint, img_as_ubyte, img_as_int
from skimage.transform import resize
from skimage.morphology import label
import random
from random import randint
from keras import regularizers
from keras.models import Model, load_model
from keras.optimizers import Adam, SGD, RMSprop
from keras.layers import Input, concatenate, Conv2D, MaxPooling2D, Activation, Dense, \
UpSampling2D, BatchNormalization, add, Dropout, Flatten
from keras.layers.core import Lambda
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau, LearningRateScheduler
from keras import backend as K
from keras.losses import binary_crossentropy, sparse_categorical_crossentropy
####### UPDATE THIS #########
#############################
model_num = 17
#############################
#############################
model_checkpoint_file= 'meshnet_v' + str(model_num) +'.h5'
submission_filename = 'meshnet_v' + str(model_num) +'_mesh_pred.csv'
# Root folders for test and training data
train_root = "./stage1_train"
test_root = "./stage1_test"
# Size we resize all images to
#image_size = (128,128)
img_height = 128
img_width = 128
img_channels = 1 # 1 for B&W, 3 for RGB
import warnings
warnings.filterwarnings('ignore', category=UserWarning, module='skimage')
#warnings.resetwarnings()
###Output
Using TensorFlow backend.
###Markdown
Preparing the Data
###Code
# Import images (either test or training)
# Decolorize, resize, store in array, and save filenames, etc.
def import_images(root):
dirs = os.listdir(root)
filenames=[os.path.join(root,file_id) + "/images/"+file_id+".png" for file_id in dirs]
images=[imread(imagefile,as_grey=True) for imagefile in filenames]
resized_images = [ resize(image,(img_width,img_height)) for image in images]
Array = np.reshape(np.array(resized_images),
(len(resized_images),img_height,img_width,img_channels))
#Array = np.reshape(np.array(img_as_ubyte(resized_images),dtype=np.uint8).astype(np.uint8),
# (len(resized_images),img_height,img_width,img_channels))
print(Array.mean())
print(Array.std())
# Normalize inputs
Array = ((Array - Array.mean())/Array.std())
print(Array.mean())
print(Array.std())
print(images[0].dtype)
print(resized_images[0].dtype)
print(Array[0,0,0,0].dtype)
return Array, resized_images, images, filenames, dirs
train_X, resized_train_images, \
train_images, train_filenames, train_dirs = import_images(train_root)
## Import Training Masks
# this takes longer than the training images because we have to
# combine a lot of mask files
# This function creates a single combined mask image
# when given a list of masks
# Probably a computationally faster way to do this...
def collapse_masks(mask_list):
for i, mask_file in enumerate(mask_list):
if i != 0:
# combine mask with previous mask in list
mask = np.maximum(mask, imread(os.path.join(train_root,mask_file)))
else:
# read first mask in
mask = imread(os.path.join(train_root,mask_file))
return mask
# Import all the masks
train_mask_dirs = [ os.path.join(path, 'masks') for path in os.listdir(train_root) ]
train_mask_files = [ [os.path.join(dir,file) for file in os.listdir(os.path.join(train_root,dir)) ] for dir in train_mask_dirs]
train_masks = [ collapse_masks(mask_files) for mask_files in train_mask_files ]
resized_train_masks = [ img_as_bool(resize(image,(img_width,img_height))) for image in train_masks]
train_Y = np.reshape(np.array(resized_train_masks),(len(resized_train_masks),img_height,img_width,img_channels))
# Plot images side by side for a list of datasets
def plot_side_by_side(ds_list,image_num,size=(15,10)):
print('Image #: ' + str(image_num))
fig = plt.figure(figsize=size)
for i in range(len(ds_list)):
ax1 = fig.add_subplot(1,len(ds_list),i+1)
ax1.imshow(ds_list[i][image_num])
plt.show()
# Plots random corresponding images and masks
def plot_check(ds_list,rand_imgs=None,img_nums=None,size=(15,10)):
if rand_imgs != None:
for i in range(rand_imgs):
plot_side_by_side(ds_list, randint(0,len(ds_list[0])-1),size=size)
if img_nums != None:
for i in range(len(img_nums)):
plot_side_by_side(ds_list,img_nums[i],size=size)
plot_check([train_images,train_masks],rand_imgs=1,size=(10,7))
# Check size of arrays we are inputting to model
# This is important! We need the datasets to be as
# small as possible to reduce computation time
# Check physical size
print(train_X.shape)
print(train_Y.shape)
# Check memory size
print(train_X.nbytes)
print(train_Y.nbytes)
# Check datatypes
print(train_X.dtype)
print(train_Y.dtype)
plot_check([resized_train_images,np.squeeze(train_X,axis=3)],rand_imgs=1,size=(10,7))
###Output
Image #: 403
###Markdown
Now Let's Build the Model
###Code
# Loss and metric functions for the neural net
def dice_coef(y_true, y_pred):
y_true_f = K.flatten(y_true)
y_pred = K.cast(y_pred, 'float32')
y_pred_f = K.cast(K.greater(K.flatten(y_pred), 0.5), 'float32')
intersection = y_true_f * y_pred_f
score = 2. * K.sum(intersection) / (K.sum(y_true_f) + K.sum(y_pred_f))
return score
def dice_loss(y_true, y_pred):
smooth = 1.
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = y_true_f * y_pred_f
score = (2. * K.sum(intersection) + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
return 1. - score
def bce_dice_loss(y_true, y_pred):
return binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
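# Quick sanity check for the Dice metric (a sketch, assuming the Keras backend K imported above):
# identical binary masks should give a coefficient of ~1.0.
#   y = K.constant([[1., 0., 1., 0.]])
#   print(K.eval(dice_coef(y, y)))   # ~1.0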
def create_block(x, filters=21, filter_size=(3, 3), activation='relu',dil_rate=1,dropout_rate=0.25):
x = Conv2D(filters, filter_size, padding='same', activation=activation, dilation_rate = dil_rate) (x)
# x = BatchNormalization() (x)
x = Dropout(dropout_rate) (x)
return x
## master function for creating a net
def get_net(
input_shape=(img_height, img_width,img_channels),
loss=binary_crossentropy,
lr=0.001,
n_class=1,
nb_filters=21,
dropout=0.2
):
inputs = Input(input_shape)
# Create layers
net_body = create_block(inputs,filters=nb_filters,dropout_rate=dropout)
net_body = create_block(net_body,filters=nb_filters,dropout_rate=dropout)
net_body = create_block(net_body,filters=nb_filters,dropout_rate=dropout)
net_body = create_block(net_body,filters=nb_filters,dropout_rate=dropout)
net_body = create_block(net_body,filters=nb_filters,dropout_rate=dropout,dil_rate=2)
net_body = create_block(net_body,filters=nb_filters,dropout_rate=dropout,dil_rate=4)
net_body = create_block(net_body,filters=nb_filters,dropout_rate=dropout,dil_rate=8)
net_body = create_block(net_body,filters=nb_filters,dropout_rate=dropout)
classify = Conv2D(n_class,(1,1),activation='sigmoid') (net_body)
model = Model(inputs=inputs, outputs=classify)
model.compile(optimizer=Adam(lr), loss=loss, metrics=[bce_dice_loss, dice_coef])
return model
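# Note on the dilated-convolution body built above (a sketch of the receptive-field math):
# a 3x3 convolution with dilation rate d widens the receptive field by 2*d, so the stack with
# rates 1,1,1,1,2,4,8,1 reaches roughly 1 + 2*(1+1+1+1+2+4+8+1) = 39 pixels without any
# pooling, which is the core idea of the MeshNet architecture.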
#### CREATE MODEL ##########################################################
my_model = get_net(nb_filters=21,dropout=0.1,loss=binary_crossentropy)
############################################################################
print(my_model.summary())
# Fit model
earlystopper = EarlyStopping(patience=10, verbose=1)
checkpointer = ModelCheckpoint(model_checkpoint_file, verbose=1, save_best_only=True)
reduce_plateau = ReduceLROnPlateau(monitor='val_loss',
factor=0.2,
patience=4,
verbose=1,
# min_lr=0.00001,
epsilon=0.001,
mode='auto')
results = my_model.fit(train_X, train_Y, validation_split=0.1, batch_size=20, epochs=100, verbose=1,
shuffle=True, callbacks=[ earlystopper, checkpointer, reduce_plateau])
for val_loss in results.history['val_loss']:
print(round(val_loss,3))
#print(results.history)
## Import Test Data and Make Predictions with Model
# Import images (either test or training)
# Decolorize, resize, store in array, and save filenames, etc.
test_X, resized_test_images, \
test_images, test_filenames, test_dirs = import_images(test_root)
# Load model and make predictions on test data
final_model = load_model(model_checkpoint_file, custom_objects={'dice_coef': dice_coef, 'bce_dice_loss':bce_dice_loss})
preds_test = final_model.predict(test_X, verbose=1)
preds_test_t = (preds_test > 0.5)
# Create list of upsampled test masks
preds_test_upsampled = []
for i in range(len(preds_test)):
preds_test_upsampled.append(resize(np.squeeze(preds_test[i]),
(test_images[i].shape[0], test_images[i].shape[1]),
mode='constant', preserve_range=True))
preds_test_upsampled_bool = [ (mask > 0.5).astype(bool) for mask in preds_test_upsampled ]
plot_check([test_images,preds_test_upsampled,preds_test_upsampled_bool],rand_imgs=2)
# Run-length encoding stolen from https://www.kaggle.com/rakhlin/fast-run-length-encoding-python
def rle_encoding(x):
dots = np.where(x.T.flatten() == 1)[0]
run_lengths = []
prev = -2
for b in dots:
if (b>prev+1): run_lengths.extend((b + 1, 0))
run_lengths[-1] += 1
prev = b
return run_lengths
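# Tiny worked example (sketch): for a mask np.array([[1, 0], [1, 1]]) the transpose flattens
# to [1, 1, 0, 1], so rle_encoding returns [1, 2, 4, 1], i.e. a run of length 2 starting at
# pixel 1 and a run of length 1 starting at pixel 4 (1-indexed, column-major order).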
def prob_to_rles(x, cutoff=0.5):
lab_img = label(x > cutoff)
for i in range(1, lab_img.max() + 1):
yield rle_encoding(lab_img == i)
def generate_prediction_file(image_names,predictions,filename):
new_test_ids = []
rles = []
for n, id_ in enumerate(image_names):
rle = list(prob_to_rles(predictions[n]))
rles.extend(rle)
new_test_ids.extend([id_] * len(rle))
sub = pd.DataFrame()
sub['ImageId'] = new_test_ids
sub['EncodedPixels'] = pd.Series(rles).apply(lambda x: ' '.join(str(y) for y in x))
sub.to_csv(filename, index=False)
generate_prediction_file(test_dirs,preds_test_upsampled_bool,submission_filename)
###Output
_____no_output_____ |
docs/_static/notebooks/tutorial3.ipynb | ###Markdown
Tutorial 3 Plotting Red Noise Spectra
###Code
import la_forge.core as co
import la_forge.rednoise as rn
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
coredir = '/Users/hazboun/software_development/la_forge/tests/data/cores/'
c0 = co.Core(corepath=coredir+'J1713+0747_plaw_dmx.core',
label='NG12.5yr Noise Run: Power Law Red Noise')
c1 = co.Core(corepath=coredir+'J1713+0747_fs_dmx.core',
label='NG12.5yr Noise Run: Free Spectral Red Noise')
rn.plot_rednoise_spectrum('J1713+0747',
[c0,c1],
rn_types=['_red_noise','_red_noise'])
rn.plot_rednoise_spectrum('J1713+0747',
[c0,c1],
free_spec_ul=True,
rn_types=['_red_noise','_red_noise'],
Colors=['C0','C1'],
n_plaw_realizations=100)
rn.plot_rednoise_spectrum('J1713+0747',
[c0,c1],
rn_types=['_red_noise','_red_noise'],
free_spec_violin=True,
Colors=['C0','C1'])
###Output
Plotting Powerlaw RN Params:Tspan = 12.4 yrs, 1/Tspan = 2.6e-09
Red noise parameters: log10_A = -13.97, gamma = 1.02
Plotting Free Spectral RN Params:Tspan = 12.4 yrs f_min = 2.6e-09
|
Python Data Science Toolbox -Part 1/Lambda functions and error-handling/01.Writing a lambda function you already know.ipynb | ###Markdown
Some function definitions are simple enough that they can be converted to a lambda function. By doing this, you write fewer lines of code, which is pretty awesome and will come in handy, especially when you're writing and maintaining big programs. In this exercise, you will use what you know about lambda functions to convert a function that does a simple task into a lambda function. Take a look at this function definition:>def echo_word(word1, echo): """Concatenate echo copies of word1.""" words = word1 * echo return words The function echo_word takes 2 parameters: a string value, word1, and an integer value, echo. It returns a string that is a concatenation of echo copies of word1. Your task is to convert this simple function into a lambda function. Define the lambda function echo_word using the variables word1 and echo. Replicate what the original function definition for echo_word() does above. Call echo_word() with the string argument 'hey' and the value 5, in that order. Assign the call to result.
###Code
# Define echo_word as a lambda function: echo_word
echo_word = (lambda word1,echo: word1*echo)
# Call echo_word: result
result = echo_word('hey',5)
# Print result
print(result)
###Output
heyheyheyheyhey
|
solutions by participants/ex2/ex2-ashishar-8cnot.ipynb | ###Markdown
Exercise 2 - Shor's algorithm Historical backgroundIn computing, we often measure the performance of an algorithm by how it grows with the size of the input problem. For example, addition has an algorithm that grows linearly with the size of the numbers we're adding. There are some computing problems for which the best algorithms we have grow _exponentially_ with the size of the input, and this means inputs with a relatively modest size are too big to solve using any computer on earth. We're so sure of this, much of the internet's security depends on certain problems being unsolvable.In 1994, Peter Shor showed that it’s possible to factor a number into its primes efficiently on a quantum computer.[1] This is big news, as the best classical algorithm we know of is one of these algorithms that grows exponentially. And in fact, [RSA encryption](https://en.wikipedia.org/wiki/RSA_(cryptosystem)) relies on factoring large enough numbers being infeasible. To factor integers that are too big for our current classical computers will require millions of qubits and gates, and these circuits are far too big to run on today’s quantum computers successfully.So how did Lieven M.K. Vandersypen, Matthias Steffen, Gregory Breyta, Costantino S. Yannoni, Mark H. Sherwood and Isaac L. Chuang manage to factor 15 on a quantum computer, all the way back in 2001?![2]The difficulty in creating circuits for Shor’s algorithm is creating the circuit that computes a controlled $ay \bmod N$. While we know how to create these circuits using a polynomial number of gates, these are still too large for today’s computers. Fortunately, if we know some information about the problem a priori, then we can sometimes ‘cheat’ and create more efficient circuits.To run this circuit on the hardware available to them, the authors of the above paper found a very simple circuit that performed $7y \bmod 15$. This made the circuit small enough to run on their hardware. By the end of this exercise, you will have created a circuit for $35y \bmod N$ that can be used in Shor’s algorithm and can run on `ibmq_santiago`.If you want to understand what's going on in this exercise, you should check out the [Qiskit Textbook page on Shor's algorithm](https://qiskit.org/textbook/ch-algorithms/shor.html), but if this is too involved for you, you can complete the exercise without this. References1. Shor, Peter W. "Algorithms for quantum computation: discrete logarithms and factoring." Proceedings 35th annual symposium on foundations of computer science. Ieee, 1994.1. Vandersypen, Lieven MK, et al. "Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance." Nature 414.6866 (2001): 883-887. tl;dr: Shor’s algorithmThere is an algorithm called [_quantum phase estimation_](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html) that tells us the phase a gate introduces to a certain type of state. For example, inputs to phase estimation algorithm could be the state $|1\rangle$ and the gate $Z$. If the $Z$-gate acts on the state $|1\rangle$, we get back the same state with an added global phase of $\pi$:$$Z|1\rangle = -|1\rangle = e^{i\pi} |1\rangle$$And the quantum phase estimation algorithm could work this out for us. 
You can see another example [here](https://qiskit.org/textbook/ch-algorithms/quantum-phase-estimation.html2.-Example:-T-gate-).Shor showed that if we do phase estimation on a gate, $U$, that has the behavior $U|y\rangle = |a y\bmod N\rangle$, we can quickly get some information about $N$’s factors. The problemIn this exercise, we will factor 35 by doing phase estimation on a circuit that implements $13y \bmod 35$. The exercise is to create a circuit that does this, and is also small enough to run on `ibmq_santiago`! This is not an easy task, so the first thing we’re going to do is cheat.A detail of Shor’s algorithm is that our circuit only needs to work on states we can reach through applying $U$ to the starting state $|1\rangle$. I.e. we can use _any_ circuit that has the behavior: $$\begin{aligned}U|1\rangle &= |13\rangle \\UU|1\rangle &= |29\rangle \\UUU|1\rangle &= |27\rangle \\UUUU|1\rangle &= |1\rangle \\\end{aligned}$$So how can we make this easier for us? Since we only need to correctly transform 4 different states, we can encode these onto two qubits. For this exercise, we will choose to map the 2-qubit computational basis states to the numbers like so:$$\begin{aligned}|1\rangle &\rightarrow |00\rangle \\|13\rangle &\rightarrow |01\rangle \\|29\rangle &\rightarrow |10\rangle \\|27\rangle &\rightarrow |11\rangle \\\end{aligned}$$Why is this “cheating”? Well, to take advantage of this optimization, we need to know all the states $U$ is going to affect, which means we have to compute $ay \bmod N$ until we get back to 1 again, and that means we know the period of $a^x \bmod N$ and can therefore get the factors of $N$. Any optimization like this, in which we use information that would tell us the value $r$, is obviously not going to scale to problems that classical computers can’t solve. But the purpose of this exercise is just to verify that Shor’s algorithm does in fact work as intended, and we’re not going to worry about the fact that we cheated to get a circuit for $U$.**Exercise 2a:** Create a circuit ($U$) that performs the transformation:$$\begin{aligned}U|00\rangle &= |01\rangle \\U|01\rangle &= |10\rangle \\U|10\rangle &= |11\rangle \\U|11\rangle &= |00\rangle \\\end{aligned}$$and is controlled by another qubit. The circuit will act on a 2-qubit target register named 'target', and be controlled by another single-qubit register named 'control'. You should assign your finished circuit to the variable '`cu`'.
###Code
from qiskit import QuantumCircuit
from qiskit import QuantumRegister, QuantumCircuit
from qiskit import transpile
c = QuantumRegister(1, 'control')
t = QuantumRegister(2, 'target')
cu = QuantumCircuit(c, t, name="Controlled 13^x mod 35")
# WRITE YOUR CODE BETWEEN THESE LINES - START
initial_state = [0,1] # Define initial_state as |1>
# cu.initialize(initial_state, 0) # Apply initialisation operation to the 0th qubit
#qc.initialize([1,0], 1) # Apply initialisation operation to the 0th qubit
dummy=QuantumRegister(1,'dummy')
cu.cx(0,1)
cu.x(1)
cu.ccx(0,1,2)
cu.x(1)
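# The four gates above implement a controlled "+1 mod 4" on the target register in the chosen
# encoding: the CNOT flips the low target bit, and the X gates around the Toffoli make its
# second control act on the *original* value of that bit, so the high bit picks up the carry
# exactly when the low bit was 1.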
# cu=transpile(cu, basis_gates=['cx','rz', 'sx','x'])
# WRITE YOUR CODE BETWEEN THESE LINES - END
cu.draw('mpl')
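# Optional sanity check (a sketch, not needed for grading): evolve computational basis states
# through the gate and confirm the cycle |00> -> |01> -> |10> -> |11> -> |00> when the control
# qubit is 1. Qiskit label strings read q2 q1 q0, so '001' means control = 1, target = |00>.
#   from qiskit.quantum_info import Statevector
#   for label in ['001', '011', '101', '111']:
#       print(label, '->', Statevector.from_label(label).evolve(cu).probabilities_dict())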
###Output
_____no_output_____
###Markdown
And run the cell below to check your answer:
###Code
# Check your answer using following code
from qc_grader import grade_ex2a
grade_ex2a(cu)
###Output
Grading your answer for ex2/part1. Please wait...
Congratulations 🎉! Your answer is correct.
###Markdown
Congratulations! You’ve completed the hard part. We read the output of the phase estimation algorithm by measuring qubits, so we will need to make sure our 'counting' register contains enough qubits to read off $r$. In our case, $r = 4$, which means we only need $\log_2(4) = 2$ qubits (cheating again because we know $r$ beforehand), but since Santiago has 5 qubits, and we've only used 2 for the 'target' register, we'll use all remaining 3 qubits as our counting register.To do phase estimation on $U$, we need to create circuits that perform $U^{2^x}$ ($U$ repeated $2^x$ times) for each qubit (with index $x$) in our register of $n$ counting qubits. In our case this means we need three circuits that implement:$$ U, \; U^2, \; \text{and} \; U^4 $$So the next step is to create a circuit that performs $U^2$ (i.e. a circuit equivalent to applying $U$ twice).**Exercise 2b:** Create a circuit ($U^2$) that performs the transformation:$$\begin{aligned}U|00\rangle &= |10\rangle \\U|01\rangle &= |11\rangle \\U|10\rangle &= |00\rangle \\U|11\rangle &= |01\rangle \\\end{aligned}$$and is controlled by another qubit. The circuit will act on a 2-qubit target register named 'target', and be controlled by another single-qubit register named 'control'. You should assign your finished circuit to the variable '`cu2`'.
###Code
c = QuantumRegister(1, 'control')
t = QuantumRegister(2, 'target')
cu2 = QuantumCircuit(c, t)
# WRITE YOUR CODE BETWEEN THESE LINES - START
cu2.cx(0,2)
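# In the chosen encoding U^2 adds 2 (mod 4), which only toggles the high target bit, so a
# single CNOT from the control onto target[1] (qubit index 2 in this circuit) is sufficient.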
# WRITE YOUR CODE BETWEEN THESE LINES - END
cu2.draw('mpl')
###Output
_____no_output_____
###Markdown
And you can check your answer below:
###Code
# Check your answer using following code
from qc_grader import grade_ex2b
grade_ex2b(cu2)
###Output
Grading your answer for ex2/part2. Please wait...
Congratulations 🎉! Your answer is correct.
###Markdown
Finally, we also need a circuit that is equivalent to applying $U$ four times (i.e. we need the circuit $U^4$). **Exercise 2c:** Create a circuit ($U^4$) that performs the transformation:$$\begin{aligned}U|00\rangle &= |00\rangle \\U|01\rangle &= |01\rangle \\U|10\rangle &= |10\rangle \\U|11\rangle &= |11\rangle \\\end{aligned}$$and is controlled by another qubit. The circuit will act on a 2-qubit target register named 'target', and be controlled by another single-qubit register named 'control'. You should assign your finished circuit to the variable '`cu4`'. _Hint: The best solution is very simple._
###Code
c = QuantumRegister(1, 'control')
t = QuantumRegister(2, 'target')
cu4 = QuantumCircuit(c, t)
# WRITE YOUR CODE BETWEEN THESE LINES - START
# WRITE YOUR CODE BETWEEN THESE LINES - END
cu4.draw('mpl')
###Output
_____no_output_____
###Markdown
You can check your answer using the code below:
###Code
# Check your answer using following code
from qc_grader import grade_ex2c
grade_ex2c(cu4)
###Output
Grading your answer for ex2/part3. Please wait...
Congratulations 🎉! Your answer is correct.
###Markdown
**Exercise 2 final:** Now we have controlled $U$, $U^2$ and $U^4$, we can combine this into a circuit that carries out the quantum part of Shor’s algorithm.The initialization part is easy: we need to put the counting register into the state $|{+}{+}{+}\rangle$ (which we can do with three H-gates) and we need the target register to be in the state $|1\rangle$ (which we mapped to the computational basis state $|00\rangle$, so we don’t need to do anything here). We'll do all this for you._Your_ task is to create a circuit that carries out the controlled-$U$s, that will be used in-between the initialization and the inverse quantum Fourier transform. More formally, we want a circuit:$$CU_{c_0 t}CU^2_{c_1 t}CU^4_{c_2 t}$$Where $c_0$, $c_1$ and $c_2$ are the three qubits in the ‘counting’ register, $t$ is the ‘target’ register, and $U$ is as defined in the first part of this exercise. In this notation, $CU_{a b}$ means $CU$ is controlled by $a$ and acts on $b$. An easy solution to this is to simply combine the circuits `cu`, `cu2` and `cu4` that you created above, but you will most likely find a more efficient circuit that has the same behavior! Your circuit can only contain [CNOTs](https://qiskit.org/documentation/stubs/qiskit.circuit.library.CXGate.html) and single qubit [U-gates](https://qiskit.org/documentation/stubs/qiskit.circuit.library.UGate.html). Your score will be the number of CNOTs you use (less is better), as multi-qubit gates are usually much more difficult to carry out on hardware than single-qubit gates. If you're struggling with this requirement, we've included a line of code next to the submission that will convert your circuit to this form, although you're likely to do better by hand.
###Code
# Code to combine your previous solutions into your final submission
from qiskit import transpile
cqr = QuantumRegister(3, 'control')
tqr = QuantumRegister(2, 'target')
cux = QuantumCircuit(cqr, tqr)
solutions = [cu, cu2, cu4]
for i in range(3):
cux = cux.compose(solutions[i], [cqr[i], tqr[0], tqr[1]])
cux=transpile(cux, basis_gates=['cx','u'])
cux.draw('mpl')
# Check your answer using following code
from qc_grader import grade_ex2_final
# Uncomment the two lines below if you need to convert your circuit to CNOTs and single-qubit gates
from qiskit import transpile
cux = transpile(cux, basis_gates=['cx','u'])
grade_ex2_final(cux)
###Output
Grading your answer for ex2/part4. Please wait...
Congratulations 🎉! Your answer is correct.
Your cost is 8.
Feel free to submit your answer.
###Markdown
Once you're happy with the circuit, you can submit it below:
###Code
# Submit your answer. You can re-submit at any time.
from qc_grader import submit_ex2_final
submit_ex2_final(cux)
###Output
Submitting your answer for ex2/part4. Please wait...
Success 🎉! Your answer has been submitted.
###Markdown
Congratulations! You've finished the exercise. Read on to see your circuit used to factor 35, and see how it performs. Using your circuit to factorize 35 The code cell below takes your submission for the exercise and uses it to create a circuit that will give us $\tfrac{s}{r}$, where $s$ is a random integer between $0$ and $r-1$, and $r$ is the period of the function $f(x) = 13^x \bmod 35$.
###Code
from qiskit.circuit.library import QFT
from qiskit import ClassicalRegister
# Create the circuit object
cr = ClassicalRegister(3)
shor_circuit = QuantumCircuit(cqr, tqr, cr)
# Initialise the qubits
shor_circuit.h(cqr)
# Add your circuit
shor_circuit = shor_circuit.compose(cux)
# Perform the inverse QFT and extract the output
shor_circuit.append(QFT(3, inverse=True), cqr)
shor_circuit.measure(cqr, cr)
shor_circuit.draw('mpl')
###Output
_____no_output_____
###Markdown
Let's transpile this circuit and see how large it is, and how many CNOTs it uses:
###Code
from qiskit import Aer, transpile
from qiskit.visualization import plot_histogram
qasm_sim = Aer.get_backend('aer_simulator')
tqc = transpile(shor_circuit, basis_gates=['u', 'cx'], optimization_level=3)
print(f"circuit depth: {tqc.depth()}")
print(f"circuit contains {tqc.count_ops()['cx']} CNOTs")
###Output
circuit depth: 30
circuit contains 17 CNOTs
###Markdown
And let's see what we get:
###Code
counts = qasm_sim.run(tqc).result().get_counts()
plot_histogram(counts)
###Output
_____no_output_____
###Markdown
Assuming everything has worked correctly, we should see equal probability of measuring the numbers $0$, $2$, $4$ and $6$. This is because phase estimation gives us $2^n \cdot \tfrac{s}{r}$, where $n$ is the number of qubits in our counting register (here $n = 3$), $s$ is a random integer between $0$ and $r-1$, and $r$ is the number we're trying to calculate. Let's convert these to fractions that tell us $s/r$ (this is something we can easily calculate classically):
###Code
from fractions import Fraction
n = 3 # n is number of qubits in our 'counting' register
# Cycle through each measurement string
for measurement in counts.keys():
# Convert the binary string to an 'int', and divide by 2^n
decimal = int(measurement, 2)/2**n
# Use the continued fractions algorithm to convert to form a/b
print(Fraction(decimal).limit_denominator(35))
###Output
1/2
3/4
0
1/4
###Markdown
We can see that the denominator of some of the results tells us the correct answer $r = 4$. We can verify $r=4$ quickly:
###Code
13**4 % 35
###Output
_____no_output_____
###Markdown
So how do we get the factors from this? Since $a^r \equiv 1 \pmod N$, the product $(a^{r/2}-1)(a^{r/2}+1) = a^r - 1$ is a multiple of $N$. There is then a high probability that the greatest common divisor of $N$ and either $a^{r/2}-1$ or $a^{r/2}+1$ is a factor of $N$, and the greatest common divisor is also something we can easily calculate classically.
###Code
from math import gcd # Greatest common divisor
for x in [-1, 1]:
print(f"Guessed factor: {gcd(13**(4//2)+x, 35)}")
###Output
Guessed factor: 7
Guessed factor: 5
###Markdown
We only need to find one factor, and can use it to divide $N$ to find the other factor. But in this case, _both_ $a^{r/2}-1$ and $a^{r/2}+1$ give us $35$'s factors. We can again verify this is correct:
###Code
7*5
###Output
_____no_output_____
###Markdown
Running on `ibmq_santiago` We promised this would run on Santiago, so here we will show you how to do that. In this example we will use a simulated Santiago device for convenience, but you can switch this out for the real device if you want:
###Code
from qiskit.test.mock import FakeSantiago
from qiskit import assemble
from qiskit.visualization import plot_histogram
santiago = FakeSantiago()
real_device = False
## Uncomment this code block to run on the real device
#from qiskit import IBMQ
#IBMQ.load_account()
#provider = IBMQ.get_provider(hub='ibm-q', group='open', project='main')
#santiago = provider.get_backend('ibmq_santiago')
#real_device = True
# We need to transpile for Santiago
tqc = transpile(shor_circuit, santiago, optimization_level=3)
if not real_device:
tqc = assemble(tqc)
# Run the circuit and print the counts
counts = santiago.run(tqc).result().get_counts()
plot_histogram(counts)
###Output
_____no_output_____ |
Diversos/Final Capstone week3.ipynb | ###Markdown
Segmenting and Clustering Neighborhoods in Toronto: First Part
###Code
#downloading all the dependencies needed
import numpy as np # library to handle data in a vectorized manner
import pandas as pd # library for data analysis
import json # library to handle JSON files
!conda install -c conda-forge geopy --yes
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
import requests # library to handle requests
from pandas.io.json import json_normalize # tranform JSON file into a pandas dataframe
# import k-means from clustering stage
from sklearn.cluster import KMeans
!conda install -c conda-forge folium=0.5.0 --yes
import matplotlib.cm as cm # colormaps, needed later to color the clusters
import matplotlib.colors as colors # color conversion utilities, needed later to color the clusters
import folium # map rendering library
print('Libraries imported.')
#Using the pandas to extract the table of postal codes of the Wikpedia page.
url = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
df_list = pd.read_html(url) #reading the html file.
df = df_list[0] # defining a dataframe
#Ignoring cells with a borough that is Not assigned.
df = df[df.Borough != 'Not assigned'].reset_index(drop=True)
df = df.rename(columns={"Postcode": "Postcode","Borough":"Borough","Neighbourhood":"Neighborhood"})
#assigning the Borough name to cells whose Neighborhood is 'Not assigned'.
df["Neighborhood"] = df.apply(lambda row: row.Neighborhood if row.Neighborhood !="Not assigned" else row.Borough, axis=1)
#gathering the neighborhoods with the same postal code in a Borough.
df = df.groupby(['Postcode', 'Borough'])['Neighborhood'].apply(', '.join).reset_index()
df
df.shape
###Output
_____no_output_____
###Markdown
Second Part
###Code
#Using the Geocoder package or the csv file to create the new dataframe:
url1 = "https://cocl.us/Geospatial_data"
df_postal = pd.read_csv(url1)
df_postal = df_postal.rename(columns={"Postal Code": "Postcode","Latitude":"Latitude","Longitude":"Longitude"})
df_toronto = pd.merge(df, df_postal, on='Postcode', how='left')
df_toronto
df_toronto.shape
###Output
_____no_output_____
###Markdown
Third Part
###Code
#Getting the latitude and longitude values of toronto
address = 'Toronto, CA'
geolocator = Nominatim(user_agent="CA_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of Toronto are {}, {}.'.format(latitude, longitude))
# create map of Toronto latitude and longitude values
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, neighborhood in zip(df_toronto['Latitude'], df_toronto['Longitude'], df_toronto['Neighborhood']):
label = '{}'.format(neighborhood)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
##working only with boroughs that contain 'Toronto' in the name.
df_filtered = df_toronto[df_toronto['Borough'].str.contains("Toronto")]
df_filtered.head(15)
df_filtered.shape
#Analyzing East Toronto Borough to replicate required analysis
east_toronto = df_filtered[df_filtered['Borough'] == 'East Toronto'].reset_index(drop=True)
east_toronto
###Output
_____no_output_____
###Markdown
Let's get the geographical coordinates of East Toronto.
###Code
address = 'East Toronto, Toronto CA'
geolocator = Nominatim(user_agent="ca_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of East Toronto are {}, {}.'.format(latitude, longitude))
# create map of East Toronto using latitude and longitude values
map_eastToronto = folium.Map(location=[latitude, longitude], zoom_start=11)
# add markers to map
for lat, lng, neighborhood in zip(east_toronto['Latitude'], east_toronto['Longitude'], east_toronto['Neighborhood']):
label = '{}'.format(neighborhood)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='red',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_eastToronto)
map_eastToronto
#Utilizing the Foursquare API to explore the neighborhoods and segment them.
#Define Foursquare Credentials and Version
CLIENT_ID = 'T1F14ZSMPK4DKKGTZASK3MLJU0RMKYNKQXTIYYW3SSTJHY4W' # your Foursquare ID
CLIENT_SECRET = 'EBLGELR3FEVXWRSTVI3EGAFDZFTEY5CG32ARFEIO4V14W1FL' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
#Let's explore the first neighborhood in our dataframe.
#Get the neighborhood's name.
east_toronto.loc[0, 'Neighborhood']
#Get the neighborhood's latitude and longitude values.
neighborhood_latitude = east_toronto.loc[0, 'Latitude'] # neighborhood latitude value
neighborhood_longitude = east_toronto.loc[0, 'Longitude'] # neighborhood longitude value
neighborhood_name = east_toronto.loc[0, 'Neighborhood'] # neighborhood name
print('Latitude and longitude values of {} are {}, {}.'.format(neighborhood_name,
neighborhood_latitude,
neighborhood_longitude))
#Now, let's get the top venues that are in The Beaches within a radius of 500 meters.
#First, let's create the GET request URL. Name your URL url.
LIMIT = 100
radius = 500
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
neighborhood_latitude,
neighborhood_longitude,
radius,
LIMIT)
url
#Send the GET request and examine the results
results = requests.get(url).json()
results
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
#Now we are ready to clean the json and structure it into a pandas dataframe
venues = results['response']['groups'][0]['items']
nearby_venues = json_normalize(venues) # flatten JSON
# filter columns
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues =nearby_venues.loc[:, filtered_columns]
# filter the category for each row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
# clean columns
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
nearby_venues.head()
#And how many venues were returned by Foursquare?
print('{} venues were returned by Foursquare.'.format(nearby_venues.shape[0]))
# 2. Explore Neighborhoods in East Toronto
#Let's create a function to repeat the same process to all the neighborhoods in East Toronto
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
eastToronto_venues = getNearbyVenues(names=east_toronto['Neighborhood'],
latitudes=east_toronto['Latitude'],
longitudes=east_toronto['Longitude']
)
#Let's check the size of the resulting dataframe
print(eastToronto_venues.shape)
eastToronto_venues.head()
#Let's check how many venues were returned for each neighborhood
eastToronto_venues.groupby('Neighborhood').count()
#Let's find out how many unique categories can be curated from all the returned venues
print('There are {} uniques categories.'.format(len(eastToronto_venues['Venue Category'].unique())))
#3. Analyze Each Neighborhood
# one hot encoding
eastToronto_onehot = pd.get_dummies(eastToronto_venues[['Venue Category']], prefix="", prefix_sep="")
eastToronto_onehot.head(10)
# add neighborhood column back to dataframe
eastToronto_onehot['Neighborhood'] = eastToronto_venues['Neighborhood']
eastToronto_onehot.head(5)
# move neighborhood column to the first column
fixed_columns = [eastToronto_onehot.columns[-1]] + list(eastToronto_onehot.columns[:-1]) # 'Neighborhood' was appended last, so it is the final column
eastToronto_onehot = eastToronto_onehot[fixed_columns]
eastToronto_onehot.head()
eastToronto_onehot.shape
#Next, let's group rows by neighborhood and by taking the mean of the frequency of occurrence of each category
eastToronto_grouped = eastToronto_onehot.groupby('Neighborhood').mean().reset_index()
eastToronto_grouped
eastToronto_grouped.shape
#Let's print each neighborhood along with the top 5 most common venues
num_top_venues = 5
for hood in eastToronto_grouped['Neighborhood']:
print("----"+hood+"----")
temp = eastToronto_grouped[eastToronto_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
#Let's put that into a pandas dataframe
#First, let's write a function to sort the venues in descending order.
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
#Now let's create the new dataframe and display the top 10 venues for each neighborhood.
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = eastToronto_grouped['Neighborhood']
for ind in np.arange(eastToronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(eastToronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
#4. Cluster Neighborhoods
#Run k-means to cluster the neighborhood into 3 clusters.
# set number of clusters
kclusters = 3
eastToronto_grouped_clustering = eastToronto_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(eastToronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:5]
#Let's create a new Dataframe
#that includes the cluster as well as the top 10 venues for each neighborhood.
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
eastToronto_merged = east_toronto
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
eastToronto_merged = eastToronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood')
eastToronto_merged.head() # check the last columns!
#Finally, let's visualize the resulting clusters
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(eastToronto_merged['Latitude'], eastToronto_merged['Longitude'], eastToronto_merged['Neighborhood'], eastToronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
#5. Examine Clusters
#Now, you can examine each cluster and determine the discriminating venue categories that distinguish each cluster. Based on the defining categories, you can then assign a name to each cluster. I will leave this exercise to you.
#Cluster 1
eastToronto_merged.loc[eastToronto_merged['Cluster Labels'] == 0, eastToronto_merged.columns[[1] + list(range(5, eastToronto_merged.shape[1]))]]
#Cluster 2
eastToronto_merged.loc[eastToronto_merged['Cluster Labels'] == 1, eastToronto_merged.columns[[1] + list(range(5, eastToronto_merged.shape[1]))]]
#Cluster 3
eastToronto_merged.loc[eastToronto_merged['Cluster Labels'] == 2, eastToronto_merged.columns[[1] + list(range(5, eastToronto_merged.shape[1]))]]
###Output
_____no_output_____ |
HI_70/notebook2/flo_test/3. augmentation flo.ipynb | ###Markdown
Move to **'model flo emission'** notebook to optimize model for augmenting emission
###Code
# adding column 'abs_nm' to input
input_col.append('abs_nm')
input_col_2 = input_col.copy()
print(input_col_2)
input_col.remove('abs_nm')
# Now, the dataset only has 'None' values in the 'emission_nm' column
df_emi = pd.read_csv('dataset_augmented_abs.csv')
# Load ML model for predicting emission
loaded_rf_emi = joblib.load('model_aug_emission_DecisionTree.joblib')
# # Replace 'None' entries in 'emission_nm' column by predicted values.
a = 0
for index, row in df_emi.iterrows():
if row['emission_nm'] == 'None':
X = df_emi.loc[index, input_col_2].to_numpy()
df_emi.loc[index, 'emission_nm'] = loaded_rf_emi.predict(X.reshape(1, -1))[0]
a += 1
# # # Save the dataset where all 'None' values are replaced.
# # # Ready to use for other analysis.
df_emi.to_csv('dataset_augmented_abs_emission.csv')
###Output
_____no_output_____
###Markdown
Adjust dataset_augmented_abs_emission.csv to dataset_augmented_abs_emission_adjusted.csv
###Code
# adding column 'emission_nm' to input
input_col_2.append('emission_nm')
input_col_3 = input_col_2.copy()
print(input_col_3)
input_col_2.remove('emission_nm')
df_second_aug = pd.read_csv('dataset_augmented_abs_emission_adjusted.csv')
#Saves the row indexes to drop for diameter modelling into a list
total_row_num = len(df_second_aug)
drop_list_dia =[]
for row_i in range(total_row_num):
if df_second_aug['diameter_nm'].values[row_i] == 'None':
drop_list_dia.append(row_i)
len(drop_list_dia)
#Drops the appropriate rows
df_dia_scaled_encoded = df_second_aug.drop(drop_list_dia)
#Saves the data for diameter modelling to CSV
df_dia_scaled_encoded.to_csv('dataset_scaled_diameter.csv')
###Output
_____no_output_____
###Markdown
Move to **'model hao diameter'** notebook to optimize model for augmenting diameter.
###Code
# Load ML model for predicting diameters
df_dia = pd.read_csv('dataset_augmented_abs_emission_adjusted.csv')
loaded_rf_dia = joblib.load('model_aug_diameter_ExtraTrees.joblib')
# Replace 'None' entries in 'diameter_nm' column by predicted values.
a = 0
for index, row in df_dia.iterrows():
if row['diameter_nm'] == 'None':
X = df_dia.loc[index, input_col_3].to_numpy()
df_dia.loc[index, 'diameter_nm'] = loaded_rf_dia.predict(X.reshape(1, -1))[0]
a += 1
# Save the dataset where all 'None' values in 'diameter_nm' column are replaced.
df_dia.to_csv('flo_dataset_augmented.csv')
###Output
_____no_output_____ |
notebooks/protocol_neb_example.ipynb | ###Markdown
Climbing image NEB example - Lammps
###Code
# headers
# general modules
import numpy as np
import matplotlib.pyplot as plt
# pyiron modules
from pyiron_atomistics import Project
import pyiron_contrib
# define project
pr = Project('neb_example')
pr.remove_jobs(recursive=True)
# check the git head of the repos that this notebook worked on when this notebook was written
pr.get_repository_status()
# inputs
# structure specific
element = 'Al'
supercell = 3
vac_id_initial = 0
vac_id_final = 1
cubic = True
# job specific
potential = '2008--Mendelev-M-I--Al--LAMMPS--ipr1'
# NEB specific
n_images = 9
neb_steps = 200
gamma0 = 0.01
climbing_image = True
# create base structure
box = pr.create_ase_bulk(name=element, cubic=cubic).repeat(supercell)
# template minimization job
template_job = pr.create_job(job_type=pr.job_type.Lammps, job_name='template')
template_job.potential = potential
# vacancy @ atom id 0 minimization
vac_0_struct = box.copy() # copy box
vac_0_struct.pop(vac_id_initial) # create vacancy
vac_0 = template_job.copy_template(project=pr, new_job_name='vac_0')
vac_0.structure = vac_0_struct
vac_0.calc_minimize(pressure=0.)
vac_0.run()
# vacancy @ atom id 1 minimization
vac_1_struct = box.copy() # copy box
vac_1_struct.pop(vac_id_final) # create vacancy
vac_1 = template_job.copy_template(project=pr, new_job_name='vac_1')
vac_1.structure = vac_1_struct
vac_1.calc_minimize(pressure=0.)
vac_1.run()
# create and run the NEB job
pr_neb = pr.create_group('neb') # create a new folder
neb_ref = pr_neb.create_job(job_type=pr.job_type.Lammps, job_name='ref_neb')
neb_ref.structure = vac_0.get_structure()
neb_ref.potential = potential
neb_ref.save() # Don't forget this step!
neb_job = pr_neb.create_job(job_type=pr.job_type.ProtoNEBSer, job_name='neb_job')
neb_job.input.ref_job_full_path = neb_ref.path
neb_job.input.structure_initial = vac_0.get_structure()
neb_job.input.structure_final = vac_1.get_structure()
neb_job.input.n_images = n_images
neb_job.input.n_steps = neb_steps
neb_job.input.gamma0 = gamma0
neb_job.input.use_climbing_image = climbing_image
# set_output_whitelist sets how often an output of a particular vertex is stored in the archive.
# for example, here, the output 'energy_pot' of vertex 'calc_static' is saved every 20 steps in the archive.
neb_job.set_output_whitelist(**{'calc_static': {'energy_pot': 20}})
neb_job.run()
# lets check the archive for the output - 1
neb_job['graph/vertices/calc_static/archive/output/energy_pot']
# here we see in 'nodes' that the output is stored every 20 steps
# note: unfortunately, this is the only way to access the archive quantities at the moment!
# note: all outputs of the other nodes will only be saved for the final step!
# lets check the archive for the output - 2
# the final image energies?
neb_job['graph/vertices/calc_static/archive/output/energy_pot/t_80']
# lets plot the final barrier
neb_job.plot_elastic_band(frame=-1)
# lets get the migration barrier over different archived iterations
e_mig = [neb_job.get_barrier(frame=i) for i in range(int(neb_steps/20))]
# final migration barrier after 200 steps
print('final migration barrier = {}'.format(e_mig[-1]))
###Output
final migration barrier = 0.6524359464470422
###Markdown
Notes: At the moment, there is no convergence criterion implemented for this NEB protocol, so it simply runs for all of the steps provided as input.
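If you want a rough manual check, you can look at how much the barrier still changes between archived frames; the snippet below is a small sketch added for illustration (the 1 meV tolerance is an arbitrary assumption, not part of the protocol).
###Code
# sketch of a manual convergence check on the archived barriers computed above
barrier_changes = np.abs(np.diff(e_mig)) # change in migration barrier between consecutive archived frames
print('change over the last archive interval = {:.2e} eV'.format(barrier_changes[-1]))
print('within 1 meV? {}'.format(barrier_changes[-1] < 1e-3))
###Output
_____no_output_____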
###Code
# plot the iterations
plt.plot(e_mig, marker='o')
plt.xlabel('archived iterations')
plt.ylabel('migration_barrier [eV]')
plt.show()
# other output saved by the job can be obtained only for the final step!
neb_job.output.keys()
###Output
_____no_output_____ |
05_pythonic_ways.ipynb | ###Markdown
Tuple unpacking (from http://nbviewer.jupyter.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-1-Introduction-to-Python-Programming.ipynb) Tuples are like lists, except that they cannot be modified once created, that is they are immutable. In Python, tuples are created using the syntax (..., ..., ...)
###Code
point = (10, 20)
print(point, type(point))
###Output
(10, 20) <class 'tuple'>
###Markdown
We can unpack a tuple by assigning it to a comma-separated list of variables:
###Code
x, y = point
print("x =", x)
print("y =", y)
###Output
x = 10
y = 20
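###Markdown
A common use of tuple packing and unpacking is swapping two variables without a temporary variable (a small extra example, not part of the lecture linked above):
###Code
x, y = y, x # the right-hand side packs (y, x) into a tuple, which is then unpacked into x and y
print("x =", x)
print("y =", y)
###Output
x = 20
y = 10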
###Markdown
Enumerate (from http://nbviewer.jupyter.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-1-Introduction-to-Python-Programming.ipynb) Sometimes it is useful to have access to the indices of the values when iterating over a list. We can use the enumerate function for this:
###Code
my_list = list(range(-3,3))
print(my_list)
for idx, x in enumerate(my_list):
print(idx, x)
###Output
[-3, -2, -1, 0, 1, 2]
0 -3
1 -2
2 -1
3 0
4 1
5 2
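###Markdown
enumerate also takes an optional start argument if you want the numbering to begin at a value other than 0 (a small extra example, not part of the lecture linked above):
###Code
for idx, x in enumerate(my_list, start=1):
    print(idx, x)
###Output
1 -3
2 -2
3 -1
4 0
5 1
6 2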
###Markdown
Unnamed functions (lambda functions) (from http://nbviewer.jupyter.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-1-Introduction-to-Python-Programming.ipynb) In Python we can also create unnamed functions, using the lambda keyword:
###Code
f1 = lambda x: x**2
# is equivalent to
def f2(x):
return x**2
f1(2), f2(2)
###Output
_____no_output_____ |
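###Markdown
Lambda functions are most useful as short throw-away functions passed to other functions, for example as the key argument of sorted (a small extra example, not part of the lecture linked above):
###Code
# sort by absolute value using a lambda as the key function
sorted([-3, 2, -1], key=lambda x: abs(x))
###Output
_____no_output_____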
Lab 5: Explore the Data Set.ipynb | ###Markdown
**Survey Dataset Exploration Lab** Estimated time needed: **30** minutes Objectives After completing this lab you will be able to: * Load the dataset that will be used throughout the capstone project. * Explore the dataset. * Get familiar with the data types. Load the dataset Import the required libraries.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
The dataset is available on the IBM Cloud at the below url.
###Code
dataset_url = "https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/LargeData/m1_survey_data.csv"
###Output
_____no_output_____
###Markdown
Load the data available at dataset_url into a dataframe.
###Code
# your code goes here
df = pd.read_csv(dataset_url)
###Output
_____no_output_____
###Markdown
Explore the data set It is a good idea to print the top 5 rows of the dataset to get a feel of how the dataset will look. Display the top 5 rows and columns from your dataset.
###Code
# your code goes here
df.head()
###Output
_____no_output_____
###Markdown
Find out the number of rows and columns Start by exploring the number of rows and columns of data in the dataset. Print the number of rows in the dataset.
###Code
# your code goes here
df.shape[0] # shape[0] gives the number of rows
###Output
_____no_output_____
###Markdown
Print the number of columns in the dataset.
###Code
# your code goes here
df.shape[1] # shape[1] gives the number of columns
###Output
_____no_output_____
###Markdown
Identify the data types of each column Explore the dataset and identify the data types of each column. Print the datatype of all columns.
###Code
# your code goes here
df.dtypes
###Output
_____no_output_____
###Markdown
Print the mean age of the survey participants.
###Code
# your code goes here
df.Age.mean()
###Output
_____no_output_____
###Markdown
The dataset is the result of a world wide survey. Print how many unique countries there are in the Country column.
###Code
# your code goes here
len(df.Country.unique())
###Output
_____no_output_____ |
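###Markdown
Note that pandas also offers a built-in shortcut for this kind of count; the cell below is an optional alternative, not required by the lab:
###Code
# equivalent to len(df.Country.unique())
df.Country.nunique()
###Output
_____no_output_____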
web_scrapping/alternative_solution_to_scrapping_150_python_job_from_indeed.ipynb | ###Markdown
Possible Solution: Build A Pipeline- Combine Your Knowledge of the Website, `requests` and `bs4`- Automate Your Scraping Process Across Multiple Pages- Generalize Your Code For Varying Searches- Target & Save Specific Information You Want Your Tasks:- Scrape the first 100 available search results- Generalize your code to allow searching for different locations/jobs- Pick out information about the URL, job title, and job location- Save the results to a file
###Code
import requests
from bs4 import BeautifulSoup
###Output
_____no_output_____
###Markdown
--- Part 1: Inspect- How do the URLs change when you navigate to the next results page?- How do the URLs change when you use a different location and/or job title search?- Which HTML elements contain the link, title, and location of each job? **Next Page**: The `start=` parameter gets added and incremented by the value of `10` for each additional page. This is because each results page displays 10 job results.E.g.: **Different Location/Job Title**: The values for the query parameters `q` (for job title) and `l` (for location) change accordingly.
###Code
page = requests.get('https://www.indeed.com/jobs?q=python&l=new+york')
###Output
_____no_output_____
###Markdown
**HTML Elements**: A single job posting lives inside of a `div` element with the class name `result`. Inside there are other elements. You can find the specific info you're looking for here:- **Link**: In the `href` attribute of the `<a>` element that is a child of the title `<h2>` element- **Title**: The text of the link in the `<a>` element which also contains the link URL mentioned above- **Location**: A `<span>` element with the telling class name `location` --- Part 2: Scrape- Build the code to fetch the first 100 search results. This means you will need to automatically navigate to multiple results pages- Write functions that allow you to specify the job title, location, and amount of results as arguments
###Code
page_2 = requests.get('https://www.indeed.com/jobs?q=python&l=new+york&start=20')
###Output
_____no_output_____
###Markdown
Every 10 results means you're on a new page. Let's make that an argument to a function:
###Code
def get_jobs(page=1):
"""Fetches the HTML from a search for Python jobs in New York on Indeed.com from a specified page."""
base_url_indeed = 'https://www.indeed.com/jobs?q=python&l=new+york&start='
results_start_num = page*10
url = f'{base_url_indeed}{results_start_num}'
page = requests.get(url)
return page
get_jobs(3)
get_jobs(4)
###Output
_____no_output_____
###Markdown
Great! Let's customize this function some more to allow for different search queries and search locations:
###Code
def get_jobs(title, location, page=1):
"""Fetches the HTML from a search for Python jobs in New York on Indeed.com from a specified page."""
loc = location.replace(' ', '+') # for multi-part locations
base_url_indeed = f'https://www.indeed.com/jobs?q={title}&l={loc}&start='
results_start_num = page*10
url = f'{base_url_indeed}{results_start_num}'
page = requests.get(url)
return page
get_jobs('python', 'new york', 3)
###Output
_____no_output_____
###Markdown
With a generalized way of scraping the page done, you can move on to picking out the information you need by parsing the HTML. --- Part 3: Parse- Sieve through your HTML soup to pick out only the job title, link, and location- Format the results in a readable format (e.g. JSON)- Save the results to a file Let's start by getting access to all interesting search results for one page:
###Code
site = get_jobs('python', 'new york')
soup = BeautifulSoup(site.content)
results = soup.find(id='resultsCol')
jobs = results.find_all('div', class_='result')
###Output
_____no_output_____
###Markdown
**Job Titles** can be found like this:
###Code
job_titles = [job.find('h2').find('a').text.strip() for job in jobs]
job_titles
###Output
_____no_output_____
###Markdown
**Link URLs** need to be assembled, and can be found like this:
###Code
base_url = 'https://www.indeed.com'
job_links = [base_url + job.find('h2').find('a')['href'] for job in jobs]
job_links
###Output
_____no_output_____
###Markdown
**Locations** can be picked out of the soup by their class name:
###Code
job_locations = [job.find(class_='location').text for job in jobs]
job_locations
###Output
_____no_output_____
###Markdown
Let's assemble all this info into a function, so you can pick out the pieces and save them to a useful data structure:
###Code
def parse_info(soup):
"""
Parses HTML containing job postings and picks out job title, location, and link.
args:
soup (BeautifulSoup object): A parsed bs4.BeautifulSoup object of a search results page on indeed.com
returns:
job_list (list): A list of dictionaries containing the title, link, and location of each job posting
"""
results = soup.find(id='resultsCol')
jobs = results.find_all('div', class_='result')
base_url = 'https://www.indeed.com'
job_list = list()
for job in jobs:
title = job.find('h2').find('a').text.strip()
link = base_url + job.find('h2').find('a')['href']
location = job.find(class_='location').text
job_list.append({'title': title, 'link': link, 'location': location})
return job_list
###Output
_____no_output_____
###Markdown
Let's give it a try:
###Code
page = get_jobs('python', 'new_york')
soup = BeautifulSoup(page.content)
results = parse_info(soup)
results
###Output
_____no_output_____
###Markdown
And let's add a final step of generalization:
###Code
def get_job_listings(title, location, amount=100):
results = list()
for page in range(amount//10):
site = get_jobs(title, location, page=page)
soup = BeautifulSoup(site.content)
page_results = parse_info(soup)
results += page_results
return results
r = get_job_listings('python', 'new york', 100)
len(r)
r[42]
###Output
_____no_output_____ |
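###Markdown
The task list also asked for the results to be saved to a file. A minimal sketch of one way to do that with the standard json module is shown below (the filename python_jobs.json is just an assumption, not prescribed by the exercise):
###Code
import json

# write the list of job dictionaries collected in `r` to a JSON file
with open('python_jobs.json', 'w') as f:
    json.dump(r, f, indent=2)
###Output
_____no_output_____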
assignments/assignment1_Scripting/Assignment1_word_counts_au617836.ipynb | ###Markdown
Portfolio Assignment 1: Basic Scripting with Python Using the corpus called 100-english-novels found on the cds-language GitHub repo, write a Python programme which does the following: 1. Calculate the total word count for each novel 2. Calculate the total number of unique words for each novel 3. Save result as a single file consisting of three columns: filename, total_words, unique_words __TASK 1: CALCULATE THE TOTAL WORD COUNT FOR EACH NOVEL__
###Code
# First I want to import the novels from the 100-english-novels corpus and for this I need the Path module
# Importing Path module and the os module
from pathlib import Path
import os
# Specifying the data path
data_path = os.path.join("..", "data", "100_english_novels", "corpus")
# Importing all files (all novels) ending with ".txt" using the glob() function. I then split each novel into tokens (words) using the split() function and then I count the number of tokens/words for each novel using the len() function.
for filename in Path(data_path).glob("*.txt"):
with open (filename, "r", encoding = "utf-8") as file:
novel = file.read()
split_novel = novel.split() # splitting the novel into tokens/words
print(f"{filename} has a word count of {len(split_novel)}") # counting the number of words in each novel
###Output
../data/100_english_novels/corpus/Cbronte_Villette_1853.txt has a word count of 196557
../data/100_english_novels/corpus/Forster_Angels_1905.txt has a word count of 50477
../data/100_english_novels/corpus/Woolf_Lighthouse_1927.txt has a word count of 70185
../data/100_english_novels/corpus/Meredith_Richmond_1871.txt has a word count of 214985
../data/100_english_novels/corpus/Stevenson_Treasure_1883.txt has a word count of 68448
../data/100_english_novels/corpus/Forster_Howards_1910.txt has a word count of 111057
../data/100_english_novels/corpus/Wcollins_Basil_1852.txt has a word count of 118088
../data/100_english_novels/corpus/Schreiner_Undine_1929.txt has a word count of 90672
../data/100_english_novels/corpus/Galsworthy_Man_1906.txt has a word count of 110455
../data/100_english_novels/corpus/Corelli_Innocent_1914.txt has a word count of 121950
../data/100_english_novels/corpus/Kipling_Light_1891.txt has a word count of 72479
../data/100_english_novels/corpus/Conrad_Nostromo_1904.txt has a word count of 172276
../data/100_english_novels/corpus/Stevenson_Arrow_1888.txt has a word count of 80291
../data/100_english_novels/corpus/Hardy_Tess_1891.txt has a word count of 151197
../data/100_english_novels/corpus/Thackeray_Esmond_1852.txt has a word count of 187049
../data/100_english_novels/corpus/Doyle_Lost_1912.txt has a word count of 76281
../data/100_english_novels/corpus/Trollope_Angel_1881.txt has a word count of 217694
../data/100_english_novels/corpus/Gissing_Warburton_1903.txt has a word count of 85093
../data/100_english_novels/corpus/Barclay_Rosary_1909.txt has a word count of 105920
../data/100_english_novels/corpus/Eliot_Daniel_1876.txt has a word count of 311335
../data/100_english_novels/corpus/James_Tragic_1890.txt has a word count of 210553
../data/100_english_novels/corpus/Doyle_Micah_1889.txt has a word count of 177917
../data/100_english_novels/corpus/Dickens_Bleak_1853.txt has a word count of 357936
../data/100_english_novels/corpus/Woolf_Night_1919.txt has a word count of 167075
../data/100_english_novels/corpus/Braddon_Audley_1862.txt has a word count of 148763
../data/100_english_novels/corpus/Bennet_Babylon_1902.txt has a word count of 68397
../data/100_english_novels/corpus/Kipling_Kim_1901.txt has a word count of 107663
../data/100_english_novels/corpus/Lytton_What_1858.txt has a word count of 338512
../data/100_english_novels/corpus/Meredith_Marriage_1895.txt has a word count of 156151
../data/100_english_novels/corpus/Corelli_Satan_1895.txt has a word count of 169571
../data/100_english_novels/corpus/Haggard_Mines_1885.txt has a word count of 82948
../data/100_english_novels/corpus/Stevenson_Catriona_1893.txt has a word count of 102106
../data/100_english_novels/corpus/Doyle_Hound_1902.txt has a word count of 59639
../data/100_english_novels/corpus/Chesterton_Innocence_1911.txt has a word count of 79361
../data/100_english_novels/corpus/Blackmore_Lorna_1869.txt has a word count of 273259
../data/100_english_novels/corpus/Haggard_Sheallan_1921.txt has a word count of 121506
../data/100_english_novels/corpus/Gaskell_Wives_1865.txt has a word count of 270014
../data/100_english_novels/corpus/Cbronte_Jane_1847.txt has a word count of 189103
../data/100_english_novels/corpus/Wcollins_Legacy_1889.txt has a word count of 120704
../data/100_english_novels/corpus/Morris_Roots_1890.txt has a word count of 154305
../data/100_english_novels/corpus/Burnett_Garden_1911.txt has a word count of 81043
../data/100_english_novels/corpus/Ford_Post_1926.txt has a word count of 72055
../data/100_english_novels/corpus/Thackeray_Pendennis_1850.txt has a word count of 359496
../data/100_english_novels/corpus/James_Roderick_1875.txt has a word count of 132759
../data/100_english_novels/corpus/Haggard_She_1887.txt has a word count of 113770
../data/100_english_novels/corpus/Galsworthy_River_1933.txt has a word count of 89871
../data/100_english_novels/corpus/Morris_Wood_1894.txt has a word count of 49983
../data/100_english_novels/corpus/Barclay_Postern_1911.txt has a word count of 40015
../data/100_english_novels/corpus/Conrad_Almayer_1895.txt has a word count of 63257
../data/100_english_novels/corpus/Thackeray_Virginians_1859.txt has a word count of 356604
../data/100_english_novels/corpus/Bennet_Helen_1910.txt has a word count of 52644
../data/100_english_novels/corpus/Lee_Brown_1884.txt has a word count of 48242
../data/100_english_novels/corpus/Lawrence_Women_1920.txt has a word count of 183180
../data/100_english_novels/corpus/Schreiner_Farm_1883.txt has a word count of 100645
../data/100_english_novels/corpus/Lytton_Novel_1853.txt has a word count of 456592
../data/100_english_novels/corpus/Lawrence_Peacock_1911.txt has a word count of 124497
../data/100_english_novels/corpus/Schreiner_Trooper_1897.txt has a word count of 24612
../data/100_english_novels/corpus/Cbronte_Shirley_1849.txt has a word count of 218572
../data/100_english_novels/corpus/James_Ambassadors_1903.txt has a word count of 167555
../data/100_english_novels/corpus/Lawrence_Serpent_1926.txt has a word count of 172356
../data/100_english_novels/corpus/Braddon_Quest_1871.txt has a word count of 174199
../data/100_english_novels/corpus/Dickens_Oliver_1839.txt has a word count of 159489
../data/100_english_novels/corpus/Trollope_Warden_1855.txt has a word count of 72102
../data/100_english_novels/corpus/Barclay_Ladies_1917.txt has a word count of 122382
../data/100_english_novels/corpus/Ward_Harvest_1920.txt has a word count of 75043
../data/100_english_novels/corpus/Blackmore_Erema_1877.txt has a word count of 167016
../data/100_english_novels/corpus/Wcollins_Woman_1860.txt has a word count of 247078
../data/100_english_novels/corpus/Hardy_Madding_1874.txt has a word count of 138440
../data/100_english_novels/corpus/Lee_Penelope_1903.txt has a word count of 21840
../data/100_english_novels/corpus/Eliot_Adam_1859.txt has a word count of 216651
../data/100_english_novels/corpus/Gaskell_Lovers_1863.txt has a word count of 191037
../data/100_english_novels/corpus/Corelli_Romance_1886.txt has a word count of 100526
../data/100_english_novels/corpus/Conrad_Rover_1923.txt has a word count of 88101
../data/100_english_novels/corpus/Gissing_Women_1893.txt has a word count of 139234
../data/100_english_novels/corpus/Woolf_Years_1937.txt has a word count of 130903
../data/100_english_novels/corpus/Trollope_Phineas_1869.txt has a word count of 266634
../data/100_english_novels/corpus/Lytton_Kenelm_1873.txt has a word count of 193074
../data/100_english_novels/corpus/Blackmore_Springhaven_1887.txt has a word count of 202113
../data/100_english_novels/corpus/Forster_View_1908.txt has a word count of 67930
../data/100_english_novels/corpus/Eliot_Felix_1866.txt has a word count of 182816
../data/100_english_novels/corpus/Chesterton_Napoleon_1904.txt has a word count of 54920
../data/100_english_novels/corpus/Bennet_Imperial_1930.txt has a word count of 255975
../data/100_english_novels/corpus/Burnett_Princess_1905.txt has a word count of 66877
../data/100_english_novels/corpus/Ward_Milly_1881.txt has a word count of 47588
../data/100_english_novels/corpus/Ford_Girl_1907.txt has a word count of 35708
../data/100_english_novels/corpus/Meredith_Feverel_1859.txt has a word count of 168781
../data/100_english_novels/corpus/Lee_Albany_1884.txt has a word count of 62913
../data/100_english_novels/corpus/Ford_Soldier_1915.txt has a word count of 76750
../data/100_english_novels/corpus/Ward_Ashe_1905.txt has a word count of 141832
../data/100_english_novels/corpus/Morris_Water_1897.txt has a word count of 147737
../data/100_english_novels/corpus/Galsworthy_Saints_1919.txt has a word count of 95156
../data/100_english_novels/corpus/Gissing_Unclassed_1884.txt has a word count of 124877
../data/100_english_novels/corpus/Anon_Clara_1864.txt has a word count of 197620
../data/100_english_novels/corpus/Hardy_Jude_1895.txt has a word count of 147273
../data/100_english_novels/corpus/Dickens_Expectations_1861.txt has a word count of 186804
../data/100_english_novels/corpus/Chesterton_Thursday_1908.txt has a word count of 58299
../data/100_english_novels/corpus/Burnett_Lord_1886.txt has a word count of 58698
../data/100_english_novels/corpus/Braddon_Phantom_1883.txt has a word count of 180676
../data/100_english_novels/corpus/Gaskell_Ruth_1855.txt has a word count of 161797
../data/100_english_novels/corpus/Kipling_Captains_1896.txt has a word count of 53467
###Markdown
__TASK 2: CALCULATE THE TOTAL NUMBER OF UNIQUE WORDS FOR EACH NOVEL__
###Code
# For this task I am going to use the set() function, which removes duplicate words and so gives the unique words for each novel. I then use the len() function to count the number of unique words, i.e. the number of distinct word types in each novel.
for filename in Path(data_path).glob("*.txt"):
with open (filename, "r", encoding = "utf-8") as file:
novel = file.read()
split_novel = novel.split() # splitting the novel into words
unique_words = set(split_novel) # removing duplicate words
print(f"{filename} contains {len(unique_words)} unique words") # counting the number of unique words for each novel
###Output
../data/100_english_novels/corpus/Cbronte_Villette_1853.txt contains 29084 unique words
../data/100_english_novels/corpus/Forster_Angels_1905.txt contains 9464 unique words
../data/100_english_novels/corpus/Woolf_Lighthouse_1927.txt contains 11157 unique words
../data/100_english_novels/corpus/Meredith_Richmond_1871.txt contains 28892 unique words
../data/100_english_novels/corpus/Stevenson_Treasure_1883.txt contains 10831 unique words
../data/100_english_novels/corpus/Forster_Howards_1910.txt contains 17065 unique words
../data/100_english_novels/corpus/Wcollins_Basil_1852.txt contains 14586 unique words
../data/100_english_novels/corpus/Schreiner_Undine_1929.txt contains 11744 unique words
../data/100_english_novels/corpus/Galsworthy_Man_1906.txt contains 16713 unique words
../data/100_english_novels/corpus/Corelli_Innocent_1914.txt contains 19627 unique words
../data/100_english_novels/corpus/Kipling_Light_1891.txt contains 12493 unique words
../data/100_english_novels/corpus/Conrad_Nostromo_1904.txt contains 21884 unique words
../data/100_english_novels/corpus/Stevenson_Arrow_1888.txt contains 13168 unique words
../data/100_english_novels/corpus/Hardy_Tess_1891.txt contains 20955 unique words
../data/100_english_novels/corpus/Thackeray_Esmond_1852.txt contains 21375 unique words
../data/100_english_novels/corpus/Doyle_Lost_1912.txt contains 12621 unique words
../data/100_english_novels/corpus/Trollope_Angel_1881.txt contains 18023 unique words
../data/100_english_novels/corpus/Gissing_Warburton_1903.txt contains 12864 unique words
../data/100_english_novels/corpus/Barclay_Rosary_1909.txt contains 15223 unique words
../data/100_english_novels/corpus/Eliot_Daniel_1876.txt contains 28606 unique words
../data/100_english_novels/corpus/James_Tragic_1890.txt contains 21589 unique words
../data/100_english_novels/corpus/Doyle_Micah_1889.txt contains 23564 unique words
../data/100_english_novels/corpus/Dickens_Bleak_1853.txt contains 30797 unique words
../data/100_english_novels/corpus/Woolf_Night_1919.txt contains 19055 unique words
../data/100_english_novels/corpus/Braddon_Audley_1862.txt contains 18055 unique words
../data/100_english_novels/corpus/Bennet_Babylon_1902.txt contains 11529 unique words
../data/100_english_novels/corpus/Kipling_Kim_1901.txt contains 17998 unique words
../data/100_english_novels/corpus/Lytton_What_1858.txt contains 38658 unique words
../data/100_english_novels/corpus/Meredith_Marriage_1895.txt contains 24931 unique words
../data/100_english_novels/corpus/Corelli_Satan_1895.txt contains 22058 unique words
../data/100_english_novels/corpus/Haggard_Mines_1885.txt contains 12373 unique words
../data/100_english_novels/corpus/Stevenson_Catriona_1893.txt contains 13816 unique words
../data/100_english_novels/corpus/Doyle_Hound_1902.txt contains 9393 unique words
../data/100_english_novels/corpus/Chesterton_Innocence_1911.txt contains 13458 unique words
../data/100_english_novels/corpus/Blackmore_Lorna_1869.txt contains 25408 unique words
../data/100_english_novels/corpus/Haggard_Sheallan_1921.txt contains 13669 unique words
../data/100_english_novels/corpus/Gaskell_Wives_1865.txt contains 23203 unique words
../data/100_english_novels/corpus/Cbronte_Jane_1847.txt contains 25762 unique words
../data/100_english_novels/corpus/Wcollins_Legacy_1889.txt contains 13383 unique words
../data/100_english_novels/corpus/Morris_Roots_1890.txt contains 14085 unique words
../data/100_english_novels/corpus/Burnett_Garden_1911.txt contains 8939 unique words
../data/100_english_novels/corpus/Ford_Post_1926.txt contains 12728 unique words
../data/100_english_novels/corpus/Thackeray_Pendennis_1850.txt contains 34188 unique words
../data/100_english_novels/corpus/James_Roderick_1875.txt contains 17715 unique words
../data/100_english_novels/corpus/Haggard_She_1887.txt contains 15269 unique words
../data/100_english_novels/corpus/Galsworthy_River_1933.txt contains 13114 unique words
../data/100_english_novels/corpus/Morris_Wood_1894.txt contains 6890 unique words
../data/100_english_novels/corpus/Barclay_Postern_1911.txt contains 7921 unique words
../data/100_english_novels/corpus/Conrad_Almayer_1895.txt contains 10344 unique words
../data/100_english_novels/corpus/Thackeray_Virginians_1859.txt contains 34367 unique words
../data/100_english_novels/corpus/Bennet_Helen_1910.txt contains 10251 unique words
../data/100_english_novels/corpus/Lee_Brown_1884.txt contains 9369 unique words
../data/100_english_novels/corpus/Lawrence_Women_1920.txt contains 22055 unique words
../data/100_english_novels/corpus/Schreiner_Farm_1883.txt contains 13501 unique words
../data/100_english_novels/corpus/Lytton_Novel_1853.txt contains 42679 unique words
../data/100_english_novels/corpus/Lawrence_Peacock_1911.txt contains 18254 unique words
../data/100_english_novels/corpus/Schreiner_Trooper_1897.txt contains 4832 unique words
../data/100_english_novels/corpus/Cbronte_Shirley_1849.txt contains 29500 unique words
../data/100_english_novels/corpus/James_Ambassadors_1903.txt contains 17390 unique words
../data/100_english_novels/corpus/Lawrence_Serpent_1926.txt contains 21246 unique words
../data/100_english_novels/corpus/Braddon_Quest_1871.txt contains 17608 unique words
../data/100_english_novels/corpus/Dickens_Oliver_1839.txt contains 20367 unique words
../data/100_english_novels/corpus/Trollope_Warden_1855.txt contains 11464 unique words
../data/100_english_novels/corpus/Barclay_Ladies_1917.txt contains 15657 unique words
../data/100_english_novels/corpus/Ward_Harvest_1920.txt contains 13264 unique words
../data/100_english_novels/corpus/Blackmore_Erema_1877.txt contains 18885 unique words
../data/100_english_novels/corpus/Wcollins_Woman_1860.txt contains 19959 unique words
../data/100_english_novels/corpus/Hardy_Madding_1874.txt contains 20378 unique words
../data/100_english_novels/corpus/Lee_Penelope_1903.txt contains 5325 unique words
../data/100_english_novels/corpus/Eliot_Adam_1859.txt contains 21482 unique words
../data/100_english_novels/corpus/Gaskell_Lovers_1863.txt contains 21087 unique words
../data/100_english_novels/corpus/Corelli_Romance_1886.txt contains 15923 unique words
../data/100_english_novels/corpus/Conrad_Rover_1923.txt contains 11978 unique words
../data/100_english_novels/corpus/Gissing_Women_1893.txt contains 16912 unique words
../data/100_english_novels/corpus/Woolf_Years_1937.txt contains 16701 unique words
../data/100_english_novels/corpus/Trollope_Phineas_1869.txt contains 19592 unique words
../data/100_english_novels/corpus/Lytton_Kenelm_1873.txt contains 24678 unique words
../data/100_english_novels/corpus/Blackmore_Springhaven_1887.txt contains 23115 unique words
../data/100_english_novels/corpus/Forster_View_1908.txt contains 12114 unique words
../data/100_english_novels/corpus/Eliot_Felix_1866.txt contains 22340 unique words
../data/100_english_novels/corpus/Chesterton_Napoleon_1904.txt contains 10847 unique words
../data/100_english_novels/corpus/Bennet_Imperial_1930.txt contains 29278 unique words
../data/100_english_novels/corpus/Burnett_Princess_1905.txt contains 9037 unique words
../data/100_english_novels/corpus/Ward_Milly_1881.txt contains 6688 unique words
../data/100_english_novels/corpus/Ford_Girl_1907.txt contains 7092 unique words
../data/100_english_novels/corpus/Meredith_Feverel_1859.txt contains 24576 unique words
../data/100_english_novels/corpus/Lee_Albany_1884.txt contains 11628 unique words
../data/100_english_novels/corpus/Ford_Soldier_1915.txt contains 10626 unique words
../data/100_english_novels/corpus/Ward_Ashe_1905.txt contains 21292 unique words
../data/100_english_novels/corpus/Morris_Water_1897.txt contains 12079 unique words
../data/100_english_novels/corpus/Galsworthy_Saints_1919.txt contains 14582 unique words
../data/100_english_novels/corpus/Gissing_Unclassed_1884.txt contains 15769 unique words
../data/100_english_novels/corpus/Anon_Clara_1864.txt contains 24797 unique words
../data/100_english_novels/corpus/Hardy_Jude_1895.txt contains 19237 unique words
../data/100_english_novels/corpus/Dickens_Expectations_1861.txt contains 20536 unique words
../data/100_english_novels/corpus/Chesterton_Thursday_1908.txt contains 10385 unique words
../data/100_english_novels/corpus/Burnett_Lord_1886.txt contains 8131 unique words
../data/100_english_novels/corpus/Braddon_Phantom_1883.txt contains 22474 unique words
###Markdown
__TASK 3: SAVE THE RESULT AS A SINGLE FILE WITH COLUMNS FILENAME, TOTAL_WORDS, UNIQUE_WORDS__
###Code
# For this task I am going to use the Pandas module to create a dataframe and then convert it into a CSV-file.
import pandas as pd
import csv
# Creating an empty dictionary with the three required columns
data = {'filename': [],
        'total_words': [],
        'unique_words': []}
# Creating an empty dataframe with Pandas that will be appended to in the loop
dataframe = pd.DataFrame(data, columns = ['filename', 'total_words', 'unique_words'])
# Creating a loop that loops through each txt-file and appends the dataframe with the information (filename, total words, unique words)
for filename in Path(data_path).glob("*.txt"):
with open (filename, "r", encoding = "utf-8") as file:
novel = file.read()
split_novel = novel.split()
unique_words = set(split_novel)
data = {'filename': [filename],
'total_words': [len(split_novel)],
'unique_words': [len(unique_words)]}
dataframe = dataframe.append(pd.DataFrame(data, columns = ['filename', 'total_words', 'unique_words']))
print(dataframe) # making sure that the dataframe looks right
csv_file = dataframe.to_csv(r'../data/100_english_novels/novel_info.csv', index = False) # converting the dataframe to a csv-file
# Now I have a single CSV-file called "novel_info.csv" that contains all the relevant columns and is located in the specified directory.
###Output
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
0 ../data/100_english_novels/corpus/Stevenson_Tr... 68448 10831
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
0 ../data/100_english_novels/corpus/Stevenson_Tr... 68448 10831
0 ../data/100_english_novels/corpus/Forster_Howa... 111057 17065
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
0 ../data/100_english_novels/corpus/Stevenson_Tr... 68448 10831
0 ../data/100_english_novels/corpus/Forster_Howa... 111057 17065
0 ../data/100_english_novels/corpus/Wcollins_Bas... 118088 14586
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
0 ../data/100_english_novels/corpus/Stevenson_Tr... 68448 10831
0 ../data/100_english_novels/corpus/Forster_Howa... 111057 17065
0 ../data/100_english_novels/corpus/Wcollins_Bas... 118088 14586
0 ../data/100_english_novels/corpus/Schreiner_Un... 90672 11744
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
0 ../data/100_english_novels/corpus/Stevenson_Tr... 68448 10831
0 ../data/100_english_novels/corpus/Forster_Howa... 111057 17065
0 ../data/100_english_novels/corpus/Wcollins_Bas... 118088 14586
0 ../data/100_english_novels/corpus/Schreiner_Un... 90672 11744
0 ../data/100_english_novels/corpus/Galsworthy_M... 110455 16713
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
0 ../data/100_english_novels/corpus/Stevenson_Tr... 68448 10831
0 ../data/100_english_novels/corpus/Forster_Howa... 111057 17065
0 ../data/100_english_novels/corpus/Wcollins_Bas... 118088 14586
0 ../data/100_english_novels/corpus/Schreiner_Un... 90672 11744
0 ../data/100_english_novels/corpus/Galsworthy_M... 110455 16713
0 ../data/100_english_novels/corpus/Corelli_Inno... 121950 19627
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
0 ../data/100_english_novels/corpus/Stevenson_Tr... 68448 10831
0 ../data/100_english_novels/corpus/Forster_Howa... 111057 17065
0 ../data/100_english_novels/corpus/Wcollins_Bas... 118088 14586
0 ../data/100_english_novels/corpus/Schreiner_Un... 90672 11744
0 ../data/100_english_novels/corpus/Galsworthy_M... 110455 16713
0 ../data/100_english_novels/corpus/Corelli_Inno... 121950 19627
0 ../data/100_english_novels/corpus/Kipling_Ligh... 72479 12493
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
0 ../data/100_english_novels/corpus/Stevenson_Tr... 68448 10831
0 ../data/100_english_novels/corpus/Forster_Howa... 111057 17065
0 ../data/100_english_novels/corpus/Wcollins_Bas... 118088 14586
0 ../data/100_english_novels/corpus/Schreiner_Un... 90672 11744
0 ../data/100_english_novels/corpus/Galsworthy_M... 110455 16713
0 ../data/100_english_novels/corpus/Corelli_Inno... 121950 19627
0 ../data/100_english_novels/corpus/Kipling_Ligh... 72479 12493
0 ../data/100_english_novels/corpus/Conrad_Nostr... 172276 21884
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
0 ../data/100_english_novels/corpus/Stevenson_Tr... 68448 10831
0 ../data/100_english_novels/corpus/Forster_Howa... 111057 17065
0 ../data/100_english_novels/corpus/Wcollins_Bas... 118088 14586
0 ../data/100_english_novels/corpus/Schreiner_Un... 90672 11744
0 ../data/100_english_novels/corpus/Galsworthy_M... 110455 16713
0 ../data/100_english_novels/corpus/Corelli_Inno... 121950 19627
0 ../data/100_english_novels/corpus/Kipling_Ligh... 72479 12493
0 ../data/100_english_novels/corpus/Conrad_Nostr... 172276 21884
0 ../data/100_english_novels/corpus/Stevenson_Ar... 80291 13168
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
0 ../data/100_english_novels/corpus/Stevenson_Tr... 68448 10831
0 ../data/100_english_novels/corpus/Forster_Howa... 111057 17065
0 ../data/100_english_novels/corpus/Wcollins_Bas... 118088 14586
0 ../data/100_english_novels/corpus/Schreiner_Un... 90672 11744
0 ../data/100_english_novels/corpus/Galsworthy_M... 110455 16713
0 ../data/100_english_novels/corpus/Corelli_Inno... 121950 19627
0 ../data/100_english_novels/corpus/Kipling_Ligh... 72479 12493
0 ../data/100_english_novels/corpus/Conrad_Nostr... 172276 21884
0 ../data/100_english_novels/corpus/Stevenson_Ar... 80291 13168
0 ../data/100_english_novels/corpus/Hardy_Tess_1... 151197 20955
filename total_words unique_words
0 ../data/100_english_novels/corpus/Cbronte_Vill... 196557 29084
0 ../data/100_english_novels/corpus/Forster_Ange... 50477 9464
0 ../data/100_english_novels/corpus/Woolf_Lighth... 70185 11157
0 ../data/100_english_novels/corpus/Meredith_Ric... 214985 28892
0 ../data/100_english_novels/corpus/Stevenson_Tr... 68448 10831
0 ../data/100_english_novels/corpus/Forster_Howa... 111057 17065
0 ../data/100_english_novels/corpus/Wcollins_Bas... 118088 14586
0 ../data/100_english_novels/corpus/Schreiner_Un... 90672 11744
0 ../data/100_english_novels/corpus/Galsworthy_M... 110455 16713
0 ../data/100_english_novels/corpus/Corelli_Inno... 121950 19627
0 ../data/100_english_novels/corpus/Kipling_Ligh... 72479 12493
0 ../data/100_english_novels/corpus/Conrad_Nostr... 172276 21884
0 ../data/100_english_novels/corpus/Stevenson_Ar... 80291 13168
0 ../data/100_english_novels/corpus/Hardy_Tess_1... 151197 20955
0 ../data/100_english_novels/corpus/Thackeray_Es... 187049 21375
|
tf-keras/sm_tf_keras_example_updated/sagemaker-keras-text-classification-updated.ipynb | ###Markdown
Text Classification Using Keras & TensorFlow on Amazon SageMaker Download Data Download and unzip the dataset
###Code
! wget -q https://archive.ics.uci.edu/ml/machine-learning-databases/00359/NewsAggregatorDataset.zip && unzip -o NewsAggregatorDataset.zip -d data
###Output
Archive: NewsAggregatorDataset.zip
inflating: data/2pageSessions.csv
creating: data/__MACOSX/
inflating: data/__MACOSX/._2pageSessions.csv
inflating: data/newsCorpora.csv
inflating: data/__MACOSX/._newsCorpora.csv
inflating: data/readme.txt
inflating: data/__MACOSX/._readme.txt
###Markdown
Now let's also download and unzip the pre-trained GloVe embedding files
###Code
! wget -q --no-check-certificate https://nlp.stanford.edu/data/glove.6B.zip && unzip -o glove.6B.zip -d data
!rm data/2pageSessions.csv data/glove.6B.200d.txt data/glove.6B.50d.txt data/glove.6B.300d.txt glove.6B.zip data/readme.txt NewsAggregatorDataset.zip && rm -rf data/__MACOSX/
!ls data
###Output
glove.6B.100d.txt newsCorpora.csv
###Markdown
Data Exploration
###Code
import pandas as pd
import tensorflow as tf
import re
import numpy as np
import os
from tensorflow.python.keras.preprocessing.text import Tokenizer
from tensorflow.python.keras.preprocessing.sequence import pad_sequences
from tensorflow.python.keras.utils import to_categorical
column_names = ["TITLE", "URL", "PUBLISHER", "CATEGORY", "STORY", "HOSTNAME", "TIMESTAMP"]
news_dataset = pd.read_csv('data/newsCorpora.csv', names=column_names, header=None, delimiter='\t')
news_dataset.head()
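# Optional check (a sketch, not run here): the label balance could be inspected with
# news_dataset['CATEGORY'].value_counts()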
###Output
_____no_output_____
###Markdown
Here we first import the necessary libraries and tools such as TensorFlow, pandas and numpy. An open-source high performance data analysis library, pandas is an essential tool used in almost every Python-based data science experiment. NumPy is another Python library that provides data structures to hold multi-dimensional array data and provides many utility functions to transform that data. TensorFlow is a widely used deep learning framework that also includes the higher-level deep learning Python library called Keras. We will be using Keras to build and iterate our text classification model.Next we define the list of columns contained in this dataset (the format is usually described as part of the dataset as it is here). Finally, we use the ‘read_csv()’ method of the pandas library to read the dataset into memory and look at the first few lines using the ‘head()’ method.Remember, our goal is to accurately predict the category of any news article. So, ‘Category’ is our label or target column. For this example, we will only use the information contained in the ‘Title’ to predict the category. When should I build my own algorithm container?You may not need to create a container to bring your own code to Amazon SageMaker. When you are using a framework such as Apache MXNet or TensorFlow that has direct support in SageMaker, you can simply supply the Python code that implements your algorithm using the SDK entry points for that framework. This set of supported frameworks is regularly added to, so you should check the current list to determine whether your algorithm is written in one of these common machine learning environments.Even if there is direct SDK support for your environment or framework, you may find it more effective to build your own container. If the code that implements your algorithm is quite complex or you need special additions to the framework, building your own container may be the right choice.Some of the reasons to build an already supported framework container are:A specific version isn't supported.Configure and install your dependencies and environment.Use a different training/hosting solution than provided.This walkthrough shows that it is quite straightforward to build your own container. So you can still use SageMaker even if your use case is not covered by the deep learning containers that we've built for you. The DockerfileThe Dockerfile describes the image that we want to build. You can think of it as describing the complete operating system installation of the system that you want to run. A Docker container running is quite a bit lighter than a full operating system, however, because it takes advantage of Linux on the host machine for the basic operations.For the Python science stack, we start from an official TensorFlow docker image and run the normal tools to install TensorFlow Serving. Then we add the code that implements our specific algorithm to the container and set up the right environment for it to run under.Let's look at the Dockerfile for this example.
###Code
!cat container/Dockerfile
%%sh
# The name of our algorithm
algorithm_name=sagemaker-keras-text-classification
cd container
chmod +x sagemaker_keras_text_classification/train
chmod +x sagemaker_keras_text_classification/serve
account=$(aws sts get-caller-identity --query Account --output text)
# Get the region defined in the current configuration (default to us-west-2 if none defined)
region=$(aws configure get region)
region=${region:-us-west-2}
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest"
# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${algorithm_name}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
aws ecr create-repository --repository-name "${algorithm_name}" > /dev/null
fi
# Get the login command from ECR and execute it directly
$(aws ecr get-login --region ${region} --no-include-email)
# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${algorithm_name} .
docker tag ${algorithm_name} ${fullname}
docker push ${fullname}
###Output
Login Succeeded
Sending build context to Docker daemon 25.09kB
Step 1/8 : FROM tensorflow/tensorflow:1.8.0-py3
---> a83a3dd79ff9
Step 2/8 : RUN apt-get update && apt-get install -y --no-install-recommends nginx curl
---> Using cache
---> b2c6ee34bd63
Step 3/8 : RUN echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list
---> Using cache
---> a6abb67693c2
Step 4/8 : RUN curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
---> Using cache
---> 191bced6e844
Step 5/8 : RUN apt-get update && apt-get install tensorflow-model-server
---> Using cache
---> 6a5c372c80ac
Step 6/8 : ENV PATH="/opt/ml/code:${PATH}"
---> Using cache
---> fafb737091f1
Step 7/8 : COPY /sagemaker_keras_text_classification /opt/ml/code
---> Using cache
---> 322614cc2708
Step 8/8 : WORKDIR /opt/ml/code
---> Using cache
---> d65bd0ecc164
Successfully built d65bd0ecc164
Successfully tagged sagemaker-keras-text-classification:latest
The push refers to repository [325928439752.dkr.ecr.us-east-1.amazonaws.com/sagemaker-keras-text-classification]
ad70d2db6a48: Preparing
23799f216cf7: Preparing
2aed47ce3a8e: Preparing
b8ae7b672121: Preparing
9f68233145ee: Preparing
e0c4197104f9: Preparing
1fb2bc13bdda: Preparing
9136ffbbf4aa: Preparing
b3a9262c451e: Preparing
ce70cf3f2428: Preparing
2faed3426aa2: Preparing
fee4cef4c353: Preparing
dc657e1d2f27: Preparing
588d3e4e8828: Preparing
bf3d982208f5: Preparing
cd7b4cc1c2dd: Preparing
3a0404adc8bd: Preparing
82718dbf791d: Preparing
c8aa3ff3c3d3: Preparing
2faed3426aa2: Waiting
fee4cef4c353: Waiting
dc657e1d2f27: Waiting
588d3e4e8828: Waiting
bf3d982208f5: Waiting
cd7b4cc1c2dd: Waiting
3a0404adc8bd: Waiting
82718dbf791d: Waiting
c8aa3ff3c3d3: Waiting
1fb2bc13bdda: Waiting
9136ffbbf4aa: Waiting
b3a9262c451e: Waiting
ce70cf3f2428: Waiting
e0c4197104f9: Waiting
ad70d2db6a48: Layer already exists
23799f216cf7: Layer already exists
9f68233145ee: Layer already exists
2aed47ce3a8e: Layer already exists
b8ae7b672121: Layer already exists
e0c4197104f9: Layer already exists
1fb2bc13bdda: Layer already exists
9136ffbbf4aa: Layer already exists
b3a9262c451e: Layer already exists
ce70cf3f2428: Layer already exists
fee4cef4c353: Layer already exists
2faed3426aa2: Layer already exists
bf3d982208f5: Layer already exists
dc657e1d2f27: Layer already exists
588d3e4e8828: Layer already exists
cd7b4cc1c2dd: Layer already exists
82718dbf791d: Layer already exists
3a0404adc8bd: Layer already exists
c8aa3ff3c3d3: Layer already exists
latest: digest: sha256:a7a7e1aa7f110ffe8fcc653ddb50f908808ffd5e89cb2b9e8f838d04550f7e5b size: 4297
###Markdown
Once you have your container packaged, you can use it to train and serve models. Let's do that with the algorithm we made above. Set up the environmentHere we specify a bucket to use and the role that will be used for working with SageMaker.
###Code
# S3 prefix
prefix = 'sagemaker-keras-text-classification'
# Define IAM role
import boto3
import re
import os
import numpy as np
import pandas as pd
from sagemaker import get_execution_role
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Create the sessionThe session remembers our connection parameters to SageMaker. We'll use it to perform all of our SageMaker operations.
###Code
import sagemaker as sage
from time import gmtime, strftime
sess = sage.Session()
###Output
_____no_output_____
###Markdown
Upload the data for training When training large models with huge amounts of data, you'll typically use big data tools, like Amazon Athena, AWS Glue, or Amazon EMR, to create your data in S3. We can use the tools provided by the SageMaker Python SDK to upload the data to a default bucket.
###Code
WORK_DIRECTORY = 'data'
data_location = sess.upload_data(WORK_DIRECTORY, key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Create an estimator and fit the model In order to use SageMaker to fit our algorithm, we'll create an `Estimator` that defines how to use the container to train. This includes the configuration we need to invoke SageMaker training:* The __container name__. This is constructed as in the shell commands above.* The __role__. As defined above.* The __instance count__ which is the number of machines to use for training.* The __instance type__ which is the type of machine to use for training.* The __output path__ determines where the model artifact will be written.* The __session__ is the SageMaker session object that we defined above. Then we use fit() on the estimator to train against the data that we uploaded above.
###Code
account = sess.boto_session.client('sts').get_caller_identity()['Account']
region = sess.boto_session.region_name
image = '{}.dkr.ecr.{}.amazonaws.com/sagemaker-keras-text-classification'.format(account, region)
tree = sage.estimator.Estimator(image,
role, 1, 'ml.c5.2xlarge',
output_path="s3://{}/output".format(sess.default_bucket()),
sagemaker_session=sess)
tree.fit(data_location)
###Output
2019-10-09 17:51:04 Starting - Starting the training job...
2019-10-09 17:51:06 Starting - Launching requested ML instances....
###Markdown
Deploy the modelDeploying the model to SageMaker hosting just requires a `deploy` call on the fitted model. This call takes an instance count, instance type, and optionally serializer and deserializer functions. These are used when the resulting predictor is created on the endpoint.
###Code
from sagemaker.predictor import json_serializer
predictor = tree.deploy(1, 'ml.m5.xlarge', serializer=json_serializer)
#request = { "input": "‘Deadpool 2’ Has More Swearing, Slicing and Dicing from Ryan Reynolds"}
#print(predictor.predict(request).decode('utf-8'))
###Output
_____no_output_____
###Markdown
Clean Up
###Code
sess.delete_endpoint(predictor.endpoint)
###Output
_____no_output_____ |
Classifiction/Logistic_Regression.ipynb | ###Markdown
Logistic Regression Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
print(X_train)
print(y_train)
print(X_test)
print(y_test)
###Output
_____no_output_____
###Markdown
Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print(X_train)
print(X_test)
###Output
_____no_output_____
###Markdown
Training the Logistic Regression model on the Training set
###Code
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting a new result
###Code
print(classifier.predict(sc.transform([[30,87000]])))
###Output
[0]
###Markdown
Predicting the Test set results
###Code
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
_____no_output_____
###Markdown
Making the Confusion Matrix
###Code
from sklearn.metrics import accuracy_score, confusion_matrix
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
###Output
[[65 3]
[ 8 24]]
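###Markdown
The four cells can also be unpacked by position (a small sketch; scikit-learn orders confusion-matrix rows by true label and columns by predicted label):
###Code
# Hedged sketch: name the confusion-matrix cells explicitly
tn, fp, fn, tp = cm.ravel()
print(f"TN={tn}, FP={fp}, FN={fn}, TP={tp}")
###Output
_____no_output_____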
###Markdown
Here, 65 is the number of correct predictions for class 0: we predicted that the person would not buy the SUV and they did not buy it (true negatives). The 3 is the number of people we predicted would buy the SUV but who did not buy it (false positives). The 8 is the number of people we predicted would not buy the SUV but who actually bought it (false negatives). The 24 is the number of people who bought the SUV and were correctly predicted to do so (true positives). Visualising the Training set results
###Code
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_train), y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
###Markdown
Visualising the Test set results
###Code
from matplotlib.colors import ListedColormap
X_set, y_set = sc.inverse_transform(X_test), y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 10, stop = X_set[:, 0].max() + 10, step = 0.25),
np.arange(start = X_set[:, 1].min() - 1000, stop = X_set[:, 1].max() + 1000, step = 0.25))
plt.contourf(X1, X2, classifier.predict(sc.transform(np.array([X1.ravel(), X2.ravel()]).T)).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1], c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
|
colab/dataprep.ipynb | ###Markdown
DescriptionThis script takes a set of audio files and prepares them to be used as training data for MP3net. UsageThe script below assumes you store the program code on Google Drive and audio data on gs:// To use this notebook, check the cells below for capitalized tags which you will need to personalize.
###Code
# check location of backend
import subprocess
import json
proc=subprocess.Popen('curl ipinfo.io', shell=True, stdout=subprocess.PIPE, )
ip_data = json.loads(proc.communicate()[0])
server_country = ip_data['country']
print(f"Server location: {ip_data['city']} ({ip_data['region']}), {server_country}\n")
project_id = 'YOUR_PROJECT_ID'
!gcloud config set project {project_id}
# connect to gs://
from google.colab import auth
auth.authenticate_user()
# Connect to Google Drive
# The program code is assumed to be on Google Drive
from google.colab import drive
drive.mount('/content/gdrive')
# Set environment variable so service accounts gets access to bucket (needed for gspath)
# (for more info see: https://cloud.google.com/docs/authentication/getting-started)
import os
os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="/content/gdrive/JSON_WITH_SERVICE_ACCOUNT_PRIVATE_KEYS"
### ======================== RUN PARAMETERS ======================= ###
### ###
# dict with bucket-region pairs
# script will pick bucket in same region as backend to avoid expensive e-gress charges
# when training on TPUs YOUR_BUCKET_REGION should be US since all Colab TPUs are the US region
BUCKETS = {'gs://YOUR_BUCKET_NAME/': ['YOUR_BUCKET_REGION']}
# Location and type of source files (on gs://...)
REMOTE_INPUT_FILEPATH = 'FILEPATH_TO_INPUT_FILES' # don't preface with gs://YOUR_BUCKET_NAME
INPUT_FILE_EXTENSION = 'mp4'
INPUT_BATCH_SIZE = 42 # number of input files to be batched into one .tfrecord file (target 400MiB .tfrecord file)
# Destination where .tfrecord files will be written (on gs://...)
DATA_DIR = 'FILEPATH_OF_TFRECORD_FILES' # don't preface with gs://YOUR_BUCKET_NAME
# Local directory on backend (probably needs a High-RAM runtime type)
LOCAL_INPUT_FILES = 'local/'
### ###
### =============================================================== ###
%tensorflow_version 2.x
import tensorflow as tf
print(f"TensorFlow v{tf.__version__}")
import re
# select target bucket, based on country of backend (avoid e-gress!!!)
target_bucket = None
for bucket, country_lst in BUCKETS.items():
if server_country in country_lst:
target_bucket = bucket
break
if target_bucket is None:
raise ValueError(f'No target-bucket found for {server_country}')
print(f"Target-bucket: {target_bucket}")
# add target-bucket to directories
DATA_DIR = target_bucket + DATA_DIR
REMOTE_INPUT_FILEPATH = target_bucket + REMOTE_INPUT_FILEPATH
# install modules used by the code
!pip install tensorboardx
!pip install soundfile
!pip install tensorflow_addons
!pip install pytube
# Make sure python finds the imports
import sys
sys.path.append('/content/gdrive/PATH_TO/audiocodec')
sys.path.append('/content/gdrive/PATH_TO/mp4net')
sys.path.append('/content/gdrive/PATH_TO/preprocessing')
# local install of audiocodec (only needs to be executed once)
!pip install -e /content/gdrive/PATH_TO/audiocodec
# Copy input data -> local server
# (only do this when data is not already on local server)
!mkdir ./{LOCAL_INPUT_FILES}
!gsutil -m cp {REMOTE_INPUT_FILEPATH}/* ./{LOCAL_INPUT_FILES}
# ######### #
# DATA PREP #
# ######### #
#
import datetime
from utils import gspath
from utils import audio_utils
from model import mp4net
import dataprep
in_filepath = LOCAL_INPUT_FILES
input_file_extension = INPUT_FILE_EXTENSION
out_filepath = DATA_DIR
model = mp4net.MP4netFactory()
temp_filepath = 'local_process/'
!mkdir {temp_filepath}
!rm {temp_filepath}*.*
# group input files in batches
file_pattern = gspath.join(in_filepath, f"*.{input_file_extension}")
audio_file_paths = gspath.findall(file_pattern)
audio_file_paths.sort()
input_batch_size = INPUT_BATCH_SIZE
input_files_batched = [audio_file_paths[i:i + input_batch_size]
for i in range(0, len(audio_file_paths), input_batch_size)]
# loop over batches
for batch_no, batch in enumerate(input_files_batched):
print()
print(f'batch {batch_no}')
tf_output_filename = gspath.join(out_filepath, f'yt-{batch_no:04d}' + f'_sr{model.sample_rate}_Nx{model.freq_n}x{model.channels_n}.tfrecord')
if gspath.findall(tf_output_filename):
# skip if output file already exists (maybe from earlier run that crashed)
print(f' Output file {tf_output_filename} already exists...')
else:
# loop over all songs in batch
temp_wavs = []
for song_no, song_filename in enumerate(batch):
# convert and resample to WAV
temp_wavfile = temp_filepath + f'yt-{batch_no:04d}-{song_no:02d}.wav'
temp_wavs.append(temp_wavfile)
print(f' resampling to {model.sample_rate}Hz: {song_filename} -> {temp_wavfile}')
!ffmpeg -loglevel quiet -i {song_filename} -ar {model.sample_rate} {temp_wavfile}
# loop over all songs in batch
print(f" {datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f')}: {tf_output_filename} <-- {temp_wavs}")
# convert to tf-record
dataprep.audio2tfrecord(temp_wavs, tf_output_filename, model)
!rm {temp_filepath}*.*
###Output
_____no_output_____ |
Examples/2-Content/2.11-Estimates/EX-2.11.01-Estimates-Summary.ipynb | ###Markdown
---- Data Library for Python---- Content layer - Estimates - SummaryThis notebook demonstrates how to retrieve Estimates.I/B/E/S (Institutional Brokers' Estimate System) delivers a complete suite of Estimates content with a global view and is the largest contributor base in the industry. RDP I/B/E/S Estimates API provides information about consensus and aggregates data(26 generic measures, 23 KPI measures), company guidance data and advanced analytics. With over 40 years of collection experience and extensive quality controls that include thousands of automated error checks and stringent manual analysis, RDP I/B/E/S gives the clients the content they need for superior insight, research and investment decision making.The I/B/E/S database currently covers over 56,000 companies in 100 markets.More than 900 firms contribute data to I/B/E/S, from the largest global houses to regional and local brokers, with US data back to 1976 and international data back to 1987. Learn moreTo learn more about the Refinitiv Data Library for Python please join the Refinitiv Developer Community. By [registering](https://developers.refinitiv.com/iam/register) and [logging](https://developers.refinitiv.com/content/devportal/en_us/initCookie.html) into the Refinitiv Developer Community portal you will have free access to a number of learning materials like [Quick Start guides](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/quick-start), [Tutorials](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/learning), [Documentation](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/docs) and much more. Getting Help and SupportIf you have any questions regarding using the API, please post them on the [Refinitiv Data Q&A Forum](https://community.developers.refinitiv.com/spaces/321/index.html). The Refinitiv Developer Community will be happy to help. ---- Set the configuration file locationFor a better ease of use, you have the option to set initialization parameters of the Refinitiv Data Library in the _refinitiv-data.config.json_ configuration file. This file must be located beside your notebook, in your user folder or in a folder defined by the _RD_LIB_CONFIG_PATH_ environment variable. The _RD_LIB_CONFIG_PATH_ environment variable is the option used by this series of examples. The following code sets this environment variable.
###Code
import os
os.environ["RD_LIB_CONFIG_PATH"] = "../../../Configuration"
###Output
_____no_output_____
###Markdown
Some Imports to start with
###Code
import refinitiv.data as rd
from refinitiv.data.content import estimates
from refinitiv.data.content.estimates import Package
###Output
_____no_output_____
###Markdown
Open the data sessionThe open_session() function creates and open sessions based on the information contained in the refinitiv-data.config.json configuration file. Please edit this file to set the session type and other parameters required for the session you want to open.
###Code
rd.open_session("platform.rdp")
###Output
_____no_output_____
###Markdown
Retrieve Data Summary - Annual
###Code
response = estimates.view_summary.annual.Definition("BNPP.PA", Package.BASIC).get_data()
response.data.df
###Output
_____no_output_____
###Markdown
Summary - Historical snapshots non-periodic measures
###Code
response = estimates.view_summary.historical_snapshots_non_periodic_measures.Definition("BNPP.PA", Package.BASIC).get_data()
response.data.df
###Output
_____no_output_____
###Markdown
Summary - Historical snapshots periodic measures annual
###Code
response = estimates.view_summary.historical_snapshots_periodic_measures_annual.Definition("BNPP.PA", Package.BASIC).get_data()
response.data.df
###Output
_____no_output_____
###Markdown
Summary - Historical snapshots periodic measures interim
###Code
response = estimates.view_summary.historical_snapshots_periodic_measures_interim.Definition("BNPP.PA", Package.BASIC).get_data()
response.data.df
###Output
_____no_output_____
###Markdown
Summary - Historical snapshots recommendations
###Code
response = estimates.view_summary.historical_snapshots_recommendations.Definition("BNPP.PA", Package.BASIC).get_data()
response.data.df
###Output
_____no_output_____
###Markdown
Summary - Interim
###Code
response = estimates.view_summary.interim.Definition("BNPP.PA", Package.BASIC).get_data()
response.data.df
###Output
_____no_output_____
###Markdown
Summary - Non-periodic measures
###Code
response = estimates.view_summary.non_periodic_measures.Definition("BNPP.PA", Package.BASIC).get_data()
response.data.df
###Output
_____no_output_____
###Markdown
Summary - Recommendations
###Code
response = estimates.view_summary.recommendations.Definition("BNPP.PA", Package.BASIC).get_data()
response.data.df
###Output
_____no_output_____
###Markdown
Close the session
###Code
rd.close_session()
###Output
_____no_output_____ |
code/processing/2021-02-15_21-59-19/_run_jnb/2021-02-15_21-59-19_Or179_Or177_overnight-output (3).ipynb | ###Markdown
Get each bird's recording, and their microphone channels
###Code
# This needs to be less repetitive
if 'Or177' in data_path:
# Whole recording from the hard drive
recording = se.BinDatRecordingExtractor(OE_data_path,30000,40, dtype='int16')
# Note I am adding relevant ADC channels
# First bird
Or179_recording = se.SubRecordingExtractor(
recording,
channel_ids=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11,12,13,14,15, 32])
# Second bird
Or177_recording = se.SubRecordingExtractor(
recording,
channel_ids=[16, 17,18,19,20,21,22,23,24,25,26,27,28,29,30,31, 33])
    # Bandpass filter microphone recordings
mic_recording = st.preprocessing.bandpass_filter(
se.SubRecordingExtractor(recording,channel_ids=[32,33]),
freq_min=500,
freq_max=14000
)
else:
# Whole recording from the hard drive
recording = se.BinDatRecordingExtractor(OE_data_path, 30000, 24, dtype='int16')
# Note I am adding relevant ADC channels
# First bird
Or179_recording = se.SubRecordingExtractor(
recording,
channel_ids=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11,12,13,14,15,16])
    # Bandpass filter microphone recordings
mic_recording = st.preprocessing.bandpass_filter(
se.SubRecordingExtractor(recording,channel_ids=[16]),
freq_min=500,
        freq_max=14000
)
# Get wav files
wav_names = [file_name for file_name in os.listdir(data_path) if file_name.endswith('.wav')]
wav_paths = [os.path.join(data_path,wav_name) for wav_name in wav_names]
# Get tranges for wav files in the actual recording
# OE_data_path actually contains the path all the way to the .bin. We just need the parent directory
# with the timestamp.
# Split up the path
OE_data_path_split= OE_data_path.split(os.sep)
# Take only the first three. os.path is weird so we manually add the separator after the
# drive name.
OE_parent_path = os.path.join(OE_data_path_split[0] + os.sep, *OE_data_path_split[1:3])
# Get all time ranges given the custom offset.
dateparts = os.path.normpath(data_path).split(os.sep)[-1].split('-')
exp_date = datetime.datetime(
year=int(dateparts[0]),
month=int(dateparts[1]),
day=int(dateparts[2][:2]),
minute=int(dateparts[2][3:]),
second=int(dateparts[3][:2])
)
# We synchronized the computer on Feb 2
if exp_date < datetime.datetime(2021, 2, 21,11,0):
# Use the default offset.
tranges=np.array([
get_trange(OE_parent_path, path, duration=3)
for path in wav_paths])
else:
tranges=np.array([
get_trange(OE_parent_path, path, offset=datetime.timedelta(seconds=0), duration=3)
for path in wav_paths])
wav_df = pd.DataFrame({'wav_paths':wav_paths, 'wav_names':wav_names, 'trange0':tranges[:, 0], 'trange1':tranges[:, 1]})
wav_df.head()
# Set up widgets
wav_selector = pnw.Select(options=[(i, name) for i, name in enumerate(wav_df.wav_names.values)], name="Select song file")
window_radius_selector = pnw.Select(options=[0,1,2,3,4,5,6,7,8, 10,20,30,40,60], value=8, name="Select window radius")
spect_chan_selector = pnw.Select(options=list(range(16)), name="Spectrogram channel")
spect_freq_lo = pnw.Select(options=np.linspace(0,130,14).tolist(), value=20, name="Low frequency for spectrogram (Hz)")
spect_freq_hi = pnw.Select(options=np.linspace(130,0,14).tolist(), value=40, name="Hi frequency for spectrogram (Hz)")
log_nfft_selector = pnw.Select(options=np.linspace(10,16,7).tolist(), value=14, name="magnitude of nfft (starts at 256)")
@pn.depends(
wav_selector=wav_selector.param.value,
window_radius=window_radius_selector.param.value,
spect_chan=spect_chan_selector.param.value,
spect_freq_lo=spect_freq_lo.param.value,
spect_freq_hi=spect_freq_hi.param.value,
log_nfft=log_nfft_selector.param.value
)
def create_figure(wav_selector,
window_radius, spect_chan,
spect_freq_lo, spect_freq_hi, log_nfft):
# Each column in each row to a tuple that we unpack
wav_file_path, wav_file_name, tr0, tr1 = wav_df.loc[wav_selector[0],:]
# Set up figure
fig,axes = plt.subplots(4,1, figsize=(16,12))
# Get wav file numpy recording object
wav_recording = get_wav_recording(wav_file_path)
# Apply offset and apply window radius
tr0 = tr0 - window_radius
# Add duration of wav file
tr1 = tr1 + window_radius +wav_recording.get_num_frames()/wav_recording.get_sampling_frequency()
'''Plot sound spectrogram (Hi fi mic)'''
sw.plot_spectrogram(wav_recording, channel=0, freqrange=[300,14000],ax=axes[0],cmap='magma')
axes[0].set_title('Hi fi mic spectrogram')
'''Plot sound spectrogram (Lo fi mic)'''
if 'Or179' in wav_file_name:
LFP_recording = Or179_recording
elif 'Or177' in wav_file_name:
LFP_recording = Or177_recording
mic_channel = LFP_recording.get_channel_ids()[-1]
sw.plot_spectrogram(
mic_recording,
mic_channel,
trange=[tr0, tr1],
freqrange=[600,14000],
ax=axes[1],cmap='magma'
)
axes[1].set_title('Lo fi mic spectrogram')
'''Plot LFP timeseries (smoothed)'''
chan_ids = np.array([LFP_recording.get_channel_ids()]).flatten()
sw.plot_timeseries(
st.preprocessing.bandpass_filter(
se.SubRecordingExtractor(LFP_recording),
freq_min=25,
freq_max=45
),
channel_ids=[chan_ids[spect_chan]],
trange=[tr0, tr1],
ax=axes[2]
)
axes[2].set_title('Raw LFP')
# Clean lines
for line in plt.gca().lines:
line.set_linewidth(0.1)
'''Plot LFP spectrogram'''
sw.plot_spectrogram(
LFP_recording,
channel=chan_ids[spect_chan],
freqrange=[spect_freq_lo,spect_freq_hi],
trange=[tr0, tr1],
ax=axes[3],
nfft=int(2**log_nfft)
)
axes[3].set_title('LFP')
for i, ax in enumerate(axes):
ax.set_yticks([ax.get_ylim()[1]])
ax.set_yticklabels([ax.get_ylim()[1]])
ax.set_xlabel('')
ax.xaxis.set_major_formatter(FormatStrFormatter('%.2f'))
# Show 30 Hz
axes[3].set_yticks([30, axes[3].get_ylim()[1]])
axes[3].set_yticklabels([30, axes[3].get_ylim()[1]])
return fig
dash = pn.Column(
pn.Row(wav_selector, window_radius_selector,spect_chan_selector),
pn.Row(spect_freq_lo,spect_freq_hi,log_nfft_selector),
create_figure
);
###Output
_____no_output_____
###Markdown
Deep dive into a single channel
###Code
dash
###Output
_____no_output_____
###Markdown
Looking at all channels at once
###Code
# Make chanmap
chanmap=np.array([[3, 7, 11, 15],[2, 4, 10, 14],[4, 8, 12, 16],[1, 5, 9, 13]])
# Set up widgets
wav_selector = pnw.Select(options=[(i, name) for i, name in enumerate(wav_df.wav_names.values)], name="Select song file")
window_radius_selector = pnw.Select(options=[10,20,30,40,60], name="Select window radius")
spect_freq_lo = pnw.Select(options=np.linspace(0,130,14).tolist(), name="Low frequency for spectrogram (Hz)")
spect_freq_hi = pnw.Select(options=np.linspace(130,0,14).tolist(), name="Hi frequency for spectrogram (Hz)")
log_nfft_selector = pnw.Select(options=np.linspace(10,16,7).tolist(),value=14, name="magnitude of nfft (starts at 256)")
def housekeeping(wav_selector, window_radius):
# Each column in each row to a tuple that we unpack
wav_file_path, wav_file_name, tr0, tr1 = wav_df.loc[wav_selector[0],:]
# Get wav file numpy recording object
wav_recording = get_wav_recording(wav_file_path)
# Apply offset and apply window radius
offset = 0
tr0 = tr0+ offset-window_radius
# Add duration of wav file
tr1 = tr1+ offset+window_radius+wav_recording.get_num_frames()/wav_recording.get_sampling_frequency()
return wav_recording, wav_file_name, tr0, tr1
@pn.depends(
wav_selector=wav_selector.param.value,
window_radius=window_radius_selector.param.value)
def create_sound_figure(wav_selector, window_radius):
# Housekeeping
wav_recording, wav_file_name, tr0, tr1 = housekeeping(wav_selector, window_radius)
# Set up figure for sound
fig,axes = plt.subplots(1,2, figsize=(16,2))
'''Plot sound spectrogram (Hi fi mic)'''
sw.plot_spectrogram(wav_recording, channel=0, freqrange=[300,14000], ax=axes[0],cmap='magma')
axes[0].set_title('Hi fi mic spectrogram')
'''Plot sound spectrogram (Lo fi mic)'''
if 'Or179' in wav_file_name:
LFP_recording = Or179_recording
elif 'Or177' in wav_file_name:
LFP_recording = Or177_recording
mic_channel = LFP_recording.get_channel_ids()[-1]
sw.plot_spectrogram(
mic_recording,
mic_channel,
trange=[tr0, tr1],
freqrange=[600,4000],
ax=axes[1],cmap='magma'
)
axes[1].set_title('Lo fi mic spectrogram')
for ax in axes:
ax.axis('off')
return fig
@pn.depends(
wav_selector=wav_selector.param.value,
window_radius=window_radius_selector.param.value,
spect_freq_lo=spect_freq_lo.param.value,
spect_freq_hi=spect_freq_hi.param.value,
log_nfft=log_nfft_selector.param.value
)
def create_LFP_figure(wav_selector, window_radius,
spect_freq_lo, spect_freq_hi, log_nfft):
# Housekeeping
wav_recording, wav_file_name, tr0, tr1 = housekeeping(wav_selector, window_radius)
fig,axes=plt.subplots(4,4,figsize=(16,8))
'''Plot LFP'''
for i in range(axes.shape[0]):
for j in range(axes.shape[1]):
ax = axes[i][j]
sw.plot_spectrogram(recording, chanmap[i][j], trange=[tr0, tr1],
freqrange=[spect_freq_lo,spect_freq_hi],
nfft=int(2**log_nfft), ax=ax,cmap='magma')
ax.axis('off')
# Set channel as title
ax.set_title(chanmap[i][j])
# Clean up
for i in range(axes.shape[0]):
for j in range(axes.shape[1]):
ax=axes[i][j]
ax.set_yticks([ax.get_ylim()[1]])
ax.set_yticklabels([ax.get_ylim()[1]])
ax.set_xlabel('')
# Show 30 Hz
ax.set_yticks([30, ax.get_ylim()[1]])
ax.set_yticklabels([30, ax.get_ylim()[1]])
return fig
dash = pn.Column(
pn.Row(wav_selector,window_radius_selector),
pn.Row(spect_freq_lo,spect_freq_hi,log_nfft_selector),
create_sound_figure, create_LFP_figure
);
dash
###Output
_____no_output_____
###Markdown
Sleep data analysis
###Code
csvs = [os.path.normpath(os.path.join(data_path,file)) for file in os.listdir(data_path) if file.endswith('.csv')]
csvs
csv = csvs[0]
df = pd.read_csv(csv)
del df['Unnamed: 0']
df.head()
csv_name = csv.split(os.sep)[-1]
rec=None
if 'Or179' in csv_name:
rec = st.preprocessing.resample(Or179_recording, 500)
elif 'Or177' in csv_name:
rec = st.preprocessing.resample(Or177_recording, 500)
# Get second to last element in split
channel = int(csv_name.split('_')[-2])
window_slider = pn.widgets.DiscreteSlider(
name='window size',
options=[*range(1,1000)],
value=1
)
window_slider_raw = pn.widgets.DiscreteSlider(
name='window size (raw timeseries)',
options=[*range(1,1000)],
value=1
)
freq_slider_1 = pn.widgets.DiscreteSlider(
name='f (Hz)',
options=[*range(1,200)],
value=30
)
freq_slider_2 = pn.widgets.DiscreteSlider(
name='f (Hz)',
options=[*range(1,200)],
value=10
)
freq_slider_3 = pn.widgets.DiscreteSlider(
name='f (Hz)',
options=[*range(1,200)],
value=4
)
range_slider = pn.widgets.RangeSlider(
start=0,
end=df.t.max(),
step=10,
value=(0, 500),
name="Time range",
value_throttled=(0,500)
)
@pn.depends(window=window_slider.param.value,
freq_1=freq_slider_1.param.value,
freq_2=freq_slider_2.param.value,
freq_3=freq_slider_3.param.value,
rang=range_slider.param.value_throttled)
def plot_ts(window, freq_1, freq_2, freq_3, rang):
subdf = df.loc[
((df['f']==freq_1)|(df['f']==freq_2)|(df['f']==freq_3))
& ((df['t'] > rang[0]) & (df['t'] < rang[1])),:]
return hv.operation.timeseries.rolling(
hv.Curve(
data = subdf,
kdims=["t", "f"],
vdims="logpower"
).groupby("f").overlay().opts(width=1200, height=300),
rolling_window=window
)
@pn.depends(window=window_slider_raw.param.value, rang=range_slider.param.value_throttled)
def plot_raw_ts(window, rang):
sr = rec.get_sampling_frequency()
return hv.operation.datashader.datashade(
hv.operation.timeseries.rolling(
hv.Curve(
rec.get_traces(channel_ids=[channel], start_frame=sr*rang[0], end_frame=sr*rang[1]).flatten()
),
rolling_window=window
),
aggregator="any"
).opts(width=1200, height=300)
pn.Column(
window_slider,window_slider_raw,freq_slider_1, freq_slider_2, freq_slider_3,range_slider,
plot_ts,
plot_raw_ts
)
###Output
_____no_output_____ |
2020_week_3/pendulum_animation_notebook_v1-Copy1.ipynb | ###Markdown
Basic pendulum animations: using %matplotlib notebook Use Pendulum class to generate basic pendulum animations. Uses the `%matplotlib notebook` backend for Jupyter notebooks to display the animation as real-time updates with `animation.FuncAnimation` (as opposed to making a movie, see the pendulum_animation_notebook_inline versions for an alternative).* v1: Created 25-Jan-2019. Last revised 27-Jan-2019 by Dick Furnstahl ([email protected]).
###Code
#%matplotlib inline
%matplotlib notebook
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
plt.rcParams['figure.dpi'] = 100. # this is the default
###Output
_____no_output_____
###Markdown
Pendulum class and utility functions
###Code
class Pendulum():
"""
Pendulum class implements the parameters and differential equation for
a pendulum using the notation from Taylor. The class will now have the
solve_ode method removed...everything else will remain the same.
Parameters
----------
omega_0 : float
natural frequency of the pendulum (\sqrt{g/l} where l is the
pendulum length)
beta : float
coefficient of friction
gamma_ext : float
amplitude of external force is gamma * omega_0**2
omega_ext : float
frequency of external force
phi_ext : float
phase angle for external force
Methods
-------
dy_dt(y, t)
Returns the right side of the differential equation in vector y,
given time t and the corresponding value of y.
driving_force(t)
Returns the value of the external driving force at time t.
"""
def __init__(self, omega_0=1., beta=0.2,
gamma_ext=0.2, omega_ext=0.689, phi_ext=0.
):
self.omega_0 = omega_0
self.beta = beta
self.gamma_ext = gamma_ext
self.omega_ext = omega_ext
self.phi_ext = phi_ext
def dy_dt(self, y, t):
"""
This function returns the right-hand side of the diffeq:
[dphi/dt d^2phi/dt^2]
Parameters
----------
y : float
A 2-component vector with y[0] = phi(t) and y[1] = dphi/dt
t : float
time
Returns
-------
"""
F_ext = self.driving_force(t)
return [y[1], -self.omega_0**2 * np.sin(y[0]) - 2.*self.beta * y[1] \
+ F_ext]
def driving_force(self, t):
"""
This function returns the value of the driving force at time t.
"""
return self.gamma_ext * self.omega_0**2 \
* np.cos(self.omega_ext*t + self.phi_ext)
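    # A minimal sketch of the solve_ode method called later in this notebook
    #  (assumed implementation: it uses odeint imported above, the dy_dt method,
    #   and the module-level t_pts array of plotting times).
    def solve_ode(self, phi_0, phi_dot_0, abserr=1.0e-8, relerr=1.0e-8):
        """
        Solve the pendulum ODE for initial conditions phi_0, phi_dot_0 and
        return phi(t) and dphi/dt(t) evaluated at the global t_pts array.
        """
        y = [phi_0, phi_dot_0]
        phi, phi_dot = odeint(self.dy_dt, y, t_pts,
                              atol=abserr, rtol=relerr).T
        return phi, phi_dot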
# Quick check of the class: integrate the pendulum ODE with solve_ivp.
#  Note that solve_ivp expects fun(t, y) while dy_dt above uses the odeint
#  ordering fun(y, t), so it is wrapped in a lambda here.
p_test = Pendulum()
y_0 = np.array([np.pi/2., 0.])   # initial [phi, dphi/dt]
abserr = 1.e-8
relerr = 1.e-8

t_start = 0.
t_end = 10.
t_pts = np.linspace(t_start, t_end, 20)

solution = solve_ivp(lambda t, y: p_test.dy_dt(y, t), (t_start, t_end), y_0,
                     t_eval=t_pts, rtol=relerr, atol=abserr)
def plot_y_vs_x(x, y, axis_labels=None, label=None, title=None,
color=None, linestyle=None, semilogy=False, loglog=False,
ax=None):
"""
Generic plotting function: return a figure axis with a plot of y vs. x,
with line color and style, title, axis labels, and line label
"""
if ax is None: # if the axis object doesn't exist, make one
ax = plt.gca()
if (semilogy):
line, = ax.semilogy(x, y, label=label,
color=color, linestyle=linestyle)
elif (loglog):
line, = ax.loglog(x, y, label=label,
color=color, linestyle=linestyle)
else:
line, = ax.plot(x, y, label=label,
color=color, linestyle=linestyle)
    if label is not None:    # if a label is passed, show the legend
ax.legend()
    if title is not None:    # set a title if one is passed
ax.set_title(title)
if axis_labels is not None: # set x-axis and y-axis labels if passed
ax.set_xlabel(axis_labels[0])
ax.set_ylabel(axis_labels[1])
return ax, line
def start_stop_indices(t_pts, plot_start, plot_stop):
"""Given an array (e.g., of times) and desired starting and stop values,
return the array indices that are closest to those values.
"""
start_index = (np.fabs(t_pts-plot_start)).argmin() # index in t_pts array
stop_index = (np.fabs(t_pts-plot_stop)).argmin() # index in t_pts array
return start_index, stop_index
###Output
_____no_output_____
###Markdown
Plots to animate
###Code
# Labels for individual plot axes
phi_vs_time_labels = (r'$t$', r'$\phi(t)$')
phi_dot_vs_time_labels = (r'$t$', r'$d\phi/dt(t)$')
state_space_labels = (r'$\phi$', r'$d\phi/dt$')
# Common plotting time (generate the full time then use slices)
t_start = 0.
t_end = 100.
delta_t = 0.01
t_pts = np.arange(t_start, t_end+delta_t, delta_t)
# Common pendulum parameters
gamma_ext = 1.077
omega_ext = 2.*np.pi
phi_ext = 0.
omega_0 = 1.5*omega_ext
beta = omega_0/4.
# Instantiate a pendulum
p1 = Pendulum(omega_0=omega_0, beta=beta,
gamma_ext=gamma_ext, omega_ext=omega_ext, phi_ext=phi_ext)
# calculate the driving force for t_pts
driving = p1.driving_force(t_pts)
###Output
_____no_output_____
###Markdown
Demo animation
###Code
# initial conditions specified
phi_0 = 0.0 # -np.pi / 2.
phi_dot_0 = 0.0
phi_1, phi_dot_1 = p1.solve_ode(phi_0, phi_dot_0)
# Change the common font size
font_size = 10
plt.rcParams.update({'font.size': font_size})
# start the plot!
overall_title = 'Parameters: ' + \
rf' $\omega = {omega_ext:.2f},$' + \
rf' $\gamma = {gamma_ext:.3f},$' + \
rf' $\omega_0 = {omega_0:.2f},$' + \
rf' $\beta = {beta:.2f},$' + \
rf' $\phi_0 = {phi_0:.2f},$' + \
rf' $\dot\phi_0 = {phi_dot_0:.2f}$' + \
'\n' # \n means a new line (adds some space here)
fig = plt.figure(figsize=(10,3.3), num='Pendulum Plots')
fig.suptitle(overall_title, va='top')
# first plot: plot from t=0 to t=10
ax_a = fig.add_subplot(1,3,1)
start, stop = start_stop_indices(t_pts, 0., 10.)
plot_y_vs_x(t_pts[start : stop], phi_1[start : stop],
axis_labels=phi_vs_time_labels,
color='blue',
label=None,
title='Figure 12.2',
ax=ax_a)
# second plot: state space plot from t=0 to t=10
ax_b = fig.add_subplot(1,3,2)
start, stop = start_stop_indices(t_pts, 0., 10.)
plot_y_vs_x(phi_1[start : stop], phi_dot_1[start : stop],
axis_labels=state_space_labels,
color='blue',
label=None,
title=rf'$0 \leq t \leq 10$',
ax=ax_b)
# third plot: state space plot from t=5 to t=12
ax_c = fig.add_subplot(1,3,3)
start, stop = start_stop_indices(t_pts, 5., 12.)
plot_y_vs_x(phi_1[start : stop], phi_dot_1[start : stop],
axis_labels=state_space_labels,
color='blue',
label=None,
title=rf'$5 \leq t \leq 12$',
ax=ax_c)
fig.tight_layout()
fig.subplots_adjust(top=0.8)
fig.savefig('Figure_Pendulum_plots.png', bbox_inches='tight') # always bbox_inches='tight'
def animate_pendulum(i, t_pts, phi_1, phi_dot_1):
pt_1.set_data(t_pts[i], phi_1[i])
line_2.set_data([phi_1[i], phi_1[i]], [0.,length])
pt_2.set_data(phi_1[i], length)
phi_string = rf'$\phi = {phi_1[i]:.1f}$'
phi_text.set_text(phi_string)
pt_3.set_data(phi_1[i], phi_dot_1[i])
return pt_1, pt_2, phi_text, pt_3
#%%capture
start, stop = start_stop_indices(t_pts, 10., 30.)
fig_new = plt.figure(figsize=(10, 3.3), num='Pendulum animation')
ax_1 = fig_new.add_subplot(1,3,1)
line_1, = ax_1.plot(t_pts[start : stop], phi_1[start : stop], color='blue')
pt_1, = ax_1.plot(t_pts[start], phi_1[start], 'o', color='red')
ax_1.set_xlabel(r'$t$')
ax_1.set_ylabel(r'$\phi(t)$')
ax_2 = fig_new.add_subplot(1,3,2, projection='polar')
ax_2.set_aspect(1) # aspect ratio 1 subplot
ax_2.set_rorigin(0.) # origin in the middle
ax_2.set_theta_zero_location('S') # phi=0 at the bottom
ax_2.set_ylim(-1.,1.) # r goes from 0 to 1
ax_2.grid(False) # no longitude/lattitude lines
ax_2.set_xticklabels([]) # turn off angle labels
ax_2.set_yticklabels([]) # turn off radial labels
ax_2.spines['polar'].set_visible(False) # no circular border
length = 0.8
ax_2.plot(0, 0, color='black', marker='o', markersize=5)
line_2, = ax_2.plot([phi_1[start], phi_1[start]], [0.,length],
color='blue', lw=3)
pt_2, = ax_2.plot(phi_1[start], length,
marker='o', markersize=15, color='red')
phi_string = rf'$\phi = {phi_1[start]:.1f}$'
phi_text = ax_2.text(np.pi, 1., phi_string, horizontalalignment='center')
ax_3 = fig_new.add_subplot(1,3,3)
line_3, = ax_3.plot(phi_1[start : stop], phi_dot_1[start : stop],
color='blue')
pt_3, = ax_3.plot(phi_1[start], phi_dot_1[start], 'o', color='red')
ax_3.set_xlabel(r'$\phi$')
ax_3.set_ylabel(r'$\dot\phi$')
fig_new.tight_layout()
#plt.rcParams["animation.embed_limit"] = 50.0 # max size of animation in MB
skip = 2 # skip between points in t_pts array
interval = 25 # time between frames in milliseconds
anim = animation.FuncAnimation(fig_new, animate_pendulum,
fargs=(t_pts[start:stop:skip],
phi_1[start:stop:skip],
phi_dot_1[start:stop:skip]),
init_func=None,
frames=len(t_pts[start:stop:skip]),
interval=interval,
blit=True, repeat=False,
save_count=0)
#HTML(anim.to_jshtml())
fig_new.show()
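# Hedged alternative (not run here): the animation could instead be written to a
#  movie file with something like the following, which requires ffmpeg:
# anim.save('pendulum_animation.mp4', fps=30, dpi=100)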
###Output
_____no_output_____ |
sklearn/CV_API.ipynb | ###Markdown
Scikit Learn API Experimentation StratifiedKFold, KFold, shuffle What does StratifiedKFold do that's different from KFold? What does shuffle=True do that's different than shuffle=False? Cross Validation Resources Good resources for understanding cross validation and overfitting in Python:* [Train/Test Split and Cross Validation](https://towardsdatascience.com/train-test-split-and-cross-validation-in-python-80b61beca4b6)* [Learning Curves](https://www.dataquest.io/blog/learning-curves-machine-learning/)Good resources for understanding cross validation and overfitting in general:* chapter 5.1 of [ISL](http://www-bcf.usc.edu/~gareth/ISL/)* The first 3 videos for Chapter 5 [ISL Videos](http://www.dataschool.io/15-hours-of-expert-machine-learning-videos/)
###Code
# Load Titanic Data
%cd -q ../projects/titanic
%run LoadTitanicData.py
%cd -q -
# X: features
# y: target variable
print('X Shape: ', X.shape)
print('y Shape: ', y.shape)
print('X columns:\n', X.columns.values)
print('y name:',y.name)
###Output
X Shape: (891, 11)
y Shape: (891,)
X columns:
['PassengerId' 'Pclass' 'Name' 'Sex' 'Age' 'SibSp' 'Parch' 'Ticket' 'Fare'
'Cabin' 'Embarked']
y name: Survived
###Markdown
Experiment: Each train/test split from crossvalidation.split() generates two numpy arrays of indexes. The first array picks out the records in the training set and the second array picks out the records in the test set.
###Code
from sklearn.model_selection import StratifiedKFold, KFold
k_folds = 10
random_seed = 5
crossvalidation = StratifiedKFold(n_splits=k_folds, shuffle=False)
# get train and test sets for crossvalidation
train_test_sets = [(train_idx, test_idx) for
train_idx, test_idx in crossvalidation.split(X,y)]
# in Python, looking at data types helps understanding
print(f'List Len: {len(train_test_sets)}')
print(f'1st Element Type: {type(train_test_sets[0])}')
print(f'1st Element Len: {len(train_test_sets[0])}')
print(f'1st Element 1st Tuple Type: {type(train_test_sets[0][0])}')
print(f'1st Element 1st Tuple Len: {len(train_test_sets[0][0])}')
print(f'1st Element 2nd Tuple Type: {type(train_test_sets[0][1])}')
print(f'1st Element 2nd Tuple Len: {len(train_test_sets[0][1])}')
print(f'Data Length: {len(X)}')
###Output
List Len: 10
1st Element Type: <class 'tuple'>
1st Element Len: 2
1st Element 1st Tuple Type: <class 'numpy.ndarray'>
1st Element 1st Tuple Len: 801
1st Element 2nd Tuple Type: <class 'numpy.ndarray'>
1st Element 2nd Tuple Len: 90
Data Length: 891
###Markdown
Describing the above in words:* The train_test_sets list is of length 10 (10 CV folds).* Each element in the list is a tuple which consists of 2 numpy arrays.* The first array in the tuple contains the indexes used to create the training data. It is of length 801.* The second array in the tuple contains the indexes used to create the test data. It is of length 90.* The total length of all data is 891 records.
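A quick sketch (assuming X is the pandas DataFrame and y the Series loaded above) of how one fold's index arrays would typically be used to slice out train and test data:
###Code
# Hedged sketch: use the first fold's index arrays to build train/test subsets
train_idx, test_idx = train_test_sets[0]
X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]
y_train, y_test = y.iloc[train_idx], y.iloc[test_idx]
print('Train shape:', X_train.shape, ' Test shape:', X_test.shape)
###Output
_____no_output_____
###Markdown
Next, check whether KFold with shuffle=False produces contiguous blocks of test records.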
###Code
# Experiment: KFold with shuffle=False
crossvalidation = KFold(n_splits=k_folds, shuffle=False)
train_test_sets = [(train_idx, test_idx) for
train_idx, test_idx in crossvalidation.split(X,y)]
# Check: for contiguous blocks of records in the test set
# if the records are contiguous, each index differs by 1
for i in range(10):
print((np.diff(train_test_sets[i][1]) == 1).all(), end=' ')
# print one fold of test set indexes
train_test_sets[0][1]
###Output
_____no_output_____
###Markdown
So KFold with shuffle=False means we are using test sets that represent blocks of contiguous records. A contiguous block of records for the test set means that the training set is as contiguous as possible.
###Code
# Experiment: KFold with shuffle=True
crossvalidation = KFold(n_splits=k_folds, shuffle=True,
random_state=random_seed)
train_test_sets = [(train_idx, test_idx) for
train_idx, test_idx in crossvalidation.split(X,y)]
# Check: for contiguous blocks of records in the test set
# if the records are contiguous, each index differs by 1
for i in range(10):
print((np.diff(train_test_sets[i][1]) == 1).all(), end=' ')
# print one fold of test set indexes
train_test_sets[0][1]
###Output
_____no_output_____
###Markdown
So shuffle=True caused non-consecutive indexes to be used for the test datasets. This implies that non-consecutive indexes are also used for the train datasets. In other words, we are no longer using blocks of records from the original dataset for our train and test sets.
###Code
# Experiment: KFold with shuffle=True
crossvalidation = KFold(n_splits=k_folds, shuffle=True,
random_state=random_seed)
train_test_sets = [(train_idx, test_idx) for
train_idx, test_idx in crossvalidation.split(X,y)]
# Check: for frequency of class labels
# Note: y only has values of 0 or 1, so y.mean() is the frequency of 1 values
print('y: ', np.round(y.mean(), 2))
# print frequency of survival in the 10 train and 10 test sets
for i in range(10):
for j in range(2):
print(np.round(y[train_test_sets[i][j]].mean(), 2), end=' ')
###Output
y: 0.38
0.38 0.4 0.39 0.36 0.39 0.37 0.39 0.33 0.38 0.45 0.38 0.39 0.38 0.39 0.38 0.4 0.39 0.36 0.38 0.38
###Markdown
So KFold did *not* keep the percentage of survivors the same in each dataset. Values as low as 33% and as high as 45% are seen.
###Code
# Experiment: StratifiedKFold with shuffle=True
crossvalidation = StratifiedKFold(n_splits=k_folds, shuffle=True,
random_state=random_seed)
train_test_sets = [(train_idx, test_idx) for
train_idx, test_idx in crossvalidation.split(X,y)]
# Check: for frequency of class labels
# Note: y only has values of 0 or 1, so y.mean() is the frequency of 1 values
print('y: ', np.round(y.mean(), 2))
# print frequency of survival in the 10 train and 10 test sets
for i in range(10):
for j in range(2):
print(np.round(y[train_test_sets[i][j]].mean(), 2), end=' ')
###Output
y: 0.38
0.38 0.39 0.38 0.38 0.38 0.38 0.38 0.38 0.38 0.38 0.38 0.38 0.38 0.38 0.38 0.38 0.38 0.38 0.38 0.39
###Markdown
So StratifiedKFold caused about the same percentage of survivors to occur in each training and test dataset. Summary of StratifiedKFold and KFold: For classification, you want each train/test subset to have (about) the same frequency of class values as is represented in the entire target array, so you normally **choose StratifiedKFold instead of KFold**. The original dataset may have an inherent ordering. This ordering could bias your train/test splits. To avoid this, you normally choose **shuffle=True**. **NOTE** shuffle=True does **not** cause the test sets to overlap. It is not like ShuffleSplit; a quick contrast is sketched below.
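For that contrast, here is a minimal sketch (assuming the X, y, k_folds and random_seed defined above; the test_size value is illustrative) showing that ShuffleSplit draws each test set independently, so its test sets can overlap across splits:
###Code
# Hedged sketch: ShuffleSplit test sets are drawn independently and may overlap,
# unlike the K-fold test sets checked in the next cell
from sklearn.model_selection import ShuffleSplit

ss = ShuffleSplit(n_splits=k_folds, test_size=0.1, random_state=random_seed)
ss_test_sets = [test_idx for _, test_idx in ss.split(X, y)]
overlaps = [len(np.intersect1d(ss_test_sets[i], ss_test_sets[j]))
            for i in range(k_folds) for j in range(i + 1, k_folds)]
print(overlaps)  # typically non-zero counts
###Output
_____no_output_____
###Markdown
The next two cells confirm that StratifiedKFold with shuffle=True behaves differently: its test sets are mutually disjoint, and each train set is disjoint from its own test set.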
###Code
# Show: test sets do not overlap when shuffle=True
crossvalidation = StratifiedKFold(n_splits=k_folds, shuffle=True,
random_state=random_seed)
train_test_sets = [(train_idx, test_idx) for
train_idx, test_idx in crossvalidation.split(X,y)]
# In this example, there are 10 disjoint test sets.
# This is equivalent to saying that the intersection between
# each pair of distinct test sets has a length of 0
# Intersection is commutative, so we only need to check half of the possible
# pairs of test sets and we don't check a test set with itself
for i in range(10):
for j in range(i+1, 10):
print(len(np.intersect1d(train_test_sets[i][1],train_test_sets[j][1])), end=' ')
###Output
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
###Markdown
We see that the test sets are disjoint. shuffle=True in this context does not cause test set overlap.
###Code
# Show: train set is disjoint from its respective test set
crossvalidation = StratifiedKFold(n_splits=k_folds, shuffle=True,
random_state=random_seed)
train_test_sets = [(train_idx, test_idx) for
train_idx, test_idx in crossvalidation.split(X,y)]
for i in range(10):
print(len(np.intersect1d(train_test_sets[i][0],train_test_sets[i][1])), end=' ')
###Output
0 0 0 0 0 0 0 0 0 0 |
module_5/ds_mod5_lecture4.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
dados = pd.read_excel('https://github.com/willianrocha/bootcamp-datascience-alura/blob/main/files/Kaggle_Sirio_Libanes_ICU_Prediction.xlsx?raw=true')
dados.head()
def preenche_tabela(dados):
features_continuas_colunas = dados.iloc[:, 13:-2].columns
features_continuas = dados.groupby("PATIENT_VISIT_IDENTIFIER", as_index=False)[features_continuas_colunas].fillna(method='bfill').fillna(method='ffill')
features_categoricas = dados.iloc[:, :13]
saida = dados.iloc[:, -2:]
dados_finais = pd.concat([features_categoricas, features_continuas, saida], ignore_index=True,axis=1)
dados_finais.columns = dados.columns
return dados_finais
def prepare_window(rows):
if(np.any(rows["ICU"])):
rows.loc[rows["WINDOW"]=="0-2", "ICU"] = 1
return rows.loc[rows["WINDOW"] == "0-2"]
dados_limpos = preenche_tabela(dados)
a_remover = dados_limpos.query("WINDOW=='0-2' and ICU==1")['PATIENT_VISIT_IDENTIFIER'].values
dados_limpos = dados_limpos.query("PATIENT_VISIT_IDENTIFIER not in @a_remover")
dados_limpos = dados_limpos.dropna()
# dados_limpos.describe()
dados_limpos = dados_limpos.groupby("PATIENT_VISIT_IDENTIFIER").apply(prepare_window)
dados_limpos.AGE_PERCENTIL = dados_limpos.AGE_PERCENTIL.astype("category").cat.codes
dados_limpos.head()
np.random.seed(73246)
x_columns = dados_limpos.columns
y = dados_limpos["ICU"]
x = dados_limpos[x_columns].drop(["ICU","WINDOW"], axis=1)
x_train, x_test, y_train, y_test = train_test_split(x, y, stratify=y)
modelo = DummyClassifier()
modelo.fit(x_train, y_train)
y_prediction = modelo.predict(x_test)
accuracy_score(y_test, y_prediction)
modelo = LogisticRegression(max_iter=10000)
modelo.fit(x_train, y_train)
y_prediction = modelo.predict(x_test)
accuracy_score(y_test, y_prediction)
modelo_arvore = DecisionTreeClassifier()
modelo_arvore.fit(x_train, y_train)
predicao_arvore = modelo_arvore.predict(x_test)
accuracy_score(y_test, predicao_arvore)
prob_arvore = modelo_arvore.predict_proba(x_test)
auc = roc_auc_score(y_test, prob_arvore[:,1])
print(classification_report(y_test, predicao_arvore))
auc
def roda_n_modelo(modelo, dados, n):
# np.random.seed(73246)
x_columns = dados.columns
y = dados['ICU']
x = dados[x_columns].drop(['ICU', 'WINDOW'], axis=1)
auc_lista = []
for _ in range(n):
x_train, x_test, y_train, y_test = train_test_split(x, y, stratify=y)
modelo.fit(x_train, y_train)
# predicao = modelo.predict(x_test)
prob_predict = modelo.predict_proba(x_test)
auc = roc_auc_score(y_test, prob_predict[:,1])
auc_lista.append(auc)
auc_medio = np.mean(auc_lista)
auc_std = np.std(auc_lista)
print(f'AUC: {auc_medio}')
print(f'AUC STD: {auc_std}')
print(f'Intervalo: {auc_medio + 2*auc_std} - {auc_medio - 2*auc_std}')
# print('\nClassification Report')
# print(classification_report(y_test, predicao))
roda_n_modelo(modelo_arvore, dados_limpos, 50)
from sklearn.model_selection import cross_validate
from sklearn.model_selection import StratifiedKFold
cv = StratifiedKFold(n_splits=5, shuffle=True)
cross_validate(modelo, x, y, cv=cv)
from sklearn.model_selection import RepeatedStratifiedKFold #StratifiedKFold
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10)
cross_validate(modelo, x, y, cv=cv)
def roda_modelo_cv(modelo, dados, n_splits, n_repeat):
np.random.seed(73246)
dados = dados.sample(frac=1).reset_index(drop=True)
x_columns = dados.columns
y = dados['ICU']
x = dados[x_columns].drop(['ICU', 'WINDOW'], axis=1)
cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeat)
resultados = cross_validate(modelo, x, y, cv=cv, scoring='roc_auc')
auc_medio = np.mean(resultados['test_score'])
auc_std = np.std(resultados['test_score'])
print(f'AUC: {auc_medio}')
print(f'AUC STD: {auc_std}')
print(f'Intervalo: {auc_medio + 2*auc_std} - {auc_medio - 2*auc_std}')
roda_modelo_cv(modelo, dados_limpos, 5, 10)
###Output
AUC: 0.7598762877710246
AUC STD: 0.052242437531309194
Intervalo: 0.864361162833643 - 0.6553914127084062
###Markdown
Challenge 08: Test other splitter classes and observe the differences.
###Code
from sklearn.model_selection import StratifiedShuffleSplit
def roda_modelo_cv(modelo, dados, n_splits, n_repeat):
np.random.seed(73246)
dados = dados.sample(frac=1).reset_index(drop=True)
x_columns = dados.columns
y = dados['ICU']
x = dados[x_columns].drop(['ICU', 'WINDOW'], axis=1)
cv = StratifiedShuffleSplit(n_splits=n_splits, test_size=0.5, random_state=0)
resultados = cross_validate(modelo, x, y, cv=cv, scoring='roc_auc')
print(resultados)
auc_medio = np.mean(resultados['test_score'])
auc_std = np.std(resultados['test_score'])
print(f'AUC: {auc_medio}')
print(f'AUC STD: {auc_std}')
print(f'Intervalo: {auc_medio + 2*auc_std} - {auc_medio - 2*auc_std}')
roda_modelo_cv(modelo, dados_limpos, 50, 10)
# vc_sss = (n_splits=5, test_size=0.5, random_state=0)
###Output
{'fit_time': array([0.4400425 , 0.49129319, 0.46734667, 0.52816415, 0.20942664,
0.39971232, 0.17572188, 0.19282627, 0.38223052, 0.44871068,
0.46178412, 0.10449028, 0.38825321, 0.42320776, 0.16983891,
0.17552757, 0.43969274, 0.15288806, 0.40396142, 0.41381073,
0.17393422, 0.43180776, 0.37222171, 0.44072151, 0.14946365,
0.41182065, 0.37824798, 0.19410563, 0.1457119 , 0.46420622,
0.1833117 , 0.48322821, 0.15869236, 0.43909144, 0.1680932 ,
0.4308939 , 0.35957551, 0.17447543, 0.21677923, 0.43977046,
0.45221162, 0.36344552, 0.52062368, 0.44790149, 0.41742492,
0.17615771, 0.44234443, 0.15960169, 0.15685368, 0.11736894]), 'score_time': array([0.00376391, 0.00383472, 0.00367761, 0.00363779, 0.00367737,
0.00381088, 0.00366569, 0.00372934, 0.00365996, 0.0040884 ,
0.00372672, 0.0037291 , 0.00374341, 0.00370193, 0.0037396 ,
0.003865 , 0.00369143, 0.00368977, 0.00370264, 0.00371385,
0.00368595, 0.00439286, 0.00389314, 0.00368643, 0.00371671,
0.00368214, 0.00371194, 0.00368786, 0.00371027, 0.00373983,
0.00384402, 0.00369334, 0.0036788 , 0.00379682, 0.00370431,
0.00366139, 0.00366545, 0.003757 , 0.00371838, 0.00375366,
0.00378275, 0.00373721, 0.00375175, 0.00370646, 0.00377893,
0.00369406, 0.00796723, 0.00377393, 0.00366068, 0.00364184]), 'test_score': array([0.78362573, 0.75 , 0.75311365, 0.77361923, 0.76920083,
0.70537104, 0.76427089, 0.69733593, 0.75272444, 0.75635703,
0.76231319, 0.72696556, 0.76309292, 0.71812865, 0.71159834,
0.74052932, 0.72586923, 0.7672548 , 0.78580695, 0.72358674,
0.74801819, 0.73398311, 0.77192982, 0.73879142, 0.73021442,
0.77218474, 0.75869227, 0.75282651, 0.76997924, 0.75880442,
0.75531915, 0.75061728, 0.75607537, 0.76608718, 0.74701609,
0.69434698, 0.72029061, 0.73320338, 0.79190451, 0.75285418,
0.75984405, 0.78256357, 0.77790773, 0.73866147, 0.74688635,
0.77841204, 0.77465887, 0.72911261, 0.7907369 , 0.74312403])}
AUC: 0.7511162199353734
AUC STD: 0.023502367239373503
Intervalo: 0.7981209544141203 - 0.7041114854566264
###Markdown
The AUC standard deviation was smaller, giving a tighter (better) confidence interval. One more splitter is sketched below to continue the challenge.
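As a further variation, a minimal sketch (assuming the modelo, x and y defined earlier in this notebook; the n_splits and random_state values are illustrative) that evaluates one more splitter class, plain KFold without stratification, using the same cross_validate call as above:
###Code
# Hedged sketch: one more splitter class for the challenge
# (plain KFold, i.e. no stratification of the ICU label)
from sklearn.model_selection import KFold

cv_kf = KFold(n_splits=5, shuffle=True, random_state=0)
resultados_kf = cross_validate(modelo, x, y, cv=cv_kf, scoring='roc_auc')
print(f"AUC: {np.mean(resultados_kf['test_score'])}")
print(f"AUC STD: {np.std(resultados_kf['test_score'])}")
###Output
_____no_output_____
###Markdown
Comparing its AUC spread against the stratified splitters above is the point of the exercise.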
###Code
###Output
_____no_output_____ |
Trading_Strategies/Momentum/momentum.ipynb | ###Markdown
Load the data
###Code
#%% original data
# imports (not present in the captured notebook; assumed to be needed by the code below)
import operator
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

N = 6
data = pd.read_csv('/Users/jianboxue/Documents/Research_Projects/Momentum/index_shanghai.csv', index_col='date', parse_dates=['date'])
#features owned by the day for predicting(include open)
data['month'] = data.index.month
data['week'] = data.index.week
data['weekofyear'] = data.index.weekofyear
data['day'] = data.index.day
data['dayofweek'] = data.index.dayofweek
data['dayofyear'] = data.index.dayofyear
donchian_channel_max = np.array([max(data['high'][max(i,20)-20:max(i,20)]) for i in range(len(data))])  # highest price over the previous 20 days
donchian_channel_min = np.array([min(data['low'][max(i,20)-20:max(i,20)]) for i in range(len(data))])
data['dcmaxod'] = (data['open']-donchian_channel_max)/donchian_channel_max
data['dcminod'] = (data['open']-donchian_channel_min)/donchian_channel_min
num_all = data.shape[1]
#features owned only by previous data(include close,high,low,vol)
data['price_change'] = (data['close']-data['open']) /data['open']
data['vol_change'] = 0
data['vol_change'][1:] = (data['vol'][1:].values-data['vol'][:-1].values) /data['vol'][:-1].values
data['ibs'] = (data['close']-data['low']) /(data['high']-data['low'])
data['dcmaxcd'] = (data['close']-donchian_channel_max)/donchian_channel_max
data['dcmincd'] = (data['close']-donchian_channel_min)/donchian_channel_min
#data['macd'] = MACD(data).macd
#data['macdsignal'] = MACD(data).macdsignal
#data['macdhist'] = MACD(data).macdhist
# Williams %R is a momentum indicator; the default setting is 14 periods,
# which can be days, weeks, months or an intraday timeframe.
highest_high_14 = np.array([max(data['high'][max(i,14)-14:max(i,14)]) for i in range(len(data))])
lowest_low_14 = np.array([min(data['low'][max(i,14)-14:max(i,14)]) for i in range(len(data))])
data['%R'] = (highest_high_14 - data.close.values) / (highest_high_14 - lowest_low_14)
###Output
/Applications/anaconda/lib/python3.4/site-packages/ipykernel/__main__.py:24: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
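###Markdown
The warning above comes from the chained assignment data['vol_change'][1:] = ... in the cell. One equivalent, warning-free way to build the same column (a sketch, assuming the same data frame) is pandas' pct_change:
###Code
# Hedged sketch: same vol_change values without chained indexing,
# which is what triggers the SettingWithCopyWarning above
data['vol_change'] = data['vol'].pct_change().fillna(0)
###Output
_____no_output_____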
###Markdown
Y is the target series to predict
###Code
y = [1 if data['close'].iloc[i] > data['open'].iloc[i] else 0 for i in range(len(data))]
y = y[N-1:]
n_windows = data.shape[0]-N+1
windows = range(n_windows)
#%% features of open,high,low,close,vol
d = np.array(data.iloc[:, :5])
d = np.array([d[w:w+N].ravel() for w in windows])
# generated features for all days that can be used in training
d_na = np.array(data.iloc[:, 5:num_all])
d_na = np.array([d_na[w:w+N].ravel() for w in windows])
d_n = np.array(data.iloc[:, num_all:])
d_n = np.array([d_n[w:w+N-1].ravel() for w in windows])
nday = 1500
d = d[len(data)- nday:]
d_na = d_na[len(data)- nday:]
d_n = d_n[len(data)- nday:]
y = np.array(y[len(data)- nday:])
#%%
def normalizeNday(stocks,N):
def process_column(i):
#Replaces all high/low/vol data with 0, and divides all stock data by the opening price on the first day
if operator.mod(i, 5) == 1:
return stocks[i] * 0
if operator.mod(i, 5) == 2:
return stocks[i] * 0
if operator.mod(i, 5) == 4:
return stocks[i] * 0
#return np.log(stocks[:,i] + 1)
else:
return stocks[i] / stocks[0]
#n = stocks.shape[0]
stocks_dat = np.array([ process_column(i) for i in range(N*5-4)]).transpose()
#stocks_movingavgO9O10 = np.array([int(i > j) for i,j in zip(stocks_dat[:,45], stocks_dat[:,40])]).reshape((n, 1))
#stocks_movingavgC9O10 = np.array([int(i > j) for i,j in zip(stocks_dat[:,45], stocks_dat[:,43])]).reshape((n, 1))
#return np.hstack((stocks_dat, stocks_movingavgO9O10, stocks_movingavgC9O10))
return stocks_dat
#%%
d_normalized = pd.DataFrame(np.hstack((np.array([normalizeNday(w,N) for w in d]),d_n,d_na)))
#remove constants
nunique = pd.Series([len(d_normalized[col].unique()) for col in d_normalized.columns], index = d_normalized.columns)
constants = nunique[nunique<2].index.tolist()
for col in constants:
del d_normalized[col]
d_normalized = np.array(d_normalized)
train = d_normalized[:int(len(d)*2/3.)]
train_y = y[:int(len(d)*2/3.)]
test = d_normalized[int(len(d)*2/3.):]
test_y = y[int(len(d)*2/3.):]
plt.scatter(d[:, (N-1)*5] / d[:, (N-1)*5-2], d[:, (N-1)*5+3] / d[:, (N-1)*5])
plt.xlim((.8,1.2)); plt.ylim((.8,1.2))
plt.xlabel("Opening N / Closing N-1"); plt.ylabel("Closing N / Opening N")
plt.title("Correlation between interday and intraday stock movement")
plt.show()
d = np.array(data.iloc[:, :5])
d = np.array([d[w:w+N].ravel() for w in windows])
###Output
_____no_output_____ |