path | concatenated_notebook
---|---
notebooks/03_Assignment_Solved.ipynb | ###Markdown
Module 03 - Assignment

***
Environment: `conda activate sklearn-env`
***

Goals

- [Load the data sets from the UCI website](#Dataset-load-from-CSV-located-on-UCI-website)
- [Print statistics about the data](#Basic-statistical-properties)
- [Display total count of missing values](#Display-total-count-of-missing-values)
- [Use `IterativeImputer` to compute missing values](#Use-IterativeImputer-to-compute-missing-values)
- [Use `OneHotEncoder` to encode `Cylinders` and `Origin` fields](#Use-OneHotEncoder-to-encode-Cylinders-and-Origin-fields)
- [Rescale `Displacement`, `Horsepower`, `Weight`, `Acceleration` fields using the `RobustScaler` estimator](#Rescale-Displacement,-Horsepower,-Weight,-Acceleration-fields-using-RobustScaler)
- [Bucketize the `Model Year` field into 4 different bins to reduce the number of distinct values used in it](#Bucketize-Model-year-field-in-4-different-bins-to-reduce-the-number-of-distinct-values-used-in-it)
- [Run the `LinearRegression` estimator over the transformed data and print predicted values along with label values](#Run-LinearRegression-estimator-over-the-transformed-data-and-print-predicted-values-along-with-label-values)
- [Optional](#Optional)
  - [Apply the same transformations (`imp_mean`, `encoder`, `scaler`, `bucketer` and `reg`) on the test dataset](#Apply-the-same-transformations-(imp_mean,-encoder,-scaler,-bucketer-and-reg-)-on-test-datasets)
    - [Apply imputer (`imp_mean` object)](#Apply-imputer-(imp_mean-object))
    - [Apply category encoder (`encoder` object)](#Apply-category-encoder-(encoder-object))
    - [Apply scaler (`scaler` object)](#Apply-scaller-(scaler-object))
    - [Apply binning (`bucketer` object)](#Apply-binning-(bucketer-object))
    - [Run linear regression and compute the model $R^2$ score (`reg` object)](#Run-logistic-regression-and-compute-model-$R^2$-score-(reg-object))

Basic Python imports for the pandas (dataframe) and seaborn (visualization) packages
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from IPython.display import display
###Output
_____no_output_____
###Markdown
Dataset load from CSV located on the UCI website: http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data

If the URL does not work, the dataset can be loaded from the data folder: `./data/auto-mpg.data`.
###Code
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(url, names=column_names,
na_values='?', comment='\t',
sep=' ', skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail(2)
###Output
_____no_output_____
###Markdown
Dataset meta information
###Code
dataset.info()
###Output
_____no_output_____
###Markdown
Basic statistical properties
###Code
dataset.describe().transpose()
###Output
_____no_output_____
###Markdown
Display total count of missing values

Notice the missing values in one of the fields.
###Code
dataset.isna().sum()
###Output
_____no_output_____
###Markdown
Data preparation

Split the data into `training` and `test` datasets.
###Code
from sklearn.model_selection import train_test_split
train_dataset, test_dataset = train_test_split(dataset, test_size=0.2)
train_dataset.reset_index(drop=True,inplace=True)
test_dataset.reset_index(drop=True,inplace=True)
train_features = train_dataset.drop('MPG', axis='columns', inplace=False)
test_features = test_dataset.drop('MPG', axis='columns', inplace=False)
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
###Output
_____no_output_____
###Markdown
Use `IterativeImputer` to compute missing values

https://scikit-learn.org/stable/modules/generated/sklearn.impute.IterativeImputer.html

This imputer estimates the replacement for missing values based on the other fields. For this reason we pass all of the columns to the `fit` and `transform` calls, not only the ones that have missing elements.
###Code
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
imp_mean = IterativeImputer(random_state=0 , skip_complete=True)
imp_mean.fit(train_features)
train_features[['Cylinders',
'Displacement',
'Horsepower',
'Weight',
'Acceleration',
'Model Year',
'Origin' ]] = imp_mean.transform(train_features)
train_features.head()
###Output
_____no_output_____
###Markdown
Use `OneHotEncoder` to encode `Cylinders` and `Origin` fields

https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html
###Code
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder(sparse=False, handle_unknown='ignore').fit(train_features[['Cylinders', 'Origin']])
display("OHE categories: for Cylinders and Origin columns " + str(encoder.categories_))
train_features[['Cylinders_3',
'Cylinders_4',
'Cylinders_5',
'Cylinders_6',
'Cylinders_8',
'Origin_USA',
'Origin_Europe',
'Origin_Japan']] = encoder.transform(train_features[['Cylinders', 'Origin']])
train_features.drop(['Cylinders', 'Origin'], axis=1, inplace=True)
train_features.head()
###Output
_____no_output_____
###Markdown
Rescale `Displacement`, `Horsepower`, `Weight`, `Acceleration` fields using `RobustScaler`

https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html
###Code
from sklearn.preprocessing import RobustScaler
scaler = RobustScaler().fit(train_features[['Displacement', 'Horsepower', 'Weight', 'Acceleration']])
train_features[['Displacement', 'Horsepower', 'Weight', 'Acceleration']] = scaler.transform(train_features[['Displacement', 'Horsepower', 'Weight', 'Acceleration']])
train_features.head()
###Output
_____no_output_____
###Markdown
Bucketize the `Model Year` field into 4 different bins to reduce the number of distinct values used in it

https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.KBinsDiscretizer.html
###Code
from sklearn.preprocessing import KBinsDiscretizer
bucketer = KBinsDiscretizer(n_bins=4, encode='ordinal', strategy='uniform').fit(train_features[['Model Year']])
train_features[['Model Year']] = bucketer.transform(train_features[['Model Year']])
train_features.head()
###Output
_____no_output_____
###Markdown
Run `LinearRegression` estimator over the transformed data and print predicted values along with label values

https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
###Code
from sklearn.linear_model import LinearRegression
reg = LinearRegression().fit(train_features, train_labels)
train_features['Predicted_MPG'] = reg.predict(train_features)
pd.concat([train_features, train_labels], axis=1).head()
###Output
_____no_output_____
###Markdown
Optional

Apply the same transformations (`imp_mean`, `encoder`, `scaler`, `bucketer` and `reg`) on the test dataset.

Note: do not retrain these estimators on this held-out data (do not call the `fit` method).

Apply imputer (`imp_mean` object)
###Code
test_features[['Cylinders',
'Displacement',
'Horsepower',
'Weight',
'Acceleration',
'Model Year',
'Origin' ]] = imp_mean.transform(test_features)
test_features.head()
###Output
_____no_output_____
###Markdown
Apply category encoder (`encoder` object)
###Code
test_features[['Cylinders_3',
'Cylinders_4',
'Cylinders_5',
'Cylinders_6',
'Cylinders_8',
'Origin_USA',
'Origin_Europe',
'Origin_Japan']] = encoder.transform(test_features[['Cylinders', 'Origin']])
test_features.drop(['Cylinders', 'Origin'], axis=1, inplace=True)
test_features.head()
###Output
_____no_output_____
###Markdown
Apply scaler (`scaler` object)
###Code
test_features[['Displacement', 'Horsepower', 'Weight', 'Acceleration']] = scaler.transform(test_features[['Displacement', 'Horsepower', 'Weight', 'Acceleration']])
test_features.head()
###Output
_____no_output_____
###Markdown
Apply binning (`bucketer` object)
###Code
test_features[['Model Year']] = bucketer.transform(test_features[['Model Year']])
test_features.head()
###Output
_____no_output_____
###Markdown
Run `LinearRegression` estimator on test data
###Code
test_features['Predicted_MPG'] = reg.predict(test_features)
###Output
_____no_output_____
###Markdown
Print a random sample of 10 records to observe prediction accuracy
###Code
pd.concat([test_features, test_labels], axis=1).sample(10)
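# Optional follow-up sketch (not part of the recorded output above): compute the
# model's R^2 score on the test set. 'Predicted_MPG' must be dropped first so the
# columns match what 'reg' was fitted on.
r2_test = reg.score(test_features.drop('Predicted_MPG', axis='columns'), test_labels)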
###Output
_____no_output_____ |
trax/examples/NER_using_Reformer.ipynb | ###Markdown
###Code
#@title
# Copyright 2020 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Author - [@SauravMaheshkar](https://github.com/SauravMaheshkar)

Install Dependencies

Install the latest version of the [Trax](https://github.com/google/trax) library.
###Code
!pip install -q -U trax
###Output
ERROR: After October 2020 you may experience errors when installing or updating packages. This is because pip will change the way that it resolves dependency conflicts.
We recommend you use --use-feature=2020-resolver to test your packages with the new resolver before it becomes the default.
pytorch-lightning 0.9.0 requires tensorboard==2.2.0, but you'll have tensorboard 2.3.0 which is incompatible.
kfac 0.2.3 requires tensorflow-probability==0.8, but you'll have tensorflow-probability 0.7.0 which is incompatible.
WARNING: You are using pip version 20.2.3; however, version 20.2.4 is available.
You should consider upgrading via the '/opt/conda/bin/python3.7 -m pip install --upgrade pip' command.
###Markdown
Introduction

---

**Named-entity recognition** (NER) is a subtask of *information extraction* that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.

To evaluate the quality of a NER system's output, several measures have been defined. The usual measures are **precision**, **recall**, and **F1 score**. However, several issues remain in just how to calculate those values. State-of-the-art NER systems for English produce near-human performance. For example, the best system entering MUC-7 scored an F-measure of 93.39%, while human annotators scored 97.60% and 96.95%.

Importing Packages
###Code
import trax # Our Main Library
from trax import layers as tl
import os # For os dependent functionalities
import numpy as np # For scientific computing
import pandas as pd # For basic data analysis
import random as rnd # For using random functions
###Output
_____no_output_____
###Markdown
Pre-Processing

Loading the Dataset

Let's load the `ner_dataset.csv` file into a dataframe and see what it looks like.
###Code
data = pd.read_csv("/kaggle/input/entity-annotated-corpus/ner_dataset.csv",encoding = 'ISO-8859-1')
data = data.fillna(method = 'ffill')
data.head()
###Output
_____no_output_____
###Markdown
Creating a Vocabulary File

We can see there's a column for the words in each sentence. Thus, we can extract this column using [`.loc()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html) and store it into a `.txt` file using the [`.savetxt()`](https://numpy.org/doc/stable/reference/generated/numpy.savetxt.html) function from numpy.
###Code
## Extract the 'Word' column from the dataframe
words = data.loc[:, "Word"]
## Convert into a text file using the .savetxt() function
np.savetxt(r'words.txt', words.values, fmt="%s")
###Output
_____no_output_____
###Markdown
Creating a Dictionary for Vocabulary

Here, we create a dictionary for our vocabulary by reading through all the sentences in the dataset.
###Code
vocab = {}
with open('words.txt') as f:
for i, l in enumerate(f.read().splitlines()):
vocab[l] = i
print("Number of words:", len(vocab))
vocab['<PAD>'] = len(vocab)
###Output
Number of words: 35178
###Markdown
Extracting Sentences from the Dataset

Here we extract the sentences from the dataset and create (X, y) pairs for training.
###Code
class Get_sentence(object):
    """Group the flat token table into per-sentence lists of (word, POS, tag) tuples."""
    def __init__(self, data):
        self.n_sent = 1
        self.data = data
        agg_func = lambda s: [(w, p, t) for w, p, t in zip(s["Word"].values.tolist(),
                                                           s["POS"].values.tolist(),
                                                           s["Tag"].values.tolist())]
        self.grouped = self.data.groupby("Sentence #").apply(agg_func)
        self.sentences = [s for s in self.grouped]

getter = Get_sentence(data)
sentence = getter.sentences

# Build index mappings for words and tags
words = list(set(data["Word"].values))
words_tag = list(set(data["Tag"].values))
word_idx = {w: i + 1 for i, w in enumerate(words)}
tag_idx = {t: i for i, t in enumerate(words_tag)}

# Encode each sentence as a sequence of word indices (X) and tag indices (y)
X = [[word_idx[w[0]] for w in s] for s in sentence]
y = [[tag_idx[w[2]] for w in s] for s in sentence]
###Output
_____no_output_____
###Markdown
Making a Batch Generator

Here, we create a batch generator for training.
###Code
def data_generator(batch_size, x, y, pad, shuffle=False, verbose=False):
    """Yield batches of (X, Y) arrays, padded to the longest sentence in each batch."""
    num_lines = len(x)
    lines_index = [*range(num_lines)]
    if shuffle:
        rnd.shuffle(lines_index)
    index = 0
    while True:
        buffer_x = [0] * batch_size
        buffer_y = [0] * batch_size
        max_len = 0
        # collect the next batch_size sentences and track the longest one
        for i in range(batch_size):
            if index >= num_lines:
                # wrap around once the dataset is exhausted
                index = 0
                if shuffle:
                    rnd.shuffle(lines_index)
            buffer_x[i] = x[lines_index[index]]
            buffer_y[i] = y[lines_index[index]]
            lenx = len(x[lines_index[index]])
            if lenx > max_len:
                max_len = lenx
            index += 1
        # pad every sentence in the batch to max_len with the <PAD> index
        X = np.full((batch_size, max_len), pad)
        Y = np.full((batch_size, max_len), pad)
        for i in range(batch_size):
            x_i = buffer_x[i]
            y_i = buffer_y[i]
            for j in range(len(x_i)):
                X[i, j] = x_i[j]
                Y[i, j] = y_i[j]
        if verbose: print("index=", index)
        yield (X, Y)
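
# Quick sanity check (illustrative, non-printing): draw one padded batch of two
# sentences from the generator defined above; X and Y batches have matching shapes.
_batch_x, _batch_y = next(data_generator(2, X, y, vocab['<PAD>']))
assert _batch_x.shape == _batch_y.shape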
###Output
_____no_output_____
###Markdown
Splitting into Test and Train
###Code
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(X,y,test_size = 0.1,random_state=1)
###Output
_____no_output_____
###Markdown
Building the Model

The Reformer Model

In this notebook, we use the Reformer, a more efficient variant of the Transformer that uses reversible layers and locality-sensitive hashing. You can read the original paper [here](https://arxiv.org/abs/2001.04451).

Locality-Sensitive Hashing

---

The biggest problem that one might encounter while using Transformers on huge corpora is the handling of the attention layer. Reformer introduces locality-sensitive hashing to solve this problem, by computing a hash function that groups similar vectors together. An input sequence is rearranged to bring elements with the same hash together and then divided into segments (or *chunks*, *buckets*) to enable parallel processing. Attention can then be applied to these chunks (rather than the whole input sequence) to reduce the computational load.

Reversible Layers

---

Using locality-sensitive hashing, we were able to solve the problem of computation, but we still have a memory issue. Reformer implements a novel approach to solve this problem by recomputing the input of each layer on-demand during back-propagation, rather than storing it in memory. This is accomplished by using reversible layers (*activations from the last layer are used to recover activations from any intermediate layer*). Reversible layers store two sets of activations for each layer:

- One follows the standard procedure in which the activations are added as they pass through the network.
- The other set only captures the changes.

Thus, if we run the network in reverse, we simply subtract the activations applied at each layer.

Model Architecture

We will perform the following steps:

* Use input tensors from our data generator
* Produce semantic entries from an Embedding layer
* Feed these into our Reformer language model
* Run the output through a Linear layer
* Run these through a log softmax layer to get predicted classes

We use the:

1. [`tl.Serial()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.combinators.Serial): Combinator that applies layers serially (by function composition). It's commonly used to construct deep networks. It uses stack semantics to manage data for its sublayers.
2. [`tl.Embedding()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Embedding): Initializes a trainable embedding layer that maps discrete tokens/ids to vectors.
3. [`trax.models.reformer.Reformer()`](https://trax-ml.readthedocs.io/en/latest/trax.models.html#trax.models.reformer.reformer.Reformer): Creates a reversible Transformer encoder-decoder model.
4. [`tl.Dense()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.Dense): Creates a Dense (*fully-connected, affine*) layer.
5. [`tl.LogSoftmax()`](https://trax-ml.readthedocs.io/en/latest/trax.layers.html#trax.layers.core.LogSoftmax): Creates a layer that applies log softmax along one tensor axis.
###Code
def NERmodel(tags, vocab_size=35181, d_model = 50):
model = tl.Serial(
# tl.Embedding(vocab_size, d_model),
trax.models.reformer.Reformer(vocab_size, d_model, ff_activation=tl.LogSoftmax),
tl.Dense(tags),
tl.LogSoftmax()
)
return model
model = NERmodel(tags = 17)
print(model)
###Output
Serial_in2_out2[
Serial_in2_out2[
Select[0,1,1]_in2_out3
Branch_out2[
[]
[PaddingMask(0), Squeeze]
]
Serial_in2_out2[
Embedding_35181_512
Dropout
PositionalEncoding
Dup_out2
ReversibleSerial_in3_out3[
ReversibleHalfResidual_in3_out3[
Serial[
LayerNorm
]
SelfAttention_in2
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in3_out3[
Serial[
LayerNorm
]
SelfAttention_in2
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in3_out3[
Serial[
LayerNorm
]
SelfAttention_in2
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in3_out3[
Serial[
LayerNorm
]
SelfAttention_in2
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in3_out3[
Serial[
LayerNorm
]
SelfAttention_in2
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in3_out3[
Serial[
LayerNorm
]
SelfAttention_in2
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
]
XYAvg_in2
LayerNorm
]
Select[2,0,1]_in3_out3
ShiftRight(1)
Embedding_50_512
Dropout
PositionalEncoding
Dup_out2
ReversibleSerial_in4_out4[
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
]
SelfAttention
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in4_out4[
Serial[
LayerNorm
]
EncDecAttention_in3
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
]
SelfAttention
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in4_out4[
Serial[
LayerNorm
]
EncDecAttention_in3
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
]
SelfAttention
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in4_out4[
Serial[
LayerNorm
]
EncDecAttention_in3
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
]
SelfAttention
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in4_out4[
Serial[
LayerNorm
]
EncDecAttention_in3
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
]
SelfAttention
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in4_out4[
Serial[
LayerNorm
]
EncDecAttention_in3
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
]
SelfAttention
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in4_out4[
Serial[
LayerNorm
]
EncDecAttention_in3
]
ReversibleSwap_in2_out2
ReversibleHalfResidual_in2_out2[
Serial[
LayerNorm
Dense_2048
Dropout
LogSoftmax
Dense_512
Dropout
]
]
ReversibleSwap_in2_out2
]
XYAvg_in2
LayerNorm
Select[0]_in3
Dense_50
LogSoftmax
]
Dense_17
LogSoftmax
]
###Markdown
Train the Model
###Code
from trax.supervised import training
rnd.seed(33)
batch_size = 64
train_generator = trax.data.inputs.add_loss_weights(
data_generator(batch_size, x_train, y_train,vocab['<PAD>'], True),
id_to_mask=vocab['<PAD>'])
eval_generator = trax.data.inputs.add_loss_weights(
data_generator(batch_size, x_test, y_test,vocab['<PAD>'] ,True),
id_to_mask=vocab['<PAD>'])
def train_model(model, train_generator, eval_generator, train_steps=1, output_dir='model'):
train_task = training.TrainTask(
train_generator,
loss_layer = tl.CrossEntropyLoss(),
optimizer = trax.optimizers.Adam(0.01),
n_steps_per_checkpoint=10
)
eval_task = training.EvalTask(
labeled_data = eval_generator,
metrics = [tl.CrossEntropyLoss(), tl.Accuracy()],
n_eval_batches = 10
)
training_loop = training.Loop(
model,
train_task,
eval_tasks = eval_task,
output_dir = output_dir)
training_loop.run(n_steps = train_steps)
return training_loop
train_steps = 100
training_loop = train_model(model, train_generator, eval_generator, train_steps)
###Output
Step 1: Ran 1 train steps in 815.40 secs
Step 1: train CrossEntropyLoss | 2.97494578
Step 1: eval CrossEntropyLoss | 5.96823492
Step 1: eval Accuracy | 0.85458949
Step 10: Ran 9 train steps in 6809.59 secs
Step 10: train CrossEntropyLoss | 5.27117538
Step 10: eval CrossEntropyLoss | 5.19212604
Step 10: eval Accuracy | 0.85005882
Step 20: Ran 10 train steps in 5372.06 secs
Step 20: train CrossEntropyLoss | 6.68565750
Step 20: eval CrossEntropyLoss | 4.00950582
Step 20: eval Accuracy | 0.81635543
Step 30: Ran 10 train steps in 1040.84 secs
Step 30: train CrossEntropyLoss | 3.92878985
Step 30: eval CrossEntropyLoss | 3.32506871
Step 30: eval Accuracy | 0.78096363
Step 40: Ran 10 train steps in 3624.02 secs
Step 40: train CrossEntropyLoss | 3.41684675
Step 40: eval CrossEntropyLoss | 3.47973170
Step 40: eval Accuracy | 0.84054841
Step 50: Ran 10 train steps in 195.43 secs
Step 50: train CrossEntropyLoss | 2.64065409
Step 50: eval CrossEntropyLoss | 2.21273057
Step 50: eval Accuracy | 0.84472065
Step 60: Ran 10 train steps in 1060.08 secs
Step 60: train CrossEntropyLoss | 2.35068488
Step 60: eval CrossEntropyLoss | 2.66343498
Step 60: eval Accuracy | 0.84561690
Step 70: Ran 10 train steps in 1041.36 secs
Step 70: train CrossEntropyLoss | 2.30295134
Step 70: eval CrossEntropyLoss | 1.31594980
Step 70: eval Accuracy | 0.84971260
Step 80: Ran 10 train steps in 1178.78 secs
Step 80: train CrossEntropyLoss | 1.15712142
Step 80: eval CrossEntropyLoss | 1.15898243
Step 80: eval Accuracy | 0.84357584
Step 90: Ran 10 train steps in 2033.67 secs
Step 90: train CrossEntropyLoss | 1.06345284
Step 90: eval CrossEntropyLoss | 0.93652567
Step 90: eval Accuracy | 0.84781972
Step 100: Ran 10 train steps in 2001.96 secs
Step 100: train CrossEntropyLoss | 1.04488492
Step 100: eval CrossEntropyLoss | 1.02899926
Step 100: eval Accuracy | 0.85163420
|
tutorials/Image_and_Text_Classification_LIME.ipynb | ###Markdown
LIME to Inspect Image & Text Classification

This tutorial focuses on showing how to use Captum's implementation of Local Interpretable Model-agnostic Explanations (LIME) to understand neural models. The following content is divided into an image classification section, which presents our high-level interface, the `Lime` class, and a text classification section for the more customizable low-level interface, `LimeBase`.
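To make the core idea concrete before diving into Captum's API, the sketch below shows the LIME recipe in miniature: perturb the interpretable features, query the black-box model, weight each perturbed sample by its similarity to the original, and fit a simple linear surrogate whose coefficients serve as attributions. The callables `predict_fn` and `build_input` are hypothetical placeholders (not Captum functions), and the similarity kernel is a simplification.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_sketch(predict_fn, build_input, n_features, n_samples=100):
    # 1. sample binary "presence" vectors in the interpretable space
    masks = np.random.randint(0, 2, size=(n_samples, n_features))
    # 2. rebuild a model input from each mask and query the black-box model
    preds = np.array([predict_fn(build_input(m)) for m in masks])
    # 3. weight samples by closeness to the original (all-features-present) input
    weights = np.exp(-((n_features - masks.sum(axis=1)) ** 2) / 2.0)
    # 4. fit a weighted linear surrogate; its coefficients are the attributions
    surrogate = Ridge().fit(masks, preds, sample_weight=weights)
    return surrogate.coef_
```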
###Code
import torch
import torch.nn.functional as F
from captum.attr import visualization as viz
from captum.attr import Lime, LimeBase
from captum._utils.models.linear_model import SkLearnLinearRegression, SkLearnLasso
import os
import json
###Output
_____no_output_____
###Markdown
1. Image Classification

In this section, we will learn how to apply Lime to analyze a ResNet trained on ImageNet-1k. For test data, we use samples from PASCAL VOC 2012, since its segmentation masks can directly serve as semantic "super-pixels" for the images.
###Code
from torchvision.models import resnet18
from torchvision.datasets import VOCSegmentation
import torchvision.transforms as T
from captum.attr._core.lime import get_exp_kernel_similarity_function
from PIL import Image
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1.1 Load the model and dataset

We can directly load the pretrained ResNet from torchvision and set it to evaluation mode as our target image classifier to inspect.
###Code
resnet = resnet18(pretrained=True)
resnet = resnet.eval()
###Output
_____no_output_____
###Markdown
This model predicts ImageNet-1k labels for given sample images. To better present the results, we also load the mapping of label index and text.
###Code
!wget -P $HOME/.torch/models https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json
labels_path = os.getenv('HOME') + '/.torch/models/imagenet_class_index.json'
with open(labels_path) as json_data:
idx_to_labels = {idx: label for idx, [_, label] in json.load(json_data).items()}
###Output
_____no_output_____
###Markdown
As mentioned before, we will use PASCAL VOC 2012 as the test data, which is available in torchvision as well. We will load it with `torchvision` transforms which convert both the images and targets, i.e., segmentation masks, to tensors.
###Code
voc_ds = VOCSegmentation(
'./VOC',
year='2012',
image_set='train',
download=False,
transform=T.Compose([
T.ToTensor(),
T.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
)
]),
target_transform=T.Lambda(
lambda p: torch.tensor(p.getdata()).view(1, p.size[1], p.size[0])
)
)
###Output
_____no_output_____
###Markdown
This dataset provides an additional segmentation mask along with every image. Compared with inspecting each pixel, the segments (or "super-pixels") are semantically more intuitive for humans to perceive. We will discuss this more in section 1.3.

Let's pick one example to see what the image and corresponding mask look like. Here we choose an image with more than one segment besides the background so that we can compare each segment's impact on the classification.
###Code
sample_idx = 439
def show_image(ind):
fig, ax = plt.subplots(1, 2, figsize=[6.4 * 2, 4.8])
for i, (name, source) in enumerate(zip(['Image', 'Mask'], [voc_ds.images, voc_ds.masks])):
ax[i].imshow(Image.open(source[ind]));
ax[i].set_title(f"{name} {ind}")
ax[i].axis('off')
show_image(sample_idx)
###Output
_____no_output_____
###Markdown
1.2 Baseline classification

We can check how well our model works with the above example. The original ResNet only gives the logits of the labels, so we will add a softmax layer to normalize them into probabilities.
###Code
img, seg_mask = voc_ds[sample_idx]  # tensors of shape (channel, height, width)
outputs = resnet(img.unsqueeze(0))
output_probs = F.softmax(outputs, dim=1).squeeze(0)
###Output
_____no_output_____
###Markdown
Then we present the top 5 predicted labels to verify the result.
###Code
def print_result(probs, topk=1):
probs, label_indices = torch.topk(probs, topk)
probs = probs.tolist()
label_indices = label_indices.tolist()
for prob, idx in zip(probs, label_indices):
label = idx_to_labels[str(idx)]
print(f'{label} ({idx}):', round(prob, 4))
print_result(output_probs, topk=5)
###Output
television (851): 0.083
screen (782): 0.0741
monitor (664): 0.0619
laptop (620): 0.0421
ashcan (412): 0.03
###Markdown
As we can see, the result is pretty reasonable.

1.3 Inspect the model prediction with Lime

In this section, we will bring in LIME from Captum to analyze how the ResNet made the above prediction based on the sample image.

Like many other Captum algorithms, Lime also supports analyzing a number of input features together as a group. This is very useful when dealing with images, where each color channel in each pixel is an input feature. Such a group is also referred to as a "super-pixel". To define our desired groups over input features, all we need is to provide a feature mask.

In the case of an image input, the feature mask is a 2D image of the same size, where each pixel in the mask indicates the feature group it belongs to via an integer value. Pixels of the same value define a group.

This means we can readily use VOC's segmentation masks as feature masks for Captum! However, while segmentation numbers range from 0 to 255, Captum prefers consecutive group IDs for efficiency. Therefore, we will also include extra steps to convert the mask IDs.
###Code
seg_ids = sorted(seg_mask.unique().tolist())
print('Segmentation IDs:', seg_ids)
# map segment IDs to feature group IDs
feature_mask = seg_mask.clone()
for i, seg_id in enumerate(seg_ids):
feature_mask[feature_mask == seg_id] = i
print('Feature mask IDs:', feature_mask.unique().tolist())
###Output
Segmentation IDs: [0, 9, 20, 255]
Feature mask IDs: [0, 1, 2, 3]
###Markdown
It is time to configure our Lime algorithm now. Essentially, Lime trains an interpretable surrogate model to simulate the target model's predictions. So, building an appropriate interpretable model is the most critical step in Lime. Fortunately, Captum provides many of the most common interpretable models to save the effort. We will demonstrate the usage of Linear Regression and Linear Lasso.

Another important factor is the similarity function. Because Lime aims to explain the local behavior of an example, it will reweight the training samples according to their similarity distances. By default, Captum's Lime uses the exponential kernel on top of the cosine distance. We will change to euclidean distance instead, which is more popular in vision.
###Code
exp_eucl_distance = get_exp_kernel_similarity_function('euclidean', kernel_width=1000)
lr_lime = Lime(
resnet,
interpretable_model=SkLearnLinearRegression(), # build-in wrapped sklearn Linear Regression
similarity_func=exp_eucl_distance
)
###Output
_____no_output_____
###Markdown
Next, we will analyze these groups' influence on the most confident prediction, `television`.

Every time we call Lime's `attribute` function, an interpretable model is trained around the given input, so unlike many other Captum attribution algorithms, it is strongly recommended to only provide a single example as input (tensors with first dimension or batch size = 1). There are advanced use cases of passing batched inputs. Interested readers can check the [documentation](https://captum.ai/api/lime.html) for details.

In order to train the interpretable model, we need to specify enough training data through the argument `n_samples`. Lime creates the perturbed samples in the form of interpretable representations, i.e., binary vectors indicating the “presence” or “absence” of features. Lime needs to keep calling the target model to get the labels/values for all perturbed samples. This process can be quite time-consuming depending on the complexity of the target model and the number of samples. Setting `perturbations_per_eval` can batch multiple samples in one forward pass to shorten the process, as long as your machine still has capacity. You may also consider turning on the flag `show_progress` to display a progress bar showing how many forward calls are left.
###Code
label_idx = output_probs.argmax().unsqueeze(0)
attrs = lr_lime.attribute(
img.unsqueeze(0),
target=label_idx,
feature_mask=feature_mask.unsqueeze(0),
n_samples=40,
perturbations_per_eval=16,
show_progress=True
).squeeze(0)
print('Attribution range:', attrs.min().item(), 'to', attrs.max().item())
###Output
Lime attribution: 100%|██████████| 3/3 [00:08<00:00, 2.95s/it]
###Markdown
Now, let us use Captum's visualization tool to view the attribution heat map.
###Code
def show_attr(attr_map):
viz.visualize_image_attr(
attr_map.permute(1, 2, 0).numpy(), # adjust shape to height, width, channels
method='heat_map',
sign='all',
show_colorbar=True
)
show_attr(attrs)
###Output
_____no_output_____
###Markdown
The result looks decent: the television segment does demonstrate the strongest positive correlation with the prediction, while the chairs have relatively trivial impact and the border shows a slightly negative contribution.

However, we can further improve this result. One desired characteristic of interpretability is the ease for humans to comprehend. We should help reduce the noisy interference and emphasize the really influential features. In our case, all features more or less show some influence. Adding lasso regularization to the interpretable model can effectively help us filter them out. Therefore, let us try Linear Lasso with a fit coefficient `alpha`. For all built-in sklearn wrapper models, you can directly pass any sklearn-supported arguments.

Moreover, since our example only has 4 segments, there are just 16 possible combinations of interpretable representations in total. So we can exhaust them instead of random sampling. The `Lime` class's argument `perturb_func` allows us to pass a generator function yielding samples. We will create a generator function iterating over the combinations and set `n_samples` to its exact length.
###Code
n_interpret_features = len(seg_ids)
def iter_combinations(*args, **kwargs):
for i in range(2 ** n_interpret_features):
yield torch.tensor([int(d) for d in bin(i)[2:].zfill(n_interpret_features)]).unsqueeze(0)
lasso_lime = Lime(
resnet,
interpretable_model=SkLearnLasso(alpha=0.08),
similarity_func=exp_eucl_distance,
perturb_func=iter_combinations
)
attrs = lasso_lime.attribute(
img.unsqueeze(0),
target=label_idx,
feature_mask=feature_mask.unsqueeze(0),
n_samples=2 ** n_interpret_features,
perturbations_per_eval=16,
show_progress=True
).squeeze(0)
print('Attribution range:', attrs.min().item(), 'to', attrs.max().item())
show_attr(attrs)
###Output
Lime attribution: 100%|██████████| 1/1 [00:02<00:00, 2.51s/it]
###Markdown
As we can see, the new attribution result removes the chairs and the border with the help of Lasso.

Another interesting question to explore is whether the model also recognizes the chairs in the image. To answer it, we will use the most related label, `rocking_chair`, from ImageNet as the target, whose label index is `765`. We can check how confident the model is about the alternative object.
###Code
alter_label_idx = 765
alter_prob = output_probs[alter_label_idx].item()
print(f'{idx_to_labels[str(alter_label_idx)]} ({alter_label_idx}):', round(alter_prob, 4))
###Output
rocking_chair (765): 0.0048
###Markdown
Then, we will redo the attribution with our Lasso Lime.
###Code
attrs = lasso_lime.attribute(
img.unsqueeze(0),
target=alter_label_idx,
feature_mask=feature_mask.unsqueeze(0),
n_samples=2 ** n_interpret_features,
perturbations_per_eval=16,
show_progress=True,
return_input_shape=True,
).squeeze(0)
print('Attribution range:', attrs.min().item(), 'to', attrs.max().item())
show_attr(attrs)
###Output
Lime attribution: 100%|██████████| 1/1 [00:02<00:00, 2.14s/it]
###Markdown
As shown in the heat map, our ResNet does hold the right belief about the chair segment. However, it gets hindered by the television segment in the foreground. This may also explain why the model feels less confident about the chairs than about the television.

1.4 Understand the sampling process

We have already learned how to use Captum's Lime. This section will additionally dive into the internal sampling process to give interested readers an overview of what happens underneath. The goal of the sampling process is to collect a set of training data for the surrogate model. Every data point consists of three parts: interpretable input, model-predicted label, and similarity weight. We will roughly illustrate how Lime achieves each of them behind the scenes.

As we mentioned before, Lime samples data from the interpretable space. By default, Lime uses the presence or absence of the given mask groups as interpretable features. In our example, facing the above image of 4 segments, the interpretable representation is therefore a binary vector of 4 values indicating if each segment is present or absent. This is why we know there are only 16 possible interpretable representations and can exhaust them with our `iter_combinations`. Lime will keep calling its `perturb_func` to get the sampled interpretable inputs. Let us simulate this step and produce one such interpretable input.
###Code
SAMPLE_INDEX = 13
perturbed_generator = iter_combinations()
for _ in range(SAMPLE_INDEX + 1):
    sample_interp_inp = next(perturbed_generator)
print('Perturbed interpretable sample:', sample_interp_inp)
###Output
Perturbed interpretable sample: tensor([[1, 1, 0, 1]])
###Markdown
Our input sample `[1, 1, 0, 1]` means the third segment (the television) is absent while the other three segments stay. In order to find out what the target ImageNet model's prediction is for this sample, Lime needs to convert it from the interpretable space back to the original example space, i.e., the image space. The transformation takes the original example input and modifies it by setting the features of the absent groups to a baseline value, which is `0` by default. The transformation function is called `from_interp_rep_transform` under Lime. We will run it manually here to get the perturbed image input and then visualize what it looks like.
###Code
pertubed_img = lasso_lime.from_interp_rep_transform(
sample_interp_inp,
img.unsqueeze(0),
feature_mask=feature_mask.unsqueeze(0),
baselines=0
)
# invert the normalization for render
invert_norm = T.Normalize(
mean=[-0.485/0.229, -0.456/0.224, -0.406/0.225],
std=[1/0.229, 1/0.224, 1/0.225]
)
plt.imshow(invert_norm(pertubed_img).squeeze(0).permute(1, 2, 0).numpy())
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
As shown above, compared with the original image, the absent feature, i.e., the television segment, gets masked out in the perturbed image, while the remaining present features stay unchanged.

With the perturbed image, Lime is able to find out the model's prediction. Let us still use "television" as our attribution target, so the label of the perturbed sample is the value of the model's prediction for "television". Just out of curiosity, we can also check how the model's prediction changes with the perturbation.
###Code
perturbed_outputs = resnet(pertubed_img).squeeze(0).detach()
sample_label = perturbed_outputs[label_idx.item()]
print('Label of the perturbed sample as Television:', sample_label)
print('\nProbabilities of the perturbed image')
perturbed_output_probs = F.softmax(perturbed_outputs, dim=0)
print_result(perturbed_output_probs, topk=5)
print(f'\ntelevision ({label_idx.item()}):', perturbed_output_probs[label_idx].item())
###Output
Label of the perturbed sample as Television: tensor(3.2104)
Probabilities of the perturbed image
jigsaw_puzzle (611): 0.094
chest (492): 0.0377
laptop (620): 0.0338
birdhouse (448): 0.0328
ashcan (412): 0.0315
television (851): 0.006506193894892931
###Markdown
Reasonably, our ImageNet model no longer feels confident about classifying the image as a television.

At last, because Lime focuses on local interpretability, it will calculate the similarity between the perturbed and original images to reweight the loss of this data point. Note that the calculation is based on the input space instead of the interpretable space. This step simply passes the two image tensors into the given `similarity_func` argument, which is the exponential kernel of the euclidean distance in our case.
###Code
sample_similarity = exp_eucl_distance(img.unsqueeze(0), pertubed_img, None)
print('Sample similarity:', sample_similarity)
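
# Putting the three pieces together (conceptual sketch, not Captum's exact internals):
# Lime repeats the steps above to collect many
# (interpretable input, predicted label, similarity weight) triples and then fits its
# interpretable surrogate model on them with sample weights, e.g.
#   surrogate.fit(interp_inputs, labels, sample_weight=similarities)
# The learned coefficients are reported as the attributions.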
###Output
Sample similarity: 0.9705052174205847
###Markdown
This is basically how Lime creates a single training data point of `sample_interp_inp`, `sample_label`, and `sample_similarity`. By repeating this process `n_samples` times, it collects a dataset to train the interpretable model.

Worth noting: the steps we showed in this section are an example based on our Lime instance configured above. The logic of each step can be customized, especially with the `LimeBase` class, which will be demonstrated in Section 2.

2. Text Classification

In this section, we will use a news subject classification example to demonstrate the more customizable functions in Lime. We will train a simple embedding-bag classifier on the AG_NEWS dataset and analyze its understanding of words.
###Code
from torch import nn
from torch.utils.data import DataLoader
from torch.utils.data.dataset import random_split
from torchtext.datasets import AG_NEWS
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import Vocab
from collections import Counter
from IPython.core.display import HTML, display
###Output
_____no_output_____
###Markdown
2.1 Load the data and define the model

`torchtext` includes the AG_NEWS dataset, but since it is only split into train & test, we need to further cut a validation set from the original train split. Then we build the vocabulary of frequent words based on our train split.
###Code
ag_ds = list(AG_NEWS(split='train'))
ag_train, ag_val = ag_ds[:100000], ag_ds[100000:]
tokenizer = get_tokenizer('basic_english')
word_counter = Counter()
for (label, line) in ag_train:
word_counter.update(tokenizer(line))
voc = Vocab(word_counter, min_freq=10)
print('Vocabulary size:', len(voc))
num_class = len(set(label for label, _ in ag_train))
print('Num of classes:', num_class)
###Output
Vocabulary size: 18707
Num of classes: 4
###Markdown
The model we use is composed of an embedding-bag, which averages the word embeddings as the latent text representation, and a final linear layer, which maps the latent vector to the logits. Unconventionally, `pytorch`'s embedding-bag does not assume the first dimension is the batch. Instead, it requires a flattened vector of token indices with an additional offset tensor to mark the starting position of each example. You can refer to its [documentation](https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html#embeddingbag) for details.
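To make that input format concrete, here is a tiny, self-contained illustration with made-up token IDs (the layer sizes are arbitrary and unrelated to the model defined below):

```python
# Two "sentences" of lengths 3 and 2 are flattened into one index vector;
# offsets mark where each sentence starts.
flat_tokens = torch.tensor([4, 12, 7, 9, 3])
offsets = torch.tensor([0, 3])
bag = nn.EmbeddingBag(num_embeddings=20, embedding_dim=5)  # default mode is 'mean'
print(bag(flat_tokens, offsets).shape)  # torch.Size([2, 5]) -- one averaged vector per sentence
```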
###Code
class EmbeddingBagModel(nn.Module):
def __init__(self, vocab_size, embed_dim, num_class):
super().__init__()
self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
self.linear = nn.Linear(embed_dim, num_class)
def forward(self, inputs, offsets):
embedded = self.embedding(inputs, offsets)
return self.linear(embedded)
###Output
_____no_output_____
###Markdown
2.2 Training and Baseline Classification

In order to train our classifier, we need to define a collate function to batch the samples into the tensor format required by the embedding-bag and create the iterable dataloaders.
###Code
BATCH_SIZE = 64
def collate_batch(batch):
labels = torch.tensor([label - 1 for label, _ in batch])
text_list = [tokenizer(line) for _, line in batch]
# flatten tokens across the whole batch
text = torch.tensor([voc[t] for tokens in text_list for t in tokens])
# the offset of each example
offsets = torch.tensor(
[0] + [len(tokens) for tokens in text_list][:-1]
).cumsum(dim=0)
return labels, text, offsets
train_loader = DataLoader(ag_train, batch_size=BATCH_SIZE,
shuffle=True, collate_fn=collate_batch)
val_loader = DataLoader(ag_val, batch_size=BATCH_SIZE,
shuffle=False, collate_fn=collate_batch)
###Output
_____no_output_____
###Markdown
We will then train our embedding-bag model with the common cross-entropy loss and Adam optimizer. Due to the simplicity of this task, 5 epochs should be enough to give us a stable 90% validation accuracy.
###Code
EPOCHS = 7
EMB_SIZE = 64
CHECKPOINT = './models/embedding_bag_ag_news.pt'
USE_PRETRAINED = True # change to False if you want to retrain your own model
def train_model(train_loader, val_loader):
model = EmbeddingBagModel(len(voc), EMB_SIZE, num_class)
loss = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())
for epoch in range(1, EPOCHS + 1):
# training
model.train()
total_acc, total_count = 0, 0
for idx, (label, text, offsets) in enumerate(train_loader):
optimizer.zero_grad()
predited_label = model(text, offsets)
loss(predited_label, label).backward()
optimizer.step()
total_acc += (predited_label.argmax(1) == label).sum().item()
total_count += label.size(0)
if (idx + 1) % 500 == 0:
print('epoch {:3d} | {:5d}/{:5d} batches | accuracy {:8.3f}'.format(
epoch, idx + 1, len(train_loader), total_acc / total_count
))
total_acc, total_count = 0, 0
# evaluation
model.eval()
total_acc, total_count = 0, 0
with torch.no_grad():
for label, text, offsets in val_loader:
predited_label = model(text, offsets)
total_acc += (predited_label.argmax(1) == label).sum().item()
total_count += label.size(0)
print('-' * 59)
print('end of epoch {:3d} | valid accuracy {:8.3f} '.format(epoch, total_acc / total_count))
print('-' * 59)
torch.save(model, CHECKPOINT)
return model
eb_model = torch.load(CHECKPOINT) if USE_PRETRAINED else train_model(train_loader, val_loader)
###Output
_____no_output_____
###Markdown
Now, let us take the following sports news and test how our model performs.
###Code
test_label = 2 # {1: World, 2: Sports, 3: Business, 4: Sci/Tec}
test_line = ('US Men Have Right Touch in Relay Duel Against Australia THENS, Aug. 17 '
'- So Michael Phelps is not going to match the seven gold medals won by Mark Spitz. '
'And it is too early to tell if he will match Aleksandr Dityatin, '
'the Soviet gymnast who won eight total medals in 1980.')
test_labels, test_text, test_offsets = collate_batch([(test_label, test_line)])
probs = F.softmax(eb_model(test_text, test_offsets), dim=1).squeeze(0)
print('Prediction probability:', round(probs[test_labels[0]].item(), 4))
###Output
Prediction probability: 0.967
###Markdown
Our embedding-bag does successfully identify the above news as sports with pretty high confidence.

2.3 Inspect the model prediction with Lime

Finally, it is time to bring back Lime to inspect how the model makes the prediction. However, we will use the more customizable `LimeBase` class this time, which is also the low-level implementation powering the `Lime` class we used before. The `Lime` class is opinionated when creating features from perturbed binary interpretable representations: it can only set the "absent" features to some baseline values while keeping the other "present" features. This is not what we want in this case. For text, the interpretable representation is a binary vector indicating if the word at each position is present or not. The corresponding text input should literally remove the absent words so our embedding-bag can calculate the average embeddings of the remaining words. Setting them to any baselines would pollute the calculation and, moreover, our embedding-bag does not have common baseline tokens like `` at all. Therefore, we have to use `LimeBase` to customize the conversion logic through the `from_interp_rep_transform` argument.

`LimeBase` is not opinionated at all, so we have to define every piece manually. Let us talk about them in order:

- `forward_func`, the forward function of the model. Notice we cannot pass our model directly since Captum always assumes the first dimension is the batch while our embedding-bag requires flattened indices. So we will add the dummy dimension later when calling `attribute` and make a wrapper here to remove the dummy dimension before giving the input to our model.
- `interpretable_model`, the surrogate model. This works the same way as demonstrated in the image classification example above. We also use the sklearn linear lasso here.
- `similarity_func`, the function calculating the weights for training samples. The most common distance used for text is the cosine similarity in the latent embedding space. The text inputs are just sequences of token indices, so we have to leverage the trained embedding layer from the model to encode them into their latent vectors. Due to this extra encoding step, we cannot use the util `get_exp_kernel_similarity_function('cosine')` like in the image classification example, which directly calculates the cosine similarity of the given inputs.
- `perturb_func`, the function to sample interpretable representations. We present another way to define this argument other than using a generator as shown in the image classification example above. Here we directly define a function returning a randomized sample on every call. It outputs a binary vector where each token is selected independently and uniformly at random.
- `perturb_interpretable_space`, whether perturbed samples are in the interpretable space. `LimeBase` also supports sampling in the original input space, but we do not need that in our case.
- `from_interp_rep_transform`, the function transforming the perturbed interpretable samples back to the original input space. As explained above, this argument is the main reason for us to use `LimeBase`. We pick the subset of the present tokens from the original text input according to the interpretable representation.
- `to_interp_rep_transform`, the opposite of `from_interp_rep_transform`. It is needed only when `perturb_interpretable_space` is set to false.
###Code
# remove the batch dimension for the embedding-bag model
def forward_func(text, offsets):
return eb_model(text.squeeze(0), offsets)
# encode text indices into latent representations & calculate cosine similarity
def exp_embedding_cosine_distance(original_inp, perturbed_inp, _, **kwargs):
original_emb = eb_model.embedding(original_inp, None)
perturbed_emb = eb_model.embedding(perturbed_inp, None)
distance = 1 - F.cosine_similarity(original_emb, perturbed_emb, dim=1)
return torch.exp(-1 * (distance ** 2) / 2)
# binary vector where each word is selected independently and uniformly at random
def bernoulli_perturb(text, **kwargs):
probs = torch.ones_like(text) * 0.5
return torch.bernoulli(probs).long()
# remove absent tokens based on the interpretable representation sample
def interp_to_input(interp_sample, original_input, **kwargs):
return original_input[interp_sample.bool()].view(original_input.size(0), -1)
lasso_lime_base = LimeBase(
forward_func,
interpretable_model=SkLearnLasso(alpha=0.08),
similarity_func=exp_embedding_cosine_distance,
perturb_func=bernoulli_perturb,
perturb_interpretable_space=True,
from_interp_rep_transform=interp_to_input,
to_interp_rep_transform=None
)
###Output
_____no_output_____
###Markdown
The attribution call is the same as for the `Lime` class. Just remember to add the dummy batch dimension to the text input and put the offsets in `additional_forward_args`, because the offsets are not a feature for the classification but metadata for the text input.
###Code
attrs = lasso_lime_base.attribute(
test_text.unsqueeze(0), # add batch dimension for Captum
target=test_labels,
additional_forward_args=(test_offsets,),
n_samples=32000,
show_progress=True
).squeeze(0)
print('Attribution range:', attrs.min().item(), 'to', attrs.max().item())
###Output
Lime Base attribution: 100%|██████████| 32000/32000 [00:22<00:00, 1432.67it/s]
###Markdown
At last, let us create a simple visualization to highlight the influential words where green stands for positive correlation and red for negative.
###Code
def show_text_attr(attrs):
rgb = lambda x: '255,0,0' if x < 0 else '0,255,0'
alpha = lambda x: abs(x) ** 0.5
token_marks = [
f'<mark style="background-color:rgba({rgb(attr)},{alpha(attr)})">{token}</mark>'
for token, attr in zip(tokenizer(test_line), attrs.tolist())
]
display(HTML('<p>' + ' '.join(token_marks) + '</p>'))
show_text_attr(attrs)
###Output
_____no_output_____ |
neural-networks/nn_mnist_classifier.ipynb | ###Markdown
Creating a Sequential model
###Code
import tensorflow as tf
import pandas as pd

# X_train, y_train, X_valid, y_valid, X_test, y_test are assumed to be
# prepared earlier in the notebook (e.g., a scaled MNIST split).
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=[28, 28]))
model.add(tf.keras.layers.Dense(200, activation="relu"))
model.add(tf.keras.layers.Dense(100, activation="relu"))
model.add(tf.keras.layers.Dense(10, activation="softmax"))
model.summary()
weights, biases = model.layers[2].get_weights()
weights
model.compile(loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(),
metrics=[tf.keras.metrics.sparse_categorical_accuracy])
history = model.fit(X_train, y_train, epochs=30, validation_data=(X_valid, y_valid))
pd.DataFrame(history.history).plot(figsize=(10, 6))
model.evaluate(X_test, y_test)
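
# A possible follow-up (sketch, assumes the same X_test as above): turn the
# predicted class probabilities for a few test images into digit labels.
proba = model.predict(X_test[:3], verbose=0)
predicted_labels = proba.argmax(axis=-1)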
###Output
313/313 [==============================] - 0s 2ms/step - loss: 14.9817 - sparse_categorical_accuracy: 0.9733
|
07_train/archive/99_Train_Reviews_BERT_XGBoost_Transformers_PyTorch_AdHoc.ipynb | ###Markdown
BERT + XGBoost!

In this notebook, we will use a pre-trained deep learning model to process some text. We will then use the output of that model to classify the text. The text is a list of sentences from film reviews, and we will classify each sentence as either speaking "positively" about its subject or "negatively".

Introduction

Our goal is to create a model that takes a sentence (just like the ones in our dataset) and produces either 1 (indicating the sentence carries a positive sentiment) or 0 (indicating the sentence carries a negative sentiment).

Under the hood, the model is actually made up of two models:

* DistilBERT processes the sentence and passes along some information it extracted from it to the next model. DistilBERT is a smaller version of BERT developed and open sourced by the team at HuggingFace. It's a lighter and faster version of BERT that roughly matches its performance.
* The next model, a basic Logistic Regression model from scikit-learn, will take in the result of DistilBERT's processing and classify the sentence as either positive or negative (1 or 0, respectively).

The data we pass between the two models is a vector of size 768. We can think of this vector as an embedding for the sentence that we can use for classification.

Dataset

The dataset we will use in this example is the [Amazon Customer Reviews Dataset](https://s3.amazonaws.com/amazon-reviews-pds/readme.html), which contains reviews of many items in the Amazon.com catalog. We use the text from `review_body` to predict the `star_rating`.

review_body | star_rating
---|---
a stirring , funny and finally transporting re imagining of beauty and the beast and 1930s horror films | 1
apparently reassembled from the cutting room floor of any given daytime soap | 0
they presume their audience won't sit still for a sociology lesson | 0
this is a visually stunning rumination on love , memory , history and the war between art and commerce | 1
jonathan parker 's bartleby should have been the be all end all of the modern office anomie films | 1

Installing the transformers library

Let's start by installing the huggingface transformers library so we can load our deep learning NLP model.
###Code
!pip install -q pandas==1.0.3
!pip install -q torch==1.4.0
!pip install -q transformers
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import cross_val_score
import torch
from transformers import DistilBertTokenizer, DistilBertModel
import warnings
warnings.filterwarnings('ignore')
import boto3
import sagemaker
import pandas as pd
sess = sagemaker.Session()
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
###Output
_____no_output_____
###Markdown
Importing the dataset

We'll use pandas to read the dataset and load it into a dataframe. For performance reasons, we'll only use a subset of the dataset.
###Code
!aws s3 cp s3://amazon-reviews-pds/tsv/amazon_reviews_us_Digital_Software_v1_00.tsv.gz ./data/
import csv
df = pd.read_csv('./data/amazon_reviews_us_Digital_Software_v1_00.tsv.gz',
delimiter='\t',
quoting=csv.QUOTE_NONE,
compression='gzip')
df.shape
df.head(5)
df_features = df[['review_body', 'star_rating']].sample(100)
df_features.shape
df_features.head(10)
###Output
_____no_output_____
###Markdown
Convert Raw Text into BERT Embeddings Loading the Pre-trained BERT modelLet's now load a pre-trained BERT model.
###Code
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
###Output
_____no_output_____
###Markdown
Right now, the variable `tokenizer` holds the tokenizer for a pretrained distilBERT model (a version of BERT that is smaller, but much faster and requires a lot less memory); the model itself is loaded further below. Preparing the DatasetBefore we can hand our sentences to BERT, we need to do some minimal processing to put them in the format it requires. TokenizationOur first step is to tokenize the sentences -- break them up into words and subwords in the format BERT is comfortable with.
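For intuition, here is a quick sketch of what this encoding step produces for a single sentence. It assumes the `tokenizer` loaded above, and the example sentence is made up rather than taken from the dataset:

```python
# A hypothetical review, used only to illustrate the tokenizer's output
example = "This software works great and was easy to install"

# Word-piece tokens that DistilBERT actually sees
print(tokenizer.tokenize(example))

# Integer ids, with the special [CLS] and [SEP] tokens added at the start and end
print(tokenizer.encode(example, add_special_tokens=True))
```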
###Code
tokenized = df_features['review_body'].apply((lambda x: tokenizer.encode(x, add_special_tokens=True)))
###Output
_____no_output_____
###Markdown
PaddingAfter tokenization, `tokenized` is a list of sentences -- each sentence is represented as a list of tokens. We want BERT to process our examples all at once (as one batch). It's just faster that way. For that reason, we need to pad all lists to the same size, so we can represent the input as one 2-d array, rather than a list of lists (of different lengths).
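As a toy illustration (the token ids below are made up), padding turns a ragged list of lists into one rectangular array by appending zeros:

```python
import numpy as np

# Two hypothetical tokenized sentences of different lengths
toy_tokenized = [[101, 2023, 2003, 2307, 102],
                 [101, 6659, 102]]

toy_max_len = max(len(t) for t in toy_tokenized)
toy_padded = np.array([t + [0] * (toy_max_len - len(t)) for t in toy_tokenized])

print(toy_padded)        # one rectangular 2-d array
print(toy_padded.shape)  # (2, 5)
```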
###Code
max_len = 0
for i in tokenized.values:
if len(i) > max_len:
max_len = len(i)
padded = np.array([i + [0]*(max_len-len(i)) for i in tokenized.values])
###Output
_____no_output_____
###Markdown
Our dataset is now in the `padded` variable, we can view its dimensions below:
###Code
np.array(padded).shape
###Output
_____no_output_____
###Markdown
MaskingIf we directly send `padded` to BERT, that would slightly confuse it. We need to create another variable to tell it to ignore (mask) the padding we've added when it's processing its input. That's what attention_mask is:
###Code
attention_mask = np.where(padded != 0, 1, 0)
attention_mask.shape
###Output
_____no_output_____
###Markdown
And Now, Deep Learning!Now that we have our model and inputs ready, let's run our model!The `model()` function runs our sentences through BERT. The results of the processing will be returned into `last_hidden_states`.
###Code
model = DistilBertModel.from_pretrained('distilbert-base-uncased')
input_ids = torch.tensor(padded)
attention_mask = torch.tensor(attention_mask)
with torch.no_grad():
last_hidden_states = model(input_ids, attention_mask=attention_mask)
###Output
_____no_output_____
###Markdown
Let's slice only the part of the output that we need. That is the output corresponding to the first token of each sentence. The way BERT does sentence classification is that it adds a token called `[CLS]` (for classification) at the beginning of every sentence. The output corresponding to that token can be thought of as an embedding for the entire sentence.We'll save those in the `features` variable, as they'll serve as the features for our logistic regression model.
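In terms of shapes, here is a minimal sketch of what that slice does, using a dummy tensor with the same layout (the batch size and sequence length below are illustrative):

```python
import torch

# [batch_size, sequence_length, hidden_size] -- the layout of last_hidden_states[0]
dummy_hidden = torch.zeros(4, 12, 768)

# Keep only the hidden state of the first token ([CLS]) of every sentence
cls_embeddings = dummy_hidden[:, 0, :]
print(cls_embeddings.shape)  # torch.Size([4, 768])
```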
###Code
features = last_hidden_states[0][:,0,:].numpy()
features
###Output
_____no_output_____
###Markdown
The labels are the `star_rating` values for each review.
###Code
labels = df_features['star_rating']
labels.size
labels.head(10)
###Output
_____no_output_____
###Markdown
Split the Embeddings into Train and Test SetsLet's now split the embeddings into training and test datasets._Note that we were able to generate the BERT embeddings *before* we split - unlike TF/IDF, which requires us to split before generating the TF/IDF features, or else test data leaks into the training data and creates overly optimistic performance metrics._
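For contrast, here is a minimal sketch of the TF/IDF case mentioned above (the texts and variable names are illustrative): the vectorizer must be fit on the training split only, and merely applied to the test split, otherwise vocabulary and IDF statistics from the test set leak into training.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split

# Hypothetical raw reviews and labels
texts = ["great product", "terrible support", "works fine", "would not buy again"]
y = [1, 0, 1, 0]

train_texts, test_texts, y_train, y_test = train_test_split(texts, y, test_size=0.5, random_state=0)

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)  # fit on the training split only
X_test = vectorizer.transform(test_texts)        # reuse the fitted vocabulary / IDF weights
```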
###Code
train_features, test_features, train_labels, test_labels = train_test_split(features,
labels,
stratify=df_features['star_rating'])
train_features.shape
train_features
test_features.shape
test_features
###Output
_____no_output_____
###Markdown
Training Starts Here [Bonus] Grid Search for ParametersWe can dive into Logistic regression directly with the Scikit Learn default parameters, but sometimes it's worth searching for the best value of the C parameter, which determines regularization strength.
###Code
parameters = {'C': np.linspace(0.0001, 100, 20)}
grid_search = GridSearchCV(LogisticRegression(), parameters)
grid_search.fit(train_features, train_labels)
print('best parameters: ', grid_search.best_params_)
print('best scores: ', grid_search.best_score_)
###Output
_____no_output_____
###Markdown
We now train the LogisticRegression model. If you've chosen to do the gridsearch, you can plug the value of C into the model declaration (e.g. `LogisticRegression(C=5.2)`).
###Code
lr_clf = LogisticRegression()
lr_clf.fit(train_features, train_labels)
###Output
_____no_output_____
###Markdown
Evaluate the ModelSo how well does our model do in classifying sentences? One way is to check the accuracy against the testing dataset:
###Code
lr_clf.score(test_features, test_labels)
###Output
_____no_output_____
###Markdown
Reference MetricsFor reference, the [highest accuracy score](http://nlpprogress.com/english/sentiment_analysis.html) for this dataset is currently **96.8**. DistilBERT can be trained to improve its score on this task – a process called **fine-tuning** which updates BERT’s weights to make it achieve a better performance in this sentence classification task (which we can call the downstream task). The fine-tuned DistilBERT turns out to achieve an accuracy score of **90.7**. The full size BERT model achieves **94.9**.And that’s it! That’s a good first contact with BERT. The next step would be to head over to the documentation and try your hand at [fine-tuning](https://huggingface.co/transformers/examples.html#glue). You can also go back and switch between distilBERT and BERT - and see how that works! Classify with XGBoost
###Code
!pip install -q xgboost==1.0.2
import xgboost
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
import xgboost as xgb
from xgboost import XGBClassifier
objective = 'binary:logistic'
max_depth = 10
num_round = 10
scale_pos_weight = 1.43  # this is the ratio of the majority class
xgb_estimator = XGBClassifier(objective=objective,
num_round=num_round,
max_depth=max_depth,
scale_pos_weight=scale_pos_weight)
xgb_estimator.fit(train_features, train_labels)
preds_test = xgb_estimator.predict(test_features)
preds_test
from sklearn.metrics import accuracy_score, precision_score, classification_report, confusion_matrix
print('Test Accuracy: ', accuracy_score(test_labels, preds_test))
print('Test Precision: ', precision_score(test_labels, preds_test, average=None))
print(classification_report(test_labels, preds_test))
import seaborn as sn
import pandas as pd
import matplotlib.pyplot as plt
df_cm_test = confusion_matrix(test_labels, preds_test)
df_cm_test
import itertools
import numpy as np
def plot_conf_mat(cm, classes, title, cmap = plt.cm.Greens):
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="black" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Plot non-normalized confusion matrix
plt.figure()
fig, ax = plt.subplots(figsize=(10,5))
plot_conf_mat(df_cm_test, classes=['1', '2', '3', '4', '5'],
title='Confusion matrix')
plt.show()
from sklearn import metrics
auc = round(metrics.roc_auc_score(test_labels, preds_test), 4)
print('AUC is ' + repr(auc))
fpr, tpr, _ = metrics.roc_curve(test_labels, preds_test)
plt.title('ROC Curve')
plt.plot(fpr, tpr, 'b',
label='AUC = %0.2f'% auc)
plt.legend(loc='lower right')
plt.plot([0,1],[0,1],'r--')
plt.xlim([-0.1,1.1])
plt.ylim([-0.1,1.1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
###Output
_____no_output_____ |
2 - Modelling data with multiple channels/PixelCNN_RGB.ipynb | ###Markdown
Autoregressive models - PixelCNN for multiple channels In our previous notebook, we trained a generative model to create black and white drawings of numbers by learning the manifold distribution of the MNIST dataset. Now we are going to create a generative model with a similar architecture that is able to generate coloured images and is trained on the CIFAR10 dataset - a commonly used dataset for machine learning research which is composed of natural images of 10 different classes: airplanes, birds, cars, cats, deer, dogs, frogs, horses, ships and trucks. Besides having 3 channels of colour instead of one, these images are more complex and diverse than the hand-written numbers of the MNIST dataset, so it is more challenging for the model to learn the manifold that generates these images. Let's see how it fares. This implementation uses Tensorflow 2.0. We start by installing and importing the code dependencies.
###Code
%tensorflow_version 2.x
import random as rn
import time
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
###Output
_____no_output_____
###Markdown
To allow for reproducible results we fix a random seed.
###Code
# Defining random seeds
random_seed = 42
tf.random.set_seed(random_seed)
np.random.seed(random_seed)
rn.seed(random_seed)
###Output
_____no_output_____
###Markdown
The original CIFAR10 dataset has 256 different intensity values for each sub-pixel. We use the `quantisize` function to define the number of intensity values we are using and to control the complexity of the problem.
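As a quick sanity check, this is what the function defined in the next cell does to a handful of pixel intensities in [0, 1] (the values and the choice of 4 levels are just for illustration):

```python
import numpy as np

pixels = np.array([0.0, 0.10, 0.49, 0.51, 0.99])
q_levels = 4

# Same expression as in quantisize below: each value is mapped to one of the levels 0..3
print((np.digitize(pixels, np.arange(q_levels) / q_levels) - 1).astype('float32'))
# -> [0. 0. 1. 2. 3.]
```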
###Code
def quantisize(images, q_levels):
"""Digitize image into q levels"""
return (np.digitize(images, np.arange(q_levels) / q_levels) - 1).astype('float32')
###Output
_____no_output_____
###Markdown
Masked Convolution for sub-pixelsIn the PixelCNN we use masks to control the information the model has access to when processing the data of a central pixel. The convolutional filter naturally considers all the pixels around it, so we use masks to block the information from the pixels that come after the central pixel in our pre-defined ordering.When we are working with more than 1 channel (coloured images have 3), we need to make sure the sub-pixels don't have access to information that is yet to be processed. We need to adapt our masks to block the information of future sub-pixels when processing one sub-pixel at a time. To do so, we adjust the `build` function in the `MaskedConv2D` class in order to block the central pixel's information for yet unseen sub-pixels, just as is pictured in the red squares of the following image:
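To make the rule concrete, here is a small stand-alone sketch that reproduces the centre-pixel masking rule used in the `MaskedConv2D` class defined next, assuming the channel ordering R, G, B (i indexes the input channel group, j the output channel group):

```python
channels = ['R', 'G', 'B']

for mask_type in ['A', 'B']:
    print('mask', mask_type)
    for j, out_c in enumerate(channels):
        # Mirrors the condition used in MaskedConv2D.build for the centre position
        visible = [in_c for i, in_c in enumerate(channels)
                   if not ((mask_type == 'A' and i >= j) or (mask_type == 'B' and i > j))]
        print('  output group', out_c, 'sees centre-pixel inputs:', visible or 'none')
```

For mask A the R output sees nothing at the centre pixel, G sees R, and B sees R and G; mask B additionally lets each group see its own channel.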
###Code
class MaskedConv2D(tf.keras.layers.Layer):
"""Convolutional layers with masks for autoregressive models
Convolutional layers with simple implementation to have masks type A and B.
"""
def __init__(self,
mask_type,
filters,
kernel_size,
strides=1,
padding='same',
kernel_initializer='glorot_uniform',
bias_initializer='zeros',
input_n_channels=3):
super(MaskedConv2D, self).__init__()
assert mask_type in {'A', 'B'}
self.mask_type = mask_type
self.filters = filters
self.kernel_size = kernel_size
self.strides = strides
self.padding = padding.upper()
self.kernel_initializer = keras.initializers.get(kernel_initializer)
self.bias_initializer = keras.initializers.get(bias_initializer)
self.input_n_channels = input_n_channels
def build(self, input_shape):
self.kernel = self.add_weight("kernel",
shape=(self.kernel_size,
self.kernel_size,
int(input_shape[-1]),
self.filters),
initializer=self.kernel_initializer,
trainable=True)
self.bias = self.add_weight("bias",
shape=(self.filters,),
initializer=self.bias_initializer,
trainable=True)
center = self.kernel_size // 2
mask = np.ones(self.kernel.shape, dtype=np.float32)
mask[center, center + 1:, :, :] = 0
mask[center + 1:, :, :, :] = 0
for i in range(self.input_n_channels):
for j in range(self.input_n_channels):
if (self.mask_type == 'A' and i >= j) or (self.mask_type == 'B' and i > j):
mask[center, center, i::self.input_n_channels, j::self.input_n_channels] = 0
self.mask = tf.constant(mask, dtype=tf.float32, name='mask')
def call(self, input):
masked_kernel = tf.math.multiply(self.mask, self.kernel)
x = tf.nn.conv2d(input,
masked_kernel,
strides=[1, self.strides, self.strides, 1],
padding=self.padding)
x = tf.nn.bias_add(x, self.bias)
return x
###Output
_____no_output_____
###Markdown
The Residual block is an important building block of Oord's seminal paper that originated the PixelCNN.
###Code
class ResidualBlock(tf.keras.Model):
"""Residual blocks that compose pixelCNN
Blocks of layers with 3 convolutional layers and one residual connection.
Based on Figure 5 from [1] where h indicates number of filters.
Refs:
[1] - Oord, A. V. D., Kalchbrenner, N., & Kavukcuoglu, K. (2016). Pixel
recurrent neural networks. arXiv preprint arXiv:1601.06759.
"""
def __init__(self, h):
super(ResidualBlock, self).__init__(name='')
self.conv2a = MaskedConv2D(mask_type='B', filters=h//2, kernel_size=1, strides=1)
self.conv2b = MaskedConv2D(mask_type='B', filters=h//2, kernel_size=7, strides=1)
self.conv2c = MaskedConv2D(mask_type='B', filters=h, kernel_size=1, strides=1)
def call(self, input_tensor):
x = tf.nn.relu(input_tensor)
x = self.conv2a(x)
x = tf.nn.relu(x)
x = self.conv2b(x)
x = tf.nn.relu(x)
x = self.conv2c(x)
x += input_tensor
return x
###Output
_____no_output_____
###Markdown
Overfitting to 2 examplesGenerating natural-looking coloured images is challenging. To make sure the implementation is sound and learning is happening, we start by learning an easier problem. We do so by overfitting the generative model to just 2 examples - a frog and a truck. In this case, if everything is working correctly, the model will memorize the two examples and replicate them with near-perfect accuracy, instead of generalizing to different unseen examples, which is what would happen if there were no overfitting.
###Code
# --------------------------------------------------------------------------------------------------------------
# Loading data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
height = 32
width = 32
n_channel = 3
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape(x_train.shape[0], height, width, n_channel)
x_test = x_test.reshape(x_test.shape[0], height, width, n_channel)
x_train_overfit = np.tile(x_train[:2], 25000)
x_train_overfit = x_train_overfit.reshape(2,32,32,25000,3)
x_train_overfit = np.transpose(x_train_overfit, (0,3,1,2,4)).reshape(50000,32,32,3)
x_test_overfit = x_train_overfit
# Quantisize the input data in q levels
q_levels = 128
x_train_quantised_of = quantisize(x_train_overfit, q_levels)
x_test_quantised_of = quantisize(x_test_overfit, q_levels)
# Check the first samples
print('Training examples')
fig,ax = plt.subplots(1, 10, figsize=(15, 10), dpi=80)
for i in range(10):
ax[i].imshow(x_train_quantised_of[i*5000]/(q_levels-1))
[x.axis('off') for x in ax]
plt.show()
print('Testing examples')
fig,ax = plt.subplots(1, 10, figsize=(15, 10), dpi=80)
for i in range(10):
ax[i].imshow(x_test_quantised_of[i*5000]/(q_levels-1))
[x.axis('off') for x in ax]
plt.show()
# Creating input stream using tf.data API
batch_size = 128
train_buf = 60000
train_dataset = tf.data.Dataset.from_tensor_slices((x_train_quantised_of / (q_levels - 1),
x_train_quantised_of.astype('int32')))
train_dataset = train_dataset.shuffle(buffer_size=train_buf)
train_dataset = train_dataset.batch(batch_size)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test_quantised_of / (q_levels - 1),
x_test_quantised_of.astype('int32')))
test_dataset = test_dataset.batch(batch_size)
# Create PixelCNN model
n_filters = 120
inputs = keras.layers.Input(shape=(height, width, n_channel))
x = MaskedConv2D(mask_type='A', filters=n_filters, kernel_size=7)(inputs)
for i in range(15):
x = keras.layers.Activation(activation='relu')(x)
x = ResidualBlock(h=n_filters)(x)
x = keras.layers.Activation(activation='relu')(x)
x = MaskedConv2D(mask_type='B', filters=n_filters, kernel_size=1)(x)
x = keras.layers.Activation(activation='relu')(x)
x = MaskedConv2D(mask_type='B', filters=n_channel * q_levels, kernel_size=1)(x) # shape [N,H,W,DC]
pixelcnn = tf.keras.Model(inputs=inputs, outputs=x)
# Prepare optimizer and loss function
lr_decay = 0.9999
learning_rate = 5e-3 #5
optimizer = tf.keras.optimizers.Adam(lr=learning_rate)
compute_loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
@tf.function
def train_step(batch_x, batch_y):
with tf.GradientTape() as ae_tape:
logits = pixelcnn(batch_x, training=True)
logits = tf.reshape(logits, [-1, height, width, q_levels, n_channel])
logits = tf.transpose(logits, perm=[0, 1, 2, 4, 3])
loss = compute_loss(tf.one_hot(batch_y, q_levels), logits)
gradients = ae_tape.gradient(loss, pixelcnn.trainable_variables)
gradients, _ = tf.clip_by_global_norm(gradients, 1.0)
optimizer.apply_gradients(zip(gradients, pixelcnn.trainable_variables))
return loss
###Output
_____no_output_____
###Markdown
We train the model for 10 epochs and the loss quickly drops to almost 0. When the model generates new examples, it produces images almost identical to the two it was presented with during training.
###Code
# Training loop
n_epochs = 10
val_error = []
n_iter = int(np.ceil(x_train_quantised_of.shape[0] / batch_size))
for epoch in range(n_epochs):
start_epoch = time.time()
for i_iter, (batch_x, batch_y) in enumerate(train_dataset):
start = time.time()
optimizer.lr = optimizer.lr * lr_decay
loss = train_step(batch_x, batch_y)
val_error.append(loss)
iter_time = time.time() - start
if i_iter % 100 == 0:
print('EPOCH {:3d}: ITER {:4d}/{:4d} TIME: {:.2f} LOSS: {:.4f}'.format(epoch,
i_iter, n_iter,
iter_time,
loss))
epoch_time = time.time() - start_epoch
print('EPOCH {:3d}: TIME: {:.2f} ETA: {:.2f}'.format(epoch,
epoch_time,
epoch_time * (n_epochs - epoch)))
# Generating new images
samples = np.zeros((9, height, width, n_channel), dtype='float32')
for i in range(height):
for j in range(width):
for k in range(n_channel):
logits = pixelcnn(samples)
logits = tf.reshape(logits, [-1, height, width, q_levels, n_channel])
logits = tf.transpose(logits, perm=[0, 1, 2, 4, 3])
next_sample = tf.random.categorical(logits[:, i, j, k, :], 1)
samples[:, i, j, k] = (next_sample.numpy() / (q_levels - 1))[:, 0]
fig = plt.figure(figsize=(10, 10))
for i in range(9):
ax = fig.add_subplot(3, 3, i+1)
ax.imshow(samples[i, :, :, :])
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
###Output
_____no_output_____
###Markdown
Now, if our test set contains images that were not the two that the model overfit to, the loss will be large as the model is not able to recreate the unseen image. It is only able to recreate the two memorized images.
###Code
# Test
x_train_quantised = quantisize(x_train, q_levels)
x_test_quantised = quantisize(x_test, q_levels)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test_quantised / (q_levels - 1),
x_test_quantised.astype('int32')))
test_dataset = test_dataset.batch(batch_size)
test_loss = []
for batch_x, batch_y in test_dataset:
logits = pixelcnn(batch_x, training=False)
logits = tf.reshape(logits, [-1, height, width, q_levels, n_channel])
logits = tf.transpose(logits, perm=[0, 1, 2, 4, 3])
# Calculate cross-entropy (= negative log-likelihood)
loss = compute_loss(tf.one_hot(batch_y, q_levels), logits)
test_loss.append(loss)
print('nll : {:} nats'.format(np.array(test_loss).mean()))
print('bits/dim : {:}'.format(np.array(test_loss).mean() / (height * width)))
###Output
nll : 126.13430786132812 nats
bits/dim : 0.12317803502082825
###Markdown
A similar result is obtained if we try to use the model to fill in the occluded half of an image. All the model is able to do is reproduce the two examples it has seen. Let's now try to make the model learn the manifold distribution of the whole CIFAR10 dataset and create unseen examples of natural images.
###Code
# Filling occluded images
occlude_start_row = 16
num_generated_images = 10
samples = np.copy(x_test_quantised[0:num_generated_images, :, :, :])
samples = samples / (q_levels - 1)
samples[:, occlude_start_row:, :, :] = 0
fig = plt.figure(figsize=(20, 20))
for i in range(10):
ax = fig.add_subplot(1, 10, i + 1)
ax.matshow(samples[i, :, :, :], cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
for i in range(occlude_start_row, height):
for j in range(width):
for k in range(n_channel):
logits = pixelcnn(samples)
logits = tf.reshape(logits, [-1, height, width, q_levels, n_channel])
logits = tf.transpose(logits, perm=[0, 1, 2, 4, 3])
next_sample = tf.random.categorical(logits[:, i, j, k, :], 1)
samples[:, i, j, k] = (next_sample.numpy() / (q_levels - 1))[:, 0]
fig = plt.figure(figsize=(20, 20))
for i in range(10):
ax = fig.add_subplot(1, 10, i + 1)
ax.matshow(samples[i, :, :, :], cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
###Output
_____no_output_____
###Markdown
Generalizing CIFAR10 imagesNow we will be using the 50,000 training images spanning the 10 classes of CIFAR10. This is still a challenge for our current implementation of the PixelCNN. To facilitate the model's training we reduce the number of intensity values of each sub-pixel from 256 to 8, which means the images will have a coarser colour resolution.
###Code
# Quantisize the input data in q levels
q_levels = 8
x_train_quantised = quantisize(x_train, q_levels)
x_test_quantised = quantisize(x_test, q_levels)
# Check the first samples
print('Training examples')
fig,ax = plt.subplots(1, 10, figsize=(15, 10), dpi=80)
for i in range(10):
ax[i].imshow(x_train_quantised[i]/(q_levels-1))
[x.axis('off') for x in ax]
plt.show()
print('Testing examples')
fig,ax = plt.subplots(1, 10, figsize=(15, 10), dpi=80)
for i in range(10):
ax[i].imshow(x_test_quantised[i]/(q_levels-1))
[x.axis('off') for x in ax]
plt.show()
train_dataset = tf.data.Dataset.from_tensor_slices((x_train_quantised / (q_levels - 1),
x_train_quantised.astype('int32')))
train_dataset = train_dataset.shuffle(buffer_size=train_buf)
train_dataset = train_dataset.batch(batch_size)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test_quantised / (q_levels - 1),
x_test_quantised.astype('int32')))
test_dataset = test_dataset.batch(batch_size)
###Output
Training examples
###Markdown
We train the model for 20 epochs.
###Code
# Training loop
pixelcnn = tf.keras.Model(inputs=inputs, outputs=x)
n_epochs = 20
val_error = []
n_iter = int(np.ceil(x_train_quantised.shape[0] / batch_size))
for epoch in range(n_epochs):
start_epoch = time.time()
for i_iter, (batch_x, batch_y) in enumerate(train_dataset):
start = time.time()
optimizer.lr = optimizer.lr * lr_decay
loss = train_step(batch_x, batch_y)
val_error.append(loss)
iter_time = time.time() - start
if i_iter % 100 == 0:
print('EPOCH {:3d}: ITER {:4d}/{:4d} TIME: {:.2f} LOSS: {:.4f}'.format(epoch,
i_iter, n_iter,
iter_time,
loss))
epoch_time = time.time() - start_epoch
print('EPOCH {:3d}: TIME: {:.2f} ETA: {:.2f}'.format(epoch,
epoch_time,
epoch_time * (n_epochs - epoch)))
###Output
EPOCH 0: ITER 0/ 391 TIME: 11.04 LOSS: 2.0768
EPOCH 0: ITER 100/ 391 TIME: 0.92 LOSS: 1.1606
EPOCH 0: ITER 200/ 391 TIME: 0.93 LOSS: 0.9095
EPOCH 0: ITER 300/ 391 TIME: 0.92 LOSS: 0.7589
EPOCH 0: TIME: 375.12 ETA: 7502.38
EPOCH 1: ITER 0/ 391 TIME: 0.20 LOSS: 0.6693
EPOCH 1: ITER 100/ 391 TIME: 0.92 LOSS: 0.6433
EPOCH 1: ITER 200/ 391 TIME: 0.92 LOSS: 0.5806
EPOCH 1: ITER 300/ 391 TIME: 0.93 LOSS: 0.5253
EPOCH 1: TIME: 364.26 ETA: 6920.88
EPOCH 2: ITER 0/ 391 TIME: 0.21 LOSS: 0.5504
EPOCH 2: ITER 100/ 391 TIME: 0.92 LOSS: 0.5570
EPOCH 2: ITER 200/ 391 TIME: 0.93 LOSS: 0.5388
EPOCH 2: ITER 300/ 391 TIME: 0.93 LOSS: 0.5375
EPOCH 2: TIME: 365.10 ETA: 6571.84
EPOCH 3: ITER 0/ 391 TIME: 0.21 LOSS: 0.5209
EPOCH 3: ITER 100/ 391 TIME: 0.93 LOSS: 0.5560
EPOCH 3: ITER 200/ 391 TIME: 0.93 LOSS: 0.5015
EPOCH 3: ITER 300/ 391 TIME: 0.93 LOSS: 0.5159
EPOCH 3: TIME: 365.68 ETA: 6216.59
EPOCH 4: ITER 0/ 391 TIME: 0.21 LOSS: 0.5077
EPOCH 4: ITER 100/ 391 TIME: 0.92 LOSS: 0.5205
EPOCH 4: ITER 200/ 391 TIME: 0.92 LOSS: 0.5353
EPOCH 4: ITER 300/ 391 TIME: 0.92 LOSS: 0.4964
EPOCH 4: TIME: 363.95 ETA: 5823.15
EPOCH 5: ITER 0/ 391 TIME: 0.21 LOSS: 0.5137
EPOCH 5: ITER 100/ 391 TIME: 0.92 LOSS: 0.4984
EPOCH 5: ITER 200/ 391 TIME: 0.91 LOSS: 0.4987
EPOCH 5: ITER 300/ 391 TIME: 0.92 LOSS: 0.4963
EPOCH 5: TIME: 362.37 ETA: 5435.51
EPOCH 6: ITER 0/ 391 TIME: 0.20 LOSS: 0.4949
EPOCH 6: ITER 100/ 391 TIME: 0.91 LOSS: 0.4774
EPOCH 6: ITER 200/ 391 TIME: 0.91 LOSS: 0.5056
EPOCH 6: ITER 300/ 391 TIME: 0.92 LOSS: 0.4893
EPOCH 6: TIME: 362.62 ETA: 5076.63
EPOCH 7: ITER 0/ 391 TIME: 0.21 LOSS: 0.4844
EPOCH 7: ITER 100/ 391 TIME: 0.92 LOSS: 0.4754
EPOCH 7: ITER 200/ 391 TIME: 0.92 LOSS: 0.4857
EPOCH 7: ITER 300/ 391 TIME: 0.93 LOSS: 0.4626
EPOCH 7: TIME: 364.51 ETA: 4738.60
EPOCH 8: ITER 0/ 391 TIME: 0.20 LOSS: 0.4919
EPOCH 8: ITER 100/ 391 TIME: 0.92 LOSS: 0.4631
EPOCH 8: ITER 200/ 391 TIME: 0.92 LOSS: 0.4713
EPOCH 8: ITER 300/ 391 TIME: 0.92 LOSS: 0.4944
EPOCH 8: TIME: 364.26 ETA: 4371.16
EPOCH 9: ITER 0/ 391 TIME: 0.20 LOSS: 0.4843
EPOCH 9: ITER 100/ 391 TIME: 0.93 LOSS: 0.5249
EPOCH 9: ITER 200/ 391 TIME: 0.92 LOSS: 0.4806
EPOCH 9: ITER 300/ 391 TIME: 0.92 LOSS: 0.4903
EPOCH 9: TIME: 363.69 ETA: 4000.62
EPOCH 10: ITER 0/ 391 TIME: 0.21 LOSS: 0.4931
EPOCH 10: ITER 100/ 391 TIME: 0.92 LOSS: 0.5169
EPOCH 10: ITER 200/ 391 TIME: 0.92 LOSS: 0.4670
EPOCH 10: ITER 300/ 391 TIME: 0.92 LOSS: 0.4655
EPOCH 10: TIME: 364.21 ETA: 3642.13
EPOCH 11: ITER 0/ 391 TIME: 0.21 LOSS: 0.4645
EPOCH 11: ITER 100/ 391 TIME: 0.92 LOSS: 0.4682
EPOCH 11: ITER 200/ 391 TIME: 0.92 LOSS: 0.4538
EPOCH 11: ITER 300/ 391 TIME: 0.92 LOSS: 0.4945
EPOCH 11: TIME: 362.94 ETA: 3266.47
EPOCH 12: ITER 0/ 391 TIME: 0.21 LOSS: 0.4558
EPOCH 12: ITER 100/ 391 TIME: 0.92 LOSS: 0.4477
EPOCH 12: ITER 200/ 391 TIME: 0.92 LOSS: 0.4608
EPOCH 12: ITER 300/ 391 TIME: 0.92 LOSS: 0.4734
EPOCH 12: TIME: 363.36 ETA: 2906.87
EPOCH 13: ITER 0/ 391 TIME: 0.21 LOSS: 0.4973
EPOCH 13: ITER 100/ 391 TIME: 0.91 LOSS: 0.4853
EPOCH 13: ITER 200/ 391 TIME: 0.92 LOSS: 0.4587
EPOCH 13: ITER 300/ 391 TIME: 0.92 LOSS: 0.4731
EPOCH 13: TIME: 363.12 ETA: 2541.84
EPOCH 14: ITER 0/ 391 TIME: 0.21 LOSS: 0.4683
EPOCH 14: ITER 100/ 391 TIME: 0.92 LOSS: 0.4840
EPOCH 14: ITER 200/ 391 TIME: 0.92 LOSS: 0.4608
EPOCH 14: ITER 300/ 391 TIME: 0.92 LOSS: 0.4581
EPOCH 14: TIME: 362.97 ETA: 2177.81
EPOCH 15: ITER 0/ 391 TIME: 0.21 LOSS: 0.5594
EPOCH 15: ITER 100/ 391 TIME: 0.92 LOSS: 0.4678
EPOCH 15: ITER 200/ 391 TIME: 0.92 LOSS: 0.4537
EPOCH 15: ITER 300/ 391 TIME: 0.92 LOSS: 0.4717
EPOCH 15: TIME: 363.99 ETA: 1819.94
EPOCH 16: ITER 0/ 391 TIME: 0.21 LOSS: 0.4713
EPOCH 16: ITER 100/ 391 TIME: 0.93 LOSS: 0.4655
EPOCH 16: ITER 200/ 391 TIME: 0.92 LOSS: 0.4656
EPOCH 16: ITER 300/ 391 TIME: 0.94 LOSS: 0.4442
EPOCH 16: TIME: 364.10 ETA: 1456.41
EPOCH 17: ITER 0/ 391 TIME: 0.21 LOSS: 0.4783
EPOCH 17: ITER 100/ 391 TIME: 0.93 LOSS: 0.4557
EPOCH 17: ITER 200/ 391 TIME: 0.92 LOSS: 0.4707
EPOCH 17: ITER 300/ 391 TIME: 0.92 LOSS: 0.4606
EPOCH 17: TIME: 364.32 ETA: 1092.96
EPOCH 18: ITER 0/ 391 TIME: 0.21 LOSS: 0.4742
EPOCH 18: ITER 100/ 391 TIME: 0.92 LOSS: 0.4463
EPOCH 18: ITER 200/ 391 TIME: 0.93 LOSS: 0.4634
EPOCH 18: ITER 300/ 391 TIME: 0.92 LOSS: 0.4656
EPOCH 18: TIME: 364.42 ETA: 728.83
EPOCH 19: ITER 0/ 391 TIME: 0.21 LOSS: 0.4879
EPOCH 19: ITER 100/ 391 TIME: 0.93 LOSS: 0.4759
EPOCH 19: ITER 200/ 391 TIME: 0.93 LOSS: 0.4709
EPOCH 19: ITER 300/ 391 TIME: 0.92 LOSS: 0.4728
EPOCH 19: TIME: 364.72 ETA: 364.72
###Markdown
We can now see that the model also performs well on the test set when presented with unseen images. This is a good indication that the model has been successful in learning the manifold of the dataset images.
###Code
# Test
test_loss = []
for batch_x, batch_y in test_dataset:
logits = pixelcnn(batch_x, training=False)
logits = tf.reshape(logits, [-1, height, width, q_levels, n_channel])
logits = tf.transpose(logits, perm=[0, 1, 2, 4, 3])
# Calculate cross-entropy (= negative log-likelihood)
loss = compute_loss(tf.one_hot(batch_y, q_levels), logits)
test_loss.append(loss)
print('nll : {:} nats'.format(np.array(test_loss).mean()))
print('bits/dim : {:}'.format(np.array(test_loss).mean() / (height * width)))
# Filling occluded images
occlude_start_row = 16
num_generated_images = 10
samples = np.copy(x_test_quantised[0:num_generated_images, :, :, :])
samples = samples / (q_levels - 1)
samples[:, occlude_start_row:, :, :] = 0
fig = plt.figure(figsize=(20, 20))
for i in range(10):
ax = fig.add_subplot(1, 10, i + 1)
ax.matshow(samples[i, :, :, :], cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
for i in range(occlude_start_row, height):
for j in range(width):
for k in range(n_channel):
logits = pixelcnn(samples)
logits = tf.reshape(logits, [-1, height, width, q_levels, n_channel])
logits = tf.transpose(logits, perm=[0, 1, 2, 4, 3])
next_sample = tf.random.categorical(logits[:, i, j, k, :], 1)
samples[:, i, j, k] = (next_sample.numpy() / (q_levels - 1))[:, 0]
fig = plt.figure(figsize=(20, 20))
for i in range(10):
ax = fig.add_subplot(1, 10, i + 1)
ax.matshow(samples[i, :, :, :], cmap=matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
# Generating new images
samples = np.zeros((25, height, width, n_channel), dtype='float32')
for i in range(height):
for j in range(width):
for k in range(n_channel):
logits = pixelcnn(samples)
logits = tf.reshape(logits, [-1, height, width, q_levels, n_channel])
logits = tf.transpose(logits, perm=[0, 1, 2, 4, 3])
next_sample = tf.random.categorical(logits[:, i, j, k, :], 1)
samples[:, i, j, k] = (next_sample.numpy() / (q_levels - 1))[:, 0]
fig = plt.figure(figsize=(10, 10))
for i in range(25):
ax = fig.add_subplot(5, 5, i+1)
ax.imshow(samples[i, :, :, :])
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
###Output
_____no_output_____
###Markdown
We can see that the images show natural colours and patterns, and sometimes we can make out what might be a horse or a truck. Since Oord's paper in 2016, there has been a lot of progress in the implementation of the PixelCNN, with each iteration generating more and more realistic-looking images. In our next blog posts and notebooks we are going to address these new tricks and changes in order to create better images. So stay tuned!
###Code
###Output
_____no_output_____ |
udacity/nanodegree/machine_learning/ml_foundations/dog-project/dog_app.ipynb | ###Markdown
Artificial Intelligence Nanodegree Convolutional Neural Networks Project: Write an Algorithm for a Dog Identification App ---In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully! > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.The rubric contains _optional_ "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this IPython notebook.--- Why We're Here In this notebook, you will make the first steps towards developing an algorithm that could be used as part of a mobile or web app. At the end of this project, your code will accept any user-supplied image as input. If a dog is detected in the image, it will provide an estimate of the dog's breed. If a human is detected, it will provide an estimate of the dog breed that it most resembles. The image below displays potential sample output of your finished project (... but we expect that each student's algorithm will behave differently!). In this real-world setting, you will need to piece together a series of models to perform different tasks; for instance, the algorithm that detects humans in an image will be different from the CNN that infers dog breed. There are many points of possible failure, and no perfect algorithm exists. Your imperfect solution will nonetheless create a fun user experience! The Road AheadWe break the notebook into separate steps.
Feel free to use the links below to navigate the notebook.* [Step 0](step0): Import Datasets* [Step 1](step1): Detect Humans* [Step 2](step2): Detect Dogs* [Step 3](step3): Create a CNN to Classify Dog Breeds (from Scratch)* [Step 4](step4): Use a CNN to Classify Dog Breeds (using Transfer Learning)* [Step 5](step5): Create a CNN to Classify Dog Breeds (using Transfer Learning)* [Step 6](step6): Write your Algorithm* [Step 7](step7): Test Your Algorithm--- Step 0: Import Datasets Import Dog DatasetIn the code cell below, we import a dataset of dog images. We populate a few variables through the use of the `load_files` function from the scikit-learn library:- `train_files`, `valid_files`, `test_files` - numpy arrays containing file paths to images- `train_targets`, `valid_targets`, `test_targets` - numpy arrays containing onehot-encoded classification labels - `dog_names` - list of string-valued dog breed names for translating labels
###Code
from sklearn.datasets import load_files
from keras.utils import np_utils
import numpy as np
from glob import glob
# define function to load train, test, and validation datasets
def load_dataset(path):
data = load_files(path)
dog_files = np.array(data['filenames'])
dog_targets = np_utils.to_categorical(np.array(data['target']), 133)
return dog_files, dog_targets
# load train, test, and validation datasets
train_files, train_targets = load_dataset('/data/dog_images/train')
valid_files, valid_targets = load_dataset('/data/dog_images/valid')
test_files, test_targets = load_dataset('/data/dog_images/test')
# load list of dog names
dog_names = [item[20:-1] for item in sorted(glob("/data/dog_images/train/*/"))]
# print statistics about the dataset
print('There are %d total dog categories.' % len(dog_names))
print('There are %s total dog images.\n' % len(np.hstack([train_files, valid_files, test_files])))
print('There are %d training dog images.' % len(train_files))
print('There are %d validation dog images.' % len(valid_files))
print('There are %d test dog images.'% len(test_files))
###Output
Using TensorFlow backend.
###Markdown
Import Human DatasetIn the code cell below, we import a dataset of human images, where the file paths are stored in the numpy array `human_files`.
###Code
import random
random.seed(8675309)
# load filenames in shuffled human dataset
human_files = np.array(glob("/data/lfw/*/*"))
random.shuffle(human_files)
# print statistics about the dataset
print('There are %d total human images.' % len(human_files))
###Output
There are 13233 total human images.
###Markdown
--- Step 1: Detect HumansWe use OpenCV's implementation of [Haar feature-based cascade classifiers](http://docs.opencv.org/trunk/d7/d8b/tutorial_py_face_detection.html) to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on [github](https://github.com/opencv/opencv/tree/master/data/haarcascades). We have downloaded one of these detectors and stored it in the `haarcascades` directory.In the next code cell, we demonstrate how to use this detector to find human faces in a sample image.
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# extract pre-trained face detector
face_cascade = cv2.CascadeClassifier('haarcascades/haarcascade_frontalface_alt.xml')
# load color (BGR) image
img = cv2.imread(human_files[3])
# convert BGR image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# find faces in image
faces = face_cascade.detectMultiScale(gray)
# print number of faces detected in the image
print('Number of faces detected:', len(faces))
# get bounding box for each detected face
for (x,y,w,h) in faces:
# add bounding box to color image
cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
# convert BGR image to RGB for plotting
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
# display the image, along with bounding box
plt.imshow(cv_rgb)
plt.show()
###Output
Number of faces detected: 1
###Markdown
Before using any of the face detectors, it is standard procedure to convert the images to grayscale. The `detectMultiScale` function executes the classifier stored in `face_cascade` and takes the grayscale image as a parameter. In the above code, `faces` is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as `x` and `y`) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as `w` and `h`) specify the width and height of the box. Write a Human Face DetectorWe can use this procedure to write a function that returns `True` if a human face is detected in an image and `False` otherwise. This function, aptly named `face_detector`, takes a string-valued file path to an image as input and appears in the code block below.
###Code
# returns "True" if face is detected in image stored at img_path
def face_detector(img_path):
img = cv2.imread(img_path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray)
return len(faces) > 0
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Assess the Human Face Detector__Question 1:__ Use the code cell below to test the performance of the `face_detector` function. - What percentage of the first 100 images in `human_files` have a detected human face? - What percentage of the first 100 images in `dog_files` have a detected human face? Ideally, we would like 100% of human images with a detected face and 0% of dog images with a detected face. You will see that our algorithm falls short of this goal, but still gives acceptable performance. We extract the file paths for the first 100 images from each of the datasets and store them in the numpy arrays `human_files_short` and `dog_files_short`.__Answer:__ Percentage of human faces detected in human files short: 100.000000%, and the time taken for it is 2 secs. Percentage of human faces detected in dog files short: 11.000000%, and the time taken for it is 14 secs.
###Code
human_files_short = human_files[:100]
dog_files_short = train_files[:100]
# Do NOT modify the code above this line.
## TODO: Test the performance of the face_detector algorithm
## on the images in human_files_short and dog_files_short.
human_face_det_count = 0
dog_face_det_count = 0
import time
l_start_time = time.time()
# Test human faces and get the counts
for l_i in range(len(human_files_short)):
if face_detector(human_files_short[l_i]):
human_face_det_count += 1
l_stop_time = time.time()
print("Percentage of human faces detected in human files short: %f%%" % ((float(human_face_det_count)/len(human_files_short))*100))
print(" and time taken for it is %d secs.\n" % (l_stop_time - l_start_time))
l_start_time = time.time()
# Test dog faces and get the counts
for l_i in range(len(dog_files_short)):
if face_detector(dog_files_short[l_i]):
dog_face_det_count += 1
l_stop_time = time.time()
print("Percentage of human faces detected in human files short: %f%%" % ((float(dog_face_det_count)/len(dog_files_short))*100))
print(" and time taken for it is %d secs.\n" % (l_stop_time - l_start_time))
###Output
Percentage of human faces detected in human files short: 100.000000%
and time taken for it is 3 secs.
Percentage of human faces detected in dog files short: 11.000000%
and time taken for it is 14 secs.
###Markdown
__Question 2:__ This algorithmic choice necessitates that we communicate to the user that we accept human images only when they provide a clear view of a face (otherwise, we risk having unnecessarily frustrated users!). In your opinion, is this a reasonable expectation to pose on the user? If not, can you think of a way to detect humans in images that does not necessitate an image with a clearly presented face?__Answer:__ I do not think it is a reasonable expectation that the model will only detect humans when there is a clear view of the face. Our model should be more general and robust. Maybe we can use features like the eyes, ears, nose and forehead, or the placement and spacing of the eyes, ears and nose.We suggest the face detector from OpenCV as a potential way to detect human images in your algorithm, but you are free to explore other approaches, especially approaches that make use of deep learning :). Please use the code cell below to design and test your own face detection algorithm. If you decide to pursue this _optional_ task, report performance on each of the datasets.
###Code
## (Optional) TODO: Report the performance of another
## face detection algorithm on the LFW dataset
### Feel free to use as many code cells as needed.
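# One possible sketch: a CNN-based detector such as MTCNN tends to be more robust to
# pose and partial occlusion than the Haar cascade used above. This assumes the
# third-party `mtcnn` package is available (`pip install mtcnn`); if it is not
# installed, the sketch is simply skipped.
try:
    from mtcnn import MTCNN

    mtcnn_detector = MTCNN()

    def mtcnn_face_detector(img_path):
        # MTCNN expects an RGB image; OpenCV loads BGR, so convert first
        img = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
        return len(mtcnn_detector.detect_faces(img)) > 0
except ImportError:
    print('mtcnn is not installed; skipping the alternative detector sketch.')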
###Output
_____no_output_____
###Markdown
--- Step 2: Detect DogsIn this section, we use a pre-trained [ResNet-50](http://ethereon.github.io/netscope/#/gist/db945b393d40bfa26006) model to detect dogs in images. Our first line of code downloads the ResNet-50 model, along with weights that have been trained on [ImageNet](http://www.image-net.org/), a very large, very popular dataset used for image classification and other vision tasks. ImageNet contains over 10 million URLs, each linking to an image containing an object from one of [1000 categories](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a). Given an image, this pre-trained ResNet-50 model returns a prediction (derived from the available categories in ImageNet) for the object that is contained in the image.
###Code
from keras.applications.resnet50 import ResNet50
# define ResNet50 model
ResNet50_model = ResNet50(weights='imagenet')
###Output
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels.h5
102858752/102853048 [==============================] - 1s 0us/step
###Markdown
Pre-process the DataWhen using TensorFlow as backend, Keras CNNs require a 4D array (which we'll also refer to as a 4D tensor) as input, with shape$$(\text{nb_samples}, \text{rows}, \text{columns}, \text{channels}),$$where `nb_samples` corresponds to the total number of images (or samples), and `rows`, `columns`, and `channels` correspond to the number of rows, columns, and channels for each image, respectively. The `path_to_tensor` function below takes a string-valued file path to a color image as input and returns a 4D tensor suitable for supplying to a Keras CNN. The function first loads the image and resizes it to a square image that is $224 \times 224$ pixels. Next, the image is converted to an array, which is then resized to a 4D tensor. In this case, since we are working with color images, each image has three channels. Likewise, since we are processing a single image (or sample), the returned tensor will always have shape$$(1, 224, 224, 3).$$The `paths_to_tensor` function takes a numpy array of string-valued image paths as input and returns a 4D tensor with shape $$(\text{nb_samples}, 224, 224, 3).$$Here, `nb_samples` is the number of samples, or number of images, in the supplied array of image paths. It is best to think of `nb_samples` as the number of 3D tensors (where each 3D tensor corresponds to a different image) in your dataset!
###Code
from keras.preprocessing import image
from tqdm import tqdm
def path_to_tensor(img_path):
# loads RGB image as PIL.Image.Image type
img = image.load_img(img_path, target_size=(224, 224))
# convert PIL.Image.Image type to 3D tensor with shape (224, 224, 3)
x = image.img_to_array(img)
# convert 3D tensor to 4D tensor with shape (1, 224, 224, 3) and return 4D tensor
return np.expand_dims(x, axis=0)
def paths_to_tensor(img_paths):
list_of_tensors = [path_to_tensor(img_path) for img_path in tqdm(img_paths)]
return np.vstack(list_of_tensors)
###Output
_____no_output_____
###Markdown
Making Predictions with ResNet-50Getting the 4D tensor ready for ResNet-50, and for any other pre-trained model in Keras, requires some additional processing. First, the RGB image is converted to BGR by reordering the channels. All pre-trained models have the additional normalization step that the mean pixel (expressed in RGB as $[103.939, 116.779, 123.68]$ and calculated from all pixels in all images in ImageNet) must be subtracted from every pixel in each image. This is implemented in the imported function `preprocess_input`. If you're curious, you can check the code for `preprocess_input` [here](https://github.com/fchollet/keras/blob/master/keras/applications/imagenet_utils.py).Now that we have a way to format our image for supplying to ResNet-50, we are now ready to use the model to extract the predictions. This is accomplished with the `predict` method, which returns an array whose $i$-th entry is the model's predicted probability that the image belongs to the $i$-th ImageNet category. This is implemented in the `ResNet50_predict_labels` function below.By taking the argmax of the predicted probability vector, we obtain an integer corresponding to the model's predicted object class, which we can identify with an object category through the use of this [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a).
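As a rough sketch of what that normalization amounts to for a single image tensor (this mirrors the description above rather than calling the library function, and the dummy input is illustrative):

```python
import numpy as np

def manual_resnet_preprocess(tensor_4d):
    # tensor_4d has shape (1, 224, 224, 3) with channels ordered R, G, B
    x = tensor_4d[..., ::-1].astype('float32')   # reorder the channels to B, G, R
    # Subtract the ImageNet mean pixel; after the reordering these values line up
    # with the B, G and R channels respectively
    x -= np.array([103.939, 116.779, 123.68])
    return x

dummy = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype('float32')
print(manual_resnet_preprocess(dummy).shape)  # (1, 224, 224, 3)
```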
###Code
from keras.applications.resnet50 import preprocess_input, decode_predictions
def ResNet50_predict_labels(img_path):
# returns prediction vector for image located at img_path
img = preprocess_input(path_to_tensor(img_path))
return np.argmax(ResNet50_model.predict(img))
###Output
_____no_output_____
###Markdown
Write a Dog DetectorWhile looking at the [dictionary](https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a), you will notice that the categories corresponding to dogs appear in an uninterrupted sequence and correspond to dictionary keys 151-268, inclusive, to include all categories from `'Chihuahua'` to `'Mexican hairless'`. Thus, in order to check to see if an image is predicted to contain a dog by the pre-trained ResNet-50 model, we need only check if the `ResNet50_predict_labels` function above returns a value between 151 and 268 (inclusive).We use these ideas to complete the `dog_detector` function below, which returns `True` if a dog is detected in an image (and `False` if not).
###Code
### returns "True" if a dog is detected in the image stored at img_path
def dog_detector(img_path):
prediction = ResNet50_predict_labels(img_path)
return ((prediction <= 268) & (prediction >= 151))
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Assess the Dog Detector__Question 3:__ Use the code cell below to test the performance of your `dog_detector` function. - What percentage of the images in `human_files_short` have a detected dog? - What percentage of the images in `dog_files_short` have a detected dog?__Answer:__ Using ResNet, percentage of dogs detected in human files short: 0.000000%, and the time taken for it is 5 secs. Using ResNet, percentage of dogs detected in dog files short: 100.000000%, and the time taken for it is 4 secs.
###Code
### TODO: Test the performance of the dog_detector function
### on the images in human_files_short and dog_files_short.
resnet_human_face_det_count = 0
resnet_dog_face_det_count = 0
l_start_time = time.time()
# Test human faces and get the counts
for l_i in range(len(human_files_short)):
if dog_detector(human_files_short[l_i]):
resnet_human_face_det_count += 1
l_stop_time = time.time()
print("Using Resnet, percentage of human faces detected in human files short: %f%%" % ((float(resnet_human_face_det_count)/len(human_files_short))*100))
print(" and time taken for it is %d secs.\n" % (l_stop_time - l_start_time))
l_start_time = time.time()
# Test dog faces and get the counts
for l_i in range(len(dog_files_short)):
if dog_detector(dog_files_short[l_i]):
resnet_dog_face_det_count += 1
l_stop_time = time.time()
print("Using Resnet, percentage of human faces detected in human files short: %f%%" % ((float(resnet_dog_face_det_count)/len(dog_files_short))*100))
print(" and time taken for it is %d secs.\n" % (l_stop_time - l_start_time))
###Output
Using Resnet, percentage of dogs detected in human files short: 0.000000%
and time taken for it is 5 secs.
Using Resnet, percentage of dogs detected in dog files short: 100.000000%
and time taken for it is 4 secs.
###Markdown
--- Step 3: Create a CNN to Classify Dog Breeds (from Scratch)Now that we have functions for detecting humans and dogs in images, we need a way to predict breed from images. In this step, you will create a CNN that classifies dog breeds. You must create your CNN _from scratch_ (so, you can't use transfer learning _yet_!), and you must attain a test accuracy of at least 1%. In Step 5 of this notebook, you will have the opportunity to use transfer learning to create a CNN that attains greatly improved accuracy.Be careful with adding too many trainable layers! More parameters means longer training, which means you are more likely to need a GPU to accelerate the training process. Thankfully, Keras provides a handy estimate of the time that each epoch is likely to take; you can extrapolate this estimate to figure out how long it will take for your algorithm to train. We mention that the task of assigning breed to dogs from images is considered exceptionally challenging. To see why, consider that *even a human* would have great difficulty in distinguishing between a Brittany and a Welsh Springer Spaniel. (Image pair: Brittany | Welsh Springer Spaniel) It is not difficult to find other dog breed pairs with minimal inter-class variation (for instance, Curly-Coated Retrievers and American Water Spaniels). (Image pair: Curly-Coated Retriever | American Water Spaniel) Likewise, recall that labradors come in yellow, chocolate, and black. Your vision-based algorithm will have to conquer this high intra-class variation to determine how to classify all of these different shades as the same breed. (Image set: Yellow Labrador | Chocolate Labrador | Black Labrador) We also mention that random chance presents an exceptionally low bar: setting aside the fact that the classes are slightly imbalanced, a random guess will provide a correct answer roughly 1 in 133 times, which corresponds to an accuracy of less than 1%. Remember that the practice is far ahead of the theory in deep learning. Experiment with many different architectures, and trust your intuition. And, of course, have fun! Pre-process the DataWe rescale the images by dividing every pixel in every image by 255.
###Code
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
# pre-process the data for Keras
train_tensors = paths_to_tensor(train_files).astype('float32')/255
valid_tensors = paths_to_tensor(valid_files).astype('float32')/255
test_tensors = paths_to_tensor(test_files).astype('float32')/255
###Output
100%|██████████| 6680/6680 [01:12<00:00, 91.61it/s]
100%|██████████| 835/835 [00:08<00:00, 102.40it/s]
100%|██████████| 836/836 [00:08<00:00, 102.96it/s]
###Markdown
(IMPLEMENTATION) Model ArchitectureCreate a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line: model.summary()We have imported some Python modules to get you started, but feel free to import as many modules as you need. If you end up getting stuck, here's a hint that specifies a model that trains relatively fast on CPU and attains >1% test accuracy in 5 epochs: __Question 4:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. If you chose to use the hinted architecture above, describe why you think that CNN architecture should work well for the image classification task.__Answer:__ First, I tried a simple Inception V1 CNN, in which, on top of the inception module, I added a max-pooling layer, a convolution layer and a dense layer for the output classes with relu as the activation function. Secondly, I tried the hinted CNN architecture mentioned above. Thirdly, I modified the hinted CNN architecture by changing the filters parameter. Also, I had to change the batch size to 10 as the GPU could not handle 20 images at once. The results of all three models are given below:

| Model Name | Val Loss | Test Accuracy |
| --- | --- | --- |
| InceptionV1 | 15.94437 | 1.19620% |
| Original Hinted | 3.7448 | 1.19620% |
| Modified Hinted | 3.7448 | 1.19620% |
###Code
from keras.layers import Conv2D, MaxPooling2D, GlobalAveragePooling2D, Input
from keras.layers import Dropout, Flatten, Dense, concatenate
from keras.models import Sequential, Model
def google_small_inception_layer():
#https://keras.io/getting-started/functional-api-guide/
#http://joelouismarino.github.io/blog_posts/blog_googlenet_keras.html
input_img = Input(shape=(224, 224, 3))
tower_1 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_1 = Conv2D(64, (3, 3), padding='same', activation='relu')(tower_1)
tower_2 = Conv2D(64, (1, 1), padding='same', activation='relu')(input_img)
tower_2 = Conv2D(64, (5, 5), padding='same', activation='relu')(tower_2)
tower_3 = MaxPooling2D((3, 3), strides=(1, 1), padding='same')(input_img)
tower_3 = Conv2D(64, (1, 1), padding='same', activation='relu')(tower_3)
first_layer = concatenate([tower_1, tower_2, tower_3], axis=1)
second_layer = MaxPooling2D((3,3), strides=(1,1), padding='valid')(first_layer)
third_layer = Conv2D(8, (1,1), padding='same', activation='relu')(second_layer)
flattened_layer = Flatten()(third_layer)
output_layer = Dense(133, activation='relu')(flattened_layer)
l_model = Model(inputs = input_img, outputs = output_layer)
l_model.summary()
return l_model
def hinted_architecture_original():
l_model = Sequential()
### TODO: Define your architecture.
l_model.add(Conv2D(input_shape=(224, 224, 3), filters=16,kernel_size=2, activation='relu'))
l_model.add(MaxPooling2D(pool_size=2, data_format='channels_last'))
l_model.add(Conv2D(filters=32,kernel_size=2, activation='relu'))
l_model.add(MaxPooling2D(pool_size=2, data_format='channels_last'))
l_model.add(Conv2D(filters=64,kernel_size=2, activation='relu'))
l_model.add(MaxPooling2D(pool_size=2, data_format='channels_last'))
l_model.add(GlobalAveragePooling2D())
l_model.add(Dense(133, activation='relu'))
l_model.summary()
    return l_model
def hinted_architecture_modified():
l_model = Sequential()
### TODO: Define your architecture.
l_model.add(Conv2D(input_shape=(224, 224, 3), filters=8,kernel_size=2, activation='relu'))
l_model.add(MaxPooling2D(pool_size=2, data_format='channels_last'))
l_model.add(Conv2D(filters=16,kernel_size=2, activation='relu'))
l_model.add(MaxPooling2D(pool_size=2, data_format='channels_last'))
l_model.add(Conv2D(filters=32,kernel_size=2, activation='relu'))
l_model.add(MaxPooling2D(pool_size=2, data_format='channels_last'))
l_model.add(GlobalAveragePooling2D())
l_model.add(Dense(133, activation='relu'))
l_model.summary()
    return l_model
model = hinted_architecture_modified()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_16 (Conv2D) (None, 223, 223, 8) 104
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 111, 111, 8) 0
_________________________________________________________________
conv2d_17 (Conv2D) (None, 110, 110, 16) 528
_________________________________________________________________
max_pooling2d_9 (MaxPooling2 (None, 55, 55, 16) 0
_________________________________________________________________
conv2d_18 (Conv2D) (None, 54, 54, 32) 2080
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 27, 27, 32) 0
_________________________________________________________________
global_average_pooling2d_2 ( (None, 32) 0
_________________________________________________________________
dense_4 (Dense) (None, 133) 4389
=================================================================
Total params: 7,101
Trainable params: 7,101
Non-trainable params: 0
_________________________________________________________________
###Markdown
Compile the Model
###Code
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Train the ModelTrain your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss.You are welcome to [augment the training data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html), but this is not a requirement.
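Optionally, the data augmentation mentioned above (not required) can be sketched with Keras's `ImageDataGenerator`. This is a hedged illustration only: it reuses `train_tensors`, `valid_tensors` and the compiled `model` from earlier cells, the transform settings are arbitrary, and older Keras versions use `fit_generator` while newer ones accept the generator directly in `fit`.

```python
# Hedged sketch of data augmentation (not part of the actual training run below).
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=20,      # small random rotations
                             width_shift_range=0.1,  # horizontal shifts
                             height_shift_range=0.1, # vertical shifts
                             horizontal_flip=True)   # mirror images

# Older Keras API shown here; newer versions accept the generator in model.fit().
model.fit_generator(datagen.flow(train_tensors, train_targets, batch_size=10),
                    steps_per_epoch=len(train_tensors) // 10,
                    validation_data=(valid_tensors, valid_targets),
                    epochs=5, verbose=1)
```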
###Code
from keras.callbacks import ModelCheckpoint
### TODO: specify the number of epochs that you would like to use to train the model.
epochs = 5
### Do NOT modify the code below this line.
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.from_scratch.hdf5',
verbose=1, save_best_only=True)
model.fit(train_tensors, train_targets,
validation_data=(valid_tensors, valid_targets),
epochs=epochs, batch_size=10, callbacks=[checkpointer], verbose=1)
###Output
Train on 6680 samples, validate on 835 samples
Epoch 1/5
6670/6680 [============================>.] - ETA: 0s - loss: 4.0742 - acc: 0.0160Epoch 00001: val_loss improved from inf to 3.74480, saving model to saved_models/weights.best.from_scratch.hdf5
6680/6680 [==============================] - 444s 66ms/step - loss: 4.0778 - acc: 0.0160 - val_loss: 3.7448 - val_acc: 0.0168
Epoch 2/5
6670/6680 [============================>.] - ETA: 0s - loss: 4.0742 - acc: 0.0160Epoch 00002: val_loss did not improve
6680/6680 [==============================] - 435s 65ms/step - loss: 4.0778 - acc: 0.0160 - val_loss: 3.7448 - val_acc: 0.0168
Epoch 3/5
6670/6680 [============================>.] - ETA: 0s - loss: 4.0791 - acc: 0.0160Epoch 00003: val_loss did not improve
6680/6680 [==============================] - 435s 65ms/step - loss: 4.0778 - acc: 0.0160 - val_loss: 3.7448 - val_acc: 0.0168
Epoch 4/5
6670/6680 [============================>.] - ETA: 0s - loss: 4.0815 - acc: 0.0159Epoch 00004: val_loss did not improve
6680/6680 [==============================] - 437s 65ms/step - loss: 4.0778 - acc: 0.0160 - val_loss: 3.7448 - val_acc: 0.0168
Epoch 5/5
6670/6680 [============================>.] - ETA: 0s - loss: 4.0815 - acc: 0.0160Epoch 00005: val_loss did not improve
6680/6680 [==============================] - 438s 66ms/step - loss: 4.0778 - acc: 0.0160 - val_loss: 3.7448 - val_acc: 0.0168
###Markdown
Load the Model with the Best Validation Loss
###Code
model.load_weights('saved_models/weights.best.from_scratch.hdf5')
###Output
_____no_output_____
###Markdown
Test the ModelTry out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 1%.
###Code
# get index of predicted dog breed for each image in test set
dog_breed_predictions = [np.argmax(model.predict(np.expand_dims(tensor, axis=0))) for tensor in test_tensors]
# report test accuracy
test_accuracy = 100*np.sum(np.array(dog_breed_predictions)==np.argmax(test_targets, axis=1))/len(dog_breed_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
###Output
Test accuracy: 1.1962%
###Markdown
--- Step 4: Use a CNN to Classify Dog BreedsTo reduce training time without sacrificing accuracy, we show you how to train a CNN using transfer learning. In the following step, you will get a chance to use transfer learning to train your own CNN. Obtain Bottleneck Features
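The bottleneck features loaded below were precomputed for this workspace. Purely for reference, a hedged sketch of how such features could be generated with `keras.applications` is shown here; it assumes 4D image tensors like `train_tensors` holding raw 0-255 pixel values, and the variable name `train_VGG16_manual` is illustrative only.

```python
# Hedged sketch: computing VGG16 bottleneck features instead of loading the .npz file.
from keras.applications.vgg16 import VGG16, preprocess_input

# include_top=False drops VGG-16's fully connected classifier, so predict()
# returns the last convolutional block's output (the "bottleneck" features).
vgg16_base = VGG16(weights='imagenet', include_top=False)
train_VGG16_manual = vgg16_base.predict(preprocess_input(train_tensors.copy()),
                                        batch_size=10, verbose=1)
```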
###Code
bottleneck_features = np.load('/data/bottleneck_features/DogVGG16Data.npz')
train_VGG16 = bottleneck_features['train']
valid_VGG16 = bottleneck_features['valid']
test_VGG16 = bottleneck_features['test']
###Output
_____no_output_____
###Markdown
Model ArchitectureThe model uses the the pre-trained VGG-16 model as a fixed feature extractor, where the last convolutional output of VGG-16 is fed as input to our model. We only add a global average pooling layer and a fully connected layer, where the latter contains one node for each dog category and is equipped with a softmax.
###Code
VGG16_model = Sequential()
VGG16_model.add(GlobalAveragePooling2D(input_shape=train_VGG16.shape[1:]))
VGG16_model.add(Dense(133, activation='softmax'))
VGG16_model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
global_average_pooling2d_3 ( (None, 512) 0
_________________________________________________________________
dense_5 (Dense) (None, 133) 68229
=================================================================
Total params: 68,229
Trainable params: 68,229
Non-trainable params: 0
_________________________________________________________________
###Markdown
Compile the Model
###Code
VGG16_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the Model
###Code
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.VGG16.hdf5',
verbose=1, save_best_only=True)
VGG16_model.fit(train_VGG16, train_targets,
validation_data=(valid_VGG16, valid_targets),
epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)
###Output
Train on 6680 samples, validate on 835 samples
Epoch 1/20
6580/6680 [============================>.] - ETA: 0s - loss: 12.0505 - acc: 0.1286Epoch 00001: val_loss improved from inf to 10.43724, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 280us/step - loss: 12.0458 - acc: 0.1293 - val_loss: 10.4372 - val_acc: 0.2192
Epoch 2/20
6660/6680 [============================>.] - ETA: 0s - loss: 9.8117 - acc: 0.2926Epoch 00002: val_loss improved from 10.43724 to 9.72085, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 256us/step - loss: 9.8121 - acc: 0.2928 - val_loss: 9.7209 - val_acc: 0.2994
Epoch 3/20
6600/6680 [============================>.] - ETA: 0s - loss: 9.2207 - acc: 0.3665Epoch 00003: val_loss improved from 9.72085 to 9.43865, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 250us/step - loss: 9.2203 - acc: 0.3668 - val_loss: 9.4387 - val_acc: 0.3365
Epoch 4/20
6500/6680 [============================>.] - ETA: 0s - loss: 8.9617 - acc: 0.3988Epoch 00004: val_loss improved from 9.43865 to 9.12578, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 249us/step - loss: 8.9568 - acc: 0.3990 - val_loss: 9.1258 - val_acc: 0.3677
Epoch 5/20
6460/6680 [============================>.] - ETA: 0s - loss: 8.6934 - acc: 0.4226Epoch 00005: val_loss improved from 9.12578 to 8.96794, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 255us/step - loss: 8.7015 - acc: 0.4222 - val_loss: 8.9679 - val_acc: 0.3713
Epoch 6/20
6640/6680 [============================>.] - ETA: 0s - loss: 8.5309 - acc: 0.4453Epoch 00006: val_loss improved from 8.96794 to 8.85152, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 258us/step - loss: 8.5360 - acc: 0.4451 - val_loss: 8.8515 - val_acc: 0.3677
Epoch 7/20
6500/6680 [============================>.] - ETA: 0s - loss: 8.3502 - acc: 0.4538Epoch 00007: val_loss improved from 8.85152 to 8.70082, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 280us/step - loss: 8.3524 - acc: 0.4534 - val_loss: 8.7008 - val_acc: 0.3940
Epoch 8/20
6660/6680 [============================>.] - ETA: 0s - loss: 8.2073 - acc: 0.4694Epoch 00008: val_loss improved from 8.70082 to 8.65376, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 248us/step - loss: 8.2045 - acc: 0.4696 - val_loss: 8.6538 - val_acc: 0.3928
Epoch 9/20
6580/6680 [============================>.] - ETA: 0s - loss: 8.1379 - acc: 0.4805Epoch 00009: val_loss improved from 8.65376 to 8.53271, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 243us/step - loss: 8.1610 - acc: 0.4793 - val_loss: 8.5327 - val_acc: 0.4036
Epoch 10/20
6480/6680 [============================>.] - ETA: 0s - loss: 8.1074 - acc: 0.4847Epoch 00010: val_loss did not improve
6680/6680 [==============================] - 2s 247us/step - loss: 8.1085 - acc: 0.4843 - val_loss: 8.6307 - val_acc: 0.3916
Epoch 11/20
6620/6680 [============================>.] - ETA: 0s - loss: 8.0648 - acc: 0.4891Epoch 00011: val_loss did not improve
6680/6680 [==============================] - 2s 258us/step - loss: 8.0736 - acc: 0.4886 - val_loss: 8.6156 - val_acc: 0.3952
Epoch 12/20
6580/6680 [============================>.] - ETA: 0s - loss: 8.0321 - acc: 0.4932Epoch 00012: val_loss did not improve
6680/6680 [==============================] - 2s 251us/step - loss: 8.0353 - acc: 0.4930 - val_loss: 8.5835 - val_acc: 0.4036
Epoch 13/20
6600/6680 [============================>.] - ETA: 0s - loss: 8.0025 - acc: 0.4942Epoch 00013: val_loss improved from 8.53271 to 8.52112, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 276us/step - loss: 8.0000 - acc: 0.4945 - val_loss: 8.5211 - val_acc: 0.4084
Epoch 14/20
6500/6680 [============================>.] - ETA: 0s - loss: 7.9168 - acc: 0.4983Epoch 00014: val_loss improved from 8.52112 to 8.34423, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 254us/step - loss: 7.8939 - acc: 0.4993 - val_loss: 8.3442 - val_acc: 0.4240
Epoch 15/20
6660/6680 [============================>.] - ETA: 0s - loss: 7.6668 - acc: 0.5105Epoch 00015: val_loss improved from 8.34423 to 8.16364, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 243us/step - loss: 7.6620 - acc: 0.5108 - val_loss: 8.1636 - val_acc: 0.4228
Epoch 16/20
6480/6680 [============================>.] - ETA: 0s - loss: 7.5545 - acc: 0.5221Epoch 00016: val_loss improved from 8.16364 to 8.15744, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 248us/step - loss: 7.5911 - acc: 0.5195 - val_loss: 8.1574 - val_acc: 0.4335
Epoch 17/20
6620/6680 [============================>.] - ETA: 0s - loss: 7.5649 - acc: 0.5255Epoch 00017: val_loss improved from 8.15744 to 8.08623, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 245us/step - loss: 7.5666 - acc: 0.5253 - val_loss: 8.0862 - val_acc: 0.4359
Epoch 18/20
6500/6680 [============================>.] - ETA: 0s - loss: 7.5340 - acc: 0.5260Epoch 00018: val_loss did not improve
6680/6680 [==============================] - 2s 254us/step - loss: 7.5382 - acc: 0.5256 - val_loss: 8.1293 - val_acc: 0.4479
Epoch 19/20
6600/6680 [============================>.] - ETA: 0s - loss: 7.4370 - acc: 0.5286Epoch 00019: val_loss improved from 8.08623 to 8.04611, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 268us/step - loss: 7.4193 - acc: 0.5295 - val_loss: 8.0461 - val_acc: 0.4240
Epoch 20/20
6600/6680 [============================>.] - ETA: 0s - loss: 7.3621 - acc: 0.5359Epoch 00020: val_loss improved from 8.04611 to 7.93732, saving model to saved_models/weights.best.VGG16.hdf5
6680/6680 [==============================] - 2s 242us/step - loss: 7.3499 - acc: 0.5364 - val_loss: 7.9373 - val_acc: 0.4419
###Markdown
Load the Model with the Best Validation Loss
###Code
VGG16_model.load_weights('saved_models/weights.best.VGG16.hdf5')
###Output
_____no_output_____
###Markdown
Test the ModelNow, we can use the CNN to test how well it identifies breed within our test dataset of dog images. We print the test accuracy below.
###Code
# get index of predicted dog breed for each image in test set
VGG16_predictions = [np.argmax(VGG16_model.predict(np.expand_dims(feature, axis=0))) for feature in test_VGG16]
# report test accuracy
test_accuracy = 100*np.sum(np.array(VGG16_predictions)==np.argmax(test_targets, axis=1))/len(VGG16_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
###Output
Test accuracy: 44.3780%
###Markdown
Predict Dog Breed with the Model
###Code
from extract_bottleneck_features import *
def VGG16_predict_breed(img_path):
# extract bottleneck features
bottleneck_feature = extract_VGG16(path_to_tensor(img_path))
# obtain predicted vector
predicted_vector = VGG16_model.predict(bottleneck_feature)
# return dog breed that is predicted by the model
return dog_names[np.argmax(predicted_vector)]
###Output
_____no_output_____
###Markdown
--- Step 5: Create a CNN to Classify Dog Breeds (using Transfer Learning)You will now use transfer learning to create a CNN that can identify dog breed from images. Your CNN must attain at least 60% accuracy on the test set.In Step 4, we used transfer learning to create a CNN using VGG-16 bottleneck features. In this section, you must use the bottleneck features from a different pre-trained model. To make things easier for you, we have pre-computed the features for all of the networks that are currently available in Keras. These are already in the workspace, at /data/bottleneck_features. If you wish to download them on a different machine, they can be found at:- [VGG-19](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogVGG19Data.npz) bottleneck features- [ResNet-50](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogResnet50Data.npz) bottleneck features- [Inception](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogInceptionV3Data.npz) bottleneck features- [Xception](https://s3-us-west-1.amazonaws.com/udacity-aind/dog-project/DogXceptionData.npz) bottleneck featuresThe files are encoded as such: Dog{network}Data.npz where `{network}`, in the above filename, can be one of `VGG19`, `Resnet50`, `InceptionV3`, or `Xception`. The above architectures are downloaded and stored for you in the `/data/bottleneck_features/` folder.This means the following will be in the `/data/bottleneck_features/` folder:`DogVGG19Data.npz``DogResnet50Data.npz``DogInceptionV3Data.npz``DogXceptionData.npz` (IMPLEMENTATION) Obtain Bottleneck FeaturesIn the code block below, extract the bottleneck features corresponding to the train, test, and validation sets by running the following: bottleneck_features = np.load('/data/bottleneck_features/Dog{network}Data.npz') train_{network} = bottleneck_features['train'] valid_{network} = bottleneck_features['valid'] test_{network} = bottleneck_features['test']
###Code
### TODO: Obtain bottleneck features from another pre-trained CNN.
network_name = 'Xception'
bottleneck_features = np.load('/data/bottleneck_features/Dog' + network_name + 'Data.npz')
if network_name == 'VGG19':
train_VGG19 = bottleneck_features['train']
valid_VGG19 = bottleneck_features['valid']
test_VGG19 = bottleneck_features['test']
elif network_name == 'Resnet50':
train_Resnet50 = bottleneck_features['train']
valid_Resnet50 = bottleneck_features['valid']
test_Resnet50 = bottleneck_features['test']
elif network_name == 'InceptionV3':
train_InceptionV3 = bottleneck_features['train']
valid_InceptionV3 = bottleneck_features['valid']
test_InceptionV3 = bottleneck_features['test']
elif network_name == 'Xception':
train_Xception = bottleneck_features['train']
valid_Xception = bottleneck_features['valid']
test_Xception = bottleneck_features['test']
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Model ArchitectureCreate a CNN to classify dog breed. At the end of your code cell block, summarize the layers of your model by executing the line: .summary() __Question 5:__ Outline the steps you took to get to your final CNN architecture and your reasoning at each step. Describe why you think the architecture is suitable for the current problem.__Answer:__ Several existing models were analysed with the transfer learning technique, and the accuracy and validation loss values are captured below. The steps followed were: 1. Download the pre-trained model (this gives us both the model and its parameters). 2. Load the train, validation and test data. 3. Pre-process the data (steps 2 and 3 were already done and the results stored as numpy array files). If we stacked new layers on top of the pre-trained layers and trained end to end, training time would be huge; instead we take the predicted output (bottleneck features) of the pre-trained model and feed it as input to the new layers. 4. On top of the bottleneck values from the pre-trained model, construct a simple model with a global average pooling layer and a dense output layer with softmax as the activation function (since these models are already trained on image datasets, we need not add more complicated layers, which would slow training and invite overfitting). 5. Train this new model on the bottleneck features. 6. Validate and test the accuracy of the model with the validation and test data. The results are below; all models were trained for 20 epochs.

| Model Name | Val Loss | Test Accuracy |
|---|---|---|
| VGG19 | 7.39718 | 47.9665% |
| Resnet50 | Could not train | Could not train (shape error in Conv2D layer) |
| InceptionV3 | 0.62263 | 78.9474% |
| Xception | 0.4864 | 84.69% |

As we can see from the results, the transfer learning model built on Xception works best, with an accuracy of about 84%, which I consider a decent and acceptable score. The base model used earlier (Inception V1) was also able to produce some results; Xception is an Inception-style model, but with deeper and more complex layers.
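As a point of comparison, here is a hedged sketch (not part of this notebook's actual pipeline) of the end-to-end alternative referred to in steps 3 and 4: attaching the same GAP + softmax head directly to a frozen Xception base instead of training on precomputed bottleneck features. Every epoch would then re-run the full Xception forward pass on the raw images, which is exactly why the bottleneck-feature route is so much faster. The name `end_to_end_model` is illustrative only.

```python
# Hedged sketch: frozen-base transfer learning without precomputed bottleneck features.
from keras.applications.xception import Xception
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Sequential

xception_base = Xception(weights='imagenet', include_top=False,
                         input_shape=(224, 224, 3))
# Freeze the pre-trained weights; on some Keras versions you may instead need
# to loop: for layer in xception_base.layers: layer.trainable = False
xception_base.trainable = False

end_to_end_model = Sequential()
end_to_end_model.add(xception_base)                    # fixed feature extractor
end_to_end_model.add(GlobalAveragePooling2D())         # same head as in the cell below
end_to_end_model.add(Dense(133, activation='softmax'))
end_to_end_model.compile(loss='categorical_crossentropy',
                         optimizer='rmsprop', metrics=['accuracy'])
```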
###Code
### TODO: Define your architecture.
if network_name == 'VGG19':
network_model = Sequential()
network_model.add(GlobalAveragePooling2D(input_shape=train_VGG19.shape[1:]))
network_model.add(Dense(133, activation='softmax'))
elif network_name == 'Resnet50':
network_model = Sequential()
network_model.add(GlobalAveragePooling2D(input_shape=train_Resnet50.shape[1:]))
network_model.add(Dense(133, activation='softmax'))
elif network_name == 'InceptionV3':
network_model = Sequential()
network_model.add(GlobalAveragePooling2D(input_shape=train_InceptionV3.shape[1:]))
network_model.add(Dense(133, activation='softmax'))
elif network_name == 'Xception':
network_model = Sequential()
network_model.add(GlobalAveragePooling2D(input_shape=train_Xception.shape[1:]))
network_model.add(Dense(133, activation='softmax'))
network_model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
global_average_pooling2d_4 ( (None, 2048) 0
_________________________________________________________________
dense_4 (Dense) (None, 133) 272517
=================================================================
Total params: 272,517
Trainable params: 272,517
Non-trainable params: 0
_________________________________________________________________
###Markdown
(IMPLEMENTATION) Compile the Model
###Code
### TODO: Compile the model.
network_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Train the ModelTrain your model in the code cell below. Use model checkpointing to save the model that attains the best validation loss. You are welcome to [augment the training data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html), but this is not a requirement.
###Code
### TODO: Train the model.
from keras.callbacks import ModelCheckpoint
import os
if network_name == 'VGG19':
train_data = train_VGG19
valid_data = valid_VGG19
test_data = test_VGG19
elif network_name == 'Resnet50':
train_data = train_Resnet50
    valid_data = valid_Resnet50
test_data = test_Resnet50
elif network_name == 'InceptionV3':
train_data = train_InceptionV3
valid_data= valid_InceptionV3
test_data = test_InceptionV3
elif network_name == 'Xception':
train_data = train_Xception
valid_data = valid_Xception
test_data = test_Xception
# Note: the checkpointing and fit calls below run unconditionally; this check
# only reports whether a checkpoint file from a previous run already exists.
if os.path.exists('saved_models/weights.best.'+ network_name + '.hdf5'):
    print("File exists, not doing training")
checkpointer = ModelCheckpoint(filepath='saved_models/weights.best.'+ network_name + '.hdf5',
verbose=1, save_best_only=True)
network_model.fit(train_data, train_targets,
validation_data=(valid_data, valid_targets),
epochs=20, batch_size=20, callbacks=[checkpointer], verbose=1)
###Output
File exists, not doing training
Train on 6680 samples, validate on 835 samples
Epoch 1/20
6660/6680 [============================>.] - ETA: 0s - loss: 1.0517 - acc: 0.7413Epoch 00001: val_loss improved from inf to 0.52563, saving model to saved_models/weights.best.Xception.hdf5
6680/6680 [==============================] - 3s 515us/step - loss: 1.0506 - acc: 0.7415 - val_loss: 0.5256 - val_acc: 0.8371
Epoch 2/20
6660/6680 [============================>.] - ETA: 0s - loss: 0.3969 - acc: 0.8760Epoch 00002: val_loss improved from 0.52563 to 0.49172, saving model to saved_models/weights.best.Xception.hdf5
6680/6680 [==============================] - 3s 448us/step - loss: 0.3969 - acc: 0.8757 - val_loss: 0.4917 - val_acc: 0.8455
Epoch 3/20
6620/6680 [============================>.] - ETA: 0s - loss: 0.3207 - acc: 0.8958Epoch 00003: val_loss improved from 0.49172 to 0.48640, saving model to saved_models/weights.best.Xception.hdf5
6680/6680 [==============================] - 3s 458us/step - loss: 0.3211 - acc: 0.8961 - val_loss: 0.4864 - val_acc: 0.8551
Epoch 4/20
6600/6680 [============================>.] - ETA: 0s - loss: 0.2752 - acc: 0.9139Epoch 00004: val_loss did not improve
6680/6680 [==============================] - 3s 504us/step - loss: 0.2747 - acc: 0.9141 - val_loss: 0.4910 - val_acc: 0.8563
Epoch 5/20
6640/6680 [============================>.] - ETA: 0s - loss: 0.2409 - acc: 0.9239Epoch 00005: val_loss did not improve
6680/6680 [==============================] - 3s 485us/step - loss: 0.2403 - acc: 0.9241 - val_loss: 0.5621 - val_acc: 0.8503
Epoch 6/20
6660/6680 [============================>.] - ETA: 0s - loss: 0.2194 - acc: 0.9327Epoch 00006: val_loss did not improve
6680/6680 [==============================] - 3s 507us/step - loss: 0.2189 - acc: 0.9328 - val_loss: 0.5491 - val_acc: 0.8527
Epoch 7/20
6620/6680 [============================>.] - ETA: 0s - loss: 0.1980 - acc: 0.9409Epoch 00007: val_loss did not improve
6680/6680 [==============================] - 3s 496us/step - loss: 0.1979 - acc: 0.9412 - val_loss: 0.5425 - val_acc: 0.8635
Epoch 8/20
6540/6680 [============================>.] - ETA: 0s - loss: 0.1818 - acc: 0.9443Epoch 00008: val_loss did not improve
6680/6680 [==============================] - 3s 469us/step - loss: 0.1808 - acc: 0.9446 - val_loss: 0.5392 - val_acc: 0.8527
Epoch 9/20
6540/6680 [============================>.] - ETA: 0s - loss: 0.1601 - acc: 0.9514Epoch 00009: val_loss did not improve
6680/6680 [==============================] - 3s 453us/step - loss: 0.1612 - acc: 0.9510 - val_loss: 0.5899 - val_acc: 0.8515
Epoch 10/20
6580/6680 [============================>.] - ETA: 0s - loss: 0.1534 - acc: 0.9556Epoch 00010: val_loss did not improve
6680/6680 [==============================] - 3s 463us/step - loss: 0.1520 - acc: 0.9560 - val_loss: 0.5984 - val_acc: 0.8527
Epoch 11/20
6620/6680 [============================>.] - ETA: 0s - loss: 0.1411 - acc: 0.9574Epoch 00011: val_loss did not improve
6680/6680 [==============================] - 3s 496us/step - loss: 0.1409 - acc: 0.9576 - val_loss: 0.5708 - val_acc: 0.8623
Epoch 12/20
6660/6680 [============================>.] - ETA: 0s - loss: 0.1223 - acc: 0.9637Epoch 00012: val_loss did not improve
6680/6680 [==============================] - 3s 472us/step - loss: 0.1224 - acc: 0.9635 - val_loss: 0.6038 - val_acc: 0.8587
Epoch 13/20
6540/6680 [============================>.] - ETA: 0s - loss: 0.1210 - acc: 0.9645Epoch 00013: val_loss did not improve
6680/6680 [==============================] - 3s 473us/step - loss: 0.1193 - acc: 0.9650 - val_loss: 0.6219 - val_acc: 0.8611
Epoch 14/20
6640/6680 [============================>.] - ETA: 0s - loss: 0.1112 - acc: 0.9657Epoch 00014: val_loss did not improve
6680/6680 [==============================] - 3s 446us/step - loss: 0.1115 - acc: 0.9654 - val_loss: 0.6413 - val_acc: 0.8527
Epoch 15/20
6580/6680 [============================>.] - ETA: 0s - loss: 0.1004 - acc: 0.9710Epoch 00015: val_loss did not improve
6680/6680 [==============================] - 3s 483us/step - loss: 0.0992 - acc: 0.9714 - val_loss: 0.6446 - val_acc: 0.8575
Epoch 16/20
6620/6680 [============================>.] - ETA: 0s - loss: 0.0940 - acc: 0.9725Epoch 00016: val_loss did not improve
6680/6680 [==============================] - 4s 528us/step - loss: 0.0940 - acc: 0.9725 - val_loss: 0.6379 - val_acc: 0.8551
Epoch 17/20
6640/6680 [============================>.] - ETA: 0s - loss: 0.0901 - acc: 0.9744Epoch 00017: val_loss did not improve
6680/6680 [==============================] - 3s 491us/step - loss: 0.0901 - acc: 0.9744 - val_loss: 0.6752 - val_acc: 0.8587
Epoch 18/20
6620/6680 [============================>.] - ETA: 0s - loss: 0.0828 - acc: 0.9754Epoch 00018: val_loss did not improve
6680/6680 [==============================] - 3s 470us/step - loss: 0.0826 - acc: 0.9753 - val_loss: 0.6726 - val_acc: 0.8575
Epoch 19/20
6560/6680 [============================>.] - ETA: 0s - loss: 0.0814 - acc: 0.9768Epoch 00019: val_loss did not improve
6680/6680 [==============================] - 3s 457us/step - loss: 0.0804 - acc: 0.9771 - val_loss: 0.6634 - val_acc: 0.8527
Epoch 20/20
6540/6680 [============================>.] - ETA: 0s - loss: 0.0755 - acc: 0.9803Epoch 00020: val_loss did not improve
6680/6680 [==============================] - 3s 469us/step - loss: 0.0753 - acc: 0.9804 - val_loss: 0.7154 - val_acc: 0.8491
###Markdown
(IMPLEMENTATION) Load the Model with the Best Validation Loss
###Code
### TODO: Load the model weights with the best validation loss.
network_model.load_weights('saved_models/weights.best.'+ network_name + '.hdf5')
###Output
_____no_output_____
###Markdown
(IMPLEMENTATION) Test the ModelTry out your model on the test dataset of dog images. Ensure that your test accuracy is greater than 60%.
###Code
### TODO: Calculate classification accuracy on the test dataset.
network_model_predictions = [np.argmax(network_model.predict(np.expand_dims(feature, axis=0))) for feature in test_data]
# report test accuracy
test_accuracy = 100*np.sum(np.array(network_model_predictions)==np.argmax(test_targets, axis=1))/len(network_model_predictions)
print('Test accuracy: %.4f%%' % test_accuracy)
###Output
Test accuracy: 84.6890%
###Markdown
(IMPLEMENTATION) Predict Dog Breed with the ModelWrite a function that takes an image path as input and returns the dog breed (`Affenpinscher`, `Afghan_hound`, etc) that is predicted by your model. Similar to the analogous function in Step 5, your function should have three steps:1. Extract the bottleneck features corresponding to the chosen CNN model.2. Supply the bottleneck features as input to the model to return the predicted vector. Note that the argmax of this prediction vector gives the index of the predicted dog breed.3. Use the `dog_names` array defined in Step 0 of this notebook to return the corresponding breed.The functions to extract the bottleneck features can be found in `extract_bottleneck_features.py`, and they have been imported in an earlier code cell. To obtain the bottleneck features corresponding to your chosen CNN architecture, you need to use the function extract_{network} where `{network}`, in the above filename, should be one of `VGG19`, `Resnet50`, `InceptionV3`, or `Xception`.
###Code
### TODO: Write a function that takes a path to an image as input
### and returns the dog breed that is predicted by the model.
from extract_bottleneck_features import *
def network_model_predict_breed(img_path):
# extract bottleneck features
if network_name == 'VGG19':
        bottleneck_feature = extract_VGG19(path_to_tensor(img_path))
elif network_name == 'Resnet50':
bottleneck_feature = extract_Resnet50(path_to_tensor(img_path))
elif network_name == 'InceptionV3':
bottleneck_feature = extract_InceptionV3(path_to_tensor(img_path))
elif network_name == 'Xception':
bottleneck_feature = extract_Xception(path_to_tensor(img_path))
# obtain predicted vector
predicted_vector = network_model.predict(bottleneck_feature)
# return dog breed that is predicted by the model
return dog_names[np.argmax(predicted_vector)]
###Output
_____no_output_____
###Markdown
--- Step 6: Write your AlgorithmWrite an algorithm that accepts a file path to an image and first determines whether the image contains a human, dog, or neither. Then,- if a __dog__ is detected in the image, return the predicted breed.- if a __human__ is detected in the image, return the resembling dog breed.- if __neither__ is detected in the image, provide output that indicates an error.You are welcome to write your own functions for detecting humans and dogs in images, but feel free to use the `face_detector` and `dog_detector` functions developed above. You are __required__ to use your CNN from Step 5 to predict dog breed. Some sample output for our algorithm is provided below, but feel free to design your own user experience! (IMPLEMENTATION) Write your Algorithm
###Code
def print_appropriate(p_input, p_image_path):
if p_input in ("human", "dog") :
print("hello, %s!\n" % p_input)
# Use same Image Pipeline as used earlier
img = cv2.imread(p_image_path)
# Convert from BGR to RGB
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        # Plot the image
plt.imshow(cv_rgb)
plt.show()
print("You look like a ...")
print(network_model_predict_breed(p_image_path))
print("\n")
else:
print("Error: niether human face or dog features detected\n")
# Use same Image Pipeline as used earlier
img = cv2.imread(p_image_path)
# Convert from BGR to RGB
cv_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        # Plot the image
plt.imshow(cv_rgb)
plt.show()
### TODO: Write your algorithm.
### Feel free to use as many code cells as needed.
def classify_image_and_breed(img_path):
if face_detector(img_path):
print_appropriate("human", img_path)
elif dog_detector(img_path):
print_appropriate("dog", img_path)
else:
print_appropriate("", img_path)
###Output
_____no_output_____
###Markdown
--- Step 7: Test Your AlgorithmIn this section, you will take your new algorithm for a spin! What kind of dog does the algorithm think that __you__ look like? If you have a dog, does it predict your dog's breed accurately? If you have a cat, does it mistakenly think that your cat is a dog? (IMPLEMENTATION) Test Your Algorithm on Sample Images!Test your algorithm on at least six images on your computer. Feel free to use any images you like. Use at least two human and two dog images. __Question 6:__ Is the output better than you expected :) ? Or worse :( ? Provide at least three possible points of improvement for your algorithm.__Answer:__ The output is better than I expected. My expectation was around 60 to 65% accuracy, but the Xception transfer learning model works better than that on these examples and inputs. The model did not confuse cats and dogs at all, which in itself is a great achievement; an example is provided below. The performance was also very satisfactory: training on 6680 samples took well under 10 minutes. A few improvements I could suggest are: - Include a few more Conv2D layers and compare the results. - When the model predicts that an image is a human, use/train another Xception model that recognizes the ethnicity of the human. - For one of the images (doberman.jpg), the model predicted Manchester terrier. I think this is either because the image is pixelated or because the training data was not distinctive enough; this can be improved.
###Code
## TODO: Execute your algorithm from Step 6 on
## at least 6 images on your computer.
## Feel free to use as many code cells as needed.
classify_image_and_breed("images/bob_dylan.jpg")
classify_image_and_breed("images/narendra_modi.jpg")
classify_image_and_breed("images/dhoni.jpg")
classify_image_and_breed("images/bulldog.jpg")
classify_image_and_breed("images/doberman.jpg")
classify_image_and_breed("images/german_shepherd.jpg")
classify_image_and_breed("images/cat.jpg")
###Output
Error: neither human face nor dog features detected
|
Practica 3/.ipynb_checkpoints/Practice_03_ANN-checkpoint.ipynb | ###Markdown
The _Python_ package [neurolab](https://github.com/zueve/neurolab) provides a working environment for ANNs. Perceptrons We can implement a neural network having a single layer of perceptrons (apart from the input units) using the _neurolab_ package as an instance of the class `newp`. In order to do so we need to provide the following parameters:* `minmax`: a list with the same length as the number of input neurons. The $i$-th element on this list is a list of two numbers, indicating the range of input values for the $i$-th neuron.* `cn`: number of output neurons.* `transf`: activation function (default value is the threshold function).Therefore, when we choose 1 as the value of the parameter `cn`, we will be representing a simple perceptron having as many inputs as the length of the list associated with `minmax`. Let us start by creating a simple perceptron with two inputs, both of them ranging in $[0, 1]$, and with the threshold activation function.
###Code
from neurolab import net
perceptron = net.newp(minmax=[[0, 1], [0, 1]], cn=1)
###Output
_____no_output_____
###Markdown
The instance that we just created has the following attributes:* `inp_minmax`: range of input values.* `co`: number of output neurons.* `trainf`: training function (the only one specific for single-layer perceptrons is the Delta rule).* `errorf`: error function (default value is half of SSE, _sum of squared errors_)The layers of the neural network (input layer does not count, thus in our example there is only one) are stored in a list associated with the attribute `layers`. Each layer is an instance of the class `Layer` and has the following attributes:* `ci`: number of inputs.* `cn`: number of neurons on it.* `co`: number of outputs.* `np`: dictionary with an element `'b'` that stores an array with the neurons' biasses (terms $a_0 w_0$, default value is 0) and an element `'w'` that stores an array with the weights associated with the incoming connections arriving on each neuron (default value is 0).
###Code
print(perceptron.inp_minmax)
print(perceptron.co)
print(perceptron.trainf)
print(perceptron.errorf)
layer = perceptron.layers[0]
print(layer.ci)
print(layer.cn)
print(layer.co)
print(layer.np)
###Output
[[0. 1.]
[0. 1.]]
1
Trainer(TrainDelta)
<neurolab.error.SSE object at 0x00000218F3998E80>
2
1
1
{'w': array([[0., 0.]]), 'b': array([0.])}
###Markdown
Next, let us train the perceptron so that it models the logic gate _and_.First of all, let us define the training set. We shall do it by indicating, on one hand, an array or list of lists with the input values corresponding to the examples, and on the other hand a different array or list of lists with the expected output for each example.
###Code
import numpy
input_values = numpy.array([[0, 0], [0, 1], [1, 0], [1, 1]])
expected_outcomes = numpy.array([[0], [0], [0], [1]])
###Output
_____no_output_____
###Markdown
The method `step` allows us to calculate the output of the neural network for a single example, and the method `sim` for all the examples.
###Code
perceptron.step([1, 1])
perceptron.sim(input_values)
###Output
_____no_output_____
###Markdown
Let us check which is the initial error of the perceptron, before the training.__Important__: the arguments of the error function must be arrays.
###Code
perceptron.errorf(expected_outcomes, perceptron.sim(input_values))
###Output
_____no_output_____
###Markdown
Let us next proceed to train the perceptron. We shall check that, as expected (since the training set is linearly separable), we are able to decrease the value of the error down to zero.__Note__: the method `train` that runs the training algorithm on the neural network returns a list showing the value of the network error after each of the _epochs_. More precisely, an epoch represents the set of operations performed by the training algorithm until all the examples of the training set have been considered.
###Code
perceptron.train(input_values, expected_outcomes)
print(perceptron.layers[0].np)
print(perceptron.errorf(expected_outcomes, perceptron.sim(input_values)))
###Output
{'w': array([[0.02, 0.01]]), 'b': array([-0.02])}
0.0
###Markdown
Feed forward perceptrons Package _neurolab_ implements a feed forward artificial neural network as an instance of the class `newff`. In order to do so, we need to provide the following parameters:* `minmax`: a list with the same length as the number of input neurons. The $i$-th element on this list is a list of two numbers, indicating the range of input values for the $i$-th neuron.* `size`: a list with the same length as the number of layers (except the input layer). The $i$-th element on this list is a number, indicating the number of neurons for the $i$-th layer.* `transf`: a list with the same length as the number of layers (except the input layer). The $i$-th element on this list is the activation function for the neurons of the $i$-th layer (default value is the [hyperbolic tangent](https://en.wikipedia.org/wiki/Hyperbolic_functions)). Next, let us create a neural network with two inputs ranging over $[0, 1]$, one hidden layer having two neurons and an output layer with only one neuron. All neurons should have the sigmoid function as activation function (you may look for further available activation functions at https://pythonhosted.org/neurolab/lib.html#module-neurolab.trans).
###Code
from neurolab import trans
sigmoid_act_fun = trans.LogSig()
my_net = net.newff(minmax=[[0, 1], [0, 1]], size=[2, 1], transf=[sigmoid_act_fun]*2)
###Output
_____no_output_____
###Markdown
The instance that we just created has the following attributes:* `inp_minmax`: range of input values.* `co`: number of output neurons.* `trainf`: training function (default value is [Broyden–Fletcher–Goldfarb–Shanno algorithm](https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm)).* `errorf`: error function (default value is half of SSE, _sum of squared errors_)The layers of the neural network (input layer excluded) are stored in a list associated with the attribute `layers`. Each layer is an instance of the class `Layer` and has the following attributes:* `ci`: number of inputs.* `cn`: number of neurons on it.* `co`: number of outputs.* `np`: dictionary with an element `'b'` that stores an array with the neurons' biasses (terms $a_0 w_0$) and an element `'w'` that stores an array with the weights associated with the incoming connections arriving on each neuron. Default values for the biasses and the weights are calculated following the [Nguyen-Widrow initialization algorithm](https://web.stanford.edu/class/ee373b/nninitialization.pdf).
###Code
print(my_net.inp_minmax)
print(my_net.co)
print(my_net.trainf)
print(my_net.errorf)
hidden_layer = my_net.layers[0]
print(hidden_layer.ci)
print(hidden_layer.cn)
print(hidden_layer.co)
print(hidden_layer.np)
output_layer = my_net.layers[1]
print(output_layer.ci)
print(output_layer.cn)
print(output_layer.co)
print(output_layer.np)
###Output
[[0. 1.]
[0. 1.]]
1
Trainer(TrainBFGS)
<neurolab.error.SSE object at 0x00000218F4DB24C0>
2
2
2
{'w': array([[-7.62253168, -2.14872306],
[-2.38571743, -7.55171188]]), 'b': array([13.73105272, 5.97763134])}
2
1
1
{'w': array([[3.77591676, 4.13551117]]), 'b': array([-7.91142793])}
###Markdown
It is possible to modify the initialization of the biases and weights; you may find the available initialization options at https://pythonhosted.org/neurolab/lib.html#module-neurolab.init. Let us for example set all of them to zero, using the following instructions:
###Code
from neurolab import init
for l in my_net.layers:
l.initf = init.init_zeros
my_net.init()
print(hidden_layer.np)
print(output_layer.np)
###Output
{'w': array([[0., 0.],
[0., 0.]]), 'b': array([0., 0.])}
{'w': array([[0., 0.]]), 'b': array([0.])}
###Markdown
It is also possible to modify the training algorithm; you may find the available implemented options at https://pythonhosted.org/neurolab/lib.html#module-neurolab.train. Let us for example switch to _gradient descent backpropagation_, using the following instructions:
###Code
from neurolab import train
my_net.trainf = train.train_gd
###Output
_____no_output_____
###Markdown
Finally, we can also modify the error function to be used when training; you may find the available options at https://pythonhosted.org/neurolab/lib.html#module-neurolab.error. Let us for example choose the _mean squared error_, using the following instructions:
###Code
from neurolab import error
my_net.errorf = error.MSE()
###Output
_____no_output_____
###Markdown
Next, let us train our neural network so that it models the behaviour of the _xor_ logic gate.First, we need to split our training set into two components: on one hand an array or a list of lists with the input data corresponding to each example, *xor_in* , and on the other hand an array or list of lists with the correct expected ouput for each example, *xor_out* (remember that this time the training set is **not** linearly separable).
###Code
xor_in = numpy.array([[0, 0], [0, 1], [1, 0], [1, 1]])
xor_out = numpy.array([[0], [1], [1], [0]])
###Output
_____no_output_____
###Markdown
Let us measure which is the error associated to the initial neural network before the training starts:
###Code
print(my_net.sim(xor_in))
print(my_net.errorf(xor_out, my_net.sim(xor_in)))
###Output
[[0.5]
[0.5]
[0.5]
[0.5]]
0.25
###Markdown
Let us now proceed to run the training process on the neural network. The functions involved in the training work over the following arguments:* `lr`: _learning rate_, default value 0.01.* `epochs`: maximum number of epochs, default value 500.* `show`: number of epochs that should be executed between two messages in the output log, default value 100.* `goal`: maximum error accepted (halting criterion), default value 0.01.
###Code
my_net.train(xor_in, xor_out, lr=0.1, epochs=50, show=10, goal=0.001)
my_net.sim(xor_in)
###Output
Epoch: 10; Error: 0.25;
Epoch: 20; Error: 0.25;
Epoch: 30; Error: 0.25;
Epoch: 40; Error: 0.25;
Epoch: 50; Error: 0.25;
The maximum number of train epochs is reached
###Markdown
Let us now try a different setting. If we reset the neural network and we choose random numbers as initial values for the weights, we obtain the following:
###Code
numpy.random.seed(3287426346) # we set this init seed only for class, so that we always get
# the same random numbers and we can compare
my_net.reset()
for l in my_net.layers:
l.initf = init.InitRand([-1, 1], 'bw') # 'b' means biases will be modified,
# and 'w' the weights
my_net.init()
my_net.train(xor_in, xor_out, lr=0.1, epochs=10000, show=1000, goal=0.001)
my_net.sim(xor_in)
###Output
Epoch: 1000; Error: 0.20624769635154383;
Epoch: 2000; Error: 0.11489105507175529;
Epoch: 3000; Error: 0.01232716148235405;
Epoch: 4000; Error: 0.005060602419320548;
Epoch: 5000; Error: 0.003064045758926594;
Epoch: 6000; Error: 0.0021673030896454206;
Epoch: 7000; Error: 0.00166543486478401;
Epoch: 8000; Error: 0.0013471000008847842;
Epoch: 9000; Error: 0.0011281639233705698;
The goal of learning is reached
###Markdown
_Iris_ dataset _Iris_ is a classic multivariate dataset that has been exhaustively studied and has become a standard reference when analysing the behaviour of different machine learning algorithms._Iris_ gathers four measurements (length and width of sepal and petal) of 50 flowers of each of the following three species of lilies: _Iris setosa_, _Iris virginica_ and _Iris versicolor_.Let us start by reading the data from the file `iris.csv` that has been provided together with the practice. It suffices to evaluate the following expressions:
###Code
import pandas
iris = pandas.read_csv('iris.csv', header=None,
names=['Sepal length', 'sepal width',
'petal length', 'petal width',
'Species'])
iris.head(10) # Display ten first examples
###Output
_____no_output_____
###Markdown
Next, let us move to use a numerical version of the species instead.Then, we should distribute the examples into two groups: training and test, and split each group into two components: input and expected output (goal).
###Code
#this piece of code might cause an error if wrong version of sklearn
#from sklearn import preprocessing
#from sklearn import model_selection
#iris_training, iris_test = model_selection.train_test_split(
# iris, test_size=.33, random_state=2346523,
# stratify=iris['Species'])
#ohe = preprocessing.OneHotEncoder(sparse = False)
#input_training = iris_training.iloc[:, :4]
#goal_training = ohe.fit_transform(iris_training['Species'].values.reshape(-1, 1))
#input_training = iris_test.iloc[:, :4]
#goal_training = ohe.transform(iris_test['Species'].values.reshape(-1,1))
#################
#try this instead if the previous does not work
import pandas
from sklearn import preprocessing
from sklearn import model_selection
iris2 = pandas.read_csv('iris_enc.csv', header=None,
names=['Sepal length', 'sepal width',
'petal length', 'petal width',
'Species'])
#iris2.head(10) # Display ten first examples
iris_training, iris_test = model_selection.train_test_split(
iris2, test_size=.33, random_state=2346523,
stratify=iris['Species'])
ohe = preprocessing.OneHotEncoder(sparse = False)
input_training = iris_training.iloc[:, :4]
goal_training = ohe.fit_transform(iris_training['Species'].values.reshape(-1, 1))
goal_training[:10] # this displays the 10 first expected output vectors (goal)
# associated with the training set examples
input_test = iris_test.iloc[:, :4]
goal_test = ohe.transform(iris_test['Species'].values.reshape(-1,1))
print(input_training.head(10))
print(goal_training[:10])
print(input_test.head(10))
print(goal_test[0:10])
###Output
Sepal length sepal width petal length petal width
57 4.9 2.4 3.3 1.0
32 5.2 4.1 1.5 0.1
87 6.3 2.3 4.4 1.3
17 5.1 3.5 1.4 0.3
79 5.7 2.6 3.5 1.0
138 6.0 3.0 4.8 1.8
73 6.1 2.8 4.7 1.2
82 5.8 2.7 3.9 1.2
33 5.5 4.2 1.4 0.2
2 4.7 3.2 1.3 0.2
[[0. 1. 0.]
[1. 0. 0.]
[0. 1. 0.]
[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]
[0. 1. 0.]
[0. 1. 0.]
[1. 0. 0.]
[1. 0. 0.]]
###Markdown
__Exercise 1__: define a function **lily_species** that, given an array with three numbers as input, returns the position where the maximum value is.
###Code
def lily_species1(a):
valM = max(a)
for i in range(len(a)):
if(a[i] == valM):
return i
def lily_species2(a):
b=list(a)
return b.index(max(b))
def lily_species3(a):
return numpy.argmax(a)
print(lily_species1(numpy.array([2, 5, 0])))
print(lily_species2(numpy.array([2, 5, 0])))
print(lily_species3(numpy.array([2, 5, 0])))
###Output
1
1
1
###Markdown
__Exercise 2__: Create a feed forward neural network having the following features:1. Has four input neurons, one for each attribute of the iris dataset.2. Has three output neurons, one for each species.3. Has one hidden layer with two neurons.4. All neurons of all layers use the sigmoid as activation function.5. The initial biases and weights are all equal to zero.6. Training method is gradient descent backpropagation.7. The error function is the mean squared error.Once you have created it, train the network over the sets `input_training` and `goal_training`.
###Code
from neurolab import net, trans, init, train, error
sigmoid_act_fun = trans.LogSig()
netEx2 = net.newff(minmax=[[4.0, 8.5], [1.5, 5.0], [0.5, 7.5], [0.0, 3.0]], size=[2,3],
transf=[sigmoid_act_fun]*2)
for l in netEx2.layers:
l.initf = init.init_zeros
netEx2.init()
netEx2.trainf = train.train_gd
netEx2.errorf = error.MSE()
###Output
_____no_output_____
###Markdown
__Exercise 3__: Calculate the performance of the network that was trained on the previous exercise, using to this aim the sets `input_test` and `goal_test`. That is, calculate which fraction of the test set is getting the correct classification predicted by the network.__Hint:__ In order to translate the output of the network and obtain which is the species predicted, use the function from exercise 1.
###Code
netEx2.init()
netEx2.train(input_training, goal_training, lr=0.1, epochs=50, show=10, goal=0.001)
flist = netEx2.sim(input_test)
res = 0
for i in range(len(flist)):
if str(lily_species1(flist[i].tolist())) == str(lily_species1(goal_test[i].tolist())):
res += 1
print("Result: " + str(res / len(flist) * 100) + "%")
###Output
Epoch: 10; Error: 0.22058139862224968;
Epoch: 20; Error: 0.18035997103431817;
Epoch: 30; Error: 0.14500117995857245;
Epoch: 40; Error: 0.12488101563573291;
Epoch: 50; Error: 0.11810431266169673;
The maximum number of train epochs is reached
Result: 68.0%
###Markdown
__Exercise 4__: try to create different variants of the network from exercise 2, by modifying the number of hidden layers and/or the amount of neurons per layer, in such a way that the performance over the test set is improved.
###Code
from neurolab import net, trans, init, train, error
sigmoid_act_fun = trans.LogSig()
netEx4 = net.newff(minmax=[[4.0, 4.5], [1.5, 5.0], [3.5, 7.5], [0.0, 3.0]], size=[3,3],
transf=[sigmoid_act_fun]*2)
for l in netEx4.layers:
l.initf = init.init_zeros
netEx4.init()
netEx4.trainf = train.train_gd
netEx4.errorf = error.MSE()
netEx4.init()
netEx4.train(input_training, goal_training, lr=0.1, epochs=50, show=10, goal=0.001)
flist = netEx4.sim(input_test)
res = 0
for i in range(len(flist)):
if str(lily_species1(flist[i].tolist())) == str(lily_species1(goal_test[i].tolist())):
res += 1
print("Result: " + str(res / len(flist) * 100) + "%")
#################################################################
# Importing the dataset
import pandas
from sklearn import preprocessing
from sklearn import model_selection
transfusion = pandas.read_csv('transfusion.csv', header=None,
names=['Recency (months)', 'Frequency (times)',
'Monetary (c.c. blood)', 'Time (months)',
'Whether he/she donated blood in March 2007'])
transfusion.head(10) # Display ten first examples
transfusion_training, transfusion_test = model_selection.train_test_split(
transfusion, test_size=.33, random_state=2346523,
stratify=transfusion['Whether he/she donated blood in March 2007'])
ohe = preprocessing.OneHotEncoder(sparse = False)
input_training = transfusion_training.iloc[:, :4]
goal_training = ohe.fit_transform(transfusion_training['Whether he/she donated blood in March 2007'].values.reshape(-1, 1))
goal_training[:10]
input_test = transfusion_test.iloc[:, :4]
goal_test = ohe.transform(transfusion_test['Whether he/she donated blood in March 2007'].values.reshape(-1,1))
# Exercise 1
def transfusions(a):
return numpy.argmax(a)
print(transfusions(numpy.array([2, 5, 0])))
# Exercise 2
from neurolab import net, trans, init, train, error
sigmoid_act_fun = trans.LogSig()
netEx2 = net.newff(minmax=[[0.0, 75.5], [1.0, 50.0], [250.0, 12500.5], [2.0, 98.5]], size=[2,2],
transf=[sigmoid_act_fun]*2)
for l in netEx2.layers:
l.initf = init.init_zeros
netEx2.init()
netEx2.trainf = train.train_gd
netEx2.errorf = error.MSE()
# Exercise 3
netEx2.init()
netEx2.train(input_training, goal_training, lr=0.1, epochs=50, show=10, goal=0.001)
flist = netEx2.sim(input_test)
res = 0
for i in range(len(flist)):
if str(transfusions(flist[i].tolist())) == str(transfusions(goal_test[i].tolist())):
res += 1
print("Result: " + str(res / len(flist) * 100) + "%")
# Exercise 4
from neurolab import net, trans, init, train, error
sigmoid_act_fun = trans.LogSig()
netEx4 = net.newff(minmax=[[0.0, 75.5], [1.0, 50.0], [250.0, 12500.5], [2.0, 98.5]], size=[30000,2],
transf=[sigmoid_act_fun]*2)
for l in netEx4.layers:
l.initf = init.init_zeros
netEx4.init()
netEx4.trainf = train.train_gd
netEx4.errorf = error.MSE()
netEx4.init()
netEx4.train(input_training, goal_training, lr=0.1, epochs=50, show=10, goal=0.001)
flist = netEx4.sim(input_test)
res = 0
for i in range(len(flist)):
if str(transfusions(flist[i].tolist())) == str(transfusions(goal_test[i].tolist())):
res += 1
print("Result: " + str(res / len(flist) * 100) + "%")
###Output
Epoch: 10; Error: 0.2375249500998004;
Epoch: 20; Error: 0.2375249500998004;
Epoch: 30; Error: 0.2375249500998004;
Epoch: 40; Error: 0.2375249500998004;
|
C-SVM Regression Frank wolf.ipynb | ###Markdown
C-SVM Regression with the Frank-Wolfe Optimizer from Scratch
###Code
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from scipy.optimize import linprog
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Mathematical model of C-SVM Regression
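Written out in standard notation (a sketch for orientation, using the same $C$, $\varepsilon$ and slack variables as the code below, whose parameter vector stacks $[w, b, \xi, \xi^*]$), the linear $\varepsilon$-insensitive C-SVR primal being minimized is:

$$
\min_{w,\,b,\,\xi,\,\xi^*}\;\; \frac{1}{2}\lVert w \rVert^2 \;+\; C\sum_{i=1}^{n}\left(\xi_i + \xi_i^*\right)
$$

$$
\text{subject to}\quad
y_i - \left(w^\top x_i + b\right) \le \varepsilon + \xi_i,\qquad
\left(w^\top x_i + b\right) - y_i \le \varepsilon + \xi_i^*,\qquad
\xi_i,\ \xi_i^* \ge 0 .
$$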
###Code
# We will not use this function; it is only here to make the objective explicit:
# (1/2)*||w||^2 + C * (sum of the slack variables), with parmtrs stacking [w, b, slacks].
def objec_func(parmtrs):
return (np.dot(parmtrs[0:d],parmtrs[0:d].T))/2+C*sum(parmtrs[d+1:d+2*n+1])
###Output
_____no_output_____
###Markdown
The Frank-Wolfe Algorithm
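As a quick sketch of what the function below implements: at iteration $k$ the objective $f$ is linearized at the current point $z_k$, the resulting linear program is solved over the same feasible polytope $\mathcal{D}$ (the $\varepsilon$-tube constraints above), and the iterate moves toward that solution with the classic diminishing step size:

$$
s_k = \arg\min_{s \in \mathcal{D}} \; \nabla f(z_k)^\top s, \qquad
z_{k+1} = z_k + \gamma_k \,(s_k - z_k), \qquad
\gamma_k = \frac{2}{k+2},
$$

which corresponds to the `linprog` call with `c = grad` and the update `parameters + lr*dparameters` with `lr = 2/(2+k)` in the code.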
###Code
# This function finds an initial feasible point by solving an auxiliary linear program (a phase-1 style problem) with scipy's linprog
def first_point(X):
n,d=X.shape
A=np.concatenate((-1*X,X), axis=0)
A=np.insert(A,len(A[1]),1, axis=1)
list1=[]
for i in range(0,2*n):
line=np.zeros(2*n)
line[i]=-1
list1.append(line)
A=np.concatenate((A,np.array(list1)), axis=1)
A=np.insert(A,len(A[1]),0, axis=1)
A[0][2*n+d+1]=1
c=np.zeros(2*n+d+1)
c=np.append(c,1)
B=np.concatenate((ep-y,ep+y), axis=0)
none_bound=[(None,None) for i in range(0,d+1)]
bound=[(0,None) for i in range(0,2*n+1)]
bounds1=np.concatenate((none_bound,bound), axis=0)
res = linprog(c, A_ub=A, b_ub=B, bounds=bounds1,method='interior-point')
x=np.array(res['x'])
return x[0:2*n+d+1]
# This function implements the Frank-Wolfe algorithm for the C-SVR problem defined above
def frank_walf_svr(X,y,ep,C,n_iter):
parameters=first_point(X)
n,d=X.shape
for k in range(0,n_iter):
# get gradient of objective function
parameters1=parameters
parameters1[d]=0
list_one=np.ones(2*n)
grad=np.concatenate((parameters1[0:d+1],C*list_one) ,axis=0)
A=np.concatenate((-1*X,X), axis=0)
A=np.insert(A,len(A[1]),1, axis=1)
list1=[]
for i in range(0,2*n):
line=np.zeros(2*n)
line[i]=-1
list1.append(line)
A=np.concatenate((A,np.array(list1)), axis=1)
c = grad
B=np.concatenate((ep-y,ep+y), axis=0)
none_bound=[(None,None) for i in range(0,d+1)]
bound=[(0,None) for i in range(0,2*n)]
bounds1=np.concatenate((none_bound,bound), axis=0)
res = linprog(c, A_ub=A, b_ub=B, bounds=bounds1,method='simplex')
res=np.array(res['x'])
dparameters = res-parameters
lr=2/(2+k)
parameters = parameters + lr* dparameters
print("Iteration",k)
return parameters[0:d],parameters[d]
###Output
_____no_output_____
###Markdown
Data
###Code
X, y= make_regression(n_samples=1000, n_features=1,noise=8)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
plt.scatter(X_train, y_train,color='black')
plt.xticks(())
plt.yticks(())
plt.show()
###Output
_____no_output_____
###Markdown
Training
###Code
ep, C, n_iter = 0.0001, 1, 10
w,b=frank_walf_svr(X, y, ep, C, n_iter)
###Output
/home/said/anaconda3/lib/python3.7/site-packages/numpy/core/fromnumeric.py:87: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
###Markdown
Model visualization
###Code
y=np.dot(X_train,w.T)+b
plt.scatter(X_train, y_train,color='black')
plt.plot(X_train,y)
plt.show()
###Output
_____no_output_____
###Markdown
Save model
###Code
np.save('Optimal_wr',w)
np.save('Optimal_br',b)
###Output
_____no_output_____
###Markdown
Load model
###Code
w=np.load('Optimal_wr.npy')
b=np.load('Optimal_br.npy')
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
from sklearn.metrics import r2_score
y_pred=[]
for i in range (X_test.shape[0]):
y_hat=np.dot(w,X_test[i])+b
y_pred.append(y_hat)
print(r2_score(y_test, y_pred))
###Output
0.9301430189782431
|
Can-Make-Arithmetic-Progression-From-Sequence.ipynb | ###Markdown
Can Make Arithmetic Progression From SequenceGiven an array of numbers arr. A sequence of numbers is called an arithmetic progression if the difference between any two consecutive elements is the same.Return true if the array can be rearranged to form an arithmetic progression, otherwise, return false. Problem source: [LeetCode - 1502 - Can Make Arithmetic Progression From Sequence](https://leetcode.com/problems/can-make-arithmetic-progression-from-sequence/) Analysis: Following the problem statement, to decide whether the array forms an arithmetic progression (rearranging is allowed), it suffices to sort the array and then check that every element differs from its predecessor by the same common difference. Sort and traverse
###Code
def canMakeArithmeticProgression(arr):
sort_arr = sorted(arr)
diff = sort_arr[1] - sort_arr[0]
for idx in range(1,len(sort_arr) - 1):
if (sort_arr[idx] + diff != sort_arr[idx + 1]):
return False
return True
print(canMakeArithmeticProgression([3,5,1]))
###Output
_____no_output_____
###Markdown
Without sorting This is essentially not very different from the sort-and-scan approach: instead of taking two numbers from adjacent positions, we repeatedly extract the smallest and the second-smallest remaining elements and compare their difference.
###Code
def canMakeArithmeticProgression2(arr):
diff = None
left = min(arr)
arr.remove(left)
while len(arr) > 0:
right = min(arr)
arr.remove(right)
if (diff == None):
diff = right - left
else:
if (diff != right - left):
return False
left = right
return True
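# Alternative sketch: a linear-time check is also possible (assumes arr has at least two elements).
# An arithmetic progression is fully determined by its minimum, maximum and length, so membership
# can be verified with a set instead of sorting.
def canMakeArithmeticProgressionLinear(arr):
    n = len(arr)
    mn, mx = min(arr), max(arr)
    if (mx - mn) % (n - 1) != 0:
        return False
    d = (mx - mn) // (n - 1)
    if d == 0:
        return len(set(arr)) == 1
    seen = set(arr)
    if len(seen) != n:
        return False
    return all(mn + i * d in seen for i in range(n))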
print(canMakeArithmeticProgression2([3,5,1]))
###Output
_____no_output_____ |
PCA/pricipal_component_analysis.ipynb | ###Markdown
Functions for PCA Steps for PCA: 1. Standardize Data (not necessary, but helpful) 2. Compute covariance matrix 3. Calculate Eigenvectors of covariance matrix 4. Use Eigenvectors to compute projection matrix 5. Multiply the inputs by the projection matrix to get the result 6. Return the result and projection matrix
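As an optional sanity check, the projection computed by the function below can be compared with scikit-learn's built-in `PCA`; up to sign flips of the components the two should agree. This is only a sketch and assumes `X` is a NumPy array and scikit-learn is available.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA as SkPCA

def pca_sklearn(X, k):
    # standardize, then project onto the top-k principal components
    X_std = StandardScaler().fit_transform(X)
    return SkPCA(n_components=k).fit_transform(X_std)

# sign-insensitive comparison against the manual implementation, e.g.:
# Y_manual, _ = PCA(X, 2, should_print=False)
# np.allclose(np.abs(Y_manual), np.abs(pca_sklearn(X, 2)), atol=1e-6)
```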
###Code
# imports used throughout this notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from sklearn.preprocessing import StandardScaler
from tensorflow.examples.tutorials.mnist import input_data  # MNIST loader (TensorFlow 1.x)

# function for calculating PCA
# X is the dataset as a matrix. Rows represent variables
# k is the output dimensionality
# should_print refers to whether the results after each step of calculation should be outputted
# Returns Y, matrix of each variable in the output space
# Return projection_mat, the projection matrix which new variables should be multiplied by to transform them into feature space
def PCA(X, k, should_print=True):
# Standardize the Data!
X_std = StandardScaler().fit_transform(X)
if(should_print):
print("Standardized X: \n", X_std[:10, :])
# We could have just as easily said cov_mat = np.cov(X_std.T)
mean_vec = np.mean(X_std, axis=0)
cov_mat = (X_std - mean_vec).T.dot(X_std - mean_vec) / (X_std.shape[0]-1)
if(should_print):
print("Covariance Matrix: \n", cov_mat)
# now, get the eigenvectors and eigenvalues of the covariance matrix
w,v = np.linalg.eig(cov_mat)
if(should_print):
print("Eigenvalues: ", w)
print("Normalized Eigenvectors: \n", v)
# np.linalg.eig does not guarantee any ordering, so sort the eigenpairs by decreasing eigenvalue
order = np.argsort(w)[::-1]
w = w[order]
v = v[:, order]
# now, lets construct the projection matrix. This will take our original data and reduce its dimensionality
projection_mat = v[:, :k]
if(should_print):
print("Projection Matrix: \n", projection_mat)
# use the projection matrix to get the feature space data points
Y = X_std.dot(projection_mat)
return Y, projection_mat
# A helper function to plot the outputs of PCA in 2D
# Y is the output form PCA. The 2D positions of each variable
# y is the labels. These will be used for coloring the dataset
# classes is an array of possible classes. These will also be used for coloring.
def plot_PCA2D(Y, y, classes, cmap_name='jet', marker=','):
cs = []
cmap = plt.get_cmap(cmap_name)
labels_patches = []
for i, c in enumerate(classes):
indicies = np.where(y==c)
vals = Y[indicies]
plt.scatter(vals[:, 0], vals[:, 1], marker=marker, color=cmap(i/len(classes)))
patch = mpatches.Patch(color=cmap(i/len(classes)), label=classes[i])
labels_patches.append(patch)
plt.legend(labels_patches, classes, loc=2)
plt.show()
###Output
_____no_output_____
###Markdown
Iris Dataset
###Code
df = pd.read_csv(
filepath_or_buffer='https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data',
header=None,
sep=',')
df.columns=['sepal_len', 'sepal_wid', 'petal_len', 'petal_wid', 'class']
classes = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica']
d = 4
print("Dataframe: ", df.tail())
X = df.iloc[:, 0:d].values
y = df.iloc[:, d].values
print("X: \n", X[:10, :])
print("y: \n", y[:10])
Y, proj_mat = PCA(X, 2, should_print=True)
plot_PCA2D(classes=classes, Y=Y, y=y, cmap_name='jet')
plt.title("Iris Dataset Visualized")
plt.close() # close the interactive version of the last plot, so the next one can be interative!
###Output
_____no_output_____
###Markdown
MNIST Dataset
###Code
mnist = input_data.read_data_sets("MNIST_data/", one_hot=False)
d = 784
classes = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
X = np.matrix(mnist.validation.images)
y = np.array(mnist.validation.labels)
print("X: \n", X[:10, :])
print("y: \n", y[:10])
Y, proj_mat = PCA(X, 2, should_print=False)
num_to_visualize=1000
plot_PCA2D(classes=classes, Y=Y[:num_to_visualize], y=y[:num_to_visualize], cmap_name='gist_ncar', marker='x')
plt.title("MNIST Data Visualized")
plt.close()
###Output
_____no_output_____
###Markdown
Pokemon Dataset
###Code
pokemon_data = pd.read_csv(filepath_or_buffer='Pokemon.csv')
print("Dataframe: \n", pokemon_data.head())
# These are the observations we will be using
X = pokemon_data[['HP', 'Attack', 'Defense', 'Sp. Atk', 'Sp. Def', 'Speed', 'Generation', 'Legendary']].values
X = np.matrix(X, dtype=float) # because Legendary is a bool and we want it to be an int
print("X: \n", X)
type1 = pokemon_data['Type 1'].unique()
type2 = pokemon_data['Type 2'].unique()
print("Type 1: \n", type1)
print("Type 2: \n", type2)
y1 = pokemon_data['Type 1']
y2 = pokemon_data['Type 2']
# We are going to use this data multiple times. Once with Type 1 Classes, Once with Type 2 Classes
Y, proj_mat = PCA(X=X, k=2, should_print=False)
plot_PCA2D(Y, y1, type1)
plt.title("Pokemon Type 1s")
# This doesn't tell us much. All the data is almost evenly spread out! Lets try Type 2
plot_PCA2D(Y, y2, type2)
plt.title("Pokemon Type 2s")
# Not much better! Maybe we should try classifyig different generations?
X = pokemon_data[['HP', 'Attack', 'Defense', 'Sp. Atk', 'Sp. Def', 'Speed', 'Legendary']].values
X = np.matrix(X, dtype=float) # because Legendary is a bool and we want it to be an int
print("X: \n", X)
generations = pokemon_data['Generation'].unique()
print("Generations: \n", type1)
y_gen = pokemon_data['Generation']
Y, proj_mat = PCA(X, k=2, should_print=False)
plot_PCA2D(Y, y_gen, generations)
plt.title("Generations of Pokemon")
# Nope! still nothing intersting! Maybe Legendary vs not-legendary will work?
X = pokemon_data[['HP', 'Attack', 'Defense', 'Sp. Atk', 'Sp. Def', 'Speed', 'Generation']].values
X = np.matrix(X, dtype=float) # because Legendary is a bool and we want it to be an int
print("X: \n", X)
classes = pokemon_data['Legendary'].unique()
y_leg = pokemon_data['Legendary']
Y, proj_mat = PCA(X, k=2, should_print=False)
plot_PCA2D(Y, y_leg, classes)
# YAY! Something that looks decently interesting!
# Legendaries have greater values on the pc1 axis! Now that's something your average gamer couldn't tell you!
plt.title("Legendary vs Not-Legendary")
###Output
X:
[[ 45. 49. 49. ..., 65. 45. 1.]
[ 60. 62. 63. ..., 80. 60. 1.]
[ 80. 82. 83. ..., 100. 80. 1.]
...,
[ 80. 110. 60. ..., 130. 70. 6.]
[ 80. 160. 60. ..., 130. 80. 6.]
[ 80. 110. 120. ..., 90. 70. 6.]]
|
Project2/code/.ipynb_checkpoints/knowledge_graph_20181230-checkpoint.ipynb | ###Markdown
Project 1: 利用信息抽取技术搭建知识库 本项目的目的是结合命名实体识别、依存语法分析、实体消歧、实体统一对网站开放语料抓取的数据建立小型知识图谱。 Part1:开发句法结构分析工具 1.1 开发工具使用CYK算法,根据所提供的:非终结符集合、终结符集合、规则集,对以下句子计算句法结构。“the boy saw the dog with a telescope"非终结符集合:N={S, NP, VP, PP, DT, Vi, Vt, NN, IN}终结符集合:{sleeps, saw, boy, girl, dog, telescope, the, with, in}规则集: R={- (1) S-->NP VP 1.0- (2) VP-->VI 0.3- (3) VP-->Vt NP 0.4- (4) VP-->VP PP 0.3- (5) NP-->DT NN 0.8- (6) NP-->NP PP 0.2- (7) PP-->IN NP 1.0- (8) Vi-->sleeps 1.0- (9) Vt-->saw 1.0- (10) NN-->boy 0.1- (11) NN-->girl 0.1- (12) NN-->telescope 0.3- (13) NN-->dog 0.5- (14) DT-->the 0.5- (15) DT-->a 0.5- (16) IN-->with 0.6- (17) IN-->in 0.4}
###Code
# 分数(15)
class my_CYK(object):
def __init__(self, non_ternimal, terminal, rules_prob, start_prob):
self.non_terminal = non_ternimal
self.terminal = terminal
self.rules_prob = rules_prob
self.start_symbol = start_prob
def parse_sentence(self, sentence):
sents = sentence.split()
best_path = [[{} for _ in range(len(sents))] for _ in range(len(sents))]
# initialization
for i in range(len(sents)):
for x in self.non_terminal:
best_path[i][i][x] = {}
if (sents[i],) in self.rules_prob[x].keys():
best_path[i][i][x]['prob'] = self.rules_prob[x][(sents[i],)]
best_path[i][i][x]['path'] = {'split':None, 'rule': sents[i]}
else:
best_path[i][i][x]['prob'] = 0
best_path[i][i][x]['path'] = {'split':None, 'rule': None}
# CKY recursive
for l in range(1, len(sents)):
for i in range(len(sents)-l):
j = i + l
for x in self.non_terminal:
tmp_best_x = {'prob':0, 'path':None}
for key, value in self.rules_prob[x].items():
if key[0] not in self.non_terminal:
break
for s in range(i, j):
tmp_prob = value * best_path[i][s][key[0]]['prob'] * best_path[s+1][j][key[1]]['prob']
if tmp_prob > tmp_best_x['prob']:
tmp_best_x['prob'] = tmp_prob
tmp_best_x['path'] = {'split': s, 'rule': key}
best_path[i][j][x] = tmp_best_x
self.best_path = best_path
# parse result
self._parse_result(0, len(sents)-1, self.start_symbol)
print("prob = ", self.best_path[0][len(sents)-1][self.start_symbol]['prob'])
def _parse_result(self, left_idx, right_idx, root, ind=0):
node = self.best_path[left_idx][right_idx][root]
if node['path']['split'] is not None:
print('\t'*ind, (root, self.rules_prob[root].get(node['path']['rule'])))
self._parse_result(left_idx, node['path']['split'], node['path']['rule'][0], ind+1)
self._parse_result(node['path']['split']+1, right_idx, node['path']['rule'][1], ind+1)
else:
print('\t'*ind, (root, self.rules_prob[root].get((node['path']['rule'],))) )
print('--->', node['path']['rule'])
def main(sentence):
non_terminal = {'S', 'NP', 'VP', 'PP', 'DT', 'Vi', 'Vt', 'NN', 'IN'}
start_symbol = 'S'
terminal = {'sleeps', 'saw', 'boy', 'girl', 'dog', 'telescope', 'the', 'with', 'in'}
rules_prob = {'S': {('NP', 'VP'): 1.0},
'VP': {('Vt', 'NP'): 0.8, ('VP', 'PP'): 0.2},
'NP': {('DT', 'NN'): 0.8, ('NP', 'PP'): 0.2},
'PP': {('IN', 'NP'): 1.0},
'Vi': {('sleeps',): 1.0},
'Vt': {('saw',): 1.0},
'NN': {('boy',): 0.1, ('girl',): 0.1,('telescope',): 0.3,('dog',): 0.5},
'DT': {('the',): 1.0},
'IN': {('with',): 0.6, ('in',): 0.4},
}
cyk = my_CYK(non_terminal, terminal, rules_prob, start_symbol)
cyk.parse_sentence(sentence)
# TODO: 对该测试用例进行测试
# "the boy saw the dog with the telescope"
if __name__ == "__main__":
sentence = "the boy saw the dog with the telescope"
main(sentence)
###Output
('S', 1.0)
('NP', 0.8)
('DT', 1.0)
---> the
('NN', 0.1)
---> boy
('VP', 0.2)
('VP', 0.8)
('Vt', 1.0)
---> saw
('NP', 0.8)
('DT', 1.0)
---> the
('NN', 0.5)
---> dog
('PP', 1.0)
('IN', 0.6)
---> with
('NP', 0.8)
('DT', 1.0)
---> the
('NN', 0.3)
---> telescope
prob = 0.0007372800000000003
###Markdown
1.2 计算算法复杂度计算上一节开发的算法所对应的时间复杂度和空间复杂度。
###Code
# 分数(3)
# 上面所写的算法的时间复杂度和空间复杂度分别是多少?
# TODO
# 时间复杂度 = O(n^3 · |R|)(跨度、起点、分裂点三层循环,并对每个格子遍历所有规则);空间复杂度 = O(n^2 · |N|)(best_path 表)
###Output
_____no_output_____
###Markdown
Part2 基于Bootstrapping,抽取企业股权交易关系,并建立知识库 2.1 练习实体消歧将句中识别的实体与知识库中实体进行匹配,解决实体歧义问题。可利用上下文本相似度进行识别。在data/entity_disambiguation目录中,entity_list.csv是50个实体,valid_data.csv是需要消歧的语句(待添加)。答案提交在submit目录中,命名为entity_disambiguation_submit.csv。格式为:第一列是需要消歧的语句序号,第二列为多个“实体起始位坐标-实体结束位坐标:实体序号”以“|”分隔的字符串。*成绩以实体识别准确率以及召回率综合的F值评分
###Code
# code
# 将识别出的实体与知识库中实体进行匹配,解决识别出一个实体对应知识库中多个实体的问题。
# 将entity_list.csv中已知实体的名称导入分词词典
import jieba
import pandas as pd
entity_data = pd.read_csv('../data/entity_disambiguation/entity_list.csv', encoding = 'gb18030')
entity_dict = {}
for i in range(len(entity_data)):
line = entity_data.iloc[i, :]
for word in line.entity_name.split('|'):
jieba.add_word(word)
if word in entity_dict:
entity_dict[word].append(line.entity_id)
else:
entity_dict[word] = [line.entity_id]
# 对每句句子识别并匹配实体
valid_data = pd.read_csv('../data/entity_disambiguation/valid_data.csv', encoding = 'gb18030')
result_data = []
for i in range(len(valid_data)):
line = valid_data.iloc[i, :]
ret =[] # 存储实体的坐标和序号
loc = 0
window = 10 # 观察上下文的窗口大小
sentence = jieba.lcut(line.sentence)
ret = []
for idx, word in enumerate(sentence):
if word in entity_dict:
# 在候选实体中挑选与上下文相似度最高的实体编号
max_similar = -1
max_entity_id = entity_dict[word][0]
context = sentence[max(0, idx-window):min(len(sentence)-1, idx+window)]
for ids in entity_dict[word]:
similar = len(set(context)&set(jieba.lcut(entity_data[entity_data.entity_id.isin([ids])].reset_index().desc[0])))
if similar > max_similar:
max_similar = similar
max_entity_id = ids
ret.append(str(loc)+'-'+str(loc+len(word))+':'+str(max_entity_id))
loc += len(word)  # 对每个词都累加字符偏移量(此行应位于 if 块之外执行),保证实体坐标正确
result_data.append([i, '|'.join(ret)])
pd.DataFrame(result_data).to_csv('../submit/entity_disambiguation_submit.csv', index=False)
result_data
###Output
_____no_output_____
###Markdown
2.2 实体识别借助开源工具,对实体进行识别。将每句句子中实体识别出,存入实体词典,并用特殊符号替换语句。
###Code
# code
# 首先尝试利用开源工具分出实体
import fool
import pandas as pd
from copy import copy
sample_data = pd.read_csv('../data/info_extract/samples_test.csv', encoding = 'utf-8', header=0)
sample_data['ner'] = None
ner_id = 1001
ner_dict = {} # 存储所有实体
ner_dict_reverse = {} # 存储所有实体
for i in range(len(sample_data)):
sentence = copy(sample_data.iloc[i, 1])
words, ners = fool.analysis(sentence)
ners[0].sort(key=lambda x:x[0], reverse=True)
print(ners)
for start, end, ner_type, ner_name in ners[0]:
if ner_name not in ner_dict:
ner_dict[ner_name] = ner_id
ner_dict_reverse[ner_id] = ner_name
ner_id+=1
sentence = sentence[:start] + ' ner_' + str(ner_dict[ner_name]) + '_ ' + sentence[end-1:]
sample_data.iloc[i, 2] = sentence
sample_data
###Output
_____no_output_____
###Markdown
2.3 实体统一对同一实体具有多个名称的情况进行统一,将多种称谓统一到一个实体上,并体现在实体的属性中(可以给实体建立“别称”属性)公司名称有其特点,例如后缀可以省略、上市公司的地名可以省略等等。在data/dict目录中提供了几个词典,可供实体统一使用。- company_suffix.txt是公司的通用后缀词典- company_business_scope.txt是公司经营范围常用词典- co_Province_Dim.txt是省份词典- co_City_Dim.txt是城市词典- stopwords.txt是可供参考的停用词
###Code
# code
import jieba
import jieba.posseg as pseg
import re
import datetime
#功能:从输入的“公司名”中提取主体(列表形式)
def main_extract(input_str,stop_word,d_4_delete,d_city_province):
input_str = replace(input_str,d_city_province)
#开始分词
seg = pseg.cut(input_str)
seg_lst = []
for w in seg:
elmt = w.word
if elmt not in d_4_delete:
seg_lst.append(elmt)
seg_lst = remove_word(seg_lst,stop_word)
seg_lst = city_prov_ahead(seg_lst,d_city_province)
return seg_lst
#功能:将list中地名提前
def city_prov_ahead(seg_lst,d_city_province):
city_prov_lst = []
for seg in seg_lst:
if seg in d_city_province:
city_prov_lst.append(seg)
seg_lst.remove(seg)
city_prov_lst.sort()
return city_prov_lst+seg_lst
#功能:去除停用词
def remove_word(seg,sw):
ret = []
for i in range(len(seg)):
if seg[i] not in sw:
ret.append(seg[i])
return ret
#功能:替换com,dep的内容
def replace(com,d_city_province):
#————————公司、部门
#替换
#'*'
com = re.sub(r'(\*)*(\#)*(\-)*(\—)*(\~)*(\.)*(\/)*(\?)*(\!)*(\?)*(\")*','',com)
#'、'
com = re.sub(r'(\、)*','',com)
#'+'
com = re.sub(r'(\+)*','',com)
#','
com = re.sub(r'(\,)+',' ',com)
#','
com = re.sub(r'(\,)+',' ',com)
#':'
com = re.sub(r'(\:)*','',com)
#[]【】都删除
com = re.sub(r'\【.*?\】','',com)
com = re.sub(r'\[.*?\]','',com)
#数字在结尾替换为‘’
com = re.sub(r'\d*$',"",com)
#' '或‘<’替换为‘’
com = re.sub(r'(>)*( )*(<)*',"",com)
#地名
com = re.sub(r'\(',"(",com)
com = re.sub(r'\)',")",com)
pat = re.search(r'\(.+?\)',com)
while pat:
v = pat.group()[3:-3]
start = pat.span()[0]
end = pat.span()[1]
if v not in d_city_province:
com = com[:start]+com[end:]
else:
com = com[:start]+com[start+3:end-3]+com[end:]
pat = re.search(r'\(.+?\)',com)
#()()
com = re.sub(r'(\()*(\))*(\()*(\))*','',com)
#全数字
com = re.sub(r'^(\d)+$',"",com)
return com
#初始加载步骤
#输出“用来删除的字典”和“stop word”
def my_initial():
fr1 = open(r"../data/dict/co_City_Dim.txt", encoding='utf-8')
fr2 = open(r"../data/dict/co_Province_Dim.txt", encoding='utf-8')
fr3 = open(r"../data/dict/company_business_scope.txt", encoding='utf-8')
fr4 = open(r"../data/dict/company_suffix.txt", encoding='utf-8')
#城市名
lines1 = fr1.readlines()
d_4_delete = []
d_city_province = [re.sub(r'(\r|\n)*','',line) for line in lines1]
#省份名
lines2 = fr2.readlines()
l2_tmp = [re.sub(r'(\r|\n)*','',line) for line in lines2]
d_city_province.extend(l2_tmp)
#公司后缀
lines3 = fr3.readlines()
l3_tmp = [re.sub(r'(\r|\n)*','',line) for line in lines3]
lines4 = fr4.readlines()
l4_tmp = [re.sub(r'(\r|\n)*','',line) for line in lines4]
d_4_delete.extend(l4_tmp)
#get stop_word
fr = open(r'../data/dict/stopwords.txt', encoding='utf-8')
stop_word = fr.readlines()
stop_word_after = [re.sub(r'(\r|\n)*','',stop_word[i]) for i in range(len(stop_word))]
stop_word_after[-1] = stop_word[-1]
stop_word = stop_word_after
return d_4_delete,stop_word,d_city_province
d_4_delete,stop_word,d_city_province = my_initial()
company_name = "河北银行股份有限公司"
lst = main_extract(company_name,stop_word,d_4_delete,d_city_province)
company_name = ''.join(lst) # 对公司名提取主体部分,将包含相同主体部分的公司统一为一个实体
print(company_name)
# 在语句中统一实体
import fool
import pandas as pd
from copy import copy
sample_data = pd.read_csv('../data/info_extract/samples_test.csv', encoding = 'utf-8', header=0)
sample_data['ner'] = None
ner_id = 1001
ner_dict_new = {} # 存储所有实体
ner_dict_reverse_new = {} # 存储所有实体
for i in range(len(sample_data)):
sentence = copy(sample_data.iloc[i, 1])
words, ners = fool.analysis(sentence)
ners[0].sort(key=lambda x:x[0], reverse=True)
print(ners)
for start, end, ner_type, ner_name in ners[0]:
company_main_name = ''.join(main_extract(ner_name,stop_word,d_4_delete,d_city_province)) # 提取公司主体名称
if company_main_name not in ner_dict_new:
ner_dict_new[company_main_name] = ner_id
ner_dict_reverse_new[ner_id] = company_main_name
ner_id+=1
sentence = sentence[:start] + ' ner_' + str(ner_dict_new[company_main_name]) + '_ ' + sentence[end-1:]
sample_data.iloc[i, 2] = sentence
###Output
_____no_output_____
###Markdown
2.4 关系抽取借助句法分析工具,和实体识别的结果,以及正则表达式,设定模版抽取关系,并存储进图数据库。本次要求抽取股权交易关系,关系为有向边,由投资方指向被投资方。模板建立可以使用“正则表达式”、“实体间距离”、“实体上下文”、“依存句法”等。答案提交在submit目录中,命名为info_extract_submit.csv和info_extract_entity.csv。- info_extract_entity.csv格式为:第一列是实体编号,第二列是实体名(多个实体名用“|”分隔)- info_extract_submit.csv格式为:第一列是关系发起方实体编号,第二列为关系接收方实体编号。*成绩以抽取的关系准确率以及召回率综合的F值评分 建立种子模板
###Code
# code
# 最后提交文件为识别出的整个投资图谱,以及图谱中结点列表与属性。
# 建立模板
import re
rep1 = re.compile(r'(ner_\d\d\d\d)_\s+收购\s+(ner_\d\d\d\d)_') # 例子模板
relation_list = [] # 存储已经提取的关系
for i in range(len(sample_data)):
sentence = copy(sample_data.iloc[i, 2])
for v in rep1.findall(sentence):
relation_list.append(v)
relation_list
###Output
_____no_output_____
###Markdown
利用bootstrapping搜索
###Code
# code
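# A minimal bootstrapping sketch: it assumes `sample_data` (column 2 holds the ner_XXXX-substituted
# sentences), the seed template `rep1` and the seed pairs in `relation_list` from the cells above.
import re
from copy import copy

seed_pairs = set(relation_list)
patterns = {rep1.pattern}
for _ in range(3):  # a few bootstrapping rounds
    # 1) induce new surface patterns from sentences that contain an already known pair
    for i in range(len(sample_data)):
        sentence = copy(sample_data.iloc[i, 2])
        for a, b in seed_pairs:
            m = re.search(re.escape(a) + r'_\s*(.{1,10}?)\s*' + re.escape(b) + '_', sentence)
            if m and m.group(1).strip():
                patterns.add(r'(ner_\d{4})_\s*' + re.escape(m.group(1).strip()) + r'\s*(ner_\d{4})_')
    # 2) apply all patterns to harvest new entity pairs
    for i in range(len(sample_data)):
        sentence = copy(sample_data.iloc[i, 2])
        for pat in patterns:
            for v in re.findall(pat, sentence):
                seed_pairs.add(v)
relation_list = list(seed_pairs)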
###Output
_____no_output_____
###Markdown
2.5 存储进图数据库本次作业我们使用neo4j作为图数据库,neo4j需要java环境,请先配置好环境。
###Code
from py2neo import Node, Relationship, Graph
graph = Graph(
"http://localhost:7474",
username="neo4j",
password="person"
)
for v in relation_list:
a = Node('Company', name=v[0])
b = Node('Company', name=v[1])
r = Relationship(a, 'INVEST', b)
s = a | b | r
graph.create(s)
r
###Output
_____no_output_____ |
ex_fashion_mnist/trim_scores.ipynb | ###Markdown
Attributions on the low- and high-frequency components
###Code
# mask band
img_size = 28
mask = 1 - freq_band(img_size, band_center, band_width_lower, band_width_upper)
scores1 = {
'IG': [],
'DeepLift': [],
'SHAP': [],
'Saliency': [],
'InputXGradient': []
}
attr_methods = ['IG', 'DeepLift', 'SHAP', 'Saliency', 'InputXGradient']
for batch_idx, (im, label) in enumerate(test_loader):
im = im.to(device)
im_t = t(im)
label = label.to(device)
# attr
results = get_attributions(im_t, model1_t, class_num=label,
attr_methods = ['IG', 'DeepLift', 'SHAP', 'Saliency', 'InputXGradient'])
for i, name in enumerate(attr_methods):
low_attr = (fftshift(results[name]) * mask).sum(axis=(1,2))
high_attr = (fftshift(results[name]) * (1-mask)).sum(axis=(1,2))
scores1[name].append(np.c_[low_attr, high_attr])
print('\riteration:', batch_idx, end='')
scores1['IG'] = np.vstack(scores1['IG'])
scores1['DeepLift'] = np.vstack(scores1['DeepLift'])
scores1['SHAP'] = np.vstack(scores1['SHAP'])
scores1['Saliency'] = np.vstack(scores1['Saliency'])
scores1['InputXGradient'] = np.vstack(scores1['InputXGradient'])
# pkl.dump(scores, open('scores.pkl', 'wb'))
plt.hist(scores1['IG'][:,0] - scores1['IG'][:,1])
plt.show()
plt.hist(scores1['DeepLift'][:,0] - scores1['DeepLift'][:,1])
plt.show()
plt.hist(scores1['SHAP'][:,0] - scores1['SHAP'][:,1])
plt.show()
plt.hist(scores1['Saliency'][:,0] - scores1['Saliency'][:,1])
plt.show()
plt.hist(scores1['InputXGradient'][:,0] - scores1['InputXGradient'][:,1])
plt.show()
###Output
_____no_output_____
###Markdown
Attributions on the low- and high-frequency components
###Code
# mask band
img_size = 28
mask = 1 - freq_band(img_size, band_center, band_width_lower, band_width_upper)
scores2 = {
'IG': [],
'DeepLift': [],
'SHAP': [],
'Saliency': [],
'InputXGradient': []
}
attr_methods = ['IG', 'DeepLift', 'SHAP', 'Saliency', 'InputXGradient']
for batch_idx, (im, label) in enumerate(test_loader):
im = im.to(device)
im_t = t(im)
label = label.to(device)
# attr
results = get_attributions(im_t, model2_t, class_num=label,
attr_methods = ['IG', 'DeepLift', 'SHAP', 'Saliency', 'InputXGradient'])
for i, name in enumerate(attr_methods):
low_attr = (fftshift(results[name]) * mask).sum(axis=(1,2))
high_attr = (fftshift(results[name]) * (1-mask)).sum(axis=(1,2))
scores2[name].append(np.c_[low_attr, high_attr])
print('\riteration:', batch_idx, end='')
scores2['IG'] = np.vstack(scores2['IG'])
scores2['DeepLift'] = np.vstack(scores2['DeepLift'])
scores2['SHAP'] = np.vstack(scores2['SHAP'])
scores2['Saliency'] = np.vstack(scores2['Saliency'])
scores2['InputXGradient'] = np.vstack(scores2['InputXGradient'])
# pkl.dump(scores, open('scores.pkl', 'wb'))
plt.hist(scores2['IG'][:,0] - scores2['IG'][:,1])
plt.show()
plt.hist(scores2['DeepLift'][:,0] - scores2['DeepLift'][:,1])
plt.show()
plt.hist(scores2['SHAP'][:,0] - scores2['SHAP'][:,1])
plt.show()
plt.hist(scores2['Saliency'][:,0] - scores2['Saliency'][:,1])
plt.show()
plt.hist(scores2['InputXGradient'][:,0] - scores2['InputXGradient'][:,1])
plt.show()
###Output
_____no_output_____
###Markdown
Attributions on the low- and high-frequency components
###Code
# mask band
img_size = 28
mask = 1 - freq_band(img_size, band_center, band_width_lower, band_width_upper)
scores3 = {
'IG': [],
'DeepLift': [],
'SHAP': [],
'Saliency': [],
'InputXGradient': []
}
attr_methods = ['IG', 'DeepLift', 'SHAP', 'Saliency', 'InputXGradient']
for batch_idx, (im, label) in enumerate(test_loader):
im = im.to(device)
im_t = t(im)
label = label.to(device)
# attr
results = get_attributions(im_t, model3_t, class_num=label,
attr_methods = ['IG', 'DeepLift', 'SHAP', 'Saliency', 'InputXGradient'])
for i, name in enumerate(attr_methods):
low_attr = (fftshift(results[name]) * mask).sum(axis=(1,2))
high_attr = (fftshift(results[name]) * (1-mask)).sum(axis=(1,2))
scores3[name].append(np.c_[low_attr, high_attr])
print('\riteration:', batch_idx, end='')
scores3['IG'] = np.vstack(scores3['IG'])
scores3['DeepLift'] = np.vstack(scores3['DeepLift'])
scores3['SHAP'] = np.vstack(scores3['SHAP'])
scores3['Saliency'] = np.vstack(scores3['Saliency'])
scores3['InputXGradient'] = np.vstack(scores3['InputXGradient'])
# pkl.dump(scores, open('scores.pkl', 'wb'))
plt.hist(scores3['IG'][:,0] - scores3['IG'][:,1])
plt.show()
plt.hist(scores3['DeepLift'][:,0] - scores3['DeepLift'][:,1])
plt.show()
plt.hist(scores3['SHAP'][:,0] - scores3['SHAP'][:,1])
plt.show()
plt.hist(scores3['Saliency'][:,0] - scores3['Saliency'][:,1])
plt.show()
plt.hist(scores3['InputXGradient'][:,0] - scores3['InputXGradient'][:,1])
plt.show()
plt.hist(scores1['IG'][:,0] - scores1['IG'][:,1], bins=50)
plt.show()
plt.hist(scores2['IG'][:,0] - scores2['IG'][:,1], bins=50)
plt.show()
plt.hist(scores3['IG'][:,0] - scores3['IG'][:,1], bins=50)
plt.show()
###Output
_____no_output_____ |
SU-2017-LAB04-Ansambli-i-procjena-parametara.ipynb | ###Markdown
Sveučilište u Zagrebu Fakultet elektrotehnike i računarstva Strojno učenje 2017/2018 http://www.fer.unizg.hr/predmet/su ------------------------------ Laboratorijska vježba 4: Ansambli i procjena parametara*Verzija: 0.1 Zadnji put ažurirano: 11. prosinca 2017.*(c) 2015-2017 Jan Šnajder, Domagoj Alagić Objavljeno: **11. prosinca 2017.** Rok za predaju: **18. prosinca 2017. u 07:00h**------------------------------ Objavljeno: **6. prosinca 2016.**Rok za predaju: U terminu vježbe u tjednu od **12. prosinca 2016.** UputeČetvrta laboratorijska vježba sastoji se od **četiri** zadatka. Kako bi kvalitetnije, ali i na manje zamoran način usvojili gradivo ovog kolegija, potrudili smo se uključiti tri vrste zadataka: **1)** implementacija manjih algoritama, modela ili postupaka; **2)** eksperimenti s raznim modelima te njihovim hiperparametrima, te **3)** primjena modela na (stvarnim) podatcima. Ovim zadatcima pokrivamo dvije paradigme učenja: učenje izgradnjom (engl. *learning by building*) i učenje eksperimentiranjem (engl. *learning by experimenting*).U nastavku slijedite upute navedene u ćelijama s tekstom. Rješavanje vježbe svodi se na **dopunjavanje ove bilježnice**: umetanja ćelije ili više njih **ispod** teksta zadatka, pisanja odgovarajućeg kôda te evaluiranja ćelija. Osigurajte da u potpunosti **razumijete** kôd koji ste napisali. Kod predaje vježbe, morate biti u stanju na zahtjev asistenta (ili demonstratora) preinačiti i ponovno evaluirati Vaš kôd. Nadalje, morate razumjeti teorijske osnove onoga što radite, u okvirima onoga što smo obradili na predavanju. Ispod nekih zadataka možete naći i pitanja koja služe kao smjernice za bolje razumijevanje gradiva (**nemojte pisati** odgovore na pitanja u bilježnicu). Stoga se nemojte ograničiti samo na to da riješite zadatak, nego slobodno eksperimentirajte. To upravo i jest svrha ovih vježbi.Vježbe trebate raditi **samostalno**. Možete se konzultirati s drugima o načelnom načinu rješavanja, ali u konačnici morate sami odraditi vježbu. U protivnome vježba nema smisla.
###Code
# Učitaj osnovne biblioteke...
import sklearn
import mlutils
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
1. Ansambli (glasovanje) (a)Vaš je zadatak napisati razred `VotingClassifierDIY` koji implementira glasački ansambl. Konstruktor razreda ima **dva** parametra: `clfs` koji predstavlja listu klasifikatora (objekata iz paketa `sklearn`) i `voting_scheme` koji označava radi li se o glasovanju prebrojavanjem (`SCHEME_COUNTING`) ili usrednjavanjem (`SCHEME_AVERAGING`). Glasovanje prebrojavanjem jednostavno vraća najčešću oznaku klase, dok glasovanje usrednjavanjem uprosječuje pouzdanosti klasifikacije u neku klasu (po svim klasifikatorima) te vraća onu s najvećom pouzdanošću. Primijetite da svi klasifikatori imaju jednake težine. O komplementarnosti klasifikatora vodimo računa tako da koristimo jednake klasifikatore s različitim hiperparametrima.Razred sadržava metode `fit(X, y)` za učenje ansambla i dvije metode za predikciju: `predict(X)` i `predict_proba(X)`. Prva vraća predviđene oznake klasa, a druga vjerojatnosti pripadanja svakoj od klasa za svaki od danih primjera iz `X`.**NB:** Jedan od razreda koji bi Vam mogao biti koristan jest [`collections.Counter`](https://docs.python.org/2/library/collections.htmlcollections.Counter). Također vrijedi i za funkcije [`numpy.argmax`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html) i [`numpy.dstack`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dstack.html).
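Podsjetnik: shema usrednjavanja odgovara pravilu $$\hat{y}(\mathbf{x}) = \arg\max_{j} \frac{1}{L}\sum_{l=1}^{L} P_l(y=j\mid\mathbf{x}),$$ dok shema prebrojavanja jednostavno vraća najčešću od oznaka $h_1(\mathbf{x}),\dots,h_L(\mathbf{x})$.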
###Code
from collections import Counter
class VotingClassifierDIY(object):
SCHEME_COUNTING = "counting"
SCHEME_AVERAGING = "averaging"
def __init__(self, clfs, voting_scheme=SCHEME_COUNTING):
self.clfs = clfs
self.voting_scheme = voting_scheme
def fit(self, X, y):
for clf in self.clfs:
clf.fit(X, y)
def predict_proba(self, X):
return np.mean([clf.predict_proba(X) for clf in self.clfs], axis=0)
def predict(self, X):
if self.voting_scheme == self.SCHEME_COUNTING:
return np.asarray([Counter([clf.predict(x.reshape(1, -1)).item() for clf in self.clfs]).most_common(1)[0][0] for x in X])
elif self.voting_scheme == self.SCHEME_AVERAGING:
return self.clfs[0].classes_[np.argmax(self.predict_proba(X), axis=1)]
###Output
_____no_output_____
###Markdown
(b)Uvjerite se da Vaša implementacija radi jednako onoj u razredu [`ensemble.VotingClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingClassifier.html), i to pri oba načina glasovanja (parametar `voting`). Parametar `weights` ostavite na pretpostavljenoj vrijednosti. Za ovu provjeru koristite tri klasifikatora logističke regresije s različitom stopom regularizacije i brojem iteracija. Koristite skup podataka dan u nastavku. Ekvivalentnost implementacije najlakše je provjeriti usporedbom izlaza funkcije `predict` (kod prebrojavanja) i funkcije `predict_proba` (kod usrednjavanja).**NB:** Ne koristimo SVM jer njegova ugrađena (probabilistička) implementacija nije posve deterministička, što bi onemogućilo robusnu provjeru Vaše implementacije.
###Code
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
X_voting, y_voting = make_classification(n_samples=1000, n_features=4, n_redundant=0, n_informative=3, n_classes=3, n_clusters_per_class=2)
clf1 = LogisticRegression(C=0.1, max_iter=10)
clf2 = LogisticRegression(C=1, max_iter=100)
clf3 = LogisticRegression(C=100, max_iter=1000)
eclf_hard_ots = VotingClassifier(estimators=[('lr1', clf1), ('lr2', clf2), ('lr3', clf3)], voting='hard')
eclf_hard_diy = VotingClassifierDIY([clf1, clf2, clf3], voting_scheme="counting")
eclf_hard_ots.fit(X_voting, y_voting)
eclf_hard_diy.fit(X_voting, y_voting)
print((eclf_hard_ots.predict(X_voting) == eclf_hard_diy.predict(X_voting)).all())
eclf_soft_ots = VotingClassifier(estimators=[('lr1', clf1), ('lr2', clf2), ('lr3', clf3)], voting='soft')
eclf_soft_diy = VotingClassifierDIY([clf1, clf2, clf3], voting_scheme="averaging")
eclf_soft_ots.fit(X_voting, y_voting)
eclf_soft_diy.fit(X_voting, y_voting)
print((eclf_soft_ots.predict_proba(X_voting) == eclf_soft_diy.predict_proba(X_voting)).all())
print((eclf_soft_ots.predict(X_voting) == eclf_soft_diy.predict(X_voting)).all())
###Output
True
True
###Markdown
**Q:** Kada je prebrojavanje bolje od usrednjavanja? Zašto? A obratno? **Q:** Bi li se ovakav algoritam mogao primijeniti na regresiju? Kako? **A:** Prebrojavanje je bolje kod klasifikatora s izračunatim težinama, dok je usrednjavanje bolje kod izjednačenih/nepoznatih težina klasifikatora, jer daje prednost "samouvjerenijim" klasifikatorima.**A:** Algoritam se može primijeniti na regresiju tako da ansambl umjesto klase $y$ za koju je zbroj glasova najveći, vraća uprosječeni umnožak $w_j h_j(x)$ po svim klasifikatorima. 2. Ansambli (*bagging*) U ovom zadatku ćete isprobati tipičnog predstavnika *bagging*-algoritma, **algoritam slučajnih šuma**. Pitanje na koje želimo odgovoriti jest kako se ovakvi algoritmi nose s prenaučenošću, odnosno, smanjuje li *bagging* varijancu modela.Eksperiment ćete provesti na danom skupu podataka:
###Code
from sklearn.model_selection import train_test_split
X_bag, y_bag = make_classification(n_samples=1000, n_features=20, n_redundant=1, n_informative=17, n_classes=3, n_clusters_per_class=2)
X_bag_train, X_bag_test, y_bag_train, y_bag_test = train_test_split(X_bag, y_bag, train_size=0.7, random_state=69)
###Output
_____no_output_____
###Markdown
Razred koji implementira stablo odluke jest [`tree.DecisionTreeClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html). Prvo naučite **stablo odluke** (engl. *decision tree*) na skupu za učenje, ali tako da je taj model presložen. To možete postići tako da povećate najveću moguću dubinu stabla (parametar `max_depth`). Ispišite pogrešku na skupu za ispitivanje (pogrešku 0-1; pogledajte paket [`metrics`](http://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.metrics)).
###Code
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import zero_one_loss
clf = DecisionTreeClassifier()
clf.fit(X_bag_train, y_bag_train)
print(zero_one_loss(y_bag_test, clf.predict(X_bag_test)))
###Output
0.46
###Markdown
Sada isprobajte algoritam slučajnih šuma (dostupan u razredu [`ensemble.RandomForestClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)) za različit broj stabala $L \in [1, 30]$. Iscrtajte pogrešku na skupu za učenje i na skupu za ispitivanje u ovisnosti o tom hiperparametru. Ispišite najmanju pogrešku na skupu za ispitivanje.
###Code
from sklearn.ensemble import RandomForestClassifier
numbers_of_trees = np.arange(1,31)
train_errors = []
test_errors = []
for number_of_trees in numbers_of_trees:
clf = RandomForestClassifier(n_estimators=number_of_trees)
clf.fit(X_bag_train, y_bag_train)
train_errors.append(zero_one_loss(y_bag_train, clf.predict(X_bag_train)))
test_errors.append(zero_one_loss(y_bag_test, clf.predict(X_bag_test)))
print(test_errors[np.argmin(test_errors)], np.argmin(test_errors))
plt.plot(numbers_of_trees, train_errors, label='Train error')
plt.plot(numbers_of_trees, test_errors, label='Test error')
plt.xlabel('Number of trees')
plt.ylabel('0-1 loss')
plt.legend(loc='best')
plt.show()
###Output
0.196666666667 23
###Markdown
**Q:** Što možete zaključiti iz ovih grafikona? **Q:** Kako *bagging* postiže diverzifikaciju pojedinačnih osnovnih modela? **Q:** Koristi li ovaj algoritam složeni ili jednostavni osnovni model? Zašto? **A:** Ukoliko je broj primjera/značajki velik a broj stabala malen, može se lako dogoditi da određeni primjeri/značajke budu propuštene u *bootstrap* uzorcima, što vodi slabijoj prediktivnoj moći slučajne šume.**A:** *Bagging* svaki osnovni model trenira nad *bootstrap* uzorkom iz skupa za učenje, odnosno nad pojedinim setom iz multiseta kombinacija s ponavljanjem inicijalnog skupa za učenje. Navedeno uzorkovanje provodi se i nad značajkama (*attribute bagging*).**A:** Algoritam *Random Forest* koristi jednostavne osnovne modele, jer im je treniranjem nad *bootstrap* uzorkom smanjena varijanca i ponešto povećana pristranost. 3. Ansambli (*boosting*) U ovom zadatku pogledat ćemo klasifikacijski algoritam AdaBoost, koji je implementiran u razredu [`ensemble.AdaBoostClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html). Ovaj algoritam tipičan je predstavnik *boosting*-algoritama.Najprije ćemo generirati eksperimentalni skup podataka koristeći [`datasets.make_circles`](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_circles.html). Ova funkcija stvara dvodimenzijski klasifikacijski problem u kojem su dva razreda podataka raspoređena u obliku kružnica, tako da je jedan razred unutar drugog.
###Code
from sklearn.datasets import make_circles
circ_X, circ_y = make_circles(n_samples=400, noise=0.1, factor=0.4)
mlutils.plot_2d_clf_problem(circ_X, circ_y)
###Output
_____no_output_____
###Markdown
(a)*Boosting*, kao vrsta ansambla, također se temelji na kombinaciji više klasifikatora s ciljem boljih prediktivnih sposobnosti. Međutim, ono što ovakav tip ansambla čini zanimljivim jest to da za osnovni klasifikator traži **slabi klasifikator** (engl. *weak classifier*), odnosno klasifikator koji radi tek malo bolje od nasumičnog pogađanja. Često korišteni klasifikator za tu svrhu jest **panj odluke** (engl. *decision stump*), koji radi predikciju na temelju samo jedne značajke ulaznih primjera. Panj odluke specijalizacija je **stabla odluke** (engl. *decision tree*) koje smo već spomenuli. Panj odluke stablo je dubine 1. Stabla odluke implementirana su u razredu [`tree.DecisionTreeClassifier`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html).Radi ilustracije, naučite ansambl (AdaBoost) koristeći panj odluke kao osnovni klasifikator, ali pritom isprobavajući različit broj klasifikatora u ansamblu iz skupa $L \in \{1, 2, 3, 50\}$. Prikažite decizijske granice na danom skupu podataka za svaku od vrijednosti korištenjem pomoćne funkcije `mlutils.plot_2d_clf_problem`.**NB:** Još jedan dokaz da hrvatska terminologija zaista može biti smiješna. :)
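Podsjetnik (za binarni slučaj): konačna hipoteza AdaBoosta je $$H(\mathbf{x}) = \mathrm{sign}\Big(\sum_{m=1}^{L}\alpha_m h_m(\mathbf{x})\Big), \qquad \alpha_m = \frac{1}{2}\ln\frac{1-\epsilon_m}{\epsilon_m},$$ gdje je $\epsilon_m$ težinska pogreška $m$-tog slabog klasifikatora; za višeklasne probleme `sklearn` koristi poopćenja SAMME/SAMME.R.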
###Code
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
numbers_of_stumps = [1, 2, 3, 50]
plt.figure(figsize=(10,10))
for i, number_of_stumps in enumerate(numbers_of_stumps):
eclf = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1), n_estimators=number_of_stumps)
eclf.fit(circ_X, circ_y)
plt.subplot(2, 2, i+1)
plt.title("$L={0}$".format(number_of_stumps))
mlutils.plot_2d_clf_problem(circ_X, circ_y, lambda x : eclf.predict(x))
plt.show()
###Output
_____no_output_____
###Markdown
**Q:** Kako AdaBoost radi? Ovise li izlazi pojedinih osnovnih modela o onima drugih? **Q:** Je li AdaBoost linearan klasifikator? Pojasnite. **A:** AdaBoost u svakoj *boosting* iteraciji povećava težine krivo klasificiranim primjerima iz prethodne iteracije, što znači da je svaki sljedeći osnovni model prisiljen koncentrirati se na primjere s čijom su klasifikacijom njegovi prethodnici imali problema.**A:** AdaBoost je linearna kombinacija osnovnih klasifikatora, no ukoliko osnovni klasifikatori nisu linearni ni AdaBoost neće biti linearan, jer ne predstavlja linearnu funkciju ulaznih podataka. (b)Kao što je i za očekivati, broj klasifikatora $L$ u ansamblu predstavlja hiperparametar algoritma *AdaBoost*. U ovom zadatku proučit ćete kako on utječe na generalizacijsku sposobnost Vašeg ansambla. Ponovno, koristite panj odluke kao osnovni klasifikator.Poslužite se skupom podataka koji je dan niže.
###Code
from sklearn.model_selection import train_test_split
X_boost, y_boost = make_classification(n_samples=1000, n_features=20, n_redundant=0, n_informative=18, n_classes=3, n_clusters_per_class=1)
X_boost_train, X_boost_test, y_boost_train, y_boost_test = train_test_split(X_boost, y_boost, train_size=0.7, random_state=69)
###Output
_____no_output_____
###Markdown
Iscrtajte krivulje pogrešaka na skupu za učenje i ispitivanje u ovisnosti o hiperparametru $L \in [1,80]$. Koristite pogrešku 0-1 iz paketa [`metrics`](http://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.metrics). Ispišite najmanju ostvarenu pogrešku na skupu za ispitivanje, te pripadajuću vrijednost hiperparametra $L$.
###Code
from sklearn.metrics import zero_one_loss
numbers_of_stumps = np.arange(1,81)
train_errors = []
test_errors = []
for number_of_stumps in numbers_of_stumps:
eclf = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1), n_estimators=number_of_stumps)
eclf.fit(X_boost_train, y_boost_train)
train_errors.append(zero_one_loss(y_boost_train, eclf.predict(X_boost_train)))
test_errors.append(zero_one_loss(y_boost_test, eclf.predict(X_boost_test)))
print(test_errors[np.argmin(test_errors)], np.argmin(test_errors))
plt.plot(numbers_of_stumps, train_errors, label='Train error')
plt.plot(numbers_of_stumps, test_errors, label='Test error')
plt.legend(loc='best')
plt.xlabel('$L$')
plt.ylabel('0-1 loss')
plt.show()
###Output
0.266666666667 28
###Markdown
**Q:** Može li uopće doći do prenaučenosti pri korištenju *boosting*-algoritama? **A:** Može, no kako je *boosting* algoritam dosta robustan na *overfitting*, on neće biti znatno izražen u promjeni postotka ispitne pogreške. (c)Kao što je rečeno na početku, *boosting*-algoritmi traže slabe klasifikatore kako bi bili najefikasniji što mogu biti. Međutim, kako se takav ansambl mjeri s jednim **jakim klasifikatorom** (engl. *strong classifier*)? To ćemo isprobati na istom primjeru, ali korištenjem jednog optimalno naučenog stabla odluke.Ispišite pogrešku ispitivanja optimalnog stabla odluke. Glavni hiperparametar stabala odluka jest njihova maksimalna dubina $d$ (parametar `max_depth`). Iscrtajte krivulje pogrešaka na skupu za učenje i ispitivanje u ovisnosti o dubini stabla $d \in [1,20]$.
###Code
depths = np.arange(1,21)
train_errors = []
test_errors = []
for depth in depths:
clf = DecisionTreeClassifier(max_depth=depth)
clf.fit(X_boost_train, y_boost_train)
train_errors.append(zero_one_loss(y_boost_train, clf.predict(X_boost_train)))
test_errors.append(zero_one_loss(y_boost_test, clf.predict(X_boost_test)))
print(test_errors[np.argmin(test_errors)], np.argmin(test_errors))
plt.plot(depths, train_errors, label='Train error')
plt.plot(depths, test_errors, label='Test error')
plt.legend(loc='best')
plt.xlabel('$d$')
plt.ylabel('0-1 loss')
plt.show()
###Output
0.263333333333 7
###Markdown
**Q:** Isplati li se koristiti ansambl u obliku *boostinga*? Idu li grafikoni tome u prilog?**Q:** Koja je prednost *boostinga* nad korištenjem jednog jakog klasifikatora? **A:** Na gore generiranom `(X_boost, y_boost)` skupu podataka *AdaBoost* postiže znatno bolje rezultate naspram stabala odluke, pa ga se u ovom slučaju itekako isplati koristiti.**A:** Generalna prednost *boostinga* nad pojedinim jakim klasifikatorima je brzina izvođenja. 4. Procjena maksimalne izglednosti i procjena maksimalne aposteriorne vjerojatnosti (a)Definirajte funkciju izglednosti $\mathcal{L}(\mu|\mathcal{D})$ za skup $\mathcal{D}=\{x^{(i)}\}_{i=1}^N$ Bernoullijevih varijabli. Neka od $N$ varijabli njih $m$ ima vrijednost 1 (npr. od $N$ bacanja novčića, $m$ puta smo dobili glavu). Definirajte funkciju izglednosti tako da je parametrizirana s $N$ i $m$, dakle definirajte funkciju $\mathcal{L}(\mu|N,m)$.
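Tražena funkcija izglednosti jest $$\mathcal{L}(\mu|N,m) = \prod_{i=1}^{N}\mu^{x^{(i)}}(1-\mu)^{1-x^{(i)}} = \mu^{m}(1-\mu)^{N-m},$$ što implementira funkcija u nastavku.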
###Code
def bernoullis_likelihood(mean, N, m):
return np.power(mean, m)*np.power((1-mean), (N-m))
###Output
_____no_output_____
###Markdown
(b) Prikažite funkciju $\mathcal{L}(\mu|N,m)$ za (1) $N=10$ i $m=1,2,5,9$ te za (2) $N=100$ i $m=1,10,50,90$ (dva zasebna grafikona).
###Code
N = 10
ms = [1,2,5,9]
mean = np.linspace(0, 1, num=100)
for m in ms:
plt.plot(mean, bernoullis_likelihood(mean, N, m), label='$m={}$'.format(m))
plt.xlabel('$\mu$')
plt.ylabel('$\mathcal{L}(\mu|N,m)$')
plt.legend(loc='best')
plt.show()
N = 100
ms = [1,10,50,90]
mean = np.linspace(0, 1, num=100)
for m in ms:
plt.plot(mean, bernoullis_likelihood(mean, N, m), label='$m={}$'.format(m))
plt.yscale('log')
plt.xlabel('$\mu$')
plt.ylabel('$\mathcal{L}(\mu|N,m)$')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
**Q:** Koja vrijednost odgovara ML-procjenama i zašto? **A:** ML-procjenama $\mu$ odgovaraju maksimumi krivulja, jer je MLE $argmax$ izglednosti. (c)Prikažite funkciju $\mathcal{L}(\mu|N,m)$ za $N=10$ i $m=\{0,9\}$.
###Code
N = 10
ms = [0,9]
mean = np.linspace(0, 1, num=100)
for m in ms:
plt.plot(mean, bernoullis_likelihood(mean, N, m), label='$m={}$'.format(m))
plt.xlabel('$\mu$')
plt.ylabel('$\mathcal{L}(\mu|N,m)$')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
**Q:** Koja je ML-procjena za $\mu$ i što je problem s takvom procjenom u ovome slučaju? **A:** U gornjem primjeru $\hat{\mu}_{MLE}$ je nula jer je $m=0$, što radi velike probleme kod nekih klasifikatora kao što je to naivni Bayes. Ovo je primjer prenaučenosti MLE procjenitelja kad je $N$ malen. (d)Prikažite beta-distribuciju $B(\mu|\alpha,\beta)$ za različite kombinacije parametara $\alpha$ i $\beta$, uključivo $\alpha=\beta=1$ te $\alpha=\beta=2$.
###Code
from scipy.stats import beta
from itertools import product
means = np.linspace(0, 1, 100)
for a,b in product(range(1,3), repeat=2):
plt.plot(means, beta.pdf(means, a, b), label=r'$\alpha={0}, \beta={1}$'.format(a, b))
plt.plot(means, beta.pdf(means, 2, 1.7), label=r'$\alpha=2, \beta=1.7$')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
###Output
_____no_output_____
###Markdown
**Q:** Koje parametere biste odabrali za modeliranje apriornog znanja o parametru $\mu$ za novčić za koji mislite da je "donekle pravedan, ali malo češće pada na glavu"? Koje biste parametre odabrali za novčić za koji držite da je posve pravedan? Zašto uopće koristimo beta-distribuciju, a ne neku drugu? **A:** Za donekle pravedan novčić koji češće pada na glavu trebalo bi birati $\alpha$ tek nešto veći od $\beta$, s tim da vrijedi $\alpha>1,\beta>1$. **A:** Za posve pravedan novčić uzima se $\alpha=2,\beta=2$.**A:** MAP maksimizira aposteriornu vjerojatnost parametara $p(\theta|\mathcal{D})$, definiranu Bayesovim teoremom. Njena distribucija odgovara distribuciji apriorne vjerojatnosti $p(\theta)$, samo ako je $p(\theta)$ konjugatna apriorna distribucija izglednosti $p(\mathcal(D)|\theta)$. Kako bacanje novčića podliježe Bernoullievoj distribuciji, biramo njenu konjugatnu beta-distribuciju. (e)Definirajte funkciju za izračun zajedničke vjerojatnosti $P(\mu,\mathcal{D}) = P(\mathcal{D}|\mu) \cdot P(\mu|\alpha,\beta)$ te prikažite tu funkciju za $N=10$ i $m=9$ i nekolicinu kombinacija parametara $\alpha$ i $\beta$.
###Code
def bernoullis_joint_probability(mean, N, m, a, b):
return bernoullis_likelihood(mean, N, m) * beta.pdf(mean, a, b)
mean = np.linspace(0, 1, num=100)
plt.plot(mean, bernoullis_joint_probability(mean, 10, 9, 2, 2), label=r'$\alpha={0}, \beta={1}$'.format(2, 2))
plt.plot(mean, bernoullis_joint_probability(mean, 10, 9, 1, 1), label=r'$\alpha={0}, \beta={1}$'.format(1, 1))
plt.plot(mean, bernoullis_joint_probability(mean, 10, 9, 1.5, 3), label=r'$\alpha={0}, \beta={1}$'.format(1.5, 3))
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
**Q**: Koje vrijednosti odgovaraju MAP-procjeni za $\mu$? Usporedite ih sa ML-procjenama. **A:** MAP-procjene $\mu_{MAP}$ nalaze se u maksimumima funkcija gustoće zajedničke vjerojatnosti. Za $\alpha=\beta=1$ (uniformna, neinformativna apriorna distribucija) MAP degenerira u MLE, ostvarujući maksimum u $\mu_{MAP} \approx 0.9$. Ukoliko se koristi **Laplaceovo zaglađivanje** postavljajući $\alpha=\beta=2$ (apriorna distribucija s najvećom gustoćom u $\mu=0.5$), MAP će biti nešto manje prenaučen u odnosu na MLE, ostvarujući maksimum u $\mu_{MAP} \approx 0.8$. (f)Za $N=10$ i $m=1$, na jednome grafikonu prikažite sve tri distribucije: $P(\mu,\mathcal{D})$, $P(\mu|\alpha,\beta)$ i $\mathcal{L}(\mu|\mathcal{D})$.
###Code
mean = np.linspace(0, 1, num=100)
N, m = 10, 1
a, b = 2, 2
plt.plot(mean, bernoullis_joint_probability(mean, N, m, a, b), label=r'$P(\mu, \mathcal{D})$')
plt.plot(mean, beta.pdf(mean, a, b), label=r'$P(\mu|\alpha,\beta)$')
plt.plot(mean, bernoullis_likelihood(mean, N, m), label=r'$\mathcal{L}(\mu|\mathcal{D})$')
plt.yscale('log')
plt.xlabel('$\mu$')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____
###Markdown
(g)Pročitajte [ove](http://scikit-learn.org/stable/datasets/) upute o učitavanju oglednih skupova podataka u SciPy. Učitajte skup podataka *Iris*. Taj skup sadrži $n=4$ značajke i $K=3$ klase. Odaberite jednu klasu i odaberite sve primjere iz te klase, dok ostale primjere zanemarite (**u nastavku radite isključivo s primjerima iz te jedne klase**). Vizualizirajte podatke tako da načinite 2D-prikaze za svaki par značajki (šest grafikona; za prikaz je najjednostavnije koristiti funkciju [`scatter`](http://matplotlib.org/api/pyplot_api.htmlmatplotlib.pyplot.scatter)).**NB:** Mogla bi Vam dobro dući funkcija [`itertools.combinations`](https://docs.python.org/2/library/itertools.htmlitertools.combinations).
###Code
from sklearn.datasets import load_iris
import itertools as it
iris = load_iris()
X = iris.data[iris.target == 0]
plt.figure(figsize=(15,10))
n_sub = 1
for i,j in it.combinations(range(4), 2):
plt.subplot(2, 3, n_sub)
plt.scatter(X[:,i], X[:,j])
plt.xlabel('feature_{}'.format(i+1))
plt.ylabel('feature_{}'.format(j+1))
n_sub += 1
plt.show()
###Output
_____no_output_____
###Markdown
(h)Implementirajte funkciju log-izglednosti za parametre $\mu$ i $\sigma^2$ normalne distribucije.
###Code
def normal_log_likelihood(X, mean, variance):
return -0.5*X.shape[0]*np.log(2*math.pi) - X.shape[0]*np.log(math.sqrt(variance)) - np.sum(np.power((X-mean),2))/(2*variance)
###Output
_____no_output_____
###Markdown
(i)Izračunajte ML-procjene za $(\mu, \sigma^2)$ za svaku od $n=4$ značajki iz skupa *Iris*. Ispišite log-izglednosti tih ML-procjena.
###Code
for n_feature in range(4):
mean = np.mean(X[:,n_feature])
variance = np.var(X[:,n_feature])
#print(np.median(X[:,n_feature]), mean, variance, 0.5*(np.amax(X[:,n_feature]) - np.amin(X[:,n_feature])))
print('feature_{0}: {1}'.format(n_feature+1, normal_log_likelihood(X[:,n_feature], mean, variance)))
###Output
feature_1: -18.305163312803863
feature_2: -22.197265513335253
feature_3: 17.133809129965208
feature_4: 41.2066603289276
###Markdown
**Q:** Možete li, na temelju dobivenih log-izglednosti, zaključiti koja se značajka najbolje pokorava normalnoj distribuciji? **A:** Sama izglednost ne daje mjeru pokoravanja značajke određenoj distribuciji, visoka izglednost neke značajke prije ukazuje na vrlo malu varijancu, zbog čega gustoće vjerojatnosti primjera postižu visoke iznose, pa tako i njihov produkt tj. izglednost. Za dobru mjeru pokoravanja distribuciji, valjalo bi izvršiti LR test (*Likelihood-ratio*) za normalnu distribuciju, ili neki drugi test normalnosti (*Normality test*). (j)Proučite funkciju [`pearsonr`](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.pearsonr.html) za izračun Pearsonovog koeficijenta korelacije. Izračunajte koeficijente korelacije između svih četiri značajki u skupu *Iris*.
###Code
from scipy.stats import pearsonr
for i,j in it.combinations(range(4), 2):
print('PCC (feature_{0}, feature_{1}): {2}'.format(i+1, j+1, pearsonr(X[:,i], X[:,j])[0]))
###Output
PCC (feature_1, feature_2): 0.7467803732639268
PCC (feature_1, feature_3): 0.26387409291868696
PCC (feature_1, feature_4): 0.27909157499959675
PCC (feature_2, feature_3): 0.17669462869680694
PCC (feature_2, feature_4): 0.2799728885169045
PCC (feature_3, feature_4): 0.3063082111580356
###Markdown
(k)Proučite funkciju [`cov`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.cov.html) te izračunajte ML-procjenu za kovarijacijsku matricu za skup *Iris*. Usporedite pristranu i nepristranu procjenu. Pokažite da se razlika (srednja apsolutna i kvadratna) smanjuje s brojem primjera (npr. isprobajte za $N/4$ i $N/2$ i $N$ primjera).
###Code
from sklearn.metrics import mean_squared_error, mean_absolute_error
squared_errors = []
absolute_errors = []
ns_samples = range(X.shape[0], 2, -10)
for n_samples in ns_samples:
X_subsample = X[np.random.choice(X.shape[0], size=n_samples, replace=False), :]
mle_cov_biased = np.cov(X_subsample, rowvar=False, bias=True)
mle_cov_unbiased = np.cov(X_subsample, rowvar=False, bias=False)
squared_errors.append(mean_squared_error(mle_cov_biased, mle_cov_unbiased))
absolute_errors.append(mean_absolute_error(mle_cov_biased, mle_cov_unbiased))
plt.figure(figsize=(10,5))
plt.subplot(121)
plt.title('MSE')
plt.xlabel('$N$')
plt.plot(ns_samples, squared_errors)
plt.subplot(122)
plt.title('MAE')
plt.plot(ns_samples, absolute_errors)
plt.xlabel('$N$')
plt.show()
###Output
_____no_output_____ |
Classification/k-Nearest Neighbor/KNeighborsClassifier_MaxAbsScaler_QuantileTransformer.ipynb | ###Markdown
KNeighborsClassifier with MaxAbsScaler & QuantileTransformer This code template is for a classification task using a simple KNeighborsClassifier (based on the k-nearest neighbors algorithm) together with the MaxAbsScaler data-rescaling technique and the QuantileTransformer feature transformation, combined in a single pipeline. Required Packages
###Code
!pip install imblearn
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from imblearn.over_sampling import RandomOverSampler
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder, MaxAbsScaler, QuantileTransformer
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
#file_path
file_path= ""
###Output
_____no_output_____
###Markdown
List of features which are required for model training .
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target = ''
###Output
_____no_output_____
###Markdown
Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature SelectionsIt is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X and target/outcome to Y.
###Code
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
Data Preprocessing Since most machine learning models in the scikit-learn library cannot handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that remove null values if any exist and encode string class labels as integer classes.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), axis=-1, kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
###Output
_____no_output_____
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Distribution Of Target Variable
###Code
plt.figure(figsize = (10,6))
se.countplot(Y)
###Output
_____no_output_____
###Markdown
Data SplittingThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
###Output
_____no_output_____
###Markdown
Handling Target Imbalance The challenge of working with imbalanced datasets is that most machine learning techniques will ignore, and in turn have poor performance on, the minority class, although typically it is performance on the minority class that matters most. One approach to addressing imbalanced datasets is to oversample the minority class; the simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library.
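If you want to see what the oversampler does, you can compare the class counts before and after resampling; this small check is optional and assumes the `x_train`/`y_train` variables created in the split above.

```python
from collections import Counter

print('before:', Counter(y_train))
x_res, y_res = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
print('after :', Counter(y_res))
```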
###Code
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
###Output
_____no_output_____
###Markdown
Data Rescaling Scale each feature by its maximum absolute value. This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity.[More on MaxAbsScaler module and parameters](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html) Feature Transformation This method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme.[More on QuantileTransformer module and parameters](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.QuantileTransformer.html) Model KNN is one of the simplest supervised machine learning algorithms. It stores all the available training data and classifies a new data point by its similarity to the stored examples: the new case is assigned to the category whose examples it most resembles. At training time the algorithm merely memorizes the dataset; the work happens at prediction time, when the new point is compared with its nearest neighbours. Model Tuning Parameters > - **n_neighbors** -> Number of neighbors to use by default for kneighbors queries. > - **weights** -> weight function used in prediction. {**uniform, distance**} > - **algorithm** -> Algorithm used to compute the nearest neighbors. {**'auto', 'ball_tree', 'kd_tree', 'brute'**} > - **p** -> Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used. > - **leaf_size** -> Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
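The pipeline below is built with the default hyperparameters. If you want to tune them, one option is a grid search over the whole pipeline; the sketch below is only a suggestion and assumes the `x_train`/`y_train` variables defined earlier plus the step names that `make_pipeline` generates automatically from the class names.

```python
from sklearn.model_selection import GridSearchCV

param_grid = {
    'kneighborsclassifier__n_neighbors': [3, 5, 7, 11],
    'kneighborsclassifier__weights': ['uniform', 'distance'],
    'kneighborsclassifier__p': [1, 2],
}
search = GridSearchCV(
    make_pipeline(MaxAbsScaler(), QuantileTransformer(), KNeighborsClassifier()),
    param_grid, cv=5, scoring='accuracy')
search.fit(x_train, y_train)
print(search.best_params_, search.best_score_)
```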
###Code
model= make_pipeline(MaxAbsScaler(), QuantileTransformer(), KNeighborsClassifier())
model.fit(x_train,y_train)
###Output
_____no_output_____
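###Markdown
The tuning parameters listed above are left at their defaults in the pipeline; as a hypothetical sketch (reusing x_train, y_train and the imports from this notebook), they could be searched with GridSearchCV. The step name 'kneighborsclassifier' is how make_pipeline names the KNeighborsClassifier step.
###Code
from sklearn.model_selection import GridSearchCV
# Hypothetical grid over the KNN parameters described above
param_grid = {
    'kneighborsclassifier__n_neighbors': [3, 5, 7, 11],
    'kneighborsclassifier__weights': ['uniform', 'distance'],
    'kneighborsclassifier__p': [1, 2],
}
search = GridSearchCV(
    make_pipeline(MaxAbsScaler(), QuantileTransformer(), KNeighborsClassifier()),
    param_grid, cv=5, scoring='accuracy')
search.fit(x_train, y_train)
print(search.best_params_, search.best_score_)
###Output
_____no_output_____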
###Markdown
Model AccuracyThe score() method returns the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires each label set to be correctly predicted for each sample.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
###Output
Accuracy score 77.92 %
###Markdown
Confusion MatrixA confusion matrix is used to understand the performance of a classification model or algorithm on a given test set for which the true labels are known.
###Code
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
###Output
_____no_output_____
###Markdown
Classification ReportA classification report is used to measure the quality of predictions from a classification algorithm: how many predictions are true and how many are false. Here: - Precision: accuracy of positive predictions. - Recall: fraction of positives that were correctly identified. - f1-score: harmonic mean of precision and recall. - support: the number of actual occurrences of the class in the specified dataset.
###Code
print(classification_report(y_test,model.predict(x_test)))
###Output
precision recall f1-score support
tested_negative 0.85 0.78 0.82 96
tested_positive 0.68 0.78 0.73 58
accuracy 0.78 154
macro avg 0.77 0.78 0.77 154
weighted avg 0.79 0.78 0.78 154
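###Markdown
As a cross-check of the definitions above, the precision, recall and f1-score for the positive class can be recomputed directly from the predictions. A minimal sketch, reusing model, x_test and y_test from this notebook:
###Code
from sklearn.metrics import precision_recall_fscore_support
# Recompute the metrics for the 'tested_positive' class only
prec, rec, f1, support = precision_recall_fscore_support(
    y_test, model.predict(x_test), labels=['tested_positive'])
print(prec[0], rec[0], f1[0], support[0])
###Output
_____no_output_____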
|
training/.Trash-1000/files/train.ipynb | ###Markdown
MLflow Training TutorialThis `train.ipynb` Jupyter notebook predicts the quality of wine using [sklearn.linear_model.ElasticNet](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html). > This is the Jupyter notebook version of the `train.py` example. Attribution: * The data set used in this example is from http://archive.ics.uci.edu/ml/datasets/Wine+Quality * P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. * Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.
###Code
import os
import warnings
import sys
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
import mlflow
import mlflow.sklearn
import logging
# Read the wine-quality csv file (configure the logger before it is used in the except branch below)
logging.basicConfig(level=logging.WARN)
logger = logging.getLogger(__name__)
csv_url = 'wine-quality.csv'
try:
    data = pd.read_csv(csv_url)
except Exception as e:
    logger.exception(
        "Unable to read the training & test CSV, check the file path. Error: %s", e)
print(data['quality'])
# Wine Quality Sample
def train(in_alpha, in_l1_ratio):
logging.basicConfig(level=logging.WARN)
logger = logging.getLogger(__name__)
mlflow.set_tracking_uri("http://mlflow_server:5000")
def eval_metrics(actual, pred):
rmse = np.sqrt(mean_squared_error(actual, pred))
mae = mean_absolute_error(actual, pred)
r2 = r2_score(actual, pred)
return rmse, mae, r2
warnings.filterwarnings("ignore")
np.random.seed(40)
# Split the data into training and test sets. (0.75, 0.25) split.
train, test = train_test_split(data)
# The predicted column is "quality" which is a scalar from [3, 9]
train_x = train.drop(["quality"], axis=1)
test_x = test.drop(["quality"], axis=1)
train_y = train[["quality"]]
test_y = test[["quality"]]
# Set default values if no alpha is provided (note: float(x) can never be None, so test the argument itself)
if in_alpha is None:
    alpha = 0.5
else:
    alpha = float(in_alpha)
# Set default values if no l1_ratio is provided
if in_l1_ratio is None:
    l1_ratio = 0.5
else:
    l1_ratio = float(in_l1_ratio)
# Useful for multiple runs (only doing one run in this sample notebook)
with mlflow.start_run():
# Execute ElasticNet
lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
lr.fit(train_x, train_y)
# Evaluate Metrics
predicted_qualities = lr.predict(test_x)
(rmse, mae, r2) = eval_metrics(test_y, predicted_qualities)
# Print out metrics
print("Elasticnet model (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio))
print(" RMSE: %s" % rmse)
print(" MAE: %s" % mae)
print(" R2: %s" % r2)
# Log parameter, metrics, and model to MLflow
mlflow.log_param("alpha", alpha)
mlflow.log_param("l1_ratio", l1_ratio)
mlflow.log_metric("rmse", rmse)
mlflow.log_metric("r2", r2)
mlflow.log_metric("mae", mae)
mlflow.sklearn.log_model(lr, "model")
train(0.5, 0.5)
train(0.2, 0.2)
train(0.1, 0.1)
###Output
Elasticnet model (alpha=0.100000, l1_ratio=0.100000):
RMSE: 0.7792546522251949
MAE: 0.6112547988118586
R2: 0.2157063843066196
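###Markdown
(Optional) Once the runs above have completed, the logged parameters and metrics can be inspected programmatically. A minimal sketch, assuming the tracking server set in train() is reachable and the runs were logged to the active (default) experiment:
###Code
# List the logged runs, best R2 first; column names follow MLflow's params./metrics. convention
runs = mlflow.search_runs(order_by=["metrics.r2 DESC"])
print(runs[["run_id", "params.alpha", "params.l1_ratio", "metrics.rmse", "metrics.r2"]])
###Output
_____no_output_____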
|
notebooks/tf2/rnn-add-example.ipynb | ###Markdown
Addition as a Sequence to Sequence TranslationAdapted from https://github.com/keras-team/keras/blob/master/examples/addition_rnn.py
###Code
!pip install -q tf-nightly-gpu-2.0-preview
import tensorflow as tf
print(tf.__version__)
###Output
2.0.0-dev20190502
###Markdown
Step 1: Generate sample equations
###Code
class CharacterTable(object):
"""Given a set of characters:
+ Encode them to a one hot integer representation
+ Decode the one hot integer representation to their character output
+ Decode a vector of probabilities to their character output
"""
def __init__(self, chars):
"""Initialize character table.
# Arguments
chars: Characters that can appear in the input.
"""
self.chars = sorted(set(chars))
self.char_indices = dict((c, i) for i, c in enumerate(self.chars))
self.indices_char = dict((i, c) for i, c in enumerate(self.chars))
def encode(self, C, num_rows):
"""One hot encode given string C.
# Arguments
num_rows: Number of rows in the returned one hot encoding. This is
used to keep the # of rows for each data the same.
"""
x = np.zeros((num_rows, len(self.chars)))
for i, c in enumerate(C):
x[i, self.char_indices[c]] = 1
return x
def decode(self, x, calc_argmax=True):
if calc_argmax:
x = x.argmax(axis=-1)
return ''.join(self.indices_char[x] for x in x)
class colors:
ok = '\033[92m'
fail = '\033[91m'
close = '\033[0m'
import numpy as np
# Parameters for the model and dataset.
TRAINING_SIZE = 50000
DIGITS = 3
# REVERSE = True
REVERSE = False
# Maximum length of input is 'int + int' (e.g., '345+678'). Maximum length of
# int is DIGITS.
MAXLEN = DIGITS + 1 + DIGITS
# All the numbers, plus sign and space for padding.
chars = '0123456789+ '
ctable = CharacterTable(chars)
questions = []
expected = []
seen = set()
print('Generating data...')
while len(questions) < TRAINING_SIZE:
f = lambda: int(''.join(np.random.choice(list('0123456789'))
for i in range(np.random.randint(1, DIGITS + 1))))
a, b = f(), f()
# Skip any addition questions we've already seen
# Also skip any such that x+Y == Y+x (hence the sorting).
key = tuple(sorted((a, b)))
if key in seen:
continue
seen.add(key)
# Pad the data with spaces such that it is always MAXLEN.
q = '{}+{}'.format(a, b)
query = q + ' ' * (MAXLEN - len(q))
ans = str(a + b)
# Answers can be of maximum size DIGITS + 1.
ans += ' ' * (DIGITS + 1 - len(ans))
if REVERSE:
# Reverse the query, e.g., '12+345 ' becomes ' 543+21'. (Note the
# space used for padding.)
query = query[::-1]
questions.append(query)
expected.append(ans)
print('Total addition questions:', len(questions))
questions[0]
expected[0]
print('Vectorization...')
x = np.zeros((len(questions), MAXLEN, len(chars)), dtype=np.bool)
y = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=np.bool)
for i, sentence in enumerate(questions):
x[i] = ctable.encode(sentence, MAXLEN)
for i, sentence in enumerate(expected):
y[i] = ctable.encode(sentence, DIGITS + 1)
len(x[0])
len(questions[0])
###Output
_____no_output_____
###Markdown
Input is encoded as one-hot, 7 digits times 12 possibilities
###Code
x[0]
###Output
_____no_output_____
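###Markdown
As a quick check of this encoding, the character table can round-trip a single padded query: encode it into the 7 x 12 one-hot matrix and decode it back. A small sketch using the ctable and MAXLEN defined above:
###Code
# Encode one padded query and decode it again to verify the one-hot representation
sample = '12+345 '
encoded = ctable.encode(sample, MAXLEN)  # shape (MAXLEN, len(chars)) = (7, 12)
print(encoded.shape)
print(ctable.decode(encoded))            # prints '12+345 ' again
###Output
_____no_output_____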
###Markdown
Same for output, but at most 4 digits
###Code
y[0]
# Shuffle (x, y) in unison as the later parts of x will almost all be larger
# digits.
indices = np.arange(len(y))
np.random.shuffle(indices)
x = x[indices]
y = y[indices]
###Output
_____no_output_____
###Markdown
Step 2: Training/Validation Split
###Code
# Explicitly set apart 10% for validation data that we never train over.
split_at = len(x) - len(x) // 10
(x_train, x_val) = x[:split_at], x[split_at:]
(y_train, y_val) = y[:split_at], y[split_at:]
print('Training Data:')
print(x_train.shape)
print(y_train.shape)
print('Validation Data:')
print(x_val.shape)
print(y_val.shape)
###Output
Training Data:
(45000, 7, 12)
(45000, 4, 12)
Validation Data:
(5000, 7, 12)
(5000, 4, 12)
###Markdown
Step 3: Create Model
###Code
# input shape: 7 digits, each being 0-9, + or space (12 possibilities)
MAXLEN, len(chars)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, GRU, SimpleRNN, Dense, RepeatVector
# Try replacing LSTM, GRU, or SimpleRNN.
# RNN = LSTM
RNN = SimpleRNN # should be enough since we do not have long sequences and only local dependencies
# RNN = GRU
HIDDEN_SIZE = 128
BATCH_SIZE = 128
model = Sequential()
# encoder
model.add(RNN(units=HIDDEN_SIZE, input_shape=(MAXLEN, len(chars))))
# latent space
encoding_dim = 32
model.add(Dense(units=encoding_dim, activation='relu', name="encoder"))
# decoder: have 4 temporal outputs one for each of the digits of the results
model.add(RepeatVector(DIGITS + 1))
# return_sequences=True tells it to keep all 4 temporal outputs, not only the final one (we need all four digits for the results)
model.add(RNN(units=HIDDEN_SIZE, return_sequences=True))
model.add(Dense(name='classifier', units=len(chars), activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
simple_rnn_4 (SimpleRNN) (None, 128) 18048
_________________________________________________________________
encoder (Dense) (None, 32) 4128
_________________________________________________________________
repeat_vector_2 (RepeatVecto (None, 4, 32) 0
_________________________________________________________________
simple_rnn_5 (SimpleRNN) (None, 4, 128) 20608
_________________________________________________________________
classifier (Dense) (None, 4, 12) 1548
=================================================================
Total params: 44,332
Trainable params: 44,332
Non-trainable params: 0
_________________________________________________________________
###Markdown
Step 4: Train
###Code
%%time
# Train the model each generation and show predictions against the validation
# dataset.
merged_losses = {
"loss": [],
"val_loss": [],
"accuracy": [],
"val_accuracy": [],
}
for iteration in range(1, 50):
print()
print('-' * 50)
print('Iteration', iteration)
iteration_history = model.fit(x_train, y_train,
batch_size=BATCH_SIZE,
epochs=1,
validation_data=(x_val, y_val))
merged_losses["loss"].append(iteration_history.history["loss"])
merged_losses["val_loss"].append(iteration_history.history["val_loss"])
merged_losses["accuracy"].append(iteration_history.history["accuracy"])
merged_losses["val_accuracy"].append(iteration_history.history["val_accuracy"])
# Select 10 samples from the validation set at random so we can visualize
# errors.
for i in range(10):
ind = np.random.randint(0, len(x_val))
rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])]
preds = model.predict_classes(rowx, verbose=0)
q = ctable.decode(rowx[0])
correct = ctable.decode(rowy[0])
guess = ctable.decode(preds[0], calc_argmax=False)
print('Q', q[::-1] if REVERSE else q, end=' ')
print('T', correct, end=' ')
if correct == guess:
print(colors.ok + '☑' + colors.close, end=' ')
else:
print(colors.fail + '☒' + colors.close, end=' ')
print(guess)
import matplotlib.pyplot as plt
plt.ylabel('loss')
plt.xlabel('epoch')
plt.yscale('log')
plt.plot(merged_losses['loss'])
plt.plot(merged_losses['val_loss'])
plt.legend(['loss', 'validation loss'])
plt.ylabel('accuracy')
plt.xlabel('epoch')
# plt.yscale('log')
plt.plot(merged_losses['accuracy'])
plt.plot(merged_losses['val_accuracy'])
plt.legend(['accuracy', 'validation accuracy'])
###Output
_____no_output_____ |
2 digit recognizer/mnist-dataset-digit-recognizer (1).ipynb | ###Markdown
IntroductionMNIST ("Modified National Institute of Standards and Technology") is the de facto “Hello World” dataset of computer vision. Since its release in 1999, this classic dataset of handwritten images has served as the basis for benchmarking classification algorithms. As new machine learning techniques emerge, MNIST remains a reliable resource for researchers and learners alike. In this competition, we aim to correctly identify digits from a dataset of tens of thousands of handwritten images. Kaggle has curated a set of tutorial-style kernels which cover everything from regression to neural networks. They hope to encourage us to experiment with different algorithms to learn first-hand what works well and how techniques compare. ApproachFor this competition, we will be using Keras (with TensorFlow as our backend) as the main package to create a simple neural network to predict, as accurately as we can, digits from handwritten images. In particular, we will be calling the Functional Model API of Keras, and creating a 4-layered and a 5-layered neural network. Also, we will be experimenting with various optimizers: the plain vanilla Stochastic Gradient Descent optimizer and the Adam optimizer. However, there are many other parameters, such as the number of training epochs, which we will not be experimenting with. In addition, the choice of hidden layer units is completely arbitrary and may not be optimal. This is yet another parameter which we will not attempt to tinker with. Lastly, we introduce dropout, a form of regularisation, in our neural networks to prevent overfitting. ResultFollowing our simulations on the cross-validation dataset, it appears that a 4-layered neural network, using 'Adam' as the optimizer along with a learning rate of 0.01, performs best. We proceed to introduce dropout in the model, and use the model to predict for the test set. The test predictions (submitted to Kaggle) generated by our model achieve an accuracy score of 97.600%, which places us in the top 55th percentile of the competition. Importing key libraries, and reading data
###Code
import pandas as pd
import numpy as np
np.random.seed(1212)
import keras
from keras.models import Model
from keras.layers import *
from keras import optimizers
df_train = pd.read_csv('../input/train.csv')
df_test = pd.read_csv('../input/test.csv')
df_train.head() # 784 features, 1 label
###Output
_____no_output_____
###Markdown
Splitting into training and validation datasets
###Code
df_features = df_train.iloc[:, 1:785]
df_label = df_train.iloc[:, 0]
X_test = df_test.iloc[:, 0:784]
print(X_test.shape)
from sklearn.model_selection import train_test_split
X_train, X_cv, y_train, y_cv = train_test_split(df_features, df_label,
test_size = 0.2,
random_state = 1212)
X_train = X_train.as_matrix().reshape(33600, 784) #(33600, 784)
X_cv = X_cv.as_matrix().reshape(8400, 784) #(8400, 784)
X_test = X_test.as_matrix().reshape(28000, 784)
###Output
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:6: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:7: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
import sys
/opt/conda/lib/python3.6/site-packages/ipykernel_launcher.py:9: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
if __name__ == '__main__':
###Markdown
Data cleaning, normalization and selection
###Code
print((min(X_train[1]), max(X_train[1])))
###Output
(0, 255)
###Markdown
As the pixel intensities currently lie in the range 0 to 255, we proceed to normalize the features using broadcasting. In addition, we convert our labels from a class vector to a binary one-hot encoded matrix.
###Code
# Feature Normalization
X_train = X_train.astype('float32'); X_cv= X_cv.astype('float32'); X_test = X_test.astype('float32')
X_train /= 255; X_cv /= 255; X_test /= 255
# Convert labels to One Hot Encoded
num_digits = 10
y_train = keras.utils.to_categorical(y_train, num_digits)
y_cv = keras.utils.to_categorical(y_cv, num_digits)
# Printing 2 examples of labels after conversion
print(y_train[0]) # 2
print(y_train[3]) # 7
###Output
[0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
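###Markdown
The integer labels can be recovered from the one-hot rows with argmax; a small sketch using the arrays defined above:
###Code
# Recover the original digits from the one-hot encoded labels printed above
print(np.argmax(y_train[0]))  # 2
print(np.argmax(y_train[3]))  # 7
###Output
_____no_output_____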
###Markdown
Model Fitting We proceed by fitting several simple neural network models using Keras (with TensorFlow as our backend) and collecting their accuracies. The model that performs best on the validation set will be used as the model of choice for the competition. Model 1: Simple Neural Network with 4 layers (300, 100, 100, 200). In our first model, we will use the Keras library to train a neural network with the activation function set as ReLU. To determine which class to output, we will rely on the softmax function.
###Code
# Input Parameters
n_input = 784 # number of features
n_hidden_1 = 300
n_hidden_2 = 100
n_hidden_3 = 100
n_hidden_4 = 200
num_digits = 10
Inp = Input(shape=(784,))
x = Dense(n_hidden_1, activation='relu', name = "Hidden_Layer_1")(Inp)
x = Dense(n_hidden_2, activation='relu', name = "Hidden_Layer_2")(x)
x = Dense(n_hidden_3, activation='relu', name = "Hidden_Layer_3")(x)
x = Dense(n_hidden_4, activation='relu', name = "Hidden_Layer_4")(x)
output = Dense(num_digits, activation='softmax', name = "Output_Layer")(x)
# Our model would have '6' layers - input layer, 4 hidden layer and 1 output layer
model = Model(Inp, output)
model.summary() # We have 297,910 parameters to estimate
# Insert Hyperparameters
learning_rate = 0.1
training_epochs = 20
batch_size = 100
sgd = optimizers.SGD(lr=learning_rate)
# We rely on the plain vanilla Stochastic Gradient Descent as our optimizing methodology
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
history1 = model.fit(X_train, y_train,
batch_size = batch_size,
epochs = training_epochs,
verbose = 2,
validation_data=(X_cv, y_cv))
###Output
Train on 33600 samples, validate on 8400 samples
Epoch 1/20
- 2s - loss: 1.8541 - accuracy: 0.4985 - val_loss: 1.0047 - val_accuracy: 0.7601
Epoch 2/20
- 2s - loss: 0.6482 - accuracy: 0.8293 - val_loss: 0.4640 - val_accuracy: 0.8720
Epoch 3/20
- 2s - loss: 0.4099 - accuracy: 0.8834 - val_loss: 0.3621 - val_accuracy: 0.8976
Epoch 4/20
- 2s - loss: 0.3377 - accuracy: 0.9028 - val_loss: 0.3123 - val_accuracy: 0.9102
Epoch 5/20
- 2s - loss: 0.2980 - accuracy: 0.9140 - val_loss: 0.2893 - val_accuracy: 0.9175
Epoch 6/20
- 2s - loss: 0.2684 - accuracy: 0.9226 - val_loss: 0.2651 - val_accuracy: 0.9238
Epoch 7/20
- 2s - loss: 0.2454 - accuracy: 0.9296 - val_loss: 0.2558 - val_accuracy: 0.9257
Epoch 8/20
- 2s - loss: 0.2273 - accuracy: 0.9352 - val_loss: 0.2322 - val_accuracy: 0.9338
Epoch 9/20
- 2s - loss: 0.2102 - accuracy: 0.9378 - val_loss: 0.2176 - val_accuracy: 0.9363
Epoch 10/20
- 2s - loss: 0.1952 - accuracy: 0.9439 - val_loss: 0.2053 - val_accuracy: 0.9398
Epoch 11/20
- 2s - loss: 0.1828 - accuracy: 0.9468 - val_loss: 0.1954 - val_accuracy: 0.9427
Epoch 12/20
- 2s - loss: 0.1707 - accuracy: 0.9504 - val_loss: 0.1849 - val_accuracy: 0.9450
Epoch 13/20
- 2s - loss: 0.1612 - accuracy: 0.9530 - val_loss: 0.1804 - val_accuracy: 0.9455
Epoch 14/20
- 2s - loss: 0.1515 - accuracy: 0.9562 - val_loss: 0.1763 - val_accuracy: 0.9470
Epoch 15/20
- 2s - loss: 0.1431 - accuracy: 0.9588 - val_loss: 0.1656 - val_accuracy: 0.9500
Epoch 16/20
- 2s - loss: 0.1353 - accuracy: 0.9604 - val_loss: 0.1594 - val_accuracy: 0.9531
Epoch 17/20
- 2s - loss: 0.1289 - accuracy: 0.9622 - val_loss: 0.1546 - val_accuracy: 0.9531
Epoch 18/20
- 2s - loss: 0.1217 - accuracy: 0.9646 - val_loss: 0.1478 - val_accuracy: 0.9558
Epoch 19/20
- 2s - loss: 0.1157 - accuracy: 0.9668 - val_loss: 0.1472 - val_accuracy: 0.9555
Epoch 20/20
- 2s - loss: 0.1100 - accuracy: 0.9683 - val_loss: 0.1409 - val_accuracy: 0.9582
###Markdown
Using a 4-layer neural network with: (1) 20 training epochs, (2) a training batch size of 100, (3) hidden layers set as (300, 100, 100, 200), and (4) a learning rate of 0.1, we achieved a training score of around 96-98% and a test score of around 95-97%. Can we do better if we were to change the optimizer? To find out, we use the Adam optimizer for our second model, while maintaining the same parameter values for all other parameters.
###Code
Inp = Input(shape=(784,))
x = Dense(n_hidden_1, activation='relu', name = "Hidden_Layer_1")(Inp)
x = Dense(n_hidden_2, activation='relu', name = "Hidden_Layer_2")(x)
x = Dense(n_hidden_3, activation='relu', name = "Hidden_Layer_3")(x)
x = Dense(n_hidden_4, activation='relu', name = "Hidden_Layer_4")(x)
output = Dense(num_digits, activation='softmax', name = "Output_Layer")(x)
# We rely on ADAM as our optimizing methodology
adam = keras.optimizers.Adam(lr=learning_rate)
model2 = Model(Inp, output)
model2.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history2 = model2.fit(X_train, y_train,
batch_size = batch_size,
epochs = training_epochs,
verbose = 2,
validation_data=(X_cv, y_cv))
###Output
Train on 33600 samples, validate on 8400 samples
Epoch 1/20
- 3s - loss: 0.3405 - accuracy: 0.8977 - val_loss: 0.1635 - val_accuracy: 0.9492
Epoch 2/20
- 2s - loss: 0.1268 - accuracy: 0.9610 - val_loss: 0.1290 - val_accuracy: 0.9611
Epoch 3/20
- 2s - loss: 0.0843 - accuracy: 0.9732 - val_loss: 0.1042 - val_accuracy: 0.9677
Epoch 4/20
- 3s - loss: 0.0623 - accuracy: 0.9801 - val_loss: 0.1313 - val_accuracy: 0.9608
Epoch 5/20
- 2s - loss: 0.0452 - accuracy: 0.9849 - val_loss: 0.1088 - val_accuracy: 0.9690
Epoch 6/20
- 2s - loss: 0.0403 - accuracy: 0.9867 - val_loss: 0.0968 - val_accuracy: 0.9735
Epoch 7/20
- 2s - loss: 0.0353 - accuracy: 0.9879 - val_loss: 0.0961 - val_accuracy: 0.9740
Epoch 8/20
- 2s - loss: 0.0248 - accuracy: 0.9925 - val_loss: 0.1102 - val_accuracy: 0.9729
Epoch 9/20
- 2s - loss: 0.0219 - accuracy: 0.9925 - val_loss: 0.0928 - val_accuracy: 0.9787
Epoch 10/20
- 2s - loss: 0.0234 - accuracy: 0.9924 - val_loss: 0.1174 - val_accuracy: 0.9705
Epoch 11/20
- 2s - loss: 0.0211 - accuracy: 0.9933 - val_loss: 0.1077 - val_accuracy: 0.9739
Epoch 12/20
- 2s - loss: 0.0151 - accuracy: 0.9951 - val_loss: 0.1083 - val_accuracy: 0.9733
Epoch 13/20
- 2s - loss: 0.0161 - accuracy: 0.9946 - val_loss: 0.1048 - val_accuracy: 0.9761
Epoch 14/20
- 2s - loss: 0.0151 - accuracy: 0.9953 - val_loss: 0.1319 - val_accuracy: 0.9710
Epoch 15/20
- 2s - loss: 0.0131 - accuracy: 0.9957 - val_loss: 0.1508 - val_accuracy: 0.9695
Epoch 16/20
- 2s - loss: 0.0190 - accuracy: 0.9933 - val_loss: 0.1042 - val_accuracy: 0.9776
Epoch 17/20
- 2s - loss: 0.0121 - accuracy: 0.9964 - val_loss: 0.1338 - val_accuracy: 0.9739
Epoch 18/20
- 2s - loss: 0.0169 - accuracy: 0.9942 - val_loss: 0.1114 - val_accuracy: 0.9762
Epoch 19/20
- 2s - loss: 0.0082 - accuracy: 0.9973 - val_loss: 0.1286 - val_accuracy: 0.9742
Epoch 20/20
- 2s - loss: 0.0130 - accuracy: 0.9961 - val_loss: 0.1135 - val_accuracy: 0.9767
###Markdown
As it turns out, the optimizer does appear to play a crucial part in the validation score. In particular, the model which relies on 'Adam' as its optimizer tends to perform 1.5 - 2.5% better on average. Going forward, we will use 'Adam' as our optimizer of choice. What if we changed the learning rate from 0.1 to 0.01, or to 0.5? Would it have any impact on the accuracy? Model 2A
###Code
Inp = Input(shape=(784,))
x = Dense(n_hidden_1, activation='relu', name = "Hidden_Layer_1")(Inp)
x = Dense(n_hidden_2, activation='relu', name = "Hidden_Layer_2")(x)
x = Dense(n_hidden_3, activation='relu', name = "Hidden_Layer_3")(x)
x = Dense(n_hidden_4, activation='relu', name = "Hidden_Layer_4")(x)
output = Dense(num_digits, activation='softmax', name = "Output_Layer")(x)
learning_rate = 0.01
adam = keras.optimizers.Adam(lr=learning_rate)
model2a = Model(Inp, output)
model2a.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history2a = model2a.fit(X_train, y_train,
batch_size = batch_size,
epochs = training_epochs,
verbose = 2,
validation_data=(X_cv, y_cv))
###Output
Train on 33600 samples, validate on 8400 samples
Epoch 1/20
- 3s - loss: 0.3382 - accuracy: 0.8982 - val_loss: 0.1951 - val_accuracy: 0.9415
Epoch 2/20
- 2s - loss: 0.1244 - accuracy: 0.9616 - val_loss: 0.1244 - val_accuracy: 0.9626
Epoch 3/20
- 2s - loss: 0.0834 - accuracy: 0.9742 - val_loss: 0.0918 - val_accuracy: 0.9724
Epoch 4/20
- 2s - loss: 0.0594 - accuracy: 0.9808 - val_loss: 0.1027 - val_accuracy: 0.9701
Epoch 5/20
- 2s - loss: 0.0468 - accuracy: 0.9849 - val_loss: 0.0961 - val_accuracy: 0.9733
Epoch 6/20
- 2s - loss: 0.0328 - accuracy: 0.9898 - val_loss: 0.1082 - val_accuracy: 0.9707
Epoch 7/20
- 2s - loss: 0.0327 - accuracy: 0.9890 - val_loss: 0.1212 - val_accuracy: 0.9701
Epoch 8/20
- 2s - loss: 0.0263 - accuracy: 0.9915 - val_loss: 0.1058 - val_accuracy: 0.9742
Epoch 9/20
- 2s - loss: 0.0215 - accuracy: 0.9935 - val_loss: 0.1092 - val_accuracy: 0.9710
Epoch 10/20
- 2s - loss: 0.0245 - accuracy: 0.9924 - val_loss: 0.1133 - val_accuracy: 0.9715
Epoch 11/20
- 2s - loss: 0.0156 - accuracy: 0.9951 - val_loss: 0.1225 - val_accuracy: 0.9732
Epoch 12/20
- 2s - loss: 0.0203 - accuracy: 0.9935 - val_loss: 0.1070 - val_accuracy: 0.9768
Epoch 13/20
- 2s - loss: 0.0177 - accuracy: 0.9943 - val_loss: 0.1105 - val_accuracy: 0.9768
Epoch 14/20
- 2s - loss: 0.0143 - accuracy: 0.9957 - val_loss: 0.1255 - val_accuracy: 0.9726
Epoch 15/20
- 2s - loss: 0.0177 - accuracy: 0.9940 - val_loss: 0.1360 - val_accuracy: 0.9699
Epoch 16/20
- 2s - loss: 0.0135 - accuracy: 0.9955 - val_loss: 0.1051 - val_accuracy: 0.9786
Epoch 17/20
- 2s - loss: 0.0122 - accuracy: 0.9960 - val_loss: 0.1087 - val_accuracy: 0.9788
Epoch 18/20
- 2s - loss: 0.0066 - accuracy: 0.9979 - val_loss: 0.1275 - val_accuracy: 0.9765
Epoch 19/20
- 2s - loss: 0.0162 - accuracy: 0.9949 - val_loss: 0.1160 - val_accuracy: 0.9780
Epoch 20/20
- 2s - loss: 0.0085 - accuracy: 0.9976 - val_loss: 0.1221 - val_accuracy: 0.9786
###Markdown
Model 2B
###Code
Inp = Input(shape=(784,))
x = Dense(n_hidden_1, activation='relu', name = "Hidden_Layer_1")(Inp)
x = Dense(n_hidden_2, activation='relu', name = "Hidden_Layer_2")(x)
x = Dense(n_hidden_3, activation='relu', name = "Hidden_Layer_3")(x)
x = Dense(n_hidden_4, activation='relu', name = "Hidden_Layer_4")(x)
output = Dense(num_digits, activation='softmax', name = "Output_Layer")(x)
learning_rate = 0.5
adam = keras.optimizers.Adam(lr=learning_rate)
model2b = Model(Inp, output)
model2b.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history2b = model2b.fit(X_train, y_train,
batch_size = batch_size,
epochs = training_epochs,
validation_data=(X_cv, y_cv))
###Output
Train on 33600 samples, validate on 8400 samples
Epoch 1/20
33600/33600 [==============================] - 3s 83us/step - loss: 0.3336 - accuracy: 0.9034 - val_loss: 0.1445 - val_accuracy: 0.9580
Epoch 2/20
33600/33600 [==============================] - 2s 74us/step - loss: 0.1220 - accuracy: 0.9628 - val_loss: 0.1310 - val_accuracy: 0.9598
Epoch 3/20
33600/33600 [==============================] - 2s 74us/step - loss: 0.0783 - accuracy: 0.9753 - val_loss: 0.0977 - val_accuracy: 0.9706
Epoch 4/20
33600/33600 [==============================] - 2s 74us/step - loss: 0.0560 - accuracy: 0.9828 - val_loss: 0.1107 - val_accuracy: 0.9686
Epoch 5/20
33600/33600 [==============================] - 2s 73us/step - loss: 0.0495 - accuracy: 0.9833 - val_loss: 0.0896 - val_accuracy: 0.9745
Epoch 6/20
33600/33600 [==============================] - 2s 74us/step - loss: 0.0380 - accuracy: 0.9881 - val_loss: 0.0952 - val_accuracy: 0.9720
Epoch 7/20
33600/33600 [==============================] - 2s 74us/step - loss: 0.0273 - accuracy: 0.9912 - val_loss: 0.1012 - val_accuracy: 0.9755
Epoch 8/20
33600/33600 [==============================] - 2s 73us/step - loss: 0.0253 - accuracy: 0.9918 - val_loss: 0.0955 - val_accuracy: 0.9762
Epoch 9/20
33600/33600 [==============================] - 3s 74us/step - loss: 0.0196 - accuracy: 0.9939 - val_loss: 0.1082 - val_accuracy: 0.9730
Epoch 10/20
33600/33600 [==============================] - 2s 73us/step - loss: 0.0272 - accuracy: 0.9911 - val_loss: 0.1125 - val_accuracy: 0.9723
Epoch 11/20
33600/33600 [==============================] - 2s 74us/step - loss: 0.0191 - accuracy: 0.9939 - val_loss: 0.1230 - val_accuracy: 0.9724
Epoch 12/20
33600/33600 [==============================] - 2s 73us/step - loss: 0.0178 - accuracy: 0.9939 - val_loss: 0.0952 - val_accuracy: 0.9780
Epoch 13/20
33600/33600 [==============================] - 2s 74us/step - loss: 0.0154 - accuracy: 0.9950 - val_loss: 0.1007 - val_accuracy: 0.9762
Epoch 14/20
33600/33600 [==============================] - 2s 74us/step - loss: 0.0130 - accuracy: 0.9959 - val_loss: 0.1219 - val_accuracy: 0.9719
Epoch 15/20
33600/33600 [==============================] - 2s 74us/step - loss: 0.0184 - accuracy: 0.9941 - val_loss: 0.1176 - val_accuracy: 0.9729
Epoch 16/20
33600/33600 [==============================] - 3s 75us/step - loss: 0.0091 - accuracy: 0.9968 - val_loss: 0.1061 - val_accuracy: 0.9761
Epoch 17/20
33600/33600 [==============================] - 3s 75us/step - loss: 0.0104 - accuracy: 0.9963 - val_loss: 0.1414 - val_accuracy: 0.9714
Epoch 18/20
33600/33600 [==============================] - 2s 73us/step - loss: 0.0196 - accuracy: 0.9937 - val_loss: 0.1094 - val_accuracy: 0.9774
Epoch 19/20
33600/33600 [==============================] - 3s 75us/step - loss: 0.0117 - accuracy: 0.9962 - val_loss: 0.1208 - val_accuracy: 0.9750
Epoch 20/20
33600/33600 [==============================] - 2s 74us/step - loss: 0.0082 - accuracy: 0.9973 - val_loss: 0.1260 - val_accuracy: 0.9744
###Markdown
The accuracies measured for the 3 different learning rates 0.01, 0.1 and 0.5 are around 98%, 97% and 98% respectively. As there are no considerable gains from changing the learning rate, we stick with the default learning rate of 0.01. We proceed to fit a neural network with 5 hidden layers, with the hidden layer sizes set as (300, 100, 100, 100, 200) respectively. To ensure that the two models are comparable, we will keep the training epochs at 20 and the training batch size at 100.
###Code
# Input Parameters
n_input = 784 # number of features
n_hidden_1 = 300
n_hidden_2 = 100
n_hidden_3 = 100
n_hidden_4 = 100
n_hidden_5 = 200
num_digits = 10
Inp = Input(shape=(784,))
x = Dense(n_hidden_1, activation='relu', name = "Hidden_Layer_1")(Inp)
x = Dense(n_hidden_2, activation='relu', name = "Hidden_Layer_2")(x)
x = Dense(n_hidden_3, activation='relu', name = "Hidden_Layer_3")(x)
x = Dense(n_hidden_4, activation='relu', name = "Hidden_Layer_4")(x)
x = Dense(n_hidden_5, activation='relu', name = "Hidden_Layer_5")(x)
output = Dense(num_digits, activation='softmax', name = "Output_Layer")(x)
# Our model would have '7' layers - input layer, 5 hidden layer and 1 output layer
model3 = Model(Inp, output)
model3.summary() # We have 308,010 parameters to estimate
# We rely on 'Adam' as our optimizing methodology
adam = keras.optimizers.Adam(lr=0.01)
model3.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history3 = model3.fit(X_train, y_train,
batch_size = batch_size,
epochs = training_epochs,
validation_data=(X_cv, y_cv))
###Output
Train on 33600 samples, validate on 8400 samples
Epoch 1/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.3602 - accuracy: 0.8930 - val_loss: 0.1909 - val_accuracy: 0.9440
Epoch 2/20
33600/33600 [==============================] - 3s 76us/step - loss: 0.1276 - accuracy: 0.9603 - val_loss: 0.1247 - val_accuracy: 0.9630
Epoch 3/20
33600/33600 [==============================] - 3s 75us/step - loss: 0.0848 - accuracy: 0.9742 - val_loss: 0.1114 - val_accuracy: 0.9662
Epoch 4/20
33600/33600 [==============================] - 3s 76us/step - loss: 0.0619 - accuracy: 0.9805 - val_loss: 0.1020 - val_accuracy: 0.9704
Epoch 5/20
33600/33600 [==============================] - 3s 76us/step - loss: 0.0499 - accuracy: 0.9838 - val_loss: 0.0970 - val_accuracy: 0.9718
Epoch 6/20
33600/33600 [==============================] - 3s 75us/step - loss: 0.0411 - accuracy: 0.9874 - val_loss: 0.0905 - val_accuracy: 0.9750
Epoch 7/20
33600/33600 [==============================] - 3s 75us/step - loss: 0.0357 - accuracy: 0.9883 - val_loss: 0.1114 - val_accuracy: 0.9721
Epoch 8/20
33600/33600 [==============================] - 3s 76us/step - loss: 0.0281 - accuracy: 0.9910 - val_loss: 0.1251 - val_accuracy: 0.9690
Epoch 9/20
33600/33600 [==============================] - 3s 77us/step - loss: 0.0293 - accuracy: 0.9908 - val_loss: 0.1136 - val_accuracy: 0.9710
Epoch 10/20
33600/33600 [==============================] - 3s 75us/step - loss: 0.0246 - accuracy: 0.9922 - val_loss: 0.1172 - val_accuracy: 0.9720
Epoch 11/20
33600/33600 [==============================] - 3s 76us/step - loss: 0.0199 - accuracy: 0.9938 - val_loss: 0.1163 - val_accuracy: 0.9730
Epoch 12/20
33600/33600 [==============================] - 3s 76us/step - loss: 0.0189 - accuracy: 0.9939 - val_loss: 0.1243 - val_accuracy: 0.9717
Epoch 13/20
33600/33600 [==============================] - 3s 76us/step - loss: 0.0179 - accuracy: 0.9939 - val_loss: 0.1232 - val_accuracy: 0.9725
Epoch 14/20
33600/33600 [==============================] - 3s 75us/step - loss: 0.0200 - accuracy: 0.9935 - val_loss: 0.1372 - val_accuracy: 0.9690
Epoch 15/20
33600/33600 [==============================] - 3s 75us/step - loss: 0.0169 - accuracy: 0.9944 - val_loss: 0.1233 - val_accuracy: 0.9764
Epoch 16/20
33600/33600 [==============================] - 3s 76us/step - loss: 0.0169 - accuracy: 0.9945 - val_loss: 0.1179 - val_accuracy: 0.9751
Epoch 17/20
33600/33600 [==============================] - 3s 76us/step - loss: 0.0126 - accuracy: 0.9958 - val_loss: 0.1244 - val_accuracy: 0.9782
Epoch 18/20
33600/33600 [==============================] - 3s 75us/step - loss: 0.0111 - accuracy: 0.9967 - val_loss: 0.1236 - val_accuracy: 0.9744
Epoch 19/20
33600/33600 [==============================] - 3s 76us/step - loss: 0.0160 - accuracy: 0.9953 - val_loss: 0.1236 - val_accuracy: 0.9743
Epoch 20/20
33600/33600 [==============================] - 3s 76us/step - loss: 0.0103 - accuracy: 0.9965 - val_loss: 0.1255 - val_accuracy: 0.9762
###Markdown
Compared to our previous model, adding an additional layer did not significantly improve the accuracy. However, there are computational costs (in terms of complexity) in implementing an additional layer in our neural network. Given that the benefits of an additional layer are low while the costs are high, we will stick with the 4-layer neural network. We now proceed to include dropout (with a dropout rate of 0.3) in our second model to prevent overfitting.
###Code
# Input Parameters
n_input = 784 # number of features
n_hidden_1 = 300
n_hidden_2 = 100
n_hidden_3 = 100
n_hidden_4 = 200
num_digits = 10
Inp = Input(shape=(784,))
x = Dense(n_hidden_1, activation='relu', name = "Hidden_Layer_1")(Inp)
x = Dropout(0.3)(x)
x = Dense(n_hidden_2, activation='relu', name = "Hidden_Layer_2")(x)
x = Dropout(0.3)(x)
x = Dense(n_hidden_3, activation='relu', name = "Hidden_Layer_3")(x)
x = Dropout(0.3)(x)
x = Dense(n_hidden_4, activation='relu', name = "Hidden_Layer_4")(x)
output = Dense(num_digits, activation='softmax', name = "Output_Layer")(x)
# Our model would have '6' layers - input layer, 4 hidden layer and 1 output layer
model4 = Model(Inp, output)
model4.summary() # We have 297,910 parameters to estimate
model4.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model4.fit(X_train, y_train,
batch_size = batch_size,
epochs = training_epochs,
validation_data=(X_cv, y_cv))
###Output
Train on 33600 samples, validate on 8400 samples
Epoch 1/20
33600/33600 [==============================] - 3s 95us/step - loss: 0.5821 - accuracy: 0.8137 - val_loss: 0.1895 - val_accuracy: 0.9423
Epoch 2/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.2281 - accuracy: 0.9334 - val_loss: 0.1350 - val_accuracy: 0.9606
Epoch 3/20
33600/33600 [==============================] - 3s 85us/step - loss: 0.1768 - accuracy: 0.9492 - val_loss: 0.1135 - val_accuracy: 0.9664
Epoch 4/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.1422 - accuracy: 0.9585 - val_loss: 0.1071 - val_accuracy: 0.9693
Epoch 5/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.1218 - accuracy: 0.9634 - val_loss: 0.0935 - val_accuracy: 0.9742
Epoch 6/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.1063 - accuracy: 0.9680 - val_loss: 0.0861 - val_accuracy: 0.9744
Epoch 7/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.0965 - accuracy: 0.9715 - val_loss: 0.0931 - val_accuracy: 0.9736
Epoch 8/20
33600/33600 [==============================] - 3s 92us/step - loss: 0.0880 - accuracy: 0.9743 - val_loss: 0.0865 - val_accuracy: 0.9761
Epoch 9/20
33600/33600 [==============================] - 3s 87us/step - loss: 0.0805 - accuracy: 0.9762 - val_loss: 0.0923 - val_accuracy: 0.9754
Epoch 10/20
33600/33600 [==============================] - 3s 85us/step - loss: 0.0764 - accuracy: 0.9776 - val_loss: 0.0910 - val_accuracy: 0.9769
Epoch 11/20
33600/33600 [==============================] - 3s 85us/step - loss: 0.0689 - accuracy: 0.9795 - val_loss: 0.0863 - val_accuracy: 0.9760
Epoch 12/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.0659 - accuracy: 0.9806 - val_loss: 0.0871 - val_accuracy: 0.9773
Epoch 13/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.0614 - accuracy: 0.9811 - val_loss: 0.0862 - val_accuracy: 0.9785
Epoch 14/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.0582 - accuracy: 0.9827 - val_loss: 0.0926 - val_accuracy: 0.9761
Epoch 15/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.0541 - accuracy: 0.9839 - val_loss: 0.0866 - val_accuracy: 0.9786
Epoch 16/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.0560 - accuracy: 0.9832 - val_loss: 0.0937 - val_accuracy: 0.9775
Epoch 17/20
33600/33600 [==============================] - 3s 85us/step - loss: 0.0503 - accuracy: 0.9855 - val_loss: 0.0913 - val_accuracy: 0.9798
Epoch 18/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.0480 - accuracy: 0.9856 - val_loss: 0.0794 - val_accuracy: 0.9806
Epoch 19/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.0464 - accuracy: 0.9858 - val_loss: 0.0930 - val_accuracy: 0.9789
Epoch 20/20
33600/33600 [==============================] - 3s 84us/step - loss: 0.0461 - accuracy: 0.9862 - val_loss: 0.0870 - val_accuracy: 0.9788
###Markdown
With a validation score of close to 98%, we proceed to use this model to predict for the test set.
###Code
test_pred = pd.DataFrame(model4.predict(X_test, batch_size=200))
test_pred = pd.DataFrame(test_pred.idxmax(axis = 1))
test_pred.index.name = 'ImageId'
test_pred = test_pred.rename(columns = {0: 'Label'}).reset_index()
test_pred['ImageId'] = test_pred['ImageId'] + 1
test_pred.head()
test_pred.to_csv('mnist_submission.csv', index = False)
###Output
_____no_output_____ |
assignment_no_2_day_5.ipynb | ###Markdown
###Code
lower = 1
upper = 2500
print("prime numbers between", lower, "and", upper, "are:")
for num in range(lower, upper + 1):
    if num > 1:
        # for-else: the else branch runs only when the loop finds no divisor,
        # so each prime is printed exactly once
        for i in range(2, num):
            if num % i == 0:
                break
        else:
            print(num)
###Output
_____no_output_____
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2477
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2479
2481
2483
2483
2483
2483
2483
2483
2483
2483
2483
2483
2483
2485
2485
2485
2487
2489
2489
2489
2489
2489
2489
2489
2489
2489
2489
2489
2489
2489
2489
2489
2489
2489
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2491
2493
2495
2495
2495
2497
2497
2497
2497
2497
2497
2497
2497
2497
2499
|
examples/geospatial_viz.ipynb | ###Markdown
This notebook provides some examples of visualizations made with FracMan outputs, outside of the normal FracMan capabilities. Many of these visualizations are geospatial and require GeoPandas. The best (and by far the easiest) way to install GeoPandas is using Miniconda (https://docs.conda.io/en/latest/miniconda.html), e.g. with `conda install -c conda-forge geopandas`.
###Code
from pathlib import Path
from typing import Tuple
import geopandas as gpd
import pandas as pd
import numpy as np
from sklearn import linear_model
from shapely.geometry import LineString
from pyfracman.data import clean_columns
from pyfracman.fab import parse_fab_file
from pyfracman.frac_geo import flatten_frac, get_mid_z
from pyfracman.well_geo import (
load_stage_location,
load_survey_export,
well_surveys_to_linestrings,
stage_locs_to_gdf
)
data_dir = Path(r"C:\Users\scott.mckean\Desktop\spider_plots")
### Load fractures ###
frac_fpath = next(data_dir.rglob("Well_Connected_Fracs_1.fab"))
fracs = parse_fab_file(frac_fpath)
# load fracture properties
prop_df = pd.DataFrame(fracs['prop_list'], columns = fracs['prop_dict'].values(), index=fracs['fid'])
prop_df.index.set_names('fid', inplace=True)
# load fracture geometry and flatten to 2D at midpoint of frac plane
frac_linestrings = list(map(flatten_frac, fracs['fracs']))
frac_mid_z = list(map(get_mid_z, fracs['fracs']))
frac_gdf = gpd.GeoDataFrame(prop_df, geometry=frac_linestrings)
### load stages and surveys ###
# load surveys and convert to linestrings
surveys = pd.concat(
[load_survey_export(well_path) for well_path in data_dir.glob("*_well.txt")]
)
survey_linestrings = well_surveys_to_linestrings(surveys)
# load stage locations and convert to GDF with points and linestring
stage_locs = pd.concat(
[load_stage_location(well_path) for well_path in data_dir.glob("*_intervals.txt")]
)
stage_gdf = stage_locs_to_gdf(stage_locs)
def get_fracture_set_stats(fpath: Path, set_name: str, set_alias: str) -> pd.DataFrame:
"""Parse a connection export from FracMan to get the Fracture set statistics
Args:
fpath (Path): file path
        set_name (str): name of fracture set to summarize
        set_alias (str): short prefix used to label the output columns
Returns:
pd.DataFrame: Dataframe with statistics
"""
conn = pd.read_csv(fpath, delim_whitespace=True).rename(columns={"Set_Name":"FractureSet"})
conn.columns = clean_columns(conn.columns)
# get set ids
set_ids = conn.query("fractureset == @set_name").groupby('object')['fracid'].apply(list)
set_ids.name = set_alias + "_ids"
# get set counts
set_ct = conn.query("fractureset == @set_name").groupby('object').count().iloc[:,0]
set_ct.name = set_alias + "_count"
# get stage counts
stages = conn.groupby("object").count().reset_index().rename(columns={'FractureSet':'interactions'})
stages['stage_no'] = stages.object.str.split("_").str[-1]
stages['well'] = stages.object.str.split("_").str[-3]
    stages = (stages[['stage_no','well','object']]
        .merge(set_ids, left_on='object', right_index=True)
        .merge(set_ct, left_on='object', right_index=True)
    )
    return stages
### load results ###
stage_dfs = []
for conn_fpath in data_dir.rglob("*_Connections.txt"):
    # NOTE: the set name and alias below are placeholders -- substitute the
    # fracture set(s) actually defined in the FracMan model
    stage_dfs.append(get_fracture_set_stats(conn_fpath, "Set_A", "set_a"))
conn.query("object == 'StageConnection_A6_Stage_1'")
stages
set_a_ids
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
survey_linestrings.plot(ax = ax)
frac_gdf.plot(ax = ax, color='r')
stage_gdf.set_geometry('stg_line').plot(ax = ax, color='k')
ax.set_aspect('equal')
plt.xlim(-250, 2000)
plt.ylim(-1250, 1000)
plt.show()
# Load results
# Make plot
# Make spider plot
# Redo histograms
###Output
_____no_output_____ |
Pymaceuticals/pymaceuticals_starterRaulVilla.ipynb | ###Markdown
Observations and Insights
###Code
# Dependencies and Setup
%matplotlib notebook
%pylab inline
import matplotlib.pyplot as plt
import pandas as pd
import itertools as it
import seaborn as sns
import scipy.stats as st
import numpy as np
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
study_results
mouse_metadata
# Combine the data into a single dataset
combined_mouse_data = study_results.join(mouse_metadata.set_index('Mouse ID'),
on='Mouse ID')
# Display the data table for preview
combined_mouse_data
# Checking the number of data points recorded for each mouse
combined_mouse_data["Mouse ID"].value_counts()
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
con_duplicate = combined_mouse_data.duplicated(subset=[ 'Mouse ID','Timepoint'])
con_duplicate.value_counts()
# Optional: Get all the data for the duplicate mouse ID.
duplicates_f = combined_mouse_data[con_duplicate]
duplicates_f
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_mouse_data = combined_mouse_data[~combined_mouse_data["Mouse ID"].isin(duplicates_f["Mouse ID"])]
clean_mouse_data
# Checking the number of mice in the clean DataFrame.
clean_mouse_data["Mouse ID"].value_counts()
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
grouped_drugs = clean_mouse_data.groupby(["Drug Regimen"])
# This method is the most straightforward, creating multiple series and putting them all together at the end.
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
drug_mean = grouped_drugs.mean()["Tumor Volume (mm3)"]
drug_mean.name = "Mean"
drug_median = grouped_drugs.median()["Tumor Volume (mm3)"]
drug_median.name = "Median"
drug_var = grouped_drugs.var()["Tumor Volume (mm3)"]
drug_var.name = "Variance"
drug_std = grouped_drugs.std()["Tumor Volume (mm3)"]
drug_std.name = "Standard Deviation"
drug_sem = grouped_drugs.sem()["Tumor Volume (mm3)"]
drug_sem.name = "SEM"
# Combine the individual series computed above into a single summary table
summ_stats_by_drugs = pd.concat([drug_mean,
drug_median,
drug_var,
drug_std,
drug_sem], axis = 1).reset_index()
summ_stats_by_drugs
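# Alternative (not part of the original starter): the same summary produced
# in a single groupby/agg call on the tumor volume column
summary_stats_agg = grouped_drugs["Tumor Volume (mm3)"].agg(
    ["mean", "median", "var", "std", "sem"])
summary_stats_agg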
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pandas.
mouse_by_drug = grouped_drugs['Mouse ID'].nunique()
mouse_by_drug.plot(figsize=(4,4),kind='bar')
# Generate a bar plot showing the total number of mice for each treatment throughout the course of the study using pyplot.
fig1, ax1 = plt.subplots(figsize = (5,5))
ax1.bar(mouse_by_drug.index, mouse_by_drug.values)
plt.xticks(rotation=90)
plt.ylim(10,26)
plt.title("Number of Mice Per Drug Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Mice")
fig1.tight_layout()
# Generate a pie plot showing the distribution of female versus male mice using pandas
mouse_by_gen = clean_mouse_data["Sex"].value_counts()
mouse_by_gen
# Generate a pie plot showing the distribution of female versus male mice using pyplot
fig2, ax2 = plt.subplots(figsize=(5,5))
ax2.pie(mouse_by_gen, labels=("Male", "Female"), autopct="%1.1f%%")
plt.title("Mice Gender Breakdown")
fig2.tight_layout()
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
grouped_mice = clean_mouse_data.groupby(["Mouse ID",
"Drug Regimen"])
# Capomulin, Ramicane, Infubinol, and Ceftamin
filtered_drugs = ("Capomulin","Ramicane","Infubinol","Ceftamin")
df_filtered_drugs = pd.DataFrame({"Drug Regimen":filtered_drugs})
max_timepoints = grouped_mice["Timepoint"].max()
max_tp = max_timepoints.to_frame()
max_tp = max_tp.reset_index()
max_tp
# Start by getting the last (greatest) timepoint for each mouse
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
# Put treatments into a list for for loop (and later for plot labels)
drugs = clean_mouse_data["Drug Regimen"].unique().tolist()
sorted_drugs = sorted(drugs)
sorted_drugs
# Create empty list to fill with tumor vol data (for plotting)
tumor_vols = []
# Merge the greatest timepoints back with the data to get each mouse's final tumor volume
final_tumor = max_tp.merge(clean_mouse_data, on=["Mouse ID", "Drug Regimen", "Timepoint"], how="left")
for drug in filtered_drugs:
    # Locate the rows which contain mice on each drug and get the tumor volumes
    vols = final_tumor.loc[final_tumor["Drug Regimen"] == drug, "Tumor Volume (mm3)"]
    # add subset
    tumor_vols.append(vols)
    # Calculate the IQR and determine outliers using upper and lower bounds
    q1, q3 = vols.quantile(0.25), vols.quantile(0.75)
    iqr = q3 - q1
    outliers = vols[(vols < q1 - 1.5 * iqr) | (vols > q3 + 1.5 * iqr)]
    print(f"{drug}: potential outliers -> {outliers.values}")
# Generate a box plot of the final tumor volume of each mouse across the four regimens of interest
fig3, ax3 = plt.subplots(figsize=(6, 5))
ax3.boxplot(tumor_vols, labels=filtered_drugs)
ax3.set_ylabel("Final Tumor Volume (mm3)")
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
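# A possible completion (not part of the original starter): plot one
# Capomulin-treated mouse over time; any Capomulin mouse ID works here.
capomulin_df = clean_mouse_data[clean_mouse_data["Drug Regimen"] == "Capomulin"]
example_mouse = capomulin_df["Mouse ID"].iloc[0]
single_mouse = capomulin_df[capomulin_df["Mouse ID"] == example_mouse].sort_values("Timepoint")
fig4, ax4 = plt.subplots(figsize=(5, 4))
ax4.plot(single_mouse["Timepoint"], single_mouse["Tumor Volume (mm3)"], marker="o")
ax4.set_title(f"Capomulin treatment of mouse {example_mouse}")
ax4.set_xlabel("Timepoint (days)")
ax4.set_ylabel("Tumor Volume (mm3)")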
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
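# A possible completion: average tumor volume per mouse against mouse weight.
# Assumes the metadata weight column is named "Weight (g)", as in the source CSV.
capomulin_avg = capomulin_df.groupby("Mouse ID").agg(
    weight=("Weight (g)", "mean"), avg_tumor=("Tumor Volume (mm3)", "mean"))
fig5, ax5 = plt.subplots(figsize=(5, 4))
ax5.scatter(capomulin_avg["weight"], capomulin_avg["avg_tumor"])
ax5.set_xlabel("Weight (g)")
ax5.set_ylabel("Average Tumor Volume (mm3)")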
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
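# A possible completion, reusing capomulin_avg from the scatter-plot cell above
# (st is scipy.stats, imported at the top of this notebook).
corr_coef = st.pearsonr(capomulin_avg["weight"], capomulin_avg["avg_tumor"])[0]
print(f"Correlation between mouse weight and average tumor volume: {corr_coef:.2f}")
slope, intercept, rvalue, pvalue, stderr = st.linregress(
    capomulin_avg["weight"], capomulin_avg["avg_tumor"])
fig6, ax6 = plt.subplots(figsize=(5, 4))
ax6.scatter(capomulin_avg["weight"], capomulin_avg["avg_tumor"])
ax6.plot(capomulin_avg["weight"], slope * capomulin_avg["weight"] + intercept, color="red")
ax6.set_xlabel("Weight (g)")
ax6.set_ylabel("Average Tumor Volume (mm3)")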
###Output
_____no_output_____ |
adopt/jordan-make-strata.ipynb | ###Markdown
CREATE CAMPAIGN
###Code
from adopt.facebook.update import Instruction
from adopt.malaria import run_instructions
def create_campaign(name):
params = {
"name": name,
"objective": "MESSAGES",
"status": "PAUSED",
"special_ad_categories": [],
}
return Instruction("campaign", "create", params)
# create_campaign_for_user(USER, CAMPAIGN, db_conf)
# run_instructions([create_campaign(AD_CAMPAIGN)], state)
# cid = next(c for c in state.campaigns if c['name'] == AD_CAMPAIGN)['id']
# run_instructions([Instruction("campaign", "update", {"status": "PAUSED"}, cid)], state)
###Output
_____no_output_____
###Markdown
BASIC CONF
###Code
c = {'optimization_goal': 'REPLIES',
'destination_type': 'MESSENGER',
'adset_hours': 48,
'budget': 1400000.0,
'min_budget': 100.0,
'opt_window': 2*24,
'end_date': '2021-05-25',
'proportional': True,
'page_id': PAGE_ID,
'instagram_id': None,
'ad_account': AD_ACCOUNT,
'ad_campaign': AD_CAMPAIGN}
config = CampaignConf(**c)
create_campaign_confs(CAMPAIGNID, "opt", [config._asdict()], db_conf)
###Output
_____no_output_____
###Markdown
AUDIENCES
###Code
SURVEY_SHORTCODES
from adopt.marketing import dict_from_nested_type
import typedjson
import json
audiences = [
{
"name": f"vlab-vacc-{COUNTRY_CODE}-education-low",
"shortcodes": SURVEY_SHORTCODES,
"subtype": "LOOKALIKE",
"lookalike": {
"name": f"vlab-vacc-{COUNTRY_CODE}-education-low-lookalike",
"target": 1100,
"spec": {
"country": COUNTRY_CODE,
"starting_ratio": 0.0,
"ratio": 0.2
}
},
"question_targeting": {"op": "or",
"vars": [
{"op": "equal", "vars": [
{"type": "response", "value": "education"},
{"type": "constant", "value": "A"}]},
{"op": "equal", "vars": [
{"type": "response", "value": "education"},
{"type": "constant", "value": "B"}]}
]}
},
{
"name": f"vlab-vacc-{COUNTRY_CODE}-education-high",
"shortcodes": SURVEY_SHORTCODES,
"subtype": "LOOKALIKE",
"lookalike": {
"name": f"vlab-vacc-{COUNTRY_CODE}-education-high-lookalike",
"target": 1100,
"spec": {
"country": COUNTRY_CODE,
"starting_ratio": 0.0,
"ratio": 0.2
}
},
"question_targeting": {"op": "or",
"vars": [
{"op": "equal", "vars": [
{"type": "response", "value": "education"},
{"type": "constant", "value": "C"}]},
{"op": "equal", "vars": [
{"type": "response", "value": "education"},
{"type": "constant", "value": "D"}]}
]}
},
{
"name": RESPONDENT_AUDIENCE,
"shortcodes": [INITIAL_SHORTCODE],
"subtype": "CUSTOM"
},
]
audience_confs = [typedjson.decode(AudienceConf, c) for c in audiences]
confs = [dict_from_nested_type(a) for a in audience_confs]
create_campaign_confs(CAMPAIGNID, "audience", confs, db_conf)
###Output
_____no_output_____
###Markdown
CREATIVES
###Code
from mena.strata import generate_creative_confs
images = {i['name']: i for i in state.account.get_ad_images(fields=['name', 'hash'])}
creative_confs, image_confs = generate_creative_confs(CREATIVE_FILE, INITIAL_SHORTCODE, images)
create_campaign_confs(CAMPAIGNID, "creative", creative_confs, db_conf)
###Output
_____no_output_____
###Markdown
STRATA
###Code
from mena.strata import get_adsets, extraction_confs, hyphen_case
from itertools import product
from mena.strata import format_group_product
template_state = CampaignState(userinfo.token,
get_api(env, userinfo.token),
AD_ACCOUNT,
TEMPLATE_CAMPAIGN)
a, g, l = get_adsets(template_state, extraction_confs)
variables = [
{ "name": "age", "source": "facebook", "conf": a},
{ "name": "gender", "source": "facebook", "conf": g},
{ "name": "location", "source": "facebook", "conf": l},
{ "name": "education", "source": "survey", "conf": {
"levels": [{"name": "A+B",
"audiences": [f"vlab-vacc-{COUNTRY_CODE}-education-low-lookalike"],
"excluded_audiences": [f"vlab-vacc-{COUNTRY_CODE}-education-high-lookalike"],
"question_targeting": {"op": "or",
"vars": [
{"op": "equal", "vars": [
{"type": "response", "value": "education"},
{"type": "constant", "value": "A"}]},
{"op": "equal", "vars": [
{"type": "response", "value": "education"},
{"type": "constant", "value": "B"}]}
]}
},
{"name": "C+D",
"audiences": [],
"excluded_audiences": [f"vlab-vacc-{COUNTRY_CODE}-education-low-lookalike"],
"question_targeting": {"op": "or",
"vars": [
{"op": "equal", "vars": [
{"type": "response", "value": "education"},
{"type": "constant", "value": "C"}]},
{"op": "equal", "vars": [
{"type": "response", "value": "education"},
{"type": "constant", "value": "D"}]}
]}
}
]}}
]
share_lookup = pd.read_csv(DISTRIBUTION_FILE, header=[0,1,2], index_col=[0])
share = share_lookup.T.reset_index().melt(id_vars=DISTRIBUTION_VARS,
var_name='location',
value_name='percentage')
groups = product(*[[(v['name'], v['source'], l) for l in v['conf']['levels']] for v in variables])
groups = [format_group_product(g, share) for g in groups]
ALL_CREATIVES = [t['name'] for t in image_confs]
def make_stratum(id_, quota, c):
return { 'id': id_,
'metadata': {**c['metadata'], **EXTRA_METADATA},
'facebook_targeting': c['facebook_targeting'],
'creatives': ALL_CREATIVES,
'audiences': c['audiences'],
'excluded_audiences': [*c['excluded_audiences'], RESPONDENT_AUDIENCE],
'quota': float(quota),
'shortcodes': SURVEY_SHORTCODES,
'question_targeting': c['question_targeting']}
from adopt.marketing import StratumConf, QuestionTargeting
import typedjson
strata = [make_stratum(*g) for g in groups]
strata_data = [dict_from_nested_type(typedjson.decode(StratumConf, c)) for c in strata]
create_campaign_confs(CAMPAIGNID, "stratum", strata_data, db_conf)
###Output
_____no_output_____
###Markdown
TESTING
###Code
mal = load_basics(CAMPAIGNID, env)
%%time
from adopt.malaria import update_ads_for_campaign, update_audience_for_campaign
# instructions, report = update_ads_for_campaign(mal)
instructions, report = update_audience_for_campaign(mal)
from adopt.malaria import run_instructions
from adopt.facebook.update import Instruction, GraphUpdater
run_instructions(instructions, mal.state)
import pandas as pd
from adopt.campaign_queries import get_last_adopt_report
rdf = pd.DataFrame(get_last_adopt_report(CAMPAIGNID, "FACEBOOK_ADOPT", mal.db_conf)).T
###Output
_____no_output_____ |
Chapter04/Performing merging, joins, concatenation, and grouping.ipynb | ###Markdown
Configuring pandas
###Code
# import numpy and pandas
import numpy as np
import pandas as pd
# used for dates
import datetime
from datetime import datetime, date
# Set some pandas options controlling output format
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', 8)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 60)
# bring in matplotlib for graphics
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
 Merging and joining data
###Code
# these are our customers
customers = {'CustomerID': [10, 11],
'Name': ['Mike', 'Marcia'],
'Address': ['Address for Mike',
'Address for Marcia']}
customers = pd.DataFrame(customers)
customers
# and these are the orders made by our customers
# they are related to customers by CustomerID
orders = {'CustomerID': [10, 11, 10],
'OrderDate': [date(2014, 12, 1),
date(2014, 12, 1),
date(2014, 12, 1)]}
orders = pd.DataFrame(orders)
orders
# merge customers and orders so we can ship the items
customers.merge(orders)
# data to be used in the remainder of this section's examples
left_data = {'key1': ['a', 'b', 'c'],
'key2': ['x', 'y', 'z'],
'lval1': [ 0, 1, 2]}
right_data = {'key1': ['a', 'b', 'c'],
'key2': ['x', 'a', 'z'],
'rval1': [ 6, 7, 8 ]}
left = pd.DataFrame(left_data, index=[0, 1, 2])
right = pd.DataFrame(right_data, index=[1, 2, 3])
left
right
# demonstrate merge without specifying columns to merge
# this will implicitly merge on all common columns
left.merge(right)
# demonstrate merge using an explicit column
# on needs the value to be in both DataFrame objects
left.merge(right, on='key1')
# merge explicitly using two columns
left.merge(right, on=['key1', 'key2'])
# join on the row indices of both matrices
pd.merge(left, right, left_index=True, right_index=True)
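# Not in the original chapter text: the how= parameter controls the join type.
# 'inner' (the default) keeps only matching keys, 'outer' keeps all keys from
# both sides and fills the gaps with NaN, and 'left'/'right' keep one side's keys.
left.merge(right, on=['key1', 'key2'], how='outer')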
###Output
_____no_output_____ |
notebook/Original-Proposal.ipynb | ###Markdown
Table of Contents
0  [Re] A common representation of time across visual and auditory modalities
1  Abstract (text)
2  Imports Packages (code)
3  Introduction: (text)
4  Materials and methods (text and code)
4.1  Participants (text)
4.1.1  Set the number of volunteers (code)
4.1.2  Download Data (code)
4.1.3  Reading process (code)
4.1.3.1  Checking the shape in arrays
4.2  Experimental design (text)
4.3  Behavioural analysis (text)
4.4  EEG recordings and pre-processing (text and code)
4.4.1  Frequency Information (code)
4.5  Multivariate pattern analysis – MVPA (text)
4.5.1  Similarity between different trials and modalities (text)
4.5.2  Decoding the elapsed interval (text)
4.5.2.1  Decoding Code (code)
4.5.2.2  Joining groups and X (code)
4.5.2.3  Within modality classification (text)
4.5.2.4  Between modalities classification (text)
4.5.3  Correlation of decoding and behaviour (text)
5  Classification process
5.1  With PCA processing
5.2  Between modalities
6  Checking the labels assigned to the averages
7  Export to the Auto-Encoder()
[Re] [A common representation of time across visual and auditory modalities](../docs/a-common-representation.pdf)
Louise C. Barne1, João R. Sato1, Raphael Y. de Camargo1, Peter M. E. Claessens1, Marcelo S. Caetano1, André M. Cravo1*
> 1 Centro de Matemática, Computação e Cognição, Universidade Federal do ABC (UFABC), Rua Arcturus, 03. Bairro Jardim Antares, São Bernardo do Campo, CEP 09606-070 SP, Brazil
*[email protected]
Abstract (text)
Humans' and non-human animals' ability to process time on the scale of milliseconds and seconds is essential for adaptive behaviour. A central question of how brains keep track of time is how specific temporal information is across different sensory modalities. In the present study, we show that the encoding of temporal intervals in the auditory and visual modalities is qualitatively similar. Human participants were instructed to reproduce intervals in the range from 750 ms to 1500 ms marked by auditory or visual stimuli. Our behavioural results suggest that, although participants were more accurate in reproducing intervals marked by auditory stimuli, there was a strong correlation in performance between modalities. Using multivariate pattern analysis in scalp EEG, we show that activity during late periods of the intervals was similar within and between modalities. Critically, we show that a multivariate pattern classifier was able to accurately predict the elapsed interval, even when trained on an interval marked by a stimulus of a different sensory modality. Taken together, our results suggest that, while there are differences in the processing of intervals marked by auditory and visual stimuli, they also share a common neural representation.
Keywords: Time perception, Multivariate pattern analysis, EEG, Vision, Audition
---
Responsible for the reproduction of the results: [Bruno Aristimunha](https://github.com/bruAristimunha).
The goals of this work are to:
- find common representations across different sensory modalities;
- improve the sensitivity of subsequent MVPA analyses;
- make a reproducible report of the previously reported results.
Advisors: [Raphael Y. de Camargo](https://rycamargo.wixsite.com/home) and [André Cravo](https://bv.fapesp.br/pt/pesquisador/65843/andre-mascioli-cravo).
***
This work follows the structure below: Imports Packages (code)Imports the packages used by the jupyter paper
###Code
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score, train_test_split, LeaveOneOut
from tqdm.autonotebook import tqdm
from sklearn import preprocessing
from sklearn.metrics import classification_report, cohen_kappa_score
from sklearn.naive_bayes import GaussianNB
from sklearn.decomposition import PCA
import sys
import matplotlib
import seaborn as sns
import scipy.io as sio
import scipy as sc
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib outline
sns.set()
plt.style.use('seaborn-white')
sys.path.append("../src/data/")
sys.path.append("../src/tools/")
sys.path.append("../src/models/")
sys.path.append("../src/external/")
sys.path.append("../src/visualization/")
from file import *
from conversion import *
from exposure import _exposure_1, _exposure_2, _exposure_1_bad, _exposure_2_bad, get_group_time, fixing_bad_trials, get_last_125ms
from classification import *
from statistical_tests import *
from report import *
###Output
1 items had no tests:
__main__
0 tests in 1 items.
0 passed and 0 failed.
Test passed.
###Markdown
Introduction: (text)The ability to estimate time is essential for humans and non-human animals to interact with their environment ([Buhusi and Meck, 2005](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib5&sa=D&ust=1580959905399000),[ ](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib27&sa=D&ust=1580959905399000)[Mauk and Buonomano, 2004](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib27&sa=D&ust=1580959905400000),[ ](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib28&sa=D&ust=1580959905400000)[Merchant et al., 2013a](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib28&sa=D&ust=1580959905401000)). Intervals in the range of hundreds of milliseconds to seconds are critical for sensory and motor processing, learning, and cognition ([Buhusi and Meck, 2005](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib5&sa=D&ust=1580959905401000),[ ](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib27&sa=D&ust=1580959905402000)[Mauk and Buonomano, 2004](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib27&sa=D&ust=1580959905402000),[ ](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib28&sa=D&ust=1580959905403000)[Merchant et al., 2013a](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib28&sa=D&ust=1580959905403000)). However, the mechanisms underlying temporal processing in this range are still largely discussed. A central unanswered question is whether temporal processing depends on dedicated or intrinsic circuits ([Ivry and Schlerf, 2008](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib16&sa=D&ust=1580959905404000)). Dedicated models propose that temporal perception depends on central specialised mechanisms, as an internal clock, that create a unified perception of time ([Ivry and Schlerf, 2008](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib16&sa=D&ust=1580959905404000)). This class of models can account for behavioural findings such as correlations in performance for some temporal tasks ([Keele et al., 1985](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib20&sa=D&ust=1580959905405000)) and the observation that learning to discriminate a temporal interval in one sensory modality can sometimes be transferred to other modalities ([Bueti and Buonomano, 2014](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib4&sa=D&ust=1580959905405000)).Intrinsic models of time propose that a variety of neural circuits distributed across the brain are capable of temporal processing. One of the most known examples is the state-dependent network - SDN ([Mauk and Buonomano, 2004](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib27&sa=D&ust=1580959905406000)). Within this framework, neural circuits can take advantage of the natural temporal evolution of its states to keep track of time ([Mauk and Buonomano, 2004](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib27&sa=D&ust=1580959905407000)). 
One of the main advantages of such models is that they can explain the known differences of temporal processing across sensory modalities ([van Wassenhove, 2009](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib38&sa=D&ust=1580959905407000)) and that learning a specific interval does not commonly improve temporal performance in other intervals ([Bueti and Buonomano, 2014](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib4&sa=D&ust=1580959905408000)).Given that both dedicated and intrinsic views can account for some results while not explaining others, there has been an increase in interest in hybrid models, according to which local task-dependent areas interact with a higher central timing system ([Merchant et al., 2013a](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib28&sa=D&ust=1580959905409000),[ ](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib39&sa=D&ust=1580959905409000)[Wiener et al., 2011](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib39&sa=D&ust=1580959905410000)). The main advantage of hybrid models is that they can explain why performance in some timing tasks seems to be correlated across participants, while still exhibiting modality and task-related differences ([Merchant et al., 2008b](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib33&sa=D&ust=1580959905410000),[ ](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib32&sa=D&ust=1580959905411000)[Merchant et al., 2008a](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib32&sa=D&ust=1580959905411000)).In humans, studies that investigate these different models employ a variety of methods, such as behavioural, neuroimaging and neuropharmacological manipulations, on healthy participants and neurological patients ([Coull et al., 2011](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib9&sa=D&ust=1580959905412000),[ ](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib16&sa=D&ust=1580959905413000)[Ivry and Schlerf, 2008](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib16&sa=D&ust=1580959905413000),[ ](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib21&sa=D&ust=1580959905414000)[Kononowicz et al., 2016](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib21&sa=D&ust=1580959905414000),[ ](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib28&sa=D&ust=1580959905415000)[Merchant et al., 2013a](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib28&sa=D&ust=1580959905415000),[ ](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib39&sa=D&ust=1580959905416000)[Wiener et al., 2011](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib39&sa=D&ust=1580959905416000)). Although the high temporal resolution of EEG should in principle be optimal to track neural processing during temporal tasks, the contribution of these methods has been controversial. One of the main difficulties is the absence of a clear electrophysiological correlate of temporal processing (for a recent review see ([Kononowicz et al., 2016](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib21&sa=D&ust=1580959905417000))). 
This lack of electrophysiological markers makes it hard to judge, for example, whether temporal processing in different modalities share a common representation ([N'Diaye et al., 2004](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib34&sa=D&ust=1580959905417000)).In a recent study, we have shown that multivariate pattern analysis (MVPA) can reveal spatiotemporal dynamics of brain activity related to temporal processing ([Bueno et al., 2017](https://www.sciencedirect.com/science/article/pii/S0028393218304913%23bib3&sa=D&ust=1580959905418000)). Multivariate approaches can take advantage of small differences in the signal across electrodes that might not be detectable using classical EEG methods. These pattern recognition methods allow the assessment of whether brain states evoked by different tasks, stimuli and sensory modalities are qualitatively similar.In the present study, we investigated whether encoding of temporal intervals in different sensory modalities is qualitatively similar. Our behavioural results suggest that, although participants are more accurate in reproducing intervals marked by auditory stimuli, there is a strong correlation over observers in performance between modalities. Critically, we show that a multivariate pattern classifier based on EEG activity can predict the elapsed interval, even when trained on an interval marked by a different sensory modality. Taken together, our results suggest that, while there are differences in the processing of intervals marked by auditory and visual stimuli, they also share a common neural representation.--- Materials and methods (text and code) Participants (text)Twenty volunteers (age range, 18–30 years; 11 female) gave informed consent to participate in this study. All of them had normal or corrected-to-normal vision and were free from psychological or neurological diseases. The experimental protocol was approved by The Research Ethics Committee of the Federal University of ABC. All experiments were performed in accordance with the approved guidelines and regulations. Set the number of volunteers (code)
###Code
N_PEOPLE = 20
###Output
_____no_output_____
###Markdown
Download Data (code)The EEG data obtained from the experiment reported above and the meta-information are required for reproduction. These files total about 2.1 GB, and you will need them for all the analyses in this Jupyter Paper.
###Code
PATH_AUD = '../data/raw/aud'
PATH_VIS = '../data/raw/vis'
PATH_INFO = '../data/raw/info_'
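# Optional sanity check (not in the original notebook): make sure the raw data
# folders exist before trying to load them
from pathlib import Path
for _p in (PATH_AUD, PATH_VIS):
    if not Path(_p).exists():
        print(f"Missing expected data folder: {_p}")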
###Output
_____no_output_____
###Markdown
Reading process (code)The helper `read_file` loads the auditory and visual recordings from the raw data folders and returns one array per modality, shaped as (individual, raw, channels, trial), together with the list of channel names.
###Code
data_aud, data_vis, CHANNEL_NAMES = read_file(PATH_AUD, PATH_VIS)
###Output
_____no_output_____
###Markdown
Checking the shape in arrays
###Code
print("Data Aud with shape: {} (individual, raw, channels, trial)".format(data_aud.shape))
print("Data Vis with shape: {} (individual, raw, channels, trial)".format(data_vis.shape))
###Output
Data Aud with shape: (20, 539, 64, 240) (individual, raw, channels, trial)
Data Vis with shape: (20, 539, 64, 240) (individual, raw, channels, trial)
###Markdown
Experimental design (text)The experiment consisted of a temporal reproduction task. The stimuli were presented using the Psychtoolbox v.3.0 package ([Brainard, 1997](https://www.sciencedirect.com/science/article/pii/S0028393218304913bib2)) on a 20-in. CRT monitor with a vertical refresh rate of 60 Hz, placed 50 cm in front of the participant.Each trial started with a fixation point that participants fixated throughout the trial. After a delay (1.5 s), two flashes or tones were presented, separated by a sample interval (measured between tone or flash onsets). After a random delay (1.5–2.5 s), volunteers were re-exposed to the same interval. After another random delay (1.5–2.5 s), a ready stimulus was presented to the participant, indicating the beginning of the reproduction task [Figure 1](Figure 1). Volunteers had to reproduce the exposed interval, initiated by the ready stimulus (RS) and ended by a stimulus caused by a button press (R1).**Figure 1 | Temporal reproduction task.** (**A**) Sequence of events during a trial. Each trial consisted of two equal empty intervals (between 750 ms and 1500 ms), marked by two stimuli. In auditory blocks, the interval was marked by two brief tones (1000 Hz, 100 ms), while in visual blocks the interval was marked by two flashes (0.5° of visual angle, 100 ms). Participants were instructed to reproduce the interval at the end of each trial. .In auditory blocks, the tones consisted of 1000 Hz tones (100 ms duration), while the ready and end stimuli consisted of 500 Hz tones. In visual blocks, the flashes that marked the interval consisted of yellow 0.5° of visual angle discs (100 ms duration), while the ready and end stimuli consisted of magenta flashes of the same size. No direct feedback was given.The sample intervals ranged between 750 ms and 1500 ms and were uniformly distributed. Each block (visual or auditory) consisted of 120 trials. Half of the participants performed the visual block first, while the other half performed the auditory block first. Behavioural analysis (text)Events in which intervals were reproduced as longer than double the sample interval or shorter than half the sample interval were considered errors and excluded from further analyses. The proportion of errors was low for both modalities (auditory: $0.0042 \pm 0.0011$; maximum proportion of rejected trials per participant=$0.0167$; visual: $0.0092 \pm 0.0031$; maximum proportion of rejected trials per participant $= 0.0583$)Similarly to previous studies ([Cicchini et al., 2012](), [Jazayeri and Shadlen, 2010]() ), the total error in the reproduction task was partitioned into two components: the average bias (BIAS) and the average variance ($\text{VAR}$). These two metrics are directly related to the overall mean squared error (MSE). To calculate these components, sample intervals were first binned into six equally sized bins and, for each bin, an estimate of both measures were calculated. The BIAS for each bin was calculated as the average difference between the reproduced interval and the sample intervals. The $\text{VAR}$ for each bin was calculated as the variance of the difference between reproduced and real intervals. The final estimate of the BIAS was calculated as the root mean square of the BIAS across bins and of the $\text{VAR}$ as the average $\text{VAR}$ across bins ([Jazayeri and Shadlen, 2010]()). 
For ease of interpretation and comparison with previous studies, the $\text{VAR}$ values were plotted using their square root $\sqrt{\text{VAR}}$. We further calculated a regression index ($\text{RI}$) to index the tendency of reproduced intervals to regress towards the mean sample. This index was calculated as the difference in slope between the best linear fit on the reproduced interval and perfect performance ([Cicchini et al., 2012]()). This measure varies from $0$ (perfect performance) to $1$ (complete regression to the mean, after allowing for a constant bias). The same linear fit between real and reproduced intervals was used to calculate the indifference points for both modalities. This point refers to the physical interval where durations are reproduced veridically.
To analyse the scalar property of time, we computed the slope of the generalised Weber function ([García-Garibay et al., 2016](), [Getty, 1975](), [Ivry and Hazeltine, 1995]()). A linear regression between the variance of the reproduced intervals and the mean subjective duration squared was performed as follows:
$$\sigma^2_{\text{reproduced}} = k^2\, T^2_{\text{subjective}} + \sigma^2_{\text{indep}} \qquad (1)$$
where $k$ is the slope that approximates the Weber fraction and $\sigma^2_{\text{indep}}$ is a constant representing the time-independent component of the variability. To account for the systematic bias found in the reproduced intervals, we used an approach similar to [García-Garibay et al. (2016)] in which $T_{\text{subjective}}$ was computed based on the linear fit between real and reproduced intervals.
All measures were calculated separately for each participant and condition (auditory and visual). At the group level, comparisons between the calculated parameters were done using paired t-tests (two-sided). To investigate correlations across participants for all three measures, Pearson correlations were calculated between the values of each measure in visual and auditory conditions.
EEG recordings and pre-processing (text and code)EEG was recorded continuously from 64 ActiCap electrodes (Brain Products) at 1000 Hz by a QuickAmp amplifier (Brain Products). All sites were referenced to FCz and grounded to AFz. The electrodes were positioned according to the International 10–10 system. Additional bipolar electrodes registered the electrooculogram (EOG).
EEG pre-processing was carried out using BrainVision Analyzer (Brain Products). All data were down-sampled to 250 Hz and re-referenced to the average activity across electrodes. For eye movement artefact rejection, an independent component analysis (ICA) was performed on filtered (Butterworth zero-phase filter between 0.05 Hz and 30 Hz) and segmented (−200 to 2000 ms relative to S1) data. For the ICA, epochs were baselined based on the average activity of the entire trial. Eye-related components were identified by comparing individual ICA components with EOG channels and by visual inspection. The average proportion of rejected trials for each participant corresponded to 0.0613. For all further analyses, epochs were baseline corrected based on the period between −200 ms and 0 ms relative to S1 presentation.
Frequency Information (code)The data set was originally recorded at 1000 Hz and down-sampled to 250 Hz.
###Code
N_CHANNELS = 64
#N_CHANNELS = 62
ORIGINAL_FREQUENCY = 1000
DOWN_SAMPLED_FREQUENCY = 250
###Output
_____no_output_____
###Markdown
Multivariate pattern analysis – MVPA (text) Similarity between different trials and modalities (text)Not reproduced **yet**.<!---To calculate the similarity across trials we used a bootstrap approach. For each participant and comparison of interest, data from all intervals that lasted at least 1.125 s were divided into two groups of trials and averaged across trials per group ([Bueno et al., 2017]()). Then, for each time point, data from each split (two-row vectors with the averaged amplitude values of all 62 electrodes) were compared using a Pearson correlation. This procedure was repeated 5000 times for each participant and Fisher transformed coefficients were averaged across permutations for each participant and comparison of interest.At the group level, EEG-analyses were implemented non-parametrically ([Maris and Oostenveld, 2007](), [Wolff et al., 2015]()) with sign-permutation tests. For each time-point, the Fisher transformed coefficients for a random half of the participants were multiplied by $-1$. The resulting distribution was used to calculate the p-value of the null-hypothesis that the mean value was equal to $0$. Cluster-based permutation tests were then used to correct for multiple comparisons across time using $5000$ permutations, with a cluster-forming threshold of p$<0.05$ ([Maris and Oostenveld, 2007]()). The sum of the values within a cluster was used as the cluster-level statistic. The significance threshold was set at p$<0.05$; all tests were one-sided.--> Decoding the elapsed interval (text)To investigate whether information about the elapsed interval could be decoded from activity across electrodes, we used a Naive Bayes classifier to perform a multiclass classification ([Grootswagers et al., 2017]()). For all classifications, activity from the last 125 ms of the elapsed interval (−125 ms to 0 relative to S2) was averaged for each electrode. Data for each exposure (E1 and E2) were analysed separately. Intervals were divided into six equally sized bins (125 ms each). Decoding Code (code)At this point we understand that:> We should divide the exposures. Consulting the authors by definition in trial odd are the first exposure, and trial even the second exposure. For this we develop the functions `_exposure_1`, `_exposure_2`.
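A sketch of what this split amounts to (the project's own helpers are `_exposure_1` and `_exposure_2`; the trial axis is assumed to be the last one, as in the shapes printed above):

```python
def split_exposures(data):
    """Sketch: odd trials (1st, 3rd, ...) -> E1, even trials (2nd, 4th, ...) -> E2.

    Assumes `data` has shape (raw, channels, trial) for a single participant.
    """
    e1 = data[..., 0::2]  # trials at positions 1, 3, 5, ... (first exposure)
    e2 = data[..., 1::2]  # trials at positions 2, 4, 6, ... (second exposure)
    return e1, e2
```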
###Code
## Splitting the exposures for all the individuals
aud_1 = _exposure_1(data_aud)
aud_2 = _exposure_2(data_aud)
vis_1 = _exposure_1(data_vis)
vis_2 = _exposure_2(data_vis)
bad_trials_aud, bad_trials_vis = get_bad_trials(PATH_AUD, PATH_VIS)
## Splitting the exposures for all the individuals
bad_aud_1 = _exposure_1_bad(bad_trials_aud, 'Aud')
bad_aud_2 = _exposure_2_bad(bad_trials_aud, 'Aud')
bad_vis_1 = _exposure_1_bad(bad_trials_vis, 'Vis')
bad_vis_2 = _exposure_2_bad(bad_trials_vis, 'Vis')
bad_vis_comport = get_bad_trials_comportamental('vis')
bad_aud_comport = get_bad_trials_comportamental('aud')
clean_aud_1, clean_aud_2, clean_vis_1, clean_vis_2 = fixing_bad_trials(
bad_aud_1, bad_aud_2, bad_vis_1, bad_vis_2)
clean_aud_1 = clean_aud_1.append(bad_aud_comport,ignore_index=True)
clean_aud_2 = clean_aud_2.append(bad_aud_comport,ignore_index=True)
clean_vis_1 = clean_vis_1.append(bad_vis_comport,ignore_index=True)
clean_vis_2 = clean_vis_2.append(bad_vis_comport,ignore_index=True)
###Output
_____no_output_____
###Markdown
> For each trial, we keep only the last $125$ ms before $S2$;
> the rest is discarded;
> we must evaluate the mean value over this last $125$ ms window. With the sampling frequency of $250$ Hz there is one point every $4$ ms, so the number of points is simply the ratio $\frac{125\,\text{ms}}{4\,\text{ms}}$.
###Code
print("Given the sampling frequency we will then have a total of {} points".format(125/4))
###Output
Given the sampling frequency we will then have a total of 31.25 points
###Markdown
For numerical simplicity we adopted $32$ points. With the number of points established, we implement the functions `get_time_delay` and `get_last_125ms`.
###Code
indice_s2_vis = get_time_delay('vis')
indice_s2_aud = get_time_delay('aud')
time_s2_vis = get_time_delay('vis', export_as_indice=False)
time_s2_aud = get_time_delay('aud', export_as_indice=False)
aud_1_average = list(map(get_last_125ms, aud_1, indice_s2_aud))
aud_2_average = list(map(get_last_125ms, aud_2, indice_s2_aud))
vis_1_average = list(map(get_last_125ms, vis_1, indice_s2_vis))
vis_2_average = list(map(get_last_125ms, vis_2, indice_s2_vis))
###Output
_____no_output_____
###Markdown
> For each trial we must assign one of the six groups below, depending on the trial length:

| Group ID | Begin (ms) | End (ms) |
|:--------:|:----------:|:--------:|
| 1 | 750 | 875 |
| 2 | 875 | 1000 |
| 3 | 1000 | 1125 |
| 4 | 1125 | 1250 |
| 5 | 1250 | 1375 |
| 6 | 1375 | 1500 |

The ranges are half-open, defined as $[\text{Begin}, \text{End}[$. We reuse the previously developed function ``get_time_delay`` and implement ``categorize`` to assign each trial to its bin.
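As an illustration of this binning rule — a sketch using `pandas.cut`, not the project's own `categorize` implementation — the mapping can be reproduced as follows:

```python
import pandas as pd

# Hypothetical S2 times (ms) for a handful of trials
s2_times = pd.Series([760, 900, 1120, 1130, 1374, 1499])
bins = [750, 875, 1000, 1125, 1250, 1375, 1500]
# right=False makes each interval half-open: [begin, end)
groups = pd.cut(s2_times, bins=bins, labels=[1, 2, 3, 4, 5, 6], right=False)
print(groups.tolist())  # -> [1, 2, 3, 4, 5, 6]
```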
###Code
classes_aud, classes_vis = get_group_time(time_s2_aud, time_s2_vis)
###Output
_____no_output_____
###Markdown
Joining groups and X (code)To make it easier to join the class labels (groups) with the feature matrix X, we developed the function `to_DataFrame`:
###Code
#Getting in format of dataframe:
df_aud_1_aver = to_DataFrame(aud_1_average,classes_aud,CHANNEL_NAMES)
df_aud_2_aver = to_DataFrame(aud_2_average,classes_aud,CHANNEL_NAMES)
df_vis_1_aver = to_DataFrame(vis_1_average,classes_vis,CHANNEL_NAMES)
df_vis_2_aver = to_DataFrame(vis_2_average,classes_vis,CHANNEL_NAMES)
###Output
_____no_output_____
###Markdown
**Export to Alberto**
###Code
df_aud_1_aver = merge_and_clean(df_aud_1_aver,clean_aud_1)
df_aud_2_aver = merge_and_clean(df_aud_2_aver,clean_aud_2)
df_vis_1_aver = merge_and_clean(df_vis_1_aver,clean_vis_1)
df_vis_2_aver = merge_and_clean(df_vis_2_aver,clean_vis_2)
df_aud_1_aver.to_csv("../data/processed/auditory_exposure_1.csv", index=None)
df_aud_2_aver.to_csv("../data/processed/auditory_exposure_2.csv", index=None)
df_vis_1_aver.to_csv("../data/processed/visual_exposure_1.csv", index=None)
df_vis_2_aver.to_csv("../data/processed/visual_exposure_2.csv", index=None)
###Output
_____no_output_____
###Markdown
Within modality classification (text)The pre-processing and decoding procedures followed standard guidelines ([Grootswagers et al., 2017](), [Lemm et al., 2011]()). A leave-one-out method was used, although similar results were obtained when using a 10-fold cross-validation. For each test trial, data from all other trials were used as the training set. A principal component analysis (PCA) was used to reduce the dimensionality of the data ([Grootswagers et al., 2017]()). The PCA transformation was computed on the training data and the components that accounted for 99% of the variance were retained. The estimated coefficients from the training data were applied to the test data. On average, the number of components used was of 37.17 $\pm$1.43 (mean s.e.m., maximum number of components=47, minimum number of components=17).Decoding performance was summarised with Cohen's quadratic weighted kappa coefficient ([Cohen, 1968]()). The weighted kappa gives more weight to disagreements for categories that are further apart. It assumes values from −1 to 1, with 1 indicating perfect agreement between predicted and decoded label and 0 indicating chance ([Cohen, 1968]()).
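The decoding itself is implemented in `classification_within_modality`, imported above via `from classification import *`. As a minimal sketch of the procedure just described — leave-one-out cross-validation, PCA retaining 99% of the variance fitted on the training folds only, a Gaussian Naive Bayes classifier, and a quadratically weighted kappa — one could write (names and array layout are illustrative, not the project's API):

```python
def loo_pca_nb_decode(X, y):
    """Sketch: leave-one-out + PCA(0.99) + GaussianNB, scored with weighted kappa."""
    predictions = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        pca = PCA(n_components=0.99).fit(X[train_idx])  # fit PCA on the training folds only
        clf = GaussianNB().fit(pca.transform(X[train_idx]), y[train_idx])
        predictions.append(clf.predict(pca.transform(X[test_idx]))[0])
    # quadratic weights penalise predictions that land further from the true bin
    return cohen_kappa_score(y, predictions, weights="quadratic")
```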
###Code
resu_aud_1 = classification_within_modality(df_aud_1_aver,'Auditory','E1')
resu_aud_2 = classification_within_modality(df_aud_2_aver,'Auditory','E2')
resu_vis_1 = classification_within_modality(df_vis_1_aver,'Visual','E1')
resu_vis_2 = classification_within_modality(df_vis_2_aver,'Visual','E2')
###Output
_____no_output_____
###Markdown
Between modalities classification (text)The between modality decoding (training the classifier on data from one modality and testing on the other) followed a similar structure as the within-modality decoding. Given that participants performed the first half of the experiment with one modality and the second with the other, the effects of non-stationarity are low ([Lemm et al., 2011]()). Thus, data from all trials of one modality were used as the training set and tested against all trials of the other modality.
Correlation of decoding and behaviour (text)Similarly to our previous decoding analysis, activity from the last 125 ms of the elapsed interval (−125 ms to 0 relative to S2) was used. For each test trial, a Naive Bayes classifier was trained on trials that were shorter (at least 25 ms and up to 125 ms shorter) and longer (at least 25 ms and up to 125 ms longer) than the physical duration of that given test trial. This classification was applied to all trials between 875 ms and 1375 ms, so that all trials had a similar number of training trials.
Based on this classification, trials were further classified into three categories: (S/S) when in both presentations (E1 and E2) the trial was classified as shorter; (S/L or L/S) when in only one of the two presentations the trial was classified as longer; and (L/L) when in both presentations the trial was classified as longer.
To test whether classification might explain participants' performance, we first fitted, for each participant, a linear function relating the physical interval and participants' reproduced interval. The residuals from this linear fit were stored and separated as a function of the three decoding categories mentioned above. Notice that the linear fit on the behavioural results naturally takes into account participants' tendency to regress towards the mean. Thus, this analysis allows isolating whether trials that were encoded as shorter or longer were reproduced as shorter or longer than participants' average reproduction for that interval duration.
Statistical analysis: All t-tests, correlations, ANOVAs and effect sizes (Cohen's d for t-tests and omega-squared for ANOVAs) were calculated using JASP (JASP Team, 2017). The p-values were adjusted, whenever appropriate, for violations of the sphericity assumption using the Greenhouse-Geisser correction.
Classification process
With PCA processing
Each circle shows, for each participant and tested bin, the most common output of the classifier (mode). The continuous line shows the median across participants and error bars show the lower quartile (25th percentile) and upper quartile (75th percentile) across participants.
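Only as an illustrative sketch (the project's own routine, `classification_across_modality`, is called further below), the between-modality decoding amounts to fitting the same PCA + Naive Bayes pipeline on all trials of one modality and scoring it on all trials of the other:

```python
def cross_modal_decode(X_train, y_train, X_test, y_test):
    """Sketch: train on all trials of one modality, test on all trials of the other."""
    pca = PCA(n_components=0.99).fit(X_train)
    clf = GaussianNB().fit(pca.transform(X_train), y_train)
    y_pred = clf.predict(pca.transform(X_test))
    return cohen_kappa_score(y_test, y_pred, weights="quadratic")
```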
###Code
merge_within, merge_within_mode = merge_export(resu_aud_1,
resu_aud_2,
resu_vis_1,
resu_vis_2,
"../data/processed/resu_fig5.csv")
ck_within = cohen_kappa(merge_within, plot_option=False, title='')
###Output
_____no_output_____
###Markdown
> 5. C) Agreement between decoded and real interval indexed by Cohen's kappa. The left panel shows classification performance when test trials were of the same modality as the training trials while right panels show performance when the test trials were of a different modality than the training trials. Violin plots show the probability density across participants; line markers represent the median. Circles inside the violin plots show data from each participant.
###Code
print_classification_by_mod(merge_within)
###Output
_____no_output_____
###Markdown
--------- Between modalities ---------
###Code
prever_vis_1 = classification_across_modality(
df_aud_1_aver, df_vis_1_aver, inp='Visual', exp='E1')
prever_aud_1 = classification_across_modality(
df_vis_1_aver, df_aud_1_aver, inp='Auditory', exp='E1')
prever_vis_2 = classification_across_modality(
df_aud_2_aver, df_vis_2_aver, inp='Visual', exp='E2')
prever_aud_2 = classification_across_modality(
df_vis_2_aver, df_aud_2_aver, inp='Auditory', exp='E2')
merge_across, merge_across_mode = merge_export(prever_aud_1, prever_aud_2,
prever_vis_1, prever_vis_2,
'../data/processed/resu_fig5b.csv')
ck_across = cohen_kappa(merge_across, plot_option=False, title='')
print_classification_by_mod(merge_across)
make_figure(merge_across_mode, merge_within_mode, ck_across, ck_within)
t_test(ck_within)
t_test(ck_across)
###Output
_____no_output_____
###Markdown
Checking the labels assigned to the averaged trials
###Code
contagem_pessoas_classe = df_aud_1_aver.groupby(
['people', 'group']).size().unstack()
fig, ax = plt.subplots(figsize=(20, 10))
ax = contagem_pessoas_classe.T.plot.bar(ax=ax)
fig.suptitle("Class distribution of trials per participant")
contagem_classe = df_aud_1_aver.groupby(['group']).size()
fig, ax = plt.subplots(figsize=(20, 10))
ax = contagem_classe.T.plot.bar(ax=ax)
fig.suptitle("Overall class distribution of trials")
###Output
_____no_output_____
###Markdown
Export to the Auto-Encoder()
###Code
from pandas import DataFrame
from numpy import array, divide, average, concatenate
from os import listdir
from os.path import isfile, join
import doctest
N_TRIALS = 120
def get_last_125ms(exposure, indice_exposure, use_average=True, last_32=True):
    """Split each trial of an exposure into time bins and (optionally)
    average over the selected time points.
    Parameters
    ----------
    exposure: numpy.array (3-dimensions) [raw, channels, trial]
        Array containing data from an exposure.
    indice_exposure: numpy.array (1-dimension)
        Array containing, for each trial, the index at which S2 occurs.
    use_average: bool
        If True, average over the selected time points; if False, keep them
        unaveraged (for later analyses, e.g. the auto-encoder export).
    last_32: bool
        If True, keep only the last 32 samples (125 ms) before S2;
        otherwise build overlapping 64-sample windows over the whole trial.
    Returns
    -------
    agg_per_trial: np.array
    """
    # Discard samples recorded before time zero (sample index 39).
    zero = 39
    exposure_without_zero = exposure[zero:]
    # Accumulator variable
    agg_per_trial = []
    for trial, ind_s2 in list(zip(exposure_without_zero.T, indice_exposure)):
        if last_32:
            # Last 32 samples (125 ms) preceding S2.
            points_32 = trial.T[int(ind_s2-32):int(ind_s2)]
        else:
            # Overlapping 64-sample windows over the whole trial.
            points_32 = array(list(mit.windowed(trial.T, n=64)))
        if use_average:
            agg_per_trial.append(average(points_32, axis=0).reshape(-1))
        else:
            agg_per_trial.append(points_32)
    # ---------------------------------------------------------------------
return array(agg_per_trial).T
def get_last_125ms_autoenconder(data, indices):
return get_last_125ms(data, indices, use_average=False)
def get_last_125ms_autoenconder_w(data, indices):
return get_last_125ms(data, indices, use_average=False, last_32=False)
import more_itertools as mit
aud_1_autoenconder = list(
map(get_last_125ms_autoenconder, aud_1, indice_s2_aud))
aud_2_autoenconder = list(
map(get_last_125ms_autoenconder, aud_2, indice_s2_aud))
vis_1_autoenconder = list(
map(get_last_125ms_autoenconder, vis_1, indice_s2_vis))
vis_2_autoenconder = list(
map(get_last_125ms_autoenconder, vis_2, indice_s2_vis))
df_aud_1_autoenconder = to_DataFrame_autoenconder(
aud_1_autoenconder, classes_aud, CHANNEL_NAMES)
df_aud_2_autoenconder = to_DataFrame_autoenconder(
aud_2_autoenconder, classes_aud, CHANNEL_NAMES)
df_vis_1_autoenconder = to_DataFrame_autoenconder(
vis_1_autoenconder, classes_vis, CHANNEL_NAMES)
df_vis_2_autoenconder = to_DataFrame_autoenconder(
vis_2_autoenconder, classes_vis, CHANNEL_NAMES)
df_aud_1_autoenconder
df_aud_1_autoenconder.to_csv(
"../data/processed/auditory_exposure_1_autoenconder.csv", index=None)
df_aud_2_autoenconder.to_csv(
"../data/processed/auditory_exposure_2_autoenconder.csv", index=None)
df_vis_1_autoenconder.to_csv(
"../data/processed/visual_exposure_1_autoenconder.csv", index=None)
df_vis_2_autoenconder.to_csv(
"../data/processed/visual_exposure_2_autoenconder.csv", index=None)
vis_1_autoenconder_w = list(
map(get_last_125ms_autoenconder_w, vis_1, indice_s2_vis))
vis_2_autoenconder_w = list(
map(get_last_125ms_autoenconder_w, vis_2, indice_s2_vis))
aud_1_autoenconder_w = list(
map(get_last_125ms_autoenconder_w, aud_1, indice_s2_aud))
aud_2_autoenconder_w = list(
map(get_last_125ms_autoenconder_w, aud_2, indice_s2_aud))
vis_1_all = np.concatenate(np.concatenate(vis_1.transpose([3,1,0,2])))
vis_2_all = np.concatenate(np.concatenate(vis_2.transpose([3,1,0,2])))
aud_1_all = np.concatenate(np.concatenate(aud_1.transpose([3,1,0,2])))
aud_2_all = np.concatenate(np.concatenate(aud_2.transpose([3,1,0,2])))
vis_1_all.shape
data = np.array(aud_1_autoenconder_w)
from xarray import DataArray
from pandas import DataFrame, merge
from numpy import concatenate
from scipy.stats import mode
x_array = DataArray(data)
x_array = x_array.rename({'dim_0': 'people','dim_1': 'channel','dim_2':'time','dim_3':'windows', 'dim_4':'trial'})
x_array = x_array.transpose('people', 'trial', 'channel','time','windows')
x_array.to_dataframe('channel').unstack()
def to_DataFrame_autoenconder_w(data, CHANNEL_NAMES):
    '''
    Convert the windowed (non-averaged) exposure data into a long-format
    DataFrame: one row per (participant, trial, channel) with the class
    label ('group') and the time points of each window. Relies on the
    global `classe` table that holds the class label of each trial.
    '''
    x_array = DataArray(data)
    x_array = x_array.rename({'dim_0': 'people','dim_1': 'channel','dim_2':'time','dim_3':'windows', 'dim_4':'trial'})
    x_array = x_array.transpose('people', 'trial', 'channel','time','windows')
    df = x_array.to_dataframe('time').unstack()
df_classe = DataFrame(classe.stack()).reset_index()
df_classe.columns = ['people','trial','group']
df_v = df_classe.merge(df,on=['people','trial'], validate='one_to_many',how='outer')
df_v['channel'] = df.reset_index()['channel']
df_v['channel'].replace(dict(zip(list(range(64)),CHANNEL_NAMES)), inplace=True)
time_legend = ['time '+str(i) for i in range(32)]
df_v.columns = ['people', 'trial', 'group']+time_legend+['channel']
df_v = df_v[['people', 'trial', 'group','channel']+time_legend]
df_v = df_v[~((df_v['channel'] =='HEOG') | (df_v['channel'] =='VEOG'))]
return df_v
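# Example usage (commented out; it requires the global `classe` table defined
# earlier in the notebook):
# df_aud_1_autoenconder_w = to_DataFrame_autoenconder_w(
#     array(aud_1_autoenconder_w), CHANNEL_NAMES)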
###Output
_____no_output_____ |
Cartography Run.ipynb | ###Markdown
Installing Dependencies
###Code
%%shell
pip install jsonnet
pip install git+git://github.com/huggingface/transformers@189113d8910308b9f3509c6946b2147ce57a0bf7
pip install tensorflow-datasets
import tensorflow_datasets as tfds
wino_train, info_train=tfds.load(name="winogrande", split="train_xl", with_info=True)
wino_dev, info_dev=tfds.load(name="winogrande", split="validation",with_info=True)
#wino_test, info_test=tfds.load(name="winogrande", split="test", with_info=True)
df_train=tfds.as_dataframe(wino_train, info_train)
df_dev=tfds.as_dataframe(wino_dev, info_dev)
#df_test=tfds.as_dataframe(wino_test, info_test)
df=pd.concat((df_train, df_dev), axis=0, ignore_index=True)
len(df)
df=df[:-1]
len(df)
df_small=df[:21000]
len(df_small)
#df=df.drop(labels=[1426], axis=0,inplace=False)
df=df_small
len(df)
text=[]
for op in df.option1:
text.append(op.decode("utf-8"))
df["option1"]=text
df.head()
text=[]
for op in df.option2:
text.append(op.decode("utf-8"))
df["option2"]=text
df.head()
text=[]
for op in df.sentence:
text.append(op.decode("utf-8"))
df["sentence"]=text
df.head()
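# Aside: the three byte-decoding loops above can be written more compactly
# with pandas' vectorised string methods (equivalent, assuming the columns
# still hold raw bytes at that point):
# for col in ["option1", "option2", "sentence"]:
#     df[col] = df[col].str.decode("utf-8")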
df=df.loc[:, ['option1', 'option2', 'sentence', 'label']]
df.head()
df.to_csv("/content/gdrive/MyDrive/Maryland Cartography/Data/Winogrande/train_small.tsv", sep="\t", index=True, index_label="guid")
#df_test.to_csv("/content/gdrive/MyDrive/Maryland Cartography/Data/Winogrande/test.tsv", sep="\t", index=True, index_label="guid")
cd /content/gdrive/MyDrive/Maryland Cartography/cartography-main
!python -m cartography.classification.run_glue -h
!python -m cartography.classification.run_glue -c configs/winogrande.jsonnet --do_train --output_dir roberta_base_quick_output --nfolds 3
!python -m cartography.selection.train_dy_filtering --plot --task_name WINOGRANDE --split "eval" --model_dir "/content/gdrive/MyDrive/Maryland Cartography/cartography-main/roberta_base_quick_output/" --model roberta-base
!python -m cartography.selection.train_dy_filtering --plot --task_name WINOGRANDE --split "train" --model_dir "/content/gdrive/MyDrive/Maryland Cartography/cartography-main/roberta_base_quick_output/" --model roberta-base
###Output
_____no_output_____ |
AAND-Sheet03-LauraLyra.ipynb | ###Markdown
General rules: * For all figures that you generate, remember to add meaningful labels to the axes, and make a legend, if applicable. * Do not hard code constants, like number of samples, number of channels, etc in your program. These values should always be determined from the given data. This way, you can easily use the code to analyse other data sets. * Do not use high-level functions from toolboxes like scikit-learn. * Replace *Template* by your *FirstnameLastname* in the filename. AAND - BCI Exercise Sheet 03 Name: Laura Freire Lyra
###Code
%matplotlib inline
import numpy as np
import scipy as sp
from matplotlib import pyplot as plt
import bci_minitoolbox as bci
###Output
_____no_output_____
###Markdown
Exercise 1: Nearest Centroid Classifier (NCC) (1 point)Implement the calculation of the nearest centroid classifier (NCC) as a Python function `train_NCC`. The function should take two arguments, the first being the data matrix $\bf{X}$ where each column is a data point ($\bf{x_k}$), and the second being class labels of the data points. Two output arguments should return the weight vector **`w`** and bias `b`.
###Code
def train_NCC(X, y):
'''
Synopsis:
w, b= train_NCC(X, y)
Arguments:
X: data matrix (features X samples)
y: labels with values 0 and 1 (1 x samples)
Output:
w: NCC weight vector
b: bias term
'''
features, samples = X.shape
class1 = X[:,y==1]
class0 = X[:,y==0]
erp1 = np.mean(class1,axis=1)
erp0 = np.mean(class0,axis=1)
    # NCC weight: difference of the class means (no normalisation needed,
    # since the bias below is computed with the same w).
    w = erp1 - erp0
b = np.dot(w.T , (erp1 + erp0)/2)
return w,b
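# Quick sanity check on synthetic data (not part of the assignment data):
# two well-separated Gaussian clouds should be classified almost perfectly.
_X = np.hstack([np.random.randn(2, 100) - 2, np.random.randn(2, 100) + 2])
_y = np.hstack([np.zeros(100), np.ones(100)])
_w, _b = train_NCC(_X, _y)
print('synthetic NCC accuracy:', np.mean(((_w @ _X - _b) > 0) == _y))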
###Output
_____no_output_____
###Markdown
Exercise 2: Linear Discriminant Analysis (LDA) (3 points)Implement the calculation of the LDA classifier as a Python function `train_LDA`. The function should take two arguments, the first being the data matrix $\bf{X}$ where each column is a data point ($\bf{x_k}$), and the second being class labels of the data points. Two output arguments should return the weight vector **`w`** and bias `b`.
###Code
def train_LDA(X, y):
'''
Synopsis:
w, b= train_LDA(X, y)
Arguments:
X: data matrix (features X samples)
y: labels with values 0 and 1 (1 x samples)
Output:
w: LDA weight vector
b: bias term
'''
features, samples = X.shape
class1 = X[:,y==1]
class0 = X[:,y==0]
erp1 = np.mean(class1,axis=1)
erp0 = np.mean(class0,axis=1)
    cov1 = np.cov(class1)
    cov0 = np.cov(class0)
    # Class sizes = number of trials (columns) in each class.
    N1 = class1.shape[1]
    N0 = class0.shape[1]
    if abs(N1 - N0) <= 10:
        cov = (cov1 + cov0) / 2
    else:
        # Pooled covariance, weighted by the class sizes.
        cov = (cov1 * (N1 - 1) + cov0 * (N0 - 1)) / (N1 + N0 - 2)
w = np.linalg.inv(cov) @ (erp1 - erp0)
b = np.dot(w.T , (erp1 + erp0)/2)
return w,b
###Output
_____no_output_____
###Markdown
Exercises 3: Cross-validation with weighted loss (1 point)Complete the implementation of `crossvalidation` by writing a loss function `loss_weighted_error` which calculates the weighted loss as explained in the lecture.
###Code
def crossvalidation(classifier_fcn, X, y, nFolds=10, verbose=False):
'''
Synopsis:
loss_te, loss_tr= crossvalidation(classifier_fcn, X, y, nFolds=10, verbose=False)
Arguments:
classifier_fcn: handle to function that trains classifier as output w, b
X: data matrix (features X samples)
y: labels with values 0 and 1 (1 x samples)
nFolds: number of folds
verbose: print validation results or not
Output:
loss_te: value of loss function averaged across test data
loss_tr: value of loss function averaged across training data
'''
nDim, nSamples = X.shape
inter = np.round(np.linspace(0, nSamples, num=nFolds + 1)).astype(int)
perm = np.random.permutation(nSamples)
errTr = np.zeros([nFolds, 1])
errTe = np.zeros([nFolds, 1])
for ff in range(nFolds):
idxTe = perm[inter[ff]:inter[ff + 1] + 1]
idxTr = np.setdiff1d(range(nSamples), idxTe)
w, b = classifier_fcn(X[:, idxTr], y[idxTr])
out = w.T.dot(X) - b
errTe[ff] = loss_weighted_error(out[idxTe], y[idxTe])
errTr[ff] = loss_weighted_error(out[idxTr], y[idxTr])
if verbose:
print('{:5.1f} +/-{:4.1f} (training:{:5.1f} +/-{:4.1f}) [using {}]'.format(errTe.mean(), errTe.std(),
errTr.mean(), errTr.std(),
classifier_fcn.__name__))
return np.mean(errTe), np.mean(errTr)
def loss_weighted_error(out, y):
'''
Synopsis:
loss= loss_weighted_error( out, y )
Arguments:
out: output of the classifier
y: true class labels
Output:
loss: weighted error
'''
n0 = len(y[y==0])
n1 = len(y[y==1])
err0 = (n0 - sum(out[y==0]<0)) / n0
err1 = (n1 - sum(out[y==1]>=0)) / n1
return 0.5 * (err1+err0)
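# Small illustration (synthetic): with 90 samples of class 0 and 10 of class 1,
# a classifier that always outputs a negative value has 0% error on class 0 and
# 100% on class 1, i.e. a weighted error of 0.5 instead of a misleadingly low
# plain error rate of 10%.
_demo_y = np.hstack([np.zeros(90), np.ones(10)])
_demo_out = -np.ones(100)
print(loss_weighted_error(_demo_out, _demo_y))  # 0.5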
###Output
_____no_output_____
###Markdown
Preparation: Load Data
###Code
fname = 'erp_hexVPsag.npz'
cnt, fs, clab, mnt, mrk_pos, mrk_class, mrk_className = bci.load_data(fname)
###Output
_____no_output_____
###Markdown
Exercise 4: Classification of Temporal Features (2 points)Extract as temporal features from single channels the epochs of the time interval 0 to 1000 ms. Determine the error of classification with LDA and with NCC on those features using 10-fold cross-validation for each single channel. Display the resulting (test) error rates for all channel as scalp topographies (one for LDA and one for NCC).
###Code
ival = [0, 1000]
all_epo, all_epo_t = bci.makeepochs(cnt,fs,mrk_pos,ival)
test_error_LDA = np.zeros(cnt.shape[0])
test_error_NCC = np.zeros(cnt.shape[0])
for channel in range(cnt.shape[0]):
test_error_LDA[channel] = crossvalidation(train_LDA, all_epo[:,channel,:], mrk_class.T, nFolds=10, verbose=False)[0] #index 0 gets test error rates
test_error_NCC[channel] = crossvalidation(train_NCC, all_epo[:,channel,:], mrk_class.T, nFolds=10, verbose=False)[0] #index 0 gets test error rates
max_lim = np.max((np.max(test_error_LDA),np.max(test_error_NCC)))
min_lim = np.min((np.min(test_error_LDA),np.min(test_error_NCC)))
bci.scalpmap(mnt, test_error_NCC, cb_label='classification error',clim=[min_lim,max_lim])
plt.title("NCC")
plt.show();
plt.figure()
plt.title("LDA")
bci.scalpmap(mnt, test_error_LDA, cb_label='classification error',clim=[min_lim,max_lim]);
###Output
_____no_output_____
###Markdown
Exercise 5: Classification of Spatial Features (3 points)Perform classification (*target* vs. *nontarget*) on spatial features (average across time within a 50 ms interval) in a time window that is shifted from 0 to 1000 ms in steps of 10 ms, again with both, LDA and NCC. Visualize the time courses of the classification error. Again, use 10-fold cross-validation. Here, use a baseline correction w.r.t. the prestimulus interval -100 to 0 ms.
###Code
ref_ival = [-100,0]
ival = [0,1000]
all_epo, all_epo_t = bci.makeepochs(cnt,fs,mrk_pos,ival)
all_base = bci.baseline(all_epo,all_epo_t, ref_ival)
low_bound = np.arange(0,960,10)
up_bound = np.arange(50,1010,10)
test_error_LDA_spatial = []
test_error_NCC_spatial = []
for window in zip(low_bound,up_bound):
windowed = all_base[np.logical_and(all_epo_t>= window[0], all_epo_t <window[1]),:,:]
avg_time = np.mean(windowed,axis=0)
test_error_LDA_spatial.append(crossvalidation(train_LDA, avg_time, mrk_class.T, nFolds=10, verbose=False)[0]) #index 0 gets test error rates
test_error_NCC_spatial.append(crossvalidation(train_NCC, avg_time, mrk_class.T, nFolds=10, verbose=False)[0]) #index 0 gets test error rates
plt.plot(low_bound,test_error_NCC_spatial, label="NCC error" )
plt.plot(low_bound,test_error_LDA_spatial, label="LDA error" )
plt.legend()
plt.title("Classification Error of Spatial Features")
plt.ylabel("Error")
plt.xlabel("time [ms]");
###Output
_____no_output_____ |
book/capt11/churn_prediction.ipynb | ###Markdown
Churn prediction School dropoutChurn prediction is a very common kind of task in data science, and it is a binary classification problem. It deals with the possible loss of a customer or of a student. We analyse the dataset and try to predict the at-risk situations, so that we can take proactive retention measures. For this example, we use a real but de-identified dataset from a study I carried out some time ago, at the request of an educational institution, to identify the students most likely to drop out of the course. From the student population, those holding a scholarship of 50% or more were removed, since these are special cases. The data are collected weekly, from the results of the first exams of each term.
###Code
import pandas as pd
import numpy as np
from sklearn import svm, datasets
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import svm
df = pd.read_csv('evasao.csv')
df.head()
df.describe()
features = df[['periodo','bolsa','repetiu','ematraso','disciplinas','faltas']]
labels = df[['abandonou']]
features.head()
labels.head()
###Output
_____no_output_____
###Markdown
Splitting the dataWe need to separate the test data from the training data, virtually forgetting that the test data even exist!
###Code
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.33, random_state=42)
###Output
_____no_output_____
###Markdown
Standardising the featuresSince we are going to use an SVM, we need to put the numeric features on the same scale and encode the categorical features. We have one categorical feature, 'ematraso', and it only has two values, zero and one, so it is already encoded. If it had multiple values, we would have to use something like OneHotEncoder to turn it into binary variables.
###Code
padronizador = StandardScaler().fit(X_train[['periodo', 'bolsa', 'repetiu', 'disciplinas', 'faltas']])
X_train_1 = pd.DataFrame(padronizador.transform(X_train[['periodo', 'bolsa', 'repetiu', 'disciplinas', 'faltas']]))
X_train_scaled = pd.DataFrame(X_train_1)
X_train_scaled = X_train_scaled.assign(e = X_train['ematraso'].values)
X_train_scaled.head()
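# Aside (hedged): scikit-learn's ColumnTransformer + Pipeline can express the
# "scale the numeric columns, pass 'ematraso' through" step as a single object,
# which guarantees the identical transformation is later applied to the test set:
# from sklearn.compose import ColumnTransformer
# from sklearn.pipeline import Pipeline
# pre = ColumnTransformer(
#     [('num', StandardScaler(), ['periodo', 'bolsa', 'repetiu', 'disciplinas', 'faltas'])],
#     remainder='passthrough')
# modelo = Pipeline([('pre', pre), ('svm', svm.SVC(kernel='rbf', C=2))])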
###Output
_____no_output_____
###Markdown
Linear kernel
###Code
modeloLinear = svm.SVC(kernel='linear')
modeloLinear.fit(X_train_scaled.values, y_train.values.reshape(y_train.size))
modeloLinear.score(X_train_scaled.values, y_train.values.reshape(y_train.size))
###Output
_____no_output_____
###Markdown
RBF kernel with C=2 and default gamma
###Code
modeloRbf = svm.SVC(kernel='rbf',C=2)
modeloRbf.fit(X_train_scaled.values, y_train.values.reshape(y_train.size))
modeloRbf.score(X_train_scaled.values, y_train.values.reshape(y_train.size))
###Output
_____no_output_____
###Markdown
RBF kernel with C=1 and gamma=10
###Code
modeloRbfg10 = svm.SVC(kernel='rbf',C=1,gamma=10)
modeloRbfg10.fit(X_train_scaled.values, y_train.values.reshape(y_train.size))
modeloRbfg10.score(X_train_scaled.values, y_train.values.reshape(y_train.size))
###Output
_____no_output_____
###Markdown
Polynomial kernel with C=2 and gamma=10
###Code
modeloPoly = svm.SVC(kernel='poly',C=2,gamma=10)
modeloPoly.fit(X_train_scaled.values, y_train.values.reshape(y_train.size))
modeloPoly.score(X_train_scaled.values, y_train.values.reshape(y_train.size))
###Output
_____no_output_____
###Markdown
Sigmoid kernel with C=2 and gamma=100
###Code
modeloSig = svm.SVC(kernel='sigmoid',C=2,gamma=100)
modeloSig.fit(X_train_scaled.values, y_train.values.reshape(y_train.size))
modeloSig.score(X_train_scaled.values, y_train.values.reshape(y_train.size))
###Output
_____no_output_____
###Markdown
The model with the best training score is not always the model that makes the best predictions on unseen data; it may be a case of "overfitting". Let's test several kernel types and parameter combinations.
###Code
X_test_1 = padronizador.transform(X_test[['periodo', 'bolsa', 'repetiu', 'disciplinas', 'faltas']])
X_test_1 = pd.DataFrame(padronizador.transform(X_test[['periodo', 'bolsa', 'repetiu', 'disciplinas', 'faltas']]))
X_test_scaled = pd.DataFrame(X_test_1)
X_test_scaled = X_test_scaled.assign(e = X_test['ematraso'].values)
X_test_scaled.head()
###Output
_____no_output_____
###Markdown
The best way to test is to compare the results one by one.
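As a side note, scikit-learn's `accuracy_score` and `confusion_matrix` summarise the same comparison more compactly; a minimal sketch using the objects defined above:
###Code
# Hedged sketch: summarising one model's test-set predictions with
# scikit-learn metrics instead of counting by hand.
from sklearn.metrics import accuracy_score, confusion_matrix
_pred = modeloRbf.predict(X_test_scaled)
print(accuracy_score(y_test.values.ravel(), _pred))
print(confusion_matrix(y_test.values.ravel(), _pred))
###Output
_____no_output_____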
###Code
predicoes = modeloRbfg10.predict(X_test_scaled)
printResults(predicoes)
predicoesGamma1 = modeloRbf.predict(X_test_scaled)
printResults(predicoesGamma1)
###Output
Acertos 68
Percentual 0.6868686868686869
Erramos ao dizer que o aluno abandonou 23
Erramos ao dizer que o aluno permaneceu 8
###Markdown
This was the best result we obtained.
###Code
predicoesPoly = modeloPoly.predict(X_test_scaled)
printResults(predicoesPoly)
predicoesSig = modeloSig.predict(X_test_scaled)
printResults(predicoesSig)
def printResults(pr):
acertos = 0
errosAbandono = 0
errosPermanencia = 0
for n in range(0,len(pr)):
if pr[n] == y_test.values.flatten()[n]:
acertos = acertos + 1
else:
if pr[n] == 0:
errosAbandono = errosAbandono + 1
else:
errosPermanencia = errosPermanencia + 1
print('Acertos',acertos)
print('Percentual',acertos / len(pr))
print('Erramos ao dizer que o aluno abandonou', errosAbandono)
print('Erramos ao dizer que o aluno permaneceu', errosPermanencia)
###Output
_____no_output_____ |
DAY 401 ~ 500/DAY449_[BaekJoon] 좋은 단어 (Python).ipynb | ###Markdown
Tuesday, 10 August 2021 BaekJoon - Good Word (좋은 단어) (Python) Problem : https://www.acmicpc.net/problem/2857 Blog : https://somjang.tistory.com/entry/BaekJoon-3986%EB%B2%88-%EC%A2%8B%EC%9D%80-%EB%8B%A8%EC%96%B4-Python Solution
###Code
def well_word(word_list):
well_cnt = 0
    # A word is "good" if it can be reduced to nothing by repeatedly removing
    # adjacent identical pairs of letters (stack-based matching).
    for word in word_list:
        my_stack = []
        for char in word:
            if my_stack and my_stack[-1] == char:
                my_stack.pop()
            else:
                my_stack.append(char)
if my_stack == []:
well_cnt += 1
return well_cnt
if __name__ == "__main__":
word_list = []
for _ in range(int(input())):
word = input()
word_list.append(word)
print(well_word(word_list))
###Output
_____no_output_____ |
Notebooks/Stop-words Comparison.ipynb | ###Markdown
Stop-words ComparisonPortuguese and English* NLTK* gensim* spaCy
###Code
import nltk
import gensim
import spacy
spacy_en = spacy.load('en_core_web_sm')
spacy_pt = spacy.load('pt_core_news_sm')
stopwords_nltk_pt = nltk.corpus.stopwords.words('portuguese')
stopwords_nltk_en = nltk.corpus.stopwords.words('english')
stopwords_spacy_en = spacy.lang.en.stop_words.STOP_WORDS
stopwords_spacy_pt = spacy.lang.pt.stop_words.STOP_WORDS
print('Number of stop words spaCy english: %d' % len(stopwords_spacy_en))
print('Number of stop words NLTK english: %d' % len(stopwords_nltk_en))
print('Number of stop words spaCy portuguese: %d' % len(stopwords_spacy_pt))
print('Number of stop words NLTK portuguese: %d' % len(stopwords_nltk_pt))
print('Number of stop words gensim english: %d' % len(gensim.parsing.preprocessing.STOPWORDS))
spacy_en.vocab  # the spaCy Vocab object (an attribute, not callable)
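# A quick look at how much the three English stop-word lists overlap
# (uses the variables defined above).
nltk_en_set = set(stopwords_nltk_en)
spacy_en_set = set(stopwords_spacy_en)
gensim_en_set = set(gensim.parsing.preprocessing.STOPWORDS)
print('common to all three:', len(nltk_en_set & spacy_en_set & gensim_en_set))
print('only in NLTK:', len(nltk_en_set - spacy_en_set - gensim_en_set))
print('only in spaCy:', len(spacy_en_set - nltk_en_set - gensim_en_set))
print('only in gensim:', len(gensim_en_set - nltk_en_set - spacy_en_set))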
###Output
_____no_output_____ |
Linear_Tranformation_Demo_3.ipynb | ###Markdown
###Code
import numpy as np
A = np.array([[4,10,8],[10,26,26],[8,26,61]])
print(A)
inv_A = np.linalg.inv(A)
print(inv_A)
B = np.array([[44],[128],[214]])
print(B)
# A^-1 A X = A^-1 B  =>  X = A^-1 B
X = np.dot(inv_A,B)
print(X)
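# Note: np.linalg.solve(A, B) solves the system directly and is usually
# preferred over forming the inverse explicitly (better numerical stability);
# it gives the same X as above.
X_solve = np.linalg.solve(A, B)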
###Output
[[ 4 10 8]
[10 26 26]
[ 8 26 61]]
[[ 25.27777778 -11.16666667 1.44444444]
[-11.16666667 5. -0.66666667]
[ 1.44444444 -0.66666667 0.11111111]]
[[ 44]
[128]
[214]]
[[-8.]
[ 6.]
[ 2.]]
|
Model Evaluation/boston_housing.ipynb | ###Markdown
Machine Learning Engineer Nanodegree Model Evaluation & Validation Project: Predicting Boston Housing PricesWelcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting StartedIn this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a *good fit* could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/machine-learning-databases/housing/). The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:- 16 data points have an `'MEDV'` value of 50.0. These data points likely contain **missing or censored values** and have been removed.- 1 data point has an `'RM'` value of 8.78. This data point can be considered an **outlier** and has been removed.- The features `'RM'`, `'LSTAT'`, `'PTRATIO'`, and `'MEDV'` are essential. The remaining **non-relevant features** have been excluded.- The feature `'MEDV'` has been **multiplicatively scaled** to account for 35 years of market inflation.Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
###Code
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from sklearn.cross_validation import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print("Boston housing dataset has {} data points with {} variables each.".format(*data.shape))
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
###Markdown
Data ExplorationIn this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into **features** and the **target variable**. The **features**, `'RM'`, `'LSTAT'`, and `'PTRATIO'`, give us quantitative information about each data point. The **target variable**, `'MEDV'`, will be the variable we seek to predict. These are stored in `features` and `prices`, respectively. Implementation: Calculate StatisticsFor your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since `numpy` has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.In the code cell below, you will need to implement the following:- Calculate the minimum, maximum, mean, median, and standard deviation of `'MEDV'`, which is stored in `prices`. - Store each calculation in their respective variable.
###Code
# TODO: Minimum price of the data
minimum_price = min(prices)
# TODO: Maximum price of the data
maximum_price = max(prices)
# TODO: Mean price of the data
mean_price = np.mean(prices)
# TODO: Median price of the data
median_price = np.median(prices)
# TODO: Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print("Statistics for Boston housing dataset:\n")
print("Minimum price: ${}".format(minimum_price))
print("Maximum price: ${}".format(maximum_price))
print("Mean price: ${}".format(mean_price))
print("Median price ${}".format(median_price))
print("Standard deviation of prices: ${}".format(std_price))
import matplotlib.pyplot as plt
plt.figure(figsize=(15, 5))
for i, col in enumerate(features.columns):
plt.subplot(1, 3, i+1)
plt.plot(data[col], prices, 'x')
plt.title('%s x MEDV' % col)
plt.xlabel(col)
plt.ylabel('MEDV')
###Output
Statistics for Boston housing dataset:
Minimum price: $105000.0
Maximum price: $1024800.0
Mean price: $454342.9447852761
Median price $438900.0
Standard deviation of prices: $165171.13154429474
###Markdown
Question 1 - Feature ObservationAs a reminder, we are using three features from the Boston housing dataset: `'RM'`, `'LSTAT'`, and `'PTRATIO'`. For each data point (neighborhood):- `'RM'` is the average number of rooms among homes in the neighborhood.- `'LSTAT'` is the percentage of homeowners in the neighborhood considered "lower class" (working poor).- `'PTRATIO'` is the ratio of students to teachers in primary and secondary schools in the neighborhood.** Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an **increase** in the value of `'MEDV'` or a **decrease** in the value of `'MEDV'`? Justify your answer for each.****Hint:** This problem can be phrased using examples like below. * Would you expect a home that has an `'RM'` value (number of rooms) of 6 to be worth more or less than a home that has an `'RM'` value of 7?* Would you expect a neighborhood that has an `'LSTAT'` value (percent of lower class workers) of 15 to have home prices worth more or less than a neighborhood that has an `'LSTAT'` value of 20?* Would you expect a neighborhood that has a `'PTRATIO'` value (ratio of students to teachers) of 10 to have home prices worth more or less than a neighborhood that has a `'PTRATIO'` value of 15? **Answer: ** - RM: increasing the number of rooms will increase the value of 'MEDV', because a house with 6 rooms will most probably have a bigger overall area than a house with 5 rooms.- LSTAT: A neighborhood with a higher percentage of lower-class workers will probably have a lower 'MEDV', because it means that a smaller share of residents can afford expensive prices.- PTRATIO: increasing the ratio of students to teachers will decrease the value of 'MEDV', because it means that the quality of education in the schools will decrease, so prices will be affected negatively.* The graphs agree with my intuition. ---- Developing a ModelIn this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions. Implementation: Define a Performance MetricIt is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the [*coefficient of determination*](http://stattrek.com/statistics/dictionary.aspx?definition=coefficient_of_determination), R2, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions. The values for R2 range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the **target variable**. A model with an R2 of 0 is no better than a model that always predicts the *mean* of the target variable, whereas a model with an R2 of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the **features**. 
_A model can be given a negative R2 as well, which indicates that the model is **arbitrarily worse** than one that always predicts the mean of the target variable._For the `performance_metric` function in the code cell below, you will need to implement the following:- Use `r2_score` from `sklearn.metrics` to perform a performance calculation between `y_true` and `y_predict`.- Assign the performance score to the `score` variable.
###Code
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true,y_predict)
# Return the score
return score
###Output
_____no_output_____
###Markdown
Question 2 - Goodness of FitAssume that a dataset contains five data points and a model made the following predictions for the target variable:| True Value | Prediction || :-------------: | :--------: || 3.0 | 2.5 || -0.5 | 0.0 || 2.0 | 2.1 || 7.0 | 7.8 || 4.2 | 5.3 |Run the code cell below to use the `performance_metric` function and calculate this model's coefficient of determination.
###Code
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print("Model has a coefficient of determination, R^2, of {:.3f}.".format(score))
###Output
Model has a coefficient of determination, R^2, of 0.923.
###Markdown
* Would you consider this model to have successfully captured the variation of the target variable? * Why or why not?** Hint: ** The R2 score is the proportion of the variance in the dependent variable that is predictable from the independent variable. In other words:* R2 score of 0 means that the dependent variable cannot be predicted from the independent variable.* R2 score of 1 means the dependent variable can be predicted from the independent variable.* R2 score between 0 and 1 indicates the extent to which the dependent variable is predictable. An * R2 score of 0.40 means that 40 percent of the variance in Y is predictable from X. **Answer:** * Yes, because it has a score of 0.923 which means that 92.3% of the variance in Y (dependent variable) is predictable from X (the independent variable).* However,using only a single performance metric can be very misguiding, therefore, we need more performance metrics to answer this question and others. Implementation: Shuffle and Split DataYour next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.For the code cell below, you will need to implement the following:- Use `train_test_split` from `sklearn.cross_validation` to shuffle and split the `features` and `prices` data into training and testing sets. - Split the data into 80% training and 20% testing. - Set the `random_state` for `train_test_split` to a value of your choice. This ensures results are consistent.- Assign the train and testing splits to `X_train`, `X_test`, `y_train`, and `y_test`.
###Code
# TODO: Import 'train_test_split'
from sklearn.cross_validation import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42)
# Success
print("Training and testing split was successful.")
###Output
Training and testing split was successful.
###Markdown
Question 3 - Training and Testing* What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?**Hint:** Think about how overfitting or underfitting is contingent upon how splits on data is done. **Answer: ** - By splitting the dataset into a ratio of training and testing subsets we train the model on our training set using different alogrithms such as support vector machine, nueral networks, linear regression , etc.. . Then, at the very end after we achieved a good enough accuracy in terms of predictions on our training set, we use our testing set as the model is now exposed to a new data that it has never seen. If the accuracy on the test set is as good as the training set, then the model is not overfitting the data, if it's worse, then it is definitly overfitting the data.- We try to have something like 80 20 training-testing split, because if we have like only 50% of the data in our training set, then the model will most probably underfit the data because there are few examples that it can learn from. and if we have like 99% in our training set, then the testing set will not be sufficient enough to tell if the model is overfitting or not as there are too many few examples in the test set to decide. ---- Analyzing Model PerformanceIn this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing `'max_depth'` parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone. Learning CurvesThe following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R2, the coefficient of determination. Run the code cell below and use these graphs to answer the following question.
###Code
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
###Output
_____no_output_____
###Markdown
Question 4 - Learning the Data* Choose one of the graphs above and state the maximum depth for the model. * What happens to the score of the training curve as more training points are added? What about the testing curve? * Would having more training points benefit the model? **Hint:** Are the learning curves converging to particular scores? Generally speaking, the more data you have, the better. But if your training and testing curves are converging with a score above your benchmark threshold, would this be necessary?Think about the pros and cons of adding more training points based on if the training and testing curves are converging. **Answer: *** Max_depth=3* The learning curves are both converging. * Adding more data can be good at first because it helps to decrease the bias: with more data, the model can learn more complex functions instead of overly simple ones (bias error).* However, once the model has converged, there is no need to add more data, because it would decrease the error only by a very small amount and it may be computationally expensive to do.* So, we can add more training points up to the point where the model has clearly converged; beyond that, there is no need for adding more points. Complexity CurvesThe following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the **learning curves**, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the `performance_metric` function. ** Run the code cell below and use this graph to answer the following two questions Q5 and Q6. **
###Code
vs.ModelComplexity(X_train, y_train)
###Output
_____no_output_____
###Markdown
Question 5 - Bias-Variance Tradeoff* When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? * How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?**Hint:** High bias is a sign of underfitting(model is not complex enough to pick up the nuances in the data) and high variance is a sign of overfitting(model is by-hearting the data and cannot generalize well). Think about which model(depth 1 or 10) aligns with which part of the tradeoff. **Answer: *** Max depth is 1 : The model suffers from high bias as it scores only 0.44 on the training set which indicates that the model is underfitting the data, it doesn't suffer from a variance problem as the validation set scores 0.42 which is very close to the training set score.* Max depth is 10 : The model suffers from high variance problem as it scores nearly 1 on the training set which is perfect, but the overfitting (the high variance) is very clear when looking on the validation set score which is 0.7 which is very far from the training set score and indicates that the model is not generalizing well. Question 6 - Best-Guess Optimal Model* Which maximum depth do you think results in a model that best generalizes to unseen data? * What intuition lead you to this answer?** Hint: ** Look at the graph above Question 5 and see where the validation scores lie for the various depths that have been assigned to the model. Does it get better with increased depth? At what point do we get our best validation score without overcomplicating our model? And remember, Occams Razor states "Among competing hypotheses, the one with the fewest assumptions should be selected." **Answer: *** Max depth is 3 : Because at 1 and 2, the model is very simple that the training score is very poor, and from 4 tell 10 the model is clearly overfitting the data and suffering from a high variance problem. So, I think that at a maximum depth of 3, the model is optimal. ----- Evaluating Model PerformanceIn this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from `fit_model`. Question 7 - Grid Search* What is the grid search technique?* How it can be applied to optimize a learning algorithm?** Hint: ** When explaining the Grid Search technique, be sure to touch upon why it is used, what the 'grid' entails and what the end goal of this method is. To solidify your answer, you can also give an example of a parameter in a model that can be optimized using this approach. **Answer: *** Grid search technique is basically trying to compare different combinations of hyperparameters and train all these combinations on the training set, then see how well they perform on the cross validation set (F1 score for example) and take the best combination.* For example : in a support vector machine there are different hyperparameters such as kernel and polynomial so be storing different values for these hyperparameters in a python dictionary and using the grid search technique, we can get the best kernel and polynomial combination that neither overfits nor underfits our model. Question 8 - Cross-Validation* What is the k-fold cross-validation training technique? 
* What benefit does this technique provide for grid search when optimizing a model?**Hint:** When explaining the k-fold cross validation technique, be sure to touch upon what 'k' is, how the dataset is split into different parts for training and testing and the number of times it is run based on the 'k' value.When thinking about how k-fold cross validation helps grid search, think about the main drawbacks of grid search which are hinged upon **using a particular subset of data for training or testing** and how k-fold cv could help alleviate that. You can refer to the [docs](http://scikit-learn.org/stable/modules/cross_validation.htmlcross-validation) for your answer. **Answer: *** In order to see how well the model is performing, we need to test it on another data, so we split our model into two subsets, the training set and the test set. However, once we used our test set, it will mess up our model as it will overfits in order to achieve a higher score so we will need another test set. So, in order to avoid this problem from the beginning, we split our data into three subsets, training set , validation set, and test set.However, by partitioning our data into three sets, we sharply reduce the number of samples which can be used for learning the model, and the results can depend on a particular random choice for the pair of (train, validation) sets.* So, one solution to that is by using a cross-validation set, which basically leaves the test set into the very end, but the validation set is no longer needed when doing cross validation. This is called k-fold technique, which is basically spliting our training set into k smaller sets, where the model is trained using k-1 as a training set, and the results are validated using the remaining part.* The performance is the average of the k values obtained.* This will benefit the grid search in terms of having more data sets which the algorithm can train and test on which will increase the performance significantly. Implementation: Fitting a ModelYour final implementation requires that you bring everything together and train a model using the **decision tree algorithm**. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the `'max_depth'` parameter for the decision tree. The `'max_depth'` parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called *supervised learning algorithms*.In addition, you will find your implementation is using `ShuffleSplit()` for an alternative form of cross-validation (see the `'cv_sets'` variable). While it is not the K-Fold cross-validation technique you describe in **Question 8**, this type of cross-validation technique is just as useful!. The `ShuffleSplit()` implementation below will create 10 (`'n_splits'`) shuffled sets, and for each shuffle, 20% (`'test_size'`) of the data will be used as the *validation set*. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.Please note that ShuffleSplit has different parameters in scikit-learn versions 0.17 and 0.18.For the `fit_model` function in the code cell below, you will need to implement the following:- Use [`DecisionTreeRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) from `sklearn.tree` to create a decision tree regressor object. 
- Assign this object to the `'regressor'` variable.- Create a dictionary for `'max_depth'` with the values from 1 to 10, and assign this to the `'params'` variable.- Use [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html) from `sklearn.metrics` to create a scoring function object. - Pass the `performance_metric` function as a parameter to the object. - Assign this scoring function to the `'scoring_fnc'` variable.- Use [`GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) from `sklearn.grid_search` to create a grid search object. - Pass the variables `'regressor'`, `'params'`, `'scoring_fnc'`, and `'cv_sets'` as parameters to the object. - Assign the `GridSearchCV` object to the `'grid'` variable.
###Code
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
from sklearn.tree import DecisionTreeRegressor
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
# sklearn version 0.18: ShuffleSplit(n_splits=10, test_size=0.1, train_size=None, random_state=None)
# sklearn versiin 0.17: ShuffleSplit(n, n_iter=10, test_size=0.1, train_size=None, random_state=None)
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=42)
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth':[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# TODO: Create the grid search cv object --> GridSearchCV()
# Make sure to include the right parameters in the object:
# (estimator, param_grid, scoring, cv) which have values 'regressor', 'params', 'scoring_fnc', and 'cv_sets' respectively.
grid = GridSearchCV(estimator=regressor,param_grid=params, scoring=scoring_fnc,cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/grid_search.py:42: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. This module will be removed in 0.20.
DeprecationWarning)
###Markdown
Making PredictionsOnce a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a *decision tree regressor*, the model has learned *what the best questions to ask about the input data are*, and can respond with a prediction for the **target variable**. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on. Question 9 - Optimal Model* What maximum depth does the optimal model have? How does this result compare to your guess in **Question 6**? Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
###Code
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print("Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']))
###Output
Parameter 'max_depth' is 4 for the optimal model.
###Markdown
** Hint: ** The answer comes from the output of the code snippet above.**Answer: *** Parameter 'max_depth' is 4 for the optimal model.* The maximum depth increased from 3 to 4 because we used the k-fold cross-validation technique, which helped to reduce both underfitting and overfitting by giving the model a larger data set for training (k-1 folds) and by testing it many times (k folds). Question 10 - Predicting Selling PricesImagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:| Feature | Client 1 | Client 2 | Client 3 || :---: | :---: | :---: | :---: || Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms || Neighborhood poverty level (as %) | 17% | 32% | 3% || Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |* What price would you recommend each client sell his/her home at? * Do these prices seem reasonable given the values for the respective features? **Hint:** Use the statistics you calculated in the **Data Exploration** section to help justify your response. Of the three clients, client 3 has the biggest house, in the best public school neighborhood with the lowest poverty level; while client 2 has the smallest house, in a neighborhood with a relatively high poverty rate and not the best public schools.Run the code block below to have your optimized model make predictions for each client's home.
###Code
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price))
import matplotlib.pyplot as plt
plt.figure(figsize=(15, 5))
for i, col in enumerate(features.columns):
plt.subplot(1, 3, i+1)
plt.boxplot(data[col])
plt.title(col)
for j in range(3):
plt.plot(1, client_data[j][i], marker='o')
plt.annotate('Client %s' % str(j+1), xy=(1, client_data[j][i]))
###Output
Predicted selling price for Client 1's home: $403,025.00
Predicted selling price for Client 2's home: $237,478.72
Predicted selling price for Client 3's home: $931,636.36
###Markdown
**Answer: *** I would recommend a price around 1,000,000 for the third client, as it has the biggest number of rooms, the lowest poverty level and the lowest student-teacher ratio, so it should be near the maximum value that we calculated (1,024,800).* I would recommend a price near the mean and median prices that we calculated (about 454,343) for the first client, as it is average in every aspect (number of rooms, poverty level, student-teacher ratio).* I would recommend a price near the lowest price that we calculated (105,000) for the second client, as it has the lowest number of rooms and the highest poverty level and student-teacher ratio.* The prices seem reasonable, as the features are weighted sensibly, so no single feature affects the price either too strongly or too weakly.* The graphs agree with my predictions. SensitivityAn optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. **Run the code cell below to run the `fit_model` function ten times with different training and testing sets to see how the prediction for a specific client changes with respect to the data it's trained on.**
###Code
vs.PredictTrials(features, prices, fit_model, client_data)
###Output
Trial 1: $391,183.33
Trial 2: $419,700.00
Trial 3: $415,800.00
Trial 4: $420,622.22
Trial 5: $418,377.27
Trial 6: $411,931.58
Trial 7: $399,663.16
Trial 8: $407,232.00
Trial 9: $351,577.61
Trial 10: $413,700.00
Range in prices: $69,044.61
|
Week2/Logistic Regression with Numpy.ipynb | ###Markdown
Logistic Regression with NumpyHere we'll attempt to classify the Iris Flower Dataset with a model that we'll build ourselves. **Note:** We won't be using any frameworks, we'll just use numpy and plain python. The Iris Flower Data set / Fisher's Iris data set - https://en.wikipedia.org/wiki/Iris_flower_data_set - Multivariate data set introduced by the British statistician and biologist Ronald Fisher in his 1936 paper *The use of multiple measurements in taxonomic problems* as an example of linear discriminant analysis. - Input / Feature Names (150 x 4) - sepal length (cm) - sepal width (cm) - petal length (cm) - petal width (cm) - Output (150 x 1) - Iris species label 0. setosa 1. versicolor 2. virginica- Aim : - Classifying an Iris flower into one of these species based on features - ie. Assign a class to given input sampleSo as we can see, the dataset consists of **4 input features** and the flowers can be classified as **1 of 3 categorires** as specified above. Importing DependanciesSo here are the essentials we'll need to perform the above classification.*Additionally we'll use matplotlib to visualize the data*
###Code
from sklearn import datasets
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
import numpy as np
import itertools
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading the DatasetThe dataset comes with sklearn as a standard dataset. So we'll use the datasets module within sklearn to load our data.
###Code
iris = datasets.load_iris()
X = iris.data.T # Here we take the transposition of the data matrix.
y = iris.target.reshape((1,150))
print(X.shape)
print(y.shape)
print(type(y))
order = np.random.permutation(len(X[1]))
###Output
(4, 150)
(1, 150)
<type 'numpy.ndarray'>
###Markdown
One-Hot EncodingOne-hot (or one-of-k) encoding is a method we'll use to encode our classes such that **no artificial ordering or magnitude is implied between the classes** and **all target values lie between 0 and 1**.This is how our data will be modified:|Original Value| One-Hot Encoded||--------------|----------------||0| [1, 0, 0]||1| [0, 1, 0]||2| [0, 0, 1]|Refer to the [sklearn OneHotEncoder docs](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html) for more information. Since our dataset is small, we'll just one-hot encode it with a small Python script.
###Code
ohe_y = np.array([[0,0,0]])
for datapoint in y[0]:
if datapoint == 0:
ohe_y = np.vstack([ohe_y, np.array([1,0,0])])
elif datapoint == 1:
ohe_y = np.vstack([ohe_y, np.array([0,1,0])])
else:
ohe_y = np.vstack([ohe_y, np.array([0,0,1])])
ohe_y = ohe_y[1:,:]
print(np.shape(ohe_y))
assert(np.sum(ohe_y, axis=1).all() == 1) # Sanity Check: We're checking that each tuple adds to 1.
###Output
(150, 3)
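###Markdown
As an aside, we imported scikit-learn's `OneHotEncoder` above but never used it. The cell below is a minimal sketch of how the same encoding could be produced with it (depending on your scikit-learn version the keyword argument may be `sparse_output` instead of `sparse`).
###Code
# Hypothetical alternative: let scikit-learn build the one-hot matrix for us.
# The encoder expects one sample per row, so we reshape y from (1, 150) to (150, 1).
ohe = OneHotEncoder(sparse=False)
ohe_y_sklearn = ohe.fit_transform(y.reshape(-1, 1))
print(np.shape(ohe_y_sklearn))
print(np.allclose(ohe_y_sklearn, ohe_y)) # should match the hand-rolled encoding
###Output
_____no_output_____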
###Markdown
Splitting Training and Testing DataNow that we've set up our data, let's split it into **Training** and **Testing** sets.As we've explained before, to properly validate our model we need to ensure that it is trained and tested on mutually exclusive sets of data. **Why?**If we train a model on a set of data, the model will learn the patterns within that dataset and may perform well on it. But because it has already been fit to that data, accuracy measured on the same data would be misleadingly high. Hence we need to test the model on unseen data to properly understand its performance.
###Code
# Shuffling data
order = np.random.permutation(len(X[1]))
portion = 30
# Splitting data into train and test
# first 30 shuffled samples (indices 0-29)  : Test set
# remaining 120 shuffled samples (30-149)   : Train set
test_x = X[:,order[:portion]]
test_y = ohe_y[order[:portion],:]
test_y_non_ohe = y[:,order[:portion]]
train_x = X[:,order[portion:]]
train_y = ohe_y[order[portion:],:]
print ("Training dataset shape")
print (train_x.shape)
print (train_y.shape)
print()
print ("Test dataset shape")
print (test_x.shape)
print (test_y.shape)
###Output
Training dataset shape
(4, 120)
(120, 3)
Test dataset shape
(4, 30)
(30, 3)
###Markdown
Foreword Before Moving OnPast this point we'll be using some formulae that we **just define** as is. We don't provide the derivations here; hopefully we'll get to cover them in person during a class. Just know that the calculus isn't really all that hard, we simply can't walk through it here. [Click Here](https://towardsdatascience.com/logistic-regression-detailed-overview-46c4da4303bc) to get a good idea of the derivation.If you have any questions just ask us in class. Activation FunctionNow that we've split our data into training and testing sets, let's move on to the actual model. As we've explained in class, an activation function defines the output of our logit, taking the weighted features as input.For a list of various activation functions and their properties [click here](https://en.wikipedia.org/wiki/Activation_function#Comparison_of_activation_functions).We need an activation function that:* Squashes the output into the range 0 to 1* Has only a small input range within which the output is neither close to 0 nor 1.Hence we'll use the sigmoid activation function, $\sigma(z) = \dfrac{1}{1 + e^{-z}}$.To implement it in Python we just need to define the following function; the next cell also plots how it maps an input:
###Code
def sigmoid(z):
return 1.0 / (1 + np.exp(-z))
# Plotting output of sigmoid between -20 and +20
sample_x = np.linspace(-20,20,100)
plt.plot(sample_x,sigmoid(sample_x))
plt.show()
###Output
_____no_output_____
###Markdown
HypothesisThe output for a given sample is derived using the following formula: $z = \theta^{T}x$ and $h_{\theta}(x) = g(z) = \dfrac{1}{1 + e^{-z}}$.Here, - `g` - Sigmoid Function- `theta` - Weight Matrix- `z` - Weighted input (logit)- `h` - Activated Output PredictionHere we'll take the current weights and the input and calculate the activation output. This output will be used to calculate the cost and gradients for the optimisation function.
###Code
def predict(x,theta):
z = (theta.T).dot(x)
hx = sigmoid(z)
# print "Shape after activation: " + str(hx.shape)
return hx
###Output
_____no_output_____
###Markdown
Loss**REFER : https://www.youtube.com/watch?v=K0YBDyxEXKE**Here the **cost** or *loss* is calculated.The loss is the error between the predicted output and the actual output for each case: for every sample and class we accumulate the cross-entropy $-\left[y\log(h) + (1-y)\log(1-h)\right]$, which the function below sums over the training set.
###Code
def loss(h, y):
    # Cross-entropy summed over all samples and classes, scaled by y.shape[1];
    # h is (classes, samples) while y is (samples, classes), hence the transposes.
    return np.sum(np.negative(y.T) * np.log(h) - (1 - y).T * (np.log(1 - h))) / y.shape[1]
###Output
_____no_output_____
###Markdown
GradientNow, to minimise the error we'll use a strategy called [Gradient Descent](https://machinelearningmastery.com/gradient-descent-for-machine-learning/), which will help us reduce the gap between the actual values and our predicted values.To do this we'll calculate the **gradient** for our current iteration of weights. The gradient can simply be understood as a small delta (change) we should bring about in our weights to move closer to an optimum solution. For logistic regression it works out to $\nabla_{\theta}J = \frac{1}{m}X\,(h - y)$, where $m$ is the number of training samples.
###Code
def gradient(X,h,y):
    # Gradient of the cross-entropy loss with respect to theta: (1/m) * X . (h - y)
    return (1.0 / y.shape[0]) * np.dot(X, (h.T - y))
###Output
_____no_output_____
###Markdown
Forward Propagation (Application of Activation Function)This function ties the pieces above together in the described order. We're basically putting everything together.The basic algorithm is as follows:1. Predict the output2. Use the predicted output to calculate the cost3. Find the gradient4. Update the weights based on the gradient5. Repeat 1 - 4 until convergenceThe helper below performs steps 1-3 and returns the gradient and error; the weight update (step 4) happens in the training loop defined in the next section.
###Code
def get_gradient(theta, b, x, y):
# Predicting the output
y_estimate = predict(x,theta)
# Use the predicted output to calculate the cost.
error = loss(y_estimate,y)
# Find the gradient
grad = gradient(x,y_estimate,y)
    # Collapse the error to a scalar before returning it
    error = np.squeeze(error)
return grad,error
###Output
_____no_output_____
###Markdown
Iteratively Updating the WeightsNow we've finally reached the stage where we can put all the above functions to use and train our model.We'll first initialize our matrix of weights. The shape is (4, 3) because we have 4 input features and 3 output classes.Since there are multiple classes, the correct function to use here would be a [Softmax Function](https://developers.google.com/machine-learning/crash-course/multi-class-neural-networks/softmax), but we'll simply take the class with maximum confidence and consider that to be the predicted output.Here the steps are simple:1. Fetch the gradient for the current iteration using get_gradient()2. Update the weights3. Check for convergence with the tolerance4. If converged, stop and return the current weights, bias and error.
###Code
def learn(alpha = 0.1, tolerance = 1e-4):
    # Initialise the weight matrix (4 input features x 3 output classes) with zeros
    theta = np.zeros((4,3))
    b = 0
    # The learning rate and tolerance are passed in as parameters.
    print("Learning Rate: " + str(alpha))
    print("Tolerance: " + str(tolerance))
    # Perform Gradient Descent
    iterations = 1
    print("Weights: ")
    print(theta)
while True:
theta_gradient, error = get_gradient(theta, b, train_x, train_y)
theta_new = theta - (alpha * theta_gradient)
# Stopping Condition: We define convergence based on the error tolerance.
if np.sum(abs(theta_new - theta)) < tolerance:
print ("Converged.")
break
# Print error every 50 iterations
if iterations % 50 == 0:
print ("Iteration: "+str(iterations)+" - Error: "+ str(np.sum(error)))
iterations += 1
theta = theta_new
errorold = error
return theta, b, error
###Output
_____no_output_____
###Markdown
TrainingThat's it. Now we just have to call our training function with the necessary parameters and see our model train with the provided data.
###Code
#Let's train the model:
weights, beta, error_final = learn()
###Output
Learning Rate: 0.1
Tolerance: 0.0001
Weights:
[[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]
[0. 0. 0.]]
Iteration: 50 - Error: 41.85552540842361
Iteration: 100 - Error: 37.123021090391696
Iteration: 150 - Error: 34.807879269468174
Iteration: 200 - Error: 33.27764259055146
Iteration: 250 - Error: 32.144236615399535
Iteration: 300 - Error: 31.255369820382327
Iteration: 350 - Error: 30.53402491785185
Iteration: 400 - Error: 29.93494892295152
Iteration: 450 - Error: 29.42888583304322
Iteration: 500 - Error: 28.995669013831634
Iteration: 550 - Error: 28.620755984465518
Iteration: 600 - Error: 28.29330643871076
Iteration: 650 - Error: 28.005032159154705
Iteration: 700 - Error: 27.74946753808091
Iteration: 750 - Error: 27.52148536530038
Iteration: 800 - Error: 27.316963604368805
Iteration: 850 - Error: 27.132549280600227
Iteration: 900 - Error: 26.965487106809196
Iteration: 950 - Error: 26.813492564864703
Iteration: 1000 - Error: 26.67465628500645
Iteration: 1050 - Error: 26.547370930756585
Iteration: 1100 - Error: 26.430274564636154
Iteration: 1150 - Error: 26.32220627561698
Iteration: 1200 - Error: 26.222171057422184
Iteration: 1250 - Error: 26.129311753059834
Iteration: 1300 - Error: 26.04288645703987
Iteration: 1350 - Error: 25.962250175252347
Iteration: 1400 - Error: 25.886839836638774
Iteration: 1450 - Error: 25.816161965497837
Iteration: 1500 - Error: 25.74978248193356
Iteration: 1550 - Error: 25.68731821652661
Iteration: 1600 - Error: 25.62842981483185
Iteration: 1650 - Error: 25.57281577553304
Iteration: 1700 - Error: 25.520207418536717
Iteration: 1750 - Error: 25.470364619935697
Iteration: 1800 - Error: 25.42307218250933
Iteration: 1850 - Error: 25.37813673538044
Iteration: 1900 - Error: 25.335384076195762
Iteration: 1950 - Error: 25.294656884919533
Iteration: 2000 - Error: 25.255812750921276
Iteration: 2050 - Error: 25.218722465176857
Iteration: 2100 - Error: 25.183268537606782
Iteration: 2150 - Error: 25.149343906248664
Iteration: 2200 - Error: 25.11685081041192
Iteration: 2250 - Error: 25.085699804436555
Iteration: 2300 - Error: 25.055808892363785
Iteration: 2350 - Error: 25.027102766875924
Iteration: 2400 - Error: 24.999512138394888
Iteration: 2450 - Error: 24.97297314233944
Iteration: 2500 - Error: 24.947426814306045
Iteration: 2550 - Error: 24.922818624419037
Iteration: 2600 - Error: 24.89909806334248
Iteration: 2650 - Error: 24.876218273498296
Iteration: 2700 - Error: 24.85413571992648
Iteration: 2750 - Error: 24.83280989597967
Iteration: 2800 - Error: 24.81220305968886
Iteration: 2850 - Error: 24.792279997186593
Iteration: 2900 - Error: 24.77300781004472
Iteration: 2950 - Error: 24.75435572378754
Iteration: 3000 - Error: 24.736294915187823
Iteration: 3050 - Error: 24.718798356252794
Iteration: 3100 - Error: 24.701840673064748
Iteration: 3150 - Error: 24.685398017865122
Iteration: 3200 - Error: 24.669447952963864
Iteration: 3250 - Error: 24.65396934522504
Iteration: 3300 - Error: 24.638942270025424
Iteration: 3350 - Error: 24.62434792371111
Iteration: 3400 - Error: 24.610168543688406
Iteration: 3450 - Error: 24.596387335383366
Iteration: 3500 - Error: 24.582988405389177
Iteration: 3550 - Error: 24.56995670019677
Iteration: 3600 - Error: 24.55727794996906
Iteration: 3650 - Error: 24.544938616878166
Iteration: 3700 - Error: 24.5329258475759
Iteration: 3750 - Error: 24.521227429413205
Iteration: 3800 - Error: 24.509831750064397
Iteration: 3850 - Error: 24.498727760247405
Iteration: 3900 - Error: 24.487904939262915
Iteration: 3950 - Error: 24.477353263102987
Iteration: 4000 - Error: 24.467063174904947
Iteration: 4050 - Error: 24.45702555754811
Iteration: 4100 - Error: 24.447231708210964
Iteration: 4150 - Error: 24.43767331472377
Iteration: 4200 - Error: 24.428342433567536
Iteration: 4250 - Error: 24.41923146938419
Iteration: 4300 - Error: 24.41033315587553
Iteration: 4350 - Error: 24.401640537979763
Iteration: 4400 - Error: 24.393146955224605
Iteration: 4450 - Error: 24.384846026165174
Iteration: 4500 - Error: 24.376731633822846
Iteration: 4550 - Error: 24.368797912048887
Iteration: 4600 - Error: 24.36103923274328
Iteration: 4650 - Error: 24.353450193865118
Iteration: 4700 - Error: 24.346025608176358
Iteration: 4750 - Error: 24.338760492665855
Iteration: 4800 - Error: 24.331650058604755
Iteration: 4850 - Error: 24.32468970218858
Iteration: 4900 - Error: 24.317874995724875
Iteration: 4950 - Error: 24.311201679328732
Iteration: 5000 - Error: 24.304665653091423
Iteration: 5050 - Error: 24.298262969690114
Iteration: 5100 - Error: 24.29198982740927
Iteration: 5150 - Error: 24.28584256354651
Iteration: 5200 - Error: 24.279817648177783
Iteration: 5250 - Error: 24.27391167825874
Iteration: 5300 - Error: 24.268121372040735
Iteration: 5350 - Error: 24.262443563781662
Iteration: 5400 - Error: 24.25687519873317
Iteration: 5450 - Error: 24.251413328387216
Iteration: 5500 - Error: 24.246055105966022
Iteration: 5550 - Error: 24.240797782140863
Iteration: 5600 - Error: 24.235638700965822
Iteration: 5650 - Error: 24.23057529601392
Iteration: 5700 - Error: 24.225605086703627
Iteration: 5750 - Error: 24.22072567480485
Iteration: 5800 - Error: 24.215934741114
Iteration: 5850 - Error: 24.211230042288495
Iteration: 5900 - Error: 24.206609407831778
Iteration: 5950 - Error: 24.20207073722047
Iteration: 6000 - Error: 24.197611997165737
Iteration: 6050 - Error: 24.19323121900153
Iteration: 6100 - Error: 24.188926496192888
Iteration: 6150 - Error: 24.184695981957816
Iteration: 6200 - Error: 24.180537886996635
Iteration: 6250 - Error: 24.176450477323197
Iteration: 6300 - Error: 24.17243207219263
Iteration: 6350 - Error: 24.168481042120604
Iteration: 6400 - Error: 24.16459580698935
Iteration: 6450 - Error: 24.160774834236104
Iteration: 6500 - Error: 24.157016637119785
Iteration: 6550 - Error: 24.153319773061906
Iteration: 6600 - Error: 24.149682842058056
Iteration: 6650 - Error: 24.146104485156666
Iteration: 6700 - Error: 24.14258338300137
Iteration: 6750 - Error: 24.139118254434262
Iteration: 6800 - Error: 24.135707855156824
Iteration: 6850 - Error: 24.13235097644603
Iteration: 6900 - Error: 24.12904644392272
Iteration: 6950 - Error: 24.125793116370044
Iteration: 7000 - Error: 24.122589884599375
Iteration: 7050 - Error: 24.11943567036178
Iteration: 7100 - Error: 24.116329425302705
Iteration: 7150 - Error: 24.113270129957925
Iteration: 7200 - Error: 24.11025679278906
Iteration: 7250 - Error: 24.107288449256675
Iteration: 7300 - Error: 24.104364160929308
Iteration: 7350 - Error: 24.10148301462694
Iteration: 7400 - Error: 24.09864412159727
Iteration: 7450 - Error: 24.09584661672335
Iteration: 7500 - Error: 24.093089657761322
Iteration: 7550 - Error: 24.09037242460676
Iteration: 7600 - Error: 24.087694118588605
Iteration: 7650 - Error: 24.0850539617893
Iteration: 7700 - Error: 24.082451196390195
Iteration: 7750 - Error: 24.07988508404094
Iteration: 7800 - Error: 24.07735490525209
Iteration: 7850 - Error: 24.07485995880977
Iteration: 7900 - Error: 24.072399561211554
Iteration: 7950 - Error: 24.06997304612264
Iteration: 8000 - Error: 24.067579763851587
Iteration: 8050 - Error: 24.06521908084467
Iteration: 8100 - Error: 24.062890379198222
Iteration: 8150 - Error: 24.060593056188083
Iteration: 8200 - Error: 24.058326523815712
Iteration: 8250 - Error: 24.056090208369994
Iteration: 8300 - Error: 24.05388355000436
Iteration: 8350 - Error: 24.051706002328487
Iteration: 8400 - Error: 24.049557032014064
Iteration: 8450 - Error: 24.047436118413994
Iteration: 8500 - Error: 24.045342753194628
Iteration: 8550 - Error: 24.043276439980378
Iteration: 8600 - Error: 24.041236694010404
Iteration: 8650 - Error: 24.039223041806725
Iteration: 8700 - Error: 24.03723502085349
Iteration: 8750 - Error: 24.03527217928685
Iteration: 8800 - Error: 24.033334075595075
Iteration: 8850 - Error: 24.031420278328635
Iteration: 8900 - Error: 24.02953036581965
Iteration: 8950 - Error: 24.02766392591056
Iteration: 9000 - Error: 24.02582055569161
Iteration: 9050 - Error: 24.02399986124675
Iteration: 9100 - Error: 24.02220145740778
Iteration: 9150 - Error: 24.020424967516288
Iteration: 9200 - Error: 24.018670023193224
Iteration: 9250 - Error: 24.016936264115795
Iteration: 9300 - Error: 24.015223337801306
Iteration: 9350 - Error: 24.013530899397924
Iteration: 9400 - Error: 24.011858611481898
Iteration: 9450 - Error: 24.010206143861097
Iteration: 9500 - Error: 24.008573173384715
Iteration: 9550 - Error: 24.006959383758744
Iteration: 9600 - Error: 24.005364465367222
Iteration: 9650 - Error: 24.003788115098853
Iteration: 9700 - Error: 24.002230036178997
Iteration: 9750 - Error: 24.000689938006673
Iteration: 9800 - Error: 23.999167535996587
Iteration: 9850 - Error: 23.997662551425787
Iteration: 9900 - Error: 23.996174711285004
Iteration: 9950 - Error: 23.994703748134402
Iteration: 10000 - Error: 23.993249399963588
Iteration: 10050 - Error: 23.99181141005577
Iteration: 10100 - Error: 23.99038952685593
Iteration: 10150 - Error: 23.98898350384285
Iteration: 10200 - Error: 23.98759309940486
Iteration: 10250 - Error: 23.986218076719197
Iteration: 10300 - Error: 23.984858203634918
Iteration: 10350 - Error: 23.98351325255908
###Markdown
What Just Happened?So we've run our function. After 42300 iterations we can see that the model has *converged*. But why is the training error so high?It's **23.8200 points** as can be seen.But why is it that high? **So we admit it we cheated a little bit**. Since we're having 3 possible classes for prediction we should technically use an activation function that balances the probability such that the total sum of the probabalities for each class together would be 1. But here we're just considering them to be independent probabilities and taking the maximum one to determine the class. Hence the error will always be calculated from either 0 or 1. Hence when you look at an output it will never present an example like (1, 0, 0). There'll definitely be some error in at least one of the classes. Hence that error adds up. So next time we do a classification problem with multiple classes we'll actually use a **softmax** layer which will help perform better for this kind of problem. Testing and ValidationNow that we've retreived a final set of weights and the bias after training we'll use these values to predict values in our test set and validate the model. We can use a variation of our prediction function and find the class that has maximum probability to determine the result for each test tuple.
###Code
result = predict(test_x, weights)
print(result.T)
###Output
[[9.99e-01 2.23e-01 1.49e-13]
[5.15e-07 1.06e-01 8.54e-01]
[1.00e+00 1.69e-01 1.37e-13]
[5.16e-08 2.47e-01 9.91e-01]
[6.14e-08 8.28e-01 4.86e-01]
[1.00e+00 1.75e-01 6.87e-14]
[1.00e+00 1.45e-01 8.52e-15]
[9.59e-09 4.88e-01 9.96e-01]
[1.86e-04 6.35e-01 1.70e-03]
[1.98e-08 7.76e-02 1.00e+00]
[9.52e-05 4.12e-01 4.09e-03]
[1.99e-09 8.61e-01 9.95e-01]
[8.91e-06 7.50e-01 1.29e-02]
[6.56e-09 2.89e-01 1.00e+00]
[1.00e+00 1.67e-01 1.35e-15]
[1.17e-05 5.48e-01 7.85e-03]
[8.59e-09 2.94e-01 1.00e+00]
[3.72e-09 8.16e-01 9.99e-01]
[2.82e-08 1.84e-01 9.97e-01]
[1.00e+00 1.27e-01 3.73e-16]
[3.90e-08 6.28e-02 1.00e+00]
[1.00e+00 7.83e-02 1.73e-14]
[1.00e+00 6.45e-02 3.71e-16]
[9.99e-01 2.18e-01 4.93e-13]
[1.00e+00 5.55e-02 1.47e-15]
[9.99e-01 2.04e-01 8.08e-14]
[1.00e+00 2.41e-01 1.09e-15]
[3.79e-10 7.73e-01 1.00e+00]
[2.20e-03 2.24e-01 8.08e-05]
[1.44e-06 2.64e-01 7.59e-01]]
###Markdown
So these are the actual results from the model. We can see that we get probabilities of each class but the sum isn't necessarily 1.
###Code
print(np.sum(result, axis=0))
###Output
[1.22 0.96 1.17 1.24 1.31 1.17 1.15 1.48 0.64 1.08 0.42 1.86 0.76 1.29
1.17 0.56 1.29 1.82 1.18 1.13 1.06 1.08 1.06 1.22 1.06 1.2 1.24 1.77
0.23 1.02]
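###Markdown
As noted earlier, a **softmax** layer would force the class scores of each sample to sum to exactly 1. The cell below is illustrative only and is not how the model above was trained: it applies a numerically stable softmax to the raw class scores of the test set, just to show what such normalised outputs look like.
###Code
def softmax(z):
    # Subtract the row-wise max for numerical stability before exponentiating
    e = np.exp(z - np.max(z, axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
# Raw (pre-sigmoid) class scores for the test set: one row of 3 scores per sample
raw_scores = weights.T.dot(test_x).T
# Every row of the softmax output sums to 1
print(np.sum(softmax(raw_scores), axis=1))
###Output
_____no_output_____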
###Markdown
Results from Model
###Code
predicted_result = np.argmax(result.T, axis=1)
print(predicted_result)
###Output
[0 2 0 2 1 0 0 2 1 2 1 2 1 2 0 1 2 2 2 0 2 0 0 0 0 0 0 2 1 2]
###Markdown
Actual Results
###Code
actual_result = test_y_non_ohe[0]
print(actual_result)
###Output
[0 2 0 2 2 0 0 2 1 2 1 2 1 2 0 1 2 2 2 0 2 0 0 0 0 0 0 2 1 2]
###Markdown
Let's quickly visualize this with a confusion matrix. Confusion MatrixA *confusion matrix* is a specific table layout that allows visualization of the performance of an algorithm. So we can easily see our results and the accuracy of the model itself.You can actually **ignore** the next cell as it's just a function to plot the confusion matrix but we've included it in case you want to see how it's done.
###Code
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
###Output
_____no_output_____
###Markdown
Now we'll generate the confusion matrix
###Code
from sklearn.metrics import confusion_matrix
matrix = confusion_matrix(actual_result, predicted_result)
###Output
_____no_output_____
###Markdown
Plotting
###Code
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(matrix, classes=["Setosa", "Versicolor", "Virginica"],
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(matrix, classes=["Setosa", "Versicolor", "Virginica"], normalize=True,
title='Normalized confusion matrix')
plt.show()
###Output
Confusion matrix, without normalization
[[12 0 0]
[ 0 5 0]
[ 0 1 12]]
Normalized confusion matrix
[[1. 0. 0. ]
[0. 1. 0. ]
[0. 0.08 0.92]]
|
lesson_regression_classification_lecture_notebook_4.ipynb | ###Markdown
Lambda School Data Science, Unit 2: Predictive Modeling Regression & Classification, Module 4 Objectives- begin with baselines for **classification**- use classification metric: **accuracy**- do **train/validate/test** split- use scikit-learn for **logistic regression** Setup Get started on Kaggle1. [Sign up for a Kaggle account](https://www.kaggle.com/), if you don’t already have one. 2. Go to our Kaggle InClass competition website. You will be given the URL in Slack.3. Go to the Rules page. Accept the rules of the competition. If you're using [Anaconda](https://www.anaconda.com/distribution/) locallyInstall required Python packages, if you haven't already:- [category_encoders](http://contrib.scikit-learn.org/categorical-encoding/), version >= 2.0- [pandas-profiling](https://github.com/pandas-profiling/pandas-profiling), version >= 2.0```conda install -c conda-forge category_encoders plotly```
###Code
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# category_encoders, version >= 2.0
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade category_encoders pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Change into directory for module
os.chdir('module4')
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
###Output
_____no_output_____
###Markdown
Read dataThe files are in the GitHub repository, in the `data/tanzania` folder: - `train_features.csv` : the training set features - `train_labels.csv` : the training set labels - `test_features.csv` : the test set features - `sample_submission.csv` : a sample submission file in the correct format Alternative option to get data & make submissions: Kaggle API1. Go to our Kaggle InClass competition webpage. Accept the rules of the competition.2. [Follow these instructions](https://github.com/Kaggle/kaggle-api#api-credentials) to create a Kaggle “API Token” and download your `kaggle.json` file.3. Put `kaggle.json` in the correct location. - If you're using Anaconda, put the file in the directory specified in the [instructions](https://github.com/Kaggle/kaggle-api#api-credentials). - If you're using Google Colab, upload the file to your Google Drive, and run this cell: ``` from google.colab import drive drive.mount('/content/drive') %env KAGGLE_CONFIG_DIR=/content/drive/My Drive/ ```4. [Install the Kaggle API package](https://github.com/Kaggle/kaggle-api#installation).5. [Use Kaggle API to download competition files](https://github.com/Kaggle/kaggle-api#download-competition-files).6. [Use Kaggle API to submit to competition](https://github.com/Kaggle/kaggle-api#submit-to-a-competition).
###Code
import pandas as pd
train_features = pd.read_csv('../data/tanzania/train_features.csv')
train_labels = pd.read_csv('../data/tanzania/train_labels.csv')
test_features = pd.read_csv('../data/tanzania/test_features.csv')
sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv')
assert train_features.shape == (59400, 40)
assert train_labels.shape == (59400, 2)
assert test_features.shape == (14358, 40)
assert sample_submission.shape == (14358, 2)
import pandas_profiling
train_features.profile_report()
###Output
_____no_output_____
###Markdown
FeaturesYour goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:- `amount_tsh` : Total static head (amount water available to waterpoint)- `date_recorded` : The date the row was entered- `funder` : Who funded the well- `gps_height` : Altitude of the well- `installer` : Organization that installed the well- `longitude` : GPS coordinate- `latitude` : GPS coordinate- `wpt_name` : Name of the waterpoint if there is one- `num_private` : - `basin` : Geographic water basin- `subvillage` : Geographic location- `region` : Geographic location- `region_code` : Geographic location (coded)- `district_code` : Geographic location (coded)- `lga` : Geographic location- `ward` : Geographic location- `population` : Population around the well- `public_meeting` : True/False- `recorded_by` : Group entering this row of data- `scheme_management` : Who operates the waterpoint- `scheme_name` : Who operates the waterpoint- `permit` : If the waterpoint is permitted- `construction_year` : Year the waterpoint was constructed- `extraction_type` : The kind of extraction the waterpoint uses- `extraction_type_group` : The kind of extraction the waterpoint uses- `extraction_type_class` : The kind of extraction the waterpoint uses- `management` : How the waterpoint is managed- `management_group` : How the waterpoint is managed- `payment` : What the water costs- `payment_type` : What the water costs- `water_quality` : The quality of the water- `quality_group` : The quality of the water- `quantity` : The quantity of water- `quantity_group` : The quantity of water- `source` : The source of the water- `source_type` : The source of the water- `source_class` : The source of the water- `waterpoint_type` : The kind of waterpoint- `waterpoint_type_group` : The kind of waterpoint LabelsThere are three possible values:- `functional` : the waterpoint is operational and there are no repairs needed- `functional needs repair` : the waterpoint is operational, but needs repairs- `non functional` : the waterpoint is not operational Why doesn't Kaggle give you labels for the test set? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who are new to Kaggle, it is a platform that hosts machine learning competitions. Kaggle typically breaks the data into two sets you can download:> 1. a **training set**, which includes the _independent variables_, as well as the _dependent variable_ (what you are trying to predict).> 2. a **test set**, which just has the _independent variables_. You will make predictions for the test set, which you can submit to Kaggle and get back a score of how well you did.> This is the basic idea needed to get started with machine learning, but to do well, there is a bit more complexity to understand. You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.> The most important reason for this is that Kaggle has split the test data into two sets: for the public and private leaderboards. 
The score you see on the public leaderboard is just for a subset of your predictions (and you don’t know which subset!). How your predictions fare on the private leaderboard won’t be revealed until the end of the competition. The reason this is important is that you could end up overfitting to the public leaderboard and you wouldn’t realize it until the very end when you did poorly on the private leaderboard. Using a good validation set can prevent this. You can check if your validation set is any good by seeing if your model has similar scores on it to compared with on the Kaggle test set. ...> Understanding these distinctions is not just useful for Kaggle. In any predictive machine learning project, you want your model to be able to perform well on new data. Why care about model validation? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> An all-too-common scenario: a seemingly impressive machine learning model is a complete failure when implemented in production. The fallout includes leaders who are now skeptical of machine learning and reluctant to try it again. How can this happen?> One of the most likely culprits for this disconnect between results in development vs results in production is a poorly chosen validation set (or even worse, no validation set at all). Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions/8)> Good validation is _more important_ than good models. James, Witten, Hastie, Tibshirani, [An Introduction to Statistical Learning](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 2.2, Assessing Model Accuracy> In general, we do not really care how well the method works training on the training data. Rather, _we are interested in the accuracy of the predictions that we obtain when we apply our method to previously unseen test data._ Why is this what we care about? > Suppose that we are interested test data in developing an algorithm to predict a stock’s price based on previous stock returns. We can train the method using stock returns from the past 6 months. But we don’t really care how well our method predicts last week’s stock price. We instead care about how well it will predict tomorrow’s price or next month’s price. > On a similar note, suppose that we have clinical measurements (e.g. weight, blood pressure, height, age, family history of disease) for a number of patients, as well as information about whether each patient has diabetes. We can use these patients to train a statistical learning method to predict risk of diabetes based on clinical measurements. In practice, we want this method to accurately predict diabetes risk for _future patients_ based on their clinical measurements. We are not very interested in whether or not the method accurately predicts diabetes risk for patients used to train the model, since we already know which of those patients have diabetes. Why hold out an independent test set? Owen Zhang, [Winning Data Science Competitions](https://www.slideshare.net/OwenZhang2/tips-for-data-science-competitions)> There are many ways to overfit. Beware of "multiple comparison fallacy." There is a cost in "peeking at the answer."> Good validation is _more important_ than good models. Simple training/validation split is _not_ enough. When you looked at your validation result for the Nth time, you are training models on it.> If possible, have "holdout" dataset that you do not touch at all during model build process. 
This includes feature extraction, etc.> What if holdout result is bad? Be brave and scrap the project. Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection> If we are in a data-rich situation, the best approach is to randomly divide the dataset into three parts: a training set, a validation set, and a test set. The training set is used to fit the models; the validation set is used to estimate prediction error for model selection; the test set is used for assessment of the generalization error of the final chosen model. Ideally, the test set should be kept in a "vault," and be brought out only at the end of the data analysis. Suppose instead that we use the test-set repeatedly, choosing the model with the smallest test-set error. Then the test set error of the final chosen model will underestimate the true test error, sometimes substantially. Andreas Mueller and Sarah Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270)> The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy "leak" information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation - this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is. Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.htmlhypothesis-generation-vs.hypothesis-confirmation)> There is a pair of ideas that you must understand in order to do inference correctly:> 1. Each observation can either be used for exploration or confirmation, not both.> 2. You can use an observation as many times as you like for exploration, but you can only use it once for confirmation. As soon as you use an observation twice, you’ve switched from confirmation to exploration.> This is necessary because to confirm a hypothesis you must use data independent of the data that you used to generate the hypothesis. Otherwise you will be over optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading.> If you are serious about doing an confirmatory analysis, one approach is to split your data into three pieces before you begin the analysis. Why begin with baselines?[My mentor](https://www.linkedin.com/in/jason-sanchez-62093847/) [taught me](https://youtu.be/0GrciaGYzV0?t=40s):>***Your first goal should always, always, always be getting a generalized prediction as fast as possible.*** You shouldn't spend a lot of time trying to tune your model, trying to add features, trying to engineer features, until you've actually gotten one prediction, at least. > The reason why that's a really good thing is because then ***you'll set a benchmark*** for yourself, and you'll be able to directly see how much effort you put in translates to a better prediction. 
> What you'll find by working on many models: some effort you put in, actually has very little effect on how well your final model does at predicting new observations. Whereas some very easy changes actually have a lot of effect. And so you get better at allocating your time more effectively.My mentor's advice is echoed and elaborated in several sources:[Always start with a stupid model, no exceptions](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa)> Why start with a baseline? A baseline will take you less than 1/10th of the time, and could provide up to 90% of the results. A baseline puts a more complex model into context. Baselines are easy to deploy.[Measure Once, Cut Twice: Moving Towards Iteration in Data Science](https://blog.datarobot.com/measure-once-cut-twice-moving-towards-iteration-in-data-science)> The iterative approach in data science starts with emphasizing the importance of getting to a first model quickly, rather than starting with the variables and features. Once the first model is built, the work then steadily focuses on continual improvement.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> *Consider carefully what would be a reasonable baseline against which to compare model performance.* This is important for the data science team in order to understand whether they indeed are improving performance, and is equally important for demonstrating to stakeholders that mining the data has added value. What does baseline mean?Baseline is an overloaded term, as you can see in the links above. Baseline has multiple meanings: The score you'd get by guessing> A baseline for classification can be the most common class in the training dataset.> A baseline for regression can be the mean of the training labels. > A baseline for time-series regressions can be the value from the previous timestep. —[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488) Fast, first models that beat guessingWhat my mentor was talking about. Complete, tuned "simpler" modelCan be simpler mathematically and computationally. For example, Logistic Regression versus Deep Learning.Or can be simpler for the data scientist, with less work. For example, a model with less feature engineering versus a model with more feature engineering. Minimum performance that "matters"To go to production and get business value. Human-level performance Your goal may to be match, or nearly match, human performance, but with better speed, cost, or consistency.Or your goal may to be exceed human performance. Begin with baselines for classification Get majority class baseline[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488)> A baseline for classification can be the most common class in the training dataset.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> For classification tasks, one good baseline is the _majority classifier_, a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. 
There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison. For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%. Determine majority class
###Code
y_train = train_labels['status_group']
y_train.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
What if we guessed the majority class for every prediction?
###Code
majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train)
print(len(y_pred))
###Output
59400
###Markdown
Use classification metric: accuracy [_Classification metrics are different from regression metrics!_](https://scikit-learn.org/stable/modules/model_evaluation.html)- Don't use _regression_ metrics to evaluate _classification_ tasks.- Don't use _classification_ metrics to evaluate _regression_ tasks.[Accuracy](https://scikit-learn.org/stable/modules/model_evaluation.html#accuracy-score) is a common metric for classification. Accuracy is the ["proportion of correct classifications"](https://en.wikipedia.org/wiki/Confusion_matrix): the number of correct predictions divided by the total number of predictions. What is the baseline accuracy if we guessed the majority class for every prediction?
###Code
# Accuracy of majority class baseline =
# frequency of majority class
from sklearn.metrics import accuracy_score
accuracy_score(y_train, y_pred)
###Output
_____no_output_____
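###Markdown
As a sanity check, the same number can be computed directly by comparing the predictions to the labels; the small sketch below should give the majority-class frequency shown earlier.
###Code
# Fraction of predictions that match the true labels
(y_train == y_pred).mean()
###Output
_____no_output_____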
###Markdown
Do train/validate/test split Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle. Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html)> Since “a picture is worth a thousand words,” I want to conclude with a figure (shown below) that summarizes my personal recommendations ... Usually, we want to do **"Model selection (hyperparameter optimization) _and_ performance estimation."**Therefore, we use **"3-way holdout method (train/validation/test split)"** or we use **"cross-validation with independent test set."** We have two options for where we choose to split:- Time- RandomTo split on time, we can use pandas.To split randomly, we can use the [**`sklearn.model_selection.train_test_split`**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function.
###Code
from sklearn.model_selection import train_test_split
X_train = train_features
y_train = train_labels['status_group']
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, train_size=0.80, test_size=0.20,
stratify=y_train, random_state=42
)
X_train.shape, X_val.shape, y_train.shape, y_val.shape
# Stratified sampling gives you the same proportions of classes in train & test
y_train.value_counts(normalize=True)
y_val.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Use scikit-learn for logistic regression- [sklearn.linear_model.LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)- Wikipedia, [Logistic regression](https://en.wikipedia.org/wiki/Logistic_regression) Begin with baselines: fast, first models Drop non-numeric features
###Code
X_train_numeric = X_train.select_dtypes('number')
X_val_numeric = X_val.select_dtypes('number')
###Output
_____no_output_____
###Markdown
Drop nulls if necessary
###Code
# Not necessary here, the numeric features don't have nulls
X_train_numeric.isnull().sum()
###Output
_____no_output_____
###Markdown
Fit Logistic Regresson on train data
###Code
import sklearn; sklearn.__version__
# If you're on an older version of scikit-learn, you'll get warnings...
# What are all these Logistic Regression parameters?
# scikit-learn is changing the defaults for these parameters in future versions,
# so if you don't explicitly set values for `solver` and `multi_class`,
# then scikit-learn gives you a warning about it.
# Also, a scikit-learn developer accidentally set the default `max_iter` to 100,
# which is unreasonably low, so we need to increase it here, and that will be
# fixed in a future release of scikit-learn. Open-source software, amiright?
# model = LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=10000)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(max_iter=5000, n_jobs=-1)
model.fit(X_train_numeric, y_train)
###Output
_____no_output_____
###Markdown
Evaluate on validation data
###Code
# Didn't really beat the baseline yet, or at least not by much,
# but that's ok! We'll iterate
y_pred = model.predict(X_val_numeric)
accuracy_score(y_val, y_pred)
model.score(X_val_numeric, y_val)
###Output
_____no_output_____
###Markdown
What predictions does a Logistic Regression return?
###Code
y_pred
pd.Series(y_pred).value_counts(normalize=True)
###Output
_____no_output_____
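###Markdown
Under the hood, logistic regression estimates a probability for each class and `.predict()` returns the class with the highest probability. The cell below is an optional peek at those probabilities; the columns follow the order of `model.classes_`.
###Code
# Predicted class probabilities for the first few validation rows (each row sums to 1)
print(model.classes_)
model.predict_proba(X_val_numeric)[:5]
###Output
_____no_output_____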
###Markdown
Do one-hot encoding of categorical features Check "cardinality" of categorical features[Cardinality](https://simple.wikipedia.org/wiki/Cardinality) means the number of unique values that a feature has:> In mathematics, the cardinality of a set means the number of its elements. For example, the set A = {2, 4, 6} contains 3 elements, and therefore A has a cardinality of 3. One-hot encoding adds a dimension for each unique value of each categorical feature. So, it may not be a good choice for "high cardinality" categoricals that have dozens, hundreds, or thousands of unique values.
###Code
X_train.describe(exclude='number').T.sort_values(by='unique')
###Output
_____no_output_____
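###Markdown
Equivalently, `nunique()` computes the cardinality of each non-numeric column directly, which can make the comparison easier to read.
###Code
# Number of unique values (cardinality) per non-numeric feature, lowest first
X_train.select_dtypes(exclude='number').nunique().sort_values()
###Output
_____no_output_____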
###Markdown
Explore `quantity` feature
###Code
X_train['quantity'].value_counts(dropna=False)
# Recombine X_train and y_train, for exploratory data analysis
train = X_train.copy()
train['status_group'] = y_train
# Now do groupby...
train.groupby('quantity')['status_group'].value_counts(normalize=True)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
# This will give an error:
# sns.catplot(x='quantity', y='status_group', data=train, kind='bar')
train['functional'] = (train['status_group']=='functional').astype(int)
sns.catplot(x='quantity', y='functional', data=train, kind='bar');
###Output
_____no_output_____
###Markdown
Do one-hot encoding & Scale features, within a complete model fitting workflow. Why and how to scale features before fitting linear modelsScikit-Learn User Guide, [Preprocessing data](https://scikit-learn.org/stable/modules/preprocessing.html)> Standardization of datasets is a common requirement for many machine learning estimators implemented in scikit-learn; they might behave badly if the individual features do not more or less look like standard normally distributed data: Gaussian with zero mean and unit variance.> The `preprocessing` module further provides a utility class `StandardScaler` that implements the `Transformer` API to compute the mean and standard deviation on a training set. The scaler instance can then be used on new data to transform it the same way it did on the training set. How to use encoders and scalers in scikit-learn- Use the **`fit_transform`** method on the **train** set- Use the **`transform`** method on the **validation** set
###Code
import category_encoders as ce
from sklearn.preprocessing import StandardScaler
categorical_features = ['quantity']
numeric_features = X_train.select_dtypes('number').columns.drop('id').tolist()
features = categorical_features + numeric_features
X_train_subset = X_train[features]
X_val_subset = X_val[features]
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train_subset)
X_val_encoded = encoder.transform(X_val_subset)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_val_scaled = scaler.transform(X_val_encoded)
model = LogisticRegression()
model.fit(X_train_scaled, y_train)
print('Validation Accuracy', model.score(X_val_scaled, y_val))
###Output
Validation Accuracy 0.6585016835016835
###Markdown
Compare original features, encoded features, & scaled features
###Code
# Shape of original dataframe, and features for first observation
print(X_train.shape)
X_train[:1]
# Shape of subset dataframe, and features for first observation
print(X_train_subset.shape)
X_train_subset[:1]
# Shape of encoded dataframe, and features for first observation
print(X_train_encoded.shape)
X_train_encoded[:1]
# Shape of scaled array, and features for first observation
print(X_train_scaled.shape)
X_train_scaled[:1]
###Output
(47520, 14)
###Markdown
Get & plot coefficients
###Code
functional_coefficients = pd.Series(
model.coef_[0],
X_train_encoded.columns
)
plt.figure(figsize=(10, 10))
functional_coefficients.sort_values().plot.barh();
###Output
_____no_output_____
###Markdown
Submit to predictive modeling competition Write submission CSV fileThe format for the submission file is simply the row id and the predicted label (for an example, see `sample_submission.csv` on the data download page.For example, if you just predicted that all the waterpoints were functional you would have the following predictions:id,status_group50785,functional51630,functional17168,functional45559,functional49871,functionalYour code to generate a submission file may look like this: estimator is your scikit-learn estimator, which you've fit on X_train X_test is your pandas dataframe or numpy array, with the same number of rows, in the same order, as test_features.csv, and the same number of columns, in the same order, as X_trainy_pred = estimator.predict(X_test) Makes a dataframe with two columns, id and status_group, and writes to a csv file, without the indexsample_submission = pd.read_csv('sample_submission.csv')submission = sample_submission.copy()submission['status_group'] = y_predsubmission.to_csv('your-submission-filename.csv', index=False)
###Code
X_test_subset = test_features[features]
X_test_encoded = encoder.transform(X_test_subset)
X_test_scaled = scaler.transform(X_test_encoded)
assert all(X_test_encoded.columns == X_train_encoded.columns)
y_pred = model.predict(X_test_scaled)
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('submission-01.csv', index=False)
!head submission-01.csv
if in_colab:
from google.colab import files
# Just try again if you get this error:
# TypeError: Failed to fetch
# https://github.com/googlecolab/colabtools/issues/337
files.download('submission-01.csv')
###Output
_____no_output_____ |
python/example/spellchecking/ContextSpellChecking.ipynb | ###Markdown
###Code
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from sparknlp.annotator import *
from sparknlp.common import RegexRule
from sparknlp.base import DocumentAssembler, Finisher
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start()
data = ["Yesterday I lost my blue unikorn .", "Through a note of introduction from Bettina."]
documentAssembler = DocumentAssembler()\
.setInputCol("region")\
.setOutputCol("text")
tokenizer = Tokenizer()\
.setInputCols(["text"])\
.setOutputCol("token")
ocrspellModel = ContextSpellCheckerModel()\
.pretrained()\
.setInputCols(["token"])\
.setOutputCol("spell_checked")\
.setTradeoff(10.0)
finisher = Finisher()\
.setInputCols(["spell_checked"])\
.setValueSplitSymbol("@")
pipeline = Pipeline(stages=[
documentAssembler,
tokenizer,
ocrspellModel,
finisher
])
empty = spark.sparkContext.parallelize([['empty']]).toDF().toDF("region").cache()
pipeline_model = pipeline.fit(empty)
# Assemble the example sentences into a DataFrame with the "region" column expected
# by the DocumentAssembler, then run the fitted pipeline over it.
checked_data = pipeline_model.transform(spark.createDataFrame([[d] for d in data], ["region"]))
checked_data.select("finished_spell_checked").show(truncate=False)
from sparknlp.base import LightPipeline
lp = LightPipeline(pipeline_model)
lp.annotate(data)
###Output
_____no_output_____ |
101 - Elements of a Computation/014 - Code, Notation, and Diagram.ipynb | ###Markdown
Code, Notation, and DiagramWhile an introduction to computer programming is well-troddenground, there are two things that make this presentation unique. First,its pairing of a foundational discussion of code alongside computationalgeometry and visual design applications. Second, and perhapsmore importantly, the visual diagrammatic language which has beendeveloped for the express purpose of this text. Setting the StageThe following block of code gets things ready for the example to follow: importing a few libraries we'll need, and defining a 'container' to hold some geometry. Unlike working in Grasshopper, in which the stage has already been set for us using the Decod.es plugin, nearly every piece of code we run here on the Jupyter platform will require some sort of header code such as this.**Don't panic**. We don't expect you to understand what you see here, an all will become clear in time.> To continue, **run the following code block**. > To run a code block, select it and press ***SHIFT + RETURN*** at the same time (or press the "Run" button in the toolbar). After you do so, the Python shell will execute the contained code, and may report some information back. In this case, expect to see that the decod.es library has successfully loaded by displaying a URL.
###Code
from decodes.core import *
from decodes.io.jupyter_out import JupyterOut
out = JupyterOut.unit_square( grid_color = Color(0.95,0.95,0.95) )
import math
###Output
_____no_output_____
###Markdown
A Sample ScriptWhile we have not yet discussed all the concepts we need to completelyunderstand this script, we can infer quite a bit from investigatingthe relationship between it and the **control flow diagram**, **objectmodel diagram**, and **geometry diagram** found below.> **Run the following code block**, in the same way you did above. Note that if you failed to run the above code, or have restarted the kernel since doing so, that you may recieve an error message.
###Code
count = 100
scale = 0.15
pts=[]
for n in range(count):
theta=((math.pi*2)/(count-1))*n
pt = Point(theta*scale,math.sin(theta)*scale)
pts.append(pt)
pl = PLine(pts)
out.put(pl)
###Output
_____no_output_____
###Markdown
Textual OutputThe first place we'll look for feedback on what our scripts are doing is **the console**.The console is a graphic device which is our most directwindow on the inner workings of the shell. If the scripts we authorare one side of a conversation – a way of explaining to the shell whatwe would like it to do – then the console is one way that we cansee what the computer has to say in response.> **Run the following code block**, in the same way you did above.
###Code
print(pl)
###Output
pline[10v]
###Markdown
Control Flow DiagramsThe control flow diagram is used to describe the order of executionof the various commands and definitions found in any piece of code.Although code is generally executed by the shell starting at the top ofthe script we provide, and concludes when it reaches the end, thereexist a range of structures which can alter this sequence. The connectedlines, indentation, arrows, and flowchart-style decision pointgraphics found in this diagram provide a visual representation of this“flow” of execution and the structures we may employ to control it. A simplified version of the control flow diagram may be found next to every piece of sample code in the text. Geometric OutputNaturally, since decod.es is a geometry library, one of our primary means for understanding the working of our code will be to examine the results. While working in a CAD context, such as Grasshopper, will be invaluable in this regard, we have a limited means for viewing the geometric results of our scripts here in Jupyter. For the time being, the results we view here are limited to 2d plots.>**Execute the following code block**, in the same way you did above. When you do, you should see the results of this code plotted as a graphic.
###Code
out.draw()
out.clear()
###Output
_____no_output_____ |
header_footer/biosignalsnotebooks_environment/categories/Load/.ipynb_checkpoints/test_3-checkpoint.ipynb | ###Markdown
<div id="image_img" class="header_image_1"> <!-- Available classes for "image_td" element: - header_image_color_1 (For Notebooks of "Open" Area); - header_image_color_2 (For Notebooks of "Acquire" Area); - header_image_color_3 (For Notebooks of "Visualise" Area); - header_image_color_4 (For Notebooks of "Process" Area); - header_image_color_5 (For Notebooks of "Detect" Area); - header_image_color_6 (For Notebooks of "Extract" Area); - header_image_color_7 (For Notebooks of "Decide" Area); - header_image_color_8 (For Notebooks of "Explain" Area); Available classes for "image_img" element: - header_image_1 (For Notebooks of "Open" Area); - header_image_2 (For Notebooks of "Acquire" Area); - header_image_3 (For Notebooks of "Visualise" Area); - header_image_4 (For Notebooks of "Process" Area); - header_image_5 (For Notebooks of "Detect" Area); - header_image_6 (For Notebooks of "Extract" Area); - header_image_7 (For Notebooks of "Decide" Area); - header_image_8 (For Notebooks of "Explain" Area);--> Test 3 :)
###Code
from opensignalstools.__notebook_support__ import css_style_apply
css_style_apply()
###Output
_____no_output_____ |
webscraping_basics.ipynb | ###Markdown
Basic web scraping with Beautiful Soup. This notebook is meant to show the basic principles of web scraping using a very simple example page located at: https://novainstitute.ca/examples/examplePage.html Before we start, please open the [example page](https://novainstitute.ca/examples/examplePage.html ) in your web browser (we recommend [chrome](https://www.google.com/chrome/)).*This notebook and its example materials were developed by [Nova Institute](https://novainstitute.ca) and are released under the [MIT license](https://github.com/NovaMaja/webscraping/blob/master/LICENSE). * Imports. requests: We will use the **requests** library to get the raw HTML from a webpage. **requests** makes all HTTP requests simple, and you can use it with GET, POST, PUT, DELETE, HEAD, OPTIONS, and there are a lot of useful functions included in the library. See http://docs.python-requests.org/ for more info on the **requests** library. BeautifulSoup: **Beautiful Soup** is a library for parsing web pages. It makes a parsing tree of a webpage based on the HTML structure. With a parsing tree it is easy to navigate through the contents of the webpage and get the information we are looking for. The **Beautiful Soup** [Documentation](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) has a great quickstart section to get you started.
###Code
import requests
from bs4 import BeautifulSoup
###Output
_____no_output_____
###Markdown
Get and parse the web page. The first thing we will do is open the page we want to scrape in our browser (we recommend Chrome). In a real project this might be a search results page, for example a search for Data Scientist jobs in Toronto, ON on indeed.com; in this demo we use the simple example page above. Once we have a page we are happy with, we copy the URL from the browser and paste it as an argument to the requests.get() function. After we have loaded the webpage into our page variable, we pass it on to BeautifulSoup to parse it into a parsing tree for us, using html.parser.
###Code
page = requests.get('https://novainstitute.ca/examples/examplePage.html')
soup = BeautifulSoup(page.text, 'html.parser')
###Output
_____no_output_____
###Markdown
soup has a function called prettify() that makes the parsing tree more human readable. We will use it to print out the information we gathered.
###Code
print(soup.prettify())
###Output
_____no_output_____
###Markdown
We can extract elements based on their tags using our parsing tree. For example, this is how to display only the title of the web-page:
###Code
title = soup.title.contents[0]
title = title.strip()
print(title)
###Output
_____no_output_____
###Markdown
If we want just the text of the first paragraph in the body, we can get that too:
###Code
bodytext = soup.p.contents[0]
bodytext = bodytext.strip()
print(bodytext)
###Output
_____no_output_____
###Markdown
You can even find all the links on the page:
###Code
websites = soup.find_all('a')
for website in websites:
website = website["href"]
print(website)
###Output
_____no_output_____
###Markdown
Downloading images. We can get the URL for an image much the same way as we get the URL for a link, but this time we want to download the image itself. For this we will write a small function using requests.get.
###Code
def download_image(url, filename):
img = requests.get(url)
with open(filename, "wb") as code:
code.write(img.content)
###Output
_____no_output_____
###Markdown
Now we can get the urls for the images, and use the function we already defined to download those images.
###Code
images = soup.find_all('img')
for i in range(len(images)):
image_url = images[i]["src"]
print(image_url)
download_image(image_url,"img{}.png".format(i))
###Output
_____no_output_____ |
ProgressUpdate.ipynb | ###Markdown
GEOS 505 Research Computing in Earth System Science Final Project: Progress Update*** D. Kody Johnson November 29, 2018--- Introduction & Background. In the previous submittal (Dataset Overview), two scripts were developed. The first script comprised a set of ABAQUS commands to conduct a parametric study of the strain response of an AREMA 132RE rail section to applied vertical and lateral loads. The vertical and lateral loads were varied for each simulation in an effort to find the strain (based on nodal deformations) of eight (8) ‘virtual’ strain gauges. The corresponding nodal deformations were then output to an output file (.dat file) for post-processing to calculate the respective shear strains. In the second script, the differential shear strain due to the applied loading for a single simulation was determined. The shear strain was then related to the applied loading through material and geometric properties of the rail section according to the differential shear theory presented in the Problem Statement submittal. This step was a validation that the nodal deformations were properly used to correctly calculate the principal strains and differential shear. Updated Workflow. In this submittal, the script developed for a single simulation was expanded to iterate through all simulation output files to calculate the eight (8) corresponding principal strain values. These values represent the features or predictors of a training data set used as the input to a machine learning algorithm. Each sample of this training dataset corresponds to one of the simulations in which both horizontal and vertical load were varied from zero (0) load to 250 kN. The load combinations of vertical and lateral load used in each simulation are the response variables in the training dataset. The training data will be used in the next submittal to determine two mapping functions: one for the vertical load and one for the lateral load. The following code gathers the nodal deformations corresponding to each strain gauge from the simulation output data files, validates that the principal strains are properly calculated, and constructs an array of vertical load, lateral load, and the eight strain values. Note: the code has been commented extensively to describe the various steps carried out in the script.
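For reference, the vertical-load check near the end of the script applies the differential shear relation, written here from the strain combinations and constants defined in the code: $$P = \frac{E\,I\,t}{2\,Q\,(1+\nu)}\,\Delta\Gamma, \qquad \Delta\Gamma = \left(\frac{\epsilon_b+\epsilon_{b'}}{2}-\frac{\epsilon_a+\epsilon_{a'}}{2}\right) + \left(\frac{\epsilon_c+\epsilon_{c'}}{2}-\frac{\epsilon_d+\epsilon_{d'}}{2}\right)$$ where the $\epsilon$ terms are the principal strains from the gauge pairs on the two sides of the rail web, and $E$, $I$, $t$, $Q$, and $\nu$ are the constants assigned in the script (interpreted here, as an assumption based on standard beam notation, as the elastic modulus, second moment of area, web thickness, first moment of area, and Poisson's ratio).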
###Code
#This script takes coordinate and displacement data for nodes representing
# virtual strain gauges on a small section of AREMA 132RE rail and calculates
# the infinitesimal strain that would be measured by these strain gauges due
# to applied vertical and lateral loads.
#import necessary libraries
import os #
import numpy as np
import re
#import sklearn.gaussian_process as GP #These libraries will be used in the
#import sklearn.linear_model as LM # next submittal
# Import original coordinates to an ndarray. All 'gauge' nodes are within
# first 50 nodes in the file (as numbered by ABAQUS). Not all 50 are used, and
# they must be reorganized into a consistent pattern for calculation. The
# convention used is as follows: starting at location A on the right side,
# the first node is the upper right node of the gauge element and the second
# node is the lower left (+45 degrees). The third node is the lower right and
# the fourth node is the upper left (-45 degrees). This pattern is repeated
# for location B on the right side, then location A on the left side and
# finally location B on the left side (16 nodes in total).
with open('orig_coords.txt') as f:
lines = f.readlines()
OC = np.zeros([len(lines),4],dtype=float) #Original Coordinate array
for i in range(0,len(lines),1):
OC[i] = np.fromstring(lines[i],dtype=float,sep = ",")
OC = np.vstack(OC)
# Reorganize the nodes into pattern described above
OP = np.zeros([16,3],dtype=float) #Original Point array
for i in range(0,50,1):
if OC[i,0] == 49.:
OP[0,:] = OC[i,1:4]
elif OC[i,0] == 15.:
OP[1,:] = OC[i,1:4]
elif OC[i,0] == 8.:
OP[2,:] = OC[i,1:4]
elif OC[i,0] == 45.:
OP[3,:] = OC[i,1:4]
elif OC[i,0] == 37.:
OP[4,:] = OC[i,1:4]
elif OC[i,0] == 20.:
OP[5,:] = OC[i,1:4]
elif OC[i,0] == 16.:
OP[6,:] = OC[i,1:4]
elif OC[i,0] == 40.:
OP[7,:] = OC[i,1:4]
elif OC[i,0] == 50.:
OP[8,:] = OC[i,1:4]
elif OC[i,0] == 14.:
OP[9,:] = OC[i,1:4]
elif OC[i,0] == 5.:
OP[10,:] = OC[i,1:4]
elif OC[i,0] == 46.:
OP[11,:] = OC[i,1:4]
elif OC[i,0] == 38.:
OP[12,:] = OC[i,1:4]
elif OC[i,0] == 19.:
OP[13,:] = OC[i,1:4]
elif OC[i,0] == 13.:
OP[14,:] = OC[i,1:4]
elif OC[i,0] == 39.:
OP[15,:] = OC[i,1:4]
#Create a list of .dat file names to be used with open()
file_list = [] #Initialize file list.
folder = [] #Initialize folder list.
for dirName, subdirList, fileList in os.walk('.'): #Use OS.walk to walk each folder and directory and create a path name
for fname in fileList: #to each file.
if fname.endswith(".dat"):
            path = dirName + '\\' + fname #Build Windows-readable path name.
file_list = file_list + [path]
# Read and organize the file_list array in the same order the ABAQUS parametric
# study used (defined in the 'design_definitions.txt' file) OS walk opens the
# files in alphanumeric order and so the design number (i.e. c1, c2,... c36)
# in the file name is used to relate the order of the file_list list to the
# order defined in the 'design_definitions.txt' file.
des = np.zeros([len(file_list),1],dtype=int) #design array (design number)
pat1 = re.compile(r"_c")
for i in range (0,len(file_list),1):
if pat1.search(file_list[i]) != None: #Only open .dat files with _c in
pat2 = re.compile(r'\d+') # the name. This step is not
desn = pat2.findall(file_list[i]) # necessary as all other .dat
des[i,0] = int(desn[0]) # files have been removed
# The des array is a list of design numbers (e.g. c1, c2 etc.) read from the
# file_list array. The next section uses this array to reorganize the
# file_list
des = np.vstack(des[:,0].argsort())
count = -1
flist = np.vstack([None]*len(file_list)) #Ordered file_list array
for ind in des:
count = count + 1
flist[count] = file_list[ind[0]]
flist = flist.flatten()
flist = flist.tolist()
# Using the flist array, open each file and read the displacement data. The
# .dat files have 3/4 of a million lines of code and the location of the
# nodal displacement data may not be at the same location. Therefore, a
# regular expression is used to search for a line of text preceding the
# section in the .dat files that contains the displacement data. Once read,
# the data is reorganized in the same order as the OC (original coordinate)
# array.
U = np.zeros([16,3,len(flist)],dtype=float) # Ordered Displacements (U)
CD = np.zeros([16,4],dtype=float) # Unordered Coordinate Displacements (CD)
for i in range(0,len(flist),1):
with open(flist[i]) as f:
lines = f.readlines()
pat = re.compile(r"\bINCREMENT 6 SUMMARY\b")
for j in range(0,len(lines),1):
if pat.search(lines[j]) != None:
disp = lines[j+19:j+35] #Temporary displacement array (disp)
for k in range(0,len(disp),1):
CD[k] = np.fromstring(disp[k],dtype=float,sep = " ")
CD = np.vstack(CD)
for p in range(0,16,1):
if CD[p,0] == 49.:
U[0,:,i] = CD[p,1:4]
elif CD[p,0] == 15.:
U[1,:,i] = CD[p,1:4]
elif CD[p,0] == 8.:
U[2,:,i] = CD[p,1:4]
elif CD[p,0] == 45.:
U[3,:,i] = CD[p,1:4]
elif CD[p,0] == 37.:
U[4,:,i] = CD[p,1:4]
elif CD[p,0] == 20.:
U[5,:,i] = CD[p,1:4]
elif CD[p,0] == 16.:
U[6,:,i] = CD[p,1:4]
elif CD[p,0] == 40.:
U[7,:,i] = CD[p,1:4]
elif CD[p,0] == 50.:
U[8,:,i] = CD[p,1:4]
elif CD[p,0] == 14.:
U[9,:,i] = CD[p,1:4]
elif CD[p,0] == 5.:
U[10,:,i] = CD[p,1:4]
elif CD[p,0] == 46.:
U[11,:,i] = CD[p,1:4]
elif CD[p,0] == 38.:
U[12,:,i] = CD[p,1:4]
elif CD[p,0] == 19.:
U[13,:,i] = CD[p,1:4]
elif CD[p,0] == 13.:
U[14,:,i] = CD[p,1:4]
elif CD[p,0] == 39.:
U[15,:,i] = CD[p,1:4]
# Calculate the infinitesimal strain of each 'virtual' strain gauge (as
# previously defined). This is done by first creating a Deformed Point (DP)
# in which the displacement is added to the original point. Using this
# set of points and the OC set of points, two sets of lines are created:
# Original line (OL) and Deformed Line (DL). These lines are then used to
# calculate infinitesimal strain according to the strain definition.
ST = np.zeros([8,len(flist)],dtype=float) # Strain array
P = np.zeros([len(flist),1],dtype=float) # Estimated vertical load (to be
for i in range(0,len(flist),1): # described later)
DP = np.zeros([16,3],dtype=float) #Deformed point
DP = OP + U[:,:,i]
OL = np.zeros([8,1],dtype=float) #Original line
DL = np.zeros([8,1],dtype=float) #Deformed line
for j in range(0,8,1):
OL[j,0] = np.linalg.norm(OP[2*j+1,:]-OP[2*j,:])
DL[j,0] = np.linalg.norm(DP[2*j+1,:]-DP[2*j,:])
    strain = (DL - OL)/OL #Definition of infinitesimal strain.
ST[:,i] = strain.reshape(8,)
# As a check to ensure that all strains were calculated correctly, the vertical
# load P, estimated using the differential shear approach (described else-
# where), is calculated and compared to the applied loads. Note: the first 6
# elements of P correspond to cases in which the lateral load is zero; it is
# only for these cases that the calculation will provide accurate results as
# crosstalk from the lateral load causes an error in the differential shear
# calculation.
# Principal strains on the right side (ea, eb, ec, ed; see Figure 1)
ea = ST[0]
eb = ST[1]
ec = ST[2]
ed = ST[3]
eap = ST[4]
ebp = ST[5]
ecp = ST[6]
edp = ST[7]
e1 = (eb + ebp)/2 - (ea + eap)/2
e2 = (ec + ecp)/2 - (ed + edp)/2
delG = e1 + e2
E = 2.07E+11
I = 3.661E-05
t = 0.0173795
Q = 0.000258773
v = 0.30
P[i,0] = E*I*t/(2*Q*(1+v))*delG[i]
vert=[]
lat=[]
pat = re.compile(r'\d+')
with open('design_definitions.txt') as f:
lines = f.readlines()
for i in range(0,len(lines),1):
line = lines[i]
loads = pat.findall(line)
vert = vert + [float(loads[1])]
lat = lat + [float(loads[3])]
loads = np.column_stack((vert,lat))
ST = np.transpose(ST)
#tr_data = np.hstack((loads,ST)) #convenient for copy/paste to other platforms.
X = ST
y = loads
print(P[0:6,:])
print('\n')
print(X)
print('\n')
print(y)
###Output
[[ 0. ]
[ 50322.60875946]
[100646.76858273]
[150972.02216487]
[201298.64769655]
[251626.12337242]]
[[ 0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00
0.00000000e+00 0.00000000e+00 0.00000000e+00 0.00000000e+00]
[-6.36416530e-05 6.48792191e-05 6.48553059e-05 -6.36542249e-05
-6.36929522e-05 6.48632837e-05 6.48671253e-05 -6.36823365e-05]
[-1.27171936e-04 1.29875771e-04 1.29822288e-04 -1.27197574e-04
-1.27275623e-04 1.29840040e-04 1.29848601e-04 -1.27256216e-04]
[-1.90591954e-04 1.94984635e-04 1.94902920e-04 -1.90631523e-04
-1.90746367e-04 1.94933616e-04 1.94941033e-04 -1.90719122e-04]
[-2.53899564e-04 2.60209539e-04 2.60095938e-04 -2.53957057e-04
-2.54105793e-04 2.60138625e-04 2.60148263e-04 -2.54073531e-04]
[-3.17094359e-04 3.25547955e-04 3.25401818e-04 -3.17171522e-04
-3.17352372e-04 3.25462152e-04 3.25467037e-04 -3.17316919e-04]
[ 3.16649020e-04 -1.03112516e-06 -1.03628151e-05 2.89240714e-04
-2.36472391e-04 3.83394374e-05 4.59658365e-05 -2.02734288e-04]
[ 2.53017860e-04 6.37836926e-05 5.44370440e-05 2.25571699e-04
-3.00166900e-04 1.03246090e-04 1.10894030e-04 -2.66444658e-04]
[ 1.89497785e-04 1.28714608e-04 1.19349587e-04 1.62012028e-04
-3.63751455e-04 1.68266578e-04 1.75936810e-04 -3.30045944e-04]
[ 1.26090563e-04 1.93758470e-04 1.84376174e-04 9.85621596e-05
-4.27225058e-04 2.33402338e-04 2.41091364e-04 -3.93539342e-04]
[ 6.27930287e-05 2.58919010e-04 2.49515143e-04 3.52234856e-05
-4.90585425e-04 2.98653140e-04 3.06360378e-04 -4.56920477e-04]
[-3.89919081e-07 3.24193695e-04 3.14766357e-04 -2.80048445e-05
-5.53836310e-04 3.64017878e-04 3.71739665e-04 -5.20191826e-04]
[ 7.13570155e-04 3.52749523e-05 1.49287082e-05 6.65108476e-04
-3.92540761e-04 1.14010100e-04 1.27578358e-04 -3.18712225e-04]
[ 6.49952829e-04 1.00022422e-04 7.96721815e-05 6.01430724e-04
-4.56233751e-04 1.78957494e-04 1.92566199e-04 -3.82446258e-04]
[ 5.86449971e-04 1.64886715e-04 1.44528137e-04 5.37861852e-04
-5.19814756e-04 2.44020042e-04 2.57667172e-04 -4.46070376e-04]
[ 5.23057927e-04 2.29862089e-04 2.09497916e-04 4.74402678e-04
-5.83284845e-04 3.09196006e-04 3.22882880e-04 -5.09583670e-04]
[ 4.59775721e-04 2.94955500e-04 2.74578715e-04 4.11054095e-04
-6.46641908e-04 3.74487802e-04 3.88209490e-04 -5.72988633e-04]
[ 3.96608333e-04 3.60162216e-04 3.39774260e-04 3.47813754e-04
-7.09888617e-04 4.39893046e-04 4.53647919e-04 -6.36283604e-04]
[ 1.19066278e-03 1.08915469e-04 7.58737143e-05 1.12751049e-03
-4.68171086e-04 2.27004917e-04 2.44829631e-04 -3.47903091e-04]
[ 1.12706822e-03 1.73593604e-04 1.40558023e-04 1.06382698e-03
-5.31854531e-04 2.91990937e-04 3.09874667e-04 -4.11654521e-04]
[ 1.06358507e-03 2.38386393e-04 2.05354931e-04 1.00025442e-03
-5.95427767e-04 3.57090709e-04 3.75032751e-04 -4.75295532e-04]
[ 1.00021373e-03 3.03293384e-04 2.70264256e-04 9.36789438e-04
-6.58889273e-04 4.22305261e-04 4.40303944e-04 -5.38827288e-04]
[ 9.36953841e-04 3.68317646e-04 3.35284854e-04 8.73437243e-04
-7.22238077e-04 4.87634322e-04 5.05686439e-04 -6.02249324e-04]
[ 8.73806624e-04 4.33454192e-04 4.00423657e-04 8.10194110e-04
-7.85476356e-04 5.53080743e-04 5.71182709e-04 -6.65562805e-04]
[ 1.74781784e-03 2.19880605e-04 1.72466176e-04 1.67632120e-03
-4.63339742e-04 3.77308816e-04 3.97705544e-04 -2.90299660e-04]
[ 1.68424729e-03 2.84487331e-04 2.37088004e-04 1.61263943e-03
-5.27011282e-04 4.42331043e-04 4.62806506e-04 -3.54062017e-04]
[ 1.62079306e-03 3.49208653e-04 3.01822480e-04 1.54906803e-03
-5.90571238e-04 5.07466868e-04 5.28017991e-04 -4.17716657e-04]
[ 1.55744664e-03 4.14043843e-04 3.66670321e-04 1.48560623e-03
-6.54018476e-04 5.72718571e-04 5.93343580e-04 -4.81258915e-04]
[ 1.49421207e-03 4.78994786e-04 4.31631640e-04 1.42225372e-03
-7.17355322e-04 6.38082573e-04 6.58781905e-04 -5.44691401e-04]
[ 1.43109089e-03 5.44059555e-04 4.96705901e-04 1.35901133e-03
-7.80585014e-04 7.03572301e-04 7.24333986e-04 -6.08007863e-04]
[ 2.38489833e-03 3.68160207e-04 3.04691174e-04 2.31139976e-03
-3.78051973e-04 5.64907898e-04 5.86192262e-04 -1.45918106e-04]
[ 2.32136261e-03 4.32692670e-04 3.69251295e-04 2.24772781e-03
-4.41703841e-04 6.29962448e-04 6.51344439e-04 -2.09684959e-04]
[ 2.25793615e-03 4.97336162e-04 4.33921197e-04 2.18416229e-03
-5.05245524e-04 6.95131612e-04 7.16609362e-04 -2.73344400e-04]
[ 2.19462131e-03 5.62100084e-04 4.98706231e-04 2.12070664e-03
-5.68675734e-04 7.60416919e-04 7.81987006e-04 -3.36894412e-04]
[ 2.13141806e-03 6.26973248e-04 5.63603101e-04 2.05736128e-03
-6.31995572e-04 8.25816903e-04 8.47478613e-04 -4.00331873e-04]
[ 2.06832800e-03 6.91964673e-04 6.28612460e-04 1.99412473e-03
-6.95204267e-04 8.91331485e-04 9.13080400e-04 -4.63665799e-04]]
[[ 0. 0.]
[ 50000. 0.]
[100000. 0.]
[150000. 0.]
[200000. 0.]
[250000. 0.]
[ 0. 50000.]
[ 50000. 50000.]
[100000. 50000.]
[150000. 50000.]
[200000. 50000.]
[250000. 50000.]
[ 0. 100000.]
[ 50000. 100000.]
[100000. 100000.]
[150000. 100000.]
[200000. 100000.]
[250000. 100000.]
[ 0. 150000.]
[ 50000. 150000.]
[100000. 150000.]
[150000. 150000.]
[200000. 150000.]
[250000. 150000.]
[ 0. 200000.]
[ 50000. 200000.]
[100000. 200000.]
[150000. 200000.]
[200000. 200000.]
[250000. 200000.]
[ 0. 250000.]
[ 50000. 250000.]
[100000. 250000.]
[150000. 250000.]
[200000. 250000.]
[250000. 250000.]]
|
.ipynb_checkpoints/Raw estacion meteorology data processing-checkpoint.ipynb | ###Markdown
ARDUAIR COLLECTED DATA PROCESSING Library imports
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import datetime as dt
import xlrd
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data Processing
###Code
# Read
data=pd.read_excel('DATA_CIUDADELA.xlsx')
# Filter columns
data=data[["Fecha","Hora","Temperatura [ºC]","Humedad Relativa [%]","Presión Atmosférica [mmHg]","Radiación Solar [W/m2]"]]
data.columns=['date','time','Temp','Hum','Pr','L']
data=data.drop(0)
data['datetime']=data.apply(lambda x: dt.datetime.combine(x['date'], x['time']), axis=1)
data=data[['datetime','Temp','Hum','Pr','L']].set_index('datetime')
#Save
data.to_csv('ARDUAIR_ACROPOLIS_PROCESADO.csv')
data['Pr'].plot()
data['Temp'].plot()
data['Hum'].plot()
data["L"].plot()
###Output
_____no_output_____ |
updated_receptive.ipynb | ###Markdown
The following article will visualize some mathematical models for brain cell activity. In some regions of the brain, neurons are excited or inhibited by neurons of a preceding input layer; together, these input neurons are called the receptive field of that neuron. Since the visual area uses receptive fields as feature detectors (such as edge and edge-orientation detection) for natural images, the application of different receptive field functions to images can be nicely examined. The ipython notebook file to play with the parameters can be found on GitHub.
###Code
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.image as mpimg
import scipy.signal as signal
import numpy as n
import math
###Output
_____no_output_____
###Markdown
We examine the effect on the following images. In the visual pathway the images can be seen as input from the retina to higher visual areas.
###Code
barImg=mpimg.imread('bar.png')
#extract grey values
barImg = barImg[:,:,3]
imgplot = plt.imshow(barImg, cmap=cm.Greys_r)
img=mpimg.imread('stinkbug.png')
#extract grey values
bugImg = img[:,:,0]
imgplot = plt.imshow(bugImg, cmap=cm.Greys_r)
###Output
_____no_output_____
###Markdown
Receptive field functions-------------------The two dimensional Gaussian function is used in image processing as a blurring filter.$$\phi(x,y) = \frac{1}{2\pi\sigma^2}\exp{\left(-\frac{x^2+ y^2}{2\sigma^2}\right)}$$
###Code
def gaussian2D(x, y, sigma):
    # Isotropic 2D Gaussian; the 1/(2*pi*sigma^2) normalization matches the formula above
    return (1.0/(2*math.pi*(sigma**2)))*math.exp(-(1.0/(2*(sigma**2)))*(x**2 + y**2))
###Output
_____no_output_____
###Markdown
Since scipy's convolve function does not accept functions, we sample the function on a discrete grid.
###Code
"""make matrix from function"""
def receptiveFieldMatrix(func):
h = 30
g = n.zeros((h,h))
for xi in range(0,h):
for yi in range(0,h):
x = xi-h/2
y = yi-h/2
g[xi, yi] = func(x,y);
return g
def plotFilter(fun):
g = receptiveFieldMatrix(fun)
plt.imshow(g, cmap=cm.Greys_r)
###Output
_____no_output_____
###Markdown
The Gaussian function is circularly symmetric, leading to excitation of a centered pixel from nearby pixels in convolution. In the context of the Fourier transform it is a low-pass filter, which cancels out higher frequencies in the frequency domain of the image and therefore blurs the image.
###Code
plotFilter(lambda x,y:gaussian2D(x,y,4))
###Output
_____no_output_____
###Markdown
Convolution is the process of applying the filter to the input, which is the image $I(x,y)$ denoting the grey value of the pixel at the specified position: $$\int \int I(x',y')\phi(x-x',y-y')dx'dy'$$ When applying the Gaussian filter, every neuron in the output layer is excited by nearby image neurons. The result of the convolution can then also be visualized as an image. Song Ho Ahn has a [nice example](http://www.songho.ca/dsp/convolution/convolution2d_example.html) of convolution of 2D arrays.
###Code
Img_barGaussian = signal.convolve(barImg,receptiveFieldMatrix(lambda x,y: gaussian2D(x,y,5)), mode='same')
imgplot = plt.imshow(Img_barGaussian, cmap=cm.Greys_r)
Img_bugGaussian = signal.convolve(bugImg,receptiveFieldMatrix(lambda x,y: gaussian2D(x,y,3)), mode='same')
imgplot = plt.imshow(Img_bugGaussian, cmap=cm.Greys_r)
###Output
_____no_output_____
###Markdown
Difference of Gaussians---------------------The mexican hat function is a difference of gaussians, which leads to an on-center, off-surround receptive field, found in retinal ganglion cells or LGN neurons. It can be seen as a basic edge detector.
###Code
def mexicanHat(x,y,sigma1,sigma2):
return gaussian2D(x,y,sigma1) - gaussian2D(x,y,sigma2)
plotFilter(lambda x,y: mexicanHat(x,y,3,4))
Img_barHat = signal.convolve(barImg,receptiveFieldMatrix(lambda x,y:mexicanHat(x,y,3,4)), mode='same')
imgplot = plt.imshow(Img_barHat, cmap=cm.Greys_r)
Img_bugHat = signal.convolve(bugImg,receptiveFieldMatrix(lambda x,y: mexicanHat(x,y,2,3)), mode='same')
imgplot = plt.imshow(Img_bugHat, cmap=cm.Greys_r)
###Output
_____no_output_____
###Markdown
Gabor functions---------------Gabor functions are used to detect edges with a specific orientation in images. Neurons which can be modeled using gabor functions are found throughout the visual cortex.Odd gabor:$$g_s(x,y):=sin(\omega_x x + \omega_y y)\exp{(-\frac{x^2+y^2}{2\sigma^2})}$$Even gabor:$$g_c(x,y):=cos(\omega_x x + \omega_y y)\exp{(-\frac{x^2+y^2}{2\sigma^2})}$$Orientation is given by the ratio $\omega_y/\omega_x$. $g_s$ is activated by step edges, while $g_c$ is activated by line edges.
###Code
def oddGabor2D(x,y,sigma,orientation):
return math.sin(x + orientation*y) * math.exp(-(x**2 + y**2)/(2*sigma))
def evenGabor2D(x,y, sigma, orientation):
return math.cos(x + orientation*y) * math.exp(-(x**2 + y**2)/(2*sigma))
plotFilter(lambda x,y: oddGabor2D(x,y,7,1))
Img_barOddGabor = signal.convolve(barImg,receptiveFieldMatrix(lambda x,y: oddGabor2D(x,y,5,1)), mode='same')
imgplot = plt.imshow(Img_barOddGabor, cmap=cm.Greys_r)
Img_bugOddGabor = signal.convolve(bugImg,receptiveFieldMatrix(lambda x,y: oddGabor2D(x,y,5,1)), mode='same')
###Output
_____no_output_____
###Markdown
In the following plot one can see clearly the edge orientations that excite the neuron.
###Code
imgplot = plt.imshow(Img_bugOddGabor, cmap=cm.Greys_r)
###Output
_____no_output_____
###Markdown
Using the on-center, off-surround receptive field image as input to the gabor we obtain different results.
###Code
Img_bugOddGaborEdge = signal.convolve(Img_bugHat,receptiveFieldMatrix(lambda x,y: oddGabor2D(x,y,5,1)), mode='same')
imgplot = plt.imshow(Img_bugOddGaborEdge, cmap=cm.Greys_r)
plotFilter(lambda x,y: evenGabor2D(x,y,7,1))
Img_barEvenGabor = signal.convolve(barImg,receptiveFieldMatrix(lambda x,y: evenGabor2D(x,y,5,1)), mode='same')
imgplot = plt.imshow(Img_barEvenGabor, cmap=cm.Greys_r)
Img_bugEvenGabor = signal.convolve(bugImg,receptiveFieldMatrix(lambda x,y: evenGabor2D(x,y,5,1)), mode='same')
imgplot = plt.imshow(Img_bugEvenGabor, cmap=cm.Greys_r)
###Output
_____no_output_____
###Markdown
Quadrature Pairs------------------A complex cell might react equally well to step edges and lines of either polarity. This is modeled by summing the squared responses of the odd and even Gabor filters, as written out below.
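In symbols, the response energy combines the two filters as $$E(x,y) = g_s(x,y)^2 + g_c(x,y)^2,$$ which is exactly what the `edgeEnergy` function below computes.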
###Code
def edgeEnergy(x,y,sigma, orientation):
g1= oddGabor2D(x,y,sigma,orientation)
g2= evenGabor2D(x,y,sigma,orientation)
return(g1**2+g2**2)
plotFilter(lambda x,y:edgeEnergy(x,y,50,0))
Img_barEdgeEnergy = signal.convolve(barImg,receptiveFieldMatrix(lambda x,y: edgeEnergy(x,y,100,1)), mode='same')
imgplot = plt.imshow(Img_barEdgeEnergy, cmap=cm.Greys_r)
Img_bugEdgeEnergy = signal.convolve(bugImg,receptiveFieldMatrix(lambda x,y: edgeEnergy(x,y,10,1)), mode='same')
imgplot = plt.imshow(Img_bugEdgeEnergy, cmap=cm.Greys_r)
###Output
_____no_output_____ |
Miscellaneous-Topics/01-TensorBoard.ipynb | ###Markdown
TensorBoard
###Code
import tensorflow as tf
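# Grouping ops under name scopes makes them collapse into single, expandable nodes in TensorBoard's graph view.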
with tf.name_scope("OPERATION_A"):
a = tf.add(1,2,name="First_add")
a1 = tf.add(100,200,name='a_add')
a2 = tf.multiply(a,a1)
with tf.name_scope("OPERATION_B"):
b = tf.add(3,4,name='Second_add')
b1 = tf.add(300,400,name='b_add')
b2 = tf.multiply(b,b1)
c = tf.multiply(a2,b2,name='final_result')
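# Passing sess.graph to the FileWriter below saves the graph definition to ./output; view it with: tensorboard --logdir output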
with tf.Session() as sess:
writer = tf.summary.FileWriter("./output",sess.graph)
print(sess.run(c))
writer.close()
k = tf.placeholder(tf.float32)
# Make a normal distribution, with a shifting mean
mean_moving_normal = tf.random_normal(shape=[1000], mean=(5*k), stddev=1)
# Record that distribution into a histogram summary
tf.summary.histogram("normal/moving_mean", mean_moving_normal)
# Setup a session and summary writer
with tf.Session() as sess:
writer = tf.summary.FileWriter("./tmp/histogram_example")
summaries = tf.summary.merge_all()
# Setup a loop and write the summaries to disk
N = 400
for step in range(N):
k_val = step/float(N)
summ = sess.run(summaries, feed_dict={k: k_val})
writer.add_summary(summ, global_step=step)
writer.close()
###Output
_____no_output_____ |
appendix_compressed-air_chamber/Appendix compressed-air chambers.ipynb | ###Markdown
Imports
###Code
import math
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Global variables
###Code
# The types of additive manufacturing used in this test
models = ['Aluminium','Prusa','Formlabs','Ultimaker_006','Ultimaker_010','Ultimaker_015','Ultimaker_020']
###Output
_____no_output_____
###Markdown
Compressed air chamber test
###Code
# Define a dictionary to store all data from the air chamber tests
# For each ring all variables are stored in this nested dictionary
air_chambers = {}
# For each model type
for model in models:
# Load the data of the corresponding results in .CSV and drop unnecessary columns
model_df = pd.read_csv(f'./data/{model}.csv',delimiter=';',header=None,names=(['Time','A','Pressure']))
model_df.drop(columns=['A'],axis=1,inplace=True)
# Store the data in our larger dictionary
air_chambers[model]={}
# Filtering the time data (in s)
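    # Note: pandas treats the single-character pattern '.' literally here; passing regex=False would keep that behaviour and silence the FutureWarning shown below.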
air_chambers[model]['Time'] = model_df['Time'].str.replace('.','').astype('float64')/1000000
# For all models limit the time (in s) and pressure (in MPa) to the same amount
air_chambers[model]['Time'] = air_chambers[model]['Time'].head(1400)
air_chambers[model]['Pressure'] = model_df['Pressure'].head(1400)/10
# Define the pressure drop by reducing all pressures with the first measures pressure (in MPa)
air_chambers[model]['PressureDrop'] = air_chambers[model]['Pressure'] - air_chambers[model]['Pressure'][0]
###Output
<ipython-input-4-8534a57f8758>:14: FutureWarning: The default value of regex will change from True to False in a future version. In addition, single character regular expressions will*not* be treated as literal strings when regex=True.
air_chambers[model]['Time'] = model_df['Time'].str.replace('.','').astype('float64')/1000000
###Markdown
All models with their pressure drop (in MPa) over time (in s)
###Code
plt.plot(air_chambers['Aluminium']['Time'],air_chambers['Aluminium']['PressureDrop'],'black', label='Aluminium', linestyle=(0,(1,1,1)),linewidth=2)
plt.plot(air_chambers['Prusa']['Time'],air_chambers['Prusa']['PressureDrop'],'tab:orange', label='SLA Prusa', linestyle='dashdot')
plt.plot(air_chambers['Formlabs']['Time'],air_chambers['Formlabs']['PressureDrop'],'tab:green', label='SLA Formlabs', linestyle='dotted',linewidth=3)
plt.plot(air_chambers['Ultimaker_006']['Time'],air_chambers['Ultimaker_006']['PressureDrop'],'tab:red',label='Ultimaker 0.06 mm')
plt.plot(air_chambers['Ultimaker_010']['Time'],air_chambers['Ultimaker_010']['PressureDrop'],'tab:purple',label='Ultimaker 0.10 mm', linestyle='dashed')
plt.plot(air_chambers['Ultimaker_015']['Time'],air_chambers['Ultimaker_015']['PressureDrop'],'tab:blue',label='Ultimaker 0.15 mm', linestyle=(0,(10,2,2)))
plt.plot(air_chambers['Ultimaker_020']['Time'],air_chambers['Ultimaker_020']['PressureDrop'],'tab:brown',label='Ultimaker 0.20 mm', linestyle=(0,(5,2,2)))
# Set the labels and save the figure
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('Pressure drop (MPa)')
plt.savefig('./figures/result_airchamber.pdf',bbox_inches = 'tight')
plt.clf()
###Output
_____no_output_____
###Markdown
Models with their pressure drop (in MPa) over time (in s) (excluding Ultimaker 0.15 mm and 0.20 mm)
###Code
# To smoothen out the lines a sampling [::4] and a rolling window of 20 are applied
plt.plot(air_chambers['Aluminium']['Time'][::4],air_chambers['Aluminium']['PressureDrop'].rolling(window=20).mean()[::4],'black', label='Aluminium', linestyle=(0,(1,1,1)),linewidth=2)
plt.plot(air_chambers['Prusa']['Time'][::4],air_chambers['Prusa']['PressureDrop'].rolling(window=20).mean()[::4],'tab:orange', label='SLA Prusa', linestyle='dashdot')
plt.plot(air_chambers['Formlabs']['Time'][::4],air_chambers['Formlabs']['PressureDrop'].rolling(window=20).mean()[::4],'tab:green', label='SLA Formlabs', linestyle='dotted',linewidth=3)
plt.plot(air_chambers['Ultimaker_006']['Time'][::4],air_chambers['Ultimaker_006']['PressureDrop'].rolling(window=20).mean()[::4],'tab:red',label='Ultimaker 0.06 mm')
plt.plot(air_chambers['Ultimaker_010']['Time'][::4],air_chambers['Ultimaker_010']['PressureDrop'].rolling(window=20).mean()[::4],'tab:purple',label='Ultimaker 0.10 mm', linestyle='dashed')
# Set the labels and save the figure
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('Pressure drop (MPa)')
plt.savefig('./figures/result_airchamber_part.pdf',bbox_inches = 'tight')
plt.clf()
###Output
_____no_output_____
###Markdown
Repeatability We performed two repeatability tests- The test was rerun without any changes in the connections (rerun)- The model was reconnected prior to taking the tests (reconnected) Rerun
###Code
# Load the data for the rerun repeatability test
test_rerun=pd.read_csv(r'data/Resultaten_opnieuwaanzetten.csv', delimiter=";", header=1, names=(['Time',"Test1","Test2","Test3",'Aluminium','G','SLA Prusa','SLA Formlabs','Ultimaker 0.10']))
# Apply a rolling window for each type of additive manufacturing and convert pressure data to MPa
for model in list(test_rerun.keys())[1:]:
test_rerun[model]=test_rerun[model].rolling(window=20).mean()/1000
# Format the time accordingly
tr = np.arange(0, len(test_rerun["Time"])/10, 0.1)
# Visualize the rerun repeatability test
plt.plot(tr,test_rerun['Aluminium'], color = 'tab:grey',alpha=0.25, linestyle=(0,(1,1,1)),linewidth=2)
plt.plot(tr,test_rerun['SLA Prusa'], color = 'tab:grey',alpha=0.25, linestyle='dashdot')
plt.plot(tr,test_rerun['SLA Formlabs'], color = 'tab:grey',alpha=0.25, linestyle='dotted',linewidth=3)
plt.plot(tr,test_rerun['Ultimaker 0.10'],color = 'tab:grey',alpha=0.25, linestyle='dashed')
plt.plot(tr,test_rerun['Test1'],'red', label='Test 1')
plt.plot(tr,test_rerun['Test2'], color ='firebrick',label='Test 2')
plt.plot(tr,test_rerun['Test3'], color = 'darkred', label='Test 3')
plt.xlabel('Time (s)')
plt.ylabel('Pressure drop (MPa)')
plt.legend(loc=3)
plt.savefig('./figures/app_airchamber_rerun.pdf',bbox_inches = 'tight')
plt.clf()
###Output
_____no_output_____
###Markdown
Reconnected
###Code
# Load the data for the reconnected repeatability test
test_reconnected=pd.read_csv(r'data/Resultaten_In_en_uit_elkaar_deel.csv', delimiter=";", header=1, names=(['Time',"Test1","Test2","Test3",'Aluminium','G','SLA Prusa','SLA Formlabs','Ultimaker 0.10']))
# Apply a rolling window for each type of additive manufacturing and convert pressure data to MPa
for model in list(test_reconnected.keys())[1:]:
test_reconnected[model]=test_reconnected[model].rolling(window=20).mean()/1000
# Format the time accordingly
tr = np.arange(0, len(test_reconnected["Time"])/10, 0.1)
# Visualize the reconnected repeatability test
plt.plot(tr,test_reconnected['Aluminium'], color = 'tab:grey',alpha=0.25, linestyle=(0,(1,1,1)),linewidth=2)
plt.plot(tr,test_reconnected['SLA Prusa'], color = 'tab:grey',alpha=0.25, linestyle='dashdot')
plt.plot(tr,test_reconnected['SLA Formlabs'], color = 'tab:grey',alpha=0.25, linestyle='dotted',linewidth=3)
plt.plot(tr,test_reconnected['Ultimaker 0.10'],color = 'tab:grey',alpha=0.25, linestyle='dashed')
plt.plot(tr,test_reconnected['Test1'], color = 'skyblue', label='Test 1')
plt.plot(tr,test_reconnected['Test2'], color ='cornflowerblue',label='Test 2')
plt.plot(tr,test_reconnected['Test3'], color = 'steelblue', label='Test 3')
plt.xlabel('Time (s)')
plt.ylabel('Pressure drop (MPa)')
plt.legend(loc=3)
plt.savefig('./figures/app_airchamber_reconnected.pdf',bbox_inches = 'tight')
plt.clf()
###Output
_____no_output_____ |
data_structure_and_algorithm_py3_2.ipynb | ###Markdown
Continuing from the previous Python 2 mistake about the \* error: below it is retested with Python 3.
###Code
p = ['a',1,3,'c',('a1','a2')]
a = []
for i in range(len(p)):
a.append(p[i])
print('a[%s]:%s' % (i,a[i]))
a,b,*c = p
c
###Output
_____no_output_____
###Markdown
The above shows the multiple-value unpacking revised to work in Python 3. The extended iterable unpacking syntax is designed specifically for unpacking iterables with an unknown or arbitrary number of elements. Star expressions are especially useful when iterating over a sequence of variable-length tuples.
###Code
record = [
('foo', 1, 2),
('bar', 'hello'),
('foo', 3, 4),
]
type(record)
def do_foo(x,y):
print('foo',x,y)
def do_bar(s):
print('bar',s)
for tag,*arg in record:
'''
if tag == 'foo':
do_foo(*arg)
'''
if tag == 'bar':
do_bar(*arg)
###Output
bar hello
###Markdown
The star unpacking syntax is also very useful for string operations, such as splitting a string.
###Code
line = 'nobody:*:-2:-2:Unprivileged User:/var/empty:/usr/bin/false'
name,*fields,sh = line.split(':')
name
fields
sh
###Output
_____no_output_____
###Markdown
Use `_` or `ign` to discard the unwanted unpacked elements.
###Code
jiji = ['ac',12.34,(1,2,3,4)]
_,a,(*ign,need) = jiji
need
ign
###Output
_____no_output_____
###Markdown
Split a list into a front part and a back part.
###Code
riri = [1,5,6,7,8,9,10]
head,*tail = riri
head
tail
###Output
_____no_output_____
###Markdown
Use a star expression in a recursive algorithm.
###Code
def ririsum(riri):
head,*tail = riri
return head + ririsum(tail) if tail else head
ririsum(riri)
sum(riri)
###Output
_____no_output_____
###Markdown
Keeping the last N elements: when iterating or performing other operations, how do we keep a history of only the last few elements?
###Code
from collections import deque
def search(lines,pattern,history=2):
previous_lines = deque(maxlen=history)
for li in lines:
if pattern in li:
yield li,previous_lines
previous_lines.append(li)
# Example use on a file
if __name__ == '__main__':
with open(r'somefile.txt') as f:
for line, prevlines in search(f, 'python', 2):
for pline in prevlines:
print(pline, end=',')
#print(line, end='')
print('-' * 20)
d = deque()
d.append((1,2,3))
d
d.append(1)
d
d.appendleft(1)
d
d.pop()
d
d.popleft()
d
###Output
_____no_output_____ |
notebook/user_guide.ipynb | ###Markdown
Autopilot Blueprint and Trusted AI GuideThis notebook guides you through the process of using a no-code ML pipeline (blueprint) to deliver an optimized ML model. The workflow uses: * [Amazon SageMaker DataWrangler](https://aws.amazon.com/sagemaker/data-wrangler/) for data prep. It provides a graphical interface for creating data prep flows. * [Amazon SageMaker Autopilot](https://aws.amazon.com/sagemaker/autopilot/) for model creation. It provides AutoML on tabular data through a fully managed experience. * [Amazon SageMaker Clarify](https://aws.amazon.com/sagemaker/clarify/) and [Inference](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html) capabilities are used to facilitate Trusted AI. Learning Objectives * Learn how to use the AutoML blueprint. * Learn the facilities available to you to facilitate Trusted AI. * Deploy a real-time hosted model endpoint that is ready for integration into your analytics systems. The DataThis notebook uses the [UCI Bank Marketing Dataset](https://archive.ics.uci.edu/ml/datasets/Bank+Marketing) as an example. This tabular dataset contains the results of a marketing campaign to acquire customers for term deposits. Each record represents information about a prospect. The last column is the target variable. It contains information on whether the prospect subscribed to the term deposit. _Citation: [Moro et al., 2014] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014_ Compatibility* __Amazon SageMaker DataWrangler:__ designed and tested for * Flow definition schema: 1.0 * DataWrangler Processing Container: 1.3.0 * __Amazon SageMaker Clarify:__ designed and tested for * Bias processing container: 1.0 * XAI processing container: 1.0 * __Amazon SageMaker Studio:__ this notebook was designed and tested on * Python3 (Data Science) Kernel * Amazon SageMaker SDK version 2.x You can contact me regarding issues with the blueprint and notebook: Dylan Tong, AWS--- PrerequisitesFollow this [guide](https://github.com/aws-samples/automl-blueprint/blob/main/automl_blueprint_quickstart_guide.pdf) to prepare your environment: https://github.com/aws-samples/automl-blueprint/blob/main/automl_blueprint_quickstart_guide.pdf The guide provides step-by-step instructions for satisfying the prerequisites:1. This notebook requires access to Amazon S3, StepFunctions, and SageMaker. Attach the following managed policies to your Amazon SageMaker Studio execution role: * AmazonSageMakerFullAccess * AWSStepFunctionsFullAccess * AmazonS3FullAccess 2. Deploy the [Autopilot Blueprint CloudFormation template]( https://dtong-public-fileshares3-us-west-2.amazonaws.com/automl-blueprint/code/cf/automl-blueprint.yml) before you continue further. You can deploy the template from the console by providing the following TemplateURL: https://dtong-public-fileshares3-us-west-2.amazonaws.com/automl-blueprint/code/cf/automl-blueprint.yml. Use the __default settings__ or you will be required to modify the code in this notebook.  3. Optionally, [clone the Github repository](https://docs.aws.amazon.com/sagemaker/latest/dg/studio-tasks-git.html) over to your Amazon SageMaker Studio instance. --- The blueprint consists of the following assets: * __code/deploy:__ the CloudFormation template to deploy the solution. * __code/workflow/implementations/autopilot:__ contains the Lambda function code executed as part of a StepFunction workflow. 
This workflow implementation orchestrates SageMaker DataWrangler, Autopilot, Clarify and other core SageMaker processes. * __code/workflow/layers:__ contains Amazon SageMaker dependencies required by the workflow. These dependencies are deployed as a Lambda Layer. * __config/blueprint-config.json:__ this file contains parameters you can configure to change the behavior of the blueprint. The blueprint looks for this configuration file in the S3 bucket that you configured as your workspace. The file should be stored under the prefix /config. * __meta/uci-bank-marketing-dataset.flow:__ this is a sample flow file produced by DataWrangler. You can create and save your own flow file in your workspace under the /meta prefix. You will need to reconfigure blueprint-config.json to use your flow file. * __notebook:__ this directory contains the assets for this notebook.___ Install Dependencies. Run the following to install dependencies that aren't included with the Python 3 (Data Science) Kernel by default.
###Code
!conda install -y -c conda-forge tqdm shap
###Output
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
###Markdown
Initialize NotebookRun the following to assign global variables and import the required libraries.
###Code
import json
import os
from time import strftime, gmtime
import numpy as np
import pandas as pd
import boto3
from utils.bpconfig import BPConfig
from utils.trust import ModelInspector
from utils.wf import SFNMonitor, BPRunner
import utils.wf, utils.bpconfig, utils.prep
sfn = boto3.client("stepfunctions")
sm = boto3.client("sagemaker")
s3 = boto3.client("s3")
account_id = boto3.client('sts').get_caller_identity().get('Account')
region = boto3.session.Session().region_name
workspace = f"bp-workspace-{region}-{account_id}"
bpconfig = BPConfig.get_config(workspace, os.getcwd())
local_dir = os.getcwd()
bprunner = BPRunner(workspace, db_driver=s3, wf_driver=sfn)
inspector_params = {
"workspace": workspace,
"drivers":{
"db": s3,
"dsmlp": sm
},
"prefixes": {
"results_path": "automl-blueprint/eval/error",
"bias_path": "automl-blueprint/eval/bias",
"xai_path": "automl-blueprint/eval/xai"
},
"results-config":{
"gt_index": 0,
"pred_index": 1,
}
}
## If you modified the CloudFormation default parameters, you will need to update WF_NAME accordingly.
WF_NAME = "bp-autopilot-blueprint"
WORKFLOW_STAGES = 9
###Output
_____no_output_____
###Markdown
--- [Optional] View Pre-training Analysis Reports and Data FlowThis notebook includes a sample flow file for prepping the UCI bank marketing dataset. It includes examples of analysis reports created with Amazon SageMaker DataWrangler for target leakage and data bias analysis.Run the following cell to download the sample flow.
###Code
fname = utils.prep.copy_sample_flow_to_local(workspace, local_dir)
print(f"The flow definition file was copied to {fname}")
###Output
The flow definition file was copied to /root/automl-blueprint/notebook/uci-bank-marketing-dataset.flow
###Markdown
---Launch the DataWrangler GUI from the flow definition file and navigate to the analysis reports as shown below. ---The target leakage report should look like the following:  ---The data bias analysis report should look like the following: ___ Run the AutoML BlueprintBy now, the CloudFormation template should be in the COMPLETED status. The template will create a StepFunction workflow which will orchestrate the process of data preparation, model creation using AutoML and model evaluation.Run the following cell to find the ARN of the blueprint workflow that was created in your account.
###Code
wf_arn = bprunner.find_sfn_arn(WF_NAME)
print(f"Found the resource id of your workflow: {wf_arn}")
###Output
Found the resource id of your workflow: arn:aws:states:eu-west-1:803235869972:stateMachine:bp-autopilot-blueprint
###Markdown
---This blueprint is designed to provide a serverless and no-code experience. As a user, you provide:1. Raw data2. An Amazon SageMaker DataWrangler flow created through the GUI3. Blueprint configurationsRun the cell below to view the default configurations. You don't need to change any of the settings for this tutorial.
###Code
config = BPConfig.get_config(workspace, local_dir)
print(f"\n The blueprint configurations were downloaded to {local_dir}/blueprint-config.json \n")
config.print()
###Output
The blueprint configurations were downloaded to /root/automl-blueprint/notebook/blueprint-config.json
{
"automl-config": {
"engine": "sagemaker-autopilot",
"job_base_name": "automl-bp",
"max_candidates": "1",
"metric_name": "AUC",
"minimum_performance": 0.9,
"problem_type": "BinaryClassification",
"target_name": "target"
},
"bias-analysis-config": {
"bias-config": {
"facet_name": "age",
"facet_values_or_threshold": [
30
],
"group_name": "job",
"label_values_or_threshold": [
1
]
},
"engine": "sagemaker-clarify",
"instance_count": 1,
"instance_type": "ml.c5.xlarge",
"job_base_name": "bp-clarify-bias",
"output_prefix": "eval/bias",
"prediction-config": {
"label": null,
"label_headers": null,
"probability": 0,
"probability_threshold": 0.8
}
},
"data-config": {
"prepped_out_prefix": "automl-blueprint/data/prepped",
"raw_in_prefix": "automl-blueprint/sample-data/bank-marketing"
},
"dataprep-config": {
"data_version": 1,
"definition_file": "uci-bank-marketing-dataset.flow",
"engine": "sagemaker-datawrangler",
"instance_count": 1,
"instance_type": "ml.m5.4xlarge",
"output_node_id": "82971d23-e4f7-49cd-b4a9-f065d36e01ce.default"
},
"deployment-config": {
"engine": "sagemaker-hosting"
},
"error-analysis-config": {
"engine": "sagemaker-batch-transform",
"job_base_name": "bp-error-analysis",
"output_prefix": "eval/error",
"test_data_uri": null,
"transform-config": {
"assemble_with": "Line",
"input_filter": "$[:-2]",
"instance_count": 1,
"instance_type": "ml.m5.xlarge",
"join_source": "Input",
"output_filter": "$[-2,-1]",
"split_type": "Line",
"strategy": "SingleRecord"
}
},
"model-config": {
"engine": "sagemaker",
"inference_response_keys": [
"probability"
],
"instance_count": 1,
"instance_type": "ml.m5.xlarge",
"model_base_name": "automl-bp-model"
},
"pipeline-config": {
"engine": "aws-stepfunctions"
},
"security-config": {},
"workspace-config": {
"s3_prefix": "automl-blueprint"
},
"xai-config": {
"engine": "sagemaker-clarify",
"instance_count": 1,
"instance_type": "ml.c5.xlarge",
"job_base_name": "bp-clarify-shap",
"output_prefix": "eval/xai",
"shap-config": {
"agg_method": "mean_abs",
"num_samples": 1
}
}
}
###Markdown
___Run the following cell to execute the blueprint workflow.
###Code
execution_arn = bprunner.run_blueprint(WF_NAME, WORKFLOW_STAGES, wait=False)
###Output
_____no_output_____
###Markdown
---You have the option to monitor the workflow progress from the StepFunctions console, or from this notebook. Run the next cell to obtain the link to the StepFunction workflow console.
###Code
from IPython.display import Markdown as md
sfn_console = f"https://{region}.console.aws.amazon.com/states/home?region={region}#/executions/details/{execution_arn}"
md(f"<img src='img/sfn-bp-workflow.png' width=50%, align='right'/> </br> \
You can monitor the progress of your workflow from the StepFunctions console: </br> </br>\
{sfn_console}</br></br> \
Alternatively, you can run the cell below to monitor the progress from your notebook.")
###Output
_____no_output_____
###Markdown
---Run the following cell if you would like to render a progress bar to monitor the progress of the workflow.
###Code
SFNMonitor().run(execution_arn, WORKFLOW_STAGES)
###Output
_____no_output_____
###Markdown
___ [Optional] Understand the AutoML ProcessThis notebook runs the blueprint with Amazon SageMaker Autopilot as the AutoML engine. Autopilot provides transparency into the AutoML process, so you can optionally inspect the automated data profiling and pipeline generation code that gets executed by Autopilot's distributed processing.The blueprint configurations are set to have Autopilot explore one candidate to minimize the time and cost of this tutorial. However, this isn't practical. In practice, you want Autopilot to explore many candidates, which results in Autopilot experimenting with multiple algorithms and feature processing steps to discover the pipeline that produces the highest performing model on your dataset.The following steps serve as examples, but won't provide interesting insights. If you would like to aim for a production-quality model, you can change the _max_candidates_ configuration to the default value of 250. ___Optionally, uncomment and run the following code. This will modify the blueprint configuration file and re-run the blueprint workflow. This will result in a higher quality model, and the remainder of the notebook will provide more practical model insights. Regardless, you can proceed through the remainder of the notebook and learn the process and available functionality.
###Code
#Note that you can modify the json configuration file directly, or you could build a GUI for the configuration file.
#MAX_CANDIDATES = 250
#MINIMUM_PERFORMANCE = 0.93
#automl_config = {
# "job_base_name": "automl-bp",
# "max_candidates": MAX_CANDIDATES,
# "target_name": "target",
# "problem_type": "BinaryClassification",
# "metric_name": "AUC",
# "minimum_performance": MINIMUM_PERFORMANCE
#}
#config.update_automl_config(automl_config)
#execution_arn = bprunner.run_blueprint(WF_NAME, WORKFLOW_STAGES, wait=True)
#print(f"StepFunction workflow has completed. The execution Id is: {execution_arn}.")
###Output
_____no_output_____
###Markdown
--- Clear Box AutoML InsightsYou can inspect the data profiling and candidate pipeline generation steps from notebooks that were generated as part of your Amazon SageMaker Autopilot job. The following animation shows how you can navigate to these notebooks. The candidate generation notebook will only be insightful if you run the blueprint with a sufficiently large max_candidate setting like what is configured in the cell above. --- Algorithm Performance AnalysisAutopilot generates a leaderboard of results and ranks the training runs (trials) by the objective metric that was configured. In our example, the configured metric was AUC.---Currently, Autopilot doesn't present the performance of competing algorithms. Some data scientists might desire this information to serve as a performance baseline that the winning model can be evaluated against. This information can be obtained, but it requires some manual inspection. The process is as follows. First, open your [Candidate Generation](https://docs.aws.amazon.com/sagemaker/latest/dg/autopilot-automate-model-development-notebook-output.html#candidate-generation-notebook) notebook as demonstrated previously. Inspect the candidate pipelines as follows and take note of the prefix, which identifies the pipelines:---Here's an example of a pipeline that leverages the XGBoost algorithm:---Here's an example of a pipeline that leverages the Linear Learner algorithm (linear and logistic regression):---Here's an example of a pipeline that leverages the Multilayer Perceptron (MLP) algorithm (Neural Networks): ---This notebook provides you with utility functions to obtain baselines. The metrics returned are the best results for each candidate pipeline. Autopilot runs many training jobs for each candidate pipeline for hyperparameter optimization.
###Code
inspector = ModelInspector.get_inspector(inspector_params)
# You will need to replace the list of candidate prefixes with ones specific to your
# Autopilot job as demonstrated above.
candidate_ids = ["dpp0", "dpp2", "dpp9"]
maximize_objective = True
job_name = bprunner.get_automl_job_name(execution_arn)
baselines = inspector.get_automl_job_baseline(job_name, candidate_ids, maximize_objective)
print(f"Baselines for AutoML Job: {job_name}:\n\n{json.dumps(baselines, indent=3)}")
###Output
Baselines for AutoML Job: automl-bp-2021-04-13-23-10-22:
{
"dpp0": {
"value": 0.9544699788093567,
"metric": "validation:auc"
},
"dpp2": {
"value": 0.9394024014472961,
"metric": "validation:roc_auc_score"
},
"dpp9": {
"value": 0.9437330365180969,
"metric": "validation:roc_auc"
}
}
###Markdown
---Alternatively, you can use the AWS CLI. Set the appropriate candidate prefix. Next, run this cell to obtain the best results for the specified candidate pipeline and the associated AWS CLI command used for this query.
###Code
import subprocess
# You will need to set this candidate prefix to one specific to your Autopilot job as demonstrated previously.
candidate_id = 'dpp7'
cmd, results = inspector.get_aws_cli_query_for_baselines(job_name, candidate_id)
print(f"AWS CLI Command:\n\n{cmd}\n")
print(f"Results for automl job: {job_name}\n")
print(json.dumps(results, indent=3))
###Output
AWS CLI Command:
aws sagemaker list-candidates-for-auto-ml-job --auto-ml-job-name automl-bp-2021-04-13-23-10-22 --query "max_by(Candidates[].{step_name:CandidateSteps[?CandidateStepType == 'AWS::SageMaker::TrainingJob'].CandidateStepName[?contains(@,'dpp7')==\`true\`],obj_value:FinalAutoMLJobObjectiveMetric.Value}[?not_null(step_name)], &obj_value)"
Results for automl job: automl-bp-2021-04-13-23-10-22
{
"step_name": [
"automl-bp--dpp7-1-784f089270d9452a9a4c257578c31169ac79253c178e4"
],
"obj_value": 0.9471700191497803
}
###Markdown
--- Evaluate your ModelAt this point, your workflow should have completed successfully. We can now inspect our model further.Run the following cell to configure our Model Inspector.
###Code
inspector_params = {
"workspace": workspace,
"drivers":{
"db": s3,
"dsmlp": sm
},
"prefixes": {
"results_path": "automl-blueprint/eval/error",
"bias_path": "automl-blueprint/eval/bias",
"xai_path": "automl-blueprint/eval/xai"
},
"results-config":{
"gt_index": 0,
"pred_index": 1,
}
}
inspector = ModelInspector.get_inspector(inspector_params)
###Output
_____no_output_____
###Markdown
---The ModelInspector provides utility functions to create visualizations from the outputs produce by the blueprint workflow. As part of the validation stage, the blueprint batches predictions. In practice, you should set the blueprint configuration to the location of your hold-out test dataset. The blueprint will use this data to generate predictions."error-analysis-config":{  ...  "test_data_uri": _"the s3 uri for your test dataset"_  ...}For convenience, _test_data_uri_ is set to null in this tutorial. The blueprint will by default use the data used by the automl job. Do not do this in practice.___Run the following command to visualize the ROC curve of the best model produced by your blueprint. This gives you a sense of the trade-off between true positive and false positive rates as well as an idea of the overall performance.
###Code
_, _, fpr, tpr, thresholds = inspector.get_roc_curve()
###Output
_____no_output_____
###Markdown
---Run the following command to visualize the AUC curve to determine an appropriate threshold to you use for your model.
###Code
inspector.visualize_auc(fpr,tpr,thresholds)
###Output
_____no_output_____
###Markdown
---You can also plot a heatmap of the confusion matrix and use the threshold slider to get a sense for the trade-offs at various thresholds.
###Code
%matplotlib inline
inspector.display_interactive_cm()
###Output
_____no_output_____
###Markdown
---- Global Feature ImportanceThe blueprint will run an Amazon Clarify job to obtain Shapley values. You can view the global feature importance of your models by navigating to the "Model Explainability" reports as shown below. ---The Amazon SageMaker Clarify process produces Shap values for each of records used in your automl job. This notebook provides you with utilities to visualize feature impact of single predictions.The following utility function will load a sample of your prep dataset for a specific blueprint run. Run the cell to preview your data.
###Code
df = bprunner.get_prepped_data_df(execution_arn)
display(df)
###Output
_____no_output_____
###Markdown
---Now, you can use your model inspector to get the feature impact for a specific prediction. The cell below shows the features that influence the prediction for row number 31614. You can change the row id to obtain predictions for other records in your dataset.
###Code
description = inspector.explain_prediction(data_row_id = 31614)
print(description)
###Output
_____no_output_____
###Markdown
--- Post Training Bias AnalysisThe blueprint uses Amazon SageMaker Clarify to generate post-training bias metrics. The animation below shows how to navigate to the bias reports generated for your best model.  --- [Optional] Deploy ModelYou've inspected your model. You've approved the quality. You're now ready to deploy it as a hosted endpoint on Amazon SageMaker to serve real-time predictions.You can run the cells below to deploy a hosted endpoint.
###Code
now = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
epc_name = f"automl-bp-epc-{now}"
ep_name = f"automl-bp-ep-{now}"
model_name = bprunner.get_best_model_name(execution_arn)
ep_instance_type = "ml.m5.large"
variant_config = {
"VariantName":"v1",
"ModelName":model_name,
"InitialInstanceCount": 1,
"InstanceType": ep_instance_type
}
sm.create_endpoint_config(
EndpointConfigName = epc_name,
ProductionVariants=[variant_config])
sm.create_endpoint(
EndpointName= ep_name,
EndpointConfigName=epc_name
)
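# Optional: poll the deployment status from the notebook instead of the console.
# (Assumes `sm` is the boto3 SageMaker client used above.)
status = sm.describe_endpoint(EndpointName=ep_name)["EndpointStatus"]
print(f"Endpoint status: {status}")
# To block until the endpoint is in service, a waiter can be used:
# sm.get_waiter("endpoint_in_service").wait(EndpointName=ep_name)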
###Output
_____no_output_____
###Markdown
You can monitor the progress of your endpoint deployment from Amazon SageMaker Studio: --- Clean upOnce you're done with your endpoint, you can uncomment and run this line of code to delete your endpoint.
###Code
#sm.delete_endpoint(EndpointName=ep_name)
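# The endpoint config and model created above can be removed the same way once the
# endpoint is deleted (uncomment when you are done with them):
#sm.delete_endpoint_config(EndpointConfigName=epc_name)
#sm.delete_model(ModelName=model_name)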
###Output
_____no_output_____ |
convokit/surprise/demos/surprise_demo.ipynb | ###Markdown
Computing Surprise With ConvoKit=====================This notebook provides a demo of how to use the Surprise transformer to compute surprise across a corpus. In this demo, we will use the Surprise transformer to compute Speaker Convo Diversity, a measure of how surprising a speaker's participation in one conversation is compared to their participation in all other conversations.
###Code
import convokit
import itertools
import numpy as np
import spacy
from convokit import Corpus, download, Surprise
from convokit.text_processing import TextProcessor, TextParser
from sklearn.feature_extraction.text import CountVectorizer
###Output
_____no_output_____
###Markdown
Step 1: Load a corpus--------We will use data from the subreddit r/Cornell to demonstrate the functionality of this transformer
###Code
corpus = Corpus(filename=download('subreddit-Cornell'))
corpus.print_summary_stats()
###Output
Number of Speakers: 7568
Number of Utterances: 74467
Number of Conversations: 10744
###Markdown
In order to speed up the demo, we will take just the top 100 most active speakers (based on the number of conversations they participate in).
###Code
SPEAKER_BLACKLIST = ['[deleted]', 'DeltaBot', 'AutoModerator']
def utterance_is_valid(utterance):
return utterance.speaker.id not in SPEAKER_BLACKLIST and utterance.text
corpus.organize_speaker_convo_history(utterance_filter=utterance_is_valid)
speaker_activities = corpus.get_attribute_table('speaker', ['n_convos'])
speaker_activities.sort_values('n_convos', ascending=False).head(10)
top_speakers = speaker_activities.sort_values('n_convos', ascending=False).head(100).index
import itertools
subset_utts = [list(corpus.get_speaker(speaker).iter_utterances(selector=utterance_is_valid)) for speaker in top_speakers]
subset_corpus = Corpus(utterances=list(itertools.chain(*subset_utts)))
subset_corpus.print_summary_stats()
###Output
Number of Speakers: 100
Number of Utterances: 20550
Number of Conversations: 6866
###Markdown
Step 2: Create instance of Surprise transformer---------------`target_sample_size` and `context_sample_size` specify the minimum number of tokens that should be in the target and context respectively. If we set these to simply 1, the most surprising statements tend to just be the very short statements. If a target or context is shorter than the specified sample size, the transformer will set the surprise to be `nan`. The transformer draws `n_samples` samples from the target and from the context (with sample sizes given by `target_sample_size` and `context_sample_size`). It calculates cross entropy for each pair of samples and takes the average to get the final surprise score. This is done to minimize the effect of length on scores.`model_key_selector` defines how utterances in a corpus should be mapped to a model. It takes in an utterance and returns the key for the corresponding model. For this demo we want to map utterances to models based on their speaker and conversation ids.The transformer also has an optional `tokenizer` parameter to customize tokenization. Here we will tokenize the text outside of the surprise transformer, so our tokenizer will be an identity function.The `smooth` parameter determines whether the transformer uses +1 laplace smoothing (`smooth = True`) or naively replaces 0 counts with 1's as the SpeakerConvoDiversity transformer does (`smooth = False`).
###Code
import spacy
spacy_nlp = spacy.load('en_core_web_sm', disable=['ner','parser', 'tagger', 'lemmatizer'])
for utt in subset_corpus.iter_utterances():
utt.meta['joined_tokens'] = [t.text.lower() for t in spacy_nlp(utt.text)]
surp = Surprise(tokenizer=lambda x: x, model_key_selector=lambda utt: '_'.join([utt.speaker.id, utt.conversation_id]), target_sample_size=100, context_sample_size=1000, n_samples=50, smooth=True)
surp = surp.fit(subset_corpus, text_func=lambda utt: [list(itertools.chain(*[u.meta['joined_tokens'] for u in utt.speaker.iter_utterances() if u.conversation_id != utt.conversation_id]))])
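# Illustrative sketch only (NOT ConvoKit's internal code) of the mechanism described above:
# repeatedly sample `target_sample_size` target tokens and `context_sample_size` context
# tokens, fit a +1-smoothed unigram model on the context sample, and average the cross
# entropies over the samples.
def sampled_cross_entropy_sketch(target_tokens, context_tokens,
                                 target_n=100, context_n=1000, n_samples=50, seed=0):
    rng = np.random.default_rng(seed)
    vocab = {w: i for i, w in enumerate(set(target_tokens) | set(context_tokens))}
    entropies = []
    for _ in range(n_samples):
        tgt = rng.choice(target_tokens, size=target_n, replace=True)
        ctx = rng.choice(context_tokens, size=context_n, replace=True)
        counts = np.ones(len(vocab))  # +1 Laplace smoothing, as with smooth=True
        for w in ctx:
            counts[vocab[w]] += 1
        probs = counts / counts.sum()
        entropies.append(-np.mean([np.log(probs[vocab[w]]) for w in tgt]))
    return float(np.mean(entropies))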
###Output
fit1: 20550it [00:16, 1283.44it/s]
fit2: 100%|██████████| 15394/15394 [00:00<00:00, 1032033.56it/s]
###Markdown
Step 3: Transform corpus--------The object type input to `transform` determines what objects the transformer adds metadata to. Valid inputs are `'utterance'`, `'speaker'`, `'conversation'`, and `'corpus'`. Here we'll call `transform` with object type `'speaker'` so that surprise scores will be added as a metadata field for each speaker. See the tennis demo for an example where object type is utterance.
###Code
transformed_corpus = surp.transform(subset_corpus, obj_type='speaker')
###Output
transform: 100it [15:57, 9.57s/it]
###Markdown
Analysis------
###Code
import pandas as pd
from functools import reduce
def combine_dicts(x,y):
x.update(y)
return x
surprise_scores = reduce(combine_dicts, transformed_corpus.get_speakers_dataframe()['meta.surprise'].values)
suprise_series = pd.Series(surprise_scores).dropna()
###Output
_____no_output_____
###Markdown
In the resulting pandas series, the keys are of the format {speaker}_{conversation id}. Let's take a look at some of the most surprising speaker conversation involvements.
###Code
most_surprising = suprise_series.sort_values(ascending=False).head(10)
most_surprising
###Output
_____no_output_____
###Markdown
Now, let's look at some of the least surprising entries.
###Code
least_surprising = suprise_series.sort_values(ascending=True).head(10)
least_surprising
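# Quick look at the distribution of surprise scores across all speaker-conversation pairs
# (matplotlib is assumed to be installed).
import matplotlib.pyplot as plt
suprise_series.hist(bins=50)
plt.xlabel('surprise (average cross entropy)')
plt.ylabel('count')
plt.show()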
###Output
_____no_output_____ |
Seaside/Dislocation_models/Seaside_Shape_FIles/IN_CORE_Seaside_CommunityDescription.ipynb | ###Markdown
Seaside Testbed - Initial Interdependent Community Description Basic Conceptual Structure of IN-COREStep 1 in IN-CORE is to establish the initial interdependent community description at time 0 and with policy levers and decision combinations set to K (baseline case). The community description includes three parts - the built environment, social systems, and economic systems. This notebook helps explore the data currently available in IN-CORE for the Seaside Testbed.Seaside, OR is a community located on the Pacific Ocean in Northwest Oregon. The city faces high earthquake and tsunami risks.
###Code
import pandas as pd
import geopandas as gpd # For reading in shapefiles
import numpy as np
import sys # For displaying package versions
import os # For managing directories and file paths if drive is mounted
from pyincore import IncoreClient, Dataset, FragilityService, MappingSet, DataService
from pyincore.analyses.buildingdamage.buildingdamage import BuildingDamage
from pyincore_viz.geoutil import GeoUtil as viz
# Check package versions - good practice for replication
print("Python Version ",sys.version)
print("pandas version: ", pd.__version__)
print("numpy version: ", np.__version__)
# Check working directory - good practice for relative path access
os.getcwd()
client = IncoreClient()
# IN-CORE caches files on the local machine; it might be necessary to clear the cache
#client.clear_cache()
# create data_service object for loading files
data_service = DataService(client)
###Output
_____no_output_____
###Markdown
1a) Built environment: Building InventoryThe building inventory for Seaside consists of...
###Code
# Temporary Seaside, OR Building Inventory v6
building_inv = Dataset.from_file("IN-CORE_Seaside_BuildingInventory_2021-03-20.shp", data_type='ergo:buildingInventoryVer6')
bldg_inv_gdf = gpd.read_file("IN-CORE_Seaside_BuildingInventory_2021-03-20.shp")
bldg_inv_gdf.head()
bldg_inv_gdf.columns
bldg_inv_gdf[['guid','strctid','struct_typ','struct_typ','year_built','occ_type']].head()
bldg_inv_gdf['strctid'].describe()
bldg_inv_gdf[['guid','strctid','struct_typ','year_built','occ_type']].groupby('struct_typ').count()
map = viz.plot_gdf_map(bldg_inv_gdf,column='struct_typ')
map
bldg_inv_gdf
###Output
_____no_output_____
###Markdown
1b) Social Systems: Housing Unit InventoryThe housing unit inventory includes characteristics for individual households and housing units that can be linked to residential buildings. For more information see:>Rosenheim, Nathanael, Roberto Guidotti, Paolo Gardoni & Walter Gillis Peacock. (2019). Integration of detailed household and housing unit characteristic data with critical infrastructure for post-hazard resilience modeling. Sustainable and Resilient Infrastructure. doi.org/10.1080/23789689.2019.1681821
###Code
# Seaside Housing Unit Inventory
housing_unit_inv_id = "5d543087b9219c0689b98234"
# load housing unit inventory as pandas dataframe
housing_unit_inv = Dataset.from_data_service(housing_unit_inv_id, data_service)
filename = housing_unit_inv.get_file_path('csv')
print("The IN-CORE Dataservice has saved the Housing Unit Inventory on your local machine: "+filename)
housing_unit_inv_df = pd.read_csv(filename, header="infer")
housing_unit_inv_df.head()
housing_unit_inv_df.columns
###Output
_____no_output_____
###Markdown
Explore Housing Unit CharacteristicsThe housing unit inventory includes characteristics based on the 2010 Decennial Census. Race and EthnicityThe housing unit inventory includes variables for race and ethnicity.
###Code
housing_unit_inv_df['Race Ethnicity'] = "0 Vacant HU No Race Ethnicity Data"
housing_unit_inv_df['Race Ethnicity'].notes = "Identify Race and Ethnicity Housing Unit Characteristics."
housing_unit_inv_df.loc[(housing_unit_inv_df['race'] == 1) &
(housing_unit_inv_df['hispan'] == 0),'Race Ethnicity'] = "1 White alone, Not Hispanic"
housing_unit_inv_df.loc[(housing_unit_inv_df['race'] == 2) &
(housing_unit_inv_df['hispan'] == 0),'Race Ethnicity'] = "2 Black alone, Not Hispanic"
housing_unit_inv_df.loc[(housing_unit_inv_df['race'].isin([3,4,5,6,7])) &
(housing_unit_inv_df['hispan'] == 0),'Race Ethnicity'] = "3 Other Race, Not Hispanic"
housing_unit_inv_df.loc[(housing_unit_inv_df['hispan'] == 1),'Race Ethnicity'] = "4 Any Race, Hispanic"
housing_unit_inv_df.loc[(housing_unit_inv_df['gqtype'] >= 1),'Race Ethnicity'] = "5 Group Quarters no Race Ethnicity Data"
housing_unit_inv_df['Tenure Status'] = "0 No Tenure Status"
housing_unit_inv_df.loc[(housing_unit_inv_df['ownershp'] == 1),'Tenure Status'] = "1 Owner Occupied"
housing_unit_inv_df.loc[(housing_unit_inv_df['ownershp'] == 2),'Tenure Status'] = "2 Renter Occupied"
housing_unit_inv_df['Tenure Status'].notes = "Identify Tenure Status Housing Unit Characteristics."
table = pd.pivot_table(housing_unit_inv_df, values='numprec', index=['Race Ethnicity'],
margins = True, margins_name = 'Total',
columns=['Tenure Status'], aggfunc=[np.sum]).rename(
columns={'Total': 'Total Population', 'sum': ''})
table_title = "Table 1. Total Population by Race, Ethncity, and Tenure Status, Jefferson and Newton Counties, 2010."
varformat = {('','Total Population'): "{:,.0f}",
('','1 Owner Occupied'): "{:,.0f}",
('','2 Renter Occupied'): "{:,.0f}"}
table.style.set_caption(table_title).format(varformat)
###Output
_____no_output_____
###Markdown
1a + 1b) Interdependent Community DescriptionExplore the building inventory and social systems. Specifically, look at how the building inventory connects with the housing unit inventory using the housing unit allocation.The housing unit allocation method will provide detailed demographic characteristics for the community allocated to each structure.To run the HUA Algorithm, three input datasets are required:1. Housing Unit Inventory - Based on 2010 US Census Block Level Data2. Address Point Inventory - A list of all possible residential/business address points in a community. Address points are the link between buildings and housing units.3. Building Inventory - A list of all buildings within a community. Set Up and Run Housing Unit Allocation The building and housing unit inventories have already been loaded. The address point inventory is needed to link the population with the structures.
###Code
# Housing unit and Building Inventories have been loaded
# Seaside, OR address point inventory
address_point_inv_id = "5d542fefb9219c0689b981fb"
from pyincore.analyses.housingunitallocation import HousingUnitAllocation
# Create housing allocation
hua = HousingUnitAllocation(client)
# Load input dataset
hua.load_remote_input_dataset("housing_unit_inventory", housing_unit_inv_id)
hua.load_remote_input_dataset("address_point_inventory", address_point_inv_id)
#hua.load_remote_input_dataset("buildings", bldg_inv_id)
hua.set_input_dataset("buildings", building_inv)
# Specify the result name
result_name = "Seaside_HUA"
seed = 1238
iterations = 1
# Set analysis parameters
hua.set_parameter("result_name", result_name)
hua.set_parameter("seed", seed)
hua.set_parameter("iterations", iterations)
# Run Housing unit allocation analysis
hua.run_analysis()
###Output
_____no_output_____
###Markdown
Explore results from Housing Unit Allocation
###Code
# Retrieve result dataset
hua_result = hua.get_output_dataset("result")
# Convert dataset to Pandas DataFrame
hua_df = hua_result.get_dataframe_from_csv(low_memory=False)
# Display top 5 rows of output data
hua_df[['guid','numprec','ownershp','geometry','aphumerge']].head()
hua_df.guid.describe()
hua_df.huid.describe()
hua_df.columns
# keep observations where the housing unit characteristics have been allocated to a structure.
hua_df = hua_df.dropna(subset=['guid'])
hua_df.huid.describe()
hua_df['Race Ethnicity'] = "0 Vacant HU No Race Ethnicity Data"
hua_df['Race Ethnicity'].notes = "Identify Race and Ethnicity Housing Unit Characteristics."
hua_df.loc[(hua_df['race'] == 1) &
(hua_df['hispan'] == 0),'Race Ethnicity'] = "1 White alone, Not Hispanic"
hua_df.loc[(hua_df['race'] == 2) &
(hua_df['hispan'] == 0),'Race Ethnicity'] = "2 Black alone, Not Hispanic"
hua_df.loc[(hua_df['race'].isin([3,4,5,6,7])) &
(hua_df['hispan'] == 0),'Race Ethnicity'] = "3 Other Race, Not Hispanic"
hua_df.loc[(hua_df['hispan'] == 1),'Race Ethnicity'] = "4 Any Race, Hispanic"
hua_df.loc[(hua_df['gqtype'] >= 1),'Race Ethnicity'] = "5 Group Quarters no Race Ethnicity Data"
hua_df['Tenure Status'] = "0 No Tenure Status"
hua_df.loc[(hua_df['ownershp'] == 1),'Tenure Status'] = "1 Owner Occupied"
hua_df.loc[(hua_df['ownershp'] == 2),'Tenure Status'] = "2 Renter Occupied"
hua_df['Tenure Status'].notes = "Identify Tenure Status Housing Unit Characteristics."
table = pd.pivot_table(hua_df, values='numprec', index=['Race Ethnicity'],
margins = True, margins_name = 'Total',
columns=['Tenure Status'], aggfunc=[np.sum]).rename(
columns={'Total': 'Total Population', 'sum': ''})
table_title = "Table 1. Total Population by Race, Ethncity, and Tenure Status, Joplin, MO, 2010."
varformat = {('','Total Population'): "{:,.0f}",
('','0 No Tenure Status'): "{:,.0f}",
('','1 Owner Occupied'): "{:,.0f}",
('','2 Renter Occupied'): "{:,.0f}"}
table.style.set_caption(table_title).format(varformat)
###Output
_____no_output_____
###Markdown
Validate the Housing Unit Allocation has workedNotice that the population count totals for the community should match (pretty closely) data collected for the 2010 Decennial Census.This can be confirmed by going to data.census.govTotal Population by Race and Ethnicity:https://data.census.gov/cedsci/table?q=DECENNIALPL2010.H11&g=1600000US4165950&tid=DECENNIALSF12010.H11Differences in the housing unit allocation and the Census count may be due to differences between political boundaries and the building inventory. See Rosenheim et al 2019 for more details.The housing unit allocation, plus the building results, will become the input for the social science models such as the population dislocation model.
###Code
# Build shapely Point geometries from the x/y columns
from shapely.geometry import Point
# Geodata frame requires geometry and CRS to be set
hua_gdf = gpd.GeoDataFrame(
hua_df,
crs={'init': 'epsg:4326'},
geometry=[Point(xy) for xy in zip(hua_df['x'], hua_df['y'])])
hua_gdf[['guid','x','y','ownershp','geometry']].head(6)
# visualize population
gdf = hua_gdf
map = viz.plot_gdf_map(gdf,column='ownershp')
map
# visualize population by race and tenure status
minority_renters_gdf = hua_gdf.loc[(hua_gdf.race != 1) &
(hua_gdf['Tenure Status'] == '2 Renter Occupied')]
map = viz.plot_gdf_map(minority_renters_gdf,column='race')
map
# What location should the map be centered on?
center_x = hua_gdf.bounds.minx.mean()
center_y = hua_gdf.bounds.miny.mean()
center_x, center_y
# https://ipyleaflet.readthedocs.io/en/latest/api_reference/heatmap.html
import ipyleaflet as ipylft
from ipyleaflet import Map, Heatmap
print("ipyleaflet Version ",ipylft.__version__)
popdata = minority_renters_gdf[['y','x','numprec']].values.tolist()
from ipyleaflet import Map, Heatmap, LayersControl
map = Map(center=[center_y,center_x], zoom=13)
minority_renters_heatmap = Heatmap(
locations = popdata,
radius = 10,
max_val = 1000,
blur = 10,
gradient={0.2: 'yellow', 0.5: 'orange', 1.0: 'red'},
name = 'Minority Renters',
)
map.add_layer(minority_renters_heatmap);
control = LayersControl(position='topright')
map.add_control(control)
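# For comparison, overlay a heatmap of all allocated households, reusing the same Heatmap
# settings as above; the LayersControl lets you toggle the two layers.
popdata_all = hua_gdf[['y','x','numprec']].values.tolist()
all_households_heatmap = Heatmap(
    locations = popdata_all,
    radius = 10,
    blur = 10,
    name = 'All Households',
)
map.add_layer(all_households_heatmap)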
map
###Output
_____no_output_____ |
scopus api work/functions/CV_Prog_functions.ipynb | ###Markdown
Contains Functions for Interacting with API
###Code
from elsapy.elsclient import ElsClient
from elsapy.elsprofile import ElsAuthor, ElsAffil
from elsapy.elsdoc import FullDoc, AbsDoc
from elsapy.elssearch import ElsSearch
import json
## Load configuration
con_file = open("config.json")
config = json.load(con_file)
con_file.close()
## Initialize client
client = ElsClient(config['apikey'])
## Initialize author search object and execute search
auth_srch = ElsSearch('authlast(Ahmadian) and authfirst(Mehdi)and affil(Virginia Tech)','author')
auth_srch.execute(client)
print ("auth_srch has", len(auth_srch.results), "results.")
ids = [(item['dc:identifier']) for item in auth_srch.results]
ids
## Define an author lookup function
def auth_look(first, last):
    """Search for an author by name (limited to Virginia Tech affiliations) and return the Scopus author identifiers."""
    auth_srch = ElsSearch(('authlast(%s) and authfirst(%s) and affil(Virginia Tech)' % (last, first)), 'author')
    auth_srch.execute(client)
    print("auth_srch has", len(auth_srch.results), "results.")
    ids = [item['dc:identifier'] for item in auth_srch.results]
    print(ids)
    return ids
def hist_lookup(aid):
my_auth = ElsAuthor(
uri = 'https://api.elsevier.com/content/author/author_id/'+str(aid))
# Read author data, then write to disk
if my_auth.read(client):
af_hist = my_auth.hist_names['affiliation']
ids = [str(item['@id']) for item in af_hist]
return(ids)
else:
print ("Read author failed.")
print ("Read affiliation failed.")
def hist_name(ids):
lst = []
for i in ids:
my_aff = ElsAffil(affil_id=i)
if my_aff.read(client):
print (my_aff.name)
lst.append(my_aff.name)
my_aff.write()
else:
print("Read affiliation failed.")
auth_look('Dipankar','Chakravarti')
hist_name(hist_lookup(36169200700))
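# Example of chaining the helpers (hypothetical; assumes auth_look returns the list of
# 'dc:identifier' strings such as 'AUTHOR_ID:36169200700'):
# ids = auth_look('Mehdi', 'Ahmadian')
# author_id = ids[0].split(':')[-1]   # strip the 'AUTHOR_ID:' prefix
# hist_name(hist_lookup(author_id))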
###Output
_____no_output_____ |
Example_API_usage.ipynb | ###Markdown
Example CREEDS API Usage Walk-Through _Zichen Wang_
###Code
## required python packages
import json
import numpy as np
import pandas as pd
import requests
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
BASE_URL = 'http://amp.pharm.mssm.edu/CREEDS/'
###Output
_____no_output_____
###Markdown
1. Search signatures using textThe `/search` endpoint handles the search of the metadata associated with gene expression signatures. You can use a query term of interest such as 'breast cancer' to perform a search using an HTTP GET request.The `requests.get()` function performs an HTTP GET request and returns a response object. JSON data associated with the response object can be parsed using the `.json()` method:
###Code
r = requests.get(BASE_URL + 'search', params={'q':'breast cancer'})
response = r.json()
print r.status_code
###Output
200
###Markdown
We next convert the JSON response to a `pandas.DataFrame` and examine the search result:
###Code
response_df = pd.DataFrame.from_records(response).set_index('id')
response_df.head()
###Output
_____no_output_____
###Markdown
2. Query the signature database using up/down gene sets to find similar/opposite signaturesSignatures can also be queried against using a pair of up/down gene lists from the `/search` endpoint using HTTP POST request. Here we generate vectors of up and down genes to query against the CREEDS database:
###Code
## Get some up and down gene lists
up_genes = ['KIAA0907','KDM5A','CDC25A','EGR1','GADD45B','RELB','TERF2IP','SMNDC1','TICAM1','NFKB2',
'RGS2','NCOA3','ICAM1','TEX10','CNOT4','ARID4B','CLPX','CHIC2','CXCL2','FBXO11','MTF2',
'CDK2','DNTTIP2','GADD45A','GOLT1B','POLR2K','NFKBIE','GABPB1','ECD','PHKG2','RAD9A',
'NET1','KIAA0753','EZH2','NRAS','ATP6V0B','CDK7','CCNH','SENP6','TIPARP','FOS','ARPP19',
'TFAP2A','KDM5B','NPC1','TP53BP2','NUSAP1']
dn_genes = ['SCCPDH','KIF20A','FZD7','USP22','PIP4K2B','CRYZ','GNB5','EIF4EBP1','PHGDH','RRAGA',
'SLC25A46','RPA1','HADH','DAG1','RPIA','P4HA2','MACF1','TMEM97','MPZL1','PSMG1','PLK1',
'SLC37A4','GLRX','CBR3','PRSS23','NUDCD3','CDC20','KIAA0528','NIPSNAP1','TRAM2','STUB1',
'DERA','MTHFD2','BLVRA','IARS2','LIPA','PGM1','CNDP2','BNIP3','CTSL1','CDC25B','HSPA8',
'EPRS','PAX8','SACM1L','HOXA5','TLE1','PYGL','TUBB6','LOXL1']
payload = {
'up_genes': up_genes,
'dn_genes': dn_genes,
'direction': 'similar',
'db_version': 'v1.0'
}
###Output
_____no_output_____
###Markdown
POST the input gene sets to the API and load the response into a `pandas.DataFrame`
###Code
r = requests.post(BASE_URL + 'search', json=payload)
print r.status_code
response = r.json()
response_df = pd.DataFrame.from_records(response).set_index('id')
response_df.head()
###Output
_____no_output_____
###Markdown
3. Retrieve a signature using `id`
###Code
r = requests.get(BASE_URL + 'api', params={'id': response_df.index[0]})
sig = r.json()
###Output
_____no_output_____
###Markdown
`sig` is a python `dictionary` with the following keys
###Code
print sig.keys()
###Output
[u'cell_type', u'pert_ids', u'hs_gene_symbol', u'curator', u'id', u'geo_id', u'platform', u'ctrl_ids', u'up_genes', u'mm_gene_symbol', u'organism', u'down_genes', u'pert_type']
###Markdown
4. Retrieve a gene expression dataset from GEO and perform selected analyses 4.0. Prepare dataExamine the metadata required to retrieve the dataset from GEO:
###Code
print 'GEO id: %s' % sig['geo_id']
print 'Control GSMs: %s' % sig['ctrl_ids']
print 'Perturbation GSMs: %s' % sig['pert_ids']
print 'GEO platform id: %s' % sig['platform']
###Output
GEO id: GSE8747
Control GSMs: [u'GSM217200', u'GSM217202', u'GSM217204', u'GSM217206', u'GSM217208']
Perturbation GSMs: [u'GSM217210', u'GSM217214', u'GSM217215', u'GSM217217']
GEO platform id: GPL4200
###Markdown
In order to retrieve gene expression data from GEO, we need to use certain custom utility functions that can be found at: https://github.com/MaayanLab/creeds/tree/master/externals. To identify Differentially Expressed Genes (DEGs), we use the [Characteristic Direction (CD)](http://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-15-79) algorithm implemented in the [geode package](https://github.com/wangz10/geode)
###Code
from externals import (geodownloader, softparser, RNAseq)
import geode
gse_file = geodownloader.download(sig['geo_id'])
annot_file = geodownloader.download(sig['platform'])
expr_df = softparser.parse(gse_file.path(), sig['ctrl_ids'], sig['pert_ids'])
print expr_df.shape
expr_df.head()
###Output
(36607, 9)
###Markdown
4.1. Exploratory analysesVisualize the gene expression values across samples using a density plot:
###Code
ax = expr_df.plot(kind='kde')
ax.set_xlabel('Gene expression')
###Output
_____no_output_____
###Markdown
Visualize using a PCA plot:
###Code
sample_classes = ['CTRL'] * len(sig['ctrl_ids']) + ['PERT'] * len(sig['pert_ids'])
RNAseq.PCA_plot(expr_df.values, sample_classes, log=True, standardize=2)
###Output
_____no_output_____
###Markdown
4.2. Perform differential expression analysis 4.2.1. Applying the Characteristic Direcion (CD) algorithm
###Code
sample_classes = [1] * len(sig['ctrl_ids']) + [2] * len(sig['pert_ids'])
result = geode.chdir(expr_df.values, sample_classes, expr_df.index,
gamma=.5, sort=False)
result = pd.DataFrame(result, columns=['CD_coefficient', 'probes'])
result.head()
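## Rank genes by the magnitude of their characteristic direction coefficient,
## a simple way to shortlist the most differentially expressed probes.
top_ranked = result.reindex(result['CD_coefficient'].abs().sort_values(ascending=False).index)
top_ranked.head(10)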
###Output
_____no_output_____ |
SharathChandrika_ML_clustering.ipynb | ###Markdown
In this notebook we are going to build K-Means clustering, Agglomerative Hierarchical clustering, and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) models. 1. K-Means Clustering Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Mall_Customers.csv')
X = dataset.iloc[:, [3, 4]].values
###Output
_____no_output_____
###Markdown
Using the elbow method to find the optimal number of clusters
###Code
from sklearn.cluster import KMeans
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
kmeans.fit(X)
wcss.append(kmeans.inertia_)
plt.plot(range(1, 11), wcss)
plt.title('The Elbow Method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
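# Optional cross-check of the elbow choice: compute silhouette scores for a few values of k.
from sklearn.metrics import silhouette_score
for k in range(2, 8):
    labels_k = KMeans(n_clusters = k, init = 'k-means++', random_state = 42).fit_predict(X)
    print(k, round(silhouette_score(X, labels_k), 3))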
###Output
_____no_output_____
###Markdown
Training the K-Means model on the dataset
###Code
kmeans = KMeans(n_clusters = 5, init = 'k-means++', random_state = 42)
y_kmeans = kmeans.fit_predict(X)
###Output
_____no_output_____
###Markdown
Visualising the clusters
###Code
plt.scatter(X[y_kmeans == 0, 0], X[y_kmeans == 0, 1], s = 100, c = 'red', label = 'Cluster 1')
plt.scatter(X[y_kmeans == 1, 0], X[y_kmeans == 1, 1], s = 100, c = 'blue', label = 'Cluster 2')
plt.scatter(X[y_kmeans == 2, 0], X[y_kmeans == 2, 1], s = 100, c = 'green', label = 'Cluster 3')
plt.scatter(X[y_kmeans == 3, 0], X[y_kmeans == 3, 1], s = 100, c = 'cyan', label = 'Cluster 4')
plt.scatter(X[y_kmeans == 4, 0], X[y_kmeans == 4, 1], s = 100, c = 'magenta', label = 'Cluster 5')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s = 300, c = 'yellow', label = 'Centroids')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
2. Agglomerative Hierarchical Clustering We are working on the same dataset as above, so it is not imported again. Using the dendrogram to find the optimal number of clusters
###Code
import scipy.cluster.hierarchy as sch
dendrogram = sch.dendrogram(sch.linkage(X, method = 'ward'))
plt.title('Dendrogram')
plt.xlabel('Customers')
plt.ylabel('Euclidean distances')
plt.show()
###Output
_____no_output_____
###Markdown
Training the Hierarchical Clustering model on the dataset
###Code
from sklearn.cluster import AgglomerativeClustering
hc = AgglomerativeClustering(n_clusters = 5, affinity = 'euclidean', linkage = 'ward')
y_hc = hc.fit_predict(X)
###Output
_____no_output_____
###Markdown
Visualising the clusters
###Code
plt.scatter(X[y_hc == 0, 0], X[y_hc == 0, 1], s = 100, c = 'red', label = 'Cluster 1')
plt.scatter(X[y_hc == 1, 0], X[y_hc == 1, 1], s = 100, c = 'blue', label = 'Cluster 2')
plt.scatter(X[y_hc == 2, 0], X[y_hc == 2, 1], s = 100, c = 'green', label = 'Cluster 3')
plt.scatter(X[y_hc == 3, 0], X[y_hc == 3, 1], s = 100, c = 'cyan', label = 'Cluster 4')
plt.scatter(X[y_hc == 4, 0], X[y_hc == 4, 1], s = 100, c = 'magenta', label = 'Cluster 5')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
3. DBSCAN
###Code
from sklearn.cluster import DBSCAN
from sklearn import metrics
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn import datasets
# X (Annual Income, Spending Score) was already loaded from the Mall_Customers data above
db = DBSCAN(eps=0.3, min_samples=10).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print(labels)
# Plot result
import matplotlib.pyplot as plt
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = ['y', 'b', 'g', 'r']
print(colors)
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = 'k'
class_member_mask = (labels == k)
xy = X[class_member_mask & core_samples_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col,
markeredgecolor='k',
markersize=6)
xy = X[class_member_mask & ~core_samples_mask]
plt.plot(xy[:, 0], xy[:, 1], 'o', markerfacecolor=col,
markeredgecolor='k',
markersize=6)
plt.title('number of clusters: %d' %n_clusters_)
plt.show()
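# The run above labels every point as noise (-1) because eps=0.3 is tiny compared to the
# raw feature units (income in k$, score in points). A sketch of a more typical setup is
# to standardize the features first and re-tune eps/min_samples (the values below are
# only an illustration, not tuned results):
X_scaled = StandardScaler().fit_transform(X)
db_scaled = DBSCAN(eps=0.35, min_samples=5).fit(X_scaled)
labels_scaled = db_scaled.labels_
print('clusters found:', len(set(labels_scaled)) - (1 if -1 in labels_scaled else 0))
print('noise points:', list(labels_scaled).count(-1))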
###Output
[-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1]
['y', 'b', 'g', 'r']
|
modul_2_mnist-digits_cnn.ipynb | ###Markdown
Loading the dataIn the first step we prepare everything and load our dataset of handwritten digits. Our goal is to load an image into our program and classify it. Classifying means recognizing which digit it is. Is it a *0* or rather a *9*?A small note: a `#` signals a comment in the code; programmers use comments to jot down hints that make code lines easier to understand ;-)
###Code
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import os
print(tf.__version__)
# Load the dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the images so that they contain values from 0 - 1. That is a better input for the NN.
train_images = np.expand_dims(train_images / 255.0, -1)
test_images = np.expand_dims(test_images / 255.0, -1)
###Output
_____no_output_____
###Markdown
Visualize - illustrate - display graphicallyIn the next step we load a *0* and a *9* from our dataset and visualize the two digits.
###Code
# Load a 0 (taken here from the test images)
indicies_von_allen_0en = (np.where(test_labels == 0))[0]
bild_mit_ziffer_0 = test_images[indicies_von_allen_0en[0]]
# Load a 9 (taken here from the test images)
indicies_von_allen_9en = (np.where(test_labels == 9))[0]
bild_mit_ziffer_9 = test_images[indicies_von_allen_9en[0]]
# Visualize (= display) the images so that we can check everything was loaded correctly
plt.figure()
plt.imshow(bild_mit_ziffer_0[:,:,0], cmap=plt.cm.binary)
plt.title("This is a 0")
plt.show()
plt.figure()
plt.imshow(bild_mit_ziffer_9[:,:,0], cmap=plt.cm.binary)
plt.title("This is a 9")
plt.show()
###Output
_____no_output_____
###Markdown
Defining the neural networkNext we have to define the architecture of our neural network: how many layers it should have and how many neurons these layers contain.To start with, we choose the following architecture:* Input layer: 28x28 (that is the size of our images!)* 3 x convolution layer with pooling layer * Fully connected network (FCN) layer (called *dense* in TF!) with 128 neurons and a ReLU activation* Output of 10 neurons (we have 10 digits that we want to classify)
###Code
# Network architecture
model = keras.Sequential([
keras.layers.Conv2D(8, (3, 3), activation='relu', input_shape=(28,28,1)),
keras.layers.MaxPooling2D((2, 2)),
keras.layers.Conv2D(16, (3, 3), activation='relu'),
keras.layers.MaxPooling2D((2, 2)),
keras.layers.Conv2D(16, (3, 3), activation='relu'),
keras.layers.Flatten(),
keras.layers.Dense(128, activation='relu'),
keras.layers.Dense(10)
])
# Let TF build our network
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Training the neural networkIn the next step we train our network with the data we loaded above. Training is also called *fitting*, because during training the weights of the neurons are adjusted, i.e. fitted. We also have to tell TF how long the network should be trained. This is expressed as how often the training data is shown to the network: * showing all training data 1 x = 1 epoch* showing all training data 2 x = 2 epochs
###Code
# Train the network for 5 epochs
model.fit(train_images, train_labels, epochs=5)
# Save my model
# 1) first create a folder/directory named modul_2_cnn in the top-level Drive folder --> 'right click -> New folder -> modul_2_cnn'
# from google.colab import drive
# drive.mount('/content/drive')
# tf.saved_model.save(model, '/content/drive/My Drive/modul_2_cnn/model')
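# To reload the saved model later (counterpart of tf.saved_model.save; the path below is
# the one assumed in the commented-out lines above):
# loaded = tf.saved_model.load('/content/drive/My Drive/modul_2_cnn/model')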
###Output
_____no_output_____
###Markdown
Checking how good the network isWe have trained the network; now we also want to know how well it works. We say that we now *evaluate* the network. Evaluation is done with the test data. We ask how many of the test samples are classified correctly, i.e. how often the network recognizes the digit correctly.
###Code
# Test the network
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=0)
print('Our result:')
print('Out of ', test_images.shape[0], ' test images, ', int(test_acc * test_images.shape[0]), ' were recognized correctly. That is {:.2f}% of the data'.format(test_acc * 100.0))
###Output
_____no_output_____
###Markdown
Can we visualize what it looks like inside the NN?What actually happens in these *convolutional layers*? What do such filters look like? We visualize this in the next code block.
###Code
conv1_layer_weight = model.layers[0].get_weights()[0][:,:,0,:]
import matplotlib.gridspec as gridspec
# First convolution layer --> 8 filters
fig = plt.figure(figsize=(10,10))
gs1 = gridspec.GridSpec(2,4, wspace=0.05, hspace=0.05)
for idx in range(8):
a = plt.subplot(gs1[idx])
a.axis('off')
imgplot = plt.imshow(conv1_layer_weight[:,:,idx], cmap=plt.cm.binary)
conv2_layer_weight = model.layers[2].get_weights()[0][:,:,0,:]
import matplotlib.gridspec as gridspec
# Second convolution layer --> 16 filters
fig = plt.figure(figsize=(10,10))
gs1 = gridspec.GridSpec(4,4, wspace=0.05, hspace=0.05)
for idx in range(16):
a = plt.subplot(gs1[idx])
a.axis('off')
imgplot = plt.imshow(conv2_layer_weight[:,:,idx], cmap=plt.cm.binary)
conv3_layer_weight = model.layers[4].get_weights()[0][:,:,0,:]
import matplotlib.gridspec as gridspec
# Third convolution layer --> 16 filters
fig = plt.figure(figsize=(10,10))
gs1 = gridspec.GridSpec(4,4, wspace=0.05, hspace=0.05)
for idx in range(16):
a = plt.subplot(gs1[idx])
a.axis('off')
imgplot = plt.imshow(conv3_layer_weight[:,:,idx], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
Can we visualize what happens to the image on the way through?In the following code snippets we visualize what happens to the 0 when we run it through the network.
###Code
print(model.summary())
layer_outputs = [layer.output for layer in model.layers[:6]]
cnn_model = tf.keras.Model(inputs=model.inputs, outputs=layer_outputs)
aktivierung_der_null = cnn_model.predict(np.expand_dims(bild_mit_ziffer_0, axis=0))
print(len(aktivierung_der_null))
import matplotlib.gridspec as gridspec
# First convolution layer --> 8 filters
output_conv_1 = aktivierung_der_null[0]
print('Activations of the first layer with shape ', output_conv_1.shape)
fig = plt.figure(figsize=(10,10))
gs1 = gridspec.GridSpec(2,4, wspace=0.05, hspace=0.05)
for idx in range(8):
a = plt.subplot(gs1[idx])
a.axis('off')
imgplot = plt.imshow(output_conv_1[0,:,:,idx], cmap=plt.cm.binary)
# Second convolution layer --> 16 filters
output_conv_2 = aktivierung_der_null[2]
print('Activations of the second layer with shape ', output_conv_2.shape)
fig = plt.figure(figsize=(10,10))
gs1 = gridspec.GridSpec(4,4, wspace=0.05, hspace=0.05)
for idx in range(16):
a = plt.subplot(gs1[idx])
a.axis('off')
imgplot = plt.imshow(output_conv_2[0,:,:,idx], cmap=plt.cm.binary)
# Third convolution layer --> 16 filters
output_conv_3 = aktivierung_der_null[4]
print('Activations of the third layer with shape ', output_conv_3.shape)
fig = plt.figure(figsize=(10,10))
gs1 = gridspec.GridSpec(4,4, wspace=0.05, hspace=0.05)
for idx in range(16):
a = plt.subplot(gs1[idx])
a.axis('off')
imgplot = plt.imshow(output_conv_3[0,:,:,idx], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
Can you find out the following?* Training time (= epochs) of the neural network: * What happens if you train only very briefly (e.g. 1 epoch)? How many of the test images are still recognized correctly? * What happens if you train for a very long time (e.g. 1000 epochs)? How many of the test images are recognized correctly then? What can you observe? * **Tip**: Find the place in the code where the training happens and change the number of epochs accordingly. * What happens if you shift the input digit slightly to the left? Is it still recognized correctly? Just try out the example and describe what you see. Can you find an explanation for it?* What happens if you add a little noise to the input digit? Is it still recognized correctly? Just try out the example and describe what you see. Can you find an explanation for it? Where could noise come from? Can you find examples?
###Code
# Example: digit shifted to the left
verschobene_neun = np.zeros_like(bild_mit_ziffer_9) # create an empty image with the same shape as our 9
verschobene_neun[:, :15] = bild_mit_ziffer_9[:, 5:20]
plt.figure()
plt.imshow(bild_mit_ziffer_9[:,:,0], cmap=plt.cm.binary)
plt.title("Das ist eine richtige 9")
plt.show()
plt.figure()
plt.imshow(verschobene_neun[:,:,0], cmap=plt.cm.binary)
plt.title("Das ist eine verschobene 9")
plt.show()
from scipy.special import softmax
logits_des_nn_fuer_neun = model.predict(np.expand_dims(bild_mit_ziffer_9, 0))
wahrscheinlichkeiten_des_nn_fuer_neun = softmax(logits_des_nn_fuer_neun)[0]
erkannte_klasse_des_nn_fuer_neun = np.argmax(wahrscheinlichkeiten_des_nn_fuer_neun)
print('The NN recognizes the nine as ', erkannte_klasse_des_nn_fuer_neun, ' with a probability of ', wahrscheinlichkeiten_des_nn_fuer_neun[erkannte_klasse_des_nn_fuer_neun])
logits_des_nn_fuer_verschobene_neun = model.predict(np.expand_dims(verschobene_neun, 0))
wahrscheinlichkeiten_des_nn_fuer_verschobene_neun = softmax(logits_des_nn_fuer_verschobene_neun)[0]
erkannte_klasse_des_nn_fuer_verschobene_neun = np.argmax(wahrscheinlichkeiten_des_nn_fuer_verschobene_neun)
print('The NN recognizes the shifted nine as ', erkannte_klasse_des_nn_fuer_verschobene_neun, ' with a probability of ', wahrscheinlichkeiten_des_nn_fuer_verschobene_neun[erkannte_klasse_des_nn_fuer_verschobene_neun])
# Example: a noisy digit
verrauschten_neun = np.copy(bild_mit_ziffer_9) # copy the image with the digit 9
rauschen = np.zeros_like(bild_mit_ziffer_9) # create an empty image
bild_koordinaten = [np.random.randint(0, i - 1, 50) for i in rauschen[:,:,0].shape]
rauschen[bild_koordinaten] = 1
verrauschten_neun += rauschen
bild_koordinaten = [np.random.randint(0, i - 1, 50) for i in rauschen[:,:,0].shape]
rauschen[bild_koordinaten] = -1
verrauschten_neun += rauschen
verrauschten_neun = np.clip(verrauschten_neun,0,1)
plt.figure()
plt.imshow(bild_mit_ziffer_9[:,:,0], cmap=plt.cm.binary, vmin=0, vmax=1)
plt.title("Das ist eine richtige 9")
plt.show()
plt.figure()
plt.imshow(verrauschten_neun[:,:,0], cmap=plt.cm.binary, vmin=0, vmax=1)
plt.title("Das ist eine verrauschten 9")
plt.show()
from scipy.special import softmax
logits_des_nn_fuer_neun = model.predict(np.expand_dims(bild_mit_ziffer_9, 0))
wahrscheinlichkeiten_des_nn_fuer_neun = softmax(logits_des_nn_fuer_neun)[0]
erkannte_klasse_des_nn_fuer_neun = np.argmax(wahrscheinlichkeiten_des_nn_fuer_neun)
print('The NN recognizes the nine as ', erkannte_klasse_des_nn_fuer_neun, ' with a probability of ', wahrscheinlichkeiten_des_nn_fuer_neun[erkannte_klasse_des_nn_fuer_neun])
logits_des_nn_fuer_verrauschten_neun = model.predict(np.expand_dims(verrauschten_neun, 0))
wahrscheinlichkeiten_des_nn_fuer_verrauschten_neun = softmax(logits_des_nn_fuer_verrauschten_neun)[0]
erkannte_klasse_des_nn_fuer_verrauschten_neun = np.argmax(wahrscheinlichkeiten_des_nn_fuer_verrauschten_neun)
print('The NN recognizes the noisy nine as ', erkannte_klasse_des_nn_fuer_verrauschten_neun, ' with a probability of ', wahrscheinlichkeiten_des_nn_fuer_verrauschten_neun[erkannte_klasse_des_nn_fuer_verrauschten_neun])
###Output
_____no_output_____ |
notebooks/create metadata tables.ipynb | ###Markdown
Table of Contents
###Code
from planet4 import region_data, io, catalog_production
import pvl
from hirise_tools import products, downloads, labels
from osgeo import gdal
regions = ['Inca', 'Ithaca', 'Giza', 'Manhattan2', 'Starburst', 'Oswego_Edge',
'Maccelsfield', 'BuenosAires', 'Potsdam']
seasons = ['season2', 'season3']
def get_fraction_of_black_pixels(savepath):
ds = gdal.Open(str(savepath))
data = ds.ReadAsArray()
fractions = []
for band in data:
nonzeros = band.nonzero()[0]
fractions.append((band.size - nonzeros.size)/band.size)
return np.array(fractions).mean()
def get_labelpath(obsid):
prodid = products.PRODUCT_ID(obsid)
prodid.kind = 'COLOR'
labelpath = downloads.labels_root() / prodid.label_fname
return labelpath
def get_p4_hirise_label(obsid):
labelpath = get_labelpath(obsid)
if not labelpath.exists():
downloads.get_rdr_color_label(obsid)
return labels.HiRISE_Label(labelpath)
def calc_real_area(savepath, label):
    # `label` is the HiRISE label object (e.g. from get_p4_hirise_label); it was
    # previously read from an undefined global, which broke this helper.
    black_fraction = get_fraction_of_black_pixels(savepath)
    all_area = label.line_samples * label.lines * label.map_scale ** 2
    real_area = (1 - black_fraction) * all_area
    return real_area
def read_metadata(obsid):
label = get_p4_hirise_label(obsid)
d = dict(obsid=obsid, binning=label.binning,
l_s=label.l_s, line_samples=label.line_samples,
lines=label.lines, map_scale=label.map_scale)
# invalids=black_fraction, real_area=real_area)
# self calculated north azimuths
# folder = io.get_ground_projection_root()
# path = folder / obsid / f"{obsid}_campt_out.csv"
# df = pd.read_csv(path)
# d['north_azimuth'] = df.NorthAzimuth.median()
return d
obsid = 'ESP_011422_0930'
lbl = get_p4_hirise_label(obsid)
from planetpy.pdstools import indices
lbl = indices.IndexLabel("/Volumes/Data/hirise/EDRCUMINDEX.LBL")
edrindex = pd.read_hdf("/Volumes/Data/hirise/EDRCUMINDEX.hdf")
pd.set_option('display.max_columns', 100)
pd.set_option('display.max_rows', 100)
rm = catalog_production.ReleaseManager('v1.0')
obsids = rm.obsids
p4_edr = edrindex[edrindex.OBSERVATION_ID.isin(obsids)].query('CCD_NAME=="RED4"').drop_duplicates(subset='OBSERVATION_ID')
p4_edr.LINE_SAMPLES.value_counts()
p4_edr.BINNING.value_counts()
[i for i in p4_edr.columns if 'SCALE' in i]
p4_edr.SCALED_PIXEL_WIDTH.round(decimals=1).value_counts()
%matplotlib ipympl
plt.figure()
p4_edr.query('BINNING==2').SCALED_PIXEL_WIDTH.hist(bins=50)
from planet4 import metadata
NAs = metadata.get_north_azimuths_from_SPICE(obsids)
NAs.head()
p4_edr = p4_edr.set_index('OBSERVATION_ID').join(NAs.set_index('OBSERVATION_ID'))
p4_edr.head()
db = io.DBManager()
all_data = db.get_all()
no_of_tiles_per_obsid = all_data.groupby('image_name').image_id.nunique()
p4_edr = p4_edr.join(no_of_tiles_per_obsid)
p4_edr.head()
p4_edr.rename(dict(image_id="# of tiles"), axis=1,inplace=True)
p4_edr.head()
p4_edr.to_csv(rm.EDRINDEX_meta_path)
done_obsids = pd.read_csv("/Users/klay6683/local_data/planet4/done_obsids_n_tiles.csv",
index_col=0)
done_obsids.head()
obsids = done_obsids.image_name.unique()
%matplotlib ipympl
done_obsids.groupby('image_name').size().hist(bins=50)
mr = MetadataReader(obsids[0])
mr.data_dic
read_metadata(obsids[0])
from pathlib import Path
catalog = 'catalog_1.0b3'
from tqdm import tqdm
metadata = []
for img in tqdm(obsids):
metadata.append(read_metadata(img))
df = pd.DataFrame(metadata)
df.head()
savefolder = io.data_root / catalog
savepath = savefolder / "all_metadata.csv"
savepath
list(savefolder.glob("*.csv"))
df.to_csv(savepath, index=False)
name = "{}_metadata.csv".format(region.lower())
folder = io.analysis_folder() / catalog / region.lower()
folder.mkdir(exist_ok=True, parents=True)
fname = folder / name
pd.concat(bucket, ignore_index=True).to_csv(str(fname), index=False)
print("Created", fname)
###Output
_____no_output_____ |
step-detection-workshop.ipynb | ###Markdown
**AI and locomotion: human gait analysis** Introduction**Context**The study of human gait is a central problem in medical research with far-reaching consequences in the public health domain.This complex mechanism can be altered by a wide range of pathologies (such as Parkinson’s disease, arthritis, stroke,...), often resulting in a significant loss of autonomy and an increased risk of fall.Understanding the influence of such medical disorders on a subject's gait would greatly facilitate early detection and prevention of those possibly harmful situations.To address these issues, clinical and bio-mechanical researchers have worked to objectively quantify gait characteristics.Among the gait features that have proved their relevance in a medical context, several are linked to the notion of step (step duration, variation in step length, etc.), which can be seen as the core atom of the locomotion process.Many algorithms have therefore been developed to automatically (or semi-automatically) detect gait events (such as heel-strikes, heel-off, etc.) from accelerometer/gyrometer signals.Most of the time, the algorithms used for step detection are dedicated to a specific population (healthy subjects, elderly subjects, Parkinson patients, etc.) and only a few publications deal with heterogeneous populations composed of several types of subjects.Another limit to existing algorithms is that they often focus on locomotion in an established regime (once the subject has initiated its gait) and do not deal with steps during U-turn, gait initiation or gait termination.Yet, initiation and termination steps are particularly sensitive to pathological states.For example, the first step of Parkinsonian patients has been described as slower and smaller than the first step of age-matched subjects.U-turn steps are also interesting since 45% of daily living walking is made up of turning steps, and when compared to straight-line walking, turning has been emphasized as a high-risk fall situation.This argues for reliable algorithms that could detect initiation, termination and turning steps in both healthy and pathological subjects.**Step detection**The objective is to recognize the **start and end times of footsteps** contained in accelerometer and gyrometer signals recorded with Inertial Measurement Units (IMUs). Setup**Import**
###Code
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from loadmydata.load_human_locomotion import (
get_code_list,
load_human_locomotion_dataset,
)
from convsparsecoder import ConvSparseCoder
from locogram import get_locogram
from utils import pad_at_the_end, plot_CDL, plot_steps
###Output
_____no_output_____
###Markdown
**Utility functions**
###Code
def get_signal(sensor_data, dim_name="LRY") -> np.ndarray:
"""Select a signal from a given trial."""
# choose a single dimension
signal = sensor_data.signal[dim_name].to_numpy()
signal -= signal.mean()
signal /= signal.std()
return signal
def fig_ax(figsize=(15, 3)):
return plt.subplots(figsize=figsize)
###Output
_____no_output_____
###Markdown
Data description Data collection and clinical protocol ParticipantsThe data was collected between April 2014 and October 2015 by monitoring healthy (control) subjects and patients from several medical departments (see [publication](Publication) for more information).Participants are divided into three groups depending on their impairment:- **Healthy** subjects had no known medical impairment.- The **orthopedic group** is composed of 2 cohorts of distinct pathologies: lower limb osteoarthrosis and cruciate ligament injury.- The **neurological group** is composed of 4 cohorts: hemispheric stroke, Parkinson's disease, toxic peripheral neuropathy and radiation induced leukoencephalopathy.Note that certain participants were recorded on multiple occasions, therefore several trials may correspond to the same person.In the training set and in the testing set, the proportion of trials coming from the "healthy", "orthopedic" and "neurological" groups is roughly the same, 24%, 24% and 52% respectively. Protocol and equipmentAll subjects underwent the same protocol described below. First, a IMU (Inertial Measurement Unit) that recorded accelerations and angular velocities was attached to each foot.All signals have been acquired at 100 Hz with two brands of IMUs: XSens™ and Technoconcept®.One brand of IMU was attached to the dorsal face of each foot.(Both feet wore the same brand.)After sensor fixation, participants were asked to perform the following sequence of activities:- stand for 6 s,- walk 10 m at preferred walking speed on a level surface to a previously shown turn point,- turn around (without previous specification of a turning side),- walk back to the starting point,- stand for 2 s.Subjects walked at their comfortable speed with their shoes and without walking aid.This protocol is schematically illustrated in the following figure.Each IMU records its acceleration and angular velocity in the $(X, Y, Z, V)$ set of axes defined in the following figure.The $V$ axis is aligned with gravity, while the $X$, $Y$ and $Z$ axes are attached to the sensor. Step detection in a clinical contextThe following schema describes how step detection methods are integrated in a clinical context.(1) During a trial, sensors send their own acceleration and angular velocity to the physician's computer.(2) A software on the physician's computer synchronizes the data sent from both sensors and produces two multivariate signals (of same shape), each corresponding to a foot.A step detection procedure is applied on each signal to produce two lists of footsteps (one per foot/sensor).The numbers of left footsteps and right footsteps are not necessarily the same.Indeed, subjects often have a preferred foot to initiate and terminate a walk or a u-turn, resulting in one or more footsteps from this preferred foot.The starts and ends of footsteps are then used to create meaningful features to characterize the subject's gait. Data explorationDuring a trial, a subject executes the protocol described above.This produces two multivariates signals (one for each foot/sensor) and for each signal, a number of footsteps have be annotated.In addition, information (metadata) about the trial and participant are provided.All three elements (signal, step annotation and metadata) are detailled in this section.
###Code
# This wil download the data on the first run
trial_1 = load_human_locomotion_dataset("17-2")
trial_2 = load_human_locomotion_dataset("56-2")
print(trial_1.description)
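# Quick look at one standardized sensor channel using the helpers defined above
# ("LRY" = left foot, angular velocity around the Y axis, the default in get_signal).
signal = get_signal(trial_1, dim_name="LRY")
fig, ax = fig_ax()
_ = ax.plot(signal)
_ = ax.set_title("Trial 17-2, left foot angular velocity (Y axis)")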
###Output
_____no_output_____
###Markdown
SignalEach IMU that the participants wore provided $\mathbb{R}^{8}$-valued signals, sampled at 100 Hz.In this setting, each dimension is defined by the foot (`L` for left, `R` for right), the signal type (`A` for acceleration, `R` for angular velocity) and the axis (`X`, `Y`, `Z` or `V`).For instance, `RRX` denotes the angular velocity around the `X`-axis of the right foot.Accelerations are given in $m/s^2$ and angular velocities, in $deg/s$.The signal is available in the `.signal` attribute as a `Pandas` dataframe.Note that this multivariate signal originates from a two sensors (one on each foot).
###Code
# The signal is available in the `signal` attribute.
fig, (ax_0, ax_1) = plt.subplots(nrows=1, ncols=2, figsize=(20, 3))
# Here we show the left foot (`L`)
trial_1.signal[["LAX", "LAY", "LAZ", "LAV"]].plot(
ax=ax_0
) # select the accelerations
trial_1.signal[["LRX", "LRY", "LRZ", "LRV"]].plot(
ax=ax_1
) # select the angular velocities
trial_1.signal.head()
###Output
_____no_output_____
###Markdown
The "flat part" at the beginning of each dimension is the result of the participants standing still for a fewseconds before walking (see [Protocol](Protocol-and-equipment)).The same behaviour can be seen at the end of each dimension (often but not always), though for a quite smaller duration. MetadataA number of metadata (either numerical or categorical) are provided for each sensor recording, detailing the participant being monitored and the sensor position:- `trial_code`: unique identifier for the trial;- `age` (in years);- `gender`: male ("M") or female ("F");- `height` (in meters);- `weight` (in kilograms);- `bmi` (in kg/m2): body mass index;- `laterality`: subject's "footedness" or "foot to kick a ball" ("Left", "Right" or "Ambidextrous").- `sensor`: brand of the IMU used for the recording (“XSens” or “TCon”);- `pathology_group`: this variable takes value in {“Healthy”, “Orthopedic”, “Neurological”};- `is_control`: whether the subject is a control subject ("Yes" or "No");- `foot`: foot on which the sensor was attached ("Left" or "Right").These are accessible using the notation `sensor_data.metadata`.
###Code
print("-" * 10 + " Trial 1 " + "-" * 10)
print(trial_1.metadata)
print("-" * 10 + " Trial 2 " + "-" * 10)
print(trial_2.metadata)
###Output
_____no_output_____
###Markdown
Step annotation (the "label" to predict)Footsteps were manually annotated by specialists using software that displayed the signals from the relevant sensor (left or right foot) and allowed the specialist to indicate the starts and ends of each step.A footstep is defined as the period during which the foot is moving.Footsteps are separated by periods when the foot is still and flat on the floor.Therefore, in our setting, a footstep starts with a heel-off and ends with the following toe-strike of the same foot.Footsteps (the "label" to predict from the signal) are contained in a list whose elements are lists of two integers: the start and end indexes. For instance:
###Code
# left foot
print(trial_1.left_steps)
msg = f"{trial_1.left_steps.shape[0]} footsteps were annotated on the left foot, and {trial_1.right_steps.shape[0]} on the right."
print(msg)
# plot steps
plot_steps(sensor_data=trial_1, left_or_right="right", choose_step=3)
###Output
_____no_output_____
###Markdown
Visualization of footsteps and signals: **On the first two plots.**The repeated patterns (colored in light green) correspond to periods when the foot is moving.During the non-annotated periods, the foot is flat and not moving and the signals are constant.Generally, steps at the beginning and end of the recording, as well as during the u-turn (approximately in the middle of the signal, see [Protocol](Protocol-and-equipment)) are a bit different from the other ones.**On the last two plots.** A close-up on a single footstep. Locograms
###Code
_, (ax_left, ax_right) = plt.subplots(ncols=2, figsize=(15, 5))
locogram = get_locogram(sensor_data=trial_2, left_or_right="left")
_ = sns.heatmap(1 - locogram, ax=ax_left)
ax_left.set_title("Left foot")
locogram = get_locogram(sensor_data=trial_2, left_or_right="right")
_ = sns.heatmap(1 - locogram, ax=ax_right)
_ = ax_right.set_title("Right foot")
_, (ax_left, ax_right) = plt.subplots(ncols=2, figsize=(15, 5))
locogram = get_locogram(sensor_data=trial_1, left_or_right="left")
_ = sns.heatmap(1 - locogram, ax=ax_left)
ax_left.set_title("Left foot")
locogram = get_locogram(sensor_data=trial_1, left_or_right="right")
_ = sns.heatmap(1 - locogram, ax=ax_right)
_ = ax_right.set_title("Right foot")
###Output
_____no_output_____
###Markdown
Question Compare the left and right locograms. Step detection Toy example Simulate patterns
###Code
# fmt: off
# synthetic atoms
atom_1 = np.array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])
atom_2 = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.33066907e-16, 6.06060606e-02, 1.21212121e-01, 1.81818182e-01, 2.42424242e-01, 3.03030303e-01, 3.63636364e-01, 4.24242424e-01, 4.84848485e-01, 5.45454545e-01, 6.06060606e-01, 6.66666667e-01, 7.27272727e-01, 7.87878788e-01, 8.48484848e-01, 9.09090909e-01, 9.69696970e-01, 1.03030303e00, 1.09090909e00, 1.15151515e00, 1.21212121e00, 1.27272727e00, 1.33333333e00, 1.39393939e00, 1.45454545e00, 1.51515152e00, 1.57575758e00, 1.63636364e00, 1.69696970e00, 1.75757576e00, 1.81818182e00, 1.87878788e00, 1.93939394e00, 5.55111512e-16, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
# fmt: on
atom_width = atom_1.shape[0]
fig, (ax_left, ax_right) = plt.subplots(nrows=1, ncols=2, figsize=(10, 3))
ax_left.plot(atom_1, "k", lw=1)
ax_left.set_title("Atom 1")
ax_right.plot(atom_2, "k", lw=1)
_ = ax_right.set_title("Atom 2")
###Output
_____no_output_____
###Markdown
Simulate activations
###Code
n_samples = 1000
# random activations
activations_1 = np.random.binomial(n=1, p=0.005, size=n_samples)
activations_2 = np.random.binomial(n=1, p=0.001, size=n_samples)
fig, ax = fig_ax()
_ = ax.plot(activations_1, label="activations 1")
_ = ax.plot(activations_2, label="activations 2")
plt.legend()
signal = np.convolve(activations_1, atom_1, mode="same")
signal += np.convolve(activations_2, atom_2, mode="same")
fig, ax = fig_ax()
_ = ax.plot(signal)
###Output
_____no_output_____
###Markdown
Sparse codingThe optimization problem to find the activations is:$$ \mathbf{Z}^\star = \arg\min_{(\mathbf{z}_k)} \left\| \mathbf{x} - \sum_{k=1}^K (\mathbf{z}_k \star \mathbf{d}_k )\right\|_2^2 \quad + \quad \lambda \sum_{k=1}^K \|\mathbf{z}_k \|_1$$where- $\mathbf{x}=[x_1, x_2, \dots, x_T]$ is a univariate signal with $T$ samples;- $\mathbf{d}_k$ ($k=1,\dots,K$) are $K$ patterns of length $L$;- $\mathbf{z}_k$ of length $T-L+1$ is the activation signal of pattern $\mathbf{d}_k$;- $\lambda>0$ controls the regularization.
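The `ConvSparseCoder` used below is taken as given; its internals are not shown in this notebook. Purely as an illustration of the objective above, here is a minimal ISTA-style sketch (an assumption on my part, not necessarily the algorithm `ConvSparseCoder` implements): each activation gets a gradient step on the quadratic term followed by soft-thresholding, the proximal operator of the $\ell_1$ penalty. The step size must be small enough for the iteration to converge.
###Code
# illustrative ISTA sketch for the convolutional lasso above (not ConvSparseCoder's actual code)
def soft_threshold(z, thresh):
    # proximal operator of the l1 norm
    return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
def ista_sparse_code(x, atoms_, penalty, step=1e-3, n_iter=200):
    # x: (T,) signal, atoms_: (K, L) patterns; returns codes of shape (K, T - L + 1)
    K, L = atoms_.shape
    T = x.shape[0]
    codes = np.zeros((K, T - L + 1))
    for _ in range(n_iter):
        # current reconstruction: sum_k z_k * d_k (full convolution has length T)
        recon = sum(np.convolve(codes[k], atoms_[k], mode="full") for k in range(K))
        residual = x - recon
        for k in range(K):
            # gradient of the quadratic term w.r.t. z_k is a correlation with d_k
            grad = -2 * np.correlate(residual, atoms_[k], mode="valid")
            codes[k] = soft_threshold(codes[k] - step * grad, step * penalty)
            # codes[k] = np.maximum(codes[k], 0)  # uncomment to mimic positive_code=True
    return codes
###Output
_____no_output_____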
###Code
# concatenate atoms
atoms = np.c_[atom_1, atom_2].T
# sparse coding
coder = ConvSparseCoder(atoms=atoms, positive_code=True).fit(
signal=signal, penalty=10
)
sparse_codes = coder.sparse_codes
reconstruction = coder.predict()
###Output
_____no_output_____
###Markdown
Look at the reconstruction.
###Code
fig, ax = fig_ax()
ax.plot(signal, label="Original")
ax.plot(reconstruction, label="Reconstruction")
ax.set_title(f"MSE: {((signal-reconstruction)**2).mean():.3f}")
_ = plt.legend()
###Output
_____no_output_____
###Markdown
Look at activations.
###Code
plot_CDL(
signal,
codes=sparse_codes,
atoms=atoms,
)
###Output
_____no_output_____
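###Markdown
Before the question below, one simple way to probe the role of the penalty is to refit the coder for several values and track how many activations remain non-zero together with the reconstruction error. This is only a sketch that reuses the `atoms`, `signal` and `ConvSparseCoder` objects defined above; the penalty values are arbitrary.
###Code
# sweep the l1 penalty and report sparsity vs. reconstruction error (illustrative values)
for penalty in [1, 10, 50, 100]:
    coder_sweep = ConvSparseCoder(atoms=atoms, positive_code=True).fit(signal=signal, penalty=penalty)
    n_active = int((np.abs(coder_sweep.sparse_codes) > 1e-8).sum())
    mse = ((signal - coder_sweep.predict()) ** 2).mean()
    print(f"penalty={penalty:>4}  non-zero activations={n_active:>4}  MSE={mse:.3f}")
###Output
_____no_output_____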
###Markdown
Question What do you observe when the penalty increases? On real-world dataWe apply the same methodology on the Gait data set. We only consider two trials with different walking patterns.
###Code
signal_1 = get_signal(trial_1, dim_name="LRY")
signal_2 = get_signal(trial_2, dim_name="LRY")
# take an arbitrary footstep
start, end = trial_1.left_steps[4]
template_1 = signal_1[start:end]
template_1 -= template_1.mean()
template_1 /= template_1.std()
start, end = trial_2.left_steps[7]
template_2 = signal_2[start:end]
template_2 -= template_2.mean()
template_2 /= template_2.std()
# Pad the atoms to the same length
template_length = max(template_1.shape[0], template_2.shape[0])
template_1 = pad_at_the_end(template_1, desired_length=template_length)
template_2 = pad_at_the_end(template_2, desired_length=template_length)
# instantiate the convolutional sparse coder
coder = ConvSparseCoder(
atoms=np.c_[template_1, template_2].T, positive_code=True
)
plt.plot(coder.atoms[0], label="Atom 1")
plt.plot(coder.atoms[1], label="Atom 2")
_ = plt.legend()
# Sparse coding
reconstruction_1 = coder.fit(signal=signal_1, penalty=20).predict()
# Plot
plot_CDL(
signal_1,
codes=coder.sparse_codes,
atoms=coder.atoms,
)
# Sparse coding
reconstruction_2 = coder.fit(signal=signal_2, penalty=80).predict()
# Plot
plot_CDL(
signal_2,
codes=coder.sparse_codes,
atoms=coder.atoms,
)
###Output
_____no_output_____ |
community/awards/teach_me_quantum_2018/intro2qc/10.Quantum error correction.ipynb | ###Markdown
10. Quantum error correctionIn the last chapter, we saw how the impossibility of copying an unknown quantum state adds a layer of security to quantum communication when compared to classical ones. At first sight, this feature turns into a drawback once we step into the realm of error correction. In fact, classically one can protect information by making many copies of the bits of interest. The probability that an error will happen to all the copies quickly becomes extremely unlikely with an increasing number of copies. However, the no-cloning theorem prohibits us from using the same technique for quantum bits.Interestingly, error correcting procedures also exist in quantum computation. In this case, quantum entanglement and quantum superposition are used as resources to fight against unwanted errors. In this chapter, we take a closer look at quantum error correction for the simple cases of a bit-flip error and a phase error, following a five-qubit protocol outlined in Ref. [1]. 10.1 The bit-flip error in the three-bit code$$\text{1. Error correcting circuit for a bit-flip error.}$$Let us analyze the case of transmission of quantum information, in the form of a qubit, through a noisy channel. Because of the presence of noise, an error could occur during the communication which could change the state of the qubit in an unpredictable way. We must be able to restore its state without measuring the qubit.Let us make a few assumptions. First of all, we are going to assume that if more than one qubit is transferred, errors happen independently on each qubit. Also, here we are going to consider only single bit-flip ($X$) errors.Take the qubit we want to transfer to be in the state\begin{equation}\lvert \psi \rangle= \alpha\lvert 0\rangle +\beta \lvert 1\rangle\end{equation}As a first step, we encode this qubit in a three-qubit entangled state using two extra ancilla qubits in the state $\lvert 0 \rangle$ and CNOT gates from the first qubit to the ancillas ($CX_{12}CX_{13}$)\begin{equation}\lvert\psi_1\rangle=\alpha\lvert000\rangle+\beta\lvert111\rangle\end{equation}The three qubits are then sent through the noisy communication channel and retrieved on the receiver's side. During the communication, a bit-flip error could have occurred on any of the three qubits giving one of the following states$\lvert \psi_{2} \rangle =\alpha\lvert 000\rangle+\beta\lvert 111\rangle$$\lvert \psi_{2} \rangle =\alpha\lvert100\rangle+\beta\lvert011\rangle$$\lvert\psi_{2} \rangle =\alpha\lvert010\rangle+\beta\lvert101\rangle$$\lvert\psi_{2} \rangle =\alpha\lvert001\rangle+\beta\lvert110\rangle$In order to correct the state received at the other end of the quantum communication channel from possible errors, two more ancilla qubits in the state $\lvert 0\rangle$ are added to the system. Moreover, they are entangled with the three qubits received by using the following gates: $CX_{14}CX_{24}CX_{15}CX_{35}$. The state of the system is now$\lvert\psi_{3} \rangle =\alpha \lvert000 \rangle \lvert 00\rangle+\beta\lvert111 \rangle \lvert 00\rangle$$\lvert\psi_{3} \rangle =\alpha\lvert001 \rangle \lvert 01\rangle+\beta\lvert110 \rangle \lvert 01\rangle$$\lvert\psi_{3} \rangle =\alpha\lvert010 \rangle \lvert 10\rangle+\beta\lvert101 \rangle \lvert 10\rangle$$\lvert\psi_{3} \rangle =\alpha\lvert100 \rangle \lvert 11\rangle+\beta\lvert011 \rangle \lvert 11\rangle$The two ancilla qubits just added are then measured. They give the "error syndrome", which can be used to diagnose and then correct any error that occurred on the three qubits. 
There are four possible outcomes for the error syndrome:$00$: $\lvert\psi_{4} \rangle =\alpha \lvert000 \rangle +\beta\lvert111 \rangle $$01$: $\lvert\psi_{4} \rangle =\alpha\lvert001 \rangle +\beta\lvert110 \rangle$$10$: $\lvert\psi_{4} \rangle =\alpha\lvert010 \rangle +\beta\lvert101 \rangle$$11$: $\lvert\psi_{4} \rangle =\alpha\lvert100 \rangle +\beta\lvert011 \rangle$Therefore, the state of the three qubits can be corrected by applying the corresponding gate$00$: No correction is needed$01$: Apply $X$ gate to the third qubit$10$: Apply $X$ gate to the second qubit$11$: Apply $X$ gate to the first qubitAfter which the state of the system will be\begin{equation}\lvert\psi_5\rangle=\alpha\lvert000\rangle+\beta\lvert111\rangle\end{equation}As originally sent. Now, to find the state of the qubit which was intended to be transferred, we decode it from the three-qubit state by disentangling the qubit with the two ancillas. Thus we apply $CX_{12}CX_{13}$ and obtain\begin{equation}\lvert\psi_6\rangle = \alpha\lvert0\rangle+\beta\lvert1\rangle = \lvert\psi \rangle\end{equation}We have successfully communicated the state of the qubit $\lvert\psi \rangle$ through a noisy channel, protecting it from any single bit-flip error. QISKit: implement the quantum error correction code for bit-flip error 1) Correct an X error on the second qubit in the state $ \frac{1}{\sqrt{2}}\left( \lvert 0 \rangle + \lvert 1 \rangle \right)$
###Code
from qiskit import *
# Quantum program setup
Q_program = QuantumProgram()
# Creating registers
q = Q_program.create_quantum_register('q', 5)
c0 = Q_program.create_classical_register('c0', 1)
c1 = Q_program.create_classical_register('c1', 1)
c2 = Q_program.create_classical_register('c2', 1)
# Creates the quantum circuit
bit_flip = Q_program.create_circuit('bit_flip', [q], [c0,c1,c2])
# Prepares qubit in the desired initial state
bit_flip.h(q[0])
# Encodes the qubit in a three-qubit entangled state
bit_flip.cx(q[0], q[1])
bit_flip.cx(q[0], q[2])
# Bit-flip error on the second qubit
bit_flip.x(q[1])
# Adds additional two qubits for error-correction
bit_flip.cx(q[0], q[3])
bit_flip.cx(q[1], q[3])
bit_flip.cx(q[0], q[4])
bit_flip.cx(q[2], q[4])
# Measure the two additional qubits
bit_flip.measure(q[3], c0[0])
bit_flip.measure(q[4], c1[0])
# Do error correction
bit_flip.x(q[1]).c_if(c0, 1)
bit_flip.x(q[2]).c_if(c1, 1)
# Decodes the qubit from the three-qubit entangled state
bit_flip.cx(q[0], q[1])
bit_flip.cx(q[0], q[2])
# Check the state of the initial qubit
bit_flip.measure(q[0], c2[0])
# Shows gates of the circuit
circuits = ['bit_flip']
print(Q_program.get_qasms(circuits)[0])
# Parameters for execution on simulator
backend = 'local_qasm_simulator'
shots = 1024 # the number of shots in the experiment
# Run the algorithm
result = Q_program.execute(circuits, backend=backend, shots=shots)
#Shows the results obtained from the quantum algorithm
counts = result.get_counts('bit_flip')
print('\nThe measured outcomes of the circuits are:', counts)
###Output
OPENQASM 2.0;
include "qelib1.inc";
qreg q[5];
creg c0[1];
creg c1[1];
creg c2[1];
h q[0];
cx q[0],q[1];
cx q[0],q[2];
x q[1];
cx q[0],q[3];
cx q[1],q[3];
cx q[0],q[4];
cx q[2],q[4];
measure q[3] -> c0[0];
measure q[4] -> c1[0];
if(c0==1) x q[1];
if(c1==1) x q[2];
cx q[0],q[1];
cx q[0],q[2];
measure q[0] -> c2[0];
The measured outcomes of the circuits are: {'0 0 1': 501, '1 0 1': 523}
###Markdown
10.2 The phase error in the three-bit code$$\text{2. Error correcting circuit for a phase error.}$$Consider now the case of a phase error. Phase error transforms the basis states as\begin{eqnarray}\lvert 0 &\rangle&\rightarrow U\lvert0\rangle= e^{i\phi}\lvert 0 \rangle \notag \\\lvert 1 &\rangle&\rightarrow U\lvert1\rangle=e^{-i\phi}\lvert 1 \rangle\end{eqnarray}Let us see how we can protect a qubit in the generic state $\lvert \psi \rangle=\lvert \alpha \lvert0\rangle+ \beta \lvert1\rangle$ from this kind of errors. We are going to assume again that phase error occurs independently on each qubit and that only single phase error may occur.Before sending the qubit through the noisy channel, we encode it in an entangled state similar to the previous one by using two ancillas in the state $\lvert 0 \rangle$. To do so we apply $CX_{12}CX_{13}$ and then Hadamard gate on all three qubits, obtaining the encoded state\begin{eqnarray}\lvert \psi_1 \rangle = \frac{1}{\sqrt{2^3}}\left\{ \alpha \left[ (\lvert0\rangle+\lvert1\rangle)(\lvert0\rangle+\lvert1\rangle)(\lvert0\rangle+\lvert1\rangle) \right] + \beta \left[ (\lvert0\rangle-\lvert1\rangle)(\lvert0\rangle-\lvert1\rangle)(\lvert0\rangle-\lvert1\rangle) \right] \right\}\end{eqnarray}The three qubits are then sent through the noisy communication channel and retrieved on the receiver's side. During the communication, a phase error could have occurred on any of the three qubits giving one of the following states$\lvert \psi_{2} \rangle = \frac{1}{\sqrt{2^3}}\left\{ \alpha \left[ (\lvert0\rangle+\lvert1\rangle)(\lvert0\rangle+\lvert1\rangle)(\lvert0\rangle+\lvert1\rangle) \right] + \beta \left[ (\lvert0\rangle-\lvert1\rangle)(\lvert0\rangle-\lvert1\rangle)(\lvert0\rangle-\lvert1\rangle) \right] \right\}$$\lvert \psi_{2} \rangle = \frac{1}{\sqrt{2^3}}\left\{ \alpha \left[ (e^{i\phi}\lvert0\rangle+e^{-i\phi}\lvert1\rangle)(\lvert0\rangle+\lvert1\rangle)(\lvert0\rangle+\lvert1\rangle) \right] + \beta \left[ (e^{i\phi} \lvert0\rangle-e^{-i\phi}\lvert1\rangle)(\lvert0\rangle-\lvert1\rangle)(\lvert0\rangle-\lvert1\rangle) \right] \right\}$$\lvert \psi_{2} \rangle =\frac{1}{\sqrt{2^3}}\left\{ \alpha \left[ (\lvert0\rangle+\lvert1\rangle)(e^{i\phi}\lvert0\rangle+e^{-i\phi}\lvert1\rangle)(\lvert0\rangle+\lvert1\rangle) \right] + \beta \left[ (\lvert0\rangle-\lvert1\rangle)(e^{i\phi}\lvert0\rangle-e^{-i\phi}\lvert1\rangle)(\lvert0\rangle-\lvert1\rangle) \right] \right\}$$\lvert \psi_{2} \rangle = \frac{1}{\sqrt{2^3}}\left\{ \alpha \left[ (\lvert0\rangle+\lvert1\rangle)(\lvert0\rangle+\lvert1\rangle)(e^{i\phi} \lvert0\rangle+e^{-i\phi}\lvert1\rangle) \right] + \beta \left[ (\lvert0\rangle-\lvert1\rangle)(\lvert0\rangle-\lvert1\rangle)(e^{i\phi} \lvert0\rangle-e^{-i\phi}\lvert1\rangle) \right] \right\}$In order to correct the state received at the other end of the quantum communication channel from possible errors, we first apply the Hadamard gate on all three qubits again. 
Giving:$\lvert \psi_{3} \rangle = \alpha \lvert000\rangle + \beta \lvert111\rangle$$\lvert \psi_3 \rangle = \frac{1}{\sqrt{2^3}}\left\{ \alpha \left[ (e^{i\phi}\lvert0\rangle + e^{i \phi} \lvert1\rangle + e^{-i\phi}\lvert0\rangle - e^{-i\phi}\lvert1\rangle) \lvert00\rangle \right] + \beta \left[ (e^{i\phi}\lvert0\rangle + e^{i\phi}\lvert1\rangle - e^{-i\phi}\lvert 0 \rangle + e^{-i\phi}\lvert 1 \rangle) \lvert11\rangle \right] \right\}$$\lvert \psi_{3} \rangle =\frac{1}{\sqrt{2^3}}\left\{ \alpha \left[ \lvert0\rangle (e^{i\phi}\lvert0\rangle + e^{i \phi} \lvert1\rangle + e^{-i\phi}\lvert0\rangle - e^{-i\phi}\lvert1\rangle) \lvert0\rangle \right] + \beta \left[ \lvert1\rangle (e^{i\phi}\lvert0\rangle + e^{i \phi} \lvert1\rangle + e^{-i\phi}\lvert0\rangle - e^{-i\phi}\lvert1\rangle) \lvert1\rangle \right] \right\}$$\lvert \psi_3 \rangle = \frac{1}{\sqrt{2^3}}\left\{ \alpha \left[ \lvert00\rangle (e^{i\phi}\lvert0\rangle + e^{i \phi} \lvert1\rangle + e^{-i\phi}\lvert0\rangle - e^{-i\phi}\lvert1\rangle) \right] + \beta \left[ \lvert11\rangle (e^{i\phi}\lvert0\rangle + e^{i\phi}\lvert1\rangle - e^{-i\phi}\lvert 0 \rangle + e^{-i\phi}\lvert 1 \rangle) \right] \right\}$Expanding the phase as $e^{i \phi} = \cos \phi + i \sin \phi$ and simplifying terms we obtain$\lvert \psi_{3} \rangle = \alpha \lvert000\rangle + \beta \lvert111\rangle$$\lvert \psi_3 \rangle =\alpha(\cos \phi \lvert000 \rangle+i\sin \phi \lvert100\rangle)+\beta(\cos \phi \lvert111\rangle+i\sin \phi \lvert011\rangle)$$\lvert \psi_3 \rangle =\alpha(\cos \phi \lvert000 \rangle+i\sin \phi \lvert010\rangle)+\beta(\cos \phi \lvert111\rangle+i\sin \phi \lvert101\rangle)$$\lvert \psi_3 \rangle =\alpha(\cos \phi \lvert000 \rangle+i\sin \phi \lvert001\rangle)+\beta(\cos \phi \lvert111\rangle+i\sin \phi \lvert110\rangle)$Two more ancilla qubits in the state $\lvert 0\rangle$ are added to the system. Moreover, they are entangled with the three bit received by using the following gates: $CX_{14}CX_{24}CX_{15}CX_{35}$. The state of the system is now$\lvert\psi_{4} \rangle =\alpha \lvert000 \rangle \lvert 00\rangle+\beta\lvert111 \rangle \lvert 00\rangle$$\lvert\psi_{4} \rangle =\alpha(\cos \phi \lvert000 \rangle \lvert 00\rangle+i\sin \phi \lvert100 \rangle \lvert 11\rangle)+\beta(\cos \phi \lvert111 \rangle \lvert 00\rangle+i\sin \phi \lvert011 \rangle \lvert 11\rangle)$$\lvert\psi_{4} \rangle =\alpha(\cos \phi \lvert000 \rangle \lvert 00\rangle+i\sin \phi \lvert010 \rangle \lvert 10\rangle)+\beta(\cos \phi \lvert111 \rangle \lvert 00\rangle+i\sin \phi \lvert101 \rangle \lvert 10\rangle)$$\lvert\psi_{4} \rangle =\alpha(\cos \phi \lvert000 \rangle \lvert 00\rangle+i\sin \phi \lvert001 \rangle \lvert 01\rangle)+\beta(\cos \phi \lvert111 \rangle \lvert 00\rangle+i\sin \phi \lvert110 \rangle \lvert 01\rangle)$The two ancilla qubits just added are then measured. They give the "error syndrome", which can be used to diagnose and then correct any error occurred on the three qubits. 
There are four possible outcomes for the error syndrome:$00$: $\lvert\psi_{5} \rangle =\alpha \lvert000 \rangle +\beta\lvert111 \rangle $$11$: $\lvert\psi_{5} \rangle =\alpha\lvert100 \rangle +\beta\lvert011 \rangle$$10$: $\lvert\psi_{5} \rangle =\alpha\lvert010 \rangle +\beta\lvert101 \rangle$$01$: $\lvert\psi_{5} \rangle =\alpha\lvert001 \rangle +\beta\lvert110 \rangle$Therefore, the state of the three qubits can be corrected by applying the corresponding gate$00$: No correction is needed$11$: Apply $X$ gate to the first qubit$10$: Apply $X$ gate to the second qubit$01$: Apply $X$ gate to the third qubitAfter which the state of the system will be\begin{equation}\lvert\psi_6\rangle=\alpha\lvert000\rangle+\beta\lvert111\rangle\end{equation}As originally sent. Now, to find the state of the qubit which was intended to be transferred, we decode it from the three-qubit state by disentangling the qubit with the two ancillas. Thus we apply $CX_{12}CX_{13}$ and obtain\begin{equation}\lvert\psi_7\rangle = \alpha\lvert0\rangle+\beta\lvert1\rangle = \lvert\psi \rangle\end{equation}We have successfully communicated the state of the qubit $\lvert\psi \rangle$ through a noisy channel, protecting it from any single phase error. QISKit: implement the quantum error correction code for phase error 2) Correct a Z error on the second qubit in the state $ \frac{1}{\sqrt{2}}\left( \lvert 0 \rangle + \lvert 1 \rangle \right)$
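Before building the circuit, it is worth checking numerically the identity behind this scheme: a $Z$ (phase) error sandwiched between two Hadamard gates acts exactly like an $X$ (bit-flip) error, i.e. $HZH = X$, which is why the bit-flip machinery applies once we switch to the Hadamard basis. The small check below only assumes `numpy`.
###Code
import numpy as np
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
print(np.allclose(H @ Z @ H, X))  # True: a phase flip in the Hadamard basis is a bit flip
###Output
_____no_output_____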
###Code
from qiskit import *
# Quantum program setup
Q_program = QuantumProgram()
# Creating registers
q = Q_program.create_quantum_register('q', 5)
c0 = Q_program.create_classical_register('c0', 1)
c1 = Q_program.create_classical_register('c1', 1)
c2 = Q_program.create_classical_register('c2', 1)
# Creates the quantum circuit
bit_flip = Q_program.create_circuit('bit_flip', [q], [c0,c1,c2])
# Prepares qubit in the desired initial state
bit_flip.h(q[0])
# Encodes the qubit in a three-qubit entangled state
bit_flip.cx(q[0], q[1])
bit_flip.cx(q[0], q[2])
# Go to Hadamard basis
bit_flip.h(q[0])
bit_flip.h(q[1])
bit_flip.h(q[2])
# Phase error on the second qubit
bit_flip.z(q[1])
# Converts phase error in bit-flip error
bit_flip.h(q[0])
bit_flip.h(q[1])
bit_flip.h(q[2])
# Adds additional two qubits for error-correction
bit_flip.cx(q[0], q[3])
bit_flip.cx(q[1], q[3])
bit_flip.cx(q[0], q[4])
bit_flip.cx(q[2], q[4])
# Measure the two additional qubits
bit_flip.measure(q[3], c0[0])
bit_flip.measure(q[4], c1[0])
# Do error correction
bit_flip.x(q[1]).c_if(c0, 1)
bit_flip.x(q[2]).c_if(c1, 1)
# Decodes the qubit from the three-qubit entangled state
bit_flip.cx(q[0], q[1])
bit_flip.cx(q[0], q[2])
# Check the state of the initial qubit
bit_flip.measure(q[0], c2[0])
# Shows gates of the circuit
circuits = ['bit_flip']
print(Q_program.get_qasms(circuits)[0])
# Parameters for execution on simulator
backend = 'local_qasm_simulator'
shots = 1024 # the number of shots in the experiment
# Run the algorithm
result = Q_program.execute(circuits, backend=backend, shots=shots)
#Shows the results obtained from the quantum algorithm
counts = result.get_counts('bit_flip')
print('\nThe measured outcomes of the circuits are:', counts)
###Output
OPENQASM 2.0;
include "qelib1.inc";
qreg q[5];
creg c0[1];
creg c1[1];
creg c2[1];
h q[0];
cx q[0],q[1];
cx q[0],q[2];
h q[0];
h q[1];
h q[2];
z q[1];
h q[0];
h q[1];
h q[2];
cx q[0],q[3];
cx q[1],q[3];
cx q[0],q[4];
cx q[2],q[4];
measure q[3] -> c0[0];
measure q[4] -> c1[0];
if(c0==1) x q[1];
if(c1==1) x q[2];
cx q[0],q[1];
cx q[0],q[2];
measure q[0] -> c2[0];
The measured outcomes of the circuits are: {'0 0 1': 550, '1 0 1': 474}
|
TwitterModules.ipynb | ###Markdown
Twitter API using CNN Step 1: Load Necessary Libraries
###Code
import NNClassify
import random
import json
from urllib import request
import shutil
import matplotlib.pyplot as plt
import pickle
from twython import Twython
import glob
import tweepy
import requests
import time
import string
import oauth2 as oauth
from auth import (
consumer_key,
consumer_secret,
access_token,
access_token_secret
)
NNClassify.PredictFolder('dankest_memes')
img_ext = ['jpg', 'png', 'jpeg']
CIredditURL = 'http://reddit.com/r/blursedimages/.json'
def isImageURL(post):
for ext in img_ext:
if(post['data']['url'].endswith(ext)):
return True
return False
def isOver18URL(post):
return post['data']['over_18']
def getRedditJSON(redditURL):
try:
f = request.urlopen(redditURL)
except:
print("Could not open reddit json! perhaps too many requests...")
else:
json_dat = bytes.decode(f.read())
json_dat = json.loads(json_dat)
pickle.dump(json_dat, open('reddit_info.json', 'wb'))
def GetRandomImage():
json_dat = pickle.load(open('reddit_info.json', 'rb'))
imgurl = []
posts = json_dat['data']['children']
for post in posts:
if((not isOver18URL(post)) and isImageURL(post)):
imgurl.append(post['data']['url'])
rand_url = random.sample(imgurl, 1)[0]
request.urlretrieve(rand_url, './random_img/temp_img.' + rand_url.split('.')[-1])
return './random_img/temp_img.' + rand_url.split('.')[-1]
getRedditJSON(CIredditURL)
img_path = (GetRandomImage())
print(img_path)
curse_image = plt.imread(img_path)
plt.figure()
plt.imshow(curse_image)
plt.show()
print(NNClassify.PredictImage(img_path))
###Output
./random_img/temp_img.jpg
###Markdown
Generate functions to grab images from the associated files with appropriate priority
###Code
def getSubmitterName(fname):
if(len(fname.split('.')) < 3):
return 'none'
else:
return fname.split('.')[-2]
def QueueIsEmpty():
submissions = glob.glob("./submissions/*")
curated = glob.glob("./curated/*")
if (len(submissions) != 0) or (len(curated) != 0):
return False
else:
return True
def GetImage():
submissions = glob.glob("./submissions/*")
curated = glob.glob("./curated/*")
if(len(submissions) != 0):
img_path = random.sample(submissions, 1)[0]
submitter = getSubmitterName(img_path)
return [img_path, submitter]
elif(len(curated) != 0):
img_path = random.sample(curated, 1)[0]
return [img_path, 'none']
else:
getRedditJSON(CIredditURL)
getRedditJSON(CIredditURL)
return [GetRandomImage(), 'none']
def RemoveFromQueue(img_path):
fname = img_path.split('\\')[-1]
shutil.move("submissions/" + fname, "submissions_completed/" + fname)
print(QueueIsEmpty())
print(GetImage())
def postTweet():
twitter = Twython(
consumer_key,
consumer_secret,
access_token,
access_token_secret
)
img_data = GetImage()
img_path = img_data[0]
submitter = img_data[1]
(certainty, img_classification) = NNClassify.PredictImageWithCertainty(img_path)
message = "I predict this image to be: " + img_classification + "!" + " (With " + str(int(certainty * 100)) + "% certainty)"
if(submitter != 'none'):
message = message + ' Submission by @' + submitter
img_post = open(img_path, 'rb')
response = twitter.upload_media(media=img_post)
media_id = [response['media_id']]
twitter.update_status(status=message, media_ids=media_id)
img_post.close()
if(not QueueIsEmpty()):
RemoveFromQueue(img_path)
def getSubmitterName(fname):
if(len(fname.split('.')) < 3):
return 'none'
else:
return fname.split('.')[-2]
for file in glob.glob('submissions/*'):
print(file, '\t\t', getSubmitterName(file))
###Output
submissions\123d23bea9e4007f182abf0eb93a93f46494813b_hq.jpg none
submissions\1507929051951.png none
submissions\1562790405936.jpg none
submissions\1563209869559.jpg none
submissions\58423682_318993072125277_6680615820099686013_n.jpg none
submissions\DvnbP7LU0AAgUcp.toasterpost.jpg toasterpost
submissions\D_w3oBvXYAEmbXU.jpg none
submissions\D__MUYDWkAAnwr3.jpg none
submissions\glenda_and_tux.toasterpost.jpg toasterpost
submissions\gordon-the-gopher.png none
submissions\img_2700.jpg none
submissions\rf33fj70nlf11.jpg none
###Markdown
Automate Direct Messages
###Code
def get_message_response_no_pagination(**kwargs):
twitter_response = twitter.request(
'https://api.twitter.com/1.1/direct_messages/events/list.json',
method='GET',
params=kwargs,
version='1.1'
)
return twitter_response
def get_messages(threshold=40):
    # Page through the DM events endpoint until `threshold` messages are collected
    # or no further page is available (the response then has no 'next_cursor').
    response = get_message_response_no_pagination()
    messages = response['events']
    while len(messages) < threshold and 'next_cursor' in response:
        nc = response['next_cursor']
        response = get_message_response_no_pagination(cursor=nc)
        messages = messages + response['events']
    return messages
twitter = Twython(
consumer_key,
consumer_secret,
access_token,
access_token_secret
)
msgs = get_messages()
print(' ')
print(msgs[6]['message_create']['message_data']['attachment']['media']['media_url'])
print(' ')
print('message count: ', len(msgs))
print(' ')
for message in msgs:
print('msg id:', message['id'])
print('sender id:', message['message_create']['sender_id'])
print(message['message_create']['message_data']['text'])
if 'attachment' in message['message_create']['message_data'] :
img_url = message['message_create']['message_data']['attachment']['media']['media_url']
img_name = img_url.split('/')[-1]
print('image attached:', img_url)
print('image name:', img_name)
print(' ')
def list_saved_images(folders):
files = []
for folder in folders:
files = files + glob.glob("./" + folder + "/*")
for i in range(len(files)):
files[i] = files[i].split('\\')[-1]
return files
#toasterpost id = 1151106628793655297
def get_screen_name(userid, twitter):
uname_response = twitter.request(
'https://api.twitter.com/1.1/users/show.json',
method='GET',
params={'user_id' : userid},
version='1.1'
)
return uname_response['screen_name']
def is_in_list(needle, haystack):
for straw in haystack:
if needle == straw:
return True
return False
def send_direct_message(userid, msg, twitter):
twitter_response = twitter.request(
'https://api.twitter.com/1.1/direct_messages/events/new.json',
method='POST',
params='{"event": {"type": "message_create", '
'"message_create": {"target": {"recipient_id": "'+userid+'"}, '
'"message_data": {"text": "'+msg+'"}}}}',
version='1.1'
)
return twitter_response
def genRandomString(length):
letters = string.ascii_letters + string.digits
return ''.join(random.choice(letters) for i in range(length))
print(genRandomString(32))
folders = ['submissions', 'submissions_completed', 'submissions_rejected', 'submissions_unmoderated']
def grab_submissions():
twitter = Twython(
consumer_key,
consumer_secret,
access_token,
access_token_secret
)
# 1. Grab all new messages
msgs_dict = get_messages()
# 2. Iterate through the messages
for message in msgs_dict:
# 3. Check if the message has a image attached
if 'attachment' in message['message_create']['message_data'] :
# 4. If it does, get the name of the image
img_url = message['message_create']['message_data']['attachment']['media']['media_url']
img_name = img_url.split('/')[-1]
img_sender_id = message['message_create']['sender_id']
img_sender_sn = get_screen_name(img_sender_id, twitter)
img_name = img_name.split('.')[-2] + '.' + img_sender_sn + '.' + img_name.split('.')[-1]
# 5. Check if the image already exists in one of the folders
saved_images = list_saved_images(folders)
#print(img_name, is_in_list(img_name, saved_images))
if not is_in_list(img_name, saved_images):
# 6. If it does not, download the image
consumer2 = oauth.Consumer(key=consumer_key, secret=consumer_secret)
token2 = oauth.Token(key=access_token, secret=access_token_secret)
client = oauth.Client(consumer2, token2)
response, data = client.request(img_url)
print(type(data))
f = open('./submissions_unmoderated/' + img_name, 'wb')
f.write(data)
f.close()
# 7. Send a message to the sender that his image is being moderated
submission_message = "Your image has been sent to our moderators! Once approved, it will be posted within a day or two!"
send_direct_message(img_sender_id, submission_message, twitter)
else:
continue
else:
continue
url_tmp = 'https://ton.twitter.com/i/ton/data/dm/1156127413417336836/1156127373470781440/IMenY4pb.jpg'
img_name = 'test.jpg'
consumer2 = oauth.Consumer(key=consumer_key, secret=consumer_secret)
token2 = oauth.Token(key=access_token, secret=access_token_secret)
client = oauth.Client(consumer2, token2)
response, data = client.request(url_tmp)
print(type(data))
f = open('./test.jpg', 'wb')
f.write(data)
f.close()
###Output
{'accept-ranges': 'bytes', 'content-length': '23885', 'content-md5': 'fXFsF7ttfL943No28FSzgw==', 'content-type': 'image/jpeg', 'date': 'Wed, 31 Jul 2019 22:42:18 GMT', 'etag': '"fXFsF7ttfL943No28FSzgw=="', 'expires': 'Wed, 07 Aug 2019 22:42:18 GMT', 'last-modified': 'Tue, 30 Jul 2019 09:00:30 GMT', 'server': 'tsa_b', 'set-cookie': 'personalization_id="v1_uxexX2rFlBFMVpR2B3QS5A=="; Max-Age=63072000; Expires=Fri, 30 Jul 2021 22:42:18 GMT; Path=/; Domain=.twitter.com, guest_id=v1%3A156461293803930476; Max-Age=63072000; Expires=Fri, 30 Jul 2021 22:42:18 GMT; Path=/; Domain=.twitter.com', 'strict-transport-security': 'max-age=631138519', 'surrogate-key': 'dm', 'x-connection-hash': 'eaefefc1240e0bae96b23129050f131a', 'x-content-type-options': 'nosniff', 'x-response-time': '40', 'x-ton-expected-size': '23885', 'status': '200', 'content-location': 'https://ton.twitter.com/i/ton/data/dm/1156127413417336836/1156127373470781440/IMenY4pb.jpg?oauth_consumer_key=IIJkP8HH0Pkawv8sj7kOkbQJI&oauth_timestamp=1564612935&oauth_nonce=54460860&oauth_version=1.0&oauth_token=1151106628793655297-JTY1l1bX9THd55AUNn6Lqb8mm0Czia&oauth_body_hash=2jmj7l5rSw0yVb%2FvlWAYkK%2FYBwk%3D&oauth_signature_method=HMAC-SHA1&oauth_signature=pw5uKjnLFWVJIHiOykf2D8o6O70%3D'}
<class 'bytes'>
|
notebooks/mlmath05.ipynb | ###Markdown
Chapter 5. Gradient Descent Basic settings - Load the required modules- Specify default settings for plot output
###Code
# 파이썬 ≥3.5 필수
import sys
assert sys.version_info >= (3, 5)
# 사이킷런 ≥0.20 필수
import sklearn
assert sklearn.__version__ >= "0.20"
# 공통 모듈 임포트
import numpy as np
import os
# 노트북 실행 결과를 동일하게 유지하기 위해
np.random.seed(42)
# 깔끔한 그래프 출력을 위해
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
###Output
_____no_output_____
###Markdown
Linear Regression The code below generates a simple training set with a single feature, used to explain the linear regression training process. * `X`: training set; 100 data points with a single feature `x1`* `y`: 100 labels; basically follows the form `4 + 3 * x`, but noise has been added for training.__Note:__ 100 floating-point numbers drawn at random from a normal distribution are used as the noise.
###Code
import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
linreg_data = np.c_[X, y]
linreg_data[:5]
###Output
_____no_output_____
###Markdown
Plotting the relationship between the feature `x1` and the label `y` gives the figure below. The data basically follow the linear relationship `y = 4 + 3 * x`, but are spread out because of the noise.
###Code
plt.plot(X, y, "b.") # 파랑 점: 훈련 세트 산점도
plt.xlabel("$x_1$", fontsize=18) # x축 표시
plt.ylabel("$y$", rotation=0, fontsize=18) # y축 표시
plt.axis([0, 2, 0, 15]) # x축, y축 스케일 지정
plt.show()
###Output
_____no_output_____
###Markdown
The Normal Equation In general, given a dataset `X` of arbitrary size with $n$ features, if the inverse $(\mathbf{X}^T \mathbf{X})^{-1}$ exists and can actually be computed within a reasonable time, the optimal parameter combination $\boldsymbol{\hat\theta}$ can be obtained directly from the normal equation below.$$\hat{\boldsymbol{\theta}} = (\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{y}$$The code below evaluates this normal equation on the training set generated above.* `np.ones((100, 1))`: a vector of `x0=1` values added as column 0 of the training set to account for the intercept $\theta_0$* `X_b`: the array with `x0=1` added for every sample__Caution:__ adding the vector of ones is done here only for the sake of explanation. When using a scikit-learn model, the model handles this automatically.
###Code
X_b = np.c_[np.ones((100, 1)), X] # 모든 샘플에 x0 = 1 추가
X_b[:5]
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
###Output
_____no_output_____
###Markdown
The computed $\boldsymbol{\hat\theta} = [\theta_0, \theta_1]$ is as follows.
###Code
theta_best
###Output
_____no_output_____
###Markdown
Using the computed $\boldsymbol{\hat\theta}$, the predictions $\hat y$ for `x1=0` and `x1=2` amount to nothing more than a matrix multiplication.$$\hat{y} = \mathbf{X}\, \boldsymbol{\hat{\theta}}$$For prediction, too, the `x0=1` column has to be added first.__Caution:__ there is no need to worry too much about the order of $\mathbf{X}$ and $\hat\theta$. Depending on how things are explained, the order may be swapped or a transpose used, but the results are all identical.
###Code
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # 모든 샘플에 x0 = 1 추가
X_new_b
###Output
_____no_output_____
###Markdown
The predictions obtained by matrix multiplication are as follows.
###Code
y_predict = X_new_b.dot(theta_best)
y_predict
###Output
_____no_output_____
###Markdown
The straight line with intercept $\theta_0$ and slope $\theta_1$ is shown below.
###Code
plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions") # 빨강 직선. label은 범례 지정용
plt.plot(X, y, "b.") # 파란 점: 훈련 세트 산점도
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14) # 범례 위치
plt.axis([0, 2, 0, 15]) # x축, y축 스케일 지정
plt.show()
###Output
_____no_output_____
###Markdown
Scikit-learn's `LinearRegression` model The `LinearRegression` model computes the parameters $\hat\theta$ using the Moore-Penrose pseudoinverse $\mathbf{X}^+$, which is obtained via singular value decomposition (SVD).$$\hat{\boldsymbol{\theta}} = \mathbf{X}^+ \mathbf{y}$$The intercept $\theta_0$ and slope $\theta_1$ of the trained model are stored in its `intercept_` and `coef_` attributes.
###Code
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.intercept_, lin_reg.coef_
###Output
_____no_output_____
###Markdown
The predictions for `x1=0` and `x1=2` are identical to the values computed manually above.
###Code
lin_reg.predict(X_new)
###Output
_____no_output_____
###Markdown
The Moore-Penrose pseudoinverse $\mathbf{X}^{+}$ is obtained with `np.linalg.pinv()`, and using it as in the formula below yields the same optimal parameters.$$\boldsymbol{\hat{\theta}} = \mathbf{X}^{+}\mathbf{y}$$
###Code
np.linalg.pinv(X_b).dot(y)
###Output
_____no_output_____
###Markdown
Gradient Descent The computational complexity of the algorithm that computes the Moore-Penrose pseudoinverse mentioned above is $O(n^2)$: as the number of features grows, the computation time grows roughly quadratically. Therefore, for data with a very large number of features, scikit-learn's SVD-based `LinearRegression` model cannot be used, and a model based on batch gradient descent should be used instead. Batch Gradient Descent Starting from randomly initialized parameters $\boldsymbol{\hat\theta}$, the parameters are adjusted little by little in the direction that makes the cost function (MSE) smaller.$$\begin{align*}\mathrm{MSE}(\theta) & = \mathrm{MSE}(\mathbf X, h_\theta) \\ & = \frac 1 m \sum_{i=1}^{m} \big(\hat y^{(i)} - y^{(i)}\big)^2 \\[2ex]\hat y^{(i)} & = \theta^{T}\, \mathbf{x}^{(i)} \\ & = \theta_0 + \theta_1\, \mathbf{x}_1^{(i)} + \cdots + \theta_n\, \mathbf{x}_n^{(i)}\end{align*}$$The criterion used when adjusting the parameters is the direction and magnitude of the gradient vector of the cost function.$$\nabla_\theta \text{MSE}(\boldsymbol{\theta}) = \frac{2}{m}\, \mathbf{X}^T (\mathbf{X} \boldsymbol{\theta} - \mathbf{y})$$The parameter update is performed as follows, where $\eta$ denotes the learning rate.$$\boldsymbol{\theta}^{(\text{next step})} = \boldsymbol{\theta}^{(\text{previous step})} - \eta\cdot \nabla_\theta \text{MSE}(\boldsymbol{\theta})$$The code below randomly initializes $\theta_0$ and $\theta_1$ and then shows the result of manually running the parameter update described above 1,000 times.
###Code
eta = 0.1 # 학습률
n_iterations = 1000 # 1000번 파라미터 조정
m = 100 # 샘플 수
theta = np.random.randn(2,1) # 파라미터 무작위 초기화
for iteration in range(n_iterations):
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y) # 비용 함수 그레이디언트
theta = theta - eta * gradients # 파라미터 업데이트
###Output
_____no_output_____
###Markdown
The same optimal parameters as obtained earlier are computed.
###Code
theta
###Output
_____no_output_____
###Markdown
Learning rate and model training The code below draws three plots to show how the parameter learning process can change depending on the learning rate.- `theta_path_bgd` variable: a list used to store, in order, the parameters adjusted by batch gradient descent. It is used below in the figure comparing the three kinds of gradient descent.- `plot_gradient_descent()` function: draws a plot showing the first 10 steps of the linear regression model's training process.
###Code
theta_path_bgd = []
def plot_gradient_descent(theta, eta, theta_path=None):
m = len(X_b)
plt.plot(X, y, "b.") # 훈련 세트 산점도
n_iterations = 1000 # 1000번 반복 훈련
for iteration in range(n_iterations):
# 초반 10번 선형 모델(직선) 그리기
if iteration < 10:
y_predict = X_new_b.dot(theta)
style = "b-" if iteration > 0 else "r--"
plt.plot(X_new, y_predict, style)
# 파라미터 조정
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
# 조정되는 파라미터를 모두 리스트에 저장 (theta_path=None 옵션이 아닌 경우)
if theta_path is not None:
theta_path.append(theta)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 2, 0, 15])
plt.title(r"$\eta = {}$".format(eta), fontsize=16)
###Output
_____no_output_____
###Markdown
The blue lines show the first 10 iterations of model training. They clearly show that the training process can differ a lot depending on the learning rate (`eta`).
###Code
np.random.seed(42)
theta = np.random.randn(2,1) # 무작위 초기화
plt.figure(figsize=(10,4)) # 도표 크기 지정
# eta=0.02
plt.subplot(131); plot_gradient_descent(theta, eta=0.02)
plt.ylabel("$y$", rotation=0, fontsize=18)
# eta=0.1
plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd)
# eta=0.5
plt.subplot(133); plot_gradient_descent(theta, eta=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
__Note:__ scikit-learn does not provide a model that supports batch gradient descent. Stochastic Gradient Descent The code below illustrates how stochastic gradient descent operates.* `theta_path_sgd` variable: a list used to store, in order, the parameters adjusted by stochastic gradient descent. It is used below in the figure comparing the three kinds of gradient descent.
###Code
theta_path_sgd = []
m = len(X_b)
np.random.seed(42)
###Output
_____no_output_____
###Markdown
* `n_epochs=50`: training runs for a total of 50 epochs* `t0` and `t1`: act as hyperparameters for the learning schedule
###Code
n_epochs = 50 # 에포크 수
t0, t1 = 5, 50 # 학습 스케줄 하이퍼파라미터
def learning_schedule(t):
return t0 / (t + t1)
###Output
_____no_output_____
###Markdown
One can see that the model oscillates around the convergence point.
###Code
theta = np.random.randn(2,1) # 파라미터 랜덤 초기화
for epoch in range(n_epochs):
# 매 샘플에 대해 그레이디언트 계산 후 파라미터 업데이트
for i in range(m):
# 처음 20번 선형 모델(직선) 그리기
if epoch == 0 and i < 20:
y_predict = X_new_b.dot(theta)
style = "b-" if i > 0 else "r--"
plt.plot(X_new, y_predict, style)
# 파라미터 업데이트
random_index = np.random.randint(m)
xi = X_b[random_index:random_index+1]
yi = y[random_index:random_index+1]
gradients = 2 * xi.T.dot(xi.dot(theta) - yi) # 하나의 샘플에 대한 그레이디언트 계산
eta = learning_schedule(epoch * m + i) # 학습 스케쥴을 이용한 학습률 조정
theta = theta - eta * gradients
theta_path_sgd.append(theta)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([0, 2, 0, 15])
plt.show()
###Output
_____no_output_____
###Markdown
Nevertheless, it finds fairly good parameters after only 50 epochs.
###Code
theta
###Output
_____no_output_____
###Markdown
Scikit-learn's `SGDRegressor` model The `SGDRegressor` model uses stochastic gradient descent and therefore trains very quickly.
###Code
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(max_iter=1000, tol=1e-3, penalty=None, eta0=0.1, random_state=42)
sgd_reg.fit(X, y.ravel())
sgd_reg.intercept_, sgd_reg.coef_
###Output
_____no_output_____
###Markdown
Mini-batch Gradient Descent The code below implements mini-batch gradient descent using batches of size 20.* `theta_path_mgd` variable: a list used to store, in order, the parameters adjusted by mini-batch gradient descent. It is used below in the figure comparing the three kinds of gradient descent. * `n_iterations = 50`: 50 epochs* `minibatch_size = 20`: batch size of 20
###Code
theta_path_mgd = []
n_iterations = 50
minibatch_size = 20
np.random.seed(42)
theta = np.random.randn(2,1) # 랜덤 초기화
t0, t1 = 200, 1000
def learning_schedule(t):
return t0 / (t + t1)
###Output
_____no_output_____
###Markdown
The learning schedule depends on `t`, the number of mini-batches used so far.* `t`: the number of mini-batches used for training; used in the learning schedule.
###Code
t = 0
for epoch in range(n_iterations):
# 에포크가 바뀔 때마다 훈련 데이터 섞기
shuffled_indices = np.random.permutation(m)
X_b_shuffled = X_b[shuffled_indices]
y_shuffled = y[shuffled_indices]
# 20개 데이터 샘플을 훈련할 때마다 파라미터 업데이트
for i in range(0, m, minibatch_size):
t += 1
xi = X_b_shuffled[i:i+minibatch_size]
yi = y_shuffled[i:i+minibatch_size]
gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(t) # 학습 스케줄 활용
theta = theta - eta * gradients
theta_path_mgd.append(theta)
theta
###Output
_____no_output_____
###Markdown
Comparing the parameter learning paths of batch/stochastic/mini-batch gradient descent The code below plots the parameter values stored in the three variables above. * Blue line: batch gradient descent; it converges properly to the optimal parameters.* Red line: stochastic gradient descent; it oscillates strongly around the optimal parameters.* Green line: mini-batch gradient descent; it oscillates comparatively less around the optimal parameters.
###Code
theta_path_bgd = np.array(theta_path_bgd)
theta_path_sgd = np.array(theta_path_sgd)
theta_path_mgd = np.array(theta_path_mgd)
plt.figure(figsize=(7,4))
plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic")
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch")
plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch")
plt.legend(loc="upper left", fontsize=16)
plt.xlabel(r"$\theta_0$", fontsize=20)
plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0)
plt.axis([2.5, 4.5, 2.3, 3.9])
plt.show()
###Output
_____no_output_____ |
Linear_Regression_Methods.ipynb | ###Markdown
Linear regression with various methodsThis is a very simple example of using several Python tools for linear regression.* Scipy.Polyfit* Stats.linregress* Optimize.curve_fit* numpy.linalg.lstsq* statsmodels.OLS* Analytic solution using Moore-Penrose generalized inverse or simple multiplicative matrix inverse* sklearn.linear_model.LinearRegression Import libraries
###Code
from scipy import linspace, polyval, polyfit, sqrt, stats, randn, optimize
import statsmodels.api as sm
import matplotlib.pyplot as plt
import time
import numpy as np
from sklearn.linear_model import LinearRegression
import pandas as pd
%matplotlib inline
###Output
C:\Users\Tirtha\Python\Anaconda3\lib\site-packages\statsmodels\compat\pandas.py:56: FutureWarning: The pandas.core.datetools module is deprecated and will be removed in a future version. Please use the pandas.tseries module instead.
from pandas.core import datetools
###Markdown
Generate random data of a sufficiently large size
###Code
#Sample data creation
#number of points
n=int(5e6)
t=np.linspace(-10,10,n)
#parameters
a=3.25; b=-6.5
x=polyval([a,b],t)
#add some noise
xn=x+3*randn(n)
###Output
_____no_output_____
###Markdown
Draw few random sample points and plot
###Code
xvar=np.random.choice(t,size=20)
yvar=polyval([a,b],xvar)+3*randn(20)
plt.scatter(xvar,yvar,c='green',edgecolors='k')
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Method: Scipy.Polyfit
###Code
#Linear regression - polyfit (polyfit can also be used for higher-order polynomials)
t1=time.time()
(ar,br)=polyfit(t,xn,1)
xr=polyval([ar,br],t)
#compute the mean square error
err=sqrt(sum((xr-xn)**2)/n)
t2=time.time()
t_polyfit = float(t2-t1)
print('Linear regression using polyfit')
print('parameters: a=%.2f b=%.2f, ms error= %.3f' % (ar,br,err))
print("Time taken: {} seconds".format(t_polyfit))
###Output
Linear regression using polyfit
parameters: a=3.25 b=-6.50, ms error= 3.000
Time taken: 1.7698638439178467 seconds
###Markdown
Method: Stats.linregress
###Code
#Linear regression using stats.linregress
t1=time.time()
(a_s,b_s,r,tt,stderr)=stats.linregress(t,xn)
t2=time.time()
t_linregress = float(t2-t1)
print('Linear regression using stats.linregress')
print('a=%.2f b=%.2f, std error= %.3f, r^2 coefficient= %.3f' % (a_s,b_s,stderr,r))
print("Time taken: {} seconds".format(t_linregress))
###Output
Linear regression using stats.linregress
a=3.25 b=-6.50, std error= 0.000, r^2 coefficient= 0.987
Time taken: 0.15017366409301758 seconds
###Markdown
Method: Optimize.curve_fit
###Code
def flin(t,a,b):
result = a*t+b
return(result)
t1=time.time()
p1,_=optimize.curve_fit(flin,xdata=t,ydata=xn,method='lm')
t2=time.time()
t_optimize_curve_fit = float(t2-t1)
print('Linear regression using optimize.curve_fit')
print('parameters: a=%.2f b=%.2f' % (p1[0],p1[1]))
print("Time taken: {} seconds".format(t_optimize_curve_fit))
###Output
Linear regression using optimize.curve_fit
parameters: a=3.25 b=-6.50
Time taken: 1.2034447193145752 seconds
###Markdown
Method: numpy.linalg.lstsq
###Code
t1=time.time()
A = np.vstack([t, np.ones(len(t))]).T
result = np.linalg.lstsq(A, xn)
ar,br = result[0]
err = np.sqrt(result[1]/len(xn))
t2=time.time()
t_linalg_lstsq = float(t2-t1)
print('Linear regression using numpy.linalg.lstsq')
print('parameters: a=%.2f b=%.2f, ms error= %.3f' % (ar,br,err))
print("Time taken: {} seconds".format(t_linalg_lstsq))
###Output
Linear regression using numpy.linalg.lstsq
parameters: a=3.25 b=-6.50, ms error= 3.000
Time taken: 0.3698573112487793 seconds
###Markdown
Method: Statsmodels.OLS
###Code
t1=time.time()
t=sm.add_constant(t)
model = sm.OLS(x, t)
results = model.fit()
ar=results.params[1]
br=results.params[0]
t2=time.time()
t_OLS = float(t2-t1)
print('Linear regression using statsmodels.OLS')
print('parameters: a=%.2f b=%.2f'% (ar,br))
print("Time taken: {} seconds".format(t_OLS))
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 1.000
Model: OLS Adj. R-squared: 1.000
Method: Least Squares F-statistic: 4.287e+34
Date: Fri, 08 Dec 2017 Prob (F-statistic): 0.00
Time: 23:09:33 Log-Likelihood: 1.3904e+08
No. Observations: 5000000 AIC: -2.781e+08
Df Residuals: 4999998 BIC: -2.781e+08
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const -6.5000 9.06e-17 -7.17e+16 0.000 -6.500 -6.500
x1 3.2500 1.57e-17 2.07e+17 0.000 3.250 3.250
==============================================================================
Omnibus: 4418788.703 Durbin-Watson: 0.000
Prob(Omnibus): 0.000 Jarque-Bera (JB): 299716.811
Skew: -0.001 Prob(JB): 0.00
Kurtosis: 1.801 Cond. No. 5.77
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
Analytic solution using Moore-Penrose pseudoinverse
###Code
t1=time.time()
mpinv = np.linalg.pinv(t)
result = mpinv.dot(x)
ar = result[1]
br = result[0]
t2=time.time()
t_inv_matrix = float(t2-t1)
print('Linear regression using Moore-Penrose inverse')
print('parameters: a=%.2f b=%.2f'% (ar,br))
print("Time taken: {} seconds".format(t_inv_matrix))
###Output
Linear regression using Moore-Penrose inverse
parameters: a=3.25 b=-6.50
Time taken: 0.6019864082336426 seconds
###Markdown
Analytic solution using simple multiplicative matrix inverse
###Code
t1=time.time()
m = np.dot((np.dot(np.linalg.inv(np.dot(t.T,t)),t.T)),x)
ar = m[1]
br = m[0]
t2=time.time()
t_simple_inv = float(t2-t1)
print('Linear regression using simple inverse')
print('parameters: a=%.2f b=%.2f'% (ar,br))
print("Time taken: {} seconds".format(t_simple_inv))
###Output
Linear regression using simple inverse
parameters: a=3.25 b=-6.50
Time taken: 0.13125276565551758 seconds
###Markdown
Method: sklearn.linear_model.LinearRegression
###Code
t1=time.time()
lm = LinearRegression()
lm.fit(t,x)
ar=lm.coef_[1]
br=lm.intercept_
t2=time.time()
t_sklearn_linear = float(t2-t1)
print('Linear regression using sklearn.linear_model.LinearRegression')
print('parameters: a=%.2f b=%.2f'% (ar,br))
print("Time taken: {} seconds".format(t_sklearn_linear))
###Output
Linear regression using sklearn.linear_model.LinearRegression
parameters: a=3.25 b=-6.50
Time taken: 0.5318112373352051 seconds
###Markdown
Bucket all the execution times in a list and plot
###Code
times = [t_polyfit,t_linregress,t_optimize_curve_fit,t_linalg_lstsq,t_OLS,t_inv_matrix,t_simple_inv,t_sklearn_linear]
plt.figure(figsize=(20,5))
plt.grid(True)
plt.bar(left=[l*0.8 for l in range(8)],height=times, width=0.4,
tick_label=['Polyfit','Stats.linregress','Optimize.curve_fit',
'numpy.linalg.lstsq','statsmodels.OLS','Moore-Penrose matrix inverse',
'Simple matrix inverse','sklearn.linear_model'])
plt.show()
n_min = 50000
n_max = int(1e7)
n_levels = 25
r = np.log10(n_max/n_min)
l = np.linspace(0,r,n_levels)
n_data = list((n_min*np.power(10,l)))
n_data = [int(n) for n in n_data]
#time_dict={'Polyfit':[],'Stats.lingress':[],'Optimize.curve_fit':[],'linalg.lstsq':[],'statsmodels.OLS':[],
#'Moore-Penrose matrix inverse':[],'Simple matrix inverse':[], 'sklearn.linear_model':[]}
l1=['Polyfit', 'Stats.lingress','Optimize.curve_fit', 'linalg.lstsq',
'statsmodels.OLS', 'Moore-Penrose matrix inverse', 'Simple matrix inverse', 'sklearn.linear_model']
time_dict = {key:[] for key in l1}
from tqdm import tqdm
for i in tqdm(range(len(n_data))):
t=np.linspace(-10,10,n_data[i])
#parameters
a=3.25; b=-6.5
x=polyval([a,b],t)
#add some noise
xn=x+3*randn(n_data[i])
#Linear regressison -polyfit - polyfit can be used other orders polynomials
t1=time.time()
(ar,br)=polyfit(t,xn,1)
t2=time.time()
t_polyfit = 1e3*float(t2-t1)
time_dict['Polyfit'].append(t_polyfit)
#Linear regression using stats.linregress
t1=time.time()
(a_s,b_s,r,tt,stderr)=stats.linregress(t,xn)
t2=time.time()
t_linregress = 1e3*float(t2-t1)
time_dict['Stats.lingress'].append(t_linregress)
#Linear regression using optimize.curve_fit
t1=time.time()
p1,_=optimize.curve_fit(flin,xdata=t,ydata=xn,method='lm')
t2=time.time()
t_optimize_curve_fit = 1e3*float(t2-t1)
time_dict['Optimize.curve_fit'].append(t_optimize_curve_fit)
# Linear regression using np.linalg.lstsq (solving Ax=B equation system)
t1=time.time()
A = np.vstack([t, np.ones(len(t))]).T
result = np.linalg.lstsq(A, xn)
ar,br = result[0]
t2=time.time()
t_linalg_lstsq = 1e3*float(t2-t1)
time_dict['linalg.lstsq'].append(t_linalg_lstsq)
# Linear regression using statsmodels.OLS
t1=time.time()
t=sm.add_constant(t)
model = sm.OLS(x, t)
results = model.fit()
ar=results.params[1]
br=results.params[0]
t2=time.time()
t_OLS = 1e3*float(t2-t1)
time_dict['statsmodels.OLS'].append(t_OLS)
# Linear regression using Moore-Penrose pseudoinverse matrix
t1=time.time()
mpinv = np.linalg.pinv(t)
result = mpinv.dot(x)
ar = result[1]
br = result[0]
t2=time.time()
t_mpinverse = 1e3*float(t2-t1)
time_dict['Moore-Penrose matrix inverse'].append(t_mpinverse)
# Linear regression using simple multiplicative inverse matrix
t1=time.time()
m = np.dot((np.dot(np.linalg.inv(np.dot(t.T,t)),t.T)),x)
ar = m[1]
br = m[0]
t2=time.time()
t_simple_inv = 1e3*float(t2-t1)
time_dict['Simple matrix inverse'].append(t_simple_inv)
# Linear regression using scikit-learn's linear_model
t1=time.time()
lm = LinearRegression()
lm.fit(t,x)
ar=lm.coef_[1]
br=lm.intercept_
t2=time.time()
t_sklearn_linear = 1e3*float(t2-t1)
time_dict['sklearn.linear_model'].append(t_sklearn_linear)
df = pd.DataFrame(data=time_dict)
df
plt.figure(figsize=(15,10))
for i in df.columns:
plt.semilogx((n_data),df[i],lw=3)
plt.xticks([1e5,2e5,5e5,1e6,2e6,5e6,1e7],fontsize=15)
plt.xlabel("\nSize of the data set (number of samples)",fontsize=15)
plt.yticks(fontsize=15)
plt.ylabel("Milliseconds needed for simple linear regression model fit\n",fontsize=15)
plt.grid(True)
plt.legend([name for name in df.columns],fontsize=20)
a1=df.iloc[n_levels-1]
plt.figure(figsize=(20,5))
plt.grid(True)
plt.bar(left=[l*0.8 for l in range(8)],height=a1, width=0.4,
tick_label=list(a1.index))
plt.show()
###Output
_____no_output_____ |
2018/jordi/Day 9 - Marble Mania.ipynb | ###Markdown
```You talk to the Elves while you wait for your navigation system to initialize. To pass the time, they introduce you to their favorite marble game.The Elves play this game by taking turns arranging the marbles in a circle according to very particular rules. The marbles are numbered starting with 0 and increasing by 1 until every marble has a number.First, the marble numbered 0 is placed in the circle. At this point, while it contains only a single marble, it is still a circle: the marble is both clockwise from itself and counter-clockwise from itself. This marble is designated the current marble.Then, each Elf takes a turn placing the lowest-numbered remaining marble into the circle between the marbles that are 1 and 2 marbles clockwise of the current marble. (When the circle is large enough, this means that there is one marble between the marble that was just placed and the current marble.) The marble that was just placed then becomes the current marble.However, if the marble that is about to be placed has a number which is a multiple of 23, something entirely different happens. First, the current player keeps the marble they would have placed, adding it to their score. In addition, the marble 7 marbles counter-clockwise from the current marble is removed from the circle and also added to the current player's score. The marble located immediately clockwise of the marble that was removed becomes the new current marble.For example, suppose there are 9 players. After the marble with value 0 is placed in the middle, each player (shown in square brackets) takes a turn. The result of each of those turns would produce circles of marbles like this, where clockwise is to the right and the resulting current marble is in parentheses:[-] (0)[1] 0 (1)[2] 0 (2) 1 [3] 0 2 1 (3)[4] 0 (4) 2 1 3 [5] 0 4 2 (5) 1 3 [6] 0 4 2 5 1 (6) 3 [7] 0 4 2 5 1 6 3 (7)[8] 0 (8) 4 2 5 1 6 3 7 [9] 0 8 4 (9) 2 5 1 6 3 7 [1] 0 8 4 9 2(10) 5 1 6 3 7 [2] 0 8 4 9 2 10 5(11) 1 6 3 7 [3] 0 8 4 9 2 10 5 11 1(12) 6 3 7 [4] 0 8 4 9 2 10 5 11 1 12 6(13) 3 7 [5] 0 8 4 9 2 10 5 11 1 12 6 13 3(14) 7 [6] 0 8 4 9 2 10 5 11 1 12 6 13 3 14 7(15)[7] 0(16) 8 4 9 2 10 5 11 1 12 6 13 3 14 7 15 [8] 0 16 8(17) 4 9 2 10 5 11 1 12 6 13 3 14 7 15 [9] 0 16 8 17 4(18) 9 2 10 5 11 1 12 6 13 3 14 7 15 [1] 0 16 8 17 4 18 9(19) 2 10 5 11 1 12 6 13 3 14 7 15 [2] 0 16 8 17 4 18 9 19 2(20)10 5 11 1 12 6 13 3 14 7 15 [3] 0 16 8 17 4 18 9 19 2 20 10(21) 5 11 1 12 6 13 3 14 7 15 [4] 0 16 8 17 4 18 9 19 2 20 10 21 5(22)11 1 12 6 13 3 14 7 15 [5] 0 16 8 17 4 18(19) 2 20 10 21 5 22 11 1 12 6 13 3 14 7 15 [6] 0 16 8 17 4 18 19 2(24)20 10 21 5 22 11 1 12 6 13 3 14 7 15 [7] 0 16 8 17 4 18 19 2 24 20(25)10 21 5 22 11 1 12 6 13 3 14 7 15The goal is to be the player with the highest score after the last marble is used up. Assuming the example above ends after the marble numbered 25, the winning score is 23+9=32 (because player 5 kept marble 23 and removed marble 9, while no other player got any points in this very short example game).Here are a few more examples:10 players; last marble is worth 1618 points: high score is 831713 players; last marble is worth 7999 points: high score is 14637317 players; last marble is worth 1104 points: high score is 276421 players; last marble is worth 6111 points: high score is 5471830 players; last marble is worth 5807 points: high score is 37305What is the winning Elf's score?```
###Code
from collections import deque
def high_score(players, points):
scores = {}
marbles = deque([0])
# Action remove marble
def play(p if p % 23 == 0):
elf = p % players
marbles.rotate(7)
score = scores.get(elf, 0) + p + marbles.pop()
scores[elf] = score
marbles.rotate(-1)
return score
# Action add marble
@addpattern(play)
def play(p):
marbles.rotate(-1)
marbles.append(p)
return 0
# Play
return range(1, points+1) |> fmap$(play) |> max
# 30 players; last marble is worth 5807 points: high score is 37305
high_score(30, 5807)
# 428 players; last marble is worth 70825 points
high_score(428, 70825)
###Output
_____no_output_____
###Markdown
```Amused by the speed of your answer, the Elves are curious:What would the new winning Elf's score be if the number of the last marble were 100 times larger?```
###Code
high_score(428, 7082500)
###Output
_____no_output_____ |
Model backlog/EfficientNet/EfficientNetB5/227 - EfficientNetB5-Class-Img224 0,5data No Ben.ipynb | ###Markdown
Dependencies
###Code
import os
import sys
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing as mp
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
set_random_seed(0)
seed = 0
seed_everything(seed)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
sys.path.append(os.path.abspath('../input/efficientnet/efficientnet-master/efficientnet-master/'))
from efficientnet import *
###Output
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/opt/conda/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/opt/conda/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Using TensorFlow backend.
###Markdown
Load data
###Code
hold_out_set = pd.read_csv('../input/aptos-split-oldnew/hold-out_5.csv')
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
X_train['diagnosis'] = X_train['diagnosis'].astype('str')
X_val['diagnosis'] = X_val['diagnosis'].astype('str')
print('Number of train samples: ', X_train.shape[0])
print('Number of validation samples: ', X_val.shape[0])
print('Number of test samples: ', test.shape[0])
display(X_train.head())
###Output
Number of train samples: 17599
Number of validation samples: 1831
Number of test samples: 1928
###Markdown
Model parameters
###Code
# Model parameters
N_CLASSES = X_train['diagnosis'].nunique()
FACTOR = 4
BATCH_SIZE = 8 * FACTOR
EPOCHS = 20
WARMUP_EPOCHS = 5
LEARNING_RATE = 1e-4 * FACTOR
WARMUP_LEARNING_RATE = 1e-3 * FACTOR
HEIGHT = 224
WIDTH = 224
CHANNELS = 3
TTA_STEPS = 5
ES_PATIENCE = 5
RLROP_PATIENCE = 3
LR_WARMUP_EPOCHS_1st = 2
LR_WARMUP_EPOCHS_2nd = 6
STEP_SIZE = len(X_train) // BATCH_SIZE
TOTAL_STEPS_1st = WARMUP_EPOCHS * STEP_SIZE
TOTAL_STEPS_2nd = EPOCHS * STEP_SIZE
WARMUP_STEPS_1st = LR_WARMUP_EPOCHS_1st * STEP_SIZE
WARMUP_STEPS_2nd = LR_WARMUP_EPOCHS_2nd * STEP_SIZE
###Output
_____no_output_____
###Markdown
Pre-process images
###Code
old_data_base_path = '../input/diabetic-retinopathy-resized/resized_train/resized_train/'
new_data_base_path = '../input/aptos2019-blindness-detection/train_images/'
test_base_path = '../input/aptos2019-blindness-detection/test_images/'
train_dest_path = 'base_dir/train_images/'
validation_dest_path = 'base_dir/validation_images/'
test_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
# Creating train, validation and test directories
os.makedirs(train_dest_path)
os.makedirs(validation_dest_path)
os.makedirs(test_dest_path)
def crop_image(img, tol=7):
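    # Trim away the near-black border: keep only rows/columns where at least one pixel value exceeds the tolerance.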
if img.ndim ==2:
mask = img>tol
return img[np.ix_(mask.any(1),mask.any(0))]
elif img.ndim==3:
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = gray_img>tol
check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0]
if (check_shape == 0): # image is too dark so that we crop out everything,
return img # return original image
else:
img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))]
img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))]
img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))]
img = np.stack([img1,img2,img3],axis=-1)
return img
def circle_crop(img):
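    # Crop to the circular retina region: trim the border, resize to a square, mask everything outside the inscribed circle, then trim again.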
img = crop_image(img)
height, width, depth = img.shape
largest_side = np.max((height, width))
img = cv2.resize(img, (largest_side, largest_side))
height, width, depth = img.shape
x = width//2
y = height//2
r = np.amin((x, y))
circle_img = np.zeros((height, width), np.uint8)
cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)
img = cv2.bitwise_and(img, img, mask=circle_img)
img = crop_image(img)
return img
def preprocess_image(image_id, base_path, save_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = circle_crop(image)
image = cv2.resize(image, (HEIGHT, WIDTH))
# image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)
cv2.imwrite(save_path + image_id, image)
def preprocess_data(df, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
item = df.iloc[i]
image_id = item['id_code']
item_set = item['set']
item_data = item['data']
if item_set == 'train':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, train_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, train_dest_path)
if item_set == 'validation':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, validation_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, validation_dest_path)
def preprocess_test(df, base_path=test_base_path, save_path=test_dest_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
image_id = df.iloc[i]['id_code']
preprocess_image(image_id, base_path, save_path)
n_cpu = mp.cpu_count()
train_n_cnt = X_train.shape[0] // n_cpu
val_n_cnt = X_val.shape[0] // n_cpu
test_n_cnt = test.shape[0] // n_cpu
# Pre-process train set (old + new data)
pool = mp.Pool(n_cpu)
dfs = [X_train.iloc[train_n_cnt*i:train_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_train.iloc[train_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process validation set
pool = mp.Pool(n_cpu)
dfs = [X_val.iloc[val_n_cnt*i:val_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_val.iloc[val_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process test set
pool = mp.Pool(n_cpu)
dfs = [test.iloc[test_n_cnt*i:test_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = test.iloc[test_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_test, [x_df for x_df in dfs])
pool.close()
###Output
_____no_output_____
###Markdown
Data generator
###Code
datagen=ImageDataGenerator(rescale=1./255,
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="categorical",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator=datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="categorical",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
test_generator=datagen.flow_from_dataframe(
dataframe=test,
directory=test_dest_path,
x_col="id_code",
batch_size=1,
class_mode=None,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
def cosine_decay_with_warmup(global_step,
learning_rate_base,
total_steps,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0):
"""
Cosine decay schedule with warm up period.
In this schedule, the learning rate grows linearly from warmup_learning_rate
to learning_rate_base for warmup_steps, then transitions to a cosine decay
schedule.
:param global_step {int}: global step.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
:Returns : a float representing learning rate.
:Raises ValueError: if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps.
"""
if total_steps < warmup_steps:
raise ValueError('total_steps must be larger or equal to warmup_steps.')
learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
np.pi *
(global_step - warmup_steps - hold_base_rate_steps
) / float(total_steps - warmup_steps - hold_base_rate_steps)))
if hold_base_rate_steps > 0:
learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
learning_rate, learning_rate_base)
if warmup_steps > 0:
if learning_rate_base < warmup_learning_rate:
raise ValueError('learning_rate_base must be larger or equal to warmup_learning_rate.')
slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
warmup_rate = slope * global_step + warmup_learning_rate
learning_rate = np.where(global_step < warmup_steps, warmup_rate,
learning_rate)
return np.where(global_step > total_steps, 0.0, learning_rate)
class WarmUpCosineDecayScheduler(Callback):
"""Cosine decay with warmup learning rate scheduler"""
def __init__(self,
learning_rate_base,
total_steps,
global_step_init=0,
warmup_learning_rate=0.0,
warmup_steps=0,
hold_base_rate_steps=0,
verbose=0):
"""
Constructor for cosine decay with warmup learning rate scheduler.
:param learning_rate_base {float}: base learning rate.
:param total_steps {int}: total number of training steps.
:param global_step_init {int}: initial global step, e.g. from previous checkpoint.
:param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
:param warmup_steps {int}: number of warmup steps. (default: {0}).
:param hold_base_rate_steps {int}: Optional number of steps to hold base learning rate before decaying. (default: {0}).
:param verbose {int}: 0: quiet, 1: update messages. (default: {0}).
"""
super(WarmUpCosineDecayScheduler, self).__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.global_step = global_step_init
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.hold_base_rate_steps = hold_base_rate_steps
self.verbose = verbose
self.learning_rates = []
def on_batch_end(self, batch, logs=None):
self.global_step = self.global_step + 1
lr = K.get_value(self.model.optimizer.lr)
self.learning_rates.append(lr)
def on_batch_begin(self, batch, logs=None):
lr = cosine_decay_with_warmup(global_step=self.global_step,
learning_rate_base=self.learning_rate_base,
total_steps=self.total_steps,
warmup_learning_rate=self.warmup_learning_rate,
warmup_steps=self.warmup_steps,
hold_base_rate_steps=self.hold_base_rate_steps)
K.set_value(self.model.optimizer.lr, lr)
if self.verbose > 0:
print('\nBatch %02d: setting learning rate to %s.' % (self.global_step + 1, lr))
###Output
_____no_output_____
###Markdown
Model
###Code
def create_model(input_shape):
input_tensor = Input(shape=input_shape)
base_model = EfficientNetB5(weights=None,
include_top=False,
input_tensor=input_tensor)
base_model.load_weights('../input/efficientnet-keras-weights-b0b5/efficientnet-b5_imagenet_1000_notop.h5')
x = GlobalAveragePooling2D()(base_model.output)
final_output = Dense(N_CLASSES, activation='softmax', name='final_output')(x)
model = Model(input_tensor, final_output)
return model
###Output
_____no_output_____
###Markdown
Train top layers
###Code
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS))
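# Warm-up: freeze the EfficientNet backbone and leave only the last two layers (global pooling + dense head) trainable.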
for layer in model.layers:
layer.trainable = False
for i in range(-2, 0):
model.layers[i].trainable = True
class_weights = class_weight.compute_class_weight('balanced', np.unique(X_train['diagnosis'].astype('int').values), X_train['diagnosis'].astype('int').values)
cosine_lr_1st = WarmUpCosineDecayScheduler(learning_rate_base=WARMUP_LEARNING_RATE,
total_steps=TOTAL_STEPS_1st,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS_1st,
hold_base_rate_steps=(2 * STEP_SIZE))
metric_list = ["accuracy"]
callback_list = [cosine_lr_1st]
optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
callbacks=callback_list,
class_weight=class_weights,
verbose=2).history
###Output
Epoch 1/5
- 244s - loss: 1.3981 - acc: 0.3979 - val_loss: 1.2090 - val_acc: 0.5559
Epoch 2/5
- 232s - loss: 1.2670 - acc: 0.4678 - val_loss: 1.3622 - val_acc: 0.4464
Epoch 3/5
- 231s - loss: 1.2864 - acc: 0.4608 - val_loss: 1.8656 - val_acc: 0.4775
Epoch 4/5
- 233s - loss: 1.2820 - acc: 0.4591 - val_loss: 1.6246 - val_acc: 0.4825
Epoch 5/5
- 231s - loss: 1.2409 - acc: 0.4770 - val_loss: 1.3817 - val_acc: 0.5158
###Markdown
Fine-tune the complete model
###Code
for layer in model.layers:
layer.trainable = True
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
cosine_lr_2nd = WarmUpCosineDecayScheduler(learning_rate_base=LEARNING_RATE,
total_steps=TOTAL_STEPS_2nd,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS_2nd,
hold_base_rate_steps=(2 * STEP_SIZE))
callback_list = [es, cosine_lr_2nd]
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=metric_list)
model.summary()
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
class_weight=class_weights,
verbose=2).history
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 6))
ax1.plot(cosine_lr_1st.learning_rates)
ax1.set_title('Warm up learning rates')
ax2.plot(cosine_lr_2nd.learning_rates)
ax2.set_title('Fine-tune learning rates')
plt.xlabel('Steps')
plt.ylabel('Learning rate')
sns.despine()
plt.show()
###Output
_____no_output_____
###Markdown
Model loss graph
###Code
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 14))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
train_generator_=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="categorical",
batch_size=1,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator_=datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="categorical",
batch_size=1,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
# Add train predictions
train_preds = model.predict_generator(train_generator_, steps=len(X_train))
# Add validation predictions
valid_preds = model.predict_generator(valid_generator_, steps=len(X_val))
train_preds = pd.DataFrame({'label':train_generator_.labels, 'predictions':train_preds.argmax(axis=1)})
validation_preds = pd.DataFrame({'label':valid_generator_.labels, 'predictions':valid_preds.argmax(axis=1)})
###Output
Found 17599 validated image filenames belonging to 5 classes.
Found 1831 validated image filenames belonging to 5 classes.
###Markdown
Model Evaluation Confusion Matrix Original thresholds
###Code
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
def plot_confusion_matrix(train, validation, labels=labels):
train_labels, train_preds = train
validation_labels, validation_preds = validation
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8),ax=ax2).set_title('Validation')
plt.show()
plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
###Output
_____no_output_____
###Markdown
Quadratic Weighted Kappa
###Code
def evaluate_model(train, validation):
train_labels, train_preds = train
validation_labels, validation_preds = validation
print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic'))
evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
###Output
Train Cohen Kappa score: 0.702
Validation Cohen Kappa score: 0.868
Complete set Cohen Kappa score: 0.718
###Markdown
Apply model to test set and output predictions
###Code
def apply_tta(model, generator, steps=10):
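    # Test-time augmentation: run the (randomly augmenting) test generator several times and average the predicted probabilities.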
step_size = generator.n//generator.batch_size
preds_tta = []
for i in range(steps):
generator.reset()
preds = model.predict_generator(generator, steps=step_size)
preds_tta.append(preds)
return np.mean(preds_tta, axis=0)
preds = apply_tta(model, test_generator, TTA_STEPS)
predictions = preds.argmax(axis=1)
results = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])
# Cleaning created directories
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
###Output
_____no_output_____
###Markdown
Predictions class distribution
###Code
fig = plt.subplots(sharex='col', figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test')
sns.despine()
plt.show()
results.to_csv('submission.csv', index=False)
display(results.head())
###Output
_____no_output_____
###Markdown
Save model
###Code
model.save_weights('../working/effNetB5_img224_classification.h5')
###Output
_____no_output_____ |
source/YOLOv4_Training_on_CT_Scan_Images.ipynb | ###Markdown
The following pipeline was developed based on the instructions and code of two repositories: [AlexeyAB](https://github.com/AlexeyAB/darknet) and [theAIGuysCode](https://github.com/theAIGuysCode). Step 1: Connect to Google Drive
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Step 2: Cloning and Building Darknet
###Code
# clone darknet repo
!git clone https://github.com/AlexeyAB/darknet
# change makefile to have GPU and OPENCV enabled
%cd darknet
!sed -i 's/OPENCV=0/OPENCV=1/' Makefile
!sed -i 's/GPU=0/GPU=1/' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/' Makefile
!sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/' Makefile
!sed -i '0,/assert(x < m.w && y < m.h && c < m.c)/s//\/\/assert(x \< m.w \&\& y \< m.h \&\& c \< m.c)/' src/image.c
# verify CUDA
!/usr/local/cuda/bin/nvcc --version
# make darknet (builds darknet so that you can then use the darknet executable file to run or train object detectors)
!make
###Output
_____no_output_____
###Markdown
Step 3: Download pre-trained YOLOv4 weights
###Code
!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
###Output
_____no_output_____
###Markdown
Step 4: Define Helper Functions
###Code
# define helper functions
def imShow(path):
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
image = cv2.imread(path)
height, width = image.shape[:2]
resized_image = cv2.resize(image,(3*width, 3*height), interpolation = cv2.INTER_CUBIC)
fig = plt.gcf()
fig.set_size_inches(18, 10)
plt.axis("off")
plt.imshow(cv2.cvtColor(resized_image, cv2.COLOR_BGR2RGB))
plt.show()
# use this to upload files
def upload():
from google.colab import files
uploaded = files.upload()
for name, data in uploaded.items():
with open(name, 'wb') as f:
f.write(data)
print ('saved file', name)
# use this to download a file
def download(path):
from google.colab import files
files.download(path)
# this creates a symbolic link so that now the path /content/gdrive/My\ Drive/ is equal to /mydrive
!ln -s /content/drive/MyDrive/ /mydrive
!ls /mydrive
###Output
_____no_output_____
###Markdown
Step 5) Train Your Own YOLOv4 Custom Object Detector!

In order to create a custom YOLOv4 detector we will need the following:

* Labeled Custom Dataset
* Custom .cfg file
* ctscans.data and ctscans.names files
* train.txt file (test.txt is optional here as well)

Step 5.1: Moving Your Custom Datasets Into Your Cloud VM
###Code
! cp -R /mydrive/yolov4/ctscans-train /content/darknet/data/ctscans-train
! cp -R /mydrive/yolov4/ctscans-test /content/darknet/data/ctscans-test
###Output
_____no_output_____
###Markdown
Step 5.2: Copy Files for Training

Create a new file within a code or text editor called **ctscans.names** where you will have one class name per line:

1. Yes

You will also create a **ctscans.data** file and fill it in like this (change your number of classes accordingly, as well as your backup location):

1. classes = 1
2. train = data/ctscans-train.txt
3. valid = data/ctscans-test.txt
4. names = data/ctscans.names
5. backup = /mydrive/yolov4/backup
###Code
# upload the obj.names and obj.data files to cloud VM from Google Drive
!cp /mydrive/yolov4/support-files/ctscans.names ./data
!cp /mydrive/yolov4/support-files/ctscans.data ./data
!cp /mydrive/yolov4/support-files/yolov4-ctscans.cfg cfg/yolov4-ctscans.cfg
###Output
_____no_output_____
###Markdown
iii) Generating train.txt and test.txt
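The two scripts are copied from the author's Google Drive and are not shown in this notebook; they produce the data/ctscans-train.txt and data/ctscans-test.txt lists referenced by ctscans.data. As a rough, hypothetical sketch (assuming the images were copied to data/ctscans-train as in Step 5.1), generate_train.py typically just writes one image path per line:

```python
# Hypothetical sketch of generate_train.py -- the real script lives on the author's Drive.
import glob
import os

image_dir = 'data/ctscans-train'                       # assumed location from Step 5.1
with open('data/ctscans-train.txt', 'w') as out:
    for path in sorted(glob.glob(os.path.join(image_dir, '*.jpg'))):
        out.write(path + '\n')                         # darknet expects one image path per line
```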
###Code
# upload the generate_train.py and generate_test.py script to cloud VM from Google Drive
!cp /mydrive/yolov4/support-files/generate_train.py ./
!cp /mydrive/yolov4/support-files/generate_test.py ./
# Run both scripts to generate the two aforementioned txt files
!python generate_train.py
!python generate_test.py
###Output
_____no_output_____
###Markdown
Step 6: Download pre-trained weights for the convolutional layers.
###Code
!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.conv.137
###Output
_____no_output_____
###Markdown
Step 7: Train Your Custom Object Detector!
###Code
# train your custom detector! (uncomment %%capture below if you run into memory issues or your Colab is crashing)
# %%capture
!./darknet detector train data/ctscans.data cfg/yolov4-ctscans.cfg yolov4.conv.137 -dont_show -ext_output -map
# show chart.png of how the custom object detector did during training and save it to the backup folder
imShow('chart.png')
!mv chart.png /mydrive/yolov4/backup/
###Output
_____no_output_____
###Markdown
Step 9: Run Your Custom Object Detector!!!
###Code
# need to set our custom cfg to test mode
%cd cfg
!sed -i 's/batch=64/batch=1/' yolov4-ctscans.cfg
!sed -i 's/subdivisions=16/subdivisions=1/' yolov4-ctscans.cfg
%cd ..
# run your custom detector with this command (upload an image to your google drive to test, thresh flag sets accuracy that detection must be in order to show it)
!./darknet detector test data/ctscans.data cfg/yolov4-ctscans.cfg /mydrive/yolov4/backup/yolov4-ctscans_last.weights /my_image.jpg -thresh 0.6
imShow('predictions.jpg')
###Output
_____no_output_____ |
notebooks/semisupervised/cifar10/euclidean/not-augmented-nothresh-Y/cifar10-16ex-euc-nothresh-Y.ipynb | ###Markdown
Choose GPU
###Code
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0
import tensorflow as tf
gpu_devices = tf.config.experimental.list_physical_devices('GPU')
if len(gpu_devices)>0:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
print(gpu_devices)
tf.keras.backend.clear_session()
###Output
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
###Markdown
Load packages
###Code
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
from IPython import display
import pandas as pd
import umap
import copy
import os, tempfile
import tensorflow_addons as tfa
import pickle
###Output
/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/tqdm/autonotebook/__init__.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
" (e.g. in jupyter console)", TqdmExperimentalWarning)
###Markdown
parameters
###Code
dataset = "cifar10"
labels_per_class = 16 # 'full'
n_latent_dims = 1024
confidence_threshold = 0.0 # minimum confidence to include in UMAP graph for learned metric
learned_metric = False # whether to use a learned metric, or Euclidean distance between datapoints
augmented = False #
min_dist= 0.001 # min_dist parameter for UMAP
negative_sample_rate = 5 # how many negative samples per positive sample
batch_size = 128 # batch size
optimizer = tf.keras.optimizers.Adam(1e-3) # the optimizer to train
optimizer = tfa.optimizers.MovingAverage(optimizer)
label_smoothing = 0.2 # how much label smoothing to apply to categorical crossentropy
max_umap_iterations = 500 # how many times, maximum, to recompute UMAP
max_epochs_per_graph = 10 # how many epochs maximum each graph trains for (without early stopping)
graph_patience = 10 # how many times without improvement to train a new graph
min_graph_delta = 0.0025 # minimum improvement on validation acc to consider an improvement for training
from datetime import datetime
datestring = datetime.now().strftime("%Y_%m_%d_%H_%M_%S_%f")
datestring = (
str(dataset)
+ "_"
+ str(confidence_threshold)
+ "_"
+ str(labels_per_class)
+ "____"
+ datestring
+ '_umap_augmented'
)
print(datestring)
###Output
cifar10_0.0_16____2020_08_24_00_26_41_868150_umap_augmented
###Markdown
Load dataset
###Code
from tfumap.semisupervised_keras import load_dataset
(
X_train,
X_test,
X_labeled,
Y_labeled,
Y_masked,
X_valid,
Y_train,
Y_test,
Y_valid,
Y_valid_one_hot,
Y_labeled_one_hot,
num_classes,
dims
) = load_dataset(dataset, labels_per_class)
###Output
_____no_output_____
###Markdown
load architecture
###Code
from tfumap.semisupervised_keras import load_architecture
encoder, classifier, embedder = load_architecture(dataset, n_latent_dims)
###Output
_____no_output_____
###Markdown
load pretrained weights
###Code
from tfumap.semisupervised_keras import load_pretrained_weights
encoder, classifier = load_pretrained_weights(dataset, augmented, labels_per_class, encoder, classifier)
###Output
WARNING: Logging before flag parsing goes to stderr.
W0824 00:26:56.688759 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7fe921846cf8> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7fe921846f28>).
W0824 00:26:56.690939 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7fe9218956d8> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7fe921d41390>).
W0824 00:26:56.742866 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7fe9220cc160> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fea202ee278>).
W0824 00:26:56.748986 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fea202ee278> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7fe9220802b0>).
W0824 00:26:56.756188 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7fe921d84978> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe921d84cc0>).
W0824 00:26:56.760667 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe921d84cc0> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7fe921d84da0>).
W0824 00:26:56.768090 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7fe92154ae80> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe9215611d0>).
W0824 00:26:56.773827 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe9215611d0> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7fe921561358>).
W0824 00:26:56.789536 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7fe921d705c0> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe921d70a90>).
W0824 00:26:56.797647 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe921d70a90> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7fe921d70dd8>).
W0824 00:26:56.809775 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7fe921507668> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe921507a20>).
W0824 00:26:56.817147 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe921507a20> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7fe921507e80>).
W0824 00:26:56.825432 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7fe92195e9b0> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe92195ea20>).
W0824 00:26:56.829986 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe92195ea20> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7fe92195eeb8>).
W0824 00:26:56.840658 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow_addons.layers.wrappers.WeightNormalization object at 0x7fe92176f0f0> and <tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe92176f710>).
W0824 00:26:56.845364 140643632027392 base.py:272] Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.
Two checkpoint references resolved to different objects (<tensorflow.python.keras.layers.normalization_v2.BatchNormalization object at 0x7fe92176f710> and <tensorflow.python.keras.layers.advanced_activations.LeakyReLU object at 0x7fe92176f940>).
###Markdown
compute pretrained accuracy
###Code
# test current acc
pretrained_predictions = classifier.predict(encoder.predict(X_test, verbose=True), verbose=True)
pretrained_predictions = np.argmax(pretrained_predictions, axis=1)
pretrained_acc = np.mean(pretrained_predictions == Y_test)
print('pretrained acc: {}'.format(pretrained_acc))
###Output
1/313 [..............................] - ETA: 0s
###Markdown
get a, b parameters for embeddings
###Code
from tfumap.semisupervised_keras import find_a_b
a_param, b_param = find_a_b(min_dist=min_dist)
###Output
_____no_output_____
###Markdown
build network
###Code
from tfumap.semisupervised_keras import build_model
model = build_model(
batch_size=batch_size,
a_param=a_param,
b_param=b_param,
dims=dims,
encoder=encoder,
classifier=classifier,
negative_sample_rate=negative_sample_rate,
optimizer=optimizer,
label_smoothing=label_smoothing,
embedder = embedder,
)
###Output
_____no_output_____
###Markdown
build labeled iterator
###Code
from tfumap.semisupervised_keras import build_labeled_iterator
labeled_dataset = build_labeled_iterator(X_labeled, Y_labeled_one_hot, augmented, dims)
###Output
_____no_output_____
###Markdown
training
###Code
from livelossplot import PlotLossesKerasTF
from tfumap.semisupervised_keras import get_edge_dataset
from tfumap.semisupervised_keras import zip_datasets
###Output
_____no_output_____
###Markdown
callbacks
###Code
# plot losses callback
groups = {'acccuracy': ['classifier_accuracy', 'val_classifier_accuracy'], 'loss': ['classifier_loss', 'val_classifier_loss']}
plotlosses = PlotLossesKerasTF(groups=groups)
history_list = []
current_validation_acc = 0
batches_per_epoch = np.floor(len(X_train)/batch_size).astype(int)
epochs_since_last_improvement = 0
current_umap_iterations = 0
current_epoch = 0
# make dataset
edge_dataset = get_edge_dataset(
model,
augmented,
classifier,
encoder,
X_train,
Y_masked,
batch_size,
confidence_threshold,
labeled_dataset,
dims,
learned_metric = learned_metric
)
# zip dataset
zipped_ds = zip_datasets(labeled_dataset, edge_dataset, batch_size)
from tfumap.paths import MODEL_DIR, ensure_dir
save_folder = MODEL_DIR / 'semisupervised-keras' / dataset / str(labels_per_class) / datestring
ensure_dir(save_folder / 'test_loss.npy')
for cui in tqdm(np.arange(current_epoch, max_umap_iterations)):
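    # Graph-level early stopping: quit when the best val accuracy over the last graph_patience graphs fails to beat the earlier best by at least min_graph_delta.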
if len(history_list) > graph_patience+1:
previous_history = [np.mean(i.history['val_classifier_accuracy']) for i in history_list]
best_of_patience = np.max(previous_history[-graph_patience:])
best_of_previous = np.max(previous_history[:-graph_patience])
if (best_of_previous + min_graph_delta) > best_of_patience:
print('Early stopping')
break
# train dataset
history = model.fit(
zipped_ds,
epochs= current_epoch + max_epochs_per_graph,
initial_epoch = current_epoch,
validation_data=(
(X_valid, tf.zeros_like(X_valid), tf.zeros_like(X_valid)),
{"classifier": Y_valid_one_hot},
),
callbacks = [plotlosses],
max_queue_size = 100,
steps_per_epoch = batches_per_epoch,
#verbose=0
)
current_epoch+=len(history.history['loss'])
history_list.append(history)
# save score
class_pred = classifier.predict(encoder.predict(X_test))
class_acc = np.mean(np.argmax(class_pred, axis=1) == Y_test)
np.save(save_folder / 'test_loss.npy', (np.nan, class_acc))
# save weights
encoder.save_weights((save_folder / "encoder").as_posix())
classifier.save_weights((save_folder / "classifier").as_posix())
# save history
with open(save_folder / 'history.pickle', 'wb') as file_pi:
pickle.dump([i.history for i in history_list], file_pi)
current_umap_iterations += 1
if len(history_list) > graph_patience+1:
previous_history = [np.mean(i.history['val_classifier_accuracy']) for i in history_list]
best_of_patience = np.max(previous_history[-graph_patience:])
best_of_previous = np.max(previous_history[:-graph_patience])
if (best_of_previous + min_graph_delta) > best_of_patience:
print('Early stopping')
plt.plot(previous_history)
(best_of_previous + min_graph_delta) , best_of_patience
###Output
_____no_output_____
###Markdown
save embedding
###Code
z = encoder.predict(X_train)
reducer = umap.UMAP(verbose=True)
embedding = reducer.fit_transform(z.reshape(len(z), np.product(np.shape(z)[1:])))
plt.scatter(embedding[:, 0], embedding[:, 1], c=Y_train.flatten(), s= 1, alpha = 0.1, cmap = plt.cm.tab10)
np.save(save_folder / 'train_embedding.npy', embedding)
print
###Output
_____no_output_____ |
data/congress-committees/prepare_senate_data.ipynb | ###Markdown
Historical codebook: http://web.mit.edu/cstewart/www/data/cb9999
###Code
library(tidyverse)
prep_committees <- function(type){
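    # Read one chamber's committee membership file, standardise the columns, split member names, recode the party codes, and write <type>_committees.csv.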
path <- paste0(type, "_committees_modern.csv")
df <- read_csv(path)
if(type == "house"){
df <- df %>%
rename(`Party Code` = Party,
`Committee Code` = `Committee code`)
}
df <- df %>%
select(party = `Party Code`,
id = `ID #`,
committee = `Committee Code`,
session = `Congress`,
name = `Name`)
df <- df %>%
separate(col = name, sep = ", ", into = c("last", "first")) %>%
separate(col = first, sep = " ", into = c("first", "other")) %>%
filter(!is.na(id)) %>%
mutate(party = case_when(party == 100 ~ 1,
party == 200 ~ 2,
TRUE ~ 1)) %>%
filter(party > 0)
lookup <- df %>%
group_by(id, first, last, party) %>%
filter(row_number() == 1) %>%
ungroup() %>%
mutate(new_id = row_number()) %>%
select(id, new_id, first, last, party)
df <- df %>%
left_join(lookup)
out_path <- paste0(type, "_committees.csv")
df %>% write_csv(out_path)
}
prep_committees("house")
senate_df <- read_csv("senate_committees_modern.csv")
senate_df <- senate_df %>%
select(party = `Party Code`,
id = `ID #`,
committee = `Committee Code`,
session = `Congress`,
name = `Name`)
senate_df <- senate_df %>%
separate(col = name, sep = ", ", into = c("last", "first")) %>%
separate(col = first, sep = " ", into = c("first", "other")) %>%
filter(!is.na(id)) %>%
mutate(party = case_when(party == 100 ~ 1,
party == 200 ~ 2,
TRUE ~ 1)) %>%
filter(party > 0)
lookup <- senate_df %>%
group_by(id, first, last, party) %>%
filter(row_number() == 1) %>%
ungroup() %>%
mutate(new_id = row_number()) %>%
select(id, new_id, first, last, party)
senate_df <- senate_df %>%
left_join(lookup)
senate_df %>%
group_by(session) %>%
summarise(n = n())
senate_df %>%
write_csv("senate_committees.csv")
###Output
_____no_output_____ |
divinity_hello.ipynb | ###Markdown
###Code
!pip install divinity
import logging, sys
logging.disable(sys.maxsize)
!pip install microprediction
###Output
_____no_output_____
###Markdown
Hello world example. See https://www.microprediction.com/blog/popular-timeseries-packages for more packages
###Code
from microprediction import MicroReader
mr = MicroReader()
YS = mr.get_lagged_values(name='emojitracker-twitter-face_with_medical_mask.json')[:50]
import divinity as dv
import pandas as pd
import datetime
import numpy as np
import warnings
warnings.filterwarnings('ignore')
def run(ys):
""" Slow, see river package or others if you don't like """
burnin = 100
def next_value(ys):
dfc = dv.divinity(forecast_length=1)
dfc.fit(np.array(ys))
y_hat = dfc.predict()[0]
return y_hat
y_hats = list()
for t in range(len(ys)):
if t > burnin:
y_hat = next_value(ys[:t])
elif t >= 1:
y_hat = ys[t - 1]
else:
y_hat = 0
y_hats.append(y_hat)
return y_hats
XS = run(YS)
import matplotlib.pyplot as plt
plt.plot(YS[25:],'*b')
plt.plot(XS[25:],'g')
###Output
_____no_output_____ |
Elo Merchant Category Recommendation/code/Baseline_yeonmin_v2.ipynb | ###Markdown
Utility Function
###Code
def get_prefix(group_col, target_col, prefix=None):
if isinstance(group_col, list) is True:
g = '_'.join(group_col)
else:
g = group_col
if isinstance(target_col, list) is True:
t = '_'.join(target_col)
else:
t = target_col
if prefix is not None:
return prefix + '_' + g + '_' + t
return g + '_' + t
def groupby_helper(df, group_col, target_col, agg_method, prefix_param=None):
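    # Aggregate target_col over group_col with each method in agg_method; returns a flat frame with prefixed column names (e.g. card_id_purchase_amount_mean).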
try:
prefix = get_prefix(group_col, target_col, prefix_param)
print(group_col, target_col, agg_method)
group_df = df.groupby(group_col)[target_col].agg(agg_method)
group_df.columns = ['{}_{}'.format(prefix, m) for m in agg_method]
except BaseException as e:
print(e)
return group_df.reset_index()
def create_new_columns(name,aggs):
return [name + '_' + k + '_' + agg for k in aggs.keys() for agg in aggs[k]]
###Output
_____no_output_____
###Markdown
Load Data
###Code
historical_trans_df = pd.read_csv('input/historical_transactions.csv')
new_merchant_trans_df = pd.read_csv('input/new_merchant_transactions.csv')
merchant_df = pd.read_csv('input/merchants.csv')[['merchant_id','merchant_group_id','category_4','active_months_lag3','active_months_lag6','active_months_lag12']]
train_df = pd.read_csv('input/train.csv')
test_df = pd.read_csv('input/test.csv')
###Output
_____no_output_____
###Markdown
Basic Preprocessing
###Code
def get_hist_default_prorcessing(df):
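    # Expand purchase_date into calendar features, binarise the Y/N flags, and derive month_diff plus the card's reference month (purchase month minus month_lag).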
df['purchase_date'] = pd.to_datetime(df['purchase_date'])
df['year'] = df['purchase_date'].dt.year
df['weekofyear'] = df['purchase_date'].dt.weekofyear
df['month'] = df['purchase_date'].dt.month
df['dayofweek'] = df['purchase_date'].dt.dayofweek
df['weekend'] = (df.purchase_date.dt.weekday >=5).astype(int)
df['hour'] = df['purchase_date'].dt.hour
df['authorized_flag'] = df['authorized_flag'].map({'Y':1, 'N':0})
df['category_1'] = df['category_1'].map({'Y':1, 'N':0})
df['category_3'] = df['category_3'].map({'A':0, 'B':1, 'C':2})
df['month_diff'] = ((datetime.today() - df['purchase_date']).dt.days)//30
df['month_diff'] += df['month_lag']
df['reference_date'] = (df['year']+(df['month'] - df['month_lag'])//12)*100 + (((df['month'] - df['month_lag'])%12) + 1)*1
return df
historical_trans_df = get_hist_default_prorcessing(historical_trans_df)
new_merchant_trans_df = get_hist_default_prorcessing(new_merchant_trans_df)
merchant_df['category_4'] = merchant_df['category_4'].map({'Y':1, 'N':0})
merchant_df = merchant_df.drop_duplicates(['merchant_id'])
merchant_df['active_ratio3_6'] = (merchant_df['active_months_lag3']/3) / (merchant_df['active_months_lag6']/6)
merchant_df['active_ratio3_12'] = (merchant_df['active_months_lag3']/3) / (merchant_df['active_months_lag12']/12)
aggs = {}
aggs['active_ratio3_6'] = ['max','min','mean','var']
aggs['active_ratio3_12'] = ['mean', 'min', 'max', 'var']
new_columns = create_new_columns('merchant',aggs)
merchant_df1 = merchant_df.groupby('merchant_group_id').agg(aggs)
merchant_df1.columns = new_columns
merchant_df1.reset_index(drop=False,inplace=True)
merchant_df = merchant_df.merge(merchant_df1,on='merchant_group_id')
del merchant_df['merchant_group_id'];
del merchant_df['active_months_lag3']
del merchant_df['active_months_lag6']
del merchant_df['active_months_lag12']
del merchant_df['active_ratio3_6']
del merchant_df['active_ratio3_12']
historical_trans_df = historical_trans_df.merge(merchant_df,on='merchant_id',how='left')
new_merchant_trans_df = new_merchant_trans_df.merge(merchant_df,on='merchant_id',how='left')
#cleanup memory
del merchant_df; del merchant_df1;gc.collect()
historical_trans_df = historical_trans_df.sort_values('purchase_date')
new_merchant_trans_df = new_merchant_trans_df.sort_values('purchase_date')
authorized_transactions = historical_trans_df[historical_trans_df['authorized_flag'] == 1]
historical_transactions = historical_trans_df[historical_trans_df['authorized_flag'] == 0]
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
all_df = pd.concat([train_df,test_df])
group_df = groupby_helper(historical_trans_df,['card_id','month_lag'], 'purchase_amount',['count','mean'])
group_df['card_id_month_lag_purchase_amount_count'] = group_df['card_id_month_lag_purchase_amount_count']/(1-group_df['month_lag'])
group_df['card_id_month_lag_purchase_amount_mean'] = group_df['card_id_month_lag_purchase_amount_mean']/(1-group_df['month_lag'])
del group_df['month_lag']
count_df = groupby_helper(group_df,['card_id'], 'card_id_month_lag_purchase_amount_count',['sum','mean','std'])
mean_df = groupby_helper(group_df,['card_id'], 'card_id_month_lag_purchase_amount_mean',['sum','mean','std'])
all_df = all_df.merge(count_df, on=['card_id'], how='left')
all_df = all_df.merge(mean_df, on=['card_id'], how='left')
group_df = groupby_helper(historical_transactions,['card_id'], 'month',['nunique','max','min','mean','std'],'auth_0')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
del group_df
group_df = groupby_helper(authorized_transactions,['card_id'], 'month',['nunique','max','min','mean','std'],'auth_1')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
del group_df
group_df = groupby_helper(new_merchant_trans_df,['card_id'], 'month',['nunique','max','min','mean'],'new_hist')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
del group_df
gc.collect()
group_df = groupby_helper(historical_trans_df,['card_id'], 'merchant_id',['nunique'])
all_df = all_df.merge(group_df, on=['card_id'], how='left')
del group_df
group_df = groupby_helper(historical_transactions,['card_id'], 'merchant_category_id',['nunique'],'auth_0')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
del group_df
group_df = groupby_helper(authorized_transactions,['card_id'], 'merchant_category_id',['nunique'],'auth_1')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
del group_df
group_df = groupby_helper(historical_transactions,['card_id'], 'subsector_id',['nunique'],'auth_0')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
del group_df
group_df = groupby_helper(authorized_transactions,['card_id'], 'subsector_id',['nunique'],'auth_1')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
del group_df
group_df = groupby_helper(historical_transactions,['card_id'], 'state_id',['nunique'],'auth_0')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
del group_df
group_df = groupby_helper(authorized_transactions,['card_id'], 'state_id',['nunique'],'auth_1')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
del group_df
gc.collect()
group_df = groupby_helper(historical_trans_df,['card_id'], 'city_id',['nunique'])
all_df = all_df.merge(group_df, on=['card_id'], how='left')
group_df = groupby_helper(new_merchant_trans_df,['card_id'], 'merchant_category_id',['nunique'],'new_hist')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
group_df = groupby_helper(new_merchant_trans_df,['card_id'], 'subsector_id',['nunique'],'new_hist')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
group_df = groupby_helper(new_merchant_trans_df,['card_id'], 'state_id',['nunique'],'new_hist')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
aggs = {}
for col in ['hour', 'weekofyear', 'dayofweek', 'year']:
aggs[col] = ['nunique', 'mean', 'min', 'max']
aggs['purchase_amount'] = ['sum','max','min','mean','var']
aggs['installments'] = ['sum','max','min','mean','var']
aggs['purchase_date'] = ['max','min']
aggs['month_lag'] = ['max','min','mean','var']
aggs['month_diff'] = ['mean', 'max', 'min', 'var']
aggs['weekend'] = ['sum', 'mean', 'min', 'max']
aggs['category_1'] = ['sum', 'mean', 'min', 'max']
aggs['authorized_flag'] = ['sum', 'mean', 'min', 'max']
#aggs['category_2'] = ['sum', 'mean', 'min', 'max']
#aggs['category_3'] = ['sum', 'mean', 'min', 'max']
aggs['card_id'] = ['size']
aggs['reference_date'] = ['median']
new_columns = create_new_columns('hist',aggs)
historical_trans_group_df = historical_trans_df.groupby('card_id').agg(aggs)
historical_trans_group_df.columns = new_columns
historical_trans_group_df.reset_index(drop=False,inplace=True)
historical_trans_group_df['hist_purchase_date_diff'] = (historical_trans_group_df['hist_purchase_date_max'] - historical_trans_group_df['hist_purchase_date_min']).dt.days
historical_trans_group_df['hist_purchase_date_average'] = historical_trans_group_df['hist_purchase_date_diff']/historical_trans_group_df['hist_card_id_size']
historical_trans_group_df['hist_purchase_date_uptonow'] = (datetime.today() - historical_trans_group_df['hist_purchase_date_max']).dt.days
historical_trans_group_df['hist_purchase_date_uptomin'] = (datetime.today() - historical_trans_group_df['hist_purchase_date_min']).dt.days
all_df = all_df.merge(historical_trans_group_df, on=['card_id'], how='left')
aggs = {}
for col in ['hour', 'weekofyear', 'dayofweek', 'year']:
aggs[col] = ['nunique', 'mean', 'min', 'max']
aggs['purchase_amount'] = ['sum','max','min','mean','var']
aggs['installments'] = ['sum','max','min','mean','var']
aggs['purchase_date'] = ['max','min']
aggs['month_lag'] = ['max','min','mean','var']
aggs['month_diff'] = ['mean', 'max', 'min', 'var']
aggs['weekend'] = ['sum', 'mean', 'min', 'max']
aggs['category_1'] = ['sum', 'mean', 'min', 'max']
#aggs['authorized_flag'] = ['sum', 'mean', 'min', 'max']
#aggs['category_2'] = ['sum', 'mean', 'min', 'max']
#aggs['category_3'] = ['sum', 'mean', 'min', 'max']
aggs['card_id'] = ['size']
aggs['reference_date'] = ['median']
new_columns = create_new_columns('new_hist',aggs)
new_merchant_trans_group_df = new_merchant_trans_df.groupby('card_id').agg(aggs)
new_merchant_trans_group_df.columns = new_columns
new_merchant_trans_group_df.reset_index(drop=False,inplace=True)
new_merchant_trans_group_df['new_hist_purchase_date_diff'] = (new_merchant_trans_group_df['new_hist_purchase_date_max'] - new_merchant_trans_group_df['new_hist_purchase_date_min']).dt.days
new_merchant_trans_group_df['new_hist_purchase_date_average'] = new_merchant_trans_group_df['new_hist_purchase_date_diff']/new_merchant_trans_group_df['new_hist_card_id_size']
new_merchant_trans_group_df['new_hist_purchase_date_uptonow'] = (datetime.today() - new_merchant_trans_group_df['new_hist_purchase_date_max']).dt.days
new_merchant_trans_group_df['new_hist_purchase_date_uptomin'] = (datetime.today() - new_merchant_trans_group_df['new_hist_purchase_date_min']).dt.days
#merge with train, test
all_df = all_df.merge(new_merchant_trans_group_df, on=['card_id'], how='left')
def get_train_default_prorcessing(df):
df['first_active_month'] = pd.to_datetime(df['first_active_month'])
df['dayofweek'] = df['first_active_month'].dt.dayofweek
df['weekofyear'] = df['first_active_month'].dt.weekofyear
df['dayofyear'] = df['first_active_month'].dt.dayofyear
df['quarter'] = df['first_active_month'].dt.quarter
df['is_month_start'] = df['first_active_month'].dt.is_month_start
df['month'] = df['first_active_month'].dt.month
df['year'] = df['first_active_month'].dt.year
#df['elapsed_time'] = (datetime(2018, 2, 1).date() - df['first_active_month'].dt.date).dt.days
df['elapsed_time'] = (datetime.today() - df['first_active_month']).dt.days
df['hist_first_buy'] = (df['hist_purchase_date_min'] - df['first_active_month']).dt.days
df['hist_last_buy'] = (df['hist_purchase_date_max'] - df['first_active_month']).dt.days
df['new_hist_first_buy'] = (df['new_hist_purchase_date_min'] - df['first_active_month']).dt.days
df['new_hist_last_buy'] = (df['new_hist_purchase_date_max'] - df['first_active_month']).dt.days
df['year_month'] = df['year']*100 + df['month']
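# the next features measure month gaps between two YYYYMM-encoded integers:
# 12*(difference of the year parts) + (difference of the month parts)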
df['hist_diff_reference_date_first'] = 12*(df['hist_reference_date_median']//100 - df['year_month']//100) + (df['hist_reference_date_median']%100 - df['year_month']%100)
df['hist_diff_reference_date_last'] = 12*(df['hist_purchase_date_max'].dt.year - df['year_month']//100) + (df['hist_purchase_date_max'].dt.month - df['year_month']%100)
df['new_hist_diff_reference_date_first'] = 12*(df['new_hist_reference_date_median']//100 - df['year_month']//100) + (df['new_hist_reference_date_median']%100 - df['year_month']%100)
df['new_hist_diff_reference_date_last'] = 12*(df['new_hist_purchase_date_max'].dt.year - df['year_month']//100) + (df['new_hist_purchase_date_max'].dt.month - df['year_month']%100)
df['hist_diff_first_last'] = df['hist_diff_reference_date_first'] - df['hist_diff_reference_date_last']
df['new_hist_diff_first_last'] = df['new_hist_diff_reference_date_first'] - df['new_hist_diff_reference_date_last']
df['diff_new_hist_date_min_max'] = (df['new_hist_purchase_date_min'] - df['hist_purchase_date_max']).dt.days
df['diff_new_hist_date_max_max'] = (df['new_hist_purchase_date_max'] - df['hist_purchase_date_max']).dt.days
df['hist_flag_ratio'] = df['hist_authorized_flag_sum'] / df['hist_card_id_size']
#df['new_flag_ratio'] = df['new_hist_authorized_flag_sum'] / df['new_hist_card_id_size']
#df['new_hist_flag_ratio'] = 1/(1+df['hist_flag_ratio'])
for f in ['hist_purchase_date_max','hist_purchase_date_min','new_hist_purchase_date_max',\
'new_hist_purchase_date_min']:
df[f] = df[f].astype(np.int64) * 1e-9
df['card_id_total'] = df['new_hist_card_id_size']+df['hist_card_id_size']
df['purchase_amount_total'] = df['new_hist_purchase_amount_sum']+df['hist_purchase_amount_sum']
del df['year']
del df['year_month']
del df['new_hist_reference_date_median']
return df
all_df = get_train_default_prorcessing(all_df)
group_df = groupby_helper(new_merchant_trans_df,['card_id'], 'authorized_flag',['sum'],'new_hist')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
all_df['new_flag_ratio'] = all_df['new_hist_card_id_authorized_flag_sum'] / all_df['new_hist_card_id_size']
all_df['new_hist_flag_ratio'] = all_df['new_flag_ratio']/(all_df['new_flag_ratio']+all_df['hist_flag_ratio'])
del all_df['new_hist_card_id_authorized_flag_sum']
group_df = groupby_helper(historical_trans_df,['card_id'], 'purchase_date',['min'],'hist')
all_df = all_df.merge(group_df, on=['card_id'], how='left')
# flag cards whose first_active_month is later than their earliest historical purchase
all_df['invalid_card_date'] = 0
all_df.loc[all_df['first_active_month']<all_df['hist_card_id_purchase_date_min'],'invalid_card_date'] = 1
del all_df['hist_card_id_purchase_date_min']
all_df['invalid_card_date'].value_counts()
###Output
['card_id'] purchase_date ['min']
###Markdown
Test features (do not use)
###Code
def successive_aggregates(df, field1, field2,prefix=''):
t = df.groupby(['card_id', field1])[field2].mean()
u = pd.DataFrame(t).reset_index().groupby('card_id')[field2].agg(['mean', 'min', 'max', 'std'])
u.columns = [prefix + field1 + '_' + field2 + '_' + col for col in u.columns.values]
u.reset_index(inplace=True)
return u
additional_fields = successive_aggregates(new_merchant_trans_df, 'category_1', 'purchase_amount','successive_')
additional_fields = additional_fields.merge(successive_aggregates(new_merchant_trans_df, 'installments', 'purchase_amount','successive_'),
on = 'card_id', how='left')
additional_fields = additional_fields.merge(successive_aggregates(new_merchant_trans_df, 'city_id', 'purchase_amount','successive_'),
on = 'card_id', how='left')
additional_fields = additional_fields.merge(successive_aggregates(new_merchant_trans_df, 'category_1', 'installments','successive_'),
on = 'card_id', how='left')
all_df = all_df.merge(additional_fields, on=['card_id'], how='left')
group_df = groupby_helper(historical_trans_df,['card_id','category_1'], 'purchase_amount',['mean','count'])
mean_df = groupby_helper(group_df,['card_id'], 'card_id_category_1_purchase_amount_mean',['min', 'max'])
count_df = groupby_helper(group_df,['card_id'], 'card_id_category_1_purchase_amount_count',['min', 'max'])
#all_df = all_df.merge(group_df, on=['card_id'], how='left')
all_df = all_df.merge(mean_df, on=['card_id'], how='left')
all_df = all_df.merge(count_df, on=['card_id'], how='left')
all_df['diff_category_1_puarchase_amount_mean'] = all_df['card_id_card_id_category_1_purchase_amount_mean_max'] - all_df['card_id_card_id_category_1_purchase_amount_mean_min']
all_df['diff_category_1_puarchase_amount_count'] = all_df['card_id_card_id_category_1_purchase_amount_count_max'] - all_df['card_id_card_id_category_1_purchase_amount_count_min']
###Output
_____no_output_____
###Markdown
Modeling
###Code
for col in all_df.columns:
if col.find('diff_category_1_puarchase_amount') != -1:
print(col)
del all_df[col]
train_df = all_df.loc[all_df['target'].notnull()].copy()  # .copy() avoids SettingWithCopyWarning when adding columns below
test_df = all_df.loc[all_df['target'].isnull()].copy()
train_df['outliers'] = 0
train_df.loc[train_df['target'] < -30, 'outliers'] = 1
train_df['outliers'].value_counts()
for f in ['feature_1','feature_2','feature_3']:
order_label = train_df.groupby([f])['outliers'].mean()
train_df[f] = train_df[f].map(order_label)
test_df[f] = test_df[f].map(order_label)
group_df = groupby_helper(train_df,['dayofyear'], 'outliers',['mean'])
train_df = train_df.merge(group_df, on=['dayofyear'], how='left')
test_df = test_df.merge(group_df, on=['dayofyear'], how='left')
group_df = groupby_helper(train_df,['elapsed_time'], 'outliers',['mean'])
train_df = train_df.merge(group_df, on=['elapsed_time'], how='left')
test_df = test_df.merge(group_df, on=['elapsed_time'], how='left')
train_columns = [c for c in train_df.columns if c not in ['card_id', 'first_active_month','target','outliers']]
train_columns
train = train_df.copy()
target = train['target']
del train['target']
param = {'num_leaves': 31,
'min_data_in_leaf': 30,
'objective':'regression',
'max_depth': -1,
'learning_rate': 0.015,
"min_child_samples": 20,
"boosting": "gbdt",
"feature_fraction": 0.9,
"bagging_freq": 1,
"bagging_fraction": 0.9 ,
"bagging_seed": 11,
"metric": 'rmse',
"lambda_l1": 0.1,
"verbosity": -1,
"nthread": 24,
"seed": 4950}
#prepare fit model with cross-validation
np.random.seed(2019)
folds = StratifiedKFold(n_splits=9, shuffle=True, random_state=4950)
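# the folds are stratified on the binary 'outliers' flag (passed to folds.split below),
# so every fold keeps a similar share of the extreme targets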
oof = np.zeros(len(train))
predictions = np.zeros(len(test_df))
feature_importance_df = pd.DataFrame()
for fold_, (trn_idx, val_idx) in enumerate(folds.split(train, train['outliers'].values)):
strLog = "fold {}".format(fold_+1)
print(strLog)
trn_data = lgb.Dataset(train.iloc[trn_idx][train_columns], label=target.iloc[trn_idx])#, categorical_feature=categorical_feats)
val_data = lgb.Dataset(train.iloc[val_idx][train_columns], label=target.iloc[val_idx])#, categorical_feature=categorical_feats)
num_round = 10000
clf = lgb.train(param, trn_data, num_round, valid_sets = [trn_data, val_data], verbose_eval=100, early_stopping_rounds = 100)
oof[val_idx] = clf.predict(train.iloc[val_idx][train_columns], num_iteration=clf.best_iteration)
#feature importance
fold_importance_df = pd.DataFrame()
fold_importance_df["Feature"] = train_columns
fold_importance_df["importance"] = clf.feature_importance()
fold_importance_df["fold"] = fold_ + 1
feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
#predictions
predictions += clf.predict(test_df[train_columns], num_iteration=clf.best_iteration) / folds.n_splits
cv_score = np.sqrt(mean_squared_error(oof, target))
print(cv_score)
withoutoutlier_predictions = predictions.copy()
filename = '{}_cv{:.6f}'.format(datetime.now().strftime('%Y%m%d_%H%M%S'), cv_score)
cols = (feature_importance_df[["Feature", "importance"]]
.groupby("Feature")
.mean()
.sort_values(by="importance", ascending=False)[:1000].index)
best_features = feature_importance_df.loc[feature_importance_df.Feature.isin(cols)]
plt.figure(figsize=(14,26))
sns.barplot(x="importance", y="Feature", data=best_features.sort_values(by="importance",ascending=False))
plt.title('LightGBM Features (averaged over folds)')
plt.tight_layout()
plt.savefig('fi/{}_lgbm_importances.png'.format(filename))
sub_df = pd.DataFrame({"card_id":test_df["card_id"].values})
sub_df['target'] = predictions
#sub_df.loc[sub_df['target']<-9,'target'] = -33.21928095
sub_df.to_csv("output/combine_submission_{}.csv".format(filename), index=False)
###Output
_____no_output_____
###Markdown
This actually lowered the score
###Code
from scipy.stats import ks_2samp
import tqdm
list_p_value =[]
for i in tqdm.tqdm(train_columns):
list_p_value.append(ks_2samp(test_df[i] , train[i])[1])
Se = pd.Series(list_p_value, index = train_columns).sort_values()
list_discarded = list(Se[Se < .1].index)
print(list_discarded)
# removing items from a list while iterating over it skips elements; rebuild the list instead
train_columns = [col for col in train_columns if col not in list_discarded]
###Output
100%|██████████| 163/163 [00:07<00:00, 22.00it/s] |
CTforGA_gathering/.ipynb_checkpoints/GatheringTheData-checkpoint.ipynb | ###Markdown
Course: Computational Thinking for Governance Analytics
Prof. José Manuel Magallanes, PhD
* Visiting Professor of Computational Policy at Evans School of Public Policy and Governance, and eScience Institute Senior Data Science Fellow, University of Washington.
* Professor of Government and Political Methodology, Pontificia Universidad Católica del Perú.
_____
Data Preprocessing in Python: Data gathering
0. [The GitHub Repo](part0)
1. [Uploading files](part1a)
2. [APIs](part2)
3. [Scraping](part3)
4. [Social media](part4)
____
0. The GitHub Repo
One of the first steps you should take when preparing an analytics project is to know where to read your files from. One usually gets a file from an email, or finds it on a website, and just downloads it to the local computer. However, unlike most software such as SPSS, EXCEL and STATA, **Python** or **R** need to know the **path** to the file precisely. Sometimes you are not aware of the folders' **tree** in your computer. For instance, once I have opened this Jupyter Notebook, I can find its path like this:
###Code
import os
os.getcwd() # this is a comment, Python will not pay attention to this.
###Output
_____no_output_____
###Markdown
You may not see the same path as the one above. It depends on at least a couple of things: the location of the folder you created in your computer, and the **operating system** you are using (i.e. Windows, OSX, Linux). There is nothing wrong in using the files in a folder in your computer, but I recommend creating a *repo*sitory in GitHub for your _CTGA_ course, and synchronizing that repo with your computer folder. Let's follow these steps:
1. Go to [github.com](https://github.com/), and sign up.
2. Install the GitHub desktop app in your computer. It is available [here](https://desktop.github.com/). Sign in to the app.
3. Once you are signed in to the GitHub web, create a repository. Choose a name. **DO NOT forget** to select the option to add a READ ME file, and choose a **LICENSE** too (I recommend MIT).
4. **Clone** the repository you created into your computer. You can select where to create it.
5. Go to this [link](https://drive.google.com/drive/folders/1nfr3eHiTQVg7rcgVvXaxAbjdo0dCLqjn?usp=sharing) and download all the files you see there into the folder with the GitHub repository you just created.
6. **Push** the files in the local folder to your GitHub repo ('the cloud').
Once your local folder and the cloud are synchronized, you may go to the cloud repo and find the link to the data. This requires that you click on the name of the file in the repo, and get the link from the **download** or **raw** option; **DO NOT** use the link you see on the URL.
[home](home)
____
1. Uploading 'proprietary software' files
Several times, you may find that you are given a file that was previously prepared with proprietary software. The most common in the policy field are:
* SPSS (file extension: **sav**).
* STATA (file extension: **dta**).
* EXCEL (file extension: **xlsx** or **xls**).
Getting these files up and running is the easiest case, as they are often well organized and do not bring many pre-processing challenges. They generally have the data organized in tables, where rows are the cases and columns are the variables. This structure is known by the data science community as the **data frame**. This data structure is not native to Python (it is in R), so when using Python we will need to use **Pandas**. Let me use the links I have from my own repo, but this is the perfect time for you to check if Python can access your files from the cloud. Let's save the file locations first:
###Code
cloudLocation_AllMyFiles='https://github.com/Ditha123/governance_analytics/raw/main/CTforGA_gathering/' # point at the raw/download location (note the trailing '/'), not the browser URL
linkToSTATA_File=cloudLocation_AllMyFiles+'hsb_ok.dta' # '+' concatenates text.
linkToSPSS_File=cloudLocation_AllMyFiles+'hsb_ok.sav'
linkToEXCEL_File=cloudLocation_AllMyFiles+'hsb_ok.xlsx'
###Output
_____no_output_____
###Markdown
Now, make sure you have **Pandas** installed:
###Code
!pip show pandas
###Output
Name: pandas
Version: 1.3.4
Summary: Powerful data structures for data analysis, time series, and statistics
Home-page: https://pandas.pydata.org
Author: The Pandas Development Team
Author-email: [email protected]
License: BSD-3-Clause
Location: /opt/anaconda3/lib/python3.9/site-packages
Requires: python-dateutil, pytz, numpy
Required-by: statsmodels, seaborn
###Markdown
If you do **not have** Pandas, you can also install it using the Anaconda Navigator or via the Terminal. This code can install it from Jupyter notebook:
###Code
#!pip install pandas
###Output
_____no_output_____
###Markdown
Since I have Pandas, I commented out the code above (note the **#** at the start of the line). Please do the same every time you have code you do not need to execute more than once. Notice that the **show** command also informed the version I have. If you have an older version of Pandas than the one I have, you can update yours like this:
###Code
#!pip install pandas -U
###Output
_____no_output_____
###Markdown
____ 1.1 Reading STATA and EXCEL: Most files are easy to read in. In this case, STATA will give no extra work, but reading in an Excel file requires the library **xlrd** (for xls) or the library **openpyxl** (for xlsx); just check if you have them, otherwise install them using **!pip install**.
###Code
import pandas as pd # activating pandas
dfStata=pd.read_stata(linkToSTATA_File)
dfExcel=pd.read_excel(linkToEXCEL_File,usecols='B:P') #this omits first column (row number from Excel).
#you can also use:
# import pandas
# dfStata=pandas.read_stata(linkToSTATA_File)
# dfExcel=pandas.read_excel(linkToEXCEL_File)
###Output
_____no_output_____
###Markdown
Above, you are telling Python to use a function from Pandas. The objects **dfStata** and **dfExcel** are data frames (we check this in the next cell). Generally, these input functions also have a reciprocal output function (for example, `DataFrame.to_stata`).
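As a quick sketch of those reciprocal writers (the file names below are just placeholders):

```python
# read_* functions generally have a matching DataFrame.to_* writer
dfExcel.to_excel('hsb_copy.xlsx', index=False)  # write back to Excel
dfStata.to_csv('hsb_copy.csv', index=False)     # or export to any other supported format
```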
###Code
type(dfStata) , type(dfExcel)
###Output
_____no_output_____
###Markdown
They should have the same information:
###Code
dfStata.shape, dfExcel.shape
###Output
_____no_output_____
###Markdown
You can check some rows from both:
###Code
dfStata.head()
dfExcel.tail()
#?pd.read_excel
###Output
_____no_output_____
###Markdown
Remember that an Excel file can have **several sheets**, so you need to call each by its location (number) or name (label):
###Code
fileXLSXs=cloudLocation_AllMyFiles+'WA_COVID19_Cases_Hospitalizations_Deaths.xlsx'
#opening each in a different data frame.
dataExcelCovidCases=pd.read_excel(fileXLSXs,sheet_name='Cases')
dataExcelCovidDeaths=pd.read_excel(fileXLSXs,sheet_name='Deaths')
dataExcelCovidHospitalizations=pd.read_excel(fileXLSXs,sheet_name='Hospitalizations')
dataExcelCovidDataDictionary=pd.read_excel(fileXLSXs,sheet_name='Data Dictionary')
###Output
_____no_output_____
###Markdown
Let's check one of the dataframes:
###Code
dataExcelCovidDataDictionary
###Output
_____no_output_____
###Markdown
If you need some more space to see cell contents:
###Code
# if you need more space:
pd.set_option('max_colwidth', 800)
dataExcelCovidDataDictionary
###Output
_____no_output_____
###Markdown
[home](home)____ 1.2 Reading an SPSS file Pandas has a **read_spss** function. But, before using it, verify you have the library **[pyreadstat](https://github.com/Roche/pyreadstat)** installed.
###Code
!pip show pyreadstat
###Output
_____no_output_____
###Markdown
Notice that **pyreadstat** cannot read from a URL, so we need some extra coding:
###Code
# to open link:
from urllib.request import urlopen
response = urlopen(linkToSPSS_File)
# reading contents
content = response.read()
# opening a new file locally
fhandle = open('savingLocally.sav', 'wb') # 'wb' = write in binary mode
# saving contents in that file
fhandle.write(content)
# closing the file
fhandle.close()
###Output
_____no_output_____
###Markdown
If you see the folder where you are writing this code, you should find that the file from the url is now in your computer. Now just read it:
###Code
dataSPSS= pd.read_spss("savingLocally.sav")
# here it is
dataSPSS
###Output
_____no_output_____
###Markdown
[home](home)
____
2. Collecting data from APIs
Open data portals from the government and other organizations have APIs, a service that allows you to collect their data. Let's take a look at Seattle's data about [Seattle Real Time Fire 911 Calls](https://dev.socrata.com/foundry/data.seattle.gov/kzjm-xkqj):
###Code
from IPython.display import IFrame
fromAPI="https://dev.socrata.com/foundry/data.seattle.gov/kzjm-xkqj"
IFrame(fromAPI, width=900, height=500)
###Output
_____no_output_____
###Markdown
That page tells you how to get the data into pandas. But first, you need to install **sodapy**. Then you can continue:
###Code
#!pip install sodapy
###Output
_____no_output_____
###Markdown
Let's follow some steps, according to the API:
###Code
from sodapy import Socrata
# Unauthenticated client (using 'None')
client = Socrata("data.seattle.gov", None)
# If you have credentials:
# client = Socrata(data.seattle.gov,
# MyAppToken,
# username="[email protected]",
# password="AFakePassword")
# First 500 results, returned as JSON from API / converted to Python list of
# dictionaries by sodapy.
results = client.get("kzjm-xkqj", limit=2000)
# Convert to pandas DataFrame
results_df = pd.DataFrame.from_records(results)
###Output
_____no_output_____
###Markdown
You can see the results now:
###Code
results_df
###Output
_____no_output_____
###Markdown
Data from APIs may need some more pre processing than the previous files. Besides, you should study the API documentation to know how to interact with the portal. Not every open data portal behaves the same. [home](home)____ 3. Scraping Sometimes you are interested in data from the web. Let me get a table from wikipedia:
###Code
from IPython.display import IFrame
wikiLink="https://en.wikipedia.org/wiki/Democracy_Index"
IFrame(wikiLink, width=700, height=300)
###Output
_____no_output_____
###Markdown
I will use pandas to get the table, but you need to install these first:
* html5lib
* beautifulsoup4
* lxml
###Code
dataWIKI=pd.read_html(wikiLink,header=0,flavor='bs4',attrs={'class': 'wikitable'})
###Output
_____no_output_____
###Markdown
Pandas has this command **read_html** that will save lots of coding; above I just said:
* The link to the webpage.
* The position of the header.
* The external library that will be used to extract the text (_flavor_).
* The attributes of the table.
dataWIKI is not a data frame:
###Code
type(dataWIKI)
###Output
_____no_output_____
###Markdown
The command **read_html** returns all the elements from the link with the same attributes. Let's see how many there are:
###Code
len(dataWIKI)
###Output
_____no_output_____
###Markdown
This means you have six tables. Is ours the first one?
###Code
# remember that Python starts counting in ZERO!
dataWIKI[0]
###Output
_____no_output_____
###Markdown
It is the 6th table:
###Code
dataWIKI[5]
###Output
_____no_output_____
###Markdown
Tables scrapped will bring different cleaning challenges. [home](home)____ Social media data Social media offer APIs too that allow you to get _some_ data. In general, you need to register as a developer. Once you are a confirmed developer, Twitter, Facebook and others will allow you to get _some_ of their data (the more you pay the more they offer). Let's pay attention to Twitter. First, install **tweepy**
###Code
#!pip install tweepy
###Output
_____no_output_____
###Markdown
Tweepy is the key library, but you may need several other libraries according to your goals.
###Code
import tweepy
###Output
_____no_output_____
###Markdown
Let me introduce myself to Twitter:
###Code
# credentials
consumer_key = 'XXX'
consumer_secret = 'YYY'
access_token = 'ZZZ'
access_token_secret = 'AAA'
# introducing myself:
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api=tweepy.API(auth, wait_on_rate_limit=True,timeout=60,
parser=tweepy.parsers.JSONParser())
###Output
_____no_output_____
###Markdown
Let me ask for some tweets from a particular user:
###Code
who='@Minsa_Peru'
howMany=20
gottenTweets = api.user_timeline(screen_name = who,
count = howMany,
include_rts = False,
tweet_mode="extended")
###Output
_____no_output_____
###Markdown
In the previous cases, I got a table (a data frame), you should always check what you have:
###Code
type(gottenTweets)
###Output
_____no_output_____
###Markdown
I have a list, then I could ask how many tweets I got (just to confirm):
###Code
len(gottenTweets)
###Output
_____no_output_____
###Markdown
Let me view what I have in the first tweet:
###Code
gottenTweets[0]
###Output
_____no_output_____
###Markdown
It will take some time to become familiar with a [tweet object structure](https://developer.twitter.com/en/docs/twitter-api/data-dictionary/object-model/tweet). Let's find out how the tweets are currently stored:
###Code
type(gottenTweets[0])
###Output
_____no_output_____
###Markdown
Now you know that each tweet is stored as a dictionary. Let me see the dict **keys**:
###Code
gottenTweets[0].keys()
###Output
_____no_output_____
###Markdown
Let me recover some info from each tweet:
###Code
dates=[t['created_at'] for t in gottenTweets]
ids=[t['id'] for t in gottenTweets]
rts=[t['retweet_count'] for t in gottenTweets]
likes=[t['favorite_count'] for t in gottenTweets]
text=[t['full_text'] for t in gottenTweets]
###Output
_____no_output_____
###Markdown
Each of the objects created is a list (dates, ids,rts,likes and text). Let me show you one:
###Code
text
###Output
_____no_output_____
###Markdown
Let me create a data frame with those lists:
###Code
tweetsAsDF=pd.DataFrame({'dates':dates,'ids':ids,'rts':rts,'likes':likes,'text':text})
tweetsAsDF
###Output
_____no_output_____ |
code/app_demo/Predict.ipynb | ###Markdown
Demo ML application
In this notebook, we will predict whether a wine is high quality given its chemical attributes. This notebook depends on the pickle output of the "Model Build" notebook.
Launch the app
If the app is not running, the following will break!
###Code
import os
import requests
# if you have a proxy...
os.environ['NO_PROXY'] = 'localhost'
# test if it's running
url = "http://localhost:5000"
test_url = "%s/test" % url
pred_url = "%s/predict" % url
# print the GET result
print(requests.get(test_url).json()['message'])
###Output
App is live!
###Markdown
Load the data
###Code
import pandas as pd
data = pd.read_csv('data/allwine.csv')
ground_truth = data.pop('quality')
print(data.shape)
data.head()
###Output
(5320, 12)
###Markdown
Generate predictionsNow that we have a live application, let's generate some predictions
###Code
predictions = requests.post(pred_url, json={'data': data.values.tolist()}).json()  # .values replaces the deprecated .as_matrix()
# append predictions to the table
predicted = data.copy()
predicted['predicted_quality'] = predictions['predictions']
predicted.head()
###Output
_____no_output_____
###Markdown
How can we use this?
It's not enough to simply build a good model. There has to be a valuable business problem to solve! What if we could recommend new wines on the basis of their predicted quality?
###Code
def recommend_top_k_wines(wines, k=10):
    return wines.sort_values('predicted_quality',
                             ascending=False).head(k)  # keep only the k best-scoring wines
def recommend_top_k_reds(wines, k=10):
    return recommend_top_k_wines(wines.loc[wines.red == 1.0], k)
def recommend_top_k_whites(wines, k=10):
    return recommend_top_k_wines(wines.loc[wines.red == 0.0], k)
top_10_reds = recommend_top_k_reds(predicted, 10)
top_10_reds.head()
top_10_whites = recommend_top_k_whites(predicted, 10)
top_10_whites.head()
###Output
_____no_output_____ |
src/NewRecommenderTest.ipynb | ###Markdown
This is a notebook for training the model, calculating association rules and model fidelityBecause these steps take a long time to calculate, results can be cached locally using .pickle files. To choose whether a calculation should be repeated or its results should be loaded from cache, use the selections below
###Code
development_set = False
retrain_model = True
recalculate_topn = False
recalculate_association_rules = False
if retrain_model:
if development_set:
filename = "../data/ml_100k/ratings.csv"
else:
filename = "../data/ml-20m/ratings.csv"
ratings_df = pd.read_csv(filename, dtype={
'userId': np.int32,
'movieId': np.int32,
'rating': np.float32,
'timestamp': np.int32,
})
reader = Reader(rating_scale=(1, 5))
data = Dataset.load_from_df(ratings_df[['userId', 'movieId', 'rating']], reader)
trainset, testset = train_test_split(data, test_size=.25, random_state=42)
algo = EditableSVD(n_factors=50, n_epochs=20)
train_start_time = datetime.datetime.now()
algo.fit(trainset)
train_end_time = datetime.datetime.now()
print("Training duration: " + str(train_end_time - train_start_time))
predictions = algo.test(testset)
accuracy.rmse(predictions)
print(accuracy)
if development_set:
with open("algo-100k.pickle", "wb+") as fp:
pickle.dump(algo, fp)
else:
with open("algo-20m.pickle", "wb+") as fp:
pickle.dump(algo, fp)
else:
if development_set:
with open("algo-100k.pickle", "rb") as fp:
algo = pickle.load(fp)
else:
with open("algo-20m.pickle", "rb") as fp:
algo = pickle.load(fp)
accuracy.mae(predictions)
# Calculate top-n recommendations to all users
if recalculate_topn:
n = 30
top_n = {}
all_users = algo.trainset._raw2inner_id_users.keys()
all_items = algo.trainset._raw2inner_id_items.keys()
top_n_start_time = datetime.datetime.now()
for index, u in enumerate(all_users):
user_recommendations = []
for i in all_items:
prediction = algo.predict(u, i)
user_recommendations.append((prediction.iid, prediction.est))
user_recommendations.sort(key=lambda x: x[1], reverse=True)
top_n[u] = user_recommendations[:n]
if index == 100 or index % 1000 == 0: # Debug
print(str(index) + ": " + str(datetime.datetime.now()))
top_n_end_time = datetime.datetime.now()
print("Top-n calculation duration: " + str(top_n_end_time - top_n_start_time))
if development_set:
with open("top_n-100k.pickle", "wb+") as fp:
pickle.dump(top_n, fp)
else:
with open("top_n30-20m.pickle", "wb+") as fp:
pickle.dump(top_n, fp)
else:
if development_set:
with open("top_n-100k.pickle", "rb") as fp:
top_n = pickle.load(fp)
else:
with open("top_n20-20m.pickle", "rb") as fp:
top_n = pickle.load(fp)
top_n_items = [ [x[0] for x in row] for row in top_n.values()]
top_n_items[1]
if recalculate_association_rules:
# Calculate association rules from top-n
te = TransactionEncoder()
te_ary = te.fit(top_n_items).transform(top_n_items, sparse=True)
topn_df = pd.DataFrame.sparse.from_spmatrix(te_ary, columns=te.columns_)
print("Sparse df created")
apriori_start_time = datetime.datetime.now()
frequent_itemsets = apriori(topn_df, min_support=0.005, verbose=1, low_memory=True, use_colnames=True, max_len=4)
apriori_end_time = datetime.datetime.now()
print("Apriori duration: " + str(apriori_end_time - apriori_start_time))
frequent_itemsets['length'] = frequent_itemsets['itemsets'].apply(lambda x: len(x))
print(frequent_itemsets)
frequent_itemsets[(frequent_itemsets['length'] == 2)]
rules = association_rules(frequent_itemsets)
if development_set:
with open("association-rules-100k.pickle", "wb+") as fp:
pickle.dump(rules, fp)
else:
with open("association-rules-25m-n30.pickle", "wb+") as fp:
pickle.dump(rules, fp)
else:
if development_set:
with open("association-rules-100k.pickle", "rb") as fp:
rules = pickle.load(fp)
else:
with open("association-rules-20m.pickle", "rb") as fp:
rules = pickle.load(fp)
rules['consequents_length'] = rules['consequents'].apply(lambda x: len(x))
rules['antecedents_length'] = rules['antecedents'].apply(lambda x: len(x))
filtered_rules = rules[(rules['support'] > 0.05) &
(rules['confidence'] > 0.3) & (rules['antecedents_length'] < 4) & (rules['consequents_length'] == 1)]
print(filtered_rules)
movieId = 2959
filtered_rules.loc[filtered_rules['consequents'].apply(lambda cons: True if movieId in cons else False)]
# Calculate model fidelity
mf_start_time = datetime.datetime.now()
recommendations_amount = 0
explainable_amount = 0
for k, (u, recommendations) in enumerate(top_n.items()):
if k % 10 == 3: # Only take a small sample
if k % 10000 == 3:
print(k)
for (i, rating) in recommendations:
if (i not in [row[0] for row in algo.trainset.ur[u]]):
recommendations_amount += 1
rows = filtered_rules.loc[filtered_rules['consequents']
.apply(lambda cons: True if i in cons else False)]
for index, row in rows.iterrows():
antecedents = list(row['antecedents'])
user_ratings = [ algo.trainset.to_raw_iid(x[0]) for x in algo.trainset.ur[algo.trainset.to_inner_uid(u)] ]
if all([x in user_ratings for x in antecedents]):
explainable_amount += 1
break;
mf_end_time = datetime.datetime.now()
print("Model fidelity calculation duration: " + str(mf_end_time - mf_start_time))
model_fidelity = explainable_amount / recommendations_amount
print(model_fidelity)
###Output
3
10003
20003
30003
40003
50003
60003
70003
80003
90003
100003
110003
120003
130003
Model fidelity calculation duration: 0:08:14.717013
0.0649594417536473
|
day01/demo_vectors_matrices.ipynb | ###Markdown
Vectors and Matrices in NumPy - Adding two Python lists together does not do vector addition...
###Code
u = [1,5,2,9]
v = [3,6,0,-5]
print("v + u = ", v + u)
###Output
v + u = [3, 6, 0, -5, 1, 5, 2, 9]
###Markdown
- Fortunately, people have provided open-source packages for linear algebra operations in Python
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Vectors
###Code
v = np.array([1,5,2,9])
u = np.array([3,6,0,-5])
# vector addition
print("v+u = ", v+u)
# vector scaling
print("3v = ", 3*v)
# Dot-Product
print("u dot v = ", np.dot(u,v))
print('Or u dot v = ', u.dot(v))
# Length / L2 Norm of a vector
print("sqrt(v dot v) = %.2f" % np.sqrt(np.dot(v,v)))
print("||v|| = %.2f" % np.linalg.norm(v))
###Output
v+u = [ 4 11 2 4]
3v = [ 3 15 6 27]
u dot v = -12
Or u dot v = -12
sqrt(v dot v) = 10.54
||v|| = 10.54
###Markdown
Matrices
###Code
M = np.array([ [1,9,-12], [15, -2, 0] ])
print("M = ", M.shape)
print(M)
# matrix addition
A = np.array([ [1, 1], [2, 1] ])
B = np.array([ [0, 8], [7, 11] ])
print("A+B = \n", A+B) # '\n' is the newline character
# matrix scaling
a = 5
print("aB = \n", a*B)
###Output
M = (2, 3)
[[ 1 9 -12]
[ 15 -2 0]]
A+B =
[[ 1 9]
[ 9 12]]
aB =
[[ 0 40]
[35 55]]
###Markdown
More about Matrices
###Code
# matrix multiplicaiton
print("shapes of A and M:", A.shape, M.shape)
C1 = np.matmul(A, M)
C2 = A.dot(M)
C3 = A@M
print("C1 = \n", C1)
print("C2 = \n", C2)
print("C3 = \n", C3)
# matrix transpose
print("M^T = \n", np.transpose(M))
print("M^T = \n", M.transpose())
# matrix inverse
print("A^-1 = \n", np.linalg.inv(A))
###Output
shapes of A and M: (2, 2) (2, 3)
C1 =
[[ 16 7 -12]
[ 17 16 -24]]
C2 =
[[ 16 7 -12]
[ 17 16 -24]]
C3 =
[[ 16 7 -12]
[ 17 16 -24]]
M^T =
[[ 1 15]
[ 9 -2]
[-12 0]]
M^T =
[[ 1 15]
[ 9 -2]
[-12 0]]
A^-1 =
[[-1. 1.]
[ 2. -1.]]
###Markdown
Vectors and Matrices
###Code
v = np.array([1,5,2,9])
# v
print("v", v.shape, " = ", v)
# row vector v
v = v.reshape(1,-1) # shape -1 in np.reshape means value is infered
print("row vector v", v.shape, " = ", v)
# column vector v
v = v.reshape(-1,1)
print("col vector v", v.shape, " = \n", v)
# matrix vector multiplication
A = np.array([[2,0],[0,1],[1,1]])
u = np.array([1,2])
print("u", u.shape, " = \n", u)
print("A", A.shape, " = \n", A)
print("Au = \n", A.dot(u))
u = u.reshape(-1,1)
print("u", u.shape, " = \n", u)
print("Au = \n", A.dot(u))
# inner product as matrix multiplication
vdotv = np.matmul(np.transpose(v), v)
print("v dot v =", vdotv)
print("shape of vdotv", vdotv.shape)
###Output
v (4,) = [1 5 2 9]
row vector v (1, 4) = [[1 5 2 9]]
col vector v (4, 1) =
[[1]
[5]
[2]
[9]]
u (2,) =
[1 2]
A (3, 2) =
[[2 0]
[0 1]
[1 1]]
Au =
[2 2 3]
u (2, 1) =
[[1]
[2]]
Au =
[[2]
[2]
[3]]
v dot v = [[111]]
shape of vdotv (1, 1)
###Markdown
Calculate the exercises done in the morning, compare the results! $X =\begin{bmatrix} 1 & 2 & -1 \\ 1 & 0 & 1 \end{bmatrix}$ $Y =\begin{bmatrix} 3 & 1 \\ 0 & -1 \\ -2 & 3 \end{bmatrix}$ $Z = \begin{bmatrix}1\\4\\6\end{bmatrix}$ $A = \begin{bmatrix} 1 & 2\\ 3 & 5 \end{bmatrix}$ $ b= \begin{bmatrix} 5 \\ 13 \end{bmatrix}$Calculate $XY$, $YX$, $Z^TY$ and $A^{-1}b$
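Try the exercise on paper first; the sketch below (using NumPy, as in the rest of this notebook) is only one possible way to check your answers:

```python
X = np.array([[1, 2, -1], [1, 0, 1]])
Y = np.array([[3, 1], [0, -1], [-2, 3]])
Z = np.array([[1], [4], [6]])
A = np.array([[1, 2], [3, 5]])
b = np.array([[5], [13]])

print("XY =\n", X @ Y)
print("YX =\n", Y @ X)
print("Z^T Y =\n", Z.T @ Y)
print("A^-1 b =\n", np.linalg.solve(A, b))  # same result as np.linalg.inv(A) @ b
```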
###Code
## To do
###Output
_____no_output_____ |
notebooks/Explore the tfrecords.ipynb | ###Markdown
Exploring Meta-dataset's TFRecords
###Code
# imports assumed by this notebook (TF 1.x-style API with eager execution enabled)
import glob
import math
import multiprocessing
import tensorflow as tf
from PIL import Image

RECORDS_DIR = "/Users/gomerudo/workspace/thesis_results/dtd"
pattern = "{dir}/*.tfrecords".format(dir=RECORDS_DIR)
tfrecords_list = sorted(glob.glob(pattern))
tfrecord_data = tf.data.TFRecordDataset(tfrecords_list)
features = {
'image': tf.FixedLenFeature([], dtype=tf.string),
'label': tf.FixedLenFeature([], tf.int64)
}
c = 0
for record in tfrecord_data:
if c > 5:
break
parsed = tf.parse_single_example(record, features)
# The image (features)
image_decoded = tf.image.decode_jpeg(parsed['image'], channels=3)
img = Image.fromarray(image_decoded.numpy(), 'RGB')
img.show(title="algo")
c+=1
from PIL import Image
def _n_elements(tfrecords_list):
c = 0
for fn in tfrecords_list:
for _ in tf.python_io.tf_record_iterator(fn):
c += 1
return c
def _parser(record, image_size):
# the 'features' here include your normal data feats along
# with the label for that data
features = {
'image': tf.FixedLenFeature([], dtype=tf.string),
'label': tf.FixedLenFeature([], tf.int64)
}
parsed = tf.parse_single_example(record, features)
# The image (features)
image_decoded = tf.image.decode_jpeg(parsed['image'], channels=3)
image_resized = tf.image.resize_images(
image_decoded,
[image_size, image_size],
method=tf.image.ResizeMethod.BILINEAR,
align_corners=True
)
image_normalized = image_resized
# The label
label = tf.cast(parsed['label'], tf.int32)
return {'x': image_normalized}, label
def _input_fn(tfrecord_data, length, batch_size=128, img_size=84):
dataset = tfrecord_data
dataset = dataset.map(lambda record: _parser(record, img_size))
dataset = dataset.batch(batch_size)
# iterator = dataset.make_one_shot_iterator()
return dataset
# 2. Get the dataset as tf.Dataset object
img_size = 84
dataset = _input_fn(
tfrecord_data, _n_elements(tfrecords_list), img_size=img_size
)
# Iterate over all images. For each batch, we append to the csv file to
# avoid memory issues if the dataset is too big.
c = 0
for idx, batch in enumerate(dataset):
print("Processing batch #{i}".format(i=idx+1))
for img, label in zip(batch[0]['x'], batch[1]):
if c > 4:
break
img_np = img.numpy()
img = Image.fromarray(img_np, 'RGB')
img.show()
c += 1
###Output
Processing batch #1
Processing batch #2
Processing batch #3
Processing batch #4
Processing batch #5
Processing batch #6
Processing batch #7
Processing batch #8
Processing batch #9
Processing batch #10
Processing batch #11
Processing batch #12
Processing batch #13
Processing batch #14
Processing batch #15
Processing batch #16
Processing batch #17
Processing batch #18
Processing batch #19
Processing batch #20
Processing batch #21
Processing batch #22
Processing batch #23
Processing batch #24
Processing batch #25
Processing batch #26
Processing batch #27
Processing batch #28
Processing batch #29
Processing batch #30
Processing batch #31
Processing batch #32
Processing batch #33
Processing batch #34
Processing batch #35
Processing batch #36
Processing batch #37
Processing batch #38
Processing batch #39
Processing batch #40
Processing batch #41
Processing batch #42
Processing batch #43
Processing batch #44
Processing batch #45
###Markdown
Playing with TensorFlow's TFRecords, Dataset and Estimator
###Code
image_size = 84
def n_elements(records_list):
"""Return the number of elements in a tensorflow records file."""
count = 0
for tfrecords_file in records_list:
for _ in tf.python_io.tf_record_iterator(tfrecords_file):
count += 1
return count
def parser(record_dataset):
"""Parse a given TFRecordsDataset object."""
# This is the definition we expect in the TFRecords for meta-dataset
features = {
'image': tf.FixedLenFeature([], dtype=tf.string),
'label': tf.FixedLenFeature([], tf.int64)
}
exp_image_size = 84
# 1. We parse the record_dataset with the features defined above.
parsed = tf.parse_single_example(record_dataset, features)
# 2. We will decode the image as a jpeg with 3 channels and resize it to
# the expected image size
image_decoded = tf.image.decode_jpeg(parsed['image'], channels=3)
image_resized = tf.image.resize_images(
image_decoded,
[exp_image_size, exp_image_size],
method=tf.image.ResizeMethod.BILINEAR,
align_corners=True
)
# 3. And we normalize the dataset in the range [0, 1]
image_normalized = image_resized / 255.0
# 4. we make the label an int32.
label = tf.cast(parsed['label'], tf.int32)
# 5. We return as dataset a s pair ( {features}, label)
return {'x': image_normalized}, label
def metadataset_input_fn(tfrecord_data, data_length, batch_size=128,
is_train=True, split_prop=0.33, random_seed=32,
is_distributed=False):
"""Input function for a tensorflow estimator."""
trainset_length = math.floor(data_length*(1. - split_prop))
files = tf.data.Dataset.list_files(
tfrecord_data, shuffle=False
)
n_threads = multiprocessing.cpu_count()
print(
"Number of threads available for dataset processing is %d", n_threads
)
dataset = files.apply(
tf.contrib.data.parallel_interleave(
lambda filename: tf.data.TFRecordDataset(filename),
cycle_length=n_threads
)
)
dataset = dataset.shuffle(data_length, seed=random_seed, reshuffle_each_iteration=False)
if is_train:
dataset = dataset.take(trainset_length)
current_length = trainset_length
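# shuffle with a buffer covering the whole training split, then repeat it 10 times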
dataset = dataset.apply(
tf.contrib.data.shuffle_and_repeat(current_length, 10)
)
else:
dataset = dataset.skip(trainset_length)
current_length = data_length - trainset_length
# shuffle and repeat examples for better randomness and allow training
# beyond one epoch
# count_repeat = 30 if is_train else 1
print("Current length in input_fn %d", current_length)
# map the parse function to each example individually in threads*2
# parallel calls
dataset = dataset.map(
map_func=lambda example: parser(example),
num_parallel_calls=n_threads
)
# batch the examples if using training, otherwise we want to evaluate on
# the whole dataset in one single step
dataset = dataset.batch(batch_size=batch_size)
# prefetch batch
dataset = dataset.prefetch(buffer_size=32)
return dataset
# if is_distributed:
# return dataset
# iterator = dataset.make_one_shot_iterator()
# return iterator.get_next()
###Output
_____no_output_____
###Markdown
Train set
###Code
import numpy as np
dataset_length = n_elements(tfrecords_list)
print("Dataset length is", dataset_length)
dataset = metadataset_input_fn(
tfrecord_data=pattern,
data_length=dataset_length,
batch_size=128,
is_train=True,
split_prop=0.33,
random_seed=32,
is_distributed=False
)
exp_batches = math.ceil(dataset_length*(1-0.33)/128)
print("Expected number of batches per epoch:", exp_batches)
counts_list = []
classes_count = None
obs_set = set()
for i, batch in enumerate(dataset):
if i % exp_batches == 0:
print("Batch number", i+1)
print("Starting over the dataset.")
if classes_count is not None:
counts_list.append(classes_count)
classes_count = np.zeros(47, dtype=int)
for img, label in zip(batch[0]['x'], batch[1]):
img_np = img.numpy()
obs_set.add(str(img_np.flatten()))
label_np = int(label.numpy())
classes_count[label_np] += 1
counts_list.append(classes_count)
for i, c_list in enumerate(counts_list):
print("List number", i+1)
print("Sum is", sum(c_list))
print(c_list)
print("N different observations seen:", len(obs_set))
###Output
Dataset length is 5640
Number of threads available for dataset processing is %d 4
Current length in input_fn %d 3778
Expected number of batches per epoch: 30
Batch number 1
Starting over the dataset.
Batch number 31
Starting over the dataset.
Batch number 61
Starting over the dataset.
Batch number 91
Starting over the dataset.
Batch number 121
Starting over the dataset.
Batch number 151
Starting over the dataset.
Batch number 181
Starting over the dataset.
Batch number 211
Starting over the dataset.
Batch number 241
Starting over the dataset.
Batch number 271
Starting over the dataset.
List number 1
Sum is 3840
[74 89 83 81 89 86 83 86 80 84 73 82 80 87 80 70 85 72 76 85 82 90 90 80
81 82 83 82 74 78 81 83 87 84 84 85 78 79 76 77 87 88 81 87 76 82 78]
List number 2
Sum is 3840
[70 93 80 84 91 88 82 90 82 82 74 79 85 87 82 71 84 71 75 83 81 87 88 82
84 85 84 85 73 81 78 81 85 77 83 86 79 80 78 74 89 84 76 86 80 85 76]
List number 3
Sum is 3840
[76 85 85 76 94 86 79 83 77 82 76 81 77 84 76 73 85 69 79 88 86 95 89 79
82 85 85 82 76 79 80 81 94 87 84 86 75 80 74 77 86 91 77 89 75 77 78]
List number 4
Sum is 3840
[74 89 79 81 86 86 87 84 83 88 70 76 84 89 79 69 84 73 75 87 82 86 94 82
82 81 83 80 77 80 81 86 83 81 92 89 77 76 80 74 86 85 78 90 72 85 75]
List number 5
Sum is 3840
[68 91 84 85 94 85 82 87 80 79 79 85 85 82 77 71 84 73 79 80 79 88 86 75
85 83 86 85 73 81 79 77 89 82 77 79 82 83 79 79 91 87 80 82 78 82 83]
List number 6
Sum is 3840
[77 92 84 80 86 83 78 87 85 87 74 78 78 94 81 74 83 69 75 93 80 93 88 83
79 83 83 79 72 78 84 84 83 71 84 88 72 76 79 77 87 91 79 91 82 82 74]
List number 7
Sum is 3840
[70 83 88 84 92 92 84 88 70 81 79 81 75 87 82 66 83 70 75 83 88 89 88 86
83 84 82 84 78 81 78 84 89 91 92 84 74 75 78 76 86 85 81 84 69 80 78]
List number 8
Sum is 3840
[76 97 74 77 93 86 84 79 85 85 78 80 91 78 76 74 86 74 79 82 80 88 99 77
78 84 92 84 73 76 80 78 91 73 75 81 84 90 70 74 87 85 70 90 79 87 81]
List number 9
Sum is 3840
[75 85 84 87 89 81 80 89 75 81 64 80 77 86 84 72 90 68 74 85 85 90 82 87
87 83 85 81 72 76 80 87 83 85 83 83 72 74 84 80 95 96 79 91 77 84 73]
List number 10
Sum is 3220
[60 76 69 65 76 77 71 77 73 71 63 68 68 76 63 60 66 61 63 74 67 74 76 59
69 70 67 68 62 70 69 69 76 69 76 79 67 67 62 62 66 68 69 70 62 66 64]
N different observations seen: 3693
###Markdown
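A quick aside on how `repeat()` interacts with `batch()`. This is a standalone toy example, not part of the pipeline above, and it assumes eager execution so the batch sizes can be printed directly:

```python
toy = tf.data.Dataset.range(10)

# batch() alone: the last batch of the epoch is partial -> sizes 4, 4, 2
print([len(b.numpy()) for b in toy.batch(4)])

# repeat(2) before batch(): batching runs straight through the repeated stream,
# so epoch boundaries no longer produce short batches -> sizes 4, 4, 4, 4, 4
print([len(b.numpy()) for b in toy.repeat(2).batch(4)])
```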
Because `repeat()` is applied before `batch()`, batches are drawn from the repeated stream, so an incomplete epoch no longer produces a short batch; only the very last batch of the whole repeated stream can have fewer than 128 elements.
Test set
###Code
import numpy as np
dataset_length = n_elements(tfrecords_list)
print("Dataset length is", dataset_length)
dataset = metadataset_input_fn(
tfrecord_data=pattern,
data_length=dataset_length,
batch_size=128,
is_train=False,
split_prop=0.33,
random_seed=32,
is_distributed=False
)
exp_batches = math.ceil(dataset_length*(1-0.33)/128)
print("Expected number of batches per epoch:", exp_batches)
counts_list = []
classes_count = None
obs_set_test = set()
for i, batch in enumerate(dataset):
if i % exp_batches == 0:
print("Batch number", i+1)
print("Starting over the dataset.")
if classes_count is not None:
counts_list.append(classes_count)
classes_count = np.zeros(47, dtype=int)
for img, label in zip(batch[0]['x'], batch[1]):
img_np = img.numpy()
obs_set_test.add(str(img_np.flatten()))
label_np = int(label.numpy())
classes_count[label_np] += 1
counts_list.append(classes_count)
for i, c_list in enumerate(counts_list):
print("List number", i+1)
print("Sum is", sum(c_list))
print(c_list)
print("N different observations seen:", len(obs_set_test))
print("Diff 1", len(obs_set.difference(obs_set_test)))
print("Diff 2", len(obs_set_test.difference(obs_set)))
###Output
Diff 1 3676
Diff 2 1811
|
04. Chapter_/Chapter4_conditional if statement.ipynb | ###Markdown
The fourth chapter from Bill Lubanovic's book. Additional characters
###Code
print("Cudzysłów powoduje że znak # nie działą")
###Output
Cudzysłów powoduje że znak # nie działą
###Markdown
Continuation sign backslash
###Code
sum = 1 + \
2 + \
3 + \
4
sum
sum = (
1+
2+
3+
77)
sum
###Output
_____no_output_____
###Markdown
Instructions "if", "elif", "else"
###Code
disaster = True
if disaster == True:
print("Niedobrze")
else:
print("Dobrze")
furry = True
large = True
if furry == True:
if large ==True:
print("to jest yeti")
else:
print("to jest kot")
else:
if large == True:
print("to jest wieloryb")
else:
print("to jest czlowiek")
color = "lawendowy"
if color == "czerwony":
print("to jest pomidor")
elif color == "zielony":
print("to jest ogorek")
elif color == "ultrafioletowy":
print("nie wiem co to jest")
else:
print("nie znam kolrou", color)
x = 3
if x < 0:
x = 0
print('Negative changed to zero')
elif x == 0:
print('Zero')
elif x == 1:
print('Single')
else:
print('More')
# Measure some strings:
words = ['cat', 'window', 'defenestrate']
for w in words:
print(w, len(w))
# Create a sample collection
users = {'Hans': 'active', 'Éléonore': 'inactive', '景太郎': 'active'}
# Strategy: Iterate over a copy
for user, status in users.copy().items():
if status == 'inactive':
del users[user]
# Strategy: Create a new collection
active_users = {}
for user, status in users.items():
if status == 'active':
active_users[user] = status
for i in range(5):
print(i)
list(range(5, 10))
list(range(0, 10, 3))
list(range(-10, -100, -30))
a = ['Mary', 'had', 'a', 'little', 'lamb']
for i in range(len(a)):
print(i, a[i])
for n in range(2, 10):
for x in range(2, n):
if n % x == 0:
print(n, 'equals', x, '*', n//x)
break
else:
# loop fell through without finding a factor
print(n, 'is a prime number')
for num in range(2, 10):
if num % 2 == 0:
print("Found an even number", num)
continue
print("Found an odd number", num)
###Output
Found an even number 2
Found an odd number 3
Found an even number 4
Found an odd number 5
Found an even number 6
Found an odd number 7
Found an even number 8
Found an odd number 9
###Markdown
Comparison operators

|Operator| Name |Example|
| -- | -- | -- |
|== | Equal| x == y|
|!= |Not equal| x != y|
|> |Greater than| x > y|
|< |Less than| x < y|
|>= |Greater than or equal to| x >= y|
|<= |Less than or equal to| x <= y|
###Code
x = 55
x == 8
(5 < x) and (x < 100)
bool(None)
###Output
_____no_output_____
###Markdown
Treated as false in Python

|Object| Symbol |
| -- | -- |
|no value | None |
|integer zero | 0 |
|real number zero | 0.0 |
|empty string | '' |
|empty list| [] |
|empty tuple | () |
|empty dictionary| {} |
|empty set | set() |
###Code
some_list = [1]
if some_list:
print("w tej liscie cos jest")
else:
print("ta lista jest pusta")
###Output
w tej liscie cos jest
###Markdown
Operator "in"
###Code
vowels = 'aeiouy'
letter = 'o'
letter in vowels
vowels = 'aeiouy'
letter = 'o'
if letter in vowels:
print(letter, 'jest samogłoską')
letter = 'o'
vowel_set = {'a', 'e', 'i', 'o', 'u', 'y'}
letter in vowel_set
vowel_list = [ 'a', 'e', 'i', 'o', 'u', 'y']
letter in vowel_list
###Output
_____no_output_____
###Markdown
Operator "walrus"
###Code
# befor using operator
tweet_limit = 280
tweet_string = "cosssswwwwww" * 50
diff = tweet_limit - len(tweet_string)
if diff >= 0:
print("dobry tweet")
else:
print("tweet za dlugi o", abs(diff), "znakow")
# after using new operator
tweet_limit = 280
tweet_string = "cos" * 50
if (diff := tweet_limit - len(tweet_string)) >= 0:  # parentheses so diff stores the number, not the True/False comparison result
print("dobry tweet")
else:
print("tweet za dlugi o", abs(diff), "znakow")
###Output
dobry tweet
|
test_coincidence_donegal_Master_Thesis_npa.ipynb | ###Markdown
Jupyter Notebook for coincidence/detection DONEGAL TASK: WORKFLOW: Import libraries
###Code
# For file management
import os
# For numerical computation
import numpy as np
# For the graphical presentation of the results
import matplotlib
import matplotlib.pyplot as plt
# For the analysis of the seismic data (including download)
import obspy
from obspy.clients.fdsn import Client
from obspy import UTCDateTime
from obspy.taup import plot_travel_times
from obspy.core import Stream, read
from obspy.signal.trigger import coincidence_trigger
# For displaying images
from IPython.display import Image
import matplotlib.image as image
from pprint import pprint
import calendar
###Output
_____no_output_____
###Markdown
Definitions
###Code
YEAR=2013
MONTH=4
# Counting days in a month
NDAYS=calendar.monthrange(YEAR,MONTH)[1]
LAB_YEAR=str('%04d' % YEAR)
LAB_MONTH=str('%02d' % MONTH)
# Data directory
DONEGAL_DIR = "G:DONEGAL/" # Setting Faderica
#DONEGAL_DIR = "/Volumes/CHALLENGER/DONEGAL-MSEED/" # Setting Faderica
# Info on seismic stations
stat_file= 'DL_151130.txt'
# Read station info
NET = []
STAT = []
statfile = open(stat_file, 'r')
linestoken=statfile.readlines()
nstat=0
il=0
for x in linestoken:
if il>0:
NET.append(x.split()[0])
STAT.append(x.split()[1])
nstat += 1
il+=1
print('Number of seismic stations: ', nstat )
###Output
Number of seismic stations: 13
###Markdown
(1) Process coincidence trigger for all days in selected month
###Code
#Cycle on each day in the month
DAY=17
while DAY <= 17:
LAB_DAY=str('%02d' % DAY)
# Check single MSEED file for each station
print('\n\n --------------------------------------- DAY: ', YEAR, MONTH, DAY)
dt = UTCDateTime(YEAR,MONTH,DAY)
jd_sele=dt.julday
jd_sele_lab=str('%03d' % jd_sele)
print('Selected Julday: ', jd_sele, jd_sele_lab)
year0=str(YEAR)
files = []
print('CONTROLLO ESISTENZA TRACCE')
istat=0
while istat < nstat:
net0=NET[istat]
stat0=STAT[istat]
mseed_file= DONEGAL_DIR + year0 + '/' + net0 + '/' + stat0 + '/HHZ.D/' + net0 + '.'+ stat0 + '..HHZ.D.' + year0 + '.' + jd_sele_lab
#print(mseed_file)
if os.path.exists(mseed_file):
files.append(mseed_file)
print('DATA EXIST for stat: ', net0, stat0)
else:
print(' ---------> DATA DO NOT EXIST ... ', net0, stat0)
istat +=1
# Read the data for the selected stations (those for which the Z-component file exists)
st = Stream()
for filename in files:
st += read(filename)
print('LETTURA TRACCE:')
print(st)
# Filter the traces, if needed
print('FILTRO TRACCE')
st.filter('bandpass', freqmin=10, freqmax=20) # optional prefiltering
# Apply the coincidence trigger
print('TRIGGER TRACCE')
st2 = st.copy()
trig = coincidence_trigger("recstalta", 5, 1, st2, 6 ,sta=0.5, lta=10)
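# the call above: recursive STA/LTA ("recstalta") with sta=0.5 s and lta=10 s windows,
# trigger-on threshold 5, trigger-off threshold 1, and at least 6 stations triggering together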
# Define trigger time and write in a file
DAYNAME = LAB_YEAR + LAB_MONTH + LAB_DAY
trig_file1 = 'output/OUTPUT_TRIGGER/' + DAYNAME + '.trigger'
trig_file2 = 'output/OUTPUT_TRIGGER_DETAILS/' + DAYNAME + '.trigger_details'
trig_time_stations=open(trig_file1,'w')
trig_details=open(trig_file2,'w')
trig_0 = {}
trig_time_all = []
stat_all = []
stat = []
# All triggers, all keys (FOR DEBUGGING)
pprint(trig,stream=trig_details)
trig_details.flush()
trig_details.close()
# Define trigger time and write in a file
n_triggers=len(trig)
print('Found: ', n_triggers,' trigger ...')
it=0
while it < n_triggers:
trig_0 = trig[it]
trig_time=trig_0['time']
stat=trig_0['stations']
nstat_trig=len(stat)
print(it, trig_time, nstat_trig, *stat)
print(it, trig_time, nstat_trig, *stat, file=trig_time_stations)
#trig_time_stations.write(it, trig_time, stat)
trig_time_all.append(trig_time)
stat_all.append(stat)
it+=1
trig_time_stations.flush()
trig_time_stations.close()
DAY += 1
from scipy.signal import butter, sosfilt, sosfreqz
def butter_bandpass(lowcut, highcut, fs, order=5):
nyq = 0.5 * fs
low = lowcut / nyq
high = highcut / nyq
sos = butter(order, [low, high], analog=False, btype='band', output='sos')
return sos
def butter_bandpass_filter(data, lowcut, highcut, fs, order=5):
sos = butter_bandpass(lowcut, highcut, fs, order=order)
y = sosfilt(sos, data)
return y
lowcut = 10.0 # Band-pass Filter Low limit
highcut = 20.0 # Band-pass Filter High limit
#Cycle on each day in the month
DAY=17
while DAY <= 17:
LAB_DAY=str('%02d' % DAY)
# Open file for saving true events
DAYNAME = LAB_YEAR + LAB_MONTH + LAB_DAY
trig_file2 = 'output/OUTPUT_TRIGGER_SAVED/' + DAYNAME + '.trigger_saved'
trig_time_stations_saved=open(trig_file2,'w')
# READ TRIGGER FROM FILE
DAYNAME = LAB_YEAR + LAB_MONTH + LAB_DAY
trig_file1 = 'output/OUTPUT_TRIGGER/' + DAYNAME + '.trigger'
trig_time_stations=open(trig_file1,'r')
trig_0 = {}
trig_time_all = []
stat_all = []
linestoken=trig_time_stations.readlines()
n_triggers=0
for x in linestoken:
n_triggers += 1
trig_time=x.split()[1]
trig_time=UTCDateTime(trig_time)
trig_time_all.append(trig_time)
nstat_trig=int(x.split()[2])
stat = []
istat=0
while istat < nstat_trig:
stat0=x.split()[3+istat]
stat.append(stat0)
istat += 1
stat_all.append(stat)
print('\n\n\n\n -------------------------------------- DAY: ', DAYNAME)
print('Number of triggers: ', n_triggers )
# Read available streams
# Check single MSEED file for each station
dt = UTCDateTime(YEAR,MONTH,DAY)
jd_sele=dt.julday
jd_sele_lab=str('%03d' % jd_sele)
year0=str(YEAR)
files = []
istat=0
while istat < nstat:
net0=NET[istat]
stat0=STAT[istat]
mseed_file= DONEGAL_DIR + year0 + '/' + net0 + '/' + stat0 + '/HHZ.D/' + net0 + '.'+ stat0 + '..HHZ.D.' + year0 + '.' + jd_sele_lab
if os.path.exists(mseed_file):
files.append(mseed_file)
istat +=1
# Read the data for the selected stations (those for which the Z-component file exists)
st = Stream()
for filename in files:
st += read(filename)
st.filter('bandpass', freqmin=10, freqmax=20)
# Plot each individual trigger to check whether it is a real event or just noise
it=0
while it < n_triggers:
TRIG_SELE=str(it)
time_start=trig_time_all[it]-10
time_end=trig_time_all[it]+10
TRIG_TIME=str(trig_time_all[it])
st3=st.copy()
st3.trim(time_start,time_end)
stat=stat_all[it]
Nstat=len(stat)
fig_size=1.5*Nstat
fig = plt.figure(figsize=(18, fig_size))
istat=0
while istat < Nstat:
stat0=stat[istat]
itr=0
while itr < len(st3):
tr_filt = []
tr0 = st3[itr]
if tr0.stats.station == stat0:
tr_filt = butter_bandpass_filter(st3[itr], lowcut, highcut, 100, order=6)
t0=np.arange(-10,10+0.01,1/100)
label0= 'Station:' + stat0
ax = fig.add_subplot(Nstat, 1, istat+1)
ax.set_xlim(-10,10)
if istat == 0:
ax.set_title('Display traces for trigger: ' + TRIG_SELE+' : '+ TRIG_TIME)
ax.plot(t0,tr_filt, "b-", label=label0)
plt.legend(loc='upper right')
itr += 1
istat += 1
plt.xlabel('Time from trigger (s)')
plt.show()
it += 1
itr_sele=0
while itr_sele >=0:
TRIG_to_save=input("Insert trigger number to be saved. Insert -1 to exit:")
itr_sele=int(TRIG_to_save)
if itr_sele >= 0:
TYPE_of_EQ=input("Insert L for local, R for regional, B for Blast:")
il=0
for x in linestoken:
if il == itr_sele:
print(TYPE_of_EQ, x, file=trig_time_stations_saved, end='')
il+=1
trig_time_stations_saved.flush()
trig_time_stations_saved.close()
trig_time_stations.close()
resume_day = input("Press RETURN to resume")
# Go to next day
DAY +=1
###Output
-------------------------------------- DAY: 20130417
Number of triggers: 2
###Markdown
###Code
%load_ext watermark
%watermark -v -p numpy,matplotlib,obspy,ipywidgets,obspy
print (" ")
%watermark -u -n -t -z
###Output
The watermark extension is already loaded. To reload it, use:
%reload_ext watermark
Python implementation: CPython
Python version : 3.9.7
IPython version : 7.28.0
numpy : 1.21.2
matplotlib: 3.5.0
obspy : 1.2.2
ipywidgets: 7.6.5
Last updated: Fri Feb 04 2022 10:30:58ora solare Europa occidentale
|
ML/f02-chb/feature_set/01-knn.ipynb | ###Markdown
Experiment 8: Accuracy of the default KNN - Validation using all columns and using only the 6 best channels
###Code
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.model_selection import GroupKFold
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
chb_df = pd.read_csv('./chb.csv')
MELHORES_6_CANAIS = ['2', '3', '6', '7', '10', '14']
melhores_colunas = [col for col in chb_df.columns if col.split('-')[-1] in MELHORES_6_CANAIS]
groups = chb_df.pop('chb')
y = chb_df.pop('target').values
X1 = chb_df.values
X2 = chb_df[melhores_colunas].values
acc_list = []
for train_index, test_index in GroupKFold(n_splits=24).split(X1, y, groups):
    # Split the data
X1_train, X1_test = X1[train_index], X1[test_index]
X2_train, X2_test = X2[train_index], X2[test_index]
y_train, y_test = y[train_index], y[test_index]
acc_1 = make_pipeline(StandardScaler(), KNeighborsClassifier())\
.fit(X1_train, y_train)\
.score(X1_test, y_test)
acc_2 = make_pipeline(StandardScaler(), KNeighborsClassifier())\
.fit(X2_train, y_train)\
.score(X2_test, y_test)
acc_list.append([acc_1, acc_2])
knn_df = pd.DataFrame(data=acc_list, columns=['todos_os_canais', 'melhores_6_canais'])
knn_df.to_csv('./01-knn.csv')
knn_df.boxplot()
print(knn_df.describe())
knn_df.style.hide_index().background_gradient(cmap='Blues')
###Output
todos_os_canais melhores_6_canais
count 24.000000 24.000000
mean 0.746387 0.746408
std 0.159432 0.161383
min 0.411765 0.382353
25% 0.600946 0.635440
50% 0.788791 0.780598
75% 0.882960 0.865037
max 0.949495 0.944444
|
Logistical regresion/Logical regresion Keras.ipynb | ###Markdown
Capacity: A model's capacity refers to the size and complexity of the patterns it can learn. In the case of neural networks, this will largely depend on how many neurons it has and how they are connected to each other. If your network does not seem to be fitting the data, you should try increasing its capacity. 'relu' activation: The rectified linear activation function, or ReLU for short, is a piecewise linear function that outputs the input directly if it is positive; otherwise, it outputs zero. ... The rectified linear activation function overcomes the vanishing gradient problem, allowing models to learn faster and perform better. The loss function: We have seen how to design an architecture for a network, but we have not seen how to tell a network what problem to solve. That is the job of the loss function. The loss function measures the disparity between the true target value and the value the model predicts. Different problems require different loss functions. We have been looking at regression problems, where the task is to predict some numeric value: calories in 80 Cereals, rating in Red Wine Quality. Other regression tasks could be predicting the price of a house or the fuel efficiency of a car. A common loss function for regression problems is the mean absolute error, or MAE. For each prediction y_pred, MAE measures the disparity from the true target y_true by an absolute difference abs(y_true - y_pred). The total MAE loss on a dataset is the mean of all these absolute differences.
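A quick numeric illustration of the MAE described above; the values below are made up for the example, not taken from any dataset used here:
###Code
# Minimal MAE sketch with made-up values (illustrative only)
import numpy as np

y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])
mae = np.mean(np.abs(y_true - y_pred))  # mean of the absolute differences
print(mae)  # 0.5
###Output
_____no_output_____
###Markdown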
###Code
from tensorflow import keras
from tensorflow.keras import layers
# Build the model
input_shape = [30]  # number of inputs
model = keras.Sequential([
    # Batch Normalization layer:
    # if you add it as the first layer of your network, it can act as a kind of adaptive preprocessor,
    # replacing something like scikit-learn's StandardScaler.
    layers.BatchNormalization(input_shape = input_shape),
    # ReLU hidden layer
    layers.Dense(units = 256, activation = "relu"),
    # Batch Normalization layer
    layers.BatchNormalization(),
    # Dropout layer, which can help correct overfitting
    layers.Dropout(0.3),
    # ReLU hidden layer
    layers.Dense(units = 256, activation = "relu"),
    # Batch Normalization layer
    layers.BatchNormalization(),
    # Dropout layer, which can help correct overfitting
    layers.Dropout(0.3),
    # output layer, using sigmoid activation
    layers.Dense(units = 1, activation = "sigmoid"),
])
###Output
2021-12-03 21:54:56.442817: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/opt/oracle/instantclient_19_8
2021-12-03 21:54:56.442894: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2021-12-03 21:54:59.239234: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/opt/oracle/instantclient_19_8
2021-12-03 21:54:59.239325: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2021-12-03 21:54:59.239369: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (virtual-machine): /proc/driver/nvidia/version does not exist
2021-12-03 21:54:59.240250: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
###Markdown
Capacity is determined by the size and complexity of the patterns your model can learn. You can increase a network's capacity either by making it wider (more units in existing layers) or by making it deeper (adding more layers). Wider networks have an easier time learning more linear relationships, while deeper networks prefer more nonlinear ones. Which is better depends on the dataset.
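As a sketch of the two options described above (the layer sizes are arbitrary choices for illustration, not tuned values, and these models are not trained here):
###Code
# Wider vs. deeper variants of the same idea (illustrative only)
from tensorflow import keras
from tensorflow.keras import layers

wider_model = keras.Sequential([
    layers.Dense(units = 512, activation = "relu", input_shape = [30]),  # more units per layer
    layers.Dense(units = 1, activation = "sigmoid"),
])

deeper_model = keras.Sequential([
    layers.Dense(units = 128, activation = "relu", input_shape = [30]),  # more layers
    layers.Dense(units = 128, activation = "relu"),
    layers.Dense(units = 128, activation = "relu"),
    layers.Dense(units = 1, activation = "sigmoid"),
])
###Output
_____no_output_____
###Markdown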
###Code
model.compile(optimizer = 'adam',  # an "optimizer" that tells the network how to change its weights; here I used the "adam" optimizer (but there are more)
              loss = 'binary_crossentropy',  # a "loss function" that measures how good the network's predictions are (here, binary cross-entropy)
              metrics = ['binary_accuracy'])  # here I used the "binary accuracy" metric because this is a classifier
# Early stopping, used together with the learning curves
early_stopping = keras.callbacks.EarlyStopping(
    patience = 5,  # how many epochs to wait before stopping
    min_delta = 0.001,  # minimum amount of change to count as an improvement
    restore_best_weights = True,
)
# Train the model
history = model.fit(
    X_train, y_train,
    validation_data = (X_test, y_test),  # validate using the test data
    batch_size = 512,  # number of samples per batch
    epochs = 200,  # number of training epochs
    callbacks = [early_stopping],  # use early stopping based on the learning curves
)
###Output
Epoch 1/200
1/1 [==============================] - 0s 90ms/step - loss: 0.4704 - binary_accuracy: 0.7918 - val_loss: 1.8767 - val_binary_accuracy: 0.3640
Epoch 2/200
1/1 [==============================] - 0s 37ms/step - loss: 0.2684 - binary_accuracy: 0.8974 - val_loss: 2.6425 - val_binary_accuracy: 0.3640
Epoch 3/200
1/1 [==============================] - 0s 54ms/step - loss: 0.1840 - binary_accuracy: 0.9413 - val_loss: 3.2908 - val_binary_accuracy: 0.3640
Epoch 4/200
1/1 [==============================] - 0s 44ms/step - loss: 0.1309 - binary_accuracy: 0.9589 - val_loss: 3.8150 - val_binary_accuracy: 0.3640
Epoch 5/200
1/1 [==============================] - 0s 47ms/step - loss: 0.1043 - binary_accuracy: 0.9619 - val_loss: 4.2233 - val_binary_accuracy: 0.3640
Epoch 6/200
1/1 [==============================] - 0s 54ms/step - loss: 0.0812 - binary_accuracy: 0.9736 - val_loss: 4.5349 - val_binary_accuracy: 0.3640
###Markdown
Accuracy and cross-entropy: Accuracy is one of the many metrics used to measure success on a classification problem. Accuracy is the ratio of correct predictions to total predictions: accuracy = number_correct / total. A model that always predicted correctly would have an accuracy score of 1.0. All else being equal, accuracy is a reasonable metric to use whenever the classes in the dataset occur with about the same frequency. The problem with accuracy (and most other classification metrics) is that it cannot be used as a loss function. SGD needs a loss function that changes smoothly, but accuracy, being a ratio of counts, changes in "jumps". So we have to choose a substitute to act as the loss function. This substitute is the cross-entropy function. Now, recall that the loss function defines the objective of the network during training. With regression, our goal was to minimize the distance between the expected outcome and the predicted outcome. We chose MAE to measure this distance. For classification, what we want instead is a distance between probabilities, and this is what cross-entropy provides. Cross-entropy is a sort of measure of the distance from one probability distribution to another.
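A small numeric sketch of the contrast described above; the labels and probabilities are made up, not model outputs:
###Code
# Accuracy vs. binary cross-entropy on made-up predictions (illustrative only)
import numpy as np

y_true = np.array([1, 0, 1, 1])
p_pred = np.array([0.9, 0.2, 0.6, 0.4])   # predicted probabilities of class 1
accuracy = np.mean((p_pred > 0.5).astype(int) == y_true)
bce = -np.mean(y_true * np.log(p_pred) + (1 - y_true) * np.log(1 - p_pred))
print(accuracy, bce)   # accuracy moves in jumps; cross-entropy changes smoothly
###Output
_____no_output_____
###Markdown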
###Code
history_df = pd.DataFrame(history.history)
history_df.loc[:, ['loss', 'val_loss']].plot(title="Cross-entropy")
history_df.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot(title="Accuracy")
###Output
_____no_output_____
###Markdown
You can see that Keras will keep you updated on the loss as the model trains. Often, however, a better way to view the loss is to plot it. In fact, the fit method keeps a record of the loss produced during training in a History object. We'll convert the data to a Pandas DataFrame, which makes plotting easy.
###Code
# convert the training history to a DataFrame
history_df = pd.DataFrame(history.history)
# use Pandas' built-in plot method
history_df['loss'].plot();
###Output
_____no_output_____ |
micromort/models/Twitter classification/Untitled.ipynb | ###Markdown
Get the tweets. Filters used: SELECT text FROM rpmdb.rpm_tweets where in_retweet_to_status_id is NULL and in_reply_to_status_id is NULL and quoted_status_id is NULL and source like "twitter web client" and text not like "%http%" and lang = "en";
###Code
fileName = "./tweet.dump"
original_tweets = [ line.strip() for line in open(fileName).readlines() ]
len(original_tweets), original_tweets[:10]
###Output
_____no_output_____
###Markdown
Clean the tweets using andromeda API
###Code
level = 1
classes = [10, 71] #alphanumeric and hashtags
min_word_len = 5 #inclusive
def preprocess_text(str):
r = requests.get("http://172.29.33.45:8000/textpreprocessor/{}?text={}".format(level, urllib.parse.quote_plus(str)))
return r.json()
cleaned_tweets = []
for tweet in original_tweets:
clean_json = preprocess_text(tweet)
_str = ""
for token in clean_json['token_list']:
if token["core"]["class"] in [10]:
_str += token["core"]["token"] + " "
_str = _str.strip()
if len(_str.split(" ")) >= min_word_len :
cleaned_tweets.append(_str)
len(cleaned_tweets)
###Output
_____no_output_____
###Markdown
Preprocessing the text:
###Code
nltk_stopwords = list(stopwords.words('english'))
def lemmatize_token_list(lemmatizer, token_list):
pos_tag_list = pos_tag(token_list)
for idx, (token, tag) in enumerate(pos_tag_list):
        tag_simple = tag[0].lower() # Converts, e.g., "VBD" to "v"
if tag_simple in ['n', 'v', 'j']:
word_type = tag_simple.replace('j', 'a')
else:
word_type = 'n'
lemmatized_token = lemmatizer.lemmatize(token, pos=word_type)
token_list[idx] = lemmatized_token
return token_list
def preprocess_text(s, tokenizer=None, remove_stopwords=True, remove_punctuation=True,
stemmer=None, lemmatizer=None, lowercase=True, return_type='str'):
# Throw an error if both stemmer and lemmatizer are not None
if stemmer is not None and lemmatizer is not None:
raise ValueError("Stemmer and Lemmatizer cannot both be not None!")
# Tokenization either with default tokenizer or user-specified tokenizer
if tokenizer is None:
token_list = word_tokenize(s)
else:
token_list = tokenizer.tokenize(s)
# Stem or lemmatize if needed
if lemmatizer is not None:
token_list = lemmatize_token_list(lemmatizer, token_list)
elif stemmer is not None:
token_list = stem_token_list(stemmer, token_list)
# Convert all tokens to lowercase if need
if lowercase:
token_list = [ token.lower() for token in token_list ]
# Remove all stopwords if needed
if remove_stopwords:
token_list = [ token for token in token_list if not token in nltk_stopwords ]
    # Remove all punctuation marks if needed (note: also converts, e.g., "Mr." to "Mr")
if remove_punctuation:
token_list = [ ''.join(c for c in s if c not in string.punctuation) for s in token_list ]
token_list = [ token for token in token_list if len(token) > 0 ] # Remove "empty" tokens
if return_type == 'list':
return token_list
elif return_type == 'set':
return set(token_list)
else:
return ' '.join(token_list)
processed_tweets = []
for tweet in cleaned_tweets:
#processed_documents[idx] = preprocess_text(doc)
#processed_documents[idx] = preprocess_text(doc, stemmer=porter_stemmer)
processed_tweets.append(preprocess_text(tweet, lemmatizer=WordNetLemmatizer()))
reduced_processed_tweets = [tweet for tweet in processed_tweets if len(tweet.split(" ")) > min_word_len]
len(reduced_processed_tweets)
tfidf_vectorizer = TfidfVectorizer(max_df=0.95, min_df=2, stop_words='english')
tfidf_model = tfidf_vectorizer.fit_transform(reduced_processed_tweets)
###Output
_____no_output_____
###Markdown
K-means clustering
###Code
num_clusters = int(len(reduced_processed_tweets)/10)
km_model = KMeans(n_clusters=num_clusters)
km_model.fit(tfidf_model)
np.unique(km_model.labels_).shape
clusters = {}
for idx, label in enumerate(km_model.labels_):
if label in clusters:
clusters[label].append(reduced_processed_tweets[idx])
else:
clusters[label] = [reduced_processed_tweets[idx]]
clusters
###Output
_____no_output_____ |
MachineLearning(Advanced)/p6_graduation_project/3. Train.ipynb | ###Markdown
3. Train **Tensorboard** - Input at command: tensorboard --logdir=./log - Input at browser: http://127.0.0.1:6006
###Code
import time
import os
import pandas as pd
project_name = 'Dog_Breed_Identification'
step_name = 'Train'
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
run_name = project_name + '_' + step_name + '_' + time_str
print('run_name: ' + run_name)
cwd = os.getcwd()
model_path = os.path.join(cwd, 'model')
print('model_path: ' + model_path)
log_path = os.path.join(cwd, 'log')
print('model_path: ' + log_path)
df = pd.read_csv(os.path.join(cwd, 'input', 'labels.csv'))
print('lables amount: %d' %len(df))
df.head()
import h5py
import numpy as np
from sklearn.utils import shuffle
np.random.seed(2017)
x_train = []
y_train = {}
x_val = []
y_val = {}
x_test = []
cwd = os.getcwd()
feature_cgg16 = os.path.join(cwd, 'model', 'feature_VGG16_{}.h5'.format(20171026))
feature_cgg19 = os.path.join(cwd, 'model', 'feature_VGG19_{}.h5'.format(20171026))
feature_resnet50 = os.path.join(cwd, 'model', 'feature_ResNet50_{}.h5'.format(20171026))
feature_xception = os.path.join(cwd, 'model', 'feature_Xception_{}.h5'.format(20171026))
feature_inception = os.path.join(cwd, 'model', 'feature_InceptionV3_{}.h5'.format(20171026))
# feature_inceptionResNetV2 = os.path.join(cwd, 'model', 'feature_InceptionResNetV2_{}.h5'.format(20171028))
for filename in [feature_cgg16, feature_cgg19, feature_resnet50, feature_xception, feature_inception]:
with h5py.File(filename, 'r') as h:
x_train.append(np.array(h['train']))
y_train = np.array(h['train_labels'])
x_test.append(np.array(h['test']))
# print(x_train[0].shape)
x_train = np.concatenate(x_train, axis=-1)
# y_train = np.concatenate(y_train, axis=0)
# x_val = np.concatenate(x_val, axis=-1)
# y_val = np.concatenate(y_val, axis=0)
x_test = np.concatenate(x_test, axis=-1)
print(x_train.shape)
print(x_train.shape[1:])
print(len(y_train))
# print(x_val.shape)
# print(len(y_val))
print(x_test.shape)
from keras.utils.np_utils import to_categorical
y_train = to_categorical(y_train)
# y_val = to_categorical(y_val)
print(y_train.shape)
# print(y_val.shape)
from sklearn.model_selection import train_test_split
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.05, random_state=2017)
print(x_train.shape)
print(y_train.shape)
print(x_val.shape)
print(y_val.shape)
print(x_train.shape[1])
###Output
7168
###Markdown
Build NN
###Code
from sklearn.metrics import confusion_matrix
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential
from keras.layers import Dense, Dropout, Input, Flatten, Conv2D, MaxPooling2D, BatchNormalization
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, TensorBoard
log_dir = os.path.join(log_path, run_name)
print('log_dir:' + log_dir)
tensorBoard = TensorBoard(log_dir=log_dir)
model = Sequential()
model.add(Dense(8192, input_shape=x_train.shape[1:]))
model.add(Dropout(0.5))
model.add(Dense(8192, activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(2048, activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(512, activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(120, activation='softmax'))
model.compile(optimizer=Adam(lr=1e-4),
loss='categorical_crossentropy',
metrics=['accuracy'])
hist = model.fit(x_train, y_train,
batch_size=128,
epochs=40, #Increase this when not on Kaggle kernel
verbose=1, #1 for ETA, 0 for silent
validation_data=(x_val, y_val),
callbacks=[tensorBoard])
final_loss, final_acc = model.evaluate(x_val, y_val, verbose=1)
print("Final loss: {0:.4f}, final accuracy: {1:.4f}".format(final_loss, final_acc))
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(hist.history['loss'], color='b')
plt.plot(hist.history['val_loss'], color='r')
plt.show()
plt.plot(hist.history['acc'], color='b')
plt.plot(hist.history['val_acc'], color='r')
plt.show()
run_name0 = run_name + '_' + str(int(final_acc*1000)).zfill(4)
print(run_name0)
def saveModel(model, run_name):
cwd = os.getcwd()
modelPath = os.path.join(cwd, 'model')
if not os.path.isdir(modelPath):
os.mkdir(modelPath)
weigthsFile = os.path.join(modelPath, run_name + '.h5')
model.save(weigthsFile)
saveModel(model, run_name0)
print('Done !')
###Output
Done !
|
notebooks/11-comparison.ipynb | ###Markdown
Comparison Think Bayes, Second EditionCopyright 2020 Allen B. DowneyLicense: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
###Code
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
###Output
_____no_output_____
###Markdown
This chapter introduces joint distributions, which are an essential tool for working with distributions of more than one variable.We'll use them to solve a silly problem on our way to solving a real problem.The silly problem is figuring out how tall two people are, given only that one is taller than the other.The real problem is rating chess players (or participants in other kinds of competition) based on the outcome of a game.To construct joint distributions and compute likelihoods for these problems, we will use outer products and similar operations. And that's where we'll start. Outer OperationsMany useful operations can be expressed as the "outer product" of two sequences, or another kind of "outer" operation.Suppose you have sequences like `x` and `y`:
###Code
x = [1, 3, 5]
y = [2, 4]
###Output
_____no_output_____
###Markdown
The outer product of these sequences is an array that contains the product of every pair of values, one from each sequence.There are several ways to compute outer products, but the one I think is the most versatile is a "mesh grid".NumPy provides a function called `meshgrid` that computes a mesh grid. If we give it two sequences, it returns two arrays.
###Code
import numpy as np
X, Y = np.meshgrid(x, y)
###Output
_____no_output_____
###Markdown
The first array contains copies of `x` arranged in rows, where the number of rows is the length of `y`.
###Code
X
###Output
_____no_output_____
###Markdown
The second array contains copies of `y` arranged in columns, where the number of columns is the length of `x`.
###Code
Y
###Output
_____no_output_____
###Markdown
Because the two arrays are the same size, we can use them as operands for arithmetic functions like multiplication.
###Code
X * Y
###Output
_____no_output_____
###Markdown
This is result is the outer product of `x` and `y`.We can see that more clearly if we put it in a DataFrame:
###Code
import pandas as pd
df = pd.DataFrame(X * Y, columns=x, index=y)
df
###Output
_____no_output_____
###Markdown
The values from `x` appear as column names; the values from `y` appear as row labels.Each element is the product of a value from `x` and a value from `y`.We can use mesh grids to compute other operations, like the outer sum, which is an array that contains the *sum* of elements from `x` and elements from `y`.
###Code
X + Y
###Output
_____no_output_____
###Markdown
We can also use comparison operators to compare elements from `x` with elements from `y`.
###Code
X > Y
###Output
_____no_output_____
###Markdown
The result is an array of Boolean values.It might not be obvious yet why these operations are useful, but we'll see examples soon.With that, we are ready to take on a new Bayesian problem. How Tall Is A?Suppose I choose two people from the population of adult males in the U.S.; I'll call them A and B. If we see that A taller than B, how tall is A?To answer this question:1. I'll use background information about the height of men in the U.S. to form a prior distribution of height,2. I'll construct a joint prior distribution of height for A and B (and I'll explain what that is),3. Then I'll update the prior with the information that A is taller, and 4. From the joint posterior distribution I'll extract the posterior distribution of height for A. In the U.S. the average height of male adults is 178 cm and the standard deviation is 7.7 cm. The distribution is not exactly normal, because nothing in the real world is, but the normal distribution is a pretty good model of the actual distribution, so we can use it as a prior distribution for A and B.Here's an array of equally-spaced values from 3 standard deviations below the mean to 3 standard deviations above (rounded up a little).
###Code
mean = 178
qs = np.arange(mean-24, mean+24, 0.5)
###Output
_____no_output_____
###Markdown
SciPy provides a function called `norm` that represents a normal distribution with a given mean and standard deviation, and provides `pdf`, which evaluates the probability density function (PDF) of the normal distribution:
###Code
from scipy.stats import norm
std = 7.7
ps = norm(mean, std).pdf(qs)
###Output
_____no_output_____
###Markdown
Probability densities are not probabilities, but if we put them in a `Pmf` and normalize it, the result is a discrete approximation of the normal distribution.
###Code
from empiricaldist import Pmf
prior = Pmf(ps, qs)
prior.normalize()
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
from utils import decorate
prior.plot(color='C5')
decorate(xlabel='Height in cm',
ylabel='PDF',
title='Approximate distribution of height for men in U.S.')
###Output
_____no_output_____
###Markdown
This distribution represents what we believe about the heights of `A` and `B` before we take into account the data that `A` is taller. Joint DistributionThe next step is to construct a distribution that represents the probability of every pair of heights, which is called a joint distribution. The elements of the joint distribution are$$P(A_x~\mathrm{and}~B_y)$$which is the probability that `A` is $x$ cm tall and `B` is $y$ cm tall, for all values of $x$ and $y$.At this point all we know about `A` and `B` is that they are male residents of the U.S., so their heights are independent; that is, knowing the height of `A` provides no additional information about the height of `B`.In that case, we can compute the joint probabilities like this:$$P(A_x~\mathrm{and}~B_y) = P(A_x)~P(B_y)$$Each joint probability is the product of one element from the distribution of `x` and one element from the distribution of `y`.So if we have `Pmf` objects that represent the distribution of height for `A` and `B`, we can compute the joint distribution by computing the outer product of the probabilities in each `Pmf`.The following function takes two `Pmf` objects and returns a `DataFrame` that represents the joint distribution.
###Code
def make_joint(pmf1, pmf2):
"""Compute the outer product of two Pmfs."""
X, Y = np.meshgrid(pmf1, pmf2)
return pd.DataFrame(X * Y, columns=pmf1.qs, index=pmf2.qs)
###Output
_____no_output_____
###Markdown
The column names in the result are the quantities from `pmf1`; the row labels are the quantities from `pmf2`.In this example, the prior distributions for `A` and `B` are the same, so we can compute the joint prior distribution like this:
###Code
joint = make_joint(prior, prior)
joint.shape
###Output
_____no_output_____
###Markdown
The result is a `DataFrame` with possible heights of `A` along the columns, heights of `B` along the rows, and the joint probabilities as elements. If the prior is normalized, the joint prior is also normalized.
###Code
joint.to_numpy().sum()
###Output
_____no_output_____
###Markdown
To add up all of the elements, we convert the `DataFrame` to a NumPy array before calling `sum`. Otherwise, `DataFrame.sum` would compute the sums of the columns and return a `Series`.
###Code
series = joint.sum()
series.shape
###Output
_____no_output_____
###Markdown
Visualizing the Joint DistributionThe following function uses `pcolormesh` to plot the joint distribution.
###Code
import matplotlib.pyplot as plt
def plot_joint(joint, cmap='Blues'):
"""Plot a joint distribution with a color mesh."""
vmax = joint.to_numpy().max() * 1.1
plt.pcolormesh(joint.columns, joint.index, joint,
cmap=cmap,
vmax=vmax,
shading='nearest')
plt.colorbar()
decorate(xlabel='A height in cm',
ylabel='B height in cm')
###Output
_____no_output_____
###Markdown
Here's what the joint prior distribution looks like.
###Code
plot_joint(joint)
decorate(title='Joint prior distribution of height for A and B')
###Output
_____no_output_____
###Markdown
As you might expect, the probability is highest (darkest) near the mean and drops off farther from the mean.Another way to visualize the joint distribution is a contour plot.
###Code
def plot_contour(joint):
"""Plot a joint distribution with a contour."""
plt.contour(joint.columns, joint.index, joint,
linewidths=2)
decorate(xlabel='A height in cm',
ylabel='B height in cm')
plot_contour(joint)
decorate(title='Joint prior distribution of height for A and B')
###Output
_____no_output_____
###Markdown
Each line represents a level of equal probability. LikelihoodNow that we have a joint prior distribution, we can update it with the data, which is that `A` is taller than `B`.Each element in the joint distribution represents a hypothesis about the heights of `A` and `B`.To compute the likelihood of every pair of quantities, we can extract the column names and row labels from the prior, like this:
###Code
x = joint.columns
y = joint.index
###Output
_____no_output_____
###Markdown
And use them to compute a mesh grid.
###Code
X, Y = np.meshgrid(x, y)
###Output
_____no_output_____
###Markdown
`X` contains copies of the quantities in `x`, which are possible heights for `A`. `Y` contains copies of the quantities in `y`, which are possible heights for `B`.If we compare `X` and `Y`, the result is a Boolean array:
###Code
A_taller = (X > Y)
A_taller.dtype
###Output
_____no_output_____
###Markdown
To compute likelihoods, I'll use `np.where` to make an array with `1` where `A_taller` is `True` and 0 elsewhere.
###Code
a = np.where(A_taller, 1, 0)
###Output
_____no_output_____
###Markdown
To visualize this array of likelihoods, I'll put in a `DataFrame` with the values of `x` as column names and the values of `y` as row labels.
###Code
likelihood = pd.DataFrame(a, index=x, columns=y)
###Output
_____no_output_____
###Markdown
Here's what it looks like:
###Code
plot_joint(likelihood, cmap='Oranges')
decorate(title='Likelihood of A>B')
###Output
_____no_output_____
###Markdown
The likelihood of the data is 1 where `X > Y` and 0 elsewhere. The UpdateWe have a prior, we have a likelihood, and we are ready for the update. As usual, the unnormalized posterior is the product of the prior and the likelihood.
###Code
posterior = joint * likelihood
###Output
_____no_output_____
###Markdown
I'll use the following function to normalize the posterior:
###Code
def normalize(joint):
"""Normalize a joint distribution."""
prob_data = joint.to_numpy().sum()
joint /= prob_data
return prob_data
normalize(posterior)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot_joint(posterior)
decorate(title='Joint posterior distribution of height for A and B')
###Output
_____no_output_____
###Markdown
All pairs where `B` is taller than `A` have been eliminated. The rest of the posterior looks the same as the prior, except that it has been renormalized. Marginal DistributionsThe joint posterior distribution represents what we believe about the heights of `A` and `B` given the prior distributions and the information that `A` is taller.From this joint distribution, we can compute the posterior distributions for `A` and `B`. To see how, let's start with a simpler problem.Suppose we want to know the probability that `A` is 180 cm tall. We can select the column from the joint distribution where `x=180`.
###Code
column = posterior[180]
column.head()
###Output
_____no_output_____
###Markdown
This column contains posterior probabilities for all cases where `x=180`; if we add them up, we get the total probability that `A` is 180 cm tall.
###Code
column.sum()
###Output
_____no_output_____
###Markdown
It's about 3%.Now, to get the posterior distribution of height for `A`, we can add up all of the columns, like this:
###Code
column_sums = posterior.sum(axis=0)
column_sums.head()
###Output
_____no_output_____
###Markdown
The argument `axis=0` means we want to add up the columns.The result is a `Series` that contains every possible height for `A` and its probability. In other words, it is the distribution of heights for `A`.We can put it in a `Pmf` like this:
###Code
marginal_A = Pmf(column_sums)
###Output
_____no_output_____
###Markdown
When we extract the distribution of a single variable from a joint distribution, the result is called a **marginal distribution**.The name comes from a common visualization that shows the joint distribution in the middle and the marginal distributions in the margins.Here's what the marginal distribution for `A` looks like:
###Code
marginal_A.plot(label='Posterior for A')
decorate(xlabel='Height in cm',
ylabel='PDF',
title='Posterior distribution for A')
###Output
_____no_output_____
###Markdown
Similarly, we can get the posterior distribution of height for `B` by adding up the rows and putting the result in a `Pmf`.
###Code
row_sums = posterior.sum(axis=1)
marginal_B = Pmf(row_sums)
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
marginal_B.plot(label='Posterior for B', color='C1')
decorate(xlabel='Height in cm',
ylabel='PDF',
title='Posterior distribution for B')
###Output
_____no_output_____
###Markdown
Let's put the code from this section in a function:
###Code
def marginal(joint, axis):
"""Compute a marginal distribution."""
return Pmf(joint.sum(axis=axis))
###Output
_____no_output_____
###Markdown
`marginal` takes as parameters a joint distribution and an axis number:* If `axis=0`, it returns the marginal of the first variable (the one on the x-axis);* If `axis=1`, it returns the marginal of the second variable (the one on the y-axis). So we can compute both marginals like this:
###Code
marginal_A = marginal(posterior, axis=0)
marginal_B = marginal(posterior, axis=1)
###Output
_____no_output_____
###Markdown
Here's what they look like, along with the prior.
###Code
prior.plot(label='Prior', color='C5')
marginal_A.plot(label='Posterior for A')
marginal_B.plot(label='Posterior for B')
decorate(xlabel='Height in cm',
ylabel='PDF',
title='Prior and posterior distributions for A and B')
###Output
_____no_output_____
###Markdown
As you might expect, the posterior distribution for `A` is shifted to the right and the posterior distribution for `B` is shifted to the left.We can summarize the results by computing the posterior means:
###Code
prior.mean()
print(marginal_A.mean(), marginal_B.mean())
###Output
_____no_output_____
###Markdown
Based on the observation that `A` is taller than `B`, we are inclined to believe that `A` is a little taller than average, and `B` is a little shorter.Notice that the posterior distributions are a little narrower than the prior. We can quantify that by computing their standard deviations.
###Code
prior.std()
print(marginal_A.std(), marginal_B.std())
###Output
_____no_output_____
###Markdown
The standard deviations of the posterior distributions are a little smaller, which means we are more certain about the heights of `A` and `B` after we compare them. Conditional PosteriorsNow suppose we measure `A` and find that he is 170 cm tall. What does that tell us about `B`?In the joint distribution, each column corresponds a possible height for `A`. We can select the column that corresponds to height 170 cm like this:
###Code
column_170 = posterior[170]
###Output
_____no_output_____
###Markdown
The result is a `Series` that represents possible heights for `B` and their relative likelihoods.These likelihoods are not normalized, but we can normalize them like this:
###Code
cond_B = Pmf(column_170)
cond_B.normalize()
###Output
_____no_output_____
###Markdown
Making a `Pmf` copies the data by default, so we can normalize `cond_B` without affecting `column_170` or `posterior`.The result is the conditional distribution of height for `B` given that `A` is 170 cm tall.Here's what it looks like:
###Code
prior.plot(label='Prior', color='C5')
marginal_B.plot(label='Posterior for B', color='C1')
cond_B.plot(label='Conditional posterior for B',
color='C4')
decorate(xlabel='Height in cm',
ylabel='PDF',
title='Prior, posterior and conditional distribution for B')
###Output
_____no_output_____
###Markdown
The conditional posterior distribution is cut off at 170 cm, because we have established that `B` is shorter than `A`, and `A` is 170 cm. Dependence and IndependenceWhen we constructed the joint prior distribution, I said that the heights of `A` and `B` were independent, which means that knowing one of them provides no information about the other.In other words, the conditional probability $P(A_x | B_y)$ is the same as the unconditioned probability $P(A_x)$.But in the posterior distribution, $A$ and $B$ are not independent.If we know that `A` is taller than `B`, and we know how tall `A` is, that gives us information about `B`.The conditional distribution we just computed demonstrates this dependence. SummaryIn this chapter we started with the "outer" operations, like outer product, which we used to construct a joint distribution.In general, you cannot construct a joint distribution from two marginal distributions, but in the special case where the distributions are independent, you can.We extended the Bayesian update process and applied it to a joint distribution. Then from the posterior joint distribution we extracted marginal posterior distributions and conditional posterior distributions.As an exercise, you'll have a chance to apply the same process to a problem that's a little more difficult and a lot more useful, updating a chess player's rating based on the outcome of a game. Exercises **Exercise:** Based on the results of the previous example, compute the posterior conditional distribution for `A` given that `B` is 180 cm. Hint: Use `loc` to select a row from a `DataFrame`.
###Code
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose we have established that `A` is taller than `B`, but we don't know how tall `B` is.Now we choose a random woman, `C`, and find that she is shorter than `A` by at least 15 cm. Compute posterior distributions for the heights of `A` and `C`.The average height for women in the U.S. is 163 cm; the standard deviation is 7.3 cm.
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
**Exercise:** [The Elo rating system](https://en.wikipedia.org/wiki/Elo_rating_system) is a way to quantify the skill level of players for games like chess.It is based on a model of the relationship between the ratings of players and the outcome of a game. Specifically, if $R_A$ is the rating of player `A` and $R_B$ is the rating of player `B`, the probability that `A` beats `B` is given by the [logistic function](https://en.wikipedia.org/wiki/Logistic_function):$$P(\mathrm{A~beats~B}) = \frac{1}{1 + 10^{(R_B-R_A)/400}}$$The parameters 10 and 400 are arbitrary choices that determine the range of the ratings. In chess, the range is from 100 to 2800.Notice that the probability of winning depends only on the difference in rankings. As an example, if $R_A$ exceeds $R_B$ by 100 points, the probability that `A` wins is
###Code
1 / (1 + 10**(-100/400))
###Output
_____no_output_____
###Markdown
Suppose `A` has a current rating of 1600, but we are not sure it is accurate. We could describe their true rating with a normal distribution with mean 1600 and standard deviation 100, to indicate our uncertainty.And suppose `B` has a current rating of 1800, with the same level of uncertainty.Then `A` and `B` play and `A` wins. How should we update their ratings? To answer this question:1. Construct prior distributions for `A` and `B`.2. Use them to construct a joint distribution, assuming that the prior distributions are independent.3. Use the logistic function above to compute the likelihood of the outcome under each joint hypothesis. 4. Use the joint prior and likelihood to compute the joint posterior. 5. Extract and plot the marginal posteriors for `A` and `B`.6. Compute the posterior means for `A` and `B`. How much should their ratings change based on this outcome?
###Code
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____ |
notebooks/Concepts/.ipynb_checkpoints/Potential flow-checkpoint.ipynb | ###Markdown
Potential flow Potential functions**Uniform flow**:$w = U\exp{(-i\alpha)}z$**Line source**:$w = \dfrac{\dot{Q}}{2\pi}\ln z$ Velocity fields$\dfrac{dw}{dz} = u - iv$**Uniform flow:**$\dfrac{dw}{dz} = U\left(\cos{\alpha}-i\sin{\alpha}\right)$ **Line source:**$\dfrac{dw}{dz} = \dfrac{\dot{Q}}{2\pi}\dfrac{1}{z}$ Combine potentials Consider a potential flow with a sink at $(0,0)$ and a source at $(r,0)$, under the influence of a horizontal uniform flow going from right to left ($\alpha = \pi$). The potential that describes that system is given by:$w = w_{\rm uniform} + w_{\rm source} + w_{\rm sink}$Hence, $w = U\exp{(-i\alpha)}z + \dfrac{\dot{Q_{\rm in}}}{2\pi}\ln (z-r) - \dfrac{\dot{Q_{\rm out}}}{2\pi}\ln z$$w = -Uz + \dfrac{\dot{Q_{\rm in}}}{2\pi}\ln (z-r) - \dfrac{\dot{Q_{\rm out}}}{2\pi}\ln z$The uniform flow component will represent the regional flow, which can be described using Darcy's law as $U=-K\dot{I}$, where $K$ is the hydraulic conductivity (m/s) and $\dot{I}$ is the water table gradient (m/m). The source and sink wells represent injection and extraction points, characterized by flow rates per unit depth $\dot{Q_{\rm in}}$ and $\dot{Q_{\rm out}}$ (m²/s). For an aquifer of depth $H$, the volumetric flow rate $Q$ is $\dot{Q}=Q/H$. Now, the extraction rate can be expressed as a proportion of the injection rate as $Q_{\rm out} = fQ_{\rm in}$. With all these replacements, the potential function can be rewritten as:$w = K\dot{I}z + \dfrac{Q_{\rm in}}{2\pi H} \left( \ln(z-r) - f \ln z \right)$Notice that in our case, the gradient $\dot{I}$ must be negative so the regional flow goes indeed from right to left. To avoid confusion, let's rewrite it in terms of the absolute value of the gradient $I$:$w = -KIz + \dfrac{Q_{\rm in}}{2\pi H} \left( \ln(z-r) - f \ln z \right)$ Velocity field $\dfrac{dw}{dz} = \dfrac{d}{dz}\left(-KIz + \dfrac{Q_{\rm in}}{2\pi H} \left( \ln(z-r) - f \ln z \right)\right)$$\dfrac{dw}{dz} = -KI + \dfrac{Q_{\rm in}}{2\pi H} \dfrac{d}{dz}\left( \ln(z-r) - f \ln z \right)$$\dfrac{dw}{dz} = -KI + \dfrac{Q_{\rm in}}{2\pi H} \left(\dfrac{1}{z-r} - \dfrac{f}{z} \right)$ Velocity at the midpoint We'll pick the velocity at the midpoint between the extraction and injection points as the characteristic velocity of the system. This means that we want to evaluate the velocity field at $z = r/2 + 0i$\begin{equation}\begin{array}{rl} \dfrac{dw}{dz}\bigg\rvert_{z=r/2+0i} =& -KI + \dfrac{Q_{\rm in}}{2\pi H} \left(\dfrac{1}{r/2-r} - \dfrac{f}{r/2} \right)\\ =& -KI - \dfrac{Q_{\rm in}}{\pi H} \left(\dfrac{1+f}{r}\right)\end{array}\end{equation}Notice that this velocity can be split into the contribution of the regional flow and the contribution from the wells:\begin{equation}\begin{array}{rl} w'_{\rm regional} =& -KI \\ w'_{\rm wells} =& -\dfrac{1}{\pi}\dfrac{Q_{\rm in} (1+f)}{r H}\end{array}\end{equation} Non-dimensional flow number ($\mathcal{F}_L$) A comparison between the velocity component due to the regional flow and the component due to the wells, expressed as a ratio between the two. \begin{equation}\begin{array}{rl}\mathcal{F}_L =& \dfrac{\text{Velocity due regional flow}}{\text{Velocity due well couple}} \\=& \dfrac{w'_{\rm regional}}{w'_{\rm wells}} \\=& \dfrac{-KI}{-\dfrac{Q_{\rm in}}{\pi H} \left(\dfrac{1+f}{r}\right)} \\=& \dfrac{\pi K I H r}{Q_{in}(1+f)}\end{array}\end{equation}If $\mathcal{F}_L \to 0$, the flow induced by the wells rules the system, whereas if $\mathcal{F}_L \to \infty$, the regional flow rules the system.
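A minimal numerical sketch of the midpoint velocity and flow number derived above; the parameter values ($K$, $I$, $r$, $f$, $Q_{\rm in}$, $H$) are illustrative assumptions, not values fixed by this derivation:
###Code
# Check the midpoint velocity and F_L against the closed forms above.
# K, I, r, f, Qin and H are assumed example values.
import numpy as np

K, I = 1.0e-2, 1.0e-4        # hydraulic conductivity (m/s) and gradient (-)
r, f = 40.0, 1.5             # well separation (m) and extraction/injection ratio (-)
Qin, H = 1.0 / 86400, 20.0   # injection rate (m3/s) and aquifer depth (m)

def dw_dz(z):
    """Complex velocity of the combined potential."""
    return -K * I + Qin / (2 * np.pi * H) * (1 / (z - r) - f / z)

w_wells_mid = -Qin * (1 + f) / (np.pi * H * r)        # closed-form well contribution at z = r/2
FL_mid = (K * I) / abs(w_wells_mid)                   # = pi*K*I*H*r / (Qin*(1+f))
print(dw_dz(r / 2 + 0j).real, -K * I + w_wells_mid)   # the two values should match
print(FL_mid, np.pi * K * I * H * r / (Qin * (1 + f)))
###Output
_____no_output_____
###Markdown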
Minimum velocity along a streamline The minimum velocity is found at:\begin{equation}\begin{array}{rl} \dfrac{d}{dx}\dfrac{dw}{dz}\bigg\rvert_{z=x+0i} =& 0 \\ \dfrac{d}{dx}\left(-KI + \dfrac{Q_{\rm in}}{2\pi H} \left(\dfrac{1}{x-r} - \dfrac{f}{x} \right)\right) =& 0\\ \dfrac{Q_{\rm in}}{2 \pi H} \dfrac{d}{dx}\left(\dfrac{1}{x-r} - \dfrac{f}{x} \right) =& 0\\ \dfrac{Q_{\rm in}}{2 \pi H} \left(\dfrac{f}{x^2} - \dfrac{1}{\left(x-r\right)^2}\right) =& 0\\ \\ \dfrac{f}{x^2} =& \dfrac{1}{\left(x-r\right)^2}\\ x^2 =& f \left( x-r \right) ^2\\ x =& \dfrac{\sqrt{f}}{\sqrt{f}\pm1}r \\ x < r \Rightarrow x =& \dfrac{\sqrt{f}}{\sqrt{f}+1}r\end{array}\end{equation} Defining $m = \dfrac{\sqrt{f}}{\sqrt{f}+1}$, evaluating at $z = mr + 0i$ gives the minimum velocity.\begin{equation}\begin{array}{rl} \dfrac{dw}{dz}\bigg\rvert_{z=mr+0i} =& -KI + \dfrac{Q_{\rm in}}{2\pi H} \left(\dfrac{1}{mr-r} - \dfrac{f}{mr} \right)\\ =& -KI + \dfrac{Q_{\rm in}}{2 \pi H r} \left(\dfrac{m-fm+f}{m(m-1)}\right)\\ =& -KI - \dfrac{Q_{\rm in}}{2 \pi H r} \left(\dfrac{\left(\sqrt{f}+f\right)\left(\sqrt{f}+1\right)}{\sqrt{f}}\right)\\ =& -KI - \dfrac{ Q_{\rm in}\left( 1 + \sqrt{f} \right)^2 } {2 \pi H r}\end{array}\\\end{equation} This velocity can also be split into the contribution of the regional flow and the contribution from the wells:\begin{equation}\begin{array}{rl} w'_{\rm regional} =& -KI \\ w'_{\rm wells} =& -\dfrac{Q_{\rm in}\left(1+\sqrt{f}\right)^2 }{2\pi r H}\end{array}\end{equation} Mean velocity along a streamline Consider the streamline that connects the sink and source points. It cannot be measured at those points because those are poles in the system, hence, some separation $\delta$ from those points has to be considered: the well radius is the straightforward choice, so we'll adopt it and assume that both injection and extraction wells have the same diameter. Now, with $z = x+iy$, the streamline exists at $y=0$ and $\delta<x<r-\delta$, hence, the mean velocity can be derived as follows:\begin{equation}\begin{array}{rl}\bar{u} =& \dfrac{1}{r-2\delta} \int_{\delta}^{r-\delta}\dfrac{dw}{dz}dx\\=& \dfrac{1}{r-2\delta} \int_{\delta}^{r-\delta} \left(-KI + \dfrac{Q_{\rm in}}{2\pi H} \left(\dfrac{1}{x-r} - \dfrac{f}{x} \right)\right)dx\\=& -KI + \dfrac{Q_{\rm in}}{2\pi (r-2\delta) H}\int_{\delta}^{r-\delta}\left(\dfrac{1}{x-r} - \dfrac{f}{x} \right)dx\\=& -KI + \dfrac{Q_{\rm in}}{2\pi (r-2\delta) H}\left(\ln|x-r| - f\ln{x} \right)\bigg\rvert_{x=\delta}^{x=r-\delta}\\=& -KI + \dfrac{Q_{\rm in}}{2\pi (r-2\delta) H}\left[ (1+f)(\ln(\delta) -\ln(r-\delta)) \right]\\=& -KI - \dfrac{Q_{\rm in}(1+f)}{2\pi (r-2\delta) H}\ln\left(\dfrac{r-\delta}{\delta}\right)\\\end{array}\end{equation}Notice that this velocity can be split into the contribution of the regional flow and the contribution from the wells:\begin{equation}\begin{array}{rl} w'_{\rm regional} =& -KI \\ w'_{\rm wells} =& -\dfrac{Q_{\rm in}(1+f)}{2\pi (r-2\delta) H}\ln\left(\dfrac{r-\delta}{\delta}\right)\end{array}\end{equation} Non-dimensional flow number ($\mathcal{F}_L$) A comparison between the velocity component due to the regional flow and the component due to the wells, expressed as a ratio between the two.
\begin{equation}\begin{array}{rl}\mathcal{F}_L =& \dfrac{\text{Velocity due regional flow}}{\text{Velocity due well couple}} \\=& \dfrac{w'_{\rm regional}}{w'_{\rm wells}} \\=& \dfrac{-KI}{-\dfrac{Q_{\rm in}(1+f)}{2\pi (r-2\delta) H}\ln\left(\dfrac{r-\delta}{\delta}\right)} \\=& \dfrac{2 \pi K I H (r-2\delta)}{Q_{in}(1+f)\ln\left(\frac{r-\delta}{\delta}\right)}\end{array}\end{equation}If $\mathcal{F}_L \to 0$, the flow induced by the wells rules the system, whereas if $\mathcal{F}_L \to \infty$, the regional flow rules the system. Mass flow through an infinite line The previous choice of characteristic velocity seems arbitrary in the sense that it overestimates the effect of the potential induced by the wells on the system. This is because the streamline along the x-axis has the highest velocity magnitude (the sharpest potential gradient); if we were to pick a point elsewhere, the velocity from the wells would be less. But in the same sense, for $f=1$, this point represents the lowest velocity along that streamline. To avoid this apparent bias, let's consider the totality of the flow that crosses an infinite vertical line at $z=r/2+iy$. This won't give us a useful insight on the regional flow, as that mass flow will diverge, but it will give us a point of comparison to the wells' contribution to the total mass balance in the system. So, the velocity field along that vertical line is \begin{equation}\begin{array}{rl}\dfrac{dw}{dz}\bigg\rvert_{z=\tfrac{r}{2}+iy} =& -KI + \dfrac{Q_{\rm in}}{2\pi H} \left(\dfrac{1}{z-r} - \dfrac{f}{z} \right)\bigg\rvert_{z=\tfrac{r}{2}+iy}\\=& - KI + \dfrac{Q_{\rm in}}{2\pi H} \left(\dfrac{1}{\tfrac{r}{2}+iy-r} - \dfrac{f}{\tfrac{r}{2}+iy} \right)\\=& -KI - \dfrac{Q_{\rm in}}{2\pi H} \left(\dfrac{\tfrac{r}{2}(1+f) + iy(1-f)}{\tfrac{r^2}{4}+y^2}\right)\end{array}\end{equation} The mass flow through that vertical line will require only the horizontal component of the velocity field $u$ at that point, meaning that we are only interested in the real component of the function above:$u = -KI - \dfrac{Q_{\rm in}}{2\pi H} \left(\dfrac{\tfrac{r}{2}(1+f)}{\tfrac{r^2}{4}+y^2}\right) = -KI - \dfrac{Q_{\rm in}r(1+f)}{4\pi H} \left(\dfrac{1}{\tfrac{r^2}{4}+y^2}\right) $ The mass flow will be described as:\begin{equation}\begin{array}{rl} \int_{-\infty}^{\infty} u \, dy =& \int_{-\infty}^{\infty}\left[ -KI - \dfrac{Q_{\rm in}r(1+f)}{4\pi H} \left(\dfrac{1}{\tfrac{r^2}{4}+y^2}\right)\right] \, dy \\ =& -\underbrace{\int_{-\infty}^{\infty}KI \, dy}_{\text{Diverges}} - \underbrace{\dfrac{Q_{\rm in}r(1+f)}{4\pi H} \int_{-\infty}^{\infty}\left(\dfrac{1}{\tfrac{r^2}{4}+y^2}\right) \, dy}_{\text{Converges}}\\ =& -\int_{-\infty}^{\infty}KI \, dy - \dfrac{Q_{\rm in}r(1+f)}{4\pi H} \left[ \dfrac{2}{r} \tan^{-1}{\left(\dfrac{2y}{r}\right)}\right]\bigg\rvert_{y \to -\infty}^{y \to \infty}\\ =& -\int_{-\infty}^{\infty}KI \, dy - \dfrac{Q_{\rm in}r(1+f)}{4\pi H} \left[ \dfrac{2\pi}{r} \right] \\ =& -\int_{-\infty}^{\infty}KI \, dy - \dfrac{Q_{\rm in}(1+f)}{2 H}\end{array}\end{equation}We can divide this end term by the length of the line, $l_r$, in order to obtain a velocity; hence, the contributions are:\begin{equation}\begin{array}{rl} w'_{\rm regional} =& -\dfrac{1}{l_r}\int_{-\infty}^{\infty}KI \, dy \\ w'_{\rm wells} =& -\dfrac{1}{2l_r}\dfrac{Q_{\rm in}(1+f)}{H}\end{array}\end{equation} Non-dimensional flow number ($\mathcal{F}_L$) In this case, this comparison doesn't make much sense, as one of the integrals diverged.
\begin{equation}\begin{array}{rl}\mathcal{F}_L =& \dfrac{\text{Velocity due regional flow}}{\text{Velocity due well couple}} \\=& \dfrac{w'_{\rm regional}}{w'_{\rm wells}} \\=& \dfrac{-\dfrac{1}{l_r}\int_{-\infty}^{\infty}KI \, dy }{-\dfrac{1}{2l_r}\dfrac{Q_{\rm in}(1+f)}{H}} \\=& \dfrac{2H \int_{-\infty}^{\infty}KI \, dy}{Q_{in}(1+f)}\end{array}\end{equation}We could infer that in this approach, the regional flow will always dwarf the potential induced by the pair of wells. But this is not useful for our purpose :( Mass flow through a finite line In order to obtain a convergent integral for the regional component, let's pick a vertical line that spans from $r/2(1-i)$ to $r/2(1+i)$; that way, the mass flow will be described as:\begin{equation}\begin{array}{rl} \int_{-r/2}^{r/2} u \, dy =& \int_{-r/2}^{r/2}\left[ -KI - \dfrac{Q_{\rm in}r(1+f)}{4\pi H} \left(\dfrac{1}{\tfrac{r^2}{4}+y^2}\right)\right] \, dy \\ =& -\int_{-r/2}^{r/2}KI \, dy - \dfrac{Q_{\rm in}r(1+f)}{4\pi H} \int_{-r/2}^{r/2}\left(\dfrac{1}{\tfrac{r^2}{4}+y^2}\right) \, dy\\ =& -KIy\bigg\rvert_{y = -r/2}^{y = r/2} - \dfrac{Q_{\rm in}r(1+f)}{4\pi H} \left[ \dfrac{2}{r} \tan^{-1}{\left(\dfrac{2y}{r}\right)}\right]\bigg\rvert_{y = -r/2}^{y = r/2}\\ =& -KIr - \dfrac{Q_{\rm in}r(1+f)}{4\pi H} \left[ \dfrac{\pi}{r} \right] \\ =& -KIr - \dfrac{Q_{\rm in}(1+f)}{4 H}\end{array}\end{equation}
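As a numerical check of the finite-line mass flow just derived, the sketch below integrates $u$ along the segment and compares it with $-KIr - Q_{\rm in}(1+f)/(4H)$; all parameter values are assumptions:
###Code
# Quadrature check of the mass flow through the finite vertical line at x = r/2.
import numpy as np

K, I, r, f, Qin, H = 1.0e-2, 1.0e-4, 40.0, 1.5, 1.0 / 86400, 20.0

y = np.linspace(-r / 2, r / 2, 200001)
u = -K * I - Qin * r * (1 + f) / (4 * np.pi * H) / (r**2 / 4 + y**2)
flux_num = np.trapz(u, y)
flux_closed = -K * I * r - Qin * (1 + f) / (4 * H)
print(flux_num, flux_closed)   # the two values should agree closely
###Output
_____no_output_____
###Markdown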
This will be used as our characteristic velocity, which can be rewritten in terms of the non-dimensional flow number:\begin{equation}\begin{array}{rl} q_c =& \dfrac{1}{r}\int_{-r/2}^{r/2}\dfrac{dw}{dz}dy\\ =& -KI - \dfrac{Q_{\rm in}(1+f)}{4 r H}\\ =& -KI \left( 1 + \dfrac{1}{\mathcal{F}_L} \right)\\ =& - KI \left( \dfrac{\mathcal{F}_L}{\mathcal{F}_L + 1} \right)\end{array}\end{equation}The actual pore-water velocity $u_c$ is found by dividing by the porosity $\theta$\begin{equation}\begin{array}{rl} u_c =& \dfrac{q_c}{\theta}\\ =& - \dfrac{KI}{\theta} \left( \dfrac{\mathcal{F}_L}{\mathcal{F}_L + 1} \right)\end{array}\end{equation} Flow time scale ($\tau$)Picking the half-distance between the two wells as our characteristic lenght $r/2$, a characteristic time $\tau$ can be derived from $u_c$\begin{equation}\begin{array}{rl} \tau =& \dfrac{r}{|u_c|}\\ =& \dfrac{r\theta}{KI} \left( \dfrac{\mathcal{F}_L + 1}{\mathcal{F}_L} \right)\\\end{array}\end{equation} Decay rate and timescale comparisonConsidering a tracer $C$ that is injected in the source point. This tracer follows a first-order decay reaction with a rate $\lambda$. $\dfrac{dC}{dt} = -\lambda C$With an initial condition $C(0) = C_0$, the concentration of that tracer as a function of space and time is,$C(t) = C_{0}\exp{-\lambda t}$$C_\tau = C_0 \exp{-\lambda \tau} = C_0 \exp{\left(-\lambda\dfrac{r}{|u_c|}\right)}$Hence, for our characteristic time ($t=\tau$), the relative concentration of that tracer will be:$C_\tau = C_0\exp{ \left( - \lambda \dfrac{r \theta (\mathcal{F}_L + 1)}{KI \mathcal{F}_L} \right) }$ Dilution effect Assuming that perfect mixing has between the injection point and the distance between the wells, we could calculate a relative concentration that takes into account this mass balance. Notice that this analysis cannot be done directly from our previous assumptions because there is no mixing in potential flow. However, we can keep the characteristic velocity.\begin{equation}\begin{array}{rl} C_0 u_c \Delta y \Delta z \theta=& C_{\rm in} Q_{\rm in} + C_{\rm reg} Q_{\rm reg}\\ C_0 u_c \Delta y \Delta z \theta=& C_{\rm in} Q_{\rm in}\\ C_0 =& \dfrac{Q_{\rm in}}{u_c \Delta y \Delta z \theta} C_{in} \\ =& \dfrac{Q_{\rm in}}{u_c \Delta y \Delta z \theta} C_{\rm in}\\ =& \dfrac{Q_{\rm in}}{- \dfrac{KI}{\theta} \left( \dfrac{\mathcal{F}_L}{\mathcal{F}_L + 1} \right) \Delta y \Delta z \theta} C_{\rm in}\\ =& \dfrac{Q_{\rm in}(\mathcal{F}_L + 1)}{KI \mathcal{F}_L \Delta y \Delta z} C_{\rm in}\end{array}\end{equation} $\Delta y$ and $\Delta z$ are just lenghts around the arbitrary control volume. For our 2D case, $\Delta z = H$ and $\Delta y = $ size of the element in PFLOTRAN$C_0 =\dfrac{Q_{\rm in}(\mathcal{F}_L + 1)}{KI \mathcal{F}_L \Delta y H} C_{\rm in}$ Which process is more dominant?The effects of each process is more enhanced dependind of the system. For example, if $I$ is big, our characteristic time will be small, then, decay won't be as important as dilution could be in the aquifer. 
We are interested in finding which set of parameters maximize $C_\tau$, or minimize the log-reductions of the tracer concentration.\begin{equation}\begin{array}{rl}C_\tau =& C_0 \exp{\left(-\lambda\dfrac{r}{|u_c|}\right)}\\C_\tau =& \dfrac{C_{\rm in}Q_{\rm in}}{u_c \Delta y H \theta} \exp{\left(-\lambda\dfrac{r}{|u_c|}\right)}\end{array}\end{equation} Potential flow visualization Well flow is dominant \begin{equation}\begin{array}{rl}\dfrac{dw}{dz} =& -KI - \dfrac{Q_{\rm in}}{2\pi H} \left(\dfrac{f}{z} - \dfrac{1}{z-r}\right)\\\\\mathcal{F}_L =& \dfrac{K I}{\dfrac{Q_{\rm in}}{2\pi H} \left(\dfrac{f}{z} - \dfrac{1}{z-r}\right)}\end{array}\end{equation}
###Code
%reset -f
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
def ZFix(x,y):
XX,YY = np.meshgrid(X,Y)
return XX + (YY*1j)
def regionalFlow():
return -K*I
def wellFlow(z):
w = -(QIN/(2*PI*H))*(F/z - 1/(z-R))
wAbs = np.abs(w)
return (w, w.real, -w.imag, wAbs)
def totalFlow(z):
reg = regionalFlow()
well = wellFlow(z)
tot = reg + well[0]
totAbs = np.abs(tot)
return (tot, tot.real,-tot.imag,totAbs)
def flowNumber(z):
reg = regionalFlow()
well = wellFlow(z)[3]
FL = np.abs(reg)/well
return FL
''' GLOBAL CONSTANTS '''
PI = 3.141592
K = 1.0E-2
#I = 6.6E-14
I = 1.0E-10
R = 40.
F = 1.5
QIN = 1.0/86400.
H = 1
#H = 20.
POINT_OUT = {"x":0,
"y":0,
"c":"r",
"s":100,
"linewidths":1,
"edgecolors":"w",
"label":"Extraction"}
POINT_QIN = {"x":R,
"y":0,
"c":"b",
"s":80,
"linewidths":1,
"edgecolors":"w",
"label":"Injection"}
X = np.linspace(-50,150,151)
Y = np.linspace(-100,100,33)
Z = ZFix(X,Y)
_,U,V,C = totalFlow(Z)
FL = flowNumber(Z)
fig, axs = plt.subplots(2,1,figsize=(12,10))
ax = axs[0]
st = ax.streamplot(X,Y,U,V,color="k",density=[0.8, 0.8])
fi = ax.pcolormesh(X,Y,C,shading='auto',vmin=0,vmax=5.0E-7,cmap="cool")
pi = ax.scatter(**POINT_QIN,zorder=3)
po = ax.scatter(**POINT_OUT,zorder=3)
cbar = ax.figure.colorbar(fi,ax=ax,orientation="vertical")
cbar.set_label(r'$\bf{|U|}$')
ax.legend(loc='lower right')
ax = axs[1]
fi = ax.pcolormesh(X,Y,np.log10(FL),shading='auto')
pi = ax.scatter(**POINT_QIN)
po = ax.scatter(**POINT_OUT)
ax.xaxis.set_tick_params(which="both",labelbottom=False,bottom=False)
ax.yaxis.set_tick_params(which="both",labelleft=False,left=False)
cbar = ax.figure.colorbar(fi,ax=ax,orientation="vertical")
ax.legend(loc='lower right')
plt.show()
X = np.linspace(0.01,R-0.01,151)
Y = np.array([0.])
Z = ZFix(X,Y)
_,U,V,C = totalFlow(Z)
FL = flowNumber(Z)
fig, axs = plt.subplots(2,1,figsize=(8,6),sharex=True,\
gridspec_kw={"height_ratios":[3,1],"hspace":0})
ax = axs[0]
li = ax.plot(X,C[0],lw=5)
ax.set(xlim=[0,R],ylim=[0,1.0E-6],ylabel=r"$\bf{|U|}$ [m/s]")
ax = axs[1]
li = ax.plot(X,np.log10(FL[0]),lw=3,c="gray")
ax.set(ylim=[-7.5,-4.5],xlabel=r"$\bf{X}$ [m]",ylabel=r"$\bf{\log(\mathcal{F}_L)}$")
plt.show()
###Output
_____no_output_____
###Markdown
Regional flow is dominant
###Code
%reset -f
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
def ZFix(x,y):
    # Build the complex coordinate grid z = x + i*y from the supplied axes
    XX,YY = np.meshgrid(x,y)
    return XX + (YY*1j)
def regionalFlow():
return -K*I
def wellFlow(z):
w = -(QIN/(2*PI*H))*(F/z - 1/(z-R))
wAbs = np.abs(w)
return (w, w.real, -w.imag, wAbs)
def totalFlow(z):
reg = regionalFlow()
well = wellFlow(z)
tot = reg + well[0]
totAbs = np.abs(tot)
return (tot, tot.real,-tot.imag,totAbs)
def flowNumber(z):
reg = regionalFlow()
well = wellFlow(z)[3]
FL = np.abs(reg)/well
return FL
''' GLOBAL CONSTANTS '''
PI = 3.141592
K = 1.0E-2
I = 6.6E-4
R = 40.
F = 10
QIN = 1.0/86400.
H = 20.
POINT_OUT = {"x":0,
"y":0,
"c":"r",
"s":100,
"linewidths":1,
"edgecolors":"w",
"label":"Extraction"}
POINT_QIN = {"x":R,
"y":0,
"c":"b",
"s":80,
"linewidths":1,
"edgecolors":"w",
"label":"Injection"}
X = np.linspace(-50,150,151)
Y = np.linspace(-100,100,33)
Z = ZFix(X,Y)
_,U,V,C = totalFlow(Z)
FL = flowNumber(Z)
fig, axs = plt.subplots(2,1,figsize=(12,10))
ax = axs[0]
st = ax.streamplot(X,Y,U,V,color="k",density=[0.8, 0.8])
fi = ax.pcolormesh(X,Y,C,shading='auto',vmin=0,vmax=2.0E-5,cmap="cool")
pi = ax.scatter(**POINT_QIN,zorder=3)
po = ax.scatter(**POINT_OUT,zorder=3)
cbar = ax.figure.colorbar(fi,ax=ax,orientation="vertical")
cbar.set_label(r'$\bf{|U|}$')
ax.legend(loc='lower right')
ax = axs[1]
fi = ax.pcolormesh(X,Y,np.log10(FL),shading='auto')
pi = ax.scatter(**POINT_QIN)
po = ax.scatter(**POINT_OUT)
ax.xaxis.set_tick_params(which="both",labelbottom=False,bottom=False)
ax.yaxis.set_tick_params(which="both",labelleft=False,left=False)
cbar = ax.figure.colorbar(fi,ax=ax,orientation="vertical")
ax.legend(loc='lower right')
plt.show()
X = np.linspace(0.01,R-0.01,151)
Y = np.array([0.])
Z = ZFix(X,Y)
_,U,V,C = totalFlow(Z)
FL = flowNumber(Z)
fig, axs = plt.subplots(2,1,figsize=(8,6),sharex=True,\
gridspec_kw={"height_ratios":[3,1],"hspace":0})
ax = axs[0]
li = ax.plot(X,C[0],lw=5)
ax.set(xlim=[0,R],ylim=[0,1.0E-5],ylabel=r"$\bf{|U|}$ [m/s]")
ax = axs[1]
li = ax.plot(X,np.log10(FL[0]),lw=3,c="gray")
ax.set(xlabel=r"$\bf{X}$ [m]",ylabel=r"$\bf{\log(\mathcal{F}_L)}$")
ax.yaxis.set_tick_params(which="both",labelright=True,right=True,labelleft=False,left=False)
plt.show()
###Output
_____no_output_____
###Markdown
___ Implementation to find worst cases
###Code
%reset -f
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from os import system
from matplotlib.gridspec import GridSpec
''' GLOBAL CONSTANTS '''
PI = 3.141592
THETA = 0.35
###Output
_____no_output_____
###Markdown
\begin{equation}\begin{array}{rl} \mathcal{F}_L =& \dfrac{4 K I H r}{Q_{\rm in}(1+f)}\\ u_{c} =& \dfrac{-KI\mathcal{F}_L}{\theta \left(\mathcal{F}_L+1\right)}\\ \tau =& \dfrac{r}{|u_{c}|}\\ C_{\tau,{\rm decay}}=& C_0 \exp{\left(-\lambda \tau \right)}\\ C_{\tau,{\rm dilut}} =& C_{\rm in} \left( \dfrac{Q_{\rm in}}{|u_c| \Delta y \Delta z \theta} \right)\\ C_{\tau,{\rm both}} =& \dfrac{C_{\rm in}Q_{\rm in}}{|u_c| \Delta y H \theta} \exp{\left(-\lambda\dfrac{r}{|u_c|}\right)}\end{array}\end{equation}
###Code
def flowNumber():
return (4.0*K*I*H*r) / (Qin*(1+f))
def uChar():
return -(K*I*flowNumber())/(THETA*(flowNumber() + 1))
def tChar():
return -r/uChar()
def cDecay():
return C0 * np.exp(-decayRate * tChar())
def cDilut():
return (C0 * Qin) / (-uChar() * delY * delZ * THETA)
def cBoth():
return (C0 * Qin) / (-uChar() * delY * delZ * THETA) * np.exp(-decayRate * tChar())
def findSweet():
deltaConc = np.abs(cBoth() - np.max(cBoth()))
return np.argmin(deltaConc)
###Output
_____no_output_____
###Markdown
**Some constant Parameters**
###Code
K = 10**-2
Qin = 0.24/86400
f = 10
H = 20
r = 40
I = 0.001
C0 = 1.0
decayRate = 3.5353E-06
delY,delZ = 1.35,H
print("Nondim Flow = {:.2E}".format(flowNumber()))
print("Charac. Vel = {:.2E} m/s".format(uChar()))
print("Charac. time = {:.2E} s".format(tChar()))
print("Rel concenc. due decay = {:.2E}".format(cDecay()))
print("Rel conc. due dilution = {:.2E}".format(cDilut()))
print("Rel conc. due both eff = {:.2E}".format(cBoth()))
I = 10**np.linspace(-5,0,num=100)
c1 = cDecay()
c2 = cDilut()
c3 = cBoth()
i = findSweet()
#worstC = np.mean([c1[i],c2[i]])
worstC = c3[i]
worstI = I[i]
print(I[i])
# create objects
fig, axs = plt.subplots(2,2,sharex=True, sharey=False,\
figsize=(12,8),gridspec_kw={"height_ratios":[1,3],"hspace":0.04,"wspace":0.35})
bbox = dict(boxstyle='round', facecolor='mintcream', alpha=0.90)
arrowprops = dict(
arrowstyle="->",
connectionstyle="angle,angleA=90,angleB=10,rad=5")
annotation = \
r"$\bf{-\log(C/C_0)} = $" + "{:.1f}".format(-np.log10(worstC)) + \
"\n@" + r" $\bf{I} = $" + "{:.1E}".format(worstI)
information = \
r"$\bf{K}$" + " = {:.1E} m/s".format(K) + "\n"\
r"$\bf{H}$" + " = {:.1f} m".format(H) + "\n"\
r"$\bf{r}$" + " = {:.1f} m".format(r) + "\n"\
r"$\bf{Q_{in}}$" + " = {:.2f} m³/d".format(Qin*86400) + "\n"\
r"$\bf{f}$" + " = {:.1f}".format(f) + "\n"\
r"$\bf{\lambda}$" + " = {:.2E} 1/s".format(decayRate)
# Ax1 - Relative concentration
ax = axs[1,0]
ax.plot(I,c1,label="Due decay",lw=3,ls="dashed",alpha=0.8)
ax.plot(I,c2,label="Due dilution",lw=3,ls="dashed",alpha=0.8)
#ax.plot(I,np.minimum(c1,c2),label="Overall effect",lw=3,c='k',alpha=0.9)
ax.plot(I,c3,label="Overall effect",lw=3,c='k',alpha=0.9)
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_ylim(1.0E-10,1)
ax.set_xlim(1.0E-4,1.0E-1)
ax.set_xlabel("Water table gradient\n$I$ [m/m]")
ax.set_ylabel("Relative Concentration\n$C/C_0$ [-]")
ax.legend(loc="lower left",shadow=True)
ax.annotate(annotation,(worstI,worstC),
xytext=(0.05,0.85), textcoords='axes fraction',
bbox=bbox, arrowprops=arrowprops)
ax.text(0.65,0.05,information,bbox=bbox,transform=ax.transAxes)
ax.axvline(x=I[i], lw = 1, ls = "dashed", c = "red")
####################################
# Ax2 - log-removals
ax = axs[1,1]
ax.plot(I,-np.log10(c1),label="Due decay",lw=3,ls="dashed",alpha=0.8)
ax.plot(I,-np.log10(c2),label="Due dilution",lw=3,ls="dashed",alpha=0.8)
#ax.plot(I,-np.log10(np.minimum(c1,c2)),label="Overall effect",lw=3,c='k',alpha=0.9)
ax.plot(I,-np.log10(c3),label="Overall effect",lw=3,c='k',alpha=0.9)
ax.set_xscale("log")
ax.set_ylim(0,10)
ax.set_xlim(1.0E-4,1.0E-1)
ax.set_xlabel("Water table gradient\n$I$ [m/m]")
ax.set_ylabel("log-reductions\n$-\log(C/C_0)$ [-]")
ax.legend(loc="lower right",shadow=True)
ax.annotate(annotation,(worstI,-np.log10(worstC)),
xytext=(0.05,0.10), textcoords='axes fraction',
bbox=bbox, arrowprops=arrowprops)
ax.text(0.65,0.70,information,bbox=bbox,transform=ax.transAxes)
ax.axvline(x=I[i], lw = 1, ls = "dashed", c = "red")
####################################
#Ax3 - Flow number
ax = axs[0,1]
ax.plot(I,flowNumber(),label="flowNumber",lw=3,c="gray")
ax.axhline(y=1.0)
ax.set_xscale("log")
ax.set_yscale("log")
ax.xaxis.set_tick_params(which="both",labeltop='on',top=True,bottom=False)
ax.set_ylabel("Nondim. flow number\n$\mathcal{F}_L$ [-]")
ax.axvline(x=I[i], lw = 1, ls = "dashed", c = "red")
#Ax4 - Flow number
ax = axs[0,0]
ax.plot(I,flowNumber(),label="flowNumber",lw=3,c="gray")
ax.axhline(y=1.0)
ax.set_xscale("log")
ax.set_yscale("log")
ax.xaxis.set_tick_params(which="both",labeltop='on',top=True,bottom=False)
ax.set_ylabel("Nondim. flow number\n$\mathcal{F}_L$ [-]")
ax.axvline(x=I[i], lw = 1, ls = "dashed", c = "red")
plt.show()
###Output
_____no_output_____
###Markdown
____ Find the worst case >> Geometric parameters $H$ and $r$
###Code
from drawStuff import *
K = 10**-2
Qin = 0.24/86400
f = 10
C0 = 1.0
decayRate = 3.5353E-06
Harray = np.array([2.,5.,10.,20.,50.])
rarray = np.array([5.,10.,40.,100.])
Iarray = 10**np.linspace(-5,0,num=100)
I = Iarray  # the helper functions above read the gradient sweep from the global I
Ci = np.zeros([len(rarray),len(Harray)])
Ii = np.zeros([len(rarray),len(Harray)])
FLi = np.zeros([len(rarray),len(Harray)])
for hi,H in enumerate(Harray):
for ri,r in enumerate(rarray):
i = findSweet()
worstC = -np.log10(cBoth()[i])
worstGradient = Iarray[i]
worstFlowNumber = flowNumber()[i]
Ci[ri,hi] = worstC
Ii[ri,hi] = worstGradient
FLi[ri,hi] = worstFlowNumber
myLabels={"Title": { 0: r"$\bf{-\log (C_{\tau}/C_0)}$",
1: r"$\bf{I}$ (%)",
2: r"$\log(\mathcal{F}_L)$"},
"Y": "Aquifer thickness\n$\\bf{H}$ (m)",
"X": "Setback distance\n$\\bf{r}$ (m)"}
threeHeatplots(data={"I":Ii.T,"C":Ci.T,"FL":FLi.T},\
xlabel=Harray,ylabel=rarray,myLabels=myLabels)
###Output
_____no_output_____
###Markdown
>>Well parameters
###Code
K = 10**-2
H = 20
r = 40
C0 = 1.0
decayRate = 3.5353E-06
Qin_array = np.array([0.24,1.0,10.0,100.])/86400.
f_array = np.array([1,10.,100.,1000.,10000.])
Iarray = 10**np.linspace(-5,0,num=100)
I = Iarray  # the helper functions above read the gradient sweep from the global I
Ci = np.zeros([len(Qin_array),len(f_array)])
Ii = np.zeros([len(Qin_array),len(f_array)])
FLi = np.zeros([len(Qin_array),len(f_array)])
for fi,f in enumerate(f_array):
for qi,Qin in enumerate(Qin_array):
i = findSweet()
worstC = -np.log10(cBoth()[i])
worstGradient = Iarray[i]
worstFlowNumber = flowNumber()[i]
Ci[qi,fi] = worstC
Ii[qi,fi] = worstGradient
FLi[qi,fi] = worstFlowNumber
myLabels={"Title": { 0: r"$\bf{-\log (C_{\tau}/C_0)}$",
1: r"$\bf{I}$ (%)",
2: r"$\log(\mathcal{F}_L)$"},
"Y": "Extraction to injection ratio\n$\\bf{f}$ (-)",
"X": "Injection flow rate\n$\\bf{Q_{in}}$ (m³/d)"}
threeHeatplots(data={"I":Ii.T,"C":Ci.T,"FL":FLi.T},\
xlabel=f_array,ylabel=np.round(Qin_array*86400,decimals=2),myLabels=myLabels)
###Output
_____no_output_____
###Markdown
Hydraulic conductivity
###Code
Qin = 0.24/86400
f = 10
C0 = 1.0
decayRate = 3.5353E-06
Karray = 10.**np.array([-1.,-2.,-3.,-4.,-5.])
rarray = np.array([5.,10.,40.,100.])
Iarray = 10**np.linspace(-5,0,num=100)
I = Iarray  # the helper functions above read the gradient sweep from the global I
Ci = np.zeros([len(rarray),len(Karray)])
Ii = np.zeros([len(rarray),len(Karray)])
FLi = np.zeros([len(rarray),len(Karray)])
for ki,K in enumerate(Karray):
for ri,r in enumerate(rarray):
i = findSweet()
worstC = -np.log10(cBoth()[i])
worstGradient = Iarray[i]
worstFlowNumber = flowNumber()[i]
Ci[ri,ki] = worstC
Ii[ri,ki] = worstGradient
FLi[ri,ki] = worstFlowNumber
myLabels={"Title": { 0: r"$\bf{-\log (C_{\tau}/C_0)}$",
1: r"$\bf{I}$ (%)",
2: r"$\log(\mathcal{F}_L)$"},
"Y": "Hydraulic conductivity\n$\\bf{K}$ (m/s)",
"X": "Setback distance\n$\\bf{r}$ (m)"}
threeHeatplots(data={"I":Ii.T,"C":Ci.T,"FL":FLi.T},\
xlabel=Karray,ylabel=rarray,myLabels=myLabels)
###Output
_____no_output_____
###Markdown
>> EXPERIMENTAL r and Qin
###Code
K = 10**-2
H = 20
f = 10
C0 = 1.0
decayRate = 3.5353E-06
Qin_array = np.array([0.24,1.0,10.0,100.])/86400.
rarray = np.array([5.,10.,40.,100.])
Iarray = 10**np.linspace(-5,0,num=100)
I = Iarray  # the helper functions above read the gradient sweep from the global I
Ci = np.zeros([len(rarray),len(Qin_array)])
Ii = np.zeros([len(rarray),len(Qin_array)])
FLi = np.zeros([len(rarray),len(Qin_array)])
for qi,Qin in enumerate(Qin_array):
for ri,r in enumerate(rarray):
i = findSweet()
worstC = -np.log10(cBoth()[i])
worstGradient = Iarray[i]
worstFlowNumber = flowNumber()[i]
Ci[ri,qi] = worstC
Ii[ri,qi] = worstGradient
FLi[ri,qi] = worstFlowNumber
myLabels={"Title": { 0: r"$\bf{-\log (C_{\tau}/C_0)}$",
1: r"$\bf{I}$ (%)",
2: r"$\log(\mathcal{F}_L)$"},
"Y": "Injection flow rate\n$\\bf{Q_{in}}$ (m³/d)",
"X": "Setback distance\n$\\bf{r}$ (m)"}
threeHeatplots(data={"I":Ii.T,"C":Ci.T,"FL":FLi.T},\
ylabel=rarray,xlabel=np.round(Qin_array*86400,decimals=2),myLabels=myLabels)
###Output
_____no_output_____ |
src/scraper/scratch.ipynb | ###Markdown
Use this philosophy to scrape the data tables. We probably don't need to build a separate stats table.
###Code
stats_dict['2012']['Week 2']['Game 6'].update( {'prr': {} } )
links_dict['2012']['Week 2']['Game 6'].update( {'prr': {} } )
links_dict['2012']['Week 2']['Game 6']
imp.reload( hlp )
test_d = hlp.opn_link_by_key( driver=webScrpr, d=links_dict, key='link' )
test_d.update( {'table': { 'player': 'stat' } } )
range( 1, 10 )[ :2 ]
test2 = hlp.add_to_nested_key( links_dict, test_d )
test2['2011']['Week 1']['Game 1']
imp.reload( hlp )
imp.reload( cfg_glb )
imp.reload( scrp )
hlp.check_for_tbl( driver=webScrpr, tbl='gameInfo' )
webScrpr.check_for_el_by_xp( cfg_glb.TBL_XPS[ 'st_kicking' ] )
cfg_glb.TBL_XPS[ 'gameInfo' ]
hlp.make_tbl_dict( driver=webScrpr, xp_k=xp_ps_off, xp_v=xp_ps_ )
webScrpr.text_by_xpath( xp_ps_off )
links_dict
test_stats = stats_dict
test_links = links_dict
# test_stats['2011']['Week 1'][ 'Game '+ str( 0+1 ) ].update( {'link': 'google.com'} )
webScrpr.opn_webPg('https://www.pro-football-reference.com/boxscores/201209160car.htm')
imp.reload( scrp )
webScrpr = scrp.WebScraper()
webScrpr.opn_webPg('https://www.pro-football-reference.com/boxscores/201209160car.htm')
webScrpr.quit()
###Output
_____no_output_____
###Markdown
%%%%% SCRATCH %%%%%%%
###Code
driver = webdriver.Chrome()
# use nested for loop to loop through games
driver.get( 'https://www.pro-football-reference.com/years/{}/week_{}.htm'.format( 2014, 1 ) )
xp_games = driver.find_elements_by_xpath('//*[@id="content"]/div[4]//a[contains(@href,"boxscores")]')
# links_games[0].send_keys( Keys.RETURN )
lst_games = []
for xp in xp_games[ :3 ]:
lst_games.append( xp.get_attribute( 'href' ) )
print( lst_games )
i=0
for game in lst_games:
driver.get( lst_games[ i ] )
# print( lst_games[ i ], ': DONE' )
i+=1
time.sleep(5)
driver.quit()
import os
os.getcwd()
cfg.SEASON_END
# hlp.
# # imp.reload( scrp )
# weeks_in_season = webScrpr.text_by_xpath( xp_weekLinks )
# weeks_in_season
# # imp.reload( scrp )
# bscrs = webScrpr.htmls_by_xpath( xp_boxscores )
# bscrs
l = ['dog', 'cat', 'bird']
for k,v in enumerate( l ):
print( k+1, v )
# imp.reload( scrp )
webScrpr.opn_webPg( webPg=cfg_glb.URL_MASTER )
test_d1 = stats_dict
test_d2 = hlp.scrape_data( d=test_d1, driver=webScrpr, xp_k=xp_kGI, xp_v=xp_vGI )
test_d2
hlp.make_tbl_dict( driver=webScrpr, xp_k=xp_kGI, xp_v=xp_vGI )
any(key.startswith('Game') for key in stats_dict[ 2011 ][ 'Week 1' ])
for k,v in stats_dict.items():
if "Game" in k:
print( k )
else:
print('no')
xp_ps_off = '//*[@id="player_offense"]/tbody//tr//th[@data-stat="player"]'
xp_ps_def = '//*[@id="player_defense"]/tbody//tr//th[@data-stat="player"]'
xp_ps_st = '//*[@id="returns" or @id="kicking"]/tbody/tr//th[@data-stat="player"]//a'
webScrpr.text_by_xpath( xp_ps_st )
efg=defaultdict(dict)
efg
weeks_lst_temp = webScrpr.htmls_by_xpath( '//*[@id="div_week_games"]//a' )
webScrpr.opn_webPg( weeks_lst_temp[ 0 ] )
d_temp = defaultdict( dict )
d_temp[ '2011' ] = { 'Week 1': [ 'www.google.com', 'espn.com' ] }
d_temp[ '2011' ][ 'Week 2'] = [ 'www.instagram.com', 'facebook.com' ]
d_temp
x = temp_sp[0].split('/')
[ i for i in x if i in seasons_str ][0]
###Output
_____no_output_____
###Markdown
Dictionary Manipulation
###Code
def build_stats_dict( name_dict, key ):
try:
name_dict[ key ]
except:
name_dict[ key ] = {}
return name_dict
stats_dict={}
for key in stats_dict:
try:
stats_dict[ key ]
except:
stats_dict[ key ]={}
stats_dict
stats_dict['2011']= {}
stats_dict['2011']['Week 1'] = { 'Game 1': { 'PRR': {} } }
def fun( **lvl_1 ):
for k,v in lvl_1.items():
print( '{}'.format( type( v ) ) )
# yield v
test = fun(season=[1,2,3], weeks=[4,5,6])
print( len( list ( test ) ) )
def stats_shell_dict( d, **lvl1_keys ):
if isinstance( d, dict ):
for k,v in lvl1_keys.items():
if isinstance( v, list ):
for key in v:
try:
d[ key ]
except:
d[ key ] = {}
else:
try:
d[ v ]
except:
d[ v ] = {}
return d
else:
        d={}
        return stats_shell_dict( d, **lvl1_keys )
d[2019]['Divisional'].keys()
WEEKS_LBL = {
0: 'Week 1',
1: 'Week 2',
2: 'Week 3',
3: 'Week 4',
4: 'Week 5',
5: 'Week 6',
6: 'Week 7',
7: 'Week 8',
8: 'Week 9',
9: 'Week 10',
10: 'Week 11',
11: 'Week 12',
12: 'Week 13',
13: 'Week 14',
14: 'Week 15',
15: 'Week 16',
16: 'Week 17',
17: 'Wild Card',
18: 'Divisional',
19: 'Conf Champ',
20: 'Super Bowl'
}
from collections import defaultdict
def make_stats_shell( d=defaultdict( dict ), **kwargs ):
for k,v in kwargs.items():
if isinstance( v, list ):
for el in v:
d[ el ]
temp_lst.append( el )
return d
d={}
def build_stats_shell( d, **kwargs ):
for k,v in d.items():
# print( 'starting')
if isinstance( v, dict ):
build_stats_shell( v )
return d
build_stats_shell( d, seasons=[2011, 2012, 2013] )
d=defaultdict( dict )
for k,v in d.items():
if isinstance( v, dict ):
print('yes')
else:
print('no')
d.items()
for k,v in d.items():
for week in [ 'Week 1', 'Week 2' ]:
v.setdefault( week, {} )
d.items()
d = defaultdict( dict )
d['dog']['cat']['bird']
import collections
dd
imp.reload( hlp )
test_s = hlp.add_to_nested_key( source_d=stats_dict, info_d=links_dict )
test_s[ '2012' ][ 'Week 2' ]
d = defaultdict( dict )
d[ '2011' ]
d[ '2011' ].setdefault( 'Week 1', {} )
d[ '2011' ][ 'Week 1' ].setdefault('Game 1', { 'link': 'www.google.com' } )
d[ '2011' ][ 'Week 1' ].setdefault('Game 2', { 'link': 'www.espn.com' } )
d
###Output
_____no_output_____ |
libFM.ipynb | ###Markdown
Reading data
###Code
behaviors = pd.read_table('behaviors.tsv', usecols=['user_id', 'history', 'impressions'])
behaviors.info()
behaviors = behaviors.dropna()
behaviors.info()
def extract_target(news_impression, splitter='-') -> tuple:
    """
    Returns (news, target)
    """
    news, target = news_impression.split(splitter)
    return news, int(target)
def extract_targets(news_impressions, use_news=False, use_target=False) -> dict:
impression_dict = dict(map(extract_target, news_impressions.split()))
if use_news:
return list(impression_dict.keys())
if use_target:
return list(impression_dict.values())
return impression_dict
behaviors['history'] = behaviors.history.apply(str.split).to_list()
behaviors['impression_target'] = behaviors.impressions.apply(extract_targets)
behaviors['impression_news'] = behaviors.impressions.apply(lambda x: extract_targets(x, use_news=True))
grouped_users = behaviors.groupby('user_id')
tags = behaviors['history'].to_list() + behaviors['impression_news'].to_list()
tags = set([tag for user_tags in tags for tag in user_tags])
news_to_id = dict(zip(tags, range(len(tags))))
id_to_news = dict(enumerate(tags))
def merge_dicts(dicts):
return reduce(lambda x, y: {**x, **y}, dicts, {})
def merge_history_tags(tags):
dicts = list(map(lambda tag: dict(zip(tag, [1] * len(tag))), tags))
return merge_dicts(dicts)
user_targets = grouped_users['impression_target'].apply(list).apply(merge_dicts)
user_to_id = dict(zip(user_targets.index, range(len(user_targets))))
id_to_user = dict(enumerate(user_targets.index))
history_targets = grouped_users['history'].apply(list).apply(merge_history_tags)
history_targets.head()
users = pd.merge(user_targets, history_targets, on='user_id')
users['target'] = users[['history', 'impression_target']].apply(lambda x: {**x[0], **x[1]}, axis=1)
users = users.reset_index()
users.head()
def get_user_news_sparse_matrix(liked_target=1):
rows, cols = [], []
for _, (user, targets) in users[['user_id', 'history']].iterrows():
user_id = user_to_id[user]
positives = list(map(lambda kv: news_to_id[kv[0]], filter(lambda kv: kv[1] == liked_target, targets.items())))
rows += [user_id] * len(positives)
cols += positives
return csr_matrix(([liked_target] * len(rows), (rows, cols)), shape=(len(user_to_id), len(news_to_id)))
user_news = get_user_news_sparse_matrix()
news_user = user_news.T
user_news
model = implicit.als.AlternatingLeastSquares(factors=512,
regularization=0,
iterations=40,
calculate_training_loss=True)
model.fit(item_users=news_user)
from sklearn.metrics import ndcg_score, roc_auc_score
from metrics_eval import mrr, utils
def user_auc(user):
    user_id = id_to_user[user]
    targets = users[users['user_id'] == user_id]['impression_target'].iloc[0]
    news = list(map(news_to_id.__getitem__, targets.keys()))
    y_true = list(targets.values())
    # Score the impressed items from the learned user/item factors
    user_factors = model.user_factors[user]
    item_factors = model.item_factors[news]
    y_score = user_factors @ item_factors.T
    return roc_auc_score(y_true, y_score)
def user_ndcg(user, k=5):
user_id = id_to_user[user]
targets = users[users['user_id'] == user_id]['impression_target'].iloc[0]
news = list(map(news_to_id.__getitem__, targets.keys()))
y_true = list(targets.values())
y_score = model.user_factors[user] @ model.item_factors[news].T
return ndcg_score([y_true], [y_score], k=k)
def user_mrr(user):
user_id = id_to_user[user]
targets = users[users['user_id'] == user_id]['impression_target'].iloc[0]
news = list(map(news_to_id.__getitem__, targets.keys()))
y_true = np.array(list(targets.values()))
y_score = model.user_factors[user] @ model.item_factors[news].T
return mrr(y_true, y_score)
from tqdm import trange
def calc_metric(metric, k=None):
metric_scores = []
for u in trange(users.shape[0]):
if k is None:
metric_score = metric(u)
else:
metric_score = metric(u, k)
metric_scores.append(metric_score)
return np.mean(metric_scores)
def mrr_score():
return calc_metric(metric=user_mrr)
def auc():
return calc_metric(metric=user_auc)
def ndcg(k):
return calc_metric(metric=user_ndcg, k=k)
ndcg(k=5)
ndcg(k=10)
auc()
mrr_score()
###Output
100%|██████████| 48593/48593 [04:12<00:00, 192.19it/s]
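###Markdown
As a final step, a small sketch (reusing the user and item factor matrices trained above; the user index and `top_n` are arbitrary choices) turns the raw dot-product scores into a top-N list of news items the user has not seen yet:
###Code
def top_n_news_for_user(user_idx, top_n=10):
    # Score every news item for this user from the learned factors
    scores = model.user_factors[user_idx] @ model.item_factors.T
    # Skip items already present in the user's history
    seen = set(user_news[user_idx].indices)
    ranked = [i for i in np.argsort(-scores) if i not in seen]
    return [id_to_news[i] for i in ranked[:top_n]]

top_n_news_for_user(0)
###Output
_____no_output_____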
|
notebooks/Pandas fundamentals.ipynb | ###Markdown
Pandas fundamentals Why use pandas?- Intuitive data format- Easy data transformations- Powerful data analysis Basic concepts- np.array / np.ndarray - single/multi dimensional array numpy_array = np.random.rand(3) - pd.Series - single column array- pd.DataFrame - multi column data table dataframe = pd.DataFrame(np.random.rand(3,2))
###Code
import numpy as np
my_array = np.random.rand(3)
my_array
import pandas as pd
series = pd.Series(my_array)
series
series.index = ["one","two","three"]
series
series["one"]
series[0]
array_2d = np.random.rand(2,2)
array_2d
dataframe = pd.DataFrame(array_2d)
dataframe
dataframe.columns = ["one","two"]
dataframe
dataframe["one"]
dataframe.columns
###Output
_____no_output_____
###Markdown
Pandas data sources- Text Files- Binary Files- Relational databases
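For the relational-database case, a minimal sketch (assuming a local SQLite file `example.db` containing a table `my_table`; both names are placeholders rather than files shipped with this project) would look like this:
###Code
import sqlite3
import pandas as pd

# Read an entire table from a relational database into a DataFrame
with sqlite3.connect("example.db") as conn:
    df_from_sql = pd.read_sql_query("SELECT * FROM my_table", conn)
df_from_sql.head()
###Output
_____no_output_____
###Markdown
The rest of this section works through the text-file (CSV) case.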
###Code
# read csv file (text)
import pandas as pd
import os
CSV_PATH = os.path.join(os.path.pardir,'data','raw','collection-master','artwork_data.csv')
df = pd.read_csv(CSV_PATH,nrows=5)
df.info()
df.head()
df = pd.read_csv(CSV_PATH,nrows=5,index_col="id")
df = pd.read_csv(CSV_PATH,nrows=5,index_col="id",usecols=["id","artist"])
df.head()
USE_COLS = ["id","artist","title","medium","year","acquisitionYear","height","width","units"]
df = pd.read_csv(CSV_PATH,usecols=USE_COLS,index_col="id")
df.head()
import pickle
df.to_pickle(os.path.join(os.path.pardir,"data","processed","data_frame.pickle"))
###Output
_____no_output_____
###Markdown
Working with json file
###Code
json_collection = ({"name":'James',"age":12},{"name":"Mark","age":18})
df = pd.DataFrame.from_records(json_collection)
df.head()
records = [("Mark",12),("James",21)]
df = pd.DataFrame.from_records(records)
df.head()
df = pd.DataFrame.from_records(records,columns=["Name","Age"])
df.head()
KEYS_TO_USE = ['id','all_artists','title','medium','acquisitionYear','height','width']
def get_record_from_file(file_path,key_to_use):
with open(file_path) as artwork_file:
content = json.load(artwork_file)
artwork_data = []
for field in key_to_use:
artwork_data.append(content[field])
return tuple(artwork_data)
ARTWORK_TEST_FILE = os.path.join(os.path.pardir,
"data","raw","collection-master","artworks","a","000","a00001-1035.json")
ARTWORK_TEST_FILE
import json
sample_record = get_record_from_file(ARTWORK_TEST_FILE,KEYS_TO_USE)
print(sample_record)
JSON_ROOT = os.path.join(os.path.pardir,"data","raw","collection-master","artworks","a","000")
print(JSON_ROOT)
def get_artworks_from_json(key_to_use):
JSON_ROOT = os.path.join(os.path.pardir,"data","raw","collection-master","artworks","a","000")
artworks =[]
for root,_,files in os.walk(JSON_ROOT):
for file in files:
if file.endswith('json'):
file = os.path.join(root,file)
#print(file)
artworks.append(get_record_from_file(file,key_to_use))
break
df = pd.DataFrame.from_records(artworks,columns=KEYS_TO_USE)
return df
df = get_artworks_from_json(KEYS_TO_USE)
df.head()
###Output
_____no_output_____
###Markdown
Indexing and Filtering
###Code
# The Basics
# To get one column's values use df['artist']
# To get more than one column, pass a list: df[['artist','title']]
# Another way to get a column is df.artist, but avoid it: it breaks for column names that clash with DataFrame attributes
#read from pickle
df = pd.read_pickle(os.path.join(os.path.pardir,"data","processed","data_frame.pickle"))
df['artist']
df[['artist','title']]
df.artist
## Get the distinct values from column
df.head()
artists = df['artist']
pd.unique(artists)
len(pd.unique(artists))
# Data filtering
filter = df['artist'] == 'Bacon, Francis'
filter.value_counts()
artist_counts = df['artist'].value_counts()
artist_counts['Bacon, Francis']
# Index Done the right way
# consistent way of doing loc,iloc (labels vs positions)
# df.loc[row_indexer,column_indexer]
# df.iloc[row_index,column_index]
df.loc[1053,'artist']
df.loc[1053,'units':]
df.iloc[0,0]
df.iloc[0:2,0:2]
df['height'] * df['width']
df['width'].sort_values().head()
df['width'].sort_values().tail()
# converting data types
pd.to_numeric(df['width'])
pd.to_numeric(df['width'],errors='coerce')
df.loc[:,'width'] = pd.to_numeric(df['width'],errors='coerce')
df['width']
df.loc[:,'height'] = pd.to_numeric(df['height'],errors='coerce')
df['height'] * df['width']
area = df['width'] * df['height']
# adding new column
df = df.assign(area=area)
df.head()
df.area.max()
df.area.idxmax()
df.loc[df.area.idxmax(),'area']
df.loc[df.area.idxmax(),:]
#Dos and Don'ts
#Always use loc and iloc
###Output
_____no_output_____
###Markdown
Operation on Groups
###Code
# Motivation
#Min, Max. Aggs, Missing values, transformation, dropping groups, Filtering
##AGG=>TRANS=>FILTERING
small_groups = df.iloc[49980:50019].copy()
grouped = small_groups.groupby('artist')
for grp,groupdef in grouped:
print(grp,groupdef['acquisitionYear'].min())
for grp,groupdef in grouped:
print(grp,groupdef['title'].value_counts().index[0])
## Built-in methods groups
def fill_values(series):
#find the most occurence values to replace missing values
values_counted = series.value_counts()
if (values_counted.empty):
return series
most_frequent = values_counted.index[0]
new_fille_series = series.fillna(most_frequent)
return new_fille_series
def transform_df(source_df):
group_dfs =[]
for name, group_df in source_df.groupby(source_df['artist']):
filled_df = group_df.copy()
filled_df.loc[:,'medium'] = fill_values(group_df['medium'])
group_dfs.append(filled_df)
new_df = pd.concat(group_dfs)
return new_df
filled_df = transform_df(small_groups)
filled_df
###Output
_____no_output_____
###Markdown
Filter and grouping by in-built method
###Code
# group by
group_by_year = small_groups.groupby('title')['medium'].transform(fill_values)
filter_more_than_one_title = lambda x: len(x.index) > 1
filter_rows = small_groups.groupby('title').filter(filter_more_than_one_title)
group_by_min_year = small_groups.groupby('title').min()
group_by_min_year = small_groups.groupby('title')['area'].min()
##group_by_min_year = small_groups.groupby('title').agg(np.min)
group_by_min_year
###Output
_____no_output_____
###Markdown
Outputting data (Excel)
###Code
df.head()
small_df = df.iloc[49980:50019,:].copy()
small_df.to_excel(os.path.join(os.path.pardir,"data","processed","saveasexcel.xls"))
small_df.to_excel(os.path.join(os.path.pardir,"data","processed","no_id_saveasexcel.xls"),index=False)
small_df.to_excel(os.path.join(os.path.pardir,"data","processed","selected_cols_saveasexcel.xls")
,index=False,columns=["artist","title"])
excelsheetFile =os.path.join(os.path.pardir,"data","processed","sheets.xlsx")
writer = pd.ExcelWriter(excelsheetFile,engine='xlsxwriter')
small_df.to_excel(writer,sheet_name='first',index=False)
small_df.to_excel(writer,sheet_name='second',index=False)
writer.save()
excelColorsheetFile =os.path.join(os.path.pardir,"data","processed","color.xlsx")
###Output
_____no_output_____
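###Markdown
The `excelColorsheetFile` path defined above is left unused here; a possible way to fill that gap (a sketch based on xlsxwriter's conditional formatting, where the column position, threshold and colour are assumptions chosen only for illustration) is:
###Code
# Highlight large 'area' values in a colour-formatted workbook
writer = pd.ExcelWriter(excelColorsheetFile, engine='xlsxwriter')
small_df.to_excel(writer, sheet_name='colored', index=False)
workbook = writer.book
worksheet = writer.sheets['colored']
highlight = workbook.add_format({'bg_color': '#FFC7CE'})
# Assumes the 'area' column lands in column I; range, threshold and colour are arbitrary
worksheet.conditional_format('I2:I40', {'type': 'cell', 'criteria': '>',
                                        'value': 10000, 'format': highlight})
writer.save()
###Output
_____no_output_____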
###Markdown
Outputting data to SQLite
###Code
import sqlite3
with sqlite3.connect("artist.db") as conn:
small_df.to_sql("my_table",conn)
###Output
_____no_output_____
###Markdown
Outputting data to JSON
###Code
jsonFile =os.path.join(os.path.pardir,"data","processed","json.json")
small_df.to_json(jsonFile)
small_df.to_json(jsonFile,orient='table')
###Output
_____no_output_____
###Markdown
Plotting
###Code
small_df.plot()
accus_year = df.groupby('acquisitionYear')
accus_year.head()
accus_year.size().plot()
small_df.groupby('acquisitionYear').size().plot()
plot = plt.figure()
subplot = plot.add_subplot(1,1,1)
accus_year.size().plot(ax=subplot)
plot.show()
plot = plt.subplot()
plot.plot( accus_year.size())
plot.set_xlabel("year")
plot.tick_params(axis='x', labelrotation=45)
plot.set_ylabel("title count")
plot.locator_params(nbins=40,axis='x')
###Output
_____no_output_____ |
linear_evaluation/Linear_Evaluation_10_Epochs.ipynb | ###Markdown
Imports and Setups
###Code
!git clone https://github.com/ayulockin/SwAV-TF.git
import sys
sys.path.append('SwAV-TF/utils')
import multicrop_dataset
import architecture
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Sequential, Model
import matplotlib.pyplot as plt
import numpy as np
import random
import time
import os
from tqdm import tqdm
from imutils import paths
tf.random.set_seed(666)
np.random.seed(666)
tfds.disable_progress_bar()
###Output
_____no_output_____
###Markdown
W&B - Experiment Tracking
###Code
%%capture
!pip install wandb
import wandb
from wandb.keras import WandbCallback
wandb.login()
###Output
_____no_output_____
###Markdown
Restoring model weights from GCS Bucket - Model trained for 10 epochs
###Code
from tensorflow.keras.utils import get_file
feature_backbone_urlpath = "https://storage.googleapis.com/swav-tf/feature_backbone_10_epochs.h5"
prototype_urlpath = "https://storage.googleapis.com/swav-tf/projection_prototype_10_epochs.h5"
feature_backbone_weights = get_file('swav_feature_weights', feature_backbone_urlpath)
prototype_weights = get_file('swav_prototype_projection_weights', prototype_urlpath)
###Output
Downloading data from https://storage.googleapis.com/swav-tf/feature_backbone_10_epochs.h5
94584832/94583160 [==============================] - 1s 0us/step
Downloading data from https://storage.googleapis.com/swav-tf/projection_prototype_10_epochs.h5
17907712/17900192 [==============================] - 0s 0us/step
###Markdown
Dataset gathering and preparation
###Code
# Gather Flowers dataset
train_ds, validation_ds = tfds.load(
"tf_flowers",
split=["train[:85%]", "train[85%:]"],
as_supervised=True
)
AUTO = tf.data.experimental.AUTOTUNE
BATCH_SIZE = 256
@tf.function
def scale_resize_image(image, label):
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, (224, 224)) # Resizing to highest resolution used while training swav
return (image, label)
training_ds = (
train_ds
.map(scale_resize_image, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
testing_ds = (
validation_ds
.map(scale_resize_image, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
###Output
WARNING:absl:Dataset tf_flowers is hosted on GCS. It will automatically be downloaded to your
local data directory. If you'd instead prefer to read directly from our public
GCS bucket (recommended if you're running on GCP), you can instead set
data_dir=gs://tfds-data/datasets.
###Markdown
Get SwAV architecture and Build Linear Model
###Code
def get_linear_classifier(alpha=1e-6):
# input placeholder
inputs = Input(shape=(224, 224, 3))
# get swav baseline model architecture
feature_backbone = architecture.get_resnet_backbone()
# load trained weights
feature_backbone.load_weights(feature_backbone_weights)
feature_backbone.trainable = False
x = feature_backbone(inputs, training=False)
outputs = Dense(5, activation="softmax",
kernel_regularizer=tf.keras.regularizers.L2(alpha))(x)
linear_model = Model(inputs, outputs)
return linear_model
tf.keras.backend.clear_session()
model = get_linear_classifier()
model.summary()
###Output
Model: "functional_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
functional_1 (Functional) (None, 2048) 23587712
_________________________________________________________________
dense (Dense) (None, 5) 10245
=================================================================
Total params: 23,597,957
Trainable params: 10,245
Non-trainable params: 23,587,712
_________________________________________________________________
###Markdown
Callback
###Code
# Early Stopping to prevent overfitting
early_stopper = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=2, verbose=2, restore_best_weights=True)
###Output
_____no_output_____
###Markdown
Training Linear Classifier - Without Augmentation Training
###Code
# get model and compile
tf.keras.backend.clear_session()
model = get_linear_classifier(alpha=1e-6)
model.summary()
model.compile(loss="sparse_categorical_crossentropy", metrics=["acc"],
optimizer="adam")
# initialize wandb run
wandb.init(entity='authors', project='swav-tf')
# train
history = model.fit(training_ds,
validation_data=(testing_ds),
epochs=100,
callbacks=[WandbCallback(),
early_stopper])
###Output
Model: "functional_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
functional_1 (Functional) (None, 2048) 23587712
_________________________________________________________________
dense (Dense) (None, 5) 10245
=================================================================
Total params: 23,597,957
Trainable params: 10,245
Non-trainable params: 23,587,712
_________________________________________________________________
###Markdown
Evaluation
###Code
loss, acc = model.evaluate(testing_ds)
wandb.log({'Test Accuracy': round(acc*100, 2)})
###Output
3/3 [==============================] - 1s 320ms/step - loss: 1.3613 - acc: 0.4182
###Markdown
Training Linear Classifier with Augmentation Augmentation
###Code
# Configs
CROP_SIZE = 224
MIN_SCALE = 0.5
MAX_SCALE = 1.
# Experimental options
options = tf.data.Options()
options.experimental_optimization.noop_elimination = True
options.experimental_optimization.map_vectorization.enabled = True
options.experimental_optimization.apply_default_optimizations = True
options.experimental_deterministic = False
options.experimental_threading.max_intra_op_parallelism = 1
@tf.function
def scale_image(image, label):
image = tf.image.convert_image_dtype(image, tf.float32)
return (image, label)
@tf.function
def random_apply(func, x, p):
return tf.cond(
tf.less(tf.random.uniform([], minval=0, maxval=1, dtype=tf.float32),
tf.cast(p, tf.float32)),
lambda: func(x),
lambda: x)
@tf.function
def random_resize_crop(image, label):
# Conditional resizing
image = tf.image.resize(image, (260, 260))
# Get the crop size for given min and max scale
size = tf.random.uniform(shape=(1,), minval=MIN_SCALE*260,
maxval=MAX_SCALE*260, dtype=tf.float32)
size = tf.cast(size, tf.int32)[0]
# Get the crop from the image
crop = tf.image.random_crop(image, (size,size,3))
crop_resize = tf.image.resize(crop, (CROP_SIZE, CROP_SIZE))
return crop_resize, label
@tf.function
def tie_together(image, label):
# Scale the pixel values
image, label = scale_image(image , label)
# random horizontal flip
image = random_apply(tf.image.random_flip_left_right, image, p=0.5)
# Random resized crops
image, label = random_resize_crop(image, label)
return image, label
trainloader = (
train_ds
.shuffle(1024)
.map(tie_together, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
trainloader = trainloader.with_options(options)
###Output
_____no_output_____
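###Markdown
Before training, a quick look at one batch from the augmented loader helps confirm the random flips and resized crops behave as intended (a small sketch; the number of tiles shown is arbitrary):
###Code
# Visualize a few augmented crops from one training batch
sample_images, sample_labels = next(iter(trainloader))
plt.figure(figsize=(10, 4))
for i in range(8):
    ax = plt.subplot(2, 4, i + 1)
    ax.imshow(sample_images[i].numpy())
    ax.set_title(int(sample_labels[i]))
    ax.axis("off")
plt.show()
###Output
_____no_output_____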
###Markdown
Training
###Code
# get model and compile
tf.keras.backend.clear_session()
model = get_linear_classifier(alpha=1e-6)
model.summary()
model.compile(loss="sparse_categorical_crossentropy", metrics=["acc"],
optimizer="adam")
# initialize wandb run
wandb.init(entity='authors', project='swav-tf')
# train
# Train on the augmented loader defined above
history = model.fit(trainloader,
validation_data=(testing_ds),
epochs=100,
callbacks=[WandbCallback(),
early_stopper])
###Output
Model: "functional_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
_________________________________________________________________
functional_1 (Functional) (None, 2048) 23587712
_________________________________________________________________
dense (Dense) (None, 5) 10245
=================================================================
Total params: 23,597,957
Trainable params: 10,245
Non-trainable params: 23,587,712
_________________________________________________________________
###Markdown
Evaluation
###Code
loss, acc = model.evaluate(testing_ds)
wandb.log({'Test Accuracy': round(acc*100, 2)})
###Output
3/3 [==============================] - 1s 325ms/step - loss: 1.4145 - acc: 0.3945
|
notebooks/ImageEditing/tests/ImageCompo_DoveNet.ipynb | ###Markdown
Image Composition GAN | TensorFlow-DoveNet[GitHub](https://github.com/Asha-Gutlapalli/TensorFlow-DoveNet) 1. Preparations Before you start, make sure that you choose Runtime Type = Python 3 and Hardware Accelerator = GPU.
###Code
!nvidia-smi
###Output
Tue May 3 14:11:16 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |
| N/A 72C P8 34W / 149W | 0MiB / 11441MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
###Markdown
2. Linking NextCloud: connecting to the external NextCloud drive
###Code
# we'll link the dataset from next-cloud
!curl https://raw.githubusercontent.com/luca-arts/seeingtheimperceptible/main/notebooks/database_mod.py -o /content/database_mod.py
from database_mod import *
link_nextcloud()
nextcloud = '/content/database/'
input_folder, output_folder = create_io(database=nextcloud,topic='bgRemoval',library='DoveNet')
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2480 100 2480 0 0 5675 0 --:--:-- --:--:-- --:--:-- 5688
what's the username for nextcloud? colab
what's the password for user colab? ··········
0
Please enter the username to authenticate with server
https://cloud.bxlab.net/remote.php/dav/files/colab/colabfiles/ or hit enter for none.
Username: Please enter the password to authenticate user colab with server
https://cloud.bxlab.net/remote.php/dav/files/colab/colabfiles/ or hit enter for none.
Password:
###Markdown
3. clone GIT repo
###Code
import os
root_path = '/content/DoveNet'
# clone the repository
if not os.path.exists(root_path):
!git clone https://github.com/Asha-Gutlapalli/TensorFlow-DoveNet {root_path}
%ls
###Output
Cloning into '/content/DoveNet'...
remote: Enumerating objects: 27, done.[K
remote: Counting objects: 100% (27/27), done.[K
remote: Compressing objects: 100% (24/24), done.[K
remote: Total 27 (delta 2), reused 22 (delta 1), pack-reused 0[K
Unpacking objects: 100% (27/27), done.
[0m[01;34mdatabase[0m/ database_mod.py [01;34mDoveNet[0m/ [01;34m__pycache__[0m/ [01;34msample_data[0m/
###Markdown
4. Setting up the environment
###Code
# installing DoveNet
%cd {root_path}
!pip install -q -r requirements.txt
###Output
/content/DoveNet
[K |████████████████████████████████| 1.1 MB 4.0 MB/s
[K |████████████████████████████████| 10.1 MB 34.9 MB/s
[K |████████████████████████████████| 462 kB 47.3 MB/s
[K |████████████████████████████████| 181 kB 49.7 MB/s
[K |████████████████████████████████| 76 kB 4.9 MB/s
[K |████████████████████████████████| 164 kB 51.4 MB/s
[K |████████████████████████████████| 111 kB 51.9 MB/s
[K |████████████████████████████████| 4.3 MB 36.5 MB/s
[K |████████████████████████████████| 63 kB 1.7 MB/s
[K |████████████████████████████████| 131 kB 50.8 MB/s
[K |████████████████████████████████| 793 kB 51.7 MB/s
[K |████████████████████████████████| 130 kB 47.5 MB/s
[K |████████████████████████████████| 428 kB 51.7 MB/s
[K |████████████████████████████████| 381 kB 38.6 MB/s
[?25h Building wheel for blinker (setup.py) ... [?25l[?25hdone
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
jupyter-console 5.2.0 requires prompt-toolkit<2.0.0,>=1.0.0, but you have prompt-toolkit 3.0.29 which is incompatible.
google-colab 1.0.0 requires ipykernel~=4.10, but you have ipykernel 6.13.0 which is incompatible.
google-colab 1.0.0 requires ipython~=5.5.0, but you have ipython 7.33.0 which is incompatible.
google-colab 1.0.0 requires tornado~=5.1.0; python_version >= "3.0", but you have tornado 6.1 which is incompatible.[0m
|
sample-notebooks/Java_Worker_Sample.ipynb | ###Markdown
Java Worker Example Introduction This is a simple example of how to use Java workers. Essentially, these samples are the same as the ones for Python and other languages.
###Code
IP = '192.168.99.100' # You need to change this address to your container host
BASE = 'http://' + IP + '/v1/'
import requests
import json
def jprint(data):
print(json.dumps(data, indent=4))
HEADERS = {'Content-Type': 'application/json'}
# List of all available services
res = requests.get(BASE + 'services')
print("Here is the list of services supported by this submit agent:\n")
jprint(res.json())
# And this is the sample Java Worker
res = requests.get(BASE + 'services/hello-java')
jprint(res.json())
# Submit a job
greeting = {
'message': 'Hello, this is a message for Java from Python client.'
}
for i in range(10):
res = requests.post(BASE + 'services/hello-java', data=json.dumps(greeting), headers=HEADERS)
job_id1 = res.json()['job_id']
jprint(res.json())
nbs_input = {
'inputFile': 'http://www.cytoscape.org/'
}
for i in range(10):
res = requests.post(BASE + 'services/nbs', data=json.dumps(nbs_input), headers=HEADERS)
job_id1 = res.json()['job_id']
jprint(res.json())
# Check the job status
res = requests.get(BASE + 'queue')
# job_id1 = res.json()[0]['job_id']
jprint(res.json())
result_url = BASE + 'queue/' + job_id1 + '/result'
print(result_url)
res = requests.get(result_url)
jprint(res.json())
res = requests.delete(BASE + 'queue')
###Output
_____no_output_____ |
scraping/Stock_Price.ipynb | ###Markdown
Printing daily price data for an individual stock Installing and importing the libraries
###Code
!pip install -U finance-datareader
import FinanceDataReader as fdr
import pandas as pd
###Output
_____no_output_____
###Markdown
Collecting daily prices by looking up a stock with its ticker code
###Code
# Samsung Electronics: fetch daily prices from 2020-08-01 through 2021
# The ticker code for Samsung Electronics is '005930'
price = fdr.DataReader('005930', "20200801", "2022") # if only the end year is given, data runs through Dec 31 of the year before it; pass 2022 to get prices up to the present
price
# Draw the chart (closing-price data)
price["Close"].plot()
###Output
_____no_output_____
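###Markdown
As a small extension (a sketch; the 20-day window is an arbitrary choice), a simple moving average can be drawn on top of the closing price:
###Code
# Overlay a 20-day simple moving average on the closing price
ma20 = price["Close"].rolling(window=20).mean()
ax = price["Close"].plot(label="Close")
ma20.plot(ax=ax, label="20-day MA")
ax.legend()
###Output
_____no_output_____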
###Markdown
Creating a function that returns the ticker code for a stock name
###Code
df_krx = fdr.StockListing("KRX")
# Build a function that returns the ticker code for a given stock name
# If a matching value exists in the Name column of df_krx,
# store the value of that row's Symbol column in a list,
# then define the item_code_by_item_name function
def item_code_by_item_name(item_name):
    # Take a stock name, look up its ticker code, and return it
item_code_list = df_krx.loc[df_krx["Name"] == item_name, "Symbol"].tolist()
if len(item_code_list) > 0:
item_code = item_code_list[0]
return item_code
else:
return False
# Edge cases
item_code_by_item_name("삼성전자")
# When a wrong stock name is given
item_code_by_item_name("네이버")
item_code_by_item_name("NAVER")
###Output
_____no_output_____
###Markdown
Creating a function that fetches daily prices from a stock name
###Code
# item_code_by_item_name : get the ticker code from a stock name
# find_item_list : fetch that year's data for a ticker code
# if no year is given, find_item_list defaults to 2022
def find_item_list(item_name, year=2022):
    # Return daily prices for the given stock name
    # Internally resolves the ticker code with item_code_by_item_name and then collects the prices
item_code = item_code_by_item_name(item_name)
if item_code:
df_day = fdr.DataReader(item_code, str(year))
return df_day
else:
return False
find_item_list("NAVER", 2017)
find_item_list("크래프톤",2021)
# Krafton listed on 2021-08-10, so output starts from the listing date
###Output
_____no_output_____
###Markdown
Visualizing the price data for an individual stock
###Code
# For the basic closing-price chart, see the closing-price plot above
price["Close"].plot()
# Two-axis chart of close and volume using the secondary_y option
price[["Close", "Volume"]].plot(secondary_y="Volume")
###Output
_____no_output_____ |
examples_ja/tutorial009_graph_partitioning.ipynb | ###Markdown
Graph Partitioning Problem In a graph whose number of vertices V is even, we look for the split of the vertices into two groups of exactly V/2 each that minimizes the number of edges connecting the two groups. The first cost term is the condition that the two groups, si=-1 and si=1, contain the same number of vertices; the second is a term designed so that the cost increases when two chosen vertices belong to different groups. As a result, many connections between the two groups raise the cost. Here we use the Ising {-1,1} formulation rather than the QUBO {0,1} one. Example We assume a 1D ring graph with 8 nodes.
###Code
import networkx as nx
import matplotlib.pyplot as plt
options = {'node_color': '#efefef','node_size': 1200,'with_labels':'True'}
G = nx.Graph()
G.add_nodes_from([0,1,2,3,4,5,6,7])
G.add_edges_from([(0,1),(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(7,0)])
nx.draw(G, **options)
###Output
_____no_output_____
###Markdown
Let's get right into the code. First install the package and import it.
###Code
!pip install -U wildqat
import wildqat as wq
a = wq.opt()
###Output
_____no_output_____
###Markdown
The first term of the cost function above is the sum over all qubits, squared. We build it by setting the local field of every qubit to 1 and taking the resulting coefficients.
###Code
matrix1 = wq.sqr([1,1,1,1,1,1,1,1])
print(matrix1)
###Output
[[1 2 2 2 2 2 2 2]
[0 1 2 2 2 2 2 2]
[0 0 1 2 2 2 2 2]
[0 0 0 1 2 2 2 2]
[0 0 0 0 1 2 2 2]
[0 0 0 0 0 1 2 2]
[0 0 0 0 0 0 1 2]
[0 0 0 0 0 0 0 1]]
###Markdown
For the second term we ignore the constant part and add -B/2 times every J_ij. Let's try B = 0.5. In the 1D ring above, neighboring qubits are connected to each other and the last qubit is connected back to the first one. We use Wildqat's helper that builds the matrix from a network structure.
###Code
matrix2 = wq.net([[0,1],[1,2],[2,3],[3,4],[4,5],[5,6],[6,7],[7,0]],8)
print(matrix2)
B = 0.5
a.J = matrix1 - B * matrix2
a.sa()
a.sa()
print(a.J)
###Output
[[1. 1.5 2. 2. 2. 2. 2. 1.5]
[0. 1. 1.5 2. 2. 2. 2. 2. ]
[0. 0. 1. 1.5 2. 2. 2. 2. ]
[0. 0. 0. 1. 1.5 2. 2. 2. ]
[0. 0. 0. 0. 1. 1.5 2. 2. ]
[0. 0. 0. 0. 0. 1. 1.5 2. ]
[0. 0. 0. 0. 0. 0. 1. 1.5]
[0. 0. 0. 0. 0. 0. 0. 1. ]]
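###Markdown
To interpret a solution, the spins can be mapped back onto the ring and the crossing edges counted. The sketch below uses a hard-coded example assignment for illustration; in practice you would plug in the ±1 spins obtained from the annealing run above.
###Code
edges = [(0,1),(1,2),(2,3),(3,4),(4,5),(5,6),(6,7),(7,0)]
# Example assignment: one contiguous half of the ring per group (a valid V/2 vs V/2 split)
spins = [1, 1, 1, 1, -1, -1, -1, -1]
group_a = [i for i, s in enumerate(spins) if s == 1]
cut_edges = [(i, j) for i, j in edges if spins[i] != spins[j]]
print("group sizes:", len(group_a), len(spins) - len(group_a))
print("edges between the groups:", cut_edges)
###Output
_____no_output_____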
|
Classification/K_Nearest_Neighbors(K_NN).ipynb | ###Markdown
K-Nearest Neighbors (K-NN) Importing the libraries
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Mounting Google Drive
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/MyDrive/Datasets
###Output
/content/drive/MyDrive/Datasets
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Social_Network_Ads.csv')
dataset
# Calculating the correlation matrix
corr = dataset.corr()
# Generating a heatmap
sns.heatmap(corr,xticklabels=corr.columns, yticklabels=corr.columns)
sns.pairplot(dataset,hue='Gender')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
X
###Output
_____no_output_____
###Markdown
Taking care of missing data
###Code
imputer = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
imputer.fit(X[:, 1:4])
X[:, 1:4] = imputer.transform(X[:, 1:4])
###Output
_____no_output_____
###Markdown
Encoding categorical data Encoding the Independent Variable
###Code
# ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [0])], remainder='passthrough')
# X = np.array(ct.fit_transform(X))
# X
###Output
_____no_output_____
###Markdown
Encoding the Dependent Variable
###Code
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# print(X_train)
# print(y_train)
# print(X_test)
# print(y_test)
###Output
_____no_output_____
###Markdown
Feature Scaling
###Code
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
print(X_train)
print(X_test)
###Output
[[-0.80480212 0.50496393]
[-0.01254409 -0.5677824 ]
[-0.30964085 0.1570462 ]
[-0.80480212 0.27301877]
[-0.30964085 -0.5677824 ]
[-1.10189888 -1.43757673]
[-0.70576986 -1.58254245]
[-0.21060859 2.15757314]
[-1.99318916 -0.04590581]
[ 0.8787462 -0.77073441]
[-0.80480212 -0.59677555]
[-1.00286662 -0.42281668]
[-0.11157634 -0.42281668]
[ 0.08648817 0.21503249]
[-1.79512465 0.47597078]
[-0.60673761 1.37475825]
[-0.11157634 0.21503249]
[-1.89415691 0.44697764]
[ 1.67100423 1.75166912]
[-0.30964085 -1.37959044]
[-0.30964085 -0.65476184]
[ 0.8787462 2.15757314]
[ 0.28455268 -0.53878926]
[ 0.8787462 1.02684052]
[-1.49802789 -1.20563157]
[ 1.07681071 2.07059371]
[-1.00286662 0.50496393]
[-0.90383437 0.30201192]
[-0.11157634 -0.21986468]
[-0.60673761 0.47597078]
[-1.6960924 0.53395707]
[-0.11157634 0.27301877]
[ 1.86906873 -0.27785096]
[-0.11157634 -0.48080297]
[-1.39899564 -0.33583725]
[-1.99318916 -0.50979612]
[-1.59706014 0.33100506]
[-0.4086731 -0.77073441]
[-0.70576986 -1.03167271]
[ 1.07681071 -0.97368642]
[-1.10189888 0.53395707]
[ 0.28455268 -0.50979612]
[-1.10189888 0.41798449]
[-0.30964085 -1.43757673]
[ 0.48261718 1.22979253]
[-1.10189888 -0.33583725]
[-0.11157634 0.30201192]
[ 1.37390747 0.59194336]
[-1.20093113 -1.14764529]
[ 1.07681071 0.47597078]
[ 1.86906873 1.51972397]
[-0.4086731 -1.29261101]
[-0.30964085 -0.3648304 ]
[-0.4086731 1.31677196]
[ 2.06713324 0.53395707]
[ 0.68068169 -1.089659 ]
[-0.90383437 0.38899135]
[-1.20093113 0.30201192]
[ 1.07681071 -1.20563157]
[-1.49802789 -1.43757673]
[-0.60673761 -1.49556302]
[ 2.1661655 -0.79972756]
[-1.89415691 0.18603934]
[-0.21060859 0.85288166]
[-1.89415691 -1.26361786]
[ 2.1661655 0.38899135]
[-1.39899564 0.56295021]
[-1.10189888 -0.33583725]
[ 0.18552042 -0.65476184]
[ 0.38358493 0.01208048]
[-0.60673761 2.331532 ]
[-0.30964085 0.21503249]
[-1.59706014 -0.19087153]
[ 0.68068169 -1.37959044]
[-1.10189888 0.56295021]
[-1.99318916 0.35999821]
[ 0.38358493 0.27301877]
[ 0.18552042 -0.27785096]
[ 1.47293972 -1.03167271]
[ 0.8787462 1.08482681]
[ 1.96810099 2.15757314]
[ 2.06713324 0.38899135]
[-1.39899564 -0.42281668]
[-1.20093113 -1.00267957]
[ 1.96810099 -0.91570013]
[ 0.38358493 0.30201192]
[ 0.18552042 0.1570462 ]
[ 2.06713324 1.75166912]
[ 0.77971394 -0.8287207 ]
[ 0.28455268 -0.27785096]
[ 0.38358493 -0.16187839]
[-0.11157634 2.21555943]
[-1.49802789 -0.62576869]
[-1.29996338 -1.06066585]
[-1.39899564 0.41798449]
[-1.10189888 0.76590222]
[-1.49802789 -0.19087153]
[ 0.97777845 -1.06066585]
[ 0.97777845 0.59194336]
[ 0.38358493 0.99784738]]
###Markdown
Training the K-NN model on the Training set
###Code
classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predicting a new result
###Code
print(classifier.predict(X_test))  # X_test was already scaled by sc.transform above
###Output
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
###Markdown
Predicting the Test set results
###Code
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
[[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[1 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 1]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 0]
[0 0]
[0 0]
[1 1]
[0 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[1 1]
[1 1]
[1 1]]
###Markdown
Making the Confusion Matrix
###Code
cm = confusion_matrix(y_test, y_pred)
print(cm)
print()
sns.heatmap(cm, annot=True)
print("Naive Bayes Accuracy = ", accuracy_score(y_test, y_pred)*100,'%')
print()
print(classification_report(y_test,y_pred))
###Output
K-NN Accuracy =  93.0 %
precision recall f1-score support
0 0.96 0.94 0.95 68
1 0.88 0.91 0.89 32
accuracy 0.93 100
macro avg 0.92 0.92 0.92 100
weighted avg 0.93 0.93 0.93 100
###Markdown
Finding best fit k value
###Code
error_rate = [] #list that will store the average error rate for each k
for i in range(1,40): #Took the range of k from 1 to 39
    knn = KNeighborsClassifier(n_neighbors = i)
    knn.fit(X_train, y_train)
    pred_i = knn.predict(X_test)
    error_rate.append(np.mean(pred_i != y_test)) #fraction of misclassified test points
print(error_rate)
###Output
[0.6699999999999999, 0.74, 0.6699999999999999, 0.6799999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6699999999999999, 0.6799999999999999, 0.6799999999999999]
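###Markdown
An alternative to the manual loop above (a sketch using scikit-learn's cross-validated grid search; the fold count is an arbitrary choice):
###Code
from sklearn.model_selection import GridSearchCV

param_grid = {'n_neighbors': list(range(1, 40))}
grid = GridSearchCV(KNeighborsClassifier(metric='minkowski', p=2),
                    param_grid, cv=5, scoring='accuracy')
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
###Output
_____no_output_____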
###Markdown
Plotting the error rate vs k graph
###Code
plt.figure(figsize=(15,6))
plt.plot(range(1,40),error_rate,marker="h",markerfacecolor="black",
linestyle='-',color="gray",markersize=15)
plt.title("Error Rate vs k-Value",fontsize=20)
plt.xlabel("k- Values",fontsize=20)
plt.ylabel("Error Rate",fontsize=20)
plt.xticks(range(1,40))
plt.show()
###Output
_____no_output_____
###Markdown
Visualising the Training set results
###Code
# Visualising the Training set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('#2D2926FF', '#E94B3CFF')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('#2D2926FF', '#E94B3CFF'))(i), label = j)
plt.title('K-NN (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
###Markdown
Visualising the Test set results
###Code
# Visualising the Test set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('#2D2926FF', '#E94B3CFF')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('#2D2926FF', '#E94B3CFF'))(i), label = j)
plt.title('K-NN (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
###Output
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
*c* argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with *x* & *y*. Please use the *color* keyword-argument or provide a 2-D array with a single row if you intend to specify the same RGB or RGBA value for all points.
|
1_dend_data_modeling/notebooks/L1-D2-creating-a-table-with-apache-cassandra.ipynb | ###Markdown
Lesson 1 Demo 2: Creating a Table with Apache Cassandra Walk through the basics of Apache Cassandra: creating a table, inserting rows of data, and running a simple CQL query to validate the information. Use a Python wrapper / Python driver called cassandra to run the Apache Cassandra queries. This library should be preinstalled, but to install it locally in the future you can run this command in a notebook: `! pip install cassandra-driver` More documentation can be found here: https://datastax.github.io/python-driver/ Import Apache Cassandra python package
###Code
!pip3 install cassandra-driver
import cassandra
###Output
_____no_output_____
###Markdown
Create a connection to the database1. Connect to the local instance of Apache Cassandra *['127.0.0.1']*.2. The connection reaches out to the database (*studentdb*) and uses the correct privileges to connect to the database (*user and password = student*).3. Once we get back the cluster object, we need to connect and that will create our session that we will use to execute queries. *Note 1:* This block of code will be standard in all notebooks
###Code
from cassandra.cluster import Cluster
try:
cluster = Cluster(['127.0.0.1']) #If you have a locally installed Apache Cassandra instance
session = cluster.connect()
except Exception as e:
print(e)
###Output
('Unable to connect to any servers', {'127.0.0.1:9042': ConnectionRefusedError(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
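###Markdown
The note above mentions connecting with specific credentials (user and password `student`). A minimal sketch of how that could look, assuming password authentication is actually enabled on the cluster (the plain local instance used in this demo usually does not require it):
###Code
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

try:
    # Pass the credentials named in the text through an auth provider.
    auth_provider = PlainTextAuthProvider(username='student', password='student')
    cluster = Cluster(['127.0.0.1'], auth_provider=auth_provider)
    session = cluster.connect()
except Exception as e:
    print(e)
###Output
_____no_output_____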
###Markdown
Test the Connection and Error Handling Code*Note:* The try-except block should handle the error: We are trying to do a `select *` on a table but the table has not been created yet.
###Code
try:
session.execute("""select * from music_libary""")
except Exception as e:
print(e)
###Output
name 'session' is not defined
###Markdown
Create a keyspace to the work in *Note:* We will ignore the Replication Strategy and factor information right now as those concepts are covered in depth in Lesson 3. Remember, this will be the strategy and replication factor on a one node local instance.
###Code
try:
session.execute("""
CREATE KEYSPACE IF NOT EXISTS udacity
WITH REPLICATION =
{ 'class' : 'SimpleStrategy', 'replication_factor' : 1 }"""
)
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Connect to our Keyspace.*Compare this to how a new session in PostgreSQL is created.*
###Code
try:
session.set_keyspace('udacity')
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Begin with creating a Music Library of albums. Each album has a lot of information we could add to the music library table. We will start with album name, artist name, and year. But ... stop: we are working with Apache Cassandra, a NoSQL database. We can't model our data and create our table without more information. Think about what queries you will be performing on this data. We want to be able to get every album that was released in a particular year: `select * from music_library WHERE YEAR=1970` *To do that:* we need to be able to do a WHERE on YEAR, so YEAR will become the partition key, and artist name will be the clustering column to make each Primary Key unique. **Remember there are no duplicates in Apache Cassandra.** **Table Name:** music_library **column 1:** Album Name **column 2:** Artist Name **column 3:** Year PRIMARY KEY(year, artist name) Now to translate this information into a Create Table Statement. More information on Data Types can be found here: https://datastax.github.io/python-driver/ *Note:* Again, we will go in depth with these concepts in Lesson 3.
###Code
query = "CREATE TABLE IF NOT EXISTS music_library "
query = query + "(year int, artist_name text, album_name text, PRIMARY KEY (year, artist_name))"
try:
session.execute(query)
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
The query should run smoothly. Insert two rows
###Code
query = "INSERT INTO music_library (year, artist_name, album_name)"
query = query + " VALUES (%s, %s, %s)"
try:
session.execute(query, (1970, "The Beatles", "Let it Be"))
except Exception as e:
print(e)
try:
session.execute(query, (1965, "The Beatles", "Rubber Soul"))
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Validate your data was inserted into the table.*Note:* The for loop is used for printing the results. If executing queries in the cqlsh, this would not be required.*Note:* Depending on the version of Apache Cassandra you have installed, this might throw an "ALLOW FILTERING" error instead of printing the 2 rows that we just inserted. This is to be expected, as this type of query should not be performed on large datasets, we are only doing this for the sake of the demo.
###Code
query = 'SELECT * FROM music_library'
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.album_name, row.artist_name)
###Output
_____no_output_____
###Markdown
Validate the Data Model with the original query.`select * from music_library WHERE YEAR=1970`
###Code
query = "select * from music_library WHERE YEAR=1970"
try:
rows = session.execute(query)
except Exception as e:
print(e)
for row in rows:
print (row.year, row.album_name, row.artist_name)
###Output
_____no_output_____
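###Markdown
As an aside (a hypothetical extra query, not part of the original demo): filtering on the clustering column alone, without restricting the partition key, is rejected by Cassandra unless `ALLOW FILTERING` is appended. This is exactly why the primary key above was designed around the year-based query.
###Code
# Hypothetical illustration: query by the clustering column only.
query = "select * from music_library WHERE artist_name='The Beatles' ALLOW FILTERING"
try:
    rows = session.execute(query)
    for row in rows:
        print (row.year, row.album_name, row.artist_name)
except Exception as e:
    print(e)
###Output
_____no_output_____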
###Markdown
Drop the table to avoid duplicates and clean up.
###Code
query = "drop table music_library"
try:
rows = session.execute(query)
except Exception as e:
print(e)
###Output
_____no_output_____
###Markdown
Close the session and cluster connection
###Code
session.shutdown()
cluster.shutdown()
###Output
_____no_output_____ |
tutorial-ja/502_measurement_ja.ipynb | ###Markdown
Measurement Fockbase Photon-number measurement We perform a photon-number measurement on a Fock state (a photon-number state). Since a Fock state has a definite photon number, the measurement result is deterministic.
###Code
F = pq.Fock(1, cutoff = 20)
F.n_photon(0, 5).run() # prepare five photon state
res = F.photonSampling(0, ite = 10)
print(res)
###Output
[5 5 5 5 5 5 5 5 5 5]
###Markdown
We perform a photon-number measurement on a coherent state. Since this state is a superposition of several Fock states, the measurement results are probabilistic. $D(\alpha) = \exp(\alpha \hat{a}^{\dagger} - \alpha^{*}\hat{a})$$D(\alpha) \lvert 0 \rangle = \lvert\alpha \rangle = \exp(-\lvert \alpha \rvert^2/2) \sum_n \frac{\alpha^n}{(n!)^{1/2}} \lvert n \rangle$
###Code
alpha = (0 + 1j) # parameter
F = pq.Fock(1, cutoff = 15)
F.D(0, alpha) # Dgate
F.run()
res = F.photonSampling(0, ite = 20)
print(res)
###Output
[1 0 1 0 1 2 1 2 1 1 1 2 0 0 0 0 0 0 2 1]
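###Markdown
For a coherent state the photon-number distribution is Poissonian with mean $\lvert \alpha \rvert^2$. A quick check of that claim (a sketch, assuming `alpha` and `res` from the cell above are available):
###Code
# Compare the empirical photon-number frequencies with the Poisson probabilities.
import math
from collections import Counter

mean_n = abs(alpha)**2  # equals 1 for alpha = 1j
counts = Counter(int(n) for n in res)
for n in sorted(counts):
    empirical = counts[n] / len(res)
    theory = math.exp(-mean_n) * mean_n**n / math.factorial(n)
    print(n, 'empirical:', round(empirical, 2), 'Poisson:', round(theory, 2))
###Output
_____no_output_____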
###Markdown
Homodyne measurement We measure the quadrature $q$ (often written $x$) or $p$. Measuring $q$ for a coherent state
###Code
alpha = 1 + 0.5j
theta = 0
F = pq.Fock(2)
F.D(0, 1 + 0.5j)
F.run()
res = F.homodyneSampling(0, theta, ite = 1000)
plt.hist(res, bins=50)
plt.show()
###Output
_____no_output_____
###Markdown
Measuring $p$ for a coherent state
###Code
alpha = 1 + 0.5j
theta = - np.pi / 2
F = pq.Fock(2)
F.D(0, 1 + 0.5j)
F.run()
res = F.homodyneSampling(0, theta, ite = 1000)
plt.hist(res, bins=20)
plt.show()
###Output
_____no_output_____
###Markdown
Measuring $q$ for a Schrödinger cat state
###Code
alpha = (1 - 0j) * 2
theta = 0
parity = 'e'
F = pq.Fock(2, cutoff = 20)
F.cat(0, alpha, parity)
F.cat(1, alpha, parity)
F.run()
res = F.homodyneSampling(0, theta, ite = 500)
x, p, W = F.Wigner(1, method = 'clenshaw') # plot
plt.hist(res, bins = 50, range = (-5, 5))
plt.show()
###Output
_____no_output_____
###Markdown
Measuring $p$ for a Schrödinger cat state
###Code
alpha = (1 - 0j) * 2
theta = -np.pi/2
parity = 'e'
F = pq.Fock(2, cutoff = 20)
F.cat(0, alpha, parity)
F.cat(1, alpha, parity)
F.run()
res = F.homodyneSampling(0, theta, ite = 500)
x, p, W = F.Wigner(1, method = 'clenshaw') # plot
plt.hist(res, bins = 50, range = (-5, 5))
plt.show()
###Output
_____no_output_____
###Markdown
Gaussian formula Homodyne measurement We measure the quadrature $q$ (often written $x$) or $p$.
###Code
G = pq.Gaussian(2)
res_arr_x = []
for i in range(1000):
G.D(0, 1 + 0.5j)
G.MeasX(0)
G.run()
res = G.Creg(0, "x").read()
res_arr_x.append(res)
###Output
_____no_output_____
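###Markdown
Before plotting, a quick sanity check of the sample moments (a sketch, assuming `res_arr_x` from the cell above): for a Gaussian state the mean and standard deviation summarize the distribution.
###Code
import numpy as np
print('sample mean of x:', np.mean(res_arr_x))
print('sample std of x :', np.std(res_arr_x))
###Output
_____no_output_____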
###Markdown
The result shows a Gaussian distribution sampled from the Gaussian Wigner function.
###Code
plt.hist(res_arr_x, bins=20)
G = pq.Gaussian(2)
res_arr_p = []
for i in range(1000):
G.D(0, 1 + 0.5j)
G.MeasP(0)
G.run()
res = G.Creg(0, "p").read()
res_arr_p.append(res)
plt.hist(res_arr_p, bins=20)
###Output
_____no_output_____ |
dmi/Example - DMI.ipynb | ###Markdown
DMI InstallationRun the following cell to install osiris-sdk.
###Code
!pip install osiris-sdk --upgrade
###Output
_____no_output_____
###Markdown
Access to datasetThere are two ways to get access to a dataset1. Service Principle2. Access Token Config file with Service PrincipleIf done with **Service Principle** it is adviced to add the following file with **tenant_id**, **client_id**, and **client_secret**:The structure of **conf.ini**:```[Authorization]tenant_id = client_id = client_secret = [Egress]url = ``` Config file if using Access TokenIf done with **Access Token** then assign it to a variable (see example below).The structure of **conf.ini**:```[Egress]url = ```The egress-url can be [found here](https://github.com/Open-Dataplatform/examples/blob/main/README.md). ImportsExecute the following cell to import the necessary libraries
###Code
import pandas as pd
from io import BytesIO
from osiris.apis.egress import Egress
from osiris.core.azure_client_authorization import ClientAuthorization
from configparser import ConfigParser
###Output
_____no_output_____
###Markdown
Initialize Egress with Service Principle
###Code
config = ConfigParser()
config.read('conf.ini')
client_auth = ClientAuthorization(tenant_id=config['Authorization']['tenant_id'],
client_id=config['Authorization']['client_id'],
client_secret=config['Authorization']['client_secret'])
egress = Egress(client_auth=client_auth,
egress_url=config['Egress']['url'])
###Output
_____no_output_____
###Markdown
Initialize Egress with Access Token
###Code
config = ConfigParser()
config.read('conf.ini')
access_token = 'REPLACE WITH ACCESS TOKEN HERE'
client_auth = ClientAuthorization(access_token=access_token)
egress = Egress(client_auth=client_auth,
egress_url=config['Egress']['url'])
###Output
_____no_output_____
###Markdown
List all DMI stations for a given monthTo list all the available stations for a given month run the following code.Feel free to change the month using the format: YYYY-MM
###Code
# We restrict to only list the first 10 stations
egress.download_dmi_list(from_date='2021-01')[:10]
###Output
_____no_output_____
###Markdown
Download DMI data for a given station and time periodTo download the data for a given station (lon, lat) for a time period (from_date, to_date) execute the following cell.You can find the available values of **lon** and **lat** from the previous call.The **from_date** and **to_date** syntax is [described here](https://github.com/Open-Dataplatform/examples/blob/main/README.md).
###Code
parquet_content = egress.download_dmi_file(lon=15.19,
lat=55.00,
from_date='2021-01',
to_date='2021-03')
data = pd.read_parquet(BytesIO(parquet_content))
data.head()
data.dtypes
###Output
_____no_output_____ |
CNN/Keras CNN.ipynb | ###Markdown
Import packages
###Code
%matplotlib inline
import keras
from keras.datasets import mnist
from keras.callbacks import TensorBoard
from keras.layers import Dense, Dropout, Flatten, AlphaDropout, BatchNormalization
from keras.layers import Conv2D, MaxPooling2D
from keras.models import Sequential, load_model
from keras import backend as K
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams['figure.figsize'] = [24, 24]
###Output
_____no_output_____
###Markdown
ShowImages(images,gray=True): Helper function to Show images
###Code
def ShowImages(images,P,T,gray=True):
"""
Display a list of images in a single figure with matplotlib.
Parameters
---------
size of each image must be same and a perfect square.
"""
rows = int(np.sqrt(images.shape[0]))
cols = rows
while (rows * cols < images.shape[0]):
rows = rows + 1
x = images[0].shape[1]
y = x
fig = plt.figure()
plt.suptitle('Images with their Predictions, True labels', fontsize=24)
if gray:
plt.gray()
for n,image in enumerate(images):
a = fig.add_subplot(rows,cols , n + 1)
plt.imshow(images[n].reshape(x,y))
plt.subplots_adjust(right=0.7,hspace = 0.6)
plt.title(f'{P[n]}, {T[n]}')
plt.axis('off')
return
###Output
_____no_output_____
###Markdown
Load MNIST Data Set
###Code
# input image dimensions
img_rows, img_cols = 28, 28
num_classes = 10
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
###Output
_____no_output_____
###Markdown
Normalize the Data Set
###Code
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
###Output
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
###Markdown
Catogerize the traget variable
###Code
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
modelh5 = 'Keras-MNIST.h5'
batch_size = 1024
num_classes = 10
epochs = 1
###Output
_____no_output_____
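###Markdown
As a quick aside, a hypothetical mini-example (not used by the training code) of what `to_categorical` does: it turns integer class labels into one-hot rows.
###Code
# One-hot encoding illustration; the printed array is the 3x3 identity matrix.
print(keras.utils.to_categorical([0, 1, 2], num_classes=3))
###Output
_____no_output_____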
###Markdown
Creating the Model
###Code
try:
model = load_model(modelh5)
print('Model loaded successfully')
except IOError:
print('Running the model for the first time')
model = Sequential()
model.add(Conv2D(9, kernel_size=(14, 14),
activation='elu',
input_shape=input_shape,
padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(5, 5), strides=2, padding='same'))
model.add(BatchNormalization())
model.add(Conv2D(81, (10, 10), activation='elu', padding='same'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2), strides=2, padding='same'))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(128, activation='selu'))
model.add(AlphaDropout(0.4))
model.add(Dense(64, activation='elu'))
model.add(BatchNormalization())
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(amsgrad=True),
metrics=['accuracy'])
###Output
Model loaded successfully
###Markdown
Train and evaluate the model
###Code
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
#callbacks=[tensorboard])
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
model.save(modelh5)
model.summary()
y_true, y_prob = y_train.argmax(axis=-1), model.predict(x_train)
y_pred = y_prob.argmax(axis=-1)
print(classification_report(y_true, y_pred))
y_true, y_prob = y_test.argmax(axis=-1), model.predict(x_test)
y_pred = y_prob.argmax(axis=-1)
print(classification_report(y_true, y_pred))
not_equal = np.not_equal(y_pred, y_true).nonzero()[0]
y_true[not_equal]
y_pred[not_equal]
ShowImages(x_test[not_equal,:],y_pred[not_equal],y_true[not_equal],gray=True)
###Output
_____no_output_____ |
2015/jordi/Day 13.ipynb | ###Markdown
--- Day 13: Knights of the Dinner Table ---In years past, the holiday feast with your family hasn't gone so well. Not everyone gets along! This year, you resolve, will be different. You're going to find the optimal seating arrangement and avoid all those awkward conversations.You start by writing up a list of everyone invited and the amount their happiness would increase or decrease if they were to find themselves sitting next to each other person. You have a circular table that will be just big enough to fit everyone comfortably, and so each person will have exactly two neighbors.For example, suppose you have only four attendees planned, and you calculate their potential happiness as follows:Alice would gain 54 happiness units by sitting next to Bob.Alice would lose 79 happiness units by sitting next to Carol.Alice would lose 2 happiness units by sitting next to David.Bob would gain 83 happiness units by sitting next to Alice.Bob would lose 7 happiness units by sitting next to Carol.Bob would lose 63 happiness units by sitting next to David.Carol would lose 62 happiness units by sitting next to Alice.Carol would gain 60 happiness units by sitting next to Bob.Carol would gain 55 happiness units by sitting next to David.David would gain 46 happiness units by sitting next to Alice.David would lose 7 happiness units by sitting next to Bob.David would gain 41 happiness units by sitting next to Carol.Then, if you seat Alice next to David, Alice would lose 2 happiness units (because David talks so much), but David would gain 46 happiness units (because Alice is such a good listener), for a total change of 44.If you continue around the table, you could then seat Bob next to Alice (Bob gains 83, Alice gains 54). Finally, seat Carol, who sits next to Bob (Carol gains 60, Bob loses 7) and David (Carol gains 55, David gains 41). The arrangement looks like this: +41 +46+55 David -2Carol Alice+60 Bob +54 -7 +83After trying every other seating arrangement in this hypothetical scenario, you find that this one is the most optimal, with a total change in happiness of 330.What is the total change in happiness for the optimal seating arrangement of the actual guest list?
###Code
import re
from collections import defaultdict
from itertools import permutations
_PARSE = re.compile("([A-Za-z]+) would (gain|lose) ([0-9]+) happiness units by sitting next to ([A-Za-z]+).")
def count_happines(units, order):
p = order[0]
v_l = units[p][order[-1]]
for c in order[1:]:
v_l += int(units[p][c])
p = c
p = order[-1]
v_r = units[p][order[0]]
for c in order[::-1][1:]:
v_r += int(units[p][c])
p = c
return v_r + v_l
def happiness(data, addme=False):
units = defaultdict(dict)
for d in data:
name1, gain_loss, points, name2 = _PARSE.match(d).groups()
points = int(points) if gain_loss == 'gain' else -int(points)
units[name1][name2] = points
if addme:
for name in list(units.keys()):
units['me'][name] = 0
units[name]['me'] = 0
names = list(units.keys())
max_perm = tuple(names)
max_value = count_happines(units, max_perm)
for perm in permutations(units.keys()):
value = count_happines(units, perm)
if value > max_value:
max_perm = perm
max_value = value
return max_value, max_perm
# Test example
example = [
"Alice would gain 54 happiness units by sitting next to Bob.",
"Alice would lose 79 happiness units by sitting next to Carol.",
"Alice would lose 2 happiness units by sitting next to David.",
"Bob would gain 83 happiness units by sitting next to Alice.",
"Bob would lose 7 happiness units by sitting next to Carol.",
"Bob would lose 63 happiness units by sitting next to David.",
"Carol would lose 62 happiness units by sitting next to Alice.",
"Carol would gain 60 happiness units by sitting next to Bob.",
"Carol would gain 55 happiness units by sitting next to David.",
"David would gain 46 happiness units by sitting next to Alice.",
"David would lose 7 happiness units by sitting next to Bob.",
"David would gain 41 happiness units by sitting next to Carol."
]
happiness(example)
# Test input
with open('day13/input.txt', 'rt') as fd:
print(happiness(fd))
###Output
(618, ('Bob', 'Carol', 'David', 'George', 'Mallory', 'Frank', 'Alice', 'Eric'))
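###Markdown
An optional speed-up sketch (not used above): because the table is circular, the first person can be fixed in place, which divides the number of permutations to check by the number of guests. `units` here is assumed to be the same nested dict that `happiness` builds internally.
###Code
def happiness_fixed_first(units):
    # Fix the first person and permute only the remaining guests.
    names = list(units.keys())
    first, rest = names[0], names[1:]
    best_value, best_order = None, None
    for perm in permutations(rest):
        order = (first,) + perm
        value = count_happines(units, order)
        if best_value is None or value > best_value:
            best_value, best_order = value, order
    return best_value, best_order
###Output
_____no_output_____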
###Markdown
--- Part Two ---In all the commotion, you realize that you forgot to seat yourself. At this point, you're pretty apathetic toward the whole thing, and your happiness wouldn't really go up or down regardless of who you sit next to. You assume everyone else would be just as ambivalent about sitting next to you, too.So, add yourself to the list, and give all happiness relationships that involve you a score of 0.What is the total change in happiness for the optimal seating arrangement that actually includes yourself?
###Code
with open('day13/input.txt', 'rt') as fd:
print(happiness(fd, addme=True))
###Output
(601, ('Bob', 'Carol', 'David', 'George', 'Mallory', 'Frank', 'Alice', 'Eric', 'me'))
|
notebooks/.ipynb_checkpoints/Laboratory2-checkpoint.ipynb | ###Markdown
Laboratory 2: Mold and Fungicide Based on Chapter 6. In this laboratory, let $x(t)$ be the concentration of mold that we want to reduce over a fixed time period. We assume that $x$ grows with rate $r$ and carrying capacity $M$. Let $u(t)$ be the fungicide, which reduces the population by $u(t)x(t)$. Thus $$x'(t) = r(M - x(t)) - u(t)x(t), \quad x(0) = x_0 > 0$$ The effects of the mold and of the fungicide are harmful to the people around, so we want to minimize both. Our objective will therefore be $$\min_u \int_0^T Ax(t)^2 + u(t)^2 dt$$ where $A$ is the parameter that balances the importance of the terms of the functional, that is, the larger its value, the more importance is given to minimizing $x$. **Existence result:** $f(t,x,u) = Ax^2 + u^2 \implies f_{xx}(t,x,u) = 2A, f_{uu}(t,x,u) = 2$, so $f$ is continuously differentiable in the three variables and convex in $x$ and $u$. $g(t,x,u) = r(M - x(t)) - u(t)x(t) \implies g_{xx} = g_{uu} = 0$, and likewise it is continuously differentiable and convex in $x$ and $u$. Thus, once we find a $\lambda$ satisfying the necessary conditions, we have an existence result, provided the integral is finite. Necessary Conditions Hamiltonian$$H = Ax^2 + u^2 + \lambda(r(M - x) - ux)$$ Optimality condition$$0 = H_u = 2u - \lambda x \implies u^{*}(t) = \frac{1}{2}\lambda(t)x(t)$$ Adjoint equation $$\lambda '(t) = - H_x = -2Ax(t) + \lambda(t)(r + u(t)) $$ Transversality condition $$\lambda(T) = 0$$ We should also verify that $\lambda(t) \ge 0$, but this is not done here. Note that we obtain a nonlinear system of differential equations, which makes the analytical solution much more complex. For this reason, we will solve this problem iteratively. Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
import sympy as sp
import sys
sys.path.insert(0, '../pyscripts/')
from optimal_control_class import OptimalControl
###Output
_____no_output_____
###Markdown
With `sympy`, a symbolic mathematics library, it is possible to obtain the necessary conditions without doing the computations by hand. To do so, let us write the Hamiltonian as a symbolic expression.
###Code
x_sp,u_sp,lambda_sp, r_sp, A_sp, M_sp = sp.symbols('x u lambda r A M')
H = A_sp*x_sp**2 + u_sp**2 + lambda_sp*(r_sp*(M_sp - x_sp) - u_sp*x_sp)
H
###Output
_____no_output_____
###Markdown
This way we can obtain its derivatives and, therefore, the necessary conditions.
###Code
print('H_x = {}'.format(sp.diff(H,x_sp)))
print('H_u = {}'.format(sp.diff(H,u_sp)))
print('H_lambda = {}'.format(sp.diff(H,lambda_sp)))
###Output
H_x = 2*A*x + lambda*(-r - u)
H_u = -lambda*x + 2*u
H_lambda = r*(M - x) - u*x
###Markdown
We can solve the equation $H_u = 0$ automatically, but it is important to also check this step by hand.
###Code
eq = sp.Eq(sp.diff(H,u_sp), 0)
sp.solve(eq,u_sp)
###Output
_____no_output_____
###Markdown
This time we will use a class written in Python that implements the algorithm presented in Chapter 5 and in Laboratory 1. First we need to define the important equations of the necessary conditions. It is important to write them in the format described in this notebook. `par` is a dictionary with the model-specific parameters.
###Code
parameters = {'r': None, 'M': None, 'A': None}
diff_state = lambda t, x, u, par: par['r']*(par['M'] - x) - u*x # derivative of x
diff_lambda = lambda t, x, u, lambda_, par: -2*par['A']*x + lambda_*(par['r'] + u) # derivative of lambda_
update_u = lambda t, x, lambda_, par: 0.5*lambda_*x # update u through H_u = 0
###Output
_____no_output_____
###Markdown
Applying the class to the example Let us run some experiments. Feel free to vary the parameters at the end of the notebook.
###Code
problem = OptimalControl(diff_state, diff_lambda, update_u)
x0 = 1
T = 5
parameters['r'] = 0.3
parameters['M'] = 10
parameters['A'] = 1
t,x,u,lambda_ = problem.solve(x0, T, parameters, h = 0.001)
ax = problem.plotting(t,x,u,lambda_)
###Output
_____no_output_____
###Markdown
The control initially increases until it reaches a constant value, and the state does the same. We say they are in equilibrium. Eventually the control decreases to 0. Note that the state does not decrease, which happens because of the equal weight given to the negative effects of the mold and of the fungicide. For this reason, we can suggest increasing $A$.
###Code
parameters['A'] = 10
t,x,u,lambda_ = problem.solve(x0, T, parameters, h = 0.001)
ax = problem.plotting(t,x,u,lambda_)
###Output
_____no_output_____
###Markdown
Aqui o uso de fungicida é muito maior. Como gostaríamos, a quantidade de mofo também é menor em sua constante, porém no final do intervalo, quando a quantidade do fungicida diminui, o nível de mofo cresce consideravelmente. Para fins de comparação, podemos visualizar a diferença de quando $u \equiv 0$ do controle ótimo. Para isso, temos que fazer a integração da derivada de $x$ no intervalo.
###Code
integration = solve_ivp(fun = diff_state,
t_span = (0,T),
y0 = (x0,),
t_eval = np.linspace(0,T,len(u)),
args = (0,parameters))
plt.plot(t, x, integration.t, integration.y[0])
plt.title('Comparação da quantidade mofo')
plt.legend(['Controle ótimo', 'Sem controle'])
plt.grid(alpha = 0.5)
###Output
_____no_output_____
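###Markdown
A rough quantitative comparison (a sketch; it assumes `t`, `x`, `u` from `problem.solve` above and `integration` from the previous cell are numeric arrays): the value of the objective $J = \int_0^T Ax^2 + u^2 \, dt$ in each scenario.
###Code
# Numerical value of the objective functional for both scenarios.
A = parameters['A']
J_optimal = np.trapz(A*np.asarray(x)**2 + np.asarray(u)**2, t)
J_no_control = np.trapz(A*integration.y[0]**2, integration.t)
print('J with the optimal control:', round(float(J_optimal), 2))
print('J without control (u = 0): ', round(float(J_no_control), 2))
###Output
_____no_output_____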
###Markdown
Experimentation Uncomment the following cell and vary the parameters to see their effects: 1. Increase $r$ to see the mold grow faster. How will the control behave? 2. Does varying $T$ make any difference? What is that difference? 3. What effect does varying $M$, the carrying capacity, have on the state?
###Code
#x0 = 1
#T = 5
#parameters['r'] = 0.3
#parameters['M'] = 10
#parameters['A'] = 1
#
#t,x,u,lambda_ = problem.solve(x0, T, parameters, h = 0.001)
#problem.plotting(t,x,u,lambda_)
###Output
_____no_output_____ |
old_experiments/02_ImageWang_ContrastLearning_12_ep30_retrainlighting.ipynb | ###Markdown
Image网 Submission `128x128` This contains a submission for the Image网 leaderboard in the `128x128` category.In this notebook we:1. Train on 1 pretext task: - Train a network with a contrastive learning objective on Image网's `/train`, `/unsup` and `/val` images. 2. Train on 4 downstream tasks: - We load the pretext weights and train for `5` epochs. - We load the pretext weights and train for `20` epochs. - We load the pretext weights and train for `80` epochs. - We load the pretext weights and train for `200` epochs. Our leaderboard submissions are the accuracies we get on each of the downstream tasks.
###Code
import json
import torch
import numpy as np
from functools import partial
from fastai2.basics import *
from fastai2.vision.all import *
torch.cuda.set_device(2)
# Chosen parameters
lr=2e-2
sqrmom=0.99
mom=0.95
beta=0.
eps=1e-4
bs=64
sa=1
m = xresnet34
act_fn = Mish
pool = MaxPool
nc=20
source = untar_data(URLs.IMAGEWANG_160)
len(get_image_files(source/'unsup')), len(get_image_files(source/'train')), len(get_image_files(source/'val'))
# Use the Ranger optimizer
opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)
m_part = partial(m, c_out=nc, act_cls=torch.nn.ReLU, sa=sa, pool=pool)
model_meta[m_part] = model_meta[xresnet34]
save_name = 'imagewang_contrast_retrain_ep5'
###Output
_____no_output_____
###Markdown
Pretext Task: Contrastive Learning
###Code
#export
from pytorch_metric_learning import losses
class XentLoss(losses.NTXentLoss):
def forward(self, output1, output2):
stacked = torch.cat((output1, output2), dim=0)
labels = torch.arange(output1.shape[0]).repeat(2)
return super().forward(stacked, labels, None)
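# Note on the pairing trick above: `labels` gives row i of the first half and row i of the
# second half of `stacked` the same label, so NTXentLoss treats those two rows as the
# positive pair and every other row in the batch as a negative.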
class ContrastCallback(Callback):
run_before=Recorder
def __init__(self, size=256, aug_targ=None, aug_pos=None, temperature=0.1):
self.aug_targ = ifnone(aug_targ, get_aug_pipe(size, min_scale=0.7))
self.aug_pos = ifnone(aug_pos, get_aug_pipe(size, min_scale=0.4))
self.temperature = temperature
def update_size(self, size):
pipe_update_size(self.aug_targ, size)
pipe_update_size(self.aug_pos, size)
def begin_fit(self):
self.old_lf = self.learn.loss_func
self.old_met = self.learn.metrics
self.learn.metrics = []
self.learn.loss_func = losses.NTXentLoss(self.temperature)
def after_fit(self):
self.learn.loss_fun = self.old_lf
self.learn.metrics = self.old_met
def begin_batch(self):
xb, = self.learn.xb
xb_targ = self.aug_targ(xb)
xb_pos = self.aug_pos(xb)
self.learn.xb = torch.cat((xb_targ, xb_pos), dim=0),
self.learn.yb = torch.arange(xb_targ.shape[0]).repeat(2),
#export
def pipe_update_size(pipe, size):
for tf in pipe.fs:
if isinstance(tf, RandomResizedCropGPU):
tf.size = size
def get_dbunch(size, bs, workers=8, dogs_only=False):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
folders = ['unsup', 'val'] if dogs_only else ['train', 'val']
files = get_image_files(source, folders=folders)
tfms = [[PILImage.create, ToTensor, RandomResizedCrop(size, min_scale=0.9)],
[parent_label, Categorize()]]
# dsets = Datasets(files, tfms=tfms, splits=GrandparentSplitter(train_name='unsup', valid_name='val')(files))
dsets = Datasets(files, tfms=tfms, splits=RandomSplitter(valid_pct=0.1)(files))
# batch_tfms = [IntToFloatTensor, *aug_transforms(p_lighting=1.0, max_lighting=0.9)]
batch_tfms = [IntToFloatTensor]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
dls.path = source
return dls
size = 128
bs = 64
dbunch = get_dbunch(160, bs)
len(dbunch.train.dataset)
dbunch.show_batch()
# # xb = TensorImage(torch.randn(1, 3,128,128))
# afn_tfm, lght_tfm = aug_transforms(p_lighting=1.0, max_lighting=0.8, p_affine=1.0)
# # lght_tfm.split_idx = None
# xb.allclose(afn_tfm(xb)), xb.allclose(lght_tfm(xb, split_idx=0))
#export
def get_aug_pipe(size, min_scale=0.4, stats=None, erase=False, **kwargs):
stats = ifnone(stats, imagenet_stats)
aug_tfms = aug_transforms(size=size, min_scale=min_scale, **kwargs)
tfms = [Normalize.from_stats(*stats), *aug_tfms]
if erase: tfms.append(RandomErasing(sh=0.1))
pipe = Pipeline(tfms)
pipe.split_idx = 0
return pipe
aug = get_aug_pipe(size, min_scale=0.25, mult=1, max_lighting=0.8, p_lighting=1.0)
aug2 = get_aug_pipe(size, min_scale=0.2, mult=2, max_lighting=0.4, p_lighting=1.0)
cbs = ContrastCallback(size=size, aug_targ=aug, aug_pos=aug2, temperature=0.1)
xb,yb = dbunch.one_batch()
nrm = Normalize.from_stats(*imagenet_stats)
xb_dec = nrm.decodes(aug(xb))
show_images([xb_dec[0], xb[0]])
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func,
metrics=[], loss_func=CrossEntropyLossFlat(), cbs=cbs, pretrained=False,
config={'custom_head':ch}
).to_fp16()
# state_dict = torch.load(f'imagewang_contrast_simple_ep80.pth')
# learn.model[0].load_state_dict(state_dict, strict=False)
learn.unfreeze()
learn.fit_flat_cos(5, 4e-2, wd=1e-2)
torch.save(learn.model[0].state_dict(), f'{save_name}.pth')
# learn.save(save_name)
###Output
_____no_output_____
###Markdown
Downstream Task: Image Classification
###Code
def get_dbunch(size, bs, workers=8, dogs_only=False):
path = URLs.IMAGEWANG_160 if size <= 160 else URLs.IMAGEWANG
source = untar_data(path)
if dogs_only:
dog_categories = [f.name for f in (source/'val').ls()]
dog_train = get_image_files(source/'train', folders=dog_categories)
valid = get_image_files(source/'val')
files = dog_train + valid
splits = [range(len(dog_train)), range(len(dog_train), len(dog_train)+len(valid))]
else:
files = get_image_files(source)
splits = GrandparentSplitter(valid_name='val')(files)
item_aug = [RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]
tfms = [[PILImage.create, ToTensor, *item_aug],
[parent_label, Categorize()]]
dsets = Datasets(files, tfms=tfms, splits=splits)
batch_tfms = [IntToFloatTensor, Normalize.from_stats(*imagenet_stats)]
dls = dsets.dataloaders(bs=bs, num_workers=workers, after_batch=batch_tfms)
dls.path = source
return dls
def do_train(size=128, bs=64, lr=1e-2, epochs=5, runs=5, dogs_only=False, save_name=None):
dbunch = get_dbunch(size, bs, dogs_only=dogs_only)
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, normalize=False,
metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(),
pretrained=False,
config={'custom_head':ch}).to_fp16()
if save_name is not None:
state_dict = torch.load(f'{save_name}.pth')
learn.model[0].load_state_dict(state_dict)
# state_dict = torch.load('imagewang_inpainting_15_epochs_nopretrain.pth')
# learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=1e-2)
###Output
_____no_output_____
###Markdown
5 Epochs
###Code
epochs = 5
runs = 1
do_train(epochs=epochs, runs=runs, lr=2e-2, dogs_only=False, save_name=save_name)
###Output
Run: 0
###Markdown
Random weights - ACC = 0.337999
###Code
do_train(epochs=epochs, runs=1, dogs_only=False, save_name=None)
###Output
_____no_output_____
###Markdown
20 Epochs
###Code
epochs = 20
runs = 1
# opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)
opt_func = SGD
dbunch = get_dbunch(128, 64)
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, normalize=False,
metrics=[accuracy,top_k_accuracy], loss_func=CrossEntropyLossFlat(),
pretrained=False,
config={'custom_head':ch}).to_fp16()
if save_name is not None:
state_dict = torch.load(f'{save_name}.pth')
learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.lr_find()
# not inpainting model
do_train(epochs=20, runs=runs, lr=1e-2, dogs_only=False, save_name=save_name)
# inpainting model
do_train(epochs=20, runs=runs, lr=1e-2, dogs_only=False, save_name=save_name)
do_train(epochs=20, runs=runs, lr=1e-2, dogs_only=False, save_name=None)
###Output
Run: 0
###Markdown
80 epochs
###Code
epochs = 80
runs = 1
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=save_name)
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=None)
###Output
_____no_output_____
###Markdown
Accuracy: **62.18%** 200 epochs
###Code
epochs = 200
runs = 1
do_train(epochs=epochs, runs=runs, dogs_only=False, save_name=save_name)
###Output
_____no_output_____ |
Data_Analys_7_work.ipynb | ###Markdown
Using the data from the table and applying standard changes of variables, find the equations of the following regressions: 1) linear, 2) hyperbolic, 3) power, 4) exponential, 5) logarithmic. Compare the quality of the obtained approximations by comparing their deviations. Plot the obtained dependencies together with the tabulated values of the arguments and the function. Variant No. 4
###Code
# import packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import math
# initial data
x = [1, 1.64, 2.28, 2.91, 3.56, 2.39, 4.84, 5.48]
y = [0.28,0.19, 0.15, 0.11, 0.09, 0.08, 0.07,0.06]
# First plot: the initial points
mpl.rcParams['font.family'] = 'fantasy'
plt.title('Начальные точки',size=14)
plt.xlabel('Координата X', size=14)
plt.ylabel('Координата Y', size=14)
plt.plot(x, y, color='r', linestyle=' ', marker='o', label='Data(x,y)')
plt.legend(loc='best')
plt.grid(True)
plt.show()
# compute the substituted variable u for each type of regression
def coef_count (x,reg_name):
if reg_name == 'linear':
u = x
if reg_name =='hyperbola':
u = [1/i for i in x]
if reg_name == 'sedate':
u = [np.log(i) for i in x]
if reg_name == 'indicative':
u = x
if reg_name == 'logarithm':
u = [np.log(i) for i in x]
return u
# compute a and b (the same for every regression, as a function of u)
def regression_count (x,y, u):
    # sum of the values
su = sum(u)
    # sum of the true responses
sy = sum(y)
    # sum of the products of the values and the true responses
list_uy = []
[list_uy.append(u[i]*y[i]) for i in range(len(u))]
suy = sum(list_uy)
    # sum of the squared values
list_u_sq = []
[list_u_sq.append(u[i]**2) for i in range(len(u))]
su_sq = sum(list_u_sq)
    # number of values
n = len(u)
    # main determinant
det = su_sq*n - su*su
    # determinant for a
det_a = su_sq*sy - su*suy
    # the desired parameter a
a = (det_a / det)
    # determinant for b
det_b = suy*n - sy*su
    # the desired parameter b
b = (det_b / det)
    # control values (sanity check)
check1 = (n*b + a*su - sy)
check2 = (b*su + a*su_sq - suy)
print([a, b])
return [round(a,4), round(b,4)]
# rewrite it
# define a function to compute the sum of squared errors
def errors_sq_regres_count(y,y2):
list_errors_sq = []
for i in range(len(y)):
err = (y[i]-y2[i])**2
list_errors_sq.append(err)
return sum(list_errors_sq)
# define a function that builds the array of fitted values
def sales_count(ab,x,y, reg_name):
line_answers = []
if reg_name == 'linear':
[line_answers.append(ab[1]*x[i]+ab[0]) for i in range(len(x))]
if reg_name == 'hyperbola':
[line_answers.append(ab[1]/x[i]+ab[0]) for i in range(len(x))]
if reg_name == 'sedate':
        [line_answers.append(math.exp(ab[0])*x[i]**ab[1]) for i in range(len(x))]
if reg_name == 'indicative':
[line_answers.append(math.exp(ab[1]*x[i])*math.exp(ab[0])) for i in range(len(x))]
if reg_name == 'logarithm':
[line_answers.append(ab[1]*np.log(x[i])+ab[0]) for i in range(len(x))]
return line_answers
# plot the final result
def result_show (a,b, reg_name):
plt.title('Аппроксимация {}'.format(reg_name))
plt.xlabel('Координата X', size=14)
plt.ylabel('Координата Y', size=14)
plt.plot(x, y, color='r', linestyle=' ', marker='o', label='Data(x,y)')
plt.plot(x, sales_count(ab_us,x,y,reg_name), color='g', linewidth=2, label='Data(x,f(x))')
plt.legend(loc='best')
plt.grid(True)
plt.show()
# linear regression equation
reg_name = 'linear'
u = coef_count (x, reg_name)
ab_us = regression_count(x,y,u)
a_us = ab_us[1]
b_us = ab_us[0]
print ('\033[1m' + '\033[4m' + "Оптимальные значения коэффициентов a и b:" + '\033[0m' )
print ('a =', a_us)
print ('b =', b_us)
line_answers = sales_count(ab_us,x,y,reg_name)
# run the function and record the error value
error_sq = errors_sq_regres_count(y,line_answers)
print ('\033[1m' + '\033[4m' + "Сумма квадратов отклонений" + '\033[0m')
print (error_sq)
result_show (a_us,b_us,reg_name)
# hyperbolic regression equation
reg_name = 'hyperbola'
u = coef_count (x, reg_name)
ab_us = regression_count(x,y,u)
a_us = ab_us[1]
b_us = ab_us[0]
print ('\033[1m' + '\033[4m' + "Оптимальные значения коэффициентов a и b:" + '\033[0m' )
print ('a =', a_us)
print ('b =', b_us)
line_answers = sales_count(ab_us,x,y,reg_name)
# run the function and record the error value
error_sq = errors_sq_regres_count(y,line_answers)
print ('\033[1m' + '\033[4m' + "Сумма квадратов отклонений" + '\033[0m')
print (error_sq)
result_show (a_us,b_us,reg_name)
# power regression equation
reg_name = 'sedate'
u = coef_count (x, reg_name)
y1 = [np.log(i) for i in y]
ab_us = regression_count(x,y1,u)
a_us = ab_us[1]
b_us = ab_us[0]
print ('\033[1m' + '\033[4m' + "Оптимальные значения коэффициентов a и b:" + '\033[0m' )
print ('a =', a_us)
print ('b =', b_us)
line_answers = sales_count(ab_us,x,y,reg_name)
# run the function and record the error value
error_sq = errors_sq_regres_count(y,line_answers)
print ('\033[1m' + '\033[4m' + "Сумма квадратов отклонений" + '\033[0m')
print (error_sq)
result_show (a_us,b_us,reg_name)
# exponential regression equation
reg_name = 'indicative'
u = coef_count (x, reg_name)
y1 = [np.log(i) for i in y]
ab_us = regression_count(x,y1,u)
a_us = ab_us[1]
b_us = ab_us[0]
print ('\033[1m' + '\033[4m' + "Оптимальные значения коэффициентов a и b:" + '\033[0m' )
print ('a =', a_us)
print ('b =', b_us)
line_answers = sales_count(ab_us,x,y,reg_name)
# run the function and record the error value
error_sq = errors_sq_regres_count(y,line_answers)
print ('\033[1m' + '\033[4m' + "Сумма квадратов отклонений" + '\033[0m')
print (error_sq)
result_show (a_us,b_us,reg_name)
# logarithmic regression equation
reg_name = 'logarithm'
u = coef_count (x, reg_name)
ab_us = regression_count(x,y,u)
a_us = ab_us[1]
b_us = ab_us[0]
print ('\033[1m' + '\033[4m' + "Оптимальные значения коэффициентов a и b:" + '\033[0m' )
print ('a =', a_us)
print ('b =', b_us)
line_answers = sales_count(ab_us,x,y,reg_name)
# run the function and record the error value
error_sq = errors_sq_regres_count(y,line_answers)
print ('\033[1m' + '\033[4m' + "Сумма квадратов отклонений" + '\033[0m')
print (error_sq)
result_show (a_us,b_us,reg_name)
###Output
[0.2500546076481297, -0.1243177832529005]
[1m[4mОптимальные значения коэффициентов a и b:[0m
a = -0.1243
b = 0.2501
[1m[4mСумма квадратов отклонений[0m
0.0054881742418054665
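###Markdown
The assignment also asks to compare the quality of the approximations. A summary sketch that reuses `x`, `y` and the helper functions defined above (note that `regression_count` also prints its raw `[a, b]` pair for every model):
###Code
# Recompute the sum of squared deviations for every model in one loop.
errors = {}
for name in ['linear', 'hyperbola', 'sedate', 'indicative', 'logarithm']:
    u = coef_count(x, name)
    # the power ('sedate') and exponential ('indicative') models are fitted on ln(y)
    target = [np.log(i) for i in y] if name in ('sedate', 'indicative') else y
    ab = regression_count(x, target, u)
    errors[name] = errors_sq_regres_count(y, sales_count(ab, x, y, name))
for name, err in sorted(errors.items(), key=lambda item: item[1]):
    print(name, 'sum of squared deviations =', round(err, 5))
###Output
_____no_output_____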
|
intro/more_on_jupyter.ipynb | ###Markdown
More on the Jupyter notebookAs you heard in [about the software](the_software), Jupyter is an *interface* that allows you to run Python code and see the results.It consists of two parts:* The web client (the thing you are looking at now, if you are running this notebook in Jupyter);* The kernel (that does the work of running code, and generating results).For example, consider this cell, where I make a new variable `a`, and display the value of `a`:
###Code
a = 10.50
a
###Output
_____no_output_____
###Markdown
I type this code in the web client.When I press Shift-Enter, or Run the cell via the web interface, this sends a *message* to the *kernel*.The message says something like:> Here is some Python code: "a = 1" and "a". Run this code, and show me any> results.The kernel accepts the message, runs the code in Python, and then sends back any results, in this case, as text to display (the text representation of the value 10.50).The web client shows the results. The promptsNotice the prompts for the code cells. Before I have run the cell, there is an empty prompt, like this `In [ ]:`. `In` means "Input", meaning, this is a cell where you input code.
###Code
b = 9.25
###Output
_____no_output_____
###Markdown
Then, when you run the cell, the prompt changes to something like `In [1]:` where 1 means this is the first piece of code that the kernel ran.If there is any output, you will now see the *output* from that cell, with a prefix `Out [1]:` where 1 is the same number as the number for the input cell. This is output from the first piece of code that the kernel ran. Interrupting, restarting Sometimes you will find that the kernel gets stuck. That may be because the kernel is running code that takes a long time, but sometimes, it is because the kernel has crashed.In the next cell I load a function that makes the computer wait for a number of seconds.Don't worry about the `from ... import` here, we will come onto that later.
###Code
# Get the sleep function
from time import sleep
###Output
_____no_output_____
###Markdown
The sleep function has the effect of putting the Python process to sleep for the give number of seconds. Here I ask the Python process to sleep for 2 seconds:
###Code
sleep(2)
###Output
_____no_output_____
###Markdown
I can ask the computer to sleep for much longer. Try making a new code cell and executing the code `sleep(1000)`.When you do this, notice that the prompt gets an asterisk symbol `*` instead of a number, meaning that the kernel is busy doing something (busy sleeping).You can interrupt this long-running process by clicking on "stop execution" button on the top of thepage you are working on.
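###Markdown
A short optional sketch: interrupting the kernel raises a `KeyboardInterrupt` inside the running cell, so you can catch it if you want the cell to end cleanly instead of showing a long traceback.
###Code
# Interrupt this cell from the web client and the message below is printed.
try:
    sleep(1000)
except KeyboardInterrupt:
    print('Interrupted - the kernel is free again.')
###Output
_____no_output_____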
###Code
# Try running the code "sleep(10000)" below.
###Output
_____no_output_____ |
notebooks/3.0 mjg PreProcessing.ipynb | ###Markdown
Normalize Target (event_cause) labels
###Code
le = LabelEncoder()
high_freq['event_coded'] = le.fit_transform(high_freq['event_cause'])
high_freq.head(3)
event_code = high_freq[['event_cause', 'event_coded']]
event_code.to_csv('D:/cap/capstone2/data/interim/target_codes.csv')
df = high_freq.drop(columns=['event_cause'], axis=1)
popped = df.pop('event_coded')
df.insert(0, 'event_coded',popped)
df.head()
# OneHot Encode Categorical features
df = pd.get_dummies(df, columns=['crew_category'], prefix='crew', drop_first=False)
df = pd.get_dummies(df, columns=['med_certf'], prefix='med_certf', drop_first=False)
df = pd.get_dummies(df, columns=['med_crtf_vldty'], prefix='med_crtf_vldty', drop_first=False)
df = pd.get_dummies(df, columns=['light_cond'], prefix='light_cond', drop_first=False)
df = pd.get_dummies(df, columns=['wx_cond_basic'], prefix='wx_cond_basic', drop_first=False)
df = pd.get_dummies(df, columns=['type_fly'], prefix='type_fly', drop_first=False)
df = pd.get_dummies(df, columns=['pilot_privileges'], prefix='pilot_privileges', drop_first=False)
df.head(2)
scaler = MinMaxScaler()
scaler.fit(df[['crew_age']])
df['normed_age'] = scaler.transform(df[['crew_age']])
df.head()
nominals = ['ACTU-INST', 'ACTU-IRCV', 'ACTU-L24H', 'ACTU-L30D', 'ACTU-L90D', 'ACTU-PIC', 'ACTU-TOTL', 'ALL-INSTRUM',
'ALL-IRCV', 'ALL-L24H', 'ALL-L30D', 'ALL-L90D', 'ALL-PIC', 'ALL-TOTL', 'MAKE-INSTRUCT', 'MAKE-IRCV',
'MAKE-L24H', 'MAKE-L30D', 'MAKE-L90D', 'MAKE-PIC', 'MAKE-TOTL', 'MENG-INSTRUCT', 'MENG-IRCV', 'MENG-L24H',
'MENG-L30D', 'MENG-L90D', 'MENG-PIC', 'MENG-TOTL', 'NGHT-INSTRUCT', 'NGHT-IRCV', 'NGHT-L24H',
'NGHT-L30D', 'NGHT-L90D', 'NGHT-PIC', 'NGHT-TOTL', 'SENG-INSTRUCT', 'SENG-IRCV', 'SENG-L24H', 'SENG-L30D',
'SENG-L90D', 'SENG-PIC', 'SENG-TOTL', 'SIMU-TOTL',]
scaler = StandardScaler()
for nominal in nominals:
scaler.fit(df[[nominal]])
df[nominal] = scaler.transform(df[[nominal]])
df.head()
df.drop(columns='crew_age', inplace=True)
df.head()
df.info()
df.to_csv('D:/cap/capstone2/data/processed/processed.csv', index=False)
###Output
_____no_output_____ |
keys/M05-HW_KEY.ipynb | ###Markdown
Metadata```Course: DS 5001Module: 05 HWTopic: Create and Apply a TF-IDF FunctionAuthor: R.C. Alvarado``` InstructionsUsing the notebook from this module (`M05_BOW_TFIDF.ipynb`) and the `LIB` and `CORPUS` tables generated from the collection of texts (Austen and Melville) in Module 4, create a notebook to perform the following tasks:Write a function to generate a bag-of-words (`BOW`) represenation of the `CORPUS` table (or some subset of it) that takes the following arguments:* A tokens dataframe which can be a filtered version of the dataframe you import. This will be the `CORPUS` table or some subset of it.* A choice of bag, i.e. `OHCO` level, such as book, chapter, or paragraph.Write a function that returns the `TFIDF` values for a given `BOW`, with the following arguments:* The `BOW` table.* The type of `TF` measure to use.To compute `IDF`, use the formula $log_2(\frac{N}{DF})$ where $N$ is the number of documents (aka 'bags') in your `BOW`. Use these functions to get get the appropriate `TFIDF` to answer the questions below. Hints* Update the course GitHub repository to make sure you are working with the latest files.* Remember that the `CORPUS` table is a `TOKENS` table; it's just the combination of several such tables into one.* You will need to generate your own `VOCAB` table from `CORPUS` and compute `max_pos`. * When generating your own `VOCAB` table from `CORPUS`, be sure to name your index `term_str`. * Remember that the mean `TFIDF` is an aggregate statistic computed from the `TFIDF` results, and which shares the same domain as the `VOCAB` table.* `OHCO = ['book_id', 'chap_id', 'para_num', 'sent_num', 'token_num']` Questions Q1 Paste your functions here. **Answer**: `PASTED FUNCTIONS` Q2What are the top 20 words in the corpus by TFIDF mean using the `max` count method and `book` as the bag? **Answer**:```elinor 0.631162 NNPvernon 0.484550 NNPdarcy 0.360007 NNPreginald 0.344776 NNPfrederica 0.335458 NNPcrawford 0.331042 NNPelliot 0.318066 NNPweston 0.309443 NNPpierre 0.286605 NNPknightley 0.283192 NNPtilney 0.257662 NNPelton 0.254555 NNPbingley 0.247385 NNPwentworth 0.239176 NNPcourcy 0.237616 NNPwoodhouse 0.221144 NNPchurchhill 0.214320 NNPmarianne 0.197925 NNPbabbalanja 0.169229 NNPmainwaring 0.167729 NNP ``` Q3What are the top 20 words in the corpus by TFIDF mean, if you using the `max` count method and `paragraph` as the bag? Note, beccause of the greater number of bags, this will take longer to compute. **NOTE:** These results would be improved by using the `sum` TF count method, since the mean values would not all be the same. **Answer**:```0 14.841663 CDobverse 14.841663 NNneeva 14.841663 NNnestor 14.841663 NNPnevermore 14.841663 RBnewlanded 14.841663 VBNadjurative 14.841663 NNPnightshade 14.841663 VBPadjourn 14.841663 VBPniter 14.841663 NNnocturnally 14.841663 RBnoder 14.841663 RBnomine 14.841663 JJnov 14.841663 NNPnullifies 14.841663 VBZogdoads 14.841663 NNPnb 14.841663 NNohonoose 14.841663 NNPoloo 14.841663 INoptative 14.841663 NNP`````` Q4Characterize the general difference between the words in Question 3 and those in Qestion 2 in terms of part-of-speech. **Answer**: TFIDF by book just captures proper nouns. Q5Compute mean `TFIDF` for vocabularies conditioned on individual author, using *chapter* as the bag and `max` as the `TF` count method. Among the two authors, whose work has the most significant ajective? **Answer**: Melville. /ugh/ Solution Setup
###Code
data_home = "../labs-repo/data"
data_prefix = 'austen-melville'
OHCO = ['book_id', 'chap_id', 'para_num', 'sent_num', 'token_num']
SENTS = OHCO[:4]
PARAS = OHCO[:3]
CHAPS = OHCO[:2]
BOOKS = OHCO[:1]
###Output
_____no_output_____
###Markdown
Import
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import plotly_express as px
sns.set()
###Output
_____no_output_____
###Markdown
Prepare the data
###Code
LIB = pd.read_csv(f"{data_home}/output/{data_prefix}-LIB.csv").set_index('book_id')
CORPUS = pd.read_csv(f"{data_home}/output/{data_prefix}-CORPUS.csv").set_index(OHCO)
VOCAB = CORPUS.term_str.value_counts().to_frame('n')
VOCAB.index.name = 'term_str'
VOCAB['p'] = VOCAB.n / VOCAB.n.sum()
VOCAB['i'] = np.log2(1/VOCAB.p)
VOCAB['max_pos'] = CORPUS.reset_index().value_counts(['term_str','pos']).sort_index().unstack().idxmax(1)
VOCAB
###Output
_____no_output_____
###Markdown
Define Functions
###Code
def create_bow(CORPUS, bag, item_type='term_str'):
BOW = CORPUS.groupby(bag+[item_type])[item_type].count().to_frame('n')
return BOW
def get_tfidf(BOW, tf_method='max', df_method='standard', item_type='term_str'):
DTCM = BOW.n.unstack() # Create Doc-Term Count Matrix
if tf_method == 'sum':
TF = (DTCM.T / DTCM.T.sum()).T
elif tf_method == 'max':
TF = (DTCM.T / DTCM.T.max()).T
elif tf_method == 'log':
TF = (np.log2(1 + DTCM.T)).T
elif tf_method == 'raw':
TF = DTCM
elif tf_method == 'bool':
TF = DTCM.astype('bool').astype('int')
else:
raise ValueError(f"TF method {tf_method} not found.")
DF = DTCM.count()
N_docs = len(DTCM)
if df_method == 'standard':
IDF = np.log2(N_docs/DF) # This what the students were asked to use
elif df_method == 'textbook':
IDF = np.log2(N_docs/(DF + 1))
elif df_method == 'sklearn':
IDF = np.log2(N_docs/DF) + 1
elif df_method == 'sklearn_smooth':
IDF = np.log2((N_docs + 1)/(DF + 1)) + 1
else:
raise ValueError(f"DF method {df_method} not found.")
TFIDF = TF * IDF
return TFIDF
###Output
_____no_output_____
###Markdown
Get Top Words by Bag Q2
###Code
BOW_books = create_bow(CORPUS, bag=BOOKS)
TFIDF_books = get_tfidf(BOW_books, tf_method='max', df_method='standard')
TFIDF_books.mean().sort_values(ascending=False)\
.head(20).to_frame('mean_tfidf').join(VOCAB.max_pos)
###Output
_____no_output_____
###Markdown
Q3
###Code
BOW_paras = create_bow(CORPUS, bag=PARAS)
TFIDF_paras_max = get_tfidf(BOW_paras, tf_method='max')
TFIDF_paras_max.mean().sort_values(ascending=False)\
.head(20).to_frame('mean_tfidf').join(VOCAB.max_pos)
TFIDF_paras_raw = get_tfidf(BOW_paras, tf_method='raw')
TFIDF_paras_raw.mean().sort_values(ascending=False)\
.head(20).to_frame('mean_tfidf').join(VOCAB.max_pos)
###Output
_____no_output_____
###Markdown
Q5
###Code
AUS_IDX = LIB[LIB.author.str.contains('AUS')].index
MEL_IDX = LIB[LIB.author.str.contains('MEL')].index
aus_chap_bow = create_bow(CORPUS.loc[AUS_IDX], bag=CHAPS)
mel_chap_bow = create_bow(CORPUS.loc[MEL_IDX], bag=CHAPS)
TFIDF_aus = get_tfidf(aus_chap_bow, tf_method='max')
TFIDF_mel = get_tfidf(mel_chap_bow, tf_method='max')
###Output
_____no_output_____
###Markdown
Method 1
###Code
A = TFIDF_aus.mean().sort_values(ascending=False).to_frame('mean_tfidf').join(VOCAB.max_pos)
A[A.max_pos == 'JJ'].head(20)
M = TFIDF_mel.mean().sort_values(ascending=False).to_frame('mean_tfidf').join(VOCAB.max_pos)
M[M.max_pos == 'JJ'].head(20)
###Output
_____no_output_____
###Markdown
Method 2
###Code
A[A.max_pos == 'JJ'].mean_tfidf.idxmax(), A[A.max_pos == 'JJ'].mean_tfidf.max()
M[M.max_pos == 'JJ'].mean_tfidf.idxmax(), M[M.max_pos == 'JJ'].mean_tfidf.max()
###Output
_____no_output_____ |
Tutorials/Intro_To_NN/NNDL-solutions/notebooks/chap-5-why-are-deep-neural-networks-hard-to-train.ipynb | ###Markdown
Chapter 5: Why are deep neural networks hard to train? The vanishing gradient problem What's causing the vanishing gradient problem? Unstable gradients in deep neural nets Exercise 1 ([link](http://neuralnetworksanddeeplearning.com/chap5.htmlexercise_255808)): avoiding the unstable gradient problem with a steeper activation function?Using our random weight initialization by a Gaussian with mean $0$ and standard deviation $1$, our weights are "typically" the same order of magnitude as $1$ (actually, the mean size is $\sqrt{\frac 2 \pi} \approx 0.8$).Ideally then, we'd like an activation function whose derivative is as often as possible close to $1$, so the learning speeds of each layer would be similar despite the product of many terms (all close to 1).Such an activation function wouldn't solve the unstable gradient problem (as explained in the paragraph just above this exercise in the book, the product of terms make the situation intrinsically unstable; if for some reason the weights all grow, we'll still have an exploding gradient, and a vanishing gradient if they all decrease). But it would probably help. Problem 1 ([link](http://neuralnetworksanddeeplearning.com/chap5.htmlproblems_778071)): ranges of weights and activations avoiding the vanishing gradient problem**(1)** Since $\forall z \in \mathbb{R}, \sigma'(z) \leq \frac 1 4$, we can only ever have $|w \sigma'(wa+b)| \geq 1$ when $|w| \geq 4$.**(2)** Since $\sigma'$ is largest around $0$, we'll get greater intervals with $b = 0$.So we want to solve:\begin{equation*} \begin{aligned} | w \sigma'(wa)| \geq 1 &\iff \sigma'(wa) \geq \frac{1}{|w|} \\ &\iff \frac{e^{-wa}}{(1 + e^{-wa})^2} \geq \frac{1}{|w|} \\ &\iff 0 \geq 1 + ( 2 - |w|) e^{-wa} + (e^{-wa})^2 \end{aligned}\end{equation*}Let's denote $x = e^{-wa}$.We've got a quadratic inequation: $x^2 + (2 - |w|)x + 1 \leq 0$.The discriminant of this equation is:$$\Delta = (2 - |w|)^2 - 4 = |w|(|w| - 4)$$If $|w| > 4$, then $\Delta > 0$ and we have two distinct real roots that we'll note $r_1$ and $r_2$, $r_1 < r_2$.As the coefficient before $x^2$ is positive, this inequation will be satisfied for $r_1 \leq x \leq r_2$.Translating into $a$:\begin{equation*} \begin{aligned} r_1 \leq e^{-wa} \leq r_2 &\iff \ln r_1 \leq -wa \leq \ln r_2 \\ &\iff - \ln r_2 \leq wa \leq - \ln r_1 \end{aligned}\end{equation*}If $w > 0$, this is equivalent to:$$- \frac 1 w \ln r_2 \leq a \leq - \frac 1 w \ln r_1$$And so the width of the interval will be:$$\frac 1 w (-\ln r_1 + \ln r_2) = \frac 1 w \ln \frac{r_2}{r_1} = \frac{1}{|w|} \ln \frac{r_2}{r_1}$$If $w < 0$, this is equivalent to:$$- \frac 1 w \ln r_1 \leq a \leq - \frac 1 w \ln r_2$$And so the width of the interval will be:$$- \frac 1 w (\ln r_2 - \ln r_1) = - \frac 1 w \ln \frac{r_2}{r_1} = \frac{1}{|w|} \ln \frac{r_2}{r_1}$$So the width of the interval will always be (again, provided $|w| \geq 4$ and $b = 0$):$$\frac{1}{|w|} \ln \frac{r_2}{r_1}$$Now we know ([Vieta's formulas](https://en.wikipedia.org/wiki/Vieta%27s_formulas)) that the roots of the quadratic equation $a x^2 + bx + c = 0$ satisfy $r_1 r_2 = \frac c a$. In our case, $r_1 r_2 = 1$.Therefore, $\frac{1}{r_1} = r_2$ and $\frac{r_2}{r_1} = r_2^2$. 
So the width of our interval can be written:$$\frac{1}{|w|} \ln(r_2^2) = \frac{2}{|w|} \ln r_2$$All we have to do now is compute $r_2$:\begin{equation*} \begin{aligned} r_2 &= \frac{-b + \sqrt{\Delta}}{2a} \\ &= \frac{|w| - 2 + \sqrt{|w|(|w| - 4)}}{2} \\ &= \frac{|w| \left( 1 + \sqrt{1 - 4/|w|} \right)}{2} - 1 \end{aligned}\end{equation*}Conclusion: the set of $a$ satisfying $|w\sigma'(wa+b)| \geq 1$ can range over an interval no greater in width than$$\frac{2}{|w|} \ln\left( \frac{|w|(1+\sqrt{1-4/|w|})}{2}-1\right)$$**(3)** Let's plot the function (which is a very imprecise, but visual, way of numerically determining a maximum):
###Code
import numpy as np
import matplotlib.pyplot as plt
import math
def f(x):
return 2 / x * math.log(x / 2 * (1 + math.sqrt(1 - 4 / x)) - 1)
X = np.arange(4.0, 14.0, 0.05)
Y = np.array([f(x) for x in X])
plt.plot(X, Y)
plt.show()
###Output
_____no_output_____
###Markdown
We see that the maximum is around $|w| = 7$. Being more precise:
###Code
import numpy as np
import matplotlib.pyplot as plt
import math
def f(x):
return 2 / x * math.log(x / 2 * (1 + math.sqrt(1 - 4 / x)) - 1)
X = np.arange(6.5, 7.5, 0.01)
Y = np.array([f(x) for x in X])
plt.plot(X, Y)
plt.show()
###Output
_____no_output_____
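###Markdown
Beyond eyeballing the plot, the maximum can be pinned down numerically. The cell below is an addition (not in the book); it assumes `scipy` is available and reuses the function `f` defined in the previous cell.
###Code
# assumes scipy is installed; minimize -f with a bounded scalar optimizer to locate the maximum width
from scipy.optimize import minimize_scalar
res = minimize_scalar(lambda x: -f(x), bounds=(4.0, 14.0), method='bounded')
print("maximum width ~", -res.fun, "at |w| ~", res.x)
###Output
_____no_output_____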
###Markdown
We see that the width is greatest at $|w| \approx 6.9$, where it takes a value $\approx 0.45$. Problem 2: constructing an identity neuron**Note:** For this problem, I did not follow Nielsen's hint. The approximation I will construct seems satisfying to me, however I feel there might be a better solution. In particular, another solution might perhaps get arbitrarily close to the identity, which my solution can't do.We start from a regular sigmoid (setting $w_1 = w_2 = 1$, $b = 0$):
###Code
import numpy as np
import matplotlib.pyplot as plt
import math
def sigmoid(w1, w2, b, x):
return w2 / (1 + math.exp(- w1 * x - b))
def plot(w1, w2, b):
# Plot between -3 and 3:
X1 = np.arange(-3.0, 3.0, 0.05)
Y1 = np.array([sigmoid(w1, w2, b, x) for x in X1])
# Plot between 0 and 1:
X2 = np.arange(0.0, 1.0, 0.01)
Y2 = np.array([sigmoid(w1, w2, b, x) for x in X2])
Ylin = X2 # target function
plt.subplot(1, 2, 1) # 1 line, 2 columns, position 1
plt.ylim(bottom=0, top=1)
plt.plot(X1, Y1)
plt.subplot(1, 2, 2)
plt.ylim(bottom=0, top=1)
plt.plot(X2, Y2, label="sigmoid")
plt.plot(X2, Ylin, label="target")
plt.legend()
plt.show()
w1 = 1
w2 = 1
b = 0
plot(w1, w2, b)
###Output
_____no_output_____
###Markdown
What happens if we impose the following 2 conditions on our final function?* $w_2 \sigma(0.5 w_1 + b) = 0.5$* $w_2 \sigma(w_1 + b) = 1$Those conditions mean that the sigmoid approximation of the identity must be exact at the abscissas $0.5$ and $1$.In other words,\begin{equation}\frac{w_2}{1 + e^{-0.5 w_1 - b}} = 0.5\tag{1}\end{equation}\begin{equation}\frac{w_2}{1 + e^{-w_1 - b}} = 1\tag{2}\end{equation}Dividing (2) by (1) we get:\begin{equation*} \begin{aligned} 2 &= \frac{1 + e^{-0.5 w_1} e^{-b}}{1 + e^{-w_1}e^{-b}} \\ 1 + 2 e^{-w_1} e^{-b} &= e^{-0.5 w_1} e^{-b} \\ 1 &= e^{-b} \left( e^{-0.5 w_1} - 2 e^{-w_1} \right) \end{aligned}\end{equation*}Supposing $w_1$ fixed, we obtain:$$b = \ln \left( e^{-0.5 w_1} - 2 e^{-w_1} \right)$$Note that this imposes $\left( e^{-0.5 w_1} - 2 e^{-w_1} \right) > 0$, and so $w_1 > 2 \ln 2 \approx 1.39$.Now we can deduce $w_2$ using (2):$$w_2 = 1 + e^{- w_1 - b}$$Playing with the parameter $w_1$ gives rather satisfying approximations around $w_1 = 6$.
###Code
w1 = 6
b = math.log(math.exp(- 0.5 * w1) - 2 * math.exp(- w1))
w2 = 1 + math.exp(-(w1 + b))
plot(w1, w2, b)
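# Added sanity check (not in the original text): estimate the worst-case
# deviation of this approximation from the identity on [0, 1],
# reusing the sigmoid, w1, w2 and b defined above.
xs = np.arange(0.0, 1.0, 0.001)
max_dev = max(abs(sigmoid(w1, w2, b, x) - x) for x in xs)
print("max |approx(x) - x| on [0, 1]:", max_dev)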
###Output
_____no_output_____ |
Projects/CO2_emissions_of_various_countries_pred.ipynb | ###Markdown
Step 1: Data Loading and Understanding
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import linear_model
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
import os
# hide warnings
import warnings
warnings.filterwarnings('ignore')
# reading the csv files
cars= pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\cars_trucks_and_buses_per_1000_persons.csv")
co2=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\co2_emissions_tonnes_per_person.csv")
coal=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\coal_consumption_per_cap.csv")
ele_gen=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\electricity_generation_per_person.csv")
ele_use=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\electricity_use_per_person.csv")
forest=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\forest_coverage_percent.csv")
hydro=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\hydro_power_generation_per_person.csv")
income=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\income_per_person_gdppercapita_ppp_inflation_adjusted.csv")
industry=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\industry_percent_of_gdp.csv")
natural=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\natural_gas_production_per_person.csv")
oil_con=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\oil_consumption_per_cap.csv")
oil_pro=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\oil_production_per_person.csv")
year=pd.read_csv("C:\\Users\\NAMAN\\Desktop\\MLass\\co2_prediction\\yearly_co2_emissions_1000_tonnes.csv")
cars.head()
co2.head()
coal.head()
ele_gen.head()
###Output
_____no_output_____
###Markdown
Seeing the files above, we can be confident that all of them were read properly, so there is no need to inspect the head of each one. All of the columns are years plus the country name; since we only need the year 2014 and the country name, we will drop the other years.
###Code
co22=co2[['geo','2014']]
coal2=coal[['geo','2014']]
ele_gen2=ele_gen[['geo','2014']]
ele_use2=ele_use[['geo','2014']]
forest2=forest[['geo','2014']]
income2=income[['geo','2014']]
industry2=industry[['geo','2014']]
natural2=natural[['geo','2014']]
oil_con2=oil_con[['geo','2014']]
oil_pro2=oil_pro[['geo','2014']]
year2=year[['geo','2014']]
###Output
_____no_output_____
###Markdown
Reading the data above, we find that cars and hydro do not have a column for 2014, so we will not be using those files.
###Code
# Now checking first 3 files for first 5 rows
co22.head()
coal2.head()
ele_gen2.head()
# Checking for missing values
year2.isnull().sum()
###Output
_____no_output_____
###Markdown
Hence we successfully dropped the unneeded columns. Our next step is to merge these files using geo as the common column.
###Code
df1 = pd.merge(co22, coal2, how='outer', on='geo')
df2 = pd.merge(df1,ele_gen2, how='outer', on='geo')
df3 = pd.merge(df2, ele_use2, how='outer', on='geo')
df4 = pd.merge(df3, forest2, how='outer', on='geo')
df5 = pd.merge(df4, income2, how='outer', on='geo')
df6 = pd.merge(df5, industry2, how='outer', on='geo')
df7 = pd.merge(df6, natural2, how='outer', on='geo')
df8 = pd.merge(df7, oil_con2, how='outer', on='geo')
df9 = pd.merge(df8, oil_pro2, how='outer', on='geo')
df = pd.merge(df9, year2, how='outer', on='geo')
df.head()
df.columns
df.describe
# Now renaming our columns
df.columns=[ "geo","co2","coal","ele_gen","ele_use","forest","income","industry","natural","oil_con","oil_pro","year"]
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Step 2: Data Cleaning and Manipulation
###Code
# Checking for missing values
df.isnull().sum()
#Checking the percentage of missing values
round(100*(df.isnull().sum()/len(df.index)), 2)
###Output
_____no_output_____
###Markdown
As we can see, coal (66.49%), ele_gen (66.49%), natural (74.74%), oil_con (66.49%) and oil_pro (74.74%) have more than 50% missing values, so there is no point in keeping them. Whereas co2 (1.03%), forest (1.55%), income (0.52%), industry (5.67%), year (1.03%) and ele_use (29.38%) have far fewer missing values, so we will fill them with the mean.
###Code
# Dropping useless columns
df = df.drop(['coal','ele_gen','natural','oil_con','oil_pro'], axis=1)
df['co2'].mean()
df['co2'].fillna(value = (df['co2'].mean()), inplace=True)
df['forest'].mean()
df['forest'].fillna(value = (df['forest'].mean()), inplace=True)
df['income'].mean()
df['income'].fillna(value = (df['income'].mean()), inplace=True)
df['industry'].mean()
df['industry'].fillna(value = (df['industry'].mean()), inplace=True)
df['year'].mean()
df['year'].fillna(value = (df['year'].mean()), inplace=True)
df['ele_use'].mean()
df['ele_use'].fillna(value = (df['ele_use'].mean()), inplace=True)
df=df.dropna()
df.isnull().sum()
df.info()
df.columns
###Output
_____no_output_____
###Markdown
Hence, we have handled all the missing values and prepared the data. Step 3: Data Visualisation
###Code
# Visualizing our data
sns.pairplot(df)
plt.show()
sns.jointplot('ele_use', 'co2', df)
plt.show()
sns.jointplot('forest', 'co2', df)
plt.show()
sns.jointplot('income', 'co2', df)
plt.show()
sns.jointplot('industry', 'co2', df)
plt.show()
sns.jointplot('year', 'co2', df)
plt.show()
###Output
_____no_output_____
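###Markdown
Before hard-coding cut-offs, a more systematic way to flag outliers is the IQR rule. The cell below is an optional addition (not part of the original assignment) and assumes `df` is the merged dataframe built above.
###Code
# optional check: flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] for the columns trimmed in the next cell
for col in ['ele_use', 'income']:
    q1, q3 = df[col].quantile(0.25), df[col].quantile(0.75)
    iqr = q3 - q1
    n_out = ((df[col] < q1 - 1.5 * iqr) | (df[col] > q3 + 1.5 * iqr)).sum()
    print(col, 'potential outliers:', n_out)
###Output
_____no_output_____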
###Markdown
Now treating the outliers
###Code
#treating outliers
df=df[(df.ele_use<50000)]
df=df[(df.income<100000)]
sns.jointplot('ele_use', 'co2', df)
plt.show()
sns.jointplot('forest', 'co2', df)
plt.show()
sns.jointplot('income', 'co2', df)
plt.show()
sns.jointplot('industry', 'co2', df)
plt.show()
sns.jointplot('year', 'co2', df)
plt.show()
# Now checking the correlation of our data
#Checking the correlation of our target variable with other variables
plt.figure(figsize = (10, 10))
sns.heatmap(df.corr(), annot = True, cmap="YlGnBu")
plt.show()
###Output
_____no_output_____
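###Markdown
As a complement to the heatmap (an addition, assuming `df` as above), the correlations of each feature with the target can also be printed as a sorted list.
###Code
# correlation of each feature with the target, strongest first
df.corr()['co2'].drop('co2').sort_values(ascending=False)
###Output
_____no_output_____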
###Markdown
Step 4: Data Preparation
###Code
# Splitting our data into target and independent variables
x= df.drop(['geo','co2'], axis=1)
x.head()
y=df['co2']
y.head()
# scaling the features
from sklearn.preprocessing import scale
# storing column names in cols, since column names are (annoyingly) lost after
# scaling (the df is converted to a numpy array)
cols = x.columns
x = pd.DataFrame(scale(x))
x.columns = cols
x.columns
# split into train and test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y,
train_size=0.7,
test_size = 0.3, random_state=100)
###Output
_____no_output_____
###Markdown
Step 5: Model Building and Evaluation Using Lasso Regression
###Code
lasso = Lasso()
# list of alphas to tune
params = {'alpha': [0.0001, 0.001, 0.01, 0.05, 0.1,
0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 2.0, 3.0,
4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0, 20, 50, 100, 500, 1000 ]}
# cross validation
folds = 5
model_cv = GridSearchCV(estimator = lasso,
param_grid = params,
scoring= 'neg_mean_absolute_error',
cv = folds,
return_train_score=True,
verbose = 1)
model_cv.fit(X_train, y_train)
cv_results = pd.DataFrame(model_cv.cv_results_)
cv_results.head()
# plotting mean test and train scores against alpha
cv_results['param_alpha'] = cv_results['param_alpha'].astype('float32')
# plotting
plt.plot(cv_results['param_alpha'], cv_results['mean_train_score'])
plt.plot(cv_results['param_alpha'], cv_results['mean_test_score'])
plt.xlabel('alpha')
plt.ylabel('Negative Mean Absolute Error')
plt.title("Negative Mean Absolute Error and alpha")
plt.legend(['train score', 'test score'], loc='upper left')
plt.show()
cv_results
lasso = Lasso(alpha = 0.1)
lasso.fit(X_train, y_train)
Y_pred1 = lasso.predict(X_train)
#Printing Lasso Coefficients
print('Lasso Coefficients',lasso.coef_,sep='\n')
# Calculate Mean Squared Error
mean_squared_error = np.mean((Y_pred1 - y_train)**2)
print("Mean squared error on train set", mean_squared_error)
Y_pred2 = lasso.predict(X_test)
mean_squared_error1 = np.mean((Y_pred2 - y_test)**2)
print("Mean squared error on test set", mean_squared_error1)
###Output
Lasso Coefficients
[ 2.508523 -0.29428438 2.02843619 1.34395783 0.20211723]
Mean squared error on train set 9.293386945837502
Mean squared error on test set 10.99298676683838
|
section_3/03_cross_validation.ipynb | ###Markdown
Cross-Validation We address the problem of overfitting by using "cross-validation". Data Preparation We import the required libraries, then load and preprocess the data.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score
import lightgbm as lgb
train_data = pd.read_csv("train.csv") # training data
test_data = pd.read_csv("test.csv") # test data
test_id = test_data["PassengerId"] # used later when submitting the results
data = pd.concat([train_data, test_data], sort=False) # concatenate the test and training data
# convert categorical data
data["Sex"].replace(["male", "female"], [0, 1], inplace=True)
data["Embarked"].fillna(("S"), inplace=True)
data["Embarked"] = data["Embarked"].map({"S": 0, "C": 1, "Q": 2})
# fill in missing values
data["Fare"].fillna(data["Fare"].mean(), inplace=True)
data["Age"].fillna(data["Age"].mean(), inplace=True)
# create a new feature
data["Family"] = data["Parch"] + data["SibSp"]
# drop unneeded features
data.drop(["Name", "PassengerId", "SibSp", "Parch", "Ticket", "Cabin"],
axis=1, inplace=True)
# create the inputs and the targets
train_data = data[:len(train_data)]
test_data = data[len(train_data):]
t = train_data["Survived"] # 正解
x_train = train_data.drop("Survived", axis=1) # 訓練時の入力
x_test = test_data.drop("Survived", axis=1) # テスト時の入力
x_train.head()
###Output
_____no_output_____
###Markdown
Cross-Validation We perform cross-validation with scikit-learn's `StratifiedKFold`. https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html Using `StratifiedKFold`, the ratio of 0s and 1s in the validation data can be kept constant across folds.
###Code
y_valids = np.zeros((len(x_train),)) # predictions: validation data
y_tests = [] # predictions: test data
skf = StratifiedKFold(n_splits=5, shuffle=True)
# hyperparameter settings
params = {
"objective": "binary", # 二値分類
"max_bin": 300, # 特徴量の最大分割数
"learning_rate": 0.05, # 学習率
"num_leaves": 32 # 分岐の末端の最大数
}
categorical_features = ["Embarked", "Pclass", "Sex"]
for _, (ids_train, ids_valid) in enumerate(skf.split(x_train, t)):
x_tr = x_train.loc[ids_train, :]
x_val = x_train.loc[ids_valid, :]
t_tr = t[ids_train]
t_val = t[ids_valid]
    # create the datasets
lgb_train = lgb.Dataset(x_tr, t_tr, categorical_feature=categorical_features)
lgb_val = lgb.Dataset(x_val, t_val, reference=lgb_train, categorical_feature=categorical_features)
    # train the model
model = lgb.train(params, lgb_train, valid_sets=[lgb_train, lgb_val],
                      verbose_eval=20, # interval (in rounds) for displaying training progress
                      num_boost_round=500, # maximum number of boosting rounds
                      early_stopping_rounds=10) # stop if there is no improvement for 10 consecutive rounds
    # store the results
y_valids[ids_valid] = model.predict(x_val, num_iteration=model.best_iteration)
y_test = model.predict(x_test, num_iteration=model.best_iteration)
y_tests.append(y_test)
###Output
_____no_output_____
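###Markdown
To confirm that `StratifiedKFold` keeps the ratio of 0s and 1s roughly constant in every validation fold, a small check like the one below can be run (an addition, reusing `x_train` and `t` from above).
###Code
# added check: print the positive-class ratio of each validation fold
skf_check = StratifiedKFold(n_splits=5, shuffle=True)
for fold, (_, ids_valid) in enumerate(skf_check.split(x_train, t)):
    print(f"fold {fold}: positive ratio = {t.iloc[ids_valid].mean():.3f}")
###Output
_____no_output_____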
###Markdown
Accuracy Using the predictions on the validation data and the true labels, we compute the accuracy.
###Code
y_valids_bin = (y_valids>0.5).astype(int) # convert the results to 0 or 1
accuracy_score(t, y_valids_bin) # compute the accuracy
###Output
_____no_output_____
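###Markdown
Beyond plain accuracy, a confusion matrix gives a fuller picture of the validation predictions (an addition, using scikit-learn as elsewhere in this notebook).
###Code
from sklearn.metrics import confusion_matrix
# added check; rows: true class, columns: predicted class
print(confusion_matrix(t, y_valids_bin))
###Output
_____no_output_____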
###Markdown
Submission Data We format the submission data and save it to a CSV file.
###Code
y_test_subm = sum(y_tests) / len(y_tests) # average the fold predictions
y_test_subm = (y_test_subm > 0.5).astype(int) # convert the results to 0 or 1
# format the output
survived_test = pd.Series(y_test_subm, name="Survived")
subm_data = pd.concat([test_id, survived_test], axis=1)
# save the submission csv file
subm_data.to_csv("submission_titanic_cv.csv", index=False)
subm_data
###Output
_____no_output_____ |
analysis/notebooks/y4analysis.ipynb | ###Markdown
Octanol Learning
###Code
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/OctLearningTest-03_23_2022-12_05/YArenaInfo.mat"
df1 = loadmat(loadmat_file)['YArenaInfo']
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/OctLearningTest-03_23_2022-14_12/YArenaInfo.mat"
df2 = loadmat(loadmat_file)['YArenaInfo']
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/OctLearningTest-03_31_2022-18_31/YArenaInfo.mat"
df3 = loadmat(loadmat_file)['YArenaInfo']
histories1 = df1['FlyChoiceMatrix'][0][0].T
schedules1 = np.concatenate([[df1['RewardStateTallyOdor1'][0][0]], [df1['RewardStateTallyOdor2'][0][0]]], axis=0).transpose((2, 1, 0))
histories2 = df2['FlyChoiceMatrix'][0][0].T[[0,2,3]]
schedules2 = np.concatenate([[df2['RewardStateTallyOdor1'][0][0][:,[0,2,3]]], [df2['RewardStateTallyOdor2'][0][0][:,[0,2,3]]]], axis=0).transpose((2, 1, 0))
histories3 = df3['FlyChoiceMatrix'][0][0].T[[0,2]]
schedules3 = np.concatenate([[df3['RewardStateTallyOdor1'][0][0][:,[0,2]]], [df3['RewardStateTallyOdor2'][0][0][:,[0,2]]]], axis=0).transpose((2, 1, 0))
histories = np.concatenate([histories1, histories2, histories3], axis=0)
schedules = np.concatenate([schedules1, schedules2, schedules3], axis=0)
import seaborn as sns
sns.set(style="ticks")
sns.set(font_scale=1.2)
plt.figure(figsize=(7,7))
for n in range(9):
plt.plot(np.cumsum(histories[n]==0),np.cumsum(histories[n]==1),'-',color=plt.cm.viridis(n/8),alpha=1,linewidth=2)
plt.plot(np.cumsum(histories[n]==0)[40],np.cumsum(histories[n]==1)[40],'o',color='green')
plt.plot(np.cumsum(histories==0,axis=1).mean(axis=0),np.cumsum(histories==1,axis=1).mean(axis=0),linewidth=3,color='gray')
plt.plot([0,len(histories[0])//2],[0,len(histories[0])//2],linewidth=2,color='black',linestyle='--')
plt.text(10,40,f"(n = {len(histories)} flies)",fontsize=14)
plt.xlabel('Cumulative number of OCT choices')
plt.ylabel('Cumulative number of MCH choices')
plt.xlim([-1,90])
plt.ylim([-1,90])
plt.box(False)
plt.gca().set_aspect('equal')
plt.plot([],[],'^',color='green',label='Rewarded trials start')
plt.legend(frameon=False)
plt.tight_layout()
plt.savefig('OctLearningTest.svg',transparent=True)
plt.show()
i = schedules[0]
plt.figure(figsize=(8,2))
plt.plot(np.arange(i.shape[0])[i[:,0]==1],np.zeros(np.sum(i[:,0]==1)),'o',color=plt.cm.viridis(0.6),linewidth=2)
plt.plot(np.arange(i.shape[0])[i[:,1]==1],np.ones(np.sum(i[:,1]==1)),'o',color=plt.cm.viridis(0.6),linewidth=2)
plt.plot(histories.mean(axis=0),'-',color=plt.cm.viridis(0.8),linewidth=2)
plt.plot([],[],'o',color='green',label='Rewarded Trials')
plt.yticks([0,0.5,1],["OCT -1.0","0.0","MCH 1.0"])
plt.xlim([0,i.shape[0]])
plt.axhline(0.5,linewidth=2,color='black',linestyle='--')
plt.box(False)
plt.xlabel('Trial Number')
plt.ylabel('Preference Index')
plt.legend(frameon=False)
plt.tight_layout()
plt.savefig('OctLearningTest-schedules.svg',transparent=True)
plt.show()
positions = df1['FlyCentroidsMatrix'][0][0]
nan_filter = ~np.all(np.all(np.isnan(positions),axis=0),axis=0)
positions = positions[:,:,nan_filter]
arena_mask = df1['MaskStack'][0][0].reshape((1024,1280,4,3)).sum(axis=3)
cleaned_positions = np.nan*np.ones(positions.shape)
cleaned_trials = df1['TrialCounterMatrix'][0][0].T[:,nan_filter]
cleaned_times = df1['TimeStampMatrix'][0][0].T[:,nan_filter][0]
for frame in range(positions.shape[2]):
for blob in range(positions.shape[1]):
i,j = np.int32(positions[:,blob,frame])
if i>0 and i<1024 and j>0 and j<1280:
arena = np.arange(4)[arena_mask[j,i,:]==1]
if len(arena)==1:
arena = arena[0]
cleaned_positions[:,arena,frame] = positions[:,blob,frame]
sns.set(style="ticks")
sns.set(font_scale=1.3)
trial_no = 1
arena_no = 0
for trial_no in [3, 33, 63, 93]:
plt.figure(figsize=(3,3))
frames = cleaned_trials[arena_no,:]==trial_no
times = cleaned_times[frames] - cleaned_times[frames][0]
plt.imshow(arena_mask[:,:,arena_no],cmap='gray',vmin=0,vmax=1)
plt.plot(cleaned_positions[0,arena_no,frames],cleaned_positions[1,arena_no,frames],'-',color='black',linewidth=1)
plt.scatter(cleaned_positions[0,arena_no,frames],cleaned_positions[1,arena_no,frames],c=times,s=20,cmap='viridis',vmin=0,vmax=30)
plt.gca().set_aspect('equal')
plt.colorbar()
buffer = 10
ylim1 = np.min(np.argmax(arena_mask[:,:,arena_no],axis=0)[np.argmax(arena_mask[:,:,arena_no],axis=0)>0]) - buffer
xlim1 = np.min(np.argmax(arena_mask[:,:,arena_no],axis=1)[np.argmax(arena_mask[:,:,arena_no],axis=1)>0]) - buffer
ylim2 = 1280-np.min(np.argmax(arena_mask[:,::-1,arena_no],axis=1)[np.argmax(arena_mask[:,::-1,arena_no],axis=1)>0]) + buffer
xlim2 = 1024-np.min(np.argmax(arena_mask[::-1,:,arena_no],axis=0)[np.argmax(arena_mask[::-1,:,arena_no],axis=0)>0]) + buffer
plt.xlim([xlim1,xlim2])
plt.ylim([ylim1,ylim2])
plt.grid(False)
plt.title(f"Trial {trial_no} - Arena {arena_no}")
plt.tight_layout()
plt.savefig(f"OCTLearning_Trial{trial_no}-Arena{arena_no}.svg",transparent=True)
###Output
C:\Users\labadmin\AppData\Local\Temp/ipykernel_21480/769735205.py:13: MatplotlibDeprecationWarning: Auto-removal of grids by pcolor() and pcolormesh() is deprecated since 3.5 and will be removed two minor releases later; please call grid(False) first.
plt.colorbar()
C:\Users\labadmin\AppData\Local\Temp/ipykernel_21480/769735205.py:13: MatplotlibDeprecationWarning: Auto-removal of grids by pcolor() and pcolormesh() is deprecated since 3.5 and will be removed two minor releases later; please call grid(False) first.
plt.colorbar()
C:\Users\labadmin\AppData\Local\Temp/ipykernel_21480/769735205.py:13: MatplotlibDeprecationWarning: Auto-removal of grids by pcolor() and pcolormesh() is deprecated since 3.5 and will be removed two minor releases later; please call grid(False) first.
plt.colorbar()
C:\Users\labadmin\AppData\Local\Temp/ipykernel_21480/769735205.py:13: MatplotlibDeprecationWarning: Auto-removal of grids by pcolor() and pcolormesh() is deprecated since 3.5 and will be removed two minor releases later; please call grid(False) first.
plt.colorbar()
###Markdown
Methylcyclohexanol Learning
###Code
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/MchLearningTest-03_23_2022-15_50/YArenaInfo.mat"
df1 = loadmat(loadmat_file)['YArenaInfo']
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/MchLearningTest-03_24_2022-10_35/YArenaInfo.mat"
df2 = loadmat(loadmat_file)['YArenaInfo']
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/MchLearningTest-03_31_2022-17_10/YArenaInfo.mat"
df3 = loadmat(loadmat_file)['YArenaInfo']
histories1 = df1['FlyChoiceMatrix'][0][0].T[[0,2]]
schedules1 = np.concatenate([[df1['RewardStateTallyOdor1'][0][0][:,[0,2]]], [df1['RewardStateTallyOdor2'][0][0][:,[0,2]]]], axis=0).transpose((2, 1, 0))
histories2 = df2['FlyChoiceMatrix'][0][0].T[[0,2,3]]
schedules2 = np.concatenate([[df2['RewardStateTallyOdor1'][0][0][:,[0,2,3]]], [df2['RewardStateTallyOdor2'][0][0][:,[0,2,3]]]], axis=0).transpose((2, 1, 0))
histories3 = df3['FlyChoiceMatrix'][0][0].T[[0,1,3]]
schedules3 = np.concatenate([[df3['RewardStateTallyOdor1'][0][0][:,[0,1,3]]], [df3['RewardStateTallyOdor2'][0][0][:,[0,1,3]]]], axis=0).transpose((2, 1, 0))
histories = np.concatenate([histories1, histories2, histories3], axis=0)
schedules = np.concatenate([schedules1, schedules2, schedules3], axis=0)
import seaborn as sns
sns.set(style="ticks")
sns.set(font_scale=1.2)
plt.figure(figsize=(7,7))
for n in range(8):
plt.plot(np.cumsum(histories[n]==0),np.cumsum(histories[n]==1),'-',color=plt.cm.viridis(n/8),alpha=1,linewidth=2)
plt.plot(np.cumsum(histories[n]==0)[40],np.cumsum(histories[n]==1)[40],'o',color='green')
plt.plot(np.cumsum(histories==0,axis=1).mean(axis=0),np.cumsum(histories==1,axis=1).mean(axis=0),linewidth=3,color='gray')
plt.plot([0,len(histories[0])//2],[0,len(histories[0])//2],linewidth=2,color='black',linestyle='--')
plt.text(40,10,f"(n = {len(histories)} flies)",fontsize=14)
plt.xlabel('Cumulative number of OCT choices')
plt.ylabel('Cumulative number of MCH choices')
plt.box(False)
plt.gca().set_aspect('equal')
plt.plot([],[],'^',color='green',label='Rewarded trials start')
plt.legend(frameon=False)
plt.tight_layout()
plt.savefig('MchLearningTest.svg',transparent=True)
plt.show()
i = schedules[0]
plt.figure(figsize=(8,2))
plt.plot(np.arange(i.shape[0])[i[:,0]==1],np.zeros(np.sum(i[:,0]==1)),'o',color=plt.cm.viridis(0.6),linewidth=2)
plt.plot(np.arange(i.shape[0])[i[:,1]==1],np.ones(np.sum(i[:,1]==1)),'o',color=plt.cm.viridis(0.6),linewidth=2)
plt.plot(histories.mean(axis=0),'-',color=plt.cm.viridis(0.8),linewidth=2)
plt.plot([],[],'o',color='green',label='Rewarded Trials')
plt.yticks([0,0.5,1],["OCT -1.0","0.0","MCH 1.0"])
plt.xlim([0,i.shape[0]])
plt.axhline(0.5,linewidth=2,color='black',linestyle='--')
plt.box(False)
plt.xlabel('Trial Number')
plt.ylabel('Preference Index')
plt.legend(frameon=False)
plt.tight_layout()
plt.savefig('MchLearningTest-schedules.svg',transparent=True)
plt.show()
import seaborn as sns
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/OctLearningTest-03_23_2022-12_05/YArenaInfo.mat"
df1 = loadmat(loadmat_file)['YArenaInfo']
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/OctLearningTest-03_23_2022-14_12/YArenaInfo.mat"
df2 = loadmat(loadmat_file)['YArenaInfo']
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/OctLearningTest-03_31_2022-18_31/YArenaInfo.mat"
df3 = loadmat(loadmat_file)['YArenaInfo']
histories1 = df1['FlyChoiceMatrix'][0][0].T
schedules1 = np.concatenate([[df1['RewardStateTallyOdor1'][0][0]], [df1['RewardStateTallyOdor2'][0][0]]], axis=0).transpose((2, 1, 0))
histories2 = df2['FlyChoiceMatrix'][0][0].T[[0,2,3]]
schedules2 = np.concatenate([[df2['RewardStateTallyOdor1'][0][0][:,[0,2,3]]], [df2['RewardStateTallyOdor2'][0][0][:,[0,2,3]]]], axis=0).transpose((2, 1, 0))
histories3 = df3['FlyChoiceMatrix'][0][0].T[[0,2]]
schedules3 = np.concatenate([[df3['RewardStateTallyOdor1'][0][0][:,[0,2]]], [df3['RewardStateTallyOdor2'][0][0][:,[0,2]]]], axis=0).transpose((2, 1, 0))
histories_o = np.concatenate([histories1, histories2, histories3], axis=0)
schedules_o = np.concatenate([schedules1, schedules2, schedules3], axis=0)
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/MchLearningTest-03_23_2022-15_50/YArenaInfo.mat"
df1 = loadmat(loadmat_file)['YArenaInfo']
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/MchLearningTest-03_24_2022-10_35/YArenaInfo.mat"
df2 = loadmat(loadmat_file)['YArenaInfo']
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/MchLearningTest-03_31_2022-17_10/YArenaInfo.mat"
df3 = loadmat(loadmat_file)['YArenaInfo']
histories1 = df1['FlyChoiceMatrix'][0][0].T[[0,2]]
schedules1 = np.concatenate([[df1['RewardStateTallyOdor1'][0][0][:,[0,2]]], [df1['RewardStateTallyOdor2'][0][0][:,[0,2]]]], axis=0).transpose((2, 1, 0))
histories2 = df2['FlyChoiceMatrix'][0][0].T[[0,2,3]]
schedules2 = np.concatenate([[df2['RewardStateTallyOdor1'][0][0][:,[0,2,3]]], [df2['RewardStateTallyOdor2'][0][0][:,[0,2,3]]]], axis=0).transpose((2, 1, 0))
histories3 = df3['FlyChoiceMatrix'][0][0].T[[0,1,3]]
schedules3 = np.concatenate([[df3['RewardStateTallyOdor1'][0][0][:,[0,1,3]]], [df3['RewardStateTallyOdor2'][0][0][:,[0,1,3]]]], axis=0).transpose((2, 1, 0))
histories_m = np.concatenate([histories1, histories2, histories3], axis=0)
schedules_m = np.concatenate([schedules1, schedules2, schedules3], axis=0)
sns.set(style="ticks")
sns.set(font_scale=1.5)
plt.figure(figsize=(7,7))
for n in range(len(histories_o)):
plt.plot(np.cumsum(histories_o[n]==0),np.cumsum(histories_o[n]==1),'-',color='#9ccb3b',alpha=0.5,linewidth=2)
plt.plot(np.cumsum(histories_o[n]==0)[40],np.cumsum(histories_o[n]==1)[40],'o',color='green')
plt.plot(np.cumsum(histories_o==0,axis=1).mean(axis=0),np.cumsum(histories_o==1,axis=1).mean(axis=0),linewidth=4,color='#9ccb3b')
plt.plot([0,len(histories_o[0])//2],[0,len(histories_o[0])//2],linewidth=2,color='black',linestyle='--')
plt.text(60,40,f"(n = {len(histories_o)} flies)",fontsize=14,color='#9ccb3b')
for n in range(len(histories_m)):
plt.plot(np.cumsum(histories_m[n]==0),np.cumsum(histories_m[n]==1),'-',color='#39b54a',alpha=0.5,linewidth=2)
plt.plot(np.cumsum(histories_m[n]==0)[40],np.cumsum(histories_m[n]==1)[40],'o',color='green')
plt.plot(np.cumsum(histories_m==0,axis=1).mean(axis=0),np.cumsum(histories_m==1,axis=1).mean(axis=0),linewidth=4,color='#39b54a')
plt.plot([0,len(histories_m[0])//2],[0,len(histories_m[0])//2],linewidth=2,color='black',linestyle='--')
plt.text(40,60,f"(n = {len(histories_m)} flies)",fontsize=14, color='#39b54a')
plt.xlabel('Cumulative number of OCT choices')
plt.ylabel('Cumulative number of MCH choices')
plt.xlim([-1,90])
plt.ylim([-1,90])
plt.box(False)
plt.gca().set_aspect('equal')
plt.plot([],[],'o',color='green',label='Rewarded trials start')
plt.legend(frameon=False)
plt.tight_layout()
plt.savefig('LearningTest.svg',transparent=True)
plt.show()
plt.figure(figsize=(8,3))
i = schedules_o[0]
plt.plot(np.arange(i.shape[0])[i[:,0]==1],np.zeros(np.sum(i[:,0]==1)),'o',color='#9ccb3b',linewidth=2,alpha=0.5)
plt.plot(np.arange(i.shape[0])[i[:,1]==1],np.ones(np.sum(i[:,1]==1)),'o',color='#9ccb3b',linewidth=2,alpha=0.5)
plt.plot(histories_o.mean(axis=0),'-',color='#9ccb3b',linewidth=2)
i = schedules_m[0]
plt.plot(np.arange(i.shape[0])[i[:,0]==1],np.zeros(np.sum(i[:,0]==1)),'o',color='#39b54a',linewidth=2,alpha=0.5)
plt.plot(np.arange(i.shape[0])[i[:,1]==1],np.ones(np.sum(i[:,1]==1)),'o',color='#39b54a',linewidth=2,alpha=0.5)
plt.plot(histories_m.mean(axis=0),'-',color='#39b54a',linewidth=2)
plt.plot([],[],'o',color='green',label='Rewarded Trials')
plt.yticks([0,0.5,1],["OCT -1.0","0.0","MCH 1.0"])
plt.xlim([0,i.shape[0]])
plt.axhline(0.5,linewidth=2,color='black',linestyle='--')
plt.box(False)
plt.xlabel('Trial Number')
plt.ylabel('Preference Index')
plt.legend(frameon=False,loc='lower right')
plt.tight_layout()
plt.savefig('LearningTest-schedules.svg',transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
ACV Preference Experiments
###Code
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/ACVPreferenceTest-03_26_2022-10_41/YArenaInfo.mat"
df1 = loadmat(loadmat_file)['YArenaInfo']
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/ACVPreferenceTest-03_26_2022-13_00/YArenaInfo.mat"
df2 = loadmat(loadmat_file)['YArenaInfo']
histories1 = df1['FlyChoiceMatrix'][0][0].T#[[0,2]]
histories2 = df2['FlyChoiceMatrix'][0][0].T#[[0,2,3]]
histories = np.concatenate([histories1, histories2], axis=0)
import seaborn as sns
sns.set(style="ticks")
sns.set(font_scale=1.2)
plt.figure(figsize=(7,7))
for n in range(8):
plt.plot(np.cumsum(histories[n]==0),np.cumsum(histories[n]==1),'-',color=plt.cm.viridis(n/7),alpha=1,linewidth=1)
plt.plot(np.cumsum(histories==0,axis=1).mean(axis=0),np.cumsum(histories==1,axis=1).mean(axis=0),linewidth=3,color='gray',label=f"Average")
plt.plot([0,len(histories[0])//2],[0,len(histories[0])//2],linewidth=2,color='black',linestyle='--')
plt.text(40,10,f"(n = {len(histories)} flies)",fontsize=14)
plt.xlabel('Cumulative number of AIR choices')
plt.ylabel('Cumulative number of ACV choices')
plt.box(False)
plt.gca().set_aspect('equal')
plt.tight_layout()
plt.savefig('ACVPreferenceTest.png',dpi=300,transparent=True)
plt.show()
plt.figure(figsize=(8,2))
plt.plot(histories.mean(axis=0),'-',color=plt.cm.viridis(0.8),linewidth=2)
plt.yticks([0,1],["AIR","ACV"])
plt.xlim([0,i.shape[0]])
plt.axhline(0.5,linewidth=2,color='black',linestyle='--')
plt.box(False)
plt.xlabel('Trial')
plt.ylabel('Odor Choice')
plt.tight_layout()
plt.savefig('ACVPreferenceTest-schedules.png',dpi=300,transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
Reversal Experiment
###Code
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/MchLearningTest-03_24_2022-10_35/YArenaInfo.mat"
df1 = loadmat(loadmat_file)['YArenaInfo']
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/MchUnlearningOctLearningTest-03_24_2022-11_32/YArenaInfo.mat"
df2 = loadmat(loadmat_file)['YArenaInfo']
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/OctLearningAfterMCHUnlearningTest-03_24_2022-11_58/YArenaInfo.mat"
df3 = loadmat(loadmat_file)['YArenaInfo']
histories1 = df1['FlyChoiceMatrix'][0][0].T[[0,2,3]]
schedules1 = np.concatenate([[df1['RewardStateTallyOdor1'][0][0][:,[0,2,3]]], [df1['RewardStateTallyOdor2'][0][0][:,[0,2,3]]]], axis=0).transpose((2, 1, 0))
histories2 = df2['FlyChoiceMatrix'][0][0].T[[0,2,1]]
schedules2 = np.concatenate([[df2['RewardStateTallyOdor1'][0][0][:,[0,2,1]]], [df2['RewardStateTallyOdor2'][0][0][:,[0,2,1]]]], axis=0).transpose((2, 1, 0))
histories3 = df3['FlyChoiceMatrix'][0][0].T[[0,2,1]]
schedules3 = np.concatenate([[df3['RewardStateTallyOdor1'][0][0][:,[0,2,1]]], [df3['RewardStateTallyOdor2'][0][0][:,[0,2,1]]]], axis=0).transpose((2, 1, 0))
histories = np.concatenate([histories1, histories2, histories3], axis=1)
schedules = np.concatenate([schedules1, schedules2, schedules3], axis=1)
import seaborn as sns
sns.set(style="ticks")
sns.set(font_scale=1.2)
plt.figure(figsize=(7,7))
for n in range(3):
plt.plot(np.cumsum(histories[n]==0),np.cumsum(histories[n]==1),'-',color=plt.cm.viridis(n/2),alpha=1,linewidth=1)
plt.plot(np.cumsum(histories[n]==0)[40],np.cumsum(histories[n]==1)[40],'o',color='green')
plt.plot(np.cumsum(histories[n]==0)[100],np.cumsum(histories[n]==1)[100],'^',color='green')
plt.plot(np.cumsum(histories==0,axis=1).mean(axis=0),np.cumsum(histories==1,axis=1).mean(axis=0),linewidth=3,color='gray')
plt.plot([0,len(histories[0])//2],[0,len(histories[0])//2],linewidth=2,color='black',linestyle='--')
plt.text(40,20,f"(n = {len(histories)} flies)",fontsize=14)
plt.xlabel('Cumulative number of OCT choices')
plt.ylabel('Cumulative number of MCH choices')
plt.box(False)
plt.gca().set_aspect('equal')
plt.plot([],[],'o',color='green',label='Rewarded Trials start')
plt.plot([],[],'^',color='green',label='Rewarded Odor Change')
plt.legend(frameon=False)
plt.tight_layout()
plt.savefig('ReversalLearningTest.png',dpi=300,transparent=True)
plt.show()
i = schedules[0]
plt.figure(figsize=(8,2))
plt.plot(np.arange(i.shape[0])[i[:,0]==1],np.zeros(np.sum(i[:,0]==1)),'o',color=plt.cm.viridis(0.6),linewidth=2)
plt.plot(np.arange(i.shape[0])[i[:,1]==1],np.ones(np.sum(i[:,1]==1)),'o',color=plt.cm.viridis(0.6),linewidth=2)
plt.plot(histories.mean(axis=0),'-',color=plt.cm.viridis(0.8),linewidth=2)
plt.yticks([0,1],["OCT","MCH"])
plt.xlim([0,i.shape[0]])
plt.axhline(0.5,linewidth=2,color='black',linestyle='--')
plt.box(False)
plt.xlabel('Trial')
plt.ylabel('Odor Choice')
plt.tight_layout()
plt.savefig('ReversalLearningTest-schedules.png',dpi=300,transparent=True)
plt.show()
loadmat_file = "//dm11/turnerlab/Rishika/4Y-Maze/RunData/OctLearningTest-03_23_2022-12_05/YArenaInfo.mat"
df1 = loadmat(loadmat_file)['YArenaInfo']
import pandas as pd
latencies = []
columns = []
for i in os.listdir('//dm11/turnerlab/Rishika/4Y-Maze/RunData/'):
loadmat_file = f"//dm11/turnerlab/Rishika/4Y-Maze/RunData/{i}/YArenaInfo.mat"
frame_latency = np.diff(df1['TimeStampMatrix'][0][0].T[0][1:])
# frame_latency = frame_latency[~np.isnan(frame_latency)]
latencies.append(frame_latency)
columns.append(i.split('-')[1]+'-'+i.split('-')[2])
min_len = min(len(l) for l in latencies) # sessions may have different numbers of frames
fr = pd.DataFrame(1/np.array([l[:min_len] for l in latencies]).T,columns=columns).dropna()
fr = fr.melt(var_name='Session',value_name='Frame Rate (Hz)')
plt.figure(figsize=(5,7))
sns.set(style="ticks")
sns.set(font_scale=1.2)
sns.boxplot(x='Session',y='Frame Rate (Hz)',data=fr,palette=sns.color_palette("viridis", n_colors=11))
plt.xticks(rotation=90)
plt.ylabel('Frame Rate (Hz)')
plt.tight_layout()
plt.savefig('FrameRate.png',dpi=300,transparent=True)
###Output
_____no_output_____ |