path | concatenated_notebook
---|---|
notebooks/RedBlackMarble-DP.ipynb | ###Markdown
Red Black Marble Question: There are 4 red and 4 black marbles in a bag. Draw one at a time. If you have drawn more black than red, your winnings are the number of black marbles drawn in excess of red. Otherwise, you get nothing. You may quit at any time. What is your optimal strategy? Intuition:
- Firstly, since you can quit at any time, you are guaranteed at least 0 winnings; no loss can occur. If at any point you are behind (e.g. 3 red, 1 black in hand), just keep drawing until all 8 marbles are out of the bag.
- Next, think about the strategy. Is it optimal to quit at the following points, after you draw:
 - R, R,
 - R, R, B,
 - R, R, B, B,
 - ...
 - You don't want to quit in cases 1 and 2 and realize the loss. In case 3, you can stop with winnings of 0 or keep going with a chance of winning something, and you choose the latter.
 - The option to quit at any time is in our favor ; )
- If you are lucky enough to draw a black on the 1st draw, you may not be satisfied to stop and take just 1 dollar, so you make a 2nd draw. Either you draw a black (p = 3/7) and are now 2 dollars ahead, or you draw a red (p = 4/7) and are back to 0. The expectation after the 2nd draw, 3/7 \* 2 + 4/7 \* 0 = 6/7, is less than 1. Is it a good time to stop? <!--- No. Even when you hold 1 black and 1 red after the 2nd draw, the immediate payoff is 0, yet the chance of winning later gives a positive expected payoff. So we need to modify the payoff of 0 above. - From this we see that the strategy at a state S(num of red, num of black) does not depend solely on its $$ Payoff_{immediate} = \max(\#black - \#red, 0) $$ The decision is made by comparing it to $$ Payoff_{expected} = p_{red} \cdot payoff(\#red+1, \#black) + p_{black} \cdot payoff(\#red, \#black+1) $$ We take the max of the two. --->
- This **Dynamic Programming** question can be solved by backward induction. The state space looks like a binomial tree, and the state, strategy and payoff at each node of this tree can be computed. The logic is the same for many questions of this kind. Draw a binomial tree
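A small worked example of the backward induction (added for illustration; here a state is labelled by the number of red and black marbles *remaining* in the bag, matching the code below): with 1 red and 1 black marble left, $$ payoff(1,1) = \max\Big(0,\ \tfrac{1}{2}\,payoff(0,1) + \tfrac{1}{2}\,payoff(1,0)\Big) = \max\Big(0,\ \tfrac{1}{2}\cdot 0 + \tfrac{1}{2}\cdot 1\Big) = \tfrac{1}{2} $$ since $payoff(0,1)=0$ (you are one behind, and drawing the last black marble only brings you back to even) and $payoff(1,0)=1$ (you are one black ahead with only red marbles left, so you stop). Working upward from the leaves in this way gives the value and the stop/go decision at every node of the tree.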
###Code
import matplotlib.pyplot as plt
from IPython.display import Image
Image(url= "https://github.com/dlu-umich/Quant-Lab/blob/master/half_tree.png?raw=true")
###Output
_____no_output_____
###Markdown
Coding - recursion
###Code
def payoff(r, b):
    # state = (red remaining, black remaining); with equal starting counts the current winnings are r - b
    if r == 0:
        return 0
    if b == 0:
        return r
    payoff_if_stop = max(r - b, 0)
    payoff_if_go = r/(r+b) * payoff(r-1, b) + b/(r+b) * payoff(r, b-1)  ## backward recursion
    return max(payoff_if_stop, payoff_if_go)
print(payoff(0,0))
print(payoff(1,3))
print(payoff(4,4))
import matplotlib.pyplot as plt
n = 12 ## try n = 20 ?
x = range(n)
res = [payoff(i,i) for i in range(n)]
plt.plot(x,res)
plt.title("payoff of game at the initial point")
plt.show()
# payoff of game at the initial point
import timeit
n = 14 ## try n = 20 ?
for i in range(n):
start = timeit.default_timer()
res = payoff(i,i)
stop = timeit.default_timer()
print('Time: ', stop - start)
# observe that running time grows exponentially fast with n
###Output
Time: 2.318837353501653e-06
Time: 1.0202884355407272e-05
Time: 1.6231861474511533e-05
Time: 4.4057909716531396e-05
Time: 0.00015397080027250985
Time: 0.00047165151770223624
Time: 0.0019237074684649712
Time: 0.007091468394478755
Time: 0.016908961981734055
Time: 0.050692567152370346
Time: 0.2014174588974485
Time: 0.6451585226779974
###Markdown
Complexity analysis: For a bag with n red marbles and n black marbles, how long will the recursion take?
- From the original state S(n,n) we make two function calls, to S(n-1,n) and S(n,n-1).
- Each node again requires two function calls. Note that although the binomial tree recombines, the function calls do not, so going one layer deeper doubles the computing time. With n + n marbles the tree is 2n layers deep (n on each side), giving $O(2^{2n})$ complexity.
 Coding - recursion (with cache)
###Code
payoff_dict = {}
def payoff(r, b):
    if (r, b) in payoff_dict:  # reuse nodes of the recombining tree that were already computed
        return payoff_dict[(r, b)]
    if r == 0:
        return 0
    if b == 0:
        return r
    payoff_if_stop = max(r-b, 0)
    red_payoff = payoff(r-1, b)      # value after drawing a red marble
    black_payoff = payoff(r, b-1)    # value after drawing a black marble
    payoff_if_go = r/(r+b) * red_payoff + b/(r+b) * black_payoff
    payoff_dict[(r, b)] = max(payoff_if_stop, payoff_if_go)
    return payoff_dict[(r, b)]
print(payoff(0,0))
print(payoff(1,3))
print(payoff(4,4))
###Output
0
0.25
0.9999999999999999
###Markdown
Coding - tree node as a class object. By making use of the recombining tree and already-computed nodes, we can turn this into $O(n^2)$ complexity: each of the roughly $(n+1)^2$ distinct states is evaluated once, so the work is proportional to counting the edges of the tree.
###Code
class Node:
    # The blanks in the original skeleton are filled in here using the same backward induction as the recursive version above.
    def __init__(self, red, black, game):
        self.r = red
        self.b = black
        '''pointer to the next node - up/down'''
        self.next_node_up = game.node(red - 1, black) if red > 0 else None      # after drawing a red
        self.next_node_down = game.node(red, black - 1) if black > 0 else None  # after drawing a black
        '''backward induction'''
        self.winning_if_stop = max(red - black, 0)
        if red + black == 0:
            self.winning_if_go = 0
        else:
            p_red, p_black = red / (red + black), black / (red + black)
            self.winning_if_go = (p_red * (self.next_node_up.winning() if self.next_node_up else 0)
                                  + p_black * (self.next_node_down.winning() if self.next_node_down else 0))
    def winning(self):
        '''return the winning if following the optimal strategy'''
        return max(self.winning_if_stop, self.winning_if_go)
    def strategy_query(self):
        '''print the strategy given the current node'''
        if self.winning_if_stop >= self.winning_if_go:
            print("stop and take", self.winning_if_stop)
        else:
            print("keep drawing, expected winning", self.winning_if_go)
class Game:
    def __init__(self):
        self.node_dict = {}
    def node(self, red, black):
        key = (red, black)
        if key not in self.node_dict:
            self.node_dict[key] = Node(red, black, self)
        return self.node_dict[key]
dp = Game()
x = dp.node(4,4).winning()
print(x)
strategy = dp.node(1,3).strategy_query()
## performance test
dp = Game()
res = []
time = []
for n in range(100):
start = timeit.default_timer()
res.append(dp.node(n,n).winning())
stop = timeit.default_timer()
time.append(stop - start)
plt.plot(time)
plt.title("running time with calculated nodes")
plt.show()
###Output
_____no_output_____ |
Testing/Testing_Final_Model.ipynb | ###Markdown
Generating Headlines for Test Dataset using Final Model **Downloading and Importing required libraries**
###Code
!pip install compress-pickle
!pip install rouge
!python -m spacy download en_core_web_md
!sudo apt install openjdk-8-jdk
!sudo update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
!pip install language-check
%tensorflow_version 1.x
import tensorflow as tf
from tensorflow.contrib import rnn
import nltk
from nltk.tokenize import word_tokenize
from nltk.translate.bleu_score import sentence_bleu,SmoothingFunction
from rouge import Rouge
import collections
import compress_pickle as pickle
import re
import bz2
import os
import time
import warnings
import numpy as np
import pandas as pd
from tqdm.notebook import trange,tqdm
import spacy
import en_core_web_md
nltk.download('punkt')
nlp = en_core_web_md.load()
###Output
_____no_output_____
###Markdown
**Necessary Utility functions**
###Code
default_path = "/Testing/"
dataset_path = "/Dataset/"
test_article_path = dataset_path + "abstract.test.bz2"
test_title_path = dataset_path + "title.test.bz2"
def clean_str(sentence):
sentence = re.sub("[#.]+", " ", sentence)
return sentence
def get_text_list(data_path, toy=False,clean=True):
with bz2.open (data_path, "r") as f:
if not clean:
return [x.decode().strip() for x in f.readlines()[5000:10000:5]]
if not toy:
return [clean_str(x.decode().strip()) for x in tqdm(f.readlines())]
else:
return [clean_str(x.decode().strip()) for x in tqdm(f.readlines()[:20000])]
def build_dict(step, toy=False,train_article_list=[],train_title_list=[]):
if step == "test" or os.path.exists(default_path+"word_dict.bz"):
with open(default_path+"word_dict.bz", "rb") as f:
word_dict = pickle.load(f,compression='bz2')
elif step == "train":
words = list()
for sentence in tqdm(train_article_list + train_title_list):
for word in word_tokenize(sentence):
words.append(word)
word_counter = collections.Counter(words).most_common(500000)
word_dict = dict()
word_dict["<padding>"] = 0
word_dict["<unk>"] = 1
word_dict["<s>"] = 2
word_dict["</s>"] = 3
cur_len = 4
for word, _ in tqdm(word_counter):
word_dict[word] = cur_len
cur_len += 1
pickle.dump(word_dict, default_path+"word_dict",compression='bz2')
reversed_dict = dict(zip(word_dict.values(), word_dict.keys()))
article_max_len = 250
summary_max_len = 15
return word_dict, reversed_dict, article_max_len, summary_max_len
def batch_iter(inputs, outputs, batch_size, num_epochs):
inputs = np.array(inputs)
outputs = np.array(outputs)
num_batches_per_epoch = (len(inputs) - 1) // batch_size + 1
for epoch in range(num_epochs):
for batch_num in range(num_batches_per_epoch):
start_index = batch_num * batch_size
end_index = min((batch_num + 1) * batch_size, len(inputs))
yield inputs[start_index:end_index], outputs[start_index:end_index]
###Output
_____no_output_____
###Markdown
**Title Modification (OOV replacement and Grammar Check)**
###Code
import language_check  # provided by the language-check package installed above
from collections import defaultdict  # used by get_unk_tokens below
tool = language_check.LanguageTool('en-US')
smoothing = SmoothingFunction().method0
def get_unk_tokens(word_dict, article):
unk = defaultdict(float)
tokens = word_tokenize(article)
n = min(250,len(tokens))
for i,token in enumerate(tokens[:250]):
if token not in word_dict:
unk[token]+= get_weight(i,n)
tup = []
for i in unk:
tup.append((unk[i],i))
return sorted(tup[:5],reverse=True)
def get_weight(index, token_len):
p = index/token_len
if(p<=0.1):
return 0.35
if(p<=0.2):
return 0.3
if(p<=0.4):
return 0.2
if(p<=0.7):
return 0.1
return 0.05
def correct(text):
matches = tool.check(text)
text = language_check.correct(text, matches)
return text
def update_title(word_dict,article, title):
replace_count = 0
unk_list = get_unk_tokens(word_dict, article)
for j in range(min(title.count('<unk>'), len(unk_list))):
title = title.replace('<unk>', unk_list[j][1],1)
replace_count += 1
return correct(title)
def calculate_bleu(title, reference):
title_tok,reference_tok = word_tokenize(title), [word_tokenize(reference)]
return sentence_bleu(reference_tok,title_tok,smoothing_function=smoothing)
###Output
_____no_output_____
###Markdown
**RNN Model Implementation**
###Code
class Model(object):
def __init__(self, reversed_dict, article_max_len, summary_max_len, args, forward_only=False):
self.vocabulary_size = len(reversed_dict)
self.embedding_size = args.embedding_size
self.num_hidden = args.num_hidden
self.num_layers = args.num_layers
self.learning_rate = args.learning_rate
self.beam_width = args.beam_width
if not forward_only:
self.keep_prob = args.keep_prob
else:
self.keep_prob = 1.0
self.cell = tf.nn.rnn_cell.LSTMCell
with tf.variable_scope("decoder/projection"):
self.projection_layer = tf.layers.Dense(self.vocabulary_size, use_bias=False)
self.batch_size = tf.placeholder(tf.int32, (), name="batch_size")
self.X = tf.placeholder(tf.int32, [None, article_max_len])
self.X_len = tf.placeholder(tf.int32, [None])
self.decoder_input = tf.placeholder(tf.int32, [None, summary_max_len])
self.decoder_len = tf.placeholder(tf.int32, [None])
self.decoder_target = tf.placeholder(tf.int32, [None, summary_max_len])
self.global_step = tf.Variable(0, trainable=False)
with tf.name_scope("embedding"):
if not forward_only and args.glove:
init_embeddings = tf.constant(get_init_embedding(reversed_dict, self.embedding_size), dtype=tf.float32)
else:
init_embeddings = tf.random_uniform([self.vocabulary_size, self.embedding_size], -1.0, 1.0)
self.embeddings = tf.get_variable("embeddings", initializer=init_embeddings)
self.encoder_emb_inp = tf.transpose(tf.nn.embedding_lookup(self.embeddings, self.X), perm=[1, 0, 2])
self.decoder_emb_inp = tf.transpose(tf.nn.embedding_lookup(self.embeddings, self.decoder_input), perm=[1, 0, 2])
with tf.name_scope("encoder"):
fw_cells = [self.cell(self.num_hidden) for _ in range(self.num_layers)]
bw_cells = [self.cell(self.num_hidden) for _ in range(self.num_layers)]
fw_cells = [rnn.DropoutWrapper(cell) for cell in fw_cells]
bw_cells = [rnn.DropoutWrapper(cell) for cell in bw_cells]
encoder_outputs, encoder_state_fw, encoder_state_bw = tf.contrib.rnn.stack_bidirectional_dynamic_rnn(
fw_cells, bw_cells, self.encoder_emb_inp,
sequence_length=self.X_len, time_major=True, dtype=tf.float32)
self.encoder_output = tf.concat(encoder_outputs, 2)
encoder_state_c = tf.concat((encoder_state_fw[0].c, encoder_state_bw[0].c), 1)
encoder_state_h = tf.concat((encoder_state_fw[0].h, encoder_state_bw[0].h), 1)
self.encoder_state = rnn.LSTMStateTuple(c=encoder_state_c, h=encoder_state_h)
with tf.name_scope("decoder"), tf.variable_scope("decoder") as decoder_scope:
decoder_cell = self.cell(self.num_hidden * 2)
if not forward_only:
attention_states = tf.transpose(self.encoder_output, [1, 0, 2])
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
self.num_hidden * 2, attention_states, memory_sequence_length=self.X_len, normalize=True)
decoder_cell = tf.contrib.seq2seq.AttentionWrapper(decoder_cell, attention_mechanism,
attention_layer_size=self.num_hidden * 2)
initial_state = decoder_cell.zero_state(dtype=tf.float32, batch_size=self.batch_size)
initial_state = initial_state.clone(cell_state=self.encoder_state)
helper = tf.contrib.seq2seq.TrainingHelper(self.decoder_emb_inp, self.decoder_len, time_major=True)
decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, helper, initial_state)
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(decoder, output_time_major=True, scope=decoder_scope)
self.decoder_output = outputs.rnn_output
self.logits = tf.transpose(
self.projection_layer(self.decoder_output), perm=[1, 0, 2])
self.logits_reshape = tf.concat(
[self.logits, tf.zeros([self.batch_size, summary_max_len - tf.shape(self.logits)[1], self.vocabulary_size])], axis=1)
else:
tiled_encoder_output = tf.contrib.seq2seq.tile_batch(
tf.transpose(self.encoder_output, perm=[1, 0, 2]), multiplier=self.beam_width)
tiled_encoder_final_state = tf.contrib.seq2seq.tile_batch(self.encoder_state, multiplier=self.beam_width)
tiled_seq_len = tf.contrib.seq2seq.tile_batch(self.X_len, multiplier=self.beam_width)
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
self.num_hidden * 2, tiled_encoder_output, memory_sequence_length=tiled_seq_len, normalize=True)
decoder_cell = tf.contrib.seq2seq.AttentionWrapper(decoder_cell, attention_mechanism,
attention_layer_size=self.num_hidden * 2)
initial_state = decoder_cell.zero_state(dtype=tf.float32, batch_size=self.batch_size * self.beam_width)
initial_state = initial_state.clone(cell_state=tiled_encoder_final_state)
decoder = tf.contrib.seq2seq.BeamSearchDecoder(
cell=decoder_cell,
embedding=self.embeddings,
start_tokens=tf.fill([self.batch_size], tf.constant(2)),
end_token=tf.constant(3),
initial_state=initial_state,
beam_width=self.beam_width,
output_layer=self.projection_layer
)
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder, output_time_major=True, maximum_iterations=summary_max_len, scope=decoder_scope)
self.prediction = tf.transpose(outputs.predicted_ids, perm=[1, 2, 0])
with tf.name_scope("loss"):
if not forward_only:
crossent = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=self.logits_reshape, labels=self.decoder_target)
weights = tf.sequence_mask(self.decoder_len, summary_max_len, dtype=tf.float32)
self.loss = tf.reduce_sum(crossent * weights / tf.cast(self.batch_size,tf.float32))
params = tf.trainable_variables()
gradients = tf.gradients(self.loss, params)
clipped_gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
optimizer = tf.train.AdamOptimizer(self.learning_rate)
self.update = optimizer.apply_gradients(zip(clipped_gradients, params), global_step=self.global_step)
###Output
_____no_output_____
###Markdown
**Cell for Title Prediction**
###Code
class args:
pass
args.num_hidden=200
args.num_layers=3
args.beam_width=10
args.embedding_size=300
args.glove = True
args.learning_rate=1e-3
args.batch_size=64
args.num_epochs=5
args.keep_prob = 0.8
args.toy=True
args.with_model="store_true"
word_dict, reversed_dict, article_max_len, summary_max_len = build_dict("test", args.toy)
def generate_title(article):
tf.reset_default_graph()
model = Model(reversed_dict, article_max_len, summary_max_len, args, forward_only=True)
saver = tf.train.Saver(tf.global_variables())
ckpt = tf.train.get_checkpoint_state(default_path + "saved_model/")
with warnings.catch_warnings():
warnings.simplefilter("ignore")
x = [word_tokenize(clean_str(article))]
x = [[word_dict.get(w, word_dict["<unk>"]) for w in d] for d in x]
x = [d[:article_max_len] for d in x]
test_x = [d + (article_max_len - len(d)) * [word_dict["<padding>"]] for d in x]
with tf.Session() as sess:
saver.restore(sess, ckpt.model_checkpoint_path)
batches = batch_iter(test_x, [0] * len(test_x), args.batch_size, 1)
for batch_x, _ in batches:
batch_x_len = [len([y for y in x if y != 0]) for x in batch_x]
test_feed_dict = {
model.batch_size: len(batch_x),
model.X: batch_x,
model.X_len: batch_x_len,
}
prediction = sess.run(model.prediction, feed_dict=test_feed_dict)
prediction_output = [[reversed_dict[y] for y in x] for x in prediction[:, 0, :]]
summary_array = []
for line in prediction_output:
summary = list()
for word in line:
if word == "</s>":
break
if word not in summary:
summary.append(word)
summary_array.append(" ".join(summary))
return " ".join(summary)
def get_title(text):
if text.count(' ')<10:
raise Exception("The length of the abstract is very short. Output will not be good")
title = generate_title(clean_str(text))
updated_title = update_title(word_dict, text, title)
return updated_title
###Output
_____no_output_____
###Markdown
**Generate Titles for Test Dataset**
###Code
abstract_list = get_text_list(test_article_path)
generated_titles = []
for i in trange(len(abstract_list)):
generated_titles.append(get_title(abstract_list[i]))
with open(default_path + "result.txt", "w") as f:
f.write('\n'.join(generated_titles))
###Output
_____no_output_____
###Markdown
**BLEU** and **Rouge** scores calculation
###Code
rouge = Rouge()
original_title,generated_title= [],[]
print("Loading Data...")
original_title = get_text_list(test_title_path)
def get_generated_title(path):
    # helper assumed here (it is not defined elsewhere in the notebook): read back the titles written to result.txt
    with open(path) as f:
        return [line.strip() for line in f.readlines()]
generated_title = get_generated_title(default_path + "result.txt")
abstract = get_text_list(test_article_path)
print('Tokenizing Data...')
tokens_original = [[word_tokenize(s)] for s in tqdm(original_title)]
tokens_generated = [word_tokenize(s) for s in tqdm(generated_title)]
token_abstract = [word_tokenize(s) for s in tqdm(abstract)]
minmized_abstract = []
for line in token_abstract:
minmized_abstract.append(' '.join(line[:40])+'...')
smoothing = SmoothingFunction().method0
print('Calculating BLEU Score')
bleu_score = []
for i in trange(len(tokens_original)):
bleu_score.append(sentence_bleu(tokens_original[i],tokens_generated[i],smoothing_function=smoothing))
bleu = np.array(bleu_score)
print("BLEU score report")
print("Min Score:",bleu.min(),"Max Score:",bleu.max(),"Avg Score:",bleu.mean())
print('Calculating Rouge Score')
rouge1f,rouge1p,rouge1r = [],[],[]
rouge2f,rouge2p,rouge2r = [],[],[]
rougelf,rougelp,rougelr = [],[],[]
for i in trange(len(tokens_original)):
score = rouge.get_scores(original_title[i],generated_title[i])
rouge1f.append(score[0]['rouge-1']['f'])
rouge1p.append(score[0]['rouge-1']['p'])
rouge1r.append(score[0]['rouge-1']['r'])
rouge2f.append(score[0]['rouge-2']['f'])
rouge2p.append(score[0]['rouge-2']['p'])
rouge2r.append(score[0]['rouge-2']['r'])
rougelf.append(score[0]['rouge-l']['f'])
rougelp.append(score[0]['rouge-l']['p'])
rougelr.append(score[0]['rouge-l']['r'])
rouge1f,rouge1p,rouge1r = np.array(rouge1f),np.array(rouge1p),np.array(rouge1r)
rouge2f,rouge2p,rouge2r = np.array(rouge2f),np.array(rouge2p),np.array(rouge2r)
rougelf,rougelp,rougelr = np.array(rougelf),np.array(rougelp),np.array(rougelr)
df = pd.DataFrame(zip(minmized_abstract,original_title,generated_title,bleu,rouge1f,rouge1p,rouge1r,rouge2f,rouge2p,rouge2r,rougelf,rougelp,rougelr),columns=['Abstract','Original_Headline','Generated_Headline_x','Bleu_Score_x','Rouge-1_F_x','Rouge-1_P_x','Rouge-1_R_x','Rouge-2_F_x','Rouge-2_P_x','Rouge-2_R_x','Rouge-l_F_x','Rouge-l_P_x','Rouge-l_R_x'])
df.to_csv(default_path+'output.csv',index=False)
print('Done!!')
###Output
_____no_output_____ |
Experiments/Siamesa_v3/siamesa_v3_sbd.ipynb | ###Markdown
GENERAL IMPORTS AND SEED
###Code
import argparse
import torch
import torchvision
from torch import optim
from torchvision import transforms
import os
import os.path as osp
import random
import numpy as np
from pathlib import Path
from torch.utils.data import dataset
import PIL
from PIL import Image
# fix the seed
seed = 1
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(seed)
random.seed(seed)
###Output
_____no_output_____
###Markdown
ACCESS TO THE DRIVE FOLDER WHERE THE DATASET HAS BEEN STORED
###Code
from google.colab import drive
drive.mount('/content/gdrive')
root_path = 'gdrive/My Drive/' #change dir to your project folder
###Output
Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount("/content/gdrive", force_remount=True).
###Markdown
DEFINE ARGUMENTS
###Code
class Args:
frontal_images_directories = "gdrive/My Drive/dataset-cfp/Protocol/image_list_F.txt"
profile_images_directories = "gdrive/My Drive/dataset-cfp/Protocol/image_list_P.txt"
split_main_directory = "gdrive/My Drive/dataset-cfp/Protocol/Split"
split_traindata = ["01", "02", "03", "04", "05", "06"]
split_valdata = ["07", "08"]
split_testdata = ["09", "10"]
dataset_root = "gdrive/My Drive"
dataset= "CFPDataset"
lr = float(1e-3)
weight_decay = float(0.0005)
momentum = float(0.9)
betas = (0.9, 0.999)
batch_size = int(14)
workers = int(8)
start_epoch = int(0)
epochs = int(40)
#save_every = int(2)
pretrained = True
#siamese_linear = True
data_aug = True
resume = "checkpoint_e23_lr1e_3_40e_SGD"
###Output
_____no_output_____
###Markdown
DEFINE DATASET CLASS
###Code
class CFPDataset(dataset.Dataset):
def __init__(self, path, args, img_transforms=None, dataset_root="",
split="train", input_size=(224, 224)):
super().__init__()
self.data = []
self.split = split
self.load(path, args)
print("Dataset loaded")
print("{0} samples in the {1} dataset".format(len(self.data),
self.split))
self.transforms = img_transforms
self.dataset_root = dataset_root
self.input_size = input_size
def load(self, path, args):
# read directories for frontal images
lines = open(args.frontal_images_directories).readlines()
idx = 0
directories_frontal_images = []
#print(len(lines))
while idx < len(lines):
x = lines[idx].strip().split()
directories_frontal_images.append(x)
idx += 1
#print(x)
# read directories for profile images
lines = open(args.profile_images_directories).readlines()
idx = 0
directories_profile_images = []
#print(len(lines))
while idx < len(lines):
x = lines[idx].strip().split()
directories_profile_images.append(x)
idx += 1
#print(x)
# read same and different pairs of images and save at dictionary
self.data = []
for i in path:
ff_diff_file = osp.join(args.split_main_directory, 'FF', i,
'diff.txt')
lines = open(ff_diff_file).readlines()
idx = 0
while idx < int(len(lines)/1):
img_pair = lines[idx].strip().split(',')
#print('ff_diff', img_pair)
img1_dir = directories_frontal_images[int(img_pair[0])-1][1]
img2_dir = directories_frontal_images[int(img_pair[1])-1][1]
pair_tag = -1
d = {
"img1_path": img1_dir,
"img2_path": img2_dir,
"pair_tag": pair_tag
}
#print(d)
self.data.append(d)
idx += 1
ff_same_file = osp.join(args.split_main_directory, 'FF', i,
'same.txt')
lines = open(ff_same_file).readlines()
idx = 0
while idx < int(len(lines)/1):
img_pair = lines[idx].strip().split(',')
#print('ff_same', img_pair)
img1_dir = directories_frontal_images[int(img_pair[0])-1][1]
img2_dir = directories_frontal_images[int(img_pair[1])-1][1]
pair_tag = 1
d = {
"img1_path": img1_dir,
"img2_path": img2_dir,
"pair_tag": pair_tag
}
#print(d)
self.data.append(d)
idx += 1
fp_diff_file = osp.join(args.split_main_directory, 'FP', i,
'diff.txt')
lines = open(fp_diff_file).readlines()
idx = 0
while idx < int(len(lines)/1):
img_pair = lines[idx].strip().split(',')
#print('fp_diff', img_pair)
img1_dir = directories_frontal_images[int(img_pair[0])-1][1]
img2_dir = directories_profile_images[int(img_pair[1])-1][1]
pair_tag = -1
d = {
"img1_path": img1_dir,
"img2_path": img2_dir,
"pair_tag": pair_tag
}
#print(d)
self.data.append(d)
idx += 1
fp_same_file = osp.join(args.split_main_directory, 'FP', i,
'same.txt')
lines = open(fp_same_file).readlines()
idx = 0
while idx < int(len(lines)/1):
img_pair = lines[idx].strip().split(',')
#print('ff_same', img_pair)
img1_dir = directories_frontal_images[int(img_pair[0])-1][1]
img2_dir = directories_profile_images[int(img_pair[1])-1][1]
pair_tag = 1
d = {
"img1_path": img1_dir,
"img2_path": img2_dir,
"pair_tag": pair_tag
}
#print(d)
self.data.append(d)
idx += 1
def __len__(self):
return len(self.data)
def __getitem__(self, index):
d = self.data[index]
image1_path = osp.join(self.dataset_root, 'dataset-cfp', d[
'img1_path'])
image2_path = osp.join(self.dataset_root, 'dataset-cfp', d[
'img2_path'])
image1 = Image.open(image1_path).convert('RGB')
image2 = Image.open(image2_path).convert('RGB')
tag = d['pair_tag']
if self.transforms is not None:
# this converts from (HxWxC) to (CxHxW) as wel
img1 = self.transforms(image1)
img2 = self. transforms(image2)
return img1, img2, tag
###Output
_____no_output_____
###Markdown
DEFINE DATA LOADERS
###Code
from torch.utils import data
def get_dataloader(datapath, args, img_transforms=None, split="train"):
if split == 'train':
shuffle = True
drop_last = True
else:
shuffle = False
drop_last = False
dataset = CFPDataset(datapath,
args,
split=split,
img_transforms=img_transforms,
dataset_root=osp.expanduser(args.dataset_root))
data_loader = data.DataLoader(dataset,
batch_size=args.batch_size,
shuffle=shuffle,
num_workers=args.workers,
pin_memory=True,
drop_last=drop_last)
return data_loader
###Output
_____no_output_____
###Markdown
DEFINE MODEL
###Code
import torch
from torch import nn
from torchvision.models import vgg16_bn
def l2norm(x):
x = x / torch.sqrt(torch.sum(x**2, dim=-1, keepdim=True))
return x
class SiameseCosine(nn.Module):
"""
Siamese network
"""
def __init__(self, pretrained=False):
super(SiameseCosine, self).__init__()
vgg16_model = vgg16_bn(pretrained=pretrained)
self.feat = vgg16_model.features
self.linear_classifier = vgg16_model.classifier[0]
self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
def forward(self, img1, img2):
feat_1 = self.feat(img1)
        feat_1 = self.avgpool(feat_1)
        feat_1 = feat_1.view(feat_1.size(0), -1)
        feat_1 = self.linear_classifier(feat_1)
        feat_1 = l2norm(feat_1)
        feat_2 = self.feat(img2)
        feat_2 = self.avgpool(feat_2)
        feat_2 = feat_2.view(feat_2.size(0), -1)  # flatten using its own batch dimension
        feat_2 = self.linear_classifier(feat_2)
        feat_2 = l2norm(feat_2)
return feat_1, feat_2
###Output
_____no_output_____
###Markdown
DEFINE LOSS
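The criterion below wraps PyTorch's `CosineEmbeddingLoss` with a margin of 0.5. For a pair of embeddings $x_1, x_2$ and a label $y \in \{1, -1\}$ (the `pair_tag` produced by the dataset class), it computes $$ \ell(x_1, x_2, y) = \begin{cases} 1 - \cos(x_1, x_2), & y = 1 \\ \max\big(0,\ \cos(x_1, x_2) - 0.5\big), & y = -1 \end{cases} $$ so embeddings of the same identity are pulled towards cosine similarity 1, while different identities are only penalised once their similarity rises above the 0.5 margin.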
###Code
from torch import nn
class RecognitionCriterion(nn.Module):
def __init__(self):
super().__init__()
self.classification_criterion = nn.CosineEmbeddingLoss(margin=0.5).cuda()
self.cls_loss = None
def forward(self, *input):
self.cls_loss = self.classification_criterion(*input)
return self.cls_loss
###Output
_____no_output_____
###Markdown
DEFINE TRAINING AND VALIDATION FUNCTIONS
###Code
import torch
from torchvision import transforms
from torch.nn import functional as nnfunc
import numpy as np
def similarity (vec1, vec2):
cos = torch.nn.CosineSimilarity(dim=1, eps=1e-8)
cos_a = cos (vec1, vec2)
return cos_a
def accuracy(vec1, vec2, y, treshold):
correct = 0
total = 0
similarity_value = similarity(vec1, vec2)
for value, label in zip(similarity_value, y):
total += 1
if value > treshold and label == 1.0:
correct += 1
if value < treshold and label == -1.0:
correct += 1
return correct/total
def train(model, loss_fn, optimizer, dataloader, epoch, device):
model.train()
all_loss = []
for idx, (img1, img2, prob) in enumerate(dataloader):
img1 = img1.to('cuda:0')
img2 = img2.to('cuda:0')
prob = prob.float().to('cuda:0') #label
out1, out2 = model(img1, img2) #inputs images to model, executes model, returns features
loss = loss_fn(out1, out2, prob) #calculates loss
loss.backward() #upgrades gradients
all_loss.append(loss.item())
optimizer.step()
optimizer.zero_grad()
if idx % 100 == 0:
message1 = "TRAIN Epoch [{0}]: [{1}/{2}] ".format(epoch, idx,
len(dataloader))
#message2 = "Loss: [{0:.4f}]; Accuracy: [{1}]".format(loss.item(),
# acc)
message2 = "Loss: [{0:.4f}]".format(loss.item())
print(message1, message2)
torch.cuda.empty_cache()
return all_loss
def val(model, loss_fn, dataloader, epoch, device):
model.eval()
all_loss = []
for idx, (img1, img2, prob) in enumerate(dataloader):
img1 = img1.to('cuda:0')
img2 = img2.to('cuda:0')
prob = prob.float().to('cuda:0') #label
out1, out2 = model(img1, img2) #inputs images to model, executes model, returns features
loss = loss_fn(out1, out2, prob) #calculates loss
all_loss.append(loss.item())
if idx % 100 == 0:
message1 = "VAL Epoch [{0}]: [{1}/{2}] ".format(epoch, idx,
len(dataloader))
#message2 = "Loss: [{0:.4f}]; Accuracy: [{1:.4f}]".format(loss.item(),
# acc)
message2 = "Loss: [{0:.4f}]".format(loss.item())
print(message1, message2)
torch.cuda.empty_cache()
return all_loss
def val_sim_lim(model, dataloader, epoch, device):
model.eval()
sim_pos_min = 1
sim_neg_max = -1
pos_similarities = []
neg_similarities = []
for idx, (img1, img2, prob) in enumerate(dataloader):
img1 = img1.to('cuda:0')
img2 = img2.to('cuda:0')
prob = prob.float().to('cuda:0') #label
out1, out2 = model(img1, img2) #inputs images to model, executes model, returns features
sim = similarity(out1, out2)
for value, label in zip(sim, prob):
value = value.item()
np.round(value, decimals=3)
if label == 1:
pos_similarities.append(value)
else:
neg_similarities.append(value)
if idx % 100 == 0:
message1 = "VAL Epoch [{0}]: [{1}/{2}] ".format(epoch, idx,
len(dataloader))
print(message1)
torch.cuda.empty_cache()
return pos_similarities, neg_similarities
def val_tr(model, dataloader, epoch, device, tr):
model.eval()
all_loss = []
all_acc = []
for idx, (img1, img2, prob) in enumerate(dataloader):
img1 = img1.to('cuda:0')
img2 = img2.to('cuda:0')
prob = prob.float().to('cuda:0') #label
out1, out2 = model(img1, img2) #inputs images to model, executes model, returns features
acc = accuracy(out1, out2, prob, tr)
all_acc.append(acc)
if idx % 100 == 0:
message1 = "VAL Epoch [{0}]: [{1}/{2}] ".format(epoch, idx,
len(dataloader))
message2 = "Accuracy: [{0}]".format(acc)
#message2 = "Loss: [{0:.4f}]".format(loss.item())
print(message1, message2)
torch.cuda.empty_cache()
return all_acc
def test(model, loss_fn, dataloader, epoch, device, tr):
#model = model.to(device)
model.eval()
all_acc = []
for idx, (img1, img2, prob) in enumerate(dataloader):
img1 = img1.to('cuda:0')
img2 = img2.to('cuda:0')
prob = prob.float().to('cuda:0') #label
out1, out2 = model(img1, img2) #inputs images to model, executes model, returns features
acc = accuracy(out1, out2, prob, tr)
all_acc.append(acc)
if idx % 100 == 0:
message1 = "TEST Epoch [{0}]: [{1}/{2}] ".format(epoch, idx,
len(dataloader))
message2 = "Accuracy: [{0}]".format(acc)
#message2 = "Loss: [{0:.4f}]".format(loss.item())
print(message1, message2)
torch.cuda.empty_cache()
return all_acc
###Output
_____no_output_____
###Markdown
LOAD ARGUMENTS AND DEFINE IMAGE TRANSFORMATIONS
###Code
args = Args()
train_transform=None
if args.data_aug == False:
img_transforms = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
else:
img_transforms = transforms.Compose([transforms.Resize((224, 224)),
transforms.RandomHorizontalFlip(),
transforms.RandomRotation(20, resample=PIL.Image.BILINEAR),
transforms.ToTensor()])
val_transforms = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor()
])
###Output
_____no_output_____
###Markdown
LOAD DATASET SPLIT FOR TRAINING
###Code
train_loader = get_dataloader(args.split_traindata, args,
img_transforms=img_transforms)
###Output
Dataset loaded
8400 samples in the train dataset
###Markdown
LOAD DATASET SPLIT FOR VALIDATION
###Code
val_loader = get_dataloader(args.split_valdata, args,
img_transforms=val_transforms, split="val")
torch.cuda.is_available()
###Output
_____no_output_____
###Markdown
SPECIFY DEVICE
###Code
# check for CUDA
if torch.cuda.is_available():
device = torch.device('cuda:0')
else:
device = torch.device('cpu')
###Output
_____no_output_____
###Markdown
LOAD MODEL AND LOSS
###Code
model = SiameseCosine(pretrained=args.pretrained)
model = model.to(device)  # remove from train and validation
loss_fn = RecognitionCriterion()
###Output
_____no_output_____
###Markdown
SPECIFY WEIGHTS DIRECTORY
###Code
# directory where we'll store model weights
weights_dir = "gdrive/My Drive/weights"
if not osp.exists(weights_dir):
os.mkdir(weights_dir)
###Output
_____no_output_____
###Markdown
SELECT OPTIMIZER
###Code
#optimizer = torch.optim.Adam(model.parameters(), lr=args.lr,
# weight_decay=args.weight_decay)
optimizer = optim.SGD(model.parameters(), lr=args.lr,
momentum=args.momentum, weight_decay=args.weight_decay)
###Output
_____no_output_____
###Markdown
DEFINE CHECKPOINT
###Code
def save_checkpoint(state, filename="checkpoint.pth", save_path=weights_dir):
# check if the save directory exists
if not Path(save_path).exists():
Path(save_path).mkdir()
save_path = Path(save_path, filename)
torch.save(state, str(save_path))
###Output
_____no_output_____
###Markdown
RUN TRAIN
###Code
import matplotlib.pyplot as plt
# train and evalute for `epochs`
loss_epoch_train = []
loss_epoch_val = []
acc_epoch_train = []
acc_epoch_val = []
best_loss = 100
best_epoch = 0
for epoch in range(args.start_epoch, args.epochs):
# scheduler.step()
train_loss = train(model, loss_fn, optimizer, train_loader, epoch, device=device)
av_loss = np.mean(train_loss)
loss_epoch_train.append(av_loss)
val_loss = val(model, loss_fn, val_loader, epoch, device=device)
av_loss = np.mean(val_loss)
loss_epoch_val.append(av_loss)
if best_loss > av_loss:
best_loss = av_loss
best_epoch = epoch
save_checkpoint({
'epoch': epoch + 1,
'batch_size': val_loader.batch_size,
'model': model.state_dict(),
'optimizer': optimizer.state_dict()
}, filename=str(args.resume)+".pth",
save_path=weights_dir)
print("Best Epoch: ",best_epoch, "Best Loss: ", best_loss)
epochs = range(1, len(loss_epoch_train) + 1)
# b is for "solid blue line"
plt.plot(epochs, loss_epoch_train, 'b', label='Training loss')
# r is for "solid red line"
plt.plot(epochs, loss_epoch_val, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
#epochs = range(1, len(acc_epoch_train) + 1)
# b is for "solid blue line"
#plt.plot(epochs, acc_epoch_train, 'b', label='Training accuracy')
# r is for "solid red line"
#plt.plot(epochs, acc_epoch_val, 'r', label='Validation accuracy')
#plt.title('Training and validation accuracy')
#plt.xlabel('Epochs')
#plt.ylabel('Accuracy')
#plt.legend()
#plt.show()
###Output
TRAIN Epoch [0]: [0/600] Loss: [0.1080]
TRAIN Epoch [0]: [100/600] Loss: [0.1172]
TRAIN Epoch [0]: [200/600] Loss: [0.2114]
TRAIN Epoch [0]: [300/600] Loss: [0.2063]
TRAIN Epoch [0]: [400/600] Loss: [0.2105]
TRAIN Epoch [0]: [500/600] Loss: [0.2178]
VAL Epoch [0]: [0/200] Loss: [0.1021]
VAL Epoch [0]: [100/200] Loss: [0.0672]
TRAIN Epoch [1]: [0/600] Loss: [0.2227]
TRAIN Epoch [1]: [100/600] Loss: [0.1890]
TRAIN Epoch [1]: [200/600] Loss: [0.1801]
TRAIN Epoch [1]: [300/600] Loss: [0.1705]
TRAIN Epoch [1]: [400/600] Loss: [0.1183]
TRAIN Epoch [1]: [500/600] Loss: [0.2071]
VAL Epoch [1]: [0/200] Loss: [0.1376]
VAL Epoch [1]: [100/200] Loss: [0.0603]
TRAIN Epoch [2]: [0/600] Loss: [0.1619]
TRAIN Epoch [2]: [100/600] Loss: [0.2246]
TRAIN Epoch [2]: [200/600] Loss: [0.2350]
TRAIN Epoch [2]: [300/600] Loss: [0.1982]
TRAIN Epoch [2]: [400/600] Loss: [0.1776]
TRAIN Epoch [2]: [500/600] Loss: [0.1553]
VAL Epoch [2]: [0/200] Loss: [0.1658]
VAL Epoch [2]: [100/200] Loss: [0.0575]
TRAIN Epoch [3]: [0/600] Loss: [0.1390]
TRAIN Epoch [3]: [100/600] Loss: [0.2018]
TRAIN Epoch [3]: [200/600] Loss: [0.1600]
TRAIN Epoch [3]: [300/600] Loss: [0.0857]
TRAIN Epoch [3]: [400/600] Loss: [0.1550]
TRAIN Epoch [3]: [500/600] Loss: [0.1412]
VAL Epoch [3]: [0/200] Loss: [0.1716]
VAL Epoch [3]: [100/200] Loss: [0.0648]
TRAIN Epoch [4]: [0/600] Loss: [0.1496]
TRAIN Epoch [4]: [100/600] Loss: [0.1031]
TRAIN Epoch [4]: [200/600] Loss: [0.1194]
TRAIN Epoch [4]: [300/600] Loss: [0.1046]
TRAIN Epoch [4]: [400/600] Loss: [0.1127]
TRAIN Epoch [4]: [500/600] Loss: [0.1216]
VAL Epoch [4]: [0/200] Loss: [0.1805]
VAL Epoch [4]: [100/200] Loss: [0.0665]
TRAIN Epoch [5]: [0/600] Loss: [0.0792]
TRAIN Epoch [5]: [100/600] Loss: [0.1464]
TRAIN Epoch [5]: [200/600] Loss: [0.0892]
TRAIN Epoch [5]: [300/600] Loss: [0.1045]
TRAIN Epoch [5]: [400/600] Loss: [0.1741]
TRAIN Epoch [5]: [500/600] Loss: [0.0976]
VAL Epoch [5]: [0/200] Loss: [0.1767]
VAL Epoch [5]: [100/200] Loss: [0.0650]
TRAIN Epoch [6]: [0/600] Loss: [0.1222]
TRAIN Epoch [6]: [100/600] Loss: [0.1173]
TRAIN Epoch [6]: [200/600] Loss: [0.0695]
TRAIN Epoch [6]: [300/600] Loss: [0.1352]
TRAIN Epoch [6]: [400/600] Loss: [0.0725]
TRAIN Epoch [6]: [500/600] Loss: [0.1326]
VAL Epoch [6]: [0/200] Loss: [0.1910]
VAL Epoch [6]: [100/200] Loss: [0.0563]
TRAIN Epoch [7]: [0/600] Loss: [0.0913]
TRAIN Epoch [7]: [100/600] Loss: [0.1533]
TRAIN Epoch [7]: [200/600] Loss: [0.1380]
TRAIN Epoch [7]: [300/600] Loss: [0.1510]
TRAIN Epoch [7]: [400/600] Loss: [0.1655]
TRAIN Epoch [7]: [500/600] Loss: [0.1792]
VAL Epoch [7]: [0/200] Loss: [0.1928]
VAL Epoch [7]: [100/200] Loss: [0.0540]
TRAIN Epoch [8]: [0/600] Loss: [0.0842]
TRAIN Epoch [8]: [100/600] Loss: [0.0815]
TRAIN Epoch [8]: [200/600] Loss: [0.0863]
TRAIN Epoch [8]: [300/600] Loss: [0.1468]
TRAIN Epoch [8]: [400/600] Loss: [0.1377]
TRAIN Epoch [8]: [500/600] Loss: [0.1844]
VAL Epoch [8]: [0/200] Loss: [0.1989]
VAL Epoch [8]: [100/200] Loss: [0.0599]
TRAIN Epoch [9]: [0/600] Loss: [0.1067]
TRAIN Epoch [9]: [100/600] Loss: [0.0943]
TRAIN Epoch [9]: [200/600] Loss: [0.1090]
TRAIN Epoch [9]: [300/600] Loss: [0.1236]
TRAIN Epoch [9]: [400/600] Loss: [0.1380]
TRAIN Epoch [9]: [500/600] Loss: [0.0926]
VAL Epoch [9]: [0/200] Loss: [0.2002]
VAL Epoch [9]: [100/200] Loss: [0.0534]
TRAIN Epoch [10]: [0/600] Loss: [0.1048]
TRAIN Epoch [10]: [100/600] Loss: [0.1387]
TRAIN Epoch [10]: [200/600] Loss: [0.0861]
TRAIN Epoch [10]: [300/600] Loss: [0.1494]
TRAIN Epoch [10]: [400/600] Loss: [0.0970]
TRAIN Epoch [10]: [500/600] Loss: [0.1105]
VAL Epoch [10]: [0/200] Loss: [0.1978]
VAL Epoch [10]: [100/200] Loss: [0.0530]
TRAIN Epoch [11]: [0/600] Loss: [0.1678]
TRAIN Epoch [11]: [100/600] Loss: [0.0517]
TRAIN Epoch [11]: [200/600] Loss: [0.0889]
TRAIN Epoch [11]: [300/600] Loss: [0.1184]
TRAIN Epoch [11]: [400/600] Loss: [0.0563]
TRAIN Epoch [11]: [500/600] Loss: [0.0887]
VAL Epoch [11]: [0/200] Loss: [0.2000]
VAL Epoch [11]: [100/200] Loss: [0.0513]
TRAIN Epoch [12]: [0/600] Loss: [0.1034]
TRAIN Epoch [12]: [100/600] Loss: [0.1195]
TRAIN Epoch [12]: [200/600] Loss: [0.0435]
TRAIN Epoch [12]: [300/600] Loss: [0.0766]
TRAIN Epoch [12]: [400/600] Loss: [0.0728]
TRAIN Epoch [12]: [500/600] Loss: [0.1355]
VAL Epoch [12]: [0/200] Loss: [0.1936]
VAL Epoch [12]: [100/200] Loss: [0.0508]
TRAIN Epoch [13]: [0/600] Loss: [0.0873]
TRAIN Epoch [13]: [100/600] Loss: [0.0611]
TRAIN Epoch [13]: [200/600] Loss: [0.1784]
TRAIN Epoch [13]: [300/600] Loss: [0.0648]
TRAIN Epoch [13]: [400/600] Loss: [0.0973]
TRAIN Epoch [13]: [500/600] Loss: [0.0638]
VAL Epoch [13]: [0/200] Loss: [0.1963]
VAL Epoch [13]: [100/200] Loss: [0.0484]
TRAIN Epoch [14]: [0/600] Loss: [0.1183]
TRAIN Epoch [14]: [100/600] Loss: [0.0725]
TRAIN Epoch [14]: [200/600] Loss: [0.0861]
TRAIN Epoch [14]: [300/600] Loss: [0.0862]
TRAIN Epoch [14]: [400/600] Loss: [0.0863]
TRAIN Epoch [14]: [500/600] Loss: [0.1047]
VAL Epoch [14]: [0/200] Loss: [0.1999]
VAL Epoch [14]: [100/200] Loss: [0.0432]
TRAIN Epoch [15]: [0/600] Loss: [0.1551]
TRAIN Epoch [15]: [100/600] Loss: [0.0906]
TRAIN Epoch [15]: [200/600] Loss: [0.0483]
TRAIN Epoch [15]: [300/600] Loss: [0.0771]
TRAIN Epoch [15]: [400/600] Loss: [0.0902]
TRAIN Epoch [15]: [500/600] Loss: [0.0372]
VAL Epoch [15]: [0/200] Loss: [0.1999]
VAL Epoch [15]: [100/200] Loss: [0.0408]
TRAIN Epoch [16]: [0/600] Loss: [0.1211]
TRAIN Epoch [16]: [100/600] Loss: [0.1037]
TRAIN Epoch [16]: [200/600] Loss: [0.0667]
TRAIN Epoch [16]: [300/600] Loss: [0.1599]
TRAIN Epoch [16]: [400/600] Loss: [0.0266]
TRAIN Epoch [16]: [500/600] Loss: [0.0484]
VAL Epoch [16]: [0/200] Loss: [0.1868]
VAL Epoch [16]: [100/200] Loss: [0.0421]
TRAIN Epoch [17]: [0/600] Loss: [0.1155]
TRAIN Epoch [17]: [100/600] Loss: [0.1340]
TRAIN Epoch [17]: [200/600] Loss: [0.1036]
TRAIN Epoch [17]: [300/600] Loss: [0.0855]
TRAIN Epoch [17]: [400/600] Loss: [0.0374]
TRAIN Epoch [17]: [500/600] Loss: [0.0546]
VAL Epoch [17]: [0/200] Loss: [0.1859]
VAL Epoch [17]: [100/200] Loss: [0.0413]
TRAIN Epoch [18]: [0/600] Loss: [0.0836]
TRAIN Epoch [18]: [100/600] Loss: [0.0587]
TRAIN Epoch [18]: [200/600] Loss: [0.0810]
TRAIN Epoch [18]: [300/600] Loss: [0.0448]
TRAIN Epoch [18]: [400/600] Loss: [0.0964]
TRAIN Epoch [18]: [500/600] Loss: [0.0511]
VAL Epoch [18]: [0/200] Loss: [0.1893]
VAL Epoch [18]: [100/200] Loss: [0.0354]
TRAIN Epoch [19]: [0/600] Loss: [0.0443]
TRAIN Epoch [19]: [100/600] Loss: [0.0535]
TRAIN Epoch [19]: [200/600] Loss: [0.0960]
TRAIN Epoch [19]: [300/600] Loss: [0.0247]
TRAIN Epoch [19]: [400/600] Loss: [0.0736]
TRAIN Epoch [19]: [500/600] Loss: [0.1098]
VAL Epoch [19]: [0/200] Loss: [0.1886]
VAL Epoch [19]: [100/200] Loss: [0.0380]
TRAIN Epoch [20]: [0/600] Loss: [0.0772]
TRAIN Epoch [20]: [100/600] Loss: [0.1032]
TRAIN Epoch [20]: [200/600] Loss: [0.0459]
TRAIN Epoch [20]: [300/600] Loss: [0.0749]
TRAIN Epoch [20]: [400/600] Loss: [0.0881]
TRAIN Epoch [20]: [500/600] Loss: [0.0948]
VAL Epoch [20]: [0/200] Loss: [0.1870]
VAL Epoch [20]: [100/200] Loss: [0.0333]
TRAIN Epoch [21]: [0/600] Loss: [0.0664]
TRAIN Epoch [21]: [100/600] Loss: [0.0447]
TRAIN Epoch [21]: [200/600] Loss: [0.1013]
TRAIN Epoch [21]: [300/600] Loss: [0.0433]
TRAIN Epoch [21]: [400/600] Loss: [0.0671]
TRAIN Epoch [21]: [500/600] Loss: [0.0419]
VAL Epoch [21]: [0/200] Loss: [0.1847]
VAL Epoch [21]: [100/200] Loss: [0.0400]
TRAIN Epoch [22]: [0/600] Loss: [0.0674]
TRAIN Epoch [22]: [100/600] Loss: [0.0236]
TRAIN Epoch [22]: [200/600] Loss: [0.0406]
TRAIN Epoch [22]: [300/600] Loss: [0.0587]
TRAIN Epoch [22]: [400/600] Loss: [0.0558]
TRAIN Epoch [22]: [500/600] Loss: [0.0364]
VAL Epoch [22]: [0/200] Loss: [0.1755]
VAL Epoch [22]: [100/200] Loss: [0.0380]
TRAIN Epoch [23]: [0/600] Loss: [0.0744]
TRAIN Epoch [23]: [100/600] Loss: [0.0287]
TRAIN Epoch [23]: [200/600] Loss: [0.0912]
TRAIN Epoch [23]: [300/600] Loss: [0.0756]
TRAIN Epoch [23]: [400/600] Loss: [0.0177]
TRAIN Epoch [23]: [500/600] Loss: [0.0233]
VAL Epoch [23]: [0/200] Loss: [0.1887]
VAL Epoch [23]: [100/200] Loss: [0.0375]
TRAIN Epoch [24]: [0/600] Loss: [0.0657]
TRAIN Epoch [24]: [100/600] Loss: [0.0133]
TRAIN Epoch [24]: [200/600] Loss: [0.0445]
TRAIN Epoch [24]: [300/600] Loss: [0.0929]
TRAIN Epoch [24]: [400/600] Loss: [0.0471]
TRAIN Epoch [24]: [500/600] Loss: [0.0817]
VAL Epoch [24]: [0/200] Loss: [0.1893]
VAL Epoch [24]: [100/200] Loss: [0.0372]
TRAIN Epoch [25]: [0/600] Loss: [0.0326]
TRAIN Epoch [25]: [100/600] Loss: [0.0557]
TRAIN Epoch [25]: [200/600] Loss: [0.0420]
TRAIN Epoch [25]: [300/600] Loss: [0.0413]
TRAIN Epoch [25]: [400/600] Loss: [0.0135]
TRAIN Epoch [25]: [500/600] Loss: [0.0518]
VAL Epoch [25]: [0/200] Loss: [0.1824]
VAL Epoch [25]: [100/200] Loss: [0.0341]
TRAIN Epoch [26]: [0/600] Loss: [0.0829]
TRAIN Epoch [26]: [100/600] Loss: [0.0599]
TRAIN Epoch [26]: [200/600] Loss: [0.0900]
TRAIN Epoch [26]: [300/600] Loss: [0.0877]
TRAIN Epoch [26]: [400/600] Loss: [0.0491]
TRAIN Epoch [26]: [500/600] Loss: [0.0499]
VAL Epoch [26]: [0/200] Loss: [0.1803]
VAL Epoch [26]: [100/200] Loss: [0.0344]
TRAIN Epoch [27]: [0/600] Loss: [0.1060]
TRAIN Epoch [27]: [100/600] Loss: [0.0442]
TRAIN Epoch [27]: [200/600] Loss: [0.0620]
TRAIN Epoch [27]: [300/600] Loss: [0.0135]
TRAIN Epoch [27]: [400/600] Loss: [0.1370]
TRAIN Epoch [27]: [500/600] Loss: [0.0726]
VAL Epoch [27]: [0/200] Loss: [0.1813]
VAL Epoch [27]: [100/200] Loss: [0.0338]
TRAIN Epoch [28]: [0/600] Loss: [0.0573]
TRAIN Epoch [28]: [100/600] Loss: [0.0465]
TRAIN Epoch [28]: [200/600] Loss: [0.0872]
TRAIN Epoch [28]: [300/600] Loss: [0.0974]
TRAIN Epoch [28]: [400/600] Loss: [0.1087]
TRAIN Epoch [28]: [500/600] Loss: [0.0451]
VAL Epoch [28]: [0/200] Loss: [0.1909]
VAL Epoch [28]: [100/200] Loss: [0.0338]
TRAIN Epoch [29]: [0/600] Loss: [0.0412]
TRAIN Epoch [29]: [100/600] Loss: [0.0496]
TRAIN Epoch [29]: [200/600] Loss: [0.0626]
TRAIN Epoch [29]: [300/600] Loss: [0.0528]
TRAIN Epoch [29]: [400/600] Loss: [0.0754]
TRAIN Epoch [29]: [500/600] Loss: [0.0422]
VAL Epoch [29]: [0/200] Loss: [0.1870]
VAL Epoch [29]: [100/200] Loss: [0.0343]
TRAIN Epoch [30]: [0/600] Loss: [0.0403]
TRAIN Epoch [30]: [100/600] Loss: [0.0774]
TRAIN Epoch [30]: [200/600] Loss: [0.0762]
TRAIN Epoch [30]: [300/600] Loss: [0.0639]
TRAIN Epoch [30]: [400/600] Loss: [0.0453]
TRAIN Epoch [30]: [500/600] Loss: [0.0311]
VAL Epoch [30]: [0/200] Loss: [0.1813]
VAL Epoch [30]: [100/200] Loss: [0.0340]
TRAIN Epoch [31]: [0/600] Loss: [0.0626]
TRAIN Epoch [31]: [100/600] Loss: [0.0283]
TRAIN Epoch [31]: [200/600] Loss: [0.0645]
TRAIN Epoch [31]: [300/600] Loss: [0.0623]
TRAIN Epoch [31]: [400/600] Loss: [0.0688]
TRAIN Epoch [31]: [500/600] Loss: [0.0216]
VAL Epoch [31]: [0/200] Loss: [0.1824]
VAL Epoch [31]: [100/200] Loss: [0.0336]
TRAIN Epoch [32]: [0/600] Loss: [0.0453]
TRAIN Epoch [32]: [100/600] Loss: [0.0435]
TRAIN Epoch [32]: [200/600] Loss: [0.0720]
TRAIN Epoch [32]: [300/600] Loss: [0.0917]
TRAIN Epoch [32]: [400/600] Loss: [0.0287]
TRAIN Epoch [32]: [500/600] Loss: [0.0343]
VAL Epoch [32]: [0/200] Loss: [0.1786]
VAL Epoch [32]: [100/200] Loss: [0.0331]
TRAIN Epoch [33]: [0/600] Loss: [0.0211]
TRAIN Epoch [33]: [100/600] Loss: [0.0320]
TRAIN Epoch [33]: [200/600] Loss: [0.0646]
TRAIN Epoch [33]: [300/600] Loss: [0.0854]
TRAIN Epoch [33]: [400/600] Loss: [0.0534]
TRAIN Epoch [33]: [500/600] Loss: [0.0254]
VAL Epoch [33]: [0/200] Loss: [0.1806]
VAL Epoch [33]: [100/200] Loss: [0.0324]
TRAIN Epoch [34]: [0/600] Loss: [0.0217]
TRAIN Epoch [34]: [100/600] Loss: [0.0458]
TRAIN Epoch [34]: [200/600] Loss: [0.0475]
TRAIN Epoch [34]: [300/600] Loss: [0.1082]
TRAIN Epoch [34]: [400/600] Loss: [0.0419]
TRAIN Epoch [34]: [500/600] Loss: [0.0620]
VAL Epoch [34]: [0/200] Loss: [0.1824]
VAL Epoch [34]: [100/200] Loss: [0.0335]
TRAIN Epoch [35]: [0/600] Loss: [0.0506]
TRAIN Epoch [35]: [100/600] Loss: [0.0642]
TRAIN Epoch [35]: [200/600] Loss: [0.0616]
TRAIN Epoch [35]: [300/600] Loss: [0.0417]
TRAIN Epoch [35]: [400/600] Loss: [0.0661]
TRAIN Epoch [35]: [500/600] Loss: [0.0433]
VAL Epoch [35]: [0/200] Loss: [0.1773]
VAL Epoch [35]: [100/200] Loss: [0.0332]
TRAIN Epoch [36]: [0/600] Loss: [0.1021]
TRAIN Epoch [36]: [100/600] Loss: [0.0556]
TRAIN Epoch [36]: [200/600] Loss: [0.0849]
TRAIN Epoch [36]: [300/600] Loss: [0.0165]
TRAIN Epoch [36]: [400/600] Loss: [0.0629]
TRAIN Epoch [36]: [500/600] Loss: [0.0211]
VAL Epoch [36]: [0/200] Loss: [0.1728]
VAL Epoch [36]: [100/200] Loss: [0.0343]
TRAIN Epoch [37]: [0/600] Loss: [0.0602]
TRAIN Epoch [37]: [100/600] Loss: [0.0248]
TRAIN Epoch [37]: [200/600] Loss: [0.1107]
TRAIN Epoch [37]: [300/600] Loss: [0.0430]
TRAIN Epoch [37]: [400/600] Loss: [0.0121]
TRAIN Epoch [37]: [500/600] Loss: [0.0385]
VAL Epoch [37]: [0/200] Loss: [0.1886]
VAL Epoch [37]: [100/200] Loss: [0.0383]
TRAIN Epoch [38]: [0/600] Loss: [0.0664]
TRAIN Epoch [38]: [100/600] Loss: [0.0295]
TRAIN Epoch [38]: [200/600] Loss: [0.0134]
TRAIN Epoch [38]: [300/600] Loss: [0.0804]
TRAIN Epoch [38]: [400/600] Loss: [0.0970]
TRAIN Epoch [38]: [500/600] Loss: [0.0405]
VAL Epoch [38]: [0/200] Loss: [0.1787]
VAL Epoch [38]: [100/200] Loss: [0.0356]
TRAIN Epoch [39]: [0/600] Loss: [0.0516]
TRAIN Epoch [39]: [100/600] Loss: [0.0309]
TRAIN Epoch [39]: [200/600] Loss: [0.0969]
TRAIN Epoch [39]: [300/600] Loss: [0.0606]
TRAIN Epoch [39]: [400/600] Loss: [0.0132]
TRAIN Epoch [39]: [500/600] Loss: [0.0098]
VAL Epoch [39]: [0/200] Loss: [0.1835]
VAL Epoch [39]: [100/200] Loss: [0.0335]
Best Epoch: 27 Best Loss: 0.10832797806244343
###Markdown
LOAD WEIGHTS STORED FOR THE BEST EPOCH
###Code
weights = osp.join(weights_dir,args.resume+'.pth')
epoch = 28
if args.resume:
print(weights)
checkpoint = torch.load(weights)
model.load_state_dict(checkpoint['model'])
# Set the start epoch if it has not been
if not args.start_epoch:
args.start_epoch = checkpoint['epoch']
###Output
gdrive/My Drive/weights/checkpoint_e23_lr1e_3_40e_SGD.pth
###Markdown
COMPUTE SIMILARITIES AND PERCENTILES FOR THE VALIDATION SPLIT
###Code
import numpy as np
# View similarities
sim_pos, sim_neg = val_sim_lim(model, val_loader, epoch, device=device)
pos_95 = np.percentile(sim_pos,95)
pos_5 = np.percentile(sim_pos,5)
pos_max = np.amax(sim_pos)
pos_min = np.amin(sim_pos)
print(pos_95, pos_5, pos_max, pos_min)
neg_95 = np.percentile(sim_neg,95)
neg_5 = np.percentile(sim_neg,5)
neg_max = np.amax(sim_neg)
neg_min = np.amin(sim_neg)
print(neg_95, neg_5, neg_max, neg_min)
###Output
VAL Epoch [28]: [0/200]
VAL Epoch [28]: [100/200]
0.9918118447065353 0.5256112039089202 0.9972943663597107 0.07290332764387131
0.9578584372997284 -0.1372625157237053 0.9923917055130005 -0.2736058235168457
###Markdown
SELECT THRESHOLD
###Code
import numpy
# Select threshold
best_acc = 0
best_tr = 0
sup = numpy.round(neg_95,decimals=3)
inf = numpy.round(pos_5,decimals=3)
for value in numpy.arange(inf, sup, 0.1):
numpy.round(value,decimals=3)
val_acc = val_tr(model, val_loader, epoch, device=device, tr=value)
av_acc = np.mean(val_acc)
if av_acc > best_acc:
best_acc = av_acc
best_tr = value
print('Best accuracy:', av_acc, 'Treshold:', best_tr)
sup = numpy.round(best_tr+.05,decimals=3)
inf = numpy.round(best_tr-.05,decimals=3)
for value in numpy.arange(inf, sup, 0.01):
numpy.round(value,decimals=3)
val_acc = val_tr(model, val_loader, epoch, device=device, tr=value)
av_acc = np.mean(val_acc)
if av_acc > best_acc:
best_acc = av_acc
best_tr = value
print('Best accuracy:', av_acc, 'Treshold:', best_tr)
sup = numpy.round(best_tr+.005,decimals=3)
inf = numpy.round(best_tr-.005,decimals=3)
for value in numpy.arange(inf, sup, 0.001):
numpy.round(value,decimals=3)
val_acc = val_tr(model, val_loader, epoch, device=device, tr=value)
av_acc = np.mean(val_acc)
if av_acc > best_acc:
best_acc = av_acc
best_tr = value
print('Best accuracy:', av_acc, 'Treshold:', best_tr)
print('Best accuracy:', av_acc, 'Treshold:', best_tr)
###Output
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
Best accuracy: 0.8214285714285714 Treshold: 0.526
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
Best accuracy: 0.8342857142857143 Treshold: 0.626
VAL Epoch [28]: [0/200] Accuracy: [0.5714285714285714]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.7142857142857143]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.7857142857142857]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
Best accuracy: 0.835 Treshold: 0.646
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
Best accuracy: 0.8357142857142857 Treshold: 0.641
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
VAL Epoch [28]: [0/200] Accuracy: [0.5]
VAL Epoch [28]: [100/200] Accuracy: [0.9285714285714286]
Best accuracy: 0.835 Treshold: 0.641
###Markdown
LOAD DATASET SPLIT FOR TEST
###Code
test_loader = get_dataloader(args.split_testdata, args,
img_transforms=val_transforms, split="test")
###Output
Dataset loaded
2800 samples in the test dataset
###Markdown
RUN TEST
###Code
# Test
best_tr = 0.641
test_acc = test(model, loss_fn, test_loader, epoch, device=device, tr=best_tr)
av_acc = np.mean(test_acc)
print('Average test accuracy:', av_acc)
###Output
TEST Epoch [28]: [0/200] Accuracy: [0.7857142857142857]
TEST Epoch [28]: [100/200] Accuracy: [0.7857142857142857]
Average test accuracy: 0.8567857142857142
|
dmu24/dmu24_ELAIS-N1/2_Photo-z_Selection_Function.ipynb | ###Markdown
ELAIS-N1 Photo-z selection functions
The goal is to create a selection function for the photometric redshifts that varies spatially across the field. We will use the depth maps for the optical masterlist to find regions of the field that have similar photometric coverage and then calculate the fraction of sources meeting a given photo-z selection within those pixels.
1. For the optical depth maps: do a clustering analysis to find HEALpix cells with similar photometric properties.
2. Calculate the selection function within those groups of similar regions as a function of magnitude in a given band.
3. Parametrise the selection function in such a way that it can easily be applied for a given sample of sources or region.
###Code
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
import datetime
print("This notebook was executed on: \n{}".format(datetime.datetime.now()))
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
import os
import time
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table, join
import numpy as np
from pymoc import MOC
import healpy as hp
#import pandas as pd  # Astropy has a group_by function so pandas isn't required.
import seaborn as sns
import warnings
#We ignore warnings - this is a little dangerous but a huge number of warnings are generated by empty cells later
warnings.filterwarnings('ignore')
from herschelhelp_internal.utils import inMoc, coords_to_hpidx, flux_to_mag
from herschelhelp_internal.masterlist import find_last_ml_suffix, nb_ccplots
from astropy.io.votable import parse_single_table
from astropy.io import fits
from astropy.stats import binom_conf_interval
from astropy.utils.console import ProgressBar
from astropy.modeling.fitting import LevMarLSQFitter
from sklearn.cluster import MiniBatchKMeans, MeanShift
from collections import Counter
from astropy.modeling import Fittable1DModel, Parameter
class GLF1D(Fittable1DModel):
"""
Generalised Logistic Function
"""
inputs = ('x',)
outputs = ('y',)
A = Parameter()
B = Parameter()
K = Parameter()
Q = Parameter()
nu = Parameter()
M = Parameter()
@staticmethod
def evaluate(x, A, B, K, Q, nu, M):
top = K - A
bottom = (1 + Q*np.exp(-B*(x-M)))**(1/nu)
return A + (top/bottom)
@staticmethod
def fit_deriv(x, A, B, K, Q, nu, M):
        # analytic derivatives of A + (K - A) / (1 + Q exp(-B (x - M)))**(1/nu) with respect to each parameter
        d_A = 1 - (1 + Q*np.exp(-B*(x-M)))**(-1/nu)
        d_B = ((K - A) * (x-M) * (Q*np.exp(-B*(x-M)))) / (nu * ((1 + Q*np.exp(-B*(x-M)))**((1/nu) + 1)))
        d_K = (1 + Q*np.exp(-B*(x-M)))**(-1/nu)
        d_Q = -((K - A) * np.exp(-B*(x-M))) / (nu * ((1 + Q*np.exp(-B*(x-M)))**((1/nu) + 1)))
d_nu = ((K-A) * np.log(1 + (Q*np.exp(-B*(x-M))))) / ((nu**2) * ((1 + Q*np.exp(-B*(x-M)))**((1/nu))))
d_M = -((K - A) * (Q*B*np.exp(-B*(x-M)))) / (nu * ((1 + Q*np.exp(-B*(x-M)))**((1/nu) + 1)))
return [d_A, d_B, d_K, d_Q, d_nu, d_M]
class InverseGLF1D(Fittable1DModel):
"""
Generalised Logistic Function
"""
inputs = ('x',)
outputs = ('y',)
A = Parameter()
B = Parameter()
K = Parameter()
Q = Parameter()
nu = Parameter()
M = Parameter()
@staticmethod
def evaluate(x, A, B, K, Q, nu, M):
return M - (1/B)*(np.log((((K - A)/(x -A))**nu - 1)/Q))
###Output
_____no_output_____
###Markdown
0 - Set relevant initial parameters
###Code
FIELD = 'ELAIS-N1'
ORDER = 10
OUT_DIR = 'selection_functions'
SUFFIX = 'depths_20171016_photoz_20170725'
DEPTH_MAP = 'ELAIS-N1/depths_elais-n1_20171016.fits'
MASTERLIST = 'ELAIS-N1/master_catalogue_elais-n1_20170627.fits'
PHOTOZS = 'ELAIS-N1/master_catalogue_elais-n1_20170706_photoz_20170725_irac1_optimised.fits'
help(find_last_ml_suffix)
###Output
Help on function find_last_ml_suffix in module herschelhelp_internal.masterlist:
find_last_ml_suffix(directory='./data/')
Find the data prefix of the last masterlist.
This function returns the data prefix to use to get the last master list
from a directory.
###Markdown
I - Find clustering of healpix in the depth maps
###Code
depth_map = Table.read(DEPTH_MAP)
# Get Healpix IDs
hp_idx = depth_map['hp_idx_O_{0}'.format(ORDER)]
# Calculate RA, Dec of depth map Healpix pixels for later plotting etc.
dm_hp_ra, dm_hp_dec = hp.pix2ang(2**ORDER, hp_idx, nest=True, lonlat=True)
###Output
_____no_output_____
###Markdown
The depth map provides two measures of depth:
###Code
mean_values = Table(depth_map.columns[2::2]) # Mean 1-sigma error within a cell
p90_values = Table(depth_map.columns[3::2]) # 90th percentile of observed fluxes
###Output
_____no_output_____
###Markdown
For the photo-z selection functions we will make use of the mean 1-sigma error, as this can be used to accurately predict the completeness as a function of magnitude. We convert the mean 1-sigma uncertainty to a 3-sigma magnitude upper limit and convert it to a usable array. When a given flux has no measurement in a healpix (and *ferr_mean* is therefore a *NaN*) we set the depth to some semi-arbitrary bright limit separate from the observed depths:
###Code
dm_clustering = 23.9 - 2.5*np.log10(3*mean_values.to_pandas().as_matrix())
dm_clustering[np.isnan(dm_clustering)] = 14
dm_clustering[np.isinf(dm_clustering)] = 14
###Output
_____no_output_____
###Markdown
To encourage the clustering to group nearby Healpix together, we also add the RA and Dec of the healpix to the input dataset:
###Code
dm_clustering = np.hstack([dm_clustering, np.array(dm_hp_ra.data, ndmin=2).T, np.array(dm_hp_dec.data, ndmin=2).T])
###Output
_____no_output_____
###Markdown
Next, we find clusters within the depth maps using a simple k-means clustering. For the number of clusters we assume an initial guess on the order of the number of different input magnitudes (/depths) in the dataset. This produces good initial results but may need further tuning:
###Code
NCLUSTERS = dm_clustering.shape[1]*2
km = MiniBatchKMeans(n_clusters=NCLUSTERS)
km.fit(dm_clustering)
counts = Counter(km.labels_) # Quickly calculate sizes of the clusters for reference
clusters = dict(zip(hp_idx.data, km.labels_))
Fig, Ax = plt.subplots(1,1,figsize=(8,8))
Ax.scatter(dm_hp_ra, dm_hp_dec, c=km.labels_, cmap=plt.cm.tab20, s=6)
Ax.set_xlabel('Right Ascension [deg]')
Ax.set_ylabel('Declination [deg]')
Ax.set_title('{0}'.format(FIELD))
###Output
_____no_output_____
###Markdown
II - Map photo-$z$ and masterlist objects to their corresponding depth cluster
We now load the photometric redshift catalog and keep only the key columns for this selection function.
Note: if using a different photo-$z$ measure than the HELP standard `z1_median`, the relevant columns should be retained instead.
###Code
photoz_catalogue = Table.read(PHOTOZS)
photoz_catalogue.keep_columns(['help_id', 'RA', 'DEC', 'id', 'z1_median', 'z1_min', 'z1_max', 'z1_area'])
###Output
_____no_output_____
###Markdown
Next we load the relevant sections of the masterlist catalog (including the magnitude columns) and map the Healpix values to their corresponding cluster. For each of the masterlist/photo-$z$ sources and their corresponding healpix we find the respective cluster.
###Code
masterlist_hdu = fits.open(MASTERLIST, memmap=True)
masterlist = masterlist_hdu[1]
masterlist_catalogue = Table()
masterlist_catalogue['help_id'] = masterlist.data['help_id']
masterlist_catalogue['RA'] = masterlist.data['ra']
masterlist_catalogue['DEC'] = masterlist.data['dec']
for column in masterlist.columns.names:
if (column.startswith('m_') or column.startswith('merr_')):
masterlist_catalogue[column] = masterlist.data[column]
masterlist_hpx = coords_to_hpidx(masterlist_catalogue['RA'], masterlist_catalogue['DEC'], ORDER)
masterlist_catalogue["hp_idx_O_{:d}".format(ORDER)] = masterlist_hpx
masterlist_cl_no = np.array([clusters[hpx] for hpx in masterlist_hpx])
masterlist_catalogue['hp_depth_cluster'] = masterlist_cl_no
merged = join(masterlist_catalogue, photoz_catalogue, join_type='left', keys=['help_id', 'RA', 'DEC'])
###Output
_____no_output_____
###Markdown
Constructing the output selection function table:
The photo-$z$ selection function will be saved in a table that mirrors the format of the input optical depth maps, with matching length.
###Code
pz_depth_map = Table()
pz_depth_map.add_column(depth_map['hp_idx_O_13'])
pz_depth_map.add_column(depth_map['hp_idx_O_10'])
###Output
_____no_output_____
###Markdown
III - Creating the binary photo-$z$ selection function
With the sources now easily grouped into regions of similar photometric properties, we can calculate the photo-$z$ selection function within each cluster of pixels. To begin with we want to create the most basic set of photo-$z$ selection functions - a map of the fraction of sources in the masterlist in a given region that have a photo-$z$ estimate. We will then create more informative selection function maps that make use of the added information from clustering.
###Code
NCLUSTERS # Fixed during the clustering stage above
cluster_photoz_fraction = np.ones(NCLUSTERS)
pz_frac_cat = np.zeros(len(merged))
pz_frac_map = np.zeros(len(dm_hp_ra))
for ic, cluster in enumerate(np.arange(NCLUSTERS)):
ml_sources = (merged['hp_depth_cluster'] == cluster)
has_photoz = (merged['z1_median'] > -90.)
in_ml = np.float(ml_sources.sum())
withz = np.float((ml_sources*has_photoz).sum())
if in_ml > 0:
frac = withz / in_ml
else:
frac = 0.
cluster_photoz_fraction[ic] = frac
print("""{0} In cluster: {1:<6.0f} With photo-z: {2:<6.0f}\
Fraction: {3:<6.3f}""".format(cluster, in_ml, withz, frac))
# Map fraction to catalog positions for reference
where_cat = (merged['hp_depth_cluster'] == cluster)
pz_frac_cat[where_cat] = frac
# Map fraction back to depth map healpix
where_map = (km.labels_ == cluster)
pz_frac_map[where_map] = frac
###Output
0 In cluster: 82628 With photo-z: 69189 Fraction: 0.837
1 In cluster: 21592 With photo-z: 2969 Fraction: 0.138
2 In cluster: 14850 With photo-z: 0 Fraction: 0.000
3 In cluster: 26284 With photo-z: 7225 Fraction: 0.275
4 In cluster: 11563 With photo-z: 1514 Fraction: 0.131
5 In cluster: 24064 With photo-z: 0 Fraction: 0.000
6 In cluster: 19702 With photo-z: 3857 Fraction: 0.196
7 In cluster: 1002 With photo-z: 168 Fraction: 0.168
8 In cluster: 6097 With photo-z: 0 Fraction: 0.000
9 In cluster: 4679 With photo-z: 0 Fraction: 0.000
10 In cluster: 3943 With photo-z: 692 Fraction: 0.176
11 In cluster: 8989 With photo-z: 0 Fraction: 0.000
12 In cluster: 8343 With photo-z: 0 Fraction: 0.000
13 In cluster: 11418 With photo-z: 0 Fraction: 0.000
14 In cluster: 17799 With photo-z: 11404 Fraction: 0.641
15 In cluster: 72915 With photo-z: 61350 Fraction: 0.841
16 In cluster: 4 With photo-z: 0 Fraction: 0.000
17 In cluster: 20290 With photo-z: 3299 Fraction: 0.163
18 In cluster: 97935 With photo-z: 77317 Fraction: 0.789
19 In cluster: 988 With photo-z: 368 Fraction: 0.372
20 In cluster: 3947 With photo-z: 940 Fraction: 0.238
21 In cluster: 22728 With photo-z: 0 Fraction: 0.000
22 In cluster: 9536 With photo-z: 2483 Fraction: 0.260
23 In cluster: 112932 With photo-z: 98701 Fraction: 0.874
24 In cluster: 147240 With photo-z: 120714 Fraction: 0.820
25 In cluster: 20206 With photo-z: 3519 Fraction: 0.174
26 In cluster: 8264 With photo-z: 5573 Fraction: 0.674
27 In cluster: 788 With photo-z: 0 Fraction: 0.000
28 In cluster: 3551 With photo-z: 432 Fraction: 0.122
29 In cluster: 39307 With photo-z: 12482 Fraction: 0.318
30 In cluster: 80280 With photo-z: 66654 Fraction: 0.830
31 In cluster: 4204 With photo-z: 831 Fraction: 0.198
32 In cluster: 16910 With photo-z: 12059 Fraction: 0.713
33 In cluster: 7272 With photo-z: 6153 Fraction: 0.846
34 In cluster: 49929 With photo-z: 32971 Fraction: 0.660
35 In cluster: 134366 With photo-z: 115005 Fraction: 0.856
36 In cluster: 3643 With photo-z: 1086 Fraction: 0.298
37 In cluster: 44602 With photo-z: 8412 Fraction: 0.189
38 In cluster: 12260 With photo-z: 0 Fraction: 0.000
39 In cluster: 61382 With photo-z: 31830 Fraction: 0.519
40 In cluster: 1741 With photo-z: 0 Fraction: 0.000
41 In cluster: 39915 With photo-z: 25612 Fraction: 0.642
42 In cluster: 756 With photo-z: 0 Fraction: 0.000
43 In cluster: 144268 With photo-z: 120085 Fraction: 0.832
44 In cluster: 20944 With photo-z: 15889 Fraction: 0.759
45 In cluster: 25398 With photo-z: 21595 Fraction: 0.850
46 In cluster: 45878 With photo-z: 31389 Fraction: 0.684
47 In cluster: 21939 With photo-z: 4246 Fraction: 0.194
48 In cluster: 11224 With photo-z: 1 Fraction: 0.000
49 In cluster: 19402 With photo-z: 0 Fraction: 0.000
50 In cluster: 16194 With photo-z: 3778 Fraction: 0.233
51 In cluster: 2786 With photo-z: 0 Fraction: 0.000
52 In cluster: 3033 With photo-z: 0 Fraction: 0.000
53 In cluster: 23082 With photo-z: 17528 Fraction: 0.759
54 In cluster: 3837 With photo-z: 877 Fraction: 0.229
55 In cluster: 117955 With photo-z: 98807 Fraction: 0.838
56 In cluster: 4729 With photo-z: 0 Fraction: 0.000
57 In cluster: 16861 With photo-z: 0 Fraction: 0.000
58 In cluster: 143652 With photo-z: 89746 Fraction: 0.625
59 In cluster: 5096 With photo-z: 0 Fraction: 0.000
60 In cluster: 1490 With photo-z: 625 Fraction: 0.419
61 In cluster: 5722 With photo-z: 508 Fraction: 0.089
62 In cluster: 22857 With photo-z: 0 Fraction: 0.000
63 In cluster: 88789 With photo-z: 77004 Fraction: 0.867
64 In cluster: 110994 With photo-z: 95905 Fraction: 0.864
65 In cluster: 20218 With photo-z: 3332 Fraction: 0.165
66 In cluster: 11708 With photo-z: 0 Fraction: 0.000
67 In cluster: 15168 With photo-z: 5342 Fraction: 0.352
68 In cluster: 2529 With photo-z: 0 Fraction: 0.000
69 In cluster: 10158 With photo-z: 6738 Fraction: 0.663
70 In cluster: 149171 With photo-z: 121770 Fraction: 0.816
71 In cluster: 47639 With photo-z: 41370 Fraction: 0.868
72 In cluster: 6004 With photo-z: 865 Fraction: 0.144
73 In cluster: 79984 With photo-z: 68404 Fraction: 0.855
74 In cluster: 11476 With photo-z: 8949 Fraction: 0.780
75 In cluster: 114394 With photo-z: 96172 Fraction: 0.841
76 In cluster: 19232 With photo-z: 4625 Fraction: 0.240
77 In cluster: 8769 With photo-z: 215 Fraction: 0.025
78 In cluster: 3172 With photo-z: 67 Fraction: 0.021
79 In cluster: 170650 With photo-z: 133279 Fraction: 0.781
80 In cluster: 2332 With photo-z: 28 Fraction: 0.012
81 In cluster: 2341 With photo-z: 764 Fraction: 0.326
82 In cluster: 25005 With photo-z: 4839 Fraction: 0.194
83 In cluster: 8840 With photo-z: 6909 Fraction: 0.782
84 In cluster: 5812 With photo-z: 0 Fraction: 0.000
85 In cluster: 19158 With photo-z: 13734 Fraction: 0.717
86 In cluster: 38188 With photo-z: 9414 Fraction: 0.247
87 In cluster: 212908 With photo-z: 173463 Fraction: 0.815
88 In cluster: 3457 With photo-z: 792 Fraction: 0.229
89 In cluster: 21175 With photo-z: 4628 Fraction: 0.219
90 In cluster: 7165 With photo-z: 594 Fraction: 0.083
91 In cluster: 30436 With photo-z: 0 Fraction: 0.000
92 In cluster: 4987 With photo-z: 3660 Fraction: 0.734
93 In cluster: 3765 With photo-z: 851 Fraction: 0.226
94 In cluster: 237613 With photo-z: 197277 Fraction: 0.830
95 In cluster: 55873 With photo-z: 42960 Fraction: 0.769
96 In cluster: 320 With photo-z: 0 Fraction: 0.000
97 In cluster: 95894 With photo-z: 74589 Fraction: 0.778
98 In cluster: 8417 With photo-z: 0 Fraction: 0.000
99 In cluster: 14966 With photo-z: 1 Fraction: 0.000
100 In cluster: 7135 With photo-z: 3945 Fraction: 0.553
101 In cluster: 39816 With photo-z: 29024 Fraction: 0.729
102 In cluster: 35839 With photo-z: 10237 Fraction: 0.286
103 In cluster: 5679 With photo-z: 4164 Fraction: 0.733
104 In cluster: 26991 With photo-z: 21254 Fraction: 0.787
105 In cluster: 125149 With photo-z: 106118 Fraction: 0.848
106 In cluster: 43175 With photo-z: 33846 Fraction: 0.784
107 In cluster: 136610 With photo-z: 117675 Fraction: 0.861
###Markdown
The binary photo-$z$ selection function of the field
###Code
Fig, Ax = plt.subplots(1,1,figsize=(9.5,8))
Sc = Ax.scatter(dm_hp_ra, dm_hp_dec, c=pz_frac_map, cmap=plt.cm.viridis, s=6, vmin=0, vmax=1)
Ax.set_xlabel('Right Ascension [deg]')
Ax.set_ylabel('Declination [deg]')
Ax.set_title('{0}'.format(FIELD))
CB = Fig.colorbar(Sc)
CB.set_label('Fraction with photo-z estimate')
###Output
_____no_output_____
###Markdown
Add the binary photo-$z$ selection function to output catalog
###Code
pz_depth_map.add_column(Column(name='pz_fraction', data=pz_frac_map))
###Output
_____no_output_____
###Markdown
V - Magnitude dependent photo-$z$ selection functions
The binary selection function gives a broad illustration of where photo-$z$s are available in the field (given the availability of optical datasets etc.). However, the fraction of sources that have an estimate available will depend on the brightness of a given source in the bands used for photo-$z$s. Furthermore, the quality of those photo-$z$s is also highly dependent on the depth, wavelength coverage and sampling of the optical data in that region. To calculate the likelihood of a given source having a photo-$z$ that passes the defined quality selection, or to be able to select samples of homogeneous photo-$z$ quality, we therefore need to estimate the magnitude (and spatially) dependent selection function.
Defining the photo-$z$ quality criteria
A key stage in the photo-$z$ estimation methodology is the explicit calibration of the redshift posteriors as a function of magnitude. The benefit of this approach is that by making a cut based on the width of the redshift posterior, $P(z)$, we can select sources with a desired estimated redshift precision. Making this cut based on the full $P(z)$ is impractical. However, since the main photo-$z$ catalog contains information about the width of the primary and secondary peaks above the 80% highest probability density (HPD) credible interval, we can use this information to determine our redshift quality criteria.
Parse columns to select the available magnitudes within the masterlist:
###Code
filters = [col for col in merged.colnames if col.startswith('m_')]
print('{0} magnitude columns present in the masterlist.'.format(len(filters)))
scaled_photoz_error = (0.5*(merged['z1_max']- merged['z1_min'])) / (1 + merged['z1_median'])
photoz_quality_cut = (scaled_photoz_error < 0.2)
###Output
_____no_output_____
###Markdown
To calculate the magnitude dependent selection function in a given masterlist filter, for each of the Healpix clusters we do the following:
1. Find the number of masterlist sources within that cluster that have a measurement in the corresponding filter. (If this is zero - see stage 3B.)
2. Calculate the fraction of those sources that have a photo-$z$ passing the quality cut, in bins of magnitude. This relation typically follows the form of a sigmoid function that declines towards fainter magnitudes - however, depending on the selection being applied it may not start at 1. Similarly, the rate of decline and the turnover point depend on the depth and optical selection properties of that cluster.
3. A. Fit the magnitude dependence using the generalised logistic function (GLF, or Richards' function). Provided with conservative boundary conditions and plausible starting conditions based on easily estimated properties (i.e. the typical magnitude in the cluster and the maximum point), this function is able to describe well almost the full range of measured selection functions. B. If no masterlist sources in the cluster have an observation in the filter, all parameters are set to zero (with the GLF then returning zero for all magnitudes).
4. Map the parameters estimated for a given healpix cluster back to the healpix belonging to that cluster.
###Code
for photometry_band in filters:
print(photometry_band)
pz_frac_cat = np.zeros(len(merged))
pz_M_map = np.zeros((len(dm_hp_ra),6))
m001, m999 = np.nanpercentile(merged[photometry_band], [0.1, 99.9])
counts, binedges = np.histogram(merged[photometry_band],
range=(np.minimum(m001, 17.), np.minimum(m999, 29.)),
bins=10)
binmids = 0.5*(binedges[:-1] + binedges[1:])
with ProgressBar(NCLUSTERS, ipython_widget=True) as bar:
for ic, cluster in enumerate(np.arange(NCLUSTERS)[:]):
ml_sources = (merged['hp_depth_cluster'] == cluster)
has_photoz = (merged['z1_median'] > -90.) * photoz_quality_cut
has_mag = (merged[photometry_band] > -90.)
in_ml = np.float(ml_sources.sum())
withz = (has_photoz)
frac = []
frac_upper = []
frac_lower = []
iqr25_mag = (np.nanpercentile(merged[photometry_band][ml_sources*has_photoz], 25))
if (ml_sources*has_photoz*has_mag).sum() > 1:
for i in np.arange(len(binedges[:-1])):
mag_cut = np.logical_and(merged[photometry_band] >= binedges[i],
merged[photometry_band] < binedges[i+1])
if (ml_sources * mag_cut).sum() > 0:
pass_cut = np.sum(ml_sources * withz * mag_cut)
total_cut = np.sum(ml_sources * mag_cut)
frac.append(np.float(pass_cut) / total_cut)
lower, upper = binom_conf_interval(pass_cut, total_cut)
frac_lower.append(lower)
frac_upper.append(upper)
else:
frac.append(0.)
frac_lower.append(0.)
frac_upper.append(1.)
frac = np.array(frac)
frac_upper = np.array(frac_upper)
frac_lower = np.array(frac_lower)
model = GLF1D(A=np.median(frac[:5]), K=0., B=0.9, Q=1., nu=0.4, M=iqr25_mag,
bounds={'A': (0,1), 'K': (0,1), 'B': (0., 5.),
'M': (np.minimum(m001, 17.), np.minimum(m999, 29.)),
'Q': (0., 10.),
'nu': (0, None)})
fit = LevMarLSQFitter()
m = fit(model, x=binmids, y=frac, maxiter=1000,
weights=1/(0.5*((frac_upper-frac) + (frac-frac_lower))),
estimate_jacobian=False)
parameters = np.copy(m.parameters)
else:
frac = np.zeros(len(binmids))
frac_upper = np.zeros(len(binmids))
frac_lower = np.zeros(len(binmids))
parameters = np.zeros(6)
# Map parameters to cluster
# Map parameters back to depth map healpix
where_map = (km.labels_ == cluster)
pz_M_map[where_map] = parameters
bar.update()
c = Column(data=pz_M_map, name='pz_glf_{0}'.format(photometry_band), shape=(1,6))
try:
pz_depth_map.add_column(c)
except:
pz_depth_map.replace_column('pz_glf_{0}'.format(photometry_band), c)
###Output
m_ap_wfc_u
###Markdown
The selection function catalog consists of a set of parameters for the generalised logistic function (GLF, or Richards' function) that can be used to calculate the fraction of masterlist sources that have a photo-$z$ estimate satisfying the quality cut as a function of a given magnitude, e.g. $S = \rm{GLF}(M_{f}, \textbf{P}_{\rm{Healpix}})$, where $S$ is the success fraction for a given magnitude $M_{f}$ in a given filter, $f$, and $\textbf{P}_{\rm{Healpix}}$ corresponds to the set of 6 parameters fit for that healpix. In practical terms, using the GLF function defined in this notebook this would be `S = GLF1D(*P)(M)`. Similarly, to estimate the magnitude corresponding to a desired photo-$z$ completeness one can use the same parameters and the corresponding inverse function: `M = InverseGLF1D(*P)(S)`.
Save the photo-$z$ selection function catalog:
###Code
pz_depth_map.write('{0}/photo-z_selection_{1}_{2}.fits'.format(OUT_DIR, FIELD, SUFFIX).lower(), format='fits', overwrite=True)
###Output
_____no_output_____
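###Markdown
For reference, a minimal usage sketch of the saved parameters follows (this cell is an added illustration rather than part of the original pipeline); the filter name is simply one of the columns created in the loop above.
###Code
# Added illustration: evaluate the saved GLF parameters for a single healpix.
# Assumes the 'pz_glf_m_ap_ukidss_j' column was created in the loop above;
# healpix with no coverage carry all-zero parameters and return 0/NaN here.
example_pars = pz_depth_map['pz_glf_m_ap_ukidss_j'][0]
completeness_at_21 = GLF1D.evaluate(21.0, *example_pars)
mag_at_50pc_complete = InverseGLF1D(*example_pars)(0.5)
print(completeness_at_21, mag_at_50pc_complete)
###Output
_____no_output_____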
###Markdown
VI - Illustrating the photo-$z$ selection function
###Code
Fig, Ax = plt.subplots(1,1,figsize=(9.5,8))
photometry_band = 'm_ap_cfht_megacam_u'
model = GLF1D(*pz_M_map[0])
mag = 20.
value = [GLF1D.evaluate(mag, *pars) for pars in pz_depth_map['pz_glf_{0}'.format(photometry_band)]]
Sc = Ax.scatter(dm_hp_ra, dm_hp_dec, c=value, cmap=plt.cm.viridis, s=8)
Ax.set_xlabel('Right Ascension [deg]')
Ax.set_ylabel('Declination [deg]')
Ax.set_title('{0} - {1} = {2}'.format(FIELD, photometry_band, mag))
CB = Fig.colorbar(Sc)
CB.set_label(r'Photometric Redshift Completeness ($\Delta z_{80\%\rm{HPD}} / (1 + z_{\rm{phot}}$) < 0.2)')
Fig.savefig('pz_completeness_map_{0}_{1}.png'.format(photometry_band, mag), format='png', dpi=150, bbox_inches='tight')
Fig, Ax = plt.subplots(1,1,figsize=(9.5,8))
photometry_band = 'm_ap_ukidss_j'
model = GLF1D(*pz_M_map[0])
mag = 20.
value = [GLF1D.evaluate(mag, *pars) for pars in pz_depth_map['pz_glf_{0}'.format(photometry_band)]]
Sc = Ax.scatter(dm_hp_ra, dm_hp_dec, c=value, cmap=plt.cm.viridis, s=8)
Ax.set_xlabel('Right Ascension [deg]')
Ax.set_ylabel('Declination [deg]')
Ax.set_title('{0} - {1} = {2}'.format(FIELD, photometry_band, mag))
CB = Fig.colorbar(Sc)
CB.set_label(r'Photometric Redshift Completeness ($\Delta z_{80\%\rm{HPD}} / (1 + z_{\rm{phot}}$) < 0.2)')
Fig.savefig('pz_completeness_map_{0}_{1}.png'.format(photometry_band, mag), format='png', dpi=150, bbox_inches='tight')
Fig, Ax = plt.subplots(1,1,figsize=(9.5,8))
photometry_band = 'm_ap_ukidss_j'
model = GLF1D(*pz_M_map[0])
completeness = 0.5
mag_complete = [InverseGLF1D(*pars)(completeness) for pars in pz_depth_map['pz_glf_{0}'.format(photometry_band)]]
Sc = Ax.scatter(dm_hp_ra, dm_hp_dec, c=mag_complete, cmap=plt.cm.viridis, s=8)
Ax.set_xlabel('Right Ascension [deg]')
Ax.set_ylabel('Declination [deg]')
Ax.set_title('{0} - {1} = {2}'.format(FIELD, photometry_band, mag))
CB = Fig.colorbar(Sc)
CB.set_label(r'Magnitude at which 50% Complete')
Fig.savefig('pz_mag_depth_map_{0}_{1}.png'.format(photometry_band, mag), format='png', dpi=150, bbox_inches='tight')
###Output
_____no_output_____ |
03-NumPy-and-Linear-Algebra/linear-algebra-python-basics.ipynb | ###Markdown
Linear Algebra and Python Basics
by Rob Hicks http://rlhick.people.wm.edu/stories/linear-algebra-python-basics.html
In this chapter, I will be discussing some linear algebra basics that will provide sufficient linear algebra background for effective programming in Python for our purposes. We will be doing very basic linear algebra that by no means covers the full breadth of this topic.
Why linear algebra? Linear algebra allows us to express relatively complex linear expressions in a very compact way. Being comfortable with the rules for scalar and matrix addition, subtraction, multiplication, and division (known as inversion) is important for our class. Before we can implement any of these ideas in code, we need to talk a bit about python and how data is stored.
Python Primer
There are numerous ways to run python code. I will show you two and both are easily accessible after installing Anaconda:
1. The Spyder integrated development environment. The major advantage of Spyder is that it provides a graphical way for viewing matrices, vectors, and other objects you want to check as you work on a problem. It also has the most intuitive way of debugging code. Spyder looks like this (screenshot omitted). Code can be run by clicking the green arrow (runs the entire file) or by blocking a subset and running it. In Windows or Mac, you can launch Spyder by looking for the icon in the newly installed Anaconda program folder.
2. The Ipython Notebook (now called Jupyter). The major advantage of this approach is that you use your web browser for all of your python work and you can mix code, videos, notes, graphics from the web, and mathematical notation to tell the whole story of your python project. In fact, I am using the ipython notebook for writing these notes. The Ipython Notebook looks like this (screenshot omitted). In Windows or Mac, you can launch the Ipython Notebook by looking in the newly installed Anaconda program folder.
In my work flow, I usually only use the Ipython Notebook, but for some coding problems where I need access to the easy debugging capabilities of Spyder, I use it. We will be using the Ipython Notebook interface (web browser) mostly in this class.
Loading libraries
The python universe has a huge number of libraries that extend the capabilities of python. Nearly all of these are open source, unlike packages like stata or matlab where some key libraries are proprietary (and can cost lots of money). In lots of my code, you will see this at the top:
###Code
%matplotlib inline
##import sympy as sympy
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sbn
from scipy import *
###Output
_____no_output_____
###Markdown
This code sets up Ipython Notebook environments (lines beginning with `%`), and loads several libraries and functions. The core scientific stack in python consists of a number of free libraries. The ones I have loaded above include:
1. sympy: provides for symbolic computation (solving algebra problems)
2. numpy: provides for linear algebra computations
3. matplotlib.pyplot: provides for the ability to graph functions and draw figures
4. scipy: scientific python provides a plethora of capabilities
5. seaborn: makes matplotlib figures even prettier (another library like this is called bokeh). This is entirely optional and is purely for eye candy.
Creating arrays, scalars, and matrices in Python
Scalars can be created easily like this:
###Code
x = .5
print x
###Output
0.5
###Markdown
Vectors and Lists
The numpy library (we will reference it by np) is the workhorse library for linear algebra in python. To create a vector, simply surround a python list ($[1,2,3]$) with the np.array function:
###Code
x_vector = np.array([1,2,3])
print x_vector
###Output
[1 2 3]
###Markdown
We could have done this by defining a python list and converting it to an array:
###Code
c_list = [1,2]
print "The list:",c_list
print "Has length:", len(c_list)
c_vector = np.array(c_list)
print "The vector:", c_vector
print "Has shape:",c_vector.shape
z = [5,6]
print "This is a list, not an array:",z
print type(z)
zarray = np.array(z)
print "This is an array, not a list",zarray
print type(zarray)
###Output
This is an array, not a list [5 6]
<type 'numpy.ndarray'>
###Markdown
Matrices
###Code
b = zip(z,c_vector)
print b
print "Note that the length of our zipped list is 2 not (2 by 2):",len(b)
print "But we can convert the list to a matrix like this:"
A = np.array(b)
print A
print type(A)
print "A has shape:",A.shape
###Output
But we can convert the list to a matrix like this:
[[5 1]
[6 2]]
<type 'numpy.ndarray'>
A has shape: (2, 2)
###Markdown
Matrix Addition and Subtraction Adding or subtracting a scalar value to a matrixTo learn the basics, consider a small matrix of dimension $2 \times 2$, where $2 \times 2$ denotes the number of rows $\times$ the number of columns. Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Consider adding a scalar value (e.g. 3) to the A.$$\begin{equation} A+3=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}+3 =\begin{bmatrix} a_{11}+3 & a_{12}+3 \\ a_{21}+3 & a_{22}+3 \end{bmatrix}\end{equation}$$The same basic principle holds true for A-3:$$\begin{equation} A-3=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}-3 =\begin{bmatrix} a_{11}-3 & a_{12}-3 \\ a_{21}-3 & a_{22}-3 \end{bmatrix}\end{equation}$$Notice that we add (or subtract) the scalar value to each element in the matrix A. A can be of any dimension.This is trivial to implement, now that we have defined our matrix A:
###Code
result = A + 3
#or
result = 3 + A
print result
###Output
[[8 4]
[9 5]]
###Markdown
Adding or subtracting two matricesConsider two small $2 \times 2$ matrices, where $2 \times 2$ denotes the \ of rows $\times$ the \ of columns. Let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \bigr)$ and $B$=$\bigl( \begin{smallmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{smallmatrix} \bigr)$. To find the result of $A-B$, simply subtract each element of A with the corresponding element of B:$$\begin{equation} A -B = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} - \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}-b_{11} & a_{12}-b_{12} \\ a_{21}-b_{21} & a_{22}-b_{22} \end{bmatrix}\end{equation}$$Addition works exactly the same way:$$\begin{equation} A + B = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} + \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{bmatrix} = \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix}\end{equation}$$An important point to know about matrix addition and subtraction is that it is only defined when $A$ and $B$ are of the same size. Here, both are $2 \times 2$. Since operations are performed element by element, these two matrices must be conformable- and for addition and subtraction that means they must have the same numbers of rows and columns. I like to be explicit about the dimensions of matrices for checking conformability as I write the equations, so write$$A_{2 \times 2} + B_{2 \times 2}= \begin{bmatrix} a_{11}+b_{11} & a_{12}+b_{12} \\ a_{21}+b_{21} & a_{22}+b_{22} \end{bmatrix}_{2 \times 2}$$Notice that the result of a matrix addition or subtraction operation is always of the same dimension as the two operands.Let's define another matrix, B, that is also $2 \times 2$ and add it to A:
###Code
B = np.random.randn(2,2)
print B
result = A + B
result
###Output
_____no_output_____
###Markdown
Matrix MultiplicationMultiplying a scalar value times a matrixAs before, let $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \bigr)$. Suppose we want to multiply A times a scalar value (e.g. $3 \times A$)$$\begin{equation} 3 \times A = 3 \times \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} = \begin{bmatrix} 3a_{11} & 3a_{12} \\ 3a_{21} & 3a_{22} \end{bmatrix}\end{equation}$$is of dimension (2,2). Scalar multiplication is commutative, so that $3 \times A$=$A \times 3$. Notice that the product is defined for a matrix A of any dimension.Similar to scalar addition and subtration, the code is simple:
###Code
A * 3
###Output
_____no_output_____
###Markdown
Multiplying two matriciesNow, consider the $2 \times 1$ vector $C=\bigl( \begin{smallmatrix} c_{11} \\ c_{21}\end{smallmatrix} \bigr)$ Consider multiplying matrix $A_{2 \times 2}$ and the vector $C_{2 \times 1}$. Unlike the addition and subtraction case, this product is defined. Here, conformability depends not on the row **and** column dimensions, but rather on the column dimensions of the first operand and the row dimensions of the second operand. We can write this operation as follows$$\begin{equation} A_{2 \times 2} \times C_{2 \times 1} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}_{2 \times 2} \times \begin{bmatrix} c_{11} \\ c_{21} \end{bmatrix}_{2 \times 1} = \begin{bmatrix} a_{11}c_{11} + a_{12}c_{21} \\ a_{21}c_{11} + a_{22}c_{21} \end{bmatrix}_{2 \times 1}\end{equation}$$Alternatively, consider a matrix C of dimension $2 \times 3$ and a matrix A of dimension $3 \times 2$$$\begin{equation} A_{3 \times 2}=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}_{3 \times 2} , C_{2 \times 3} = \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \\ \end{bmatrix}_{2 \times 3} \end{equation}$$Here, A $\times$ C is$$\begin{align} A_{3 \times 2} \times C_{2 \times 3}=& \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}_{3 \times 2} \times \begin{bmatrix} c_{11} & c_{12} & c_{13} \\ c_{21} & c_{22} & c_{23} \end{bmatrix}_{2 \times 3} \\ =& \begin{bmatrix} a_{11} c_{11}+a_{12} c_{21} & a_{11} c_{12}+a_{12} c_{22} & a_{11} c_{13}+a_{12} c_{23} \\ a_{21} c_{11}+a_{22} c_{21} & a_{21} c_{12}+a_{22} c_{22} & a_{21} c_{13}+a_{22} c_{23} \\ a_{31} c_{11}+a_{32} c_{21} & a_{31} c_{12}+a_{32} c_{22} & a_{31} c_{13}+a_{32} c_{23} \end{bmatrix}_{3 \times 3} \end{align}$$So in general, $X_{r_x \times c_x} \times Y_{r_y \times c_y}$ we have two important things to remember: * For conformability in matrix multiplication, $c_x=r_y$, or the columns in the first operand must be equal to the rows of the second operand.* The result will be of dimension $r_x \times c_y$, or of dimensions equal to the rows of the first operand and columns equal to columns of the second operand.Given these facts, you should convince yourself that matrix multiplication is not generally commutative, that the relationship $X \times Y = Y \times X$ does **not** hold in all cases.For this reason, we will always be very explicit about whether we are pre multiplying ($X \times Y$) or post multiplying ($Y \times X$) the vectors/matrices $X$ and $Y$.For more information on this topic, see thishttp://en.wikipedia.org/wiki/Matrix_multiplication.
###Code
# Let's redefine A and C to demonstrate matrix multiplication:
A = np.arange(6).reshape((3,2))
C = np.random.randn(2,2)
print A.shape
print C.shape
###Output
(3, 2)
(2, 2)
###Markdown
We will use the numpy dot operator to perform the these multiplications. You can use it two ways to yield the same result:
###Code
print A.dot(C)
print np.dot(A,C)
###Output
[[ 0.48080757 0.43511698]
[ 1.47915018 0.72999774]
[ 2.47749278 1.0248785 ]]
[[ 0.48080757 0.43511698]
[ 1.47915018 0.72999774]
[ 2.47749278 1.0248785 ]]
###Markdown
Suppose instead of pre-multiplying C by A, we post-multiply. The product doesn't exist because we don't have conformability as described above:
###Code
C.dot(A)
###Output
_____no_output_____
###Markdown
Matrix Division
The term matrix division is actually a misnomer. To divide in a matrix algebra world we first need to invert the matrix. It is useful to consider the analog case in a scalar world. Suppose we want to divide $f$ by $g$. We could do this in two different ways:$$\begin{equation} \frac{f}{g}=f \times g^{-1}.\end{equation}$$In a scalar setting, these are equivalent ways of solving the division problem. The second one requires two steps: first we invert $g$ and then we multiply $f$ times $g^{-1}$. In a matrix world, we need to think about this second approach. First we have to invert the matrix g and then we will need to pre or post multiply depending on the exact situation we encounter (this is intended to be vague for now).
Inverting a Matrix
As before, consider the square $2 \times 2$ matrix $A$=$\bigl( \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{smallmatrix} \bigr)$. Let the inverse of matrix A (denoted as $A^{-1}$) be $$\begin{equation} A^{-1}=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}^{-1}=\frac{1}{a_{11}a_{22}-a_{12}a_{21}} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}\end{equation}$$The inverted matrix $A^{-1}$ has a useful property:$$\begin{equation} A \times A^{-1}=A^{-1} \times A=I\end{equation}$$where I, the identity matrix (the matrix equivalent of the scalar value 1), is$$\begin{equation} I_{2 \times 2}=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\end{equation}$$furthermore, $A \times I = A$ and $I \times A = A$. An important feature about matrix inversion is that it is undefined if (in the $2 \times 2$ case) $a_{11}a_{22}-a_{12}a_{21}=0$. If this relationship is equal to zero the inverse of A does not exist. If this term is very close to zero, an inverse may exist but $A^{-1}$ may be poorly conditioned, meaning it is prone to rounding error and is likely not well identified computationally. The term $a_{11}a_{22}-a_{12}a_{21}$ is the determinant of matrix A, and for square matrices of size greater than $2 \times 2$, a value equal to zero indicates that you have a problem with your data matrix (columns are linearly dependent on other columns). The inverse of matrix A exists if A is square and is of full rank (i.e. the columns of A are not linear combinations of other columns of A). For more information on this topic, see this http://en.wikipedia.org/wiki/Matrix_inversion, for example, on inverting matrices.
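As a quick numerical check of the $2 \times 2$ determinant formula above (an added sketch, using the matrix C defined earlier):
###Code
# Compare the hand-computed 2x2 determinant a11*a22 - a12*a21 with numpy's
# np.linalg.det; if it is (near) zero, C would not be invertible.
det_C = C[0, 0]*C[1, 1] - C[0, 1]*C[1, 0]
print(det_C)
print(np.linalg.det(C))
###Output
_____no_output_____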
###Code
# note, we need a square matrix (# rows = # cols), use C:
C_inverse = np.linalg.inv(C)
print C_inverse
###Output
[[ 2.97399042 1.966247 ]
[-3.28628201 0.12551463]]
###Markdown
Check that $C\times C^{-1} = I$:
###Code
print C.dot(C_inverse)
print "Is identical to:"
print C_inverse.dot(C)
###Output
[[ 1.00000000e+00 -4.61031414e-18]
[ -6.43302442e-18 1.00000000e+00]]
Is identical to:
[[ 1.00000000e+00 6.11198607e-17]
[ 6.54738800e-18 1.00000000e+00]]
###Markdown
Transposing a MatrixAt times it is useful to pivot a matrix for conformability- that is in order to matrix divide or multiply, we need to switch the rows and column dimensions of matrices. Consider the matrix$$\begin{equation} A_{3 \times 2}=\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}_{3 \times 2} \end{equation}$$The transpose of A (denoted as $A^{\prime}$) is$$\begin{equation} A^{\prime}=\begin{bmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \\ \end{bmatrix}_{2 \times 3}\end{equation}$$
###Code
A = np.arange(6).reshape((3,2))
B = np.arange(8).reshape((2,4))
print "A is"
print A
print "The Transpose of A is"
print A.T
###Output
A is
[[0 1]
[2 3]
[4 5]]
The Transpose of A is
[[0 2 4]
[1 3 5]]
###Markdown
One important property of transposing a matrix is the transpose of a product of two matrices. Let matrix A be of dimension $N \times M$ and let B of of dimension $M \times P$. Then$$\begin{equation} (AB)^{\prime}=B^{\prime}A^{\prime}\end{equation}$$For more information, see this http://en.wikipedia.org/wiki/Matrix_transposition on matrix transposition. This is also easy to implement:
###Code
print B.T.dot(A.T)
print "Is identical to:"
print (A.dot(B)).T
###Output
[[ 4 12 20]
[ 5 17 29]
[ 6 22 38]
[ 7 27 47]]
Is identical to:
[[ 4 12 20]
[ 5 17 29]
[ 6 22 38]
[ 7 27 47]]
###Markdown
More python tools
Indexing
Python begins indexing at 0 (not 1), therefore the first row and first column is referenced by 0,0 **not** 1,1.
Slicing
Accessing elements of numpy matrices and arrays. This code grabs the first column of A:
###Code
print A
A[:,0]
###Output
[[0 1]
[2 3]
[4 5]]
###Markdown
or, we could grab a particular element (in this case, the second column, last row):
###Code
A[2,1]
###Output
_____no_output_____
###Markdown
Logical Checks to extract values from matrices/arrays:
###Code
print A
print A[:,1]>4
A[A[:,1]>4]
###Output
[False False True]
###Markdown
For loops
Create a $12 \times 2$ matrix and print it out:
###Code
A = np.arange(24).reshape((12,2))
print A
print A.shape
###Output
[[ 0 1]
[ 2 3]
[ 4 5]
[ 6 7]
[ 8 9]
[10 11]
[12 13]
[14 15]
[16 17]
[18 19]
[20 21]
[22 23]]
(12, 2)
###Markdown
The code below loops over the rows (12 of them) of our matrix A. For each row, it slices A and prints the row values across all columns. Notice the form of the for loop. The colon defines the statement we are looping over. For each iteration of the loop **indented** lines will be executed:
###Code
for rows in A:
print rows
for rows in A:
print rows
for cols in A.T:
print cols
###Output
[ 0 2 4 6 8 10 12 14 16 18 20 22]
[ 1 3 5 7 9 11 13 15 17 19 21 23]
###Markdown
If/then/else
The code below checks the value of x and categorizes it into one of three values. Like the for loop, each logical if check is ended with a colon, and any commands to be applied to that particular if check (if true) must be indented.
###Code
x=.4
if x<.5:
print "Heads"
print 100
elif x>.5:
print "Tails"
print 0
else:
print "Tie"
print 50
###Output
Heads
100
###Markdown
While loops
Again, we have the same basic form for the statement (note the colons and indents). Here we use the shorthand notation `x+=1` for performing the calculation `x = x + 1`:
###Code
x=0
while x<10:
x+=1
print x<10
print x
###Output
True
True
True
True
True
True
True
True
True
False
10
###Markdown
Some more
###Code
v = np.random.beta(56, 23, 100)
plt.hist(v)
np.savetxt("myvextor.txt", v.reshape(20,5))
%%sh
head -10 myvextor.txt
w = np.loadtxt("myvextor.txt")
# the original expression was truncated; completing it as a weighted average of v is an assumption
(w.flatten()*v).sum()/(w.sum())
a = v.reshape(10,10)
type(a)
np.matrix(a)
v = np.matrix(a)  # the original assignment was left blank; np.matrix(a) is an assumed completion
type(v)
v = np.matrix(np.random.rand(10,1))
w = np.matrix(np.random.rand(10,1))
a = v.dot(w.transpose())
a.shape
type(a)
lag = np.linalg
###Output
_____no_output_____ |
examples/gallery/links/deck_gl_json_editor.ipynb | ###Markdown
This example demonstrates how to `jslink` a JSON editor to a DeckGL pane to enable super fast, live editing of a plot:
###Code
MAPBOX_KEY = "pk.eyJ1IjoicGFuZWxvcmciLCJhIjoiY2s1enA3ejhyMWhmZjNobjM1NXhtbWRrMyJ9.B_frQsAVepGIe-HiOJeqvQ"
json_spec = {
"initialViewState": {
"bearing": -27.36,
"latitude": 52.2323,
"longitude": -1.415,
"maxZoom": 15,
"minZoom": 5,
"pitch": 40.5,
"zoom": 6
},
"layers": [{
"@@type": "HexagonLayer",
"autoHighlight": True,
"coverage": 1,
"data": "https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv",
"elevationRange": [0, 3000],
"elevationScale": 50,
"extruded": True,
"getPosition": "@@=[lng, lat]",
"id": "8a553b25-ef3a-489c-bbe2-e102d18a3211", "pickable": True
}],
"mapStyle": "mapbox://styles/mapbox/dark-v9",
"views": [{"@@type": "MapView", "controller": True}]
}
view_editor = pn.widgets.Ace(value=json.dumps(json_spec['initialViewState'], indent=4),
theme= 'monokai', width=500, height=225)
layer_editor = pn.widgets.Ace(value=json.dumps(json_spec['layers'][0], indent=4),
theme= 'monokai', width=500, height=365)
deck_gl = pn.pane.DeckGL(json_spec, mapbox_api_key=MAPBOX_KEY, sizing_mode='stretch_width', height=600)
view_editor.jscallback(args={'deck_gl': deck_gl}, value="deck_gl.initialViewState = JSON.parse(cb_obj.code)")
layer_editor.jscallback(args={'deck_gl': deck_gl}, value="deck_gl.layers = [JSON.parse(cb_obj.code)]")
editor = pn.Row(pn.Column(view_editor, layer_editor), deck_gl)
editor
###Output
_____no_output_____
###Markdown
App
Let's wrap it into a nice template that can be served via `panel serve deck_gl_json_editor.ipynb`
###Code
pn.template.FastListTemplate(
site="Panel", title="Deck.gl Json Editor",
main=[
pn.pane.Markdown("This example demonstrates two JSON editors `jslink`ed to a DeckGL pane to enable super fast, live editing of a plot:", sizing_mode="stretch_width"),
editor
]
).servable();
###Output
_____no_output_____
###Markdown
This example demonstrates how to link a JSON editor to a DeckGL pane to enable live editing of a plot:
###Code
MAPBOX_KEY = "pk.eyJ1IjoicGFuZWxvcmciLCJhIjoiY2s1enA3ejhyMWhmZjNobjM1NXhtbWRrMyJ9.B_frQsAVepGIe-HiOJeqvQ"
json_spec = {
"initialViewState": {
"bearing": -27.36,
"latitude": 52.2323,
"longitude": -1.415,
"maxZoom": 15,
"minZoom": 5,
"pitch": 40.5,
"zoom": 6
},
"layers": [{
"@@type": "HexagonLayer",
"autoHighlight": True,
"coverage": 1,
"data": "https://raw.githubusercontent.com/uber-common/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv",
"elevationRange": [0, 3000],
"elevationScale": 50,
"extruded": True,
"getPosition": "@@=[lng, lat]",
"id": "8a553b25-ef3a-489c-bbe2-e102d18a3211", "pickable": True
}],
"mapStyle": "mapbox://styles/mapbox/dark-v9",
"views": [{"@@type": "MapView", "controller": True}]
}
view_editor = pn.widgets.Ace(value=json.dumps(json_spec['initialViewState'], indent=4),
theme= 'monokai', width=500, height=225)
layer_editor = pn.widgets.Ace(value=json.dumps(json_spec['layers'][0], indent=4),
theme= 'monokai', width=500, height=365)
deck_gl = pn.pane.DeckGL(json_spec, mapbox_api_key=MAPBOX_KEY, sizing_mode='stretch_width', height=600)
view_editor.jscallback(args={'deck_gl': deck_gl}, value="deck_gl.initialViewState = JSON.parse(cb_obj.code)")
layer_editor.jscallback(args={'deck_gl': deck_gl}, value="deck_gl.layers = [JSON.parse(cb_obj.code)]")
pn.Row(pn.Column(view_editor, layer_editor), deck_gl)
###Output
_____no_output_____ |
.ipynb_checkpoints/IPO_data_analysis-checkpoint.ipynb | ###Markdown
Analyzing some data on IPO first-day trends authors: JLM, RCT
###Code
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
params = {'legend.fontsize': 'x-large',
'figure.figsize': (15, 5),
'axes.labelsize':16,
'axes.titlesize':16,
'xtick.labelsize':16,
'ytick.labelsize':16}
pylab.rcParams.update(params)
###Output
_____no_output_____
###Markdown
Loading Data
###Code
cols = ['date','issuer','Symbol','Managers','offer_price','opening_price',
'day1_close','day1_Px_change','change_opening','change_close','rating','performed']
years = [2010,2011,2012,2013,2014,2015,2016,2017,2018,2019,2020]
data_IPO = {}
for year in years:
data_IPO[year] = pd.read_excel('data_{}.xlsx'.format(year),skiprows=3,header=None,names=cols)
#data_IPO[2018] = pd.read_csv('IPO_data/data_2018.csv',skiprows=3,header=None,names=cols)
#data_IPO[2019] = pd.read_csv('IPO_data/data_2019.csv',skiprows=3,header=None,names=cols)
#data_IPO[2020] = pd.read_csv('IPO_data/data_2020.csv',skiprows=3,header=None,names=cols)
###Output
_____no_output_____
###Markdown
Defining a couple of functions to edit dataframes
###Code
def rm_symbol_and_change_str_to_float(df,column,symbol):
df[column] = df[column].str.replace(symbol,'',regex=True).astype(float)
def m_d_y(st,which):
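    # split a 'month/day/year' date string and return the requested part as an int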
m = st[:st.find('/')]
st = st[st.find('/')+1:]
d = st[:st.find('/')]
y = st = st[st.find('/')+1:]
if which=='m':
return int(m)
elif which=='d':
return int(d)
else:
return int(y)
def process_df(df):
# rm_symbol_and_change_str_to_float(df,'offer_price','$')
# rm_symbol_and_change_str_to_float(df,'opening_price','$')
# rm_symbol_and_change_str_to_float(df,'day1_close','$')
# rm_symbol_and_change_str_to_float(df,'day1_Px_change','%')
df['month'] = df['date'].map(lambda element: m_d_y(element,'m'))
df['day' ] = df['date'].map(lambda element: m_d_y(element,'d'))
###Output
_____no_output_____
###Markdown
Editing dataframes
###Code
for year in years:
process_df(data_IPO[year])
data_IPO[years[0]].head(2)
###Output
_____no_output_____
###Markdown
Plotting correlation between offer, opening prices, and day 1 close
###Code
fig = plt.subplots(1,2,figsize=(15,5))
plt.subplot(1, 2, 1)
for year in years:
plt.scatter(data_IPO[year]['offer_price'],data_IPO[year]['opening_price'],label='{}'.format(year))
plt.xlabel('offer price $')
plt.ylabel('opening price $')
plt.legend()
plt.subplot(1, 2, 2)
for year in years:
plt.scatter(data_IPO[year]['opening_price'],data_IPO[year]['day1_close'])
plt.xlabel('opening price $')
plt.ylabel('day 1 close $')
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the day 1 percent change for different years
###Code
data_IPO_greater_than = {}
greater_than = 0.1
for y,year in enumerate(years):
data_IPO[year]['day1_Px_change'] = (data_IPO[year].day1_close - data_IPO[year].opening_price)/data_IPO[year].day1_close
data_IPO[year]['change_opening'] = data_IPO[year].day1_close - data_IPO[year].opening_price
plt.hist(data_IPO[year]['day1_Px_change'],alpha = 0.3,label='{}'.format(year),bins=50)
mean = data_IPO[year]['day1_Px_change'].mean()
print(year, ":",round(mean,4))
#looking at companies with greater than x growth
data_IPO_greater_than[year] = data_IPO[year][(data_IPO[year]['day1_Px_change'] > greater_than)]
#plt.plot([mean,mean],[0,100],label='{} average = {}'.format(year,mean),ls='--',lw=3, alpha = 0.3)
plt.xlabel('day 1 % change')
plt.ylabel('frequency')
plt.legend()
data_IPO_greater_than[2019].head(10)
###Output
_____no_output_____
###Markdown
Trying to find if there's a correlation between day 1 % change and month (need to include more years to try and see that)
###Code
for year in years:
x = []
y = []
for month in range(1,13):
x.append(month)
y.append(data_IPO[year]['day1_Px_change'][data_IPO[year]['month']==month].mean())
plt.plot(x,y,label='{}'.format(year))
plt.legend()
plt.ylabel('day 1 % change')
plt.xlabel('month')
###Output
_____no_output_____
###Markdown
Loading the data as excel to try to avoid changing cell types
- [ ] Adding a calculated column for % day change at day 1
###Code
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
column_names = ['date','issuer','Symbol','Managers','offer_price','opening_price',
'day1_close','day1_Px_change','change_opening','change_close','rating','performed']
df = pd.read_excel('data_2019.xlsx', skiprows=2, names=column_names)
df['day1_Px_change'] = (df.day1_close - df.opening_price)/df.day1_close
df['change_opening'] = df.day1_close - df.opening_price
df.head()
plt.hist(df['day1_Px_change'], bins=25)
df.describe()
###Output
_____no_output_____ |
Ranking_de_Vendas.ipynb | ###Markdown
Ranking de Vendas
```
INPUTS
7
55 100 100 40 100 50 35
4
20 60 40 10
EXPECTED OUTPUT
6
2
4
6
```
###Code
def processInput ( input ):
#"Aqui você deve criar seu algoritmo para processar a entrada e depois retorna-la."
vec.append(input)
result = ''
if len(vec) == 4:
lenV, lenC = int(vec[0]), int(vec[2])
vecV = [int(i) for i in vec[1].split(' ')]
vecC = [int(i) for i in vec[3].split(' ')]
vecV.sort()
vecV = list(dict.fromkeys(vecV))
for i in vecC:
            pos = 1 # the initial sales position is first place
for j, k in enumerate(vecV):
if i < k:
pos += 1
result += str(pos) + '\n'
#print(pos)
return result
# This is an example of input processing; feel free to change it as needed for the problem.
vec = []
values = ['7\r',
'55 100 100 40 100 50 35\r',
'4\r',
'20 60 40 10\r']
for value in values:
print(processInput(value))
###Output
6
2
4
6
|
old_stuff/x/basics/02 - Strings.ipynb | ###Markdown
Table of Contents
###Code
# should take about 8 mins
# a string is an ordered sequence of characters
# hello world
# they are everywhere
# daily oil/gas
# https://www.dmr.nd.gov/oilgas/dailyindex.asp
# parsing
# web scraping
# same with stock data
s = 'hello world'
s
y = 'hello again'
y
s + y
'hello' * 3
len(s)
# len is a generic python function
# it operates on any sequence
# this is why it is a function, not a method
s.split()
white_space = ' '
white_space.join(s)
# replacing text, not a regular expression
# but good enough for simple cases
s.replace('world', 'people')
s
s.upper()
s
strip_str = ' hello \t '
strip_str
strip_str.strip()
strip_str
dir(s)
# or s. + TAB
m = '''hello
everyone
how are you'''
m
# multi line with parenthesis so there is no \n char
# really useful for PEP-8
m = ("hello "
"everyone "
"how are you")
m
# can also use the line continuation character
m = 'hello '\
'everyone '\
'how are you'
m
m = 'hello \n'\
'everyone'
m
str(1)
str(1.1 + 2.2)
repr(1.1 + 2.2)
hex(255)
oct(255)
bin(255)
# strings to numbers
int('12')
type(int('12'))
int('FF', 16) # hexadecimal
float('12')
###Output
_____no_output_____ |
sklearn/sklearn learning/demonstration/auto_examples_jupyter/compose/plot_column_transformer.ipynb | ###Markdown
Column Transformer with Heterogeneous Data Sources
Datasets can often contain components that require different feature extraction and processing pipelines. This scenario might occur when:
1. Your dataset consists of heterogeneous data types (e.g. raster images and text captions)
2. Your dataset is stored in a Pandas DataFrame and different columns require different processing pipelines.
This example demonstrates how to use :class:`sklearn.compose.ColumnTransformer` on a dataset containing different types of features. We use the 20-newsgroups dataset and compute standard bag-of-words features for the subject line and body in separate pipelines as well as ad hoc features on the body. We combine them (with weights) using a ColumnTransformer and finally train a classifier on the combined set of features. The choice of features is not particularly helpful, but serves to illustrate the technique.
###Code
# Author: Matt Terry <[email protected]>
#
# License: BSD 3 clause
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.svm import LinearSVC
class TextStats(TransformerMixin, BaseEstimator):
"""Extract features from each document for DictVectorizer"""
def fit(self, x, y=None):
return self
def transform(self, posts):
return [{'length': len(text),
'num_sentences': text.count('.')}
for text in posts]
class SubjectBodyExtractor(TransformerMixin, BaseEstimator):
"""Extract the subject & body from a usenet post in a single pass.
Takes a sequence of strings and produces a dict of sequences. Keys are
`subject` and `body`.
"""
def fit(self, x, y=None):
return self
def transform(self, posts):
# construct object dtype array with two columns
# first column = 'subject' and second column = 'body'
features = np.empty(shape=(len(posts), 2), dtype=object)
for i, text in enumerate(posts):
headers, _, bod = text.partition('\n\n')
features[i, 1] = bod
prefix = 'Subject:'
sub = ''
for line in headers.split('\n'):
if line.startswith(prefix):
sub = line[len(prefix):]
break
features[i, 0] = sub
return features
pipeline = Pipeline([
# Extract the subject & body
('subjectbody', SubjectBodyExtractor()),
# Use ColumnTransformer to combine the features from subject and body
('union', ColumnTransformer(
[
# Pulling features from the post's subject line (first column)
('subject', TfidfVectorizer(min_df=50), 0),
# Pipeline for standard bag-of-words model for body (second column)
('body_bow', Pipeline([
('tfidf', TfidfVectorizer()),
('best', TruncatedSVD(n_components=50)),
]), 1),
# Pipeline for pulling ad hoc features from post's body
('body_stats', Pipeline([
('stats', TextStats()), # returns a list of dicts
('vect', DictVectorizer()), # list of dicts -> feature matrix
]), 1),
],
# weight components in ColumnTransformer
transformer_weights={
'subject': 0.8,
'body_bow': 0.5,
'body_stats': 1.0,
}
)),
# Use a SVC classifier on the combined features
('svc', LinearSVC(dual=False)),
], verbose=True)
# limit the list of categories to make running this example faster.
categories = ['alt.atheism', 'talk.religion.misc']
X_train, y_train = fetch_20newsgroups(random_state=1,
subset='train',
categories=categories,
remove=('footers', 'quotes'),
return_X_y=True)
X_test, y_test = fetch_20newsgroups(random_state=1,
subset='test',
categories=categories,
remove=('footers', 'quotes'),
return_X_y=True)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
_____no_output_____ |
Surfs_Up/climate_starter.ipynb | ###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
# general imports used later in the notebook (they were missing from the saved notebook)
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import datetime as dt
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base= automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
###Output
_____no_output_____
###Markdown
Exploratory Climate Analysis
###Code
# Design a query to retrieve the last 12 months of precipitation data and plot the results
last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
last_date
# Calculate the date 1 year ago from the last data point in the database
one_year_ago = dt.date(2017,8,23) - dt.timedelta(days=365)
one_year_ago
# Perform a query to retrieve the data and precipitation scores
scores = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= one_year_ago).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(scores, columns=["Date","Precipitation"])
df.set_index("Date", inplace=True,)
df.head()
# Sort the dataframe by date
df = df.sort_index()
# Use Pandas Plotting with Matplotlib to plot the data
df.plot(rot=90)
# Use Pandas to calcualte the summary statistics for the precipitation data
df.describe()
# Design a query to show how many stations are available in this dataset?
station_count = session.query(Measurement.station).distinct().count()
station_count
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
most_active_stations = session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
most_active_stations
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
session.query(func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)).\
filter(Measurement.station == 'USC00519281').all()
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
prev_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
results = session.query(Measurement.tobs).\
filter(Measurement.station == 'USC00519281').\
filter(Measurement.date >= prev_year).all()
df = pd.DataFrame(results, columns=['tobs'])
df.plot.hist(bins=12)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Bonus Challenge Assignment
###Code
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
import datetime as dt
prev_year_start = dt.date(2018, 1, 1) - dt.timedelta(days=365)
prev_year_end = dt.date(2018, 1, 7) - dt.timedelta(days=365)
tmin, tavg, tmax = calc_temps(prev_year_start.strftime("%Y-%m-%d"), prev_year_end.strftime("%Y-%m-%d"))[0]
print(tmin, tavg, tmax)
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
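# A minimal sketch for the bar chart described above (uses the tmin/tavg/tmax just computed):
fig, ax = plt.subplots(figsize=(4, 6))
ax.bar(0, tavg, yerr=(tmax - tmin), color='coral', alpha=0.6)
ax.set_title("Trip Avg Temp")
ax.set_ylabel("Temp (F)")
ax.set_xticks([])
plt.tight_layout()
plt.show()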
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
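# A minimal sketch for the rainfall query described above (assumes the Station table exposes
# name/latitude/longitude/elevation columns, as in the standard hawaii dataset, and reuses the
# prev_year_start / prev_year_end trip dates defined earlier in this cell):
rainfall_sel = [Station.station, Station.name, Station.latitude,
                Station.longitude, Station.elevation, func.sum(Measurement.prcp)]
rainfall_per_station = session.query(*rainfall_sel).\
    filter(Measurement.station == Station.station).\
    filter(Measurement.date >= prev_year_start.strftime("%Y-%m-%d")).\
    filter(Measurement.date <= prev_year_end.strftime("%Y-%m-%d")).\
    group_by(Station.station).\
    order_by(func.sum(Measurement.prcp).desc()).all()
rainfall_per_station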
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
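# A minimal sketch of the steps listed above (assumes a hypothetical 2018-01-01 to 2018-01-07
# trip, matching the dates used earlier in this notebook):
trip_start = dt.date(2018, 1, 1)
trip_end = dt.date(2018, 1, 7)
trip_dates = [trip_start + dt.timedelta(days=i)
              for i in range((trip_end - trip_start).days + 1)]
normals = [daily_normals(d.strftime("%m-%d"))[0] for d in trip_dates]
normals_df = pd.DataFrame(normals, columns=["tmin", "tavg", "tmax"])
normals_df["date"] = trip_dates
normals_df = normals_df.set_index("date")
normals_df.plot.area(stacked=False, alpha=0.3)
plt.tight_layout()
plt.show()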
###Output
_____no_output_____ |
ipynb/reading robinhood output.ipynb | ###Markdown
Plotting
###Code
# Imports needed by this plotting section (they may already be loaded earlier in the notebook)
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

offset = temp[temp.name == 'OffsetTree']
naive = temp[temp.name == 'Naive']
fair = temp[temp.name == 'GroupFairnessSRL']
plt.figure(figsize=(12, 8))
plt.boxplot([offset['return_test'],naive['return_test'],fair['return_test']])
plt.xticks([1, 2, 3], ['offset','naive','fairness'],fontsize=14)
plt.ylabel('Expected Rewards',fontsize=14)
plt.title('Treatments Based on {} | Additional Info: {}'.format('Sex','All'),fontsize=17)
plt.show()
plt.figure(figsize=(12, 8))
plt.boxplot([offset['test_bqf_0_mean'],naive['test_bqf_0_mean'],fair['test_bqf_0_mean']],showfliers=True)
plt.xticks([1, 2, 3], ['offset','naive','fairness'],fontsize=14)
plt.ylabel('Difference between Treatment Groups',fontsize=14)
plt.title('Treatments Based on {} | Additional Info: {}'.format('Sex','All'),fontsize=17)
plt.show()
def plot_curve(x,y,name):
res = stats.linregress(x, y)
plt.plot(x, y, 'o')
plt.plot(x, res.intercept + res.slope*x, label=name)
plt.legend(fontsize=14)
plt.figure(figsize=(12, 8))
plot_curve(offset['return_test'], np.log(1- offset['test_bqf_0_mean']),'offset')
plot_curve(naive['return_test'], np.log(1- naive['test_bqf_0_mean']),'naive')
plot_curve(fair['return_test'], np.log(1- fair['test_bqf_0_mean']),'fairness')
###Output
C:\Users\emmyp\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\cbook\__init__.py:1402: FutureWarning: Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead.
x[:, None]
C:\Users\emmyp\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\axes\_base.py:276: FutureWarning: Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead.
x = x[:, np.newaxis]
C:\Users\emmyp\AppData\Local\Continuum\anaconda3\lib\site-packages\matplotlib\axes\_base.py:278: FutureWarning: Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead.
y = y[:, np.newaxis]
|
graph_pit/examples/runtimes.ipynb | ###Markdown
Runtime evaluation similar to [1].

References:
[1] Speeding Up Permutation Invariant Training for Source Separation
###Code
from scipy.optimize import linear_sum_assignment
import itertools
import timeit
import functools
from collections import defaultdict
import numpy as np
import torch
import paderbox as pb
import padertorch as pt
from tqdm.notebook import tqdm
import graph_pit
# General settings
max_time = 1 # Time in s over which runs are ignored
device = 'cpu' # Device for computing the losses / loss matrices. The permutation solver always works on the CPU
number = 3 # Number of runs per configuration. Higher number means smoother curves, but large values are impractical for an interactive notebook
# Utilty functions
def plot_timings(timings, xrange, xlabel, logx=False):
with pb.visualization.axes_context() as ac:
for idx, (key, values) in enumerate(timings.items()):
values = np.asarray(values)
x = xrange[:len(values)]
pb.visualization.plot.line(x, values.mean(axis=-1), label=key, ax=ac.last, color=f'C{idx}')
ac.last.fill_between(x, values.min(axis=-1), values.max(axis=-1), color=f'C{idx}', alpha=0.3)
# std = values.std(axis=-1)
# mean = values.mean(axis=-1)
# ac.last.fill_between(x, mean - std, mean + std, color=f'C{idx}', alpha=0.3)
if logx:
ac.last.loglog()
else:
ac.last.semilogy()
ac.last.set_xlabel(xlabel)
ac.last.set_ylabel('runtime in s')
ac.last.set_ylim([ac.last.get_ylim()[0], max_time])
ac.last.set_xlim([xrange[0], xrange[-1]])
###Output
_____no_output_____
###Markdown
uPIT
###Code
from padertorch.ops.losses.source_separation import pit_loss_from_loss_matrix, compute_pairwise_losses
from torch.nn.functional import mse_loss
# Define the uPIT loss functions
def upit_sa_sdr_decomp_dot(estimate, target, algorithm='hungarian'):
"""
sa-SDR decomposed with dot product, eq. (13)/(14)
"""
loss_matrix = -torch.matmul(estimate, target.T)
loss = pit_loss_from_loss_matrix(
loss_matrix, reduction='sum', algorithm=algorithm
)
numerator = torch.sum(target**2)
loss = -10*(torch.log10(numerator) - torch.log10(
numerator + torch.sum(estimate**2) + 2*loss
))
return loss
def upit_sa_sdr_decomp_mse(estimate, target, algorithm='hungarian'):
"""
sa-SDR decomposed with MSE, eq. (11)/(12)
"""
loss_matrix = compute_pairwise_losses(estimate, target, axis=0, loss_fn=functools.partial(mse_loss, reduction='sum'))
loss = pit_loss_from_loss_matrix(
loss_matrix, reduction='sum', algorithm=algorithm
)
loss = -10*(torch.log10(torch.sum(target**2)) - torch.log10(
loss
))
return loss
def upit_sa_sdr_naive_brute_force(estimate, target):
"""
Brute-force sa-SDR, eq. (5)
"""
return pt.pit_loss(estimate, target, 0, pt.source_aggregated_sdr_loss)
def upit_a_sdr_naive_brute_force(estimate, target):
"""
Brute-force a-SDR
"""
return pt.pit_loss(estimate, target, 0, pt.sdr_loss)
def upit_a_sdr_decomp(estimate, target, algorithm='hungarian'):
"""
Decomposed a-SDR
"""
loss_matrix = compute_pairwise_losses(estimate, target, axis=0, loss_fn=pt.sdr_loss)
loss = pit_loss_from_loss_matrix(
loss_matrix, reduction='mean', algorithm=algorithm
)
return loss
# Check if the losses all give the same loss values
estimate = torch.randn(3, 32000)
target = torch.randn(3, 32000)
ref = upit_sa_sdr_naive_brute_force(estimate, target)
np.testing.assert_allclose(ref, upit_sa_sdr_decomp_dot(estimate, target), rtol=1e-5)
np.testing.assert_allclose(ref, upit_sa_sdr_decomp_dot(estimate, target, algorithm='brute_force'), rtol=1e-5)
np.testing.assert_allclose(ref, upit_sa_sdr_decomp_mse(estimate, target), rtol=1e-5)
np.testing.assert_allclose(ref, upit_sa_sdr_decomp_mse(estimate, target, algorithm='brute_force'), rtol=1e-5)
ref = upit_a_sdr_naive_brute_force(estimate, target)
np.testing.assert_allclose(ref, upit_a_sdr_decomp(estimate, target), rtol=1e-5)
np.testing.assert_allclose(ref, upit_a_sdr_decomp(estimate, target, algorithm='brute_force'), rtol=1e-5)
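# For intuition: the decomposed losses above reduce the permutation search to a linear
# assignment problem on the pairwise loss matrix, which can be solved directly with the
# Hungarian algorithm (scipy's linear_sum_assignment, imported at the top) instead of
# enumerating all permutations. Minimal illustration on the tensors from the check above:
demo_matrix = compute_pairwise_losses(
    estimate, target, axis=0, loss_fn=functools.partial(mse_loss, reduction='sum')
)
demo_rows, demo_cols = linear_sum_assignment(demo_matrix.detach().cpu().numpy())
print('optimal target permutation:', demo_cols)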
# Define all loss functions whose runtime we want to compare
losses = {
'sa_sdr naive brute_force': upit_sa_sdr_naive_brute_force,
'sa_sdr brute_force decomp mse': functools.partial(upit_sa_sdr_decomp_mse, algorithm='brute_force'),
'sa_sdr brute_force decomp dot': functools.partial(upit_sa_sdr_decomp_dot, algorithm='brute_force'),
'sa_sdr hungarian decomp mse': upit_sa_sdr_decomp_mse,
'sa_sdr hungarian decomp dot': upit_sa_sdr_decomp_dot,
'a_sdr naive brute_force': upit_a_sdr_naive_brute_force,
'a_sdr decomp brute_force': functools.partial(upit_a_sdr_decomp, algorithm='brute_force'),
'a_sdr decomp hungarian': upit_a_sdr_decomp,
}
# Settings for uPIT
num_speakers_range = list(range(2, 100))
T = 32000
def time_loss(loss, num_speakers=3, T=8000 * 4, number=10, device='cuda'):
import torch
targets = torch.tensor(np.random.randn(num_speakers, T)).to(device)
estimates = torch.tensor(np.random.randn(num_speakers, T)).to(device)
timings = timeit.repeat('float(loss(estimates, targets).cpu())', globals=locals(), repeat=number, number=1)
timings = np.asarray(timings)
return timings
upit_timings = defaultdict(list)
skip = defaultdict(lambda: False)
for num_speakers in tqdm(num_speakers_range):
for loss_name, loss_fn in losses.items():
if skip[loss_name]:
continue
timing = time_loss(loss_fn, num_speakers=num_speakers, number=number, device=device, T=T)
upit_timings[loss_name].append(timing)
if np.mean(timing) > max_time:
skip[loss_name] = True
plot_timings(upit_timings, num_speakers_range, '#speakers', logx=True)
###Output
_____no_output_____
###Markdown
- Brute-force becomes impractical already for small numbers of speakers (<10)
- The Hungarian Algorithm can be used for large numbers of speakers with no significant runtime
- The dot decomposition is the fastest here. It is, however, probably possible to push the MSE below the dot with a low-level implementation

Graph-PIT assignment algorithms
###Code
graph_pit_losses = {
'naive brute-force': graph_pit.loss.unoptimized.GraphPITLossModule(pt.source_aggregated_sdr_loss),
'decomp brute-force': graph_pit.loss.optimized.OptimizedGraphPITSourceAggregatedSDRLossModule(assignment_solver='optimal_brute_force'),
'decomp branch-and-bound': graph_pit.loss.optimized.OptimizedGraphPITSourceAggregatedSDRLossModule(assignment_solver='optimal_branch_and_bound'),
'decomp dfs': graph_pit.loss.optimized.OptimizedGraphPITSourceAggregatedSDRLossModule(assignment_solver='dfs'),
'decomp dynamic programming': graph_pit.loss.optimized.OptimizedGraphPITSourceAggregatedSDRLossModule(assignment_solver='optimal_dynamic_programming'),
}
num_utterances_range = list(range(2, 30))
utterance_length = 8000
overlap = 500
def time_alg(loss, num_segments, num_estimates=3, number=10, device='cpu',
utterance_length=2*8000, overlap=500):
timings = []
for i in range(number):
segment_boundaries = [
(i * (utterance_length - overlap), (i + 1) * utterance_length)
for i in range(num_segments)
]
num_samples = max(s[-1] for s in segment_boundaries) + 100
targets = [torch.rand(stop - start).to(device) for start, stop in segment_boundaries]
estimate = torch.rand(num_estimates, num_samples).to(device)
timings.append(timeit.timeit(
# 'float(l.loss.cpu().numpy())',
setup='l = loss.get_loss_object(estimate, targets, segment_boundaries)',
stmt='float(l.loss.cpu())',
globals={
'loss': loss,
'estimate': estimate,
'targets': targets,
'segment_boundaries': segment_boundaries,
}, number=1))
return np.asarray(timings)
graph_pit_timings = defaultdict(list)
skip = defaultdict(lambda: False)
for num_segments in tqdm(num_utterances_range):
for loss_name, loss_fn in graph_pit_losses.items():
if skip[loss_name]:
continue
timing = time_alg(loss_fn, num_segments=num_segments, number=number, device='cpu', utterance_length=utterance_length, overlap=overlap)
graph_pit_timings[loss_name].append(timing)
if np.mean(timing) > max_time:
skip[loss_name] = True
plot_timings(graph_pit_timings, num_utterances_range, '#utterances')
###Output
_____no_output_____ |
Assignments/Assignment_4-2.ipynb | ###Markdown
KNN Model
###Code
#Training the KNN model using the data
#training the nearest neighbour starting with k=9
knn = KNeighborsClassifier(n_neighbors=9, weights='uniform')
#fitting the model
knn.fit(x_train, y_train)
#printing the accuracy of the model on the train data
print('Accuracy on train data:')
print(knn.score(x_train, y_train))
#printing the accuracy of the model on the test data
print('Accuracy on test data:')
print(knn.score(x_test, y_test))
#setting the range of neighbours from 1 to 35
k_range=list(range(1,35))
#considering distane and uniform as the weight parameters
options=['distance','uniform']
#creating a parameter distribution with the range and weights as declared above
param_dist = dict(n_neighbors=k_range, weights = options)
#using the knn classifier
knn=KNeighborsClassifier()
# tuning the number of neighbors k using RandomizedSearchCV
rand = RandomizedSearchCV(knn, param_dist, cv=10, scoring='accuracy', n_iter=30, random_state=5)
#fitting the model with the randomized weights
rand.fit(x, y)
#printing the results obtained
rand.cv_results_
#printing the best accuracy obtained
print('Best Accuracy: ')
print(rand.best_score_)
#printing the parameters for which the best accuracy is obtained
print('Best Parameters: ')
print(rand.best_params_)
###Output
Best Accuracy:
0.9800000000000001
Best Parameters:
{'weights': 'distance', 'n_neighbors': 15}
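###Markdown
As a quick sanity check (a sketch, not part of the original assignment), the parameters found by the randomized search can be plugged back into a fresh classifier and evaluated on the held-out split created earlier:
###Code
# refit KNN with the best parameters found above and score it on the test split
best_knn = KNeighborsClassifier(**rand.best_params_)
best_knn.fit(x_train, y_train)
print('Accuracy on test data with tuned parameters:')
print(best_knn.score(x_test, y_test))
###Output
_____no_output_____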
###Markdown
SVM Model
###Code
#Training the SVM model using the data
#training the model without any kernel functions, C or gamma values
svm = SVC()
#fitting the model
svm.fit(x_train, y_train)
#printing the accuracy of the model on the train data
print('Accuracy on train data:')
print(svm.score(x_train, y_train))
#printing the accuracy of the model on the test data
print('Accuracy on test data:')
print(svm.score(x_test, y_test))
#mapping the string data into numerics to caluclate the rmse
#y_train_map=y_train.map({'versicolor':1,'virginica':2,'setosa':3})
#y_test_map=y_test.map({'versicolor':1,'virginica':2,'setosa':3})
#calculating the mean squared error
#mse = mean_squared_error(x_test,y_test)
#calculating the root mean square error
# training the SVM model with the rbf kernel function
rbf_svm= SVC(kernel='rbf')
#printing the kernel function used
rbf_svm.kernel
#fitting the model with rbf kernel function used
rbf_svm.fit(x_train, y_train)
#printing the accuracy of the model on the train data
print('Accuracy on train data:')
print(rbf_svm.score(x_train, y_train))
#printing the accuracy of the model on the test data
print('Accuracy on test data:')
print(rbf_svm.score(x_test, y_test))
# training the SVM model with the linear kernel function
linear_svm=SVC(kernel='linear')
#fitting the model with rbf kernel function used
linear_svm.fit(x_train,y_train)
#printing the accuracy of the model on the train data
print('Accuracy on train data:')
print(linear_svm.score(x_train,y_train))
#printing the accuracy of the model on the test data
print('Accuracy on test data:')
print(linear_svm.score(x_test,y_test))
#Tune SVM hyperparameters by using GridSearchCV with cross validation
param_grid = { 'C':[0.1,1,10,100,1000], # setting the C values to search over
'kernel':['rbf'], # using only rbf as the kernel function (linear, poly and sigmoid could be added too)
'degree':[1,2,3,4,5,6], # degree from 1 to 6 (note: degree only affects the poly kernel, so it is ignored for rbf)
'gamma': [1, 0.1, 0.01, 0.001,0.0001]} # setting the gamma range as above
# creating a grid search with the parameter grid defined above
grid = GridSearchCV(SVC(),param_grid)
#fitting the model with the grid made above
grid.fit(x_train,y_train)
#printing the best fit values of C, degree and gamma
print(grid.best_params_)
#printing the accuracy of the model with these parameter values
print(grid.score(x_test,y_test))
###Output
{'C': 10, 'degree': 1, 'gamma': 0.1, 'kernel': 'rbf'}
0.9666666666666667
|
data/input_files_1link/exp_link.ipynb | ###Markdown
Run the DTA on different link files
###Code
import shutil
import os
import numpy as np
name_list = ['ctm', 'ltm', 'pq', 'lq']
target_file_name = 'MNM_input_link'
executable_name = 'exp_link'
output_name = 'cc_record'
for file_name in name_list:
shutil.copy2(target_file_name + '_' + file_name, target_file_name)
os.system('./' + executable_name)
os.rename(output_name, output_name+'_'+file_name)
###Output
_____no_output_____
###Markdown
Extract the cumulative curve info
###Code
total_dict = dict()
for name in name_list:
info_dict = dict()
f = open(output_name + '_' + name, 'r')  # use open(); the Python 2 file() builtin no longer exists
for line in f:
words = line.split(',')
link_ID = words[0]
direction = words[1]
key = link_ID + "_" + direction
value = (list(), list())
for e in words[2:]:
one_value = e.split(":")
value[0].append(float(one_value[0]))  # np.float is deprecated; the builtin float is equivalent here
value[1].append(float(one_value[1]))
info_dict[key] = value
f.close()
total_dict[name] = info_dict
###Output
_____no_output_____
###Markdown
Plotting
###Code
import matplotlib.pyplot as plt
link_name = '3_out'
# plt.subplot(2, 2, 1)
plt.plot(total_dict['ctm'][link_name][0], total_dict['ctm'][link_name][1])
# plt.subplot(2, 2, 2)
plt.plot(total_dict['ltm'][link_name][0], total_dict['ltm'][link_name][1])
# plt.subplot(2, 2, 3)
plt.plot(total_dict['pq'][link_name][0], total_dict['pq'][link_name][1])
# plt.subplot(2, 2, 4)
plt.plot(total_dict['lq'][link_name][0], total_dict['lq'][link_name][1])
plt.legend(name_list)
plt.show()
###Output
_____no_output_____ |
code/Modeling the Reddit Data.ipynb | ###Markdown
Modeling the Reddit Data

The Subreddits include:
- bodyweight fitness = 0
- powerlifting = 1

Import Libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_score, train_test_split, GridSearchCV
# Boosting
from sklearn.ensemble import GradientBoostingClassifier, AdaBoostClassifier, VotingClassifier, BaggingClassifier
from sklearn.datasets import make_classification # creates random datasets
from sklearn.metrics import confusion_matrix, plot_confusion_matrix
###Output
_____no_output_____
###Markdown
Import Modeling Data
###Code
df = pd.read_csv('../datasets/model_ready_data_tvec.csv')
###Output
_____no_output_____
###Markdown
Baseline Model
###Code
# baseline accuracy - the percentage of the majority class
# (computed from the dataframe directly, since X and y are only created in the next cell)
df['subreddit'].value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
**The baseline model is 0.5. This is the model to beat.**

Model 1: Decision Tree
###Code
# Create X and y variables
X = df.drop('subreddit', axis=1)
y = df['subreddit']
# Train, Test Split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 42)
# Instantiate the model
dt = DecisionTreeClassifier()
# Fit model.
dt.fit(X_train, y_train)
# Evaluate model.
print(f'Score on training set: {dt.score(X_train, y_train)}')
print(f'Score on testing set: {dt.score(X_test, y_test)}')
###Output
Score on training set: 0.9958571428571429
Score on testing set: 0.8681666666666666
###Markdown
**Analysis:**

The training score is .996 and the testing score is .868, which indicates that the model is overfit. This is an expected weakness of a decision tree with untuned parameters. In the code below I will perform a grid search to find better parameters.

**Grid search to fine tune the parameters**
###Code
params = {
'max_depth': [2, 3, 5],
'min_samples_split': [5, 10, 15],
'min_samples_leaf': [2, 3, 4],
'ccp_alpha': [0.001, 0.01, 0.1,]
}
grid = GridSearchCV(estimator = DecisionTreeClassifier(random_state=42),
param_grid = params,
cv = 3,
verbose = 1)
# GridSearch over the above parameters on the training data.
grid.fit(X_train, y_train)
# Finds the best decision tree
grid.best_estimator_
# What was the cross-validated score of the above decision tree?
grid.best_score_
# Evaluate model.
print(f'Score on training set: {grid.score(X_train, y_train)}')
print(f'Score on testing set: {grid.score(X_test, y_test)}')
###Output
Score on training set: 0.7021428571428572
Score on testing set: 0.696
###Markdown
**Analysis:**

Grid search chose the following values for the parameters:
- ccp_alpha = 0.001
- max_depth = 5
- min_samples_leaf = 2
- min_samples_split = 5

The cross-validated score was 0.697. The training score was 0.702 and the testing score was 0.696. These are drops from the original decision tree scores, so the tuned tree predicts less accurately; however, as the model becomes more general, the high-variance error should also be reduced.

Confusion Matrix Analysis
###Code
# Generate predictions on test set.
y_pred_dt = grid.predict(X_test)
# Generate confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_test,
y_pred_dt).ravel()
print(confusion_matrix(y_test,
y_pred_dt))
# Plot confusion matrix
plot_confusion_matrix(grid, X_test, y_test, cmap='Blues', values_format='d');
# code --> https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html
tp_dt = 1208 # True Positives (Decision Tree)
tn_dt = 2968 # True Negatives (Decision Tree)
fp_dt = 49 # False Positives (Decision Tree)
fn_dt = 1775 # False Negatives (Decision Tree)
# code reference --> https://towardsdatascience.com/build-and-compare-3-models-nlp-sentiment-prediction-67320979de61
# Calculate sensitivity.
sens_dt = tp_dt / (tp_dt + fn_dt)
print(f'Sensitivity: {round(sens_dt, 4)}')
# Calculate specificity.
spec_dt = tn_dt / (tn_dt + fp_dt)
print(f'Specificity: {round(spec_dt, 4)}')
# Calculate accuracy.
accuracy_dt = (tp_dt + tn_dt) / (tp_dt + tn_dt + fp_dt + fn_dt)
print(f'Accuracy: {round(accuracy_dt, 4)}')
# Calculate precision.
precision_dt = tp_dt / (tp_dt + fp_dt)
print(f'Precision: {round(precision_dt, 4)}')
# Calculate recall.
recall_dt = tp_dt / (tp_dt + fn_dt)
print(f'Recall: {round(recall_dt, 4)}')
###Output
Recall: 0.405
###Markdown
**Analysis:**

The decision tree model has a sensitivity of 40.5%, which means it correctly predicts a powerlifting subreddit post only about 40% of the time or, more notably, __incorrectly__ predicts it about 60% of the time. The specificity is much better at 98%: the model creates very few false positives.

Model 2: Random Forests and ExtraTrees

**_Random forests use a modified tree learning algorithm that selects, at each split in the learning process, a random subset of the features. This counters the correlation between base trees that would otherwise choose the few features that are strong predictors of the target variable._**
###Code
# Instantiate the Random Forest model
rf = RandomForestClassifier(n_estimators=100)
# Instantiate the Extra Trees model
et = ExtraTreesClassifier(n_estimators=100)
# Find the cross_val score for the Random Forest model
cross_val_score(rf, X_train, y_train, cv=5).mean()
# Find the cross_val score for the Extra Tree model
cross_val_score(et, X_train, y_train, cv=5).mean()
###Output
_____no_output_____
###Markdown
**Both the Random Forest and ExtraTrees models have cross val scores of about 0.90. Since the Random Forest model is slightly better I'll use it in the grid search to find ideal parameters.**
###Code
# Grid Search
rf_params = {
'n_estimators': [100, 150],
'max_depth': [None, 1, 2, 3],
}
gs = GridSearchCV(rf, param_grid=rf_params, cv=3)
gs.fit(X_train, y_train)
print(gs.best_score_)
print(gs.best_params_)
# Training set score
gs.score(X_train, y_train)
# Testing set score
gs.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
**Analysis:**

The best cross-validated grid score was 0.912 (the winning combination of `n_estimators` and `max_depth` is reported by `gs.best_params_` above). The grid search score on the training set was 0.996 and on the testing set was 0.902. The gap between the training and testing scores indicates that this model is still overfit; however, the difference between the two is shrinking.
###Code
# Fit the random forest on the training data before predicting
# (cross_val_score above does not fit the rf object in place).
rf.fit(X_train, y_train)
# Generate predictions on test set.
y_pred_rf = rf.predict(X_test)
# Generate confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_test,
y_pred_rf).ravel()
print(confusion_matrix(y_test,
y_pred_rf))
# Plot confusion matrix
plot_confusion_matrix(rf, X_test, y_test, cmap='Blues', values_format='d');
tp_rf = 2621 # True Positives (Random Forest)
tn_rf = 2809 # True Negatives (Random Forest)
fp_rf = 208 # False Positives (Random Forest)
fn_rf = 362 # False Negatives (Random Forest)
# Calculate sensitivity.
sens_rf = tp_rf / (tp_rf + fn_rf)
print(f'Sensitivity: {round(sens_rf, 4)}')
# Calculate specificity.
spec_rf = tn_rf / (tn_rf + fp_rf)
print(f'Specificity: {round(spec_rf, 4)}')
# Calculate accuracy.
accuracy_rf = (tp_rf + tn_rf) / (tp_rf + tn_rf + fp_rf + fn_rf)
print(f'Accuracy: {round(accuracy_rf, 4)}')
# Calculate precision.
precision_rf = tp_rf / (tp_rf + fp_rf)
print(f'Precision: {round(precision_rf, 4)}')
# Calculate recall.
recall_rf = tp_rf / (tp_rf + fn_rf)
print(f'Recall: {round(recall_rf, 4)}')
###Output
Recall: 0.8786
###Markdown
**Analysis:**

The random forest model has a sensitivity of 87.9%, which is significantly better than the first model. The specificity is 93%, which is about the same as the first model. The model also suffers from being overfit.

Boosting as an Ensemble Model

**_Boosting a model has several benefits: the model achieves higher performance than bagging when the hyperparameters are properly tuned, it works equally well for classification and regression, and it is resilient to outliers. The disadvantages are that the hyperparameters are time consuming to properly tune, they don't scale with large amounts of data, and there is a higher risk of overfitting. The aim of boosting is to reduce bias. The theory is that many weak learners can combine to make a single strong learner. Boosting uses shallow base estimators, so each weak learner has low variance and high bias. AdaBoost works by fitting a sequence of weak learners on repeatedly modified versions of the data: observations that the model misclassifies become more likely to be sampled in the subsequent trees. Gradient boosting learns from its errors. It fits subsequent trees to the residuals of the last tree._**
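To make the "fit subsequent trees to the residuals" idea concrete, here is a small illustrative sketch on synthetic data (a toy example, separate from the Reddit dataset):
###Code
# toy sketch of gradient boosting: each shallow tree is fit to the residuals of the ensemble so far
from sklearn.tree import DecisionTreeRegressor
rng = np.random.RandomState(42)
X_toy = rng.uniform(-3, 3, size=(200, 1))
y_toy = np.sin(X_toy).ravel() + rng.normal(scale=0.1, size=200)
prediction = np.zeros_like(y_toy)
learning_rate = 0.5
for step in range(3):
    residuals = y_toy - prediction                      # errors of the current ensemble
    stump = DecisionTreeRegressor(max_depth=2).fit(X_toy, residuals)
    prediction += learning_rate * stump.predict(X_toy)  # add the new tree's correction
    print(f'round {step + 1}: mean squared residual = {np.mean((y_toy - prediction) ** 2):.4f}')
###Output
_____no_output_____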
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, stratify=y)
# code --> by Riley Dallas from Austin
ada = AdaBoostClassifier(base_estimator=RandomForestClassifier())
ada_params = {
'n_estimators': [50,100],
'base_estimator__max_depth': [1,2],
'learning_rate': [.9, 1.]
}
gs = GridSearchCV(ada, param_grid=ada_params, cv=3)
gs.fit(X_train, y_train)
print(gs.best_score_)
gs.best_params_
# Training set score
gs.score(X_train, y_train)
# Testing set score
gs.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
0.9116000000000001
{'base_estimator__max_depth': 2, 'learning_rate': 1.0, 'n_estimators': 100}

**Analysis:**

The best grid score was 0.912. The best parameters are as follows:
- base_estimator__max_depth = 2
- learning_rate = 1.0
- n_estimators = 100

The grid search score on the training set was 0.928 and on the testing set was 0.916. This final model reduces the high-variance error, as indicated by a training score that is no longer 0.99 and a very small difference between the training and testing scores.
###Code
# Generate predictions on test set.
y_pred_gs = gs.predict(X_test)
# Generate confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_test,
y_pred_gs).ravel()
print(confusion_matrix(y_test,
y_pred_gs))
# Plot confusion matrix
plot_confusion_matrix(gs, X_test, y_test, cmap='Blues', values_format='d');
# code --> https://github.com/mychetan/understanding_player_preference_using_reddit_data/blob/master/code/04_modeling.ipynb
tp_gs = 2337 # True Positives (Boosted Random Forest)
tn_gs = 2242 # True Negatives (Boosted Random Forest)
fp_gs = 258 # False Positives (Boosted Random Forest)
fn_gs = 163 # False Negatives (Boosted Random Forest)
# Calculate sensitivity.
sens_gs = tp_gs / (tp_gs + fn_gs)
print(f'Sensitivity: {round(sens_gs, 4)}')
# Calculate specificity.
spec_gs = tn_gs / (tn_gs + fp_gs)
print(f'Specificity: {round(spec_gs, 4)}')
# Calculate accuracy.
accuracy_gs = (tp_gs + tn_gs) / (tp_gs + tn_gs + fp_gs + fn_gs)
print(f'Accuracy: {round(accuracy_gs, 4)}')
# Calculate precision.
precision_gs = tp_gs / (tp_gs + fp_gs)
print(f'Precision: {round(precision_gs, 4)}')
# Calculate recall.
recall_gs = tp_gs / (tp_gs + fn_gs)
print(f'Recall: {round(recall_gs, 4)}')
###Output
Recall: 0.9348
###Markdown
**Analysis:**

The boosted random forest model has a sensitivity of 93.5%, the best score of all three models. The specificity is 89.7%, slightly lower than the other models.

Comparing the Models and Conclusion

Comparing Model Accuracy:

The boosted random forest model performed the best on accuracy, recall and sensitivity. Precision and specificity were not as good as the other models. While we would like the boosted random forest to outperform the other two models on every metric, the lower precision and specificity scores are not alarming. The decision tree and random forest were overfit to the training data. Neither one could be the winner of the three models since they would not generalize to new data. The boosted random forest is the clear favorite.
###Code
Accuracy = [accuracy_dt, accuracy_rf, accuracy_gs]
Methods = ['Decision_Trees', 'Random_Forest', 'Boosted_Random_Forest']
Accuracy_pos = np.arange(len(Methods))
plt.bar(Accuracy_pos, Accuracy)
plt.xticks(Accuracy_pos, Methods)
plt.title('Comparing the Accuracy of Each Model')
plt.show()
###Output
_____no_output_____
###Markdown
Comparing Model Precision
###Code
Precision = [precision_dt, precision_rf, precision_gs]
Precision_pos = np.arange(len(Methods))
plt.bar(Precision_pos, Precision)
plt.xticks(Precision_pos, Methods)
plt.title('Comparing the Precision of Each Model')
plt.show()
###Output
_____no_output_____
###Markdown
Comparing Model Recall
###Code
Recall = [recall_dt, recall_rf, recall_gs]
Recall_pos = np.arange(len(Methods))
plt.bar(Recall_pos, Recall)
plt.xticks(Recall_pos, Methods)
plt.title('Comparing the Recall of Each Model')
plt.show()
Sensitivity = [sens_dt, sens_rf, sens_gs]
Sensitivity_pos = np.arange(len(Methods))
plt.bar(Sensitivity_pos, Sensitivity)
plt.xticks(Sensitivity_pos, Methods)
plt.title('Comparing the Sensitivity of Each Model')
plt.show()
Specificity = [spec_dt, spec_rf, spec_gs]
Specificity_pos = np.arange(len(Methods))
plt.bar(Specificity_pos, Specificity)
plt.xticks(Specificity_pos, Methods)
plt.title('Comparing the Specificity of Each Model')
plt.show()
###Output
_____no_output_____ |
Desafio Properatti - Grupo 3.ipynb | ###Markdown
Properati Challenge - Data Cleaning - Group 3

In this project the challenge is to clean the real-estate database provided by Properati. The goal of the cleaning is to leave the dataset ready to be used later for regressions and for estimating the value of new observations.

How are we going to do it?
* 1st Exploratory analysis: run many tests and decide which strategies to apply to these data.
* 2nd Normalize, correct and fill in the information that allows it, without affecting future predictions.
* 3rd Remove everything that is not useful.
* 4th Compute the dummy variables and show the results.

1. Exploratory analysis

Based on the exploratory analysis of the data, we test some of the hypotheses that we will take into account to standardize the information that will allow future predictions. In the cases where our hypotheses are confirmed, we carry out their execution in the following block.
###Code
# Import libraries
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib as mpl
import re
# used for tests run on the image urls and the links to the listings
import requests
import hashlib
from IPython.core.display import HTML
%matplotlib inline
# import the data file
df = pd.read_csv("properatti.csv")
df.shape
df.info()
df.describe()
###Output
_____no_output_____
###Markdown
Hypothesis work: m2 columns with swapped values
###Code
# Look for the cases where the total surface is smaller than the covered surface.
print("Muestra de M2 con superficie total menor que superficie cubierta")
display(df.loc[(df.surface_total_in_m2 < df.surface_covered_in_m2),["surface_total_in_m2","surface_covered_in_m2"]].sample(5))
# We suspect that the values of the total-surface and covered-surface columns may have been swapped by mistake.
# We compute the ratio of one over the other and look at its mean, which lets us check this hypothesis.
df['cubierta_sobre_total'] = df['surface_total_in_m2']/ df['surface_covered_in_m2']
df['total_sobre_cubierta'] = df['surface_covered_in_m2']/ df['surface_total_in_m2']
df['valores_invertidos'] = df['surface_covered_in_m2'] < df['surface_total_in_m2']
# Drop the rows where both values are equal so they do not distort the average.
# We observe that the ratios are similar, which lets us assume that the values of both columns were indeed swapped.
print("Resumen de la media de la relacion de variables sobre M2 (valores invertidos y no invertidos)")
display(
df.drop(df.loc[df['surface_total_in_m2'] == df['surface_covered_in_m2']].index)\
[["valores_invertidos","total_sobre_cubierta","cubierta_sobre_total"]].groupby(['valores_invertidos']).mean()
)
# Because of this we decide to swap the values of the total-surface and covered-surface columns whenever the former is smaller than the latter
###Output
_____no_output_____
###Markdown
Hypothesis work: deriving m2 from the property value and the value per meter
###Code
# I can only solve for the unknown if I have both the price and the price per meter
# example: x = df['price_aprox_usd']/df['price_usd_per_m2']
# Check the different combinations
print("USD, Con Precio y PPM USD pero sin M2: {}".format(
df.loc[(~df["price_aprox_usd"].isnull()) & (~df["price_usd_per_m2"].isnull()) & (df["surface_total_in_m2"].isnull()),"operation"].count()
))
print("ARS, Con Precio y PPM ARS pero sin M2: {}".format(
df.loc[(~df["price_aprox_local_currency"].isnull()) & (~df["price_per_m2"].isnull()) & (df["surface_total_in_m2"].isnull()),"operation"].count()
))
print("Con Precio default y PPM default pero sin M2: {}".format(
df.loc[(~df["price"].isnull()) & (~df["price_per_m2"].isnull()) & (df["surface_total_in_m2"].isnull()),"operation"].count()
))
print("Con Precio default y PPM USD pero sin M2: {}".format(
df.loc[(~df["price"].isnull()) & (~df["price_usd_per_m2"].isnull()) & (df["surface_total_in_m2"].isnull()),"operation"].count()
))
# Hypothesis failed: it does not yield any new data
###Output
_____no_output_____
###Markdown
Hypothesis work: finding useful values in the title and description to fill the nulls in M2
###Code
# create a regex and run it on the title and the description to see what it finds
pattern= r'([\.\d]{2,99}) (?!m²|m2|mt|metro)'
m2ExtractedFromTitle=df.loc[df["surface_total_in_m2"].isnull(),'title'].str.extract(pattern, re.IGNORECASE)
m2FromDescription=df.loc[df["surface_total_in_m2"].isnull(),'description'].str.extract(pattern, re.IGNORECASE)
# print a summary of the results
print("Valores en columna titulo: {}".format(m2ExtractedFromTitle.dropna().describe().loc["count",0]))
print("Valores en columna descripcion: {}".format(m2FromDescription.dropna().describe().loc["count",0]))
# Print what was found in the title.
df["m2Extracted"] = m2ExtractedFromTitle
for index,x in df.iloc[m2ExtractedFromTitle.dropna().index].loc[:,["title","m2Extracted"]].iterrows():
print("\r Found: {} \t Title: {} ".format(x["m2Extracted"],x["title"]))
# Drop the temporary column
df.drop("m2Extracted",axis=1,inplace=True)
# Hypothesis failed: after reviewing the results there is a lot of false information and it is not reliable
###Output
Valores en columna titulo: 936
Valores en columna descripcion: 7318
###Markdown
2. Normalize, correct and fill in information

** We correct the square meters that we found swapped
###Code
# Create a temporal_dos column to filter the subset of rows whose values need to be swapped
df['temporal_dos'] = (df.surface_total_in_m2 < df.surface_covered_in_m2)
print("Cantidad de registros a invertir entre Superficies total y cubierta: {}".format(df['temporal_dos'].sum()))
# Create a temporary column to hold the data
df['temporal'] = df.surface_total_in_m2
# Move the covered-surface values into the total-surface column
df.loc[df['temporal_dos'],'surface_total_in_m2'] = df.loc[df['temporal_dos'],'surface_covered_in_m2']
# Move the original total-surface values into the covered-surface column
df.loc[df['temporal_dos'], 'surface_covered_in_m2'] = df.loc[df['temporal_dos'], 'temporal']
# Recreate the temporal_dos column to check whether any swapped values remain
df['temporal_dos'] = (df.surface_total_in_m2 < df.surface_covered_in_m2)
print("Cantidad de registros que siguen invertidos: {}".format(df['temporal_dos'].sum()))
# Drop the temporary columns (inplace so the drop actually takes effect)
df.drop('temporal', axis=1, inplace=True)
df.drop('temporal_dos', axis=1, inplace=True)
###Output
Cantidad de registros a invertir entre Superficies total y cubierta: 1106
Cantidad de registros que siguen invertidos: 0
###Markdown
** Cleaning of nulls in the surface variables
###Code
# Square meters equal to zero in the surface columns are set to null
print("Valores M2 cubierto en cero puestos en Nan: {}".format((df["surface_covered_in_m2"] == 0).sum()))
print("Valores M2 totales en cero puestos en Nan: {}".format((df["surface_total_in_m2"] == 0).sum()))
df.loc[(df["surface_total_in_m2"] == 0),["surface_total_in_m2"]] = np.nan
df.loc[(df["surface_covered_in_m2"] == 0),["surface_covered_in_m2"]] = np.nan
print("-------------")
# Check values before the replacement
print("Antes del reemplazo")
print("Nulos en totales: {}".format(df['surface_total_in_m2'].isnull().sum()))
print("Nulos en cubiertos: {}".format(df['surface_covered_in_m2'].isnull().sum()))
print("Nulos en ambos al mismo tiempo: {}".format(df.loc[(df['surface_covered_in_m2'].isnull()) & (df['surface_total_in_m2'].isnull()) ,:].loc[:,"operation"].count()))
print("Nulos totales y no en cubiertos: {}".format(df.loc[(~df['surface_covered_in_m2'].isnull()) & (df['surface_total_in_m2'].isnull()) ,:].loc[:,"operation"].count()))
print("Nulos cubierto y no en totales: {}".format(df.loc[(df['surface_covered_in_m2'].isnull()) & (~df['surface_total_in_m2'].isnull()) ,:].loc[:,"operation"].count()))
print("Valores iguales: {}".format(df.loc[df['surface_total_in_m2'] == df['surface_covered_in_m2'],"surface_covered_in_m2"].count()))
# fill the missing total m2 with the covered m2
df.loc[(~df['surface_covered_in_m2'].isnull()) & ( df['surface_total_in_m2'].isnull()) ,"surface_total_in_m2"] = df["surface_covered_in_m2"]
# fill the missing covered m2 with the total m2
df.loc[( df['surface_covered_in_m2'].isnull()) & (~df['surface_total_in_m2'].isnull()) ,"surface_covered_in_m2"] = df["surface_total_in_m2"]
print("-------------")
# Check values after the replacement
print("Despues del reemplazo")
print("Nulos en totales: {}".format(df['surface_total_in_m2'].isnull().sum()))
print("Nulos en cubiertos: {}".format(df['surface_covered_in_m2'].isnull().sum()))
print("Nulos en ambos al mismo tiempo: {}".format(df.loc[(df['surface_covered_in_m2'].isnull()) & (df['surface_total_in_m2'].isnull()) ,:].loc[:,"operation"].count()))
print("Nulos totales y no en cubiertos: {}".format(df.loc[(~df['surface_covered_in_m2'].isnull()) & (df['surface_total_in_m2'].isnull()) ,:].loc[:,"operation"].count()))
print("Nulos cubierto y no en totales: {}".format(df.loc[(df['surface_covered_in_m2'].isnull()) & (~df['surface_total_in_m2'].isnull()) ,:].loc[:,"operation"].count()))
print("Valores iguales: {}".format(df.loc[df['surface_total_in_m2'] == df['surface_covered_in_m2'],"surface_covered_in_m2"].count()))
###Output
Valores M2 cubierto en cero puestos en Nan: 2
Valores M2 totales en cero puestos en Nan: 383
-------------
Antes del reemplazo
Nulos en totales: 39711
Nulos en cubiertos: 19909
Nulos en ambos al mismo tiempo: 12752
Nulos totales y no en cubiertos: 26959
Nulos cubierto y no en totales: 7157
Valores iguales: 24173
-------------
Despues del reemplazo
Nulos en totales: 12752
Nulos en cubiertos: 12752
Nulos en ambos al mismo tiempo: 12752
Nulos totales y no en cubiertos: 0
Nulos cubierto y no en totales: 0
Valores iguales: 58289
###Markdown
3. Remove everything that is not useful

Remove duplicates
###Code
# Show the initial shape.
display(df.shape)
# find the indices of duplicated records (ignoring the urls and the auto-increment first column)
duplicados=df.loc[df.drop("Unnamed: 0",axis=1).drop("properati_url",axis=1).drop("image_thumbnail",axis=1).duplicated(keep="last")]
print("Registros duplicados: {}".format(duplicados["operation"].count()))
# DROP the duplicates
df.drop(duplicados.index, inplace=True)
display(df.shape)
###Output
_____no_output_____
###Markdown
Remove redundant columns that add nothing to the prediction model
###Code
# DROP UNUSED COLUMNS
def drop_column(column,df):
try:
df.drop(column,axis=1,inplace=True)
print("Dropeando columna {} ".format(column));
except:
print("Columna {} ya dropeada ".format(column)) ;
# Show the initial shape.
display(df.shape)
# properati_url: has no predictive value at all
drop_column("properati_url",df)
# image_thumbnail: as seen before, there are duplicated images for different apartments.
drop_column("image_thumbnail",df)
# unnamed 0: just replicates the index on every row
drop_column("Unnamed: 0",df)
# operation: always "sell", adds nothing to the model
drop_column("operation",df)
# country_name: always Argentina, adds nothing to the model
drop_column("country_name",df)
display(df.shape)
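# Step 4 of the plan (dummy variables) - a minimal sketch, not yet applied.
# The categorical column names are an assumption here (e.g. property_type / state_name,
# if present in this Properati dump); adjust them to the columns kept for the regression.
categorical_columns = [c for c in ["property_type", "state_name"] if c in df.columns]
df_with_dummies = pd.get_dummies(df, columns=categorical_columns, drop_first=True)
display(df_with_dummies.shape)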
###Output
_____no_output_____ |
data/attention-index-march-2018.ipynb | ###Markdown
Setup
###Code
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
date_filename = "2018-03-01_2018-03-31"
data = pd.read_csv("articles_" + date_filename + ".csv", index_col="id", \
parse_dates=["published", "discovered"])
data.head()
###Output
_____no_output_____
###Markdown
Response Score

The response score is a number between 0 and 50 that indicates the level of response to an article. Perhaps in the future we may choose to include other factors, but for now we just include engagements on Facebook. The maximum score of 50 should be achieved by an article that does really well compared with others.
###Code
pd.options.display.float_format = '{:.2f}'.format
data.fb_engagements.describe([0.5, 0.75, 0.9, 0.95, 0.99, 0.995, 0.999])
###Output
_____no_output_____
###Markdown
There's 2 articles with more than 1 million engagements this month.
###Code
data[data.fb_engagements > 1000000]
data.fb_engagements.mode()
###Output
_____no_output_____
###Markdown
*november* Going back to the enagement counts, we see the mean is 1,117, mode is zero, median is 24, 90th percentile is 1,453, 99th percentile is 21,166, 99.5th percentile is 33,982. The standard deviation is 8,083, significantly higher than the mean, so this is not a normal distribution. *december* Going back to the enagement counts, we see the mean is 1,106, mode is zero, median is 24, 90th percentile is 1,545, 99th percentile is 20,228, 99.5th percentile is 32,446. The standard deviation is 9,852, significantly higher than the mean, so this is not a normal distribution. *january 2018* Going back to the enagement counts, we see the mean is 1,108, mode is zero, median is 26, 90th percentile is 1,621, 99th percentile is 19,918, 99.5th percentile is 32,935. The standard deviation is 8,278, significantly higher than the mean, so this is not a normal distribution. *february 2018* Going back to the enagement counts, we see the mean is 1,237, mode is zero, median is 25, 90th percentile is 1,453, 99th percentile is 23,172, 99.5th percentile is 38,692. The standard deviation is 11,019, significantly higher than the mean, so this is not a normal distribution. *march 2018* Going back to the enagement counts, we see the mean is 1,519, mode is zero, median is 25, 90th percentile is 1,597, 99th percentile is 27,468, 99.5th percentile is 51,204. The standard deviation is 15,132, significantly higher than the mean, so this is not a normal distribution. Key publishers stats
###Code
data.groupby("publisher_id").agg({'url': 'count', 'fb_engagements': ['sum', 'median', 'mean']})
mean = data.fb_engagements.mean()
median = data.fb_engagements.median()
non_zero_fb_enagagements = data.fb_engagements[data.fb_engagements > 0]
###Output
_____no_output_____
###Markdown
That's a bit better, but still way too clustered at the low end. Let's look at a log normal distribution.
###Code
mean = data.fb_engagements.mean()
median = data.fb_engagements.median()
ninety = data.fb_engagements.quantile(.90)
ninetyfive = data.fb_engagements.quantile(.95)
ninetynine = data.fb_engagements.quantile(.99)
plt.figure(figsize=(12,4.5))
plt.hist(np.log(non_zero_fb_enagagements + median), bins=50)
plt.axvline(np.log(mean), linestyle=':', label=f'Mean ({mean:,.0f})', color='green')
plt.axvline(np.log(median), label=f'Median ({median:,.0f})', color='green')
plt.axvline(np.log(ninety), linestyle='--', label=f'90% percentile ({ninety:,.0f})', color='red')
plt.axvline(np.log(ninetyfive), linestyle='-.', label=f'95% percentile ({ninetyfive:,.0f})', color='red')
plt.axvline(np.log(ninetynine), linestyle=':', label=f'99% percentile ({ninetynine:,.0f})', color='red')
leg = plt.legend()
eng = data.fb_engagements[(data.fb_engagements < 5000)]
mean = data.fb_engagements.mean()
median = data.fb_engagements.median()
ninety = data.fb_engagements.quantile(.90)
ninetyfive = data.fb_engagements.quantile(.95)
ninetynine = data.fb_engagements.quantile(.99)
plt.figure(figsize=(15,7))
plt.hist(eng, bins=50)
plt.title("Article count by engagements")
plt.axvline(median, label=f'Median ({median:,.0f})', color='green')
plt.axvline(mean, linestyle=':', label=f'Mean ({mean:,.0f})', color='green')
plt.axvline(ninety, linestyle='--', label=f'90% percentile ({ninety:,.0f})', color='red')
plt.axvline(ninetyfive, linestyle='-.', label=f'95% percentile ({ninetyfive:,.0f})', color='red')
# plt.axvline(ninetynine, linestyle=':', label=f'99% percentile ({ninetynine:,.0f})', color='red')
leg = plt.legend()
log_engagements = (non_zero_fb_enagagements
.clip_upper(data.fb_engagements.quantile(.999))
.apply(lambda x: np.log(x + median))
)
log_engagements.describe()
###Output
_____no_output_____
###Markdown
Use standard feature scaling to bring that to a 1 to 50 range
###Code
def scale_log_engagements(engagements_logged):
return np.ceil(
50 * (engagements_logged - log_engagements.min()) / (log_engagements.max() - log_engagements.min())
)
def scale_engagements(engagements):
return scale_log_engagements(np.log(engagements + median))
scaled_non_zero_engagements = scale_log_engagements(log_engagements)
scaled_non_zero_engagements.describe()
# add in the zeros, as zero
scaled_engagements = pd.concat([scaled_non_zero_engagements, data.fb_engagements[data.fb_engagements == 0]])
proposed = pd.DataFrame({"fb_engagements": data.fb_engagements, "response_score": scaled_engagements})
proposed.response_score.plot.hist(bins=50)
###Output
_____no_output_____
###Markdown
Looks good to me, let's save that.
###Code
data["response_score"] = proposed.response_score
###Output
_____no_output_____
###Markdown
Proposal

The maximum of 50 points is awarded when the engagements are greater than the 99.9th percentile, rolling over the last month. i.e. where $limit$ is the 99.9th percentile of engagements calculated over the previous month, the response score for article $a$ is:

\begin{align}
basicScore_a & = \begin{cases} 0 & \text{if } engagements_a = 0 \\ \log(\min(engagements_a,limit) + median(engagements)) & \text{if } engagements_a > 0\end{cases} \\
responseScore_a & = \begin{cases} 0 & \text{if } engagements_a = 0 \\ 50 \cdot \frac{basicScore_a - \min(basicScore)}{\max(basicScore) - \min(basicScore)} & \text{if } engagements_a > 0\end{cases} \\
\\
\text{The latter equation can be expanded to:} \\
responseScore_a & = \begin{cases} 0 & \text{if } engagements_a = 0 \\ 50 \cdot \frac{\log(\min(engagements_a,limit) + median(engagements)) - \log(1 + median(engagements))} {\log(limit + median(engagements)) - \log(1 + median(engagements))} & \text{if } engagements_a > 0\end{cases} \\
\end{align}

Promotion Score

The aim of the promotion score is to indicate how important the article was to the publisher, by tracking where they chose to promote it. This is a number between 0 and 50 comprised of:
- 20 points based on whether the article was promoted as the "lead" story on the publisher's home page
- 15 points based on how long the article was promoted anywhere on the publisher's home page
- 15 points based on whether the article was promoted on the publisher's main facebook brand page

The first two should be scaled by the popularity/reach of the home page, for which we use the alexa page rank as a proxy. The last should be scaled by the popularity/reach of the brand page, for which we use the number of likes the brand page has.

Lead story (20 points)
###Code
data.mins_as_lead.describe([0.5, 0.75, 0.9, 0.95, 0.99, 0.995, 0.999])
###Output
_____no_output_____
###Markdown
As expected, the vast majority of articles don't make it as lead. Let's explore how long publishers typically keep something as lead.
###Code
lead_articles = data[data.mins_as_lead > 0]
lead_articles.mins_as_lead.describe([0.25, 0.5, 0.75, 0.9, 0.95, 0.99, 0.995, 0.999])
lead_articles.mins_as_lead.plot.hist(bins=50)
###Output
_____no_output_____
###Markdown
For lead, it's a significant thing for an article to be lead at all, so although we want to penalise articles that were lead for a very short time, mostly we want to score the maximum even if it wasn't lead for ages. So we'll give maximum points when something has been lead for an hour.
###Code
lead_articles.mins_as_lead.clip_upper(60).plot.hist(bins=50)
###Output
_____no_output_____
###Markdown
We also want to scale this by the alexa page rank, such that the maximum score of 20 points goes to an article that was lead for an hour on the most popular site. So let's explore the alexa numbers.
###Code
alexa_ranks = data.groupby(by="publisher_id").alexa_rank.mean().sort_values()
alexa_ranks
alexa_ranks.plot.bar(figsize=[10,5])
###Output
_____no_output_____
###Markdown
Let's try the simple option first: just divide the number of minutes as lead by the alexa rank. What's the scale of numbers we get then.
###Code
lead_proposal_1 = lead_articles.mins_as_lead.clip_upper(60) / lead_articles.alexa_rank
lead_proposal_1.plot.hist()
###Output
_____no_output_____
###Markdown
Looks like there's too much of a cluster around 0. Have we massively over penalised the publishers with a high alexa rank?
###Code
lead_proposal_1.groupby(data.publisher_id).mean().plot.bar(figsize=[10,5])
###Output
_____no_output_____
###Markdown
Yes. Let's try taking the log of the alexa rank and see if that looks better.
###Code
lead_proposal_2 = (lead_articles.mins_as_lead.clip_upper(60) / np.log(lead_articles.alexa_rank))
lead_proposal_2.plot.hist()
lead_proposal_2.groupby(data.publisher_id).describe()
lead_proposal_2.groupby(data.publisher_id).min().plot.bar(figsize=[10,5])
###Output
_____no_output_____
###Markdown
That looks about right, as long as the smaller publishers were closer to zero. So let's apply feature scaling to this, to give a number between 1 and 20. (Anything not as lead will pass through as zero.)
###Code
def rescale(series):
return (series - series.min()) / (series.max() - series.min())
lead_proposal_3 = np.ceil(20 * rescale(lead_proposal_2))
lead_proposal_2.min(), lead_proposal_2.max()
lead_proposal_3.plot.hist()
lead_proposal_3.groupby(data.publisher_id).median().plot.bar(figsize=[10,5])
data["lead_score"] = pd.concat([lead_proposal_3, data.mins_as_lead[data.mins_as_lead==0]])
data.lead_score.value_counts().sort_index()
data.lead_score.groupby(data.publisher_id).max()
###Output
_____no_output_____
###Markdown
In summary then, the score for article $a$ is:

$$unscaledLeadScore_a = \frac{\min(minsAsLead_a, 60)}{\log(alexaRank_a)}\\
leadScore_a = 19 \cdot \frac{unscaledLeadScore_a - \min(unscaledLeadScore)}{\max(unscaledLeadScore) - \min(unscaledLeadScore)} + 1$$

Since the minimum value of $minsAsLead$ is 1, $\min(unscaledLeadScore)$ is pretty insignificant. So we can simplify this to:

$$leadScore_a = 20 \cdot \frac{unscaledLeadScore_a}{\max(unscaledLeadScore)}$$

or:

$$leadScore_a = 20 \cdot \frac{\frac{\min(minsAsLead_a, 60)}{\log(alexaRank_a)}}{\frac{60}{\log(\max(alexaRank))}}$$

$$leadScore_a = 20 \cdot \frac{\min(minsAsLead_a, 60)}{\log(alexaRank_a)} \cdot \frac{\log(\max(alexaRank))}{60}$$

Time on front score (15 points)

This is similar to time as lead, so let's try doing the same calculation, except we also want to factor in the number of slots on the front:

$$frontScore_a = 15 \left(\frac{\min(minsOnFront_a, 1440)}{alexaRank_a \cdot numArticlesOnFront_a}\right) \left( \frac{\min(alexaRank \cdot numArticlesOnFront)}{1440} \right)$$
###Code
(data.alexa_rank * data.num_articles_on_front).min() / 1440
time_on_front_proposal_1 = np.ceil(data.mins_on_front.clip_upper(1440) / (data.alexa_rank * data.num_articles_on_front) * (2.45) * 15)
time_on_front_proposal_1.plot.hist(figsize=(15, 7), bins=15)
time_on_front_proposal_1.value_counts().sort_index()
time_on_front_proposal_1.groupby(data.publisher_id).sum()
###Output
_____no_output_____
###Markdown
That looks good to me.
###Code
data["front_score"] = np.ceil(data.mins_on_front.clip_upper(1440) / (data.alexa_rank * data.num_articles_on_front) * (2.45) * 15).fillna(0)
data.front_score
###Output
_____no_output_____
###Markdown
Facebook brand page promotion (15 points)

One way a publisher has of promoting content is to post to their brand page. The significance of doing so is stronger when the brand page has more followers (likes).

$$ facebookPromotionProposed1_a = 15 \left( \frac {brandPageLikes_a} {\max(brandPageLikes)} \right) $$

Now let's explore the data to see if that makes sense. **tl;dr: the formula above is incorrect**
###Code
data.fb_brand_page_likes.max()
facebook_promotion_proposed_1 = np.ceil((15 * (data.fb_brand_page_likes / data.fb_brand_page_likes.max())).fillna(0))
facebook_promotion_proposed_1.value_counts().sort_index().plot.bar()
facebook_promotion_proposed_1.groupby(data.publisher_id).describe()
###Output
_____no_output_____
###Markdown
That's too much variation: sites like the Guardian, which have a respectable 7.5m likes, should not be scoring a 3. Let's try applying a log to it, and then standard feature scaling again.
###Code
data.fb_brand_page_likes.groupby(data.publisher_id).max()
np.log(2149)
np.log(data.fb_brand_page_likes.groupby(data.publisher_id).max())
###Output
_____no_output_____
###Markdown
That's more like it, but the lower numbers should be smaller.
###Code
np.log(data.fb_brand_page_likes.groupby(data.publisher_id).max() / 1000)
scaled_fb_brand_page_likes = (data.fb_brand_page_likes / 1000)
facebook_promotion_proposed_2 = np.ceil(\
(15 * \
(np.log(scaled_fb_brand_page_likes) / np.log(scaled_fb_brand_page_likes.max()))\
)\
).fillna(0)
facebook_promotion_proposed_2.groupby(data.publisher_id).max()
###Output
_____no_output_____
###Markdown
LGTM. So the equation is

$$ facebookPromotion_a = 15 \left( \frac {\log(\frac {brandPageLikes_a}{1000})} {\log(\frac {\max(brandPageLikes)}{1000})} \right) $$

Now, let's try applying the standard feature scaling approach to this, rather than using a magic number of 1,000. That equation would be:

\begin{align}
unscaledFacebookPromotion_a &= \log(brandPageLikes_a) \\
facebookPromotion_a &= 15 \cdot \frac{unscaledFacebookPromotion_a - \min(unscaledFacebookPromotion)}{\max(unscaledFacebookPromotion) - \min(unscaledFacebookPromotion)} \\
\\
\text{The scaling can be simplified to:} \\
facebookPromotion_a &= 15 \cdot \frac{unscaledFacebookPromotion_a - \log(\min(brandPageLikes))}{\log(\max(brandPageLikes)) - \log(\min(brandPageLikes))} \\
\\
\text{Meaning the overall equation becomes:} \\
facebookPromotion_a &= 15 \cdot \frac{\log(brandPageLikes_a) - \log(\min(brandPageLikes))}{\log(\max(brandPageLikes)) - \log(\min(brandPageLikes))}
\end{align}
###Code
facebook_promotion_proposed_3 = np.ceil(
(14 *
(
(np.log(data.fb_brand_page_likes) - np.log(data.fb_brand_page_likes.min()) ) /
(np.log(data.fb_brand_page_likes.max()) - np.log(data.fb_brand_page_likes.min()))
)
) + 1
)
facebook_promotion_proposed_3.groupby(data.publisher_id).max()
data["facebook_promotion_score"] = facebook_promotion_proposed_3.fillna(0.0)
###Output
_____no_output_____
###Markdown
Review
###Code
data["promotion_score"] = (data.lead_score + data.front_score + data.facebook_promotion_score)
data["attention_index"] = (data.promotion_score + data.response_score)
data.promotion_score.plot.hist(bins=np.arange(50), figsize=(15,6))
data.attention_index.plot.hist(bins=np.arange(100), figsize=(15,6))
data.attention_index.value_counts().sort_index()
# and let's see the articles with the biggest attention index
data.sort_values("attention_index", ascending=False)
data["score_diff"] = data.promotion_score - data.response_score
# promoted but low response
data.sort_values("score_diff", ascending=False).head(25)
# high response but not promoted
data.sort_values("score_diff", ascending=True).head(25)
###Output
_____no_output_____
###Markdown
Write that data to a file. Note that the scores here are provisional for two reasons:1. they should be using a rolling-month based on the article publication date to calculate medians/min/max etc, whereas in this workbook we are just using values for the month of May 2. for analysis, we've rounded the numbers; we don't expect to do that for the actual scores
###Code
data.to_csv("articles_with_provisional_scores_" + date_filename + ".csv")
###Output
_____no_output_____ |
Clustering/K-Means Clustering/k_means_clustering.ipynb | ###Markdown
K-Means Clustering Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Mall_Customers.csv')
X = dataset.iloc[:, [3, 4]].values
###Output
_____no_output_____
###Markdown
Using the elbow method to find the optimal number of clusters
###Code
from sklearn.cluster import KMeans
wcss = []
for cluster in range(1, 11):
kmeans = KMeans(n_clusters = cluster, random_state=42, init = 'k-means++')
kmeans.fit(X)
wcss.append(kmeans.inertia_) ##gives wcss value of the models
plt.plot(range(1, 11), wcss)
plt.title('Elbow Function')
plt.xlabel('No. of Clusters')
plt.ylabel('WCSS')
plt.show()
###Output
_____no_output_____
###Markdown
Training the K-Means model on the dataset
###Code
kmeans = KMeans(n_clusters = 5, random_state=42, init = 'k-means++')
y_kmeans = kmeans.fit_predict(X)
print(y_kmeans)
###Output
_____no_output_____
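###Markdown
As a quick illustration of the fitted model's API, a new (hypothetical, made-up) customer can be assigned to one of the learned clusters with `predict`:
###Code
# Hypothetical new customer: annual income 60 k$, spending score 50 (made-up values)
new_customer = np.array([[60, 50]])
print(kmeans.predict(new_customer))
###Output
_____no_output_____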
###Markdown
Visualising the clusters
###Code
plt.scatter(X[y_kmeans == 0,0],X[y_kmeans == 0,1],s = 100, c = 'red', label = 'Cluster 1')
plt.scatter(X[y_kmeans == 1,0],X[y_kmeans == 1,1],s = 100, c = 'blue', label = 'Cluster 2')
plt.scatter(X[y_kmeans == 2,0],X[y_kmeans == 2,1],s = 100, c = 'green', label = 'Cluster 3')
plt.scatter(X[y_kmeans == 3,0],X[y_kmeans == 3,1],s = 100, c = 'cyan', label = 'Cluster 4')
plt.scatter(X[y_kmeans == 4,0],X[y_kmeans == 4,1],s = 100, c = 'magenta', label = 'Cluster 5')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s = 300, c = 'yellow', label = 'Centroids')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
###Output
_____no_output_____ |
my_notebooks/intro_to_numpy.ipynb | ###Markdown
MarkDownSample
###Code
import numpy as np
###Output
_____no_output_____ |
Chapter_2_Elements_of_matrix_theory.ipynb | ###Markdown
Table of ContentsChapter 2 - Elements of Matrix Theory2.1.2 The Jordan Normal FormExample 2.5 Revisiting the wireless sensor network exampleNumPy/ SciPy approachSymPy approach2.1.3 Semi-convergence and convergence for discrete-time linear systemsDefinition 2.6 (Spectrum and spectral radius of a matrix)2.2.1 The spectral radius for row-stochastic matricesTheorem 2.8 (Geršgorin Disks Theorem)2.3.3 Applications to matrix powers and averaging systemsTheorem 2.13 (Powers of non-negative matrices with a simple and strictly dominant eigenvalue)Example 2.14 Wireless sensor networkExercises 2.18Exercises 2.19
###Code
%matplotlib widget
# Import packages
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import networkx as nx
import scipy.linalg as spla
from sympy import Matrix
# For interactive graphs
import ipywidgets as widgets
# Import self defined functions
import lib # General library
# Settings
custom_figsize= (6, 4) # Might need to change this value to fit the figures to your screen
custom_figsize_square = (5, 5)
###Output
_____no_output_____
###Markdown
Chapter 2 - Elements of Matrix TheoryThese Jupyter Notebook scripts contain some examples, visualizations and supplements accompanying the book "Lectures on Network Systems" by Francesco Bullo http://motion.me.ucsb.edu/book-lns/. These scripts are published with the MIT license. **Make sure to run the first cell above to import all necessary packages and functions and to adapt the settings if necessary.** In this script it is necessary to execute the cells chronologically due to recurring examples (except for, e.g., the Exercises at the end). (Tip: use the shortcut Shift+Enter to execute each cell.) Most of the functions are kept in separate files to keep this script neat. 2.1.2 The Jordan Normal Form Example 2.5 Revisiting the wireless sensor network exampleThe following cells show the computation of the Jordan Normal Form $J$, the invertible transformation matrix $T$, and some of their dependencies.
###Code
# Defining the A matrix again
A = np.array([[1/2, 1/2, 0., 0.],
[1/4, 1/4, 1/4, 1/4],
[0., 1/3, 1/3, 1/3],
[0., 1/3, 1/3, 1/3]
])
###Output
_____no_output_____
###Markdown
It is possible to calculate the Jordan Normal Form directly with the SymPy package (https://docs.sympy.org/latest/index.html). However, we first determine the Jordan Normal Form via the generalized eigenvectors (see the book for literature recommendations on generalized eigenvectors) using the SciPy package, in order to discuss some possibilities and problems of non-symbolic toolboxes.
###Code
# Right eigenvectors
lambdas, eigv = spla.eig(A)
# Left eigenvectors
lambdas2, eigw = spla.eig(A.T)
###Output
_____no_output_____
###Markdown
Due to numerical instabilities, the zero values are not reflected and it can be seen, how the expected eigenvalue of 1 is not precise. The zeros can be fixed with:
###Code
def correct_close_to_zero(M, tol=1e-12):
M.real[abs(M.real) < tol] = 0.0
if M.imag.any():
M.imag[abs(M.imag) < tol] = 0.0
return M
eigv_cor = correct_close_to_zero(eigv)
eigw_cor = correct_close_to_zero(eigw)
lambdas_cor = correct_close_to_zero(lambdas)
lambdas2_cor = correct_close_to_zero(lambdas2)
print("Right eigenvectors:")
lib.matprint(eigv_cor)
print("\n")
print("Left eigenvectors:")
lib.matprint(eigw_cor)
print("\n")
print("Eigenvalues (right):")
lib.matprint(lambdas_cor)
print("\n")
print("Eigenvalues (left) for matching later:")
lib.matprint(lambdas2_cor)
###Output
_____no_output_____
###Markdown
There are now two options for $T^{-1}$: taking the inverse of the right eigenvectors (which again introduces numerical instabilities) or building it from the left eigenvectors, which would include some sorting to match the eigenvalue order of the right eigenvectors (often they are already aligned, since scipy.linalg.eig is called twice on a matrix with the same eigenvalues).
###Code
T = eigv_cor.copy()*-1 # Rescale the eigenvectors to match eigenvalues later
# Sorting if necessary, remember to use transpose, since in T^-1 the rows represent the left eigenvectors.
Tinv = eigw_cor.T.copy()
###Output
_____no_output_____
###Markdown
Now we can simply compute J. When compared, it is fairly close to the solution in the book; however, due to numerical instabilities it is not precise. Furthermore, the order of the eigenvalues might differ from the one in the book.
###Code
J = correct_close_to_zero(Tinv@A@T)
print("Jordan Normal Form via SciPy/Numpy:")
lib.matprint(J)
###Output
_____no_output_____
###Markdown
SymPy approachNow we use the symbolic toolbox SymPy from Python as a black box. Note that here, too, the order of the eigenvalues might be different!
###Code
Asym = Matrix(A) # Sympy Matrix toolbox object
Tsym, Jsym = Asym.jordan_form()
###Output
_____no_output_____
###Markdown
Here we can compare them with our previous results:
###Code
print("Jordan Normal Form SymPy:")
lib.matprint(np.array(Jsym).astype(np.float64))
print("Jordan Normal Form SciPy:")
lib.matprint(J)
###Output
_____no_output_____
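###Markdown
As a quick consistency check (a small sketch), the matrices returned by `jordan_form` should satisfy $A \approx T J T^{-1}$; since `Asym` was built from floating-point entries, at most tiny numerical residues are expected:
###Code
# Consistency check: Tsym * Jsym * Tsym^{-1} should reproduce Asym (up to float residues)
diff = Tsym * Jsym * Tsym.inv() - Asym
lib.matprint(np.array(diff).astype(np.float64))
###Output
_____no_output_____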
###Markdown
2.1.3 Semi-convergence and convergence for discrete-time linear systems Definition 2.6 (Spectrum and spectral radius of a matrix)We display the spectrum of the previous A matrix with the spectral radius for visualization purposes. Additionally, we also show the spectrum of a randomly generated matrix.
###Code
fig, ax213 = plt.subplots(figsize=custom_figsize_square)
lib.plot_spectrum(A, ax213);
n_M1=8
# A uniformly distributed, positive, row-stochastic matrix vs. a non-row-stochastic one
M1 = np.random.uniform(0, 1,(n_M1,n_M1))
M1 = M1 / M1.sum(axis=1, keepdims=1) # Row-stochastic
M2 = M1 - 0.05 # Not row-stochastic
fig, (ax2131, ax2132) = plt.subplots(1,2, figsize=(custom_figsize_square[0]*2, custom_figsize_square[1]))
lib.plot_spectrum(M1, ax2131);
lib.plot_spectrum(M2, ax2132);
###Output
_____no_output_____
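###Markdown
The spectral radius itself can also be computed directly from the eigenvalues; a small sketch:
###Code
def spectral_radius(M):
    # Largest absolute value among the eigenvalues of M
    return np.max(np.abs(np.linalg.eigvals(M)))

print(spectral_radius(A))   # row-stochastic, expect 1 (up to numerics)
print(spectral_radius(M1))  # row-stochastic, expect 1 (up to numerics)
print(spectral_radius(M2))  # not row-stochastic, generally different from 1
###Output
_____no_output_____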
###Markdown
2.2.1 The spectral radius for row-stochastic matrices Theorem 2.8 (Geršgorin Disks Theorem)Similar to before, the Geršgorin Disks are now visualized for a row-stochastic matrix and a non-row-stochastic one.
###Code
fig, (ax2211, ax2212) = plt.subplots(1,2, figsize=(custom_figsize_square[0]*2, custom_figsize_square[1]))
lib.plot_gersgorin_disks(M1, ax2211)
lib.plot_gersgorin_disks(M2, ax2212)
###Output
_____no_output_____
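###Markdown
The disks themselves are easy to compute by hand: each disk is centered at a diagonal entry $a_{ii}$ and has radius $\sum_{j \neq i} |a_{ij}|$. A small sketch:
###Code
def gersgorin_disks(M):
    # Centers are the diagonal entries, radii the absolute off-diagonal row sums
    centers = np.diag(M)
    radii = np.sum(np.abs(M), axis=1) - np.abs(centers)
    return centers, radii

centers, radii = gersgorin_disks(M1)
print(centers)
print(radii)  # for the non-negative, row-stochastic M1: centers + radii = 1
###Output
_____no_output_____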
###Markdown
2.3.3 Applications to matrix powers and averaging systems Theorem 2.13 (Powers of non-negative matrices with a simple and strictly dominant eigenvalue)Here is an example for Theorem 2.13, which shows how the powers of primitive, row-stochastic matrices converge to rank 1. This is also done for the wireless sensor network example. Example 2.14 Wireless sensor networkIn the book it is shown that the wireless sensor network matrix is primitive. Here, the eigenvectors and eigenvalues are printed again and compared with the semi-convergence result $\lim_{k \to \infty} A^k = \mathbb{1}_n w^T$ to demonstrate Theorem 2.13 for a row-stochastic matrix.
###Code
print("Left eigenvectors of A:")
lib.matprint(eigw_cor)
print("\n")
print("Eigenvalues (left) of A:")
lib.matprint(lambdas2_cor)
print("\n")
print("Normalizing dominant eigenvector:")
dom_eigv = eigw_cor[:, 0] / sum(eigw_cor[:, 0])
lib.matprint(dom_eigv)
print("\n")
print("Convergence result of A:")
lib.matprint(np.linalg.matrix_power(A, 50))
print("\n")
print("equals 1n*w^T")
lib.matprint(np.ones((4,1))@dom_eigv[:, None].T)
###Output
_____no_output_____
###Markdown
Below is a randomly generated example to show that the powers of primitive, row-stochastic matrices always converge to a rank-1 matrix. *Note: the code is not robust to a semisimple eigenvalue of 1.*
###Code
# Creating a new random primitive (positive), row-stochastic matrix here
n_M11=5
M11 = np.random.uniform(0, 1,(n_M11,n_M11))
M11 = M11 / M11.sum(axis=1, keepdims=1) # Row-stochastic
print("Random primitive row-stochastic matrix M:")
lib.matprint(M11)
print("\n")
print("Left eigenvectors of M:")
l_M, m_eigv = spla.eig(M11.T)
m_eigv = correct_close_to_zero(m_eigv)
l_M = correct_close_to_zero(l_M)
lib.matprint(m_eigv)
print("\n")
print("Eigenvalues (left) of M:")
lib.matprint(l_M)
print("\n")
# Here we locate the eigenvalue 1, allowing for numerical imprecision
print("Normalizing dominant eigenvector:")
idx_dom = np.where(abs(l_M - 1) < 0.005)[0][0]
dom_eigv_M = m_eigv[:, idx_dom] / sum(m_eigv[:, idx_dom])
lib.matprint(dom_eigv_M)
print("\n")
print("Convergence result of M:")
lib.matprint(np.linalg.matrix_power(M11, 500))
print("\n")
print("equals 1n*w^T")
lib.matprint(np.ones((n_M11,1))@dom_eigv_M[:, None].T)
###Output
_____no_output_____
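###Markdown
A compact way to confirm the statement numerically (a small sketch): the limits computed above should be, up to numerical tolerance, rank-one matrices.
###Code
# Both limits should be rank 1 (up to numerical tolerance)
print(np.linalg.matrix_rank(np.linalg.matrix_power(A, 50)))
print(np.linalg.matrix_rank(np.linalg.matrix_power(M11, 500)))
###Output
_____no_output_____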
###Markdown
Exercises 2.18This section is similar to exercise 1.4; however, here we actually visualize the graph and its node values. Additionally, we show that the values converge to the initial values multiplied by the dominant left eigenvector, as presented in Theorem 2.13 for row-stochastic matrices.First, we define the graphs and their adjacency matrix A and simulate the results. Then, for each graph a cell can be executed for the interactive visualization. A plot of the states is available in the Jupyter Notebook script for Chapter 1.
###Code
# Define x_0
xinitial = np.array([1., -1., 1., -1., 1.])
# Defining the 3 different systems.
# Complete graph
A_complete = np.ones((5,5)) / 5
# Cycle graph
A_cycle = np.array([
[1/3, 1/3, 0, 0, 1/3],
[1/3, 1/3, 1/3, 0, 0],
[0, 1/3, 1/3, 1/3, 0],
[0, 0, 1/3, 1/3, 1/3],
[1/3, 0, 0, 1/3, 1/3] ] )
# Star topology. center = node 1
A_star = np.array([
[1/5, 1/5, 1/5, 1/5, 1/5],
[1/2, 1/2, 0, 0, 0],
[1/2, 0, 1/2, 0, 0],
[1/2, 0, 0, 1/2, 0],
[1/2, 0, 0, 0, 1/2] ])
# Defining simulation time
ts = 15
# Defining graphs for plotting later
n = 5
G_star = nx.star_graph(n-1)
pos_star = {0:[0.5,0.8], 1:[0.2,0.6],2:[.4,.2],3:[.6,.2],4:[.8,.6]}
G_cycle = nx.cycle_graph(n)
pos_cycle = {0:[0.5,0.8], 1:[0.35,0.6],2:[.4,.3],3:[.6,.3],4:[.65,.6]}
G_complete = nx.complete_graph(n)
pos_complete = pos_cycle.copy()
# Simulating and saving each network
states_complete = lib.simulate_network(A_complete,xinitial, ts)
states_star = lib.simulate_network(A_star,xinitial, ts)
states_cycle = lib.simulate_network(A_cycle,xinitial, ts)
###Output
_____no_output_____
###Markdown
**Complete graph**Showing complete graph interactive simulation and Theorem 2.13
###Code
fig, ax2181 = plt.subplots(figsize=custom_figsize)
# If this cell is executed twice we are making sure in the following, that the previous widget instances are all closed
try:
[c.close() for c in widget2181.children] # Note: close_all() does also affect plot, thus list compr.
except NameError: # Only want to except not defined variable error
pass
widget2181 = lib.interactive_network_plot(G_complete, states_complete, pos_complete, ts, fig, ax2181)
display(widget2181)
# Verifying the results
eigval, eigvec = np.linalg.eig(A_complete.transpose())
idx_dom = np.argmax(eigval)
dom_eigvec = eigvec[0:5,idx_dom]/eigvec[0:5,idx_dom].sum()
print("Showing Theorem 2.13 for the complete graph")
print("Dominant eigenvector: \n", dom_eigvec)
print("Final values : \n", xinitial@dom_eigvec*np.ones(5))
###Output
_____no_output_____
###Markdown
**Star graph**Showing star graph interactive simulation and Theorem 2.13
###Code
fig, ax2182 = plt.subplots(figsize=custom_figsize)
# If this cell is executed twice we are making sure in the following, that the previous widget instances are all closed
try:
[c.close() for c in widget2182.children] # Note: close_all() does also affect plot, thus list compr.
except NameError: # Only want to except not defined variable error
pass
widget2182 = lib.interactive_network_plot(G_star, states_star, pos_star, ts, fig, ax2182)
display(widget2182)
# Verifying the results
eigval, eigvec = np.linalg.eig(A_star.transpose() )
idx_dom = np.argmax(eigval)
dom_eigvec = eigvec[0:5,idx_dom]/eigvec[0:5,idx_dom].sum()
print("Showing Theorem 2.13 for the star graph")
print("Dominant eigenvector: \n", dom_eigvec)
print("Final values : \n", xinitial@dom_eigvec*np.ones(5))
###Output
_____no_output_____
###Markdown
**Cycle graph**Showing cycle graph interactive simulation and Theorem 2.13
###Code
fig, ax2183 = plt.subplots(figsize=custom_figsize)
# If this cell is executed twice we are making sure in the following, that the previous widget instances are all closed
try:
[c.close() for c in widget2183.children] # Note: close_all() does also affect plot, thus list compr.
except NameError: # Only want to except not defined variable error
pass
widget2183 = lib.interactive_network_plot(G_cycle, states_cycle, pos_cycle, ts, fig, ax2183)
display(widget2183)
# Verifying the results
eigval, eigvec = np.linalg.eig(A_cycle.transpose())
idx_dom = np.argmax(eigval)
dom_eigvec = eigvec[0:5,idx_dom]/eigvec[0:5,idx_dom].sum()
print("Showing Theorem 2.13 for the cycle graph")
print("Dominant eigenvector: \n", dom_eigvec)
print("Final values : \n", xinitial@dom_eigvec*np.ones(5))
###Output
_____no_output_____
###Markdown
Exercises 2.19This exercise is about $n$ robots moving on the line trying to gather at a common location (i.e., reach rendezvous), where each robot heads for the centroid of its neighbors. The visualization of the algorithm deals with the discrete part of the task, where one can explore values of the sampling period $T$ for the Euler discretization.
###Code
# Setup - change these parameters if wanted
n_robots = 8
# Number of timesteps and sampling period T (If T is too small, need very high n_dt)
n_dt = 25
T = 0.3 # Play around with this value, something interesting is happening around (2(n_robots-1)/n_robots)
#T = 2*(n_robots-1)/n_robots
# Set up initial position matrix and further saving variables
current_positions = 2*np.random.random((n_robots,1))-1
new_position = current_positions.copy()
all_positions = np.zeros((n_dt, n_robots, 1))
all_positions[0] = current_positions.copy()
for tt in range(1, n_dt):
for index, own_pos in np.ndenumerate(current_positions):
new_position[index] = own_pos + T*(1/(n_robots-1)*(np.sum(current_positions)-own_pos) - own_pos)
all_positions[tt] = new_position.copy()
current_positions = new_position.copy()
fig, ax219 = plt.subplots(figsize=custom_figsize)
# Set colors of robots for tracking
all_colors = np.random.rand(n_robots,3)
def plot_robot_pos(ax, pos):
# Set xlim, ylim and aspect ratio
ax.set_xlim(-1.5, 1.5)
ax.set_ylim(-0.5, 0.5)
ax.set_aspect('equal')
# Add horizontal line
ax.axhline(y=0.0, color='k', linestyle='-')
for i in range(0, pos.shape[0]):
# Add a robot as circle
bug = mpl.patches.Circle((pos[i], 0), radius=0.06, ec='black', color=all_colors[i])
ax.add_patch(bug)
def interactive_robots(timestep):
ax219.clear()
plot_robot_pos(ax219, all_positions[timestep['new'], :]) # Take the new value received from the slider dict
return None
# Plot initial configuration
plot_robot_pos(ax219, all_positions[0, :])
# Widget
# If this cell is executed twice we are making sure in the following, that the previous widget instances are all closed
try:
[c.close() for c in widget219.children] # Note: close_all() does also affect plot, thus list compr.
except NameError: # Only want to except not defined variable error
pass
widget219 = lib.create_widgets_play_slider(fnc=interactive_robots, minv=0, maxv=n_dt-1, step=1, play_speed=500)
display(widget219)
###Output
_____no_output_____
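###Markdown
The per-robot update loop above is a linear iteration and can equivalently be written in matrix form as $x(k+1) = \bigl(I + T\,A_{robot}\bigr)\,x(k)$, where $A_{robot}$ is the matrix used in the next cell. A small sketch checking one step against the simulation:
###Code
# Matrix form of the Euler update: x(k+1) = (I + T * A_robot) x(k)
A_robot_check = 1/(n_robots-1) * np.ones((n_robots, n_robots)) - n_robots/(n_robots-1)*np.identity(n_robots)
one_step = (np.identity(n_robots) + T * A_robot_check) @ all_positions[0]
print(np.allclose(one_step, all_positions[1]))  # expect True
###Output
_____no_output_____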
###Markdown
We can even compute the convergence solution in advance. Please refer to and solve the given exercise first to understand the following calculations.
###Code
A_robot = 1/(n_robots-1) * np.ones((n_robots, n_robots)) - n_robots/(n_robots-1)*np.identity(n_robots)
eigval, eigvec = np.linalg.eig(A_robot)
idx = np.argmin(abs(eigval))
z_eigvec = eigvec[:,idx]/np.sqrt(np.sum(eigvec[:,idx]**2))
final_values = z_eigvec[None, :] @ all_positions[0] @ z_eigvec[None, :]
print("Final values :", final_values)
###Output
_____no_output_____ |
Internship mini project 1 to 7/Task#6 - Cross-Validation_Grid_Search_with_Random_Forest_Preet_Mehta.ipynb | ###Markdown
**Run the following two cells before you begin.**
###Code
%autosave 10
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('cleaned_data.csv')
###Output
_____no_output_____
###Markdown
**Run the following 3 cells to create a list of features, create a train/test split, and instantiate a random forest classifier.**
###Code
features_response = df.columns.tolist()
items_to_remove = ['ID', 'SEX', 'PAY_2', 'PAY_3', 'PAY_4', 'PAY_5', 'PAY_6',
'EDUCATION_CAT', 'graduate school', 'high school', 'none',
'others', 'university']
features_response = [item for item in features_response if item not in items_to_remove]
features_response
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
df[features_response[:-1]].values,
df['default payment next month'].values,
test_size=0.2, random_state=24
)
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(
n_estimators=10, criterion='gini', max_depth=3,
min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0,
max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=None,
random_state=4, verbose=0, warm_start=False, class_weight=None
)
###Output
_____no_output_____
###Markdown
**Create a dictionary representing the grid for the `max_depth` and `n_estimators` hyperparameters that will be searched. Include depths of 3, 6, 9, and 12, and 10, 50, 100, and 200 trees.**
###Code
params = {
'max_depth':[3,6,9,12],
'n_estimators':[10,50,100,200]
}
###Output
_____no_output_____
###Markdown
________________________________________________________________**Instantiate a `GridSearchCV` object using the same options that we have used previously in this course, but with the dictionary of hyperparameters created above. Set `verbose=2` to see the output for each fit performed.**
###Code
from sklearn.model_selection import GridSearchCV
grid_cv = GridSearchCV(rf, param_grid=params, scoring='roc_auc',
n_jobs=None, iid=False, refit=True, cv=4, verbose=2,
pre_dispatch=None, error_score=np.nan, return_train_score=True)
###Output
_____no_output_____
###Markdown
____________________________________________________**Fit the `GridSearchCV` object on the training data.**
###Code
grid_cv.fit(X_train, y_train)
###Output
Fitting 4 folds for each of 16 candidates, totalling 64 fits
[CV] max_depth=3, n_estimators=10 ....................................
###Markdown
___________________________________________________________**Put the results of the grid search in a pandas DataFrame.**
###Code
grid_cv_results_df = pd.DataFrame(grid_cv.cv_results_)
grid_cv_results_df
grid_cv_results_df.columns
ax = plt.axes()
ax.errorbar(grid_cv_results_df['param_max_depth'],
grid_cv_results_df['mean_train_score'],
yerr=grid_cv_results_df['std_train_score'],
label='Mean $\pm$ 1 SD training scores')
ax.errorbar(grid_cv_results_df['param_max_depth'],
grid_cv_results_df['mean_test_score'],
yerr=grid_cv_results_df['std_test_score'],
label='Mean $\pm$ 1 SD testing scores')
ax.legend()
plt.xlabel('max_depth')
plt.ylabel('ROC AUC')
plt.show()
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(6, 3))
axs[0].plot(grid_cv_results_df['param_n_estimators'],
grid_cv_results_df['mean_fit_time'],
'-o')
axs[0].set_xlabel('Number of trees')
axs[0].set_ylabel('Mean fit time (seconds)')
axs[1].errorbar(grid_cv_results_df['param_n_estimators'],
grid_cv_results_df['mean_test_score'],
yerr=grid_cv_results_df['std_test_score'])
axs[1].set_xlabel('Number of trees')
axs[1].set_ylabel('Mean testing ROC AUC $\pm$ 1 SD ')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
**Find the best hyperparameters from the cross-validation.**
###Code
grid_cv.best_params_
###Output
_____no_output_____
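###Markdown
Since `refit=True`, `grid_cv.best_estimator_` has already been refit on the full training set; a brief sketch of evaluating it on the held-out test set:
###Code
from sklearn.metrics import roc_auc_score
# Out-of-sample ROC AUC of the best model found by the grid search
test_probs = grid_cv.best_estimator_.predict_proba(X_test)[:, 1]
print(roc_auc_score(y_test, test_probs))
###Output
_____no_output_____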
###Markdown
________________________________________________________________________________________________________**Create a `pcolormesh` visualization of the mean testing score for each combination of hyperparameters.** Hint: Remember to reshape the values of the mean testing scores to be a two-dimensional 4x4 grid.
###Code
# Create a 5x5 grid
x_coordi, y_coordi = np.meshgrid(range(5), range(5))
z_coordi = grid_cv_results_df.mean_test_score.values.reshape(4,4)
print(x_coordi)
print(y_coordi)
print(z_coordi)
# Set color map to `plt.cm.jet`
color_map = plt.cm.jet
# Visualize pcolormesh
ax = plt.axes()
pcolor_ex = ax.pcolormesh(x_coordi, y_coordi, z_coordi, cmap = color_map)
plt.colorbar(pcolor_ex, label='Color scale')
ax.set_xlabel('X coordinate')
ax.set_ylabel('Y coordinate')
plt.show()
###Output
_____no_output_____
###Markdown
________________________________________________________________________________________________________**Conclude which set of hyperparameters to use.**
###Code
# Create a dataframe of the feature names and importance
feature_importance_df = pd.DataFrame({
'Feature Name': features_response[:-1],
'Importance': grid_cv.best_estimator_.feature_importances_
})
# Sort values by importance
feature_importance_df.sort_values("Importance", ascending = False)
###Output
_____no_output_____ |
Assignments/07.2 Assignment - Linear Equations.ipynb | ###Markdown
Fill in any place that says ` YOUR CODE HERE` or YOUR ANSWER HERE, as well as your name and collaborators below.Grading for pre-lecture assignments is all or nothing. Partial credit is available for in-class assignments and checkpoints, but **only when code is commented**.
###Code
NAME = ""
COLLABORATORS = ""
###Output
_____no_output_____
###Markdown
---
###Code
import grading_helper as _test
# Space for imports, utility functions, etc.
# YOUR CODE HERE
###Output
_____no_output_____
###Markdown
Circuit Network 1(Adapted from textbook exercise 6.5)The voltage $V_+$ is time-varying and sinusoidal of the form $V_+ = x_+e^{\mathrm{i}\omega t}$ with $x_+$ a constant. The resistors in the circuit can be treated using Ohm's law as usual. For the capacitors the charge $Q$ and voltage $V$ across them are related by the capacitor law $Q=CV$, where $C$ is the capacitance. Differentiating both sides of this expression gives the current $I$ flowing in on one side of the capacitor and out on the other:$$I = \frac{dQ}{dt} = C \frac{dV}{dt}\,.$$Assuming the voltages at the points labeled 1, 2, and 3 are of the form $V_1 = x_1 e^{\mathrm{i}\omega t}$, $V_2 = x_2 e^{\mathrm{i}\omega t}$, and $V_3 = x_3 e^{\mathrm{i}\omega t}$, we can apply Kirchhoff's law at each of the three points, along with Ohm's law and the capacitor law, to find that the constants $x_1$,$x_2$, and $x_3$ satisfy the equations$$\begin{align*}\biggl( {1\over R_1} + {1\over R_4} + \mathrm{i}\omega C_1 \biggr) x_1 - \mathrm{i}\omega C_1 x_2 &= {x_+\over R_1}\,, \\- \mathrm{i}\omega C_1 x_1+ \biggl( {1\over R_2} + {1\over R_5} + \mathrm{i}\omega C_1 + \mathrm{i}\omega C_2 \biggr) x_2 - \mathrm{i}\omega C_2 x_3 &= {x_+\over R_2}\,, \\- \mathrm{i}\omega C_2 x_2+ \biggl( {1\over R_3} + {1\over R_6} + \mathrm{i}\omega C_2 \biggr) x_3 &= {x_+\over R_3}\,.\end{align*}$$Write a program to solve for $x_1$, $x_2$, and $x_3$ when$$\begin{align*}R_1 &= R_3 = R_5 = 1\,\mathrm{k}\Omega\,, \\R_2 &= R_4 = R_6 = 2\,\mathrm{k}\Omega\,, \\C_1 &= 1\,\mu\mathrm{F},\qquad C_2 = 0.5\,\mu\mathrm{F}\,, \\x_+ &= 3\,\mathrm{V},\qquad \omega = 1000\,\mathrm{s}^{-1}\,.\end{align*}$$Store your result in variables named `x1`, `x2`, and `x3`.> Notice that the matrix for this problem has complex elements, meaning your solutions may be complex. You will need to define a complex array to hold your values, but you can still use `scipy.linalg.solve` to solve the equations - it works with either real or complex arguments.
###Code
%%graded # 7 points
# YOUR CODE HERE
%%tests
_test.similar(x1, 1.694-0.162j)
_test.similar(x2, 1.450+0.297j)
###Output
_____no_output_____
###Markdown
Use your solution to plot the real parts of $V_+(t)$, $V_1(t)$, $V_2(t)$, and $V_3(t)$ on the same graph. Label each of these voltages.Your graph should look similar to this (notice that the phases differ):
###Code
%%graded # 3 points
# YOUR CODE HERE
###Output
_____no_output_____ |
notebooks/VMHNeurons/kimetal_tenx_predictions.ipynb | ###Markdown
###Code
import requests
import os
from tqdm import tnrange, tqdm_notebook
def download_file(doi,ext):
url = 'https://api.datacite.org/dois/'+doi+'/media'
r = requests.get(url).json()
netcdf_url = r['data'][0]['attributes']['url']
r = requests.get(netcdf_url,stream=True)
#Set file name
fname = doi.split('/')[-1]+ext
#Download file with progress bar
if r.status_code == 403:
print("File Unavailable")
if 'content-length' not in r.headers:
print("Did not get file")
else:
with open(fname, 'wb') as f:
total_length = int(r.headers.get('content-length'))
pbar = tnrange(int(total_length/1024), unit="B")
for chunk in r.iter_content(chunk_size=1024):
if chunk:
pbar.update()
f.write(chunk)
return fname
#10x VMH data
#metadata.csv
download_file('10.22002/D1.2065','.gz')
#tenx.mtx (log counts)
download_file('10.22002/D1.2072','.gz')
#10X raw Count Matrix
download_file('10.22002/D1.2073','.gz')
#var.csv
download_file('10.22002/D1.2066','.gz')
os.system("gunzip *.gz")
os.system("mv D1.2065 metadata.csv")
os.system("mv D1.2072 tenx.mtx")
os.system("mv D1.2073 tenxCount.mtx")
os.system("mv D1.2066 tenx_var.csv")
%cd /content
!git clone https://github.com/hhcho/densvis.git
%cd /content/densvis/densne/
!g++ sptree.cpp densne.cpp densne_main.cpp -o den_sne -O2
import densne
%cd /content/
!git clone https://github.com/pachterlab/CBP_2021.git
!pip3 install --quiet torch
!pip3 install --quiet anndata
!pip3 install --quiet matplotlib
!pip3 install --quiet scikit-learn
!pip3 install --quiet torchsummary
!pip install --quiet scanpy==1.6.0
!pip3 install --quiet umap-learn
!pip3 install --quiet scvi-tools
%cd /content/CBP_2021/scripts
###Output
/content/CBP_2021/scripts
###Markdown
**Install Packages**
###Code
import anndata
import pandas as pd
import numpy as np
from MCML import MCML #Now has continuous label addition
from Picasso import Picasso
import tools as tl
import random
import scvi
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
from sklearn.neighbors import NeighborhoodComponentsAnalysis, NearestNeighbors
from sklearn.metrics import pairwise_distances
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import scale
import torch
import time
import scanpy as sc
import seaborn as sns
import umap
from scipy import stats
import scipy.io as sio
sns.set_style('white')
###Output
_____no_output_____
###Markdown
**Import Data**
###Code
plt.rcParams["font.family"] = "sans-serif"
plt.rcParams['axes.linewidth'] = 0.1
state = 42
ndims = 2
data_path = '../..'
pcs = 50
n_latent = 50
count_mat = sio.mmread(data_path+'/tenx.mtx')
count_mat.shape
rawcount_mat = sio.mmread(data_path+'/tenxCount.mtx')
rawcount_mat.shape
#Center and scale log-normalized data
scaled_mat = scale(count_mat)
meta = pd.read_csv(data_path+'/metadata.csv',index_col = 0)
meta.head()
clusters = np.unique(meta['cluster'].values)
map_dict = {}
for i, c in enumerate(clusters):
map_dict[c] = i
new_labs = [map_dict[c] for c in meta['cluster'].values]
adata = anndata.AnnData(count_mat, obs = meta)
adata.X = np.nan_to_num(adata.X)
adata2 = anndata.AnnData(rawcount_mat, obs = meta)
adata2.X = np.nan_to_num(adata2.X)
def knn_infer(embd_space, labeled_idx, labeled_lab, unlabeled_idx,n_neighbors=50):
"""
Predicts the labels of unlabeled data in the embedded space with KNN.
Parameters
----------
embd_space : ndarray (n_samples, embedding_dim)
Each sample is described by the features in the embedded space.
Contains all samples, both labeled and unlabeled.
labeled_idx : list
Indices of the labeled samples (used for training the classifier).
labeled_lab : ndarray (n_labeled_samples)
Labels of the labeled samples.
unlabeled_idx : list
Indices of the unlabeled samples.
Returns
-------
pred_lab : ndarray (n_unlabeled_samples)
Inferred labels of the unlabeled samples.
"""
# obtain labeled data and unlabled data from indices
labeled_samp = embd_space[labeled_idx, :]
unlabeled_samp = embd_space[unlabeled_idx, :]
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=n_neighbors)
knn.fit(labeled_samp, labeled_lab)
pred_lab = knn.predict(unlabeled_samp)
return pred_lab
def getJac(orig_indices,latents, latentLab, n_neighbors=30):
emb = []
xs = []
ys = []
knnDF = pd.DataFrame()
for p in range(len(latents)):
i = latents[p]
l = latentLab[p]
ind = tl.getNeighbors(i, n_neigh = n_neighbors,p=1)
x = tl.getJaccard(orig_indices,ind)
xs += x
#ys += list(y)
emb += [l]*len(x)
print(l)
print(np.mean(tl.getJaccard(orig_indices,ind)))
knnDF['x'] = xs
#knnDF['y'] = ys
knnDF['latent'] = emb
return knnDF
###Output
_____no_output_____
###Markdown
**Prediction Accuracy for Cell Type Labels Across Benchmarks**Tests accuracy on sex labels as well. Set up metadata for MCML
###Code
lab1 = list(meta.cluster)
lab2 = list(meta.sex_label)
# lab3 = list(meta.medical_cond_label)
lab4 = list(meta.cluster)
allLabs = np.array([lab1])
allLabs2 = np.array([lab1,lab2])
nanLabs = np.array([[np.nan]*len(lab1)])
#Shuffled labels for over-fitting check
shuff_lab1 = random.sample(lab1, len(lab1))
shuff_lab2 = random.sample(lab2, len(lab2))
shuff_allLabs = np.array([shuff_lab1,shuff_lab2])
clus_colors = list(pd.unique(meta.cluster_color))
sex_colors = ['#abacb7','#F8C471']
###Output
_____no_output_____
###Markdown
First test 2D space predictions (t-SNE, UMAP, UMAP-Supervised)
###Code
ndims = 2
acc_score_2D = []
for i in range(3):
reducer = umap.UMAP(n_components = ndims)
tsne = TSNE(n_components = ndims)
tsvd = TruncatedSVD(n_components=pcs)
x_pca = tsvd.fit_transform(scaled_mat)
pcaUMAP = reducer.fit_transform(x_pca)
pcaTSNE = tsne.fit_transform(x_pca)
#Partially labeled UMAP
labels = np.array([lab4]).copy().astype(np.int8)
train_inds = np.random.choice(len(scaled_mat), size = int(0.7*len(scaled_mat)),replace=False) #0.7 for training fraction
#Set 30% to no label (nan)
unlab_inds = [i for i in range(len(adata)) if i not in train_inds]
labels[:, unlab_inds] = -1
pcaUMAPLab = reducer.fit_transform(x_pca,y=labels[0])
preds = knn_infer(pcaUMAPLab, train_inds, adata.obs.cluster_id.values[train_inds], unlab_inds)
acc = accuracy_score(adata.obs.cluster_id.values[unlab_inds], preds)
acc_score_2D.append(acc)
preds = knn_infer(pcaUMAP, train_inds, adata.obs.cluster_id.values[train_inds], unlab_inds)
acc = accuracy_score(adata.obs.cluster_id.values[unlab_inds], preds)
acc_score_2D.append(acc)
preds = knn_infer(pcaTSNE, train_inds, adata.obs.cluster_id.values[train_inds], unlab_inds)
acc = accuracy_score(adata.obs.cluster_id.values[unlab_inds], preds)
acc_score_2D.append(acc)
print(acc_score_2D)
# LDVAE accuracy scores
from tqdm import tqdm # Need to initialize to stop tqdm errors
tqdm(disable=True, total=0)
scvi.data.setup_anndata(adata2, labels_key='cluster_id')
acc_score = []
acc_score2 = []
for i in range(3): #3
vae = scvi.model.LinearSCVI(adata2,n_latent=n_latent)
vae.train(train_size = 0.7) #train_size = 0.7
latent_ldvae = vae.get_latent_representation()
lab_idx = vae.train_indices
unlabeled_idx = []
for i in range(len(adata2)):
if i not in lab_idx:
unlabeled_idx.append(i)
preds = knn_infer(np.array(latent_ldvae), list(lab_idx), adata2.obs.cluster_id.values[lab_idx], unlabeled_idx)
acc = accuracy_score(adata2.obs.cluster_id.values[unlabeled_idx], preds)
acc_score.append(acc)
preds2 = knn_infer(np.array(latent_ldvae), list(lab_idx), adata2.obs.sex_label.values[lab_idx], unlabeled_idx)
acc2 = accuracy_score(adata2.obs.sex_label.values[unlabeled_idx], preds2)
acc_score2.append(acc2)
acc_score
# acc_score
# SCANVI accuracy scores
scvi.data.setup_anndata(adata2, labels_key='cluster_id')
acc_score_scanvi = []
acc_score_scanvi2 = []
for i in range(3):
vae = scvi.model.SCANVI(adata2, np.nan,n_latent=n_latent)
vae.train(train_size = 0.7)
latent_scanvi = vae.get_latent_representation()
lab_idx = vae.train_indices
unlabeled_idx = []
for i in range(len(adata2)):
if i not in lab_idx:
unlabeled_idx.append(i)
preds = knn_infer(np.array(latent_scanvi), list(lab_idx), adata2.obs.cluster_id.values[lab_idx], unlabeled_idx)
acc = accuracy_score(adata2.obs.cluster_id.values[unlabeled_idx], preds)
acc_score_scanvi.append(acc)
preds2 = knn_infer(np.array(latent_scanvi), list(lab_idx), adata2.obs.sex_label.values[lab_idx], unlabeled_idx)
acc2 = accuracy_score(adata2.obs.sex_label.values[unlabeled_idx], preds2)
acc_score_scanvi2.append(acc2)
print(acc_score_scanvi)
print(acc_score_scanvi2)
# print(acc_score_scanvi)
# print(acc_score_scanvi2)
###Output
[0.7808241141574475, 0.783629950296617, 0.7790604457271124]
[0.9048420715087382, 0.8896103896103896, 0.9135000801667469]
###Markdown
Check train/test overfitting (with cell type labels)
###Code
# nca.fit(scaled_mat,labels,fracNCA = 0.25, silent = True,ret_loss = True) Parameters used for prediction
nca = MCML(n_latent = n_latent, epochs = 100)
lossesTrain, lossesTest = nca.trainTest(scaled_mat,np.array([lab1]), fracNCA = 0.25, silent = True)
nca.plotLosses(figsize=(10,3),fname='tenxTrainTest.pdf',axisFontSize=10,tickFontSize=8)
# Reconstruction loss only
acc_scoreR = []
acc_scoreR2 = []
for i in range(3):
ncaR = MCML(n_latent = n_latent, epochs = 100)
labels = np.array([lab1])
train_inds = np.random.choice(len(scaled_mat), size = int(0.7*len(scaled_mat)),replace=False)
unlab_inds = [i for i in range(len(adata)) if i not in train_inds]
labels[:, unlab_inds] = np.nan
lossesR, latentR = ncaR.fit(scaled_mat,nanLabs,fracNCA = 0, silent = True,ret_loss = True)
toc = time.perf_counter()
unlabeled_idx = []
for i in range(len(adata)):
if i not in train_inds:
unlabeled_idx.append(i)
preds = knn_infer(latentR, train_inds, adata.obs.cluster.values[train_inds], unlabeled_idx)
acc = accuracy_score(adata.obs.cluster.values[unlabeled_idx], preds)
acc_scoreR.append(acc)
preds2 = knn_infer(latentR, train_inds, adata.obs.sex_label.values[train_inds], unlabeled_idx)
acc2 = accuracy_score(adata.obs.sex_label.values[unlabeled_idx], preds2)
acc_scoreR2.append(acc2)
# print(f"nnNCA fit in {toc - tic:0.4f} seconds")
print(acc_scoreR)
print(acc_scoreR2)
###Output
[0.8298997995991984, 0.8358316633266533, 0.8317434869739478]
[0.9102204408817636, 0.9123847695390782, 0.9093386773547094]
###Markdown
PCA 50D
###Code
acc_scorePCA = []
acc_scorePCA2 = []
for i in range(3):
tsvd = TruncatedSVD(n_components=pcs)
x_pca = tsvd.fit_transform(scaled_mat)
labels = np.array([lab1])
train_inds = np.random.choice(len(scaled_mat), size = int(0.7*len(scaled_mat)),replace=False)
unlab_inds = [i for i in range(len(adata)) if i not in train_inds]
labels[:, unlab_inds] = np.nan
unlabeled_idx = []
for i in range(len(adata)):
if i not in train_inds:
unlabeled_idx.append(i)
preds = knn_infer(x_pca, train_inds, adata.obs.cluster.values[train_inds], unlabeled_idx)
acc = accuracy_score(adata.obs.cluster.values[unlabeled_idx], preds)
acc_scorePCA.append(acc)
preds2 = knn_infer(x_pca, train_inds, adata.obs.sex_label.values[train_inds], unlabeled_idx)
acc2 = accuracy_score(adata.obs.sex_label.values[unlabeled_idx], preds2)
acc_scorePCA2.append(acc2)
# print(f"nnNCA fit in {toc - tic:0.4f} seconds")
print(acc_scorePCA)
print(acc_scorePCA2)
###Output
[0.816192384769539, 0.8143486973947895, 0.8129058116232465]
[0.9044488977955912, 0.9079759519038076, 0.9068537074148296]
###Markdown
NCA Below
###Code
# NCA loss only
acc_scoreNCA = []
acc_scoreNCA2 = []
acc_scoreNCA3 = []
for i in range(1): #3
nca = MCML(n_latent = n_latent, epochs = 100)
ncaR2 = MCML(n_latent = n_latent, epochs = 100)
labels = np.array([lab1]).copy()
train_inds = np.random.choice(len(scaled_mat), size = int(0.7*len(scaled_mat)),replace=False) #0.7
unlab_inds = [i for i in range(len(adata)) if i not in train_inds]
labels[:, unlab_inds] = np.nan
#2 labels
labels2 = allLabs2.copy()
labels2[:, unlab_inds] = np.nan
losses, latent = nca.fit(scaled_mat,labels,fracNCA = 1, silent = True,ret_loss = True)
losses2, latent2 = ncaR2.fit(scaled_mat,labels2,fracNCA = 1, silent = True,ret_loss = True)
toc = time.perf_counter()
unlabeled_idx = []
for i in range(len(adata)):
if i not in train_inds:
unlabeled_idx.append(i)
preds = knn_infer(latent, train_inds, adata.obs.cluster.values[train_inds], unlabeled_idx)
acc = accuracy_score(adata.obs.cluster.values[unlabeled_idx], preds)
acc_scoreNCA.append(acc)
preds2 = knn_infer(latent2, train_inds, adata.obs.cluster.values[train_inds], unlabeled_idx)
acc2 = accuracy_score(adata.obs.cluster.values[unlabeled_idx], preds2)
acc_scoreNCA2.append(acc2)
preds2 = knn_infer(latent2, train_inds, adata.obs.sex_label.values[train_inds], unlabeled_idx)
acc2 = accuracy_score(adata.obs.sex_label.values[unlabeled_idx], preds2)
acc_scoreNCA3.append(acc2)
# print(f"nnNCA fit in {toc - tic:0.4f} seconds")
print(acc_scoreNCA)
print(acc_scoreNCA2)
print(acc_scoreNCA3)
# fracNCA = 0.25
acc_scoreBoth = []
acc_scoreBoth2 = []
acc_scoreBoth3 = []
for i in range(3): #3
nca = MCML(n_latent = n_latent, epochs = 100)
ncaR2 = MCML(n_latent = n_latent, epochs = 100)
labels = np.array([lab1]).copy()
train_inds = np.random.choice(len(scaled_mat), size = int(0.7*len(scaled_mat)),replace=False)
unlab_inds = [i for i in range(len(adata)) if i not in train_inds]
labels[:, unlab_inds] = np.nan
#2 labels
labels2 = allLabs2.copy()
labels2[:, unlab_inds] = np.nan
losses, latent = nca.fit(scaled_mat,labels,fracNCA = 0.25, silent = True,ret_loss = True)
losses2, latent2 = ncaR2.fit(scaled_mat,labels2,fracNCA = 0.25, silent = True,ret_loss = True)
toc = time.perf_counter()
unlabeled_idx = []
for i in range(len(adata)):
if i not in train_inds:
unlabeled_idx.append(i)
preds = knn_infer(latent, train_inds, adata.obs.cluster.values[train_inds], unlabeled_idx)
acc = accuracy_score(adata.obs.cluster.values[unlabeled_idx], preds)
acc_scoreBoth.append(acc)
preds2 = knn_infer(latent2, train_inds, adata.obs.cluster.values[train_inds], unlabeled_idx)
acc2 = accuracy_score(adata.obs.cluster.values[unlabeled_idx], preds2)
acc_scoreBoth2.append(acc2)
preds2 = knn_infer(latent2, train_inds, adata.obs.sex_label.values[train_inds], unlabeled_idx)
acc2 = accuracy_score(adata.obs.sex_label.values[unlabeled_idx], preds2)
acc_scoreBoth3.append(acc2)
# # print(f"nnNCA fit in {toc - tic:0.4f} seconds")
print(acc_scoreBoth)
print(acc_scoreBoth2)
print(acc_scoreBoth3)
fig, axs = plt.subplots(1, losses.shape[1],figsize=(8,4))
for i in range(losses.shape[1]):
axs[i].plot(losses[:,i],label=str(i))
plt.legend()
plt.show()
fig, axs = plt.subplots(1, losses2.shape[1],figsize=(8,4))
for i in range(losses2.shape[1]):
axs[i].plot(losses2[:,i],label=str(i))
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Save Analysis Output**
###Code
vals = pd.DataFrame()
vals['Accuracy'] = (acc_score + acc_score_scanvi + acc_scoreR + acc_scoreNCA + acc_scoreBoth + acc_score2 + acc_score_scanvi2 + acc_scoreR2 + acc_scoreNCA3 + acc_scoreBoth3 + acc_scoreNCA2 + acc_scoreBoth2 +
acc_scorePCA + acc_scorePCA2)  #+ netAE_score + netAE_score2
vals['Embed'] = (['LDVAE']*3 + ['SCANVI']*3 + ['Recon MCML']*3 + ['NCA 100% MCML']*1 + ['NCA-Recon MCML']*3 +['LDVAE']*3 + ['SCANVI']*3 + ['Recon MCML']*3 + ['NCA 100% MCML']*1 + ['NCA-Recon MCML']*3 +
['NCA 100% MCML']*1 + ['NCA-Recon MCML']*3 + ['PCA 50D']*3 + ['PCA 50D']*3)  #+ ['netAE']*2
vals['Label'] = ['CellType1']*13 + ['Gender2']*13 + ['CellType2']*4 + ['CellType1']*3 + ['Gender2']*3#+ ['CellType1'] #+ ['Gender2']
vals
from google.colab import files
vals.to_csv('all10XPreds.csv')
files.download('all10XPreds.csv')
###Output
_____no_output_____
###Markdown
**MCML Prediction Accuracy With Lower Percentages of Labeled Data**
###Code
# fracNCA = 0.25
acc_scoreBoth = []
percs = [0.7,0.6,0.5,0.4,0.3,0.2,0.1]
for p in percs:
nca = MCML(n_latent = n_latent, epochs = 100)
ncaR2 = MCML(n_latent = n_latent, epochs = 100)
labels = np.array([lab1])
train_inds = np.random.choice(len(scaled_mat), size = int((p)*len(scaled_mat)),replace=False)
unlab_inds = [i for i in range(len(adata)) if i not in train_inds]
labels[:, unlab_inds] = np.nan
#2 labels
labels2 = allLabs2
labels2[:, unlab_inds] = np.nan
losses, latent = nca.fit(scaled_mat,labels,fracNCA = 0.25, silent = True,ret_loss = True)
toc = time.perf_counter()
unlabeled_idx = []
for i in range(len(adata)):
if i not in train_inds:
unlabeled_idx.append(i)
preds = knn_infer(latent, train_inds, adata.obs.cluster.values[train_inds], unlabeled_idx)
acc = accuracy_score(adata.obs.cluster.values[unlabeled_idx], preds)
acc_scoreBoth.append(acc)
# print(f"nnNCA fit in {toc - tic:0.4f} seconds")
lowPercsSmartSeq = pd.DataFrame()
lowPercsSmartSeq['Accuracy'] = acc_scoreBoth
lowPercsSmartSeq['Percent'] = percs
lowPercsSmartSeq
from google.colab import files
lowPercsSmartSeq.to_csv('lowPercs10XPreds.csv')
files.download('lowPercs10XPreds.csv')
###Output
_____no_output_____
###Markdown
**Test KNN Jaccard**
###Code
rounds = 3
batch_size = 128 #scaled_mat.shape[0]#len(adata.obs_names)
#Make a (unit) circle
#r = 1
theta = np.linspace(0, 2*np.pi, batch_size)
# #Turkey guy
# x, y = (np.sin(2**theta) - 1.7) * np.cos(theta), (np.sin(2**theta) - 1.7) * np.sin(theta)
# #Butterfly r = 4*cos(4cosθ))
# x, y = (2*np.cos(4*np.cos(theta))) * np.cos(theta), (2*np.cos(4*np.cos(theta))) * np.sin(theta)
# # Spiral
#x, y = (1/2)*theta * np.cos(theta), (1/2)*theta * np.sin(theta)
# #Quasi-rose
x, y = (4 + np.cos(6*theta)) * np.cos(theta), (4 + np.cos(6*theta)) * np.sin(theta)
#Make array input for dimension of shape
coords = np.array([list(x),list(y)])
plt.plot(x,y)
#Ex utero
fl = []
flLab = []
flType = []
for i in range(rounds):
nca = Picasso(n_latent = 2, epochs = 500, batch_size = batch_size)
lossesB, latentB = nca.fit(scaled_mat,coords, frac = .3,silent=True,ret_loss=True) #.06 for ex utero
fl += [latentB]
flLab += ['Flower']
flType += ['MCML 2D']
###Output
_____no_output_____
###Markdown
Save KNN Jaccard Dists
###Code
orig_indices = tl.getNeighbors(count_mat, n_neigh = 30,p=1)
df = getJac(orig_indices,fl, flLab, 30)
df.to_csv('tenxPicFlAmb.csv')
from google.colab import files
files.download("tenxPicFlAmb.csv")
nca.plotLosses(figsize=(10,3),axisFontSize=10,tickFontSize=8)
# elephant parameters
batch_size = 128 #scaled_mat.shape[0]#3850 # 50
p1, p2, p3, p4 = (50 - 30j, 18 + 8j, 12 - 10j, -14 - 60j )
p5 = 40 + 20j # eyepiece
def fourier(t, C):
f = np.zeros(t.shape)
A, B = C.real, C.imag
for k in range(len(C)):
f = f + A[k]*np.cos(k*t) + B[k]*np.sin(k*t)
return f
def elephant(t, p1, p2, p3, p4, p5):
npar = 6
Cx = np.zeros((npar,), dtype='complex')
Cy = np.zeros((npar,), dtype='complex')
Cx[1] = p1.real*1j
Cx[2] = p2.real*1j
Cx[3] = p3.real
Cx[5] = p4.real
Cy[1] = p4.imag + p1.imag*1j
Cy[2] = p2.imag*1j
Cy[3] = p3.imag*1j
# x = np.append(fourier(t,Cx), [-p5.imag]) #[-p5.imag]
# y = np.append(fourier(t,Cy), [p5.imag]) #[p5.imag]
x = fourier(t,Cx)
y = fourier(t,Cy)
return x,y
x, y = elephant(np.linspace(0,2*np.pi,batch_size), p1, p2, p3, p4, p5)
#Make array input for dimension of shape
y = y#0.04*y #.025 .025 .025
x = x#0.02*x #.015 .02 .015
# y = 0.02*y
# x = 0.01*x
coords = np.array([list(y),list(-x)])
plt.plot(y,-x,'.',c='black')
plt.show()
#Test with task assignment in-utero
el = []
elLab = []
elType = []
for i in range(rounds):
nca = Picasso(n_latent = 2, epochs = 500, batch_size = batch_size)
lossesEl, latentEl = nca.fit(scaled_mat,coords, frac = 0.3,silent=True,ret_loss=True)
el += [latentEl]
elLab += ['Elephant']
elType += ['MCML 2D']
nca.plotLosses(figsize=(10,3),axisFontSize=10,tickFontSize=8)
###Output
_____no_output_____
###Markdown
Save KNN Jaccard Dists
###Code
orig_indices = tl.getNeighbors(count_mat, n_neigh = 30,p=1)
df = getJac(orig_indices,el, elLab, 30)
df.to_csv('tenxPicElAmb.csv')
from google.colab import files
files.download("tenxPicElAmb.csv")
###Output
_____no_output_____ |
Matplotlib/Advanced Matplotlib Concepts.ipynb | ###Markdown
Advanced Matplotlib Logarithmic scale It is also possible to set a logarithmic scale for one or both axes. This functionality is in fact only one application of a more general transformation system in Matplotlib. Each of the axes' scales is set separately using the `set_xscale` and `set_yscale` methods, which accept one parameter (with the value "log" in this case):
###Code
import matplotlib
import matplotlib.pyplot as plt
import numpy as np

# NOTE: `x` is used throughout this notebook but its definition is not included here;
# a linspace over [0, 5] is assumed so that the examples run standalone.
x = np.linspace(0, 5, 100)
fig, axes = plt.subplots(1, 2, figsize=(10,4))
axes[0].plot(x, x**2, x, np.exp(x))
axes[0].set_title("Normal scale")
axes[1].plot(x, x**2, x, np.exp(x))
axes[1].set_yscale("log")
axes[1].set_title("Logarithmic scale (y)");
###Output
_____no_output_____
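###Markdown
`set_xscale` and `set_yscale` also accept other scale names, e.g. `"symlog"`, which behaves logarithmically for large magnitudes but stays linear around zero (a brief sketch; the plotted data is made up):
###Code
fig, ax = plt.subplots(figsize=(5, 3))
ax.plot(x - 2, (x - 2)**3)   # data crossing zero
ax.set_yscale("symlog")
ax.set_title("symlog scale (y)");
###Output
_____no_output_____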
###Markdown
Placement of ticks and custom tick labels We can explicitly determine where we want the axis ticks with `set_xticks` and `set_yticks`, which both take a list of values for where on the axis the ticks are to be placed. We can also use the `set_xticklabels` and `set_yticklabels` methods to provide a list of custom text labels for each tick location:
###Code
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(x, x**2, x, x**3, lw=2)
ax.set_xticks([1, 2, 3, 4, 5])
ax.set_xticklabels([r'$\alpha$', r'$\beta$', r'$\gamma$', r'$\delta$', r'$\epsilon$'], fontsize=18)
yticks = [0, 50, 100, 150]
ax.set_yticks(yticks)
ax.set_yticklabels(["$%.1f$" % y for y in yticks], fontsize=18); # use LaTeX formatted labels
###Output
_____no_output_____
###Markdown
There are a number of more advanced methods for controlling major and minor tick placement in matplotlib figures, such as automatic placement according to different policies. See http://matplotlib.org/api/ticker_api.html for details (a brief locator sketch follows the next cell). Scientific notation With large numbers on axes, it is often better to use scientific notation:
###Code
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_title("scientific notation")
ax.set_yticks([0, 50, 100, 150])
from matplotlib import ticker
formatter = ticker.ScalarFormatter(useMathText=True)
formatter.set_scientific(True)
formatter.set_powerlimits((-1,1))
ax.yaxis.set_major_formatter(formatter)
###Output
_____no_output_____
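###Markdown
Returning to the tick-placement note above: the `matplotlib.ticker` module also provides locator classes for automatic major/minor tick placement. A brief sketch using `MultipleLocator`, `AutoMinorLocator` and `MaxNLocator`:
###Code
fig, ax = plt.subplots(figsize=(10, 4))
ax.plot(x, x**2, x, x**3, lw=2)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1.0))   # a major tick every 1.0 units
ax.xaxis.set_minor_locator(ticker.AutoMinorLocator(4))    # 4 minor intervals per major interval
ax.yaxis.set_major_locator(ticker.MaxNLocator(5))         # limit the number of major ticks
###Output
_____no_output_____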
###Markdown
Axis number and axis label spacing
###Code
# distance between x and y axis and the numbers on the axes
matplotlib.rcParams['xtick.major.pad'] = 5
matplotlib.rcParams['ytick.major.pad'] = 5
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_yticks([0, 50, 100, 150])
ax.set_title("label and axis spacing")
# padding between axis label and axis numbers
ax.xaxis.labelpad = 5
ax.yaxis.labelpad = 5
ax.set_xlabel("x")
ax.set_ylabel("y");
# restore defaults
matplotlib.rcParams['xtick.major.pad'] = 3
matplotlib.rcParams['ytick.major.pad'] = 3
###Output
_____no_output_____
###Markdown
Axis position adjustments Unfortunately, when saving figures the labels are sometimes clipped, and it can be necessary to adjust the positions of axes a little bit. This can be done using `subplots_adjust`:
###Code
fig, ax = plt.subplots(1, 1)
ax.plot(x, x**2, x, np.exp(x))
ax.set_yticks([0, 50, 100, 150])
ax.set_title("title")
ax.set_xlabel("x")
ax.set_ylabel("y")
fig.subplots_adjust(left=0.15, right=.9, bottom=0.1, top=0.9);
###Output
_____no_output_____
###Markdown
Axis grid With the `grid` method in the axis object, we can turn on and off grid lines. We can also customize the appearance of the grid lines using the same keyword arguments as the `plot` function:
###Code
fig, axes = plt.subplots(1, 2, figsize=(10,3))
# default grid appearance
axes[0].plot(x, x**2, x, x**3, lw=2)
axes[0].grid(True)
# custom grid appearance
axes[1].plot(x, x**2, x, x**3, lw=2)
axes[1].grid(color='b', alpha=0.5, linestyle='dashed', linewidth=0.5)
###Output
_____no_output_____
###Markdown
Axis spines We can also change the properties of axis spines:
###Code
fig, ax = plt.subplots(figsize=(6,2))
ax.spines['bottom'].set_color('blue')
ax.spines['top'].set_color('blue')
ax.spines['left'].set_color('red')
ax.spines['left'].set_linewidth(2)
# turn off axis spine to the right
ax.spines['right'].set_color("none")
ax.yaxis.tick_left() # only ticks on the left side
###Output
_____no_output_____
###Markdown
Twin axes Sometimes it is useful to have dual x or y axes in a figure; for example, when plotting curves with different units together. Matplotlib supports this with the `twinx` and `twiny` functions:
###Code
fig, ax1 = plt.subplots()
ax1.plot(x, x**2, lw=2, color="blue")
ax1.set_ylabel(r"area $(m^2)$", fontsize=18, color="blue")
for label in ax1.get_yticklabels():
label.set_color("blue")
ax2 = ax1.twinx()
ax2.plot(x, x**3, lw=2, color="red")
ax2.set_ylabel(r"volume $(m^3)$", fontsize=18, color="red")
for label in ax2.get_yticklabels():
label.set_color("red")
###Output
_____no_output_____
###Markdown
Axes where x and y is zero
###Code
fig, ax = plt.subplots()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0)) # set position of x spine to x=0
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0)) # set position of y spine to y=0
xx = np.linspace(-0.75, 1., 100)
ax.plot(xx, xx**3);
###Output
_____no_output_____
###Markdown
Other 2D plot styles In addition to the regular `plot` method, there are a number of other functions for generating different kinds of plots. See the matplotlib plot gallery for a complete list of available plot types: http://matplotlib.org/gallery.html. Some of the more useful ones are shown below:
###Code
n = np.array([0,1,2,3,4,5])
fig, axes = plt.subplots(1, 4, figsize=(12,3))
axes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx)))
axes[0].set_title("scatter")
axes[1].step(n, n**2, lw=2)
axes[1].set_title("step")
axes[2].bar(n, n**2, align="center", width=0.5, alpha=0.5)
axes[2].set_title("bar")
axes[3].fill_between(x, x**2, x**3, color="green", alpha=0.5);
axes[3].set_title("fill_between");
###Output
_____no_output_____
###Markdown
Text annotation Annotating text in matplotlib figures can be done using the `text` function. It supports LaTeX formatting just like axis label texts and titles:
###Code
fig, ax = plt.subplots()
ax.plot(xx, xx**2, xx, xx**3)
ax.text(0.15, 0.2, r"$y=x^2$", fontsize=20, color="blue")
ax.text(0.65, 0.1, r"$y=x^3$", fontsize=20, color="green");
###Output
_____no_output_____
###Markdown
Figures with multiple subplots and insets Axes can be added to a matplotlib Figure canvas manually using `fig.add_axes` or using a sub-figure layout manager such as `subplots`, `subplot2grid`, or `gridspec`: subplots
###Code
fig, ax = plt.subplots(2, 3)
fig.tight_layout()
###Output
_____no_output_____
###Markdown
subplot2grid
###Code
fig = plt.figure()
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1,2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2,0))
ax5 = plt.subplot2grid((3,3), (2,1))
fig.tight_layout()
###Output
_____no_output_____
###Markdown
gridspec
###Code
import matplotlib.gridspec as gridspec
fig = plt.figure()
gs = gridspec.GridSpec(2, 3, height_ratios=[2,1], width_ratios=[1,2,1])
for g in gs:
ax = fig.add_subplot(g)
fig.tight_layout()
###Output
_____no_output_____
###Markdown
add_axes Manually adding axes with `add_axes` is useful for adding insets to figures:
###Code
fig, ax = plt.subplots()
ax.plot(xx, xx**2, xx, xx**3)
fig.tight_layout()
# inset
inset_ax = fig.add_axes([0.2, 0.55, 0.35, 0.35]) # X, Y, width, height
inset_ax.plot(xx, xx**2, xx, xx**3)
inset_ax.set_title('zoom near origin')
# set axis range
inset_ax.set_xlim(-.2, .2)
inset_ax.set_ylim(-.005, .01)
# set axis tick locations
inset_ax.set_yticks([0, 0.005, 0.01])
inset_ax.set_xticks([-0.1,0,.1]);
###Output
_____no_output_____
###Markdown
Colormap and contour figures Colormaps and contour figures are useful for plotting functions of two variables. In most of these functions we will use a colormap to encode one dimension of the data. There are a number of predefined colormaps. It is relatively straightforward to define custom colormaps. For a list of pre-defined colormaps, see: http://www.scipy.org/Cookbook/Matplotlib/Show_colormaps
###Code
alpha = 0.7
phi_ext = 2 * np.pi * 0.5
def flux_qubit_potential(phi_m, phi_p):
return 2 + alpha - 2 * np.cos(phi_p) * np.cos(phi_m) - alpha * np.cos(phi_ext - 2*phi_p)
phi_m = np.linspace(0, 2*np.pi, 100)
phi_p = np.linspace(0, 2*np.pi, 100)
X,Y = np.meshgrid(phi_p, phi_m)
Z = flux_qubit_potential(X, Y).T
###Output
_____no_output_____
###Markdown
pcolor
###Code
fig, ax = plt.subplots()
p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())
cb = fig.colorbar(p, ax=ax)
###Output
_____no_output_____
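###Markdown
As mentioned above, defining a custom colormap is relatively straightforward; a minimal sketch reusing the pcolor example, with a colormap built via `LinearSegmentedColormap.from_list` (the chosen colors are arbitrary):
###Code
from matplotlib.colors import LinearSegmentedColormap
# Custom colormap interpolating between three (arbitrarily chosen) colors
my_cmap = LinearSegmentedColormap.from_list("my_cmap", ["navy", "white", "darkred"])
fig, ax = plt.subplots()
p = ax.pcolor(X/(2*np.pi), Y/(2*np.pi), Z, cmap=my_cmap, vmin=abs(Z).min(), vmax=abs(Z).max())
cb = fig.colorbar(p, ax=ax)
###Output
_____no_output_____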
###Markdown
imshow
###Code
fig, ax = plt.subplots()
im = ax.imshow(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
im.set_interpolation('bilinear')
cb = fig.colorbar(im, ax=ax)
###Output
_____no_output_____
###Markdown
contour
###Code
fig, ax = plt.subplots()
cnt = ax.contour(Z, cmap=matplotlib.cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(), extent=[0, 1, 0, 1])
###Output
_____no_output_____
###Markdown
3D figures To use 3D graphics in matplotlib, we first need to create an instance of the `Axes3D` class. 3D axes can be added to a matplotlib figure canvas in exactly the same way as 2D axes; or, more conveniently, by passing a `projection='3d'` keyword argument to the `add_axes` or `add_subplot` methods.
###Code
from mpl_toolkits.mplot3d.axes3d import Axes3D
###Output
_____no_output_____
###Markdown
Surface plots
###Code
fig = plt.figure(figsize=(14,6))
# `ax` is a 3D-aware axis instance because of the projection='3d' keyword argument to add_subplot
ax = fig.add_subplot(1, 2, 1, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=4, cstride=4, linewidth=0)
# surface_plot with color grading and color bar
ax = fig.add_subplot(1, 2, 2, projection='3d')
p = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=matplotlib.cm.coolwarm, linewidth=0, antialiased=False)
cb = fig.colorbar(p, shrink=0.5)
###Output
_____no_output_____
###Markdown
Wire-frame plot
###Code
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1, 1, 1, projection='3d')
p = ax.plot_wireframe(X, Y, Z, rstride=4, cstride=4)
###Output
_____no_output_____
###Markdown
Contour plots with projections
###Code
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(1,1,1, projection='3d')
ax.plot_surface(X, Y, Z, rstride=4, cstride=4, alpha=0.25)
cset = ax.contour(X, Y, Z, zdir='z', offset=-np.pi, cmap=matplotlib.cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='x', offset=-np.pi, cmap=matplotlib.cm.coolwarm)
cset = ax.contour(X, Y, Z, zdir='y', offset=3*np.pi, cmap=matplotlib.cm.coolwarm)
ax.set_xlim3d(-np.pi, 2*np.pi);
ax.set_ylim3d(0, 3*np.pi);
ax.set_zlim3d(-np.pi, 2*np.pi);
###Output
_____no_output_____ |
10_classes_02_exercises.ipynb | ###Markdown
**Important**: Click on "*Kernel*" > "*Restart Kernel and Run All*" *after* finishing the exercises in [JupyterLab ](https://jupyterlab.readthedocs.io/en/stable/) (e.g., in the cloud on [MyBinder ](https://mybinder.org/v2/gh/webartifex/intro-to-python/master?urlpath=lab/tree/10_classes_02_exercises.ipynb)) to ensure that your solution runs top to bottom *without* any errors Chapter 10: Classes & Instances The exercises below assume that you have read [Chapter 10 ](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/master/10_classes_00_content.ipynb) in the book.The `...`'s in the code cells indicate where you need to fill in code snippets. The number of `...`'s within a code cell give you a rough idea of how many lines of code are needed to solve the task. You should not need to create any additional code cells for your final solution. However, you may want to use temporary code cells to try out some ideas. Berlin Tourist Guide: A Traveling Salesman Problem This notebook is a hands-on and tutorial-like application to show how to load data from web services like [Google Maps](https://developers.google.com/maps) and use them to solve a logistics problem, namely a **[Traveling Salesman Problem ](https://en.wikipedia.org/wiki/Traveling_salesman_problem)**.Imagine that a tourist lands at Berlin's [Tegel Airport ](https://en.wikipedia.org/wiki/Berlin_Tegel_Airport) in the morning and has his "connecting" flight from [Schönefeld Airport ](https://en.wikipedia.org/wiki/Berlin_Sch%C3%B6nefeld_Airport) in the evening. By the time, the flights were scheduled, the airline thought that there would be only one airport in Berlin.Having never been in Berlin before, the tourist wants to come up with a plan of sights that he can visit with a rental car on his way from Tegel to Schönefeld.With a bit of research, he creates a `list` of `sights` like below.
###Code
arrival = "Berlin Tegel Airport (TXL), Berlin"
sights = [
"Alexanderplatz, Berlin",
"Brandenburger Tor, Pariser Platz, Berlin",
"Checkpoint Charlie, Friedrichstraße, Berlin",
"Kottbusser Tor, Berlin",
"Mauerpark, Berlin",
"Siegessäule, Berlin",
"Reichstag, Platz der Republik, Berlin",
"Soho House Berlin, Torstraße, Berlin",
"Tempelhofer Feld, Berlin",
]
departure = "Berlin Schönefeld Airport (SXF), Berlin"
###Output
_____no_output_____
###Markdown
With just the street addresses, however, he cannot calculate a route. He needs `latitude`-`longitude` coordinates instead. While he could just open a site like [Google Maps](https://www.google.com/maps) in a web browser, he wonders if he can download the data with a bit of Python code using a [web API ](https://en.wikipedia.org/wiki/Web_API) offered by [Google](https://www.google.com).So, in this notebook, we solve the entire problem with code. To keep everything clean and maintainable, we use Python's object-oriented features (cf., [Chapter 10 ](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/master/10_classes_00_content.ipynb) to implement the solution. Geocoding In order to obtain coordinates for the given street addresses above, a process called **geocoding**, we use the [Google Maps Geocoding API](https://developers.google.com/maps/documentation/geocoding/start).**Q1**: Familiarize yourself with this [documentation](https://developers.google.com/maps/documentation/geocoding/start), register a developer account, create a project, and [create an API key](https://console.cloud.google.com/apis/credentials) that is necessary for everything to work! Then, [enable the Geocoding API](https://console.developers.google.com/apis/library/geocoding-backend.googleapis.com) and link a [billing account](https://console.developers.google.com/billing)!Info: The first 200 Dollars per month are not charged (cf., [pricing page](https://cloud.google.com/maps-platform/pricing/)), so no costs will incur for this tutorial. You must sign up because Google simply wants to know the people using its services.**Q2**: Assign the API key as a `str` object to the `key` variable!
###Code
key = " < your API key goes here > "
###Output
_____no_output_____
###Markdown
To use external web services, our application needs to make HTTP requests just like web browsers do when surfing the web.We do not have to implement this on our own. Instead, we use the official Python Client for the Google Maps Services provided by Google in one of its corporate [GitHub repositories ](https://github.com/googlemaps).**Q3**: Familiarize yourself with the [googlemaps ](https://github.com/googlemaps/google-maps-services-python) package! Then, install it with the `pip` command line tool!
###Code
!pip install googlemaps
###Output
_____no_output_____
###Markdown
**Q4**: Finish the following code cells and instantiate a `Client` object named `api`! Use the `key` from above. `api` provides us with a lot of methods to talk to the API.
###Code
import googlemaps
api = googlemaps.Client(...)
api
type(api)
###Output
_____no_output_____
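###Markdown
If you are unsure how to fill in the blank in the cell above: the client only needs the API key. A minimal sketch (it assumes the `key` variable from Q2 holds a valid key):
###Code
# One possible completion of Q4: pass the API key to the client.
api = googlemaps.Client(key=key)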
###Markdown
**Q5**: Execute the next code cell to list the methods and attributes on the `api` object!
###Code
[x for x in dir(api) if not x.startswith("_")]
###Output
_____no_output_____
###Markdown
To obtain all kinds of information associated with a street address, we call the `geocode()` method with the address as the sole argument.For example, let's search for Brandenburg Gate. Its street address is `"Brandenburger Tor, Pariser Platz, Berlin"`.**Q6**: Execute the next code cell!Hint: If you get an error message, follow the instructions in it to debug it.If everything works, we receive a `list` with a single `dict` in it. That means the [Google Maps Geocoding API](https://developers.google.com/maps/documentation/geocoding/start) only knows about one place at the address. Unfortunately, the `dict` is pretty dense and hard to read.
###Code
api.geocode("Brandenburger Tor, Pariser Platz, Berlin")
###Output
_____no_output_____
###Markdown
**Q7**: Capture the first and only search result in the `brandenburg_gate` variable and "pretty print" it with the help of the [pprint() ](https://docs.python.org/3/library/pprint.htmlpprint.pprint) function in the [pprint ](https://docs.python.org/3/library/pprint.html) module in the [standard library ](https://docs.python.org/3/library/index.html)!
###Code
response = api.geocode("Brandenburger Tor, Pariser Platz, Berlin")
brandenburg_gate = ...
###Output
_____no_output_____
###Markdown
The `dict` has several keys that are of use for us: `"formatted_address"` is a cleanly formatted version of the address. `"geometry"` is a nested `dict` with several `lat`-`lng` coordinates describing the place, of which `"location"` is the one we need for our calculations. Lastly, `"place_id"` is a unique identifier that allows us to obtain further information about the address from other Google APIs.
###Code
from pprint import pprint
pprint(brandenburg_gate)
###Output
_____no_output_____
###Markdown
The `Place` Class To keep our code readable and maintainable, we create a `Place` class to manage the API results in a clean way.The `__init__()` method takes a `street_address` (e.g., an element of `sights`) and a `client` argument (e.g., an object like `api`) and stores them on `self`. The place's `name` is parsed out of the `street_address` as well: It is the part before the first comma. Also, the instance attributes `latitude`, `longitude`, and `place_id` are initialized to `None`.**Q8**: Finish the `__init__()` method according to the description!The `sync_from_google()` method uses the internally kept `client` and synchronizes the place's state with the [Google Maps Geocoding API](https://developers.google.com/maps/documentation/geocoding/start). In particular, it updates the `address` with the `formatted_address` and stores the values for `latitude`, `longitude`, and `place_id`. It enables method chaining.**Q9**: Implement the `sync_from_google()` method according to the description!**Q10**: Add a read-only `location` property on the `Place` class that returns the `latitude` and `longitude` as a `tuple`!
###Code
class Place:
"""A place connected to the Google Maps Geocoding API."""
# answer to Q8
def __init__(self, street_address, *, client):
"""Create a new place.
Args:
street_address (str): street address of the place
client (googlemaps.Client): access to the Google Maps Geocoding API
"""
...
...
...
...
...
...
def __repr__(self):
cls, name = self.__class__.__name__, self.name
synced = " [SYNCED]" if self.place_id else ""
return f"<{cls}: {name}{synced}>"
# answer to Q9
def sync_from_google(self):
"""Download the place's coordinates and other info."""
response = ...
first_hit = ...
... = first_hit[...]
... = first_hit[...][...][...]
... = first_hit[...][...][...]
... = first_hit[...]
return ...
# answer to Q10
...
...
...
###Output
_____no_output_____
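###Markdown
One possible way to fill in the blanks above (a sketch, not the official solution; the dictionary keys follow the geocode response we inspected earlier):
###Code
class Place:
    """A place connected to the Google Maps Geocoding API."""

    def __init__(self, street_address, *, client):
        self.address = street_address  # replaced by Google's formatted address after syncing
        self.client = client
        self.name = street_address.split(",")[0]
        self.latitude = None
        self.longitude = None
        self.place_id = None

    def __repr__(self):
        cls, name = self.__class__.__name__, self.name
        synced = " [SYNCED]" if self.place_id else ""
        return f"<{cls}: {name}{synced}>"

    def sync_from_google(self):
        """Download the place's coordinates and other info."""
        response = self.client.geocode(self.address)
        first_hit = response[0]
        self.address = first_hit["formatted_address"]
        self.latitude = first_hit["geometry"]["location"]["lat"]
        self.longitude = first_hit["geometry"]["location"]["lng"]
        self.place_id = first_hit["place_id"]
        return self  # enable method chaining

    @property
    def location(self):
        """The (latitude, longitude) pair of the place."""
        return self.latitude, self.longitude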
###Markdown
**Q11**: Verify that instantiating a `Place` object works!
###Code
brandenburg_gate = Place("Brandenburger Tor, Pariser Platz, Berlin", client=api)
brandenburg_gate
###Output
_____no_output_____
###Markdown
**Q12**: What do the angle brackets `<` and `>` mean in the text representation? Now, we can obtain the geo-data from the [Google Maps Geocoding API](https://developers.google.com/maps/documentation/geocoding/start) in a clean way. As we enabled method chaining for `sync_from_google()`, we get back the instance after calling the method. **Q13**: Verify that the `sync_from_google()` method works!
###Code
brandenburg_gate.sync_from_google()
brandenburg_gate.address
brandenburg_gate.place_id
brandenburg_gate.location
###Output
_____no_output_____
###Markdown
The `Place` Class (continued): Batch Synchronization **Q14**: Add an alternative constructor method named `from_addresses()` that takes an `addresses`, a `client`, and a `sync` argument! `addresses` is a finite iterable of `str` objects (e.g., like `sights`). The method returns a `list` of `Place`s, one for each `str` in `addresses`. All `Place`s are initialized with the same `client`. `sync` is a flag and defaults to `False`. If it is set to `True`, the alternative constructor invokes the `sync_from_google()` method on the `Place`s before returning them.
###Code
class Place:
"""A place connected to the Google Maps Geocoding API."""
# answers from above
# answer to Q14
...
def from_addresses(cls, addresses, *, client, sync=False):
"""Create new places in a batch.
Args:
addresses (iterable of str's): the street addresses of the places
client (googlemaps.Client): access to the Google Maps Geocoding API
Returns:
list of Places
"""
places = ...
for address in addresses:
place = ...
if sync:
...
...
return places
###Output
_____no_output_____
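###Markdown
A possible completion of the alternative constructor (a sketch that only fills in the blanks; the method goes inside the `Place` class defined above):
###Code
# Inside the Place class:
@classmethod
def from_addresses(cls, addresses, *, client, sync=False):
    """Create new places in a batch."""
    places = []
    for address in addresses:
        place = cls(address, client=client)
        if sync:
            place.sync_from_google()
        places.append(place)
    return places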
###Markdown
**Q15**: Verify that the alternative constructor works with and without the `sync` flag!
###Code
Place.from_addresses(sights, client=api)
Place.from_addresses(sights, client=api, sync=True)
###Output
_____no_output_____
###Markdown
Visualization For geo-data it always makes sense to plot them on a map. We use the third-party library [folium ](https://github.com/python-visualization/folium) to achieve that.**Q16**: Familiarize yourself with [folium ](https://github.com/python-visualization/folium) and install it with the `pip` command line tool!
###Code
!pip install folium
###Output
_____no_output_____
###Markdown
**Q17**: Execute the code cells below to create an empty map of Berlin!
###Code
import folium
berlin = folium.Map(location=(52.513186, 13.3944349), zoom_start=14)
type(berlin)
###Output
_____no_output_____
###Markdown
`folium.Map` instances are shown as interactive maps in Jupyter notebooks whenever they are the last expression in a code cell.
###Code
berlin
###Output
_____no_output_____
###Markdown
In order to put something on the map, [folium ](https://github.com/python-visualization/folium) works with so-called `Marker` objects.**Q18**: Review its docstring and then create a marker `m` with the location data of Brandenburg Gate! Use the `brandenburg_gate` object from above!Hint: You may want to use HTML tags for the `popup` argument to format the text output on the map in a nicer way. So, instead of just passing `"Brandenburger Tor"` as the `popup` argument, you could use, for example, `"Brandenburger Tor(Pariser Platz, 10117 Berlin, Germany)"`. Then, the name appears in bold and the street address is put on the next line. You could use an f-string to parametrize the argument.
###Code
folium.Marker?
m = folium.Marker(
location=...,
popup=...,
tooltip=...,
)
type(m)
###Output
_____no_output_____
###Markdown
**Q19**: Execute the next code cells that add `m` to the `berlin` map!
###Code
m.add_to(berlin)
berlin
###Output
_____no_output_____
###Markdown
The `Place` Class (continued): Marker Representation **Q20**: Finish the `as_marker()` method that returns a `Marker` instance when invoked on a `Place` instance! The method takes an optional `color` argument that uses [folium ](https://github.com/python-visualization/folium)'s `Icon` type to control the color of the marker.
###Code
class Place:
"""A place connected to the Google Maps Geocoding API."""
# answers from above
# answer to Q20
def as_marker(self, *, color="blue"):
"""Create a Marker representation of the place.
Args:
color (str): color of the marker, defaults to "blue"
Returns:
marker (folium.Marker)
Raises:
RuntimeError: if the place is not synchronized with
the Google Maps Geocoding API
"""
if not self.place_id:
raise RuntimeError("must synchronize with Google first")
return folium.Marker(
location=...,
popup=...,
tooltip=...,
icon=folium.Icon(color=color),
)
###Output
_____no_output_____
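###Markdown
For the three blanks in `as_marker()`, any values derived from the place will do; one possibility (the popup format is just a suggestion, mirroring the HTML hint from Q18):
###Code
# Inside as_marker(), one way to fill in the blanks:
return folium.Marker(
    location=self.location,
    popup=f"<b>{self.name}</b><br>{self.address}",  # suggested format, not prescribed
    tooltip=self.name,
    icon=folium.Icon(color=color),
)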
###Markdown
**Q21**: Execute the next code cells that create a new `Place` and obtain a `Marker` for it!Notes: Without synchronization, we get a `RuntimeError`. `as_marker()` can be chained right after `sync_from_google()`
###Code
brandenburg_gate = Place("Brandenburger Tor, Pariser Platz, Berlin", client=api)
brandenburg_gate.as_marker()
brandenburg_gate.sync_from_google().as_marker()
###Output
_____no_output_____
###Markdown
**Q22**: Use the alternative `from_addresses()` constructor to create a `list` named `places` with already synced `Place`s!
###Code
places = Place.from_addresses(sights, client=api, sync=True)
places
###Output
_____no_output_____
###Markdown
The `Map` Class To make [folium ](https://github.com/python-visualization/folium)'s `Map` class work even better with our `Place` instances, we write our own `Map` class wrapping [folium ](https://github.com/python-visualization/folium)'s. Then, we add further functionality to the class throughout this tutorial.The `__init__()` method takes mandatory `name`, `center`, `start`, `end`, and `places` arguments. `name` is there for convenience, `center` is the map's initial center, `start` and `end` are `Place` instances, and `places` is a finite iterable of `Place` instances. Also, `__init__()` accepts an optional `initial_zoom` argument defaulting to `12`.Upon initialization, a `folium.Map` instance is created and stored as an implementation detail `_map`. Also, `__init__()` puts markers for each place on the `_map` object: `"green"` and `"red"` markers for the `start` and `end` locations and `"blue"` ones for the `places` to be visited. To do that, `__init__()` invokes another `add_marker()` method on the `Map` class, once for every `Place` object. `add_marker()` itself invokes the `add_to()` method on the `folium.Marker` representation of a `Place` instance and enables method chaining.To keep the state in a `Map` instance consistent, all passed in arguments except `name` are treated as implementation details. Otherwise, a user of the `Map` class could, for example, change the `start` attribute, which would not be reflected in the internally kept `folium.Map` object.**Q23**: Implement the `__init__()` and `add_marker()` methods on the `Map` class as described!**Q24**: Add a `show()` method on the `Map` class that simply returns the internal `folium.Map` object!
###Code
class Map:
"""A map with plotting and routing capabilities."""
# answer to Q23
def __init__(self, name, center, start, end, places, initial_zoom=12):
"""Create a new map.
Args:
name (str): name of the map
center (float, float): coordinates of the map's center
start (Place): start of the tour
end (Place): end of the tour
places (iterable of Places): the places to be visitied
initial_zoom (integer): zoom level according to folium's
specifications; defaults to 12
"""
self.name = name
...
...
...
... = folium.Map(...)
# Add markers to the map.
...
...
for place in places:
...
def __repr__(self):
return f"<Map: {self.name}>"
# answer to Q24
def show(self):
"""Return a folium.Map representation of the map."""
return ...
# answer to Q23
def add_marker(self, marker):
"""Add a marker to the map.
Args:
marker (folium.Marker): marker to be put on the map
"""
...
return ...
###Output
_____no_output_____
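###Markdown
One possible completion of `__init__()`, `show()`, and `add_marker()` (a sketch; marker colors follow the description above, and `_distances` is initialized here already because the `distances` property added further below relies on it):
###Code
class Map:
    """A map with plotting and routing capabilities."""

    def __init__(self, name, center, start, end, places, initial_zoom=12):
        self.name = name
        self._start = start
        self._end = end
        self._places = list(places)
        self._map = folium.Map(location=center, zoom_start=initial_zoom)
        self._distances = None  # cache used by the distances property introduced later
        # Add markers to the map.
        self.add_marker(start.as_marker(color="green"))
        self.add_marker(end.as_marker(color="red"))
        for place in self._places:
            self.add_marker(place.as_marker())

    def __repr__(self):
        return f"<Map: {self.name}>"

    def show(self):
        """Return a folium.Map representation of the map."""
        return self._map

    def add_marker(self, marker):
        """Add a marker to the map."""
        marker.add_to(self._map)
        return self  # enable method chaining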
###Markdown
Let's put all the sights, the two airports, and three more places, the [Bundeskanzleramt ](https://en.wikipedia.org/wiki/German_Chancellery), the [Olympic Stadium ](https://en.wikipedia.org/wiki/Olympiastadion_%28Berlin%29), and the [East Side Gallery ](https://en.wikipedia.org/wiki/East_Side_Gallery), on the map.**Q25**: Execute the next code cells to create a map of Berlin with all the places on it!Note: Because we implemented method chaining everywhere, the code below is only *one* expression written over several lines. It almost looks like a self-explanatory and compact "language" on its own.
###Code
berlin = (
Map(
"Sights in Berlin",
center=(52.5015154, 13.4066838),
start=Place(arrival, client=api).sync_from_google(),
end=Place(departure, client=api).sync_from_google(),
places=places,
initial_zoom=10,
)
.add_marker(
Place("Bundeskanzleramt, Willy-Brandt-Straße, Berlin", client=api)
.sync_from_google()
.as_marker(color="orange")
)
.add_marker(
Place("Olympiastadion Berlin", client=api)
.sync_from_google()
.as_marker(color="orange")
)
.add_marker(
Place("East Side Gallery, Berlin", client=api)
.sync_from_google()
.as_marker(color="orange")
)
)
berlin
berlin.show()
###Output
_____no_output_____
###Markdown
Distance Matrix Before we can find out the best order in which to visit all the sights, we must calculate the pairwise distances between all points. While Google also offers a [Directions API](https://developers.google.com/maps/documentation/directions/start) and a [Distance Matrix API](https://developers.google.com/maps/documentation/distance-matrix/start), we choose to calculate the air distances using the third-party library [geopy ](https://github.com/geopy/geopy).**Q26**: Familiarize yourself with the [documentation](https://geopy.readthedocs.io/en/stable/) and install [geopy ](https://github.com/geopy/geopy) with the `pip` command line tool!
###Code
!pip install geopy
###Output
_____no_output_____
###Markdown
We use [geopy ](https://github.com/geopy/geopy) primarily for converting the `latitude`-`longitude` coordinates into a [distance matrix ](https://en.wikipedia.org/wiki/Distance_matrix).Because the [earth is not flat ](https://en.wikipedia.org/wiki/Flat_Earth), [geopy ](https://github.com/geopy/geopy) provides a `great_circle()` function that calculates the so-called [orthodromic distance ](https://en.wikipedia.org/wiki/Great-circle_distance) between two places on a sphere.
###Code
from geopy.distance import great_circle
###Output
_____no_output_____
###Markdown
**Q27**: For quick reference, read the docstring of `great_circle()` and execute the code cells below to calculate the distance between the `arrival` and the `departure`!
###Code
great_circle?
tegel = Place(arrival, client=api).sync_from_google()
schoenefeld = Place(departure, client=api).sync_from_google()
great_circle(tegel.location, schoenefeld.location)
great_circle(tegel.location, schoenefeld.location).km
great_circle(tegel.location, schoenefeld.location).meters
###Output
_____no_output_____
###Markdown
The `Place` Class (continued): Distance to another `Place` **Q28**: Finish the `distance_to()` method in the `Place` class that takes an `other` argument and returns the distance in meters! Adhere to the given docstring!
###Code
class Place:
"""A place connected to the Google Maps Geocoding API."""
# answers from above
# answer to Q28
def distance_to(self, other):
"""Calculate the distance to another place in meters.
Args:
other (Place): the other place to calculate the distance to
Returns:
distance (int)
Raises:
RuntimeError: if one of the places is not synchronized with
the Google Maps Geocoding API
"""
if not self.place_id or not other.place_id:
raise RuntimeError("must synchronize places with Google first")
return ...
###Output
_____no_output_____
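###Markdown
The remaining blank boils down to a single `great_circle()` call (a sketch; converting to `int` matches the docstring's promise of a distance in whole meters):
###Code
# Inside distance_to(), after the synchronization check:
return int(great_circle(self.location, other.location).meters)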
###Markdown
**Q29**: Execute the code cells below to test the new feature!Note: If done right, object-oriented code reads almost like plain English.
###Code
tegel = Place(arrival, client=api).sync_from_google()
schoenefeld = Place(departure, client=api).sync_from_google()
tegel.distance_to(schoenefeld)
###Output
_____no_output_____
###Markdown
**Q30**: Execute the next code cell to instantiate the `Place`s in `sights` again!
###Code
places = Place.from_addresses(sights, client=api, sync=True)
places
###Output
_____no_output_____
###Markdown
The `Map` Class (continued): Pairwise Distances Now, we add a read-only `distances` property on our `Map` class. As we are working with air distances, these are *symmetric* which reduces the number of distances we must calculate.To do so, we use the [combinations() ](https://docs.python.org/3/library/itertools.htmlitertools.combinations) generator function in the [itertools ](https://docs.python.org/3/library/itertools.html) module in the [standard library ](https://docs.python.org/3/library/index.html). That produces all possible `r`-`tuple`s from an `iterable` argument. `r` is `2` in our case as we are looking at `origin`-`destination` pairs.Let's first look at an easy example of [combinations() ](https://docs.python.org/3/library/itertools.htmlitertools.combinations) to understand how it works: It gives us all the `2`-`tuple`s from a `list` of five `numbers` disregarding the order of the `tuple`s' elements.
###Code
import itertools
numbers = [1, 2, 3, 4, 5]
for x, y in itertools.combinations(numbers, 2):
print(x, y)
###Output
_____no_output_____
###Markdown
`distances` uses the internal `_start`, `_end`, and `_places` attributes and creates a `dict` with the keys consisting of all pairs of `Place`s and the values being their distances in meters. As this operation is rather costly, we cache the distances the first time we calculate them into a hidden instance attribute `_distances`, which must be initialized with `None` in the `__init__()` method.**Q31**: Finish the `distances` property as described!
###Code
class Map:
"""A map with plotting and routing capabilities."""
# answers from above with a tiny adjustment
# answer to Q31
...
def distances(self):
"""Return a dict with the pairwise distances of all places.
Implementation note: The results of the calculations are cached.
"""
if not self._distances:
distances = ...
all_pairs = itertools.combinations(
...,
r=2,
)
for origin, destination in all_pairs:
distance = ...
distances[origin, destination] = distance
distances[destination, origin] = distance
self._distances = ...
return ...
###Output
_____no_output_____
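###Markdown
A possible implementation of the cached property (a sketch; it assumes `self._distances = None` was set in `__init__()`, the "tiny adjustment" mentioned in the skeleton's comment):
###Code
# Inside the Map class:
@property
def distances(self):
    """Return a dict with the pairwise distances of all places."""
    if not self._distances:
        distances = {}
        all_pairs = itertools.combinations(
            [self._start, self._end, *self._places],
            r=2,
        )
        for origin, destination in all_pairs:
            distance = origin.distance_to(destination)
            distances[origin, destination] = distance  # air distances are symmetric,
            distances[destination, origin] = distance  # so store both directions
        self._distances = distances
    return self._distances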
###Markdown
We pretty print the total distance matrix.
###Code
berlin = Map(
"Berlin",
center=(52.5015154, 13.4066838),
start=Place(arrival, client=api).sync_from_google(),
end=Place(departure, client=api).sync_from_google(),
places=places,
initial_zoom=10,
)
pprint(berlin.distances)
###Output
_____no_output_____
###Markdown
How can we be sure the matrix contains all possible pairs? As we have 9 `sights` plus the `start` and the `end` of the tour, we conclude that there must be `11 * 10 = 110` distances excluding the `0` distances of a `Place` to itself that are not in the distance matrix.
###Code
n_places = len(places) + 2
n_places * (n_places - 1)
len(berlin.distances)
###Output
_____no_output_____
###Markdown
Route Optimization Let us find the cost minimal order of traveling from the `arrival` airport to the `departure` airport while visiting all the `sights`.This problem can be expressed as finding the shortest so-called [Hamiltonian path ](https://en.wikipedia.org/wiki/Hamiltonian_path) from the `start` to `end` on the `Map` (i.e., a path that visits each intermediate node exactly once). With the "hack" of assuming the distance of traveling from the `end` to the `start` to be `0` and thereby effectively merging the two airports into a single node, the problem can be viewed as a so-called [traveling salesman problem ](https://en.wikipedia.org/wiki/Traveling_salesman_problem) (TSP).The TSP is a hard problem to solve but also well studied in the literature. Assuming symmetric distances, a TSP with $n$ nodes has $\frac{(n-1)!}{2}$ possible routes. $(n-1)$ because any node can be the `start` / `end` and divided by $2$ as the problem is symmetric.Starting with about $n = 20$, the TSP is almost impossible to solve exactly in a reasonable amount of time. Luckily, we do not have that many `sights` to visit, and so we use a [brute force ](https://en.wikipedia.org/wiki/Brute-force_search) approach and simply loop over all possible routes to find the shortest.In the case of our tourist, we "only" need to try out `181_440` possible routes because the two airports are effectively one node and $n$ becomes $10$.
###Code
import math
math.factorial(len(places) + 1 - 1) // 2
###Output
_____no_output_____
###Markdown
Analyzing the problem a bit further, all we need is a list of [permutations ](https://en.wikipedia.org/wiki/Permutation) of the sights as the two airports are always the first and last location.The [permutations() ](https://docs.python.org/3/library/itertools.htmlitertools.permutations) generator function in the [itertools ](https://docs.python.org/3/library/itertools.html) module in the [standard library ](https://docs.python.org/3/library/index.html) helps us with the task. Let's see an example to understand how it works.
###Code
numbers = [1, 2, 3]
for permutation in itertools.permutations(numbers):
print(permutation)
###Output
_____no_output_____
###Markdown
However, if we use [permutations() ](https://docs.python.org/3/library/itertools.htmlitertools.permutations) as is, we try out *redundant* routes. For example, transferred to our case, the tuples `(1, 2, 3)` and `(3, 2, 1)` represent the *same* route as the distances are symmetric and the tourist could be going in either direction. To obtain the *unique* routes, we use an `if`-clause in a "hacky" way by only accepting routes where the first node has a smaller value than the last. Thus, we keep, for example, `(1, 2, 3)` and discard `(3, 2, 1)`.
###Code
for permutation in itertools.permutations(numbers):
if permutation[0] < permutation[-1]:
print(permutation)
###Output
_____no_output_____
###Markdown
In order to compare `Place`s like numbers, we would have to implement comparison special methods such as `__lt__()` (and `__eq__()`). Otherwise, we get a `TypeError`.
###Code
Place(arrival, client=api) < Place(departure, client=api)
###Output
_____no_output_____
###Markdown
As a quick and dirty solution, we use the `location` property on a `Place` to do the comparison.
###Code
Place(arrival, client=api).location < Place(departure, client=api).location
###Output
_____no_output_____
###Markdown
As the code cell below shows, combining [permutations() ](https://docs.python.org/3/library/itertools.htmlitertools.permutations) with an `if`-clause results in the correct number of routes to be looped over.
###Code
sum(
1
for route in itertools.permutations(places)
if route[0].location < route[-1].location
)
###Output
_____no_output_____
###Markdown
To implement the brute force algorithm, we split the logic into two methods.First, we create an `evaluate()` method that takes a `route` argument that is a sequence of `Place`s and returns the total distance of the route. Internally, this method uses the `distances` property repeatedly, which is why we built in caching above.**Q32**: Finish the `evaluate()` method as described!Second, we create a `brute_force()` method that needs no arguments. It loops over all possible routes to find the shortest. As the `start` and `end` of a route are fixed, we only need to look at `permutation`s of inner nodes. Each `permutation` can then be traversed in a forward and a backward order. `brute_force` enables method chaining as well.**Q33**: Finish the `brute_force()` method as described! The `Map` Class (continued): Brute Forcing the TSP
###Code
class Map:
"""A map with plotting and routing capabilities."""
# answers from above
# answer to Q32
def evaluate(self, route):
"""Calculate the total distance of a route.
Args:
route (sequence of Places): the ordered nodes in a tour
Returns:
cost (int)
"""
cost = ...
# Loop over all pairs of consecutive places.
origin = ...
for destination in ...:
cost += self.distances[...]
...
return ...
# answer to Q33
def brute_force(self):
"""Calculate the shortest route by brute force.
The route is plotted on the folium.Map.
"""
# Assume a very high cost to begin with.
min_cost = ...
best_route = None
# Loop over all permutations of the intermediate nodes to visit.
for permutation in ...:
# Skip redundant permutations.
if ...:
...
# Travel through the routes in both directions.
for route in (permutation, permutation[::-1]):
# Add start and end to the route.
route = (..., *route, ...)
# Check if a route is cheaper than all routes seen before.
cost = ...
if ...:
min_cost = ...
best_route = ...
# Plot the route on the map
folium.PolyLine(
[x.location for x in best_route],
color="orange", weight=3, opacity=1
).add_to(self._map)
return ...
###Output
_____no_output_____
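###Markdown
One way to complete the two methods (a sketch; the redundancy check mirrors the `location` comparison trick discussed above, and the `folium.PolyLine` part is taken from the skeleton):
###Code
# Inside the Map class:
def evaluate(self, route):
    """Calculate the total distance of a route."""
    cost = 0
    # Loop over all pairs of consecutive places.
    origin = route[0]
    for destination in route[1:]:
        cost += self.distances[origin, destination]
        origin = destination
    return cost

def brute_force(self):
    """Calculate the shortest route by brute force."""
    # Assume a very high cost to begin with.
    min_cost = float("inf")
    best_route = None
    # Loop over all permutations of the intermediate nodes to visit.
    for permutation in itertools.permutations(self._places):
        # Skip redundant permutations (keep only routes whose first location is "smaller").
        if permutation[0].location >= permutation[-1].location:
            continue
        # Travel through the routes in both directions.
        for route in (permutation, permutation[::-1]):
            # Add start and end to the route.
            route = (self._start, *route, self._end)
            # Check if a route is cheaper than all routes seen before.
            cost = self.evaluate(route)
            if cost < min_cost:
                min_cost = cost
                best_route = route
    # Plot the route on the map.
    folium.PolyLine(
        [x.location for x in best_route],
        color="orange", weight=3, opacity=1,
    ).add_to(self._map)
    return self  # enable method chaining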
###Markdown
**Q34**: Find the best route for our tourist by executing the code cells below!
###Code
berlin = Map(
"Berlin",
center=(52.4915154, 13.4066838),
start=Place(arrival, client=api).sync_from_google(),
end=Place(departure, client=api).sync_from_google(),
places=places,
initial_zoom=12,
)
berlin.brute_force().show()
###Output
_____no_output_____ |
03_relatedness_to_beer_strains/calcRelativeTime_Scer.ipynb | ###Markdown
Estimating divergence time between Jean-Talon and its relatives in generations and years. Method adopted from Skoglund et al. 2011 (following Green et al. 2006). The method uses triplets: Jean-Talon, a relative & an outgroup; the outgroup is the S. cerevisiae reference genome. First, divergence between Jean-Talon & the outgroup is estimated with a molecular clock, by counting differences fixed between Jean-Talon and the outgroup (synonymous sites fixed for 1 in Jean-Talon). Second, we count sites shared between Jean-Talon & the outgroup but not the relative (genotypes 0,1,0) and sites shared between the relative and the outgroup but not Jean-Talon (genotypes 1,0,0). The number of these sites over the total number of sites carries information about the proportion of the branch length from the split of Jean-Talon with its relative, relative to the branch length from the split of Jean-Talon (or the relative) with the outgroup.@author:aniafijarczyk
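In symbols (notation introduced here only for clarity; it mirrors the code below): with $k$ synonymous differences fixed between Jean-Talon and the outgroup over $L$ synonymous sites and a per-site, per-generation mutation rate $\mu$, the divergence time to the outgroup is $t_{out} = \frac{k/L}{2\mu}$ generations. If $S$ is the per-site rate of ABA (or BAA) sites, i.e., sites where only one of the two strains carries the derived allele, the split with a relative is approximated as $t_{split} \approx S \cdot t_{out}$ generations, converted to years by dividing by the assumed number of generations per year (150 or 2920).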
###Code
import pandas as pd
import glob
import random
import gzip
import numpy as np
from collections import defaultdict
###Output
_____no_output_____
###Markdown
Setting global variables
###Code
focal_species = "Jean-Talon"
relatives = ["A.Muntons","A.S-33","A.T-58","BE005","CFI","CFN","CFP"]
mutation_rate = 1.67E-10
gen_year_min = 150 # lower estimate of generation number per year (from Gallone et al. 2016)
gen_year_max = 2920 # higher estimate of generation number per year (from Fay & Benavides 2005)
jean_talon_synonymous_sites = 1601812.63 # total length of synonymous sites in Jean-Talon (no missing data)
###Output
_____no_output_____
###Markdown
Lengths of synonymous sites for pairs of genomes of different relatives with Jean-Talon excluding positions with any missing data
###Code
#lengths = pd.read_csv("./input_files/synonymous_length.txt",sep="\t",header=0,names=['strain','length'])
lengths = pd.DataFrame({"strain":["A.2565","A.Muntons","A.S-33","A.T-58","BE005","CFI","CFN","CFP","CHK","Jean-Talon"],
"length":[825134.19,1393964.5,921367.81,981293.2,1600532.52,1601356.33,1601365.39,1600888.52,1601484.86,1601812.63]})
lengths.head()
###Output
_____no_output_____
###Markdown
Selecting strain indices for triplets (duplets here - S. cerevisiae is reference genome)
###Code
# File with list of sample names from relatives_annot_Filtered2.vcf file
samples = pd.read_csv("./input_files/relatives_annot_Filtered2.samples", sep="\t", header=None, names=["haplotype"])
sample_names = list(samples["haplotype"])
T = []
for strain in relatives:
test_samples = ["Jean-Talon",strain]
test_samples_p1 = [[0,1][ele.split("_")[0] in test_samples[0]] for ele in sample_names]
samp_indices_p1 = [i for i in range(len(test_samples_p1)) if test_samples_p1[i] == 1]
test_samples_p2 = [[0,1][ele.split("_")[0] in test_samples[1]] for ele in sample_names]
samp_indices_p2 = [i for i in range(len(test_samples_p2)) if test_samples_p2[i] == 1]
samp_indices = samp_indices_p1 + samp_indices_p2
T.append(samp_indices)
T
###Output
_____no_output_____
###Markdown
Getting synonymous variants Annotating vcf file (File S5 after changing chromosome names to chr format):
```console
java -jar snpEff.jar R64-2-1 relatives_annot_Filtered2_R64.2.1.vcf.gz > relatives_annot_Filtered2_R64.2.1_snpEff.vcf
```
Getting variants for selected genes (./output/manipulateFasta_nonoverlappingCDS.bed) in table:
```console
bcftools query -f '%CHROM\t%POS\t%INFO/ANN\n' -R ./output/manipulateFasta_nonoverlappingCDS.bed \
relatives_annot_Filtered2_R64.2.1.vcf.gz | grep 'synonymous_variant' > \
relatives_annot_synonymous_snpEff.tab
```
Reading file with synonymous variants
###Code
fa = gzip.open("./input_files/relatives_annot_synonymous_snpEff.tab.gz", "rt").readlines()
#fa = gzip.open("./input_files/sample_annot_synonymous_snpEff.tab.gz", "rt").readlines()
ann = [ele.split() for ele in fa]
D = {a+"_"+b:c for a,b,c in ann}
print("Number of all synonymous variants = "+str(len(list(D.keys()))))
###Output
Number of all synonymous variants = 51608
###Markdown
Generating table with genotypes encoded as 0 and 1 from vcf file for selected genes (./output/manipulateFasta_nonoverlappingCDS.bed):
```console
bcftools query -f '%CHROM\t%POS[\t%GT]\n' -R ./output/manipulateFasta_nonoverlappingCDS.bed \
relatives_annot_Filtered2_R64.2.1.vcf.gz | sed 's/\//\|/g' \
| awk -F"\t" -v OFS="\t" 'function GSUB(F) {gsub(/[|]/,"\t",$F)} \
{GSUB(3);GSUB(4);GSUB(5);GSUB(6);GSUB(7);GSUB(8);GSUB(9);GSUB(10);GSUB(11);GSUB(12);GSUB(13)}1' \
| awk '{if (length($3)==1) print $0}' > relatives_annot_Filtered2_01.tab
```
Reading file with all variant genotypes & filtering only synonymous
###Code
fh = gzip.open("./input_files/relatives_annot_Filtered2_01.tab.gz","rt").readlines()
#fh = gzip.open("./input_files/sample_annot_Filtered2_01.tab.gz","rt").readlines()
d = {'_'.join(ele.split()[:2]):''.join(ele.split()[2:]) for ele in fh}
k = {ele:d[ele] for ele in D.keys()}
print("Number of filtered synonymous variants = "+str(len(list(k.keys()))))
###Output
Number of filtered synonymous variants = 51608
###Markdown
Calculating fixed differences between Jean-Talon & reference (outgroup)
###Code
jt = [] # fixed variants relative to reference
jt_tot = [] # all synonymous variants with no missing data
for pos in k.keys():
newset = ''.join(k[pos][-4:])
if (newset.count('.') == 0):
jt_tot.append(newset)
if (newset.count('0') == 0):
jt.append(newset)
k_rate = len(jt)/jean_talon_synonymous_sites
t_out = k_rate/(2*mutation_rate)
print("Number of fixed synonymous differences between Jean-Talon & reference is "+str(len(jt)))
print("Divergence rate between Jean-Talon & reference is "+str(k_rate))
print("Number of generations since divergence of Jean-Talon with reference is "+str(t_out))
###Output
Number of fixed synonymous differences between Jean-Talon & reference is 7697
Divergence rate between Jean-Talon & reference is 0.004805181240205354
Number of generations since divergence of Jean-Talon with reference is 14386770.180255553
###Markdown
Calculating time of split of Jean-Talon with relatives, relative to time of split with reference
###Code
S = defaultdict(list)
S2 = defaultdict(list)
for duplex_index in range(len(T)):
sec_strain = relatives[duplex_index]
print(sec_strain)
n = []
for pos in k.keys():
newset = ''.join([k[pos][i] for i in T[duplex_index]])
if newset.count('.') == 0:
n.append(newset)
#n[:3]
P = []
P2 = []
C2_aba = []
C2_baa = []
for site in n:
# taxon 1 and taxon 2 bases are given by randomly selecting one base from all alleles in a given position
#anc = random.sample(list(site[0:4]),1)
sp1 = random.sample(list(site[0:4]),1)[0]
sp2 = random.sample(list(site[4:8]),1)[0]
#if sp1.intersection(set(anc)): p1 = "A"
if sp1 == '1': p1 = "B"
else: p1 = "A"
if sp2 == '1': p2 = "B"
else: p2 = "A"
pat = p1+p2+"A"
P.append(pat)
# derived bases in taxon 1 and 2 are all bases with derived mutations of any frequency
sp1 = set(list(site[0:4]))
sp2 = set(list(site[4:8]))
if sp1.intersection(set('1')): p1 = "B"
else: p1 = "A"
if sp2.intersection(set('1')): p2 = "B"
else: p2 = "A"
pat2 = p1+p2+"A"
P2.append(pat2)
if pat2 == "ABA":
C2_aba.append(list(sp2).count("1")/4.)
elif pat2 == "BAA":
C2_baa.append(list(sp1).count("1")/4.)
nnn = lengths.loc[lengths["strain"]==sec_strain,'length'].values[0]
aba = P.count("ABA")
baa = P.count("BAA")
Ss1 = aba/float(nnn)
Ss2 = baa/float(nnn)
S['strain'].append(sec_strain)
S['ABA'].append(aba)
S['BAA'].append(baa)
S['Ss_ABA'].append(Ss1)
S['Ss_BAA'].append(Ss2)
S['mean_Ss'].append(np.mean([Ss1, Ss2]))
aba2 = P2.count("ABA")
baa2 = P2.count("BAA")
Ss1 = aba2/float(nnn)
Ss2 = baa2/float(nnn)
S2['strain'].append(sec_strain)
S2['ABA'].append(aba2)
S2['BAA'].append(baa2)
S2['Ss1'].append(Ss1)
S2['Ss2'].append(Ss2)
S2['meanSs'].append(np.mean([Ss1, Ss2]))
# rate of aba and baa patterns is multiplied by frequency of corresponding derived mutations
Ss1_freq = sum(C2_aba)/float(nnn)
Ss2_freq = sum(C2_baa)/float(nnn)
S2['Ss1_freq'].append(Ss1_freq)
S2['Ss2_freq'].append(Ss2_freq)
S2['meanSs_freq'].append(np.mean([Ss1_freq, Ss2_freq]))
dS1 = pd.DataFrame(S)
dS2 = pd.DataFrame(S2)
###Output
A.Muntons
A.S-33
A.T-58
BE005
CFI
CFN
CFP
###Markdown
Calculating divergence times
###Code
dS1['t_aba_150'] = (dS1['Ss_ABA']*t_out)/150
dS1['t_baa_150'] = (dS1['Ss_BAA']*t_out)/150
dS1['t_150'] = (dS1['mean_Ss']*t_out)/150
dS1['t_aba_2920'] = (dS1['Ss_ABA']*t_out)/2920
dS1['t_baa_2920'] = (dS1['Ss_BAA']*t_out)/2920
dS1['t_2920'] = (dS1['mean_Ss']*t_out)/2920
dS1['t_out'] = t_out
dS1['mut_rate'] = mutation_rate
dF = dS1.loc[dS1['strain'].isin(["A.Muntons","A.S-33","BE005","CFI","CFN"]),:]
dM = pd.merge(dF,lengths,on=['strain'],how='left')
dM
###Output
_____no_output_____
###Markdown
Saving table
###Code
dM.to_csv("./output/calcRelativeTime_Scer.out",sep="\t",index=False,header=True)
#dM.to_csv("calcRelativeTime_Scer_sample.out",sep="\t",index=False,header=True)
###Output
_____no_output_____ |
lecture07.big.data/lecture07.3.trends.ipynb | ###Markdown
We are only interested in the year range from 2002 to 2015
###Code
yrs = [str(yr) for yr in range(2002, 2016)]
###Output
_____no_output_____
###Markdown
Let's filter for the following records: 1. exports only 2. trade reported by the EU28 and the UK with partners **outside** the EU28 (`EXT_EU28`)
###Code
export_df = df[(df['trade_type'] == 'Export') &
(df['partner'] == 'EXT_EU28')
].loc[['EU28', 'UK']][yrs]
export_df.head(4)
###Output
_____no_output_____
###Markdown
Let's transpose this to get 2 columns of series data:
###Code
export_df = export_df.T
export_df.head(4)
###Output
_____no_output_____
###Markdown
Let's rename the columns to make clear that they hold exports from each of these reporters to partners outside the EU28:
###Code
export_df = export_df.rename(columns={'EU28': 'EU28_TO_EXT', 'UK': 'UK_TO_EXT'})
export_df.head(4)
###Output
_____no_output_____
###Markdown
Now, let's get the export series from the UK and the EU28 to partners **inside** the EU28
###Code
int_df = df[(df['trade_type'] == 'Export') &
(df['partner'] == 'EU28')
].loc[['EU28', 'UK']][yrs]
int_df.head(4)
int_df = int_df.T
int_df.head(4)
###Output
_____no_output_____
###Markdown
Let's now combine these 2 new columns with the external-export columns from above
###Code
export_df = pd.concat([export_df, int_df], axis=1)
export_df.head(4)
export_df = export_df.rename(columns={'EU28': 'EU28_TO_INT',
'UK' : 'UK_TO_INT'})
export_df.head(4)
###Output
_____no_output_____
###Markdown
Trends Let's now plot to see any trends
###Code
export_df.plot(legend=False)
export_df.plot()
###Output
_____no_output_____
###Markdown
Looks like there is significantly more external trade outside EU28.Let's just look at the UK data
###Code
export_df[['UK_TO_EXT', 'UK_TO_INT']].plot()
###Output
_____no_output_____
###Markdown
Interactive Plot
###Code
from bokeh.plotting import figure, output_file, show
from bokeh.layouts import gridplot
TOOLS = 'resize,pan,wheel_zoom,box_zoom,reset,hover'
p = figure(tools=TOOLS, x_range=(2002, 2015), y_range=(200000, 500000),
title="UK Import Export Trends from 2002-2014")
p.yaxis.axis_label = "Value in $1000"
p.line(yrs, export_df['UK_TO_EXT'], color='#A6CEE3', legend='UK_TO_EXT')
p.line(yrs, export_df['UK_TO_INT'], color='#B2DF8A', legend='UK_TO_INT')
p.legend.location = 'top_left'
output_file("uk_grade.html", title="UK Trade from 2002-2014")
# open a browser
show(p)
###Output
_____no_output_____
###Markdown
Outliers Let's look at % change. First, let's remove the aggregate sum (identified by the aggregate key 'EU28'). Remember that we have set the index to "geo" already.
###Code
df = df[~ df.index.isin(['EU28'])]
df.head(4)
pct_change_df = df.copy()
###Output
_____no_output_____
###Markdown
Recall that the `yrs` entries are of type "str" even though they represent year numbers.
###Code
for yr in yrs:
pct_change_df[yr] = (df[yr] - df[str(int(yr)-1)]) / df[str(int(yr)-1)]
pct_change_df.head(4)
###Output
_____no_output_____
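###Markdown
As a side note, pandas can compute the same year-over-year change without an explicit loop. A sketch (it assumes a '2001' column exists, which the loop above implicitly requires, and that the year columns are in ascending order):
###Code
# Equivalent % change using pandas' built-in pct_change:
# transpose so years run down the rows, take the change, transpose back.
year_cols = [str(y) for y in range(2001, 2016)]
alt_pct = df[year_cols].T.pct_change().T.drop(columns=['2001'])
alt_pct.head(4)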
###Markdown
What is the year with the largest spread in % change?
###Code
[(yr, abs(pct_change_df[yr].max() - pct_change_df[yr].min(0))) for yr in yrs]
###Output
_____no_output_____
###Markdown
2010 seems to have the biggest spread in % change among recent years. Let's find some outliers using standard deviations.
###Code
pct_change_df['2010'].std()
pct_change_df['2010'].mean()
###Output
_____no_output_____
###Markdown
Let's define outliers as those more than 2 standard deviations from the mean.
###Code
pct_change_df[pct_change_df['2010'].abs() >=
(pct_change_df['2010'].mean() + 2*pct_change_df['2010'].std())]
###Output
_____no_output_____
###Markdown
Looks like these 3 countries are outliers, defined as having an absolute % change more than 2 standard deviations from the mean. Let's use sorting to see the range of values for 2010.
###Code
pct_change_df['2010'].sort_values()
###Output
_____no_output_____
###Markdown
There are very few countries with negative % change values for 2010. Let's separate out those values.
###Code
pct_change_df[pct_change_df['2010'] < 0]
###Output
_____no_output_____
###Markdown
Looks like Greece, Hungary, and Ireland all shrank in imports for 2010. Luxembourg shrank in both imports & exports in 2010. Also, it looks like very few countries have % change values > 0.4. Let's examine those values for 2010.
###Code
pct_change_df[pct_change_df['2010'] > 0.4]
###Output
_____no_output_____ |
notebooks/Lesson 1 - Basic Image Operations.ipynb | ###Markdown
1. Loading and Saving Images
###Code
# Accessing and Modifying pixel values
# loads an image
image = cv2.imread('../img/beach.png') # OpenCV reads images in as B, G, R
image = np.flip(image, axis = 2) # to Re-order channels as R, G, B for matplotlib renderer
# It returns a tuple of number of rows, columns and channels (if image is color)
image.shape
def show_image(image, cmap = None, fig_size = (10, 10)):
fig, ax = plt.subplots(figsize=fig_size)
ax.imshow(image, cmap = cmap)
ax.axis('off')
plt.show()
show_image(image)
# accessing image section and converting it to white
image[10:50, 100:140] = [255, 255, 255]
show_image(image)
cv2.imwrite('edited.png', np.flip(image, axis=2)) # flip back to B, G, R before writing, since OpenCV expects that channel order
###Output
_____no_output_____
###Markdown
EXERCISE: Load an image using CV2 and use `show_image()` function to show it
###Code
# TODO: What happens if we don't flip the image channels before showing it using matplotlib?
image = cv2.imread('../img/beach.png')
show_image(image)
###Output
_____no_output_____
###Markdown
EXERCISE: Load an image using CV2, draw a white rectangle on it then save it to disk using CV2
###Code
# TODO: Write your code below
image[20:30, 100:200] = [255, 255, 255]
cv2.imwrite('./edited.png', image)
###Output
_____no_output_____
###Markdown
2. Colour Channels 2.1 Order of Colour Channels
###Code
shapes_image = "../img/beach.png"
# reads image using matplotlib
shapes_matpotlib = plt.imread(shapes_image) # R, G, B
show_image(shapes_matpotlib)
# The order of colour channels read in is important - Notice the colour changes
shapes_cv2 = cv2.imread(shapes_image)
show_image(shapes_cv2)
show_image(np.flip(shapes_cv2, axis = 2))
###Output
_____no_output_____
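###Markdown
Instead of flipping the array manually, OpenCV also provides a dedicated colour-space conversion; a small sketch of the equivalent call:
###Code
# cv2.cvtColor is the idiomatic way to reorder B, G, R into R, G, B
shapes_rgb = cv2.cvtColor(shapes_cv2, cv2.COLOR_BGR2RGB)
show_image(shapes_rgb)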
###Markdown
2.2 Flipping matrices with numpy
###Code
x = np.array([[[1,2, 3], [2, 3, 4], [3, 4, 5]],
[[1,2, 3], [2, 3, 4], [3, 4, 6]],
[[1,2, 3], [2, 3, 4], [3, 4, 7]]])
x
np.flip(x, axis =0 )
np.flip(x, axis =1 )
np.flip(x, axis =2 )
###Output
_____no_output_____
###Markdown
2.3 Splitting colour channels with matplotlib
###Code
image = plt.imread('../img/beach.png')
show_image(image)
channels_matplotlib = [image[:, : , i] for i in range(3)]
names = ['Red', 'Green', 'Blue']
for name, channel in zip(names, channels_matplotlib):
print(name)
show_image(channel, cmap='gray')
###Output
_____no_output_____
###Markdown
2.4 Splitting colour channels with CV2
###Code
image = cv2.imread('../img/beach.png') # B, G, R -> this order is required for showing images with cv2.imshow().
image = np.flip(image, axis = 2) # R, G, B -> This order is required for showing images with plt.imshow().
show_image(image)
cv2_channels = cv2.split(image)
for name, channel in zip(names, cv2_channels):
print(name)
show_image(channel, cmap='gray')
###Output
_____no_output_____
###Markdown
EXERCISE: Write a script that uses argument parser to load, crop and display an image with CV2 then saves it into a file
###Code
# Your code below
# %load ../solutions/parsing_commands.py
import argparse
import cv2
# Construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument('-i', '--image', required=True, help = 'Path to the image')
ap.add_argument('-o', '--output', required=True, help = 'Path to saving the image')
args = vars(ap.parse_args())
image = cv2.imread(args['image'])
cv2.imshow('Original', image)
cv2.waitKey(0)
# crop the image by slicing the matrix
cropped = image[100: 1000, 10:300]
cv2.imshow('Edited', cropped)
cv2.waitKey(0)
# save the image to specified path
cv2.imwrite(f'{args["output"]}/cropped_coding.jpg', cropped)
###Output
_____no_output_____
###Markdown
Homework 1: Write a function that reads in an image using either matplotlib or CV2 and shows it in this notebook using matplotlib
###Code
# TODO: Write your code below
###Output
_____no_output_____ |
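###Markdown
One possible solution for Homework 1 (a sketch; it reuses the show_image() helper defined earlier and flips the channels only when the image was read with CV2):
###Code
def read_and_show(path, use_cv2=True):
    """Read an image with CV2 or matplotlib and display it with matplotlib."""
    if use_cv2:
        img = cv2.imread(path)      # B, G, R
        img = np.flip(img, axis=2)  # reorder to R, G, B for matplotlib
    else:
        img = plt.imread(path)      # already R, G, B
    show_image(img)
    return img

read_and_show('../img/beach.png')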
Resnet-LSTM-with-attention/test.ipynb | ###Markdown
Import Library
###Code
import sys
sys.path.append('../data')
import os
import gc
import re
import math
import time
import random
import shutil
import pickle
from pathlib import Path
from contextlib import contextmanager
from collections import defaultdict, Counter
import scipy as sp
import numpy as np
import pandas as pd
from tqdm.auto import tqdm
import Levenshtein
from sklearn import preprocessing
from sklearn.model_selection import StratifiedKFold, GroupKFold, KFold
from functools import partial
import cv2
from PIL import Image
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import Adam, SGD
import torchvision.models as models
from torch.nn.parameter import Parameter
from torch.utils.data import DataLoader, Dataset
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts, CosineAnnealingLR, ReduceLROnPlateau
from albumentations import (
Compose, OneOf, Normalize, Resize, RandomResizedCrop, RandomCrop, HorizontalFlip, VerticalFlip,
RandomBrightness, RandomContrast, RandomBrightnessContrast, Rotate, ShiftScaleRotate, Cutout,
IAAAdditiveGaussianNoise, Transpose, Blur
)
from albumentations.pytorch import ToTensorV2
from albumentations import ImageOnlyTransform
import timm
import warnings
warnings.filterwarnings('ignore')
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
###Output
_____no_output_____
###Markdown
Utils
###Code
# ====================================================
# CFG
# ====================================================
class CFG:
debug=False
max_len=275
print_freq=500
num_workers=16
model_name='resnet34'
size=224
scheduler='CosineAnnealingLR' # ['ReduceLROnPlateau', 'CosineAnnealingLR', 'CosineAnnealingWarmRestarts']
epochs=8 # not to exceed 9h
#factor=0.2 # ReduceLROnPlateau
#patience=4 # ReduceLROnPlateau
#eps=1e-6 # ReduceLROnPlateau
T_max=8 # CosineAnnealingLR
#T_0=4 # CosineAnnealingWarmRestarts
encoder_lr=1e-4
decoder_lr=4e-4
min_lr=1e-6
batch_size=128
weight_decay=1e-6
gradient_accumulation_steps=1
max_grad_norm=5
attention_dim=256
embed_dim=256
decoder_dim=512
dropout=0.5
seed=42
n_fold=5
trn_fold= [0] # [0, 1, 2, 3, 4]
train=True
def get_score(y_true, y_pred):
scores = []
for true, pred in zip(y_true, y_pred):
score = Levenshtein.distance(true, pred)
scores.append(score)
avg_score = np.mean(scores)
return avg_score
def seed_torch(seed=42):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
def bms_collate(batch):
imgs, labels, label_lengths = [], [], []
for data_point in batch:
imgs.append(data_point[0])
labels.append(data_point[1])
label_lengths.append(data_point[2])
labels = pad_sequence(labels, batch_first=True, padding_value=tokenizer.stoi["<pad>"])
return torch.stack(imgs), labels, torch.stack(label_lengths).reshape(-1, 1)
seed_torch(seed=CFG.seed)
###Output
_____no_output_____
###Markdown
Tokenizer
###Code
class Tokenizer(object):
def __init__(self):
self.stoi = {}
self.itos = {}
def __len__(self):
return len(self.stoi)
def fit_on_texts(self, texts):
vocab = set()
for text in texts:
vocab.update(text.split(' '))
vocab = sorted(vocab)
vocab.append('<sos>')
vocab.append('<eos>')
vocab.append('<pad>')
for i, s in enumerate(vocab):
self.stoi[s] = i
self.itos = {item[1]: item[0] for item in self.stoi.items()}
def text_to_sequence(self, text):
sequence = []
sequence.append(self.stoi['<sos>'])
for s in text.split(' '):
sequence.append(self.stoi[s])
sequence.append(self.stoi['<eos>'])
return sequence
def texts_to_sequences(self, texts):
sequences = []
for text in texts:
sequence = self.text_to_sequence(text)
sequences.append(sequence)
return sequences
def sequence_to_text(self, sequence):
return ''.join(list(map(lambda i: self.itos[i], sequence)))
def sequences_to_texts(self, sequences):
texts = []
for sequence in sequences:
text = self.sequence_to_text(sequence)
texts.append(text)
return texts
def predict_caption(self, sequence):
caption = ''
for i in sequence:
if i == self.stoi['<eos>'] or i == self.stoi['<pad>']:
break
caption += self.itos[i]
return caption
def predict_captions(self, sequences):
captions = []
for sequence in sequences:
caption = self.predict_caption(sequence)
captions.append(caption)
return captions
tokenizer = torch.load('tokenizer2.pth')
print(f"tokenizer.stoi: {tokenizer.stoi}")
###Output
tokenizer.stoi: {'(': 0, ')': 1, '+': 2, ',': 3, '-': 4, '/b': 5, '/c': 6, '/h': 7, '/i': 8, '/m': 9, '/s': 10, '/t': 11, '0': 12, '1': 13, '10': 14, '100': 15, '101': 16, '102': 17, '103': 18, '104': 19, '105': 20, '106': 21, '107': 22, '108': 23, '109': 24, '11': 25, '110': 26, '111': 27, '112': 28, '113': 29, '114': 30, '115': 31, '116': 32, '117': 33, '118': 34, '119': 35, '12': 36, '120': 37, '121': 38, '122': 39, '123': 40, '124': 41, '125': 42, '126': 43, '127': 44, '128': 45, '129': 46, '13': 47, '130': 48, '131': 49, '132': 50, '133': 51, '134': 52, '135': 53, '136': 54, '137': 55, '138': 56, '139': 57, '14': 58, '140': 59, '141': 60, '142': 61, '143': 62, '144': 63, '145': 64, '146': 65, '147': 66, '148': 67, '149': 68, '15': 69, '150': 70, '151': 71, '152': 72, '153': 73, '154': 74, '155': 75, '156': 76, '157': 77, '158': 78, '159': 79, '16': 80, '161': 81, '163': 82, '165': 83, '167': 84, '17': 85, '18': 86, '19': 87, '2': 88, '20': 89, '21': 90, '22': 91, '23': 92, '24': 93, '25': 94, '26': 95, '27': 96, '28': 97, '29': 98, '3': 99, '30': 100, '31': 101, '32': 102, '33': 103, '34': 104, '35': 105, '36': 106, '37': 107, '38': 108, '39': 109, '4': 110, '40': 111, '41': 112, '42': 113, '43': 114, '44': 115, '45': 116, '46': 117, '47': 118, '48': 119, '49': 120, '5': 121, '50': 122, '51': 123, '52': 124, '53': 125, '54': 126, '55': 127, '56': 128, '57': 129, '58': 130, '59': 131, '6': 132, '60': 133, '61': 134, '62': 135, '63': 136, '64': 137, '65': 138, '66': 139, '67': 140, '68': 141, '69': 142, '7': 143, '70': 144, '71': 145, '72': 146, '73': 147, '74': 148, '75': 149, '76': 150, '77': 151, '78': 152, '79': 153, '8': 154, '80': 155, '81': 156, '82': 157, '83': 158, '84': 159, '85': 160, '86': 161, '87': 162, '88': 163, '89': 164, '9': 165, '90': 166, '91': 167, '92': 168, '93': 169, '94': 170, '95': 171, '96': 172, '97': 173, '98': 174, '99': 175, 'B': 176, 'Br': 177, 'C': 178, 'Cl': 179, 'D': 180, 'F': 181, 'H': 182, 'I': 183, 'N': 184, 'O': 185, 'P': 186, 'S': 187, 'Si': 188, 'T': 189, '<sos>': 190, '<eos>': 191, '<pad>': 192}
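###Markdown
A quick round-trip sanity check of the tokenizer (the token string below is an arbitrary example built from the vocabulary printed above, not a real label from the dataset):
###Code
example_text = 'C 13 H 20 O S /c 1 - 9'
seq = tokenizer.text_to_sequence(example_text)
print(seq)                                 # ids, wrapped in <sos> ... <eos>
print(tokenizer.predict_caption(seq[1:]))  # drop <sos>; decoding stops at <eos>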
###Markdown
CV Split
###Code
# `train` is assumed to be a DataFrame prepared in an earlier (not shown) cell,
# with at least the 'file_path', 'InChI_text', and 'InChI_length' columns used below.
folds = train.copy()
Fold = StratifiedKFold(n_splits=CFG.n_fold, shuffle=True, random_state=CFG.seed)
for n, (train_index, val_index) in enumerate(Fold.split(folds, folds['InChI_length'])):
folds.loc[val_index, 'fold'] = int(n)
folds['fold'] = folds['fold'].astype(int)
print(folds.groupby(['fold']).size())
###Output
_____no_output_____
###Markdown
Dataset
###Code
# ====================================================
# Dataset
# ====================================================
class TrainDataset(Dataset):
def __init__(self, df, tokenizer, transform=None):
super().__init__()
self.df = df
self.tokenizer = tokenizer
self.file_paths = df['file_path'].values
self.labels = df['InChI_text'].values
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_path = self.file_paths[idx]
image = cv2.imread(file_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32)
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
label = self.labels[idx]
label = self.tokenizer.text_to_sequence(label)
label_length = len(label)
label_length = torch.LongTensor([label_length])
return image, torch.LongTensor(label), label_length
class TestDataset(Dataset):
def __init__(self, df, transform=None):
super().__init__()
self.df = df
self.file_paths = df['file_path'].values
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_path = self.file_paths[idx]
image = cv2.imread(file_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32)
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
return image
###Output
_____no_output_____
###Markdown
 Transform
###Code
def get_transforms(*, data):
if data == 'train':
return Compose([
Resize(CFG.size, CFG.size),
Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
),
ToTensorV2(),
])
elif data == 'valid':
return Compose([
Resize(CFG.size, CFG.size),
Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
),
ToTensorV2(),
])
from matplotlib import pyplot as plt
train_dataset = TrainDataset(train, tokenizer, transform=get_transforms(data='train'))
for i in range(1):
image, label, label_length = train_dataset[i]
text = tokenizer.sequence_to_text(label.numpy())
plt.imshow(image.transpose(0, 1).transpose(1, 2))
plt.title(f'label: {label} text: {text} label_length: {label_length}')
plt.show()
###Output
_____no_output_____
###Markdown
Model Helper functions
###Code
# ====================================================
# Helper functions
# ====================================================
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def asMinutes(s):
m = math.floor(s / 60)
s -= m * 60
return '%dm %ds' % (m, s)
def timeSince(since, percent):
now = time.time()
s = now - since
es = s / (percent)
rs = es - s
return '%s (remain %s)' % (asMinutes(s), asMinutes(rs))
###Output
_____no_output_____
###Markdown
Train and Validation
###Code
def train_fn(train_loader, encoder, decoder, criterion,
encoder_optimizer, decoder_optimizer, epoch,
encoder_scheduler, decoder_scheduler, device):
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
# switch to train mode
encoder.train()
decoder.train()
start = end = time.time()
global_step = 0
for step, (images, labels, label_lengths) in enumerate(train_loader):
# measure data loading time
data_time.update(time.time() - end)
images = images.to(device)
labels = labels.to(device)
label_lengths = label_lengths.to(device)
batch_size = images.size(0)
features = encoder(images)
predictions, caps_sorted, decode_lengths, alphas, sort_ind = decoder(features, labels, label_lengths)
targets = caps_sorted[:, 1:]
predictions = pack_padded_sequence(predictions, decode_lengths, batch_first=True).data
targets = pack_padded_sequence(targets, decode_lengths, batch_first=True).data
loss = criterion(predictions, targets)
# record loss
losses.update(loss.item(), batch_size)
if CFG.gradient_accumulation_steps > 1:
loss = loss / CFG.gradient_accumulation_steps
loss.backward()
encoder_grad_norm = torch.nn.utils.clip_grad_norm_(encoder.parameters(), CFG.max_grad_norm)
decoder_grad_norm = torch.nn.utils.clip_grad_norm_(decoder.parameters(), CFG.max_grad_norm)
if (step + 1) % CFG.gradient_accumulation_steps == 0:
encoder_optimizer.step()
decoder_optimizer.step()
encoder_optimizer.zero_grad()
decoder_optimizer.zero_grad()
global_step += 1
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if step % CFG.print_freq == 0 or step == (len(train_loader)-1):
print('Epoch: [{0}][{1}/{2}] '
'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
'Elapsed {remain:s} '
'Loss: {loss.val:.4f}({loss.avg:.4f}) '
'Encoder Grad: {encoder_grad_norm:.4f} '
'Decoder Grad: {decoder_grad_norm:.4f} '
#'Encoder LR: {encoder_lr:.6f} '
#'Decoder LR: {decoder_lr:.6f} '
.format(
epoch+1, step, len(train_loader), batch_time=batch_time,
data_time=data_time, loss=losses,
remain=timeSince(start, float(step+1)/len(train_loader)),
encoder_grad_norm=encoder_grad_norm,
decoder_grad_norm=decoder_grad_norm,
#encoder_lr=encoder_scheduler.get_lr()[0],
#decoder_lr=decoder_scheduler.get_lr()[0],
))
return losses.avg
def valid_fn(valid_loader, encoder, decoder, tokenizer, criterion, device):
batch_time = AverageMeter()
data_time = AverageMeter()
# switch to evaluation mode
encoder.eval()
decoder.eval()
text_preds = []
start = end = time.time()
for step, (images) in enumerate(valid_loader):
# measure data loading time
data_time.update(time.time() - end)
images = images.to(device)
batch_size = images.size(0)
with torch.no_grad():
features = encoder(images)
predictions = decoder.predict(features, CFG.max_len, tokenizer)
predicted_sequence = torch.argmax(predictions.detach().cpu(), -1).numpy()
_text_preds = tokenizer.predict_captions(predicted_sequence)
text_preds.append(_text_preds)
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if step % CFG.print_freq == 0 or step == (len(valid_loader)-1):
print('EVAL: [{0}/{1}] '
'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
'Elapsed {remain:s} '
.format(
step, len(valid_loader), batch_time=batch_time,
data_time=data_time,
remain=timeSince(start, float(step+1)/len(valid_loader)),
))
text_preds = np.concatenate(text_preds)
return text_preds
###Output
_____no_output_____
###Markdown
Main Loop
###Code
# ====================================================
# Train loop
# ====================================================
def train_loop(folds, fold):
LOGGER.info(f"========== fold: {fold} training ==========")
# ====================================================
# loader
# ====================================================
trn_idx = folds[folds['fold'] != fold].index
val_idx = folds[folds['fold'] == fold].index
train_folds = folds.loc[trn_idx].reset_index(drop=True)
valid_folds = folds.loc[val_idx].reset_index(drop=True)
valid_labels = valid_folds['InChI'].values
train_dataset = TrainDataset(train_folds, tokenizer, transform=get_transforms(data='train'))
valid_dataset = TestDataset(valid_folds, transform=get_transforms(data='valid'))
train_loader = DataLoader(train_dataset,
batch_size=CFG.batch_size,
shuffle=True,
num_workers=CFG.num_workers,
pin_memory=True,
drop_last=True,
collate_fn=bms_collate)
valid_loader = DataLoader(valid_dataset,
batch_size=CFG.batch_size,
shuffle=False,
num_workers=CFG.num_workers,
pin_memory=True,
drop_last=False)
# ====================================================
# scheduler
# ====================================================
def get_scheduler(optimizer):
if CFG.scheduler=='ReduceLROnPlateau':
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=CFG.factor, patience=CFG.patience, verbose=True, eps=CFG.eps)
elif CFG.scheduler=='CosineAnnealingLR':
scheduler = CosineAnnealingLR(optimizer, T_max=CFG.T_max, eta_min=CFG.min_lr, last_epoch=-1)
elif CFG.scheduler=='CosineAnnealingWarmRestarts':
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=CFG.T_0, T_mult=1, eta_min=CFG.min_lr, last_epoch=-1)
return scheduler
# ====================================================
# model & optimizer
# ====================================================
encoder = Encoder(CFG.model_name, pretrained=True)
encoder.to(device)
encoder_optimizer = Adam(encoder.parameters(), lr=CFG.encoder_lr, weight_decay=CFG.weight_decay, amsgrad=False)
encoder_scheduler = get_scheduler(encoder_optimizer)
decoder = DecoderWithAttention(attention_dim=CFG.attention_dim,
embed_dim=CFG.embed_dim,
decoder_dim=CFG.decoder_dim,
vocab_size=len(tokenizer),
dropout=CFG.dropout,
device=device)
decoder.to(device)
decoder_optimizer = Adam(decoder.parameters(), lr=CFG.decoder_lr, weight_decay=CFG.weight_decay, amsgrad=False)
decoder_scheduler = get_scheduler(decoder_optimizer)
# ====================================================
# loop
# ====================================================
criterion = nn.CrossEntropyLoss(ignore_index=tokenizer.stoi["<pad>"])
best_score = np.inf
best_loss = np.inf
for epoch in range(CFG.epochs):
start_time = time.time()
# train
avg_loss = train_fn(train_loader, encoder, decoder, criterion,
encoder_optimizer, decoder_optimizer, epoch,
encoder_scheduler, decoder_scheduler, device)
# eval
text_preds = valid_fn(valid_loader, encoder, decoder, tokenizer, criterion, device)
text_preds = [f"InChI=1S/{text}" for text in text_preds]
LOGGER.info(f"labels: {valid_labels[:5]}")
LOGGER.info(f"preds: {text_preds[:5]}")
# scoring
score = get_score(valid_labels, text_preds)
if isinstance(encoder_scheduler, ReduceLROnPlateau):
encoder_scheduler.step(score)
elif isinstance(encoder_scheduler, CosineAnnealingLR):
encoder_scheduler.step()
elif isinstance(encoder_scheduler, CosineAnnealingWarmRestarts):
encoder_scheduler.step()
if isinstance(decoder_scheduler, ReduceLROnPlateau):
decoder_scheduler.step(score)
elif isinstance(decoder_scheduler, CosineAnnealingLR):
decoder_scheduler.step()
elif isinstance(decoder_scheduler, CosineAnnealingWarmRestarts):
decoder_scheduler.step()
elapsed = time.time() - start_time
LOGGER.info(f'Epoch {epoch+1} - avg_train_loss: {avg_loss:.4f} time: {elapsed:.0f}s')
LOGGER.info(f'Epoch {epoch+1} - Score: {score:.4f}')
if score < best_score:
best_score = score
LOGGER.info(f'Epoch {epoch+1} - Save Best Score: {best_score:.4f} Model')
torch.save({'encoder': encoder.state_dict(),
'encoder_optimizer': encoder_optimizer.state_dict(),
'encoder_scheduler': encoder_scheduler.state_dict(),
'decoder': decoder.state_dict(),
'decoder_optimizer': decoder_optimizer.state_dict(),
'decoder_scheduler': decoder_scheduler.state_dict(),
'text_preds': text_preds,
},
OUTPUT_DIR+f'{CFG.model_name}_fold{fold}_best.pth')
def main():
"""
Prepare: 1.train 2.folds
"""
if CFG.train:
# train
oof_df = pd.DataFrame()
for fold in range(CFG.n_fold):
if fold in CFG.trn_fold:
train_loop(folds, fold)
###Output
_____no_output_____ |
Part1 Machine Learning Basics/5 Clustering/k_means_practice.ipynb | ###Markdown
 K-Means Algorithm in Practice> Cluster words using the text dataset in [corpus_train.txt](./corpus_train.txt)
###Code
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist
import numpy as np
###Output
_____no_output_____
###Markdown
 Data preprocessing: vectorize the input data
###Code
def tfidf_vector(corpus_path):
    """Vectorization function"""
    corpus_train = []  # used for feature extraction
target_train = []
for line in open(corpus_path):
line = line.strip().split('\t')
if len(line) == 2:
words = line[1]
category = line[0]
target_train.append(category)
corpus_train.append(words)
print ("build train-corpus done!!")
    count_v1 = CountVectorizer(max_df=0.4, min_df=0.01)  # ignore terms whose document frequency is above/below these thresholds
    counts_train = count_v1.fit_transform(corpus_train)  # document-term count matrix
print(count_v1.get_feature_names())
word_dict = {}
for index, word in enumerate(count_v1.get_feature_names()):
word_dict[index] = word
print('Shape of train is', repr(counts_train.shape))
    # convert the counts into normalized term frequencies
    tfidftransformer = TfidfTransformer()
    tfidf_train = tfidftransformer.fit_transform(counts_train)  # normalized tf-idf
return tfidf_train, word_dict
###Output
_____no_output_____
###Markdown
 K-Means algorithm clustering code
###Code
def cluster_kmeans(tfidf_train, word_dict, cluster_doc, cluster_keywords, num_clusters):
f_doc = open(cluster_doc, 'w+')
km = KMeans(n_clusters=num_clusters)
km.fit(tfidf_train)
clusters = km.labels_.tolist()
cluster_dict = {}
order_centroids = km.cluster_centers_.argsort()[:,::-1]
doc = 1
for cluster in clusters:
f_doc.write(str(str(doc)) + ',' + str(cluster) + '\n')
doc = doc + 1
if cluster not in cluster_dict:
cluster_dict[cluster] = 1
else:
cluster_dict[cluster] = cluster_dict[cluster] + 1
f_doc.close()
cluster = 1
f_clusterwords = open(cluster_keywords, 'w+')
    for ind in order_centroids:  # pick the top 50 words for each cluster
words = []
for index in ind[:50]:
words.append(word_dict[index])
print(cluster, ','.join(words))
f_clusterwords.write(str(cluster) + '\t' + ','.join(words) + '\n')
cluster = cluster + 1
print('=====' * 50)
f_clusterwords.close()
###Output
_____no_output_____
###Markdown
 Choosing the value of K
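 The function below computes, for each candidate k, the average distance from every document vector to its nearest cluster centre (the quantity appended to `meandistortions`):
 $$ J(k) = \frac{1}{n} \sum_{i=1}^{n} \min_{1 \le j \le k} \lVert x_i - \mu_j \rVert_2 $$
 and plots it against k; the "elbow" of this curve is a common heuristic for choosing the number of clusters.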
###Code
def best_kmeans(tfidf_matrix, word_dict):
K = range(1, 10)
meandistortions = []
for k in K:
        print(k, '=====' * 5)
kmeans = KMeans(n_clusters=k)
kmeans.fit(tfidf_matrix)
meandistortions.append(sum(np.min(cdist(tfidf_matrix.toarray(), kmeans.cluster_centers_, 'euclidean'), axis=1)) / tfidf_matrix.shape[0])
plt.plot(K, meandistortions, 'bx-')
plt.grid(True)
plt.xlabel('Number of clusters')
plt.ylabel('Average within-cluster sum of squares')
plt.title('Elbow for Kmeans clustering')
plt.show()
###Output
_____no_output_____
###Markdown
 Start training
###Code
corpus_train = "corpus_train.txt"
cluster_docs = "cluster_result_document.txt"
cluster_keywords = "cluster_result_keyword.txt"
num_clusters = 7
tfidf_train,word_dict = tfidf_vector(corpus_train)
best_kmeans(tfidf_train,word_dict)
cluster_kmeans(tfidf_train,word_dict,cluster_docs,cluster_keywords,num_clusters)
###Output
build train-corpus done!!
['abaaoud', 'abdeslam', 'act', 'action', 'added', 'afghanistan', 'africa', 'air', 'airstrikes', 'al', 'america', 'american', 'anti', 'area', 'armed', 'army', 'arrested', 'asked', 'assad', 'atrocity', 'attack', 'attacker', 'attacks', 'authority', 'band', 'bank', 'bataclan', 'bbc', 'belgian', 'belgium', 'believed', 'blair', 'blast', 'blood', 'body', 'bomb', 'bomber', 'bombing', 'border', 'britain', 'british', 'brother', 'brussels', 'bus', 'bush', 'business', 'call', 'called', 'cameron', 'campaign', 'capital', 'car', 'cell', 'cent', 'centre', 'change', 'chief', 'child', 'city', 'claim', 'clarke', 'close', 'common', 'community', 'company', 'concern', 'concert', 'cop', 'corbyn', 'country', 'crime', 'cross', 'cup', 'cut', 'david', 'day', 'de', 'dead', 'death', 'defence', 'didn', 'died', 'doe', 'don', 'drone', 'due', 'east', 'economic', 'emergency', 'emwazi', 'enemy', 'england', 'eu', 'europe', 'european', 'event', 'evil', 'expert', 'explosion', 'explosive', 'extremist', 'fa', 'face', 'family', 'fan', 'father', 'fear', 'feel', 'fight', 'financial', 'flat', 'football', 'force', 'foreign', 'france', 'french', 'friday', 'friend', 'friendly', 'g8', 'game', 'germany', 'give', 'glasgow', 'global', 'good', 'government', 'great', 'ground', 'group', 'gun', 'gunman', 'hand', 'happened', 'hate', 'head', 'heard', 'held', 'high', 'hit', 'hollande', 'home', 'hope', 'horror', 'hospital', 'hostage', 'hotel', 'hour', 'house', 'human', 'image', 'including', 'information', 'injured', 'innocent', 'intelligence', 'international', 'involved', 'iraq', 'isi', 'isis', 'islam', 'islamic', 'islamist', 'israel', 'israeli', 'jihadi', 'jihadist', 'john', 'kill', 'killed', 'killer', 'killing', 'kind', 'king', 'la', 'labour', 'law', 'le', 'leader', 'leeds', 'left', 'life', 'live', 'local', 'london', 'long', 'lord', 'loss', 'love', 'madrid', 'major', 'man', 'market', 'match', 'measure', 'medium', 'meeting', 'member', 'men', 'message', 'mi5', 'middle', 'migrant', 'military', 'million', 'minister', 'minute', 'missing', 'month', 'morning', 'mosque', 'mother', 'mp', 'muslim', 'nation', 'national', 'news', 'night', 'north', 'number', 'obama', 'office', 'officer', 'official', 'oil', 'olympic', 'olympics', 'open', 'operation', 'order', 'page', 'pakistan', 'paris', 'part', 'party', 'passenger', 'passport', 'people', 'phone', 'picture', 'place', 'plan', 'play', 'player', 'pm', 'point', 'police', 'policy', 'political', 'power', 'president', 'price', 'public', 'put', 'putin', 'qaeda', 'question', 'radical', 'raid', 'raqqa', 'refugee', 'religion', 'religious', 'report', 'republican', 'response', 'restaurant', 'risk', 'road', 'russia', 'russian', 'safe', 'saturday', 'scene', 'school', 'scotland', 'secretary', 'security', 'service', 'set', 'shadow', 'share', 'shooting', 'shot', 'show', 'silence', 'site', 'society', 'solidarity', 'son', 'source', 'special', 'sport', 'square', 'st', 'stadium', 'staff', 'stand', 'started', 'state', 'statement', 'station', 'stock', 'stop', 'story', 'street', 'strike', 'suicide', 'summit', 'sun', 'sunday', 'support', 'suspect', 'syria', 'syrian', 'system', 'talk', 'target', 'team', 'terror', 'terrorism', 'terrorist', 'terrorists', 'thought', 'threat', 'thursday', 'time', 'today', 'told', 'top', 'town', 'train', 'transport', 'travel', 'tribute', 'tube', 'tuesday', 'uk', 'underground', 'united', 've', 'victim', 'video', 'violence', 'war', 'weapon', 'week', 'weekend', 'wembley', 'west', 'western', 'white', 'win', 'woman', 'word', 'work', 'world', 'year', 'yesterday', 'york', 'young']
Shape of train is (1610, 362)
1
2
3
4
5
6
7
8
9
|
babilim/model/imagenet.ipynb | ###Markdown
 babilim.model.imagenet> An implementation of various imagenet models.
###Code
#export
from babilim.core.annotations import RunOnlyOnce
from babilim.core.module_native import ModuleNative
#export
class ImagenetModel(ModuleNative):
def __init__(self, encoder_type, only_encoder=False, pretrained=False, last_layer=None):
"""
Create one of the iconic image net models in one line.
Allows for only using the encoder part.
This model assumes the input image to be 0-255 (8 bit integer) with 3 channels.
:param encoder_type: The encoder type that should be used. Must be in ("vgg16", "vgg16_bn", "vgg19", "vgg19_bn", "resnet50", "resnet101", "resnet152", "densenet121", "densenet169", "densenet201", "inception_v3", "mobilenet_v2")
:param only_encoder: Leaves out the classification head for VGG16 leaving you with a feature encoder.
:param pretrained: If you want imagenet weights for this network.
:param last_layer: Index of the last layer in the encoder. Allows to cutoff encoder after a few layers.
"""
super().__init__()
self.only_encoder = only_encoder
self.pretrained = pretrained
self.encoder_type = encoder_type
self.last_layer = last_layer
@RunOnlyOnce
def _build_tf(self, image):
raise NotImplementedError()
def _call_tf(self, image):
raise NotImplementedError()
@RunOnlyOnce
def _build_pytorch(self, image):
import torch
from torchvision.models import vgg16, vgg16_bn, vgg19, vgg19_bn, resnet50, resnet101, resnet152, densenet121, densenet169, densenet201, inception_v3, mobilenet_v2
from torch.nn import Sequential
model = None
if self.encoder_type == "vgg16":
model = vgg16
elif self.encoder_type == "vgg16_bn":
model = vgg16_bn
elif self.encoder_type == "vgg19":
model = vgg19
elif self.encoder_type == "vgg19_bn":
model = vgg19_bn
elif self.encoder_type == "resnet50":
model = resnet50
elif self.encoder_type == "resnet101":
model = resnet101
elif self.encoder_type == "resnet152":
model = resnet152
elif self.encoder_type == "densenet121":
model = densenet121
elif self.encoder_type == "densenet169":
model = densenet169
elif self.encoder_type == "densenet201":
model = densenet201
elif self.encoder_type == "inception_v3":
model = inception_v3
elif self.encoder_type == "mobilenet_v2":
model = mobilenet_v2
else:
raise RuntimeError("Unsupported encoder type.")
if self.only_encoder:
encoder = list(model(pretrained=self.pretrained).features)
if self.last_layer is not None:
encoder = encoder[:self.last_layer+1]
self.model = Sequential(*encoder)
else:
self.model = model(pretrained=self.pretrained)
if torch.cuda.is_available():
self.model = self.model.to(torch.device(self.device))
# Just in case, make the image a float tensor
image = image.float()
# Standardization values from torchvision.models documentation
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]
# Create tensors for a 0-255 value range image.
self.mean = torch.as_tensor([i * 255 for i in mean], dtype=image.dtype, device=image.device)
self.std = torch.as_tensor([j * 255 for j in std], dtype=image.dtype, device=image.device)
def _call_pytorch(self, image):
# Just in case, make the image a float tensor and apply variance correction.
image = image.float()
image.sub_(self.mean[None, :, None, None]).div_(self.std[None, :, None, None])
return self.model(image)
from babilim.core.tensor import Tensor
import numpy as np
encoder = ImagenetModel("vgg16_bn", only_encoder=True, pretrained="imagenet")
fake_image_batch_pytorch = Tensor(data=np.zeros((1, 3, 256, 256), dtype=np.float32), trainable=False)
print(fake_image_batch_pytorch.shape)
result = encoder(fake_image_batch_pytorch)
print(result.shape)
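# Illustrative only (these lines are not from the original notebook and are not executed):
# the `last_layer` argument documented in the constructor can truncate the encoder after a
# given layer index, e.g. something like
#   truncated = ImagenetModel("vgg16_bn", only_encoder=True, pretrained=True, last_layer=22)
#   features = truncated(fake_image_batch_pytorch)  # feature map of an earlier VGG block
# The index 22 is an arbitrary assumption for illustration.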
from babilim.core.tensor import Tensor
import numpy as np
model = ImagenetModel("resnet50", only_encoder=False, pretrained="imagenet")
fake_image_batch_pytorch = Tensor(data=np.zeros((1, 3, 256, 256), dtype=np.float32), trainable=False)
print(fake_image_batch_pytorch.shape)
result = model(fake_image_batch_pytorch)
print(result.shape)
###Output
(1, 3, 256, 256)
Downloading: "https://download.pytorch.org/models/resnet50-19c8e357.pth" to C:\Users\fuerst/.cache\torch\hub\checkpoints\resnet50-19c8e357.pth
100%|██████████| 97.8M/97.8M [00:23<00:00, 4.41MB/s]
(1, 1000)
|
Q-Table Learning.ipynb | ###Markdown
Q-Table Learning
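 The cell below implements tabular Q-learning on FrozenLake: actions are chosen greedily from the Q-table plus Gaussian exploration noise that decays with the episode index, and every transition applies the update
 $$ Q(s,a) \leftarrow Q(s,a) + \alpha \, \big( r + \gamma \max_{a'} Q(s',a') - Q(s,a) \big) $$
 with learning rate $\alpha$ (`lr = 0.85`) and discount factor $\gamma$ (`y = 0.99`).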
###Code
import gym
import numpy as np
env = gym.make('FrozenLake-v0')
#Initialize table with all zeros
Q = np.zeros([env.observation_space.n,env.action_space.n])
# Set learning parameters
lr = .85
y = .99
num_episodes = 2000
#create lists to contain total rewards and steps per episode
#jList = []
rList = []
for i in range(num_episodes):
#Reset environment and get first new observation
s = env.reset()
rAll = 0
d = False
j = 0
#The Q-Table learning algorithm
while j < 99:
j+=1
#Choose an action by greedily (with noise) picking from Q table
a = np.argmax(Q[s,:] + np.random.randn(1,env.action_space.n)*(1./(i+1)))
#Get new state and reward from environment
s1,r,d,_ = env.step(a)
print("s1rd", s1, r, d)
#Update Q-Table with new knowledge
Q[s,a] = Q[s,a] + lr*(r + y*np.max(Q[s1,:]) - Q[s,a])
rAll += r
s = s1
if d == True:
break
#jList.append(j)
rList.append(rAll)
print("Score over time: " + str(sum(rList)/num_episodes))
print("Final Q-Table Values")
print(Q)
print(sum(rList))
###Output
_____no_output_____ |
sphinx/scikit-intro/source/plot-area.ipynb | ###Markdown
Area Plot
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
plt.style.use('ggplot')
np.random.seed(37)
###Output
_____no_output_____
###Markdown
Basic
###Code
mus = [100, 105, 90, 20]
sigmas = [1.0, 2.0, 3.0, 5.0]
df = pd.DataFrame({f'x{i}': sigma * np.random.randn(1000) + mu
for i, (mu, sigma) in enumerate(zip(mus, sigmas))})
fig, ax = plt.subplots(figsize=(10, 5))
_ = df.plot(kind='area', ax=ax)
_ = ax.set_title('Basic area plot')
_ = ax.legend(bbox_to_anchor=(1, 1), loc='upper left')
###Output
_____no_output_____ |
Blast2.ipynb | ###Markdown
 NCBI Blast an assembly against a nucleotide sequence
 This notebook takes the output of a MEGAHIT or SPAdes assembly and runs Blastn against a fasta nucleotide target sequence. It automates the Blast process and then generates a consensus fasta sequence that best matches the query file.
 In the example below we run on the result of a MEGAHIT assembly of SRR11085797 (https://trace.ncbi.nlm.nih.gov/Traces/sra/?run=SRR11085797). Our workflow for the RaTG13 de-novo analysis consisted of running this notebook on each of:
 - final.contigs.fa, MEGAHIT results using default settings for NCBI accession SRR11085797
 - final.contigs.fa, MEGAHIT results using a max Kmer of K79 for NCBI accession SRR11085797
 - final.contigs.fa, MEGAHIT results using k-step 10 and the --no-mercy option for NCBI accession SRR11085797
 - gene_clusters.fasta, CoronaSPAdes results using default settings for NCBI accession SRR11085797
 - SRR11806578.fa, generated using Biopython from SRR11806578.fastq, sourced from NCBI accession SRR11085797
 The consensus fasta files generated from each were then used in the [Fasta_Gap_Comparison.ipynb](files/Fasta_Gap_Comparison.ipynb) notebook to compare overall coverage of the RaTG13 genome.
###Code
import os
import collections
import re
import pathlib
from io import StringIO
from Bio.Blast.Applications import NcbiblastnCommandline
from Bio.Blast import NCBIXML
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio import SeqIO
from Bio import pairwise2
from Bio.pairwise2 import format_alignment
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
#DATA_PATH='../../RaTG13/Sars_SL3.megahit_asm/'
DATA_PATH='../../RaTG13/Sars_SL3.megahit_asm/intermediate_contigs/'
FASTA_PATH='../../fasta/'
#TARGET_FILE is fasta file that want to match assembly to
TARGET_FILE='MN996532_2_RaTG13_complete_genome.fa'
#ASM_file is assembly output from Megablast/CoronaSPAdes
ASM_FILE='k29.contigs.fa'
METHOD='Megahit default'
COV_NAME='RaTG13'
ASM_CODE='SL3'
UID='mega_defaults_intermediates'
OUT_PATH=DATA_PATH+'blast_analysis/'
pathlib.Path(OUT_PATH).mkdir(exist_ok=True)
query_file = os.path.join(FASTA_PATH, TARGET_FILE)
subject_file = os.path.join(DATA_PATH, ASM_FILE)
assert(os.path.exists(query_file))
assert(os.path.exists(subject_file))
###Output
_____no_output_____
###Markdown
 Blast
 Run BLAST and parse the output as XML.
 Alignment properties: 'accession', 'hit_def', 'hit_id', 'hsps', 'length', 'title'
 hsp properties: 'align_length', 'bits', 'expect', 'frame', 'gaps', 'identities', 'match', 'num_alignments', 'positives', 'query', 'query_end', 'query_start', 'sbjct', 'sbjct_end', 'sbjct_start', 'score', 'strand'
###Code
def blast_asm(query_file, subject_file, print_summary=False):
seqs=[]
lengths=[]
try:
output = NcbiblastnCommandline(query=query_file, subject=subject_file, outfmt=5)()[0]
blast_result_record = NCBIXML.read(StringIO(output))
for i, alignment in enumerate(blast_result_record.alignments):
lengths.append(alignment.length)
for hsp in alignment.hsps:
seqs.append([hsp.query_start, hsp.query_end, alignment.title, hsp.sbjct])
if print_summary:
print(f'{subject_file}, {len(seqs)} Blast hits to {query_file}, lengths: {lengths} ')
except Exception as e:
print(e)
return seqs
seqs=blast_asm(query_file, subject_file, print_summary=True)
if len(seqs)==0:
print('No matches found, halting workflow')
raise ValueError
###Output
_____no_output_____
###Markdown
 Create 2D array of Blast hits
 Create a numpy array (faster than a pandas dataframe) of the required size and fill its rows with the sequences. We use the '-' character as an empty filler. Later we use this array to build the final sequence from the assemblies.
###Code
fasta_target = SeqIO.read(query_file, format="fasta")
fasta_target
fasta_seq=str(fasta_target.seq)
fasta_title=fasta_target.description
###Output
_____no_output_____
###Markdown
Add 1 row for target sequence
###Code
seq_array=np.tile('-', (len(seqs)+1, len(fasta_seq)))
title_array=np.empty(len(seqs)+1, dtype='str')
seq_array=seq_array.astype('str')
assert seq_array.shape[1]==len(fasta_seq)
seq_array[0]=list(fasta_seq)
title_array[0]=fasta_title
assert seq_array.shape[0]==title_array.shape[0]
seq_array.shape
def problem_sequence_gen(start, end, title, subject, target_length):
'''
Blast on some CoronaSPAdes assemblies generates anomalous outputs
where hsp.end-hsp.start != len(sequence)
Either:
1) start wrong
2) end wrong
3) actual sequence not fully used by blast? (where sequence too long)
Here we assume
i) if sequence too long, that start and end are correct ie truncate sequence end
ii) if sequence too short, that start and sequence length are correct
'''
#TODO: add option to select to keep full subject sequence
expected_len=end-(start-1)
expected_end=start+len(subject)-1
subject_len_diff=expected_len-len(subject)
#make independent copy of subject
subject_mod = ''.join(subject)
if subject_len_diff>0:
#expected length longer than actual, ie sequnce too short
SEQ_LONG=False
print(f'WARNING sequence too short by {abs(subject_len_diff)} characters, \
assuming that start and sequence length are correct')
else:
SEQ_LONG=True
print(f'WARNING sequence too long by {abs(subject_len_diff)} characters, \
truncating sequence end!')
row=['-'] * (start-1)
if SEQ_LONG:
trailer=['-'] * (target_length-end)
subject_mod=subject_mod[:len(subject_mod) - abs(subject_len_diff)]
else:
trailer=['-'] * (target_length-expected_end)
subject_chars=list(subject_mod)
row.extend(subject_chars)
row.extend(trailer)
if not(len(row)==target_length):
print(f'built row length: {len(row)}, target length: {target_length}, \
start: {start}, end: {end}, subject length {len(subject)}, mod subject length: {len(subject_mod)}')
return row
def create_sequence(start, end, title, subject, target_length):
if not((end-(start-1))==len(subject)):
return problem_sequence_gen(start, end, title, subject, target_length)
row=['-'] * (start-1)
assert end==start+len(subject)-1
trailer=['-'] * (target_length-end)
subject_chars=list(subject)
row.extend(subject_chars)
row.extend(trailer)
assert len(row)==target_length
return row
seq_rows=[]
lengths=[]
for l in seqs:
#l indexes represent: start, end, title, subject
row=create_sequence(l[0], l[1], l[2], l[3], len(fasta_seq))
lengths.append(len(l[3]))
seq_rows.append(row)
assert len(seq_rows) == len(seqs)
# sort the contig rows and their titles together, by descending contig length
sort_order = sorted(range(len(lengths)), key=lambda i: lengths[i], reverse=True)
length_sorted = [seq_rows[i] for i in sort_order]
titles_sorted = [seqs[i][2] for i in sort_order]
###Output
_____no_output_____
###Markdown
 Replace empty values in the numpy array - rows are sorted by sequence length, with row 0 being the target sequence. Titles are stored in a separate array.
###Code
i=0
for l,t in zip(length_sorted,titles_sorted):
seq_array[i+1]=l
title_array[i+1]=t
i+=1
#check or target sequence looks OK
seq_array[0]
assert len(seq_array)== len(seqs)+1
###Output
_____no_output_____
###Markdown
 Check against target
 The target sequence (here the RaTG13 fasta, accession MN996532.2) is in row zero of seq_array as per above. The other rows are each of the Blast hits.
###Code
#keep all but target in row 0
nn_only = seq_array[1:]
target=seq_array[0]
equal_checks=np.empty([nn_only.shape[0], nn_only.shape[1]])
for i in range(np.shape(nn_only)[0]):
#get row
seq_cmp=nn_only[i]
equal_check = seq_cmp == target
equal_check = equal_check.astype(int)
mask = np.isin(seq_cmp, ['-'])
#set empty (true) to be 2 temporarily
equal_check[mask]=2
equal_checks[i]=equal_check
###Output
_____no_output_____
###Markdown
Swap 2 ('-' ie empty) to be 0 - makes more sense as a zero
###Code
equal_checks[equal_checks == 0] = 3
equal_checks[equal_checks == 2] = 0
equal_checks[equal_checks == 3] = 2
assert (np.count_nonzero(equal_checks == 3))==0
def plot_blocked_seq(stack_arr, name='sequences_blocked.png', cmap='CMRmap_r'):
print(f'{METHOD}, {ASM_FILE}, stack_arr: {stack_arr.shape}')
fig= plt.figure(figsize=(20,6))
plt.imshow(stack_arr, cmap=plt.get_cmap(cmap))
ax = plt.gca()
ax.axes.yaxis.set_visible(False)
plt.savefig(name, dpi=600)
plt.show()
###Output
_____no_output_____
###Markdown
 Merge into single sequence
 Here we create a single consensus fasta file based on a combination of the Blast hits.
 Method (a small sketch follows the list):
 - Where only a single sequence has coverage, use that.
 - Where 2 or more sequences cover the same NN: if the NNs are the same, use that; if the NNs differ and there are more of one than the other, use the more frequent one; if they differ and occur in equal numbers, use the one that matches the target.
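 A minimal sketch of this voting rule for a single column (illustration only; the loop in the next code cell is what actually runs and spells out the individual cases, and `resolve_column` below is not part of the pipeline):
```python
import collections

def resolve_column(column, target_nn):
    """Per-column vote; '-' marks positions a contig does not cover."""
    counts = collections.Counter(nn for nn in column if nn != '-')
    if not counts:
        return '-'                                   # no contig covers this position
    ranked = counts.most_common()
    best, n_best = ranked[0]
    if len(ranked) > 1 and ranked[1][1] == n_best:   # tie between nucleotides
        return target_nn if target_nn in (best, ranked[1][0]) else best
    return best

resolve_column(['-', 'A', 'A', 'G'], target_nn='A')  # -> 'A'
```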
###Code
def choose_most_freq(frequency, e_idx_val, unique, asm_build):
    '''If the frequencies are not equal, pick the most common nucleotide,
    excluding the empty character (at index e_idx_val) from the comparison.'''
    temp_freq = frequency.copy().astype(float)
    temp_freq[e_idx_val] = -1  # never pick the empty character '-'
    max_idx = int(np.argmax(temp_freq))  # index into `unique`, so positions stay aligned
    asm_build.append(unique[max_idx])
return asm_build
nn_only = seq_array[1:]
asm_build=[]
col_idx=0
for column in nn_only.T:
unique, frequency = np.unique(column,
return_counts = True)
if len(unique)==1:
assert frequency==nn_only.shape[0]
asm_build.append(unique[0])
elif len(unique)==2:
if '-' in unique:
            # with only two unique values, one of them is almost always '-' for this dataset
not_empty = [x for x in unique if x != '-']
asm_build.append(not_empty[0])
else:
if frequency[0]==frequency[1]:
#give benefit of doubt
if unique[0]==seq_array[0][col_idx]:
asm_build.append(unique[0])
else:
asm_build.append(unique[1])
else:
                # if the frequencies are not equal, take the most common nucleotide
                max_idx = int(np.argmax(frequency))
                asm_build.append(unique[max_idx])
elif len(unique)==3:
if '-' in unique:
not_empty = [x for x in unique if x != '-']
temp_idxs=list(range(3))
empty_idx = np.where(unique == '-')
e_idx_val=empty_idx[0][0]
del temp_idxs[e_idx_val]
if frequency[temp_idxs[0]]==frequency[temp_idxs[1]]:
#give benefit of doubt
if unique[temp_idxs[0]]==seq_array[0][col_idx]:
asm_build.append(unique[temp_idxs[0]])
else:
asm_build.append(unique[temp_idxs[1]])
else:
asm_build=choose_most_freq(frequency, e_idx_val, unique, asm_build)
else:
            max_idx = int(np.argmax(frequency))
            asm_build.append(unique[max_idx])
elif len(unique)>3:
if '-' in unique:
not_empty = [x for x in unique if x != '-']
temp_idxs=list(range(len(unique)))
empty_idx = np.where(unique == '-')
e_idx_val=empty_idx[0][0]
asm_build=choose_most_freq(frequency, e_idx_val, unique, asm_build)
else:
            max_idx = int(np.argmax(frequency))
            asm_build.append(unique[max_idx])
col_idx+=1
assert len(asm_build)==len(fasta_seq)
assert all(isinstance(x, str) for x in asm_build)
###Output
_____no_output_____
###Markdown
Plot consensus sequence
###Code
def ord_convert(x):
'''convert each character in array to its integer representation'''
return ord(x)
ord_v = np.vectorize(ord_convert)
###Output
_____no_output_____
###Markdown
Convert to integers to plot
###Code
plt.hist(asm_build)
data_ord=ord_v(asm_build)
#replace '-' (empty), with 0 - easier to manage colourmaps
data_ord[data_ord == 45] = 0
#convert to 2D so can plot
stacked=np.stack([data_ord, data_ord], axis=0)
###Output
_____no_output_____
###Markdown
 - Dark blue: continuous coverage by GTCA nucleotides
 - Red: poor coverage, many empty reads
 - Yellow: very poor coverage
 - White: no reads
###Code
stacked_repeated = np.repeat(stacked, repeats=500, axis=0)
plot_blocked_seq(stacked_repeated, name=OUT_PATH+f'asm_stitched_{UID}.png')
target=seq_array[0]
asm_check = asm_build == target
asm_check = asm_check.astype(int)
mask = np.isin(asm_build, ['-'])
#set empty (true) to be 2 temporarily
asm_check[mask]=2
asm_check[asm_check == 0] = 3
asm_check[asm_check == 2] = 0
asm_check[asm_check == 3] = 2
# 0: empty; 1: consensus NN matches the target; 2: consensus NN does not match the target
unique, frequency = np.unique(asm_check,
return_counts = True)
print(f'unique: {unique}, frequency: {frequency}')
def export_fasta(out_file_path, sequence, id_text, description):
if isinstance(sequence, list):
sequence = ''.join(sequence)
seq=Seq(sequence)
record=SeqRecord(seq, id=id_text, description=description)
with open(out_file_path, "w") as output_handle:
SeqIO.write(record, output_handle, "fasta")
export_fasta(OUT_PATH+f'{COV_NAME}_{ASM_CODE}_{UID}.fa', asm_build, f'{COV_NAME}_{ASM_CODE}_{UID}', METHOD)
###Output
_____no_output_____
###Markdown
Stats
###Code
correct_seq=f'Percentage of full sequence with correct sequence coverage: {(frequency[1]/len(fasta_seq))*100}%'
###Output
_____no_output_____
###Markdown
 Values greater than 1 occur where segments overlap.
###Code
correct_overlap=f'Percentage of full sequence with correct overlapping sequences: {(frequency[2]/len(fasta_seq))*100}%'
missing=f'Percentage missing: {(frequency[0]/len(fasta_seq))*100}%'
print(correct_seq)
print(correct_overlap)
print(missing)
with open(OUT_PATH+f"{COV_NAME}_{ASM_CODE}_{UID}_stats.txt", "w") as text_file:
text_file.write(correct_seq +'\n')
text_file.write(correct_overlap +'\n')
text_file.write(missing +'\n')
###Output
_____no_output_____
###Markdown
 Plot Blast Hits
 Top row: consensus sequence, white = no coverage.
 Second row: spacer.
 Each row below shows one contig from Blast.
###Code
equal_checks.shape
bin_stacked_repeated = np.repeat(stacked, repeats=250, axis=0)
bin_stacked_repeated[bin_stacked_repeated > 0] = 1
#add a gap
bin_zero_repeated = np.repeat(stacked, repeats=250, axis=0)
bin_zero_repeated[bin_stacked_repeated > 0] = 0
stacked_header=np.stack([bin_stacked_repeated, bin_zero_repeated], axis=0)
stacked_header=stacked_header.reshape((bin_stacked_repeated.shape[0]+bin_zero_repeated.shape[0], bin_zero_repeated.shape[1]))
equal_repeated = np.repeat(equal_checks, repeats=500, axis=0)
stack_arr=np.concatenate((stacked_header,equal_repeated), axis=0)
plot_blocked_seq(stack_arr, name=OUT_PATH+f'equal_repeated_{UID}.png', cmap='gray_r')
###Output
Megahit default, k29.contigs.fa, stack_arr: (13500, 29855)
###Markdown
 Misc Plotting
 The plot below is experimental and was not used for any analysis. Note that, unlike the plots above, the first row here shows the target sequence, not the consensus sequence.
###Code
nn_unique=np.unique(seq_array)
ord_unique=np.unique(ord_v(seq_array))
unique_codes = dict(zip(list(range(len(ord_unique))), ord_unique))
def replce_npy(a, val_list, replace_with):
for v,r in zip(val_list, replace_with):
[[_el if _el == v else r for _el in _ar] for _ar in a]
def get_plot_data(seq_array, unique_codes, labels):
data_ord=ord_v(seq_array)
unique, frequency = np.unique(seq_array,
return_counts = True)
print(f'Unique Nucleotides: {unique}, frequency: {frequency}')
for n,o in zip(nn_unique, ord_unique):
#use the unique NN code (order unknown) as loopup in the colourbar list
idx=np.where(labels == n.upper())
#key = next(key for key, value in dd.items() if value == v)
data_ord=np.where(data_ord==o, idx, data_ord)
#repeat to expand y axis
data_repeated = np.repeat(data_ord, repeats=500, axis=0)
return data_repeated
def plot_nn_colourbars():
#after https://stackoverflow.com/questions/14777066/matplotlib-discrete-colorbar
col_dict={0:"white",
1:"blue",
2:"yellow",
3:"red",
4:"green",
5:"magenta",
6:"black",
7:"tan",
8:"darkgreen",
9:"lavender",
10:"lightcoral",
11:"aquamarine",
12:"lightcyan",
13:"orchid",
14:"coral",
15:"olive",
16:"lightblue"
}
# create a colormap from our list of colors
cm = ListedColormap([col_dict[x] for x in col_dict.keys()])
#in order of here: https://www.genome.jp/kegg/catalog/codes1.html
labels = np.array(["-","A","G","C","T","U","R","Y","N","W", "S","M","K","B","H","D","V"])
len_lab = len(labels)
# prepare normalizer
norm_bins = np.sort([*col_dict.keys()]) + 0.5
norm_bins = np.insert(norm_bins, 0, np.min(norm_bins) - 1.0)
norm = mpl.colors.BoundaryNorm(norm_bins, len_lab, clip=True)
fmt = mpl.ticker.FuncFormatter(lambda x, pos: labels[norm(x)])
data_repeated=get_plot_data(seq_array, unique_codes, labels)
#plt.matshow(data_repeated, cmap=cm, norm=norm)
fig,ax = plt.subplots(figsize=(20, 10))
im = ax.imshow(data_repeated, cmap=cm, norm=norm)
ax.axes.yaxis.set_visible(False)
diff = norm_bins[1:] - norm_bins[:-1]
tickz = norm_bins[:-1] + diff / 2
cb = fig.colorbar(im, format=fmt, ticks=tickz)
ax.axes.yaxis.set_visible(False)
plt.title(f'Target: {TARGET_FILE} (top row) vs contigs using {METHOD}', fontsize=12)
fig.savefig(OUT_PATH+f"colour_sequence_map_{UID}.png")
plt.show()
#TODO, use bokeh for revised version so can zoom and explore
plot_nn_colourbars()
#odd that plot above looks like mostly G & C - needs testing
plt.hist(seq_array[0])
###Output
_____no_output_____ |
fandango_ratings.ipynb | ###Markdown
 Is Fandango Still Inflating Ratings? In October 2015, Walt Hickey from FiveThirtyEight published a popular article where he presented strong evidence which suggests that Fandango's movie rating system was biased and dishonest. In this project, we'll analyze more recent movie ratings data to determine whether there has been any change in Fandango's rating system after Hickey's analysis.
 Understanding the Data We'll work with two samples of movie ratings: the data in one sample was collected previous to Hickey's analysis, while the other sample was collected after. Let's start by reading in the two samples (which are stored as CSV files) and getting familiar with their structure.
###Code
import pandas as pd
pd.options.display.max_columns = 100 # Avoid having displayed truncated output
previous = pd.read_csv('fandango_score_comparison.csv')
after = pd.read_csv('movie_ratings_16_17.csv')
previous.head(3)
after.head(3)
###Output
_____no_output_____
###Markdown
Below we isolate only the columns that provide information about Fandango so we make the relevant data more readily available for later use.
###Code
fandango_previous = previous[['FILM', 'Fandango_Stars', 'Fandango_Ratingvalue', 'Fandango_votes',
'Fandango_Difference']].copy()
fandango_after = after[['movie', 'year', 'fandango']].copy()
fandango_previous.head(3)
fandango_after.head(3)
###Output
_____no_output_____
###Markdown
 Our goal is to determine whether there has been any change in Fandango's rating system after Hickey's analysis. The population of interest for our analysis is made of all the movie ratings stored on Fandango's website, regardless of the releasing year.
 Because we want to find out whether the parameters of this population changed after Hickey's analysis, we're interested in sampling the population at two different periods in time — before and after Hickey's analysis — so we can compare the two states.
 The data we're working with was sampled at the moments we want: one sample was taken previous to the analysis, and the other after the analysis. We want to describe the population, so we need to make sure that the samples are representative, otherwise we should expect a large sampling error and, ultimately, wrong conclusions.
 From Hickey's article and from the README.md of the [data set's repository](https://github.com/fivethirtyeight/data/tree/master/fandango), we can see that he used the following sampling criteria:
 - The movie must have had at least 30 fan ratings on Fandango's website at the time of sampling (Aug. 24, 2015).
 - The movie must have had tickets on sale in 2015.
 The sampling was clearly not random because not every movie had the same chance to be included in the sample — some movies didn't have a chance at all (like those having under 30 fan ratings or those without tickets on sale in 2015). It's questionable whether this sample is representative of the entire population we're interested in describing. It seems more likely that it isn't, mostly because this sample is subject to temporal trends — e.g. movies in 2015 might have been outstandingly good or bad compared to other years.
 The sampling conditions for our other sample were (as can be read in the README.md of [the data set's repository](https://github.com/mircealex/Movie_ratings_2016_17)):
 - The movie must have been released in 2016 or later.
 - The movie must have had a considerable number of votes and reviews (unclear how many from the README.md or from the data).
 This second sample is also subject to temporal trends and it's unlikely to be representative of our population of interest.
 Both these authors had certain research questions in mind when they sampled the data, and they used a set of criteria to get a sample that would fit their questions. Their sampling method is called [purposive sampling](https://www.youtube.com/watch?v=CdK7N_kTzHI&feature=youtu.be) (or judgmental/selective/subjective sampling). While these samples were good enough for their research, they don't seem too useful for us. Changing the Goal of our Analysis At this point, we can either collect new data or change the goal of our analysis. We choose the latter and place some limitations on our initial goal.
 Instead of trying to determine whether there has been any change in Fandango's rating system after Hickey's analysis, our new goal is to determine whether there's any difference between Fandango's ratings for popular movies in 2015 and Fandango's ratings for popular movies in 2016. This new goal should also be a fairly good proxy for our initial goal. Isolating the Samples We Need With this new research goal, we have two populations of interest:
 1. All Fandango's ratings for popular movies released in 2015.
 2. All Fandango's ratings for popular movies released in 2016.
 We need to be clear about what counts as popular movies.
 We'll use Hickey's benchmark of 30 fan ratings and count a movie as popular only if it has 30 fan ratings or more on Fandango's website.
 Although one of the sampling criteria in our second sample is movie popularity, the sample doesn't provide information about the number of fan ratings. We should be skeptical once more and ask whether this sample is truly representative and contains popular movies (movies with over 30 fan ratings).
 One quick way to check the representativity of this sample is to randomly sample 10 movies from it and then check the number of fan ratings ourselves on Fandango's website. Ideally, at least 8 out of the 10 movies have 30 fan ratings or more.
###Code
fandango_after.sample(10, random_state = 1)
###Output
_____no_output_____
###Markdown
Above we used a value of 1 as the random seed. This is good practice because it suggests that we weren't trying out various random seeds just to get a favorable sample. Let's also double-check the other data set for popular movies. The documentation states clearly that there're only movies with at least 30 fan ratings, but it should take only a couple of seconds to double-check here.
###Code
(fandango_previous['Fandango_votes'] < 30).sum()
###Output
_____no_output_____
###Markdown
 If you explore the two data sets, you'll notice that there are movies with a releasing year different from 2015 or 2016. For our purposes, we'll need to isolate only the movies released in 2015 and 2016.
 Let's start with Hickey's data set and isolate only the movies released in 2015. There's no special column for the releasing year, but we should be able to extract it from the strings in the FILM column.
###Code
fandango_previous.head(2)
fandango_previous['Year'] = fandango_previous['FILM'].str.extract(r'\((\d+)\)')
fandango_previous.head(3)
###Output
_____no_output_____
###Markdown
Let's examine the frequency distribution for the Year column and then isolate the movies released in 2015.
###Code
fandango_previous.Year.value_counts()
fandango_2015 = fandango_previous[fandango_previous['Year'] == '2015'].copy()
fandango_2015['Year'].value_counts()
###Output
_____no_output_____
###Markdown
Great, now let's isolate the movies in the other data set.
###Code
fandango_after.dtypes
fandango_after.year.value_counts()
fandango_2016 = fandango_after[fandango_after.year == 2016].copy()
fandango_2016.year.value_counts()
###Output
_____no_output_____
###Markdown
 Comparing Distribution Shapes for 2015 and 2016 Our aim is to figure out whether there's any difference between Fandango's ratings for popular movies in 2015 and Fandango's ratings for popular movies in 2016. One way to go about this is to analyze and compare the distributions of movie ratings for the two samples. We'll start by comparing the shape of the two distributions using kernel density plots. We'll use the [FiveThirtyEight](https://www.dataquest.io/blog/making-538-plots/) style for the plots.
###Code
import matplotlib.pyplot as plt
from numpy import arange
%matplotlib inline
plt.style.use('fivethirtyeight')
fandango_2015['Fandango_Stars'].plot.kde(label='2015', legend=True, figsize=(8, 5.5))
fandango_2016['fandango'].plot.kde(label = '2016', legend = True)
plt.title("Comparing distribution shapes for Fandango's ratings\n(2015 vs 2016)",
y = 1.07) # the `y` parameter pads the title upward
plt.xlabel('Stars')
plt.xlim(0, 5) # because ratings start at 0 and end at 5
plt.xticks([0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])
plt.show()
###Output
_____no_output_____
###Markdown
 Two aspects are striking in the figure above:
 - Both distributions are strongly left skewed.
 - The 2016 distribution is slightly shifted to the left relative to the 2015 distribution.
 The left skew suggests that movies on Fandango are given mostly high and very high fan ratings. Coupled with the fact that Fandango sells tickets, the high ratings are a bit dubious. It'd be really interesting to investigate this further — ideally in a separate project, since this is quite irrelevant for the current goal of our analysis.
 The slight left shift of the 2016 distribution is very interesting for our analysis. It shows that ratings were slightly lower in 2016 compared to 2015. This suggests that there was indeed a difference between Fandango's ratings for popular movies in 2015 and Fandango's ratings for popular movies in 2016. We can also see the direction of the difference: the ratings in 2016 were slightly lower compared to 2015. Comparing Relative Frequencies It seems we're following a good thread so far, but we need to analyze more granular information. Let's examine the frequency tables of the two distributions to analyze some numbers. Because the data sets have different numbers of movies, we normalize the tables and show percentages instead.
###Code
print('2015' + '\n' + '-' * 16)
fandango_2015['Fandango_Stars'].value_counts(normalize=True).sort_index() * 100
print('2016' + '\n' + '-' * 16)
fandango_2016['fandango'].value_counts(normalize = True).sort_index() * 100
###Output
2016
----------------
###Markdown
 In 2016, very high ratings (4.5 and 5 stars) had significantly lower percentages compared to 2015. In 2016, under 1% of the movies had a perfect rating of 5 stars, compared to 2015 when the percentage was close to 7%. Ratings of 4.5 were also more popular in 2015 — there were approximately 13% more movies rated with a 4.5 in 2015 compared to 2016. The minimum rating is also lower in 2016 — 2.5 instead of 3 stars, the minimum of 2015. There clearly is a difference between the two frequency distributions. For some other ratings, the percentage went up in 2016. There was a greater percentage of movies in 2016 that received 3.5 and 4 stars, compared to 2015. 3.5 and 4.0 are high ratings and this challenges the direction of the change we saw on the kernel density plots. Determining the Direction of the Change Let's take a couple of summary metrics to get a more precise picture about the direction of the change. In what follows, we'll compute the mean, the median, and the mode for both distributions and then use a bar graph to plot the values.
###Code
mean_2015 = fandango_2015['Fandango_Stars'].mean()
mean_2016 = fandango_2016['fandango'].mean()
median_2015 = fandango_2015['Fandango_Stars'].median()
median_2016 = fandango_2016['fandango'].median()
mode_2015 = fandango_2015['Fandango_Stars'].mode()[0]
mode_2016 = fandango_2016['fandango'].mode()[0]
summary = pd.DataFrame()
summary['2015'] = [mean_2015, median_2015, mode_2015]
summary['2016'] = [mean_2016, median_2016, mode_2016]
summary.index = ['mean', 'median', 'mode']
summary
plt.style.use('fivethirtyeight')
summary['2015'].plot.bar(color = '#0066FF', align = 'center', label = '2015', width = .25)
summary['2016'].plot.bar(color = '#CC0000', align = 'edge', label = '2016', width = .25,
rot = 0, figsize = (8,5))
plt.title('Comparing summary statistics: 2015 vs 2016', y = 1.07)
plt.ylim(0,5.5)
plt.yticks(arange(0,5.1,.5))
plt.ylabel('Stars')
plt.legend(framealpha = 0, loc = 'upper center')
plt.show()
###Output
_____no_output_____
###Markdown
 The mean rating was lower in 2016 by approximately 0.2. This means a drop of almost 5% relative to the mean rating in 2015.
###Code
(summary.loc['mean'][0] - summary.loc['mean'][1]) / summary.loc['mean'][0]
###Output
_____no_output_____ |
99_Archive_Uniform.ipynb | ###Markdown
 Archive - uniform and complete
 Once the data is pivoted, we want to create a dataset that contains only columns which appear in every report.
 For instance, we want all the columns 'Assets', 'AssetsCurrent', 'AssetsNoncurrent' to be present in every balance sheet. However, not all desired tags are present, and we have to find ways to calculate and complete them. Often, only 2 of 'Assets', 'AssetsCurrent', 'AssetsNoncurrent' are present; however, since Assets = AssetsCurrent + AssetsNoncurrent, we can calculate the third.
 Sometimes different tags are used to express the same meaning, so we have to figure out which tags belong together. Basic Settings
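 The helper functions used below (`complete_addition`, `copy_if_not_empty`, `print_null_count`, `set_to_zero_if_null`) are imported from `bfh_mt_hs2020_sec_data.core`. As a rough, illustrative sketch of the completion idea (the signature mirrors how the helper is called below, but the body here is an assumption, not the package's actual implementation):
```python
import pandas as pd

def complete_addition(df: pd.DataFrame, total: str, part_a: str, part_b: str) -> None:
    """If exactly one of `total` = `part_a` + `part_b` is missing, derive it from the other two."""
    m = df[total].isnull() & df[part_a].notnull() & df[part_b].notnull()
    df.loc[m, total] = df.loc[m, part_a] + df.loc[m, part_b]
    m = df[part_a].isnull() & df[total].notnull() & df[part_b].notnull()
    df.loc[m, part_a] = df.loc[m, total] - df.loc[m, part_b]
    m = df[part_b].isnull() & df[total].notnull() & df[part_a].notnull()
    df.loc[m, part_b] = df.loc[m, total] - df.loc[m, part_a]
```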
###Code
# imports
from bfh_mt_hs2020_sec_data.core import *
from pathlib import Path
from typing import List, Tuple, Union, Set
from pyspark.sql.dataframe import DataFrame
from pyspark.sql.functions import col
import pandas as pd
import shutil # provides high level file operations
import time # used to measure execution time
import os
import sys
all_pivot_selected_folder = "D:/data/parq_pivot_select"
all_pivoted_folder = "D:/data/parq_pivot_split"
all_processed_folder = "D:/data/parq_processed/"
col_list = ["stmt","cik","ticker", "adsh","period","filed", "form","tag","value","report", "line", "fp", "uom"]
pivot_group = ["cik","ticker","adsh","form","period","fp", "qtrs"]
pivot_attrs = ['value', 'report', 'line']
statements = ['IS','CF','CP','BS','CI','EQ','UN']
print('number of reports: ', joined_df.shape[0])
print('number of companies: ', len(joined_df.cik.unique()))
# init Spark
spark = get_spark_session() # Session anlegen
spark # display the moste important information of the session
###Output
_____no_output_____
###Markdown
00_Raw_data
###Code
# loading the complete unpivoted dataset - if it is needed for debbuging
df_all_selected = spark.read.parquet(all_pivot_selected_folder).cache()
# it sometimes happens that the data could not be associated with the right sheet (bs, is, cf, ..); in these cases the data can appear under "UN"
# so if expected information cannot be found in the appropriate statement, we have to look in the un statement
un_pivot_value = load_data("UN", "value")
un_pivot_pd = un_pivot_value.toPandas()
un_pivot_pd.shape
def prepare_un_values(df_to_merge_into, attr_list):
# add possible columns from un set to cf data with prefix cpy_
attributes = pivot_group[:] # create copy
attributes.extend(attr_list)
un_prepared = un_pivot_pd[attributes].copy()
un_prepared.rename(columns=lambda x: x if x in pivot_group else ("cpy_" + x), inplace=True)
return pd.merge(df_to_merge_into, un_prepared, how='left', on=pivot_group)
###Output
_____no_output_____
###Markdown
01_Balance_Sheet
###Code
bs_pivot_value = load_data("BS", "value")
spark_shape(bs_pivot_value)
bs_pivot_pd = bs_pivot_value.toPandas()
bs_pivot_pd_copy = bs_pivot_pd.copy()
###Output
_____no_output_____
###Markdown
Assets
###Code
print_null_count(bs_pivot_pd_copy, ['Assets','AssetsNoncurrent','AssetsCurrent'])
# Somtimes AssetsNet is present instead of Assets, copy its content to Assets
copy_if_not_empty(bs_pivot_pd_copy, 'AssetsNet', 'Assets')
# if one of the three provided columns is missing, calculate its content based on Assets = AssetsCurrent + AssetsNoncurrent
complete_addition(bs_pivot_pd_copy, 'Assets', 'AssetsCurrent', 'AssetsNoncurrent')
# if Assets contains data but AssetsCurrent and AssetsNoncurrent are empty, assume that only AssetsCurrent is present
# copy value from Assets to AssetsCurrent and set AssetsNoncurrent to 0.0
copy_if_not_empty(bs_pivot_pd_copy, 'Assets', 'AssetsCurrent', 'AssetsNoncurrent')
# if AssetsCurrent contains data and Assets and AssetsNoncurrent are empty, assume that only AssetsCurrent is present
# copy value from AssetsCurrent to Assets and set AssetsNoncurrent to 0.0
copy_if_not_empty(bs_pivot_pd_copy, 'AssetsCurrent', 'Assets', 'AssetsNoncurrent')
# check for how many entries Assets, AssetsNoncurrent and AsstesCurrent couldn't be completed
print_null_count(bs_pivot_pd_copy, ['Assets','AssetsNoncurrent','AssetsCurrent'])
###Output
Assets 1675
AssetsNoncurrent 1675
AssetsCurrent 1675
###Markdown
Liabilities
###Code
print_null_count(bs_pivot_pd_copy, ['Liabilities','LiabilitiesNoncurrent','LiabilitiesCurrent'])
# Completing the Liabilities columns follows the same logic as for the Assets columns
complete_addition(bs_pivot_pd_copy, 'Liabilities', 'LiabilitiesCurrent', 'LiabilitiesNoncurrent')
copy_if_not_empty(bs_pivot_pd_copy, 'Liabilities', 'LiabilitiesCurrent', 'LiabilitiesNoncurrent')
copy_if_not_empty(bs_pivot_pd_copy, 'LiabilitiesCurrent', 'Liabilities', 'LiabilitiesNoncurrent')
# check for how many entries we were not able to complete the Liabilities information
print_null_count(bs_pivot_pd_copy, ['Liabilities','LiabilitiesNoncurrent','LiabilitiesCurrent'])
###Output
Liabilities 1983
LiabilitiesNoncurrent 1982
LiabilitiesCurrent 1983
###Markdown
 Equity
 In the Equity section of the balance sheet, we are interested in the StockholdersEquity and the Earnings (tag: RetainedEarningsAccumulatedDeficit).
###Code
print_null_count(bs_pivot_pd_copy, ['StockholdersEquity','RetainedEarningsAccumulatedDeficit'])
# per definition, LiabilitisAndStockholdersEquity has to match Assets in a balance sheet
# so if LiabilitiesAndStockholdersEquity is not set, we copy the value from the Assets column
copy_if_not_empty(bs_pivot_pd_copy, 'Assets', 'LiabilitiesAndStockholdersEquity') # has to be the same
# if there is PartnersCapital but no StockholdersEquity, we consider it the same as stockholders' equity
copy_if_not_empty(bs_pivot_pd_copy, 'PartnersCapital', 'StockholdersEquity')
# if there is StockholdersEquityIncludingPortionAttributableToNoncontrollingInterest instead of StockholdersEquity, we use this as StocholdersEquity
copy_if_not_empty(bs_pivot_pd_copy, 'StockholdersEquityIncludingPortionAttributableToNoncontrollingInterest', 'StockholdersEquity')
# if RetainedEarnings has no value, we set it to zero
set_to_zero_if_null(bs_pivot_pd_copy, 'RetainedEarningsAccumulatedDeficit')
print_null_count(bs_pivot_pd_copy, ['StockholdersEquity','RetainedEarningsAccumulatedDeficit'])
###Output
StockholdersEquity 2423
RetainedEarningsAccumulatedDeficit 0
###Markdown
Save
###Code
bs_pivot_pd_copy[["cik","ticker", "adsh","period","filed","form", "qtrs","fp",
'Assets','AssetsNoncurrent', 'AssetsCurrent',
'Liabilities','LiabilitiesNoncurrent','LiabilitiesCurrent',
'StockholdersEquity','RetainedEarningsAccumulatedDeficit']] \
.to_csv(all_processed_folder + "bs_not_cleaned.csv", index=False)
###Output
_____no_output_____
###Markdown
Clean empty companies
###Code
bs_cols_selected = bs_pivot_pd_copy[["cik","ticker", "adsh","period","form", "qtrs","fp",
'Assets','AssetsNoncurrent', 'AssetsCurrent',
'Liabilities','LiabilitiesNoncurrent','LiabilitiesCurrent',
'StockholdersEquity','RetainedEarningsAccumulatedDeficit']]
incomplete_ciks = bs_cols_selected[bs_cols_selected.isnull().sum(axis=1) > 0].cik.unique()
bs_cols_cleaned = bs_cols_selected[~bs_pivot_pd_copy.cik.isin(incomplete_ciks)]
bs_cols_cleaned.shape
bs_cols_cleaned.isnull().sum(axis=1).sum()
bs_cols_cleaned.to_csv(all_processed_folder + "bs.csv", index=False)
###Output
_____no_output_____
###Markdown
02_CashFlow

Operation
- NetIncomeLoss
- ProfitLoss
- NetCashProvidedByUsedInOperatingActivities: NetIncome plus other positions adds up to this position

Investing
- NetCashProvidedByUsedInInvestingActivities

Financing activities
- PaymentsForRepurchaseOfCommonStock: stock buybacks
- PaymentsOfDividends
- NetCashProvidedByUsedInFinancingActivities

Change in cash balance
- CashAndCashEquivalentsPeriodIncreaseDecrease: increase/decrease in cash
###Code
cf_pivot_value = load_data("CF", "value")
spark_shape(cf_pivot_value)
#cf_empty_count = get_empty_count(cf_pivot_value)
cf_pivot_pd = cf_pivot_value.toPandas()
cf_pivot_pd_copy = cf_pivot_pd.copy()
cf_pivot_pd.shape
###Output
_____no_output_____
###Markdown
Cash Increase/Decrease
- 'CashAndCashEquivalentsPeriodIncreaseDecrease',
- 'CashAndCashEquivalentsPeriodIncreaseDecreaseExcludingExchangeRateEffect',
- 'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseExcludingExchangeRateEffect',
- 'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseIncludingExchangeRateEffect',
- 'CashPeriodIncreaseDecrease',
- 'CashPeriodIncreaseDecreaseExcludingExchangeRateEffect',
- 'NetCashProvidedByUsedInContinuingOperations'
###Code
print_null_count(cf_pivot_pd_copy, ['CashAndCashEquivalentsPeriodIncreaseDecrease'])
# merge relevant columns from the UN dataset
cf_pivot_pd_copy = prepare_un_values(cf_pivot_pd_copy, [
'CashAndCashEquivalentsPeriodIncreaseDecrease',
'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseIncludingExchangeRateEffect',
'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseExcludingExchangeRateEffect'
])
cf_pivot_pd_copy.shape
# if CashAndCashEquivalentsPeriodIncreaseDecrease is not present and CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseIncludingExchangeRateEffect
# is present, we can replace CashAndCashEquivalentsPeriodIncreaseDecrease.
# there are only about 12 entries where both are present
copy_if_not_empty(cf_pivot_pd_copy, 'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseIncludingExchangeRateEffect',
'CashAndCashEquivalentsPeriodIncreaseDecrease') # either or
copy_if_not_empty(cf_pivot_pd_copy, 'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseExcludingExchangeRateEffect',
'CashAndCashEquivalentsPeriodIncreaseDecrease') # either or
copy_if_not_empty(cf_pivot_pd_copy, 'CashPeriodIncreaseDecrease',
'CashAndCashEquivalentsPeriodIncreaseDecrease') # either or
copy_if_not_empty(cf_pivot_pd_copy, 'CashAndCashEquivalentsPeriodIncreaseDecreaseExcludingExchangeRateEffect',
'CashAndCashEquivalentsPeriodIncreaseDecrease') # either or
copy_if_not_empty(cf_pivot_pd_copy, 'CashPeriodIncreaseDecreaseExcludingExchangeRateEffect',
'CashAndCashEquivalentsPeriodIncreaseDecrease') # either or
copy_if_not_empty(cf_pivot_pd_copy, 'NetCashProvidedByUsedInContinuingOperations',
'CashAndCashEquivalentsPeriodIncreaseDecrease') # either or
# try to find data in joined un data
copy_if_not_empty(cf_pivot_pd_copy, 'cpy_CashAndCashEquivalentsPeriodIncreaseDecrease',
'CashAndCashEquivalentsPeriodIncreaseDecrease') # either or
copy_if_not_empty(cf_pivot_pd_copy, 'cpy_CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseIncludingExchangeRateEffect',
'CashAndCashEquivalentsPeriodIncreaseDecrease') # either or
copy_if_not_empty(cf_pivot_pd_copy, 'cpy_CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseExcludingExchangeRateEffect',
'CashAndCashEquivalentsPeriodIncreaseDecrease') # either or
print_null_count(cf_pivot_pd_copy, ['CashAndCashEquivalentsPeriodIncreaseDecrease'])
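# --- Illustrative sketch (assumption): prepare_un_values appears to left-join selected tags from
# the "UN" (unnumbered statement) data as 'cpy_'-prefixed columns, roughly like this; `un_df` and
# `group_cols` are hypothetical parameter names standing in for the real UN dataframe and pivot keys.
def _prepare_un_values_sketch(df, tags, un_df, group_cols):
    un_prepared = un_df[group_cols + tags].rename(columns={t: 'cpy_' + t for t in tags})
    return pd.merge(df, un_prepared, how='left', on=group_cols)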
###Output
CashAndCashEquivalentsPeriodIncreaseDecrease 188780
###Markdown
Operation
- NetIncomeLoss
- NetCashProvidedByUsedInOperatingActivities: NetIncome plus other positions adds up to this position
###Code
print_null_count(cf_pivot_pd_copy, ['NetIncomeLoss', 'ProfitLoss', 'NetCashProvidedByUsedInOperatingActivities'])
# if only ProfitLoss is set, copy content to NetIncomeLoss
# if onlyNetIncomeLoss is set, copy to ProfitLoss
copy_if_not_empty(cf_pivot_pd_copy, 'ProfitLoss', 'NetIncomeLoss')
copy_if_not_empty(cf_pivot_pd_copy, 'NetIncomeLoss', 'ProfitLoss')
copy_if_not_empty(cf_pivot_pd_copy, 'NetIncomeLossAvailableToCommonStockholdersBasic', 'ProfitLoss') # certain CFs just have this position
copy_if_not_empty(cf_pivot_pd_copy, 'NetIncomeLossAvailableToCommonStockholdersBasic', 'NetIncomeLoss')
copy_if_not_empty(cf_pivot_pd_copy, 'NetCashProvidedByUsedInOperatingActivitiesContinuingOperations', 'NetCashProvidedByUsedInOperatingActivities')
copy_if_not_empty(cf_pivot_pd_copy, 'NetCashProvidedByUsedInOperatingActivities', 'ProfitLoss') # certain CFs just have this position
copy_if_not_empty(cf_pivot_pd_copy, 'NetCashProvidedByUsedInOperatingActivities', 'NetIncomeLoss')
print_null_count(cf_pivot_pd_copy, ['NetIncomeLoss', 'ProfitLoss', 'NetCashProvidedByUsedInOperatingActivities'])
###Output
NetIncomeLoss 119998
ProfitLoss 119998
NetCashProvidedByUsedInOperatingActivities 187825
###Markdown
Investing
- NetCashProvidedByUsedInInvestingActivities
###Code
print_null_count(cf_pivot_pd_copy, ['NetCashProvidedByUsedInInvestingActivities'])
sum_into_empty_target(cf_pivot_pd_copy,
'NetCashProvidedByUsedInInvestingActivitiesContinuingOperations',
'CashProvidedByUsedInInvestingActivitiesDiscontinuedOperations',
'NetCashProvidedByUsedInInvestingActivities')
copy_if_not_empty(cf_pivot_pd_copy, 'NetCashProvidedByUsedInInvestingActivitiesContinuingOperations', 'NetCashProvidedByUsedInInvestingActivities')
copy_if_not_empty(cf_pivot_pd_copy, 'CashProvidedByUsedInInvestingActivitiesDiscontinuedOperations', 'NetCashProvidedByUsedInInvestingActivities')
set_to_zero_if_null(cf_pivot_pd_copy, 'NetCashProvidedByUsedInInvestingActivities')
print_null_count(cf_pivot_pd_copy, ['NetCashProvidedByUsedInInvestingActivities'])
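# --- Illustrative sketch (assumption) of the sum_into_empty_target helper used above:
def _sum_into_empty_target_sketch(df, col_a, col_b, target):
    """Where `target` is empty, fill it with col_a + col_b (missing parts treated as 0)."""
    mask = df[target].isnull() & (df[col_a].notnull() | df[col_b].notnull())
    df.loc[mask, target] = df.loc[mask, col_a].fillna(0.0) + df.loc[mask, col_b].fillna(0.0)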
###Output
NetCashProvidedByUsedInInvestingActivities 0
###Markdown
Financing activities
- PaymentsForRepurchaseOfCommonStock: stock buybacks
- PaymentsOfDividends
- NetCashProvidedByUsedInFinancingActivities

Related tags: 'CashProvidedByUsedInDiscontinuedOperationsFinancingActivities', 'CashProvidedByUsedInFinancingActivitiesDiscontinuedOperations', 'NetCashProvidedByUsedInFinancingActivities', 'NetCashProvidedByUsedInFinancingActivitiesContinuingOperations'

NetCashProvidedByUsedInFinancingActivities
###Code
print_null_count(cf_pivot_pd_copy, ['NetCashProvidedByUsedInFinancingActivities'])
sum_into_empty_target(cf_pivot_pd_copy,
'NetCashProvidedByUsedInFinancingActivitiesContinuingOperations',
'CashProvidedByUsedInFinancingActivitiesDiscontinuedOperations',
'NetCashProvidedByUsedInFinancingActivities')
copy_if_not_empty(cf_pivot_pd_copy, 'NetCashProvidedByUsedInFinancingActivitiesContinuingOperations', 'NetCashProvidedByUsedInFinancingActivities')
copy_if_not_empty(cf_pivot_pd_copy, 'CashProvidedByUsedInFinancingActivitiesDiscontinuedOperations', 'NetCashProvidedByUsedInFinancingActivities')
set_to_zero_if_null(cf_pivot_pd_copy, 'NetCashProvidedByUsedInFinancingActivities')
print_null_count(cf_pivot_pd_copy, ['NetCashProvidedByUsedInFinancingActivities'])
###Output
NetCashProvidedByUsedInFinancingActivities 0
###Markdown
PaymentsOfDividends
Simply set to 0.0 if no data is present. Relevant tags:
- 'PaymentsOfDividends'
- 'PaymentsOfDividendsCommonStock'
- 'PaymentsOfDividendsMinorityInterest'
- 'PaymentsOfDividendsPreferredStockAndPreferenceStock'
- 'PaymentsOfOrdinaryDividends'
###Code
cf_pivot_pd_copy = sum_cols_into_new_target(cf_pivot_pd_copy, 'PaymentsOfDividendsTotal',
['PaymentsOfDividends',
'PaymentsOfDividendsCommonStock',
'PaymentsOfDividendsMinorityInterest',
'PaymentsOfDividendsPreferredStockAndPreferenceStock',
'PaymentsOfOrdinaryDividends'])
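# --- Illustrative sketch (assumption) of the sum_cols_into_new_target helper used above:
def _sum_cols_into_new_target_sketch(df, new_target, columns):
    """Create `new_target` as the row-wise sum of `columns`, treating missing values as 0."""
    df[new_target] = df[columns].fillna(0.0).sum(axis=1)
    return df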
###Output
_____no_output_____
###Markdown
PaymentsForRepurchaseOfCommonStock: stock buybacks. Relevant tags:
- 'PaymentsForRepurchaseOfCommonStock'
- 'PaymentsForRepurchaseOfCommonStockForEmployeeTaxWithholdingObligations'
- 'PaymentsForRepurchaseOfConvertiblePreferredStock'
- 'PaymentsForRepurchaseOfPreferredStockAndPreferenceStock'
- 'PaymentsForRepurchaseOfRedeemableConvertiblePreferredStock'
- 'PaymentsForRepurchaseOfRedeemablePreferredStock'
###Code
cf_pivot_pd_copy = sum_cols_into_new_target(cf_pivot_pd_copy, 'PaymentsForRepurchaseOfStockTotal',
['PaymentsForRepurchaseOfCommonStock',
'PaymentsForRepurchaseOfCommonStockForEmployeeTaxWithholdingObligations',
'PaymentsForRepurchaseOfConvertiblePreferredStock',
'PaymentsForRepurchaseOfPreferredStockAndPreferenceStock',
'PaymentsForRepurchaseOfRedeemableConvertiblePreferredStock',
'PaymentsForRepurchaseOfRedeemablePreferredStock'])
###Output
_____no_output_____
###Markdown
Save
###Code
cf_pivot_pd_copy[["cik","ticker", "adsh","period","form", "qtrs","fp",
'CashAndCashEquivalentsPeriodIncreaseDecrease',
'NetIncomeLoss',
'ProfitLoss',
'NetCashProvidedByUsedInOperatingActivities',
'NetCashProvidedByUsedInInvestingActivities',
'NetCashProvidedByUsedInFinancingActivities',
'PaymentsOfDividendsTotal',
'PaymentsForRepurchaseOfStockTotal']] \
.to_csv(all_processed_folder + "cf_not_cleaned.csv", index=False)
###Output
_____no_output_____
###Markdown
Clean empty companies
###Code
cf_cols_selected = cf_pivot_pd_copy[["cik","ticker", "adsh","period","form",
'CashAndCashEquivalentsPeriodIncreaseDecrease',
'NetIncomeLoss', 'ProfitLoss',
'NetCashProvidedByUsedInOperatingActivities',
'NetCashProvidedByUsedInInvestingActivities',
                                     'NetCashProvidedByUsedInFinancingActivities',
                                     'PaymentsOfDividendsTotal',
                                     'PaymentsForRepurchaseOfStockTotal']]
incomplete_ciks = cf_cols_selected[cf_cols_selected.isnull().sum(axis=1) > 0].cik.unique()
len(incomplete_ciks)
cf_cols_cleaned = cf_cols_selected[~cf_pivot_pd_copy.cik.isin(incomplete_ciks)]
cf_cols_cleaned.shape
cf_cols_cleaned.isnull().sum(axis=1).sum()
cf_cols_cleaned.to_csv(all_processed_folder + "cf.csv", index=False)
###Output
_____no_output_____
###Markdown
03_IncomeStatement

Gross Margin
- Net Sales
- Cost of Sales
- Gross Margin = Net Sales - Cost of Sales

Operating Expenses
- R&D
- Selling, general and admin
- Total op expenses = R&D + Selling, general and admin
- Operating Income = Gross Margin - Total op expenses -> OperatingIncomeLoss
- Other income
- Income before provision for income taxes = Operating Income + Other income
- Provision for income taxes
- Net income = Income before taxes - taxes -> NetIncomeLoss -> also available in CF(!)

Earnings per share
- Basic
- Diluted

Shares used in computing earnings per share
- Basic
- Diluted
###Code
is_pivot_value = load_data("IS", "value")
spark_shape(is_pivot_value)
# is_empty_count = get_empty_count(is_pivot_value)
is_pivot_pd = is_pivot_value.toPandas()
is_pivot_pd['value_count'] = is_pivot_pd.notnull().sum(axis=1)-len(pivot_group) # create a column that contains the number of non-null values in the row
is_pivot_pd.shape
is_pivot_pd_copy = is_pivot_pd.copy()
# merge relevant columns from the UN dataset
is_pivot_pd_copy = prepare_un_values(is_pivot_pd_copy, [
'NetIncomeLoss',
'NetIncomeLossAvailableToCommonStockholdersBasic',
'NetIncomeLossAllocatedToLimitedPartners',
'ProfitLoss',
'Revenues',
'SalesRevenueNet',
'RevenueFromContractWithCustomerExcludingAssessedTax',
'RevenueFromContractWithCustomerIncludingAssessedTax',
'CostOfGoodsAndServicesSold',
'CostOfGoodsSold',
'CostOfRevenue',
'CostOfServices',
'CostsAndExpenses',
'OperatingIncomeLoss',
'IncomeLossFromContinuingOperationsBeforeIncomeTaxesMinorityInterestAndIncomeLossFromEquityMethodInvestments',
'IncomeLossFromContinuingOperationsBeforeIncomeTaxesExtraordinaryItemsNoncontrollingInterest',
'GrossProfit',
])
is_pivot_pd_copy.shape
# if there are fewer than 5 columns with values, it is likely that this is not a complete statement
# often this indicates that the real information is inside the ComprehensiveIncome statement and not in an IncomeStatement
is_pivot_pd_copy = is_pivot_pd_copy[is_pivot_pd_copy['value_count'] > 4]
is_pivot_pd_copy.shape
###Output
_____no_output_____
###Markdown
shares
###Code
print_null_count(is_pivot_pd_copy, [ 'EarningsPerShareBasic',
'EarningsPerShareBasicAndDiluted',
'EarningsPerShareDiluted',
'EarningsPerShareBasicDistributed',
'EarningsPerShareDilutedDistributed'])
is_pivot_pd_copy['EarningsPerShare_hj'] = None
copy_if_not_empty(is_pivot_pd_copy, 'EarningsPerShareBasic', 'EarningsPerShare_hj')
copy_if_not_empty(is_pivot_pd_copy, 'EarningsPerShareBasicAndDiluted', 'EarningsPerShare_hj')
copy_if_not_empty(is_pivot_pd_copy, 'EarningsPerShareBasicDistributed', 'EarningsPerShare_hj')
copy_if_not_empty(is_pivot_pd_copy, 'EarningsPerShareDiluted', 'EarningsPerShare_hj')
copy_if_not_empty(is_pivot_pd_copy, 'EarningsPerShareDilutedDistributed', 'EarningsPerShare_hj')
print_null_count(is_pivot_pd_copy, ['EarningsPerShare_hj'])
print_null_count(is_pivot_pd_copy, ['WeightedAverageNumberOfSharesOutstandingBasic','WeightedAverageNumberOfDilutedSharesOutstanding'])
is_pivot_pd_copy['SharesOutstanding_hj'] = None
copy_if_not_empty(is_pivot_pd_copy, 'WeightedAverageNumberOfSharesOutstandingBasic', 'SharesOutstanding_hj')
copy_if_not_empty(is_pivot_pd_copy, 'WeightedAverageNumberOfDilutedSharesOutstanding', 'SharesOutstanding_hj')
print_null_count(is_pivot_pd_copy, ['SharesOutstanding_hj'])
###Output
SharesOutstanding_hj 40947
###Markdown
NetIncome
###Code
print_null_count(is_pivot_pd_copy, [ 'NetIncomeLoss', 'NetIncomeLossAvailableToCommonStockholdersBasic', 'ProfitLoss'])
is_pivot_pd_copy['NetIncomeLoss_hj'] = None
copy_if_not_empty(is_pivot_pd_copy, 'cpy_NetIncomeLoss', 'NetIncomeLoss')
copy_if_not_empty(is_pivot_pd_copy, 'cpy_NetIncomeLossAvailableToCommonStockholdersBasic', 'NetIncomeLossAvailableToCommonStockholdersBasic')
copy_if_not_empty(is_pivot_pd_copy, 'cpy_NetIncomeLossAllocatedToLimitedPartners', 'NetIncomeLossAllocatedToLimitedPartners')
copy_if_not_empty(is_pivot_pd_copy, 'cpy_ProfitLoss', 'ProfitLoss')
copy_if_not_empty(is_pivot_pd_copy, 'NetIncomeLoss', 'NetIncomeLoss_hj')
copy_if_not_empty(is_pivot_pd_copy, 'NetIncomeLossAvailableToCommonStockholdersBasic', 'NetIncomeLoss_hj')
copy_if_not_empty(is_pivot_pd_copy, 'NetIncomeLossAllocatedToLimitedPartners', 'NetIncomeLoss_hj')
copy_if_not_empty(is_pivot_pd_copy, 'ProfitLoss', 'NetIncomeLoss_hj')
print_null_count(is_pivot_pd_copy, [ 'NetIncomeLoss_hj', 'NetIncomeLoss', 'ProfitLoss'])
###Output
NetIncomeLoss_hj 621
NetIncomeLoss 15026
ProfitLoss 76441
###Markdown
NetSales / Revenues
###Code
print_null_count(is_pivot_pd_copy, [
'Revenues',
'SalesRevenueNet',
'RevenueFromContractWithCustomerExcludingAssessedTax', # Sales
'RevenueFromContractWithCustomerIncludingAssessedTax', # Sales
])
is_pivot_pd_copy['Revenues_hj'] = None
copy_if_not_empty(is_pivot_pd_copy, 'cpy_Revenues', 'Revenues')
copy_if_not_empty(is_pivot_pd_copy, 'cpy_SalesRevenueNet', 'SalesRevenueNet')
copy_if_not_empty(is_pivot_pd_copy, 'cpy_RevenueFromContractWithCustomerExcludingAssessedTax', 'RevenueFromContractWithCustomerExcludingAssessedTax')
copy_if_not_empty(is_pivot_pd_copy, 'cpy_RevenueFromContractWithCustomerIncludingAssessedTax', 'RevenueFromContractWithCustomerIncludingAssessedTax')
#copy_if_not_empty(is_pivot_pd_copy, 'cpy_RevenuesExcludingInterestAndDividends', 'RevenuesExcludingInterestAndDividends')
copy_if_not_empty(is_pivot_pd_copy, 'Revenues', 'Revenues_hj')
copy_if_not_empty(is_pivot_pd_copy, 'SalesRevenueNet', 'Revenues_hj')
copy_if_not_empty(is_pivot_pd_copy, 'RevenueFromContractWithCustomerExcludingAssessedTax', 'Revenues_hj')
copy_if_not_empty(is_pivot_pd_copy, 'RevenueFromContractWithCustomerIncludingAssessedTax', 'Revenues_hj')
copy_if_not_empty(is_pivot_pd_copy, 'RevenuesExcludingInterestAndDividends', 'Revenues_hj')
copy_if_not_empty(is_pivot_pd_copy, 'RegulatedAndUnregulatedOperatingRevenue', 'Revenues_hj')
# some companies provide NonInterestIncome and InterestAndDividendIncomeOperating instead of a Revenue
sum_into_empty_target(is_pivot_pd_copy,
'InterestAndDividendIncomeOperating',
'NoninterestIncome',
'Revenues_hj')
sum_into_empty_target(is_pivot_pd_copy,
'InterestIncomeExpenseNet',
'NoninterestIncome',
'Revenues_hj')
print_null_count(is_pivot_pd_copy, [ 'Revenues_hj'])
###Output
Revenues_hj 19673
###Markdown
CostOfSales
###Code
print_null_count(is_pivot_pd_copy, [
'CostOfGoodsAndServicesSold',
'CostOfGoodsSold',
'CostOfRevenue',
'CostOfServices',
])
is_pivot_pd_copy['CostOfRevenue_hj'] = None
copy_if_not_empty(is_pivot_pd_copy, 'cpy_CostOfGoodsAndServicesSold', 'CostOfGoodsAndServicesSold')
copy_if_not_empty(is_pivot_pd_copy, 'cpy_CostOfGoodsSold', 'CostOfGoodsSold')
copy_if_not_empty(is_pivot_pd_copy, 'cpy_CostOfRevenue', 'CostOfRevenue')
copy_if_not_empty(is_pivot_pd_copy, 'cpy_CostOfServices', 'CostOfServices')
copy_if_not_empty(is_pivot_pd_copy, 'CostOfRevenue', 'CostOfRevenue_hj')
copy_if_not_empty(is_pivot_pd_copy, 'CostOfGoodsAndServicesSold', 'CostOfRevenue_hj')
sum_into_empty_target(is_pivot_pd_copy,
'CostOfGoodsSold',
'CostOfServices',
'CostOfRevenue_hj')
copy_if_not_empty(is_pivot_pd_copy, 'CostOfGoodsSold', 'CostOfRevenue_hj')
copy_if_not_empty(is_pivot_pd_copy, 'CostOfServices', 'CostOfRevenue_hj')
print_null_count(is_pivot_pd_copy, ['CostOfRevenue_hj'])
###Output
CostOfRevenue_hj 52385
###Markdown
OperatingIncomeLoss
###Code
print_null_count(is_pivot_pd_copy, ['OperatingIncomeLoss',
'IncomeLossFromContinuingOperationsBeforeIncomeTaxesMinorityInterestAndIncomeLossFromEquityMethodInvestments',
'IncomeLossFromContinuingOperationsBeforeIncomeTaxesExtraordinaryItemsNoncontrollingInterest'])
is_pivot_pd_copy['OperatingIncomeLoss_hj'] = None
copy_if_not_empty(is_pivot_pd_copy, 'cpy_OperatingIncomeLoss', 'OperatingIncomeLoss')
copy_if_not_empty(is_pivot_pd_copy, 'cpy_IncomeLossFromContinuingOperationsBeforeIncomeTaxesMinorityInterestAndIncomeLossFromEquityMethodInvestments', 'IncomeLossFromContinuingOperationsBeforeIncomeTaxesMinorityInterestAndIncomeLossFromEquityMethodInvestments')
copy_if_not_empty(is_pivot_pd_copy, 'cpy_IncomeLossFromContinuingOperationsBeforeIncomeTaxesExtraordinaryItemsNoncontrollingInterest', 'IncomeLossFromContinuingOperationsBeforeIncomeTaxesExtraordinaryItemsNoncontrollingInterest')
copy_if_not_empty(is_pivot_pd_copy, 'OperatingIncomeLoss', 'OperatingIncomeLoss_hj')
copy_if_not_empty(is_pivot_pd_copy, 'IncomeLossFromContinuingOperationsBeforeIncomeTaxesMinorityInterestAndIncomeLossFromEquityMethodInvestments', 'OperatingIncomeLoss_hj')
copy_if_not_empty(is_pivot_pd_copy, 'IncomeLossFromContinuingOperationsBeforeIncomeTaxesExtraordinaryItemsNoncontrollingInterest', 'OperatingIncomeLoss_hj')
print_null_count(is_pivot_pd_copy, ['OperatingIncomeLoss_hj'])
###Output
OperatingIncomeLoss_hj 4073
###Markdown
Other
###Code
copy_if_not_empty(is_pivot_pd_copy, 'cpy_CostsAndExpenses', 'CostsAndExpenses')
# TODO: also carry over GrossProfit
###Output
_____no_output_____
###Markdown
Save
###Code
is_pivot_pd_copy[["cik","ticker", "adsh","period","form", "qtrs","fp",
'Revenues_hj',
'CostOfRevenue_hj',
'OperatingIncomeLoss_hj',
'CostsAndExpenses',
'NetIncomeLoss_hj', 'NetIncomeLoss', 'ProfitLoss',
'SharesOutstanding_hj',
'EarningsPerShare_hj'
]] \
.to_csv(all_processed_folder + "is_not_cleaned.csv", index=False)
###Output
_____no_output_____
###Markdown
xx_trials
###Code
# index = is_pivot_pd_copy.form == '10-K'
# print('10-Ks', len(pd.unique(is_pivot_pd_copy[index].adsh))) # 32283
#is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-K') & (is_pivot_pd_copy.fp == 'FY') & (is_pivot_pd_copy.qtrs == '4')].count() # 32188
#is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-K') & (is_pivot_pd_copy.fp == 'FY') & (is_pivot_pd_copy.qtrs == '0')].count() # 96
# len(pd.unique(is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-K') &
# (is_pivot_pd_copy.fp == 'FY') &
# (is_pivot_pd_copy.qtrs.isin(['0','4']) )].adsh) )# 32207
# index = is_pivot_pd_copy.form == '10-Q'
# print('10-Qs', len(pd.unique(is_pivot_pd_copy[index].adsh))) # 101521
# is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-Q') & (is_pivot_pd_copy.qtrs == '1')].count() # 101378
# is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-Q') & (is_pivot_pd_copy.qtrs == '2')].count() # 101378
# is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-Q') & (is_pivot_pd_copy.qtrs == '3')].count() # 101378
is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-Q') & (is_pivot_pd_copy.qtrs == '4')].count() # 101378
pd.set_option('display.max_rows', 40)
is_pivot_pd_copy[\
(is_pivot_pd_copy.OperatingIncomeLoss_hj.isnull())
&(is_pivot_pd_copy.IncomeLossFromContinuingOperationsBeforeIncomeTaxesMinorityInterestAndIncomeLossFromEquityMethodInvestments.isnull())
# &(is_pivot_pd_copy.CostsAndExpenses.notnull())
# &(is_pivot_pd_copy.CostOfGoodsAndServicesSold.isnull())
# &(is_pivot_pd_copy.CostOfGoodsSold.isnull())
# &(is_pivot_pd_copy.CostOfRevenue.isnull())
# &(is_pivot_pd_copy.CostOfServices.isnull())
# (is_pivot_pd_copy.EarningsPerShareBasic.isnull())
# ', 'WeightedAverageNumberOfSharesOutstandingBasic'
# (is_pivot_pd_copy.NetIncomeLoss.notnull()) \
# & (is_pivot_pd_copy.ProfitLoss.notnull()) \
# & (is_pivot_pd_copy.NetIncomeLossAvailableToCommonStockholdersBasic.notnull()) \
# (is_pivot_pd_copy.Revenues.isnull()) \
# &(is_pivot_pd_copy.SalesRevenueNet.isnull()) \
# &(is_pivot_pd_copy.RevenueFromContractWithCustomerExcludingAssessedTax.isnull()) \
# &(is_pivot_pd_copy.RevenueFromContractWithCustomerIncludingAssessedTax.isnull()) \
# &(is_pivot_pd_copy.InterestAndDividendIncomeOperating.isnull()) \
# &(is_pivot_pd_copy.NoninterestIncome.isnull()) \
# &(is_pivot_pd_copy.RevenuesExcludingInterestAndDividends.isnull()) \
# &(is_pivot_pd_copy.RegulatedAndUnregulatedOperatingRevenue.isnull()) \
# &(is_pivot_pd_copy.SalesRevenueNet.isnull()) \
] \
[["cik","ticker", "adsh","period", "form","fp","qtrs","value_count",
'OperatingIncomeLoss',
'IncomeLossFromContinuingOperationsBeforeIncomeTaxesMinorityInterestAndIncomeLossFromEquityMethodInvestments',
'CostsAndExpenses',
# 'CostOfGoodsAndServicesSold',
# 'CostOfGoodsSold',
# 'CostOfRevenue',
# 'CostOfServices',
# 'NetIncomeLoss', 'ProfitLoss','NetIncomeLossAvailableToCommonStockholdersBasic'
# 'Revenues',
# 'SalesRevenueNet',
# 'RevenueFromContractWithCustomerExcludingAssessedTax', # Sales
# 'RevenueFromContractWithCustomerIncludingAssessedTax', # Sales
# 'InterestAndDividendIncomeOperating',
# 'NoninterestIncome',
# 'RevenuesExcludingInterestAndDividends'
# 'Revenues',
# 'SalesRevenueNet',
# 'OperatingLeasesIncomeStatementLeaseRevenue',
# 'RevenueFromCollaborativeArrangementExcludingRevenueFromContractWithCustomer',
# 'RevenueNotFromContractWithCustomer',
# 'RevenueNotFromContractWithCustomerExcludingInterestIncome',
# 'RevenueNotFromContractWithCustomerOther',
# 'RevenuesFromExternalCustomers',
# 'CostOfGoodsAndServicesSold',
# 'GrossProfit'
]] \
.sort_values(by=['period'])
is_pivot_pd_copy[is_pivot_pd_copy.adsh=="0000082166-20-000130"].dropna(how='all', axis=1)
#is_pivot_pd_copy[(is_pivot_pd_copy.qtrs == '4') & (is_pivot_pd_copy.fp != 'FY')]
#is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-K') & (is_pivot_pd_copy.fp == 'FY') & (is_pivot_pd_copy.qtrs == '4')].count() # 32188
#is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-K') & (is_pivot_pd_copy.fp == 'FY')].count() #54080
#is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-K') ].count() #54100
#is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-K') & (is_pivot_pd_copy.fp != 'FY')].count() #20
#is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-K') & (is_pivot_pd_copy.fp == 'FY') & (is_pivot_pd_copy.qtrs == '0')].count() # 96
#is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-K') & (is_pivot_pd_copy.fp == 'FY') & (is_pivot_pd_copy.qtrs == '1')].count() # 21363
#is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-K') & (is_pivot_pd_copy.fp == 'FY') & (is_pivot_pd_copy.qtrs == '2')].count() # 69
#is_pivot_pd_copy[(is_pivot_pd_copy.form == '10-K') & (is_pivot_pd_copy.fp == 'FY') & (is_pivot_pd_copy.qtrs == '3')].count() # 56
is_pivot_pd_copy[is_pivot_pd_copy.adsh == '0001401521-20-000018'].notnull().sum(axis=1)-len(pivot_group)
#is_pivot_pd_copy[is_pivot_pd_copy.adsh == '0001401521-20-000018'].isnull().sum(axis=1)
selection = is_pivot_pd_copy[(is_pivot_pd_copy.qtrs == '0') | (is_pivot_pd_copy.qtrs > '4')].isnull().sum(axis=1)
selection = selection == 2320 # shape[1]-8
selection.sum()
cf_empty_pd = cf_empty_count.toPandas()
cf_empty_pd.shape
cf_melt_pd = cf_empty_pd.melt(var_name = 'Tag', value_name = "Count")
cf_melt_pd['diff'] = 133811 -cf_melt_pd['Count']
candidates = ['CashAndCashEquivalentsPeriodIncreaseDecrease','cpy_CashAndCashEquivalentsPeriodIncreaseDecrease',
'CashAndCashEquivalentsPeriodIncreaseDecreaseExcludingExchangeRateEffect',
'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseExcludingExchangeRateEffect',
'CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseIncludingExchangeRateEffect',
'CashPeriodIncreaseDecrease',
'CashPeriodIncreaseDecreaseExcludingExchangeRateEffect']
cf_melt_pd[cf_melt_pd['Tag'].isin(candidates)]
sorted = cf_melt_pd.sort_values('Count', ascending=True)[:100]
sorted.reset_index(drop = True, inplace = True)
sorted.plot.bar(x = 'Tag', y='Count', figsize = (15,10))
empty_count = get_empty_count(bs_pivot_value)
empty_pd = empty_count.toPandas()
melt_pd = empty_pd.melt(var_name = 'Tag', value_name = "Count")
# df2 = pd.melt(df, id_vars=["location", "name"], var_name="Date", value_name="Value")
melt_pd.columns
pd_frame = df_all_selected.where("adsh == '0001564590-20-043606' and stmt == 'IS' and qtrs=='1'").toPandas()
#pd_frame = df_all_selected.where("adsh == '0001628279-20-000210'").toPandas()
#pd_frame = df_all_selected.where("adsh == '0001193125-20-213555'").toPandas()
#print(pd_frame.sort_values(['report', 'line']))
pd.set_option('display.max_rows', pd_frame.shape[0]+1)
pd_frame[['fp','cik', 'tag', 'value', 'stmt', 'report', 'line','period', 'qtrs']].sort_values(['report','qtrs', 'line'])
# candidate tags noted during exploration:
# gaap_CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsPeriodIncreaseDecreaseIncludingExchangeRateEffect
# gaap_CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalentsIncludingDisposalGroupAndDiscontinuedOperations
# gaap_CashCashEquivalentsRestrictedCashAndRestrictedCashEquivalents
bs_pivot_report.where("adsh == '0001492298-20-000025'").show()
df_all_selected.select('adsh','stmt').distinct().count()
df_all_selected.select('adsh','stmt').distinct().where('stmt = "BS"').count()
df_all_selected.select('adsh','stmt').distinct().where('stmt = "EQ"').count()
cf_pivot_pd_copy.columns.tolist()
# tags of interest: CostOfGoodsSold, CostOfRevenue, CostOfServices
[x for x in is_pivot_pd_copy.columns.values if ('CostOf' in x)]
#[x for x in is_pivot_pd_copy.columns.values if ('Revenue' in x) and ('Customer' in x)]
[x for x in bs_pivot_liabilities_copy.columns.values if x.startswith('StockholdersEquity')]
spark.stop()
###Output
_____no_output_____ |
OnlinePredictiveCoding/.ipynb_checkpoints/scenario_vs_performance-checkpoint.ipynb | ###Markdown
Scenario vs. Performance
In this notebook, we explore how the performance of various online learners on different datasets changes based on the simulation type employed. The types are:
1. Remove features following a discrete-uniform distribution
2. Remove features following a multivariate Gaussian distribution with threshold 0

Datasets
1. German
2. Ionosphere
3. Spambase
4. Magic
5. A8a

Below, we define a generic function to read datasets for the experiments.
###Code
import pandas as pd
import numpy as np
dataset_names = ["german", "ionosphere", "spambase", "magic", "a8a"]
root_path, extension = "/datasets/", "_numeric"
def get_path(name):
'''returns a path pair to the preprocessed datasets
X and y csv files.'''
path = root_path + name + extension
return path + "_X.csv", path + "_y.csv"
def read_dataset(X_path, y_path):
'''reads and returns numpy arrays in a given pair of paths for
X and y.'''
X = pd.read_csv(X_path).values
y = pd.read_csv(y_path)['0'].values
return X, y
###Output
_____no_output_____
###Markdown
Baseline Learners
1. Gradient-based learner with hinge loss
2. OCO-based learner with hinge loss
###Code
class predictor:
    def __init__(self, input_size):
        self.w = np.zeros(input_size)

    def predict(self, x):
        return np.dot(self.w, x)

class gradient_learner(predictor):
    def __init__(self, input_size, lr=0.01):
        super().__init__(input_size)
        self.lr = lr  # learning rate hyperparameter (default is a placeholder)

    def update(self, x, y, yhat):
        loss = np.maximum(0, 1.0 - y * np.dot(self.w, x))
        if loss > 0: self.w += x * y * self.lr
        return loss

class oco_learner(predictor):
    def __init__(self, input_size, C=1.0):
        super().__init__(input_size)
        self.C = C  # aggressiveness cap hyperparameter (default is a placeholder)

    def update(self, x, y, yhat):
        loss = np.maximum(0, 1.0 - y * np.dot(self.w, x))
        if loss > 0:
            # passive-aggressive style step: capped by C, scaled by the squared norm of the input
            margin = np.minimum(self.C, loss / np.square(np.linalg.norm(x)))
            self.w += margin * x * y
        return loss
def train(X, X_mask, y, learner):
    '''Generic training function for all learners.
    X_mask is for simulating different settings.
    To learn from full data, set X_mask to a unit matrix (all ones).
    Trains for 1-pass over the given data.'''
    losses, yhat = [], []
    for i in range(len(X)):
        X_i, y_i = X[i] * X_mask[i], y[i]
        yhat_i = learner.predict(X_i)
        losses.append(learner.update(X_i, y_i, yhat_i))
        yhat.append(np.sign(yhat_i))
    return losses, yhat
###Output
_____no_output_____
###Markdown
Experiment Scenarios
1. Full Data
2. Varying Features w/ Discrete Uniform
3. Varying Features w/ Multivariate Gaussian
###Code
# hyperparameters
# C, lambda, lr
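# --- Illustrative sketch (my assumption, not the original experiment code): one way to build
# the X_mask matrices for the scenarios listed above. `keep_prob` and the identity covariance
# are placeholder choices.
def make_uniform_mask(shape, keep_prob=0.7, seed=0):
    """Keep each feature independently with probability keep_prob (discrete-uniform removal)."""
    rng = np.random.default_rng(seed)
    return (rng.random(shape) < keep_prob).astype(float)

def make_gaussian_mask(shape, seed=0):
    """Draw scores from a multivariate Gaussian and keep features whose score exceeds 0."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = shape
    cov = np.eye(n_cols)  # placeholder covariance
    scores = rng.multivariate_normal(np.zeros(n_cols), cov, size=n_rows)
    return (scores > 0).astype(float)

# the full-data scenario corresponds to an all-ones mask, e.g. X_mask = np.ones_like(X)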
###Output
_____no_output_____ |
03_BikeRental.ipynb | ###Markdown
**PROBLEM STATEMENT**
---
- Data Reference:
  - Hadi Fanaee-T, Laboratory of Artificial Intelligence and Decision Support (LIAAD), University of Porto, INESC Porto, Campus da FEUP, Rua Dr. Roberto Frias, 378, 4200-465 Porto, Portugal
- Data Description:
  - instant: record index
  - dteday: date
  - season: season (1: spring, 2: summer, 3: fall, 4: winter)
  - yr: year (0: 2011, 1: 2012)
  - mnth: month (1 to 12)
  - hr: hour (0 to 23)
  - holiday: whether the day is a holiday or not (extracted from http://dchr.dc.gov/page/holiday-schedule)
  - weekday: day of the week
  - workingday: 1 if the day is neither a weekend nor a holiday, otherwise 0
  - weathersit:
    - 1: Clear, Few clouds, Partly cloudy
    - 2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist
    - 3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds
    - 4: Heavy Rain + Ice Pellets + Thunderstorm + Mist, Snow + Fog
  - temp: Normalized temperature in Celsius. The values are divided by 41 (max)
  - hum: Normalized humidity. The values are divided by 100 (max)
  - windspeed: Normalized wind speed. The values are divided by 67 (max)
  - casual: count of casual users
  - registered: count of registered users
  - cnt: count of total rental bikes including both casual and registered
---
**0. IMPORT LIBRARIES**
###Code
import tensorflow as tf
print(tf.__name__, tf.__version__)
import pandas as pd
print(pd.__name__, pd.__version__)
import numpy as np
print(np.__name__, np.__version__)
import seaborn as sns
print(sns.__name__, sns.__version__)
import matplotlib
import matplotlib.pyplot as plt
print(plt.__name__, matplotlib.__version__)
import sklearn
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
print(sklearn.__name__, sklearn.__version__)
# from IPython.core.interactiveshell import InteractiveShell
# InteractiveShell.ast_node_interactivity = "all"
###Output
_____no_output_____
###Markdown
**1. IMPORT DATASETS**
###Code
from google.colab import drive
drive.mount('/content/drive')
bike = pd.read_csv('/content/drive/MyDrive/data/bike_sharing_daily.csv')
bike.info()
bike.describe()
###Output
_____no_output_____
###Markdown
**2. CLEAN UP DATASET** **2.1 Check for nulls**
###Code
#sns.heatmap(bike.isnull())
###Output
_____no_output_____
###Markdown
**2.2 Remove/Reformat data**
###Code
bike = bike.drop(labels=['instant'], axis = 1)
bike = bike.drop(labels=['casual', 'registered'], axis = 1)
bike.dteday = pd.to_datetime(bike.dteday, format='%m/%d/%Y')
bike.index = pd.DatetimeIndex(bike.dteday)
bike = bike.drop(labels=['dteday'], axis = 1)
bike
###Output
_____no_output_____
###Markdown
**3. VISUALIZE DATASET**
###Code
bike['cnt'].asfreq('W').plot(linewidth = 3)
plt.title('Bike Usage Per Week')
plt.xlabel('Week')
plt.ylabel('Bike Rental')
x_numerical = bike[['temp','hum','windspeed', 'cnt']]
x_numerical
sns.pairplot(x_numerical)
sns.heatmap(x_numerical.corr(), annot=True)
x_cat = bike[['season','yr','mnth','holiday','weekday','workingday','weathersit']]
encoder = OneHotEncoder()
x_cat = encoder.fit_transform(x_cat).toarray()
x_cat = pd.DataFrame(x_cat)
x_cat.shape
x_numerical = x_numerical.reset_index()
x_numerical.shape
x_all = pd.concat([x_cat, x_numerical], axis=1)
x_all.shape
x_all = x_all.drop('dteday', axis = 1)
x_all.shape
###Output
_____no_output_____
###Markdown
**4. CREATE TRAINING AND TESTING DATASET**
###Code
x = x_all.iloc[:,:-1].values
y = x_all.iloc[:,-1:].values
scaler = MinMaxScaler()
y = scaler.fit_transform(y)
y
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size = 0.2)
###Output
_____no_output_____ |
exercise/ex8_anomaly_detection_and_recommendation/2_Recommendation.ipynb | ###Markdown
Recommendation System
Implement a collaborative filtering learning algorithm and apply it to a dataset of movie ratings.
###Code
import numpy as np
from scipy.io import loadmat
import scipy.optimize as opt
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 Load the data
###Code
data = loadmat('data/ex8_movies.mat')
# User ratings of movies
Y = data['Y']
# R(i, j) = 1 if the i-th movie was rated by j-th user
R = data['R']
print(Y.shape)
print(R.shape)
param = loadmat('data/ex8_movieParams.mat')
m, u = Y.shape
m, u
theta = param['Theta']
X = param['X']
print(theta.shape)
print(X.shape)
###Output
(943, 10)
(1682, 10)
###Markdown
2 Collaborative filtering learning algorithm
###Code
def serialize(X, theta):
'''
Unroll the parameters into a single vector parameters.
'''
return np.concatenate([X.ravel(), theta.ravel()])
def deserialize(param, n, u, m):
'''
Transform serialized parameters into origin X and theta.
'''
return param[:m * n].reshape(m, n), param[m * n:].reshape(u, n)
def cost(param, Y, R, n):
X, theta = deserialize(param, n, Y.shape[1], Y.shape[0])
return np.power((X @ theta.T - Y) * R, 2).sum() / 2
u_sub = 4
m_sub = 5
n_sub = 3
X_sub = X[:m_sub, :n_sub]
theta_sub = theta[:u_sub, :n_sub]
Y_sub = Y[:m_sub, :u_sub]
R_sub = R[:m_sub, :u_sub]
cost(serialize(X_sub, theta_sub), Y_sub, R_sub, n_sub)
def regularized_cost(param, Y, R, n, reg):
reg_term = np.power(param, 2).sum()
return cost(param, Y, R, n) + (reg / 2) * reg_term
regularized_cost(serialize(X_sub, theta_sub), Y_sub, R_sub, n_sub, reg=1.5)
def gradient(param, Y, R, n):
X, theta = deserialize(param, n, Y.shape[1], Y.shape[0])
inner_term = (X @ theta.T - Y) * R
X_grad = inner_term @ theta
theta_grad = inner_term.T @ X
return serialize(X_grad, theta_grad)
def regularized_gradient(param, Y, R, n, reg):
return gradient(param, Y, R, n) + reg * param
def random_init(n, u, m):
X = np.random.standard_normal((m, n))
theta = np.random.standard_normal((u, n))
return serialize(X, theta)
def collaborative_filter(Y, R, n, reg):
param = random_init(n, Y.shape[1], Y.shape[0])
Y_norm = Y - Y.mean()
return opt.minimize(fun=regularized_cost,
x0=param,
args=(Y_norm, R, n, reg),
method='TNC',
jac=regularized_gradient)
def predict(X, theta, Y, user_id):
return (X @ theta.T)[:, user_id] + Y.mean()
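# --- Optional sanity check (illustrative sketch, not part of the original exercise): compare the
# analytic gradient against a finite-difference approximation on a small random problem.
def check_gradient(n=3, u=4, m=5, eps=1e-4, seed=0):
    rng = np.random.RandomState(seed)
    Y_small = rng.randint(0, 6, size=(m, u)).astype(float)
    R_small = (rng.rand(m, u) > 0.5).astype(float)
    param = random_init(n, u, m)
    analytic = gradient(param, Y_small, R_small, n)
    numeric = np.zeros_like(param)
    for i in range(len(param)):
        step = np.zeros_like(param)
        step[i] = eps
        numeric[i] = (cost(param + step, Y_small, R_small, n) -
                      cost(param - step, Y_small, R_small, n)) / (2 * eps)
    # relative difference should be tiny (e.g. < 1e-8) if the analytic gradient is correct
    return np.linalg.norm(analytic - numeric) / np.linalg.norm(analytic + numeric)

check_gradient()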
###Output
_____no_output_____
###Markdown
3 Recommendation system
###Code
# Original rating provided
ratings = np.zeros(1682)
ratings[0] = 4
ratings[6] = 3
ratings[11] = 5
ratings[53] = 4
ratings[63] = 5
ratings[65] = 3
ratings[68] = 5
ratings[97] = 2
ratings[182] = 4
ratings[225] = 5
ratings[354] = 5
# Add new user and ratings
Y = np.insert(Y, 0, ratings, axis=1)
R = np.insert(R, 0, ratings != 0, axis=1)
print(Y.shape)
print(R.shape)
# Get movie list
movie_list = []
with open('data/movie_ids.txt', encoding='latin-1') as f:
for line in f:
tokens = line.strip().split(' ')
movie_list.append(' '.join(tokens[1:]))
movie_list = np.array(movie_list)
movie_list.shape
res = collaborative_filter(Y, R, n=50, reg=10)
res
X_train, theta_train = deserialize(res.x, 50, u + 1, m)
print(X_train.shape)
print(theta_train.shape)
pred = predict(X_train, theta_train, Y, 0)
idx = np.argsort(pred)[::-1]
pred[idx[:10]]
for i, m in enumerate(movie_list[idx[:10]]):
print("{:2d}. {:35} Rating: {:.2f}".format(i + 1, m, pred[idx[i]]))
###Output
1. Titanic (1997) Rating: 4.13
2. Star Wars (1977) Rating: 4.04
3. Shawshank Redemption, The (1994) Rating: 3.99
4. Forrest Gump (1994) Rating: 3.92
5. Raiders of the Lost Ark (1981) Rating: 3.82
6. Braveheart (1995) Rating: 3.82
7. Return of the Jedi (1983) Rating: 3.77
8. Usual Suspects, The (1995) Rating: 3.76
9. Godfather, The (1972) Rating: 3.76
10. Schindler's List (1993) Rating: 3.75
|
solutions/7_scipp/1_getting-started.ipynb | ###Markdown
Part 1: Scipp crash course and exploring data Getting help- Scipp documentation is available at https://scipp.github.io/- Join [scipp](https://ess-eric.slack.com/archives/C01AAGCQEU8) in the ESS Slack workspace for updates, questions, and discussions.
###Code
import scipp as sc
import numpy as np
###Output
_____no_output_____
###Markdown
Using Jupyter- Press `shift-return` to run a cell and move to next cell- Press `alt-return` to run a cell, to keep focus on current cell- If things go wrong, `Kernel > Restart kernel and clear all outputs` is often helpful.- Jupyter will automatically display the last (and only the last) object typed in a cell
###Code
a = 5
b = 4
a
b
###Output
_____no_output_____
###Markdown
Scipp crash course- `scipp` stores data in a **multi-dimensional array** with **labeled (named) dimensions**. This is best imagined as `numpy` arrays, without the need to memorize and keep track of dimension order.- Each array is combined with a **physical unit** into a **variable**.- Variables are enhanced by **coordinates**. Each coordinate is also a variable. A variable with associated coordinates is called **data array**.- Multiple data arrays with aligned coordinates can be combined into a **dataset**.Consider a 2-D numpy array:
###Code
a = np.random.rand(2,4)
a
###Output
_____no_output_____
###Markdown
Scipp variables enrich this with labelled dimensions and units, for clarity and safety.Variables can be created from numpy arrays using `sc.array`:
###Code
var = sc.array(dims=['time','location'], values=a, unit='K')
var
###Output
_____no_output_____
###Markdown
Dimension labels are used for many operations, the simplest example is "slicing" (or cropping):
###Code
var['location',2:4]
###Output
_____no_output_____
###Markdown
Data arrays are created from variables:
###Code
time = sc.array(dims=['time'], unit=sc.units.s, values=[20,30])
location = sc.array(dims=['location'], unit=sc.units.m, values=np.arange(4))
array = sc.DataArray(data=var, coords={'time':time, 'location':location})
array
###Output
_____no_output_____
###Markdown
Scalar variables are variables with zero dimensions.There are two ways to create these, using `sc.scalar`, or by multiplying a value by a scipp unit:
###Code
windspeed = sc.scalar(1.2, unit='m/s') # see help(sc.scalar) for additional arguments
windspeed = 1.2 * sc.Unit('m/s')
windspeed
###Output
_____no_output_____
###Markdown
Data arrays also support **attributes** to store additional meta information:
###Code
array.attrs['windspeed'] = windspeed
array
###Output
_____no_output_____
###Markdown
Scipp's units protect against invalid additions and subtractions:
###Code
array += windspeed # will raise an exception
###Output
_____no_output_____
###Markdown
Data array coordinates protect against operations between incompatible data:
###Code
array['location', 0:2] + array['location', 2:4] # will raise an exception
array['location', 0:2] - sc.mean(array, 'location') # ok, mean over location drops location coord
###Output
_____no_output_____
###Markdown
Exploring dataWhen working with a dataset, the first step is usually to understand what data and metadata it contains.In this chapter we explore how scipp supports this.We start by loading some data, in this case measured with a prototype of the LoKI detectors at the LARMOR beamline:
###Code
data = sc.io.open_hdf5(filename='/home/shared/ikon20/loki-at-larmor.hdf5')
###Output
_____no_output_____
###Markdown
If you are running this notebook locally instead of on the course server, the file can be downloaded/created with the `download_data.ipynb` notebook.Note that the exercises in the following are fictional and do not represent the actual data reduction workflow. Step 1: Use the HTML representation to see what the loaded data containsThe HTML representation is what Jupyter displays for a scipp object.- Take some time to explore this view and try to understand all the information (dimensions, dtypes, units, ...).- Note that sections can be expanded, and values can shown by clicking the icons to the right.
###Code
data
###Output
_____no_output_____
###Markdown
Step 2: Plot the dataScipp objects can be created using the `plot()` method.Alternatively `sc.plot(obj)` can be used.Since this is neutron-scattering data, we can also use the "instrument view", provided by `sc.neutron.instrument_view(obj)`.- Plot the loaded data and familiarize yourself with the controls.- Create the instrument view and familiarize yourself with the controls.
###Code
data.plot()
sc.neutron.instrument_view(data)
###Output
_____no_output_____
###Markdown
Step 3: Exploring meta dataAbove we saw that many attributes are scalar variables with `dtype=DataArray`.The single value in a scalar variable is accessed using the `value` property.Compare:
###Code
data.attrs['proton_charge_by_period']
data.attrs['proton_charge_by_period'].value
###Output
_____no_output_____
###Markdown
Exercises:1. Find some attributes of `data` with `dtype=DataArray` and plot their `value`. Also try `sc.table(attr.value)` to show a table representation.2. Find and plot a monitor.3. Try to normalize `data` to monitor 1. Why does this fail?4. Plot all the monitors on the same plot. Note that `sc.plot()` can be used with a Python `dict` for this purpose: `sc.plot({'a':something, 'b':else})`.5. Convert all the monitors from `'tof'` to `'wavelength'` using, e.g., `mon1_wav = sc.neutron.convert(mon1, 'tof', 'wavelength')`.6. Inspect the HTML view and note how the "unit conversion" changed the dimensions and units.7. Re-plot all the monitors on the same plot, now in `'wavelength'`.
###Code
# Data and monitor are both in unit TOF, but the pixels and the monitor sit at different positions,
# so their TOF axes are not comparable; the normalization must be done after converting to wavelength.
data / data.attrs['monitor1'].value
sc.plot({f'monitor{i}':data.attrs[f'monitor{i}'].value for i in [1,2,3,4,5]})
sc.plot({f'monitor{i}':sc.neutron.convert(data.attrs[f'monitor{i}'].value, 'tof', 'wavelength') for i in [1,2,3,4,5]})
###Output
_____no_output_____
###Markdown
Step 4: Fixing metadataExercises:1. The sample-position is wrong, shift the sample by `delta = sc.scalar(value=np.array([0.01,0.01,0.04]), unit=sc.units.m)`.2. Because of a glitch in the timing system the time-of-flight has an offset of $2.3~\mu s$. Fix the corresponding coordinate.3. Use the HTML view of `data` to verify that you applied the corrections/calibrations there, rather than in a copy.
###Code
data.coords['sample-position'] += sc.scalar(value=np.array([0.01,0.01,0.04]), unit=sc.units.m)
data.coords['tof'] += 2.3 * sc.Unit('us')
data
###Output
_____no_output_____
###Markdown
Step 5: A closer look at the dataThe 2-D plot we obtain above by default is often not very enlightening.Define:
###Code
counts = sc.sum(data, 'tof')
###Output
_____no_output_____
###Markdown
Exercises:1. Create a plot of `counts` and also try the instrument view.2. How many counts are there in total, in all spectra combined?3. Plot a single spectrum of `data` as a 1-D plot using the slicing syntax to access the spectrum.
###Code
# sc.sum(counts, 'spectrum') #would be another solution
sc.sum(data).value
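# Exercises 1 and 3 (a possible answer sketch): plot the summed counts and a single spectrum of
# `data`, selected with label-based slicing; spectrum index 100 is an arbitrary example.
counts.plot()
data['spectrum', 100].plot()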
###Output
_____no_output_____
###Markdown
As seen in the instrument view the detectors consist of 4 layers of tubes, each containing 7 straws.Let us try to split up our data, so we can compare layers.There are other (and probably better) ways to do this, but here we try to define an integer variable containing a layer index:
###Code
z = sc.geometry.z(data.coords['position'])
near = sc.min(z)
far = sc.max(z)
layer = ((z-near)*400).astype(sc.dtype.int32)
layer.unit = ''
layer.plot()
###Output
_____no_output_____
###Markdown
Exercises:- Change the magic parameter `400` in the cell above until pixels fall cleanly into layers, either 4 layers of tubes or 12 layers of straws.- Store `layer` as a new coord in `data`.- Use `sc.groupby(data, group='layer').sum('spectrum')` to group spectra into layers.- Inspect and understand the HTML view of the result.- Plot the result. There are two options: - Use `plot` with `projection='1d'` - Use `sc.plot` after collapsing dimensions, `sc.collapse(grouped, keep='tof')`- Bonus: When grouping by straw layers, there is a different number of straws in the center layer of each tube (3 instead of 2) due to the flower-pattern arrangement of straws. Define a helper data array with data set to 1 for each spectrum, group by layers and sum over spectrum as above, and use this result to normalize the layer-grouped data from above to spectrum count.
###Code
# NOTE:
# - set magic factor to, e.g., 150 to group by straw layer
# - set magic factor to, e.g., 40 to group by tube layer
data.coords['layer'] = layer
grouped = sc.groupby(data, group='layer').sum('spectrum')
grouped.plot(projection='1d')
sc.plot(sc.collapse(grouped, keep='tof'))
norm = sc.DataArray(data=layer*0+1, coords={'layer':layer})
norm = sc.groupby(norm, group='layer').sum('spectrum')
sc.plot(sc.collapse(grouped/norm, keep='tof'))
###Output
_____no_output_____ |
LinearRegression/9.More-About-Linear-Regression.ipynb | ###Markdown
More Discussion of the Linear Regression Model
###Code
import numpy as np
from sklearn import datasets
boston = datasets.load_boston()
X = boston.data
y = boston.target
X = X[y < 50.0]
y = y[y < 50.0]
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
# We only care about interpreting the coefficients here, not prediction accuracy, so we skip the train/test split
lin_reg.fit(X, y)
# The magnitude reflects how strongly each feature relates to the target; the sign indicates positive or negative correlation
lin_reg.coef_
# Sort features from most negative to most positive coefficient
np.argsort(lin_reg.coef_)
boston.feature_names[np.argsort(lin_reg.coef_)]
print(boston.DESCR)
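# --- Illustrative sketch (not part of the original notebook): visualize the coefficients sorted
# from most negative to most positive to make the feature ranking easier to read.
import matplotlib.pyplot as plt
sorted_idx = np.argsort(lin_reg.coef_)
plt.figure(figsize=(10, 4))
plt.bar(boston.feature_names[sorted_idx], lin_reg.coef_[sorted_idx])
plt.ylabel('coefficient')
plt.title('Linear regression coefficients, sorted from negative to positive correlation')
plt.show()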
###Output
.. _boston_dataset:
Boston house prices dataset
---------------------------
**Data Set Characteristics:**
:Number of Instances: 506
:Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.
:Attribute Information (in order):
- CRIM per capita crime rate by town
- ZN proportion of residential land zoned for lots over 25,000 sq.ft.
- INDUS proportion of non-retail business acres per town
- CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
- NOX nitric oxides concentration (parts per 10 million)
- RM average number of rooms per dwelling
- AGE proportion of owner-occupied units built prior to 1940
- DIS weighted distances to five Boston employment centres
- RAD index of accessibility to radial highways
- TAX full-value property-tax rate per $10,000
- PTRATIO pupil-teacher ratio by town
- B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
- LSTAT % lower status of the population
- MEDV Median value of owner-occupied homes in $1000's
:Missing Attribute Values: None
:Creator: Harrison, D. and Rubinfeld, D.L.
This is a copy of UCI ML housing dataset.
https://archive.ics.uci.edu/ml/machine-learning-databases/housing/
This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.
The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic
prices and the demand for clean air', J. Environ. Economics & Management,
vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics
...', Wiley, 1980. N.B. Various transformations are used in the table on
pages 244-261 of the latter.
The Boston house-price data has been used in many machine learning papers that address regression
problems.
.. topic:: References
- Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
- Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.
|
content/homeworks/cs109a_hw0_209/cs109a_hw0.ipynb | ###Markdown
CS109A Introduction to Data Science Homework 0: Knowledge Test**Harvard University****Fall 2019****Instructors**: Pavlos Protopapas, Kevin Rader, and Chris Tanner---This is a homework which you must turn in.This homework has the following intentions:1. To get you familiar with the jupyter/python environment2. You should easily understand these questions and what is being asked. If you struggle, this may not be the right class for you.3. You should be able to understand the intent (if not the exact syntax) of the code and be able to look up google and provide code that is asked of you. If you cannot, this may not be the right class for you.
###Code
## RUN THIS CELL TO GET THE RIGHT FORMATTING
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
###Output
_____no_output_____
###Markdown
---
Basic Math and Probability/Statistics Calculations

We'll start you off with some basic math and statistics problems to make sure you have the appropriate background to be comfortable with concepts that will come up in CS 109a.

Question 1: Mathiage is What Brings Us Together Today

**Matrix Operations**
*Complete the following matrix operations (show your work as a markdown/latex notebook cell)*

**1.1.** Let $ A = \left( \begin{array}{ccc}3 & 4 & 2 \\5 & 6 & 4 \\4 & 3 & 4 \end{array} \right) \,\,$ and $ \,\, B = \left( \begin{array}{ccc}1 & 4 & 2 \\1 & 9 & 3 \\2 & 3 & 3 \end{array} \right)$. Compute $A \cdot B$.

**1.2.** Let $ A = \left( \begin{array}{ccc}0 & 12 & 8 \\1 & 15 & 0 \\0 & 6 & 3 \end{array} \right)$. Compute $A^{-1}$.

**1.1 Solution**

The dot product of each row of $A$ with each column of $B$ builds the solution matrix:
$$A \cdot B = \left( \begin{array}{ccc}(row_0 \cdot col_0) & (row_0 \cdot col_1) & (row_0 \cdot col_2) \\(row_1 \cdot col_0) & (row_1 \cdot col_1) & (row_1 \cdot col_2) \\(row_2 \cdot col_0) & (row_2 \cdot col_1) & (row_2 \cdot col_2) \end{array} \right)$$

Answer:
$$A \cdot B = \left( \begin{array}{ccc}11 & 54 & 24 \\19 & 86 & 40 \\15 & 55 & 29 \end{array} \right)$$

**1.2 Solution**

Augment $A$ with a 3x3 identity matrix:
$$\left( \begin{array}{ccc|ccc}0 & 12 & 8 & 1 & 0 & 0 \\1 & 15 & 0 & 0 & 1 & 0 \\0 & 6 & 3 & 0 & 0 & 1 \end{array} \right)$$

Row-reducing the augmented matrix to reduced row echelon form leaves the inverse on the right:
$$\left( \begin{array}{ccc|ccc}1 & 0 & 0 & 15/4 & 1 & -10 \\0 & 1 & 0 & -1/4 & 0 & 2/3 \\0 & 0 & 1 & 1/2 & 0 & -1 \end{array} \right)$$

Answer:
$$A^{-1} = \left( \begin{array}{ccc}15/4 & 1 & -10 \\-1/4 & 0 & 2/3 \\1/2 & 0 & -1 \end{array} \right)$$

(Check: multiplying $A$ by this matrix gives the identity.)

**Calculus and Probability**
*Complete the following (show your work as a markdown/latex notebook cell)*

**1.3**. From Wikipedia: > In mathematical optimization, statistics, econometrics, decision theory, machine learning and computational neuroscience, a loss function or cost function is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. We've generated a cost function on parameters $x,y \in \mathcal{R}$, $L(x,y)= 3x^2y - y^3 - 3x^2 - 3y^2 + 2$. Find the critical points (optima) of $L(x,y)$.

**1.4**. A central aspect of call center operations is the per minute statistics of caller demographics. Because of the massive call volumes call centers achieve, these per minute statistics can often take on well-known distributions. In the CS109 Homework Helpdesk, X and Y are discrete random variables with X measuring the number of female callers per minute and Y the total number of callers per minute. We've determined historically the joint pmf of (X, Y) and found it to be $$p_{X,Y}(x,y) = e^{-4}\frac{2^y}{x!(y-x)!}$$ where $y \in \mathcal{N}, x \in [0, y]$ (that is to say, the total number of callers in a minute is a non-negative integer and the number of female callers naturally assumes a value between 0 and the total number of callers inclusive). Find the mean and variance of the marginal distribution of $X$. **(Hint: Think about what values y can take on. A change of variables in your sum from y to y-x may make evaluating the sum easier.)**

**1.3 Solution**

First find the partial derivatives with respect to x and to y:
$$L(x,y) = 3x^2y - y^3 - 3x^2 - 3y^2 + 2$$
$$L_{x} = 6xy - 6x = 6x(y-1)$$
$$L_{y} = 3x^2 - 3y^2 - 6y$$

Setting $L_{x} = 0$ gives $x = 0$ or $y = 1$.

If $y = 1$, then $L_{y} = 0$ gives $3x^2 - 9 = 0$, so $x = \pm\sqrt{3}$.

If $x = 0$, then $L_{y} = 0$ gives $-3y(y+2) = 0$, so $y = 0$ or $y = -2$.

So the critical points are $(\sqrt{3},1), (-\sqrt{3},1), (0,0), (0,-2)$.

**1.4 Solution**

The joint pmf of $(X,Y)$ describes the probability of having X women among Y total callers in a given minute. To find the mean and variance of the number of female callers, we compute the marginal pmf of X by summing over all admissible values of Y ($y \geq x$). Using the change of variables $k = y - x$:
$$p_{X}(x) = \sum_{y=x}^{\infty} e^{-4}\frac{2^y}{x!(y-x)!} = e^{-4}\frac{2^x}{x!}\sum_{k=0}^{\infty}\frac{2^k}{k!} = e^{-4}\frac{2^x}{x!}e^{2} = e^{-2}\frac{2^x}{x!}$$

So $X \sim \text{Poisson}(2)$, and therefore $E[X] = 2$ and $\text{Var}(X) = 2$.
###Code
### The line %... is a jupyter "magic" command, and is not part of the Python language.
# In this case we're just telling the plotting library to draw things on
# the notebook, instead of on a separate window.
%matplotlib inline
# See the "import ... as ..." contructs below? They're just aliasing the package names.
# That way we can call methods like plt.plot() instead of matplotlib.pyplot.plot().
import numpy as np
import scipy as sp
import pandas as pd
import scipy.stats
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
**Basic Statistics**
*Complete the following: you can perform the calculations by hand (show your work) or using software (include the code and output, screenshots are fine if it is from another platform).*

**1.5**. 37 of the 76 female CS concentrators have taken Data Science 1 (DS1) while 50 of the 133 male concentrators have taken DS1. Perform a statistical test to determine if interest in Data Science (by taking DS1) is related to sex. Be sure to state your conclusion.

**1.5 Solution**
Let $P(W)$ and $P(M)$ be the proportions of female and male concentrators who take DS1.
$$H_{0} : P(W) = P(M)$$
$$H_{a} : P(W) \neq P(M)$$
###Code
n_women = 76
p_women = (37/76)
n_men = 133
p_men = (50/133)
women_dist = np.random.binomial(1,p_women,n_women)
men_dist = np.random.binomial(1,p_men,n_men)
scipy.stats.ttest_ind(women_dist,men_dist)
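# --- Cross-check (illustrative sketch): a deterministic test of independence on the observed
# 2x2 table of sex vs. having taken DS1, instead of re-simulating Bernoulli draws each run.
table = [[37, 76 - 37],    # women: took DS1, did not
         [50, 133 - 50]]   # men:   took DS1, did not
chi2, pval, dof, expected = scipy.stats.chi2_contingency(table)
chi2, pval  # pval is about 0.16 (about 0.12 without the default continuity correction)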
###Output
_____no_output_____
###Markdown
The t-test on the simulated Bernoulli samples approximates a two-sample test of proportions, but its p-value varies from run to run because the samples are re-drawn at random. A two-proportion (or chi-square) test on the observed counts gives a two-sided p-value of roughly 0.12 (about 0.16 with a continuity correction), so at the 5% significance level we fail to reject $H_0$: the data do not provide strong evidence that taking DS1 is related to sex.

---

Simulation of a Coin Throw
We'd like to do some experiments with coin flips, but we don't have a physical coin at the moment. So let us **simulate** the process of flipping a coin on a computer. To do this we will use a form of the **random number generator** built into `numpy`. In particular, we will use the function `np.random.choice` which picks items with uniform probability from a list. If we provide it a list ['H', 'T'], it will pick one of the two items in the list. We can also ask it to do this multiple times by specifying the parameter `size`.
###Code
#Heads == 1, Tails == 0
def throw_a_coin(n_trials):
return np.random.choice([0,1], size=n_trials)
###Output
_____no_output_____
###Markdown
`np.sum` is a function that returns the sum of items in an iterable (i.e. a list or an array). Because python coerces `True` to 1 and `False` to 0, calling `np.sum` on an array of `True`s and `False`s returns the number of `True`s in the array; since we coded heads as 1 in `throw_a_coin`, summing its output likewise counts the number of heads. Question 2: The 12 Labors of BernoullisNow that we know how to run our coin flip experiment, we're interested in knowing what happens as we choose larger and larger numbers of coin flips.
**2.1**. Run one experiment of flipping a coin 40 times storing the resulting sample in the variable `throws1`. What's the total proportion of heads?
**2.2**. **Replicate** the experiment in 2.1 storing the resulting sample in the variable `throws2`. What's the proportion of heads? How does this result compare to that you obtained in question 2.1?
**2.3**. Write a function called `run_trials` that takes as input a list, called `n_flips`, of integers representing different values for the number of coin flips in a trial. For each element in the input list, `run_trials` should run the coin flip experiment with that number of flips and calculate the proportion of heads. The output of `run_trials` should be the list of calculated proportions. Store the output of calling `run_trials` in a list called `proportions`.
**2.4**. Using the results in 2.3, reproduce the plot below.
**2.5**. What's the appropriate observation about the result of running the coin flip experiment with larger and larger numbers of coin flips? Choose the appropriate one from the choices below. > A. Regardless of sample size the probability of observing heads in our experiment is 0.5, so the proportion of heads observed in the coin-flip experiments will always be 0.5. >> B. The proportions **fluctuate** about their long-run value of 0.5 (what you might expect if you tossed the coin an infinite amount of times), in accordance with the notion of a fair coin (which we encoded in our simulation by having `np.random.choice` choose between two possibilities with equal probability), with the fluctuations seeming to become much smaller as the number of trials increases.>> C. The proportions **fluctuate** about their long-run value of 0.5 (what you might expect if you tossed the coin an infinite amount of times), in accordance with the notion of a fair coin (which we encoded in our simulation by having `np.random.choice` choose between two possibilities with equal probability), with the fluctuations constant regardless of the number of trials. Solutions **2.1**
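For instance (a small illustration added for clarity):
```
import numpy as np

np.sum(np.array([True, False, True, True]))   # booleans coerce to 1/0, so this is 3
np.sum(np.array([1, 0, 1, 1]))                # with heads coded as 1, this counts 3 heads
```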
###Code
throws1 = np.sum(throw_a_coin(40))
print("Total proportion of heads =", throws1/40)
###Output
Total proportion of heads = 0.45
###Markdown
**2.2**
###Code
throws2 = np.sum(throw_a_coin(40))
print("Total proportion of heads =", throws2/40)
###Output
Total proportion of heads = 0.5
###Markdown
**2.3**
###Code
n_flips = [10, 30, 50, 70, 100, 130, 170, 200, 500, 1000, 2000, 5000, 10000]
def run_trials(n_flips):
proportions = []
for f in n_flips:
proportions.append(np.sum(throw_a_coin(f))/f)
return proportions
###Output
_____no_output_____
###Markdown
**2.4**
###Code
plt.plot(n_flips,run_trials(n_flips))
###Output
_____no_output_____
###Markdown
**2.5** **What's the appropriate observation about the result of applying the coin flip experiment to larger and larger numbers of coin flips? Choose the appropriate one.** B
Multiple Replications of the Coin Flip ExperimentThe coin flip experiment that we did above gave us some insight, but we don't have a good notion of how robust our results are under repetition as we've only run one experiment for each number of coin flips. Let's redo the coin flip experiment, but let's incorporate multiple repetitions of each number of coin flips. For each choice of the number of flips, $n$, in an experiment, we'll do $M$ replications of the coin tossing experiment. Question 3. So Many Replications
**3.1**. Write a function `make_throws` which takes as arguments the `n_replications` ($M$) and the `n_flips` ($n$), and returns a list (of size $M$) of proportions, with each proportion calculated by taking the ratio of heads to the total number of coin flips in each replication of $n$ coin tosses. `n_flips` should be a python parameter whose value should default to 20 if unspecified when `make_throws` is called.
**3.2**. Create the variables `proportions_at_n_flips_100` and `proportions_at_n_flips_1000`. Store in these variables the result of `make_throws` for `n_flips` equal to 100 and 1000 respectively while keeping `n_replications` at 200. Create a plot with the histograms of `proportions_at_n_flips_100` and `proportions_at_n_flips_1000`. Make sure to title your plot, label the x-axis and provide a legend.(See below for an example of what the plot may look like)
**3.3**. Calculate the mean and variance of the results in each of the variables `proportions_at_n_flips_100` and `proportions_at_n_flips_1000` generated in 3.2.
**3.4**. Based upon the plots what would be your guess of what type of distribution is represented by the histograms in 3.2? Explain the factors that influenced your choice.> A. Gamma Distribution>> B. Beta Distribution>> C. Gaussian
**3.5**. Let's just assume for argument's sake that the answer to 3.4 is **C. Gaussian**. Plot a **normed histogram** of your results `proportions_at_n_flips_1000` overlayed with your selection for the appropriate gaussian distribution to represent the experiment of flipping a coin 1000 times. (**Hint: What parameters should you use for your Gaussian?**) Answers **3.1**
###Code
def make_throws(n_replications, n_flips =20):
proportions = []
for r in range(0,n_replications):
proportions.append(np.sum(throw_a_coin(n_flips))/n_flips)
return proportions
###Output
_____no_output_____
###Markdown
**3.2**
###Code
proportions_at_n_flips_100 = make_throws(200, 100)
proportions_at_n_flips_1000 = make_throws(200, 1000)
fig, ax = plt.subplots()
ax.hist([proportions_at_n_flips_100, proportions_at_n_flips_1000])
ax.legend(["100","1000"])
ax.set_title("Histogram of proportions of heads for 100 and 1000 flips")
ax.set_xlabel("Proportion")
###Output
_____no_output_____
###Markdown
**3.3**
###Code
print("100 flips mean: ", np.mean(proportions_at_n_flips_100), "\n")
print("100 flips variance: ", np.var(proportions_at_n_flips_100), "\n")
print("1000 flips mean: ", np.mean(proportions_at_n_flips_1000), "\n")
print("1000 flips variance: ", np.var(proportions_at_n_flips_1000), "\n")
###Output
100 flips mean: 0.50325
100 flips variance: 0.0027069375
1000 flips mean: 0.49992
1000 flips variance: 0.0002518336
###Markdown
**3.4** C. The histograms are symmetric and bell-shaped around 0.5, and since each proportion is an average of many independent Bernoulli coin flips, the Central Limit Theorem says its distribution should be approximately normal (Gaussian). **3.5** The appropriate Gaussian has mean $p = 0.5$ and standard deviation $\sqrt{p(1-p)/n} = \sqrt{0.25/1000}$.
###Code
fig, ax = plt.subplots()
ax.hist(proportions_at_n_flips_1000, normed=True)
grid = np.linspace(0.44, 0.56, 200)
ax.plot(grid, scipy.stats.norm.pdf(grid, loc=0.5, scale=np.sqrt(0.25/1000)))
###Output
_____no_output_____
###Markdown
Working With Distributions in Numpy/ScipyEarlier in this problem set we've been introduced to the Bernoulli "aka coin-flip" distribution and worked with it indirectly by using np.random.choice to make a random selection between two elements 'H' and 'T'. Let's see if we can create comparable results by taking advantage of the machinery for working with other probability distributions in python using numpy and scipy. Question 4: My Normal BinomialLet's use our coin-flipping machinery to do some experimentation with the binomial distribution. The binomial distribution, often represented by $k \sim Binomial(n, p)$, is often described as the number of successes in `n` Bernoulli trials with each trial having a probability of success `p`. In other words, if you flip a coin `n` times, and each coin-flip has a probability `p` of landing heads, then the number of heads you observe is a sample from a binomial distribution.
**4.1**. Sample the binomial distribution using coin flips by writing a function `sample_binomial1` which takes in integer parameters `n` and `size`. The output of `sample_binomial1` should be a list of length `size` observations with each observation being the outcome of flipping a coin `n` times and counting the number of heads. By default `size` should be 1. Your code should take advantage of the `throw_a_coin` function we defined above.
**4.2**. Sample the binomial distribution directly using scipy.stats.binom.rvs by writing another function `sample_binomial2` that takes in integer parameters `n` and `size` as well as a float parameter `p` where $p \in [0 \ldots 1]$. The output of `sample_binomial2` should be a list of length `size` observations with each observation a sample of $Binomial(n, p)$ (taking advantage of scipy.stats.binom). By default `size` should be 1 and `p` should be 0.5.
**4.3**. Run sample_binomial1 with 25 and 200 as values of the `n` and `size` parameters respectively and store the result in `binomial_trials1`. Run sample_binomial2 with 25, 200 and 0.5 as values of the `n`, `size` and `p` parameters respectively and store the results in `binomial_trials2`. Plot normed histograms of `binomial_trials1` and `binomial_trials2`. On both histograms, overlay a plot of the pmf of $Binomial(n=25, p=0.5)$.
**4.4**. How do the plots in 4.3 compare?
**4.5**. Find the mean and variance of `binomial_trials1`. How do they compare to the mean and variance of $Binomial(n=25, p=0.5)$? Answers **4.1**
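For reference (added for clarity), the pmf being sampled here is $$\Pr(k \mid n, p) = \binom{n}{k} p^k (1-p)^{n-k}, \qquad k = 0, 1, \ldots, n,$$ with mean $np$ and variance $np(1-p)$; these are the theoretical quantities compared against in 4.5.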
###Code
def sample_binomial1(n, size =1):
outcomes = []
for flip in range(0,size):
outcomes.append(np.sum(throw_a_coin(n)))
return outcomes
###Output
_____no_output_____
###Markdown
**4.2**
###Code
def sample_binomial2(n, size =1, p =.5):
outcomes = []
for sample in range(0,size):
        outcomes.append(scipy.stats.binom.rvs(n,p))
return outcomes
###Output
_____no_output_____
###Markdown
**4.3**
###Code
binomial_trials1 = sample_binomial1(25,200)
binomial_trials2 = sample_binomial2(25,200,.5)
fig,ax = plt.subplots()
ax.hist(binomial_trials1, normed=1, alpha=0.5, label="coin flips")
ax.hist(binomial_trials2, normed=1, alpha=0.5, label="scipy.stats.binom")
k = np.arange(0, 26)
ax.plot(k, scipy.stats.binom.pmf(k, 25, 0.5), 'o-', label="Binomial(25, 0.5) pmf")
ax.legend()
###Output
_____no_output_____
###Markdown
**4.4** The two histograms look essentially the same: both are samples from $Binomial(n=25, p=0.5)$, which for $n=25$ is already well approximated by a Gaussian centered at $np = 12.5$, and both track the overlaid pmf. **4.5**
###Code
print("Mean of binomial_trials1: ", np.mean(binomial_trials1))
print("Variance of binomial_trials1: ", np.var(binomial_trials1),"\n")
print("Mean of Binomial: ", np.mean(binomial_trials2))
print("Variance of Binomial: ", np.var(binomial_trials2))
###Output
Mean of binomial_trials1: 12.53
Variance of binomial_trials1: 6.5491
Mean of Binomial: 12.8
Variance of Binomial: 5.71
###Markdown
Both the empirical mean and variance are close to the theoretical values $np = 12.5$ and $np(1-p) = 6.25$ of $Binomial(n=25, p=0.5)$, which is expected because the two functions are essentially sampling the same distribution. Testing Your Python Code In the following section we're going to do a brief introduction to unit testing. We do so not only because unit testing has become an increasingly important part of the methodology of good software practices, but also because we plan on using unit tests as part of our own CS109 grading practices, as a way of increasing rigor and repeatability and decreasing complexity and manual workload in our evaluations of your code. We'll provide an example unit test at the end of this section. Introduction to unit testing
###Code
import ipytest
###Output
_____no_output_____
###Markdown
***Unit testing*** is one of the most important software testing methodologies. Wikipedia describes unit testing as "a software testing method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures, and operating procedures, are tested to determine whether they are fit for use."There are many different python libraries that support software testing in general and unit testing in particular. PyTest is one of the most widely used and well-liked libraries for this purpose. We've chosen to adopt PyTest (and ipytest which allows pytest to be used in ipython notebooks) for our testing needs and we'll do a very brief introduction to Pytest here so that you can become familiar with it too. If you recall the function that we provided you above `throw_a_coin`, which we'll reproduce here for convenience, it took a number and returned that many "coin tosses". We'll start by seeing what happens when we give it different sizes of $N$. If we give $N=0$, we should get an empty array of "experiments".
###Code
def throw_a_coin(N):
return np.random.choice(['H','T'], size=N)
throw_a_coin(0)
###Output
_____no_output_____
###Markdown
Great! If we give it positive values of $N$ we should get that number of 'H's and 'T's.
###Code
throw_a_coin(5)
throw_a_coin(8)
###Output
_____no_output_____
###Markdown
Exactly what we expected! What happens if the input isn't a positive integer though?
###Code
#throw_a_coin(4.5)
###Output
_____no_output_____
###Markdown
or
###Code
#throw_a_coin(-4)
###Output
_____no_output_____
###Markdown
It looks like for both non-integer and negative inputs we get errors: a `TypeError` and a `ValueError` respectively. We just engaged in one of the most rudimentary forms of testing, trial and error. We can use pytest to automate this process by writing some functions that will automatically (and potentially repeatedly) test individual units of our code. These are called ***unit tests***. Before we write our tests, let's consider what we would think of as the appropriate behavior for `throw_a_coin` under the conditions we considered above. If `throw_a_coin` receives positive integer input, we want it to behave exactly as it currently does -- returning an output consisting of a list of characters 'H' or 'T' with the length of the list equal to the positive integer input. For a positive floating point input, we want `throw_a_coin_properly` to treat the input as if it were rounded down to the nearest integer (thus returning a list of 'H' or 'T' characters whose length equals the input rounded down to the nearest integer). For any negative number input or an input of 0, we want `throw_a_coin_properly` to return an empty list. We create pytest tests by writing functions that start or end with "test". We'll use the **convention** that our tests will start with "test". We begin the code cell with ipytest's clean_tests function as a way to clear out the results of previous tests starting with "test_throw_a_coin" (the * is the standard wild card character here).
###Code
## the * after test_throw_a_coin tells this code cell to clean out the results
## of all tests starting with test_throw_a_coin
ipytest.clean_tests("test_throw_a_coin*")
## run throw_a_coin with a variety of positive integer inputs (all numbers between 1 and 20) and
## verify that the length of the output list (e.g ['H', 'H', 'T', 'H', 'T']) matches the input integer
def test_throw_a_coin_length_positive():
for n in range(1,20):
assert len(throw_a_coin(n)) == n
## verify that throw_a_coin produces an empty list (i.e. a list of length 0) if provide with an input
## of 0
def test_throw_a_coin_length_zero():
## should be the empty array
assert len(throw_a_coin(0)) == 0
## verify that given a positive floating point input (i.e. 4.34344298547201), throw_a_coin produces a list of
## coin flips of length equal to highest integer less than the input
def test_throw_a_coin_float():
for n in np.random.exponential(7, size=5):
assert len(throw_a_coin(n)) == np.floor(n)
## verify that given any negative input (e.g. -323.4), throw_a_coin produces an empty
def test_throw_a_coin_negative():
for n in range(-7, 0):
assert len(throw_a_coin(n)) == 0
ipytest.run_tests()
###Output
unittest.case.FunctionTestCase (test_throw_a_coin_float) ... ERROR
unittest.case.FunctionTestCase (test_throw_a_coin_length_positive) ... ok
unittest.case.FunctionTestCase (test_throw_a_coin_length_zero) ... ok
unittest.case.FunctionTestCase (test_throw_a_coin_negative) ... ERROR
======================================================================
ERROR: unittest.case.FunctionTestCase (test_throw_a_coin_float)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-247-9f743191e4e7>", line 22, in test_throw_a_coin_float
assert len(throw_a_coin(n)) == np.floor(n)
File "<ipython-input-241-54a1ec744eb3>", line 2, in throw_a_coin
return np.random.choice(['H','T'], size=N)
File "mtrand.pyx", line 1158, in mtrand.RandomState.choice
File "mtrand.pyx", line 990, in mtrand.RandomState.randint
File "mtrand.pyx", line 991, in mtrand.RandomState.randint
File "randint_helpers.pxi", line 253, in mtrand._rand_int64
TypeError: 'numpy.float64' object cannot be interpreted as an integer
======================================================================
ERROR: unittest.case.FunctionTestCase (test_throw_a_coin_negative)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-247-9f743191e4e7>", line 28, in test_throw_a_coin_negative
assert len(throw_a_coin(n)) == 0
File "<ipython-input-241-54a1ec744eb3>", line 2, in throw_a_coin
return np.random.choice(['H','T'], size=N)
File "mtrand.pyx", line 1158, in mtrand.RandomState.choice
File "mtrand.pyx", line 990, in mtrand.RandomState.randint
File "mtrand.pyx", line 991, in mtrand.RandomState.randint
File "randint_helpers.pxi", line 253, in mtrand._rand_int64
ValueError: negative dimensions are not allowed
----------------------------------------------------------------------
Ran 4 tests in 0.014s
FAILED (errors=2)
###Markdown
As you see, we were able to use pytest (and ipytest which allows us to run pytest tests in our ipython notebooks) to automate the tests that we constructed manually before and get the same errors and successes. Now it's time to fix our code and write our own test! Question 5: You Better Test Yourself before You Wreck Yourself!Now it's time to fix `throw_a_coin` so that it passes the tests we've written above as well as add our own test to the mix!**5.1**. Write a new function called `throw_a_coin_properly` that will pass the tests that we saw above. For your convenience we'll provide a new jupyter notebook cell with the tests rewritten for the new function. All the tests should pass. For a positive floating point input, we want `throw_a_coin_properly` to treat the input as if it were rounded down to the nearest integer. For any negative number input, we want `throw_a_coin_properly` to treat the input as if it were 0.**5.2**. Write a new test for `throw_a_coin_properly` that verifies that all the elements of the resultant arrays are 'H' or 'T'. Answers **5.1**
###Code
def throw_a_coin_properly(n):
    # treat any negative input as 0, and round floats down to the nearest integer
    if n < 0:
        n = 0
    n = int(np.floor(n))
    return np.random.choice(['H','T'], size=n)
ipytest.clean_tests("test_throw_a_coin*")
def test_throw_a_coin_properly_length_positive():
for n in range(1,20):
assert len(throw_a_coin_properly(n)) == n
def test_throw_a_coin_properly_length_zero():
## should be the empty array
assert len(throw_a_coin_properly(0)) == 0
def test_throw_a_coin_properly_float():
for n in np.random.exponential(7, size=5):
assert len(throw_a_coin_properly(n)) == np.floor(n)
def test_throw_a_coin_properly_negative():
for n in range(-7, 0):
assert len(throw_a_coin_properly(n)) == 0
ipytest.run_tests()
###Output
unittest.case.FunctionTestCase (test_throw_a_coin_properly_float) ... ERROR
unittest.case.FunctionTestCase (test_throw_a_coin_properly_length_positive) ... ok
unittest.case.FunctionTestCase (test_throw_a_coin_properly_length_zero) ... ok
unittest.case.FunctionTestCase (test_throw_a_coin_properly_negative) ... ok
======================================================================
ERROR: unittest.case.FunctionTestCase (test_throw_a_coin_properly_float)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<ipython-input-249-849428350f76>", line 16, in test_throw_a_coin_properly_float
assert len(throw_a_coin_properly(n)) == np.floor(n)
File "<ipython-input-248-d3d8d39f9467>", line 5, in throw_a_coin_properly
return np.random.choice(['H','T'], size=n)
File "mtrand.pyx", line 1158, in mtrand.RandomState.choice
File "mtrand.pyx", line 990, in mtrand.RandomState.randint
File "mtrand.pyx", line 991, in mtrand.RandomState.randint
File "randint_helpers.pxi", line 253, in mtrand._rand_int64
TypeError: 'numpy.float64' object cannot be interpreted as an integer
----------------------------------------------------------------------
Ran 4 tests in 0.014s
FAILED (errors=1)
###Markdown
**5.2**
###Code
ipytest.clean_tests("test_throw_a_coin*")
## write a test that verifies you don't have any other elements except H's and T's
def test_throw_a_coin_properly_verify_H_T():
for n in range(1,20):
result = throw_a_coin_properly(n)
for i in range(0,n):
assert ((result[i] == 'H') or (result[i] == 'T'))
ipytest.run_tests()
###Output
unittest.case.FunctionTestCase (test_throw_a_coin_properly_verify_H_T) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.004s
OK
archiveOldVersions/Traffic_Sign_Classifier_v04.ipynb | ###Markdown
Self-Driving Car Engineer Nanodegree Deep Learning Project: Build a Traffic Sign Recognition ClassifierIn this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission if necessary.
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/481/view) for this project.The [rubric](https://review.udacity.com/!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
--- Change Log
- v01 : First working version. EPOCH 10 ... LeNet copy/paste. Normalization (pixel-128/128). - Validation Accuracy = 0.886
- v02 : try regularization / dropout fc3 and fc2 dropout 0.5 --> EPOCH 10 ...Validation Accuracy = 0.930
- v03 : try dropout fc1 --> EPOCH 10 ... Validation Accuracy = 0.926
- v04 : keep_prob from 0.5 to 0.6. --> EPOCH 10 ...Validation Accuracy = 0.939 - keep_prob = 0.7 --> EPOCH 10 ...Validation Accuracy = 0.942
- Other possibilities : Try padding SAME instead of VALID.
--- Step 0: Load The Data
###Code
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = '../data/train.p'
validation_file= '../data/valid.p'
testing_file = '../data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
###Output
_____no_output_____
###Markdown
--- Step 1: Dataset Summary & ExplorationThe pickled data is a dictionary with 4 key/value pairs:- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results. Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
###Code
import numpy as np
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
# TODO: Number of training examples
nx_train = len(X_train)
ny_train = len(y_train)
# TODO: Number of validation examples
nx_validation = len(X_valid)
ny_validation = len(y_valid)
# TODO: Number of testing examples.
nx_test = len(X_test)
ny_test = len(y_test)
# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(set(np.concatenate((y_train,y_valid,y_test))))
print(f'Number of training examples X_train = {nx_train} y_train = {ny_train}')
print(f'Number of validation examples X_valid = {nx_validation} y_valid = {ny_validation}')
print(f'Number of testing examples = X_test = {nx_test} y_test = {ny_test}')
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
#print(train['labels'])
###Output
Number of training examples X_train = 34799 y_train = 34799
Number of validation examples X_valid = 4410 y_valid = 4410
Number of testing examples = X_test = 12630 y_test = 12630
Image data shape = (32, 32, 3)
Number of classes = 43
###Markdown
Include an exploratory visualization of the dataset Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc. The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
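One simple way to look at the class balance (a sketch added for illustration, reusing the `y_train`, `y_valid`, `y_test` and `n_classes` variables defined above) is to histogram the three label arrays side by side:
```
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, figsize=(15, 3))
for ax, (labels, name) in zip(axes, [(y_train, 'train'), (y_valid, 'valid'), (y_test, 'test')]):
    ax.hist(labels, bins=n_classes)
    ax.set_title('Class counts: ' + name)
    ax.set_xlabel('class id')
plt.show()
```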
###Code
import random
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
#plt.imshow(image, cmap="gray")
plt.imshow(image)
print(y_train[index])
print(image.shape)
print(image[0][0])
###Output
14
(32, 32, 3)
[255 255 241]
###Markdown
---- Step 2: Design and Test a Model ArchitectureDesign and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play! With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission. There are various aspects to consider when thinking about this problem:- Neural network architecture (is the network over or underfitting?)- Play around preprocessing techniques (normalization, rgb to grayscale, etc)- Number of examples per label (some have more than others).- Generate fake data.Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these. Pre-process the Data Set (normalization, grayscale, etc.) Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project. Other pre-processing steps are optional. You can try different techniques to see if it improves performance. Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
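If one wanted to try the optional grayscale preprocessing mentioned above, a common luminosity-weighted conversion looks like the following (a sketch only; it is not used in the cells below, which keep all three RGB channels):
```
import numpy as np

def to_grayscale(images):
    # weighted sum of the RGB channels, keeping a trailing channel axis of size 1
    gray = np.dot(images[..., :3], [0.299, 0.587, 0.114])
    return gray[..., np.newaxis]
```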
###Code
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
def normalize(image_data):
# Shape (32, 32, 3)
return (np.divide(np.subtract(np.float32(image_data),np.float32(128)),128))
print(f'len(X_train) = {len(X_train)} len(y_train) = {len(y_train)} len(X_valid) = {len(X_valid)} len(y_valid) = {len(y_valid)} len(X_test) = {len(X_test)} len(y_test) = {len(y_test)}')
X_train = normalize(X_train)
X_valid = normalize(X_valid)
X_test = normalize(X_test)
print('info debug : ')
print(f'np.amax(X_train) = {np.amax(X_train)}') #[0][0])
print(f'np.amin(X_train) = {np.amin(X_train)}') #[0][0])
print(f'len(X_train) = {len(X_train)} len(y_train) = {len(y_train)} len(X_valid) = {len(X_valid)} len(y_valid) = {len(y_valid)} len(X_test) = {len(X_test)} len(y_test) = {len(y_test)}')
###Output
len(X_train) = 34799 len(y_train) = 34799 len(X_valid) = 4410 len(y_valid) = 4410 len(X_test) = 12630 len(y_test) = 12630
0.992188
-1.0
len(X_train) = 34799 len(y_train) = 34799 len(X_valid) = 4410 len(y_valid) = 4410 len(X_test) = 12630 len(y_test) = 12630
###Markdown
Model Architecture
###Code
### Define your architecture here.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Setup TensorFlow
###Code
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
###Output
_____no_output_____
###Markdown
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since the traffic sign images are color RGB, C is **3** in this case. Architecture
- **Layer 1: Convolutional.** The output shape should be 28x28x6.
- **Activation.** RELU.
- **Pooling.** The output shape should be 14x14x6.
- **Layer 2: Convolutional.** The output shape should be 10x10x16.
- **Activation.** RELU.
- **Pooling.** The output shape should be 5x5x16.
- **Flatten.** Flatten the output shape of the final pooling layer such that it's 1D instead of 3D.
- **Layer 3: Fully Connected.** This should have 120 outputs.
- **Activation.** RELU.
- **Layer 4: Fully Connected.** This should have 84 outputs.
- **Activation.** RELU.
- **Layer 5: Fully Connected (Logits).** This should have **43** outputs (one per German traffic sign class).
- **Dropout for regularization.** Output
- Return the logits from the final fully connected layer.
###Code
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# Test DROPOUT to prevent overfitting.
fc1 = tf.nn.dropout(fc1, keep_prob)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# Test DROPOUT to prevent overfitting.
fc2 = tf.nn.dropout(fc2, keep_prob)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 43), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(43))
logits = tf.matmul(fc2, fc3_W) + fc3_b
# Test DROPOUT to prevent overfitting.
logits = tf.nn.dropout(logits, keep_prob)
return logits
###Output
_____no_output_____
###Markdown
Features and Labels
###Code
x = tf.placeholder(tf.float32, (None, 32, 32, 3))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 43)
# for test regularization / dropout
keep_prob = tf.placeholder(tf.float32) # probability to keep units
###Output
_____no_output_____
###Markdown
Training Pipeline Create a training pipeline that uses the model to classify the traffic sign data.
###Code
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
###Output
_____no_output_____
###Markdown
Model Evaluation Evaluate the loss and accuracy of the model for a given dataset.
###Code
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
# num_examples = len(X_data)
num_examples = min(len(X_data),len(y_data))
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
# ####CORRECTION
# batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
# InvalidArgumentError: Incompatible shapes: [128] vs. [58]
# final batch may be less than BATCH_SIZE !!!!
end = min(offset + BATCH_SIZE,num_examples)
# end = offset + BATCH_SIZE
batch_x, batch_y = X_data[offset:end], y_data[offset:end]
# ####END OF CORRECTION
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
###Output
_____no_output_____
###Markdown
Train, Validate and Test the Model A validation set can be used to assess how well the model is performing. A low accuracy on both the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
###Code
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Train the Model- Run the training data through the training pipeline to train the model.- Before each epoch, shuffle the training set.- After each epoch, measure the loss and accuracy of the validation set.- Save the model after training.
###Code
from sklearn.utils import shuffle
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# num_examples = len(X_train)
num_examples = min(len(X_train),len(y_train))
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
#end = offset + BATCH_SIZE # InvalidArgumentError: Incompatible shapes: [128] vs. [58]
# final batch may be less than BATCH_SIZE !!!!
end = min(offset + BATCH_SIZE,num_examples)
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 0.7})
validation_accuracy = evaluate(X_valid, y_valid)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
###Output
Training...
EPOCH 1 ...
Validation Accuracy = 0.678
EPOCH 2 ...
Validation Accuracy = 0.831
EPOCH 3 ...
Validation Accuracy = 0.874
EPOCH 4 ...
Validation Accuracy = 0.882
EPOCH 5 ...
Validation Accuracy = 0.918
EPOCH 6 ...
Validation Accuracy = 0.916
EPOCH 7 ...
Validation Accuracy = 0.925
EPOCH 8 ...
Validation Accuracy = 0.939
EPOCH 9 ...
Validation Accuracy = 0.949
EPOCH 10 ...
Validation Accuracy = 0.942
Model saved
###Markdown
Evaluate the ModelOnce completely satisfied with your model, evaluate the performance of the model on the test set.Only do this once!
###Code
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
###Output
_____no_output_____
###Markdown
--- Step 3: Test a Model on New ImagesTo give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name. Load and Output the Images
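A minimal sketch of this step (it assumes the downloaded pictures have been resized to 32x32 and saved in a hypothetical `new_signs/` folder; the folder name and file format are placeholders, and note that `matplotlib.image.imread` returns values in [0, 1] for PNGs, so they would need rescaling to 0-255 before the (pixel - 128)/128 normalization used above):
```
import glob
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt

new_files = sorted(glob.glob('new_signs/*.png'))            # hypothetical folder of downloaded signs
new_images = np.array([mpimg.imread(f)[:, :, :3] for f in new_files])
for i, img in enumerate(new_images):
    plt.subplot(1, len(new_images), i + 1)
    plt.imshow(img)
plt.show()
```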
###Code
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Predict the Sign Type for Each Image
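A sketch of how the trained network could be applied to the new images (it assumes `new_images_normalized` holds the preprocessed 32x32x3 images from the previous step; that variable name is a placeholder):
```
import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    new_logits = sess.run(logits, feed_dict={x: new_images_normalized, keep_prob: 1.0})
predicted_classes = np.argmax(new_logits, axis=1)
print(predicted_classes)
```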
###Code
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Analyze Performance
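Given the predictions above and a hand-labelled array of true class ids (the values below are placeholders), the accuracy is simply the fraction of matches:
```
import numpy as np

new_labels = np.array([14, 1, 13, 25, 38])   # placeholder class ids for the five downloaded signs
accuracy_new = np.mean(predicted_classes == new_labels)
print("Accuracy on new images = {:.0%}".format(accuracy_new))
```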
###Code
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
###Output
_____no_output_____
###Markdown
Output Top 5 Softmax Probabilities For Each Image Found on the Web For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.htmltop_k) could prove helpful here. The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:``` (5, 6) arraya = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497, 0.12789202], [ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401, 0.15899337], [ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 , 0.23892179], [ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 , 0.16505091], [ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137, 0.09155967]])```Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:```TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202], [ 0.28086119, 0.27569815, 0.18063401], [ 0.26076848, 0.23892179, 0.23664738], [ 0.29198961, 0.26234032, 0.16505091], [ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5], [0, 1, 4], [0, 5, 1], [1, 3, 5], [1, 4, 3]], dtype=int32))```Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
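Applied to this model, the same idea looks roughly like the following (a sketch; `new_images_normalized` is the placeholder batch from the cells above):
```
import tensorflow as tf

softmax = tf.nn.softmax(logits)
top5 = tf.nn.top_k(softmax, k=5)
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    values, indices = sess.run(top5, feed_dict={x: new_images_normalized, keep_prob: 1.0})
print(values)    # top-5 probabilities for each image
print(indices)   # the corresponding class ids
```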
###Code
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
###Output
_____no_output_____
###Markdown
Project WriteupOnce you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file.
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
--- Step 4 (Optional): Visualize the Neural Network's State with Test Images This Section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can understand what the weights of a neural network look like better by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process, for instance if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image. Your output should look something like this (above)
###Code
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
###Output
_____no_output_____ |
AcousticFWI/Acoustic-wave-equation.ipynb | ###Markdown
PDE The acoustic wave equation for the square slowness m and a source q is given by:$$\begin{cases} &m \frac{d^2 u(x,t)}{dt^2} - \nabla^2 u(x,t) = q \\ &u(.,0) = 0 \\ &\frac{d u(x,t)}{dt}|_{t=0} = 0 \end{cases}$$with zero initial conditions to guarantee uniqueness of the solution. The symbolic setup and the numerical examples below are written in two spatial dimensions, but the same construction extends directly to 3D.
###Code
from sympy import symbols, Function, as_finite_diff, solve, lambdify, Eq
import numpy as np
from numpy import linalg as LA
from math import floor
x, y, z, t = symbols('x y z t')
p, M, Q, D, E = Function('p'), Function('M'), Function('Q'), Function('D'), Function('E')
m, s, h = symbols('m s h')
m = M(x, y, z)   # square slowness
q = Q(x, y, t)   # source term
d = D(x, y, t)   # data residual (adjoint source)
e = E(x, y)      # damping profile of the absorbing layer
###Output
_____no_output_____
###Markdown
Time and space discretization as a Taylor expansion. The time discretization is defined as a second order ($O(dt^2)$) centered finite difference to get an explicit Euler scheme that is easy to solve by stepping in time. $$ \frac{d^2 u(x,t)}{dt^2} \simeq \frac{u(x,t+dt) - 2 u(x,t) + u(x,t-dt)}{dt^2} + O(dt^2) $$ We define the space discretization as a Taylor series as well, with the order chosen by the user. This can either be a direct expansion of the second derivative building the Laplacian, or a combination of first order space derivatives. The second option can be a better choice if you want to extend the method to more complex wave equations that chain first order derivatives. $$ \frac{d^2 u(x,t)}{dx^2} \simeq \frac{1}{dx^2} \sum_k \alpha_k \left(u(x+k\, dx,t)+u(x-k\, dx,t)\right) + O(dx^k) $$
###Code
dtt=as_finite_diff(p(x,y,t).diff(t,t), [t-s, t, t+s])
dt=as_finite_diff(p(x,y,t).diff(t), [t-s, t+s])
# Spatial finite differences can easily be extended to higher order by increasing the list of sampling points in the next expression.
# Be sure to keep this stencil symmetric and everything else in the notebook will follow.
dxx=as_finite_diff(p(x,y,t).diff(x,x), [x-h, x, x+h])
dyy=as_finite_diff(p(x,y,t).diff(y,y), [y-h, y, y+h])
dtt, dxx, dyy, dt
###Output
_____no_output_____
###Markdown
Solve forward in time The wave equation with absorbing boundary conditions reads$$ m \frac{d^2 u(x,t)}{dt^2} + \eta \frac{d u(x,t)}{dt} - \nabla^2 u(x,t) = q $$ and the adjoint wave equation is$$ m \frac{d^2 u(x,t)}{dt^2} - \eta \frac{d u(x,t)}{dt} - \nabla^2 u(x,t) = q $$ where $ \eta $ is a damping factor that is zero inside the physical domain and increases inside the absorbing layer from the physical domain to the border.
###Code
# Forward wave equation with source term and damping in the absorbing layer
wave_equation = m*dtt - (dxx+dyy) + e*dt - q
stencil = solve(wave_equation, p(x,y,t+s))[0]
ts=lambdify((p(x,y,t-s),p(x-h,y,t), p(x,y,t), p(x+h,y,t),p(x,y-h,t), p(x,y+h,t), q , m, s, h,e),stencil,"numpy")
eq=Eq(p(x,y,t+s),stencil)
eq
###Output
_____no_output_____
###Markdown
Rewriting the discrete PDE as part of an Inversion Accuracy and rigorousness of the discretization The above expressions are good for modelling. However, if you want to include a wave equation solver in an inversion workflow, a more rigorous study of the discretization must be done. We can rewrite a single time step as follows $$ A_3 u(x,t+dt) = A_1 u(x,t) + A_2 u(x,t-dt) + q(x,t)$$where $ A_1, A_2, A_3 $ are square, invertible matrices, and symmetric in the absence of boundary conditions. In more detail we have:$$\begin{aligned}& A_1 = \frac{2m}{dt^2} + \Delta \\& A_2 = -\frac{m}{dt^2} \\& A_3 = \frac{m}{dt^2}\end{aligned}$$We can then write the action of the adjoint wave equation operator. The adjoint wave equation is defined by $$\begin{cases} &m \frac{d^2 v(x,t)}{dt^2} - \nabla^2 v(x,t) = \delta d \\ &v(.,T) = 0 \\ &\frac{d v(x,t)}{dt}|_{t=T} = 0 \end{cases}$$but by choosing to discretize first we will not discretize this equation. Instead we will take the adjoint of the forward wave equation operator, and by testing that the operator is the true adjoint, we guarantee that we are solving the adjoint wave equation. The single time step for the adjoint wavefield goes backward in time in order to keep an explicit Euler scheme:$$ A_2^T v(x,t-dt) = A_1^T v(x,t) + A_3^T v(x,t+dt) + \delta d(x,t)$$and as $A_2$ and $A_3$ are diagonal matrices there is no issue in inverting them. We can also see that choosing an asymmetric stencil for the spatial derivative may lead to errors, as the Laplacian would no longer be self-adjoint and the actual adjoint finite difference scheme would have to be implemented.
###Code
# Adjoint wave equation
wave_equationA = m*dtt- (dxx+dyy) - D(x,y,t) - e*dt
stencilA = solve(wave_equationA,p(x,y,t-s))[0]
tsA=lambdify((p(x,y,t+s),p(x-h,y,t), p(x,y,t), p(x+h,y,t),p(x,y-h,t), p(x,y+h,t), d , m, s, h,e),stencilA,"numpy")
stencilA
###Output
_____no_output_____
###Markdown
Define the discrete model
###Code
import matplotlib.pyplot as plt
from matplotlib import animation
hstep=25 #space increment d = minv/(10*f0);
tstep=2 #time increment dt < .5 * hstep /maxv;
tmin=0.0 #initial time
tmax=300 #simulate until
xmin=-875.0 #left bound
xmax=875.0 #right bound...assume packet never reaches boundary
ymin=-875.0 #left bound
ymax=875.0 #right bound...assume packet never reaches boundary
f0=.010
t0=1/.010
nbpml=10
nx = int((xmax-xmin)/hstep) + 1 #number of points on x grid
ny = int((ymax-ymin)/hstep) + 1 #number of points on x grid
nt = int((tmax-tmin)/tstep) + 2 #number of points on t grid
xsrc=-400
ysrc=0.0
xrec = nbpml+4
#set source as Ricker wavelet for f0
def source(x,y,t):
r = (np.pi*f0*(t-t0))
val = (1-2.*r**2)*np.exp(-r**2)
if abs(x-xsrc)<hstep/2 and abs(y-ysrc)<hstep/2:
return val
else:
return 0.0
def dampx(x):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if x<nbpml:
return dampcoeff*((nbpml-x)/nbpml)**2
elif x>nx-nbpml-1:
return dampcoeff*((x-nx+nbpml)/nbpml)**2
else:
return 0.0
def dampy(y):
dampcoeff=1.5*np.log(1.0/0.001)/(5.0*hstep);
if y<nbpml:
return dampcoeff*((nbpml-y)/nbpml)**2
elif y>ny-nbpml-1:
return dampcoeff*((y-ny+nbpml)/nbpml)**2
else:
return 0.0
# Velocity models
def smooth10(vel,nx,ny):
out=np.ones((nx,ny))
out[:,:]=vel[:,:]
for a in range(5,nx-6):
out[a,:]=np.sum(vel[a-5:a+5,:], axis=0) /10
return out
# True velocity
vel=np.ones((nx,ny)) + 2.0
vel[floor(nx/2):nx,:]=4.5
mt=vel**-2
# Smooth velocity
v0=smooth10(vel,nx,ny)
m0=v0**-2
dm=m0-mt
###Output
_____no_output_____
###Markdown
Create functions for the PDE The Gradient and Born functions are defined here so that everything is in one place; they are described later.
###Code
def Forward(nt,nx,ny,m):
u=np.zeros((nt,nx,ny))
rec=np.zeros((nt,ny-2))
for ti in range(0,nt):
for a in range(1,nx-1):
for b in range(1,ny-1):
src = source(xmin+a*hstep,ymin+b*hstep,tstep*ti)
damp=dampx(a)+dampy(b)
if ti==0:
u[ti,a,b]=ts(0,0,0,0,0,0,src,m[a,b],tstep,hstep,damp)
elif ti==1:
u[ti,a,b]=ts(0,u[ti-1,a-1,b],u[ti-1,a,b],u[ti-1,a+1,b],u[ti-1,a,b-1],u[ti-1,a,b+1],src,m[a,b],tstep,hstep,damp)
else:
u[ti,a,b]=ts(u[ti-2,a,b],u[ti-1,a-1,b],u[ti-1,a,b],u[ti-1,a+1,b],u[ti-1,a,b-1],u[ti-1,a,b+1],src,m[a,b],tstep,hstep,damp)
if a==xrec :
rec[ti,b-1]=u[ti,a,b]
return rec,u
def Adjoint(nt,nx,ny,m,rec):
v=np.zeros((nt,nx,ny))
srca=np.zeros((nt))
for ti in range(nt-1, -1, -1):
for a in range(1,nx-1):
for b in range(1,ny-1):
if a==xrec:
resid=rec[ti,b-1]
else:
resid=0
damp=dampx(a)+dampy(b)
if ti==nt-1:
v[ti,a,b]=tsA(0,0,0,0,0,0,resid,m[a,b],tstep,hstep,damp)
elif ti==nt-2:
v[ti,a,b]=tsA(0,v[ti+1,a-1,b],v[ti+1,a,b],v[ti+1,a+1,b],v[ti+1,a,b-1],v[ti+1,a,b+1],resid,m[a,b],tstep,hstep,damp)
else:
v[ti,a,b]=tsA(v[ti+2,a,b],v[ti+1,a-1,b],v[ti+1,a,b],v[ti+1,a+1,b],v[ti+1,a,b-1],v[ti+1,a,b+1],resid,m[a,b],tstep,hstep,damp)
if abs(xmin+a*hstep-xsrc)<hstep/2 and abs(ymin+b*hstep-ysrc)<hstep/2:
srca[ti]=v[ti,a,b]
return srca,v
def Gradient(nt,nx,ny,m,rec,u):
v1=np.zeros((nx,ny))
v2=np.zeros((nx,ny))
v3=np.zeros((nx,ny))
grad=np.zeros((nx,ny))
for ti in range(nt-1,-1,-1):
for a in range(1,nx-1):
for b in range(1,ny-1):
if a==xrec:
resid=rec[ti,b-1]
else:
resid=0
damp=dampx(a)+dampy(b)
v3[a,b]=tsA(v1[a,b],v2[a-1,b],v2[a,b],v2[a+1,b],v2[a,b-1],v2[a,b+1],resid,m[a,b],tstep,hstep,damp)
grad[a,b]=grad[a,b]-(v3[a,b]-2*v2[a,b]+v1[a,b])*(u[ti,a,b])
v1,v2,v3=v2,v3,v1
return tstep**-2*grad
def Born(nt,nx,ny,m,dm):
u1=np.zeros((nx,ny))
U1=np.zeros((nx,ny))
u2=np.zeros((nx,ny))
U2=np.zeros((nx,ny))
u3=np.zeros((nx,ny))
U3=np.zeros((nx,ny))
rec=np.zeros((nt,ny-2))
src2=0
for ti in range(0,nt):
for a in range(1,nx-1):
for b in range(1,ny-1):
damp=dampx(a)+dampy(b)
src = source(xmin+a*hstep,ymin+b*hstep,tstep*ti)
u3[a,b]=ts(u1[a,b],u2[a-1,b],u2[a,b],u2[a+1,b],u2[a,b-1],u2[a,b+1],src,m[a,b],tstep,hstep,damp)
src2 = -tstep**-2*(u3[a,b]-2*u2[a,b]+u1[a,b])*dm[a,b]
U3[a,b]=ts(U1[a,b],U2[a-1,b],U2[a,b],U2[a+1,b],U2[a,b-1],U2[a,b+1],src2,m[a,b],tstep,hstep,damp)
if a==xrec :
rec[ti,b-1]=U3[a,b]
u1,u2,u3=u2,u3,u1
U1,U2,U3=U2,U3,U1
return rec
###Output
_____no_output_____
###Markdown
A Forward propagation example
###Code
(rect,ut)=Forward(nt,nx,ny,mt)
fig = plt.figure()
plts = [] # get ready to populate this list the Line artists to be plotted
plt.hold("off")
for i in range(nt):
r = plt.imshow(ut[i,:,:]) # this is how you'd plot a single line...
plts.append( [r] )
ani = animation.ArtistAnimation(fig, plts, interval=50, repeat = False) # run the animation
plt.show()
fig2 = plt.figure()
plt.hold("off")
shotrec = plt.imshow(rect) # this is how you'd plot a single line...
#plt.show()
###Output
_____no_output_____
###Markdown
Adjoint test In order to guarantee we have the gradient, we need to make sure that the solution of the adjoint wave equation is indeed the true adjoint. To do so one should check that$$ \langle A x, y \rangle - \langle x, A^T y \rangle = 0 $$where $A$ is the wave_equation, $A^T$ is wave_equationA and $x, y$ are any random vectors in the range of each operator. This can however be expensive, as these two vectors would be of size $N * n_t$. To test our operator we will relax the test to$$ \langle P_r A^{-1} P_s^T q,\, d \rangle - \langle q,\, P_s A^{-T} P_r^T d \rangle = 0 $$where $P_r, P_s^T$ are the receiver and source projection operators mapping the receiver and source locations and times onto the full domain. This allows us to use only a random source of size $n_t$ at a random position.
###Code
(rec0,u0)=Forward(nt,nx,ny,m0)
(srca,v)=Adjoint(nt,nx,ny,m0,rec0)
plts = [] # get ready to populate this list the Line artists to be plotted
plt.hold("off")
for i in range(0,nt):
r = plt.imshow(v[i,:,:],vmin=-100, vmax=100) # this is how you'd plot a single line...
plts.append( [r] )
ani = animation.ArtistAnimation(fig, plts, interval=50, repeat = False) # run the animation
plt.show()
shotrec = plt.plot(srca) # this is how you'd plot a single line...
#plt.show()
# Actual adjoint test
term1=0
for ti in range(0,nt):
term1=term1+srca[ti]*source(xsrc,ysrc,(ti)*tstep)
term2=LA.norm(rec0)**2
term1,term2,term1-term2,term1/term2
###Output
_____no_output_____
###Markdown
Least square objective GradientWe will consider here the least squares objective, as this is the one in need of an adjoint. The tests that follow are however necessary for any objective and associated gradient in an optimization framework. The objective function can be written$ \min_m \Phi(m) := \frac{1}{2} \| P_r A^{-1}(m) q - d\|_2^2$And its gradient becomes $ \nabla_m \Phi(m) = - (\frac{dA(m)u}{dm})^T v $where $v$ is the solution of the adjoint wave equation. For the simple acoustic case the gradient can be rewritten as $ \nabla_m \Phi(m) = - \sum_{t=1}^{nt} \frac{d^2u(t)}{dt^2} v(t) $
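For reference, a minimal sketch of the last formula, assuming the full forward wavefield u and adjoint wavefield v (both of shape (nt, nx, ny)) are kept in memory. The `Gradient` function defined above computes the same quantity with a rotating three-slice buffer and applies the second time derivative to the adjoint wavefield instead, which is equivalent up to boundary terms.
###Code
import numpy as np
def gradient_from_wavefields(u, v, tstep):
    # second time derivative of the forward wavefield, shape (nt-2, nx, ny)
    d2u_dt2 = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / tstep**2
    # negative zero-lag correlation with the adjoint wavefield, summed over time
    return -np.sum(d2u_dt2 * v[1:-1], axis=0)
###Output
_____no_output_____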
###Code
# Misfit
F0=.5*LA.norm(rec0-rect)**2
F0
Im1=Gradient(nt,nx,ny,m0,rec0-rect,u0)
shotrec = plt.imshow(rect,vmin=-1,vmax=1) # this is how you'd plot a single line...
shotrec = plt.imshow(rec0,vmin=-1,vmax=1) # this is how you'd plot a single line...
shotrec = plt.imshow(rec0-rect,vmin=-.1,vmax=.1) # this is how you'd plot a single line...
shotrec = plt.imshow(Im1,vmin=-1,vmax=1) # this is how you'd plot a single line...
#plt.show()
###Output
_____no_output_____
###Markdown
Adjoint test for the gradientThe adjoint of the FWI gradient is the Born modelling operator, implementing a double propagation forward in time, with the second propagation driven by the first wavefield scaled by the model perturbation: $ J \, dm = - A^{-1}\left(\frac{d^2 (A^{-1}q)}{dt^2} \, dm\right) $
###Code
Im2=Gradient(nt,nx,ny,m0,rec0,u0)
du1=Born(nt,nx,ny,m0,dm)
term11=np.dot((rec0).reshape(-1),du1.reshape(-1))
term21=np.dot(Im2.reshape(-1),dm.reshape(-1))
term11,term21,term11-term21,term11/term21
###Output
_____no_output_____
###Markdown
Jacobian testThe last part is to check that the operators are consistent with the problem. There are then two properties to be satisfied $ U(m + hdm) = U(m) + \mathcal{O} (h) \\ U(m + h dm) = U(m) + h J[m]dm + \mathcal{O} (h^2) $ which are the linearization conditions for the modelling operator. This is a bit slow to run here, but here is the way to test it.1 - Generate data for the true model $m$ 2 - Define a smooth initial model $m_0$ and compute the data $d_0$ for this model 3 - You now have $U(m_0)$ 4 - Define $ dm = m-m_0$ and $ h = \{1,.1,.01,.001,...\}$ 5 - For each $h$ compute $U(m_0 + h dm)$ by generating data for $m_0 + h dm$, and compare it to $U(m_0) + h J[m_0]dm$ where $J[m_0]dm$ is computed with the Born operator 6 - Plot the two error curves in loglog against the $h$ and $h^2$ reference lines
###Code
H=[1,0.1,0.01,.001,0.0001,0.00001,0.000001]
(D1,u0)=Forward(nt,nx,ny,m0)
dub=Born(nt,nx,ny,m0,dm)
error1=np.zeros((7))
error2=np.zeros((7))
for i in range(0,7):
mloc=m0+H[i]*dm
(d,u)=Forward(nt,nx,ny,mloc)
error1[i] = LA.norm(d - D1,ord=1)
error2[i] = LA.norm(d - D1 - H[i]*dub,ord=1)
hh=np.zeros((7))
for i in range(0,7):
hh[i]=H[i]*H[i]
shotrec = plt.loglog(H,error1,H,H) # zeroth-order error against the O(h) reference line
plt.show()
shotrec = plt.loglog(H,error2,H,hh) # first-order (linearization) error against the O(h^2) reference line
plt.show()
###Output
_____no_output_____
###Markdown
Gradient testThe last part is to check that the operators are consistent with the problem. There are then two properties to be satisfied $ \Phi(m + hdm) = \Phi(m) + \mathcal{O} (h) \\ \Phi(m + h dm) = \Phi(m) + h \, (J[m]^T\delta d)^T dm + \mathcal{O} (h^2) $ which are the linearization conditions for the objective. This is a bit slow to run here, but here is the way to test it.1 - Generate data for the true model $m$ 2 - Define a smooth initial model $m_0$ and compute the data $d_0$ for this model 3 - You now have $\Phi(m_0)$ 4 - Define $ dm = m-m_0$ and $ h = \{1,.1,.01,.001,...\}$ 5 - For each $h$ compute $\Phi(m_0 + h dm)$ by generating data for $m_0 + h dm$, and compute the directional derivative $(J[m_0]^T\delta d)^T dm$ from the gradient at $m_0$ 6 - Plot the two error curves in loglog against the $h$ and $h^2$ reference lines
###Code
(DT,uT)=Forward(nt,nx,ny,mt)
(D1,u0)=Forward(nt,nx,ny,m0)
F0=.5*LA.norm(D1-DT)**2
g=Gradient(nt,nx,ny,m0,D1-DT,u0)
G=np.dot(g.reshape(-1),dm.reshape(-1));
error21=np.zeros((7))
error22=np.zeros((7))
for i in range(0,7):
mloc=m0+H[i]*dm
(D,u)=Forward(nt,nx,ny,mloc)
error21[i] = .5*LA.norm(D-DT)**2 -F0
error22[i] = .5*LA.norm(D-DT)**2 -F0 - H[i]*G
shotrec = plt.loglog(H,error21,H,H) # zeroth-order error against the O(h) reference line
plt.show()
shotrec = plt.loglog(H,error22,H,hh) # first-order (linearization) error against the O(h^2) reference line
plt.show()
###Output
_____no_output_____ |
docs/examples/game_dataset.ipynb | ###Markdown
Initialize a GameDataset and some sample operationsA GameDataset instance can contain tracking and events data in a TrackingFrame and an EventsFrame respectively.
###Code
import sys
sys.path.insert(1, '../../')
from codeball import GameDataset, Zones
metadata_file = (r"../../codeball/tests/files/metadata.xml")
tracking_file = (r"../../codeball/tests/files/tracking.txt")
events_file = (r"../../codeball/tests/files/events.json")
game_dataset = GameDataset(
tracking_metadata_file=metadata_file,
tracking_data_file=tracking_file,
events_metadata_file=metadata_file,
events_data_file=events_file,
)
print(type(game_dataset.tracking))
print(type(game_dataset.events))
###Output
<class 'codeball.codeball_frames.TrackingFrame'>
<class 'codeball.codeball_frames.EventsFrame'>
###Markdown
TrackingGameDataset.tracking holds a TrackingFrame with all the tracking data of the game.
###Code
game_dataset.tracking.head()
###Output
_____no_output_____
###Markdown
If you want to filter the TrackingFrame, you can use its methods (on top of all standard DataFrame methods). For example, to get a TrackingFrame only with the data of the team with team_id `FIFATMA` you can do:
###Code
game_dataset.tracking.team('FIFATMA').head()
###Output
_____no_output_____
###Markdown
As a final example, let's say you want to get the x coordinate data only for the field players (excluding the goalkeeper) of team_id `FIFATMA`; you can get that by doing:
###Code
game_dataset.tracking.team('FIFATMA').players('field').dimension('x').head()
###Output
_____no_output_____
###Markdown
EventsSimilarly, GameDataset.events holds an EventsFrame with all the events data of the game, and if you want to filter it, you can do so using its methods. For example, to get all events that go into the opponent box you can do `game_dataset.events.into(Zones.OPPONENT_BOX)` (a quick check of this follows the passes example below), or if you want to get all the passes you can do:
###Code
game_dataset.events.type('PASS').head()
###Output
_____no_output_____
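###Markdown
A quick check of the box filter mentioned above. This assumes, as with the other filters shown in this notebook, that the call returns an EventsFrame supporting the usual DataFrame methods such as `head` (with this sample data the result may of course be empty):
###Code
game_dataset.events.into(Zones.OPPONENT_BOX).head()
###Output
_____no_output_____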
###Markdown
Since in this game tracking and event data come from the same provider, GameDataset.metadata is in this case the same as GameDataset.tracking.metadata and GameDataset.events.metadata. There you can access metadata about the game such as the frame rate, field dimensions, team and player details, etc. For example:
###Code
game_dataset.metadata.teams[0].players[5].name
game_dataset.metadata.frame_rate
game_dataset.metadata.score
###Output
_____no_output_____ |
examples/ex5_spatial_smoothing.ipynb | ###Markdown
Spatial smoothing and estimation of coherent DOAsVarious DOA estimation algorithms fail when coherent signals are present. We demonstrate how spatial smoothing can be applied to decorrelate the coherent signals. Further reading:* S. U. Pillai and B. H. Kwon, "Forward/backward spatial smoothing techniques for coherent signal identification," *IEEE Transactions on Acoustics, Speech, and Signal Processing*, vol. 37, no. 1, pp. 8-15, Jan. 1989.
###Code
import numpy as np
import doatools.model as model
import doatools.estimation as estimation
import doatools.plotting as doaplt
import matplotlib.pyplot as plt
%matplotlib inline
wavelength = 1.0 # normalized
d0 = wavelength / 2
# 10-element ULA
ula = model.UniformLinearArray(10, d0)
# 4 sources
sources = model.FarField1DSourcePlacement(
[-40.0, 0.0, 30.0, 60.0],
unit='deg'
)
# The last two sources are coherent.
P = np.array([
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[0.0, 0.0, 1.0, 1.0],
[0.0, 0.0, 1.0, 1.0]
])
source_signal = model.ComplexStochasticSignal(sources.size, P)
# SNR = 0 dB
noise_signal = model.ComplexStochasticSignal(ula.size, 1.0)
# Collect 100 snapshots.
n_snapshots = 100
y, R = model.get_narrowband_snapshots(
ula, sources, wavelength, source_signal, noise_signal, n_snapshots,
return_covariance=True
)
# Use the default search grid.
grid = estimation.FarField1DSearchGrid(unit='deg')
###Output
_____no_output_____
###Markdown
Without spatial smoothing
###Code
mvdr = estimation.MVDRBeamformer(ula, wavelength, grid)
music = estimation.MUSIC(ula, wavelength, grid)
resv_mvdr, est_mvdr, sp_mvdr = mvdr.estimate(R, sources.size, return_spectrum=True)
resv_mu, est_mu, sp_mu = music.estimate(R, sources.size, return_spectrum=True)
plt.figure()
ax = plt.subplot(1, 2, 1)
doaplt.plot_spectrum(sp_mvdr, grid, ax=ax, ground_truth=sources)
ax.set_title('MVDR')
ax = plt.subplot(1, 2, 2)
doaplt.plot_spectrum(sp_mu, grid, ax=ax, ground_truth=sources)
ax.set_title('MUSIC')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Unfortunately we can only recover the first two. With spatial smoothing
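For intuition about what `estimation.spatial_smooth` computes in the next cell, here is a minimal NumPy sketch of forward-backward spatial smoothing; it is illustrative only, and doatools' implementation and argument conventions may differ.
###Code
import numpy as np
def forward_backward_smooth(R, l):
    # R: (N, N) array covariance matrix; l: number of overlapping subarrays
    N = R.shape[0]
    m = N - l + 1                              # size of each subarray
    Rf = np.zeros((m, m), dtype=complex)
    for i in range(l):                         # average the l shifted subarray covariances
        Rf += R[i:i + m, i:i + m]
    Rf /= l
    J = np.eye(m)[::-1]                        # exchange (flip) matrix
    return 0.5 * (Rf + J @ Rf.conj() @ J)      # forward-backward average
###Output
_____no_output_____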
###Code
# We have a pair of coherent sources.
# Consider two overlapping subarrays of 9 elements.
l = 2
Rss = estimation.spatial_smooth(R, l, True)
# Each subarray has 9 elements.
ula_ss = model.UniformLinearArray(ula.size - l + 1, d0)
mvdr_ss = estimation.MVDRBeamformer(ula_ss, wavelength, grid)
music_ss = estimation.MUSIC(ula_ss, wavelength, grid)
resv_mvdr, est_mvdr, sp_mvdr = mvdr_ss.estimate(Rss, sources.size, return_spectrum=True)
resv_mu, est_mu, sp_mu = music_ss.estimate(Rss, sources.size, return_spectrum=True)
plt.figure()
ax = plt.subplot(1, 2, 1)
doaplt.plot_spectrum(sp_mvdr, grid, ax=ax, ground_truth=sources)
ax.set_title('MVDR')
ax = plt.subplot(1, 2, 2)
doaplt.plot_spectrum(sp_mu, grid, ax=ax, ground_truth=sources)
ax.set_title('MUSIC')
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
assignments/2020/assignment1_jupyter/assignment1/knn.ipynb | ###Markdown
k-Nearest Neighbor (kNN) exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*The kNN classifier consists of two stages:- During training, the classifier takes the training data and simply remembers it- During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples- The value of k is cross-validatedIn this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
###Code
# Run some setup code for this notebook.
import sys, os
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
cd /home/razor/PycharmProjects/cs231n.github.io/assignments/2020/assignment1_jupyter/assignment1
import sys
sys.executable
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
os.makedirs('cs231n/datasets/cifar-10-batches-py/', exist_ok=True)
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print(X_train.shape, X_test.shape)
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
###Output
_____no_output_____
###Markdown
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: 1. First we must compute the distances between all test examples and all train examples. 2. Given these distances, for each test example we find the k nearest examples and have them vote for the labelLets begin with computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in a **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.**Note: For the three distance computations that we require you to implement in this notebook, you may not use the np.linalg.norm() function that numpy provides.**First, open `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
###Code
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
###Output
_____no_output_____
###Markdown
**Inline Question 1** Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)- What in the data is the cause behind the distinctly bright rows?- What causes the columns?$\color{blue}{\textit Your Answer:}$ - The distinctly bright rows correspond to test images that are far (in L2 distance) from every training image, for example because their overall colors or brightness differ from the training data. - The bright columns are caused by training images that are far from all of the test images.
###Code
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
Got 137 / 500 correct => accuracy: 0.274000
###Markdown
You should expect to see approximately `27%` accuracy. Now let's try out a larger `k`, say `k = 5`:
###Code
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
Got 139 / 500 correct => accuracy: 0.278000
###Markdown
You should expect to see a slightly better performance than with `k = 1`. **Inline Question 2**We can also use other distance metrics such as L1 distance.For pixel values $p_{ij}^{(k)}$ at location $(i,j)$ of some image $I_k$, the mean $\mu$ across all pixels over all images is $$\mu=\frac{1}{nhw}\sum_{k=1}^n\sum_{i=1}^{h}\sum_{j=1}^{w}p_{ij}^{(k)}$$And the pixel-wise mean $\mu_{ij}$ across all images is $$\mu_{ij}=\frac{1}{n}\sum_{k=1}^np_{ij}^{(k)}.$$The general standard deviation $\sigma$ and pixel-wise standard deviation $\sigma_{ij}$ is defined similarly.Which of the following preprocessing steps will not change the performance of a Nearest Neighbor classifier that uses L1 distance? Select all that apply.1. Subtracting the mean $\mu$ ($\tilde{p}_{ij}^{(k)}=p_{ij}^{(k)}-\mu$.)2. Subtracting the per pixel mean $\mu_{ij}$ ($\tilde{p}_{ij}^{(k)}=p_{ij}^{(k)}-\mu_{ij}$.)3. Subtracting the mean $\mu$ and dividing by the standard deviation $\sigma$.4. Subtracting the pixel-wise mean $\mu_{ij}$ and dividing by the pixel-wise standard deviation $\sigma_{ij}$.5. Rotating the coordinate axes of the data.$\color{blue}{\textit Your Answer:}$4, 5$\color{blue}{\textit Your Explanation:}$I suppose modification of all the values - test & train sets alike, then, we first write the formula of the classfication given a test image $I_a$:$$ \arg I_a = \arg \min_k \left[\ell_1(I_a,I_k) \right] = \arg \min_k \left[\sum_{i=1}^{h}\sum_{j=1}^{w} \left|p_{ij}^{(k)} - p_{ij}^{(a)}\right| \right] $$The answer depends on the details of the subtraction, but I suppose that the same transformation holds for both the train images ($1\le k\le n$) and the test images ($1\le a\le b$), so:1. $$\left|(p_{ij}^{(k)} - \mu) - (p_{ij}^{(a)} - \mu)\right| = \left|p_{ij}^{(k)} - p_{ij}^{(a)}\right| \implies \ell_1(T_1(I_a), T_1(I_k)) = \ell_1(I_a, I_k) $$2. $$\left|\left.(p_{ij}^{(k)} - \mu)\middle/\sigma\right. - \left.(p_{ij}^{(a)} - \mu)\middle/\sigma\right.\right| = \left.\left|p_{ij}^{(k)} - p_{ij}^{(a)}\right|\middle/|\sigma|\right. \implies \ell_1(T_2(I_a), T_2(I_k)) = \left.\ell_1(I_a, I_k)\middle/|\sigma|\right. $$ which doesn't change the minimal values.3. $$\left|(p_{ij}^{(k)} - \mu_{ij}) - (p_{ij}^{(a)} - \mu_{ij})\right| = \left|p_{ij}^{(k)} - p_{ij}^{(a)}\right| \implies \ell_1(T_3(I_a), T_3(I_k)) = \ell_1(I_a, I_k) $$4. 
$$\left|\left.\frac{p_{ij}^{(k)}-\mu_{ij}}{\sigma_{ij}}\right.-\left.\frac{p_{ij}^{(a)}-\mu_{ij}}{\sigma_{ij}}\right.\right|=\left|\frac{p_{ij}^{(k)}-p_{ij}^{(a)}}{\sigma_{ij}}\right|\,\,\implies\,\,\ell_{1}\left(T_{4}\left(I_{a}\right),T_{4}\left(I_{k}\right)\right)=\sum_{i=1}^{h}\sum_{j=1}^{w}\left|\frac{p_{ij}^{(k)}-p_{ij}^{(a)}}{\sigma_{ij}}\right|$$ which does change the minimal values, for example, in the case where $k=3$, $\left[I_{1}\right]_{ij}=c\delta_{i1}\delta_{j1},\left[I_{2}\right]_{ij}=d\delta_{i1}\delta_{j1}+\delta_{i2}\delta_{j2}$ for some $c$, and $I_{3}=0$, then $$ \frac{\ell_{1}(I_{1},I_{3})}{\ell_{1}(I_{2},I_{3})}=\frac{c}{1+d} $$ while $\mu_{11}=\frac{c+d}{3},\,\mu_{22}=\frac{1}{3},\,\sigma_{11}=\sqrt{\frac{2}{3}(c^{2}-cd+d^{2})},\,\sigma_{22}=\sqrt{\frac{2}{3}}$ and so $$ \frac{\ell_{1}(T_{4}(I_{1}),T_{4}(I_{3}))}{\ell_{1}(T_{4}(I_{2}),T_{4}(I_{3}))}=\frac{\frac{c-0}{\sqrt{\frac{2}{3}(c^{2}-cd+d^{2})}}}{\frac{d-0}{\sqrt{\frac{2}{3}(c^{2}-cd+d^{2})}}+\frac{1-0}{\sqrt{\frac{2}{3}}}}=\frac{c}{d+\sqrt{c^{2}-cd+d^{2}}} $$ then for $d=99,\,c=101$: $$ \frac{\ell_{1}(T_{4}(I_{1}),T_{4}(I_{3}))}{\ell_{1}(T_{4}(I_{2}),T_{4}(I_{3}))}\approx\frac{101}{99+100}<1<\frac{101}{100}=\frac{\ell_{1}(I_{1},I_{3})}{\ell_{1}(I_{2},I_{3})} $$5. This transformation completely changes the axes. Consider a dataset containing three $32\times32$ images - the test image $I_3$ is totally black, $I_1$ has a single extremely bright pixel in the upper-most left-most part of the image, and $I_2$ has a single dim pixel in the middle of the image. A $45^\circ$ rotation will cause the bright pixel to "exit" the image rendering it completely black, and hence identical to the test-image (which is invariant under rotations). So while before the rotation $\ell_1(I_2,I_3) < \ell_1(I_1,I_3)$, after the rotation $\ell_1(T_5(I_2),T_5(I_3)) > \ell_1(T_5(I_1),T_5(I_3))$, so the nearest neighbor of $I_3$ changes.
###Code
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('One loop difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('No loop difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print('No loop version took %f seconds' % no_loop_time)
# You should see significantly faster performance with the fully vectorized implementation!
# NOTE: depending on what machine you're using,
# you might not see a speedup when you go from two loops to one loop,
# and might even see a slow-down.
###Output
Two loop version took 28.342590 seconds
One loop version took 24.499071 seconds
No loop version took 0.107559 seconds
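###Markdown
For reference, the no-loop speedup comes from expanding $\|x - y\|_2^2 = \|x\|^2 - 2 \, x \cdot y + \|y\|^2$, so the whole distance matrix reduces to a few matrix operations. A minimal sketch of the idea (not the graded implementation in `k_nearest_neighbor.py`):
###Code
def pairwise_l2(X_te, X_tr):
    test_sq = np.sum(X_te ** 2, axis=1, keepdims=True)   # (num_test, 1)
    train_sq = np.sum(X_tr ** 2, axis=1)                 # (num_train,)
    cross = X_te.dot(X_tr.T)                             # (num_test, num_train)
    d2 = test_sq - 2.0 * cross + train_sq                # broadcasts to (num_test, num_train)
    return np.sqrt(np.maximum(d2, 0.0))                  # clip tiny negatives from round-off
# sanity check against the looped version computed earlier
print(np.linalg.norm(pairwise_l2(X_test, X_train) - dists, ord='fro'))
###Output
_____no_output_____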
###Markdown
Cross-validationWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
###Code
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
X_train_folds.extend(np.array_split(X_train, num_folds))
y_train_folds.extend(np.array_split(y_train, num_folds))
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
classifier = KNearestNeighbor()
for k in k_choices:
k_to_accuracies[k] = []
for i in range(num_folds):
k_X_train = np.concatenate([*X_train_folds[:i], *X_train_folds[i+1:]], axis=0)
k_y_train = np.concatenate([*y_train_folds[:i], *y_train_folds[i+1:]], axis=0)
classifier.train(k_X_train, k_y_train)
y_test_pred = classifier.predict(X_train_folds[i], k)
num_correct = np.sum(y_test_pred == y_train_folds[i])
k_to_accuracies[k].append(float(num_correct) / len(y_test_pred))
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print('k = %d, accuracy = %f' % (k, accuracy))
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
###Output
Got 141 / 500 correct => accuracy: 0.282000
|
useful_nlp/spacy.ipynb | ###Markdown
spacy model architecture1. spacy doc object (container) is the entry point of the spacy API. It is constructed after passing raw text into a spacy model. NOTE: a spacy model is a pipeline of functions (tokenizer --> tagger --> parser --> NER --> ...), whose output is a doc object. A pipeline is made of components, and different components are responsible for adding different object attributes (linguistic features) to the doc container 2. there are two big categories of classes in spacy: Container Objects and Processing Pipelines3. most linguistic features can be accessed via container objects; there are four container objects i. doc: sequence of tokens ii. token: individual tokens, like a word, punctuation symbol, or space iii. span: a slice of doc, like a sentence or a noun chunk iv. lexeme: an entry in the vocabulary 4. Like many NLP libraries, spaCy encodes all strings to hash values to reduce memory usage and improve efficiency; to see the string representation, use the attribute with the '\_' suffix, like 'pos\_' instead of 'pos'5. API details: https://spacy.io/api/doc basic navigation of spacy API
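A quick setup-and-navigation sketch for the points above; this cell also provides the imports (pandas, displacy) and the `en_core_web_sm` model load that the rest of this notebook assumes.
###Code
import spacy
import pandas as pd                  # used by the attribute tables later in this notebook
from spacy import displacy           # used by the visualization cells later in this notebook
nlp = spacy.load('en_core_web_sm')
print(nlp.pipe_names)                # the processing pipeline components
doc = nlp(u"I love coffee")
print(doc[2].pos, doc[2].pos_)       # hash value vs. string representation of the POS tag
coffee_hash = nlp.vocab.strings[u'coffee']
print(coffee_hash, nlp.vocab.strings[coffee_hash])   # string <-> hash round trip
###Output
_____no_output_____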
###Code
# create spacy entry point - doc object
raw_text = u"Autonomous cars shift insurance liability toward manufacturers"
doc = nlp(raw_text)
# 1. doc object
print(doc.__class__)
# 2. token object -- individual element of a doc
print(doc[0].__class__)
# 3. span object -- a slice of a doc
print(list(doc.noun_chunks)[0].__class__) # a noun chunk
print(list(doc.sents)[0].__class__) # a sentence
# 4. lexeme
print('first lexeme repr in spacy vocab is: {}'.format(list(doc.vocab)[0].text))
print(list(doc.vocab)[0].__class__)
###Output
<class 'spacy.tokens.doc.Doc'>
<class 'spacy.tokens.token.Token'>
<class 'spacy.tokens.span.Span'>
<class 'spacy.tokens.span.Span'>
first lexeme repr in spacy vocab is: convincing
<class 'spacy.lexeme.Lexeme'>
###Markdown
linguistic Features 1. POS tagging
###Code
# get token's linguistic features (pos / dep / lemma / ... at token's level)
data = {}
for token in doc:
data.setdefault('text', []).append(token.text)
data.setdefault('lemma', []).append(token.lemma_)
data.setdefault('pos', []).append(token.pos_)
data.setdefault('tag', []).append(token.tag_)
data.setdefault('dep', []).append(token.dep_)
data.setdefault('shape', []).append(token.shape_)
data.setdefault('is_alpha', []).append(token.is_alpha)
data.setdefault('is_stop', []).append(token.is_stop)
pd.DataFrame(data)
###Output
_____no_output_____
###Markdown
linguistic featuresText: The original word text. Lemma: The base form of the word. POS: The simple part-of-speech tag. Tag: The detailed part-of-speech tag. Dep: Syntactic dependency, i.e. the relation between tokens. Shape: The word shape – capitalisation, punctuation, digits. is alpha: Is the token an alpha character? is stop: Is the token part of a stop list, i.e. the most common words of the language?
###Code
# don't understand what 'amod' means? ask spaCy to explain it
spacy.explain('amod')
###Output
_____no_output_____
###Markdown
2. dependency parsing
###Code
# 1. Noun Chunks
data = {}
for chunk in doc.noun_chunks:
data.setdefault('text', []).append(chunk.text)
data.setdefault('root text', []).append(chunk.root.text)
data.setdefault('root dep', []).append(chunk.root.dep_)
data.setdefault('explain', []).append(spacy.explain(chunk.root.dep_))
data.setdefault('root head text', []).append(chunk.root.head.text)
pd.DataFrame(data)
###Output
_____no_output_____
###Markdown
Noun ChunksText: The original noun chunk text. Root text: The original text of the word connecting the noun chunk to the rest of the parse. Root dep: Dependency relation connecting the root to its head. Root head text: The text of the root token's head.
###Code
# 2. Navigating the parse tree
data = {}
for token in doc:
data.setdefault('text', []).append(token.text)
data.setdefault('dep', []).append(token.dep_)
data.setdefault('explain', []).append(spacy.explain(token.dep_))
data.setdefault('head text', []).append(token.head.text)
data.setdefault('head pos', []).append(token.head.pos_)
data.setdefault('children', []).append([child for child in token.children])
pd.DataFrame(data)
###Output
_____no_output_____
###Markdown
parse treeText: The original token text. Dep: The syntactic relation connecting child to head. Head text: The original text of the token head. Head POS: The part-of-speech tag of the token head. Children: The immediate syntactic dependents of the token.
###Code
# 3. Iterate local sub tree
# interest: find verbs that has a subject
from spacy.symbols import nsubj, VERB
verbs = set()
for possible_subject in doc:
# make sure subject is the nominal subject and its head is a Verb
if possible_subject.dep == nsubj and possible_subject.head.pos == VERB:
# add its head, which is a Verb, to verbs
verbs.add(possible_subject.head)
print('out interest: {}'.format(verbs))
# iterate our interest's subtree
interest = verbs.pop()
print(interest.__class__)
data = {}
for descendant in interest.subtree:
data.setdefault('text', []).append(descendant.text)
data.setdefault('dep', []).append(descendant.dep_)
data.setdefault('explain', []).append(spacy.explain(descendant.dep_))
data.setdefault('n_lefts', []).append(descendant.n_lefts)
data.setdefault('lefts', []).append(list(descendant.lefts))
data.setdefault('n_rights', []).append(descendant.n_rights)
data.setdefault('rights', []).append(list(descendant.rights))
data.setdefault('ancestor', []).append([ancestor.text for ancestor in descendant.ancestors])
pd.DataFrame(data)
# 4. parse tree visualization
displacy.render(doc, style='dep', jupyter = True, options = {'distance': 120})
###Output
_____no_output_____
###Markdown
3. Named Entity Recognition (NER)
###Code
# 1. access Entity via "doc.ents" method at document level
raw_text = u"Apple is looking at buying U.K. startup for $1 billion"
doc = nlp(raw_text)
data = {}
for ent in doc.ents:
data.setdefault('text', []).append(ent.text)
data.setdefault('start_char', []).append(ent.start_char)
data.setdefault('end_char', []).append(ent.end_char)
data.setdefault('label', []).append(ent.label_)
data.setdefault('explain', []).append(spacy.explain(ent.label_))
pd.DataFrame(data)
###Output
_____no_output_____
###Markdown
NER attrText: The original entity text. Start: Index of start of entity in the Doc. End: Index of end of entity in the Doc. Label: Entity label, i.e. type.
###Code
# 2. access Entity at token level
data = {}
for token in doc:
data.setdefault('text', []).append(token.text)
data.setdefault('ENT_IOB (hash)', []).append(token.ent_iob)
data.setdefault('ENT_IOB_', []).append(token.ent_iob_)
data.setdefault('ENT_TYPE_', []).append(token.ent_type_)
data.setdefault('explain', []).append(spacy.explain(token.ent_type_))
pd.DataFrame(data)
###Output
_____no_output_____
###Markdown
IOB SCHEMEI – Token is inside an entity. O – Token is outside an entity. B – Token is the beginning of an entity.
###Code
# 3. over write or re-edit NER at document level
from spacy.tokens import Span
doc = nlp(u"FB is hiring a new Vice President of global policy")
ents = [(e.text, e.start_char, e.end_char, e.label_) for e in doc.ents]
print('Before', ents)
# the model didn't recognise "FB" as an entity :(
ORG = doc.vocab.strings[u'ORG'] # get hash value of entity 'ORG' label
fb_ent = Span(doc, 0, 1, label=ORG) # create a Span, which start from idx 0 to 1, for the new entity
doc.ents = list(doc.ents) + [fb_ent]
ents = [(e.text, e.start_char, e.end_char, e.label_) for e in doc.ents]
print('After', ents)
# 4. Entity visualization
displacy.render(doc, style='ent', jupyter=True)
###Output
_____no_output_____
###Markdown
4. Tokenization
###Code
# 1. add special case tokenization rule
from spacy.symbols import ORTH, LEMMA, POS, TAG # those are hash values
doc = nlp(u'gimme that') # phrase to tokenize
print([w.text for w in doc]) # ['gimme', 'that']
# add special case rule
special_case = [{ORTH: u'gim', LEMMA: u'give', POS: u'VERB'}, {ORTH: u'me'}]
nlp.tokenizer.add_special_case(u'gimme', special_case)
# check new tokenization
print([w.text for w in nlp(u'gimme that')]) # ['gim', 'me', 'that']
# Pronoun lemma is returned as -PRON-!
print([w.lemma_ for w in nlp(u'gimme that')]) # ['give', '-PRON-', 'that']
###Output
['gimme', 'that']
['gim', 'me', 'that']
['give', '-PRON-', 'that']
###Markdown
spacy tokenization algo1. split by space2. handle special case or special rules3. consume prefix4. consume suffix5. consume infix6. can't consume any more, handle as single token
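A quick way to see which of these rules fired for each token is `nlp.tokenizer.explain`, available in spaCy v2.3 and later; each tuple pairs the matched rule key (SPECIAL, PREFIX, SUFFIX, TOKEN, ...) with the produced token text.
###Code
nlp = spacy.load('en_core_web_sm')
for rule, token_text in nlp.tokenizer.explain(u"Let's go to N.Y.!"):
    print(rule, token_text)
###Output
_____no_output_____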
###Code
# 2. Customizing spaCy's Tokenizer class
import re
from spacy.tokenizer import Tokenizer
prefix_re = re.compile(r'''^[\[\("']''')
suffix_re = re.compile(r'''[\]\)"']$''')
infix_re = re.compile(r'''[-~]''')
simple_url_re = re.compile(r'''^https?://''')
def custom_tokenizer(nlp):
return Tokenizer(nlp.vocab, prefix_search=prefix_re.search,
suffix_search=suffix_re.search,
infix_finditer=infix_re.finditer,
token_match=simple_url_re.match)
# the last one is optional: token_match matching strings
# that should never be split, overriding the previous rules.
# Useful for things like URLs or numbers.
nlp = spacy.load('en_core_web_sm')
nlp.tokenizer = custom_tokenizer(nlp)
doc = nlp(u"hello-world.")
print([t.text for t in doc])
# 3. Hooking an arbitrary tokenizer into the pipeline
# NOTE: tokenizer is the first component in spacy model process pipeline,
# NOTE: unlike other component, its input is raw text, output is doc object
from spacy.tokens import Doc
class WhitespaceTokenizer(object):
def __init__(self, vocab):
self.vocab = vocab
def __call__(self, text):
words = text.split(' ')
# All tokens 'own' a subsequent space character in this tokenizer
spaces = [True] * len(words)
return Doc(self.vocab, words=words, spaces=spaces)
# load model to get vocab
nlp = spacy.load('en_core_web_sm')
# need vocal to construct tokenizer object
# assign tokenizer to model.tokenizer
nlp.tokenizer = WhitespaceTokenizer(nlp.vocab)
# construct doc
doc = nlp(u"What's happened to me? he thought. It wasn't a dream.")
print([t.text for t in doc])
###Output
["What's", 'happened', 'to', 'me?', 'he', 'thought.', 'It', "wasn't", 'a', 'dream.']
###Markdown
5. Sentence Segmentation
###Code
# 1. access sentences via doc.sents method -- returns a generator
# spaCy uses the dependency parse to determine sentence boundaries
doc = nlp(u"This is a sentence. This is another sentence.")
for sent in doc.sents:
print(sent.text)
# 2. setting boundaries manually -- there are three ways to do it
# i. add custom pipeline component before dependency parser
text = u"this is a sentence...hello...and another sentence."
doc = nlp(text)
print('Before:', [sent.text for sent in doc.sents])
def set_custom_boundaries(doc):
for token in doc[:-1]:
if token.text == '...':
doc[token.i+1].is_sent_start = True
return doc
nlp.add_pipe(set_custom_boundaries, before='parser')
doc = nlp(text)
print('After:', [sent.text for sent in doc.sents])
# ii. add Rule-based pipeline component -- this will remove dependency parser
from spacy.lang.en import English
nlp = English() # just the language with no model
sbd = nlp.create_pipe('sentencizer') # The sentencizer component splits sentences on punctuation like ., ! or ?
nlp.add_pipe(sbd)
doc = nlp(u"This is a sentence. This is another sentence.")
print(doc.is_parsed)
for sent in doc.sents:
print(sent.text)
# iii. add Custom rule-based strategy -- only modify strategy within a pipeline component, it also remove dependency parser
from spacy.lang.en import English
from spacy.pipeline import SentenceSegmenter
def split_on_newlines(doc):
"""
This is the strategy function.
The strategy should be a function that takes a Doc object and yields a Span for each sentence
"""
start = 0
seen_newline = False
for word in doc:
if seen_newline and not word.is_space:
yield doc[start:word.i]
start = word.i
seen_newline = False
elif word.text == '\n':
seen_newline = True
if start < len(doc):
yield doc[start:len(doc)]
nlp = English() # just the language with no model
sbd = SentenceSegmenter(nlp.vocab, strategy=split_on_newlines)
nlp.add_pipe(sbd)
doc = nlp(u"This is a sentence\n\nThis is another sentence\nAnd more")
print(doc.is_parsed)
for sent in doc.sents:
print(sent.text)
###Output
False
This is a sentence
This is another sentence
And more
|
Marshall_Stability_Enhanced.ipynb | ###Markdown
Marshall Stability
###Code
#%% IMPORTS
#BASICS
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import math
from numpy import absolute
from pandas.plotting import scatter_matrix
from sklearn.pipeline import make_pipeline
from IPython.display import display, Markdown, Latex
pd.options.display.max_columns = None
#STATISTICS
from scipy.stats import normaltest
from scipy import stats
#ML TRAINING AND DATA PREPROCESSING
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.preprocessing import PolynomialFeatures
#ML MODELS
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import BayesianRidge
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import Ridge
from sklearn.linear_model import SGDRegressor
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from xgboost import XGBRegressor
import xgboost as xgb
from xgboost import plot_importance
#MODEL EVALUATION
from sklearn.model_selection import cross_validate
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import RepeatedKFold
#METRICS
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
###Output
_____no_output_____
###Markdown
1. Methods
###Code
#Eliminate outliers based on the interquartile range (IQR)
#dataFrame: Data frame where the outliers will be eliminated.
#columnName: the name of the column where the outliers will be identified.
def eliminateOutliers (dataFrame, columnName):
Q1 = dataFrame[columnName].quantile(0.25)
Q3 = dataFrame[columnName].quantile(0.75)
IQR = Q3 - Q1
print('Initial dataframe size: '+str(dataFrame.shape))
dataFrame = dataFrame[(dataFrame[columnName] < (Q3 + 1.5 * IQR)) & (dataFrame[columnName] > (Q1 - 1.5 * IQR))]
print('Final dataframe size: '+str(dataFrame.shape))
return dataFrame
# Create the boxplot graphs for the categorical variables
# dataFrame: Data frame associated to the property of interest (dfAirVoids, dfMS, dfMF, dfITS, dfTSR)
# propertyOfInterest: the name of the column where the property of interest is located.
# columnName1...4: The categorical columns to evaluate.
def displayBoxPlotGraphs (dataFrame, propertyOfInterest, columnName1, columnName2, columnName3, columnName4):
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15,10))
sns.boxplot(y = propertyOfInterest, x = columnName1, data=dataFrame, orient='v' , ax=ax1)
sns.boxplot(y = propertyOfInterest, x = columnName2, data=dataFrame, orient='v' , ax=ax2)
sns.boxplot(y = propertyOfInterest, x= columnName3, data=dataFrame, orient='v' , ax=ax3)
sns.boxplot(y= propertyOfInterest, x= columnName4, data=dataFrame, orient='v' , ax=ax4)
###Output
_____no_output_____
###Markdown
2. Data Import
###Code
#%%DATA READING AND INITIAL PREPROCESSING
numericColumns = ['Aggregate absorption (%)',
'Apparent specific gravity',
0.075,
0.3,
0.6,
2.36,
4.75,
9.5,
12.5,
19,
'Plastic particle size (mm)',
'Mixing speed (RPM)',
'Mixing Temperature',
'Mixing Time (hours)',
'Plastic Addition by bitumen weight (%)',
'Bitumen content in the sample'
]
categoricalColumns = ['Modified asphalt Mix?',
'Agreggate Type',
'Aggregate absorption [%]',
'Filler used',
'Consolidated bitumen penetration grade',
'New Plastic Type',
'Plastic pretreatment',
'Plastic shape',
'Plastic Size',
'Mixing Process',
'Plastic melted previous to addition?',
'Aggregates replacement ?',
'Bitumen replacement?',
'Filler replacement',
'Property',
'Units']
#It returns the dataframe of interest based on the property - 'AirVoids', 'MS', 'MF', 'ITS', 'TSR'
def returnDf (propertyOfInterest):
df = pd.read_excel('fileML.xlsx', sheet_name = propertyOfInterest, engine='openpyxl')
df = df.set_index(propertyOfInterest + ' ID')
df.loc[:,:'Units'] = df.loc[:,:'Units'].applymap(str)
df.loc[:,:'Units'] = df.loc[:,:'Units'] .applymap(str.strip)
df.replace('NS', np.nan, inplace = True)
df[numericColumns] = df[numericColumns].replace('N/a', 0).astype(float)
return df
dfMS = returnDf('MS')
###Output
_____no_output_____
###Markdown
3. Data Exploration 3.1 Total Sample
###Code
dfMS = eliminateOutliers(dfMS, 'MS of the sample (kN)')
dfMS.iloc[:,2:].describe(include = 'all')
###Output
_____no_output_____
###Markdown
I might have a problem with the $\color{red}{\text{Aggregate absorption}}$ because more than 20% of the data is missing. Regarding the $\color{red}{\text{MS}}$, there is a high dispersion ($\sigma$ = 4.56), and the mean seems reasonable. According to the Australian standards, the minimum required value of the Marshall stability is between two and eight.
###Code
scatter_matrix(dfMS[['Aggregate absorption (%)', 'Apparent specific gravity', 'Bitumen content in the sample', 'MS of the sample (kN)']], figsize=(10, 10))
plt.show()
plt.figure(figsize=(16, 6))
heatmap = sns.heatmap(dfMS.corr(), vmin=-1, vmax=1, annot=True)
heatmap.set_title('Correlation Heatmap MS', fontdict={'fontsize':12}, pad=12);
###Output
_____no_output_____
###Markdown
Interestingly, there is a positive correlation between $\color{red}{\text{MS and apparent specific gravity}}$ and between $\color{red}{\text{MS and plastic addition by bitumen weight}}$.
###Code
displayBoxPlotGraphs(dataFrame = dfMS, propertyOfInterest = "MS of the sample (kN)", columnName1 = "Agreggate Type", columnName2 = "Filler used", columnName3 = "Consolidated bitumen penetration grade", columnName4 = "Modified asphalt Mix?")
###Output
_____no_output_____
###Markdown
* As with the air voids, there is an MS difference among the samples that employed the 40/50 bitumen; however, it is important to note that the sample size for this group was not representative enough.* Samples with plastic modification tend to have higher MS. The glue effect of the plastic and the stiffness increase of the bitumen might serve as valid explanations.* No significant difference among the aggregate types and fillers 3.2 Modified mixtures
###Code
dfMSModvsUnmod = dfMS [['Modified asphalt Mix?', 'MS of the sample (kN)']]
dfMSModvsUnmod.groupby(['Modified asphalt Mix?'], as_index=False).describe()
dfMSModified = dfMS[dfMS['Modified asphalt Mix?'] == 'Yes']
dfMSModified.describe(include = "all")
columnsOfInteres = numericColumns[0:2]+numericColumns[10:]+['MS of the sample (kN)']
scatter_matrix(dfMSModified[columnsOfInteres], figsize=(25, 20))
plt.show()
plt.figure(figsize=(16, 6))
heatmap = sns.heatmap(dfMSModified.corr(), vmin=-1, vmax=1, annot=True)
heatmap.set_title('Correlation Heatmap MS', fontdict={'fontsize':12}, pad=12)
###Output
_____no_output_____
###Markdown
$\color{red}{\text{Apparent specific gravity}}$ presents the highest correlation with $\color{red}{\text{MS}}$; however, it only has 66 observations, so it is not a convincing result. Other parameters, such as $\color{red}{\text{plastic content}}$ and $\color{red}{\text{gradation}}$, present a slight effect on the MS.
###Code
displayBoxPlotGraphs(dataFrame = dfMSModified, propertyOfInterest = "MS of the sample (kN)", columnName1 = "Agreggate Type", columnName2 = "Plastic shape", columnName3 = "New Plastic Type", columnName4 = "Mixing Process")
###Output
_____no_output_____
###Markdown
The means of the **dry** and **wet** processes are not significantly different. 3.3 Wet vs. Dry Mixing
###Code
dfMSWetvsDry = dfMSModified [['Mixing Process', 'MS of the sample (kN)']]
dfMSWetvsDry.groupby(['Mixing Process'], as_index=False).describe()
sns.pairplot(dfMSModified[columnsOfInteres+['Mixing Process']], hue="Mixing Process", height=2.5)
###Output
_____no_output_____
###Markdown
**Marshall Stability summary:*** There are missing values mainly in $\color{red}{\text{Apparent specific gravity}}$, $\color{red}{\text{Aggregate type}}$ and $\color{red}{\text{filler used}}$.* Four outliers were eliminated. The final total sample included 402 data points ($\mu$ = 14.47, $\sigma$ = 4.6). * $\color{red}{\text{Aggregate absorption}}$ seems to be a critical variable to include, but the percentage of missing values is more than 20%.* $\color{red}{\text{Apparent specific gravity}}$ presents the strongest positive correlation with the Marshall stability, but it is not a reliable inference because it presents many missing points (318 missing points).* Although the Marshall stability of modified asphalts is relatively higher than that of unmodified ones, this is not certain because of the high variances of both sample groups. $\mu_{modified}$ = 15.12 vs. $\mu_{unmodified}$ = 11.97* $\color{red}{\text{Percentage of plastic addition}}$ has a noticeable positive correlation with MS (r = 0.39). * MS of dry and wet are really similar -> $\mu_{Dry}$ = 15.05 (200 observations) vs $\mu_{Wet}$ = 15.2 (119 observations) 4. Data Pre-processing
###Code
dfMS.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 402 entries, 1 to 406
Data columns (total 34 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Article ID 402 non-null object
1 Global ID 402 non-null object
2 Modified asphalt Mix? 402 non-null object
3 Agreggate Type 262 non-null object
4 Aggregate absorption (%) 242 non-null float64
5 Apparent specific gravity 84 non-null float64
6 0.075 325 non-null float64
7 0.3 372 non-null float64
8 0.6 344 non-null float64
9 2.36 355 non-null float64
10 4.75 372 non-null float64
11 9.5 344 non-null float64
12 12.5 357 non-null float64
13 19 372 non-null float64
14 Filler used 161 non-null object
15 Bitumen Type Penetration Grade 402 non-null object
16 Consolidated bitumen penetration grade 402 non-null object
17 New Plastic Type 377 non-null object
18 Plastic pretreatment 402 non-null object
19 Plastic shape 402 non-null object
20 Plastic Size 320 non-null object
21 Plastic particle size (mm) 307 non-null float64
22 Mixing Process 402 non-null object
23 Mixing speed (RPM) 371 non-null float64
24 Mixing Temperature 385 non-null float64
25 Mixing Time (hours) 372 non-null float64
26 Aggregates replacement ? 402 non-null object
27 Bitumen replacement? 402 non-null object
28 Filler replacement 402 non-null object
29 Plastic Addition by bitumen weight (%) 400 non-null float64
30 Property 402 non-null object
31 Units 402 non-null object
32 Bitumen content in the sample 399 non-null float64
33 MS of the sample (kN) 402 non-null float64
dtypes: float64(17), object(17)
memory usage: 109.9+ KB
###Markdown
Pre-processing:1. Eliminate the columns $\color{red}{\text{Article ID}}$, $\color{red}{\text{Global ID}}$, $\color{red}{\text{Aggregate type}}$, $\color{red}{\text{Apparent specific gravity}}$, $\color{red}{\text{filler used}}$, $\color{red}{\text{Bitumen type penetration}}$, $\color{red}{\text{Property}}$, $\color{red}{\text{plastic size}}$ and $\color{red}{\text{Units}}$.2. Change the N/a to zero. This is for the unmodified mixtures.3. Eliminate rows with missing values in $\color{red}{\text{New Plastic Type}}$, $\color{red}{\text{Plastic addition by bitumen weight}}$ and $\color{red}{\text{bitumen}}$ content in sample4. Change categorical columns to numeric.5. Imputer to $\color{red}{\text{Aggregate absorption}}$, $\color{red}{\text{gradation}}$, $\color{red}{\text{plastic size(mm)}}$, and $\color{red}{\text{mixing parameters}}$.
###Code
#Categorical Variables
dfMSCleaned = dfMS.drop(['Article ID',
'Global ID',
'Modified asphalt Mix?',
'Agreggate Type',
'Apparent specific gravity',
'Filler used',
'Bitumen Type Penetration Grade',
'Property',
'Units',
'Plastic Size' ], axis = 1)
dfMSCleaned = dfMSCleaned.replace('N/a', 0)
dfMSCleaned = dfMSCleaned.dropna(subset=['New Plastic Type',
'Plastic Addition by bitumen weight (%)',
'Bitumen content in the sample'])
dfMSCleaned = pd.get_dummies(dfMSCleaned, columns=['New Plastic Type'], drop_first = False)
dfMSCleaned = pd.get_dummies(dfMSCleaned, drop_first = True)
dfMSCleaned = dfMSCleaned.drop(['New Plastic Type_0'], axis = 1)
dfMSCleaned.info()
#IMPUTATION OF MISSING VALUES
imputer = IterativeImputer (estimator = ExtraTreesRegressor(n_estimators=10, random_state=123), max_iter=50)
n = imputer.fit_transform(dfMSCleaned)
dfMSCleanedImputed = pd.DataFrame(n, columns = list(dfMSCleaned.columns))
dfMSCleanedImputed.info()
print ('There are '+str(sum(n < 0 for n in dfMSCleanedImputed.values.flatten()))+' negative values in the new DataFrame')
dfMSCleanedImputed['New Plastic Type_Nylon'] = dfMSCleanedImputed['New Plastic Type_Nylon'] * dfMSCleanedImputed['Plastic Addition by bitumen weight (%)']
dfMSCleanedImputed['New Plastic Type_PE'] = dfMSCleanedImputed['New Plastic Type_PE'] * dfMSCleanedImputed['Plastic Addition by bitumen weight (%)']
dfMSCleanedImputed['New Plastic Type_PET'] = dfMSCleanedImputed['New Plastic Type_PET'] * dfMSCleanedImputed['Plastic Addition by bitumen weight (%)']
dfMSCleanedImputed['New Plastic Type_PP'] = dfMSCleanedImputed['New Plastic Type_PP'] * dfMSCleanedImputed['Plastic Addition by bitumen weight (%)']
dfMSCleanedImputed['New Plastic Type_PU'] = dfMSCleanedImputed['New Plastic Type_PU'] * dfMSCleanedImputed['Plastic Addition by bitumen weight (%)']
dfMSCleanedImputed['New Plastic Type_PVC'] = dfMSCleanedImputed['New Plastic Type_PVC'] * dfMSCleanedImputed['Plastic Addition by bitumen weight (%)']
dfMSCleanedImputed['New Plastic Type_Plastic Mix'] = dfMSCleanedImputed['New Plastic Type_Plastic Mix'] * dfMSCleanedImputed['Plastic Addition by bitumen weight (%)']
dfMSCleanedImputed['New Plastic Type_e-waste'] = dfMSCleanedImputed['New Plastic Type_e-waste'] * dfMSCleanedImputed['Plastic Addition by bitumen weight (%)']
dfMSCleanedImputed = dfMSCleanedImputed.drop(['Plastic Addition by bitumen weight (%)'], axis = 1)
scaler = MinMaxScaler()
dfMSCleanedImputedScaled = pd.DataFrame(scaler.fit_transform(dfMSCleanedImputed), columns = list(dfMSCleanedImputed.columns))
dfMSCleanedImputedScaled.to_clipboard()
###Output
_____no_output_____
###Markdown
5. Model Training
###Code
min = dfMSCleanedImputed['MS of the sample (kN)'].min()
max = dfMSCleanedImputed['MS of the sample (kN)'].max()
print('The min value is: '+str(min)+'. The max value is: '+str(max))
#Method that prints the best parameters, R2 and MSE based on a grid search.
def printBestModelAdv (grid, estimator = n, advancedAnalysis = False):
min = dfMSCleanedImputed['MS of the sample (kN)'].min()
max = dfMSCleanedImputed['MS of the sample (kN)'].max()
mse = -grid.best_score_
print('Best Parameters:' , grid.best_params_)
print('Best Test MSE: ' + str(mse))
print('Std of the Test MSE:' + str(grid.cv_results_['std_test_neg_mean_squared_error'][grid.best_index_]))
print('Best Test RMSE: ' +str(math.sqrt(mse)))
print('Best Test scaled RMSE: ' +str((math.sqrt(mse)*(max-min))+min))
print('Best Test scaled MSE: ' +str(((math.sqrt(mse)*(max-min))+min)**2))
print('Best Test R2: ' + str(grid.cv_results_['mean_test_r2'][grid.best_index_]))
if (advancedAnalysis):
bestEstimator = estimator
bestEstimator.fit(X_train, y_train)
predictionsTrain = bestEstimator.predict(X_train)
df = pd.DataFrame({'predictions':predictionsTrain, 'original': y_train})
df.plot.hist(bins=10, alpha=0.5)
unScaledDf = (df*(max-min))+min
print (unScaledDf.describe())
X = dfMSCleanedImputedScaled.loc[:, dfMSCleanedImputedScaled.columns != 'MS of the sample (kN)']
X.columns = X.columns.astype(str)
y = dfMSCleanedImputedScaled.loc[:,'MS of the sample (kN)']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=123)
cv = RepeatedKFold(n_splits = 5, n_repeats = 10, random_state = 123)
y_train_new = y_train.to_frame()
y_train_new['y_train_unscaled'] = (y_train*(max-min))+min
y_train_new.describe()
sns.set_style('darkgrid')
fig, ax = plt.subplots()
sns.histplot(x=y_train_new["y_train_unscaled"], bins=10, kde=True, ax = ax)
ax.set(xlabel='Marshall Stability (kN)')
ax.axvline(x=5.33, label='Minimum accepted value' , linestyle = '--', color='k')
ax.set_title('(b)')
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
5.1 Model Evaluation Linear Model
###Code
param_grid = {'fit_intercept': [True, False],
'positive': [True, False]}
grid = GridSearchCV(LinearRegression(), param_grid, cv = cv, scoring=['neg_mean_squared_error', 'r2'], refit = 'neg_mean_squared_error', return_train_score= True)
grid.fit(X_train, y_train)
printBestModelAdv(grid)
###Output
_____no_output_____
###Markdown
Lasso Linear Model
###Code
param_grid = {'alpha': [0.001,1, 10, 15, 30, 50, 100],
'fit_intercept':[True, False],
'positive': [True, False]}
grid = GridSearchCV(Lasso(), param_grid, cv=cv, scoring=['neg_mean_squared_error', 'r2'], refit = 'neg_mean_squared_error', return_train_score= True)
grid.fit(X_train, y_train)
printBestModelAdv(grid)
###Output
_____no_output_____
###Markdown
Ridge Linear regression model
###Code
param_grid = {'alpha': [7, 8, 10,100],
'fit_intercept': [True, False],
'solver': [ 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag', 'saga']}
grid = GridSearchCV(Ridge(), param_grid, cv=cv, scoring=['neg_mean_squared_error', 'r2'], refit = 'neg_mean_squared_error')
grid.fit(X_train, y_train)
printBestModelAdv(grid)
###Output
_____no_output_____
###Markdown
Linear Elastic net
###Code
param_grid = {'alpha': [0.01,1,2,3,4],
'fit_intercept': [True, False]}
grid = GridSearchCV(ElasticNet(), param_grid, cv=cv, scoring=['neg_mean_squared_error', 'r2'], refit = 'neg_mean_squared_error')
grid.fit(X_train, y_train)
printBestModelAdv(grid)
###Output
_____no_output_____
###Markdown
Polynomial model
###Code
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree), LinearRegression(**kwargs))
param_grid = {'polynomialfeatures__degree': [2,3],
'linearregression__fit_intercept': [True, False],
'linearregression__positive':[True, False]}
grid = GridSearchCV(PolynomialRegression(), param_grid, cv=cv, scoring=['neg_mean_squared_error', 'r2'], refit = 'neg_mean_squared_error')
grid.fit(X_train, y_train)
printBestModelAdv(grid)
###Output
_____no_output_____
###Markdown
Lasso Polynomial model
###Code
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree), Lasso(**kwargs))
param_grid = {'polynomialfeatures__degree': [2,3],
'lasso__alpha': [1,2, 3, 10, 15, 30],
'lasso__fit_intercept':[True, False],
'lasso__positive': [True, False],
'lasso__max_iter': [2000,3000, 3500]}
grid = GridSearchCV(PolynomialRegression(), param_grid, cv=cv, scoring=['neg_mean_squared_error', 'r2'], refit = 'neg_mean_squared_error', return_train_score= True)
grid.fit(X_train, y_train)
printBestModelAdv(grid)
###Output
_____no_output_____
###Markdown
Ridge polynomial model
###Code
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree), Ridge(**kwargs))
param_grid = {'polynomialfeatures__degree': [2,3],
'ridge__alpha':[20,30,50, 60],
'ridge__fit_intercept': [True, False],
'ridge__solver': [ 'lsqr', 'cholesky', 'sparse_cg', 'auto']}
grid = GridSearchCV(PolynomialRegression(), param_grid, cv=cv, scoring=['neg_mean_squared_error', 'r2'], refit='neg_mean_squared_error')
grid.fit(X_train, y_train)
printBestModelAdv(grid)
###Output
_____no_output_____
###Markdown
Support vector regression
###Code
param_grid = {
'kernel':['linear','rbf', 'sigmoid', 'poly'],
'degree':[2,3,4],
'C':[0.01,1,5,10],
'epsilon':[0.1,0.2, 1, 1.5]
}
grid = GridSearchCV(SVR(), param_grid, cv=cv, scoring=['neg_mean_squared_error', 'r2'], refit='neg_mean_squared_error')
grid.fit(X_train, y_train)
printBestModelAdv(grid)
###Output
_____no_output_____
###Markdown
Decision Tree regressor
###Code
param_grid = {
'max_depth':[1,2,3,5,10,30],
'min_samples_split':[2,3,4],
'min_samples_leaf':[0.4,1,2]
}
grid = GridSearchCV(DecisionTreeRegressor(), param_grid, cv=cv, scoring=['neg_mean_squared_error', 'r2'], refit='neg_mean_squared_error')
grid.fit(X_train, y_train)
printBestModelAdv(grid)
###Output
_____no_output_____
###Markdown
Random Forest
###Code
param_grid = {
'bootstrap': [True, False],
'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, None],
'max_features': ['auto', 'sqrt'],
'min_samples_leaf': [1, 2, 4],
'min_samples_split': [2, 5, 10],
'n_estimators': [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
}
grid = RandomizedSearchCV(RandomForestRegressor(), param_grid, cv=cv, scoring=['r2','neg_mean_squared_error'], refit='neg_mean_squared_error', n_iter=10)
grid.fit(X_train, y_train)
printBestModelAdv(grid, RandomForestRegressor(**grid.best_params_), True)
###Output
_____no_output_____
###Markdown
Extra tree regressor
###Code
param_grid = {
'bootstrap': [True],
'max_depth': [50],
'max_features': ['auto'],
'min_samples_leaf': [1,2,4,5,10],
'min_samples_split': [2],
'n_estimators': [400]
}
grid = GridSearchCV(ExtraTreesRegressor(), param_grid, cv=cv, scoring=['r2','neg_mean_squared_error'], refit='neg_mean_squared_error')
grid.fit(X_train, y_train)
printBestModelAdv(grid, ExtraTreesRegressor(**grid.best_params_), True)
param_grid = {
'bootstrap': [True, False],
'max_depth': [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, None],
'max_features': ['auto', 'sqrt'],
'min_samples_leaf': [1, 2, 4],
'min_samples_split': [2, 5, 10],
'n_estimators': [200, 400, 600, 800, 1000, 1200, 1400, 1600, 1800, 2000]
}
grid = RandomizedSearchCV(ExtraTreesRegressor(), param_grid, cv=cv, scoring=['r2','neg_mean_squared_error'], refit='neg_mean_squared_error', n_iter=10)
grid.fit(X_train, y_train)
printBestModelAdv(grid, ExtraTreesRegressor(**grid.best_params_), True)
###Output
_____no_output_____
###Markdown
XG Boost Regressor
###Code
param_grid = {
'n_estimators': [100,300,500,1000]
}
grid = GridSearchCV(XGBRegressor(random_state=123), param_grid, cv=cv, scoring=['neg_mean_squared_error', 'r2'], refit='neg_mean_squared_error')
grid.fit(X_train, y_train)
printBestModelAdv(grid, XGBRegressor(**grid.best_params_), True)
X_train.columns
X_train2 = X_train.copy()
X_train2.rename(columns={'0.075':'Grad. Sieve size 0.075', '0.3':'Grad. Sieve size 0.3', '0.6':'Grad. Sieve size 0.6', '2.36':'Grad. Sieve size 2.36', '4.75':'Grad. Sieve size 4.75','9.5':'Grad. Sieve size 9.5', '12.5':'Grad. Sieve size 12.5', '19':'Grad. Sieve size 19',
'Plastic particle size (mm)':'Plastic size', 'Mixing speed (RPM)':'Mixing speed',
'Mixing Time (hours)':'Mixing Time',
'Bitumen content in the sample':'Bitumen content', 'New Plastic Type_Nylon':'Plastic Type_Nylon',
'New Plastic Type_PE':'Plastic Type_PE', 'New Plastic Type_PET':'Plastic Type_PET', 'New Plastic Type_PP':'Plastic Type_PP',
'New Plastic Type_PU':'Plastic Type_PU', 'New Plastic Type_PVC':'Plastic Type_PVC',
'New Plastic Type_Plastic Mix':'Plastic Type_Plastic Mix', 'New Plastic Type_e-waste':'Plastic Type_e-waste' ,
'Consolidated bitumen penetration grade_50/70':'Bitumen grade_50/70',
'Consolidated bitumen penetration grade_70/100':'Bitumen grade_70/100',
}, inplace=True)
#Graph employed for selecting important features during tuning
XGBoostModel = XGBRegressor(random_state=123)
XGBoostModel.fit(X_train2,y_train)
ax = plot_importance(XGBoostModel, height=0.8, importance_type='weight', show_values=False, title=None, max_num_features = 20)
fig = ax.figure
plt.xlabel('Weight', fontsize=20)
plt.ylabel('Features', fontsize=20)
plt.title('(a)',fontsize= 22)
fig.set_size_inches(8,8)
###Output
_____no_output_____
###Markdown
6. Best Model Tuning
###Code
X_train.columns = X_train.columns.astype(str)
cv = RepeatedKFold(n_splits = 10, n_repeats = 10, random_state = 123)
###Output
_____no_output_____
###Markdown
6.1. Feature selection
###Code
features_MSE = {}
def addMSE (columns, string):
cv_results = cross_validate(XGBRegressor(random_state = 123), X_train[columns], y_train, cv = cv, scoring = ['neg_mean_squared_error'])
MSE = np.average(-cv_results['test_neg_mean_squared_error'])
features_MSE[string] = MSE
X_train.columns
addMSE(['New Plastic Type_Nylon','New Plastic Type_PE', 'New Plastic Type_PET', 'New Plastic Type_PP',
'New Plastic Type_PU', 'New Plastic Type_PVC','New Plastic Type_Plastic Mix', 'New Plastic Type_e-waste'],
'Plastic type')
addMSE(['New Plastic Type_Nylon','New Plastic Type_PE', 'New Plastic Type_PET', 'New Plastic Type_PP',
'New Plastic Type_PU', 'New Plastic Type_PVC','New Plastic Type_Plastic Mix', 'New Plastic Type_e-waste',
'Bitumen content in the sample'],
       'Plastic type \n Bitumen cont.')
addMSE(['New Plastic Type_Nylon','New Plastic Type_PE', 'New Plastic Type_PET', 'New Plastic Type_PP',
'New Plastic Type_PU', 'New Plastic Type_PVC','New Plastic Type_Plastic Mix', 'New Plastic Type_e-waste',
'Bitumen content in the sample',
'Aggregate absorption (%)'],
       'Plastic type \n Bitumen cont. \n Aggregates abs.')
addMSE(['New Plastic Type_Nylon','New Plastic Type_PE', 'New Plastic Type_PET', 'New Plastic Type_PP',
'New Plastic Type_PU', 'New Plastic Type_PVC','New Plastic Type_Plastic Mix', 'New Plastic Type_e-waste',
'Bitumen content in the sample',
'Aggregate absorption (%)',
'Plastic particle size (mm)'],
       'Plastic type \n Bitumen cont. \n Aggregates abs. \n Plastic size')
addMSE(['New Plastic Type_Nylon','New Plastic Type_PE', 'New Plastic Type_PET', 'New Plastic Type_PP',
'New Plastic Type_PU', 'New Plastic Type_PVC','New Plastic Type_Plastic Mix', 'New Plastic Type_e-waste',
'Bitumen content in the sample',
'Aggregate absorption (%)',
'Plastic particle size (mm)',
'0.075', '0.3', '0.6', '2.36', '4.75','9.5', '12.5', '19'],
       'Plastic type \n Bitumen cont. \n Aggregates abs. \n Plastic size \n Gradation')
addMSE(['New Plastic Type_Nylon','New Plastic Type_PE', 'New Plastic Type_PET', 'New Plastic Type_PP',
'New Plastic Type_PU', 'New Plastic Type_PVC','New Plastic Type_Plastic Mix', 'New Plastic Type_e-waste',
'Bitumen content in the sample',
'Aggregate absorption (%)',
'Plastic particle size (mm)',
'0.075', '0.3', '0.6', '2.36', '4.75','9.5', '12.5', '19',
'Consolidated bitumen penetration grade_50/70','Consolidated bitumen penetration grade_70/100'],
       'Plastic type \n Bitumen cont. \n Aggregates abs. \n Plastic size \n Gradation \n Bitumen type')
addMSE(['New Plastic Type_Nylon','New Plastic Type_PE', 'New Plastic Type_PET', 'New Plastic Type_PP',
'New Plastic Type_PU', 'New Plastic Type_PVC','New Plastic Type_Plastic Mix', 'New Plastic Type_e-waste',
'Bitumen content in the sample',
'Aggregate absorption (%)',
'Plastic particle size (mm)',
'0.075', '0.3', '0.6', '2.36', '4.75','9.5', '12.5', '19',
'Consolidated bitumen penetration grade_50/70','Consolidated bitumen penetration grade_70/100',
'Mixing speed (RPM)', 'Mixing Temperature'],
       'Plastic type \n Bitumen cont. \n Aggregates abs. \n Plastic size \n Gradation \n Bitumen type \n Mixing speed \n Mixing Temp.')
addMSE(['Aggregate absorption (%)', '0.075', '0.3', '0.6', '2.36', '4.75',
'9.5', '12.5', '19', 'Plastic particle size (mm)', 'Mixing speed (RPM)',
'Mixing Temperature', 'Mixing Time (hours)',
'Bitumen content in the sample', 'New Plastic Type_Nylon',
'New Plastic Type_PE', 'New Plastic Type_PET', 'New Plastic Type_PP',
'New Plastic Type_PU', 'New Plastic Type_PVC',
'New Plastic Type_Plastic Mix', 'New Plastic Type_e-waste',
'Consolidated bitumen penetration grade_50/70',
'Consolidated bitumen penetration grade_70/100',
'Plastic pretreatment_Physical', 'Plastic pretreatment_Plastic Melted',
'Plastic shape_Fibers', 'Plastic shape_Pellets',
'Plastic shape_Shredded', 'Mixing Process_Dry', 'Mixing Process_Wet',
'Aggregates replacement ?_Yes', 'Bitumen replacement?_Yes'],
'All features')
plt.rcParams["figure.figsize"] = (20,5)
plt.plot(features_MSE.keys(), features_MSE.values(), marker = '*')
plt.ylim(ymin = 0)
plt.axvline(x='Plastic type \n Bitumen cont. \n Aggregates abs. \n Plastic size \n Gradation \n Bitumen type', ymin=0, ymax=1, color = 'k', ls = '--' , label='Selected model')
plt.ylabel('MSE', fontsize = 20)
plt.xlabel('Features included in the model', fontsize = 20)
plt.xticks(fontsize= 13)
plt.title('(b)', fontsize=22)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The features most appropriate for the model are aggregate gradation, bitumen content, plastic type, and plastic addition. 6.2 Model Tuning
###Code
X_train = X_train[['New Plastic Type_Nylon','New Plastic Type_PE', 'New Plastic Type_PET', 'New Plastic Type_PP',
'New Plastic Type_PU', 'New Plastic Type_PVC','New Plastic Type_Plastic Mix', 'New Plastic Type_e-waste',
'Bitumen content in the sample',
'Aggregate absorption (%)',
'Plastic particle size (mm)',
'0.075', '0.3', '0.6', '2.36', '4.75','9.5', '12.5', '19',
'Consolidated bitumen penetration grade_50/70','Consolidated bitumen penetration grade_70/100',
]]
X_test = X_test [['New Plastic Type_Nylon','New Plastic Type_PE', 'New Plastic Type_PET', 'New Plastic Type_PP',
'New Plastic Type_PU', 'New Plastic Type_PVC','New Plastic Type_Plastic Mix', 'New Plastic Type_e-waste',
'Bitumen content in the sample',
'Aggregate absorption (%)',
'Plastic particle size (mm)',
'0.075', '0.3', '0.6', '2.36', '4.75','9.5', '12.5', '19',
'Consolidated bitumen penetration grade_50/70','Consolidated bitumen penetration grade_70/100',
]]
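# Added sketch (not from the original study): one possible hyper-parameter search for the
# XGBoost model on the reduced feature set. The grid values below are illustrative assumptions.
param_grid = {
    'n_estimators': [100, 300, 500],
    'max_depth': [3, 6, 9],
    'learning_rate': [0.05, 0.1, 0.3]
}
grid = GridSearchCV(XGBRegressor(random_state=123), param_grid, cv=cv,
                    scoring=['neg_mean_squared_error', 'r2'], refit='neg_mean_squared_error')
grid.fit(X_train, y_train)
printBestModelAdv(grid)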
###Output
_____no_output_____
###Markdown
6.3 Final model evaluation on test set
###Code
def modelEvaluation (model, Title):
min = dfMSCleanedImputed['MS of the sample (kN)'].min()
max = dfMSCleanedImputed['MS of the sample (kN)'].max()
#Model Fitting
model.fit(X_train, y_train)
predictions_test = model.predict(X_test)
#Model Evaluation
r2_test = r2_score(y_test, predictions_test)
mse_test = mean_squared_error(y_test, predictions_test)
rmse_test_unscaled = (math.sqrt(mse_test)*(max-min))+min
plt.figure(figsize=(7,7))
#Model Plotting
plt.scatter(y_test, predictions_test, c='crimson')
    plt.plot([0, 1], [0, 1], 'b-')
plt.xlabel('True Values', fontsize=15)
plt.xlim (0,1)
plt.ylim (0,1)
plt.ylabel('Predictions - ' + Title, fontsize=18)
plt.annotate('R2 = '+str(round(r2_test,3)), xy = (0.6,0.3), fontweight = 'bold', fontsize = 'xx-large')
plt.annotate('RMSE = '+str(round(rmse_test_unscaled,3)), xy = (0.6,0.25), fontweight = 'bold', fontsize = 'xx-large')
plt.show()
return predictions_test
XGModel = XGBRegressor(random_state = 123)
prediction_XGModel = modelEvaluation(XGModel, 'XGModel')
XGModel.get_params
extraTreeModel = ExtraTreesRegressor(n_estimators=400, min_samples_split=2,min_samples_leaf=1, max_features='auto', max_depth=90, random_state= 123, bootstrap=False)
prediction_ExtraTrees = modelEvaluation(extraTreeModel, 'Extra Trees model')
RFModel = RandomForestRegressor(n_estimators=1000,min_samples_split=5, min_samples_leaf=1, max_features='auto', max_depth=None, random_state= 123, bootstrap=True)
prediction_RandomForest = modelEvaluation(RFModel, 'Random Forest Model')
###Output
_____no_output_____
###Markdown
Outlier inspection
###Code
XGModel = XGBRegressor(random_state = 123)
XGModel.fit(X_train, y_train)
predictions_test = XGModel.predict(X_test)
df = pd.DataFrame(data=[predictions_test,y_test], index=['prediction', 'y_test']).T
df = df.sort_values(by='y_test')
min = dfMSCleanedImputed['MS of the sample (kN)'].min()
max = dfMSCleanedImputed['MS of the sample (kN)'].max()
df['prediction_uns'] = (df['prediction']*(max-min))+min
df['y_test_uns'] = (df['y_test']*(max-min))+min
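# Added illustration (an assumption about the intended inspection; the original cell only builds the frame):
# rank the test samples by absolute unscaled error to highlight potential outliers.
df['abs_error_uns'] = (df['prediction_uns'] - df['y_test_uns']).abs()
print(df.sort_values(by='abs_error_uns', ascending=False).head(10))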
###Output
_____no_output_____
###Markdown
6.4 ANOVA Analysis
###Code
df_predictions = y_test.to_frame(name='real_Y')
df_predictions['XGModel_predictions'] = prediction_XGModel
df_predictions['ExtraTrees_predictions'] = prediction_ExtraTrees
df_predictions['RandomForest_predictions'] = prediction_RandomForest
def normalityTest (model, data, alpha=0.05):
k2, p = stats.normaltest(data)
if p > alpha:
print ('The ' + model + ' is probably Gaussian. p-value = ' + str(p))
else:
        print ('The ' + model + ' is probably not Gaussian. p-value = ' + str(p))
###Output
_____no_output_____
###Markdown
Normality evaluation
###Code
normalityTest(data = df_predictions['real_Y'], model = 'real values')
normalityTest(data = df_predictions['RandomForest_predictions'], model = 'Random Forest')
normalityTest(data = df_predictions['XGModel_predictions'], model = 'XG model')
normalityTest(data = df_predictions['ExtraTrees_predictions'], model = 'Extra trees')
###Output
_____no_output_____
###Markdown
Variance homogeneity (Bartlett's test)
###Code
stal, p = stats.bartlett(df_predictions['real_Y'], df_predictions['RandomForest_predictions'], df_predictions['XGModel_predictions'], df_predictions['ExtraTrees_predictions'])
if p > 0.05:
    print('There is not sufficient evidence to say that the variances of the real values and the predictions are different. The p-value is ' + str(p))
else:
    print('The variances are not homogeneous. The p-value is ' + str(p))
stats.f_oneway(df_predictions['real_Y'],
df_predictions['RandomForest_predictions'],
df_predictions['XGModel_predictions'],
df_predictions['ExtraTrees_predictions'])
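# Added convenience (not in the original cell): unpack the ANOVA result and print the p-value explicitly.
f_stat, p_anova = stats.f_oneway(df_predictions['real_Y'],
                                 df_predictions['RandomForest_predictions'],
                                 df_predictions['XGModel_predictions'],
                                 df_predictions['ExtraTrees_predictions'])
print('One-way ANOVA: F = ' + str(f_stat) + ', p-value = ' + str(p_anova))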
###Output
_____no_output_____
###Markdown
There are no statistically significant differences between the values predicted by the three models and the observed values. 6.5 Goodness of fit Analysis
###Code
def evaluateGOF (y_real, model_predictions, Model, alpha = 0.05):
stat, p_value = stats.ks_2samp(y_real, model_predictions, alternative='two-sided')
    if p_value > alpha:
        print ('The real values and the predictions of '+Model+' come from the same distribution according to the Kolmogorov-Smirnov test. The p-value is '+str(p_value))
    else:
        print ('The real values and the predictions of '+Model+' DO NOT come from the same distribution. The p-value is '+str(p_value))
evaluateGOF(y_real=df_predictions['real_Y'], model_predictions=df_predictions['RandomForest_predictions'], Model = 'Random Forest')
evaluateGOF(y_real=df_predictions['real_Y'], model_predictions=df_predictions['XGModel_predictions'], Model = 'Boosted Tree')
evaluateGOF(y_real=df_predictions['real_Y'], model_predictions=df_predictions['ExtraTrees_predictions'], Model = 'Extra trees')
df_predictions.to_clipboard()
###Output
_____no_output_____
###Markdown
🚨 Marshall Stability Effect and Shap values
###Code
dfMSEffect = returnDf('MSEffect')
dfMSEffectNoOutliers = eliminateOutliers(dfMSEffect, 'Effect(%)')
dfMSEffectNoOutliers['Virgin Bitumen Penetration'] = pd.to_numeric(dfMSEffectNoOutliers['Virgin Bitumen Penetration'])
dfMSEffectNoOutliers.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 242 entries, 1 to 402
Data columns (total 37 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Article ID 242 non-null object
1 Global ID 242 non-null object
2 Modified asphalt Mix? 242 non-null object
3 Agreggate Type 154 non-null object
4 Aggregate absorption (%) 142 non-null float64
5 Apparent specific gravity 44 non-null float64
6 0.075 184 non-null float64
7 0.3 217 non-null float64
8 0.6 203 non-null float64
9 2.36 209 non-null float64
10 4.75 217 non-null float64
11 9.5 203 non-null float64
12 12.5 211 non-null float64
13 19 217 non-null float64
14 Filler used 77 non-null object
15 Bitumen Type Penetration Grade 242 non-null object
16 Consolidated bitumen penetration grade 242 non-null object
17 Virgin Bitumen Penetration 242 non-null float64
18 New Plastic Type 217 non-null object
19 Plastic pretreatment 242 non-null object
20 Plastic shape 242 non-null object
21 Plastic Size 181 non-null object
22 Plastic particle size (mm) 171 non-null float64
23 Mixing Process 242 non-null object
24 Mixing speed (RPM) 215 non-null float64
25 Mixing Temperature 225 non-null float64
26 Mixing Time (hours) 216 non-null float64
27 Aggregates replacement ? 242 non-null object
28 Bitumen replacement? 242 non-null object
29 Filler replacement 242 non-null object
30 Plastic Addition by bitumen weight (%) 241 non-null float64
31 Property 242 non-null object
32 Units 242 non-null object
33 Bitumen content in the sample 241 non-null float64
34 MS Control kN) 242 non-null float64
35 MS of the sample (kN) 242 non-null float64
36 Effect(%) 242 non-null float64
dtypes: float64(20), object(17)
memory usage: 71.8+ KB
###Markdown
🛀 Data Preprocessing
###Code
#Categorical Variables
dfMSEffectNoOutliers = dfMSEffectNoOutliers.drop(['Article ID',
'Global ID',
'Modified asphalt Mix?',
'Agreggate Type',
'Apparent specific gravity',
'Filler used',
'Bitumen Type Penetration Grade',
'Property',
'Units',
'Plastic Size',
'Consolidated bitumen penetration grade' ], axis = 1)
dfMSEffectNoOutliers = dfMSEffectNoOutliers.dropna(subset=['New Plastic Type'])
dfMSEffectNoOutliers = pd.get_dummies(dfMSEffectNoOutliers, columns=['New Plastic Type'], drop_first=True)
dfMSEffectNoOutliers = pd.get_dummies(dfMSEffectNoOutliers, drop_first=True)
dfMSEffectNoOutliers
# Split X and Y
X = dfMSEffectNoOutliers[dfMSEffectNoOutliers.columns.difference(['Effect(%)','MS Control kN)', 'MS of the sample (kN)'])]
y = dfMSEffectNoOutliers[['Effect(%)']]
X.head()
#IMPUTATION OF MISSING VALUES
imputer = IterativeImputer (estimator = ExtraTreesRegressor(n_estimators=10, random_state = 123), max_iter=50,random_state = 123)
n = imputer.fit_transform(X)
X_Imputed = pd.DataFrame(n, columns = list(X.columns))
print ('There are '+
str(sum(n < 0 for n in X_Imputed.loc[:,X_Imputed.columns].values.flatten()))+
' negative values in the new DataFrame')
X_Imputed['New Plastic Type_PE'] = X_Imputed['New Plastic Type_PE'] * X_Imputed['Plastic Addition by bitumen weight (%)']
X_Imputed['New Plastic Type_PET'] = X_Imputed['New Plastic Type_PET'] * X_Imputed['Plastic Addition by bitumen weight (%)']
X_Imputed['New Plastic Type_PP'] = X_Imputed['New Plastic Type_PP'] * X_Imputed['Plastic Addition by bitumen weight (%)']
X_Imputed['New Plastic Type_PU'] = X_Imputed['New Plastic Type_PU'] * X_Imputed['Plastic Addition by bitumen weight (%)']
X_Imputed['New Plastic Type_Plastic Mix'] = X_Imputed['New Plastic Type_Plastic Mix'] * X_Imputed['Plastic Addition by bitumen weight (%)']
X_Imputed['New Plastic Type_e-waste'] = X_Imputed['New Plastic Type_e-waste'] * X_Imputed['Plastic Addition by bitumen weight (%)']
X_Imputed = X_Imputed.drop(['Plastic Addition by bitumen weight (%)' ], axis = 1)
# Scaling
#Feature Scaling
scaler = MinMaxScaler()
X = pd.DataFrame(scaler.fit_transform(X_Imputed), columns = list(X_Imputed))
X.head()
###Output
_____no_output_____
###Markdown
🧠Model Training
###Code
cv = RepeatedKFold(n_splits = 10, n_repeats = 10, random_state = 123)
scores = cross_validate(XGBRegressor(random_state=123, n_estimators=500), X, y, cv=cv,scoring=('r2', 'neg_mean_squared_error'),return_train_score=True)
np.average(scores['test_r2'])
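# Added summary lines (not in the original cell): report the cross-validated MSE next to the R2.
print('Mean CV R2: ', np.average(scores['test_r2']))
print('Mean CV MSE:', np.average(-scores['test_neg_mean_squared_error']))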
###Output
_____no_output_____
###Markdown
🔪SHAP Values
###Code
import shap
model = XGBRegressor().fit(X, y)
explainer = shap.Explainer(model)
shap_values = explainer(X)
# summarize the effects of all the features
shap.plots.beeswarm(shap_values, max_display=28)
shap.plots.scatter(shap_values[:,'Mixing Process_Wet'])
shap_values_wet = explainer(X[X['Mixing Process_Wet'] == 1])
# summarize the effects of all the features
shap.plots.beeswarm(shap_values_wet, max_display=15)
shap_values_dry = explainer(X[X['Mixing Process_Dry'] == 0])
# summarize the effects of all the features
shap.plots.beeswarm(shap_values_dry, max_display=15)
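# Added, optional global-importance view (assumes the installed shap version also provides shap.plots.bar,
# i.e. the same new plotting API used above): mean |SHAP| per feature.
shap.plots.bar(shap_values, max_display=15)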
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/QUBO_Examples_TSP_ExtendedSolutions-checkpoint.ipynb | ###Markdown
Prepared by Sabah Ud Din Ahmad Extended Solutions for QUBO Examples - Travelling Salesman Problem Task 1Let's assume we have the following fully-connected, undirected graph of 4 nodes / cities and 6 edges, $G = (4,6)$. Determine the matrix Q for this graph. (You may assume suitable values for A and B). Solution Since the objective function to be minimizied (including penalties) is, $$C(x) = A\sum_{(i,j) \in E} w_{ij} \sum_{p=1}^{N} x_{i,p} x_{j,p+1} + B\sum_{p=1}^{N} \left(1-\sum_{i=1}^{N}x_{i,p}\right)^2 + B\sum_{i=1}^{N} \left(1-\sum_{p=1}^{N}x_{i,p}\right)^2 + B\sum_{(i,j) \notin E} \sum_{p=1}^{N} x_{i,p} x_{j,p+1}$$Let $A=1$ and $B=40$, under the condition $0<A\max (w_{ij})<B$ since $\max (w_{ij})=29$. (We won't substitute B for now).Moreover, we must drop the last term as the graph is fully connected. So, with these simplifications, $$C(x) = \sum_{(i,j) \in E} w_{ij} \sum_{p=1}^{4} x_{i,p} x_{j,p+1} + B\sum_{p=1}^{4} \left(1-\sum_{i=1}^{4}x_{i,p}\right)^2 + B\sum_{i=1}^{4} \left(1-\sum_{p=1}^{4}x_{i,p}\right)^2$$ $$C(x) = \sum_{(i,j) \in E} w_{ij} (x_{i1}x_{j2} + x_{i2}x_{j3} + x_{i3}x_{j4} + x_{i4}x_{j5}) + B\sum_{p=1}^{4} \left(1 - (x_{1p}+x_{2p}+x_{3p}+x_{4p})\right)^2 + B\sum_{i=1}^{4} \left(1-(x_{i1}+x_{i2}+x_{i3}+x_{i4})\right)^2$$$$C(x) = w_{12} (x_{11}x_{22} + x_{12}x_{23} + x_{13}x_{24} + x_{14}x_{25})+w_{13} (x_{11}x_{32} + x_{12}x_{33} + x_{13}x_{34} + x_{14}x_{35})+w_{14} (x_{11}x_{42} + x_{12}x_{43} + x_{13}x_{44} + x_{14}x_{45})+w_{21} (x_{21}x_{12} + x_{22}x_{13} + x_{23}x_{14} + x_{24}x_{15})+w_{23} (x_{21}x_{32} + x_{22}x_{33} + x_{23}x_{34} + x_{24}x_{35})+w_{24} (x_{21}x_{42} + x_{22}x_{43} + x_{23}x_{44} + x_{24}x_{45})+w_{31} (x_{31}x_{12} + x_{32}x_{13} + x_{33}x_{14} + x_{34}x_{15})+w_{32} (x_{31}x_{22} + x_{32}x_{23} + x_{33}x_{24} + x_{34}x_{25})+w_{34} (x_{31}x_{42} + x_{32}x_{43} + x_{33}x_{44} + x_{34}x_{45})+w_{41} (x_{41}x_{12} + x_{42}x_{13} + x_{43}x_{14} + x_{44}x_{15})+w_{42} (x_{41}x_{22} + x_{42}x_{23} + x_{43}x_{24} + x_{44}x_{25})+w_{43} (x_{41}x_{32} + x_{42}x_{33} + x_{43}x_{34} + x_{44}x_{35})+ B\left[\left(1 - (x_{11}+x_{21}+x_{31}+x_{41})\right)^2+\left(1 - (x_{12}+x_{22}+x_{32}+x_{42})\right)^2+\left(1 - (x_{13}+x_{23}+x_{33}+x_{43})\right)^2+\left(1 - (x_{14}+x_{24}+x_{34}+x_{44})\right)^2\right]+ B\left[\left(1 - (x_{11}+x_{12}+x_{13}+x_{14})\right)^2+\left(1 - (x_{21}+x_{22}+x_{23}+x_{24})\right)^2+\left(1 - (x_{31}+x_{32}+x_{33}+x_{34})\right)^2+\left(1 - (x_{41}+x_{42}+x_{43}+x_{44})\right)^2\right]$$ $$C(x) = w_{12} (x_{11}x_{22} + x_{12}x_{23} + x_{13}x_{24} + x_{14}x_{25})+w_{13} (x_{11}x_{32} + x_{12}x_{33} + x_{13}x_{34} + x_{14}x_{35})+w_{14} (x_{11}x_{42} + x_{12}x_{43} + x_{13}x_{44} + x_{14}x_{45})+w_{21} (x_{21}x_{12} + x_{22}x_{13} + x_{23}x_{14} + x_{24}x_{15})+w_{23} (x_{21}x_{32} + x_{22}x_{33} + x_{23}x_{34} + x_{24}x_{35})+w_{24} (x_{21}x_{42} + x_{22}x_{43} + x_{23}x_{44} + x_{24}x_{45})+w_{31} (x_{31}x_{12} + x_{32}x_{13} + x_{33}x_{14} + x_{34}x_{15})+w_{32} (x_{31}x_{22} + x_{32}x_{23} + x_{33}x_{24} + x_{34}x_{25})+w_{34} (x_{31}x_{42} + x_{32}x_{43} + x_{33}x_{44} + x_{34}x_{45})+w_{41} (x_{41}x_{12} + x_{42}x_{13} + x_{43}x_{14} + x_{44}x_{15})+w_{42} (x_{41}x_{22} + x_{42}x_{23} + x_{43}x_{24} + x_{44}x_{25})+w_{43} (x_{41}x_{32} + x_{42}x_{33} + x_{43}x_{34} + x_{44}x_{35})+ B\left[1 - 2(x_{11}+x_{21}+x_{31}+x_{41})+(x_{11}+x_{21}+x_{31}+x_{41})^2+1 - 2(x_{12}+x_{22}+x_{32}+x_{42})+(x_{12}+x_{22}+x_{32}+x_{42})^2+1 -2(x_{13}+x_{23}+x_{33}+x_{43})+ (x_{13}+x_{23}+x_{33}+x_{43})^2+1 
-2(x_{14}+x_{24}+x_{34}+x_{44})+ (x_{14}+x_{24}+x_{34}+x_{44})^2\right]+ B\left[1 - 2(x_{11}+x_{12}+x_{13}+x_{14})+(x_{11}+x_{12}+x_{13}+x_{14})^2+1 - 2(x_{21}+x_{22}+x_{23}+x_{24})+(x_{21}+x_{22}+x_{23}+x_{24})^2+1 - 2(x_{31}+x_{32}+x_{33}+x_{34})+(x_{31}+x_{32}+x_{33}+x_{34})^2+1 -2(x_{41}+x_{42}+x_{43}+x_{44})+ (x_{41}+x_{42}+x_{43}+x_{44})^2\right]$$ $$C(x) = w_{12} (x_{11}x_{22} + x_{12}x_{23} + x_{13}x_{24} + x_{14}x_{25})+w_{13} (x_{11}x_{32} + x_{12}x_{33} + x_{13}x_{34} + x_{14}x_{35})+w_{14} (x_{11}x_{42} + x_{12}x_{43} + x_{13}x_{44} + x_{14}x_{45})+w_{21} (x_{21}x_{12} + x_{22}x_{13} + x_{23}x_{14} + x_{24}x_{15})+w_{23} (x_{21}x_{32} + x_{22}x_{33} + x_{23}x_{34} + x_{24}x_{35})+w_{24} (x_{21}x_{42} + x_{22}x_{43} + x_{23}x_{44} + x_{24}x_{45})+w_{31} (x_{31}x_{12} + x_{32}x_{13} + x_{33}x_{14} + x_{34}x_{15})+w_{32} (x_{31}x_{22} + x_{32}x_{23} + x_{33}x_{24} + x_{34}x_{25})+w_{34} (x_{31}x_{42} + x_{32}x_{43} + x_{33}x_{44} + x_{34}x_{45})+w_{41} (x_{41}x_{12} + x_{42}x_{13} + x_{43}x_{14} + x_{44}x_{15})+w_{42} (x_{41}x_{22} + x_{42}x_{23} + x_{43}x_{24} + x_{44}x_{25})+w_{43} (x_{41}x_{32} + x_{42}x_{33} + x_{43}x_{34} + x_{44}x_{35})+ B\left[8 -2(x_{11}+x_{21}+x_{31}+x_{41})+(x_{11}+x_{21}+x_{31}+x_{41})^2 -2(x_{12}+x_{22}+x_{32}+x_{42})+(x_{12}+x_{22}+x_{32}+x_{42})^2 -2(x_{13}+x_{23}+x_{33}+x_{43})+(x_{13}+x_{23}+x_{33}+x_{43})^2-2(x_{14}+x_{24}+x_{34}+x_{44})+(x_{14}+x_{24}+x_{34}+x_{44})^2 -2(x_{11}+x_{12}+x_{13}+x_{14})+(x_{11}+x_{12}+x_{13}+x_{14})^2-2(x_{21}+x_{22}+x_{23}+x_{24})+(x_{21}+x_{22}+x_{23}+x_{24})^2 -2(x_{31}+x_{32}+x_{33}+x_{34})+(x_{31}+x_{32}+x_{33}+x_{34})^2-2(x_{41}+x_{42}+x_{43}+x_{44})+(x_{41}+x_{42}+x_{43}+x_{44})^2\right]$$ Using the identity,$$(a+b+c+d)^2 = a^2+b^2+c^2+d^2+2ab+2ac+2ad+2bc+2bd+2cd$$ $$C(x) = w_{12} (x_{11}x_{22} + x_{12}x_{23} + x_{13}x_{24} + x_{14}x_{25})+w_{13} (x_{11}x_{32} + x_{12}x_{33} + x_{13}x_{34} + x_{14}x_{35})+w_{14} (x_{11}x_{42} + x_{12}x_{43} + x_{13}x_{44} + x_{14}x_{45})+w_{21} (x_{21}x_{12} + x_{22}x_{13} + x_{23}x_{14} + x_{24}x_{15})+w_{23} (x_{21}x_{32} + x_{22}x_{33} + x_{23}x_{34} + x_{24}x_{35})+w_{24} (x_{21}x_{42} + x_{22}x_{43} + x_{23}x_{44} + x_{24}x_{45})+w_{31} (x_{31}x_{12} + x_{32}x_{13} + x_{33}x_{14} + x_{34}x_{15})+w_{32} (x_{31}x_{22} + x_{32}x_{23} + x_{33}x_{24} + x_{34}x_{25})+w_{34} (x_{31}x_{42} + x_{32}x_{43} + x_{33}x_{44} + x_{34}x_{45})+w_{41} (x_{41}x_{12} + x_{42}x_{13} + x_{43}x_{14} + x_{44}x_{15})+w_{42} (x_{41}x_{22} + x_{42}x_{23} + x_{43}x_{24} + x_{44}x_{25})+w_{43} (x_{41}x_{32} + x_{42}x_{33} + x_{43}x_{34} + x_{44}x_{35})+ B\left[8 -2(x_{11}+x_{21}+x_{31}+x_{41})+x_{11}^2+x_{21}^2+x_{31}^2+x_{41}^2+2x_{11}x_{21}+2x_{11}x_{31}+2x_{11}x_{41}+2x_{21}x_{31}+2x_{21}x_{41}+2x_{31}x_{41} -2(x_{12}+x_{22}+x_{32}+x_{42})+x_{12}^2+x_{22}^2+x_{32}^2+x_{42}^2+2x_{12}x_{22}+2x_{12}x_{32}+2x_{12}x_{42}+2x_{22}x_{32}+2x_{22}x_{42}+2x_{32}x_{42} 
-2(x_{13}+x_{23}+x_{33}+x_{43})+x_{13}^2+x_{23}^2+x_{33}^2+x_{43}^2+2x_{13}x_{23}+2x_{13}x_{33}+2x_{13}x_{43}+2x_{23}x_{33}+2x_{23}x_{43}+2x_{33}x_{43}-2(x_{14}+x_{24}+x_{34}+x_{44})+x_{14}^2+x_{24}^2+x_{34}^2+x_{44}^2+2x_{14}x_{24}+2x_{14}x_{34}+2x_{14}x_{44}+2x_{24}x_{34}+2x_{24}x_{44}+2x_{34}x_{44}-2(x_{11}+x_{12}+x_{13}+x_{14})+x_{11}^2+x_{12}^2+x_{13}^2+x_{14}^2+2x_{11}x_{12}+2x_{11}x_{13}+2x_{11}x_{14}+2x_{12}x_{13}+2x_{12}x_{14}+2x_{13}x_{14}-2(x_{21}+x_{22}+x_{23}+x_{24})+x_{21}^2+x_{22}^2+x_{23}^2+x_{24}^2+2x_{21}x_{22}+2x_{21}x_{23}+2x_{21}x_{24}+2x_{22}x_{23}+2x_{22}x_{24}+2x_{23}x_{24}-2(x_{31}+x_{32}+x_{33}+x_{34})+x_{31}^2+x_{32}^2+x_{33}^2+x_{34}^2+2x_{31}x_{32}+2x_{31}x_{33}+2x_{31}x_{34}+2x_{32}x_{33}+2x_{32}x_{34}+2x_{33}x_{34}-2(x_{41}+x_{42}+x_{43}+x_{44})+x_{41}^2+x_{42}^2+x_{43}^2+x_{44}^2+2x_{41}x_{42}+2x_{41}x_{43}+2x_{41}x_{44}+2x_{42}x_{43}+2x_{42}x_{44}+2x_{43}x_{44}\right]$$ For ease of notation, let's update our variables using single subscript:$$(x_{11}, x_{12}, x_{13}, x_{14}, x_{15}, x_{21}, x_{22}, x_{23}, x_{24}, x_{25}, x_{31}, x_{32}, x_{33}, x_{34}, x_{35}, x_{41}, x_{42}, x_{43}, x_{44}, x_{45}) = (x_1, x_2, x_3, x_4, x_5, x_6, x_7,......x_{18}, x_{19}, x_{20})$$ $$C(x) = w_{12} (x_{1}x_{7} + x_{2}x_{8} + x_{3}x_{9} + x_{4}x_{10})+w_{13} (x_{1}x_{12} + x_{2}x_{13} + x_{3}x_{14} + x_{4}x_{15})+w_{14} (x_{1}x_{17} + x_{2}x_{18} + x_{3}x_{19} + x_{4}x_{20})+w_{21} (x_{6}x_{2} + x_{7}x_{3} + x_{8}x_{4} + x_{9}x_{5})+w_{23} (x_{6}x_{12} + x_{7}x_{13} + x_{8}x_{14} + x_{9}x_{15})+w_{24} (x_{6}x_{17} + x_{7}x_{18} + x_{8}x_{19} + x_{9}x_{20})+w_{31} (x_{11}x_{2} + x_{12}x_{3} + x_{13}x_{4} + x_{14}x_{5})+w_{32} (x_{11}x_{7} + x_{12}x_{8} + x_{13}x_{9} + x_{14}x_{10})+w_{34} (x_{11}x_{17} + x_{12}x_{18} + x_{13}x_{19} + x_{14}x_{20})+w_{41} (x_{16}x_{2} + x_{17}x_{3} + x_{18}x_{4} + x_{19}x_{5})+w_{42} (x_{16}x_{7} + x_{17}x_{8} + x_{18}x_{9} + x_{19}x_{10})+w_{43} (x_{16}x_{12} + x_{17}x_{13} + x_{18}x_{14} + x_{19}x_{15})+ B\left[8 -2(x_{1}+x_{6}+x_{11}+x_{16})+x_{1}^2+x_{6}^2+x_{11}^2+x_{16}^2+2x_{1}x_{6}+2x_{1}x_{11}+2x_{1}x_{16}+2x_{6}x_{11}+2x_{6}x_{16}+2x_{11}x_{16} -2(x_{2}+x_{7}+x_{12}+x_{17})+x_{2}^2+x_{7}^2+x_{12}^2+x_{17}^2+2x_{2}x_{7}+2x_{2}x_{12}+2x_{2}x_{17}+2x_{7}x_{12}+2x_{7}x_{17}+2x_{12}x_{17} -2(x_{3}+x_{8}+x_{13}+x_{18})+x_{3}^2+x_{8}^2+x_{13}^2+x_{18}^2+2x_{3}x_{8}+2x_{3}x_{13}+2x_{3}x_{18}+2x_{8}x_{13}+2x_{8}x_{18}+2x_{13}x_{18}-2(x_{4}+x_{9}+x_{14}+x_{19})+x_{4}^2+x_{9}^2+x_{14}^2+x_{19}^2+2x_{4}x_{9}+2x_{4}x_{14}+2x_{4}x_{19}+2x_{9}x_{14}+2x_{9}x_{19}+2x_{14}x_{19}-2(x_{1}+x_{2}+x_{3}+x_{4})+x_{1}^2+x_{2}^2+x_{3}^2+x_{4}^2+2x_{1}x_{2}+2x_{1}x_{3}+2x_{1}x_{4}+2x_{2}x_{3}+2x_{2}x_{4}+2x_{3}x_{4}-2(x_{6}+x_{7}+x_{8}+x_{9})+x_{6}^2+x_{7}^2+x_{8}^2+x_{9}^2+2x_{6}x_{7}+2x_{6}x_{8}+2x_{6}x_{9}+2x_{7}x_{8}+2x_{7}x_{9}+2x_{8}x_{9}-2(x_{11}+x_{12}+x_{13}+x_{14})+x_{11}^2+x_{12}^2+x_{13}^2+x_{14}^2+2x_{11}x_{12}+2x_{11}x_{13}+2x_{11}x_{14}+2x_{12}x_{13}+2x_{12}x_{14}+2x_{13}x_{14}-2(x_{16}+x_{17}+x_{18}+x_{19})+x_{16}^2+x_{17}^2+x_{18}^2+x_{19}^2+2x_{16}x_{17}+2x_{16}x_{18}+2x_{16}x_{19}+2x_{17}x_{18}+2x_{17}x_{19}+2x_{18}x_{19}\right]$$ Since QUBO doesn't have squared binary variables as its 0 and 1 values remain unchanged when squared, so we can replace any term $x_i^2$ with $x_i$, and vice versa (this doesnt apply to products $x_i x_j$).We will ignore the constant $8B$ while constructing the matrix.This takes the desired form:$$\min_{x \in {0,1}^n} x^T Q x$$where:$$x^T = \begin{pmatrix}x_1 & x_2 & x_3 & x_4 & x_5 & x_6 & x_7 & x_8 & x_9 & 
x_{10} & x_{11} & x_{12} & x_{13} & x_{14} & x_{15} & x_{16} & x_{17} & x_{18} & x_{19} & x_{20} \end{pmatrix}$$and the upper diagonal matrix Q is:$$Q = \begin{pmatrix}-2B & 2B & 2B & 2B & 0 & 2B & w_{12} & 0 & 0 & 0 & 2B & w_{13} & 0 & 0 & 0 & 2B & w_{14} & 0 & 0 & 0\\0 & -2B & 2B & 2B & 0 & w_{21} & 2B & w_{12} & 0 & 0 & w_{31} & 2B & w_{13} & 0 & 0 & w_{41} & 2B & w_{14} & 0 & 0\\0 & 0 & -2B & 2B & 0 & 0 & w_{21} & 2B & w_{12} & 0 & 0 & w_{31} & 2B & w_{13} & 0 & 0 & w_{41} & 2B & w_{14} & 0\\0 & 0 & 0 & -2B & 0 & 0 & 0 & w_{21} & 2B & w_{12} & 0 & 0 & w_{31} & 2B & w_{13} & 0 & 0 & w_{41} & 2B & w_{14}\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & w_{21} & 0 & 0 & 0 & 0 & w_{31} & 0 & 0 & 0 & 0 & w_{41} & 0\\0 & 0 & 0 & 0 & 0 & -2B & 2B & 2B & 2B & 0 & 2B & w_{23} & 0 & 0 & 0 & 2B & w_{24} & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0 & -2B & 2B & 2B & 0 & w_{32} & 2B & w_{23} & 0 & 0 & w_{42} & 2B & w_{24} & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & -2B & 2B & 0 & 0 & w_{32} & 2B & w_{23} & 0 & 0 & w_{42} & 2B & w_{24} & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2B & 0 & 0 & 0 & w_{32} & 2B & w_{23} & 0 & 0 & w_{42} & 2B & w_{24}\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & w_{32} & 0 & 0 & 0 & 0 & w_{42} & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2B & 2B & 2B & 2B & 0 & 2B & w_{34} & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2B & 2B & 2B & 0 & w_{43} & 2B & w_{34} & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2B & 2B & 0 & 0 & w_{43} & 2B & w_{34} & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2B & 0 & 0 & 0 & w_{43} & 2B & w_{34}\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & w_{43} & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2B & 2B & 2B & 2B & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2B & 2B & 2B & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2B & 2B & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -2B & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\end{pmatrix}$$where $B=40, w_{12}=w_{21}=12, w_{13}=w_{31}=14, w_{14}=w_{41}=17, w_{23}=w_{32}=15, w_{24}=w_{42}=18, w_{34}=w_{43}=29$. *** Additional Task 1Input matrix Q calculated in Task 1 to the function *qubo_solver()* and determine $x$ which minimizes $x^T Qx$ and the corresponding minimum value. Solution
###Code
# Access the qubo_solver() function
%run qubo_functions.py
# Define the Q matrix
B=40
w_12=12
w_21=12
w_13=14
w_31=14
w_14=17
w_41=17
w_23=15
w_32=15
w_24=18
w_42=18
w_34=29
w_43=29
Q=np.array([[-2*B, 2*B, 2*B, 2*B, 0, 2*B, w_12, 0, 0, 0, 2*B, w_13, 0, 0, 0, 2*B, w_14, 0, 0, 0],
[ 0, -2*B, 2*B, 2*B, 0, w_21, 2*B, w_12, 0, 0, w_31, 2*B, w_13, 0, 0, w_41, 2*B, w_14, 0, 0],
[ 0, 0, -2*B, 2*B, 0, 0, w_21, 2*B, w_12, 0, 0, w_31, 2*B, w_13, 0, 0, w_41, 2*B, w_14, 0],
[ 0, 0, 0, -2*B, 0, 0, 0, w_21, 2*B, w_12, 0, 0, w_31, 2*B, w_13, 0, 0, w_41, 2*B, w_14],
[ 0, 0, 0, 0, 0, 0, 0, 0, w_21, 0, 0, 0, 0, w_31, 0, 0, 0, 0, w_41, 0],
[ 0, 0, 0, 0, 0, -2*B, 2*B, 2*B, 2*B, 0, 2*B, w_23, 0, 0, 0, 2*B, w_24, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, -2*B, 2*B, 2*B, 0, w_32, 2*B, w_23, 0, 0, w_42, 2*B, w_24, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, -2*B, 2*B, 0, 0, w_32, 2*B, w_23, 0, 0, w_42, 2*B, w_24, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, -2*B, 0, 0, 0, w_32, 2*B, w_23, 0, 0, w_42, 2*B, w_24],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, w_32, 0, 0, 0, 0, w_42, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2*B, 2*B, 2*B, 2*B, 0, 2*B, w_34, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2*B, 2*B, 2*B, 0, w_43, 2*B, w_34, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2*B, 2*B, 0, 0, w_43, 2*B, w_34, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2*B, 0, 0, 0, w_43, 2*B, w_34],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, w_43, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2*B, 2*B, 2*B, 2*B, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2*B, 2*B, 2*B, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2*B, 2*B, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -2*B, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
# Pass the matrix as an argument to the function
qubo_solver(Q)
###Output
_____no_output_____
###Markdown
The result in the original notation of variables is $x_{12}=x_{23}=x_{31}=x_{44}=x_{45}=1$. This means that the salesman travels on the following route: $3 \to 1 \to 2 \to 4 \to 4$. Additional Task 2Using the QUBO algebraic expression and testing all possibilities of $x$, verify your result for Additional Task 1. (You may use a Python code for this task). Solution
###Code
#Create a function to evaluate the value of objective function for each x.
def tsp_task_3(x):
#INSERT YOUR CODE HERE!
x1 = x[0]
x2 = x[1]
x3 = x[2]
x4 = x[3]
x5 = x[4]
x6 = x[5]
x7 = x[6]
x8 = x[7]
x9 = x[8]
x10 = x[9]
x11 = x[10]
x12 = x[11]
x13 = x[12]
x14 = x[13]
x15 = x[14]
x16 = x[15]
x17 = x[16]
x18 = x[17]
x19 = x[18]
x20 = x[19]
B=40
w_12=12
w_21=12
w_13=14
w_31=14
w_14=17
w_41=17
w_23=15
w_32=15
w_24=18
w_42=18
w_34=29
w_43=29
    y = w_12*(x1*x7 + x2*x8 + x3*x9 + x4*x10)+w_13*(x1*x12 + x2*x13 + x3*x14 + x4*x15)+w_14*(x1*x17 + x2*x18 + x3*x19 + x4*x20)+w_21*(x6*x2 + x7*x3 + x8*x4 + x9*x5)+w_23*(x6*x12 + x7*x13 + x8*x14 + x9*x15)+w_24*(x6*x17 + x7*x18 + x8*x19 + x9*x20)+w_31*(x11*x2 + x12*x3 + x13*x4 + x14*x5)+w_32*(x11*x7 + x12*x8 + x13*x9 + x14*x10)+w_34*(x11*x17 + x12*x18 + x13*x19 + x14*x20)+w_41*(x16*x2 + x17*x3 + x18*x4 + x19*x5)+w_42*(x16*x7 + x17*x8 + x18*x9 + x19*x10)+w_43*(x16*x12 + x17*x13 + x18*x14 + x19*x15)+ B*(-2*(x1+x6+x11+x16)+(x1)**2+(x6)**2+(x11)**2+(x16)**2+2*x1*x6+2*x1*x11+2*x1*x16+2*x6*x11+2*x6*x16+2*x11*x16-2*(x2+x7+x12+x17)+(x2)**2+(x7)**2+(x12)**2+(x17)**2+2*x2*x7+2*x2*x12+2*x2*x17+2*x7*x12+2*x7*x17+2*x12*x17-2*(x3+x8+x13+x18)+(x3)**2+(x8)**2+(x13)**2+(x18)**2+2*x3*x8+2*x3*x13+2*x3*x18+2*x8*x13+2*x8*x18+2*x13*x18-2*(x4+x9+x14+x19)+(x4)**2+(x9)**2+(x14)**2+(x19)**2+2*x4*x9+2*x4*x14+2*x4*x19+2*x9*x14+2*x9*x19+2*x14*x19-2*(x1+x2+x3+x4)+(x1)**2+(x2)**2+(x3)**2+(x4)**2+2*x1*x2+2*x1*x3+2*x1*x4+2*x2*x3+2*x2*x4+2*x3*x4-2*(x6+x7+x8+x9)+(x6)**2+(x7)**2+(x8)**2+(x9)**2+2*x6*x7+2*x6*x8+2*x6*x9+2*x7*x8+2*x7*x9+2*x8*x9-2*(x11+x12+x13+x14)+(x11)**2+(x12)**2+(x13)**2+(x14)**2+2*x11*x12+2*x11*x13+2*x11*x14+2*x12*x13+2*x12*x14+2*x13*x14-2*(x16+x17+x18+x19)+(x16)**2+(x17)**2+(x18)**2+(x19)**2+2*x16*x17+2*x16*x18+2*x16*x19+2*x17*x18+2*x17*x19+2*x18*x19)
return y
#Minimize the function for all possibilites of x.
#The following code generates the possile permutations of x and calculates the value of the objectve funtion for each.
import numpy as np
import itertools
possible_values = {}
vec_permutations = itertools.product([0,1], repeat=20) # A list of all the possible permutations for x vector
for permutation in vec_permutations:
x = np.array([[var] for var in permutation]) # Converts the permutation into a column vector
value = tsp_task_3(x)
possible_values[value[0]] = x
vector = tuple(x.T[0])
# print("Vector x =", vector, "; Value =",int(value)) # Displays every vector with its corresponding value
min_value = min(possible_values.keys()) # Lowest value of the objective function
opt_vector = tuple(possible_values[min_value].T[0]) # Optimum x corresponding to lowest value
print("---")
print("The vector x =", opt_vector, "minimizes the objective function to a value of", int(min_value))
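# Added readability step (not part of the original solution): decode the optimal binary vector
# back into (city, position) assignments, using the ordering x_{ip} -> x_{5(i-1)+p} defined above.
for idx, val in enumerate(opt_vector, start=1):
    if val == 1:
        city = (idx - 1) // 5 + 1
        position = (idx - 1) % 5 + 1
        print("City", city, "is visited at position", position)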
###Output
---
The vector x = (0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0) minimizes the objective function to a value of -302
|
Notebooks/Experimentation/Change Detection/hansen_data_prep.ipynb | ###Markdown
Hansen Loss Year Data
###Code
#import loss map with all year data
hansen_loss_year_path = '/gws/nopw/j04/ai4er/users/jl2182/data/Mres_Data/Hansen_Results/loss_year/reprojected/Hansen_GFC-2020-v1.8_lossyear_00N_010E_reprojected.tif'
hansen_loss_year = xr.open_rasterio(hansen_loss_year_path)
#crop to study region
#region of interest
studyRegion = '/gws/nopw/j04/ai4er/users/jl2182/data/Mres_Data/GeoJSONS/PIREDD_Plataue.geojson'
studyRegion = gpd.read_file(studyRegion)
studyRegion_crs = studyRegion.to_crs(epsg=3341)
#print(studyRegion_crs.head())
#print(studyRegion_crs.crs)
hansen_loss_year_crop = hansen_loss_year.rio.clip(studyRegion_crs.geometry.apply(mapping))
plt.figure(figsize = (10,10))
plt.imshow(hansen_loss_year_crop[0], interpolation='nearest' )
np.unique(hansen_loss_year_crop)
loss_years = np.where(hansen_loss_year_crop > 13, hansen_loss_year_crop, 0)
lossyears = np.where(loss_years < 20,loss_years,0 )
np.unique(lossyears)
lossyear_2019 = np.where(lossyears == 19, 1,0)
plt.figure(figsize = (10,10))
plt.imshow(lossyear_2019[0], interpolation='nearest')
lossyear_2019.shape
# Resample the annual loss image to a coarser grid; order=0 (nearest neighbour) preserves the binary 0/1 labels
import scipy.ndimage
lossyear_2019_resampled = scipy.ndimage.zoom(lossyear_2019[0],0.0900267, order=0)
lossyear_2019_resampled.shape
np.save('/gws/nopw/j04/ai4er/users/jl2182/data/Mres_Data/Hansen_Results/loss_year/resampled/loss_year_19.npy', lossyear_2019_resampled)
###Output
_____no_output_____ |
examples/XLNetBinaryClassifier.ipynb | ###Markdown
Training
###Code
model.fit(X, y, total_steps=20)
###Output
/Users/geyingli/Library/Python/3.8/lib/python/site-packages/tensorflow/python/keras/legacy_tf_layers/core.py:268: UserWarning: `tf.layers.dropout` is deprecated and will be removed in a future version. Please use `tf.keras.layers.Dropout` instead.
warnings.warn('`tf.layers.dropout` is deprecated and '
/Users/geyingli/Library/Python/3.8/lib/python/site-packages/tensorflow/python/keras/engine/base_layer_v1.py:1719: UserWarning: `layer.apply` is deprecated and will be removed in a future version. Please use `layer.__call__` method instead.
warnings.warn('`layer.apply` is deprecated and '
/Users/geyingli/Library/Python/3.8/lib/python/site-packages/tensorflow/python/keras/legacy_tf_layers/core.py:171: UserWarning: `tf.layers.dense` is deprecated and will be removed in a future version. Please use `tf.keras.layers.Dense` instead.
warnings.warn('`tf.layers.dense` is deprecated and '
###Markdown
Inference
###Code
model.predict(X)
###Output
INFO:tensorflow:Time usage 0m-3.85s, 0.26 steps/sec, 1.04 examples/sec
###Markdown
Scoring
###Code
model.score(X, y)
###Output
INFO:tensorflow:Time usage 0m-2.78s, 0.36 steps/sec, 1.44 examples/sec
|
solved notebooks/lab3.2hw.ipynb | ###Markdown
Lab 3.2. Homework Task 1 Create two arrays: the first should contain the even numbers from 2 to 12 inclusive, and the other the numbers 7, 11, 15, 18, 23, 29. $1.$ Add the arrays together and square the elements of the resulting array:
###Code
import numpy as np
a = np.arange(2, 13, 2)
b = np.array([7, 11, 15, 18, 23, 29])
print(a)
print(b)
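# Part 1 of the task: add the arrays element-wise and square the elements of the result
# (added here for completeness; not reflected in the recorded output below)
print((a + b) ** 2)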
###Output
[ 2 4 6 8 10 12]
[ 7 11 15 18 23 29]
###Markdown
$2.$ Print all elements of the first array whose indices match the indices of those elements of the second array that are greater than 12 and give a remainder of 3 when divided by 5.
###Code
loc = np.logical_and(b > 12, b % 5 == 3)
a[loc]
###Output
_____no_output_____
###Markdown
*3.* Check the condition "The elements of the first array are divisible by 4, the elements of the second array are less than 14". (Hint: the result should be an array of True and False values)
###Code
log1 = a % 4 == 0
log2 = b < 14
print(log1 + log2)
###Output
[ True True False True False True]
###Markdown
Task 2 * Find a dataset that interests you. For example, you can pick a dataset here: http://data.un.org/Explorer.aspx (choose a dataset, click view data, then download, and select the csv format)* Calculate appropriate descriptive statistics for the features of the objects in the chosen dataset* Analyze and comment meaningfully on the resulting values* Write all comments strictly in markdown cells We investigate how the number of births depends on the literacy rate. The number of births is the dependent variable y; the literacy rate is the independent variable x
###Code
import csv
import matplotlib.pyplot as plt
%matplotlib inline
# Data format: Country, Year, number of children born, literacy rate
with open('Годовое число рождений.csv', 'r', newline='') as csvfile:
data = csv.reader(csvfile, delimiter=',')
birth = [["Country", "Year", "Value"]]
for row in data:
if row[5].isdigit():
birth.append([row[1], int(row[5]), int(row[11])])
with open('Уровень грамотности взрослого населения.csv', 'r', newline='') as csvfile:
data = csv.reader(csvfile, delimiter=',')
literacy = [["Country", "First year", "Last year", "Value"]]
for row in data:
if row[11].isdigit():
f_year, l_year = map(int, row[5].split('-'))
literacy.append([row[1], f_year, l_year, int(row[11])])
data = [["Country", "Year", "Birth", "Literacy"]]
for row1 in birth:
for row2 in literacy:
if row1[0] == row2[0] and row1[1] >= row2[1] and row1[1] <= row2[2]:
data.append([row1[0], int(row1[1]), int(row1[2]), int(row2[3])])
data = np.array(data)
birth = np.int_(np.array(data[1:, 2]))
literacy = np.int_(np.array(data[1:, 3]))
print(birth)
print(literacy)
plt.title('Number of births vs. literacy rate', fontsize=20, fontname='Times New Roman')
plt.xlabel('Annual number of births', color='gray')
plt.ylabel('Adult literacy rate',color='gray')
plt.plot(birth, literacy, color="r", marker="*", linestyle="none")
print(f"Mean annual number of births: {np.mean(birth)}")
print(f"Mean adult literacy rate: {np.mean(literacy)}")
print(f"Standard deviation of the annual number of births: {np.std(birth)}")
print(f"Standard deviation of the adult literacy rate: {np.std(literacy)}")
print(f"Variance of the annual number of births: {np.var(birth)}")
print(f"Variance of the adult literacy rate: {np.var(literacy)}")
print(f"Pairwise correlation coefficient:\n {np.corrcoef(literacy, birth)}")
plt.title('Number of births vs. literacy rate', fontsize=20, fontname='Times New Roman')
plt.xlabel('Annual number of births', color='gray')
plt.ylabel('Adult literacy rate',color='gray')
A = np.vstack([birth, np.ones(len(birth))]).T
m, c = np.linalg.lstsq(A, literacy, rcond=None)[0]
plt.plot(birth, m*birth + c, 'b')
plt.plot(birth, literacy, color="r", marker="*", linestyle="none")
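# Added for clarity: report the fitted least-squares slope and intercept
print(f"Fitted line: literacy = {m:.3e} * birth + {c:.3f}")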
###Output
_____no_output_____ |
model_performance.ipynb | ###Markdown
Plot ROC Curve
###Code
plt.figure(figsize=(8, 6), dpi=100)
lw = 2
plt.plot(fpr['train'], tpr['train'], color='darkorange',
lw=lw, label='ROC curve (train area = %0.2f)' % roc_auc['train'])
plt.plot(fpr['test'], tpr['test'], color='darkred',
lw=lw, label='ROC curve (test area = %0.2f)' % roc_auc['test'])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
y_pred_train = lgbm_model.predict(X_train)
confusion_matrix(y_train, y_pred_train)
y_pred_test = lgbm_model.predict(X_test)
confusion_matrix(y_test, y_pred_test)
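# Optional extension (not in the original notebook): per-class precision/recall/F1 on the test set,
# assuming scikit-learn is available alongside lightgbm.
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred_test))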
###Output
_____no_output_____ |
notebooks/1-DataUnderstanding/1. DataDescription/1-DB-projects.ipynb | ###Markdown
**PROJECTS**This notebook the description of the table `PROJECTS`.First, we import the libraries we need and, then, we read the corresponding csv.
###Code
import pandas as pd
projects = pd.read_csv("../../data/raw/PROJECTS.csv")
projects.shape
###Output
_____no_output_____
###Markdown
We show the first rows of the table to get an idea of its content.
###Code
projects.loc[0:4]
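# Added, optional check for the data description: column dtypes and non-null counts
projects.info()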
###Output
_____no_output_____ |
doc/steps_to_make/2001_0301_ast_parse_example.ipynb | ###Markdown
ast parse - example
###Code
def print_all(source_code):
import sys
from pathlib import Path
my_happy_flow_path = str(Path('../../src').resolve())
my_lib_path = str(Path('my_lib').resolve())
if my_lib_path not in sys.path:
sys.path.append(my_lib_path)
if my_happy_flow_path not in sys.path:
sys.path.append(my_happy_flow_path)
from my_happy_flow import Job
from ast_utils import print_utils
print_utils.parse_print(source_code)
print('')
print('=' * 100)
print('')
print_utils.json_print(source_code)
###Output
_____no_output_____
###Markdown
JoinedStr
###Code
print_all('f"sin({a}) is {sin(a):.3}"')
###Output
Module(body=[
Expr(value=JoinedStr(values=[
Str(s='sin('),
FormattedValue(value=Name(id='a', ctx=Load()), conversion=-1, format_spec=None),
Str(s=') is '),
FormattedValue(value=Call(func=Name(id='sin', ctx=Load()), args=[
Name(id='a', ctx=Load()),
], keywords=[]), conversion=-1, format_spec=JoinedStr(values=[
Str(s='.3'),
])),
])),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "JoinedStr",
"values": [
{
"_PyType": "Str",
"s": "sin("
},
{
"_PyType": "FormattedValue",
"value": {
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Load"
}
},
"conversion": -1,
"format_spec": null
},
{
"_PyType": "Str",
"s": ") is "
},
{
"_PyType": "FormattedValue",
"value": {
"_PyType": "Call",
"func": {
"_PyType": "Name",
"id": "sin",
"ctx": {
"_PyType": "Load"
}
},
"args": [
{
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Load"
}
}
],
"keywords": [
]
},
"conversion": -1,
"format_spec": {
"_PyType": "JoinedStr",
"values": [
{
"_PyType": "Str",
"s": ".3"
}
]
}
}
]
}
}
]
}
###Markdown
loading a
###Code
print_all("a")
###Output
Module(body=[
Expr(value=Name(id='a', ctx=Load())),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Load"
}
}
}
]
}
###Markdown
storing a
###Code
print_all("a = 1")
###Output
Module(body=[
Assign(targets=[
Name(id='a', ctx=Store()),
], value=Num(n=1)),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Assign",
"targets": [
{
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Store"
}
}
],
"value": {
"_PyType": "Num",
"n": 1
}
}
]
}
###Markdown
Deleting a
###Code
print_all("del a")
###Output
Module(body=[
Delete(targets=[
Name(id='a', ctx=Del()),
]),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Delete",
"targets": [
{
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Del"
}
}
]
}
]
}
###Markdown
Starred
###Code
print_all("a, *b = it")
###Output
Module(body=[
Assign(targets=[
Tuple(elts=[
Name(id='a', ctx=Store()),
Starred(value=Name(id='b', ctx=Store()), ctx=Store()),
], ctx=Store()),
], value=Name(id='it', ctx=Load())),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Assign",
"targets": [
{
"_PyType": "Tuple",
"elts": [
{
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Store"
}
},
{
"_PyType": "Starred",
"value": {
"_PyType": "Name",
"id": "b",
"ctx": {
"_PyType": "Store"
}
},
"ctx": {
"_PyType": "Store"
}
}
],
"ctx": {
"_PyType": "Store"
}
}
],
"value": {
"_PyType": "Name",
"id": "it",
"ctx": {
"_PyType": "Load"
}
}
}
]
}
###Markdown
Expr
###Code
print_all('-a')
###Output
Module(body=[
Expr(value=UnaryOp(op=USub(), operand=Name(id='a', ctx=Load()))),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "UnaryOp",
"op": {
"_PyType": "USub"
},
"operand": {
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Load"
}
}
}
}
]
}
###Markdown
Compare
###Code
print_all("1 < a < 10")
###Output
Module(body=[
Expr(value=Compare(left=Num(n=1), ops=[
Lt(),
Lt(),
], comparators=[
Name(id='a', ctx=Load()),
Num(n=10),
])),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "Compare",
"left": {
"_PyType": "Num",
"n": 1
},
"ops": [
{
"_PyType": "Lt"
},
{
"_PyType": "Lt"
}
],
"comparators": [
{
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Load"
}
},
{
"_PyType": "Num",
"n": 10
}
]
}
}
]
}
###Markdown
Call
###Code
print_all("func(a, b=c, *d, **e)")
###Output
Module(body=[
Expr(value=Call(func=Name(id='func', ctx=Load()), args=[
Name(id='a', ctx=Load()),
Starred(value=Name(id='d', ctx=Load()), ctx=Load()),
], keywords=[
keyword(arg='b', value=Name(id='c', ctx=Load())),
keyword(arg=None, value=Name(id='e', ctx=Load())),
])),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "Call",
"func": {
"_PyType": "Name",
"id": "func",
"ctx": {
"_PyType": "Load"
}
},
"args": [
{
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Load"
}
},
{
"_PyType": "Starred",
"value": {
"_PyType": "Name",
"id": "d",
"ctx": {
"_PyType": "Load"
}
},
"ctx": {
"_PyType": "Load"
}
}
],
"keywords": [
{
"_PyType": "keyword",
"arg": "b",
"value": {
"_PyType": "Name",
"id": "c",
"ctx": {
"_PyType": "Load"
}
}
},
{
"_PyType": "keyword",
"arg": null,
"value": {
"_PyType": "Name",
"id": "e",
"ctx": {
"_PyType": "Load"
}
}
}
]
}
}
]
}
###Markdown
Attribute
###Code
print_all('snake.colour')
###Output
Module(body=[
Expr(value=Attribute(value=Name(id='snake', ctx=Load()), attr='colour', ctx=Load())),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "Attribute",
"value": {
"_PyType": "Name",
"id": "snake",
"ctx": {
"_PyType": "Load"
}
},
"attr": "colour",
"ctx": {
"_PyType": "Load"
}
}
}
]
}
###Markdown
Index
###Code
print_all("l[1]")
###Output
Module(body=[
Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Index(value=Num(n=1)), ctx=Load())),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "Subscript",
"value": {
"_PyType": "Name",
"id": "l",
"ctx": {
"_PyType": "Load"
}
},
"slice": {
"_PyType": "Index",
"value": {
"_PyType": "Num",
"n": 1
}
},
"ctx": {
"_PyType": "Load"
}
}
}
]
}
###Markdown
Slice
###Code
print_all("l[1:2]")
###Output
Module(body=[
Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=Slice(lower=Num(n=1), upper=Num(n=2), step=None), ctx=Load())),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "Subscript",
"value": {
"_PyType": "Name",
"id": "l",
"ctx": {
"_PyType": "Load"
}
},
"slice": {
"_PyType": "Slice",
"lower": {
"_PyType": "Num",
"n": 1
},
"upper": {
"_PyType": "Num",
"n": 2
},
"step": null
},
"ctx": {
"_PyType": "Load"
}
}
}
]
}
###Markdown
ExtSlice
###Code
print_all("l[1:2, 3]")
###Output
Module(body=[
Expr(value=Subscript(value=Name(id='l', ctx=Load()), slice=ExtSlice(dims=[
Slice(lower=Num(n=1), upper=Num(n=2), step=None),
Index(value=Num(n=3)),
]), ctx=Load())),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "Subscript",
"value": {
"_PyType": "Name",
"id": "l",
"ctx": {
"_PyType": "Load"
}
},
"slice": {
"_PyType": "ExtSlice",
"dims": [
{
"_PyType": "Slice",
"lower": {
"_PyType": "Num",
"n": 1
},
"upper": {
"_PyType": "Num",
"n": 2
},
"step": null
},
{
"_PyType": "Index",
"value": {
"_PyType": "Num",
"n": 3
}
}
]
},
"ctx": {
"_PyType": "Load"
}
}
}
]
}
###Markdown
comprehension
###Code
print_all("[ord(c) for line in file for c in line]")
print_all("(n**2 for n in it if n>5 if n<10)")
source_code = """
async def f():
return [i async for i in soc]
""".strip()
print_all(source_code)
###Output
Module(body=[
AsyncFunctionDef(name='f', args=arguments(args=[], vararg=None, kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]), body=[
Return(value=ListComp(elt=Name(id='i', ctx=Load()), generators=[
comprehension(target=Name(id='i', ctx=Store()), iter=Name(id='soc', ctx=Load()), ifs=[], is_async=1),
])),
], decorator_list=[], returns=None),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "AsyncFunctionDef",
"name": "f",
"args": {
"_PyType": "arguments",
"args": [
],
"vararg": null,
"kwonlyargs": [
],
"kw_defaults": [
],
"kwarg": null,
"defaults": [
]
},
"body": [
{
"_PyType": "Return",
"value": {
"_PyType": "ListComp",
"elt": {
"_PyType": "Name",
"id": "i",
"ctx": {
"_PyType": "Load"
}
},
"generators": [
{
"_PyType": "comprehension",
"target": {
"_PyType": "Name",
"id": "i",
"ctx": {
"_PyType": "Store"
}
},
"iter": {
"_PyType": "Name",
"id": "soc",
"ctx": {
"_PyType": "Load"
}
},
"ifs": [
],
"is_async": 1
}
]
}
}
],
"decorator_list": [
],
"returns": null
}
]
}
###Markdown
statement - assign
###Code
print_all("a = 1 # type: int")
###Output
Module(body=[
Assign(targets=[
Name(id='a', ctx=Store()),
], value=Num(n=1)),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Assign",
"targets": [
{
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Store"
}
}
],
"value": {
"_PyType": "Num",
"n": 1
}
}
]
}
###Markdown
Multiple assignment
###Code
print_all("a = b = 1")
###Output
Module(body=[
Assign(targets=[
Name(id='a', ctx=Store()),
Name(id='b', ctx=Store()),
], value=Num(n=1)),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Assign",
"targets": [
{
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Store"
}
},
{
"_PyType": "Name",
"id": "b",
"ctx": {
"_PyType": "Store"
}
}
],
"value": {
"_PyType": "Num",
"n": 1
}
}
]
}
###Markdown
Unpacking
###Code
print_all("a,b = c")
###Output
Module(body=[
Assign(targets=[
Tuple(elts=[
Name(id='a', ctx=Store()),
Name(id='b', ctx=Store()),
], ctx=Store()),
], value=Name(id='c', ctx=Load())),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Assign",
"targets": [
{
"_PyType": "Tuple",
"elts": [
{
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Store"
}
},
{
"_PyType": "Name",
"id": "b",
"ctx": {
"_PyType": "Store"
}
}
],
"ctx": {
"_PyType": "Store"
}
}
],
"value": {
"_PyType": "Name",
"id": "c",
"ctx": {
"_PyType": "Load"
}
}
}
]
}
###Markdown
AnnAssign
###Code
print_all("c: int")
###Output
Module(body=[
AnnAssign(target=Name(id='c', ctx=Store()), annotation=Name(id='int', ctx=Load()), value=None, simple=1),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "AnnAssign",
"target": {
"_PyType": "Name",
"id": "c",
"ctx": {
"_PyType": "Store"
}
},
"annotation": {
"_PyType": "Name",
"id": "int",
"ctx": {
"_PyType": "Load"
}
},
"value": null,
"simple": 1
}
]
}
###Markdown
Expression like name
###Code
print_all("(a): int = 1")
###Output
Module(body=[
AnnAssign(target=Name(id='a', ctx=Store()), annotation=Name(id='int', ctx=Load()), value=Num(n=1), simple=0),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "AnnAssign",
"target": {
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Store"
}
},
"annotation": {
"_PyType": "Name",
"id": "int",
"ctx": {
"_PyType": "Load"
}
},
"value": {
"_PyType": "Num",
"n": 1
},
"simple": 0
}
]
}
###Markdown
Attribute annotation
###Code
print_all("a.b: int")
###Output
Module(body=[
AnnAssign(target=Attribute(value=Name(id='a', ctx=Load()), attr='b', ctx=Store()), annotation=Name(id='int', ctx=Load()), value=None, simple=0),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "AnnAssign",
"target": {
"_PyType": "Attribute",
"value": {
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Load"
}
},
"attr": "b",
"ctx": {
"_PyType": "Store"
}
},
"annotation": {
"_PyType": "Name",
"id": "int",
"ctx": {
"_PyType": "Load"
}
},
"value": null,
"simple": 0
}
]
}
###Markdown
Subscript annotation
###Code
print_all("a[1]: int")
###Output
Module(body=[
AnnAssign(target=Subscript(value=Name(id='a', ctx=Load()), slice=Index(value=Num(n=1)), ctx=Store()), annotation=Name(id='int', ctx=Load()), value=None, simple=0),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "AnnAssign",
"target": {
"_PyType": "Subscript",
"value": {
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Load"
}
},
"slice": {
"_PyType": "Index",
"value": {
"_PyType": "Num",
"n": 1
}
},
"ctx": {
"_PyType": "Store"
}
},
"annotation": {
"_PyType": "Name",
"id": "int",
"ctx": {
"_PyType": "Load"
}
},
"value": null,
"simple": 0
}
]
}
###Markdown
import
###Code
print_all("from ..foo.bar import a as b, c")
###Output
Module(body=[
ImportFrom(module='foo.bar', names=[
alias(name='a', asname='b'),
alias(name='c', asname=None),
], level=2),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "ImportFrom",
"module": "foo.bar",
"names": [
{
"_PyType": "alias",
"name": "a",
"asname": "b"
},
{
"_PyType": "alias",
"name": "c",
"asname": null
}
],
"level": 2
}
]
}
###Markdown
for if
###Code
source_code = """
for a in b:
if a > 5:
break
else:
continue
""".strip()
print_all(source_code)
###Output
Module(body=[
For(target=Name(id='a', ctx=Store()), iter=Name(id='b', ctx=Load()), body=[
If(test=Compare(left=Name(id='a', ctx=Load()), ops=[
Gt(),
], comparators=[
Num(n=5),
]), body=[
Break(),
], orelse=[
Continue(),
]),
], orelse=[]),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "For",
"target": {
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Store"
}
},
"iter": {
"_PyType": "Name",
"id": "b",
"ctx": {
"_PyType": "Load"
}
},
"body": [
{
"_PyType": "If",
"test": {
"_PyType": "Compare",
"left": {
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Load"
}
},
"ops": [
{
"_PyType": "Gt"
}
],
"comparators": [
{
"_PyType": "Num",
"n": 5
}
]
},
"body": [
{
"_PyType": "Break"
}
],
"orelse": [
{
"_PyType": "Continue"
}
]
}
],
"orelse": [
]
}
]
}
###Markdown
except handler
###Code
source_code = """
try:
a + 1
except TypeError:
pass
""".strip()
print_all(source_code)
###Output
Module(body=[
Try(body=[
Expr(value=BinOp(left=Name(id='a', ctx=Load()), op=Add(), right=Num(n=1))),
], handlers=[
ExceptHandler(type=Name(id='TypeError', ctx=Load()), name=None, body=[
Pass(),
]),
], orelse=[], finalbody=[]),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "Try",
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "BinOp",
"left": {
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Load"
}
},
"op": {
"_PyType": "Add"
},
"right": {
"_PyType": "Num",
"n": 1
}
}
}
],
"handlers": [
{
"_PyType": "ExceptHandler",
"type": {
"_PyType": "Name",
"id": "TypeError",
"ctx": {
"_PyType": "Load"
}
},
"name": null,
"body": [
{
"_PyType": "Pass"
}
]
}
],
"orelse": [
],
"finalbody": [
]
}
]
}
###Markdown
with item
###Code
source_code = """
with a as b, c as d:
do_things(b, d)
""".strip()
print_all(source_code)
###Output
Module(body=[
With(items=[
withitem(context_expr=Name(id='a', ctx=Load()), optional_vars=Name(id='b', ctx=Store())),
withitem(context_expr=Name(id='c', ctx=Load()), optional_vars=Name(id='d', ctx=Store())),
], body=[
Expr(value=Call(func=Name(id='do_things', ctx=Load()), args=[
Name(id='b', ctx=Load()),
Name(id='d', ctx=Load()),
], keywords=[])),
]),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "With",
"items": [
{
"_PyType": "withitem",
"context_expr": {
"_PyType": "Name",
"id": "a",
"ctx": {
"_PyType": "Load"
}
},
"optional_vars": {
"_PyType": "Name",
"id": "b",
"ctx": {
"_PyType": "Store"
}
}
},
{
"_PyType": "withitem",
"context_expr": {
"_PyType": "Name",
"id": "c",
"ctx": {
"_PyType": "Load"
}
},
"optional_vars": {
"_PyType": "Name",
"id": "d",
"ctx": {
"_PyType": "Store"
}
}
}
],
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "Call",
"func": {
"_PyType": "Name",
"id": "do_things",
"ctx": {
"_PyType": "Load"
}
},
"args": [
{
"_PyType": "Name",
"id": "b",
"ctx": {
"_PyType": "Load"
}
},
{
"_PyType": "Name",
"id": "d",
"ctx": {
"_PyType": "Load"
}
}
],
"keywords": [
]
}
}
]
}
]
}
###Markdown
function
###Code
source_code = """
@dec1
@dec2
def f(a: 'annotation', b=1, c=2, *d, e, f=3, **g) -> 'return annotation':
pass
""".strip()
print_all(source_code)
###Output
Module(body=[
FunctionDef(name='f', args=arguments(args=[
arg(arg='a', annotation=Str(s='annotation')),
arg(arg='b', annotation=None),
arg(arg='c', annotation=None),
], vararg=arg(arg='d', annotation=None), kwonlyargs=[
arg(arg='e', annotation=None),
arg(arg='f', annotation=None),
], kw_defaults=[
None,
Num(n=3),
], kwarg=arg(arg='g', annotation=None), defaults=[
Num(n=1),
Num(n=2),
]), body=[
Pass(),
], decorator_list=[
Name(id='dec1', ctx=Load()),
Name(id='dec2', ctx=Load()),
], returns=Str(s='return annotation')),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "FunctionDef",
"name": "f",
"args": {
"_PyType": "arguments",
"args": [
{
"_PyType": "arg",
"arg": "a",
"annotation": {
"_PyType": "Str",
"s": "annotation"
}
},
{
"_PyType": "arg",
"arg": "b",
"annotation": null
},
{
"_PyType": "arg",
"arg": "c",
"annotation": null
}
],
"vararg": {
"_PyType": "arg",
"arg": "d",
"annotation": null
},
"kwonlyargs": [
{
"_PyType": "arg",
"arg": "e",
"annotation": null
},
{
"_PyType": "arg",
"arg": "f",
"annotation": null
}
],
"kw_defaults": [
null,
{
"_PyType": "Num",
"n": 3
}
],
"kwarg": {
"_PyType": "arg",
"arg": "g",
"annotation": null
},
"defaults": [
{
"_PyType": "Num",
"n": 1
},
{
"_PyType": "Num",
"n": 2
}
]
},
"body": [
{
"_PyType": "Pass"
}
],
"decorator_list": [
{
"_PyType": "Name",
"id": "dec1",
"ctx": {
"_PyType": "Load"
}
},
{
"_PyType": "Name",
"id": "dec2",
"ctx": {
"_PyType": "Load"
}
}
],
"returns": {
"_PyType": "Str",
"s": "return annotation"
}
}
]
}
###Markdown
class
###Code
source_code = """
@dec1
@dec2
class foo(base1, base2, metaclass=meta):
pass
""".strip()
print_all(source_code)
###Output
Module(body=[
ClassDef(name='foo', bases=[
Name(id='base1', ctx=Load()),
Name(id='base2', ctx=Load()),
], keywords=[
keyword(arg='metaclass', value=Name(id='meta', ctx=Load())),
], body=[
Pass(),
], decorator_list=[
Name(id='dec1', ctx=Load()),
Name(id='dec2', ctx=Load()),
]),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "ClassDef",
"name": "foo",
"bases": [
{
"_PyType": "Name",
"id": "base1",
"ctx": {
"_PyType": "Load"
}
},
{
"_PyType": "Name",
"id": "base2",
"ctx": {
"_PyType": "Load"
}
}
],
"keywords": [
{
"_PyType": "keyword",
"arg": "metaclass",
"value": {
"_PyType": "Name",
"id": "meta",
"ctx": {
"_PyType": "Load"
}
}
}
],
"body": [
{
"_PyType": "Pass"
}
],
"decorator_list": [
{
"_PyType": "Name",
"id": "dec1",
"ctx": {
"_PyType": "Load"
}
},
{
"_PyType": "Name",
"id": "dec2",
"ctx": {
"_PyType": "Load"
}
}
]
}
]
}
###Markdown
async
###Code
source_code = """
async def f():
await g()
""".strip()
print_all(source_code)
###Output
Module(body=[
AsyncFunctionDef(name='f', args=arguments(args=[], vararg=None, kwonlyargs=[], kw_defaults=[], kwarg=None, defaults=[]), body=[
Expr(value=Await(value=Call(func=Name(id='g', ctx=Load()), args=[], keywords=[]))),
], decorator_list=[], returns=None),
])
====================================================================================================
{
"_PyType": "Module",
"body": [
{
"_PyType": "AsyncFunctionDef",
"name": "f",
"args": {
"_PyType": "arguments",
"args": [
],
"vararg": null,
"kwonlyargs": [
],
"kw_defaults": [
],
"kwarg": null,
"defaults": [
]
},
"body": [
{
"_PyType": "Expr",
"value": {
"_PyType": "Await",
"value": {
"_PyType": "Call",
"func": {
"_PyType": "Name",
"id": "g",
"ctx": {
"_PyType": "Load"
}
},
"args": [
],
"keywords": [
]
}
}
}
],
"decorator_list": [
],
"returns": null
}
]
}
|
Kaggle-DS-Survey-2020-main/Kaggle DS 2020 survey.ipynb | ###Markdown
1. Countries with the most data scientists (overall vs. experienced) A. Overall
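The plotting cells below assume that pandas, seaborn, and matplotlib are already imported and that the 2020 survey responses are loaded into a DataFrame named `df`. A minimal setup sketch is shown here; the CSV file name and the step of dropping the first (question-text) row are assumptions about the data file, not part of the original notebook.

```python
# Minimal setup assumed by the cells below (file name and layout are assumptions).
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv('kaggle_survey_2020_responses.csv', low_memory=False)
df = df.iloc[1:]  # the first row holds the question text, so drop it (assumed layout)
```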
###Code
plt.figure(figsize=(15,5))
sns.countplot(
x='Q3', data=df, palette = 'YlGn_r', order = df.Q3.value_counts().iloc[:6].index
).set(title='Data scientists in the top countries')
plt.xlabel('Country')
plt.show()
###Output
_____no_output_____
###Markdown
B. Experienced
###Code
def experience(year):
if year == '10-20 years' or year == '20+ years':
        return 'Professional'
elif year == '5-10 years' or year == '3-5 years':
return 'intermediate'
else:
return 'Beginner'
exp = df['Q6'].apply(experience)
plt.figure(figsize=(15,5))
sns.countplot(
x='Q3', hue =exp, data=df, palette = 'YlGnBu_r', order = df.Q3.value_counts().iloc[:6].index
).set(title='Experience-wise data scientists in the top countries')
plt.xlabel('Country')
plt.show()
###Output
_____no_output_____
###Markdown
2. Programming language recommended most for aspiring data scientists.
###Code
df['Q8'].unique()
fig = plt.figure(figsize=(15,5))
sns.countplot(
x='Q8', data=df, palette = 'Greens_d', order=df.Q8.value_counts().iloc[:8].index
).set(title='Programming languages recommended most for aspiring data scientists')
plt.xlabel('Programming Language')
plt.show()
###Output
_____no_output_____
###Markdown
3. Programming Language used on a regular basis
###Code
q7 = {}
for i in range(1, 12):
q7.update(dict(df[f'Q7_Part_{i}'].value_counts()))
q7 = pd.DataFrame(q7.items(), columns=['language', 'total'])
plt.figure(figsize=(15,5))
plt.plot(
q7['language'], q7['total'], color='DarkOrange', linestyle='dashed', linewidth = 3, marker='o', markerfacecolor='blue', markersize=12
)
plt.xlabel('Langauges used on regular basis')
plt.ylabel('Total')
plt.title('Langauges used on regular basis')
plt.show()
###Output
_____no_output_____
###Markdown
4. Most popular Integrated development environments (IDE)
###Code
q9 = {}
for i in range(1, 12):
q9.update(dict(df[f'Q9_Part_{i}'].value_counts()))
q9 = pd.DataFrame(q9.items(), columns=['language', 'total'])
q9
fig = plt.figure(figsize = (15,5))
plt.bar(q9['language'], q9['total'], width = 0.2)
plt.xlabel('Integrated development environments(IDE)')
plt.ylabel('total')
plt.xticks(rotation='vertical')
plt.title('Popular IDE for Data Science')
plt.show()
###Output
_____no_output_____
###Markdown
5. ML algorithms used on a regular basis
###Code
q17 = {}
for i in range(1, 12):
q17.update(dict(df[f'Q17_Part_{i}'].value_counts()))
q17 = pd.DataFrame(q17.items(), columns=['ML_Algorithm', 'total'])
plt.figure(figsize=(15,5))
sns.set(style='darkgrid')
sns.set(style='whitegrid')
sns.lineplot(x='ML_Algorithm', y='total', data=q17,color='r')
plt.title('ML algorithm used on regular basis')
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
6. Business intelligence tools used on a regular basis
###Code
q31 = {}
for i in range(1, 14):
q31.update(dict(df[f'Q31_A_Part_{i}'].value_counts()))
q31 = pd.DataFrame(q31.items(), columns=['BI_tool', 'total'])
plt.figure(figsize=(20,5))
plot_order = q31.sort_values(by='total',ascending=False).BI_tool.values
sns.set(style='darkgrid')
sns.barplot(x='BI_tool', y='total', data=q31, order=plot_order)
plt.title('Business intelligence tools used on regular basis')
plt.xticks(rotation=-45)
plt.show()
###Output
_____no_output_____
###Markdown
7. Platforms where data analyses or machine learning models are publicly shared or deployed.
###Code
q36 = {}
for i in range(1, 10):
q36.update(dict(df[f'Q36_Part_{i}'].value_counts()))
q36 = pd.DataFrame(q36.items(), columns=['Platform', 'total'])
plt.figure(figsize =(10,5))
plot_order = q36.sort_values(by='total',ascending=False).Platform.values
sns.barplot(x='total', y='Platform', data=q36, order=plot_order, palette='plasma')
plt.ylabel('Platform')
plt.xlabel('Total')
plt.title('Platforms where analyses and models are shared')
plt.show()
###Output
_____no_output_____
###Markdown
8. Age groups that participated the most in the survey.
###Code
plt.figure(figsize=(20,5))
sns.countplot(x='Q1', data=df, order=df.Q1.value_counts().index)  # order all age groups by frequency (replaces the stale loop variable i)
plt.xlabel('Age group')
plt.ylabel('Total')
plt.title('Age group participated in survey')
plt.show()
###Output
_____no_output_____
###Markdown
9. Participation by people of different professions in the survey.
###Code
plt.figure(figsize =(15,8))
df['Q5'].value_counts().plot(kind='barh')
plt.ylabel('Profession')
plt.xlabel('Total')
plt.title('Participation of people from different professions')
plt.show()
###Output
_____no_output_____
###Markdown
10. Natural language processing (NLP) methods used on a regular basis
###Code
q19 = {}
for i in range(1, 6):
q19.update(dict(df[f'Q19_Part_{i}'].value_counts()))
q19 = pd.DataFrame(q19.items(), columns=['NLP', 'total'])
q19
plt.figure(figsize =(10,5))
plot_order = q19.sort_values(by='total',ascending=False).NLP.values
sns.barplot(x='total', y='NLP', data=q19, order=plot_order, palette='Oranges_r')
plt.ylabel('NLP')
plt.xlabel('Total')
plt.title('NLP used on regular basis')
plt.show()
###Output
_____no_output_____ |
1S_regression_model_selection/polynomial_regression.ipynb | ###Markdown
Polynomial Regression Importing the libraries
###Code
import numpy as np
# no visual graph in this template, plt is commented out as such
# import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Data.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Training the Polynomial Regression model on the Training set
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
poly_reg = PolynomialFeatures(degree = 4)
X_poly = poly_reg.fit_transform(X_train)
regressor = LinearRegression()
regressor.fit(X_poly, y_train)
###Output
_____no_output_____
###Markdown
Predicting the Test set results
###Code
y_pred = regressor.predict(poly_reg.transform(X_test))
np.set_printoptions(precision=2)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
[[433.94 431.23]
[457.9 460.01]
[460.52 461.14]
...
[469.53 473.26]
[438.27 438. ]
[461.67 463.28]]
###Markdown
Evaluating the Model Performance
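For reference, `r2_score` returns the coefficient of determination, which compares the squared prediction errors against the variance of the test targets:
$$ R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} $$
A value close to 1 means the polynomial model explains most of the variance in the test set.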
###Code
from sklearn.metrics import r2_score
r2_score(y_test, y_pred)
###Output
_____no_output_____ |
DSC 530 - Data Exploration and Analysis/ThinkStats2/examples/gender_bias_example.ipynb | ###Markdown
Data visualization exampleIn a [recent blog post](https://www.allendowney.com/blog/2019/01/30/data-visualization-for-academics/), I showed figures from [a recent paper](https://osf.io/preprints/socarxiv/j2tw9/) and invited readers to redesign them to communicate their message more effectively.This notebook shows one way we might redesign the figures. At the same time, it demonstrates a simple use of a Pandas MultiIndex.
###Code
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
The study reports the distribution of student evaluation scores for instructors under eight conditions. At the top level, they report scores from evaluations with a 10-point or 6-point scale.
###Code
scale = ['10-point', '6-point']
###Output
_____no_output_____
###Markdown
At the next level, they distinguish fields of study as "least" or "most" male-dominated.
###Code
area = ['LeastMaleDominated', 'MostMaleDominated']
###Output
_____no_output_____
###Markdown
And they distinguish between male and female instructors.
###Code
instructor = ['Male', 'Female']
###Output
_____no_output_____
###Markdown
We can assemble those levels into a MultiIndex like this:
###Code
index = pd.MultiIndex.from_product([scale, area, instructor],
names=['Scale', 'Area', 'Instructor'])
index
###Output
_____no_output_____
###Markdown
For each of these eight conditions, the original paper reports the entire distribution of student evaluation scores. To make a simpler and clearer visualization of the results, I am going to present a summary of these distributions.I could take the mean of each distribution, and that would show the effect. But to make it even clearer, I will use the fraction of "top" scores, meaning a 9 or 10 on the 10-point scale and a 6 on the 6-point scale. Now, to get the data, I used the figures from the paper and estimated numbers by eye. **So these numbers are only approximate!**
###Code
data = [60, 60, 54, 38, 43, 42, 41, 41]
df = pd.DataFrame(data, columns=['TopScore%'], index=index)
df
###Output
_____no_output_____
###Markdown
To extract the subset of the data on a 10-point scale, we can use `loc` in the usual way.
###Code
df.loc['10-point']
###Output
_____no_output_____
###Markdown
To extract subsets at other levels, we can use `xs`. This example takes a cross-section of the second level.
###Code
df.xs('MostMaleDominated', level='Area')
###Output
_____no_output_____
###Markdown
This example takes a cross-section of the third level.
###Code
df.xs('Male', level='Instructor')
###Output
_____no_output_____
###Markdown
Ok, now to think about presenting the data. At the top level, the 10-point scale and the 6-point scale are different enough that I want to put them on different axes. So I'll start by splitting the data at the top level.
###Code
ten = df.loc['10-point']
ten
###Output
_____no_output_____
###Markdown
Now, the primary thing I want the reader to see is a discrepancy in percentages. For comparison of two or more values, a bar plot is often a good choice.As a starting place, I'll try the Pandas default for showing a bar plot of this data.
###Code
ten.unstack().plot(kind='bar');
###Output
_____no_output_____
###Markdown
As defaults go, that's not bad. From this figure it is immediately clear that there is a substantial difference in scores between male and female instructors in male-dominated areas, and no difference in other areas.The following function cleans up some of the details in the presentation.
###Code
def make_bar_plot(df):
# make the plot (and set the rotation of the x-axis)
df.unstack().plot(kind='bar', rot=0, alpha=0.7);
# clean up the legend
plt.gca().legend(['Female', 'Male'])
# label the y axis
plt.ylabel('Fraction of instructors getting top scores')
    # set limits on the y-axis (in part to make room for the legend)
plt.ylim([0, 75])
###Output
_____no_output_____
###Markdown
Here are the results for the 10-point scale.
###Code
make_bar_plot(ten)
plt.title('10-point scale');
###Output
_____no_output_____
###Markdown
And here are the results for the six-point scale, which show clearly that the effect disappears when a 6-point scale is used (at least in this experiment).
###Code
six = df.loc['6-point']
make_bar_plot(six)
plt.title('6-point scale');
###Output
_____no_output_____
###Markdown
Presenting two figures might be the best option, but in my challenge I asked for a single figure.Here's a version that uses Pandas defaults with minimal customization.
###Code
df.unstack().plot(kind='barh', xlim=[0, 65], alpha=0.7);
plt.gca().legend(['Female', 'Male'])
plt.gca().invert_yaxis()
plt.xlabel('Fraction of instructors getting top scores')
plt.tight_layout()
plt.savefig('gender_bias.png')
###Output
_____no_output_____ |
docs/Examples/drug_respons_01.ipynb | ###Markdown
Tree algorithms
###Code
# RandomForestClassifier
model = ensemble.RandomForestClassifier(random_state=0, class_weight="balanced")
estimator = BinarizeTargetClassifier(model)
rval = rval_rfc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
# RandomForestRegressor
model = ensemble.RandomForestRegressor(random_state=0)
estimator = BinarizeTargetRegressor(model)
rval = rval_rfr = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
# XGBClassifier (gbtree booster)
model = XGBClassifier(booster='gbtree')
estimator = BinarizeTargetClassifier(model)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
# XGBRegressor (dart booster)
model = XGBRegressor(booster='dart')
estimator = BinarizeTargetRegressor(model)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
# XGBClassifier (dart booster)
model = XGBClassifier(booster='dart')
estimator = BinarizeTargetClassifier(model)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
# XGBRegressor (gblinear booster)
model = XGBRegressor(booster='gblinear')
estimator = BinarizeTargetRegressor(model)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
# XGBClassifier (gblinear booster)
model = XGBClassifier(booster='gblinear')
estimator = BinarizeTargetClassifier(model)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
# LinearSVR
model = svm.LinearSVR()
estimator = BinarizeTargetRegressor(model)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
# SVR
model = svm.SVR()
estimator = BinarizeTargetRegressor(model)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
# LinearRegression
model = linear_model.LinearRegression()
estimator = BinarizeTargetRegressor(model)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
# Ridge
model = linear_model.Ridge()
estimator = BinarizeTargetRegressor(model)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
# LassoLars
model = linear_model.LassoLars()
estimator = BinarizeTargetRegressor(model)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
###Output
_____no_output_____
###Markdown
Stacking
###Code
li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
li_xgbr = XGBRegressor(booster='gblinear')
li_regr = linear_model.LinearRegression()
dart_xgbr = XGBRegressor(booster='gbtree')
tree_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gbtree'))
ridge = BinarizeTargetRegressor(linear_model.Ridge())
estimator = StackingCVRegressor(regressors=[li_xgbc, li_xgbr, li_regr, dart_xgbr], meta_regressor=ridge)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
li_xgbr = BinarizeTargetRegressor(XGBRegressor(booster='gblinear'))
li_regr = BinarizeTargetRegressor(linear_model.LinearRegression())
dart_xgbr = BinarizeTargetRegressor(XGBRegressor(booster='gbtree'))
tree_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gbtree'))
ridge = BinarizeTargetRegressor(linear_model.Ridge())
estimator = StackingRegressor(regressors=[li_xgbc, li_xgbr, li_regr, dart_xgbr], meta_regressor=ridge)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
li_xgbr = BinarizeTargetRegressor(XGBRegressor(booster='gblinear'))
estimator = StackingCVRegressor(regressors=[li_xgbc, li_xgbr], meta_regressor=li_xgbc, cv=cv)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
li_xgbr = BinarizeTargetRegressor(XGBRegressor(booster='gblinear'))
estimator = StackingRegressor(regressors=[li_xgbc, li_xgbr], meta_regressor=li_xgbc)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
li_xgbr = BinarizeTargetRegressor(XGBRegressor(booster='gblinear'))
estimator = StackingRegressor(regressors=[li_xgbc, li_xgbr], meta_regressor=li_xgbr)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
li_xgbr = BinarizeTargetRegressor(XGBRegressor(booster='gblinear'))
estimator = StackingRegressor(regressors=[li_xgbc, li_xgbr], meta_regressor=li_regr)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
li_xgbr = BinarizeTargetRegressor(XGBRegressor(booster='gblinear'))
estimator = StackingRegressor(regressors=[li_xgbr, li_xgbc], meta_regressor=li_regr)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
li_xgbr = BinarizeTargetRegressor(XGBRegressor(booster='gblinear'))
svr = BinarizeTargetRegressor(svm.SVR())
estimator = StackingRegressor(regressors=[li_xgbr, li_xgbc], meta_regressor=svr)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbr = XGBRegressor(booster='gblinear')
dart_xgbr = XGBRegressor(booster='dart')
li_svr = BinarizeTargetRegressor(svm.LinearSVR())
estimator = StackingRegressor(regressors=[li_xgbr, dart_xgbr], meta_regressor=li_svr, store_train_meta_features=True)
estimator.fit(X, y)
li_xgbr = XGBRegressor(booster='gblinear')
dart_xgbr = XGBRegressor(booster='dart')
b_li_xgbr = BinarizeTargetRegressor(li_xgbr)
estimator = StackingCVRegressor(regressors=[li_xgbr], meta_regressor=b_li_xgbr, use_features_in_secondary=True, cv=cv)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbr = XGBRegressor(booster='gblinear')
dart_xgbr = XGBRegressor(booster='dart')
b_li_xgbr = BinarizeTargetRegressor(li_xgbr)
estimator = StackingCVRegressor(regressors=[dart_xgbr], meta_regressor=b_li_xgbr, use_features_in_secondary=True, cv=cv)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbr = XGBRegressor(booster='gblinear')
li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
b_li_xgbr = BinarizeTargetRegressor(li_xgbr)
estimator = StackingCVRegressor(regressors=[li_xgbc], meta_regressor=b_li_xgbr, use_features_in_secondary=True, cv=cv)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbr = XGBRegressor(booster='gblinear')
li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
logistic = BinarizeTargetClassifier(linear_model.LogisticRegression(random_state=10))
estimator = StackingRegressor(regressors=[li_xgbc, b_li_xgbr], meta_regressor=logistic)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
li_xgbr = XGBRegressor(booster='gblinear')
li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
tree_xgbc = BinarizeTargetClassifier(XGBClassifier())
estimator = StackingRegressor(regressors=[li_xgbc, b_li_xgbr], meta_regressor=tree_xgbc)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
###Output
_____no_output_____
###Markdown
Feature_selection
###Code
from sklearn.feature_selection import SelectKBest, f_classif
kbest = SelectKBest(score_func=f_classif, k=100)
pipe = Pipeline([('kbest', kbest), ('est', XGBClassifier(booster='gblinear'))])
estimator = BinarizeTargetClassifier(pipe)
#estimator = StackingRegressor(regressors=[li_xgbc, b_li_xgbr], meta_regressor=tree_xgbc)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
from sklearn.feature_selection import SelectKBest, f_classif, f_regression
kbest = SelectKBest(score_func=f_regression, k=200)
pipe = Pipeline([('kbest', kbest), ('est', XGBRegressor(booster='gblinear'))])
estimator = BinarizeTargetRegressor(pipe)
#estimator = StackingRegressor(regressors=[li_xgbc, b_li_xgbr], meta_regressor=tree_xgbc)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
from sklearn.feature_selection import SelectKBest, f_classif, f_regression
kbest = SelectKBest(score_func=f_classif, k=100)
pipe = Pipeline([('kbest', kbest), ('est', XGBClassifier(booster='gblinear'))])
estimator = BinarizeTargetClassifier(pipe)
cv2 = RepeatedOrderedKFold(n_splits=10, n_repeats=4)
#estimator = StackingRegressor(regressors=[li_xgbc, b_li_xgbr], meta_regressor=tree_xgbc)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv2, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
from sklearn.feature_selection import SelectKBest, f_classif, f_regression, chi2, mutual_info_classif
kbest = SelectKBest(score_func=mutual_info_classif, k=100)
pipe = Pipeline([('kbest', kbest), ('est', XGBClassifier(booster='gblinear'))])
estimator = BinarizeTargetClassifier(pipe)
cv2 = OrderedKFold(n_splits=10)
#estimator = StackingRegressor(regressors=[li_xgbc, b_li_xgbr], meta_regressor=tree_xgbc)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv2, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
from sklearn.feature_selection import SelectKBest, f_classif, f_regression, chi2, mutual_info_classif
kbest = SelectKBest(score_func=f_classif, k=100)
pipe = Pipeline([('kbest', kbest), ('est', XGBClassifier(booster='gblinear'))])
estimator = BinarizeTargetClassifier(pipe)
#estimator = StackingRegressor(regressors=[li_xgbc, b_li_xgbr], meta_regressor=tree_xgbc)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
from sklearn.feature_selection import SelectKBest, f_classif, f_regression, chi2, mutual_info_classif
kbest = BinarizeTargetTransformer(SelectKBest(score_func=f_classif, k=100))
b_li_xgbc = BinarizeTargetClassifier(XGBClassifier(booster='gblinear'))
pipe = Pipeline([('kbest', kbest), ('est', b_li_xgbc)])
#estimator = BinarizeTargetClassifier(pipe)
#estimator = StackingRegressor(regressors=[li_xgbc, b_li_xgbr], meta_regressor=tree_xgbc)
rval = rval_xgbc = cross_validate(pipe, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
###Output
_____no_output_____
###Markdown
preprocessing
###Code
from sklearn.feature_selection import SelectKBest, f_classif, f_regression, chi2, mutual_info_classif
from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler, Normalizer
kbest = SelectKBest(score_func=f_classif, k=100)
scaler = Normalizer(norm='l2')
pipe = Pipeline([('kbest', kbest), ('scaler', scaler), ('est', XGBClassifier(booster='gblinear'))])
estimator = BinarizeTargetClassifier(pipe)
cv2 = OrderedKFold(n_splits=10)
#estimator = StackingRegressor(regressors=[li_xgbc, b_li_xgbr], meta_regressor=tree_xgbc)
rval = rval_xgbc = cross_validate(estimator, X, y, cv=cv2, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
###Output
_____no_output_____
###Markdown
Stacking
###Code
from sklearn.feature_selection import SelectKBest, f_classif, f_regression, chi2, mutual_info_classif
kbest = BinarizeTargetTransformer(SelectKBest(score_func=f_classif, k=100))
li_xgbc = XGBClassifier(booster='gblinear')
b_li_xgbc = BinarizeTargetClassifier(li_xgbc)
tree_xgbr = XGBClassifier(booster='gbtree')
dart_xgbr = XGBClassifier(booster='dart')
li_xgbr = XGBRegressor(booster='gblinear')
logistic = linear_model.LogisticRegression()
stacking = StackingCVRegressor(regressors=[li_xgbr], meta_regressor=b_li_xgbc, cv=cv, use_features_in_secondary=False)
pipe = Pipeline([('kbest', kbest), ('est', BinarizeTargetRegressor(stacking))])
rval = rval_xgbc = cross_validate(pipe, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
from sklearn.feature_selection import SelectKBest, f_classif, f_regression, chi2, mutual_info_classif
kbest = BinarizeTargetTransformer(SelectKBest(score_func=f_classif, k=100))
li_xgbc = XGBClassifier(booster='gblinear')
b_li_xgbc = BinarizeTargetClassifier(li_xgbc)
tree_xgbr = XGBClassifier(booster='gbtree')
dart_xgbr = XGBClassifier(booster='dart')
li_xgbr = XGBRegressor(booster='gblinear')
logistic = linear_model.LogisticRegression()
svr = svm.SVR()
cv2 = OrderedKFold(n_splits=10)
stacking = StackingCVClassifier(classifiers=[li_xgbc], meta_classifier=li_xgbc, cv=cv2, use_features_in_secondary=True)
pipe = Pipeline([('kbest', kbest), ('est', BinarizeTargetClassifier(stacking))])
rval = rval_xgbc = cross_validate(pipe, X, y, cv=cv, scoring=scoring, n_jobs=2)
rval, np.mean(rval['test_ap']), np.mean(rval['test_auc'])
###Output
_____no_output_____ |
praktikum/fischer.ipynb | ###Markdown
Check if perceptron yields similar results
###Code
def perceptron(x_train,y_train):
tf.compat.v1.reset_default_graph()
with tf.device("/gpu:0"):
model = tf.keras.models.Sequential(
[tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(x_train.shape[1],))])
model.compile(optimizer='adam',loss="binary_crossentropy", metrics=['accuracy'])
hist = model.fit(
x=x_train,
y=y_train,
epochs=10,
batch_size = 32,
verbose=1).history
return np.array([a.numpy() for a in model.trainable_weights])[0]
for preprocessing in data:
for subset in data[preprocessing]:
print(preprocessing,subset)
datasubset = data[preprocessing][subset]
w = perceptron(datasubset["x_train"],datasubset["y_train_binary"])
mean = datasubset["x_train"].mean(axis=0).reshape(-1,1)
Helpers.store((abs(w)*mean).numpy(),"measure/featureimportance/{}/{}".format(preprocessing,subset),"importance")
def calc_featureimportance(data,weights):
return abs(weights*data["x_train"].mean(axis=0).reshape(-1,1))
plt.figure(figsize=(10,10))
plt.imshow(Preprocessing.minmax_scaler(calc_featureimportance(datasubset, w)).reshape(2,2), cmap="gray", vmin=0)  # use the last datasubset and weights from the loop above
plt.axis("off")
plt.show()
###Output
_____no_output_____ |
codes/mixup.ipynb | ###Markdown
mixup: BEYOND EMPIRICAL RISK MINIMIZATION
###Code
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
from scipy.stats import beta
###Output
_____no_output_____
###Markdown
1. mixup of images
###Code
im1 = np.array(Image.open("../images/000154.jpg"))
im2 = np.array(Image.open("../images/000218.jpg"))
plt.figure(num=1, figsize=(10,5))
plt.subplot(1,2,1)
plt.imshow(im1)
plt.subplot(1,2,2)
plt.imshow(im2)
plt.show()
plt.figure(num=2, figsize=(10,10))
for i in range(1,10):
lam= i*0.1
im_mixup = (im1*lam+im2*(1-lam)).astype(np.uint8)
plt.subplot(3,3,i)
plt.imshow(im_mixup)
plt.show()
###Output
_____no_output_____
###Markdown
2. numpy.random.beta(alpha, alpha)
###Code
alpha = 1.0
np.random.beta(alpha,alpha)
plt.figure(num=3, figsize=(8,5))
x = np.linspace( 0, 1, 100)
a_array = [5, 2, 1, 0.1 ]
for a in a_array:
plt.plot(x, beta.pdf(x, a, a), lw= 1, alpha= 0.6, label= 'a='+ str(a) + ',b='+ str(a))
plt.legend(frameon= False)
plt.show()
###Output
_____no_output_____ |
analysis/EDA/cross_val_novo.ipynb | ###Markdown
Create a dictionary. Since the output of saphyra is like a dictionary, we need to navigate it and extract all of the information. kolmov has a class called crossval_table which allows us to get this information and transform it into a pandas DataFrame. The first thing to do is define an OrderedDict to access all of the information inside a saphyra tuned file.
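The cells below assume the usual imports were made in an earlier cell. A minimal setup sketch is given here; the exact import location of `crossval_table` (and of `get_color_fader`, used later) inside kolmov is an assumption and may differ between versions.

```python
# Minimal setup assumed by the cells below (import paths are assumptions).
import os
import collections
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from kolmov import crossval_table, get_color_fader  # assumed import location
```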
###Code
def create_op_dict(op):
d = {
op+'_pd_ref' : "reference/"+op+"_cutbased/pd_ref#0",
op+'_fa_ref' : "reference/"+op+"_cutbased/fa_ref#0",
op+'_sp_ref' : "reference/"+op+"_cutbased/sp_ref",
op+'_pd_val' : "reference/"+op+"_cutbased/pd_val#0",
op+'_fa_val' : "reference/"+op+"_cutbased/fa_val#0",
op+'_sp_val' : "reference/"+op+"_cutbased/sp_val",
op+'_pd_op' : "reference/"+op+"_cutbased/pd_op#0",
op+'_fa_op' : "reference/"+op+"_cutbased/fa_op#0",
op+'_sp_op' : "reference/"+op+"_cutbased/sp_op",
# Counts
op+'_pd_ref_passed' : "reference/"+op+"_cutbased/pd_ref#1",
op+'_fa_ref_passed' : "reference/"+op+"_cutbased/fa_ref#1",
op+'_pd_ref_total' : "reference/"+op+"_cutbased/pd_ref#2",
op+'_fa_ref_total' : "reference/"+op+"_cutbased/fa_ref#2",
op+'_pd_val_passed' : "reference/"+op+"_cutbased/pd_val#1",
op+'_fa_val_passed' : "reference/"+op+"_cutbased/fa_val#1",
op+'_pd_val_total' : "reference/"+op+"_cutbased/pd_val#2",
op+'_fa_val_total' : "reference/"+op+"_cutbased/fa_val#2",
op+'_pd_op_passed' : "reference/"+op+"_cutbased/pd_op#1",
op+'_fa_op_passed' : "reference/"+op+"_cutbased/fa_op#1",
op+'_pd_op_total' : "reference/"+op+"_cutbased/pd_op#2",
op+'_fa_op_total' : "reference/"+op+"_cutbased/fa_op#2",
}
return d
tuned_info = collections.OrderedDict( {
# validation
"max_sp_val" : 'summary/max_sp_val',
"max_sp_pd_val" : 'summary/max_sp_pd_val#0',
"max_sp_fa_val" : 'summary/max_sp_fa_val#0',
# Operation
"max_sp_op" : 'summary/max_sp_op',
"max_sp_pd_op" : 'summary/max_sp_pd_op#0',
"max_sp_fa_op" : 'summary/max_sp_fa_op#0',
} )
tuned_info.update(create_op_dict('tight'))
tuned_info.update(create_op_dict('medium'))
tuned_info.update(create_op_dict('loose'))
tuned_info.update(create_op_dict('vloose'))
etbins = [4, 7, 10, 15]
etabins = [0.0, 0.8, 1.37, 1.54, 2.37, 2.47]
tunes_path = "/home/natmourajr/Workspace/CERN/CERN-ATLAS-Qualify/tunings"
analysis_path = "/home/natmourajr/Workspace/CERN/CERN-ATLAS-Qualify/tunings"
###Output
_____no_output_____
###Markdown
Initialize the crossval_table object. In this step we initialize the crossval_table object and fill it with data from our training.
###Code
m_cv = crossval_table( tuned_info, etbins = etbins , etabins = etabins )
#m_cv.fill( os.path.join(tunes_path, 'v1/r0/*/*/*pic.gz'), 'v1.r0')
m_cv.fill( os.path.join(tunes_path, 'v1/r1/*/*/*.pic.gz'), 'v1.r1')
best_inits = m_cv.filter_inits("max_sp_val")
print(len(best_inits))
best_inits.head()
n_min, n_max = 2, 20
model_add_tag = { idx : '.mlp%i' %(neuron) for idx, neuron in enumerate(range(n_min, n_max +1))}
# add a sufix in train_tag
best_inits.train_tag = best_inits.train_tag + best_inits.model_idx.replace(model_add_tag)
best_inits.model_idx.unique()
10*len(best_inits.model_idx.unique())*15
best_inits.head()
# since take a long time to open those files let's save into a .csv
print(analysis_path)
best_inits.to_csv(os.path.join(analysis_path, 'v1/r1/best_inits.csv'))
print(analysis_path)
r1_path = 'v1/r1'
map_key_dict ={
'max_sp_val' : (r'$SP_{max}$ (Test)', 'sp'),
'max_sp_pd_val' : (r'$P_D$ (Test)', 'pd'),
'max_sp_fa_val' : (r'$F_A$ (Test)', 'fa'),
'auc_val' : (r'AUC (Test)', 'auc'),
}
from kolmov.utils.constants import str_etbins_jpsiee, str_etabins
# use a simple function to make it easier to plot all of the needed measures
def create_cool_catplot(df, key, kind, mapped_key, output_name, tuning_flag, tuning_folder, list_of_neuros=None):
# create the box plot.
# rename the columns names.
# map the model idx into real # neurons.
if list_of_neuros is None:
list_of_neuros = range(2, 20+1)
sns.catplot(data=(df
.replace({'model_idx' : {i : n for i, n in zip(range(0,df.model_idx.max()+1),
range(2,20+1))},
'et_bin' : {i : str_etbins_jpsiee[i] for i in range(3)},
'eta_bin' : {i : str_etabins[i] for i in range(5)}})
.rename({'model_idx' : '# Neurons',
'et_bin' : r'$E_T$',
'eta_bin' : r'$\eta$',
key : mapped_key},
axis=1)), x='# Neurons',
y=mapped_key, col=r'$\eta$',
row=r'$E_T$', kind=kind, sharey=False,
)
plt.tight_layout()
plt.savefig(os.path.join(analysis_path, '%s/plots/%s_plot_%s_%s.png' %(tuning_folder, kind, output_name, tuning_flag)), dpi=150, facecolor='white')
plt.close()
def create_cool_scatterplot(df, key1, key2, mapped_key1, mapped_key2, output_name, tuning_flag, tuning_folder):
    sns.relplot(data=(df.replace({'model_idx' : {i : n for i, n in zip(df.model_idx.unique(), [2, 5, 10, 15, 20])},
'et_bin' : {i : str_etbins_jpsiee[i] for i in range(3)},
'eta_bin' : {i : str_etabins[i] for i in range(5)}})
.rename({'model_idx' : '# Neurons',
'et_bin' : r'$E_T$',
'eta_bin' : r'$\eta$',
key1 : mapped_key1,
key2 : mapped_key2}, axis=1)),
x=mapped_key1, y=mapped_key2,
palette=['red', 'orange', 'green'], style='# Neurons',
hue='# Neurons', row=r'$E_T$', col=r'$\eta$', facet_kws=dict(sharex=False, sharey=False))
plt.tight_layout()
plt.savefig(os.path.join(analysis_path, '%s/plots/scatter_plot_%s_%s.png' %(tuning_folder, output_name, tuning_flag)), dpi=150, facecolor='white')
plt.close()
best_inits.head()
best_inits[best_inits.train_tag.str.contains('v1.r1')].head()
best_inits[best_inits.train_tag.str.contains('v1.r1')].shape
15*10*best_inits.model_idx.nunique()
ikey = 'max_sp_val'
map_k, o_name = map_key_dict[ikey]
for ikind in ['box', 'violin', 'boxen']:
create_cool_catplot(df=best_inits[best_inits.train_tag.str.contains('v1.r1')], key=ikey, mapped_key=map_k,
kind=ikind, output_name=o_name, tuning_flag='v1.r1.all_neurons', tuning_folder=r1_path)
# select some models to filter
selected_models = ['v1.r1.mlp%i' %(ineuron) for ineuron in [2, 5, 10, 15, 20]]
print(selected_models)
best_inits[best_inits.train_tag.isin(selected_models)].train_tag.unique()
for ikey in map_key_dict.keys():
map_k, o_name = map_key_dict[ikey]
for ikind in ['box', 'violin', 'boxen']:
create_cool_catplot(df=best_inits[best_inits.train_tag.isin(selected_models)], key=ikey, mapped_key=map_k,
kind=ikind, output_name=o_name, tuning_flag='v1.r1.selected_neurons', tuning_folder=r1_path)
###Output
_____no_output_____
###Markdown
Filter the initializations and get the best sort. Getting the best initialization in each sort, and then the best sort for each model configuration, is easy since we are using pandas.
###Code
for iet in best_inits['et_bin'].unique():
iet_mask = best_inits['et_bin'] == iet
for ieta in best_inits['eta_bin'].unique():
ieta_mask = best_inits['eta_bin'] == ieta
for tag, midx in zip(best_inits['train_tag'].unique(), best_inits['model_idx'].unique()):
model_mask = best_inits['model_idx'] == midx
tag_mask = best_inits['train_tag'] == tag
full_mask = iet_mask & ieta_mask & model_mask & tag_mask
print(iet, ieta, tag, midx, best_inits.loc[full_mask].shape)
best_inits[(best_inits.train_tag == 'v1.r0.mlp2') & (best_inits.et_bin == 2.) & (best_inits.eta_bin == 0.)]
###Output
_____no_output_____
###Markdown
After filtering the sorts we should end up with only one entry per model configuration in each bin, since each combination keeps only its best sort.
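A quick way to convince yourself of this, once the next cell has produced `best_sorts`, is to count rows per configuration; a small sketch using the columns shown above:

```python
# Sanity check: after filter_sorts there should be exactly one row per
# (train_tag, et_bin, eta_bin) combination.
counts = best_sorts.groupby(['train_tag', 'et_bin', 'eta_bin']).size()
assert (counts == 1).all(), 'some configurations still have more than one sort'
```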
###Code
best_sorts = m_cv.filter_sorts( best_inits , 'max_sp_op')
print(len(best_sorts))
best_sorts
###Output
_____no_output_____
###Markdown
Get the cross-validation table
###Code
for op in ['tight','medium','loose','vloose']:
m_cv.dump_beamer_table( best_inits , [op], 'v1_r1_'+op,
title = op+' Tunings (v1-r1)',
tags = ['v1.r1.mlp2', 'v1.r1.mlp5', 'v1.r1.mlp10', 'v1.r1.mlp15', 'v1.r1.mlp20']
)
m_cv.integrate(best_inits, 'v1.r1.mlp2')
###Output
_____no_output_____
###Markdown
Plot monitoring training curves
###Code
m_cv.plot_training_curves( best_inits, best_sorts , 'monitoring_curves' )
###Output
_____no_output_____
###Markdown
Plot ROC Curves
###Code
m_cv.plot_roc_curves( best_sorts, ['v1.r1.mlp2', 'v1.r1.mlp5', 'v1.r1.mlp10', 'v1.r1.mlp15', 'v1.r1.mlp20'],
['v1.r1.mlp2', 'v1.r1.mlp5', 'v1.r1.mlp10', 'v1.r1.mlp15', 'v1.r1.mlp20'],
'roc_curve.png', display=True,
colors=get_color_fader('blue','red',5),
et_bin=2, eta_bin=0, xmin=-0.005, xmax=.25, ymin=0.9, ymax=1.005,
fontsize=20,
figsize=(7,7))
###Output
_____no_output_____ |
Working-with-Dataloggers.ipynb | ###Markdown
Working with data loggers *Olm* is designed to work seamlessly with data from data loggers. Most functions that conduct geochemical calculations can be run with arrays or *Pandas* `Series` objects as inputs. There are also a variety of convenience functions for reading datalogger csv files and postprocessing those data. The `olm.loggers` package `olm.loggers` provides a set of modules (called toolkits) for importing data from common data logger formats. All of these toolkits use the *Pandas* `read_csv()` function to read data into a Pandas `DataFrame`. While it is pretty easy to use customize `read_csv()` yourself to read in the desired datalogger file, these convenience functions make it easier for common formats.
###Code
#Check whether we are running on Colab or locally.
try:
import google.colab
IN_COLAB = True
base_path = 'https://raw.githubusercontent.com/CovingtonResearchGroup/olm-examples/main/'
except:
IN_COLAB = False
base_path = './'
print('Base working path for data files is',base_path)
#If olm isn't already installed (or if you're running in Colab), then run this cell of code.
!pip install olm-karst
#We will run in pylab mode, to import plotting functions.
%pylab inline
###Output
_____no_output_____
###Markdown
Reading in logger data
###Code
#Here we use an example of reading data from an Onset HOBO data logger
from olm.loggers.HoboToolkit import read_hobo_csv
#Here we import temperature data from three HOBO loggers
T1_jan = read_hobo_csv(base_path + 'data/2015-01-21/BS-T1.csv')
T3_jan = read_hobo_csv(base_path + 'data/2015-01-21/BS-T3.csv')
T5_jan = read_hobo_csv(base_path + 'data/2015-01-21/BS-T5.csv')
#Plot the data as an example to compare the three timeseries
T1_jan.Temp.plot()
T3_jan.Temp.plot()
T5_jan.Temp.plot()
legend(['T1','T3','T5'])
ylabel('Temperature ($^\circ$C)')
#Now we read in a couple more months (if we were clever, we'd use a for loop instead)
T1_feb = read_hobo_csv(base_path + 'data/2015-02-19/BS-T1.csv')
T3_feb = read_hobo_csv(base_path + 'data/2015-02-19/BS-T3.csv')
T5_feb = read_hobo_csv(base_path + 'data/2015-02-19/BS-T5.csv')
T1_mar = read_hobo_csv(base_path + 'data/2015-03-16/BS-T1.csv')
T3_mar = read_hobo_csv(base_path + 'data/2015-03-16/BS-T3.csv')
T5_mar = read_hobo_csv(base_path + 'data/2015-03-16/BS-T5.csv')
###Output
_____no_output_____
###Markdown
The `olm.loggers.loggerScripts` module Concatenating logger data Currently, we have separate *Pandas* DataFrames for each CSV file. We can use [`pandas.concat()`](https://pandas.pydata.org/docs/reference/api/pandas.concat.html) to put together the total record for each logger.
###Code
from pandas import concat
#To concatenate the DataFrames for each logger, we just provide a list of the individual DataFrames
#There are also many options described in the Pandas docs.
T1 = concat([T1_jan, T1_feb, T1_mar])
T3 = concat([T3_jan, T3_feb, T3_mar])
T5 = concat([T5_jan, T5_feb, T5_mar])
T1.Temp.plot()
T3.Temp.plot()
T5.Temp.plot()
legend(['T1','T3','T5'])
ylabel('Temperature ($^\circ$C)')
###Output
_____no_output_____
###Markdown
Joining loggers into a single `DataFrame` Depending on how you have set them up, loggers will not always have the same timestamps. See the example below for T1 and T3, whose timestamps do not align. This causes problems when merging into a single `DataFrame`.
###Code
print('Data from T1')
print(T1_jan.head())
print('Data from T3')
print(T3_jan.head())
###Output
_____no_output_____
###Markdown
Join and resample loggers olm.loggers.loggerScripts contains a [`joinAndResampleLoggers()`](https://olm.readthedocs.io/en/master/olm.loggers.html#olm.loggers.loggerScripts.joinAndResampleLoggers) function that enables simultaneous joining of loggers and resampling onto a common timestamp index.
###Code
from olm.loggers.loggerScripts import joinAndResampleLoggers
#Provide a list of the loggers to be joined as well as the interval
# See list of frequency strings (e.g. min) that can be used in intervals here:
# https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects
# Since the three loggers contain a duplicate column name (Temp), we have to provide suffixes
# for each one to differentiate the column names.
df = joinAndResampleLoggers([T1,T3,T5], '5min', suffixes=['T1', 'T3', 'T5'])
df
df.plot()
ylabel('Temperature ($^\circ$C)')
###Output
_____no_output_____
###Markdown
Applying linear corrections Sometimes data logger values drift due to bio-fouling or other factors (particularly conductivity loggers). This drift is often removed using a linear correction based on spot measurements at the site. [`olm.loggers.loggerScripts.linear_correction()`](https://olm.readthedocs.io/en/master/olm.loggers.htmlolm.loggers.loggerScripts.linear_correction) provides a function to make such corrections.
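Conceptually, a linear drift correction rescales the logged series so that it matches the spot measurements, with the correction interpolated linearly in time between spot measurements. The sketch below shows one common way to do this by hand; it only illustrates the idea and is not necessarily the exact algorithm used by `linear_correction()`.

```python
def manual_linear_correction(series, spot):
    """Rescale `series` so it passes through the spot measurements,
    interpolating the correction factor linearly in time (illustrative only)."""
    # correction factor at each spot-measurement time
    factors = spot / series.reindex(spot.index, method='nearest').values
    # spread the factors onto the full logger time index
    factors = factors.reindex(series.index.union(factors.index))
    factors = factors.interpolate(method='time').ffill().bfill()
    return series * factors.reindex(series.index)
```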
###Code
#Read in some raw data logger data from a Campbell .dat file
from olm.loggers.CampbellToolkit import read_dat
Langle = read_dat(base_path + 'data/CR800_Langle_Water.dat')
Langle_cond_datalogger = 1000*Langle.Cond_Avg #Convert to muS/cm from mS/cm
#Read in the spot measurements from a csv file
from pandas import read_csv
spot_meas = read_csv(base_path + 'data/Langle_data_fixed.csv', parse_dates=[[0,1]], index_col=0, na_values='NaN')
cond_spot = spot_meas['cond']
from olm.loggers.loggerScripts import linear_correction
#To correct, simply provide the timeseries DataFrame and spot measurement DataFrame
#Both must have datetime indicies.
cond_corr = linear_correction(Langle_cond_datalogger, cond_spot)
title('Comparing the raw and corrected series')
Langle_cond_datalogger.plot()
cond_corr.plot()
cond_spot.plot(style='ko')
legend(['Raw SpC', 'Corrected SpC', 'Spot measurements'], loc='lower right')
ylabel('SpC ($\mu S/cm$)');
###Output
_____no_output_____
###Markdown
Shifting a logger's timestamp Sometimes the datalogger clock gets reset to the wrong value (wrong timezone, or some bizarre time in the distant future or past). As long as the offsets are correct between timestamps, it is easy to correct this shift. While this can be corrected easily with a few lines of code using Pandas functions, this happens frequently enough that I didn't want to have to reinvent the code each time. [`olm.loggers.loggerScripts.shiftLogger()` ](https://olm.readthedocs.io/en/master/olm.loggers.htmlolm.loggers.loggerScripts.shiftLogger) will do this shift for us.
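For reference, the underlying fix is just a constant offset added to the index; a minimal pandas sketch (where `df_logger` and the start time are placeholders) looks like this.

```python
import pandas as pd

# Shift a logger DataFrame so that its DatetimeIndex starts at a known time (sketch).
correct_start = pd.to_datetime('2016-10-05 23:11')
offset = correct_start - df_logger.index[0]
df_logger.index = df_logger.index + offset
```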
###Code
BS4_cond = read_hobo_csv(base_path + 'data/2016-10-20/BS4-Cond.csv')
BS4_cond
###Output
_____no_output_____
###Markdown
These timestamps are in the distant future and can't be right. Actually the RTC chip in the HOBO Shuttle had gotten damaged by water, causing it to reset the loggers to this strange time. No worries, I know from my field notes that I downloaded this logger and restarted it last at 23:11 on 10/5/2016 UTC.
###Code
from olm.loggers.loggerScripts import shiftLogger
#Just provide the DataFrame to shift and the starting timestamp desired.
#One can also shift to end at a specific time using align_at_start=False.
BS4_cond_corrected = shiftLogger(BS4_cond, '10/05/2016 23:11',align_at_start=True)
BS4_cond_corrected
###Output
_____no_output_____ |
ml_lesson_notebook_6.ipynb | ###Markdown
- You can run the code in each of the cells below by placing the cursor on it and pressing Shift + Enter. - In general, run the cells in order from the top. Loading the libraries
###Code
from numpy import argmax
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.utils import plot_model
###Output
_____no_output_____
###Markdown
Load the data
###Code
# Load the MNIST data (an image dataset of 60,000 handwritten-digit training images and 10,000 test images)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Display the shape (number of elements per dimension) of the loaded data
print("Shape of x_train (training feature data): ")
print(x_train.shape)
print("Shape of y_train (training target data): ")
print(y_train.shape)
print("Shape of x_test (test feature data): ")
print(x_test.shape)
print("Shape of y_test (test target data): ")
print(y_test.shape)
# Take one sample from the loaded training data and display its contents
print("One sample from x_train (a 28 x 28 image represented as a 2D array): ")
print(x_train[0])
print("Ten samples from y_train (each holds the correct label, 0 - 9): ")
print(y_train[:10])
# Display the MNIST data as images
print("Displaying MNIST data as images")
W = 8 # number of images per row
H = 4 # number of images per column
fig = plt.figure(figsize=(W, H))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1.0, hspace=0.05, wspace=0.05)
for i in range(W*H):
ax = fig.add_subplot(H, W, i + 1, xticks=[], yticks=[])
ax.imshow(x_train[i].reshape((28, 28)), cmap='gray')
plt.show()
# 学習させやすいようにデータを整形する(次元数を 28x28 の二次元から 784 の一次元に削減し、各値を 0.0 - 1.0 の間に収まるように調整)
# (配列のひとつひとつの要素が、画像の画素 1つに相当する)
x_train = x_train.reshape(-1, 784)/255.0
x_test = x_test.reshape(-1, 784)/255.0
# 正解データは One-Hot ベクトルの形式に変換
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# Show the number of elements along each dimension after preprocessing
print("Shape of x_train (training explanatory variables, after preprocessing): ")
print(x_train.shape)
print("Shape of y_train (training target variables, after preprocessing): ")
print(y_train.shape)
print("Shape of x_test (test explanatory variables, after preprocessing): ")
print(x_test.shape)
print("Shape of y_test (test target variables, after preprocessing): ")
print(y_test.shape)
print("One sample from x_train (after preprocessing): ")
print(x_train[0])
print("Ten samples from y_train (after preprocessing): ")
print(y_train[:10])
###Output
_____no_output_____
###Markdown
Solving the classification problem with a random forest
###Code
# Implement a random forest model using RandomForestClassifier
# Hyperparameters
n_estimators = 10 # number of decision trees; increasing it may improve accuracy.
max_depth = 8 # maximum depth of each tree; larger values tend to improve accuracy but risk overfitting.
criterion = 'gini' # split criterion. "gini": Gini impurity, "entropy": entropy
min_samples_leaf = 8 # minimum number of samples that must remain in each leaf; smaller values tend to improve accuracy but risk overfitting.
min_samples_split = 4 # minimum number of samples required to split a node; smaller values tend to improve accuracy but risk overfitting.
classifier = RandomForestClassifier(n_estimators=n_estimators, max_depth=max_depth, criterion=criterion,
min_samples_leaf=min_samples_leaf, min_samples_split=min_samples_split, random_state=1234)
classifier.fit(x_train, y_train)
# Show the accuracy on the training data
print('Accuracy(x_train) = {:.3f}%'.format(100 * classifier.score(x_train, y_train)))
# Show the accuracy on the test data
print('Accuracy(x_test) = {:.3f}%'.format(100 * classifier.score(x_test, y_test)))
# Predictions
y_test_predicted = classifier.predict(x_test)
# Show the test images next to their predicted labels
print("Predictions for the first 10 samples of x_test (converted back from the one-hot representation): ")
for i in range(10):
print(argmax(y_test_predicted[i]))
print("The first 10 images of x_test: ")
fig = plt.figure(figsize=(10, 4))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1.0, hspace=0.05, wspace=0.05)
for i in range(10):
ax = fig.add_subplot(H, W, i + 1, xticks=[], yticks=[])
ax.imshow(x_test[i].reshape((28, 28)), cmap='gray')
plt.show()
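# --- Optional: a minimal hyperparameter search sketch (an addition, not part of the original lesson) ---
# The discussion below asks how accuracy changes with the hyperparameters; one hedged way to
# explore that is a small grid search. The parameter grid here is an illustrative assumption.
from sklearn.model_selection import GridSearchCV
param_grid = {'n_estimators': [10, 50], 'max_depth': [8, 16], 'min_samples_leaf': [1, 8]}
grid_search = GridSearchCV(RandomForestClassifier(random_state=1234), param_grid, cv=3, n_jobs=-1)
# Uncomment to run (it can take a while on the full MNIST training set):
# grid_search.fit(x_train, y_train)
# print(grid_search.best_params_, grid_search.best_score_)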
###Output
_____no_output_____
###Markdown
Thoughts on the random forest model- How does the accuracy change when you modify the random forest hyperparameters ( n_estimators, max_depth, criterion, min_samples_split, min_samples_leaf )?- Try tuning these hyperparameters so that the accuracy on the test data ( x_test ) is maximized (see the grid-search sketch at the end of the code cell above). Solving the classification problem with an MLP (Multi-Layer Perceptron)
###Code
# Build the MLP model
model = Sequential()
# Input layer ( 784 -> 256 )
model.add(Dense(units=256, input_shape=(784,)))
# Use the relu function as the activation of the input layer (sigmoid etc. could also be used, but they risk vanishing gradients during backpropagation)
model.add(Activation('relu'))
# Hidden layer ( 256 -> 100; as a rule of thumb, build the model so that the number of neurons shrinks from layer to layer )
model.add(Dense(units=100))
# Use the relu function as the activation of the hidden layer
model.add(Activation('relu'))
# (For example, how does the prediction accuracy change if you uncomment the lines below to add a second hidden layer ( 100 -> 50 )?)
#model.add(Dense(units=50))
#model.add(Activation('relu'))
# Output layer ( 100 -> 10 )
# The output layer size must match the number of classes to be predicted (10 in this case)
model.add(Dense(units=10))
# Use the softmax function as the activation of the output layer
# Passing the outputs through softmax makes them sum to 1, so they can be interpreted as the probability of each class
model.add(Activation('softmax'))
# Hyperparameters:
# loss: the loss function. Since this is a classification problem, use categorical_crossentropy (cross-entropy error)
# optimizer: the optimization algorithm. (See: https://keras.io/ja/optimizers/ )
optimizer = 'sgd' # 'rmsprop', 'adagrad', 'adadelta', 'adam', etc. See also "Reference: Optimizers (optimization algorithms)" below
# metrics: the evaluation metric. Here we use accuracy (the rate of correct answers)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
# Visualize the network structure of the model
plot_model(model, show_shapes=True, to_file='model.png')
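# --- Optional: using an optimizer object instead of a string (a hedged sketch, not part of the original lesson) ---
# An optimizer object lets you set the learning rate explicitly; the value 0.001 below is an
# illustrative assumption. With recent tensorflow.keras versions this would look like:
# from tensorflow.keras.optimizers import Adam
# model.compile(loss='categorical_crossentropy', optimizer=Adam(learning_rate=0.001), metrics=['accuracy'])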
###Output
_____no_output_____
###Markdown
Reference: Optimizers (optimization algorithms)An intuition for how optimizers behave- Reference: https://postd.cc/optimizing-gradient-descent/- Optimizers such as Adadelta and RMSprop reach a good solution quickly, whereas SGD and similar methods appear to struggle to escape local minima
###Code
# Train using the MLP model built above
# batch_size: the gradient is updated after every batch of this many samples (smaller values update the gradient more often but make training slower)
batch_size = 1000
# epochs: number of training passes over the data
epochs = 10
# verbose: verbosity level of the training progress output
# validation_split: fraction of the data set aside for validation
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_split=0.1)
# Evaluate performance
score_train = model.evaluate(x_train, y_train, verbose=0)
score_test = model.evaluate(x_test, y_test, verbose=0)
# Show the accuracy on the training data
print('Accuracy(x_train) = {:.3f}%'.format(100 * score_train[1]))
# Show the accuracy on the test data
print('Accuracy(x_test) = {:.3f}%'.format(100 * score_test[1]))
###Output
_____no_output_____ |
7-exceptions/handling-exception.ipynb | ###Markdown
Handling ExceptionsThis section shows how to handle exception to recover from runtime errors. 1 The Best Error Handling StrategyBecause an exception represents a runtime error, the best error handling strategy is to avoid exceptions. If you design a program carefully, you can avoid many exceptions thus there are no need to handle them.For example, to avoid `ZeroDivisionError`, check the divisor first before a division computation. If the divisor is zero, display an error message and ask the user to correct the problem.To avoid `FileNotFoundError`, check the existence of a file before read from it. Following is a code example.
###Code
import os
READ_MODE = 'r'
filename = input('Please type the filename: ')
if os.path.isfile(filename):
with open(filename, READ_MODE) as file:
print(f'Process {filename}')
# process the file content here
else:
print(f'{filename} is not a valid file, please check that you input the correct filename.')
###Output
_____no_output_____
###Markdown
You should avoid runtime errors if possible. Sometimes it is hard to check a runtime error or to display user-friendly error message, you can use the mechanisms introduced in the following sections. 2 Basic `try` StatementYou can use the `try` statement to handle exceptions. The basic `try` statement has a `try` clause and an `except` clause. Following is an example:
###Code
FILENAME = 'test.txt'
try:
with open(FILENAME) as file:
pass # process file data here
except OSError as error:
print(f'Unable to open file {FILENAME}. Error message: {error}')
print('After the hanlding code, program keeps running')
###Output
_____no_output_____
###Markdown
Below the `try:` clause, you can write statements in a code block that is protected by the caluse. If there is no exception in the try-clause code block, the except-cluase is skipped. If there is an exception raised, Python will check if the exception matches with the exception type specified in the `except ExceptionType as variable_name:` clause. If there is a match, the code block in the except-cluase will be executed. Otherwise, the exception is uncaught and Python will stop the execution and print an error message. From a user's perspective, the program crashes when there is an uncaught exception. A file cannot be opened for many reasons: not found, no permission, time out errors etc. The `OSError` can be used to catch these errors and display a user friendly message. If you don't need the error message, you can ignore the `as variable_name` in the except clause. The code will be the following:
###Code
FILENAME = 'test.txt'
try:
with open(FILENAME) as file:
pass # process file data here
except OSError:
print(f'Unable to open file {FILENAME}')
###Output
_____no_output_____
###Markdown
3 Multiple `except` ClauseIf the code block in a try-clause has many operations, it could raise many different exceptions. You can use multiple except-clause to catch different exceptions.
###Code
try:
# all statements in this block is protected
int("abc")
except OSError as error:
print(f'Unable to open file {FILENAME}. Error message: {error}')
except ValueError as error:
print(f'Value error message: {error}')
###Output
_____no_output_____
###Markdown
Agian, the `as variable_name` is optional if you don't want to access the error message. 4 Catch All ExceptionsIf you want to catch all exception, you can use `except:` without an exception type. You can use it alone or as the last of a sequence of except-clauses.
###Code
import sys
try:
# all statements in this block is protected
print('test exceptions')
1 / 0 # raise an exception
except OSError as error:
print(f'Unable to open file {FILENAME}. Error message: {error}')
except ValueError as error:
print(f'Value error message: {error}')
except:
error = sys.exc_info()[0] # to get error info
print(f'Unexpected error: {error}')
###Output
_____no_output_____
###Markdown
The following code doesn't get the erro info and only has a catch-all except-cluase.
###Code
try:
# all statements in this block is protected
print('test exceptions')
1 / 0 # raise an exception
except:
print(f'Unexpected exception, blame its developer.')
###Output
_____no_output_____
###Markdown
5 The `else` and `finally` ClauseThe try statement can have two optional clauses. An optional `else` cluase and an optional `finally` clause. Python executes the `else` clause code block if there is no exception raised. The `finally` clause is always executed regardless there is an exception or not. It is often used to run clean up code.
###Code
try:
print('normal code, no exception.')
1 / 0
except:
print('skipped if no exception.')
else:
print('executed when there is no exception.')
finally:
print('always execute the finally code block.')
###Output
_____no_output_____ |
prepare_text_with_tf.keras.ipynb | ###Markdown
Prepare Text Data With `tf.keras`@ Sani Kamal, 2019 Split Words with `text_to_word_sequence` `tf.keras` provides the text to word sequence() function that you can use to split text into a list of words.
###Code
import tensorflow as tf
from tensorflow.keras.preprocessing.text import text_to_word_sequence
# define the document
text = 'Poetry is often separated into lines on a page, in a process known as lineation.'
# tokenize the document
result = text_to_word_sequence(text)
print(result)
###Output
['poetry', 'is', 'often', 'separated', 'into', 'lines', 'on', 'a', 'page', 'in', 'a', 'process', 'known', 'as', 'lineation']
###Markdown
Encoding with `one_hot`
###Code
from tensorflow.keras.preprocessing.text import one_hot
from tensorflow.keras.preprocessing.text import text_to_word_sequence
# define the document
text = 'Poetry is often separated into lines on a page, in a process known as lineation.'
# estimate the size of the vocabulary
words = set(text_to_word_sequence(text))
vocab_size = len(words)
print(vocab_size)
# integer encode the document
result = one_hot(text, round(vocab_size*1.3))
print(result)
###Output
14
[1, 11, 17, 5, 1, 4, 13, 9, 12, 17, 9, 8, 14, 8, 1]
###Markdown
Hash Encoding with `hashing_trick``tf.keras` provides the `hashing_trick()` function that tokenizes and then integer encodes thedocument, just like the `one_hot()` function. It provides more flexibility, allowing you to specify the hash function as either hash (the default) or other hash functions such as the built in `md5` function or custom function.
###Code
from tensorflow.keras.preprocessing.text import hashing_trick
from tensorflow.keras.preprocessing.text import text_to_word_sequence
# define the document
text = 'Poetry is often separated into lines on a page, in a process known as lineation.'
# estimate the size of the vocabulary
words = set(text_to_word_sequence(text))
vocab_size = len(words)
print(vocab_size)
# integer encode the document
result = hashing_trick(text, round(vocab_size*1.3), hash_function= 'md5')
print(result)
###Output
14
[11, 6, 5, 8, 15, 1, 14, 12, 17, 1, 12, 16, 5, 3, 10]
###Markdown
`Tokenizer` API`tf.keras` provides a more sophisticated API for preparing text that can be fit and reused to prepare multiple text documents. This may be the preferred approach for large projects. `tf.keras` provides the `Tokenizer` class for preparing text documents for deep learning. The `Tokenizer` must be constructed and then fit on either raw text documents or integer encoded text documents.
###Code
from tensorflow.keras.preprocessing.text import Tokenizer
# define 5 documents
docs = [ ' Well done! ' ,
' Good work ' ,
' Great effort ' ,
' nice work ' ,
' Excellent! ' ]
# create the tokenizer
t = Tokenizer()
# fit the tokenizer on the documents
t.fit_on_texts(docs)
###Output
_____no_output_____
###Markdown
Once fit the `Tokenizer` provides 4 attributes that you can use to query what has been learned about your documents:- `word_counts`: A dictionary of words and their counts.- `word_docs`: An integer count of the total number of documents that were used to fit the Tokenizer.- `word_index`: A dictionary of words and their uniquely assigned integers.- `document_count`: A dictionary of words and how many documents each appeared in.
###Code
# summarize what was learned
print(t.word_counts)
print(t.document_count)
print(t.word_index)
print(t.word_docs)
# binary: Whether or not each word is present in the document. This is the default.
encoded_docs = t.texts_to_matrix(docs)
print(encoded_docs)
# tfidf: The Text Frequency-Inverse DocumentFrequency (TF-IDF)
# scoring for each wordin the document.
encoded_docs = t.texts_to_matrix(docs, mode= 'tfidf' )
print(encoded_docs)
# freq: The frequency of each word as a ratio of words within each document.
encoded_docs = t.texts_to_matrix(docs, mode= 'freq' )
print(encoded_docs)
# count: The count of each word in the document.
# integer encode documents
encoded_docs = t.texts_to_matrix(docs, mode= 'count' )
print(encoded_docs)
###Output
[[0. 0. 1. 1. 0. 0. 0. 0. 0.]
[0. 1. 0. 0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 1. 1. 0. 0.]
[0. 1. 0. 0. 0. 0. 0. 1. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1.]]
|
scripts/Y2019M09D04_RH_Ingest_MAPSPAM_GBQ_V01.ipynb | ###Markdown
Download all csv files except production (it has a different schema) from http://mapspam.info/data/ . Unzip and upload to Google Cloud Storage. Rename: spam2010v1r0_global_yield.csv -> spam2010V1r0_global_yield.csv and spam2010v1r0_global_val_prod_agg.csv -> spam2010V1r0_global_val_prod_agg.csv (a rename sketch is included after the copy step below).
###Code
import time, datetime, sys
dateString = time.strftime("Y%YM%mD%d")
timeString = time.strftime("UTC %H:%M")
start = datetime.datetime.now()
print(dateString,timeString)
sys.version
import os
import subprocess
import pandas as pd
from tqdm import tqdm
from google.cloud import bigquery
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/.google.json"
os.environ["GOOGLE_CLOUD_PROJECT"] = "aqueduct30"
client = bigquery.Client(project=BQ_PROJECT_ID)
!rm -r {ec2_input_path}
!rm -r {ec2_output_path}
!mkdir -p {ec2_input_path}
!mkdir -p {ec2_output_path}
!gsutil -m cp -r {gcs_input_path}/* {ec2_input_path}
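# The two files mentioned at the top of this notebook need their lowercase "v" renamed to "V".
# A hedged sketch of doing that in Python; it assumes the files sit directly under ec2_input_path,
# which - like the other path/config variables used below (gcs_*, BQ_PROJECT_ID, TESTING,
# OUTPUT_VERSION, MAPSPAM_CROPNAMES) - is defined in an earlier configuration cell not shown here:
# for old_name in ["spam2010v1r0_global_yield.csv", "spam2010v1r0_global_val_prod_agg.csv"]:
#     new_name = old_name.replace("spam2010v1r0", "spam2010V1r0")
#     os.rename("{}/{}".format(ec2_input_path, old_name), "{}/{}".format(ec2_input_path, new_name))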
variable_dict = {"yield":{"variable_short":"yield","shorthand":"Y"},
"production":{"variable_short":"prod","shorthand":"P"},
"harvested_area":{"variable_short":"harv_area","shorthand":"H"},
"physical_area":{"variable_short":"phys_area","shorthand":"A"}}
#"value_of_production":{"variable_short":"val_prod_agg","shorthand":"V_agg"}}
technologies = ["A","I","H","L","S","R"] # see metadata
def load_df(variable_short,technology):
folder_name = "spam2010V1r0_global_{}.csv".format(variable_short)
filename = "spam2010V1r0_global_{}_T{}.csv".format(shorthand,technology)
input_path = "{}/{}/{}".format(ec2_input_path,folder_name,filename)
df_raw = pd.read_csv(input_path,encoding="iso-8859-1")
if TESTING:
df = df_raw[0:100]
else:
df = df_raw
return df
def rename_crop_columns(df,technology):
"""
The csv files in Mapspam have the technology in the column names. The technology
is also stored in the column tech_type and therefore redundant. It prevents
vertically stacking the data.
Args:
df(dataframe): Dataframe with old crop columns.
technology(string):technology.
Returns:
df_renamed: Dataframe with renames columns
"""
df_cropnames = pd.read_csv(MAPSPAM_CROPNAMES)
new_crop_names = list(df_cropnames["SPAM_name"])
old_crop_names = list(map(lambda x: x+"_{}".format(technology.lower()), new_crop_names))
dictje = dict(zip(old_crop_names, new_crop_names))
df_renamed = df.rename(columns=dictje)
return df_renamed
for technology in tqdm(technologies):
print(technology)
for variable, values in variable_dict.items():
print(variable)
variable_short = values["variable_short"]
shorthand = values["shorthand"]
df = load_df(variable_short,technology)
df_renamed = rename_crop_columns(df,technology)
filename = "spam2010V1r0_global_{}_T{}.csv".format(shorthand,technology)
output_path = "{}/{}".format(ec2_output_path,filename)
df_renamed.to_csv(path_or_buf=output_path,
encoding="UTF-8")
gbq_dataset_name = "MAPSPAM_2010v1r0"
table_name = technology
destination_table= "{}.output_v{:02.0f}".format(gbq_dataset_name,OUTPUT_VERSION)
df_renamed.to_gbq(project_id=BQ_PROJECT_ID,
destination_table=destination_table,
chunksize=100000,
if_exists="append")
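# Optional sanity check (a hedged sketch, not in the original notebook): read a few rows back
# from the table that was just appended to, using pandas-gbq. The query below is illustrative.
# sample = pd.read_gbq("SELECT * FROM {} LIMIT 5".format(destination_table),
#                      project_id=BQ_PROJECT_ID)
# print(sample.head())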
!gsutil -m cp -r {ec2_output_path} {gcs_output_path}
end = datetime.datetime.now()
elapsed = end - start
print(elapsed)
###Output
1:43:49.833856
|
Linear Regression Class/1-D Linear Regression.ipynb | ###Markdown
https://github.com/lazyprogrammer/machine_learning_examples/tree/master/linear_regression_class
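For reference, the closed-form least-squares solution used in the code below (a standard result, stated here for convenience) is, for the model $\hat{y} = ax + b$:
$$ a = \frac{\sum_i x_i y_i - \bar{y}\sum_i x_i}{\sum_i x_i^2 - \bar{x}\sum_i x_i}, \qquad b = \frac{\bar{y}\sum_i x_i^2 - \bar{x}\sum_i x_i y_i}{\sum_i x_i^2 - \bar{x}\sum_i x_i} $$
which is exactly what the `denominator`, `a` and `b` lines compute with NumPy dot products.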
###Code
import numpy as np
import matplotlib.pyplot as plt
# load the data
X = []
Y = []
for line in open('data_1d.csv'):
x, y = line.split(',')
X.append(float(x))
Y.append(float(y))
# Let's turn X and Y into Numpy arrays
X = np.array(X)
Y = np.array(Y)
# plot to see what it look like
plt.scatter(X, Y)
plt.show()
# apply the equations we learned to calculate a and b
denominator = X.dot(X) - X.mean() * X.sum()
a = ( X.dot(Y) - Y.mean()*X.sum() ) / denominator
b = ( Y.mean() * X.dot(X) - X.mean() * X.dot(Y) ) / denominator
# calculate the predicted Y
Yhat = a*X + b # equation of the fitted line
# plot it all
plt.scatter(X, Y)
plt.plot(X, Yhat)
plt.show()
plt.scatter(X, Y)
plt.plot(X, Yhat ,'k')
# calculate r-squared
d1 = Y - Yhat
d2 = Y - Y.mean()
r2 = 1 - d1.dot(d1) / d2.dot(d2)
print("the r-squared is:", r2)
# since r2 is close to 1 it is good
# if its close to 0 its bad
# if its negative its really bad, not predicting at all
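# Optional cross-check (an added sketch, not in the original tutorial): numpy's polyfit
# should recover essentially the same slope and intercept as the closed-form solution above.
a_check, b_check = np.polyfit(X, Y, 1)
print("np.polyfit slope/intercept:", a_check, b_check)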
###Output
the r-squared is: 0.9911838202977805
|
tutorials/plotly_built-in.ipynb | ###Markdown
Plotting Pandapower Networks using PlotlyThis tutorial shows you how to make interactive plots of pandapower networks using plotly (https://plot.ly/python/).The best way to get started is to get familiar with 3 built-in plots that correspond to:* a simple plot of a network (respect switch statuses by default)* voltage-levels plot - colores and labels network according to voltage levels* Power Flow results - a colormap plot where buses are colored according to voltage magnitudes and branches according to line/transformer loading.The following sample plots are with mv_oberrhein network from the pandapower.networks package:
###Code
from pandapower.plotting.plotly import simple_plotly
from pandapower.networks import mv_oberrhein
from pandapower import runpp
net = mv_oberrhein()
runpp(net)
###Output
_____no_output_____
###Markdown
Simple plotting A simple network plot which puts all network buses, lines, transformers and the external grid into separate labeled traces.Try some of the fancy plotly features from the upper-right corner:* zooming,* hover tool (position the cursor on a bus/line/trafo to get basic info),* selecting,* click on the legend to hide/show any of the legend elements.
###Code
simple_plotly(net)
###Output
_____no_output_____
###Markdown
Voltage levelsPlots a network colored and layered according to voltage levels.
###Code
from pandapower.plotting.plotly import vlevel_plotly
vlevel_plotly(net)
from pandapower.networks import create_cigre_network_hv
net = create_cigre_network_hv()
runpp(net)
vlevel_plotly(net)
###Output
_____no_output_____
###Markdown
Power Flow Results Results from `res_bus`, `res_line` and `res_trafo` can be effectively displayed using `pf_res_plotly`.Buses are colored according to the resulting voltage magnitude using a colormap in the range $[0.9,1.1]$. Lines and trafos are colored according to the resulting loading using a colormap in the range $[0,100]$. Positioning the cursor over a bus or line shows more details about each element.
###Code
from pandapower.plotting.plotly import pf_res_plotly
pf_res_plotly(net)
###Output
_____no_output_____
###Markdown
General Plotting featuresInteractive plots are built to share some general plotting features with static plots using [matplotlib](https://github.com/e2nIEE/pandapower/blob/master/tutorials/plotting_basic.ipynb). Plots without geodata available
###Code
net = mv_oberrhein()
runpp(net)
# delete the geocoordinates
net.bus_geodata.drop(net.bus_geodata.index, inplace=True)
net.line_geodata.drop(net.line_geodata.index, inplace=True)
simple_plotly(net)
###Output
_____no_output_____
###Markdown
Figure size and aspect ratio**Aspect ratio** (`aspectratio`) - default aspect ratio of a figure is set to `'auto'` which means keeping aspect ratio proportional to geodata. If `aspectratio=False` figure will be stretch according to window size.
###Code
net.bus_geodata.drop(net.bus_geodata.index, inplace=True)
net.line_geodata.drop(net.line_geodata.index, inplace=True)
pf_res_plotly(net, aspectratio=(1.3,1))
###Output
_____no_output_____
###Markdown
**Figure Size** (`figsize`) is set to 1 by default and is used only as a multiplier of the total plot size, so the real figure size is `aspectratio * figsize`.
###Code
net = mv_oberrhein()
simple_plotly(net, aspectratio=(2,1), figsize=0.5)
###Output
_____no_output_____ |
benchmark-deep-learning-models-for-multivariate-time-series.ipynb | ###Markdown
Contents:1. [Background](Background)2. [Environment Setup](Environment-Setup) - [Install the required python packages](Install-the-required-python-packages) - [Import the required libraries](Import-the-required-libraries)3. [Download and prepare the data](Download-and-prepare-the-data)4. [Scenario 1: Related time series are available in the forecast horizon](Scenario-1:-Related-time-series-are-available-in-the-forecast-horizon)5. [Scenario 2: Related time series is not available in the forecast horizon](Scenario-2:-Related-time-series-is-not-available-in-the-forecast-horizon)6. [Scenario 3: Model all the time series as target series](Model-all-the-time-series-as-target-series) 1. BackgroundMultivariate time series forecasting is a common problem and more recently deep learning models have been applied to time series forecasting. [GluonTS](https://ts.gluon.ai/index.html) is a deep learning toolkit for probabilistic modelling of time series. This notebook shows you different ways in which one can model a multivariate time series problem (time series with related variables) using different models that are implemented in GluonTS.The following models are explored in this notebook -- [DeepAR](https://ts.gluon.ai/api/gluonts/gluonts.model.deepar.html)- [Transformer](https://ts.gluon.ai/api/gluonts/gluonts.model.transformer.html)- [MQ-CNN](https://ts.gluon.ai/api/gluonts/gluonts.model.seq2seq.html)- [Temporal Fusion Transformer](https://ts.gluon.ai/api/gluonts/gluonts.model.tft.html)- [LSTNet](https://ts.gluon.ai/api/gluonts/gluonts.model.lstnet.html) 2. Environment SetupPlease run this notebook on an instance that has a GPU. (p2.xlarge or higher) 2.1 Install the required python packages
###Code
!pip uninstall -y mxnet
!pip install --upgrade mxnet~=1.7
!pip install gluonts
!pip install mxnet-cu102==1.7.0
###Output
_____no_output_____
###Markdown
2.2 Import the required libraries
###Code
import pandas as pd
from gluonts.dataset.common import (
CategoricalFeatureInfo,
ListDataset,
MetaData,
TrainDatasets,
load_datasets
)
from gluonts.dataset.field_names import FieldName
from gluonts.model.deepar import DeepAREstimator
from gluonts.model.transformer import TransformerEstimator
from gluonts.model.lstnet import LSTNetEstimator
from gluonts.model.seq2seq import MQCNNEstimator
from gluonts.model.seq2seq import MQRNNEstimator
from gluonts.model.tft import TemporalFusionTransformerEstimator
from gluonts.evaluation.backtest import make_evaluation_predictions
from gluonts.evaluation import Evaluator, MultivariateEvaluator
from gluonts.mx.trainer import Trainer
from gluonts.dataset.multivariate_grouper import MultivariateGrouper
import mxnet as mx
from itertools import islice
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
3. Download and prepare the dataWe use the [PM 2.5 dataset](https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data) from the UCI Machine Learning Repository.The dataset contains PM 2.5 data from the US Embassy in Beijing and is supplemented with meteorological data. The following columns are part of the data - - No: row number- year: year of data in this row- month: month of data in this row- day: day of data in this row- hour: hour of data in this row- pm2.5: PM2.5 concentration (ug/m^3)- DEWP: Dew Point (℃)- TEMP: Temperature (℃)- PRES: Pressure (hPa)- cbwd: Combined wind direction- Iws: Cumulated wind speed (m/s)- Is: Cumulated hours of snow- Ir: Cumulated hours of rainGiven the above information, here is how the different features in the dataset are treated - - pm2.5 is the target variable. - Meteorological variables like 'TEMP', 'DEWP' and 'PRES' can be treated as related time series with real values.- 'cbwd' is a categorical variable and varies with time and can be treated as a dynamic categorical feature.There are different ways in which one can model multivariate time series problems depending on the availability of related time series features in the forecast horizon. This notebook illustrates them by assuming the presence or absence of the meteorological variables in the forecast horizon.
###Code
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00381/PRSA_data_2010.1.1-2014.12.31.csv
df = pd.read_csv("PRSA_data_2010.1.1-2014.12.31.csv")
#combine month,day,hour into a timestamp
df['Timestamp'] = pd.to_datetime(df[['year', 'month', 'day', 'hour']])
#set an ID to identify a time series
df['id'] = 0
#set the type of the categorical variable
df["cbwd"] = df["cbwd"].astype('category')
df["cbwd_cat"] = df["cbwd"].cat.codes
df.columns
###Output
_____no_output_____
###Markdown
4. Related time series are available in the forecast horizonIn this section, you will assume that the meteorological variables (TEMP, DEWP, PRES) are available to the model in the forecast horizon. In real life, this could be from a weather prediction model or forecast.The following cells compare a DeepAR and Transformer in this particular scenario. 4.1 Prepare the training and testing dataset
###Code
forecast_length = 120
num_backtest_windows = 2
training_data_list = []
test_data_list = []
for i in reversed(range(1, num_backtest_windows+1)):
training_data = [
{
"start": df.iloc[0]["Timestamp"],
"target": df["pm2.5"][:-forecast_length*i],
"feat_static_cat": [0],
"feat_dynamic_real": [df["TEMP"][:-forecast_length*i],
df["DEWP"][:-forecast_length*i]],
"feat_dynamic_cat": [df["cbwd_cat"][:-forecast_length*i]]
}
]
# create testing data.
test_data = [
{
"start": df.iloc[0]["Timestamp"],
"target": df["pm2.5"][:-forecast_length*(i-1)] if i>1 else df["pm2.5"][:],
"feat_static_cat": [0],
"feat_dynamic_real": [df["TEMP"][:-forecast_length*(i-1)] if i>1 else df["TEMP"][:],
df["DEWP"][:-forecast_length*(i-1)] if i>1 else df["DEWP"][:]],
"feat_dynamic_cat": [df["cbwd_cat"][:-forecast_length*(i-1)] if i>1 else df["cbwd_cat"][:]]
}
]
training_data_list.append(ListDataset(training_data, freq='1h'))
test_data_list.append(ListDataset(test_data, freq='1h'))
#function that takes a gluonTS estimator, trains the model and predicts it for every pair of train and test dataset
def backtest(model,
training_data_list,
test_data_list,
num_backtest_windows):
forecasts = []
obs = []
#train a model for every backtest window
for i in range(num_backtest_windows):
predictor = model.train(training_data_list[i],
force_reinit=True)
forecast_it, ts_it = make_evaluation_predictions(test_data_list[i],
predictor=predictor,
num_samples=100)
forecasts.extend(list(forecast_it))
obs.extend(list(ts_it))
return forecasts, obs
###Output
_____no_output_____
###Markdown
DeepAR
###Code
deepar = DeepAREstimator(freq="1h",
use_feat_static_cat=True,
use_feat_dynamic_real=True,
cardinality=[1],
prediction_length=forecast_length,
trainer=Trainer(epochs=30, ctx = mx.context.gpu(0)),
num_cells=40)
forecast_deepar, obs_deepar = backtest(deepar,
training_data_list,
test_data_list,
num_backtest_windows)
###Output
_____no_output_____
###Markdown
Transformer
###Code
transformer = TransformerEstimator(freq="1h",
use_feat_dynamic_real=True,
#context_length=168,
prediction_length=forecast_length,
trainer=Trainer(epochs=10,
learning_rate=0.01,
ctx = mx.context.gpu(0))
)
forecast_transformer, obs_transformer = backtest(transformer,
training_data_list,
test_data_list,
num_backtest_windows)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
evaluator = Evaluator(quantiles=[0.5], seasonality=None)
agg_metrics_deepar, item_metrics_deepar = evaluator(iter(obs_deepar),
iter(forecast_deepar),
num_series=len(forecast_deepar))
evaluator = Evaluator(quantiles=[0.5], seasonality=None)
agg_metrics_transformer, item_metrics_transformer = evaluator(iter(obs_transformer),
iter(forecast_transformer),
num_series=len(forecast_transformer))
###Output
_____no_output_____
###Markdown
Results
###Code
pd.concat([pd.DataFrame.from_dict(agg_metrics_deepar, orient='index').rename(columns={0: "DeepAR"}),
pd.DataFrame.from_dict(agg_metrics_transformer, orient='index').rename(columns={0: "Transformer"})],
axis=1)
def plot_forecasts(obs, forecasts, past_length, start, stop, step, title):
for target, forecast in zip(obs, forecasts):
ax = target[-past_length:].plot(figsize=(12, 5), linewidth=2)
forecast.plot(color='g')
plt.ylabel('PM2.5 concentration (ug/m^3)')
plt.title(title)
plt.grid(which='both')
plt.legend(["observations", "median prediction", "90% confidence interval", "50% confidence interval"])
plt.show()
###Output
_____no_output_____
###Markdown
The below charts plot the observed PM2.5 against the forecast from DeepAR and Transformer. Since, the model computes probabilistic forecasts, it is possible to draw a confidence interval around the median prediction. These charts show a 50% and 90% confidence interval. Plot Sample Forecast - DeepAR
###Code
plot_forecasts(obs_deepar, forecast_deepar, 340, 0, 2, 1, 'deepar')
###Output
_____no_output_____
###Markdown
Plot Sample Forecast - Transformer
###Code
plot_forecasts(obs_transformer, forecast_transformer, 340, 0, 2, 1, 'transformer')
###Output
_____no_output_____
###Markdown
5. Related time series is not available in the forecast horizon In this section, you will see how to train models when the related time series (meteorological features in this case) is not available in the forecast horizon. The meteorological variables are only present for the historical time period and can hence be used for training the model. Seq2Seq models like [MQ-CNN](https://ts.gluon.ai/api/gluonts/gluonts.model.seq2seq.html) which uses a CNN as an encoder and a MLP as a decoder can be used in this scenario.[Temporal Fusion Transformer](https://arxiv.org/abs/1912.09363) is another architecture that combines recurrent layers and attention layers to enable the usage of a mix of inputs like exogenous variables that are only observed historically and other static and dynamic covariates.We compare the above to models in this section. 5.1 Prepare the training and testing dataset to use 'past_feat_dynamic_real'
###Code
training_data_list = []
test_data_list = []
for i in reversed(range(1, 3)):
training_data = [
{
"start": df.iloc[0]["Timestamp"],
"target": df["pm2.5"][:-forecast_length*i],
"past_feat_dynamic_real": [df["TEMP"][:-forecast_length*i],
df["DEWP"][:-forecast_length*i]],
}
]
# create testing data.
test_data = [
{
"start": df.iloc[0]["Timestamp"],
"target": df["pm2.5"][:-forecast_length*(i-1)] if i>1 else df["pm2.5"][:],
"past_feat_dynamic_real": [df["TEMP"][:-forecast_length*(i-1)] if i>1 else df["TEMP"][:],
df["DEWP"][:-forecast_length*(i-1)] if i>1 else df["DEWP"][:]],
}
]
training_data_list.append(ListDataset(training_data, freq='1h'))
test_data_list.append(ListDataset(test_data, freq='1h'))
###Output
_____no_output_____
###Markdown
MQ-CNN
###Code
#At times, one can encounter exploding gradients and as a result the loss can become a NaN.
#set hybridize=False. May be related to https://github.com/awslabs/gluon-ts/issues/833
mqcnn = MQCNNEstimator(freq="1h",
use_past_feat_dynamic_real=True,
prediction_length=forecast_length,
trainer=Trainer(epochs=30,
learning_rate=0.001,
#clip_gradient=3,
#batch_size=32,
#num_batches_per_epoch=16,
hybridize=False,
ctx = mx.context.gpu(0)),
)
forecast_mqcnn, obs_mqcnn = backtest(mqcnn,
training_data_list,
test_data_list,
num_backtest_windows)
###Output
_____no_output_____
###Markdown
Temporal Fusion Transformer
###Code
training_data_list = []
test_data_list = []
for i in reversed(range(1, 3)):
training_data = [
{
"start": df.iloc[0]["Timestamp"],
"target": df["pm2.5"][:-forecast_length*i],
"past_feat_dynamic_real_1": df["TEMP"][:-forecast_length*i],
"past_feat_dynamic_real_2": df["DEWP"][:-forecast_length*i],
"past_feat_dynamic_real_3": df["Ir"][:-forecast_length*i]
}
]
# create testing data.
test_data = [
{
"start": df.iloc[0]["Timestamp"],
"target": df["pm2.5"][:-forecast_length*(i-1)] if i>1 else df["pm2.5"][:],
"past_feat_dynamic_real_1": df["TEMP"][:-forecast_length*(i-1)] if i>1 else df["TEMP"][:],
"past_feat_dynamic_real_2": df["DEWP"][:-forecast_length*(i-1)] if i>1 else df["DEWP"][:],
"past_feat_dynamic_real_3": df["Ir"][:-forecast_length*(i-1)] if i>1 else df["Ir"][:]
}
]
training_data_list.append(ListDataset(training_data, freq='1h'))
test_data_list.append(ListDataset(test_data, freq='1h'))
feat_past_dynamic_real = ["past_feat_dynamic_real_1", "past_feat_dynamic_real_2", "past_feat_dynamic_real_3"]
#https://github.com/awslabs/gluon-ts/issues/1075
tft = TemporalFusionTransformerEstimator(freq = '1h',
context_length=168,
prediction_length = forecast_length,
trainer=Trainer(epochs=30,
learning_rate=0.001,
ctx = mx.context.gpu(0)),
hidden_dim=32,
variable_dim=8,
num_heads=4,
num_outputs=3,
num_instance_per_series=100,
dropout_rate=0.1,
dynamic_feature_dims={
'past_feat_dynamic_real_1': 1,
'past_feat_dynamic_real_2': 1,
'past_feat_dynamic_real_3': 1
}, # dimensions of dynamic real features
past_dynamic_features=feat_past_dynamic_real,
)
forecast_tft, obs_tft = backtest(tft,
training_data_list,
test_data_list,
num_backtest_windows)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
evaluator = Evaluator(quantiles=[0.5], seasonality=None)
agg_metrics_mqcnn, item_metrics_mqcnn = evaluator(iter(obs_mqcnn),
iter(forecast_mqcnn),
num_series=len(forecast_mqcnn))
evaluator = Evaluator(quantiles=[0.5], seasonality=None)
agg_metrics_tft, item_metrics_tft = evaluator(iter(obs_tft),
iter(forecast_tft),
num_series=len(forecast_tft))
###Output
_____no_output_____
###Markdown
Results
###Code
pd.concat([pd.DataFrame.from_dict(agg_metrics_mqcnn, orient='index').rename(columns={0: "MQ-CNN"}),
pd.DataFrame.from_dict(agg_metrics_tft, orient='index').rename(columns={0: "TFT"})],
axis=1)
# 'QuantileForecast.plot' plots all the quantiles as line plots.
# This is some boiler plate code to plot an interval around the median
# using the 10th and 90th quantile
def plot_from_quantile_forecast(obs, past_length, lower_bound, upper_bound, forecasts):
plt.figure(figsize=(12,6))
plt.plot(obs[0][-forecast_length-past_length:], label='observed')
plt.plot(obs[0][-forecast_length:].index,
lower_bound,
color='g',
alpha=0.3,
label='10th quantile')
plt.plot(obs[0][-forecast_length:].index,
forecasts,
color='g',
label='median prediction')
plt.plot(obs[0][-forecast_length:].index,
upper_bound,
color='g',
alpha=0.3,
label='90th quantile')
plt.fill_between(obs[0][-forecast_length:].index,
lower_bound,
upper_bound,
color='g',
alpha=0.3)
plt.ylabel('PM2.5 concentration (ug/m^3)')
plt.legend()
plt.grid(which="both")
plt.show()
###Output
_____no_output_____
###Markdown
The two models illustrated in this section forecast quantiles. Hence to construct an interval, one needs to pick forecasts at different quantiles. The charts below use the 10th and the 90th quantile forecast to construct an interval. Plot Sample Forecast - MQCNN
###Code
plot_from_quantile_forecast(obs_mqcnn,
100,
forecast_mqcnn[0].forecast_array[0],
forecast_mqcnn[0].forecast_array[8],
forecast_mqcnn[0].forecast_array[4])
###Output
_____no_output_____
###Markdown
Plot Sample Forecast - Temporal Fusion Transformer (TFT)
###Code
plot_from_quantile_forecast(obs_tft,
100,
forecast_tft[0].forecast_array[1],
forecast_tft[0].forecast_array[2],
forecast_tft[0].forecast_array[0])
###Output
_____no_output_____
###Markdown
6. Model all the time series as target seriesIn this case, we forecast pm2.5 and the other meteorological features together as multivariate variables.Models like [LSTNet](https://ts.gluon.ai/api/gluonts/gluonts.model.lstnet.html) allow one to treat all the related time series in a multivariate fashion. One can train a model to forecast all the time series simultaneously.For this, the data needs to be prepared in a different way and the below cell does that. LSTNet
###Code
train = df.transpose()
train2 = train.to_numpy()
target=train2[[5,6,7,8],:]
#prediction_length=24
start= [df.iloc[0]["Timestamp"] for _ in range(4)]
train_ds = ListDataset([{FieldName.TARGET: target,
FieldName.START: start
}
for (target, start) in zip(target[:, :-forecast_length],
start)],
freq='1h')
test_ds = ListDataset([{FieldName.TARGET: target,
FieldName.START: start
}
for (target, start) in zip(target[:, :],
start)],
freq='1h')
lstnet_estimator=LSTNetEstimator(freq='1h',
prediction_length=forecast_length,
context_length=336,
num_series=4,
skip_size=10,
ar_window=320,
channels=80,
trainer = Trainer(epochs=400,
ctx = mx.context.gpu(0)),
dropout_rate = 0.4,
output_activation = 'sigmoid',
rnn_cell_type = 'gru',
rnn_num_cells = 100,
rnn_num_layers = 6,
skip_rnn_cell_type = 'gru',
skip_rnn_num_layers = 3,
skip_rnn_num_cells = 10,
scaling = True)
grouper_train = MultivariateGrouper(max_target_dim=4)
train_ds = grouper_train(train_ds)
lstnet_predictor = lstnet_estimator.train(train_ds)
grouper_test = MultivariateGrouper(max_target_dim=4)
test_ds = grouper_test(test_ds)
forecast_lstnet, obs_lstnet = make_evaluation_predictions(test_ds,
predictor=lstnet_predictor,
num_samples=100)
forecast_lstnet = list(forecast_lstnet)
obs_lstnet = list(obs_lstnet)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
evaluator = MultivariateEvaluator(quantiles=[0.1, 0.5, 0.9])
agg_metrics_lstnet, item_metrics_lstnet = evaluator(obs_lstnet, forecast_lstnet, num_series=len(test_ds))
index_series_map = {0: "pm 2.5",
1: "DEWP",
2: "TEMP",
3: "PRES"}
metrics_lstnet = []
for i in range(4):
metrics = [k for k in agg_metrics_lstnet.keys() if k.startswith(str(i))]
metrics_lstnet.append(pd.DataFrame.from_dict({m[2:]:agg_metrics_lstnet[m] for m in metrics},
orient='index').rename(columns={0: index_series_map[i]}))
pd.concat(metrics_lstnet, axis=1)
###Output
_____no_output_____
###Markdown
Plot Sample Forecast - LSTNet The plots below show the forecasts for each of the target time series as defined in the cell above. The results from the LSTNet model that is trained to forecast all the time series simultaneously are not great; one probably needs to do a more thorough hyperparameter optimization. Still, this can be a good model to explore when one wants to build a single model to forecast multiple time series.
###Code
for x, y in zip(obs_lstnet, forecast_lstnet):
for i in range(4):
plt.figure(figsize=(12,6))
plt.plot(x[i][-forecast_length-100:])
median = y.copy_dim(i).quantile(0.5)
y_10 = y.copy_dim(i).quantile(0.1)
y_90 = y.copy_dim(i).quantile(0.9)
#print(y_10)
#print(y_90)
plt.plot(x[i][-forecast_length:].index,
median,
color='g')
plt.fill_between(x[i][-forecast_length:].index,
y_10,
y_90,
color='g',
alpha=0.3)
plt.title(index_series_map[i])
plt.show()
###Output
_____no_output_____ |
examples/fetchers.ipynb | ###Markdown
Fetchers Notebook Contents- [How can I create a `Fetcher`?](How-can-I-create-a-Fetcher-?)- [How can I fetch GitHub issues? ](How-can-I-fetch-GitHub-issues?)- [How does Donkeybot fetch Rucio documentation?](How-does-Donkeybot-Fetch-Rucio-Documentation?)- [How does Donkeybot save the fetched data?](How-does-Donkeybot-save-the-fetched-data?) **The scripts `fetch_issues.py`, `fetch_rucio_docs.py` do everything explained here.** See [scripts](https://github.com/rucio/donkeybot/tree/master/scripts) for source code and run the scripts with the '-h' option for info on the arguments they take. eg. `(virt)$ python scripts/fetch_rucio_docs.py -h` How can I create a `Fetcher` ? Simple, use the `FetcherFactory` and just pick the fetcher type - Issue for a GitHub `IssueFetcher`- Rucio Documentation for a `RucioDocsFetcher` What about the `EmailFetcher` ?- Currently as explained in [How It Works](https://github.com/rucio/donkeybot/blob/master/docs/how_it_works.md) emails are fetched from different scripts run in CERN and not through Donkeybot.
###Code
from bot.fetcher.factory import FetcherFactory
###Output
_____no_output_____
###Markdown
Let's create a GitHub `IssueFetcher`.
###Code
issues_fetcher = FetcherFactory.get_fetcher("Issue")
issues_fetcher
###Output
_____no_output_____
###Markdown
How can I fetch GitHub issues? You need 4 things.- The **repository** whose issues we are fetching- A **GitHub API token**. To generate a GitHub token visit [Personal Access Tokens](https://github.com/settings/tokens) and follow [Creating a Personal Access Token](https://docs.github.com/en/github/authenticating-to-github/creating-a-personal-access-token).- The **maximum number of pages** the fetcher will look through to fetch issues. (default is 201)- A couple pandas **DataFrames**, one which will hold the issues data and one for the issue comments data.
###Code
import pandas as pd
repository = 'rucio/rucio' # but you can use any in the format user/repo
token = "<YOUR_TOKEN>"
max_pages = 3
(issues_df, comments_df) = issues_fetcher.fetch(repo=repository, api_token=token, max_pages=max_pages)
###Output
_____no_output_____
###Markdown
The resulting DataFrames will look like this:
###Code
issues_df.info()
comments_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16 entries, 0 to 15
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 issue_id 16 non-null object
1 comment_id 16 non-null object
2 creator 16 non-null object
3 created_at 16 non-null object
4 body 16 non-null object
dtypes: object(5)
memory usage: 768.0+ bytes
###Markdown
How does Donkeybot Fetch Rucio Documentation? It's the same process we followed with the `IssueFetcher` only now the factory will create a `RucioDocsFetcher`
###Code
from bot.fetcher.factory import FetcherFactory
docs_fetcher = FetcherFactory.get_fetcher("Rucio Documentation")
docs_fetcher
token = "<YOUR_TOKEN>"
docs_df = docs_fetcher.fetch(api_token=token)
###Output
_____no_output_____
###Markdown
How does Donkeybot save the fetched data? For this we need to **Step 1.** open a connection to our Data Storage
###Code
from bot.database.sqlite import Database
# open the connection
db_name = 'data_storage'
data_storage = Database(f"{db_name}.db")
###Output
_____no_output_____
###Markdown
**Step 2.** Save the fetched issues and comments data.
###Code
# save the fetched data
issues_fetcher.save(
db=data_storage,
issues_table_name='issues',
comments_table_name='issue_comments',
)
###Output
_____no_output_____
###Markdown
**Step 2.1.** Alternatively, save the documentation data.
###Code
# save the fetched data
docs_fetcher.save(db=data_storage, docs_table_name='docs')
###Output
_____no_output_____
###Markdown
**Step 3.** Finally close the connection
###Code
# close the connection
data_storage.close_connection()
###Output
_____no_output_____ |
.ipynb_checkpoints/Fifa_ratings-checkpoint.ipynb | ###Markdown
Filter Python database to find better, cheaper players using FIFA’s ratingsProduced by : [FC Python](https://fcpython.com/blog/filter-python-database-to-find-better-cheaper-players-using-fifas-ratings)Data provided by : [Kaggle](https://www.kaggle.com/stefanoleone992/fifa-20-complete-player-datasetplayers_20.csv)
###Code
#Import modules
import numpy as np
import pandas as pd
# 2020 Data Loading
data = pd.read_csv("fifa-20-complete-player-dataset/players_20.csv")
data.head()
#Define function called club, that looks for a team name and return the selected columns
def club(teamName):
return data[data['club'] == teamName][['short_name','wage_eur','value_eur','player_positions','overall','age']]
#Use the club function to find the team, and sort the squad by wage bill
club('Manchester United').sort_values("wage_eur", ascending = False)
#Extract DDG's information, just like we did with the team name before
DDG = data[data['short_name'] == 'De Gea'][['short_name','wage_eur','value_eur','player_positions','overall','age']]
#Assign DDG's wage, position, rating and age to variables
DDGWage = DDG['wage_eur'].item()
DDGPos = DDG['player_positions'].item()
DDGRating = DDG['overall'].item()
DDGAge = DDG['age'].item()
#Create a list of goalkeepers, matching DDG's position
longlist = data[data['player_positions'] == DDGPos][['short_name','wage_eur','value_eur','player_positions','overall','age']]
#Create a list of players that have a lower overall rank than DDG
removals = longlist[longlist['overall'] <= DDGRating].index
#Drop these players
longlist.drop(removals , inplace=True)
#Repeat above, but for players with a larger wage
removals = longlist[longlist['wage_eur'] > DDGWage].index
longlist.drop(removals , inplace=True)
#Repeat above, but for older players
removals = longlist[longlist['age'] >= DDGAge].index
longlist.drop(removals , inplace=True)
#Show me our potential replacements, sorted by lowest wages
longlist.sort_values("wage_eur")
def cheapReplacement(player, skillReduction = 0):
#Get the replacee with the name provided in the argument
replacee = data[data['short_name'] == player][['short_name','wage_eur','value_eur','player_positions','overall','age']]
#Assign the relevant details of this player to variables
replaceePos = replacee['player_positions'].item()
replaceeWage = replacee['wage_eur'].item()
replaceeAge = replacee['age'].item()
replaceeOverall = replacee['overall'].item() - skillReduction
#Create the longlist of players that share the position
longlist = data[data['player_positions'] == replaceePos][['short_name','wage_eur','value_eur','player_positions','overall','age']]
#Create list of players that do not meet the rating criteria and drop them from the longlist
removals = longlist[longlist['overall'] <= replaceeOverall].index
longlist.drop(removals , inplace=True)
#Repeat for players with higher wages
removals = longlist[longlist['wage_eur'] > replaceeWage].index
longlist.drop(removals , inplace=True)
#Repeat for older players
removals = longlist[longlist['age'] >= replaceeAge].index
longlist.drop(removals , inplace=True)
#Display the players that meet the requirements
return longlist.sort_values("wage_eur")
###Output
_____no_output_____
###Markdown
* Fred's Cheap Replacement
###Code
cheapReplacement('Fred')
###Output
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:7: FutureWarning: `item` has been deprecated and will be removed in a future version
import sys
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:8: FutureWarning: `item` has been deprecated and will be removed in a future version
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:9: FutureWarning: `item` has been deprecated and will be removed in a future version
if __name__ == '__main__':
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:10: FutureWarning: `item` has been deprecated and will be removed in a future version
# Remove the CWD from sys.path while we load stuff.
###Markdown
* Lindelöf's Cheap Replacement
###Code
cheapReplacement('V. Lindelöf')
###Output
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:7: FutureWarning: `item` has been deprecated and will be removed in a future version
import sys
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:8: FutureWarning: `item` has been deprecated and will be removed in a future version
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:9: FutureWarning: `item` has been deprecated and will be removed in a future version
if __name__ == '__main__':
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:10: FutureWarning: `item` has been deprecated and will be removed in a future version
# Remove the CWD from sys.path while we load stuff.
###Markdown
* Pogba's Cheap Replacement with a skill reduction of 8
###Code
cheapReplacement('P. Pogba', 8)
###Output
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:7: FutureWarning: `item` has been deprecated and will be removed in a future version
import sys
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:8: FutureWarning: `item` has been deprecated and will be removed in a future version
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:9: FutureWarning: `item` has been deprecated and will be removed in a future version
if __name__ == '__main__':
/home/abdoul-ma/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:10: FutureWarning: `item` has been deprecated and will be removed in a future version
# Remove the CWD from sys.path while we load stuff.
|
lab-exercise/.ipynb_checkpoints/lab-exercise-II-checkpoint.ipynb | ###Markdown
Name: Alo Oluwapese Mat No.: 20120612222 Email: [email protected] Exercise I A python program to determine the difference between 17 and a given number, returning double the difference if the number is greater than 17
###Code
a = 22
b = 17
if a > b:
print (2*(a-b))
a = 14
b = 17
if a > b:
print (2*(a-b))
else: print (abs(a-b))
###Output
3
###Markdown
Exercise II A Python program to calculate the sum of three numbers, returning three times the sum if all three numbers are equal
###Code
a = 1
b = 2
c = 3
if a == b == c:
print (3*(a + b +c))
else:
print (a + b +c)
a = 3
b = 3
c = 3
if a == b == c:
print (3*(a + b +c))
else:
print (a + b +c)
###Output
27
###Markdown
Exercise III A python program to return true if the two given integer values are equal or their sum or difference is 5.
###Code
a = 7
b = 2
if a == b:
print (True)
elif a + b == 5:
print (True)
elif a - b == 5:
print (True)
a = 3
b = 2
if a == b:
print (True)
elif a + b == 5:
print (True)
elif a - b == 5:
print (True)
a = 2
b = 2
if a == b:
print (True)
elif a + b == 5:
print (True)
elif a - b == 5:
print (True)
###Output
True
###Markdown
Exercise IV Write a Python program to sort three integers without using conditional statements and loops
###Code
a = 8
b = 3
c = 7
d = [a, b, c]
print (min(d))
middle = (a + b + c) - max(d) - min(d)
print (middle)
print (max(d))
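# The same result in one step with Python's built-in sorting (an added alternative sketch):
print (sorted(d))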
###Output
3
7
8
###Markdown
Exercise V Write a Python function that takes a positive integer and returns the sum of the cube of all the positive integers smaller than the specified number.
###Code
a = [1,2,3,4,5,6]
b = a[0]**3
c = a[1]**3
d = a[2]**3
e = a[3]**3
f = a[4]**3
print (b + c + d + e + f)
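# A general function version of the same idea (an added sketch; the exercise asks for a function):
def sum_of_cubes(n):
    # sum of i**3 for every positive integer i smaller than n
    return sum(i**3 for i in range(1, n))
print (sum_of_cubes(6))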
###Output
225
|
chp10/neuralnets.ipynb | ###Markdown
Perceptron (single-layer fully connected NN): the simplest ANN, based on the Threshold Logic Unit (TLU). All the inputs are weighted. The Perceptron computes the weighted sum of all the inputs and passes it to a step function to make a decision. **The simplest step function is the Heaviside step function.** A TLU computes this weighted sum and applies the step function, producing a logical output of 0 or 1.The decision boundary of each output neuron is linear, so Perceptrons are incapable of learning complex patterns (just like Logistic Regression classifiers). In fact, Scikit-Learn's Perceptron class is equivalent to using an SGDClassifier with the following hyperparameters: loss="perceptron", learning_rate="constant", eta0=1 (the learning rate), and penalty=None (no regularization).
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# TensorFlow ≥2.0 is required
import tensorflow as tf
assert tf.__version__ >= "2.0"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
iris = load_iris()
X = iris.data[:, (2, 3)] # petal length, petal width
y = (iris.target == 0).astype(np.int) # Iris Setosa?
per_clf = Perceptron()
per_clf.fit(X, y)
y_pred = per_clf.predict([[2, 0.5]])
y_pred
###Output
_____no_output_____
###Markdown
Logistic regression loss and backpropagation: the gradient can be computed with an explicit for loop or, better, without one via vectorization (np.dot($w, x$)) — prefer NumPy over Python for loops where possible (see the small demo at the top of the next code cell). If you initialize all weights and biases to zero, then all neurons in a given layer will be perfectly identical, and thus backpropagation will affect them in exactly the same way, so they will remain identical. In other words, despite having hundreds of neurons per layer, your model will act as if it had only one neuron per layer: it won't be too smart. If instead you randomly initialize the weights, you break the symmetry and allow backpropagation to train a diverse team of neurons. Activation functions: the step function is replaced by the sigmoid because most of the step function is flat (it produces a constant value), which makes gradient descent inefficient, as nothing is learned from the same value over and over again. Sigmoid: good for the output layer in binary classification problems. Tanh: continuous and differentiable, useful in the middle layers; its value ranges from -1 to 1 and it tends to make each layer's output more or less centered around 0 at the beginning of training, which helps speed up convergence and helps the following layers. Cons: if |z| is very large, the derivative becomes very small (the function saturates), which slows down gradient descent. ReLU: continuous but not differentiable at z = 0; its derivative is 0 for z < 0 and 1 for z > 0, it does not saturate for positive values, and in practice networks using it learn faster. Non-linear activation functions: if we use a linear activation function, the network just produces a linear function of the input, so no matter how many layers you stack, it is not learning anything more expressive.
###Code
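# A small demo of the vectorization point made above (an added sketch): computing
# z = w.x + b with an explicit Python loop versus a single NumPy dot product.
w_demo = np.random.rand(1000)
x_demo = np.random.rand(1000)
b_demo = 0.5
z_loop = b_demo
for j in range(len(w_demo)):
    z_loop += w_demo[j] * x_demo[j]
z_vec = np.dot(w_demo, x_demo) + b_demo  # same result (up to float rounding), much faster on large arrays
print(z_loop, z_vec)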
def sigmoid(z):
return 1 / (1 + np.exp(-z))
def relu(z):
return np.maximum(0, z)
def derivative(f, z, eps=0.000001):
return (f(z + eps) - f(z - eps))/(2 * eps)
z = np.linspace(-5, 5, 200)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(z, np.sign(z), "r-", linewidth=1, label="Step")
plt.plot(z, sigmoid(z), "g--", linewidth=2, label="Sigmoid")
plt.plot(z, np.tanh(z), "b-", linewidth=2, label="Tanh")
plt.plot(z, relu(z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
plt.legend(loc="center right", fontsize=14)
plt.title("Activation functions", fontsize=14)
plt.axis([-5, 5, -1.2, 1.2])
plt.subplot(122)
plt.plot(z, derivative(np.sign, z), "r-", linewidth=1, label="Step")
plt.plot(0, 0, "ro", markersize=5)
plt.plot(0, 0, "rx", markersize=10)
plt.plot(z, derivative(sigmoid, z), "g--", linewidth=2, label="Sigmoid")
plt.plot(z, derivative(np.tanh, z), "b-", linewidth=2, label="Tanh")
plt.plot(z, derivative(relu, z), "m-.", linewidth=2, label="ReLU")
plt.grid(True)
plt.legend(loc="center right", fontsize=10)
#plt.legend(loc="center right", fontsize=14)
plt.title("Derivatives", fontsize=14)
plt.axis([-5, 5, -0.2, 1.2])
# save_fig("activation_functions_plot")
plt.show()
###Output
_____no_output_____
###Markdown
In general, each positive class is dedicated to one output neuron. Deep neural networks are useful for learning complex things: lower layers learn simple features and upper layers, combining those simple features, learn more complex ones. One iteration of gradient descent.
###Code
# Implementation with KERAS
from tensorflow import keras
import os
import tensorflow as tf
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
fashion_data = keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_data.load_data()
###Output
_____no_output_____
###Markdown
With Keras, the data is an ndarray in which each example is a 2-D image rather than the flattened 1-D vector we had before with Scikit-Learn
###Code
X_train_full.shape, X_train_full.dtype
X_valid, X_train = X_train_full[:5000] / 255.0, X_train_full[5000:] / 255.0
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_test = X_test / 255.
np.unique(y_train)
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
"Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
class_names[y_train[0]]
plt.figure(figsize=(15, 10))
for i in range(0, 25):
plt.subplot(5,5, i+1)
plt.axis('off')
plt.grid(False)
plt.imshow(X_train[i], cmap="binary", interpolation="nearest")
plt.title(class_names[y_train[i]])
plt.subplots_adjust(wspace=0.05, hspace=0.2)
plt.show()
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(100, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
# OR ........
# model = keras.models.Sequential([
# keras.layers.Flatten(input_shape=[28, 28]),
# keras.layers.Dense(300, activation="relu"),
# keras.layers.Dense(100, activation="relu"),
# keras.layers.Dense(10, activation="softmax")
# ])
model.summary()
model.layers
keras.utils.plot_model(model, "my_fashion_mnist_model.png", show_shapes=True)
model.layers[0].name, model.get_layer('flatten')
hidden1 = model.layers[1]
# hidden1.name
weights, biases = hidden1.get_weights()
###Output
_____no_output_____
###Markdown
**Initialize the weights and biases using other methods: check the `kernel_initializer` and `bias_initializer` arguments.**
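A hedged sketch of what that would look like on one of the `Dense` layers above; `he_normal` and `zeros` are just example choices, not what the original model used:

    keras.layers.Dense(300, activation="relu",
                       kernel_initializer="he_normal",
                       bias_initializer="zeros")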
###Code
model.compile(loss="sparse_categorical_crossentropy",
optimizer="sgd",
metrics=["accuracy"])
###Output
_____no_output_____
###Markdown
For the multiclass classification case, the loss depends on how each instance is labeled:

- **Labeled with only a class index**: sparse categorical crossentropy
- **Labeled with a one-hot vector**: categorical crossentropy

If you want to convert sparse labels (i.e., class indices) to one-hot vector labels, you can use the `keras.utils.to_categorical()` function.
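A minimal sketch of that conversion, assuming the Fashion MNIST labels loaded above:

    y_train_onehot = keras.utils.to_categorical(y_train, num_classes=10)
    # with one-hot targets the matching loss would be "categorical_crossentropy"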
###Code
history = model.fit(X_train, y_train, epochs=30,validation_data=(X_valid, y_valid))
history.params
###Output
_____no_output_____
###Markdown
Imbalanced classes

To deal with such cases, use the `class_weight` argument when calling the `fit()` method: it gives larger weights to the underrepresented classes and lower weights to the overrepresented classes.
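A hedged sketch of how that could be wired up for the model above; the inverse-frequency weighting is just one common choice, not something the original notebook does:

    counts = np.bincount(y_train)
    class_weight = {i: len(y_train) / (len(counts) * c) for i, c in enumerate(counts)}
    model.fit(X_train, y_train, epochs=30,
              validation_data=(X_valid, y_valid),
              class_weight=class_weight)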
###Code
import pandas as pd
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1) # set the vertical range to [0-1]
plt.show()
model.evaluate(X_test, y_test)
X_new = X_test[:6]
y_proba = model.predict(X_new)
y_proba.round(2)
y_pred = np.argmax(model.predict(X_new), axis=-1)
np.array(class_names)[y_pred]
###Output
_____no_output_____
###Markdown
Regression using NN
###Code
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(housing.data, housing.target, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X_train_full, y_train_full, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_valid = scaler.transform(X_valid)
X_test = scaler.transform(X_test)
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="relu", input_shape=X_train.shape[1:]),
keras.layers.Dense(1)
])
model.compile(loss="mean_squared_error", optimizer="sgd")
history = model.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
mse_test = model.evaluate(X_test, y_test)
X_new = X_test[:3] # pretend these are new instances
y_pred = model.predict(X_new)
###Output
Epoch 1/20
363/363 [==============================] - 0s 732us/step - loss: 1.0336 - val_loss: 15.9921
Epoch 2/20
363/363 [==============================] - 0s 528us/step - loss: 1.0643 - val_loss: 9.6009
Epoch 3/20
363/363 [==============================] - 0s 550us/step - loss: 0.6049 - val_loss: 0.4534
Epoch 4/20
363/363 [==============================] - 0s 518us/step - loss: 0.4093 - val_loss: 0.3639
Epoch 5/20
363/363 [==============================] - 0s 532us/step - loss: 0.3959 - val_loss: 0.3605
Epoch 6/20
363/363 [==============================] - 0s 546us/step - loss: 0.3821 - val_loss: 0.3825
Epoch 7/20
363/363 [==============================] - 0s 522us/step - loss: 0.3962 - val_loss: 0.3767
Epoch 8/20
363/363 [==============================] - 0s 518us/step - loss: 0.3806 - val_loss: 0.3864
Epoch 9/20
363/363 [==============================] - 0s 528us/step - loss: 0.3660 - val_loss: 0.4068
Epoch 10/20
363/363 [==============================] - 0s 558us/step - loss: 0.3737 - val_loss: 0.3810
Epoch 11/20
363/363 [==============================] - 0s 520us/step - loss: 0.3556 - val_loss: 0.3596
Epoch 12/20
363/363 [==============================] - 0s 537us/step - loss: 0.3606 - val_loss: 0.3756
Epoch 13/20
363/363 [==============================] - 0s 526us/step - loss: 0.3550 - val_loss: 0.3618
Epoch 14/20
363/363 [==============================] - 0s 529us/step - loss: 0.3509 - val_loss: 0.3504
Epoch 15/20
363/363 [==============================] - 0s 546us/step - loss: 0.3543 - val_loss: 0.3635
Epoch 16/20
363/363 [==============================] - 0s 517us/step - loss: 0.3579 - val_loss: 0.3388
Epoch 17/20
363/363 [==============================] - 0s 507us/step - loss: 0.3401 - val_loss: 0.3475
Epoch 18/20
363/363 [==============================] - 0s 591us/step - loss: 0.3473 - val_loss: 0.3460
Epoch 19/20
363/363 [==============================] - 0s 563us/step - loss: 0.3404 - val_loss: 0.3390
Epoch 20/20
363/363 [==============================] - 0s 508us/step - loss: 0.3315 - val_loss: 0.3897
162/162 [==============================] - 0s 344us/step - loss: 0.3437
###Markdown
The main differences are the fact that the output layer has a single neuron (since we only want to predict a single value) and uses no activation function, and the loss function is the mean squared error.

Building Complex Models Using the Functional API

Sequential NN: layers stacked one over the other.

Non-Sequential: it connects all or part of the inputs directly to the output layer, as shown in Figure 10-13. This architecture makes it possible for the neural network to learn both deep patterns (using the deep path) and simple rules (through the short path). In contrast, a regular MLP forces all the data to flow through the full stack of layers, so simple patterns in the data may end up being distorted by this sequence of transformations.
###Code
# non-Sequential network
input_ = keras.layers.Input(shape=X_train.shape[1:])
hidden1 = keras.layers.Dense(30, activation="relu")(input_)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([input_, hidden2])
output = keras.layers.Dense(1)(concat)
model = keras.models.Model(inputs=[input_], outputs=[output])
model.summary()
model.compile(loss="mean_squared_error", optimizer=keras.optimizers.SGD(lr=1e-3))
history = model.fit(X_train, y_train, epochs=20,
validation_data=(X_valid, y_valid))
mse_test = model.evaluate(X_test, y_test)
y_pred = model.predict(X_new)
y_pred
###Output
_____no_output_____
###Markdown
What if you want to send a subset of the features through the wide path, and a different subset (possibly overlapping) through the deep path (see Figure 10-14)? In this case, one solution is to use multiple inputs. For example, suppose we want to send 5 features through the wide path (features 0 to 4), and 6 features through the deep path (features 2 to 7):
###Code
X_train[:, :5].shape
inp_A = keras.layers.Input(shape=[5])
inp_b = keras.layers.Input(shape=[6])
hidden1 = keras.layers.Dense(30, activation="relu")(inp_b)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([inp_A, hidden2])
output = keras.layers.Dense(1)(concat)
model = keras.models.Model(inputs=[inp_A, inp_b], outputs=[output])
model.compile(loss='mean_squared_error', optimizer='sgd')
X_train_A, X_train_B = X_train[:, :5], X_train[:, 2:]
X_valid_A, X_valid_B = X_valid[:, :5], X_valid[:, 2:]
X_test_A, X_test_B = X_test[:, :5], X_test[:, 2:]
X_new_A, X_new_B = X_test_A[:3], X_test_B[:3]
history = model.fit((X_train_A, X_train_B), y_train, epochs=20,
validation_data=((X_valid_A, X_valid_B), y_valid))
mse_test = model.evaluate((X_test_A, X_test_B), y_test)
y_pred = model.predict((X_new_A, X_new_B))
y_pred
###Output
_____no_output_____
###Markdown
Multiple Outputs

1. Classification and localization coordinates.
2. Different tasks to be performed on the same data. Instead of training a separate NN for each task, train a single NN with one output per task, so the network can learn features that are useful across tasks.
3. An auxiliary output taken from the middle of the network, used when you want to make sure the underlying layers learn something useful on their own.
###Code
inp_A = keras.layers.Input(shape=[5])
inp_b = keras.layers.Input(shape=[6])
hidden1 = keras.layers.Dense(30, activation="relu")(inp_b)
hidden2 = keras.layers.Dense(30, activation="relu")(hidden1)
concat = keras.layers.concatenate([inp_A, hidden2])
output = keras.layers.Dense(1)(concat)
aux_out = keras.layers.Dense(1)(hidden2)
model = keras.models.Model(inputs=[inp_A, inp_b], outputs=[output, aux_out])
model.compile(loss=["mse", "mse"], loss_weights=[0.9, 0.1], optimizer=keras.optimizers.SGD(lr=1e-3))
history = model.fit((X_train_A, X_train_B), y_train, epochs=20,
validation_data=((X_valid_A, X_valid_B), y_valid))
total_loss, main_loss, aux_loss = model.evaluate(
[X_test_A, X_test_B], [y_test, y_test])
y_pred_main, y_pred_aux = model.predict([X_new_A, X_new_B])
total_loss, main_loss, aux_loss
###Output
_____no_output_____
###Markdown
Building Dynamic Models Using the Subclassing API (Debugging Network, Chap 12)

Saving and Loading models
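Since the notebook does not show a Subclassing API example, here is a minimal hedged sketch of the general pattern (the layer sizes mirror the wide & deep model above, but the class itself is illustrative, not the notebook's own code):

    class WideAndDeepModel(keras.Model):
        def __init__(self, units=30, **kwargs):
            super().__init__(**kwargs)
            self.hidden1 = keras.layers.Dense(units, activation="relu")
            self.hidden2 = keras.layers.Dense(units, activation="relu")
            self.main_output = keras.layers.Dense(1)

        def call(self, inputs):  # arbitrary Python logic (loops, conditionals) is allowed here
            hidden = self.hidden2(self.hidden1(inputs))
            return self.main_output(hidden)

    # model = WideAndDeepModel()
    # model.compile(loss="mse", optimizer="sgd")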
###Code
model.save("my_keras_model.h5")
model = keras.models.load_model("my_keras_model.h5")
model.summary()
###Output
Model: "model_10"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_27 (InputLayer) [(None, 6)] 0
__________________________________________________________________________________________________
dense_54 (Dense) (None, 30) 210 input_27[0][0]
__________________________________________________________________________________________________
input_26 (InputLayer) [(None, 5)] 0
__________________________________________________________________________________________________
dense_55 (Dense) (None, 30) 930 dense_54[0][0]
__________________________________________________________________________________________________
concatenate_14 (Concatenate) (None, 35) 0 input_26[0][0]
dense_55[0][0]
__________________________________________________________________________________________________
dense_56 (Dense) (None, 1) 36 concatenate_14[0][0]
__________________________________________________________________________________________________
dense_57 (Dense) (None, 1) 31 dense_55[0][0]
==================================================================================================
Total params: 1,207
Trainable params: 1,207
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
Checkpoints
###Code
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="relu", input_shape=[8]),
keras.layers.Dense(30, activation="relu"),
keras.layers.Dense(1)
])
checkpoint_cb = keras.callbacks.ModelCheckpoint("my_keras_model.h5", save_best_only=True)
early_cb = keras.callbacks.EarlyStopping( patience=10, restore_best_weights=True )
model.compile(loss='mse', optimizer='sgd')
history = model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=[checkpoint_cb, early_cb])
# Custom Callback
class PrintValTrainRatioCallback(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
print("\nval/train: {:.2f}".format(logs["val_loss"] / logs["loss"]))
model.compile(loss='mse', optimizer='sgd')
history = model.fit(X_train, y_train, epochs=100,
validation_data=(X_valid, y_valid),
callbacks=[checkpoint_cb, early_cb, PrintValTrainRatioCallback()])
###Output
Epoch 1/100
363/363 [==============================] - 0s 802us/step - loss: 0.3015 - val_loss: 0.3179
val/train: 1.07
Epoch 2/100
363/363 [==============================] - 0s 581us/step - loss: 0.3001 - val_loss: 0.5006
val/train: 1.71
Epoch 3/100
363/363 [==============================] - 0s 540us/step - loss: 0.3073 - val_loss: 0.2954
val/train: 1.01
Epoch 4/100
363/363 [==============================] - 0s 645us/step - loss: 0.2971 - val_loss: 0.5779
val/train: 1.98
Epoch 5/100
363/363 [==============================] - 0s 653us/step - loss: 0.3000 - val_loss: 0.4472
val/train: 1.53
Epoch 6/100
363/363 [==============================] - 0s 605us/step - loss: 0.2948 - val_loss: 0.5281
val/train: 1.81
Epoch 7/100
363/363 [==============================] - 0s 584us/step - loss: 0.2998 - val_loss: 0.3149
val/train: 1.08
Epoch 8/100
363/363 [==============================] - 0s 549us/step - loss: 0.2964 - val_loss: 0.4397
val/train: 1.53
Epoch 9/100
363/363 [==============================] - 0s 592us/step - loss: 0.2869 - val_loss: 0.4493
val/train: 1.57
Epoch 10/100
363/363 [==============================] - 0s 585us/step - loss: 0.2946 - val_loss: 0.5553
val/train: 1.94
Epoch 11/100
363/363 [==============================] - 0s 566us/step - loss: 0.2787 - val_loss: 0.3295
val/train: 1.15
Epoch 12/100
363/363 [==============================] - 0s 563us/step - loss: 0.2859 - val_loss: 0.6196
val/train: 2.16
Epoch 13/100
363/363 [==============================] - 0s 576us/step - loss: 0.2861 - val_loss: 0.6135
val/train: 2.14
###Markdown
Visualization Using TensorBoard
###Code
root_logdir = os.path.join(os.curdir, "my_logs")
tensorboard_cb = keras.callbacks.TensorBoard(root_logdir)
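# --- Hedged sketch (not in the original cell): how the callback would be used ---
# A per-run subdirectory keeps TensorBoard runs separate; get_run_logdir is an
# illustrative helper name, not a Keras API.
# import time
# def get_run_logdir():
#     return os.path.join(root_logdir, time.strftime("run_%Y_%m_%d-%H_%M_%S"))
# tensorboard_cb = keras.callbacks.TensorBoard(get_run_logdir())
# history = model.fit(X_train, y_train, epochs=30,
#                     validation_data=(X_valid, y_valid),
#                     callbacks=[checkpoint_cb, early_cb, tensorboard_cb])
# Afterwards, launch the dashboard with:  tensorboard --logdir=./my_logs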
###Output
_____no_output_____ |
01_AI_for_Medical_Diagnosis/03_week/02_Extract_SubSection.ipynb | ###Markdown
Extract a sub-section

In the assignment you will be extracting sub-sections of the MRI data to train your network. The reason for this is that training on a full MRI scan would be too memory intensive to be practical. To extract a sub-section in the assignment, you will need to write a function to isolate a small "cube" of the data for training. This example is meant to show you how to do such an extraction for 1D arrays. In the assignment you will apply the same logic in 3D.
###Code
import numpy as np
import keras
import pandas as pd
# Define a simple one dimensional "image" to extract from
image = np.array([10,11,12,13,14,15])
image
# Compute the dimensions of your "image"
image_length = image.shape[0]
image_length
###Output
_____no_output_____
###Markdown
Sub-sections

In the assignment, you will define a "patch size" in three dimensions, that will be the size of the sub-section you want to extract. For this exercise, you only need to define a patch size in one dimension.
###Code
# Define a patch length, which will be the size of your extracted sub-section
patch_length = 3
###Output
_____no_output_____
###Markdown
To extract a patch of length `patch_length` you will first define an index at which to start the patch.

Run the next cell to define your start index.
###Code
# Define your start index
start_i = 0
# Define an end index given your start index and patch size
print(f"start index {start_i}")
end_i = start_i + patch_length
print(f"end index {end_i}")
# Extract a sub-section from your "image"
sub_section = image[start_i: end_i]
print("output patch length: ", len(sub_section))
print("output patch array: ", sub_section)
# Add one to your start index
start_i +=1
###Output
start index 0
end index 3
output patch length: 3
output patch array: [10 11 12]
###Markdown
You'll notice when you run the above multiple times that eventually the sub-section returned is no longer of length `patch_length`. In the assignment, your neural network will be expecting a particular sub-section size and will not accept inputs of other dimensions. For the start indices, you will be randomly choosing values and you need to ensure that your random number generator is set up to avoid the edges of your image object.

The next few code cells include a demonstration of how you could determine the constraints on your start index for the simple one dimensional example.
###Code
# Set your start index to 3 to extract a valid patch
start_i = 3
print(f"start index {start_i}")
end_i = start_i + patch_length
print(f"end index {end_i}")
sub_section = image[start_i: end_i]
print("output patch array: ", sub_section)
# Compute and print the largest valid value for start index
print(f"The largest start index for which "
f"a sub section is still valid is "
f"{image_length - patch_length}")
# Compute and print the range of valid start indices
print(f"The range of valid start indices is:")
# Compute valid start indices, note the range() function excludes the upper bound
valid_start_i = [i for i in range(image_length - patch_length + 1)]
print(valid_start_i)
###Output
The range of valid start indices is:
[0, 1, 2, 3]
###Markdown
Random selection of start indices

In the assignment, you will need to randomly select a valid integer for the start index in each of three dimensions. The way to do this is by following the logic above to identify valid start indices and then selecting randomly from that range of valid numbers.

Run the next cell to select a valid start index for the one dimensional example.
###Code
# Choose a random start index, note the np.random.randint() function excludes the upper bound.
start_i = np.random.randint(image_length - patch_length + 1)
print(f"randomly selected start index {start_i}")
# Randomly select multiple start indices in a loop
for _ in range(10):
start_i = np.random.randint(image_length - patch_length + 1)
print(f"randomly selected start index {start_i}")
###Output
randomly selected start index 2
randomly selected start index 0
randomly selected start index 0
randomly selected start index 3
randomly selected start index 0
randomly selected start index 3
randomly selected start index 3
randomly selected start index 0
randomly selected start index 0
randomly selected start index 2
###Markdown
Background Ratio

Another thing you will be doing in the assignment is to compute the ratio of background to edema and tumorous regions. You will be provided with a file containing labels with these categories:

* 0: background
* 1: edema
* 2: non-enhancing tumor
* 3: enhancing tumor

Let's try to demonstrate this in 1-D to get some intuition on how to implement it in 3D later in the assignment.
###Code
# We first simulate input data by defining a random patch of length 16. This will contain labels
# with the categories (0 to 3) as defined above.
patch_labels = np.random.randint(0, 4, (16))
print(patch_labels)
# A straightforward approach to get the background ratio is
# to count the number of 0's and divide by the patch length
bgrd_ratio = np.count_nonzero(patch_labels == 0) / len(patch_labels)
print("using np.count_nonzero(): ", bgrd_ratio)
bgrd_ratio = len(np.where(patch_labels == 0)[0]) / len(patch_labels)
print("using np.where(): ", bgrd_ratio)
# However, take note that we'll use our label array to train a neural network
# so we can opt to compute the ratio a bit later after we do some preprocessing.
# First, we convert the label's categories into one-hot format so it can be used to train the model
patch_labels_one_hot = keras.utils.np_utils.to_categorical(patch_labels, num_classes=4)
print(patch_labels_one_hot)
###Output
[[1. 0. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 0. 0. 1.]
[0. 0. 0. 1.]
[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 0. 1.]
[0. 0. 1. 0.]
[1. 0. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 0. 1.]
[0. 0. 0. 1.]
[1. 0. 0. 0.]
[1. 0. 0. 0.]]
###Markdown
**Note**: We hardcoded the number of classes to 4 in our simple example above. In the assignment, you should take into account that the label file can have a different number of categories.
###Code
# Let's convert the output to a dataframe just so we can see the labels more clearly
pd.DataFrame(patch_labels_one_hot, columns=['background', 'edema', 'non-enhancing tumor', 'enhancing tumor'])
# What we're interested in is the first column because that
# indicates if the element is part of the background
# In this case, 1 = background, 0 = non-background
print("background column: ", patch_labels_one_hot[:,0])
# we can compute the background ratio by counting the number of 1's
# in the said column divided by the length of the patch
bgrd_ratio = np.sum(patch_labels_one_hot[:,0])/ len(patch_labels)
print("using one-hot column: ", bgrd_ratio)
###Output
using one-hot column: 0.3125
|
text/job_titles/job_title_generator.ipynb | ###Markdown
Parse Training Data
###Code
csv_file = "./data/Dice_US_jobs.csv"
csv_data = amlutils.csv_utils.parse_csv_file(csv_file, quotechar='"', encoding="latin-1")
headers = csv_data[0]
csv_data = csv_data[1:]
START_TOKEN = "<start_token>"
END_TOKEN = "<end_token>"
print(headers)
print(len(csv_data))
import re
def striphtml(data):
p = re.compile(r'<.*?>')
return p.sub('', data)
def stripnewlinechars(data):
data = data.replace(r"\n", "")
data = data.replace(r"\r", "")
return data
def add_start_and_end_token(string, start_token=START_TOKEN, end_token=END_TOKEN):
return "{} {} {}".format(start_token, string, end_token)
def preprocess_sentence(sentence):
sentence = sentence.lower()
s = striphtml(sentence)
return add_start_and_end_token(s)
job_description_index = headers.index("job_description")
job_title_index = headers.index("job_title")
job_titles = [row[job_title_index] for row in csv_data]
job_descriptions = [row[job_description_index] for row in csv_data]
start_end_token_descriptions = [preprocess_sentence(desc) for desc in job_descriptions]
start_end_token_job_titles = [preprocess_sentence(title) for title in job_titles]
print(start_end_token_descriptions[0], start_end_token_job_titles[0])
def tokenize(data, maxlen=None):
tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='#$%&()*+,-./:;=?@[\\]^`{|}~\t\n')
tokenizer.fit_on_texts(data)
data_sequences = tokenizer.texts_to_sequences(data)
padded_sequences = tf.keras.preprocessing.sequence.pad_sequences(
data_sequences,
maxlen=maxlen,
padding="post",
)
return padded_sequences, tokenizer
train_sequences, train_tokenizer = tokenize(start_end_token_descriptions, maxlen=4000)
label_sequences, label_tokenizer = tokenize(start_end_token_job_titles)
max_length_targ, max_length_inp = label_sequences.shape[1], train_sequences.shape[1]
print(max_length_targ, max_length_inp)
BUFFER_SIZE = len(train_sequences)
BATCH_SIZE = 64
steps_per_epoch = len(train_sequences) // BATCH_SIZE
embedding_dim = 128
units = 256
vocab_inp_size = len(train_tokenizer.word_index)+1
vocab_tar_size = len(label_tokenizer.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((train_sequences, label_sequences)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(
self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform',
)
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
# we are doing this to broadcast addition along the time axis to calculate the score
query_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(query_with_time_axis) + self.W2(values)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = BahdanauAttention(10)
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = BahdanauAttention(self.dec_units)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _, _ = decoder(
tf.random.uniform((BATCH_SIZE, 1)),
sample_hidden, sample_output,
)
print("Decoder output shape: (batch_size, vocab size) {}".format(sample_decoder_output.shape))
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction="none")
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
checkpoint_dir = "./models/dice_model/"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([label_tokenizer.word_index[START_TOKEN]] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
###Output
_____no_output_____
###Markdown
Train Model
###Code
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print("Epoch {} Batch {} Loss {:.4f}".format(
epoch + 1,
batch,
batch_loss.numpy()),
)
# saving (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print("Epoch {} Loss {:.4f}".format(epoch + 1, total_loss / steps_per_epoch))
print("Time taken for 1 epoch {} sec\n".format(time.time() - start))
###Output
_____no_output_____
###Markdown
Evaluate Model

Restore from checkpoint
###Code
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
def get_job_title(sentence):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
words = [word for word in sentence.split(" ") if word in train_tokenizer.word_index]
inputs = [train_tokenizer.word_index[i] for i in words]
inputs = tf.keras.preprocessing.sequence.pad_sequences(
[inputs],
maxlen=max_length_inp,
padding="post",
)
inputs = tf.convert_to_tensor(inputs)
result = ""
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([label_tokenizer.word_index[START_TOKEN]], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
# storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += label_tokenizer.index_word[predicted_id] + ' '
if label_tokenizer.index_word[predicted_id] == END_TOKEN:
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result
job_desc = "Minimum Required Skills:Civil Engineering, AutoCAD, Site Layout, Conceptual DesignThis role will require an expertise in Site Layout and Grading for Commercial Builds. Candidates with experience working on federal projects is a huge plus!Our contractor is a full service contractor, providing; architectural and engineering design; environmental consulting; remediation; and operations and maintenance services to municipal, government, and private sector Clients throughout New England and adjacent states.What You Will Be DoingThe Civil Engineer will be responsible for maintaining client relationships through the successful management of projects and/or leading design efforts on a project. The Civil Engineer will conduct a wide variety of engineering tasks including conceptual designs, engineering reports/studies, detailed designs including drawings and specifications, and cost estimates.- Engineering of site layout, grading, drainage- Use AutoCAD for Engineering Design- Prepare/support proposals to support base line engineering workload.- Coordinate execution of engineering field work on specific projects.- Coordinate the preparation of project design submittals for permitting, bidding, and/or construction. Responsible for being a Civil Designer of Record.- Coordinate construction inspection services and/or construction phase services, as required. Responsible for conducting the construction inspection services and/or construction phase services for the civil engineering disciplinWhat You Need for this PositionB.S. Degree in Civil Engineering required.Professional Registration in at least one New England state, preferably Massachusetts.Proficiency in AutoCAD required.A minimum of 5 years of experience. - Civil Engineering - AutoCAD - Site Layout - Conceptual Design - EstimatingWhat's In It for YouWe offer excellent compensation packages and benefits, including medical, dental, and vision insurance, and an attractive 401(k) plan.So, if you are a Civil Engineer, P.E. with experience, please apply today!Applicants must be authorized to work in the U.S.Please apply directly to by clicking 'Click Here to Apply' with your Word resume!Looking forward to receiving your resume and going over the position in more detail with you.- Not a fit for this position? Click the link at the bottom of this email to search all of our open positions.Looking forward to receiving your resume!CyberCodersCyberCoders, Inc is proud to be an Equal Opportunity EmployerAll qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or any other characteristic protected by law.Your Right to Work - In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification document form upon hire.Copyright å© 1999 - 2016 . CyberCoders, Inc. All rights reserved."
preprocess_sentence(job_desc)
job_titles_predicted = []
for job_desc in tqdm(job_descriptions):
jt = get_job_title(job_desc.replace("\r", " ").replace("\n", " "))
job_titles_predicted.append(jt)
###Output
_____no_output_____
###Markdown
Compute BLEU Score
###Code
def compute_bleu_n_gram(groundtruths, prediction, n=1):
groundtruth_list = [gs.split(" ") for gs in groundtruths]
prediction_list = prediction.split(" ")
return compute_bleu_score(groundtruth_list, prediction_list, np.eye(n)[n-1])
def compute_bleu_score(groundtruth_list, prediction_list, weights):
return nltk.translate.bleu_score.sentence_bleu(
groundtruth_list,
prediction_list,
weights=weights,
)
metrics = []
for i, jt_predicted in enumerate(job_titles_predicted):
prediction = jt_predicted.replace("<end_token>", "").replace("<start_token>", "").strip().lower()
bl_1 = compute_bleu_n_gram([job_titles[i].strip().lower()], prediction, n=1)
bl_2 = compute_bleu_n_gram([job_titles[i].strip().lower()], prediction, n=2)
metrics.append((bl_1, bl_2, 1 if job_titles[i].lower() == prediction.strip().lower() else 0))
bleu_1s, bleu_2s, matches = zip(*metrics)
len(bleu_1s), len(bleu_2s), len(matches)
# Metrics
print(f"Exact matches {sum(matches)}")
print(f"BLEU 1 {np.mean(bleu_1s)}")
print(f"BLEU 2 {np.mean(bleu_2s)}")
###Output
Exact matches 6660
BLEU 1 0.5345026429116695
BLEU 2 0.3973641292158458
|
Basics/phase_diagram_flat_vs_dimpled/investigate_order_range.ipynb | ###Markdown
Order value minimums are not getting close to zero

Investigate why.
###Code
from particletracking import dataframes, statistics
with dataframes.DataStore("/media/data/Data/FirstOrder/PhaseDiagram/DimpledPlateFeb2021/1600.hdf5") as data:
df = data.df
df
f = df.loc[45000]
import matplotlib.pyplot as plt
%matplotlib auto
sp.delaunay_plot_2d(sp.Delaunay(f[['x', 'y']].values))
plt.scatter(f.x, f.y, c=f.order_long)
plt.plot(f.x[f.neighbors==0], f.y[f.neighbors==0], 'x')
# plt.colorbar()
plt.hist(f.order[f.neighbors==2], bins=100)
f.loc[f.neighbors==1]['order']
import numpy as np
import scipy.spatial as sp
def order_process(features, threshold=2.3):
points = features[['x', 'y', 'r']].values
threshold *= np.mean(points[:, 2])
orders, neighbors = order_and_neighbors(points[:, :2], threshold)
features['order_r'] = np.real(orders).astype('float32')
features['order_i'] = np.imag(orders).astype('float32')
features['neighbors'] = neighbors
return features
def order_process_mean(features, threshold=2.3):
points = features[['x_mean', 'y_mean', 'r']].values
threshold *= np.mean(points[:, 2])
orders, neighbors = order_and_neighbors(points[:, :2], threshold)
features['order_r_mean'] = np.real(orders).astype('float32')
features['order_i_mean'] = np.imag(orders).astype('float32')
features['neighbors_mean'] = neighbors
return features
def order_and_neighbors(points, threshold):
list_indices, point_indices = find_delaunay_indices(points)
vectors = find_vectors(points, list_indices, point_indices)
filtered = filter_vectors(vectors, threshold)
angles = calculate_angles(vectors)
orders, neighbors = calculate_orders(angles, list_indices, filtered)
neighbors = np.real(neighbors).astype('uint8')
return orders, neighbors
def find_delaunay_indices(points):
tess = sp.Delaunay(points)
return tess.vertex_neighbor_vertices
def find_vectors(points, list_indices, point_indices):
repeat = list_indices[1:] - list_indices[:-1]
return points[point_indices] - np.repeat(points, repeat, axis=0)
def filter_vectors(vectors, threshold):
length = np.linalg.norm(vectors, axis=1)
return length < threshold
def calculate_angles(vectors):
angles = np.angle(vectors[:, 0] + 1j * vectors[:, 1])
return angles
def calculate_orders(angles, list_indices, filtered):
# calculate summand for every angle
step = np.exp(6j * angles)
# set summand to zero if bond length > threshold
step *= filtered
list_indices -= 1
# sum the angles and count neighbours for each particle
stacked = np.cumsum((step, filtered), axis=1)[:, list_indices[1:]]
stacked[:, 1:] = np.diff(stacked, axis=1)
neighbors = stacked[1, :]
indxs = neighbors != 0
orders = np.zeros_like(neighbors)
orders[indxs] = stacked[0, indxs] / neighbors[indxs]
return orders, neighbors
points = f[['x', 'y', 'r']]
threshold = 4 * np.mean(points.r)
points = points.values
list_indices, point_indices = find_delaunay_indices(points)
vectors = find_vectors(points, list_indices, point_indices)
vectors.shape
list_indices
x_repeat = np.repeat(points[:, 0], list_indices[1:]-list_indices[:-1])
y_repeat = np.repeat(points[:, 1], list_indices[1:]-list_indices[:-1])
plt.quiver(x_repeat, y_repeat, vectors[:, 0], vectors[:, 1], angles='xy', scale_units='xy', scale=1)
filtered = filter_vectors(vectors, threshold)
angles = calculate_angles(vectors)
angles
vectors, np.linalg.norm(vectors, axis=1)
filtered
step = np.exp(6j*angles)
step.tolist()
step *= filtered
step.tolist()
list_indices -= 1
stacked = np.cumsum((step, filtered), axis=1)[:, list_indices[1:]]
stacked
stacked[:, 1:] = np.diff(stacked, axis=1)
stacked
neighbors = stacked[1, :]
neighbors.real
stacked.shape
indxs = neighbors != 0
indxs
points.shape
list_indices[1:]
stacked_temp = np.cumsum((step, filtered), axis=1)
stacked_temp[:, 1:] = np.diff(stacked_temp, axis=1)
neighbors = stacked[1, :]
neighbors
indxs = neighbors != 0
indxs
orders = np.zeros_like(neighbors)
orders[indxs] = stacked[0, indxs] / neighbors[indxs]
sorter = np.argsort(orders)
plt.plot(orders[sorter])
plt.plot(neighbors[sorter])
orders_r = np.real(orders).astype('float32')
orders_i = np.imag(orders).astype('float32')
orders_mag = np.abs(orders_r + 1j*orders_i)
plt.plot(orders_mag[sorter])
plt.plot(neighbors[sorter])
x = np.linspace(0, 2*np.pi, 100)
y = np.exp(6j*x)
plt.plot(x, np.abs(y))
plt.plot(x, y**2)
def order_on_angles(a):
steps = np.exp(6j*a)
total = np.sum(steps)
return total / len(a)
a = np.linspace(0, 2*np.pi, 6, endpoint=False)
a
b = np.random.rand(6)
b
n = np.arange(1, 20, 1)
o = [order_on_angles(a + b/ni) for ni in n]
plt.plot(n, np.abs(o))
o
order = f.order * f.neighbors / 6
plt.hist(order, bins=100)
order = order_process(points)
order = order_process(f[['x', 'y', 'r']])
order_long = order_process(f[['x', 'y', 'r']], 5)
order['order'] = np.abs(order['order_r']+1j*order['order_i'])
order_long['order'] = np.abs(order_long['order_r']+1j*order_long['order_i'])
plt.hist(order.order, bins=100, alpha=0.5)
plt.hist(order_long.order, bins=100, alpha=0.5)
order_long
plt.subplot(1, 2, 1)
plt.scatter(order_long.x, order_long.y, c=order_long.order)
plt.subplot(1, 2, 2)
plt.scatter(order.x, order.y, c=order.order)
cutoffs = [2, 3, 4, 5, 6, 7, 8, 9, 10]
orders = [order_process(f[['x', 'y', 'r']], c) for c in cutoffs]
for c, o in zip(cutoffs, orders):
o.order = np.abs(o.order_r + 1j*o.order_i)
plt.hist(o.order, bins=100, histtype='step', label=c)
plt.legend()
list_indices
repeat = list_indices[1:] - list_indices[:-1]
repeat
a = points[point_indices][:7]
end_points = np.repeat(points, repeat, axis=0)
b = end_points[:7]
plt.plot(a[:, 0], a[:, 1], '.')
plt.plot(b[:, 0], b[:, 1], '.')
plt.quiver(b[:, 0], b[:, 1], vectors[:, 0], vectors[:, 1], scale_units='xy', scale=1, angles='xy')
cir = plt.Circle((b[0, 0], b[0, 1]), 72, fill=False)
plt.gca().add_patch(cir)
vectors = a - b
###Output
_____no_output_____ |
week_4/.ipynb_checkpoints/day_15_pandas-checkpoint.ipynb | ###Markdown
Pandas, part 2

By the end of this talk, you will be able to
- modify/clean columns
- evaluate the runtime of your scripts
- merge and append data frames

By the end of this talk, you will be able to
- **modify/clean columns**
- **evaluate the runtime of your scripts**
- merge and append data frames

What is data cleaning?
- the data you scrape from sites are often not in a format you can directly work with
    - the temp column in weather.csv contains the temperature value but also other strings
    - the year column in imdb.csv contains the year the movie came out but also - or () and roman numbers
- data cleaning means that you bring the data in a format that's easier to analyze/visualize

How is it done?
- you read in your data into a pandas data frame and either
    - modify the values of a column
    - create a new column that contains the modified values
- either works, I'll show you how to do both
- ADVICE: when you save the modified data frame, it is usually good practice to not overwrite the original csv or excel file that you scraped.
    - save the modified data into a new file instead

Ways to modify a column
- there are several ways to do this
    - with a for loop
    - with a list comprehension
    - using the .apply method
- we will compare run times
    - investigate which approach is faster
    - important when you work with large datasets (>1,000,000 lines)

The task: runtime column in imdb.csv
- the column is a string in the format 'n min' where n is the length of the movie in minutes
- for plotting purposes, it is better if the runtime is not a string but a number (float or int)
    - you can't create a histogram of runtime using strings
- task: clean the runtime column and convert it to float

Approach 1: for loop
###Code
# read in the data
import pandas as pd
df_imdb = pd.read_csv('../week_3/data/imdb.csv')
print(df_imdb.head())
import time
start = time.time() # start the clock
for i in range(100): # repeat everything 100 times to get better estimate of elapsed time
# the actual code to clean the runtime column comes here
runtime_lst = []
for x in df_imdb['runtime']:
if type(x) == str:
runtime = float(x[:-4].replace(',',''))
else:
runtime = 0e0
runtime_lst.append(runtime)
df_imdb['runtime min'] = runtime_lst
end = time.time() # stop the timer
print('cpu time = ',end-start,'sec')
###Output
_____no_output_____
###Markdown
Approach 2: list comprehension
###Code
start = time.time() # start the clock
for i in range(100): # repeat everything 100 times to get better estimate of elapsed time
# the actual code to clean the runtime column comes here
df_imdb['runtime min'] = [float(x[:-4].replace(',','')) if type(x) == str else 0e0 for x in df_imdb['runtime']]
end = time.time() # stop the timer
print('cpu time = ',end-start,'sec')
###Output
_____no_output_____
###Markdown
Approach 3: the .apply method
###Code
def clean_runtime(x):
if type(x) == str:
runtime = float(x[:-4].replace(',',''))
else:
runtime = 0e0
return runtime
start = time.time() # start the clock
for i in range(100): # repeat everything 100 times to get better estimate of elapsed time
# the actual code to clean the runtime column comes here
df_imdb['runtime min'] = df_imdb['runtime'].apply(clean_runtime)
end = time.time() # stop the timer
print('cpu time = ',end-start,'sec')
###Output
_____no_output_____
###Markdown
Summary
- the for loop is slower
- the list comprehension and the apply method are equally quick
- it is down to personal preference to choose between list comprehension and .apply
- **the same ranking is not guaranteed for a different task!**
- **always try a few different approaches if runtime is an issue (you work with large data)!**

Exercise 1

Clean the `temp` column in the `../week_3/data/weather.csv` file. The new temperature column should be an integer or a float. Work through at least one of the approaches we discussed.

By the end of this talk, you will be able to
- modify/clean columns
- evaluate the runtime of your scripts
- **merge and append data frames**

How to merge dataframes?

Merge - data are distributed in multiple files
###Code
# We have two datasets from two hospitals
hospital1 = {'ID':['ID1','ID2','ID3','ID4','ID5','ID6','ID7'],'col1':[5,8,2,6,0,2,5],'col2':['y','j','w','b','a','b','t']}
df1 = pd.DataFrame(data=hospital1)
print(df1)
hospital2 = {'ID':['ID2','ID5','ID6','ID10','ID11'],'col3':[12,76,34,98,65],'col2':['q','u','e','l','p']}
df2 = pd.DataFrame(data=hospital2)
print(df2)
# we are interested in only patients from hospital1
#df_left = df1.merge(df2,how='left',on='ID') # IDs from the left dataframe (df1) are kept
#print(df_left)
# we are interested in only patients from hospital2
#df_right = df1.merge(df2,how='right',on='ID') # IDs from the right dataframe (df2) are kept
#print(df_right)
# we are interested in patients who were in both hospitals
#df_inner = df1.merge(df2,how='inner',on='ID') # merging on IDs present in both dataframes
#print(df_inner)
# we are interested in all patients who visited at least one of the hospitals
#df_outer = df1.merge(df2,how='outer',on='ID') # merging on IDs present in any dataframe
#print(df_outer)
###Output
_____no_output_____
###Markdown
How to append dataframes?

Append - new data comes in over a period of time, e.g. one file per month/quarter/fiscal year etc. You want to combine these files into one data frame.
###Code
#df_append = df1.append(df2) # note that rows with ID2, ID5, and ID6 are duplicated! Indices are duplicated too.
#print(df_append)
df_append = df1.append(df2,ignore_index=True) # note that rows with ID2, ID5, and ID6 are duplicated!
#print(df_append)
d3 = {'ID':['ID23','ID94','ID56','ID17'],'col1':['rt','h','st','ne'],'col2':[23,86,23,78]}
df3 = pd.DataFrame(data=d3)
#print(df3)
df_append = df1.append([df2,df3],ignore_index=True) # multiple dataframes can be appended to df1
print(df_append)
###Output
_____no_output_____
###Markdown
Exercise 2
- Create three data frames from raw_data_1, 2, and 3.
- Append the first two data frames and assign it to df_append.
- Merge the third data frame with df_append such that only subject_ids from df_append are present.
    - Assign the new data frame to df_merge.
    - How many rows and columns do we have in df_merge?
###Code
raw_data_1 = {
'subject_id': ['1', '2', '3', '4', '5'],
'first_name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'last_name': ['Anderson', 'Ackerman', 'Ali', 'Aoni', 'Atiches']}
raw_data_2 = {
'subject_id': ['6', '7', '8', '9', '10'],
'first_name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan']}
raw_data_3 = {
'subject_id': ['1', '2', '3', '4', '5', '7', '8', '9', '10', '11'],
'test_id': [51, 15, 15, 61, 16, 14, 15, 1, 61, 16]}
# add your code here:
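# One possible solution, sketched and left commented out so you can attempt it first:
# df_1 = pd.DataFrame(data=raw_data_1)
# df_2 = pd.DataFrame(data=raw_data_2)
# df_3 = pd.DataFrame(data=raw_data_3)
# df_append = df_1.append(df_2, ignore_index=True)
# df_merge = df_append.merge(df_3, how='left', on='subject_id')
# print(df_merge.shape)  # (rows, columns)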
###Output
_____no_output_____ |
Python/ML_basic/2.Data Handling/3.Pandas/11_merge_concat.ipynb | ###Markdown
Concat
###Code
raw_data = {'subject_id': ['1', '2', '3', '4', '5'],
'first_name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'last_name': ['Anderson', 'Ackerman', 'Ali', 'Aoni', 'Atiches']}
df_a = pd.DataFrame(raw_data, columns=['subject_id', 'first_name', 'last_name'])
df_a
raw_data = {'subject_id': ['4', '5', '6', '7', '8'],
'first_name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan']}
df_b = pd.DataFrame(raw_data, columns=['subject_id', 'first_name', 'last_name'])
df_b
df_new = pd.concat([df_a, df_b]) # concatenates row-wise by default (like rbind)
df_new.reset_index() # reset the index
df_a.append(df_b) # row-bind df_b onto df_a
df_new = pd.concat([df_a, df_b], axis=1) # the axis=1 option column-binds (like cbind)
df_new.reset_index()
###Output
_____no_output_____
###Markdown
case
###Code
import os
files = [file_name for file_name in os.listdir("data") if file_name.endswith("xlsx")]
files.remove("excel-comp-data.xlsx")
# files.remove('df_routes.xlsx')
files
df_list = [pd.read_excel("data/" + df_filename) for df_filename in files]
status = df_list[0]
sales = pd.concat(df_list[1:])
status.head()
sales.head()
merge_df = pd.merge(status, sales, how="inner", on="account number")
merge_df.head()
merge_df.groupby(["status","name_x"])["quantity","ext price"].sum().reset_index().sort_values(
by=["status","quantity"], ascending=False)
###Output
_____no_output_____ |
extractingImage.ipynb | ###Markdown
Retrieving images from WTF Fun Facts across multiple pages
###Code
import os
import requests
from bs4 import BeautifulSoup
path = os.getcwd()
path = path + "//WTF"
os.mkdir(path)
###Output
_____no_output_____
###Markdown
Above we have created a folder to save the images we download from the WTF Fun Facts site; we are going to get a lot of great fun facts.

Requesting WebSite Function
###Code
def request_website(url):
html_res = requests.get(url)
    bsObj = BeautifulSoup(html_res.content, 'html.parser')  # specify a parser explicitly
image_tag = bsObj.find_all("img")
r = set()
for img in image_tag:
if img['src'].startswith("http"):
r.add(img['src'])
return list(r)
def get_image(url,path):
temp = requests.get(url)
name = path + "\\" + url.split('/')[-1]
with open(name,'wb') as image:
image.write(temp.content)
for i in range(1,11):
url = 'https://wtffunfact.com/page{}'.format(i)
src = request_website(url)
for l in src:
get_image(l,path)
###Output
_____no_output_____ |
examples/glmnet-examples.ipynb | ###Markdown
GlmNet
###Code
# Imports assumed by this notebook (not shown in the excerpt): NumPy, matplotlib and
# scikit-learn are standard; GLMNet, Gaussian and Bernoulli come from this repository's
# own glmnet implementation, so the exact import path below is an assumption.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from glmnet import GLMNet, Gaussian, Bernoulli

X = np.empty(shape=(100000, 7))
X[:, 0] = 1.0
X[:, 1] = np.random.normal(size=100000)
X[:, 2] = np.random.normal(size=100000)
X[:, 3] = np.random.normal(size=100000)
X[:, 4] = np.random.normal(size=100000)
X[:, 5] = np.random.normal(size=100000)
y = 1 + 0.5*X[:, 1] - 0.75*X[:, 2] + 2*X[:, 3] - 1.0*X[:, 4] + 1.0*X[:, 5] + np.random.normal(scale=0.2, size=100000)
s = StandardScaler()
X = s.fit_transform(X)
X[:, 0] = 1.0
lambdas = np.logspace(np.log10(0.00001), np.log10(10000), num=50)
lambdas = lambdas[::-1]
gnet = GLMNet(family=Gaussian(), alpha=1.0, lambdas=lambdas, max_iter=50)
gnet.fit(X, y)
gnet._enets[-1][-1].coef_
coef_paths = np.row_stack([[g.coef_ for g in enets][-1] for enets in gnet._enets])
fig, ax = plt.subplots(figsize=(14, 4))
t = np.log(lambdas)
for idx in range(0, coef_paths.shape[1]):
ax.plot(t, coef_paths[:, idx], label=str(idx))
ax.legend()
lr = LinearRegression()
lr.fit(X, y)
lr.coef_
granular_paths = [np.row_stack([g.coef_ for g in enet_list]) for enet_list in gnet._enets]
quad_approx_boundaries = np.cumsum([0] + [path.shape[0] for path in granular_paths])
long_coef_path = np.row_stack(granular_paths)
fig, ax = plt.subplots(figsize=(20, 6))
t = np.arange(long_coef_path.shape[0])
for idx in range(1, long_coef_path.shape[1]):
ax.plot(t, long_coef_path[:, idx])
for boundary in quad_approx_boundaries[:-1]:
ax.axvline(x=boundary, ymin=0, ymax=1, color="grey", alpha=0.2)
gnet._enets[0][0]._coef_path
long_coef_path = np.row_stack([np.row_stack([e._coef_path for e in enet_list]) for enet_list in gnet._enets])
fig, ax = plt.subplots(figsize=(20, 6))
t = np.arange(long_coef_path.shape[0])
for idx in range(1, long_coef_path.shape[1]):
ax.plot(t, long_coef_path[:, idx])
###Output
_____no_output_____
###Markdown
Logit Net
###Code
X = np.empty(shape=(100000, 6))
X[:, 0] = 1.0
X[:, 1] = np.random.normal(size=100000)
X[:, 2] = np.random.normal(size=100000)
X[:, 3] = 0.25*X[:, 1] + np.sqrt(1 - 0.25**2)*np.random.normal(size=100000)
X[:, 4] = 0.25*X[:, 1] + np.sqrt(1 - 0.25**2)*np.random.normal(size=100000)
#X[:, 3] = np.random.normal(size=100000)
#X[:, 4] = np.random.normal(size=100000)
X[:, 5] = np.random.normal(size=100000)
lp = 0.05*X[:, 1] - 0.1*X[:, 2] -0.1*X[:, 3] + 0.075*X[:, 5]
p = 1 / (1 + np.exp(-lp))
y = np.random.binomial(1, p=p, size=100000)
#s = StandardScaler()
#X = s.fit_transform(X)
#X[:, 0] = 1.0
lambdas = np.logspace(np.log10(0.000001), np.log10(0.2), num=50)
lambdas = lambdas[::-1]
gnet = GLMNet(family=Bernoulli(), alpha=0.2, lambdas=lambdas)
gnet.fit(X, y)
coef_paths = np.row_stack([[g.coef_ for g in enets][-1] for enets in gnet._enets])
fig, ax = plt.subplots(figsize=(14, 4))
t = lambdas
for idx in range(1, coef_paths.shape[1]):
ax.plot(t, coef_paths[:, idx])
granular_paths = [np.row_stack([g.coef_ for g in enet_list]) for enet_list in gnet._enets]
quad_approx_boundaries = np.cumsum([0] + [path.shape[0] for path in granular_paths])
long_coef_path = np.row_stack(granular_paths)
fig, ax = plt.subplots(figsize=(20, 6))
t = np.arange(long_coef_path.shape[0])
for idx in range(1, long_coef_path.shape[1]):
ax.plot(t, long_coef_path[:, idx])
for boundary in quad_approx_boundaries[:-1]:
ax.axvline(x=boundary, ymin=0, ymax=1, color="grey", alpha=0.2)
###Output
_____no_output_____ |
02-Cervical Cancer/.ipynb_checkpoints/Cervical Cancer - EDA-checkpoint.ipynb | ###Markdown
From the above graphs it can be seen that the Hormonal Contraceptives column has the highest number of ones, which indicates that this might be an important key feature in detecting cervical cancer. So let us concentrate more on this feature in future analysis.
###Code
g = sn.PairGrid(df,
y_vars=['Hormonal Contraceptives'],
x_vars= target_df,
aspect=.75, size=3.5)
g.map(sn.barplot, palette="pastel");
# convert_objects() has been removed from recent pandas; pd.to_numeric is the modern
# equivalent (errors='coerce' turns unparseable values into NaN, as convert_numeric did)
df['Number of sexual partners'] = round(pd.to_numeric(df['Number of sexual partners'], errors='coerce'))
df['First sexual intercourse'] = pd.to_numeric(df['First sexual intercourse'], errors='coerce')
df['Num of pregnancies'] = round(pd.to_numeric(df['Num of pregnancies'], errors='coerce'))
df['Smokes'] = pd.to_numeric(df['Smokes'], errors='coerce')
df['Smokes (years)'] = pd.to_numeric(df['Smokes (years)'], errors='coerce')
df['Hormonal Contraceptives'] = pd.to_numeric(df['Hormonal Contraceptives'], errors='coerce')
df['Hormonal Contraceptives (years)'] = pd.to_numeric(df['Hormonal Contraceptives (years)'], errors='coerce')
df['IUD (years)'] = pd.to_numeric(df['IUD (years)'], errors='coerce')
df['Age'].hist(bins=70)
plt.xlabel('Age')
plt.ylabel('Count')
print('Mean age of the Women facing the risk of Cervical cancer',df['Age'].mean())
for feature in target_df:
as_fig = sn.FacetGrid(df,hue=feature,aspect=5)
as_fig.map(sn.kdeplot,'Age',shade=True)
oldest = df['Age'].max()
as_fig.set(xlim=(0,oldest))
as_fig.add_legend()
###Output
_____no_output_____
###Markdown
From the above plots it can be seen that the mean age of the women facing the risk of cervical cancer is 26. Also, women aged 20 to 35 have the highest chance of developing cervical cancer. The peaks around age 50 and the further extension of the density plot indicate that some women face the risk of cervical cancer even at that age.
###Code
for feature in target_df:
sn.factorplot(x='Number of sexual partners',y='Age',hue=feature,data=df,aspect=1.95,kind='bar');
sn.distplot(pd.to_numeric(df['First sexual intercourse'], errors='coerce'))
g = sn.PairGrid(df,
y_vars=['Smokes (years)'],
x_vars= target_df,
aspect=.75, size=3.5)
g.map(sn.stripplot, palette="winter");
for i in target_df:
sn.violinplot(x=i ,y='IUD (years)', data=df)
plt.show()
#smokers_zone = ['Smokes','Smokes (years)','Smokes (packs/year)']
#for i in smokers_zone :
df['Age'].hist(bins=70)
plt.xlabel('Age')
plt.ylabel('Smokes')
#print('Mean age of the Women facing the risk of Cervical cancer',df['Age'].mean())
df['Age'].hist(bins=70)
plt.xlabel('Age')
plt.ylabel('Smokes (years)')
df['Age'].hist(bins=70)
plt.xlabel('Age')
plt.ylabel('Smokes(packs/year)')
for i in target_df:
sn.jointplot(x=i, y="Age", data=df, kind="kde");
plt.show();
sn.set(style="white")
# Compute the correlation matrix
corr = df.corr()
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(20, 10))
# Generate a custom diverging colormap
cmap = sn.diverging_palette(220, 10, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sn.heatmap(corr, mask=mask, cmap=cmap, vmax= .3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
###Output
_____no_output_____ |
python-statatics-tutorial/basic-theme/python-language/struct.ipynb | ###Markdown
struct module

This module performs conversions between Python values and C structs represented as Python strings.
###Code
from struct import *
###Output
_____no_output_____
###Markdown
1 Convert

Format | C Type | Python | byte-size
--- | --- | --- | ---
x | pad byte | no value | 1
c | char | string of length 1 | 1
b | signed char | integer | 1
B | unsigned char | integer | 1
? | Bool | bool | 1
h | short | integer | 2
H | unsigned short | integer | 2
i | int | integer | 4
I | unsigned int | integer or long | 4
l | long | integer | 4
L | unsigned long | long | 4
q | long long | long | 8
Q | unsigned long long | long | 8
f | float | float | 4
d | double | float | 8
s | char[] | string | 1
p | char[] | string | 1

2 byte order

Character | Byte order | Size and alignment
--- | --- | ---
@ | native | native
= | native | standard
< | little-endian | standard
\> | big-endian | standard
! | network (= big-endian) | standard

3 function
###Code
pack('hhl', 1, 2, 3)
unpack('hhl', b'\x01\x00\x02\x00\x00\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00')
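# Hedged additions (not in the original cell) illustrating the tables above; calcsize
# follows native alignment, so the value can differ between platforms.
calcsize('hhl')                 # typically 16 on a 64-bit Linux/macOS build
pack('<hhl', 1, 2, 3)           # little-endian, standard sizes -> 8 bytes
unpack('<hhl', b'\x01\x00\x02\x00\x03\x00\x00\x00')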
###Output
_____no_output_____ |
Data Visualization/8 Creating your own notebooks/creating-your-own-notebooks.ipynb | ###Markdown
Congratulations for making it to the end of the micro-course!In this final tutorial, you'll learn an efficient workflow that you can use to continue creating your own stunning data visualizations on the Kaggle website. WorkflowBegin by navigating to the site for Kaggle Notebooks:> https://www.kaggle.com/kernelsThen, in the top right corner, click on **[New Notebook]**.This opens a pop-up window.Then, click on **[Create]**. (Don't change the default settings: so, **"Python"** should appear under "Select language", and you should have **"Notebook"** selected under "Select type".)This opens a notebook with some default code. **_Please erase this code, and replace it with the code in the cell below._** (_This is the same code that you used in all of the exercises to set up your Python environment._)
###Code
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
###Output
Setup Complete
|
Chapter06/Chapter6_HousingPricePredictor.ipynb | ###Markdown
Installing AutoKeras
###Code
pip install autokeras
import autokeras as ak
import pandas as pd
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Getting the datasets We download the Boston housing prices dataset and create the training and test subsets.
###Code
df = pd.read_csv("https://raw.githubusercontent.com/PacktPublishing/Automated-Machine-Learning-with-Auto-Keras/main/boston.csv")
y = df.pop('MEDV')
X = df
train_data, test_data, train_targets, test_targets = train_test_split(X,y,test_size=0.2)
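# Hedged addition (not in the original cell): inspect the dimensions discussed in the
# next markdown cell; with test_size=0.2 this split is roughly 404 train / 102 test rows.
print(train_data.shape, test_data.shape)
print(train_targets[:5])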
###Output
_____no_output_____
###Markdown
Let's take a look at the dataset dimensions: we can see that it has 404 training samples and 102 test samples, each with 13 numerical features, such as per-capita crime rate, number of rooms, access to highways, etc. The targets are the median values of owner-occupied homes, in thousands of dollars: the prices are typically between \$10,000 and \$50,000. Prices may seem cheap, but keep in mind we are talking about the 1970s. Creating and training the models
###Code
# Initialize the StructuredDataRegressor
reg = ak.StructuredDataRegressor(
max_trials=2,
overwrite=True,
metrics=['mae']
)
# Search for the best model.
reg.fit(
train_data.to_numpy(),
train_targets.to_numpy(),
epochs=50,
)
###Output
_____no_output_____
###Markdown
For regression models AutoKeras uses MSE as the default loss: mean squared error, the average of the squared differences between the predictions and the targets. But for this example, we are also monitoring an additional metric during training that is easier to interpret: mean absolute error (MAE), the average absolute difference between the predictions and the targets. For example, since the targets are in thousands of dollars, an MAE of 1.5 in this problem would mean that the predictions are off by about \$1,500 on average. Evaluating the best model
###Code
reg.evaluate(test_data, test_targets)
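# Hedged sketch (not in the original cell): compute MAE by hand to make the metric
# described above concrete; predictions and targets are both in thousands of dollars.
preds = reg.predict(test_data.to_numpy()).flatten()
manual_mae = abs(preds - test_targets.to_numpy()).mean()
print(f"MAE ~ {manual_mae:.2f} (about ${manual_mae * 1000:,.0f} off on average)")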
###Output
_____no_output_____
###Markdown
Visualizing the model
###Code
# First we export the model to a keras model
keras_model = reg.export_model()
# Now, we ask for the model summary:
keras_model.summary()
from tensorflow.keras.utils import plot_model
plot_model(keras_model)
###Output
_____no_output_____ |
2_clean_text.ipynb | ###Markdown
Clean text for analysis Import modules
###Code
from connect_to_mongo import connect_to_mongo
import nltk
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from nltk.corpus import stopwords
from nltk.stem import porter
###Output
_____no_output_____
###Markdown
Connect to Mongo database
###Code
db, transcipts = connect_to_mongo('x-files', 'transcripts')
###Output
_____no_output_____
###Markdown
Format transcripts for basic text analysis Treat each show as a separate text entry. All of Scully's lines will be one entry, all of Mulder's lines another.
###Code
doc_cursor = transcipts.find()
scully_corpus = []
mulder_corpus = []
for i in doc_cursor:
scully = i['scully_lines']
scully_concat = (' ').join(scully)
scully_corpus.append(scully_concat)
mulder = i['mulder_lines']
mulder_concat = (' ').join(mulder)
mulder_corpus.append(mulder_concat)
len(scully_corpus)
scully_corpus[0]
scully_concat
mulder_concat = (" ").join(mulder)
###Output
_____no_output_____
###Markdown
Process all lines as one text entry Lowercase
###Code
lower_case = scully_corpus[0].lower()
lower_case[:400]
###Output
_____no_output_____
###Markdown
Important words
###Code
important_words = {'f.b.i.': 'fbi', 'x-files': 'xfiles', 'x-file': 'xfile'}
for key in important_words.keys():
lower_case = lower_case.replace(key, important_words[key])
lower_case
###Output
_____no_output_____
###Markdown
Remove punctuation Replace punctuation with a whitespace, for easier removal of stopwords later like I've (stopwords that appear as separate entries in the corpus - for I've it appears as 'i' and 've' in the stopword corpus).
###Code
import re
import string
punc_removed = re.sub('[%s]' % re.escape(string.punctuation), ' ', lower_case)
# remove extra whitespaces
# regex for 2 or more whitespaces, replace with 1 whitespace
punc_removed[:100]
###Output
_____no_output_____
###Markdown
Remove words with digits
###Code
no_digits = re.sub('\w*\d\w*', '', punc_removed)
no_digits[:400]
###Output
_____no_output_____
###Markdown
Replace 2 or more whitespaces with a single space.
###Code
space_removed = " ".join(no_digits.split())
space_removed[:400]
###Output
_____no_output_____
###Markdown
Stem & Stopwords Stopwords can be removed as part of count vectorizer, but we'll do it here separately.
###Code
my_stopwords = stopwords.words('english')
stemmer = nltk.stem.porter.PorterStemmer()
stemmed_text = []
for word in space_removed.split():
if word not in my_stopwords:
word_stem = stemmer.stem(word)
stemmed_text.append(word_stem)
stemmed_text = (' ').join(stemmed_text)
stemmed_text
###Output
_____no_output_____
###Markdown
Preprocessing pipeline
###Code
import re
import string
special_words = {'f.b.i.': 'fbi', 'x-files': 'xfiles', 'x-file': 'xfile'}
# try other nltk stemmers...snowball stemmer
def clean_text(corpus, use_stemmer = True):
    """Lowercase each document, protect special words, strip punctuation and digits,
    remove stopwords, and optionally stem the remaining words."""
    cleaned_text = []
    for text in corpus:
        lower_case = text.lower()
        for key in special_words.keys():
            # reassign so every special word is preserved, not just the last one
            lower_case = lower_case.replace(key, special_words[key])
        punc_removed = re.sub('[%s]' % re.escape(string.punctuation), ' ', lower_case)
        no_digits = re.sub(r'\w*\d\w*', '', punc_removed)
        space_removed = " ".join(no_digits.split()) # collapse repeated whitespace
        clean_text = ''
        keeper_words = []
        if use_stemmer:
            for word in space_removed.split():
                if word not in my_stopwords:
                    word_stem = stemmer.stem(word)
keeper_words.append(word_stem)
else:
for word in space_removed.split():
if word not in my_stopwords:
keeper_words.append(word)
clean_text = (' ').join(keeper_words)
cleaned_text.append(clean_text)
return(cleaned_text)
scully_cleaned = clean_text(scully_corpus)
mulder_cleaned = clean_text(mulder_corpus)
len(scully_cleaned) == len(scully_corpus)
scully_cleaned[15]
###Output
_____no_output_____
###Markdown
Basic Counts Top words for Scully and Mulder over the whole show Append all the episodes together and see what each character says most often. Adding labels to documents for classification - testing if Scully or Mulder said it Make a function that gets the length of the scully or mulder list and creates an array with the data labels for each record.
###Code
import numpy as np
label_dict = {'Scully': 0, 'Mulder': 1}
def get_labels(list_of_docs, character):
number_of_docs = len(list_of_docs)
labels = np.array([label_dict[character]] * number_of_docs)
return labels
scully_list = [scully_concat]
scully_list.append('test')
get_labels(scully_list, 'Scully')
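# Hedged sketch (not in the original cell): combine the cleaned corpora and their labels
# into the X / y pair that the Scully-vs-Mulder classification task described above needs.
corpus_all = scully_cleaned + mulder_cleaned
labels_all = np.concatenate([get_labels(scully_cleaned, 'Scully'),
                             get_labels(mulder_cleaned, 'Mulder')])
print(len(corpus_all), labels_all.shape)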
###Output
_____no_output_____
###Markdown
Count vectorize
###Code
cv = CountVectorizer(lowercase=True, stop_words='english')
cv_data = cv.fit_transform([scully_concat])
scully_count_vect = pd.DataFrame(cv_data.toarray(), columns=cv.get_feature_names())
scully_count_vect
scully_count_vect.sort_values(by=0, axis=1, ascending=False)
cv = CountVectorizer(lowercase=True, stop_words='english')
cv_data = cv.fit_transform([mulder_concat])
mulder_count_vect = pd.DataFrame(cv_data.toarray(), columns=cv.get_feature_names())
mulder_count_vect
list(mulder_count_vect.columns)
mulder_count_vect.sort_values(by=0, axis=1, ascending=False)
###Output
_____no_output_____ |
Documento del proyecto #3.ipynb | ###Markdown
Course: Mathematical Simulation Project: "Monte Carlo in finance" 1.2 Objectives ** The general objective of the project is to use Monte Carlo and other methods learned in class to explore possible future scenarios, applied to finance through the prices of stocks and options over a period of time. ** Specific objectives · The work aims to use the tools learned during the third part of the mathematical simulation course. · Import and manipulate databases through the Pandas library. · Use functions, such as Monte Carlo, to simulate possible scenarios. · Plot the possible scenarios. ** The components of the work are ** 1. Import libraries. 2. Import stock and price information from the "Morningstar" site. 3. Create dataframes to manipulate the data. 4. Define the time period. 5. Use the volatility function. 6. Plot the stock's volatility. 7. Use random numbers to see how the model behaves. 1.3 Model Conditions and Restrictions - The initial conditions are imported from the "Morning Star" portal for the stipulated time period. - The import contains the date and the stock price on that date, as the format below indicates... - In the last part of the work, pseudo-random numbers are used for the tests.
###Code
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Situation ** In finance there are considerable fluctuations in stock quotes and prices, which generates a lot of uncertainty in the corporate world. Through Monte Carlo, our model will look for the most likely prices based on the volatility the stock showed over a given period of time. ** Equations The equations are: $V = \ln\left(1+\frac{p_{i}-p_{i-1}}{p_{i-1}}\right) = \ln\frac{p_{i}}{p_{i-1}}$ $D = u - (.5 \times v )$ $G = e^{\,D + DS \times N}$ Where: $V$ = volatility (daily log return) of the stock $p_{i}$: price of the stock. $p_{i-1}$: previous price of the stock $D$ : drift $N$ : normally distributed, continuous random variable $u$ : mean of the volatility $DS$ : standard deviation of the volatility $v$ : variance of the volatility $G$ : daily return 1.5 Visualization
###Code
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
** Volatility data **
###Code
pic = mpimg.imread("Datos de volatilidad.jpg")
plt.imshow(pic)
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
** Volatility plot **
###Code
imagen = mpimg.imread("Volatilidad.jpg")
plt.imshow(imagen)
plt.axis("off");
###Output
_____no_output_____
###Markdown
** Monte Carlo plot **
###Code
Montecarlo = mpimg.imread("Montecarlo.jpg")
plt.imshow(Montecarlo)
plt.axis("off");
###Output
_____no_output_____
###Markdown
** Margin **
###Code
Margen = mpimg.imread("Margen.jpg")
plt.imshow(Margen)
plt.axis("off");
###Output
_____no_output_____
###Markdown
** Margin with alphas **
###Code
Margen = mpimg.imread("Margenalpha.jpg")
plt.imshow(Margen)
plt.axis("off");
###Output
_____no_output_____ |
Variability_Simulations/MeasurementNoiseSimulation.ipynb | ###Markdown
Measurement Noise Simulation This notebook shows how to simulate the noise that results from using measurement devices and how this noise affects the measured gene expression of species in a cell population. Here, gene expression of each cell is simulated deterministically through ordinary differential equations, so from this point of view all cells behave in the same way. Therefore, measurement noise is applied over the population to get different outputs from each cell. The noise is generated using a noise model with two noise terms: an additive term (a) that is independent and added to the output, and a multiplicative term (b) that depends on and is multiplied by the output. Additionally, n is a variable that represents Gaussian noise. The measurement noise simulation follows this expression: *y = f + (a + b · f) · n*, where *f* is the noise-free model output.
###Code
#libraries required
import numpy as np
import simsysbio as s2b
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
**Defines Biological System Properties**
###Code
####### Determine the differential equations system
#molecular species
species = ['mRNA', 'Protein']
#reagent and product matrices
reagents = np.array([[0, 1, 1, 0],[0, 0, 0, 1]])
products = np.array([[1, 0, 1, 0],[0, 0, 1, 0]])
#kinetic parameters
parameters = ['c1', 'c2', 'c3', 'c4']
#system input. It affects first reaction
inp = 'U'
idxR = 1
#gets simbolic differential equations
equations, variables = s2b.simbODE(species, reagents, products, parameters,
inputN=inp)
# show the obtained ODEs
for s in range(0, len(species)):
print(f'd{species[s]}/dt:', equations[s])
print(variables)
###Output
dmRNA/dt: U*c1 - c2*mRNA
dProtein/dt: -Protein*c4 + c3*mRNA
{'species': [mRNA, Protein], 'pars': array([c1, c2, c3, c4], dtype=object), 'nameVar': array([U, mRNA, Protein], dtype=object)}
###Markdown
**Creates System Input**
###Code
#computes a hog signal as system input
#duration experiment
tend = np.array([100], float)
#pulse start and end
ton = np.array([1], float)
tdur = np.array([3], float)
# compute the expression signal and its corresponding profiles
inputHOG, tog, perfiles = s2b.HOGexpr(ton, tdur, tend)
#Plotting
plt.figure()
plt.plot(tog, perfiles['t_u_Valve'], label='Step Signal (Valve)')
plt.plot(perfiles['t_u_Chamber'][0], perfiles['t_u_Chamber'][1], label='Delayed Step Signal (Camera)')
plt.plot(tog, inputHOG, label='Model Signal (HOG)')
plt.legend(loc='best')
plt.xlabel('Time (min)')
plt.ylabel('Concentration')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
**Solves System of Differential Equations**
###Code
#kinetic parameters
parsValues = [4.0, 0.010, 1.0, 0.006]
#initial concentrations
sp0 = np.zeros(len(species))
#Stores regressors
Allins = {
"ODEs":equations,
"inpU":inputHOG,
"Vtime":tog,
"species0":sp0
}
Allins.update(variables)
Allins["idxR"] = idxR
Allins["matrizR"] = reagents
Allins["matrizP"] = products
# numerical solution of the species-expression ODEs
exprEspecies = s2b.solveODE(parsValues, Allins)
###Output
_____no_output_____
###Markdown
**Plots Mean Output**
###Code
plt.figure()
plt.subplot(2,1,1)
plt.plot(tog, inputHOG,'g-',label='System Input')
plt.plot(tog ,exprEspecies[0,:],'b-',label='mRNA')
plt.legend(loc='best')
plt.xlabel('Time (min)')
plt.ylabel('Concentration')
plt.grid()
plt.subplot(2,1,2)
plt.plot(tog , exprEspecies[1,:],'r-',label='Protein')
plt.legend(loc='best')
plt.xlabel('Time (min)')
plt.ylabel('Concentration')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
**Adds Measurement Noise**
###Code
#number of cells
ncells = 1000
#makes ncells copies of the mean output
pcells = np.tile(exprEspecies[1,:], (ncells,1))
#creates a normal distribution of gaussian noise values
mu, sigma = 0, 1 #mean and standard deviation
ndistrib = np.random.normal(mu, sigma, (pcells.shape))
#noise parameters
a_err = 80.0
b_err = 0.05
#computes variability and noisy output
hvar = a_err + b_err*pcells
MCy = pcells + hvar*ndistrib
###Output
_____no_output_____
###Markdown
**Plots Noisy Output**
###Code
plt.figure()
for i in range(0,ncells):
plt.plot(tog,MCy[i,:])
plt.xlabel('Time (min)')
plt.ylabel('Concentration')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
**Computes Population Output**
###Code
# take the quantiles between 2.5% and 97.5% of the population distribution
ExprMin = np.quantile(MCy, 0.025, axis=0)
ExprMax = np.quantile(MCy, 0.975, axis=0)
#takes median of the population distribution
ExprMedn = np.median(MCy, axis=0)
###Output
_____no_output_____
###Markdown
**Plots Population Output**
###Code
plt.figure()
plt.plot(tog, ExprMedn,linewidth=2)
plt.fill_between(tog, ExprMin, ExprMax, alpha = 0.4)
plt.xlabel('Time (min)')
plt.ylabel('Concentration')
plt.grid()
plt.show()
###Output
_____no_output_____ |
posts/taking-a-more-fundamental-approach-to-regularization-with-lars.ipynb | ###Markdown
LARS Regularization If Bradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani of Stanford University had not discovered it [1], you might have come up with LARS (Least Angle Regression) yourself one day; it borrows inspiration from the Gaussian elimination presented by [William Gilbert Strang](https://en.wikipedia.org/wiki/Gilbert_Strang). Getting ready LARS is a regression technique suited to high-dimensional problems, that is, the $p >> n$ case, where $p$ denotes the columns or features and $n$ denotes the number of samples. How to do it... First, let's import the necessary objects. The dataset we use here has 200 data points and 500 features. We also set a low noise level and a small number of informative features:
###Code
import numpy as np
from sklearn.datasets import make_regression
reg_data, reg_target = make_regression(n_samples=200,n_features=500, n_informative=10, noise=2)
###Output
_____no_output_____
###Markdown
Since we used 10 informative features, we also configure LARS with 10 non-zero coefficients. We may not know the exact number of informative features beforehand, but it is reasonable for the purposes of this experiment:
###Code
from sklearn.linear_model import Lars
lars = Lars(n_nonzero_coefs=10)
lars.fit(reg_data, reg_target)
###Output
_____no_output_____
###Markdown
We can verify this by counting LARS's non-zero coefficients:
###Code
np.sum(lars.coef_ != 0)
###Output
_____no_output_____
###Markdown
The question is why a smaller number of features tends to work better. To show this, let's train two LARS models on half of the data: one with 12 non-zero coefficients, and another with the default number of non-zero coefficients. We use 12 here because we have a rough estimate of the number of important features, but we may not be able to pin down the exact number:
###Code
train_n = 100
lars_12 = Lars(n_nonzero_coefs=12)
lars_12.fit(reg_data[:train_n], reg_target[:train_n])
lars_500 = Lars() # the default is 500
lars_500.fit(reg_data[:train_n], reg_target[:train_n])
###Output
_____no_output_____
###Markdown
Now, let's see how well each model fits the data, as shown below:
###Code
np.mean(np.power(reg_target[train_n:] - lars.predict(reg_data[train_n:]), 2))
np.mean(np.power(reg_target[train_n:] - lars_12.predict(reg_data[train_n:]), 2))
np.mean(np.power(reg_target[train_n:] - lars_500.predict(reg_data[train_n:]), 2))
###Output
_____no_output_____
###Markdown
Look carefully at this set of results; the error on the test set is clearly much higher. This is exactly the problem with high-dimensional datasets: given a large number of features it is usually not hard to find a model that fits the training set well, but overfitting becomes the bigger problem. How it works... LARS works by repeatedly selecting the feature most correlated with the residual. Geometrically, this correlation corresponds to the smallest angle between the feature and the residual, which is where LARS gets its name. After selecting the first feature, LARS moves in the direction of that smallest angle until another feature has the same amount of correlation with the residual. LARS then moves along the combined direction of the two features, as shown in the figure below:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
def unit(*args):
squared = map(lambda x: x**2, args)
distance = sum(squared) ** (.5)
return map(lambda x: x / distance, args)
f, ax = plt.subplots(nrows=3, figsize=(5, 10))
plt.tight_layout()
ax[0].set_ylim(0, 1.1)
ax[0].set_xlim(0, 1.1)
x, y = unit(1, 0.02)
ax[0].arrow(0, 0, x, y, edgecolor='black', facecolor='black')
ax[0].text(x + .05, y + .05, r"$x_1$")
x, y = unit(.5, 1)
ax[0].arrow(0, 0, x, y, edgecolor='black', facecolor='black')
ax[0].text(x + .05, y + .05, r"$x_2$")
x, y = unit(1, .45)
ax[0].arrow(0, 0, x, y, edgecolor='black', facecolor='black')
ax[0].text(x + .05, y + .05, r"$y$")
ax[0].set_title("No steps")
#step 1
ax[1].set_title("Step 1")
ax[1].set_ylim(0, 1.1)
ax[1].set_xlim(0, 1.1)
x, y = unit(1, 0.02)
ax[1].arrow(0, 0, x, y, edgecolor='black', facecolor='black')
ax[1].text(x + .05, y + .05, r"$x_1$")
x, y = unit(.5, 1)
ax[1].arrow(0, 0, x, y, edgecolor='black', facecolor='black')
ax[1].text(x + .05, y + .05, r"$x_2$")
x, y = unit(.5, 1)
ax[1].arrow(.5, 0.01, x, y, ls='dashed', edgecolor='black', facecolor='black')
ax[1].text(x + .5 + .05, y + .01 + .05, r"$x_2$")
ax[1].arrow(0, 0, .47, .01, width=.0015, edgecolor='black', facecolor='black')
ax[1].text(.47-.15, .01 + .03, "Step 1")
x, y = unit(1, .45)
ax[1].arrow(0, 0, x, y, edgecolor='black', facecolor='black')
ax[1].text(x + .05, y + .05, r"$y$")
#step 2
ax[2].set_title("Step 2")
ax[2].set_ylim(0, 1.1)
ax[2].set_xlim(0, 1.1)
x, y = unit(1, 0.02)
ax[2].arrow(0, 0, x, y, edgecolor='black', facecolor='black')
ax[2].text(x + .05, y + .05, r"$x_1$")
x, y = unit(.5, 1)
ax[2].arrow(0, 0, x, y, edgecolor='black', facecolor='black')
ax[2].text(x + .05, y + .05, r"$x_2$")
x, y = unit(.5, 1)
ax[2].arrow(.5, 0.01, x, y, ls='dashed', edgecolor='black', facecolor='black')
ax[2].text(x + .5 + .05, y + .01 + .05, r"$x_2$")
ax[2].arrow(0, 0, .47, .01, width=.0015, edgecolor='black', facecolor='black')
ax[2].text(.47-.15, .01 + .03, "Step 1")
## step 2
x, y = unit(1, .45)
ax[2].arrow(.5, .02, .4, .35, width=.0015, edgecolor='black', facecolor='black')
ax[2].text(x, y - .1, "Step 2")
x, y = unit(1, .45)
ax[2].arrow(0, 0, x, y, edgecolor='black', facecolor='black')
ax[2].text(x + .05, y + .05, r"$y$");
###Output
_____no_output_____
###Markdown
Concretely, we move along the $x1$ direction to the point where $x2$ has the same dot product with the residual as $x1$ does. Once at that point, we move in the direction that bisects the angle between $x1$ and $x2$. There's more... Just as we used cross-validation earlier to tune the ridge regression model, we can also cross-validate LARS:
###Code
from sklearn.linear_model import LarsCV
lcv = LarsCV()
lcv.fit(reg_data, reg_target)
###Output
d:\Miniconda3\lib\site-packages\sklearn\linear_model\least_angle.py:285: ConvergenceWarning: Regressors in active set degenerate. Dropping a regressor, after 168 iterations, i.e. alpha=2.278e-02, with an active set of 132 regressors, and the smallest cholesky pivot element being 6.144e-08
ConvergenceWarning)
d:\Miniconda3\lib\site-packages\sklearn\linear_model\least_angle.py:285: ConvergenceWarning: Regressors in active set degenerate. Dropping a regressor, after 168 iterations, i.e. alpha=2.105e-02, with an active set of 132 regressors, and the smallest cholesky pivot element being 9.771e-08
ConvergenceWarning)
###Markdown
Using cross-validation helps us determine the best number of non-zero coefficients to use. We can verify it as follows:
###Code
np.sum(lcv.coef_ != 0)
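# Hedged addition (not in the original cell): LarsCV also exposes the regularization
# strength it selected, which is worth inspecting alongside the coefficient count.
print(lcv.alpha_)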
###Output
_____no_output_____ |
2017/tutorials/Tutorial3/RetinaTutorial-Exercises.ipynb | ###Markdown
CS 375 - Tutorial 3 (Retinal Models and Neural Coding) The retina comprises the first component of visual processing, and even at this level, the retina must compress visual information from 100 million photoreceptors down to 1 million ganglion (output) cells. In just a few layers, the retina predicts object motion [1], predicts complex spatiotemporal patterns [2], and can reduce spatiotemporal redudancy in natural scenes [3]. We will be analyzing data recorded in the Baccus Lab from a salamander retinal ganglion cell (RGC) in response to a white noise stimulus. We will use a simple encoding model known as a Linear-Nonlinear (LN) model [4] that predicts the RGC response to the stimulus, and we will use spike-triggered analysis [5] to compute its linear receptive field. This will then motivate the use of deeper encoding models featured in [6], which you will explore in your upcoming homework assignment. 0.) Loading data and experiment details The data we will be using is in rgc_data.npz. It consists of a 16.67 minute recording of a ganglion cell from the salamander retina. The stimulus was flickering white noise bars, sampled at a frame rate of 100 Hz. The stimulus array has dimensions (30x100000) corresponding to the pixel values of the 30 bars over 100000 frames. The time array contains the time (in seconds) of the stimulus presentation for each stimulus frame. The spike_times array contains the spike times of an isolated retinal ganglion cell (RGC) recorded in response to the stimulus
###Code
import numpy as np
import matplotlib.pyplot as plt
from __future__ import division
%matplotlib inline
rgc_data = np.load('rgc_data.npz', encoding='latin1', allow_pickle=True)['arr_0'][()]
stimulus = rgc_data['stimulus']
time = rgc_data['time']
spike_times = rgc_data['spike_times']
stimulus.shape
plt.imshow(stimulus[:, :40], cmap=plt.get_cmap('gray'))
plt.colorbar()
plt.xlabel('Frames')
plt.ylabel('Bars')
###Output
_____no_output_____
###Markdown
1.) Spike-triggered analysis To start our analysis, we begin by computing the linear component of the LN model. In order to do this, we compute the spike-triggered ensemble (STE). This contains the stimulus that directly preceded a particular spike, for every spike. First, we initialize the STE.
###Code
dt = 0.01 # stimulus sampling rate (in seconds)
spatial_dim = stimulus.shape[0] # the number of spatial dimensions in the stimulus (number of bars)
filter_length = 40 # the number of temporal dimensions in the stimulus (integration time of rgc is 400 ms, so 40 frames)
# cut out the first few spikes that occur before the length of the filter (in seconds) has elapsed
spike_times = spike_times[spike_times > filter_length * dt]
# store the indices of the time array corresponding to each spike
# (you'll use this when computing histograms and the nonlinearity of the LN model)
spike_indices = np.zeros(spike_times.shape)
num_spike_times = spike_times.shape[0]
# initialize an array that will store the spike-triggered ensemble (STE)
# it is a matrix with dimensions given by: the number of spikes and the total of dimensions in the filter
ste = np.zeros((num_spike_times, spatial_dim*filter_length))
###Output
_____no_output_____
###Markdown
Now, compute the STE (fill in the code below)
###Code
for t in range(num_spike_times):
# get the nearest index of this spike time
spike_idx = np.sum(time < spike_times[t]) # timestep that is closest to given spike time
spike_indices[t] = spike_idx
    # select out the stimulus preceding the spike, and store it in the ste array
# FILL IN HERE
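    # One possible solution (hedged sketch, not the official answer): take the
    # `filter_length` frames preceding the spike and flatten them into a row of the STE.
    ste[t] = stimulus[:, spike_idx - filter_length:spike_idx].reshape(-1)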
###Output
_____no_output_____
###Markdown
Compute the STA (the average stimulus preceding a spike)
###Code
# FILL IN HERE
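# One possible solution (hedged sketch): the STA is the mean over the spike-triggered ensemble
sta = np.mean(ste, axis=0)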
# unit norm the sta (since the scale is arbitrary)
sta = sta / np.linalg.norm(sta)
sta_plot = sta.reshape(spatial_dim, filter_length) # reshape to 30 by 40
plt.imshow(sta_plot, cmap='RdBu')
plt.xlabel('Filter Length (frames)')
plt.ylabel('Spatial dimension (bars)')
plt.colorbar()
plt.clim(-0.2,0.2)
###Output
_____no_output_____
###Markdown
Biological Interpretation (Center-Surround Receptive Fields) What does the above plot tell us about this ganglion cell's response? For most positions on the surface of the retina, flashing a spot of light has no effect on the RGC's response. However, within a particular region, known as the receptive field, flashing the light affects the ganglion cell's response. The receptive field is therefore the region of the visual field in which light stimuli evoke responses in the ganglion cell. In the dark, a photoreceptor (rod/cone) cell will release glutamate, which inhibits the ON bipolar cells and excites the OFF bipolar cells. In the light, ON bipolar cells become are excited, while the OFF bipolar cells become inhibited. This stratification of the bipolar cell population contributes the receptive field of the ganglion cell (since bipolar cells synapse onto ganglion cells). Due to these two populations of bipolar cells, the receptive field of the retinal ganglion cell is subdivided into two regions: a center and a surround. There are two types of receptive fields:1. ON center/OFF surround cell: Flashing small bright spot in the center subregion increases the cell's response. Flashing a bright annulus in the surround subregion inhibits the cell's response. There is little or no response to a large (full field) spot of light that covers both the center and the surround because excitation in the center cancels the inhibition from the surround, called lateral inhibition.2. An OFF-center/ON-surround ganglion cell has the opposite arrangement. It gets inhibition from a small spot of light in the center, and excitation from an annulus in the surround. Photo credit: http://www.cns.nyu.edu/~david/courses/perception/lecturenotes/ganglion/ganglion.html So, is this RGC an ON-center or an OFF-center ganglion cell? 2.) Adding the nonlinearity RGCs have thresholds (nonlinearities) that go from membrane potentials ($u(t)$) to predicted firing rates ($\hat{r}(t)$). Therefore, we need to account for the amount of amplification necessary to predict the ganglion cell response given the stimulus response and the STA (the linear weights).$$u(t) = sta*x(t) = sta\cdot x(t-filterlength:t)$$$$\hat{r}(t) = f(u(t))$$ Looping over time, compute the linear projection of each stimulus slice onto the STA and store it in the variable u
###Code
u = np.zeros(time.shape[0]) # the variable `u` will store the projection at each time step (predicted membrane potentials)
for t in range(filter_length, time.shape[0]): # loop over each time point
# FILL IN HERE
# extract the stimulus slice at this time point
# store the linear projection (dot product) of the stimulus slice onto the STA in u
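    # One possible solution (hedged sketch):
    stimulus_slice = stimulus[:, t - filter_length:t].reshape(-1)
    u[t] = np.dot(stimulus_slice, sta)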
###Output
_____no_output_____
###Markdown
Compute the nonlinearity as a ratio of histograms
###Code
spike_indices = spike_indices.astype('int64')
# bin the spike times according to the time array
spike_counts, _ = np.histogram(spike_times, time)
bins = np.linspace(-6, 6, 50) # min and max of u
raw, _ = np.histogram(u, bins) # discretize u into 50 bins
raw = raw / float(np.sum(raw)) # p(stimulus)
conditional, _ = np.histogram(u[spike_indices], bins)
conditional = conditional / np.sum(conditional) # p(stimulus|spike)
nln = (np.mean(spike_counts) * conditional) / raw # p(spike|stimulus)
plt.plot(bins[:-1], nln / dt)
plt.xlabel('Projection of stimulus onto STA (u)')
plt.ylabel('Mean number of spikes per bin')
###Output
_____no_output_____ |
6. One with the No Shows.ipynb | ###Markdown
Course Name : ML 501 Practical Machine Learning Notebook compiled by : Rajiv Kale, Consultant at Learning and Development ** Important ! ** For internal circulation only Assignment No. 1 This assignment is designed to test your ** Pandas Fundamentals. ** We will be working on the *'[Medical No-Shows](https://www.kaggle.com/joniarroba/noshowappointments)'* dataset available on Kaggle. The dataset is already downloaded and you can get it easily from here. Required imports
###Code
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Read dataset form local file
###Code
df=pd.read_csv("./Datasets/No-show-Issue-Comma-300k.csv")
df = df.dropna()  # assign the result so the cleaned frame is actually used downstream
###Output
_____no_output_____
###Markdown
Print first five rows of the dataset
###Code
df.head()
gender_mapping = {'M': 1,'F': 0}
df["Gender"] = df["Gender"].map(gender_mapping)
Status_mapping = {'Show-Up': 1,'No-Show': 0}
df["Status"] = df["Status"].map(Status_mapping)
X=df[['Age','Gender','Diabetes','Alcoolism','HiperTension']].values
X=df[['Age','Gender','Diabetes','Alcoolism','HiperTension','Handcap','Smokes','Scholarship','Tuberculosis','Sms_Reminder','AwaitingTime']].values
y=df[['Status']].values
from sklearn.tree import DecisionTreeClassifier
# fit a CART model to the data
model = DecisionTreeClassifier()
model.fit(X, y)
print(model)
expected = y
predicted = model.predict(X)
from sklearn import metrics
print(metrics.classification_report(expected, predicted))
print(metrics.confusion_matrix(expected, predicted))
model.score(X,y)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(C=100)
model.fit(X, y)
model.score(X,y)
###Output
_____no_output_____
###Markdown
Naive Bayes
###Code
from sklearn.naive_bayes import GaussianNB
model = GaussianNB()
model.fit(X, y)
model.score(X,y)
###Output
_____no_output_____
###Markdown
Random Forest
###Code
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(criterion='gini',n_estimators=100)
#forest.fit(X_train,y_train)
#forest.score(X_test,y_test)
forest.fit(X,y.ravel())
forest.score(X,y.ravel())
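# Hedged sketch (not in the original notebook): the commented lines above assume a held-out
# split, which gives a less optimistic score than evaluating on the training data.
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y.ravel(), test_size=0.3, random_state=42)
forest.fit(X_train, y_train)
forest.score(X_test, y_test)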
###Output
_____no_output_____ |
Google Maps API/Google Maps JSON/pharmacie scraping - NearbySearch.ipynb | ###Markdown
Nearby Search
###Code
url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?location=12.9538477,77.3507442&radius=20000&type=atm"
url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json?"
location = "33.589886, -7.603869"
# en mètres
radius = 26000
# type d'endroit à rechercher
place_type = "pharmacy"
language = "fr"
###Output
_____no_output_____
###Markdown
First Page results
###Code
r = requests.get(url + '&location=' +str(location) +'&radius='+str(radius) + '&type='+place_type+'&language='+language+'&key=' + api_key)
response = r.json()
response
###Output
_____no_output_____
###Markdown
Second Page results ** getting the next_page_token **
###Code
token = response['next_page_token']
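# Assumption (hedged, not in the original cell): the Places API next_page_token usually
# needs a short delay before it becomes valid, so pausing briefly avoids INVALID_REQUEST.
import time
time.sleep(2)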
r = requests.get(url +'pagetoken='+token+'&location=' +str(location) +'&radius='+str(radius) + '&type='+place_type+'&language='+language+'&key=' + api_key)
response2 = r.json()
print(response2)
# next page token
token2 = response2['next_page_token']
###Output
{'html_attributions': [], 'next_page_token': 'CsQDvAEAANn_0xs0mZvK1D6cqLYj9wL2oNN6seolotDR8XZfhS_eWku4Lu6oOkTrv9w6otfMcsRDDUBj5az1X6gG8GLKCoZtDrULWJWQMLdjiZbfSRUQHi0lUe7H_Roa0LD0-CtAbEfXSW7CsFkA9_B3XQegmY9U-oEW0VlzKFPHPMQhdnwAQ7kTpK1pW4hC9-_hbt8gfIdnTOBEpLXbjGAAug7dj7c0HeniipyhHdP9Gs8nde5Eh7nbSDWP1uBpxhcdG2-L8aZUC4GcOs7LvUuLUCYXn1CPP7NpO_ohNborJFUk6Uk5tkPI0elowhb2XwL5ijTQulwiADw4jELYXtp5lE6uEoWYunAn1M5K5AuJoZ3s4nrs9h9lDvN2qKrdUY9pJAWiI_ZgRJMI2ijM_qFf56kKvKVSRiwpmkFID6YOHTshVj4GNBN5-Vc5WXAGvGEGEZrXtmpPDKanQF28y3sGG-LXuNK4oi8_15yNlVISXP7gbl4Fb--ZHhCxEJGWso2iVtS2DYReB70SFR1jXkApn8WBU9_v6HiaNr1nltStvUMD_eQMckRFumIncShxhTb1SUONElHfbYFMQkBDuSa-0rNenQQSECoontuQM2uTD6q7HUKN4RkaFGAd6aRb0XxKDKUVQdzJTcAjs20L', 'results': [{'geometry': {'location': {'lat': 33.5347382, 'lng': -7.6041742}, 'viewport': {'northeast': {'lat': 33.53605848029149, 'lng': -7.602946069708498}, 'southwest': {'lat': 33.53336051970849, 'lng': -7.605644030291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': '5b4351d3de0187f7f219a4caac30702590bc56d4', 'name': 'PHARMACIE AYA صيدلية اية', 'opening_hours': {'open_now': True}, 'place_id': 'ChIJr0N92Zsypg0Rry7AtXAy0EY', 'plus_code': {'compound_code': 'G9MW+V8 Casablanca, Maroc', 'global_code': '8C5JG9MW+V8'}, 'rating': 5, 'reference': 'ChIJr0N92Zsypg0Rry7AtXAy0EY', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 1, 'vicinity': 'Kebibate, Rabat'}, {'geometry': {'location': {'lat': 33.5371609, 'lng': -7.6490633}, 'viewport': {'northeast': {'lat': 33.53849838029149, 'lng': -7.647784669708496}, 'southwest': {'lat': 33.5358004197085, 'lng': -7.650482630291501}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': '537dddd0ef255b8909f5e824671e8cd58575d377', 'name': 'Pharmacie Zenith Millenium', 'opening_hours': {'open_now': True}, 'photos': [{'height': 960, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/109830194430583123225/photos">amina tahri</a>'], 'photo_reference': 'CmRaAAAAOcvRkOJcR3zNLi6bNkrG2JfGsDlZ-OR6CCYH9qrQLtEnjyZNKFgWVJWUmPFFrDVtrcn0H8IhqMbE1Oyfv-iquY3jt_kQKHe9aoKTg-Kd_JJrb_hLFV8tatNYBWwS3ClzEhDW0YjCUiLj3BHEhfbnEbMYGhS3ia-RbBINXOPa2zyCdEjUwuAO1Q', 'width': 1280}], 'place_id': 'ChIJUxO_deLMpw0R83maF7eGKmc', 'plus_code': {'compound_code': 'G9P2+V9 Casablanca, Maroc', 'global_code': '8C5JG9P2+V9'}, 'rating': 4.5, 'reference': 'ChIJUxO_deLMpw0R83maF7eGKmc', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 14, 'vicinity': '162 Lotissement Florida Sidi Maarouf, Casablanca'}, {'geometry': {'location': {'lat': 33.5662344, 'lng': -7.6782163}, 'viewport': {'northeast': {'lat': 33.5675579302915, 'lng': -7.676851419708497}, 'southwest': {'lat': 33.5648599697085, 'lng': -7.679549380291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': '471bfdccebd8d78c79ee482ec41edf00ab3827d7', 'name': 'Para Verano Medical', 'opening_hours': {'open_now': True}, 'photos': [{'height': 3456, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/100338294628906246384/photos">e-commerce verano</a>'], 'photo_reference': 'CmRaAAAAT07XTKIuzCTXjeDmUws-Dne5iub4VHyXdNH2R51OYf0skkA7J8iH06XvkLWdW4O1gBXeNAubxgozGrzSWoiEebWymmFGkybC7vonEzX4hfSR6i7EJ40Rk7TT4OrSHwQpEhDiAqnRvUY6BCl2XKKwoo3IGhQn5WXKyP53mb1Lfb3R70dVRwpk7A', 'width': 5064}], 'place_id': 'ChIJk2qJ9uzTpw0RNQihBBXVlSM', 'plus_code': {'compound_code': 'H88C+FP Casablanca, 
Maroc', 'global_code': '8C5JH88C+FP'}, 'rating': 4.399999999999999, 'reference': 'ChIJk2qJ9uzTpw0RNQihBBXVlSM', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 12, 'vicinity': 'a cote de marjane, Casablanca'}, {'geometry': {'location': {'lat': 33.5326141, 'lng': -7.6789118}, 'viewport': {'northeast': {'lat': 33.5339950302915, 'lng': -7.677615369708498}, 'southwest': {'lat': 33.5312970697085, 'lng': -7.680313330291503}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': 'b56dabfebe74d6f56543836652df9f16ede386e8', 'name': 'pharmacie merini', 'photos': [{'height': 1080, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/105674530814294126855/photos">hammani amine</a>'], 'photo_reference': 'CmRaAAAAapj7i4MvZEvhErPg4cXVC5Jqg9oevmgYufDmeiIZrCQxjKksJGe3LXokPk8v8Fq07pCMeHfJsjmYQV0MMGh5LtvzJxo4HOGysO9y-t4v6YNFcYuhCpOAGtEOANJ5qErFEhAfPMKsTFWwQQ42DjH6ZO77GhT4VDYcSoxgC2OQbZ4UM0JtKrVjhQ', 'width': 1920}], 'place_id': 'ChIJfUvx-ZQspg0RYxI9CwTLJTY', 'plus_code': {'compound_code': 'G8MC+2C Casablanca, Maroc', 'global_code': '8C5JG8MC+2C'}, 'rating': 4, 'reference': 'ChIJfUvx-ZQspg0RYxI9CwTLJTY', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 2, 'vicinity': 'Casablanca'}, {'geometry': {'location': {'lat': 33.6101878, 'lng': -7.493514600000002}, 'viewport': {'northeast': {'lat': 33.6115332802915, 'lng': -7.492171019708498}, 'southwest': {'lat': 33.6088353197085, 'lng': -7.494868980291503}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': 'c90d0fa2f693cec943a4a8a2ac466fbfa7fefc80', 'name': 'Elgass', 'place_id': 'ChIJ9z8glQjLpw0RWJl45-HIBfk', 'plus_code': {'compound_code': 'JG64+3H Casablanca, Maroc', 'global_code': '8C5JJG64+3H'}, 'reference': 'ChIJ9z8glQjLpw0RWJl45-HIBfk', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'vicinity': 'elgass'}, {'geometry': {'location': {'lat': 33.58068730000001, 'lng': -7.484076699999999}, 'viewport': {'northeast': {'lat': 33.58200408029151, 'lng': -7.482680919708497}, 'southwest': {'lat': 33.57930611970851, 'lng': -7.485378880291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': 'c76cda1bc25d7ab72d74fc94f9136b2b836975cc', 'name': 'PHARMACIE ABRAR LAAOUNATE', 'opening_hours': {'open_now': True}, 'photos': [{'height': 720, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/118052973936555459195/photos">Younes Doukkali</a>'], 'photo_reference': 'CmRaAAAAOXJqCijIRT3_NpuoPr-rI4DmcHKBuh87GSd2-PEFHoAAMT1OQikDZPqUKlupia-Nrb7Q8h7KsAfTDELoY77C4wvJ3zkQuScOQ9U5dd8N63Ngsx8o-iA7mrQop0lj_JdDEhDMD9JuiogJAOjtXrVykO_SGhR0jKmWlTwwjyXklJyTZMVRn61h7w', 'width': 960}], 'place_id': 'ChIJ5Vu1YijLpw0R4Kq2Q2UA6vY', 'plus_code': {'compound_code': 'HGJ8+79 Casablanca, Maroc', 'global_code': '8C5JHGJ8+79'}, 'rating': 3, 'reference': 'ChIJ5Vu1YijLpw0R4Kq2Q2UA6vY', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 4, 'vicinity': 'Casablanca'}, {'geometry': {'location': {'lat': 33.6016395, 'lng': -7.483267300000001}, 'viewport': {'northeast': {'lat': 33.6029918802915, 'lng': -7.481927419708498}, 'southwest': {'lat': 33.6002939197085, 'lng': -7.484625380291503}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': '4527e659a9e02f370a542041774763c451c372aa', 'name': 
'Pharmacie SAJID صيدلية ساجد', 'opening_hours': {'open_now': True}, 'place_id': 'ChIJP1xiXALLpw0RrqbIy4sO9eA', 'plus_code': {'compound_code': 'JG28+MM Casablanca, Maroc', 'global_code': '8C5JJG28+MM'}, 'rating': 3.5, 'reference': 'ChIJP1xiXALLpw0RrqbIy4sO9eA', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 4, 'vicinity': 'Casablanca'}, {'geometry': {'location': {'lat': 33.5342115, 'lng': -7.706121400000001}, 'viewport': {'northeast': {'lat': 33.5355594802915, 'lng': -7.704772969708498}, 'southwest': {'lat': 33.5328615197085, 'lng': -7.707470930291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': 'bdc6d6197c3159a3ff72bd76e1bac0ae03cf98c8', 'name': 'Pharmacie Moulay Ahmed', 'opening_hours': {'open_now': True}, 'place_id': 'ChIJ07dCfmcrpg0RH3To2UkpDtw', 'plus_code': {'compound_code': 'G7MV+MH Casablanca, Maroc', 'global_code': '8C5JG7MV+MH'}, 'rating': 1.5, 'reference': 'ChIJ07dCfmcrpg0RH3To2UkpDtw', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 2, 'vicinity': 'Rocade Sud-Ouest, Casablanca'}, {'geometry': {'location': {'lat': 33.6766009, 'lng': -7.3901443}, 'viewport': {'northeast': {'lat': 33.67793998029151, 'lng': -7.388807119708497}, 'southwest': {'lat': 33.67524201970851, 'lng': -7.391505080291501}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': 'd045b4513573b3fc54152c78c8fc41bb70d7d138', 'name': 'Pharmacie Youssef', 'photos': [{'height': 2448, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/109785678667396462703/photos">Abderrahim Derraji</a>'], 'photo_reference': 'CmRaAAAATiLKBwasi64bg5OrmZABWWHdEarKPkfft8SNQDHJchynC8kO68UXTd4au6mBzOEyINbajs0lCcPWNW6N1uz1Mv6U0RQJ7sr27YeQWHlAq2WRk1St-WAlgIelSZQaeoCjEhAdSRFyVXPi_Y3kTINhKxO6GhRGVNFH5RqL0X6V9VdwTVcsM7uf9g', 'width': 3264}], 'place_id': 'ChIJMbCHr4i2pw0RbhkWr-lbInU', 'plus_code': {'compound_code': 'MJG5+JW Mohammédia, Maroc', 'global_code': '8C5JMJG5+JW'}, 'rating': 3.4, 'reference': 'ChIJMbCHr4i2pw0RbhkWr-lbInU', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 13, 'vicinity': 'Bloc A numero 138, Mohammedia'}, {'geometry': {'location': {'lat': 33.53020199999999, 'lng': -7.583869999999999}, 'viewport': {'northeast': {'lat': 33.5316109802915, 'lng': -7.582577319708498}, 'southwest': {'lat': 33.5289130197085, 'lng': -7.585275280291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': '6f806b67003904ad3d0f014adb18be0f3407c8de', 'name': 'Pharmacie El Malki', 'opening_hours': {'open_now': True}, 'photos': [{'height': 480, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/109687966975895064027/photos">Pharmacie El Malki</a>'], 'photo_reference': 'CmRaAAAAHrvF8AGIvETUhnqUY5PXw258oJ8CPZAEmh8qcl40bpYjo6bls8RiiceJQgzytqJ8yFGbmQDT8KpSooIEVy8rWqX7qLeKon-Y1AeKjlC7Z48JqP9SFxZcJUmnMVB9eXylEhBfAZ94RZ9Kng-ie7M2P8IAGhSCmTia5EJIv40UJMBfMZRQ9H-9uA', 'width': 480}], 'place_id': 'ChIJjzdP-vEypg0ROzz8DBvcdzo', 'plus_code': {'compound_code': 'GCJ8+3F Casablanca, Maroc', 'global_code': '8C5JGCJ8+3F'}, 'rating': 3.8, 'reference': 'ChIJjzdP-vEypg0ROzz8DBvcdzo', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 5, 'vicinity': 'Ain Chok, Boulevard sidi massaoud, lamkanssa 4 Bloc D, Rue 14 - Magasin 10, Casablanca'}, {'geometry': 
{'location': {'lat': 33.6935507, 'lng': -7.3919427}, 'viewport': {'northeast': {'lat': 33.69484173029149, 'lng': -7.390528669708498}, 'southwest': {'lat': 33.6921437697085, 'lng': -7.393226630291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': '31267262eddc4012241ccfefc461726ed0a74dd2', 'name': 'La Grande Pharmacie', 'opening_hours': {'open_now': True}, 'photos': [{'height': 2988, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/114128467836453022552/photos">Abdililah B'TASH</a>'], 'photo_reference': 'CmRaAAAAn6Hyciy1GiLeyQ1Cs3ImfS2StpVlterR6by_BqdOMeLaZpp2xgz5UlBbTNpAtuLkqxOrKIyxDSLHELGiXfRK4odB1FMdqd8snrOW4AeUZG949l327nKRFAbdNOYPpn7xEhD04h5Fn78wbtefhHAHPshwGhRTFi4bX8Lq2bb6dxg4_MvQ1SUCDA', 'width': 5312}], 'place_id': 'ChIJdaEWifa2pw0R0rivALIU1XM', 'plus_code': {'compound_code': 'MJV5+C6 Mohammédia, Maroc', 'global_code': '8C5JMJV5+C6'}, 'rating': 4.3, 'reference': 'ChIJdaEWifa2pw0R0rivALIU1XM', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 4, 'vicinity': 'R322, Mohammédia'}, {'geometry': {'location': {'lat': 33.6970446, 'lng': -7.3864781}, 'viewport': {'northeast': {'lat': 33.6983724802915, 'lng': -7.385142619708496}, 'southwest': {'lat': 33.6956745197085, 'lng': -7.387840580291501}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': 'e6563b94cdcaba2e5ea99e795cba3e16c3dcefca', 'name': 'Pharmacie Baghdad', 'opening_hours': {'open_now': True}, 'photos': [{'height': 5312, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/114649994306587060817/photos">badr miri</a>'], 'photo_reference': 'CmRaAAAAHI5N7FKLNFu4AnPShyXocT1EngVlMHvuzwiBIxRzb7huCC34LfU_RINF44AJHAUnbpWHC0LY88IEGHbB_tVgiEZJfLW9PEJM9SB5CL3i76VTCknHNjZUGhcQlMSwmwhmEhBAld5bH9l9hNmVseaiFRVcGhQ4qa3ErxtrHV3GdqPrHvkDM0_i3A', 'width': 2988}], 'place_id': 'ChIJmbiTYPC2pw0Ru-2y15BNdcs', 'plus_code': {'compound_code': 'MJW7+RC Mohammédia, Maroc', 'global_code': '8C5JMJW7+RC'}, 'rating': 4.1, 'reference': 'ChIJmbiTYPC2pw0Ru-2y15BNdcs', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 8, 'vicinity': 'Mohammédia'}, {'geometry': {'location': {'lat': 33.70540529999999, 'lng': -7.397221999999998}, 'viewport': {'northeast': {'lat': 33.7068077802915, 'lng': -7.395820419708497}, 'southwest': {'lat': 33.7041098197085, 'lng': -7.398518380291501}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': '2abe93c3f276c6528cb34d44d16a99c70728f1d8', 'name': 'Pharmacie Palmier', 'opening_hours': {'open_now': True}, 'photos': [{'height': 3596, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/107817801878723865593/photos">Zaoui Nourdine</a>'], 'photo_reference': 'CmRaAAAAeXCm6UZFgf7tHI7gCYb9Lcbmur7Kso4Sr9FzFeZH3veJNUBYADy8usnXUFzS_LGRKGTp6bT3wyHXQHIUFGOcx64h4WrfmTcZwSxRa8PKg2Vm5c1V4EvmQjti9h1MdUTOEhBixNx_coH8jvgiimqopmNCGhT7HjvAC5or9Dd6D3shm2n5oa0_Ag', 'width': 2030}], 'place_id': 'ChIJmT41EVW2pw0RdoqynRF4ZCI', 'plus_code': {'compound_code': 'PJ43+54 Mohammédia, Maroc', 'global_code': '8C5JPJ43+54'}, 'rating': 3, 'reference': 'ChIJmT41EVW2pw0RdoqynRF4ZCI', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 3, 'vicinity': 'Bd. Zerktouni, Ang. Abdelmoumen Résid. Palmier Imm. 
B N°4, 28810, Mohammédia'}, {'geometry': {'location': {'lat': 33.38686279999999, 'lng': -7.532858600000001}, 'viewport': {'northeast': {'lat': 33.3881961302915, 'lng': -7.531513519708497}, 'southwest': {'lat': 33.38549816970851, 'lng': -7.534211480291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': '5604a261b12b527f714604d7f6cc71b4c30ffb2b', 'name': 'Parapharmacie Makhlouf', 'opening_hours': {'open_now': True}, 'photos': [{'height': 720, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/103066813847798756839/photos">Abdelhak EN-NAGRI</a>'], 'photo_reference': 'CmRaAAAAf1IjDWedI7pF2R7E69VUID4wILYhCiRDgQQViW1aRmFi87jaUBFbcFqMN-yw3fIozsen7dBYVpmjuXdW0J37vj3g0u0iqzpZ0C7sPfptiJ78p_6pSTMk6Cwn7SFiqhOgEhBXockMQLK6lPEYgztpRPOXGhRwWqJop7O7o_HaZdQM6j8W1OdJ9A', 'width': 1280}], 'place_id': 'ChIJwyoDqAc6pg0RPdQxIakDCBQ', 'plus_code': {'compound_code': '9FP8+PV Ville de Deroua, Maroc', 'global_code': '8C5J9FP8+PV'}, 'rating': 4.3, 'reference': 'ChIJwyoDqAc6pg0RPdQxIakDCBQ', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 4, 'vicinity': 'Morocco'}, {'geometry': {'location': {'lat': 33.554787, 'lng': -7.584525999999999}, 'viewport': {'northeast': {'lat': 33.55613578029149, 'lng': -7.583148019708498}, 'southwest': {'lat': 33.5534378197085, 'lng': -7.585845980291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': '409997460fcf91178da69a4cb297603ac3a0be1c', 'name': 'Pharmacie Oued Zem', 'opening_hours': {'open_now': True}, 'place_id': 'ChIJV_Lc1cgypg0RaLtoHm1NrLQ', 'plus_code': {'compound_code': 'HC38+W5 Casablanca, Maroc', 'global_code': '8C5JHC38+W5'}, 'rating': 4.1, 'reference': 'ChIJV_Lc1cgypg0RaLtoHm1NrLQ', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 11, 'vicinity': 'Boulevard Oued Zem, Casablanca'}, {'geometry': {'location': {'lat': 33.5742918, 'lng': -7.671136599999999}, 'viewport': {'northeast': {'lat': 33.57561228029149, 'lng': -7.669843019708497}, 'southwest': {'lat': 33.5729143197085, 'lng': -7.672540980291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': 'db115011b11a69a5e4edb20d7b6ebb869a46dea6', 'name': 'Pharmacie Cinéma Anfa', 'opening_hours': {'open_now': True}, 'place_id': 'ChIJf11wXznTpw0RrKXF_I--CQU', 'plus_code': {'compound_code': 'H8FH+PG Casablanca, Maroc', 'global_code': '8C5JH8FH+PG'}, 'rating': 4.199999999999999, 'reference': 'ChIJf11wXznTpw0RrKXF_I--CQU', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 5, 'vicinity': 'Boulevard Sidi Abderrahmane, Casablanca'}, {'geometry': {'location': {'lat': 33.5359025, 'lng': -7.608371899999998}, 'viewport': {'northeast': {'lat': 33.53720508029151, 'lng': -7.606992819708497}, 'southwest': {'lat': 33.53450711970851, 'lng': -7.609690780291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': '42cdc2526f9a7d6959e06d2d66405074bfd96719', 'name': 'Pharmacie Boulevard El Qods', 'opening_hours': {'open_now': True}, 'place_id': 'ChIJzUGciZ4ypg0Rm0sz0imSRIc', 'plus_code': {'compound_code': 'G9PR+9M Casablanca, Maroc', 'global_code': '8C5JG9PR+9M'}, 'rating': 4.5, 'reference': 'ChIJzUGciZ4ypg0Rm0sz0imSRIc', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 8, 'vicinity': 'Casablanca'}, 
{'geometry': {'location': {'lat': 33.5705953, 'lng': -7.6257769}, 'viewport': {'northeast': {'lat': 33.5719443802915, 'lng': -7.624437319708497}, 'southwest': {'lat': 33.56924641970851, 'lng': -7.627135280291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': '0a82c3be4ed3cf950ea4fcd4fd5cfe3a46a1c03f', 'name': 'Pharmacie Alj', 'opening_hours': {'open_now': True}, 'photos': [{'height': 3120, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/107084637289878933230/photos">Supnet Technology</a>'], 'photo_reference': 'CmRaAAAAUroJFR0TBhYbtSJ8-B3wgoMEkVoIskGNJj5-144fMZp2F9m_z3P2gkk_kw5i5NnUZW84uvj5lyz36uLwGlJuJc3AZyGjiipI8lL85MoKRzsawvphALmzmP_ARh2mbfbQEhDJjKAl83xPxrPcASN9PsR2GhTJdvdPegOsRWiKYb8MfdAhF-QMzw', 'width': 4160}], 'place_id': 'ChIJwchl5bDSpw0R9Km9pSx4ohI', 'plus_code': {'compound_code': 'H9CF+6M Casablanca, Maroc', 'global_code': '8C5JH9CF+6M'}, 'rating': 2, 'reference': 'ChIJwchl5bDSpw0R9Km9pSx4ohI', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 5, 'vicinity': '279 Boulevard Abdelmoumen, Casablanca'}, {'geometry': {'location': {'lat': 33.5581936, 'lng': -7.6318872}, 'viewport': {'northeast': {'lat': 33.5595275302915, 'lng': -7.630493819708497}, 'southwest': {'lat': 33.5568295697085, 'lng': -7.633191780291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': 'ef4dec4c6cc7176ec9ee4078ca76a0e91fcd27d3', 'name': 'Pharmacie Aourir', 'opening_hours': {'open_now': True}, 'photos': [{'height': 4032, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/110263213749241852783/photos">Zak Foudali</a>'], 'photo_reference': 'CmRaAAAAszl7YXrdL5AKFgthi1_T3Zl69qB9RxDbCvMQHhT_tpORpfHTCEQbSsXRgDoQ6ibuMgP6RwZwGyPY8Mz9TfC_KzLJN5_rerNExl9N1FjbWyRnQXi55tc-GDe3VfAxlYn2EhBvxbkQZ-VXsL2So44J2NEUGhT9L8Uf26m4xYDCrLb1dRLFnIqwdw', 'width': 3024}], 'place_id': 'ChIJF3sDYUotpg0R_xYKrblvmII', 'plus_code': {'compound_code': 'H959+76 Casablanca, Maroc', 'global_code': '8C5JH959+76'}, 'rating': 4, 'reference': 'ChIJF3sDYUotpg0R_xYKrblvmII', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 9, 'vicinity': 'Rue de Lagramta, Casablanca'}, {'geometry': {'location': {'lat': 33.5613763, 'lng': -7.6702246}, 'viewport': {'northeast': {'lat': 33.56267778029149, 'lng': -7.668847869708498}, 'southwest': {'lat': 33.5599798197085, 'lng': -7.671545830291502}}}, 'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/shopping-71.png', 'id': 'e65616c6f2cc6b8a4853fbcdcd7cc49e80427173', 'name': 'Pharmacie Oum Rabii', 'opening_hours': {'open_now': True}, 'photos': [{'height': 480, 'html_attributions': ['<a href="https://maps.google.com/maps/contrib/103867329474130956984/photos">RedOne Bounhar</a>'], 'photo_reference': 'CmRaAAAACBLk6jFD2zTKa10dg6IBGGufjaEiJTP_PXTzZsjdJ_mDGl8cBfxrI58a8kJoJFFf6Cu59_KTX7scR8Diz4qUWd0kw05oBdPWPa_32AtwgMPUBSOPKp_gFY6O9D1s_2LPEhB_I34CLJiG13C1qGUAWT7UGhS0A9NJl0FValAAPX2R1adZzvIlfg', 'width': 800}], 'place_id': 'ChIJ5ULhdjPTpw0R15g-DvVtlQA', 'plus_code': {'compound_code': 'H86H+HW Casablanca, Maroc', 'global_code': '8C5JH86H+HW'}, 'rating': 3.8, 'reference': 'ChIJ5ULhdjPTpw0R15g-DvVtlQA', 'scope': 'GOOGLE', 'types': ['pharmacy', 'health', 'point_of_interest', 'store', 'establishment'], 'user_ratings_total': 17, 'vicinity': 'Boulevard Oued Oum Rabia, Casablanca'}], 'status': 'OK'}
###Markdown
Third Page results
###Code
r = requests.get(url +'pagetoken='+token2+'&location=' +str(location) +'&radius='+str(radius) + '&type='+place_type+'&language='+language+'&key=' + api_key)
response3 = r.json()
response3
###Output
_____no_output_____
###Markdown
Compose and test everything
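Before looping over the pages one by one, it can help to merge them into a single collection; a possible sketch is below (the `all_places` name and the selected fields are illustrative, not part of the original notebook):
```python
# Merge the three result pages into one flat list of (name, lat, lng, rating) tuples.
# Assumes response, response2 and response3 already hold the parsed JSON pages.
all_places = []
for page in (response, response2, response3):
    for place in page.get('results', []):
        loc = place['geometry']['location']
        all_places.append((place['name'], loc['lat'], loc['lng'], place.get('rating')))

print(f"Collected {len(all_places)} places in total")
```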
###Code
c = 0
for i in response['results']:
print(i['geometry']['location'])
print(i['name'])
c = c + 1
print("**********")
print(c)
c = 0
for i in response2['results']:
print(i['geometry']['location'])
print(i['name'])
c = c + 1
print("**********")
print(c)
c = 0
for i in response3['results']:
print(i['geometry']['location'])
print(i['name'])
c = c + 1
print("**********")
print(c)
###Output
{'lat': 33.5750999, 'lng': -7.634702000000001}
Pharmacie Val Fleuri
{'lat': 33.5579945, 'lng': -7.573909199999999}
Pharmacie El Mansour
{'lat': 33.590313, 'lng': -7.604042799999999}
Pharmacie du Soleil
{'lat': 33.5877863, 'lng': -7.603750799999999}
pharmacie moulay
{'lat': 33.588538, 'lng': -7.601782299999999}
Pharmacie Derb Omar
{'lat': 33.59181089999999, 'lng': -7.601966399999999}
Pharmacie Yassine
{'lat': 33.5892814, 'lng': -7.6081621}
Pharmacie La Victoire
{'lat': 33.5935853, 'lng': -7.6040812}
Pharmacie De Caducee
{'lat': 33.58821330000001, 'lng': -7.599717999999999}
Pharmacie Farida
{'lat': 33.592772, 'lng': -7.607113199999999}
GRANDE PHARMACIE DU MAROC الصيدلية الكبيرة بالمغرب
{'lat': 33.5858643, 'lng': -7.6044343}
Pharmacie al mechoir
{'lat': 33.593475, 'lng': -7.601689999999998}
Pharma Self
{'lat': 33.58580689999999, 'lng': -7.604408899999999}
Pharmacie Al Mechouar
{'lat': 33.5863719, 'lng': -7.607355400000001}
Pharmacie
{'lat': 33.5869877, 'lng': -7.6080889}
Pharmacie 15 Ramadan
{'lat': 33.5934637, 'lng': -7.6004148}
Pharmacie Centre
{'lat': 33.5916704, 'lng': -7.598429900000001}
pharmacie Albert 1
{'lat': 33.5937498, 'lng': -7.608086300000001}
Pharmacie Du Marché Central
{'lat': 33.5949141, 'lng': -7.601372699999999}
Mutuelle de l'Office d'Exploitation des Ports (MODEP)
{'lat': 33.5865023, 'lng': -7.598498599999998}
Pharmacie Ilham
**********
20
|
cursoDS/Parte 2 - Classificando os dados.ipynb | ###Markdown
New business questions1. What is the date of the oldest property in the portfolio?2. How many properties have the maximum number of floors?3. Create a classification for the properties, separating them into low and high standard according to price. Above 540,000 reais is high standard and below 540,000 reais is low standard.4. I would like a report sorted by price containing the following information: property id; date the property became available for purchase; number of bedrooms; total lot size; price; property classification (high or low standard).5. I would like a map showing where the houses are located geographically.The data for the analysis can be found at: https://www.kaggle.com/harlfoxem/housesalesprediction/version/1?select=kc_house_data.csv Data analysis
###Code
import pandas as pd
import numpy as np
data = pd.read_csv('datasets/kc_house_data.csv')
data.head()
###Output
_____no_output_____
###Markdown
1. What is the date of the oldest property in the portfolio?
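Sorting works, but if only the oldest date is needed, `min()` on the converted column gives the same answer without reordering the whole DataFrame (a small sketch, assuming `data` is already loaded):
```python
# earliest date in the portfolio, without sorting the DataFrame
oldest = pd.to_datetime(data['date']).min()
print(f'Oldest listing date in the portfolio: {oldest}')
```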
###Code
data['date']
data.dtypes
# The date column is stored as object rather than datetime, so it has to be converted to datetime before it can be sorted.
data['date'] = pd.to_datetime(data['date'])
data.dtypes
# sorting the DataFrame by the date column in ascending order
data_1 = data.sort_values('date').reset_index()
data_1.head()
e1 = data_1.loc[0,'date']
print(f'The oldest property in the DataFrame is from {e1}')
###Output
The oldest property in the DataFrame is from 2014-05-02 00:00:00
###Markdown
2. How many properties have the maximum number of floors?
###Code
# finding the rows where the floors column has its maximum value
data_2 = data[data['floors'] == data['floors'].max()]
data_2
tam = len(data_2)
print(f'There are {tam} properties with the maximum number of floors.')
###Output
There are 8 properties with the maximum number of floors.
###Markdown
3. Create a classification for the properties, separating them into low and high standard according to price. Above 540,000 reais is high standard and below 540,000 reais is low standard.
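The same labelling can also be written in a single step with `numpy.where`; this sketch uses the hypothetical column name `standard_np` so it does not overwrite the column built below, and (unlike the three-way split below) it groups prices exactly at 540,000 with the low-standard label:
```python
import numpy as np

# one-step classification against the 540,000 threshold (illustrative alternative)
data['standard_np'] = np.where(data['price'] > 540000, 'high_standard', 'low_standard')
```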
###Code
# creating a new column to hold the labels
data['standard'] = 'standard'
# rows where the price is greater than 540000
maior = data['price'] > 540000
# where the price is greater than 540000, set the 'standard' column to 'high_standard'
data.loc[maior, 'standard'] = 'high_standard'
# rows where the price is less than 540000
menor = data['price'] < 540000
# where the price is less than 540000, set the 'standard' column to 'low_standard'
data.loc[menor, 'standard'] = 'low_standard'
data
###Output
_____no_output_____
###Markdown
4. I would like a report sorted by price containing the following information: property id; date the property became available for purchase; number of bedrooms; total lot size; price; property classification (high or low standard).
###Code
# sorting by the price column
data = data.sort_values('price')
# selecting the columns wanted in the report
col = ['id','date','bedrooms','price','standard','sqft_lot']
report = data[col]
print(report)
# to_csv saves the report to a file; index=False drops the index column
report.to_csv('datasets/report_aula2.csv', index=False)
###Output
_____no_output_____
###Markdown
5. I would like a map showing where the houses are located geographically.
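Depending on the Plotly version and whether a Mapbox access token is configured, the scatter map can come out blank; switching to a token-free base map is one workaround (a sketch, the style choice is an assumption):
```python
import plotly.express as px

mapa = px.scatter_mapbox(data, lat='lat', lon='long', hover_name='id',
                         hover_data=['price'], zoom=3, height=300)
# "open-street-map" tiles render without a Mapbox access token
mapa.update_layout(mapbox_style='open-street-map')
mapa.show()
```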
###Code
# library for creating maps
import plotly.express as px
data_mapa = data[['id','lat','long','price']]
mapa = px.scatter_mapbox(data_mapa, lat='lat',lon='long',
hover_name='id',
hover_data=['price'],
color_discrete_sequence=['fuchsia'],
zoom=3,height=300)
mapa.show()
# function to save the map as an html file
mapa.write_html('datasets/mapa_house_rocket.html')
###Output
_____no_output_____ |
Group Project - 2022 - Q1 - empty (1).ipynb | ###Markdown
Cognitive Neuroscience: Group Project 2022 Final Group Project Code InstructionsMarijn van Wingerden, Department of Cognitive Science and Artificial Intelligence – Tilburg University Academic Year 21-22 Handing in of your codeYou can adapt this script template and hand it in for the weekly Group Project Assignments. Whenever you encounter ... in the code, you should complete the code in place (of course you can add lines before and after). "Your code here" indicates where code blocks should go.Whenever you are asked to make a plot, it should be completed with a meaningful plot title, xlabel and ylabel texts. Figures are started with a Matplotlib figure handle: "fig_Q2A, ax = plt.subplots;". This indicates that a link (called handle) to your figure will be saved in the variable, so we can easily check it when checking your scripts. Whenever a naming convention for a variable is given, use it, because it will allow semi-automatic grading of your project script. Group members:Please list the contributors and their U-numbers here in comments:- -- - - Setting up: list your modules to importFor loading/saving purposes, we will make use of the **os** package.An example worksheet with instructions on how to use the os package will be provided
###Code
%matplotlib notebook
import os
import numpy as np
from pprint import pprint
import pandas as pd
import matplotlib.pyplot as plt
import scipy.fft as fft
###Output
_____no_output_____
###Markdown
Data loadingIn your assignment, you will compare neural data in different trial conditions from the same participant: this is a *within-subject* comparison. You can think of this as a contrast: which spectral features are more present in condition A vs. condition B?The second level of the analysis focuses on group statistics. You will answer a question like: "As a group, do the participants in group [RM/RB/RL] show more spectral power in the [delta/theta/alpha/beta/gamma] band in the ambiguous sentences vs. the non-ambiguous sentences?"The analysis will start with setting up data structures (refer to WorkSheet 0) that will hold the relevant data. Because EEG activity can differ between participants (due to e.g. anatomical differences like skull thickness or skin conductivity), the **absolute** voltages that we record are not completely informative. Instead, we will be looking at **relative** differences within an individual to remove the between-subject effects that we cannot control. Each datafile that you have been given has the trials related to a particular condition (NA-IR and AM-IR, for example). "Control" refers to the non-ambiguous conditions, and "Experimental" to the ambiguous condition. Please note that the SF and OF trial types have been ignored (that is, added together). The datafiles are NumPy arrays that have been saved to disk. These arrays are 3D arrays: - the 0 dimension is the trial repetitions- the 1st dimension is the number of channels- the 2nd dimension if the number of samples in a trial - for the baseline, this is 0.55s of data (276 samples: from -0.05s to +0.5s) - for the evoked period, this is 1.5s of data (751 samples: from -0.5s to +1.0s)You will need to load the datafiles from all participants and add them all together so that we end up with a 4D matrix that has nParticipants x nTrials x nChannels x nTime. You can make your work easier by organising the datafiles in such a way that you put the control.npy files in their own subdirectory, and the experimental.npy files as well. In order to load the files, we can use the os package.Adapt the following so that it works on your machine:
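As a quick orientation on this layout, a single participant's 3-D array can be sliced as follows (a sketch; `EEG_example` and the channel index are illustrative stand-ins, not part of the assignment):
```python
import numpy as np

# EEG_example stands in for one participant's (nTrials, nChans, nSamples) array
EEG_example = np.random.randn(44, 59, 751)          # documented evoked-period shape
mean_trace_ch0 = EEG_example[:, 0, :].mean(axis=0)  # average trace of channel 0 across trials
print(mean_trace_ch0.shape)                          # (751,)
```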
###Code
# enter the path to the base directory where the folder called group_xx is located
path_base = os.path.normpath('...')
group = 'group_04/' # update this to reflect the name of your group with 2 digits or SX for S1, S2, S3
# path_base + group
path_data = os.path.join(path_base,group)
files = os.listdir(path_data)
control_files = list()
experimental_files = list()
control_files_baseline = list()
experimental_files_baseline = list()
for f in files:
# check the files that end with specific extention
# if a given file would need to be excluded, this is how to do it
#if f.rfind("part_10") > -1:
# continue
if f.endswith("control.npy"):
control_files.append(f)
elif f.endswith("experimental.npy"):
experimental_files.append(f)
elif f.endswith("control_baseline.npy"):
control_files_baseline.append(f)
elif f.endswith("experimental_baseline.npy"):
experimental_files_baseline.append(f)
# check that the length of your files list matches the provided datafiles, and contains only .npy datafiles
## EVOKED files
control_files.sort()
pprint(control_files)
print("the number of control files is: ", len(control_files), "\n")
experimental_files.sort()
pprint(experimental_files)
print("the number of experimental files is: ", len(experimental_files), "\n")
## BASELINE files
control_files_baseline.sort()
pprint(control_files_baseline)
print("the number of baseline control files is: ", len(control_files_baseline), "\n")
experimental_files_baseline.sort()
pprint(experimental_files_baseline)
print("the number of baseline experimental files is: ", len(experimental_files_baseline), "\n")
###Output
['group_04_part_01_control.npy',
'group_04_part_02_control.npy',
'group_04_part_03_control.npy',
'group_04_part_04_control.npy',
'group_04_part_05_control.npy',
'group_04_part_06_control.npy',
'group_04_part_07_control.npy',
'group_04_part_08_control.npy',
'group_04_part_09_control.npy',
'group_04_part_10_control.npy',
'group_04_part_11_control.npy',
'group_04_part_12_control.npy',
'group_04_part_13_control.npy',
'group_04_part_14_control.npy',
'group_04_part_15_control.npy']
the number of control files is: 15
['group_04_part_01_experimental.npy',
'group_04_part_02_experimental.npy',
'group_04_part_03_experimental.npy',
'group_04_part_04_experimental.npy',
'group_04_part_05_experimental.npy',
'group_04_part_06_experimental.npy',
'group_04_part_07_experimental.npy',
'group_04_part_08_experimental.npy',
'group_04_part_09_experimental.npy',
'group_04_part_10_experimental.npy',
'group_04_part_11_experimental.npy',
'group_04_part_12_experimental.npy',
'group_04_part_13_experimental.npy',
'group_04_part_14_experimental.npy',
'group_04_part_15_experimental.npy']
the number of experimental files is: 15
['group_04_part_01_control_baseline.npy',
'group_04_part_02_control_baseline.npy',
'group_04_part_03_control_baseline.npy',
'group_04_part_04_control_baseline.npy',
'group_04_part_05_control_baseline.npy',
'group_04_part_06_control_baseline.npy',
'group_04_part_07_control_baseline.npy',
'group_04_part_08_control_baseline.npy',
'group_04_part_09_control_baseline.npy',
'group_04_part_10_control_baseline.npy',
'group_04_part_11_control_baseline.npy',
'group_04_part_12_control_baseline.npy',
'group_04_part_13_control_baseline.npy',
'group_04_part_14_control_baseline.npy',
'group_04_part_15_control_baseline.npy']
the number of baseline control files is: 15
['group_04_part_01_experimental_baseline.npy',
'group_04_part_02_experimental_baseline.npy',
'group_04_part_03_experimental_baseline.npy',
'group_04_part_04_experimental_baseline.npy',
'group_04_part_05_experimental_baseline.npy',
'group_04_part_06_experimental_baseline.npy',
'group_04_part_07_experimental_baseline.npy',
'group_04_part_08_experimental_baseline.npy',
'group_04_part_09_experimental_baseline.npy',
'group_04_part_10_experimental_baseline.npy',
'group_04_part_11_experimental_baseline.npy',
'group_04_part_12_experimental_baseline.npy',
'group_04_part_13_experimental_baseline.npy',
'group_04_part_14_experimental_baseline.npy',
'group_04_part_15_experimental_baseline.npy']
the number of baseline experimental files is: 15
###Markdown
Combining data and matrix pre-allocationnext, you will need to load these files one by one and extract the data for this participant. The data in the NumPy arrays are stored as Trials x Channels x Time. To aggregate across participants, you will thus need to add a 4th dimension to store the data.To be able to adequately pre-allocate the data from the different subjects, we will load one trial subject manually to have a look at the shape/dimensionality of the data:
###Code
EEG = np.load(os.path.join(path_data,control_files[0]))
# control_files is a list of strings, so indexing its first element returns a string
# in this case, we are loading the first entry of control_files, i.e. participant 1
# verify that the number of trials equals 44,
# verify that the number of channels equals 64 or 65
# and verify that there are 751 samples per trace
print("Number of trials = ",...)
print("Number of channels = ", ...)
print("Number of timepoints = ", ...)
# do the same for one of the baseline datafiles (they have a different number of samples)
EEG_base = np.load(os.path.join(path_data,control_files_baseline[0]))
print("Number of trials (base) = ", ...)
print("Number of channels (base) = ", ...)
print("Number of timepoints (base) = ", ...)
###Output
_____no_output_____
###Markdown
Q1 - setting up the data structure and loading data from all participantsThe EEG data is currently stored as a 3-dimensional NumPy array. But to run our time-frequency analysis, we need some more information like the sampling rate and the time axis that corresponds to the stimulus-locked analysis window. In order to set up (=pre-allocate) a matrix that will hold all traces for all participants, we need to know the sizes of the dimensions of this 4-dimensional matrix, and fill up this matrix by looping over participants:
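For orientation only, a minimal sketch of pre-allocating and filling such a 4-D array is given here; it assumes `nParticipants`, `nTrials`, `nChans` and `nSamples` have already been defined, uses illustrative variable names, and is not a substitute for completing the template below:
```python
# sketch: participants x trials x channels x samples
data_control_sketch = np.zeros((nParticipants, nTrials, nChans, nSamples))

for iPart, fname in enumerate(control_files):
    part_eeg = np.load(os.path.join(path_data, fname))[:, :nChans, :]  # keep only the EEG channels
    nTrials_part = part_eeg.shape[0]                                   # trial count may differ per participant
    data_control_sketch[iPart, :nTrials_part, :, :] = part_eeg
```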
###Code
# There are 64 or 65 channels in the dataset. Only channels 1-59 (not python indexes!) are EEG channels
# the remaining channels are EMG and EOG channels that we will ignore in this analysis
# subset your EEG array so that only the EEG channels remain
##
## Your code here
##
# Define nTrials, nChans (=channels), nSamples, nSamples_base and nParticipants.
nTrials = ...
nChans = ...
nSamples = ...
nSamples_base = ...
nParticipants = ...
# Then, pre-allocate a matrix filled with zeros and with size nParticipants x nTrials x nChans x nSamples
# one each for the control, experimental, control_baseline and experimental_baseline data.
# Name them:
# data_control
# data_experimental
# data_control_base
# data_experimental_base
data_control = ...
data_experimental = ...
data_control_base = ...
data_experimental_base = ...
# next, we need to loop over all participant datafiles and add them to the appropriate slice in your 4-D arrays
# For this, you need to use specific array indexing to indicate where in comb_data_(control/experimental)
# each participant's data needs to go. You can and should reuse the data-reading code above.
# CAREFUL! Not every participant may have the same number of (correct) trials in their dataset.
# So for each newly loaded datafile, you need to establish the current number of trials again
# loop over participants, and within each iteration of the loop, load the
# next datafile and fill comb_data_(control/experimental) with the EEG traces (nTrials x nChans x nSamples)
# check the shape of the matrices after filling them
for iPart in range(len(control_files)):
##
## Your code here
##
print("Shape of data_control:",...)
print("Shape of data_experimental:",...)
print("Shape of data_control_base:",...)
print("Shape of data_experimental_base:",...)
###Output
_____no_output_____ |
ml_zero_to_hero/ml_zero_to_hero_part4.ipynb | ###Markdown
Transcribed from [Build an image classifier (ML Zero to Hero - Part 4)](https://www.youtube.com/watch?v=u2TjZzNuly8&list=PLQY2H8rRoyvwLbzbnKJ59NkZvQAW9wLbx&index=16&t=0s)Rock-paper-scissors image recognition with TensorFlow.[Course 2 - Part 8 - Lesson 2 - Notebook (RockPaperScissors)](https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/Course%202%20-%20Part%208%20-%20Lesson%202%20-%20Notebook%20(RockPaperScissors).ipynb)[Rock-paper-scissors image data](https://www.tensorflow.org/datasets/catalog/rock_paper_scissors) Downloading the image data
###Code
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps.zip -O /tmp/rps.zip
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps-test-set.zip -O /tmp/rps-test-set.zip
###Output
--2020-05-27 01:27:02-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.142.128, 2607:f8b0:400e:c07::80
Connecting to storage.googleapis.com (storage.googleapis.com)|74.125.142.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 200682221 (191M) [application/zip]
Saving to: ‘/tmp/rps.zip’
/tmp/rps.zip 100%[===================>] 191.38M 94.2MB/s in 2.0s
2020-05-27 01:27:04 (94.2 MB/s) - ‘/tmp/rps.zip’ saved [200682221/200682221]
--2020-05-27 01:27:05-- https://storage.googleapis.com/laurencemoroney-blog.appspot.com/rps-test-set.zip
Resolving storage.googleapis.com (storage.googleapis.com)... 74.125.142.128, 2607:f8b0:400e:c07::80
Connecting to storage.googleapis.com (storage.googleapis.com)|74.125.142.128|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 29516758 (28M) [application/zip]
Saving to: ‘/tmp/rps-test-set.zip’
/tmp/rps-test-set.z 100%[===================>] 28.15M 76.5MB/s in 0.4s
2020-05-27 01:27:05 (76.5 MB/s) - ‘/tmp/rps-test-set.zip’ saved [29516758/29516758]
###Markdown
Extracting the zip files
###Code
import os
import zipfile
local_zip = '/tmp/rps.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
local_zip = '/tmp/rps-test-set.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
###Output
_____no_output_____
###Markdown
Checking the extracted image files
###Code
rock_dir = os.path.join('/tmp/rps/rock')
paper_dir = os.path.join('/tmp/rps/paper')
scissors_dir = os.path.join('/tmp/rps/scissors')
print('total training rock images:', len(os.listdir(rock_dir)))
print('total training paper images:', len(os.listdir(paper_dir)))
print('total training scissors images:', len(os.listdir(scissors_dir)))
rock_files = os.listdir(rock_dir)
print(rock_files[:10])
paper_files = os.listdir(paper_dir)
print(paper_files[:10])
scissors_files = os.listdir(scissors_dir)
print(scissors_files[:10])
###Output
total training rock images: 840
total training paper images: 840
total training scissors images: 840
['rock01-036.png', 'rock03-072.png', 'rock02-103.png', 'rock04-047.png', 'rock05ck01-031.png', 'rock07-k03-092.png', 'rock01-042.png', 'rock05ck01-002.png', 'rock05ck01-000.png', 'rock03-025.png']
['paper01-072.png', 'paper04-068.png', 'paper07-098.png', 'paper04-009.png', 'paper02-042.png', 'paper02-038.png', 'paper03-095.png', 'paper06-058.png', 'paper06-064.png', 'paper04-055.png']
['scissors03-100.png', 'scissors04-037.png', 'scissors03-091.png', 'testscissors01-102.png', 'testscissors01-012.png', 'testscissors03-084.png', 'testscissors01-077.png', 'testscissors02-005.png', 'scissors03-017.png', 'testscissors02-020.png']
###Markdown
Displaying the rock-paper-scissors images?
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
pic_index = 2
next_rock = [
os.path.join(rock_dir, fname)
for fname in rock_files[pic_index-2:pic_index]]
next_paper = [
os.path.join(paper_dir, fname)
for fname in paper_files[pic_index-2:pic_index]]
next_scissors = [
os.path.join(scissors_dir, fname)
for fname in scissors_files[pic_index-2:pic_index]]
for i, img_path in enumerate(next_rock+next_paper+next_scissors):
# print(img_path)
img = mpimg.imread(img_path)
plt.imshow(img)
plt.axis('Off')
plt.show()
###Output
_____no_output_____
###Markdown
Sorting the images(?) and everything from building the model to training it
###Code
import tensorflow as tf
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator
TRAINING_DIR = "/tmp/rps/"
training_datagen = ImageDataGenerator(
rescale = 1./255,
rotation_range = 40,
width_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True,
fill_mode = 'nearest')
VALIDATION_DIR = "/tmp/rps-test-set"
validation_datagen = ImageDataGenerator(rescale = 1./255)
train_generator = training_datagen.flow_from_directory(
TRAINING_DIR,
target_size = (150,150),
class_mode = 'categorical',
batch_size = 126
)
validation_generator = validation_datagen.flow_from_directory(
VALIDATION_DIR,
target_size = (150,150),
class_mode = 'categorical',
batch_size = 126
)
model = tf.keras.models.Sequential([
# Note the input shape is the size of the image 150x150 with 3 bytes color
# This is the first convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(150,150,3)),
tf.keras.layers.MaxPooling2D(2,2),
# The second convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.5),
# 512 neuron hidden layer,
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(3, activation='softmax')
])
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
history = model.fit(train_generator, epochs=25, steps_per_epoch=20, validation_data=validation_generator, verbose=1, validation_steps=3)
model.save("rps.h5")
###Output
Found 2520 images belonging to 3 classes.
Found 372 images belonging to 3 classes.
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 148, 148, 64) 1792
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 74, 74, 64) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 72, 72, 64) 36928
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 36, 36, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 34, 34, 128) 73856
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 17, 17, 128) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 15, 15, 128) 147584
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 7, 7, 128) 0
_________________________________________________________________
flatten (Flatten) (None, 6272) 0
_________________________________________________________________
dropout (Dropout) (None, 6272) 0
_________________________________________________________________
dense (Dense) (None, 512) 3211776
_________________________________________________________________
dense_1 (Dense) (None, 3) 1539
=================================================================
Total params: 3,473,475
Trainable params: 3,473,475
Non-trainable params: 0
_________________________________________________________________
Epoch 1/25
20/20 [==============================] - 19s 935ms/step - loss: 1.4316 - accuracy: 0.3448 - val_loss: 1.0965 - val_accuracy: 0.3333
Epoch 2/25
20/20 [==============================] - 19s 954ms/step - loss: 1.0590 - accuracy: 0.4365 - val_loss: 0.9261 - val_accuracy: 0.5780
Epoch 3/25
20/20 [==============================] - 19s 962ms/step - loss: 0.8620 - accuracy: 0.6060 - val_loss: 0.8482 - val_accuracy: 0.5887
Epoch 4/25
20/20 [==============================] - 19s 947ms/step - loss: 0.6838 - accuracy: 0.7004 - val_loss: 0.3878 - val_accuracy: 0.8817
Epoch 5/25
20/20 [==============================] - 19s 947ms/step - loss: 0.5453 - accuracy: 0.7690 - val_loss: 0.4808 - val_accuracy: 0.8038
Epoch 6/25
20/20 [==============================] - 19s 940ms/step - loss: 0.4536 - accuracy: 0.8119 - val_loss: 0.2538 - val_accuracy: 0.9704
Epoch 7/25
20/20 [==============================] - 19s 956ms/step - loss: 0.3355 - accuracy: 0.8774 - val_loss: 0.0817 - val_accuracy: 0.9839
Epoch 8/25
20/20 [==============================] - 19s 968ms/step - loss: 0.2395 - accuracy: 0.9040 - val_loss: 0.0295 - val_accuracy: 1.0000
Epoch 9/25
20/20 [==============================] - 19s 937ms/step - loss: 0.2212 - accuracy: 0.9139 - val_loss: 0.0152 - val_accuracy: 1.0000
Epoch 10/25
20/20 [==============================] - 19s 950ms/step - loss: 0.2685 - accuracy: 0.9028 - val_loss: 0.0565 - val_accuracy: 0.9973
Epoch 11/25
20/20 [==============================] - 19s 954ms/step - loss: 0.1235 - accuracy: 0.9599 - val_loss: 0.0417 - val_accuracy: 0.9866
Epoch 12/25
20/20 [==============================] - 19s 931ms/step - loss: 0.1209 - accuracy: 0.9532 - val_loss: 0.0092 - val_accuracy: 1.0000
Epoch 13/25
20/20 [==============================] - 19s 944ms/step - loss: 0.1521 - accuracy: 0.9389 - val_loss: 0.0257 - val_accuracy: 1.0000
Epoch 14/25
20/20 [==============================] - 19s 947ms/step - loss: 0.0921 - accuracy: 0.9663 - val_loss: 0.0095 - val_accuracy: 1.0000
Epoch 15/25
20/20 [==============================] - 19s 939ms/step - loss: 0.0899 - accuracy: 0.9714 - val_loss: 0.0257 - val_accuracy: 1.0000
Epoch 16/25
20/20 [==============================] - 19s 960ms/step - loss: 0.0987 - accuracy: 0.9675 - val_loss: 0.0797 - val_accuracy: 0.9812
Epoch 17/25
20/20 [==============================] - 19s 935ms/step - loss: 0.0533 - accuracy: 0.9825 - val_loss: 0.0022 - val_accuracy: 1.0000
Epoch 18/25
20/20 [==============================] - 19s 940ms/step - loss: 0.0650 - accuracy: 0.9810 - val_loss: 0.0379 - val_accuracy: 0.9866
Epoch 19/25
20/20 [==============================] - 19s 954ms/step - loss: 0.1267 - accuracy: 0.9480 - val_loss: 0.0722 - val_accuracy: 0.9866
Epoch 20/25
20/20 [==============================] - 19s 932ms/step - loss: 0.0303 - accuracy: 0.9901 - val_loss: 0.0131 - val_accuracy: 0.9946
Epoch 21/25
20/20 [==============================] - 19s 935ms/step - loss: 0.0584 - accuracy: 0.9750 - val_loss: 0.2950 - val_accuracy: 0.8145
Epoch 22/25
20/20 [==============================] - 19s 939ms/step - loss: 0.0936 - accuracy: 0.9718 - val_loss: 0.0042 - val_accuracy: 1.0000
Epoch 23/25
20/20 [==============================] - 19s 946ms/step - loss: 0.0338 - accuracy: 0.9897 - val_loss: 0.0683 - val_accuracy: 0.9677
Epoch 24/25
20/20 [==============================] - 19s 948ms/step - loss: 0.0277 - accuracy: 0.9909 - val_loss: 0.0094 - val_accuracy: 1.0000
Epoch 25/25
20/20 [==============================] - 19s 943ms/step - loss: 0.1001 - accuracy: 0.9663 - val_loss: 0.0016 - val_accuracy: 1.0000
###Markdown
Plotting how the training and validation accuracy evolve
###Code
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
###Output
_____no_output_____
###Markdown
[Own addition] Saving the model and its parameters for now
###Code
from google.colab import files
model.save_weights("rps.hdf5")
files.download("rps.hdf5")
files.download("rps.h5")
###Output
_____no_output_____
###Markdown
Upload an image and make a prediction with the trained model
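`model.predict` returns raw softmax scores ordered by the generator's class indices; the label mapping can be recovered from `train_generator.class_indices` (a small sketch, assuming the generator from the training cell is still in scope):
```python
import numpy as np

# invert e.g. {'paper': 0, 'rock': 1, 'scissors': 2} into index -> label
index_to_label = {v: k for k, v in train_generator.class_indices.items()}
predicted_label = index_to_label[int(np.argmax(classes[0]))]
print(predicted_label)
```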
###Code
import numpy as np
from google.colab import files
from keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
# predicting images
path = fn
img = image.load_img(path, target_size=(150,150))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = model.predict(images, batch_size=10)
print(fn)
print(classes)
###Output
_____no_output_____ |
image_operate.ipynb | ###Markdown
随机翻转
###Code
def randomflip(img):
sw = random.randint(0, 2)
if sw == 0:
        dst = cv2.flip(img, 1) # flip around the y axis
    elif sw == 1:
        dst = cv2.flip(img, 0) # flip around the x axis
    else:
        dst = cv2.flip(img, -1) # flip around both axes
return dst
###Output
_____no_output_____
###Markdown
Morphological gradient
###Code
def tiduyunsuan(img):
k = np.ones((7, 7), np.uint8)
dst = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, k)
return dst
###Output
_____no_output_____
###Markdown
Black-hat operation
###Code
def blackhat(img):
k = np.ones((19, 19), np.uint8)
dst = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, k)
return dst
###Output
_____no_output_____
###Markdown
Grayscale image
###Code
def togray(img):
dst = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
return dst
###Output
_____no_output_____
###Markdown
Adding noise
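The pixel-by-pixel loop below is easy to follow but slow on large images; a vectorized version of the same salt-and-pepper idea is sketched here for comparison (the function name is illustrative, and the 0/150 values are kept to match the loop version):
```python
import numpy as np

def sp_noise_fast(image, prob):
    """Vectorized salt-and-pepper noise; works on grayscale or BGR images."""
    rnd = np.random.random(image.shape[:2])
    dst = image.copy()
    dst[rnd < prob] = 0           # "pepper" pixels
    dst[rnd > 1 - prob] = 150     # "salt" pixels (150 to match the loop below)
    return dst
```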
###Code
'''
Add salt-and-pepper noise to an image
prob: noise ratio
'''
def sp_noise(image, prob):
dst = np.zeros(image.shape,np.uint8)
thres = 1 - prob
for i in range(image.shape[0]):
for j in range(image.shape[1]):
rdn = random.random()
if rdn < prob:
dst[i][j] = 0
elif rdn > thres:
dst[i][j] = 150
else:
dst[i][j] = image[i][j]
return dst
###Output
_____no_output_____
###Markdown
Random crop
###Code
def Randomcorp(img):
h=random.randint(0,159)
w=random.randint(0,159)
img = cv2.resize(img,(384,384))
cropped = img[h:h+224, w:w+224]#384
return cropped
# cell for previewing the images
plt.figure(figsize=(4, 4))
img = cv2.imread('TEST/m/train_m610.jpg')
Imshow(img)
dst = blackhat(img)
Imshow(dst)
###Output
_____no_output_____
###Markdown
Batch processing
###Code
i = 0
for jpgfile in glob.glob('Tumor/Training/pituitary_tumor/*.jpg'):
img = cv2.imread(jpgfile)
# ——————————————————————
# function
dst = Randomcorp(img)
# ——————————————————————
cv2.imwrite('Tumor/Training/pituitary_tumor/rc_train_p{}.jpg'.format(i), dst)
i = i + 1
###Output
_____no_output_____ |
E-0 Uso de sympy.ipynb | ###Markdown
Using sympy Basic concepts: defining symbols and operations**To represent values you can use:**
###Code
# Automatic LaTeX output
from sympy import Symbol
x = Symbol("x")
x**2+2*x+1
# Plain output
from sympy import Symbol, pprint
x = Symbol("x")
ec = x**2+2*x+1
print(ec)
# Output using pprint
import sympy as sp
x = Symbol("x")
ec = x**2+2*x+1
sp.pprint(ec)
# LaTeX-style output
"""Initial configuration"""
from sympy.interactive import printing
printing.init_printing(use_latex=True)
"""Required module"""
import sympy as sp
x = sp.Symbol("x")
ecuacion = x**2 +2*x +1
print("Ecuacion")
display(ecuacion)
###Output
Ecuacion
###Markdown
Defining equations
###Code
"""Configuracion incial"""
from sympy.interactive import printing
printing.init_printing(use_latex=True)
"""modulos necesarios"""
import sympy as sp
# Declaring a symbol
x = sp.Symbol("x")
# Declaring multiple symbols
y,z = sp.symbols("y,z")
ecuacion = x**2 +2*y*z**2 +2*y**2*z +z**2
print("La ecuacion definida es:")
display(ecuacion)
###Output
La ecuacion definida es:
###Markdown
Substituting expressions**_Using the subs() method_**
###Code
from sympy.interactive import printing
printing.init_printing(use_latex=True)
import sympy as sp
x,y = sp.symbols("x,y")
expresion = x*x +x*y +x*y +y*y
print("Expresion: ", end="\n")
display(expresion)
print("\nSustituyendo valores y=2, x=1")
est_valor = expresion.subs({x:1, y:2})
display(est_valor)
print("\n\nSegunda sustitucion x = 1-y\n")
est_valor = expresion.subs({x:y**2+2})
display(est_valor)
###Output
Expresion:
###Markdown
Factoring expressions ```markdown Use of * factor() * expand()```
###Code
# Simple factorization
from sympy.interactive import printing
printing.init_printing(use_latex=True)
import sympy as sp
x,y = sp.symbols("x,y")
expresion = x**3 +3*x**2*y +3*x*y**2 +y**3
display(sp.factor(expresion))
from sympy.interactive import printing
printing.init_printing(use_latex=True)
import sympy as sp
x,y = sp.symbols("x,y")
# Setting up the equation we will work with
diff_cuadrados = x**2 - y**2
print("Diferencia de cuadrados")
display(diff_cuadrados)
print()
# Using factor
ec_factor = sp.factor(diff_cuadrados)
print("Expresion factorizada")
display(ec_factor)
print()
# Using expand
print("Expresion regresada a su forma original")
display(sp.expand(ec_factor))
###Output
Diferencia de cuadrados
###Markdown
Using sympify()**READING EQUATIONS ENTERED BY THE USER**Used to convert a string into something the sympy library can work with
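A minimal non-interactive example of the same idea (the expression string is illustrative):
```python
import sympy as sp

expr = sp.sympify("x**2 + 2*x + 1")     # str -> SymPy expression
print(expr * 2)                          # the object now supports symbolic arithmetic
print(expr.subs({sp.Symbol("x"): 3}))    # 16
```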
###Code
# Defining expressions from keyboard input
from sympy.interactive import printing
printing.init_printing(use_latex=True)
from sympy import sympify
from sympy.core.sympify import SympifyError
import sympy as sp
expresion = input("Ingrese su expresion matematica: ")
try:
expresion = sp.sympify(expresion)
print("La expresion multiplicada por dos es:")
display(expresion * 2)
except SympifyError:
print("Valor invalido")
# Multiplying expressions
from sympy.interactive import printing
printing.init_printing(use_latex=True)
from sympy.core.sympify import SympifyError
import sympy as sp
def producto(expre1, expre2):
print("\nProducto de:")
prod =(expre1 * expre2)
display(prod)
display(prod.expand())
print("MULTIPLICACION DE ECUACIONES")
expre1 = input("Ingrese su primera ecuacion")
expre2 = input("Ingrese su segunda ecuacion")
try:
expre1 = sp.sympify(expre1)
expre2 = sp.sympify(expre2)
except SympifyError:
print("Valor invalido")
else:
producto(expre1, expre2)
###Output
MULTIPLICACION DE ECUACIONES
###Markdown
Solving equations Using solve()Used to find the solution of an equation; the function solves expressions on the understanding that they are equal to zero
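When an equation is not already written in the form "expression = 0", `sympy.Eq` lets you state both sides explicitly instead of moving everything to one side by hand (a small sketch):
```python
import sympy as sp

x = sp.Symbol("x")
# x - 10 = 7 written as an explicit equality instead of x - 10 - 7
print(sp.solve(sp.Eq(x - 10, 7), x))   # [17]
```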
###Code
from sympy.interactive import printing
printing.init_printing(use_latex=True)
import sympy as sp
x = sp.Symbol("x")
ecuacion = x -10 -7
# Returns a list with the value that makes the equation zero.
display(sp.solve(ecuacion))
display(sp.solve(ecuacion, dict=True))
# Solving quadratic equations
from sympy.interactive import printing
printing.init_printing(use_latex=True)
from sympy.core.sympify import SympifyError
import sympy as sp
ecuacion = input("ingrese la ecuacion a resolver")
try:
ecuacion = sp.sympify(ecuacion)
print("Ecuacion\t")
display(ecuacion)
except SympifyError:
print("Valor invalido")
else:
print("Soluciones:\t",end="")
display(sp.solve(ecuacion, dict=True))
# Solving for one variable in terms of others
## Example with the quadratic equation
from sympy.interactive import printing
printing.init_printing(use_latex=True)
import sympy as sp
x,a,b,c = sp.symbols("x,a,b,c")
exprecion = a*x**2 + b*x +c
print("Las soluciones de una ecuacion cuadratica son: ")
display(sp.solve(exprecion, x, dict=True))
###Output
Las soluciones de una ecuacion cuadratica son:
###Markdown
Solving a system of linear equations
###Code
# Initial configuration
from sympy.interactive import printing
printing.init_printing(use_latex=True)
# Import the packages we are going to use
import sympy as sp
x,y = sp.symbols("x,y")
ecuacion_1 = 2*x + 3*y -6
ecuacion_2 = 3*x + 2*y -12
# call solve with both equations in a tuple
print("Sistema de ecuaciones lineales")
display(ecuacion_1, ecuacion_2)
print("\nSolucion")
display(sp.solve((ecuacion_1, ecuacion_2), dict=True))
print("\nComprando solucion")
soluciones = sp.solve((ecuacion_1, ecuacion_2), dict= True)
print("El valor devuelto es una lista",soluciones)
soluciones = soluciones[0]
display(ecuacion_1.subs({x: soluciones[x], y: soluciones[y]}))
display(ecuacion_2.subs({x: soluciones[x], y: soluciones[y]}))
###Output
Sistema de ecuaciones lineales
###Markdown
Working with series
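For comparison, SymPy can also produce truncated series directly; the sketch below expands -log(1 - x), whose Taylor series is exactly the x + x**2/2 + x**3/3 + ... sum built by the loop (the term count is illustrative):
```python
import sympy as sp

x = sp.Symbol("x")
# series of -log(1 - x) around 0, truncated at order 6
print(sp.series(-sp.log(1 - x), x, 0, 6))
```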
###Code
from sympy.interactive import printing
printing.init_printing(use_latex=True)
import sympy as sp
def imprimir_serie(n):
    printing.init_printing(order="rev-lex") # sets the printing order
x = sp.Symbol("x")
serie = x
for i in range(2, n+1):
serie = serie+((x**i)/i)
display(serie)
print()
sp.pprint(serie)
n = int(input("Ingrese el numero de terminos que quiere en la serie: "))
imprimir_serie(n)
###Output
Ingrese el numero de terminos que quiere en la serie: 8
###Markdown
Plotting with sympy
###Code
# Plotting a linear function
from sympy.interactive import printing
printing.init_printing(use_latex=True)
import sympy as sp
x = sp.Symbol("x")
print("Graficando la funcion y=2x+3")
display(sp.plot(2*x+3))
# Plotting while restricting the range of x values
# Setting the axis titles
# Plotting a linear function
from sympy.interactive import printing
printing.init_printing(use_latex=True)
import sympy as sp
x = sp.Symbol("x")
print("Graficando la funcion y=2x+3")
display(
sp.plot(2*x+3,x+1,(x, -5, 5),
title="Grafico A",
xlabel = "x ",
ylabel="y=2*x+3")
)
###Output
Graficando la funcion y=2x+3
###Markdown
Plotting expressions entered by the user
###Code
from sympy.interactive import printing
printing.init_printing(use_latex=True)
import sympy as sp
# Since a list is returned, we have to pull the expression out of it
def graficar_expresion(expresion):
y=sp.Symbol("y")
soluciones=sp.solve(expresion, y)
    # Since a list is returned, we have to pull the expression out of it
y = soluciones[0]
sp.plot(y)
def main():
    # The expression is converted into terms of x
expresion = input("ingrese su ecuacion igualando a cero")
try:
expresion = sp.sympify(expresion)
except sp.SympifyError:
print("Entrada invalida")
else:
print("Expresion a graficar")
display(expresion)
graficar_expresion(expresion)
main()
###Output
ingrese su ecuacion igualando a cero x**2+3*y+1
###Markdown
Aplicaciones
###Code
# Find the factors of an expression
from sympy.interactive import printing
printing.init_printing(use_latex=True)
import sympy as sp
def factorizar(expresion):
factores=sp.factor(expresion)
display(factores)
def inicio():
expresion = input("Ingrese la a expresion a factorizar")
try:
expresion = sp.simplify(expresion)
except sp.SympifyError:
print("Entrada no valida")
else:
print("Factores de la expresion")
display(expresion)
factorizar(expresion)
inicio()
# Equation plotter
from sympy.interactive import printing
printing.init_printing(use_latex=True)
import sympy as sp
def graficar():
y=sp.Symbol("y")
soluciones=sp.solve(expresion, y)
    # Since a list is returned, we have to pull the expression out of it
y = soluciones[0]
sp.plot(y)
def inicio():
    expre1, expre2 = (input("Ingrese expresion igualando a cero: ") for _ in range(2))
try:
expre1 = sp.sympify(expre1)
expre2 = sp.sympify(expre2)
except sp.SympifyError:
print("Entrada invalida")
else:
###Output
_____no_output_____ |
projects/project02/Notebooks/LSTM_Unidirectional.ipynb | ###Markdown
Utilities
###Code
def get_error( scores , labels ):
bs=scores.size(0)
predicted_labels = scores.argmax(dim=1)
indicator = (predicted_labels == labels)
num_matches=indicator.sum()
corr=num_matches/len(predicted_labels)
return corr.item()
def save_model_parameters():
# save models
torch.save({
'epoch': epoch,
'lstm_model':lstm_net.state_dict(),
'lstm_optimizer': lstm_optimizer.state_dict(),
'train_loss': loss_plt, 'train_accuracy': accu_plt,
'test_loss': test_loss_plt,
'test_accuracy': accuracy_test_plt, 'confusion matrix_parameters':confusion_mtx_parameters }, 'lstm_single_Basic Model.pt')
# 'incorrect_to_correct':confusion_mtx_parameters[0], 'correct_to_correct':confusion_mtx_parameters[2],
# 'correct_to_incorrect':confusion_mtx_parameters[1],'regenerate':confusion_mtx_parameters[3]
#print(in_word, out_word)
def confusion_parameters(scores,target_tensor,inpute_tensor,conf_counter):
if torch.all(torch.eq(target_tensor, scores.argmax(dim=1)))==1 and torch.all(torch.eq(inpute_tensor,target_tensor))==0: ### making incorrect->correct
conf_counter[0] +=1
if torch.all(torch.eq(inpute_tensor, scores.argmax(dim=1)))==0 and torch.all(torch.eq(inpute_tensor,target_tensor))==1: ### making correct->incorrect
conf_counter[1] +=1
if torch.all(torch.eq(inpute_tensor, scores.argmax(dim=1)))==1 and torch.all(torch.eq(inpute_tensor,target_tensor))==1: ### making correct->correct
conf_counter[2] +=1
    if torch.all(torch.eq(inpute_tensor, scores.argmax(dim=1)))==1 and torch.all(torch.eq(inpute_tensor,target_tensor))==0: ### incorrect input regenerated unchanged (not corrected)
conf_counter[3] +=1
return conf_counter
###Output
_____no_output_____
###Markdown
Evaluation
###Code
def eval_on_test_set():
# to deactivate dropout regularization during testing
lstm_net.eval()
running_loss=0
num_batches=0
num_matches=0
acc_plt=[]
loss_plt=[]
# counts for computing correct and incorrect proportion
count=torch.zeros(4)
cnt_list=torch.zeros(4)
cnf_mtx_count=torch.zeros(4)
for i in range(0,num_test_words):
h = torch.zeros(1,batch_of_words, hidden_size).cuda()
c = torch.zeros(1,batch_of_words, hidden_size).cuda()
h=h.to(device)
c=c.to(device)
word_length= len(test_data_1 [i])
inpute_tensor= test_data_1[i].cuda()
target_tensor = test_labels_1[i].cuda()
# sending to GPU
inpute_tensor=inpute_tensor.to(device)
target_tensor=target_tensor.to(device)
# forward pass
scores_char, h, c= lstm_net(inpute_tensor.view(word_length,1), h, c)
# reshape before calculating the loss for easier slicing of mini batch of words
scores_char = scores_char.view(word_length*batch_of_words,vocab_size)
target_tensor = target_tensor.contiguous()
target_tensor = target_tensor.view(word_length*batch_of_words)
        # calculating the loss over the batch of characters that make up a word
loss_char= criterion(scores_char, target_tensor)# do we need to add the loss for every
#================================================#
        # accumulate the loss
running_loss+= loss_char.item()
num_batches+=1
# computing accuracy
num_matches+= get_error(scores_char, target_tensor)
#=======================================================#
        # compute confusion parameters
cnf_mtx_count+=confusion_parameters(scores_char,target_tensor,inpute_tensor,count)
#============================================#
###====Different Metrics for evaluation=============###
# counter for computing confusion matrix
# cnt_list[0]+=cnf_mtx_count[0].item()
# cnt_list[1]+=cnf_mtx_count[1].item()
# cnt_list[2]+=cnf_mtx_count[2].item()
# cnt_list[3]+=cnf_mtx_count[3].item()
# accuracy and loss
accuracy= (num_matches/num_test_words)*100
acc_plt.append(accuracy)
total_loss = running_loss/num_batches
loss_plt.append(total_loss)
# printing results
print('Test==: loss = ',(total_loss),'\t accuracy=', accuracy,'%' )
return acc_plt, loss_plt, cnf_mtx_count
###Output
_____no_output_____
###Markdown
Training
###Code
eval_on_test_set()
start=time.time()
# shuff_index=torch.LongTensor(485224).random_(0,485224)
# train_data_2=train_data_1[:,shuff_index]
# train_labels_2=train_labels_1[:,shuff_index]
accu_plt=[]
loss_plt=[]
test_loss_plt=[]
accuracy_test_plt=[]
confusion_mtx_parameters=[]
for epoch in range(1,1000):
# to activate dropout during training
lstm_net.train()
# divide the learning rate by 3 except after the first epoch
if epoch % 10==0:
my_lr = my_lr / 1.1
# create a new optimizer at the beginning of each epoch: give the current learning rate.
lstm_optimizer=torch.optim.SGD( lstm_net.parameters() , lr=my_lr )
# set the initial h and c to be the zero vector
# set the running quatities to zero at the beginning of the epoch
running_loss=0
num_batches=0
num_matches=0
# loop across batch of words
for i in range(0,num_train_words):
        # initialize the hidden state for every word since the words are independent
h = torch.zeros(1,batch_of_words, hidden_size).cuda()
c = torch.zeros(1,batch_of_words, hidden_size).cuda()
h=h.to(device)
c=c.to(device)
word_length= len(train_data_1 [i])
# Set the gradients to zeros
lstm_optimizer.zero_grad()
minibatch_words = train_data_1[i ]
minibatch_labels = train_labels_1[ i]
# sending to GPU
minibatch_words=minibatch_words.to(device)
minibatch_labels=minibatch_labels.to(device)
# Detach to prevent from backpropagating all the way to the beginning
# Then tell Pytorch to start tracking all operations that will be done on h and c
h=h.detach()
c=c.detach()
h=h.requires_grad_()
c=c.requires_grad_()
# forward pass
scores_char, h, c = lstm_net(minibatch_words.view(word_length,1), h, c)
# reshape before calculating the loss for easier slicing of mini batch of words
scores_char = scores_char.view( word_length*batch_of_words , vocab_size)
minibatch_labels = minibatch_labels.contiguous()
minibatch_labels = minibatch_labels.view(word_length*batch_of_words )
        # calculating the loss over the batch of characters that make up a word
loss_char= criterion(scores_char, minibatch_labels)
#===============================================
        # combined loss (only the character loss is used here)
combined_loss=loss_char
# backward pass to compute dL/dR, dL/dV and dL/dW
combined_loss.backward()
        # update the weights
lstm_optimizer.step()
# update the running loss
running_loss += combined_loss.detach().item()
num_batches += 1
num_matches += get_error(scores_char, minibatch_labels)
# end of iteration
# compute for full training set
total_loss = running_loss/num_batches
loss_plt.append(total_loss)
accuracy= (num_matches/num_train_words)*100
accu_plt.append(accuracy)
# Compute the time
elapsed = time.time()-start
print('')
print('Train:::', 'epoch=',epoch,'\t lr=', my_lr, '\t (loss)=',(total_loss),'\t (accuracy)=' , (accuracy),'%','\t time=', elapsed)
# evaluate on test set to monitor the loss
acc_tst_plt, loss_tst_plt, confusion_param=eval_on_test_set()
# saving test parameters
test_loss_plt.append(loss_tst_plt)
accuracy_test_plt.append(acc_tst_plt)
confusion_mtx_parameters.append(confusion_param)
#### Saving model parameters every epoch
save_model_parameters()
print(confusion_mtx_parameters)
checkpoint = torch.load('lstm_single_char_50_mutation_bid_drop.pt')
torch.save(lstm_net, 'this_morning_w_do.pth')
print(checkpoint['test_loss'])
torch.save(lstm_net,'single_mistake_model_lstm_drop_out_1.pth')
checkpoint = torch.load('lstm_single_char_50_mutation_bid_drop_02.pt')
print(checkpoint.keys())  # inspect which entries were saved in the checkpoint
# 'incorrect_to_correct':confusion_mtx_parameters[0], 'correct_to_correct':confusion_mtx_parameters[2],
# # 'correct_to_incorrect':confusion_mtx_parameters[1],'regenerate':confusion_mtx_parameters[3]
###Output
[0.008481162786483764, 0.009953457117080688, 0.008462512493133545, 0.008568865060806275, 0.008872705698013305, 0.009778851270675659, 0.009399718046188355, 0.009138405323028564, 0.008280891180038451, 0.00938764214515686, 0.009248465299606323, 0.010755127668380738, 0.009009695053100586, 0.008971762657165528, 0.01076977252960205, 0.01007135510444641, 0.010158282518386842, 0.008264505863189697, 0.008282959461212158, 0.01121429204940796, 0.008798831701278686, 0.009055852890014648, 0.010330718755722047, 0.007766813039779663, 0.009448951482772827, 0.009692203998565675, 0.00912320613861084, 0.009875631332397461, 0.008551460504531861, 0.01096159815788269, 0.009085208177566528, 0.009600162506103516, 0.00832710862159729, 0.010008686780929565, 0.009891873598098755, 0.010521680116653442, 0.010294747352600098, 0.009758317470550537, 0.010009801387786866, 0.008804881572723388, 0.008309823274612427, 0.009526288509368897, 0.008488410711288452, 0.008953988552093506, 0.009343469142913818, 0.00920448899269104, 0.010769516229629517, 0.007997560501098632, 0.010012930631637574, 0.008238482475280761, 0.007914996147155762, 0.008441448211669922, 0.008660966157913208, 0.008735442161560058, 0.01002773642539978, 0.007890093326568603, 0.0065487980842590336, 0.009454089403152465, 0.011431741714477538, 0.007453322410583496, 0.009435546398162842, 0.007637321949005127, 0.008381032943725586, 0.009216135740280152]
|