lab_day5/day5_code03 RNN-3 text sequence data.ipynb | ###Markdown
RNN models for text dataWe analyse here data from the Internet Movie Database (IMDB: https://www.imdb.com/).We use an RNN to build a classifier for movie reviews: given the text of a review, the model will predict whether it is a positive or a negative review. Steps1. Load the dataset (50K IMDB movie reviews)2. Clean the dataset3. Encode the data4. Split into training and testing sets5. Tokenize and pad/truncate reviews6. Build the RNN model7. Train the model8. Test the model9. Applications
###Code
## import relevant libraries
import re
import nltk
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import seaborn as sns
import itertools
import matplotlib.pyplot as plt
from scipy import stats
from keras.datasets import imdb
from nltk.corpus import stopwords # to get collection of stopwords
from sklearn.model_selection import train_test_split # for splitting dataset
from tensorflow.keras.preprocessing.text import Tokenizer # to encode text to int
from tensorflow.keras.preprocessing.sequence import pad_sequences # to do padding or truncating
from tensorflow.keras.models import Sequential # the model
from tensorflow.keras.layers import Embedding, LSTM, Dense # layers of the architecture
from tensorflow.keras.callbacks import ModelCheckpoint # save model
from tensorflow.keras.models import load_model # load saved model
nltk.download('stopwords')
###Output
_____no_output_____
###Markdown
Reading the dataWe read a raw extract from IMDB hosted on a GitHub page:
###Code
DATAURL = 'https://raw.githubusercontent.com/hansmichaels/sentiment-analysis-IMDB-Review-using-LSTM/master/IMDB%20Dataset.csv'
data = pd.read_csv(DATAURL)
print(data)
## alternative way of getting the data, already preprocessed
# (X_train,Y_train),(X_test,Y_test) = imdb.load_data(path="imdb.npz",num_words=None,skip_top=0,maxlen=None,start_char=1,seed=13,oov_char=2,index_from=3)
###Output
_____no_output_____
###Markdown
Preprocessing The original reviews are "dirty": they contain HTML tags, punctuation, uppercase letters, stop words, etc., which are not good for model training. Therefore, we now need to clean the dataset.**Stop words** are commonly used words in a sentence that are usually ignored in the analysis (e.g. "the", "a", "an", "of", etc.)
###Code
english_stops = set(stopwords.words('english'))
list(itertools.islice(english_stops, 10))  # peek at ten of the English stop words
def prep_dataset():
x_data = data['review'] # Reviews/Input
y_data = data['sentiment'] # Sentiment/Output
# PRE-PROCESS REVIEW
x_data = x_data.replace({'<.*?>': ''}, regex = True) # remove html tag
x_data = x_data.replace({'[^A-Za-z]': ' '}, regex = True) # remove non alphabet
x_data = x_data.apply(lambda review: [w for w in review.split() if w not in english_stops]) # remove stop words
x_data = x_data.apply(lambda review: [w.lower() for w in review]) # lower case
# ENCODE SENTIMENT -> 0 & 1
y_data = y_data.replace('positive', 1)
y_data = y_data.replace('negative', 0)
return x_data, y_data
x_data, y_data = prep_dataset()
print('Reviews')
print(x_data, '\n')
print('Sentiment')
print(y_data)
###Output
_____no_output_____
###Markdown
Split datasetWe use the `train_test_split()` function to partition the data into an 80% training set and a 20% test set
###Code
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size = 0.2)
###Output
_____no_output_____
###Markdown
A little bit of EDA
###Code
print("x train shape: ",x_train.shape)
print("y train shape: ",y_train.shape)
print("x test shape: ",x_test.shape)
print("y test shape: ",y_test.shape)
###Output
_____no_output_____
###Markdown
Distribution of classes in the training set
###Code
plt.figure();
sns.countplot(y_train);
plt.xlabel("Classes");
plt.ylabel("Frequency");
plt.title("Y Train");
review_len_train = []
review_len_test = []
for i,j in zip(x_train,x_test):
review_len_train.append(len(i))
review_len_test.append(len(j))
print("min train: ", min(review_len_train), "max train: ", max(review_len_train))
print("min test: ", min(review_len_test), "max test: ", max(review_len_test))
###Output
_____no_output_____
###Markdown
Tokenize and pad/truncateRNN models only accept numeric data, so we need to encode the reviews. `tensorflow.keras.preprocessing.text.Tokenizer` is used to encode the reviews into integers, where each unique word is automatically indexed (using `fit_on_texts`) based on the training data. x_train and x_test are then converted to integer sequences using `texts_to_sequences`. Each review has a different length, so we pad (with 0) or truncate every review to the same length (in this case, the mean length of all training reviews) using `tensorflow.keras.preprocessing.sequence.pad_sequences`
###Code
def get_max_length():
review_length = []
for review in x_train:
review_length.append(len(review))
return int(np.ceil(np.mean(review_length)))
# ENCODE REVIEW
token = Tokenizer(lower=False) # no need lower, because already lowered the data in load_data()
token.fit_on_texts(x_train)
x_train = token.texts_to_sequences(x_train)
x_test = token.texts_to_sequences(x_test)
max_length = get_max_length()
x_train = pad_sequences(x_train, maxlen=max_length, padding='post', truncating='post')
x_test = pad_sequences(x_test, maxlen=max_length, padding='post', truncating='post')
## size of vocabulary
total_words = len(token.word_index) + 1 # add 1 because of 0 padding
print('Encoded X Train\n', x_train, '\n')
print('Encoded X Test\n', x_test, '\n')
print('Maximum review length: ', max_length)
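# --- Illustrative toy example (added; not part of the original pipeline) ---
# Shows what fit_on_texts / texts_to_sequences / pad_sequences do on two tiny tokenised "reviews".
toy_docs = [['great', 'movie'], ['terrible', 'movie', 'boring', 'plot']]
toy_token = Tokenizer(lower=False)
toy_token.fit_on_texts(toy_docs)
print('Toy word index: ', toy_token.word_index)
print('Toy padded sequences:\n', pad_sequences(toy_token.texts_to_sequences(toy_docs),
                                               maxlen=3, padding='post', truncating='post'))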
x_train[0,0]
###Output
_____no_output_____
###Markdown
Build model**Embedding Layer**: creates a word vector for each word in the vocabulary, grouping words that are related or have a similar meaning by analyzing the other words around them**LSTM Layer**: decides which data to keep or throw away by considering the current input, the previous output, and the previous memory. There are some important components in an LSTM.- *Forget Gate*, decides which information is kept or thrown away- *Input Gate*, updates the cell state by passing the previous output and the current input into a sigmoid activation function- *Cell State*, calculates the new cell state: the old state is multiplied by the forget vector (values are dropped when multiplied by something near 0) and then added to the output of the input gate to update the cell state value- *Output Gate*, decides the next hidden state, which is used for predictions**Dense Layer**: processes the output of the LSTM layer and uses the sigmoid activation function because the output is only 0 or 1
###Code
# ARCHITECTURE
model = Sequential()
model.add(Embedding(total_words, 32, input_length = max_length))
model.add(LSTM(64))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
print(model.summary())
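# --- Illustrative notes (added) --- rough shape walk-through of the architecture above:
#   input              -> (batch, max_length)      integer word indices
#   Embedding(..., 32) -> (batch, max_length, 32)  one 32-d vector per token
#   LSTM(64)           -> (batch, 64)              final hidden state only
#   Dense(1, sigmoid)  -> (batch, 1)               probability that the review is positive
print(model.output_shape)  # expected: (None, 1)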
###Output
_____no_output_____
###Markdown
Training the modelFor training we fit the x_train (input) and y_train (output/label) data to the RNN model. We use a mini-batch learning method with a batch_size of 128 and 5 epochs
###Code
num_epochs = 5
batch_size = 128
import os
os.makedirs('models', exist_ok=True)  # make sure the checkpoint directory exists
checkpoint = ModelCheckpoint(
    'models/LSTM.h5',
monitor='accuracy',
save_best_only=True,
verbose=1
)
history = model.fit(x_train, y_train, batch_size = batch_size, epochs = num_epochs, callbacks=[checkpoint])
plt.figure()
plt.plot(history.history["accuracy"],label="Train");
plt.title("Accuracy")
plt.ylabel("Accuracy")
plt.xlabel("Epochs")
plt.legend()
plt.show();
###Output
_____no_output_____
###Markdown
Testing
###Code
from sklearn.metrics import confusion_matrix
predictions = model.predict(x_test)
predicted_labels = np.where(predictions > 0.5, "good review", "bad review")
target_labels = y_test
target_labels = np.where(target_labels > 0.5, "good review", "bad review")
con_mat_df = confusion_matrix(target_labels, predicted_labels, labels=["bad review","good review"])
print(con_mat_df)
y_pred = np.where(predictions > 0.5, 1, 0)
true = 0
for i, y in enumerate(y_test):
if y == y_pred[i]:
true += 1
print('Correct Prediction: {}'.format(true))
print('Wrong Prediction: {}'.format(len(y_pred) - true))
print('Accuracy: {}'.format(true/len(y_pred)*100))
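# Optional cross-check (illustrative): scikit-learn computes the same numbers with less bookkeeping.
from sklearn.metrics import accuracy_score, classification_report
print('sklearn accuracy: ', accuracy_score(y_test, y_pred.ravel()))
print(classification_report(y_test, y_pred.ravel(), target_names=['bad review', 'good review']))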
###Output
_____no_output_____
###Markdown
A little applicationNow we feed a new review to the trained RNN model, to see whether it will be classified as positive or negative.We go through the same preprocessing (cleaning, tokenizing, encoding), and then move directly to the prediction step (the RNN model has already been trained and showed good accuracy on the held-out test set).
###Code
loaded_model = load_model('models/LSTM.h5')
review = 'Movie Review: Nothing was typical about this. Everything was beautifully done in this movie, the story, the flow, the scenario, everything. I highly recommend it for mystery lovers, for anyone who wants to watch a good movie!'
# Pre-process input
regex = re.compile(r'[^a-zA-Z\s]')
review = regex.sub('', review)
print('Cleaned: ', review)
words = review.split(' ')
filtered = [w for w in words if w not in english_stops]
filtered = ' '.join(filtered)
filtered = [filtered.lower()]
print('Filtered: ', filtered)
tokenize_words = token.texts_to_sequences(filtered)
tokenize_words = pad_sequences(tokenize_words, maxlen=max_length, padding='post', truncating='post')
print(tokenize_words)
result = loaded_model.predict(tokenize_words)
print(result)
if result >= 0.7:
print('positive')
else:
print('negative')
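# --- Illustrative wrapper (added) ---
# Packages the same cleaning -> stop-word filtering -> tokenising -> prediction steps into one
# helper so that new reviews can be scored in a single call. The 0.5 threshold is an assumption
# here (the cell above uses 0.7).
def score_review(text, threshold=0.5):
    cleaned = re.compile(r'[^a-zA-Z\s]').sub('', text)
    filtered = ' '.join(w for w in cleaned.split() if w not in english_stops).lower()
    seq = pad_sequences(token.texts_to_sequences([filtered]),
                        maxlen=max_length, padding='post', truncating='post')
    prob = float(loaded_model.predict(seq)[0][0])
    return ('positive' if prob >= threshold else 'negative', prob)

print(score_review('A dull, predictable film with wooden acting.'))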
###Output
_____no_output_____
###Markdown
ExerciseTry to write your own movie review, and then have the deep learning model classify it.0. write your review1. clean the text data2. tokenize it3. predict and evaluate
###Code
###Output
_____no_output_____ |
Algorismica/3.2 (1).ipynb | ###Markdown
Chapter 3 - Algorithms and Numbers 3.2 The Greatest Common Divisor and its applications
###Code
from math import gcd

def reduir_fraccio(numerador, denominador):
    """
    Function that returns the irreducible expression of a fraction
    Parameters
    ----------
    numerador: int
    denominador: int
    Returns
    -------
    numReduit: int
    denReduit: int
    """
    divisor = gcd(numerador, denominador)
    numReduit = numerador // divisor
    denReduit = denominador // divisor
    return (numReduit, denReduit)
assert reduir_fraccio(12, 8) == (3,2)
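# Illustrative addition (not part of the original notebook): Euclid's algorithm,
# the classic way to compute the greatest common divisor used in the reduction above.
def mcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a

assert mcd(12, 8) == 4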
###Output
_____no_output_____ |
evaluations/simulation/1-simulate-reads.simulation-alpha-2.0.ipynb | ###Markdown
1. Parameters
###Code
# Defaults
## Random seed
random_seed <- 25524
## Directories
simulation_dir <- "simulations/unset"
reference_file <- "simulations/reference/reference.fa.gz"
initial_tree_file <- "input/salmonella.tre"
## Simulation parameters
sub_lambda <- 1e-2
sub_pi_tcag <- c(0.1, 0.2, 0.3, 0.4)
sub_alpha <- 0.2
sub_beta <- sub_alpha/2
sub_mu <- 1
sub_invariant <- 0.3
ins_rate <- 1e-4
ins_max_length <- 60
ins_a <- 1.6
del_rate <- 1e-4
del_max_length <- 60
del_a <- 1.6
## Read simulation information
read_coverage <- 30
read_length <- 250
## Other
ncores <- 48
# Parameters
read_coverage = 30
mincov = 10
simulation_dir = "simulations/alpha-2.0-cov-30"
iterations = 3
sub_alpha = 2.0
output_dir <- file.path(simulation_dir, "simulated_data")
output_vcf_prefix <- file.path(output_dir, "haplotypes")
reads_data_initial_prefix <- file.path(output_dir, "reads_initial", "data")
set.seed(random_seed)
print(output_dir)
print(output_vcf_prefix)
###Output
[1] "simulations/alpha-2.0-cov-30/simulated_data"
###Markdown
2. Generate simulated dataThis simulates *Salmonella* data using a reference genome and a tree.
###Code
library(jackalope)
# Make sure we've complied with openmp
jackalope:::using_openmp()
reference <- read_fasta(reference_file)
reference_len <- sum(reference$sizes())
reference
library(ape)
tree <- read.tree(initial_tree_file)
tree <- root(tree, "reference", resolve.root=TRUE)
tree
sub <- sub_HKY85(pi_tcag = sub_pi_tcag, mu = sub_mu,
alpha = sub_alpha, beta = sub_beta, gamma_shape=1, gamma_k = 5,
invariant = sub_invariant)
ins <- indels(rate = ins_rate, max_length = ins_max_length, a = ins_a)
del <- indels(rate = del_rate, max_length = del_max_length, a = del_a)
ref_haplotypes <- create_haplotypes(reference, haps_phylo(tree), sub=sub, ins=ins, del=del)
ref_haplotypes
###Output
_____no_output_____
###Markdown
3. Write simulated data
###Code
write_vcf(ref_haplotypes, out_prefix=output_vcf_prefix, compress=TRUE)
assemblies_prefix = file.path(output_dir, "assemblies", "data")
write_fasta(ref_haplotypes, out_prefix=assemblies_prefix,
compress=TRUE, n_threads=ncores, overwrite=TRUE)
n_samples <- length(tree$tip)
n_reads <- round((reference_len * read_coverage * n_samples) / read_length)
print(sprintf("Number of reads for coverage %sX and read length %s over %s samples with respect to reference with length %s: %s",
read_coverage, read_length, n_samples, reference_len, n_reads))
illumina(ref_haplotypes, out_prefix = reads_data_initial_prefix, sep_files=TRUE, n_reads = n_reads,
frag_mean = read_length * 2 + 50, frag_sd = 100,
compress=TRUE, comp_method="bgzip", n_threads=ncores,
paired=TRUE, read_length = read_length)
# Remove the simulated reads for the reference genome since I don't want these in the tree
ref1 <- paste(toString(reads_data_initial_prefix), "_reference_R1.fq.gz", sep="")
ref2 <- paste(toString(reads_data_initial_prefix), "_reference_R2.fq.gz", sep="")
if (file.exists(ref1)) {
file.remove(ref1)
print(sprintf("Removing: %s", ref1))
}
if (file.exists(ref2)) {
file.remove(ref2)
print(sprintf("Removing: %s", ref2))
}
# Remove the new reference assembly genome since I don't need it
ref1 <- paste(toString(assemblies_prefix), "__reference.fa.gz", sep="")
if (file.exists(ref1)) {
file.remove(ref1)
print(sprintf("Removing: %s", ref1))
}
###Output
[1] "Removing: simulations/alpha-2.0-cov-30/simulated_data/assemblies/data__reference.fa.gz"
|
experiments/debug/debug_mcts.ipynb | ###Markdown
Debug trained MCTS agent
###Code
%matplotlib inline
import numpy as np
import sys
import logging
from matplotlib import pyplot as plt
import torch
sys.path.append("../../")
from ginkgo_rl import GinkgoLikelihood1DEnv, MCTSAgent
logging.basicConfig(
format='%(message)s',
datefmt='%H:%M',
level=logging.DEBUG
)
for key in logging.Logger.manager.loggerDict:
if "ginkgo_rl" not in key:
logging.getLogger(key).setLevel(logging.ERROR)
env = GinkgoLikelihood1DEnv()
agent = MCTSAgent(env, verbose=1)
agent.load_state_dict(torch.load("../data/runs/mcts_20200901_174303/model.pty"))
###Output
Initializing environment
Creating linear layer: 100->100
Creating linear head layer: 100->1
###Markdown
Let's play an episode
###Code
# Initialize episode
state = env.reset()
done = False
log_likelihood = 0.
errors = 0
reward = 0.0
agent.set_env(env)
agent.eval()
# Render initial state
env.render()
while not done:
# Agent step
action, agent_info = agent.predict(state)
# Environment step
next_state, next_reward, done, info = env.step(action)
env.render()
# Book keeping
log_likelihood += next_reward
errors += int(info["legal"])
agent.update(state, reward, action, done, next_state, next_reward=reward, num_episode=0, **agent_info)
reward, state = next_reward, next_state
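# Post-episode summary (illustrative bookkeeping over the variables accumulated above).
print(f"Episode done: total log likelihood = {log_likelihood:.2f}, legal-step counter = {errors}")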
###Output
Resetting environment
Sampling new jet with 9 leaves
9 particles:
p[ 0] = ( 1.3, 0.9, 0.7, 0.7)
p[ 1] = ( 0.8, 0.3, 0.5, 0.5)
p[ 2] = ( 0.6, 0.3, 0.3, 0.3)
p[ 3] = ( 0.5, 0.4, 0.3, 0.2)
p[ 4] = ( 0.3, 0.2, 0.2, 0.2)
p[ 5] = ( 0.2, 0.1, 0.1, 0.1)
p[ 6] = ( 0.2, 0.1, 0.1, 0.1)
p[ 7] = ( 0.1, 0.0, 0.1, 0.1)
p[ 8] = ( 0.0, 0.0, 0.0, 0.0)
Starting MCTS with 100 trajectories
MCTS results:
0: log likelihood = -8.4, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
1: log likelihood = -5.3, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
2: log likelihood = -4.6, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
3: log likelihood = -8.5, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
4: log likelihood = -9.7, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
5: log likelihood = -8.1, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
6: log likelihood = -6.7, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
7: log likelihood = -100.0, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
8: log likelihood = -3.0, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
9: log likelihood = -8.0, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
10: log likelihood = -4.3, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
11: log likelihood = -3.8, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
12: log likelihood = -100.0, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
13: log likelihood = -6.1, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
g 14: log likelihood = -2.6, policy = 0.01, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
15: log likelihood = -6.1, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
16: log likelihood = -3.8, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
17: log likelihood = -3.7, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
18: log likelihood = -6.9, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
19: log likelihood = -3.0, policy = 0.01, n = 1, mean = -75.9 [0.94], max = -75.9 [0.94]
20: log likelihood = -2.7, policy = 0.03, n = 3, mean = -76.1 [0.94], max = -76.0 [0.94]
21: log likelihood = -8.4, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
22: log likelihood = -6.0, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
23: log likelihood = -6.1, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
24: log likelihood = -8.1, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
25: log likelihood = -4.8, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
26: log likelihood = -4.2, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
27: log likelihood = -4.2, policy = 0.01, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
28: log likelihood = -5.5, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
29: log likelihood = -5.1, policy = 0.00, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
30: log likelihood = -4.2, policy = 0.01, n = 0, mean = 0.0 [0.93], max = -inf [0.00]
* 31: log likelihood = -3.7, policy = 0.04, n = 11, mean = -80.9 [0.89], max = -70.0 [1.00]
32: log likelihood = -3.8, policy = 0.04, n = 4, mean = -74.9 [0.95], max = -73.6 [0.97]
33: log likelihood = -2.7, policy = 0.52, n = 48, mean = -77.0 [0.93], max = -70.2 [1.00]
34: log likelihood = -3.2, policy = 0.20, n = 20, mean = -75.7 [0.95], max = -72.8 [0.97]
35: log likelihood = -3.5, policy = 0.13, n = 13, mean = -76.0 [0.94], max = -74.6 [0.96]
Environment step. Action: (8, 3)
Computing log likelihood of action (8, 3): ti = 0.0, tj = 0.0, t_cut = 16.0, lam = 1.5 -> log likelihood = -3.7217040061950684
Merging particles 8 and 3. New state has 8 particles.
8 particles:
p[ 0] = ( 1.3, 0.9, 0.7, 0.7)
p[ 1] = ( 0.8, 0.3, 0.5, 0.5)
p[ 2] = ( 0.6, 0.5, 0.3, 0.2)
p[ 3] = ( 0.6, 0.3, 0.3, 0.3)
p[ 4] = ( 0.3, 0.2, 0.2, 0.2)
p[ 5] = ( 0.2, 0.1, 0.1, 0.1)
p[ 6] = ( 0.2, 0.1, 0.1, 0.1)
p[ 7] = ( 0.1, 0.0, 0.1, 0.1)
Starting MCTS with 100 trajectories
MCTS results:
0: log likelihood = -8.4, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
1: log likelihood = -11.3, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
2: log likelihood = -12.3, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
3: log likelihood = -5.3, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
4: log likelihood = -4.6, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
5: log likelihood = -10.7, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
6: log likelihood = -6.7, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
7: log likelihood = -100.0, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
8: log likelihood = -10.6, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
9: log likelihood = -3.0, policy = 0.03, n = 4, mean = -71.7 [0.94], max = -70.8 [0.95]
10: log likelihood = -4.3, policy = 0.01, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
11: log likelihood = -3.8, policy = 0.01, n = 1, mean = -72.4 [0.94], max = -72.4 [0.94]
12: log likelihood = -8.7, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
13: log likelihood = -100.0, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
*g 14: log likelihood = -2.6, policy = 0.13, n = 23, mean = -70.8 [0.95], max = -65.5 [1.00]
15: log likelihood = -6.1, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
16: log likelihood = -3.8, policy = 0.02, n = 3, mean = -72.5 [0.93], max = -72.3 [0.94]
17: log likelihood = -9.4, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
18: log likelihood = -3.7, policy = 0.04, n = 5, mean = -72.4 [0.94], max = -71.0 [0.95]
19: log likelihood = -3.0, policy = 0.18, n = 14, mean = -78.4 [0.88], max = -66.5 [0.99]
20: log likelihood = -2.7, policy = 0.40, n = 46, mean = -74.1 [0.92], max = -69.7 [0.96]
21: log likelihood = -8.4, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
22: log likelihood = -6.0, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
23: log likelihood = -10.6, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
24: log likelihood = -6.1, policy = 0.00, n = 0, mean = 0.0 [0.92], max = -inf [0.00]
25: log likelihood = -4.8, policy = 0.02, n = 2, mean = -72.9 [0.93], max = -72.9 [0.93]
26: log likelihood = -4.2, policy = 0.07, n = 9, mean = -73.0 [0.93], max = -71.8 [0.94]
27: log likelihood = -4.2, policy = 0.09, n = 4, mean = -96.4 [0.71], max = -71.3 [0.95]
Environment step. Action: (5, 4)
Computing log likelihood of action (5, 4): ti = 0.0, tj = 0.0, t_cut = 16.0, lam = 1.5 -> log likelihood = -2.6181464195251465
Merging particles 5 and 4. New state has 7 particles.
7 particles:
p[ 0] = ( 1.3, 0.9, 0.7, 0.7)
p[ 1] = ( 0.8, 0.3, 0.5, 0.5)
p[ 2] = ( 0.6, 0.5, 0.3, 0.2)
p[ 3] = ( 0.6, 0.3, 0.3, 0.3)
p[ 4] = ( 0.5, 0.3, 0.3, 0.3)
p[ 5] = ( 0.2, 0.1, 0.1, 0.1)
p[ 6] = ( 0.1, 0.0, 0.1, 0.1)
Starting MCTS with 82 trajectories
MCTS results:
0: log likelihood = -8.4, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
1: log likelihood = -11.3, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
2: log likelihood = -12.3, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
3: log likelihood = -5.3, policy = 0.01, n = 1, mean = -70.5 [0.94], max = -70.5 [0.94]
4: log likelihood = -4.6, policy = 0.04, n = 4, mean = -67.8 [0.96], max = -66.3 [0.98]
5: log likelihood = -10.7, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
6: log likelihood = -9.5, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
7: log likelihood = -6.4, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
8: log likelihood = -14.1, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
9: log likelihood = -6.2, policy = 0.01, n = 1, mean = -65.5 [0.98], max = -65.5 [0.98]
10: log likelihood = -6.1, policy = 0.01, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
* 11: log likelihood = -3.8, policy = 0.20, n = 22, mean = -67.6 [0.96], max = -63.6 [1.00]
12: log likelihood = -9.4, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
g 13: log likelihood = -3.7, policy = 0.32, n = 29, mean = -69.8 [0.94], max = -67.2 [0.97]
14: log likelihood = -6.2, policy = 0.01, n = 4, mean = -71.8 [0.93], max = -71.1 [0.93]
15: log likelihood = -8.4, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
16: log likelihood = -6.0, policy = 0.02, n = 1, mean = -69.6 [0.95], max = -69.6 [0.95]
17: log likelihood = -10.6, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
18: log likelihood = -6.1, policy = 0.02, n = 1, mean = -72.3 [0.92], max = -72.3 [0.92]
19: log likelihood = -8.0, policy = 0.00, n = 5, mean = -72.4 [0.92], max = -69.8 [0.94]
20: log likelihood = -4.2, policy = 0.36, n = 37, mean = -68.2 [0.96], max = -63.9 [1.00]
Environment step. Action: (5, 1)
Computing log likelihood of action (5, 1): ti = 0.0, tj = 0.0, t_cut = 16.0, lam = 1.5 -> log likelihood = -3.8469157218933105
Merging particles 5 and 1. New state has 6 particles.
6 particles:
p[ 0] = ( 1.3, 0.9, 0.7, 0.7)
p[ 1] = ( 0.9, 0.4, 0.6, 0.6)
p[ 2] = ( 0.6, 0.5, 0.3, 0.2)
p[ 3] = ( 0.6, 0.3, 0.3, 0.3)
p[ 4] = ( 0.5, 0.3, 0.3, 0.3)
p[ 5] = ( 0.1, 0.0, 0.1, 0.1)
Starting MCTS with 53 trajectories
MCTS results:
0: log likelihood = -11.3, policy = 0.00, n = 0, mean = 0.0 [0.96], max = -inf [0.00]
1: log likelihood = -11.3, policy = 0.00, n = 0, mean = 0.0 [0.96], max = -inf [0.00]
2: log likelihood = -15.6, policy = 0.00, n = 0, mean = 0.0 [0.96], max = -inf [0.00]
*g 3: log likelihood = -5.3, policy = 0.39, n = 32, mean = -62.4 [0.98], max = -60.3 [1.00]
4: log likelihood = -7.7, policy = 0.03, n = 1, mean = -64.6 [0.96], max = -64.6 [0.96]
5: log likelihood = -10.7, policy = 0.00, n = 0, mean = 0.0 [0.96], max = -inf [0.00]
6: log likelihood = -9.5, policy = 0.00, n = 0, mean = 0.0 [0.96], max = -inf [0.00]
7: log likelihood = -10.9, policy = 0.00, n = 0, mean = 0.0 [0.96], max = -inf [0.00]
8: log likelihood = -14.1, policy = 0.00, n = 0, mean = 0.0 [0.96], max = -inf [0.00]
9: log likelihood = -6.2, policy = 0.20, n = 15, mean = -62.8 [0.98], max = -61.7 [0.99]
10: log likelihood = -8.4, policy = 0.02, n = 1, mean = -65.6 [0.95], max = -65.6 [0.95]
11: log likelihood = -8.8, policy = 0.01, n = 9, mean = -64.6 [0.96], max = -63.6 [0.97]
12: log likelihood = -10.6, policy = 0.00, n = 0, mean = 0.0 [0.96], max = -inf [0.00]
13: log likelihood = -6.1, policy = 0.31, n = 15, mean = -69.2 [0.92], max = -65.2 [0.96]
14: log likelihood = -8.0, policy = 0.03, n = 2, mean = -64.1 [0.97], max = -63.9 [0.97]
Environment step. Action: (3, 0)
Computing log likelihood of action (3, 0): ti = 0.0, tj = 0.0, t_cut = 16.0, lam = 1.5 -> log likelihood = -5.2660722732543945
Merging particles 3 and 0. New state has 5 particles.
5 particles:
p[ 0] = ( 1.9, 1.2, 1.1, 1.1)
p[ 1] = ( 0.9, 0.4, 0.6, 0.6)
p[ 2] = ( 0.6, 0.5, 0.3, 0.2)
p[ 3] = ( 0.5, 0.3, 0.3, 0.3)
p[ 4] = ( 0.1, 0.0, 0.1, 0.1)
Starting MCTS with 18 trajectories
MCTS results:
0: log likelihood = -14.5, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
1: log likelihood = -15.3, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
2: log likelihood = -15.6, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
3: log likelihood = -12.6, policy = 0.00, n = 18, mean = -62.4 [0.93], max = -60.3 [0.95]
4: log likelihood = -10.9, policy = 0.01, n = 2, mean = -60.7 [0.95], max = -60.6 [0.95]
5: log likelihood = -14.1, policy = 0.00, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
6: log likelihood = -11.2, policy = 0.01, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
* 7: log likelihood = -8.8, policy = 0.27, n = 10, mean = -54.7 [1.00], max = -54.4 [1.00]
8: log likelihood = -10.6, policy = 0.03, n = 0, mean = 0.0 [0.95], max = -inf [0.00]
g 9: log likelihood = -8.0, policy = 0.67, n = 20, mean = -60.8 [0.95], max = -57.4 [0.97]
Environment step. Action: (4, 1)
Computing log likelihood of action (4, 1): ti = 0.0, tj = 46.62428045165643, t_cut = 16.0, lam = 1.5 -> log likelihood = -8.781725883483887
Merging particles 4 and 1. New state has 4 particles.
4 particles:
p[ 0] = ( 1.9, 1.2, 1.1, 1.1)
p[ 1] = ( 1.1, 0.4, 0.7, 0.7)
p[ 2] = ( 0.6, 0.5, 0.3, 0.2)
p[ 3] = ( 0.5, 0.3, 0.3, 0.3)
Starting MCTS with 20 trajectories
MCTS results:
0: log likelihood = -15.8, policy = 0.00, n = 0, mean = 0.0 [0.97], max = -inf [0.00]
1: log likelihood = -15.3, policy = 0.00, n = 0, mean = 0.0 [0.97], max = -inf [0.00]
2: log likelihood = -16.1, policy = 0.00, n = 0, mean = 0.0 [0.97], max = -inf [0.00]
3: log likelihood = -12.6, policy = 0.37, n = 13, mean = -46.5 [0.99], max = -46.2 [1.00]
*g 4: log likelihood = -12.5, policy = 0.55, n = 15, mean = -51.7 [0.95], max = -45.6 [1.00]
5: log likelihood = -14.1, policy = 0.07, n = 2, mean = -48.7 [0.98], max = -48.7 [0.98]
Environment step. Action: (3, 1)
Computing log likelihood of action (3, 1): ti = 17.626047046255508, tj = 250.2520287784164, t_cut = 16.0, lam = 1.5 -> log likelihood = -12.457528114318848
Merging particles 3 and 1. New state has 3 particles.
3 particles:
p[ 0] = ( 1.9, 1.2, 1.1, 1.1)
p[ 1] = ( 1.6, 0.7, 1.0, 1.1)
p[ 2] = ( 0.6, 0.5, 0.3, 0.2)
Starting MCTS with 5 trajectories
MCTS results:
0: log likelihood = -16.2, policy = 0.15, n = 6, mean = -54.5 [0.85], max = -54.5 [0.85]
*g 1: log likelihood = -15.3, policy = 0.73, n = 12, mean = -41.9 [0.94], max = -33.2 [1.00]
2: log likelihood = -16.6, policy = 0.12, n = 2, mean = -55.3 [0.84], max = -55.3 [0.84]
Environment step. Action: (2, 0)
Computing log likelihood of action (2, 0): ti = 42.91672634848851, tj = 108.83427709899843, t_cut = 16.0, lam = 1.5 -> log likelihood = -15.265053749084473
Merging particles 2 and 0. New state has 2 particles.
2 particles:
p[ 0] = ( 2.5, 1.6, 1.3, 1.3)
p[ 1] = ( 1.6, 0.7, 1.0, 1.1)
Starting MCTS with 1 trajectories
MCTS results:
*g 0: log likelihood = -17.9, policy = 1.00, n = 13, mean = -40.1 [0.86], max = -17.9 [1.00]
Environment step. Action: (1, 0)
Computing log likelihood of action (1, 0): ti = 462.6623043913296, tj = 1406.6497382560137, t_cut = 16.0, lam = 3.0 -> log likelihood = -17.922468185424805
Merging particles 1 and 0. New state has 1 particles.
Episode is done.
Sampling new jet with 8 leaves
8 particles:
p[ 0] = ( 0.8, 0.6, 0.5, 0.3)
p[ 1] = ( 0.7, 0.3, 0.4, 0.5)
p[ 2] = ( 0.6, 0.4, 0.3, 0.2)
p[ 3] = ( 0.6, 0.3, 0.3, 0.4)
p[ 4] = ( 0.5, 0.3, 0.3, 0.4)
p[ 5] = ( 0.5, 0.2, 0.3, 0.3)
p[ 6] = ( 0.3, 0.1, 0.2, 0.2)
p[ 7] = ( 0.1, 0.1, 0.1, 0.1)
|
notebooks/write_CassiniIssPds3LabelNaifSpiceDriver.ipynb | ###Markdown
Writing out a USGSCSM ISD from a PDS3 Cassini ISS image
###Code
import os
import json
import ale
from ale.drivers.cassini_drivers import CassiniIssPds3LabelNaifSpiceDriver
from ale.formatters.usgscsm_formatter import to_usgscsm
###Output
_____no_output_____
###Markdown
Instantiating an ALE driverALE drivers are objects that define how to acquire common ISD keys from an input image format; in this case we are reading in a PDS3 image and using NAIF SPICE kernels for exterior orientation data. If the driver utilizes NAIF SPICE kernels, it is implemented as a [context manager](https://docs.python.org/3/reference/datamodel.html#context-managers) and will furnish metakernels when entering the context (i.e. when entering the `with` block) and free the metakernels on exit. This maintains the integrity of spicelib's internal data structures. These driver objects are short-lived and are input to a formatter function that consumes the API to create a serializable file format. `ale.formatters` contains the available formatter functions. The default config file is located at `ale/config.yml` and is copied into your home directory at `.ale/config.yml` on first use of the library. The config file can be modified using a text editor. `ale.config` is loaded into memory as a dictionary. It is used to find metakernels for different missions. For example, there is an entry for cassini that points to `/usgs/cpkgs/isis3/data/cassini/kernels/mk/` by default. If you want to use your own metakernels, you will need to update this path. For example, if the metakernels are located in `/data/cassini/mk/`, the cassini entry should be updated with this path. If you are using the default metakernels, then you do not need to update the path.ALE has a two-step process for writing out an ISD: 1. Instantiate your driver (in this case `CassiniIssPds3LabelNaifSpiceDriver`) within a context and 2. pass the driver object into a formatter (in this case, `to_usgscsm`). Requirements: * A PDS3 Cassini ISS image * NAIF metakernels installed * Config file path for Cassini (ale.config.cassini) pointing to the Cassini NAIF metakernel directory * A conda environment with ALE installed into it using the `conda install` command or created using the environment.yml file at the base of ALE.
###Code
# printing config displays the yaml formatted string
print(ale.config)
# config object is a dictionary so it has the same access patterns
print('Cassini spice directory:', ale.config['cassini'])
# updating config for a new Cassini path in this notebook
# Note: this will not change the path in `.ale/config.yml`. This change only lives in the notebook.
# ale.config['cassini'] = '/data/cassini/mk/'
# change to desired PDS3 image path
file_name = '/home/kberry/dev/ale/ale/N1702360370_1.LBL'
# metakernels are furnished when entering the context (with block) with a driver instance
# most driver constructors simply accept an image path
with CassiniIssPds3LabelNaifSpiceDriver(file_name) as driver:
# pass driver instance into formatter function
usgscsmString = to_usgscsm(driver)
###Output
_____no_output_____
###Markdown
Write ISD to disk ALE formatter functions generally return bytes or a string that can be written out to disk. ALE's USGSCSM formatter function returns a JSON encoded string that can be written out using any JSON library. USGSCSM requires the ISD to be colocated with the image file with a `.json` extension in place of the image extension.
###Code
# Load the json string into a dict
usgscsm_dict = json.loads(usgscsmString)
# Write the dict out to the associated file
json_file = os.path.splitext(file_name)[0] + '.json'
# Save off the json and read it back in to check if
# the json exists and was formatted correctly
with open(json_file, 'w') as fp:
json.dump(usgscsm_dict, fp)
with open(json_file, 'r') as fp:
usgscsm_dict = json.load(fp)
usgscsm_dict.keys()
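# Quick sanity check (illustrative): where the ISD landed on disk and how many top-level keys it has.
print(json_file)
print(len(usgscsm_dict), 'top-level ISD keys')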
###Output
_____no_output_____ |
Modeling_Final.ipynb | ###Markdown
Class Imbalance
###Code
X = df[['HHAGE', 'HINCP', 'BATHROOMS', 'UTILAMT','PERPOVLVL', 'ELECAMT', 'GASAMT', 'TRASHAMT', 'WATERAMT', 'OMB13CBSA','UNITSIZE','NUMPEOPLE','STORIES', 'HHNATVTY']]
y = df['RATINGHS_BIN']
X.hist(figsize=(20,15))
pd.Series(y).value_counts().plot.bar(color=['purple', 'orange', 'green', 'red'])
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state = 33)
X_sm, y_sm = sm.fit_sample(X, y.ravel())
pd.Series(y_sm).value_counts().plot.bar(color=['purple', 'orange', 'green', 'red'])
X_sm, y_sm = sm.fit_sample(X_sm, y_sm.ravel())
pd.Series(y_sm).value_counts().plot.bar(color=['purple', 'orange', 'green', 'red'])
X_sm, y_sm = sm.fit_sample(X_sm, y_sm.ravel())
pd.Series(y_sm).value_counts().plot.bar(color=['purple', 'orange', 'green', 'red'])
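# Numeric check of the resampled class counts (illustrative; complements the bar plots above).
print(pd.Series(y_sm).value_counts())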
###Output
/Users/sabashaikh/anaconda2/envs/py36/lib/python3.6/site-packages/imblearn/base.py:306: UserWarning: The target type should be binary.
warnings.warn('The target type should be binary.')
###Markdown
Precision: the proportion of predicted positive cases that are actually positive. It is useful when the cost of False Positives is high.Recall: the proportion of actual positive cases that are correctly identified. It is important when the cost of False Negatives is high.Accuracy: one of the more obvious metrics, it is the proportion of all cases that are correctly classified. It is most useful when all the classes are equally important (and reasonably balanced).
###Code
# Create the train and test data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X_sm, y_sm, test_size=0.2)
from sklearn import metrics
def score_model(X, y, estimator, **kwargs):
y = LabelEncoder().fit_transform(y)
model = Pipeline([
('one_hot_encoder', OneHotEncoder()),
('estimator', estimator)
])
model.fit(X_train, y_train.ravel(), **kwargs)
expected = y_test
predicted = model.predict(X_test)
print("{}: {}".format(estimator.__class__.__name__, f1_score(expected, predicted, average='micro')))
# print("{} Accuracy Score: {}".format(estimator.__class__.__name__, metrics.accuracy_score(expected, predicted)))
models = [
SVC(), NuSVC(), LinearSVC(),
SGDClassifier(), KNeighborsClassifier(),
ExtraTreesClassifier(),
RandomForestClassifier(),
DecisionTreeClassifier(),
AdaBoostClassifier(),
GradientBoostingClassifier()
]
for model in models:
score_model(X_train, y_train.ravel(), model)
###Output
SVC: 0.24586077977568097
NuSVC: 0.3695923090617767
LinearSVC: 0.3735089905643582
SGDClassifier: 0.3911340573259747
KNeighborsClassifier: 0.33630051628983443
ExtraTreesClassifier: 0.3647854726722449
RandomForestClassifier: 0.36140288410183374
DecisionTreeClassifier: 0.3354103614028841
AdaBoostClassifier: 0.38632722093644295
GradientBoostingClassifier: 0.38632722093644295
###Markdown
Classification Report
###Code
from yellowbrick.classifier import ClassificationReport
# Instantiate the classification model and visualizer
classes=['extremely satisfied','very satisfied','satisfied','not satisfied ']
model = ExtraTreesClassifier()
visualizer = ClassificationReport(model, classes=classes, size=(600, 420), support=True)
visualizer.fit(X_train, y_train.ravel()) # Fit the visualizer and the model
visualizer.score(X_test, y_test.ravel()) # Evaluate the model on the test data
visualizer.show() # Finalize and show the figure
# Instantiate the classification model and visualizer
classes=['extremely satisfied','very satisfied','satisfied','not satisfied ']
model = RandomForestClassifier()
visualizer = ClassificationReport(model, classes=classes, size=(600, 420), support=True)
visualizer.fit(X_train, y_train.ravel()) # Fit the visualizer and the model
visualizer.score(X_test, y_test.ravel()) # Evaluate the model on the test data
visualizer.show() # Finalize and show the figure
from yellowbrick.classifier import ROCAUC
classes=['extremely satisfied','very satisfied','satisfied','not satisfied ']
visualizer = ROCAUC(
RandomForestClassifier(), classes=classes, size=(1080, 720)
)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Draw the data
###Output
_____no_output_____
###Markdown
Confusion Matrices
###Code
from yellowbrick.classifier import ConfusionMatrix
model = ExtraTreesClassifier()
cm = ConfusionMatrix(model, classes=['not satisfied ','satisfied','very satisfied','extremely satisfied'])
# Fit fits the passed model. This is unnecessary if you pass the visualizer a pre-fitted model
cm.fit(X_train, y_train.ravel())
cm.score(X_test, y_test.ravel())
cm.show()
model = RandomForestClassifier()
cm = ConfusionMatrix(model, classes=['not satisfied ','satisfied','very satisfied','extremely satisfied'])
# Fit fits the passed model. This is unnecessary if you pass the visualizer a pre-fitted model
cm.fit(X_train, y_train.ravel())
cm.score(X_test, y_test.ravel())
cm.show()
###Output
_____no_output_____
###Markdown
Cross Validation
###Code
from sklearn.model_selection import StratifiedKFold
from yellowbrick.model_selection import CVScores
# Create a cross-validation strategy
cv = StratifiedKFold(n_splits=12, random_state=42)
# Instantiate the classification model and visualizer
model = RandomForestClassifier()
visualizer = CVScores(
model, cv=cv, scoring='f1_weighted', size=(780, 520)
)
visualizer.fit(X, y)
visualizer.show()
# With the classes balanced (after SMOTE)
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import StratifiedKFold
from yellowbrick.model_selection import CVScores
# Create a cross-validation strategy
cv = StratifiedKFold(n_splits=12, random_state=42)
# Instantiate the classification model and visualizer
model = ExtraTreesClassifier()
visualizer = CVScores(
model, cv=cv, scoring='f1_weighted', size=(780, 520)
)
visualizer.fit(X, y)
visualizer.show()
###Output
_____no_output_____
###Markdown
GridSearchCV
###Code
RandomForestClassifier().get_params()
from sklearn.model_selection import GridSearchCV
model = RandomForestClassifier()
# TODO: Create a dictionary with the Ridge parameter options
parameters = {
'n_estimators': [200, 300],
'max_features': ['auto', 'sqrt','log2'],
'min_samples_split':[2,4,6],
'n_jobs':[2,4]
}
clf = GridSearchCV(model, parameters, cv=5)
clf.fit(X_train, y_train)
print('If we change our parameters to: {}'.format(clf.best_params_))
print(clf.best_estimator_)
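# The refit best estimator can be reused directly (illustrative; GridSearchCV refits it on the
# training data by default when refit=True).
best_rf = clf.best_estimator_
print('Tuned RandomForest test accuracy:', best_rf.score(X_test, y_test))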
models = [
ExtraTreesClassifier(n_estimators=100, max_features='log2', n_jobs=2, random_state=42),
AdaBoostClassifier(learning_rate=0.7,n_estimators=200),
GradientBoostingClassifier(learning_rate=.5,max_depth=4),
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=300,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False)
]
for model in models:
score_model(X_train, y_train.ravel(), model)
from sklearn.model_selection import learning_curve
from sklearn.svm import SVC
train_sizes, train_scores, valid_scores = learning_curve(
RandomForestClassifier(), X, y, train_sizes=[50, 80, 110], cv=5)
###Output
_____no_output_____
###Markdown
A learning curve shows the validation and training score of an estimator for varying numbers of training samples. It is a tool to find out how much we benefit from adding more training data and whether the estimator suffers more from a variance error or a bias error.
###Code
# Create CV training and test scores for various training set sizes
train_sizes, train_scores, test_scores = learning_curve(RandomForestClassifier(),
X,
y,
# Number of folds in cross-validation
cv=12,
# Evaluation metric
scoring='accuracy',
# Use all computer cores
n_jobs=1,
# 50 different sizes of the training set
train_sizes=np.linspace(.1, 1.0, 100))
# Create means and standard deviations of training set scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
# Create means and standard deviations of test set scores
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
# Draw lines
plt.plot(train_sizes, train_mean, '--', color="r", label="Training score")
plt.plot(train_sizes, test_mean, color="g", label="Cross-validation score")
# Draw bands
plt.fill_between(train_sizes, train_mean - train_std, train_mean + train_std,color="r",)
plt.fill_between(train_sizes, test_mean - test_std, test_mean + test_std, color="g")
# Create plot
plt.title("Learning Curve")
plt.xlabel("Training Set Size"), plt.ylabel("Accuracy Score"), plt.legend(loc="best")
plt.tight_layout()
plt.show()
'''
from sklearn.model_selection import validation_curve
# Create range of values for parameter
param_range = np.arange(1, 250, 2)
# Calculate accuracy on training and test set using range of parameter values
train_scores, test_scores = validation_curve(RandomForestClassifier(),
X,
y,
param_name="n_estimators",
param_range=param_range,
cv=3,
scoring="accuracy",
n_jobs=-1)
# Plot mean accuracy scores for training and test sets
plt.plot(param_range, train_mean, label="Training score", color="blue")
plt.plot(param_range, test_mean, label="Cross-validation score", color="green")
# Plot accurancy bands for training and test sets
plt.fill_between(param_range, train_mean - train_std, train_mean + train_std, color="blue")
plt.fill_between(param_range, test_mean - test_std, test_mean + test_std, color="gainsboro")
# Create plot
plt.title("Validation Curve With Random Forest")
plt.xlabel("Number Of Trees")
plt.ylabel("Accuracy Score")
plt.tight_layout()
plt.legend(loc="best")
plt.show()
'''
###Output
_____no_output_____
###Markdown
Different Approach - 2 Classes
###Code
# Labeling Rating column to 2 classes
LABEL_MAP = {
1: "Less Satisfied",
2: "Less Satisfied",
3: "Less Satisfied",
4: "Less Satisfied",
5: "Less Satisfied",
6: "Less Satisfied",
7: "Less Satisfied",
8: "Less Satisfied",
9: "Highly Satisfied",
10: "Highly Satisfied"
}
# Convert class labels into text
y = df['RATINGHS'].map(LABEL_MAP)
y
X = df[['HHAGE', 'HINCP', 'BATHROOMS', 'UTILAMT','PERPOVLVL', 'ELECAMT', 'GASAMT', 'TRASHAMT', 'WATERAMT', 'OMB13CBSA','NUMPEOPLE', 'STORIES', 'HHNATVTY']]
#identify the classes balance
pd.Series(y).value_counts().plot.bar(color=['blue', 'red'])
###Output
_____no_output_____
###Markdown
Split the data into 80% training and 20% test
###Code
# Create the train and test data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2)
from sklearn import metrics
def score_model(X, y, estimator, **kwargs):
y = LabelEncoder().fit_transform(y)
model = Pipeline([
('one_hot_encoder', OneHotEncoder()),
('estimator', estimator)
])
model.fit(X_train, y_train.ravel(), **kwargs)
expected = y_test
predicted = model.predict(X_test)
print("{}: {}".format(estimator.__class__.__name__, f1_score(expected, predicted, average='micro')))
from sklearn.preprocessing import LabelEncoder
# Encode our target variable
encoder = LabelEncoder().fit(y)
y = encoder.transform(y)
y
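# Which integer the encoder assigned to each label (illustrative check; LabelEncoder orders labels alphabetically).
print(dict(zip(encoder.classes_, encoder.transform(encoder.classes_))))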
models = [
SVC(), NuSVC(), LinearSVC(),
SGDClassifier(), KNeighborsClassifier(),
LogisticRegression(),
ExtraTreesClassifier(),
RandomForestClassifier(),
DecisionTreeClassifier(),
AdaBoostClassifier(),
GradientBoostingClassifier()
]
for model in models:
score_model(X_train, y_train.ravel(), model)
#Classification Report
classes=['highly satisfied', 'less satisfied']
model = GradientBoostingClassifier()
visualizer = ClassificationReport(model, classes=classes, size=(600, 420), support=True)
visualizer.fit(X_train, y_train.ravel()) # Fit the visualizer and the model
visualizer.score(X_test, y_test.ravel()) # Evaluate the model on the test data
visualizer.show()
## ROC Curve
from yellowbrick.classifier import ROCAUC
classes=['highly satisfied', 'less satisfied']
visualizer = ROCAUC(
GradientBoostingClassifier(), classes=classes, size=(1080, 720)
)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show()
## Confusion Matrix
model = GradientBoostingClassifier()
cm = ConfusionMatrix(model, classes=['highly satisfied', 'less satisfied'])
# Fit fits the passed model. This is unnecessary if you pass the visualizer a pre-fitted model
cm.fit(X_train, y_train.ravel())
cm.score(X_test, y_test.ravel())
cm.show()
## GridSearch for hyperparameter tuning
from sklearn.model_selection import GridSearchCV
model = AdaBoostClassifier()
# Create a dictionary with the AdaBoostClassifier parameter options to search over
parameters = {
'learning_rate': [1,2,3],
'algorithm': ['SAMME', 'SAMME.R'],
'n_estimators': [100,200,300]
}
clf = GridSearchCV(model, parameters, cv=5)
clf.fit(X_train, y_train)
print('If we change our parameters to: {}'.format(clf.best_params_))
print(clf.best_estimator_)
models = [
AdaBoostClassifier(algorithm= 'SAMME.R', learning_rate= 1, n_estimators=200),
GradientBoostingClassifier(loss='exponential', max_features= 'auto',n_estimators=300),
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
max_depth=None, max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0, min_samples_split=2,
min_weight_fraction_leaf=0.0, n_estimators=300,
n_jobs=None, oob_score=False, random_state=None,
verbose=0, warm_start=False)
]
for model in models:
score_model(X_train, y_train.ravel(), model)
# Create CV training and test scores for various training set sizes
train_sizes, train_scores, test_scores = learning_curve(RandomForestClassifier(),
X,
y,
# Number of folds in cross-validation
cv=10,
# Evaluation metric
scoring='accuracy',
# Use all computer cores
n_jobs=1,
# 50 different sizes of the training set
train_sizes=np.linspace(.1, 1.0, 100))
# Create means and standard deviations of training set scores
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
# Create means and standard deviations of test set scores
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
# Draw lines
plt.plot(train_sizes, train_mean, '--', color="r", label="Training score")
plt.plot(train_sizes, test_mean, color="g", label="Cross-validation score")
# Draw bands
plt.fill_between(train_sizes, train_mean - train_std, train_mean + train_std,color="r",)
plt.fill_between(train_sizes, test_mean - test_std, test_mean + test_std, color="g")
# Create plot
plt.title("Learning Curve")
plt.xlabel("Training Set Size"), plt.ylabel("Accuracy Score"), plt.legend(loc="best")
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
ipynb/deprecated/tutorial/UseCaseExamples_SchedTuneAnalysis.ipynb | ###Markdown
Experiments collected data The data required to run this notebook is available for download at this link:https://www.dropbox.com/s/q9ulf3pusu0uzss/SchedTuneAnalysis.tar.xz?dl=0This archive has to be extracted inside LISA's results folder. Initial set of data
###Code
res_dir = '../../results/SchedTuneAnalysis/'
!tree {res_dir}
noboost_trace = res_dir + 'trace_noboost.dat'
boost15_trace = res_dir + 'trace_boost15.dat'
boost25_trace = res_dir + 'trace_boost25.dat'
# trace_file = noboost_trace
trace_file = boost15_trace
# trace_file = boost25_trace
###Output
_____no_output_____
###Markdown
Loading support data collected from the target
###Code
import json
# Load the platform information
with open('../../results/SchedTuneAnalysis/platform.json', 'r') as fh:
platform = json.load(fh)
print "Platform descriptio collected from the target:"
print json.dumps(platform, indent=4)
from trappy.stats.Topology import Topology
# Create a topology descriptor
topology = Topology(platform['topology'])
###Output
_____no_output_____
###Markdown
Trace analysis We want to ensure that the task has the expected workload:- LITTLE CPU bandwidth of **[10, 35 and 60]%** every **2[ms]**- activations every **32ms**- always **starts on a big** core Trace inspection Using kernelshark
###Code
# Let's look at the trace using kernelshark...
!kernelshark {trace_file} 2>/dev/null
###Output
version = 6
###Markdown
- Requires a lot of interactions and hand made measurements- We cannot easily annotate our findings to produre a sharable notebook Using the TRAPpy Trace Plotter An overall view on the trace is still useful to get a graps on what we are looking at.
###Code
# Suport for FTrace events parsing and visualization
import trappy
# NOTE: The interactive trace visualization is available only if you run
# the workload to generate a new trace-file
trappy.plotter.plot_trace(trace_file)#, execnames="task_ramp")#, pids=[2221])
###Output
_____no_output_____
###Markdown
Events Plotting The **sched_load_avg_task** trace events report this information Using all the Unix arsenal to parse and filter the trace
###Code
# Get a list of first 5 "sched_load_avg_events" events
sched_load_avg_events = !(\
grep sched_load_avg_task {trace_file.replace('.dat', '.txt')} | \
head -n5 \
)
print "First 5 sched_load_avg events:"
for line in sched_load_avg_events:
print line
###Output
First 5 sched_load_avg events:
trace-cmd-2204 [000] 1773.509207: sched_load_avg_task: comm=trace-cmd pid=2204 cpu=0 load_avg=452 util_avg=176 util_est=176 load_sum=21607277 util_sum=8446887 period_contrib=125
trace-cmd-2204 [000] 1773.509223: sched_load_avg_task: comm=trace-cmd pid=2204 cpu=0 load_avg=452 util_avg=176 util_est=176 load_sum=21607277 util_sum=8446887 period_contrib=125
<idle>-0 [002] 1773.509522: sched_load_avg_task: comm=sudo pid=2203 cpu=2 load_avg=0 util_avg=0 util_est=941 load_sum=7 util_sum=7 period_contrib=576
sudo-2203 [002] 1773.511197: sched_load_avg_task: comm=sudo pid=2203 cpu=2 load_avg=14 util_avg=14 util_est=941 load_sum=688425 util_sum=688425 period_contrib=219
sudo-2203 [002] 1773.511219: sched_load_avg_task: comm=sudo pid=2203 cpu=2 load_avg=14 util_avg=14 util_est=14 load_sum=688425 util_sum=688425 period_contrib=219
grep: write error
###Markdown
A graphical representation would be really useful!
###Code
# Load the LISA::Trace parsing module
from trace import Trace
# Define which event we are interested into
trace = Trace(trace_file, [
"sched_switch",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_boost_cpu",
"sched_boost_task",
"cpu_frequency",
"cpu_capacity",
], platform)
###Output
_____no_output_____
###Markdown
Get the DataFrames for the events of interest
###Code
# Trace events are converted into tables, let's have a look at one
# of such tables
load_df = trace.data_frame.trace_event('sched_load_avg_task')
load_df.head()
df = load_df[load_df.comm.str.match('k.*')]
# df.head()
print df.comm.unique()
cap_df = trace.data_frame.trace_event('cpu_capacity')
cap_df.head()
###Output
_____no_output_____
###Markdown
Plot the signals of interest
###Code
# Signals can be easily plot using the ILinePlotter
trappy.ILinePlot(
# FTrace object
trace.ftrace,
# Signals to be plotted
signals=[
'cpu_capacity:capacity',
'sched_load_avg_task:util_avg'
],
# Generate one plot for each value of the specified column
pivot='cpu',
# Generate only plots which satisfy these filters
filters={
'comm': ['task_ramp'],
'cpu' : [2,5]
},
# Formatting style
per_line=2,
drawstyle='steps-post',
marker = '+',
sync_zoom=True,
group="GroupTag"
).view()
###Output
_____no_output_____
###Markdown
Use a set of standard plots A graphical representation can always be on hand
###Code
trace = Trace(boost15_trace,
["sched_switch",
"sched_overutilized",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_boost_cpu",
"sched_boost_task",
"cpu_frequency",
"cpu_capacity",
],
platform,
plots_prefix='boost15_'
)
###Output
_____no_output_____
###Markdown
Usually a common set of plots can be generated which captures the most useful information related to the workload we are analysing
###Code
trace.analysis.tasks.plotTasks(
tasks=['task_ramp'],
signals=['util_avg', 'boosted_util', 'sched_overutilized', 'residencies'],
)
###Output
_____no_output_____
###Markdown
Example of Clusters related singals
###Code
trace.analysis.frequency.plotClusterFrequencies()
###Output
_____no_output_____
###Markdown
Take-away In a single plot we can aggregate multiple pieces of information, which makes it easy to verify the expected behaviors.With a set of properly defined plots we are able to condense much more meaningful information, which is easy to read because the plots are "standard".We immediately capture what we are interested in evaluating!Moreover, all the produced plots are available as high-resolution images, ready to be shared and/or used in other reports.
###Code
!tree {res_dir}
###Output
[01;34m../../results/SchedTuneAnalysis/[00m
├── [01;35mboost15_cluster_freqs.png[00m
├── [01;35mboost15_task_util_task_ramp.png[00m
├── energy.json
├── output.log
├── platform.json
├── rt-app-task_ramp-0.log
├── test_00.json
├── trace_boost15.dat
├── trace_boost15.raw.txt
├── trace_boost15.txt
├── trace_boost25.dat
├── trace_boost25.raw.txt
├── trace_boost25.txt
├── trace.dat
├── trace_noboost.dat
├── trace_noboost.raw.txt
├── trace_noboost.txt
├── trace.raw.txt
└── trace.txt
0 directories, 19 files
###Markdown
Behavioral Analysis Is the task starting on a big core? We always expect a new task to be allocated on a big core.To verify this condition we need to know the topology of the target.This information is **automatically collected by LISA** when the workload is executed.Thus it can be used to write **portable test** conditions. Create a SchedAssert for the specific topology
###Code
from bart.sched.SchedMultiAssert import SchedAssert
# Create an object to get/assert scheduling pbehaviors
sa = SchedAssert(trace_file, topology, execname='task_ramp')
###Output
_____no_output_____
###Markdown
Use the SchedAssert method to investigate properties of this task
###Code
# Check on which CPU the task start its execution
if sa.assertFirstCpu(platform['clusters']['big']):#, window=(4,6)):
print "PASS: Task starts on big CPU: ", sa.getFirstCpu()
else:
print "FAIL: Task does NOT start on a big CPU!!!"
###Output
PASS: Task starts on big CPU: 1
###Markdown
Is the task generating the expected load? We expect a 10% duty cycle in the window between 2 and 4 [s] of the execution Identify the start of the first phase
###Code
# Let's find when the task starts
start = sa.getStartTime()
first_phase = (start, start+2)
print "The task starts execution at [s]: ", start
print "Window of interest: ", first_phase
###Output
The task starts execution at [s]: 1.9683
Window of interest: (1.9682999999999993, 3.9682999999999993)
###Markdown
Use the SchedAssert module to check the task load in that period
###Code
import operator
# Check the task duty cycle in the second step window
if sa.assertDutyCycle(10, operator.lt, window=first_phase):
print "PASS: Task duty-cycle is {}% in the [2,4] execution window"\
.format(sa.getDutyCycle(first_phase))
else:
print "FAIL: Task duty-cycle is {}% in the [2,4] execution window"\
.format(sa.getDutyCycle(first_phase))
###Output
FAIL: Task duty-cycle is 18.11125% in the [2,4] execution window
###Markdown
This test fails because we have not considered a scaling factor due running at a lower OPP.To write a portable test we need to account for that condition! Take OPP scaling into consideration
###Code
# Get LITTLEs capacities ranges:
littles = platform['clusters']['little']
little_capacities = cap_df[cap_df.cpu.isin(littles)].capacity
min_cap = little_capacities.min()
max_cap = little_capacities.max()
print "LITTLEs capacities range: ", (min_cap, max_cap)
# Get min OPP correction factor
min_little_scale = 1.0 * min_cap / max_cap
print "LITTLE's min capacity scale: ", min_little_scale
# Scale the target duty-cycle according to the min OPP
target_dutycycle = 10 / min_little_scale
print "Scaled target duty-cycle: ", target_dutycycle
target_dutycycle = 1.01 * target_dutycycle
print "1% tolerance scaled duty-cycle: ", target_dutycycle
###Output
Scaled target duty-cycle: 18.9406779661
1% tolerance scaled duty-cycle: 19.1300847458
###Markdown
Write a more portable assertion
###Code
# Add a 1% tolerance to our scaled target dutycycle
if sa.assertDutyCycle(1.01 * target_dutycycle, operator.lt, window=first_phase):
print "PASS: Task duty-cycle is {}% in the [2,4] execution window"\
.format(sa.getDutyCycle(first_phase) * min_little_scale)
else:
print "FAIL: Task duty-cycle is {}% in the [2,4] execution window"\
.format(sa.getDutyCycle(first_phase) * min_little_scale)
###Output
PASS: Task duty-cycle is 9.56209172258% in the [2,4] execution window
###Markdown
Is the task migrated once we exceed the LITTLE CPUs' capacity? Check that the task switches cluster when expected
###Code
# Consider a 100 [ms] window for the task to migrate
delta = 0.1
# Define the window of interest
switch_window=(start+4-delta, start+4+delta)
if sa.assertSwitch("cluster",
platform['clusters']['little'],
platform['clusters']['big'],
window=switch_window):
print "PASS: Task switches to big within: ", switch_window
else:
    print "FAIL: Task does NOT switch to big within: ", switch_window
###Output
PASS: Task switches to big within: (5.8682999999999996, 6.0682999999999989)
###Markdown
Check that the task runs on the LITTLE cluster for no more than 66% of its execution time
###Code
import operator
if sa.assertResidency("cluster", platform['clusters']['little'], 66, operator.le, percent=True):
    print "PASS: Task execution on LITTLEs is {:.1f}% (less than 66% of its execution time)".\
format(sa.getResidency("cluster", platform['clusters']['little'], percent=True))
else:
print "FAIL: Task run on LITTLE for MORE than 66% of its execution time"
###Output
PASS: Task execution on LITTLEs is 53.1% (less than 66% of its execution time)
###Markdown
Check that the util estimation is properly computed and CPU capacity matches
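The check below encodes the expected SchedTune margin as an Analyzer statement; written out, the expected relation is:

$$\text{margin} = \left\lfloor \frac{(\text{SCALE} - \text{util}) \times \text{BOOST}}{100} \right\rfloor$$

with SCALE = 1024 and BOOST = 15 taken from the analyzer configuration.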
###Code
start = 2
last_phase = (start+4, start+6)
analyzer_config = {
"SCALE" : 1024,
"BOOST" : 15,
}
# Verify that the margin is properly computed for each event:
# margin := (scale - util) * boost
margin_check_statement = "(((SCALE - sched_boost_task:util) * BOOST) // 100) == sched_boost_task:margin"
from bart.common.Analyzer import Analyzer
# Create an Assertion Object
a = Analyzer(trace.ftrace,
analyzer_config,
window=last_phase,
filters={"comm": "task_ramp"})
if a.assertStatement(margin_check_statement):
print "PASS: Margin properly computed in : ", last_phase
else:
print "FAIL: Margin NOT properly computed in : ", last_phase
###Output
PASS: Margin properly computed in : (6, 8)
###Markdown
Check that the CPU capacity matches the task boosted value
###Code
# Get the two dataset of interest
df1 = trace.data_frame.trace_event('cpu_capacity')[['cpu', 'capacity']]
df2 = trace.data_frame.trace_event('boost_task_rtapp')[['__cpu', 'boosted_util']]
# Join the information from these two
df3 = df2.join(df1, how='outer')
df3 = df3.fillna(method='ffill')
df3 = df3[df3.__cpu == df3.cpu]
#df3.ix[start+4:start+6,].head()
len(df3[df3.boosted_util >= df3.capacity])
###Output
_____no_output_____
###Markdown
Do it the TRAPpy way
###Code
# Create the TRAPpy class
trace.ftrace.add_parsed_event('rtapp_capacity_check', df3)
# Define pivoting value
trace.ftrace.rtapp_capacity_check.pivot = 'cpu'
# Create an Assertion
a = Analyzer(trace.ftrace,
{"CAP" : trace.ftrace.rtapp_capacity_check},
window=(start+4.1, start+6))
a.assertStatement("CAP:capacity >= CAP:boosted_util")
###Output
_____no_output_____
###Markdown
Going further on events processing What is the relative residency on the different OPPs? We are not limited to the pre-defined functions. We can exploit the full power of pandas to process the trace DataFrames and extract any kind of information we want. Use pandas APIs to filter and aggregate events
###Code
import pandas as pd
# Focus on cpu_frequency events for CPU0
df = trace.data_frame.trace_event('cpu_frequency')
df = df[df.cpu == 0]
# Compute the residency on each OPP before switching to the next one
df.loc[:,'start'] = df.index
df.loc[:,'delta'] = (df['start'] - df['start'].shift()).fillna(0).shift(-1)
# Group by frequency and sum-up the deltas
freq_residencies = df.groupby('frequency')['delta'].sum()
print "Residency time per OPP:"
df = pd.DataFrame(freq_residencies)
df.head()
# Compute the relative residency time
tot = sum(freq_residencies)
#df = df.apply(lambda delta : 100*delta/tot)
for f in freq_residencies.index:
print "Freq {:10d}Hz : {:5.1f}%".format(f, 100*freq_residencies[f]/tot)
###Output
Residency time per OPP:
Freq 450000Hz : 59.3%
Freq 575000Hz : 11.7%
Freq 700000Hz : 19.5%
Freq 775000Hz : 8.8%
Freq 850000Hz : 0.6%
###Markdown
Use Matplotlib to generate any kind of plot from the collected data
###Code
# Plot residency time
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 1, figsize=(16, 5));
df.plot(kind='barh', ax=axes, title="Frequency residency", rot=45);
###Output
_____no_output_____
###Markdown
Advanced DataFrame usage: filtering by columns/rows, merging tables, plotting data[notebooks/tutorial/05_TrappyUsage.ipynb](05_TrappyUsage.ipynb) Remote target connection and control Using LISA APIs to control a remote device and run custom workloads Configure the connection
###Code
# Setup a target configuration
conf = {
# Target is localhost
"platform" : 'linux',
"board" : "juno",
# Login credentials
"host" : "192.168.0.1",
"username" : "root",
"password" : "",
# Binary tools required to run this experiment
# These tools must be present in the tools/ folder for the architecture
"tools" : ['rt-app', 'taskset', 'trace-cmd'],
# Comment the following line to force rt-app calibration on your target
"rtapp-calib" : {
"0": 355, "1": 138, "2": 138, "3": 355, "4": 354, "5": 354
},
    # FTrace events and buffer configuration
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"sched_overutilized",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_tune_config",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_tune_filter",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff",
"cpu_frequency",
"cpu_capacity",
],
"buffsize" : 10240
},
# Where results are collected
"results_dir" : "SchedTuneAnalysis",
# Devlib module required (or not required)
'modules' : [ "cpufreq", "cgroups" ],
#"exclude_modules" : [ "hwmon" ],
}
###Output
_____no_output_____
###Markdown
Setup the connection
###Code
# Support to access the remote target
from env import TestEnv
# Initialize a test environment using:
# the provided target configuration (my_target_conf)
# the provided test configuration (my_test_conf)
te = TestEnv(conf)
target = te.target
print "DONE"
###Output
_____no_output_____
###Markdown
Target control Run custom commands
###Code
# Enable Energy-Aware scheduler
target.execute("echo ENERGY_AWARE > /sys/kernel/debug/sched_features");
target.execute("echo UTIL_EST > /sys/kernel/debug/sched_features");
# Check which sched_feature are enabled
sched_features = target.read_value("/sys/kernel/debug/sched_features");
print "sched_features:"
print sched_features
###Output
_____no_output_____
###Markdown
Example CPUFreq configuration
###Code
target.cpufreq.set_all_governors('sched');
# Check which governor is enabled on each CPU
enabled_governors = target.cpufreq.get_all_governors()
print enabled_governors
###Output
_____no_output_____
###Markdown
Example of CGroups configuration
###Code
schedtune = target.cgroups.controller('schedtune')
# Configure a 25% boost group
boostgroup = schedtune.cgroup('/boosted')
boostgroup.set(boost=25)
# Dump the configuration of each group
cgroups = schedtune.list_all()
for cgname in cgroups:
cgroup = schedtune.cgroup(cgname)
attrs = cgroup.get()
boost = attrs['boost']
print '{}:{:<15} boost: {}'.format(schedtune.kind, cgroup.name, boost)
###Output
_____no_output_____
###Markdown
Remote workloads execution Generate RTApp configurations
###Code
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Periodic, Ramp
# Create a new RTApp workload generator using the calibration values
# reported by the TestEnv module
rtapp = RTA(target, 'test', calibration=te.calibration())
# Ramp workload
ramp = Ramp(
start_pct=10,
end_pct=60,
delta_pct=25,
time_s=2,
period_ms=32
)
# Configure this RTApp instance to:
rtapp.conf(
# 1. generate a "profile based" set of tasks
kind = 'profile',
# 2. define the "profile" of each task
params = {
# 3. Composed task
'task_ramp': ramp.get(),
},
#loadref='big',
loadref='LITTLE',
run_dir=target.working_directory
);
###Output
_____no_output_____
###Markdown
Execution and tracing
###Code
def execute(te, wload, res_dir, cg='/'):
logging.info('# Setup FTrace')
te.ftrace.start()
if te.emeter:
logging.info('## Start energy sampling')
te.emeter.reset()
logging.info('### Start RTApp execution')
wload.run(out_dir=res_dir, cgroup=cg)
if te.emeter:
logging.info('## Read energy consumption: %s/energy.json', res_dir)
nrg_report = te.emeter.report(out_dir=res_dir)
else:
nrg_report = None
logging.info('# Stop FTrace')
te.ftrace.stop()
trace_file = os.path.join(res_dir, 'trace.dat')
logging.info('# Save FTrace: %s', trace_file)
te.ftrace.get_trace(trace_file)
logging.info('# Save platform description: %s/platform.json', res_dir)
plt, plt_file = te.platform_dump(res_dir)
logging.info('# Report collected data:')
logging.info(' %s', res_dir)
!tree {res_dir}
return nrg_report, plt, plt_file, trace_file
nrg_report, plt, plt_file, trace_file = execute(te, rtapp, te.res_dir, cg=boostgroup.name)
###Output
_____no_output_____
###Markdown
Regression testing support Writing and running regression tests using the LISA API Define the platform configurations and workloads to test
###Code
stune_smoke_test = '../../tests/stune/smoke_test_ramp.config'
!cat {stune_smoke_test}
###Output
{
/* Devlib modules to enable/disbale for all the experiments */
"modules" : [ "cpufreq", "cgroups" ],
"exclude_modules" : [ ],
/* Binary tools required by the experiments */
"tools" : [ "rt-app" ],
/* FTrace configuration */
"ftrace" : {
"events" : [
"sched_switch",
"sched_contrib_scale_f",
"sched_load_avg_cpu",
"sched_load_avg_task",
"sched_tune_config",
"sched_tune_tasks_update",
"sched_tune_boostgroup_update",
"sched_tune_filter",
"sched_boost_cpu",
"sched_boost_task",
"sched_energy_diff",
"cpu_frequency",
"cpu_capacity",
],
"buffsize" : 10240,
},
/* Set of platform configurations to test */
"confs" : [
{
"tag" : "noboost",
"flags" : "ftrace",
"sched_features" : "ENERGY_AWARE",
"cpufreq" : { "governor" : "sched" },
"cgroups" : {
"conf" : {
"schedtune" : {
"/" : {"boost" : 0 },
"/stune" : {"boost" : 0 },
}
},
"default" : "/",
}
},
{
"tag" : "boost15",
"flags" : "ftrace",
"sched_features" : "ENERGY_AWARE",
"cpufreq" : { "governor" : "sched" },
"cgroups" : {
"conf" : {
"schedtune" : {
"/" : {"boost" : 0 },
"/stune" : {"boost" : 15 },
}
},
"default" : "/stune",
}
},
{
"tag" : "boost30",
"flags" : "ftrace",
"sched_features" : "ENERGY_AWARE",
"cpufreq" : { "governor" : "sched" },
"cgroups" : {
"conf" : {
"schedtune" : {
"/" : {"boost" : 0 },
"/stune" : {"boost" : 30 },
}
},
"default" : "/stune",
}
},
{
"tag" : "boost60",
"flags" : "ftrace",
"sched_features" : "ENERGY_AWARE",
"cpufreq" : { "governor" : "sched" },
"cgroups" : {
"conf" : {
"schedtune" : {
"/" : {"boost" : 0 },
"/stune" : {"boost" : 60 },
}
},
"default" : "/stune",
}
}
],
/* Set of workloads to run on each platform configuration */
"wloads" : {
"mixprof" : {
"type": "rt-app",
"conf" : {
"class" : "profile",
"params" : {
"r5_10-60" : {
"kind" : "Ramp",
"params" : {
"period_ms" : 16,
"start_pct" : 5,
"end_pct" : 60,
"delta_pct" : 5,
"time_s" : 1,
}
}
}
},
"loadref" : "LITTLE",
}
},
/* Number of iterations for each workload */
"iterations" : 1,
}
// vim :set tabstop=4 shiftwidth=4 expandtab
###Markdown
Write Test Cases
###Code
stune_smoke_test = '../../tests/stune/smoke_test_ramp.py'
!cat {stune_smoke_test}
###Output
# SPDX-License-Identifier: Apache-2.0
#
# Copyright (C) 2015, ARM Limited and contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import logging
import os
from test import LisaTest
import trappy
from bart.common.Analyzer import Analyzer
TESTS_DIRECTORY = os.path.dirname(os.path.realpath(__file__))
TESTS_CONF = os.path.join(TESTS_DIRECTORY, "smoke_test_ramp.config")
class STune(LisaTest):
"""Tests for SchedTune framework"""
@classmethod
def setUpClass(cls, *args, **kwargs):
super(STune, cls)._init(TESTS_CONF, *args, **kwargs)
def test_boosted_utilization_signal(self):
"""The boosted utilization signal is appropriately boosted
The margin should match the formula
(sched_load_scale - util) * boost"""
for tc in self.conf["confs"]:
test_id = tc["tag"]
wload_idx = self.conf["wloads"].keys()[0]
run_dir = os.path.join(self.te.res_dir,
"rtapp:{}:{}".format(test_id, wload_idx),
"1")
ftrace_events = ["sched_boost_task"]
ftrace = trappy.FTrace(run_dir, scope="custom",
events=ftrace_events)
first_task_params = self.conf["wloads"][wload_idx]["conf"]["params"]
first_task_name = first_task_params.keys()[0]
rta_task_name = "task_{}".format(first_task_name)
sbt_dfr = ftrace.sched_boost_task.data_frame
boost_task_rtapp = sbt_dfr[sbt_dfr.comm == rta_task_name]
ftrace.add_parsed_event("boost_task_rtapp", boost_task_rtapp)
# Avoid the first period as the task starts with a very
# high load and it overutilizes the CPU
rtapp_period = first_task_params[first_task_name]["params"]["period_ms"]
task_start = boost_task_rtapp.index[0]
after_first_period = task_start + (rtapp_period / 1000.)
boost = tc["cgroups"]["conf"]["schedtune"]["/stune"]["boost"] / 100.
analyzer_const = {
"SCHED_LOAD_SCALE": 1024,
"BOOST": boost,
}
analyzer = Analyzer(ftrace, analyzer_const,
window=(after_first_period, None))
statement = "(((SCHED_LOAD_SCALE - boost_task_rtapp:util) * BOOST) // 100) == boost_task_rtapp:margin"
error_msg = "task was not boosted to the expected margin: {}".\
format(boost)
self.assertTrue(analyzer.assertStatement(statement), msg=error_msg)
# vim :set tabstop=4 shiftwidth=4 expandtab
|
Traffic/Traffic2.ipynb | ###Markdown
NuPIC HTM Implementation
###Code
#Import from NuPIC library
from nupic.encoders import RandomDistributedScalarEncoder
from nupic.algorithms.spatial_pooler import SpatialPooler
from nupic.algorithms.temporal_memory import TemporalMemory
from nupic.algorithms.anomaly import Anomaly
data_all = pd.concat([occupancy_6005, occupancy_t4013, speed_6005, speed_7578, speed_t4013], axis = 1)
dataseq = data_all.resample('H').bfill().interpolate()
import datetime as dt # Imports dates library
a = dt.datetime(2015, 9, 8, 12) # Fixes the start date
seldata = dataseq[a:] # Subsets the data
Vars = set(["occupancy_6005", "occupancy_t4013", "speed_6005", "speed_7578", "speed_t4013"])
for x in Vars:
exec("RDSE_"+ x +" = RandomDistributedScalarEncoder(resolution=seldata['"+ x +"'].std()/5)")
prueba = seldata['speed_6005']
seldata['speed_6005'][0]
prueba.plot()
###Output
_____no_output_____
###Markdown
Encoding It is important to set a resolution fine enough to discern the relevant changes in the variables; in our case we have tried $\sigma/5$. We must check that the differences between encodings are indeed significant when the corresponding changes in the underlying variables are significant.
###Code
RDSE = RandomDistributedScalarEncoder(resolution=prueba.std()/5)
a = np.zeros(len(prueba)-1)
for x in range(len(prueba)-1):
a[x] = sum(RDSE.encode(prueba[x]) != RDSE.encode(prueba[x-1]))
plt.plot(prueba)
plt.plot(a)
###Output
_____no_output_____
###Markdown
Spatial pooler
###Code
# Define the input width
encoder_width = RDSE.getWidth()
pooler_out = 2048
encoder_width = 0
for x in Vars:
exec("encoder_width += RDSE_"+ x +".getWidth()")
pooler_out = 4096
sp = SpatialPooler(
# How large the imput encoding will be
inputDimensions=(encoder_width),
# Number of columns on the Spatial Pooler
columnDimensions=(pooler_out),
    # Fraction of the inputs a column can potentially be connected to; 1 means the column can potentially connect to every input bit
potentialPct = 0.8,
# Eliminates the topology
globalInhibition = True,
# Recall that there is only one inhibition area
numActiveColumnsPerInhArea = pooler_out//50,
    # Rate of synapse permanence growth and decay
synPermInactiveDec = 0.005,
synPermActiveInc = 0.04,
synPermConnected = 0.1,
# boostStrength controls the strength of boosting. Boosting encourages efficient usage of SP columns.
boostStrength = 3.0,
seed = 25,
    # Determines whether the input dimensions wrap around (edges treated as neighbours)
wrapAround = False)
activeColumns = np.zeros(pooler_out)
encoding = RDSE.encode(prueba[0])
sp.compute(encoding, True, activeColumns)
activeColumnIndices = np.nonzero(activeColumns)[0]
print activeColumnIndices
plt.plot(activeColumns)
tm = TemporalMemory(
# Must be the same dimensions as the SP
columnDimensions=(pooler_out,),
# How many cells in each mini-column.
cellsPerColumn=5,
# A segment is active if it has >= activationThreshold connected synapses that are active due to infActiveState
activationThreshold=16,
initialPermanence=0.21,
connectedPermanence=0.5,
# Minimum number of active synapses for a segment to be considered during
# search for the best-matching segments.
minThreshold=12,
# The max number of synapses added to a segment during learning
maxNewSynapseCount=20,
permanenceIncrement=0.1,
permanenceDecrement=0.1,
predictedSegmentDecrement=0.0,
maxSegmentsPerCell=128,
maxSynapsesPerSegment=32,
seed=25)
# Execute Temporal Memory algorithm over active mini-columns.
tm.compute(activeColumnIndices, learn=True)
activeCells = tm.getActiveCells()
print activeCells
###Output
[2070, 2071, 2072, 2073, 2074, 2110, 2111, 2112, 2113, 2114, 2120, 2121, 2122, 2123, 2124, 2155, 2156, 2157, 2158, 2159, 2165, 2166, 2167, 2168, 2169, 2175, 2176, 2177, 2178, 2179, 2235, 2236, 2237, 2238, 2239, 2285, 2286, 2287, 2288, 2289, 2370, 2371, 2372, 2373, 2374, 2675, 2676, 2677, 2678, 2679, 2680, 2681, 2682, 2683, 2684, 2720, 2721, 2722, 2723, 2724, 2780, 2781, 2782, 2783, 2784, 2975, 2976, 2977, 2978, 2979, 7725, 7726, 7727, 7728, 7729, 7735, 7736, 7737, 7738, 7739, 7755, 7756, 7757, 7758, 7759, 7760, 7761, 7762, 7763, 7764, 7800, 7801, 7802, 7803, 7804, 7810, 7811, 7812, 7813, 7814, 7885, 7886, 7887, 7888, 7889, 7905, 7906, 7907, 7908, 7909, 7940, 7941, 7942, 7943, 7944, 7945, 7946, 7947, 7948, 7949, 7950, 7951, 7952, 7953, 7954, 8880, 8881, 8882, 8883, 8884, 8895, 8896, 8897, 8898, 8899, 8900, 8901, 8902, 8903, 8904, 8910, 8911, 8912, 8913, 8914, 8925, 8926, 8927, 8928, 8929, 8930, 8931, 8932, 8933, 8934, 8945, 8946, 8947, 8948, 8949, 8955, 8956, 8957, 8958, 8959, 9010, 9011, 9012, 9013, 9014, 9035, 9036, 9037, 9038, 9039, 9050, 9051, 9052, 9053, 9054, 9060, 9061, 9062, 9063, 9064, 9070, 9071, 9072, 9073, 9074, 9115, 9116, 9117, 9118, 9119, 9255, 9256, 9257, 9258, 9259]
###Markdown
Univariate procedure
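The cells below compute a raw anomaly score per timestep as the fraction of currently active columns that were not predicted at the previous timestep:

$$A_t = 1 - \frac{|\text{active}_t \cap \text{predicted}_{t-1}|}{|\text{active}_t|}$$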
###Code
activeColumns = np.zeros(pooler_out)
from __future__ import division
A_score = np.zeros(len(prueba))
for x in range(len(prueba)):
encoding = RDSE.encode(prueba[x]) #encode each input value
sp.compute(encoding, False, activeColumns) #Spatial Pooler
activeColumnIndices = np.nonzero(activeColumns)[0]
tm.compute(activeColumnIndices, learn=True)
activeCells = tm.getActiveCells()
if x > 0:
inter = set(activeColumnIndices).intersection(predictiveColumns_prev)
inter_l = len(inter)
active_l = len(activeColumnIndices)
A_score[x] = 1 - (inter_l/active_l)
predictiveColumns_prev = list(set([x//5 for x in tm.getPredictiveCells()]))
    #print ("intersection ", inter_l, ", active ", active_l, " ratio ", inter_l/active_l)
activeColumns = np.zeros(pooler_out)
from __future__ import division
A_score = np.zeros(len(prueba))
for x in range(len(prueba)):
encoding = []
for y in Vars:
exec("encoding_y = RDSE_" + y + ".encode(seldata['" + y + "'][x])")
encoding = np.concatenate((encoding, encoding_y))
#RDSE.encode(prueba[x]) #encode each input value
sp.compute(encoding, False, activeColumns) #Spatial Pooler
activeColumnIndices = np.nonzero(activeColumns)[0]
tm.compute(activeColumnIndices, learn=True)
activeCells = tm.getActiveCells()
if x > 0:
inter = set(activeColumnIndices).intersection(predictiveColumns_prev)
inter_l = len(inter)
active_l = len(activeColumnIndices)
A_score[x] = 1 - (inter_l/active_l)
predictiveColumns_prev = list(set([x//5 for x in tm.getPredictiveCells()]))
    #print ("intersection ", inter_l, ", active ", active_l, " ratio ", inter_l/active_l)
plt.plot(seldata)
plt.figure()
plt.plot(A_score)
###Output
_____no_output_____
###Markdown
Computes the anomaly likelihood We now compute the likelihood that the system is currently in an anomalous state. To do so we define two windows: - W: 72 datapoints (three days), used to estimate the normal distribution of the anomaly scores - W_prim: a short window of a few datapoints (set to 5 in the cell below), used to compute the mean anomaly score of the current state
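In the cell below this likelihood is implemented with `scipy.stats.norm.sf` (the Gaussian tail probability $Q$):

$$AL_t = 1 - 2\,Q\!\left(\frac{|\mu_W - \mu_{W'}|}{\max(\sigma_W,\ \epsilon)}\right)$$

where $\mu_W$ and $\sigma_W$ are the mean and standard deviation of the anomaly scores over the long window $W$, $\mu_{W'}$ is the mean over the short window, and $\epsilon$ avoids division by zero.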
###Code
from scipy.stats import norm
W = 72
W_prim = 5
eps = 1e-6
AL_score = np.zeros(len(A_score))
for x in range(len(A_score)):
if x > 0:
W_vec = A_score[max(0, x-W): x]
W_prim_vec = A_score[max(0, x-W_prim): x]
AL_score[x] = 1 - 2*norm.sf(abs(np.mean(W_vec)-np.mean(W_prim_vec))/max(np.std(W_vec), eps))
plt.plot(seldata)
plt.figure()
plt.plot(AL_score)
plt.figure()
plt.plot(A_score)
###Output
_____no_output_____ |
doc/nb/everest.ipynb | ###Markdown
Everest
###Code
import os
import numpy as np
import fitsio
from astropy.table import Table
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(context='talk', style='ticks', palette='deep', font_scale=1.3)#, rc=rc)
colors = sns.color_palette()
%matplotlib inline
specprod = 'everest'
rootdir = os.path.join(os.getenv('DESI_ROOT'), 'spectro', 'fastspecfit')
catdir = os.path.join(rootdir, specprod, 'catalogs')
figdir = os.path.join(os.getenv('DESI_ROOT'), 'users', 'ioannis', 'fastspecfit', specprod)
print(catdir)
print(figdir)
def read_results(prefix='fastspec', survey='main', program='bright'):
catfile = os.path.join(catdir, '{}-{}-{}-{}.fits'.format(prefix, specprod, survey, program))
fast = Table(fitsio.read(catfile, prefix.upper()))
meta = Table(fitsio.read(catfile, 'METADATA'))
I = meta['ZWARN'] == 0
return fast[I], meta[I]
fastspec, metaspec = read_results('fastspec', 'sv3', 'bright')
fastspec
metaspec
fastphot, metaphot = read_results('fastphot', 'sv3', 'dark')
metaphot
###Output
_____no_output_____ |
EEG_Biometrics_with_CNN-and-Ridge-Regression-regularisation.ipynb | ###Markdown
1st trial: EEG_Biometrics_with_CNN-and-Ridge-Regression-regularisation
###Code
import tensorflow
import tensorflow as tf
import numpy as np
import pandas as pd
import cv2
import matplotlib.pyplot as plt
import os
from tensorflow import keras
from tensorflow.keras import layers
from keras.utils import np_utils
from IPython.utils import io
from sklearn.model_selection import train_test_split
import scipy.io as sio
from scipy import stats
from scipy.signal import butter, lfilter
import mne
print(tf.__version__)
# root directory and path to the sample EDF file
path_dir = os.getcwd() # determine root directory
folder_path = path_dir + '\\files1' + '\\S108R06.edf' # append folder and file name to the root directory
# to read a sample file of edf format
#get root directory and create path to the file
path_dir = os.getcwd()
folder_path = path_dir + '\\files1' + '\\S108R06.edf'
#read the sample file
data = mne.io.read_raw_edf(folder_path)
raw_data = data.get_data()
raw_data = raw_data
print(raw_data.shape) # 64 channels by the number of samples recorded for S108R06.edf
# you can get the metadata included in the file and a list of all channels:
info = data.info
channels = data.ch_names
# T0 = rest state, T1 = motion of left fist (runs 3, 4, 7, 8, 11 and 12) or both fists (runs 5, 6, 9, 10, 13, and 14)
# T2 = right fist (in runs 3, 4, 7, 8, 11, and 12) or both feet (in runs 5, 6, 9, 10, 13, and 14)
event, eventid= mne.events_from_annotations(data)
# event or protocol typeid
print(eventid)
#raw eeg waveform details
print(info)
# create custom filter functions
def butter_bandpass(lowcut, highcut, fs, order=5):
nyq = 0.5 * fs
low = lowcut / nyq
high = highcut / nyq
b, a = butter(order, [low, high], btype='band')
return b, a
def butter_bandpass_filter(data, lowcut, highcut, fs, order=5):
b, a = butter_bandpass(lowcut, highcut, fs, order=order)
y = lfilter(b, a, data)
return y
# get filtered signal
Filtered = butter_bandpass_filter(raw_data, 0.5, 50, 160, order = 5)
# filtered vs unfiltered plots by duration of recordings
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
# by duration of recordings
ax1.plot(raw_data[0,:481])
ax1.set_title('EEG signal after 0 Hz and 80 Hz sinusoids')
# by duration of recordings
ax2.plot(Filtered[0,:481])
ax2.set_title('EEG signal after 0.5 to 50 Hz band-pass filter')
ax2.set_xlabel('Duration of Recordings [milliseconds]')
plt.tight_layout()
plt.show()
# filtered vs unfiltered plots by duration of recordings
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
# by channel of recordings
ax1.plot(raw_data.T[0,:481])
ax1.set_title('EEG signal after 0 Hz and 80 Hz sinusoids')
# by channel of recordings
ax2.plot(Filtered.T[0,:481])
ax2.set_title('EEG signal after 0.5 to 50 Hz band-pass filter')
ax2.set_xlabel('Number of Channels [64]')
plt.tight_layout()
plt.show()
# Generate input and target pairs
#folder path
folder_path1 = path_dir + '\\files2'
n_class = os.listdir(folder_path1)
# create input and target classes
input_data = [ ]
m = 64
n = 64
image_size = (m,n)
with io.capture_output() as captured:
for i in n_class:
fpath = os.path.join(folder_path1, i)
cls_num = n_class.index(i)
for imgs in os.listdir(fpath):
if (imgs.endswith("edf")):
data_egg = mne.io.read_raw_edf(os.path.join(fpath,imgs))
raw_eeg = data_egg.get_data()
raw_eeg = raw_eeg.T
raw_eeg = cv2.resize(raw_eeg, image_size)
filtered = butter_bandpass_filter(raw_eeg, 0.5, 50, 160, order = 5)
input_data.append([filtered, cls_num])
print(len(n_class))
# Create Input(Features) and Target(Labels) data array
X = [] # input features
y = [] # input labels
m = 64
n = 64
for features, labels in input_data:
X.append(features)
y.append(labels)
# input and target array for train and test
X = np.array(X).reshape(-1,m,n,1) # 1 dim
y = np.array(np_utils.to_categorical(y))
# input shape
input_shape_X =X[0].shape
print(input_shape_X)
# total size of input and label pair
len(input_data)
print(len(n_class))
# train and test datasets separation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# noise amplitude eps; add uniform random noise to the z-scored (normalised) data
eps = 0.5
# normalisation
X_train = stats.zscore(X_train) + eps*np.random.random_sample(X_train.shape)
X_test = stats.zscore(X_test) + eps*np.random.random_sample(X_test.shape)
# output shape
print(len(y_test[1]))
print(X_train.shape)
# create training batches in terms of tensors of input and target values
train_ds1 = tf.data.Dataset.from_tensor_slices(
(X_train,y_train)).shuffle(1000).batch(32)
test_ds1 = tf.data.Dataset.from_tensor_slices(
(X_test,y_test)).shuffle(1000).batch(32)
# add buffer for effective performance
train_ds = train_ds1.prefetch(buffer_size=32)
test_ds = test_ds1.prefetch(buffer_size=32)
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from keras.models import Sequential
from keras.layers.normalization import BatchNormalization
from keras import regularizers
model = Sequential()
model.add(Conv2D(64, (3,3), input_shape=(input_shape_X),activation='relu',
kernel_regularizer=regularizers.l2((0.2))))
model.add(MaxPooling2D(pool_size=(2, 2), padding ='same'))
model.add(BatchNormalization())
model.add(Conv2D(256, (3,3),activation='relu',
kernel_regularizer=regularizers.l2((0.2))))
model.add(MaxPooling2D(pool_size=(2, 2), padding ='same'))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(109, activation='relu'))
model.add(Dense(len(n_class), activation='softmax'))
model.summary()
from keras.optimizers import SGD, Adam
epochs = 10
model.compile(
optimizer=keras.optimizers.SGD(1e-3),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
model.fit(
train_ds, epochs=epochs, validation_data=test_ds,
)
###Output
Epoch 1/10
4/4 [==============================] - 1s 259ms/step - loss: 23.3865 - accuracy: 0.1122 - val_loss: 23.0208 - val_accuracy: 0.1190
Epoch 2/10
4/4 [==============================] - 1s 257ms/step - loss: 22.0304 - accuracy: 0.5816 - val_loss: 22.9531 - val_accuracy: 0.1667
Epoch 3/10
4/4 [==============================] - 1s 228ms/step - loss: 21.6174 - accuracy: 0.7551 - val_loss: 22.8788 - val_accuracy: 0.1667
Epoch 4/10
4/4 [==============================] - 1s 217ms/step - loss: 21.2700 - accuracy: 0.8980 - val_loss: 22.8028 - val_accuracy: 0.1190
Epoch 5/10
4/4 [==============================] - 1s 223ms/step - loss: 21.0662 - accuracy: 0.9184 - val_loss: 22.7289 - val_accuracy: 0.2143
Epoch 6/10
4/4 [==============================] - 1s 223ms/step - loss: 20.8337 - accuracy: 1.0000 - val_loss: 22.6502 - val_accuracy: 0.2381
Epoch 7/10
4/4 [==============================] - 1s 238ms/step - loss: 20.7303 - accuracy: 0.9694 - val_loss: 22.5665 - val_accuracy: 0.2857
Epoch 8/10
4/4 [==============================] - 1s 218ms/step - loss: 20.6029 - accuracy: 1.0000 - val_loss: 22.4842 - val_accuracy: 0.3810
Epoch 9/10
4/4 [==============================] - 1s 223ms/step - loss: 20.5031 - accuracy: 1.0000 - val_loss: 22.4172 - val_accuracy: 0.4286
Epoch 10/10
4/4 [==============================] - 1s 218ms/step - loss: 20.4513 - accuracy: 0.9796 - val_loss: 22.3267 - val_accuracy: 0.4524
|
experiment-2/1_Data_prep/prepare_data.ipynb | ###Markdown
Data preparation 1. Import Libraries
###Code
import boto3
import sagemaker
sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()
print(role)
bucket = 'eagle-eye-dataset'
prefix = 'OD_using_TFOD_API/experiment2/tfrecords'
###Output
arn:aws:iam::743025358310:role/service-role/AmazonSageMaker-ExecutionRole-20210528T211254
###Markdown
2. Uploading To S3
###Code
s3_input = sagemaker_session.upload_data('data', bucket, prefix)
print(s3_input)
###Output
s3://eagle-eye-dataset/OD_using_TFOD_API/experiment2/tfrecords
###Markdown
3. Copying data To Local Path
###Code
image_name = 'copying-ecr-expr2'
!sh ./docker/build_and_push.sh $image_name
import os
with open (os.path.join('docker', 'ecr_image_fullname.txt'), 'r') as f:
container = f.readlines()[0][:-1]
print(container)
###Output
743025358310.dkr.ecr.us-west-2.amazonaws.com/copying-ecr-expr2:20210623102923
|
Titanico/.ipynb_checkpoints/titanico-3-checkpoint.ipynb | ###Markdown
Table of Contents 1. Initialization 2. Dataset: Cleaning and Exploration 3. Modelling <a id="01" style=" background-color: 37509b; border: none; color: white; padding: 2px 10px; text-align: center; text-decoration: none; display: inline-block; font-size: 10px;" href="toc">TOC ↻ 1. Initialization 1.1 Description <a href="01"style=" border-radius: 10px; background-color: f1f1f1; border: none; color: 37509b; text-align: center; text-decoration: none; display: inline-block; padding: 4px 4px; font-size: 14px;">↻ Dataset available at: https://www.kaggle.com/c/titanic/

| Variable | Definition | Key |
|----------|------------|-----|
| survival | Survival | 0 = No, 1 = Yes |
| pclass | Ticket class | 1 = 1st, 2 = 2nd, 3 = 3rd |
| sex | Sex | |
| age | Age in years | |
| sibsp | # of siblings / spouses aboard the Titanic | |
| parch | # of parents / children aboard the Titanic | |
| ticket | Ticket number | |
| fare | Passenger fare | |
| cabin | Cabin number | |
| embarked | Port of Embarkation | C = Cherbourg, Q = Queenstown, S = Southampton |

1.2 Packages <a href="01"style=" border-radius: 10px; background-color: f1f1f1; border: none; color: 37509b; text-align: center; text-decoration: none; display: inline-block; padding: 4px 4px; font-size: 14px;">↻
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from tqdm import tqdm
from time import time,sleep
import nltk
from nltk import tokenize
from string import punctuation
from nltk.stem import PorterStemmer, SnowballStemmer, LancasterStemmer
from unidecode import unidecode
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score,f1_score
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_validate,KFold,GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OrdinalEncoder,OneHotEncoder, LabelEncoder
from sklearn.preprocessing import StandardScaler,Normalizer
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from scipy.stats import randint
from numpy.random import uniform
###Output
_____no_output_____
###Markdown
1.3 Settings <a href="01"style=" border-radius: 10px; background-color: f1f1f1; border: none; color: 37509b; text-align: center; text-decoration: none; display: inline-block; padding: 4px 4px; font-size: 14px;">↻
###Code
# pandas options
pd.options.display.max_columns = 30
pd.options.display.float_format = '{:.2f}'.format
# seaborn options
sns.set(style="darkgrid")
import warnings
warnings.filterwarnings("ignore")
SEED = 42
###Output
_____no_output_____
###Markdown
1.4 Useful Functions <a href="01"style=" border-radius: 10px; background-color: f1f1f1; border: none; color: 37509b; text-align: center; text-decoration: none; display: inline-block; padding: 4px 4px; font-size: 14px;">↻
###Code
def treat_words(df,
col,
language='english',
inplace=False,
tokenizer = tokenize.WordPunctTokenizer(),
decode = True,
stemmer = None,
lower = True,
remove_words = [],
):
"""
Description:
----------------
Receives a dataframe and the column name. Eliminates
stopwords for each row of that column and apply stemmer.
After that, it regroups and returns a list.
tokenizer = tokenize.WordPunctTokenizer()
tokenize.WhitespaceTokenizer()
stemmer = PorterStemmer()
SnowballStemmer()
LancasterStemmer()
nltk.RSLPStemmer() # in portuguese
"""
pnct = [string for string in punctuation] # from string import punctuation
wrds = nltk.corpus.stopwords.words(language)
unwanted_words = pnct + wrds + remove_words
processed_text = list()
for element in tqdm(df[col]):
# starts a new list
new_text = list()
# starts a list with the words of the non precessed text
text_old = tokenizer.tokenize(element)
# check each word
for wrd in text_old:
# if the word are not in the unwanted words list
# add to the new list
if wrd.lower() not in unwanted_words:
new_wrd = wrd
if decode: new_wrd = unidecode(new_wrd)
if stemmer: new_wrd = stemmer.stem(new_wrd)
if lower: new_wrd = new_wrd.lower()
if new_wrd not in remove_words:
new_text.append(new_wrd)
processed_text.append(' '.join(new_text))
if inplace:
df[col] = processed_text
else:
return processed_text
def list_words_of_class(df,
col,
language='english',
inplace=False,
tokenizer = tokenize.WordPunctTokenizer(),
decode = True,
stemmer = None,
lower = True,
remove_words = []
):
"""
Description:
----------------
Receives a dataframe and the column name. Eliminates
stopwords for each row of that column, apply stemmer
and returns a list of all the words.
"""
lista = treat_words(
df,col = col,language = language,
tokenizer=tokenizer,decode=decode,
stemmer=stemmer,lower=lower,
remove_words = remove_words
)
words_list = []
for string in lista:
words_list += tokenizer.tokenize(string)
return words_list
def get_frequency(df,
col,
language='english',
inplace=False,
tokenizer = tokenize.WordPunctTokenizer(),
decode = True,
stemmer = None,
lower = True,
remove_words = []
):
list_of_words = list_words_of_class(
df,
col = col,
decode = decode,
stemmer = stemmer,
lower = lower,
remove_words = remove_words
)
freq = nltk.FreqDist(list_of_words)
df_freq = pd.DataFrame({
'word': list(freq.keys()),
'frequency': list(freq.values())
}).sort_values(by='frequency',ascending=False)
n_words = df_freq['frequency'].sum()
df_freq['prop'] = 100*df_freq['frequency']/n_words
return df_freq
def common_best_words(df,col,n_common = 10,tol_frac = 0.8,n_jobs = 1):
list_to_remove = []
for i in range(0,n_jobs):
print('[info] Most common words in not survived')
sleep(0.5)
df_dead = get_frequency(
df.query('Survived == 0'),
col = col,
decode = False,
stemmer = False,
lower = False,
remove_words = list_to_remove )
print('[info] Most common words in survived')
sleep(0.5)
df_surv = get_frequency(
df.query('Survived == 1'),
col = col,
decode = False,
stemmer = False,
lower = False,
remove_words = list_to_remove )
words_dead = df_dead.nlargest(n_common, 'frequency')
list_dead = list(words_dead['word'].values)
words_surv = df_surv.nlargest(n_common, 'frequency')
list_surv = list(words_surv['word'].values)
for word in list(set(list_dead).intersection(list_surv)):
prop_dead = words_dead[words_dead['word'] == word]['prop'].values[0]
prop_surv = words_surv[words_surv['word'] == word]['prop'].values[0]
ratio = min([prop_dead,prop_surv])/max([prop_dead,prop_surv])
if ratio > tol_frac:
list_to_remove.append(word)
return list_to_remove
def just_keep_the_words(df,
col,
keep_words = [],
tokenizer = tokenize.WordPunctTokenizer()
):
"""
Description:
----------------
    Removes all words that are not in `keep_words`
"""
processed_text = list()
    # for each entry of the column
for element in tqdm(df[col]):
# starts a new list
new_text = list()
# starts a list with the words of the non precessed text
text_old = tokenizer.tokenize(element)
for wrd in text_old:
if wrd in keep_words: new_text.append(wrd)
processed_text.append(' '.join(new_text))
return processed_text
class Classifier:
'''
Description
-----------------
Class to approach classification algorithm
Example
-----------------
classifier = Classifier(
algorithm = ChooseTheAlgorith,
hyperparameters_range = {
'hyperparameter_1': [1,2,3],
'hyperparameter_2': [4,5,6],
'hyperparameter_3': [7,8,9]
}
)
# Looking for best model
classifier.grid_search_fit(X,y,n_splits=10)
#dt.grid_search_results.head(3)
# Prediction Form 1
par = classifier.best_model_params
dt.fit(X_trn,y_trn,params = par)
y_pred = classifier.predict(X_tst)
print(accuracy_score(y_tst, y_pred))
# Prediction Form 2
classifier.fit(X_trn,y_trn,params = 'best_model')
y_pred = classifier.predict(X_tst)
print(accuracy_score(y_tst, y_pred))
# Prediction Form 3
classifier.fit(X_trn,y_trn,min_samples_split = 5,max_depth=4)
y_pred = classifier.predict(X_tst)
print(accuracy_score(y_tst, y_pred))
'''
def __init__(self,algorithm, hyperparameters_range={},random_state=42):
self.algorithm = algorithm
self.hyperparameters_range = hyperparameters_range
self.random_state = random_state
self.grid_search_cv = None
self.grid_search_results = None
self.hyperparameters = self.__get_hyperparameters()
self.best_model = None
self.best_model_params = None
self.fitted_model = None
def grid_search_fit(self,X,y,verbose=0,n_splits=10,shuffle=True,scoring='accuracy'):
self.grid_search_cv = GridSearchCV(
self.algorithm(),
self.hyperparameters_range,
cv = KFold(n_splits = n_splits, shuffle=shuffle, random_state=self.random_state),
scoring=scoring,
verbose=verbose
)
self.grid_search_cv.fit(X, y)
col = list(map(lambda par: 'param_'+str(par),self.hyperparameters))+[
'mean_fit_time',
'mean_test_score',
'std_test_score',
'params'
]
results = pd.DataFrame(self.grid_search_cv.cv_results_)
self.grid_search_results = results[col].sort_values(
['mean_test_score','mean_fit_time'],
ascending=[False,True]
).reset_index(drop=True)
self.best_model = self.grid_search_cv.best_estimator_
self.best_model_params = self.best_model.get_params()
def best_model_cv_score(self,X,y,parameter='test_score',verbose=0,n_splits=10,shuffle=True,scoring='accuracy'):
if self.best_model != None:
cv_results = cross_validate(
self.best_model,
X = X,
y = y,
cv=KFold(n_splits = 10,shuffle=True,random_state=self.random_state)
)
return {
parameter+'_mean': cv_results[parameter].mean(),
parameter+'_std': cv_results[parameter].std()
}
def fit(self,X,y,params=None,**kwargs):
model = None
if len(kwargs) == 0 and params == 'best_model' and self.best_model != None:
model = self.best_model
elif type(params) == dict and len(params) > 0:
model = self.algorithm(**params)
elif len(kwargs) >= 0 and params==None:
model = self.algorithm(**kwargs)
else:
print('[Error]')
if model != None:
model.fit(X,y)
self.fitted_model = model
def predict(self,X):
if self.fitted_model != None:
return self.fitted_model.predict(X)
else:
print('[Error]')
return np.array([])
def predict_score(self,X_tst,y_tst,score=accuracy_score):
if self.fitted_model != None:
y_pred = self.predict(X_tst)
return score(y_tst, y_pred)
else:
print('[Error]')
return np.array([])
def hyperparameter_info(self,hyperpar):
str_ = 'param_'+hyperpar
return self.grid_search_results[
[str_,'mean_fit_time','mean_test_score']
].groupby(str_).agg(['mean','std'])
def __get_hyperparameters(self):
return [hp for hp in self.hyperparameters_range]
def cont_class_limits(lis_df,n_class):
ampl = lis_df.quantile(1.0)-lis_df.quantile(0.0)
ampl_class = ampl/n_class
limits = [[i*ampl_class,(i+1)*ampl_class] for i in range(n_class)]
return limits
def cont_classification(lis_df,limits):
list_res = []
n_class = len(limits)
for elem in lis_df:
for ind in range(n_class-1):
if elem >= limits[ind][0] and elem < limits[ind][1]:
list_res.append(ind+1)
if elem >= limits[-1][0]: list_res.append(n_class)
return list_res
###Output
_____no_output_____
###Markdown
<a id="02" style=" background-color: 37509b; border: none; color: white; padding: 2px 10px; text-align: center; text-decoration: none; display: inline-block; font-size: 10px;" href="toc">TOC ↻ 2. Dataset: Cleaning and Exploration Inicialização Pacotes Funcoes Dados de Indicadores Sociais Dados de COVID-19 --> 2.1 Import Dataset <a href="02"style=" border-radius: 10px; background-color: f1f1f1; border: none; color: 37509b; text-align: center; text-decoration: none; display: inline-block; padding: 4px 4px; font-size: 14px;">↻
###Code
df_trn = pd.read_csv('data/train.csv')
df_tst = pd.read_csv('data/test.csv')
df = pd.concat([df_trn,df_tst])
df_trn = df_trn.drop(columns=['PassengerId'])
df_tst = df_tst.drop(columns=['PassengerId'])
df_tst.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 10 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Pclass 418 non-null int64
1 Name 418 non-null object
2 Sex 418 non-null object
3 Age 332 non-null float64
4 SibSp 418 non-null int64
5 Parch 418 non-null int64
6 Ticket 418 non-null object
7 Fare 417 non-null float64
8 Cabin 91 non-null object
9 Embarked 418 non-null object
dtypes: float64(2), int64(3), object(5)
memory usage: 32.8+ KB
###Markdown
Pclass Investigating if the class is related to the probability of survival
###Code
sns.barplot(x='Pclass', y="Survived", data=df_trn)
###Output
_____no_output_____
###Markdown
Name
###Code
treat_words(df_trn,col = 'Name',inplace=True)
treat_words(df_tst,col = 'Name',inplace=True)
%matplotlib inline
from wordcloud import WordCloud
import matplotlib.pyplot as plt
all_words = ' '.join(list(df_trn['Name']))
word_cloud = WordCloud().generate(all_words)
plt.figure(figsize=(10,7))
plt.imshow(word_cloud, interpolation='bilinear')
plt.axis("off")
plt.show()
common_best_words(df_trn,col='Name',n_common = 10,tol_frac = 0.5,n_jobs = 1)
###Output
[info] Most common words in not survived
###Markdown
We can see that Master and William are words that appear with roughly the same relative frequency among both the survived and the not-survived cases, so they are not good discriminative words.
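A small illustration of the rule applied by `common_best_words` (the percentages here are made up for the example, not taken from the dataset):

```python
# Hypothetical proportions, for illustration only.
prop_dead, prop_surv, tol_frac = 2.0, 1.6, 0.5   # % of the words of each class
ratio = min(prop_dead, prop_surv) / max(prop_dead, prop_surv)
print(ratio > tol_frac)  # True -> the word is considered uninformative and removed
```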
###Code
df_comm = get_frequency(df_trn,col = 'Name',remove_words=['("','")','master', 'william']).reset_index(drop=True)
surv_prob = [ df_trn['Survived'][df_trn['Name'].str.contains(row['word'])].mean() for index, row in df_comm.iterrows()]
df_comm['survival_prob (%)'] = 100*np.array(surv_prob)
print('Survival Frequency related to words in Name')
df_comm.head(10)
df_comm_surv = get_frequency(df_trn[df_trn['Survived']==1],col = 'Name',remove_words=['("','")']).reset_index(drop=True)
sleep(0.5)
print('Most frequent words within those who survived')
df_comm_surv.head(10)
df_comm_dead = get_frequency(df_trn[df_trn['Survived']==0],col = 'Name',remove_words=['("','")']).reset_index(drop=True)
sleep(0.5)
print("Most frequent words within those that did not survive")
df_comm_dead.head(10)
###Output
100%|██████████| 549/549 [00:00<00:00, 23993.92it/s]
###Markdown
Feature Engineering
###Code
min_occurrences = 2
df_comm = get_frequency(df,col = 'Name',
remove_words=['("','")','john','henry', 'william','h','j','jr']
).reset_index(drop=True)
words_to_keep = list(df_comm[df_comm['frequency'] > min_occurrences]['word'])
df_trn['Name'] = just_keep_the_words(df_trn,
col = 'Name',
keep_words = words_to_keep
)
df_tst['Name'] = just_keep_the_words(df_tst,
col = 'Name',
keep_words = words_to_keep
)
vectorize = CountVectorizer(lowercase=True,max_features = 4)
vectorize.fit(df_trn['Name'])
bag_of_words = vectorize.transform(df_trn['Name'])
X = pd.DataFrame(vectorize.fit_transform(df_trn['Name']).toarray(),
columns=list(map(lambda word: 'Name_'+word,vectorize.get_feature_names()))
)
y = df_trn['Survived']
from sklearn.model_selection import train_test_split
X_trn,X_tst,y_trn,y_tst = train_test_split(
X,
y,
test_size = 0.25,
random_state=42
)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(C=100)
classifier.fit(X_trn,y_trn)
accuracy = classifier.score(X_tst,y_tst)
print('Accuracy = %.3f%%' % (100*accuracy))
df_trn = pd.concat([
df_trn
,
pd.DataFrame(vectorize.fit_transform(df_trn['Name']).toarray(),
columns=list(map(lambda word: 'Name_'+word,vectorize.get_feature_names()))
)
],axis=1).drop(columns=['Name'])
df_tst = pd.concat([
df_tst
,
pd.DataFrame(vectorize.fit_transform(df_tst['Name']).toarray(),
columns=list(map(lambda word: 'Name_'+word,vectorize.get_feature_names()))
)
],axis=1).drop(columns=['Name'])
###Output
_____no_output_____
###Markdown
Sex
###Code
from sklearn.preprocessing import LabelEncoder
Sex_Encoder = LabelEncoder()
df_trn['Sex'] = Sex_Encoder.fit_transform(df_trn['Sex']).astype(int)
df_tst['Sex'] = Sex_Encoder.transform(df_tst['Sex']).astype(int)
###Output
_____no_output_____
###Markdown
Age
###Code
mean_age = df['Age'][df['Age'].notna()].mean()
df_trn['Age'].fillna(mean_age,inplace=True)
df_tst['Age'].fillna(mean_age,inplace=True)
age_limits = cont_class_limits(df['Age'],3)
df_trn['Age'] = cont_classification(df_trn['Age'],age_limits)
df_tst['Age'] = cont_classification(df_tst['Age'],age_limits)
###Output
_____no_output_____
###Markdown
Family Size
###Code
# df_trn['FamilySize'] = df_trn['SibSp'] + df_trn['Parch'] + 1
# df_tst['FamilySize'] = df_tst['SibSp'] + df_tst['Parch'] + 1
# df_trn = df_trn.drop(columns = ['SibSp','Parch'])
# df_tst = df_tst.drop(columns = ['SibSp','Parch'])
###Output
_____no_output_____
###Markdown
Cabin Feature There is very little data about the cabin
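A quick way to quantify that sparsity, assuming `df_trn` still carries the original `Cabin` column at this point (in the classic Kaggle training split roughly three quarters of the entries are missing):

```python
# Share of passengers without a recorded cabin
print('Missing Cabin entries: {:.1f}%'.format(100 * df_trn['Cabin'].isnull().mean()))
```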
###Code
# df_trn['Cabin'] = df_trn['Cabin'].fillna('N000')
# df_cab = df_trn[df_trn['Cabin'].notna()]
# df_cab = pd.concat(
# [
# df_cab,
# df_cab['Cabin'].str.extract(
# '([A-Za-z]+)(\d+\.?\d*)([A-Za-z]*)',
# expand = True).drop(columns=[2]).rename(
# columns={0: 'Cabin_Class', 1: 'Cabin_Number'}
# )
# ], axis=1)
# df_trn = df_cab.drop(columns=['Cabin','Cabin_Number'])
# df_trn = pd.concat([
# df_trn.drop(columns=['Cabin_Class']),
# pd.get_dummies(df_trn['Cabin_Class'],prefix='Cabin').drop(columns=['Cabin_N'])
# # pd.get_dummies(df_trn['Cabin_Class'],prefix='Cabin')
# ],axis=1)
# df_tst['Cabin'] = df_tst['Cabin'].fillna('N000')
# df_cab = df_tst[df_tst['Cabin'].notna()]
# df_cab = pd.concat(
# [
# df_cab,
# df_cab['Cabin'].str.extract(
# '([A-Za-z]+)(\d+\.?\d*)([A-Za-z]*)',
# expand = True).drop(columns=[2]).rename(
# columns={0: 'Cabin_Class', 1: 'Cabin_Number'}
# )
# ], axis=1)
# df_tst = df_cab.drop(columns=['Cabin','Cabin_Number'])
# df_tst = pd.concat([
# df_tst.drop(columns=['Cabin_Class']),
# pd.get_dummies(df_tst['Cabin_Class'],prefix='Cabin').drop(columns=['Cabin_N'])
# # pd.get_dummies(df_tst['Cabin_Class'],prefix='Cabin')
# ],axis=1)
###Output
_____no_output_____
###Markdown
Ticket
###Code
df_trn = df_trn.drop(columns=['Ticket'])
df_tst = df_tst.drop(columns=['Ticket'])
###Output
_____no_output_____
###Markdown
Fare
###Code
mean_fare = df['Fare'][df['Fare'].notna()].mean()
df_trn['Fare'].fillna(mean_fare,inplace=True)
df_tst['Fare'].fillna(mean_fare,inplace=True)
fare_limits = cont_class_limits(df['Fare'],3)
df_trn['Fare'] = cont_classification(df_trn['Fare'],fare_limits)
df_tst['Fare'] = cont_classification(df_tst['Fare'],fare_limits)
###Output
_____no_output_____
###Markdown
Embarked
###Code
most_frequent_emb = df['Embarked'].value_counts()[:1].index.tolist()[0]
df_trn['Embarked'] = df_trn['Embarked'].fillna(most_frequent_emb)
df_tst['Embarked'] = df_tst['Embarked'].fillna(most_frequent_emb)
df_trn = pd.concat([
df_trn.drop(columns=['Embarked']),
pd.get_dummies(df_trn['Embarked'],prefix='Emb').drop(columns=['Emb_C'])
# pd.get_dummies(df_trn['Embarked'],prefix='Emb')
],axis=1)
df_tst = pd.concat([
df_tst.drop(columns=['Embarked']),
pd.get_dummies(df_tst['Embarked'],prefix='Emb').drop(columns=['Emb_C'])
# pd.get_dummies(df_tst['Embarked'],prefix='Emb')
],axis=1)
###Output
_____no_output_____
###Markdown
<a id="03" style=" background-color: 37509b; border: none; color: white; padding: 2px 10px; text-align: center; text-decoration: none; display: inline-block; font-size: 10px;" href="toc">TOC ↻ 3. Modelling Inicialização Pacotes Funcoes Dados de Indicadores Sociais Dados de COVID-19 --> Classification Approach
###Code
Model_Scores = {}
Model_Scores = {}
def print_model_scores():
return pd.DataFrame([[
model,
Model_Scores[model]['test_accuracy_score'],
Model_Scores[model]['cv_score_mean'],
Model_Scores[model]['cv_score_std']
] for model in Model_Scores.keys()],
columns=['model','test_accuracy_score','cv_score','cv_score_std']
).sort_values(by='cv_score',ascending=False)
def OptimizeClassification(X,y,
model,
hyperparametric_space,
cv = KFold(n_splits = 10, shuffle=True,random_state=SEED),
model_description = 'classifier',
n_iter = 20,
test_size = 0.25
):
X_trn,X_tst,y_trn,y_tst = train_test_split(
X,
y,
test_size = test_size,
random_state=SEED
)
start = time()
# Searching the best setting
print('[info] Searching for the best hyperparameter')
search_cv = RandomizedSearchCV(
model,
hyperparametric_space,
n_iter = n_iter,
cv = cv,
random_state = SEED)
search_cv.fit(X, y)
results = pd.DataFrame(search_cv.cv_results_)
print('[info] Search Timing: %.2f seconds'%(time() - start))
# Evaluating Test Score For Best Estimator
start = time()
print('[info] Test Accuracy Score')
gb = search_cv.best_estimator_
gb.fit(X_trn, y_trn)
y_pred = gb.predict(X_tst)
# Evaluating K Folded Cross Validation
print('[info] KFolded Cross Validation')
cv_results = cross_validate(search_cv.best_estimator_,X,y,
cv = cv )
print('[info] Cross Validation Timing: %.2f seconds'%(time() - start))
Model_Scores[model_description] = {
'test_accuracy_score':gb.score(X_tst,y_tst),
'cv_score_mean':cv_results['test_score'].mean(),
'cv_score_std':cv_results['test_score'].std(),
'best_params':search_cv.best_estimator_.get_params()
}
pd.options.display.float_format = '{:,.5f}'.format
print('\t\t test_accuracy_score: {:.3f}'.format(gb.score(X_tst,y_tst)))
print('\t\t cv_score: {:.3f}±{:.3f}'.format(
cv_results['test_score'].mean(),cv_results['test_score'].std()))
params_list = ['mean_test_score']+list(map(lambda var: 'param_'+var,search_cv.best_params_.keys()))+['mean_fit_time']
return results[params_list].sort_values(
['mean_test_score','mean_fit_time'],
ascending=[False,True]
)
###Output
_____no_output_____
###Markdown
Scaling DataSet
###Code
scaler = StandardScaler()
# caler = Normalizer()
scaler.fit(df_trn.drop(columns=['Survived','Cabin']))
X = scaler.transform(df_trn.drop(columns=['Survived','Cabin']))
y = df_trn['Survived']
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
results = OptimizeClassification(X,y,
model = LogisticRegression(random_state=SEED),
hyperparametric_space = {
'solver' : ['newton-cg', 'lbfgs', 'liblinear'],#
'C' : uniform(0.075,0.125,200) #10**uniform(-2,2,200)
},
cv = KFold(n_splits = 50, shuffle=True,random_state=SEED),
model_description = 'LogisticRegression',
n_iter = 20
)
results.head(5)
###Output
[info] Searching for the best hyperparameter
[info] Search Timing: 13.23 seconds
[info] Test Accuracy Score
[info] KFolded Cross Validation
[info] Cross Validation Timing: 0.15 seconds
test_accuracy_score: 0.798
cv_score: 0.831±0.096
###Markdown
Support Vector Classifier
###Code
results = OptimizeClassification(X,y,
model = SVC(random_state=SEED),
hyperparametric_space = {
'kernel' : ['linear', 'poly','rbf','sigmoid'],
'C' : 10**uniform(-1,1,200),
'decision_function_shape' : ['ovo', 'ovr'],
'degree' : [1,2,3,4]
},
cv = KFold(n_splits = 50, shuffle=True,random_state=SEED),
model_description = 'SVC',
n_iter = 20
)
results.head(5)
###Output
[info] Searching for the best hyperparameter
[info] Search Timing: 20.93 seconds
[info] Test Accuracy Score
[info] KFolded Cross Validation
[info] Cross Validation Timing: 0.96 seconds
test_accuracy_score: 0.825
cv_score: 0.835±0.098
###Markdown
Decision Tree Classifier
###Code
results = OptimizeClassification(X,y,
model = DecisionTreeClassifier(),
hyperparametric_space = {
'min_samples_split': randint(10,30),
'max_depth': randint(10,30),
'min_samples_leaf': randint(1,10)
},
cv = KFold(n_splits = 50, shuffle=True,random_state=SEED),
model_description = 'DecisionTree',
n_iter = 100
)
results.head(5)
print_model_scores()
###Output
_____no_output_____
###Markdown
Random Forest Classifier
###Code
results = OptimizeClassification(X,y,
model = RandomForestClassifier(random_state = SEED,oob_score=True),
hyperparametric_space = {
'n_estimators': randint(190,250),
'min_samples_split': randint(10,15),
'min_samples_leaf': randint(1,6)
# 'max_depth': randint(1,100),
# ,
# 'min_weight_fraction_leaf': uniform(0,1,100),
# 'max_features': uniform(0,1,100),
# 'max_leaf_nodes': randint(10,100),
},
cv = KFold(n_splits = 20, shuffle=True,random_state=SEED),
model_description = 'RandomForestClassifier',
n_iter = 20
)
results.head(5)
print_model_scores()
###Output
_____no_output_____
###Markdown
Gradient Boosting Classifier
###Code
results = OptimizeClassification(X,y,
model = GradientBoostingClassifier(),
hyperparametric_space = {
'loss': ['exponential'], #'deviance',
'min_samples_split': randint(130,170),
'max_depth': randint(6,15),
'learning_rate': uniform(0.05,0.15,100),
'random_state' : randint(0,10),
'tol': 10**uniform(-5,-3,100)
},
cv = KFold(n_splits = 50, shuffle=True,random_state=SEED),
model_description = 'GradientBoostingClassifier',
n_iter = 100
)
results.head(5)
###Output
[info] Searching for the best hyperparameter
[info] Search Timing: 652.98 seconds
[info] Test Accuracy Score
[info] KFolded Cross Validation
[info] Cross Validation Timing: 5.60 seconds
test_accuracy_score: 0.807
cv_score: 0.831±0.099
###Markdown
Multi Layer Perceptron Classifier
###Code
def random_layer(max_depth=4,max_layer=100):
res = list()
depth = np.random.randint(1,1+max_depth)
for i in range(1,depth+1):
res.append(np.random.randint(2,max_layer))
return tuple(res)
results = OptimizeClassification(X,y,
model = MLPClassifier(random_state=SEED),
hyperparametric_space = {
'hidden_layer_sizes': [random_layer(max_depth=4,max_layer=40) for i in range(10)],
'solver' : ['lbfgs', 'sgd', 'adam'],
'learning_rate': ['adaptive'],
'activation' : ['identity', 'logistic', 'tanh', 'relu']
},
cv = KFold(n_splits = 3, shuffle=True,random_state=SEED),
model_description = 'MLPClassifier',
n_iter = 100
)
results.head(5)
###Output
[info] Searching for the best hyperparameter
[info] Search Timing: 571.72 seconds
[info] Test Accuracy Score
[info] KFolded Cross Validation
[info] Cross Validation Timing: 2.38 seconds
test_accuracy_score: 0.821
cv_score: 0.828±0.000
###Markdown
Best Model **Best Model (until now)**```GradientBoostingClassifier(ccp_alpha=0.0, criterion='friedman_mse', init=None, learning_rate=0.10165006218060142, loss='exponential', max_depth=7, max_features=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=134, min_weight_fraction_leaf=0.0, n_estimators=100, n_iter_no_change=None, presort='deprecated', random_state=42, subsample=1.0, tol=0.0001, validation_fraction=0.1, verbose=0, warm_start=False)```
###Code
print_model_scores()
model = GradientBoostingClassifier(**Model_Scores['GradientBoostingClassifier']['best_params'])
X_trn,X_tst,y_trn,y_tst = train_test_split(
X,
y,
test_size = 0.25
)
model.fit(X_trn,y_trn)
y_pred = model.predict(X_tst)
cv_results = cross_validate(model,X,y,
cv = KFold(n_splits = 20, shuffle=True) )
print('test_accuracy_score: {:.3f}'.format(model.score(X_tst,y_tst)))
print('cv_score: {:.3f}±{:.3f}'.format(
cv_results['test_score'].mean(),cv_results['test_score'].std()))
# Refit the best model on the full training data and build the submission file
pass_id = pd.read_csv('data/test.csv')['PassengerId']
model = GradientBoostingClassifier(**Model_Scores['GradientBoostingClassifier']['best_params'])
model.fit(X,y)
# Apply the already-fitted scaler to the held-out test features
X_sub = scaler.transform(df_tst.drop(columns=['Cabin']))
y_pred = model.predict(X_sub)
# Write the (PassengerId, Survived) pairs to the submission file
sub = pd.Series(y_pred,index=pass_id,name='Survived')
sub.to_csv('gbc_model_2.csv',header=True)
y_pred
model
###Output
_____no_output_____ |
11Oct/F-elNet.ipynb | ###Markdown
BALANCED TESTING
###Code
NBname='_F-elNetb'
# xb_better_test=testb.reshape(testb.shape[0],testb.shape[1])
# yb_better_test=tlabelsb.reshape(tlabelsb.shape[0],tlabelsb.shape[1])
# yb_better_test=yb_better_test[:,1]
y_pred = LR.predict_proba(xb_better_test)
fpr_0, tpr_0, thresholds_0 = roc_curve(yb_better_test, y_pred[:,1])
fpr_x.append(fpr_0)
tpr_x.append(tpr_0)
thresholds_x.append(thresholds_0)
auc_x.append(auc(fpr_0, tpr_0))
testby=yb_better_test
# predict probabilities for testb set
yhat_probs = LR.predict_proba(xb_better_test)
# predict crisp classes for testb set
yhat_classes = LR.predict(xb_better_test)
# # reduce to 1d array
#testby1=tlabels[:,1]
#yhat_probs = yhat_probs[:, 0]
#yhat_classes = yhat_classes[:, 0]
# accuracy: (tp + tn) / (p + n)
acc_S.append(accuracy_score(testby, yhat_classes))
#print('Accuracy: %f' % accuracy_score(testby, yhat_classes))
#precision tp / (tp + fp)
pre_S.append(precision_score(testby, yhat_classes))
#print('Precision: %f' % precision_score(testby, yhat_classes))
#recall: tp / (tp + fn)
rec_S.append(recall_score(testby, yhat_classes))
#print('Recall: %f' % recall_score(testby, yhat_classes))
# f1: 2 tp / (2 tp + fp + fn)
f1_S.append(f1_score(testby, yhat_classes))
#print('F1 score: %f' % f1_score(testby, yhat_classes))
# kappa
kap_S.append(cohen_kappa_score(testby, yhat_classes))
#print('Cohens kappa: %f' % cohen_kappa_score(testby, yhat_classes))
# confusion matrix
mat_S.append(confusion_matrix(testby, yhat_classes))
#print(confusion_matrix(testby, yhat_classes))
with open('perform'+NBname+'.txt', "w") as f:
f.writelines("AUC \t Accuracy \t Precision \t Recall \t F1 \t Kappa\n")
f.writelines(map("{}\t{}\t{}\t{}\t{}\t{}\n".format, auc_x, acc_S, pre_S, rec_S, f1_S, kap_S))
for x in range(len(fpr_x)):
f.writelines(map("{}\n".format, mat_S[x]))
f.writelines(map("{}\t{}\t{}\n".format, fpr_x[x], tpr_x[x], thresholds_x[x]))
# ==========================================================================
# # THIS IS THE UNBIASED testb; DO NOT UNCOMMENT UNTIL THE END
# ==========================================================================
plot_auc(auc_x,fpr_x,tpr_x,NBname)
###Output
_____no_output_____
###Markdown
to see which samples were correctly classified ...
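A compact way to list the misclassified samples directly is to compare the predicted and true label vectors (a minimal sketch, assuming `yhat_classes` and `testby` are the 1-D label arrays computed in the balanced-testing cell above):
```python
import numpy as np

# Indices where the prediction disagrees with the true label
misclassified_idx = np.where(yhat_classes != testby)[0]
print('Number of misclassified samples:', len(misclassified_idx))
print('Indices:', misclassified_idx)
```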
###Code
yhat_probs[yhat_probs[:,1]>=0.5,1]
yhat_probs[:,1]>=0.5
yhat_classes
testby
###Output
_____no_output_____
###Markdown
IMBALANCED TESTING
###Code
NBname='_F-elNetim'
# xim_better_test=testim.reshape(testim.shape[0],testim.shape[1])
# yim_better_test=tlabelsim.reshape(tlabelsim.shape[0],tlabelsim.shape[1])
# yim_better_test=yim_better_test[:,1]
y_pred = LR.predict_proba(xim_better_test)#.ravel()
fpr_0, tpr_0, thresholds_0 = roc_curve(yim_better_test, y_pred[:,1])
fpr_x.append(fpr_0)
tpr_x.append(tpr_0)
thresholds_x.append(thresholds_0)
auc_x.append(auc(fpr_0, tpr_0))
testim=xim_better_test
# predict probabilities for testim set
yhat_probs = LR.predict_proba(testim)
# predict crisp classes for testim set
yhat_classes = LR.predict(testim)
# reduce to 1d array
testimy=tlabelsim[:,1]
#testimy1=tlabels[:,1]
#yhat_probs = yhat_probs[:, 0]
#yhat_classes = yhat_classes[:, 0]
# accuracy: (tp + tn) / (p + n)
acc_S.append(accuracy_score(testimy, yhat_classes))
#print('Accuracy: %f' % accuracy_score(testimy, yhat_classes))
#precision tp / (tp + fp)
pre_S.append(precision_score(testimy, yhat_classes))
#print('Precision: %f' % precision_score(testimy, yhat_classes))
#recall: tp / (tp + fn)
rec_S.append(recall_score(testimy, yhat_classes))
#print('Recall: %f' % recall_score(testimy, yhat_classes))
# f1: 2 tp / (2 tp + fp + fn)
f1_S.append(f1_score(testimy, yhat_classes))
#print('F1 score: %f' % f1_score(testimy, yhat_classes))
# kappa
kap_S.append(cohen_kappa_score(testimy, yhat_classes))
#print('Cohens kappa: %f' % cohen_kappa_score(testimy, yhat_classes))
# confusion matrix
mat_S.append(confusion_matrix(testimy, yhat_classes))
#print(confusion_matrix(testimy, yhat_classes))
with open('perform'+NBname+'.txt', "w") as f:
f.writelines("##THE TWO LINES ARE FOR BALANCED AND IMBALALANCED TEST\n")
f.writelines("#AUC \t Accuracy \t Precision \t Recall \t F1 \t Kappa\n")
f.writelines(map("{}\t{}\t{}\t{}\t{}\t{}\n".format, auc_x, acc_S, pre_S, rec_S, f1_S, kap_S))
f.writelines("#TRUE_SENSITIVE \t TRUE_RESISTANT\n")
for x in range(len(fpr_x)):
f.writelines(map("{}\n".format, mat_S[x]))
#f.writelines(map("{}\t{}\t{}\n".format, fpr_x[x], tpr_x[x], thresholds_x[x]))
f.writelines("#FPR \t TPR \t THRESHOLDs\n")
for x in range(len(fpr_x)):
#f.writelines(map("{}\n".format, mat_S[x]))
f.writelines(map("{}\t{}\t{}\n".format, fpr_x[x], tpr_x[x], thresholds_x[x]))
f.writelines("#NEXT\n")
# ==========================================================================
# # THIS IS THE UNBIASED testim; DO NOT UNCOMMENT UNTIL THE END
# ==========================================================================
plot_auc(auc_x,fpr_x,tpr_x,NBname)
###Output
_____no_output_____
###Markdown
to see which samples were correctly classified ...
###Code
yhat_probs[yhat_probs[:,1]>=0.5,1]
yhat_probs[:,1]>=0.5
yhat_classes
testimy
###Output
_____no_output_____
###Markdown
MISCELLANEOUS
###Code
mat_S
auc_x
###Output
_____no_output_____
###Markdown
END OF TESTING
###Code
print(str(datetime.datetime.now()))
###Output
2019-10-03 15:09:52.177842
|
code/project16_neuralnet_tissuetype_ccle.ipynb | ###Markdown
Project objectiveIn this project, we build a neural network model for predicting tissue of origin of cancer cell lines using their gene expression provided in the Cancer Cell Line Encyclopedia dataset. Although by definition the built model is a deep learning model, I avoid the deep learning terminology for now.Information about the dataset, some technical details about the used machine learning method(s) and mathematical details of the quantification approaches are provided in the code. Packages we work with in this notebookWe are going to use the following libraries and packages:* **numpy**: NumPy is the fundamental package for scientific computing with Python. (http://www.numpy.org/)* **sklearn**: Scikit-learn is a machine learning library for the Python programming language. (https://scikit-learn.org/stable/)* **pandas**: Pandas provides easy-to-use data structures and data analysis tools for Python. (https://pandas.pydata.org/)* **keras**: Keras is a widely-used neural network framework in Python.
###Code
import numpy as np
import pandas as pd
import keras
###Output
_____no_output_____
###Markdown
Introduction to the dataset**Name**: Cancer Cell Line Encyclopedia dataset**Summary**: Identifying tissue of origin of cancer cell lines using their gene expression. Cell lines from 6 tissues were chosen for this code including: breast, central_nervous_system, haematopoietic_and_lymphoid_tissue, large_intestine, lung, skin**number of features**: 500 (real, positive) The top 500 genes based on variance of their expression in the dataset are chosen. The right way to select the features is to do it only on the training set to eliminate information leak from the test set. But to simplify the process for the sake of this teaching code, we use the whole dataset.**Number of data points (instances)**: 550**dataset accessibility**: Dataset is available as part of the PharmacoGx R package (https://www.bioconductor.org/packages/release/bioc/html/PharmacoGx.html)**Link to the dataset**: https://portals.broadinstitute.org/ccle Importing the datasetWe can import the dataset in multiple ways**Colab Notebook**: You can download the dataset file (or files) from the link (if provided), upload it to your Google Drive and then import the file (or files) as follows:**Note.** When you run the following cell, it tries to connect Colab with Google Drive. Follow steps 1 to 5 in this link (https://www.marktechpost.com/2019/06/07/how-to-connect-google-colab-with-google-drive/) to complete the connection.
###Code
from google.colab import drive
drive.mount('/content/gdrive')
# This path is common for everybody
# This is the path to your google drive
input_path = '/content/gdrive/My Drive/'
# reading the data (target)
target_dataset_features = pd.read_csv(input_path + 'CCLE_ExpMat_Top500Genes.csv', index_col=0)
target_dataset_output = pd.read_csv(input_path + 'CCLE_ExpMat_Phenotype.csv', index_col=0)
# Transposing the dataframe to put features in the dataframe columns
target_dataset_features = target_dataset_features.transpose()
###Output
Drive already mounted at /content/gdrive; to attempt to forcibly remount, call drive.mount("/content/gdrive", force_remount=True).
###Markdown
**Local directory**: In case you save the data in your local directory, you need to change "input_path" to the local directory you saved the file (or files) in.**GitHub**: If you use my GitHub (or your own GitHub) repo, you need to change the "input_path" to where the file (or files) exist in the repo. For example, when I clone ***ml_in_practice*** from my GitHub, I need to change "input_path" to 'data/' as the file (or files) are saved in the data directory in this repository. **Note.**: You can also clone my ***ml_in_practice*** repository (here: https://github.com/alimadani/ml_in_practice) and follow the same process. Making sure about the dataset characteristics (number of data points and features)
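As a small illustration, the path switch can be automated (a minimal sketch; the `data/` folder name is the one used in the *ml_in_practice* repository, and the Google Drive path defined above is kept otherwise):
```python
import os

# Use the repository's data/ folder when it exists (local clone / GitHub),
# otherwise keep the mounted Google Drive path defined above.
if os.path.isdir('data/'):
    input_path = 'data/'
```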
###Code
print('number of features: {}'.format(target_dataset_features.shape[1]))
print('number of data points: {}'.format(target_dataset_features.shape[0]))
###Output
number of features: 500
number of data points: 550
###Markdown
Data preparationWe need to prepare the dataset for machine learning modeling. Here we prepare the data in 3 steps:1) Selecting target columns from the output dataframe (target_dataset_output)2) Converting tissue names to integers (one for each tissue)3) Converting the integer array of labels to one-hot encodings to be used in neural network modeling
###Code
# tissueid is the column that contains tissue type information
output_var_names = target_dataset_output['tissueid']
# converting tissue names to labels
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(output_var_names)
output_var = le.transform(output_var_names)
class_number = len(np.unique(output_var))
# transforming the output array to an array of one hot vectors
# it means that we have a vector for each datapoint
# with length equal to the number of classes
# Depending on the class of each datapoint,
# one of the values (for that class) will be one
# and the rest of them will be zero for each data point
# .reshape(-1,1) has to be used to transform a 1d array of class
# numbers to a 2d array ready to be encoded by OneHotEncoder
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder()
output_var = ohe.fit_transform(output_var.reshape(-1,1)).toarray()
# we would like to use all the features as input features of the model
input_features = target_dataset_features
###Output
_____no_output_____
###Markdown
Splitting data to training and testing setsWe need to split the data to train and test, if we do not have a separate dataset for validation and/or testing, to make sure about generalizability of the model we train.**test_size**: Traditionally, 30%-40% of the dataset can be used for the test set. If you split the data to train, validation and test, you can use 60%, 20% and 20% of the dataset, respectively.**Note.**: We need the validation and test sets to be big enough for checking generalizability of our model. At the same time we would like to have as much data as possible in the training set to train a better model.**random_state** as the name suggests, is used for initializing the internal random number generator, which will decide the splitting of data into train and test indices in your case.
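For reference, the 60%/20%/20% train/validation/test split mentioned above can be obtained with two consecutive calls to `train_test_split` (a minimal sketch; the variable names `X_val`, `y_val`, etc. are illustrative, and the rest of the notebook keeps the simple train/test split):
```python
from sklearn.model_selection import train_test_split

# Hold out 40% first, then split that portion in half:
# 60% train, 20% validation, 20% test overall.
X_tr, X_tmp, y_tr, y_tmp = train_test_split(input_features, output_var, test_size=0.40, random_state=5)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=5)
```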
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(input_features, output_var, test_size=0.30, random_state=5)
###Output
_____no_output_____
###Markdown
Building the supervised learning modelWe want to build a multi-class classification model as the output variable includes multiple classes. Here we build a neural network model with 2 hidden layers. A neural network with 2 or more hidden layers is called a deep neural network, so technically this is a deep learning model. As you can see, the implementation of a deep learning model is not difficult. But knowing how to interpret it, how to fine-tune the model and how to avoid overfitting are the parts that need experience and more knowledge. Fully connected neural network
###Code
from keras.models import Sequential
from keras.layers import Dense
# building a neural network model
model = Sequential()
# adding 1st hidden layer with 128 neurons and relu as its activation function
# input_dim should be specified as the number of input features
model.add(Dense(128, input_dim=target_dataset_features.shape[1], activation='relu'))
# adding 2nd hidden layer with 64 neurons and relu as its activation function
model.add(Dense(64, activation='relu'))
# adding the output layer (softmax is used to generate probabilities for each predicted class)
# Size of the last layer should be equal to the total number of classes in the dataset
model.add(Dense(class_number, activation='softmax'))
# compiling the model using cross-entropy for categorical variables,
# as we are dealing with multi-class classification
# Adam optimization algorithm is also used
# Accuracy is used as the metric to assess performance of our model
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Now we fit our neural network model using the training set:
###Code
# Train the model using the training set
model.fit(X_train, y_train, epochs=300, batch_size=64)
###Output
Epoch 1/300
385/385 [==============================] - 0s 208us/step - loss: 2.6319 - accuracy: 0.4701
Epoch 2/300
385/385 [==============================] - 0s 59us/step - loss: 1.2702 - accuracy: 0.6052
Epoch 3/300
385/385 [==============================] - 0s 57us/step - loss: 0.8191 - accuracy: 0.7506
Epoch 4/300
385/385 [==============================] - 0s 46us/step - loss: 0.6767 - accuracy: 0.8000
Epoch 5/300
385/385 [==============================] - 0s 53us/step - loss: 0.7844 - accuracy: 0.7662
Epoch 6/300
385/385 [==============================] - 0s 47us/step - loss: 0.6166 - accuracy: 0.8286
Epoch 7/300
385/385 [==============================] - 0s 60us/step - loss: 0.4902 - accuracy: 0.8468
Epoch 8/300
385/385 [==============================] - 0s 45us/step - loss: 0.7123 - accuracy: 0.7818
Epoch 9/300
385/385 [==============================] - 0s 45us/step - loss: 0.4307 - accuracy: 0.8909
Epoch 10/300
385/385 [==============================] - 0s 52us/step - loss: 0.6444 - accuracy: 0.8000
Epoch 11/300
385/385 [==============================] - 0s 43us/step - loss: 0.3936 - accuracy: 0.8649
Epoch 12/300
385/385 [==============================] - 0s 46us/step - loss: 0.3655 - accuracy: 0.9039
Epoch 13/300
385/385 [==============================] - 0s 44us/step - loss: 0.3022 - accuracy: 0.9117
Epoch 14/300
385/385 [==============================] - 0s 52us/step - loss: 0.2723 - accuracy: 0.9117
Epoch 15/300
385/385 [==============================] - 0s 48us/step - loss: 0.4490 - accuracy: 0.8494
Epoch 16/300
385/385 [==============================] - 0s 53us/step - loss: 0.4986 - accuracy: 0.8519
Epoch 17/300
385/385 [==============================] - 0s 49us/step - loss: 0.5167 - accuracy: 0.8338
Epoch 18/300
385/385 [==============================] - 0s 44us/step - loss: 0.3224 - accuracy: 0.9065
Epoch 19/300
385/385 [==============================] - 0s 48us/step - loss: 0.2689 - accuracy: 0.9221
Epoch 20/300
385/385 [==============================] - 0s 50us/step - loss: 0.5095 - accuracy: 0.8208
Epoch 21/300
385/385 [==============================] - 0s 52us/step - loss: 0.3976 - accuracy: 0.8935
Epoch 22/300
385/385 [==============================] - 0s 47us/step - loss: 0.3291 - accuracy: 0.8961
Epoch 23/300
385/385 [==============================] - 0s 47us/step - loss: 0.2629 - accuracy: 0.9325
Epoch 24/300
385/385 [==============================] - 0s 49us/step - loss: 0.2106 - accuracy: 0.9351
Epoch 25/300
385/385 [==============================] - 0s 54us/step - loss: 0.1937 - accuracy: 0.9403
Epoch 26/300
385/385 [==============================] - 0s 52us/step - loss: 0.2713 - accuracy: 0.9091
Epoch 27/300
385/385 [==============================] - 0s 51us/step - loss: 0.2443 - accuracy: 0.9195
Epoch 28/300
385/385 [==============================] - 0s 59us/step - loss: 0.2025 - accuracy: 0.9273
Epoch 29/300
385/385 [==============================] - 0s 48us/step - loss: 0.1671 - accuracy: 0.9403
Epoch 30/300
385/385 [==============================] - 0s 45us/step - loss: 0.1765 - accuracy: 0.9299
Epoch 31/300
385/385 [==============================] - 0s 48us/step - loss: 0.3743 - accuracy: 0.8623
Epoch 32/300
385/385 [==============================] - 0s 48us/step - loss: 0.2830 - accuracy: 0.9065
Epoch 33/300
385/385 [==============================] - 0s 49us/step - loss: 0.2108 - accuracy: 0.9299
Epoch 34/300
385/385 [==============================] - 0s 49us/step - loss: 0.1811 - accuracy: 0.9506
Epoch 35/300
385/385 [==============================] - 0s 51us/step - loss: 0.1490 - accuracy: 0.9532
Epoch 36/300
385/385 [==============================] - 0s 50us/step - loss: 0.1344 - accuracy: 0.9532
Epoch 37/300
385/385 [==============================] - 0s 51us/step - loss: 0.1287 - accuracy: 0.9584
Epoch 38/300
385/385 [==============================] - 0s 49us/step - loss: 0.1189 - accuracy: 0.9584
Epoch 39/300
385/385 [==============================] - 0s 54us/step - loss: 0.1062 - accuracy: 0.9662
Epoch 40/300
385/385 [==============================] - 0s 63us/step - loss: 0.9572 - accuracy: 0.7974
Epoch 41/300
385/385 [==============================] - 0s 50us/step - loss: 1.0079 - accuracy: 0.7532
Epoch 42/300
385/385 [==============================] - 0s 49us/step - loss: 0.6010 - accuracy: 0.8234
Epoch 43/300
385/385 [==============================] - 0s 48us/step - loss: 0.3755 - accuracy: 0.8961
Epoch 44/300
385/385 [==============================] - 0s 46us/step - loss: 0.2757 - accuracy: 0.9169
Epoch 45/300
385/385 [==============================] - 0s 48us/step - loss: 0.1862 - accuracy: 0.9429
Epoch 46/300
385/385 [==============================] - 0s 48us/step - loss: 0.2772 - accuracy: 0.9091
Epoch 47/300
385/385 [==============================] - 0s 45us/step - loss: 0.2080 - accuracy: 0.9299
Epoch 48/300
385/385 [==============================] - 0s 46us/step - loss: 0.1726 - accuracy: 0.9429
Epoch 49/300
385/385 [==============================] - 0s 48us/step - loss: 0.6782 - accuracy: 0.8000
Epoch 50/300
385/385 [==============================] - 0s 46us/step - loss: 0.5196 - accuracy: 0.8597
Epoch 51/300
385/385 [==============================] - 0s 51us/step - loss: 0.5655 - accuracy: 0.8312
Epoch 52/300
385/385 [==============================] - 0s 67us/step - loss: 0.4006 - accuracy: 0.8883
Epoch 53/300
385/385 [==============================] - 0s 43us/step - loss: 0.8945 - accuracy: 0.8234
Epoch 54/300
385/385 [==============================] - 0s 47us/step - loss: 0.2756 - accuracy: 0.9039
Epoch 55/300
385/385 [==============================] - 0s 47us/step - loss: 0.2739 - accuracy: 0.9091
Epoch 56/300
385/385 [==============================] - 0s 47us/step - loss: 0.2442 - accuracy: 0.9117
Epoch 57/300
385/385 [==============================] - 0s 48us/step - loss: 0.1434 - accuracy: 0.9558
Epoch 58/300
385/385 [==============================] - 0s 45us/step - loss: 0.1184 - accuracy: 0.9740
Epoch 59/300
385/385 [==============================] - 0s 42us/step - loss: 0.0931 - accuracy: 0.9662
Epoch 60/300
385/385 [==============================] - 0s 44us/step - loss: 0.0905 - accuracy: 0.9636
Epoch 61/300
385/385 [==============================] - 0s 47us/step - loss: 0.0797 - accuracy: 0.9714
Epoch 62/300
385/385 [==============================] - 0s 47us/step - loss: 0.0735 - accuracy: 0.9792
Epoch 63/300
385/385 [==============================] - 0s 42us/step - loss: 0.0676 - accuracy: 0.9766
Epoch 64/300
385/385 [==============================] - 0s 43us/step - loss: 0.0680 - accuracy: 0.9792
Epoch 65/300
385/385 [==============================] - 0s 49us/step - loss: 0.0633 - accuracy: 0.9818
Epoch 66/300
385/385 [==============================] - 0s 46us/step - loss: 0.5854 - accuracy: 0.8571
Epoch 67/300
385/385 [==============================] - 0s 49us/step - loss: 0.5032 - accuracy: 0.8831
Epoch 68/300
385/385 [==============================] - 0s 46us/step - loss: 0.2613 - accuracy: 0.9169
Epoch 69/300
385/385 [==============================] - 0s 45us/step - loss: 0.2082 - accuracy: 0.9403
Epoch 70/300
385/385 [==============================] - 0s 50us/step - loss: 0.3186 - accuracy: 0.8961
Epoch 71/300
385/385 [==============================] - 0s 43us/step - loss: 0.2948 - accuracy: 0.9039
Epoch 72/300
385/385 [==============================] - 0s 47us/step - loss: 0.1793 - accuracy: 0.9377
Epoch 73/300
385/385 [==============================] - 0s 47us/step - loss: 0.1363 - accuracy: 0.9429
Epoch 74/300
385/385 [==============================] - 0s 46us/step - loss: 0.1268 - accuracy: 0.9584
Epoch 75/300
385/385 [==============================] - 0s 59us/step - loss: 0.3259 - accuracy: 0.8987
Epoch 76/300
385/385 [==============================] - 0s 54us/step - loss: 0.2061 - accuracy: 0.9455
Epoch 77/300
385/385 [==============================] - 0s 55us/step - loss: 0.1537 - accuracy: 0.9455
Epoch 78/300
385/385 [==============================] - 0s 49us/step - loss: 0.0909 - accuracy: 0.9688
Epoch 79/300
385/385 [==============================] - 0s 45us/step - loss: 0.0919 - accuracy: 0.9740
Epoch 80/300
385/385 [==============================] - 0s 47us/step - loss: 0.0736 - accuracy: 0.9766
Epoch 81/300
385/385 [==============================] - 0s 46us/step - loss: 0.0685 - accuracy: 0.9766
Epoch 82/300
385/385 [==============================] - 0s 46us/step - loss: 0.1531 - accuracy: 0.9481
Epoch 83/300
385/385 [==============================] - 0s 49us/step - loss: 0.1722 - accuracy: 0.9403
Epoch 84/300
385/385 [==============================] - 0s 53us/step - loss: 0.0942 - accuracy: 0.9714
Epoch 85/300
385/385 [==============================] - 0s 51us/step - loss: 0.0770 - accuracy: 0.9714
Epoch 86/300
385/385 [==============================] - 0s 47us/step - loss: 0.0642 - accuracy: 0.9818
Epoch 87/300
385/385 [==============================] - 0s 46us/step - loss: 0.0586 - accuracy: 0.9818
Epoch 88/300
385/385 [==============================] - 0s 48us/step - loss: 0.0549 - accuracy: 0.9844
Epoch 89/300
385/385 [==============================] - 0s 53us/step - loss: 0.0482 - accuracy: 0.9922
Epoch 90/300
385/385 [==============================] - 0s 49us/step - loss: 0.0465 - accuracy: 0.9870
Epoch 91/300
385/385 [==============================] - 0s 50us/step - loss: 0.0424 - accuracy: 0.9896
Epoch 92/300
385/385 [==============================] - 0s 49us/step - loss: 0.0426 - accuracy: 0.9922
Epoch 93/300
385/385 [==============================] - 0s 47us/step - loss: 0.0505 - accuracy: 0.9818
Epoch 94/300
385/385 [==============================] - 0s 49us/step - loss: 0.0422 - accuracy: 0.9844
Epoch 95/300
385/385 [==============================] - 0s 63us/step - loss: 0.0423 - accuracy: 0.9922
Epoch 96/300
385/385 [==============================] - 0s 52us/step - loss: 0.0379 - accuracy: 0.9896
Epoch 97/300
385/385 [==============================] - 0s 50us/step - loss: 0.5501 - accuracy: 0.8442
Epoch 98/300
385/385 [==============================] - 0s 44us/step - loss: 0.1564 - accuracy: 0.9481
Epoch 99/300
385/385 [==============================] - 0s 45us/step - loss: 0.1610 - accuracy: 0.9558
Epoch 100/300
385/385 [==============================] - 0s 47us/step - loss: 0.1057 - accuracy: 0.9610
Epoch 101/300
385/385 [==============================] - 0s 52us/step - loss: 0.0838 - accuracy: 0.9662
Epoch 102/300
385/385 [==============================] - 0s 49us/step - loss: 0.6978 - accuracy: 0.8416
Epoch 103/300
385/385 [==============================] - 0s 58us/step - loss: 0.4687 - accuracy: 0.8857
Epoch 104/300
385/385 [==============================] - 0s 47us/step - loss: 0.1270 - accuracy: 0.9403
Epoch 105/300
385/385 [==============================] - 0s 45us/step - loss: 0.1206 - accuracy: 0.9532
Epoch 106/300
385/385 [==============================] - 0s 44us/step - loss: 0.0665 - accuracy: 0.9922
Epoch 107/300
385/385 [==============================] - 0s 50us/step - loss: 0.0544 - accuracy: 0.9844
Epoch 108/300
385/385 [==============================] - 0s 60us/step - loss: 0.0502 - accuracy: 0.9792
Epoch 109/300
385/385 [==============================] - 0s 60us/step - loss: 0.0418 - accuracy: 0.9948
Epoch 110/300
385/385 [==============================] - 0s 46us/step - loss: 0.0389 - accuracy: 0.9948
Epoch 111/300
385/385 [==============================] - 0s 47us/step - loss: 0.0399 - accuracy: 0.9896
Epoch 112/300
385/385 [==============================] - 0s 48us/step - loss: 0.0345 - accuracy: 0.9948
Epoch 113/300
385/385 [==============================] - 0s 47us/step - loss: 0.0318 - accuracy: 0.9948
Epoch 114/300
385/385 [==============================] - 0s 54us/step - loss: 0.0315 - accuracy: 0.9948
Epoch 115/300
385/385 [==============================] - 0s 47us/step - loss: 0.0297 - accuracy: 0.9948
Epoch 116/300
385/385 [==============================] - 0s 47us/step - loss: 0.0550 - accuracy: 0.9766
Epoch 117/300
385/385 [==============================] - 0s 47us/step - loss: 0.0552 - accuracy: 0.9740
Epoch 118/300
385/385 [==============================] - 0s 48us/step - loss: 0.0313 - accuracy: 0.9948
Epoch 119/300
385/385 [==============================] - 0s 52us/step - loss: 0.0308 - accuracy: 0.9974
Epoch 120/300
385/385 [==============================] - 0s 52us/step - loss: 0.0254 - accuracy: 0.9922
Epoch 121/300
385/385 [==============================] - 0s 52us/step - loss: 0.0241 - accuracy: 0.9948
Epoch 122/300
385/385 [==============================] - 0s 45us/step - loss: 0.0216 - accuracy: 0.9948
Epoch 123/300
385/385 [==============================] - 0s 52us/step - loss: 0.1006 - accuracy: 0.9584
Epoch 124/300
385/385 [==============================] - 0s 51us/step - loss: 0.0509 - accuracy: 0.9896
Epoch 125/300
385/385 [==============================] - 0s 50us/step - loss: 0.0260 - accuracy: 0.9922
Epoch 126/300
385/385 [==============================] - 0s 45us/step - loss: 0.0296 - accuracy: 0.9922
Epoch 127/300
385/385 [==============================] - 0s 48us/step - loss: 0.0220 - accuracy: 0.9974
Epoch 128/300
385/385 [==============================] - 0s 58us/step - loss: 0.0693 - accuracy: 0.9766
Epoch 129/300
385/385 [==============================] - 0s 54us/step - loss: 0.0616 - accuracy: 0.9818
Epoch 130/300
385/385 [==============================] - 0s 46us/step - loss: 0.0313 - accuracy: 0.9948
Epoch 131/300
385/385 [==============================] - 0s 51us/step - loss: 0.0342 - accuracy: 0.9922
Epoch 132/300
385/385 [==============================] - 0s 43us/step - loss: 0.0231 - accuracy: 0.9922
Epoch 133/300
385/385 [==============================] - 0s 46us/step - loss: 0.0178 - accuracy: 1.0000
Epoch 134/300
385/385 [==============================] - 0s 46us/step - loss: 0.0145 - accuracy: 0.9974
Epoch 135/300
385/385 [==============================] - 0s 52us/step - loss: 0.0138 - accuracy: 0.9974
Epoch 136/300
385/385 [==============================] - 0s 54us/step - loss: 0.0124 - accuracy: 1.0000
Epoch 137/300
385/385 [==============================] - 0s 55us/step - loss: 0.0118 - accuracy: 1.0000
Epoch 138/300
385/385 [==============================] - 0s 48us/step - loss: 0.0113 - accuracy: 0.9974
Epoch 139/300
385/385 [==============================] - 0s 47us/step - loss: 0.0111 - accuracy: 0.9974
Epoch 140/300
385/385 [==============================] - 0s 45us/step - loss: 0.0104 - accuracy: 0.9974
Epoch 141/300
385/385 [==============================] - 0s 48us/step - loss: 0.0099 - accuracy: 1.0000
Epoch 142/300
385/385 [==============================] - 0s 49us/step - loss: 0.0100 - accuracy: 1.0000
Epoch 143/300
385/385 [==============================] - 0s 51us/step - loss: 0.0101 - accuracy: 0.9974
Epoch 144/300
385/385 [==============================] - 0s 51us/step - loss: 0.0111 - accuracy: 0.9974
Epoch 145/300
385/385 [==============================] - 0s 51us/step - loss: 0.0104 - accuracy: 0.9974
Epoch 146/300
385/385 [==============================] - 0s 49us/step - loss: 0.0098 - accuracy: 1.0000
Epoch 147/300
385/385 [==============================] - 0s 47us/step - loss: 0.0098 - accuracy: 1.0000
Epoch 148/300
385/385 [==============================] - 0s 52us/step - loss: 0.0089 - accuracy: 1.0000
Epoch 149/300
385/385 [==============================] - 0s 57us/step - loss: 0.0081 - accuracy: 1.0000
Epoch 150/300
385/385 [==============================] - 0s 45us/step - loss: 0.0077 - accuracy: 1.0000
Epoch 151/300
385/385 [==============================] - 0s 46us/step - loss: 0.0081 - accuracy: 1.0000
Epoch 152/300
385/385 [==============================] - 0s 45us/step - loss: 0.0083 - accuracy: 1.0000
Epoch 153/300
385/385 [==============================] - 0s 67us/step - loss: 0.0074 - accuracy: 1.0000
Epoch 154/300
385/385 [==============================] - 0s 49us/step - loss: 0.0072 - accuracy: 1.0000
Epoch 155/300
385/385 [==============================] - 0s 45us/step - loss: 0.0070 - accuracy: 1.0000
Epoch 156/300
385/385 [==============================] - 0s 58us/step - loss: 0.0068 - accuracy: 1.0000
Epoch 157/300
385/385 [==============================] - 0s 44us/step - loss: 0.0065 - accuracy: 1.0000
Epoch 158/300
385/385 [==============================] - 0s 46us/step - loss: 0.0067 - accuracy: 1.0000
Epoch 159/300
385/385 [==============================] - 0s 45us/step - loss: 0.1089 - accuracy: 0.9766
Epoch 160/300
385/385 [==============================] - 0s 43us/step - loss: 0.0539 - accuracy: 0.9896
Epoch 161/300
385/385 [==============================] - 0s 44us/step - loss: 0.0465 - accuracy: 0.9818
Epoch 162/300
385/385 [==============================] - 0s 50us/step - loss: 0.0180 - accuracy: 1.0000
Epoch 163/300
385/385 [==============================] - 0s 56us/step - loss: 0.3476 - accuracy: 0.9117
Epoch 164/300
385/385 [==============================] - 0s 50us/step - loss: 0.1378 - accuracy: 0.9506
Epoch 165/300
385/385 [==============================] - 0s 65us/step - loss: 0.0765 - accuracy: 0.9818
Epoch 166/300
385/385 [==============================] - 0s 73us/step - loss: 0.0595 - accuracy: 0.9766
Epoch 167/300
385/385 [==============================] - 0s 54us/step - loss: 0.0304 - accuracy: 0.9948
Epoch 168/300
385/385 [==============================] - 0s 56us/step - loss: 0.0240 - accuracy: 0.9896
Epoch 169/300
385/385 [==============================] - 0s 58us/step - loss: 0.0140 - accuracy: 1.0000
Epoch 170/300
385/385 [==============================] - 0s 44us/step - loss: 0.0125 - accuracy: 1.0000
Epoch 171/300
385/385 [==============================] - 0s 45us/step - loss: 0.0084 - accuracy: 1.0000
Epoch 172/300
385/385 [==============================] - 0s 53us/step - loss: 0.0080 - accuracy: 0.9974
Epoch 173/300
385/385 [==============================] - 0s 46us/step - loss: 0.0072 - accuracy: 1.0000
Epoch 174/300
385/385 [==============================] - 0s 48us/step - loss: 0.0068 - accuracy: 1.0000
Epoch 175/300
385/385 [==============================] - 0s 53us/step - loss: 0.0058 - accuracy: 1.0000
Epoch 176/300
385/385 [==============================] - 0s 48us/step - loss: 0.0057 - accuracy: 1.0000
Epoch 177/300
385/385 [==============================] - 0s 48us/step - loss: 0.0052 - accuracy: 1.0000
Epoch 178/300
385/385 [==============================] - 0s 46us/step - loss: 0.0049 - accuracy: 1.0000
Epoch 179/300
385/385 [==============================] - 0s 53us/step - loss: 0.0048 - accuracy: 1.0000
Epoch 180/300
385/385 [==============================] - 0s 50us/step - loss: 0.0046 - accuracy: 1.0000
Epoch 181/300
385/385 [==============================] - 0s 46us/step - loss: 0.0046 - accuracy: 1.0000
Epoch 182/300
385/385 [==============================] - 0s 47us/step - loss: 0.0046 - accuracy: 1.0000
Epoch 183/300
385/385 [==============================] - 0s 51us/step - loss: 0.1258 - accuracy: 0.9610
Epoch 184/300
385/385 [==============================] - 0s 46us/step - loss: 0.0629 - accuracy: 0.9818
Epoch 185/300
385/385 [==============================] - 0s 48us/step - loss: 0.0408 - accuracy: 0.9818
Epoch 186/300
385/385 [==============================] - 0s 55us/step - loss: 0.0231 - accuracy: 0.9948
Epoch 187/300
385/385 [==============================] - 0s 55us/step - loss: 0.0134 - accuracy: 0.9974
Epoch 188/300
385/385 [==============================] - 0s 45us/step - loss: 0.0176 - accuracy: 0.9922
Epoch 189/300
385/385 [==============================] - 0s 44us/step - loss: 0.0123 - accuracy: 1.0000
Epoch 190/300
385/385 [==============================] - 0s 51us/step - loss: 0.0073 - accuracy: 1.0000
Epoch 191/300
385/385 [==============================] - 0s 52us/step - loss: 0.0066 - accuracy: 0.9974
Epoch 192/300
385/385 [==============================] - 0s 53us/step - loss: 0.0060 - accuracy: 0.9974
Epoch 193/300
385/385 [==============================] - 0s 51us/step - loss: 0.0050 - accuracy: 1.0000
Epoch 194/300
385/385 [==============================] - 0s 49us/step - loss: 0.0054 - accuracy: 1.0000
Epoch 195/300
385/385 [==============================] - 0s 42us/step - loss: 0.0047 - accuracy: 1.0000
Epoch 196/300
385/385 [==============================] - 0s 52us/step - loss: 0.0046 - accuracy: 1.0000
Epoch 197/300
385/385 [==============================] - 0s 48us/step - loss: 0.0048 - accuracy: 1.0000
Epoch 198/300
385/385 [==============================] - 0s 45us/step - loss: 0.0044 - accuracy: 1.0000
Epoch 199/300
385/385 [==============================] - 0s 52us/step - loss: 0.0044 - accuracy: 1.0000
Epoch 200/300
385/385 [==============================] - 0s 53us/step - loss: 0.0041 - accuracy: 1.0000
Epoch 201/300
385/385 [==============================] - 0s 48us/step - loss: 0.0045 - accuracy: 1.0000
Epoch 202/300
385/385 [==============================] - 0s 58us/step - loss: 0.0039 - accuracy: 1.0000
Epoch 203/300
385/385 [==============================] - 0s 46us/step - loss: 0.0039 - accuracy: 1.0000
Epoch 204/300
385/385 [==============================] - 0s 45us/step - loss: 0.0037 - accuracy: 1.0000
Epoch 205/300
385/385 [==============================] - 0s 44us/step - loss: 0.0038 - accuracy: 1.0000
Epoch 206/300
385/385 [==============================] - 0s 50us/step - loss: 0.0036 - accuracy: 1.0000
Epoch 207/300
385/385 [==============================] - 0s 52us/step - loss: 0.0037 - accuracy: 1.0000
Epoch 208/300
385/385 [==============================] - 0s 50us/step - loss: 0.0038 - accuracy: 1.0000
Epoch 209/300
385/385 [==============================] - 0s 49us/step - loss: 0.0035 - accuracy: 1.0000
Epoch 210/300
385/385 [==============================] - 0s 51us/step - loss: 0.0036 - accuracy: 1.0000
Epoch 211/300
385/385 [==============================] - 0s 50us/step - loss: 0.0034 - accuracy: 1.0000
Epoch 212/300
385/385 [==============================] - 0s 45us/step - loss: 0.0034 - accuracy: 1.0000
Epoch 213/300
385/385 [==============================] - 0s 55us/step - loss: 0.0030 - accuracy: 1.0000
Epoch 214/300
385/385 [==============================] - 0s 52us/step - loss: 0.0034 - accuracy: 1.0000
Epoch 215/300
385/385 [==============================] - 0s 48us/step - loss: 0.0031 - accuracy: 1.0000
Epoch 216/300
385/385 [==============================] - 0s 46us/step - loss: 0.0029 - accuracy: 1.0000
Epoch 217/300
385/385 [==============================] - 0s 55us/step - loss: 0.0029 - accuracy: 1.0000
Epoch 218/300
385/385 [==============================] - 0s 45us/step - loss: 0.0029 - accuracy: 1.0000
Epoch 219/300
385/385 [==============================] - 0s 53us/step - loss: 0.0029 - accuracy: 1.0000
Epoch 220/300
385/385 [==============================] - 0s 51us/step - loss: 0.0028 - accuracy: 1.0000
Epoch 221/300
385/385 [==============================] - 0s 54us/step - loss: 0.0029 - accuracy: 1.0000
Epoch 222/300
385/385 [==============================] - 0s 45us/step - loss: 0.0030 - accuracy: 1.0000
Epoch 223/300
385/385 [==============================] - 0s 44us/step - loss: 0.0025 - accuracy: 1.0000
Epoch 224/300
385/385 [==============================] - 0s 47us/step - loss: 0.0025 - accuracy: 1.0000
Epoch 225/300
385/385 [==============================] - 0s 48us/step - loss: 0.0025 - accuracy: 1.0000
Epoch 226/300
385/385 [==============================] - 0s 54us/step - loss: 0.0022 - accuracy: 1.0000
Epoch 227/300
385/385 [==============================] - 0s 49us/step - loss: 0.0024 - accuracy: 1.0000
Epoch 228/300
385/385 [==============================] - 0s 50us/step - loss: 0.0026 - accuracy: 1.0000
Epoch 229/300
385/385 [==============================] - 0s 52us/step - loss: 0.0025 - accuracy: 1.0000
Epoch 230/300
385/385 [==============================] - 0s 54us/step - loss: 0.0022 - accuracy: 1.0000
Epoch 231/300
385/385 [==============================] - 0s 66us/step - loss: 0.0021 - accuracy: 1.0000
Epoch 232/300
385/385 [==============================] - 0s 44us/step - loss: 0.0020 - accuracy: 1.0000
Epoch 233/300
385/385 [==============================] - 0s 46us/step - loss: 0.0020 - accuracy: 1.0000
Epoch 234/300
385/385 [==============================] - 0s 47us/step - loss: 0.0020 - accuracy: 1.0000
Epoch 235/300
385/385 [==============================] - 0s 48us/step - loss: 0.0018 - accuracy: 1.0000
Epoch 236/300
385/385 [==============================] - 0s 53us/step - loss: 0.0019 - accuracy: 1.0000
Epoch 237/300
385/385 [==============================] - 0s 44us/step - loss: 0.0019 - accuracy: 1.0000
Epoch 238/300
385/385 [==============================] - 0s 47us/step - loss: 0.0018 - accuracy: 1.0000
Epoch 239/300
385/385 [==============================] - 0s 47us/step - loss: 0.0018 - accuracy: 1.0000
Epoch 240/300
385/385 [==============================] - 0s 51us/step - loss: 0.0019 - accuracy: 1.0000
Epoch 241/300
385/385 [==============================] - 0s 50us/step - loss: 0.0017 - accuracy: 1.0000
Epoch 242/300
385/385 [==============================] - 0s 52us/step - loss: 0.0017 - accuracy: 1.0000
Epoch 243/300
385/385 [==============================] - 0s 48us/step - loss: 0.0018 - accuracy: 1.0000
Epoch 244/300
385/385 [==============================] - 0s 49us/step - loss: 0.0016 - accuracy: 1.0000
Epoch 245/300
385/385 [==============================] - 0s 47us/step - loss: 0.0019 - accuracy: 1.0000
Epoch 246/300
385/385 [==============================] - 0s 50us/step - loss: 0.0020 - accuracy: 1.0000
Epoch 247/300
385/385 [==============================] - 0s 55us/step - loss: 0.0017 - accuracy: 1.0000
Epoch 248/300
385/385 [==============================] - 0s 45us/step - loss: 0.0015 - accuracy: 1.0000
Epoch 249/300
385/385 [==============================] - 0s 50us/step - loss: 0.0016 - accuracy: 1.0000
Epoch 250/300
385/385 [==============================] - 0s 53us/step - loss: 0.0016 - accuracy: 1.0000
Epoch 251/300
385/385 [==============================] - 0s 65us/step - loss: 0.0015 - accuracy: 1.0000
Epoch 252/300
385/385 [==============================] - 0s 49us/step - loss: 0.0014 - accuracy: 1.0000
Epoch 253/300
385/385 [==============================] - 0s 49us/step - loss: 0.0089 - accuracy: 0.9974
Epoch 254/300
385/385 [==============================] - 0s 49us/step - loss: 0.0034 - accuracy: 1.0000
Epoch 255/300
385/385 [==============================] - 0s 48us/step - loss: 0.0038 - accuracy: 1.0000
Epoch 256/300
385/385 [==============================] - 0s 52us/step - loss: 0.0014 - accuracy: 1.0000
Epoch 257/300
385/385 [==============================] - 0s 49us/step - loss: 0.0020 - accuracy: 1.0000
Epoch 258/300
385/385 [==============================] - 0s 51us/step - loss: 0.0016 - accuracy: 1.0000
Epoch 259/300
385/385 [==============================] - 0s 47us/step - loss: 0.0016 - accuracy: 1.0000
Epoch 260/300
385/385 [==============================] - 0s 52us/step - loss: 0.0013 - accuracy: 1.0000
Epoch 261/300
385/385 [==============================] - 0s 47us/step - loss: 0.0014 - accuracy: 1.0000
Epoch 262/300
385/385 [==============================] - 0s 43us/step - loss: 0.0013 - accuracy: 1.0000
Epoch 263/300
385/385 [==============================] - 0s 47us/step - loss: 0.0013 - accuracy: 1.0000
Epoch 264/300
385/385 [==============================] - 0s 46us/step - loss: 0.0012 - accuracy: 1.0000
Epoch 265/300
385/385 [==============================] - 0s 46us/step - loss: 0.0013 - accuracy: 1.0000
Epoch 266/300
385/385 [==============================] - 0s 46us/step - loss: 0.0013 - accuracy: 1.0000
Epoch 267/300
385/385 [==============================] - 0s 50us/step - loss: 0.0027 - accuracy: 1.0000
Epoch 268/300
385/385 [==============================] - 0s 43us/step - loss: 0.0012 - accuracy: 1.0000
Epoch 269/300
385/385 [==============================] - 0s 49us/step - loss: 0.0014 - accuracy: 1.0000
Epoch 270/300
385/385 [==============================] - 0s 54us/step - loss: 0.0014 - accuracy: 1.0000
Epoch 271/300
385/385 [==============================] - 0s 49us/step - loss: 0.0012 - accuracy: 1.0000
Epoch 272/300
385/385 [==============================] - 0s 51us/step - loss: 0.0011 - accuracy: 1.0000
Epoch 273/300
385/385 [==============================] - 0s 50us/step - loss: 0.0011 - accuracy: 1.0000
Epoch 274/300
385/385 [==============================] - 0s 47us/step - loss: 0.0011 - accuracy: 1.0000
Epoch 275/300
385/385 [==============================] - 0s 46us/step - loss: 0.0011 - accuracy: 1.0000
Epoch 276/300
385/385 [==============================] - 0s 49us/step - loss: 0.0010 - accuracy: 1.0000
Epoch 277/300
385/385 [==============================] - 0s 72us/step - loss: 0.0010 - accuracy: 1.0000
Epoch 278/300
385/385 [==============================] - 0s 54us/step - loss: 0.0010 - accuracy: 1.0000
Epoch 279/300
385/385 [==============================] - 0s 60us/step - loss: 0.0010 - accuracy: 1.0000
Epoch 280/300
385/385 [==============================] - 0s 53us/step - loss: 9.8464e-04 - accuracy: 1.0000
Epoch 281/300
385/385 [==============================] - 0s 54us/step - loss: 0.0010 - accuracy: 1.0000
Epoch 282/300
385/385 [==============================] - 0s 56us/step - loss: 9.7297e-04 - accuracy: 1.0000
Epoch 283/300
385/385 [==============================] - 0s 53us/step - loss: 9.7341e-04 - accuracy: 1.0000
Epoch 284/300
385/385 [==============================] - 0s 48us/step - loss: 9.9984e-04 - accuracy: 1.0000
Epoch 285/300
385/385 [==============================] - 0s 45us/step - loss: 0.0011 - accuracy: 1.0000
Epoch 286/300
385/385 [==============================] - 0s 56us/step - loss: 0.0014 - accuracy: 1.0000
Epoch 287/300
385/385 [==============================] - 0s 46us/step - loss: 0.0011 - accuracy: 1.0000
Epoch 288/300
385/385 [==============================] - 0s 81us/step - loss: 0.0012 - accuracy: 1.0000
Epoch 289/300
385/385 [==============================] - 0s 56us/step - loss: 9.6594e-04 - accuracy: 1.0000
Epoch 290/300
385/385 [==============================] - 0s 46us/step - loss: 9.2916e-04 - accuracy: 1.0000
Epoch 291/300
385/385 [==============================] - 0s 48us/step - loss: 9.5915e-04 - accuracy: 1.0000
Epoch 292/300
385/385 [==============================] - 0s 47us/step - loss: 9.1888e-04 - accuracy: 1.0000
Epoch 293/300
385/385 [==============================] - 0s 46us/step - loss: 8.9688e-04 - accuracy: 1.0000
Epoch 294/300
385/385 [==============================] - 0s 47us/step - loss: 9.1273e-04 - accuracy: 1.0000
Epoch 295/300
385/385 [==============================] - 0s 49us/step - loss: 8.7445e-04 - accuracy: 1.0000
Epoch 296/300
385/385 [==============================] - 0s 58us/step - loss: 8.6560e-04 - accuracy: 1.0000
Epoch 297/300
385/385 [==============================] - 0s 60us/step - loss: 8.5257e-04 - accuracy: 1.0000
Epoch 298/300
385/385 [==============================] - 0s 51us/step - loss: 8.4338e-04 - accuracy: 1.0000
Epoch 299/300
385/385 [==============================] - 0s 65us/step - loss: 8.2643e-04 - accuracy: 1.0000
Epoch 300/300
385/385 [==============================] - 0s 49us/step - loss: 8.5211e-04 - accuracy: 1.0000
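###Markdown
The training accuracy reaches 1.0 well before epoch 300, which hints at overfitting to the training set. One option (not used in this notebook) is to hold out part of the training data for validation and stop early; a minimal sketch, assuming the `model` compiled above and a reasonably recent Keras version:
```python
from keras.callbacks import EarlyStopping

# Stop once the validation loss has not improved for 20 consecutive epochs
early_stop = EarlyStopping(monitor='val_loss', patience=20, restore_best_weights=True)
model.fit(X_train, y_train, epochs=300, batch_size=64,
          validation_split=0.2, callbacks=[early_stop])
```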
###Markdown
The model is trained now and can be used to predict the labels of datapoints in the test set.Note. To be able to assess the performance of the predictions in the test set using the metrics class in sklearn, we need to transform the true labels and the predictions from one-hot encodings to lists.
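The same conversion can also be done in a single vectorized step with `np.argmax` over the class axis (a minimal sketch equivalent to the loops in the next cell):
```python
# argmax over the class dimension turns one-hot / probability rows into class indices
pred = np.argmax(model.predict(X_test), axis=1).tolist()
test = np.argmax(y_test, axis=1).tolist()
```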
###Code
y_pred = model.predict(X_test)
#Converting predictions to label
pred = list()
for i in range(len(y_pred)):
pred.append(np.argmax(y_pred[i]))
#Converting one hot encoded test label to label
test = list()
for i in range(len(y_test)):
test.append(np.argmax(y_test[i]))
###Output
_____no_output_____
###Markdown
Evaluating performance of the modelWe need to assess performance of the model using the predictions of the test set. We use accuracy and balanced accuracy. Here are their definitions:* **recall** in this context is also referred to as the true positive rate or sensitivityHow many relevant items are selected$${\displaystyle {\text{recall}}={\frac {tp}{tp+fn}}\,} $$ * **specificity** true negative rate$${\displaystyle {\text{true negative rate}}={\frac {tn}{tn+fp}}\,}$$* **accuracy**: This measure gives you a sense of performance for all the classes together as follows:$$ {\displaystyle {\text{accuracy}}={\frac {tp+tn}{tp+tn+fp+fn}}\,}$$\begin{equation*} accuracy=\frac{\text{number of correct predictions}}{\text{total number of data points (samples)}} \end{equation*}* **balanced accuracy**: This measure accounts for class imbalance by averaging recall and specificity:$${\displaystyle {\text{balanced accuracy}}={\frac {recall+specificity}{2}}\,}$$
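As a quick sanity check, the balanced accuracy reported by scikit-learn is the unweighted mean of the per-class recall values (a minimal sketch, assuming the `pred` and `test` lists built above):
```python
from sklearn import metrics

per_class_recall = metrics.recall_score(test, pred, average=None)  # one recall value per tissue
print(per_class_recall)
print(per_class_recall.mean())  # equals metrics.balanced_accuracy_score(test, pred)
```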
###Code
from sklearn import metrics
# sklearn metric functions expect (y_true, y_pred)
print('Accuracy of the neural network model is:', metrics.accuracy_score(test, pred)*100)
print("Balanced accuracy of the neural network model is:", metrics.balanced_accuracy_score(test, pred))
###Output
Accuracy of the neural network model is: 92.72727272727272
Balanced accuracy of the neural network model is: 0.9156156880615085
|
YouTube-Exaggerated-Bangla-Titles-Categorization(8 ML Models).ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
import unicodedata
df = pd.read_csv('/content/drive/MyDrive/youtube project/youtubeRD-csv.csv',error_bad_lines=False)
df
df.isnull().sum()
df['Type'].value_counts()
df['Type'].unique()
from sklearn.model_selection import train_test_split
X = df['Title']
y = df['Type']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
X_train_tfidf = vectorizer.fit_transform(X_train)
X_train_tfidf.shape
y_test.shape, X_test.shape,X_train.shape,y_train.shape
###Output
_____no_output_____
###Markdown
**LinearSVC**
###Code
from sklearn.svm import LinearSVC
clf = LinearSVC()
clf.fit(X_train_tfidf,y_train)
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf', LinearSVC()),
])
text_clf.fit(X_train, y_train)
predictions = text_clf.predict(X_test)
from sklearn import metrics
cf_matrix=metrics.confusion_matrix(y_test,predictions)
print(cf_matrix)
plt.figure(figsize=(10,6))
sns.heatmap(cf_matrix, annot=True )
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
###Output
[[121 46]
[ 45 148]]
precision recall f1-score support
অতিরঞ্জিত 0.73 0.72 0.73 167
সামঞ্জস্যপূর্ণ 0.76 0.77 0.76 193
accuracy 0.75 360
macro avg 0.75 0.75 0.75 360
weighted avg 0.75 0.75 0.75 360
0.7472222222222222
###Markdown
**KNN**
###Code
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier()
clf.fit(X_train_tfidf,y_train)
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf', KNeighborsClassifier()),
])
text_clf.fit(X_train, y_train)
predictions = text_clf.predict(X_test)
from sklearn import metrics
cf_matrix=metrics.confusion_matrix(y_test,predictions)
print(cf_matrix)
plt.figure(figsize=(10,6))
sns.heatmap(cf_matrix, annot=True )
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
###Output
[[ 34 133]
[ 10 183]]
precision recall f1-score support
অতিরঞ্জিত 0.77 0.20 0.32 167
সামঞ্জস্যপূর্ণ 0.58 0.95 0.72 193
accuracy 0.60 360
macro avg 0.68 0.58 0.52 360
weighted avg 0.67 0.60 0.53 360
0.6027777777777777
###Markdown
**SVC**
###Code
from sklearn.svm import SVC
clf = SVC()
clf.fit(X_train_tfidf,y_train)
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf', SVC()),
])
text_clf.fit(X_train, y_train)
predictions = text_clf.predict(X_test)
from sklearn import metrics
cf_matrix=metrics.confusion_matrix(y_test,predictions)
print(cf_matrix)
plt.figure(figsize=(10,6))
sns.heatmap(cf_matrix, annot=True )
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
###Output
[[126 41]
[ 30 163]]
precision recall f1-score support
অতিরঞ্জিত 0.81 0.75 0.78 167
সামঞ্জস্যপূর্ণ 0.80 0.84 0.82 193
accuracy 0.80 360
macro avg 0.80 0.80 0.80 360
weighted avg 0.80 0.80 0.80 360
0.8027777777777778
###Markdown
**Logistic Regression**
###Code
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X_train_tfidf,y_train)
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf', LogisticRegression()),
])
text_clf.fit(X_train, y_train)
predictions = text_clf.predict(X_test)
from sklearn import metrics
cf_matrix=metrics.confusion_matrix(y_test,predictions)
print(cf_matrix)
plt.figure(figsize=(10,6))
sns.heatmap(cf_matrix, annot=True )
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
###Output
[[128 39]
[ 39 154]]
precision recall f1-score support
অতিরঞ্জিত 0.77 0.77 0.77 167
সামঞ্জস্যপূর্ণ 0.80 0.80 0.80 193
accuracy 0.78 360
macro avg 0.78 0.78 0.78 360
weighted avg 0.78 0.78 0.78 360
0.7833333333333333
###Markdown
**Random Forest**
###Code
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
clf.fit(X_train_tfidf,y_train)
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf',RandomForestClassifier()),
])
text_clf.fit(X_train, y_train)
predictions = text_clf.predict(X_test)
from sklearn import metrics
cf_matrix=metrics.confusion_matrix(y_test,predictions)
print(cf_matrix)
plt.figure(figsize=(10,6))
sns.heatmap(cf_matrix, annot=True )
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
###Output
[[132 35]
[ 49 144]]
precision recall f1-score support
অতিরঞ্জিত 0.73 0.79 0.76 167
সামঞ্জস্যপূর্ণ 0.80 0.75 0.77 193
accuracy 0.77 360
macro avg 0.77 0.77 0.77 360
weighted avg 0.77 0.77 0.77 360
0.7666666666666667
###Markdown
**Decision Tree**
###Code
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
clf.fit(X_train_tfidf,y_train)
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf',DecisionTreeClassifier()),
])
text_clf.fit(X_train, y_train)
predictions = text_clf.predict(X_test)
from sklearn import metrics
cf_matrix=metrics.confusion_matrix(y_test,predictions)
print(cf_matrix)
plt.figure(figsize=(10,6))
sns.heatmap(cf_matrix, annot=True )
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
###Output
[[134 33]
[ 63 130]]
precision recall f1-score support
অতিরঞ্জিত 0.68 0.80 0.74 167
সামঞ্জস্যপূর্ণ 0.80 0.67 0.73 193
accuracy 0.73 360
macro avg 0.74 0.74 0.73 360
weighted avg 0.74 0.73 0.73 360
0.7333333333333333
###Markdown
**SGD Classifier**
###Code
from sklearn.linear_model import SGDClassifier
clf = SGDClassifier()
clf.fit(X_train_tfidf,y_train)
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf',SGDClassifier()),
])
text_clf.fit(X_train, y_train)
predictions = text_clf.predict(X_test)
from sklearn import metrics
cf_matrix=metrics.confusion_matrix(y_test,predictions)
print(cf_matrix)
plt.figure(figsize=(10,6))
sns.heatmap(cf_matrix, annot=True )
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
###Output
[[121 46]
[ 50 143]]
precision recall f1-score support
অতিরঞ্জিত 0.71 0.72 0.72 167
সামঞ্জস্যপূর্ণ 0.76 0.74 0.75 193
accuracy 0.73 360
macro avg 0.73 0.73 0.73 360
weighted avg 0.73 0.73 0.73 360
0.7333333333333333
###Markdown
**Naive Bayes**
###Code
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB()
clf.fit(X_train_tfidf,y_train)
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf',MultinomialNB()),
])
text_clf.fit(X_train, y_train)
predictions = text_clf.predict(X_test)
from sklearn import metrics
cf_matrix=metrics.confusion_matrix(y_test,predictions)
print(cf_matrix)
plt.figure(figsize=(10,6))
sns.heatmap(cf_matrix, annot=True )
print(metrics.classification_report(y_test,predictions))
print(metrics.accuracy_score(y_test,predictions))
###Output
[[132 35]
[ 53 140]]
precision recall f1-score support
অতিরঞ্জিত 0.71 0.79 0.75 167
সামঞ্জস্যপূর্ণ 0.80 0.73 0.76 193
accuracy 0.76 360
macro avg 0.76 0.76 0.76 360
weighted avg 0.76 0.76 0.76 360
0.7555555555555555
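###Markdown
Since every model above uses the same `TfidfVectorizer` + classifier pipeline, the whole comparison can be condensed into one loop (a minimal sketch that reuses the `X_train`/`X_test` split defined earlier; accuracies may differ slightly from run to run):
```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC, SVC
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

models = {'LinearSVC': LinearSVC(), 'KNN': KNeighborsClassifier(), 'SVC': SVC(),
          'LogisticRegression': LogisticRegression(), 'RandomForest': RandomForestClassifier(),
          'DecisionTree': DecisionTreeClassifier(), 'SGD': SGDClassifier(), 'NaiveBayes': MultinomialNB()}

for name, clf in models.items():
    # identical pipeline for every classifier: TF-IDF features + the estimator
    pipe = Pipeline([('tfidf', TfidfVectorizer()), ('clf', clf)])
    pipe.fit(X_train, y_train)
    print(name, metrics.accuracy_score(y_test, pipe.predict(X_test)))
```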
|
Notebooks/DataPreperation.ipynb | ###Markdown
CLTK
###Code
!pip install cltk
import os
from cltk.tokenize.word import WordTokenizer
from cltk.tokenize.line import LineTokenizer
line_tokenizer = LineTokenizer('akkadian')
with open('sumerian_pll.txt') as f:
lines = f.read()
lines = line_tokenizer.tokenize(lines)
word_tokenizer = WordTokenizer('akkadian')
for text in lines[70:100]:
print(f'Original: {text}, Tokenized: {word_tokenizer.tokenize(text)}')
###Output
_____no_output_____
###Markdown
BBPE
###Code
from tokenizers import ByteLevelBPETokenizer
tokeniser_ep = ByteLevelBPETokenizer()
tokeniser_ep.train(['english_pll.txt'], special_tokens= ['<sos>', '<eos>', '<pad>'])
tokeniser_sp = ByteLevelBPETokenizer()
tokeniser_sp.train(['sumerian_pll.txt'], special_tokens= ['<sos>', '<eos>', '<pad>'])
vocab_size_eng = tokeniser_ep.get_vocab_size()
vocab_size_sum = tokeniser_sp.get_vocab_size()
print(vocab_size_eng, vocab_size_sum)
tokeniser_sp.decode([vocab_size_sum-1])
tokeniser_ep.decode([vocab_size_eng-1])
vocab_sp = tokeniser_sp.get_vocab()
vocab_ep = tokeniser_ep.get_vocab()
print(len(list(vocab_sp.keys())))
print(len(list(vocab_ep.keys())))
tokeniser_sm = ByteLevelBPETokenizer()
tokeniser_sm.train(['sumerian_mono.txt'], special_tokens= ['<sos>', '<eos>', '<pad>'])
vocab_size_sm = tokeniser_sm.get_vocab_size()
print(vocab_size_sm)
tokeniser_sm.decode([vocab_size_sm-1])
vocab_sm = tokeniser_sm.get_vocab()
print(len(list(vocab_sm.keys())))
###Output
_____no_output_____
###Markdown
BPE
###Code
from tokenizers import CharBPETokenizer
tokeniser_ep_2 = CharBPETokenizer()
tokeniser_ep_2.train(['english_pll.txt'], special_tokens= ['<sos>', '<eos>', '<pad>'])
tokeniser_sp_2 = CharBPETokenizer()
tokeniser_sp_2.train(['sumerian_pll.txt'], special_tokens= ['<sos>', '<eos>', '<pad>'])
vocab_size_en_2 = tokeniser_ep_2.get_vocab_size()
vocab_size_sm_2 = tokeniser_sp_2.get_vocab_size()
print(vocab_size_en_2, vocab_size_sm_2)
tokeniser_sp_2.decode([vocab_size_sm_2 - 1])
tokeniser_ep_2.decode([vocab_size_en_2 - 1])
vocab_sp_2 = tokeniser_sp_2.get_vocab()
vocab_ep_2 = tokeniser_ep_2.get_vocab()
print(len(list(vocab_sp_2.keys())))
print(len(list(vocab_ep_2.keys())))
tokeniser_sm_2 = CharBPETokenizer()
tokeniser_sm_2.train(['sumerian_mono.txt'], special_tokens= ['<sos>', '<eos>', '<pad>'])
vocab_size_sm = tokeniser_sm_2.get_vocab_size()
print(vocab_size_sm)
tokeniser_sm_2.decode([vocab_size_sm - 1])
vocab_sm_2 = tokeniser_sm_2.get_vocab()
print(len(list(vocab_sm_2.keys())))
###Output
_____no_output_____
###Markdown
BERT WordPiece
###Code
from tokenizers import BertWordPieceTokenizer
tokeniser_ep_3 = BertWordPieceTokenizer()
tokeniser_ep_3.train(['english_pll.txt'], special_tokens= ['<sos>', '<eos>', '<pad>'])
tokeniser_sp_3 = BertWordPieceTokenizer()
tokeniser_sp_3.train(['sumerian_pll.txt'], special_tokens= ['<sos>', '<eos>', '<pad>'])
vocab_size_en_3 = tokeniser_ep_3.get_vocab_size()
vocab_size_sm_3 = tokeniser_sp_3.get_vocab_size()
print(vocab_size_en_3, vocab_size_sm_3)
tokeniser_sp_3.decode([vocab_size_sm_3 - 1])
tokeniser_ep_3.decode([vocab_size_en_3 - 1])
vocab_sp_3 = tokeniser_sp_3.get_vocab()
vocab_ep_3 = tokeniser_ep_3.get_vocab()
print(len(list(vocab_sp_3.keys())))
print(len(list(vocab_ep_3.keys())))
tokeniser_sm_3 = BertWordPieceTokenizer()
tokeniser_sm_3.train(['sumerian_mono.txt'], special_tokens= ['<sos>', '<eos>', '<pad>'])
vocab_size_sm = tokeniser_sm_3.get_vocab_size()
print(vocab_size_sm)
tokeniser_sm_3.decode([vocab_size_sm - 1])
vocab_sm_3 = tokeniser_sm_3.get_vocab()
print(len(list(vocab_sm_3.keys())))
###Output
_____no_output_____
###Markdown
White Space
###Code
import nltk
from nltk.tokenize import WhitespaceTokenizer
with open('sumerian_pll.txt') as f:
sum_text = f.read()
with open('english_pll.txt') as f:
eng_text = f.read()
tokenised_smp = WhitespaceTokenizer().tokenize(sum_text)
tokenised_enp = WhitespaceTokenizer().tokenize(eng_text)
len(tokenised_smp)
smp_vocab = []
for word in tokenised_smp:
if word not in smp_vocab:
smp_vocab.append(word)
enp_vocab = []
for word in tokenised_enp:
if word not in enp_vocab:
enp_vocab.append(word)
len(smp_vocab)
len(enp_vocab)
###Output
_____no_output_____
###Markdown
Saving vocabulary
###Code
%cd ../../Tokenizers/
!mkdir BBPE
!mkdir BPE
!mkdir BertWordPiece
import json
tokeniser_sp.save('./BBPE', "sumerian_pll")
tokeniser_ep.save('./BBPE', "english_pll")
tokeniser_sm.save('./BBPE', "sumerian_mono")
tokeniser_sp_2.save('./BPE', "sumerian_pll")
tokeniser_ep_2.save('./BPE', "english_pll")
tokeniser_sm_2.save('./BPE', "sumerian_mono")
tokeniser_sp_3.save('./BertWordPiece', "sumerian_pll")
tokeniser_ep_3.save('./BertWordPiece', "english_pll")
tokeniser_sm_3.save('./BertWordPiece', "sumerian_mono")
###Output
_____no_output_____
###Markdown
Testing BBPE
###Code
encc = tokeniser_sm.encode('lu2 ki-inim-ma-me')
print(encc.ids)
print(encc.tokens)
encc = tokeniser_sp.encode('lu2 ki-inim-ma-me')
print(encc.ids)
print(encc.tokens)
encc = tokeniser_sm.encode('2(asz) sze lid2-ga')
print(encc.ids)
print(encc.tokens)
encc = tokeniser_sm.encode('3(asz) kur2 |LAGABx(HA.A)|')
print(encc.ids)
print(encc.tokens)
###Output
_____no_output_____
###Markdown
BPE
###Code
encc = tokeniser_sm_2.encode('lu2 ki-inim-ma-me')
print(encc.ids)
print(encc.tokens)
encc = tokeniser_sp_2.encode('lu2 ki-inim-ma-me')
print(encc.ids)
print(encc.tokens)
encc = tokeniser_sm_2.encode('2(asz) sze lid2-ga')
print(encc.ids)
print(encc.tokens)
encc = tokeniser_sm_2.encode('3(asz) kur2 |LAGABx(HA.A)|')
print(encc.ids)
print(encc.tokens)
###Output
_____no_output_____
###Markdown
BERT WordPiece
###Code
encc = tokeniser_sm_3.encode('lu2 ki-inim-ma-me')
print(encc.ids)
print(encc.tokens)
encc = tokeniser_sp_3.encode('lu2 ki-inim-ma-me')
print(encc.ids)
print(encc.tokens)
encc = tokeniser_sm_3.encode('2(asz) sze lid2-ga')
print(encc.ids)
print(encc.tokens)
encc = tokeniser_sm_3.encode('3(asz) kur2 |LAGABx(HA.A)|')
print(encc.ids)
print(encc.tokens)
###Output
_____no_output_____
###Markdown
Additional
###Code
!zip -r ../Tokenizers.zip ./Tokenizers
from google.colab import files
files.download("../Tokenizers.zip")
###Output
_____no_output_____ |
sw-notebook-notes.ipynb | ###Markdown
Taking pictures with your camera
###Code
# imports needed by the cells in this notebook (assumed; `sf`, used further down,
# is a local spectrum-helper module that is not shown here)
import math
import cv2
import numpy as np
import matplotlib.pyplot as plt
test_img = cv2.imread("waiting_for_the_bus.jpg")
test_img_rgb = cv2.cvtColor(test_img,cv2.COLOR_BGR2RGB)
plt.imshow(test_img_rgb)
test_img[1215]
test_img_rgb[2500,500]
###Output
_____no_output_____
###Markdown
Angular FOV of the camera. Measuring the FOV by photographing an object of known width at a known distance.
###Code
width = 150
distance = 114
angle = 2 * math.atan(width / (2 * distance))
print(math.degrees(angle))
###Output
66.68141469295401
###Markdown
Working out pixels per radian
###Code
fov_image = cv2.imread("window.jpg")
height_px , width_px = fov_image.shape
radians_per_pixel = angle / width_px
plt.imshow(fov_image[500:510,1000:1010])
print(np.matrix(fov_image[500:510,1000:1010,2]))
###Output
[[162 163 163 163 161 162 163 162 160 165]
[159 161 160 161 160 161 162 159 160 164]
[160 161 161 162 162 162 162 159 163 162]
[160 163 163 164 162 161 163 161 162 158]
[161 165 163 167 160 161 164 161 160 162]
[163 160 156 164 161 158 160 163 161 161]
[163 161 157 163 162 160 161 162 161 161]
[153 158 156 160 158 162 160 158 160 162]
[159 163 160 166 161 165 161 162 159 163]
[160 162 157 162 157 162 159 161 163 163]]
###Markdown
Read in a spectrum photo
###Code
s1 = cv2.imread("spec-ibl-auto.jpg")
plt.imshow(s1)
b1 = [p[0] for p in s1[1452]]
g1 = [p[1] for p in s1[1452]]
r1 = [p[2] for p in s1[1452]]
wvlb,sb1 = sf.get_spectrum(b1,1e-6,radians_per_pixel)
wvlg,sg1 = sf.get_spectrum(g1,1e-6,radians_per_pixel)
wvlr,sr1 = sf.get_spectrum(r1,1e-6,radians_per_pixel)
plt.plot(wvlb,sb1,'b-')
plt.plot(wvlg,sg1,'g-')
plt.plot(wvlr,sr1,'r-')
sgrey = [float(r) + float(g) + float(b) for r,g,b in zip(sr1,sg1,sb1)]
wvlgrey = [w1 for w1,w2,w3 in zip(wvlr,wvlg,wvlb)]
plt.plot(wvlgrey,sgrey)
###Output
_____no_output_____ |
simpsons/Simpsons-PyTorch.ipynb | ###Markdown
Calculate mean width and length from the test images
###Code
import os, random
from scipy.misc import imread, imresize
width = 0
lenght = 0
num_test_images = len(test_image_names)
for i in range(num_test_images):
path_file = os.path.join(test_root_path, test_image_names[i])
image = imread(path_file)
width += image.shape[0]
lenght += image.shape[1]
width_mean = width//num_test_images
lenght_mean = lenght//num_test_images
dim_size = (width_mean + lenght_mean) // 2
print("Width mean: {}".format(width_mean))
print("Lenght mean: {}".format(lenght_mean))
print("Size mean dimension: {}".format(dim_size))
###Output
Width mean: 152
Lenght mean: 147
Size mean dimension: 149
###Markdown
Size mean dimension will be used for the resizing process. __All the images will be scaled__ to __(149, 149)__ since it's the average of the test images. Show some test examples
###Code
import matplotlib.pyplot as plt
idx = random.randint(0, num_test_images)
sample_file, sample_name = test_image_names[idx], test_image_names[idx].split('_')[:-1]
path_file = os.path.join(test_root_path, sample_file)
sample_image = imread(path_file)
print("Label:{}, Image:{}, Shape:{}".format('_'.join(sample_name), idx, sample_image.shape))
plt.figure(figsize=(3,3))
plt.imshow(sample_image)
plt.axis('off')
plt.show()
###Output
Label:apu_nahasapeemapetilon, Image:759, Shape:(171, 229, 3)
###Markdown
Making batches (resized)
###Code
def get_num_of_samples():
count = 0
for _,character in enumerate(character_directories):
path = os.path.join(train_root_path, character)
        count += len(os.listdir(path))
return count
def get_batch(batch_init, batch_size):
data = {'image':[], 'label':[]}
character_batch_size = batch_size//len(character_directories)
character_batch_init = batch_init//len(character_directories)
character_batch_end = character_batch_init + character_batch_size
for _,character in enumerate(character_directories):
path = os.path.join(train_root_path, character)
        images_list = os.listdir(path)
for i in range(character_batch_init, character_batch_end):
if len(images_list) == 0:
continue
#if this character has small number of features
#we repeat them
if i >= len(images_list):
p = i % len(images_list)
else:
p = i
path_file = os.path.join(path, images_list[p])
image = imread(path_file)
#all with the same shape
image = imresize(image, (dim_size, dim_size))
data['image'].append(image)
data['label'].append(character)
return data
def get_batches(num_batches, batch_size, verbose=False):
#num max of samples
num_samples = get_num_of_samples()
#check number of batches with the maximum
max_num_batches = num_samples//batch_size - 1
if verbose:
print("Number of samples:{}".format(num_samples))
print("Batches:{} Size:{}".format(num_batches, batch_size))
assert num_batches <= max_num_batches, "Surpassed the maximum number of batches"
for i in range(0, num_batches):
init = i * batch_size
if verbose:
print("Batch-{} yielding images from {} to {}...".format(i, init, init+batch_size))
yield get_batch(init, batch_size)
#testing generator
batch_size = 500
for b in get_batches(10, batch_size, verbose=True):
print("\t|- retrieved {} images".format(len(b['image'])))
###Output
Number of samples:20933
Batches:10 Size:500
Batch-0 yielding images from 0 to 500...
###Markdown
Preprocessing data
###Code
from sklearn import preprocessing
#num characters
num_characters = len(character_directories)
#normalize
def normalize(x):
#we use the feature scaling to have all the batches
#in the same space, that is (0,1)
return (x - np.amin(x))/(np.amax(x) - np.amin(x))
#one-hot encode
lb = preprocessing.LabelBinarizer()
lb = lb.fit(character_directories)
def one_hot(label):
return lb.transform([label])
###Output
_____no_output_____
###Markdown
Storing preprocessed batches on disk
###Code
num_batches = 40
batch_size = 500
import pickle
import numpy as np
cnt_images = 0
for cnt, b in enumerate(get_batches(num_batches, batch_size)):
data = {'image':[], 'label':[]}
for i in range( min(len(b['image']), batch_size) ):
image = np.array( b['image'][i] )
label = np.array( b['label'][i] )
#label = label.reshape([-1,:])
if len(image.shape) == 3:
data['image'].append(normalize(image))
data['label'].append(one_hot(label)[-1,:])
cnt_images += 1
else:
print("Dim image < 3")
with open("simpson_train_{}.pkl".format(cnt), 'wb') as file:
pickle.dump(data, file, pickle.HIGHEST_PROTOCOL)
print("Loaded {} train images and stored on disk".format(cnt_images))
#testing load from file
import pickle
with open('simpson_train_0.pkl', 'rb') as file:
data = pickle.load(file)
print("Example of onehot encoded:\n{}".format(data['label'][0]))
print("Data shape: {}".format(data['image'][0].shape))
###Output
Example of onehot encoded:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0]
Data shape: (149, 149, 3)
###Markdown
NOTE: from this point on, the data has already been preprocessed and saved as pickle files on disk. Building the Network
###Code
import torch
import torchvision
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
import torch.nn as nn
import torch.nn.functional as F
num_characters = 47
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 32, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(32, 64, 5)
self.fc1 = nn.Linear(64 * 34 * 34, num_characters)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
#print("shape: {}".format(x.size()))
x = x.view(x.size(0), -1)
x = self.fc1(x)
return x
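# Shape check for fc1's input size (derived from dim_size = 149 computed earlier):
# conv1 (kernel 5): 149 -> 145, pool/2 -> 72; conv2 (kernel 5): 72 -> 68, pool/2 -> 34,
# which gives the 64 * 34 * 34 flattened features expected by fc1.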
net = Net()
#move the neural network to the GPU
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
net = nn.DataParallel(net)
net.to(device)
import torch.optim as optim
loss_fn = nn.CrossEntropyLoss() #buit-in softmax, we can use logits directly
optimizer = optim.Adam(net.parameters())
import os
import pickle
from sklearn.model_selection import train_test_split
def getDatasetsFromPickle(file):
#print("Processing: {}".format(fname))
data = pickle.load(file)
X_train, X_val, y_train, y_val = train_test_split(data['image'], data['label'], test_size=0.2)
inputs_train, labels_train = torch.FloatTensor(X_train), torch.FloatTensor(y_train)
    inputs_val, labels_val = torch.FloatTensor(X_val), torch.FloatTensor(y_val)
#permute image as (samples, x, y, channels) to (samples, channels, x, y)
inputs_train = inputs_train.permute(0, 3, 1, 2)
inputs_val = inputs_val.permute(0, 3, 1, 2)
#move the inputs and labels to the GPU
return inputs_train.to(device), labels_train.to(device), inputs_val.to(device), labels_val.to(device)
stats = {'train_loss':[], 'val_loss':[], 'acc':[]}
for epoch in range(3): # loop over the dataset multiple times
for i in range(100):
fname = "simpson_train_{}.pkl".format(i)
if os.path.exists(fname):
with open(fname, 'rb') as file:
#retrieve the data
inputs_train, labels_train, inputs_val, labels_val = getDatasetsFromPickle(file)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs_train)
#cross entropy loss doesn't accept onehot encoded targets
# |-> use the index class instead
lbls_no_onehot_encoded = torch.argmax(labels_train, dim=1)
loss = loss_fn(outputs, lbls_no_onehot_encoded)
loss.backward()
optimizer.step()
#statistics
stats['train_loss'].append(loss.item())
with torch.no_grad():
outputs = net(inputs_val)
label_val_classes = torch.argmax(labels_val, dim=1)
output_classes = torch.argmax(outputs, dim=1)
stats['val_loss'].append( loss_fn(outputs, label_val_classes).item() )
stats['acc'].append( (output_classes == label_val_classes).sum().item() / label_val_classes.size(0) )
#printouts
if i % 20 == 19:
printout = "Epoch: {} Batch: {} Training loss: {:.3f} Validation loss: {:.3f} Accuracy: {:.3f}"
print(printout.format(epoch + 1, i + 1, stats['train_loss'][-1], stats['val_loss'][-1], stats['acc'][-1],))
else:
break
print('Finished Training')
import matplotlib.pyplot as plt
plt.plot(stats['train_loss'], label='Train Loss')
plt.plot(stats['val_loss'], label='Validation Loss')
plt.plot(stats['acc'], label='Accuracy')
plt.legend()
###Output
_____no_output_____
###Markdown
Testing model
###Code
import warnings
warnings.filterwarnings('ignore')
#select random image
idx = random.randint(0, num_test_images)
sample_file, sample_name = test_image_names[idx], test_image_names[idx].split('_')[:-1]
path_file = os.path.join(test_root_path, sample_file)
#read them
test_image = normalize(imresize(imread(path_file), (dim_size, dim_size)))
test_label_onehot = one_hot('_'.join(sample_name))[-1,:]
#move to tensors
test_image, test_label_onehot = torch.FloatTensor(test_image), torch.FloatTensor(test_label_onehot)
#permute image as (samples, x, y, channels) to (samples, channels, x, y)
test_image = test_image.permute(2, 0, 1)
test_image.unsqueeze_(0)
#move to GPU
test_image, test_label_onehot = test_image.to(device), test_label_onehot.to(device)
##
with torch.no_grad():
output = net(test_image)
predicted_character = torch.argmax(output.data, 1)
actual_character = torch.argmax(test_label_onehot)
print("Right!!") if (predicted_character == actual_character) else print("Wrong..")
#showing
actual_name = ' '.join([s.capitalize() for s in sample_name])
print("Label: {}".format(actual_name))
pred_name = lb.inverse_transform(output.cpu().numpy()).item() #copy from cuda to cpu, then to numpy
prediction = ' '.join([s.capitalize() for s in pred_name.split('_')])
print("Prediction: {}".format(prediction))
plt.figure(figsize=(3,3))
plt.imshow(test_image.permute(0, 2, 3, 1).squeeze())
plt.axis('off')
plt.show()
###Output
Right!!
Label: Abraham Grampa Simpson
Prediction: Abraham Grampa Simpson
|
tf_data_dep/c_3/TFDS-Week2-Question.ipynb | ###Markdown
Classify Structured Data Import TensorFlow and Other Libraries
###Code
import pandas as pd
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow import feature_column
from os import getcwd
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Use Pandas to Create a Dataframe[Pandas](https://pandas.pydata.org/) is a Python library with many helpful utilities for loading and working with structured data. We will use Pandas to download the dataset and load it into a dataframe.
###Code
filePath = f"{getcwd()}/../tmp2/heart.csv"
dataframe = pd.read_csv(filePath)
dataframe.head()
###Output
_____no_output_____
###Markdown
Split the Dataframe Into Train, Validation, and Test SetsThe dataset we downloaded was a single CSV file. We will split this into train, validation, and test sets.
###Code
train, test = train_test_split(dataframe, test_size=0.2)
train, val = train_test_split(train, test_size=0.2)
print(len(train), 'train examples')
print(len(val), 'validation examples')
print(len(test), 'test examples')
###Output
193 train examples
49 validation examples
61 test examples
###Markdown
Create an Input Pipeline Using `tf.data`Next, we will wrap the dataframes with [tf.data](https://www.tensorflow.org/guide/datasets). This will enable us to use feature columns as a bridge to map from the columns in the Pandas dataframe to features used to train the model. If we were working with a very large CSV file (so large that it does not fit into memory), we would use tf.data to read it from disk directly.
###Code
# EXERCISE: A utility method to create a tf.data dataset from a Pandas Dataframe.
def df_to_dataset(dataframe, shuffle=True, batch_size=32):
dataframe = dataframe.copy()
# Use Pandas dataframe's pop method to get the list of targets.
labels = dataframe.pop("target")
# Create a tf.data.Dataset from the dataframe and labels.
ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))
if shuffle:
# Shuffle dataset.
ds = ds.shuffle(buffer_size=100)
# Batch dataset with specified batch_size parameter.
ds = ds.batch(batch_size)
return ds
batch_size = 5 # A small batch sized is used for demonstration purposes
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Understand the Input PipelineNow that we have created the input pipeline, let's call it to see the format of the data it returns. We have used a small batch size to keep the output readable.
###Code
for feature_batch, label_batch in train_ds.take(1):
print('Every feature:', list(feature_batch.keys()))
print('A batch of ages:', feature_batch['age'])
print('A batch of targets:', label_batch )
###Output
Every feature: ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg', 'thalach', 'exang', 'oldpeak', 'slope', 'ca', 'thal']
A batch of ages: tf.Tensor([66 45 51 42 44], shape=(5,), dtype=int32)
A batch of targets: tf.Tensor([0 0 0 0 0], shape=(5,), dtype=int32)
###Markdown
We can see that the dataset returns a dictionary of column names (from the dataframe) that map to column values from rows in the dataframe. Create Several Types of Feature ColumnsTensorFlow provides many types of feature columns. In this section, we will create several types of feature columns, and demonstrate how they transform a column from the dataframe.
###Code
# Try to demonstrate several types of feature columns by getting an example.
example_batch = next(iter(train_ds))[0]
# A utility method to create a feature column and to transform a batch of data.
def demo(feature_column):
feature_layer = layers.DenseFeatures(feature_column, dtype='float64')
print(feature_layer(example_batch).numpy())
###Output
_____no_output_____
###Markdown
Numeric ColumnsThe output of a feature column becomes the input to the model (using the demo function defined above, we will be able to see exactly how each column from the dataframe is transformed). A [numeric column](https://www.tensorflow.org/api_docs/python/tf/feature_column/numeric_column) is the simplest type of column. It is used to represent real valued features.
###Code
# EXERCISE: Create a numeric feature column out of 'age' and demo it.
age = feature_column.numeric_column("age")
demo(age)
###Output
[[55.]
[58.]
[56.]
[51.]
[46.]]
###Markdown
In the heart disease dataset, most columns from the dataframe are numeric. Bucketized ColumnsOften, you don't want to feed a number directly into the model, but instead split its value into different categories based on numerical ranges. Consider raw data that represents a person's age. Instead of representing age as a numeric column, we could split the age into several buckets using a [bucketized column](https://www.tensorflow.org/api_docs/python/tf/feature_column/bucketized_column).
###Code
# EXERCISE: Create a bucketized feature column out of 'age' with
# the following boundaries and demo it.
boundaries = [18, 25, 30, 35, 40, 45, 50, 55, 60, 65]
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
demo(age_buckets)
###Output
[[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]]
###Markdown
Notice the one-hot values above describe which age range each row matches. Categorical ColumnsIn this dataset, thal is represented as a string (e.g. 'fixed', 'normal', or 'reversible'). We cannot feed strings directly to a model. Instead, we must first map them to numeric values. The categorical vocabulary columns provide a way to represent strings as a one-hot vector (much like you have seen above with age buckets). **Note**: You will probably see some warning messages when running some of the code cell below. These warnings have to do with software updates and should not cause any errors or prevent your code from running.
###Code
# EXERCISE: Create a categorical vocabulary column out of the
# above mentioned categories with the key specified as 'thal'.
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
demo(thal_one_hot)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4276: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4331: VocabularyListCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
[[0. 0. 1.]
[0. 1. 0.]
[0. 0. 1.]
[0. 0. 1.]
[0. 1. 0.]]
###Markdown
The vocabulary can be passed as a list using [categorical_column_with_vocabulary_list](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list), or loaded from a file using [categorical_column_with_vocabulary_file](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_file). Embedding ColumnsSuppose instead of having just a few possible strings, we have thousands (or more) values per category. For a number of reasons, as the number of categories grow large, it becomes infeasible to train a neural network using one-hot encodings. We can use an embedding column to overcome this limitation. Instead of representing the data as a one-hot vector of many dimensions, an [embedding column](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) represents that data as a lower-dimensional, dense vector in which each cell can contain any number, not just 0 or 1. You can tune the size of the embedding with the `dimension` parameter.
###Code
# EXERCISE: Create an embedding column out of the categorical
# vocabulary you just created (thal). Set the size of the
# embedding to 8, by using the dimension parameter.
thal_embedding = feature_column.embedding_column(thal, dimension=8)
demo(thal_embedding)
###Output
[[-0.33200276 0.03792952 0.10786314 0.12328085 -0.50400513 0.30712402
-0.18802935 0.06810189]
[ 0.4303781 0.49573514 -0.6356979 0.12526743 -0.63225657 0.18131317
0.01873719 -0.27798456]
[-0.33200276 0.03792952 0.10786314 0.12328085 -0.50400513 0.30712402
-0.18802935 0.06810189]
[-0.33200276 0.03792952 0.10786314 0.12328085 -0.50400513 0.30712402
-0.18802935 0.06810189]
[ 0.4303781 0.49573514 -0.6356979 0.12526743 -0.63225657 0.18131317
0.01873719 -0.27798456]]
###Markdown
Hashed Feature ColumnsAnother way to represent a categorical column with a large number of values is to use a [categorical_column_with_hash_bucket](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_hash_bucket). This feature column calculates a hash value of the input, then selects one of the `hash_bucket_size` buckets to encode a string. When using this column, you do not need to provide the vocabulary, and you can choose to make the number of hash buckets significantly smaller than the number of actual categories to save space.
###Code
# EXERCISE: Create a hashed feature column with 'thal' as the key and
# 1000 hash buckets.
thal_hashed = feature_column.categorical_column_with_hash_bucket(
'thal', hash_bucket_size=1000)
demo(feature_column.indicator_column(thal_hashed))
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4331: HashedCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Crossed Feature ColumnsCombining features into a single feature, better known as [feature crosses](https://developers.google.com/machine-learning/glossary/feature_cross), enables a model to learn separate weights for each combination of features. Here, we will create a new feature that is the cross of age and thal. Note that `crossed_column` does not build the full table of all possible combinations (which could be very large). Instead, it is backed by a `hashed_column`, so you can choose how large the table is.
###Code
# EXERCISE: Create a crossed column using the bucketized column (age_buckets),
# the categorical vocabulary column (thal) previously created, and 1000 hash buckets.
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
demo(feature_column.indicator_column(crossed_feature))
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4331: CrossedColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Choose Which Columns to UseWe have seen how to use several types of feature columns. Now we will use them to train a model. The goal of this exercise is to show you the complete code needed to work with feature columns. We have selected a few columns to train our model below arbitrarily.If your aim is to build an accurate model, try a larger dataset of your own, and think carefully about which features are the most meaningful to include, and how they should be represented.
###Code
dataframe.dtypes
###Output
_____no_output_____
###Markdown
You can use the above list of column datatypes to map the appropriate feature column to every column in the dataframe.
###Code
# EXERCISE: Fill in the missing code below
feature_columns = []
# Numeric Cols.
# Create a list of numeric columns. Use the following list of columns
# that have a numeric datatype: ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca'].
numeric_columns = ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']
for header in numeric_columns:
# Create a numeric feature column out of the header.
numeric_feature_column = feature_column.numeric_column(header)
feature_columns.append(numeric_feature_column)
# bucketized cols
age_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
feature_columns.append(age_buckets)
# indicator cols
thal = feature_column.categorical_column_with_vocabulary_list(
'thal', ['fixed', 'normal', 'reversible'])
thal_one_hot = feature_column.indicator_column(thal)
feature_columns.append(thal_one_hot)
# embedding cols
thal_embedding = feature_column.embedding_column(thal, dimension=8)
feature_columns.append(thal_embedding)
# crossed cols
crossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)
crossed_feature = feature_column.indicator_column(crossed_feature)
feature_columns.append(crossed_feature)
###Output
_____no_output_____
###Markdown
Create a Feature LayerNow that we have defined our feature columns, we will use a [DenseFeatures](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/DenseFeatures) layer to input them to our Keras model.
###Code
# EXERCISE: Create a Keras DenseFeatures layer and pass the feature_columns you just created.
feature_layer = tf.keras.layers.DenseFeatures(feature_columns)
###Output
_____no_output_____
###Markdown
Earlier, we used a small batch size to demonstrate how feature columns worked. We create a new input pipeline with a larger batch size.
###Code
batch_size = 32
train_ds = df_to_dataset(train, batch_size=batch_size)
val_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)
test_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Create, Compile, and Train the Model
###Code
model = tf.keras.Sequential([
feature_layer,
layers.Dense(128, activation='relu'),
layers.Dense(128, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(train_ds,
validation_data=val_ds,
epochs=100)
loss, accuracy = model.evaluate(test_ds)
print("Accuracy", accuracy)
###Output
2/2 [==============================] - 0s 6ms/step - loss: 0.4093 - accuracy: 0.8197
Accuracy 0.8196721
###Markdown
Submission Instructions
###Code
# Now click the 'Submit Assignment' button above.
###Output
_____no_output_____
###Markdown
When you're done or would like to take a break, please run the two cells below to save your work and close the Notebook. This frees up resources for your fellow learners.
###Code
%%javascript
<!-- Save the notebook -->
IPython.notebook.save_checkpoint();
%%javascript
<!-- Shutdown and close the notebook -->
window.onbeforeunload = null
window.close();
IPython.notebook.session.delete();
###Output
_____no_output_____ |
module2-loadingdata/Sanjay_Krishna_Unit_1_Sprint_2.ipynb | ###Markdown
_Lambda School Data Science_ Join and Reshape datasetsObjectives- concatenate data with pandas- merge data with pandas- understand tidy data formatting- melt and pivot data with pandasLinks- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)- [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data) - Combine Data Sets: Standard Joins - Tidy Data - Reshaping Data- Python Data Science Handbook - [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append - [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join - [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping - [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables Reference- Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)- Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
###Code
!wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
!tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
%cd instacart_2017_05_01
!ls -lh *.csv
###Output
-rw-r--r-- 1 502 staff 2.6K May 2 2017 aisles.csv
-rw-r--r-- 1 502 staff 270 May 2 2017 departments.csv
-rw-r--r-- 1 502 staff 551M May 2 2017 order_products__prior.csv
-rw-r--r-- 1 502 staff 24M May 2 2017 order_products__train.csv
-rw-r--r-- 1 502 staff 104M May 2 2017 orders.csv
-rw-r--r-- 1 502 staff 2.1M May 2 2017 products.csv
###Markdown
Assignment Join Data PracticeThese are the top 10 most frequently ordered products. How many times was each ordered? 1. Banana2. Bag of Organic Bananas3. Organic Strawberries4. Organic Baby Spinach 5. Organic Hass Avocado6. Organic Avocado7. Large Lemon 8. Strawberries9. Limes 10. Organic Whole MilkFirst, write down which columns you need and which dataframes have them.Next, merge these into a single dataframe.Then, use pandas functions from the previous lesson to get the counts of the top 10 most frequently ordered products.
###Code
import pandas as pd #import pandas
df_orders = pd.read_csv('order_products__prior.csv') #create dataframe of orders
df_orders.head(10) #view dataframe
df_products = pd.read_csv('products.csv') #create dataframe of products
df_products.head() #view dataframe
c_df = pd.merge(df_products, df_orders, on='product_id') #merge both dataframes using product_id
c_df.head() #view combined dataframe
freq = ['Banana','Bag of Organic Bananas','Organic Strawberries','Organic Baby Spinach','Organic Hass Avocado','Organic Avocado','Large Lemon','Strawberries','Limes','Organic Whole Milk']
#freq is list of most frequent products
filter = c_df.product_name.isin(freq) #condition of filter to be used for creating a new dataframe with only frequently ordered products
filtered_df = c_df[filter] #apply condition to combined dataframe
groupby_productname = filtered_df['reordered'].groupby(filtered_df['product_name']) #create groupby object of reordered items grouped by product name
groupby_productname.sum() #sums the 'reordered' flag, i.e. how many times each of these products was re-ordered (not the total number of times it was ordered)
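# If the goal is the total number of times each of these products was ordered
# (not just re-ordered), a simpler option is to count rows per product, e.g.:
# filtered_df['product_name'].value_counts()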
###Output
_____no_output_____
###Markdown
Reshape Data Section- Replicate the lesson code- Complete the code cells we skipped near the beginning of the notebook- Table 2 --> Tidy- Tidy --> Table 2- Load seaborn's `flights` dataset by running the cell below. Then create a pivot table showing the number of passengers by month and year. Use year for the index and month for the columns. You've done it right if you get 112 passengers for January 1949 and 432 passengers for December 1960.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
table1 = pd.DataFrame(
[[np.nan,2],[16,11],[3,1]],index=['John Smith','Jane Doe','Mary Johnson'], columns=['treatmenta','treatmentb'])
table2=table1.T
table1
table2
indexC = table2.columns.to_list()
table2 = table2.reset_index()
table2 = table2.rename(columns={'index':'trt'})
table2
table2.trt = table2.trt.str.replace('treatment','')
table2.head()
tidyT2 = table2.melt(id_vars='trt',value_vars=indexC)
tidyT2.head()
import seaborn as sns
flights = sns.load_dataset('flights')
flights.head()
flights.pivot_table(index='year',columns='month')
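# Equivalent, with the aggregated column named explicitly (clearer when the frame
# has more than one numeric column):
# flights.pivot_table(index='year', columns='month', values='passengers')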
###Output
_____no_output_____
###Markdown
Join Data Stretch ChallengeThe [Instacart blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2) has a visualization of "**Popular products** purchased earliest in the day (green) and latest in the day (red)." The post says,> "We can also see the time of day that users purchase specific products.> Healthier snacks and staples tend to be purchased earlier in the day, whereas ice cream (especially Half Baked and The Tonight Dough) are far more popular when customers are ordering in the evening.> **In fact, of the top 25 latest ordered products, the first 24 are ice cream! The last one, of course, is a frozen pizza.**"Your challenge is to reproduce the list of the top 25 latest ordered popular products.We'll define "popular products" as products with more than 2,900 orders.
###Code
##### YOUR CODE HERE #####
orders2 = pd.read_csv('orders.csv')
orders2.head()
new_df = pd.merge(df_orders, orders2, on='order_id')
new_df.head(100) #view combined dataframe
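# One possible way to finish the challenge (a sketch, assuming the standard Instacart
# column names such as 'order_hour_of_day' in orders.csv): attach product names, keep
# products with more than 2,900 orders, then take the 25 with the latest mean order hour.
named = new_df.merge(df_products[['product_id', 'product_name']], on='product_id')
by_product = named.groupby('product_name')['order_hour_of_day'].agg(['count', 'mean'])
popular = by_product[by_product['count'] > 2900]
popular.sort_values('mean', ascending=False).head(25)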
###Output
_____no_output_____
###Markdown
Reshape Data Stretch Challenge_Try whatever sounds most interesting to you!_- Replicate more of Instacart's visualization showing "Hour of Day Ordered" vs "Percent of Orders by Product"- Replicate parts of the other visualization from [Instacart's blog post](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2), showing "Number of Purchases" vs "Percent Reorder Purchases"- Get the most recent order for each user in Instacart's dataset. This is a useful baseline when [predicting a user's next order](https://www.kaggle.com/c/instacart-market-basket-analysis)- Replicate parts of the blog post linked at the top of this notebook: [Modern Pandas, Part 5: Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)
###Code
##### YOUR CODE HERE #####
###Output
_____no_output_____ |
p13-lr-torch.ipynb | ###Markdown
Lab TODO: 0. In this version; consider commenting out other calls to train/fit! 1. Investigate LEARNING_RATE, DROPOUT, MOMENTUM, REGULARIZATION. LEARNING_RATE = 0.1 DROPOUT = 0.1 randomly turn off this fraction of the neural-net while training. MOMENTUM = 0.9 REGULARIZATION = 0.01 achieved: Validation. Acc: 0.927 Auc: 0.973 and converged in the fewest iterations learning rate of 1 was too large it definitely was missing the global minimum 2. What do you think these variables change? Learning rate - The amount that the weights are updated during training is referred to as the step size or the “learning rate.”The learning rate controls how quickly the model is adapted to the problem. Smaller learning rates require more training epochs given the smaller changes made to the weights each update, whereas larger learning rates result in rapid changes and require fewer training epochs. Dropout - A single model can be used to simulate having a large number of different network architectures by randomly dropping out nodes during training. This is called dropout and offers a very computationally cheap and remarkably effective regularization method to reduce overfitting and improve generalization error in deep neural networks of all kinds. Momentum - replaces the current gradient with m (“momentum”), which is an aggregate of gradients. This aggregate is the exponential moving average of current and past gradients (i.e. up to time t).If the momentum term is large then the learning rate should be kept smaller. A large value of momentum also means that the convergence will happen fast. But if both the momentum and learning rate are kept at large values, then you might skip the minimum with a huge step. regularization - neural networks are easily overfit. regularization makes it so neural networks generalize better 3. Consider a shallower, wider network. - Changing [16,16] to something else... might require revisiting step 1. Comparison LEARNING_RATE = 0.1 DROPOUT = 0.1 randomly turn off this fraction of the neural-net while training. MOMENTUM = 0.9 REGULARIZATION = 0.01 [16,16] achieved: Validation. Acc: 0.927 Auc: 0.973 shallower network - [1,1] - accuracy decreased to 0.683 [2,2] - accuracy decreased to .683 auc decreased a little bit [5,5] - didn't make much of a difference[10,10] - didn't make much of a difference wider - [20,20] - didn't make much of a difference [100,100] didn't make much of a difference
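For reference, the SGD-with-momentum update described above can be written compactly (a standard textbook form, stated here as an assumption about what the optimizer does rather than something verified from the lab code):
```
v_{t+1} = momentum * v_t + grad(w_t) + regularization * w_t
w_{t+1} = w_t - learning_rate * v_{t+1}
```
so a large momentum keeps reinforcing the previous step direction, which is why pairing it with a large learning rate tends to overshoot the minimum.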
###Code
LEARNING_RATE = 0.1
DROPOUT = 0.1 # randomly turn off this fraction of the neural-net while training.
MOMENTUM = 0.9
REGULARIZATION = 0.01 # try 0.1, 0.01, etc.
# two hidden layers, 16 nodes, each.
model = make_neural_net(D, [5, 5], dropout=DROPOUT)
objective = nn.CrossEntropyLoss()
optimizer = optim.SGD(
model.parameters(), lr=LEARNING_RATE, momentum=MOMENTUM, weight_decay=REGULARIZATION
)
train("neural_net", model, optimizer, objective, max_iter=1000)
###Output
100%|██████████| 1000/1000 [00:01<00:00, 791.83it/s]
|
ed_heaps.ipynb | ###Markdown
Top 'K' Frequent Numbers (medium)
```
Example 1:
Input: [1, 3, 5, 12, 11, 12, 11], K = 2
Output: [12, 11]
Explanation: Both '11' and '12' appeared twice.

Example 2:
Input: [5, 12, 11, 3, 11], K = 2
Output: [11, 5] or [11, 12] or [11, 3]
Explanation: Only '11' appeared twice, all other numbers appeared once.
```
###Code
from collections import defaultdict
import heapq
def find_k_frequent_numbers(nums, k):
    # count how many times each number appears
    counts = defaultdict(int)
    for n in nums:
        counts[n] += 1
    # build a max-heap keyed by count (counts are negated because heapq is a min-heap)
    max_heap = [(-count, n) for n, count in counts.items()]
    heapq.heapify(max_heap)
res = []
while len(res) < k:
res.append(heapq.heappop(max_heap)[1])
return res
def main():
print("Here are the K frequent numbers: " +
str(find_k_frequent_numbers([1, 3, 5, 12, 11, 12, 11], 2)))
print("Here are the K frequent numbers: " +
str(find_k_frequent_numbers([5, 12, 11, 3, 11], 2)))
main()
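# Complexity note: counting is O(n), heapify is O(d) over the d distinct values, and
# each of the k pops costs O(log d). An equivalent standard-library alternative is
# heapq.nlargest(k, counts, key=counts.get) applied to the counts dict.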
###Output
_____no_output_____ |
nlp_imdb_reviews.ipynb | ###Markdown
Importing Modules
###Code
import io
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
print(tf.__version__)
###Output
2.3.1
###Markdown
Downloading Dataset form tensorflow_datasets
###Code
imdb , info = tfds.load('imdb_reviews' , as_supervised = True , with_info = True)
train_data , test_data = imdb['train'] , imdb['test']
###Output
_____no_output_____
###Markdown
Extracting Train and Test Sentences and their corresponding Labels
###Code
training_sentences = []
training_labels = []
testing_sentences = []
testing_labels = []
for sentence,label in train_data:
training_sentences.append(sentence.numpy().decode('utf8'))
training_labels.append(label.numpy())
for sentence,label in test_data:
testing_sentences.append(sentence.numpy().decode('utf8'))
testing_labels.append(label.numpy())
training_labels_final = np.array(training_labels)
testing_labels_final = np.array(testing_labels)
###Output
_____no_output_____
###Markdown
Initializing Tokenizer and converting sentences into padded sequences
###Code
vocab_size = 20000
embed_dims = 16
truncate = 'post'
pad = 'post'
oov_token = '<OOV>'
max_length = 150
tokenizer = Tokenizer(num_words=vocab_size , oov_token = oov_token)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
train_sequences = tokenizer.texts_to_sequences(training_sentences)
padded_train = pad_sequences(train_sequences , truncating = truncate , padding = pad , maxlen = max_length)
test_sequences = tokenizer.texts_to_sequences(testing_sentences)
padded_test = pad_sequences(test_sequences , truncating=truncate , padding = pad , maxlen = max_length)
###Output
_____no_output_____
###Markdown
Decoding Sequences back into Texts by creating a reverse word index
###Code
reverse_word_index = dict([(values , keys) for keys , values in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i , '?') for i in text])
print(decode_review(padded_train[3]))
print(training_sentences[3])
###Output
this is the kind of film for a snowy sunday afternoon when the rest of the world can go ahead with its own business as you descend into a big arm chair and mellow for a couple of hours wonderful performances from cher and nicolas cage as always gently row the plot along there are no <OOV> to cross no dangerous waters just a warm and witty <OOV> through new york life at its best a family film in every sense and one that deserves the praise it received ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ?
This is the kind of film for a snowy Sunday afternoon when the rest of the world can go ahead with its own business as you descend into a big arm-chair and mellow for a couple of hours. Wonderful performances from Cher and Nicolas Cage (as always) gently row the plot along. There are no rapids to cross, no dangerous waters, just a warm and witty paddle through New York life at its best. A family film in every sense and one that deserves the praise it received.
###Markdown
Making DNN Model
###Code
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size , embed_dims , input_length= max_length),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(units = 16 , activation = 'relu'),
tf.keras.layers.Dense(units = 1 , activation = 'sigmoid')
])
model.compile(optimizer = 'adam' , loss = 'binary_crossentropy' , metrics = ['accuracy'])
###Output
_____no_output_____
###Markdown
Summary of the Model's Processing
###Code
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_1 (Embedding) (None, 150, 16) 320000
_________________________________________________________________
bidirectional_1 (Bidirection (None, 64) 12544
_________________________________________________________________
dense_2 (Dense) (None, 16) 1040
_________________________________________________________________
dense_3 (Dense) (None, 1) 17
=================================================================
Total params: 333,601
Trainable params: 333,601
Non-trainable params: 0
_________________________________________________________________
###Markdown
Initializing a callback that stops training once training accuracy exceeds 99%, and fitting the data on the model
###Code
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('accuracy')>0.99):
print("\nReached 99% accuracy so cancelling training!")
self.model.stop_training = True
callbacks = myCallback()
history = model.fit(
padded_train,
training_labels_final,
epochs = 15,
validation_data = (padded_test , testing_labels_final),
callbacks = [callbacks]
)
###Output
Epoch 1/15
782/782 [==============================] - 47s 60ms/step - loss: 0.4605 - accuracy: 0.7689 - val_loss: 0.3734 - val_accuracy: 0.8332
Epoch 2/15
782/782 [==============================] - 47s 60ms/step - loss: 0.2425 - accuracy: 0.9088 - val_loss: 0.4236 - val_accuracy: 0.8259
Epoch 3/15
782/782 [==============================] - 47s 60ms/step - loss: 0.1642 - accuracy: 0.9428 - val_loss: 0.4280 - val_accuracy: 0.8198
Epoch 4/15
782/782 [==============================] - 53s 68ms/step - loss: 0.1186 - accuracy: 0.9597 - val_loss: 0.5145 - val_accuracy: 0.8113
Epoch 5/15
782/782 [==============================] - 51s 66ms/step - loss: 0.0885 - accuracy: 0.9701 - val_loss: 0.6779 - val_accuracy: 0.8131
Epoch 6/15
782/782 [==============================] - 40s 52ms/step - loss: 0.0726 - accuracy: 0.9760 - val_loss: 0.6103 - val_accuracy: 0.8138
Epoch 7/15
782/782 [==============================] - 38s 49ms/step - loss: 0.0537 - accuracy: 0.9826 - val_loss: 0.6695 - val_accuracy: 0.8150
Epoch 8/15
782/782 [==============================] - 38s 49ms/step - loss: 0.0358 - accuracy: 0.9882 - val_loss: 0.8158 - val_accuracy: 0.8086
Epoch 9/15
782/782 [==============================] - 37s 47ms/step - loss: 0.0347 - accuracy: 0.9896 - val_loss: 0.9003 - val_accuracy: 0.8082
Epoch 10/15
782/782 [==============================] - ETA: 0s - loss: 0.0202 - accuracy: 0.9938
Reached 99% accuracy so cancelling training!
782/782 [==============================] - 39s 50ms/step - loss: 0.0202 - accuracy: 0.9938 - val_loss: 0.9763 - val_accuracy: 0.8106
###Markdown
Extracting Embeddings from the Model
###Code
embed_layer = model.layers[0]
embed_weights = embed_layer.get_weights()[0]
print(embed_weights.shape)
###Output
(20000, 16)
###Markdown
Exporting meta.tsv and vecs.tsv (embeddings) to visualize it in tensorflow projector in spherical form
###Code
out_v = io.open("vecs.tsv" , mode = 'w' , encoding='utf-8')
out_m = io.open("meta.tsv" , mode = 'w' , encoding='utf-8')
for word_num in range(1,vocab_size):
word = reverse_word_index[word_num]
embeddings = embed_weights[word_num]
out_m.write(word + '\n')
out_v.write('\t'.join([str(x) for x in embeddings]) + '\n')
out_m.close()
out_v.close()
try:
from google.colab import files
except ImportError:
pass
else:
files.download('vecs.tsv')
files.download('meta.tsv')
###Output
_____no_output_____
###Markdown
Testing the model on different Sentences ( if y_hat is above 0.5 review has been predicted positive and below 0.5 is a negative predicted review) Creating function to convert sentences into padded sequences with the same hyperparameters
###Code
def get_pad_sequence(sentence_val):
sequence = tokenizer.texts_to_sequences([sentence_val])
padded_seq = pad_sequences(sequence , truncating = truncate , padding = pad , maxlen = max_length)
return padded_seq
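# Convenience helper (illustrative): map the sigmoid output to a label using the
# 0.5 threshold described above.
def predict_sentiment(sentence_val, threshold=0.5):
    prob = model.predict(get_pad_sequence(sentence_val))[0][0]
    return 'positive' if prob > threshold else 'negative'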
###Output
_____no_output_____
###Markdown
Trying Positive Review
###Code
sentence = "I really think this is amazing. honest."
padded_test_1 = get_pad_sequence(sentence)
###Output
_____no_output_____
###Markdown
0.99 means it's a very positive review, so the classifier does well at predicting positive reviews
###Code
model.predict(padded_test_1)
###Output
_____no_output_____
###Markdown
Trying Negative Review
###Code
sentence = "The movie was so boring , bad and not worth watching. I hated the movie and no one should have to sit through that"
padded_test_2 = get_pad_sequence(sentence)
model.predict(padded_test_2)
###Output
_____no_output_____
###Markdown
0.009 means it's a very negative review, so the model is correct. Analysis of Loss and Accuracy
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
num_epochs = range(len(acc))
def plot_graph(x,y1,y2 , string_lst):
plt.plot(x , y1 , x ,y2)
plt.title(string_lst[0])
    plt.xlabel(string_lst[2])  # x-axis is epochs
    plt.ylabel(string_lst[1])  # y-axis is the plotted metric
plt.show()
plt.figure()
plot_graph(num_epochs , acc , val_acc , ['Accuracy Plot' , 'Accuracy' , 'Epochs'])
plot_graph(num_epochs , loss , val_loss , ['Loss Plot' , 'Loss' , 'Epochs'])
###Output
_____no_output_____ |
ch_11/3-EDA_labeled_data.ipynb | ###Markdown
Exploratory data analysis with labeled dataNow that we have the labels for our data, we can do some initial EDA to see if there is something different between the hackers and the valid users. Setup
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sqlite3
with sqlite3.connect('logs/logs.db') as conn:
logs_2018 = pd.read_sql(
'SELECT * FROM logs WHERE datetime BETWEEN "2018-01-01" AND "2019-01-01";',
conn, parse_dates=['datetime'], index_col='datetime'
)
hackers_2018 = pd.read_sql(
'SELECT * FROM attacks WHERE start BETWEEN "2018-01-01" AND "2019-01-01";',
conn, parse_dates=['start', 'end']
).assign(
duration=lambda x: x.end - x.start,
start_floor=lambda x: x.start.dt.floor('min'),
end_ceil=lambda x: x.end.dt.ceil('min')
)
hackers_2018.head()
###Output
_____no_output_____
###Markdown
This function will tell us if the datetimes had hacker activity:
###Code
def check_if_hacker(datetimes, hackers, resolution='1min'):
"""
Check whether a hacker attempted a log in during that time.
Parameters:
- datetimes: The datetimes to check for hackers
- hackers: The dataframe indicating when the attacks started and stopped
- resolution: The granularity of the datetime. Default is 1 minute.
Returns:
A pandas Series of booleans.
"""
date_ranges = hackers.apply(
lambda x: pd.date_range(x.start_floor, x.end_ceil, freq=resolution),
axis=1
)
dates = pd.Series()
for date_range in date_ranges:
dates = pd.concat([dates, date_range.to_series()])
return datetimes.isin(dates)
###Output
_____no_output_____
###Markdown
Let's label our data for Q1 so we can look for a separation boundary:
###Code
users_with_failures = logs_2018['2018-Q1'].assign(
failures=lambda x: 1 - x.success
).query('failures > 0').resample('1min').agg(
{'username':'nunique', 'failures': 'sum'}
).dropna().rename(
columns={'username':'usernames_with_failures'}
)
labels = check_if_hacker(users_with_failures.reset_index().datetime, hackers_2018)
users_with_failures['flag'] = labels[:users_with_failures.shape[0]].values
users_with_failures.head()
###Output
_____no_output_____
###Markdown
Since we have the labels, we can draw a sample boundary that would separate most of the hackers from the valid users. Notice there is still at least one hacker in the valid users section of our separation:
###Code
ax = sns.scatterplot(
x=users_with_failures.usernames_with_failures,
y=users_with_failures.failures,
alpha=0.25,
hue=users_with_failures.flag
)
ax.plot([0, 8], [12, -4], 'r--', label='sample boundary')
plt.ylim(-4, None)
plt.legend()
plt.title('Usernames with failures on minute resolution')
###Output
_____no_output_____
###Markdown
Exploratory data analysis with labeled dataNow that we have the labels for our data, we can do some initial EDA to see if there is something different between the hackers and the valid users. Setup
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sqlite3
with sqlite3.connect('logs/logs.db') as conn:
logs_2018 = pd.read_sql(
'SELECT * FROM logs WHERE datetime BETWEEN "2018-01-01" AND "2019-01-01";',
conn, parse_dates=['datetime'], index_col='datetime'
)
hackers_2018 = pd.read_sql(
'SELECT * FROM attacks WHERE start BETWEEN "2018-01-01" AND "2019-01-01";',
conn, parse_dates=['start', 'end']
).assign(
duration=lambda x: x.end - x.start,
start_floor=lambda x: x.start.dt.floor('min'),
end_ceil=lambda x: x.end.dt.ceil('min')
)
hackers_2018.head()
###Output
_____no_output_____
###Markdown
This function will tell us if the datetimes had hacker activity:
###Code
def check_if_hacker(datetimes, hackers, resolution='1min'):
"""
Check whether a hacker attempted a log in during that time.
Parameters:
- datetimes: The datetimes to check for hackers
- hackers: The dataframe indicating when the attacks started and stopped
- resolution: The granularity of the datetime. Default is 1 minute.
Returns:
`pandas.Series` of Booleans.
"""
date_ranges = hackers.apply(
lambda x: pd.date_range(x.start_floor, x.end_ceil, freq=resolution),
axis=1
)
dates = pd.Series(dtype='object')
for date_range in date_ranges:
dates = pd.concat([dates, date_range.to_series()])
return datetimes.isin(dates)
###Output
_____no_output_____
###Markdown
Let's label our data for Q1 so we can look for a separation boundary:
###Code
users_with_failures = logs_2018.loc['2018-Q1'].assign(
failures=lambda x: 1 - x.success
).query('failures > 0').resample('1min').agg(
{'username':'nunique', 'failures': 'sum'}
).dropna().rename(
columns={'username':'usernames_with_failures'}
)
labels = check_if_hacker(users_with_failures.reset_index().datetime, hackers_2018)
users_with_failures['flag'] = labels[:users_with_failures.shape[0]].values
users_with_failures.head()
###Output
_____no_output_____
###Markdown
Since we have the labels, we can draw a sample boundary that would separate most of the hackers from the valid users:
###Code
ax = sns.scatterplot(
x=users_with_failures.usernames_with_failures,
y=users_with_failures.failures,
alpha=0.25,
hue=users_with_failures.flag
)
plt.ylim(-4, None)
ax.plot([-2, 5], [15, -2], 'r--', label='sample boundary')
# sort the legend entries
handles, labels = ax.get_legend_handles_labels()
labels, handles = zip(*sorted(zip(labels, handles), key=lambda t: t[0]))
ax.legend(handles, labels, title='flag')
plt.title('Usernames with failures on minute resolution')
###Output
_____no_output_____ |
time series data/1. univarite time series/air passengers/air_passengers_cnn-lstm.ipynb | ###Markdown
Load and process the data
###Code
df = pd.read_csv('AirPassengers.csv')
df['Month'] = pd.to_datetime(df['Month'])
df.head()
y = pd.Series(data=df['Passengers'].values, index=df['Month'])
y.head()
y.plot(figsize=(14, 6))
plt.show()
data = y.values.reshape(y.size,1)
###Output
_____no_output_____
###Markdown
LSTM Forecast Model LSTM Data Preparation
###Code
'MinMaxScaler'
scaler = MinMaxScaler(feature_range=(0, 1))
data = scaler.fit_transform(data)
train_size = int(len(data) * 0.7)
test_size = len(data) - train_size
train, test = data[0:train_size,:], data[train_size:len(data),:]
print('data train size :',train.shape[0])
print('data test size :',test.shape[0])
data.shape
'function to reshape data according to the number of lags'
def reshape_data (data, look_back,time_steps):
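    # Each sample uses `look_back` past values, split into `sub_seqs` sub-sequences of
    # `time_steps` steps each, to match the CNN-LSTM input shape
    # [samples, sub_sequences, time_steps, features].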
sub_seqs = int(look_back/time_steps)
dataX, dataY = [], []
for i in range(len(data)-look_back-1):
a = data[i:(i+look_back), 0]
dataX.append(a)
dataY.append(data[i + look_back, 0])
dataX = np.array(dataX)
dataY = np.array(dataY)
dataX = np.reshape(dataX,(dataX.shape[0],sub_seqs,time_steps,np.size(data,1)))
return dataX, dataY
look_back = 2
time_steps = 1
trainX, trainY = reshape_data(train, look_back,time_steps)
testX, testY = reshape_data(test, look_back,time_steps)
print('train shape :',trainX.shape)
print('test shape :',testX.shape)
###Output
train shape : (97, 2, 1, 1)
test shape : (41, 2, 1, 1)
###Markdown
Define and Fit the Model
###Code
model = Sequential()
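# TimeDistributed applies the Conv1D -> MaxPooling1D -> Flatten feature extractor to each
# sub-sequence independently; the LSTM then reads the resulting sequence of feature vectors.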
model.add(TimeDistributed(Conv1D(filters=64, kernel_size=1, activation='relu'), input_shape=(None, time_steps, 1)))
model.add(TimeDistributed(MaxPooling1D(pool_size=1)))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(50, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
history = model.fit(trainX, trainY, epochs=300, validation_data=(testX, testY), verbose=0)
'plot history'
plt.figure(figsize=(12,5))
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend()
plt.show()
'make predictions'
trainPredict = model.predict(trainX)
testPredict = model.predict(testX)
'invert predictions'
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])
'calculate root mean squared error'
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))
'shift train predictions for plotting'
trainPredictPlot = np.empty_like(data)
trainPredictPlot[:, :] = np.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict
'shift test predictions for plotting'
testPredictPlot = np.empty_like(data)
testPredictPlot[:, :] = np.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(data)-1, :] = testPredict
'Make as pandas series to plot'
data_series = pd.Series(scaler.inverse_transform(data).ravel(), index=df['Month'])
trainPredict_series = pd.Series(trainPredictPlot.ravel(), index=df['Month'])
testPredict_series = pd.Series(testPredictPlot.ravel(), index=df['Month'])
'plot baseline and predictions'
plt.figure(figsize=(15,6))
plt.plot(data_series,label = 'real')
plt.plot(trainPredict_series,label = 'predict_train')
plt.plot(testPredict_series,label = 'predict_test')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
SARIMA Model
###Code
y_train = y[:train_size]
y_test = y[train_size:]
###Output
_____no_output_____
###Markdown
Grid search the p, d, q parameters
###Code
'Define the p, d and q parameters to take any value between 0 and 3'
p = d = q = range(0, 3)
'Generate all different combinations of p, d and q triplets'
pdq = list(itertools.product(p, d, q))
# Generate all different combinations of seasonal p, d and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
warnings.filterwarnings("ignore") # specify to ignore warning messages
best_result = [0, 0, 10000000]
for param in pdq:
for param_seasonal in seasonal_pdq:
mod = sm.tsa.statespace.SARIMAX(y_train,order=param,
seasonal_order=param_seasonal,
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
print('ARIMA{} x {} - AIC: {}'.format(param, param_seasonal, results.aic))
if results.aic < best_result[2]:
best_result = [param, param_seasonal, results.aic]
print('\nBest Result:', best_result)
###Output
ARIMA(0, 0, 0) x (0, 0, 0, 12) - AIC: 1360.8887896501508
ARIMA(0, 0, 0) x (0, 0, 1, 12) - AIC: 1124.3605834274476
ARIMA(0, 0, 0) x (0, 0, 2, 12) - AIC: 927.015094210419
ARIMA(0, 0, 0) x (0, 1, 0, 12) - AIC: 856.8586491342998
ARIMA(0, 0, 0) x (0, 1, 1, 12) - AIC: 715.3862543108482
ARIMA(0, 0, 0) x (0, 1, 2, 12) - AIC: 2424.05821195876
ARIMA(0, 0, 0) x (0, 2, 0, 12) - AIC: 677.6305908202118
ARIMA(0, 0, 0) x (0, 2, 1, 12) - AIC: 545.3916144454995
ARIMA(0, 0, 0) x (0, 2, 2, 12) - AIC: 2208.5967678581574
ARIMA(0, 0, 0) x (1, 0, 0, 12) - AIC: 708.3477980505883
ARIMA(0, 0, 0) x (1, 0, 1, 12) - AIC: 677.035732345672
ARIMA(0, 0, 0) x (1, 0, 2, 12) - AIC: 593.8024898311235
ARIMA(0, 0, 0) x (1, 1, 0, 12) - AIC: 686.6828938232255
ARIMA(0, 0, 0) x (1, 1, 1, 12) - AIC: 636.5755932113505
ARIMA(0, 0, 0) x (1, 1, 2, 12) - AIC: 519.1095139949875
ARIMA(0, 0, 0) x (1, 2, 0, 12) - AIC: 560.8824483068595
ARIMA(0, 0, 0) x (1, 2, 1, 12) - AIC: 546.1873082653859
ARIMA(0, 0, 0) x (1, 2, 2, 12) - AIC: 2155.12356822405
ARIMA(0, 0, 0) x (2, 0, 0, 12) - AIC: 607.8081010582929
ARIMA(0, 0, 0) x (2, 0, 1, 12) - AIC: 602.4882287503307
ARIMA(0, 0, 0) x (2, 0, 2, 12) - AIC: 586.4838167299578
ARIMA(0, 0, 0) x (2, 1, 0, 12) - AIC: 561.9605298197137
ARIMA(0, 0, 0) x (2, 1, 1, 12) - AIC: 542.1833962218651
ARIMA(0, 0, 0) x (2, 1, 2, 12) - AIC: 521.1080538325623
ARIMA(0, 0, 0) x (2, 2, 0, 12) - AIC: 460.0929928638763
ARIMA(0, 0, 0) x (2, 2, 1, 12) - AIC: 460.1531429959472
ARIMA(0, 0, 0) x (2, 2, 2, 12) - AIC: 444.97054743062995
ARIMA(0, 0, 1) x (0, 0, 0, 12) - AIC: 1221.2179560475297
ARIMA(0, 0, 1) x (0, 0, 1, 12) - AIC: 1001.931049165728
ARIMA(0, 0, 1) x (0, 0, 2, 12) - AIC: 822.6360845869136
ARIMA(0, 0, 1) x (0, 1, 0, 12) - AIC: 774.1014443475727
ARIMA(0, 0, 1) x (0, 1, 1, 12) - AIC: 658.9271396006367
ARIMA(0, 0, 1) x (0, 1, 2, 12) - AIC: 2332.4709220970303
ARIMA(0, 0, 1) x (0, 2, 0, 12) - AIC: 644.1323231762209
ARIMA(0, 0, 1) x (0, 2, 1, 12) - AIC: 511.7339125269903
ARIMA(0, 0, 1) x (0, 2, 2, 12) - AIC: 1993.5327930476094
ARIMA(0, 0, 1) x (1, 0, 0, 12) - AIC: 679.1462699269271
ARIMA(0, 0, 1) x (1, 0, 1, 12) - AIC: 639.8272876213035
ARIMA(0, 0, 1) x (1, 0, 2, 12) - AIC: 560.1871968868732
ARIMA(0, 0, 1) x (1, 1, 0, 12) - AIC: 657.6546736142463
ARIMA(0, 0, 1) x (1, 1, 1, 12) - AIC: 603.8680020435264
ARIMA(0, 0, 1) x (1, 1, 2, 12) - AIC: 493.41949642737507
ARIMA(0, 0, 1) x (1, 2, 0, 12) - AIC: 538.8390763878019
ARIMA(0, 0, 1) x (1, 2, 1, 12) - AIC: 513.0164650306015
ARIMA(0, 0, 1) x (1, 2, 2, 12) - AIC: 2135.123573999866
ARIMA(0, 0, 1) x (2, 0, 0, 12) - AIC: 613.3880400603576
ARIMA(0, 0, 1) x (2, 0, 1, 12) - AIC: 590.3538297263591
ARIMA(0, 0, 1) x (2, 0, 2, 12) - AIC: 561.128056119609
ARIMA(0, 0, 1) x (2, 1, 0, 12) - AIC: 540.8212167168216
ARIMA(0, 0, 1) x (2, 1, 1, 12) - AIC: 522.4414465166531
ARIMA(0, 0, 1) x (2, 1, 2, 12) - AIC: 502.84457263286663
ARIMA(0, 0, 1) x (2, 2, 0, 12) - AIC: 431.5053740797726
ARIMA(0, 0, 1) x (2, 2, 1, 12) - AIC: 432.65098620313915
ARIMA(0, 0, 1) x (2, 2, 2, 12) - AIC: 416.55649673886154
ARIMA(0, 0, 2) x (0, 0, 0, 12) - AIC: 1117.0587559687897
ARIMA(0, 0, 2) x (0, 0, 1, 12) - AIC: 924.4580724279936
ARIMA(0, 0, 2) x (0, 0, 2, 12) - AIC: 759.3475263610297
ARIMA(0, 0, 2) x (0, 1, 0, 12) - AIC: 718.4978274845218
ARIMA(0, 0, 2) x (0, 1, 1, 12) - AIC: 609.272478000999
ARIMA(0, 0, 2) x (0, 1, 2, 12) - AIC: 2557.462044014673
ARIMA(0, 0, 2) x (0, 2, 0, 12) - AIC: 610.5288313561465
ARIMA(0, 0, 2) x (0, 2, 1, 12) - AIC: 479.8621509231858
ARIMA(0, 0, 2) x (0, 2, 2, 12) - AIC: 2414.411580894622
ARIMA(0, 0, 2) x (1, 0, 0, 12) - AIC: 654.0054149110563
ARIMA(0, 0, 2) x (1, 0, 1, 12) - AIC: 617.1339982329018
ARIMA(0, 0, 2) x (1, 0, 2, 12) - AIC: 538.8017082800209
ARIMA(0, 0, 2) x (1, 1, 0, 12) - AIC: 626.7325037606137
ARIMA(0, 0, 2) x (1, 1, 1, 12) - AIC: 573.9443105193089
ARIMA(0, 0, 2) x (1, 1, 2, 12) - AIC: 472.51407260530436
ARIMA(0, 0, 2) x (1, 2, 0, 12) - AIC: 516.4873484764807
ARIMA(0, 0, 2) x (1, 2, 1, 12) - AIC: 481.84006557070165
ARIMA(0, 0, 2) x (1, 2, 2, 12) - AIC: 2349.5466486426935
ARIMA(0, 0, 2) x (2, 0, 0, 12) - AIC: 563.3800684523475
ARIMA(0, 0, 2) x (2, 0, 1, 12) - AIC: 589.8321226723405
ARIMA(0, 0, 2) x (2, 0, 2, 12) - AIC: 549.9951735717103
ARIMA(0, 0, 2) x (2, 1, 0, 12) - AIC: 518.1564646680001
ARIMA(0, 0, 2) x (2, 1, 1, 12) - AIC: 505.9778246828015
ARIMA(0, 0, 2) x (2, 1, 2, 12) - AIC: 478.691847812154
ARIMA(0, 0, 2) x (2, 2, 0, 12) - AIC: 421.4616326605979
ARIMA(0, 0, 2) x (2, 2, 1, 12) - AIC: 418.5342806809597
ARIMA(0, 0, 2) x (2, 2, 2, 12) - AIC: 394.18377899336264
ARIMA(0, 1, 0) x (0, 0, 0, 12) - AIC: 900.556419909168
ARIMA(0, 1, 0) x (0, 0, 1, 12) - AIC: 743.5111285412091
ARIMA(0, 1, 0) x (0, 0, 2, 12) - AIC: 619.196372932136
ARIMA(0, 1, 0) x (0, 1, 0, 12) - AIC: 644.0889344474547
ARIMA(0, 1, 0) x (0, 1, 1, 12) - AIC: 557.9193719081528
ARIMA(0, 1, 0) x (0, 1, 2, 12) - AIC: 2109.672510235733
ARIMA(0, 1, 0) x (0, 2, 0, 12) - AIC: 627.3561323625744
ARIMA(0, 1, 0) x (0, 2, 1, 12) - AIC: 484.3303290512879
ARIMA(0, 1, 0) x (0, 2, 2, 12) - AIC: 2209.194781705146
ARIMA(0, 1, 0) x (1, 0, 0, 12) - AIC: 651.1606782432904
ARIMA(0, 1, 0) x (1, 0, 1, 12) - AIC: 624.9072049189873
ARIMA(0, 1, 0) x (1, 0, 2, 12) - AIC: 549.290656057059
ARIMA(0, 1, 0) x (1, 1, 0, 12) - AIC: 564.8049191219023
ARIMA(0, 1, 0) x (1, 1, 1, 12) - AIC: 560.1101970893297
ARIMA(0, 1, 0) x (1, 1, 2, 12) - AIC: 2651.801559605201
ARIMA(0, 1, 0) x (1, 2, 0, 12) - AIC: 510.664442855735
ARIMA(0, 1, 0) x (1, 2, 1, 12) - AIC: 487.18120244155654
ARIMA(0, 1, 0) x (1, 2, 2, 12) - AIC: 380.9035462876482
ARIMA(0, 1, 0) x (2, 0, 0, 12) - AIC: 559.6302191762979
ARIMA(0, 1, 0) x (2, 0, 1, 12) - AIC: 556.661740519176
ARIMA(0, 1, 0) x (2, 0, 2, 12) - AIC: 544.2894435875077
ARIMA(0, 1, 0) x (2, 1, 0, 12) - AIC: 479.67435099987983
ARIMA(0, 1, 0) x (2, 1, 1, 12) - AIC: 481.94206817708834
ARIMA(0, 1, 0) x (2, 1, 2, 12) - AIC: 2026.5287429926323
ARIMA(0, 1, 0) x (2, 2, 0, 12) - AIC: 398.2717682168705
ARIMA(0, 1, 0) x (2, 2, 1, 12) - AIC: 397.90019157840305
ARIMA(0, 1, 0) x (2, 2, 2, 12) - AIC: 385.900448095168
ARIMA(0, 1, 1) x (0, 0, 0, 12) - AIC: 886.9123851909636
ARIMA(0, 1, 1) x (0, 0, 1, 12) - AIC: 734.7180409024656
ARIMA(0, 1, 1) x (0, 0, 2, 12) - AIC: 612.5501043177705
ARIMA(0, 1, 1) x (0, 1, 0, 12) - AIC: 633.006196762335
ARIMA(0, 1, 1) x (0, 1, 1, 12) - AIC: 547.4685575046775
ARIMA(0, 1, 1) x (0, 1, 2, 12) - AIC: 2105.121840396246
ARIMA(0, 1, 1) x (0, 2, 0, 12) - AIC: 612.691916921501
ARIMA(0, 1, 1) x (0, 2, 1, 12) - AIC: 467.87984885215536
ARIMA(0, 1, 1) x (0, 2, 2, 12) - AIC: 2155.099916679922
ARIMA(0, 1, 1) x (1, 0, 0, 12) - AIC: 642.7082544285727
ARIMA(0, 1, 1) x (1, 0, 1, 12) - AIC: 609.7296092686094
ARIMA(0, 1, 1) x (1, 0, 2, 12) - AIC: 534.4159619204312
ARIMA(0, 1, 1) x (1, 1, 0, 12) - AIC: 561.6179770987012
ARIMA(0, 1, 1) x (1, 1, 1, 12) - AIC: 549.4676267350999
ARIMA(0, 1, 1) x (1, 1, 2, 12) - AIC: 2113.3263913961073
ARIMA(0, 1, 1) x (1, 2, 0, 12) - AIC: 500.82452657927655
ARIMA(0, 1, 1) x (1, 2, 1, 12) - AIC: 468.3158748679183
ARIMA(0, 1, 1) x (1, 2, 2, 12) - AIC: 375.58840725995344
ARIMA(0, 1, 1) x (2, 0, 0, 12) - AIC: 548.2295939975477
ARIMA(0, 1, 1) x (2, 0, 1, 12) - AIC: 547.7949506152927
ARIMA(0, 1, 1) x (2, 0, 2, 12) - AIC: 527.8825692629185
ARIMA(0, 1, 1) x (2, 1, 0, 12) - AIC: 479.62320021415445
ARIMA(0, 1, 1) x (2, 1, 1, 12) - AIC: 481.6231970569141
ARIMA(0, 1, 1) x (2, 1, 2, 12) - AIC: 2093.2339763095592
ARIMA(0, 1, 1) x (2, 2, 0, 12) - AIC: 395.9376695386184
ARIMA(0, 1, 1) x (2, 2, 1, 12) - AIC: 394.28674800226815
ARIMA(0, 1, 1) x (2, 2, 2, 12) - AIC: 372.0533764644165
ARIMA(0, 1, 2) x (0, 0, 0, 12) - AIC: 874.9295150228447
ARIMA(0, 1, 2) x (0, 0, 1, 12) - AIC: 728.1144558717069
ARIMA(0, 1, 2) x (0, 0, 2, 12) - AIC: 601.687866254928
ARIMA(0, 1, 2) x (0, 1, 0, 12) - AIC: 628.5209373723821
ARIMA(0, 1, 2) x (0, 1, 1, 12) - AIC: 542.6195981613755
ARIMA(0, 1, 2) x (0, 1, 2, 12) - AIC: 2312.641169062299
ARIMA(0, 1, 2) x (0, 2, 0, 12) - AIC: 607.2859111461836
ARIMA(0, 1, 2) x (0, 2, 1, 12) - AIC: 463.12719432935444
ARIMA(0, 1, 2) x (0, 2, 2, 12) - AIC: 2164.2150815160426
ARIMA(0, 1, 2) x (1, 0, 0, 12) - AIC: 644.7049495535867
ARIMA(0, 1, 2) x (1, 0, 1, 12) - AIC: 605.3823833242819
ARIMA(0, 1, 2) x (1, 0, 2, 12) - AIC: 529.664261271502
ARIMA(0, 1, 2) x (1, 1, 0, 12) - AIC: 563.6173816211144
###Markdown
Plot model diagnostics
###Code
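# Refit SARIMAX using the best (p, d, q) x (P, D, Q, s) orders found by the grid search above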
mod = sm.tsa.statespace.SARIMAX(y_train,
order=(best_result[0][0], best_result[0][1], best_result[0][2]),
seasonal_order=(best_result[1][0], best_result[1][1], best_result[1][2], best_result[1][3]),
enforce_stationarity=False,
enforce_invertibility=False)
results = mod.fit()
print(results.summary())
results.plot_diagnostics(figsize=(15, 12))
plt.show()
'make predictions'
pred = results.get_prediction(start=pd.to_datetime('1949-02-01'), dynamic=False,full_results=True)
pred_ci = pred.conf_int()
ax = y_train.plot(label='Observed', figsize=(15, 6))
pred.predicted_mean.plot(ax=ax, label='predicted', alpha=.7)
ax.set_xlabel('Date')
ax.set_ylabel('Airline Passengers')
plt.legend()
plt.show()
pred_uc = results.get_forecast(steps=44)
ax = y_train.plot(label='train', figsize=(15,6))
pred_uc.predicted_mean.plot(ax=ax, label='Forecast')
y_test.plot(ax=ax, label='test')
ax.set_xlabel('Date')
ax.set_ylabel('Airline Passengers')
plt.legend()
plt.show()
trainScore = math.sqrt(mean_squared_error(y_train[1:], pred.predicted_mean))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(y_test, pred_uc.predicted_mean))
print('Test Score: %.2f RMSE' % (testScore))
###Output
Train Score: 21.00 RMSE
Test Score: 69.46 RMSE
|
practice/imdb/Simple IMDB Classification.ipynb | ###Markdown
Setting up random state for reproducibility
###Code
RANDOM_STATE = 1234
np.random.seed(RANDOM_STATE)
###Output
_____no_output_____
###Markdown
Setting up logger
###Code
# Logging configuration
logger = logging.getLogger(__name__)
handler = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(name)-5s %(levelname)-5s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
###Output
_____no_output_____
###Markdown
Restrict maximum featuresWe restrict the maximum number of features a.k.a. our inputs to be 5000. So only top 5000 words will be chosen from IMDB dataset. `load_data` automatically does a 50:50 train test split.
###Code
max_features = 5000
max_review_length = 300
from keras.datasets import imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
logger.debug('Length of X_train: %(len)s', {'len': len(x_train)})
logger.debug('Length of X_test: %(len)s', {'len': len(x_test)})
from keras.preprocessing import sequence
X_train = sequence.pad_sequences(x_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(x_test, maxlen=max_review_length)
logger.debug('Shape of X_train: %(shape)s', {'shape': X_train.shape})
logger.debug('Shape of X_test: %(shape)s', {'shape': X_test.shape})
###Output
2018-04-18 12:58:31,854 __main__ DEBUG Shape of X_train: (25000, 300)
2018-04-18 12:58:31,855 __main__ DEBUG Shape of X_test: (25000, 300)
###Markdown
Simple LSTM
###Code
from keras.models import Sequential
from keras.layers import Dense, LSTM
from keras.layers.embeddings import Embedding
model = Sequential()
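# 128-dim word embeddings -> single LSTM layer with dropout -> sigmoid unit for binary sentiment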
model.add(Embedding(max_features, 128, embeddings_initializer='glorot_normal'))
model.add(LSTM(128, dropout = 0.3, recurrent_dropout=0.3))
model.add(Dense(1, activation='sigmoid', kernel_initializer='glorot_normal'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(X_train, y_train, batch_size=64, epochs=10, validation_data=(X_test, y_test))
###Output
Train on 25000 samples, validate on 25000 samples
Epoch 1/10
25000/25000 [==============================] - 157s 6ms/step - loss: 0.5051 - acc: 0.7530 - val_loss: 0.4462 - val_acc: 0.7906
Epoch 2/10
25000/25000 [==============================] - 159s 6ms/step - loss: 0.4020 - acc: 0.8234 - val_loss: 0.3865 - val_acc: 0.8318
Epoch 3/10
25000/25000 [==============================] - 152s 6ms/step - loss: 0.3524 - acc: 0.8521 - val_loss: 0.3556 - val_acc: 0.8510
Epoch 4/10
25000/25000 [==============================] - 153s 6ms/step - loss: 0.3184 - acc: 0.8700 - val_loss: 0.3487 - val_acc: 0.8590
Epoch 5/10
25000/25000 [==============================] - 153s 6ms/step - loss: 0.2827 - acc: 0.8853 - val_loss: 0.3221 - val_acc: 0.8686
Epoch 6/10
25000/25000 [==============================] - 165s 7ms/step - loss: 0.2582 - acc: 0.8963 - val_loss: 0.4479 - val_acc: 0.8201
Epoch 7/10
25000/25000 [==============================] - 153s 6ms/step - loss: 0.2435 - acc: 0.9041 - val_loss: 0.3676 - val_acc: 0.8679
Epoch 8/10
25000/25000 [==============================] - 152s 6ms/step - loss: 0.2109 - acc: 0.9167 - val_loss: 0.3411 - val_acc: 0.8718
Epoch 9/10
25000/25000 [==============================] - 152s 6ms/step - loss: 0.1872 - acc: 0.9254 - val_loss: 0.3356 - val_acc: 0.8716
Epoch 10/10
25000/25000 [==============================] - 152s 6ms/step - loss: 0.1641 - acc: 0.9370 - val_loss: 0.3713 - val_acc: 0.8756
|
docs/allcools/cluster_level/RegionDS/01b.other_option_to_init_region_ds.ipynb | ###Markdown
Alternative Way to Create RegionDS Parse Methylpy DMRfind outputIf you used the [`methylpy DMRfind`](https://github.com/yupenghe/methylpy) function to identify DMRs, you can create a {{ RegionDS }} by running {func}`methylpy_to_region_ds `
###Code
from ALLCools.mcds import RegionDS
from ALLCools.dmr.parse_methylpy import methylpy_to_region_ds
# DMR output of methylpy DMRfind
methylpy_dmr = '../../data/HIPBulk/DMR/snmC_CT/_rms_results_collapsed.tsv'
methylpy_to_region_ds(dmr_path=methylpy_dmr, output_dir='test_HIP_methylpy')
RegionDS.open('test_HIP_methylpy', region_dim='dmr')
###Output
_____no_output_____
###Markdown
Create RegionDS from a BED fileYou can create an empty {{ RegionDS }} with a BED file, with only the region coordinates recorded. You can then perform annotation, motif scan and further analysis using the methods described in the following sections.The BED file contains three columns: 1. chrom: required2. start: required3. end: required4. region_id: optional, but recommended to have. If not provided, RegionDS will automatically generate `f"{region_dim}_{i_row}"` as region_id. region_id must be unique.You also need to provide a `chrom_size_path` which tells RegionDS the sizes of your chromosomes.```{important} About BED SortingRegion order matters throughout the genomic analysis. The best practice is to sort your BED file according to the `chrom_size_path` you are providing. If your BED file is already sorted, you can set `sort_bed=False`, which is True by default```
###Code
# example BED file with region ID
!head test_from_bed_func.bed
bed_region_ds = RegionDS.from_bed(
bed='test_from_bed_func.bed',
location='test_from_bed_RegionDS',
chrom_size_path='../../data/genome/mm10.main.nochrM.chrom.sizes',
region_dim='bed_region',
# True by default, set to False if bed is already sorted
sort_bed=True)
# the RegionDS is stored at {location}
RegionDS.open('test_from_bed_RegionDS')
###Output
Using bed_region as region_dim
|
Pandas-NumPy/Video Game Sales Task.ipynb | ###Markdown
Video Game Sales Task 1. Import pandas module as pd
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
2. Create variable vgs and read vgsales.csv file as dataframe in it
###Code
vgs = pd.read_csv('vgsales.csv')
###Output
_____no_output_____
###Markdown
3. Get first 10 rows from the dataframe
###Code
vgs.head(10)
###Output
_____no_output_____
###Markdown
4. Use info() method to know the information about number of entries in vgs dataframe
###Code
vgs.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 16598 entries, 0 to 16597
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Rank 16598 non-null int64
1 Name 16598 non-null object
2 Platform 16598 non-null object
3 Year 16327 non-null float64
4 Genre 16598 non-null object
5 Publisher 16540 non-null object
6 NA_Sales 16598 non-null float64
7 EU_Sales 16598 non-null float64
8 JP_Sales 16598 non-null float64
9 Other_Sales 16598 non-null float64
10 Global_Sales 16598 non-null float64
dtypes: float64(6), int64(1), object(4)
memory usage: 1.4+ MB
###Markdown
5. Get average value of sales in Europe
###Code
vgs['EU_Sales'].mean()
###Output
_____no_output_____
###Markdown
6. Get the highest value of sales in Japan
###Code
vgs['JP_Sales'].max()
###Output
_____no_output_____
###Markdown
7. What is the genre of "Brain Age 2: More Training in Minutes a Day" video game?
###Code
vgs[vgs['Name'] == 'Brain Age 2: More Training in Minutes a Day']['Genre']
###Output
_____no_output_____
###Markdown
8. What is the amount of sales "Grand Theft Auto: Vice City" video game around the world?
###Code
vgs[vgs['Name'] == 'Grand Theft Auto: Vice City']['Global_Sales']
###Output
_____no_output_____
###Markdown
9. Get the name of the video game which has the highest sales in North America
###Code
vgs[vgs['NA_Sales'] == vgs['NA_Sales'].max()]['Name']
###Output
_____no_output_____
###Markdown
10. Get the name of video game which has the smallest sales around the world
###Code
vgs[vgs['Global_Sales'] == vgs['Global_Sales'].min()]['Name']
###Output
_____no_output_____
###Markdown
11. What is the average value of sales of all video games per genre in Japan?
###Code
vgs.groupby('Genre')['JP_Sales'].mean()
###Output
_____no_output_____
###Markdown
12. How many unique names of video games in this dataframe?
###Code
vgs['Name'].nunique()
###Output
_____no_output_____
###Markdown
13. Get the 3 most common genres of video games worldwide
###Code
vgs['Genre'].value_counts().head(3)
###Output
_____no_output_____
###Markdown
14. How many video games have "super" word in their names?
###Code
len(vgs[vgs['Name'].apply(lambda x: 'super ' in x.lower()) == True])
###Output
_____no_output_____ |
tutorials/v1.0/4_how_to_make_configurations.ipynb | ###Markdown
How to make configurations. For FuxiCTR v1.0.x only. This tutorial presents the details of how to use the YAML config files. The dataset_config contains the following keys:
+ **dataset_id**: the key used to denote a dataset split, e.g., taobao_tiny_data
+ **data_root**: the directory to save or load the h5 dataset files
+ **data_format**: csv | h5
+ **train_data**: training data file path
+ **valid_data**: validation data file path
+ **test_data**: test data file path
+ **min_categr_count**: the default threshold used to filter rare features
+ **feature_cols**: a list of feature columns, each containing the following keys
 - **name**: feature name, i.e., column header name.
 - **active**: True | False, whether to use the feature.
 - **dtype**: the data type of this column.
 - **type**: categorical | numeric | sequence, which type of feature.
 - **source**: optional, which feature source, such as user/item/context.
 - **share_embedding**: optional, specify which feature_name to share embedding with.
 - **embedding_dim**: optional, embedding dim of a specific field, overriding the default embedding_dim if used.
 - **pretrained_emb**: optional, filepath of a pretrained embedding, which should be an h5 file with two columns (id, emb).
 - **freeze_emb**: optional, True | False, whether to freeze the embedding if pretrained_emb is used.
 - **encoder**: optional, "MaskedAveragePooling" | "MaskedSumPooling" | "null", specify how to pool the sequence feature. "MaskedAveragePooling" is used by default. "null" means no pooling is required.
 - **splitter**: optional, how to split the sequence feature during preprocessing; the space " " is used by default.
 - **max_len**: optional, the max length set to pad or truncate the sequence features. If not specified, the max length of all the training samples will be used.
 - **padding**: optional, "pre" | "post", either pre padding or post padding the sequence.
 - **na_value**: optional, what value is used to fill the missing entries of a field; "" is used by default.
+ **label_col**: label name, i.e., the column header of the label
 - **name**: the column header name for the label
 - **dtype**: the data type
The model_config contains the following keys:
+ **expid**: the key used to denote an experiment id, e.g., DeepFM_test. Each expid corresponds to a dataset_id and the model hyper-parameters used for the experiment.
+ **model_root**: the directory to save or load the model checkpoints and running logs.
+ **workers**: the number of processes used for the data generator.
+ **verbose**: 0 for disabling the tqdm progress bar; 1 for enabling the tqdm progress bar.
+ **patience**: how many epochs to wait before stopping training if no improvements are made.
+ **pickle_feature_encoder**: True | False, whether to pickle the feature_encoder.
+ **use_hdf5**: True | False, whether to reuse h5 data if available.
+ **save_best_only**: True | False, whether to save the best model weights only.
+ **every_x_epochs**: how many epochs to evaluate the model on the validation set, float supported. For example, 0.5 denotes to evaluate every half epoch.
+ **debug**: True | False, whether to enable debug mode. If enabled, every run will generate a new expid to avoid conflicted runs on two code versions. 
+ **model**: model name used to load the specific model class
+ **dataset_id**: the dataset_id used for the experiment
+ **loss**: currently supports "binary_crossentropy" only.
+ **metrics**: list, currently supports ['logloss', 'AUC'] only
+ **task**: currently supports "binary_classification" only
+ **optimizer**: "adam" is used by default
+ **learning_rate**: the initial learning rate
+ **batch_size**: the batch size for model training
+ **embedding_dim**: the default embedding dim for all feature fields. It will be ignored if a feature has its own embedding_dim value.
+ **epochs**: the max number of epochs for model training
+ **shuffle**: True | False, whether to shuffle data for each epoch
+ **seed**: int, fix the random seed for reproducibility
+ **monitor**: 'AUC' | 'logloss' | {'AUC': 1, 'logloss': -1}, the metric used to determine early stopping. The dict can be used to combine multiple metrics. E.g., {'AUC': 2, 'logloss': -1} means 2 * AUC - logloss and the larger the better.
+ **monitor_mode**: 'max' | 'min', the mode of the metric. E.g., 'max' for AUC and 'min' for logloss.
There are also some model-specific hyper-parameters. E.g., DeepFM has the following specific hyper-parameters:
+ **hidden_units**: list, hidden units of the MLP
+ **hidden_activations**: str or list, e.g., 'relu' or ['relu', 'tanh']. When each layer has the same activation, one could use str; otherwise use a list to set activations for each layer.
+ **net_regularizer**: regularization weight for the MLP, supporting different types such as 1.e-8 | l2(1.e-8) | l1(1.e-8) | l1_l2(1.e-8, 1.e-8). l2 norm is used by default.
+ **embedding_regularizer**: regularization weight for feature embeddings, supporting different types such as 1.e-8 | l2(1.e-8) | l1(1.e-8) | l1_l2(1.e-8, 1.e-8). l2 norm is used by default.
+ **net_dropout**: dropout rate for the MLP, e.g., 0.1 denotes that hidden values are dropped randomly with 10% probability.
+ **batch_norm**: False | True, whether to apply batch normalization on the MLP.
Many config files are available at https://github.com/xue-pai/FuxiCTR/tree/main/config for your reference. Here, we take the config [demo/demo_config](https://github.com/xue-pai/FuxiCTR/tree/main/demo/demo_config) as an example. The dataset_config.yaml and model_config.yaml are as follows.
###Code
# dataset_config.yaml
taobao_tiny_data: # dataset_id
data_root: ../data/
data_format: csv
train_data: ../data/tiny_data/train_sample.csv
valid_data: ../data/tiny_data/valid_sample.csv
test_data: ../data/tiny_data/test_sample.csv
min_categr_count: 1
feature_cols:
- {name: ["userid","adgroup_id","pid","cate_id","campaign_id","customer","brand","cms_segid",
"cms_group_id","final_gender_code","age_level","pvalue_level","shopping_level","occupation"],
active: True, dtype: str, type: categorical}
label_col: {name: clk, dtype: float}
###Output
_____no_output_____
###Markdown
Note that we merge the feature_cols with the same config settings for compactness. But we also could expand them as shown below.
###Code
taobao_tiny_data:
data_root: ../data/
data_format: csv
train_data: ../data/tiny_data/train_sample.csv
valid_data: ../data/tiny_data/valid_sample.csv
test_data: ../data/tiny_data/test_sample.csv
min_categr_count: 1
feature_cols:
[{name: "userid", active: True, dtype: str, type: categorical},
{name: "adgroup_id", active: True, dtype: str, type: categorical},
{name: "pid", active: True, dtype: str, type: categorical},
{name: "cate_id", active: True, dtype: str, type: categorical},
{name: "campaign_id", active: True, dtype: str, type: categorical},
{name: "customer", active: True, dtype: str, type: categorical},
{name: "brand", active: True, dtype: str, type: categorical},
{name: "cms_segid", active: True, dtype: str, type: categorical},
{name: "cms_group_id", active: True, dtype: str, type: categorical},
{name: "final_gender_code", active: True, dtype: str, type: categorical},
{name: "age_level", active: True, dtype: str, type: categorical},
{name: "pvalue_level", active: True, dtype: str, type: categorical},
{name: "shopping_level", active: True, dtype: str, type: categorical},
{name: "occupation", active: True, dtype: str, type: categorical}]
label_col: {name: clk, dtype: float}
###Output
_____no_output_____
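###Markdown
For illustration only: a hypothetical feature_cols entry (the field name and values below are invented, not part of the demo dataset) showing how the optional sequence-feature keys described above, such as encoder, splitter, max_len and share_embedding, would be written.
###Code
# hypothetical sequence feature, for syntax illustration only
feature_cols:
    - {name: "click_history", active: True, dtype: str, type: sequence, splitter: " ",
       max_len: 64, padding: "pre", encoder: "MaskedAveragePooling", share_embedding: "adgroup_id"}
###Output
_____no_output_____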
###Markdown
The following model config contains two parts. When `Base` is available, the base settings will be shared by all expids. The base settings can be also overridden in expid with the same key. This design is for compactness when a large group of model configs are available, as shown in `./config` folder. `Base` and expid `DeepFM_test` can be either put in the same `model_config.yaml` file or the same `model_config` directory. Note that in any case, each expid should be unique among all the expids.
###Code
# model_config.yaml
Base:
model_root: '../checkpoints/'
workers: 3
verbose: 1
patience: 2
pickle_feature_encoder: True
use_hdf5: True
save_best_only: True
every_x_epochs: 1
debug: False
DeepFM_test:
model: DeepFM
dataset_id: taobao_tiny_data # each expid corresponds to a dataset_id
loss: 'binary_crossentropy'
metrics: ['logloss', 'AUC']
task: binary_classification
optimizer: adam
hidden_units: [64, 32]
hidden_activations: relu
net_regularizer: 0
embedding_regularizer: 1.e-8
learning_rate: 1.e-3
batch_norm: False
net_dropout: 0
batch_size: 128
embedding_dim: 4
epochs: 1
shuffle: True
seed: 2019
monitor: 'AUC'
monitor_mode: 'max'
###Output
_____no_output_____
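###Markdown
As a quick usage sketch (an assumption to verify against your installed FuxiCTR version, not part of the original tutorial): the two YAML files are typically consumed in a run script through `load_config`, which takes the config directory and an expid; the merging behaviour is described below.
###Code
# Illustrative sketch only; assumes the v1.0 helper fuxictr.utils.load_config(config_dir, expid)
from fuxictr.utils import load_config

params = load_config('./demo_config/', 'DeepFM_test')  # returns the merged settings as a dict
print(params['model'], params['dataset_id'])
###Output
_____no_output_____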
###Markdown
The `load_config` method will automatically merge the above two parts. If you prefer, it is also flexible to remove `Base` and declare all the settings using only one dict as below.
###Code
DeepFM_test:
model_root: '../checkpoints/'
workers: 3
verbose: 1
patience: 2
pickle_feature_encoder: True
use_hdf5: True
save_best_only: True
every_x_epochs: 1
debug: False
model: DeepFM
dataset_id: taobao_tiny_data
loss: 'binary_crossentropy'
metrics: ['logloss', 'AUC']
task: binary_classification
optimizer: adam
hidden_units: [64, 32]
hidden_activations: relu
net_regularizer: 0
embedding_regularizer: 1.e-8
learning_rate: 1.e-3
batch_norm: False
net_dropout: 0
batch_size: 128
embedding_dim: 4
epochs: 1
shuffle: True
seed: 2019
monitor: 'AUC'
monitor_mode: 'max'
###Output
_____no_output_____ |
tf-data-argumentation/.ipynb_checkpoints/01_Data_Augmentation-checkpoint.ipynb | ###Markdown
Data Augmentation - `TensorFlow`. A technique to increase the diversity of the training set by applying random (but realistic) transformations such as image rotation. Imports.
###Code
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow import keras
###Output
_____no_output_____
###Markdown
> Loading the dataset. We are going to use the `tf_flowers` dataset.
###Code
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
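# tf_flowers ships only a single 'train' split, so the slicing API above carves out
# 80% / 10% / 10% train / validation / test subsets.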
class_names = [metadata.features['label'].int2str(i) for i in range(5)]
class_names
###Output
_____no_output_____
###Markdown
The Keras preprocessing layers. The Keras preprocessing layers API allows us to build Keras-native input processing pipelines.* We can preprocess inputs using the `Sequential` API and then pass them as a layer in a `NN`.* [Preprocessing Text, images, etc](https://keras.io/guides/preprocessing_layers/) using keras
###Code
image, label = next(iter(train_ds))
#image, label
plt.imshow(image)
plt.title(class_names[label])
plt.show()
###Output
_____no_output_____
###Markdown
Now we want to do transformations on this image.
###Code
IMG_SIZE = 180
resize_and_rescale = keras.Sequential([
keras.layers.experimental.preprocessing.Resizing(IMG_SIZE, IMG_SIZE),
keras.layers.experimental.preprocessing.Rescaling(1./255)
])
image_output = resize_and_rescale(image)
plt.imshow(image_output)
plt.title(class_names[label])
plt.show()
data_augmentation = tf.keras.Sequential([
keras.layers.experimental.preprocessing.RandomFlip("horizontal_and_vertical"),
keras.layers.experimental.preprocessing.RandomRotation(0.5),
])
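# Each call to data_augmentation draws new random flip/rotation parameters, so repeated
# calls on the same image (as in the loop below) produce different augmented versions.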
plt.figure(figsize=(10, 10))
for i in range(9):
augmented_image = data_augmentation(tf.expand_dims(image, 0))
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_image[0])
plt.axis("off")
###Output
_____no_output_____
###Markdown
As we can see, from a single image we now have different looking images of the same flower.* [TF Tutorial Docs](https://keras.io/guides/preprocessing_layers/)* [Keras Docs](https://www.tensorflow.org/tutorials/images/data_augmentation) ``tf.image`` Data augmentation.* [tf.image](https://www.tensorflow.org/api_docs/python/tf/image)This is the recommended way of processing images, since the `keras.Sequential` preprocessing layers are still experimental.* You have to define a function that performs a transformation on an image and returns the processed image.* We can use the ``map`` function to apply the transformation to all the images in the dataset.
```python
def augment_fn(image, label):
    image = tf.image.resize(image, (224, 224))
    image = tf.image.random_brightness(image, 0.1)
    ....
    return image, label

# All images will be transformed by the augmentations defined in the fn
transformed_images = train_ds.map(augment_fn)
```
* all the transforms that we can apply are found [here](https://www.tensorflow.org/api_docs/python/tf/image) Example: > Grayscale image.
###Code
def augment(image):
return tf.image.rgb_to_grayscale(image)
plt.imshow(augment(image), cmap="gray")
plt.title(class_names[label])
plt.show()
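# Illustration (not in the original notebook): to augment the whole pipeline rather than a
# single image, map a function over the (image, label) pairs of the tf.data dataset.
augmented_train_ds = train_ds.map(
    lambda img, lbl: (tf.image.random_flip_left_right(tf.image.resize(img, (IMG_SIZE, IMG_SIZE))), lbl)
)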
###Output
_____no_output_____ |
2 - The Android App Market on Google Play/The Android App Market on Google Play.ipynb | ###Markdown
1. Google Play Store apps and reviewsMobile apps are everywhere. They are easy to create and can be lucrative. Because of these two factors, more and more apps are being developed. In this notebook, we will do a comprehensive analysis of the Android app market by comparing over ten thousand apps in Google Play across different categories. We'll look for insights in the data to devise strategies to drive growth and retention.Let's take a look at the data, which consists of two files:apps.csv: contains all the details of the applications on Google Play. There are 13 features that describe a given app.user_reviews.csv: contains 100 reviews for each app, most helpful first. The text in each review has been pre-processed and attributed with three new features: Sentiment (Positive, Negative or Neutral), Sentiment Polarity and Sentiment Subjectivity.
###Code
# Read in dataset
import pandas as pd
apps_with_duplicates = pd.read_csv("datasets/apps.csv")
# Drop duplicates from apps_with_duplicates
apps = apps_with_duplicates.drop_duplicates()
# Print the total number of apps
print('Total number of apps in the dataset = ', len(apps_with_duplicates))
# Have a look at a random sample of 5 rows
print(apps_with_duplicates.sample(5))
###Output
Total number of apps in the dataset = 9659
Unnamed: 0 App \
8495 9631 NewsDog - Latest News, Breaking News, Local News
4939 5928 Arabic Alif Ba Ta For Kids
3210 4037 Fruit Ninja®
5665 6696 Top BR Chaya Songs
6323 7370 mySharpBranded CI Test
Category Rating Reviews Size Installs Type Price \
8495 NEWS_AND_MAGAZINES 4.4 291901 4.3 10,000,000+ Free 0
4939 FAMILY 4.5 226 26.0 100,000+ Free 0
3210 GAME 4.3 5091448 41.0 100,000,000+ Free 0
5665 FAMILY NaN 0 3.8 10+ Free 0
6323 TOOLS NaN 0 0.9 1+ Free 0
Content Rating Genres Last Updated Current Ver \
8495 Mature 17+ News & Magazines July 20, 2018 2.6.0
4939 Everyone Education;Education May 30, 2017 1.0.0
3210 Everyone Arcade July 12, 2018 2.6.7.487220
5665 Everyone Entertainment May 18, 2018 1
6323 Everyone Tools October 4, 2017 1.0.7
Android Ver
8495 4.1 and up
4939 2.3 and up
3210 4.1 and up
5665 4.0.3 and up
6323 4.2 and up
###Markdown
2. Data cleaning. Data cleaning is one of the most essential subtasks in any data science project. Although it can be a very tedious process, its worth should never be underestimated. By looking at a random sample of the dataset rows (from the above task), we observe that some entries in columns like Installs and Price have a few special characters (+ , $) due to the way the numbers have been represented. This prevents the columns from being purely numeric, making it difficult to use them in subsequent mathematical calculations. Ideally, as their names suggest, we would want these columns to contain only digits from [0-9]. Hence, we now proceed to clean our data. Specifically, the special characters , and + present in the Installs column and $ present in the Price column need to be removed. It is also always good practice to print a summary of your dataframe after completing data cleaning. We will use the info() method to achieve this.
###Code
# List of characters to remove
chars_to_remove = ['+', ',', '$']
# List of column names to clean
cols_to_clean = ['Installs', 'Price']
# Loop for each column in cols_to_clean
for col in cols_to_clean:
# Loop for each char in chars_to_remove
for char in chars_to_remove:
# Replace the character with an empty string
apps[col] = apps[col].apply(lambda x: x.replace(char, ''))
# Print a summary of the apps dataframe
print(apps.info())
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 9659 entries, 0 to 9658
Data columns (total 14 columns):
Unnamed: 0 9659 non-null int64
App 9659 non-null object
Category 9659 non-null object
Rating 8196 non-null float64
Reviews 9659 non-null int64
Size 8432 non-null float64
Installs 9659 non-null object
Type 9659 non-null object
Price 9659 non-null object
Content Rating 9659 non-null object
Genres 9659 non-null object
Last Updated 9659 non-null object
Current Ver 9651 non-null object
Android Ver 9657 non-null object
dtypes: float64(2), int64(2), object(10)
memory usage: 1.1+ MB
None
###Markdown
3. Correcting data typesFrom the previous task we noticed that Installs and Price were categorized as object data type (and not int or float) as we would like. This is because these two columns originally had mixed input types: digits and special characters. To know more about Pandas data types, read this.The four features that we will be working with most frequently henceforth are Installs, Size, Rating and Price. While Size and Rating are both float (i.e. purely numerical data types), we still need to work on Installs and Price to make them numeric.
###Code
import numpy as np
# Convert Installs to float data type
apps['Installs'] = apps['Installs'].astype(float)
# Convert Price to float data type
apps['Price'] = apps['Price'].astype(float)
# Checking dtypes of the apps dataframe
print(apps.info())
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 9659 entries, 0 to 9658
Data columns (total 14 columns):
Unnamed: 0 9659 non-null int64
App 9659 non-null object
Category 9659 non-null object
Rating 8196 non-null float64
Reviews 9659 non-null int64
Size 8432 non-null float64
Installs 9659 non-null float64
Type 9659 non-null object
Price 9659 non-null float64
Content Rating 9659 non-null object
Genres 9659 non-null object
Last Updated 9659 non-null object
Current Ver 9651 non-null object
Android Ver 9657 non-null object
dtypes: float64(4), int64(2), object(8)
memory usage: 1.1+ MB
None
###Markdown
4. Exploring app categoriesWith more than 1 billion active users in 190 countries around the world, Google Play continues to be an important distribution platform to build a global audience. For businesses to get their apps in front of users, it's important to make them more quickly and easily discoverable on Google Play. To improve the overall search experience, Google has introduced the concept of grouping apps into categories.This brings us to the following questions:Which category has the highest share of (active) apps in the market? Is any specific category dominating the market?Which categories have the fewest number of apps?We will see that there are 33 unique app categories present in our dataset. Family and Game apps have the highest market prevalence. Interestingly, Tools, Business and Medical apps are also at the top.
###Code
import plotly
plotly.offline.init_notebook_mode(connected=True)
import plotly.graph_objs as go
# Print the total number of unique categories
num_categories = len(apps['Category'].unique())
print('Number of categories = ', num_categories)
# Count the number of apps in each 'Category'.
num_apps_in_category = apps['Category'].value_counts()
# Sort num_apps_in_category in descending order based on the count of apps in each category
sorted_num_apps_in_category = num_apps_in_category.sort_values(ascending = False)
data = [go.Bar(
        x = sorted_num_apps_in_category.index, # index = category name
        y = sorted_num_apps_in_category.values, # value = count
)]
plotly.offline.iplot(data)
###Output
_____no_output_____
###Markdown
5. Distribution of app ratings. After having witnessed the market share for each category of apps, let's see how all these apps perform on average. App ratings (on a scale of 1 to 5) impact the discoverability and conversion of apps as well as the company's overall brand image. Ratings are a key performance indicator of an app. From our research, we found that the average rating across all app categories is 4.17. The histogram is skewed to the left, indicating that the majority of the apps are highly rated, with only a few exceptions among the low-rated apps.
###Code
# Average rating of apps
avg_app_rating = apps['Rating'].mean()
print('Average app rating = ', avg_app_rating)
# Distribution of apps according to their ratings
data = [go.Histogram(
x = apps['Rating']
)]
# Vertical dashed line to indicate the average app rating
layout = {'shapes': [{
'type' :'line',
'x0': avg_app_rating,
'y0': 0,
'x1': avg_app_rating,
'y1': 1000,
'line': { 'dash': 'dashdot'}
}]
}
plotly.offline.iplot({'data': data, 'layout': layout})
###Output
Average app rating = 4.173243045387994
###Markdown
6. Size and price of an appLet's now examine app size and app price. For size, if the mobile app is too large, it may be difficult and/or expensive for users to download. Lengthy download times could turn users off before they even experience your mobile app. Plus, each user's device has a finite amount of disk space. For price, some users expect their apps to be free or inexpensive. These problems compound if the developing world is part of your target market; especially due to internet speeds, earning power and exchange rates.How can we effectively come up with strategies to size and price our app?Does the size of an app affect its rating? Do users really care about system-heavy apps or do they prefer light-weighted apps? Does the price of an app affect its rating? Do users always prefer free apps over paid apps?We find that the majority of top rated apps (rating over 4) range from 2 MB to 20 MB. We also find that the vast majority of apps price themselves under \$10.
###Code
%matplotlib inline
import seaborn as sns
sns.set_style("darkgrid")
import warnings
warnings.filterwarnings("ignore")
# Select rows where both 'Rating' and 'Size' values are present (ie. the two values are not null)
apps_with_size_and_rating_present = apps[(~apps["Rating"].isnull()) & (~apps["Size"].isnull())]
# Subset for categories with at least 250 apps
large_categories = apps_with_size_and_rating_present.groupby("Category").filter(lambda x: len(x) >= 250).reset_index()
# Plot size vs. rating
plt1 = sns.jointplot(x = large_categories["Rating"], y = large_categories["Size"], kind = 'hex')
# Select apps whose 'Type' is 'Paid'
paid_apps = apps_with_size_and_rating_present[apps_with_size_and_rating_present["Type"] == "Paid"]
# Plot price vs. rating
plt2 = sns.jointplot(x = paid_apps["Price"], y = paid_apps["Rating"])
###Output
_____no_output_____
###Markdown
7. Relation between app category and app priceSo now comes the hard part. How are companies and developers supposed to make ends meet? What monetization strategies can companies use to maximize profit? The costs of apps are largely based on features, complexity, and platform.There are many factors to consider when selecting the right pricing strategy for your mobile app. It is important to consider the willingness of your customer to pay for your app. A wrong price could break the deal before the download even happens. Potential customers could be turned off by what they perceive to be a shocking cost, or they might delete an app they’ve downloaded after receiving too many ads or simply not getting their money's worth.Different categories demand different price ranges. Some apps that are simple and used daily, like the calculator app, should probably be kept free. However, it would make sense to charge for a highly-specialized medical app that diagnoses diabetic patients. Below, we see that Medical and Family apps are the most expensive. Some medical apps extend even up to \$80! All game apps are reasonably priced below \$20.
###Code
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
# Select a few popular app categories
popular_app_cats = apps[apps.Category.isin(['GAME', 'FAMILY', 'PHOTOGRAPHY',
'MEDICAL', 'TOOLS', 'FINANCE',
'LIFESTYLE','BUSINESS'])]
# Examine the price trend by plotting Price vs Category
ax = sns.stripplot(x = popular_app_cats['Price'], y = popular_app_cats['Category'], jitter=True, linewidth=1)
ax.set_title('App pricing trend across categories')
# Apps whose Price is greater than 200
apps_above_200 = popular_app_cats[['Category', 'App', 'Price']][popular_app_cats["Price"] > 200]
apps_above_200[['Category', 'App', 'Price']]
###Output
_____no_output_____
###Markdown
8. Filter out "junk" appsIt looks like a bunch of the really expensive apps are "junk" apps. That is, apps that don't really have a purpose. Some app developer may create an app called I Am Rich Premium or most expensive app (H) just for a joke or to test their app development skills. Some developers even do this with malicious intent and try to make money by hoping people accidentally click purchase on their app in the store.Let's filter out these junk apps and re-do our visualization.
###Code
# Select apps priced below $100
apps_under_100 = popular_app_cats[popular_app_cats["Price"]<100]
fig, ax = plt.subplots()
fig.set_size_inches(15, 8)
# Examine price vs category with the authentic apps (apps_under_100)
ax = sns.stripplot(x=apps_under_100["Price"], y=apps_under_100["Category"], data=apps_under_100, jitter = True, linewidth = 1)
ax.set_title('App pricing trend across categories after filtering for junk apps')
###Output
_____no_output_____
###Markdown
9. Popularity of paid apps vs free appsFor apps in the Play Store today, there are five types of pricing strategies: free, freemium, paid, paymium, and subscription. Let's focus on free and paid apps only. Some characteristics of free apps are:
- Free to download.
- Main source of income often comes from advertisements.
- Often created by companies that have other products and the app serves as an extension of those products.
- Can serve as a tool for customer retention, communication, and customer service.
Some characteristics of paid apps are:
- Users are asked to pay once for the app to download and use it.
- The user can't really get a feel for the app before buying it.
Are paid apps installed as much as free apps? It turns out that paid apps have a relatively lower number of installs than free apps, though the difference is not as stark as I would have expected!
###Code
trace0 = go.Box(
# Data for paid apps
y = apps[apps['Type'] == "Paid"]['Installs'],
name = 'Paid'
)
trace1 = go.Box(
# Data for free apps
y = apps[apps['Type'] == "Free"]['Installs'],
name = 'Free'
)
layout = go.Layout(
title = "Number of downloads of paid apps vs. free apps",
yaxis = dict(title = "Log number of downloads",
type = 'log',
autorange = True)
)
# Add trace0 and trace1 to a list for plotting
data = [trace0, trace1]
plotly.offline.iplot({'data': data, 'layout': layout})
###Output
_____no_output_____
###Markdown
10. Sentiment analysis of user reviewsMining user review data to determine how people feel about your product, brand, or service can be done using a technique called sentiment analysis. User reviews for apps can be analyzed to identify if the mood is positive, negative or neutral about that app. For example, positive words in an app review might include words such as 'amazing', 'friendly', 'good', 'great', and 'love'. Negative words might be words like 'malware', 'hate', 'problem', 'refund', and 'incompetent'.By plotting sentiment polarity scores of user reviews for paid and free apps, we observe that free apps receive a lot of harsh comments, as indicated by the outliers on the negative y-axis. Reviews for paid apps appear never to be extremely negative. This may indicate something about app quality, i.e., paid apps being of higher quality than free apps on average. The median polarity score for paid apps is a little higher than free apps, thereby syncing with our previous observation.In this notebook, we analyzed over ten thousand apps from the Google Play Store. We can use our findings to inform our decisions should we ever wish to create an app ourselves.
###Code
# Load user_reviews.csv
reviews_df = pd.read_csv("datasets/user_reviews.csv")
# Join and merge the two dataframe
merged_df = pd.merge(apps, reviews_df, on = "App", how = "inner")
# Drop NA values from the Sentiment and Review columns
merged_df = merged_df.dropna(subset=['Sentiment', 'Review'])
sns.set_style('ticks')
fig, ax = plt.subplots()
fig.set_size_inches(11, 8)
# User review sentiment polarity for paid vs. free apps
ax = sns.boxplot(x = "Type", y = "Sentiment_Polarity", data = merged_df)
ax.set_title('Sentiment Polarity Distribution')
###Output
_____no_output_____ |
02 Python Teil 1/06 Extras.ipynb | ###Markdown
06 Extras Additional basic functions* % returns the remainder of a division* // drops the decimal places of a division 1. Divide 2/3 and drop the digits after the decimal point 2. Divide 5/3 and output only the remainder Data types II 3. Complete the code in line 4 so that the division does not produce an error message
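For reference, a minimal illustration of the two operators (using different numbers than the exercises below):

```python
print(7 % 4)   # 3 -> remainder of the division
print(7 // 4)  # 1 -> integer division, decimal places are dropped
```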
###Code
c = '162'
d = 3
c/d
###Output
_____no_output_____
###Markdown
4. Output the digit a, b times. E.g. if a = 7 and b = 3, the desired result is the number 777. *(Hint: what happens when you multiply a string by a number?)*
###Code
a = 1
b = 4
# your code here
###Output
_____no_output_____
###Markdown
Lists 5. Remove the element of the list that is not a number:
###Code
list_1 = [1, 4, '162', 3]
# your code here
###Output
_____no_output_____
###Markdown
6. Convert the list into a string with one space between each of the elements: *Hint: use .join: https://www.w3schools.com/python/ref_string_join.asp*
###Code
list_2 = ['A', 'B', 'C', 'D']
###Output
_____no_output_____ |
notebooks/CognitiveServices - Celebrity Quote Analysis.ipynb | ###Markdown
Celebrity Quote Analysis with The Cognitive Services on Spark
###Code
from mmlspark.cognitive import *
from pyspark.ml import PipelineModel
from pyspark.sql.functions import col, udf
from pyspark.ml.feature import SQLTransformer
import os
if os.environ.get("AZURE_SERVICE", None) == "Microsoft.ProjectArcadia":
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
from notebookutils.mssparkutils.credentials import getSecret
os.environ['VISION_API_KEY'] = getSecret("mmlspark-keys", "mmlspark-cs-key")
os.environ['TEXT_API_KEY'] = getSecret("mmlspark-keys", "mmlspark-cs-key")
os.environ['BING_IMAGE_SEARCH_KEY'] = getSecret("mmlspark-keys", "mmlspark-bing-search-key")
#put your service keys here
TEXT_API_KEY = os.environ["TEXT_API_KEY"]
VISION_API_KEY = os.environ["VISION_API_KEY"]
BING_IMAGE_SEARCH_KEY = os.environ["BING_IMAGE_SEARCH_KEY"]
###Output
_____no_output_____
###Markdown
Extracting celebrity quote images using Bing Image Search on SparkHere we define two Transformers to extract celebrity quote images.
###Code
imgsPerBatch = 10 #the number of images Bing will return for each query
offsets = [(i*imgsPerBatch,) for i in range(100)] # A list of offsets, used to page into the search results
bingParameters = spark.createDataFrame(offsets, ["offset"])
bingSearch = BingImageSearch()\
.setSubscriptionKey(BING_IMAGE_SEARCH_KEY)\
.setOffsetCol("offset")\
.setQuery("celebrity quotes")\
.setCount(imgsPerBatch)\
.setOutputCol("images")
#Transformer that extracts and flattens the richly structured output of Bing Image Search into a simple URL column
getUrls = BingImageSearch.getUrlTransformer("images", "url")
###Output
_____no_output_____
###Markdown
Recognizing Images of CelebritiesThis block identifies the name of the celebrities for each of the images returned by the Bing Image Search.
###Code
celebs = RecognizeDomainSpecificContent()\
.setSubscriptionKey(VISION_API_KEY)\
.setModel("celebrities")\
.setUrl("https://eastus.api.cognitive.microsoft.com/vision/v2.0/")\
.setImageUrlCol("url")\
.setOutputCol("celebs")
#Extract the first celebrity we see from the structured response
firstCeleb = SQLTransformer(statement="SELECT *, celebs.result.celebrities[0].name as firstCeleb FROM __THIS__")
###Output
_____no_output_____
###Markdown
Reading the quote from the image.This stage performs OCR on the images to recognize the quotes.
###Code
from mmlspark.stages import UDFTransformer
recognizeText = RecognizeText()\
.setSubscriptionKey(VISION_API_KEY)\
.setUrl("https://eastus.api.cognitive.microsoft.com/vision/v2.0/recognizeText")\
.setImageUrlCol("url")\
.setMode("Printed")\
.setOutputCol("ocr")\
.setConcurrency(5)
def getTextFunction(ocrRow):
if ocrRow is None: return None
return "\n".join([line.text for line in ocrRow.recognitionResult.lines])
# this transformer will extract a simpler string from the structured output of recognize text
getText = UDFTransformer().setUDF(udf(getTextFunction)).setInputCol("ocr").setOutputCol("text")
###Output
_____no_output_____
###Markdown
Understanding the Sentiment of the Quote
###Code
sentimentTransformer = TextSentiment()\
.setTextCol("text")\
.setUrl("https://eastus.api.cognitive.microsoft.com/text/analytics/v3.0/sentiment")\
.setSubscriptionKey(TEXT_API_KEY)\
.setOutputCol("sentiment")
#Extract the sentiment score from the API response body
getSentiment = SQLTransformer(statement="SELECT *, sentiment[0].sentiment as sentimentLabel FROM __THIS__")
###Output
_____no_output_____
###Markdown
Tying it all togetherNow that we have built the stages of our pipeline, it's time to chain them together into a single model that can be used to process batches of incoming data
###Code
from mmlspark.stages import SelectColumns
# Select the final columns
cleanupColumns = SelectColumns().setCols(["url", "firstCeleb", "text", "sentimentLabel"])
celebrityQuoteAnalysis = PipelineModel(stages=[
bingSearch, getUrls, celebs, firstCeleb, recognizeText, getText, sentimentTransformer, getSentiment, cleanupColumns])
celebrityQuoteAnalysis.transform(bingParameters).show(5)
###Output
_____no_output_____ |
Week3_ageregression_sexclassification.ipynb | ###Markdown
###Code
!git clone https://github.com/DLTK/DLTK.git
###Output
Cloning into 'DLTK'...
remote: Enumerating objects: 3, done.[K
remote: Counting objects: 100% (3/3), done.[K
remote: Compressing objects: 100% (3/3), done.[K
remote: Total 9084 (delta 0), reused 0 (delta 0), pack-reused 9081[K
Receiving objects: 100% (9084/9084), 19.05 MiB | 15.82 MiB/s, done.
Resolving deltas: 100% (6366/6366), done.
###Markdown
Changed line 139 to `na_values=[]).values` for /content/DLTK/data/IXI_HH/download_IXI_HH.py because the pandas code is outdated: https://pandas.pydata.org/pandas-docs/version/0.25.1/reference/api/pandas.DataFrame.as_matrix.html
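In general terms, the underlying pandas change looks like this (a small illustrative sketch, not the contents of the downloaded script):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})
# df.as_matrix() was deprecated and later removed
arr = df.values       # what the edited line uses
arr = df.to_numpy()   # the currently recommended equivalent
```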
###Code
!pip install SimpleITK
!python /content/DLTK/data/IXI_HH/download_IXI_HH.py
!pwd
###Output
_____no_output_____ |
StockPrice_Prediction_TimeSeriesAnalysis[RealData]_UsingAuto_Arima.ipynb | ###Markdown
**Business Problem**: You work for a hedge fund company in India. Your employer wants you to predict the stock price of **Bajaj Finance**, so that he can choose whether to invest in it or not. He gives you the dataset to create a model to predict the stock price. **1.Importing Libraries**
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
**2.Importing Dataset**
###Code
df=pd.read_csv('BAJFINANCE.csv')
df.head()
###Output
_____no_output_____
###Markdown
**3.Preliminary Analysis and Missing value Detection & Rectification**
###Code
df.set_index('Date',inplace=True)
# Lets just see an Overview of how the Stock Price changes in Time #
df['VWAP'].plot()
df.shape
# Null value Detection #
df.isna().sum()
# We drop the column Trades as almost half of it is Missing #
df.drop('Trades',axis=1,inplace=True)
# Let's just drop the rest of the missing values, as they are missing from the beginning up to row no. 446 #
# If they were missing in between rather than from the beginning, we could have used imputation methods like mean/moving average to fill them #
df.dropna(inplace=True)
df.isna().sum()
# The problem of missing values is Over and the new Shape is given below #
df.shape
data=df.copy()
data.dtypes
###Output
_____no_output_____
###Markdown
**4.Creation of the Rolling Statistic** A rolling analysis of a time series model is often used to assess the model’s stability over time. When analyzing financial time series data using a statistical model, a key assumption is that the parameters of the model are constant over time. However, the economic environment often changes considerably, and it may not be reasonable to assume that a model’s parameters are constant. A common technique to assess the constancy of a model’s parameters is to compute parameter estimates over a rolling window of a fixed size through the sample. If the parameters are truly constant over the entire sample, then the estimates over the rolling windows should not be too different. If the parameters change at some point during the sample, then the rolling estimates should capture this instability.
###Code
data.columns
lag_features=['High','Low','Volume','Turnover']
window1=3
window2=7
for feature in lag_features:
data[feature+'rolling_mean_3']=data[feature].rolling(window=window1).mean()
data[feature+'rolling_mean_7']=data[feature].rolling(window=window2).mean()
for feature in lag_features:
data[feature+'rolling_std_3']=data[feature].rolling(window=window1).std()
data[feature+'rolling_std_7']=data[feature].rolling(window=window2).std()
data.head()
# We can see that there are Null Values in the Newly formed Columns #
data.isna().sum()
# Since the null values are very low compared to the Dataset , lets drop them #
data.dropna(inplace=True)
data.isna().sum()
data.columns
data.shape
###Output
_____no_output_____
###Markdown
**5.Splitting the Data into Trainset and TestSet**
###Code
# Note: we are not splitting the data randomly using train_test_split because we need proper chronological order in time series data #
training_data=data[0:4000]
test_data=data[4000:]
training_data.head()
test_data.head()
# These are features created by rolling analysis to minimize outliers and instability in the data #
Independent_Features = ['Highrolling_mean_3', 'Highrolling_mean_7',
'Lowrolling_mean_3', 'Lowrolling_mean_7', 'Volumerolling_mean_3',
'Volumerolling_mean_7', 'Turnoverrolling_mean_3',
'Turnoverrolling_mean_7', 'Highrolling_std_3', 'Highrolling_std_7',
'Lowrolling_std_3', 'Lowrolling_std_7', 'Volumerolling_std_3',
'Volumerolling_std_7', 'Turnoverrolling_std_3',
'Turnoverrolling_std_7']
###Output
_____no_output_____
###Markdown
**6.Training and Fitting the Model**
###Code
!pip install pmdarima
from pmdarima import auto_arima
###Output
_____no_output_____
###Markdown
We will be using the well-known auto-ARIMA technique, specialised for time series, to train the model
###Code
# We set the 'trace' parameter to True so that auto_arima prints each (p,d,q) combination it tries while searching for the best #
model=auto_arima(y=training_data['VWAP'],exogenous=training_data[Independent_Features],trace=True)
model.fit(training_data['VWAP'],training_data[Independent_Features])
###Output
_____no_output_____
###Markdown
**7.Predicting the Test Set**
###Code
forecast=model.predict(n_periods=len(test_data), exogenous=test_data[Independent_Features])
test_data['Forecast_ARIMA']=forecast
# Let's plot the actual values against the values predicted by the model #
test_data[['VWAP','Forecast_ARIMA']].plot(figsize=(14,7))
###Output
_____no_output_____
###Markdown
Note that the model was predicting VWAP with very high accuracy until March 2020, and we know what happened after that: the Corona pandemic. After the pandemic started, the markets were extremely unstable, leading to such drastic highs and lows in the predictions after March 2020. This is clearly visualized in the graph.
###Code
x = test_data.iloc[:,8].values
y = test_data.iloc[:,29].values
x = x.reshape(len(x),1)
y = y.reshape(len(y),1)
am = np.concatenate((x,y),1)
am[:10]
# x is the Actual VWAP and y is the predicted VWAP #
from sklearn.metrics import r2_score,mean_absolute_error, mean_squared_error
r2_score(test_data['VWAP'],test_data['Forecast_ARIMA'])
# An R Squared Value of 0.89 makes this model a Good Fit #
np.sqrt(mean_squared_error(test_data['VWAP'],test_data['Forecast_ARIMA']))
mean_absolute_error(test_data['VWAP'],test_data['Forecast_ARIMA'])
###Output
_____no_output_____
###Markdown
**9.Let's check the accuracy for the test set just before March 2020, for the sake of curiosity**
###Code
test_data.shape
pre_covid_test_data = test_data.iloc[:475,:]
post_covid_test_data = test_data.iloc[475:,:]
forecast_a=model.predict(n_periods=len(pre_covid_test_data), exogenous=pre_covid_test_data[Independent_Features])
forecast_b=model.predict(n_periods=len(post_covid_test_data), exogenous=post_covid_test_data[Independent_Features])
pre_covid_test_data['Forecast_ARIMA']=forecast_a
post_covid_test_data['Forecast_ARIMA']=forecast_b
pre_covid_test_data[['VWAP','Forecast_ARIMA']].plot(figsize=(14,7))
r2_score(pre_covid_test_data['VWAP'],pre_covid_test_data['Forecast_ARIMA'])
np.sqrt(mean_squared_error(pre_covid_test_data['VWAP'],pre_covid_test_data['Forecast_ARIMA']))
mean_absolute_error(pre_covid_test_data['VWAP'],pre_covid_test_data['Forecast_ARIMA'])
###Output
_____no_output_____
###Markdown
The R squared value of the pre-Covid predictions is pretty amazing, with a whopping accuracy of almost 98%. Also note that the ARIMA forecast is mostly under-predicted, with a few exceptions, which is actually good when buying stock market shares (i.e., you get more profit than you expected). All said and done, the MAE and RMSE values still look a little high, but that is expected: predicting super accurate stock prices is nearly impossible. These predicted values cannot be used in intra-day trading, but can still be effectively used in swing trading and long-term investment
###Code
post_covid_test_data[['VWAP','Forecast_ARIMA']].plot(figsize=(14,7))
r2_score(post_covid_test_data['VWAP'],post_covid_test_data['Forecast_ARIMA'])
###Output
_____no_output_____
###Markdown
As expected, the R squared value of the post-Covid test data is disappointing and brought down the accuracy of the model. The R squared value implicitly shows that this model is not a good fit for this data. No need for MAE and RMSE to prove it again. **10.Predicting the Value of VWAP in Realtime**
###Code
check = pd.read_csv('07-04-2021-TO-06-05-2021BAJFINANCEEQN.csv')
###Output
_____no_output_____
###Markdown
Note that the training data stops at August 2018 and the data below is from April 2021 till today (May 07 2021). I know it is not perfectly done in real time, but this is the best I could do to bring the essence of real time into this project, so please bear with me. ***The same preliminary analysis, null value detection & rectification, data cleaning and creation of rolling variables is done for this small dataset as well***
###Code
check.head()
check.set_index('Date',inplace=True)
check['VWAP'].plot()
check.drop('No. of Trades',axis=1,inplace=True)
check.shape
check.isna().sum()
check.columns
lag_feature = ['High Price','Low Price','Total Traded Quantity', 'Turnover']
window3=3
window4=7
for feature in lag_feature:
check[feature+'rolling_mean_3']=check[feature].rolling(window=window3).mean()
check[feature+'rolling_mean_7']=check[feature].rolling(window=window4).mean()
for feature in lag_feature:
check[feature+'rolling_std_3']=check[feature].rolling(window=window3).std()
check[feature+'rolling_std_7']=check[feature].rolling(window=window4).std()
check.head()
check.isna().sum()
check.dropna(inplace=True)
check.isna().sum()
check.shape
check.columns
ind_features = ['High Pricerolling_mean_3',
'High Pricerolling_mean_7', 'Low Pricerolling_mean_3',
'Low Pricerolling_mean_7', 'Total Traded Quantityrolling_mean_3',
'Total Traded Quantityrolling_mean_7', 'Turnoverrolling_mean_3',
'Turnoverrolling_mean_7','High Pricerolling_std_3',
'High Pricerolling_std_7', 'Low Pricerolling_std_3',
'Low Pricerolling_std_7', 'Total Traded Quantityrolling_std_3',
'Total Traded Quantityrolling_std_7', 'Turnoverrolling_std_3',
'Turnoverrolling_std_7']
###Output
_____no_output_____
###Markdown
**11.Forecasting for the Second Dataset**
###Code
forecast1=model.predict(n_periods=len(check), exogenous=check[ind_features])
check['Forecast_ARIMA']=forecast1
# plotting Predicted Values VS Actual Values #
check[['VWAP','Forecast_ARIMA']].plot(figsize=(14,7))
from sklearn.metrics import r2_score
r2_score(check['VWAP'],check['Forecast_ARIMA'])
###Output
_____no_output_____ |
content/notebooks/chapter_11.ipynb | ###Markdown
Brute Forces, Secretaries, and DichotomiesChapter 11 of [Real World Algorithms](https://mitpress.mit.edu/books/real-world-algorithms).---> Panos Louridas> Athens University of Economics and Business Sequential SearchSequential search is perhaps the most straightforward search method.We start from the beginning and check each item in turn until we find the one we want.It can be used for both sorted and unsorted data, but there are much better methods for sorted data.Here is a straightforward implementation:
###Code
def sequential_search(a, s):
for i, element in enumerate(a):
if element == s:
return i
return -1
###Output
_____no_output_____
###Markdown
Let's check it on a random list:
###Code
import random
a = list(range(1000))
random.shuffle(a)
pos = sequential_search(a, 314)
print(a[pos], 'found at', pos)
pos = sequential_search(a, 1001)
print(pos)
###Output
314 found at 942
-1
###Markdown
We need not write `sequential_search(a, s)` in Python. If `a` is a list, we can use `a.index(s)` instead.In fact that's what we should do, because it is way faster (we saw that also in Chapter 7). Here is the timing for our version:
###Code
import timeit
total_elapsed = 0
for i in range(100):
a = list(range(10000))
random.shuffle(a)
start = timeit.default_timer()
index = sequential_search(a, 314)
end = timeit.default_timer()
total_elapsed += end - start
print(total_elapsed)
###Output
0.028058045892976224
###Markdown
And here is the timing for the native version (which actually calls a function written in C):
###Code
total_elapsed = 0
for i in range(100):
a = list(range(10000))
random.shuffle(a)
start = timeit.default_timer()
index = a.index(314)
end = timeit.default_timer()
total_elapsed += end - start
print(total_elapsed)
###Output
0.006791955849621445
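One small difference worth noting (an aside, not covered above): `a.index(s)` raises a `ValueError` when the item is absent, while `sequential_search` returns -1. A minimal wrapper that keeps the -1 convention could look like this:

```python
def index_or_minus_one(a, s):
    try:
        return a.index(s)
    except ValueError:
        return -1
```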
###Markdown
Matching, Comparing, Records, KeysWhen we are searching for an item in the list, Python performs an equality test between each item and the item we are searching for.The equality test is performed with the operator `==`.Checking for equality is not the same as checking whether two items are the *same*.This is called *strict comparison* and in Python it is implemented with the operator `is`. That means that the following two are equal:
###Code
an_item = (1, 2)
another_item = (1, 2)
an_item == another_item
###Output
_____no_output_____
###Markdown
But they are not the same:
###Code
an_item is another_item
###Output
_____no_output_____
###Markdown
As another example, let's see what happens with Python's support for complex numbers:
###Code
x = 3.14+1.62j
y = 3.14+1.62j
print(x == y)
print(x is y)
###Output
True
False
###Markdown
Strict comparison is much faster than equality checking, but it is not what we usually want to use.A common idiom for identity checking in Python is checking for `None`, like `if a is None` or `if a is not None`. In many cases, we hold information for entities in *records*, which are collections of *attributes*.In that case, we want to search for an entity based on a particular attribute that identifies it.The attribute is called a *key*. In Python we can represent records as *objects* that are instances of a class.Alternatively, we can represent them as dictionaries.In fact, Python objects use dictionaries internally.Let's get a list of two persons.
###Code
john = {
'first_name': 'John',
'surname': 'Doe',
'passport_no': 'AI892495',
'year_of_birth': 1990,
'occupation': 'teacher'
}
jane = {
'first_name': 'Jane',
'surname': 'Roe',
'passport_no': 'AI485713',
'year_of_birth': 1986,
'occupation': 'civil engineer'
}
persons = [john, jane]
###Output
_____no_output_____
###Markdown
In this example, the key for our search would be the passport number, because we would like to find the full information for a person with that particular piece of identification.To do that we could re-implement sequential search so that we provide to it the comparison function.
###Code
def sequential_search_m(a, s, matches):
for i, element in enumerate(a):
if matches(element, s):
return i
return -1
def match_passport(person, passport_no):
return person['passport_no'] == passport_no
pos = sequential_search_m(persons, 'AI485713', match_passport)
print(persons[pos], 'found at', pos)
###Output
{'first_name': 'Jane', 'surname': 'Roe', 'passport_no': 'AI485713', 'year_of_birth': 1986, 'occupation': 'civil engineer'} found at 1
###Markdown
Although you would probably use something more Pythonic like:
###Code
results = [(i, p) for i, p in enumerate(persons) if p['passport_no'] == 'AI485713']
results
###Output
_____no_output_____
###Markdown
Self-Organizing SearchIn self-organizing search, we take advantage of an item's popularity to move it to the front of the collection in which we are performing our searches.In the move-to-front method, when we find an item we move it directly to the front.In the transposition method, when we find an item we swap it with its predecessor (if any). We cannot implement directly the algorithms for lists given in the book (that is, algorithm 11.2 and algorithm 11.3) for the simple reason that Python hides the list implementation from us.Moreover, Python lists are *not* linked lists. They are variable-length arrays (see the online documentation for details on the [implementation of lists in Python](https://docs.python.org/3/faq/design.htmlhow-are-lists-implemented)).We can implement algorithm 11.3, which is the transposition method for arrays.
###Code
def transposition_search(a, s):
for i, item in enumerate(a):
if item == s:
if i > 0:
a[i-1], a[i] = a[i], a[i-1]
return i-1
else:
return i
return -1
###Output
_____no_output_____
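For comparison, here is a minimal sketch of the move-to-front method adapted to Python lists (an illustration of the idea described above, not one of the book's array algorithms verbatim):

```python
def move_to_front_search(a, s):
    for i, item in enumerate(a):
        if item == s:
            # Remove the found item and reinsert it at the front of the list
            a.insert(0, a.pop(i))
            return 0
    return -1
```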
###Markdown
How can we test `transposition_search(a, s)`?We need to do some groundwork to emulate a situation of popularity-biased searches.In particular, we will create a setting where the items we are searching for are governed by Zipf's law.First, we'll write a function that provides the Zipf probability for $n$ items.
###Code
def zipf(n):
h = 0
for x in range(1, n+1):
h += 1/x
z = [ 1/x * 1/h for x in range(1, n+1) ]
return z
###Output
_____no_output_____
###Markdown
We'll work with 1000 items:
###Code
zipf_1000 = zipf(1000)
###Output
_____no_output_____
###Markdown
We can check that they sum up to 1, and see the first 20 of the probabilities.
###Code
print(sum(zipf_1000))
print(zipf_1000[:20])
###Output
1.0000000000000018
[0.13359213049244018, 0.06679606524622009, 0.044530710164146725, 0.033398032623110044, 0.026718426098488037, 0.022265355082073363, 0.019084590070348597, 0.016699016311555022, 0.014843570054715576, 0.013359213049244019, 0.012144739135676381, 0.011132677541036681, 0.010276317730187707, 0.009542295035174298, 0.008906142032829346, 0.008349508155777511, 0.007858360617202364, 0.007421785027357788, 0.007031164762760009, 0.006679606524622009]
###Markdown
Again we will be performing our searches on 1000 items, in random order.
###Code
a = list(range(1000))
random.shuffle(a)
print(a[:20])
###Output
[965, 274, 365, 189, 152, 107, 624, 641, 767, 475, 750, 824, 490, 524, 698, 211, 958, 607, 13, 599]
###Markdown
We will perform 100000 searches among these items.We want the searches to follow Zipf's law.First, we will create another list of 1000 items in random order.
###Code
b = list(range(1000))
random.shuffle(b)
print(b[:20])
###Output
[934, 515, 787, 618, 387, 654, 322, 164, 810, 204, 453, 415, 109, 855, 402, 53, 728, 581, 280, 809]
###Markdown
Then, we will select 100000 items from the second list, using the Zipf probabilities.That means that we will be selecting the first item with probability `zipf_1000[0]`, the second item with probability `zipf_1000[1]`, and so on.
###Code
searches = random.choices(b, weights=zipf_1000, k=100000)
###Output
_____no_output_____
###Markdown
Indeed, we can verify that the popularity of items in `searches` mirrors `b`:
###Code
from collections import Counter
counted = Counter(searches)
counted.most_common(20)
###Output
_____no_output_____
###Markdown
So, we will perform 100000 searches in the first list, using as keys the items in `searches`.Because `transposition_search(a, s)` changes `a`, we will keep a copy of it to use it to compare the performance with simple sequential search.At the end, apart from displaying the time elapsed we will also show the first items of the changed `a`, to see how popular searches have gone to the beginning.
###Code
a_copy = a[:]
total_elapsed = 0
for s in searches:
start = timeit.default_timer()
index = transposition_search(a, s)
end = timeit.default_timer()
total_elapsed += end - start
print(total_elapsed)
print(a[:20])
###Output
1.3782846610993147
[934, 515, 387, 787, 618, 322, 654, 164, 204, 415, 810, 109, 53, 453, 402, 855, 750, 58, 581, 728]
###Markdown
We will now perform the same searches with `a_copy` using simple sequential search.
###Code
total_elapsed = 0
for s in searches:
start = timeit.default_timer()
index = sequential_search(a_copy, s)
end = timeit.default_timer()
total_elapsed += end - start
print(total_elapsed)
###Output
2.779128445254173
###Markdown
The Secretary ProblemThe secretary problem requires selecting the best item when we have not seen, and we cannot wait to see, the full set of items.The solution is an online algorithm. We find the best item among the first $n/e$, where $n$ is the total expected number of items, and $e \approx 2.71828$ is [Euler's number](https://en.wikipedia.org/wiki/E_(mathematical_constant)). Then we select the first of the remaining items that is better than that. The probability that we'll indeed select the best item is $1/e \approx 37\%$.Here is how we can do that:
###Code
import math
def secretary_search(a):
# Calculate |a|/n items.
m = int((len(a) // math.e) + 1)
# Find the best among the first |a|/n.
c = 0
for i in range(1, m):
if a[i] > a[c]:
c = i
# Get the first that is better from the one
# we found, if possible.
for i in range(m, len(a)):
if a[i] > a[c]:
return i
return - 1
###Output
_____no_output_____
###Markdown
Does `secretary_search(a)` find the best item in `a` about 37% of the time?To check that, we'll continue working in a similar fashion. We'll perform 1000 searches in 1000 items and see how often we do come up with the best item.
###Code
total_found = 0
for i in range(1000):
a = list(range(1000))
random.shuffle(a)
index = secretary_search(a)
max_index = a.index(max(a))
if index == max_index:
total_found += 1
print(f"found {total_found} out of {i+1}, {100*total_found/(i+1)}%")
###Output
found 379 out of 1000, 37.9%
###Markdown
Binary SearchBinary search is the most efficient way to search for an item when the search space is *ordered*.It is an iterative algorithm, where in each iteration we split the search space in half.We start by asking if the search item is in the middle of the search space. Let's assume that the items are ordered in ascending order.If it is greater than the item in the middle, we repeat the question on the right part of the search space; if it is smaller, we repeat the question on the left part of the search space. We continue until we find the item, or we cannot perform a split any more.
###Code
def binary_search(a, item):
# Initialize borders of search space.
low = 0
high = len(a) - 1
# While the search space is not empty:
while low <= high:
# Split the search space in the middle.
mid = low + (high - low) // 2
# Compare with midpoint.
c = (a[mid] > item) - (a[mid] < item)
# If smaller, repeat on the left half.
if c < 0:
low = mid + 1
# If greater, repeat on the right half.
elif c > 0:
high = mid - 1
# If found, we are done.
else:
return mid
return -1
###Output
_____no_output_____
###Markdown
In Python 3 there is no `cmp(x, y)` function that compares `x` and `y` and returns -1, 0, or 1, if `x < y`, `x == y`, or `x > y`, respectively. We use the `(x > y) - (x < y)` idiom instead.Note also the line where we calculate the midpoint: `mid = low + (high - low) // 2`. This guards against overflows. In Python that is not necessary, because there is no upper limit in integers, so it could be `mid = (low + high) // 2`. However, this is a problem in most other languages, so we'll stick with the foolproof version. To see how binary search works we can add some tracing information in `binary_search(a, item)`:
###Code
def binary_search_trace(a, item):
print("Searching for", item)
# Initialize borders of search space.
low = 0
high = len(a) - 1
# While the search space is not empty:
while low <= high:
# Split the search space in the middle.
mid = low + (high - low) // 2
# Compare with midpoint.
c = (a[mid] > item) - (a[mid] < item)
print(f"({low}, {a[low]}), ({mid}, {a[mid]}), ({high}, {a[high]})")
# If smaller, repeat on the left half.
if c < 0:
low = mid + 1
# If greater, repeat on the right half.
elif c > 0:
high = mid - 1
# If found, we are done.
else:
return mid
return -1
a = [4, 10, 31, 65, 114, 149, 181, 437,
480, 507, 551, 613, 680, 777, 782, 903]
binary_search_trace(a, 149)
binary_search_trace(a, 181)
binary_search_trace(a, 583)
binary_search_trace(a, 450)
binary_search_trace(a, 3)
###Output
Searching for 149
(0, 4), (7, 437), (15, 903)
(0, 4), (3, 65), (6, 181)
(4, 114), (5, 149), (6, 181)
Searching for 181
(0, 4), (7, 437), (15, 903)
(0, 4), (3, 65), (6, 181)
(4, 114), (5, 149), (6, 181)
(6, 181), (6, 181), (6, 181)
Searching for 583
(0, 4), (7, 437), (15, 903)
(8, 480), (11, 613), (15, 903)
(8, 480), (9, 507), (10, 551)
(10, 551), (10, 551), (10, 551)
Searching for 450
(0, 4), (7, 437), (15, 903)
(8, 480), (11, 613), (15, 903)
(8, 480), (9, 507), (10, 551)
(8, 480), (8, 480), (8, 480)
Searching for 3
(0, 4), (7, 437), (15, 903)
(0, 4), (3, 65), (6, 181)
(0, 4), (1, 10), (2, 31)
(0, 4), (0, 4), (0, 4)
###Markdown
Binary search is very efficient—in fact, it is as efficient as a search method can be (there is a small caveat here, concerning searching in quantum computers, but we can leave that aside).If we have $n$ items, it will complete the search in $O(\lg(n))$.Once again, we can verify that theory agrees with practice. We will perform 1000 searches, 500 of them successful and 500 of them unsuccessful, and count the average number of iterations required. To do that, we'll change `binary_search(a, item)` so that it also returns the number of iterations.
###Code
def binary_search_count(a, item):
# Initialize borders of search space.
low = 0
high = len(a) - 1
i = 0
# While the search space is not empty:
while low <= high:
i += 1
# Split the search space in the middle.
mid = low + (high - low) // 2
# Compare with midpoint.
c = (a[mid] > item) - (a[mid] < item)
# If smaller, repeat on the left half.
if c < 0:
low = mid + 1
# If greater, repeat on the right half.
elif c > 0:
high = mid - 1
# If found, we are done.
else:
return (i, mid)
return (i, -1)
###Output
_____no_output_____
###Markdown
We build up our test suite. Our items will be 1000 random numbers in the range from 0 to 999,999.We will select 500 of them to perform matching searches, and another 500, not in them, to perform non-matching searches.
###Code
num_range = 1000000
# Get 1000 random numbers from 0 to 999999.
a = random.sample(range(num_range), k=1000)
# Select 500 from them for our matching searches.
existing = random.sample(a, k=500)
# Select another 500 random numbers in the range,
# not in the set a, for our non-matching searches
non_existing = random.sample(set(range(num_range)) - set(a), k=500)
# Verify that the matching and non-matching sets are distinct.
print(set(existing) & set(non_existing))
###Output
set()
###Markdown
So now we can see how the average number of iterations in practice fares compared to what is predicted by theory.
###Code
total_iters = 0
for matching, non_matching in zip(existing, non_existing):
matching_iters, _ = binary_search_count(a, matching)
non_matching_iters, _ = binary_search_count(a, non_matching)
total_iters += (matching_iters + non_matching_iters)
print(f"Average iterations:", total_iters / (len(existing) + len(non_existing)))
print(f"lg(1000) = {math.log(1000, 2)}")
###Output
Average iterations: 9.743
lg(1000) = 9.965784284662087
|
d2l-en/tensorflow/chapter_appendix-mathematics-for-deep-learning/information-theory.ipynb | ###Markdown
Information Theory:label:`sec_information_theory`The universe is overflowing with information. Information provides a common language across disciplinary rifts: from Shakespeare's Sonnet to researchers' paper on Cornell ArXiv, from Van Gogh's printing Starry Night to Beethoven's music Symphony No. 5, from the first programming language Plankalkül to the state-of-the-art machine learning algorithms. Everything must follow the rules of information theory, no matter the format. With information theory, we can measure and compare how much information is present in different signals. In this section, we will investigate the fundamental concepts of information theory and applications of information theory in machine learning.Before we get started, let us outline the relationship between machine learning and information theory. Machine learning aims to extract interesting signals from data and make critical predictions. On the other hand, information theory studies encoding, decoding, transmitting, and manipulating information. As a result, information theory provides fundamental language for discussing the information processing in machine learned systems. For example, many machine learning applications use the cross entropy loss as described in :numref:`sec_softmax`. This loss can be directly derived from information theoretic considerations. InformationLet us start with the "soul" of information theory: information. *Information* can be encoded in anything with a particular sequence of one or more encoding formats. Suppose that we task ourselves with trying to define a notion of information. What could be our starting point?Consider the following thought experiment. We have a friend with a deck of cards. They will shuffle the deck, flip over some cards, and tell us statements about the cards. We will try to assess the information content of each statement.First, they flip over a card and tell us, "I see a card." This provides us with no information at all. We were already certain that this was the case so we hope the information should be zero.Next, they flip over a card and say, "I see a heart." This provides us some information, but in reality there are only $4$ different suits that were possible, each equally likely, so we are not surprised by this outcome. We hope that whatever the measure of information, this event should have low information content.Next, they flip over a card and say, "This is the $3$ of spades." This is more information. Indeed there were $52$ equally likely possible outcomes, and our friend told us which one it was. This should be a medium amount of information.Let us take this to the logical extreme. Suppose that finally they flip over every card from the deck and read off the entire sequence of the shuffled deck. There are $52!$ different orders to the deck, again all equally likely, so we need a lot of information to know which one it is.Any notion of information we develop must conform to this intuition. Indeed, in the next sections we will learn how to compute that these events have $0\text{ bits}$, $2\text{ bits}$, $~5.7\text{ bits}$, and $~225.6\text{ bits}$ of information respectively.If we read through these thought experiments, we see a natural idea. As a starting point, rather than caring about the knowledge, we may build off the idea that information represents the degree of surprise or the abstract possibility of the event. For example, if we want to describe an unusual event, we need a lot information. 
For a common event, we may not need much information.In 1948, Claude E. Shannon published *A Mathematical Theory of Communication* :cite:`Shannon.1948` establishing the theory of information. In his article, Shannon introduced the concept of information entropy for the first time. We will begin our journey here. Self-informationSince information embodies the abstract possibility of an event, how do we map the possibility to the number of bits? Shannon introduced the terminology *bit* as the unit of information, which was originally created by John Tukey. So what is a "bit" and why do we use it to measure information? Historically, an antique transmitter can only send or receive two types of code: $0$ and $1$. Indeed, binary encoding is still in common use on all modern digital computers. In this way, any information is encoded by a series of $0$ and $1$. And hence, a series of binary digits of length $n$ contains $n$ bits of information.Now, suppose that for any series of codes, each $0$ or $1$ occurs with a probability of $\frac{1}{2}$. Hence, an event $X$ with a series of codes of length $n$, occurs with a probability of $\frac{1}{2^n}$. At the same time, as we mentioned before, this series contains $n$ bits of information. So, can we generalize to a math function which can transfer the probability $p$ to the number of bits? Shannon gave the answer by defining *self-information*$$I(X) = - \log_2 (p),$$as the *bits* of information we have received for this event $X$. Note that we will always use base-2 logarithms in this section. For the sake of simplicity, the rest of this section will omit the subscript 2 in the logarithm notation, i.e., $\log(.)$ always refers to $\log_2(.)$. For example, the code "0010" has a self-information$$I(\text{"0010"}) = - \log (p(\text{"0010"})) = - \log \left( \frac{1}{2^4} \right) = 4 \text{ bits}.$$We can calculate self information as shown below. Before that, let us first import all the necessary packages in this section.
###Code
import tensorflow as tf
def log2(x):
return tf.math.log(x) / tf.math.log(2.)
def nansum(x):
return tf.reduce_sum(tf.where(tf.math.is_nan(
x), tf.zeros_like(x), x), axis=-1)
def self_information(p):
return -log2(tf.constant(p)).numpy()
self_information(1 / 64)
###Output
_____no_output_____
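As a quick check of the card-deck numbers quoted earlier (2 bits for a suit, about 5.7 bits for a specific card, about 225.6 bits for a full shuffle), here is a small aside using the helper just defined; the last value is computed as a sum of logarithms because $1/52!$ underflows a float32 constant:

```python
import math

print(self_information(1 / 4))                   # suit of a card: 2 bits
print(self_information(1 / 52))                  # a specific card: ~5.7 bits
print(sum(math.log2(k) for k in range(1, 53)))   # a full shuffle: ~225.6 bits
```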
###Markdown
EntropyAs self-information only measures the information of a single discrete event, we need a more generalized measure for any random variable of either discrete or continuous distribution. Motivating EntropyLet us try to get specific about what we want. This will be an informal statement of what are known as the *axioms of Shannon entropy*. It will turn out that the following collection of common-sense statements force us to a unique definition of information. A formal version of these axioms, along with several others may be found in :cite:`Csiszar.2008`.1. The information we gain by observing a random variable does not depend on what we call the elements, or the presence of additional elements which have probability zero.2. The information we gain by observing two random variables is no more than the sum of the information we gain by observing them separately. If they are independent, then it is exactly the sum.3. The information gained when observing (nearly) certain events is (nearly) zero.While proving this fact is beyond the scope of our text, it is important to know that this uniquely determines the form that entropy must take. The only ambiguity that these allow is in the choice of fundamental units, which is most often normalized by making the choice we saw before that the information provided by a single fair coin flip is one bit. DefinitionFor any random variable $X$ that follows a probability distribution $P$ with a probability density function (p.d.f.) or a probability mass function (p.m.f.) $p(x)$, we measure the expected amount of information through *entropy* (or *Shannon entropy*)$$H(X) = - E_{x \sim P} [\log p(x)].$$:eqlabel:`eq_ent_def`To be specific, if $X$ is discrete, $$H(X) = - \sum_i p_i \log p_i \text{, where } p_i = P(X_i).$$Otherwise, if $X$ is continuous, we also refer entropy as *differential entropy*$$H(X) = - \int_x p(x) \log p(x) \; dx.$$We can define entropy as below.
###Code
def entropy(p):
return nansum(- p * log2(p))
entropy(tf.constant([0.1, 0.5, 0.1, 0.3]))
###Output
_____no_output_____
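As a quick sanity check (an aside, not part of the original text), the fair coin flip mentioned earlier should come out to exactly one bit, and a fair four-sided die to two bits:

```python
print(entropy(tf.constant([0.5, 0.5])))                 # 1.0 bit
print(entropy(tf.constant([0.25, 0.25, 0.25, 0.25])))   # 2.0 bits
```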
###Markdown
InterpretationsYou may be curious: in the entropy definition :eqref:`eq_ent_def`, why do we use an expectation of a negative logarithm? Here are some intuitions.First, why do we use a *logarithm* function $\log$? Suppose that $p(x) = f_1(x) f_2(x) \ldots, f_n(x)$, where each component function $f_i(x)$ is independent from each other. This means that each $f_i(x)$ contributes independently to the total information obtained from $p(x)$. As discussed above, we want the entropy formula to be additive over independent random variables. Luckily, $\log$ can naturally turn a product of probability distributions to a summation of the individual terms.Next, why do we use a *negative* $\log$? Intuitively, more frequent events should contain less information than less common events, since we often gain more information from an unusual case than from an ordinary one. However, $\log$ is monotonically increasing with the probabilities, and indeed negative for all values in $[0, 1]$. We need to construct a monotonically decreasing relationship between the probability of events and their entropy, which will ideally be always positive (for nothing we observe should force us to forget what we have known). Hence, we add a negative sign in front of $\log$ function.Last, where does the *expectation* function come from? Consider a random variable $X$. We can interpret the self-information ($-\log(p)$) as the amount of *surprise* we have at seeing a particular outcome. Indeed, as the probability approaches zero, the surprise becomes infinite. Similarly, we can interpret the entropy as the average amount of surprise from observing $X$. For example, imagine that a slot machine system emits statistical independently symbols ${s_1, \ldots, s_k}$ with probabilities ${p_1, \ldots, p_k}$ respectively. Then the entropy of this system equals to the average self-information from observing each output, i.e.,$$H(S) = \sum_i {p_i \cdot I(s_i)} = - \sum_i {p_i \cdot \log p_i}.$$ Properties of EntropyBy the above examples and interpretations, we can derive the following properties of entropy :eqref:`eq_ent_def`. Here, we refer to $X$ as an event and $P$ as the probability distribution of $X$.* Entropy is non-negative, i.e., $H(X) \geq 0, \forall X$.* If $X \sim P$ with a p.d.f. or a p.m.f. $p(x)$, and we try to estimate $P$ by a new probability distribution $Q$ with a p.d.f. or a p.m.f. $q(x)$, then $$H(X) = - E_{x \sim P} [\log p(x)] \leq - E_{x \sim P} [\log q(x)], \text{ with equality if and only if } P = Q.$$ Alternatively, $H(X)$ gives a lower bound of the average number of bits needed to encode symbols drawn from $P$.* If $X \sim P$, then $x$ conveys the maximum amount of information if it spreads evenly among all possible outcomes. Specifically, if the probability distribution $P$ is discrete with $k$-class $\{p_1, \ldots, p_k \}$, then $$H(X) \leq \log(k), \text{ with equality if and only if } p_i = \frac{1}{k}, \forall i.$$ If $P$ is a continuous random variable, then the story becomes much more complicated. However, if we additionally impose that $P$ is supported on a finite interval (with all values between $0$ and $1$), then $P$ has the highest entropy if it is the uniform distribution on that interval. Mutual InformationPreviously we defined entropy of a single random variable $X$, how about the entropy of a pair random variables $(X, Y)$? We can think of these techniques as trying to answer the following type of question, "What information is contained in $X$ and $Y$ together compared to each separately? 
Is there redundant information, or is it all unique?"For the following discussion, we always use $(X, Y)$ as a pair of random variables that follows a joint probability distribution $P$ with a p.d.f. or a p.m.f. $p_{X, Y}(x, y)$, while $X$ and $Y$ follow probability distribution $p_X(x)$ and $p_Y(y)$, respectively. Joint EntropySimilar to entropy of a single random variable :eqref:`eq_ent_def`, we define the *joint entropy* $H(X, Y)$ of a pair random variables $(X, Y)$ as$$H(X, Y) = −E_{(x, y) \sim P} [\log p_{X, Y}(x, y)]. $$:eqlabel:`eq_joint_ent_def`Precisely, on the one hand, if $(X, Y)$ is a pair of discrete random variables, then$$H(X, Y) = - \sum_{x} \sum_{y} p_{X, Y}(x, y) \log p_{X, Y}(x, y).$$On the other hand, if $(X, Y)$ is a pair of continuous random variables, then we define the *differential joint entropy* as$$H(X, Y) = - \int_{x, y} p_{X, Y}(x, y) \ \log p_{X, Y}(x, y) \;dx \;dy.$$We can think of :eqref:`eq_joint_ent_def` as telling us the total randomness in the pair of random variables. As a pair of extremes, if $X = Y$ are two identical random variables, then the information in the pair is exactly the information in one and we have $H(X, Y) = H(X) = H(Y)$. On the other extreme, if $X$ and $Y$ are independent then $H(X, Y) = H(X) + H(Y)$. Indeed we will always have that the information contained in a pair of random variables is no smaller than the entropy of either random variable and no more than the sum of both.$$H(X), H(Y) \le H(X, Y) \le H(X) + H(Y).$$Let us implement joint entropy from scratch.
###Code
def joint_entropy(p_xy):
joint_ent = -p_xy * log2(p_xy)
# nansum will sum up the non-nan number
out = nansum(joint_ent)
return out
joint_entropy(tf.constant([[0.1, 0.5], [0.1, 0.3]]))
###Output
_____no_output_____
###Markdown
Notice that this is the same *code* as before, but now we interpret it differently as working on the joint distribution of the two random variables. Conditional EntropyThe joint entropy defined above the amount of information contained in a pair of random variables. This is useful, but oftentimes it is not what we care about. Consider the setting of machine learning. Let us take $X$ to be the random variable (or vector of random variables) that describes the pixel values of an image, and $Y$ to be the random variable which is the class label. $X$ should contain substantial information---a natural image is a complex thing. However, the information contained in $Y$ once the image has been show should be low. Indeed, the image of a digit should already contain the information about what digit it is unless the digit is illegible. Thus, to continue to extend our vocabulary of information theory, we need to be able to reason about the information content in a random variable conditional on another.In the probability theory, we saw the definition of the *conditional probability* to measure the relationship between variables. We now want to analogously define the *conditional entropy* $H(Y \mid X)$. We can write this as$$ H(Y \mid X) = - E_{(x, y) \sim P} [\log p(y \mid x)],$$:eqlabel:`eq_cond_ent_def`where $p(y \mid x) = \frac{p_{X, Y}(x, y)}{p_X(x)}$ is the conditional probability. Specifically, if $(X, Y)$ is a pair of discrete random variables, then$$H(Y \mid X) = - \sum_{x} \sum_{y} p(x, y) \log p(y \mid x).$$If $(X, Y)$ is a pair of continuous random variables, then the *differential conditional entropy* is similarly defined as$$H(Y \mid X) = - \int_x \int_y p(x, y) \ \log p(y \mid x) \;dx \;dy.$$It is now natural to ask, how does the *conditional entropy* $H(Y \mid X)$ relate to the entropy $H(X)$ and the joint entropy $H(X, Y)$? Using the definitions above, we can express this cleanly:$$H(Y \mid X) = H(X, Y) - H(X).$$This has an intuitive interpretation: the information in $Y$ given $X$ ($H(Y \mid X)$) is the same as the information in both $X$ and $Y$ together ($H(X, Y)$) minus the information already contained in $X$. This gives us the information in $Y$ which is not also represented in $X$.Now, let us implement conditional entropy :eqref:`eq_cond_ent_def` from scratch.
###Code
def conditional_entropy(p_xy, p_x):
p_y_given_x = p_xy/p_x
cond_ent = -p_xy * log2(p_y_given_x)
# nansum will sum up the non-nan number
out = nansum(cond_ent)
return out
conditional_entropy(tf.constant([[0.1, 0.5], [0.2, 0.3]]),
tf.constant([0.2, 0.8]))
###Output
_____no_output_____
###Markdown
Mutual InformationGiven the previous setting of random variables $(X, Y)$, you may wonder: "Now that we know how much information is contained in $Y$ but not in $X$, can we similarly ask how much information is shared between $X$ and $Y$?" The answer will be the *mutual information* of $(X, Y)$, which we will write as $I(X, Y)$.Rather than diving straight into the formal definition, let us practice our intuition by first trying to derive an expression for the mutual information entirely based on terms we have constructed before. We wish to find the information shared between two random variables. One way we could try to do this is to start with all the information contained in both $X$ and $Y$ together, and then we take off the parts that are not shared. The information contained in both $X$ and $Y$ together is written as $H(X, Y)$. We want to subtract from this the information contained in $X$ but not in $Y$, and the information contained in $Y$ but not in $X$. As we saw in the previous section, this is given by $H(X \mid Y)$ and $H(Y \mid X)$ respectively. Thus, we have that the mutual information should be$$I(X, Y) = H(X, Y) - H(Y \mid X) − H(X \mid Y).$$Indeed, this is a valid definition for the mutual information. If we expand out the definitions of these terms and combine them, a little algebra shows that this is the same as$$I(X, Y) = E_{x} E_{y} \left\{ p_{X, Y}(x, y) \log\frac{p_{X, Y}(x, y)}{p_X(x) p_Y(y)} \right\}. $$:eqlabel:`eq_mut_ent_def`We can summarize all of these relationships in image :numref:`fig_mutual_information`. It is an excellent test of intuition to see why the following statements are all also equivalent to $I(X, Y)$.* $H(X) − H(X \mid Y)$* $H(Y) − H(Y \mid X)$* $H(X) + H(Y) − H(X, Y)$:label:`fig_mutual_information`In many ways we can think of the mutual information :eqref:`eq_mut_ent_def` as principled extension of correlation coefficient we saw in :numref:`sec_random_variables`. This allows us to ask not only for linear relationships between variables, but for the maximum information shared between the two random variables of any kind.Now, let us implement mutual information from scratch.
###Code
def mutual_information(p_xy, p_x, p_y):
p = p_xy / (p_x * p_y)
mutual = p_xy * log2(p)
# Operator nansum will sum up the non-nan number
out = nansum(mutual)
return out
mutual_information(tf.constant([[0.1, 0.5], [0.1, 0.3]]),
tf.constant([0.2, 0.8]), tf.constant([[0.75, 0.25]]))
###Output
_____no_output_____
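As a sanity check of the identity $I(X, Y) = H(X) + H(Y) - H(X, Y)$ noted above, here is a small, self-consistent joint distribution (the numbers are illustrative and not from the original text); the entropies are computed with explicit sums to keep the shapes simple:

```python
# A consistent toy joint distribution (illustrative values)
p_xy = tf.constant([[0.4, 0.1], [0.1, 0.4]])
p_x = tf.reduce_sum(p_xy, axis=1)  # marginal of X: [0.5, 0.5]
p_y = tf.reduce_sum(p_xy, axis=0)  # marginal of Y: [0.5, 0.5]

h_x = -tf.reduce_sum(p_x * log2(p_x))
h_y = -tf.reduce_sum(p_y * log2(p_y))
h_xy = -tf.reduce_sum(p_xy * log2(p_xy))
# Roughly 0.278 bits, matching the direct definition of I(X, Y)
print((h_x + h_y - h_xy).numpy())
```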
###Markdown
Properties of Mutual InformationRather than memorizing the definition of mutual information :eqref:`eq_mut_ent_def`, you only need to keep in mind its notable properties:* Mutual information is symmetric, i.e., $I(X, Y) = I(Y, X)$.* Mutual information is non-negative, i.e., $I(X, Y) \geq 0$.* $I(X, Y) = 0$ if and only if $X$ and $Y$ are independent. For example, if $X$ and $Y$ are independent, then knowing $Y$ does not give any information about $X$ and vice versa, so their mutual information is zero.* Alternatively, if $X$ is an invertible function of $Y$, then $Y$ and $X$ share all information and $$I(X, Y) = H(Y) = H(X).$$ Pointwise Mutual InformationWhen we worked with entropy at the beginning of this chapter, we were able to provide an interpretation of $-\log(p_X(x))$ as how *surprised* we were with the particular outcome. We may give a similar interpretation to the logarithmic term in the mutual information, which is often referred to as the *pointwise mutual information*:$$\mathrm{pmi}(x, y) = \log\frac{p_{X, Y}(x, y)}{p_X(x) p_Y(y)}.$$:eqlabel:`eq_pmi_def`We can think of :eqref:`eq_pmi_def` as measuring how much more or less likely the specific combination of outcomes $x$ and $y$ are compared to what we would expect for independent random outcomes. If it is large and positive, then these two specific outcomes occur much more frequently than they would compared to random chance (*note*: the denominator is $p_X(x) p_Y(y)$ which is the probability of the two outcomes were independent), whereas if it is large and negative it represents the two outcomes happening far less than we would expect by random chance.This allows us to interpret the mutual information :eqref:`eq_mut_ent_def` as the average amount that we were surprised to see two outcomes occurring together compared to what we would expect if they were independent. Applications of Mutual InformationMutual information may be a little abstract in it pure definition, so how does it related to machine learning? In natural language processing, one of the most difficult problems is the *ambiguity resolution*, or the issue of the meaning of a word being unclear from context. For example, recently a headline in the news reported that "Amazon is on fire". You may wonder whether the company Amazon has a building on fire, or the Amazon rain forest is on fire.In this case, mutual information can help us resolve this ambiguity. We first find the group of words that each has a relatively large mutual information with the company Amazon, such as e-commerce, technology, and online. Second, we find another group of words that each has a relatively large mutual information with the Amazon rain forest, such as rain, forest, and tropical. When we need to disambiguate "Amazon", we can compare which group has more occurrence in the context of the word Amazon. In this case the article would go on to describe the forest, and make the context clear. Kullback–Leibler DivergenceAs what we have discussed in :numref:`sec_linear-algebra`, we can use norms to measure distance between two points in space of any dimensionality. We would like to be able to do a similar task with probability distributions. There are many ways to go about this, but information theory provides one of the nicest. We now explore the *Kullback–Leibler (KL) divergence*, which provides a way to measure if two distributions are close together or not. DefinitionGiven a random variable $X$ that follows the probability distribution $P$ with a p.d.f. or a p.m.f. 
$p(x)$, and suppose we estimate $P$ by another probability distribution $Q$ with a p.d.f. or a p.m.f. $q(x)$. Then the *Kullback–Leibler (KL) divergence* (or *relative entropy*) between $P$ and $Q$ is$$D_{\mathrm{KL}}(P\|Q) = E_{x \sim P} \left[ \log \frac{p(x)}{q(x)} \right].$$:eqlabel:`eq_kl_def`As with the pointwise mutual information :eqref:`eq_pmi_def`, we can again provide an interpretation of the logarithmic term: $-\log \frac{q(x)}{p(x)} = -\log(q(x)) - (-\log(p(x)))$ will be large and positive if we see $x$ far more often under $P$ than we would expect under $Q$, and large and negative if we see the outcome far less often than expected. In this way, we can interpret it as our *relative* surprise at observing the outcome compared to how surprised we would be observing it under our reference distribution.Let us implement the KL divergence from scratch.
###Code
def kl_divergence(p, q):
    # `log2` and `nansum` are helpers assumed to be defined earlier in this
    # notebook (a base-2 logarithm and a NaN-ignoring sum built on TensorFlow ops).
    kl = p * log2(p / q)
    out = nansum(kl)
    # Return the magnitude as a plain NumPy scalar.
    return tf.abs(out).numpy()
###Output
_____no_output_____
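###Markdown
Before moving on, here is a minimal sanity check of `kl_divergence` on two hand-built discrete distributions. This sketch assumes the `log2` and `nansum` helpers defined earlier in the notebook are still in scope, so the values are in bits; the toy distributions are chosen purely for illustration. It previews two facts discussed next: the divergence of a distribution from itself is zero, and the two directions $D_{\mathrm{KL}}(P\|Q)$ and $D_{\mathrm{KL}}(Q\|P)$ generally disagree.
###Code
# A skewed distribution and a uniform one over the same three outcomes.
p_toy = tf.constant([0.8, 0.1, 0.1])
q_toy = tf.constant([1/3, 1/3, 1/3])

# KL(P || P) is zero, while the two directions between P and Q disagree
# (roughly 0.66 vs. 0.74 bits for these particular values).
kl_divergence(p_toy, p_toy), kl_divergence(p_toy, q_toy), kl_divergence(q_toy, p_toy)
###Output
_____no_output_____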
###Markdown
 KL Divergence PropertiesLet us take a look at some properties of the KL divergence :eqref:`eq_kl_def`.* KL divergence is non-symmetric, i.e., there are $P,Q$ such that $$D_{\mathrm{KL}}(P\|Q) \neq D_{\mathrm{KL}}(Q\|P).$$* KL divergence is non-negative, i.e., $$D_{\mathrm{KL}}(P\|Q) \geq 0.$$ Note that the equality holds only when $P = Q$.* If there exists an $x$ such that $p(x) > 0$ and $q(x) = 0$, then $D_{\mathrm{KL}}(P\|Q) = \infty$.* There is a close relationship between KL divergence and mutual information. Besides the relationship shown in :numref:`fig_mutual_information`, $I(X, Y)$ is also numerically equivalent to the following terms: 1. $D_{\mathrm{KL}}(P(X, Y) \ \| \ P(X)P(Y))$; 1. $E_Y \{ D_{\mathrm{KL}}(P(X \mid Y) \ \| \ P(X)) \}$; 1. $E_X \{ D_{\mathrm{KL}}(P(Y \mid X) \ \| \ P(Y)) \}$. For the first term, we interpret mutual information as the KL divergence between $P(X, Y)$ and the product of $P(X)$ and $P(Y)$, so it is a measure of how different the joint distribution is from the distribution we would have if the variables were independent. For the second term, mutual information is the average reduction in uncertainty about $X$ that results from learning the value of $Y$; the third term reads analogously with the roles of $X$ and $Y$ swapped. ExampleLet us go through a toy example to see the non-symmetry explicitly.First, let us generate and sort three tensors of length $10,000$: an objective tensor $p$ which follows a normal distribution $N(0, 1)$, and two candidate tensors $q_1$ and $q_2$ which follow normal distributions $N(-1, 1)$ and $N(1, 1)$ respectively.
###Code
tensor_len = 10000
p = tf.random.normal((tensor_len, ), 0, 1)
q1 = tf.random.normal((tensor_len, ), -1, 1)
q2 = tf.random.normal((tensor_len, ), 1, 1)
p = tf.sort(p)
q1 = tf.sort(q1)
q2 = tf.sort(q2)
###Output
_____no_output_____
###Markdown
Since $q_1$ and $q_2$ are symmetric with respect to the y-axis (i.e., $x=0$), we expect similar values for $D_{\mathrm{KL}}(p\|q_1)$ and $D_{\mathrm{KL}}(p\|q_2)$. As you can see below, the two differ by less than 3%.
###Code
kl_pq1 = kl_divergence(p, q1)
kl_pq2 = kl_divergence(p, q2)
similar_percentage = abs(kl_pq1 - kl_pq2) / ((kl_pq1 + kl_pq2) / 2) * 100
kl_pq1, kl_pq2, similar_percentage
###Output
_____no_output_____
###Markdown
In contrast, you may find that $D_{\mathrm{KL}}(q_2 \|p)$ and $D_{\mathrm{KL}}(p \| q_2)$ differ substantially, by around 40%, as shown below.
###Code
kl_q2p = kl_divergence(q2, p)
differ_percentage = abs(kl_q2p - kl_pq2) / ((kl_q2p + kl_pq2) / 2) * 100
kl_q2p, differ_percentage
###Output
_____no_output_____
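###Markdown
The first identity in the list of properties above, $I(X, Y) = D_{\mathrm{KL}}(P(X, Y)\,\|\,P(X)P(Y))$, can also be checked directly on a small discrete example. The sketch below uses plain TensorFlow ops with the natural logarithm (so the result is in nats), builds a $2 \times 2$ joint distribution, forms the pointwise mutual information of each outcome pair, and averages it under the joint to obtain the mutual information; the table and variable names are illustrative only.
###Code
# Joint distribution of two binary random variables X and Y (rows: X, columns: Y).
p_xy = tf.constant([[0.4, 0.1],
                    [0.1, 0.4]])
p_x = tf.reduce_sum(p_xy, axis=1, keepdims=True)  # marginal of X, shape (2, 1)
p_y = tf.reduce_sum(p_xy, axis=0, keepdims=True)  # marginal of Y, shape (1, 2)

# Pointwise mutual information of every outcome pair, in nats.
pmi = tf.math.log(p_xy / (p_x * p_y))

# I(X, Y) = D_KL(P(X, Y) || P(X)P(Y)) = E_{(x, y) ~ P(X, Y)}[pmi(x, y)],
# about 0.19 nats for this table.
mutual_information = tf.reduce_sum(p_xy * pmi)
mutual_information
###Output
_____no_output_____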
###Markdown
 Cross EntropyIf you are curious about applications of information theory in deep learning, here is a quick example. We define the true distribution $P$ with probability distribution $p(x)$, and the estimated distribution $Q$ with probability distribution $q(x)$, and we will use them in the rest of this section.Say we need to solve a binary classification problem based on $n$ given data examples {$x_1, \ldots, x_n$}. Assume that we encode $1$ and $0$ as the positive and negative class labels $y_i$ respectively, and that our neural network is parameterized by $\theta$. If we aim to find the best $\theta$ so that $\hat{y}_i= p_{\theta}(y_i \mid x_i)$, it is natural to apply the maximum log-likelihood approach as was seen in :numref:`sec_maximum_likelihood`. To be specific, for true labels $y_i$ and predictions $\hat{y}_i= p_{\theta}(y_i \mid x_i)$, the probability of being classified as positive is $\pi_i= p_{\theta}(y_i = 1 \mid x_i)$. Hence, the log-likelihood function would be$$\begin{aligned}l(\theta) &= \log L(\theta) \\ &= \log \prod_{i=1}^n \pi_i^{y_i} (1 - \pi_i)^{1 - y_i} \\ &= \sum_{i=1}^n y_i \log(\pi_i) + (1 - y_i) \log (1 - \pi_i). \\\end{aligned}$$Maximizing the log-likelihood function $l(\theta)$ is identical to minimizing $- l(\theta)$, and hence we can find the best $\theta$ from here. To generalize the above loss to any distributions, we also call $-l(\theta)$ the *cross entropy loss* $\mathrm{CE}(y, \hat{y})$, where $y$ follows the true distribution $P$ and $\hat{y}$ follows the estimated distribution $Q$.This was all derived by working from the maximum likelihood point of view. However, if we look closely we can see that terms like $\log(\pi_i)$ have entered into our computation, which is a solid indication that we can understand the expression from an information theoretic point of view. Formal DefinitionLike KL divergence, for a random variable $X$, we can also measure the divergence between the estimated distribution $Q$ and the true distribution $P$ via *cross entropy*,$$\mathrm{CE}(P, Q) = - E_{x \sim P} [\log(q(x))].$$:eqlabel:`eq_ce_def`By using the properties of entropy discussed above, we can also interpret it as the sum of the entropy $H(P)$ and the KL divergence between $P$ and $Q$, i.e.,$$\mathrm{CE} (P, Q) = H(P) + D_{\mathrm{KL}}(P\|Q).$$We can implement the cross entropy loss as below.
###Code
def cross_entropy(y_hat, y):
    # Pick out the predicted probability of the true class of each example
    # and average the negative log-probabilities.
    ce = -tf.math.log(tf.gather(y_hat, y, axis=1, batch_dims=1))
    return tf.reduce_mean(ce)
###Output
_____no_output_____
###Markdown
Now define two tensors for the labels and predictions, and calculate the cross entropy loss for them.
###Code
labels = tf.constant([0, 2])
preds = tf.constant([[0.3, 0.6, 0.1], [0.2, 0.3, 0.5]])
cross_entropy(preds, labels)
###Output
_____no_output_____
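###Markdown
The decomposition $\mathrm{CE}(P, Q) = H(P) + D_{\mathrm{KL}}(P\|Q)$ stated above is easy to confirm numerically. The short sketch below does so for two hand-picked discrete distributions, using the natural logarithm throughout so that all three quantities are measured in nats; the distributions and names are illustrative only.
###Code
# Two discrete distributions over the same three outcomes.
p_dist = tf.constant([0.5, 0.25, 0.25])
q_dist = tf.constant([0.25, 0.5, 0.25])

ce_pq = -tf.reduce_sum(p_dist * tf.math.log(q_dist))          # cross entropy CE(P, Q)
h_p = -tf.reduce_sum(p_dist * tf.math.log(p_dist))            # entropy H(P)
kl_pq = tf.reduce_sum(p_dist * tf.math.log(p_dist / q_dist))  # D_KL(P || Q)

# Both values agree (about 1.21 nats).
ce_pq, h_p + kl_pq
###Output
_____no_output_____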
###Markdown
 PropertiesAs alluded to at the beginning of this section, cross entropy :eqref:`eq_ce_def` can be used to define a loss function in the optimization problem. It turns out that the following are equivalent:1. Maximizing the predictive probability of $Q$ for distribution $P$ (i.e., $E_{x\sim P} [\log (q(x))]$);1. Minimizing the cross entropy $\mathrm{CE} (P, Q)$;1. Minimizing the KL divergence $D_{\mathrm{KL}}(P\|Q)$.The definition of cross entropy indirectly proves the equivalence between objective 2 and objective 3, as long as the entropy of the true data $H(P)$ is constant. Cross Entropy as An Objective Function of Multi-class ClassificationIf we dive deep into the classification objective function with cross entropy loss $\mathrm{CE}$, we will find that minimizing $\mathrm{CE}$ is equivalent to maximizing the log-likelihood function $L$.To begin with, suppose that we are given a dataset with $n$ examples, and that it can be classified into $k$ classes. For each data example $i$, we represent the $k$-class label $\mathbf{y}_i = (y_{i1}, \ldots, y_{ik})$ by *one-hot encoding*. To be specific, if the example $i$ belongs to class $j$, then we set the $j$-th entry to $1$, and all other components to $0$, i.e.,$$ y_{ij} = \begin{cases}1 & \text{example } i \text{ belongs to class } j; \\ 0 &\text{otherwise.}\end{cases}$$For instance, if a multi-class classification problem contains three classes $A$, $B$, and $C$, then the labels $\mathbf{y}_i$ can be encoded in {$A: (1, 0, 0); B: (0, 1, 0); C: (0, 0, 1)$}.Assume that our neural network is parameterized by $\theta$. For true label vectors $\mathbf{y}_i$, the predictions are$$\hat{\mathbf{y}}_i= p_{\theta}(\mathbf{y}_i \mid \mathbf{x}_i) = \sum_{j=1}^k y_{ij} p_{\theta} (y_{ij} \mid \mathbf{x}_i).$$Hence, the *cross entropy loss* would be$$\mathrm{CE}(\mathbf{y}, \hat{\mathbf{y}}) = - \sum_{i=1}^n \mathbf{y}_i \log \hat{\mathbf{y}}_i = - \sum_{i=1}^n \sum_{j=1}^k y_{ij} \log{p_{\theta} (y_{ij} \mid \mathbf{x}_i)}.\\$$On the other hand, we can also approach the problem through maximum likelihood estimation. To begin with, let us quickly introduce the $k$-class multinoulli distribution. It is an extension of the Bernoulli distribution from binary class to multi-class. If a random variable $\mathbf{z} = (z_{1}, \ldots, z_{k})$ follows a $k$-class *multinoulli distribution* with probabilities $\mathbf{p} =$ ($p_{1}, \ldots, p_{k}$), i.e., $$p(\mathbf{z}) = p(z_1, \ldots, z_k) = \mathrm{Multi} (p_1, \ldots, p_k), \text{ where } \sum_{i=1}^k p_i = 1,$$ then the joint probability mass function (p.m.f.) of $\mathbf{z}$ is$$\mathbf{p}^\mathbf{z} = \prod_{j=1}^k p_{j}^{z_{j}}.$$It can be seen that the label of each data example, $\mathbf{y}_i$, follows a $k$-class multinoulli distribution with probabilities $\boldsymbol{\pi} =$ ($\pi_{1}, \ldots, \pi_{k}$). Therefore, the joint p.m.f. of each data example $\mathbf{y}_i$ is $\mathbf{\pi}^{\mathbf{y}_i} = \prod_{j=1}^k \pi_{j}^{y_{ij}}.$Hence, the log-likelihood function would be$$\begin{aligned}l(\theta) = \log L(\theta) = \log \prod_{i=1}^n \boldsymbol{\pi}^{\mathbf{y}_i} = \log \prod_{i=1}^n \prod_{j=1}^k \pi_{j}^{y_{ij}} = \sum_{i=1}^n \sum_{j=1}^k y_{ij} \log{\pi_{j}}.\\\end{aligned}$$In maximum likelihood estimation, we maximize the objective function $l(\theta)$ with $\pi_{j} = p_{\theta} (y_{ij} \mid \mathbf{x}_i)$. 
Therefore, for any multi-class classification, maximizing the above log-likelihood function $l(\theta)$ is equivalent to minimizing the CE loss $\mathrm{CE}(y, \hat{y})$.To test the above argument, let us compute the negative log-likelihood directly with Keras's built-in categorical cross entropy. Using the same `labels` and `preds` as in the earlier example, we get the same numerical loss as in the previous example up to 5 decimal places.
###Code
def nll_loss(y_hat, y):
    # Convert labels to a one-hot class matrix.
    y = tf.keras.utils.to_categorical(y, num_classes=3)
    # `y_hat` holds log-probabilities, so we pass them as logits: applying
    # softmax to log(preds) recovers preds, because each row sums to one.
    ce = tf.keras.losses.categorical_crossentropy(y, y_hat, from_logits=True)
    # Average the per-example negative log-likelihoods.
    return tf.reduce_mean(ce)
loss = nll_loss(tf.math.log(preds), labels)
loss
###Output
_____no_output_____
Model backlog/Models/68-openvaccine-tf-bahdanau-attention-safe-features.ipynb
Dependencies
###Code
from openvaccine_scripts import *
import warnings, json
from sklearn.model_selection import KFold, StratifiedKFold
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras import optimizers, losses, Model
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau
SEED = 0
seed_everything(SEED)
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Model parameters
###Code
config = {
"BATCH_SIZE": 32,
"EPOCHS": 60,
"LEARNING_RATE": 1e-3,
"ES_PATIENCE": 10,
"N_FOLDS": 5,
"N_USED_FOLDS": 5,
"PB_SEQ_LEN": 107,
"PV_SEQ_LEN": 130,
}
with open('config.json', 'w') as json_file:
json.dump(json.loads(json.dumps(config)), json_file)
config
###Output
_____no_output_____
###Markdown
Load data
###Code
database_base_path = '/kaggle/input/stanford-covid-vaccine/'
train = pd.read_json(database_base_path + 'train.json', lines=True)
test = pd.read_json(database_base_path + 'test.json', lines=True)
print('Train samples: %d' % len(train))
display(train.head())
print(f'Test samples: {len(test)}')
display(test.head())
###Output
Train samples: 2400
###Markdown
Auxiliary functions
###Code
def get_dataset(x, y=None, sample_weights=None, labeled=True, shuffled=True, batch_size=32, buffer_size=-1, seed=0):
input_map = {'inputs_seq': x['sequence'],
'inputs_struct': x['structure'],
'inputs_loop': x['predicted_loop_type'],
'inputs_bpps_max': x['bpps_max'],
'inputs_bpps_sum': x['bpps_sum'],
'inputs_bpps_mean': x['bpps_mean'],
'inputs_bpps_scaled': x['bpps_scaled']}
if labeled:
output_map = {'output_react': y['reactivity'],
'output_bg_ph': y['deg_Mg_pH10'],
'output_ph': y['deg_pH10'],
'output_mg_c': y['deg_Mg_50C'],
'output_c': y['deg_50C']}
if sample_weights is not None:
dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map, sample_weights))
else:
dataset = tf.data.Dataset.from_tensor_slices((input_map, output_map))
else:
dataset = tf.data.Dataset.from_tensor_slices((input_map))
if shuffled:
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size)
dataset = dataset.prefetch(buffer_size)
return dataset
###Output
_____no_output_____
###Markdown
Model
###Code
def model_fn(hidden_dim=384, dropout=.5, pred_len=68, n_outputs=5):
inputs_seq = L.Input(shape=(None, 1), name='inputs_seq')
inputs_struct = L.Input(shape=(None, 1), name='inputs_struct')
inputs_loop = L.Input(shape=(None, 1), name='inputs_loop')
inputs_bpps_max = L.Input(shape=(None, 1), name='inputs_bpps_max')
inputs_bpps_sum = L.Input(shape=(None, 1), name='inputs_bpps_sum')
inputs_bpps_mean = L.Input(shape=(None, 1), name='inputs_bpps_mean')
inputs_bpps_scaled = L.Input(shape=(None, 1), name='inputs_bpps_scaled')
def _one_hot(x, num_classes):
return K.squeeze(K.one_hot(K.cast(x, 'uint8'), num_classes=num_classes), axis=2)
ohe_seq = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_seq)}, input_shape=(None, 1))(inputs_seq)
ohe_struct = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_struct)}, input_shape=(None, 1))(inputs_struct)
ohe_loop = L.Lambda(_one_hot, arguments={'num_classes': len(token2int_loop)}, input_shape=(None, 1))(inputs_loop)
### Encoder block
# Conv block
conv_seq = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(ohe_seq)
conv_struct = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(ohe_struct)
conv_loop = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(ohe_loop)
conv_bpps_max = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(inputs_bpps_max)
conv_bpps_sum = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(inputs_bpps_sum)
# conv_bpps_mean = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(inputs_bpps_mean)
# conv_bpps_scaled = L.Conv1D(filters=64, kernel_size=5, strides=1, padding='same')(inputs_bpps_scaled)
x_concat = L.concatenate([conv_seq, conv_struct, conv_loop, conv_bpps_max, conv_bpps_sum], axis=-1, name='conv_concatenate')
# Recurrent block
encoder, encoder_state_f, encoder_state_b = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True,
return_state=True, kernel_initializer='orthogonal'),
name='Encoder_RNN')(x_concat)
### Decoder block
decoder = L.Bidirectional(L.GRU(hidden_dim, dropout=dropout, return_sequences=True, kernel_initializer='orthogonal'),
name='Decoder')(encoder, initial_state=[encoder_state_f, encoder_state_b])
# Attention
attention = L.AdditiveAttention()([encoder, decoder])
attention = L.concatenate([attention, decoder])
# Since we are only making predictions on the first part of each sequence, we have to truncate it
decoder_truncated = attention[:, :pred_len]
output_react = L.Dense(1, name='output_react')(decoder_truncated)
output_bg_ph = L.Dense(1, name='output_bg_ph')(decoder_truncated)
output_ph = L.Dense(1, name='output_ph')(decoder_truncated)
output_mg_c = L.Dense(1, name='output_mg_c')(decoder_truncated)
output_c = L.Dense(1, name='output_c')(decoder_truncated)
model = Model(inputs=[inputs_seq, inputs_struct, inputs_loop, inputs_bpps_max,
inputs_bpps_sum, inputs_bpps_mean, inputs_bpps_scaled],
outputs=[output_react, output_bg_ph, output_ph, output_mg_c, output_c])
opt = optimizers.Adam(learning_rate=config['LEARNING_RATE'])
model.compile(optimizer=opt, loss={'output_react': MCRMSE,
'output_bg_ph': MCRMSE,
'output_ph': MCRMSE,
'output_mg_c': MCRMSE,
'output_c': MCRMSE},
loss_weights={'output_react': 2.,
'output_bg_ph': 2.,
'output_ph': 1.,
'output_mg_c': 2.,
'output_c': 1.})
return model
model = model_fn()
model.summary()
###Output
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
inputs_seq (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
inputs_struct (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
inputs_loop (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
lambda (Lambda) (None, None, 4) 0 inputs_seq[0][0]
__________________________________________________________________________________________________
lambda_1 (Lambda) (None, None, 3) 0 inputs_struct[0][0]
__________________________________________________________________________________________________
lambda_2 (Lambda) (None, None, 7) 0 inputs_loop[0][0]
__________________________________________________________________________________________________
inputs_bpps_max (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
inputs_bpps_sum (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
conv1d (Conv1D) (None, None, 64) 1344 lambda[0][0]
__________________________________________________________________________________________________
conv1d_1 (Conv1D) (None, None, 64) 1024 lambda_1[0][0]
__________________________________________________________________________________________________
conv1d_2 (Conv1D) (None, None, 64) 2304 lambda_2[0][0]
__________________________________________________________________________________________________
conv1d_3 (Conv1D) (None, None, 64) 384 inputs_bpps_max[0][0]
__________________________________________________________________________________________________
conv1d_4 (Conv1D) (None, None, 64) 384 inputs_bpps_sum[0][0]
__________________________________________________________________________________________________
conv_concatenate (Concatenate) (None, None, 320) 0 conv1d[0][0]
conv1d_1[0][0]
conv1d_2[0][0]
conv1d_3[0][0]
conv1d_4[0][0]
__________________________________________________________________________________________________
Encoder_RNN (Bidirectional) [(None, None, 768), 1626624 conv_concatenate[0][0]
__________________________________________________________________________________________________
Decoder (Bidirectional) (None, None, 768) 2658816 Encoder_RNN[0][0]
Encoder_RNN[0][1]
Encoder_RNN[0][2]
__________________________________________________________________________________________________
additive_attention (AdditiveAtt (None, None, 768) 768 Encoder_RNN[0][0]
Decoder[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, None, 1536) 0 additive_attention[0][0]
Decoder[0][0]
__________________________________________________________________________________________________
tf_op_layer_strided_slice (Tens [(None, None, 1536)] 0 concatenate[0][0]
__________________________________________________________________________________________________
inputs_bpps_mean (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
inputs_bpps_scaled (InputLayer) [(None, None, 1)] 0
__________________________________________________________________________________________________
output_react (Dense) (None, None, 1) 1537 tf_op_layer_strided_slice[0][0]
__________________________________________________________________________________________________
output_bg_ph (Dense) (None, None, 1) 1537 tf_op_layer_strided_slice[0][0]
__________________________________________________________________________________________________
output_ph (Dense) (None, None, 1) 1537 tf_op_layer_strided_slice[0][0]
__________________________________________________________________________________________________
output_mg_c (Dense) (None, None, 1) 1537 tf_op_layer_strided_slice[0][0]
__________________________________________________________________________________________________
output_c (Dense) (None, None, 1) 1537 tf_op_layer_strided_slice[0][0]
==================================================================================================
Total params: 4,299,333
Trainable params: 4,299,333
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
Pre-process
###Code
# Add bpps as features
bpps_max = []
bpps_sum = []
bpps_mean = []
bpps_scaled = []
bpps_nb_mean = 0.077522 # mean of bpps_nb across all training data
bpps_nb_std = 0.08914 # std of bpps_nb across all training data
for row in train.itertuples():
probability = np.load(f'{database_base_path}/bpps/{row.id}.npy')
bpps_max.append(probability.max(-1).tolist())
bpps_sum.append((1-probability.sum(-1)).tolist())
bpps_mean.append((1-probability.mean(-1)).tolist())
# bpps nb
bpps_nb = (probability > 0).sum(axis=0) / probability.shape[0]
bpps_nb = (bpps_nb - bpps_nb_mean) / bpps_nb_std
bpps_scaled.append(bpps_nb)
train = train.assign(bpps_max=bpps_max, bpps_sum=bpps_sum, bpps_mean=bpps_mean, bpps_scaled=bpps_scaled)
bpps_max = []
bpps_sum = []
bpps_mean = []
bpps_scaled = []
for row in test.itertuples():
probability = np.load(f'{database_base_path}/bpps/{row.id}.npy')
bpps_max.append(probability.max(-1).tolist())
bpps_sum.append((1-probability.sum(-1)).tolist())
bpps_mean.append((1-probability.mean(-1)).tolist())
# bpps nb
bpps_nb = (probability > 0).sum(axis=0) / probability.shape[0]
bpps_nb = (bpps_nb - bpps_nb_mean) / bpps_nb_std
bpps_scaled.append(bpps_nb)
test = test.assign(bpps_max=bpps_max, bpps_sum=bpps_sum, bpps_mean=bpps_mean, bpps_scaled=bpps_scaled)
feature_cols = ['sequence', 'structure', 'predicted_loop_type', 'bpps_max', 'bpps_sum', 'bpps_mean', 'bpps_scaled']
pred_cols = ['reactivity', 'deg_Mg_pH10', 'deg_pH10', 'deg_Mg_50C', 'deg_50C']
encoder_list = [token2int_seq, token2int_struct, token2int_loop, None, None, None, None]
public_test = test.query("seq_length == 107").copy()
private_test = test.query("seq_length == 130").copy()
x_test_public = get_features_dict(public_test, feature_cols, encoder_list, public_test.index)
x_test_private = get_features_dict(private_test, feature_cols, encoder_list, private_test.index)
# To use as stratified col
train['signal_to_noise_int'] = train['signal_to_noise'].astype(int)
###Output
_____no_output_____
###Markdown
Training
###Code
AUTO = tf.data.experimental.AUTOTUNE
skf = KFold(n_splits=config['N_FOLDS'], shuffle=True, random_state=SEED)
history_list = []
oof = train[['id', 'SN_filter', 'signal_to_noise'] + pred_cols].copy()
oof_preds = np.zeros((len(train), 68, len(pred_cols)))
test_public_preds = np.zeros((len(public_test), config['PB_SEQ_LEN'], len(pred_cols)))
test_private_preds = np.zeros((len(private_test), config['PV_SEQ_LEN'], len(pred_cols)))
for fold,(train_idx, valid_idx) in enumerate(skf.split(train['signal_to_noise_int'])):
if fold >= config['N_USED_FOLDS']:
break
print(f'\nFOLD: {fold+1}')
### Create datasets
x_train = get_features_dict(train, feature_cols, encoder_list, train_idx)
x_valid = get_features_dict(train, feature_cols, encoder_list, valid_idx)
y_train = get_targets_dict(train, pred_cols, train_idx)
y_valid = get_targets_dict(train, pred_cols, valid_idx)
w_train = np.log(train.iloc[train_idx]['signal_to_noise'].values+1.2)+1
w_valid = np.log(train.iloc[valid_idx]['signal_to_noise'].values+1.2)+1
train_ds = get_dataset(x_train, y_train, w_train, labeled=True, shuffled=True, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
valid_ds = get_dataset(x_valid, y_valid, w_valid, labeled=True, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
oof_ds = get_dataset(get_features_dict(train, feature_cols, encoder_list, valid_idx), labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
test_public_ds = get_dataset(x_test_public, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
test_private_ds = get_dataset(x_test_private, labeled=False, shuffled=False, batch_size=config['BATCH_SIZE'], buffer_size=AUTO, seed=SEED)
### Model
K.clear_session()
model = model_fn()
model_path = f'model_{fold}.h5'
es = EarlyStopping(monitor='val_loss', mode='min', patience=config['ES_PATIENCE'], restore_best_weights=True, verbose=1)
rlrp = ReduceLROnPlateau(monitor='val_loss', mode='min', factor=0.1, patience=5, verbose=1)
### Train
history = model.fit(train_ds,
validation_data=valid_ds,
callbacks=[es, rlrp],
epochs=config['EPOCHS'],
batch_size=config['BATCH_SIZE'],
verbose=2).history
history_list.append(history)
# Save last model weights
model.save_weights(model_path)
### Inference
oof_ds_preds = np.array(model.predict(oof_ds)).reshape((len(pred_cols), len(valid_idx), 68)).transpose((1, 2, 0))
oof_preds[valid_idx] = oof_ds_preds
# Short sequence (public test)
model = model_fn(pred_len=config['PB_SEQ_LEN'])
model.load_weights(model_path)
test_public_ds_preds = np.array(model.predict(test_public_ds)).reshape((len(pred_cols), len(public_test),
config['PB_SEQ_LEN'])).transpose((1, 2, 0))
test_public_preds += test_public_ds_preds * (1 / config['N_USED_FOLDS'])
# Long sequence (private test)
model = model_fn(pred_len=config['PV_SEQ_LEN'])
model.load_weights(model_path)
test_private_ds_preds = np.array(model.predict(test_private_ds)).reshape((len(pred_cols), len(private_test),
config['PV_SEQ_LEN'])).transpose((1, 2, 0))
test_private_preds += test_private_ds_preds * (1 / config['N_USED_FOLDS'])
###Output
FOLD: 1
Epoch 1/60
60/60 - 9s - loss: 9.6114 - output_react_loss: 1.0106 - output_bg_ph_loss: 1.2335 - output_ph_loss: 1.4490 - output_mg_c_loss: 1.2416 - output_c_loss: 1.1910 - val_loss: 8.0061 - val_output_react_loss: 0.8309 - val_output_bg_ph_loss: 1.0443 - val_output_ph_loss: 1.1722 - val_output_mg_c_loss: 1.0316 - val_output_c_loss: 1.0202
Epoch 2/60
60/60 - 7s - loss: 8.3061 - output_react_loss: 0.8861 - output_bg_ph_loss: 1.0579 - output_ph_loss: 1.2363 - output_mg_c_loss: 1.0545 - output_c_loss: 1.0728 - val_loss: 7.5913 - val_output_react_loss: 0.7936 - val_output_bg_ph_loss: 0.9828 - val_output_ph_loss: 1.1206 - val_output_mg_c_loss: 0.9720 - val_output_c_loss: 0.9740
Epoch 3/60
60/60 - 7s - loss: 7.9838 - output_react_loss: 0.8504 - output_bg_ph_loss: 1.0109 - output_ph_loss: 1.1993 - output_mg_c_loss: 1.0108 - output_c_loss: 1.0402 - val_loss: 7.3731 - val_output_react_loss: 0.7629 - val_output_bg_ph_loss: 0.9557 - val_output_ph_loss: 1.0869 - val_output_mg_c_loss: 0.9482 - val_output_c_loss: 0.9527
Epoch 4/60
60/60 - 7s - loss: 7.7261 - output_react_loss: 0.8248 - output_bg_ph_loss: 0.9751 - output_ph_loss: 1.1700 - output_mg_c_loss: 0.9728 - output_c_loss: 1.0107 - val_loss: 7.1011 - val_output_react_loss: 0.7538 - val_output_bg_ph_loss: 0.9098 - val_output_ph_loss: 1.0551 - val_output_mg_c_loss: 0.9004 - val_output_c_loss: 0.9178
Epoch 5/60
60/60 - 7s - loss: 7.5131 - output_react_loss: 0.8065 - output_bg_ph_loss: 0.9463 - output_ph_loss: 1.1396 - output_mg_c_loss: 0.9395 - output_c_loss: 0.9888 - val_loss: 6.7921 - val_output_react_loss: 0.7254 - val_output_bg_ph_loss: 0.8703 - val_output_ph_loss: 1.0182 - val_output_mg_c_loss: 0.8512 - val_output_c_loss: 0.8802
Epoch 6/60
60/60 - 7s - loss: 7.2731 - output_react_loss: 0.7879 - output_bg_ph_loss: 0.9076 - output_ph_loss: 1.1104 - output_mg_c_loss: 0.9052 - output_c_loss: 0.9611 - val_loss: 6.6382 - val_output_react_loss: 0.7281 - val_output_bg_ph_loss: 0.8380 - val_output_ph_loss: 0.9963 - val_output_mg_c_loss: 0.8225 - val_output_c_loss: 0.8647
Epoch 7/60
60/60 - 7s - loss: 7.1074 - output_react_loss: 0.7769 - output_bg_ph_loss: 0.8800 - output_ph_loss: 1.0897 - output_mg_c_loss: 0.8800 - output_c_loss: 0.9440 - val_loss: 6.4173 - val_output_react_loss: 0.6937 - val_output_bg_ph_loss: 0.8091 - val_output_ph_loss: 0.9707 - val_output_mg_c_loss: 0.7999 - val_output_c_loss: 0.8412
Epoch 8/60
60/60 - 7s - loss: 6.9231 - output_react_loss: 0.7578 - output_bg_ph_loss: 0.8586 - output_ph_loss: 1.0605 - output_mg_c_loss: 0.8542 - output_c_loss: 0.9214 - val_loss: 6.4351 - val_output_react_loss: 0.6952 - val_output_bg_ph_loss: 0.8122 - val_output_ph_loss: 0.9649 - val_output_mg_c_loss: 0.8079 - val_output_c_loss: 0.8397
Epoch 9/60
60/60 - 7s - loss: 6.8027 - output_react_loss: 0.7528 - output_bg_ph_loss: 0.8371 - output_ph_loss: 1.0430 - output_mg_c_loss: 0.8360 - output_c_loss: 0.9080 - val_loss: 6.1733 - val_output_react_loss: 0.6792 - val_output_bg_ph_loss: 0.7774 - val_output_ph_loss: 0.9300 - val_output_mg_c_loss: 0.7587 - val_output_c_loss: 0.8129
Epoch 10/60
60/60 - 7s - loss: 6.6764 - output_react_loss: 0.7417 - output_bg_ph_loss: 0.8243 - output_ph_loss: 1.0210 - output_mg_c_loss: 0.8156 - output_c_loss: 0.8922 - val_loss: 6.0550 - val_output_react_loss: 0.6667 - val_output_bg_ph_loss: 0.7686 - val_output_ph_loss: 0.9061 - val_output_mg_c_loss: 0.7450 - val_output_c_loss: 0.7882
Epoch 11/60
60/60 - 7s - loss: 6.5587 - output_react_loss: 0.7268 - output_bg_ph_loss: 0.8098 - output_ph_loss: 1.0075 - output_mg_c_loss: 0.7990 - output_c_loss: 0.8800 - val_loss: 6.0169 - val_output_react_loss: 0.6662 - val_output_bg_ph_loss: 0.7599 - val_output_ph_loss: 0.9016 - val_output_mg_c_loss: 0.7373 - val_output_c_loss: 0.7885
Epoch 12/60
60/60 - 7s - loss: 6.4650 - output_react_loss: 0.7174 - output_bg_ph_loss: 0.8003 - output_ph_loss: 0.9940 - output_mg_c_loss: 0.7841 - output_c_loss: 0.8674 - val_loss: 5.9529 - val_output_react_loss: 0.6551 - val_output_bg_ph_loss: 0.7600 - val_output_ph_loss: 0.8898 - val_output_mg_c_loss: 0.7306 - val_output_c_loss: 0.7718
Epoch 13/60
60/60 - 7s - loss: 6.3701 - output_react_loss: 0.7090 - output_bg_ph_loss: 0.7859 - output_ph_loss: 0.9782 - output_mg_c_loss: 0.7722 - output_c_loss: 0.8577 - val_loss: 5.9184 - val_output_react_loss: 0.6489 - val_output_bg_ph_loss: 0.7514 - val_output_ph_loss: 0.8894 - val_output_mg_c_loss: 0.7267 - val_output_c_loss: 0.7752
Epoch 14/60
60/60 - 7s - loss: 6.2772 - output_react_loss: 0.6981 - output_bg_ph_loss: 0.7720 - output_ph_loss: 0.9684 - output_mg_c_loss: 0.7598 - output_c_loss: 0.8493 - val_loss: 5.8356 - val_output_react_loss: 0.6398 - val_output_bg_ph_loss: 0.7450 - val_output_ph_loss: 0.8785 - val_output_mg_c_loss: 0.7107 - val_output_c_loss: 0.7661
Epoch 15/60
60/60 - 7s - loss: 6.2121 - output_react_loss: 0.6918 - output_bg_ph_loss: 0.7652 - output_ph_loss: 0.9620 - output_mg_c_loss: 0.7480 - output_c_loss: 0.8401 - val_loss: 5.8153 - val_output_react_loss: 0.6419 - val_output_bg_ph_loss: 0.7385 - val_output_ph_loss: 0.8820 - val_output_mg_c_loss: 0.7065 - val_output_c_loss: 0.7597
Epoch 16/60
60/60 - 7s - loss: 6.1506 - output_react_loss: 0.6858 - output_bg_ph_loss: 0.7540 - output_ph_loss: 0.9527 - output_mg_c_loss: 0.7415 - output_c_loss: 0.8354 - val_loss: 5.7829 - val_output_react_loss: 0.6358 - val_output_bg_ph_loss: 0.7363 - val_output_ph_loss: 0.8636 - val_output_mg_c_loss: 0.7066 - val_output_c_loss: 0.7619
Epoch 17/60
60/60 - 7s - loss: 6.0583 - output_react_loss: 0.6751 - output_bg_ph_loss: 0.7415 - output_ph_loss: 0.9417 - output_mg_c_loss: 0.7275 - output_c_loss: 0.8284 - val_loss: 5.9016 - val_output_react_loss: 0.6365 - val_output_bg_ph_loss: 0.7520 - val_output_ph_loss: 0.8843 - val_output_mg_c_loss: 0.7330 - val_output_c_loss: 0.7741
Epoch 18/60
60/60 - 7s - loss: 6.0062 - output_react_loss: 0.6736 - output_bg_ph_loss: 0.7328 - output_ph_loss: 0.9312 - output_mg_c_loss: 0.7211 - output_c_loss: 0.8200 - val_loss: 5.7432 - val_output_react_loss: 0.6325 - val_output_bg_ph_loss: 0.7266 - val_output_ph_loss: 0.8528 - val_output_mg_c_loss: 0.7100 - val_output_c_loss: 0.7524
Epoch 19/60
60/60 - 7s - loss: 5.9459 - output_react_loss: 0.6649 - output_bg_ph_loss: 0.7248 - output_ph_loss: 0.9253 - output_mg_c_loss: 0.7118 - output_c_loss: 0.8175 - val_loss: 5.7190 - val_output_react_loss: 0.6206 - val_output_bg_ph_loss: 0.7338 - val_output_ph_loss: 0.8763 - val_output_mg_c_loss: 0.6922 - val_output_c_loss: 0.7495
Epoch 20/60
60/60 - 7s - loss: 5.8505 - output_react_loss: 0.6569 - output_bg_ph_loss: 0.7098 - output_ph_loss: 0.9160 - output_mg_c_loss: 0.6956 - output_c_loss: 0.8097 - val_loss: 5.7082 - val_output_react_loss: 0.6242 - val_output_bg_ph_loss: 0.7298 - val_output_ph_loss: 0.8536 - val_output_mg_c_loss: 0.7005 - val_output_c_loss: 0.7455
Epoch 21/60
60/60 - 7s - loss: 5.8117 - output_react_loss: 0.6528 - output_bg_ph_loss: 0.7055 - output_ph_loss: 0.9069 - output_mg_c_loss: 0.6925 - output_c_loss: 0.8034 - val_loss: 5.6943 - val_output_react_loss: 0.6207 - val_output_bg_ph_loss: 0.7276 - val_output_ph_loss: 0.8613 - val_output_mg_c_loss: 0.6934 - val_output_c_loss: 0.7495
Epoch 22/60
60/60 - 7s - loss: 5.7288 - output_react_loss: 0.6432 - output_bg_ph_loss: 0.6931 - output_ph_loss: 0.8982 - output_mg_c_loss: 0.6791 - output_c_loss: 0.7999 - val_loss: 5.6630 - val_output_react_loss: 0.6222 - val_output_bg_ph_loss: 0.7239 - val_output_ph_loss: 0.8444 - val_output_mg_c_loss: 0.6931 - val_output_c_loss: 0.7403
Epoch 23/60
60/60 - 7s - loss: 5.6748 - output_react_loss: 0.6367 - output_bg_ph_loss: 0.6837 - output_ph_loss: 0.8935 - output_mg_c_loss: 0.6726 - output_c_loss: 0.7952 - val_loss: 5.6628 - val_output_react_loss: 0.6241 - val_output_bg_ph_loss: 0.7220 - val_output_ph_loss: 0.8437 - val_output_mg_c_loss: 0.6943 - val_output_c_loss: 0.7384
Epoch 24/60
60/60 - 7s - loss: 5.6290 - output_react_loss: 0.6320 - output_bg_ph_loss: 0.6777 - output_ph_loss: 0.8847 - output_mg_c_loss: 0.6669 - output_c_loss: 0.7911 - val_loss: 5.6139 - val_output_react_loss: 0.6153 - val_output_bg_ph_loss: 0.7139 - val_output_ph_loss: 0.8435 - val_output_mg_c_loss: 0.6860 - val_output_c_loss: 0.7400
Epoch 25/60
60/60 - 7s - loss: 5.5753 - output_react_loss: 0.6251 - output_bg_ph_loss: 0.6702 - output_ph_loss: 0.8790 - output_mg_c_loss: 0.6603 - output_c_loss: 0.7850 - val_loss: 5.6010 - val_output_react_loss: 0.6133 - val_output_bg_ph_loss: 0.7129 - val_output_ph_loss: 0.8384 - val_output_mg_c_loss: 0.6860 - val_output_c_loss: 0.7383
Epoch 26/60
60/60 - 7s - loss: 5.5158 - output_react_loss: 0.6173 - output_bg_ph_loss: 0.6623 - output_ph_loss: 0.8720 - output_mg_c_loss: 0.6518 - output_c_loss: 0.7809 - val_loss: 5.5886 - val_output_react_loss: 0.6095 - val_output_bg_ph_loss: 0.7107 - val_output_ph_loss: 0.8480 - val_output_mg_c_loss: 0.6832 - val_output_c_loss: 0.7339
Epoch 27/60
60/60 - 7s - loss: 5.4767 - output_react_loss: 0.6126 - output_bg_ph_loss: 0.6568 - output_ph_loss: 0.8670 - output_mg_c_loss: 0.6466 - output_c_loss: 0.7778 - val_loss: 5.5616 - val_output_react_loss: 0.6103 - val_output_bg_ph_loss: 0.7066 - val_output_ph_loss: 0.8333 - val_output_mg_c_loss: 0.6781 - val_output_c_loss: 0.7383
Epoch 28/60
60/60 - 7s - loss: 5.4212 - output_react_loss: 0.6046 - output_bg_ph_loss: 0.6499 - output_ph_loss: 0.8600 - output_mg_c_loss: 0.6393 - output_c_loss: 0.7735 - val_loss: 5.5776 - val_output_react_loss: 0.6086 - val_output_bg_ph_loss: 0.7127 - val_output_ph_loss: 0.8372 - val_output_mg_c_loss: 0.6819 - val_output_c_loss: 0.7338
Epoch 29/60
60/60 - 7s - loss: 5.3809 - output_react_loss: 0.6031 - output_bg_ph_loss: 0.6394 - output_ph_loss: 0.8575 - output_mg_c_loss: 0.6325 - output_c_loss: 0.7733 - val_loss: 5.6088 - val_output_react_loss: 0.6149 - val_output_bg_ph_loss: 0.7163 - val_output_ph_loss: 0.8359 - val_output_mg_c_loss: 0.6861 - val_output_c_loss: 0.7384
Epoch 30/60
60/60 - 7s - loss: 5.3584 - output_react_loss: 0.6021 - output_bg_ph_loss: 0.6355 - output_ph_loss: 0.8539 - output_mg_c_loss: 0.6302 - output_c_loss: 0.7688 - val_loss: 5.6076 - val_output_react_loss: 0.6082 - val_output_bg_ph_loss: 0.7175 - val_output_ph_loss: 0.8348 - val_output_mg_c_loss: 0.6876 - val_output_c_loss: 0.7461
Epoch 31/60
60/60 - 7s - loss: 5.3068 - output_react_loss: 0.5926 - output_bg_ph_loss: 0.6294 - output_ph_loss: 0.8475 - output_mg_c_loss: 0.6253 - output_c_loss: 0.7647 - val_loss: 5.5944 - val_output_react_loss: 0.6070 - val_output_bg_ph_loss: 0.7131 - val_output_ph_loss: 0.8335 - val_output_mg_c_loss: 0.6934 - val_output_c_loss: 0.7340
Epoch 32/60
Epoch 00032: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
60/60 - 7s - loss: 5.2625 - output_react_loss: 0.5868 - output_bg_ph_loss: 0.6243 - output_ph_loss: 0.8413 - output_mg_c_loss: 0.6191 - output_c_loss: 0.7608 - val_loss: 5.5880 - val_output_react_loss: 0.6102 - val_output_bg_ph_loss: 0.7181 - val_output_ph_loss: 0.8385 - val_output_mg_c_loss: 0.6796 - val_output_c_loss: 0.7335
Epoch 33/60
60/60 - 7s - loss: 5.0928 - output_react_loss: 0.5660 - output_bg_ph_loss: 0.6001 - output_ph_loss: 0.8231 - output_mg_c_loss: 0.5961 - output_c_loss: 0.7455 - val_loss: 5.4791 - val_output_react_loss: 0.5979 - val_output_bg_ph_loss: 0.7007 - val_output_ph_loss: 0.8204 - val_output_mg_c_loss: 0.6680 - val_output_c_loss: 0.7256
Epoch 34/60
60/60 - 7s - loss: 5.0344 - output_react_loss: 0.5606 - output_bg_ph_loss: 0.5915 - output_ph_loss: 0.8163 - output_mg_c_loss: 0.5865 - output_c_loss: 0.7407 - val_loss: 5.4656 - val_output_react_loss: 0.5970 - val_output_bg_ph_loss: 0.6971 - val_output_ph_loss: 0.8208 - val_output_mg_c_loss: 0.6662 - val_output_c_loss: 0.7242
Epoch 35/60
60/60 - 7s - loss: 5.0062 - output_react_loss: 0.5575 - output_bg_ph_loss: 0.5873 - output_ph_loss: 0.8116 - output_mg_c_loss: 0.5833 - output_c_loss: 0.7384 - val_loss: 5.4458 - val_output_react_loss: 0.5956 - val_output_bg_ph_loss: 0.6945 - val_output_ph_loss: 0.8175 - val_output_mg_c_loss: 0.6630 - val_output_c_loss: 0.7223
Epoch 36/60
60/60 - 7s - loss: 4.9922 - output_react_loss: 0.5561 - output_bg_ph_loss: 0.5843 - output_ph_loss: 0.8110 - output_mg_c_loss: 0.5820 - output_c_loss: 0.7363 - val_loss: 5.4461 - val_output_react_loss: 0.5954 - val_output_bg_ph_loss: 0.6951 - val_output_ph_loss: 0.8166 - val_output_mg_c_loss: 0.6634 - val_output_c_loss: 0.7216
Epoch 37/60
60/60 - 7s - loss: 4.9767 - output_react_loss: 0.5532 - output_bg_ph_loss: 0.5841 - output_ph_loss: 0.8077 - output_mg_c_loss: 0.5792 - output_c_loss: 0.7360 - val_loss: 5.4528 - val_output_react_loss: 0.5958 - val_output_bg_ph_loss: 0.6952 - val_output_ph_loss: 0.8178 - val_output_mg_c_loss: 0.6652 - val_output_c_loss: 0.7225
Epoch 38/60
60/60 - 7s - loss: 4.9720 - output_react_loss: 0.5533 - output_bg_ph_loss: 0.5818 - output_ph_loss: 0.8082 - output_mg_c_loss: 0.5794 - output_c_loss: 0.7348 - val_loss: 5.4455 - val_output_react_loss: 0.5957 - val_output_bg_ph_loss: 0.6947 - val_output_ph_loss: 0.8161 - val_output_mg_c_loss: 0.6635 - val_output_c_loss: 0.7216
Epoch 39/60
60/60 - 7s - loss: 4.9616 - output_react_loss: 0.5518 - output_bg_ph_loss: 0.5814 - output_ph_loss: 0.8069 - output_mg_c_loss: 0.5768 - output_c_loss: 0.7347 - val_loss: 5.4565 - val_output_react_loss: 0.5964 - val_output_bg_ph_loss: 0.6958 - val_output_ph_loss: 0.8185 - val_output_mg_c_loss: 0.6654 - val_output_c_loss: 0.7228
Epoch 40/60
60/60 - 7s - loss: 4.9526 - output_react_loss: 0.5521 - output_bg_ph_loss: 0.5786 - output_ph_loss: 0.8056 - output_mg_c_loss: 0.5757 - output_c_loss: 0.7344 - val_loss: 5.4433 - val_output_react_loss: 0.5947 - val_output_bg_ph_loss: 0.6941 - val_output_ph_loss: 0.8161 - val_output_mg_c_loss: 0.6641 - val_output_c_loss: 0.7215
Epoch 41/60
60/60 - 7s - loss: 4.9428 - output_react_loss: 0.5493 - output_bg_ph_loss: 0.5780 - output_ph_loss: 0.8046 - output_mg_c_loss: 0.5755 - output_c_loss: 0.7327 - val_loss: 5.4542 - val_output_react_loss: 0.5955 - val_output_bg_ph_loss: 0.6969 - val_output_ph_loss: 0.8167 - val_output_mg_c_loss: 0.6649 - val_output_c_loss: 0.7228
Epoch 42/60
60/60 - 7s - loss: 4.9372 - output_react_loss: 0.5485 - output_bg_ph_loss: 0.5779 - output_ph_loss: 0.8043 - output_mg_c_loss: 0.5737 - output_c_loss: 0.7326 - val_loss: 5.4667 - val_output_react_loss: 0.5985 - val_output_bg_ph_loss: 0.6981 - val_output_ph_loss: 0.8181 - val_output_mg_c_loss: 0.6668 - val_output_c_loss: 0.7219
Epoch 43/60
60/60 - 7s - loss: 4.9269 - output_react_loss: 0.5481 - output_bg_ph_loss: 0.5769 - output_ph_loss: 0.8023 - output_mg_c_loss: 0.5718 - output_c_loss: 0.7312 - val_loss: 5.4407 - val_output_react_loss: 0.5947 - val_output_bg_ph_loss: 0.6938 - val_output_ph_loss: 0.8160 - val_output_mg_c_loss: 0.6631 - val_output_c_loss: 0.7216
Epoch 44/60
60/60 - 7s - loss: 4.9165 - output_react_loss: 0.5461 - output_bg_ph_loss: 0.5738 - output_ph_loss: 0.8023 - output_mg_c_loss: 0.5713 - output_c_loss: 0.7319 - val_loss: 5.4619 - val_output_react_loss: 0.5967 - val_output_bg_ph_loss: 0.6985 - val_output_ph_loss: 0.8179 - val_output_mg_c_loss: 0.6654 - val_output_c_loss: 0.7227
Epoch 45/60
60/60 - 7s - loss: 4.9150 - output_react_loss: 0.5461 - output_bg_ph_loss: 0.5746 - output_ph_loss: 0.8019 - output_mg_c_loss: 0.5707 - output_c_loss: 0.7305 - val_loss: 5.4448 - val_output_react_loss: 0.5950 - val_output_bg_ph_loss: 0.6950 - val_output_ph_loss: 0.8155 - val_output_mg_c_loss: 0.6642 - val_output_c_loss: 0.7209
Epoch 46/60
60/60 - 7s - loss: 4.9026 - output_react_loss: 0.5451 - output_bg_ph_loss: 0.5716 - output_ph_loss: 0.7994 - output_mg_c_loss: 0.5696 - output_c_loss: 0.7306 - val_loss: 5.4362 - val_output_react_loss: 0.5937 - val_output_bg_ph_loss: 0.6933 - val_output_ph_loss: 0.8154 - val_output_mg_c_loss: 0.6630 - val_output_c_loss: 0.7208
Epoch 47/60
60/60 - 7s - loss: 4.8978 - output_react_loss: 0.5440 - output_bg_ph_loss: 0.5714 - output_ph_loss: 0.8005 - output_mg_c_loss: 0.5693 - output_c_loss: 0.7279 - val_loss: 5.4576 - val_output_react_loss: 0.5959 - val_output_bg_ph_loss: 0.6962 - val_output_ph_loss: 0.8173 - val_output_mg_c_loss: 0.6669 - val_output_c_loss: 0.7224
Epoch 48/60
60/60 - 7s - loss: 4.8916 - output_react_loss: 0.5430 - output_bg_ph_loss: 0.5710 - output_ph_loss: 0.7998 - output_mg_c_loss: 0.5674 - output_c_loss: 0.7290 - val_loss: 5.4425 - val_output_react_loss: 0.5948 - val_output_bg_ph_loss: 0.6934 - val_output_ph_loss: 0.8155 - val_output_mg_c_loss: 0.6648 - val_output_c_loss: 0.7212
Epoch 49/60
60/60 - 7s - loss: 4.8873 - output_react_loss: 0.5426 - output_bg_ph_loss: 0.5691 - output_ph_loss: 0.7987 - output_mg_c_loss: 0.5679 - output_c_loss: 0.7294 - val_loss: 5.4359 - val_output_react_loss: 0.5936 - val_output_bg_ph_loss: 0.6936 - val_output_ph_loss: 0.8147 - val_output_mg_c_loss: 0.6632 - val_output_c_loss: 0.7206
Epoch 50/60
60/60 - 7s - loss: 4.8796 - output_react_loss: 0.5405 - output_bg_ph_loss: 0.5692 - output_ph_loss: 0.7979 - output_mg_c_loss: 0.5671 - output_c_loss: 0.7282 - val_loss: 5.4605 - val_output_react_loss: 0.5961 - val_output_bg_ph_loss: 0.6984 - val_output_ph_loss: 0.8167 - val_output_mg_c_loss: 0.6661 - val_output_c_loss: 0.7225
Epoch 51/60
60/60 - 7s - loss: 4.8730 - output_react_loss: 0.5408 - output_bg_ph_loss: 0.5677 - output_ph_loss: 0.7970 - output_mg_c_loss: 0.5657 - output_c_loss: 0.7277 - val_loss: 5.4413 - val_output_react_loss: 0.5943 - val_output_bg_ph_loss: 0.6948 - val_output_ph_loss: 0.8165 - val_output_mg_c_loss: 0.6630 - val_output_c_loss: 0.7206
Epoch 52/60
60/60 - 7s - loss: 4.8590 - output_react_loss: 0.5387 - output_bg_ph_loss: 0.5654 - output_ph_loss: 0.7962 - output_mg_c_loss: 0.5644 - output_c_loss: 0.7258 - val_loss: 5.4469 - val_output_react_loss: 0.5951 - val_output_bg_ph_loss: 0.6956 - val_output_ph_loss: 0.8173 - val_output_mg_c_loss: 0.6635 - val_output_c_loss: 0.7211
Epoch 53/60
60/60 - 7s - loss: 4.8623 - output_react_loss: 0.5386 - output_bg_ph_loss: 0.5663 - output_ph_loss: 0.7955 - output_mg_c_loss: 0.5654 - output_c_loss: 0.7261 - val_loss: 5.4528 - val_output_react_loss: 0.5950 - val_output_bg_ph_loss: 0.6960 - val_output_ph_loss: 0.8176 - val_output_mg_c_loss: 0.6654 - val_output_c_loss: 0.7223
Epoch 54/60
Epoch 00054: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
60/60 - 7s - loss: 4.8488 - output_react_loss: 0.5375 - output_bg_ph_loss: 0.5646 - output_ph_loss: 0.7941 - output_mg_c_loss: 0.5620 - output_c_loss: 0.7264 - val_loss: 5.4563 - val_output_react_loss: 0.5955 - val_output_bg_ph_loss: 0.6970 - val_output_ph_loss: 0.8177 - val_output_mg_c_loss: 0.6660 - val_output_c_loss: 0.7217
Epoch 55/60
60/60 - 7s - loss: 4.8331 - output_react_loss: 0.5351 - output_bg_ph_loss: 0.5621 - output_ph_loss: 0.7916 - output_mg_c_loss: 0.5620 - output_c_loss: 0.7232 - val_loss: 5.4400 - val_output_react_loss: 0.5943 - val_output_bg_ph_loss: 0.6942 - val_output_ph_loss: 0.8156 - val_output_mg_c_loss: 0.6633 - val_output_c_loss: 0.7209
Epoch 56/60
60/60 - 7s - loss: 4.8313 - output_react_loss: 0.5356 - output_bg_ph_loss: 0.5612 - output_ph_loss: 0.7922 - output_mg_c_loss: 0.5609 - output_c_loss: 0.7237 - val_loss: 5.4404 - val_output_react_loss: 0.5941 - val_output_bg_ph_loss: 0.6946 - val_output_ph_loss: 0.8155 - val_output_mg_c_loss: 0.6632 - val_output_c_loss: 0.7211
Epoch 57/60
60/60 - 7s - loss: 4.8256 - output_react_loss: 0.5345 - output_bg_ph_loss: 0.5601 - output_ph_loss: 0.7924 - output_mg_c_loss: 0.5600 - output_c_loss: 0.7239 - val_loss: 5.4367 - val_output_react_loss: 0.5938 - val_output_bg_ph_loss: 0.6940 - val_output_ph_loss: 0.8149 - val_output_mg_c_loss: 0.6628 - val_output_c_loss: 0.7206
Epoch 58/60
60/60 - 7s - loss: 4.8297 - output_react_loss: 0.5349 - output_bg_ph_loss: 0.5607 - output_ph_loss: 0.7920 - output_mg_c_loss: 0.5616 - output_c_loss: 0.7232 - val_loss: 5.4384 - val_output_react_loss: 0.5944 - val_output_bg_ph_loss: 0.6942 - val_output_ph_loss: 0.8151 - val_output_mg_c_loss: 0.6628 - val_output_c_loss: 0.7205
Epoch 59/60
Restoring model weights from the end of the best epoch.
Epoch 00059: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.
60/60 - 7s - loss: 4.8202 - output_react_loss: 0.5341 - output_bg_ph_loss: 0.5600 - output_ph_loss: 0.7906 - output_mg_c_loss: 0.5591 - output_c_loss: 0.7233 - val_loss: 5.4380 - val_output_react_loss: 0.5940 - val_output_bg_ph_loss: 0.6945 - val_output_ph_loss: 0.8149 - val_output_mg_c_loss: 0.6629 - val_output_c_loss: 0.7204
Epoch 00059: early stopping
FOLD: 2
Epoch 1/60
60/60 - 9s - loss: 9.4981 - output_react_loss: 1.0002 - output_bg_ph_loss: 1.2243 - output_ph_loss: 1.3942 - output_mg_c_loss: 1.2474 - output_c_loss: 1.1602 - val_loss: 8.6098 - val_output_react_loss: 0.9316 - val_output_bg_ph_loss: 1.0835 - val_output_ph_loss: 1.3267 - val_output_mg_c_loss: 1.0768 - val_output_c_loss: 1.0994
Epoch 2/60
60/60 - 7s - loss: 8.1750 - output_react_loss: 0.8619 - output_bg_ph_loss: 1.0508 - output_ph_loss: 1.2078 - output_mg_c_loss: 1.0415 - output_c_loss: 1.0587 - val_loss: 8.1798 - val_output_react_loss: 0.8957 - val_output_bg_ph_loss: 1.0211 - val_output_ph_loss: 1.2417 - val_output_mg_c_loss: 1.0314 - val_output_c_loss: 1.0417
Epoch 3/60
60/60 - 7s - loss: 7.8765 - output_react_loss: 0.8254 - output_bg_ph_loss: 1.0079 - output_ph_loss: 1.1756 - output_mg_c_loss: 1.0022 - output_c_loss: 1.0299 - val_loss: 7.9007 - val_output_react_loss: 0.8692 - val_output_bg_ph_loss: 0.9931 - val_output_ph_loss: 1.2108 - val_output_mg_c_loss: 0.9797 - val_output_c_loss: 1.0062
Epoch 4/60
60/60 - 7s - loss: 7.6321 - output_react_loss: 0.8060 - output_bg_ph_loss: 0.9710 - output_ph_loss: 1.1452 - output_mg_c_loss: 0.9659 - output_c_loss: 1.0011 - val_loss: 7.6178 - val_output_react_loss: 0.8415 - val_output_bg_ph_loss: 0.9437 - val_output_ph_loss: 1.1798 - val_output_mg_c_loss: 0.9430 - val_output_c_loss: 0.9816
Epoch 5/60
60/60 - 7s - loss: 7.3515 - output_react_loss: 0.7833 - output_bg_ph_loss: 0.9261 - output_ph_loss: 1.1111 - output_mg_c_loss: 0.9242 - output_c_loss: 0.9730 - val_loss: 7.3615 - val_output_react_loss: 0.8282 - val_output_bg_ph_loss: 0.9030 - val_output_ph_loss: 1.1412 - val_output_mg_c_loss: 0.9033 - val_output_c_loss: 0.9513
Epoch 6/60
60/60 - 7s - loss: 7.1560 - output_react_loss: 0.7696 - output_bg_ph_loss: 0.9010 - output_ph_loss: 1.0806 - output_mg_c_loss: 0.8955 - output_c_loss: 0.9434 - val_loss: 7.1913 - val_output_react_loss: 0.8130 - val_output_bg_ph_loss: 0.8886 - val_output_ph_loss: 1.1246 - val_output_mg_c_loss: 0.8681 - val_output_c_loss: 0.9274
Epoch 7/60
60/60 - 7s - loss: 6.9447 - output_react_loss: 0.7541 - output_bg_ph_loss: 0.8705 - output_ph_loss: 1.0530 - output_mg_c_loss: 0.8611 - output_c_loss: 0.9201 - val_loss: 7.0322 - val_output_react_loss: 0.7968 - val_output_bg_ph_loss: 0.8541 - val_output_ph_loss: 1.1067 - val_output_mg_c_loss: 0.8560 - val_output_c_loss: 0.9118
Epoch 8/60
60/60 - 7s - loss: 6.7772 - output_react_loss: 0.7390 - output_bg_ph_loss: 0.8472 - output_ph_loss: 1.0297 - output_mg_c_loss: 0.8369 - output_c_loss: 0.9012 - val_loss: 6.9125 - val_output_react_loss: 0.7967 - val_output_bg_ph_loss: 0.8460 - val_output_ph_loss: 1.0718 - val_output_mg_c_loss: 0.8329 - val_output_c_loss: 0.8895
Epoch 9/60
60/60 - 7s - loss: 6.6121 - output_react_loss: 0.7267 - output_bg_ph_loss: 0.8248 - output_ph_loss: 1.0050 - output_mg_c_loss: 0.8120 - output_c_loss: 0.8801 - val_loss: 6.7593 - val_output_react_loss: 0.7729 - val_output_bg_ph_loss: 0.8273 - val_output_ph_loss: 1.0551 - val_output_mg_c_loss: 0.8122 - val_output_c_loss: 0.8793
Epoch 10/60
60/60 - 7s - loss: 6.5210 - output_react_loss: 0.7176 - output_bg_ph_loss: 0.8189 - output_ph_loss: 0.9854 - output_mg_c_loss: 0.7980 - output_c_loss: 0.8666 - val_loss: 6.7340 - val_output_react_loss: 0.7709 - val_output_bg_ph_loss: 0.8320 - val_output_ph_loss: 1.0468 - val_output_mg_c_loss: 0.8040 - val_output_c_loss: 0.8733
Epoch 11/60
60/60 - 7s - loss: 6.3834 - output_react_loss: 0.7046 - output_bg_ph_loss: 0.7960 - output_ph_loss: 0.9688 - output_mg_c_loss: 0.7797 - output_c_loss: 0.8539 - val_loss: 6.6706 - val_output_react_loss: 0.7681 - val_output_bg_ph_loss: 0.8133 - val_output_ph_loss: 1.0379 - val_output_mg_c_loss: 0.7996 - val_output_c_loss: 0.8707
Epoch 12/60
60/60 - 7s - loss: 6.2979 - output_react_loss: 0.6984 - output_bg_ph_loss: 0.7851 - output_ph_loss: 0.9535 - output_mg_c_loss: 0.7682 - output_c_loss: 0.8411 - val_loss: 6.6129 - val_output_react_loss: 0.7503 - val_output_bg_ph_loss: 0.8112 - val_output_ph_loss: 1.0324 - val_output_mg_c_loss: 0.7957 - val_output_c_loss: 0.8659
Epoch 13/60
60/60 - 7s - loss: 6.1866 - output_react_loss: 0.6857 - output_bg_ph_loss: 0.7710 - output_ph_loss: 0.9407 - output_mg_c_loss: 0.7511 - output_c_loss: 0.8301 - val_loss: 6.5087 - val_output_react_loss: 0.7426 - val_output_bg_ph_loss: 0.7901 - val_output_ph_loss: 1.0185 - val_output_mg_c_loss: 0.7840 - val_output_c_loss: 0.8569
Epoch 14/60
60/60 - 7s - loss: 6.1521 - output_react_loss: 0.6793 - output_bg_ph_loss: 0.7631 - output_ph_loss: 0.9364 - output_mg_c_loss: 0.7495 - output_c_loss: 0.8319 - val_loss: 6.4714 - val_output_react_loss: 0.7404 - val_output_bg_ph_loss: 0.7858 - val_output_ph_loss: 1.0138 - val_output_mg_c_loss: 0.7735 - val_output_c_loss: 0.8582
Epoch 15/60
60/60 - 7s - loss: 6.0261 - output_react_loss: 0.6656 - output_bg_ph_loss: 0.7455 - output_ph_loss: 0.9224 - output_mg_c_loss: 0.7318 - output_c_loss: 0.8178 - val_loss: 6.4980 - val_output_react_loss: 0.7426 - val_output_bg_ph_loss: 0.7954 - val_output_ph_loss: 1.0145 - val_output_mg_c_loss: 0.7789 - val_output_c_loss: 0.8497
Epoch 16/60
60/60 - 7s - loss: 5.9831 - output_react_loss: 0.6637 - output_bg_ph_loss: 0.7418 - output_ph_loss: 0.9148 - output_mg_c_loss: 0.7244 - output_c_loss: 0.8086 - val_loss: 6.5209 - val_output_react_loss: 0.7476 - val_output_bg_ph_loss: 0.7926 - val_output_ph_loss: 1.0142 - val_output_mg_c_loss: 0.7821 - val_output_c_loss: 0.8621
Epoch 17/60
60/60 - 7s - loss: 5.8969 - output_react_loss: 0.6541 - output_bg_ph_loss: 0.7286 - output_ph_loss: 0.9034 - output_mg_c_loss: 0.7122 - output_c_loss: 0.8037 - val_loss: 6.4055 - val_output_react_loss: 0.7255 - val_output_bg_ph_loss: 0.7808 - val_output_ph_loss: 1.0030 - val_output_mg_c_loss: 0.7739 - val_output_c_loss: 0.8421
Epoch 18/60
60/60 - 7s - loss: 5.8327 - output_react_loss: 0.6503 - output_bg_ph_loss: 0.7171 - output_ph_loss: 0.8952 - output_mg_c_loss: 0.7023 - output_c_loss: 0.7982 - val_loss: 6.3722 - val_output_react_loss: 0.7299 - val_output_bg_ph_loss: 0.7770 - val_output_ph_loss: 0.9953 - val_output_mg_c_loss: 0.7608 - val_output_c_loss: 0.8413
Epoch 19/60
60/60 - 7s - loss: 5.7629 - output_react_loss: 0.6405 - output_bg_ph_loss: 0.7098 - output_ph_loss: 0.8837 - output_mg_c_loss: 0.6943 - output_c_loss: 0.7899 - val_loss: 6.3640 - val_output_react_loss: 0.7278 - val_output_bg_ph_loss: 0.7736 - val_output_ph_loss: 0.9944 - val_output_mg_c_loss: 0.7627 - val_output_c_loss: 0.8413
Epoch 20/60
60/60 - 7s - loss: 5.7205 - output_react_loss: 0.6348 - output_bg_ph_loss: 0.7040 - output_ph_loss: 0.8797 - output_mg_c_loss: 0.6884 - output_c_loss: 0.7864 - val_loss: 6.3923 - val_output_react_loss: 0.7177 - val_output_bg_ph_loss: 0.7868 - val_output_ph_loss: 0.9963 - val_output_mg_c_loss: 0.7712 - val_output_c_loss: 0.8447
Epoch 21/60
60/60 - 7s - loss: 5.6370 - output_react_loss: 0.6262 - output_bg_ph_loss: 0.6947 - output_ph_loss: 0.8698 - output_mg_c_loss: 0.6732 - output_c_loss: 0.7791 - val_loss: 6.2766 - val_output_react_loss: 0.7133 - val_output_bg_ph_loss: 0.7662 - val_output_ph_loss: 0.9857 - val_output_mg_c_loss: 0.7491 - val_output_c_loss: 0.8337
Epoch 22/60
60/60 - 7s - loss: 5.5647 - output_react_loss: 0.6204 - output_bg_ph_loss: 0.6787 - output_ph_loss: 0.8623 - output_mg_c_loss: 0.6652 - output_c_loss: 0.7738 - val_loss: 6.2761 - val_output_react_loss: 0.7117 - val_output_bg_ph_loss: 0.7633 - val_output_ph_loss: 0.9887 - val_output_mg_c_loss: 0.7509 - val_output_c_loss: 0.8355
Epoch 23/60
60/60 - 7s - loss: 5.5224 - output_react_loss: 0.6137 - output_bg_ph_loss: 0.6737 - output_ph_loss: 0.8560 - output_mg_c_loss: 0.6613 - output_c_loss: 0.7690 - val_loss: 6.3353 - val_output_react_loss: 0.7151 - val_output_bg_ph_loss: 0.7718 - val_output_ph_loss: 0.9945 - val_output_mg_c_loss: 0.7593 - val_output_c_loss: 0.8484
Epoch 24/60
60/60 - 7s - loss: 5.4635 - output_react_loss: 0.6070 - output_bg_ph_loss: 0.6660 - output_ph_loss: 0.8480 - output_mg_c_loss: 0.6523 - output_c_loss: 0.7648 - val_loss: 6.2987 - val_output_react_loss: 0.7119 - val_output_bg_ph_loss: 0.7686 - val_output_ph_loss: 0.9878 - val_output_mg_c_loss: 0.7571 - val_output_c_loss: 0.8355
Epoch 25/60
60/60 - 7s - loss: 5.4133 - output_react_loss: 0.6015 - output_bg_ph_loss: 0.6581 - output_ph_loss: 0.8427 - output_mg_c_loss: 0.6463 - output_c_loss: 0.7589 - val_loss: 6.3131 - val_output_react_loss: 0.7059 - val_output_bg_ph_loss: 0.7868 - val_output_ph_loss: 0.9866 - val_output_mg_c_loss: 0.7507 - val_output_c_loss: 0.8398
Epoch 26/60
60/60 - 7s - loss: 5.3754 - output_react_loss: 0.5973 - output_bg_ph_loss: 0.6551 - output_ph_loss: 0.8387 - output_mg_c_loss: 0.6374 - output_c_loss: 0.7570 - val_loss: 6.2882 - val_output_react_loss: 0.7077 - val_output_bg_ph_loss: 0.7657 - val_output_ph_loss: 0.9857 - val_output_mg_c_loss: 0.7572 - val_output_c_loss: 0.8413
Epoch 27/60
Epoch 00027: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
60/60 - 7s - loss: 5.3020 - output_react_loss: 0.5887 - output_bg_ph_loss: 0.6415 - output_ph_loss: 0.8285 - output_mg_c_loss: 0.6308 - output_c_loss: 0.7515 - val_loss: 6.2765 - val_output_react_loss: 0.7127 - val_output_bg_ph_loss: 0.7668 - val_output_ph_loss: 0.9810 - val_output_mg_c_loss: 0.7503 - val_output_c_loss: 0.8360
Epoch 28/60
60/60 - 7s - loss: 5.1232 - output_react_loss: 0.5696 - output_bg_ph_loss: 0.6161 - output_ph_loss: 0.8073 - output_mg_c_loss: 0.6047 - output_c_loss: 0.7352 - val_loss: 6.1572 - val_output_react_loss: 0.6947 - val_output_bg_ph_loss: 0.7517 - val_output_ph_loss: 0.9689 - val_output_mg_c_loss: 0.7362 - val_output_c_loss: 0.8233
Epoch 29/60
60/60 - 7s - loss: 5.0674 - output_react_loss: 0.5627 - output_bg_ph_loss: 0.6102 - output_ph_loss: 0.8005 - output_mg_c_loss: 0.5964 - output_c_loss: 0.7285 - val_loss: 6.1392 - val_output_react_loss: 0.6921 - val_output_bg_ph_loss: 0.7495 - val_output_ph_loss: 0.9677 - val_output_mg_c_loss: 0.7331 - val_output_c_loss: 0.8220
Epoch 30/60
60/60 - 7s - loss: 5.0447 - output_react_loss: 0.5601 - output_bg_ph_loss: 0.6063 - output_ph_loss: 0.7973 - output_mg_c_loss: 0.5939 - output_c_loss: 0.7270 - val_loss: 6.1429 - val_output_react_loss: 0.6922 - val_output_bg_ph_loss: 0.7508 - val_output_ph_loss: 0.9667 - val_output_mg_c_loss: 0.7345 - val_output_c_loss: 0.8211
Epoch 31/60
60/60 - 7s - loss: 5.0307 - output_react_loss: 0.5587 - output_bg_ph_loss: 0.6043 - output_ph_loss: 0.7956 - output_mg_c_loss: 0.5915 - output_c_loss: 0.7259 - val_loss: 6.1369 - val_output_react_loss: 0.6927 - val_output_bg_ph_loss: 0.7494 - val_output_ph_loss: 0.9662 - val_output_mg_c_loss: 0.7328 - val_output_c_loss: 0.8207
Epoch 32/60
60/60 - 7s - loss: 5.0161 - output_react_loss: 0.5568 - output_bg_ph_loss: 0.6024 - output_ph_loss: 0.7940 - output_mg_c_loss: 0.5896 - output_c_loss: 0.7244 - val_loss: 6.1463 - val_output_react_loss: 0.6935 - val_output_bg_ph_loss: 0.7512 - val_output_ph_loss: 0.9687 - val_output_mg_c_loss: 0.7334 - val_output_c_loss: 0.8215
Epoch 33/60
60/60 - 7s - loss: 5.0008 - output_react_loss: 0.5551 - output_bg_ph_loss: 0.6005 - output_ph_loss: 0.7918 - output_mg_c_loss: 0.5876 - output_c_loss: 0.7226 - val_loss: 6.1335 - val_output_react_loss: 0.6916 - val_output_bg_ph_loss: 0.7489 - val_output_ph_loss: 0.9660 - val_output_mg_c_loss: 0.7329 - val_output_c_loss: 0.8208
Epoch 34/60
60/60 - 7s - loss: 4.9913 - output_react_loss: 0.5557 - output_bg_ph_loss: 0.5974 - output_ph_loss: 0.7913 - output_mg_c_loss: 0.5863 - output_c_loss: 0.7212 - val_loss: 6.1430 - val_output_react_loss: 0.6915 - val_output_bg_ph_loss: 0.7509 - val_output_ph_loss: 0.9660 - val_output_mg_c_loss: 0.7350 - val_output_c_loss: 0.8221
Epoch 35/60
60/60 - 7s - loss: 4.9820 - output_react_loss: 0.5531 - output_bg_ph_loss: 0.5974 - output_ph_loss: 0.7902 - output_mg_c_loss: 0.5844 - output_c_loss: 0.7222 - val_loss: 6.1395 - val_output_react_loss: 0.6921 - val_output_bg_ph_loss: 0.7500 - val_output_ph_loss: 0.9663 - val_output_mg_c_loss: 0.7337 - val_output_c_loss: 0.8216
Epoch 36/60
60/60 - 7s - loss: 4.9705 - output_react_loss: 0.5519 - output_bg_ph_loss: 0.5956 - output_ph_loss: 0.7877 - output_mg_c_loss: 0.5842 - output_c_loss: 0.7195 - val_loss: 6.1414 - val_output_react_loss: 0.6922 - val_output_bg_ph_loss: 0.7507 - val_output_ph_loss: 0.9670 - val_output_mg_c_loss: 0.7333 - val_output_c_loss: 0.8221
Epoch 37/60
60/60 - 7s - loss: 4.9624 - output_react_loss: 0.5506 - output_bg_ph_loss: 0.5939 - output_ph_loss: 0.7881 - output_mg_c_loss: 0.5826 - output_c_loss: 0.7202 - val_loss: 6.1346 - val_output_react_loss: 0.6901 - val_output_bg_ph_loss: 0.7498 - val_output_ph_loss: 0.9666 - val_output_mg_c_loss: 0.7334 - val_output_c_loss: 0.8214
Epoch 38/60
Epoch 00038: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
60/60 - 7s - loss: 4.9531 - output_react_loss: 0.5486 - output_bg_ph_loss: 0.5932 - output_ph_loss: 0.7865 - output_mg_c_loss: 0.5817 - output_c_loss: 0.7197 - val_loss: 6.1340 - val_output_react_loss: 0.6903 - val_output_bg_ph_loss: 0.7493 - val_output_ph_loss: 0.9673 - val_output_mg_c_loss: 0.7332 - val_output_c_loss: 0.8211
Epoch 39/60
60/60 - 7s - loss: 4.9364 - output_react_loss: 0.5480 - output_bg_ph_loss: 0.5900 - output_ph_loss: 0.7849 - output_mg_c_loss: 0.5787 - output_c_loss: 0.7180 - val_loss: 6.1324 - val_output_react_loss: 0.6902 - val_output_bg_ph_loss: 0.7491 - val_output_ph_loss: 0.9663 - val_output_mg_c_loss: 0.7333 - val_output_c_loss: 0.8209
Epoch 40/60
60/60 - 7s - loss: 4.9321 - output_react_loss: 0.5471 - output_bg_ph_loss: 0.5895 - output_ph_loss: 0.7835 - output_mg_c_loss: 0.5791 - output_c_loss: 0.7171 - val_loss: 6.1325 - val_output_react_loss: 0.6905 - val_output_bg_ph_loss: 0.7493 - val_output_ph_loss: 0.9658 - val_output_mg_c_loss: 0.7332 - val_output_c_loss: 0.8207
Epoch 41/60
60/60 - 7s - loss: 4.9300 - output_react_loss: 0.5466 - output_bg_ph_loss: 0.5896 - output_ph_loss: 0.7843 - output_mg_c_loss: 0.5782 - output_c_loss: 0.7169 - val_loss: 6.1288 - val_output_react_loss: 0.6903 - val_output_bg_ph_loss: 0.7489 - val_output_ph_loss: 0.9654 - val_output_mg_c_loss: 0.7325 - val_output_c_loss: 0.8201
Epoch 42/60
60/60 - 7s - loss: 4.9270 - output_react_loss: 0.5468 - output_bg_ph_loss: 0.5893 - output_ph_loss: 0.7828 - output_mg_c_loss: 0.5777 - output_c_loss: 0.7166 - val_loss: 6.1323 - val_output_react_loss: 0.6902 - val_output_bg_ph_loss: 0.7495 - val_output_ph_loss: 0.9658 - val_output_mg_c_loss: 0.7333 - val_output_c_loss: 0.8205
Epoch 43/60
60/60 - 7s - loss: 4.9244 - output_react_loss: 0.5459 - output_bg_ph_loss: 0.5896 - output_ph_loss: 0.7827 - output_mg_c_loss: 0.5772 - output_c_loss: 0.7163 - val_loss: 6.1326 - val_output_react_loss: 0.6906 - val_output_bg_ph_loss: 0.7497 - val_output_ph_loss: 0.9658 - val_output_mg_c_loss: 0.7331 - val_output_c_loss: 0.8203
Epoch 44/60
60/60 - 7s - loss: 4.9157 - output_react_loss: 0.5460 - output_bg_ph_loss: 0.5868 - output_ph_loss: 0.7827 - output_mg_c_loss: 0.5758 - output_c_loss: 0.7160 - val_loss: 6.1328 - val_output_react_loss: 0.6905 - val_output_bg_ph_loss: 0.7495 - val_output_ph_loss: 0.9658 - val_output_mg_c_loss: 0.7332 - val_output_c_loss: 0.8206
Epoch 45/60
60/60 - 7s - loss: 4.9277 - output_react_loss: 0.5473 - output_bg_ph_loss: 0.5890 - output_ph_loss: 0.7834 - output_mg_c_loss: 0.5776 - output_c_loss: 0.7166 - val_loss: 6.1329 - val_output_react_loss: 0.6901 - val_output_bg_ph_loss: 0.7498 - val_output_ph_loss: 0.9659 - val_output_mg_c_loss: 0.7334 - val_output_c_loss: 0.8205
Epoch 46/60
Epoch 00046: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.
60/60 - 7s - loss: 4.9256 - output_react_loss: 0.5462 - output_bg_ph_loss: 0.5888 - output_ph_loss: 0.7833 - output_mg_c_loss: 0.5775 - output_c_loss: 0.7173 - val_loss: 6.1321 - val_output_react_loss: 0.6906 - val_output_bg_ph_loss: 0.7495 - val_output_ph_loss: 0.9656 - val_output_mg_c_loss: 0.7330 - val_output_c_loss: 0.8204
Epoch 47/60
60/60 - 7s - loss: 4.9187 - output_react_loss: 0.5454 - output_bg_ph_loss: 0.5886 - output_ph_loss: 0.7827 - output_mg_c_loss: 0.5762 - output_c_loss: 0.7159 - val_loss: 6.1306 - val_output_react_loss: 0.6904 - val_output_bg_ph_loss: 0.7493 - val_output_ph_loss: 0.9655 - val_output_mg_c_loss: 0.7328 - val_output_c_loss: 0.8203
Epoch 48/60
60/60 - 7s - loss: 4.9182 - output_react_loss: 0.5459 - output_bg_ph_loss: 0.5877 - output_ph_loss: 0.7828 - output_mg_c_loss: 0.5761 - output_c_loss: 0.7160 - val_loss: 6.1303 - val_output_react_loss: 0.6903 - val_output_bg_ph_loss: 0.7493 - val_output_ph_loss: 0.9655 - val_output_mg_c_loss: 0.7327 - val_output_c_loss: 0.8202
Epoch 49/60
60/60 - 7s - loss: 4.9258 - output_react_loss: 0.5454 - output_bg_ph_loss: 0.5891 - output_ph_loss: 0.7839 - output_mg_c_loss: 0.5776 - output_c_loss: 0.7176 - val_loss: 6.1300 - val_output_react_loss: 0.6902 - val_output_bg_ph_loss: 0.7492 - val_output_ph_loss: 0.9655 - val_output_mg_c_loss: 0.7327 - val_output_c_loss: 0.8202
Epoch 50/60
60/60 - 7s - loss: 4.9159 - output_react_loss: 0.5451 - output_bg_ph_loss: 0.5881 - output_ph_loss: 0.7820 - output_mg_c_loss: 0.5758 - output_c_loss: 0.7159 - val_loss: 6.1303 - val_output_react_loss: 0.6903 - val_output_bg_ph_loss: 0.7493 - val_output_ph_loss: 0.9655 - val_output_mg_c_loss: 0.7328 - val_output_c_loss: 0.8203
Epoch 51/60
Restoring model weights from the end of the best epoch.
Epoch 00051: ReduceLROnPlateau reducing learning rate to 1.0000001111620805e-07.
60/60 - 7s - loss: 4.9204 - output_react_loss: 0.5453 - output_bg_ph_loss: 0.5891 - output_ph_loss: 0.7819 - output_mg_c_loss: 0.5769 - output_c_loss: 0.7158 - val_loss: 6.1297 - val_output_react_loss: 0.6902 - val_output_bg_ph_loss: 0.7492 - val_output_ph_loss: 0.9655 - val_output_mg_c_loss: 0.7326 - val_output_c_loss: 0.8202
Epoch 00051: early stopping
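The learning-rate messages in this log drop the rate by a factor of 10 each time (1e-3 → 1e-4 → 1e-5 → 1e-6 → 1e-7), and the fold ends with "Restoring model weights from the end of the best epoch" followed by "early stopping". These are the messages printed by Keras' `ReduceLROnPlateau` and `EarlyStopping` callbacks. A minimal sketch of a callback setup consistent with this output is shown below; the `monitor` and `patience` values are assumptions — only the reduction factor (0.1) and `restore_best_weights=True` can be read off the log.

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

# Callbacks consistent with the messages in the training log above.
# NOTE: monitor/patience are assumptions; only factor=0.1 and
# restore_best_weights=True are directly visible in the log output.
lr_schedule = ReduceLROnPlateau(
    monitor='val_loss',   # assumed
    factor=0.1,           # matches the 10x drops: 1e-3 -> 1e-4 -> ... -> 1e-7
    patience=5,           # assumed
    verbose=1,            # prints "ReduceLROnPlateau reducing learning rate to ..."
)
early_stop = EarlyStopping(
    monitor='val_loss',          # assumed
    patience=10,                 # assumed
    restore_best_weights=True,   # prints "Restoring model weights from the end of the best epoch."
    verbose=1,                   # prints "Epoch 000xx: early stopping"
)

# model.fit(..., epochs=60, verbose=2, callbacks=[lr_schedule, early_stop])
```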
FOLD: 3
Epoch 1/60
60/60 - 8s - loss: 9.5697 - output_react_loss: 1.0222 - output_bg_ph_loss: 1.2461 - output_ph_loss: 1.4125 - output_mg_c_loss: 1.2170 - output_c_loss: 1.1867 - val_loss: 8.2413 - val_output_react_loss: 0.8569 - val_output_bg_ph_loss: 1.0898 - val_output_ph_loss: 1.2131 - val_output_mg_c_loss: 1.0463 - val_output_c_loss: 1.0421
Epoch 2/60
60/60 - 7s - loss: 8.2242 - output_react_loss: 0.8775 - output_bg_ph_loss: 1.0445 - output_ph_loss: 1.2264 - output_mg_c_loss: 1.0457 - output_c_loss: 1.0624 - val_loss: 7.7900 - val_output_react_loss: 0.8107 - val_output_bg_ph_loss: 1.0155 - val_output_ph_loss: 1.1589 - val_output_mg_c_loss: 0.9833 - val_output_c_loss: 1.0121
Epoch 3/60
60/60 - 7s - loss: 7.9507 - output_react_loss: 0.8453 - output_bg_ph_loss: 1.0081 - output_ph_loss: 1.1943 - output_mg_c_loss: 1.0078 - output_c_loss: 1.0342 - val_loss: 7.5180 - val_output_react_loss: 0.7918 - val_output_bg_ph_loss: 0.9627 - val_output_ph_loss: 1.1346 - val_output_mg_c_loss: 0.9442 - val_output_c_loss: 0.9860
Epoch 4/60
60/60 - 7s - loss: 7.6495 - output_react_loss: 0.8186 - output_bg_ph_loss: 0.9649 - output_ph_loss: 1.1555 - output_mg_c_loss: 0.9625 - output_c_loss: 1.0020 - val_loss: 7.2015 - val_output_react_loss: 0.7646 - val_output_bg_ph_loss: 0.9203 - val_output_ph_loss: 1.0960 - val_output_mg_c_loss: 0.8992 - val_output_c_loss: 0.9374
Epoch 5/60
60/60 - 7s - loss: 7.4481 - output_react_loss: 0.8036 - output_bg_ph_loss: 0.9357 - output_ph_loss: 1.1314 - output_mg_c_loss: 0.9304 - output_c_loss: 0.9772 - val_loss: 7.0553 - val_output_react_loss: 0.7694 - val_output_bg_ph_loss: 0.8884 - val_output_ph_loss: 1.0646 - val_output_mg_c_loss: 0.8736 - val_output_c_loss: 0.9279
Epoch 6/60
60/60 - 7s - loss: 7.2074 - output_react_loss: 0.7869 - output_bg_ph_loss: 0.8975 - output_ph_loss: 1.0996 - output_mg_c_loss: 0.8928 - output_c_loss: 0.9535 - val_loss: 6.8253 - val_output_react_loss: 0.7392 - val_output_bg_ph_loss: 0.8644 - val_output_ph_loss: 1.0457 - val_output_mg_c_loss: 0.8380 - val_output_c_loss: 0.8965
Epoch 7/60
60/60 - 7s - loss: 7.0170 - output_react_loss: 0.7671 - output_bg_ph_loss: 0.8708 - output_ph_loss: 1.0742 - output_mg_c_loss: 0.8685 - output_c_loss: 0.9299 - val_loss: 6.7012 - val_output_react_loss: 0.7289 - val_output_bg_ph_loss: 0.8396 - val_output_ph_loss: 1.0253 - val_output_mg_c_loss: 0.8293 - val_output_c_loss: 0.8804
Epoch 8/60
60/60 - 7s - loss: 6.8885 - output_react_loss: 0.7550 - output_bg_ph_loss: 0.8536 - output_ph_loss: 1.0522 - output_mg_c_loss: 0.8516 - output_c_loss: 0.9158 - val_loss: 6.5459 - val_output_react_loss: 0.7121 - val_output_bg_ph_loss: 0.8250 - val_output_ph_loss: 0.9973 - val_output_mg_c_loss: 0.7904 - val_output_c_loss: 0.8936
Epoch 9/60
60/60 - 7s - loss: 6.7288 - output_react_loss: 0.7405 - output_bg_ph_loss: 0.8324 - output_ph_loss: 1.0296 - output_mg_c_loss: 0.8262 - output_c_loss: 0.9011 - val_loss: 6.3550 - val_output_react_loss: 0.6990 - val_output_bg_ph_loss: 0.8093 - val_output_ph_loss: 0.9644 - val_output_mg_c_loss: 0.7678 - val_output_c_loss: 0.8384
Epoch 10/60
60/60 - 7s - loss: 6.6117 - output_react_loss: 0.7341 - output_bg_ph_loss: 0.8175 - output_ph_loss: 1.0123 - output_mg_c_loss: 0.8072 - output_c_loss: 0.8817 - val_loss: 6.3434 - val_output_react_loss: 0.7125 - val_output_bg_ph_loss: 0.7921 - val_output_ph_loss: 0.9610 - val_output_mg_c_loss: 0.7731 - val_output_c_loss: 0.8270
Epoch 11/60
60/60 - 7s - loss: 6.5283 - output_react_loss: 0.7264 - output_bg_ph_loss: 0.8077 - output_ph_loss: 0.9965 - output_mg_c_loss: 0.7973 - output_c_loss: 0.8689 - val_loss: 6.1949 - val_output_react_loss: 0.6863 - val_output_bg_ph_loss: 0.7823 - val_output_ph_loss: 0.9423 - val_output_mg_c_loss: 0.7485 - val_output_c_loss: 0.8184
Epoch 12/60
60/60 - 7s - loss: 6.4040 - output_react_loss: 0.7133 - output_bg_ph_loss: 0.7900 - output_ph_loss: 0.9805 - output_mg_c_loss: 0.7785 - output_c_loss: 0.8600 - val_loss: 6.1436 - val_output_react_loss: 0.6863 - val_output_bg_ph_loss: 0.7744 - val_output_ph_loss: 0.9328 - val_output_mg_c_loss: 0.7399 - val_output_c_loss: 0.8095
Epoch 13/60
60/60 - 7s - loss: 6.3072 - output_react_loss: 0.6991 - output_bg_ph_loss: 0.7749 - output_ph_loss: 0.9717 - output_mg_c_loss: 0.7691 - output_c_loss: 0.8492 - val_loss: 6.1045 - val_output_react_loss: 0.6795 - val_output_bg_ph_loss: 0.7661 - val_output_ph_loss: 0.9317 - val_output_mg_c_loss: 0.7390 - val_output_c_loss: 0.8035
Epoch 14/60
60/60 - 7s - loss: 6.2428 - output_react_loss: 0.6914 - output_bg_ph_loss: 0.7715 - output_ph_loss: 0.9579 - output_mg_c_loss: 0.7580 - output_c_loss: 0.8432 - val_loss: 6.0367 - val_output_react_loss: 0.6730 - val_output_bg_ph_loss: 0.7618 - val_output_ph_loss: 0.9114 - val_output_mg_c_loss: 0.7294 - val_output_c_loss: 0.7969
Epoch 15/60
60/60 - 7s - loss: 6.1591 - output_react_loss: 0.6869 - output_bg_ph_loss: 0.7559 - output_ph_loss: 0.9501 - output_mg_c_loss: 0.7447 - output_c_loss: 0.8340 - val_loss: 6.1394 - val_output_react_loss: 0.6781 - val_output_bg_ph_loss: 0.7722 - val_output_ph_loss: 0.9292 - val_output_mg_c_loss: 0.7523 - val_output_c_loss: 0.8050
Epoch 16/60
60/60 - 7s - loss: 6.0661 - output_react_loss: 0.6754 - output_bg_ph_loss: 0.7425 - output_ph_loss: 0.9381 - output_mg_c_loss: 0.7315 - output_c_loss: 0.8293 - val_loss: 5.9536 - val_output_react_loss: 0.6602 - val_output_bg_ph_loss: 0.7544 - val_output_ph_loss: 0.9024 - val_output_mg_c_loss: 0.7158 - val_output_c_loss: 0.7905
Epoch 17/60
60/60 - 7s - loss: 6.0002 - output_react_loss: 0.6690 - output_bg_ph_loss: 0.7357 - output_ph_loss: 0.9293 - output_mg_c_loss: 0.7224 - output_c_loss: 0.8169 - val_loss: 5.9426 - val_output_react_loss: 0.6559 - val_output_bg_ph_loss: 0.7444 - val_output_ph_loss: 0.9104 - val_output_mg_c_loss: 0.7204 - val_output_c_loss: 0.7909
Epoch 18/60
60/60 - 7s - loss: 5.9176 - output_react_loss: 0.6597 - output_bg_ph_loss: 0.7229 - output_ph_loss: 0.9203 - output_mg_c_loss: 0.7114 - output_c_loss: 0.8093 - val_loss: 5.9100 - val_output_react_loss: 0.6647 - val_output_bg_ph_loss: 0.7407 - val_output_ph_loss: 0.8979 - val_output_mg_c_loss: 0.7075 - val_output_c_loss: 0.7862
Epoch 19/60
60/60 - 7s - loss: 5.8650 - output_react_loss: 0.6548 - output_bg_ph_loss: 0.7149 - output_ph_loss: 0.9096 - output_mg_c_loss: 0.7058 - output_c_loss: 0.8045 - val_loss: 5.9909 - val_output_react_loss: 0.6732 - val_output_bg_ph_loss: 0.7538 - val_output_ph_loss: 0.9036 - val_output_mg_c_loss: 0.7195 - val_output_c_loss: 0.7942
Epoch 20/60
60/60 - 7s - loss: 5.7870 - output_react_loss: 0.6470 - output_bg_ph_loss: 0.7017 - output_ph_loss: 0.9019 - output_mg_c_loss: 0.6945 - output_c_loss: 0.7987 - val_loss: 5.8706 - val_output_react_loss: 0.6453 - val_output_bg_ph_loss: 0.7426 - val_output_ph_loss: 0.8895 - val_output_mg_c_loss: 0.7067 - val_output_c_loss: 0.7920
Epoch 21/60
60/60 - 7s - loss: 5.7244 - output_react_loss: 0.6387 - output_bg_ph_loss: 0.6954 - output_ph_loss: 0.8946 - output_mg_c_loss: 0.6841 - output_c_loss: 0.7936 - val_loss: 5.9085 - val_output_react_loss: 0.6567 - val_output_bg_ph_loss: 0.7451 - val_output_ph_loss: 0.8983 - val_output_mg_c_loss: 0.7107 - val_output_c_loss: 0.7852
Epoch 22/60
60/60 - 7s - loss: 5.6630 - output_react_loss: 0.6329 - output_bg_ph_loss: 0.6864 - output_ph_loss: 0.8851 - output_mg_c_loss: 0.6755 - output_c_loss: 0.7881 - val_loss: 5.7854 - val_output_react_loss: 0.6425 - val_output_bg_ph_loss: 0.7297 - val_output_ph_loss: 0.8809 - val_output_mg_c_loss: 0.6948 - val_output_c_loss: 0.7704
Epoch 23/60
60/60 - 7s - loss: 5.6037 - output_react_loss: 0.6277 - output_bg_ph_loss: 0.6772 - output_ph_loss: 0.8787 - output_mg_c_loss: 0.6666 - output_c_loss: 0.7820 - val_loss: 5.8083 - val_output_react_loss: 0.6456 - val_output_bg_ph_loss: 0.7296 - val_output_ph_loss: 0.8876 - val_output_mg_c_loss: 0.6970 - val_output_c_loss: 0.7764
Epoch 24/60
60/60 - 7s - loss: 5.5718 - output_react_loss: 0.6220 - output_bg_ph_loss: 0.6720 - output_ph_loss: 0.8748 - output_mg_c_loss: 0.6641 - output_c_loss: 0.7807 - val_loss: 5.8437 - val_output_react_loss: 0.6469 - val_output_bg_ph_loss: 0.7334 - val_output_ph_loss: 0.8862 - val_output_mg_c_loss: 0.7088 - val_output_c_loss: 0.7792
Epoch 25/60
60/60 - 7s - loss: 5.5114 - output_react_loss: 0.6157 - output_bg_ph_loss: 0.6625 - output_ph_loss: 0.8665 - output_mg_c_loss: 0.6556 - output_c_loss: 0.7774 - val_loss: 5.8203 - val_output_react_loss: 0.6381 - val_output_bg_ph_loss: 0.7383 - val_output_ph_loss: 0.8858 - val_output_mg_c_loss: 0.7011 - val_output_c_loss: 0.7796
Epoch 26/60
60/60 - 7s - loss: 5.4776 - output_react_loss: 0.6117 - output_bg_ph_loss: 0.6584 - output_ph_loss: 0.8616 - output_mg_c_loss: 0.6515 - output_c_loss: 0.7730 - val_loss: 5.7812 - val_output_react_loss: 0.6341 - val_output_bg_ph_loss: 0.7292 - val_output_ph_loss: 0.8855 - val_output_mg_c_loss: 0.6936 - val_output_c_loss: 0.7819
Epoch 27/60
60/60 - 7s - loss: 5.4332 - output_react_loss: 0.6074 - output_bg_ph_loss: 0.6502 - output_ph_loss: 0.8589 - output_mg_c_loss: 0.6442 - output_c_loss: 0.7706 - val_loss: 5.7954 - val_output_react_loss: 0.6398 - val_output_bg_ph_loss: 0.7349 - val_output_ph_loss: 0.8806 - val_output_mg_c_loss: 0.6926 - val_output_c_loss: 0.7802
Epoch 28/60
60/60 - 7s - loss: 5.3611 - output_react_loss: 0.5988 - output_bg_ph_loss: 0.6406 - output_ph_loss: 0.8509 - output_mg_c_loss: 0.6340 - output_c_loss: 0.7635 - val_loss: 5.8401 - val_output_react_loss: 0.6437 - val_output_bg_ph_loss: 0.7404 - val_output_ph_loss: 0.8872 - val_output_mg_c_loss: 0.7020 - val_output_c_loss: 0.7807
Epoch 29/60
60/60 - 7s - loss: 5.3167 - output_react_loss: 0.5982 - output_bg_ph_loss: 0.6319 - output_ph_loss: 0.8447 - output_mg_c_loss: 0.6267 - output_c_loss: 0.7586 - val_loss: 5.7707 - val_output_react_loss: 0.6361 - val_output_bg_ph_loss: 0.7293 - val_output_ph_loss: 0.8813 - val_output_mg_c_loss: 0.6934 - val_output_c_loss: 0.7716
Epoch 30/60
60/60 - 7s - loss: 5.2792 - output_react_loss: 0.5928 - output_bg_ph_loss: 0.6252 - output_ph_loss: 0.8382 - output_mg_c_loss: 0.6238 - output_c_loss: 0.7574 - val_loss: 5.7897 - val_output_react_loss: 0.6425 - val_output_bg_ph_loss: 0.7300 - val_output_ph_loss: 0.8793 - val_output_mg_c_loss: 0.6942 - val_output_c_loss: 0.7771
Epoch 31/60
60/60 - 7s - loss: 5.2388 - output_react_loss: 0.5857 - output_bg_ph_loss: 0.6223 - output_ph_loss: 0.8369 - output_mg_c_loss: 0.6168 - output_c_loss: 0.7524 - val_loss: 5.8210 - val_output_react_loss: 0.6396 - val_output_bg_ph_loss: 0.7367 - val_output_ph_loss: 0.8843 - val_output_mg_c_loss: 0.7040 - val_output_c_loss: 0.7759
Epoch 32/60
60/60 - 7s - loss: 5.2069 - output_react_loss: 0.5785 - output_bg_ph_loss: 0.6188 - output_ph_loss: 0.8355 - output_mg_c_loss: 0.6126 - output_c_loss: 0.7517 - val_loss: 5.8045 - val_output_react_loss: 0.6351 - val_output_bg_ph_loss: 0.7364 - val_output_ph_loss: 0.8844 - val_output_mg_c_loss: 0.6996 - val_output_c_loss: 0.7778
Epoch 33/60
60/60 - 7s - loss: 5.1797 - output_react_loss: 0.5766 - output_bg_ph_loss: 0.6126 - output_ph_loss: 0.8297 - output_mg_c_loss: 0.6107 - output_c_loss: 0.7502 - val_loss: 5.7443 - val_output_react_loss: 0.6350 - val_output_bg_ph_loss: 0.7255 - val_output_ph_loss: 0.8704 - val_output_mg_c_loss: 0.6885 - val_output_c_loss: 0.7758
Epoch 34/60
60/60 - 7s - loss: 5.1206 - output_react_loss: 0.5679 - output_bg_ph_loss: 0.6054 - output_ph_loss: 0.8233 - output_mg_c_loss: 0.6036 - output_c_loss: 0.7435 - val_loss: 5.7495 - val_output_react_loss: 0.6304 - val_output_bg_ph_loss: 0.7233 - val_output_ph_loss: 0.8763 - val_output_mg_c_loss: 0.6961 - val_output_c_loss: 0.7735
Epoch 35/60
60/60 - 7s - loss: 5.0768 - output_react_loss: 0.5628 - output_bg_ph_loss: 0.5981 - output_ph_loss: 0.8171 - output_mg_c_loss: 0.5985 - output_c_loss: 0.7408 - val_loss: 5.7059 - val_output_react_loss: 0.6289 - val_output_bg_ph_loss: 0.7193 - val_output_ph_loss: 0.8733 - val_output_mg_c_loss: 0.6835 - val_output_c_loss: 0.7690
Epoch 36/60
60/60 - 7s - loss: 5.0602 - output_react_loss: 0.5614 - output_bg_ph_loss: 0.5949 - output_ph_loss: 0.8180 - output_mg_c_loss: 0.5954 - output_c_loss: 0.7389 - val_loss: 5.7295 - val_output_react_loss: 0.6305 - val_output_bg_ph_loss: 0.7293 - val_output_ph_loss: 0.8694 - val_output_mg_c_loss: 0.6849 - val_output_c_loss: 0.7705
Epoch 37/60
60/60 - 7s - loss: 5.0307 - output_react_loss: 0.5576 - output_bg_ph_loss: 0.5913 - output_ph_loss: 0.8116 - output_mg_c_loss: 0.5908 - output_c_loss: 0.7396 - val_loss: 5.7364 - val_output_react_loss: 0.6307 - val_output_bg_ph_loss: 0.7271 - val_output_ph_loss: 0.8711 - val_output_mg_c_loss: 0.6888 - val_output_c_loss: 0.7720
Epoch 38/60
60/60 - 7s - loss: 4.9926 - output_react_loss: 0.5538 - output_bg_ph_loss: 0.5865 - output_ph_loss: 0.8089 - output_mg_c_loss: 0.5848 - output_c_loss: 0.7336 - val_loss: 5.7063 - val_output_react_loss: 0.6310 - val_output_bg_ph_loss: 0.7198 - val_output_ph_loss: 0.8705 - val_output_mg_c_loss: 0.6844 - val_output_c_loss: 0.7654
Epoch 39/60
60/60 - 7s - loss: 4.9705 - output_react_loss: 0.5523 - output_bg_ph_loss: 0.5812 - output_ph_loss: 0.8040 - output_mg_c_loss: 0.5832 - output_c_loss: 0.7330 - val_loss: 5.7339 - val_output_react_loss: 0.6311 - val_output_bg_ph_loss: 0.7242 - val_output_ph_loss: 0.8750 - val_output_mg_c_loss: 0.6903 - val_output_c_loss: 0.7677
Epoch 40/60
60/60 - 7s - loss: 4.9450 - output_react_loss: 0.5487 - output_bg_ph_loss: 0.5768 - output_ph_loss: 0.8037 - output_mg_c_loss: 0.5803 - output_c_loss: 0.7296 - val_loss: 5.6852 - val_output_react_loss: 0.6307 - val_output_bg_ph_loss: 0.7161 - val_output_ph_loss: 0.8683 - val_output_mg_c_loss: 0.6785 - val_output_c_loss: 0.7663
Epoch 41/60
60/60 - 7s - loss: 4.9073 - output_react_loss: 0.5415 - output_bg_ph_loss: 0.5718 - output_ph_loss: 0.8000 - output_mg_c_loss: 0.5750 - output_c_loss: 0.7307 - val_loss: 5.6877 - val_output_react_loss: 0.6258 - val_output_bg_ph_loss: 0.7205 - val_output_ph_loss: 0.8675 - val_output_mg_c_loss: 0.6818 - val_output_c_loss: 0.7642
Epoch 42/60
60/60 - 7s - loss: 4.8779 - output_react_loss: 0.5375 - output_bg_ph_loss: 0.5684 - output_ph_loss: 0.7939 - output_mg_c_loss: 0.5732 - output_c_loss: 0.7259 - val_loss: 5.6984 - val_output_react_loss: 0.6271 - val_output_bg_ph_loss: 0.7194 - val_output_ph_loss: 0.8683 - val_output_mg_c_loss: 0.6839 - val_output_c_loss: 0.7693
Epoch 43/60
60/60 - 7s - loss: 4.8695 - output_react_loss: 0.5360 - output_bg_ph_loss: 0.5690 - output_ph_loss: 0.7918 - output_mg_c_loss: 0.5723 - output_c_loss: 0.7232 - val_loss: 5.7351 - val_output_react_loss: 0.6246 - val_output_bg_ph_loss: 0.7300 - val_output_ph_loss: 0.8759 - val_output_mg_c_loss: 0.6905 - val_output_c_loss: 0.7688
Epoch 44/60
60/60 - 7s - loss: 4.8453 - output_react_loss: 0.5349 - output_bg_ph_loss: 0.5633 - output_ph_loss: 0.7912 - output_mg_c_loss: 0.5670 - output_c_loss: 0.7237 - val_loss: 5.7322 - val_output_react_loss: 0.6294 - val_output_bg_ph_loss: 0.7215 - val_output_ph_loss: 0.8739 - val_output_mg_c_loss: 0.6934 - val_output_c_loss: 0.7697
Epoch 45/60
Epoch 00045: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
60/60 - 7s - loss: 4.8184 - output_react_loss: 0.5284 - output_bg_ph_loss: 0.5607 - output_ph_loss: 0.7886 - output_mg_c_loss: 0.5657 - output_c_loss: 0.7202 - val_loss: 5.7136 - val_output_react_loss: 0.6328 - val_output_bg_ph_loss: 0.7198 - val_output_ph_loss: 0.8760 - val_output_mg_c_loss: 0.6833 - val_output_c_loss: 0.7658
Epoch 46/60
60/60 - 7s - loss: 4.6707 - output_react_loss: 0.5112 - output_bg_ph_loss: 0.5388 - output_ph_loss: 0.7719 - output_mg_c_loss: 0.5456 - output_c_loss: 0.7077 - val_loss: 5.6206 - val_output_react_loss: 0.6207 - val_output_bg_ph_loss: 0.7090 - val_output_ph_loss: 0.8607 - val_output_mg_c_loss: 0.6706 - val_output_c_loss: 0.7592
Epoch 47/60
60/60 - 7s - loss: 4.6107 - output_react_loss: 0.5042 - output_bg_ph_loss: 0.5307 - output_ph_loss: 0.7634 - output_mg_c_loss: 0.5377 - output_c_loss: 0.7020 - val_loss: 5.6214 - val_output_react_loss: 0.6202 - val_output_bg_ph_loss: 0.7093 - val_output_ph_loss: 0.8594 - val_output_mg_c_loss: 0.6719 - val_output_c_loss: 0.7591
Epoch 48/60
60/60 - 7s - loss: 4.5807 - output_react_loss: 0.4992 - output_bg_ph_loss: 0.5281 - output_ph_loss: 0.7601 - output_mg_c_loss: 0.5331 - output_c_loss: 0.6997 - val_loss: 5.6073 - val_output_react_loss: 0.6189 - val_output_bg_ph_loss: 0.7078 - val_output_ph_loss: 0.8576 - val_output_mg_c_loss: 0.6696 - val_output_c_loss: 0.7571
Epoch 49/60
60/60 - 7s - loss: 4.5736 - output_react_loss: 0.5004 - output_bg_ph_loss: 0.5250 - output_ph_loss: 0.7585 - output_mg_c_loss: 0.5329 - output_c_loss: 0.6984 - val_loss: 5.6204 - val_output_react_loss: 0.6203 - val_output_bg_ph_loss: 0.7096 - val_output_ph_loss: 0.8590 - val_output_mg_c_loss: 0.6718 - val_output_c_loss: 0.7578
Epoch 50/60
60/60 - 7s - loss: 4.5585 - output_react_loss: 0.4983 - output_bg_ph_loss: 0.5240 - output_ph_loss: 0.7581 - output_mg_c_loss: 0.5291 - output_c_loss: 0.6977 - val_loss: 5.6064 - val_output_react_loss: 0.6190 - val_output_bg_ph_loss: 0.7076 - val_output_ph_loss: 0.8569 - val_output_mg_c_loss: 0.6697 - val_output_c_loss: 0.7570
Epoch 51/60
60/60 - 7s - loss: 4.5463 - output_react_loss: 0.4975 - output_bg_ph_loss: 0.5215 - output_ph_loss: 0.7554 - output_mg_c_loss: 0.5280 - output_c_loss: 0.6969 - val_loss: 5.6015 - val_output_react_loss: 0.6170 - val_output_bg_ph_loss: 0.7077 - val_output_ph_loss: 0.8566 - val_output_mg_c_loss: 0.6693 - val_output_c_loss: 0.7570
Epoch 52/60
60/60 - 7s - loss: 4.5363 - output_react_loss: 0.4955 - output_bg_ph_loss: 0.5197 - output_ph_loss: 0.7543 - output_mg_c_loss: 0.5280 - output_c_loss: 0.6955 - val_loss: 5.6041 - val_output_react_loss: 0.6187 - val_output_bg_ph_loss: 0.7067 - val_output_ph_loss: 0.8575 - val_output_mg_c_loss: 0.6694 - val_output_c_loss: 0.7571
Epoch 53/60
60/60 - 7s - loss: 4.5262 - output_react_loss: 0.4937 - output_bg_ph_loss: 0.5179 - output_ph_loss: 0.7543 - output_mg_c_loss: 0.5271 - output_c_loss: 0.6943 - val_loss: 5.6042 - val_output_react_loss: 0.6176 - val_output_bg_ph_loss: 0.7075 - val_output_ph_loss: 0.8572 - val_output_mg_c_loss: 0.6695 - val_output_c_loss: 0.7578
Epoch 54/60
60/60 - 7s - loss: 4.5155 - output_react_loss: 0.4923 - output_bg_ph_loss: 0.5168 - output_ph_loss: 0.7528 - output_mg_c_loss: 0.5249 - output_c_loss: 0.6946 - val_loss: 5.6127 - val_output_react_loss: 0.6186 - val_output_bg_ph_loss: 0.7092 - val_output_ph_loss: 0.8578 - val_output_mg_c_loss: 0.6707 - val_output_c_loss: 0.7579
Epoch 55/60
60/60 - 7s - loss: 4.5158 - output_react_loss: 0.4922 - output_bg_ph_loss: 0.5177 - output_ph_loss: 0.7521 - output_mg_c_loss: 0.5248 - output_c_loss: 0.6942 - val_loss: 5.5982 - val_output_react_loss: 0.6165 - val_output_bg_ph_loss: 0.7075 - val_output_ph_loss: 0.8565 - val_output_mg_c_loss: 0.6685 - val_output_c_loss: 0.7567
Epoch 56/60
60/60 - 7s - loss: 4.5081 - output_react_loss: 0.4914 - output_bg_ph_loss: 0.5167 - output_ph_loss: 0.7507 - output_mg_c_loss: 0.5239 - output_c_loss: 0.6935 - val_loss: 5.6070 - val_output_react_loss: 0.6175 - val_output_bg_ph_loss: 0.7085 - val_output_ph_loss: 0.8580 - val_output_mg_c_loss: 0.6700 - val_output_c_loss: 0.7569
Epoch 57/60
60/60 - 7s - loss: 4.5030 - output_react_loss: 0.4901 - output_bg_ph_loss: 0.5157 - output_ph_loss: 0.7517 - output_mg_c_loss: 0.5234 - output_c_loss: 0.6929 - val_loss: 5.6035 - val_output_react_loss: 0.6168 - val_output_bg_ph_loss: 0.7075 - val_output_ph_loss: 0.8581 - val_output_mg_c_loss: 0.6701 - val_output_c_loss: 0.7566
Epoch 58/60
60/60 - 7s - loss: 4.4975 - output_react_loss: 0.4901 - output_bg_ph_loss: 0.5145 - output_ph_loss: 0.7506 - output_mg_c_loss: 0.5226 - output_c_loss: 0.6926 - val_loss: 5.6115 - val_output_react_loss: 0.6186 - val_output_bg_ph_loss: 0.7090 - val_output_ph_loss: 0.8580 - val_output_mg_c_loss: 0.6702 - val_output_c_loss: 0.7577
Epoch 59/60
60/60 - 7s - loss: 4.4910 - output_react_loss: 0.4889 - output_bg_ph_loss: 0.5136 - output_ph_loss: 0.7491 - output_mg_c_loss: 0.5222 - output_c_loss: 0.6924 - val_loss: 5.6078 - val_output_react_loss: 0.6176 - val_output_bg_ph_loss: 0.7088 - val_output_ph_loss: 0.8577 - val_output_mg_c_loss: 0.6696 - val_output_c_loss: 0.7580
Epoch 60/60
60/60 - 7s - loss: 4.4832 - output_react_loss: 0.4879 - output_bg_ph_loss: 0.5134 - output_ph_loss: 0.7470 - output_mg_c_loss: 0.5212 - output_c_loss: 0.6912 - val_loss: 5.5956 - val_output_react_loss: 0.6168 - val_output_bg_ph_loss: 0.7063 - val_output_ph_loss: 0.8560 - val_output_mg_c_loss: 0.6688 - val_output_c_loss: 0.7558
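Each `FOLD: n` header marks a fresh model trained on a different cross-validation split; fold 3 ran all 60 epochs, while fold 2 above stopped early at epoch 51. A minimal, hypothetical sketch of an outer loop that would print these headers is given below — the splitter settings, the dummy data, and the commented-out `build_model()` call are placeholders, not the notebook's own code.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical outer CV loop matching the "FOLD: n" markers in the log.
X = np.random.rand(100, 10)   # dummy features (placeholder)
y = np.random.rand(100, 5)    # dummy 5-target output (placeholder)

kf = KFold(n_splits=5, shuffle=True, random_state=42)  # assumed settings
for fold, (train_idx, val_idx) in enumerate(kf.split(X), start=1):
    print(f"FOLD: {fold}")
    X_tr, X_val = X[train_idx], X[val_idx]
    y_tr, y_val = y[train_idx], y[val_idx]
    # model = build_model()   # train a fresh model per fold here
    # model.fit(X_tr, y_tr, validation_data=(X_val, y_val),
    #           epochs=60, verbose=2, callbacks=[...])
```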
FOLD: 4
Epoch 1/60
60/60 - 8s - loss: 9.5336 - output_react_loss: 1.0431 - output_bg_ph_loss: 1.2174 - output_ph_loss: 1.4070 - output_mg_c_loss: 1.2087 - output_c_loss: 1.1881 - val_loss: 8.2671 - val_output_react_loss: 0.8818 - val_output_bg_ph_loss: 1.0654 - val_output_ph_loss: 1.2155 - val_output_mg_c_loss: 1.0595 - val_output_c_loss: 1.0382
Epoch 2/60
60/60 - 7s - loss: 8.2185 - output_react_loss: 0.8678 - output_bg_ph_loss: 1.0485 - output_ph_loss: 1.2258 - output_mg_c_loss: 1.0474 - output_c_loss: 1.0652 - val_loss: 7.7753 - val_output_react_loss: 0.8206 - val_output_bg_ph_loss: 1.0052 - val_output_ph_loss: 1.1620 - val_output_mg_c_loss: 0.9825 - val_output_c_loss: 0.9968
Epoch 3/60
60/60 - 7s - loss: 7.9117 - output_react_loss: 0.8374 - output_bg_ph_loss: 1.0032 - output_ph_loss: 1.1891 - output_mg_c_loss: 1.0040 - output_c_loss: 1.0335 - val_loss: 7.5732 - val_output_react_loss: 0.7961 - val_output_bg_ph_loss: 0.9722 - val_output_ph_loss: 1.1267 - val_output_mg_c_loss: 0.9663 - val_output_c_loss: 0.9772
Epoch 4/60
60/60 - 7s - loss: 7.6730 - output_react_loss: 0.8198 - output_bg_ph_loss: 0.9709 - output_ph_loss: 1.1563 - output_mg_c_loss: 0.9653 - output_c_loss: 1.0046 - val_loss: 7.3379 - val_output_react_loss: 0.7908 - val_output_bg_ph_loss: 0.9323 - val_output_ph_loss: 1.1085 - val_output_mg_c_loss: 0.9183 - val_output_c_loss: 0.9465
Epoch 5/60
60/60 - 7s - loss: 7.3944 - output_react_loss: 0.8008 - output_bg_ph_loss: 0.9266 - output_ph_loss: 1.1242 - output_mg_c_loss: 0.9215 - output_c_loss: 0.9724 - val_loss: 7.1245 - val_output_react_loss: 0.7687 - val_output_bg_ph_loss: 0.9082 - val_output_ph_loss: 1.0796 - val_output_mg_c_loss: 0.8827 - val_output_c_loss: 0.9257
Epoch 6/60
60/60 - 7s - loss: 7.1791 - output_react_loss: 0.7841 - output_bg_ph_loss: 0.8919 - output_ph_loss: 1.0946 - output_mg_c_loss: 0.8914 - output_c_loss: 0.9499 - val_loss: 6.8764 - val_output_react_loss: 0.7393 - val_output_bg_ph_loss: 0.8678 - val_output_ph_loss: 1.0393 - val_output_mg_c_loss: 0.8594 - val_output_c_loss: 0.9041
Epoch 7/60
60/60 - 7s - loss: 6.9773 - output_react_loss: 0.7638 - output_bg_ph_loss: 0.8649 - output_ph_loss: 1.0664 - output_mg_c_loss: 0.8625 - output_c_loss: 0.9284 - val_loss: 6.6969 - val_output_react_loss: 0.7329 - val_output_bg_ph_loss: 0.8437 - val_output_ph_loss: 1.0127 - val_output_mg_c_loss: 0.8269 - val_output_c_loss: 0.8773
Epoch 8/60
60/60 - 7s - loss: 6.8713 - output_react_loss: 0.7535 - output_bg_ph_loss: 0.8546 - output_ph_loss: 1.0490 - output_mg_c_loss: 0.8451 - output_c_loss: 0.9158 - val_loss: 6.5904 - val_output_react_loss: 0.7293 - val_output_bg_ph_loss: 0.8350 - val_output_ph_loss: 0.9951 - val_output_mg_c_loss: 0.8063 - val_output_c_loss: 0.8543
Epoch 9/60
60/60 - 7s - loss: 6.6902 - output_react_loss: 0.7405 - output_bg_ph_loss: 0.8264 - output_ph_loss: 1.0238 - output_mg_c_loss: 0.8220 - output_c_loss: 0.8886 - val_loss: 6.4943 - val_output_react_loss: 0.7298 - val_output_bg_ph_loss: 0.8200 - val_output_ph_loss: 0.9756 - val_output_mg_c_loss: 0.7876 - val_output_c_loss: 0.8438
Epoch 10/60
60/60 - 7s - loss: 6.5840 - output_react_loss: 0.7314 - output_bg_ph_loss: 0.8126 - output_ph_loss: 1.0093 - output_mg_c_loss: 0.8041 - output_c_loss: 0.8786 - val_loss: 6.4037 - val_output_react_loss: 0.7108 - val_output_bg_ph_loss: 0.8047 - val_output_ph_loss: 0.9717 - val_output_mg_c_loss: 0.7793 - val_output_c_loss: 0.8423
Epoch 11/60
60/60 - 7s - loss: 6.4805 - output_react_loss: 0.7160 - output_bg_ph_loss: 0.8006 - output_ph_loss: 0.9952 - output_mg_c_loss: 0.7916 - output_c_loss: 0.8688 - val_loss: 6.2625 - val_output_react_loss: 0.6882 - val_output_bg_ph_loss: 0.7927 - val_output_ph_loss: 0.9446 - val_output_mg_c_loss: 0.7685 - val_output_c_loss: 0.8191
Epoch 12/60
60/60 - 7s - loss: 6.3424 - output_react_loss: 0.7034 - output_bg_ph_loss: 0.7842 - output_ph_loss: 0.9727 - output_mg_c_loss: 0.7713 - output_c_loss: 0.8521 - val_loss: 6.2534 - val_output_react_loss: 0.6890 - val_output_bg_ph_loss: 0.7914 - val_output_ph_loss: 0.9459 - val_output_mg_c_loss: 0.7639 - val_output_c_loss: 0.8191
Epoch 13/60
60/60 - 7s - loss: 6.2836 - output_react_loss: 0.6975 - output_bg_ph_loss: 0.7752 - output_ph_loss: 0.9684 - output_mg_c_loss: 0.7616 - output_c_loss: 0.8467 - val_loss: 6.1758 - val_output_react_loss: 0.6862 - val_output_bg_ph_loss: 0.7799 - val_output_ph_loss: 0.9455 - val_output_mg_c_loss: 0.7440 - val_output_c_loss: 0.8101
Epoch 14/60
60/60 - 7s - loss: 6.1922 - output_react_loss: 0.6885 - output_bg_ph_loss: 0.7629 - output_ph_loss: 0.9557 - output_mg_c_loss: 0.7485 - output_c_loss: 0.8367 - val_loss: 6.0942 - val_output_react_loss: 0.6730 - val_output_bg_ph_loss: 0.7744 - val_output_ph_loss: 0.9233 - val_output_mg_c_loss: 0.7378 - val_output_c_loss: 0.8005
Epoch 15/60
60/60 - 7s - loss: 6.1035 - output_react_loss: 0.6790 - output_bg_ph_loss: 0.7496 - output_ph_loss: 0.9416 - output_mg_c_loss: 0.7379 - output_c_loss: 0.8289 - val_loss: 6.0996 - val_output_react_loss: 0.6734 - val_output_bg_ph_loss: 0.7770 - val_output_ph_loss: 0.9221 - val_output_mg_c_loss: 0.7403 - val_output_c_loss: 0.7960
Epoch 16/60
60/60 - 7s - loss: 6.0375 - output_react_loss: 0.6715 - output_bg_ph_loss: 0.7409 - output_ph_loss: 0.9314 - output_mg_c_loss: 0.7296 - output_c_loss: 0.8221 - val_loss: 6.2008 - val_output_react_loss: 0.6709 - val_output_bg_ph_loss: 0.7693 - val_output_ph_loss: 0.9436 - val_output_mg_c_loss: 0.7789 - val_output_c_loss: 0.8190
Epoch 17/60
60/60 - 7s - loss: 5.9593 - output_react_loss: 0.6625 - output_bg_ph_loss: 0.7295 - output_ph_loss: 0.9226 - output_mg_c_loss: 0.7183 - output_c_loss: 0.8160 - val_loss: 6.0699 - val_output_react_loss: 0.6697 - val_output_bg_ph_loss: 0.7817 - val_output_ph_loss: 0.9179 - val_output_mg_c_loss: 0.7296 - val_output_c_loss: 0.7899
Epoch 18/60
60/60 - 7s - loss: 5.8923 - output_react_loss: 0.6547 - output_bg_ph_loss: 0.7244 - output_ph_loss: 0.9126 - output_mg_c_loss: 0.7079 - output_c_loss: 0.8057 - val_loss: 5.9955 - val_output_react_loss: 0.6640 - val_output_bg_ph_loss: 0.7581 - val_output_ph_loss: 0.9124 - val_output_mg_c_loss: 0.7245 - val_output_c_loss: 0.7899
Epoch 19/60
60/60 - 7s - loss: 5.8030 - output_react_loss: 0.6489 - output_bg_ph_loss: 0.7089 - output_ph_loss: 0.9022 - output_mg_c_loss: 0.6929 - output_c_loss: 0.7995 - val_loss: 6.0003 - val_output_react_loss: 0.6634 - val_output_bg_ph_loss: 0.7612 - val_output_ph_loss: 0.9180 - val_output_mg_c_loss: 0.7198 - val_output_c_loss: 0.7934
Epoch 20/60
60/60 - 7s - loss: 5.7647 - output_react_loss: 0.6442 - output_bg_ph_loss: 0.7004 - output_ph_loss: 0.8969 - output_mg_c_loss: 0.6903 - output_c_loss: 0.7981 - val_loss: 5.9887 - val_output_react_loss: 0.6623 - val_output_bg_ph_loss: 0.7605 - val_output_ph_loss: 0.9146 - val_output_mg_c_loss: 0.7209 - val_output_c_loss: 0.7868
Epoch 21/60
60/60 - 7s - loss: 5.6828 - output_react_loss: 0.6343 - output_bg_ph_loss: 0.6924 - output_ph_loss: 0.8868 - output_mg_c_loss: 0.6773 - output_c_loss: 0.7881 - val_loss: 6.0351 - val_output_react_loss: 0.6671 - val_output_bg_ph_loss: 0.7703 - val_output_ph_loss: 0.9122 - val_output_mg_c_loss: 0.7272 - val_output_c_loss: 0.7937
Epoch 22/60
60/60 - 7s - loss: 5.6369 - output_react_loss: 0.6304 - output_bg_ph_loss: 0.6838 - output_ph_loss: 0.8807 - output_mg_c_loss: 0.6713 - output_c_loss: 0.7852 - val_loss: 5.9767 - val_output_react_loss: 0.6528 - val_output_bg_ph_loss: 0.7625 - val_output_ph_loss: 0.9078 - val_output_mg_c_loss: 0.7237 - val_output_c_loss: 0.7908
Epoch 23/60
60/60 - 7s - loss: 5.5836 - output_react_loss: 0.6231 - output_bg_ph_loss: 0.6739 - output_ph_loss: 0.8772 - output_mg_c_loss: 0.6646 - output_c_loss: 0.7833 - val_loss: 5.9364 - val_output_react_loss: 0.6503 - val_output_bg_ph_loss: 0.7512 - val_output_ph_loss: 0.9047 - val_output_mg_c_loss: 0.7213 - val_output_c_loss: 0.7862
Epoch 24/60
60/60 - 7s - loss: 5.5108 - output_react_loss: 0.6161 - output_bg_ph_loss: 0.6636 - output_ph_loss: 0.8657 - output_mg_c_loss: 0.6537 - output_c_loss: 0.7782 - val_loss: 5.9561 - val_output_react_loss: 0.6571 - val_output_bg_ph_loss: 0.7542 - val_output_ph_loss: 0.9116 - val_output_mg_c_loss: 0.7193 - val_output_c_loss: 0.7835
Epoch 25/60
60/60 - 7s - loss: 5.4718 - output_react_loss: 0.6155 - output_bg_ph_loss: 0.6568 - output_ph_loss: 0.8622 - output_mg_c_loss: 0.6471 - output_c_loss: 0.7709 - val_loss: 5.9058 - val_output_react_loss: 0.6504 - val_output_bg_ph_loss: 0.7475 - val_output_ph_loss: 0.8999 - val_output_mg_c_loss: 0.7158 - val_output_c_loss: 0.7785
Epoch 26/60
60/60 - 7s - loss: 5.4313 - output_react_loss: 0.6077 - output_bg_ph_loss: 0.6516 - output_ph_loss: 0.8562 - output_mg_c_loss: 0.6444 - output_c_loss: 0.7677 - val_loss: 5.8881 - val_output_react_loss: 0.6468 - val_output_bg_ph_loss: 0.7450 - val_output_ph_loss: 0.9019 - val_output_mg_c_loss: 0.7117 - val_output_c_loss: 0.7791
Epoch 27/60
60/60 - 7s - loss: 5.3893 - output_react_loss: 0.5980 - output_bg_ph_loss: 0.6479 - output_ph_loss: 0.8569 - output_mg_c_loss: 0.6371 - output_c_loss: 0.7661 - val_loss: 5.8772 - val_output_react_loss: 0.6483 - val_output_bg_ph_loss: 0.7417 - val_output_ph_loss: 0.8982 - val_output_mg_c_loss: 0.7079 - val_output_c_loss: 0.7832
Epoch 28/60
60/60 - 7s - loss: 5.3144 - output_react_loss: 0.5927 - output_bg_ph_loss: 0.6344 - output_ph_loss: 0.8408 - output_mg_c_loss: 0.6288 - output_c_loss: 0.7617 - val_loss: 5.8856 - val_output_react_loss: 0.6515 - val_output_bg_ph_loss: 0.7415 - val_output_ph_loss: 0.8974 - val_output_mg_c_loss: 0.7109 - val_output_c_loss: 0.7805
Epoch 29/60
60/60 - 7s - loss: 5.2738 - output_react_loss: 0.5889 - output_bg_ph_loss: 0.6291 - output_ph_loss: 0.8379 - output_mg_c_loss: 0.6228 - output_c_loss: 0.7543 - val_loss: 5.8810 - val_output_react_loss: 0.6542 - val_output_bg_ph_loss: 0.7441 - val_output_ph_loss: 0.8915 - val_output_mg_c_loss: 0.7064 - val_output_c_loss: 0.7803
Epoch 30/60
60/60 - 7s - loss: 5.2198 - output_react_loss: 0.5842 - output_bg_ph_loss: 0.6219 - output_ph_loss: 0.8301 - output_mg_c_loss: 0.6141 - output_c_loss: 0.7493 - val_loss: 5.9105 - val_output_react_loss: 0.6439 - val_output_bg_ph_loss: 0.7521 - val_output_ph_loss: 0.8976 - val_output_mg_c_loss: 0.7183 - val_output_c_loss: 0.7843
Epoch 31/60
60/60 - 7s - loss: 5.1920 - output_react_loss: 0.5793 - output_bg_ph_loss: 0.6147 - output_ph_loss: 0.8277 - output_mg_c_loss: 0.6137 - output_c_loss: 0.7488 - val_loss: 5.9147 - val_output_react_loss: 0.6485 - val_output_bg_ph_loss: 0.7378 - val_output_ph_loss: 0.9028 - val_output_mg_c_loss: 0.7255 - val_output_c_loss: 0.7882
Epoch 32/60
Epoch 00032: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
60/60 - 7s - loss: 5.1582 - output_react_loss: 0.5722 - output_bg_ph_loss: 0.6104 - output_ph_loss: 0.8238 - output_mg_c_loss: 0.6114 - output_c_loss: 0.7464 - val_loss: 5.9102 - val_output_react_loss: 0.6539 - val_output_bg_ph_loss: 0.7497 - val_output_ph_loss: 0.9050 - val_output_mg_c_loss: 0.7094 - val_output_c_loss: 0.7793
Epoch 33/60
60/60 - 7s - loss: 4.9822 - output_react_loss: 0.5522 - output_bg_ph_loss: 0.5878 - output_ph_loss: 0.8036 - output_mg_c_loss: 0.5842 - output_c_loss: 0.7304 - val_loss: 5.7695 - val_output_react_loss: 0.6343 - val_output_bg_ph_loss: 0.7283 - val_output_ph_loss: 0.8842 - val_output_mg_c_loss: 0.6947 - val_output_c_loss: 0.7705
Epoch 34/60
60/60 - 7s - loss: 4.9266 - output_react_loss: 0.5461 - output_bg_ph_loss: 0.5804 - output_ph_loss: 0.7966 - output_mg_c_loss: 0.5763 - output_c_loss: 0.7244 - val_loss: 5.7668 - val_output_react_loss: 0.6334 - val_output_bg_ph_loss: 0.7277 - val_output_ph_loss: 0.8848 - val_output_mg_c_loss: 0.6944 - val_output_c_loss: 0.7709
Epoch 35/60
60/60 - 7s - loss: 4.9011 - output_react_loss: 0.5440 - output_bg_ph_loss: 0.5762 - output_ph_loss: 0.7921 - output_mg_c_loss: 0.5731 - output_c_loss: 0.7226 - val_loss: 5.7530 - val_output_react_loss: 0.6321 - val_output_bg_ph_loss: 0.7270 - val_output_ph_loss: 0.8812 - val_output_mg_c_loss: 0.6926 - val_output_c_loss: 0.7685
Epoch 36/60
60/60 - 7s - loss: 4.8871 - output_react_loss: 0.5412 - output_bg_ph_loss: 0.5742 - output_ph_loss: 0.7916 - output_mg_c_loss: 0.5722 - output_c_loss: 0.7203 - val_loss: 5.7480 - val_output_react_loss: 0.6312 - val_output_bg_ph_loss: 0.7260 - val_output_ph_loss: 0.8808 - val_output_mg_c_loss: 0.6921 - val_output_c_loss: 0.7686
Epoch 37/60
60/60 - 7s - loss: 4.8764 - output_react_loss: 0.5405 - output_bg_ph_loss: 0.5734 - output_ph_loss: 0.7899 - output_mg_c_loss: 0.5693 - output_c_loss: 0.7201 - val_loss: 5.7581 - val_output_react_loss: 0.6335 - val_output_bg_ph_loss: 0.7276 - val_output_ph_loss: 0.8810 - val_output_mg_c_loss: 0.6930 - val_output_c_loss: 0.7688
Epoch 38/60
60/60 - 7s - loss: 4.8613 - output_react_loss: 0.5388 - output_bg_ph_loss: 0.5708 - output_ph_loss: 0.7876 - output_mg_c_loss: 0.5682 - output_c_loss: 0.7179 - val_loss: 5.7601 - val_output_react_loss: 0.6332 - val_output_bg_ph_loss: 0.7272 - val_output_ph_loss: 0.8830 - val_output_mg_c_loss: 0.6936 - val_output_c_loss: 0.7694
Epoch 39/60
60/60 - 7s - loss: 4.8535 - output_react_loss: 0.5374 - output_bg_ph_loss: 0.5704 - output_ph_loss: 0.7864 - output_mg_c_loss: 0.5669 - output_c_loss: 0.7178 - val_loss: 5.7466 - val_output_react_loss: 0.6310 - val_output_bg_ph_loss: 0.7269 - val_output_ph_loss: 0.8796 - val_output_mg_c_loss: 0.6914 - val_output_c_loss: 0.7684
Epoch 40/60
60/60 - 7s - loss: 4.8459 - output_react_loss: 0.5361 - output_bg_ph_loss: 0.5684 - output_ph_loss: 0.7872 - output_mg_c_loss: 0.5666 - output_c_loss: 0.7164 - val_loss: 5.7479 - val_output_react_loss: 0.6315 - val_output_bg_ph_loss: 0.7255 - val_output_ph_loss: 0.8815 - val_output_mg_c_loss: 0.6921 - val_output_c_loss: 0.7683
Epoch 41/60
60/60 - 7s - loss: 4.8369 - output_react_loss: 0.5362 - output_bg_ph_loss: 0.5672 - output_ph_loss: 0.7852 - output_mg_c_loss: 0.5639 - output_c_loss: 0.7170 - val_loss: 5.7497 - val_output_react_loss: 0.6308 - val_output_bg_ph_loss: 0.7274 - val_output_ph_loss: 0.8810 - val_output_mg_c_loss: 0.6918 - val_output_c_loss: 0.7688
Epoch 42/60
60/60 - 7s - loss: 4.8317 - output_react_loss: 0.5343 - output_bg_ph_loss: 0.5678 - output_ph_loss: 0.7842 - output_mg_c_loss: 0.5636 - output_c_loss: 0.7162 - val_loss: 5.7494 - val_output_react_loss: 0.6314 - val_output_bg_ph_loss: 0.7267 - val_output_ph_loss: 0.8805 - val_output_mg_c_loss: 0.6921 - val_output_c_loss: 0.7683
Epoch 43/60
60/60 - 7s - loss: 4.8210 - output_react_loss: 0.5334 - output_bg_ph_loss: 0.5650 - output_ph_loss: 0.7828 - output_mg_c_loss: 0.5633 - output_c_loss: 0.7147 - val_loss: 5.7541 - val_output_react_loss: 0.6315 - val_output_bg_ph_loss: 0.7267 - val_output_ph_loss: 0.8824 - val_output_mg_c_loss: 0.6934 - val_output_c_loss: 0.7687
Epoch 44/60
Epoch 00044: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
60/60 - 7s - loss: 4.8182 - output_react_loss: 0.5320 - output_bg_ph_loss: 0.5656 - output_ph_loss: 0.7830 - output_mg_c_loss: 0.5628 - output_c_loss: 0.7146 - val_loss: 5.7519 - val_output_react_loss: 0.6321 - val_output_bg_ph_loss: 0.7266 - val_output_ph_loss: 0.8805 - val_output_mg_c_loss: 0.6925 - val_output_c_loss: 0.7690
Epoch 45/60
60/60 - 7s - loss: 4.7908 - output_react_loss: 0.5292 - output_bg_ph_loss: 0.5608 - output_ph_loss: 0.7801 - output_mg_c_loss: 0.5589 - output_c_loss: 0.7130 - val_loss: 5.7460 - val_output_react_loss: 0.6313 - val_output_bg_ph_loss: 0.7260 - val_output_ph_loss: 0.8804 - val_output_mg_c_loss: 0.6913 - val_output_c_loss: 0.7686
Epoch 46/60
60/60 - 7s - loss: 4.7975 - output_react_loss: 0.5307 - output_bg_ph_loss: 0.5616 - output_ph_loss: 0.7813 - output_mg_c_loss: 0.5595 - output_c_loss: 0.7127 - val_loss: 5.7447 - val_output_react_loss: 0.6310 - val_output_bg_ph_loss: 0.7260 - val_output_ph_loss: 0.8803 - val_output_mg_c_loss: 0.6912 - val_output_c_loss: 0.7681
Epoch 47/60
60/60 - 7s - loss: 4.7907 - output_react_loss: 0.5294 - output_bg_ph_loss: 0.5615 - output_ph_loss: 0.7788 - output_mg_c_loss: 0.5584 - output_c_loss: 0.7135 - val_loss: 5.7471 - val_output_react_loss: 0.6313 - val_output_bg_ph_loss: 0.7265 - val_output_ph_loss: 0.8805 - val_output_mg_c_loss: 0.6914 - val_output_c_loss: 0.7683
Epoch 48/60
60/60 - 7s - loss: 4.7942 - output_react_loss: 0.5309 - output_bg_ph_loss: 0.5613 - output_ph_loss: 0.7801 - output_mg_c_loss: 0.5586 - output_c_loss: 0.7125 - val_loss: 5.7446 - val_output_react_loss: 0.6311 - val_output_bg_ph_loss: 0.7259 - val_output_ph_loss: 0.8803 - val_output_mg_c_loss: 0.6912 - val_output_c_loss: 0.7679
Epoch 49/60
60/60 - 7s - loss: 4.7873 - output_react_loss: 0.5290 - output_bg_ph_loss: 0.5599 - output_ph_loss: 0.7803 - output_mg_c_loss: 0.5578 - output_c_loss: 0.7136 - val_loss: 5.7469 - val_output_react_loss: 0.6314 - val_output_bg_ph_loss: 0.7265 - val_output_ph_loss: 0.8804 - val_output_mg_c_loss: 0.6913 - val_output_c_loss: 0.7681
Epoch 50/60
60/60 - 7s - loss: 4.7899 - output_react_loss: 0.5298 - output_bg_ph_loss: 0.5613 - output_ph_loss: 0.7783 - output_mg_c_loss: 0.5579 - output_c_loss: 0.7134 - val_loss: 5.7456 - val_output_react_loss: 0.6310 - val_output_bg_ph_loss: 0.7264 - val_output_ph_loss: 0.8803 - val_output_mg_c_loss: 0.6912 - val_output_c_loss: 0.7680
Epoch 51/60
60/60 - 7s - loss: 4.7827 - output_react_loss: 0.5286 - output_bg_ph_loss: 0.5598 - output_ph_loss: 0.7791 - output_mg_c_loss: 0.5573 - output_c_loss: 0.7122 - val_loss: 5.7438 - val_output_react_loss: 0.6313 - val_output_bg_ph_loss: 0.7257 - val_output_ph_loss: 0.8803 - val_output_mg_c_loss: 0.6908 - val_output_c_loss: 0.7679
Epoch 52/60
60/60 - 7s - loss: 4.7846 - output_react_loss: 0.5283 - output_bg_ph_loss: 0.5594 - output_ph_loss: 0.7796 - output_mg_c_loss: 0.5583 - output_c_loss: 0.7130 - val_loss: 5.7444 - val_output_react_loss: 0.6312 - val_output_bg_ph_loss: 0.7260 - val_output_ph_loss: 0.8803 - val_output_mg_c_loss: 0.6910 - val_output_c_loss: 0.7679
Epoch 53/60
60/60 - 7s - loss: 4.7853 - output_react_loss: 0.5297 - output_bg_ph_loss: 0.5607 - output_ph_loss: 0.7784 - output_mg_c_loss: 0.5571 - output_c_loss: 0.7119 - val_loss: 5.7463 - val_output_react_loss: 0.6312 - val_output_bg_ph_loss: 0.7262 - val_output_ph_loss: 0.8806 - val_output_mg_c_loss: 0.6913 - val_output_c_loss: 0.7683
Epoch 54/60
60/60 - 7s - loss: 4.7804 - output_react_loss: 0.5283 - output_bg_ph_loss: 0.5594 - output_ph_loss: 0.7793 - output_mg_c_loss: 0.5570 - output_c_loss: 0.7118 - val_loss: 5.7469 - val_output_react_loss: 0.6313 - val_output_bg_ph_loss: 0.7266 - val_output_ph_loss: 0.8804 - val_output_mg_c_loss: 0.6913 - val_output_c_loss: 0.7682
Epoch 55/60
60/60 - 7s - loss: 4.7820 - output_react_loss: 0.5293 - output_bg_ph_loss: 0.5602 - output_ph_loss: 0.7778 - output_mg_c_loss: 0.5571 - output_c_loss: 0.7111 - val_loss: 5.7462 - val_output_react_loss: 0.6312 - val_output_bg_ph_loss: 0.7264 - val_output_ph_loss: 0.8804 - val_output_mg_c_loss: 0.6913 - val_output_c_loss: 0.7680
Epoch 56/60
Epoch 00056: ReduceLROnPlateau reducing learning rate to 1.0000000656873453e-06.
60/60 - 7s - loss: 4.7782 - output_react_loss: 0.5285 - output_bg_ph_loss: 0.5594 - output_ph_loss: 0.7785 - output_mg_c_loss: 0.5562 - output_c_loss: 0.7117 - val_loss: 5.7466 - val_output_react_loss: 0.6315 - val_output_bg_ph_loss: 0.7264 - val_output_ph_loss: 0.8804 - val_output_mg_c_loss: 0.6912 - val_output_c_loss: 0.7680
Epoch 57/60
60/60 - 7s - loss: 4.7790 - output_react_loss: 0.5285 - output_bg_ph_loss: 0.5587 - output_ph_loss: 0.7781 - output_mg_c_loss: 0.5573 - output_c_loss: 0.7120 - val_loss: 5.7466 - val_output_react_loss: 0.6314 - val_output_bg_ph_loss: 0.7264 - val_output_ph_loss: 0.8804 - val_output_mg_c_loss: 0.6913 - val_output_c_loss: 0.7680
Epoch 58/60
60/60 - 7s - loss: 4.7773 - output_react_loss: 0.5282 - output_bg_ph_loss: 0.5589 - output_ph_loss: 0.7786 - output_mg_c_loss: 0.5560 - output_c_loss: 0.7124 - val_loss: 5.7464 - val_output_react_loss: 0.6314 - val_output_bg_ph_loss: 0.7264 - val_output_ph_loss: 0.8804 - val_output_mg_c_loss: 0.6912 - val_output_c_loss: 0.7680
Epoch 59/60
60/60 - 7s - loss: 4.7779 - output_react_loss: 0.5274 - output_bg_ph_loss: 0.5605 - output_ph_loss: 0.7784 - output_mg_c_loss: 0.5562 - output_c_loss: 0.7113 - val_loss: 5.7459 - val_output_react_loss: 0.6314 - val_output_bg_ph_loss: 0.7263 - val_output_ph_loss: 0.8803 - val_output_mg_c_loss: 0.6912 - val_output_c_loss: 0.7680
Epoch 60/60
60/60 - 7s - loss: 4.7728 - output_react_loss: 0.5282 - output_bg_ph_loss: 0.5584 - output_ph_loss: 0.7767 - output_mg_c_loss: 0.5562 - output_c_loss: 0.7107 - val_loss: 5.7458 - val_output_react_loss: 0.6313 - val_output_bg_ph_loss: 0.7263 - val_output_ph_loss: 0.8803 - val_output_mg_c_loss: 0.6912 - val_output_c_loss: 0.7680
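A quick consistency check on the numbers: in every epoch line, the reported total `loss` equals the five per-output losses combined with weights of roughly 2 for `output_react`, `output_bg_ph` and `output_mg_c` and 1 for `output_ph` and `output_c` (for the last fold-4 epoch, 2·0.5282 + 2·0.5584 + 0.7767 + 2·0.5562 + 0.7107 ≈ 4.773 against a logged loss of 4.7728; the same weighting reproduces `val_loss`). This weighting is inferred from the logged numbers only — the snippet below just redoes the arithmetic.

```python
# Reproduce the total loss reported for fold 4, epoch 60 from its parts.
# The 2/2/1/2/1 weighting is inferred from the log, not read from the code.
parts = {
    "output_react": 0.5282,
    "output_bg_ph": 0.5584,
    "output_ph":    0.7767,
    "output_mg_c":  0.5562,
    "output_c":     0.7107,
}
weights = {"output_react": 2, "output_bg_ph": 2, "output_ph": 1,
           "output_mg_c": 2, "output_c": 1}

total = sum(weights[k] * v for k, v in parts.items())
print(round(total, 3))  # 4.773 — matches the logged loss of 4.7728 up to rounding
```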
FOLD: 5
Epoch 1/60
60/60 - 9s - loss: 9.4103 - output_react_loss: 0.9933 - output_bg_ph_loss: 1.2397 - output_ph_loss: 1.3956 - output_mg_c_loss: 1.2014 - output_c_loss: 1.1457 - val_loss: 8.4835 - val_output_react_loss: 0.8810 - val_output_bg_ph_loss: 1.0646 - val_output_ph_loss: 1.2580 - val_output_mg_c_loss: 1.0907 - val_output_c_loss: 1.1530
Epoch 2/60
60/60 - 7s - loss: 8.1708 - output_react_loss: 0.8739 - output_bg_ph_loss: 1.0476 - output_ph_loss: 1.2143 - output_mg_c_loss: 1.0370 - output_c_loss: 1.0395 - val_loss: 8.1381 - val_output_react_loss: 0.8419 - val_output_bg_ph_loss: 1.0081 - val_output_ph_loss: 1.2299 - val_output_mg_c_loss: 1.0443 - val_output_c_loss: 1.1196
Epoch 3/60
60/60 - 7s - loss: 7.8827 - output_react_loss: 0.8401 - output_bg_ph_loss: 1.0078 - output_ph_loss: 1.1791 - output_mg_c_loss: 0.9992 - output_c_loss: 1.0093 - val_loss: 7.9489 - val_output_react_loss: 0.8188 - val_output_bg_ph_loss: 0.9792 - val_output_ph_loss: 1.1969 - val_output_mg_c_loss: 1.0260 - val_output_c_loss: 1.1040
Epoch 4/60
60/60 - 7s - loss: 7.5957 - output_react_loss: 0.8151 - output_bg_ph_loss: 0.9672 - output_ph_loss: 1.1401 - output_mg_c_loss: 0.9564 - output_c_loss: 0.9781 - val_loss: 7.5792 - val_output_react_loss: 0.8014 - val_output_bg_ph_loss: 0.9249 - val_output_ph_loss: 1.1520 - val_output_mg_c_loss: 0.9545 - val_output_c_loss: 1.0656
Epoch 5/60
60/60 - 7s - loss: 7.3306 - output_react_loss: 0.7962 - output_bg_ph_loss: 0.9289 - output_ph_loss: 1.1087 - output_mg_c_loss: 0.9108 - output_c_loss: 0.9501 - val_loss: 7.3039 - val_output_react_loss: 0.7773 - val_output_bg_ph_loss: 0.8880 - val_output_ph_loss: 1.1276 - val_output_mg_c_loss: 0.9107 - val_output_c_loss: 1.0241
Epoch 6/60
60/60 - 7s - loss: 7.0968 - output_react_loss: 0.7751 - output_bg_ph_loss: 0.8947 - output_ph_loss: 1.0794 - output_mg_c_loss: 0.8787 - output_c_loss: 0.9203 - val_loss: 7.1160 - val_output_react_loss: 0.7580 - val_output_bg_ph_loss: 0.8610 - val_output_ph_loss: 1.1039 - val_output_mg_c_loss: 0.8866 - val_output_c_loss: 1.0008
Epoch 7/60
60/60 - 7s - loss: 6.9131 - output_react_loss: 0.7591 - output_bg_ph_loss: 0.8697 - output_ph_loss: 1.0525 - output_mg_c_loss: 0.8513 - output_c_loss: 0.9003 - val_loss: 7.0134 - val_output_react_loss: 0.7564 - val_output_bg_ph_loss: 0.8386 - val_output_ph_loss: 1.0921 - val_output_mg_c_loss: 0.8741 - val_output_c_loss: 0.9832
Epoch 8/60
60/60 - 7s - loss: 6.7509 - output_react_loss: 0.7491 - output_bg_ph_loss: 0.8468 - output_ph_loss: 1.0275 - output_mg_c_loss: 0.8276 - output_c_loss: 0.8763 - val_loss: 6.8661 - val_output_react_loss: 0.7455 - val_output_bg_ph_loss: 0.8198 - val_output_ph_loss: 1.0721 - val_output_mg_c_loss: 0.8485 - val_output_c_loss: 0.9664
Epoch 9/60
60/60 - 7s - loss: 6.6092 - output_react_loss: 0.7311 - output_bg_ph_loss: 0.8301 - output_ph_loss: 1.0077 - output_mg_c_loss: 0.8086 - output_c_loss: 0.8622 - val_loss: 6.8013 - val_output_react_loss: 0.7290 - val_output_bg_ph_loss: 0.8241 - val_output_ph_loss: 1.0474 - val_output_mg_c_loss: 0.8450 - val_output_c_loss: 0.9576
Epoch 10/60
60/60 - 7s - loss: 6.4887 - output_react_loss: 0.7223 - output_bg_ph_loss: 0.8164 - output_ph_loss: 0.9872 - output_mg_c_loss: 0.7882 - output_c_loss: 0.8476 - val_loss: 6.7064 - val_output_react_loss: 0.7239 - val_output_bg_ph_loss: 0.8101 - val_output_ph_loss: 1.0381 - val_output_mg_c_loss: 0.8208 - val_output_c_loss: 0.9586
Epoch 11/60
60/60 - 7s - loss: 6.3593 - output_react_loss: 0.7113 - output_bg_ph_loss: 0.7965 - output_ph_loss: 0.9682 - output_mg_c_loss: 0.7710 - output_c_loss: 0.8335 - val_loss: 6.6384 - val_output_react_loss: 0.7220 - val_output_bg_ph_loss: 0.7892 - val_output_ph_loss: 1.0302 - val_output_mg_c_loss: 0.8179 - val_output_c_loss: 0.9498
Epoch 12/60
60/60 - 7s - loss: 6.2870 - output_react_loss: 0.7041 - output_bg_ph_loss: 0.7880 - output_ph_loss: 0.9601 - output_mg_c_loss: 0.7607 - output_c_loss: 0.8212 - val_loss: 6.5548 - val_output_react_loss: 0.7031 - val_output_bg_ph_loss: 0.7922 - val_output_ph_loss: 1.0134 - val_output_mg_c_loss: 0.8130 - val_output_c_loss: 0.9250
Epoch 13/60
60/60 - 7s - loss: 6.1807 - output_react_loss: 0.6880 - output_bg_ph_loss: 0.7759 - output_ph_loss: 0.9451 - output_mg_c_loss: 0.7463 - output_c_loss: 0.8150 - val_loss: 6.5071 - val_output_react_loss: 0.7022 - val_output_bg_ph_loss: 0.7753 - val_output_ph_loss: 1.0178 - val_output_mg_c_loss: 0.8069 - val_output_c_loss: 0.9205
Epoch 14/60
60/60 - 7s - loss: 6.1154 - output_react_loss: 0.6837 - output_bg_ph_loss: 0.7635 - output_ph_loss: 0.9374 - output_mg_c_loss: 0.7389 - output_c_loss: 0.8059 - val_loss: 6.5406 - val_output_react_loss: 0.7082 - val_output_bg_ph_loss: 0.7780 - val_output_ph_loss: 1.0117 - val_output_mg_c_loss: 0.8134 - val_output_c_loss: 0.9299
Epoch 15/60
60/60 - 7s - loss: 6.0102 - output_react_loss: 0.6752 - output_bg_ph_loss: 0.7474 - output_ph_loss: 0.9232 - output_mg_c_loss: 0.7222 - output_c_loss: 0.7972 - val_loss: 6.4292 - val_output_react_loss: 0.6975 - val_output_bg_ph_loss: 0.7658 - val_output_ph_loss: 1.0107 - val_output_mg_c_loss: 0.7862 - val_output_c_loss: 0.9197
Epoch 16/60
60/60 - 7s - loss: 5.9318 - output_react_loss: 0.6652 - output_bg_ph_loss: 0.7347 - output_ph_loss: 0.9134 - output_mg_c_loss: 0.7131 - output_c_loss: 0.7921 - val_loss: 6.3910 - val_output_react_loss: 0.6868 - val_output_bg_ph_loss: 0.7614 - val_output_ph_loss: 0.9970 - val_output_mg_c_loss: 0.7905 - val_output_c_loss: 0.9165
Epoch 17/60
60/60 - 7s - loss: 5.8695 - output_react_loss: 0.6581 - output_bg_ph_loss: 0.7297 - output_ph_loss: 0.9023 - output_mg_c_loss: 0.7036 - output_c_loss: 0.7844 - val_loss: 6.3561 - val_output_react_loss: 0.6866 - val_output_bg_ph_loss: 0.7569 - val_output_ph_loss: 0.9853 - val_output_mg_c_loss: 0.7862 - val_output_c_loss: 0.9112
Epoch 18/60
60/60 - 7s - loss: 5.7855 - output_react_loss: 0.6511 - output_bg_ph_loss: 0.7170 - output_ph_loss: 0.8933 - output_mg_c_loss: 0.6900 - output_c_loss: 0.7760 - val_loss: 6.3485 - val_output_react_loss: 0.6888 - val_output_bg_ph_loss: 0.7608 - val_output_ph_loss: 0.9833 - val_output_mg_c_loss: 0.7807 - val_output_c_loss: 0.9046
Epoch 19/60
60/60 - 7s - loss: 5.7347 - output_react_loss: 0.6449 - output_bg_ph_loss: 0.7103 - output_ph_loss: 0.8853 - output_mg_c_loss: 0.6838 - output_c_loss: 0.7713 - val_loss: 6.3686 - val_output_react_loss: 0.6843 - val_output_bg_ph_loss: 0.7677 - val_output_ph_loss: 0.9828 - val_output_mg_c_loss: 0.7876 - val_output_c_loss: 0.9067
Epoch 20/60
60/60 - 7s - loss: 5.6701 - output_react_loss: 0.6374 - output_bg_ph_loss: 0.6999 - output_ph_loss: 0.8811 - output_mg_c_loss: 0.6732 - output_c_loss: 0.7680 - val_loss: 6.2847 - val_output_react_loss: 0.6809 - val_output_bg_ph_loss: 0.7466 - val_output_ph_loss: 0.9774 - val_output_mg_c_loss: 0.7742 - val_output_c_loss: 0.9041
Epoch 21/60
60/60 - 7s - loss: 5.5949 - output_react_loss: 0.6301 - output_bg_ph_loss: 0.6892 - output_ph_loss: 0.8678 - output_mg_c_loss: 0.6639 - output_c_loss: 0.7607 - val_loss: 6.3120 - val_output_react_loss: 0.6808 - val_output_bg_ph_loss: 0.7537 - val_output_ph_loss: 0.9806 - val_output_mg_c_loss: 0.7802 - val_output_c_loss: 0.9021
Epoch 22/60
60/60 - 7s - loss: 5.5362 - output_react_loss: 0.6234 - output_bg_ph_loss: 0.6805 - output_ph_loss: 0.8631 - output_mg_c_loss: 0.6557 - output_c_loss: 0.7538 - val_loss: 6.2629 - val_output_react_loss: 0.6675 - val_output_bg_ph_loss: 0.7540 - val_output_ph_loss: 0.9724 - val_output_mg_c_loss: 0.7729 - val_output_c_loss: 0.9017
Epoch 23/60
60/60 - 7s - loss: 5.4736 - output_react_loss: 0.6182 - output_bg_ph_loss: 0.6700 - output_ph_loss: 0.8525 - output_mg_c_loss: 0.6468 - output_c_loss: 0.7511 - val_loss: 6.2878 - val_output_react_loss: 0.6761 - val_output_bg_ph_loss: 0.7491 - val_output_ph_loss: 0.9752 - val_output_mg_c_loss: 0.7794 - val_output_c_loss: 0.9033
Epoch 24/60
60/60 - 7s - loss: 5.4364 - output_react_loss: 0.6120 - output_bg_ph_loss: 0.6639 - output_ph_loss: 0.8504 - output_mg_c_loss: 0.6414 - output_c_loss: 0.7513 - val_loss: 6.2761 - val_output_react_loss: 0.6719 - val_output_bg_ph_loss: 0.7529 - val_output_ph_loss: 0.9753 - val_output_mg_c_loss: 0.7756 - val_output_c_loss: 0.9001
Epoch 25/60
60/60 - 7s - loss: 5.3850 - output_react_loss: 0.6028 - output_bg_ph_loss: 0.6608 - output_ph_loss: 0.8427 - output_mg_c_loss: 0.6348 - output_c_loss: 0.7455 - val_loss: 6.2672 - val_output_react_loss: 0.6702 - val_output_bg_ph_loss: 0.7542 - val_output_ph_loss: 0.9704 - val_output_mg_c_loss: 0.7750 - val_output_c_loss: 0.8978
Epoch 26/60
60/60 - 7s - loss: 5.3472 - output_react_loss: 0.6005 - output_bg_ph_loss: 0.6518 - output_ph_loss: 0.8405 - output_mg_c_loss: 0.6313 - output_c_loss: 0.7395 - val_loss: 6.2297 - val_output_react_loss: 0.6708 - val_output_bg_ph_loss: 0.7454 - val_output_ph_loss: 0.9709 - val_output_mg_c_loss: 0.7662 - val_output_c_loss: 0.8942
Epoch 27/60
60/60 - 7s - loss: 5.2674 - output_react_loss: 0.5923 - output_bg_ph_loss: 0.6382 - output_ph_loss: 0.8322 - output_mg_c_loss: 0.6201 - output_c_loss: 0.7342 - val_loss: 6.2864 - val_output_react_loss: 0.6815 - val_output_bg_ph_loss: 0.7518 - val_output_ph_loss: 0.9778 - val_output_mg_c_loss: 0.7723 - val_output_c_loss: 0.8973
Epoch 28/60
60/60 - 7s - loss: 5.2197 - output_react_loss: 0.5840 - output_bg_ph_loss: 0.6337 - output_ph_loss: 0.8240 - output_mg_c_loss: 0.6132 - output_c_loss: 0.7338 - val_loss: 6.1966 - val_output_react_loss: 0.6646 - val_output_bg_ph_loss: 0.7445 - val_output_ph_loss: 0.9605 - val_output_mg_c_loss: 0.7628 - val_output_c_loss: 0.8923
Epoch 29/60
60/60 - 7s - loss: 5.1921 - output_react_loss: 0.5853 - output_bg_ph_loss: 0.6292 - output_ph_loss: 0.8191 - output_mg_c_loss: 0.6080 - output_c_loss: 0.7279 - val_loss: 6.2762 - val_output_react_loss: 0.6758 - val_output_bg_ph_loss: 0.7508 - val_output_ph_loss: 0.9810 - val_output_mg_c_loss: 0.7723 - val_output_c_loss: 0.8973
Epoch 30/60
60/60 - 7s - loss: 5.1468 - output_react_loss: 0.5769 - output_bg_ph_loss: 0.6242 - output_ph_loss: 0.8179 - output_mg_c_loss: 0.6009 - output_c_loss: 0.7250 - val_loss: 6.1926 - val_output_react_loss: 0.6627 - val_output_bg_ph_loss: 0.7389 - val_output_ph_loss: 0.9605 - val_output_mg_c_loss: 0.7667 - val_output_c_loss: 0.8955
Epoch 31/60
60/60 - 7s - loss: 5.0940 - output_react_loss: 0.5704 - output_bg_ph_loss: 0.6151 - output_ph_loss: 0.8090 - output_mg_c_loss: 0.5967 - output_c_loss: 0.7206 - val_loss: 6.2227 - val_output_react_loss: 0.6647 - val_output_bg_ph_loss: 0.7431 - val_output_ph_loss: 0.9676 - val_output_mg_c_loss: 0.7725 - val_output_c_loss: 0.8946
Epoch 32/60
60/60 - 7s - loss: 5.0798 - output_react_loss: 0.5672 - output_bg_ph_loss: 0.6122 - output_ph_loss: 0.8100 - output_mg_c_loss: 0.5959 - output_c_loss: 0.7194 - val_loss: 6.2217 - val_output_react_loss: 0.6640 - val_output_bg_ph_loss: 0.7469 - val_output_ph_loss: 0.9640 - val_output_mg_c_loss: 0.7714 - val_output_c_loss: 0.8932
Epoch 33/60
60/60 - 7s - loss: 5.0324 - output_react_loss: 0.5618 - output_bg_ph_loss: 0.6052 - output_ph_loss: 0.8047 - output_mg_c_loss: 0.5881 - output_c_loss: 0.7174 - val_loss: 6.1964 - val_output_react_loss: 0.6668 - val_output_bg_ph_loss: 0.7437 - val_output_ph_loss: 0.9591 - val_output_mg_c_loss: 0.7609 - val_output_c_loss: 0.8946
Epoch 34/60
60/60 - 7s - loss: 5.0032 - output_react_loss: 0.5574 - output_bg_ph_loss: 0.6004 - output_ph_loss: 0.7996 - output_mg_c_loss: 0.5858 - output_c_loss: 0.7165 - val_loss: 6.1899 - val_output_react_loss: 0.6658 - val_output_bg_ph_loss: 0.7439 - val_output_ph_loss: 0.9613 - val_output_mg_c_loss: 0.7604 - val_output_c_loss: 0.8884
Epoch 35/60
60/60 - 7s - loss: 4.9681 - output_react_loss: 0.5537 - output_bg_ph_loss: 0.5983 - output_ph_loss: 0.7957 - output_mg_c_loss: 0.5784 - output_c_loss: 0.7116 - val_loss: 6.2296 - val_output_react_loss: 0.6712 - val_output_bg_ph_loss: 0.7517 - val_output_ph_loss: 0.9594 - val_output_mg_c_loss: 0.7630 - val_output_c_loss: 0.8984
Epoch 36/60
60/60 - 7s - loss: 4.9206 - output_react_loss: 0.5469 - output_bg_ph_loss: 0.5904 - output_ph_loss: 0.7905 - output_mg_c_loss: 0.5742 - output_c_loss: 0.7073 - val_loss: 6.1815 - val_output_react_loss: 0.6633 - val_output_bg_ph_loss: 0.7381 - val_output_ph_loss: 0.9590 - val_output_mg_c_loss: 0.7616 - val_output_c_loss: 0.8963
Epoch 37/60
60/60 - 7s - loss: 4.9037 - output_react_loss: 0.5481 - output_bg_ph_loss: 0.5850 - output_ph_loss: 0.7875 - output_mg_c_loss: 0.5714 - output_c_loss: 0.7073 - val_loss: 6.1912 - val_output_react_loss: 0.6668 - val_output_bg_ph_loss: 0.7434 - val_output_ph_loss: 0.9582 - val_output_mg_c_loss: 0.7592 - val_output_c_loss: 0.8943
Epoch 38/60
60/60 - 7s - loss: 4.8946 - output_react_loss: 0.5453 - output_bg_ph_loss: 0.5839 - output_ph_loss: 0.7857 - output_mg_c_loss: 0.5720 - output_c_loss: 0.7063 - val_loss: 6.2125 - val_output_react_loss: 0.6650 - val_output_bg_ph_loss: 0.7503 - val_output_ph_loss: 0.9621 - val_output_mg_c_loss: 0.7641 - val_output_c_loss: 0.8915
Epoch 39/60
60/60 - 7s - loss: 4.8311 - output_react_loss: 0.5358 - output_bg_ph_loss: 0.5770 - output_ph_loss: 0.7807 - output_mg_c_loss: 0.5629 - output_c_loss: 0.6989 - val_loss: 6.1657 - val_output_react_loss: 0.6621 - val_output_bg_ph_loss: 0.7400 - val_output_ph_loss: 0.9545 - val_output_mg_c_loss: 0.7596 - val_output_c_loss: 0.8878
Epoch 40/60
60/60 - 7s - loss: 4.7940 - output_react_loss: 0.5316 - output_bg_ph_loss: 0.5715 - output_ph_loss: 0.7774 - output_mg_c_loss: 0.5575 - output_c_loss: 0.6954 - val_loss: 6.1798 - val_output_react_loss: 0.6616 - val_output_bg_ph_loss: 0.7430 - val_output_ph_loss: 0.9581 - val_output_mg_c_loss: 0.7619 - val_output_c_loss: 0.8887
Epoch 41/60
60/60 - 7s - loss: 4.7747 - output_react_loss: 0.5290 - output_bg_ph_loss: 0.5683 - output_ph_loss: 0.7745 - output_mg_c_loss: 0.5547 - output_c_loss: 0.6962 - val_loss: 6.1764 - val_output_react_loss: 0.6625 - val_output_bg_ph_loss: 0.7429 - val_output_ph_loss: 0.9549 - val_output_mg_c_loss: 0.7614 - val_output_c_loss: 0.8878
Epoch 42/60
60/60 - 7s - loss: 4.7490 - output_react_loss: 0.5251 - output_bg_ph_loss: 0.5645 - output_ph_loss: 0.7725 - output_mg_c_loss: 0.5521 - output_c_loss: 0.6932 - val_loss: 6.1903 - val_output_react_loss: 0.6646 - val_output_bg_ph_loss: 0.7406 - val_output_ph_loss: 0.9636 - val_output_mg_c_loss: 0.7615 - val_output_c_loss: 0.8934
Epoch 43/60
60/60 - 7s - loss: 4.7238 - output_react_loss: 0.5198 - output_bg_ph_loss: 0.5610 - output_ph_loss: 0.7679 - output_mg_c_loss: 0.5502 - output_c_loss: 0.6939 - val_loss: 6.2057 - val_output_react_loss: 0.6666 - val_output_bg_ph_loss: 0.7456 - val_output_ph_loss: 0.9693 - val_output_mg_c_loss: 0.7613 - val_output_c_loss: 0.8895
Epoch 44/60
Epoch 00044: ReduceLROnPlateau reducing learning rate to 0.00010000000474974513.
60/60 - 7s - loss: 4.7290 - output_react_loss: 0.5223 - output_bg_ph_loss: 0.5597 - output_ph_loss: 0.7697 - output_mg_c_loss: 0.5520 - output_c_loss: 0.6912 - val_loss: 6.1686 - val_output_react_loss: 0.6703 - val_output_bg_ph_loss: 0.7385 - val_output_ph_loss: 0.9566 - val_output_mg_c_loss: 0.7537 - val_output_c_loss: 0.8871
Epoch 45/60
60/60 - 7s - loss: 4.5633 - output_react_loss: 0.5012 - output_bg_ph_loss: 0.5374 - output_ph_loss: 0.7499 - output_mg_c_loss: 0.5300 - output_c_loss: 0.6761 - val_loss: 6.0713 - val_output_react_loss: 0.6523 - val_output_bg_ph_loss: 0.7264 - val_output_ph_loss: 0.9448 - val_output_mg_c_loss: 0.7446 - val_output_c_loss: 0.8799
Epoch 46/60
60/60 - 7s - loss: 4.5008 - output_react_loss: 0.4931 - output_bg_ph_loss: 0.5299 - output_ph_loss: 0.7426 - output_mg_c_loss: 0.5203 - output_c_loss: 0.6716 - val_loss: 6.0661 - val_output_react_loss: 0.6514 - val_output_bg_ph_loss: 0.7256 - val_output_ph_loss: 0.9446 - val_output_mg_c_loss: 0.7441 - val_output_c_loss: 0.8791
Epoch 47/60
60/60 - 7s - loss: 4.4755 - output_react_loss: 0.4909 - output_bg_ph_loss: 0.5256 - output_ph_loss: 0.7383 - output_mg_c_loss: 0.5175 - output_c_loss: 0.6692 - val_loss: 6.0685 - val_output_react_loss: 0.6520 - val_output_bg_ph_loss: 0.7262 - val_output_ph_loss: 0.9452 - val_output_mg_c_loss: 0.7437 - val_output_c_loss: 0.8795
Epoch 48/60
60/60 - 7s - loss: 4.4565 - output_react_loss: 0.4878 - output_bg_ph_loss: 0.5224 - output_ph_loss: 0.7381 - output_mg_c_loss: 0.5149 - output_c_loss: 0.6683 - val_loss: 6.0612 - val_output_react_loss: 0.6516 - val_output_bg_ph_loss: 0.7251 - val_output_ph_loss: 0.9440 - val_output_mg_c_loss: 0.7429 - val_output_c_loss: 0.8782
Epoch 49/60
60/60 - 7s - loss: 4.4468 - output_react_loss: 0.4877 - output_bg_ph_loss: 0.5210 - output_ph_loss: 0.7358 - output_mg_c_loss: 0.5133 - output_c_loss: 0.6670 - val_loss: 6.0555 - val_output_react_loss: 0.6506 - val_output_bg_ph_loss: 0.7235 - val_output_ph_loss: 0.9438 - val_output_mg_c_loss: 0.7426 - val_output_c_loss: 0.8782
Epoch 50/60
60/60 - 7s - loss: 4.4385 - output_react_loss: 0.4865 - output_bg_ph_loss: 0.5189 - output_ph_loss: 0.7356 - output_mg_c_loss: 0.5128 - output_c_loss: 0.6665 - val_loss: 6.0605 - val_output_react_loss: 0.6513 - val_output_bg_ph_loss: 0.7248 - val_output_ph_loss: 0.9438 - val_output_mg_c_loss: 0.7430 - val_output_c_loss: 0.8785
Epoch 51/60
60/60 - 7s - loss: 4.4226 - output_react_loss: 0.4840 - output_bg_ph_loss: 0.5182 - output_ph_loss: 0.7325 - output_mg_c_loss: 0.5101 - output_c_loss: 0.6654 - val_loss: 6.0619 - val_output_react_loss: 0.6517 - val_output_bg_ph_loss: 0.7257 - val_output_ph_loss: 0.9435 - val_output_mg_c_loss: 0.7427 - val_output_c_loss: 0.8782
Epoch 52/60
60/60 - 7s - loss: 4.4189 - output_react_loss: 0.4833 - output_bg_ph_loss: 0.5170 - output_ph_loss: 0.7333 - output_mg_c_loss: 0.5101 - output_c_loss: 0.6649 - val_loss: 6.0592 - val_output_react_loss: 0.6518 - val_output_bg_ph_loss: 0.7242 - val_output_ph_loss: 0.9432 - val_output_mg_c_loss: 0.7429 - val_output_c_loss: 0.8780
Epoch 53/60
60/60 - 7s - loss: 4.4123 - output_react_loss: 0.4833 - output_bg_ph_loss: 0.5154 - output_ph_loss: 0.7338 - output_mg_c_loss: 0.5084 - output_c_loss: 0.6641 - val_loss: 6.0638 - val_output_react_loss: 0.6518 - val_output_bg_ph_loss: 0.7253 - val_output_ph_loss: 0.9444 - val_output_mg_c_loss: 0.7434 - val_output_c_loss: 0.8783
Epoch 54/60
Epoch 00054: ReduceLROnPlateau reducing learning rate to 1.0000000474974514e-05.
60/60 - 7s - loss: 4.4094 - output_react_loss: 0.4827 - output_bg_ph_loss: 0.5158 - output_ph_loss: 0.7316 - output_mg_c_loss: 0.5084 - output_c_loss: 0.6640 - val_loss: 6.0602 - val_output_react_loss: 0.6521 - val_output_bg_ph_loss: 0.7258 - val_output_ph_loss: 0.9428 - val_output_mg_c_loss: 0.7421 - val_output_c_loss: 0.8774
Epoch 55/60
60/60 - 7s - loss: 4.3949 - output_react_loss: 0.4811 - output_bg_ph_loss: 0.5129 - output_ph_loss: 0.7285 - output_mg_c_loss: 0.5080 - output_c_loss: 0.6624 - val_loss: 6.0563 - val_output_react_loss: 0.6513 - val_output_bg_ph_loss: 0.7248 - val_output_ph_loss: 0.9429 - val_output_mg_c_loss: 0.7419 - val_output_c_loss: 0.8774
Epoch 56/60
60/60 - 7s - loss: 4.3883 - output_react_loss: 0.4807 - output_bg_ph_loss: 0.5119 - output_ph_loss: 0.7301 - output_mg_c_loss: 0.5054 - output_c_loss: 0.6625 - val_loss: 6.0552 - val_output_react_loss: 0.6515 - val_output_bg_ph_loss: 0.7244 - val_output_ph_loss: 0.9427 - val_output_mg_c_loss: 0.7417 - val_output_c_loss: 0.8773
Epoch 57/60
60/60 - 7s - loss: 4.3856 - output_react_loss: 0.4796 - output_bg_ph_loss: 0.5125 - output_ph_loss: 0.7281 - output_mg_c_loss: 0.5056 - output_c_loss: 0.6620 - val_loss: 6.0554 - val_output_react_loss: 0.6514 - val_output_bg_ph_loss: 0.7245 - val_output_ph_loss: 0.9427 - val_output_mg_c_loss: 0.7418 - val_output_c_loss: 0.8773
Epoch 58/60
60/60 - 7s - loss: 4.3889 - output_react_loss: 0.4792 - output_bg_ph_loss: 0.5130 - output_ph_loss: 0.7293 - output_mg_c_loss: 0.5062 - output_c_loss: 0.6629 - val_loss: 6.0560 - val_output_react_loss: 0.6513 - val_output_bg_ph_loss: 0.7246 - val_output_ph_loss: 0.9428 - val_output_mg_c_loss: 0.7421 - val_output_c_loss: 0.8773
Epoch 59/60
60/60 - 7s - loss: 4.3844 - output_react_loss: 0.4790 - output_bg_ph_loss: 0.5120 - output_ph_loss: 0.7287 - output_mg_c_loss: 0.5059 - output_c_loss: 0.6618 - val_loss: 6.0537 - val_output_react_loss: 0.6512 - val_output_bg_ph_loss: 0.7241 - val_output_ph_loss: 0.9427 - val_output_mg_c_loss: 0.7417 - val_output_c_loss: 0.8771
Epoch 60/60
60/60 - 7s - loss: 4.3848 - output_react_loss: 0.4801 - output_bg_ph_loss: 0.5114 - output_ph_loss: 0.7291 - output_mg_c_loss: 0.5053 - output_c_loss: 0.6621 - val_loss: 6.0546 - val_output_react_loss: 0.6511 - val_output_bg_ph_loss: 0.7241 - val_output_ph_loss: 0.9430 - val_output_mg_c_loss: 0.7419 - val_output_c_loss: 0.8773
###Markdown
Model loss graph
###Code
for fold, history in enumerate(history_list):
print(f'\nFOLD: {fold+1}')
print(f"Train {np.array(history['loss']).min():.5f} Validation {np.array(history['val_loss']).min():.5f}")
plot_metrics_agg(history_list)
###Output
FOLD: 1
Train 4.82023 Validation 5.43593
FOLD: 2
Train 4.91574 Validation 6.12882
FOLD: 3
Train 4.48322 Validation 5.59557
FOLD: 4
Train 4.77282 Validation 5.74377
FOLD: 5
Train 4.38437 Validation 6.05368
###Markdown
Post-processing
###Code
# Assign preds to OOF set
for idx, col in enumerate(pred_cols):
val = oof_preds[:, :, idx]
oof = oof.assign(**{f'{col}_pred': list(val)})
oof.to_csv('oof.csv', index=False)
oof_preds_dict = {}
for idx, col in enumerate(pred_cols):
    oof_preds_dict[col] = oof_preds[:, :, idx]
# Assign values to test set
preds_ls = []
for df, preds in [(public_test, test_public_preds), (private_test, test_private_preds)]:
for i, uid in enumerate(df.id):
single_pred = preds[i]
single_df = pd.DataFrame(single_pred, columns=pred_cols)
single_df['id_seqpos'] = [f'{uid}_{x}' for x in range(single_df.shape[0])]
preds_ls.append(single_df)
preds_df = pd.concat(preds_ls)
###Output
_____no_output_____
###Markdown
Model evaluation
###Code
y_true_dict = get_targets_dict(train, pred_cols, train.index)
y_true = np.array([y_true_dict[col] for col in pred_cols]).transpose((1, 2, 0, 3)).reshape(oof_preds.shape)
display(evaluate_model(train, y_true, oof_preds, pred_cols))
###Output
_____no_output_____
###Markdown
Visualize test predictions
###Code
submission = pd.read_csv(database_base_path + 'sample_submission.csv')
submission = submission[['id_seqpos']].merge(preds_df, on=['id_seqpos'])
###Output
_____no_output_____
###Markdown
Test set predictions
###Code
display(submission.head(10))
display(submission.describe())
submission.to_csv('submission.csv', index=False)
###Output
_____no_output_____ |
uber_analysis_apr14/Uber analysis apr 14.ipynb | ###Markdown
Load the CSV file into memory
###Code
# pandas, seaborn and the pylab-style plotting names (hist, xlabel, figure, ...) are used throughout this notebook
import pandas as pd
import seaborn as sns
from pylab import *

df = pd.read_csv('uber-raw-data-apr14.csv')
df.head()
df['Date/Time'] = df['Date/Time'].map(pd.to_datetime)
df.head()
###Output
_____no_output_____
###Markdown
Date and time helper functions
###Code
def get_day_of_month(dt):
return dt.day
def get_weekday(dt):
return dt.weekday()
def get_hour(dt):
return dt.hour
df['day_of_month'] = df['Date/Time'].map(get_day_of_month)
df['weekday'] = df['Date/Time'].map(get_weekday)
df['hour'] = df['Date/Time'].map(get_hour)
df.head()
###Output
_____no_output_____
###Markdown
Date Analysis
###Code
hist(df.day_of_month, bins=30, rwidth=.8, range=(0.5, 30.5))
xlabel('date of the month')
ylabel('frequency')
title("Frequency by Day of Month - Uber April 2014")
;
def count_rows(rows):
return len(rows)
calls_by_date = df.groupby('day_of_month').apply(count_rows)
calls_sorted_by_date = calls_by_date.sort_values()
bar(range(1,31), calls_sorted_by_date)
xticks(range(1,31), calls_sorted_by_date.index)
;
###Output
_____no_output_____
###Markdown
Analyse the hour
###Code
hist(df.hour, bins=24, rwidth=.8, range=(-.5,23.5))
;
###Output
_____no_output_____
###Markdown
Analyse the weekday
###Code
hist(df.weekday, bins=7, range=(-.5, 6.5), rwidth=.8, color='#AA5533', alpha=.4)
;
###Output
_____no_output_____
###Markdown
Analyse hour vs weekday
###Code
hour_matrix = df.groupby('hour weekday'.split()).apply(count_rows).unstack()
sns.heatmap(hour_matrix)
;
###Output
_____no_output_____
###Markdown
Analysis of Latitude and Longitude
###Code
hist(df['Lon'], bins=100, range=(-74.1, -73.9), color='green', alpha=.5, label='Longitude')
grid()
legend(loc='upper left')
twiny()
hist(df['Lat'], bins=100, range=(40.5, 41), color='red', alpha=.5, label='Latitude')
legend(loc='best')
###Output
_____no_output_____
###Markdown
Geo plot the data
###Code
figure(figsize=(20, 20))
plot(df['Lon'], df['Lat'], '.', alpha=.5)
xlim(-74.2, -73.7)
ylim(40.7, 41)
;
###Output
_____no_output_____ |
Google_Colaboratory/Chapter6/Section6-4.ipynb | ###Markdown
Chapter 6: Recurrent Neural Networks (Classifying Text Data) - Building a Movie Review Sentiment Analyzer. 4. Speeding Up Sentiment Analysis
###Code
# Install the required packages
!pip3 install torch==1.6.0+cu101
!pip3 install torchvision==0.7.0+cu101
!pip3 install torchtext==0.3.1
!pip3 install numpy==1.18.5
!pip3 install matplotlib==3.2.2
!pip3 install scikit-learn==0.23.1
!pip3 install seaborn==0.11.0
!pip3 install spacy==2.2.4
###Output
Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (1.6.0+cu101)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch) (1.18.5)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch) (0.16.0)
Requirement already satisfied: torchvision in /usr/local/lib/python3.6/dist-packages (0.7.0+cu101)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.18.5)
Requirement already satisfied: torch==1.6.0 in /usr/local/lib/python3.6/dist-packages (from torchvision) (1.6.0+cu101)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision) (7.0.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch==1.6.0->torchvision) (0.16.0)
Requirement already satisfied: torchtext in /usr/local/lib/python3.6/dist-packages (0.3.1)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from torchtext) (4.41.1)
Requirement already satisfied: torch in /usr/local/lib/python3.6/dist-packages (from torchtext) (1.6.0+cu101)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torchtext) (1.18.5)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from torchtext) (2.23.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from torch->torchtext) (0.16.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext) (2020.6.20)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->torchtext) (3.0.4)
Collecting numpy==1.19.0
[?25l Downloading https://files.pythonhosted.org/packages/93/0b/71ae818646c1a80fbe6776d41f480649523ed31243f1f34d9d7e41d70195/numpy-1.19.0-cp36-cp36m-manylinux2010_x86_64.whl (14.6MB)
[K |████████████████████████████████| 14.6MB 215kB/s
[31mERROR: tensorflow 2.3.0 has requirement numpy<1.19.0,>=1.16.0, but you'll have numpy 1.19.0 which is incompatible.[0m
[31mERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.[0m
[31mERROR: albumentations 0.1.12 has requirement imgaug<0.2.7,>=0.2.5, but you'll have imgaug 0.2.9 which is incompatible.[0m
[?25hInstalling collected packages: numpy
Found existing installation: numpy 1.18.5
Uninstalling numpy-1.18.5:
Successfully uninstalled numpy-1.18.5
Successfully installed numpy-1.19.0
###Markdown
4.2. Preparation (importing the packages)
###Code
# Import the required packages
import numpy as np
import spacy
import matplotlib.pyplot as plt
import torch
from torchtext import data
from torchtext import datasets
from torch import nn
import torch.nn.functional as F
from torch import optim
###Output
_____no_output_____
###Markdown
4.3. Preparing the training and test data
###Code
def generate_bigrams(x):
    n_grams = set(zip(*[x[i:] for i in range(2)])) # bi-grams: pairs of adjacent tokens
    for n_gram in n_grams:
        x.append(' '.join(n_gram)) # append each bi-gram to the token list
    return x
generate_bigrams(['This', 'film', 'is', 'terrible'])
# Define the Text and Label Fields
all_texts = data.Field(tokenize = 'spacy', preprocessing = generate_bigrams) # Field for the text data
all_labels = data.LabelField(dtype = torch.float) # Field for the label data
# Load the data
train_dataset, test_dataset = datasets.IMDB.splits(all_texts, all_labels)
print("train_dataset size: {}".format(len(train_dataset))) # size of the training data
print("test_dataset size: {}".format(len(test_dataset))) # size of the test data
# Inspect the contents of the training data
print(vars(train_dataset.examples[0]))
# Build the vocabulary
max_vocab_size = 25_000
all_texts.build_vocab(train_dataset,
                      max_size = max_vocab_size,
                      vectors = 'glove.6B.100d', # pre-trained word embedding vectors
                      unk_init = torch.Tensor.normal_) # initialize unknown tokens randomly
all_labels.build_vocab(train_dataset)
print("Unique tokens in all_texts vocabulary: {}".format(len(all_texts.vocab)))
print("Unique tokens in all_labels vocabulary: {}".format(len(all_labels.vocab)))
# The 20 most frequent words
print(all_texts.vocab.freqs.most_common(20))
# The text has been converted to ids, but the ids can be mapped back to tokens.
print(all_texts.vocab.itos[:10])
# Check which of labels 0 and 1 corresponds to negative and which to positive.
print(all_labels.vocab.stoi)
# Create the mini-batches
batch_size = 64
# Choose whether to use the CPU or the GPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Check the device
print("Device: {}".format(device))
train_batch, test_batch = data.BucketIterator.splits(
    (train_dataset, test_dataset), # the datasets
    batch_size = batch_size, # batch size
    device = device) # CPU or GPU
for batch in train_batch:
    print("text size: {}".format(batch.text[0].size())) # size of the text data
    print("squence size: {}".format(batch.text[1].size())) # size of the sequence lengths
    print("label size: {}".format(batch.label.size())) # size of the label data
break
###Output
Device: cuda
text size: torch.Size([64])
squence size: torch.Size([64])
label size: torch.Size([64])
###Markdown
4.4. Defining the neural network
###Code
# Define the neural network
class Net(nn.Module):
    def __init__(self, D_in, D_embedding, D_out, pad_idx):
        super(Net, self).__init__()
        self.embedding = nn.Embedding(D_in, D_embedding, padding_idx = pad_idx) # word embedding layer
        self.linear = nn.Linear(D_embedding, D_out) # fully connected layer
    def forward(self, x):
        embedded = self.embedding(x) #text = [sent len, batch size]
        embedded = embedded.permute(1, 0, 2) #embedded = [sent len, batch size, emb dim]
        pooled = F.avg_pool2d(embedded, (embedded.shape[1], 1)).squeeze(1) #embedded = [batch size, sent len, emb dim]
        output = self.linear(pooled) #pooled = [batch size, embedding_dim]
        return output
# Instantiate the neural network
D_in = len(all_texts.vocab) # dimension of the input layer
D_embedding = 100 # dimension of the embedding layer
D_out = 1 # dimension of the output layer
pad_idx = all_texts.vocab.stoi[all_texts.pad_token] # index of the <pad> token
net = Net(D_in,
D_embedding,
D_out,
pad_idx).to(device)
print(net)
# Load the pre-trained embeddings
pretrained_embeddings = all_texts.vocab.vectors
print(pretrained_embeddings.shape)
# Replace the weights of the embedding layer with the pre-trained embeddings
net.embedding.weight.data.copy_(pretrained_embeddings)
# Get the index of the unknown token <unk>
unk_idx = all_texts.vocab.stoi[all_texts.unk_token]
# Zero-initialize the embedding vectors at unk_idx and pad_idx
net.embedding.weight.data[unk_idx] = torch.zeros(D_embedding)
net.embedding.weight.data[pad_idx] = torch.zeros(D_embedding)
print(net.embedding.weight.data)
###Output
tensor([[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[-0.0382, -0.2449, 0.7281, ..., -0.1459, 0.8278, 0.2706],
...,
[-1.1643, -1.8603, -1.5510, ..., 0.9041, -0.9158, 0.9599],
[-0.6015, 0.7581, 0.8129, ..., -0.0145, 0.5207, 0.8053],
[ 0.8744, -1.1818, -0.9526, ..., 0.4148, 0.7086, 1.1648]],
device='cuda:0')
###Markdown
4.5. Defining the loss function and the optimizer
###Code
# Define the loss function
criterion = nn.BCEWithLogitsLoss().to(device)
# Define the optimizer
optimizer = optim.Adam(net.parameters())
###Output
_____no_output_____
###Markdown
4.6. Training
###Code
# Create lists to store the losses and accuracies
train_loss_list = [] # training loss
train_accuracy_list = [] # training accuracy
test_loss_list = [] # evaluation loss
test_accuracy_list = [] # test accuracy
# Run the training loop (epochs)
epoch = 10
for i in range(epoch):
    # Show the progress of the epochs
    print('---------------------------------------------')
    print("Epoch: {}/{}".format(i+1, epoch))
    # Initialize the losses and accuracies
    train_loss = 0 # training loss
    train_accuracy = 0 # number of correct training predictions
    test_loss = 0 # evaluation loss
    test_accuracy = 0 # number of correct test predictions
    # --------- training part --------- #
    # Put the network into training mode
    net.train()
    # Load the data mini-batch by mini-batch and train
    for batch in train_batch:
        # Move the Tensors to the GPU
        texts = batch.text
        labels = batch.label
        # Reset the gradients
        optimizer.zero_grad()
        # Feed the data in and compute the predictions (forward pass)
        y_pred_prob = net(texts).squeeze(1)
        # Compute the loss (error)
        loss = criterion(y_pred_prob, labels)
        # Compute the gradients (backpropagation)
        loss.backward()
        # Update the parameters (weights)
        optimizer.step()
        # Accumulate the loss of each mini-batch
        train_loss += loss.item()
        # Derive the predicted labels from the predicted probabilities y_pred_prob
        y_pred_labels = torch.round(torch.sigmoid(y_pred_prob))
        # Count the number of correctly predicted labels in each mini-batch
        train_accuracy += torch.sum(y_pred_labels == labels).item() / len(labels)
    # Compute the per-epoch loss and accuracy (averaged over the mini-batches)
    epoch_train_loss = train_loss / len(train_batch)
    epoch_train_accuracy = train_accuracy / len(train_batch)
    # --------- end of training part --------- #
    # --------- evaluation part --------- #
    # Put the network into evaluation mode
    net.eval()
    # Turn off automatic differentiation during evaluation
    with torch.no_grad():
        for batch in test_batch:
            # Move the Tensors to the GPU
            texts = batch.text
            labels = batch.label
            # Feed the data in and compute the predictions (forward pass)
            y_pred_prob = net(texts).squeeze(1)
            # Compute the loss (error)
            loss = criterion(y_pred_prob, labels)
            # Accumulate the loss of each mini-batch
            test_loss += loss.item()
            # Derive the predicted labels from the predicted probabilities y_pred_prob
            y_pred_labels = torch.round(torch.sigmoid(y_pred_prob))
            # Count the number of correctly predicted labels in each mini-batch
            test_accuracy += torch.sum(y_pred_labels == labels).item() / len(labels)
    # Compute the per-epoch loss and accuracy (averaged over the mini-batches)
    epoch_test_loss = test_loss / len(test_batch)
    epoch_test_accuracy = test_accuracy / len(test_batch)
    # --------- end of evaluation part --------- #
    # Print the loss and accuracy for each epoch
    print("Train_Loss: {:.4f}, Train_Accuracy: {:.4f}".format(
        epoch_train_loss, epoch_train_accuracy))
    print("Test_Loss: {:.4f}, Test_Accuracy: {:.4f}".format(
        epoch_test_loss, epoch_test_accuracy))
    # Append the losses and accuracies to the lists
    train_loss_list.append(epoch_train_loss) # training loss
    train_accuracy_list.append(epoch_train_accuracy) # training accuracy
    test_loss_list.append(epoch_test_loss) # test loss
    test_accuracy_list.append(epoch_test_accuracy) # test accuracy
###Output
---------------------------------------------
Epoch: 1/10
###Markdown
4.7. Visualizing the results
###Code
# Loss
plt.figure()
plt.title('Train and Test Loss') # title
plt.xlabel('Epoch') # x-axis label
plt.ylabel('Loss') # y-axis label
plt.plot(range(1, epoch+1), train_loss_list, color='blue',
         linestyle='-', label='Train_Loss') # plot Train_Loss
plt.plot(range(1, epoch+1), test_loss_list, color='red',
         linestyle='--', label='Test_Loss') # plot Test_Loss
plt.legend() # legend
# Accuracy
plt.figure()
plt.title('Train and Test Accuracy') # title
plt.xlabel('Epoch') # x-axis label
plt.ylabel('Accuracy') # y-axis label
plt.plot(range(1, epoch+1), train_accuracy_list, color='blue',
         linestyle='-', label='Train_Accuracy') # plot Train_Accuracy
plt.plot(range(1, epoch+1), test_accuracy_list, color='red',
         linestyle='--', label='Test_Accuracy') # plot Test_Accuracy
plt.legend()
# Show the figures
plt.show()
###Output
_____no_output_____
###Markdown
4.8. Sentiment analysis of new reviews
###Code
nlp = spacy.load('en')
def predict_sentiment(net, sentence):
    net.eval() # set the network to evaluation mode
    tokenized = [tok.text for tok in nlp.tokenizer(sentence)] # tokenize the sentence into a list of tokens
    indexed = [all_texts.vocab.stoi[t] for t in tokenized] # map each token to its vocabulary index
    tensor = torch.LongTensor(indexed).to(device) # convert the indices to a Tensor
    tensor = tensor.unsqueeze(1) # add a batch dimension
    prediction = torch.sigmoid(net(tensor)) # sigmoid squashes the output to the range 0-1
    return prediction
# Run sentiment analysis on a negative review
y_pred_prob = predict_sentiment(net, "This film is terrible")
y_pred_label = torch.round(y_pred_prob)
print("Probability: {:.4f}".format(y_pred_prob.item()))
print("Pred Label: {:.0f}".format(y_pred_label.item()))
# Run sentiment analysis on a positive review
y_pred_prob = predict_sentiment(net, "This film is great")
y_pred_label = torch.round(y_pred_prob)
print("Probability: {:.4f}".format(y_pred_prob.item()))
print("Pred Label: {:.0f}".format(y_pred_label.item()))
###Output
Probability: 1.0000
Pred Label: 1
|
Notebooks/stock_example.ipynb | ###Markdown
Make a stock and view its performance. Create a stock of Fortis Inc and view its performance over 20 years based on the current dividend and prospective growth.
###Code
# Create a Brokerage Account
a = Account()
FTS = a.Stock(a, name='FTS.TO', shares=50, price=50.78, dividend=1.8, annual_growth=5, volatility='med', drip=True,
dividend_percent_growth=10)
# View the summary of the stock before compounding
FTS.summary()
values, prices, cash = FTS.compound(years=20)
# View the summary of the stock after compounding
FTS.summary()
###Output
The total value of FTS.TO is $11477.29.
You have 62.0 shares at a price of $121.21.
You have $3962.03 in cash.
###Markdown
We see that 12 new shares were purchased over the 20 years through the DRIP program; the remaining dividends went to a collective cash account called 'a' in the code. The results can also be plotted.
###Code
# The stock price can be plotted
ut.plot_growth(prices, 'Stock Price', FTS.name)
plt.show()
# The value of the cash account as well as the cumulative value (stock + cash) can also be plotted
plt.figure(figsize=(18,5))
x = np.array([i / 12 for i in range(len(values))])
plt.plot(x,values,label='Cumulative');
plt.plot(x,cash,label='Cash');
plt.ylabel('Value ($)');
plt.xlabel('Years');
plt.grid()
plt.legend();
###Output
_____no_output_____
###Markdown
We see that the cash account does not grow as quickly after approximately 17 years. This is because the dividends are reinvested by buying shares: at this point the quarterly dividend finally became large enough to purchase at least one share. There also appears to be more cumulative growth after this time due to more compounding from the DRIP. Next we can simulate various scenarios of the stock growth to get a better idea of potential outcomes (with high volatility to see some more variation).
###Code
plt.figure(figsize=(18,6))
# make 20 simulations
for _ in range(20):
a = Account()
FTS = a.Stock(a, name='FTS.TO', shares=50, price=50.78, dividend=1.8, annual_growth=5, volatility='high', drip=True,
dividend_percent_growth=10)
values, prices, cash = FTS.compound(years=20)
plt.plot(x,prices);
plt.ylabel('Value ($)');
plt.xlabel('Years');
plt.title('Fortis Price')
plt.grid()
###Output
_____no_output_____
###Markdown
Annual or monthly deposits and withdrawals can be made, with new shares purchased or sold at a commission. Let's see how a $1000 annual deposit helps grow the account.
###Code
a = Account()
FTS = a.Stock(a, name='FTS.TO', shares=50, price=50.78, dividend=1.8, annual_growth=5, volatility='med', drip=True,
dividend_percent_growth=10)
values, prices, cash = FTS.compound(years=20, annual_deposit=1000)
ut.plot_growth(values, 'Stock Price', FTS.name)
plt.show()
###Output
_____no_output_____
###Markdown
If the user makes excessive withdrawals resulting in the account going broke, an error arises telling the user how long it took for the account to go broke. Here the user will go broke in 4 years with a $1000/year withdrawal.
###Code
a = Account()
FTS = a.Stock(a, name='FTS.TO', shares=50, price=50.78, dividend=1.8, annual_growth=5, volatility='med', drip=True,
dividend_percent_growth=10)
values, prices, cash = FTS.compound(years=20, annual_withdrawal=1000)
ut.plot_growth(values, 'Stock Price', FTS.name)
plt.show()
###Output
_____no_output_____ |
notebooks/Machine Code 001.ipynb | ###Markdown
Write A Function In Machine Code - Basically a String but more complicatedLinux memory == files?memory mapped file - https://man7.org/linux/man-pages/man2/mmap.2.htmlmemory with code in must be marked "executable"Code from https://github.com/tonysimpson/pointbreak >
###Code
import ctypes
import mmap
def create_callable_from_machine_code(machine_code, doc=None, restype=None, argtypes=None, use_errno=False, use_last_error=False):
    if argtypes is None:
        argtypes = []
    # Allocate an anonymous memory-mapped region that is readable, writable and executable
    exec_code = mmap.mmap(-1, len(machine_code), prot=mmap.PROT_WRITE | mmap.PROT_READ | mmap.PROT_EXEC)
    # Copy the raw machine code bytes into the executable mapping
    exec_code.write(machine_code)
    # Get the address of the mapped buffer via a ctypes view over it
    c_type = ctypes.c_byte * len(machine_code)
    c_var = c_type.from_buffer(exec_code)
    address = ctypes.addressof(c_var)
    # Wrap the address as a C function pointer with the requested signature
    c_func_factory = ctypes.CFUNCTYPE(restype, *argtypes, use_errno=use_errno, use_last_error=use_last_error)
    func = c_func_factory(address)
    func._exec_code = exec_code # prevent GC of code
    func.__doc__ = doc
    return func
###Output
_____no_output_____
###Markdown
Linux x86_64 calling convention
The calling convention of the System V AMD64 ABI is followed on GNU/Linux. The registers RDI, RSI, RDX, RCX, R8, and R9 are used for integer and memory address arguments and XMM0, XMM1, XMM2, XMM3, XMM4, XMM5, XMM6 and XMM7 are used for floating point arguments. For system calls, R10 is used instead of RCX. Additional arguments are passed on the stack and the return value is stored in RAX.
RET (https://www.felixcloutier.com/x86/RET) is a single byte: C3 in hex.
###Code
just_return = create_callable_from_machine_code(b'\xC3')
just_return()
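# A hedged sketch, not from the original notebook: per the System V convention above,
# the first two integer arguments arrive in RDI and RSI and the result is returned in RAX.
# Assumed byte encoding: 48 89 F8 = mov rax, rdi; 48 01 F0 = add rax, rsi; C3 = ret
# (the distorm3 decoder used below can be pointed at these bytes to double-check them).
add_code = b'\x48\x89\xF8\x48\x01\xF0\xC3'
add_two = create_callable_from_machine_code(add_code,
                                            doc='return the sum of two int64 arguments',
                                            restype=ctypes.c_int64,
                                            argtypes=[ctypes.c_int64, ctypes.c_int64])
print(add_two(2, 40))  # expected result: 42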
import distorm3
return_21_code = b'\x48\xB8\x15\x00\x00\x00\x00\x00\x00\x00\xC3'
for offset, length, assembler, hex in distorm3.Decode(0, return_21_code, distorm3.Decode64Bits):
print(f'{assembler}')
return_21 = create_callable_from_machine_code(return_21_code, restype=ctypes.c_int64)
return_21()
###Output
_____no_output_____ |
quantum_summarization.ipynb | ###Markdown
A multi-document summarization model as an integer programming problem (the McDonald model). We consider minimizing the following objective function.
**Objective function**
$$f(q) = -\sum_{i} r_{i}q_{i} + \lambda \sum_{i < j} s_{ij} q_{i}q_{j}$$
**Constraints**
$$\sum_{i} c_{i} q_{i} \leq K$$
$$\forall i, \forall j, q_{i} q_{j}-q_{i} \leq 0$$
$$\forall i, \forall j, q_{i} q_{j}-q_{j} \leq 0$$
$$\forall i, \forall j, q_{i}+q_{j}-q_{i} q_{j} \leq 1$$
$$\forall i, q_{i} \in\{0,1\}$$
**Variables**
$q_{i}$: 1 if sentence $i$ is included in the summary, 0 otherwise
$K$: the allowed length of the summary
$c_{i}$: the length of sentence $i$
$r_{i}$: the similarity between sentence $i$ and the whole input document
$s_{ij}$: the similarity between sentences $i$ and $j$
**QUBO formulation**
The constraint on the summary length is added as a penalty term.
$$f(q) = -\sum_{i} r_{i}q_{i} + \lambda_{1} \sum_{i < j} s_{ij} q_{i}q_{j} + \lambda_{2} \left(\sum_{i} c_{i}q_{i} - K \right)^2$$
Alternatively, we can leave $K$ out of the formulation and minimize the following instead: tune $\lambda_{2}$ and, as a post-processing step, select only solutions that satisfy $\sum_{i} c_{i} q_{i} \leq K$.
$$f(q) = -\sum_{i} r_{i}q_{i} + \lambda_{1} \sum_{i < j} s_{ij} q_{i}q_{j} + \lambda_{2} \sum_{i} c_{i}q_{i}$$
Since the document handled here contains many sentences, we adopt the latter formulation.
Experiment: below, we run an experiment on Steve Jobs' speech.
###Code
original_text = """I am honored to be with you today at your commencement from one of the finest universities in the world. I never graduated from college. Truth be told, this is the closest I've ever gotten to a college graduation. Today I want to tell you three stories from my life. That's it. No big deal. Just three stories.
The first story is about connecting the dots.
I dropped out of Reed College after the first 6 months, but then stayed around as a drop-in for another 18 months or so before I really quit. So why did I drop out?
It started before I was born. My biological mother was a young, unwed college graduate student, and she decided to put me up for adoption. She felt very strongly that I should be adopted by college graduates, so everything was all set for me to be adopted at birth by a lawyer and his wife. Except that when I popped out they decided at the last minute that they really wanted a girl. So my parents, who were on a waiting list, got a call in the middle of the night asking: "We have an unexpected baby boy; do you want him?" They said: "Of course." My biological mother later found out that my mother had never graduated from college and that my father had never graduated from high school. She refused to sign the final adoption papers. She only relented a few months later when my parents promised that I would someday go to college.
And 17 years later I did go to college. But I naively chose a college that was almost as expensive as Stanford, and all of my working-class parents' savings were being spent on my college tuition. After six months, I couldn't see the value in it. I had no idea what I wanted to do with my life and no idea how college was going to help me figure it out. And here I was spending all of the money my parents had saved their entire life. So I decided to drop out and trust that it would all work out OK. It was pretty scary at the time, but looking back it was one of the best decisions I ever made. The minute I dropped out I could stop taking the required classes that didn't interest me, and begin dropping in on the ones that looked interesting.
It wasn't all romantic. I didn't have a dorm room, so I slept on the floor in friends' rooms, I returned coke bottles for the 5 - deposits to buy food with, and I would walk the 7 miles across town every Sunday night to get one good meal a week at the Hare Krishna temple. I loved it. And much of what I stumbled into by following my curiosity and intuition turned out to be priceless later on. Let me give you one example:
Reed College at that time offered perhaps the best calligraphy instruction in the country. Throughout the campus every poster, every label on every drawer, was beautifully hand calligraphed. Because I had dropped out and didn't have to take the normal classes, I decided to take a calligraphy class to learn how to do this. I learned about serif and san serif typefaces, about varying the amount of space between different letter combinations, about what makes great typography great. It was beautiful, historical, artistically subtle in a way that science can't capture, and I found it fascinating.
None of this had even a hope of any practical application in my life. But ten years later, when we were designing the first Macintosh computer, it all came back to me. And we designed it all into the Mac. It was the first computer with beautiful typography. If I had never dropped in on that single course in college, the Mac would have never had multiple typefaces or proportionally spaced fonts. And since Windows just copied the Mac, it's likely that no personal computer would have them. If I had never dropped out, I would have never dropped in on this calligraphy class, and personal computers might not have the wonderful typography that they do. Of course it was impossible to connect the dots looking forward when I was in college. But it was very, very clear looking backwards ten years later.
Again, you can't connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something - your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.
My second story is about love and loss. I was lucky I found what I loved to do early in life. Woz and I started Apple in my parents garage when I was 20. We worked hard, and in 10 years Apple had grown from just the two of us in a garage into a $2 billion company with over 4000 employees. We had just released our finest creation -the Macintosh- a year earlier, and I had just turned 30. And then I got fired. How can you get fired from a company you started? Well, as Apple grew we hired someone who I thought was very talented to run the company with me, and for the first year or so things went well. But then our visions of the future began to diverge and eventually we had a falling out. When we did, our Board of Directors sided with him. So at 30 I was out. And very publicly out. What had been the focus of my entire adult life was gone, and it was devastating.
I really didn't know what to do for a few months. I felt that I had let the previous generation of entrepreneurs down - that I had dropped the baton as it was being passed to me. I met with David Packard and Bob Noyce and tried to apologize for screwing up so badly. I was a very public failure, and I even thought about running away from the valley. But something slowly began to dawn on me I still loved what I did. The turn of events at Apple had not changed that one bit. I had been rejected, but I was still in love. And so I decided to start over.
I didn't see it then, but it turned out that getting fired from Apple was the best thing that could have ever happened to me. The heaviness of being successful was replaced by the lightness of being a beginner again, less sure about everything. It freed me to enter one of the most creative periods of my life.
During the next five years, I started a company named NeXT, another company named Pixar, and fell in love with an amazing woman who would become my wife. Pixar went on to create the worlds first computer animated feature film, Toy Story, and is now the most successful animation studio in the world. In a remarkable turn of events, Apple bought NeXT, I returned to Apple, and the technology we developed at NeXT is at the heart of Apple's current renaissance. And Laurene and I have a wonderful family together.
I'm pretty sure none of this would have happened if I hadn't been fired from Apple. It was awful tasting medicine, but I guess the patient needed it. Sometimes life hits you in the head with a brick. Don't lose faith. I'm convinced that the only thing that kept me going was that I loved what I did. You've got to find what you love. And that is as true for your work as it is for your lovers. Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven't found it yet, keep looking. Don't settle. As with all matters of the heart, you'll know when you find it. And, like any great relationship, it just gets better and better as the years roll on. So keep looking until you find it. Don't settle.
My third story is about death. When I was 17, I read a quote that went something like: "If you live each day as if it was your last, someday you'll most certainly be right." It made an impression on me, and since then, for the past 33 years, I have looked in the mirror every morning and asked myself: "If today were the last day of my life, would I want to do what I am about to do today?" And whenever the answer has been "No" for too many days in a row, I know I need to change something.
Remembering that I'll be dead soon is the most important tool I've ever encountered to help me make the big choices in life. Because almost everything - all external expectations, all pride, all fear of embarrassment or failure - these things just fall away in the face of death, leaving only what is truly important. Remembering that you are going to die is the best way I know to avoid the trap of thinking you have something to lose. You are already naked. There is no reason not to follow your heart.
About a year ago I was diagnosed with cancer. I had a scan at 7:30 in the morning, and it clearly showed a tumor on my pancreas. I didn't even know what a pancreas was. The doctors told me this was almost certainly a type of cancer that is incurable, and that I should expect to live no longer than three to six months. My doctor advised me to go home and get my affairs in order, which is doctor's code for prepare to die. It means to try to tell your kids everything you thought you'd have the next 10 years to tell them in just a few months. It means to make sure everything is buttoned up so that it will be as easy as possible for your family. It means to say your goodbyes.
I lived with that diagnosis all day. Later that evening I had a biopsy, where they stuck an endoscope down my throat, through my stomach and into my intestines, put a needle into my pancreas and got a few cells from the tumor. I was sedated, but my wife, who was there, told me that when they viewed the cells under a microscope the doctors started crying because it turned out to be a very rare form of pancreatic cancer that is curable with surgery. I had the surgery and I'm fine now.
This was the closest I've been to facing death, and I hope it's the closest I get for a few more decades. Having lived through it, I can now say this to you with a bit more certainty than when death was a useful but purely intellectual concept:
No one wants to die. Even people who want to go to heaven don't want to die to get there. And yet death is the destination we all share. No one has ever escaped it. And that is as it should be, because Death is very likely the single best invention of Life. It is Life's change agent. It clears out the old to make way for the new. Right now the new is you, but someday not too long from now, you will gradually become the old and be cleared away. Sorry to be so dramatic, but it is quite true.
Your time is limited, so don't waste it living someone else's life. Don't be trapped by dogma -which is living with the results of other people's thinking. Don't let the noise of others' opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. They somehow already know what you truly want to become. Everything else is secondary.
When I was young, there was an amazing publication called The Whole Earth Catalog, which was one of the bibles of my generation. It was created by a fellow named Stewart Brand not far from here in Menlo Park, and he brought it to life with his poetic touch. This was in the late 1960's, before personal computers and desktop publishing, so it was all made with typewriters, scissors, and polaroid cameras. It was sort of like Google in paperback form, 35 years before Google came along: it was idealistic, and overflowing with neat tools and great notions.
Stewart and his team put out several issues of The Whole Earth Catalog, and then when it had run its course, they put out a final issue. It was the mid-1970s, and I was your age. On the back cover of their final issue was a photograph of an early morning country road, the kind you might find yourself hitchhiking on if you were so adventurous. Beneath it were the words: "Stay Hungry. Stay Foolish." It was their farewell message as they signed off. Stay Hungry. Stay Foolish. And I have always wished that for myself. And now, as you graduate to begin anew, I wish that for you.
Stay Hungry. Stay Foolish.
Thank you all very much.
"""
from nltk.tokenize import sent_tokenize
# import nltk
# nltk.download('punkt')
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Split the document into sentences
morphemes = sent_tokenize(original_text)
# Assign an integer id to each word
word2id = {}
for line in morphemes:
for word in line.split():
if word in word2id:
continue
word2id[word] = len(word2id)
# Bag of Words
bow_set = []
for line in morphemes:
bow = [0] * len(word2id)
for word in line.split():
try:
bow[word2id[word]] += 1
except:
pass
bow_set.append(bow)
def cos_sim(bow1, bow2):
""" コサイン類似度を求める """
narr_bow1 = np.array(bow1)
narr_bow2 = np.array(bow2)
return np.sum(narr_bow1 * narr_bow2) / (np.linalg.norm(narr_bow1) * np.linalg.norm(narr_bow2))
# The length of each sentence
sentence_lengths = [len(m) for m in morphemes]
# Similarity of each sentence to the document as a whole
bow_all = np.sum(bow_set, axis=0)
sim2all = [cos_sim(b, bow_all) for b in bow_set]
# Pairwise similarity between sentences
sim2each = [[cos_sim(b, c) if i < j else 0 for j, c in enumerate(bow_set)] for i, b in enumerate(bow_set)]
from pyqubo import *
def normalize(exp):
""" 各制約項を正規化する関数 """
qubo, offset = exp.compile().to_qubo()
max_coeff = abs(max(qubo.values()))
min_coeff = abs(min(qubo.values()))
norm = max_coeff if max_coeff - min_coeff > 0 else min_coeff
return exp / norm
num_var = len(morphemes)
q = Array.create(name='q', shape=num_var, vartype='BINARY')
H = - normalize(sum(sim2all[i] * q[i] for i in range(num_var))) \
+ Placeholder('sim') * normalize(sum(sim2each[i][j] * q[i] * q[j] for i in range(num_var) for j in range(i+1, num_var))) \
+ Placeholder('len') * normalize(sum(sentence_lengths[i] * q[i] for i in range(num_var)))
model = H.compile()
feed_dict = {'sim': 1.0, 'len': 0.5}
qubo, offset = model.to_qubo(feed_dict=feed_dict)
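# A hedged sketch (assumption, not part of the original notebook): the first QUBO in the
# markdown above keeps the length budget K inside the model as a squared penalty term.
# With pyqubo that alternative could be written as follows; K and the penalty weights are
# hypothetical values that would need tuning.
K = 700  # assumed length budget (characters)
H_hard = - normalize(sum(sim2all[i] * q[i] for i in range(num_var))) \
    + Placeholder('sim') * normalize(sum(sim2each[i][j] * q[i] * q[j] for i in range(num_var) for j in range(i+1, num_var))) \
    + Placeholder('len') * normalize((sum(sentence_lengths[i] * q[i] for i in range(num_var)) - K) ** 2)
qubo_hard, offset_hard = H_hard.compile().to_qubo(feed_dict={'sim': 1.0, 'len': 1.0})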
def show_qubo(qubo_, N):
import re
narr_qubo = np.zeros((N, N))
for pos, coeff in qubo_.items():
pos0 = int(re.search(r'.+\[([0-9]+)\]', pos[0]).group(1))
pos1 = int(re.search(r'.+\[([0-9]+)\]', pos[1]).group(1))
if pos0 < pos1:
narr_qubo[pos0][pos1] = coeff
else:
narr_qubo[pos1][pos0] = coeff
    # Plot the QUBO matrix
plt.imshow(narr_qubo, cmap="GnBu")
plt.colorbar()
plt.show()
show_qubo(qubo, num_var)
from dwave_qbsolv import QBSolv
response = QBSolv().sample_qubo(qubo, num_reads=10)
solution_list = []
summarized_text_list = []
for i, sample in enumerate(response.samples()):
solution_list.append([sample['q[{}]'.format(i)] for i in range(num_var)])
summarized_text_list.append(" ".join([morphemes[i] for i, s in enumerate(solution_list[-1]) if s == 1]))
print(summarized_text_list[0])
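# A hedged sketch (assumption, not part of the original notebook): as described in the
# markdown, the length budget K is enforced here as post-processing by keeping only the
# sampled summaries whose total character count stays within K.
K = 700  # assumed length budget (characters), as above
feasible_summaries = [text for sol, text in zip(solution_list, summarized_text_list)
                      if sum(sentence_lengths[i] for i, s in enumerate(sol) if s == 1) <= K]
print("{} of {} sampled summaries satisfy the length budget".format(len(feasible_summaries), len(summarized_text_list)))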
###Output
I never graduated from college. None of this had even a hope of any practical application in my life. Don't settle. My third story is about death. Even people who want to go to heaven don't want to die to get there. No one has ever escaped it. It was the mid-1970s, and I was your age. Beneath it were the words: "Stay Hungry. Stay Foolish. And I have always wished that for myself. Thank you all very much.
|
01_examples.ipynb | ###Markdown
Forest Cover Type dataset
###Code
BASE_DIR = Path.home().joinpath('data/tabnet')
datafile = BASE_DIR.joinpath('covtype.data.gz')
datafile.parent.mkdir(parents=True, exist_ok=True)
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz"
if not datafile.exists():
wget.download(url, datafile.as_posix())
target = "Covertype"
cat_names = [
"Wilderness_Area1", "Wilderness_Area2", "Wilderness_Area3",
"Wilderness_Area4", "Soil_Type1", "Soil_Type2", "Soil_Type3", "Soil_Type4",
"Soil_Type5", "Soil_Type6", "Soil_Type7", "Soil_Type8", "Soil_Type9",
"Soil_Type10", "Soil_Type11", "Soil_Type12", "Soil_Type13", "Soil_Type14",
"Soil_Type15", "Soil_Type16", "Soil_Type17", "Soil_Type18", "Soil_Type19",
"Soil_Type20", "Soil_Type21", "Soil_Type22", "Soil_Type23", "Soil_Type24",
"Soil_Type25", "Soil_Type26", "Soil_Type27", "Soil_Type28", "Soil_Type29",
"Soil_Type30", "Soil_Type31", "Soil_Type32", "Soil_Type33", "Soil_Type34",
"Soil_Type35", "Soil_Type36", "Soil_Type37", "Soil_Type38", "Soil_Type39",
"Soil_Type40"
]
cont_names = [
"Elevation", "Aspect", "Slope", "Horizontal_Distance_To_Hydrology",
"Vertical_Distance_To_Hydrology", "Horizontal_Distance_To_Roadways",
"Hillshade_9am", "Hillshade_Noon", "Hillshade_3pm",
"Horizontal_Distance_To_Fire_Points"
]
feature_columns = (
cont_names + cat_names + [target])
df = pd.read_csv(datafile, header=None, names=feature_columns)
df.head()
procs = [Categorify, FillMissing, Normalize]
splits = RandomSplitter(0.05)(range_of(df))
to = TabularPandas(df, procs, cat_names, cont_names, y_names=target, y_block = CategoryBlock(), splits=splits)
dls = to.dataloaders(bs=64*64*4)
model = TabNetModel(get_emb_sz(to), len(to.cont_names), dls.c, n_d=64, n_a=64, n_steps=5, virtual_batch_size=256)
opt_func = partial(Adam, wd=0.01, eps=1e-5)
learn = Learner(dls, model, CrossEntropyLossFlat(), opt_func=opt_func, lr=3e-2, metrics=[accuracy])
model.size()
learn.fit_one_cycle(10)
###Output
_____no_output_____
###Markdown
Poker hand https://www.kaggle.com/c/poker-rule-induction
###Code
BASE_DIR = Path.home().joinpath('data/tabnet/poker')
df = pd.read_csv(BASE_DIR.joinpath('train.csv'))
df.head()
cat_names = ['S1', 'S2', 'S3', 'S4', 'S5', 'C1', 'C2', 'C3', 'C4', 'C5']
cont_names = []
target = ['hand']
procs = [Categorify, Normalize]
splits = RandomSplitter(0.05)(range_of(df))
to = TabularPandas(df, procs, cat_names, cont_names, y_names=target, y_block = CategoryBlock(), splits=splits)
dls = to.dataloaders(bs=64*4)
model = TabNetModel(get_emb_sz(to), len(to.cont_names), dls.c, n_d=16, n_a=16,
n_steps=5, virtual_batch_size=256, gamma=1.5)
opt_func = partial(Adam, eps=1e-5)
learn = Learner(dls, model, CrossEntropyLossFlat(), opt_func=opt_func, lr=3e-2, metrics=[accuracy])
learn.lr_find()
learn.fit_one_cycle(1000)
###Output
_____no_output_____
###Markdown
Sarcos Robotics Arm Inverse Dynamics http://www.gaussianprocess.org/gpml/data/
###Code
BASE_DIR = Path.home().joinpath('data/tabnet/sarcos')
from mat4py import *
wget.download('http://www.gaussianprocess.org/gpml/data/sarcos_inv.mat', BASE_DIR.as_posix())
wget.download('http://www.gaussianprocess.org/gpml/data/sarcos_inv_test.mat', BASE_DIR.as_posix())
data = loadmat(BASE_DIR.joinpath('sarcos_inv.mat').as_posix())
df = pd.DataFrame(data['sarcos_inv'])
df.head()
data = loadmat(BASE_DIR.joinpath('sarcos_inv_test.mat').as_posix())
test_df = pd.DataFrame(data['sarcos_inv_test'])
splits = RandomSplitter()(df)
procs = [Normalize]
to = TabularPandas(df, procs, cat_names=[], cont_names=list(range(0,27)),
y_names=27, splits=splits, y_block=TransformBlock())
dls = to.dataloaders(bs=512)
model = TabNetModel([], len(to.cont_names), 1, n_d=64, n_a=64, n_steps=5, virtual_batch_size=256)
opt_func = partial(Adam, wd=0.01, eps=1e-5)
learn = Learner(dls, model, MSELossFlat(), opt_func=opt_func, lr=1e-2)
model.size()
learn.lr_find()
learn.fit_one_cycle(1000)
dl = learn.dls.test_dl(test_df)
learn.validate(dl=dl)
###Output
_____no_output_____
###Markdown
Higgs Boson
###Code
BASE_DIR = Path.home().joinpath('data/tabnet/higgs')
datafile = BASE_DIR.joinpath('HIGGS.csv.gz')
datafile.parent.mkdir(parents=True, exist_ok=True)
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz'
if not datafile.exists():
wget.download(url, datafile.as_posix())
feature_columns = ['label', 'lepton pT', 'lepton eta', 'lepton phi', 'missing energy magnitude',
'missing energy phi', 'jet 1 pt', 'jet 1 eta', 'jet 1 phi', 'jet 1 b-tag',
'jet 2 pt', 'jet 2 eta', 'jet 2 phi', 'jet 2 b-tag', 'jet 3 pt',
'jet 3 eta', 'jet 3 phi', 'jet 3 b-tag', 'jet 4 pt', 'jet 4 eta',
'jet 4 phi', 'jet 4 b-tag', 'm_jj', 'm_jjj', 'm_lv', 'm_jlv',
'm_bb', 'm_wbb', 'm_wwbb']
df = pd.read_csv(datafile, header=None, names=feature_columns)
df.head()
df['label'] = df['label'].astype(int)
df.replace({'label': {1.0:'yes',0.0:'no'}}, inplace=True)
splits = IndexSplitter(range(len(df)-500000, len(df)))(df)
procs = [Normalize]
to = TabularPandas(df, procs, cat_names=[], cont_names=feature_columns[1:],
y_names='label', splits=splits)
dls = to.dataloaders(bs=64*64*4)
model = TabNetModel([], len(to.cont_names), dls.c, n_d=64, n_a=64, n_steps=5, virtual_batch_size=256)
opt_func = partial(Adam, wd=0.01, eps=1e-5)
learn = Learner(dls, model, CrossEntropyLossFlat(), opt_func=opt_func, lr=3e-2, metrics=[accuracy])
model.size()
learn.lr_find()
learn.fit_one_cycle(10)
###Output
_____no_output_____ |
replication/replication_word-based_dedup.ipynb | ###Markdown
Load data files
###Code
import pandas as pd
from sklearn.metrics import *
binary = False
n_logits = 2 if binary else 3
for data_format in ['token', 'morph']:
train_df = pd.read_csv(f'../{data_format}_train.tsv', sep='\t', header=None, names=['comment', 'label'])
train_df['is_valid'] = False
test_df = pd.read_csv(f'../{data_format}_test.tsv', sep='\t', header=None, names=['comment', 'label'])
test_df['is_valid'] = True
df = pd.concat([train_df,test_df], sort=False)
df = df.drop_duplicates('comment')
if binary: df = df[df.label != 2]
train_df = df[df.is_valid == False].drop('is_valid', axis=1)
test_df = df[df.is_valid == True].drop('is_valid', axis=1)
train_df.to_csv(f'../{data_format}_train_clean.tsv', sep='\t', header=None, index=False)
test_df.to_csv(f'../{data_format}_test_clean.tsv', sep='\t', header=None, index=False)
import codecs
import re
from keras.utils.np_utils import to_categorical
import numpy as np
import warnings
warnings.filterwarnings('ignore')
def load_data(filename):
data = pd.read_csv(filename, sep='\t', header=None, names=['comment', 'label'])
x, y = data.comment, data.label
x = np.asarray(list(x))
    # Reduce any character sequence of more than 3 consecutive repetitions to a 3-character sequence
    # (e.g. “!!!!!!!!” turns to “!!!”)
# x = [re.sub(r'((.)\2{3,})', r'\2\2\2', i) for i in x]
y = to_categorical(y, n_logits)
return x, y
version = '_clean'
x_token_train, y_token_train = load_data(f'../token_train{version}.tsv')
x_token_test, y_token_test = load_data(f'../token_test{version}.tsv')
x_morph_train, y_morph_train = load_data(f'../morph_train{version}.tsv')
x_morph_test, y_morph_test = load_data(f'../morph_test{version}.tsv')
print('X token train shape: {}'.format(x_token_train.shape))
print('X token test shape: {}'.format(x_token_test.shape))
print('X morph train shape: {}'.format(x_morph_train.shape))
print('X morph test shape: {}'.format(x_morph_test.shape))
print(x_token_train[:5])
print(x_token_test[:5])
print(x_morph_train[:5])
print(x_morph_test[:5])
###Output
[' שמע ישראל , ה שם ישמור ו יקרא ה גורל = ( י.ק.ו.ק . ) אימרו אמן ל אבא ה שם של אנחנו ! ! ! ! אחרי ברכה של ביבי ! ה כח ב ה ישראל הוא מתי ש יש משמעת ו פרגמתיות ב משרדי ה חינוך ש זה איתן את ה אור ! ש מאוד חסר ל אנחנו ! , ו התאחדות ב אחד שלם , ו אין שמאל ו אין ימין ! ו ב ישראל נקודה חשובה היא , תעשיית כוח פרגמטיבית ! https://www.youtube.com/watch?v=_rKMXgPQSj8 . עוד מעת אהיה ראש חודש תעברו על ה תפילה של ה תיקון ה כללי ו תדליקו את ה נר ! '
'איחולי הצלחה ב תפקידך .'
' בוקר טוב ישראל בוקר טוב לכבוד נשיא מדינת ישראל . ״ אשרי ה עם ש נבחר אדם עשיר ב ענווה , יושרה ו דעת ״ מי ייתן ו תאחד את עמך ישראל . יישר כוח . עופר אלפסי מאילת . '
'איפה ה גינוי ? http://www.iba.org.il/bet/bet.aspx?type=1&entity=1023105'
'נכון ש מאחרת לברך ו נכון ש אני ה קטנה מ כולם אבל לא ה אחרונה לברך אני האמין את היא ב הצלחה רבה . ב אחדות ב עזרת ה שם ש בת שלום']
###Markdown
PrepareConvert text (train & test) to sequences and pad/truncate to the requested document length
###Code
from keras.preprocessing import text, sequence
def tokenizer(x_train, x_test, vocabulary_size, char_level):
tokenize = text.Tokenizer(num_words=vocabulary_size,
char_level=char_level,
filters='',
oov_token='UNK')
tokenize.fit_on_texts(x_train) # only fit on train
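    # fitting the vocabulary on the training split only keeps test-set words out of the index;
    # unseen test tokens are then mapped to the UNK id instead of leaking into training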
print('UNK index: {}'.format(tokenize.word_index['UNK']))
x_train = tokenize.texts_to_sequences(x_train)
x_test = tokenize.texts_to_sequences(x_test)
return x_train, x_test
def pad(x_train, x_test, max_document_length):
x_train = sequence.pad_sequences(x_train, maxlen=max_document_length, padding='post', truncating='post')
x_test = sequence.pad_sequences(x_test, maxlen=max_document_length, padding='post', truncating='post')
return x_train, x_test
vocabulary_size = 5000
x_token_train, x_token_test = tokenizer(x_token_train, x_token_test, vocabulary_size, False)
x_morph_train, x_morph_test = tokenizer(x_morph_train, x_morph_test, vocabulary_size, False)
max_document_length = 100
x_token_train, x_token_test = pad(x_token_train, x_token_test, max_document_length)
x_morph_train, x_morph_test = pad(x_morph_train, x_morph_test, max_document_length)
print('X token train shape: {}'.format(x_token_train.shape))
print('X token test shape: {}'.format(x_token_test.shape))
print('X morph train shape: {}'.format(x_morph_train.shape))
print('X morph test shape: {}'.format(x_morph_test.shape))
###Output
UNK index: 1
UNK index: 1
X token train shape: (7488, 100)
X token test shape: (1131, 100)
X morph train shape: (7481, 100)
X morph test shape: (1132, 100)
###Markdown
Plot function
###Code
import matplotlib.pyplot as plt
def plot_loss_and_accuracy(history):
fig, axs = plt.subplots(1, 2, sharex=True)
axs[0].plot(history.history['loss'])
axs[0].plot(history.history['val_loss'])
axs[0].set_title('Model Loss')
axs[0].legend(['Train', 'Validation'], loc='upper left')
axs[1].plot(history.history['accuracy'])
axs[1].plot(history.history['val_accuracy'])
axs[1].set_title('Model Accuracy')
axs[1].legend(['Train', 'Validation'], loc='upper left')
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Import required modules from Keras
###Code
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Concatenate
from keras.layers import Embedding
from keras.layers import LSTM, Bidirectional
from keras.layers.convolutional import Conv1D
from keras.layers.pooling import MaxPool1D
from keras.layers import BatchNormalization
from keras import optimizers
from keras import metrics
from keras import backend as K
###Output
_____no_output_____
###Markdown
Default Parameters
###Code
dropout_keep_prob = 0.5
embedding_size = 300
batch_size = 50
lr = 1e-4
dev_size = 0.2
loss = 'categorical_crossentropy'
###Output
_____no_output_____
###Markdown
Linear - Token
###Code
num_epochs = 10
# Create new TF graph
K.clear_session()
# Construct model
text_input = Input(shape=(max_document_length,))
x = Dense(100)(text_input)
preds = Dense(n_logits, activation='softmax')(x)
model = Model(text_input, preds)
adam = optimizers.Adam(lr=lr)
model.compile(loss=loss,
optimizer=adam,
metrics=['accuracy'])
# Train the model
history = model.fit(x_token_train, y_token_train,
batch_size=batch_size,
epochs=num_epochs,
verbose=1,
validation_split=dev_size)
# Plot training accuracy and loss
plot_loss_and_accuracy(history)
# Evaluate the model
scores = model.evaluate(x_token_test, y_token_test,
batch_size=batch_size, verbose=1)
print('\nAccuracy: {:.4f}'.format(scores[1]))
# Save the model
#model.save('word_saved_models/Linear-Token-{:.3f}.h5'.format((scores[1] * 100)))
preds = model.predict(x_token_test, batch_size=batch_size, verbose=1)
preds = preds.argmax(1)
ys = y_token_test.argmax(1)
accuracy_score(ys, preds), f1_score(ys, preds, average='macro'), matthews_corrcoef(ys, preds)
###Output
1131/1131 [==============================] - 0s 10us/step
###Markdown
Linear - Morph
###Code
# Create new TF graph
K.clear_session()
# Construct model
text_input = Input(shape=(max_document_length,))
x = Dense(100)(text_input)
preds = Dense(n_logits, activation='softmax')(x)
model = Model(text_input, preds)
adam = optimizers.Adam(lr=lr)
model.compile(loss=loss,
optimizer=adam,
metrics=['accuracy'])
# Train the model
history = model.fit(x_morph_train, y_morph_train,
batch_size=batch_size,
epochs=num_epochs,
verbose=1,
validation_split=dev_size)
# Plot training accuracy and loss
plot_loss_and_accuracy(history)
# Evaluate the model
scores = model.evaluate(x_morph_test, y_morph_test,
batch_size=batch_size, verbose=1)
print('\nAccuracy: {:.4f}'.format(scores[1]))
# Save the model
# model.save('word_saved_models/Linear-Morph-{:.3f}.h5'.format((scores[1] * 100)))
preds = model.predict(x_morph_test, batch_size=batch_size, verbose=1)
preds = preds.argmax(1)
ys = y_morph_test.argmax(1)
accuracy_score(ys, preds), f1_score(ys, preds, average='macro'), matthews_corrcoef(ys, preds)
###Output
1132/1132 [==============================] - 0s 10us/step
###Markdown
CNN - Token
###Code
num_epochs = 5
# Create new TF graph
K.clear_session()
# Construct model
convs = []
text_input = Input(shape=(max_document_length,))
x = Embedding(vocabulary_size, embedding_size)(text_input)
for fsz in [3, 8]:
conv = Conv1D(128, fsz, padding='valid', activation='relu')(x)
pool = MaxPool1D()(conv)
convs.append(pool)
x = Concatenate(axis=1)(convs)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
preds = Dense(n_logits, activation='softmax')(x)
model = Model(text_input, preds)
adam = optimizers.Adam(lr=lr)
model.compile(loss=loss,
optimizer=adam,
metrics=['accuracy'])
# Train the model
history = model.fit(x_token_train, y_token_train,
batch_size=batch_size,
epochs=num_epochs,
verbose=1,
validation_split=dev_size)
# Plot training accuracy and loss
plot_loss_and_accuracy(history)
# Evaluate the model
scores = model.evaluate(x_token_test, y_token_test,
batch_size=batch_size, verbose=1)
print('\nAccuracy: {:.3f}'.format(scores[1]))
# Save the model
# model.save('word_saved_models/CNN-Token-{:.3f}.h5'.format((scores[1] * 100)))
preds = model.predict(x_token_test, batch_size=batch_size, verbose=1)
preds = preds.argmax(1)
ys = y_token_test.argmax(1)
accuracy_score(ys, preds), f1_score(ys, preds, average='macro'), matthews_corrcoef(ys, preds)
###Output
1131/1131 [==============================] - 0s 64us/step
###Markdown
CNN - Morph
###Code
# Create new TF graph
K.clear_session()
# Construct model
convs = []
text_input = Input(shape=(max_document_length,))
x = Embedding(vocabulary_size, embedding_size)(text_input)
for fsz in [3, 8]:
conv = Conv1D(128, fsz, padding='valid', activation='relu')(x)
pool = MaxPool1D()(conv)
convs.append(pool)
x = Concatenate(axis=1)(convs)
x = Flatten()(x)
x = Dense(128, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
preds = Dense(n_logits, activation='softmax')(x)
model = Model(text_input, preds)
adam = optimizers.Adam(lr=lr)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
# Train the model
history = model.fit(x_morph_train, y_morph_train,
batch_size=batch_size,
epochs=num_epochs,
verbose=1,
validation_split=dev_size)
# Plot training accuracy and loss
plot_loss_and_accuracy(history)
# Evaluate the model
scores = model.evaluate(x_morph_test, y_morph_test,
batch_size=batch_size, verbose=1)
print('\nAccuracy: {:.4f}'.format(scores[1]))
# Save the model
# model.save('word_saved_models/CNN-Morph-{:.3f}.h5'.format((scores[1] * 100)))
preds = model.predict(x_morph_test, batch_size=batch_size, verbose=1)
preds = preds.argmax(1)
ys = y_morph_test.argmax(1)
accuracy_score(ys, preds), f1_score(ys, preds, average='macro'), matthews_corrcoef(ys, preds)
###Output
1132/1132 [==============================] - 0s 62us/step
###Markdown
LSTM - Token
###Code
num_epochs = 7
lstm_units = 93
# Create new TF graph
K.clear_session()
# Construct model
text_input = Input(shape=(max_document_length,))
x = Embedding(vocabulary_size, embedding_size)(text_input)
x = LSTM(units=lstm_units, return_sequences=True)(x)
x = LSTM(units=lstm_units)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
preds = Dense(n_logits, activation='softmax')(x)
model = Model(text_input, preds)
adam = optimizers.Adam(lr=lr)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
# Train the model
history = model.fit(x_token_train, y_token_train,
batch_size=batch_size,
epochs=num_epochs,
verbose=1,
validation_split=dev_size)
# Plot training accuracy and loss
plot_loss_and_accuracy(history)
# Evaluate the model
scores = model.evaluate(x_token_test, y_token_test,
batch_size=batch_size, verbose=1)
print('\nAccuracy: {:.3f}'.format(scores[1]))
# Save the model
# model.save('word_saved_models/LSTM-Token-{:.3f}.h5'.format((scores[1] * 100)))
preds = model.predict(x_token_test, batch_size=batch_size, verbose=1)
preds = preds.argmax(1)
ys = y_token_test.argmax(1)
accuracy_score(ys, preds), f1_score(ys, preds, average='macro'), matthews_corrcoef(ys, preds)
###Output
1131/1131 [==============================] - 0s 427us/step
###Markdown
LSTM - Morph
###Code
num_epochs = 7
# Create new TF graph
K.clear_session()
# Construct model
text_input = Input(shape=(max_document_length,))
x = Embedding(vocabulary_size, embedding_size)(text_input)
x = LSTM(units=lstm_units, return_sequences=True)(x)
x = LSTM(units=lstm_units)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
preds = Dense(n_logits, activation='softmax')(x)
model = Model(text_input, preds)
adam = optimizers.Adam(lr=lr)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
# Train the model
history = model.fit(x_morph_train, y_morph_train,
batch_size=batch_size,
epochs=num_epochs,
verbose=1,
validation_split=dev_size)
# Plot training accuracy and loss
plot_loss_and_accuracy(history)
# Evaluate the model
scores = model.evaluate(x_morph_test, y_morph_test,
batch_size=batch_size, verbose=1)
print('\nAccuracy: {:.3f}'.format(scores[1]))
# Save the model
# model.save('word_saved_models/LSTM-Morph-{:.3f}.h5'.format((scores[1] * 100)))
preds = model.predict(x_morph_test, batch_size=batch_size, verbose=1)
preds = preds.argmax(1)
ys = y_morph_test.argmax(1)
accuracy_score(ys, preds), f1_score(ys, preds, average='macro'), matthews_corrcoef(ys, preds)
###Output
1132/1132 [==============================] - 0s 425us/step
###Markdown
BiLSTM - Token
###Code
num_epochs = 3
# Create new TF graph
K.clear_session()
# Construct model
text_input = Input(shape=(max_document_length,))
x = Embedding(vocabulary_size, embedding_size)(text_input)
x = Bidirectional(LSTM(units=lstm_units, return_sequences=True))(x)
x = Bidirectional(LSTM(units=lstm_units))(x)
x = Dense(128, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
preds = Dense(n_logits, activation='softmax')(x)
model = Model(text_input, preds)
adam = optimizers.Adam(lr=lr)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
# Train the model
history = model.fit(x_token_train, y_token_train,
batch_size=batch_size,
epochs=num_epochs,
verbose=1,
validation_split=dev_size)
# Plot training accuracy and loss
plot_loss_and_accuracy(history)
# Evaluate the model
scores = model.evaluate(x_token_test, y_token_test,
batch_size=batch_size, verbose=1)
print('\nAccuracy: {:.3f}'.format(scores[1]))
# Save the model
# model.save('word_saved_models/BiLSTM-Token-{:.3f}.h5'.format((scores[1] * 100)))
preds = model.predict(x_token_test, batch_size=batch_size, verbose=1)
preds = preds.argmax(1)
ys = y_token_test.argmax(1)
accuracy_score(ys, preds), f1_score(ys, preds, average='macro'), matthews_corrcoef(ys, preds)
###Output
1131/1131 [==============================] - 1s 806us/step
###Markdown
BiLSTM - Morph
###Code
# Create new TF graph
K.clear_session()
# Construct model
text_input = Input(shape=(max_document_length,))
x = Embedding(vocabulary_size, embedding_size)(text_input)
x = Bidirectional(LSTM(units=lstm_units, return_sequences=True))(x)
x = Bidirectional(LSTM(units=lstm_units))(x)
x = Dense(128, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
preds = Dense(n_logits, activation='softmax')(x)
model = Model(text_input, preds)
adam = optimizers.Adam(lr=lr)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
# Train the model
history = model.fit(x_morph_train, y_morph_train,
batch_size=batch_size,
epochs=num_epochs,
verbose=1,
validation_split=dev_size)
# Plot training accuracy and loss
plot_loss_and_accuracy(history)
# Evaluate the model
scores = model.evaluate(x_morph_test, y_morph_test,
batch_size=batch_size, verbose=1)
print('\nAccuracy: {:.3f}'.format(scores[1]))
# Save the model
# model.save('word_saved_models/BiLSTM-Morph-{:.3f}.h5'.format((scores[1] * 100)))
preds = model.predict(x_morph_test, batch_size=batch_size, verbose=1)
preds = preds.argmax(1)
ys = y_morph_test.argmax(1)
accuracy_score(ys, preds), f1_score(ys, preds, average='macro'), matthews_corrcoef(ys, preds)
###Output
1132/1132 [==============================] - 1s 808us/step
###Markdown
MLP - Token
###Code
num_epochs = 6
# Create new TF graph
K.clear_session()
# Construct model
text_input = Input(shape=(max_document_length,))
x = Embedding(vocabulary_size, embedding_size)(text_input)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
preds = Dense(n_logits, activation='softmax')(x)
model = Model(text_input, preds)
adam = optimizers.Adam(lr=lr)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
# Train the model
history = model.fit(x_token_train, y_token_train,
batch_size=batch_size,
epochs=num_epochs,
verbose=1,
validation_split=dev_size)
# Plot training accuracy and loss
plot_loss_and_accuracy(history)
# Evaluate the model
scores = model.evaluate(x_token_test, y_token_test,
batch_size=batch_size, verbose=1)
print('\nAccuracy: {:.3f}'.format(scores[1]))
# Save the model
# model.save('word_saved_models/MLP-Token-{:.3f}.h5'.format((scores[1] * 100)))
preds = model.predict(x_token_test, batch_size=batch_size, verbose=1)
preds = preds.argmax(1)
ys = y_token_test.argmax(1)
accuracy_score(ys, preds), f1_score(ys, preds, average='macro'), matthews_corrcoef(ys, preds)
###Output
1131/1131 [==============================] - 0s 47us/step
###Markdown
MLP - Morph
###Code
# Create new TF graph
K.clear_session()
# Construct model
text_input = Input(shape=(max_document_length,))
x = Embedding(vocabulary_size, embedding_size)(text_input)
x = Flatten()(x)
x = Dense(256, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
x = Dense(64, activation='relu')(x)
x = Dropout(dropout_keep_prob)(x)
preds = Dense(n_logits, activation='softmax')(x)
model = Model(text_input, preds)
adam = optimizers.Adam(lr=lr)
model.compile(loss='categorical_crossentropy',
optimizer=adam,
metrics=['accuracy'])
# Train the model
history = model.fit(x_morph_train, y_morph_train,
batch_size=batch_size,
epochs=num_epochs,
verbose=1,
validation_split=dev_size)
# Plot training accuracy and loss
plot_loss_and_accuracy(history)
# Evaluate the model
scores = model.evaluate(x_morph_test, y_morph_test,
batch_size=batch_size, verbose=1)
print('\nAccuracy: {:.3f}'.format(scores[1]))
# Save the model
# model.save('word_saved_models/MLP-Morph-{:.3f}.h5'.format((scores[1] * 100)))
preds = model.predict(x_morph_test, batch_size=batch_size, verbose=1)
preds = preds.argmax(1)
ys = y_morph_test.argmax(1)
accuracy_score(ys, preds), f1_score(ys, preds, average='macro'), matthews_corrcoef(ys, preds)
###Output
1132/1132 [==============================] - 0s 47us/step
|
notebooks/.ipynb_checkpoints/220180415_split_data_into_chunks-checkpoint.ipynb | ###Markdown
Split data files into chunks ordered by days, weeks, ...
###Code
import numpy as np
from datetime import datetime
from quickndirtybot import io
filename_in = '../data/gdax_data'
# split by conversion: pick one of the strftime patterns below
# splitby = '_%Y-%m-%d'  # daily chunks
splitby = '_%Y-%U'  # weekly chunks (the pattern actually used below)
data = io.load_csv(filename_in + '.csv')
# get datetimes from timestamps
time = [datetime.fromtimestamp(x/1000) for x in data[:, 0]]
categories = [ti.strftime(splitby) for ti in time]
lastidx = 0
for i, (line, lastline) in enumerate(zip(categories[1:], categories[:-1])):
if line != lastline:
        io.save_csv(data[lastidx:i+1, :], filename_in+lastline+'.csv')  # i+1 keeps row i, the last row of the finished chunk
lastidx = i+1
io.save_csv(data[lastidx:, :], filename_in+line+'.csv')
###Output
_____no_output_____ |
Analysis/7-Testing_DDM_exp2.ipynb | ###Markdown
Recovering successful fit
###Code
fit1 = []
for f in os.listdir("DDM/Fits/ModelSelection/"):
if "Exp2" in f:
if "M6" in f:
print(f)
fit1.append(hddm.load("DDM/Fits/ModelSelection/%s"%f))
fit1 = kabuki.utils.concat_models(fit1)
###Output
Exp2_depends_M6_23
Exp2_depends_M6_22
Exp2_depends_M6_20
Exp2_depends_M6_21
###Markdown
QPplot Every cell in this section takes an extremely long time and requires at least 16GB of RAM...
###Code
ppc_data = hddm.utils.post_pred_gen(fit1)
ppc_data.to_csv('DDM/simulated_data_exp2.csv')
gen_data = pd.read_csv('DDM/simulated_data_exp2.csv')
gen_data.reset_index(inplace=True)
gen_data.rt = np.abs(gen_data.rt)*1000
gen_data[["condition","Con1","contraste","expdResp","participant"]] = gen_data['node'].str.split('.', expand=True)
gen_data.drop(['index','node','Con1'], axis=1, inplace=True)
gen_data.contraste = [float("0."+x) for x in gen_data.contraste]
gen_data.expdResp = gen_data.apply(lambda x: "Left" if x["expdResp"]=="0)" else "Right", axis=1)
gen_data.condition = [x[5:] for x in gen_data.condition]
gen_data["givenResp"] = gen_data.apply(lambda x: "Left" if x["response"]==0 else "Right", axis=1)
gen_data.response = gen_data.apply(lambda x: 1 if x["givenResp"]==x["expdResp"] else 0, axis=1)
df2 = data[data.exp == 2]
fig, ax = plt.subplots(2,1, figsize=[5,7], dpi=300)
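# Quantile-probability (QP) plot: for each SAT condition and contrast level, the x-axis shows the
# proportion of each response type and the y-axis its RT quantiles; observed data are drawn as
# 'x' marks while posterior-predictive simulations appear as grey dots.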
for SAT, SAT_dat in df2.groupby('condition'):
Prec, RTQuantiles, subject, contrast = [],[],[],[]
meanPrec, meanRT = [],[]
synmeanPrec, synmeanRT, samp_idx = [],[],[]
for con, con_dat in SAT_dat.groupby("contraste"):
for corr, corr_dat in con_dat.groupby("response"):
meanPrec.append(float(len(corr_dat.response))/len(con_dat))
corr_dat["quantile"] = pd.qcut(corr_dat.rt, 5)
corr_dat["quantile"].replace(corr_dat["quantile"].unique().sort_values(), corr_dat.groupby("quantile").mean().rt.values)
mean_quantiles = []
for quant, quant_dat in corr_dat.groupby("quantile"):
mean_quantiles.append(quant_dat.rt.mean())
meanRT.append(mean_quantiles)
for i in np.arange(250): #Using samples from the synthetic data
syn = gen_data[(gen_data["sample"] == i) & (gen_data["condition"] == SAT) & \
(gen_data["contraste"] == con)]
corr_syn = syn[syn.response == corr].copy()
corr_syn["quantile"] = pd.qcut(corr_syn.rt, 5)
corr_syn["quantile"].replace(corr_syn["quantile"].unique().sort_values(), corr_syn.groupby("quantile").mean().rt.values)
synmeanPrec.append(float(len(corr_syn.response))/len(syn))
mean_quantiles = []
for quant, quant_dat in corr_syn.groupby("quantile"):
mean_quantiles.append(quant_dat.rt.mean())
synmeanRT.append(mean_quantiles)
samp_idx.append(i)
QPdf = pd.DataFrame([meanRT, meanPrec, contrast]).T
QPdf.columns=["RTQuantiles","Precision","contrast"]
QPdf = QPdf.sort_values(by="Precision")
synQPdf = pd.DataFrame([synmeanRT, synmeanPrec, samp_idx]).T
synQPdf.columns=["RTQuantiles","Precision","sample"]
synQPdf = synQPdf.sort_values(by="Precision")
color = ['#999999','#777777', '#555555','#333333','#111111']
x = [x for x in QPdf["Precision"].values]
y = [y for y in QPdf["RTQuantiles"].values]
if SAT =="Accuracy":
curax = ax[0]
else:
curax = ax[1]
for _x, _y in zip( x, y):
n = 0
for xp, yp in zip([_x] * len(_y), _y):
n += 1
curax.scatter([xp],[yp], marker=None, s = 0.0001)
curax.text(xp-.01, yp-10, 'x', fontsize=12, color=color[n-1])#substracted values correct text offset
for samp, samp_dat in synQPdf.groupby("sample"):
curax.plot( [i for i in samp_dat["Precision"].values], [j for j in samp_dat["RTQuantiles"].values],'.',
color='gray', markerfacecolor="w", markeredgecolor="gray", alpha=.2)
curax.set_xlabel("Response proportion")
curax.set_ylabel("RT quantiles (ms)")
curax.set_xlim(0,1)
curax.vlines(.5,0,2000,linestyle=':')
if SAT == "Accuracy":
curax.set_ylim([250, 1300])
else :
curax.set_ylim([200, 800])
plt.tight_layout()
plt.savefig('DDM/QPplot_exp2.png')
plt.show()
###Output
_____no_output_____
###Markdown
Printing parameter summary table
###Code
stats = fit1.gen_stats()
table = stats[stats.apply(lambda row: False if "subj" in row.name else (False if "std" in row.name else True), axis=1)][["mean", '2.5q', '97.5q']].T
#col_names = [r"$a$ Acc", r"$a$ Spd", r"$v$ Acc 1", r"$v$ Acc 3", r"$v$ Acc 4", r"$v$ Spd 1",
# r"$v$ Spd 3", r"$v$ Spd 4", r"$T_{er}$ Acc", r"$T_{er}$ Spd ",
# r"$sv$", r"$sz$ Acc", r"$sz$ Spd", r"$st$", r"$z$"]
#table.columns = col_names
table = np.round(table, decimals=2)
print(table)#.to_latex())
traces = fit1.get_traces()
fig, ax = plt.subplots(1,4, figsize=(15,3), dpi=300)
traces["a(Accuracy)"].plot(kind='density', ax=ax[0], color='k', label="Accuracy")
traces["a(Speed)"].plot(kind='density', ax=ax[0], color="gray", label="Speed")
ax[0].set_xlabel(r'$a$ values')
ax[0].set_xlim(0.6, 1.41)
traces["t(Accuracy)"].plot(kind='density', ax=ax[1], color='k', label='_nolegend_')
traces["t(Speed)"].plot(kind='density', ax=ax[1], color="gray", label='_nolegend_')
ax[1].set_xlabel(r'$T_{er}$ values')
ax[1].set_ylabel('')
ax[1].set_xlim(0.225, 0.375)
traces["sz(Accuracy)"].plot(kind='density', ax=ax[2], color='k', label='_nolegend_')
traces["sz(Speed)"].plot(kind='density', ax=ax[2], color="gray", label='_nolegend_')
ax[2].set_xlabel(r'$s_z$ values')
ax[2].set_ylabel('')
ax[2].set_xlim(-0.05, 0.78)
traces["v(Accuracy.0.01)"].plot(kind='density', ax=ax[3], color='k', label="1")
traces["v(Accuracy.0.07)"].plot(kind='density', ax=ax[3], color='k', ls="-.", label="2")
traces["v(Accuracy.0.15)"].plot(kind='density', ax=ax[3], color='k', ls="--", label="3")
traces["v(Speed.0.01)"].plot(kind='density', ax=ax[3], color='gray', label='_nolegend_')
traces["v(Speed.0.07)"].plot(kind='density', ax=ax[3], color='gray', ls="-.", label='_nolegend_')
traces["v(Speed.0.15)"].plot(kind='density', ax=ax[3], color='gray', ls="--", label='_nolegend_')
ax[3].set_xlabel(r'$v$ values')
ax[3].set_ylabel('')
ax[3].set_xlim(-2, 6)
plt.tight_layout()
plt.savefig("../Manuscript/plots/DDMpar2.png")
plt.show()
###Output
_____no_output_____
###Markdown
Estimating the effect size of SAT by subtracting traces
###Code
print(np.mean(traces["t(Accuracy)"] - traces["t(Speed)"]))
print(np.percentile(traces["t(Accuracy)"] - traces["t(Speed)"], 2.5))
print(np.percentile(traces["t(Accuracy)"] - traces["t(Speed)"], 97.5))
###Output
0.04242462999793254
0.0003373945267390908
0.08447671394826704
###Markdown
Regressing MT over the Ter parameter across participants Computing plausible values
###Code
corr_Acc, corr_Spd, Ters_Acc, Ters_Spd = [],[],[],[]
traces = fit1.get_traces()
mts = data[data.exp==2].groupby(['condition','participant']).mt.mean().values#same index as below
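# "Plausible values" approach: for every posterior sample we correlate the subject-level Ter
# estimates with each participant's mean MT, yielding a posterior distribution of the Ter-MT
# correlation rather than a single point estimate.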
for iteration in traces.iterrows():
Ter_Acc = iteration[1][['t_subj' in s for s in iteration[1].index]][:16]
Ter_Spd = iteration[1][['t_subj' in s for s in iteration[1].index]][16:]
corr_Acc.append(np.corrcoef(Ter_Acc, mts[:16])[0,1])
corr_Spd.append(np.corrcoef(Ter_Spd, mts[16:])[0,1])
Ters_Acc.append(Ter_Acc*1000)
Ters_Spd.append(Ter_Spd*1000)
###Output
_____no_output_____
###Markdown
Plotting raw data
###Code
plt.errorbar(x=mts[:16], y=np.mean(Ters_Acc, axis=0), yerr=np.abs([np.mean(Ters_Acc, axis=0),np.mean(Ters_Acc, axis=0)] - np.asarray((np.percentile(Ters_Acc, 97.5, axis=0),np.percentile(Ters_Acc, 2.5, axis=0)))),fmt='o')
plt.errorbar(x=mts[16:], y=np.mean(Ters_Spd, axis=0), yerr=np.abs([np.mean(Ters_Spd, axis=0),np.mean(Ters_Spd, axis=0)] - np.asarray((np.percentile(Ters_Spd, 97.5, axis=0),np.percentile(Ters_Spd, 2.5, axis=0)))),fmt='o')
#plt.savefig('testexp1.png')
plt.show()
###Output
_____no_output_____
###Markdown
Plotting plausible value distribution
###Code
plt.hist(corr_Acc)
plt.hist(corr_Spd)
###Output
_____no_output_____
###Markdown
Taking the code for plausible population correlation from the DMC package (Heathcote, Lin, Reynolds, Strickland, Gretton and Matzke, 2019)
###Code
%%R
### Plausible values ----
posteriorRho <- function(r, n, npoints=100, kappa=1)
# Code provided by Dora Matzke, March 2016, from Alexander Ly
  # Reformatted into a single function. kappa=1 implies uniform prior.
# Picks smart grid of npoints points concentrating around the density peak.
# Returns approxfun for the unnormalized density.
{
.bf10Exact <- function(n, r, kappa=1) {
# Ly et al 2015
# This is the exact result with symmetric beta prior on rho
# with parameter alpha. If kappa = 1 then uniform prior on rho
#
if (n <= 2){
return(1)
} else if (any(is.na(r))){
return(NaN)
}
# TODO: use which
check.r <- abs(r) >= 1 # check whether |r| >= 1
if (kappa >= 1 && n > 2 && check.r) {
return(Inf)
}
log.hyper.term <- log(hypergeo::genhypergeo(U=c((n-1)/2, (n-1)/2),
L=((n+2/kappa)/2), z=r^2))
log.result <- log(2^(1-2/kappa))+0.5*log(pi)-lbeta(1/kappa, 1/kappa)+
lgamma((n+2/kappa-1)/2)-lgamma((n+2/kappa)/2)+log.hyper.term
real.result <- exp(Re(log.result))
return(real.result)
}
.jeffreysApproxH <- function(n, r, rho) {
result <- ((1 - rho^(2))^(0.5*(n - 1)))/((1 - rho*r)^(n - 1 - 0.5))
return(result)
}
.bf10JeffreysIntegrate <- function(n, r, kappa=1) {
# Jeffreys' test for whether a correlation is zero or not
# Jeffreys (1961), pp. 289-292
# This is the exact result, see EJ
##
if (n <= 2){
return(1)
} else if ( any(is.na(r)) ){
return(NaN)
}
# TODO: use which
if (n > 2 && abs(r)==1) {
return(Inf)
}
hyper.term <- Re(hypergeo::genhypergeo(U=c((2*n-3)/4, (2*n-1)/4), L=(n+2/kappa)/2, z=r^2))
log.term <- lgamma((n+2/kappa-1)/2)-lgamma((n+2/kappa)/2)-lbeta(1/kappa, 1/kappa)
result <- sqrt(pi)*2^(1-2/kappa)*exp(log.term)*hyper.term
return(result)
}
# 1.0. Built-up for likelihood functions
.aFunction <- function(n, r, rho) {
#hyper.term <- Re(hypergeo::hypergeo(((n-1)/2), ((n-1)/2), (1/2), (r*rho)^2))
hyper.term <- Re(hypergeo::genhypergeo(U=c((n-1)/2, (n-1)/2), L=(1/2), z=(r*rho)^2))
result <- (1-rho^2)^((n-1)/2)*hyper.term
return(result)
}
.bFunction <- function(n, r, rho) {
#hyper.term.1 <- Re(hypergeo::hypergeo((n/2), (n/2), (1/2), (r*rho)^2))
#hyper.term.2 <- Re(hypergeo::hypergeo((n/2), (n/2), (-1/2), (r*rho)^2))
#hyper.term.1 <- Re(hypergeo::genhypergeo(U=c(n/2, n/2), L=(1/2), z=(r*rho)^2))
#hyper.term.2 <- Re(hypergeo::genhypergeo(U=c(n/2, n/2), L=(-1/2), z=(r*rho)^2))
#result <- 2^(-1)*(1-rho^2)^((n-1)/2)*exp(log.term)*
# ((1-2*n*(r*rho)^2)/(r*rho)*hyper.term.1-(1-(r*rho)^2)/(r*rho)*hyper.term.2)
#
hyper.term <- Re(hypergeo::genhypergeo(U=c(n/2, n/2), L=(3/2), z=(r*rho)^2))
log.term <- 2*(lgamma(n/2)-lgamma((n-1)/2))+((n-1)/2)*log(1-rho^2)
result <- 2*r*rho*exp(log.term)*hyper.term
return(result)
}
.hFunction <- function(n, r, rho) {
result <- .aFunction(n, r, rho) + .bFunction(n, r, rho)
return(result)
}
.scaledBeta <- function(rho, alpha, beta){
result <- 1/2*dbeta((rho+1)/2, alpha, beta)
return(result)
}
.priorRho <- function(rho, kappa=1) {
.scaledBeta(rho, 1/kappa, 1/kappa)
}
fisherZ <- function(r) log((1+r)/(1-r))/2
inv.fisherZ <- function(z) {K <- exp(2*z); (K-1)/(K+1)}
# Main body
# Values spaced around mode
qs <- qlogis(seq(0,1,length.out=npoints+2)[-c(1,npoints+2)])
rho <- c(-1,inv.fisherZ(fisherZ(r)+qs/sqrt(n)),1)
# Get heights
if (!is.na(r) && !r==0) {
d <- .bf10Exact(n, r, kappa)*.hFunction(n, r, rho)*.priorRho(rho, kappa)
} else if (!is.na(r) && r==0) {
d <- .bf10JeffreysIntegrate(n, r, kappa)*
.jeffreysApproxH(n, r, rho)*.priorRho(rho, kappa)
} else return(NA)
  # Unnormalized approximation function for density
approxfun(rho,d)
}
postRav <- function(r, n, spacing=.01, kappa=1,npoints=100,save=FALSE)
# r is a vector, returns average density. Can also save unnormalized pdfs
{
funs <- sapply(r,posteriorRho,n=n,npoints=npoints,kappa=kappa)
rho <- seq(-1,1,spacing)
result <- apply(matrix(unlist(lapply(funs,function(x){
out <- x(rho); out/sum(out)
})),nrow=length(rho)),1,mean)
names(result) <- seq(-1,1,spacing)
attr(result,"n") <- n
attr(result,"kappa") <- kappa
if (save) attr(result,"updfs") <- funs
result
}
postRav.Density <- function(result)
# Produces density class object
{
x.vals <- as.numeric(names(result))
result <- result/(diff(range(x.vals))/length(x.vals))
out <- list(x=x.vals,y=result,has.na=FALSE,
data.name="postRav",call=call("postRav"),
bw=mean(diff(x.vals)),n=attr(result,"n"))
class(out) <- "density"
out
}
postRav.mean <- function(pra) {
# Average value of object produced by posteriorRhoAverage
sum(pra*as.numeric(names(pra)))
}
postRav.p <- function(pra,lower=-1,upper=1) {
# probability in an (inclusive) range of posteriorRhoAverage object
x.vals <- as.numeric(names(pra))
sum(pra[x.vals <= upper & x.vals >= lower])
}
postRav.ci <- function(pra,interval=c(.025,.975))
{
cs <- cumsum(pra)
rs <- as.numeric(names(pra))
tmp <- approx(cs,rs,interval)
out <- tmp$y
names(out) <- interval
out
}
###Output
_____no_output_____
###Markdown
Computing plausible population correlation for both SAT conditions
###Code
%%R -i corr_Acc -o x4_1,y4_1
rhohat = postRav(corr_Acc, 16)
print(postRav.mean(rhohat))
print(postRav.ci(rhohat))
d = postRav.Density(rhohat)
plot(d)
x4_1 = d$x
y4_1 = d$y
%%R -i corr_Spd -o x4_2,y4_2
rhohat = postRav(corr_Spd, 16)
print(postRav.mean(rhohat))
print(postRav.ci(rhohat))
d = postRav.Density(rhohat)
plot(d)
x4_2 = d$x
y4_2 = d$y
###Output
_____no_output_____
###Markdown
Plotting for both experiments
###Code
plot1data = pd.read_csv("plot1data.csv")
plot3data = pd.read_csv("plot3data.csv")
import matplotlib.gridspec as gridspec
plt.figure(dpi=300)
gs = gridspec.GridSpec(2, 2,
width_ratios=[2, 2, ],
height_ratios=[2, 1])
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax3 = plt.subplot(gs[2])
ax4 = plt.subplot(gs[3])
ax1.errorbar(x=plot1data.x1_1, y=plot1data.y1_1, yerr=np.array([plot1data.yerr1_1u.values, plot1data.yerr1_1b.values]),fmt='.', color="k", label="Accuracy")
ax1.errorbar(x=plot1data.x1_2, y=plot1data.y1_2, yerr=np.array([plot1data.yerr1_1u.values, plot1data.yerr1_1b.values]),fmt='.', color="gray", label="Speed")
ax2.errorbar(x=mts[:16], y=np.mean(Ters_Acc, axis=0), yerr=np.abs([np.mean(Ters_Acc, axis=0),np.mean(Ters_Acc, axis=0)] - np.asarray((np.percentile(Ters_Acc, 97.5, axis=0),np.percentile(Ters_Acc, 2.5, axis=0)))),fmt='.', color="k")
ax2.errorbar(x=mts[16:], y=np.mean(Ters_Spd, axis=0), yerr=np.abs([np.mean(Ters_Spd, axis=0),np.mean(Ters_Spd, axis=0)] - np.asarray((np.percentile(Ters_Spd, 97.5, axis=0),np.percentile(Ters_Spd, 2.5, axis=0)))),fmt='.', color="gray")
ax3.plot(plot3data.x3_1,plot3data.y3_1, color="k", label="Accuracy")
ax3.plot(plot3data.x3_2,plot3data.y3_2, color="gray", label="Speed")
ax4.plot(x4_1,y4_1, color="k")
ax4.plot(x4_2,y4_2, color="gray")
ax1.legend(loc=0)
ax1.set_ylim(148, 400)
ax2.set_ylim(148, 400)
ax3.set_ylim(0, 3)
ax4.set_ylim(0, 3)
ax2.set_yticks([])
ax4.set_yticks([])
ax1.set_ylabel(r"$T_{er}$ (ms)")
ax1.set_xlabel("MT (ms)")
ax2.set_xlabel("MT (ms)")
ax3.set_ylabel("Density")
ax3.set_xlabel(r"$r$ value")
ax4.set_xlabel(r"$r$ value")
plt.tight_layout()
plt.savefig("../Manuscript/plots/TerMTcorr.eps")
###Output
_____no_output_____
###Markdown
Joint fit with MT Unless a large amount of RAM (>18 GB) is available, the kernel should be restarted and only the first cell re-run before running the cells below
###Code
fit_joint2 = []
for f in os.listdir("DDM/Fits/"):
if os.path.isfile("DDM/Fits/%s"%f) and "Exp2" in f:
fit_joint2.append(hddm.load("DDM/Fits/%s"%f))
fit_joint = kabuki.utils.concat_models(fit_joint2)
stats = fit_joint.gen_stats()
stats[stats.index=="t_mt"]
###Output
_____no_output_____
###Markdown
Testing whether var in MT ~ Ter can be explained by r(PMT,MT)
###Code
import scipy.stats as stats
df = dffull = pd.read_csv('../Raw_data/markers/MRK_SAT.csv')
df = df[df.exp==2]
df = df[np.isfinite(df.pmt)].reset_index(drop=True)#Removing unmarked EMG trials
r, part, SAT = [],[],[]
for xx, subj_dat in df.groupby(['participant', 'condition']):
subj_dat = subj_dat[np.isfinite(subj_dat['mt'])]
r.append(stats.spearmanr(subj_dat.mt, subj_dat.pmt)[0])
part.append(xx[0])
SAT.append(xx[1])
dfcorr = pd.concat([pd.Series(r), pd.Series(part),pd.Series(SAT)], axis=1)
dfcorr.columns = ['correl','participant','SAT']
PMTMTcorr = dfcorr.groupby('participant').correl.mean().values #averaging across SAT conditions
corr, t_mts = [],[]
traces = fit_joint.get_traces()
for iteration in traces.iterrows():
t_mt = iteration[1][['t_mt_subj' in s for s in iteration[1].index]]
corr.append(np.corrcoef(t_mt, PMTMTcorr)[0,1])
t_mts.append(t_mt)
plt.errorbar(x=PMTMTcorr, y=np.mean(t_mts, axis=0), yerr=np.abs([np.mean(t_mts, axis=0),np.mean(t_mts, axis=0)] - np.asarray((np.percentile(t_mts, 97.5, axis=0),np.percentile(t_mts, 2.5, axis=0)))),fmt='o')
plt.hist(corr)
###Output
_____no_output_____
###Markdown
Computing population plausible values
###Code
%%R -i corr -o x4,y4
rhohat = postRav(corr, 16)
print(postRav.mean(rhohat))
print(postRav.ci(rhohat))
d = postRav.Density(rhohat)
plot(d)
x4 = d$x
y4 = d$y
###Output
_____no_output_____
###Markdown
Plotting for both experiments
###Code
plot1data_tmt = pd.read_csv('plot1data_tmt.csv')
plot2data_tmt = pd.read_csv('plot2data_tmt.csv')
import matplotlib.gridspec as gridspec
plt.figure(dpi=300)
gs = gridspec.GridSpec(2, 2,
width_ratios=[2, 2, ],
height_ratios=[2, 1])
ax1 = plt.subplot(gs[0])
ax2 = plt.subplot(gs[1])
ax3 = plt.subplot(gs[2])
ax4 = plt.subplot(gs[3])
ax1.errorbar(x=plot1data_tmt.x1, y=plot1data_tmt.y1, yerr=np.array([plot1data_tmt.yerr1u.values, plot1data_tmt.yerr1b.values]),fmt='.', color="k")
ax2.errorbar(x=PMTMTcorr, y=np.mean(t_mts, axis=0), yerr=np.abs([np.mean(t_mts, axis=0),np.mean(t_mts, axis=0)] - np.asarray((np.percentile(t_mts, 97.5, axis=0),np.percentile(t_mts, 2.5, axis=0)))),fmt='.', color="k")
ax3.plot(plot2data_tmt.x2,plot2data_tmt.y2, color="k", label="Accuracy")
ax4.plot(x4,y4, color="k")
ax3.set_ylim(0, 4)
ax4.set_ylim(0, 4)
ax2.set_yticks([])
ax4.set_yticks([])
ax1.set_ylabel(r"$\beta_{MT}$")
ax1.set_xlabel("PMT-MT correlation")
ax2.set_xlabel("PMT-MT correlation")
ax3.set_ylabel("Density")
ax3.set_xlabel(r"$r$ value")
ax4.set_xlabel(r"$r$ value")
plt.tight_layout()
plt.savefig("../Manuscript/plots/tmt.eps")
###Output
_____no_output_____ |
NLP/2-NaiveBayes_N-gram/MultinomialNB-BernoulliNB.ipynb | ###Markdown
Note that the multinomial model can continue training on additional datasets after it has finished training on one dataset, without having to put the two datasets together and train on them at once. In sklearn, the partial_fit() method of the MultinomialNB() class performs this kind of incremental training, which is especially well suited to training sets too large to fit into memory at once. All class labels must be supplied on the first call to partial_fit().
###Code
from sklearn.naive_bayes import BernoulliNB  # import added for completeness; x and y are assumed to come from earlier (omitted) cells
clf_1 = BernoulliNB()
clf_1.fit(x,y)
clf_1.predict(x[22].reshape(1,-1))
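# Illustrative sketch (editor's addition) of the incremental training described in the markdown
# above; the tiny count matrices below are made up purely to show the partial_fit() call pattern.
from sklearn.naive_bayes import MultinomialNB
import numpy as np
clf_2 = MultinomialNB()
batch_1, labels_1 = np.array([[2, 1, 0], [0, 3, 1]]), np.array([0, 1])
batch_2, labels_2 = np.array([[1, 0, 4]]), np.array([1])
clf_2.partial_fit(batch_1, labels_1, classes=np.array([0, 1]))  # the first call must list every class label
clf_2.partial_fit(batch_2, labels_2)                            # later calls keep updating the feature counts
print(clf_2.predict(np.array([[0, 2, 1]])))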
###Output
_____no_output_____ |
notebooks/Sales_Forecasting.ipynb | ###Markdown
###Code
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/bundickm/Sales-Analysis-and-Dashboard/master/sales_data_sample.csv' , encoding="unicode_escape")
###Output
_____no_output_____ |
python_projects/advanced_python/static_typing.ipynb | ###Markdown
Static TypingPython is a dynamic language, which means that type errors only show up during execution, and that can create problems. The way to add type annotations so that Python code can be checked statically is the following
###Code
a: int = 5
b: str = "World"
c: float = 3.6
d: bool = False
print(type(a), a,)
print(type(b), b,)
print(type(c), c,)
print(type(d), d,)
###Output
_____no_output_____
###Markdown
The same can be applied to functions with the following syntax
###Code
# Inside the function's parameter list we annotate each argument,
# and after '->' we annotate the type of the value the function returns
def add(x: int, y: int) -> int:
return x + y
print(add(5,5))
# but what happens when the arguments we pass do not match the annotated types?
print(add("2", "1"))
# it concatenates the two strings, even though we specified that they are supposed to be integers
print(add("l","y"))
###Output
_____no_output_____
###Markdown
To do the same with lists and dictionaries we use the following syntax
###Code
from typing import Dict, List
population: Dict[str,int] = {
"canada": 38000000,
"brazil": 212000000,
"japan":125000000 ,
}
###Output
_____no_output_____
###Markdown
And with tuples
###Code
from typing import Tuple
answer: Tuple[int,float,str,bool] = (6, 4.8, "lol", False)
###Output
_____no_output_____
###Markdown
A list containing a dictionary whose values are tuples
###Code
from typing import Tuple, Dict, List
# annotate and assign in one statement (the original first assigned the type object to age and then overwrote it)
age: List[Dict[str, Tuple[int, int]]] = [
{
"couple1": (34,33),
"couple2": (76,70),
"couple3": (55,70),
}
]
print(age)
###Output
_____no_output_____ |
prep-notebooks/ETL Pipeline Preparation.ipynb | ###Markdown
ETL Pipeline PreparationFollow the instructions below to help you create your ETL pipeline. 1. Import libraries and load datasets.- Import Python libraries- Load `messages.csv` into a dataframe and inspect the first few lines.- Load `categories.csv` into a dataframe and inspect the first few lines.
###Code
# import libraries
import pandas as pd
import numpy as np
from sqlalchemy import create_engine
# load messages dataset
messages = pd.read_csv('../data/disaster_messages.csv')
messages.head()
# load categories dataset
categories = pd.read_csv('../data/disaster_categories.csv')
categories.head()
###Output
_____no_output_____
###Markdown
2. Merge datasets.- Merge the messages and categories datasets using the common id- Assign this combined dataset to `df`, which will be cleaned in the following steps
###Code
# merge datasets
df = messages.merge(categories, on='id', how='inner')
df.head()
###Output
_____no_output_____
###Markdown
3. Split `categories` into separate category columns.- Split the values in the `categories` column on the `;` character so that each value becomes a separate column. You'll find [this method](https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.Series.str.split.html) very helpful! Make sure to set `expand=True`.- Use the first row of categories dataframe to create column names for the categories data.- Rename columns of `categories` with new column names.
###Code
# create a dataframe of the 36 individual category columns
categories = df['categories'].str.split(";",expand=True)
categories.head()
# select a sample row of the categories dataframe (here row index 1)
row = categories.iloc[1]
# use this row to extract a list of new column names for categories.
# one way is to apply a lambda function that takes everything
# up to the second to last character of each string with slicing
category_colnames = list([x[:-2] for x in row])
print(category_colnames)
# rename the columns of `categories`
categories.columns = category_colnames
categories.head()
###Output
_____no_output_____
###Markdown
4. Convert category values to just numbers 0 or 1.- Iterate through the category columns in df to keep only the last character of each string (the 1 or 0). For example, `related-0` becomes `0`, `related-1` becomes `1`. Convert the string to a numeric value.- You can perform [normal string actions on Pandas Series](https://pandas.pydata.org/pandas-docs/stable/text.html#indexing-with-str), like indexing, by including `.str` after the Series. You may need to first convert the Series to be of type string, which you can do with `astype(str)`.
###Code
import copy
c = copy.deepcopy(categories)
for column in categories:
# set each value to be the last character of the string
# print(categories[column])
categories[column] = categories[column].apply(lambda x: x[-1])
# convert column from string to numeric
categories[column] = pd.to_numeric(categories[column])
categories.head()
# checking unique values in each column categories to confirm they only have either 1 or 0
categories.nunique()
# 'related' has 3 distinct values; find out which ones they are
categories['related'].drop_duplicates()
# Replacing 2 with 1
categories['related'] = categories['related'].apply(lambda x: 1 if x not in ['1', '0', 1, 0] else x)
print(categories.head())
categories['related'].drop_duplicates()
###Output
related request offer aid_related medical_help medical_products \
0 1 0 0 0 0 0
1 1 0 0 1 0 0
2 1 0 0 0 0 0
3 1 1 0 1 0 1
4 1 0 0 0 0 0
search_and_rescue security military child_alone ... aid_centers \
0 0 0 0 0 ... 0
1 0 0 0 0 ... 0
2 0 0 0 0 ... 0
3 0 0 0 0 ... 0
4 0 0 0 0 ... 0
other_infrastructure weather_related floods storm fire earthquake \
0 0 0 0 0 0 0
1 0 1 0 1 0 0
2 0 0 0 0 0 0
3 0 0 0 0 0 0
4 0 0 0 0 0 0
cold other_weather direct_report
0 0 0 0
1 0 0 0
2 0 0 0
3 0 0 0
4 0 0 0
[5 rows x 36 columns]
###Markdown
5. Replace `categories` column in `df` with new category columns.- Drop the categories column from the df dataframe since it is no longer needed.- Concatenate df and categories data frames.
###Code
# drop the original categories column from `df`
df.drop(columns='categories', inplace=True)
df.head()
# concatenate the original dataframe with the new `categories` dataframe
df = pd.concat([df, categories], axis=1)
df.head()
###Output
_____no_output_____
###Markdown
6. Remove duplicates.- Check how many duplicates are in this dataset.- Drop the duplicates.- Confirm duplicates were removed.
###Code
# check number of duplicates
df.duplicated().sum()
# drop duplicates
df.drop_duplicates(inplace=True)
# check number of duplicates
df.duplicated().sum()
###Output
_____no_output_____
###Markdown
7. Save the clean dataset into an sqlite database.You can do this with pandas [`to_sql` method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html) combined with the SQLAlchemy library. Remember to import SQLAlchemy's `create_engine` in the first cell of this notebook to use it below.
###Code
engine = create_engine('sqlite:///DisasterResponse.db')
df.to_sql('cleaned_data', engine, index=False, if_exists='replace')
###Output
_____no_output_____
###Markdown
8. Use this notebook to complete `etl_pipeline.py`Use the template file attached in the Resources folder to write a script that runs the steps above to create a database based on new datasets specified by the user. Alternatively, you can complete `etl_pipeline.py` in the classroom on the `Project Workspace IDE` coming later.
###Code
# df_2 = df.drop(columns=["id", "message", "original", "genre"])
# categories_list = list(map(lambda x: x.replace('_', ' '), list(df_2.columns)))
# print(categories_list)
# list(df_2.sum().values)
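# A minimal sketch (editor's addition) of how the steps above could be packaged into the
# etl_pipeline.py / process_data.py script; the file paths in the usage example are assumptions.
def load_data(messages_filepath, categories_filepath):
    messages = pd.read_csv(messages_filepath)
    categories = pd.read_csv(categories_filepath)
    return messages.merge(categories, on='id', how='inner')

def clean_data(df):
    categories = df['categories'].str.split(';', expand=True)
    categories.columns = [c[:-2] for c in categories.iloc[0]]
    for column in categories:
        categories[column] = pd.to_numeric(categories[column].str[-1])
    categories['related'] = categories['related'].clip(upper=1)  # map the stray value 2 to 1
    df = pd.concat([df.drop(columns='categories'), categories], axis=1)
    return df.drop_duplicates()

def save_data(df, database_filepath):
    engine = create_engine('sqlite:///{}'.format(database_filepath))
    df.to_sql('cleaned_data', engine, index=False, if_exists='replace')

# Example usage (paths are placeholders):
# save_data(clean_data(load_data('../data/disaster_messages.csv', '../data/disaster_categories.csv')),
#           'DisasterResponse.db')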
###Output
_____no_output_____ |
2020-21_semester2/11_BotRacingUpgrade.ipynb | ###Markdown
Upgraded Racing Game
###Code
import random
class Game:
n_squares = 10
winner = None
def __init__(self, bots, verbose=False):
self.bots = bots
self.verbose = verbose
self._set_starting_positions() # calling the method that will reset all positions
def _set_starting_positions(self):
for b in self.bots: # go through every bot in the competition and set the position to 0
b.position = 0
def show_board(self):
print("=" * 30)
board = {i: [] for i in range(self.n_squares + 1)}
for bot in self.bots:
board[bot.position].append(bot)
for square, bots_in_square in board.items():
print(f"{square}: {bots_in_square}")
def play_round(self):
if self.winner is None:
random.shuffle(self.bots)
if self.verbose:
print(self.bots)
for bot in self.bots:
self._play_bot(bot)
if self.winner:
break
# for bot in bots:
# bot.direction = 1
if self.verbose:
if self.winner:
print(f"========== Race Over, WINNER: {self.winner} ========== ")
self.show_board()
def _play_bot(self, bot):
bot_position_dictionary = {b.name: b.position for b in self.bots}
action_str = bot.play(bot_position_dictionary)
if action_str == "walk":
pos_from, pos_to = bot.walk()
if self.verbose:
print(f"{str(bot):<15} walked from {pos_from} to {pos_to}")
elif action_str == "sabotage":
sabotaged_bots = bot.sabotage(self.bots)
if self.verbose:
print(f"{str(bot):<15} sabotaged {sabotaged_bots}")
elif action_str == "faceforward":
bot.direction = 1
if self.verbose:
print(f"{str(bot):<15} faced forward")
if bot.position >= self.n_squares:
self.winner = bot
class Bot:
position = 0
direction = 1
def __init__(self, name, strategy):
self.name = name
self.strategy = strategy
def __repr__(self):
return f"{self.name}"
def walk(self):
from_position = self.position
self.position = max(0, self.position+self.direction)
to_position = self.position
return from_position, to_position
def sabotage(self, bots):
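        # Reverse the walking direction of every *other* bot standing on this bot's square;
        # the affected bots are returned so _play_bot can report them.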
sabotaged_bots = []
for bot in bots:
if bot.position == self.position and bot != self:
bot.direction *= -1
sabotaged_bots.append(bot)
return sabotaged_bots
def play(self, bot_positions):
return self.strategy(self, bot_positions)
###Output
_____no_output_____
###Markdown
strategies
###Code
def random_strategy(self, bot_positions):
return random.choice(["walk", "sabotage"])
original_list = ['walk', 'walk', 'sabotage']
current_list = []
def list_strategy(self, bot_positions):
global current_list # to allow "write-access" to out-of-function variables
if current_list == []:
current_list = original_list.copy()
return current_list.pop(0)
def always_walk(self, bot_positions):
return "walk"
def underdog(self, bot_positions):
my_pos = self.position
bots_at_my_pos = sum([1 for pos in bot_positions.values() if pos == my_pos])
if bots_at_my_pos > 2 and my_pos > 3:
return "sabotage"
else:
if self.direction == 1:
return "walk"
else:
return "faceforward"
bots = [
Bot("Random", random_strategy),
Bot("List", list_strategy),
Bot("Walker", always_walk),
Bot("UnderDog", underdog),
]
from tqdm.auto import tqdm
import pandas as pd
def grand_prix(n):
winnings = {b: 0 for b in bots}
for _ in tqdm(range(n)):
game = Game(bots, verbose=False)
while game.winner is None:
game.play_round()
winnings[game.winner] += 1
return winnings
winnings = grand_prix(n=1000)
podium = pd.Series(winnings).sort_values(ascending=False)
podium.plot.bar(grid=True, figsize=(25,6), rot=0)
###Output
_____no_output_____ |
jupyter_notebooks/08_itertools_Module.ipynb | ###Markdown
Python Cheat SheetBasic cheatsheet for Python mostly based on the book written by Al Sweigart, [Automate the Boring Stuff with Python](https://automatetheboringstuff.com/) under the [Creative Commons license](https://creativecommons.org/licenses/by-nc-sa/3.0/) and many other sources. Read It- [Website](https://www.pythoncheatsheet.org)- [Github](https://github.com/wilfredinni/python-cheatsheet)- [PDF](https://github.com/wilfredinni/Python-cheatsheet/raw/master/python_cheat_sheet.pdf)- [Jupyter Notebook](https://mybinder.org/v2/gh/wilfredinni/python-cheatsheet/master?filepath=jupyter_notebooks) itertools ModuleThe _itertools_ module is a collection of tools intended to be fast and use memory efficiently when handling iterators (like [lists](#lists) or [dictionaries](#dictionaries-and-structuring-data)).From the official [Python 3.x documentation](https://docs.python.org/3/library/itertools.html):> The module standardizes a core set of fast, memory efficient tools that are useful by themselves or in combination. Together, they form an “iterator algebra” making it possible to construct specialized tools succinctly and efficiently in pure Python.The _itertools_ module comes in the standard library and must be imported.The [operator](https://docs.python.org/3/library/operator.html) module will also be used. This module is not necessary when using itertools, but needed for some of the examples below.
###Code
import itertools
import operator
###Output
_____no_output_____
###Markdown
accumulateMakes an iterator that returns the results of a function.
###Code
itertools.accumulate(iterable[, func])
###Output
_____no_output_____
###Markdown
Example:
###Code
data = [1, 2, 3, 4, 5]
result = itertools.accumulate(data, operator.mul)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
The operator.mul takes two numbers and multiplies them:
###Code
operator.mul(1, 2)
operator.mul(2, 3)
operator.mul(6, 4)
operator.mul(24, 5)
###Output
_____no_output_____
###Markdown
Passing a function is optional:
###Code
data = [5, 2, 6, 4, 5, 9, 1]
result = itertools.accumulate(data)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
If no function is designated the items will be summed:
###Code
5
5 + 2 = 7
7 + 6 = 13
13 + 4 = 17
17 + 5 = 22
22 + 9 = 31
31 + 1 = 32
###Output
_____no_output_____
###Markdown
combinationsTakes an iterable and an integer r. This will create all the unique combinations that have r members.
###Code
itertools.combinations(iterable, r)
###Output
_____no_output_____
###Markdown
Example:
###Code
shapes = ['circle', 'triangle', 'square',]
result = itertools.combinations(shapes, 2)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
combinations_with_replacementJust like combinations(), but allows individual elements to be repeated more than once.
###Code
itertools.combinations_with_replacement(iterable, r)
###Output
_____no_output_____
###Markdown
Example:
###Code
shapes = ['circle', 'triangle', 'square']
result = itertools.combinations_with_replacement(shapes, 2)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
countMakes an iterator that returns evenly spaced values starting with number start.
###Code
itertools.count(start=0, step=1)
###Output
_____no_output_____
###Markdown
Example:
###Code
for i in itertools.count(10,3):
print(i)
if i > 20:
break
###Output
_____no_output_____
###Markdown
cycleThis function cycles through an iterator endlessly.
###Code
itertools.cycle(iterable)
###Output
_____no_output_____
###Markdown
Example:
###Code
colors = ['red', 'orange', 'yellow', 'green', 'blue', 'violet']
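# note: cycle() never ends on its own, so this loop prints forever until the kernel is
# interrupted (or a break condition is added)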
for color in itertools.cycle(colors):
print(color)
###Output
_____no_output_____
###Markdown
When it reaches the end of the iterable it starts over again from the beginning. chainTakes a series of iterables and returns them as one long iterable.
###Code
itertools.chain(*iterables)
###Output
_____no_output_____
###Markdown
Example:
###Code
colors = ['red', 'orange', 'yellow', 'green', 'blue']
shapes = ['circle', 'triangle', 'square', 'pentagon']
result = itertools.chain(colors, shapes)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
compressFilters one iterable with another.
###Code
itertools.compress(data, selectors)
###Output
_____no_output_____
###Markdown
Example:
###Code
shapes = ['circle', 'triangle', 'square', 'pentagon']
selections = [True, False, True, False]
result = itertools.compress(shapes, selections)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
dropwhileMake an iterator that drops elements from the iterable as long as the predicate is true; afterwards, returns every element.
###Code
itertools.dropwhile(predicate, iterable)
###Output
_____no_output_____
###Markdown
Example:
###Code
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1]
result = itertools.dropwhile(lambda x: x<5, data)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
filterfalseMakes an iterator that filters elements from iterable returning only those for which the predicate is False.
###Code
itertools.filterfalse(predicate, iterable)
###Output
_____no_output_____
###Markdown
Example:
###Code
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
result = itertools.filterfalse(lambda x: x<5, data)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
groupbySimply put, this function groups things together.
###Code
itertools.groupby(iterable, key=None)
###Output
_____no_output_____
###Markdown
Example:
###Code
robots = [{
'name': 'blaster',
'faction': 'autobot'
}, {
'name': 'galvatron',
'faction': 'decepticon'
}, {
'name': 'jazz',
'faction': 'autobot'
}, {
'name': 'metroplex',
'faction': 'autobot'
}, {
'name': 'megatron',
'faction': 'decepticon'
}, {
'name': 'starcream',
'faction': 'decepticon'
}]
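# groupby only merges *consecutive* items with the same key, so with this unsorted list each
# faction shows up in more than one group; sorting first gives exactly one group per faction, e.g.:
# robots = sorted(robots, key=lambda x: x['faction'])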
for key, group in itertools.groupby(robots, key=lambda x: x['faction']):
print(key)
print(list(group))
###Output
_____no_output_____
###Markdown
isliceThis function is very much like slices. This allows you to cut out a piece of an iterable.
###Code
itertools.islice(iterable, start, stop[, step])
###Output
_____no_output_____
###Markdown
Example:
###Code
colors = ['red', 'orange', 'yellow', 'green', 'blue',]
few_colors = itertools.islice(colors, 2)
for each in few_colors:
print(each)
###Output
_____no_output_____
###Markdown
permutationsReturns successive r-length permutations of the elements in the iterable; if r is not given, full-length permutations are generated.
###Code
itertools.permutations(iterable, r=None)
###Output
_____no_output_____
###Markdown
Example:
###Code
alpha_data = ['a', 'b', 'c']
result = itertools.permutations(alpha_data)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
productCreates the Cartesian product from a series of iterables.
###Code
num_data = [1, 2, 3]
alpha_data = ['a', 'b', 'c']
result = itertools.product(num_data, alpha_data)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
repeatThis function will repeat an object over and over again, unless a times argument limits the number of repetitions.
###Code
itertools.repeat(object[, times])
###Output
_____no_output_____
###Markdown
Example:
###Code
for i in itertools.repeat("spam", 3):
print(i)
###Output
_____no_output_____
###Markdown
starmapMakes an iterator that computes the function using arguments obtained from the iterable.
###Code
itertools.starmap(function, iterable)
###Output
_____no_output_____
###Markdown
Example:
###Code
data = [(2, 6), (8, 4), (7, 3)]
result = itertools.starmap(operator.mul, data)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
takewhileThe opposite of dropwhile(). Makes an iterator and returns elements from the iterable as long as the predicate is true.
###Code
itertools.takewhile(predicate, iterable)
###Output
_____no_output_____
###Markdown
Example:
###Code
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1]
result = itertools.takewhile(lambda x: x<5, data)
for each in result:
print(each)
###Output
_____no_output_____
###Markdown
teeReturn n independent iterators from a single iterable.
###Code
itertools.tee(iterable, n=2)
###Output
_____no_output_____
###Markdown
Example:
###Code
colors = ['red', 'orange', 'yellow', 'green', 'blue']
alpha_colors, beta_colors = itertools.tee(colors)
for each in alpha_colors:
print(each)
colors = ['red', 'orange', 'yellow', 'green', 'blue']
alpha_colors, beta_colors = itertools.tee(colors)
for each in beta_colors:
print(each)
###Output
_____no_output_____
###Markdown
zip_longestMakes an iterator that aggregates elements from each of the iterables. If the iterables are of uneven length, missing values are filled-in with fillvalue. Iteration continues until the longest iterable is exhausted.
###Code
itertools.zip_longest(*iterables, fillvalue=None)
###Output
_____no_output_____
###Markdown
Example:
###Code
colors = ['red', 'orange', 'yellow', 'green', 'blue',]
data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10,]
for each in itertools.zip_longest(colors, data, fillvalue=None):
print(each)
###Output
_____no_output_____ |
project2/.Trash-0/files/project_2_solution.ipynb | ###Markdown
Project 2: Breakout Strategy InstructionsEach problem consists of a function to implement and instructions on how to implement the function. The parts of the function that need to be implemented are marked with a `# TODO` comment. After implementing the function, run the cell to test it against the unit tests we've provided. For each problem, we provide one or more unit tests from our `project_tests` package. These unit tests won't tell you if your answer is correct, but will warn you of any major errors. Your code will be checked for the correct solution when you submit it to Udacity. PackagesWhen you implement the functions, you'll only need to use the [Pandas](https://pandas.pydata.org/) and [Numpy](http://www.numpy.org/) packages. Don't import any other packages, otherwise the grader will not be able to run your code.The other packages that we're importing are `helper`, `project_helper`, and `project_tests`. These are custom packages built to help you solve the problems. The `helper` and `project_helper` modules contain utility functions and graph functions. The `project_tests` package contains the unit tests for all the problems. Install Packages
###Code
import sys
!{sys.executable} -m pip install -r requirements.txt
###Output
_____no_output_____
###Markdown
Load Packages
###Code
import pandas as pd
import numpy as np
import helper
import project_helper
import project_tests
###Output
_____no_output_____
###Markdown
Market DataThe data source we'll be using is the [Wiki End of Day data](https://www.quandl.com/databases/WIKIP) hosted at [Quandl](https://www.quandl.com). This contains data for many stocks, but we'll just be looking at the S&P 500 stocks. We'll also make things a little easier to solve by narrowing our range of time from 2013-07-01 to 2017-06-30. Set API KeySet the `quandl_api_key` variable to your Quandl API key. You can find your Quandl API key [here](https://www.quandl.com/account/api).
###Code
# TODO: Add your Quandl API Key
quandl_api_key = ''
###Output
_____no_output_____
###Markdown
Download Data
###Code
import os
snp500_file_path = 'data/tickers_SnP500.txt'
wiki_file_path = 'data/WIKI_PRICES.csv'
start_date, end_date = '2013-07-01', '2017-06-30'
use_columns = ['date', 'ticker', 'adj_close', 'adj_high', 'adj_low']
if not os.path.exists(wiki_file_path):
with open(snp500_file_path) as f:
tickers = f.read().split()
helper.download_quandl_dataset(quandl_api_key, 'WIKI', 'PRICES', wiki_file_path, use_columns, tickers, start_date, end_date)
else:
print('Data already downloaded')
###Output
_____no_output_____
###Markdown
Load DataWhile using real data will give you hands-on experience, it doesn't cover all the topics we try to condense into one project. We'll solve this by creating new stocks. We've created a scenario where companies mining [Terbium](https://en.wikipedia.org/wiki/Terbium) are making huge profits. All the companies in this sector of the market are made up. They represent a sector with large growth that will be used for demonstration later in this project.
###Code
df_original = pd.read_csv(wiki_file_path, parse_dates=['date'], index_col=False)
# Add TB sector to the market
df = df_original
df = pd.concat([df] + project_helper.generate_tb_sector(df[df['ticker'] == 'AAPL']['date']), ignore_index=True)
print('Loaded Dataframe')
###Output
_____no_output_____
###Markdown
2-D MatricesHere we convert df into multiple DataFrames for each OHLC. We could use a multiindex, but that just stacks the columns for each ticker. We want to be able to apply calculations without using groupby each time.
###Code
close = df.reset_index().pivot(index='date', columns='ticker', values='adj_close')
high = df.reset_index().pivot(index='date', columns='ticker', values='adj_high')
low = df.reset_index().pivot(index='date', columns='ticker', values='adj_low')
###Output
_____no_output_____
###Markdown
View DataTo see what one of these 2-d matrices looks like, let's take a look at the closing prices matrix.
###Code
close
###Output
_____no_output_____
###Markdown
Stock ExampleLet's see what a single stock looks like from the closing prices. For this example and future display examples, we'll use Apple's stock, "AAPL", to graph the data. If we tried to graph all the stocks, it would be too much information.Run the code below to view a chart of Apple stock.
###Code
apple_ticker = 'AAPL'
project_helper.plot_stock(close[apple_ticker], '{} Stock'.format(apple_ticker))
###Output
_____no_output_____
###Markdown
The Alpha Research ProcessIn this project you will code and evaluate a "breakout" signal. It is important to understand where these steps fit in the alpha research workflow. The signal-to-noise ratio in trading signals is very low and, as such, it is very easy to fall into the trap of _overfitting_ to noise. It is therefore inadvisable to jump right into signal coding. To help mitigate overfitting, it is best to start with a general observation and hypothesis; i.e., you should be able to answer the following question _before_ you touch any data:> What feature of markets or investor behaviour would lead to a persistent anomaly that my signal will try to use?Ideally the assumptions behind the hypothesis will be testable _before_ you actually code and evaluate the signal itself. The workflow therefore is as follows:In this project, we assume that the first three steps are done ("observe & research", "form hypothesis", "validate hypothesis"). The hypothesis you'll be using for this project is the following:- In the absence of news or significant investor trading interest, stocks oscillate in a range.- Traders seek to capitalize on this range-bound behaviour periodically by selling/shorting at the top of the range and buying/covering at the bottom of the range. This behaviour reinforces the existence of the range.- When stocks break out of the range, due to, e.g., a significant news release or from market pressure from a large investor: - the liquidity traders who have been providing liquidity at the bounds of the range seek to cover their positions to mitigate losses, thus magnifying the move out of the range, _and_ - the move out of the range attracts other investor interest; these investors, due to the behavioural bias of _herding_ (e.g., [Herd Behavior](https://www.investopedia.com/university/behavioral_finance/behavioral8.asp)) build positions which favor continuation of the trend.Using this hypothesis, let's start coding. Compute the Highs and Lows in a WindowYou'll use the price highs and lows as an indicator for the breakout strategy. In this section, implement `get_high_lows_lookback` to get the maximum high price and minimum low price over a window of days. The variable `lookback_days` contains the number of days to look in the past. Make sure this doesn't include the current day.
###Code
def get_high_lows_lookback(high, low, lookback_days):
"""
Get the highs and lows in a lookback window.
Parameters
----------
high : DataFrame
High price for each ticker and date
low : DataFrame
Low price for each ticker and date
lookback_days : int
The number of days to look back
Returns
-------
lookback_high : DataFrame
Lookback high price for each ticker and date
lookback_low : DataFrame
Lookback low price for each ticker and date
"""
#TODO: Implement function
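# shift(1) keeps the current day out of the lookback window, as required above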
lookback_high = high.shift(1).rolling(lookback_days, lookback_days).max()
lookback_low = low.shift(1).rolling(lookback_days, lookback_days).min()
return lookback_high, lookback_low
project_tests.test_get_high_lows_lookback(get_high_lows_lookback)
###Output
_____no_output_____
###Markdown
View DataLet's use your implementation of `get_high_lows_lookback` to get the highs and lows for the past 50 days and compare it to it their respective stock. Just like last time, we'll use Apple's stock as the example to look at.
###Code
lookback_days = 50
lookback_high, lookback_low = get_high_lows_lookback(high, low, lookback_days)
project_helper.plot_high_low(
close[apple_ticker],
lookback_high[apple_ticker],
lookback_low[apple_ticker],
'High and Low of {} Stock'.format(apple_ticker))
###Output
_____no_output_____
###Markdown
Compute Long and Short SignalsUsing the generated indicator of highs and lows, create long and short signals using a breakout strategy. Implement `get_long_short` to generate the following signals:| Signal | Condition ||----|------|| -1 | Low > Close Price || 1 | High < Close Price || 0 | Otherwise |In this chart, **Close Price** is the `close` parameter. **Low** and **High** are the values generated from `get_high_lows_lookback`, the `lookback_high` and `lookback_low` parameters.
###Code
def get_long_short(close, lookback_high, lookback_low):
"""
Generate the signals long, short, and do nothing.
Parameters
----------
close : DataFrame
Close price for each ticker and date
lookback_high : DataFrame
Lookback high price for each ticker and date
lookback_low : DataFrame
Lookback low price for each ticker and date
Returns
-------
long_short : DataFrame
The long, short, and do nothing signals for each ticker and date
"""
#TODO: Implement function
return ((close < lookback_low).astype(int) * -1) + (close > lookback_high).astype(int)
project_tests.test_get_long_short(get_long_short)
###Output
_____no_output_____
###Markdown
View DataLet's compare the signals you generated against the close prices. This chart will show a lot of signals. Too many in fact. We'll talk about filtering the redundant signals in the next problem.
###Code
signal = get_long_short(close, lookback_high, lookback_low)
project_helper.plot_signal(
close[apple_ticker],
signal[apple_ticker],
'Long and Short of {} Stock'.format(apple_ticker))
###Output
_____no_output_____
###Markdown
Filter SignalsThat was a lot of repeated signals! If we're already shorting a stock, having an additional signal to short a stock isn't helpful for this strategy. This also applies to additional long signals when the last signal was long.Implement `filter_signals` to filter out repeated long or short signals within the `lookahead_days`. If the previous signal was the same, change the signal to `0` (do nothing signal). For example, say you have a single stock time series that is`[1, 0, 1, 0, 1, 0, -1, -1]`Running `filter_signals` with a lookahead of 3 days should turn those signals into`[1, 0, 0, 0, 1, 0, -1, 0]`To help you implement the function, we have provided you with the `clear_signals` function. This will remove all signals within a window after the last signal. For example, say you're using a window size of 3 with `clear_signals`. It would turn the Series of long signals`[0, 1, 0, 0, 1, 1, 0, 1, 0]`into`[0, 1, 0, 0, 0, 1, 0, 0, 0]`Note: it only takes a Series of the same type of signals, where `1` is the signal and `0` is no signal. It can't take a mix of long and short signals. Using this function, implement `filter_signals`.
###Code
def clear_signals(signals, window_size):
"""
Clear out signals in a Series of just long or short signals.
Remove the number of signals down to 1 within the window size time period.
Parameters
----------
signals : Pandas Series
The long, short, or do nothing signals
window_size : int
The number of days to have a single signal
Returns
-------
signals : Pandas Series
Signals with the signals removed from the window size
"""
# Start with buffer of window size
# This handles the edge case of calculating past_signal in the beginning
clean_signals = [0]*window_size
for signal_i, current_signal in enumerate(signals):
# Check if there was a signal in the past window_size of days
has_past_signal = bool(sum(clean_signals[signal_i:signal_i+window_size]))
# Use the current signal if there's no past signal, else 0/False
clean_signals.append(not has_past_signal and current_signal)
# Remove buffer
clean_signals = clean_signals[window_size:]
# Return the signals as a Series of Ints
return pd.Series(np.array(clean_signals).astype(np.int), signals.index)
def filter_signals(signal, lookahead_days):
"""
Filter out signals in a DataFrame.
Parameters
----------
signal : DataFrame
The long, short, and do nothing signals for each ticker and date
lookahead_days : int
The number of days to look ahead
Returns
-------
filtered_signal : DataFrame
The filtered long, short, and do nothing signals for each ticker and date
"""
#TODO: Implement function
pos_signal = signal[signal == 1].fillna(0)
neg_signal = signal[signal == -1].fillna(0) * -1
pos_signal = pos_signal.apply(lambda signals: clear_signals(signals, lookahead_days))
neg_signal = neg_signal.apply(lambda signals: clear_signals(signals, lookahead_days))
return pos_signal + neg_signal*-1
project_tests.test_filter_signals(filter_signals)
###Output
_____no_output_____
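###Markdown
As a quick sanity check (an addition, not part of the original project tests), we can run `filter_signals` on the single-stock example from the description above. The ticker name `XYZ` is made up for this illustration; the expected result is `[1, 0, 0, 0, 1, 0, -1, 0]`.
###Code
# Sanity check on the toy example from the description above.
# 'XYZ' is a made-up ticker used only for this illustration.
toy_signal = pd.DataFrame({'XYZ': [1, 0, 1, 0, 1, 0, -1, -1]})
print(filter_signals(toy_signal, 3))
###Output
_____no_output_____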
###Markdown
View DataLet's view the same chart as before, but with the redundant signals removed.
###Code
signal_5 = filter_signals(signal, 5)
signal_10 = filter_signals(signal, 10)
signal_20 = filter_signals(signal, 20)
for signal_data, signal_days in [(signal_5, 5), (signal_10, 10), (signal_20, 20)]:
project_helper.plot_signal(
close[apple_ticker],
signal_data[apple_ticker],
'Long and Short of {} Stock with {} day signal window'.format(apple_ticker, signal_days))
###Output
_____no_output_____
###Markdown
Lookahead Close PricesWith the trading signal done, we can start working on evaluating how many days to short or long the stocks. In this problem, implement `get_lookahead_prices` to get the close price days ahead in time. You can get the number of days from the variable `lookahead_days`. We'll use the lookahead prices to calculate future returns in another problem.
###Code
def get_lookahead_prices(close, lookahead_days):
"""
Get the lookahead prices for `lookahead_days` number of days.
Parameters
----------
close : DataFrame
Close price for each ticker and date
lookahead_days : int
The number of days to look ahead
Returns
-------
lookahead_prices : DataFrame
The lookahead prices for each ticker and date
"""
#TODO: Implement function
return close.shift(-lookahead_days)
project_tests.test_get_lookahead_prices(get_lookahead_prices)
###Output
_____no_output_____
###Markdown
View DataUsing the `get_lookahead_prices` function, let's generate lookahead closing prices for 5, 10, and 20 days.Let's also chart a subsection of a few months of the Apple stock instead of years. This will allow you to view the differences between the 5, 10, and 20 day lookaheads. Otherwise, they will mesh together when looking at a chart that is zoomed out.
###Code
lookahead_5 = get_lookahead_prices(close, 5)
lookahead_10 = get_lookahead_prices(close, 10)
lookahead_20 = get_lookahead_prices(close, 20)
project_helper.plot_lookahead_prices(
close[apple_ticker].iloc[150:250],
[
(lookahead_5[apple_ticker].iloc[150:250], 5),
(lookahead_10[apple_ticker].iloc[150:250], 10),
(lookahead_20[apple_ticker].iloc[150:250], 20)],
'5, 10, and 20 day Lookahead Prices for Slice of {} Stock'.format(apple_ticker))
###Output
_____no_output_____
###Markdown
Lookahead Price ReturnsImplement `get_return_lookahead` to generate the log price return between the closing price and the lookahead price.
###Code
def get_return_lookahead(close, lookahead_prices):
"""
Calculate the log returns from the lookahead days to the signal day.
Parameters
----------
close : DataFrame
Close price for each ticker and date
lookahead_prices : DataFrame
The lookahead prices for each ticker and date
Returns
-------
lookahead_returns : DataFrame
The lookahead log returns for each ticker and date
"""
#TODO: Implement function
return np.log(lookahead_prices) - np.log(close)
project_tests.test_get_return_lookahead(get_return_lookahead)
###Output
_____no_output_____
###Markdown
View DataUsing the same lookahead prices and same subsection of the Apple stock from the previous problem, we'll view the lookahead returns.In order to view price returns on the same chart as the stock, a second y-axis will be added. When viewing this chart, the axis for the price of the stock will be on the left side, like previous charts. The axis for price returns will be located on the right side.
###Code
price_return_5 = get_return_lookahead(close, lookahead_5)
price_return_10 = get_return_lookahead(close, lookahead_10)
price_return_20 = get_return_lookahead(close, lookahead_20)
project_helper.plot_price_returns(
close[apple_ticker].iloc[150:250],
[
(price_return_5[apple_ticker].iloc[150:250], 5),
(price_return_10[apple_ticker].iloc[150:250], 10),
(price_return_20[apple_ticker].iloc[150:250], 20)],
'5, 10, and 20 day Lookahead Returns for Slice {} Stock'.format(apple_ticker))
###Output
_____no_output_____
###Markdown
Compute the Signal ReturnUsing the price returns generate the signal returns.
###Code
def get_signal_return(signal, lookahead_returns):
"""
Compute the signal returns.
Parameters
----------
signal : DataFrame
The long, short, and do nothing signals for each ticker and date
lookahead_returns : DataFrame
The lookahead log returns for each ticker and date
Returns
-------
signal_return : DataFrame
Signal returns for each ticker and date
"""
#TODO: Implement function
return signal * lookahead_returns
project_tests.test_get_signal_return(get_signal_return)
###Output
_____no_output_____
###Markdown
View DataLet's continue using the previous lookahead prices to view the signal returns. Just like before, the axis for the signal returns is on the right side of the chart.
###Code
title_string = '{} day Lookahead Signal Returns for {} Stock'
signal_return_5 = get_signal_return(signal_5, price_return_5)
signal_return_10 = get_signal_return(signal_10, price_return_10)
signal_return_20 = get_signal_return(signal_20, price_return_20)
project_helper.plot_signal_returns(
close[apple_ticker],
[
(signal_return_5[apple_ticker], signal_5[apple_ticker], 5),
(signal_return_10[apple_ticker], signal_10[apple_ticker], 10),
(signal_return_20[apple_ticker], signal_20[apple_ticker], 20)],
[title_string.format(5, apple_ticker), title_string.format(10, apple_ticker), title_string.format(20, apple_ticker)])
###Output
_____no_output_____
###Markdown
Test for Significance HistogramLet's plot a histogram of the signal return values.
###Code
project_helper.plot_signal_histograms(
[signal_return_5, signal_return_10, signal_return_20],
'Signal Return',
('5 Days', '10 Days', '20 Days'))
###Output
_____no_output_____
###Markdown
Question: What do the histograms tell you about the signal? *TODO: Put Answer In this Cell* P-ValueLet's calculate the P-Value from the signal return.
###Code
pval_5 = project_helper.get_signal_return_pval(signal_return_5)
print('5 Day P-value: {}'.format(pval_5))
pval_10 = project_helper.get_signal_return_pval(signal_return_10)
print('10 Day P-value: {}'.format(pval_10))
pval_20 = project_helper.get_signal_return_pval(signal_return_20)
print('20 Day P-value: {}'.format(pval_20))
###Output
_____no_output_____
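###Markdown
The test itself is hidden inside `project_helper.get_signal_return_pval`. As a rough sketch of what such a p-value computation could look like (an assumption for illustration, not the helper's actual implementation), a one-sided, one-sample t-test of the non-zero signal returns against a zero mean is shown below.
###Code
# Hypothetical sketch of a p-value for "mean signal return > 0".
# NOTE: this is an illustrative assumption; project_helper.get_signal_return_pval
# may be implemented differently.
from scipy import stats

def sketch_signal_return_pval(signal_return):
    returns = signal_return.stack()   # flatten to one Series, NaNs dropped
    returns = returns[returns != 0]   # keep only days where a signal fired
    t_stat, p_two_sided = stats.ttest_1samp(returns, 0)
    # One-sided p-value for the alternative "mean > 0"
    return p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2

print('5 Day P-value (sketch): {}'.format(sketch_signal_return_pval(signal_return_5)))
###Output
_____no_output_____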
###Markdown
Question: What do the p-values tell you about the signal? *TODO: Put Answer In this Cell* OutliersYou might have noticed the outliers in the 10 and 20 day histograms. To better visualize the outliers, let's compare the 5, 10, and 20 day signals returns to normal distributions with the same mean and deviation for each signal return distributions.
###Code
project_helper.plot_signal_to_normal_histograms(
[signal_return_5, signal_return_10, signal_return_20],
'Signal Return',
('5 Days', '10 Days', '20 Days'))
###Output
_____no_output_____
###Markdown
Find OutliersWhile you can see the outliers in the histogram, we need to find the stocks that cause these outlying returns. Implement the function `find_outliers` to use the Kolmogorov-Smirnov test (KS test) between a normal distribution and each stock's signal returns, in the following order: - Ignore returns without a signal in `signal`. This will better fit the normal distribution and remove false positives.- Run the KS test of each stock's signal returns against a normal distribution with the same std and mean as all the signal returns. Use `kstest` to perform the KS test.- Ignore any items that don't pass the null hypothesis with a threshold of `pvalue_threshold`. You can consider them not outliers.- Return all stock tickers with a KS value above `ks_threshold`.
###Code
from scipy.stats import kstest
def find_outliers(signal, signal_return, ks_threshold, pvalue_threshold=0.05):
"""
Find stock outliers in `df` using Kolmogorov-Smirnov test against a normal distribution.
Ignore stock with a p-value from Kolmogorov-Smirnov test greater than `pvalue_threshold`.
Ignore stocks with a KS statistic value lower than `ks_threshold`.
Parameters
----------
signal : DataFrame
The long, short, and do nothing signals for each ticker and date
signal_return : DataFrame
The signal return for each ticker and date
ks_threshold : float
The threshold for the KS statistic
pvalue_threshold : float
The threshold for the p-value
Returns
-------
outliers : list of str
Symbols that are outliers
"""
#TODO: Implement function
non_zero_signal_returns = signal_return[signal != 0].stack().dropna().T
normal_args = (
non_zero_signal_returns.mean(),
non_zero_signal_returns.std())
non_zero_signal_returns.index = non_zero_signal_returns.index.set_names(['date', 'ticker'])
outliers = non_zero_signal_returns.groupby('ticker') \
.apply(lambda x: kstest(x, 'norm', normal_args)) \
.apply(pd.Series) \
.rename(index=str, columns={0: 'ks_value', 1: 'p_value'})
# Remove items that don't pass the null hypothesis
outliers = outliers[outliers['p_value'] < pvalue_threshold]
return outliers[outliers['ks_value'] > ks_threshold].index.tolist()
project_tests.test_find_outliers(find_outliers)
###Output
_____no_output_____
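###Markdown
To make the KS statistic and p-value more concrete, here is a small synthetic illustration that is independent of the project data: a normal sample and a heavy-tailed sample are each tested against a normal distribution fitted with their own mean and standard deviation.
###Code
# Synthetic illustration of kstest (not part of the original project).
np.random.seed(0)
normal_sample = np.random.normal(0.0, 1.0, 1000)
heavy_tailed_sample = np.random.standard_t(df=1, size=1000)

for name, sample in [('normal', normal_sample), ('heavy tailed', heavy_tailed_sample)]:
    ks_value, p_value = kstest(sample, 'norm', (sample.mean(), sample.std()))
    print('{}: KS statistic = {:.3f}, p-value = {:.4f}'.format(name, ks_value, p_value))
###Output
_____no_output_____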
###Markdown
View DataUsing the `find_outliers` function you implemented, let's see what we found.
###Code
outlier_tickers = []
ks_threshold = 0.8
outlier_tickers.extend(find_outliers(signal_5, signal_return_5, ks_threshold))
outlier_tickers.extend(find_outliers(signal_10, signal_return_10, ks_threshold))
outlier_tickers.extend(find_outliers(signal_20, signal_return_20, ks_threshold))
outlier_tickers = set(outlier_tickers)
print('{} Outliers Found:\n{}'.format(len(outlier_tickers), ', '.join(list(outlier_tickers))))
###Output
_____no_output_____
###Markdown
Show Significance without OutliersLet's compare the 5, 10, and 20 day signals returns without outliers to normal distributions. Also, let's see how the P-Value has changed with the outliers removed.
###Code
good_tickers = list(set(close.columns) - outlier_tickers)
project_helper.plot_signal_to_normal_histograms(
[signal_return_5[good_tickers], signal_return_10[good_tickers], signal_return_20[good_tickers]],
'Signal Return Without Outliers',
('5 Days', '10 Days', '20 Days'))
outliers_removed_pval_5 = project_helper.get_signal_return_pval(signal_return_5[good_tickers])
outliers_removed_pval_10 = project_helper.get_signal_return_pval(signal_return_10[good_tickers])
outliers_removed_pval_20 = project_helper.get_signal_return_pval(signal_return_20[good_tickers])
print('5 Day P-value (with outliers): {}'.format(pval_5))
print('5 Day P-value (without outliers): {}'.format(outliers_removed_pval_5))
print('')
print('10 Day P-value (with outliers): {}'.format(pval_10))
print('10 Day P-value (without outliers): {}'.format(outliers_removed_pval_10))
print('')
print('20 Day P-value (with outliers): {}'.format(pval_20))
print('20 Day P-value (without outliers): {}'.format(outliers_removed_pval_20))
###Output
_____no_output_____ |
accessor/accessor.ipynb | ###Markdown
GLTF Format Tutorial: Accessor [`Open in observablehq`](https://observablehq.com/@toonnyy8/gltf-accessor) ![Figure 1. buffers, bufferViews, accessors \[1\]](https://github.com/CSP-GD/notes/raw/master/practice/file_format/gltf%E6%A0%BC%E5%BC%8F%E8%A7%A3%E6%9E%90/accessor/gltfOverview-2.0.0b-accessor.png)Figure 1. buffers, bufferViews, accessors \[1\] Introduction In glTF, a model's mesh, weight, animation, and other data are actually stored in a Buffer. When the data is needed, an Accessor is used to interpret it, and the data the Accessor interprets is extracted from the Buffer through a BufferView. The workflow is as follows > **Buffer** ==> **BufferView** extracts data ==> **Accessor** interprets data ==> data Accessor properties - bufferView : \ > which BufferView this Accessor takes its data from.- byteOffset : \ > how many bytes into the BufferView to start reading data.- type : > the type of one element (the unit of count) > `SCALAR` : made of $1$ componentType > `VEC2` : made of $2$ componentType > `VEC3` : made of $3$ componentType > `VEC4` : made of $4$ componentType > `MAT2` : made of $2*2$ componentType > `MAT3` : made of $3*3$ componentType > `MAT4` : made of $4*4$ componentType - componentType : \ > the numeric type of the data; some componentType codes and the types they represent: > `5120` : `BYTE` > `5121` : `UNSIGNED_BYTE` > `5122` : `SHORT` > `5123` : `UNSIGNED_SHORT` > `5124` : `INT` > `5125` : `UNSIGNED_INT` > `5126` : `FLOAT` - count : \ > how many elements there are- min : \ > the minimum values of the data- max : \ > the maximum values of the data BufferView properties - buffer : \ > which Buffer this BufferView takes its data from.- byteOffset : \ > how many bytes into the Buffer to start reading data.- byteLength : \ > how many bytes to take.- byteStride : \ > when data is interleaved, tells the Accessor the stride to use when reading.- target : \ > indicates whether the data is vertex data (target equal to `34962`, i.e. `ARRAY_BUFFER`) or vertex indices (target equal to `34963`, i.e. `ELEMENT_ARRAY_BUFFER`). Buffer properties - byteLength : \ > the size of this Buffer.- uri : \ > the location of the bufferData; the bufferData may also be stored directly as base64. Getting started Load glTF_tools
###Code
!wget https://github.com/CSP-GD/notes/raw/master/practice/file_format/gltf%E6%A0%BC%E5%BC%8F%E8%A7%A3%E6%9E%90/gltf-tools.ipynb -O gltf-tools.ipynb
%run ./gltf-tools.ipynb
###Output
--2020-04-09 12:10:03-- https://github.com/CSP-GD/notes/raw/master/practice/file_format/gltf%E6%A0%BC%E5%BC%8F%E8%A7%A3%E6%9E%90/gltf-tools.ipynb
Resolving github.com (github.com)... 192.30.255.113
Connecting to github.com (github.com)|192.30.255.113|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/CSP-GD/notes/master/practice/file_format/gltf%E6%A0%BC%E5%BC%8F%E8%A7%A3%E6%9E%90/gltf-tools.ipynb [following]
--2020-04-09 12:10:04-- https://raw.githubusercontent.com/CSP-GD/notes/master/practice/file_format/gltf%E6%A0%BC%E5%BC%8F%E8%A7%A3%E6%9E%90/gltf-tools.ipynb
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 5840 (5.7K) [text/plain]
Saving to: ‘gltf-tools.ipynb’
gltf-tools.ipynb 0%[ ] 0 --.-KB/s
gltf-tools.ipynb 100%[===================>] 5.70K --.-KB/s in 0s
2020-04-09 12:10:04 (74.8 MB/s) - ‘gltf-tools.ipynb’ saved [5840/5840]
glTF_tools is loaded
###Markdown
Load the file
###Code
!wget https://github.com/CSP-GD/notes/raw/master/practice/file_format/gltf%E6%A0%BC%E5%BC%8F%E8%A7%A3%E6%9E%90/accessor/cube.glb -O cube.glb
glb_file = open('./cube.glb', 'rb')
glb_bytes = glb_file.read()
model, buffers = glTF_tools.glb_loader(glb_bytes)
glTF_tools.render_JSON(model)
glTF_tools.render_JSON(model['accessors'])
glTF_tools.render_JSON(model['bufferViews'])
def accessor(idx, model, buffers):
_accessor = model['accessors'][idx]
_buffer_view = model['bufferViews'][_accessor['bufferView']]
_buffer = buffers[_buffer_view['buffer']]
byteLength = _buffer_view['byteLength']
byteOffset = _buffer_view['byteOffset']
return _accessor, _buffer[byteOffset:byteOffset + byteLength]
accessor(0, model, buffers)
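# Added sketch (not part of the original tutorial): decode the raw bytes
# returned by accessor() into a NumPy array, using the componentType -> dtype
# and type -> component-count mappings described above.
# Assumptions: the accessor's own byteOffset is 0 and the data is tightly
# packed (no byteStride), which holds for simple, non-interleaved buffers.
import numpy as np

COMPONENT_DTYPES = {
    5120: np.int8, 5121: np.uint8,
    5122: np.int16, 5123: np.uint16,
    5124: np.int32, 5125: np.uint32,
    5126: np.float32,
}
TYPE_COUNTS = {'SCALAR': 1, 'VEC2': 2, 'VEC3': 3, 'VEC4': 4, 'MAT2': 4, 'MAT3': 9, 'MAT4': 16}

def decode_accessor(idx, model, buffers):
    acc, raw = accessor(idx, model, buffers)
    dtype = COMPONENT_DTYPES[acc['componentType']]
    n = TYPE_COUNTS[acc['type']]
    values = np.frombuffer(raw, dtype=dtype, count=acc['count'] * n)
    return values.reshape(acc['count'], n) if n > 1 else values

decode_accessor(0, model, buffers)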
###Output
_____no_output_____ |
docs/session_5/03_s5_solution.ipynb | ###Markdown
Session 5 Quiz and Solution In Session 5, you looked at the framework of a case study. Building on elements from previous sessions — such as loading data, calculating an index, and plotting results — you used a new index, the Enhanced Vegetation Index (EVI), to show differences over time. The first half of EVI data was compared to the second half, and mapped to show relative increase or decrease. Quiz If you would like to be awarded a certificate of achievement at the end of the course, we ask that you [complete the quiz](https://docs.google.com/forms/d/e/1FAIpQLSfAck3EJgxeYDfTyseB0VwVmcyDMNq_PiFCnm3-JIoSRae3xw/viewform?usp=sf_link). You will need to supply your email address to progress towards the certificate. After you complete the quiz, you can check if your answers were correct by pressing the **View Accuracy** button.This quiz does not require a notebook to solve. However, you may find the EVI vegetation change detection notebook useful as a reference. If you would like to confirm that your vegetation change notebook works as expected, you can check it against the solution notebook provided below.
###Code
.. note::
The solution notebook below does not contain the answer to the quiz. Use it to check that you implemented the exercise correctly, then use your exercise notebook to help with the quiz. Accessing the solution notebook will not affect your progression towards the certificate.
###Output
_____no_output_____
###Markdown
Solution notebook
###Code
.. note::
We strongly encourage you to attempt the exercise on the previous page before downloading the solution below. This will help you learn how to use the Sandbox independently for your own analyses.
###Output
_____no_output_____ |
demo_plot/plotnine-examples/examples/geom_map.ipynb | ###Markdown
The Political Territories of Westeros*Layering different features on a Map* Read data and select features in Westeros only.
###Code
continents = gp.read_file('data/lands-of-ice-and-fire/continents.shp')
islands = gp.read_file('data/lands-of-ice-and-fire/islands.shp')
lakes = gp.read_file('data/lands-of-ice-and-fire/lakes.shp')
rivers = gp.read_file('data/lands-of-ice-and-fire/rivers.shp')
political = gp.read_file('data/lands-of-ice-and-fire/political.shp')
wall = gp.read_file('data/lands-of-ice-and-fire/wall.shp')
roads = gp.read_file('data/lands-of-ice-and-fire/roads.shp')
locations = gp.read_file('data/lands-of-ice-and-fire/locations.shp')
westeros = continents.query('name=="Westeros"')
islands = islands.query('continent=="Westeros" and name!="Summer Islands"')
lakes = lakes.query('continent=="Westeros"')
rivers = rivers.query('continent=="Westeros"')
roads = roads.query('continent=="Westeros"')
wg = westeros.geometry[0]
bool_idx = [wg.contains(g) for g in locations.geometry]
westeros_locations = locations[bool_idx]
cities = westeros_locations[westeros_locations['type'] == 'City'].copy()
###Output
_____no_output_____
###Markdown
Create the map by placing the features in layers in an order that limits obstruction. The `GeoDataFrame.geometry.centroid` property has the center coordinates of polygons; we use these to place the labels of the political regions.
###Code
# colors
water_color = '#a3ccff'
wall_color = 'white'
road_color = 'brown'
# Create label text by merging the territory name and
# the claimant to the territory
def fmt_labels(names, claimants):
labels = []
for name, claimant in zip(names, claimants):
if name:
labels.append('{} ({})'.format(name, claimant))
else:
labels.append('({})'.format(claimant))
return labels
def calculate_center(df):
"""
Calculate the centre of a geometry
This method first converts to a planar crs, gets the centroid
then converts back to the original crs. This gives a more
accurate
"""
original_crs = df.crs
planar_crs = 'EPSG:3857'
return df['geometry'].to_crs(planar_crs).centroid.to_crs(original_crs)
political['center'] = calculate_center(political)
cities['center'] = calculate_center(cities)
# Gallery Plot
(ggplot()
+ geom_map(westeros, fill=None)
+ geom_map(islands, fill=None)
+ geom_map(political, aes(fill='ClaimedBy'), color=None, show_legend=False)
+ geom_map(wall, color=wall_color, size=2)
+ geom_map(lakes, fill=water_color, color=None)
+ geom_map(rivers, aes(size='size'), color=water_color, show_legend=False)
+ geom_map(roads, aes(size='size'), color=road_color, alpha=0.5, show_legend=False)
+ geom_map(cities, size=1)
+ geom_text(
political,
aes('center.x', 'center.y', label='fmt_labels(name, ClaimedBy)'),
size=8,
fontweight='bold'
)
+ geom_text(
cities,
aes('center.x', 'center.y', label='name'),
size=8,
ha='left',
nudge_x=.20
)
+ labs(title="The Political Territories of Westeros")
+ scale_fill_brewer(type='qual', palette=8)
+ scale_x_continuous(expand=(0, 0, 0, 1))
+ scale_y_continuous(expand=(0, 1, 0, 0))
+ scale_size_continuous(range=(0.4, 1))
+ coord_cartesian()
+ theme_void()
+ theme(figure_size=(8, 12), panel_background=element_rect(fill=water_color))
)
###Output
_____no_output_____ |
demo_whole_sphere.ipynb | ###Markdown
[DeepSphere]: a spherical convolutional neural network[DeepSphere]: https://github.com/SwissDataScienceCenter/DeepSphere[Nathanaël Perraudin](https://perraudin.info), [Michaël Defferrard](http://deff.ch), Tomasz Kacprzak, Raphael Sgier Demo: whole sphere classification
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import shutil
# Run on CPU.
os.environ["CUDA_VISIBLE_DEVICES"] = ""
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
import healpy as hp
import tensorflow as tf
from deepsphere import models, experiment_helper, plot
from deepsphere.data import LabeledDataset
plt.rcParams['figure.figsize'] = (17, 5)
EXP_NAME = 'whole_sphere'
###Output
_____no_output_____
###Markdown
1 Data loadingThe data consists of a toy dataset that is sufficiently small to have fun with. It is made of 200 maps of size `NSIDE=64` split into 2 classes. The maps contain Gaussian random field realisations produced with the Synfast function from the Healpy package. The input power spectra were taken from a LambdaCDM model with two sets of parameters. These maps are not realistic cosmological mass maps, just a toy dataset. We downsampled them to `Nside=64` in order to make the processing faster.
###Code
data = np.load('data/maps_downsampled_64.npz')
assert(len(data['class1']) == len(data['class2']))
nclass = len(data['class1'])
###Output
_____no_output_____
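###Markdown
As a side note (an illustration only, not the code used to build this dataset; the actual LambdaCDM input spectra are not included here), a Gaussian random field map of this kind can be drawn from a power spectrum with healpy's synfast:
###Code
# Illustrative only: made-up power spectrum, not the spectra used for the dataset.
ell_demo = np.arange(3 * 64)
cl_demo = 1.0 / (ell_demo + 10.0)**2
demo_map = hp.synfast(cl_demo, nside=64)
hp.mollview(demo_map, title='Illustrative synfast map (not from the dataset)')
###Output
_____no_output_____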
###Markdown
Let us plot a map of each class. It is not simple to visually catch the differences.
###Code
cmin = min(np.min(data['class1']), np.min(data['class2']))
cmax = max(np.max(data['class1']), np.max(data['class2']))
cm = plt.cm.RdBu_r
cm.set_under('w')
hp.mollview(data['class1'][0], title='class 1', nest=True, cmap=cm, min=cmin, max=cmax)
hp.mollview(data['class2'][0], title='class 2', nest=True, cmap=cm, min=cmin, max=cmax)
###Output
_____no_output_____
###Markdown
However, those maps have different power spectral densities (PSD).
###Code
sample_psd_class1 = np.empty((nclass, 192))
sample_psd_class2 = np.empty((nclass, 192))
for i in range(nclass):
sample_psd_class1[i] = experiment_helper.psd(data['class1'][i])
sample_psd_class2[i] = experiment_helper.psd(data['class2'][i])
ell = np.arange(sample_psd_class1.shape[1])
plot.plot_with_std(ell, sample_psd_class1*ell*(ell+1), label='class 1, Omega_matter=0.3, mean', color='b')
plot.plot_with_std(ell,sample_psd_class2*ell*(ell+1), label='class 2, Omega_matter=0.5, mean', color='r')
plt.legend(fontsize=16);
plt.xlim([10, np.max(ell)])
plt.ylim([1e-6, 1e-3])
# plt.yscale('log')
plt.xscale('log')
plt.xlabel('$\ell$: spherical harmonic index', fontsize=18)
plt.ylabel('$C_\ell \cdot \ell \cdot (\ell+1)$', fontsize=18)
plt.title('Power Spectrum Density, 3-arcmin smoothing, noiseless, Nside=1024', fontsize=18);
###Output
_____no_output_____
###Markdown
2 Data preparationLet us split the data into training and testing sets. The raw data is stored in `x_raw` and the power spectral densities in `x_psd`.
###Code
# Normalize and transform the data, i.e. extract features.
x_raw = np.vstack((data['class1'], data['class2']))
x_raw = x_raw / np.mean(x_raw**2) # Apply some normalization (We do not want to affect the mean)
x_psd = preprocessing.scale(np.vstack((sample_psd_class1, sample_psd_class2)))
# Create the label vector
labels = np.zeros([x_raw.shape[0]], dtype=int)
labels[nclass:] = 1
# Random train / test split
ntrain = 150
ret = train_test_split(x_raw, x_psd, labels, test_size=2*nclass-ntrain, shuffle=True)
x_raw_train, x_raw_test, x_psd_train, x_psd_test, labels_train, labels_test = ret
print('Class 1 VS class 2')
print(' Training set: {} / {}'.format(np.sum(labels_train==0), np.sum(labels_train==1)))
print(' Test set: {} / {}'.format(np.sum(labels_test==0), np.sum(labels_test==1)))
###Output
_____no_output_____
###Markdown
3 Classification using SVMAs a baseline, let us classify our data using an SVM classifier.* An SVM based on the raw features cannot discriminate the data because the dimensionality of the data is too large.* We however observe that the PSD features are linearly separable.
###Code
clf = SVC(kernel='rbf')
clf.fit(x_raw_train, labels_train)
e_train = experiment_helper.model_error(clf, x_raw_train, labels_train)
e_test = experiment_helper.model_error(clf, x_raw_test, labels_test)
print('The training error is: {}%'.format(e_train*100))
print('The testing error is: {}%'.format(e_test*100))
clf = SVC(kernel='linear')
clf.fit(x_psd_train, labels_train)
e_train = experiment_helper.model_error(clf, x_psd_train, labels_train)
e_test = experiment_helper.model_error(clf, x_psd_test, labels_test)
print('The training error is: {}%'.format(e_train*100))
print('The testing error is: {}%'.format(e_test*100))
###Output
_____no_output_____
###Markdown
4 Classification using DeepSphereLet us now classify our data using a spherical convolutional neural network.Three types of architectures are suitable for this task:1. Classic CNN: the classic ConvNet composed of some convolutional layers followed by some fully connected layers.2. Stat layer: a statistical layer, which computes some statistics over the pixels, is inserted between the convolutional and fully connected layers. The role of this added layer is to make the prediction invariant to the position of the pixels on the sphere.3. Fully convolutional: the fully connected layers are removed and the network outputs many predictions at various spatial locations that are then averaged.On this simple task, all architectures can reach 100% test accuracy. Nevertheless, the number of parameters to learn decreases and training converges faster. A fully convolutional network is much faster and more efficient in terms of parameters. It does however assume that all pixels have the same importance and that their location does not matter. While that is true for cosmological applications, it may not be for others.
###Code
params = dict()
params['dir_name'] = EXP_NAME
# Types of layers.
params['conv'] = 'chebyshev5' # Graph convolution: chebyshev5 or monomials.
params['pool'] = 'max' # Pooling: max or average.
params['activation'] = 'relu' # Non-linearity: relu, elu, leaky_relu, softmax, tanh, etc.
params['statistics'] = None # Statistics (for invariance): None, mean, var, meanvar, hist.
# Architecture.
architecture = 'fully_convolutional'
if architecture == 'classic_cnn':
params['statistics'] = None
params['nsides'] = [64, 32, 16, 16] # Pooling: number of pixels per layer.
params['F'] = [5, 5, 5] # Graph convolutional layers: number of feature maps.
params['M'] = [50, 2] # Fully connected layers: output dimensionalities.
elif architecture == 'stat_layer':
params['statistics'] = 'meanvar'
params['nsides'] = [64, 32, 16, 16] # Pooling: number of pixels per layer.
params['F'] = [5, 5, 5] # Graph convolutional layers: number of feature maps.
params['M'] = [50, 2] # Fully connected layers: output dimensionalities.
elif architecture == 'fully_convolutional':
params['statistics'] = 'mean'
params['nsides'] = [64, 32, 16, 8, 8]
params['F'] = [5, 5, 5, 2]
params['M'] = []
params['K'] = [10] * len(params['F']) # Polynomial orders.
params['batch_norm'] = [True] * len(params['F']) # Batch normalization.
# Regularization.
params['regularization'] = 0 # Amount of L2 regularization over the weights (will be divided by the number of weights).
params['dropout'] = 0.5 # Percentage of neurons to keep.
# Training.
params['num_epochs'] = 12 # Number of passes through the training data.
params['batch_size'] = 16 # Number of samples per training batch. Should be a power of 2 for greater speed.
params['eval_frequency'] = 15 # Frequency of model evaluations during training (influence training time).
params['scheduler'] = lambda step: 1e-1 # Constant learning rate.
params['optimizer'] = lambda lr: tf.train.GradientDescentOptimizer(lr)
#params['optimizer'] = lambda lr: tf.train.MomentumOptimizer(lr, momentum=0.5)
#params['optimizer'] = lambda lr: tf.train.AdamOptimizer(lr, beta1=0.9, beta2=0.999, epsilon=1e-8)
model = models.deepsphere(**params)
# Cleanup before running again.
shutil.rmtree('summaries/{}/'.format(EXP_NAME), ignore_errors=True)
shutil.rmtree('checkpoints/{}/'.format(EXP_NAME), ignore_errors=True)
training = LabeledDataset(x_raw_train, labels_train)
testing = LabeledDataset(x_raw_test, labels_test)
accuracy_validation, loss_validation, loss_training, t_step = model.fit(training, testing)
plot.plot_loss(loss_training, loss_validation, t_step, params['eval_frequency'])
error_train = experiment_helper.model_error(model, x_raw_train, labels_train)
error_test = experiment_helper.model_error(model, x_raw_test, labels_test)
print('The training error is: {:.2%}'.format(error_train))
print('The testing error is: {:.2%}'.format(error_test))
###Output
_____no_output_____
###Markdown
5 Filters visualizationThe package offers a few different visualizations for the learned filters. First we can simply look at the Chebyshev coefficients. This visualization is not very interpretable for humans, but can help with debugging problems related to optimization.
###Code
layer=2
model.plot_chebyshev_coeffs(layer)
###Output
_____no_output_____
###Markdown
We observe the Chebyshev polynomials, i.e. the filters in the graph spectral domain. This visualization can help to understand which graph frequencies are picked up by the filtering operation. It is mostly interpretable by people from the graph signal processing community.
###Code
model.plot_filters_spectral(layer);
###Output
_____no_output_____
###Markdown
Here comes one of the most human-friendly representations of the filters. It consists of a section of the filters "projected" on the sphere. Because of the irregularity of the healpix sampling, this representation of the filters may not look very smooth.
###Code
mpl.rcParams.update({'font.size': 16})
model.plot_filters_section(layer, title='');
###Output
_____no_output_____
###Markdown
Finally, we can simply look at the filters on the sphere. This representation clearly displays the sampling artifacts.
###Code
plt.rcParams['figure.figsize'] = (10, 10)
model.plot_filters_gnomonic(layer, title='')
###Output
_____no_output_____ |
15 - Advanced Statistical Methods in Python/3_Linear Regression with sklearn/6_Multiple Linear Regression with sklearn (3:10)/sklearn - Multiple Linear Regression.ipynb | ###Markdown
Multiple linear regression Import the relevant libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
Load the data
###Code
data = pd.read_csv('1.02. Multiple linear regression.csv')
data.head()
data.describe()
###Output
_____no_output_____
###Markdown
Create the multiple linear regression Declare the dependent and independent variables
###Code
x = data[['SAT','Rand 1,2,3']]
y = data['GPA']
###Output
_____no_output_____
###Markdown
Regression itself
###Code
reg = LinearRegression()
reg.fit(x,y)
reg.coef_
reg.intercept_
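# Added check (not in the original lecture cell): a quick look at how well the
# model fits and a sample prediction. The SAT=1700, Rand=1 input is a made-up row.
print('R-squared:', reg.score(x, y))
print('Predicted GPA:', reg.predict([[1700, 1]]))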
###Output
_____no_output_____ |
Neural Networks and Deep Learning/week 4/Building+your+Deep+Neural+Network+-+Step+by+Step+v8.ipynb | ###Markdown
Building your Deep Neural Network: Step by StepWelcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!- In this notebook, you will implement all the functions required to build a deep neural network.- In the next assignment, you will use these functions to build a deep neural network for image classification.**After this assignment you will be able to:**- Use non-linear units like ReLU to improve your model- Build a deeper neural network (with more than 1 hidden layer)- Implement an easy-to-use neural network class**Notation**:- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example.- Lowerscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).Let's get started! 1 - PackagesLet's first import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the main package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- dnn_utils provides some necessary functions for this notebook.- testCases provides some test cases to assess the correctness of your functions- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
###Code
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v4 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
n = np.ones((3,4))
print(n)
n.sum(axis=0)
###Output
[[ 1. 1. 1. 1.]
[ 1. 1. 1. 1.]
[ 1. 1. 1. 1.]]
###Markdown
2 - Outline of the AssignmentTo build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:- Initialize the parameters for a two-layer network and for an $L$-layer neural network.- Implement the forward propagation module (shown in purple in the figure below). - Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$). - We give you the ACTIVATION function (relu/sigmoid). - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function. - Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.- Compute the loss.- Implement the backward propagation module (denoted in red in the figure below). - Complete the LINEAR part of a layer's backward propagation step. - We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward) - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function. - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function- Finally update the parameters. **Figure 1****Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps. 3 - InitializationYou will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers. 3.1 - 2-layer Neural Network**Exercise**: Create and initialize the parameters of the 2-layer neural network.**Instructions**:- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*. - Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape.- Use zero initialization for the biases. Use `np.zeros(shape)`.
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x)*0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h)*0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]]
b1 = [[ 0.]
[ 0.]]
W2 = [[ 0.01744812 -0.00761207]]
b2 = [[ 0.]]
###Markdown
**Expected output**: **W1** [[ 0.01624345 -0.00611756 -0.00528172] [-0.01072969 0.00865408 -0.02301539]] **b1** [[ 0.] [ 0.]] **W2** [[ 0.01744812 -0.00761207]] **b2** [[ 0.]] 3.2 - L-layer Neural NetworkThe initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then: **Shape of W** **Shape of b** **Activation** **Shape of Activation** **Layer 1** $(n^{[1]},12288)$ $(n^{[1]},1)$ $Z^{[1]} = W^{[1]} X + b^{[1]} $ $(n^{[1]},209)$ **Layer 2** $(n^{[2]}, n^{[1]})$ $(n^{[2]},1)$ $Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ $(n^{[2]}, 209)$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ **Layer L-1** $(n^{[L-1]}, n^{[L-2]})$ $(n^{[L-1]}, 1)$ $Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ $(n^{[L-1]}, 209)$ **Layer L** $(n^{[L]}, n^{[L-1]})$ $(n^{[L]}, 1)$ $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$ $(n^{[L]}, 209)$ Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if: $$ W = \begin{bmatrix} j & k & l\\ m & n & o \\ p & q & r \end{bmatrix}\;\;\; X = \begin{bmatrix} a & b & c\\ d & e & f \\ g & h & i \end{bmatrix} \;\;\; b =\begin{bmatrix} s \\ t \\ u\end{bmatrix}\tag{2}$$Then $WX + b$ will be:$$ WX + b = \begin{bmatrix} (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\ (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\ (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u\end{bmatrix}\tag{3} $$ **Exercise**: Implement initialization for an L-layer Neural Network. **Instructions**:- The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.- Use zeros initialization for the biases. Use `np.zeros(shape)`.- We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers! - Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).```python if L == 1: parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01 parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))```
###Code
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l],layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l],1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[ 1.78862847 0.43650985 0.09649747 -1.8634927 -0.2773882 ]
[-0.35475898 -0.08274148 -0.62700068 -0.04381817 -0.47721803]
[-1.31386475 0.88462238 0.88131804 1.70957306 0.05003364]
[-0.40467741 -0.54535995 -1.54647732 0.98236743 -1.10106763]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-1.18504653 -0.2056499 1.48614836 0.23671627]
[-1.02378514 -0.7129932 0.62524497 -0.16051336]
[-0.76883635 -0.23003072 0.74505627 1.97611078]]
b2 = [[ 0.]
[ 0.]
[ 0.]]
###Markdown
**Expected output**: **W1** [[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388] [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218] [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034] [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.01185047 -0.0020565 0.01486148 0.00236716] [-0.01023785 -0.00712993 0.00625245 -0.00160513] [-0.00768836 -0.00230031 0.00745056 0.01976111]] **b2** [[ 0.] [ 0.] [ 0.]] 4 - Forward propagation module 4.1 - Linear Forward Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:- LINEAR- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. - [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)The linear forward module (vectorized over all the examples) computes the following equations:$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$where $A^{[0]} = X$. **Exercise**: Build the linear part of forward propagation.**Reminder**:The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.
###Code
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W,A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
###Output
Z = [[ 3.26295337 -1.23429987]]
###Markdown
**Expected output**: **Z** [[ 3.26295337 -1.23429987]] 4.2 - Linear-Activation ForwardIn this notebook, you will use two activation functions:- **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value "`a`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call: ``` pythonA, activation_cache = sigmoid(Z)```- **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:``` pythonA, activation_cache = relu(Z)``` For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
###Code
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
###Output
With sigmoid: A = [[ 0.96890023 0.11013289]]
With ReLU: A = [[ 3.43896131 0. ]]
###Markdown
**Expected output**: **With sigmoid: A ** [[ 0.96890023 0.11013289]] **With ReLU: A ** [[ 3.43896131 0. ]] **Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers. d) L-Layer Model For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID. **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model**Exercise**: Implement the forward propagation of the above model.**Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.) **Tips**:- Use the functions you had previously written - Use a for loop to replicate [LINEAR->RELU] (L-1) times- Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`.
###Code
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
              every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
        A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation="relu")
        caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
    AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation="sigmoid")
    caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case_2hidden()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
###Output
_____no_output_____
###Markdown
**AL** [[ 0.03921668 0.70498921 0.19734387 0.04728177]] **Length of caches list ** 3 Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions. 5 - Cost functionNow you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
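As a quick sanity check of equation (7): assuming the test case below uses $Y = [1, 1, 1]$ and $A^{[L]} = [0.8, 0.9, 0.4]$ (an assumption inferred from the quoted expected output, since the test helper is not shown here), the cost works out to $J = -\frac{1}{3}\big(\log 0.8 + \log 0.9 + \log 0.4\big) \approx 0.4149$, which matches the expected value 0.41493159961539694.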
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
    cost = -(1. / m) * np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
###Output
_____no_output_____
###Markdown
**Expected Output**: **cost** 0.41493159961539694 6 - Backward propagation moduleJust like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. **Reminder**: **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* <!-- For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.This is why we talk about **backpropagation**.!-->Now, similar to forward propagation, you are going to build the backward propagation in three steps:- LINEAR backward- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model) 6.1 - Linear backwardFor layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]} dA^{[l-1]})$. **Figure 4** The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need:$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$ **Exercise**: Use the 3 formulas above to implement linear_backward().
###Code
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
    dW = (1. / m) * np.dot(dZ, A_prev.T)
    db = (1. / m) * np.sum(dZ, axis=1, keepdims=True)
    dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
###Output
_____no_output_____
###Markdown
**Expected Output**: **dA_prev** [[ 0.51822968 -0.19517421] [-0.40506361 0.15255393] [ 2.37496825 -0.89445391]] **dW** [[-0.10076895 1.40685096 1.64992505]] **db** [[ 0.50629448]] 6.2 - Linear-Activation backwardNext, you will create a function that merges the two helper functions: **`linear_backward`** and the backward step for the activation **`linear_activation_backward`**. To help you implement `linear_activation_backward`, we provided two backward functions:- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:```pythondZ = sigmoid_backward(dA, activation_cache)```- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:```pythondZ = relu_backward(dA, activation_cache)```If $g(.)$ is the activation function, `sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$. **Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer.
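As with the forward helpers, `relu_backward` and `sigmoid_backward` are provided by the assignment and not shown here. A minimal sketch of what they might look like, applying equation (11) (again an assumption for illustration only), is:
```python
import numpy as np

def relu_backward(dA, cache):
    # g'(Z) is 1 where Z > 0 and 0 elsewhere, so dZ is dA masked by Z > 0
    Z = cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

def sigmoid_backward(dA, cache):
    # with s = sigmoid(Z), g'(Z) = s * (1 - s), so dZ = dA * s * (1 - s)
    Z = cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)
```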
###Code
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
        dZ = sigmoid_backward(dA, activation_cache)
        dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
dAL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
###Output
_____no_output_____
###Markdown
**Expected output with sigmoid:** dA_prev [[ 0.11017994 0.01105339] [ 0.09466817 0.00949723] [-0.05743092 -0.00576154]] dW [[ 0.10266786 0.09778551 -0.01968084]] db [[-0.05729622]] **Expected output with relu:** dA_prev [[ 0.44090989 0. ] [ 0.37883606 0. ] [-0.2298228 0. ]] dW [[ 0.44513824 0.37371418 -0.10478989]] db [[-0.20837892]] 6.3 - L-Model Backward Now you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. **Figure 5** : Backward pass ** Initializing backpropagation**:To backpropagate through this network, we know that the output is, $A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):```pythondAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) derivative of cost with respect to AL```You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : $$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`.**Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
###Code
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))  # derivative of cost with respect to AL
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "dAL, current_cache". Outputs: "grads["dAL-1"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
    current_cache = caches[L - 1]
    grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation="sigmoid")
### END CODE HERE ###
# Loop from l=L-2 to l=0
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 1)], current_cache". Outputs: "grads["dA" + str(l)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 1)], current_cache, activation="relu")
        grads["dA" + str(l)] = dA_prev_temp
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print_grads(grads)
###Output
_____no_output_____
###Markdown
**Expected Output** dW1 [[ 0.41010002 0.07807203 0.13798444 0.10502167] [ 0. 0. 0. 0. ] [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] db1 [[-0.22007063] [ 0. ] [-0.02835349]] dA1 [[ 0.12913162 -0.44014127] [-0.14175655 0.48317296] [ 0.01663708 -0.05670698]] 6.4 - Update ParametersIn this section you will update the parameters of the model, using gradient descent: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. **Exercise**: Implement `update_parameters()` to update your parameters using gradient descent.**Instructions**:Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
###Code
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] = None
parameters["b" + str(l+1)] = None
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
###Output
_____no_output_____ |
examples/3.03-WindPower-Design_Onshore_Turbine.ipynb | ###Markdown
Suggest Onshore Wind Turbine * RESKit can suggest a turbine design based off site conditions (average 100m wind speed)* Suggestions are tailored to a far future context (~2050)* Since the suggestion model computes a specific capacity, a desired rotor diameter must be specified
###Code
from reskit import windpower
# Get suggestion for one location
design = windpower.suggestOnshoreTurbine(averageWindspeed=6.70, # Assume average 100m wind speed is 6.70 m/s
rotordiam=136 )
print("Suggested capacity is {:.0f} kW".format( design['capacity'] ) )
print("Suggested hub height is {:.0f} meters".format( design['hubHeight'] ) )
print("Suggested rotor diamter is {:.0f} meters".format( design['rotordiam'] ) )
print("Suggested specific capacity is {:.0f} W/m2".format( design['specificPower'] ) )
# Get suggestion for many locations
designs = windpower.suggestOnshoreTurbine(averageWindspeed=[6.70,4.34,5.66,4.65,5.04,4.62,4.64,5.11,6.23,5.25,],
rotordiam=136 )
designs.round()
###Output
_____no_output_____ |
notebooks/4.4-me-classification-doc2vec.ipynb | ###Markdown
Classification - Doc2Vec This notebook discusses Multi-label classification methods for the [academia.stackexchange.com](https://academia.stackexchange.com/) data dump in [Doc2Vec](https://radimrehurek.com/gensim_3.8.3/models/doc2vec.html) representation. Table of Contents* [Data import](data_import)* [Data preparation](data_preparation)* [Methods](methods)* [Evaluation](evaluation)
###Code
import matplotlib.pyplot as plt
import numpy as np
import warnings
import re
from joblib import load
from pathlib import Path
from sklearn.metrics import classification_report
from academia_tag_recommender.experiments.experimental_classifier import available_classifier_paths
warnings.filterwarnings('ignore')
plt.style.use('plotstyle.mplstyle')
plt.rcParams.update({
'axes.grid.axis': 'y',
'figure.figsize': (16, 8),
'font.size': 18,
'lines.linestyle': '-',
'lines.linewidth': 0.7,
})
RANDOM_STATE = 0
###Output
_____no_output_____
###Markdown
Data import
###Code
from academia_tag_recommender.experiments.data import ExperimentalData
ed = ExperimentalData.load()
X_train, X_test, y_train, y_test = ed.get_train_test_set()
from academia_tag_recommender.experiments.transformer import Doc2VecTransformer
from academia_tag_recommender.experiments.experimental_classifier import ExperimentalClassifier
transformer = Doc2VecTransformer.load('doc2vec')
train = transformer.fit(X_train)
test = transformer.transform(X_test)
###Output
_____no_output_____
###Markdown
Data Preparation
###Code
def create_classifier(classifier, name):
experimental_classifier = ExperimentalClassifier.load(transformer, classifier, name)
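    # the classifier is loaded pre-trained from disk; uncomment the two lines
    # below to retrain and re-score instead of reusing the stored results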
#experimental_classifier.train(train, y_train)
#experimental_classifier.score(test, y_test)
print('Training: {}s'.format(experimental_classifier.training_time))
print('Test: {}s'.format(experimental_classifier.test_time))
experimental_classifier.evaluation.print_stats()
###Output
_____no_output_____
###Markdown
Methods* [Problem Transformation](problem_transformation)* [Algorithm Adaption](algorithm_adaption)* [Ensembles](ensembles) Problem Transformation- [DecisionTreeClassifier](decisiontree)- [KNeighborsClassifier](kNN)- [MLPClassifier](mlp)- [MultioutputClassifier](multioutput)- [Classwise Classifier](classwise)- [Classifier Chain](chain)- [Label Powerset](label_powerset) **DecisionTreeClassifier** [source](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.htmlsklearn.tree.DecisionTreeClassifier)
###Code
from sklearn.tree import DecisionTreeClassifier
create_classifier(DecisionTreeClassifier(random_state=RANDOM_STATE), 'DecisionTreeClassifier')
###Output
Training: 76.27636694908142s
Test: 0.061362504959106445s
Hamming Loss Accuracy Precision Recall F1
samples 0.024383631388022655 0.003990326481257557 0.10874647319629181 0.10883514711809755 0.09939377363198404
micro 0.1039592708759489 0.10948346019436067 0.10664987875396381
macro 0.03660010385642078 0.037981143240158055 0.037131329451119834
###Markdown
**KNeighborsClassifier** [source](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.htmlsklearn.neighbors.KNeighborsClassifier)
###Code
from sklearn.neighbors import KNeighborsClassifier
create_classifier(KNeighborsClassifier(), 'KNeighborsClassifier')
###Output
Training: 0.9465596675872803s
Test: 90.63134479522705s
Hamming Loss Accuracy Precision Recall F1
samples 0.013180805702284732 0.04885126964933494 0.17735792019347038 0.08295042321644498 0.10687856279150111
micro 0.5285073670723895 0.07898894154818326 0.1374370080379826
macro 0.23687247740865236 0.027264107708750013 0.04498031199208422
###Markdown
**MLPClassifier** [source](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.htmlsklearn.neural_network.MLPClassifier)
###Code
from sklearn.neural_network import MLPClassifier
create_classifier(MLPClassifier(random_state=RANDOM_STATE), 'MLPClassifier')
###Output
Training: 111.18141150474548s
Test: 0.06784701347351074s
Hamming Loss Accuracy Precision Recall F1
samples 0.013068796537898556 0.05973397823458283 0.34634450394426214 0.20785771866182992 0.24126103843758012
micro 0.5219059405940594 0.20187658576284168 0.2911388035486209
macro 0.33849144103022694 0.13960900013562563 0.18556977476916545
###Markdown
**MultioutputClassifier** [source](https://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.htmlsklearn.multioutput.MultiOutputClassifier)MultiOutputClassifier wraps a single-label sklearn classifier so that it can be used for Binary Relevance, i.e. one binary classifier is fitted per label.
###Code
from sklearn.multioutput import MultiOutputClassifier
from sklearn.svm import LinearSVC
create_classifier(MultiOutputClassifier(LinearSVC(random_state=RANDOM_STATE)), 'MultioutputClassifier(LinearSVC)')
from sklearn.linear_model import LogisticRegression
create_classifier(MultiOutputClassifier(LogisticRegression(random_state=RANDOM_STATE)), 'MultioutputClassifier(LogisticRegression)')
###Output
Training: 26.97082209587097s
Test: 0.806842565536499s
Hamming Loss Accuracy Precision Recall F1
samples 0.013457010119009738 0.051390568319226115 0.26861221153117226 0.18266021765417173 0.19639741270299985
micro 0.48389126604580923 0.18406816985016036 0.2666897867175308
macro 0.3042910235834853 0.12026726313350174 0.16194533928707805
###Markdown
**Classifier Chain** [source](http://scikit.ml/api/skmultilearn.problem_transform.cc.htmlskmultilearn.problem_transform.ClassifierChain)[Read et al., 2011][1][1]: https://doi.org/10.1007/s10994-011-5256-5
###Code
from skmultilearn.problem_transform import ClassifierChain
create_classifier(ClassifierChain(classifier=LinearSVC(random_state=RANDOM_STATE)), 'ClassifierChain(LinearSVC)')
create_classifier(ClassifierChain(classifier=LogisticRegression(random_state=RANDOM_STATE)), 'ClassifierChain(LogisticRegression)')
###Output
Training: 87.58442544937134s
Test: 2.549346446990967s
Hamming Loss Accuracy Precision Recall F1
samples 0.01343409915356711 0.057799274486094315 0.2970840418482499 0.20300080612656188 0.21943120227884433
micro 0.48730671590122315 0.20216381827756236 0.2857722889527999
macro 0.3049837549506505 0.1281596617636677 0.16928884799055832
###Markdown
**Label Powerset** [source](http://scikit.ml/api/skmultilearn.problem_transform.lp.htmlskmultilearn.problem_transform.LabelPowerset)
###Code
from skmultilearn.problem_transform import LabelPowerset
create_classifier(LabelPowerset(classifier=LinearSVC(random_state=RANDOM_STATE)), 'LabelPowerset(LinearSVC)')
from skmultilearn.problem_transform import LabelPowerset
from sklearn.linear_model import LogisticRegression
create_classifier(LabelPowerset(classifier=LogisticRegression(random_state=RANDOM_STATE)), 'LabelPowerset(LogisticRegression)')
###Output
Training: 1539.2145681381226s
Test: 1.93672776222229s
Hamming Loss Accuracy Precision Recall F1
samples 0.014385540635142875 0.04268440145102781 0.4307194679564692 0.23454655380894798 0.2850065257864532
micro 0.42233493342994294 0.22322753602374457 0.2920764171625431
macro 0.25895784180538806 0.09682338410257547 0.12731302085386986
###Markdown
Algorithm Adaption- [MLkNN](mlknn)- [MLARAM](mlaram) **MLkNN** [source](http://scikit.ml/api/skmultilearn.adapt.mlknn.htmlmultilabel-k-nearest-neighbours)> Firstly, for each test instance, its k nearest neighbors in the training set are identified. Then, according to statistical information gained from the label sets of these neighboring instances, i.e. the number of neighboring instances belonging to each possible class, maximum a posteriori (MAP) principle is utilized to determine the label set for the test instance.[Zhang & Zhou, 2007][1][1]: https://doi.org/10.1016/j.patcog.2006.12.019
###Code
from skmultilearn.adapt import MLkNN
create_classifier(MLkNN(), 'MLkNN')
###Output
Training: 294.2208788394928s
Test: 100.74386668205261s
Hamming Loss Accuracy Precision Recall F1
samples 0.013298542608031566 0.052962515114873036 0.2613986295848448 0.13766424828698107 0.1687763190725706
micro 0.499371746544606 0.13318014265881564 0.21027966742252455
macro 0.27156017711272884 0.05862752367723445 0.08853517824979555
###Markdown
**MLARAM** [source](http://scikit.ml/api/skmultilearn.adapt.mlaram.htmlskmultilearn.adapt.MLARAM)> an extension of fuzzy Adaptive Resonance Associative Map (ARAM) – an Adaptive Resonance Theory (ART)based neural network. It aims at speeding up the classification process in the presence of very large data.[F. Benites & E. Sapozhnikova, 2015][7][7]: https://doi.org/10.1109/ICDMW.2015.14
###Code
from skmultilearn.adapt import MLARAM
create_classifier(MLARAM(), 'MLARAM')
###Output
Training: 35.39668679237366s
Test: 226.74313640594482s
Hamming Loss Accuracy Precision Recall F1
samples 0.01710112645580093 0.025030229746070134 0.3575491203549841 0.22192261185006046 0.24775952971717358
micro 0.2996382636655949 0.21413183972425678 0.24976966245079155
macro 0.14192316760561402 0.049015527374583624 0.055665691818524494
###Markdown
Ensembles- [RAkELo](rakelo)- [RAkELd](rakeld)- [MajorityVotingClassifier](majority_voting)- [LabelSpacePartitioningClassifier](label_space) **RAkELo** [source](http://scikit.ml/api/skmultilearn.ensemble.rakelo.htmlskmultilearn.ensemble.RakelO)> Rakel: randomly breaking the initial set of labels into a number of small-sized labelsets, and employing [Label powerset] to train a corresponding multilabel classifier.[Tsoumakas et al., 2011][1]> Divides the label space in to m subsets of size k, trains a Label Powerset classifier for each subset and assign a label to an instance if more than half of all classifiers (majority) from clusters that contain the label assigned the label to the instance.[skmultilearn][2][1]: https://doi.org/10.1109/TKDE.2010.164[2]: http://scikit.ml/api/skmultilearn.ensemble.rakelo.htmlskmultilearn.ensemble.RakelO
###Code
from skmultilearn.ensemble import RakelO
create_classifier(RakelO(
base_classifier=LinearSVC(random_state=RANDOM_STATE),
model_count=y_train.shape[1]
), 'RakelO(LinearSVC)')
create_classifier(RakelO(
base_classifier=LogisticRegression(random_state=RANDOM_STATE),
model_count=y_train.shape[1]
), 'RakelO(LogisticRegression)')
###Output
Training: 370.52514123916626s
Test: 119.96998405456543s
Hamming Loss Accuracy Precision Recall F1
samples 0.013464010691783873 0.05344619105199516 0.26955103139504594 0.18479040709391376 0.19812030601415556
micro 0.48337277369535436 0.18579156493848437 0.26841413652396434
macro 0.29799527974185985 0.12219491689906783 0.16328009254607176
###Markdown
**RAkELd** [source](http://scikit.ml/api/skmultilearn.ensemble.rakeld.htmlskmultilearn.ensemble.RakelD)>Divides the label space in to equal partitions of size k, trains a Label Powerset classifier per partition and predicts by summing the result of all trained classifiers.[skmultilearn][3][3]: http://scikit.ml/api/skmultilearn.ensemble.rakeld.htmlskmultilearn.ensemble.RakelD
###Code
from skmultilearn.ensemble import RakelD
create_classifier(RakelD(base_classifier=LinearSVC(random_state=RANDOM_STATE)), 'RakelD(LinearSVC)')
create_classifier(RakelD(base_classifier=LogisticRegression(random_state=RANDOM_STATE)), 'RakelD(LogisticRegression)')
###Output
Training: 93.14573168754578s
Test: 36.9341824054718s
Hamming Loss Accuracy Precision Recall F1
samples 0.013516196779736523 0.05259975816203144 0.27088899847968767 0.18747279322853688 0.19969524765587224
micro 0.47876870665531085 0.18837665757097036 0.27037240621135084
macro 0.30104692624358603 0.12634089701838583 0.16752655504097633
###Markdown
***Clustering***
###Code
from skmultilearn.cluster import LabelCooccurrenceGraphBuilder
def get_graph_builder():
graph_builder = LabelCooccurrenceGraphBuilder(weighted=True, include_self_edges=False)
edge_map = graph_builder.transform(y_train)
return graph_builder
from skmultilearn.cluster import IGraphLabelGraphClusterer
import igraph as ig
def get_clusterer():
graph_builder = get_graph_builder()
clusterer_igraph = IGraphLabelGraphClusterer(graph_builder=graph_builder, method='walktrap')
partition = clusterer_igraph.fit_predict(X_train, y_train)
return clusterer_igraph
clusterer_igraph = get_clusterer()
###Output
_____no_output_____
###Markdown
**MajorityVotingClassifier** [source](http://scikit.ml/api/skmultilearn.ensemble.voting.htmlskmultilearn.ensemble.MajorityVotingClassifier)
###Code
from skmultilearn.ensemble.voting import MajorityVotingClassifier
create_classifier(MajorityVotingClassifier(
classifier=ClassifierChain(classifier=LinearSVC(random_state=RANDOM_STATE)),
clusterer=clusterer_igraph
), 'MajorityVotingClassifier(ClassifierChain(LinearSVC))')
create_classifier(MajorityVotingClassifier(
classifier=ClassifierChain(classifier=LogisticRegression(random_state=RANDOM_STATE)),
clusterer=clusterer_igraph
), 'MajorityVotingClassifier(ClassifierChain(LogisticRegression))')
###Output
Training: 69.36259269714355s
Test: 5.972095727920532s
Hamming Loss Accuracy Precision Recall F1
samples 0.013481193915865844 0.05392986698911729 0.2688883757737919 0.18759572752922207 0.19927400362399295
micro 0.4820293398533007 0.18875963425726458 0.2712855619388352
macro 0.3040693392764102 0.1252751844800329 0.16663963290039285
###Markdown
Evaluation
###Code
paths = available_classifier_paths('doc2vec', 'size=100.')
paths = [path for path in paths if '-' not in path.name]
evals = []
for path in paths:
clf = load(path)
evaluation = clf.evaluation
evals.append([str(clf), evaluation])
from matplotlib.ticker import MultipleLocator
x_ = ['Precision', 'F1', 'Recall']
fig, axes = plt.subplots(1, 3, sharey=True)
axes[0].set_title('Sample')
axes[1].set_title('Macro')
axes[2].set_title('Micro')
for ax in axes:
ax.set_xticklabels(x_, rotation=45, ha='right')
ax.yaxis.set_major_locator(MultipleLocator(0.1))
ax.yaxis.set_minor_locator(MultipleLocator(0.05))
ax.set_ylim(-0.05, 1.05)
for eval_ in evals:
evaluator = eval_[1]
axes[0].plot(x_, [evaluator.precision_samples, evaluator.f1_samples, evaluator.recall_samples], label=eval_[0])
axes[1].plot(x_, [evaluator.precision_macro, evaluator.f1_macro, evaluator.recall_macro])
axes[2].plot(x_, [evaluator.precision_micro, evaluator.f1_micro, evaluator.recall_micro])
axes[0].set_ylabel('Recall macro')
fig.legend(bbox_to_anchor=(1,0.5), loc='center left')
plt.show()
top_3 = sorted(paths, key=lambda x: load(x).evaluation.recall_macro, reverse=True)[:3]
def per_label_accuracy(orig, prediction):
if not isinstance(prediction, np.ndarray):
prediction = prediction.toarray()
l = 1 - np.absolute(orig - prediction)
return np.average(l, axis=0)
from sklearn.metrics import classification_report
classwise_results = []
for clf_path in top_3:
clf = load(clf_path)
prediction = clf.predict(test)
label_accuracies = per_label_accuracy(y_test, prediction)
report = classification_report(y_test, prediction, output_dict=True, zero_division=0)
classwise_report = {}
for i, result in enumerate(report):
if i < len(label_accuracies):
classwise_report[result] = report[result]
classwise_report[result]['accuracy'] = label_accuracies[int(result)]
classwise_results.append((clf, classwise_report))
x_ = np.arange(0, len(y_test[0]))
fig, axes = plt.subplots(3, 1, figsize=(16, 12))
for i, classwise_result in enumerate(classwise_results):
name, results = classwise_result
sorted_results = sorted(results, key=lambda x: results[x]['support'], reverse=True)
axes[i].set_title(name, fontsize=18)
axes[i].plot(x_, [results[result]['precision'] for result in sorted_results][0:len(x_)], label='Precision')
axes[i].plot(x_, [results[result]['recall'] for result in sorted_results][0:len(x_)], label='Recall')
axes[i].plot(x_, [results[result]['f1-score'] for result in sorted_results][0:len(x_)], label='F1')
axes[i].plot(x_, [results[result]['accuracy'] for result in sorted_results][0:len(x_)], label="Accuracy")
axes[i].set_ylabel('Score')
axes[i].yaxis.set_major_locator(MultipleLocator(0.5))
axes[i].yaxis.set_minor_locator(MultipleLocator(0.25))
axes[i].set_ylim(-0.05, 1.05)
axes[2].set_xlabel('Label (sorted by support)')
lines, labels = fig.axes[-1].get_legend_handles_labels()
fig.legend(lines, labels, loc='right')
plt.subplots_adjust(hspace=0.5, right=0.8)
plt.show()
###Output
_____no_output_____
###Markdown
**Impact of vector size**
###Code
transformer = Doc2VecTransformer.load('doc2vec', 100)
#train = transformer.fit(X_train)
#test = transformer.transform(X_test)
create_classifier(MultiOutputClassifier(LogisticRegression(random_state=RANDOM_STATE)), 'MultioutputClassifier(LogisticRegression)')
transformer = Doc2VecTransformer.load('doc2vec', 200)
#train = transformer.fit(X_train)
#test = transformer.transform(X_test)
create_classifier(MultiOutputClassifier(LogisticRegression(random_state=RANDOM_STATE)), 'MultioutputClassifier(LogisticRegression)')
transformer = Doc2VecTransformer.load('doc2vec', 500)
#train = transformer.fit(X_train)
#test = transformer.transform(X_test)
create_classifier(MultiOutputClassifier(LogisticRegression(random_state=RANDOM_STATE)), 'MultioutputClassifier(LogisticRegression)')
transformer = Doc2VecTransformer.load('doc2vec', 1000)
#train = transformer.fit(X_train)
#test = transformer.transform(X_test)
create_classifier(MultiOutputClassifier(LogisticRegression(random_state=RANDOM_STATE)), 'MultioutputClassifier(LogisticRegression)')
import re
paths = available_classifier_paths('MultioutputClassifier(LogisticRegression)', 'doc2vec', 'size')
evals = []
for path in paths:
clf = load(path)
evaluation = clf.evaluation
matches = re.findall(r'=([\w,\d]*)', str(path))
_, _, size = matches
evals.append([int(size), evaluation])
fig, ax = plt.subplots()
ax.set_title('Impact of Vector Size')
evals = sorted(evals, key=lambda x: x[0])
x_ = [eval[0] for eval in evals]
ax.plot(x_, [eval[1].accuracy for eval in evals], marker="o", label='Accuracy')
ax.plot(x_, [eval[1].f1_macro for eval in evals], marker="o", label='F1 Macro')
ax.plot(x_, [eval[1].recall_macro for eval in evals], marker="o", label='Recall Macro')
ax.plot(x_, [eval[1].precision_macro for eval in evals], marker="o", label='Precision Macro')
ax.set_xlabel('Vector size')
ax.set_ylabel('Score')
ax.xaxis.set_major_locator(MultipleLocator(100))
fig.legend(bbox_to_anchor=(1,0.5), loc='center left')
plt.show()
###Output
_____no_output_____ |
3. Baseline.ipynb | ###Markdown
Import Data
###Code
train_data= "./data/glove_ids_train.csv.zip"
val_data= "./data/glove_ids_val.csv.zip"
test_data= "./data/glove_ids_test.csv.zip"
from src.MyDataset import MyDataset, _batch_to_tensor
data_train = MyDataset(path_data=train_data,
max_seq_len=250)
data_val = MyDataset(path_data=val_data,
max_seq_len=250)
data_test = MyDataset(path_data=test_data,
max_seq_len=250)
###Output
_____no_output_____
###Markdown
Fit Baseline
###Code
from sklearn.dummy import DummyClassifier

baseline = DummyClassifier(strategy="uniform")
baseline.fit(data_train.X, data_train.y_id)
###Output
_____no_output_____
###Markdown
Classification Report with Baseline Train Data
###Code
import numpy as np
from sklearn.metrics import classification_report

np.random.seed(1234)
pred = baseline.predict(data_train.X)
print(classification_report(data_train.y_id, pred))
###Output
_____no_output_____
###Markdown
Validation Data
###Code
pred = baseline.predict(data_val.X)
print(classification_report(data_val.y_id, pred))
###Output
_____no_output_____
###Markdown
Test Data
###Code
pred = baseline.predict(data_test.X)
print(classification_report(data_test.y_id, pred))
###Output
_____no_output_____ |
docs/examples/broadcast_tracking_data.ipynb | ###Markdown
Broadcast Tracking DataUnlike tracking data from permanently installed camera systems in a stadium, broadcast tracking captures player locations directly from video feeds. As these broadcast feeds usually show only part of the pitch, this tracking data contains only part of the players information for most frames and may be missing data for some frames entirely due to playbacks or close-up camera views. For more information about this data please go to https://github.com/SkillCorner/opendata.A note from Skillcorner: "if you use the data, we kindly ask that you credit SkillCorner and hope you'll notify us on Twitter so we can follow the great work being done with this data."Available Matches in the Skillcorner Opendata Repository: "ID: 4039 - Manchester City vs Liverpool on 2020-07-02" "ID: 3749 - Dortmund vs Bayern Munchen on 2020-05-26" "ID: 3518 - Juventus vs Inter on 2020-03-08" "ID: 3442 - Real Madrid vs FC Barcelona on 2020-03-01" "ID: 2841 - FC Barcelona vs Real Madrid on 2019-12-18" "ID: 2440 - Liverpool vs Manchester City on 2019-11-10" "ID: 2417 - Bayern Munchen vs Dortmund on 2019-11-09" "ID: 2269 - Paris vs Marseille on 2019-10-27" "ID: 2068 - Inter vs Juventus on 2019-10-06"Metadata is available for this data. Loading Skillcorner data
###Code
from kloppy import skillcorner
# there is one example match for testing purposes in kloppy that we use here
# for other matches change the filenames to the location of your downloaded skillcorner opendata files
matchdata_file = '../../kloppy/tests/files/skillcorner_match_data.json'
tracking_file = '../../kloppy/tests/files/skillcorner_structured_data.json'
dataset = skillcorner.load(meta_data=matchdata_file,
raw_data=tracking_file,
limit=100)
df = dataset.to_pandas()
###Output
_____no_output_____
###Markdown
Exploring the dataWhen you want to show the name of a player you are advised to use `str(player)`. This will call the magic `__str__` method that handles fallbacks for missing data. By default it will return `full_name`, and fallback to 1) `first_name last_name` 2) `player_id`.
###Code
metadata = dataset.metadata
home_team, away_team = metadata.teams
[f"{player} ({player.jersey_no})" for player in home_team.players]
print(f"{home_team.ground} - {home_team}")
print(f"{away_team.ground} - {away_team}")
###Output
home - FC Bayern Munchen
away - Borussia Dortmund
###Markdown
Working with tracking dataThe actual tracking data is available at `dataset.frames`. This list holds all frames. Each frame has a `players_coordinates` dictionary that is indexed by `Player` entities and has values of the `Point` type.Identities of players are not always specified. In that case only the team affiliation is known and a track_id that is part of the player_id is used to identify the same (unknown) player across multiple frames.
###Code
first_frame = dataset.frames[88]
print(f"Number of players in the frame: {len(first_frame.players_coordinates)}")
from pprint import pprint
print("List home team players coordinates")
pprint([
(player.player_id, player_coordinates)
for player, player_coordinates
in first_frame.players_coordinates.items()
if player.team == home_team
])
df.head()
###Output
_____no_output_____ |
notebooks/monte_carlo_dev/.ipynb_checkpoints/Quantify_AIS_barge_attributions-checkpoint.ipynb | ###Markdown
Concatenate all monthly ship track data to get values for the entire year: ATBs
###Code
%%time
allTracks = {}
# wrap the function itself (not its result) so the concatenation is deferred until .compute()
allTracks_dask = delayed(concat_shp)("atb", shapefile_path)
allTracks["atb"] = allTracks_dask.compute()
allTracks["atb"].shape[0]
###Output
_____no_output_____
###Markdown
Barges
###Code
%%time
allTracks_dask = delayed(concat_shp)("barge", shapefile_path)
allTracks["barge"] = allTracks_dask.compute()
###Output
creating barge shapefile for 2018, starting with January data
Concatenating barge data from month 2
Concatenating barge data from month 3
Concatenating barge data from month 4
Concatenating barge data from month 5
Concatenating barge data from month 6
Concatenating barge data from month 7
Concatenating barge data from month 8
Concatenating barge data from month 9
Concatenating barge data from month 10
Concatenating barge data from month 11
Concatenating barge data from month 12
CPU times: user 15min 4s, sys: 31.4 s, total: 15min 36s
Wall time: 15min 36s
###Markdown
Tankers
###Code
%%time
ship_type = "tanker"
allTracks["tanker"] = concat_shp("tanker", shapefile_path)
###Output
creating tanker shapefile for 2018, starting with January data
Concatenating tanker data from month 2
Concatenating tanker data from month 3
Concatenating tanker data from month 4
Concatenating tanker data from month 5
Concatenating tanker data from month 6
Concatenating tanker data from month 7
Concatenating tanker data from month 8
Concatenating tanker data from month 9
Concatenating tanker data from month 10
Concatenating tanker data from month 11
Concatenating tanker data from month 12
CPU times: user 1min 10s, sys: 1.29 s, total: 1min 12s
Wall time: 1min 12s
###Markdown
Check barge ship track count used in ping-to-transfer ratio estimate- values recorded in `Origin_Destination_Analysis_updated.xlsx`
###Code
print(f'{allTracks["atb"].shape[0]} ATB ship tracks')
print(f'cf. 588,136 ATB ship tracks used in ping-to-transfer estimate')
print(f'{allTracks["barge"].shape[0]} barge ship tracks')
###Output
588136 ATB ship tracks
cf. 588,136 ATB ship tracks used in ping-to-transfer estimate
13902896 barge ship tracks
###Markdown
Take-away: total number of ship tracks used in the ping-to-transfer ratio matches those used in this analysis. That's good. It's what I wanted to verify. Find all ATB and barge tracks with generic attribution as both origin and destination
###Code
attribution = ['US','Canada','Pacific']
noNone = {}
allNone = {}
generic = {}
for vessel_type in ["atb",'barge']:
generic[vessel_type] = allTracks[vessel_type].loc[
(allTracks[vessel_type].TO.isin(attribution)) &
(allTracks[vessel_type].FROM_.isin(attribution))
]
generic["barge"].shape
###Output
_____no_output_____
###Markdown
Find all ship tracks with None as origin and destination
###Code
for vessel_type in ["atb",'barge']:
# keep rows with None attribution
shp_tmp = allTracks[vessel_type].isnull()
row_has_None = shp_tmp.any(axis=1)
allNone[vessel_type] = allTracks[vessel_type][row_has_None]
###Output
_____no_output_____
###Markdown
Find all ship tracks with no None designations in either origin or destination
###Code
for vessel_type in ["atb",'barge']:
# drop rows with None attribution
noNone[vessel_type] = allTracks[vessel_type].dropna().reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Find all ship tracks with origin or destination as marine terminal- compare this value to frac[vessel_type]["marine_terminal"] to quantify how many ship tracks have mixed origin/destination as marine_terminal/generic (I don't think mixed with None is possible).
###Code
allfac = {}
toWA = {}
fromWA = {}
bothWA = {}
for vessel_type in ["atb",'barge']:
allfac[vessel_type] = allTracks[vessel_type].loc[
((allTracks[vessel_type].TO.isin(facWA.FacilityName)) |
(allTracks[vessel_type].FROM_.isin(facWA.FacilityName))
)
]
toWA[vessel_type] = allTracks[vessel_type].loc[
(allTracks[vessel_type].TO.isin(facWA.FacilityName))
]
fromWA[vessel_type] = allTracks[vessel_type].loc[
(allTracks[vessel_type].FROM_.isin(facWA.FacilityName))
]
bothWA[vessel_type] = allTracks[vessel_type].loc[
((allTracks[vessel_type].TO.isin(facWA.FacilityName)) &
(allTracks[vessel_type].FROM_.isin(facWA.FacilityName)))
]
###Output
_____no_output_____
###Markdown
Find all ship tracks with any WA or CAD marine oil terminal as either origin or destination
###Code
allfacWACAD = {}
for vessel_type in ["atb","barge"]:
allfacWACAD[vessel_type] = allTracks[vessel_type].loc[
((allTracks[vessel_type].TO.isin(facWA.FacilityName)) |
(allTracks[vessel_type].FROM_.isin(facWA.FacilityName))|
(allTracks[vessel_type].TO.isin(facCAD.Name)) |
(allTracks[vessel_type].FROM_.isin(facCAD.Name))
)
]
print(f'To OR From: {allfac["atb"].shape[0]}')
print(f'To AND From: {bothWA["atb"].shape[0]}')
print(f'To: {toWA["atb"].shape[0]}')
print(f'From: {fromWA["atb"].shape[0]}')
print(f'To + From: {toWA["atb"].shape[0] + fromWA["atb"].shape[0]}')
print(f'To + From - "To AND from": {toWA["atb"].shape[0] + fromWA["atb"].shape[0] - bothWA["atb"].shape[0]}')
allfacWACAD["atb"].shape[0]
###Output
_____no_output_____
###Markdown
TAKE-AWAY: - 129165 ship tracks are to or from WA marine terminals with - 13238 of these having WA marine terminal as both to and from- The remainder are mixed with origin or destination as WA marine terminal and the other end-member being US, Pacific, Canada or CAD marine terminal (None values shouldn't be includeded here) TEST: - All tracks = Generic + allFac + None
###Code
print(f'All tracks = Generic + allFac + allNone')
print(f'{allTracks["atb"].shape[0]} = {generic["atb"].shape[0] + allfac["atb"].shape[0] + allNone["atb"].shape[0]}')
###Output
All tracks = Generic + allFac + allNone
588136 = 476681
###Markdown
Hypothesis: the difference in the above is CAD terminal transfers. Testing....
###Code
print(f'All tracks = Generic + allFacWACAD + allNone')
print(f'{allTracks["atb"].shape[0]} = {generic["atb"].shape[0] + allfacWACAD["atb"].shape[0] + allNone["atb"].shape[0]}')
###Output
All tracks = Generic + allFacWACAD + allNone
588136 = 588136
###Markdown
Good! So 588136 - 476681 = 111455 => CAD traffic.
###Code
# compare ATB tracks
vessel_type = "atb"
print(f'Attributed (no None in to or from): {noNone[vessel_type].shape[0]}')
print(f'Generic (to AND from): {generic[vessel_type].shape[0]}')
print(f'Attributed - Generic = {noNone[vessel_type].shape[0] - generic[vessel_type].shape[0]}')
print(f'Marine terminal (to or from): {allfac[vessel_type].shape[0]}')
# create a dictionary of ratios between subsampled data and all ship tracks
frac = {}
frac['atb'] = {}
frac['barge'] = {}
for vessel_type in ["atb","barge"]:
frac[vessel_type]["unattributed"] = allNone[vessel_type].shape[0]/allTracks[vessel_type].shape[0]
frac[vessel_type]["attributed"] = noNone[vessel_type].shape[0]/allTracks[vessel_type].shape[0]
frac[vessel_type]["generic"] = generic[vessel_type].shape[0]/allTracks[vessel_type].shape[0]
frac[vessel_type]["marine_terminal_WACAD"] = allfacWACAD[vessel_type].shape[0]/allTracks[vessel_type].shape[0]
frac[vessel_type]["marine_terminal_diff"] = frac[vessel_type]["attributed"] - frac[vessel_type]["generic"]
print(f'~~~ {vessel_type} ~~~')
print(f'Fraction of {vessel_type} tracks that are unattributed: {frac[vessel_type]["unattributed"]}')
print(f'Fraction of {vessel_type} tracks that are attributed: {frac[vessel_type]["attributed"]}')
print(f'Fraction of attributed {vessel_type} tracks that are generic : {frac[vessel_type]["generic"]}')
print(f'Fraction of attributed {vessel_type} tracks that are linked to marine terminal (WACAD): {frac[vessel_type]["marine_terminal_WACAD"]}')
print(f'Fraction of attributed {vessel_type} tracks that are linked to marine terminal (diff): {frac[vessel_type]["marine_terminal_diff"]}')
for vessel_type in ["atb","barge"]:
print(f'Total number of tracks for {vessel_type}: {allTracks[vessel_type].shape[0]:1.2e}')
print(f'Total number of unattributed barge tracks: {allTracks[vessel_type].shape[0]*frac[vessel_type]["unattributed"]:10.2f}')
print(f'Total number of generically-attributed barge tracks: {allTracks[vessel_type].shape[0]*frac[vessel_type]["generic"]:10.2f}')
print(f'Total number of marine-terminal-attributed barge tracks: {allTracks[vessel_type].shape[0]*frac[vessel_type]["marine_terminal_WACAD"]:10.2f}')
###Output
Total number of unattributed barge tracks: 6035026.00
Total number of generically-attributed barge tracks: 5865057.00
Total number of marine-terminal-attributed barge tracks: 2002813.00
###Markdown
Quantify barge and ATB cargo transfers in 2018 DOE database
###Code
[atb_in, atb_out]=get_DOE_atb(
doe_xls_path,
fac_xls_path,
transfer_type = 'cargo',
facilities='selected'
)
barge_inout=get_DOE_barges(
doe_xls_path,
fac_xls_path,
direction='combined',
facilities='selected',
transfer_type = 'cargo')
transfers = {}
transfers["barge"] = barge_inout.shape[0]
transfers["atb"] = atb_in.shape[0] + atb_out.shape[0]
print(f'{transfers["atb"]} cargo transfers for atbs')
print(f'{transfers["barge"]} cargo transfers for barges')
###Output
677 cargo transfers for atbs
2773 cargo transfers for barges
###Markdown
Group barge and atb transfers by AntID and:- compare transfers- compare fraction of grouped transfers to ungrouped transfers by vessel type
###Code
transfers["barge_antid"] = barge_inout.groupby('AntID').sum().shape[0]
transfers["atb_antid"] = atb_in.groupby('AntID').sum().shape[0] + atb_out.groupby('AntID').sum().shape[0]
print(f'{transfers["atb_antid"]} ATB cargo transfers based on AntID')
print(f'{transfers["atb_antid"]/transfers["atb"]:.2f} ATB fraction AntID to all')
print(f'{transfers["barge_antid"]} barge cargo transfers based on AntID')
print(f'{transfers["barge_antid"]/transfers["barge"]:.2f} barge fraction AntID to all')
###Output
482 ATB cargo transfers based on AntID
0.71 ATB fraction AntID to all
2334 barge cargo transfers based on AntID
0.84 barge fraction AntID to all
###Markdown
Take away: Barge and ATBs have similar number of mixed-oil-type transfers with ATBs having more mixed-type transfers (29% of 677) than barges (16% of 2334). Even though values are similar, we will use the AntID grouped number of transfers for our ping to transfer ratios Calculate the number of oil cargo barges we expect using the AntID grouping for ping-to-transfer ratio
###Code
ping2transfer = {}
oilcargobarges = {}
# ATB ping-to-transfer ratio
ping2transfer["atb"] = allTracks["atb"].shape[0]/transfers["atb_antid"]
# Estimate number of oil cargo barges using number of barge transfers
# and atb ping-to-transfer ratio
oilcargobarges["total"] = transfers["barge_antid"]*ping2transfer["atb"]
print(f'We expect {oilcargobarges["total"]:.0f} total oil cargo pings for barge traffic')
###Output
We expect 2847945 total oil cargo pings for barge traffic
###Markdown
Calculate the number of Attributed tracks we get for ATBs and estimate the equivalent value for barges
###Code
# estimate the ratio of attributed ATB tracks to ATB cargo transfers
noNone_ratio = noNone["atb"].shape[0]/transfers["atb_antid"]
print(f'We get {noNone_ratio:.2f} attributed ATB tracks per ATB cargo transfer')
# estimate the amount of attributed tracks we'd expect to see for tank barges based on tank barge transfers
print(f'We expect {noNone_ratio*transfers["barge_antid"]:.2f} attributed barge tracks, but we get {noNone["barge"].shape[0]}')
# estimate spurious barge voyages by removing estimated oil carge barge from total
fraction_nonoilbarge = (noNone["barge"].shape[0]-noNone_ratio*transfers["barge_antid"])/noNone["barge"].shape[0]
print(f'We estimate that non-oil tank barge voyages account for {100*fraction_nonoilbarge:.2f}% of barge voyages')
#The above value was 88% when not using the AntID grouping
###Output
_____no_output_____
###Markdown
Evaluate oil cargo traffic pings for ATBs and barges
###Code
# Dictionary for probability of oil cargo barges for our 3 attribution types
P_oilcargobarges = {}
allfac = {}
for vessel_type in ["atb",'barge']:
allfac[vessel_type] = allTracks[vessel_type].loc[
((allTracks[vessel_type].TO.isin(facWA.FacilityName)) |
(allTracks[vessel_type].FROM_.isin(facWA.FacilityName)))
]
# Ratio of ATB pings with WA facility attribution to ATB WA transfers
fac_att_ratio = allfacWACAD["atb"].shape[0]/transfers["atb_antid"]
# Fraction of barge pings with generic attribution that are expected to carry oil based on ATB pings and transfers
P_oilcargobarges["facility"] = fac_att_ratio*transfers["barge_antid"]/allfacWACAD["barge"].shape[0]
print(f'{allfac["atb"].shape[0]} ATB tracks have a WA oil facility as origin or destination')
print(f'{allfac["barge"].shape[0]} barge tracks have a WA oil facility as origin or destination')
print(f'We get {fac_att_ratio:.2f} WA oil marine terminal attributed ATB tracks per ATB cargo transfer')
# estimate the amount of oil cargo facility tracks we'd expect to see for tank barges based on tank barge transfers
print(f'We expect {fac_att_ratio*transfers["barge_antid"]:.2f} WA oil marine terminal attributed barge tracks, but we get {allfac["barge"].shape[0]}')
fraction_nonoilbarge = (allfac["barge"].shape[0]-fac_att_ratio*transfers["barge_antid"])/allfac["barge"].shape[0]
print(f'We estimate that non-oil tank barge voyages to/from marine terminals account for {100*fraction_nonoilbarge:.2f}% of barge voyages attributed to WA marine terminals')
###Output
129165 ATB tracks have a WA oil facility as origin or destination
1666271 barge tracks have a WA oil facility as origin or destination
We get 499.21 WA oil marine terminal attributed ATB tracks per ATB cargo transfer
We expect 1165159.92 WA oil marine terminal attributed barge tracks, but we get 1666271
We estimate that non-oil tank barge voyages to/from marine terminals account for 30.07% of barge voyages attributed to WA marine terminals
###Markdown
When not grouped by AntID:- 129165 ATB tracks have a WA oil facility as origin or destination- 1666271 barge tracks have a WA oil facility as origin or destination- We get 190.79 WA oil marine terminal attributed ATB tracks per ATB cargo transfer- We expect 529061.37 WA oil marine terminal attributed barge tracks, but we get 1666271- We estimate that non-oil tank barge voyages to/from marine terminals account for 68.25% of - barge voyages attributed to WA marine terminals Repeat for Generic attibution only
###Code
# Ratio of ATB pings with generic attribution to ATB WA transfers
generic_ratio = generic["atb"].shape[0]/transfers["atb_antid"]
# Fraction of barge pings with generic attribution that are expected to carry oil based on ATB pings and transfers
P_oilcargobarges["generic"] = generic_ratio*transfers["barge_antid"]/generic["barge"].shape[0]
print(f'{generic["atb"].shape[0]} ATB tracks have Pacific, US or Canada as origin or destination')
print(f'{generic["barge"].shape[0]} barge tracks have Pacific, US or Canada as origin or destination')
print(f'We get {generic_ratio:.2f} Generically attributed ATB tracks per ATB cargo transfer')
# estimate the amount of oil cargo facility tracks we'd expect to see for tank barges based on tank barge transfers
print(f'We expect {generic_ratio*transfers["barge_antid"]:.2f} Generically attributed barge tracks, but we get {generic["barge"].shape[0]}')
fraction_nonoilbarge = (generic["barge"].shape[0]-generic_ratio*transfers["barge_antid"])/generic["barge"].shape[0]
print(f'We estimate that non-oil tank barge voyages account for {100*fraction_nonoilbarge:.2f}% of barge voyages with both to/from as generic attributions ')
###Output
119323 ATB tracks have Pacific, US or Canada as origin or destination
5865057 barge tracks have Pacific, US or Canada as origin or destination
We get 247.56 Generically attributed ATB tracks per ATB cargo transfer
We expect 577800.59 Generically attributed barge tracks, but we get 5865057
We estimate that non-oil tank barge voyages account for 90.15% of barge voyages with both to/from as generic attributions
###Markdown
Repeat for No attribution
###Code
# Ratio of ATB pings with "None" attribution to ATB WA transfers
allNone_ratio = allNone["atb"].shape[0]/transfers["atb_antid"]
# Fraction of barge pings with None attribution that are expected to carry oil based on ATB pings and transfers
P_oilcargobarges["none"] = allNone_ratio*transfers["barge_antid"]/allNone["barge"].shape[0]
print(f'{allNone["atb"].shape[0]} ATB tracks have None as origin or destination')
print(f'{allNone["barge"].shape[0]} barge tracks have None as as origin or destination')
print(f'We get {allNone_ratio:.2f} None attributed ATB tracks per ATB cargo transfer')
# estimate the amount of oil cargo facility tracks we'd expect to see for tank barges based on tank barge transfers
print(f'We expect {allNone_ratio*transfers["barge_antid"]:.2f} None attributed oil cargo barge tracks, but we get {allNone["barge"].shape[0]}')
fraction_nonoilbarge = (allNone["barge"].shape[0]-allNone_ratio*transfers["barge_antid"])/allNone["barge"].shape[0]
print(f'We estimate that non-oil tank barge voyages account for {100*fraction_nonoilbarge:.2f}% of barge voyages with None attributions ')
###Output
228193 ATB tracks have None as origin or destination
6035026 barge tracks have None as origin or destination
We get 473.43 None attributed ATB tracks per ATB cargo transfer
We expect 1104984.36 None attributed oil cargo barge tracks, but we get 6035026
We estimate that non-oil tank barge voyages account for 81.69% of barge voyages with None attributions
###Markdown
Find the probability of oil cargo for each ping classification, i.e.:
- `oilcargobarges["total"]` = 588136.0 = (1) + (2) + (3), where
  - (1) `P_oilcargobarges["facility"]` * `allfacWACAD["barge"].shape[0]`
  - (2) `P_oilcargobarges["none"]` * `allNone["barge"].shape[0]`
  - (3) `P_oilcargobarges["generic"]` * `generic["barge"].shape[0]`
###Code
print(P_oilcargobarges["facility"])
print(P_oilcargobarges["none"])
print(P_oilcargobarges["generic"])
print(allfacWACAD["barge"].shape[0])
print(allNone["barge"].shape[0])
print(generic["barge"].shape[0])
print(P_oilcargobarges["facility"] * allfacWACAD["barge"].shape[0])
print(P_oilcargobarges["none"] * allNone["barge"].shape[0])
print(P_oilcargobarges["generic"] * generic["barge"].shape[0])
oilcargobarges["facilities"] = (P_oilcargobarges["facility"] * allfacWACAD["barge"].shape[0])
oilcargobarges["none"] = (P_oilcargobarges["none"] * allNone["barge"].shape[0])
oilcargobarges["generic"] = (P_oilcargobarges["generic"] * generic["barge"].shape[0])
print('oilcargobarges["total"] = oilcargobarges["facilities"] + oilcargobarges["none"] + oilcargobarges["generic"]?')
#oilcargobarges_sum = oilcargobarges["facilities"] + oilcargobarges["none"] + oilcargobarges["generic"]
print(f'{oilcargobarges["total"]:.0f} =? {oilcargobarges["facilities"] + oilcargobarges["none"] + oilcargobarges["generic"]:.0f}')
missing_pings = oilcargobarges["total"]-(oilcargobarges["facilities"] + oilcargobarges["none"] + oilcargobarges["generic"])
print(f' Missing {missing_pings} pings ({100*missing_pings/oilcargobarges["total"]:.0f}%)')
###Output
oilcargobarges["total"] = oilcargobarges["facilities"] + oilcargobarges["none"] + oilcargobarges["generic"]?
2847945 =? 2847945
Missing -4.656612873077393e-10 pings (-0%)
###Markdown
I'm not sure where this 11% error comes from. I used WA-only terminal pings and transfers for the ping-to-transfer ratio, but multiplied that ratio by the total number of CAD and WA oil transfer terminal pings.
###Code
allfacWACAD["barge"].shape[0] + generic["barge"].shape[0] + allNone["barge"].shape[0]
allTracks["barge"].shape[0]
###Output
_____no_output_____ |
_build/jupyter_execute/assignment3.ipynb | ###Markdown
Assignment 3: Combustor Design

**Introduction**

The global desire to reduce greenhouse gas emissions is the main reason for the interest in the use of hydrogen for power generation. Although hydrogen appears to be a promising solution, there are many challenges that still need to be solved. One of these challenges concerns the use of hydrogen as a fuel in gas turbines.

In gas turbines hydrogen could replace natural gas as a fuel in the combustor. Unfortunately, this is accompanied by a technical challenge which deals with an important property in premixed combustion: the flame speed. The flame speed of hydrogen is an order of magnitude higher than that of natural gas due to the highly reactive nature of hydrogen. As a result, a hydrogen flame is more prone to flashback than a natural gas flame. Flame flashback is the undesired upstream propagation of a flame into the premix section of a combustor. Flashback occurs when the flame speed is higher than the velocity of the incoming fresh mixture. This could cause severe equipment damage and turbine shutdown. Adjustments to traditional combustors are required in order to guarantee safe operation when using hydrogen as fuel.

To this end the students are asked to investigate the use of hydrogen, natural gas and a blend thereof in gas turbines. The first part will focus on the impact of the different fuels on the combustor geometry. Finally, we will have a closer look at the influence of different fuels on the $CO_2$ and $NO_x$ emissions. For simplicity, it is assumed that natural gas consists purely of methane ($CH_4$).

**Tasks**

**Diameter of the combustor**

A gas turbine has a power output of 100 MW. The combustion section consists of 8 can combustors. Each can combustor is, for the sake of simplicity, represented by a tube with a diameter $D$. The inlet temperature $T_2$ of the compressor is 293 K and the inlet pressure $p_2$ is 101325 Pa. To prevent damage to the turbine blades a turbine inlet temperature (TIT) of 1800 K is desired. Furthermore, assume that the specific heat of the fluid is constant through the compressor, i.e. specific heat capacity $c_{p,c}$=1.4 and a heat capacity ratio $\gamma_c$=1.4. The polytropic efficiencies of the compressor and turbine are 0.90 and 0.85, respectively.

The pressure ratio over the compressor will depend on your studentID:
- PR = 10 if (numpy.mod(studentID, 2) + 1) == 1
- PR = 20 if (numpy.mod(studentID, 2) + 1) == 2

Assume the TIT to be equal to the temperature of the flame inside the combustor. The flame temperature depends on the equivalence ratio ($\phi$), the hydrogen volume percentage of the fuel ($H_2\%$) and the combustor inlet temperature and pressure. For now consider the fuel to consist of pure natural gas ($H_2\%=0$). Note that the equivalence ratio is given by:

\begin{align}\phi = \frac{\frac{m_{fuel}}{m_{air}}}{(\frac{m_{fuel}}{m_{air}})_{stoich}}\end{align}

**1. Calculate the inlet temperature $T_3$ and inlet pressure $p_3$ of the combustor and determine the required equivalence ratio (adjust PART A and PART B and run the code), so that the TIT specification is met.**

Inside the combustor the flow is turbulent. Turbulence causes an increase in the flame speed, so that the turbulent flame speed $S_T \approx 10 S_L$.

**2. With the equivalence ratio determined in the previous question, calculate the total mass flow rate ($\dot{m}_{tot}$) through the gas turbine and the maximum diameter $D$ of a single combustor tube, so that flashback is prevented. Adjust PART A, PART B, PART C and PART D in the code and run it again. Report the steps you have taken. Is there also a minimum diameter? If so, no calculation is required; discuss what could be the reason for the necessity of a minimal diameter of the combustor tube.**

The combustion of methane is represented by the reaction: $CH_4 + 2 (O_2 + 3.76 N_2) \rightarrow CO_2 + 2 H_2O + 7.52 N_2$

**3. Use the above reaction equation and the definition of $\phi$ to find the mass flow rate of the fuel $\dot{m}_{fuel}$.**

**4. Calculate the total heat input using $\dot{m}_{fuel}$ and calculate the efficiency of the complete cycle.**

**5. Repeat tasks 1-4 for a fuel consisting of $50\%H_2$/$50\%CH_4$ and $100\%H_2$. Discuss the effect of the addition of hydrogen to the fuel on the combustor geometry and cycle performance.**

**$CO_2$ and $NO_x$ emissions**

**6. A gas turbine manufacturer claims that their gas turbines can be fired with a hydrogen content of 30%. Discuss whether this could be regarded as an achievement (use the top plot in Figure 5).**

**7. Consider an equivalence ratio $\phi=0.5$. Regarding emissions, discuss the advantages and disadvantages of increasing the hydrogen content of the fuel. Adjust PART A and use Figure 5.**

**Bonus assignment**

For simplicity, it was assumed that natural gas consists of pure methane. In reality, it could be a mix of methane, higher hydrocarbons and nitrogen. An example is Dutch Natural Gas (DNG), which consists of $80\%CH_4$, $5\%C_2H_6$ and $15\%N_2$.

**Repeat tasks 1-4 for a fuel consisting of $50\%H_2$/$50\%DNG$. Hint 1: Nitrogen does not participate in the reaction. Hint 2: This requires more adjustment of the code than just PARTS A, B, C, D.**

**Code**

Two properties of importance in this assignment are the laminar flame speed $S_L$ and the adiabatic flame temperature $T_{ad}$ of a mixture. These properties can be determined by solving the equations for continuity, momentum, species and energy in one dimension. Fortunately, we do not need to solve these equations by hand; instead, a chemical kinetics package (Cantera) is used to solve them by running a simulation. The simulation is illustrated in the sketch below. Keep in mind that the simulation can take some time to complete.

For more information about Cantera visit: https://cantera.org/. For more background information regarding the 1D flame simulation visit: https://cantera.org/science/flames.html
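As a quick illustration of the equivalence-ratio definition above (a minimal sketch, not part of the assignment template: the example value $\phi=0.5$ and the helper names are my own, while the molar masses mirror the constants defined in the code cell below), the stoichiometric fuel-to-air mass ratio for pure methane follows directly from the reaction equation:

```python
# Illustrative sketch only: stoichiometric fuel/air mass ratio for pure CH4,
# taken from CH4 + 2 (O2 + 3.76 N2) -> CO2 + 2 H2O + 7.52 N2.
M_CH4 = 12.011 + 4*1.008        # kg/kmol, same values as the constants below
M_O2 = 2*15.999                 # kg/kmol
M_N2 = 2*14.007                 # kg/kmol

m_air_per_kmol_fuel = 2*(M_O2 + 3.76*M_N2)   # kg of air per kmol of CH4
f_stoich = M_CH4 / m_air_per_kmol_fuel       # (m_fuel/m_air)_stoich, roughly 0.058

# A lean mixture at, say, phi = 0.5 then carries half the stoichiometric
# amount of fuel per unit mass of air:
phi_example = 0.5
print(f'(m_fuel/m_air)_stoich = {f_stoich:.4f}')
print(f'(m_fuel/m_air) at phi={phi_example} = {phi_example*f_stoich:.4f}')
```

In the template below, Cantera's `set_equivalence_ratio` performs the same bookkeeping internally when it builds the unburnt fuel/air mixture.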
###Code
#%% Load required packages
import sys
import cantera as ct
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm
#%% Constants
R_gas_mol = 8314 # Universal gas constant [units: J*K^-1*kmol^-1]
R_gas_mass = 287 # Specific gas constant of air [units: J*K^-1*kg^-1]
#%% Start
# Power output of turbine
power_output = 100 # units: MW
power_output*=1e6
# Compressor and turbine polytropic efficiencies
etap_c = 1
etap_t = 1
# Pressure ratio
PR = 10
# Compressor inlet temperature and pressure
T2 = 293.15 # units: K
p2 = 101325 # units: Pa
# Heat capacity ratio of air at T=293.15 K
gam_c = 1.4
# Compressor stage
# Specific heat capacity (heat capacity per unit mass) of mixture in compressor
cp_c = R_gas_mass*gam_c/(gam_c-1)
# cp_c = 1006 # units: J.kg^-1.K^-1
# cv_c = 717 # units: J.kg^-1.K^-1
# Molar mass of species [units: kg*kmol^-1]
M_H = 1.008
M_C = 12.011
M_N = 14.007
M_O = 15.999
M_H2 = M_H*2
M_CH4 = M_C + M_H*4
M_CO2 = M_C + M_O*2
M_O2 = M_O*2
M_N2 = M_N*2
# Define volume fractions of species in air [units: -]
f_O2 = 0.21
f_N2 = 0.79
########## PART A: ADJUST CODE HERE ##########
# Equivalence ratios
phis = [None, None, None] # Set equivalence ratios ranging from 0.4 to 0.8
# Hydrogen percentages
H2_percentages = [None, None, None] # Set hydrogen volume percentages of the fuel ranging from 0 to 100
################# END PART A ##################
# Define colors to make distinction between different mixtures based on hydrogen percentage
colors = cm.rainbow(np.linspace(0, 1, len(H2_percentages)))
#%% Premixed flame object
class mixture_class:
def __init__(self, phi, H2_percentage, T_u=293.15, p_u=101325):
# Color and label for plots
self.color = colors[H2_percentages.index(H2_percentage)]
self.label = str(int(H2_percentage)) + r'$\% H_2$'
# Temperature and pressure of the unburnt mixture
self.T_u = T_u # units: K
self.p_u = p_u # units: Pa
# Equivalence ratio
self.phi = phi
# Hydrogen percentage of fuel
self.H2_percentage = H2_percentage
# Methane (CH4) percentage of fuel
self.CH4_percentage = 100 - self.H2_percentage
# Volume fractions of fuel
self.f_H2 = self.H2_percentage/100
self.f_CH4 = self.CH4_percentage/100
# Mass densities of fuel species
rho_H2 = M_H2*self.p_u/(self.T_u*R_gas_mol)
rho_CH4 = M_CH4*self.p_u/(self.T_u*R_gas_mol)
# Check if volume fractions of fuel and air are correct
check_air = f_O2 + f_N2
check_fuel = self.f_H2 + self.f_CH4
if check_air == 1.0 and round(check_fuel,3) == 1.0:
pass
else:
sys.exit("fuel or air composition is incorrect!")
if round(check_fuel,3) == 1.0:
pass
else:
sys.exit("fuel composition is incorrect!")
# Definition of the mixture
# 1. Set the reaction mechanism
self.gas = ct.Solution('gri30.cti')
# 2. Define the fuel and air composition
fuel = {'H2':self.f_H2, 'CH4':self.f_CH4}
air = {'N2':f_N2/f_O2, 'O2':1.0}
# 3. Set the equivalence ratio
self.gas.set_equivalence_ratio(phi, fuel, air)
# 4. Set the transport model
self.gas.transport_model= 'Multi'
# 5. Set the unburnt mixture temperature and pressure
self.gas.TP = T_u, p_u
# Unburnt mixture properties
self.h_u = self.gas.enthalpy_mass # units: J.kg^-1
self.cp_u = self.gas.cp_mass # units: J*K^-1*kg^-1
self.cv_u = self.gas.cv_mass # units: J*K^-1*kg^-1
self.rho_u = self.gas.density_mass # units: kg.m^-3
self.rho_u_H2 = rho_H2 # units: kg.m^-3
self.rho_u_CH4 = rho_CH4 # units: kg.m^-3
self.mu_u = self.gas.viscosity # Pa.s
self.nu_u = self.mu_u/self.rho_u # units: m^2.s^-1
self.lambda_u= self.gas.thermal_conductivity # units: W.m^-1.K^-1
self.alpha_u = self.lambda_u/(self.rho_u*self.cp_u) # units: m^2.s^-1
def solve_equations(self):
# Unburnt molar fractions
self.X_H2 = self.gas["H2"].X[0]
self.X_CH4 = self.gas["CH4"].X[0]
self.X_O2 = self.gas["O2"].X[0]
self.X_N2 = self.gas["N2"].X[0]
# Set domain size (1D)
width = 0.05 # units: m
# Create object for freely-propagating premixed flames
flame = ct.FreeFlame(self.gas, width=width)
# Set the criteria used to refine one domain
flame.set_refine_criteria(ratio=3, slope=0.1, curve=0.1)
# Solve the equations
flame.solve(loglevel=0, auto=True)
# Result 1: Laminar flame speed
self.S_L0 = flame.velocity[0]*100 # units: cm.s^-1
self.S_T = 10*self.S_L0/100 # units: m.s^-1 Rough estimation of the turbulent flame speed
# Result 2: Adiabatic flame temperature
self.T_ad = self.gas.T
# Burnt mixture properties
self.h_b = self.gas.enthalpy_mass # units: J.kg^-1
self.cp_b = self.gas.cp_mass # units: J*K^-1*kg^-1
self.cv_b = self.gas.cv_mass # units: J*K^-1*kg^-1
self.rho_b = self.gas.density_mass # units: kg.m^-3
self.mu_b = self.gas.viscosity # Pa.s
self.nu_b = self.mu_b/self.rho_b # units: m^2.s^-1
self.lambda_b = self.gas.thermal_conductivity # units: W.m^-1.K^-1
self.alpha_b = self.lambda_b/(self.rho_b*self.cp_b) # units: m^2.s^-1
# Burnt mixture molar fractions
self.X_CO2 = self.gas["CO2"].X[0]
self.X_NO = self.gas["NO"].X[0]
self.X_NO2 = self.gas["NO2"].X[0]
#%% Function to retrieve the LHV of different kind of fuels
def heating_value(fuel):
""" Returns the LHV and HHV for the specified fuel """
T_u = 293.15
p_u = 101325
gas1 = ct.Solution('gri30.cti')
gas1.TP = T_u, p_u
gas1.set_equivalence_ratio(1.0, fuel, 'O2:1.0')
h1 = gas1.enthalpy_mass
Y_fuel = gas1[fuel].Y[0]
# complete combustion products
Y_products = {'CO2': gas1.elemental_mole_fraction('C'),
'H2O': 0.5 * gas1.elemental_mole_fraction('H'),
'N2': 0.5 * gas1.elemental_mole_fraction('N')}
gas1.TPX = None, None, Y_products
h2 = gas1.enthalpy_mass
LHV = -(h2-h1)/Y_fuel
return LHV
# Lower Heating Values of well-known combustion fuels
LHV_H2 = heating_value('H2')
LHV_CH4 = heating_value('CH4')
LHV_C2H6 = heating_value('C2H6')
#%% Create list of flame objects for multiple mixtures depending on the equivalence ratio
# and the percentage of hydrogen in the fuel (volume based)
# Initialize list for flame objects
mixtures = []
# Create flame objects and start simulations
for phi in phis:
for H2_percentage in H2_percentages:
########## PART B: ADJUST CODE HERE ##########
# Compressor stage
# Temperature after compressor stage
T3 = None # units: K
p3 = None # units: Pa
# Combustor inlet temperature in K and pressure in Pa
T_u = T3 # units: K
p_u = p3 # units: Pa
################# END PART B ##################
# Combustor stage
# Define unburnt mixture that goes into the combustor
mixture = mixture_class(phi, H2_percentage, T_u, p_u)
# Solve equations and obtain burnt mixture properties
mixture.solve_equations()
# Append the mixture (with unburnt and burnt properties) to list of mixtures
mixtures.append(mixture)
# Turbine stage
# Heat capacity ratio of mixture in turbine
gam_t = mixture.cp_b/mixture.cv_b
# Turbine inlet temperature
T4 = mixture.T_ad
########## PART C: ADJUST CODE HERE ##########
# Turbine outlet temperature
T5 = None
################# END PART C ##################
print('mixture solved: phi=' + str(phi) + ', H2%=' + str(H2_percentage))
#%% Plots A: Laminar flame speed/adiabatic flame temperture vs equivalence ratio
plt.close('all')
# Plot parameters
fontsize = 12
marker = 'o'
markersize = 8
linewidth = 1
linestyle = 'None'
# Figure 1: Laminar flame speed vs equivalence ratio
fig1, ax1 = plt.subplots()
ax1.set_xlabel(r'$\phi$ [-]', fontsize=fontsize)
ax1.set_ylabel(r'$S_L$ [cm.s$^{-1}$]', fontsize=fontsize)
ax1.set_xlim(0.3, 1.1)
ax1.set_ylim(0, 250)
ax1.set_title('Laminar flame speed vs. equivalence ratio \n $T_u=$' + str(round(T_u,2)) + ' K, $p_u$=' + str(p_u*1e-5) + ' bar')
ax1.grid()
# Figure 2: Adiabatic flame temperature vs equivalence ratio
fig2, ax2 = plt.subplots()
ax2.set_xlabel(r'$\phi$ [-]', fontsize=fontsize)
ax2.set_ylabel(r'$T_{ad}$ [K]', fontsize=fontsize)
ax2.set_xlim(0.3, 1.1)
ax2.set_ylim(1200, 2800)
ax2.grid()
ax2.set_title('Adiabatic flame temperature vs. equivalence ratio \n $T_u=$' + str(round(T_u,2)) + ' K, $p_u$=' + str(p_u*1e-5) + ' bar')
# Initialize list for laminar flame speeds
S_L0_lists = [[] for i in range(len(H2_percentages))]
# Initialize list for adiabatic flame temperatures
T_ad_lists = [[] for i in range(len(H2_percentages))]
# Fill Figure 1 and 2
for mixture in mixtures:
index = H2_percentages.index(mixture.H2_percentage)
ax1.plot(mixture.phi, mixture.S_L0, ls=linestyle, marker=marker, ms=markersize, c=mixture.color, label=mixture.label if mixture.phi == phis[0] else "")
ax2.plot(mixture.phi, mixture.T_ad, ls=linestyle, marker=marker, ms=markersize, c=mixture.color, label=mixture.label if mixture.phi == phis[0] else "")
S_L0_lists[index] = np.append(S_L0_lists[index], mixture.S_L0)
T_ad_lists[index] = np.append(T_ad_lists[index], mixture.T_ad)
# Plot polynomial fits to show trends for laminar flame speed and adiabatic flame temperature as a function of the equivalence ratio
if len(phis) == 1:
pass
else:
# Create zipped lists for polynomial fits
lists_zipped = zip(S_L0_lists, T_ad_lists, colors)
# Order of polynomial
poly_order = 3
for (S_L0, T_ad, color) in lists_zipped:
# Create new array for phi
phis_fit = np.linspace(phis[0], phis[-1])
# Plot 3rd order polynomial fit for laminar flame speed
coeff_S_L0 = np.polyfit(phis, S_L0, poly_order)
poly_S_L0 = np.poly1d(coeff_S_L0)
S_L0_fit = poly_S_L0(phis_fit)
ax1.plot(phis_fit, S_L0_fit, ls="--", c=color)
# Plot 3rd order polynomial fit for adiabatic flame temperature
coeff_T_ad = np.polyfit(phis, T_ad, poly_order)
poly_T_ad = np.poly1d(coeff_T_ad)
T_ad_fit = poly_T_ad(phis_fit)
ax2.plot(phis_fit, T_ad_fit, ls="--", c=color)
#% Plots B: Fuel blend properties and emissions
# Assume constant power (or heat input): heat_input = m_H2_dot*LHV_H2 + m_CH4_dot*LHV_CH4 = 1 (constant)
# Plot parameters
x_ticks = np.linspace(0, 100, 11)
y_ticks = x_ticks
bin_width = 5
# Initialize lists
H2_fraction_heat_input, H2_fraction_mass, CH4_fraction_heat_input, CH4_fraction_mass, CO2_fraction, fuel_energy_mass = ([] for i in range(6))
# Densities of hydrogen and methane of unburnt mixture
rho_H2 = mixture.rho_u_H2
rho_CH4 = mixture.rho_u_CH4
# Reference: Amount of CO2 when H2%=0 (1 mol of CH4 == 1 mol CO2)
Q_CO2_ref = 1 / (rho_CH4*LHV_CH4)
# Hydrogen fraction in the fuel
H2_fraction_volume = np.linspace(0, 1, 21)
# Mixture calculations
for x in H2_fraction_volume:
# Fractions of H2 and CH4 by heat input
H2_part = rho_H2*LHV_H2*x
CH4_part = rho_CH4*LHV_CH4*(1-x)
H2_fraction_heat_input_i = H2_part / (H2_part + CH4_part)
CH4_fraction_heat_input_i = 1 - H2_fraction_heat_input_i
H2_fraction_heat_input = np.append(H2_fraction_heat_input, H2_fraction_heat_input_i)
CH4_fraction_heat_input = np.append(CH4_fraction_heat_input, CH4_fraction_heat_input_i)
# Fraction of CO2 reduction
Q_u_i = 1 / (rho_H2*LHV_H2*x + rho_CH4*LHV_CH4*(1-x))
Q_CH4_i = Q_u_i*(1-x)
Q_CO2_i = Q_CH4_i
CO2_fraction_i = Q_CO2_i/Q_CO2_ref
CO2_fraction = np.append(CO2_fraction, CO2_fraction_i)
# Fractions of H2 and CH4 by mass
H2_part = rho_H2*x
CH4_part = rho_CH4*(1-x)
H2_fraction_mass_i = H2_part / (H2_part + CH4_part)
CH4_fraction_mass_i = 1- H2_fraction_mass_i
H2_fraction_mass = np.append(H2_fraction_mass, H2_fraction_mass_i)
CH4_fraction_mass = np.append(CH4_fraction_mass, CH4_fraction_mass_i)
# Fuel energy content
fuel_energy_mass_i = (H2_fraction_mass_i*LHV_H2 + CH4_fraction_mass_i*LHV_CH4)/1e6 # units: MJ.kg^-1
fuel_energy_mass = np.append(fuel_energy_mass, fuel_energy_mass_i)
# Convert fractions to percentages
CO2_percentage = 100*CO2_fraction
CO2_reduction_percentage = 100 - CO2_percentage
H2_percentage_volume = H2_fraction_volume*100
CH4_percentage_heat_input = CH4_fraction_heat_input*100
H2_percentage_heat_input = H2_fraction_heat_input*100
H2_percentage_mass = 100*H2_fraction_mass
CH4_percentage_mass = 100*CH4_fraction_mass
# Plots
fig3, ax3 = plt.subplots()
line3_0 = ax3.plot(H2_percentage_volume, CH4_percentage_heat_input, marker=marker, color='tab:blue', label=r'$CH_{4}$')
ax3.set_xticks(x_ticks)
ax3.set_yticks(y_ticks)
ax3.set_xlabel(r'$H_{2}$% (by volume)', fontsize=fontsize)
ax3.set_ylabel(r'$CH_{4}$% (by heat input)', fontsize=fontsize, color='tab:blue')
ax3.set_title('Heat input vs. volume percentage for a methane/hydrogen fuel blend')
ax3.grid()
ax3_1 = ax3.twinx()
line3_1 = ax3_1.plot(H2_percentage_volume, H2_percentage_heat_input, marker=marker, color='tab:orange', label=r'$H_{2}$')
ax3_1.set_ylabel(r'$H_{2}$% (by heat input)', fontsize=fontsize, color='tab:orange')
lines3 = line3_0 + line3_1
labels3 = [l.get_label() for l in lines3]
fig4, ax4 = plt.subplots()
ax4.plot(H2_percentage_heat_input, CO2_reduction_percentage, marker=marker)
ax4.set_xticks(x_ticks)
ax4.set_yticks(y_ticks)
ax4.set_xlabel(r'$H_{2}$% (by heat input)', fontsize=fontsize)
ax4.set_ylabel(r'$CO_2$ reduction [%]', fontsize=fontsize)
ax4.set_title(r'$CO_2$ emissions vs. hydrogen/methane fuel blends (heat input %)')
ax4.grid()
fig5, (ax5, ax5_1) = plt.subplots(2)
ax5.plot(H2_percentage_volume, CO2_percentage, marker=marker)
ax5.set_xticks(x_ticks)
ax5.set_yticks(y_ticks)
ax5.set_xlabel(r'$H_{2}$% (by volume)', fontsize=fontsize)
ax5.set_ylabel(r'$CO_2$ emissions [%]', fontsize=fontsize)
ax5.set_title(r'$CO_2$ emissions vs. hydrogen/methane fuel blends (volume %)')
ax5.grid()
for i, mixture in enumerate(mixtures):
if mixture.phi == phis[-1]:
NO_percentage_volume = mixture.X_NO*100
NO2_percentage_volume = mixture.X_NO2*100
ax5_1.bar(mixture.H2_percentage, NO_percentage_volume, bin_width, color='tab:red', label=r'$NO$' if i == 0 else "")
ax5_1.bar(mixture.H2_percentage, NO2_percentage_volume, bin_width, bottom=NO_percentage_volume, color='tab:blue', label=r'$NO_2$' if i == 0 else "")
ax5_1.set_title(r'$NO_x$ emissions for $\phi=$' + str(mixture.phi))
ax5_1.set_xlim(-5, 105)
ax5_1.set_xticks(x_ticks)
ax5_1.set_xlabel(r'$H_{2}$% (by volume)', fontsize=fontsize)
ax5_1.set_ylabel(r'$NO_x$ [%]', fontsize=fontsize)
ax5_1.grid()
fig6, ax6 = plt.subplots()
ax6.plot(H2_percentage_volume, H2_percentage_mass, marker=marker, color='tab:blue', label=r'$H_{2}$')
ax6.plot(H2_percentage_volume, CH4_percentage_mass, marker=marker, color='tab:orange', label=r'$CH_{4}$')
ax6.set_xticks(x_ticks)
ax6.set_yticks(y_ticks)
ax6.set_xlabel(r'$H_{2}$% (by volume)', fontsize=fontsize)
ax6.set_ylabel(r'wt.% (by mass)', fontsize=fontsize)
ax6.set_title(r'Weight vs. volume percentage for hydrogen/methane fuel blends')
ax6.grid()
fig7, ax7 = plt.subplots()
ax7.plot(H2_percentage_volume, fuel_energy_mass, lw=2, marker=marker, color='tab:red')
ax7.set_xticks(x_ticks)
ax7.set_xlabel(r'$H_{2}$% (by volume)', fontsize=fontsize)
ax7.set_ylabel(r'Fuel energy content [MJ.kg$^{-1}$]', fontsize=fontsize)
ax7.set_title(r'Energy content vs. volume percentage for hydrogen/methane fuel blends')
ax7.grid()
# Turn on legends
ax1.legend()
ax2.legend()
ax3.legend(lines3, labels3, loc='center left')
ax5.legend(bbox_to_anchor=(1, 1))
ax5_1.legend(bbox_to_anchor=(1, 1))
ax6.legend(bbox_to_anchor=(1, 1))
# Fix figures layout
fig1.tight_layout()
fig2.tight_layout()
fig3.tight_layout()
fig4.tight_layout()
fig5.tight_layout()
fig6.tight_layout()
fig7.tight_layout()
# Uncomment to save figures as .svg
# fig1.savefig('turbo3_1.svg')
# fig2.savefig('turbo3_2.svg')
# fig3.savefig('turbo3_3.svg')
# fig4.savefig('turbo3_4.svg')
# fig5.savefig('turbo3_5.svg')
# fig6.savefig('turbo3_6.svg')
# fig7.savefig('turbo3_7.svg')
########## PART D: ADJUST CODE HERE ##########
for mixture in mixtures:
# Equivalence ratio of the mixture
phi = mixture.phi
# Density of the unburnt mixture (before entering the combustor)
rho_u = mixture.rho_u # units: kg.m^-3
# Volumetric fractions of hydrogen and methane
f_H2 = mixture.f_H2
f_CH4 = mixture.f_CH4
# Specific heat capacity (heat capacity per unit mass)
cp_t = mixture.cp_b
# Total mass flow rate
m_dot = None # units: kg.s^-1
# Velocity in the combustor
V = mixture.S_T # units: m.s^-1
# Area and diameter of the combustor
A = None # units: m^2
D = None # units: m
# Fuel-to-air mass ratio at stoichiometric conditions
m_f_over_m_a_stoich = None
# Mass flow of the fuel
m_f_dot = None
# Heat input or Heat of combustion
Q = None # units: W
# thermal cycle efficiency
eta_cycle = None
################# END PART D ##################
###Output
_____no_output_____ |
_notebooks/2020-03-12-python-1-3.ipynb | ###Markdown
Learn Python by analyzing Dota 1 matches. Part 1/3: Jupyter Notebooks & Google Colab

> A fun way to learn Python while we analyze the information in Dota 1 replays

- toc: true
- badges: true
- comments: true
- author: Stanley Salvatierra
- image: images/dota1.jpg
- categories: [Python, Dota, NLP]

In February, during the carnival days in Bolivia, I was lucky enough to find among the files on my computer the Dota 1 replays that had been saved automatically over the years my friends and I played Dota 1.

The first thing that came to mind when I saw these files was the data they held, especially the chat messages, which were full of insults in most of the matches.

One of my interests is Natural Language Processing (NLP). So I feel this data can be used to build a basic insult classifier. These chat messages are not labeled, so I think some unsupervised Machine Learning algorithm will be used.

**Image of a toxic message in the middle of a Dota 1 match**

The file containing the replays weighs roughly ~900 MB and can be downloaded here: [Link to the file.](http://www.mediafire.com/file/4xmjki2xxy3kdgo/replays.zip/file)

As an additional note, there is a Windows tool to open these files; it is generally used by RGC moderators.

**Procedure**

This post is divided into three parts, which will be:
* What Jupyter Notebooks are and how to work with them.
* Introduction to Python and how to extract the replays.
* Analysis of the chat messages and the insult classifier (Machine Learning)

No prior knowledge of Python is required. I will use `Jupyter Notebook` to write this blog as well as to work with Python.

**1. What Jupyter Notebooks are and how to work with them**

An easy way to work with Python is to use the Jupyter Notebooks of [Google Colab](https://colab.research.google.com/). Inside a [Google Colab](https://colab.research.google.com/) notebook there are *`cells` where we can `write code`* or *`plain text` in `Markdown` format*.

The text I am writing right now follows the `Markdown` format. To learn more about what `Markdown` is, see the following [Link](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet).

An additional feature of the cells where you can `write code` is that I can also use Jupyter Notebook `magic commands`, which in the end are Linux commands.

**Magic commands:**

I will use the following `magic commands`:

**To download a file from the internet**
```unix
!wget <download-link>
```

**To list the files in the folder where this notebook is open.**
```unix
!ls -lsh
```

**To unzip a file**
```unix
!unzip <file>.zip -d <destination>
```

**Trying out the magic commands**

Next I will download the file, list the folder to check that it was downloaded, and then unzip it.

**Download**

To download the file we need the download link. Go to http://www.mediafire.com/file/4xmjki2xxy3kdgo/replays.zip/file, then right-click on `Copy link location` to get the download link. We pass this link to the `!wget` magic command.
###Code
!wget http://download1639.mediafire.com/iveo702e3bxg/4xmjki2xxy3kdgo/replays.zip
###Output
--2020-03-05 04:13:40-- http://download1639.mediafire.com/iveo702e3bxg/4xmjki2xxy3kdgo/replays.zip
Resolving download1639.mediafire.com (download1639.mediafire.com)... 199.91.152.139
Connecting to download1639.mediafire.com (download1639.mediafire.com)|199.91.152.139|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 897454746 (856M) [application/zip]
Saving to: ‘replays.zip’
replays.zip 100%[===================>] 855.88M 6.53MB/s in 2m 12s
2020-03-05 04:15:52 (6.51 MB/s) - ‘replays.zip’ saved [897454746/897454746]
###Markdown
**List**

`ls` stands for "list" and `-lsh` are the *flags* or parameters this command accepts:
* `l` list with details
* `s` show the size of the files.
* `h` show the results in a human-readable format
###Code
%ls -lsh
###Output
total 856M
856M -rw-r--r-- 1 root root 856M Mar 5 04:15 replays.zip
4.0K drwxr-xr-x 1 root root 4.0K Mar 3 18:11 [0m[01;34msample_data[0m/
###Markdown
It looks like my file was downloaded correctly. I am going to unzip it.

**Unzip**

`-d` refers to the "destination" where the archive will be unzipped, and the dot **.** refers to `this same folder where I currently am`
###Code
!unzip replays.zip -d .
###Output
Archive: replays.zip
creating: ./replays/
inflating: ./replays/Multiplayer.tar.gz
extracting: ./replays/DotAReplay_2.zip
extracting: ./replays/Multiplayer17.zip
extracting: ./replays/Multiplayer16.zip
###Markdown
If I list the unzipped folder `/replays`, I can see what is inside.
###Code
!ls replays/
###Output
DotAReplay_2.zip Multiplayer16.zip Multiplayer17.zip Multiplayer.tar.gz
###Markdown
It looks like it contains more compressed files inside. I will use `-lsh` to look at these files in more detail.
###Code
!ls -lsh replays/
###Output
total 856M
2.7M -rw-r--r-- 1 root root 2.7M May 26 2017 DotAReplay_2.zip
275M -rw-r--r-- 1 root root 275M May 26 2017 Multiplayer16.zip
70M -rw-r--r-- 1 root root 70M May 26 2017 Multiplayer17.zip
510M -rw-r--r-- 1 root root 510M May 26 2017 Multiplayer.tar.gz
###Markdown
It looks like we have other `.zip` files inside and one `.tar.gz`. To practice a bit more, we are going to extract the 275 MB `Multiplayer16.zip` and the 510 MB `Multiplayer.tar.gz`.

To extract `Multiplayer.tar.gz` we will use the following `magic command`.
```unix
!tar xvzf <file>.tar.gz -C <destination>
```

**Extracting the `.tar.gz`**

The `xvzf` parameters taken by the `!tar` magic command stand for:
* `x` extract the files.
* `v` *verbose*, meaning "show each file as it is being extracted"
* `z` this one is very important and says "decompress from `gzip`"
* `f` this tells `tar` "which file we are going to extract"

There is an optional parameter we are going to use, `-C`.
* `C` , the place where you want the archive to be extracted.
###Code
#collapse_show
!tar xvzf replays/Multiplayer.tar.gz -C replays/
###Output
Multiplayer/Replay_2013_08_23_1854.w3g
Multiplayer/Replay_2014_01_30_2356.w3g
Multiplayer/Replay_2013_11_23_1851.w3g
Multiplayer/Replay_2015_08_17_1239.w3g
Multiplayer/Replay_2015_06_25_1710.w3g
Multiplayer/Replay_2015_07_25_0029.w3g
Multiplayer/Replay_2015_06_24_2006.w3g
Multiplayer/Replay_2013_09_01_1357.w3g
Multiplayer/Replay_2013_08_18_1432.w3g
Multiplayer/Replay_2015_09_03_2102.w3g
Multiplayer/Replay_2015_06_29_1936.w3g
Multiplayer/Replay_2013_08_13_1957.w3g
Multiplayer/Replay_2013_07_18_1930.w3g
Multiplayer/Replay_2013_09_16_1732.w3g
Multiplayer/Replay_2015_07_26_2248.w3g
Multiplayer/Replay_2015_07_27_2311.w3g
Multiplayer/Replay_2013_07_29_2216.w3g
Multiplayer/Replay_2015_08_31_1400.w3g
Multiplayer/Replay_2015_08_09_1132.w3g
Multiplayer/Replay_2015_07_28_2016.w3g
Multiplayer/Replay_2013_08_30_1049.w3g
Multiplayer/Replay_2015_07_25_1530.w3g
Multiplayer/Replay_2013_10_20_0144.w3g
Multiplayer/Replay_2013_11_06_1800.w3g
Multiplayer/Replay_2013_08_13_1745.w3g
Multiplayer/Replay_2014_11_03_1411.w3g
Multiplayer/Replay_2015_07_24_2244.w3g
Multiplayer/Replay_2014_01_31_0035.w3g
Multiplayer/Replay_2014_01_10_1931.w3g
Multiplayer/Replay_2015_08_19_1631.w3g
Multiplayer/Replay_2015_07_23_2157.w3g
Multiplayer/Replay_2015_07_19_2359.w3g
Multiplayer/Replay_2013_12_10_0223.w3g
Multiplayer/Replay_2013_08_24_2141.w3g
Multiplayer/Replay_2013_10_02_2338.w3g
Multiplayer/Replay_2013_10_01_1603.w3g
Multiplayer/Replay_2013_10_15_1855.w3g
Multiplayer/Replay_2015_08_02_0050.w3g
Multiplayer/Replay_2013_11_03_0113.w3g
Multiplayer/Replay_2013_12_01_2203.w3g
Multiplayer/Replay_2015_07_16_2232.w3g
Multiplayer/Replay_2013_07_28_1133.w3g
Multiplayer/Replay_2013_12_18_1951.w3g
Multiplayer/Replay_2013_11_09_0112.w3g
Multiplayer/Replay_2013_08_23_2226.w3g
Multiplayer/Replay_2015_08_30_2320.w3g
Multiplayer/Replay_2013_08_09_0132.w3g
Multiplayer/Replay_2013_11_23_1840.w3g
Multiplayer/Replay_2013_12_21_2217.w3g
Multiplayer/Replay_2013_09_26_1219.w3g
Multiplayer/Replay_2013_07_22_0121.w3g
Multiplayer/Replay_2015_06_29_1641.w3g
Multiplayer/Replay_2015_07_23_2229.w3g
Multiplayer/Replay_2015_07_26_2330.w3g
Multiplayer/Replay_2015_06_26_1115.w3g
Multiplayer/Replay_2015_06_26_2317.w3g
Multiplayer/Replay_2013_08_25_1535.w3g
Multiplayer/Replay_2015_08_08_1836.w3g
Multiplayer/Replay_2015_06_23_1943.w3g
Multiplayer/Replay_2015_07_22_2006.w3g
Multiplayer/Replay_2015_08_09_1233.w3g
Multiplayer/Replay_2013_07_25_1326.w3g
Multiplayer/Replay_2013_12_20_2319.w3g
Multiplayer/Replay_2013_11_23_1606.w3g
Multiplayer/Replay_2013_11_27_2330.w3g
Multiplayer/Replay_2013_08_19_2228.w3g
Multiplayer/Replay_2013_12_27_2004.w3g
Multiplayer/Replay_2015_06_22_0141.w3g
Multiplayer/Replay_2013_08_29_1207.w3g
Multiplayer/Replay_2013_09_18_1530.w3g
Multiplayer/Replay_2014_01_12_1941.w3g
Multiplayer/Replay_2015_07_07_1846.w3g
Multiplayer/Replay_2015_06_24_2048.w3g
Multiplayer/Replay_2015_08_26_1922.w3g
Multiplayer/Replay_2015_07_30_2336.w3g
Multiplayer/Replay_2013_08_16_0010.w3g
Multiplayer/Replay_2015_07_25_1803.w3g
Multiplayer/Replay_2013_12_09_1547.w3g
Multiplayer/Replay_2015_08_20_0115.w3g
Multiplayer/Replay_2013_07_20_1355.w3g
Multiplayer/Replay_2013_12_16_1508.w3g
Multiplayer/Replay_2013_12_03_2101.w3g
Multiplayer/Replay_2014_03_12_2047.w3g
Multiplayer/Replay_2015_07_19_1719.w3g
Multiplayer/Replay_2013_10_29_2102.w3g
Multiplayer/Replay_2013_08_15_2038.w3g
Multiplayer/Replay_2013_10_11_1501.w3g
Multiplayer/Replay_2015_06_26_1204.w3g
Multiplayer/Replay_2013_08_03_0122.w3g
Multiplayer/Replay_2013_09_17_0048.w3g
Multiplayer/Replay_2013_12_10_0112.w3g
Multiplayer/Replay_2013_08_09_2348.w3g
Multiplayer/Replay_2015_09_05_1052.w3g
Multiplayer/Replay_2015_07_31_2111.w3g
Multiplayer/Replay_2013_08_04_1158.w3g
Multiplayer/Replay_2013_09_12_0036.w3g
Multiplayer/Replay_2015_08_10_0006.w3g
Multiplayer/Replay_2015_07_04_2038.w3g
Multiplayer/Replay_2013_12_01_2245.w3g
Multiplayer/Replay_2015_08_01_1739.w3g
Multiplayer/Replay_2015_08_29_2001.w3g
Multiplayer/Replay_2015_09_03_1716.w3g
Multiplayer/Replay_2013_10_29_2038.w3g
Multiplayer/Replay_2015_08_17_1936.w3g
Multiplayer/Replay_2015_08_23_1638.w3g
Multiplayer/Replay_2015_08_14_2328.w3g
Multiplayer/Replay_2015_08_13_1722.w3g
Multiplayer/Replay_2014_01_16_2255.w3g
Multiplayer/Replay_2013_10_09_2213.w3g
Multiplayer/Replay_2013_08_16_2209.w3g
Multiplayer/Replay_2014_02_02_2300.w3g
Multiplayer/Replay_2013_08_30_0109.w3g
Multiplayer/Replay_2013_07_18_1944.w3g
Multiplayer/Replay_2013_08_09_2000.w3g
Multiplayer/Replay_2013_08_14_2150.w3g
Multiplayer/Replay_2013_10_08_2126.w3g
Multiplayer/Replay_2015_07_29_2251.w3g
Multiplayer/Replay_2013_08_05_0115.w3g
Multiplayer/Replay_2013_10_18_1807.w3g
Multiplayer/Replay_2015_08_11_1152.w3g
Multiplayer/Replay_2015_06_29_1413.w3g
Multiplayer/Replay_2015_08_01_1639.w3g
Multiplayer/Replay_2013_08_30_0014.w3g
Multiplayer/Replay_2013_08_05_2340.w3g
Multiplayer/Replay_2013_08_23_1916.w3g
Multiplayer/Replay_2015_09_04_1455.w3g
Multiplayer/Replay_2015_09_05_1253.w3g
Multiplayer/Replay_2013_10_09_2113.w3g
Multiplayer/Replay_2013_08_18_1251.w3g
Multiplayer/Replay_2013_09_30_1530.w3g
Multiplayer/Replay_2015_06_29_0135.w3g
Multiplayer/Replay_2013_09_01_2201.w3g
Multiplayer/Replay_2015_08_02_1514.w3g
Multiplayer/Replay_2015_07_11_2334.w3g
Multiplayer/Replay_2013_11_22_1313.w3g
Multiplayer/Replay_2015_07_03_0231.w3g
Multiplayer/Replay_2015_08_13_2158.w3g
Multiplayer/Replay_2015_08_13_2046.w3g
Multiplayer/Replay_2013_11_30_2255.w3g
Multiplayer/Replay_2015_07_27_1749.w3g
Multiplayer/Replay_2013_12_12_2101.w3g
Multiplayer/Replay_2013_08_12_1954.w3g
Multiplayer/Replay_2013_11_28_0017.w3g
Multiplayer/Replay_2013_08_16_2143.w3g
Multiplayer/Replay_2014_01_22_0103.w3g
Multiplayer/Replay_2013_12_21_0116.w3g
Multiplayer/Replay_2013_09_10_1435.w3g
Multiplayer/Replay_2015_08_23_2239.w3g
Multiplayer/Replay_2013_10_04_0022.w3g
Multiplayer/Replay_2013_12_01_2122.w3g
Multiplayer/Replay_2013_11_23_1854.w3g
Multiplayer/Replay_2015_07_17_1320.w3g
Multiplayer/Replay_2014_02_04_1929.w3g
Multiplayer/Replay_2014_01_27_0015.w3g
Multiplayer/Replay_2013_07_18_2032.w3g
Multiplayer/Replay_2013_10_29_1532.w3g
Multiplayer/Replay_2015_08_31_1520.w3g
Multiplayer/Replay_2013_10_07_1650.w3g
Multiplayer/Replay_2013_07_20_1316.w3g
Multiplayer/Replay_2013_08_11_2038.w3g
Multiplayer/Replay_2013_09_20_2050.w3g
Multiplayer/Replay_2013_12_24_2256.w3g
Multiplayer/Replay_2013_12_06_1935.w3g
Multiplayer/Replay_2015_07_24_2058.w3g
Multiplayer/Replay_2013_11_16_1547.w3g
Multiplayer/Replay_2013_11_22_1249.w3g
Multiplayer/Replay_2015_08_04_1507.w3g
Multiplayer/Replay_2015_08_26_1919.w3g
Multiplayer/Replay_2013_12_08_0124.w3g
Multiplayer/Replay_2015_08_01_1516.w3g
Multiplayer/Replay_2015_08_26_2035.w3g
Multiplayer/Replay_2015_07_29_1322.w3g
Multiplayer/Replay_2013_08_30_1317.w3g
Multiplayer/Replay_2013_08_03_2026.w3g
Multiplayer/Replay_2013_10_09_1609.w3g
Multiplayer/Replay_2013_10_25_1509.w3g
Multiplayer/Replay_2013_12_07_2305.w3g
Multiplayer/Replay_2015_09_06_2108.w3g
Multiplayer/Replay_2015_09_04_1745.w3g
Multiplayer/Replay_2013_09_14_2114.w3g
Multiplayer/Replay_2013_11_23_1809.w3g
Multiplayer/Replay_2015_07_11_2323.w3g
Multiplayer/Replay_2014_02_02_2342.w3g
Multiplayer/Replay_2015_08_01_2045.w3g
Multiplayer/Replay_2013_10_03_0041.w3g
Multiplayer/Replay_2013_10_01_2235.w3g
Multiplayer/Replay_2014_01_13_1123.w3g
Multiplayer/Replay_2013_12_25_0052.w3g
Multiplayer/Replay_2013_10_11_1804.w3g
Multiplayer/Replay_2013_11_30_1615.w3g
Multiplayer/Replay_2013_08_18_2153.w3g
Multiplayer/Replay_2013_12_19_1907.w3g
Multiplayer/Replay_2013_07_31_2030.w3g
Multiplayer/Replay_2013_08_12_2326.w3g
Multiplayer/Replay_2013_12_02_1244.w3g
Multiplayer/Replay_2015_06_29_1720.w3g
Multiplayer/Replay_2015_07_08_2340.w3g
Multiplayer/Replay_2015_08_04_1424.w3g
Multiplayer/Replay_2015_08_20_1324.w3g
Multiplayer/Replay_2015_09_01_1637.w3g
Multiplayer/Replay_2013_11_09_1429.w3g
Multiplayer/Replay_2015_08_13_1819.w3g
Multiplayer/Replay_2013_11_23_1843.w3g
Multiplayer/Replay_2013_12_23_2258.w3g
Multiplayer/Replay_2014_01_13_1159.w3g
Multiplayer/Replay_2013_11_16_1500.w3g
Multiplayer/Replay_2013_07_24_2124.w3g
Multiplayer/Replay_2013_12_21_0010.w3g
Multiplayer/Replay_2013_11_24_1548.w3g
Multiplayer/Replay_2013_11_06_1720.w3g
Multiplayer/Replay_2015_08_15_0008.w3g
Multiplayer/Replay_2014_02_04_1823.w3g
Multiplayer/Replay_2015_07_15_1708.w3g
Multiplayer/Replay_2013_08_12_0949.w3g
Multiplayer/Replay_2015_08_18_2000.w3g
Multiplayer/Replay_2015_08_23_2150.w3g
Multiplayer/Replay_2013_08_11_1644.w3g
Multiplayer/Replay_2015_08_23_2016.w3g
Multiplayer/Replay_2013_09_27_1209.w3g
Multiplayer/Replay_2015_06_23_1946.w3g
Multiplayer/Replay_2015_08_31_1946.w3g
Multiplayer/Replay_2015_09_06_0006.w3g
Multiplayer/Replay_2013_08_13_2349.w3g
Multiplayer/Replay_2014_01_16_2116.w3g
Multiplayer/Replay_2015_07_22_1932.w3g
Multiplayer/Replay_2013_08_15_2117.w3g
Multiplayer/Replay_2013_12_23_1947.w3g
Multiplayer/Replay_2014_02_09_1618.w3g
Multiplayer/Replay_2015_08_02_1320.w3g
Multiplayer/Replay_2015_06_28_0009.w3g
Multiplayer/Replay_2015_08_25_2109.w3g
Multiplayer/Replay_2013_11_22_1427.w3g
Multiplayer/Replay_2015_06_25_1903.w3g
Multiplayer/Replay_2013_10_30_0105.w3g
Multiplayer/Replay_2013_10_27_0105.w3g
Multiplayer/Replay_2013_12_26_1838.w3g
Multiplayer/Replay_2014_02_03_2301.w3g
Multiplayer/Replay_2015_09_04_2117.w3g
Multiplayer/Replay_2013_10_29_1056.w3g
Multiplayer/Replay_2013_07_30_2104.w3g
Multiplayer/Replay_2015_08_24_1656.w3g
Multiplayer/Replay_2013_12_21_1430.w3g
Multiplayer/Replay_2013_12_20_2239.w3g
Multiplayer/Replay_2015_08_13_1259.w3g
Multiplayer/Replay_2015_08_24_1828.w3g
Multiplayer/Replay_2015_06_27_1311.w3g
Multiplayer/Replay_2015_07_03_2207.w3g
Multiplayer/Replay_2013_11_24_1630.w3g
Multiplayer/Replay_2013_12_21_2209.w3g
Multiplayer/Replay_2015_08_01_1952.w3g
Multiplayer/Replay_2015_07_18_1404.w3g
Multiplayer/Replay_2014_01_21_2355.w3g
Multiplayer/Replay_2015_09_02_1842.w3g
Multiplayer/Replay_2014_03_16_2310.w3g
Multiplayer/Replay_2015_08_19_1807.w3g
Multiplayer/Replay_2013_12_01_0021.w3g
Multiplayer/Replay_2013_08_18_1148.w3g
Multiplayer/Replay_2013_12_23_2018.w3g
Multiplayer/Replay_2015_06_27_1459.w3g
Multiplayer/Replay_2015_09_06_1956.w3g
Multiplayer/Replay_2015_07_19_2020.w3g
Multiplayer/Replay_2013_08_03_1604.w3g
Multiplayer/Replay_2015_06_23_1935.w3g
Multiplayer/Replay_2015_08_16_1833.w3g
Multiplayer/Replay_2013_12_02_2352.w3g
Multiplayer/Replay_2015_07_07_0240.w3g
Multiplayer/Replay_2013_10_03_1015.w3g
Multiplayer/Replay_2013_12_15_1358.w3g
Multiplayer/Replay_2014_01_31_2207.w3g
Multiplayer/Replay_2013_12_23_2052.w3g
Multiplayer/Replay_2015_09_06_0012.w3g
Multiplayer/Replay_2013_08_11_1934.w3g
Multiplayer/Replay_2015_08_12_1927.w3g
Multiplayer/Replay_2015_07_17_1635.w3g
Multiplayer/Replay_2013_09_01_1921.w3g
Multiplayer/Replay_2015_08_31_1305.w3g
Multiplayer/Replay_2013_08_23_1523.w3g
Multiplayer/Replay_2013_09_28_1338.w3g
Multiplayer/Replay_2014_02_04_2112.w3g
Multiplayer/Replay_2013_12_21_1402.w3g
Multiplayer/Replay_2013_07_21_0115.w3g
Multiplayer/Replay_2013_08_02_0144.w3g
Multiplayer/Replay_2015_08_16_1905.w3g
Multiplayer/Replay_2014_01_13_1254.w3g
Multiplayer/Replay_2013_08_25_1444.w3g
Multiplayer/Replay_2015_06_25_2139.w3g
Multiplayer/Replay_2013_11_23_1138.w3g
Multiplayer/Replay_2013_12_25_1628.w3g
Multiplayer/Replay_2015_08_10_2151.w3g
Multiplayer/Replay_2013_11_22_1450.w3g
Multiplayer/Replay_2013_09_25_1512.w3g
Multiplayer/Replay_2015_08_05_2154.w3g
Multiplayer/Replay_2013_07_29_1349.w3g
Multiplayer/Replay_2013_09_20_2000.w3g
Multiplayer/Replay_2015_07_19_1857.w3g
Multiplayer/Replay_2013_08_21_1803.w3g
Multiplayer/Replay_2013_11_22_1333.w3g
Multiplayer/Replay_2013_09_18_1723.w3g
Multiplayer/Replay_2015_07_26_1818.w3g
Multiplayer/Replay_2013_08_10_1652.w3g
Multiplayer/Replay_2013_09_25_1419.w3g
Multiplayer/Replay_2013_10_01_1637.w3g
Multiplayer/Replay_2015_07_23_0112.w3g
Multiplayer/Replay_2013_08_29_1305.w3g
Multiplayer/Replay_2013_09_28_2056.w3g
Multiplayer/Replay_2013_12_24_2223.w3g
Multiplayer/Replay_2015_07_22_1339.w3g
Multiplayer/Replay_2015_06_26_1514.w3g
Multiplayer/Replay_2013_11_12_1837.w3g
Multiplayer/Replay_2015_08_05_2054.w3g
Multiplayer/Replay_2015_07_01_0249.w3g
Multiplayer/Replay_2013_11_29_2103.w3g
Multiplayer/Replay_2013_11_30_1016.w3g
Multiplayer/Replay_2014_02_07_2056.w3g
Multiplayer/Replay_2015_08_09_1403.w3g
Multiplayer/Replay_2015_07_23_2244.w3g
Multiplayer/Replay_2013_07_24_0059.w3g
Multiplayer/Replay_2013_08_01_0146.w3g
Multiplayer/Replay_2013_12_19_2243.w3g
Multiplayer/Replay_2015_06_21_2303.w3g
Multiplayer/Replay_2013_12_02_1748.w3g
Multiplayer/Replay_2013_12_27_2114.w3g
Multiplayer/Replay_2014_01_12_2057.w3g
Multiplayer/Replay_2013_08_25_1817.w3g
Multiplayer/Replay_2014_01_10_2019.w3g
Multiplayer/Replay_2013_08_21_1720.w3g
Multiplayer/Replay_2015_08_20_1921.w3g
Multiplayer/Replay_2013_10_18_1532.w3g
Multiplayer/Replay_2014_02_07_2204.w3g
Multiplayer/Replay_2013_11_27_2347.w3g
Multiplayer/Replay_2015_08_17_1143.w3g
Multiplayer/Replay_2013_08_16_2218.w3g
Multiplayer/Replay_2013_11_18_2253.w3g
Multiplayer/Replay_2013_08_18_2120.w3g
Multiplayer/Replay_2013_11_22_1422.w3g
Multiplayer/Replay_2013_07_30_2046.w3g
Multiplayer/Replay_2015_08_17_1315.w3g
Multiplayer/Replay_2014_11_08_2340.w3g
Multiplayer/Replay_2015_08_08_1734.w3g
Multiplayer/Replay_2013_10_20_1729.w3g
Multiplayer/Replay_2013_08_17_1958.w3g
Multiplayer/Replay_2015_09_05_1417.w3g
Multiplayer/Replay_2015_06_30_2030.w3g
Multiplayer/Replay_2013_12_26_0008.w3g
Multiplayer/Replay_2013_08_07_2007.w3g
Multiplayer/Replay_2015_06_27_1401.w3g
Multiplayer/Replay_2013_11_23_1810.w3g
Multiplayer/Replay_2015_06_29_1859.w3g
Multiplayer/Replay_2013_10_18_2233.w3g
Multiplayer/Replay_2015_08_23_2100.w3g
Multiplayer/Replay_2013_09_21_1559.w3g
Multiplayer/Replay_2014_11_03_2347.w3g
Multiplayer/Replay_2015_07_01_1516.w3g
Multiplayer/Replay_2015_06_26_1620.w3g
Multiplayer/Replay_2013_12_01_0111.w3g
Multiplayer/Replay_2015_06_25_2050.w3g
Multiplayer/Replay_2015_09_03_2154.w3g
Multiplayer/Replay_2013_12_02_1930.w3g
Multiplayer/Replay_2013_11_29_2321.w3g
Multiplayer/Replay_2015_06_24_1847.w3g
Multiplayer/Replay_2014_02_03_0236.w3g
Multiplayer/Replay_2013_10_28_0106.w3g
Multiplayer/Replay_2015_07_24_1920.w3g
Multiplayer/Replay_2015_07_02_1904.w3g
Multiplayer/Replay_2015_08_02_1559.w3g
Multiplayer/Replay_2013_08_16_2152.w3g
Multiplayer/Replay_2015_06_29_1759.w3g
Multiplayer/Replay_2013_08_06_1213.w3g
Multiplayer/Replay_2013_08_31_2008.w3g
Multiplayer/Replay_2015_09_02_1744.w3g
Multiplayer/Replay_2015_06_29_1246.w3g
Multiplayer/Replay_2013_10_11_1724.w3g
Multiplayer/Replay_2013_11_09_0152.w3g
Multiplayer/Replay_2013_12_10_1613.w3g
Multiplayer/Replay_2013_09_19_2235.w3g
Multiplayer/Replay_2013_08_16_2324.w3g
Multiplayer/Replay_2015_07_23_2345.w3g
Multiplayer/Replay_2015_07_17_2242.w3g
Multiplayer/Replay_2013_09_22_0003.w3g
Multiplayer/Replay_2015_08_01_1601.w3g
Multiplayer/Replay_2013_12_02_0005.w3g
Multiplayer/Replay_2013_11_27_0221.w3g
Multiplayer/Replay_2013_08_03_1506.w3g
Multiplayer/Replay_2013_12_24_1753.w3g
Multiplayer/Replay_2013_09_05_0016.w3g
Multiplayer/Replay_2013_10_26_2329.w3g
Multiplayer/Replay_2013_08_18_1728.w3g
Multiplayer/Replay_2013_08_26_0042.w3g
Multiplayer/Replay_2015_09_02_1644.w3g
Multiplayer/Replay_2015_07_01_1605.w3g
Multiplayer/Replay_2013_08_09_1634.w3g
Multiplayer/Replay_2014_02_07_0002.w3g
Multiplayer/Replay_2015_06_22_0052.w3g
Multiplayer/Replay_2014_02_05_2325.w3g
Multiplayer/Replay_2014_01_31_2238.w3g
Multiplayer/Replay_2013_07_31_2005.w3g
Multiplayer/Replay_2013_12_01_2315.w3g
Multiplayer/Replay_2013_10_29_1040.w3g
Multiplayer/Replay_2013_08_11_1719.w3g
Multiplayer/Replay_2015_06_22_2031.w3g
Multiplayer/Replay_2013_11_30_0959.w3g
Multiplayer/Replay_2015_09_05_1151.w3g
Multiplayer/Replay_2013_12_26_0126.w3g
Multiplayer/Replay_2013_09_23_1851.w3g
Multiplayer/Replay_2013_09_09_1743.w3g
Multiplayer/Replay_2013_08_07_1630.w3g
Multiplayer/Replay_2013_12_26_1457.w3g
Multiplayer/Replay_2013_09_16_0237.w3g
Multiplayer/Replay_2013_09_29_0137.w3g
Multiplayer/Replay_2013_09_01_1723.w3g
Multiplayer/Replay_2013_08_01_0107.w3g
Multiplayer/Replay_2013_12_15_1308.w3g
Multiplayer/Replay_2013_11_03_1950.w3g
Multiplayer/Replay_2013_11_23_1835.w3g
Multiplayer/Replay_2015_08_23_1324.w3g
Multiplayer/Replay_2013_08_30_2027.w3g
Multiplayer/Replay_2015_07_24_2003.w3g
Multiplayer/Replay_2013_08_26_1846.w3g
Multiplayer/Replay_2013_10_06_0213.w3g
Multiplayer/Replay_2013_12_07_2214.w3g
Multiplayer/Replay_2013_10_18_1629.w3g
Multiplayer/Replay_2013_08_12_1835.w3g
Multiplayer/Replay_2013_12_03_1908.w3g
Multiplayer/Replay_2013_10_20_0228.w3g
Multiplayer/Replay_2013_08_14_0048.w3g
Multiplayer/Replay_2015_06_25_2340.w3g
Multiplayer/Replay_2013_11_22_1211.w3g
Multiplayer/Replay_2013_09_10_1717.w3g
Multiplayer/Replay_2013_10_30_0137.w3g
Multiplayer/Replay_2015_08_27_2103.w3g
Multiplayer/Replay_2015_08_25_2311.w3g
Multiplayer/Replay_2015_08_23_1513.w3g
Multiplayer/Replay_2015_08_13_1427.w3g
Multiplayer/Replay_2013_11_21_0118.w3g
Multiplayer/Replay_2015_08_20_2017.w3g
Multiplayer/Replay_2013_08_23_2138.w3g
Multiplayer/Replay_2015_08_10_1436.w3g
Multiplayer/Replay_2013_12_28_1037.w3g
Multiplayer/Replay_2013_11_25_1527.w3g
Multiplayer/Replay_2014_03_14_1928.w3g
Multiplayer/Replay_2013_07_18_1931.w3g
Multiplayer/Replay_2015_08_05_1950.w3g
Multiplayer/Replay_2013_08_21_1851.w3g
Multiplayer/Replay_2015_07_30_0350.w3g
Multiplayer/Replay_2013_12_23_2339.w3g
Multiplayer/Replay_2013_10_04_2207.w3g
Multiplayer/Replay_2013_08_18_2251.w3g
Multiplayer/Replay_2014_02_10_2352.w3g
Multiplayer/Replay_2015_07_24_2228.w3g
Multiplayer/Replay_2013_08_11_1801.w3g
Multiplayer/Replay_2013_08_17_1936.w3g
Multiplayer/Replay_2013_08_15_2014.w3g
Multiplayer/Replay_2013_08_05_1433.w3g
Multiplayer/Replay_2015_07_27_1700.w3g
Multiplayer/Replay_2013_11_11_1533.w3g
Multiplayer/Replay_2015_07_15_1751.w3g
Multiplayer/Replay_2013_08_27_1733.w3g
Multiplayer/Replay_2013_10_31_1209.w3g
Multiplayer/Replay_2013_10_29_0135.w3g
Multiplayer/Replay_2015_07_31_1651.w3g
Multiplayer/Replay_2014_03_14_1850.w3g
Multiplayer/Replay_2014_01_11_0151.w3g
Multiplayer/Replay_2013_08_25_1848.w3g
Multiplayer/Replay_2015_08_23_1430.w3g
Multiplayer/Replay_2013_12_19_2004.w3g
Multiplayer/Replay_2013_07_18_1738.w3g
Multiplayer/Replay_2013_10_04_2224.w3g
Multiplayer/Replay_2015_07_03_0330.w3g
Multiplayer/Replay_2013_10_30_0114.w3g
Multiplayer/Replay_2015_08_05_1757.w3g
Multiplayer/Replay_2015_07_25_2215.w3g
Multiplayer/Replay_2013_07_31_1800.w3g
Multiplayer/Replay_2015_07_03_1816.w3g
Multiplayer/Replay_2013_10_23_1503.w3g
Multiplayer/Replay_2015_06_29_2254.w3g
Multiplayer/Replay_2015_07_25_1715.w3g
Multiplayer/Replay_2015_06_24_2225.w3g
Multiplayer/Replay_2015_08_25_1900.w3g
Multiplayer/Replay_2013_09_26_0152.w3g
Multiplayer/Replay_2013_10_16_1313.w3g
Multiplayer/Replay_2014_01_29_2138.w3g
Multiplayer/Replay_2013_09_23_2011.w3g
Multiplayer/Replay_2015_07_23_2058.w3g
Multiplayer/Replay_2013_12_31_1520.w3g
Multiplayer/Replay_2015_08_11_1247.w3g
Multiplayer/Replay_2014_01_17_1937.w3g
Multiplayer/Replay_2013_12_24_0018.w3g
Multiplayer/Replay_2015_08_27_2204.w3g
Multiplayer/Replay_2015_08_31_2043.w3g
Multiplayer/Replay_2013_12_25_1548.w3g
Multiplayer/Replay_2015_08_23_1213.w3g
Multiplayer/Replay_2013_09_30_1608.w3g
Multiplayer/Replay_2015_08_05_0114.w3g
Multiplayer/Replay_2013_11_23_1058.w3g
Multiplayer/Replay_2013_09_11_1829.w3g
Multiplayer/Replay_2013_09_16_0143.w3g
Multiplayer/Replay_2015_08_07_1731.w3g
Multiplayer/Replay_2013_11_24_1940.w3g
Multiplayer/Replay_2013_07_28_1206.w3g
Multiplayer/Replay_2015_07_26_1855.w3g
Multiplayer/Replay_2015_07_04_2220.w3g
Multiplayer/Replay_2013_08_13_1145.w3g
Multiplayer/Replay_2013_09_15_1523.w3g
Multiplayer/Replay_2013_08_10_1727.w3g
Multiplayer/Replay_2015_08_05_1710.w3g
Multiplayer/Replay_2015_06_30_2133.w3g
Multiplayer/Replay_2013_09_06_1808.w3g
Multiplayer/Replay_2013_10_21_1535.w3g
Multiplayer/Replay_2015_07_03_1905.w3g
Multiplayer/Replay_2013_08_18_1723.w3g
Multiplayer/Replay_2015_07_26_0159.w3g
Multiplayer/Replay_2013_07_20_1339.w3g
Multiplayer/Replay_2015_07_02_1952.w3g
Multiplayer/Replay_2014_02_03_0055.w3g
Multiplayer/Replay_2015_06_26_2218.w3g
Multiplayer/Replay_2013_10_16_1402.w3g
Multiplayer/Replay_2015_07_20_0041.w3g
Multiplayer/Replay_2013_09_16_1510.w3g
Multiplayer/Replay_2013_07_20_1733.w3g
Multiplayer/Replay_2014_01_11_0047.w3g
Multiplayer/Replay_2015_08_20_1447.w3g
Multiplayer/Replay_2014_01_29_2244.w3g
Multiplayer/Replay_2013_10_11_1638.w3g
Multiplayer/Replay_2015_07_31_1523.w3g
Multiplayer/Replay_2013_11_23_1240.w3g
Multiplayer/Replay_2015_08_06_1716.w3g
Multiplayer/Replay_2014_02_03_0203.w3g
Multiplayer/Replay_2013_11_23_1813.w3g
Multiplayer/Replay_2015_07_26_0059.w3g
Multiplayer/Replay_2013_10_06_0038.w3g
Multiplayer/Replay_2013_11_28_1307.w3g
Multiplayer/Replay_2013_10_14_1646.w3g
Multiplayer/Replay_2015_07_22_1849.w3g
Multiplayer/Replay_2014_03_19_0055.w3g
Multiplayer/Replay_2013_12_06_1858.w3g
Multiplayer/Replay_2015_08_04_2059.w3g
Multiplayer/Replay_2013_08_16_2138.w3g
Multiplayer/Replay_2013_07_20_1256.w3g
Multiplayer/Replay_2015_06_23_2127.w3g
Multiplayer/Replay_2015_06_22_1946.w3g
Multiplayer/Replay_2013_10_09_1513.w3g
Multiplayer/Replay_2013_08_25_1601.w3g
Multiplayer/Replay_2013_08_15_1957.w3g
Multiplayer/Replay_2013_10_28_0109.w3g
Multiplayer/Replay_2015_07_20_0113.w3g
Multiplayer/Replay_2013_08_25_1508.w3g
Multiplayer/Replay_2015_06_25_1954.w3g
Multiplayer/Replay_2013_08_01_2329.w3g
Multiplayer/Replay_2013_09_01_2137.w3g
Multiplayer/Replay_2015_07_01_1419.w3g
Multiplayer/Replay_2015_09_03_1810.w3g
Multiplayer/Replay_2015_08_10_2215.w3g
Multiplayer/Replay_2015_06_24_0040.w3g
Multiplayer/Replay_2015_07_20_0130.w3g
Multiplayer/Replay_2013_09_15_1624.w3g
Multiplayer/Replay_2013_08_16_2327.w3g
Multiplayer/Replay_2013_08_10_1544.w3g
Multiplayer/Replay_2013_10_08_2244.w3g
Multiplayer/Replay_2013_07_18_2044.w3g
Multiplayer/Replay_2014_02_06_0018.w3g
Multiplayer/Replay_2015_06_24_2131.w3g
Multiplayer/Replay_2013_08_24_2239.w3g
Multiplayer/Replay_2015_07_17_1215.w3g
Multiplayer/Replay_2015_06_25_2237.w3g
Multiplayer/Replay_2013_11_30_1956.w3g
Multiplayer/Replay_2015_07_26_0012.w3g
Multiplayer/Replay_2013_10_29_0113.w3g
Multiplayer/Replay_2013_10_02_1344.w3g
Multiplayer/Replay_2015_06_24_2303.w3g
Multiplayer/Replay_2015_07_24_2335.w3g
Multiplayer/Replay_2013_10_29_1026.w3g
Multiplayer/Replay_2013_08_14_1121.w3g
Multiplayer/Replay_2015_08_18_1924.w3g
Multiplayer/Replay_2015_06_25_0158.w3g
Multiplayer/Replay_2015_08_01_1905.w3g
Multiplayer/Replay_2013_07_18_1828.w3g
Multiplayer/Replay_2013_08_25_2338.w3g
Multiplayer/Replay_2013_11_23_1802.w3g
Multiplayer/Replay_2013_08_09_2302.w3g
Multiplayer/Replay_2015_06_24_1402.w3g
Multiplayer/Replay_2015_06_27_1546.w3g
Multiplayer/Replay_2013_08_24_2003.w3g
Multiplayer/Replay_2013_10_18_1730.w3g
Multiplayer/Replay_2013_11_11_0004.w3g
Multiplayer/Replay_2014_02_07_2249.w3g
Multiplayer/Replay_2013_09_12_1304.w3g
Multiplayer/Replay_2013_08_07_2016.w3g
Multiplayer/Replay_2014_02_07_2001.w3g
Multiplayer/Replay_2013_10_02_1416.w3g
Multiplayer/Replay_2013_12_20_0035.w3g
Multiplayer/Replay_2015_08_05_1635.w3g
Multiplayer/Replay_2013_11_23_1838.w3g
Multiplayer/Replay_2013_12_26_0034.w3g
Multiplayer/Replay_2015_07_03_2058.w3g
Multiplayer/Replay_2014_01_10_0002.w3g
Multiplayer/Replay_2013_08_02_0042.w3g
Multiplayer/Replay_2013_09_06_2155.w3g
Multiplayer/Replay_2013_10_19_1318.w3g
Multiplayer/Replay_2013_12_25_1013.w3g
Multiplayer/Replay_2013_08_17_2002.w3g
Multiplayer/Replay_2013_12_16_1432.w3g
Multiplayer/Replay_2013_07_31_0135.w3g
Multiplayer/Replay_2015_08_11_2327.w3g
Multiplayer/Replay_2013_11_22_1520.w3g
Multiplayer/Replay_2015_08_10_1327.w3g
Multiplayer/Replay_2013_10_23_1551.w3g
Multiplayer/Replay_2013_08_12_2238.w3g
Multiplayer/Replay_2015_06_23_2322.w3g
Multiplayer/Replay_2013_07_20_1317.w3g
Multiplayer/Replay_2015_08_24_2227.w3g
Multiplayer/Replay_2015_08_24_1732.w3g
Multiplayer/Replay_2015_08_27_2123.w3g
Multiplayer/Replay_2013_11_19_1539.w3g
Multiplayer/Replay_2015_08_30_2235.w3g
Multiplayer/Replay_2014_02_08_2024.w3g
Multiplayer/Replay_2013_07_20_1847.w3g
Multiplayer/Replay_2013_10_16_1115.w3g
Multiplayer/Replay_2015_07_25_2010.w3g
Multiplayer/Replay_2013_08_18_1207.w3g
Multiplayer/Replay_2013_11_24_2108.w3g
Multiplayer/Replay_2013_11_30_0001.w3g
Multiplayer/Replay_2015_08_10_1227.w3g
Multiplayer/Replay_2015_08_02_0438.w3g
Multiplayer/Replay_2014_03_15_1651.w3g
Multiplayer/Replay_2014_02_04_0020.w3g
Multiplayer/Replay_2015_07_04_2117.w3g
Multiplayer/Replay_2013_08_13_2108.w3g
Multiplayer/Replay_2015_07_17_1522.w3g
Multiplayer/Replay_2013_11_12_1659.w3g
Multiplayer/Replay_2013_12_03_1332.w3g
Multiplayer/Replay_2015_08_04_1826.w3g
Multiplayer/Replay_2014_02_10_2255.w3g
Multiplayer/Replay_2013_11_19_2257.w3g
Multiplayer/Replay_2015_08_13_1937.w3g
Multiplayer/Replay_2015_07_05_2350.w3g
Multiplayer/Replay_2013_10_11_2317.w3g
Multiplayer/Replay_2015_08_14_1222.w3g
Multiplayer/Replay_2013_10_29_1617.w3g
Multiplayer/Replay_2015_06_28_2058.w3g
Multiplayer/Replay_2013_11_24_2025.w3g
Multiplayer/Replay_2015_08_16_1945.w3g
Multiplayer/Replay_2015_06_23_2029.w3g
Multiplayer/Replay_2014_01_21_2255.w3g
Multiplayer/Replay_2013_11_06_1857.w3g
Multiplayer/Replay_2014_01_16_2217.w3g
Multiplayer/Replay_2015_06_25_1815.w3g
Multiplayer/Replay_2015_08_25_2038.w3g
Multiplayer/Replay_2013_10_18_1903.w3g
Multiplayer/Replay_2015_08_27_2207.w3g
Multiplayer/Replay_2015_07_08_1901.w3g
Multiplayer/Replay_2013_10_28_1700.w3g
Multiplayer/Replay_2013_11_22_1220.w3g
Multiplayer/Replay_2013_10_29_2034.w3g
Multiplayer/Replay_2013_09_22_1828.w3g
Multiplayer/Replay_2015_08_01_1334.w3g
Multiplayer/Replay_2013_11_27_2050.w3g
Multiplayer/Replay_2013_08_05_2302.w3g
Multiplayer/Replay_2015_08_01_2135.w3g
Multiplayer/Replay_2015_08_06_1800.w3g
Multiplayer/Replay_2013_12_03_2050.w3g
Multiplayer/Replay_2015_07_31_2255.w3g
Multiplayer/Replay_2013_08_04_0007.w3g
Multiplayer/Replay_2013_08_25_1403.w3g
Multiplayer/Replay_2013_10_06_0122.w3g
Multiplayer/Replay_2014_02_03_2249.w3g
Multiplayer/Replay_2015_08_02_0009.w3g
Multiplayer/Replay_2015_07_16_2325.w3g
Multiplayer/Replay_2013_08_15_2118.w3g
Multiplayer/Replay_2013_11_19_2346.w3g
Multiplayer/Replay_2015_06_27_1106.w3g
Multiplayer/Replay_2013_09_10_1514.w3g
Multiplayer/Replay_2015_07_09_0039.w3g
Multiplayer/Replay_2013_11_29_2009.w3g
Multiplayer/Replay_2015_06_30_1524.w3g
Multiplayer/Replay_2013_07_28_1243.w3g
Multiplayer/Replay_2013_10_31_0150.w3g
Multiplayer/Replay_2013_10_08_2158.w3g
Multiplayer/Replay_2015_07_07_0152.w3g
Multiplayer/Replay_2013_07_22_0231.w3g
Multiplayer/Replay_2013_08_10_2123.w3g
Multiplayer/Replay_2015_08_06_2212.w3g
Multiplayer/Replay_2013_12_02_2300.w3g
Multiplayer/Replay_2013_08_25_2257.w3g
Multiplayer/Replay_2013_08_29_1150.w3g
Multiplayer/Replay_2015_06_27_1159.w3g
Multiplayer/Replay_2015_08_01_1841.w3g
Multiplayer/Replay_2013_11_25_0022.w3g
Multiplayer/Replay_2015_07_25_2107.w3g
Multiplayer/Replay_2013_11_14_1851.w3g
Multiplayer/Replay_2015_07_07_0052.w3g
Multiplayer/Replay_2013_11_30_0038.w3g
Multiplayer/Replay_2013_11_19_0031.w3g
Multiplayer/Replay_2013_08_31_1459.w3g
Multiplayer/Replay_2013_08_18_1149.w3g
Multiplayer/Replay_2013_09_26_1229.w3g
Multiplayer/Replay_2013_08_18_1832.w3g
Multiplayer/Replay_2013_10_09_1645.w3g
Multiplayer/Replay_2013_11_08_1213.w3g
Multiplayer/Replay_2015_07_02_1643.w3g
Multiplayer/Replay_2015_06_29_2146.w3g
Multiplayer/Replay_2013_12_19_2331.w3g
Multiplayer/Replay_2015_07_30_1910.w3g
Multiplayer/Replay_2013_08_31_1741.w3g
Multiplayer/Replay_2013_12_02_1151.w3g
Multiplayer/Replay_2013_09_15_1515.w3g
Multiplayer/Replay_2013_07_20_1424.w3g
Multiplayer/Replay_2014_02_06_2229.w3g
Multiplayer/Replay_2013_12_25_0014.w3g
Multiplayer/Replay_2013_10_14_1831.w3g
Multiplayer/Replay_2013_12_03_2206.w3g
Multiplayer/Replay_2015_06_30_1608.w3g
Multiplayer/Replay_2013_08_05_2215.w3g
Multiplayer/Replay_2014_01_09_2317.w3g
Multiplayer/Replay_2015_07_18_1458.w3g
Multiplayer/Replay_2015_06_24_1505.w3g
Multiplayer/Replay_2015_08_20_1403.w3g
Multiplayer/Replay_2015_08_17_1138.w3g
Multiplayer/Replay_2015_07_25_0217.w3g
Multiplayer/Replay_2015_08_14_1147.w3g
Multiplayer/Replay_2013_08_06_0051.w3g
Multiplayer/Replay_2013_09_17_1701.w3g
Multiplayer/Replay_2015_06_30_1248.w3g
Multiplayer/Replay_2015_07_25_1641.w3g
Multiplayer/Replay_2013_12_02_1206.w3g
Multiplayer/Replay_2013_10_28_1608.w3g
Multiplayer/Replay_2015_09_01_1716.w3g
Multiplayer/Replay_2013_08_12_1657.w3g
Multiplayer/Replay_2013_08_21_2132.w3g
Multiplayer/Replay_2015_08_05_0023.w3g
Multiplayer/Replay_2015_07_17_1407.w3g
Multiplayer/Replay_2015_06_22_2130.w3g
Multiplayer/Replay_2015_07_21_2252.w3g
Multiplayer/Replay_2015_07_16_1454.w3g
Multiplayer/Replay_2013_07_18_1736.w3g
Multiplayer/Replay_2013_11_19_1852.w3g
Multiplayer/Replay_2013_12_01_1554.w3g
Multiplayer/Replay_2015_08_09_1315.w3g
Multiplayer/Replay_2015_09_06_1910.w3g
Multiplayer/Replay_2013_12_24_1559.w3g
Multiplayer/Replay_2015_08_05_1903.w3g
Multiplayer/Replay_2013_12_25_1510.w3g
Multiplayer/Replay_2014_02_05_0021.w3g
Multiplayer/Replay_2015_07_01_0326.w3g
Multiplayer/Replay_2015_06_28_0222.w3g
Multiplayer/Replay_2015_06_29_0020.w3g
Multiplayer/Replay_2013_08_16_2147.w3g
Multiplayer/Replay_2013_07_18_1636.w3g
Multiplayer/Replay_2013_11_22_1506.w3g
Multiplayer/Replay_2015_08_09_2245.w3g
Multiplayer/Replay_2013_09_11_1543.w3g
Multiplayer/Replay_2013_07_20_1814.w3g
Multiplayer/Replay_2014_02_04_2314.w3g
Multiplayer/Replay_2015_09_06_0231.w3g
Multiplayer/Replay_2013_07_25_1442.w3g
Multiplayer/Replay_2015_08_06_1625.w3g
Multiplayer/Replay_2013_08_24_2059.w3g
Multiplayer/Replay_2013_10_02_1453.w3g
Multiplayer/Replay_2015_06_26_1002.w3g
Multiplayer/Replay_2013_11_28_1216.w3g
Multiplayer/Replay_2015_07_27_1558.w3g
Multiplayer/Replay_2013_09_18_1247.w3g
Multiplayer/Replay_2015_07_21_2207.w3g
Multiplayer/Replay_2013_08_01_2243.w3g
Multiplayer/Replay_2014_01_27_0036.w3g
Multiplayer/Replay_2013_11_12_1609.w3g
Multiplayer/Replay_2015_08_13_1338.w3g
Multiplayer/Replay_2015_08_05_1443.w3g
Multiplayer/Replay_2015_07_12_0017.w3g
Multiplayer/Replay_2013_11_22_1406.w3g
Multiplayer/Replay_2015_07_24_1832.w3g
Multiplayer/Replay_2015_07_26_1736.w3g
Multiplayer/Replay_2013_11_22_1251.w3g
Multiplayer/Replay_2013_12_18_1905.w3g
Multiplayer/Replay_2014_02_08_1928.w3g
Multiplayer/Replay_2015_09_05_1912.w3g
Multiplayer/Replay_2015_07_23_0030.w3g
Multiplayer/Replay_2014_02_04_2231.w3g
Multiplayer/Replay_2013_07_26_1219.w3g
Multiplayer/Replay_2015_08_12_1817.w3g
Multiplayer/Replay_2014_01_30_2316.w3g
Multiplayer/Replay_2013_09_14_1907.w3g
Multiplayer/Replay_2013_10_29_1107.w3g
Multiplayer/Replay_2015_08_02_1636.w3g
Multiplayer/Replay_2013_08_23_1430.w3g
Multiplayer/Replay_2013_12_17_1759.w3g
Multiplayer/Replay_2013_10_01_1739.w3g
Multiplayer/Replay_2013_12_26_1450.w3g
Multiplayer/Replay_2013_08_01_1149.w3g
Multiplayer/Replay_2015_07_19_1618.w3g
Multiplayer/Replay_2013_11_23_1847.w3g
Multiplayer/Replay_2013_12_08_0217.w3g
Multiplayer/Replay_2013_11_10_2306.w3g
Multiplayer/Replay_2014_02_09_1703.w3g
Multiplayer/Replay_2015_06_26_1103.w3g
Multiplayer/Replay_2014_01_31_2055.w3g
Multiplayer/Replay_2013_12_26_1906.w3g
Multiplayer/Replay_2013_08_24_2330.w3g
Multiplayer/Replay_2013_08_27_0927.w3g
Multiplayer/Replay_2013_12_14_2221.w3g
Multiplayer/Replay_2013_08_24_0936.w3g
Multiplayer/Replay_2013_10_29_1024.w3g
Multiplayer/Replay_2013_08_16_2225.w3g
Multiplayer/Replay_2015_08_13_1628.w3g
Multiplayer/Replay_2015_09_01_1558.w3g
Multiplayer/Replay_2013_12_10_0200.w3g
Multiplayer/Replay_2015_07_19_1953.w3g
Multiplayer/Replay_2013_09_28_0039.w3g
Multiplayer/Replay_2013_08_03_0205.w3g
Multiplayer/Replay_2013_09_26_1313.w3g
Multiplayer/Replay_2014_01_29_2343.w3g
Multiplayer/Replay_2013_08_04_1623.w3g
Multiplayer/Replay_2015_08_13_1910.w3g
Multiplayer/Replay_2013_08_25_2353.w3g
Multiplayer/Replay_2013_11_23_1819.w3g
Multiplayer/Replay_2013_09_28_1635.w3g
Multiplayer/Replay_2015_06_26_1457.w3g
Multiplayer/Replay_2013_10_31_1238.w3g
Multiplayer/Replay_2013_10_22_0231.w3g
Multiplayer/Replay_2013_08_18_1935.w3g
Multiplayer/Replay_2013_08_14_0125.w3g
Multiplayer/Replay_2013_10_28_0230.w3g
Multiplayer/Replay_2013_07_18_1614.w3g
Multiplayer/Replay_2013_09_26_0035.w3g
Multiplayer/Replay_2013_08_09_2224.w3g
Multiplayer/Replay_2015_07_16_2133.w3g
Multiplayer/Replay_2015_07_22_2100.w3g
Multiplayer/Replay_2013_07_28_0250.w3g
Multiplayer/Replay_2015_07_31_2211.w3g
Multiplayer/Replay_2015_08_09_1747.w3g
Multiplayer/Replay_2013_11_18_2332.w3g
Multiplayer/Replay_2013_07_19_1849.w3g
Multiplayer/Replay_2014_01_12_2019.w3g
Multiplayer/Replay_2015_08_08_1925.w3g
Multiplayer/Replay_2015_06_30_1215.w3g
Multiplayer/Replay_2014_01_30_0014.w3g
Multiplayer/Replay_2015_08_09_2147.w3g
Multiplayer/Replay_2015_06_27_0149.w3g
Multiplayer/Replay_2013_08_14_2121.w3g
Multiplayer/Replay_2013_09_06_2301.w3g
Multiplayer/Replay_2015_07_06_2348.w3g
Multiplayer/Replay_2015_08_12_2011.w3g
Multiplayer/Replay_2013_08_27_1015.w3g
Multiplayer/Replay_2013_07_19_1754.w3g
Multiplayer/Replay_2015_07_03_0414.w3g
Multiplayer/Replay_2015_07_25_1343.w3g
Multiplayer/Replay_2015_08_12_0027.w3g
Multiplayer/Replay_2013_07_20_1315.w3g
Multiplayer/Replay_2013_08_12_0959.w3g
Multiplayer/Replay_2015_07_22_2308.w3g
Multiplayer/Replay_2013_10_26_2207.w3g
Multiplayer/Replay_2013_08_18_1852.w3g
Multiplayer/Replay_2015_08_02_0135.w3g
Multiplayer/Replay_2013_12_25_1047.w3g
Multiplayer/Replay_2013_08_23_2003.w3g
Multiplayer/Replay_2013_11_22_1401.w3g
Multiplayer/Replay_2013_12_08_0011.w3g
Multiplayer/Replay_2013_08_12_2007.w3g
Multiplayer/Replay_2015_08_02_1313.w3g
Multiplayer/Replay_2013_08_04_1218.w3g
Multiplayer/Replay_2013_11_28_1135.w3g
Multiplayer/Replay_2013_10_03_2102.w3g
Multiplayer/Replay_2013_12_02_1209.w3g
Multiplayer/Replay_2015_07_15_2247.w3g
Multiplayer/Replay_2014_03_14_1717.w3g
Multiplayer/Replay_2015_06_30_1533.w3g
Multiplayer/Replay_2013_10_28_1516.w3g
Multiplayer/Replay_2015_07_22_2053.w3g
Multiplayer/Replay_2015_08_12_1812.w3g
Multiplayer/Replay_2013_08_17_2201.w3g
Multiplayer/Replay_2015_08_01_1415.w3g
Multiplayer/Replay_2013_08_09_0210.w3g
Multiplayer/Replay_2013_08_04_1252.w3g
Multiplayer/Replay_2013_12_21_1535.w3g
Multiplayer/Replay_2013_08_30_2110.w3g
Multiplayer/Replay_2015_09_05_1714.w3g
Multiplayer/Replay_2015_08_04_1936.w3g
Multiplayer/Replay_2015_08_27_1922.w3g
Multiplayer/Replay_2013_12_23_2128.w3g
Multiplayer/Replay_2015_08_25_2000.w3g
Multiplayer/Replay_2013_09_21_1820.w3g
Multiplayer/Replay_2015_06_25_0217.w3g
Multiplayer/Replay_2014_02_04_2016.w3g
Multiplayer/Replay_2013_09_04_1721.w3g
Multiplayer/Replay_2015_07_31_1440.w3g
Multiplayer/Replay_2015_09_05_1826.w3g
Multiplayer/Replay_2014_11_03_1313.w3g
Multiplayer/Replay_2015_07_24_2146.w3g
Multiplayer/Replay_2013_11_22_1337.w3g
Multiplayer/Replay_2014_03_18_2341.w3g
Multiplayer/Replay_2014_01_10_1833.w3g
Multiplayer/Replay_2015_08_18_1847.w3g
Multiplayer/Replay_2014_11_02_1226.w3g
Multiplayer/Replay_2013_07_18_1639.w3g
Multiplayer/Replay_2013_11_23_0146.w3g
Multiplayer/Replay_2013_11_15_2349.w3g
Multiplayer/Replay_2015_06_28_1957.w3g
Multiplayer/Replay_2013_08_31_2250.w3g
Multiplayer/Replay_2013_09_02_2348.w3g
Multiplayer/Replay_2015_08_20_0310.w3g
Multiplayer/Replay_2015_07_03_0138.w3g
Multiplayer/Replay_2013_12_03_1402.w3g
Multiplayer/Replay_2015_07_28_0028.w3g
Multiplayer/Replay_2015_07_01_1300.w3g
Multiplayer/Replay_2014_03_16_2227.w3g
Multiplayer/Replay_2014_01_31_2334.w3g
Multiplayer/Replay_2013_10_29_2032.w3g
Multiplayer/Replay_2013_09_21_1627.w3g
Multiplayer/Replay_2015_09_04_2156.w3g
Multiplayer/Replay_2013_11_16_1418.w3g
Multiplayer/Replay_2015_08_30_2105.w3g
Multiplayer/Replay_2013_11_25_0103.w3g
Multiplayer/Replay_2013_11_22_1426.w3g
Multiplayer/Replay_2013_10_16_1002.w3g
Multiplayer/Replay_2013_08_12_1513.w3g
Multiplayer/Replay_2013_09_08_0056.w3g
Multiplayer/Replay_2015_07_25_1557.w3g
Multiplayer/Replay_2015_08_17_1849.w3g
Multiplayer/Replay_2013_07_28_0211.w3g
Multiplayer/Replay_2015_08_02_0308.w3g
Multiplayer/Replay_2013_08_06_1257.w3g
Multiplayer/Replay_2013_08_31_1710.w3g
Multiplayer/Replay_2013_10_29_0025.w3g
Multiplayer/Replay_2013_11_09_1529.w3g
Multiplayer/Replay_2015_08_01_2222.w3g
Multiplayer/Replay_2015_06_26_2119.w3g
Multiplayer/Replay_2014_01_27_0128.w3g
Multiplayer/Replay_2013_10_02_1316.w3g
Multiplayer/Replay_2014_02_03_0128.w3g
Multiplayer/Replay_2013_10_08_2056.w3g
Multiplayer/Replay_2013_11_01_1815.w3g
Multiplayer/Replay_2015_07_25_1457.w3g
Multiplayer/Replay_2015_07_17_1450.w3g
Multiplayer/Replay_2015_08_12_2101.w3g
Multiplayer/Replay_2013_09_10_1601.w3g
Multiplayer/
Multiplayer/Replay_2015_06_30_0144.w3g
Multiplayer/Replay_2013_12_09_1502.w3g
Multiplayer/Replay_2015_08_26_2148.w3g
Multiplayer/Replay_2015_06_26_1259.w3g
Multiplayer/Replay_2013_09_13_1721.w3g
Multiplayer/Replay_2013_11_30_2056.w3g
Multiplayer/Replay_2013_11_09_0020.w3g
Multiplayer/Replay_2015_08_09_1642.w3g
Multiplayer/Replay_2013_09_10_1637.w3g
Multiplayer/Replay_2015_08_19_1706.w3g
Multiplayer/Replay_2015_09_03_1847.w3g
Multiplayer/Replay_2013_10_09_1431.w3g
Multiplayer/Replay_2015_08_20_0218.w3g
Multiplayer/Replay_2013_11_18_1617.w3g
Multiplayer/Replay_2013_11_20_1222.w3g
Multiplayer/Replay_2013_12_13_1241.w3g
Multiplayer/Replay_2013_11_23_1631.w3g
Multiplayer/Replay_2013_08_04_1230.w3g
Multiplayer/Replay_2013_09_22_0041.w3g
Multiplayer/Replay_2015_06_22_1900.w3g
Multiplayer/Replay_2013_08_03_1921.w3g
Multiplayer/Replay_2015_08_27_2018.w3g
Multiplayer/Replay_2013_09_23_0011.w3g
Multiplayer/Replay_2015_06_22_2300.w3g
Multiplayer/Replay_2013_12_13_1124.w3g
Multiplayer/Replay_2014_03_16_2150.w3g
Multiplayer/Replay_2015_07_31_2331.w3g
Multiplayer/Replay_2014_02_06_2316.w3g
Multiplayer/Replay_2015_08_10_1120.w3g
Multiplayer/Replay_2015_07_31_1814.w3g
Multiplayer/Replay_2013_10_03_2150.w3g
Multiplayer/Replay_2013_11_24_1902.w3g
Multiplayer/Replay_2015_07_28_1301.w3g
Multiplayer/Replay_2014_02_07_1903.w3g
Multiplayer/Replay_2014_11_02_1221.w3g
Multiplayer/Replay_2015_09_06_1414.w3g
Multiplayer/Replay_2013_08_24_1657.w3g
Multiplayer/Replay_2015_08_02_0214.w3g
Multiplayer/Replay_2013_10_17_1945.w3g
Multiplayer/Replay_2013_12_25_1126.w3g
Multiplayer/Replay_2014_01_12_1904.w3g
Multiplayer/Replay_2013_12_16_1433.w3g
Multiplayer/Replay_2013_11_28_1102.w3g
Multiplayer/Replay_2013_08_18_1838.w3g
Multiplayer/Replay_2015_06_25_0112.w3g
Multiplayer/Replay_2015_07_03_2248.w3g
Multiplayer/Replay_2013_09_23_1927.w3g
Multiplayer/Replay_2013_08_10_1706.w3g
Multiplayer/Replay_2014_01_17_1816.w3g
Multiplayer/Replay_2015_07_09_2359.w3g
Multiplayer/Replay_2013_11_22_2033.w3g
Multiplayer/Replay_2013_12_02_1840.w3g
Multiplayer/Replay_2013_12_24_1629.w3g
Multiplayer/Replay_2013_10_28_0150.w3g
Multiplayer/Replay_2013_10_22_1637.w3g
Multiplayer/Replay_2013_07_30_1244.w3g
Multiplayer/Replay_2013_11_12_1736.w3g
Multiplayer/Replay_2013_08_31_1936.w3g
Multiplayer/Replay_2015_08_09_2315.w3g
Multiplayer/Replay_2015_06_30_0050.w3g
Multiplayer/Replay_2015_09_03_0045.w3g
Multiplayer/Replay_2015_08_01_0333.w3g
Multiplayer/Replay_2015_09_06_0148.w3g
Multiplayer/Replay_2013_07_18_1723.w3g
Multiplayer/Replay_2015_06_25_1632.w3g
Multiplayer/Replay_2013_10_26_2244.w3g
Multiplayer/Replay_2015_07_04_2252.w3g
Multiplayer/Replay_2013_11_01_1757.w3g
Multiplayer/Replay_2013_10_29_1050.w3g
Multiplayer/Replay_2015_08_25_2232.w3g
Multiplayer/Replay_2013_11_22_1304.w3g
Multiplayer/Replay_2013_10_23_1624.w3g
Multiplayer/Replay_2015_06_30_1400.w3g
Multiplayer/Replay_2013_08_20_0108.w3g
Multiplayer/Replay_2015_06_21_2202.w3g
Multiplayer/Replay_2013_11_17_1419.w3g
Multiplayer/Replay_2015_06_21_2355.w3g
Multiplayer/Replay_2013_08_15_2052.w3g
Multiplayer/Replay_2013_12_02_1246.w3g
Multiplayer/Replay_2014_08_05_0043.w3g
Multiplayer/Replay_2015_09_02_1554.w3g
Multiplayer/Replay_2015_07_24_0034.w3g
Multiplayer/Replay_2013_11_10_2132.w3g
Multiplayer/Replay_2013_10_14_1926.w3g
Multiplayer/Replay_2013_10_24_2240.w3g
Multiplayer/Replay_2013_08_29_1940.w3g
Multiplayer/Replay_2013_12_04_0008.w3g
Multiplayer/Replay_2013_12_20_2004.w3g
Multiplayer/Replay_2013_12_26_1756.w3g
Multiplayer/Replay_2013_11_23_1845.w3g
Multiplayer/Replay_2013_08_10_2040.w3g
Multiplayer/Replay_2013_11_18_1540.w3g
Multiplayer/Replay_2013_11_08_1342.w3g
Multiplayer/Replay_2015_06_29_2029.w3g
Multiplayer/Replay_2015_07_27_2157.w3g
Multiplayer/Replay_2015_06_22_1825.w3g
Multiplayer/Replay_2013_10_16_0132.w3g
Multiplayer/Replay_2013_08_30_1119.w3g
Multiplayer/Replay_2013_10_29_1122.w3g
Multiplayer/Replay_2013_11_23_1728.w3g
Multiplayer/Replay_2015_07_29_1430.w3g
Multiplayer/Replay_2013_07_20_2354.w3g
Multiplayer/Replay_2013_12_14_2202.w3g
Multiplayer/Replay_2015_07_29_2343.w3g
Multiplayer/Replay_2013_08_18_1331.w3g
Multiplayer/Replay_2013_11_18_1707.w3g
Multiplayer/Replay_2013_11_30_2335.w3g
Multiplayer/Replay_2013_10_24_2312.w3g
Multiplayer/Replay_2015_08_26_2026.w3g
Multiplayer/Replay_2015_07_23_2115.w3g
Multiplayer/Replay_2015_06_25_1604.w3g
Multiplayer/Replay_2013_12_26_1539.w3g
Multiplayer/Replay_2014_02_01_0002.w3g
Multiplayer/Replay_2013_11_22_1257.w3g
Multiplayer/Replay_2013_11_10_2221.w3g
Multiplayer/Replay_2013_08_24_2252.w3g
Multiplayer/Replay_2013_09_25_1608.w3g
Multiplayer/Replay_2013_11_27_0144.w3g
Multiplayer/Replay_2013_08_25_1421.w3g
Multiplayer/Replay_2013_11_19_2211.w3g
Multiplayer/Replay_2014_03_13_2139.w3g
Multiplayer/Replay_2015_07_30_1822.w3g
Multiplayer/Replay_2013_11_18_1741.w3g
Multiplayer/Replay_2015_08_26_1832.w3g
Multiplayer/Replay_2013_10_29_2107.w3g
Multiplayer/Replay_2015_08_13_1643.w3g
Multiplayer/Replay_2015_08_25_2128.w3g
Multiplayer/Replay_2015_07_25_0110.w3g
Multiplayer/Replay_2015_08_04_1928.w3g
Multiplayer/Replay_2014_02_07_2245.w3g
Multiplayer/Replay_2015_08_07_1824.w3g
Multiplayer/Replay_2015_06_30_1938.w3g
Multiplayer/Replay_2013_09_19_2034.w3g
Multiplayer/Replay_2015_09_05_1818.w3g
Multiplayer/Replay_2013_09_26_0145.w3g
Multiplayer/Replay_2013_11_16_0024.w3g
Multiplayer/Replay_2013_11_23_1946.w3g
Multiplayer/Replay_2013_10_04_2251.w3g
Multiplayer/Replay_2015_08_05_1538.w3g
Multiplayer/Replay_2013_10_31_1141.w3g
Multiplayer/Replay_2013_07_31_1829.w3g
Multiplayer/Replay_2014_03_17_0038.w3g
###Markdown
Let's see what was decompressed.
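The `%ls` magics in the next cell show the directory contents. As a plain-Python sketch (not part of the original notebook), assuming the replays were extracted into `replays/Multiplayer`, the same listing could be obtained with `pathlib`:

```python
from pathlib import Path

# Assumption: the replays were extracted into replays/Multiplayer
replay_dir = Path("replays") / "Multiplayer"
replays = sorted(replay_dir.glob("*.w3g"))
print(f"{len(replays)} replay files found")
for path in replays[:5]:  # print only the first few names
    print(path.name)
```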
###Code
%ls replays/
#collapse
%ls replays/Multiplayer
###Output
[0m[01;32mReplay_2013_07_18_1614.w3g[0m* [01;32mReplay_2013_12_19_2004.w3g[0m*
[01;32mReplay_2013_07_18_1636.w3g[0m* [01;32mReplay_2013_12_19_2243.w3g[0m*
[01;32mReplay_2013_07_18_1639.w3g[0m* [01;32mReplay_2013_12_19_2331.w3g[0m*
[01;32mReplay_2013_07_18_1723.w3g[0m* [01;32mReplay_2013_12_20_0035.w3g[0m*
[01;32mReplay_2013_07_18_1736.w3g[0m* [01;32mReplay_2013_12_20_2004.w3g[0m*
[01;32mReplay_2013_07_18_1738.w3g[0m* [01;32mReplay_2013_12_20_2239.w3g[0m*
[01;32mReplay_2013_07_18_1828.w3g[0m* [01;32mReplay_2013_12_20_2319.w3g[0m*
[01;32mReplay_2013_07_18_1930.w3g[0m* [01;32mReplay_2013_12_21_0010.w3g[0m*
[01;32mReplay_2013_07_18_1931.w3g[0m* [01;32mReplay_2013_12_21_0116.w3g[0m*
[01;32mReplay_2013_07_18_1944.w3g[0m* [01;32mReplay_2013_12_21_1402.w3g[0m*
[01;32mReplay_2013_07_18_2032.w3g[0m* [01;32mReplay_2013_12_21_1430.w3g[0m*
[01;32mReplay_2013_07_18_2044.w3g[0m* [01;32mReplay_2013_12_21_1535.w3g[0m*
[01;32mReplay_2013_07_19_1754.w3g[0m* [01;32mReplay_2013_12_21_2209.w3g[0m*
[01;32mReplay_2013_07_19_1849.w3g[0m* [01;32mReplay_2013_12_21_2217.w3g[0m*
[01;32mReplay_2013_07_20_1256.w3g[0m* [01;32mReplay_2013_12_23_1947.w3g[0m*
[01;32mReplay_2013_07_20_1315.w3g[0m* [01;32mReplay_2013_12_23_2018.w3g[0m*
[01;32mReplay_2013_07_20_1316.w3g[0m* [01;32mReplay_2013_12_23_2052.w3g[0m*
[01;32mReplay_2013_07_20_1317.w3g[0m* [01;32mReplay_2013_12_23_2128.w3g[0m*
[01;32mReplay_2013_07_20_1339.w3g[0m* [01;32mReplay_2013_12_23_2258.w3g[0m*
[01;32mReplay_2013_07_20_1355.w3g[0m* [01;32mReplay_2013_12_23_2339.w3g[0m*
[01;32mReplay_2013_07_20_1424.w3g[0m* [01;32mReplay_2013_12_24_0018.w3g[0m*
[01;32mReplay_2013_07_20_1733.w3g[0m* [01;32mReplay_2013_12_24_1559.w3g[0m*
[01;32mReplay_2013_07_20_1814.w3g[0m* [01;32mReplay_2013_12_24_1629.w3g[0m*
[01;32mReplay_2013_07_20_1847.w3g[0m* [01;32mReplay_2013_12_24_1753.w3g[0m*
[01;32mReplay_2013_07_20_2354.w3g[0m* [01;32mReplay_2013_12_24_2223.w3g[0m*
[01;32mReplay_2013_07_21_0115.w3g[0m* [01;32mReplay_2013_12_24_2256.w3g[0m*
[01;32mReplay_2013_07_22_0121.w3g[0m* [01;32mReplay_2013_12_25_0014.w3g[0m*
[01;32mReplay_2013_07_22_0231.w3g[0m* [01;32mReplay_2013_12_25_0052.w3g[0m*
[01;32mReplay_2013_07_24_0059.w3g[0m* [01;32mReplay_2013_12_25_1013.w3g[0m*
[01;32mReplay_2013_07_24_2124.w3g[0m* [01;32mReplay_2013_12_25_1047.w3g[0m*
[01;32mReplay_2013_07_25_1326.w3g[0m* [01;32mReplay_2013_12_25_1126.w3g[0m*
[01;32mReplay_2013_07_25_1442.w3g[0m* [01;32mReplay_2013_12_25_1510.w3g[0m*
[01;32mReplay_2013_07_26_1219.w3g[0m* [01;32mReplay_2013_12_25_1548.w3g[0m*
[01;32mReplay_2013_07_28_0211.w3g[0m* [01;32mReplay_2013_12_25_1628.w3g[0m*
[01;32mReplay_2013_07_28_0250.w3g[0m* [01;32mReplay_2013_12_26_0008.w3g[0m*
[01;32mReplay_2013_07_28_1133.w3g[0m* [01;32mReplay_2013_12_26_0034.w3g[0m*
[01;32mReplay_2013_07_28_1206.w3g[0m* [01;32mReplay_2013_12_26_0126.w3g[0m*
[01;32mReplay_2013_07_28_1243.w3g[0m* [01;32mReplay_2013_12_26_1450.w3g[0m*
[01;32mReplay_2013_07_29_1349.w3g[0m* [01;32mReplay_2013_12_26_1457.w3g[0m*
[01;32mReplay_2013_07_29_2216.w3g[0m* [01;32mReplay_2013_12_26_1539.w3g[0m*
[01;32mReplay_2013_07_30_1244.w3g[0m* [01;32mReplay_2013_12_26_1756.w3g[0m*
[01;32mReplay_2013_07_30_2046.w3g[0m* [01;32mReplay_2013_12_26_1838.w3g[0m*
[01;32mReplay_2013_07_30_2104.w3g[0m* [01;32mReplay_2013_12_26_1906.w3g[0m*
[01;32mReplay_2013_07_31_0135.w3g[0m* [01;32mReplay_2013_12_27_2004.w3g[0m*
[01;32mReplay_2013_07_31_1800.w3g[0m* [01;32mReplay_2013_12_27_2114.w3g[0m*
[01;32mReplay_2013_07_31_1829.w3g[0m* [01;32mReplay_2013_12_28_1037.w3g[0m*
[01;32mReplay_2013_07_31_2005.w3g[0m* [01;32mReplay_2013_12_31_1520.w3g[0m*
[01;32mReplay_2013_07_31_2030.w3g[0m* [01;32mReplay_2014_01_09_2317.w3g[0m*
[01;32mReplay_2013_08_01_0107.w3g[0m* [01;32mReplay_2014_01_10_0002.w3g[0m*
[01;32mReplay_2013_08_01_0146.w3g[0m* [01;32mReplay_2014_01_10_1833.w3g[0m*
[01;32mReplay_2013_08_01_1149.w3g[0m* [01;32mReplay_2014_01_10_1931.w3g[0m*
[01;32mReplay_2013_08_01_2243.w3g[0m* [01;32mReplay_2014_01_10_2019.w3g[0m*
[01;32mReplay_2013_08_01_2329.w3g[0m* [01;32mReplay_2014_01_11_0047.w3g[0m*
[01;32mReplay_2013_08_02_0042.w3g[0m* [01;32mReplay_2014_01_11_0151.w3g[0m*
[01;32mReplay_2013_08_02_0144.w3g[0m* [01;32mReplay_2014_01_12_1904.w3g[0m*
[01;32mReplay_2013_08_03_0122.w3g[0m* [01;32mReplay_2014_01_12_1941.w3g[0m*
[01;32mReplay_2013_08_03_0205.w3g[0m* [01;32mReplay_2014_01_12_2019.w3g[0m*
[01;32mReplay_2013_08_03_1506.w3g[0m* [01;32mReplay_2014_01_12_2057.w3g[0m*
[01;32mReplay_2013_08_03_1604.w3g[0m* [01;32mReplay_2014_01_13_1123.w3g[0m*
[01;32mReplay_2013_08_03_1921.w3g[0m* [01;32mReplay_2014_01_13_1159.w3g[0m*
[01;32mReplay_2013_08_03_2026.w3g[0m* [01;32mReplay_2014_01_13_1254.w3g[0m*
[01;32mReplay_2013_08_04_0007.w3g[0m* [01;32mReplay_2014_01_16_2116.w3g[0m*
[01;32mReplay_2013_08_04_1158.w3g[0m* [01;32mReplay_2014_01_16_2217.w3g[0m*
[01;32mReplay_2013_08_04_1218.w3g[0m* [01;32mReplay_2014_01_16_2255.w3g[0m*
[01;32mReplay_2013_08_04_1230.w3g[0m* [01;32mReplay_2014_01_17_1816.w3g[0m*
[01;32mReplay_2013_08_04_1252.w3g[0m* [01;32mReplay_2014_01_17_1937.w3g[0m*
[01;32mReplay_2013_08_04_1623.w3g[0m* [01;32mReplay_2014_01_21_2255.w3g[0m*
[01;32mReplay_2013_08_05_0115.w3g[0m* [01;32mReplay_2014_01_21_2355.w3g[0m*
[01;32mReplay_2013_08_05_1433.w3g[0m* [01;32mReplay_2014_01_22_0103.w3g[0m*
[01;32mReplay_2013_08_05_2215.w3g[0m* [01;32mReplay_2014_01_27_0015.w3g[0m*
[01;32mReplay_2013_08_05_2302.w3g[0m* [01;32mReplay_2014_01_27_0036.w3g[0m*
[01;32mReplay_2013_08_05_2340.w3g[0m* [01;32mReplay_2014_01_27_0128.w3g[0m*
[01;32mReplay_2013_08_06_0051.w3g[0m* [01;32mReplay_2014_01_29_2138.w3g[0m*
[01;32mReplay_2013_08_06_1213.w3g[0m* [01;32mReplay_2014_01_29_2244.w3g[0m*
[01;32mReplay_2013_08_06_1257.w3g[0m* [01;32mReplay_2014_01_29_2343.w3g[0m*
[01;32mReplay_2013_08_07_1630.w3g[0m* [01;32mReplay_2014_01_30_0014.w3g[0m*
[01;32mReplay_2013_08_07_2007.w3g[0m* [01;32mReplay_2014_01_30_2316.w3g[0m*
[01;32mReplay_2013_08_07_2016.w3g[0m* [01;32mReplay_2014_01_30_2356.w3g[0m*
[01;32mReplay_2013_08_09_0132.w3g[0m* [01;32mReplay_2014_01_31_0035.w3g[0m*
[01;32mReplay_2013_08_09_0210.w3g[0m* [01;32mReplay_2014_01_31_2055.w3g[0m*
[01;32mReplay_2013_08_09_1634.w3g[0m* [01;32mReplay_2014_01_31_2207.w3g[0m*
[01;32mReplay_2013_08_09_2000.w3g[0m* [01;32mReplay_2014_01_31_2238.w3g[0m*
[01;32mReplay_2013_08_09_2224.w3g[0m* [01;32mReplay_2014_01_31_2334.w3g[0m*
[01;32mReplay_2013_08_09_2302.w3g[0m* [01;32mReplay_2014_02_01_0002.w3g[0m*
[01;32mReplay_2013_08_09_2348.w3g[0m* [01;32mReplay_2014_02_02_2300.w3g[0m*
[01;32mReplay_2013_08_10_1544.w3g[0m* [01;32mReplay_2014_02_02_2342.w3g[0m*
[01;32mReplay_2013_08_10_1652.w3g[0m* [01;32mReplay_2014_02_03_0055.w3g[0m*
[01;32mReplay_2013_08_10_1706.w3g[0m* [01;32mReplay_2014_02_03_0128.w3g[0m*
[01;32mReplay_2013_08_10_1727.w3g[0m* [01;32mReplay_2014_02_03_0203.w3g[0m*
[01;32mReplay_2013_08_10_2040.w3g[0m* [01;32mReplay_2014_02_03_0236.w3g[0m*
[01;32mReplay_2013_08_10_2123.w3g[0m* [01;32mReplay_2014_02_03_2249.w3g[0m*
[01;32mReplay_2013_08_11_1644.w3g[0m* [01;32mReplay_2014_02_03_2301.w3g[0m*
[01;32mReplay_2013_08_11_1719.w3g[0m* [01;32mReplay_2014_02_04_0020.w3g[0m*
[01;32mReplay_2013_08_11_1801.w3g[0m* [01;32mReplay_2014_02_04_1823.w3g[0m*
[01;32mReplay_2013_08_11_1934.w3g[0m* [01;32mReplay_2014_02_04_1929.w3g[0m*
[01;32mReplay_2013_08_11_2038.w3g[0m* [01;32mReplay_2014_02_04_2016.w3g[0m*
[01;32mReplay_2013_08_12_0949.w3g[0m* [01;32mReplay_2014_02_04_2112.w3g[0m*
[01;32mReplay_2013_08_12_0959.w3g[0m* [01;32mReplay_2014_02_04_2231.w3g[0m*
[01;32mReplay_2013_08_12_1513.w3g[0m* [01;32mReplay_2014_02_04_2314.w3g[0m*
[01;32mReplay_2013_08_12_1657.w3g[0m* [01;32mReplay_2014_02_05_0021.w3g[0m*
[01;32mReplay_2013_08_12_1835.w3g[0m* [01;32mReplay_2014_02_05_2325.w3g[0m*
[01;32mReplay_2013_08_12_1954.w3g[0m* [01;32mReplay_2014_02_06_0018.w3g[0m*
[01;32mReplay_2013_08_12_2007.w3g[0m* [01;32mReplay_2014_02_06_2229.w3g[0m*
[01;32mReplay_2013_08_12_2238.w3g[0m* [01;32mReplay_2014_02_06_2316.w3g[0m*
[01;32mReplay_2013_08_12_2326.w3g[0m* [01;32mReplay_2014_02_07_0002.w3g[0m*
[01;32mReplay_2013_08_13_1145.w3g[0m* [01;32mReplay_2014_02_07_1903.w3g[0m*
[01;32mReplay_2013_08_13_1745.w3g[0m* [01;32mReplay_2014_02_07_2001.w3g[0m*
[01;32mReplay_2013_08_13_1957.w3g[0m* [01;32mReplay_2014_02_07_2056.w3g[0m*
[01;32mReplay_2013_08_13_2108.w3g[0m* [01;32mReplay_2014_02_07_2204.w3g[0m*
[01;32mReplay_2013_08_13_2349.w3g[0m* [01;32mReplay_2014_02_07_2245.w3g[0m*
[01;32mReplay_2013_08_14_0048.w3g[0m* [01;32mReplay_2014_02_07_2249.w3g[0m*
[01;32mReplay_2013_08_14_0125.w3g[0m* [01;32mReplay_2014_02_08_1928.w3g[0m*
[01;32mReplay_2013_08_14_1121.w3g[0m* [01;32mReplay_2014_02_08_2024.w3g[0m*
[01;32mReplay_2013_08_14_2121.w3g[0m* [01;32mReplay_2014_02_09_1618.w3g[0m*
[01;32mReplay_2013_08_14_2150.w3g[0m* [01;32mReplay_2014_02_09_1703.w3g[0m*
[01;32mReplay_2013_08_15_1957.w3g[0m* [01;32mReplay_2014_02_10_2255.w3g[0m*
[01;32mReplay_2013_08_15_2014.w3g[0m* [01;32mReplay_2014_02_10_2352.w3g[0m*
[01;32mReplay_2013_08_15_2038.w3g[0m* [01;32mReplay_2014_03_12_2047.w3g[0m*
[01;32mReplay_2013_08_15_2052.w3g[0m* [01;32mReplay_2014_03_13_2139.w3g[0m*
[01;32mReplay_2013_08_15_2117.w3g[0m* [01;32mReplay_2014_03_14_1717.w3g[0m*
[01;32mReplay_2013_08_15_2118.w3g[0m* [01;32mReplay_2014_03_14_1850.w3g[0m*
[01;32mReplay_2013_08_16_0010.w3g[0m* [01;32mReplay_2014_03_14_1928.w3g[0m*
[01;32mReplay_2013_08_16_2138.w3g[0m* [01;32mReplay_2014_03_15_1651.w3g[0m*
[01;32mReplay_2013_08_16_2143.w3g[0m* [01;32mReplay_2014_03_16_2150.w3g[0m*
[01;32mReplay_2013_08_16_2147.w3g[0m* [01;32mReplay_2014_03_16_2227.w3g[0m*
[01;32mReplay_2013_08_16_2152.w3g[0m* [01;32mReplay_2014_03_16_2310.w3g[0m*
[01;32mReplay_2013_08_16_2209.w3g[0m* [01;32mReplay_2014_03_17_0038.w3g[0m*
[01;32mReplay_2013_08_16_2218.w3g[0m* [01;32mReplay_2014_03_18_2341.w3g[0m*
[01;32mReplay_2013_08_16_2225.w3g[0m* [01;32mReplay_2014_03_19_0055.w3g[0m*
[01;32mReplay_2013_08_16_2324.w3g[0m* [01;32mReplay_2014_08_05_0043.w3g[0m*
[01;32mReplay_2013_08_16_2327.w3g[0m* [01;32mReplay_2014_11_02_1221.w3g[0m*
[01;32mReplay_2013_08_17_1936.w3g[0m* [01;32mReplay_2014_11_02_1226.w3g[0m*
[01;32mReplay_2013_08_17_1958.w3g[0m* [01;32mReplay_2014_11_03_1313.w3g[0m*
[01;32mReplay_2013_08_17_2002.w3g[0m* [01;32mReplay_2014_11_03_1411.w3g[0m*
[01;32mReplay_2013_08_17_2201.w3g[0m* [01;32mReplay_2014_11_03_2347.w3g[0m*
[01;32mReplay_2013_08_18_1148.w3g[0m* [01;32mReplay_2014_11_08_2340.w3g[0m*
[01;32mReplay_2013_08_18_1149.w3g[0m* [01;32mReplay_2015_06_21_2202.w3g[0m*
[01;32mReplay_2013_08_18_1207.w3g[0m* [01;32mReplay_2015_06_21_2303.w3g[0m*
[01;32mReplay_2013_08_18_1251.w3g[0m* [01;32mReplay_2015_06_21_2355.w3g[0m*
[01;32mReplay_2013_08_18_1331.w3g[0m* [01;32mReplay_2015_06_22_0052.w3g[0m*
[01;32mReplay_2013_08_18_1432.w3g[0m* [01;32mReplay_2015_06_22_0141.w3g[0m*
[01;32mReplay_2013_08_18_1723.w3g[0m* [01;32mReplay_2015_06_22_1825.w3g[0m*
[01;32mReplay_2013_08_18_1728.w3g[0m* [01;32mReplay_2015_06_22_1900.w3g[0m*
[01;32mReplay_2013_08_18_1832.w3g[0m* [01;32mReplay_2015_06_22_1946.w3g[0m*
[01;32mReplay_2013_08_18_1838.w3g[0m* [01;32mReplay_2015_06_22_2031.w3g[0m*
[01;32mReplay_2013_08_18_1852.w3g[0m* [01;32mReplay_2015_06_22_2130.w3g[0m*
[01;32mReplay_2013_08_18_1935.w3g[0m* [01;32mReplay_2015_06_22_2300.w3g[0m*
[01;32mReplay_2013_08_18_2120.w3g[0m* [01;32mReplay_2015_06_23_1935.w3g[0m*
[01;32mReplay_2013_08_18_2153.w3g[0m* [01;32mReplay_2015_06_23_1943.w3g[0m*
[01;32mReplay_2013_08_18_2251.w3g[0m* [01;32mReplay_2015_06_23_1946.w3g[0m*
[01;32mReplay_2013_08_19_2228.w3g[0m* [01;32mReplay_2015_06_23_2029.w3g[0m*
[01;32mReplay_2013_08_20_0108.w3g[0m* [01;32mReplay_2015_06_23_2127.w3g[0m*
[01;32mReplay_2013_08_21_1720.w3g[0m* [01;32mReplay_2015_06_23_2322.w3g[0m*
[01;32mReplay_2013_08_21_1803.w3g[0m* [01;32mReplay_2015_06_24_0040.w3g[0m*
[01;32mReplay_2013_08_21_1851.w3g[0m* [01;32mReplay_2015_06_24_1402.w3g[0m*
[01;32mReplay_2013_08_21_2132.w3g[0m* [01;32mReplay_2015_06_24_1505.w3g[0m*
[01;32mReplay_2013_08_23_1430.w3g[0m* [01;32mReplay_2015_06_24_1847.w3g[0m*
[01;32mReplay_2013_08_23_1523.w3g[0m* [01;32mReplay_2015_06_24_2006.w3g[0m*
[01;32mReplay_2013_08_23_1854.w3g[0m* [01;32mReplay_2015_06_24_2048.w3g[0m*
[01;32mReplay_2013_08_23_1916.w3g[0m* [01;32mReplay_2015_06_24_2131.w3g[0m*
[01;32mReplay_2013_08_23_2003.w3g[0m* [01;32mReplay_2015_06_24_2225.w3g[0m*
[01;32mReplay_2013_08_23_2138.w3g[0m* [01;32mReplay_2015_06_24_2303.w3g[0m*
[01;32mReplay_2013_08_23_2226.w3g[0m* [01;32mReplay_2015_06_25_0112.w3g[0m*
[01;32mReplay_2013_08_24_0936.w3g[0m* [01;32mReplay_2015_06_25_0158.w3g[0m*
[01;32mReplay_2013_08_24_1657.w3g[0m* [01;32mReplay_2015_06_25_0217.w3g[0m*
[01;32mReplay_2013_08_24_2003.w3g[0m* [01;32mReplay_2015_06_25_1604.w3g[0m*
[01;32mReplay_2013_08_24_2059.w3g[0m* [01;32mReplay_2015_06_25_1632.w3g[0m*
[01;32mReplay_2013_08_24_2141.w3g[0m* [01;32mReplay_2015_06_25_1710.w3g[0m*
[01;32mReplay_2013_08_24_2239.w3g[0m* [01;32mReplay_2015_06_25_1815.w3g[0m*
[01;32mReplay_2013_08_24_2252.w3g[0m* [01;32mReplay_2015_06_25_1903.w3g[0m*
[01;32mReplay_2013_08_24_2330.w3g[0m* [01;32mReplay_2015_06_25_1954.w3g[0m*
[01;32mReplay_2013_08_25_1403.w3g[0m* [01;32mReplay_2015_06_25_2050.w3g[0m*
[01;32mReplay_2013_08_25_1421.w3g[0m* [01;32mReplay_2015_06_25_2139.w3g[0m*
[01;32mReplay_2013_08_25_1444.w3g[0m* [01;32mReplay_2015_06_25_2237.w3g[0m*
[01;32mReplay_2013_08_25_1508.w3g[0m* [01;32mReplay_2015_06_25_2340.w3g[0m*
[01;32mReplay_2013_08_25_1535.w3g[0m* [01;32mReplay_2015_06_26_1002.w3g[0m*
[01;32mReplay_2013_08_25_1601.w3g[0m* [01;32mReplay_2015_06_26_1103.w3g[0m*
[01;32mReplay_2013_08_25_1817.w3g[0m* [01;32mReplay_2015_06_26_1115.w3g[0m*
[01;32mReplay_2013_08_25_1848.w3g[0m* [01;32mReplay_2015_06_26_1204.w3g[0m*
[01;32mReplay_2013_08_25_2257.w3g[0m* [01;32mReplay_2015_06_26_1259.w3g[0m*
[01;32mReplay_2013_08_25_2338.w3g[0m* [01;32mReplay_2015_06_26_1457.w3g[0m*
[01;32mReplay_2013_08_25_2353.w3g[0m* [01;32mReplay_2015_06_26_1514.w3g[0m*
[01;32mReplay_2013_08_26_0042.w3g[0m* [01;32mReplay_2015_06_26_1620.w3g[0m*
[01;32mReplay_2013_08_26_1846.w3g[0m* [01;32mReplay_2015_06_26_2119.w3g[0m*
[01;32mReplay_2013_08_27_0927.w3g[0m* [01;32mReplay_2015_06_26_2218.w3g[0m*
[01;32mReplay_2013_08_27_1015.w3g[0m* [01;32mReplay_2015_06_26_2317.w3g[0m*
[01;32mReplay_2013_08_27_1733.w3g[0m* [01;32mReplay_2015_06_27_0149.w3g[0m*
[01;32mReplay_2013_08_29_1150.w3g[0m* [01;32mReplay_2015_06_27_1106.w3g[0m*
[01;32mReplay_2013_08_29_1207.w3g[0m* [01;32mReplay_2015_06_27_1159.w3g[0m*
[01;32mReplay_2013_08_29_1305.w3g[0m* [01;32mReplay_2015_06_27_1311.w3g[0m*
[01;32mReplay_2013_08_29_1940.w3g[0m* [01;32mReplay_2015_06_27_1401.w3g[0m*
[01;32mReplay_2013_08_30_0014.w3g[0m* [01;32mReplay_2015_06_27_1459.w3g[0m*
[01;32mReplay_2013_08_30_0109.w3g[0m* [01;32mReplay_2015_06_27_1546.w3g[0m*
[01;32mReplay_2013_08_30_1049.w3g[0m* [01;32mReplay_2015_06_28_0009.w3g[0m*
[01;32mReplay_2013_08_30_1119.w3g[0m* [01;32mReplay_2015_06_28_0222.w3g[0m*
[01;32mReplay_2013_08_30_1317.w3g[0m* [01;32mReplay_2015_06_28_1957.w3g[0m*
[01;32mReplay_2013_08_30_2027.w3g[0m* [01;32mReplay_2015_06_28_2058.w3g[0m*
[01;32mReplay_2013_08_30_2110.w3g[0m* [01;32mReplay_2015_06_29_0020.w3g[0m*
[01;32mReplay_2013_08_31_1459.w3g[0m* [01;32mReplay_2015_06_29_0135.w3g[0m*
[01;32mReplay_2013_08_31_1710.w3g[0m* [01;32mReplay_2015_06_29_1246.w3g[0m*
[01;32mReplay_2013_08_31_1741.w3g[0m* [01;32mReplay_2015_06_29_1413.w3g[0m*
[01;32mReplay_2013_08_31_1936.w3g[0m* [01;32mReplay_2015_06_29_1641.w3g[0m*
[01;32mReplay_2013_08_31_2008.w3g[0m* [01;32mReplay_2015_06_29_1720.w3g[0m*
[01;32mReplay_2013_08_31_2250.w3g[0m* [01;32mReplay_2015_06_29_1759.w3g[0m*
[01;32mReplay_2013_09_01_1357.w3g[0m* [01;32mReplay_2015_06_29_1859.w3g[0m*
[01;32mReplay_2013_09_01_1723.w3g[0m* [01;32mReplay_2015_06_29_1936.w3g[0m*
[01;32mReplay_2013_09_01_1921.w3g[0m* [01;32mReplay_2015_06_29_2029.w3g[0m*
[01;32mReplay_2013_09_01_2137.w3g[0m* [01;32mReplay_2015_06_29_2146.w3g[0m*
[01;32mReplay_2013_09_01_2201.w3g[0m* [01;32mReplay_2015_06_29_2254.w3g[0m*
[01;32mReplay_2013_09_02_2348.w3g[0m* [01;32mReplay_2015_06_30_0050.w3g[0m*
[01;32mReplay_2013_09_04_1721.w3g[0m* [01;32mReplay_2015_06_30_0144.w3g[0m*
[01;32mReplay_2013_09_05_0016.w3g[0m* [01;32mReplay_2015_06_30_1215.w3g[0m*
[01;32mReplay_2013_09_06_1808.w3g[0m* [01;32mReplay_2015_06_30_1248.w3g[0m*
[01;32mReplay_2013_09_06_2155.w3g[0m* [01;32mReplay_2015_06_30_1400.w3g[0m*
[01;32mReplay_2013_09_06_2301.w3g[0m* [01;32mReplay_2015_06_30_1524.w3g[0m*
[01;32mReplay_2013_09_08_0056.w3g[0m* [01;32mReplay_2015_06_30_1533.w3g[0m*
[01;32mReplay_2013_09_09_1743.w3g[0m* [01;32mReplay_2015_06_30_1608.w3g[0m*
[01;32mReplay_2013_09_10_1435.w3g[0m* [01;32mReplay_2015_06_30_1938.w3g[0m*
[01;32mReplay_2013_09_10_1514.w3g[0m* [01;32mReplay_2015_06_30_2030.w3g[0m*
[01;32mReplay_2013_09_10_1601.w3g[0m* [01;32mReplay_2015_06_30_2133.w3g[0m*
[01;32mReplay_2013_09_10_1637.w3g[0m* [01;32mReplay_2015_07_01_0249.w3g[0m*
[01;32mReplay_2013_09_10_1717.w3g[0m* [01;32mReplay_2015_07_01_0326.w3g[0m*
[01;32mReplay_2013_09_11_1543.w3g[0m* [01;32mReplay_2015_07_01_1300.w3g[0m*
[01;32mReplay_2013_09_11_1829.w3g[0m* [01;32mReplay_2015_07_01_1419.w3g[0m*
[01;32mReplay_2013_09_12_0036.w3g[0m* [01;32mReplay_2015_07_01_1516.w3g[0m*
[01;32mReplay_2013_09_12_1304.w3g[0m* [01;32mReplay_2015_07_01_1605.w3g[0m*
[01;32mReplay_2013_09_13_1721.w3g[0m* [01;32mReplay_2015_07_02_1643.w3g[0m*
[01;32mReplay_2013_09_14_1907.w3g[0m* [01;32mReplay_2015_07_02_1904.w3g[0m*
[01;32mReplay_2013_09_14_2114.w3g[0m* [01;32mReplay_2015_07_02_1952.w3g[0m*
[01;32mReplay_2013_09_15_1515.w3g[0m* [01;32mReplay_2015_07_03_0138.w3g[0m*
[01;32mReplay_2013_09_15_1523.w3g[0m* [01;32mReplay_2015_07_03_0231.w3g[0m*
[01;32mReplay_2013_09_15_1624.w3g[0m* [01;32mReplay_2015_07_03_0330.w3g[0m*
[01;32mReplay_2013_09_16_0143.w3g[0m* [01;32mReplay_2015_07_03_0414.w3g[0m*
[01;32mReplay_2013_09_16_0237.w3g[0m* [01;32mReplay_2015_07_03_1816.w3g[0m*
[01;32mReplay_2013_09_16_1510.w3g[0m* [01;32mReplay_2015_07_03_1905.w3g[0m*
[01;32mReplay_2013_09_16_1732.w3g[0m* [01;32mReplay_2015_07_03_2058.w3g[0m*
[01;32mReplay_2013_09_17_0048.w3g[0m* [01;32mReplay_2015_07_03_2207.w3g[0m*
[01;32mReplay_2013_09_17_1701.w3g[0m* [01;32mReplay_2015_07_03_2248.w3g[0m*
[01;32mReplay_2013_09_18_1247.w3g[0m* [01;32mReplay_2015_07_04_2038.w3g[0m*
[01;32mReplay_2013_09_18_1530.w3g[0m* [01;32mReplay_2015_07_04_2117.w3g[0m*
[01;32mReplay_2013_09_18_1723.w3g[0m* [01;32mReplay_2015_07_04_2220.w3g[0m*
[01;32mReplay_2013_09_19_2034.w3g[0m* [01;32mReplay_2015_07_04_2252.w3g[0m*
[01;32mReplay_2013_09_19_2235.w3g[0m* [01;32mReplay_2015_07_05_2350.w3g[0m*
[01;32mReplay_2013_09_20_2000.w3g[0m* [01;32mReplay_2015_07_06_2348.w3g[0m*
[01;32mReplay_2013_09_20_2050.w3g[0m* [01;32mReplay_2015_07_07_0052.w3g[0m*
[01;32mReplay_2013_09_21_1559.w3g[0m* [01;32mReplay_2015_07_07_0152.w3g[0m*
[01;32mReplay_2013_09_21_1627.w3g[0m* [01;32mReplay_2015_07_07_0240.w3g[0m*
[01;32mReplay_2013_09_21_1820.w3g[0m* [01;32mReplay_2015_07_07_1846.w3g[0m*
[01;32mReplay_2013_09_22_0003.w3g[0m* [01;32mReplay_2015_07_08_1901.w3g[0m*
[01;32mReplay_2013_09_22_0041.w3g[0m* [01;32mReplay_2015_07_08_2340.w3g[0m*
[01;32mReplay_2013_09_22_1828.w3g[0m* [01;32mReplay_2015_07_09_0039.w3g[0m*
[01;32mReplay_2013_09_23_0011.w3g[0m* [01;32mReplay_2015_07_09_2359.w3g[0m*
[01;32mReplay_2013_09_23_1851.w3g[0m* [01;32mReplay_2015_07_11_2323.w3g[0m*
[01;32mReplay_2013_09_23_1927.w3g[0m* [01;32mReplay_2015_07_11_2334.w3g[0m*
[01;32mReplay_2013_09_23_2011.w3g[0m* [01;32mReplay_2015_07_12_0017.w3g[0m*
[01;32mReplay_2013_09_25_1419.w3g[0m* [01;32mReplay_2015_07_15_1708.w3g[0m*
[01;32mReplay_2013_09_25_1512.w3g[0m* [01;32mReplay_2015_07_15_1751.w3g[0m*
[01;32mReplay_2013_09_25_1608.w3g[0m* [01;32mReplay_2015_07_15_2247.w3g[0m*
[01;32mReplay_2013_09_26_0035.w3g[0m* [01;32mReplay_2015_07_16_1454.w3g[0m*
[01;32mReplay_2013_09_26_0145.w3g[0m* [01;32mReplay_2015_07_16_2133.w3g[0m*
[01;32mReplay_2013_09_26_0152.w3g[0m* [01;32mReplay_2015_07_16_2232.w3g[0m*
[01;32mReplay_2013_09_26_1219.w3g[0m* [01;32mReplay_2015_07_16_2325.w3g[0m*
[01;32mReplay_2013_09_26_1229.w3g[0m* [01;32mReplay_2015_07_17_1215.w3g[0m*
[01;32mReplay_2013_09_26_1313.w3g[0m* [01;32mReplay_2015_07_17_1320.w3g[0m*
[01;32mReplay_2013_09_27_1209.w3g[0m* [01;32mReplay_2015_07_17_1407.w3g[0m*
[01;32mReplay_2013_09_28_0039.w3g[0m* [01;32mReplay_2015_07_17_1450.w3g[0m*
[01;32mReplay_2013_09_28_1338.w3g[0m* [01;32mReplay_2015_07_17_1522.w3g[0m*
[01;32mReplay_2013_09_28_1635.w3g[0m* [01;32mReplay_2015_07_17_1635.w3g[0m*
[01;32mReplay_2013_09_28_2056.w3g[0m* [01;32mReplay_2015_07_17_2242.w3g[0m*
[01;32mReplay_2013_09_29_0137.w3g[0m* [01;32mReplay_2015_07_18_1404.w3g[0m*
[01;32mReplay_2013_09_30_1530.w3g[0m* [01;32mReplay_2015_07_18_1458.w3g[0m*
[01;32mReplay_2013_09_30_1608.w3g[0m* [01;32mReplay_2015_07_19_1618.w3g[0m*
[01;32mReplay_2013_10_01_1603.w3g[0m* [01;32mReplay_2015_07_19_1719.w3g[0m*
[01;32mReplay_2013_10_01_1637.w3g[0m* [01;32mReplay_2015_07_19_1857.w3g[0m*
[01;32mReplay_2013_10_01_1739.w3g[0m* [01;32mReplay_2015_07_19_1953.w3g[0m*
[01;32mReplay_2013_10_01_2235.w3g[0m* [01;32mReplay_2015_07_19_2020.w3g[0m*
[01;32mReplay_2013_10_02_1316.w3g[0m* [01;32mReplay_2015_07_19_2359.w3g[0m*
[01;32mReplay_2013_10_02_1344.w3g[0m* [01;32mReplay_2015_07_20_0041.w3g[0m*
[01;32mReplay_2013_10_02_1416.w3g[0m* [01;32mReplay_2015_07_20_0113.w3g[0m*
[01;32mReplay_2013_10_02_1453.w3g[0m* [01;32mReplay_2015_07_20_0130.w3g[0m*
[01;32mReplay_2013_10_02_2338.w3g[0m* [01;32mReplay_2015_07_21_2207.w3g[0m*
[01;32mReplay_2013_10_03_0041.w3g[0m* [01;32mReplay_2015_07_21_2252.w3g[0m*
[01;32mReplay_2013_10_03_1015.w3g[0m* [01;32mReplay_2015_07_22_1339.w3g[0m*
[01;32mReplay_2013_10_03_2102.w3g[0m* [01;32mReplay_2015_07_22_1849.w3g[0m*
[01;32mReplay_2013_10_03_2150.w3g[0m* [01;32mReplay_2015_07_22_1932.w3g[0m*
[01;32mReplay_2013_10_04_0022.w3g[0m* [01;32mReplay_2015_07_22_2006.w3g[0m*
[01;32mReplay_2013_10_04_2207.w3g[0m* [01;32mReplay_2015_07_22_2053.w3g[0m*
[01;32mReplay_2013_10_04_2224.w3g[0m* [01;32mReplay_2015_07_22_2100.w3g[0m*
[01;32mReplay_2013_10_04_2251.w3g[0m* [01;32mReplay_2015_07_22_2308.w3g[0m*
[01;32mReplay_2013_10_06_0038.w3g[0m* [01;32mReplay_2015_07_23_0030.w3g[0m*
[01;32mReplay_2013_10_06_0122.w3g[0m* [01;32mReplay_2015_07_23_0112.w3g[0m*
[01;32mReplay_2013_10_06_0213.w3g[0m* [01;32mReplay_2015_07_23_2058.w3g[0m*
[01;32mReplay_2013_10_07_1650.w3g[0m* [01;32mReplay_2015_07_23_2115.w3g[0m*
[01;32mReplay_2013_10_08_2056.w3g[0m* [01;32mReplay_2015_07_23_2157.w3g[0m*
[01;32mReplay_2013_10_08_2126.w3g[0m* [01;32mReplay_2015_07_23_2229.w3g[0m*
[01;32mReplay_2013_10_08_2158.w3g[0m* [01;32mReplay_2015_07_23_2244.w3g[0m*
[01;32mReplay_2013_10_08_2244.w3g[0m* [01;32mReplay_2015_07_23_2345.w3g[0m*
[01;32mReplay_2013_10_09_1431.w3g[0m* [01;32mReplay_2015_07_24_0034.w3g[0m*
[01;32mReplay_2013_10_09_1513.w3g[0m* [01;32mReplay_2015_07_24_1832.w3g[0m*
[01;32mReplay_2013_10_09_1609.w3g[0m* [01;32mReplay_2015_07_24_1920.w3g[0m*
[01;32mReplay_2013_10_09_1645.w3g[0m* [01;32mReplay_2015_07_24_2003.w3g[0m*
[01;32mReplay_2013_10_09_2113.w3g[0m* [01;32mReplay_2015_07_24_2058.w3g[0m*
[01;32mReplay_2013_10_09_2213.w3g[0m* [01;32mReplay_2015_07_24_2146.w3g[0m*
[01;32mReplay_2013_10_11_1501.w3g[0m* [01;32mReplay_2015_07_24_2228.w3g[0m*
[01;32mReplay_2013_10_11_1638.w3g[0m* [01;32mReplay_2015_07_24_2244.w3g[0m*
[01;32mReplay_2013_10_11_1724.w3g[0m* [01;32mReplay_2015_07_24_2335.w3g[0m*
[01;32mReplay_2013_10_11_1804.w3g[0m* [01;32mReplay_2015_07_25_0029.w3g[0m*
[01;32mReplay_2013_10_11_2317.w3g[0m* [01;32mReplay_2015_07_25_0110.w3g[0m*
[01;32mReplay_2013_10_14_1646.w3g[0m* [01;32mReplay_2015_07_25_0217.w3g[0m*
[01;32mReplay_2013_10_14_1831.w3g[0m* [01;32mReplay_2015_07_25_1343.w3g[0m*
[01;32mReplay_2013_10_14_1926.w3g[0m* [01;32mReplay_2015_07_25_1457.w3g[0m*
[01;32mReplay_2013_10_15_1855.w3g[0m* [01;32mReplay_2015_07_25_1530.w3g[0m*
[01;32mReplay_2013_10_16_0132.w3g[0m* [01;32mReplay_2015_07_25_1557.w3g[0m*
[01;32mReplay_2013_10_16_1002.w3g[0m* [01;32mReplay_2015_07_25_1641.w3g[0m*
[01;32mReplay_2013_10_16_1115.w3g[0m* [01;32mReplay_2015_07_25_1715.w3g[0m*
[01;32mReplay_2013_10_16_1313.w3g[0m* [01;32mReplay_2015_07_25_1803.w3g[0m*
[01;32mReplay_2013_10_16_1402.w3g[0m* [01;32mReplay_2015_07_25_2010.w3g[0m*
[01;32mReplay_2013_10_17_1945.w3g[0m* [01;32mReplay_2015_07_25_2107.w3g[0m*
[01;32mReplay_2013_10_18_1532.w3g[0m* [01;32mReplay_2015_07_25_2215.w3g[0m*
[01;32mReplay_2013_10_18_1629.w3g[0m* [01;32mReplay_2015_07_26_0012.w3g[0m*
[01;32mReplay_2013_10_18_1730.w3g[0m* [01;32mReplay_2015_07_26_0059.w3g[0m*
[01;32mReplay_2013_10_18_1807.w3g[0m* [01;32mReplay_2015_07_26_0159.w3g[0m*
[01;32mReplay_2013_10_18_1903.w3g[0m* [01;32mReplay_2015_07_26_1736.w3g[0m*
[01;32mReplay_2013_10_18_2233.w3g[0m* [01;32mReplay_2015_07_26_1818.w3g[0m*
[01;32mReplay_2013_10_19_1318.w3g[0m* [01;32mReplay_2015_07_26_1855.w3g[0m*
[01;32mReplay_2013_10_20_0144.w3g[0m* [01;32mReplay_2015_07_26_2248.w3g[0m*
[01;32mReplay_2013_10_20_0228.w3g[0m* [01;32mReplay_2015_07_26_2330.w3g[0m*
[01;32mReplay_2013_10_20_1729.w3g[0m* [01;32mReplay_2015_07_27_1558.w3g[0m*
[01;32mReplay_2013_10_21_1535.w3g[0m* [01;32mReplay_2015_07_27_1700.w3g[0m*
[01;32mReplay_2013_10_22_0231.w3g[0m* [01;32mReplay_2015_07_27_1749.w3g[0m*
[01;32mReplay_2013_10_22_1637.w3g[0m* [01;32mReplay_2015_07_27_2157.w3g[0m*
[01;32mReplay_2013_10_23_1503.w3g[0m* [01;32mReplay_2015_07_27_2311.w3g[0m*
[01;32mReplay_2013_10_23_1551.w3g[0m* [01;32mReplay_2015_07_28_0028.w3g[0m*
[01;32mReplay_2013_10_23_1624.w3g[0m* [01;32mReplay_2015_07_28_1301.w3g[0m*
[01;32mReplay_2013_10_24_2240.w3g[0m* [01;32mReplay_2015_07_28_2016.w3g[0m*
[01;32mReplay_2013_10_24_2312.w3g[0m* [01;32mReplay_2015_07_29_1322.w3g[0m*
[01;32mReplay_2013_10_25_1509.w3g[0m* [01;32mReplay_2015_07_29_1430.w3g[0m*
[01;32mReplay_2013_10_26_2207.w3g[0m* [01;32mReplay_2015_07_29_2251.w3g[0m*
[01;32mReplay_2013_10_26_2244.w3g[0m* [01;32mReplay_2015_07_29_2343.w3g[0m*
[01;32mReplay_2013_10_26_2329.w3g[0m* [01;32mReplay_2015_07_30_0350.w3g[0m*
[01;32mReplay_2013_10_27_0105.w3g[0m* [01;32mReplay_2015_07_30_1822.w3g[0m*
[01;32mReplay_2013_10_28_0106.w3g[0m* [01;32mReplay_2015_07_30_1910.w3g[0m*
[01;32mReplay_2013_10_28_0109.w3g[0m* [01;32mReplay_2015_07_30_2336.w3g[0m*
[01;32mReplay_2013_10_28_0150.w3g[0m* [01;32mReplay_2015_07_31_1440.w3g[0m*
[01;32mReplay_2013_10_28_0230.w3g[0m* [01;32mReplay_2015_07_31_1523.w3g[0m*
[01;32mReplay_2013_10_28_1516.w3g[0m* [01;32mReplay_2015_07_31_1651.w3g[0m*
[01;32mReplay_2013_10_28_1608.w3g[0m* [01;32mReplay_2015_07_31_1814.w3g[0m*
[01;32mReplay_2013_10_28_1700.w3g[0m* [01;32mReplay_2015_07_31_2111.w3g[0m*
[01;32mReplay_2013_10_29_0025.w3g[0m* [01;32mReplay_2015_07_31_2211.w3g[0m*
[01;32mReplay_2013_10_29_0113.w3g[0m* [01;32mReplay_2015_07_31_2255.w3g[0m*
[01;32mReplay_2013_10_29_0135.w3g[0m* [01;32mReplay_2015_07_31_2331.w3g[0m*
[01;32mReplay_2013_10_29_1024.w3g[0m* [01;32mReplay_2015_08_01_0333.w3g[0m*
[01;32mReplay_2013_10_29_1026.w3g[0m* [01;32mReplay_2015_08_01_1334.w3g[0m*
[01;32mReplay_2013_10_29_1040.w3g[0m* [01;32mReplay_2015_08_01_1415.w3g[0m*
[01;32mReplay_2013_10_29_1050.w3g[0m* [01;32mReplay_2015_08_01_1516.w3g[0m*
[01;32mReplay_2013_10_29_1056.w3g[0m* [01;32mReplay_2015_08_01_1601.w3g[0m*
[01;32mReplay_2013_10_29_1107.w3g[0m* [01;32mReplay_2015_08_01_1639.w3g[0m*
[01;32mReplay_2013_10_29_1122.w3g[0m* [01;32mReplay_2015_08_01_1739.w3g[0m*
[01;32mReplay_2013_10_29_1532.w3g[0m* [01;32mReplay_2015_08_01_1841.w3g[0m*
[01;32mReplay_2013_10_29_1617.w3g[0m* [01;32mReplay_2015_08_01_1905.w3g[0m*
[01;32mReplay_2013_10_29_2032.w3g[0m* [01;32mReplay_2015_08_01_1952.w3g[0m*
[01;32mReplay_2013_10_29_2034.w3g[0m* [01;32mReplay_2015_08_01_2045.w3g[0m*
[01;32mReplay_2013_10_29_2038.w3g[0m* [01;32mReplay_2015_08_01_2135.w3g[0m*
[01;32mReplay_2013_10_29_2102.w3g[0m* [01;32mReplay_2015_08_01_2222.w3g[0m*
[01;32mReplay_2013_10_29_2107.w3g[0m* [01;32mReplay_2015_08_02_0009.w3g[0m*
[01;32mReplay_2013_10_30_0105.w3g[0m* [01;32mReplay_2015_08_02_0050.w3g[0m*
[01;32mReplay_2013_10_30_0114.w3g[0m* [01;32mReplay_2015_08_02_0135.w3g[0m*
[01;32mReplay_2013_10_30_0137.w3g[0m* [01;32mReplay_2015_08_02_0214.w3g[0m*
[01;32mReplay_2013_10_31_0150.w3g[0m* [01;32mReplay_2015_08_02_0308.w3g[0m*
[01;32mReplay_2013_10_31_1141.w3g[0m* [01;32mReplay_2015_08_02_0438.w3g[0m*
[01;32mReplay_2013_10_31_1209.w3g[0m* [01;32mReplay_2015_08_02_1313.w3g[0m*
[01;32mReplay_2013_10_31_1238.w3g[0m* [01;32mReplay_2015_08_02_1320.w3g[0m*
[01;32mReplay_2013_11_01_1757.w3g[0m* [01;32mReplay_2015_08_02_1514.w3g[0m*
[01;32mReplay_2013_11_01_1815.w3g[0m* [01;32mReplay_2015_08_02_1559.w3g[0m*
[01;32mReplay_2013_11_03_0113.w3g[0m* [01;32mReplay_2015_08_02_1636.w3g[0m*
[01;32mReplay_2013_11_03_1950.w3g[0m* [01;32mReplay_2015_08_04_1424.w3g[0m*
[01;32mReplay_2013_11_06_1720.w3g[0m* [01;32mReplay_2015_08_04_1507.w3g[0m*
[01;32mReplay_2013_11_06_1800.w3g[0m* [01;32mReplay_2015_08_04_1826.w3g[0m*
[01;32mReplay_2013_11_06_1857.w3g[0m* [01;32mReplay_2015_08_04_1928.w3g[0m*
[01;32mReplay_2013_11_08_1213.w3g[0m* [01;32mReplay_2015_08_04_1936.w3g[0m*
[01;32mReplay_2013_11_08_1342.w3g[0m* [01;32mReplay_2015_08_04_2059.w3g[0m*
[01;32mReplay_2013_11_09_0020.w3g[0m* [01;32mReplay_2015_08_05_0023.w3g[0m*
[01;32mReplay_2013_11_09_0112.w3g[0m* [01;32mReplay_2015_08_05_0114.w3g[0m*
[01;32mReplay_2013_11_09_0152.w3g[0m* [01;32mReplay_2015_08_05_1443.w3g[0m*
[01;32mReplay_2013_11_09_1429.w3g[0m* [01;32mReplay_2015_08_05_1538.w3g[0m*
[01;32mReplay_2013_11_09_1529.w3g[0m* [01;32mReplay_2015_08_05_1635.w3g[0m*
[01;32mReplay_2013_11_10_2132.w3g[0m* [01;32mReplay_2015_08_05_1710.w3g[0m*
[01;32mReplay_2013_11_10_2221.w3g[0m* [01;32mReplay_2015_08_05_1757.w3g[0m*
[01;32mReplay_2013_11_10_2306.w3g[0m* [01;32mReplay_2015_08_05_1903.w3g[0m*
[01;32mReplay_2013_11_11_0004.w3g[0m* [01;32mReplay_2015_08_05_1950.w3g[0m*
[01;32mReplay_2013_11_11_1533.w3g[0m* [01;32mReplay_2015_08_05_2054.w3g[0m*
[01;32mReplay_2013_11_12_1609.w3g[0m* [01;32mReplay_2015_08_05_2154.w3g[0m*
[01;32mReplay_2013_11_12_1659.w3g[0m* [01;32mReplay_2015_08_06_1625.w3g[0m*
[01;32mReplay_2013_11_12_1736.w3g[0m* [01;32mReplay_2015_08_06_1716.w3g[0m*
[01;32mReplay_2013_11_12_1837.w3g[0m* [01;32mReplay_2015_08_06_1800.w3g[0m*
[01;32mReplay_2013_11_14_1851.w3g[0m* [01;32mReplay_2015_08_06_2212.w3g[0m*
[01;32mReplay_2013_11_15_2349.w3g[0m* [01;32mReplay_2015_08_07_1731.w3g[0m*
[01;32mReplay_2013_11_16_0024.w3g[0m* [01;32mReplay_2015_08_07_1824.w3g[0m*
[01;32mReplay_2013_11_16_1418.w3g[0m* [01;32mReplay_2015_08_08_1734.w3g[0m*
[01;32mReplay_2013_11_16_1500.w3g[0m* [01;32mReplay_2015_08_08_1836.w3g[0m*
[01;32mReplay_2013_11_16_1547.w3g[0m* [01;32mReplay_2015_08_08_1925.w3g[0m*
[01;32mReplay_2013_11_17_1419.w3g[0m* [01;32mReplay_2015_08_09_1132.w3g[0m*
[01;32mReplay_2013_11_18_1540.w3g[0m* [01;32mReplay_2015_08_09_1233.w3g[0m*
[01;32mReplay_2013_11_18_1617.w3g[0m* [01;32mReplay_2015_08_09_1315.w3g[0m*
[01;32mReplay_2013_11_18_1707.w3g[0m* [01;32mReplay_2015_08_09_1403.w3g[0m*
[01;32mReplay_2013_11_18_1741.w3g[0m* [01;32mReplay_2015_08_09_1642.w3g[0m*
[01;32mReplay_2013_11_18_2253.w3g[0m* [01;32mReplay_2015_08_09_1747.w3g[0m*
[01;32mReplay_2013_11_18_2332.w3g[0m* [01;32mReplay_2015_08_09_2147.w3g[0m*
[01;32mReplay_2013_11_19_0031.w3g[0m* [01;32mReplay_2015_08_09_2245.w3g[0m*
[01;32mReplay_2013_11_19_1539.w3g[0m* [01;32mReplay_2015_08_09_2315.w3g[0m*
[01;32mReplay_2013_11_19_1852.w3g[0m* [01;32mReplay_2015_08_10_0006.w3g[0m*
[01;32mReplay_2013_11_19_2211.w3g[0m* [01;32mReplay_2015_08_10_1120.w3g[0m*
[01;32mReplay_2013_11_19_2257.w3g[0m* [01;32mReplay_2015_08_10_1227.w3g[0m*
[01;32mReplay_2013_11_19_2346.w3g[0m* [01;32mReplay_2015_08_10_1327.w3g[0m*
[01;32mReplay_2013_11_20_1222.w3g[0m* [01;32mReplay_2015_08_10_1436.w3g[0m*
[01;32mReplay_2013_11_21_0118.w3g[0m* [01;32mReplay_2015_08_10_2151.w3g[0m*
[01;32mReplay_2013_11_22_1211.w3g[0m* [01;32mReplay_2015_08_10_2215.w3g[0m*
[01;32mReplay_2013_11_22_1220.w3g[0m* [01;32mReplay_2015_08_11_1152.w3g[0m*
[01;32mReplay_2013_11_22_1249.w3g[0m* [01;32mReplay_2015_08_11_1247.w3g[0m*
[01;32mReplay_2013_11_22_1251.w3g[0m* [01;32mReplay_2015_08_11_2327.w3g[0m*
[01;32mReplay_2013_11_22_1257.w3g[0m* [01;32mReplay_2015_08_12_0027.w3g[0m*
[01;32mReplay_2013_11_22_1304.w3g[0m* [01;32mReplay_2015_08_12_1812.w3g[0m*
[01;32mReplay_2013_11_22_1313.w3g[0m* [01;32mReplay_2015_08_12_1817.w3g[0m*
[01;32mReplay_2013_11_22_1333.w3g[0m* [01;32mReplay_2015_08_12_1927.w3g[0m*
[01;32mReplay_2013_11_22_1337.w3g[0m* [01;32mReplay_2015_08_12_2011.w3g[0m*
[01;32mReplay_2013_11_22_1401.w3g[0m* [01;32mReplay_2015_08_12_2101.w3g[0m*
[01;32mReplay_2013_11_22_1406.w3g[0m* [01;32mReplay_2015_08_13_1259.w3g[0m*
[01;32mReplay_2013_11_22_1422.w3g[0m* [01;32mReplay_2015_08_13_1338.w3g[0m*
[01;32mReplay_2013_11_22_1426.w3g[0m* [01;32mReplay_2015_08_13_1427.w3g[0m*
[01;32mReplay_2013_11_22_1427.w3g[0m* [01;32mReplay_2015_08_13_1628.w3g[0m*
[01;32mReplay_2013_11_22_1450.w3g[0m* [01;32mReplay_2015_08_13_1643.w3g[0m*
[01;32mReplay_2013_11_22_1506.w3g[0m* [01;32mReplay_2015_08_13_1722.w3g[0m*
[01;32mReplay_2013_11_22_1520.w3g[0m* [01;32mReplay_2015_08_13_1819.w3g[0m*
[01;32mReplay_2013_11_22_2033.w3g[0m* [01;32mReplay_2015_08_13_1910.w3g[0m*
[01;32mReplay_2013_11_23_0146.w3g[0m* [01;32mReplay_2015_08_13_1937.w3g[0m*
[01;32mReplay_2013_11_23_1058.w3g[0m* [01;32mReplay_2015_08_13_2046.w3g[0m*
[01;32mReplay_2013_11_23_1138.w3g[0m* [01;32mReplay_2015_08_13_2158.w3g[0m*
[01;32mReplay_2013_11_23_1240.w3g[0m* [01;32mReplay_2015_08_14_1147.w3g[0m*
[01;32mReplay_2013_11_23_1606.w3g[0m* [01;32mReplay_2015_08_14_1222.w3g[0m*
[01;32mReplay_2013_11_23_1631.w3g[0m* [01;32mReplay_2015_08_14_2328.w3g[0m*
[01;32mReplay_2013_11_23_1728.w3g[0m* [01;32mReplay_2015_08_15_0008.w3g[0m*
[01;32mReplay_2013_11_23_1802.w3g[0m* [01;32mReplay_2015_08_16_1833.w3g[0m*
[01;32mReplay_2013_11_23_1809.w3g[0m* [01;32mReplay_2015_08_16_1905.w3g[0m*
[01;32mReplay_2013_11_23_1810.w3g[0m* [01;32mReplay_2015_08_16_1945.w3g[0m*
[01;32mReplay_2013_11_23_1813.w3g[0m* [01;32mReplay_2015_08_17_1138.w3g[0m*
[01;32mReplay_2013_11_23_1819.w3g[0m* [01;32mReplay_2015_08_17_1143.w3g[0m*
[01;32mReplay_2013_11_23_1835.w3g[0m* [01;32mReplay_2015_08_17_1239.w3g[0m*
[01;32mReplay_2013_11_23_1838.w3g[0m* [01;32mReplay_2015_08_17_1315.w3g[0m*
[01;32mReplay_2013_11_23_1840.w3g[0m* [01;32mReplay_2015_08_17_1849.w3g[0m*
[01;32mReplay_2013_11_23_1843.w3g[0m* [01;32mReplay_2015_08_17_1936.w3g[0m*
[01;32mReplay_2013_11_23_1845.w3g[0m* [01;32mReplay_2015_08_18_1847.w3g[0m*
[01;32mReplay_2013_11_23_1847.w3g[0m* [01;32mReplay_2015_08_18_1924.w3g[0m*
[01;32mReplay_2013_11_23_1851.w3g[0m* [01;32mReplay_2015_08_18_2000.w3g[0m*
[01;32mReplay_2013_11_23_1854.w3g[0m* [01;32mReplay_2015_08_19_1631.w3g[0m*
[01;32mReplay_2013_11_23_1946.w3g[0m* [01;32mReplay_2015_08_19_1706.w3g[0m*
[01;32mReplay_2013_11_24_1548.w3g[0m* [01;32mReplay_2015_08_19_1807.w3g[0m*
[01;32mReplay_2013_11_24_1630.w3g[0m* [01;32mReplay_2015_08_20_0115.w3g[0m*
[01;32mReplay_2013_11_24_1902.w3g[0m* [01;32mReplay_2015_08_20_0218.w3g[0m*
[01;32mReplay_2013_11_24_1940.w3g[0m* [01;32mReplay_2015_08_20_0310.w3g[0m*
[01;32mReplay_2013_11_24_2025.w3g[0m* [01;32mReplay_2015_08_20_1324.w3g[0m*
[01;32mReplay_2013_11_24_2108.w3g[0m* [01;32mReplay_2015_08_20_1403.w3g[0m*
[01;32mReplay_2013_11_25_0022.w3g[0m* [01;32mReplay_2015_08_20_1447.w3g[0m*
[01;32mReplay_2013_11_25_0103.w3g[0m* [01;32mReplay_2015_08_20_1921.w3g[0m*
[01;32mReplay_2013_11_25_1527.w3g[0m* [01;32mReplay_2015_08_20_2017.w3g[0m*
[01;32mReplay_2013_11_27_0144.w3g[0m* [01;32mReplay_2015_08_23_1213.w3g[0m*
[01;32mReplay_2013_11_27_0221.w3g[0m* [01;32mReplay_2015_08_23_1324.w3g[0m*
[01;32mReplay_2013_11_27_2050.w3g[0m* [01;32mReplay_2015_08_23_1430.w3g[0m*
[01;32mReplay_2013_11_27_2330.w3g[0m* [01;32mReplay_2015_08_23_1513.w3g[0m*
[01;32mReplay_2013_11_27_2347.w3g[0m* [01;32mReplay_2015_08_23_1638.w3g[0m*
[01;32mReplay_2013_11_28_0017.w3g[0m* [01;32mReplay_2015_08_23_2016.w3g[0m*
[01;32mReplay_2013_11_28_1102.w3g[0m* [01;32mReplay_2015_08_23_2100.w3g[0m*
[01;32mReplay_2013_11_28_1135.w3g[0m* [01;32mReplay_2015_08_23_2150.w3g[0m*
[01;32mReplay_2013_11_28_1216.w3g[0m* [01;32mReplay_2015_08_23_2239.w3g[0m*
[01;32mReplay_2013_11_28_1307.w3g[0m* [01;32mReplay_2015_08_24_1656.w3g[0m*
[01;32mReplay_2013_11_29_2009.w3g[0m* [01;32mReplay_2015_08_24_1732.w3g[0m*
[01;32mReplay_2013_11_29_2103.w3g[0m* [01;32mReplay_2015_08_24_1828.w3g[0m*
[01;32mReplay_2013_11_29_2321.w3g[0m* [01;32mReplay_2015_08_24_2227.w3g[0m*
[01;32mReplay_2013_11_30_0001.w3g[0m* [01;32mReplay_2015_08_25_1900.w3g[0m*
[01;32mReplay_2013_11_30_0038.w3g[0m* [01;32mReplay_2015_08_25_2000.w3g[0m*
[01;32mReplay_2013_11_30_0959.w3g[0m* [01;32mReplay_2015_08_25_2038.w3g[0m*
[01;32mReplay_2013_11_30_1016.w3g[0m* [01;32mReplay_2015_08_25_2109.w3g[0m*
[01;32mReplay_2013_11_30_1615.w3g[0m* [01;32mReplay_2015_08_25_2128.w3g[0m*
[01;32mReplay_2013_11_30_1956.w3g[0m* [01;32mReplay_2015_08_25_2232.w3g[0m*
[01;32mReplay_2013_11_30_2056.w3g[0m* [01;32mReplay_2015_08_25_2311.w3g[0m*
[01;32mReplay_2013_11_30_2255.w3g[0m* [01;32mReplay_2015_08_26_1832.w3g[0m*
[01;32mReplay_2013_11_30_2335.w3g[0m* [01;32mReplay_2015_08_26_1919.w3g[0m*
[01;32mReplay_2013_12_01_0021.w3g[0m* [01;32mReplay_2015_08_26_1922.w3g[0m*
[01;32mReplay_2013_12_01_0111.w3g[0m* [01;32mReplay_2015_08_26_2026.w3g[0m*
[01;32mReplay_2013_12_01_1554.w3g[0m* [01;32mReplay_2015_08_26_2035.w3g[0m*
[01;32mReplay_2013_12_01_2122.w3g[0m* [01;32mReplay_2015_08_26_2148.w3g[0m*
[01;32mReplay_2013_12_01_2203.w3g[0m* [01;32mReplay_2015_08_27_1922.w3g[0m*
[01;32mReplay_2013_12_01_2245.w3g[0m* [01;32mReplay_2015_08_27_2018.w3g[0m*
[01;32mReplay_2013_12_01_2315.w3g[0m* [01;32mReplay_2015_08_27_2103.w3g[0m*
[01;32mReplay_2013_12_02_0005.w3g[0m* [01;32mReplay_2015_08_27_2123.w3g[0m*
[01;32mReplay_2013_12_02_1151.w3g[0m* [01;32mReplay_2015_08_27_2204.w3g[0m*
[01;32mReplay_2013_12_02_1206.w3g[0m* [01;32mReplay_2015_08_27_2207.w3g[0m*
[01;32mReplay_2013_12_02_1209.w3g[0m* [01;32mReplay_2015_08_29_2001.w3g[0m*
[01;32mReplay_2013_12_02_1244.w3g[0m* [01;32mReplay_2015_08_30_2105.w3g[0m*
[01;32mReplay_2013_12_02_1246.w3g[0m* [01;32mReplay_2015_08_30_2235.w3g[0m*
[01;32mReplay_2013_12_02_1748.w3g[0m* [01;32mReplay_2015_08_30_2320.w3g[0m*
[01;32mReplay_2013_12_02_1840.w3g[0m* [01;32mReplay_2015_08_31_1305.w3g[0m*
[01;32mReplay_2013_12_02_1930.w3g[0m* [01;32mReplay_2015_08_31_1400.w3g[0m*
[01;32mReplay_2013_12_02_2300.w3g[0m* [01;32mReplay_2015_08_31_1520.w3g[0m*
[01;32mReplay_2013_12_02_2352.w3g[0m* [01;32mReplay_2015_08_31_1946.w3g[0m*
[01;32mReplay_2013_12_03_1332.w3g[0m* [01;32mReplay_2015_08_31_2043.w3g[0m*
[01;32mReplay_2013_12_03_1402.w3g[0m* [01;32mReplay_2015_09_01_1558.w3g[0m*
[01;32mReplay_2013_12_03_1908.w3g[0m* [01;32mReplay_2015_09_01_1637.w3g[0m*
[01;32mReplay_2013_12_03_2050.w3g[0m* [01;32mReplay_2015_09_01_1716.w3g[0m*
[01;32mReplay_2013_12_03_2101.w3g[0m* [01;32mReplay_2015_09_02_1554.w3g[0m*
[01;32mReplay_2013_12_03_2206.w3g[0m* [01;32mReplay_2015_09_02_1644.w3g[0m*
[01;32mReplay_2013_12_04_0008.w3g[0m* [01;32mReplay_2015_09_02_1744.w3g[0m*
[01;32mReplay_2013_12_06_1858.w3g[0m* [01;32mReplay_2015_09_02_1842.w3g[0m*
[01;32mReplay_2013_12_06_1935.w3g[0m* [01;32mReplay_2015_09_03_0045.w3g[0m*
[01;32mReplay_2013_12_07_2214.w3g[0m* [01;32mReplay_2015_09_03_1716.w3g[0m*
[01;32mReplay_2013_12_07_2305.w3g[0m* [01;32mReplay_2015_09_03_1810.w3g[0m*
[01;32mReplay_2013_12_08_0011.w3g[0m* [01;32mReplay_2015_09_03_1847.w3g[0m*
[01;32mReplay_2013_12_08_0124.w3g[0m* [01;32mReplay_2015_09_03_2102.w3g[0m*
[01;32mReplay_2013_12_08_0217.w3g[0m* [01;32mReplay_2015_09_03_2154.w3g[0m*
[01;32mReplay_2013_12_09_1502.w3g[0m* [01;32mReplay_2015_09_04_1455.w3g[0m*
[01;32mReplay_2013_12_09_1547.w3g[0m* [01;32mReplay_2015_09_04_1745.w3g[0m*
[01;32mReplay_2013_12_10_0112.w3g[0m* [01;32mReplay_2015_09_04_2117.w3g[0m*
[01;32mReplay_2013_12_10_0200.w3g[0m* [01;32mReplay_2015_09_04_2156.w3g[0m*
[01;32mReplay_2013_12_10_0223.w3g[0m* [01;32mReplay_2015_09_05_1052.w3g[0m*
[01;32mReplay_2013_12_10_1613.w3g[0m* [01;32mReplay_2015_09_05_1151.w3g[0m*
[01;32mReplay_2013_12_12_2101.w3g[0m* [01;32mReplay_2015_09_05_1253.w3g[0m*
[01;32mReplay_2013_12_13_1124.w3g[0m* [01;32mReplay_2015_09_05_1417.w3g[0m*
[01;32mReplay_2013_12_13_1241.w3g[0m* [01;32mReplay_2015_09_05_1714.w3g[0m*
[01;32mReplay_2013_12_14_2202.w3g[0m* [01;32mReplay_2015_09_05_1818.w3g[0m*
[01;32mReplay_2013_12_14_2221.w3g[0m* [01;32mReplay_2015_09_05_1826.w3g[0m*
[01;32mReplay_2013_12_15_1308.w3g[0m* [01;32mReplay_2015_09_05_1912.w3g[0m*
[01;32mReplay_2013_12_15_1358.w3g[0m* [01;32mReplay_2015_09_06_0006.w3g[0m*
[01;32mReplay_2013_12_16_1432.w3g[0m* [01;32mReplay_2015_09_06_0012.w3g[0m*
[01;32mReplay_2013_12_16_1433.w3g[0m* [01;32mReplay_2015_09_06_0148.w3g[0m*
[01;32mReplay_2013_12_16_1508.w3g[0m* [01;32mReplay_2015_09_06_0231.w3g[0m*
[01;32mReplay_2013_12_17_1759.w3g[0m* [01;32mReplay_2015_09_06_1414.w3g[0m*
[01;32mReplay_2013_12_18_1905.w3g[0m* [01;32mReplay_2015_09_06_1910.w3g[0m*
[01;32mReplay_2013_12_18_1951.w3g[0m* [01;32mReplay_2015_09_06_1956.w3g[0m*
[01;32mReplay_2013_12_19_1907.w3g[0m* [01;32mReplay_2015_09_06_2108.w3g[0m*
###Markdown
Perfect, now let's extract the file `replays/Multiplayer16.zip` using the same procedure we used before for `replays.zip`; the only difference is that we change the file we want to unzip and the destination where we unzip it, `-d replays/`.
###Code
#collapse
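# Note: unzip asks before overwriting files that already exist (see the prompts in the output below);
# answering A ([A]ll) applies the same answer to every remaining file, while the -o flag would overwrite without asking.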
!unzip replays/Multiplayer16.zip -d replays/
###Output
Archive: replays/Multiplayer16.zip
replace replays/Multiplayer/Replay_2013_07_18_1614.w3g? [y]es, [n]o, [A]ll, [N]one, [r]ename: y
extracting: replays/Multiplayer/Replay_2013_07_18_1614.w3g
replace replays/Multiplayer/Replay_2013_07_18_1636.w3g? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
inflating: replays/Multiplayer/Replay_2013_07_18_1636.w3g
extracting: replays/Multiplayer/Replay_2013_07_18_1639.w3g
inflating: replays/Multiplayer/Replay_2013_07_18_1723.w3g
inflating: replays/Multiplayer/Replay_2013_07_18_1736.w3g
extracting: replays/Multiplayer/Replay_2013_07_18_1738.w3g
inflating: replays/Multiplayer/Replay_2013_07_18_1828.w3g
inflating: replays/Multiplayer/Replay_2013_07_18_1930.w3g
extracting: replays/Multiplayer/Replay_2013_07_18_1931.w3g
inflating: replays/Multiplayer/Replay_2013_07_18_1944.w3g
inflating: replays/Multiplayer/Replay_2013_07_18_2032.w3g
inflating: replays/Multiplayer/Replay_2013_07_18_2044.w3g
inflating: replays/Multiplayer/Replay_2013_07_19_1754.w3g
inflating: replays/Multiplayer/Replay_2013_07_19_1849.w3g
inflating: replays/Multiplayer/Replay_2013_07_20_1256.w3g
inflating: replays/Multiplayer/Replay_2013_07_20_1315.w3g
extracting: replays/Multiplayer/Replay_2013_07_20_1316.w3g
inflating: replays/Multiplayer/Replay_2013_07_20_1317.w3g
inflating: replays/Multiplayer/Replay_2013_07_20_1339.w3g
inflating: replays/Multiplayer/Replay_2013_07_20_1355.w3g
inflating: replays/Multiplayer/Replay_2013_07_20_1424.w3g
inflating: replays/Multiplayer/Replay_2013_07_20_1733.w3g
inflating: replays/Multiplayer/Replay_2013_07_20_1814.w3g
inflating: replays/Multiplayer/Replay_2013_07_20_1847.w3g
inflating: replays/Multiplayer/Replay_2013_07_20_2354.w3g
inflating: replays/Multiplayer/Replay_2013_07_21_0115.w3g
inflating: replays/Multiplayer/Replay_2013_07_22_0121.w3g
inflating: replays/Multiplayer/Replay_2013_07_22_0231.w3g
inflating: replays/Multiplayer/Replay_2013_07_24_0059.w3g
inflating: replays/Multiplayer/Replay_2013_07_24_2124.w3g
inflating: replays/Multiplayer/Replay_2013_07_25_1326.w3g
inflating: replays/Multiplayer/Replay_2013_07_25_1442.w3g
extracting: replays/Multiplayer/Replay_2013_07_26_1219.w3g
inflating: replays/Multiplayer/Replay_2013_07_28_0211.w3g
inflating: replays/Multiplayer/Replay_2013_07_28_0250.w3g
inflating: replays/Multiplayer/Replay_2013_07_28_1133.w3g
inflating: replays/Multiplayer/Replay_2013_07_28_1206.w3g
inflating: replays/Multiplayer/Replay_2013_07_28_1243.w3g
inflating: replays/Multiplayer/Replay_2013_07_29_1349.w3g
inflating: replays/Multiplayer/Replay_2013_07_29_2216.w3g
inflating: replays/Multiplayer/Replay_2013_07_30_1244.w3g
inflating: replays/Multiplayer/Replay_2013_07_30_2046.w3g
inflating: replays/Multiplayer/Replay_2013_07_30_2104.w3g
inflating: replays/Multiplayer/Replay_2013_07_31_0135.w3g
inflating: replays/Multiplayer/Replay_2013_07_31_1800.w3g
inflating: replays/Multiplayer/Replay_2013_07_31_1829.w3g
inflating: replays/Multiplayer/Replay_2013_07_31_2005.w3g
inflating: replays/Multiplayer/Replay_2013_07_31_2030.w3g
inflating: replays/Multiplayer/Replay_2013_08_01_0107.w3g
inflating: replays/Multiplayer/Replay_2013_08_01_0146.w3g
inflating: replays/Multiplayer/Replay_2013_08_01_1149.w3g
inflating: replays/Multiplayer/Replay_2013_08_01_2243.w3g
inflating: replays/Multiplayer/Replay_2013_08_01_2329.w3g
inflating: replays/Multiplayer/Replay_2013_08_02_0042.w3g
inflating: replays/Multiplayer/Replay_2013_08_02_0144.w3g
inflating: replays/Multiplayer/Replay_2013_08_03_0122.w3g
inflating: replays/Multiplayer/Replay_2013_08_03_0205.w3g
inflating: replays/Multiplayer/Replay_2013_08_03_1506.w3g
inflating: replays/Multiplayer/Replay_2013_08_03_1604.w3g
inflating: replays/Multiplayer/Replay_2013_08_03_1921.w3g
inflating: replays/Multiplayer/Replay_2013_08_03_2026.w3g
inflating: replays/Multiplayer/Replay_2013_08_04_0007.w3g
extracting: replays/Multiplayer/Replay_2013_08_04_1158.w3g
inflating: replays/Multiplayer/Replay_2013_08_04_1218.w3g
inflating: replays/Multiplayer/Replay_2013_08_04_1230.w3g
inflating: replays/Multiplayer/Replay_2013_08_04_1252.w3g
extracting: replays/Multiplayer/Replay_2013_08_04_1623.w3g
inflating: replays/Multiplayer/Replay_2013_08_05_0115.w3g
inflating: replays/Multiplayer/Replay_2013_08_05_1433.w3g
inflating: replays/Multiplayer/Replay_2013_08_05_2215.w3g
inflating: replays/Multiplayer/Replay_2013_08_05_2302.w3g
inflating: replays/Multiplayer/Replay_2013_08_05_2340.w3g
inflating: replays/Multiplayer/Replay_2013_08_06_0051.w3g
extracting: replays/Multiplayer/Replay_2013_08_06_1213.w3g
inflating: replays/Multiplayer/Replay_2013_08_06_1257.w3g
inflating: replays/Multiplayer/Replay_2013_08_07_1630.w3g
inflating: replays/Multiplayer/Replay_2013_08_07_2007.w3g
inflating: replays/Multiplayer/Replay_2013_08_07_2016.w3g
inflating: replays/Multiplayer/Replay_2013_08_09_0132.w3g
inflating: replays/Multiplayer/Replay_2013_08_09_0210.w3g
inflating: replays/Multiplayer/Replay_2013_08_09_1634.w3g
extracting: replays/Multiplayer/Replay_2013_08_09_2000.w3g
inflating: replays/Multiplayer/Replay_2013_08_09_2224.w3g
inflating: replays/Multiplayer/Replay_2013_08_09_2302.w3g
inflating: replays/Multiplayer/Replay_2013_08_09_2348.w3g
inflating: replays/Multiplayer/Replay_2013_08_10_1544.w3g
inflating: replays/Multiplayer/Replay_2013_08_10_1652.w3g
inflating: replays/Multiplayer/Replay_2013_08_10_1706.w3g
inflating: replays/Multiplayer/Replay_2013_08_10_1727.w3g
inflating: replays/Multiplayer/Replay_2013_08_10_2040.w3g
inflating: replays/Multiplayer/Replay_2013_08_10_2123.w3g
inflating: replays/Multiplayer/Replay_2013_08_11_1644.w3g
inflating: replays/Multiplayer/Replay_2013_08_11_1719.w3g
inflating: replays/Multiplayer/Replay_2013_08_11_1801.w3g
inflating: replays/Multiplayer/Replay_2013_08_11_1934.w3g
inflating: replays/Multiplayer/Replay_2013_08_11_2038.w3g
inflating: replays/Multiplayer/Replay_2013_08_12_0949.w3g
inflating: replays/Multiplayer/Replay_2013_08_12_0959.w3g
inflating: replays/Multiplayer/Replay_2013_08_12_1513.w3g
inflating: replays/Multiplayer/Replay_2013_08_12_1657.w3g
inflating: replays/Multiplayer/Replay_2013_08_12_1835.w3g
inflating: replays/Multiplayer/Replay_2013_08_12_1954.w3g
inflating: replays/Multiplayer/Replay_2013_08_12_2007.w3g
inflating: replays/Multiplayer/Replay_2013_08_12_2238.w3g
inflating: replays/Multiplayer/Replay_2013_08_12_2326.w3g
inflating: replays/Multiplayer/Replay_2013_08_13_1145.w3g
extracting: replays/Multiplayer/Replay_2013_08_13_1745.w3g
extracting: replays/Multiplayer/Replay_2013_08_13_1957.w3g
inflating: replays/Multiplayer/Replay_2013_08_13_2108.w3g
inflating: replays/Multiplayer/Replay_2013_08_13_2349.w3g
inflating: replays/Multiplayer/Replay_2013_08_14_0048.w3g
inflating: replays/Multiplayer/Replay_2013_08_14_0125.w3g
inflating: replays/Multiplayer/Replay_2013_08_14_1121.w3g
inflating: replays/Multiplayer/Replay_2013_08_14_2121.w3g
inflating: replays/Multiplayer/Replay_2013_08_14_2150.w3g
inflating: replays/Multiplayer/Replay_2013_08_15_1957.w3g
inflating: replays/Multiplayer/Replay_2013_08_15_2014.w3g
inflating: replays/Multiplayer/Replay_2013_08_15_2038.w3g
inflating: replays/Multiplayer/Replay_2013_08_15_2052.w3g
inflating: replays/Multiplayer/Replay_2013_08_15_2117.w3g
extracting: replays/Multiplayer/Replay_2013_08_15_2118.w3g
extracting: replays/Multiplayer/Replay_2013_08_16_0010.w3g
inflating: replays/Multiplayer/Replay_2013_08_16_2138.w3g
extracting: replays/Multiplayer/Replay_2013_08_16_2143.w3g
extracting: replays/Multiplayer/Replay_2013_08_16_2147.w3g
inflating: replays/Multiplayer/Replay_2013_08_16_2152.w3g
inflating: replays/Multiplayer/Replay_2013_08_16_2209.w3g
inflating: replays/Multiplayer/Replay_2013_08_16_2218.w3g
extracting: replays/Multiplayer/Replay_2013_08_16_2225.w3g
inflating: replays/Multiplayer/Replay_2013_08_16_2324.w3g
extracting: replays/Multiplayer/Replay_2013_08_16_2327.w3g
inflating: replays/Multiplayer/Replay_2013_08_17_1936.w3g
inflating: replays/Multiplayer/Replay_2013_08_17_1958.w3g
extracting: replays/Multiplayer/Replay_2013_08_17_2002.w3g
inflating: replays/Multiplayer/Replay_2013_08_17_2201.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_1148.w3g
extracting: replays/Multiplayer/Replay_2013_08_18_1149.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_1207.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_1251.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_1331.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_1432.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_1723.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_1728.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_1832.w3g
extracting: replays/Multiplayer/Replay_2013_08_18_1838.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_1852.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_1935.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_2120.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_2153.w3g
inflating: replays/Multiplayer/Replay_2013_08_18_2251.w3g
inflating: replays/Multiplayer/Replay_2013_08_19_2228.w3g
inflating: replays/Multiplayer/Replay_2013_08_20_0108.w3g
inflating: replays/Multiplayer/Replay_2013_08_21_1720.w3g
inflating: replays/Multiplayer/Replay_2013_08_21_1803.w3g
inflating: replays/Multiplayer/Replay_2013_08_21_1851.w3g
inflating: replays/Multiplayer/Replay_2013_08_21_2132.w3g
inflating: replays/Multiplayer/Replay_2013_08_23_1430.w3g
inflating: replays/Multiplayer/Replay_2013_08_23_1523.w3g
inflating: replays/Multiplayer/Replay_2013_08_23_1854.w3g
inflating: replays/Multiplayer/Replay_2013_08_23_1916.w3g
inflating: replays/Multiplayer/Replay_2013_08_23_2003.w3g
inflating: replays/Multiplayer/Replay_2013_08_23_2138.w3g
inflating: replays/Multiplayer/Replay_2013_08_23_2226.w3g
inflating: replays/Multiplayer/Replay_2013_08_24_0936.w3g
inflating: replays/Multiplayer/Replay_2013_08_24_1657.w3g
inflating: replays/Multiplayer/Replay_2013_08_24_2003.w3g
inflating: replays/Multiplayer/Replay_2013_08_24_2059.w3g
inflating: replays/Multiplayer/Replay_2013_08_24_2141.w3g
inflating: replays/Multiplayer/Replay_2013_08_24_2239.w3g
inflating: replays/Multiplayer/Replay_2013_08_24_2252.w3g
inflating: replays/Multiplayer/Replay_2013_08_24_2330.w3g
inflating: replays/Multiplayer/Replay_2013_08_25_1403.w3g
inflating: replays/Multiplayer/Replay_2013_08_25_1421.w3g
inflating: replays/Multiplayer/Replay_2013_08_25_1444.w3g
inflating: replays/Multiplayer/Replay_2013_08_25_1508.w3g
inflating: replays/Multiplayer/Replay_2013_08_25_1535.w3g
inflating: replays/Multiplayer/Replay_2013_08_25_1601.w3g
inflating: replays/Multiplayer/Replay_2013_08_25_1817.w3g
inflating: replays/Multiplayer/Replay_2013_08_25_1848.w3g
inflating: replays/Multiplayer/Replay_2013_08_25_2257.w3g
extracting: replays/Multiplayer/Replay_2013_08_25_2338.w3g
inflating: replays/Multiplayer/Replay_2013_08_25_2353.w3g
inflating: replays/Multiplayer/Replay_2013_08_26_0042.w3g
inflating: replays/Multiplayer/Replay_2013_08_26_1846.w3g
inflating: replays/Multiplayer/Replay_2013_08_27_0927.w3g
inflating: replays/Multiplayer/Replay_2013_08_27_1015.w3g
inflating: replays/Multiplayer/Replay_2013_08_27_1733.w3g
inflating: replays/Multiplayer/Replay_2013_08_29_1150.w3g
inflating: replays/Multiplayer/Replay_2013_08_29_1207.w3g
inflating: replays/Multiplayer/Replay_2013_08_29_1305.w3g
inflating: replays/Multiplayer/Replay_2013_08_29_1940.w3g
inflating: replays/Multiplayer/Replay_2013_08_30_0014.w3g
inflating: replays/Multiplayer/Replay_2013_08_30_0109.w3g
inflating: replays/Multiplayer/Replay_2013_08_30_1049.w3g
inflating: replays/Multiplayer/Replay_2013_08_30_1119.w3g
inflating: replays/Multiplayer/Replay_2013_08_30_1317.w3g
inflating: replays/Multiplayer/Replay_2013_08_30_2027.w3g
inflating: replays/Multiplayer/Replay_2013_08_30_2110.w3g
inflating: replays/Multiplayer/Replay_2013_08_31_1459.w3g
inflating: replays/Multiplayer/Replay_2013_08_31_1710.w3g
inflating: replays/Multiplayer/Replay_2013_08_31_1741.w3g
inflating: replays/Multiplayer/Replay_2013_08_31_1936.w3g
inflating: replays/Multiplayer/Replay_2013_08_31_2008.w3g
inflating: replays/Multiplayer/Replay_2013_08_31_2250.w3g
inflating: replays/Multiplayer/Replay_2013_09_01_1357.w3g
inflating: replays/Multiplayer/Replay_2013_09_01_1723.w3g
inflating: replays/Multiplayer/Replay_2013_09_01_1921.w3g
inflating: replays/Multiplayer/Replay_2013_09_01_2137.w3g
inflating: replays/Multiplayer/Replay_2013_09_01_2201.w3g
inflating: replays/Multiplayer/Replay_2013_09_02_2348.w3g
inflating: replays/Multiplayer/Replay_2013_09_04_1721.w3g
inflating: replays/Multiplayer/Replay_2013_09_05_0016.w3g
inflating: replays/Multiplayer/Replay_2013_09_06_1808.w3g
inflating: replays/Multiplayer/Replay_2013_09_06_2155.w3g
inflating: replays/Multiplayer/Replay_2013_09_06_2301.w3g
inflating: replays/Multiplayer/Replay_2013_09_08_0056.w3g
inflating: replays/Multiplayer/Replay_2013_09_09_1743.w3g
extracting: replays/Multiplayer/Replay_2013_09_10_1435.w3g
inflating: replays/Multiplayer/Replay_2013_09_10_1514.w3g
inflating: replays/Multiplayer/Replay_2013_09_10_1601.w3g
inflating: replays/Multiplayer/Replay_2013_09_10_1637.w3g
inflating: replays/Multiplayer/Replay_2013_09_10_1717.w3g
inflating: replays/Multiplayer/Replay_2013_09_11_1543.w3g
inflating: replays/Multiplayer/Replay_2013_09_11_1829.w3g
inflating: replays/Multiplayer/Replay_2013_09_12_0036.w3g
inflating: replays/Multiplayer/Replay_2013_09_12_1304.w3g
inflating: replays/Multiplayer/Replay_2013_09_13_1721.w3g
inflating: replays/Multiplayer/Replay_2013_09_14_1907.w3g
inflating: replays/Multiplayer/Replay_2013_09_14_2114.w3g
inflating: replays/Multiplayer/Replay_2013_09_15_1515.w3g
inflating: replays/Multiplayer/Replay_2013_09_15_1523.w3g
inflating: replays/Multiplayer/Replay_2013_09_15_1624.w3g
inflating: replays/Multiplayer/Replay_2013_09_16_0143.w3g
inflating: replays/Multiplayer/Replay_2013_09_16_0237.w3g
inflating: replays/Multiplayer/Replay_2013_09_16_1510.w3g
inflating: replays/Multiplayer/Replay_2013_09_16_1732.w3g
inflating: replays/Multiplayer/Replay_2013_09_17_0048.w3g
inflating: replays/Multiplayer/Replay_2013_09_17_1701.w3g
inflating: replays/Multiplayer/Replay_2013_09_18_1247.w3g
inflating: replays/Multiplayer/Replay_2013_09_18_1530.w3g
inflating: replays/Multiplayer/Replay_2013_09_18_1723.w3g
inflating: replays/Multiplayer/Replay_2013_09_19_2034.w3g
inflating: replays/Multiplayer/Replay_2013_09_19_2235.w3g
inflating: replays/Multiplayer/Replay_2013_09_20_2000.w3g
inflating: replays/Multiplayer/Replay_2013_09_20_2050.w3g
inflating: replays/Multiplayer/Replay_2013_09_21_1559.w3g
inflating: replays/Multiplayer/Replay_2013_09_21_1627.w3g
inflating: replays/Multiplayer/Replay_2013_09_21_1820.w3g
inflating: replays/Multiplayer/Replay_2013_09_22_0003.w3g
inflating: replays/Multiplayer/Replay_2013_09_22_0041.w3g
inflating: replays/Multiplayer/Replay_2013_09_22_1828.w3g
inflating: replays/Multiplayer/Replay_2013_09_23_0011.w3g
inflating: replays/Multiplayer/Replay_2013_09_23_1851.w3g
inflating: replays/Multiplayer/Replay_2013_09_23_1927.w3g
inflating: replays/Multiplayer/Replay_2013_09_23_2011.w3g
inflating: replays/Multiplayer/Replay_2013_09_25_1419.w3g
inflating: replays/Multiplayer/Replay_2013_09_25_1512.w3g
inflating: replays/Multiplayer/Replay_2013_09_25_1608.w3g
inflating: replays/Multiplayer/Replay_2013_09_26_0035.w3g
inflating: replays/Multiplayer/Replay_2013_09_26_0145.w3g
extracting: replays/Multiplayer/Replay_2013_09_26_0152.w3g
inflating: replays/Multiplayer/Replay_2013_09_26_1219.w3g
inflating: replays/Multiplayer/Replay_2013_09_26_1229.w3g
inflating: replays/Multiplayer/Replay_2013_09_26_1313.w3g
inflating: replays/Multiplayer/Replay_2013_09_27_1209.w3g
inflating: replays/Multiplayer/Replay_2013_09_28_0039.w3g
inflating: replays/Multiplayer/Replay_2013_09_28_1338.w3g
inflating: replays/Multiplayer/Replay_2013_09_28_1635.w3g
inflating: replays/Multiplayer/Replay_2013_09_28_2056.w3g
inflating: replays/Multiplayer/Replay_2013_09_29_0137.w3g
inflating: replays/Multiplayer/Replay_2013_09_30_1530.w3g
inflating: replays/Multiplayer/Replay_2013_09_30_1608.w3g
inflating: replays/Multiplayer/Replay_2013_10_01_1603.w3g
inflating: replays/Multiplayer/Replay_2013_10_01_1637.w3g
inflating: replays/Multiplayer/Replay_2013_10_01_1739.w3g
inflating: replays/Multiplayer/Replay_2013_10_01_2235.w3g
inflating: replays/Multiplayer/Replay_2013_10_02_1316.w3g
inflating: replays/Multiplayer/Replay_2013_10_02_1344.w3g
inflating: replays/Multiplayer/Replay_2013_10_02_1416.w3g
inflating: replays/Multiplayer/Replay_2013_10_02_1453.w3g
inflating: replays/Multiplayer/Replay_2013_10_02_2338.w3g
inflating: replays/Multiplayer/Replay_2013_10_03_0041.w3g
inflating: replays/Multiplayer/Replay_2013_10_03_1015.w3g
inflating: replays/Multiplayer/Replay_2013_10_03_2102.w3g
inflating: replays/Multiplayer/Replay_2013_10_03_2150.w3g
inflating: replays/Multiplayer/Replay_2013_10_04_0022.w3g
extracting: replays/Multiplayer/Replay_2013_10_04_2207.w3g
inflating: replays/Multiplayer/Replay_2013_10_04_2224.w3g
inflating: replays/Multiplayer/Replay_2013_10_04_2251.w3g
inflating: replays/Multiplayer/Replay_2013_10_06_0038.w3g
inflating: replays/Multiplayer/Replay_2013_10_06_0122.w3g
inflating: replays/Multiplayer/Replay_2013_10_06_0213.w3g
inflating: replays/Multiplayer/Replay_2013_10_07_1650.w3g
inflating: replays/Multiplayer/Replay_2013_10_08_2056.w3g
inflating: replays/Multiplayer/Replay_2013_10_08_2126.w3g
inflating: replays/Multiplayer/Replay_2013_10_08_2158.w3g
inflating: replays/Multiplayer/Replay_2013_10_08_2244.w3g
inflating: replays/Multiplayer/Replay_2013_10_09_1431.w3g
inflating: replays/Multiplayer/Replay_2013_10_09_1513.w3g
inflating: replays/Multiplayer/Replay_2013_10_09_1609.w3g
inflating: replays/Multiplayer/Replay_2013_10_09_1645.w3g
inflating: replays/Multiplayer/Replay_2013_10_09_2113.w3g
inflating: replays/Multiplayer/Replay_2013_10_09_2213.w3g
inflating: replays/Multiplayer/Replay_2013_10_11_1501.w3g
inflating: replays/Multiplayer/Replay_2013_10_11_1638.w3g
inflating: replays/Multiplayer/Replay_2013_10_11_1724.w3g
inflating: replays/Multiplayer/Replay_2013_10_11_1804.w3g
inflating: replays/Multiplayer/Replay_2013_10_11_2317.w3g
inflating: replays/Multiplayer/Replay_2013_10_14_1646.w3g
inflating: replays/Multiplayer/Replay_2013_10_14_1831.w3g
inflating: replays/Multiplayer/Replay_2013_10_14_1926.w3g
inflating: replays/Multiplayer/Replay_2013_10_15_1855.w3g
inflating: replays/Multiplayer/Replay_2013_10_16_0132.w3g
inflating: replays/Multiplayer/Replay_2013_10_16_1002.w3g
inflating: replays/Multiplayer/Replay_2013_10_16_1115.w3g
inflating: replays/Multiplayer/Replay_2013_10_16_1313.w3g
inflating: replays/Multiplayer/Replay_2013_10_16_1402.w3g
inflating: replays/Multiplayer/Replay_2013_10_17_1945.w3g
inflating: replays/Multiplayer/Replay_2013_10_18_1532.w3g
inflating: replays/Multiplayer/Replay_2013_10_18_1629.w3g
inflating: replays/Multiplayer/Replay_2013_10_18_1730.w3g
inflating: replays/Multiplayer/Replay_2013_10_18_1807.w3g
inflating: replays/Multiplayer/Replay_2013_10_18_1903.w3g
inflating: replays/Multiplayer/Replay_2013_10_18_2233.w3g
inflating: replays/Multiplayer/Replay_2013_10_19_1318.w3g
inflating: replays/Multiplayer/Replay_2013_10_20_0144.w3g
inflating: replays/Multiplayer/Replay_2013_10_20_0228.w3g
inflating: replays/Multiplayer/Replay_2013_10_20_1729.w3g
inflating: replays/Multiplayer/Replay_2013_10_21_1535.w3g
inflating: replays/Multiplayer/Replay_2013_10_22_0231.w3g
inflating: replays/Multiplayer/Replay_2013_10_22_1637.w3g
inflating: replays/Multiplayer/Replay_2013_10_23_1503.w3g
inflating: replays/Multiplayer/Replay_2013_10_23_1551.w3g
inflating: replays/Multiplayer/Replay_2013_10_23_1624.w3g
inflating: replays/Multiplayer/Replay_2013_10_24_2240.w3g
inflating: replays/Multiplayer/Replay_2013_10_24_2312.w3g
inflating: replays/Multiplayer/Replay_2013_10_25_1509.w3g
inflating: replays/Multiplayer/Replay_2013_10_26_2207.w3g
inflating: replays/Multiplayer/Replay_2013_10_26_2244.w3g
inflating: replays/Multiplayer/Replay_2013_10_26_2329.w3g
inflating: replays/Multiplayer/Replay_2013_10_27_0105.w3g
inflating: replays/Multiplayer/Replay_2013_10_28_0106.w3g
inflating: replays/Multiplayer/Replay_2013_10_28_0109.w3g
inflating: replays/Multiplayer/Replay_2013_10_28_0150.w3g
inflating: replays/Multiplayer/Replay_2013_10_28_0230.w3g
inflating: replays/Multiplayer/Replay_2013_10_28_1516.w3g
inflating: replays/Multiplayer/Replay_2013_10_28_1608.w3g
inflating: replays/Multiplayer/Replay_2013_10_28_1700.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_0025.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_0113.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_0135.w3g
extracting: replays/Multiplayer/Replay_2013_10_29_1024.w3g
extracting: replays/Multiplayer/Replay_2013_10_29_1026.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_1040.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_1050.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_1056.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_1107.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_1122.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_1532.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_1617.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_2032.w3g
extracting: replays/Multiplayer/Replay_2013_10_29_2034.w3g
extracting: replays/Multiplayer/Replay_2013_10_29_2038.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_2102.w3g
inflating: replays/Multiplayer/Replay_2013_10_29_2107.w3g
inflating: replays/Multiplayer/Replay_2013_10_30_0105.w3g
extracting: replays/Multiplayer/Replay_2013_10_30_0114.w3g
inflating: replays/Multiplayer/Replay_2013_10_30_0137.w3g
inflating: replays/Multiplayer/Replay_2013_10_31_0150.w3g
inflating: replays/Multiplayer/Replay_2013_10_31_1141.w3g
inflating: replays/Multiplayer/Replay_2013_10_31_1209.w3g
inflating: replays/Multiplayer/Replay_2013_10_31_1238.w3g
inflating: replays/Multiplayer/Replay_2013_11_01_1757.w3g
inflating: replays/Multiplayer/Replay_2013_11_01_1815.w3g
inflating: replays/Multiplayer/Replay_2013_11_03_0113.w3g
inflating: replays/Multiplayer/Replay_2013_11_03_1950.w3g
inflating: replays/Multiplayer/Replay_2013_11_06_1720.w3g
inflating: replays/Multiplayer/Replay_2013_11_06_1800.w3g
inflating: replays/Multiplayer/Replay_2013_11_06_1857.w3g
inflating: replays/Multiplayer/Replay_2013_11_08_1213.w3g
inflating: replays/Multiplayer/Replay_2013_11_08_1342.w3g
inflating: replays/Multiplayer/Replay_2013_11_09_0020.w3g
inflating: replays/Multiplayer/Replay_2013_11_09_0112.w3g
inflating: replays/Multiplayer/Replay_2013_11_09_0152.w3g
inflating: replays/Multiplayer/Replay_2013_11_09_1429.w3g
inflating: replays/Multiplayer/Replay_2013_11_09_1529.w3g
inflating: replays/Multiplayer/Replay_2013_11_10_2132.w3g
inflating: replays/Multiplayer/Replay_2013_11_10_2221.w3g
inflating: replays/Multiplayer/Replay_2013_11_10_2306.w3g
inflating: replays/Multiplayer/Replay_2013_11_11_0004.w3g
inflating: replays/Multiplayer/Replay_2013_11_11_1533.w3g
inflating: replays/Multiplayer/Replay_2013_11_12_1609.w3g
inflating: replays/Multiplayer/Replay_2013_11_12_1659.w3g
inflating: replays/Multiplayer/Replay_2013_11_12_1736.w3g
inflating: replays/Multiplayer/Replay_2013_11_12_1837.w3g
inflating: replays/Multiplayer/Replay_2013_11_14_1851.w3g
inflating: replays/Multiplayer/Replay_2013_11_15_2349.w3g
inflating: replays/Multiplayer/Replay_2013_11_16_0024.w3g
inflating: replays/Multiplayer/Replay_2013_11_16_1418.w3g
inflating: replays/Multiplayer/Replay_2013_11_16_1500.w3g
inflating: replays/Multiplayer/Replay_2013_11_16_1547.w3g
extracting: replays/Multiplayer/Replay_2013_11_17_1419.w3g
inflating: replays/Multiplayer/Replay_2013_11_18_1540.w3g
inflating: replays/Multiplayer/Replay_2013_11_18_1617.w3g
inflating: replays/Multiplayer/Replay_2013_11_18_1707.w3g
inflating: replays/Multiplayer/Replay_2013_11_18_1741.w3g
inflating: replays/Multiplayer/Replay_2013_11_18_2253.w3g
inflating: replays/Multiplayer/Replay_2013_11_18_2332.w3g
inflating: replays/Multiplayer/Replay_2013_11_19_0031.w3g
inflating: replays/Multiplayer/Replay_2013_11_19_1539.w3g
inflating: replays/Multiplayer/Replay_2013_11_19_1852.w3g
inflating: replays/Multiplayer/Replay_2013_11_19_2211.w3g
inflating: replays/Multiplayer/Replay_2013_11_19_2257.w3g
inflating: replays/Multiplayer/Replay_2013_11_19_2346.w3g
inflating: replays/Multiplayer/Replay_2013_11_20_1222.w3g
inflating: replays/Multiplayer/Replay_2013_11_21_0118.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_1211.w3g
extracting: replays/Multiplayer/Replay_2013_11_22_1220.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_1249.w3g
extracting: replays/Multiplayer/Replay_2013_11_22_1251.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_1257.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_1304.w3g
extracting: replays/Multiplayer/Replay_2013_11_22_1313.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_1333.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_1337.w3g
extracting: replays/Multiplayer/Replay_2013_11_22_1401.w3g
extracting: replays/Multiplayer/Replay_2013_11_22_1406.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_1422.w3g
extracting: replays/Multiplayer/Replay_2013_11_22_1426.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_1427.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_1450.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_1506.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_1520.w3g
inflating: replays/Multiplayer/Replay_2013_11_22_2033.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_0146.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1058.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1138.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1240.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1606.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1631.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1728.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1802.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1809.w3g
extracting: replays/Multiplayer/Replay_2013_11_23_1810.w3g
extracting: replays/Multiplayer/Replay_2013_11_23_1813.w3g
extracting: replays/Multiplayer/Replay_2013_11_23_1819.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1835.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1838.w3g
extracting: replays/Multiplayer/Replay_2013_11_23_1840.w3g
extracting: replays/Multiplayer/Replay_2013_11_23_1843.w3g
extracting: replays/Multiplayer/Replay_2013_11_23_1845.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1847.w3g
extracting: replays/Multiplayer/Replay_2013_11_23_1851.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1854.w3g
inflating: replays/Multiplayer/Replay_2013_11_23_1946.w3g
inflating: replays/Multiplayer/Replay_2013_11_24_1548.w3g
inflating: replays/Multiplayer/Replay_2013_11_24_1630.w3g
inflating: replays/Multiplayer/Replay_2013_11_24_1902.w3g
inflating: replays/Multiplayer/Replay_2013_11_24_1940.w3g
inflating: replays/Multiplayer/Replay_2013_11_24_2025.w3g
inflating: replays/Multiplayer/Replay_2013_11_24_2108.w3g
inflating: replays/Multiplayer/Replay_2013_11_25_0022.w3g
inflating: replays/Multiplayer/Replay_2013_11_25_0103.w3g
inflating: replays/Multiplayer/Replay_2013_11_25_1527.w3g
inflating: replays/Multiplayer/Replay_2013_11_27_0144.w3g
inflating: replays/Multiplayer/Replay_2013_11_27_0221.w3g
inflating: replays/Multiplayer/Replay_2013_11_27_2050.w3g
inflating: replays/Multiplayer/Replay_2013_11_27_2330.w3g
inflating: replays/Multiplayer/Replay_2013_11_27_2347.w3g
inflating: replays/Multiplayer/Replay_2013_11_28_0017.w3g
inflating: replays/Multiplayer/Replay_2013_11_28_1102.w3g
inflating: replays/Multiplayer/Replay_2013_11_28_1135.w3g
inflating: replays/Multiplayer/Replay_2013_11_28_1216.w3g
inflating: replays/Multiplayer/Replay_2013_11_28_1307.w3g
inflating: replays/Multiplayer/Replay_2013_11_29_2009.w3g
inflating: replays/Multiplayer/Replay_2013_11_29_2103.w3g
inflating: replays/Multiplayer/Replay_2013_11_29_2321.w3g
inflating: replays/Multiplayer/Replay_2013_11_30_0001.w3g
inflating: replays/Multiplayer/Replay_2013_11_30_0038.w3g
inflating: replays/Multiplayer/Replay_2013_11_30_0959.w3g
inflating: replays/Multiplayer/Replay_2013_11_30_1016.w3g
inflating: replays/Multiplayer/Replay_2013_11_30_1615.w3g
inflating: replays/Multiplayer/Replay_2013_11_30_1956.w3g
inflating: replays/Multiplayer/Replay_2013_11_30_2056.w3g
inflating: replays/Multiplayer/Replay_2013_11_30_2255.w3g
inflating: replays/Multiplayer/Replay_2013_11_30_2335.w3g
inflating: replays/Multiplayer/Replay_2013_12_01_0021.w3g
inflating: replays/Multiplayer/Replay_2013_12_01_0111.w3g
inflating: replays/Multiplayer/Replay_2013_12_01_1554.w3g
inflating: replays/Multiplayer/Replay_2013_12_01_2122.w3g
inflating: replays/Multiplayer/Replay_2013_12_01_2203.w3g
inflating: replays/Multiplayer/Replay_2013_12_01_2245.w3g
inflating: replays/Multiplayer/Replay_2013_12_01_2315.w3g
inflating: replays/Multiplayer/Replay_2013_12_02_0005.w3g
extracting: replays/Multiplayer/Replay_2013_12_02_1151.w3g
inflating: replays/Multiplayer/Replay_2013_12_02_1206.w3g
inflating: replays/Multiplayer/Replay_2013_12_02_1209.w3g
inflating: replays/Multiplayer/Replay_2013_12_02_1244.w3g
extracting: replays/Multiplayer/Replay_2013_12_02_1246.w3g
inflating: replays/Multiplayer/Replay_2013_12_02_1748.w3g
inflating: replays/Multiplayer/Replay_2013_12_02_1840.w3g
inflating: replays/Multiplayer/Replay_2013_12_02_1930.w3g
inflating: replays/Multiplayer/Replay_2013_12_02_2300.w3g
inflating: replays/Multiplayer/Replay_2013_12_02_2352.w3g
inflating: replays/Multiplayer/Replay_2013_12_03_1332.w3g
inflating: replays/Multiplayer/Replay_2013_12_03_1402.w3g
inflating: replays/Multiplayer/Replay_2013_12_03_1908.w3g
inflating: replays/Multiplayer/Replay_2013_12_03_2050.w3g
inflating: replays/Multiplayer/Replay_2013_12_03_2101.w3g
inflating: replays/Multiplayer/Replay_2013_12_03_2206.w3g
inflating: replays/Multiplayer/Replay_2013_12_04_0008.w3g
inflating: replays/Multiplayer/Replay_2013_12_06_1858.w3g
inflating: replays/Multiplayer/Replay_2013_12_06_1935.w3g
inflating: replays/Multiplayer/Replay_2013_12_07_2214.w3g
inflating: replays/Multiplayer/Replay_2013_12_07_2305.w3g
inflating: replays/Multiplayer/Replay_2013_12_08_0011.w3g
inflating: replays/Multiplayer/Replay_2013_12_08_0124.w3g
inflating: replays/Multiplayer/Replay_2013_12_08_0217.w3g
inflating: replays/Multiplayer/Replay_2013_12_09_1502.w3g
inflating: replays/Multiplayer/Replay_2013_12_09_1547.w3g
inflating: replays/Multiplayer/Replay_2013_12_10_0112.w3g
inflating: replays/Multiplayer/Replay_2013_12_10_0200.w3g
inflating: replays/Multiplayer/Replay_2013_12_10_0223.w3g
inflating: replays/Multiplayer/Replay_2013_12_10_1613.w3g
inflating: replays/Multiplayer/Replay_2013_12_12_2101.w3g
extracting: replays/Multiplayer/Replay_2013_12_13_1124.w3g
inflating: replays/Multiplayer/Replay_2013_12_13_1241.w3g
inflating: replays/Multiplayer/Replay_2013_12_14_2202.w3g
inflating: replays/Multiplayer/Replay_2013_12_14_2221.w3g
inflating: replays/Multiplayer/Replay_2013_12_15_1308.w3g
inflating: replays/Multiplayer/Replay_2013_12_15_1358.w3g
inflating: replays/Multiplayer/Replay_2013_12_16_1432.w3g
extracting: replays/Multiplayer/Replay_2013_12_16_1433.w3g
inflating: replays/Multiplayer/Replay_2013_12_16_1508.w3g
inflating: replays/Multiplayer/Replay_2013_12_17_1759.w3g
inflating: replays/Multiplayer/Replay_2013_12_18_1905.w3g
inflating: replays/Multiplayer/Replay_2013_12_18_1951.w3g
inflating: replays/Multiplayer/Replay_2013_12_19_1907.w3g
inflating: replays/Multiplayer/Replay_2013_12_19_2004.w3g
inflating: replays/Multiplayer/Replay_2013_12_19_2243.w3g
inflating: replays/Multiplayer/Replay_2013_12_19_2331.w3g
inflating: replays/Multiplayer/Replay_2013_12_20_0035.w3g
inflating: replays/Multiplayer/Replay_2013_12_20_2004.w3g
inflating: replays/Multiplayer/Replay_2013_12_20_2239.w3g
inflating: replays/Multiplayer/Replay_2013_12_20_2319.w3g
inflating: replays/Multiplayer/Replay_2013_12_21_0010.w3g
inflating: replays/Multiplayer/Replay_2013_12_21_0116.w3g
inflating: replays/Multiplayer/Replay_2013_12_21_1402.w3g
inflating: replays/Multiplayer/Replay_2013_12_21_1430.w3g
inflating: replays/Multiplayer/Replay_2013_12_21_1535.w3g
inflating: replays/Multiplayer/Replay_2013_12_21_2209.w3g
inflating: replays/Multiplayer/Replay_2013_12_21_2217.w3g
inflating: replays/Multiplayer/Replay_2013_12_23_1947.w3g
inflating: replays/Multiplayer/Replay_2013_12_23_2018.w3g
inflating: replays/Multiplayer/Replay_2013_12_23_2052.w3g
inflating: replays/Multiplayer/Replay_2013_12_23_2128.w3g
inflating: replays/Multiplayer/Replay_2013_12_23_2258.w3g
inflating: replays/Multiplayer/Replay_2013_12_23_2339.w3g
inflating: replays/Multiplayer/Replay_2013_12_24_0018.w3g
inflating: replays/Multiplayer/Replay_2013_12_24_1559.w3g
inflating: replays/Multiplayer/Replay_2013_12_24_1629.w3g
inflating: replays/Multiplayer/Replay_2013_12_24_1753.w3g
inflating: replays/Multiplayer/Replay_2013_12_24_2223.w3g
inflating: replays/Multiplayer/Replay_2013_12_24_2256.w3g
inflating: replays/Multiplayer/Replay_2013_12_25_0014.w3g
inflating: replays/Multiplayer/Replay_2013_12_25_0052.w3g
inflating: replays/Multiplayer/Replay_2013_12_25_1013.w3g
inflating: replays/Multiplayer/Replay_2013_12_25_1047.w3g
inflating: replays/Multiplayer/Replay_2013_12_25_1126.w3g
inflating: replays/Multiplayer/Replay_2013_12_25_1510.w3g
inflating: replays/Multiplayer/Replay_2013_12_25_1548.w3g
inflating: replays/Multiplayer/Replay_2013_12_25_1628.w3g
inflating: replays/Multiplayer/Replay_2013_12_26_0008.w3g
inflating: replays/Multiplayer/Replay_2013_12_26_0034.w3g
inflating: replays/Multiplayer/Replay_2013_12_26_0126.w3g
inflating: replays/Multiplayer/Replay_2013_12_26_1450.w3g
inflating: replays/Multiplayer/Replay_2013_12_26_1457.w3g
inflating: replays/Multiplayer/Replay_2013_12_26_1539.w3g
inflating: replays/Multiplayer/Replay_2013_12_26_1756.w3g
inflating: replays/Multiplayer/Replay_2013_12_26_1838.w3g
inflating: replays/Multiplayer/Replay_2013_12_26_1906.w3g
inflating: replays/Multiplayer/Replay_2013_12_27_2004.w3g
inflating: replays/Multiplayer/Replay_2013_12_27_2114.w3g
inflating: replays/Multiplayer/Replay_2013_12_28_1037.w3g
extracting: replays/Multiplayer/Replay_2013_12_31_1520.w3g
inflating: replays/Multiplayer/Replay_2014_01_09_2317.w3g
inflating: replays/Multiplayer/Replay_2014_01_10_0002.w3g
inflating: replays/Multiplayer/Replay_2014_01_10_1833.w3g
inflating: replays/Multiplayer/Replay_2014_01_10_1931.w3g
inflating: replays/Multiplayer/Replay_2014_01_10_2019.w3g
inflating: replays/Multiplayer/Replay_2014_01_11_0047.w3g
inflating: replays/Multiplayer/Replay_2014_01_11_0151.w3g
inflating: replays/Multiplayer/Replay_2014_01_12_1904.w3g
inflating: replays/Multiplayer/Replay_2014_01_12_1941.w3g
inflating: replays/Multiplayer/Replay_2014_01_12_2019.w3g
inflating: replays/Multiplayer/Replay_2014_01_12_2057.w3g
inflating: replays/Multiplayer/Replay_2014_01_13_1123.w3g
inflating: replays/Multiplayer/Replay_2014_01_13_1159.w3g
inflating: replays/Multiplayer/Replay_2014_01_13_1254.w3g
inflating: replays/Multiplayer/Replay_2014_01_16_2116.w3g
inflating: replays/Multiplayer/Replay_2014_01_16_2217.w3g
inflating: replays/Multiplayer/Replay_2014_01_16_2255.w3g
inflating: replays/Multiplayer/Replay_2014_01_17_1816.w3g
inflating: replays/Multiplayer/Replay_2014_01_17_1937.w3g
inflating: replays/Multiplayer/Replay_2014_01_21_2255.w3g
inflating: replays/Multiplayer/Replay_2014_01_21_2355.w3g
inflating: replays/Multiplayer/Replay_2014_01_22_0103.w3g
inflating: replays/Multiplayer/Replay_2014_01_27_0015.w3g
inflating: replays/Multiplayer/Replay_2014_01_27_0036.w3g
inflating: replays/Multiplayer/Replay_2014_01_27_0128.w3g
inflating: replays/Multiplayer/Replay_2014_01_29_2138.w3g
inflating: replays/Multiplayer/Replay_2014_01_29_2244.w3g
inflating: replays/Multiplayer/Replay_2014_01_29_2343.w3g
inflating: replays/Multiplayer/Replay_2014_01_30_0014.w3g
inflating: replays/Multiplayer/Replay_2014_01_30_2316.w3g
inflating: replays/Multiplayer/Replay_2014_01_30_2356.w3g
inflating: replays/Multiplayer/Replay_2014_01_31_0035.w3g
inflating: replays/Multiplayer/Replay_2014_01_31_2055.w3g
inflating: replays/Multiplayer/Replay_2014_01_31_2207.w3g
inflating: replays/Multiplayer/Replay_2014_01_31_2238.w3g
inflating: replays/Multiplayer/Replay_2014_01_31_2334.w3g
inflating: replays/Multiplayer/Replay_2014_02_01_0002.w3g
extracting: replays/Multiplayer/Replay_2014_02_02_2300.w3g
inflating: replays/Multiplayer/Replay_2014_02_02_2342.w3g
inflating: replays/Multiplayer/Replay_2014_02_03_0055.w3g
inflating: replays/Multiplayer/Replay_2014_02_03_0128.w3g
inflating: replays/Multiplayer/Replay_2014_02_03_0203.w3g
inflating: replays/Multiplayer/Replay_2014_02_03_0236.w3g
inflating: replays/Multiplayer/Replay_2014_02_03_2249.w3g
inflating: replays/Multiplayer/Replay_2014_02_03_2301.w3g
inflating: replays/Multiplayer/Replay_2014_02_04_0020.w3g
inflating: replays/Multiplayer/Replay_2014_02_04_1823.w3g
inflating: replays/Multiplayer/Replay_2014_02_04_1929.w3g
inflating: replays/Multiplayer/Replay_2014_02_04_2016.w3g
inflating: replays/Multiplayer/Replay_2014_02_04_2112.w3g
inflating: replays/Multiplayer/Replay_2014_02_04_2231.w3g
inflating: replays/Multiplayer/Replay_2014_02_04_2314.w3g
inflating: replays/Multiplayer/Replay_2014_02_05_0021.w3g
inflating: replays/Multiplayer/Replay_2014_02_05_2325.w3g
inflating: replays/Multiplayer/Replay_2014_02_06_0018.w3g
inflating: replays/Multiplayer/Replay_2014_02_06_2229.w3g
inflating: replays/Multiplayer/Replay_2014_02_06_2316.w3g
inflating: replays/Multiplayer/Replay_2014_02_07_0002.w3g
inflating: replays/Multiplayer/Replay_2014_02_07_1903.w3g
inflating: replays/Multiplayer/Replay_2014_02_07_2001.w3g
inflating: replays/Multiplayer/Replay_2014_02_07_2056.w3g
inflating: replays/Multiplayer/Replay_2014_02_07_2204.w3g
inflating: replays/Multiplayer/Replay_2014_02_07_2245.w3g
extracting: replays/Multiplayer/Replay_2014_02_07_2249.w3g
inflating: replays/Multiplayer/Replay_2014_02_08_1928.w3g
inflating: replays/Multiplayer/Replay_2014_02_08_2024.w3g
inflating: replays/Multiplayer/Replay_2014_02_09_1618.w3g
inflating: replays/Multiplayer/Replay_2014_02_09_1703.w3g
inflating: replays/Multiplayer/Replay_2014_02_10_2255.w3g
inflating: replays/Multiplayer/Replay_2014_02_10_2352.w3g
inflating: replays/Multiplayer/Replay_2014_03_12_2047.w3g
inflating: replays/Multiplayer/Replay_2014_03_13_2139.w3g
extracting: replays/Multiplayer/Replay_2014_03_14_1717.w3g
inflating: replays/Multiplayer/Replay_2014_03_14_1850.w3g
inflating: replays/Multiplayer/Replay_2014_03_14_1928.w3g
inflating: replays/Multiplayer/Replay_2014_03_15_1651.w3g
inflating: replays/Multiplayer/Replay_2014_03_16_2150.w3g
extracting: replays/Multiplayer/Replay_2014_03_16_2227.w3g
inflating: replays/Multiplayer/Replay_2014_03_16_2310.w3g
inflating: replays/Multiplayer/Replay_2014_03_17_0038.w3g
inflating: replays/Multiplayer/Replay_2014_03_18_2341.w3g
inflating: replays/Multiplayer/Replay_2014_03_19_0055.w3g
extracting: replays/Multiplayer/Replay_2014_11_02_1221.w3g
extracting: replays/Multiplayer/Replay_2014_11_02_1226.w3g
inflating: replays/Multiplayer/Replay_2016_01_13_2311.w3g
inflating: replays/Multiplayer/Replay_2016_01_13_2351.w3g
inflating: replays/Multiplayer/Replay_2016_01_14_2359.w3g
extracting: replays/Multiplayer/Replay_2016_01_15_0005.w3g
extracting: replays/Multiplayer/Replay_2016_07_07_1913.w3g
extracting: replays/Multiplayer/Replay_2016_07_07_1917.w3g
inflating: replays/Multiplayer/Replay_2016_07_07_1953.w3g
inflating: replays/Multiplayer/Replay_2016_07_07_2023.w3g
inflating: replays/Multiplayer/Replay_2016_07_07_2034.w3g
extracting: replays/Multiplayer/Replay_2016_07_07_2036.w3g
inflating: replays/Multiplayer/Replay_2016_07_08_1038.w3g
extracting: replays/Multiplayer/Replay_2017_01_10_2340.w3g
extracting: replays/Multiplayer/Replay_2017_01_10_2342.w3g
extracting: replays/Multiplayer/Replay_2017_01_10_2347.w3g
inflating: replays/Multiplayer/Replay_2017_01_11_2228.w3g
extracting: replays/Multiplayer/Replay_2017_01_11_2233.w3g
inflating: replays/Multiplayer/Replay_2017_01_11_2347.w3g
###Markdown
Let's check the size of the `replays/Multiplayer` folder with the command
```unix
!du -lsh replays/Multiplayer
```
###Code
!du -lsh replays/Multiplayer
###Output
515M replays/Multiplayer
###Markdown
Unzipping the last file, `Multiplayer17.zip`.
###Code
#collapse
!unzip replays/Multiplayer17.zip -d replays/
###Output
Archive: replays/Multiplayer17.zip
inflating: replays/Multiplayer/Replay_2014_03_23_1748.w3g
inflating: replays/Multiplayer/Replay_2014_03_23_1843.w3g
inflating: replays/Multiplayer/Replay_2014_03_23_1934.w3g
inflating: replays/Multiplayer/Replay_2014_03_23_2018.w3g
inflating: replays/Multiplayer/Replay_2014_03_24_1812.w3g
inflating: replays/Multiplayer/Replay_2014_03_24_1904.w3g
inflating: replays/Multiplayer/Replay_2014_03_24_1951.w3g
inflating: replays/Multiplayer/Replay_2014_03_25_1955.w3g
inflating: replays/Multiplayer/Replay_2014_03_25_2044.w3g
inflating: replays/Multiplayer/Replay_2014_03_25_2123.w3g
inflating: replays/Multiplayer/Replay_2014_03_25_2205.w3g
inflating: replays/Multiplayer/Replay_2014_03_31_1721.w3g
inflating: replays/Multiplayer/Replay_2014_03_31_1816.w3g
inflating: replays/Multiplayer/Replay_2014_03_31_1908.w3g
inflating: replays/Multiplayer/Replay_2014_03_31_1951.w3g
inflating: replays/Multiplayer/Replay_2014_03_31_2046.w3g
inflating: replays/Multiplayer/Replay_2014_04_01_1717.w3g
inflating: replays/Multiplayer/Replay_2014_04_01_1803.w3g
inflating: replays/Multiplayer/Replay_2014_04_01_1846.w3g
inflating: replays/Multiplayer/Replay_2014_04_02_2007.w3g
inflating: replays/Multiplayer/Replay_2014_04_02_2117.w3g
inflating: replays/Multiplayer/Replay_2014_04_03_1624.w3g
inflating: replays/Multiplayer/Replay_2014_04_03_1743.w3g
inflating: replays/Multiplayer/Replay_2014_04_03_1825.w3g
inflating: replays/Multiplayer/Replay_2014_04_03_2247.w3g
inflating: replays/Multiplayer/Replay_2014_04_18_1102.w3g
inflating: replays/Multiplayer/Replay_2014_04_18_1208.w3g
inflating: replays/Multiplayer/Replay_2014_04_20_1948.w3g
inflating: replays/Multiplayer/Replay_2014_04_20_2033.w3g
inflating: replays/Multiplayer/Replay_2014_04_20_2124.w3g
inflating: replays/Multiplayer/Replay_2014_04_20_2215.w3g
inflating: replays/Multiplayer/Replay_2014_04_21_2022.w3g
extracting: replays/Multiplayer/Replay_2014_04_21_2030.w3g
inflating: replays/Multiplayer/Replay_2014_04_21_2142.w3g
inflating: replays/Multiplayer/Replay_2014_04_22_0033.w3g
inflating: replays/Multiplayer/Replay_2014_04_25_1612.w3g
inflating: replays/Multiplayer/Replay_2014_04_25_1714.w3g
inflating: replays/Multiplayer/Replay_2014_04_25_1750.w3g
inflating: replays/Multiplayer/Replay_2014_04_28_1132.w3g
inflating: replays/Multiplayer/Replay_2014_04_28_1643.w3g
inflating: replays/Multiplayer/Replay_2014_04_28_1700.w3g
inflating: replays/Multiplayer/Replay_2014_04_28_1757.w3g
inflating: replays/Multiplayer/Replay_2014_04_28_1906.w3g
inflating: replays/Multiplayer/Replay_2014_04_28_1948.w3g
inflating: replays/Multiplayer/Replay_2014_04_30_1246.w3g
inflating: replays/Multiplayer/Replay_2014_04_30_2126.w3g
inflating: replays/Multiplayer/Replay_2014_04_30_2218.w3g
inflating: replays/Multiplayer/Replay_2014_05_01_0003.w3g
inflating: replays/Multiplayer/Replay_2014_05_01_1707.w3g
inflating: replays/Multiplayer/Replay_2014_05_01_1907.w3g
inflating: replays/Multiplayer/Replay_2014_05_01_2107.w3g
inflating: replays/Multiplayer/Replay_2014_05_01_2333.w3g
inflating: replays/Multiplayer/Replay_2014_05_02_1335.w3g
inflating: replays/Multiplayer/Replay_2014_05_02_1755.w3g
inflating: replays/Multiplayer/Replay_2014_05_02_2053.w3g
inflating: replays/Multiplayer/Replay_2014_05_02_2145.w3g
inflating: replays/Multiplayer/Replay_2014_05_02_2230.w3g
inflating: replays/Multiplayer/Replay_2014_05_03_1848.w3g
inflating: replays/Multiplayer/Replay_2014_05_03_1928.w3g
inflating: replays/Multiplayer/Replay_2014_05_03_2121.w3g
inflating: replays/Multiplayer/Replay_2014_05_03_2201.w3g
inflating: replays/Multiplayer/Replay_2014_05_04_0005.w3g
inflating: replays/Multiplayer/Replay_2014_05_04_1618.w3g
inflating: replays/Multiplayer/Replay_2014_05_04_1702.w3g
inflating: replays/Multiplayer/Replay_2014_05_04_1739.w3g
inflating: replays/Multiplayer/Replay_2014_05_04_2351.w3g
extracting: replays/Multiplayer/Replay_2014_05_04_2357.w3g
inflating: replays/Multiplayer/Replay_2014_05_05_0039.w3g
inflating: replays/Multiplayer/Replay_2014_05_05_1150.w3g
inflating: replays/Multiplayer/Replay_2014_05_05_1229.w3g
inflating: replays/Multiplayer/Replay_2014_05_05_1308.w3g
inflating: replays/Multiplayer/Replay_2014_05_06_1705.w3g
inflating: replays/Multiplayer/Replay_2014_05_06_1753.w3g
inflating: replays/Multiplayer/Replay_2014_05_06_1905.w3g
inflating: replays/Multiplayer/Replay_2014_05_06_2050.w3g
inflating: replays/Multiplayer/Replay_2014_05_06_2136.w3g
extracting: replays/Multiplayer/Replay_2014_05_07_1754.w3g
inflating: replays/Multiplayer/Replay_2014_05_07_1828.w3g
inflating: replays/Multiplayer/Replay_2014_05_07_1922.w3g
inflating: replays/Multiplayer/Replay_2014_05_09_1720.w3g
inflating: replays/Multiplayer/Replay_2014_05_11_1447.w3g
inflating: replays/Multiplayer/Replay_2014_05_11_1537.w3g
inflating: replays/Multiplayer/Replay_2014_05_12_2143.w3g
inflating: replays/Multiplayer/Replay_2014_05_13_2347.w3g
inflating: replays/Multiplayer/Replay_2014_05_17_1404.w3g
inflating: replays/Multiplayer/Replay_2014_05_17_1408.w3g
inflating: replays/Multiplayer/Replay_2014_05_17_1437.w3g
inflating: replays/Multiplayer/Replay_2014_05_17_1636.w3g
inflating: replays/Multiplayer/Replay_2014_05_17_1713.w3g
inflating: replays/Multiplayer/Replay_2014_05_17_1725.w3g
inflating: replays/Multiplayer/Replay_2014_05_17_1756.w3g
inflating: replays/Multiplayer/Replay_2014_05_17_1849.w3g
inflating: replays/Multiplayer/Replay_2014_05_24_1733.w3g
inflating: replays/Multiplayer/Replay_2014_05_24_1821.w3g
inflating: replays/Multiplayer/Replay_2014_06_28_1725.w3g
inflating: replays/Multiplayer/Replay_2014_06_28_1828.w3g
inflating: replays/Multiplayer/Replay_2014_06_29_1838.w3g
inflating: replays/Multiplayer/Replay_2014_06_29_2031.w3g
inflating: replays/Multiplayer/Replay_2014_06_29_2038.w3g
inflating: replays/Multiplayer/Replay_2014_06_29_2117.w3g
inflating: replays/Multiplayer/Replay_2014_06_29_2210.w3g
inflating: replays/Multiplayer/Replay_2014_06_29_2256.w3g
inflating: replays/Multiplayer/Replay_2014_06_29_2346.w3g
inflating: replays/Multiplayer/Replay_2014_06_30_0024.w3g
inflating: replays/Multiplayer/Replay_2014_06_30_1920.w3g
inflating: replays/Multiplayer/Replay_2014_06_30_2005.w3g
inflating: replays/Multiplayer/Replay_2014_06_30_2057.w3g
inflating: replays/Multiplayer/Replay_2014_06_30_2153.w3g
inflating: replays/Multiplayer/Replay_2014_06_30_2247.w3g
inflating: replays/Multiplayer/Replay_2014_07_04_1826.w3g
inflating: replays/Multiplayer/Replay_2014_07_04_1915.w3g
inflating: replays/Multiplayer/Replay_2014_07_06_1505.w3g
inflating: replays/Multiplayer/Replay_2014_07_06_1559.w3g
inflating: replays/Multiplayer/Replay_2014_07_06_1649.w3g
inflating: replays/Multiplayer/Replay_2014_07_08_2019.w3g
inflating: replays/Multiplayer/Replay_2014_11_03_1301.w3g
inflating: replays/Multiplayer/Replay_2014_11_05_2128.w3g
inflating: replays/Multiplayer/Replay_2014_11_05_2156.w3g
inflating: replays/Multiplayer/Replay_2014_11_05_2219.w3g
inflating: replays/Multiplayer/Replay_2014_11_05_2237.w3g
inflating: replays/Multiplayer/Replay_2014_11_07_1918.w3g
inflating: replays/Multiplayer/Replay_2014_11_07_1942.w3g
extracting: replays/Multiplayer/Replay_2014_11_08_1455.w3g
inflating: replays/Multiplayer/Replay_2014_11_08_1513.w3g
inflating: replays/Multiplayer/Replay_2014_11_08_1534.w3g
inflating: replays/Multiplayer/Replay_2014_11_08_1607.w3g
extracting: replays/Multiplayer/Replay_2014_11_10_0905.w3g
inflating: replays/Multiplayer/Replay_2014_11_14_2040.w3g
inflating: replays/Multiplayer/Replay_2014_11_14_2101.w3g
inflating: replays/Multiplayer/Replay_2014_11_14_2122.w3g
inflating: replays/Multiplayer/Replay_2014_11_14_2132.w3g
inflating: replays/Multiplayer/Replay_2014_11_14_2155.w3g
inflating: replays/Multiplayer/Replay_2015_03_19_2208.w3g
inflating: replays/Multiplayer/Replay_2015_03_19_2302.w3g
inflating: replays/Multiplayer/Replay_2016_01_30_1156.w3g
inflating: replays/Multiplayer/Replay_2016_01_30_1311.w3g
inflating: replays/Multiplayer/Replay_2016_02_12_2306.w3g
inflating: replays/Multiplayer/Replay_2016_02_12_2352.w3g
inflating: replays/Multiplayer/Replay_2016_02_13_0118.w3g
inflating: replays/Multiplayer/Replay_2016_02_13_0211.w3g
inflating: replays/Multiplayer/Replay_2016_02_16_0018.w3g
inflating: replays/Multiplayer/Replay_2016_02_16_0031.w3g
inflating: replays/Multiplayer/Replay_2016_02_16_0151.w3g
inflating: replays/Multiplayer/Replay_2016_02_17_0059.w3g
inflating: replays/Multiplayer/Replay_2016_02_17_0136.w3g
inflating: replays/Multiplayer/Replay_2016_02_17_0225.w3g
extracting: replays/Multiplayer/Replay_2016_12_11_1537.w3g
inflating: replays/Multiplayer/Replay_2016_12_11_1606.w3g
inflating: replays/Multiplayer/Replay_2017_01_07_2143.w3g
inflating: replays/Multiplayer/Replay_2017_01_07_2205.w3g
inflating: replays/Multiplayer/Replay_2017_01_07_2210.w3g
inflating: replays/Multiplayer/Replay_2017_01_07_2220.w3g
inflating: replays/Multiplayer/Replay_2017_01_07_2232.w3g
inflating: replays/Multiplayer/Replay_2017_01_07_2251.w3g
inflating: replays/Multiplayer/Replay_2017_01_07_2308.w3g
inflating: replays/Multiplayer/Replay_2017_01_07_2325.w3g
inflating: replays/Multiplayer/Replay_2017_01_08_2228.w3g
inflating: replays/Multiplayer/Replay_2017_01_08_2318.w3g
extracting: replays/Multiplayer/Replay_2017_01_10_2056.w3g
inflating: replays/Multiplayer/Replay_2017_01_10_2127.w3g
inflating: replays/Multiplayer/Replay_2017_01_10_2155.w3g
inflating: replays/Multiplayer/Replay_2017_01_10_2226.w3g
inflating: replays/Multiplayer/Replay_2017_01_10_2310.w3g
###Markdown
Checking the final size of the `replays/Multiplayer` folder.
###Code
!du -lsh replays/Multiplayer
###Output
585M replays/Multiplayer
###Markdown
The files inside the `replays/Multiplayer` folder are the ones we will work with. Next we get started with Python.

2. Intro to Python and how to extract the replays.

We will get started with Python, taking as a practical example *reading the files inside the `replays/Multiplayer` folder with Python*.

For this we need to understand the concepts of:
* `String`
* `Variables` and `print()`
* `Python package`
* `import`
* `List`

String

A `String` is any chain of text. In Python, `Strings` are created using "Double Quotes".

Where are they used? A clear example of how to use a `String` is the path or address where our files live, which in Python would look like

```python
"replays/Multiplayer"
```

This way we have created a `String` using the quotes.

A `String` on its own is not very useful. For a `String` to be more useful we need to assign it to a `Variable`.

Variables and `print()`

A `Variable` can hold the value of our `String`; as its name says, this `Variable` can change or vary its value.

*How do we create a variable?*

Creating a variable is like in mathematics: first we choose a name and then, with the equals sign `=`, we assign a "value" to that variable; in our case we are going to assign it a `String`.

```python
mi_carpeta_de_replays = "replays/Multiplayer"
```

This way I have used the variable `mi_carpeta_de_replays` to assign it the value of a `String`, which is `"replays/Multiplayer"`.

Note that I used more than one word to build the name of my variable: `mi`, `carpeta`, `de`, `replays`, joined by an underscore `_`. In Python, and in other languages in general, variable names are often built from two or more words so that it is clear what a `variable` represents. In Python these words are usually joined with an `underscore _`.

Next, I will create a `cell` that runs the code that creates this variable `mi_carpeta_de_replays`.
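Before that cell, here is a minimal illustrative sketch of the "a `Variable` can change or vary its value" point above; the second folder name is made up purely for this example and is not used anywhere else:

```python
mi_carpeta_de_replays = "replays/Multiplayer"
# reassigning the variable replaces the old value with the new one
mi_carpeta_de_replays = "replays/OtraCarpeta"  # hypothetical folder, only to show reassignment
print(mi_carpeta_de_replays)  # now shows replays/OtraCarpeta
```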
###Code
# Create a variable that holds the String with the location of my replays.
mi_carpeta_de_replays = "replays/Multiplayer"
###Output
_____no_output_____
###Markdown
It worked: I have created a variable that holds the path where my replays are located. But now... ***how do I check that this variable works?***

`print()`

This is where a `built-in function` of Python called `print()` comes in. I call it a `function` because it represents the same idea of transformation as a mathematical function, that is... in `y = f(x)`, `f` would be the `print`, the parentheses `( )` are the way this function accepts parameters `x`, and finally `y` is the result of the transformation.

***But where is `x` in our case?***

`x` stands for our `Variables`.

***Which variables do we have?***

The only variable we have right now is `mi_carpeta_de_replays`, so let's try `print()` with this variable `mi_carpeta_de_replays`.
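As a small illustrative aside (these lines are not one of the notebook's own cells), note the difference between printing the variable and printing a `String` that merely contains the same text:

```python
mi_carpeta_de_replays = "replays/Multiplayer"
print(mi_carpeta_de_replays)    # no quotes: prints the value the variable holds -> replays/Multiplayer
print("mi_carpeta_de_replays")  # with quotes: prints the literal String -> mi_carpeta_de_replays
```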
###Code
# Using print()
print(mi_carpeta_de_replays)
###Output
replays/Multiplayer
###Markdown
So `print(mi_carpeta_de_replays)` shows the value that a variable has been assigned; in our case, the data assigned to `mi_carpeta_de_replays` is `"replays/Multiplayer"`, a `String`. In conclusion, with `print()` we can see which values our variables hold.

`print()` is a `method` that Python already ships with. Other programming languages also have their equivalent of `print()`; in `javascript` it would be something like:

```javascript
console.log()
```

Python package

The concept of a `Python package` can be understood as tools built/programmed by someone else to solve a problem. A problem we have right now is how to read the files that live inside the `Variable` `mi_carpeta_de_replays`, which represents a physical folder on the hard drive. For this we will use the `glob` package, which solves this problem in a very simple way. To use `glob` we need to `import` it into this Jupyter Notebook, and that is where the next concept comes in.

`import`

`import` is a word reserved by Python for calling or loading packages into the Notebook's memory. Because `import` is a reserved word, we must not use it as the name of one of our variables. The way `import` works is:

```python
import <package-name>
```

In our case *the name of the package is glob*, so it should be called as

```python
import glob
```

Let's try the `import` in code.
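As a hedged aside before we do: `glob` is not the only option here; Python's built-in `os` module can also list a folder's contents. A minimal sketch, with an illustrative variable name:

```python
import os
# os.listdir returns a List with the names of the entries inside the folder (without the folder prefix)
nombres_de_archivos = os.listdir("replays/Multiplayer")
print(len(nombres_de_archivos))  # how many files were found
```

The notebook keeps using `glob` below; one practical difference is that `glob.glob` returns the paths with the folder prefix included, while `os.listdir` returns only the bare file names.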
###Code
# Import glob into the Notebook
import glob
###Output
_____no_output_____
###Markdown
It seems to have worked. What happens if I do `print(glob)`? Let's try it again in a code cell.
###Code
print(glob)
###Output
<module 'glob' from '/usr/lib/python3.6/glob.py'>
###Markdown
Correctly, `print()` tells us that this `glob` is a `module` located at `'/usr/lib/python3.6/glob.py'`.

Now... ***How do we use `glob`?***

To use glob we have to understand the concept of `adding Strings` (concatenation), that is:

```python
"this is a String" + " ,This is Another String"  # results in "this is a String ,This is Another String"
```

Two or more `Strings` can be added together to create a single `String`.

***Why is this important for using `glob`?*** We will see that by using `glob` next.
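As an illustrative aside before the cell below (the variable name here is made up): the `*` in a glob pattern is a wildcard that matches any file name, so the pattern can also be made more specific, for example keeping only the `.w3g` replay files:

```python
import glob
# '*.w3g' matches only the files whose name ends in .w3g
solo_archivos_w3g = glob.glob("replays/Multiplayer/*.w3g")
print(len(solo_archivos_w3g))
```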
###Code
## Using glob.
# Build the pattern String that glob needs by adding Strings
todo_lo_que_este_dentro_de_la_carpet = mi_carpeta_de_replays + "/*"
# Show the result of the String addition
print(todo_lo_que_este_dentro_de_la_carpet)
# glob does the work of finding "all" the files that match the pattern in "todo_lo_que_este_dentro_de_la_carpet"
mis_replays = glob.glob(todo_lo_que_este_dentro_de_la_carpet)
# Show what glob found.
print(mis_replays)
###Output
replays/Multiplayer/*
['replays/Multiplayer/Replay_2013_12_27_2004.w3g', 'replays/Multiplayer/Replay_2013_12_16_1433.w3g', 'replays/Multiplayer/Replay_2013_08_18_1838.w3g', 'replays/Multiplayer/Replay_2013_08_09_2348.w3g', 'replays/Multiplayer/Replay_2017_01_07_2251.w3g', 'replays/Multiplayer/Replay_2013_08_14_0048.w3g', 'replays/Multiplayer/Replay_2013_11_22_1249.w3g', 'replays/Multiplayer/Replay_2014_06_30_2153.w3g', 'replays/Multiplayer/Replay_2015_07_22_2006.w3g', 'replays/Multiplayer/Replay_2013_08_24_2330.w3g', 'replays/Multiplayer/Replay_2015_06_29_0020.w3g', 'replays/Multiplayer/Replay_2013_08_13_2349.w3g', 'replays/Multiplayer/Replay_2015_08_06_2212.w3g', 'replays/Multiplayer/Replay_2013_11_06_1800.w3g', 'replays/Multiplayer/Replay_2013_12_01_0021.w3g', 'replays/Multiplayer/Replay_2013_12_26_0126.w3g', 'replays/Multiplayer/Replay_2013_08_18_2251.w3g', 'replays/Multiplayer/Replay_2014_03_25_2123.w3g', 'replays/Multiplayer/Replay_2013_10_06_0038.w3g', 'replays/Multiplayer/Replay_2013_12_20_2004.w3g', 'replays/Multiplayer/Replay_2015_09_04_2117.w3g', 'replays/Multiplayer/Replay_2014_05_07_1828.w3g', 'replays/Multiplayer/Replay_2015_07_30_0350.w3g', 'replays/Multiplayer/Replay_2013_12_20_2319.w3g', 'replays/Multiplayer/Replay_2014_11_08_1455.w3g', 'replays/Multiplayer/Replay_2015_08_01_1841.w3g', 'replays/Multiplayer/Replay_2014_02_03_0203.w3g', 'replays/Multiplayer/Replay_2015_07_03_2058.w3g', 'replays/Multiplayer/Replay_2013_08_30_2027.w3g', 'replays/Multiplayer/Replay_2015_09_04_1455.w3g', 'replays/Multiplayer/Replay_2013_12_02_1748.w3g', 'replays/Multiplayer/Replay_2015_06_22_2300.w3g', 'replays/Multiplayer/Replay_2015_07_27_2157.w3g', 'replays/Multiplayer/Replay_2014_01_12_1904.w3g', 'replays/Multiplayer/Replay_2014_11_07_1918.w3g', 'replays/Multiplayer/Replay_2013_07_30_2104.w3g', 'replays/Multiplayer/Replay_2013_10_03_0041.w3g', 'replays/Multiplayer/Replay_2013_11_03_1950.w3g', 'replays/Multiplayer/Replay_2013_11_22_1333.w3g', 'replays/Multiplayer/Replay_2013_09_23_0011.w3g', 'replays/Multiplayer/Replay_2013_11_18_1707.w3g', 'replays/Multiplayer/Replay_2013_11_22_1220.w3g', 'replays/Multiplayer/Replay_2015_08_13_1643.w3g', 'replays/Multiplayer/Replay_2015_06_29_2254.w3g', 'replays/Multiplayer/Replay_2013_11_23_1946.w3g', 'replays/Multiplayer/Replay_2014_01_13_1254.w3g', 'replays/Multiplayer/Replay_2017_01_10_2342.w3g', 'replays/Multiplayer/Replay_2015_07_16_1454.w3g', 'replays/Multiplayer/Replay_2015_08_17_1138.w3g', 'replays/Multiplayer/Replay_2015_06_30_1533.w3g', 'replays/Multiplayer/Replay_2013_10_15_1855.w3g', 'replays/Multiplayer/Replay_2013_11_28_0017.w3g', 'replays/Multiplayer/Replay_2015_06_25_2139.w3g', 'replays/Multiplayer/Replay_2013_10_09_2213.w3g', 'replays/Multiplayer/Replay_2016_01_13_2311.w3g', 'replays/Multiplayer/Replay_2014_05_06_2136.w3g', 'replays/Multiplayer/Replay_2014_04_20_2215.w3g', 'replays/Multiplayer/Replay_2015_07_24_0034.w3g', 'replays/Multiplayer/Replay_2013_08_30_1317.w3g', 'replays/Multiplayer/Replay_2015_07_23_2058.w3g', 'replays/Multiplayer/Replay_2015_08_13_1910.w3g', 'replays/Multiplayer/Replay_2015_06_23_1946.w3g', 'replays/Multiplayer/Replay_2013_11_23_1845.w3g', 'replays/Multiplayer/Replay_2013_12_01_1554.w3g', 'replays/Multiplayer/Replay_2013_09_28_2056.w3g', 'replays/Multiplayer/Replay_2015_06_26_1620.w3g', 'replays/Multiplayer/Replay_2013_07_25_1326.w3g', 'replays/Multiplayer/Replay_2015_07_20_0130.w3g', 'replays/Multiplayer/Replay_2013_12_08_0217.w3g', 'replays/Multiplayer/Replay_2013_10_01_1603.w3g', 'replays/Multiplayer/Replay_2013_07_18_1738.w3g', 
'replays/Multiplayer/Replay_2013_12_21_1535.w3g', 'replays/Multiplayer/Replay_2015_07_06_2348.w3g', 'replays/Multiplayer/Replay_2013_08_16_2138.w3g', 'replays/Multiplayer/Replay_2013_08_04_1230.w3g', 'replays/Multiplayer/Replay_2013_08_18_1728.w3g', 'replays/Multiplayer/Replay_2014_11_14_2101.w3g', 'replays/Multiplayer/Replay_2015_09_04_1745.w3g', 'replays/Multiplayer/Replay_2013_11_22_1401.w3g', 'replays/Multiplayer/Replay_2013_08_23_1523.w3g', 'replays/Multiplayer/Replay_2015_08_15_0008.w3g', 'replays/Multiplayer/Replay_2013_08_17_1936.w3g', 'replays/Multiplayer/Replay_2013_11_23_1802.w3g', 'replays/Multiplayer/Replay_2013_11_22_1426.w3g', 'replays/Multiplayer/Replay_2015_08_02_1636.w3g', 'replays/Multiplayer/Replay_2013_12_07_2305.w3g', 'replays/Multiplayer/Replay_2013_12_20_0035.w3g', 'replays/Multiplayer/Replay_2017_01_10_2310.w3g', 'replays/Multiplayer/Replay_2013_08_02_0042.w3g', 'replays/Multiplayer/Replay_2015_07_30_2336.w3g', 'replays/Multiplayer/Replay_2015_07_24_1920.w3g', 'replays/Multiplayer/Replay_2013_09_15_1515.w3g', 'replays/Multiplayer/Replay_2013_12_26_1756.w3g', 'replays/Multiplayer/Replay_2014_04_21_2022.w3g', 'replays/Multiplayer/Replay_2014_11_08_2340.w3g', 'replays/Multiplayer/Replay_2013_12_26_1457.w3g', 'replays/Multiplayer/Replay_2013_11_23_1138.w3g', 'replays/Multiplayer/Replay_2013_08_10_1727.w3g', 'replays/Multiplayer/Replay_2015_03_19_2208.w3g', 'replays/Multiplayer/Replay_2015_06_26_2317.w3g', 'replays/Multiplayer/Replay_2013_07_31_1800.w3g', 'replays/Multiplayer/Replay_2015_07_03_2207.w3g', 'replays/Multiplayer/Replay_2016_07_07_1917.w3g', 'replays/Multiplayer/Replay_2015_07_01_1605.w3g', 'replays/Multiplayer/Replay_2013_08_23_2226.w3g', 'replays/Multiplayer/Replay_2016_07_07_2034.w3g', 'replays/Multiplayer/Replay_2013_08_06_0051.w3g', 'replays/Multiplayer/Replay_2015_09_06_0231.w3g', 'replays/Multiplayer/Replay_2015_08_26_2148.w3g', 'replays/Multiplayer/Replay_2015_08_30_2105.w3g', 'replays/Multiplayer/Replay_2015_03_19_2302.w3g', 'replays/Multiplayer/Replay_2015_06_22_1825.w3g', 'replays/Multiplayer/Replay_2015_07_25_1343.w3g', 'replays/Multiplayer/Replay_2016_07_07_1913.w3g', 'replays/Multiplayer/Replay_2015_08_02_0135.w3g', 'replays/Multiplayer/Replay_2013_08_05_2340.w3g', 'replays/Multiplayer/Replay_2013_11_06_1857.w3g', 'replays/Multiplayer/Replay_2013_12_13_1124.w3g', 'replays/Multiplayer/Replay_2013_08_05_2302.w3g', 'replays/Multiplayer/Replay_2013_12_01_2203.w3g', 'replays/Multiplayer/Replay_2013_08_23_1854.w3g', 'replays/Multiplayer/Replay_2013_08_15_2052.w3g', 'replays/Multiplayer/Replay_2015_07_20_0041.w3g', 'replays/Multiplayer/Replay_2013_10_01_1739.w3g', 'replays/Multiplayer/Replay_2013_09_16_1510.w3g', 'replays/Multiplayer/Replay_2015_08_05_1538.w3g', 'replays/Multiplayer/Replay_2015_08_30_2235.w3g', 'replays/Multiplayer/Replay_2015_07_25_2107.w3g', 'replays/Multiplayer/Replay_2017_01_11_2228.w3g', 'replays/Multiplayer/Replay_2013_08_23_1430.w3g', 'replays/Multiplayer/Replay_2015_08_01_1334.w3g', 'replays/Multiplayer/Replay_2013_11_23_1835.w3g', 'replays/Multiplayer/Replay_2013_08_25_1444.w3g', 'replays/Multiplayer/Replay_2013_11_30_2335.w3g', 'replays/Multiplayer/Replay_2013_12_25_0014.w3g', 'replays/Multiplayer/Replay_2015_08_06_1800.w3g', 'replays/Multiplayer/Replay_2013_10_23_1503.w3g', 'replays/Multiplayer/Replay_2013_10_29_1056.w3g', 'replays/Multiplayer/Replay_2013_11_17_1419.w3g', 'replays/Multiplayer/Replay_2017_01_07_2143.w3g', 'replays/Multiplayer/Replay_2014_03_31_1721.w3g', 'replays/Multiplayer/Replay_2015_07_19_2020.w3g', 
'replays/Multiplayer/Replay_2015_08_26_1832.w3g', 'replays/Multiplayer/Replay_2015_06_29_0135.w3g', 'replays/Multiplayer/Replay_2015_08_27_1922.w3g', 'replays/Multiplayer/Replay_2015_07_02_1904.w3g', 'replays/Multiplayer/Replay_2015_08_01_0333.w3g', 'replays/Multiplayer/Replay_2015_07_03_1816.w3g', 'replays/Multiplayer/Replay_2013_12_03_2206.w3g', 'replays/Multiplayer/Replay_2016_07_07_2036.w3g', 'replays/Multiplayer/Replay_2015_07_25_2010.w3g', 'replays/Multiplayer/Replay_2015_08_31_1520.w3g', 'replays/Multiplayer/Replay_2013_08_31_1936.w3g', 'replays/Multiplayer/Replay_2014_02_03_0055.w3g', 'replays/Multiplayer/Replay_2015_07_15_1751.w3g', 'replays/Multiplayer/Replay_2013_11_15_2349.w3g', 'replays/Multiplayer/Replay_2013_09_22_0041.w3g', 'replays/Multiplayer/Replay_2013_11_18_2332.w3g', 'replays/Multiplayer/Replay_2013_11_06_1720.w3g', 'replays/Multiplayer/Replay_2014_06_29_2210.w3g', 'replays/Multiplayer/Replay_2013_11_28_1216.w3g', 'replays/Multiplayer/Replay_2013_09_26_1313.w3g', 'replays/Multiplayer/Replay_2014_05_03_2121.w3g', 'replays/Multiplayer/Replay_2013_12_16_1432.w3g', 'replays/Multiplayer/Replay_2013_12_24_2256.w3g', 'replays/Multiplayer/Replay_2015_06_21_2355.w3g', 'replays/Multiplayer/Replay_2013_11_22_1211.w3g', 'replays/Multiplayer/Replay_2015_08_10_1227.w3g', 'replays/Multiplayer/Replay_2014_02_07_2204.w3g', 'replays/Multiplayer/Replay_2013_11_03_0113.w3g', 'replays/Multiplayer/Replay_2013_09_30_1608.w3g', 'replays/Multiplayer/Replay_2013_07_19_1849.w3g', 'replays/Multiplayer/Replay_2013_10_18_2233.w3g', 'replays/Multiplayer/Replay_2015_06_30_1248.w3g', 'replays/Multiplayer/Replay_2015_07_17_1635.w3g', 'replays/Multiplayer/Replay_2015_06_27_1401.w3g', 'replays/Multiplayer/Replay_2015_08_01_1601.w3g', 'replays/Multiplayer/Replay_2013_09_30_1530.w3g', 'replays/Multiplayer/Replay_2015_06_22_1946.w3g', 'replays/Multiplayer/Replay_2013_10_29_1026.w3g', 'replays/Multiplayer/Replay_2015_08_30_2320.w3g', 'replays/Multiplayer/Replay_2015_07_27_1558.w3g', 'replays/Multiplayer/Replay_2013_08_16_2327.w3g', 'replays/Multiplayer/Replay_2013_10_29_1050.w3g', 'replays/Multiplayer/Replay_2014_11_14_2040.w3g', 'replays/Multiplayer/Replay_2013_07_29_2216.w3g', 'replays/Multiplayer/Replay_2014_03_18_2341.w3g', 'replays/Multiplayer/Replay_2013_10_31_0150.w3g', 'replays/Multiplayer/Replay_2013_08_18_1935.w3g', 'replays/Multiplayer/Replay_2013_11_22_1406.w3g', 'replays/Multiplayer/Replay_2014_03_14_1717.w3g', 'replays/Multiplayer/Replay_2013_09_16_0143.w3g', 'replays/Multiplayer/Replay_2014_05_17_1404.w3g', 'replays/Multiplayer/Replay_2015_08_10_2151.w3g', 'replays/Multiplayer/Replay_2013_12_13_1241.w3g', 'replays/Multiplayer/Replay_2015_07_23_2157.w3g', 'replays/Multiplayer/Replay_2013_08_14_2150.w3g', 'replays/Multiplayer/Replay_2015_09_05_1714.w3g', 'replays/Multiplayer/Replay_2013_12_03_1332.w3g', 'replays/Multiplayer/Replay_2015_09_03_1810.w3g', 'replays/Multiplayer/Replay_2013_11_22_1251.w3g', 'replays/Multiplayer/Replay_2013_12_23_2128.w3g', 'replays/Multiplayer/Replay_2015_06_27_1459.w3g', 'replays/Multiplayer/Replay_2014_06_30_2247.w3g', 'replays/Multiplayer/Replay_2013_10_29_1024.w3g', 'replays/Multiplayer/Replay_2013_12_18_1905.w3g', 'replays/Multiplayer/Replay_2014_11_02_1226.w3g', 'replays/Multiplayer/Replay_2015_08_13_1722.w3g', 'replays/Multiplayer/Replay_2015_06_25_1815.w3g', 'replays/Multiplayer/Replay_2013_11_23_1606.w3g', 'replays/Multiplayer/Replay_2015_09_05_1826.w3g', 'replays/Multiplayer/Replay_2013_10_11_1638.w3g', 'replays/Multiplayer/Replay_2015_07_24_2146.w3g', 
'replays/Multiplayer/Replay_2013_11_12_1659.w3g', 'replays/Multiplayer/Replay_2013_09_20_2000.w3g', 'replays/Multiplayer/Replay_2015_08_05_0023.w3g', 'replays/Multiplayer/Replay_2013_09_08_0056.w3g', 'replays/Multiplayer/Replay_2014_03_14_1850.w3g', 'replays/Multiplayer/Replay_2015_09_06_1910.w3g', 'replays/Multiplayer/Replay_2013_09_16_0237.w3g', 'replays/Multiplayer/Replay_2015_06_26_1514.w3g', 'replays/Multiplayer/Replay_2014_05_17_1636.w3g', 'replays/Multiplayer/Replay_2013_11_28_1307.w3g', 'replays/Multiplayer/Replay_2013_08_21_1803.w3g', 'replays/Multiplayer/Replay_2014_02_04_2112.w3g', 'replays/Multiplayer/Replay_2013_11_16_1547.w3g', 'replays/Multiplayer/Replay_2015_06_26_1103.w3g', 'replays/Multiplayer/Replay_2015_07_17_1407.w3g', 'replays/Multiplayer/Replay_2015_08_01_2135.w3g', 'replays/Multiplayer/Replay_2013_10_29_2107.w3g', 'replays/Multiplayer/Replay_2013_08_25_2338.w3g', 'replays/Multiplayer/Replay_2014_01_12_2057.w3g', 'replays/Multiplayer/Replay_2013_07_18_1723.w3g', 'replays/Multiplayer/Replay_2015_07_25_1530.w3g', 'replays/Multiplayer/Replay_2013_12_02_1151.w3g', 'replays/Multiplayer/Replay_2014_02_05_2325.w3g', 'replays/Multiplayer/Replay_2015_08_10_2215.w3g', 'replays/Multiplayer/Replay_2013_08_18_1149.w3g', 'replays/Multiplayer/Replay_2015_08_02_1559.w3g', 'replays/Multiplayer/Replay_2013_10_25_1509.w3g', 'replays/Multiplayer/Replay_2015_06_25_2340.w3g', 'replays/Multiplayer/Replay_2014_03_16_2227.w3g', 'replays/Multiplayer/Replay_2015_07_18_1404.w3g', 'replays/Multiplayer/Replay_2015_06_27_0149.w3g', 'replays/Multiplayer/Replay_2013_12_19_2243.w3g', 'replays/Multiplayer/Replay_2015_08_13_2046.w3g', 'replays/Multiplayer/Replay_2014_11_08_1513.w3g', 'replays/Multiplayer/Replay_2015_08_27_2207.w3g', 'replays/Multiplayer/Replay_2013_08_11_1719.w3g', 'replays/Multiplayer/Replay_2015_06_21_2202.w3g', 'replays/Multiplayer/Replay_2015_06_21_2303.w3g', 'replays/Multiplayer/Replay_2015_07_31_2331.w3g', 'replays/Multiplayer/Replay_2015_07_04_2220.w3g', 'replays/Multiplayer/Replay_2014_03_31_1816.w3g', 'replays/Multiplayer/Replay_2015_06_29_1936.w3g', 'replays/Multiplayer/Replay_2013_07_25_1442.w3g', 'replays/Multiplayer/Replay_2013_11_23_1851.w3g', 'replays/Multiplayer/Replay_2013_11_29_2321.w3g', 'replays/Multiplayer/Replay_2013_12_25_0052.w3g', 'replays/Multiplayer/Replay_2015_08_19_1631.w3g', 'replays/Multiplayer/Replay_2015_07_31_1651.w3g', 'replays/Multiplayer/Replay_2013_11_24_1548.w3g', 'replays/Multiplayer/Replay_2013_10_30_0137.w3g', 'replays/Multiplayer/Replay_2013_11_22_1337.w3g', 'replays/Multiplayer/Replay_2015_08_23_1430.w3g', 'replays/Multiplayer/Replay_2015_08_18_1924.w3g', 'replays/Multiplayer/Replay_2015_07_26_2248.w3g', 'replays/Multiplayer/Replay_2014_06_29_2256.w3g', 'replays/Multiplayer/Replay_2013_10_31_1209.w3g', 'replays/Multiplayer/Replay_2013_10_02_1453.w3g', 'replays/Multiplayer/Replay_2014_05_05_0039.w3g', 'replays/Multiplayer/Replay_2013_12_01_2122.w3g', 'replays/Multiplayer/Replay_2014_11_03_1411.w3g', 'replays/Multiplayer/Replay_2015_07_22_1339.w3g', 'replays/Multiplayer/Replay_2013_11_10_2132.w3g', 'replays/Multiplayer/Replay_2015_08_16_1833.w3g', 'replays/Multiplayer/Replay_2013_08_31_2008.w3g', 'replays/Multiplayer/Replay_2013_11_12_1837.w3g', 'replays/Multiplayer/Replay_2013_11_25_0022.w3g', 'replays/Multiplayer/Replay_2014_05_02_1755.w3g', 'replays/Multiplayer/Replay_2015_07_15_1708.w3g', 'replays/Multiplayer/Replay_2015_08_31_1946.w3g', 'replays/Multiplayer/Replay_2015_07_22_2100.w3g', 'replays/Multiplayer/Replay_2013_10_29_2038.w3g', 
'replays/Multiplayer/Replay_2015_09_05_1912.w3g', 'replays/Multiplayer/Replay_2014_11_10_0905.w3g', 'replays/Multiplayer/Replay_2014_03_16_2150.w3g', 'replays/Multiplayer/Replay_2013_09_19_2235.w3g', 'replays/Multiplayer/Replay_2015_08_09_1642.w3g', 'replays/Multiplayer/Replay_2013_10_29_1617.w3g', 'replays/Multiplayer/Replay_2013_12_21_0010.w3g', 'replays/Multiplayer/Replay_2015_06_24_2006.w3g', 'replays/Multiplayer/Replay_2014_02_08_1928.w3g', 'replays/Multiplayer/Replay_2013_07_18_2044.w3g', 'replays/Multiplayer/Replay_2015_07_25_1803.w3g', 'replays/Multiplayer/Replay_2015_06_30_1524.w3g', 'replays/Multiplayer/Replay_2013_10_19_1318.w3g', 'replays/Multiplayer/Replay_2013_10_17_1945.w3g', 'replays/Multiplayer/Replay_2013_08_31_1710.w3g', 'replays/Multiplayer/Replay_2014_03_17_0038.w3g', 'replays/Multiplayer/Replay_2013_12_23_2018.w3g', 'replays/Multiplayer/Replay_2014_11_05_2128.w3g', 'replays/Multiplayer/Replay_2013_12_26_0034.w3g', 'replays/Multiplayer/Replay_2015_08_06_1625.w3g', 'replays/Multiplayer/Replay_2015_07_31_2255.w3g', 'replays/Multiplayer/Replay_2014_06_30_2057.w3g', 'replays/Multiplayer/Replay_2015_09_03_1847.w3g', 'replays/Multiplayer/Replay_2013_08_17_1958.w3g', 'replays/Multiplayer/Replay_2015_08_09_1132.w3g', 'replays/Multiplayer/Replay_2013_07_29_1349.w3g', 'replays/Multiplayer/Replay_2013_07_31_2005.w3g', 'replays/Multiplayer/Replay_2013_08_14_0125.w3g', 'replays/Multiplayer/Replay_2013_11_24_2025.w3g', 'replays/Multiplayer/Replay_2013_11_23_1840.w3g', 'replays/Multiplayer/Replay_2015_08_25_2038.w3g', 'replays/Multiplayer/Replay_2013_11_23_0146.w3g', 'replays/Multiplayer/Replay_2015_08_17_1143.w3g', 'replays/Multiplayer/Replay_2014_05_01_1707.w3g', 'replays/Multiplayer/Replay_2013_09_26_0035.w3g', 'replays/Multiplayer/Replay_2015_08_24_1828.w3g', 'replays/Multiplayer/Replay_2013_08_13_2108.w3g', 'replays/Multiplayer/Replay_2015_07_30_1822.w3g', 'replays/Multiplayer/Replay_2013_11_22_1304.w3g', 'replays/Multiplayer/Replay_2013_10_18_1903.w3g', 'replays/Multiplayer/Replay_2013_07_28_1133.w3g', 'replays/Multiplayer/Replay_2013_12_23_1947.w3g', 'replays/Multiplayer/Replay_2013_10_29_2034.w3g', 'replays/Multiplayer/Replay_2015_08_05_1443.w3g', 'replays/Multiplayer/Replay_2013_10_08_2244.w3g', 'replays/Multiplayer/Replay_2013_12_12_2101.w3g', 'replays/Multiplayer/Replay_2014_04_18_1208.w3g', 'replays/Multiplayer/Replay_2013_12_25_1047.w3g', 'replays/Multiplayer/Replay_2015_08_09_1315.w3g', 'replays/Multiplayer/Replay_2015_07_17_2242.w3g', 'replays/Multiplayer/Replay_2015_07_24_2003.w3g', 'replays/Multiplayer/Replay_2013_12_02_2300.w3g', 'replays/Multiplayer/Replay_2015_07_31_2111.w3g', 'replays/Multiplayer/Replay_2015_06_24_2225.w3g', 'replays/Multiplayer/Replay_2015_08_17_1936.w3g', 'replays/Multiplayer/Replay_2013_11_19_1852.w3g', 'replays/Multiplayer/Replay_2016_07_07_1953.w3g', 'replays/Multiplayer/Replay_2013_08_10_1652.w3g', 'replays/Multiplayer/Replay_2015_07_03_0231.w3g', 'replays/Multiplayer/Replay_2013_12_26_1838.w3g', 'replays/Multiplayer/Replay_2013_08_14_2121.w3g', 'replays/Multiplayer/Replay_2013_10_02_1316.w3g', 'replays/Multiplayer/Replay_2013_11_30_1615.w3g', 'replays/Multiplayer/Replay_2015_08_20_0310.w3g', 'replays/Multiplayer/Replay_2015_07_19_1857.w3g', 'replays/Multiplayer/Replay_2015_07_07_1846.w3g', 'replays/Multiplayer/Replay_2013_11_27_0144.w3g', 'replays/Multiplayer/Replay_2014_02_07_2056.w3g', 'replays/Multiplayer/Replay_2013_09_01_1357.w3g', 'replays/Multiplayer/Replay_2015_07_25_1557.w3g', 'replays/Multiplayer/Replay_2013_10_26_2244.w3g', 
'replays/Multiplayer/Replay_2013_08_31_1459.w3g', 'replays/Multiplayer/Replay_2013_08_21_1851.w3g', 'replays/Multiplayer/Replay_2015_07_24_2228.w3g', 'replays/Multiplayer/Replay_2013_07_21_0115.w3g', 'replays/Multiplayer/Replay_2015_07_23_2229.w3g', 'replays/Multiplayer/Replay_2013_07_30_1244.w3g', 'replays/Multiplayer/Replay_2013_11_27_2330.w3g', 'replays/Multiplayer/Replay_2014_01_30_0014.w3g', 'replays/Multiplayer/Replay_2013_08_07_2016.w3g', 'replays/Multiplayer/Replay_2015_09_02_1744.w3g', 'replays/Multiplayer/Replay_2013_08_30_0109.w3g', 'replays/Multiplayer/Replay_2013_09_12_0036.w3g', 'replays/Multiplayer/Replay_2015_06_30_1215.w3g', 'replays/Multiplayer/Replay_2013_09_14_2114.w3g', 'replays/Multiplayer/Replay_2013_11_25_0103.w3g', 'replays/Multiplayer/Replay_2014_01_30_2356.w3g', 'replays/Multiplayer/Replay_2013_08_09_2000.w3g', 'replays/Multiplayer/Replay_2013_12_23_2339.w3g', 'replays/Multiplayer/Replay_2015_07_04_2252.w3g', 'replays/Multiplayer/Replay_2013_11_20_1222.w3g', 'replays/Multiplayer/Replay_2013_11_27_2050.w3g', 'replays/Multiplayer/Replay_2013_12_14_2202.w3g', 'replays/Multiplayer/Replay_2015_08_08_1836.w3g', 'replays/Multiplayer/Replay_2015_07_29_1430.w3g', 'replays/Multiplayer/Replay_2013_08_05_0115.w3g', 'replays/Multiplayer/Replay_2015_08_26_1919.w3g', 'replays/Multiplayer/Replay_2013_11_23_1058.w3g', 'replays/Multiplayer/Replay_2013_09_14_1907.w3g', 'replays/Multiplayer/Replay_2013_09_11_1543.w3g', 'replays/Multiplayer/Replay_2015_08_11_1152.w3g', 'replays/Multiplayer/Replay_2013_09_28_0039.w3g', 'replays/Multiplayer/Replay_2015_07_29_2251.w3g', 'replays/Multiplayer/Replay_2014_04_22_0033.w3g', 'replays/Multiplayer/Replay_2014_03_12_2047.w3g', 'replays/Multiplayer/Replay_2013_11_16_1500.w3g', 'replays/Multiplayer/Replay_2016_02_12_2306.w3g', 'replays/Multiplayer/Replay_2013_12_03_2101.w3g', 'replays/Multiplayer/Replay_2015_06_22_2031.w3g', 'replays/Multiplayer/Replay_2015_09_02_1554.w3g', 'replays/Multiplayer/Replay_2013_10_03_1015.w3g', 'replays/Multiplayer/Replay_2013_11_23_1813.w3g', 'replays/Multiplayer/Replay_2013_07_18_1736.w3g', 'replays/Multiplayer/Replay_2015_08_25_2311.w3g', 'replays/Multiplayer/Replay_2014_11_03_1301.w3g', 'replays/Multiplayer/Replay_2014_03_13_2139.w3g', 'replays/Multiplayer/Replay_2014_05_04_1739.w3g', 'replays/Multiplayer/Replay_2014_01_10_2019.w3g', 'replays/Multiplayer/Replay_2013_08_04_0007.w3g', 'replays/Multiplayer/Replay_2013_12_25_1548.w3g', 'replays/Multiplayer/Replay_2015_07_27_1749.w3g', 'replays/Multiplayer/Replay_2015_09_04_2156.w3g', 'replays/Multiplayer/Replay_2015_08_19_1706.w3g', 'replays/Multiplayer/Replay_2014_05_05_1229.w3g', 'replays/Multiplayer/Replay_2013_07_28_0250.w3g', 'replays/Multiplayer/Replay_2013_08_21_2132.w3g', 'replays/Multiplayer/Replay_2016_07_08_1038.w3g', 'replays/Multiplayer/Replay_2015_07_01_1419.w3g', 'replays/Multiplayer/Replay_2013_10_09_1513.w3g', 'replays/Multiplayer/Replay_2013_10_16_1002.w3g', 'replays/Multiplayer/Replay_2015_06_25_2050.w3g', 'replays/Multiplayer/Replay_2015_07_23_2244.w3g', 'replays/Multiplayer/Replay_2013_08_09_2224.w3g', 'replays/Multiplayer/Replay_2014_04_28_1906.w3g', 'replays/Multiplayer/Replay_2013_08_31_1741.w3g', 'replays/Multiplayer/Replay_2015_08_29_2001.w3g', 'replays/Multiplayer/Replay_2015_08_14_2328.w3g', 'replays/Multiplayer/Replay_2013_08_11_1801.w3g', 'replays/Multiplayer/Replay_2013_08_01_1149.w3g', 'replays/Multiplayer/Replay_2013_12_10_0200.w3g', 'replays/Multiplayer/Replay_2015_06_23_1943.w3g', 'replays/Multiplayer/Replay_2014_01_10_1931.w3g', 
'replays/Multiplayer/Replay_2015_08_20_1447.w3g', 'replays/Multiplayer/Replay_2014_04_01_1717.w3g', 'replays/Multiplayer/Replay_2014_02_09_1618.w3g', 'replays/Multiplayer/Replay_2013_11_12_1736.w3g', 'replays/Multiplayer/Replay_2013_11_29_2103.w3g', 'replays/Multiplayer/Replay_2013_11_29_2009.w3g', 'replays/Multiplayer/Replay_2015_06_26_2119.w3g', 'replays/Multiplayer/Replay_2013_09_23_1851.w3g', 'replays/Multiplayer/Replay_2013_10_29_1107.w3g', 'replays/Multiplayer/Replay_2013_11_23_1240.w3g', 'replays/Multiplayer/Replay_2015_06_27_1159.w3g', 'replays/Multiplayer/Replay_2014_03_15_1651.w3g', 'replays/Multiplayer/Replay_2013_08_09_0132.w3g', 'replays/Multiplayer/Replay_2015_09_05_1052.w3g', 'replays/Multiplayer/Replay_2013_08_13_1957.w3g', 'replays/Multiplayer/Replay_2013_12_10_0112.w3g', 'replays/Multiplayer/Replay_2014_02_07_2245.w3g', 'replays/Multiplayer/Replay_2013_09_12_1304.w3g', 'replays/Multiplayer/Replay_2014_05_12_2143.w3g', 'replays/Multiplayer/Replay_2015_08_12_2101.w3g', 'replays/Multiplayer/Replay_2013_08_18_1207.w3g', 'replays/Multiplayer/Replay_2016_02_13_0118.w3g', 'replays/Multiplayer/Replay_2015_06_27_1106.w3g', 'replays/Multiplayer/Replay_2015_06_26_1204.w3g', 'replays/Multiplayer/Replay_2013_12_16_1508.w3g', 'replays/Multiplayer/Replay_2013_10_29_1040.w3g', 'replays/Multiplayer/Replay_2014_06_29_2031.w3g', 'replays/Multiplayer/Replay_2013_08_25_2257.w3g', 'replays/Multiplayer/Replay_2015_07_29_2343.w3g', 'replays/Multiplayer/Replay_2015_09_01_1716.w3g', 'replays/Multiplayer/Replay_2014_05_03_1848.w3g', 'replays/Multiplayer/Replay_2015_06_30_0050.w3g', 'replays/Multiplayer/Replay_2015_09_01_1637.w3g', 'replays/Multiplayer/Replay_2015_08_19_1807.w3g', 'replays/Multiplayer/Replay_2014_03_24_1812.w3g', 'replays/Multiplayer/Replay_2014_05_04_0005.w3g', 'replays/Multiplayer/Replay_2013_08_30_2110.w3g', 'replays/Multiplayer/Replay_2013_10_29_2032.w3g', 'replays/Multiplayer/Replay_2015_07_08_2340.w3g', 'replays/Multiplayer/Replay_2013_12_03_1402.w3g', 'replays/Multiplayer/Replay_2013_08_16_2147.w3g', 'replays/Multiplayer/Replay_2013_08_12_2007.w3g', 'replays/Multiplayer/Replay_2014_01_29_2138.w3g', 'replays/Multiplayer/Replay_2015_07_04_2117.w3g', 'replays/Multiplayer/Replay_2014_02_07_2001.w3g', 'replays/Multiplayer/Replay_2015_06_24_1847.w3g', 'replays/Multiplayer/Replay_2015_06_25_2237.w3g', 'replays/Multiplayer/Replay_2013_11_11_0004.w3g', 'replays/Multiplayer/Replay_2015_08_13_1819.w3g', 'replays/Multiplayer/Replay_2015_07_25_0029.w3g', 'replays/Multiplayer/Replay_2015_08_23_1513.w3g', 'replays/Multiplayer/Replay_2013_08_10_1544.w3g', 'replays/Multiplayer/Replay_2013_11_11_1533.w3g', 'replays/Multiplayer/Replay_2015_09_06_1414.w3g', 'replays/Multiplayer/Replay_2014_01_13_1123.w3g', 'replays/Multiplayer/Replay_2015_06_25_1903.w3g', 'replays/Multiplayer/Replay_2013_12_09_1547.w3g', 'replays/Multiplayer/Replay_2014_01_10_1833.w3g', 'replays/Multiplayer/Replay_2013_08_10_2123.w3g', 'replays/Multiplayer/Replay_2014_05_01_2333.w3g', 'replays/Multiplayer/Replay_2013_08_26_0042.w3g', 'replays/Multiplayer/Replay_2013_11_30_0001.w3g', 'replays/Multiplayer/Replay_2014_05_01_1907.w3g', 'replays/Multiplayer/Replay_2013_08_12_0949.w3g', 'replays/Multiplayer/Replay_2013_10_28_1516.w3g', 'replays/Multiplayer/Replay_2014_02_04_1929.w3g', 'replays/Multiplayer/Replay_2013_10_22_0231.w3g', 'replays/Multiplayer/Replay_2014_04_03_1825.w3g', 'replays/Multiplayer/Replay_2014_05_05_1150.w3g', 'replays/Multiplayer/Replay_2016_02_16_0031.w3g', 'replays/Multiplayer/Replay_2015_09_06_0012.w3g', 
'replays/Multiplayer/Replay_2015_08_23_1638.w3g', 'replays/Multiplayer/Replay_2015_08_12_1817.w3g', 'replays/Multiplayer/Replay_2013_12_03_1908.w3g', 'replays/Multiplayer/Replay_2014_05_17_1713.w3g', 'replays/Multiplayer/Replay_2013_12_20_2239.w3g', 'replays/Multiplayer/Replay_2014_01_12_1941.w3g', 'replays/Multiplayer/Replay_2013_11_23_1847.w3g', 'replays/Multiplayer/Replay_2013_09_28_1635.w3g', 'replays/Multiplayer/Replay_2014_11_14_2132.w3g', 'replays/Multiplayer/Replay_2017_01_07_2205.w3g', 'replays/Multiplayer/Replay_2015_06_26_1259.w3g', 'replays/Multiplayer/Replay_2013_09_17_1701.w3g', 'replays/Multiplayer/Replay_2015_06_27_1311.w3g', 'replays/Multiplayer/Replay_2017_01_11_2347.w3g', 'replays/Multiplayer/Replay_2015_08_05_1710.w3g', 'replays/Multiplayer/Replay_2016_02_17_0225.w3g', 'replays/Multiplayer/Replay_2014_04_28_1948.w3g', 'replays/Multiplayer/Replay_2013_07_20_1733.w3g', 'replays/Multiplayer/Replay_2015_06_22_2130.w3g', 'replays/Multiplayer/Replay_2013_09_10_1514.w3g', 'replays/Multiplayer/Replay_2014_05_17_1756.w3g', 'replays/Multiplayer/Replay_2015_07_23_0112.w3g', 'replays/Multiplayer/Replay_2013_07_20_1847.w3g', 'replays/Multiplayer/Replay_2013_11_18_1617.w3g', 'replays/Multiplayer/Replay_2014_11_05_2156.w3g', 'replays/Multiplayer/Replay_2013_11_19_1539.w3g', 'replays/Multiplayer/Replay_2013_08_18_1331.w3g', 'replays/Multiplayer/Replay_2013_09_01_2137.w3g', 'replays/Multiplayer/Replay_2013_08_27_1733.w3g', 'replays/Multiplayer/Replay_2014_01_09_2317.w3g', 'replays/Multiplayer/Replay_2013_08_12_1513.w3g', 'replays/Multiplayer/Replay_2013_09_20_2050.w3g', 'replays/Multiplayer/Replay_2015_06_22_0052.w3g', 'replays/Multiplayer/Replay_2013_08_24_1657.w3g', 'replays/Multiplayer/Replay_2015_06_22_1900.w3g', 'replays/Multiplayer/Replay_2013_10_06_0122.w3g', 'replays/Multiplayer/Replay_2016_01_13_2351.w3g', 'replays/Multiplayer/Replay_2015_07_26_0159.w3g', 'replays/Multiplayer/Replay_2015_08_04_1936.w3g', 'replays/Multiplayer/Replay_2015_06_28_0222.w3g', 'replays/Multiplayer/Replay_2013_07_20_1316.w3g', 'replays/Multiplayer/Replay_2016_02_17_0059.w3g', 'replays/Multiplayer/Replay_2013_12_25_1628.w3g', 'replays/Multiplayer/Replay_2014_03_16_2310.w3g', 'replays/Multiplayer/Replay_2013_10_16_1402.w3g', 'replays/Multiplayer/Replay_2013_10_04_2251.w3g', 'replays/Multiplayer/Replay_2015_08_04_1424.w3g', 'replays/Multiplayer/Replay_2013_12_21_2209.w3g', 'replays/Multiplayer/Replay_2013_10_24_2312.w3g', 'replays/Multiplayer/Replay_2015_08_23_1213.w3g', 'replays/Multiplayer/Replay_2013_08_18_2153.w3g', 'replays/Multiplayer/Replay_2013_08_18_1723.w3g', 'replays/Multiplayer/Replay_2013_08_25_1535.w3g', 'replays/Multiplayer/Replay_2015_08_02_1313.w3g', 'replays/Multiplayer/Replay_2013_11_23_1838.w3g', 'replays/Multiplayer/Replay_2013_08_15_2118.w3g', 'replays/Multiplayer/Replay_2016_02_16_0018.w3g', 'replays/Multiplayer/Replay_2013_10_18_1807.w3g', 'replays/Multiplayer/Replay_2013_12_01_0111.w3g', 'replays/Multiplayer/Replay_2013_12_25_1510.w3g', 'replays/Multiplayer/Replay_2015_08_10_0006.w3g', 'replays/Multiplayer/Replay_2013_11_30_1016.w3g', 'replays/Multiplayer/Replay_2014_01_21_2355.w3g', 'replays/Multiplayer/Replay_2015_08_25_1900.w3g', 'replays/Multiplayer/Replay_2013_10_04_0022.w3g', 'replays/Multiplayer/Replay_2013_11_23_1819.w3g', 'replays/Multiplayer/Replay_2013_11_01_1757.w3g', 'replays/Multiplayer/Replay_2015_06_29_2029.w3g', 'replays/Multiplayer/Replay_2013_08_02_0144.w3g', 'replays/Multiplayer/Replay_2013_10_01_1637.w3g', 'replays/Multiplayer/Replay_2013_10_16_1115.w3g', 
'replays/Multiplayer/Replay_2013_09_04_1721.w3g', 'replays/Multiplayer/Replay_2015_06_25_0112.w3g', 'replays/Multiplayer/Replay_2014_08_05_0043.w3g', 'replays/Multiplayer/Replay_2013_08_24_2141.w3g', 'replays/Multiplayer/Replay_2013_12_23_2258.w3g', 'replays/Multiplayer/Replay_2013_08_06_1213.w3g', 'replays/Multiplayer/Replay_2013_11_24_1940.w3g', 'replays/Multiplayer/Replay_2013_08_30_1049.w3g', 'replays/Multiplayer/Replay_2013_07_24_2124.w3g', 'replays/Multiplayer/Replay_2014_01_16_2217.w3g', 'replays/Multiplayer/Replay_2014_11_05_2237.w3g', 'replays/Multiplayer/Replay_2013_10_04_2207.w3g', 'replays/Multiplayer/Replay_2014_03_23_1843.w3g', 'replays/Multiplayer/Replay_2015_08_17_1239.w3g', 'replays/Multiplayer/Replay_2013_07_31_1829.w3g', 'replays/Multiplayer/Replay_2015_06_30_2133.w3g', 'replays/Multiplayer/Replay_2013_12_14_2221.w3g', 'replays/Multiplayer/Replay_2013_08_01_2329.w3g', 'replays/Multiplayer/Replay_2015_08_01_1639.w3g', 'replays/Multiplayer/Replay_2015_07_28_2016.w3g', 'replays/Multiplayer/Replay_2015_07_18_1458.w3g', 'replays/Multiplayer/Replay_2016_01_15_0005.w3g', 'replays/Multiplayer/Replay_2013_10_14_1646.w3g', 'replays/Multiplayer/Replay_2013_12_23_2052.w3g', 'replays/Multiplayer/Replay_2015_07_07_0152.w3g', 'replays/Multiplayer/Replay_2013_08_12_2326.w3g', 'replays/Multiplayer/Replay_2015_09_05_1818.w3g', 'replays/Multiplayer/Replay_2015_08_05_1757.w3g', 'replays/Multiplayer/Replay_2014_04_18_1102.w3g', 'replays/Multiplayer/Replay_2013_07_20_1355.w3g', 'replays/Multiplayer/Replay_2013_10_18_1730.w3g', 'replays/Multiplayer/Replay_2013_08_05_1433.w3g', 'replays/Multiplayer/Replay_2014_03_14_1928.w3g', 'replays/Multiplayer/Replay_2017_01_07_2220.w3g', 'replays/Multiplayer/Replay_2014_11_08_1607.w3g', 'replays/Multiplayer/Replay_2013_12_02_1930.w3g', 'replays/Multiplayer/Replay_2013_07_20_2354.w3g', 'replays/Multiplayer/Replay_2013_11_08_1342.w3g', 'replays/Multiplayer/Replay_2013_09_22_1828.w3g', 'replays/Multiplayer/Replay_2015_07_22_1932.w3g', 'replays/Multiplayer/Replay_2013_11_22_2033.w3g', 'replays/Multiplayer/Replay_2015_07_26_0059.w3g', 'replays/Multiplayer/Replay_2013_09_01_2201.w3g', 'replays/Multiplayer/Replay_2013_08_03_0205.w3g', 'replays/Multiplayer/Replay_2017_01_10_2127.w3g', 'replays/Multiplayer/Replay_2013_11_09_1529.w3g', 'replays/Multiplayer/Replay_2013_10_07_1650.w3g', 'replays/Multiplayer/Replay_2013_09_28_1338.w3g', 'replays/Multiplayer/Replay_2015_08_04_1826.w3g', 'replays/Multiplayer/Replay_2014_03_24_1951.w3g', 'replays/Multiplayer/Replay_2013_12_24_2223.w3g', 'replays/Multiplayer/Replay_2017_01_08_2228.w3g', 'replays/Multiplayer/Replay_2015_07_03_1905.w3g', 'replays/Multiplayer/Replay_2016_01_30_1311.w3g', 'replays/Multiplayer/Replay_2015_08_11_2327.w3g', 'replays/Multiplayer/Replay_2013_11_19_2346.w3g', 'replays/Multiplayer/Replay_2015_08_16_1905.w3g', 'replays/Multiplayer/Replay_2013_09_19_2034.w3g', 'replays/Multiplayer/Replay_2015_06_28_2058.w3g', 'replays/Multiplayer/Replay_2014_03_24_1904.w3g', 'replays/Multiplayer/Replay_2013_10_20_0228.w3g', 'replays/Multiplayer/Replay_2014_02_07_1903.w3g', 'replays/Multiplayer/Replay_2015_08_09_2245.w3g', 'replays/Multiplayer/Replay_2013_11_25_1527.w3g', 'replays/Multiplayer/Replay_2013_10_28_1700.w3g', 'replays/Multiplayer/Replay_2013_07_20_1424.w3g', 'replays/Multiplayer/Replay_2014_06_30_1920.w3g', 'replays/Multiplayer/Replay_2015_08_23_2100.w3g', 'replays/Multiplayer/Replay_2016_12_11_1537.w3g', 'replays/Multiplayer/Replay_2015_07_19_1719.w3g', 'replays/Multiplayer/Replay_2015_08_04_2059.w3g', 
'replays/Multiplayer/Replay_2015_09_06_1956.w3g', 'replays/Multiplayer/Replay_2013_08_13_1145.w3g', 'replays/Multiplayer/Replay_2013_10_04_2224.w3g', 'replays/Multiplayer/Replay_2015_07_22_1849.w3g', 'replays/Multiplayer/Replay_2015_06_29_1759.w3g', 'replays/Multiplayer/Replay_2013_11_18_1540.w3g', 'replays/Multiplayer/Replay_2015_08_14_1222.w3g', 'replays/Multiplayer/Replay_2013_11_22_1506.w3g', 'replays/Multiplayer/Replay_2015_08_12_0027.w3g', 'replays/Multiplayer/Replay_2013_08_15_2014.w3g', 'replays/Multiplayer/Replay_2013_07_31_2030.w3g', 'replays/Multiplayer/Replay_2015_07_24_2244.w3g', 'replays/Multiplayer/Replay_2017_01_10_2347.w3g', 'replays/Multiplayer/Replay_2015_08_05_1903.w3g', 'replays/Multiplayer/Replay_2013_08_16_2324.w3g', 'replays/Multiplayer/Replay_2013_11_10_2306.w3g', 'replays/Multiplayer/Replay_2014_02_09_1703.w3g', 'replays/Multiplayer/Replay_2014_05_06_2050.w3g', 'replays/Multiplayer/Replay_2015_08_13_2158.w3g', 'replays/Multiplayer/Replay_2013_08_25_2353.w3g', 'replays/Multiplayer/Replay_2014_05_06_1705.w3g', 'replays/Multiplayer/Replay_2013_11_01_1815.w3g', 'replays/Multiplayer/Replay_2015_08_18_2000.w3g', 'replays/Multiplayer/Replay_2013_08_11_2038.w3g', 'replays/Multiplayer/Replay_2014_06_28_1828.w3g', 'replays/Multiplayer/Replay_2017_01_07_2325.w3g', 'replays/Multiplayer/Replay_2015_07_26_1818.w3g', 'replays/Multiplayer/Replay_2015_06_25_1604.w3g', 'replays/Multiplayer/Replay_2013_12_02_1206.w3g', 'replays/Multiplayer/Replay_2013_08_16_2225.w3g', 'replays/Multiplayer/Replay_2013_10_06_0213.w3g', 'replays/Multiplayer/Replay_2013_08_25_1601.w3g', 'replays/Multiplayer/Replay_2015_06_24_1402.w3g', 'replays/Multiplayer/Replay_2013_09_01_1723.w3g', 'replays/Multiplayer/Replay_2013_10_14_1831.w3g', 'replays/Multiplayer/Replay_2013_11_27_0221.w3g', 'replays/Multiplayer/Replay_2015_07_11_2323.w3g', 'replays/Multiplayer/Replay_2014_11_14_2122.w3g', 'replays/Multiplayer/Replay_2015_08_17_1849.w3g', 'replays/Multiplayer/Replay_2013_08_23_2138.w3g', 'replays/Multiplayer/Replay_2015_08_26_2035.w3g', 'replays/Multiplayer/Replay_2013_08_24_0936.w3g', 'replays/Multiplayer/Replay_2015_08_17_1315.w3g', 'replays/Multiplayer/Replay_2015_06_24_2303.w3g', 'replays/Multiplayer/Replay_2013_11_28_1135.w3g', 'replays/Multiplayer/Replay_2013_09_23_1927.w3g', 'replays/Multiplayer/Replay_2015_09_05_1253.w3g', 'replays/Multiplayer/Replay_2013_08_27_1015.w3g', 'replays/Multiplayer/Replay_2013_08_29_1305.w3g', 'replays/Multiplayer/Replay_2015_09_03_1716.w3g', 'replays/Multiplayer/Replay_2013_10_29_0135.w3g', 'replays/Multiplayer/Replay_2015_06_25_1632.w3g', 'replays/Multiplayer/Replay_2013_10_08_2056.w3g', 'replays/Multiplayer/Replay_2015_07_20_0113.w3g', 'replays/Multiplayer/Replay_2013_08_24_2059.w3g', 'replays/Multiplayer/Replay_2015_06_25_1710.w3g', 'replays/Multiplayer/Replay_2013_11_24_2108.w3g', 'replays/Multiplayer/Replay_2015_08_10_1327.w3g', 'replays/Multiplayer/Replay_2014_11_14_2155.w3g', 'replays/Multiplayer/Replay_2014_04_01_1846.w3g', 'replays/Multiplayer/Replay_2014_04_28_1700.w3g', 'replays/Multiplayer/Replay_2015_07_25_1457.w3g', 'replays/Multiplayer/Replay_2015_08_27_2018.w3g', 'replays/Multiplayer/Replay_2015_06_28_1957.w3g', 'replays/Multiplayer/Replay_2013_11_22_1427.w3g', 'replays/Multiplayer/Replay_2015_09_02_1644.w3g', 'replays/Multiplayer/Replay_2013_09_13_1721.w3g', 'replays/Multiplayer/Replay_2014_04_20_2033.w3g', 'replays/Multiplayer/Replay_2013_10_18_1629.w3g', 'replays/Multiplayer/Replay_2014_06_28_1725.w3g', 'replays/Multiplayer/Replay_2013_10_31_1238.w3g', 
'replays/Multiplayer/Replay_2013_09_06_2155.w3g', 'replays/Multiplayer/Replay_2014_05_02_2053.w3g', 'replays/Multiplayer/Replay_2013_09_18_1247.w3g', 'replays/Multiplayer/Replay_2013_08_24_2239.w3g', 'replays/Multiplayer/Replay_2013_12_21_1402.w3g', 'replays/Multiplayer/Replay_2014_01_27_0036.w3g', 'replays/Multiplayer/Replay_2015_07_26_1855.w3g', 'replays/Multiplayer/Replay_2015_07_17_1522.w3g', 'replays/Multiplayer/Replay_2015_06_24_1505.w3g', 'replays/Multiplayer/Replay_2014_01_12_2019.w3g', 'replays/Multiplayer/Replay_2013_11_12_1609.w3g', 'replays/Multiplayer/Replay_2015_08_24_1656.w3g', 'replays/Multiplayer/Replay_2014_05_06_1905.w3g', 'replays/Multiplayer/Replay_2013_08_04_1158.w3g', 'replays/Multiplayer/Replay_2013_07_18_1639.w3g', 'replays/Multiplayer/Replay_2013_07_18_2032.w3g', 'replays/Multiplayer/Replay_2015_06_24_2048.w3g', 'replays/Multiplayer/Replay_2013_10_16_1313.w3g', 'replays/Multiplayer/Replay_2013_12_02_0005.w3g', 'replays/Multiplayer/Replay_2015_07_27_1700.w3g', 'replays/Multiplayer/Replay_2013_07_20_1315.w3g', 'replays/Multiplayer/Replay_2013_07_18_1614.w3g', 'replays/Multiplayer/Replay_2013_10_16_0132.w3g', 'replays/Multiplayer/Replay_2014_04_02_2007.w3g', 'replays/Multiplayer/Replay_2014_07_04_1915.w3g', 'replays/Multiplayer/Replay_2015_06_29_1720.w3g', 'replays/Multiplayer/Replay_2013_09_18_1530.w3g', 'replays/Multiplayer/Replay_2013_10_20_0144.w3g', 'replays/Multiplayer/Replay_2013_07_28_0211.w3g', 'replays/Multiplayer/Replay_2016_12_11_1606.w3g', 'replays/Multiplayer/Replay_2015_08_20_1921.w3g', 'replays/Multiplayer/Replay_2014_01_11_0151.w3g', 'replays/Multiplayer/Replay_2014_06_30_0024.w3g', 'replays/Multiplayer/Replay_2013_08_05_2215.w3g', 'replays/Multiplayer/Replay_2015_07_26_1736.w3g', 'replays/Multiplayer/Replay_2015_07_22_2308.w3g', 'replays/Multiplayer/Replay_2015_08_13_1338.w3g', 'replays/Multiplayer/Replay_2014_05_03_1928.w3g', 'replays/Multiplayer/Replay_2015_07_25_2215.w3g', 'replays/Multiplayer/Replay_2013_12_01_2315.w3g', 'replays/Multiplayer/Replay_2014_11_03_2347.w3g', 'replays/Multiplayer/Replay_2015_09_06_2108.w3g', 'replays/Multiplayer/Replay_2015_08_02_0438.w3g', 'replays/Multiplayer/Replay_2013_11_24_1902.w3g', 'replays/Multiplayer/Replay_2014_01_30_2316.w3g', 'replays/Multiplayer/Replay_2015_08_12_1927.w3g', 'replays/Multiplayer/Replay_2017_01_10_2340.w3g', 'replays/Multiplayer/Replay_2015_06_29_1413.w3g', 'replays/Multiplayer/Replay_2015_08_25_2109.w3g', 'replays/Multiplayer/Replay_2013_11_22_1257.w3g', 'replays/Multiplayer/Replay_2013_11_23_1631.w3g', 'replays/Multiplayer/Replay_2014_01_16_2255.w3g', 'replays/Multiplayer/Replay_2014_05_13_2347.w3g', 'replays/Multiplayer/Replay_2013_09_17_0048.w3g', 'replays/Multiplayer/Replay_2015_09_03_2102.w3g', 'replays/Multiplayer/Replay_2014_04_30_1246.w3g', 'replays/Multiplayer/Replay_2013_12_31_1520.w3g', 'replays/Multiplayer/Replay_2015_08_06_1716.w3g', 'replays/Multiplayer/Replay_2013_10_28_1608.w3g', 'replays/Multiplayer/Replay_2013_08_03_2026.w3g', 'replays/Multiplayer/Replay_2013_09_21_1820.w3g', 'replays/Multiplayer/Replay_2014_03_31_1951.w3g', 'replays/Multiplayer/Replay_2013_08_18_1832.w3g', 'replays/Multiplayer/Replay_2013_10_11_1804.w3g', 'replays/Multiplayer/Replay_2015_08_10_1120.w3g', 'replays/Multiplayer/Replay_2013_08_01_2243.w3g', 'replays/Multiplayer/Replay_2013_08_16_0010.w3g', 'replays/Multiplayer/Replay_2015_08_01_2045.w3g', 'replays/Multiplayer/Replay_2015_09_01_1558.w3g', 'replays/Multiplayer/Replay_2014_07_06_1649.w3g', 'replays/Multiplayer/Replay_2013_09_25_1512.w3g', 
'replays/Multiplayer/Replay_2013_08_13_1745.w3g', 'replays/Multiplayer/Replay_2015_08_31_1400.w3g', 'replays/Multiplayer/Replay_2014_05_07_1922.w3g', 'replays/Multiplayer/Replay_2013_08_01_0146.w3g', 'replays/Multiplayer/Replay_2013_11_21_0118.w3g', 'replays/Multiplayer/Replay_2013_08_29_1150.w3g', 'replays/Multiplayer/Replay_2015_08_02_1514.w3g', 'replays/Multiplayer/Replay_2013_12_01_2245.w3g', 'replays/Multiplayer/Replay_2013_09_10_1601.w3g', 'replays/Multiplayer/Replay_2015_06_30_1608.w3g', 'replays/Multiplayer/Replay_2013_12_26_1539.w3g', 'replays/Multiplayer/Replay_2013_10_01_2235.w3g', 'replays/Multiplayer/Replay_2014_04_25_1612.w3g', 'replays/Multiplayer/Replay_2014_04_03_1743.w3g', 'replays/Multiplayer/Replay_2015_08_24_2227.w3g', 'replays/Multiplayer/Replay_2013_08_10_2040.w3g', 'replays/Multiplayer/Replay_2013_08_15_1957.w3g', 'replays/Multiplayer/Replay_2013_07_28_1206.w3g', 'replays/Multiplayer/Replay_2013_11_24_1630.w3g', 'replays/Multiplayer/Replay_2014_03_19_0055.w3g', 'replays/Multiplayer/Replay_2015_06_29_1246.w3g', 'replays/Multiplayer/Replay_2013_10_29_2102.w3g', 'replays/Multiplayer/Replay_2015_07_03_0330.w3g', 'replays/Multiplayer/Replay_2014_04_28_1757.w3g', 'replays/Multiplayer/Replay_2015_09_03_0045.w3g', 'replays/Multiplayer/Replay_2013_09_26_0152.w3g', 'replays/Multiplayer/Replay_2015_07_19_2359.w3g', 'replays/Multiplayer/Replay_2015_08_13_1937.w3g', 'replays/Multiplayer/Replay_2013_08_03_1506.w3g', 'replays/Multiplayer/Replay_2015_07_17_1450.w3g', 'replays/Multiplayer/Replay_2015_08_20_1324.w3g', 'replays/Multiplayer/Replay_2013_10_29_0113.w3g', 'replays/Multiplayer/Replay_2013_09_06_2301.w3g', 'replays/Multiplayer/Replay_2013_11_22_1313.w3g', 'replays/Multiplayer/Replay_2013_12_24_0018.w3g', 'replays/Multiplayer/Replay_2015_09_06_0006.w3g', 'replays/Multiplayer/Replay_2015_08_23_2150.w3g', 'replays/Multiplayer/Replay_2014_11_07_1942.w3g', 'replays/Multiplayer/Replay_2013_12_08_0124.w3g', 'replays/Multiplayer/Replay_2015_07_03_0414.w3g', 'replays/Multiplayer/Replay_2014_05_01_0003.w3g', 'replays/Multiplayer/Replay_2014_05_24_1733.w3g', 'replays/Multiplayer/Replay_2013_10_31_1141.w3g', 'replays/Multiplayer/Replay_2015_07_28_1301.w3g', 'replays/Multiplayer/Replay_2013_08_18_2120.w3g', 'replays/Multiplayer/Replay_2013_11_10_2221.w3g', 'replays/Multiplayer/Replay_2013_08_16_2152.w3g', 'replays/Multiplayer/Replay_2013_08_25_1508.w3g', 'replays/Multiplayer/Replay_2014_02_07_0002.w3g', 'replays/Multiplayer/Replay_2015_08_04_1928.w3g', 'replays/Multiplayer/Replay_2015_06_28_0009.w3g', 'replays/Multiplayer/Replay_2013_11_28_1102.w3g', 'replays/Multiplayer/Replay_2015_08_02_0009.w3g', 'replays/Multiplayer/Replay_2014_02_07_2249.w3g', 'replays/Multiplayer/Replay_2013_08_11_1644.w3g', 'replays/Multiplayer/Replay_2015_08_27_2204.w3g', 'replays/Multiplayer/Replay_2013_10_03_2150.w3g', 'replays/Multiplayer/Replay_2014_01_10_0002.w3g', 'replays/Multiplayer/Replay_2014_06_29_2346.w3g', 'replays/Multiplayer/Replay_2013_10_30_0105.w3g', 'replays/Multiplayer/Replay_2013_08_17_2201.w3g', 'replays/Multiplayer/Replay_2013_08_30_1119.w3g', 'replays/Multiplayer/Replay_2013_10_28_0109.w3g', 'replays/Multiplayer/Replay_2015_07_09_2359.w3g', 'replays/Multiplayer/Replay_2014_05_09_1720.w3g', 'replays/Multiplayer/Replay_2015_08_24_1732.w3g', 'replays/Multiplayer/Replay_2013_11_23_1728.w3g', 'replays/Multiplayer/Replay_2015_07_23_2115.w3g', 'replays/Multiplayer/Replay_2013_07_22_0231.w3g', 'replays/Multiplayer/Replay_2013_12_25_1013.w3g', 'replays/Multiplayer/Replay_2013_12_25_1126.w3g', 
'replays/Multiplayer/Replay_2013_12_04_0008.w3g', 'replays/Multiplayer/Replay_2016_02_12_2352.w3g', 'replays/Multiplayer/Replay_2015_08_18_1847.w3g', 'replays/Multiplayer/Replay_2013_12_24_1629.w3g', 'replays/Multiplayer/Replay_2015_08_09_2315.w3g', 'replays/Multiplayer/Replay_2013_08_29_1940.w3g', 'replays/Multiplayer/Replay_2015_08_09_1747.w3g', 'replays/Multiplayer/Replay_2014_02_04_2016.w3g', 'replays/Multiplayer/Replay_2015_07_12_0017.w3g', 'replays/Multiplayer/Replay_2015_08_27_2103.w3g', 'replays/Multiplayer/Replay_2013_08_03_1921.w3g', 'replays/Multiplayer/Replay_2015_07_11_2334.w3g', 'replays/Multiplayer/Replay_2013_10_18_1532.w3g', 'replays/Multiplayer/Replay_2013_09_06_1808.w3g', 'replays/Multiplayer/Replay_2013_08_12_0959.w3g', 'replays/Multiplayer/Replay_2013_11_30_2056.w3g', 'replays/Multiplayer/Replay_2015_06_30_1938.w3g', 'replays/Multiplayer/Replay_2014_02_08_2024.w3g', 'replays/Multiplayer/Replay_2014_02_04_1823.w3g', 'replays/Multiplayer/Replay_2014_01_13_1159.w3g', 'replays/Multiplayer/Replay_2013_10_14_1926.w3g', 'replays/Multiplayer/Replay_2014_04_03_1624.w3g', 'replays/Multiplayer/Replay_2013_07_20_1256.w3g', 'replays/Multiplayer/Replay_2013_11_09_0020.w3g', 'replays/Multiplayer/Replay_2013_08_10_1706.w3g', 'replays/Multiplayer/Replay_2016_02_13_0211.w3g', 'replays/Multiplayer/Replay_2013_08_30_0014.w3g', 'replays/Multiplayer/Replay_2013_10_29_1122.w3g', 'replays/Multiplayer/Replay_2015_07_31_1440.w3g', 'replays/Multiplayer/Replay_2013_09_21_1627.w3g', 'replays/Multiplayer/Replay_2013_08_29_1207.w3g', 'replays/Multiplayer/Replay_2015_08_13_1259.w3g', 'replays/Multiplayer/Replay_2015_08_04_1507.w3g', 'replays/Multiplayer/Replay_2014_01_29_2343.w3g', 'replays/Multiplayer/Replay_2013_10_26_2207.w3g', 'replays/Multiplayer/Replay_2015_07_17_1320.w3g', 'replays/Multiplayer/Replay_2013_10_26_2329.w3g', 'replays/Multiplayer/Replay_2017_01_10_2056.w3g', 'replays/Multiplayer/Replay_2015_07_24_1832.w3g', 'replays/Multiplayer/Replay_2013_10_02_2338.w3g', 'replays/Multiplayer/Replay_2013_08_25_1403.w3g', 'replays/Multiplayer/Replay_2015_06_26_1115.w3g', 'replays/Multiplayer/Replay_2013_12_28_1037.w3g', 'replays/Multiplayer/Replay_2015_07_24_2058.w3g', 'replays/Multiplayer/Replay_2013_08_07_1630.w3g', 'replays/Multiplayer/Replay_2014_02_04_2314.w3g', 'replays/Multiplayer/Replay_2013_07_26_1219.w3g', 'replays/Multiplayer/Replay_2015_07_21_2207.w3g', 'replays/Multiplayer/Replay_2013_08_01_0107.w3g', 'replays/Multiplayer/Replay_2013_09_15_1523.w3g', 'replays/Multiplayer/Replay_2014_05_07_1754.w3g', 'replays/Multiplayer/Replay_2015_06_25_0217.w3g', 'replays/Multiplayer/Replay_2014_02_02_2342.w3g', 'replays/Multiplayer/Replay_2015_07_17_1215.w3g', 'replays/Multiplayer/Replay_2013_09_21_1559.w3g', 'replays/Multiplayer/Replay_2014_05_17_1849.w3g', 'replays/Multiplayer/Replay_2014_06_30_2005.w3g', 'replays/Multiplayer/Replay_2015_06_29_1859.w3g', 'replays/Multiplayer/Replay_2015_07_19_1953.w3g', 'replays/Multiplayer/Replay_2014_05_01_2107.w3g', 'replays/Multiplayer/Replay_2013_11_30_2255.w3g', 'replays/Multiplayer/Replay_2015_08_11_1247.w3g', 'replays/Multiplayer/Replay_2015_06_23_2127.w3g', 'replays/Multiplayer/Replay_2015_06_25_0158.w3g', 'replays/Multiplayer/Replay_2013_11_30_0038.w3g', 'replays/Multiplayer/Replay_2015_06_29_1641.w3g', 'replays/Multiplayer/Replay_2013_08_11_1934.w3g', 'replays/Multiplayer/Replay_2014_11_05_2219.w3g', 'replays/Multiplayer/Replay_2014_05_04_1702.w3g', 'replays/Multiplayer/Replay_2015_07_25_0217.w3g', 'replays/Multiplayer/Replay_2014_02_03_2249.w3g', 
'replays/Multiplayer/Replay_2017_01_07_2210.w3g', 'replays/Multiplayer/Replay_2013_08_17_2002.w3g', 'replays/Multiplayer/Replay_2013_08_12_1954.w3g', 'replays/Multiplayer/Replay_2013_12_26_1906.w3g', 'replays/Multiplayer/Replay_2013_07_18_1828.w3g', 'replays/Multiplayer/Replay_2013_08_18_1148.w3g', 'replays/Multiplayer/Replay_2014_05_04_2351.w3g', 'replays/Multiplayer/Replay_2013_08_06_1257.w3g', 'replays/Multiplayer/Replay_2014_02_10_2255.w3g', 'replays/Multiplayer/Replay_2015_08_20_2017.w3g', 'replays/Multiplayer/Replay_2013_09_23_2011.w3g', 'replays/Multiplayer/Replay_2015_07_09_0039.w3g', 'replays/Multiplayer/Replay_2015_06_24_0040.w3g', 'replays/Multiplayer/Replay_2014_01_11_0047.w3g', 'replays/Multiplayer/Replay_2013_08_19_2228.w3g', 'replays/Multiplayer/Replay_2015_07_05_2350.w3g', 'replays/Multiplayer/Replay_2013_11_23_1843.w3g', 'replays/Multiplayer/Replay_2014_05_17_1408.w3g', 'replays/Multiplayer/Replay_2013_07_20_1339.w3g', 'replays/Multiplayer/Replay_2015_08_16_1945.w3g', 'replays/Multiplayer/Replay_2015_07_01_0249.w3g', 'replays/Multiplayer/Replay_2014_01_27_0015.w3g', 'replays/Multiplayer/Replay_2015_06_26_1457.w3g', 'replays/Multiplayer/Replay_2015_07_26_0012.w3g', 'replays/Multiplayer/Replay_2015_08_31_1305.w3g', 'replays/Multiplayer/Replay_2013_09_15_1624.w3g', 'replays/Multiplayer/Replay_2013_08_25_1848.w3g', 'replays/Multiplayer/Replay_2016_01_30_1156.w3g', 'replays/Multiplayer/Replay_2014_01_31_2334.w3g', 'replays/Multiplayer/Replay_2014_01_29_2244.w3g', 'replays/Multiplayer/Replay_2013_07_18_1930.w3g', 'replays/Multiplayer/Replay_2013_09_02_2348.w3g', 'replays/Multiplayer/Replay_2013_10_20_1729.w3g', 'replays/Multiplayer/Replay_2014_03_31_1908.w3g', 'replays/Multiplayer/Replay_2013_09_16_1732.w3g', 'replays/Multiplayer/Replay_2013_11_19_0031.w3g', 'replays/Multiplayer/Replay_2013_08_12_1835.w3g', 'replays/Multiplayer/Replay_2013_09_11_1829.w3g', 'replays/Multiplayer/Replay_2013_09_25_1608.w3g', 'replays/Multiplayer/Replay_2014_02_06_2316.w3g', 'replays/Multiplayer/Replay_2013_10_02_1416.w3g', 'replays/Multiplayer/Replay_2013_10_29_1532.w3g', 'replays/Multiplayer/Replay_2017_01_10_2226.w3g', 'replays/Multiplayer/Replay_2013_08_04_1218.w3g', 'replays/Multiplayer/Replay_2013_09_26_0145.w3g', 'replays/Multiplayer/Replay_2013_11_09_0112.w3g', 'replays/Multiplayer/Replay_2013_08_23_1916.w3g', 'replays/Multiplayer/Replay_2013_12_08_0011.w3g', 'replays/Multiplayer/Replay_2015_07_08_1901.w3g', 'replays/Multiplayer/Replay_2015_09_02_1842.w3g', 'replays/Multiplayer/Replay_2015_06_30_0144.w3g', 'replays/Multiplayer/Replay_2013_12_26_0008.w3g', 'replays/Multiplayer/Replay_2013_08_26_1846.w3g', 'replays/Multiplayer/Replay_2015_09_06_0148.w3g', 'replays/Multiplayer/Replay_2015_08_01_1516.w3g', 'replays/Multiplayer/Replay_2015_07_15_2247.w3g', 'replays/Multiplayer/Replay_2015_08_12_2011.w3g', 'replays/Multiplayer/Replay_2015_08_08_1925.w3g', 'replays/Multiplayer/Replay_2015_07_03_0138.w3g', 'replays/Multiplayer/Replay_2015_07_16_2133.w3g', 'replays/Multiplayer/Replay_2014_04_21_2030.w3g', 'replays/Multiplayer/Replay_2013_12_18_1951.w3g', 'replays/Multiplayer/Replay_2014_05_11_1537.w3g', 'replays/Multiplayer/Replay_2016_02_17_0136.w3g', 'replays/Multiplayer/Replay_2013_10_23_1551.w3g', 'replays/Multiplayer/Replay_2013_10_23_1624.w3g', 'replays/Multiplayer/Replay_2014_04_25_1750.w3g', 'replays/Multiplayer/Replay_2015_06_23_2029.w3g', 'replays/Multiplayer/Replay_2014_04_28_1132.w3g', 'replays/Multiplayer/Replay_2013_08_15_2038.w3g', 'replays/Multiplayer/Replay_2013_10_29_0025.w3g', 
'replays/Multiplayer/Replay_2014_03_25_2205.w3g', 'replays/Multiplayer/Replay_2014_05_11_1447.w3g', 'replays/Multiplayer/Replay_2015_08_20_1403.w3g', 'replays/Multiplayer/Replay_2016_02_16_0151.w3g', 'replays/Multiplayer/Replay_2013_08_09_2302.w3g', 'replays/Multiplayer/Replay_2015_07_28_0028.w3g', 'replays/Multiplayer/Replay_2014_02_04_2231.w3g', 'replays/Multiplayer/Replay_2015_08_02_1320.w3g', 'replays/Multiplayer/Replay_2013_08_20_0108.w3g', 'replays/Multiplayer/Replay_2013_08_14_1121.w3g', 'replays/Multiplayer/Replay_2014_04_30_2218.w3g', 'replays/Multiplayer/Replay_2015_08_01_1415.w3g', 'replays/Multiplayer/Replay_2014_01_16_2116.w3g', 'replays/Multiplayer/Replay_2013_07_19_1754.w3g', 'replays/Multiplayer/Replay_2013_08_31_2250.w3g', 'replays/Multiplayer/Replay_2014_04_21_2142.w3g', 'replays/Multiplayer/Replay_2015_08_01_1905.w3g', 'replays/Multiplayer/Replay_2013_11_16_1418.w3g', 'replays/Multiplayer/Replay_2013_10_27_0105.w3g', 'replays/Multiplayer/Replay_2014_07_04_1826.w3g', 'replays/Multiplayer/Replay_2013_10_28_0106.w3g', 'replays/Multiplayer/Replay_2013_10_22_1637.w3g', 'replays/Multiplayer/Replay_2013_09_22_0003.w3g', 'replays/Multiplayer/Replay_2015_08_13_1427.w3g', 'replays/Multiplayer/Replay_2015_09_05_1417.w3g', 'replays/Multiplayer/Replay_2014_03_25_2044.w3g', 'replays/Multiplayer/Replay_2014_05_02_2230.w3g', 'replays/Multiplayer/Replay_2013_12_19_2004.w3g', 'replays/Multiplayer/Replay_2013_08_21_1720.w3g', 'replays/Multiplayer/Replay_2015_08_20_0218.w3g', 'replays/Multiplayer/Replay_2013_11_09_0152.w3g', 'replays/Multiplayer/Replay_2015_08_05_2154.w3g', 'replays/Multiplayer/Replay_2013_07_30_2046.w3g', 'replays/Multiplayer/Replay_2013_08_16_2143.w3g', 'replays/Multiplayer/Replay_2013_08_18_1432.w3g', 'replays/Multiplayer/Replay_2014_02_02_2300.w3g', 'replays/Multiplayer/Replay_2015_07_29_1322.w3g', 'replays/Multiplayer/Replay_2014_04_25_1714.w3g', 'replays/Multiplayer/Replay_2013_07_20_1814.w3g', 'replays/Multiplayer/Replay_2013_12_21_1430.w3g', 'replays/Multiplayer/Replay_2013_07_24_0059.w3g', 'replays/Multiplayer/Replay_2015_07_01_0326.w3g', 'replays/Multiplayer/Replay_2015_07_27_2311.w3g', 'replays/Multiplayer/Replay_2017_01_07_2232.w3g', 'replays/Multiplayer/Replay_2013_10_08_2126.w3g', 'replays/Multiplayer/Replay_2013_10_03_2102.w3g', 'replays/Multiplayer/Replay_2013_09_26_1219.w3g', 'replays/Multiplayer/Replay_2013_10_08_2158.w3g', 'replays/Multiplayer/Replay_2014_02_03_0128.w3g', 'replays/Multiplayer/Replay_2014_07_08_2019.w3g', 'replays/Multiplayer/Replay_2013_09_26_1229.w3g', 'replays/Multiplayer/Replay_2013_12_24_1559.w3g', 'replays/Multiplayer/Replay_2015_07_07_0240.w3g', 'replays/Multiplayer/Replay_2015_08_07_1824.w3g', 'replays/Multiplayer/Replay_2015_06_22_0141.w3g', 'replays/Multiplayer/Replay_2013_08_09_0210.w3g', 'replays/Multiplayer/Replay_2015_07_04_2038.w3g', 'replays/Multiplayer/Replay_2013_11_27_2347.w3g', 'replays/Multiplayer/Replay_2013_12_15_1308.w3g', 'replays/Multiplayer/Replay_2014_05_17_1437.w3g', 'replays/Multiplayer/Replay_2013_08_03_0122.w3g', 'replays/Multiplayer/Replay_2014_03_23_1934.w3g', 'replays/Multiplayer/Replay_2015_08_25_2000.w3g', 'replays/Multiplayer/Replay_2013_10_30_0114.w3g', 'replays/Multiplayer/Replay_2013_10_09_1645.w3g', 'replays/Multiplayer/Replay_2015_07_02_1952.w3g', 'replays/Multiplayer/Replay_2013_12_24_1753.w3g', 'replays/Multiplayer/Replay_2014_02_01_0002.w3g', 'replays/Multiplayer/Replay_2015_06_24_2131.w3g', 'replays/Multiplayer/Replay_2017_01_07_2308.w3g', 'replays/Multiplayer/Replay_2013_09_09_1743.w3g', 
'replays/Multiplayer/Replay_2015_07_25_1641.w3g', 'replays/Multiplayer/Replay_2013_07_18_1944.w3g', 'replays/Multiplayer/Replay_2014_04_20_2124.w3g', 'replays/Multiplayer/Replay_2013_11_23_1809.w3g', 'replays/Multiplayer/Replay_2013_11_30_0959.w3g', 'replays/Multiplayer/Replay_2013_12_02_1209.w3g', 'replays/Multiplayer/Replay_2013_11_09_1429.w3g', 'replays/Multiplayer/Replay_2017_01_08_2318.w3g', 'replays/Multiplayer/Replay_2015_08_10_1436.w3g', 'replays/Multiplayer/Replay_2013_08_12_1657.w3g', 'replays/Multiplayer/Replay_2014_05_17_1725.w3g', 'replays/Multiplayer/Replay_2015_07_16_2325.w3g', 'replays/Multiplayer/Replay_2013_11_08_1213.w3g', 'replays/Multiplayer/Replay_2013_08_27_0927.w3g', 'replays/Multiplayer/Replay_2013_12_27_2114.w3g', 'replays/Multiplayer/Replay_2014_03_25_1955.w3g', 'replays/Multiplayer/Replay_2013_09_01_1921.w3g', 'replays/Multiplayer/Replay_2015_07_07_0052.w3g', 'replays/Multiplayer/Replay_2015_07_01_1516.w3g', 'replays/Multiplayer/Replay_2014_05_03_2201.w3g', 'replays/Multiplayer/Replay_2013_08_07_2007.w3g', 'replays/Multiplayer/Replay_2015_08_09_1403.w3g', 'replays/Multiplayer/Replay_2015_08_01_1739.w3g', 'replays/Multiplayer/Replay_2014_04_02_2117.w3g', 'replays/Multiplayer/Replay_2015_08_26_2026.w3g', 'replays/Multiplayer/Replay_2013_10_02_1344.w3g', 'replays/Multiplayer/Replay_2015_08_08_1734.w3g', 'replays/Multiplayer/Replay_2013_10_28_0230.w3g', 'replays/Multiplayer/Replay_2013_08_15_2117.w3g', 'replays/Multiplayer/Replay_2015_08_02_0214.w3g', 'replays/Multiplayer/Replay_2014_07_06_1559.w3g', 'replays/Multiplayer/Replay_2015_07_31_2211.w3g', 'replays/Multiplayer/Replay_2014_03_23_2018.w3g', 'replays/Multiplayer/Replay_2014_01_21_2255.w3g', 'replays/Multiplayer/Replay_2015_08_25_2128.w3g', 'replays/Multiplayer/Replay_2014_02_10_2352.w3g', 'replays/Multiplayer/Replay_2014_01_17_1937.w3g', 'replays/Multiplayer/Replay_2015_06_26_1002.w3g', 'replays/Multiplayer/Replay_2013_07_22_0121.w3g', 'replays/Multiplayer/Replay_2013_08_24_2003.w3g', 'replays/Multiplayer/Replay_2013_09_25_1419.w3g', 'replays/Multiplayer/Replay_2015_08_05_0114.w3g', 'replays/Multiplayer/Replay_2013_12_15_1358.w3g', 'replays/Multiplayer/Replay_2013_11_23_1810.w3g', 'replays/Multiplayer/Replay_2015_07_25_1715.w3g', 'replays/Multiplayer/Replay_2014_02_03_2301.w3g', 'replays/Multiplayer/Replay_2015_08_26_1922.w3g', 'replays/Multiplayer/Replay_2013_11_19_2211.w3g', 'replays/Multiplayer/Replay_2015_08_12_1812.w3g', 'replays/Multiplayer/Replay_2014_01_17_1816.w3g', 'replays/Multiplayer/Replay_2013_08_25_1817.w3g', 'replays/Multiplayer/Replay_2015_08_25_2232.w3g', 'replays/Multiplayer/Replay_2013_08_12_2238.w3g', 'replays/Multiplayer/Replay_2014_05_02_2145.w3g', 'replays/Multiplayer/Replay_2014_11_02_1221.w3g', 'replays/Multiplayer/Replay_2014_05_05_1308.w3g', 'replays/Multiplayer/Replay_2014_04_01_1803.w3g', 'replays/Multiplayer/Replay_2015_08_23_1324.w3g', 'replays/Multiplayer/Replay_2014_04_28_1643.w3g', 'replays/Multiplayer/Replay_2015_07_26_2330.w3g', 'replays/Multiplayer/Replay_2014_02_03_0236.w3g', 'replays/Multiplayer/Replay_2015_08_05_1950.w3g', 'replays/Multiplayer/Replay_2013_12_02_2352.w3g', 'replays/Multiplayer/Replay_2015_06_23_1935.w3g', 'replays/Multiplayer/Replay_2015_07_23_2345.w3g', 'replays/Multiplayer/Replay_2013_07_18_1636.w3g', 'replays/Multiplayer/Replay_2013_10_11_1501.w3g', 'replays/Multiplayer/Replay_2014_06_29_1838.w3g', 'replays/Multiplayer/Replay_2013_08_04_1623.w3g', 'replays/Multiplayer/Replay_2013_10_09_1609.w3g', 'replays/Multiplayer/Replay_2013_10_09_2113.w3g', 
'replays/Multiplayer/Replay_2014_05_04_1618.w3g', 'replays/Multiplayer/Replay_2013_08_24_2252.w3g', 'replays/Multiplayer/Replay_2016_07_07_2023.w3g', 'replays/Multiplayer/Replay_2013_08_03_1604.w3g', 'replays/Multiplayer/Replay_2013_10_28_0150.w3g', 'replays/Multiplayer/Replay_2013_09_10_1435.w3g', 'replays/Multiplayer/Replay_2013_07_28_1243.w3g', 'replays/Multiplayer/Replay_2014_01_31_0035.w3g', 'replays/Multiplayer/Replay_2014_03_31_2046.w3g', 'replays/Multiplayer/Replay_2014_11_08_1534.w3g', 'replays/Multiplayer/Replay_2017_01_10_2155.w3g', 'replays/Multiplayer/Replay_2015_07_22_2053.w3g', 'replays/Multiplayer/Replay_2015_07_24_2335.w3g', 'replays/Multiplayer/Replay_2014_02_05_0021.w3g', 'replays/Multiplayer/Replay_2014_07_06_1505.w3g', 'replays/Multiplayer/Replay_2013_12_21_2217.w3g', 'replays/Multiplayer/Replay_2013_08_18_1251.w3g', 'replays/Multiplayer/Replay_2013_10_21_1535.w3g', 'replays/Multiplayer/Replay_2013_11_30_1956.w3g', 'replays/Multiplayer/Replay_2014_05_04_2357.w3g', 'replays/Multiplayer/Replay_2015_07_23_0030.w3g', 'replays/Multiplayer/Replay_2014_02_06_2229.w3g', 'replays/Multiplayer/Replay_2015_07_21_2252.w3g', 'replays/Multiplayer/Replay_2013_12_10_0223.w3g', 'replays/Multiplayer/Replay_2013_12_02_1246.w3g', 'replays/Multiplayer/Replay_2015_07_16_2232.w3g', 'replays/Multiplayer/Replay_2013_11_19_2257.w3g', 'replays/Multiplayer/Replay_2014_01_31_2238.w3g', 'replays/Multiplayer/Replay_2015_08_27_2123.w3g', 'replays/Multiplayer/Replay_2013_09_29_0137.w3g', 'replays/Multiplayer/Replay_2013_09_18_1723.w3g', 'replays/Multiplayer/Replay_2013_12_26_1450.w3g', 'replays/Multiplayer/Replay_2015_08_09_2147.w3g', 'replays/Multiplayer/Replay_2013_11_16_0024.w3g', 'replays/Multiplayer/Replay_2013_10_24_2240.w3g', 'replays/Multiplayer/Replay_2015_09_03_2154.w3g', 'replays/Multiplayer/Replay_2015_08_23_2239.w3g', 'replays/Multiplayer/Replay_2014_01_31_2207.w3g', 'replays/Multiplayer/Replay_2015_08_23_2016.w3g', 'replays/Multiplayer/Replay_2015_07_01_1300.w3g', 'replays/Multiplayer/Replay_2015_08_02_0308.w3g', 'replays/Multiplayer/Replay_2013_07_20_1317.w3g', 'replays/Multiplayer/Replay_2013_10_09_1431.w3g', 'replays/Multiplayer/Replay_2014_05_24_1821.w3g', 'replays/Multiplayer/Replay_2013_11_23_1854.w3g', 'replays/Multiplayer/Replay_2015_06_30_1400.w3g', 'replays/Multiplayer/Replay_2013_11_18_1741.w3g', 'replays/Multiplayer/Replay_2013_12_09_1502.w3g', 'replays/Multiplayer/Replay_2015_08_02_0050.w3g', 'replays/Multiplayer/Replay_2013_12_02_1840.w3g', 'replays/Multiplayer/Replay_2015_09_05_1151.w3g', 'replays/Multiplayer/Replay_2013_11_22_1450.w3g', 'replays/Multiplayer/Replay_2013_12_17_1759.w3g', 'replays/Multiplayer/Replay_2013_10_11_1724.w3g', 'replays/Multiplayer/Replay_2013_08_23_2003.w3g', 'replays/Multiplayer/Replay_2015_06_30_2030.w3g', 'replays/Multiplayer/Replay_2014_01_27_0128.w3g', 'replays/Multiplayer/Replay_2015_08_05_2054.w3g', 'replays/Multiplayer/Replay_2013_11_22_1520.w3g', 'replays/Multiplayer/Replay_2015_08_14_1147.w3g', 'replays/Multiplayer/Replay_2013_08_04_1252.w3g', 'replays/Multiplayer/Replay_2013_11_18_2253.w3g', 'replays/Multiplayer/Replay_2014_05_02_1335.w3g', 'replays/Multiplayer/Replay_2013_08_25_1421.w3g', 'replays/Multiplayer/Replay_2014_05_06_1753.w3g', 'replays/Multiplayer/Replay_2013_12_06_1858.w3g', 'replays/Multiplayer/Replay_2015_06_27_1546.w3g', 'replays/Multiplayer/Replay_2013_12_06_1935.w3g', 'replays/Multiplayer/Replay_2013_09_05_0016.w3g', 'replays/Multiplayer/Replay_2013_09_10_1637.w3g', 'replays/Multiplayer/Replay_2015_07_30_1910.w3g', 
'replays/Multiplayer/Replay_2013_12_19_1907.w3g', 'replays/Multiplayer/Replay_2015_06_25_1954.w3g', 'replays/Multiplayer/Replay_2015_08_01_1952.w3g', 'replays/Multiplayer/Replay_2013_12_03_2050.w3g', 'replays/Multiplayer/Replay_2015_07_31_1523.w3g', 'replays/Multiplayer/Replay_2014_01_31_2055.w3g', 'replays/Multiplayer/Replay_2015_07_25_0110.w3g', 'replays/Multiplayer/Replay_2014_04_03_2247.w3g', 'replays/Multiplayer/Replay_2013_12_10_1613.w3g', 'replays/Multiplayer/Replay_2013_12_02_1244.w3g', 'replays/Multiplayer/Replay_2013_09_27_1209.w3g', 'replays/Multiplayer/Replay_2015_08_13_1628.w3g', 'replays/Multiplayer/Replay_2014_06_29_2117.w3g', 'replays/Multiplayer/Replay_2015_08_05_1635.w3g', 'replays/Multiplayer/Replay_2014_04_30_2126.w3g', 'replays/Multiplayer/Replay_2015_07_03_2248.w3g', 'replays/Multiplayer/Replay_2013_09_10_1717.w3g', 'replays/Multiplayer/Replay_2014_02_04_0020.w3g', 'replays/Multiplayer/Replay_2016_01_14_2359.w3g', 'replays/Multiplayer/Replay_2014_06_29_2038.w3g', 'replays/Multiplayer/Replay_2015_06_23_2322.w3g', 'replays/Multiplayer/Replay_2014_01_22_0103.w3g', 'replays/Multiplayer/Replay_2015_08_07_1731.w3g', 'replays/Multiplayer/Replay_2013_08_16_2209.w3g', 'replays/Multiplayer/Replay_2013_08_09_1634.w3g', 'replays/Multiplayer/Replay_2014_04_20_1948.w3g', 'replays/Multiplayer/Replay_2013_11_14_1851.w3g', 'replays/Multiplayer/Replay_2013_11_22_1422.w3g', 'replays/Multiplayer/Replay_2013_12_07_2214.w3g', 'replays/Multiplayer/Replay_2017_01_11_2233.w3g', 'replays/Multiplayer/Replay_2014_11_03_1313.w3g', 'replays/Multiplayer/Replay_2015_08_31_2043.w3g', 'replays/Multiplayer/Replay_2015_08_01_2222.w3g', 'replays/Multiplayer/Replay_2015_08_09_1233.w3g', 'replays/Multiplayer/Replay_2015_07_19_1618.w3g', 'replays/Multiplayer/Replay_2015_07_31_1814.w3g', 'replays/Multiplayer/Replay_2015_06_29_2146.w3g', 'replays/Multiplayer/Replay_2014_02_06_0018.w3g', 'replays/Multiplayer/Replay_2015_06_26_2218.w3g', 'replays/Multiplayer/Replay_2014_03_23_1748.w3g', 'replays/Multiplayer/Replay_2015_08_20_0115.w3g', 'replays/Multiplayer/Replay_2013_12_21_0116.w3g', 'replays/Multiplayer/Replay_2015_07_02_1643.w3g', 'replays/Multiplayer/Replay_2013_08_16_2218.w3g', 'replays/Multiplayer/Replay_2013_10_11_2317.w3g', 'replays/Multiplayer/Replay_2013_12_19_2331.w3g', 'replays/Multiplayer/Replay_2013_08_18_1852.w3g', 'replays/Multiplayer/Replay_2013_07_18_1931.w3g', 'replays/Multiplayer/Replay_2013_07_31_0135.w3g']
|
use_pretrained_cnn_model.ipynb | ###Markdown
Load the model
###Code
# Imports needed by the cells below; `load_model` is assumed to come from
# tensorflow.keras, matching the Keras .h5 model file loaded here.
import json
import numpy as np
from typing import List
from tqdm import tqdm
from tensorflow.keras.models import load_model

with open('options.json', encoding='utf-8') as f:
options = json.load(f)
print(options)
latent_dim = 256
model = load_model('s2s.h5')
user_inputs = [
"There are numerous weaknesses with the bag of words model especially when applied to natural language processing tasks that graph ranking algorithms such as TextRank are able to address.",
"Since purple yams happen to be starchy root vegetables, they also happen to be a great source of carbs, potassium, and vitamin C.",
"Recurrent Neural Networks (RNNs) have been used successfully for many tasks involving sequential data such as machine translation, sentiment analysis, image captioning, time-series prediction etc.",
"Improved RNN models such as Long Short-Term Memory networks (LSTMs) enable training on long sequences overcoming problems like vanishing gradients.",
"However, even the more advanced models have their limitations and researchers had a hard time developing high-quality models when working with long data sequences.",
"In machine translation, for example, the RNN has to find connections between long input and output sentences composed of dozens of words.",
"It seemed that the existing RNN architectures needed to be changed and adapted to better deal with such tasks.",
"Wenger ended his 22-year Gunners reign after the 2017-18 season and previously stated he intended to take charge of a new club in early 2019.",
"It will not prevent the Frenchman from resuming his career in football.",
"However 12 months out of work has given him a different outlook and may influence his next move.",
]
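# Note on the helper below: the loaded model appears to be a character-level
# seq2seq network, so user_input_to_inputs() one-hot encodes each input string
# character by character into a (batch, max_encoder_seq_length, num_encoder_tokens)
# array, using the character-to-index map stored in options.json. Characters
# missing from input_token_index would raise a KeyError; handling of unseen
# characters is not covered here.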
def user_input_to_inputs(ui: List[str]):
max_encoder_seq_length = options['max_encoder_seq_length']
num_encoder_tokens = options['num_encoder_tokens']
input_token_index = options['input_token_index']
encoder_input_data = np.zeros(
(len(ui), max_encoder_seq_length, num_encoder_tokens),
dtype='float32')
for i, input_text in enumerate(ui):
for t, char in enumerate(input_text):
encoder_input_data[i, t, input_token_index[char]] = 1.
return encoder_input_data
inputs = user_input_to_inputs(user_inputs)
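# Note on the decoding loop below: print_predictions() performs greedy,
# step-by-step decoding. The decoder input starts with only the "\t" start
# token set; at each step the full model is re-run, the argmax token for the
# current position is taken, and it is fed back as the next decoder input
# until "\n" (the stop token) or max_decoder_seq_length is reached.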
def print_predictions(inputs:np.array, user_inputs: List[str]):
max_decoder_seq_length = options['max_decoder_seq_length']
num_decoder_tokens = options['num_decoder_tokens']
input_token_index = options['input_token_index']
target_token_index = options['target_token_index']
# Define sampling models
reverse_input_char_index = dict(
(i, char) for char, i in input_token_index.items())
reverse_target_char_index = dict(
(i, char) for char, i in target_token_index.items())
in_encoder = inputs
in_decoder = np.zeros(
(len(in_encoder), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
in_decoder[:, 0, target_token_index["\t"]] = 1
predict = np.zeros(
(len(in_encoder), max_decoder_seq_length),
dtype='float32')
for i in tqdm(range(max_decoder_seq_length - 1), total=max_decoder_seq_length-1):
predict = model.predict([in_encoder, in_decoder])
predict = predict.argmax(axis=-1)
predict_ = predict[:, i].ravel().tolist()
for j, x in enumerate(predict_):
in_decoder[j, i + 1, x] = 1
for seq_index in range(len(in_encoder)):
# Take one sequence (part of the training set)
# for trying out decoding.
output_seq = predict[seq_index, :].ravel().tolist()
decoded = []
for x in output_seq:
if reverse_target_char_index[x] == "\n":
break
else:
decoded.append(reverse_target_char_index[x])
decoded_sentence = "".join(decoded)
print('-')
print('Input sentence:', user_inputs[seq_index])
print('Decoded sentence:', decoded_sentence)
print_predictions(inputs, user_inputs)
###Output
_____no_output_____ |
heapsort.ipynb | ###Markdown
Given a list of unique random integers: - create a heap. - sort the list using a heap. PREREQUISITES
###Code
import random
def make(n):
nums = [i for i in range(n)]
for i in range(n):
rnd = random.randint(0, n - 1)
nums[i], nums[rnd] = nums[rnd], nums[i]
return nums
###Output
_____no_output_____
###Markdown
ALGORITHM
###Code
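# heapify() sifts the value at index idx down a max-heap stored in nums[0:n]:
# it compares the node with its children at 2*idx+1 and 2*idx+2, swaps it with
# the larger child if needed, and recurses until the max-heap property holds.
# heapSort() first builds a max-heap bottom-up, then repeatedly swaps the root
# (the current maximum) to the end of the list and re-heapifies the shorter prefix.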
def heapify(nums, n, idx):
max_idx = idx
l_idx = idx * 2 + 1
r_idx = l_idx + 1
if l_idx < n and nums[l_idx] > nums[max_idx]:
max_idx = l_idx
if r_idx < n and nums[r_idx] > nums[max_idx]:
max_idx = r_idx
if max_idx != idx:
nums[idx], nums[max_idx] = nums[max_idx], nums[idx]
heapify(nums, n, max_idx)
def heapSort(nums):
for i in range(len(nums) // 2 - 1, -1, -1):
heapify(nums, len(nums), i)
for i in range(len(nums) - 1, -1, -1):
nums[0], nums[i] = nums[i], nums[0]
heapify(nums, i, 0)
###Output
_____no_output_____
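###Markdown
A quick sanity check before the test below: heapSort should agree with a reference sort. This is a minimal sketch added for illustration and assumes only the `make` and `heapSort` functions defined above.
###Code
# Compare heapSort against Python's built-in sorted() on a few list sizes.
for n in (1, 2, 10, 100):
    nums = make(n)
    expected = sorted(nums)
    heapSort(nums)
    assert nums == expected, 'mismatch for n={}'.format(n)
print('heapSort matches sorted() on all checked inputs')
###Output
_____no_output_____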
###Markdown
TEST
###Code
nums = make(26)
heapSort(nums)
print(nums)
###Output
_____no_output_____ |
TensorFlow20_test.ipynb | ###Markdown
###Code
%tensorflow_version 2.x
import tensorflow as tf
tf.__version__
###Output
_____no_output_____ |
03_Experimente/04_B_Interpretation_per_Anomalie_Art_and_Concept_Drift.ipynb | ###Markdown
Evaluation of the fine-tuning per anomaly type
###Code
import arrow
import numpy as np
import os
import glob
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import torch
import joblib
from sklearn.preprocessing import MinMaxScaler
from torch.utils.data import TensorDataset
from sklearn.metrics import confusion_matrix
from models.SimpleAutoEncoder import SimpleAutoEncoder
from utils.evalUtils import calc_cm_metrics
from utils.evalUtils import print_confusion_matrix
from matplotlib import rc
rc('text', usetex=True)
%run -i ./scripts/setConfigs.py
os.getcwd()
%run -i ./scripts/EvalPreperations.py
type(drifted_anormal_torch_tensor)
###Output
_____no_output_____
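###Markdown
The evaluation below relies on `calc_cm_metrics` from `utils.evalUtils`, whose implementation is not shown in this notebook. Judging from how it is called (inputs `tp, tn, fp, fn`; outputs accuracy, precision, specificity, sensitivity and F1 score), it presumably computes the standard confusion-matrix ratios. A minimal sketch under that assumption:
###Code
# Hypothetical stand-in for utils.evalUtils.calc_cm_metrics (assumed behaviour, for reference only).
def calc_cm_metrics_sketch(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total if total else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0        # PPV
    specificity = tn / (tn + fp) if (tn + fp) else 0.0      # TNR
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0      # TPR / recall
    f1_score = (2 * precision * sensitivity / (precision + sensitivity)
                if (precision + sensitivity) else 0.0)
    return accuracy, precision, specificity, sensitivity, f1_score
###Output
_____no_output_____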
###Markdown
Read Experiment Results from Phase I
###Code
os.chdir(os.path.join(exp_data_path, 'experiment'))
extension = 'csv'
result = glob.glob('*.{}'.format(extension))
result[0]
df_tVP1_m1 = pd.read_csv(result[0])
for file in result[1:]:
df = pd.read_csv(file)
df_tVP1_m1 = df_tVP1_m1.append(df)
df_tVP1_m1.reset_index(inplace=True)
len(df_tVP1_m1)
if 'Unnamed: 0' in df_tVP1_m1.columns:
    print('Drop unnamed index col...')
df_tVP1_m1.drop('Unnamed: 0', axis=1,inplace=True)
df_tVP1_m1.head(1)
df_results = pd.DataFrame()
df_results['ano_labels'] = s_ano_labels_drifted_ano
df_results.reset_index(inplace=True, drop=True)
###Output
_____no_output_____
###Markdown
Let every model predict on $X_{drifted,ano}$
###Code
for i, row in df_tVP1_m1.iterrows():
print('Current Iteration: {} of {}'.format(i+1, len(df_tVP1_m1)))
model = None
model = SimpleAutoEncoder(num_inputs=17, val_lambda=42)
logreg = None
model_fn = row['model_fn']
logreg_fn = row['logreg_fn']
logreg = joblib.load(logreg_fn)
model.load_state_dict(torch.load(model_fn))
losses = []
for val in drifted_anormal_torch_tensor:
loss = model.calc_reconstruction_error(val)
losses.append(loss.item())
s_losses_anormal = pd.Series(losses)
X = s_losses_anormal.to_numpy()
X = X.reshape(-1, 1)
predictions = []
for val in X:
val = val.reshape(1,-1)
pred = logreg.predict(val)
predictions.append(pred[0])
col_name = 'preds_model_{}'.format(i)
df_results[col_name] = predictions
file_name = '{}_predictions_models_phase_1_on_x_drifted_ano.csv'.format(arrow.now().format('YYYYMMDD'))
full_fn = '/home/torge/dev/masterthesis_code/02_Experimente/03_Experimente/exp_data/evaluation/{}'.format(file_name)
df_results.to_csv(full_fn, sep=';', index=False)
len(df_results)
len(df_results.columns)
df_results.head()
df_results['binary_label'] = [1 if x > 0 else 0 for x in df_results['ano_labels']]
phase_1_general_kpis_per_model = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
for col in df_results.columns:
if col != 'ano_labels' and col != 'binary_label':
cm = confusion_matrix(df_results['binary_label'], df_results[col])
tn, fp, fn, tp = cm.ravel()
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
phase_1_general_kpis_per_model['acc'].append(accuracy)
phase_1_general_kpis_per_model['prec'].append(precision)
phase_1_general_kpis_per_model['spec'].append(specifity)
phase_1_general_kpis_per_model['sen'].append(sensitivity)
df_phase_1_general_kpis_per_model = pd.DataFrame(phase_1_general_kpis_per_model)
df_phase_1_general_kpis_per_model.head()
df_phase_1_general_kpis_per_model.describe()
df_results_anos_0 = df_results[df_results['ano_labels'] == 0.0]
df_results_anos_1 = df_results[df_results['ano_labels'] == 1.0]
df_results_anos_2 = df_results[df_results['ano_labels'] == 2.0]
df_results_anos_3 = df_results[df_results['ano_labels'] == 3.0]
print('Anzahl Samples in Klasse 0: {}'.format(len(df_results_anos_0)))
print('Anzahl Samples in Klasse 1: {}'.format(len(df_results_anos_1)))
print('Anzahl Samples in Klasse 2: {}'.format(len(df_results_anos_2)))
print('Anzahl Samples in Klasse 3: {}'.format(len(df_results_anos_3)))
###Output
Anzahl Samples in Klasse 0: 32543
Anzahl Samples in Klasse 1: 20
Anzahl Samples in Klasse 2: 2433
Anzahl Samples in Klasse 3: 44
###Markdown
Create KPIs for every model per Anomaly class
###Code
kpis_per_model_anos_0 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
kpis_per_model_anos_1 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
kpis_per_model_anos_2 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
kpis_per_model_anos_3 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
label_ano_class_0 = np.zeros(len(df_results_anos_0))
label_ano_class_1 = np.ones(len(df_results_anos_1))
label_ano_class_2 = np.ones(len(df_results_anos_2))
label_ano_class_3 = np.ones(len(df_results_anos_3))
for col in df_results_anos_0.columns:
if col != 'ano_labels':
cm = confusion_matrix(label_ano_class_0, df_results_anos_0[col])
tn, fp, fn, tp = cm.ravel()
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
kpis_per_model_anos_0['acc'].append(accuracy)
kpis_per_model_anos_0['prec'].append(precision)
kpis_per_model_anos_0['spec'].append(specifity)
kpis_per_model_anos_0['sen'].append(sensitivity)
df_kpis_per_model_anos_0 = pd.DataFrame(kpis_per_model_anos_0)
df_kpis_per_model_anos_0.head()
df_kpis_per_model_anos_0.describe()
print(1.428468e-14)
len(df_kpis_per_model_anos_0)
for col in df_results_anos_1.columns:
if col != 'ano_labels':
cm = confusion_matrix(label_ano_class_1, df_results_anos_1[col])
tn, fp, fn, tp = cm.ravel()
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
kpis_per_model_anos_1['acc'].append(accuracy)
kpis_per_model_anos_1['prec'].append(precision)
kpis_per_model_anos_1['spec'].append(specifity)
kpis_per_model_anos_1['sen'].append(sensitivity)
df_kpis_per_model_anos_1 = pd.DataFrame(kpis_per_model_anos_1)
df_kpis_per_model_anos_1.head()
df_kpis_per_model_anos_1.describe()
for col in df_results_anos_2.columns:
if col != 'ano_labels':
cm = confusion_matrix(label_ano_class_2, df_results_anos_2[col])
tn, fp, fn, tp = cm.ravel()
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
kpis_per_model_anos_2['acc'].append(accuracy)
kpis_per_model_anos_2['prec'].append(precision)
kpis_per_model_anos_2['spec'].append(specifity)
kpis_per_model_anos_2['sen'].append(sensitivity)
df_kpis_per_model_anos_2 = pd.DataFrame(kpis_per_model_anos_2)
df_kpis_per_model_anos_2.head()
df_kpis_per_model_anos_2.describe()
for col in df_results_anos_3.columns:
if col != 'ano_labels':
cm = confusion_matrix(label_ano_class_3, df_results_anos_3[col])
tn, fp, fn, tp = cm.ravel()
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
kpis_per_model_anos_3['acc'].append(accuracy)
kpis_per_model_anos_3['prec'].append(precision)
kpis_per_model_anos_3['spec'].append(specifity)
kpis_per_model_anos_3['sen'].append(sensitivity)
df_kpis_per_model_anos_3 = pd.DataFrame(kpis_per_model_anos_3)
df_kpis_per_model_anos_3.head()
df_kpis_per_model_anos_3.describe()
arrow.now()
###Output
_____no_output_____
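###Markdown
The per-class KPI computation above repeats the same loop for every anomaly class. A small helper could wrap that pattern and reduce the duplication in the following sections; this is only a sketch, reusing `confusion_matrix`, `calc_cm_metrics` and `pd` as imported at the top of the notebook.
###Code
# Hypothetical helper: compute acc/prec/spec/sen for every prediction column
# of a results DataFrame against a fixed label vector.
def kpis_per_model(df, labels, skip_cols=('ano_labels', 'binary_label', 'cd_label')):
    kpis = {'acc': [], 'prec': [], 'spec': [], 'sen': []}
    for col in df.columns:
        if col in skip_cols:
            continue
        # Like the cells below, this assumes both classes occur in the column;
        # otherwise ravel() yields fewer than four values and needs special-casing.
        tn, fp, fn, tp = confusion_matrix(labels, df[col]).ravel()
        accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
        kpis['acc'].append(accuracy)
        kpis['prec'].append(precision)
        kpis['spec'].append(specifity)
        kpis['sen'].append(sensitivity)
    return pd.DataFrame(kpis)
###Output
_____no_output_____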
###Markdown
Results from Phase II
###Code
os.getcwd()
os.chdir(os.path.join(exp_data_path, 'experiment', 'fine_tuning'))
extension = 'csv'
result = glob.glob('*.{}'.format(extension))
real_results = []
for res in result:
if 'tVPII_M2' not in res:
real_results.append(res)
df_tVP2_m1 = pd.read_csv(real_results[0], sep=';')
df_tVP2_m1.head()
for file in real_results[1:]:
df = pd.read_csv(file, sep=';')
df_tVP2_m1 = df_tVP2_m1.append(df)
df_tVP2_m1.reset_index(drop=True, inplace=True)
df_tVP2_m1.head(2)
len(df_tVP2_m1)
df_results_phase_2 = pd.DataFrame()
df_results_phase_2['ano_labels'] = s_ano_labels_drifted_ano
df_results_phase_2.reset_index(inplace=True, drop=True)
print('Started: {}'.format(arrow.now().format('HH:mm:ss')))
for i, row in df_tVP2_m1.iterrows():
print('Current Iteration: {} of {}'.format(i+1, len(df_tVP2_m1)))
model = None
model = SimpleAutoEncoder(num_inputs=17, val_lambda=42)
logreg = None
model_fn = row['model_fn']
logreg_fn = row['logreg_fn']
logreg = joblib.load(logreg_fn)
model.load_state_dict(torch.load(model_fn))
losses = []
for val in drifted_anormal_torch_tensor:
loss = model.calc_reconstruction_error(val)
losses.append(loss.item())
s_losses_anormal = pd.Series(losses)
X = s_losses_anormal.to_numpy()
X = X.reshape(-1, 1)
predictions = []
for val in X:
val = val.reshape(1,-1)
pred = logreg.predict(val)
predictions.append(pred[0])
col_name = 'preds_model_{}'.format(i)
df_results_phase_2[col_name] = predictions
print('Ended: {}'.format(arrow.now().format('HH:mm:ss')))
file_name = '{}_predictions_models_phase_2_on_x_drifted_ano.csv'.format(arrow.now().format('YYYYMMDD'))
full_fn = '/home/torge/dev/masterthesis_code/02_Experimente/03_Experimente/exp_data/evaluation/{}'.format(file_name)
df_results_phase_2.to_csv(full_fn, sep=';', index=False)
len(df_results_phase_2)
len(df_results_phase_2.columns)
print(df_results_phase_2['binary_label'].sum())
print(len(df_results_phase_2))
df_results_phase_2.head()
df_results_phase_2['binary_label'] = [1 if x > 0 else 0 for x in df_results_phase_2['ano_labels']]
phase_2_general_kpis_per_model = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
for col in df_results_phase_2.columns:
if col != 'ano_labels' and col != 'binary_label':
cm = confusion_matrix(df_results_phase_2['binary_label'], df_results_phase_2[col])
tn, fp, fn, tp = cm.ravel()
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
phase_2_general_kpis_per_model['acc'].append(accuracy)
phase_2_general_kpis_per_model['prec'].append(precision)
phase_2_general_kpis_per_model['spec'].append(specifity)
phase_2_general_kpis_per_model['sen'].append(sensitivity)
df_phase_2_general_kpis_per_model = pd.DataFrame(phase_2_general_kpis_per_model)
df_phase_2_general_kpis_per_model.describe()
###Output
_____no_output_____
###Markdown
Analysis per anomaly type for Phase II
###Code
df_results_phase_2_anos_0 = df_results_phase_2[df_results_phase_2['ano_labels'] == 0.0]
df_results_phase_2_anos_1 = df_results_phase_2[df_results_phase_2['ano_labels'] == 1.0]
df_results_phase_2_anos_2 = df_results_phase_2[df_results_phase_2['ano_labels'] == 2.0]
df_results_phase_2_anos_3 = df_results_phase_2[df_results_phase_2['ano_labels'] == 3.0]
print('Anzahl Samples in Klasse 0: {}'.format(len(df_results_phase_2_anos_0)))
print('Anzahl Samples in Klasse 1: {}'.format(len(df_results_phase_2_anos_1)))
print('Anzahl Samples in Klasse 2: {}'.format(len(df_results_phase_2_anos_2)))
print('Anzahl Samples in Klasse 3: {}'.format(len(df_results_phase_2_anos_3)))
phase_2_kpis_per_model_anos_0 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
phase_2_kpis_per_model_anos_1 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
phase_2_kpis_per_model_anos_2 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
phase_2_kpis_per_model_anos_3 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
phase_2_label_ano_class_0 = np.zeros(len(df_results_phase_2_anos_0))
phase_2_label_ano_class_1 = np.ones(len(df_results_phase_2_anos_1))
phase_2_label_ano_class_2 = np.ones(len(df_results_phase_2_anos_2))
phase_2_label_ano_class_3 = np.ones(len(df_results_phase_2_anos_3))
###Output
_____no_output_____
###Markdown
Metrics for anomaly class 0 after Phase II, Model 1
###Code
for col in df_results_phase_2_anos_0.columns:
if col != 'ano_labels' and col != 'binary_label':
cm = confusion_matrix(phase_2_label_ano_class_0, df_results_phase_2_anos_0[col])
try:
tn, fp, fn, tp = cm.ravel()
except:
print('Cant ravel CM : {}'.format(col))
if df_results_phase_2_anos_0[col].all() == 1:
tp = 0
tn = 0
fp = cm[0][0]
fn = 0
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
phase_2_kpis_per_model_anos_0['acc'].append(accuracy)
phase_2_kpis_per_model_anos_0['prec'].append(precision)
phase_2_kpis_per_model_anos_0['spec'].append(specifity)
phase_2_kpis_per_model_anos_0['sen'].append(sensitivity)
df_phase_2_kpis_per_model_anos_0 = pd.DataFrame(phase_2_kpis_per_model_anos_0)
df_phase_2_kpis_per_model_anos_0.head()
df_phase_2_kpis_per_model_anos_0.describe()
###Output
_____no_output_____
###Markdown
Metrics for anomaly class 1 after Phase II, Model 1
###Code
for col in df_results_phase_2_anos_1.columns:
if col != 'ano_labels' and col != 'binary_label':
cm = confusion_matrix(phase_2_label_ano_class_1, df_results_phase_2_anos_1[col])
try:
tn, fp, fn, tp = cm.ravel()
except:
print('Cant ravel CM : {}'.format(col))
if df_results_phase_2_anos_1[col].all() == 1:
tp = cm[0][0]
tn = 0
fp = 0
fn = 0
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
phase_2_kpis_per_model_anos_1['acc'].append(accuracy)
phase_2_kpis_per_model_anos_1['prec'].append(precision)
phase_2_kpis_per_model_anos_1['spec'].append(specifity)
phase_2_kpis_per_model_anos_1['sen'].append(sensitivity)
df_phase_2_kpis_per_model_anos_1 = pd.DataFrame(phase_2_kpis_per_model_anos_1)
df_phase_2_kpis_per_model_anos_1.head()
df_phase_2_kpis_per_model_anos_1.describe()
###Output
_____no_output_____
###Markdown
Metrics for anomaly class 2 after Phase II, Model 1
###Code
for col in df_results_phase_2_anos_2.columns:
if col != 'ano_labels' and col != 'binary_label':
cm = confusion_matrix(phase_2_label_ano_class_2, df_results_phase_2_anos_2[col])
try:
tn, fp, fn, tp = cm.ravel()
except:
print('Cant ravel CM : {}'.format(col))
if df_results_phase_2_anos_2[col].all() == 1:
tp = cm[0][0]
tn = 0
fp = 0
fn = 0
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
phase_2_kpis_per_model_anos_2['acc'].append(accuracy)
phase_2_kpis_per_model_anos_2['prec'].append(precision)
phase_2_kpis_per_model_anos_2['spec'].append(specifity)
phase_2_kpis_per_model_anos_2['sen'].append(sensitivity)
df_phase_2_kpis_per_model_anos_2 = pd.DataFrame(phase_2_kpis_per_model_anos_2)
df_phase_2_kpis_per_model_anos_2.head()
df_phase_2_kpis_per_model_anos_2.describe()
###Output
_____no_output_____
###Markdown
Metrics for anomaly class 3 after Phase II, Model 1
###Code
for col in df_results_phase_2_anos_3.columns:
if col != 'ano_labels' and col !='binary_label':
cm = confusion_matrix(phase_2_label_ano_class_3, df_results_phase_2_anos_3[col])
try:
tn, fp, fn, tp = cm.ravel()
except:
print('Cant ravel CM : {}'.format(col))
if df_results_phase_2_anos_3[col].all() == 1:
tp = cm[0][0]
tn = 0
fp = 0
fn = 0
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
phase_2_kpis_per_model_anos_3['acc'].append(accuracy)
phase_2_kpis_per_model_anos_3['prec'].append(precision)
phase_2_kpis_per_model_anos_3['spec'].append(specifity)
phase_2_kpis_per_model_anos_3['sen'].append(sensitivity)
df_phase_2_kpis_per_model_anos_3 = pd.DataFrame(phase_2_kpis_per_model_anos_3)
df_phase_2_kpis_per_model_anos_3.head()
df_phase_2_kpis_per_model_anos_3.describe()
###Output
_____no_output_____
###Markdown
Change in model performance per concept drift event type
###Code
df_results_phase_1_concept_drift = df_results.copy()
df_results_phase_1_concept_drift.head(1)
s_drift_labls_copy = s_drift_labels_drifted_ano.copy()
s_drift_labls_copy.reset_index(drop=True, inplace=True)
df_results_phase_1_concept_drift['cd_label'] = s_drift_labls_copy
df_res_phase_1_cd_0 = df_results_phase_1_concept_drift[df_results_phase_1_concept_drift['cd_label'] == 0.0]
df_res_phase_1_cd_1 = df_results_phase_1_concept_drift[df_results_phase_1_concept_drift['cd_label'] == 1.0]
df_res_phase_1_cd_2 = df_results_phase_1_concept_drift[df_results_phase_1_concept_drift['cd_label'] == 2.0]
df_res_phase_1_cd_3 = df_results_phase_1_concept_drift[df_results_phase_1_concept_drift['cd_label'] == 3.0]
print('Anzahl CD Event 0: {}'.format(len(df_res_phase_1_cd_0)))
print('Anzahl CD Event 1: {}'.format(len(df_res_phase_1_cd_1)))
print('Anzahl CD Event 2: {}'.format(len(df_res_phase_1_cd_2)))
print('Anzahl CD Event 3: {}'.format(len(df_res_phase_1_cd_3)))
lab_res_phase_1_cd_0 = df_res_phase_1_cd_0['binary_label']
lab_res_phase_1_cd_1 = df_res_phase_1_cd_1['binary_label']
lab_res_phase_1_cd_2 = df_res_phase_1_cd_2['binary_label']
lab_res_phase_1_cd_3 = df_res_phase_1_cd_3['binary_label']
###Output
_____no_output_____
###Markdown
Concept Drift Event 0
###Code
kpis_phase_1_cd_0 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
for col in df_res_phase_1_cd_0.columns:
if col != 'ano_labels' and col !='binary_label' and col !='cd_label':
cm = confusion_matrix(lab_res_phase_1_cd_0, df_res_phase_1_cd_0[col])
try:
tn, fp, fn, tp = cm.ravel()
except:
print('Cant ravel CM : {}'.format(col))
#if df_results_phase_2_anos_3[col].all() == 1:
# tp = cm[0][0]
# tn = 0
# fp = 0
#fn = 0
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
kpis_phase_1_cd_0['acc'].append(accuracy)
kpis_phase_1_cd_0['prec'].append(precision)
kpis_phase_1_cd_0['spec'].append(specifity)
kpis_phase_1_cd_0['sen'].append(sensitivity)
df_kpis_phase_1_cd_0 = pd.DataFrame(kpis_phase_1_cd_0)
df_kpis_phase_1_cd_0.describe()
###Output
_____no_output_____
###Markdown
Concept Drift Event 1
###Code
kpis_phase_1_cd_1 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
for col in df_res_phase_1_cd_0.columns:
if col != 'ano_labels' and col !='binary_label' and col !='cd_label':
cm = confusion_matrix(lab_res_phase_1_cd_1, df_res_phase_1_cd_1[col])
try:
tn, fp, fn, tp = cm.ravel()
except:
print('Cant ravel CM : {}'.format(col))
#if df_results_phase_2_anos_3[col].all() == 1:
# tp = cm[0][0]
# tn = 0
# fp = 0
#fn = 0
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
kpis_phase_1_cd_1['acc'].append(accuracy)
kpis_phase_1_cd_1['prec'].append(precision)
kpis_phase_1_cd_1['spec'].append(specifity)
kpis_phase_1_cd_1['sen'].append(sensitivity)
df_kpis_phase_1_cd_1 = pd.DataFrame(kpis_phase_1_cd_1)
df_kpis_phase_1_cd_1.describe()
###Output
_____no_output_____
###Markdown
Concept Drift Event 2
###Code
kpis_phase_1_cd_2 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
for col in df_res_phase_1_cd_0.columns:
if col != 'ano_labels' and col !='binary_label' and col !='cd_label':
cm = confusion_matrix(lab_res_phase_1_cd_2, df_res_phase_1_cd_2[col])
try:
tn, fp, fn, tp = cm.ravel()
except:
print('Cant ravel CM : {}'.format(col))
#if df_results_phase_2_anos_3[col].all() == 1:
# tp = cm[0][0]
# tn = 0
# fp = 0
#fn = 0
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
kpis_phase_1_cd_2['acc'].append(accuracy)
kpis_phase_1_cd_2['prec'].append(precision)
kpis_phase_1_cd_2['spec'].append(specifity)
kpis_phase_1_cd_2['sen'].append(sensitivity)
df_kpis_phase_1_cd_2 = pd.DataFrame(kpis_phase_1_cd_2)
df_kpis_phase_1_cd_2.describe()
###Output
_____no_output_____
###Markdown
Concept Drift Event 3
###Code
kpis_phase_1_cd_3 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
for col in df_res_phase_1_cd_0.columns:
if col != 'ano_labels' and col !='binary_label' and col !='cd_label':
cm = confusion_matrix(lab_res_phase_1_cd_3, df_res_phase_1_cd_3[col])
try:
tn, fp, fn, tp = cm.ravel()
except:
print('Cant ravel CM : {}'.format(col))
#if df_results_phase_2_anos_3[col].all() == 1:
# tp = cm[0][0]
# tn = 0
# fp = 0
#fn = 0
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
kpis_phase_1_cd_3['acc'].append(accuracy)
kpis_phase_1_cd_3['prec'].append(precision)
kpis_phase_1_cd_3['spec'].append(specifity)
kpis_phase_1_cd_3['sen'].append(sensitivity)
df_kpis_phase_1_cd_3 = pd.DataFrame(kpis_phase_1_cd_3)
df_kpis_phase_1_cd_3.describe()
###Output
_____no_output_____
###Markdown
Evaluation of metrics per concept drift event type after Phase II
###Code
df_results_phase_2_concept_drift = df_results_phase_2.copy()
df_results_phase_2_concept_drift['cd_label'] = s_drift_labls_copy
df_results_phase_2_concept_drift.head(1)
df_res_phase_2_cd_0 = df_results_phase_2_concept_drift[df_results_phase_2_concept_drift['cd_label'] == 0.0]
df_res_phase_2_cd_1 = df_results_phase_2_concept_drift[df_results_phase_2_concept_drift['cd_label'] == 1.0]
df_res_phase_2_cd_2 = df_results_phase_2_concept_drift[df_results_phase_2_concept_drift['cd_label'] == 2.0]
df_res_phase_2_cd_3 = df_results_phase_2_concept_drift[df_results_phase_2_concept_drift['cd_label'] == 3.0]
print('Anzahl CD Event 0: {}'.format(len(df_res_phase_2_cd_0)))
print('Anzahl CD Event 1: {}'.format(len(df_res_phase_2_cd_1)))
print('Anzahl CD Event 2: {}'.format(len(df_res_phase_2_cd_2)))
print('Anzahl CD Event 3: {}'.format(len(df_res_phase_2_cd_3)))
lab_res_phase_2_cd_0 = df_res_phase_2_cd_0['binary_label']
lab_res_phase_2_cd_1 = df_res_phase_2_cd_1['binary_label']
lab_res_phase_2_cd_2 = df_res_phase_2_cd_2['binary_label']
lab_res_phase_2_cd_3 = df_res_phase_2_cd_3['binary_label']
###Output
_____no_output_____
###Markdown
Concept Drift Event Type 0
###Code
kpis_phase_2_cd_0 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
for col in df_res_phase_2_cd_0.columns:
if col != 'ano_labels' and col !='binary_label' and col !='cd_label':
cm = confusion_matrix(lab_res_phase_2_cd_0, df_res_phase_2_cd_0[col])
try:
tn, fp, fn, tp = cm.ravel()
except:
print('Cant ravel CM : {}'.format(col))
#if df_results_phase_2_anos_3[col].all() == 1:
# tp = cm[0][0]
# tn = 0
# fp = 0
#fn = 0
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
kpis_phase_2_cd_0['acc'].append(accuracy)
kpis_phase_2_cd_0['prec'].append(precision)
kpis_phase_2_cd_0['spec'].append(specifity)
kpis_phase_2_cd_0['sen'].append(sensitivity)
df_kpis_phase_2_cd_0 = pd.DataFrame(kpis_phase_2_cd_0)
df_kpis_phase_2_cd_0.describe()
###Output
_____no_output_____
###Markdown
Concept Drift Event Type 1
###Code
kpis_phase_2_cd_1 = {
    'acc': [],
    'prec': [],
    'spec': [],
    'sen': []
}
for col in df_res_phase_2_cd_1.columns:
    if col != 'ano_labels' and col !='binary_label' and col !='cd_label':
        cm = confusion_matrix(lab_res_phase_2_cd_1, df_res_phase_2_cd_1[col])
        try:
            tn, fp, fn, tp = cm.ravel()
        except:
            print('Cant ravel CM : {}'.format(col))
            if df_res_phase_2_cd_1[col].all() == 1:
                tp = cm[0][0]
                tn = 0
                fp = 0
                fn = 0
        accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
        kpis_phase_2_cd_1['acc'].append(accuracy)
        kpis_phase_2_cd_1['prec'].append(precision)
        kpis_phase_2_cd_1['spec'].append(specifity)
        kpis_phase_2_cd_1['sen'].append(sensitivity)
df_kpis_phase_2_cd_1 = pd.DataFrame(kpis_phase_2_cd_1)
df_kpis_phase_2_cd_1.describe()
###Output
_____no_output_____
###Markdown
Concept Drift Event Type 2
###Code
kpis_phase_2_cd_2 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
for col in df_res_phase_2_cd_2.columns:
if col != 'ano_labels' and col !='binary_label' and col !='cd_label':
cm = confusion_matrix(lab_res_phase_2_cd_2, df_res_phase_2_cd_2[col])
try:
tn, fp, fn, tp = cm.ravel()
except:
print('Cant ravel CM : {}'.format(col))
#if df_results_phase_2_anos_3[col].all() == 1:
# tp = cm[0][0]
# tn = 0
# fp = 0
#fn = 0
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
kpis_phase_2_cd_2['acc'].append(accuracy)
kpis_phase_2_cd_2['prec'].append(precision)
kpis_phase_2_cd_2['spec'].append(specifity)
kpis_phase_2_cd_2['sen'].append(sensitivity)
df_kpis_phase_2_cd_2 = pd.DataFrame(kpis_phase_2_cd_2)
df_kpis_phase_2_cd_2.describe()
###Output
_____no_output_____
###Markdown
Concept Drift Event Type 3
###Code
kpis_phase_2_cd_3 = {
'acc': [],
'prec': [],
'spec': [],
'sen': []
}
for col in df_res_phase_2_cd_3.columns:
if col != 'ano_labels' and col !='binary_label' and col !='cd_label':
cm = confusion_matrix(lab_res_phase_2_cd_3, df_res_phase_2_cd_3[col])
try:
tn, fp, fn, tp = cm.ravel()
except:
print('Cant ravel CM : {}'.format(col))
#if df_results_phase_2_anos_3[col].all() == 1:
# tp = cm[0][0]
# tn = 0
# fp = 0
#fn = 0
accuracy, precision, specifity, sensitivity, f1_score = calc_cm_metrics(tp, tn, fp, fn)
kpis_phase_2_cd_3['acc'].append(accuracy)
kpis_phase_2_cd_3['prec'].append(precision)
kpis_phase_2_cd_3['spec'].append(specifity)
kpis_phase_2_cd_3['sen'].append(sensitivity)
df_kpis_phase_2_cd_3 = pd.DataFrame(kpis_phase_2_cd_3)
df_kpis_phase_2_cd_3.describe()
###Output
_____no_output_____ |
app/notebooks/michaela_notebooks/Analysis of Shooters and Victims/Devin Patrick Kelley (Shooter).ipynb | ###Markdown
Devin Patrick Kelley (Okay)
###Code
import ipywidgets as widgets
from IPython.display import display
import esper.identity_clusters
from esper.identity_clusters import identity_clustering_workflow,_manual_recluster,visualization_workflow
shootings = [
('Muhammad Youssef Abdulazeez', 'Chattanooga', 'Jul 16, 2015'),
('Chris Harper-Mercer', 'Umpqua Community College', 'Oct 1, 2015'),
('Robert Lewis Dear Jr', 'Colorado Springs - Planned Parenthood', 'Nov 27, 2015'),
('Syed Rizwan Farook', 'San Bernardino', 'Dec 2, 2015'),
('Tashfeen Malik', 'San Bernardino', 'Dec 2, 2015'),
('Dylann Roof', 'Charleston Shurch', 'Jun 17, 2015'),
('Omar Mateen', 'Orlando Nightclub', 'Jun 12, 2016'),
('Micah Xavier Johnson', 'Dallas Police', 'Jul 7-8, 2016'),
('Gavin Eugene Long', 'Baton Rouge Police', 'Jul 17, 2016'),
('Esteban Santiago-Ruiz', 'Ft. Lauderdale Airport', 'Jan 6, 2017'),
('Willie Corey Godbolt', 'Lincoln County', 'May 28, 2017'),
('Stephen Paddock', 'Las Vegas', 'Oct 1, 2017'),
('Devin Patrick Kelley', 'San Antonio Church', 'Nov 5, 2017')
]
orm_set = { x.name for x in Identity.objects.filter(name__in=[s[0].lower() for s in shootings]) }
for s in shootings:
assert s[0].lower() in orm_set, '{} is not in the database'.format(s)
identity_clustering_workflow('Devin Patrick Kelley','Nov 5, 2017', True)
###Output
Cluster 0 (143 faces): CNN: 66.4%, MSNBC: 8.4%, FOXNEWS: 25.2%
|
completed/.ipynb_checkpoints/Session 1 - Hello NLP World-checkpoint.ipynb | ###Markdown
[Optional]: If you're using a Mac/Linux, you can check your environment with these commands:
```
!which pip3
!which python3
!ls -lah /usr/local/bin/python3
```
###Code
!pip3 install -U pip
!pip3 install torch==1.3.0
!pip3 install seaborn
import torch
torch.cuda.is_available()
# IPython candies...
from IPython.display import Image
from IPython.core.display import HTML
from IPython.display import clear_output
%%html
<style> table {float:left} </style>
###Output
_____no_output_____
###Markdown
Perceptron=====**Perceptron** algorithm is a:> "*system that depends on **probabilistic** rather than deterministic principles for its operation, gains its reliability from the **properties of statistical measurements obtain from a large population of elements***"> \- Frank Rosenblatt (1957)Then the news:> "*[Perceptron is an] **embryo of an electronic computer** that [the Navy] expects will be **able to walk, talk, see, write, reproduce itself and be conscious of its existence.***"> \- The New York Times (1958)News quote cite from Olazaran (1996) Perceptron in Bullets---- - Perceptron learns to classify any linearly separable set of inputs. - Some nice graphics for perceptron with Go https://appliedgo.net/perceptron/ If you've got some spare time: - There's a whole book just on perceptron: https://mitpress.mit.edu/books/perceptrons - For watercooler gossips on perceptron in the early days, read [Olazaran (1996)](https://pdfs.semanticscholar.org/f3b6/e5ef511b471ff508959f660c94036b434277.pdf?_ga=2.57343906.929185581.1517539221-1505787125.1517539221) Perceptron in Math----Given a set of inputs $x$, the perceptron - learns $w$ vector to map the inputs to a real-value output between $[0,1]$ - through the summation of the dot product of the $w·x$ with a transformation function Perceptron in Picture----
###Code
##Image(url="perceptron.png", width=500)
Image(url="https://ibin.co/4TyMU8AdpV4J.png", width=500)
###Output
_____no_output_____
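###Markdown
As a concrete illustration of the picture above (a minimal sketch, assuming NumPy and the sigmoid transfer function used later in this notebook), a single perceptron's output is just the sigmoid of the weighted sum $w \cdot x$:
###Code
import numpy as np

w = np.array([0.5, -0.3, 0.8])   # one weight per input
x = np.array([1.0, 0.2, 0.7])    # x_1 is often fixed to 1 to act as the bias (see note below)
s = np.dot(w, x)                 # weighted sum s = w . x
y = 1 / (1 + np.exp(-s))         # f(s): the sigmoid squashes s into (0, 1)
print(s, y)
###Output
_____no_output_____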
###Markdown
(**Note:** Usually, we use $x_1$ as the bias and fix the input to 1) Perceptron as a Workflow Diagram----If you're familiar with [mermaid flowchart](https://mermaidjs.github.io)```.. mermaid:: graph LR subgraph Input x_1 x_i x_n end subgraph Perceptron n1((s)) --> n2(("f(s)")) end x_1 --> |w_1| n1 x_i --> |w_i| n1 x_n --> |w_n| n1 n2 --> y["[0,1]"]```
###Code
##Image(url="perceptron-mermaid.svg", width=500)
Image(url="https://svgshare.com/i/AbJ.svg", width=500)
###Output
_____no_output_____
###Markdown
Optimization Process====To learn the weights, $w$, we use an **optimizer** to find the best-fit (optimal) values for $w$ such that the inputs map correctly to the outputs. Typically, the process performs the following 4 steps iteratively. **Initialization** - **Step 1**: Initialize weights vector **Forward Propagation** - **Step 2a**: Multiply the weights vector with the inputs, sum the products, i.e. `s` - **Step 2b**: Put the sum through the sigmoid, i.e. `f()` **Back Propagation** - **Step 3a**: Compute the errors, i.e. difference between expected output and predictions - **Step 3b**: Multiply the error with the **derivatives** to get the delta - **Step 3c**: Multiply the delta vector with the inputs, sum the product **Optimizer takes a step** - **Step 4**: Multiply the learning rate with the output of Step 3c.
###Code
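# The helpers defined in this cell support the loop described above:
# sigmoid() is the transfer function f(s) applied in forward propagation (Step 2b),
# sigmoid_derivative() gives the slope used to scale the error into a delta during
# back propagation (Step 3b), and cost() further down measures the miss (Step 3a).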
import math
import numpy as np
np.random.seed(0)
def sigmoid(x): # Returns values that sums to one.
return 1 / (1 + np.exp(-x))
def sigmoid_derivative(sx):
# See https://math.stackexchange.com/a/1225116
# Hint: let sx = sigmoid(x)
return sx * (1 - sx)
sigmoid(np.array([2.5, 0.32, -1.42])) # [out]: array([0.92414182, 0.57932425, 0.19466158])
sigmoid_derivative(np.array([2.5, 0.32, -1.42])) # [out]: array([0.07010372, 0.24370766, 0.15676845])
def cost(predicted, truth):
return np.abs(truth - predicted)
gold = np.array([0.5, 1.2, 9.8])
pred = np.array([0.6, 1.0, 10.0])
cost(pred, gold)
gold = np.array([0.5, 1.2, 9.8])
pred = np.array([9.3, 4.0, 99.0])
cost(pred, gold)
###Output
_____no_output_____
###Markdown
Representing OR Boolean---Lets consider the problem of the OR boolean and apply the perceptron with simple gradient descent. | x2 | x3 | y | |:--:|:--:|:--:|| 0 | 0 | 0 || 0 | 1 | 1 | | 1 | 0 | 1 | | 1 | 1 | 1 |
###Code
X = or_input = np.array([[0,0], [0,1], [1,0], [1,1]])
Y = or_output = np.array([[0,1,1,1]]).T
or_input
or_output
# Define the shape of the weight vector.
num_data, input_dim = or_input.shape
# Define the shape of the output vector.
output_dim = len(or_output.T)
print('Inputs\n======')
print('no. of rows =', num_data)
print('no. of cols =', input_dim)
print('\n')
print('Outputs\n=======')
print('no. of cols =', output_dim)
# Initialize weights between the input layers and the perceptron
W = np.random.random((input_dim, output_dim))
W
###Output
_____no_output_____
###Markdown
Step 2a: Multiply the weights vector with the inputs, sum the products====To get the output of step 2a, - Iterate through each row of the data, `X` - For each column in each row, find the product of the value and the respective weights - For each row, compute the sum of the products
###Code
# If we write it imperatively:
summation = []
for row in X:
sum_wx = 0
for feature, weight in zip(row, W):
sum_wx += feature * weight
summation.append(sum_wx)
print(np.array(summation))
# If we vectorize the process and use numpy.
np.dot(X, W)
###Output
_____no_output_____
###Markdown
Train the Single-Layer Model====
###Code
num_epochs = 10000 # No. of times to iterate.
learning_rate = 0.03 # How large a step to take per iteration.
# Lets standardize and call our inputs X and outputs Y
X = or_input
Y = or_output
for _ in range(num_epochs):
layer0 = X
# Step 2a: Multiply the weights vector with the inputs, sum the products, i.e. s
# Step 2b: Put the sum through the sigmoid, i.e. f()
# Inside the perceptron, Step 2.
layer1 = sigmoid(np.dot(X, W))
# Back propagation.
# Step 3a: Compute the errors, i.e. difference between expected output and predictions
# How much did we miss?
layer1_error = cost(layer1, Y)
# Step 3b: Multiply the error with the derivatives to get the delta
# multiply how much we missed by the slope of the sigmoid at the values in layer1
layer1_delta = layer1_error * sigmoid_derivative(layer1)
# Step 3c: Multiply the delta vector with the inputs, sum the product (use np.dot)
# Step 4: Multiply the learning rate with the output of Step 3c.
W += learning_rate * np.dot(layer0.T, layer1_delta)
layer1
# Expected output.
Y
# On the training data
[[int(prediction > 0.5)] for prediction in layer1]
###Output
_____no_output_____
###Markdown
Let's try the XOR Boolean---Let's consider the problem of the XOR boolean and apply the perceptron with simple gradient descent. | x2 | x3 | y | |:--:|:--:|:--:|| 0 | 0 | 0 || 0 | 1 | 1 | | 1 | 0 | 1 | | 1 | 1 | 0 |
###Code
X = xor_input = np.array([[0,0], [0,1], [1,0], [1,1]])
Y = xor_output = np.array([[0,1,1,0]]).T
xor_input
xor_output
num_epochs = 10000 # No. of times to iterate.
learning_rate = 0.003 # How large a step to take per iteration.
# Lets drop the last row of data and use that as unseen test.
X = xor_input
Y = xor_output
# Define the shape of the weight vector.
num_data, input_dim = X.shape
# Define the shape of the output vector.
output_dim = len(Y.T)
# Initialize weights between the input layers and the perceptron
W = np.random.random((input_dim, output_dim))
for _ in range(num_epochs):
layer0 = X
# Forward propagation.
# Inside the perceptron, Step 2.
layer1 = sigmoid(np.dot(X, W))
# How much did we miss?
layer1_error = cost(layer1, Y)
# Back propagation.
# multiply how much we missed by the slope of the sigmoid at the values in layer1
layer1_delta = sigmoid_derivative(layer1) * layer1_error
# update weights
W += learning_rate * np.dot(layer0.T, layer1_delta)
# Expected output.
Y
# On the training data
[int(prediction > 0.5) for prediction in layer1] # All correct.
###Output
_____no_output_____
###Markdown
You can't represent XOR with a simple perceptron!====No matter how you change the hyperparameters or the data, the XOR function can't be represented by a single perceptron layer: XOR is not linearly separable, so no single weighted sum passed through a monotonic activation can map all four data points to the correct outputs. Solving XOR (Add more layers)====
###Code
from itertools import chain
import numpy as np
np.random.seed(0)
def sigmoid(x): # Returns values that sums to one.
return 1 / (1 + np.exp(-x))
def sigmoid_derivative(sx):
# See https://math.stackexchange.com/a/1225116
return sx * (1 - sx)
# Cost functions.
def cost(predicted, truth):
return truth - predicted
xor_input = np.array([[0,0], [0,1], [1,0], [1,1]])
xor_output = np.array([[0,1,1,0]]).T
# Define the shape of the weight vector.
num_data, input_dim = X.shape
# Lets set the dimensions for the intermediate layer.
hidden_dim = 5
# Initialize weights between the input layers and the hidden layer.
W1 = np.random.random((input_dim, hidden_dim))
# Define the shape of the output vector.
output_dim = len(Y.T)
# Initialize weights between the hidden layers and the output layer.
W2 = np.random.random((hidden_dim, output_dim))
# Initialize weigh
num_epochs = 10000
learning_rate = 0.03
for epoch_n in range(num_epochs):
layer0 = X
# Forward propagation.
# Inside the perceptron, Step 2.
layer1 = sigmoid(np.dot(layer0, W1))
layer2 = sigmoid(np.dot(layer1, W2))
# Back propagation (Y -> layer2)
# How much did we miss in the predictions?
layer2_error = cost(layer2, Y)
# In what direction is the target value?
# Were we really close? If so, don't change too much.
layer2_delta = layer2_error * sigmoid_derivative(layer2)
# Back propagation (layer2 -> layer1)
# How much did each layer1 value contribute to the layer2 error (according to the weights)?
layer1_error = np.dot(layer2_delta, W2.T)
layer1_delta = layer1_error * sigmoid_derivative(layer1)
# update weights
W2 += learning_rate * np.dot(layer1.T, layer2_delta)
W1 += learning_rate * np.dot(layer0.T, layer1_delta)
##print(epoch_n, list((layer2)))
# Training input.
X
# Expected output.
Y
layer2 # Our output layer
# On the training data
[int(prediction > 0.5) for prediction in layer2]
###Output
_____no_output_____
###Markdown
Now try adding another layer====Use the same process: 1. Initialize 2. Forward Propagate 3. Back Propagate 4. Update (aka step)
###Code
from itertools import chain
import numpy as np
np.random.seed(0)
def sigmoid(x): # Returns values that sums to one.
return 1 / (1 + np.exp(-x))
def sigmoid_derivative(sx):
# See https://math.stackexchange.com/a/1225116
return sx * (1 - sx)
# Cost functions.
def cost(predicted, truth):
return truth - predicted
xor_input = np.array([[0,0], [0,1], [1,0], [1,1]])
xor_output = np.array([[0,1,1,0]]).T
X
Y
# Define the shape of the weight vector.
num_data, input_dim = X.shape
# Lets set the dimensions for the intermediate layer.
layer0to1_hidden_dim = 5
layer1to2_hidden_dim = 5
# Initialize weights between the input layers 0 -> layer 1
W1 = np.random.random((input_dim, layer0to1_hidden_dim))
# Initialize weights between the layer 1 -> layer 2
W2 = np.random.random((layer0to1_hidden_dim, layer1to2_hidden_dim))
# Define the shape of the output vector.
output_dim = len(Y.T)
# Initialize weights between the layer 2 -> layer 3
W3 = np.random.random((layer1to2_hidden_dim, output_dim))
# Initialize weigh
num_epochs = 10000
learning_rate = 1.0
for epoch_n in range(num_epochs):
layer0 = X
# Forward propagation.
# Inside the perceptron, Step 2.
layer1 = sigmoid(np.dot(layer0, W1))
layer2 = sigmoid(np.dot(layer1, W2))
layer3 = sigmoid(np.dot(layer2, W3))
# Back propagation (Y -> layer2)
# How much did we miss in the predictions?
layer3_error = cost(layer3, Y)
# In what direction is the target value?
# Were we really close? If so, don't change too much.
layer3_delta = layer3_error * sigmoid_derivative(layer3)
# Back propagation (layer2 -> layer1)
# How much did each layer1 value contribute to the layer3 error (according to the weights)?
layer2_error = np.dot(layer3_delta, W3.T)
layer2_delta = layer3_error * sigmoid_derivative(layer2)
# Back propagation (layer2 -> layer1)
# How much did each layer1 value contribute to the layer2 error (according to the weights)?
layer1_error = np.dot(layer2_delta, W2.T)
layer1_delta = layer1_error * sigmoid_derivative(layer1)
# update weights
W3 += learning_rate * np.dot(layer2.T, layer3_delta)
W2 += learning_rate * np.dot(layer1.T, layer2_delta)
W1 += learning_rate * np.dot(layer0.T, layer1_delta)
Y
layer3
# On the training data
[int(prediction > 0.5) for prediction in layer3]
###Output
_____no_output_____
###Markdown
Now, let's do it with PyTorch. First, let's try a single perceptron and see that we can't train a model that can represent XOR.
###Code
from tqdm import tqdm
import torch
from torch import nn
from torch.autograd import Variable
from torch import FloatTensor
from torch import optim
use_cuda = torch.cuda.is_available()
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set_style("darkgrid")
sns.set(rc={'figure.figsize':(15, 10)})
X # Original XOR X input in numpy array data structure.
Y # Original XOR Y output in numpy array data structure.
device = 'gpu' if torch.cuda.is_available() else 'cpu'
# Converting the X to PyTorch-able data structure.
X_pt = torch.tensor(X).float()
X_pt = X_pt.to(device)
# Converting the Y to PyTorch-able data structure.
Y_pt = torch.tensor(Y, requires_grad=False).float()
Y_pt = Y_pt.to(device)
print(X_pt)
print(Y_pt)
# Use tensor.shape to get the shape of the matrix/tensor.
num_data, input_dim = X_pt.shape
print('Inputs Dim:', input_dim)
num_data, output_dim = Y_pt.shape
print('Output Dim:', output_dim)
# Use Sequential to define a simple feed-forward network.
model = nn.Sequential(
nn.Linear(input_dim, output_dim), # Use nn.Linear to get our simple perceptron
nn.Sigmoid() # Use nn.Sigmoid to get our sigmoid non-linearity
)
model
# Remember we define as: cost = truth - predicted
# If we take the absolute of cost, i.e.: cost = |truth - predicted|
# we get the L1 loss function.
criterion = nn.L1Loss()
learning_rate = 0.03
# The simple weights/parameters update processes we did before
# is call the gradient descent. SGD is the sochastic variant of
# gradient descent.
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
###Output
_____no_output_____
###Markdown
(**Note:** Personally, I strongly encourage you to go through the [University of Washington course on machine learning regression](https://www.coursera.org/learn/ml-regression) to better understand the fundamentals of (i) ***gradient***, (ii) ***loss*** and (iii) ***optimizer***. But given that you know how to code it, the process of more complex variants of gradient/loss computation and the optimizer's step is easy to grasp) Training a PyTorch model====To train a model using PyTorch, we simply iterate through the no. of epochs and imperatively state the computations we want to perform. Remember the steps? 1. Initialize 2. Forward Propagation 3. Backward Propagation 4. Update Optimizer
###Code
num_epochs = 1000
# Step 1: Initialization.
# Note: When using PyTorch a lot of the manual weights
# initialization is done automatically when we define
# the model (aka architecture)
model = nn.Sequential(
nn.Linear(input_dim, output_dim),
nn.Sigmoid())
criterion = nn.MSELoss()
learning_rate = 1.0
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
num_epochs = 10000
losses = []
for i in tqdm(range(num_epochs)):
# Reset the gradient after every epoch.
optimizer.zero_grad()
# Step 2: Foward Propagation
predictions = model(X_pt)
# Step 3: Back Propagation
# Calculate the cost between the predictions and the truth.
loss_this_epoch = criterion(predictions, Y_pt)
# Note: The neat thing about PyTorch is it does the
# auto-gradient computation, no more manually defining
# derivative of functions and manually propagating
# the errors layer by layer.
loss_this_epoch.backward()
# Step 4: Optimizer take a step.
# Note: Previously, we have to manually update the
# weights of each layer individually according to the
# learning rate and the layer delta.
# PyTorch does that automatically =)
optimizer.step()
# Log the loss value as we proceed through the epochs.
losses.append(loss_this_epoch.data.item())
# Visualize the losses
plt.plot(losses)
plt.show()
for _x, _y in zip(X_pt, Y_pt):
prediction = model(_x)
print('Input:\t', list(map(int, _x)))
print('Pred:\t', int(prediction))
print('Ouput:\t', int(_y))
print('######')
###Output
Input: [0, 0]
Pred: 0
Ouput: 0
######
Input: [0, 1]
Pred: 0
Ouput: 1
######
Input: [1, 0]
Pred: 0
Ouput: 1
######
Input: [1, 1]
Pred: 0
Ouput: 0
######
###Markdown
Now, try again with 2 layers using PyTorch====
###Code
%%time
hidden_dim = 5
num_data, input_dim = X_pt.shape
num_data, output_dim = Y_pt.shape
model = nn.Sequential(nn.Linear(input_dim, hidden_dim),
nn.Sigmoid(),
nn.Linear(hidden_dim, output_dim),
nn.Sigmoid())
criterion = nn.MSELoss()
learning_rate = 0.3
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
num_epochs = 5000
losses = []
for _ in tqdm(range(num_epochs)):
optimizer.zero_grad()
predictions = model(X_pt)
loss_this_epoch = criterion(predictions, Y_pt)
loss_this_epoch.backward()
optimizer.step()
losses.append(loss_this_epoch.data.item())
##print([float(_pred) for _pred in predictions], list(map(int, Y_pt)), loss_this_epoch.data[0])
# Visualize the losses
plt.plot(losses)
plt.show()
for _x, _y in zip(X_pt, Y_pt):
prediction = model(_x)
print('Input:\t', list(map(int, _x)))
print('Pred:\t', int(prediction > 0.5))
print('Ouput:\t', int(_y))
print('######')
###Output
Input: [0, 0]
Pred: 0
Ouput: 0
######
Input: [0, 1]
Pred: 1
Ouput: 1
######
Input: [1, 0]
Pred: 1
Ouput: 1
######
Input: [1, 1]
Pred: 0
Ouput: 0
######
###Markdown
MNIST: The "Hello World" of Neural Nets====Like any deep learning class, we ***must*** do the MNIST. The MNIST dataset - is made up of handwritten digits - has a training set of 60,000 examples - has a test set of 10,000 examples
###Code
# We're going to install torchvision here because its dataset access is simpler =)
!pip3 install torchvision
from torchvision import datasets, transforms
mnist_train = datasets.MNIST('../data', train=True, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
mnist_test = datasets.MNIST('../data', train=False, download=True,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
# Visualization Candies
import matplotlib.pyplot as plt
def show_image(mnist_x_vector, mnist_y_vector):
pixels = mnist_x_vector.reshape((28, 28))
label = np.where(mnist_y_vector == 1)[0]
plt.title('Label is {}'.format(label))
plt.imshow(pixels, cmap='gray')
plt.show()
# Fifth image and label.
show_image(mnist_train.data[5], mnist_train.targets[5])
###Output
_____no_output_____
###Markdown
Let's apply what we learned about the multi-layered perceptron with PyTorch to the MNIST data.
###Code
X_mnist = mnist_train.data.float()
Y_mnist = mnist_train.targets.float()
X_mnist_test = mnist_test.data.float()
Y_mnist_test = mnist_test.targets.float()
Y_mnist.shape
# Use FloatTensor.shape to get the shape of the matrix/tensor.
num_data, *input_dim = X_mnist.shape
print('No. of images:', num_data)
print('Inputs Dim:', input_dim)
num_data, *output_dim = Y_mnist.shape
num_test_data, *output_dim = Y_mnist_test.shape
print('Output Dim:', output_dim)
# Flatten the dimensions of the images.
X_mnist = mnist_train.data.float().view(num_data, -1)
Y_mnist = mnist_train.targets.float().unsqueeze(1)
X_mnist_test = mnist_test.data.float().view(num_test_data, -1)
Y_mnist_test = mnist_test.targets.float().unsqueeze(1)
# Use FloatTensor.shape to get the shape of the matrix/tensor.
num_data, *input_dim = X_mnist.shape
print('No. of images:', num_data)
print('Inputs Dim:', input_dim)
num_data, *output_dim = Y_mnist.shape
num_test_data, *output_dim = Y_mnist_test.shape
print('Output Dim:', output_dim)
hidden_dim = 500
model = nn.Sequential(nn.Linear(784, 1),
nn.Sigmoid())
criterion = nn.MSELoss()
learning_rate = 1.0
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
num_epochs = 100
losses = []
plt.ion()
for _e in tqdm(range(num_epochs)):
optimizer.zero_grad()
predictions = model(X_mnist)
loss_this_epoch = criterion(predictions, Y_mnist)
loss_this_epoch.backward()
optimizer.step()
##print([float(_pred) for _pred in predictions], list(map(int, Y_pt)), loss_this_epoch.data[0])
losses.append(loss_this_epoch.data.item())
clear_output(wait=True)
plt.plot(losses)
plt.pause(0.05)
predictions = model(X_mnist_test)
predictions
pred = np.array([np.argmax(_p) for _p in predictions.data.numpy()])
pred
truth = np.array([np.argmax(_p) for _p in Y_mnist_test.data.numpy()])
truth
(pred == truth).sum() / len(pred)
###Output
_____no_output_____ |
2021-fall/challenge-4/challenge-4.ipynb | ###Markdown
IBM Quantum Challenge Fall 2021 Challenge 4: Battery revenue optimization We recommend that you switch to **light** workspace theme under the Account menu in the upper right corner for optimal experience. Introduction to QAOAWhen it comes to optimization problems, a well-known algorithm for finding approximate solutions to combinatorial-optimization problems is **QAOA (Quantum approximate optimization algorithm)**. You may have already used it once in the finance exercise of Challenge-1, but still don't know what it is. In this challlenge we will further learn about QAOA----how does it work? Why we need it?First off, what is QAOA? Simply put, QAOA is a classical-quantum hybrid algorithm that combines a parametrized quantum circuit known as ansatz, and a classical part to optimize those circuits proposed by Farhi, Goldstone, and Gutmann (2014)[**[1]**](https://arxiv.org/abs/1411.4028). It is a variational algorithm that uses a unitary $U(\boldsymbol{\beta}, \boldsymbol{\gamma})$ characterized by the parameters $(\boldsymbol{\beta}, \boldsymbol{\gamma})$ to prepare a quantum state $|\psi(\boldsymbol{\beta}, \boldsymbol{\gamma})\rangle$. The goal of the algorithm is to find optimal parameters $(\boldsymbol{\beta}_{opt}, \boldsymbol{\gamma}_{opt})$ such that the quantum state $|\psi(\boldsymbol{\beta}_{opt}, \boldsymbol{\gamma}_{opt})\rangle$ encodes the solution to the problem. The unitary $U(\boldsymbol{\beta}, \boldsymbol{\gamma})$ has a specific form and is composed of two unitaries $U(\boldsymbol{\beta}) = e^{-i \boldsymbol{\beta} H_B}$ and $U(\boldsymbol{\gamma}) = e^{-i \boldsymbol{\gamma} H_P}$ where $H_{B}$ is the mixing Hamiltonian and $H_{P}$ is the problem Hamiltonian. Such a choice of unitary drives its inspiration from a related scheme called quantum annealing.The state is prepared by applying these unitaries as alternating blocks of the two unitaries applied $p$ times such that $$\lvert \psi(\boldsymbol{\beta}, \boldsymbol{\gamma}) \rangle = \underbrace{U(\boldsymbol{\beta}) U(\boldsymbol{\gamma}) \cdots U(\boldsymbol{\beta}) U(\boldsymbol{\gamma})}_{p \; \text{times}} \lvert \psi_0 \rangle$$where $|\psi_{0}\rangle$ is a suitable initial state.The QAOA implementation of Qiskit directly extends VQE and inherits VQE’s general hybrid optimization structure.To learn more about QAOA, please refer to the [**QAOA chapter**](https://qiskit.org/textbook/ch-applications/qaoa.html) of Qiskit Textbook. **Goal**Implement the quantum optimization code for the battery revenue problem. **Plan**First, you will learn about QAOA and knapsack problem.**Challenge 4a** - Simple knapsack problem with QAOA: familiarize yourself with a typical knapsack problem and find the optimized solution with QAOA.**Final Challenge 4b** - Battery revenue optimization with Qiskit knapsack class: learn the battery revenue optimization problem and find the optimized solution with QAOA. You can receive a badge for solving all the challenge exercises up to 4b.**Final Challenge 4c** - Battery revenue optimization with your own quantum circuit: implement the battery revenue optimization problem to find the lowest circuit cost and circuit depth. Achieve better accuracy with smaller circuits. 
you can obtain a score with ranking by solving this exercise.Before you begin, we recommend watching the [**Qiskit Optimization Demo Session with Atsushi Matsuo**](https://youtu.be/claoY57eVIc?t=104) and check out the corresponding [**demo notebook**](https://github.com/qiskit-community/qiskit-application-modules-demo-sessions/tree/main/qiskit-optimization) to learn how to do classifications using QSVM. As we just mentioned, QAOA is an algorithm which can be used to find approximate solutions to combinatorial optimization problems, which includes many specific problems, such as:- TSP (Traveling Salesman Problem) problem- Vehicle routing problem- Set cover problem- Knapsack problem- Scheduling problems,etc. Some of them are hard to solve (or in another word, they are NP-hard problems), and it is impractical to find their exact solutions in a reasonable amount of time, and that is why we need the approximate algorithm. Next, we will introduce an instance of using QAOA to solve one of the combinatorial optimization problems----**knapsack problem**. Knapsack Problem [**Knapsack Problem**](https://en.wikipedia.org/wiki/Knapsack_problem) is an optimization problem that goes like this: given a list of items that each has a weight and a value and a knapsack that can hold a maximum weight. Determine which items to take in the knapsack so as to maximize the total value taken without exceeding the maiximum weight the knapsack can hold. The most efficient approach would be a greedy approach, but that is not guaranteed to give the best result. Image source: [Knapsack.svg.](https://commons.wikimedia.org/w/index.php?title=File:Knapsack.svg&oldid=457280382)Note: Knapsack problem have many variations, here we will only discuss the 0-1 Knapsack problem: either take an item or not (0-1 property), which is a NP-hard problem. We can not divide one item, or take multiple same items. Challenge 4a: Simple knapsack problem with QAOA **Challenge 4a** You are given a knapsack with a capacity of 18 and 5 pieces of luggage. When the weights of each piece of luggage $W$ is $w_i = [4,5,6,7,8]$ and the value $V$ is $v_i = [5,6,7,8,9]$, find the packing method that maximizes the sum of the values of the luggage within the capacity limit of 18.
###Code
from qiskit_optimization.algorithms import MinimumEigenOptimizer
from qiskit import Aer
from qiskit.utils import algorithm_globals, QuantumInstance
from qiskit.algorithms import QAOA, NumPyMinimumEigensolver
import numpy as np
###Output
_____no_output_____
###Markdown
Dynamic Programming Approach A typical classical method for finding an exact solution is Dynamic Programming, which works as follows:
###Code
val = [5,6,7,8,9]
wt = [4,5,6,7,8]
W = 18
def dp(W, wt, val, n):
    # k[i][w] = best value achievable using the first i items with capacity w
    k = [[0 for x in range(W + 1)] for x in range(n + 1)]
    for i in range(n + 1):
        for w in range(W + 1):
            if i == 0 or w == 0:
                k[i][w] = 0
            elif wt[i-1] <= w:
                # either take item i-1 or leave it, whichever gives more value
                k[i][w] = max(val[i-1] + k[i-1][w-wt[i-1]], k[i-1][w])
            else:
                k[i][w] = k[i-1][w]
    # trace back which items were picked
    picks = [0 for x in range(n)]
    volume = W
    for i in range(n, 0, -1):
        if k[i][volume] > k[i-1][volume]:
            picks[i-1] = 1
            volume -= wt[i-1]
    return k[n][W], picks
n = len(val)
optimal_value, picks = dp(W, wt, val, n)  # compute once instead of re-running dp inside the loop
print("optimal value:", optimal_value)
print('\n index of the chosen items:')
for i in range(n):
    if picks[i]:
        print(i, end=' ')
###Output
optimal value: 21
index of the chosen items:
1 2 3
###Markdown
The time complexity of this method is $O(N \cdot W)$, where $N$ is the number of items and $W$ is the maximum weight of the knapsack. We can solve this problem using an exact solution approach within a reasonable time since the number of combinations is limited, but when the number of items becomes huge, it will be impractical to deal with using an exact solution approach. QAOA approach Qiskit provides application classes for various optimization problems, including the knapsack problem, so that users can easily try various optimization problems on quantum computers. In this exercise, we are going to use the application class for the `Knapsack` problem. There are application classes for other optimization problems available as well. See [**Application Classes for Optimization Problems**](https://qiskit.org/documentation/optimization/tutorials/09_application_classes.html#Knapsack-problem) for details.
###Code
# import packages necessary for application classes.
from qiskit_optimization.applications import Knapsack
###Output
_____no_output_____
###Markdown
To represent the knapsack problem as an optimization problem that can be solved by QAOA, we need to formulate the cost function for this problem.
###Code
def knapsack_quadratic_program():
# Put values, weights and max_weight parameter for the Knapsack()
##############################
# Provide your code here
prob = Knapsack(val, wt, W)
#
##############################
# to_quadratic_program generates a corresponding QuadraticProgram of the instance of the knapsack problem.
kqp = prob.to_quadratic_program()
return prob, kqp
prob,quadratic_program=knapsack_quadratic_program()
quadratic_program
# print(prob)
###Output
_____no_output_____
###Markdown
We can solve the problem using the classical `NumPyMinimumEigensolver` to find the minimum eigenvector; this is useful as an exact reference solution without resorting to Dynamic Programming. We can also apply QAOA.
###Code
# Numpy Eigensolver
meo = MinimumEigenOptimizer(min_eigen_solver=NumPyMinimumEigensolver())
result = meo.solve(quadratic_program)
print('result:\n', result)
print('\n index of the chosen items:', prob.interpret(result))
# QAOA
seed = 123
algorithm_globals.random_seed = seed
qins = QuantumInstance(backend=Aer.get_backend('qasm_simulator'), shots=1000, seed_simulator=seed, seed_transpiler=seed)
meo = MinimumEigenOptimizer(min_eigen_solver=QAOA(reps=1, quantum_instance=qins))
result = meo.solve(quadratic_program)
print('result:\n', result)
print('\n index of the chosen items:', prob.interpret(result))
###Output
result:
optimal function value: 21.0
optimal value: [0. 1. 1. 1. 0.]
status: SUCCESS
index of the chosen items: [1, 2, 3]
###Markdown
You will submit the quadratic program created by your `knapsack_quadratic_program` function.
###Code
# Check your answer and submit using the following code
from qc_grader import grade_ex4a
grade_ex4a(quadratic_program)
###Output
Submitting your answer for 4a. Please wait...
Congratulations 🎉! Your answer is correct and has been submitted.
###Markdown
Note: QAOA finds approximate solutions, so the solution given by QAOA is not always optimal. Battery Revenue Optimization Problem In this exercise we will use a quantum algorithm to solve a real-world instance of a combinatorial optimization problem: the battery revenue optimization problem. Battery storage systems have provided a solution to flexibly integrate large-scale renewable energy (such as wind and solar) in a power system. The revenues from batteries come from different types of services sold to the grid. The process of energy trading of battery storage assets is as follows: A regulator asks each battery supplier to choose a market in advance for each time window. Then, the battery operator will charge the battery with renewable energy and release the energy to the grid depending on pre-agreed contracts. The supplier therefore makes forecasts on the return and the number of charge/discharge cycles for each time window to optimize its overall return. How to maximize the revenue of battery-based energy storage is a concern of all battery storage investors. Choosing to let the battery always supply power to the market which pays the most in every time window might be a simple guess, but in reality, we have to consider many other factors. What we cannot ignore is the aging of batteries, also known as **degradation**. As the battery charge/discharge cycle progresses, the battery capacity will gradually degrade (the amount of energy a battery can store, or the amount of power it can deliver, will permanently reduce). After a number of cycles, the battery will reach the end of its usefulness. Since the performance of a battery decreases while it is used, choosing the best cash return for every time window one after the other, without considering the degradation, does not lead to an optimal return over the lifetime of the battery, i.e. before the maximum number of charge/discharge cycles is reached.Therefore, in order to optimize the revenue of the battery, what we have to do is to select the market for the battery in each time window taking both **the returns on these markets (value)**, based on price forecasts, and the expected battery **degradation over time (cost)** into account: it sounds like solving a common optimization problem, right?We will investigate how quantum optimization algorithms could be adapted to tackle this problem.Image source: [pixabay](https://pixabay.com/photos/renewable-energy-environment-wind-1989416/) Problem SettingHere, we have referred to the problem setting in de la Grand'rive and Hullo's paper [**[2]**](https://arxiv.org/abs/1908.02210).Considering two markets $M_{1}$, $M_{2}$: during every time window (typically a day), the battery operates on one or the other market, for a maximum of $n$ time windows. Every day is considered independent and the intraday optimization is a standalone problem: every morning the battery starts with the same level of power, so that we don't consider charging problems. Forecasts on both markets being available for the $n$ time windows, we assume known for each time window $t$ (day) and for each market:- the daily returns $\lambda_{1}^{t}$, $\lambda_{2}^{t}$- the daily degradation, or health cost (number of cycles), for the battery $c_{1}^{t}$, $c_{2}^{t}$ We want to find the optimal schedule, i.e. optimize the lifetime return with a cost less than $C_{max}$ cycles. 
We introduce $d = max_{t}\left\{c_{1}^{t}, c_{2}^{t}\right\} $.We introduce the decision variable $z_{t}, \forall t \in [1, n]$ such that $z_{t} = 0$ if the supplier chooses $M_{1}$, and $z_{t} = 1$ if the supplier chooses $M_{2}$, with every possible vector $z = [z_{1}, ..., z_{n}]$ being a possible schedule. The previously formulated problem can then be expressed as:\begin{equation}\underset{z \in \left\{0,1\right\}^{n}}{max} \displaystyle\sum_{t=1}^{n}(1-z_{t})\lambda_{1}^{t}+z_{t}\lambda_{2}^{t}\end{equation}\begin{equation} s.t. \sum_{t=1}^{n}[(1-z_{t})c_{1}^{t}+z_{t}c_{2}^{t}]\leq C_{max}\end{equation} This does not look like one of the well-known combinatorial optimization problems, but no worries! We will give hints on how to solve this problem with quantum computing step by step. Challenge 4b: Battery revenue optimization with Qiskit knapsack class **Challenge 4b** We will optimize the battery schedule using the Qiskit optimization knapsack class with QAOA to maximize the total return with a cost within $C_{max}$ under the following conditions: - the time window $t = 7$- the daily return $\lambda_{1} = [5, 3, 3, 6, 9, 7, 1]$- the daily return $\lambda_{2} = [8, 4, 5, 12, 10, 11, 2]$- the daily degradation for the battery $c_{1} = [1, 1, 2, 1, 1, 1, 2]$- the daily degradation for the battery $c_{2} = [3, 2, 3, 2, 4, 3, 3]$- $C_{max} = 16$ Your task is to find the arguments `values`, `weights`, and `max_weight` used for the Qiskit optimization knapsack class, to get a solution in which "0" denotes the choice of market $M_{1}$ and "1" denotes the choice of market $M_{2}$. We will check your answer with another data set of $\lambda_{1}, \lambda_{2}, c_{1}, c_{2}, C_{max}$. You can receive a badge for solving all the challenge exercises up to 4b.
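One way to see how this maps onto a knapsack instance (this reformulation is not spelled out in the challenge text above; we add it here because it is the reasoning used in the next cell): writing $(1-z_{t})\lambda_{1}^{t}+z_{t}\lambda_{2}^{t} = \lambda_{1}^{t} + z_{t}(\lambda_{2}^{t}-\lambda_{1}^{t})$ and doing the same for the costs, the problem becomes\begin{equation}\underset{z \in \left\{0,1\right\}^{n}}{max} \sum_{t=1}^{n}\lambda_{1}^{t} + \sum_{t=1}^{n} z_{t}(\lambda_{2}^{t}-\lambda_{1}^{t}) \quad s.t. \quad \sum_{t=1}^{n} z_{t}(c_{2}^{t}-c_{1}^{t}) \leq C_{max} - \sum_{t=1}^{n}c_{1}^{t}\end{equation}Since $\sum_{t}\lambda_{1}^{t}$ is a constant, this is a 0-1 knapsack problem whose item values are $\lambda_{2}^{t}-\lambda_{1}^{t}$, whose item weights are $c_{2}^{t}-c_{1}^{t}$, and whose capacity is $C_{max} - \sum_{t}c_{1}^{t}$.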
###Code
L1 = [5,3,3,6,9,7,1]
L2 = [8,4,5,12,10,11,2]
C1 = [1,1,2,1,1,1,2]
C2 = [3,2,3,2,4,3,3]
C_max = 16
def knapsack_argument(L1, L2, C1, C2, C_max):
    ##############################
    # Relaxation to a knapsack instance (see the reformulation above):
    # always choosing M1 yields the baseline return sum(L1) at the unavoidable cost sum(C1).
    # Switching window i to M2 adds (L2[i] - L1[i]) extra return (item value)
    # at (C2[i] - C1[i]) extra degradation (item weight),
    # and the remaining capacity for these extras is C_max - sum(C1).
    values = [l2 - l1 for l1, l2 in zip(L1, L2)]
    weights = [c2 - c1 for c1, c2 in zip(C1, C2)]
    max_weight = C_max - sum(C1)
    ##############################
    return values, weights, max_weight
values, weights, max_weight = knapsack_argument(L1, L2, C1, C2, C_max)
print(values, weights, max_weight)
prob = Knapsack(values = values, weights = weights, max_weight = max_weight)
qp = prob.to_quadratic_program()
qp
# Check your answer and submit using the following code
from qc_grader import grade_ex4b
grade_ex4b(knapsack_argument)
###Output
_____no_output_____
###Markdown
We can solve the problem using QAOA.
###Code
# QAOA
seed = 123
algorithm_globals.random_seed = seed
qins = QuantumInstance(backend=Aer.get_backend('qasm_simulator'), shots=1000, seed_simulator=seed, seed_transpiler=seed)
meo = MinimumEigenOptimizer(min_eigen_solver=QAOA(reps=1, quantum_instance=qins))
result = meo.solve(qp)
print('result:', result.x)
item = np.array(result.x)
revenue=0
for i in range(len(item)):
if item[i]==0:
revenue+=L1[i]
else:
revenue+=L2[i]
print('total revenue:', revenue)
###Output
_____no_output_____
###Markdown
Challenge 4c: Battery revenue optimization with adiabatic quantum computationHere we come to the final exercise! The final challenge is for people to compete in the ranking. BackgroundQAOA was developed with inspiration from adiabatic quantum computation. In adiabatic quantum computation, based on the quantum adiabatic theorem, the ground state of a given Hamiltonian can ideally be obtained. Therefore, by mapping the optimization problem to this Hamiltonian, it is possible to solve the optimization problem with adiabatic quantum computation.Although the computational equivalence of adiabatic quantum computation and quantum circuits has been shown, simulating adiabatic quantum computation on quantum circuits involves a large number of gate operations, which is difficult to achieve with current noisy devices. QAOA solves this problem by using a quantum-classical hybrid approach.In this extra challenge, you will be asked to implement a quantum circuit that solves an optimization problem without classical optimization, based on this adiabatic quantum computation framework. In other words, the circuit you build is expected to give a good approximate solution in a single run.Instead of using the Qiskit Optimization Module and Knapsack class, let's try to implement a quantum circuit with as few gate operations as possible, that is, as small as possible. By relaxing the constraints of the optimization problem, it is possible to find the optimum solution with a smaller circuit. We recommend that you follow the solution tips.**Challenge 4c**We will optimize the battery schedule using adiabatic quantum computation to maximize the total return with a cost within $C_{max}$ under the following conditions:- the time window $t = 11$- the daily return $\lambda_{1} = [3, 7, 3, 4, 2, 6, 2, 2, 4, 6, 6]$- the daily return $\lambda_{2} = [7, 8, 7, 6, 6, 9, 6, 7, 6, 7, 7]$- the daily degradation for the battery $c_{1} = [2, 2, 2, 3, 2, 4, 2, 2, 2, 2, 2]$- the daily degradation for the battery $c_{2} = [4, 3, 3, 4, 4, 5, 3, 4, 4, 3, 4]$- $C_{max} = 33$ - **Note:** $\lambda_{1}[i] < \lambda_{2}[i]$ and $c_{1}[i] < c_{2}[i]$ hold for $i \in \{1,2,...,t\}$ Letting "0" denote the choice of market $M_{1}$ and "1" denote the choice of market $M_{2}$, the optimal solutions are "00111111000" and "10110111000" with return value $67$ and cost $33$.Your task is to implement an adiabatic quantum computation circuit that meets the accuracy requirement below. We will check your answer with other data sets of $\lambda_{1}, \lambda_{2}, c_{1}, c_{2}, C_{max}$. We show examples of inputs for checking below. We will use inputs similar to these examples.
###Code
instance_examples = [
{
'L1': [3, 7, 3, 4, 2, 6, 2, 2, 4, 6, 6],
'L2': [7, 8, 7, 6, 6, 9, 6, 7, 6, 7, 7],
'C1': [2, 2, 2, 3, 2, 4, 2, 2, 2, 2, 2],
'C2': [4, 3, 3, 4, 4, 5, 3, 4, 4, 3, 4],
'C_max': 33
},
{
'L1': [4, 2, 2, 3, 5, 3, 6, 3, 8, 3, 2],
'L2': [6, 5, 8, 5, 6, 6, 9, 7, 9, 5, 8],
'C1': [3, 3, 2, 3, 4, 2, 2, 3, 4, 2, 2],
'C2': [4, 4, 3, 5, 5, 3, 4, 5, 5, 3, 5],
'C_max': 38
},
{
'L1': [5, 4, 3, 3, 3, 7, 6, 4, 3, 5, 3],
'L2': [9, 7, 5, 5, 7, 8, 8, 7, 5, 7, 9],
'C1': [2, 2, 4, 2, 3, 4, 2, 2, 2, 2, 2],
'C2': [3, 4, 5, 4, 4, 5, 3, 3, 5, 3, 5],
'C_max': 35
}
]
###Output
_____no_output_____
###Markdown
IMPORTANT: Final exercise submission rulesFor solving this problem:- Do not optimize with classical methods.- Create a quantum circuit by filling source code in the functions along the following steps.- As for the parameters $p$ and $\alpha$, please **do not change the values from $p=5$ and $\alpha=1$.**- Please implement the quantum circuit within 28 qubits.- You should submit a function that takes (L1, L2, C1, C2, C_max) as inputs and returns a QuantumCircuit. (You can change the name of the function in your way.)- Your circuit should be able to solve different input values. We will validate your circuit with several inputs. - Create a circuit that gives precision of 0.8 or better with lower cost. The precision is explained below. The lower the cost, the better.- Please **do not run jobs in succession** even if you are concerned that your job is not running properly. This can create a long queue and clog the backend. You can check whether your job is running properly at:[**https://quantum-computing.ibm.com/jobs**](https://quantum-computing.ibm.com/jobs) - Judges will check top 10 solutions manually to see if their solutions adhere to the rules. **Please note that your ranking is subject to change after the challenge period as a result of the judging process.**- Top 10 participants will be recognized and asked to submit a write-up on how they solved the exercise. **Note: In this challenge, please be aware that you should solve the problem with a quantum circuit, otherwise you will not have a rank in the final ranking.** Scoring RuleThe score of the submitted function is computed in two steps.1. In the first step, the precision of the output of your quantum circuit is checked.To pass this step, your circuit should output a probability distribution whose **average precision is more than 0.80** for eight instances; four of them are fixed data, while the remaining four are randomly selected data from multiple datasets.If your circuit cannot satisfy this threshold **0.8**, you will not obtain a score.We will explain how the precision of a probability distribution will be calculated when the submitted quantum circuit solves one instance. 1. This precision evaluates how close the values of the measured feasible solutions are to the optimal value. 2. If the number of measured feasible solutions is very low, the precision will be 0 (Please check **"The number of feasible solutions"** below). Before calculating precision, the values of solutions will be normalized by subtracting the lowest value, so that the precision of the solution with the lowest value is always 0. Let $N_s$, $N_f$, and $\lambda_{opt}$ be the total shots (the number of executions), the shots of measured feasible solutions, and the optimal solution value. Also let $R(x)$ and $C(x)$ be the value and cost of a solution $x\in\{0,1\}^n$ respectively. We normalize the values by subtracting the lowest value of the instance, which can be calculated by the summation of $\lambda_{1}$. Given a probability distribution, the precision is computed with the following formula: \begin{equation*} \text{precision} = \frac 1 {N_f\cdot (\lambda_{opt}-\mathrm{sum}(\lambda_{1}) )} \sum_{x, \text{$\mathrm{shots}_x$}\in \text{ prob.dist.}} (R(x)-\mathrm{sum}(\lambda_{1})) \cdot \text{$\mathrm{shots}_x$} \cdot 1_{C(x) \leq C_{max}} \end{equation*} Here, $\mathrm{shots}_x$ is the number of times the solution $x$ was measured. 
For example, given a probability distribution {"1000101": 26, "1000110": 35, "1000111": 12, "1001000": 16, "1001001": 11} with shots $N_s = 100$, the value and the cost of each solution are listed below. | Solution | Value | Cost | Feasible or not | Shot counts | |:-------:|:-------:|:-------:|:-------:|:--------------:| | 1000101 | 46 | 16 | 1 | 26 | | 1000110 | 48 | 17 | 0 | 35 | | 1000111 | 45 | 15 | 1 | 12 | | 1001000 | 45 | 18 | 0 | 16 | | 1001001 | 42 | 16 | 1 | 11 | Since $C_{max}= 16$, the solutions "1000101", "1000111", and "1001001" are feasible, but the solutions "1000110" and "1001000" are infeasible. So, the number of shots of measured feasible solutions $N_f$ is calculated as $N_f = 26+12+11=49$. And the lowest value is $ \mathrm{sum}(\lambda_{1}) = 5+3+3+6+9+7+1=34$. Therefore, the precision becomes $$((46-34) \cdot 26 \cdot 1 + (48-34) \cdot 35 \cdot 0 + (45-34) \cdot 12 \cdot 1 + (45-34) \cdot 16 \cdot 0 + (42-34) \cdot 11 \cdot 1) / (49\cdot (50-34)) = 0.68$$ **The number of feasible solutions**: If $N_f$ is less than 20 ($ N_f < 20$), the precision will be calculated as 0.2. In the second step, the score of your quantum circuit will be evaluated only if your solution passes the first step.The score is the sum of circuit costs of four instances, where the circuit cost is calculated as below. 1. Transpile the quantum circuit without gate optimization and decompose the gates into the basis gates of "rz", "sx", "cx". 2. Then the score is calculated by \begin{equation*} \text{score} = 50 \cdot depth + 10 \cdot \#(cx) + \#(rz) + \#(sx) \end{equation*} where $\#(gate)$ denotes the number of $gate$ in the circuit. Your circuit will be executed 512 times, which means $N_s = 512$ here.The smaller the score becomes, the higher you will be ranked. General ApproachHere we are making the answer according to the way shown in [**[2]**](https://arxiv.org/abs/1908.02210), which is solving the "relaxed" formulation of the knapsack problem.The relaxed problem can be defined as follows:\begin{equation*}\text{maximize } f(z)=return(z)+penalty(z)\end{equation*}\begin{equation*}\text{where} \quad return(z)=\sum_{t=1}^{n} return_{t}(z) \quad \text{with} \quad return_{t}(z) \equiv\left(1-z_{t}\right) \lambda_{1}^{t}+z_{t} \lambda_{2}^{t}\end{equation*}\begin{equation*}\quad \quad \quad \quad \quad \quad penalty(z)=\left\{\begin{array}{ll}0 & \text{if}\quad cost(z)<C_{\max } \\-\alpha\left(cost(z)-C_{\max }\right) & \text{if} \quad cost(z) \geq C_{\max }, \alpha>0 \quad \text{constant}\end{array}\right.\end{equation*}A non-Ising target function to compute a linear penalty is used here.This may reduce the depth of the circuit while still achieving high accuracy. The basic unit of the relaxed approach consists of the following items.1. Phase Operator $U(C, \gamma_i)$ 1. return part 2. penalty part 1. Cost calculation (data encoding) 2. Constraint testing (marking the indices whose data exceed $C_{max}$) 3. Penalty dephasing (adding penalty to the marked indices) 4. Reinitialization of constraint testing and cost calculation (clean the data register and flag register)2. Mixing Operator $U(B, \beta_i)$This procedure unit $U(B, \beta_i)U(C, \gamma_i)$ will be repeated $p$ times in total in the whole relaxed QAOA procedure.Let's take a look at each function one by one. 
The quantum circuit we are going to make consists of three types of registers: index register, data register, and flag register.Index register and data register are used for QRAM which contain the cost data for every possible choice of battery.Here these registers appear in the function templates named as follows:- `qr_index`: a quantum register representing the index (the choice of 0 or 1 in each time window)- `qr_data`: a quantum register representing the total cost associated with each index- `qr_f`: a quantum register that store the flag for penalty dephasingWe also use the following variables to represent the number of qubits in each register.- `index_qubits`: the number of qubits in `qr_index`- `data_qubits`: the number of qubits in `qr_data` **Challenge 4c - Step 1** Phase Operator $U(C, \gamma_i)$ Return PartThe return part $return (z)$ can be transformed as follows:\begin{equation*}\begin{aligned}e^{-i \gamma_i . return(z)}\left|z\right\rangle &=\prod_{t=1}^{n} e^{-i \gamma_i return_{t}(z)}\left|z\right\rangle \\&=e^{i \theta} \bigotimes_{t=1}^{n} e^{-i \gamma_i z_{t}\left(\lambda_{2}^{t}-\lambda_{1}^{t}\right)}\left|z_{t}\right\rangle \\\text{with}\quad \theta &=\sum_{t=1}^{n} \lambda_{1}^{t}\quad \text{constant}\end{aligned}\end{equation*}Since we can ignore the constant phase rotation, the return part $return (z)$ can be realized by rotation gate $U_1\left(\gamma_i \left(\lambda_{2}^{t}-\lambda_{1}^{t}\right)\right)= e^{-i \frac{\gamma_i \left(\lambda_{2}^{t}-\lambda_{1}^{t}\right)} 2}$ for each qubit.Fill in the blank in the following cell to complete the `phase_return` function.
###Code
from typing import List, Union
import math
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, assemble
from qiskit.compiler import transpile
from qiskit.circuit import Gate
from qiskit.circuit.library.standard_gates import *
from qiskit.circuit.library import QFT
def phase_return(index_qubits: int, gamma: float, L1: list, L2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
qr_index = QuantumRegister(index_qubits, "index")
qc = QuantumCircuit(qr_index)
##############################
### U_1(gamma * (lambda2 - lambda1)) for each qubit ###
# Provide your code here
##############################
return qc.to_gate(label=" phase return ") if to_gate else qc
###Output
_____no_output_____
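###Markdown
As a reference only (the blank above is part of the graded exercise), here is one possible sketch of the return part. The `_example` suffix is ours so that the template above is not overwritten, and the sign of the rotation is an assumption about the phase convention; only the relative phases matter up to a global phase.
###Code
# minimal sketch of the return part: one single-qubit phase rotation per time window,
# proportional to the return difference (lambda2 - lambda1)
from qiskit import QuantumCircuit, QuantumRegister

def phase_return_example(index_qubits: int, gamma: float, L1: list, L2: list, to_gate=True):
    qr_index = QuantumRegister(index_qubits, "index")
    qc = QuantumCircuit(qr_index)
    for i, (l1, l2) in enumerate(zip(L1, L2)):
        # e^{-i*gamma*z_t*(lambda2 - lambda1)} on |z_t>: p() adds a phase only to |1>
        qc.p(-gamma * (l2 - l1), qr_index[i])
    return qc.to_gate(label=" phase return ") if to_gate else qc
###Output
_____no_output_____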
###Markdown
Phase Operator $U(C, \gamma_i)$ Penalty PartIn this part, we consider how to add a penalty to the quantum states in the index register whose total cost exceeds the constraint $C_{max}$.As shown above, this can be realized by the following four steps.1. Cost calculation (data encoding)2. Constraint testing (marking the indices whose data value exceeds $C_{max}$)3. Penalty dephasing (adding a penalty to the marked indices)4. Reinitialization of constraint testing and cost calculation (clean the data register and flag register) **Challenge 4c - Step 2** Cost calculation (data encoding)To represent the sum of costs for every possible choice, we can use a QRAM structure.In order to implement QRAM as a quantum circuit, an addition function is helpful.Here we will first prepare a function for constant value addition.To add a constant value to the data we can use- Series of full adders- Plain adder network [**[3]**](https://arxiv.org/abs/quant-ph/9511018)- Ripple carry adder [**[4]**](https://arxiv.org/abs/quant-ph/0410184)- QFT adder **[[5](https://arxiv.org/abs/quant-ph/0008033), [6](https://arxiv.org/abs/1411.5949)]**- etc...Each adder has its own characteristics. Here, for example, we will briefly explain how to implement the QFT adder, which is less likely to increase the circuit cost when the number of additions increases.1. QFT on the target quantum register2. Local phase rotations on the target quantum register determined by the constant3. IQFT on the target quantum registerFill in the blanks in the following cell to complete the `const_adder` and `subroutine_add_const` functions.
###Code
def subroutine_add_const(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
qc = QuantumCircuit(data_qubits)
##############################
### Phase Rotation ###
# Provide your code here
##############################
return qc.to_gate(label=" [+"+str(const)+"] ") if to_gate else qc
def const_adder(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
qr_data = QuantumRegister(data_qubits, "data")
qc = QuantumCircuit(qr_data)
##############################
### QFT ###
# Provide your code here
##############################
##############################
### Phase Rotation ###
# Use `subroutine_add_const`
##############################
##############################
### IQFT ###
# Provide your code here
##############################
return qc.to_gate(label=" [ +" + str(const) + "] ") if to_gate else qc
###Output
_____no_output_____
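###Markdown
One possible sketch of a QFT-based constant adder is given below (again with an `_example` suffix so the template is untouched). The phase angles assume Qiskit's `QFT` with `do_swaps=False` and qubit 0 of the data register as the least-significant bit; with a different bit ordering the angles would have to be adapted.
###Code
# minimal sketch of a Draper-style QFT adder for a classical constant
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister
from qiskit.circuit.library import QFT

def subroutine_add_const_example(data_qubits: int, const: int, to_gate=True):
    qc = QuantumCircuit(data_qubits)
    # in the Fourier basis (QFT without swaps), qubit i carries the phase 2*pi*x/2^(i+1),
    # so adding `const` amounts to the single-qubit rotations below
    for i in range(data_qubits):
        qc.p(const * np.pi / (2 ** i), i)
    return qc.to_gate(label=" [+" + str(const) + "] ") if to_gate else qc

def const_adder_example(data_qubits: int, const: int, to_gate=True):
    qr_data = QuantumRegister(data_qubits, "data")
    qc = QuantumCircuit(qr_data)
    qc.append(QFT(data_qubits, do_swaps=False).to_gate(), qr_data[:])            # QFT
    qc.append(subroutine_add_const_example(data_qubits, const), qr_data[:])      # phase rotations
    qc.append(QFT(data_qubits, do_swaps=False).inverse().to_gate(), qr_data[:])  # IQFT
    return qc.to_gate(label=" [ +" + str(const) + "] ") if to_gate else qc
###Output
_____no_output_____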
###Markdown
**Challenge 4c - Step 3** Here we want to store the cost in a QRAM form: \begin{equation*}\sum_{x\in\{0,1\}^t} \left|x\right\rangle\left|cost(x)\right\rangle\end{equation*}where $t$ is the number of time windows (= the size of the index register), and $x$ is the pattern of battery choices through all the time windows.Given two lists $C^1 = \left[c_0^1, c_1^1, \cdots\right]$ and $C^2 = \left[c_0^2, c_1^2, \cdots\right]$, we can encode the "total sum of each choice" of these data using gates controlled by each index qubit.If we want to add $c_i^1$ to the data whose $i$-th index qubit is $0$ and $c_i^2$ to the data whose $i$-th index qubit is $1$, then we can add $c_i^1$ to the data register when the $i$-th qubit in the index register is $0$, and $c_i^2$ to the data register when the $i$-th qubit in the index register is $1$.These operations can be realized by controlled gates.If you want to create a controlled gate from a gate of type `qiskit.circuit.Gate`, the `control()` method might be useful.Fill in the blank in the following cell to complete the `cost_calculation` function.
###Code
def cost_calculation(index_qubits: int, data_qubits: int, list1: list, list2: list, to_gate = True) -> Union[Gate, QuantumCircuit]:
qr_index = QuantumRegister(index_qubits, "index")
qr_data = QuantumRegister(data_qubits, "data")
qc = QuantumCircuit(qr_index, qr_data)
for i, (val1, val2) in enumerate(zip(list1, list2)):
##############################
### Add val2 using const_adder controlled by i-th index register (set to 1) ###
# Provide your code here
##############################
qc.x(qr_index[i])
##############################
### Add val1 using const_adder controlled by i-th index register (set to 0) ###
# Provide your code here
##############################
qc.x(qr_index[i])
return qc.to_gate(label=" Cost Calculation ") if to_gate else qc
###Output
_____no_output_____
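###Markdown
A possible sketch of the cost calculation, reusing the `_example` adder sketched above, is shown below. A common refinement (not done here, for clarity) is to apply the QFT/IQFT only once outside the loop and control just the phase rotations, which reduces the circuit cost considerably.
###Code
# minimal sketch: controlled constant additions, one pair per time window
from qiskit import QuantumCircuit, QuantumRegister

def cost_calculation_example(index_qubits: int, data_qubits: int, list1: list, list2: list, to_gate=True):
    qr_index = QuantumRegister(index_qubits, "index")
    qr_data = QuantumRegister(data_qubits, "data")
    qc = QuantumCircuit(qr_index, qr_data)
    for i, (val1, val2) in enumerate(zip(list1, list2)):
        # add val2 to the data register when index qubit i is |1>
        qc.append(const_adder_example(data_qubits, val2).control(1),
                  [qr_index[i]] + qr_data[:])
        # flip the index qubit, add val1 when it was |0>, then flip back
        qc.x(qr_index[i])
        qc.append(const_adder_example(data_qubits, val1).control(1),
                  [qr_index[i]] + qr_data[:])
        qc.x(qr_index[i])
    return qc.to_gate(label=" Cost Calculation ") if to_gate else qc
###Output
_____no_output_____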
###Markdown
**Challenge 4c - Step 4** Constraint TestingAfter the cost calculation process, we have gained the entangled QRAM with flag qubits set to zero for all indices:\begin{equation*}\sum_{x\in\{0,1\}^t} \left|x\right\rangle\left|cost(x)\right\rangle\left|0\right\rangle\end{equation*}In order to selectively add penalty to those indices with cost values larger than $C_{max}$, we have to prepare the following state:\begin{equation*}\sum_{x\in\{0,1\}^t} \left|x\right\rangle\left|cost(x)\right\rangle\left|cost(x)\geq C_{max}\right\rangle\end{equation*}Fill in the blank in the following cell to complete the `constraint_testing` function.
###Code
def constraint_testing(data_qubits: int, C_max: int, to_gate = True) -> Union[Gate, QuantumCircuit]:
qr_data = QuantumRegister(data_qubits, "data")
qr_f = QuantumRegister(1, "flag")
qc = QuantumCircuit(qr_data, qr_f)
##############################
### Set the flag register for indices with costs larger than C_max ###
# Provide your code here
##############################
return qc.to_gate(label=" Constraint Testing ") if to_gate else qc
###Output
_____no_output_____
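###Markdown
One possible sketch of the constraint test: shift the data register by $2^{c} - C_{max}$ (with $c$ = `data_qubits - 1`) so that the most-significant data qubit becomes $1$ exactly when the cost reaches $C_{max}$, then copy that bit into the flag. This assumes the register is wide enough that the shifted value never overflows, which the `data_qubits` sizing in the final cell is meant to guarantee.
###Code
# minimal sketch: one more constant addition plus a CNOT onto the flag qubit
from qiskit import QuantumCircuit, QuantumRegister

def constraint_testing_example(data_qubits: int, C_max: int, to_gate=True):
    qr_data = QuantumRegister(data_qubits, "data")
    qr_f = QuantumRegister(1, "flag")
    qc = QuantumCircuit(qr_data, qr_f)
    c = data_qubits - 1
    # after this shift the register holds cost + 2^c - C_max,
    # so its most-significant bit is 1 iff cost >= C_max
    qc.append(const_adder_example(data_qubits, 2 ** c - C_max), qr_data[:])
    qc.cx(qr_data[c], qr_f[0])
    return qc.to_gate(label=" Constraint Testing ") if to_gate else qc
###Output
_____no_output_____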
###Markdown
**Challenge 4c - Step 5** Penalty DephasingWe also have to add a penalty to the indices with total costs larger than $C_{max}$ in the following way.\begin{equation*}\quad \quad \quad \quad \quad \quad penalty(z)=\left\{\begin{array}{ll}0 & \text{if}\quad cost(z)<C_{\max } \\-\alpha\left(cost(z)-C_{\max }\right) & \text{if} \quad cost(z) \geq C_{\max }, \alpha>0 \quad \text{constant}\end{array}\right.\end{equation*}This penalty can be described as the quantum operator $e^{i \gamma \alpha\left(cost(z)-C_{\max }\right)}$.To realize this unitary operator as a quantum circuit, we focus on the following property.\begin{equation*}\alpha\left(cost(z)-C_{max}\right)=\sum_{j=0}^{k-1} 2^{j} \alpha A_{1}[j]-2^{c} \alpha\end{equation*}where $A_1$ is the quantum register for the QRAM data, $A_1[j]$ is the $j$-th qubit of $A_1$, and $k$ and $c$ are appropriate constants.Using this property, the penalty rotation part can be realized as rotation gates on each digit of the QRAM data register, controlled by the flag register.Fill in the blank in the following cell to complete the `penalty_dephasing` function.
###Code
def penalty_dephasing(data_qubits: int, alpha: float, gamma: float, to_gate = True) -> Union[Gate, QuantumCircuit]:
qr_data = QuantumRegister(data_qubits, "data")
qr_f = QuantumRegister(1, "flag")
qc = QuantumCircuit(qr_data, qr_f)
##############################
### Phase Rotation ###
# Provide your code here
##############################
return qc.to_gate(label=" Penalty Dephasing ") if to_gate else qc
###Output
_____no_output_____
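###Markdown
A possible sketch of the penalty dephasing is given below. It relies on the shift applied in the constraint-testing sketch above: after that shift the data register holds $cost + 2^{c} - C_{max}$, so $\alpha(cost - C_{max})$ is a weighted sum of the data bits minus the constant $2^{c}\alpha$, and both pieces can be applied as phase rotations conditioned on the flag qubit.
###Code
# minimal sketch: flag-controlled phase rotations on the data register
from qiskit import QuantumCircuit, QuantumRegister

def penalty_dephasing_example(data_qubits: int, alpha: float, gamma: float, to_gate=True):
    qr_data = QuantumRegister(data_qubits, "data")
    qr_f = QuantumRegister(1, "flag")
    qc = QuantumCircuit(qr_data, qr_f)
    c = data_qubits - 1
    for j in range(data_qubits):
        # phase gamma*alpha*2^j when both the flag and data bit j are 1
        qc.cp(alpha * gamma * (2 ** j), qr_f[0], qr_data[j])
    # subtract the constant offset 2^c * alpha, applied only when the flag is 1
    qc.p(-alpha * gamma * (2 ** c), qr_f[0])
    return qc.to_gate(label=" Penalty Dephasing ") if to_gate else qc
###Output
_____no_output_____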
###Markdown
**Challenge 4c - Step 6** ReinitializationThe ancillary qubits, such as the data register and the flag register, should be reinitialized to the zero state when the operator $U(C, \gamma_i)$ finishes.If you want to apply the inverse unitary of a `qiskit.circuit.Gate`, the `inverse()` method might be useful.Fill in the blank in the following cell to complete the `reinitialization` function.
###Code
def reinitialization(index_qubits: int, data_qubits: int, C1: list, C2: list, C_max: int, to_gate = True) -> Union[Gate, QuantumCircuit]:
qr_index = QuantumRegister(index_qubits, "index")
qr_data = QuantumRegister(data_qubits, "data")
qr_f = QuantumRegister(1, "flag")
qc = QuantumCircuit(qr_index, qr_data, qr_f)
##############################
### Reinitialization Circuit ###
# Provide your code here
##############################
return qc.to_gate(label=" Reinitialization ") if to_gate else qc
###Output
_____no_output_____
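###Markdown
One possible sketch of the reinitialization: since the penalty rotations are diagonal, undoing the constraint test and then the cost calculation (via `inverse()`) returns the data and flag registers to $|0\rangle$ while the accumulated phases stay attached to the index register.
###Code
# minimal sketch: uncompute in reverse order using the inverse gates
from qiskit import QuantumCircuit, QuantumRegister

def reinitialization_example(index_qubits: int, data_qubits: int, C1: list, C2: list, C_max: int, to_gate=True):
    qr_index = QuantumRegister(index_qubits, "index")
    qr_data = QuantumRegister(data_qubits, "data")
    qr_f = QuantumRegister(1, "flag")
    qc = QuantumCircuit(qr_index, qr_data, qr_f)
    # undo the constraint test first, then the cost calculation
    qc.append(constraint_testing_example(data_qubits, C_max).inverse(),
              qr_data[:] + qr_f[:])
    qc.append(cost_calculation_example(index_qubits, data_qubits, C1, C2).inverse(),
              qr_index[:] + qr_data[:])
    return qc.to_gate(label=" Reinitialization ") if to_gate else qc
###Output
_____no_output_____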
###Markdown
**Challenge 4c - Step 7** Mixing Operator $U(B, \beta_i)$Finally, we have to add the mixing operator $U(B,\beta_i)$ after the phase operator $U(C,\gamma_i)$.The mixing operator can be represented as follows.\begin{equation*}U(B, \beta_i)=\exp (-i \beta_i B)=\prod_{j=1}^{n} \exp \left(-i \beta_i \sigma_{j}^{x}\right)\end{equation*}This operator can be realized by an $R_x(2\beta_i)$ gate on each qubit in the index register.Fill in the blank in the following cell to complete the `mixing_operator` function.
###Code
def mixing_operator(index_qubits: int, beta: float, to_gate = True) -> Union[Gate, QuantumCircuit]:
qr_index = QuantumRegister(index_qubits, "index")
qc = QuantumCircuit(qr_index)
##############################
### Mixing Operator ###
# Provide your code here
##############################
return qc.to_gate(label=" Mixing Operator ") if to_gate else qc
###Output
_____no_output_____
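###Markdown
The mixing operator follows the equation above directly; a minimal sketch (again with an `_example` suffix, so the graded template is left untouched) is shown below.
###Code
# minimal sketch: one R_x(2*beta) rotation per index qubit
from qiskit import QuantumCircuit, QuantumRegister

def mixing_operator_example(index_qubits: int, beta: float, to_gate=True):
    qr_index = QuantumRegister(index_qubits, "index")
    qc = QuantumCircuit(qr_index)
    for i in range(index_qubits):
        qc.rx(2 * beta, qr_index[i])
    return qc.to_gate(label=" Mixing Operator ") if to_gate else qc
###Output
_____no_output_____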
###Markdown
**Challenge 4c - Step 8** Finally, using the functions we have created above, we will make the submission function `solver_function` for the whole relaxed QAOA process.Fill in the TODO blanks in the following cell to complete the answer function.- You can copy and paste the functions you have made above.- You may also adjust the number of qubits and their arrangement if needed.
###Code
def solver_function(L1: list, L2: list, C1: list, C2: list, C_max: int) -> QuantumCircuit:
# the number of qubits representing answers
index_qubits = len(L1)
# the maximum possible total cost
max_c = sum([max(l0, l1) for l0, l1 in zip(C1, C2)])
# the number of qubits representing data values can be defined using the maximum possible total cost as follows:
data_qubits = math.ceil(math.log(max_c, 2)) + 1 if not max_c & (max_c - 1) == 0 else math.ceil(math.log(max_c, 2)) + 2
### Phase Operator ###
# return part
    # the nested helpers must keep the same signatures as the standalone versions above,
    # since they are called with arguments in the loop further down
    def phase_return(index_qubits: int, gamma: float, L1: list, L2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass  # replace with your implementation
    # penalty part
    def subroutine_add_const(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass  # replace with your implementation
    # penalty part
    def const_adder(data_qubits: int, const: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass  # replace with your implementation
    # penalty part
    def cost_calculation(index_qubits: int, data_qubits: int, list1: list, list2: list, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass  # replace with your implementation
    # penalty part
    def constraint_testing(data_qubits: int, C_max: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass  # replace with your implementation
    # penalty part
    def penalty_dephasing(data_qubits: int, alpha: float, gamma: float, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass  # replace with your implementation
    # penalty part
    def reinitialization(index_qubits: int, data_qubits: int, C1: list, C2: list, C_max: int, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass  # replace with your implementation
    ### Mixing Operator ###
    def mixing_operator(index_qubits: int, beta: float, to_gate=True) -> Union[Gate, QuantumCircuit]:
        ##############################
        ### TODO ###
        ### Paste your code from above cells here ###
        ##############################
        pass  # replace with your implementation
qr_index = QuantumRegister(index_qubits, "index") # index register
qr_data = QuantumRegister(data_qubits, "data") # data register
qr_f = QuantumRegister(1, "flag") # flag register
cr_index = ClassicalRegister(index_qubits, "c_index") # classical register storing the measurement result of index register
qc = QuantumCircuit(qr_index, qr_data, qr_f, cr_index)
### initialize the index register with uniform superposition state ###
qc.h(qr_index)
### DO NOT CHANGE THE CODE BELOW
p = 5
alpha = 1
for i in range(p):
### set fixed parameters for each round ###
beta = 1 - (i + 1) / p
gamma = (i + 1) / p
### return part ###
qc.append(phase_return(index_qubits, gamma, L1, L2), qr_index)
### step 1: cost calculation ###
qc.append(cost_calculation(index_qubits, data_qubits, C1, C2), qr_index[:] + qr_data[:])
### step 2: Constraint testing ###
qc.append(constraint_testing(data_qubits, C_max), qr_data[:] + qr_f[:])
### step 3: penalty dephasing ###
qc.append(penalty_dephasing(data_qubits, alpha, gamma), qr_data[:] + qr_f[:])
### step 4: reinitialization ###
qc.append(reinitialization(index_qubits, data_qubits, C1, C2, C_max), qr_index[:] + qr_data[:] + qr_f[:])
### mixing operator ###
qc.append(mixing_operator(index_qubits, beta), qr_index)
### measure the index ###
### since the default measurement outcome is shown in big endian, it is necessary to reverse the classical bits in order to unify the endian ###
qc.measure(qr_index, cr_index[::-1])
return qc
###Output
_____no_output_____
###Markdown
The validation function contains four input instances.The output should pass the precision threshold of 0.80 for the eight inputs before being scored.
###Code
# Execute your circuit with following prepare_ex4c() function.
# The prepare_ex4c() function works like the execute() function with only QuantumCircuit as an argument.
from qc_grader import prepare_ex4c
job = prepare_ex4c(solver_function)
result = job.result()
# Check your answer and submit using the following code
from qc_grader import grade_ex4c
grade_ex4c(job)
###Output
_____no_output_____ |
docs/MetaData.ipynb | ###Markdown
*get_metadata(table, variable)*Returns a dataframe containing the associated metadata of a variable (such as data source, distributor, references, etc.).The inputs can be string literals (if only one table and variable are passed) or a list of string literals. > **Parameters:** >> **table: string or list of string**>> The name of the table where the variable is stored. A full list of table names can be found in the [catalog](Catalog.ipynb).>> >> **variable: string or list of string**>> Variable short name. A full list of variable short names can be found in the [catalog](Catalog.ipynb).>**Returns:** >> Pandas dataframe. Example
###Code
#!pip install pycmap -q #uncomment to install pycmap, if necessary
import pycmap
api = pycmap.API(token='<YOUR_API_KEY>')
api.get_metadata(['tblsst_AVHRR_OI_NRT', 'tblArgoMerge_REP'], ['sst', 'argo_merge_salinity_adj'])
###Output
_____no_output_____ |
nbs/deprecated/af_emp.eval.pp1.rq4.ipynb | ###Markdown
Example Template> Templates are Python notebooks to guide users in consuming the facades of each component. Templates are not oriented toward running deep learning for software engineering projects, but they encourage users to try different techniques to explore data or create tools for successful research in DL4SE.
###Code
#hide
from nbdev.showdoc import *
###Output
_____no_output_____ |
GATQSAT.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/gdrive')
%cd gdrive/My Drive
import os
if not os.path.isdir('neuroSAT'):
!git clone --recurse-submodules https://github.com/dmeoli/neuroSAT
%cd neuroSAT
else:
%cd neuroSAT
!git pull
!pip install -r requirements.txt
datasets = {'uniform-random-3-sat': {'train': ['uf50-218', 'uuf50-218',
'uf100-430', 'uuf100-430'],
'val': ['uf50-218', 'uuf50-218',
'uf100-430', 'uuf100-430'],
'inner_test': ['uf50-218', 'uuf50-218',
'uf100-430', 'uuf100-430'],
'test': ['uf250-1065', 'uuf250-1065']},
'graph-coloring': {'train': ['flat50-115'],
'val': ['flat50-115'],
'inner_test': ['flat50-115'],
'test': ['flat30-60',
'flat75-180',
'flat100-239',
'flat125-301',
'flat150-360',
'flat175-417',
'flat200-479']}}
%cd GQSAT
###Output
/content/gdrive/My Drive/neuroSAT/GQSAT
###Markdown
Build C++
###Code
%cd minisat
!sudo ln -s --force /usr/local/lib/python3.7/dist-packages/numpy/core/include/numpy /usr/include/numpy # https://stackoverflow.com/a/44935933/5555994
!make distclean && CXXFLAGS=-w make && make python-wrap PYTHON=python3.7
!apt install swig
!swig -fastdispatch -c++ -python minisat/gym/GymSolver.i
%cd ..
###Output
/content/gdrive/My Drive/neuroSAT/GQSAT/minisat
rm -f build/release/minisat/core/Solver.o build/release/minisat/core/Main.o build/release/minisat/simp/SimpSolver.o build/release/minisat/simp/Main.o build/release/minisat/utils/System.o build/release/minisat/utils/Options.o build/release/minisat/gym/GymSolver.o build/debug/minisat/core/Solver.o build/debug/minisat/core/Main.o build/debug/minisat/simp/SimpSolver.o build/debug/minisat/simp/Main.o build/debug/minisat/utils/System.o build/debug/minisat/utils/Options.o build/debug/minisat/gym/GymSolver.o build/profile/minisat/core/Solver.o build/profile/minisat/core/Main.o build/profile/minisat/simp/SimpSolver.o build/profile/minisat/simp/Main.o build/profile/minisat/utils/System.o build/profile/minisat/utils/Options.o build/profile/minisat/gym/GymSolver.o build/dynamic/minisat/core/Solver.o build/dynamic/minisat/core/Main.o build/dynamic/minisat/simp/SimpSolver.o build/dynamic/minisat/simp/Main.o build/dynamic/minisat/utils/System.o build/dynamic/minisat/utils/Options.o build/dynamic/minisat/gym/GymSolver.o \
build/release/minisat/core/Solver.d build/release/minisat/core/Main.d build/release/minisat/simp/SimpSolver.d build/release/minisat/simp/Main.d build/release/minisat/utils/System.d build/release/minisat/utils/Options.d build/release/minisat/gym/GymSolver.d build/debug/minisat/core/Solver.d build/debug/minisat/core/Main.d build/debug/minisat/simp/SimpSolver.d build/debug/minisat/simp/Main.d build/debug/minisat/utils/System.d build/debug/minisat/utils/Options.d build/debug/minisat/gym/GymSolver.d build/profile/minisat/core/Solver.d build/profile/minisat/core/Main.d build/profile/minisat/simp/SimpSolver.d build/profile/minisat/simp/Main.d build/profile/minisat/utils/System.d build/profile/minisat/utils/Options.d build/profile/minisat/gym/GymSolver.d build/dynamic/minisat/core/Solver.d build/dynamic/minisat/core/Main.d build/dynamic/minisat/simp/SimpSolver.d build/dynamic/minisat/simp/Main.d build/dynamic/minisat/utils/System.d build/dynamic/minisat/utils/Options.d build/dynamic/minisat/gym/GymSolver.d \
build/release/bin/minisat_core build/release/bin/minisat build/debug/bin/minisat_core build/debug/bin/minisat build/profile/bin/minisat_core build/profile/bin/minisat build/dynamic/bin/minisat_core build/dynamic/bin/minisat \
build/release/lib/libminisat.a build/debug/lib/libminisat.a build/profile/lib/libminisat.a \
build/dynamic/lib/libminisat.so.2.1.0\
build/dynamic/lib/libminisat.so.2\
build/dynamic/lib/libminisat.so
rm -f config.mk
Compiling: build/release/minisat/simp/Main.o
Compiling: build/release/minisat/core/Solver.o
Compiling: build/release/minisat/simp/SimpSolver.o
Compiling: build/release/minisat/utils/System.o
Compiling: build/release/minisat/utils/Options.o
Compiling: build/release/minisat/gym/GymSolver.o
Linking Static Library: build/release/lib/libminisat.a
Linking Binary: build/release/bin/minisat
Compiling: build/dynamic/minisat/core/Solver.o
Compiling: build/dynamic/minisat/simp/SimpSolver.o
Compiling: build/dynamic/minisat/utils/System.o
Compiling: build/dynamic/minisat/utils/Options.o
Compiling: build/dynamic/minisat/gym/GymSolver.o
Linking Shared Library: build/dynamic/lib/libminisat.so.2.1.0
g++ -O2 -fPIC -c minisat/gym/GymSolver_wrap.cxx -o minisat/gym/GymSolver_wrap.o -I. -I/usr/include/python3.7
g++ -shared -o minisat/gym/_GymSolver.so build/dynamic/minisat/core/Solver.o build/dynamic/minisat/simp/SimpSolver.o build/dynamic/minisat/utils/System.o build/dynamic/minisat/utils/Options.o build/dynamic/minisat/gym/GymSolver.o minisat/gym/GymSolver_wrap.o /usr/lib/x86_64-linux-gnu/libz.so
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
swig3.0
Suggested packages:
swig-doc swig-examples swig3.0-examples swig3.0-doc
The following NEW packages will be installed:
swig swig3.0
0 upgraded, 2 newly installed, 0 to remove and 37 not upgraded.
Need to get 1,100 kB of archives.
After this operation, 5,822 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 swig3.0 amd64 3.0.12-1 [1,094 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/universe amd64 swig amd64 3.0.12-1 [6,460 B]
Fetched 1,100 kB in 1s (859 kB/s)
Selecting previously unselected package swig3.0.
(Reading database ... 155222 files and directories currently installed.)
Preparing to unpack .../swig3.0_3.0.12-1_amd64.deb ...
Unpacking swig3.0 (3.0.12-1) ...
Selecting previously unselected package swig.
Preparing to unpack .../swig_3.0.12-1_amd64.deb ...
Unpacking swig (3.0.12-1) ...
Setting up swig3.0 (3.0.12-1) ...
Setting up swig (3.0.12-1) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
/content/gdrive/My Drive/neuroSAT/GQSAT
###Markdown
Uniform Random 3-SATWe split *(u)uf50-218* and *(u)uf100-430* into three subsets: 800 training problems, 100 validation, and 100 test problems.For generalization experiments, we use 100 problems from all the other benchmarks.To evaluate the knowledge transfer properties of the trained models across different task families, we use 100 problems from all the *graph coloring* benchmarks.
###Code
PROBLEM_TYPE='uniform-random-3-sat'
!bash train_val_test_split.sh "$PROBLEM_TYPE"
###Output
_____no_output_____
###Markdown
Add metadata for evaluation (train and validation set)
###Code
for TRAIN_PROBLEM_NAME, VAL_PROBLEM_NAME in zip(datasets[PROBLEM_TYPE]['train'],
datasets[PROBLEM_TYPE]['val']):
!python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/train/"$TRAIN_PROBLEM_NAME"
!python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/val/"$VAL_PROBLEM_NAME"
###Output
_____no_output_____
###Markdown
Train
###Code
for TRAIN_PROBLEM_NAME, VAL_PROBLEM_NAME in zip(datasets[PROBLEM_TYPE]['train'],
datasets[PROBLEM_TYPE]['val']):
!python dqn.py \
--logdir log \
--env-name sat-v0 \
--train-problems-paths ../data/"$PROBLEM_TYPE"/train/"$TRAIN_PROBLEM_NAME" \
--eval-problems-paths ../data/"$PROBLEM_TYPE"/val/"$VAL_PROBLEM_NAME" \
--lr 0.00002 \
--bsize 64 \
--buffer-size 20000 \
--eps-init 1.0 \
--eps-final 0.01 \
--gamma 0.99 \
--eps-decay-steps 30000 \
--batch-updates 50000 \
--history-len 1 \
--init-exploration-steps 5000 \
--step-freq 4 \
--target-update-freq 10 \
--loss mse \
--opt adam \
--save-freq 500 \
--grad_clip 1 \
--grad_clip_norm_type 2 \
--eval-freq 1000 \
--eval-time-limit 3600 \
--core-steps 4 \
--expert-exploration-prob 0.0 \
--priority_alpha 0.5 \
--priority_beta 0.5 \
--e2v-aggregator sum \
--n_hidden 1 \
--hidden_size 64 \
--decoder_v_out_size 32 \
--decoder_e_out_size 1 \
--decoder_g_out_size 1 \
--encoder_v_out_size 32 \
--encoder_e_out_size 32 \
--encoder_g_out_size 32 \
--core_v_out_size 64 \
--core_e_out_size 64 \
--core_g_out_size 32 \
--activation relu \
--penalty_size 0.1 \
--train_time_max_decisions_allowed 500 \
--test_time_max_decisions_allowed 500 \
--no_max_cap_fill_buffer \
--lr_scheduler_gamma 1 \
--lr_scheduler_frequency 3000 \
--independent_block_layers 0 \
--use_attention \
--heads 3
###Output
_____no_output_____
###Markdown
Add metadata for evaluation (test set)
###Code
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['inner_test']:
!python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/test/"$PROBLEM_NAME"
for PROBLEM_NAME in datasets['graph-coloring']['inner_test']:
!python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/test/"$PROBLEM_NAME"
for PROBLEM_NAME in datasets['graph-coloring']['test']:
!python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/"$PROBLEM_NAME"
###Output
_____no_output_____
###Markdown
Evaluate
###Code
models = {'uf50-218': ('Nov12_14-06-54_c42e8ad320d8',
'model_50003'),
'uuf50-218': ('Nov12_20-35-32_c42e8ad320d8',
'model_50006'),
'uf100-430': ('Nov13_03-55-51_c42e8ad320d8',
'model_50085'),
'uuf100-430': ('Nov14_03-26-36_54337a27a809',
'model_50003')}
###Output
_____no_output_____
###Markdown
We test these trained models on the inner test sets.
###Code
for SAT_MODEL in models.keys():
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['inner_test']:
# do not use models trained on unSAT problems to solve SAT ones
if not (SAT_MODEL.startswith('uuf') and PROBLEM_NAME.startswith('uf')):
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
MODEL_DIR = models[SAT_MODEL][0]
CHECKPOINT = models[SAT_MODEL][1]
!python evaluate.py \
--logdir log \
--env-name sat-v0 \
--core-steps -1 \
--eps-final 0.0 \
--eval-time-limit 100000000000000 \
--no_restarts \
--test_time_max_decisions_allowed "$MODEL_DECISION" \
--eval-problems-paths ../data/"$PROBLEM_TYPE"/test/"$PROBLEM_NAME" \
--model-dir runs/"$MODEL_DIR" \
--model-checkpoint "$CHECKPOINT".chkp \
>> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
###Output
_____no_output_____
###Markdown
We test the trained models on the outer test sets.
###Code
for SAT_MODEL in models.keys():
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['test']:
# do not use models trained on unSAT problems to solve SAT ones
if not (SAT_MODEL.startswith('uuf') and PROBLEM_NAME.startswith('uf')):
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
MODEL_DIR = models[SAT_MODEL][0]
CHECKPOINT = models[SAT_MODEL][1]
!python evaluate.py \
--logdir log \
--env-name sat-v0 \
--core-steps -1 \
--eps-final 0.0 \
--eval-time-limit 100000000000000 \
--no_restarts \
--test_time_max_decisions_allowed "$MODEL_DECISION" \
--eval-problems-paths ../data/"$PROBLEM_TYPE"/"$PROBLEM_NAME" \
--model-dir runs/"$MODEL_DIR" \
--model-checkpoint "$CHECKPOINT".chkp \
>> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
###Output
_____no_output_____
###Markdown
We test the trained models on the *graph coloring* test sets, both inners and outers.
###Code
for SAT_MODEL in models.keys():
# do not use models trained on unSAT problems to solve SAT ones
if not SAT_MODEL.startswith('uuf'):
for PROBLEM_NAME in datasets['graph-coloring']['inner_test']:
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
MODEL_DIR = models[SAT_MODEL][0]
CHECKPOINT = models[SAT_MODEL][1]
!python evaluate.py \
--logdir log \
--env-name sat-v0 \
--core-steps -1 \
--eps-final 0.0 \
--eval-time-limit 100000000000000 \
--no_restarts \
--test_time_max_decisions_allowed "$MODEL_DECISION" \
--eval-problems-paths ../data/graph-coloring/test/"$PROBLEM_NAME" \
--model-dir runs/"$MODEL_DIR" \
--model-checkpoint "$CHECKPOINT".chkp \
>> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
for SAT_MODEL in models.keys():
# do not use models trained on unSAT problems to solve SAT ones
if not SAT_MODEL.startswith('uuf'):
for PROBLEM_NAME in datasets['graph-coloring']['test']:
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
MODEL_DIR = models[SAT_MODEL][0]
CHECKPOINT = models[SAT_MODEL][1]
!python evaluate.py \
--logdir log \
--env-name sat-v0 \
--core-steps -1 \
--eps-final 0.0 \
--eval-time-limit 100000000000000 \
--no_restarts \
--test_time_max_decisions_allowed "$MODEL_DECISION" \
--eval-problems-paths ../data/graph-coloring/"$PROBLEM_NAME" \
--model-dir runs/"$MODEL_DIR" \
--model-checkpoint "$CHECKPOINT".chkp \
>> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
###Output
_____no_output_____
###Markdown
Graph ColoringGraph coloring benchmarks have only 100 problems each, except for *flat50-115* which contains 1000, so we split it into three subsets: 800 training problems, 100 validation, and 100 test problems.For generalization experiments, we use 100 problems from all the other benchmarks.
###Code
PROBLEM_TYPE='graph-coloring'
!bash train_val_test_split.sh "$PROBLEM_TYPE"
###Output
_____no_output_____
###Markdown
Add metadata for evaluation (train and validation set)
###Code
for TRAIN_PROBLEM_NAME, VAL_PROBLEM_NAME in zip(datasets[PROBLEM_TYPE]['train'],
datasets[PROBLEM_TYPE]['val']):
!python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/train/"$TRAIN_PROBLEM_NAME"
!python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/val/"$VAL_PROBLEM_NAME"
###Output
_____no_output_____
###Markdown
Train
###Code
for TRAIN_PROBLEM_NAME, VAL_PROBLEM_NAME in zip(datasets[PROBLEM_TYPE]['train'],
datasets[PROBLEM_TYPE]['val']):
!python dqn.py \
--logdir log \
--env-name sat-v0 \
--train-problems-paths ../data/"$PROBLEM_TYPE"/train/"$TRAIN_PROBLEM_NAME" \
--eval-problems-paths ../data/"$PROBLEM_TYPE"/val/"$VAL_PROBLEM_NAME" \
--lr 0.00002 \
--bsize 64 \
--buffer-size 20000 \
--eps-init 1.0 \
--eps-final 0.01 \
--gamma 0.99 \
--eps-decay-steps 30000 \
--batch-updates 50000 \
--history-len 1 \
--init-exploration-steps 5000 \
--step-freq 4 \
--target-update-freq 10 \
--loss mse \
--opt adam \
--save-freq 500 \
--grad_clip 0.1 \
--grad_clip_norm_type 2 \
--eval-freq 1000 \
--eval-time-limit 3600 \
--core-steps 4 \
--expert-exploration-prob 0.0 \
--priority_alpha 0.5 \
--priority_beta 0.5 \
--e2v-aggregator sum \
--n_hidden 1 \
--hidden_size 64 \
--decoder_v_out_size 32 \
--decoder_e_out_size 1 \
--decoder_g_out_size 1 \
--encoder_v_out_size 32 \
--encoder_e_out_size 32 \
--encoder_g_out_size 32 \
--core_v_out_size 64 \
--core_e_out_size 64 \
--core_g_out_size 32 \
--activation relu \
--penalty_size 0.1 \
--train_time_max_decisions_allowed 500 \
--test_time_max_decisions_allowed 500 \
--no_max_cap_fill_buffer \
--lr_scheduler_gamma 1 \
--lr_scheduler_frequency 3000 \
--independent_block_layers 0 \
--use_attention \
--heads 3
###Output
_____no_output_____
###Markdown
Add metadata for evaluation (test set)
###Code
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['inner_test']:
!python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/test/"$PROBLEM_NAME"
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['test']:
!python add_metadata.py --eval-problems-paths ../data/"$PROBLEM_TYPE"/"$PROBLEM_NAME"
###Output
_____no_output_____
###Markdown
Evaluate
###Code
MODEL_DIR='Dec09_12-16-16_d4e65e7af705'
CHECKPOINT='model_50001'
###Output
_____no_output_____
###Markdown
We test this trained model on the inner test set.
###Code
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['inner_test']:
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
!python evaluate.py \
--logdir log \
--env-name sat-v0 \
--core-steps -1 \
--eps-final 0.0 \
--eval-time-limit 100000000000000 \
--no_restarts \
--test_time_max_decisions_allowed "$MODEL_DECISION" \
--eval-problems-paths ../data/"$PROBLEM_TYPE"/test/"$PROBLEM_NAME" \
--model-dir runs/"$MODEL_DIR" \
--model-checkpoint "$CHECKPOINT".chkp \
>> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
###Output
_____no_output_____
###Markdown
We test the trained model on the outer test sets.
###Code
for PROBLEM_NAME in datasets[PROBLEM_TYPE]['test']:
for MODEL_DECISION in [10, 50, 100, 300, 500, 1000]:
!python evaluate.py \
--logdir log \
--env-name sat-v0 \
--core-steps -1 \
--eps-final 0.0 \
--eval-time-limit 100000000000000 \
--no_restarts \
--test_time_max_decisions_allowed "$MODEL_DECISION" \
--eval-problems-paths ../data/"$PROBLEM_TYPE"/"$PROBLEM_NAME" \
--model-dir runs/"$MODEL_DIR" \
--model-checkpoint "$CHECKPOINT".chkp \
>> runs/"$MODEL_DIR"/"$PROBLEM_NAME"-gatqsat-max"$MODEL_DECISION".tsv
###Output
_____no_output_____ |
CogNeuro GroupProject_2022 - Worksheet 1 - answers.ipynb | ###Markdown
Cognitive Neuroscience: Group Project Worksheet 1 - sinusoidsMarijn van Wingerden, Department of Cognitive Science and Artificial Intelligence – Tilburg University Academic Year 21-22
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 22}) # we want stuff to be visible
###Output
_____no_output_____
###Markdown
In this worksheet, we will start working with sinusoids. (co)Sine waves are the basic element of Fourier analysis - basically we will be looking at the *overlap* between a cosine wave of a particular frequency and a particular segment of data. The amount of overlap determines the presence of that cosine wave in the signal, and we call this the **spectral power** of the signal at that frequency. SinusoidsOscillations are periodic signals, and can be created from sinusoids with np.sin and np.cos. There are a couple of basic features that make up a sinus (see the lab intro presentation):- frequency- phase offset (theta)- amplitudeThese parameters come together in the following lines of code:
###Code
## simulation parameters single sine
sin_freq = 1 # frequency of the oscillation, default is 1 Hz -> one cycle per second
theta = 0*np.pi/4 # phase offset (starting point of oscillation), set by default to 0, adjustable in 1/4 pi steps
amp = 1 # amplitude, set by default to 1, so the sine is bounded between [-1,1] on the y-axis
# one full cycle is completed in 2pi.
# We are putting 1000 steps in between
time = np.linspace(start = 0, num = 1000, stop = 2*np.pi)
# we create the signal by taking the 1000 timesteps, shifted by theta (0 for now) and taking the sine value
signal = amp*np.sin(time + theta)
fig, ax = plt.subplots(figsize=(24,8))
ax.plot(time,signal,'b.-', label='Sinus with phase offset 0')
# we know that at 0, pi and 2pi the sine wave crosses 0. Let's mark these points
# a line plot uses plot([x1,x2], [y1,y2]). When we plot vertical, x1 == x2
# 'k' is the marker for plotting in black
ax.plot(0*np.array([np.pi, np.pi]), [-1,1], 'k')
ax.plot(1*np.array([np.pi, np.pi]), [-1,1], 'k')
ax.plot(2*np.array([np.pi, np.pi]), [-1,1],'k')
ax.set_xlabel('radians')
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
We can take the example a bit further by making some adjustments. First, we normalize time in such a way that [0,1] on time always means [0,2pi] in the signal creation. We can then redefine time as going from [0,1] and define the _sampling rate_ as 1000, taking steps of 1/srate, so 1/1000, meaning we still have 1000 samples:0, 0.001, 0.002 ... 0.999
###Code
## simulation parameters
sin_freq = 5 # frequency of the oscillation
theta = 0*np.pi/4 # phase offset (starting point of oscillation)
amp = 1 # amplitude
srate = 1000 # defined in Hz: samples per second.
time = np.arange(start = 0, step = 1/srate, stop = 1)
signal = amp*np.sin(2*np.pi*sin_freq*time + theta)
fig, ax = plt.subplots(figsize=(24,8))
ax.plot(time,signal,'b.-', label='Sinus with freq = 5 and\n phase offset = 0')
ax.set_xlabel('Time (Sample index /1000)')
ax.set_ylim([-1.2,1.7]) # make some room for the legend
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
In its most basic form, with a frequency of 1 and no offset, this would plot a single sinus over the time interval [0,1]. As time gets multiplied by 2\*pi, we are scaling the time axis to the interval [0,2pi], so one sinus. Theta is a time offset (that also gets multiplied by 2pi), so this offset only makes sense in the [0:2pi] interval, as after that the sinus is back where it started. In this example, we choose negative time offsets; you can think of these as time _lags_, so all signals follow the blue one with a certain delay in time:
###Code
## simulation parameters
sin_freq = 1 # frequency of the oscillation
theta0 = 0*np.pi/4 # phase offset (starting point of oscillation)
theta1 = -1*np.pi/4 # phase offset (1/4 pi)
theta2 = -2*np.pi/4 # phase offset (2/4 pi)
theta3 = -3*np.pi/4 # phase offset (3/4 pi)
amp = 1 # amplitude
srate = 1000 # defined in Hz: samples per second.
time = np.arange(start = 0, step = 1/srate, stop = 1)
signal0 = amp*np.sin(2*np.pi*sin_freq*time + theta0)
signal1 = amp*np.sin(2*np.pi*sin_freq*time + theta1)
signal2 = amp*np.sin(2*np.pi*sin_freq*time + theta2)
signal3 = amp*np.sin(2*np.pi*sin_freq*time + theta3)
# could we have done this in a loop? of course, but let's keep it simple for now
fig, ax = plt.subplots(figsize=(24,8))
ax.plot(time,signal0,'b.-', label='Sinus with phase offset 0')
ax.plot(time,signal1,'r.-', label='Sinus with phase offset -1/4pi')
ax.plot(time,signal2,'g.-', label='Sinus with phase offset -2/4pi')
ax.plot(time,signal3,'m.-', label='Sinus with phase offset -3/4pi')
ax.set_xlabel('Time (Sample index /1000)')
ax.set_ylim([-1.2,2]) # make some room for the legend
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Building from this, let's add a few sinusoids with different frequencies. We will do this by pre-generating a list of frequencies and plotting them inside a 'for' loop.
###Code
# pre-specify a list of frequencies
freqs = np.arange(start = 1, stop = 5)
theta = 0*np.pi/4 # phase offset (starting point of oscillation)
amp = 1 # amplitude
srate = 1000 # defined in Hz: samples per second.
time = np.arange(start = 0, step = 1/srate, stop = 1)
#open a new plot
fig, ax = plt.subplots(figsize=(24,8))
for iFreq in freqs:
signal = np.sin(2*np.pi*iFreq*time + theta);
sin_label = "Freq: {} Hz"
ax.plot(time, signal, label = sin_label.format(iFreq))
ax.legend()
ax.set_xlabel('Time (Sample index /1000)')
plt.show()
###Output
_____no_output_____
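###Markdown
As a small aside before the exercises, the *overlap* idea from the introduction can be made concrete in a few lines: projecting a signal onto a (complex) sinusoid of a given frequency via a dot product gives a large value only when that frequency is present in the signal. The sketch below uses an assumed 5 Hz test signal and probe frequencies of 3, 5 and 7 Hz purely for illustration - the proper Fourier machinery comes later.
###Code
# Illustrative sketch: "overlap" of a signal with complex sinusoids, computed as a dot product
srate = 1000
t_demo = np.arange(start = 0, step = 1/srate, stop = 1)
demo_signal = np.sin(2*np.pi*5*t_demo)  # a 5 Hz test signal
for probe_freq in [3, 5, 7]:
    # complex sinusoid (cosine + i*sine) at the probe frequency
    probe = np.cos(2*np.pi*probe_freq*t_demo) + 1j*np.sin(2*np.pi*probe_freq*t_demo)
    overlap = np.abs(np.dot(demo_signal, probe)) / len(t_demo)
    print("overlap with", probe_freq, "Hz probe:", round(float(overlap), 3))  # largest at 5 Hz
###Output
_____no_output_____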
###Markdown
Exercises Exercise 1:- Using a loop, create a plot for a 2 Hz oscillation, but now with 4 different phase offsets. Try 0, 0.5pi, 1pi, 1.5pi for example - set up "thetas" as a vector of theta values - loop over thetas - recalculate "signal" in each loop iteration - plot to the ax in each iteration - add a formatted label to each sine plot- prettify according to taste
###Code
##
## Your answer here
##
thetas = -np.arange(start = 0, step = 0.5, stop = 2)*np.pi
freq = 2
fig, ax = plt.subplots(figsize=(24,8))
for iTheta in thetas:
signal = np.sin(2*np.pi*freq*time + iTheta);
sin_label = "Offset: {} radians"
ax.plot(time, signal, label = sin_label.format(iTheta))
plt.title('Exercise 1: a collection of sine waves with different phases')
ax.set_ylabel('amplitude')
ax.set_xlabel('time')
ax.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 2:- pre-allocate an empty matrix with 3 rows and 1000 colums- per row, fill this matrix with the signal for an oscillation: - Plot a series of 3 oscillations that differ in frequency AND phase offset - find the intersection points for the pairs of oscillations (1,2), (1,3), (2,3) - define the intersection points as a boolean [ix12] that is true when the difference between e.g. sine1 and sine2 is smaller than 0.05 and false otherwise - use the intersection index boolean to extract the time points belonging to these points (time[ix12]) - similarily, define ix13 and ix23 - plot vertical lines on these intersection points (using ax.plot([time[ix12], time[ix12]],[-1, 1],'k')
###Code
##
## Your answer here
##
freq_mat = np.zeros((3,1000))
freqs = np.array([2,5,7])
thetas = np.array([0.5, 1, 1.5])*np.pi
fig, ax = plt.subplots(figsize=(24,8))
plt.title('Exercise 2')
for iMat in range(freq_mat.shape[0]):
freq_mat[iMat,:] = np.sin(2*np.pi*freqs[iMat]*time + thetas[iMat])
ax.plot(time, np.transpose(freq_mat), linewidth=3)
ax.legend(['sine1', 'sine2', 'sine3'])
# it is necessary to transpose the matrix to line up the dimensions
# between time and the sine matrix
ix_one_two = np.abs(freq_mat[0,:] - freq_mat[1,:]) < 0.05
ax.plot([time[ix_one_two], time[ix_one_two]], [-1,1], 'k', alpha = 0.2)
# black lines: sine 1 & sine 2 intersect
ix_one_three = np.abs(freq_mat[0,:] - freq_mat[2,:]) < 0.05
ax.plot([time[ix_one_three], time[ix_one_three]], [-1,1], 'r', alpha = 0.2)
# red lines: sine 1 & sine 3 intersect
ix_two_three = np.abs(freq_mat[1,:] - freq_mat[2,:]) < 0.05
ax.plot([time[ix_two_three], time[ix_two_three]], [-1,1], 'm', alpha = 0.2)
# magenta lines: sine 2 & sine 3 intersect
plt.show()
###Output
_____no_output_____ |
dementia_optima/models/misc/mmse_prediction_final_.ipynb | ###Markdown
------ **Dementia Patients -- Analysis and Prediction** ***Author : Akhilesh Vyas*** ****Date : August, 2019**** ***Result Plots*** - 0. Setup - 0.1. Load libraries - 0.2. Define paths - 1. Data Preparation - 1.1. Read Data - 1.2. Prepare data - 1.3. Prepare target - 1.4. Removing Unwanted Features - 2. Data Analysis - 2.1. Feature - 2.2. Target - 3. Data Preparation and Vector Transformation- 4. Analysis and Imputing Missing Values - 5. Feature Analysis - 5.1. Correlation Matrix - 5.2. Feature and target - 5.3. Feature Selection Models - 6.Machine Learning -Classification Model 0. Setup 0.1 Load libraries Loading Libraries
###Code
import sys
sys.path.insert(1, '../preprocessing/')
import numpy as np
import pickle
import scipy.stats as spstats
import matplotlib.pyplot as plt
import seaborn as sns
import pandas_profiling
from sklearn.datasets.base import Bunch
#from data_transformation_cls import FeatureTransform
from ast import literal_eval
import plotly.figure_factory as ff
import plotly.offline as py
import plotly.graph_objects as go
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', -1)
from ordered_set import OrderedSet
from func_def import *
%matplotlib inline
###Output
_____no_output_____
###Markdown
0.2 Define paths
###Code
# data_path
# !cp -r ../../../datalcdem/data/optima/dementia_18July/data_notasked/ ../../../datalcdem/data/optima/dementia_18July/data_notasked_mmse_0_30/
#data_path = '../../../datalcdem/data/optima/dementia_03_2020/data_filled_wiiliam/'
#result_path = '../../../datalcdem/data/optima/dementia_03_2020/data_filled_wiiliam/results/'
#optima_path = '../../../datalcdem/data/optima/optima_excel/'
data_path = '../../data/'
# Reading Data
#patients data
patient_df = pd.read_csv(data_path+'patients.csv')
print (patient_df.dtypes)
# convert any date columns to datetime
for col in patient_df.columns:
if 'Date' in col:
patient_df[col] = pd.to_datetime(patient_df[col])
patient_df = patient_df[['patient_id','gender', 'smoker', 'education', 'ageAtFirstEpisode', 'apoe']]
patient_df.rename(columns={'ageAtFirstEpisode':'age'}, inplace=True)
patient_df.head(5)
###Output
_____no_output_____
###Markdown
1. Data Preparation 1.1. Read Data
###Code
#Preparation Features from Raw data
# Extracting selected features from Raw data
def rename_columns(col_list):
d = {}
for i in col_list:
if i=='GLOBAL_PATIENT_DB_ID':
d[i]='patient_id'
elif 'CAMDEX SCORES: ' in i:
d[i]=i.replace('CAMDEX SCORES: ', '').replace(' ', '_')
elif 'CAMDEX ADMINISTRATION 1-12: ' in i:
d[i]=i.replace('CAMDEX ADMINISTRATION 1-12: ', '').replace(' ', '_')
elif 'DIAGNOSIS 334-351: ' in i:
d[i]=i.replace('DIAGNOSIS 334-351: ', '').replace(' ', '_')
elif 'OPTIMA DIAGNOSES V 2010: ' in i:
d[i]=i.replace('OPTIMA DIAGNOSES V 2010: ', '').replace(' ', '_')
elif 'PM INFORMATION: ' in i:
d[i]=i.replace('PM INFORMATION: ', '').replace(' ', '_')
else:
d[i]=i.replace(' ', '_')
return d
columns_selected = ['GLOBAL_PATIENT_DB_ID', 'EPISODE_DATE', 'CAMDEX SCORES: MINI MENTAL SCORE', 'CLINICAL BACKGROUND: BODY MASS INDEX',
'DIAGNOSIS 334-351: ANXIETY/PHOBIC', 'OPTIMA DIAGNOSES V 2010: CERBRO-VASCULAR DISEASE PRESENT', 'DIAGNOSIS 334-351: DEPRESSIVE ILLNESS',
'OPTIMA DIAGNOSES V 2010: DIAGNOSTIC CODE', 'CAMDEX ADMINISTRATION 1-12: EST OF SEVERITY OF DEPRESSION',
'CAMDEX ADMINISTRATION 1-12: EST SEVERITY OF DEMENTIA', 'DIAGNOSIS 334-351: PRIMARY PSYCHIATRIC DIAGNOSES', 'OPTIMA DIAGNOSES V 2010: PETERSEN MCI']
columns_selected = list(OrderedSet(columns_selected).union(OrderedSet(features_all)))
# Need to think about other columns eg. dementia, social, sleeping habits,
df_datarequest = pd.read_excel(data_path+'Optima_Data_Report_Cases_6511_filled.xlsx')
display(df_datarequest.head(1))
df_datarequest_features = df_datarequest[columns_selected]
display(df_datarequest_features.columns)
columns_renamed = rename_columns(df_datarequest_features.columns.tolist())
df_datarequest_features.rename(columns=columns_renamed, inplace=True)
patient_com_treat_fea_raw_df = df_datarequest_features # Need to be changed ------------------------
display(patient_com_treat_fea_raw_df.head(5))
# merging
patient_df = patient_com_treat_fea_raw_df.merge(patient_df,how='inner', on=['patient_id'])
# age calculator
patient_df['age'] = patient_df['age'] + patient_df.groupby(by=['patient_id'])['EPISODE_DATE'].transform(lambda x: (x - x.iloc[0])/(np.timedelta64(1, 'D')*365.25))
# saving file
patient_df.to_csv(data_path + 'patient_com_treat_fea_filled_sel_col.csv', index=False)
# patient_com_treat_fea_raw_df = patient_com_treat_fea_raw_df.drop_duplicates(subset=['patient_id', 'EPISODE_DATE'])
patient_df.sort_values(by=['patient_id', 'EPISODE_DATE'], inplace=True)
display(patient_df.head(5))
display(patient_df.describe(include='all'))
display(patient_df.info())
tmp_l = []
for i in range(len(patient_df.index)):
# print("Nan in row ", i , " : " , patient_com_treat_fea_raw_df.iloc[i].isnull().sum())
tmp_l.append(patient_df.iloc[i].isnull().sum())
plt.hist(tmp_l)
plt.show()
# find NAN and Notasked and replace them with suitable value
'''print (patient_df.columns.tolist())
notasked_columns = ['ANXIETY/PHOBIC', 'CERBRO-VASCULAR_DISEASE_PRESENT', 'DEPRESSIVE_ILLNESS','EST_OF_SEVERITY_OF_DEPRESSION', 'EST_SEVERITY_OF_DEMENTIA',
'PRIMARY_PSYCHIATRIC_DIAGNOSES']
print ('total nan values %: ', 100*patient_df.isna().sum().sum()/patient_df.size)
patient_df.loc[:, notasked_columns] = patient_df.loc[:, notasked_columns].replace([9], [np.nan])
print ('total nan values % after considering notasked: ', 100*patient_df.isna().sum().sum()/patient_df.size)
display(patient_df.isna().sum())
notasked_columns.append('DIAGNOSTIC_CODE')
notasked_columns.append('education')
patient_df.loc[:, notasked_columns] = patient_df.groupby(by=['patient_id'])[notasked_columns].transform(lambda x: x.fillna(method='pad'))
patient_df.loc[:, ['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']] = patient_df.groupby(by=['patient_id'])[['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']].transform(lambda x: x.interpolate())
patient_df.loc[:, ['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']] = patient_df.groupby(by=['patient_id'])[['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']].transform(lambda x: x.fillna(method='pad'))
print ('total nan values % after filling : ', 100*patient_df.isna().sum().sum()/patient_df.size)
display(patient_df.isna().sum())'''
# Label of patients:
misdiagnosed_df = pd.read_csv(data_path+'misdiagnosed.csv')
display(misdiagnosed_df.head(5))
misdiagnosed_df['EPISODE_DATE'] = pd.to_datetime(misdiagnosed_df['EPISODE_DATE'])
#Merge Patient_df
patient_df = patient_df.merge(misdiagnosed_df[['patient_id', 'EPISODE_DATE', 'Misdiagnosed','Misdiagnosed1']], how='left', on=['patient_id', 'EPISODE_DATE'])
display(patient_df.head(5))
patient_df.to_csv(data_path+'patient_df.csv', index=False)
patient_df = pd.read_csv(data_path+'patient_df.csv')
patient_df['EPISODE_DATE'] = pd.to_datetime(patient_df['EPISODE_DATE'])
# duration and previous mini mental score state
patient_df['durations(years)'] = patient_df.groupby(by='patient_id')['EPISODE_DATE'].transform(lambda x: (x - x.iloc[0])/(np.timedelta64(1, 'D')*365.25))
patient_df['MINI_MENTAL_SCORE_PRE'] = patient_df.groupby(by='patient_id')['MINI_MENTAL_SCORE'].transform(lambda x: x.shift(+1))
patient_df[['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']].describe() # Out of Range values
patient_df['CLINICAL_BACKGROUND:_BODY_MASS_INDEX'][(patient_df['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']>54) | (patient_df['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']<8)]=np.nan
patient_df[['CLINICAL_BACKGROUND:_BODY_MASS_INDEX']].describe()
# drop unnecessary columns
# patient_df.drop(columns=['patient_id', 'EPISODE_DATE'], inplace=True)
# drop rows where Misdiagnosed cases are invalid
patient_df = patient_df.dropna(subset=['MINI_MENTAL_SCORE_PRE'], axis=0 )
patient_df['gender'].unique(), patient_df['smoker'].unique(), patient_df['education'].unique(), patient_df['apoe'].unique(), patient_df['Misdiagnosed1'].unique(), patient_df['Misdiagnosed'].unique()
# encoding of categorial features
patient_df['smoker'] = patient_df['smoker'].replace(['smoker', 'no_smoker'],[1, 0])
patient_df['education'] = patient_df['education'].replace(['medium', 'higher','basic'],[1, 2, 0])
patient_df['Misdiagnosed1'] = patient_df['Misdiagnosed1'].replace(['NO', 'YES', 'UNKNOWN'],[0, 1, 2])
patient_df['Misdiagnosed'] = patient_df['Misdiagnosed'].replace(['NO', 'YES', 'UNKNOWN'],[0, 1, 2])
patient_df = pd.get_dummies(patient_df, columns=['gender', 'apoe'])
patient_df.replace(['mixed mitral & Aortic Valve disease', 'Bilateral knee replacements'],[np.nan, np.nan], inplace=True)
patient_df.dtypes
for i, j in zip(patient_df, patient_df.dtypes):
if not (j == "float64" or j == "int64" or j == 'uint8' or j == 'datetime64[ns]'):
print(i)
patient_df[i] = pd.to_numeric(patient_df[i], errors='coerce')
patient_df = patient_df.fillna(-9)
# Misdiagnosed Criteria
patient_df = patient_df[patient_df['Misdiagnosed']<2]
patient_df = patient_df.astype({col: str('float64') for col, dtype in zip (patient_df.columns.tolist(), patient_df.dtypes.tolist()) if 'int' in str(dtype) or str(dtype)=='object'})
patient_df.describe()
patient_df_X = patient_df.drop(columns=['patient_id', 'EPISODE_DATE', 'Misdiagnosed1', 'MINI_MENTAL_SCORE', 'PETERSEN_MCI', 'Misdiagnosed'])
patient_df_y_cat = patient_df['Misdiagnosed1']
patient_df_y_cat_s = patient_df['Misdiagnosed']
patient_df_y_real = patient_df['MINI_MENTAL_SCORE']
print (patient_df_X.shape, patient_df_y_cat.shape, patient_df_y_cat_s.shape, patient_df_y_real.shape)
print(patient_df_X.shape, patient_df_y_cat.shape, patient_df_y_cat_s.shape, patient_df_y_real.shape)
# training data
patient_df_X_fill_data = pd.DataFrame(data=patient_df_X.values, columns=patient_df_X.columns, index=patient_df_X.index)
patient_df_X_train, patient_df_y_train = patient_df_X_fill_data[patient_df_y_cat==0], patient_df_y_real[patient_df_y_cat==0]
patient_df_X_test, patient_df_y_test= patient_df_X_fill_data[patient_df_y_cat==1], patient_df_y_real[patient_df_y_cat==1]
patient_df_X_s_train, patient_df_y_s_train = patient_df_X_fill_data[patient_df_y_cat_s==0], patient_df_y_real[patient_df_y_cat_s==0]
patient_df_X_s_test, patient_df_y_s_test= patient_df_X_fill_data[patient_df_y_cat_s==1], patient_df_y_real[patient_df_y_cat_s==1]
patient_df_X_train.to_csv(data_path+'X_train.csv', index=False)
patient_df_y_train.to_csv(data_path+'y_train.csv', index=False)
patient_df_X_test.to_csv(data_path+'X_test.csv', index=False)
patient_df_y_test.to_csv(data_path+'y_test.csv', index=False)
print(patient_df_X_train.shape, patient_df_y_train.shape, patient_df_X_test.shape, patient_df_y_test.shape)
print(patient_df_X_s_train.shape, patient_df_y_s_train.shape, patient_df_X_s_test.shape, patient_df_y_s_test.shape)
X_train, y_train, X_test, y_test = patient_df_X_train.values, patient_df_y_train.values.reshape(-1, 1),patient_df_X_test.values, patient_df_y_test.values.reshape(-1,1)
X_s_train, y_s_train, X_s_test, y_s_test = patient_df_X_s_train.values, patient_df_y_s_train.values.reshape(-1, 1),patient_df_X_s_test.values, patient_df_y_s_test.values.reshape(-1,1)
# Random Forest Classifier
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm, datasets
from sklearn.model_selection import cross_val_score, cross_validate, cross_val_predict
from sklearn.metrics import classification_report
# patient_df_X_fill_data[patient_df_y_cat==0]
X, y = patient_df_X_fill_data, patient_df_y_cat
clf = RandomForestClassifier(n_estimators=100)
print (cross_validate(clf, X, y, scoring=['recall_macro', 'precision_macro', 'f1_macro', 'accuracy'], cv=5) )
y_pred = cross_val_predict(clf,X, y, cv=5 )
print(classification_report(y, y_pred, target_names=['NO','YES']))
from imblearn.over_sampling import SMOTE
smote = SMOTE(sampling_strategy='auto')
data_p_s, target_p_s = smote.fit_sample(patient_df_X_fill_data, patient_df_y_cat)
print (data_p_s.shape, target_p_s.shape)
# patient_df_X_fill_data[patient_df_y_cat==0]
X, y = data_p_s, target_p_s
clf = RandomForestClassifier(n_estimators=100)
print (cross_validate(clf, X, y, scoring=['recall_macro', 'precision_macro', 'f1_macro', 'accuracy'], cv=5) )
y_pred = cross_val_predict(clf,X, y, cv=5 )
print(classification_report(y, y_pred, target_names=['NO','YES']))
from collections import Counter
from imblearn.under_sampling import ClusterCentroids
cc = ClusterCentroids(random_state=0)
X_resampled, y_resampled = cc.fit_resample(patient_df_X_fill_data, patient_df_y_cat)
print(sorted(Counter(y_resampled).items()))
X, y = X_resampled, y_resampled
clf = RandomForestClassifier(n_estimators=100)
print (cross_validate(clf, X, y, scoring=['recall_macro', 'precision_macro', 'f1_macro', 'accuracy'], cv=5) )
y_pred = cross_val_predict(clf,X, y, cv=5 )
print(classification_report(y, y_pred, target_names=['NO','YES']))
from imblearn.under_sampling import RandomUnderSampler
rus = RandomUnderSampler(random_state=0)
X, y = rus.fit_resample(patient_df_X_fill_data, patient_df_y_cat)
clf = RandomForestClassifier(n_estimators=100)
print (cross_validate(clf, X, y, scoring=['recall_macro', 'precision_macro', 'f1_macro', 'accuracy'], cv=5) )
y_pred = cross_val_predict(clf,X, y, cv=5 )
print(classification_report(y, y_pred, target_names=['NO','YES']))
X_positive, y_positive, X_negative, y_negative = X_train, y_train, X_test, y_test
X_positive
cr_score_list = []
y_true_5, y_pred_5 = np.array([]), np.array([])
y_true_5.shape, y_pred_5.shape
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
for i in range(5):
X_train, X_test_pos, y_train, y_test_pos = train_test_split(X_positive, y_positive, test_size=0.136)
print (X_train.shape, X_test_pos.shape, y_train.shape, y_test_pos.shape)
X_test, y_test = np.append(X_negative, X_test_pos, axis=0), np.append(y_negative, y_test_pos, axis=0)
#X_test, y_test = X_negative, y_negative
print (X_test.shape, y_test.shape)
regr = RandomForestRegressor(max_depth=2, random_state=0)
regr.fit(X_train, y_train)
#print(regr.feature_importances_)
y_pred = regr.predict(X_test)
#print(regr.predict(X_test))
print (regr.score(X_test, y_test))
print (regr.score(X_train, y_train))
X_y_test = np.append(X_test, y_pred.reshape(-1,1), axis=1)
print (X_test.shape, y_test.shape, X_y_test.shape)
df_X_y_test = pd.DataFrame(data=X_y_test, columns=patient_df_X_fill_data.columns.tolist()+['MMSE_Predicted'])
df_X_y_test.head(5)
patient_df_tmp = patient_df[['patient_id', 'EPISODE_DATE', 'DIAGNOSTIC_CODE', 'smoker', 'gender_Male', 'age', 'durations(years)', 'MINI_MENTAL_SCORE_PRE', ]]
df_X_y_test_tmp = df_X_y_test[['smoker', 'gender_Male', 'DIAGNOSTIC_CODE', 'age', 'durations(years)', 'MINI_MENTAL_SCORE_PRE', 'MMSE_Predicted']]
p_tmp = patient_df_tmp.merge(df_X_y_test_tmp)
print (patient_df.shape, df_X_y_test_tmp.shape, p_tmp.shape)
print (p_tmp.head(5))
# Compare it with Predicted MMSE Scores and True MMSE values
patient_df_misdiag = pd.read_csv(data_path+'misdiagnosed.csv')
patient_df_misdiag['EPISODE_DATE'] = pd.to_datetime(patient_df_misdiag['EPISODE_DATE'])
patient_df_misdiag.head(5)
patient_df_misdiag_predmis = patient_df_misdiag.merge(p_tmp[['patient_id', 'EPISODE_DATE', 'MMSE_Predicted']], how='outer', on=['patient_id', 'EPISODE_DATE'])
patient_df_misdiag_predmis.head(5)
display(patient_df_misdiag_predmis.isna().sum())
index_MMSE_Predicted = patient_df_misdiag_predmis['MMSE_Predicted'].notnull()
patient_df_misdiag_predmis['MMSE_Predicted'] = patient_df_misdiag_predmis['MMSE_Predicted'].fillna(patient_df_misdiag_predmis['MINI_MENTAL_SCORE'])
print (sum(patient_df_misdiag_predmis['MMSE_Predicted']!=patient_df_misdiag_predmis['MINI_MENTAL_SCORE']))
# find Misdiagnosed
def find_misdiagnosed1():
k = 0
l_misdiagno = []
for pat_id in patient_df_misdiag_predmis['patient_id'].unique():
tmp_df = patient_df_misdiag_predmis[['PETERSEN_MCI', 'AD_STATUS', 'MMSE_Predicted', 'durations(years)']][patient_df_misdiag_predmis['patient_id']==pat_id]
flag = False
mms_val = 0.0
dur_val = 0.0
for i, row in tmp_df.iterrows():
if (row[0]==1.0 or row[1]== 1.0) and flag==False:
l_misdiagno.append('UNKNOWN')
mms_val = row[2]
dur_val = row[3]
flag = True
elif (flag==True):
if (row[2]-mms_val>5.0) and (row[3]-dur_val<=1.0) or\
(row[2]-mms_val>3.0) and ((row[3]-dur_val<2.0 and row[3]-dur_val>1.0)) or\
(row[2]-mms_val>0.0) and (row[3]-dur_val>=2.0):
l_misdiagno.append('YES')
else:
l_misdiagno.append('NO')
else:
l_misdiagno.append('UNKNOWN')
return l_misdiagno
print (len(find_misdiagnosed1()))
patient_df_misdiag_predmis['Misdiagnosed_Predicted'] = find_misdiagnosed1()
c2=patient_df_misdiag_predmis['Misdiagnosed1']!=patient_df_misdiag_predmis['Misdiagnosed_Predicted']
misdiagnosed1_true_pred= patient_df_misdiag_predmis[index_MMSE_Predicted][['Misdiagnosed1', 'Misdiagnosed_Predicted']].replace(['NO', 'YES'], [0,1])
print(classification_report(misdiagnosed1_true_pred.Misdiagnosed1, misdiagnosed1_true_pred.Misdiagnosed_Predicted, target_names=['NO', 'YES']))
y_true_5, y_pred_5 = np.append(y_true_5, misdiagnosed1_true_pred.Misdiagnosed1, axis=0), np.append(y_pred_5, misdiagnosed1_true_pred.Misdiagnosed_Predicted, axis=0)
print(y_true_5.shape, y_pred_5.shape)
df_all = pd.DataFrame(classification_report(y_true_5, y_pred_5, target_names=['NO', 'YES'], output_dict=True))
df_all = df_all.round(2)
n_range = int(y_true_5.shape[0]/X_test.shape[0])
y_shape = X_test.shape[0]
for cr in range(n_range):
d = classification_report(y_true_5.reshape(n_range,y_shape)[cr], y_pred_5.reshape(n_range,y_shape)[cr], target_names=['NO', 'YES'], output_dict=True)
cr_score_list.append(d)
print(cr_score_list)
df_tot = pd.DataFrame(cr_score_list[0])
for i in range(n_range-1):
df_tot = pd.concat([df_tot, pd.DataFrame(cr_score_list[i])], axis='rows')
df_avg = df_tot.groupby(level=0, sort=False).mean().round(2)
acc, sup, acc1, sup1 = df_avg.loc['precision', 'accuracy'], df_avg.loc['support', 'macro avg'],\
df_all.loc['precision', 'accuracy'], df_all.loc['support', 'macro avg']
pd.concat([df_avg.drop(columns='accuracy'), df_all.drop(columns='accuracy')], \
keys= ['Average classification metrics (accuracy:{}, support:{})'.format(acc, sup),\
'Classification metrics (accuracy:{}, support:{})'.format(acc1, sup1)], axis=1)
cm_all = confusion_matrix(y_true_5, y_pred_5)
print(cm_all)
n_range = int(y_true_5.shape[0]/X_test.shape[0])
y_shape = X_test.shape[0]
cr_score_list = []
for cr in range(n_range):
d = confusion_matrix(y_true_5.reshape(n_range,y_shape)[cr], y_pred_5.reshape(n_range,y_shape)[cr])
cr_score_list.append(d)
print(cr_score_list)
cr_score_np = np.array(cr_score_list)
cm_avg = cr_score_np.sum(axis=0)/cr_score_np.shape[0]
print(cm_avg)
###Output
_____no_output_____ |
notebooks/43. Fix rct and met names.ipynb | ###Markdown
Introduction: In Issue 73, Ben correctly raised the question that our reactions in many cases have the KEGG ID as the name, with the more detailed name stored in the notes. Here I will try to write a script that converts this automatically for the reactions and metabolites.
###Code
import cameo
import pandas as pd
import cobra.io
from cobra import Reaction, Metabolite
model = cobra.io.read_sbml_model('../model/p-thermo.xml')
for rct in model.reactions:
if rct.name[:1] in 'R':
try:
name = rct.notes['NAME']
name_new = name.replace("’","") #get rid of any apostrophe as it can screw with the code
split = name_new.split(';') #if there are two names assigned, we can split that and only take the first
name_new = split[0]
try: #if the kegg is also stored in the annotation we can remove the name and remove the note
anno = rct.annotation['kegg.reaction']
rct.name = name_new
del rct.notes['NAME']
except KeyError:
print (rct.id) #here print if the metabolite doesn't have the kegg in the annotation but only in the name
except KeyError: #if the metabolite doesn't have a name, just print the ID
print (rct.id)
else:
continue
#save&commit
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
###Output
_____no_output_____
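###Markdown
The introduction also mentions metabolites. Below is a sketch of the analogous conversion for metabolites; it rests on assumptions that have not been verified against this model: that metabolite notes use the same 'NAME' key, that KEGG-named metabolites have names starting with 'C', and that the KEGG compound ID is stored under 'kegg.compound' in the annotation.
###Code
# Sketch only: analogous name conversion for metabolites (see the assumptions noted above)
for met in model.metabolites:
    if met.name.startswith('C'):  # name looks like a KEGG compound ID
        try:
            name_new = met.notes['NAME'].replace("'", "").split(';')[0]
            if 'kegg.compound' in met.annotation:  # KEGG ID already stored in the annotation
                met.name = name_new
                del met.notes['NAME']
            else:
                print(met.id)  # KEGG ID only available via the name, check by hand
        except KeyError:
            print(met.id)  # metabolite has no NAME note
###Output
_____no_output_____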
###Markdown
Next, I check by hand the reactions whose IDs were printed by the loop above, to make sure their names are still fine.
###Code
model.reactions.MALHYDRO.annotation['kegg.reaction'] = 'R00028'
model.reactions.MALHYDRO.name ='maltose glucohydrolase'
model.reactions.AKGDEHY.name = 'oxoglutarate dehydrogenase (succinyl-transferring)'
model.reactions.OMCDC.name = '3-isopropylmalate dehydrogenase'
model.reactions.DHORD6.name = 'dihydroorotate dehydrogenase'
model.reactions.DRBK.name = 'ribokinase'
model.reactions.TREPT.name = 'protein-Npi-phosphohistidine-sugar phosphotransferase'
model.reactions.AHETHMPPR.name = 'pyruvate dehydrogenase (acetyl-transferring)'
model.reactions.HC01435R.name = 'oxoglutarate dehydrogenase (succinyl-transferring)'
model.reactions.G5SADs.name = 'L-glutamate 5-semialdehyde dehydratase (spontaneous)'
model.reactions.MMTSAOR.name = 'L-Methylmalonyl-CoA Dehydrogenase'
model.reactions.ALCD4.name = 'Butanal dehydrogenase (NADH)'
model.reactions.ALCD4y.name = 'Butanal dehydrogenase (NADPH)'
model.reactions.get_by_id('4OT').name = 'hydroxymuconate tautomerase'
model.reactions.FGFTh.name = 'phosphoribosylglycinamide formyltransferase'
model.reactions.APTA1i.name = 'N-Acetyl-L-2-amino-6-oxopimelate transaminase'
model.reactions.IG3PS.name = 'Imidazole-glycerol-3-phosphate synthase'
model.reactions.RZ5PP.name = 'Alpha-ribazole 5-phosphate phosphatase'
model.reactions.UPP1S.name = 'Hydroxymethylbilane breakdown (spontaneous) '
model.reactions.ADOCBIK.name = 'Adenosyl cobinamide kinase'
model.reactions.R05219.id = 'P6AS'
model.reactions.ACBIPGT.name = 'Adenosyl cobinamide phosphate guanyltransferase'
model.reactions.ADOCBLS.name = 'Adenosylcobalamin 5-phosphate synthase'
model.reactions.HGBYR.name = 'hydrogenobyrinic acid a,c-diamide synthase (glutamine-hydrolysing)'
model.reactions.ACDAH.name = 'adenosylcobyric acid synthase (glutamine-hydrolysing)'
model.reactions.P6AS.name = 'precorrin-6A synthase (deacetylating)'
model.reactions.HBCOAH.name = 'enoyl-CoA hydratase'
model.reactions.HEPDPP.name = 'Octaprenyl diphosphate synthase'
model.reactions.HEXTT.name = 'Heptaprenyl synthase'
model.reactions.PPTT.name = 'Hexaprenyl synthase'
model.reactions.MANNHY.name = 'Manninotriose hydrolysis'
model.reactions.STACHY2.name = 'Stachyose hydrolysis'
model.reactions.ADOCBIP.name = 'adenosylcobinamide kinase'
model.reactions.PMDPHT.name = 'Pyrimidine phosphatase'
model.reactions.ARAT.name = 'aromatic-amino-acid transaminase'
model.metabolites.stys_c.annotation['kegg.glycan'] = 'G00278'
model.metabolites.stys_c.name = 'Stachyose'
model.reactions.MOD.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.MHOPT.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.MOD_4mop.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.MHTPPT.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.MOD_3mop.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.MHOBT.name = '3-methyl-2-oxobutanoate dehydrogenase (2-methylpropanoyl-transferring)'
model.reactions.CBMKr.name = 'carbamoyl-phosphate synthase (ammonia)'
model.reactions.DMPPOR.name = '4-hydroxy-3-methylbut-2-en-1-yl diphosphate reductase'
model.reactions.CHOLD.name = 'Choline dehydrogenase'
model.reactions.AMACT.name = 'beta-ketoacyl-[acyl-carrier-protein] synthase III'
model.reactions.HDEACPT.name = 'beta-ketoacyl-[acyl-carrier-protein] synthase I'
model.reactions.OXSTACPOR.name = 'L-xylulose reductase'
model.reactions.RMK.name = 'Rhamnulokinase'
model.reactions.RMPA.name = 'Rhamnulose-1-phosphate aldolase'
model.reactions.RNDR4.name = 'Ribonucleoside-diphosphate reductase (UDP)'
model.reactions.get_by_id('3HAD180').name = '3-hydroxyacyl-[acyl-carrier-protein] dehydratase (n-C18:0)'
#SAVE&COMMIT
cobra.io.write_sbml_model(model,'../model/p-thermo.xml')
###Output
_____no_output_____ |
Zags_Main_3.7fix (2).ipynb | ###Markdown
1 Sample
###Code
if lemode=='ancestral':
leprompt_length_in_seconds=None
leaudio_file = None
###############################################################################
###############################################################################
codes_file=None
!pip install git+https://github.com/openai/jukebox.git
##$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$#### autosave start
import os
from glob import glob
filex = "/usr/local/lib/python3.7/dist-packages/jukebox/sample.py"
fin = open(filex, "rt")
data = fin.read()
fin.close()
newtext = '''import fire
import os
from glob import glob
from termcolor import colored
from datetime import datetime
newtosample = True'''
data = data.replace('import fire',newtext)
newtext = '''starts = get_starts(total_length, prior.n_ctx, hop_length)
counterr = 0
x = None
for start in starts:'''
data = data.replace('for start in get_starts(total_length, prior.n_ctx, hop_length):',newtext)
newtext = '''global newtosample
newtosample = (new_tokens > 0)
if new_tokens <= 0:'''
data = data.replace('if new_tokens <= 0:',newtext)
newtext = '''counterr += 1
datea = datetime.now()
zs = sample_single_window(zs, labels, sampling_kwargs, level, prior, start, hps)
if newtosample and counterr < len(starts):
del x; x = None; prior.cpu(); empty_cache()
x = prior.decode(zs[level:], start_level=level, bs_chunks=zs[level].shape[0])
logdir = f"{hps.name}/level_{level}"
if not os.path.exists(logdir):
os.makedirs(logdir)
t.save(dict(zs=zs, labels=labels, sampling_kwargs=sampling_kwargs, x=x), f"{logdir}/data.pth.tar")
save_wav(logdir, x, hps.sr)
del x; prior.cuda(); empty_cache(); x = None
dateb = datetime.now()
timex = ((dateb-datea).total_seconds()/60.0)*(len(starts)-counterr)
print(f"Step " + colored(counterr,'blue') + "/" + colored( len(starts),'red') + " ~ New to Sample: " + str(newtosample) + " ~ estimated remaining minutes: " + (colored('???','yellow'), colored(timex,'magenta'))[counterr > 1 and newtosample])'''
data = data.replace('zs = sample_single_window(zs, labels, sampling_kwargs, level, prior, start, hps)',newtext)
newtext = """lepath=hps.name
if level==2:
for filex in glob(os.path.join(lepath + '/level_2','item_*.wav')):
os.rename(filex,filex.replace('item_',lepath.split('/')[-1] + '-'))
if level==1:
for filex in glob(os.path.join(lepath + '/level_1','item_*.wav')):
os.rename(filex,filex.replace('item_',lepath.split('/')[-1] + '-L1-'))
if level==0:
for filex in glob(os.path.join(lepath + '/level_0','item_*.wav')):
os.rename(filex,filex.replace('item_',lepath.split('/')[-1] + '-L0-'))
save_html("""
if leautorename:
data = data.replace('save_html(',newtext)
if leexportlyrics == False:
data = data.replace('if alignments is None','#if alignments is None')
data = data.replace('alignments = get_alignment','#alignments = get_alignment')
data = data.replace('save_html(','#save_html(')
if leprogress == False:
data = data.replace('print(f"Step " +','#print(f"Step " +')
fin = open(filex, "wt")
fin.write(data)
fin.close()
##$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$#### autosave end
import jukebox
import torch as t
import librosa
import os
from datetime import datetime
from IPython.display import Audio
from jukebox.make_models import make_vqvae, make_prior, MODELS, make_model
from jukebox.hparams import Hyperparams, setup_hparams
from jukebox.sample import sample_single_window, _sample, \
sample_partial_window, upsample, \
load_prompts
from jukebox.utils.dist_utils import setup_dist_from_mpi
from jukebox.utils.torch_utils import empty_cache
rank, local_rank, device = setup_dist_from_mpi()
print(datetime.now().strftime("%H:%M:%S"))
model = lemodel
hps = Hyperparams()
hps.sr = 44100
hps.n_samples = lecount
hps.name = lepath
chunk_size = lechunk_size
max_batch_size = lemax_batch_size
hps.levels = 3
hps.hop_fraction = lehop
vqvae, *priors = MODELS[model]
vqvae = make_vqvae(setup_hparams(vqvae, dict(sample_length = 1048576)), device)
top_prior = make_prior(setup_hparams(priors[-1], dict()), vqvae, device)
# Prime song creation using an arbitrary audio sample.
mode = lemode
codes_file=None
audio_file = leaudio_file
prompt_length_in_seconds=leprompt_length_in_seconds
if os.path.exists(hps.name):
# Identify the lowest level generated and continue from there.
for level in [0, 1, 2]:
data = f"{hps.name}/level_{level}/data.pth.tar"
if os.path.isfile(data):
mode = mode if 'continue' in mode else 'upsample'
codes_file = data
print(mode + 'ing from level ' + str(level))
break
print('mode is now '+mode)
sample_hps = Hyperparams(dict(mode=mode, codes_file=codes_file, audio_file=audio_file, prompt_length_in_seconds=prompt_length_in_seconds))
sample_length_in_seconds = lesample_length_in_seconds
hps.sample_length = (int(sample_length_in_seconds*hps.sr)//top_prior.raw_to_tokens)*top_prior.raw_to_tokens
assert hps.sample_length >= top_prior.n_ctx*top_prior.raw_to_tokens, f'Please choose a larger sampling rate'
metas = [dict(artist = leartist,
genre = legenre,
total_length = hps.sample_length,
offset = 0,
lyrics = lelyrics,
),
] * hps.n_samples
labels = [None, None, top_prior.labeller.get_batch_labels(metas, 'cuda')]
#----------------------------------------------------------2
sampling_temperature = lesampling_temperature
lower_batch_size = lelower_batch_size
max_batch_size = lemax_batch_size
lower_level_chunk_size = lelower_level_chunk_size
chunk_size = lechunk_size
sampling_kwargs = [dict(temp=.99, fp16=True, max_batch_size=lower_batch_size,
chunk_size=lower_level_chunk_size),
dict(temp=.99, fp16=True, max_batch_size=lower_batch_size,
chunk_size=lower_level_chunk_size),
dict(temp=sampling_temperature, fp16=True,
max_batch_size=max_batch_size, chunk_size=chunk_size)]
if sample_hps.mode == 'ancestral':
zs = [t.zeros(hps.n_samples,0,dtype=t.long, device='cuda') for _ in range(len(priors))]
zs = _sample(zs, labels, sampling_kwargs, [None, None, top_prior], [2], hps)
elif sample_hps.mode == 'upsample':
assert sample_hps.codes_file is not None
# Load codes.
data = t.load(sample_hps.codes_file, map_location='cpu')
zs = [z.cuda() for z in data['zs']]
assert zs[-1].shape[0] == hps.n_samples, f"Expected bs = {hps.n_samples}, got {zs[-1].shape[0]}"
del data
print('Falling through to the upsample step later in the notebook.')
elif sample_hps.mode == 'primed':
assert sample_hps.audio_file is not None
audio_files = sample_hps.audio_file.split(',')
duration = (int(sample_hps.prompt_length_in_seconds*hps.sr)//top_prior.raw_to_tokens)*top_prior.raw_to_tokens
x = load_prompts(audio_files, duration, hps)
zs = top_prior.encode(x, start_level=0, end_level=len(priors), bs_chunks=x.shape[0])
print(sample_hps.prompt_length_in_seconds)
print(hps.sr)
print(top_prior.raw_to_tokens)
print('aaaaaaaaaaaaaaaaaaaaaaaaaaaa 4.52')
print(duration)
print(audio_files)
zs = _sample(zs, labels, sampling_kwargs, [None, None, top_prior], [2], hps)
elif sample_hps.mode == 'continue':
data = t.load(sample_hps.codes_file, map_location='cpu')
zs = [z.cuda() for z in data['zs']]
zs = _sample(zs, labels, sampling_kwargs, [None, None, top_prior], [2], hps)
elif sample_hps.mode == 'cutcontinue':
print('-------CUT INIT--------')
lecutlen = (int(lecut*hps.sr)//top_prior.raw_to_tokens)*top_prior.raw_to_tokens
print(lecutlen)
data = t.load(codes_file, map_location='cpu')
zabaca = [z.cuda() for z in data['zs']]
print(zabaca)
assert zabaca[-1].shape[0] == hps.n_samples, f"Expected bs = {hps.n_samples}, got {zs[-1].shape[0]}"
priorsz = [top_prior] * 3
top_raw_to_tokens = priorsz[-1].raw_to_tokens
assert lecutlen % top_raw_to_tokens == 0, f"Cut-off duration {lecutlen} not an exact multiple of top_raw_to_tokens"
assert lecutlen//top_raw_to_tokens <= zabaca[-1].shape[1], f"Cut-off tokens {lecutlen//priorsz[-1].raw_to_tokens} longer than tokens {zs[-1].shape[1]} in saved codes"
zabaca = [z[:,:lecutlen//prior.raw_to_tokens] for z, prior in zip(zabaca, priorsz)]
hps.sample_length = lecutlen
print(zabaca)
zs = _sample(zabaca, labels, sampling_kwargs, [None, None, top_prior], [2], hps)
del data
print('-------CUT OK--------')
hps.sample_length = (int(sample_length_in_seconds*hps.sr)//top_prior.raw_to_tokens)*top_prior.raw_to_tokens
data = t.load(sample_hps.codes_file, map_location='cpu')
zibica = [z.cuda() for z in data['zs']]
zubu = zibica[:]
if transpose != [0,1,2]:
zubu[2][0] = zibica[:][2][transpose[0]];zubu[2][1] = zibica[:][2][transpose[1]];zubu[2][2] = zibica[:][2][transpose[2]]
zubu[1][0] = zibica[:][1][transpose[0]];zubu[1][1] = zibica[:][1][transpose[1]];zubu[1][2] = zibica[:][1][transpose[2]]
zubu[0][0] = zibica[:][0][transpose[0]];zubu[0][1] = zibica[:][0][transpose[1]];zubu[0][2] = zibica[:][0][transpose[2]]
zubu = _sample(zubu, labels, sampling_kwargs, [None, None, top_prior], [2], hps)
print('-------CONTINUE AFTER CUT OK--------')
zs = zubu
else:
raise ValueError(f'Unknown sample mode {sample_hps.mode}.')
print(datetime.now().strftime("%H:%M:%S"))
###Output
_____no_output_____
###Markdown
2 Upsample
###Code
print(datetime.now().strftime("%H:%M:%S"))
del top_prior
empty_cache()
top_prior=None
upsamplers = [make_prior(setup_hparams(prior, dict()), vqvae, 'cpu') for prior in priors[:-1]]
labels[:2] = [prior.labeller.get_batch_labels(metas, 'cuda') for prior in upsamplers]
zs = upsample(zs, labels, sampling_kwargs, [*upsamplers, top_prior], hps)
print(datetime.now().strftime("%H:%M:%S"))
###Output
_____no_output_____ |
shared/notebooks/MyNextJupyterNLPJavaNotebook.ipynb | ###Markdown
Table of contents* [Find out the version info of the underlying JDK/JVM on which this notebook is running](Find-out-the-version-info-of-the-underlying-JDK/JVM-on-which-this-notebook-is-running)* [Valohai command-line client](Valohai-command-line-client)* [Set up project using the vh client](Set-up-project-using-the-vh-client)* Java bindings (Java API) via Valohai client * [Language Detector API](Language-Detector-API) * [Sentence Detection API](Sentence-Detection-API) * [Tokenizer API](Tokenizer-API) * [Name Finder API](Name-Finder-API) * [More Name Finder API examples](More-Name-Finder-API-examples) * [Parts of speech (POS) Tagger API](Parts-of-speech-(POS)-Tagger-API) * [Chunking API](Chunking-API) * [Parsing API](Parsing-API) Find out the version info of the underlying JDK/JVM on which this notebook is running
###Code
System.out.println("java.version: " + System.getProperty("java.version"));
System.out.println("java.specification.version: " + System.getProperty("java.specification.version"));
System.out.println("java.runtime.version: " + System.getProperty("java.runtime.version"));
import java.lang.management.ManagementFactory;
System.out.println("java runtime VM version: " + ManagementFactory.getRuntimeMXBean().getVmVersion());
###Output
java runtime VM version: 11.0.4+11
###Markdown
Return to [Table of contents](Table-of-contents) Valohai command-line client. The container comes with the VH client installed, so you won't need to do anything; above all, the shell scripts in the container encapsulate a few of the VH client functionalities for ease of use. If you would still like to use the VH client directly, make sure you pass the VALOHAI_TOKEN variable wherever authentication is involved, e.g.```$ vh --valohai-token ${VALOHAI_TOKEN} [...your commands and options...]```For more details, see the [CLI usage docs](https://docs.valohai.com/valohai-cli/index.html) on [Valohai.com]().
###Code
%system vh --help
###Output
Usage: vh [OPTIONS] COMMAND [ARGS]...
:type ctx: click.Context
Options:
--debug / --no-debug
--output-format, --table-format [human|csv|tsv|scsv|psv|json]
--valohai-host URL Override the Valohai API host (default
https://app.valohai.com/) [env var:
VALOHAI_HOST]
--valohai-token SECRET Use this Valohai authentication token [env
var: VALOHAI_TOKEN]
--project UUID (Advanced) Override the project ID [env
var: VALOHAI_PROJECT]
--project-mode local|remote (Advanced) When using --project, set the
project mode [env var:
VALOHAI_PROJECT_MODE]
--project-root DIR (Advanced) When using --project, set the
project root directory [env var:
VALOHAI_PROJECT_ROOT]
--help Show this message and exit.
Commands:
environments List all available execution environments.
execution Execution-related commands.
init Interactively initialize a Valohai project.
lint Lint (syntax-check) a valohai.yaml file.
login Log in into Valohai.
logout Remove local authentication token.
parcel
project Project-related commands.
Commands (execution ...):
execution delete Delete one or more executions, optionally purging their
outputs as well.
execution info Show execution info.
execution list Show a list of executions for the project.
execution logs Show or stream execution event log.
execution open Open an execution in a web browser.
execution outputs List and download execution outputs.
execution run Start an execution of a step.
execution stop Stop one or more in-progress executions.
execution summarize Summarize execution metadata.
execution watch Watch execution progress in a console UI.
Commands (project ...):
project commits List the commits for the linked project.
project create Create a new project and optionally link it to the
directory.
project fetch Fetch new commits for the linked project.
project link Link a directory with a Valohai project.
project list List all projects.
project open Open the project's view in a web browser.
project status Get the general status of the linked project
project unlink Unlink a linked Valohai project.
###Markdown
Set up project using a shell script (internally uses the vh client). _Your Valohai token must have been provided (and set) during startup of the container; without it, the rest of the commands in this notebook may not work. The commands below expect it and will not run successfully if it is not set in the environment._
###Code
// Please execute this cell only once; check your Valohai dashboard for the presence of the project
%system ./create-project.sh nlp-java-jvm-example
###Output
😼 Success! Project nlp-java-jvm-example created.
🙂 Success! Linked /home/jovyan/work to nlp-java-jvm-example.
###Markdown
Language Detector API Show a simple example detecting the language of a sentence using a language-detection model called langdetect-183.bin on a remote instance (powered by Valohai), from within the notebook cell using cell magic!
###Code
%system ./exec-step.sh "detect-language" "Another sentence"
%system ./watch-execution.sh 54
// ^^^ this number is returned by the previous action, see above cell
%system ./show-final-result.sh 54
// ^^^ this is returned by the action, see two cells above
###Output
Gathering output from counter 54
08:01:01.44 [Started...]
08:01:02.24 Sentence: 'Another sentence'
08:01:02.26 Best language: plt
08:01:02.26 Best language confidence: 0.014114330560624104
08:01:02.27
08:01:02.27 Predict languages (with confidence): [tur (0.009708737864077673), bel (0.009708737864077673), san (0.009708737864077673), ara (0.009708737864077673), mon (0.009708737864077673), tel (0.009708737864077673), sin (0.009708737864077673), pes (0.009708737864077673), min (0.009708737864077673), cmn (0.009708737864077673), aze (0.009708737864077673), fao (0.009708737864077673), ita (0.009708737864077673), ceb (0.009708737864077673), mkd (0.009708737864077673), eng (0.009708737864077673), nno (0.009708737864077673), lvs (0.009708737864077673), kor (0.009708737864077673), som (0.009708737864077673), swa (0.009708737864077673), hun (0.009708737864077673), fra (0.009708737864077673), nld (0.009708737864077673), mlt (0.009708737864077673), bak (0.009708737864077673), ekk (0.009708737864077673), ron (0.009708737864077673), gle (0.009708737864077673), hin (0.009708737864077673), est (0.009708737864077673), tha (0.009708737864077673), slk (0.009708737864077673), ltz (0.009708737864077673), kan (0.009708737864077673), eus (0.009708737864077673), epo (0.009708737864077673), bos (0.009708737864077673), pol (0.009708737864077673), nep (0.009708737864077673), lit (0.009708737864077673), war (0.009708737864077673), srp (0.009708737864077673), ces (0.009708737864077673), che (0.009708737864077673), lav (0.009708737864077673), nds (0.009708737864077673), dan (0.009708737864077673), mar (0.009708737864077673), nan (0.009708737864077673), glg (0.009708737864077673), gsw (0.009708737864077673), fry (0.009708737864077673), uzb (0.009708737864077673), mal (0.009708737864077673), vol (0.009708737864077673), fas (0.009708737864077673), msa (0.009708737864077673), cym (0.009708737864077673), nob (0.009708737864077673), ben (0.009708737864077673), kaz (0.009708737864077673), heb (0.009708737864077673), bre (0.009708737864077673), jav (0.009708737864077673), sqi (0.009708737864077673), kir (0.009708737864077673), cat (0.009708737864077673), oci (0.009708737864077673), vie (0.009708737864077673), kat (0.009708737864077673), tam (0.009708737864077673), tgk (0.009708737864077673), mri (0.009708737864077673), slv (0.009708737864077673), lat (0.009708737864077673), tgl (0.009708737864077673), pan (0.009708737864077673), swe (0.009708737864077673), lim (0.009708737864077673), tat (0.009708737864077673), ell (0.009708737864077673), afr (0.009708737864077673), pus (0.009708737864077673), isl (0.009708737864077673), sun (0.009708737864077673), urd (0.009708737864077673), hye (0.009708737864077673), hrv (0.009708737864077673), ast (0.009708737864077673), rus (0.009708737864077673), spa (0.009708737864077673), ind (0.009708737864077673), pnb (0.009708737864077673), bul (0.009708737864077673), plt (0.009708737864077673), deu (0.009708737864077673), zul (0.009708737864077673), ukr (0.009708737864077673), jpn (0.009708737864077673), por (0.009708737864077673), guj (0.009708737864077673), fin (0.009708737864077673)]
08:01:02.27 [...Finished]
08:01:02.27 + set +x
08:01:02.76 container finished with return code 0, duration 3.013581
08:01:02.77 completed in 7.62 seconds
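###Markdown
For comparison, here is a minimal sketch of calling the OpenNLP Language Detector API directly from a Java cell. It assumes the opennlp-tools jar is on the notebook's classpath and that a copy of langdetect-183.bin has been downloaded locally - neither is guaranteed by the remote-execution setup shown above.
###Code
// Sketch only - assumes opennlp-tools on the classpath and the model file available locally
import opennlp.tools.langdetect.*;
import java.io.FileInputStream;
import java.io.InputStream;

try (InputStream modelIn = new FileInputStream("langdetect-183.bin")) {
    LanguageDetectorModel model = new LanguageDetectorModel(modelIn);
    LanguageDetector detector = new LanguageDetectorME(model);
    // predictLanguage returns the single best language with its confidence
    Language best = detector.predictLanguage("Another sentence");
    System.out.println(best.getLang() + " (" + best.getConfidence() + ")");
}
###Output
_____no_output_____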
###Markdown
Check out https://www.apache.org/dist/opennlp/models/langdetect/1.8.3/README.txt to find out what each of the three-letter language codes means. **Apparently it does not detect this as English - the language-detection model may need more training. See https://opennlp.apache.org/docs/1.9.1/manual/opennlp.html#tools.langdetect.training on how this can be achieved.** Return to [Table of contents](Table-of-contents) Sentence Detection API Show a simple example detecting sentences using a sentence-detection model called en-sent.bin on a remote instance (powered by Valohai), from within the notebook cell using cell magic!
###Code
%system ./exec-step.sh "detect-sentence" "Yet another sentence. And some other sentence."
%system ./watch-execution.sh 55
// ^^^ this number is returned by the previous action, see above cell
%system ./show-final-result.sh 55
// ^^^ this is returned by the action, see two cells above
###Output
Gathering output from counter 55
08:01:47.78 [Started...]
08:01:47.94 Sentence: 'Yet another sentence. And some other sentence.'
08:01:47.95 ['Yet another sentence., And some other sentence.']
08:01:47.95
08:01:47.95 [[0..22), [23..48)]
08:01:47.95 [...Finished]
08:01:47.96 + set +x
08:01:49.17 container finished with return code 0, duration 3.012751
08:01:49.17 completed in 7.63 seconds
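###Markdown
A minimal sketch of the equivalent direct Java call to the Sentence Detector API (again assuming opennlp-tools on the classpath and a local copy of en-sent.bin, which this notebook's remote setup does not provide):
###Code
// Sketch only - detect sentences and their character spans
import opennlp.tools.sentdetect.*;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Arrays;

try (InputStream modelIn = new FileInputStream("en-sent.bin")) {
    SentenceModel model = new SentenceModel(modelIn);
    SentenceDetectorME detector = new SentenceDetectorME(model);
    String text = "Yet another sentence. And some other sentence.";
    System.out.println(Arrays.toString(detector.sentDetect(text)));    // the sentences
    System.out.println(Arrays.toString(detector.sentPosDetect(text))); // their character spans
}
###Output
_____no_output_____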
###Markdown
**As you can see, the output above shows the two ways to use the Sentence Detection API to detect sentences in a piece of text: as the sentence strings themselves and as their character spans.** Return to [Table of contents](Table-of-contents) Tokenizer API Show a simple example of tokenizing a sentence using a tokenizer model called en-token.bin on a remote instance (powered by Valohai), from within the notebook cell using cell magic!
###Code
%system ./exec-step.sh "tokenize" "Yes please tokenize this sentence."
%system ./watch-execution.sh 56
// ^^^ this number is returned by the previous action, see above cell
%system ./show-final-result.sh 56
// ^^^ this is returned by the action, see two cells above
###Output
Gathering output from counter 56
08:02:31.88 [Started...]
08:02:32.11 Sentence: 'Yes please tokenize this sentence.'
08:02:32.11 [', Yes, please, tokenize, this, sentence, ., ']
08:02:32.11 Probabilities of each of the tokens above
08:02:32.12 0.6935247086232172
08:02:32.12 0.9967132853655197
08:02:32.12 1.0
08:02:32.13 1.0
08:02:32.13 1.0
08:02:32.13 0.9960592442445741
08:02:32.13 0.9979320415238206
08:02:32.13 1.0
08:02:32.13
08:02:32.13 [[0..1), [1..4), [5..11), [12..20), [21..25), [26..34), [34..35), [35..36)]
08:02:32.14 [...Finished]
08:02:32.14 + set +x
08:02:33.21 container finished with return code 0, duration 3.011259
08:02:33.21 completed in 7.73 seconds
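###Markdown
A minimal sketch of the Tokenizer API used directly from Java (assuming opennlp-tools on the classpath and a local en-token.bin):
###Code
// Sketch only - tokenize a sentence and inspect per-token probabilities
import opennlp.tools.tokenize.*;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Arrays;

try (InputStream modelIn = new FileInputStream("en-token.bin")) {
    TokenizerModel model = new TokenizerModel(modelIn);
    TokenizerME tokenizer = new TokenizerME(model);
    String[] tokens = tokenizer.tokenize("Yes please tokenize this sentence.");
    System.out.println(Arrays.toString(tokens));
    System.out.println(Arrays.toString(tokenizer.getTokenProbabilities())); // one probability per token
}
###Output
_____no_output_____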
###Markdown
Return to [Table of contents](Table-of-contents) Name Finder API Show a simple example of finding person names in a sentence using a person name-finder model on a remote instance (powered by Valohai), from within the notebook cell using cell magic!
###Code
%system ./exec-step.sh "name-finder-person" "My name is John. And his name is Pierre."
%system ./watch-execution.sh 58
// ^^^ this number is returned by the previous action, see above cell
%system ./show-final-result.sh 58
// ^^^ this is returned by the action, see two cells above
###Output
Gathering output from counter 58
08:07:05.14 [Started...]
08:07:05.93 Sentence: ['My, name, is, John., And, his, name, is, Pierre.']
08:07:05.95 []
08:07:05.95 Sentence: [John, is, from, London, England.]
08:07:05.95 [[0..1) person]
08:07:05.95 [...Finished]
08:07:05.96 + set +x
08:07:06.51 container finished with return code 0, duration 3.011655
08:07:06.51 completed in 130.79 seconds
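###Markdown
A minimal sketch of the Name Finder API in plain Java (assuming opennlp-tools on the classpath, a local en-ner-person.bin, and pre-tokenized input). The other entity models listed further below - date, location, money, organization, percentage, time - are used in exactly the same way, just with a different model file.
###Code
// Sketch only - find person names in a tokenized sentence
import opennlp.tools.namefind.*;
import opennlp.tools.util.Span;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Arrays;

try (InputStream modelIn = new FileInputStream("en-ner-person.bin")) {
    TokenNameFinderModel model = new TokenNameFinderModel(modelIn);
    NameFinderME nameFinder = new NameFinderME(model);
    String[] tokens = {"John", "is", "from", "London", ",", "England", "."};
    Span[] names = nameFinder.find(tokens);   // spans of detected person names
    System.out.println(Arrays.toString(names));
    nameFinder.clearAdaptiveData();           // reset adaptive data between documents
}
###Output
_____no_output_____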
###Markdown
**As you can see above, it detected the name of the person in the second sentence, but found no names in the first.** Return to [Table of contents](Table-of-contents) More Name Finder API examples. There are a handful more Name Finder related models, i.e. Name Finder Date, Name Finder Location, Name Finder Money, Name Finder Organization, Name Finder Percentage, and Name Finder Time. Their model names are, respectively: en-ner-date.bin, en-ner-location.bin, en-ner-money.bin, en-ner-organization.bin, en-ner-percentage.bin, and en-ner-time.bin, and they can be found at the same location as all the other models, i.e. http://opennlp.sourceforge.net/models-1.5/ Return to [Table of contents](Table-of-contents) Parts of speech (POS) Tagger API Show a simple example of part-of-speech tagging a sentence using a PoS tagger model called en-pos-maxent.bin on a remote instance (powered by Valohai), from within the notebook cell using cell magic!
###Code
%system ./exec-step.sh "pos-tagger" "Tag this sentence word by word."
%system ./watch-execution.sh 59
// ^^^ this number is returned by the previous action, see above cell
%system ./show-final-result.sh 59
// ^^^ this is returned by the action, see two cells above
###Output
Gathering output from counter 59
08:06:59.17 [Started...]
08:07:00.09 Sentence: ['Tag, this, sentence, word, by, word.']
08:07:00.09 [., DT, NN, NN, IN, NNS]
08:07:00.10
08:07:00.10 Probabilities of tags:
08:07:00.10 0.30484035720296515
08:07:00.10 0.9616776639768017
08:07:00.10 0.5932031370367039
08:07:00.11 0.9494315222897873
08:07:00.11 0.9500097291663564
08:07:00.11 0.6612561687805658
08:07:00.11
08:07:00.11 Tags as sequences (contains probabilities:
08:07:00.11 [-2.26605028309725 [., DT, NN, NN, IN, NNS], -3.1807407470423357 [PRP, DT, NN, NN, IN, NNS], -4.042343016881638 [PRP$, DT, NN, NN, IN, NNS]]
08:07:00.11 [...Finished]
08:07:00.12 + set +x
08:07:01.35 container finished with return code 0, duration 4.023575
08:07:01.35 completed in 14.65 seconds
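###Markdown
A minimal sketch of the POS Tagger API called directly from Java (assuming opennlp-tools on the classpath, a local en-pos-maxent.bin, and pre-tokenized input):
###Code
// Sketch only - tag a tokenized sentence with Penn Treebank POS tags
import opennlp.tools.postag.*;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Arrays;

try (InputStream modelIn = new FileInputStream("en-pos-maxent.bin")) {
    POSModel model = new POSModel(modelIn);
    POSTaggerME tagger = new POSTaggerME(model);
    String[] tokens = {"Tag", "this", "sentence", "word", "by", "word", "."};
    String[] tags = tagger.tag(tokens);
    System.out.println(Arrays.toString(tags));
    System.out.println(Arrays.toString(tagger.probs())); // probability of each assigned tag
}
###Output
_____no_output_____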
###Markdown
Check out https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html to find out what each of the tags means. Return to [Table of contents](Table-of-contents) Chunking API Show a simple example of chunking a sentence using a chunker model called en-chunker.bin on a remote instance (powered by Valohai), from within the notebook cell using cell magic!
###Code
%system ./exec-step.sh "chunker"
%system ./watch-execution.sh 60
// ^^^ this number is returned by the previous action, see above cell
%system ./show-final-result.sh 60
// ^^^ this is returned by the action, see two cells above
###Output
Gathering output from counter 60
08:08:00.86 [Started...]
08:08:01.37 Sentence: [Rockwell, International, Corp., 's, Tulsa, unit, said, it, signed, a, tentative, agreement, extending, its, contract, with, Boeing, Co., to, provide, structural, parts, for, Boeing, 's, 747, jetliners, .]
08:08:01.37
08:08:01.37 Tags chunked: [B-NP, I-NP, I-NP, B-NP, I-NP, I-NP, B-VP, B-NP, B-VP, B-NP, I-NP, I-NP, B-VP, B-NP, I-NP, B-PP, B-NP, I-NP, B-VP, I-VP, B-NP, I-NP, B-PP, B-NP, B-NP, I-NP, I-NP, O]
08:08:01.37
08:08:01.37 Tags chunked (with probabilities): [-0.3533550124421968 [B-NP, I-NP, I-NP, B-NP, I-NP, I-NP, B-VP, B-NP, B-VP, B-NP, I-NP, I-NP, B-VP, B-NP, I-NP, B-PP, B-NP, I-NP, B-VP, I-VP, B-NP, I-NP, B-PP, B-NP, B-NP, I-NP, I-NP, O], -4.9833651782143225 [B-NP, I-NP, I-NP, B-NP, I-NP, I-NP, B-VP, B-NP, B-VP, B-NP, I-NP, I-NP, B-PP, B-NP, I-NP, B-PP, B-NP, I-NP, B-VP, I-VP, B-NP, I-NP, B-PP, B-NP, B-NP, I-NP, I-NP, O], -5.207232108117287 [B-NP, I-NP, I-NP, B-NP, I-NP, I-NP, B-VP, B-NP, B-VP, B-NP, I-NP, I-NP, I-NP, B-NP, I-NP, B-PP, B-NP, I-NP, B-VP, I-VP, B-NP, I-NP, B-PP, B-NP, B-NP, I-NP, I-NP, O], -5.250640871618706 [B-NP, I-NP, I-NP, B-NP, I-NP, O, B-VP, B-NP, B-VP, B-NP, I-NP, I-NP, B-VP, B-NP, I-NP, B-PP, B-NP, I-NP, B-VP, I-VP, B-NP, I-NP, B-PP, B-NP, B-NP, I-NP, I-NP, O], -5.2542712803928815 [B-NP, I-NP, I-NP, B-NP, I-NP, I-NP, B-VP, B-NP, B-VP, B-NP, I-NP, I-NP, B-VP, B-NP, I-NP, B-PP, B-NP, I-NP, B-VP, I-VP, B-NP, I-NP, B-PP, B-NP, B-NP, I-NP, B-VP, O], -5.669524713139481 [B-NP, I-NP, I-NP, B-NP, I-NP, I-NP, B-VP, B-NP, B-VP, B-NP, I-NP, I-NP, B-VP, B-NP, I-NP, B-PP, B-NP, I-NP, B-VP, I-VP, B-NP, I-NP, B-NP, B-NP, B-NP, I-NP, I-NP, O], -5.802479037079788 [B-NP, I-NP, B-PP, B-NP, I-NP, I-NP, B-VP, B-NP, B-VP, B-NP, I-NP, I-NP, B-VP, B-NP, I-NP, B-PP, B-NP, I-NP, B-VP, I-VP, B-NP, I-NP, B-PP, B-NP, B-NP, I-NP, I-NP, O], -5.811282463417802 [B-NP, I-NP, I-NP, B-NP, I-NP, I-NP, B-VP, B-NP, B-VP, B-NP, I-NP, I-NP, B-VP, B-NP, I-NP, B-PP, B-NP, O, B-VP, I-VP, B-NP, I-NP, B-PP, B-NP, B-NP, I-NP, I-NP, O], -5.878077943396101 [B-NP, I-NP, I-NP, B-NP, I-NP, I-NP, B-VP, B-NP, B-VP, B-NP, I-NP, I-NP, B-VP, B-NP, I-NP, B-PP, B-NP, I-NP, B-VP, I-VP, B-NP, I-NP, B-LST, B-NP, B-NP, I-NP, I-NP, O], -5.960383921186912 [B-NP, I-NP, I-NP, B-NP, I-NP, I-NP, B-ADVP, B-NP, B-VP, B-NP, I-NP, I-NP, B-VP, B-NP, I-NP, B-PP, B-NP, I-NP, B-VP, I-VP, B-NP, I-NP, B-PP, B-NP, B-NP, I-NP, I-NP, O]]
08:08:01.38
08:08:01.38 [...Finished]
08:08:01.38 + set +x
08:08:02.08 container finished with return code 0, duration 3.011427
08:08:02.09 completed in 7.63 seconds
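###Markdown
A minimal sketch of the Chunker API in plain Java (assuming opennlp-tools on the classpath, a local en-chunker.bin, and tokens plus POS tags produced as in the previous examples; the tags below are illustrative, hand-written ones):
###Code
// Sketch only - chunk a POS-tagged sentence into phrases (B-NP, I-NP, B-VP, ...)
import opennlp.tools.chunker.*;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Arrays;

try (InputStream modelIn = new FileInputStream("en-chunker.bin")) {
    ChunkerModel model = new ChunkerModel(modelIn);
    ChunkerME chunker = new ChunkerME(model);
    String[] tokens  = {"Rockwell", "said", "it", "signed", "a", "tentative", "agreement", "."};
    String[] posTags = {"NNP",      "VBD",  "PRP", "VBD",   "DT", "JJ",        "NN",        "."};
    String[] chunks = chunker.chunk(tokens, posTags);
    System.out.println(Arrays.toString(chunks));
}
###Output
_____no_output_____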
###Markdown
Check out https://www.ling.upenn.edu/courses/Fall_2003/ling001/penn_treebank_pos.html to find out what each of the tags means. Return to [Table of contents](Table-of-contents) Parsing API Show a simple example of parsing chunked sentences using a parser-chunker model called en-parser-chunking.bin on a remote instance (powered by Valohai), from within the notebook cell using cell magic!
###Code
%system ./exec-step.sh "parser" "Another not so quick brown fox jumps over the lazy dog."
%system ./watch-execution.sh 62
// ^^^ this number is returned by the previous action, see above cell
%system ./show-final-result.sh 62
// ^^^ this is returned by the action, see two cells above
###Output
Gathering output from counter 62
08:33:05.90 [Started...]
08:33:09.37 Sentence: 'Another not so quick brown fox jumps over the lazy dog.'
08:33:09.38
08:33:09.38 (TOP (NP (NP (NP (PRP 'Another)) (RB not) (ADJP (RB so) (JJ quick)) (JJ brown) (NN fox) (NNS jumps)) (PP (IN over) (NP (DT the) (JJ lazy) (NNS dog.')))))
08:33:09.38 [...Finished]
08:33:09.58 + set +x
08:33:10.19 container finished with return code 0, duration 6.019934
08:33:10.19 completed in 10.64 seconds
|
Titantic.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/gazorpazorpfield/Titantic-ML-from-Disaster/blob/master/Titantic.ipynb)
###Code
from google.colab import files
uploaded = files.upload()
import io
import pandas as pd
gender_sub= pd.read_csv(io.StringIO(uploaded['gender_submission.csv'].decode('utf-8')))
test_data = pd.read_csv(io.StringIO(uploaded['test.csv'].decode('utf-8')))
train_data = pd.read_csv(io.StringIO(uploaded['train.csv'].decode('utf-8')))
gender_sub.head()
test_data.head()
train_data.head()
import numpy as np
import random as rnd
import matplotlib
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
train_df = train_data
test_df = test_data
combine = [train_df, test_df]
print(train_df.columns.values)
fig = plt.figure(figsize=(20,12))
female_color = "#FA0000"
plt.subplot2grid((3,3), (0,0))
train_df.Survived.value_counts(normalize=True).plot(kind="bar", alpha=0.5)
plt.title("Survived")
plt.subplot2grid((3,3), (0,1))
plt.scatter(train_df.Survived, train_df.Age, alpha=0.1)
plt.title("Age wrt Survived")
plt.subplot2grid((3,3), (0,2))
train_df.Pclass.value_counts(normalize=True).plot(kind="bar", alpha=0.5)
plt.title("Class")
plt.subplot2grid((3,3), (1,0), colspan=2)
for x in [1,2,3]:
train_df.Age[train_df.Pclass ==x].plot(kind='kde')
plt.title("Class wrt Age")
plt.legend(("1st", "2nd", "3rd"))
plt.subplot2grid((3,3), (1,2))
train_df.Embarked.value_counts(normalize=True).plot(kind="bar", alpha=0.5)
plt.title("Embarked")
plt.subplot2grid((3,3), (2,0))
train_df.Survived[train_df.Sex == "male"].value_counts(normalize=True).plot(kind="bar", alpha=0.5)
plt.title("Men Survived")
plt.subplot2grid((3,3), (2,1))
train_df.Survived[train_df.Sex == "female"].value_counts(normalize=True).plot(kind="bar", alpha=0.5, color=female_color)
plt.title("Women Survived")
fig = plt.figure(figsize=(20,12))
plt.subplot2grid((3,4), (1,0), colspan=2)
for x in [1,2,3]:
train_df.Survived[train_df.Pclass ==x].plot(kind='kde')
plt.title("Class wrt Survived")
plt.legend(("1st", "2nd", "3rd"))
plt.subplot2grid((3,4), (2,0))
train_df.Survived[(train_df.Sex == "male") & (train_df.Pclass == 1)].value_counts(normalize=True).plot(kind="bar", alpha=0.5)
plt.title("Rich Men Survived")
plt.subplot2grid((3,4), (2,1))
train_df.Survived[(train_df.Sex == "male") & (train_df.Pclass == 3)].value_counts(normalize=True).plot(kind="bar", alpha=0.5)
plt.title("Poor Men Survived")
plt.subplot2grid((3,4), (2,2))
train_df.Survived[(train_df.Sex == "female") & (train_df.Pclass == 1)].value_counts(normalize=True).plot(kind="bar", alpha=0.5, color=female_color)
plt.title("Rich Women Survived")
plt.subplot2grid((3,4), (2,3))
train_df.Survived[(train_df.Sex == "female") & (train_df.Pclass == 3)].value_counts(normalize=True).plot(kind="bar", alpha=0.5, color=female_color)
plt.title("Poor Women Survived")
train_df.head()
train_df.tail()
train_df.info()
print('_'*40)
test_df.info()
#Gender Classification Training
train_df["Hyp"] = 0
train_df.loc[train_df.Sex == "female", "Hyp"] = 1
train_df["Result"] = 0
train_df.loc[train_df.Survived == train_df["Hyp"], "Result"] = 1
print(train_df["Result"].value_counts(normalize=True))
# Logistic Regression (the classifier used below is LogisticRegression, not linear regression)
from sklearn import linear_model, preprocessing
!pip install datacleaner
from datacleaner import autoclean
train_df= autoclean(train_df)
target = train_df["Survived"].values
feature_names= ["Pclass", "Age", "Fare", "Embarked", "Sex", "SibSp", "Parch"]
features = train_df[feature_names].values
classifier = linear_model.LogisticRegression()
classifier_ = classifier.fit(features, target)
print(classifier_.score(features, target))
#Polynomial
poly = preprocessing.PolynomialFeatures(degree=2)
poly_features = poly.fit_transform(features)
classifier_ = classifier.fit(poly_features, target)
print(classifier_.score(poly_features, target))
###Output
Requirement already satisfied: datacleaner in /usr/local/lib/python2.7/dist-packages (0.1.5)
Requirement already satisfied: scikit-learn in /usr/local/lib/python2.7/dist-packages (from datacleaner) (0.19.2)
Requirement already satisfied: update-checker in /usr/local/lib/python2.7/dist-packages (from datacleaner) (0.16)
Requirement already satisfied: pandas in /usr/local/lib/python2.7/dist-packages (from datacleaner) (0.22.0)
Requirement already satisfied: requests>=2.3.0 in /usr/local/lib/python2.7/dist-packages (from update-checker->datacleaner) (2.18.4)
Requirement already satisfied: pytz>=2011k in /usr/local/lib/python2.7/dist-packages (from pandas->datacleaner) (2018.5)
Requirement already satisfied: python-dateutil in /usr/local/lib/python2.7/dist-packages (from pandas->datacleaner) (2.5.3)
Requirement already satisfied: numpy>=1.9.0 in /usr/local/lib/python2.7/dist-packages (from pandas->datacleaner) (1.14.6)
Requirement already satisfied: idna<2.7,>=2.5 in /usr/local/lib/python2.7/dist-packages (from requests>=2.3.0->update-checker->datacleaner) (2.6)
Requirement already satisfied: urllib3<1.23,>=1.21.1 in /usr/local/lib/python2.7/dist-packages (from requests>=2.3.0->update-checker->datacleaner) (1.22)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python2.7/dist-packages (from requests>=2.3.0->update-checker->datacleaner) (2018.8.24)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python2.7/dist-packages (from requests>=2.3.0->update-checker->datacleaner) (3.0.4)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python2.7/dist-packages (from python-dateutil->pandas->datacleaner) (1.11.0)
|
module4-sequence-your-narrative/Sanjay_Krishna__LS_DS_124_Sequence_your_narrative_Assignment.ipynb | ###Markdown
_Lambda School Data Science_
Sequence Your Narrative - Assignment
Today we will create a sequence of visualizations inspired by [Hans Rosling's 200 Countries, 200 Years, 4 Minutes](https://www.youtube.com/watch?v=jbkSRLYSojo).
Using this [data from Gapminder](https://github.com/open-numbers/ddf--gapminder--systema_globalis/):
- [Income Per Person (GDP Per Capita, Inflation Adjusted) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv)
- [Life Expectancy (in Years) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv)
- [Population Totals, by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv)
- [Entities](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv)
- [Concepts](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv)
Objectives
- sequence multiple visualizations
- combine qualitative anecdotes with quantitative aggregates
Links
- [Hans Rosling’s TED talks](https://www.ted.com/speakers/hans_rosling)
- [Spiralling global temperatures from 1850-2016](https://twitter.com/ed_hawkins/status/729753441459945474)
- "[The Pudding](https://pudding.cool/) explains ideas debated in culture with visual essays."
- [A Data Point Walks Into a Bar](https://lisacharlotterost.github.io/2016/12/27/datapoint-in-bar/): a thoughtful blog post about emotion and empathy in data storytelling
ASSIGNMENT
1. Replicate the Lesson Code
2. Take it further by using the same gapminder dataset to create a sequence of visualizations that, combined, tell a story of your choosing.
Get creative! Use text annotations to call out specific countries. Maybe change how the points are colored, change the opacity of the points, change their size, or pick a specific time window. Maybe only work with a subset of countries, change fonts, change background colors, etc. Make it your own!
###Code
# TODO
import seaborn as sns
sns.__version__
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
income = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv')
lifespan = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv')
population = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv')
entities = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv')
concepts = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv')
income.shape, lifespan.shape, population.shape, entities.shape, concepts.shape
income.head()
income.sample(10)
pd.options.display.max_columns=500
entities.head()
concepts.head()
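# Hedged sketch (not part of the original assignment solution): merge the three
# Gapminder tables on 'geo' and 'time' and draw one Rosling-style frame.
# The value-column names are assumed to match the DDF indicator names in the files above.
merged = (income
          .merge(lifespan, on=['geo', 'time'])
          .merge(population, on=['geo', 'time']))
year = 2015  # assumed to be present in the data
frame = merged[merged['time'] == year]
plt.scatter(frame['income_per_person_gdppercapita_ppp_inflation_adjusted'],
            frame['life_expectancy_years'],
            s=frame['population_total'] / 1e6,  # bubble size ~ population in millions
            alpha=0.5)
plt.xscale('log')
plt.xlabel('Income per person (log scale)')
plt.ylabel('Life expectancy (years)')
plt.title('Gapminder, {}'.format(year))
plt.show()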
###Output
_____no_output_____
###Markdown
STRETCH OPTIONS
1. Animate!
- [How to Create Animated Graphs in Python](https://towardsdatascience.com/how-to-create-animated-graphs-in-python-bb619cc2dec1)
- Try using [Plotly](https://plot.ly/python/animations/)!
- [The Ultimate Day of Chicago Bikeshare](https://chrisluedtke.github.io/divvy-data.html) (Lambda School Data Science student)
- [Using Phoebe for animations in Google Colab](https://colab.research.google.com/github/phoebe-project/phoebe2-docs/blob/2.1/tutorials/animations.ipynb)
2. Study for the Sprint Challenge (see the sketch in the next cell)
- Concatenate DataFrames
- Merge DataFrames
- Reshape data with `pivot_table()` and `.melt()`
- Be able to reproduce a FiveThirtyEight graph using Matplotlib or Seaborn.
3. Work on anything related to your portfolio site / Data Storytelling Project
###Code
# TODO
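# Hedged sketch for the sprint-challenge bullets above (concatenate, merge, reshape);
# the small frames here are made up purely for illustration.
df_a = pd.DataFrame({'geo': ['usa', 'chn'], 'time': [2015, 2015], 'pop': [321, 1371]})
df_b = pd.DataFrame({'geo': ['ind'], 'time': [2015], 'pop': [1311]})
stacked = pd.concat([df_a, df_b], ignore_index=True)                    # concatenate rows
wide = stacked.pivot_table(index='geo', columns='time', values='pop')   # long -> wide
tidy = wide.reset_index().melt(id_vars='geo', value_name='pop')         # wide -> long again
print(stacked, wide, tidy, sep='\n\n')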
###Output
_____no_output_____ |
RBM/misc.ipynb | ###Markdown
Nov 10 - Build and save PCA on dataset
###Code
import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA
from data_process import data_mnist, binarize_image_data, image_data_collapse
from settings import MNIST_BINARIZATION_CUTOFF
TRAINING, TESTING = data_mnist(binarize=True)
num_features = TRAINING[0][0].shape[0] ** 2
num_samples = len(TRAINING)
def PCA_on_dataset(num_samples, num_features, dataset=None, binarize=True, X=None):
def get_X(dataset):
X = np.zeros((len(dataset), num_features))
for idx, pair in enumerate(dataset):
elem_arr, elem_label = pair
preprocessed_input = image_data_collapse(elem_arr)
#if binarize:
# preprocessed_input = binarize_image_data(preprocessed_input, threshold=MNIST_BINARIZATION_CUTOFF)
features = preprocessed_input
X[idx, :] = features
return X
if X is None:
X = get_X(dataset)
pca = PCA(n_components=None, svd_solver='full')
pca.fit(X)
return pca
pca = PCA_on_dataset(num_samples, num_features, dataset=TRAINING)
pca_weights = pca.components_ # each ROW of the pca weights is like a pattern
# SAVE (transposed version)
fpath = DIR_MODELS + sep + 'pca_binarized_raw.npz'
np.savez(fpath, pca_weights=pca_weights.T)
# LOAD
with open(fpath, 'rb') as f:
pca_weights = np.load(fpath)['pca_weights']
print(pca_weights)
print(pca_weights.shape)
for idx in range(2):
plt.imshow(pca_weights[:, idx].reshape(28,28))
plt.show()
a = pca_weights.reshape(28,28,-1)
print(pca_weights.shape)
print(a.shape)
for idx in range(2):
plt.imshow(a[:, :, idx])
plt.show()
from RBM_train import load_rbm_hopfield
k_pattern = 12
fname = 'hopfield_mnist_%d0_PCA.npz' % k_pattern
rbm = load_rbm_hopfield(npzpath=DIR_MODELS + os.sep + 'saved' + os.sep + fname)
rbm_weights = rbm.internal_weights
print(rbm_weights.shape)
for idx in range(2):
plt.imshow(rbm_weights[:, idx].reshape(28,28))
plt.show()
###Output
_____no_output_____
###Markdown
Nov 13 - For each digit (so 10x) Build and save PCA on dataset
###Code
from data_process import data_dict_mnist
# data_dict has the form:
# data_dict[0] = 28 x 28 x n0 of n0 '0' samples
# data_dict[4] = 28 x 28 x n4 of n4 '4' samples
data_dict, category_counts = data_dict_mnist(TRAINING)
for idx in range(10):
X = data_dict[idx].reshape((28**2, -1)).transpose()
print(X.shape)
num_samples = X.shape[0]
num_features = X.shape[1]
pca = PCA_on_dataset(num_samples, num_features, dataset=None, binarize=True, X=X)
pca_weights = pca.components_ # each ROW of the pca weights is like a pattern
# SAVE (transposed version)
fpath = DIR_MODELS + sep + 'pca_binarized_raw_digit%d.npz' % idx
np.savez(fpath, pca_weights=pca_weights.T)
# LOAD
fpath = DIR_MODELS + sep + 'pca_binarized_raw_digit7.npz'
with open(fpath, 'rb') as f:
pca_weights = np.load(fpath)['pca_weights']
print(pca_weights)
print(pca_weights.shape)
for idx in range(2):
plt.imshow(pca_weights[:, idx].reshape(28,28))
plt.colorbar()
plt.show()
a = pca_weights.reshape(28,28,-1)
print(pca_weights.shape)
print(a.shape)
for idx in range(2):
plt.imshow(a[:, :, idx])
plt.show()
###Output
[[-8.54285010e-19 1.70207739e-18 7.25777362e-20 ... 0.00000000e+00
0.00000000e+00 -0.00000000e+00]
[-1.66533454e-16 -1.11022302e-16 -3.12250226e-17 ... 1.27313923e-02
-1.03826861e-01 3.57731681e-02]
[-2.22044605e-16 -2.77555756e-16 6.88468380e-18 ... 6.07524881e-02
7.95052795e-02 -1.44445722e-02]
...
[-0.00000000e+00 0.00000000e+00 -0.00000000e+00 ... 0.00000000e+00
0.00000000e+00 -0.00000000e+00]
[-0.00000000e+00 0.00000000e+00 -0.00000000e+00 ... 0.00000000e+00
0.00000000e+00 -0.00000000e+00]
[-0.00000000e+00 0.00000000e+00 -0.00000000e+00 ... 0.00000000e+00
0.00000000e+00 -0.00000000e+00]]
(784, 784)
###Markdown
Inspect models/poe npz files
###Code
DIR_POE = 'models' + sep + 'poe'
fpath = DIR_POE + sep + 'hopfield_digit3_p1000_1000_pca.npz'
with open(fpath, 'rb') as f:
fcontents = np.load(fpath)
print(fcontents.files)
weights = fcontents['Q']
for idx in range(3):
plt.imshow(weights[:, idx].reshape(28, 28))
plt.show()
###Output
['Q', 'proj_remainder', 'pattern_labels', 'xi_image']
###Markdown
Nov 15 - Distribution of images in the dataset
###Code
from data_process import data_dict_mnist
from RBM_train import load_rbm_hopfield
# data_dict has the form:
# data_dict[0] = 28 x 28 x n0 of n0 '0' samples
# data_dict[4] = 28 x 28 x n4 of n4 '4' samples
data_dict, category_counts = data_dict_mnist(TRAINING)
# load 10 target patterns
k_choice = 1
p = k_choice * 10
N = 28**2
fname = 'hopfield_mnist_%d0%s.npz' % (k_choice, '_hebbian')
rbm_hebbian = load_rbm_hopfield(npzpath='models' + sep + 'saved' + sep + fname)
weights_hebbian = rbm_hebbian.internal_weights
XI = weights_hebbian
PATTERNS = weights_hebbian * np.sqrt(N)
for mu in range(p):
pattern_mu = PATTERNS[:, mu] * np.sqrt(N)
#plt.figure(figsize=(2,12))
plt.imshow(pattern_mu.reshape(28,28), interpolation='None')
plt.colorbar()
plt.show()
# PLAN: for each class
# 1) look at all data from the class
# 2) look at hamming distance of each digit from the class (max 28^2=784)
# 3) ...
def hamming_distance(x, y):
prod = x * y
dist = 0.5 * np.sum(1 - prod)
return dist
def build_Jij(patterns):
# TODO remove self-interactions? no RBM has them
A = np.dot(patterns.T, patterns)
A_inv = np.linalg.inv(A)
Jij = np.dot(patterns,
np.dot(A_inv, patterns.T))
return Jij
J_INTXN = build_Jij(PATTERNS)
def energy_fn(state_vector):
scaled_energy = -0.5 * np.dot(state_vector,
np.dot(J_INTXN, state_vector))
return scaled_energy
def get_hamming_histogram(X, target_state):
# given matrix of states, compute distance to the target state for all
# plot the histogram
if len(X.shape) == 3:
assert X.shape[0] == 28 and X.shape[1] == 28
X = X.reshape(28**2, -1)
num_pts = X.shape[-1]
dists = np.zeros(num_pts)
for idx in range(num_pts):
dists[idx] = hamming_distance(X[:, idx], target_state)
return dists
def get_energy_histogram(X, target_state):
# given matrix of states, compute distance to the target state for all
# plot the histogram
if len(X.shape) == 3:
assert X.shape[0] == 28 and X.shape[1] == 28
X = X.reshape(28**2, -1)
num_pts = X.shape[-1]
energies = np.zeros(num_pts)
for idx in range(num_pts):
energies[idx] = energy_fn(X[:, idx])
return energies
###Output
_____no_output_____
###Markdown
Gather data
###Code
#for idx in range(10):
list_of_dists = []
list_of_energies = []
BETA = 2.0
# pick get_hamming_histogram OR get_energy_histogram
hist_fn = get_energy_histogram
for mu in range(p):
target_state = PATTERNS[:, mu]
dists_mu = get_hamming_histogram(data_dict[mu], target_state)
print('class %d: min dist = %d, max dist = %d' % (mu, min(dists_mu), max(dists_mu)))
list_of_dists.append(dists_mu)
energies_mu = get_energy_histogram(data_dict[mu], target_state)
list_of_energies.append(energies_mu)
#boltz_mu = np.exp(- BETA * energies_mu)
#list_of_boltz.append(boltz_mu)
###Output
class 0: min dist = 29, max dist = 207
class 1: min dist = 10, max dist = 174
class 2: min dist = 37, max dist = 201
class 3: min dist = 33, max dist = 202
class 4: min dist = 36, max dist = 221
class 5: min dist = 44, max dist = 217
class 6: min dist = 15, max dist = 229
class 7: min dist = 24, max dist = 212
class 8: min dist = 21, max dist = 224
class 9: min dist = 29, max dist = 234
###Markdown
Plot data
###Code
outdir = 'output' + sep + 'ICLR_nb'
D_RANGE = (0,250)
E_RANGE = (-0.5 * 28**2, -100)
#B_RANGE =
for mu in range(p):
dists_by_mu = list_of_dists[mu]
energies_by_mu = list_of_energies[mu]
#boltz_by_mu = list_of_boltz[mu]
n, bins, _ = plt.hist(dists_by_mu, range=D_RANGE, bins=50, density=True)
plt.title('hamming distances (pattern: %d)' % mu)
plt.savefig(outdir + sep + 'dist_hist_mu%d.jpg' % mu)
plt.close()
plt.hist(energies_by_mu, range=E_RANGE, bins=50, density=True)
plt.title('energies (pattern: %d)' % mu)
plt.axvline(x=-0.5 * 28**2)
plt.savefig(outdir + sep + 'energy_hist_mu%d.jpg' % mu)
plt.close()
    # NOTE: `rescaler` is not defined in this notebook; it is assumed to be a per-distance
    # weighting function defined elsewhere (the Nov 16 cells below build the volume-based version).
    scale_factors = np.array([rescaler(i) for i in bins[:-1]])
counts, bins = np.histogram(dists_by_mu, bins=50)
plt.hist(bins[:-1], bins, weights=scale_factors * counts)
#plt.hist(dists_by_mu, range=E_RANGE, bins=bins, density=True, weights=scale_factors)
plt.title('scaled dists (pattern: %d)' % mu)
#plt.axvline(x=-0.5 * 28**2)
plt.savefig(outdir + sep + 'scaled_dists_hist_mu%d.jpg' % mu)
plt.close()
#plt.hist(boltz_by_mu, bins=50, density=True)
#plt.title('boltzmann weights (pattern: %d)' % mu)
#plt.savefig(outdir + sep + 'boltz_hist_mu%d.jpg' % mu)
#plt.close()
idx_a = 7
idx_b = 12
print(data_dict[1].shape)
x1 = data_dict[1][:,:,idx_a].reshape(28**2)
x2 = data_dict[1][:,:,idx_b].reshape(28**2)
plt.imshow(data_dict[1][:,:,idx_a])
plt.show()
plt.imshow(data_dict[1][:,:,idx_b])
plt.show()
hd = np.sum(1 - x1 * x2) * 0.5
print(hd)
###Output
(28, 28, 6742)
###Markdown
(Nov 16, 2020) Distance distribution rescaled by hamming shell area
Want to rescale the observed data distance distribution by the 'volume' of states in $2^N$ space.
The number of states a distance $d$ away from a given state is: $\displaystyle n(d) = {N \choose d}$
e.g. $n(0)=1, n(1)=N=784, n(2)=N(N-1)/2 = 306936, \ldots, n(N)=1$.
Note that $n!=\Gamma(n+1)$.
The (uniform) probability to be a distance $d$ away is then: $p(d) = 2^{-N} n(d)$
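In log form (spelled out here for clarity; this is exactly what `log_uniform_dist_prob` in the next cell evaluates via `loggamma`): $\log p(d) = -N\log 2 + \log\Gamma(N+1) - \log\Gamma(N-d+1) - \log\Gamma(d+1)$.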
###Code
N0 = 28**2
from scipy.special import gamma, loggamma
def N_choose_k(N, k):
num = gamma(N+1)
den = gamma(N - k + 1) * gamma(k+1)
return num / den
def log_uniform_dist_prob(d, N=N0):
scale = -N *np.log(2)
num = loggamma(N+1)
den = loggamma(N - d + 1) + loggamma(d+1)
return scale + num - den
print('Example probabilities (log, direct):')
p0 = log_uniform_dist_prob(0, N=784)
p1 = log_uniform_dist_prob(1, N=784)
p2 = log_uniform_dist_prob(2, N=784)
print('d=0', p0, np.exp(p0))
print('d=1', p1, np.exp(p1))
print('d=2', p2, np.exp(p2))
d_arr = np.arange(N+1)
p_arr = log_uniform_dist_prob(d_arr)
plt.plot(d_arr, p_arr)
plt.xlabel(r'$\textrm{distance}$')
plt.ylabel(r'$\log p(d)$')
d_arr = np.arange(N+1)
logp_arr = log_uniform_dist_prob(d_arr)
plt.plot(d_arr, logp_arr)
plt.xlabel(r'$\textrm{distance}$')
plt.ylabel(r'$\log p(d)$')
plt.show(); plt.close()
d_arr = np.arange(0,N+1)
p_arr = np.exp(log_uniform_dist_prob(d_arr))
plt.plot(d_arr, p_arr)
plt.xlabel(r'$\textrm{distance}$')
plt.ylabel(r'$p(d)$')
plt.show(); plt.close()
###Output
Example probabilities (log, direct):
d=0 -543.4273895589972 9.828413039545451e-237
d=1 -536.7629805386473 7.705475822999888e-234
d=2 -530.792995023216 3.01669378470576e-231
###Markdown
We observe some data distribution $p_{data}^{(\mu)}(d)=$ "probability that sample $\mu$-images are a distance $d$ from pattern $\mu$". Here is the histogram of $p_{data}^{(\mu)}(d)$ for $\mu=1$.
###Code
for mu in [1]:
dists_by_mu = list_of_dists[mu]
energies_by_mu = list_of_energies[mu]
#boltz_by_mu = list_of_boltz[mu]
n, bins, _ = plt.hist(dists_by_mu, range=(0,250), bins=50, density=True)
plt.title('hamming distances (pattern: %d)' % mu)
plt.show(); plt.close()
###Output
_____no_output_____
###Markdown
Consider rescaling this observed distribution by $p(d)$, the unbiased probability of being a distance $d$ away.
Define $g_{data}^{(\mu)}(d) \equiv p_{data}^{(\mu)}(d) / p(d)$.
For re-weighting the binning, we need to sum over all the distances in each bin. For example, suppose bin $i$ represents distances $0, 1, 2$. The "volume" of states is then: $v(b_i) = n(0) + n(1) + n(2)$.
Then for the corresponding bin $b_i$, $g_{data}^{(\mu)}(b_i) \equiv 2^N p_{data}^{(\mu)}(b_i) / (n(0) + n(1) + n(2))$,
and its $\log$ is $\log g_{data}^{(\mu)}(b_i) = N \log 2 + \log p_{data}^{(\mu)}(b_i) - \log (n(0) + n(1) + n(2))$.
The final term is the $\log$ of a partial sum of binomial coefficients, which has no closed form.
Indirectly discussed on p102 of https://www.math.upenn.edu/~wilf/AeqB.pdf. Bottom of p160 is our quantity of interest.
Relevant links
- https://mathoverflow.net/questions/17202/sum-of-the-first-k-binomial-coefficients-for-fixed-n
- https://math.stackexchange.com/questions/103280/asymptotics-for-a-partial-sum-of-binomial-coefficients
- https://mathoverflow.net/questions/261428/approximation-of-sum-of-the-first-binomial-coefficients-for-fixed-n?noredirect=1&lq=1
I will try the upper and lower bounds from the last link.
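As a quick numerical sanity check of that bound, here is a hedged sketch (not part of the original analysis; `scipy.special.comb` is used only for this comparison): compare the exact partial sum $\sum_{k\le r}\binom{N}{k}$ with the entropy upper bound $2^{N H(r/N)}$ for a small $N$.
###Code
# Hedged sketch: exact partial sums of binomial coefficients vs the entropy-based
# upper bound 2**(N*H(r/N)) for a small N (the bound is valid for r <= N/2).
from scipy.special import comb
N_small = 20
def H2(x):
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)
for r in [1, 3, 5, 8, 10]:
    exact = sum(comb(N_small, k, exact=True) for k in range(r + 1))
    bound = 2 ** (N_small * H2(r / N_small))
    print(r, exact, bound, exact <= bound)
###Output
_____no_output_____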
###Code
outdir = 'output' + sep + 'ICLR_nb'
D_RANGE = (0,250)
E_RANGE = (-0.5 * 28**2, -100)
def build_bins(dmin, dmax, nn):
# return nn+1 bin edges, for the nn bins
gap = (dmax - dmin) / float(nn)
return np.arange(dmin, dmax + 1e-5, gap)
def H_fn(x):
return -x * np.log2(x) - (1-x) * np.log2(1-x)
def log_volume_per_bin(bins, upper=True):
# NOTE: bounds only works for d <= N/2
# assumes the right edge of each bin is inclusive (i.e. [0, 10] means [0, 10+eps])
# TODO care for logs, at initial writing it is NOT log
nn = len(bins) - 1
def upper_bound(r):
# partial sum of binomial coefficients (N choose k) form k=0 to k=r
# see https://mathoverflow.net/questions/261428/approximation-of-sum-of-the-first-binomial-coefficients-for-fixed-n?noredirect=1&lq=1
# see also Michael Lugo: https://mathoverflow.net/questions/17202/sum-of-the-first-k-binomial-coefficients-for-fixed-n
x = r / float(N0)
return 2 ** (N0 * H_fn(x))
def lower_bound(r):
num = upper_bound(r)
den = np.sqrt(8 * r * (1 - float(r) / N0))
return num/den
if upper:
bound = upper_bound
else:
bound = lower_bound
approx_cumsum = np.zeros(nn)
approx_vol_per_bin = np.zeros(nn)
for idx in range(nn):
bin_left = int(bins[idx])
bin_right = int(bins[idx + 1])
approx_cumsum[idx] = bound(bin_right)
approx_vol_per_bin[0] = approx_cumsum[0]
for idx in range(1, nn):
approx_vol_per_bin[idx] = approx_cumsum[idx] - approx_cumsum[idx-1]
log_approx_volume_per_bin = np.log(approx_vol_per_bin)
return log_approx_volume_per_bin
NUM_BINS = 25
assert (D_RANGE[1] - D_RANGE[0]) % NUM_BINS == 0
GAP = (D_RANGE[1] - D_RANGE[0]) / NUM_BINS
BINS = build_bins(D_RANGE[0], D_RANGE[1], NUM_BINS)
BINS_MIDPTS = [0.5*(BINS[idx + 1] + BINS[idx]) for idx in range(NUM_BINS)]
print(BINS)
# scale the normed counts by the distance
approxUpper_log_vol_per_bin_arr = log_volume_per_bin(BINS, upper=True)
approxLower_log_vol_per_bin_arr = log_volume_per_bin(BINS, upper=False)
for mu in range(p):
counts, bins = np.histogram(list_of_dists[mu], bins=BINS)
# Plot p_data(d)
normed_counts, _, _ = plt.hist(bins[:-1], bins, weights=counts, density=True)
plt.title(r'$p_{data}(d)$ (pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'normed_dists_hist_mu%d.jpg' % mu)
plt.close()
# Plot log p_data(d)
log_normed_counts, _, _ = plt.hist(bins[:-1], bins, weights=counts, density=True, log=True)
plt.title(r'$\log p_{data}(d)$ (pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'log_normed_dists_hist_mu%d.jpg' % mu)
plt.close()
#scaled_log_normed_counts = ...
print(len(normed_counts), normed_counts.shape)
print(len(bins[:-1]), bins.shape)
scaled_appxUpper_log_normed_counts = N0 * np.log(2) + np.log(normed_counts) - approxUpper_log_vol_per_bin_arr
scaled_appxLower_log_normed_counts = N0 * np.log(2) + np.log(normed_counts) - approxLower_log_vol_per_bin_arr
# Plot log g_data(d), upper and lower bound versions
plt.bar(BINS_MIDPTS, scaled_appxUpper_log_normed_counts, color='#2A63B1', width=GAP)
plt.title(r'$\log g_{data}(d)$ (upper; pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'scaled_appxUpper_log_normed_dists_hist_mu%d.jpg' % mu)
plt.close()
plt.bar(BINS_MIDPTS, scaled_appxLower_log_normed_counts, color='#2A63B1', width=GAP)
plt.title(r'$\log g_{data}(d)$ (lower; pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'scaled_appxLower_log_normed_dists_hist_mu%d.jpg' % mu)
plt.close()
###Output
[ 0. 10. 20. 30. 40. 50. 60. 70. 80. 90. 100. 110. 120. 130.
140. 150. 160. 170. 180. 190. 200. 210. 220. 230. 240. 250.]
25 (25,)
25 (26,)
###Markdown
troubleshooting NaN: do the whole N range of 0 to 784 split into 28size binsHave function which computes for each distance the average volume / log volume (more stable). Can bin later.
###Code
N = 28**2
D_RANGE = (0,N)
NUM_BINS = N + 1 # 28
if NUM_BINS == N+1:
GAP = 1
BINS = np.arange(NUM_BINS)
BINS_MIDPTS = [0.5*(BINS[idx + 1] + BINS[idx]) for idx in range(NUM_BINS-1)]
else:
assert (D_RANGE[1] - D_RANGE[0]) % NUM_BINS == 0
GAP = (D_RANGE[1] - D_RANGE[0]) / NUM_BINS
BINS = build_bins(D_RANGE[0], D_RANGE[1], NUM_BINS)
BINS_MIDPTS = [0.5*(BINS[idx + 1] + BINS[idx]) for idx in range(NUM_BINS)]
#print(BINS)
def log_volume_per_dist(dists, upper=True):
assert len(dists) == N + 1
nn = len(dists)
def upper_bound(r):
x = r / float(N0)
return 2 ** (N0 * H_fn(x))
def lower_bound(r):
num = upper_bound(r)
den = np.sqrt(8 * r * (1 - float(r) / N0))
return num/den
if upper:
bound = upper_bound
else:
bound = lower_bound
approx_cumsum = np.zeros(nn)
approx_vol_per_dist = np.zeros(nn)
# know first/last value is "1"
approx_cumsum[0] = 1
approx_cumsum[N] = 2**N
# use symmetry wrt midpoint N/2.0
assert N % 2 == 0
midpt = int(N / 2)
for idx in range(1, midpt + 1):
idx_reflect = N - idx
approx_cumsum[idx] = bound(idx)
approx_cumsum[idx_reflect] = approx_cumsum[N] - approx_cumsum[idx]
#print('loop1_cumsum', idx, approx_cumsum[idx], idx_reflect, approx_cumsum[idx_reflect])
approx_vol_per_dist[0] = approx_cumsum[0]
approx_vol_per_dist[N] = 1.0
for idx in range(1, midpt + 1):
idx_reflect = N - idx
approx_vol_per_dist[idx] = approx_cumsum[idx] - approx_cumsum[idx-1]
approx_vol_per_dist[idx_reflect] = approx_vol_per_dist[idx]
#print('log_volume_per_dist', idx, idx_reflect, approx_vol_per_dist[idx], approx_vol_per_dist[idx])
log_approx_volume_per_dist = np.log(approx_vol_per_dist)
#print('log_approx_volume_per_dist')
#print(log_approx_volume_per_dist)
return log_approx_volume_per_dist, approx_vol_per_dist, approx_cumsum
def log_volume_per_bin(bins):
nn = len(bins) - 1
def upper_bound(r):
x = r / float(N0)
return 2 ** (N0 * H_fn(x))
bound = upper_bound
approx_cumsum = np.zeros(nn)
approx_vol_per_bin = np.zeros(nn)
for idx in range(nn):
bin_left = int(bins[idx])
bin_right = int(bins[idx + 1])
approx_cumsum[idx] = bound(bin_right)
print('loop1_cumsum', idx, bin_right, approx_cumsum[idx])
approx_vol_per_bin[0] = approx_cumsum[0]
for idx in range(1, nn):
approx_vol_per_bin[idx] = approx_cumsum[idx] - approx_cumsum[idx-1]
print('log_volume_per_bin', idx, approx_vol_per_bin[idx], approx_cumsum[idx])
log_approx_volume_per_bin = np.log(approx_vol_per_bin)
print('log_approx_volume_per_bin')
print(log_approx_volume_per_bin)
return log_approx_volume_per_bin
# scale the normed counts by the distance
if NUM_BINS == N + 1:
approxUpper_log_vol_per_bin_arr, approx_vol_per_dist, approx_cumsum = log_volume_per_dist(BINS, upper=True)
approxLower_log_vol_per_bin_arr, approx_vol_per_dist, approx_cumsum = log_volume_per_dist(BINS, upper=False)
else:
approxUpper_log_vol_per_bin_arr - log_volume_per_bin(BINS)
for mu in [1]:
counts, bins = np.histogram(list_of_dists[mu], bins=np.arange(N+2)) # len error extend bins
print('len(counts), len(bins)')
print(len(counts), len(bins))
normed_counts, _, _ = plt.hist(bins[:-1], bins, weights=counts, density=True)
plt.close()
# Plot p_data(d)
plt.bar(BINS, normed_counts, color='red', width=0.8, linewidth=0)
plt.title(r'$p_{data}(d)$ (pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'TEST_p_dists_hist_mu%d.pdf' % mu)
plt.close()
# Plot log p_data(d)
plt.bar(BINS, np.log(normed_counts), color='red', width=0.8, linewidth=0)
plt.title(r'$\log p_{data}(d)$ (pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'TEST_logp_dists_hist_mu%d.pdf' % mu)
plt.close()
#scaled_log_normed_counts = ...
print(len(normed_counts), normed_counts.shape)
print(len(bins[:-1]), bins.shape)
scaled_appxUpper_log_normed_counts = N0 * np.log(2) + np.log(normed_counts) - approxUpper_log_vol_per_bin_arr
scaled_appxLower_log_normed_counts = N0 * np.log(2) + np.log(normed_counts) - approxLower_log_vol_per_bin_arr
print('bins_right')
print(bins[1:])
print('normed_counts')
print(normed_counts)
print('approxUpper_log_vol_per_bin_arr')
print(approxUpper_log_vol_per_bin_arr)
print('scaled_appxUpper_log_normed_counts')
print(scaled_appxUpper_log_normed_counts)
# Plot log g_data(d), upper and lower bound versions
plt.bar(BINS, scaled_appxUpper_log_normed_counts, color='#2A63B1', width=0.8, linewidth=0)
plt.title(r'$\log g_{data}(d)$ (upper; pattern: %d)' % mu)
#plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlim(0,100)
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'TEST_scaled_appxUpper_log_normed_dists_hist_mu%d.pdf' % mu)
plt.close()
plt.bar(BINS, scaled_appxLower_log_normed_counts, color='#2A63B1', width=0.8, linewidth=0)
plt.title(r'$\log g_{data}(d)$ (upper; pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'TEST_scaled_appxLower_log_normed_dists_hist_mu%d.pdf' % mu)
plt.close()
plt.bar(BINS, np.exp(scaled_appxLower_log_normed_counts), color='green', width=0.8, linewidth=0)
plt.title(r'$\log g_{data}(d)$ (upper; pattern: %d)' % mu)
plt.xlim(D_RANGE[0], D_RANGE[1])
plt.xlabel(r'$\textrm{Hamming distance, d}$')
plt.savefig(outdir + sep + 'TEST_scaled_appxLower_normed_dists_hist_mu%d.pdf' % mu)
plt.close()
plt.plot(range(784+1), approxUpper_log_vol_per_bin_arr - N*np.log(2))
plt.show(); plt.close()
plt.plot(range(784+1), N*np.log(2) - approxUpper_log_vol_per_bin_arr)
plt.show(); plt.close()
print(counts)
a = np.array([[1,3,4],[1,3,4]])
scales = np.array([2,1,2])
print (a / scales)
print(scales > 1)
###Output
[ True False True]
|
src/code_dump/diva_pytoarch_vers.ipynb | ###Markdown
PyTorch 1.2 Quickstart with Google ColabIn this code tutorial we will learn how to quickly train a model to understand some of PyTorch's basic building blocks to train a deep learning model. This notebook is inspired by the ["Tensorflow 2.0 Quickstart for experts"](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/advanced.ipynbscrollTo=DUNzJc4jTj6G) notebook. After completion of this tutorial, you should be able to import data, transform it, and efficiently feed the data in batches to a convolution neural network (CNN) model for image classification.**Author:** [Elvis Saravia](https://twitter.com/omarsar0)**Complete Code Walkthrough:** [Blog post](https://medium.com/dair-ai/pytorch-1-2-quickstart-with-google-colab-6690a30c38d)
###Code
!pip3 install torch==1.2.0+cu92 torchvision==0.4.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
###Output
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.2.0+cu92
Downloading https://download.pytorch.org/whl/cu92/torch-1.2.0%2Bcu92-cp37-cp37m-manylinux1_x86_64.whl (663.1 MB)
[K |████████████████████████████████| 663.1 MB 1.7 kB/s
[?25hCollecting torchvision==0.4.0+cu92
Downloading https://download.pytorch.org/whl/cu92/torchvision-0.4.0%2Bcu92-cp37-cp37m-manylinux1_x86_64.whl (8.8 MB)
[K |████████████████████████████████| 8.8 MB 44.9 MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from torch==1.2.0+cu92) (1.19.5)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from torchvision==0.4.0+cu92) (1.15.0)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.7/dist-packages (from torchvision==0.4.0+cu92) (7.1.2)
Installing collected packages: torch, torchvision
Attempting uninstall: torch
Found existing installation: torch 1.9.0+cu111
Uninstalling torch-1.9.0+cu111:
Successfully uninstalled torch-1.9.0+cu111
Attempting uninstall: torchvision
Found existing installation: torchvision 0.10.0+cu111
Uninstalling torchvision-0.10.0+cu111:
Successfully uninstalled torchvision-0.10.0+cu111
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
torchtext 0.10.0 requires torch==1.9.0, but you have torch 1.2.0+cu92 which is incompatible.[0m
Successfully installed torch-1.2.0+cu92 torchvision-0.4.0+cu92
###Markdown
Note: We will be using the latest stable version of PyTorch, so be sure to run the command above to install it; at the time of writing this tutorial that was 1.2.0. We import PyTorch below via the `torch` module.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.transforms as transforms
print(torch.__version__)
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Import The Data
The first step before training the model is to import the data. We will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) which is like the Hello World dataset of machine learning. Besides importing the data, we will also do a few more things:
- We will transform the data into tensors using the `transforms` module
- We will use `DataLoader` to build convenient data loaders, or what are referred to as iterators, which make it easy to efficiently feed data in batches to deep learning models.
- As hinted above, we will also create batches of the data by setting the `batch` parameter inside the data loader. Notice we use batches of `32` in this tutorial but you can change it to `64` if you like. I encourage you to experiment with different batch sizes.
###Code
#Configure data correctly
#make rgb images and segmentation images share a file name
#get image in the right format
import os
from numpy import asarray
import numpy as np
from PIL import Image
def remove_chars(filename):
new_filename = ""
for i in range(0, len(filename)):
char = filename[i]
if char.isnumeric():
new_filename += char
return new_filename + ".png"
def config_data(rgb_img, seg_img):
i = 0
for filename in os.listdir(rgb_img):
old_path = os.path.join(rgb_img, filename)
if not os.path.isdir(old_path):
i += 1
new_filename = remove_chars(filename)
os.rename(os.path.join(rgb_img, filename), os.path.join(rgb_img, new_filename))
i = 0
for filename in os.listdir(seg_img):
old_path = os.path.join(seg_img, filename)
if not os.path.isdir(old_path):
i += 1
new_filename = remove_chars(filename)
new_name = os.path.join(seg_img,new_filename)
os.rename(os.path.join(seg_img, filename), os.path.join(seg_img, new_name))
#Normalize image to fit model system: https://machinelearningmastery.com/how-to-manually-scale-image-pixel-data-for-deep-learning/
image = Image.open(new_name)
pixels = asarray(image)
# convert from integers to floats
pixels = pixels.astype('float32')
# normalize to the range 0-1 then to 9
pixels /= 255.0
pixels *= 9.0
im = Image.fromarray(pixels.astype(np.uint8))
im.save(os.path.join(seg_img, new_name))
PATH = '/content/drive/myDrive/Highway_Dataset'
rgb_img = '/content/drive/MyDrive/Highway_Dataset/Train/TrainSeq04/image'
seg_img = '/content/drive/MyDrive/Highway_Dataset/Train/TrainSeq04/label'
config_data(rgb_img, seg_img)
rgb_img_t = '/content/drive/MyDrive/Highway_Dataset/Test/TestSeq04/image'
seg_img_t = '/content/drive/MyDrive/Highway_Dataset/Test/TestSeq04/label'
config_data(rgb_img_t, seg_img_t)
from torch.utils.data import Dataset
from natsort import natsorted
class CustomDataSet(Dataset):
def __init__(self, main_dir, label_dir, transform):
self.main_dir = main_dir
self.label_dir = label_dir
self.transform = transform
all_imgs = os.listdir(main_dir)
all_segs = os.listdir(main_dir)
self.total_imgs = natsorted(all_imgs)
self.total_segs = natsorted(all_segs)
def __len__(self):
return len(self.total_imgs)
def __getitem__(self, idx):
img_loc = os.path.join(self.main_dir, self.total_imgs[idx])
image = Image.open(img_loc).convert("RGB")
tensor_image = self.transform(image)
seg_loc = os.path.join(self.label_dir, self.total_segs[idx])
labeled_image = Image.open(seg_loc).convert("RGB")
transform = transforms.Compose([transforms.Resize((12,12)),
transforms.ToTensor()])
labeled_image = transform(labeled_image)
labeled_image = labeled_image.float()
return tensor_image, labeled_image
BATCH_SIZE = 32
## transformations
transform = transforms.Compose([transforms.Resize((28,28)),
transforms.ToTensor()])
rgb_img = '/content/drive/MyDrive/Highway_Dataset/Train/TrainSeq04/image'
seg_img = '/content/drive/MyDrive/Highway_Dataset/Train/TrainSeq04/label'
## download and load training dataset
imagenet_data = CustomDataSet(rgb_img, seg_img, transform=transform)
trainloader = torch.utils.data.DataLoader(imagenet_data,
batch_size=BATCH_SIZE,
shuffle=True,
num_workers=2)
rgb_img_t = '/content/drive/MyDrive/Highway_Dataset/Test/TestSeq04/image'
seg_img_t = '/content/drive/MyDrive/Highway_Dataset/Test/TestSeq04/label'
## download and load training dataset
imagenet_data_test = CustomDataSet(rgb_img_t, seg_img_t, transform=transform)
testloader = torch.utils.data.DataLoader(imagenet_data_test,
batch_size=BATCH_SIZE,
shuffle=False,
num_workers=2)
#trainset = torchvision.datasets.MNIST(root='./data', train=True,
# download=True, transform=transform)
#trainloader = torch.utils.data.DataLoader(trainset, batch_size=BATCH_SIZE,
# shuffle=True, num_workers=2)
## download and load testing dataset
#testset = torchvision.datasets.MNIST(root='./data', train=False,
# download=True, transform=transform)
#testloader = torch.utils.data.DataLoader(testset, batch_size=BATCH_SIZE,
#shuffle=False, num_workers=2)
###Output
_____no_output_____
###Markdown
Exploring the Data
As a practitioner and researcher, I always spend a bit of time and effort exploring and understanding the dataset. It's fun and it is good practice to ensure that everything is in order. Let's check what the train and test datasets contain. I will use `matplotlib` to print out some of the images from our dataset.
###Code
import matplotlib.pyplot as plt
import numpy as np
## functions to show an image
def imshow(img):
#img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
## get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
## show images
#imshow(torchvision.utils.make_grid(images))
###Output
_____no_output_____
###Markdown
**EXERCISE:** Try to understand what the code above is doing. This will help you to better understand your dataset before moving forward. Let's check the dimensions of a batch.
###Code
for images, labels in trainloader:
print("Image batch dimensions:", images.shape)
print("Image label dimensions:", labels.shape)
break
###Output
Image batch dimensions: torch.Size([32, 3, 28, 28])
Image label dimensions: torch.Size([32, 3, 12, 12])
###Markdown
The Model
Now, using the classical deep learning framework pipeline, let's build the 1 convolutional layer model. Here are a few notes for those who are beginning with PyTorch:
- The model below consists of an `__init__()` portion which is where you include the layers and components of the neural network. In our model, we have a convolutional layer denoted by `nn.Conv2d(...)`. We are dealing with an image dataset that is in grayscale so we only need one channel going in, hence `in_channels=1`. We hope to get a nice representation of this layer, so we use `out_channels=32`. Kernel size is 3, and for the rest of the parameters we use the default values which you can find [here](https://pytorch.org/docs/stable/nn.html?highlight=conv2dconv2d).
- We use 2 back to back dense layers, or what we refer to as linear transformations of the incoming data. Notice for `d1` I have a dimension which looks like it came out of nowhere. 128 represents the size we want as output and the (`26*26*32`) represents the dimension of the incoming data. If you would like to find out how to calculate those numbers refer to the [PyTorch documentation](https://pytorch.org/docs/stable/nn.html?highlight=linearconv2d). In short, the convolutional layer transforms the input data into a specific dimension that has to be considered in the linear layer. The same applies for the second linear transformation (`d2`) where the dimension of the output of the previous linear layer was added as `in_features=128`, and `10` is just the size of the output, which also corresponds to the number of classes.
- After each one of those layers, we also apply an activation function such as `ReLU`. For prediction purposes, we then apply a `softmax` layer to the last transformation and return the output of that.
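For reference, here is a minimal sketch of the architecture described above (the original single-channel MNIST-style classifier); note that the modified model in the next cell departs from it for this notebook's segmentation-style data.
###Code
# Hedged sketch of the CNN described in the notes above: 1 input channel -> 32 feature maps,
# flatten, then 26*26*32 -> 128 -> 10 with ReLU activations and a final softmax.
class DescribedModel(nn.Module):
    def __init__(self):
        super(DescribedModel, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=1, out_channels=32, kernel_size=3)
        self.d1 = nn.Linear(26 * 26 * 32, 128)
        self.d2 = nn.Linear(128, 10)
    def forward(self, x):
        x = F.relu(self.conv1(x))    # 32x1x28x28 => 32x32x26x26
        x = x.flatten(start_dim=1)   # => 32 x (32*26*26)
        x = F.relu(self.d1(x))       # => 32x128
        logits = self.d2(x)          # => 32x10
        return F.softmax(logits, dim=1)
###Output
_____no_output_____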
###Code
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
# 28x28x1 => 26x26x32
self.conv1 = nn.Conv2d(in_channels=3, out_channels=3, kernel_size=3)
self.drop1 = nn.Dropout(0.2)
self.pool1 = nn.MaxPool2d((2,2))
def forward(self, x):
# 32x1x28x28 => 32x32x26x26
x = self.conv1(x)
x = self.drop1(x)
x = self.conv1(x)
x = self.pool1(x)
x = F.relu(x)
# flatten => 32 x (32*26*26)
#x = x.flatten(start_dim = 1)
#32 x (32*26*26) => 32x128
#x = self.d1(x)
#x = F.relu(x)
#x = x.reshape([32, 3, 28, 28])
#x = self.d2(x)
#x = F.relu(x)
# logits => 32x10
logits = x
out = F.softmax(logits, dim=1)
return out
###Output
_____no_output_____
###Markdown
As I have done in my previous tutorials, I always encourage testing the model with 1 batch to ensure that the output dimensions are what we expect.
###Code
## test the model with 1 batch
model = MyModel()
for images, labels in trainloader:
print("batch size:", images.shape)
out = model(images)
print(out.shape, labels.shape)
break
###Output
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7f959b15b3b0>
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 926, in __del__
self._shutdown_workers()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 906, in _shutdown_workers
w.join()
File "/usr/lib/python3.7/multiprocessing/process.py", line 138, in join
assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7f959b15b3b0>
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 926, in __del__
self._shutdown_workers()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 906, in _shutdown_workers
w.join()
File "/usr/lib/python3.7/multiprocessing/process.py", line 138, in join
assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7f959b15b3b0>
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 926, in __del__
self._shutdown_workers()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 906, in _shutdown_workers
w.join()
File "/usr/lib/python3.7/multiprocessing/process.py", line 138, in join
assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7f959b15b3b0>
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 926, in __del__
self._shutdown_workers()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 906, in _shutdown_workers
w.join()
File "/usr/lib/python3.7/multiprocessing/process.py", line 138, in join
assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7f959b15b3b0>
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 926, in __del__
self._shutdown_workers()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 906, in _shutdown_workers
w.join()
File "/usr/lib/python3.7/multiprocessing/process.py", line 138, in join
assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process
Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7f959b15b3b0>
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 926, in __del__
self._shutdown_workers()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 906, in _shutdown_workers
w.join()
File "/usr/lib/python3.7/multiprocessing/process.py", line 138, in join
assert self._parent_pid == os.getpid(), 'can only join a child process'
AssertionError: can only join a child process
###Markdown
Training the Model
Now we are ready to train the model, but before that we will set up a loss function, an optimizer and a function to compute the accuracy of the model.
###Code
learning_rate = 0.001
num_epochs = 10
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = MyModel()
model = model.to(device)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
## compute accuracy
def compute_accuracy(hypothesis, Y_target):
Y_prediction = hypothesis.data.max(dim=1)[1]
accuracy = torch.mean((Y_prediction.data == Y_target.data.max(dim=1)[1].data).float() )
return accuracy.item()
###Output
_____no_output_____
###Markdown
Now it's time for training.
###Code
for epoch in range(num_epochs):
train_running_loss = 0.0
train_acc = 0.0
model = model.train()
## training step
for i, (images, labels) in enumerate(trainloader):
images = images.to(device)
labels = labels.to(device)
## forward + backprop + loss
logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
## update model params
optimizer.step()
train_running_loss += loss.detach().item()
train_acc += compute_accuracy(logits, labels)
model.eval()
print('Epoch: %d | Loss: %.4f | Train Accuracy: %.2f' \
%(epoch, train_running_loss / i, train_acc/i))
###Output
Epoch: 0 | Loss: 0.1214 | Train Accuracy: 0.98
Epoch: 1 | Loss: 0.1213 | Train Accuracy: 0.95
Epoch: 2 | Loss: 0.1213 | Train Accuracy: 0.92
Epoch: 3 | Loss: 0.1213 | Train Accuracy: 0.92
Epoch: 4 | Loss: 0.1211 | Train Accuracy: 0.93
Epoch: 5 | Loss: 0.1213 | Train Accuracy: 0.95
Epoch: 6 | Loss: 0.1214 | Train Accuracy: 0.96
Epoch: 7 | Loss: 0.1212 | Train Accuracy: 0.96
Epoch: 8 | Loss: 0.1213 | Train Accuracy: 0.97
Epoch: 9 | Loss: 0.1212 | Train Accuracy: 0.97
###Markdown
We can also compute accuracy on the testing dataset to see how well the model performs on the image classification task. As you can see below, our basic CNN model is performing very well on the MNIST classification task.
###Code
test_acc = 0.0
for i, (images, labels) in enumerate(testloader, 0):
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
test_acc += compute_accuracy(outputs, labels)
print('Test Accuracy: %.2f'%( test_acc/i))
!pip3 install opencv-python
import cv2
outputs
for i, (images, labels) in enumerate(testloader, 0):
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
    tensor = outputs.cpu().detach().numpy()  # make sure the tensor is on cpu as a numpy array
    # take the first item of the batch, move channels last and rescale to 0-255 before writing
    img = np.transpose(tensor[0], (1, 2, 0)) * 255
    cv2.imwrite("image.png", img.astype(np.uint8))
###Output
Requirement already satisfied: opencv-python in /usr/local/lib/python3.7/dist-packages (4.1.2.30)
Requirement already satisfied: numpy>=1.14.5 in /usr/local/lib/python3.7/dist-packages (from opencv-python) (1.19.5)
|
3_Code/01_CLARA_to_MongoDB_RealData_ClaraResults_SSODataModel.ipynb | ###Markdown
Code Details
Author: Rory Angus
Created: 19NOV18
Version: 0.1
***
This code tests writing data to MongoDB. It is a proof of concept and the data is real; however, it does not bring all of the data into Mongo, only the key fields.
It uses data that was extracted after the SSO data model was implemented at LE on the platform.
The data is now in two parts: the groups and their members, as well as the users linked to the results.
Please note that the results from doing the survey can be more than two per journey.
Package Importing + Variable Setting
###Code
import pandas as pd
import numpy as np
import datetime
# mongo stuff
import pymongo
from pymongo import MongoClient
from bson.objectid import ObjectId
import bson
# json stuff
import json
# the file to read. This needs to be manually updated
readLoc = "~/datasets/CLARA/190328_052400_LE_LivePlatform_IndividualResults.json"
# if true the code outputs to the notebook a whole of diagnostic data that is helpful when writing but not so much when running it for real
verbose = False
# first run will truncate the target database and reload it from scratch. Once delta updates have been implmented this needs adjusting
first_run = True
###Output
_____no_output_____
###Markdown
Set display options
###Code
# further details found by running:
# pd.describe_option('display')
# set the values to show all of the columns etc.
pd.set_option('display.max_columns', None) # or 1000
pd.set_option('display.max_rows', None) # or 1000
pd.set_option('display.max_colwidth', -1) # or 199
# locals() # show all of the local environments
###Output
_____no_output_____
###Markdown
Connect to Mongo DB
###Code
# create the connection to MongoDB
# define the location of the Mongo DB Server
# in this instance it is a local copy running on the dev machine. This is configurable at this point.
client = MongoClient('127.0.0.1', 27017)
# define what the database is called.
db = client.CLARA
# define the collection
raw_data_collection = db.raw_data_user_results
###Output
_____no_output_____
###Markdown
Command to clean the database if needed when running this code
###Code
# Delete the raw_data_collection - used for testing
if first_run:
raw_data_collection.drop()
###Output
_____no_output_____
###Markdown
Functions Definitions
###Code
# The web framework gets post_id from the URL and passes it as a string
def get(post_id):
    # Convert from string to ObjectId and look the document up in the results collection
    document = raw_data_collection.find_one({'_id': ObjectId(post_id)})
    return document
###Output
_____no_output_____
###Markdown
Place CLARA Results from JSON File into Mongo
###Code
#import the data file
claraDf = pd.read_json(readLoc, orient='records')
if verbose:
display(claraDf)
if verbose:
# count columns and rows
print("Number of columns are " + str(len(claraDf.columns)))
print("Number of rows are " + str(len(claraDf.index)))
print()
# output the shape of the dataframe
print("The shape of the data frame is " + str(claraDf.shape))
print()
# output the column names
print("The column names of the data frame are: ")
print(*claraDf, sep='\n')
print()
# output the column names and datatypes
print("The datatypes of the data frame are: ")
print(claraDf.dtypes)
print()
###Output
_____no_output_____
###Markdown
To Do: the index needs to be replaced with the unique identifier for the student (see the sketch in the next cell).
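One hedged way to approach that (illustrative only; it assumes `claraId` identifies a single CLARA result, since `userId` alone repeats when a student has several results):
###Code
# Hedged sketch for the To Do above: build a deterministic document _id from the student
# identifier plus the CLARA result identifier, instead of relying on the row index.
example_id = '{}_{}'.format(claraDf['userId'].iloc[0], claraDf['claraId'].iloc[0])
print(example_id)
###Output
_____no_output_____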
###Code
# Loop through the data frame and build a list
# the list will be used for a bulk update of MongoDB
# I am having to convert to strings for the integers as Mongo cannot handle the int64 datatype.
# It also can't handle the conversion to int32 at the point of loading the rows, so string is the fall back position
# define the list to hold the data
clara_row = []
# loop through dataframe and create each item in the list
for index, row in claraDf.iterrows():
clara_row.insert(
index, {
"rowIndex":
index,
"userId":
claraDf['userId'].iloc[index],
"nameId":
claraDf['nameId'].iloc[index],
"primaryEmail":
claraDf['primaryEmail'].iloc[index],
"journeyId":
claraDf['journeyId'].iloc[index].astype('str'),
"journeyTitle":
claraDf['journeyTitle'].iloc[index],
"journeyPurpose":
claraDf['journeyPurpose'].iloc[index],
"journeyGoal":
claraDf['journeyGoal'].iloc[index],
"journeyCreatedAt":
claraDf['journeyCreatedAt'].iloc[index],
"claraId":
claraDf['claraId'].iloc[index].astype('str'),
"claraResultsJourneyStep":
claraDf['claraResultsJourneyStep'].iloc[index],
"claraResultsCreatedAt":
claraDf['claraResultsCreatedAt'].iloc[index],
"claraResultCompletedAt":
claraDf['claraResultCompletedAt'].iloc[index],
"claraResult1":
claraDf['claraResult1'].iloc[index],
"claraResult2":
claraDf['claraResult2'].iloc[index],
"claraResult3":
claraDf['claraResult3'].iloc[index],
"claraResult4":
claraDf['claraResult4'].iloc[index],
"claraResult5":
claraDf['claraResult5'].iloc[index],
"claraResult6":
claraDf['claraResult6'].iloc[index],
"claraResult7":
claraDf['claraResult7'].iloc[index],
"claraResult8":
claraDf['claraResult8'].iloc[index],
"insertdate":
datetime.datetime.utcnow()
})
if verbose:
print(clara_row[0])
# bulk update the database
insert_result = raw_data_collection.insert_many(clara_row)
if verbose:
    print(insert_result.inserted_ids)
###Output
_____no_output_____
###Markdown
Create Index
###Code
# Only create the indexes onthe first run through
if first_run:
# put the restult into a list so it can be looked at later.
result = []
# Create some indexes
result.append(
raw_data_collection.create_index([('index', pymongo.ASCENDING)],
unique=False))
result.append(
raw_data_collection.create_index([('userId', pymongo.ASCENDING)],
unique=False))
result.append(
raw_data_collection.create_index([('nameId', pymongo.ASCENDING)],
unique=False))
result.append(
raw_data_collection.create_index([('primaryEmail', pymongo.ASCENDING)],
unique=False))
result.append(
raw_data_collection.create_index([('journeyId', pymongo.ASCENDING)],
unique=False))
result.append(
raw_data_collection.create_index(
[('journeyCreatedAt', pymongo.ASCENDING)], unique=False))
result.append(
raw_data_collection.create_index([('claraId', pymongo.ASCENDING)],
unique=False))
result.append(
raw_data_collection.create_index(
[('claraResultsCreatedAt', pymongo.ASCENDING)], unique=False))
result.append(
raw_data_collection.create_index(
[('claraResultCompletedAt', pymongo.ASCENDING)], unique=False))
result.append(
raw_data_collection.create_index(
[('claraResultsStep', pymongo.ASCENDING)], unique=False))
result.append(
raw_data_collection.create_index([('insertdate', pymongo.ASCENDING)],
unique=False))
if verbose:
print(result)
###Output
_____no_output_____ |
course_content/case_study/Case Study B/notebooks/answers/2_titanic_train-answers.ipynb | ###Markdown
<img src="https://datasciencecampus.ons.gov.uk/wp-content/uploads/sites/10/2017/03/data-science-campus-logo-new.svg" alt="ONS Data Science Campus Logo" width = "240" style="margin: 0px 60px" /> 2.0 Model Selectionpurpose of script: compare logreg vs knn on titanic_train
###Code
# import libraries
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
import matplotlib.pyplot as plt
# import cached data from titanic_EDA.py
titanic_train = pd.read_pickle('../../cache/titanic_train.pkl')
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
# Define functions to preprocess target & features
def preprocess_target(df) :
# Create arrays for the features and the target variable
target = df['Survived'].values
return(target)
def preprocess_features(df) :
#extract features series
features = df.drop('Survived', axis=1)
#remove features that cannot be converted to float: name, ticket & cabin
features = features.drop(['Name', 'Ticket', 'Cabin'], axis=1)
# dummy encoding of any remaining categorical data
features = pd.get_dummies(features, drop_first=True)
# ensure np.nan used to replace missing values
features.replace('nan', np.nan, inplace=True)
return features
###Output
_____no_output_____
###Markdown
Need to use a pipeline to select the best model & best parameters, using the following workflow: Train-Test-Split, Instantiate, Fit, Predict. Prep data for logreg
###Code
# preprocess target from titanic_train
target = preprocess_target(titanic_train)
#preprocess features from titanic_train
features = preprocess_features(titanic_train)
###Output
_____no_output_____
###Markdown
train_test_split
###Code
# X == features. y == target. Use 25% of data as 'hold out' data. Use a random state of 36.
X_train, X_test, y_train, y_test = train_test_split(features, target, test_size=0.25, random_state=36)
###Output
_____no_output_____
###Markdown
Instantiate
###Code
#impute median for NaNs in age column
imp = SimpleImputer(missing_values=np.nan, strategy='median')
# instantiate classifier
logreg = LogisticRegression()
steps = [
# impute medians on NaNs
('imputation', imp),
# scale features
('scaler', StandardScaler()),
# fit logreg classifier
('logistic_regression', logreg)]
# establish pipeline
pipeline = Pipeline(steps)
###Output
_____no_output_____
###Markdown
Train model
###Code
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict labels
###Code
y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
Review performance
###Code
pipeline.score(X_train, y_train)
pipeline.score(X_test, y_test)
print(confusion_matrix(y_test, y_pred))
# print a classification report of the model's performance passing the true labels and the predicted labels
# as arguments in that order
print(classification_report(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Precision is 9% lower in the survived category. High precision == low FP rate. This model performs 9% better in relation to false positives (assigning survived when in fact died) when the class assigned is 0 rather than 1. Recall (false negative rate - assigning died but in truth survived) is 5% higher in the 0 class. The harmonic mean of precision and recall - f1 - shows a 7 percent increase when assigning 0 (died). This has resulted in 133 rows (versus 90 rows in survived) of the true response sample falling within the 0 (died) category. Overall, it appears that this model is considerably better at predicting when people died rather than survived. Receiver Operator Curve
###Code
# predict the probability of a sample being in a particular class
y_pred_prob = pipeline.predict_proba(X_test)[:,1]
# calculate the false positive rate, true positive rate and thresholds using roc_curve()
fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob)
# create the axes
plt.plot([0, 1], [0, 1], 'k--')
# control the aesthetics
plt.plot(fpr, tpr, label='Logistic Regression')
# x axis label
plt.xlabel('False Positive Rate')
# y axis label
plt.ylabel('True Positive Rate')
# main title
plt.title('Logistic Regression ROC Curve (titanic_train)')
# show plot
plt.show()
###Output
_____no_output_____
###Markdown
To my eye a probability threshold of around 0.8 looks close to optimal. This curve can then be used against further ROC curves to visually compare performance. A quick way to sanity-check that eyeballed cut-off is to binarise the predicted probabilities at 0.8 and re-score; a minimal sketch (the 0.8 is just the value read off the plot, not a tuned choice):
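###Code
# re-score with the eyeballed cut-off instead of the default 0.5
custom_threshold = 0.8
y_pred_custom = (y_pred_prob >= custom_threshold).astype(int)
print(confusion_matrix(y_test, y_pred_custom))
print(classification_report(y_test, y_pred_custom))
###Output
_____no_output_____
###Markdown
What is the area under the curve?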
###Code
roc_auc_score(y_test, y_pred_prob)
###Output
_____no_output_____
###Markdown
An AUC of 0.5 is as good as a model randomly assigning classes and being correct on average 50% of the time. Here we have an untuned AUC of 0.879 which can be used to compare against further models. Precision recall curve not pursued as class imbalance is not high
###Code
# tidy up
del fpr, logreg, pipeline, steps, thresholds, tpr, y_pred, y_pred_prob
###Output
_____no_output_____
###Markdown
*** KNearestNeighbours
###Code
steps = [
# impute median for any NaNs
('imputation', imp),
# scale features
('scaler', StandardScaler()),
    # specify the knn classifier for the 'knn' step below, with k=5
('knn', KNeighborsClassifier(5))]
# establish a pipeline for the above steps
pipeline = Pipeline(steps)
###Output
_____no_output_____
###Markdown
Train model
###Code
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict labels
###Code
# Predict the labels for the test features using the KNN pipeline you have created
y_pred = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
Review performance
###Code
pipeline.score(X_train, y_train)
# As above, calculate accuracy of the classifier, but this time on the test data
pipeline.score(X_test, y_test)
print(confusion_matrix(y_test, y_pred))
###Output
_____no_output_____
###Markdown
True positives are marginally higher than in logreg and true negatives identical.
###Code
print(classification_report(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Precision is still lower within the survived category, however the difference has now reduced from 9% to 8% relative to the logreg model. Averages are up marginally over logreg. Recall (false negative rate - assigning died but in truth survived) within the survived predicted group has increased by 2% compared with logreg. The harmonic mean of these - f1 - has been marginally improved over logreg in both classes and on average. Support output is unaffected. KNN appears to marginally outperform logreg. Now need to compare whether titanic_engineered adds any value. Receiver Operator Curve
###Code
# predict the probability of a sample being in a particular class
y_pred_prob = pipeline.predict_proba(X_test)[:,1]
# unpack roc curve objects
fpr, tpr, thresholds = roc_curve(y_test, y_pred_prob)
# plot an ROC curve using matplotlib below:
# create axes
plt.plot([0, 1], [0, 1], 'k--')
# control aesthetics
plt.plot(fpr, tpr, label='KNN')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('KNN ROC Curve (titanic_train)')
plt.show()
###Output
_____no_output_____
###Markdown
The curve looks very similar to the logreg model's; I imagine the AUC will be very similar also.
###Code
roc_auc_score(y_test, y_pred_prob)
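# Before comparing against titanic_engineered it is worth checking that k=5 is a
# sensible choice. A minimal sketch (the parameter grid below is an assumption,
# not part of the original analysis):
from sklearn.model_selection import GridSearchCV
param_grid = {'knn__n_neighbors': [3, 5, 7, 9, 11]}
grid = GridSearchCV(pipeline, param_grid, cv=5, scoring='roc_auc')
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)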
###Output
_____no_output_____ |
notebooks_GLUE/notebooks_electra/electra_rte.ipynb | ###Markdown
Install transformers 2.8.0
###Code
!pip install transformers==2.8.0
###Output
Requirement already satisfied: transformers==2.8.0 in /usr/local/lib/python3.6/dist-packages (2.8.0)
Requirement already satisfied: tokenizers==0.5.2 in /usr/local/lib/python3.6/dist-packages (from transformers==2.8.0) (0.5.2)
Requirement already satisfied: dataclasses; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from transformers==2.8.0) (0.7)
Requirement already satisfied: boto3 in /usr/local/lib/python3.6/dist-packages (from transformers==2.8.0) (1.12.47)
Requirement already satisfied: sacremoses in /usr/local/lib/python3.6/dist-packages (from transformers==2.8.0) (0.0.43)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers==2.8.0) (2019.12.20)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers==2.8.0) (4.38.0)
Requirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers==2.8.0) (3.0.12)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers==2.8.0) (2.23.0)
Requirement already satisfied: sentencepiece in /usr/local/lib/python3.6/dist-packages (from transformers==2.8.0) (0.1.86)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers==2.8.0) (1.18.3)
Requirement already satisfied: botocore<1.16.0,>=1.15.47 in /usr/local/lib/python3.6/dist-packages (from boto3->transformers==2.8.0) (1.15.47)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.6/dist-packages (from boto3->transformers==2.8.0) (0.9.5)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /usr/local/lib/python3.6/dist-packages (from boto3->transformers==2.8.0) (0.3.3)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers==2.8.0) (7.1.2)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers==2.8.0) (1.12.0)
Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers==2.8.0) (0.14.1)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==2.8.0) (2020.4.5.1)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==2.8.0) (2.9)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==2.8.0) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers==2.8.0) (1.24.3)
Requirement already satisfied: docutils<0.16,>=0.10 in /usr/local/lib/python3.6/dist-packages (from botocore<1.16.0,>=1.15.47->boto3->transformers==2.8.0) (0.15.2)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.6/dist-packages (from botocore<1.16.0,>=1.15.47->boto3->transformers==2.8.0) (2.8.1)
###Markdown
Set working directory to the directory containing `download_glue_data.py` and `run_glue.py`
###Code
cd ~/../content/
import os
WORK_DIR = os.path.join('drive', 'My Drive', 'Colab Notebooks', 'NLU')
os.path.exists(WORK_DIR)
cd $WORK_DIR
###Output
/content/drive/My Drive/Colab Notebooks/NLU
###Markdown
Download GLUE data
###Code
GLUE_DIR="data/glue"
!echo $GLUE_DIR
!python download_glue_data.py --help
!python download_glue_data.py --data_dir $GLUE_DIR --tasks RTE
###Output
Downloading and extracting RTE...
Completed!
###Markdown
Compute ELECTRA score on RTE task
###Code
TASK_NAME="RTE"
!echo $TASK_NAME
!python run_glue.py --help
!python run_glue.py \
--model_type electra \
--model_name_or_path google/electra-base-discriminator \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--data_dir $GLUE_DIR/$TASK_NAME \
--max_seq_length 128 \
--per_gpu_eval_batch_size=64 \
--per_gpu_train_batch_size=64 \
--learning_rate 2e-5 \
--num_train_epochs 3 \
--output_dir ../${TASK_NAME}_run \
--overwrite_output_dir
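# The example script normally writes its evaluation metrics to eval_results.txt in
# the output directory; a small sketch (path assumed from --output_dir above) to
# read them back:
import os
eval_file = os.path.join('..', 'RTE_run', 'eval_results.txt')
if os.path.exists(eval_file):
    with open(eval_file) as f:
        print(f.read())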
###Output
_____no_output_____ |
Trusted Users Insight System.ipynb | ###Markdown
Select a product
###Code
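# join product metadata with the reviews, keep products that have both categories
# and related-product information, then rank by review count and keep the top ten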
reviewed_products = (all_metadata
.join(all_reviews, 'asin')
.filter('''
categories is not null
and related is not null'''))
top_reviewed_products = (reviewed_products
.groupBy('asin')
.count()
.sort('count', ascending=False)
.limit(10))
top_reviewed_products.toPandas()
# %load modules/scripts/WebDashboard.py
# %load "/Users/Achilles/Documents/Tech/Scala_Spark/HackOnData/Final Project/Build a WebInterface/screen.py"
#!/usr/bin/env python
from lxml import html
import json
import requests
import json,re
from dateutil import parser as dateparser
from time import sleep
def ParseReviews(asin):
# Added Retrying
for i in range(5):
try:
#This script has only been tested with Amazon.com
amazon_url = 'http://www.amazon.com/dp/'+asin
# Add some recent user agent to prevent amazon from blocking the request
# Find some chrome user agent strings here https://udger.com/resources/ua-list/browser-detail?browser=Chrome
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36'}
page = requests.get(amazon_url,headers = headers)
page_response = page.text
parser = html.fromstring(page_response)
XPATH_AGGREGATE = '//span[@id="acrCustomerReviewText"]'
XPATH_REVIEW_SECTION_1 = '//div[contains(@id,"reviews-summary")]'
XPATH_REVIEW_SECTION_2 = '//div[@data-hook="review"]'
XPATH_AGGREGATE_RATING = '//table[@id="histogramTable"]//tr'
XPATH_PRODUCT_NAME = '//h1//span[@id="productTitle"]//text()'
XPATH_PRODUCT_PRICE = '//span[@id="priceblock_ourprice"]/text()'
raw_product_price = parser.xpath(XPATH_PRODUCT_PRICE)
product_price = ''.join(raw_product_price).replace(',','')
raw_product_name = parser.xpath(XPATH_PRODUCT_NAME)
product_name = ''.join(raw_product_name).strip()
total_ratings = parser.xpath(XPATH_AGGREGATE_RATING)
reviews = parser.xpath(XPATH_REVIEW_SECTION_1)
if not reviews:
reviews = parser.xpath(XPATH_REVIEW_SECTION_2)
ratings_dict = {}
reviews_list = []
if not reviews:
raise ValueError('unable to find reviews in page')
            # grabbing the rating section of the product page
for ratings in total_ratings:
extracted_rating = ratings.xpath('./td//a//text()')
if extracted_rating:
rating_key = extracted_rating[0]
raw_raing_value = extracted_rating[1]
rating_value = raw_raing_value
if rating_key:
ratings_dict.update({rating_key:rating_value})
#Parsing individual reviews
for review in reviews:
XPATH_RATING = './/i[@data-hook="review-star-rating"]//text()'
XPATH_REVIEW_HEADER = './/a[@data-hook="review-title"]//text()'
XPATH_REVIEW_POSTED_DATE = './/a[contains(@href,"/profile/")]/parent::span/following-sibling::span/text()'
XPATH_REVIEW_TEXT_1 = './/div[@data-hook="review-collapsed"]//text()'
XPATH_REVIEW_TEXT_2 = './/div//span[@data-action="columnbalancing-showfullreview"]/@data-columnbalancing-showfullreview'
XPATH_REVIEW_COMMENTS = './/span[@data-hook="review-comment"]//text()'
XPATH_AUTHOR = './/a[contains(@href,"/profile/")]/parent::span//text()'
XPATH_REVIEW_TEXT_3 = './/div[contains(@id,"dpReviews")]/div/text()'
raw_review_author = review.xpath(XPATH_AUTHOR)
raw_review_rating = review.xpath(XPATH_RATING)
raw_review_header = review.xpath(XPATH_REVIEW_HEADER)
raw_review_posted_date = review.xpath(XPATH_REVIEW_POSTED_DATE)
raw_review_text1 = review.xpath(XPATH_REVIEW_TEXT_1)
raw_review_text2 = review.xpath(XPATH_REVIEW_TEXT_2)
raw_review_text3 = review.xpath(XPATH_REVIEW_TEXT_3)
                # strip a leading "By" (str.strip('By') would also eat trailing y's from names)
                author = re.sub(r'^By\s*', '', ' '.join(' '.join(raw_review_author).split()))
#cleaning data
review_rating = ''.join(raw_review_rating).replace('out of 5 stars','')
review_header = ' '.join(' '.join(raw_review_header).split())
review_posted_date = dateparser.parse(''.join(raw_review_posted_date)).strftime('%d %b %Y')
review_text = ' '.join(' '.join(raw_review_text1).split())
#grabbing hidden comments if present
if raw_review_text2:
json_loaded_review_data = json.loads(raw_review_text2[0])
json_loaded_review_data_text = json_loaded_review_data['rest']
cleaned_json_loaded_review_data_text = re.sub('<.*?>','',json_loaded_review_data_text)
full_review_text = review_text+cleaned_json_loaded_review_data_text
else:
full_review_text = review_text
if not raw_review_text1:
full_review_text = ' '.join(' '.join(raw_review_text3).split())
raw_review_comments = review.xpath(XPATH_REVIEW_COMMENTS)
review_comments = ''.join(raw_review_comments)
review_comments = re.sub('[A-Za-z]','',review_comments).strip()
review_dict = {
'review_comment_count':review_comments,
'review_text':full_review_text,
'review_posted_date':review_posted_date,
'review_header':review_header,
'review_rating':review_rating,
'review_author':author
}
reviews_list.append(review_dict)
data = {
'ratings':ratings_dict,
# 'reviews':reviews_list,
# 'url':amazon_url,
# 'price':product_price,
'name':product_name
}
return data
except ValueError:
print ("Retrying to get the correct response")
return {"error":"failed to process the page","asin":asin}
def ReadAsin(AsinList):
#Add your own ASINs here
#AsinList = ['B01ETPUQ6E','B017HW9DEW']
extracted_data = []
for asin in AsinList:
print ("Downloading and processing page http://www.amazon.com/dp/"+asin)
extracted_data.append(ParseReviews(asin))
#f=open('data.json','w')
#json.dump(extracted_data,f,indent=4)
print(extracted_data)
from IPython.core.display import display, HTML
def displayProducts(prodlist):
html_code = """
<table class="image">
"""
# prodlist = ['B000068NW5','B0002CZV82','B0002E1NQ4','B0002GW3Y8','B0002M6B2M','B0002M72JS','B000KIRT74','B000L7MNUM','B000LFCXL8','B000WS1QC6']
for prod in prodlist:
html_code = html_code+ """
        <td><img src="http://images.amazon.com/images/P/%s.01._PI_SCMZZZZZZZ_.jpg" style="float: left" id="%s" onclick="itemselect(this)"></td>
%s""" % (prod,prod,prod)
html_code = html_code + """</table>
<img id="myFinalImg" src="">"""
javascriptcode = """
<script type="text/javascript">
function itemselect(selectedprod){
srcFile='http://images.amazon.com/images/P/'+selectedprod.id+'.01._PI_SCTZZZZZZZ_.jpg';
document.getElementById("myFinalImg").src = srcFile;
IPython.notebook.kernel.execute("selected_product = '" + selectedprod.id + "'")
}
</script>"""
display(HTML(html_code + javascriptcode))
#spark.read.json("data.json").show()
#======================================================
displayProducts(
[ row[0] for row in top_reviewed_products.select('asin').collect() ])
selected_product = 'B0009VELTQ'
###Output
_____no_output_____
###Markdown
Product negative sentences
###Code
from pyspark.ml.feature import Tokenizer
from pyspark.sql.functions import explode, col
import pandas as pd
pd.set_option('display.max_rows', 100)
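# tokenise the selected product's reviews, take each reviewer's distinct words and
# rank them by neg_prob from the negative_predictive_words lookup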
product_words_per_reviewer = (
Tokenizer(inputCol='reviewText', outputCol='words')
.transform(all_reviews.filter(col('asin') == selected_product))
.select('reviewerID', 'words'))
word_ranks = (product_words_per_reviewer
.select(explode(col('words')).alias('word'))
.distinct()
.join(negative_predictive_words, col('word') == negative_predictive_words.negative_word)
.select('word', 'neg_prob')
.sort('neg_prob', ascending=False))
word_ranks.limit(30).toPandas()
selected_negative_word = 'noisy'
###Output
_____no_output_____
###Markdown
Trusted users that used the word
###Code
from pyspark.sql.functions import udf, lit
from pyspark.sql.types import BooleanType
is_elemen_of = udf(lambda word, words: word in words, BooleanType())
users_that_used_the_word = (product_words_per_reviewer
.filter(is_elemen_of(lit(selected_negative_word), col('words')))
.select('reviewerID'))
users_that_used_the_word.toPandas()
###Output
_____no_output_____
###Markdown
Suggested products in the same category
###Code
from pyspark.sql.functions import col
product_category = (reviewed_products
.filter(col('asin') == selected_product)
.select('categories')
.take(1)[0][0][0][-1])
print('Product category: {0}'.format(product_category))
import pandas as pd
pd.set_option('display.max_colwidth', -1)
last_element = udf(lambda categories: categories[0][-1])
products_in_same_category = (reviewed_products
.limit(100000)
.filter(last_element(col('categories')) == product_category)
.select('asin', 'title')
.distinct())
products_in_same_category.limit(10).toPandas()
from pyspark.sql.functions import avg
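# score every (candidate product, trusted user) pair with the trained recommender,
# average the predicted rating per product and keep the ten best-scoring
# alternatives, excluding the product that was selected above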
indexed_products = indexer.transform(
products_in_same_category.crossJoin(users_that_used_the_word))
alternative_products = (recommender_system
.transform(indexed_products)
.groupBy('asin')
.agg(avg(col('prediction')).alias('prediction'))
.sort('prediction', ascending=False)
.filter(col('asin') != selected_product)
.limit(10))
alternative_products.toPandas()
displayProducts([ asin[0] for asin in alternative_products.select('asin').collect() ])
reviews = (all_reviews
.filter(col('overall').isin([1, 2, 5]))
.withColumn('label', make_binary(col('overall')))
.select(col('label').cast('int'), remove_punctuation('summary').alias('summary'))
.filter(trim(col('summary')) != ''))
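# rank a product's review summaries by predicted positive probability and keep the
# five most positive and five most negative for the word clouds drawn below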
def most_contributing_summaries(product, total_reviews, ranking_model):
reviews = total_reviews.filter(col('asin') == product).select('summary', 'overall')
udf_max = udf(lambda p: max(p.tolist()), FloatType())
summary_ranks = (ranking_model
.transform(reviews)
.select(
'summary',
second(col('probability')).alias('pos_prob')))
pos_summaries = { row[0]: row[1] for row in summary_ranks.sort('pos_prob', ascending=False).take(5) }
neg_summaries = { row[0]: row[1] for row in summary_ranks.sort('pos_prob').take(5) }
return pos_summaries, neg_summaries
from wordcloud import WordCloud
import matplotlib.pyplot as plt
def present_product(product, total_reviews, ranking_model):
pos_summaries, neg_summaries = most_contributing_summaries(product, total_reviews, ranking_model)
pos_wordcloud = WordCloud(background_color='white', max_words=20).fit_words(pos_summaries)
neg_wordcloud = WordCloud(background_color='white', max_words=20).fit_words(neg_summaries)
fig = plt.figure(figsize=(20, 20))
ax = fig.add_subplot(1,2,1)
ax.set_title('Positive summaries')
ax.imshow(pos_wordcloud, interpolation='bilinear')
ax.axis('off')
ax = fig.add_subplot(1,2,2)
ax.set_title('Negative summaries')
ax.imshow(neg_wordcloud, interpolation='bilinear')
ax.axis('off')
plt.show()
present_product('B00005KIR0', all_reviews, model)
###Output
_____no_output_____ |
code notebook.ipynb | ###Markdown
B
###Code
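# Simple fully-connected baseline: flattened 32x32 inputs, one 512-unit ReLU
# hidden layer, softmax over 10 classes, trained with RMSprop on categorical
# cross-entropy for two epochs.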
network = Sequential()
network.add(Dense(512, activation='relu', input_shape=(32 * 32,)))
network.add(Dense(10, activation='softmax'))
network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
history = network.fit(x_train, y_train, epochs=2)
score = network.evaluate(x_test, y_test)
print('Test score:', score[0])
print('Test accuracy:', score[1])
print(history)
y_pred = network.predict(x_test,verbose = 1)
y_pred_bool = np.argmax(y_pred, axis=1)
print(classification_report(Y_test, y_pred_bool))
plt.plot(history.history['acc'])
plt.title('Train accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.title('Train loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train'], loc='upper left')
plt.show()
###Output
Epoch 1/2
60000/60000 [==============================] - 24s 400us/step - loss: 0.4830 - acc: 0.8988
Epoch 2/2
60000/60000 [==============================] - 22s 362us/step - loss: 0.1703 - acc: 0.9525
20000/20000 [==============================] - 2s 96us/step
Test score: 0.29835630317032336
Test accuracy: 0.91115
<keras.callbacks.History object at 0x13531bd30>
20000/20000 [==============================] - 2s 89us/step
              precision    recall  f1-score   support

         0.0       0.95      0.98      0.97      2000
         1.0       0.93      0.96      0.94      2000
         2.0       0.77      0.89      0.82      2000
         3.0       0.89      0.82      0.85      2000
         4.0       0.87      0.87      0.87      2000
         5.0       0.97      0.95      0.96      2000
         6.0       0.91      0.87      0.89      2000
         7.0       0.98      0.93      0.95      2000
         8.0       0.96      0.94      0.95      2000
         9.0       0.91      0.91      0.91      2000

    accuracy                           0.91     20000
   macro avg       0.91      0.91      0.91     20000
weighted avg       0.91      0.91      0.91     20000
###Markdown
C - Adding Dropout
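Dropout randomly switches off a fraction of the units at each training step, so the network cannot rely too heavily on any single feature; this acts as a regulariser and reduces overfitting. In the cell below we drop 20% of the input features and 50% of the hidden activations. First, a toy sketch (plain NumPy only, not part of the lab pipeline) of the "inverted dropout" behaviour a Dropout layer applies during training: dropped units are zeroed and the survivors are rescaled by 1/(1 - rate), so the expected activation per unit is unchanged.
###Code
## Toy illustration of (inverted) dropout -- not part of the lab pipeline.
## Only NumPy is assumed; Keras' Dropout layer does the equivalent internally.
import numpy as np

np.random.seed(13)                       # reproducible mask
rate = 0.5                               # fraction of units to drop
activations = np.ones(10)                # pretend hidden-layer output
keep_mask = np.random.rand(10) >= rate   # True for the units we keep
dropped = activations * keep_mask / (1.0 - rate)
# `dropped` now has roughly half of its entries set to 0 and the kept ones
# scaled up to 2.0, so the expected value of each unit is unchanged.
###Output
_____no_output_____
###Markdown
The model with dropout applied to the input and after the hidden layer: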
###Code
# Same architecture as before, but with dropout for regularisation:
# 20% of the input features and 50% of the hidden activations are
# randomly zeroed at each training step.
# (Sequential, Dense, Dropout, classification_report, np and plt are
#  assumed to be imported in the earlier cells of this notebook.)
network_c = Sequential()
network_c.add(Dropout(0.2, input_shape=(32*32,)))
network_c.add(Dense(512, activation='relu'))
network_c.add(Dropout(0.5))
network_c.add(Dense(10, activation='softmax'))
network_c.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
history_c = network_c.fit(x_train, y_train, epochs=2)

# Evaluate the dropout model on the held-out test set
score_c = network_c.evaluate(x_test, y_test)
print('Test score:', score_c[0])
print('Test accuracy:', score_c[1])

# Per-class precision/recall/F1 for the dropout model
y_pred = network_c.predict(x_test, verbose=1)
y_pred_bool = np.argmax(y_pred, axis=1)
print(classification_report(Y_test, y_pred_bool))

# Training curves (only the training history is available here)
plt.plot(history_c.history['acc'])
plt.title('Train accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train'], loc='upper left')
plt.show()

plt.plot(history_c.history['loss'])
plt.title('Train loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train'], loc='upper left')
plt.show()
###Output
WARNING:tensorflow:From /Users/mohammad/PycharmProjects/FCI-HW2/venv/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
Epoch 1/2
39264/60000 [==================>...........] - ETA: 7s - loss: 0.7958 - acc: 0.8399
39488/60000 [==================>...........] - ETA: 7s - loss: 0.7926 - acc: 0.8405
39712/60000 [==================>...........] - ETA: 7s - loss: 0.7901 - acc: 0.8410
39936/60000 [==================>...........] - ETA: 6s - loss: 0.7872 - acc: 0.8413
40128/60000 [===================>..........] - ETA: 6s - loss: 0.7847 - acc: 0.8417
40352/60000 [===================>..........] - ETA: 6s - loss: 0.7822 - acc: 0.8421
40512/60000 [===================>..........] - ETA: 6s - loss: 0.7805 - acc: 0.8424
40736/60000 [===================>..........] - ETA: 6s - loss: 0.7776 - acc: 0.8428
40928/60000 [===================>..........] - ETA: 6s - loss: 0.7752 - acc: 0.8432
41152/60000 [===================>..........] - ETA: 6s - loss: 0.7730 - acc: 0.8434
41344/60000 [===================>..........] - ETA: 6s - loss: 0.7708 - acc: 0.8437
41472/60000 [===================>..........] - ETA: 6s - loss: 0.7693 - acc: 0.8439
41632/60000 [===================>..........] - ETA: 6s - loss: 0.7674 - acc: 0.8443
41760/60000 [===================>..........] - ETA: 6s - loss: 0.7659 - acc: 0.8445
41888/60000 [===================>..........] - ETA: 6s - loss: 0.7647 - acc: 0.8447
41984/60000 [===================>..........] - ETA: 6s - loss: 0.7637 - acc: 0.8448
42080/60000 [====================>.........] - ETA: 6s - loss: 0.7625 - acc: 0.8450
42208/60000 [====================>.........] - ETA: 6s - loss: 0.7609 - acc: 0.8452
42304/60000 [====================>.........] - ETA: 6s - loss: 0.7605 - acc: 0.8452
42432/60000 [====================>.........] - ETA: 6s - loss: 0.7591 - acc: 0.8455
42624/60000 [====================>.........] - ETA: 6s - loss: 0.7574 - acc: 0.8457
42816/60000 [====================>.........] - ETA: 5s - loss: 0.7551 - acc: 0.8461
43040/60000 [====================>.........] - ETA: 5s - loss: 0.7528 - acc: 0.8463
43264/60000 [====================>.........] - ETA: 5s - loss: 0.7506 - acc: 0.8465
43488/60000 [====================>.........] - ETA: 5s - loss: 0.7487 - acc: 0.8468
43680/60000 [====================>.........] - ETA: 5s - loss: 0.7470 - acc: 0.8471
43904/60000 [====================>.........] - ETA: 5s - loss: 0.7449 - acc: 0.8473
44096/60000 [=====================>........] - ETA: 5s - loss: 0.7427 - acc: 0.8477
44224/60000 [=====================>........] - ETA: 5s - loss: 0.7412 - acc: 0.8480
44288/60000 [=====================>........] - ETA: 5s - loss: 0.7406 - acc: 0.8480
44416/60000 [=====================>........] - ETA: 5s - loss: 0.7394 - acc: 0.8481
44480/60000 [=====================>........] - ETA: 5s - loss: 0.7388 - acc: 0.8482
44576/60000 [=====================>........] - ETA: 5s - loss: 0.7378 - acc: 0.8484
44704/60000 [=====================>........] - ETA: 5s - loss: 0.7364 - acc: 0.8486
44800/60000 [=====================>........] - ETA: 5s - loss: 0.7355 - acc: 0.8487
44960/60000 [=====================>........] - ETA: 5s - loss: 0.7339 - acc: 0.8490
45088/60000 [=====================>........] - ETA: 5s - loss: 0.7323 - acc: 0.8493
45216/60000 [=====================>........] - ETA: 5s - loss: 0.7312 - acc: 0.8495
45376/60000 [=====================>........] - ETA: 5s - loss: 0.7293 - acc: 0.8499
45536/60000 [=====================>........] - ETA: 5s - loss: 0.7282 - acc: 0.8500
45664/60000 [=====================>........] - ETA: 5s - loss: 0.7271 - acc: 0.8501
45824/60000 [=====================>........] - ETA: 4s - loss: 0.7253 - acc: 0.8504
45984/60000 [=====================>........] - ETA: 4s - loss: 0.7237 - acc: 0.8506
46080/60000 [======================>.......] - ETA: 4s - loss: 0.7226 - acc: 0.8508
46208/60000 [======================>.......] - ETA: 4s - loss: 0.7216 - acc: 0.8509
46368/60000 [======================>.......] - ETA: 4s - loss: 0.7203 - acc: 0.8511
46528/60000 [======================>.......] - ETA: 4s - loss: 0.7188 - acc: 0.8514
46624/60000 [======================>.......] - ETA: 4s - loss: 0.7177 - acc: 0.8516
46752/60000 [======================>.......] - ETA: 4s - loss: 0.7166 - acc: 0.8518
46912/60000 [======================>.......] - ETA: 4s - loss: 0.7151 - acc: 0.8521
47072/60000 [======================>.......] - ETA: 4s - loss: 0.7134 - acc: 0.8524
47232/60000 [======================>.......] - ETA: 4s - loss: 0.7117 - acc: 0.8527
47392/60000 [======================>.......] - ETA: 4s - loss: 0.7101 - acc: 0.8530
47552/60000 [======================>.......] - ETA: 4s - loss: 0.7085 - acc: 0.8533
47648/60000 [======================>.......] - ETA: 4s - loss: 0.7077 - acc: 0.8534
47840/60000 [======================>.......] - ETA: 4s - loss: 0.7060 - acc: 0.8537
48000/60000 [=======================>......] - ETA: 4s - loss: 0.7046 - acc: 0.8539
48160/60000 [=======================>......] - ETA: 4s - loss: 0.7029 - acc: 0.8542
48288/60000 [=======================>......] - ETA: 4s - loss: 0.7020 - acc: 0.8544
48448/60000 [=======================>......] - ETA: 4s - loss: 0.7007 - acc: 0.8544
48544/60000 [=======================>......] - ETA: 4s - loss: 0.7000 - acc: 0.8546
48640/60000 [=======================>......] - ETA: 4s - loss: 0.6994 - acc: 0.8547
48736/60000 [=======================>......] - ETA: 3s - loss: 0.6984 - acc: 0.8548
48896/60000 [=======================>......] - ETA: 3s - loss: 0.6971 - acc: 0.8550
49024/60000 [=======================>......] - ETA: 3s - loss: 0.6962 - acc: 0.8551
49152/60000 [=======================>......] - ETA: 3s - loss: 0.6949 - acc: 0.8553
49312/60000 [=======================>......] - ETA: 3s - loss: 0.6934 - acc: 0.8556
49440/60000 [=======================>......] - ETA: 3s - loss: 0.6925 - acc: 0.8556
49568/60000 [=======================>......] - ETA: 3s - loss: 0.6915 - acc: 0.8558
49664/60000 [=======================>......] - ETA: 3s - loss: 0.6907 - acc: 0.8559
49792/60000 [=======================>......] - ETA: 3s - loss: 0.6895 - acc: 0.8560
49952/60000 [=======================>......] - ETA: 3s - loss: 0.6885 - acc: 0.8561
50080/60000 [========================>.....] - ETA: 3s - loss: 0.6875 - acc: 0.8563
50176/60000 [========================>.....] - ETA: 3s - loss: 0.6867 - acc: 0.8565
50272/60000 [========================>.....] - ETA: 3s - loss: 0.6859 - acc: 0.8566
50432/60000 [========================>.....] - ETA: 3s - loss: 0.6849 - acc: 0.8567
50560/60000 [========================>.....] - ETA: 3s - loss: 0.6840 - acc: 0.8568
50720/60000 [========================>.....] - ETA: 3s - loss: 0.6827 - acc: 0.8571
50848/60000 [========================>.....] - ETA: 3s - loss: 0.6815 - acc: 0.8574
51008/60000 [========================>.....] - ETA: 3s - loss: 0.6800 - acc: 0.8577
51168/60000 [========================>.....] - ETA: 3s - loss: 0.6789 - acc: 0.8578
51328/60000 [========================>.....] - ETA: 3s - loss: 0.6774 - acc: 0.8580
51488/60000 [========================>.....] - ETA: 3s - loss: 0.6763 - acc: 0.8582
51616/60000 [========================>.....] - ETA: 2s - loss: 0.6753 - acc: 0.8584
51776/60000 [========================>.....] - ETA: 2s - loss: 0.6740 - acc: 0.8585
51968/60000 [========================>.....] - ETA: 2s - loss: 0.6726 - acc: 0.8587
52096/60000 [=========================>....] - ETA: 2s - loss: 0.6715 - acc: 0.8589
52224/60000 [=========================>....] - ETA: 2s - loss: 0.6704 - acc: 0.8590
52352/60000 [=========================>....] - ETA: 2s - loss: 0.6694 - acc: 0.8593
52480/60000 [=========================>....] - ETA: 2s - loss: 0.6689 - acc: 0.8593
52608/60000 [=========================>....] - ETA: 2s - loss: 0.6679 - acc: 0.8594
52736/60000 [=========================>....] - ETA: 2s - loss: 0.6671 - acc: 0.8596
52896/60000 [=========================>....] - ETA: 2s - loss: 0.6657 - acc: 0.8598
53024/60000 [=========================>....] - ETA: 2s - loss: 0.6647 - acc: 0.8599
53152/60000 [=========================>....] - ETA: 2s - loss: 0.6639 - acc: 0.8600
53312/60000 [=========================>....] - ETA: 2s - loss: 0.6628 - acc: 0.8602
53440/60000 [=========================>....] - ETA: 2s - loss: 0.6617 - acc: 0.8604
53600/60000 [=========================>....] - ETA: 2s - loss: 0.6606 - acc: 0.8605
53728/60000 [=========================>....] - ETA: 2s - loss: 0.6596 - acc: 0.8607
53888/60000 [=========================>....] - ETA: 2s - loss: 0.6585 - acc: 0.8609
54048/60000 [==========================>...] - ETA: 2s - loss: 0.6574 - acc: 0.8610
54176/60000 [==========================>...] - ETA: 2s - loss: 0.6565 - acc: 0.8612
54368/60000 [==========================>...] - ETA: 2s - loss: 0.6551 - acc: 0.8613
54496/60000 [==========================>...] - ETA: 1s - loss: 0.6543 - acc: 0.8614
54656/60000 [==========================>...] - ETA: 1s - loss: 0.6531 - acc: 0.8616
54816/60000 [==========================>...] - ETA: 1s - loss: 0.6520 - acc: 0.8617
54976/60000 [==========================>...] - ETA: 1s - loss: 0.6509 - acc: 0.8620
55104/60000 [==========================>...] - ETA: 1s - loss: 0.6499 - acc: 0.8621
55232/60000 [==========================>...] - ETA: 1s - loss: 0.6489 - acc: 0.8623
55392/60000 [==========================>...] - ETA: 1s - loss: 0.6477 - acc: 0.8625
55520/60000 [==========================>...] - ETA: 1s - loss: 0.6466 - acc: 0.8627
55680/60000 [==========================>...] - ETA: 1s - loss: 0.6459 - acc: 0.8628
55840/60000 [==========================>...] - ETA: 1s - loss: 0.6450 - acc: 0.8629
55968/60000 [==========================>...] - ETA: 1s - loss: 0.6443 - acc: 0.8630
56128/60000 [===========================>..] - ETA: 1s - loss: 0.6432 - acc: 0.8632
56256/60000 [===========================>..] - ETA: 1s - loss: 0.6424 - acc: 0.8633
56416/60000 [===========================>..] - ETA: 1s - loss: 0.6412 - acc: 0.8635
56512/60000 [===========================>..] - ETA: 1s - loss: 0.6405 - acc: 0.8637
56672/60000 [===========================>..] - ETA: 1s - loss: 0.6395 - acc: 0.8639
56800/60000 [===========================>..] - ETA: 1s - loss: 0.6386 - acc: 0.8640
56960/60000 [===========================>..] - ETA: 1s - loss: 0.6375 - acc: 0.8642
57088/60000 [===========================>..] - ETA: 1s - loss: 0.6367 - acc: 0.8643
57248/60000 [===========================>..] - ETA: 0s - loss: 0.6355 - acc: 0.8645
57408/60000 [===========================>..] - ETA: 0s - loss: 0.6345 - acc: 0.8646
57568/60000 [===========================>..] - ETA: 0s - loss: 0.6337 - acc: 0.8648
57664/60000 [===========================>..] - ETA: 0s - loss: 0.6331 - acc: 0.8649
57760/60000 [===========================>..] - ETA: 0s - loss: 0.6323 - acc: 0.8650
57888/60000 [===========================>..] - ETA: 0s - loss: 0.6315 - acc: 0.8652
58016/60000 [============================>.] - ETA: 0s - loss: 0.6306 - acc: 0.8653
58144/60000 [============================>.] - ETA: 0s - loss: 0.6299 - acc: 0.8654
58272/60000 [============================>.] - ETA: 0s - loss: 0.6291 - acc: 0.8656
58400/60000 [============================>.] - ETA: 0s - loss: 0.6283 - acc: 0.8657
58528/60000 [============================>.] - ETA: 0s - loss: 0.6273 - acc: 0.8658
58656/60000 [============================>.] - ETA: 0s - loss: 0.6265 - acc: 0.8660
58816/60000 [============================>.] - ETA: 0s - loss: 0.6255 - acc: 0.8662
58912/60000 [============================>.] - ETA: 0s - loss: 0.6248 - acc: 0.8663
59072/60000 [============================>.] - ETA: 0s - loss: 0.6240 - acc: 0.8665
59232/60000 [============================>.] - ETA: 0s - loss: 0.6231 - acc: 0.8666
59392/60000 [============================>.] - ETA: 0s - loss: 0.6219 - acc: 0.8669
59552/60000 [============================>.] - ETA: 0s - loss: 0.6209 - acc: 0.8671
59680/60000 [============================>.] - ETA: 0s - loss: 0.6201 - acc: 0.8672
59840/60000 [============================>.] - ETA: 0s - loss: 0.6189 - acc: 0.8674
59968/60000 [============================>.] - ETA: 0s - loss: 0.6180 - acc: 0.8676
60000/60000 [==============================] - 22s 363us/step - loss: 0.6177 - acc: 0.8677
Epoch 2/2
60000/60000 [==============================] - 23s 380us/step - loss: 0.2208 - acc: 0.9357
20000/20000 [==============================] - 2s 92us/step
Test score: 0.3347589182332158
Test accuracy: 0.89675
20000/20000 [==============================] - 2s 76us/step
              precision    recall  f1-score   support

         0.0       0.95      0.98      0.97      2000
         1.0       0.93      0.96      0.94      2000
         2.0       0.77      0.89      0.82      2000
         3.0       0.89      0.82      0.85      2000
         4.0       0.87      0.87      0.87      2000
         5.0       0.97      0.95      0.96      2000
         6.0       0.91      0.87      0.89      2000
         7.0       0.98      0.93      0.95      2000
         8.0       0.96      0.94      0.95      2000
         9.0       0.91      0.91      0.91      2000

    accuracy                           0.91     20000
   macro avg       0.91      0.91      0.91     20000
weighted avg       0.91      0.91      0.91     20000
###Markdown
As we can see, using dropout improves the accuracy on both the training and the test set.

D - Validation Set

Next, we train the network while setting aside 20% of the training data as a validation set (`validation_split = 0.2`), so we can monitor performance on held-out data at the end of each epoch.
###Code
network_d = Sequential()
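## architecture: one hidden Dense layer of 512 ReLU units followed by a
## 10-way softmax output layer (one unit per class)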
network_d.add(Dense(512, activation='relu'))
network_d.add(Dense(10, activation='softmax'))
network_d.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
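## categorical_crossentropy expects one-hot encoded targets, matching the 10-unit softmax output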
history_d = network_d.fit(x_train, y_train, epochs=2, validation_split = 0.2)
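## validation_split=0.2 holds out the last 20% of x_train/y_train (here 48000 training
## vs 12000 validation samples); the held-out samples never contribute to the weight
## updates, and val_loss/val_acc are reported at the end of each epoch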
score_d = network_d.evaluate(x_test, y_test)
print('Test score:', score_d[0])
print('Test accuracy:', score_d[1])
y_pred = network_d.predict(x_test, verbose=1)
y_pred_bool = np.argmax(y_pred, axis=1)
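## argmax over the 10 softmax outputs gives the predicted class index for each test
## sample; Y_test is assumed here to hold the integer class labels (y_test being their
## one-hot encoded counterpart), which is what classification_report expects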
print(classification_report(Y_test, y_pred_bool))
## plot training and validation accuracy per epoch
plt.plot(history_d.history['acc'])
plt.plot(history_d.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()

## plot training and validation loss per epoch
plt.plot(history_d.history['loss'])
plt.plot(history_d.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper left')
plt.show()
###Output
Train on 48000 samples, validate on 12000 samples
Epoch 1/2
9696/48000 [=====>........................] - ETA: 14s - loss: 1.4735 - acc: 0.7465
9856/48000 [=====>........................] - ETA: 14s - loss: 1.4625 - acc: 0.7488
9984/48000 [=====>........................] - ETA: 14s - loss: 1.4532 - acc: 0.7500
10144/48000 [=====>........................] - ETA: 14s - loss: 1.4417 - acc: 0.7520
10272/48000 [=====>........................] - ETA: 14s - loss: 1.4322 - acc: 0.7529
10464/48000 [=====>........................] - ETA: 14s - loss: 1.4191 - acc: 0.7553
10592/48000 [=====>........................] - ETA: 14s - loss: 1.4096 - acc: 0.7571
10752/48000 [=====>........................] - ETA: 14s - loss: 1.3995 - acc: 0.7581
10912/48000 [=====>........................] - ETA: 14s - loss: 1.3882 - acc: 0.7604
11072/48000 [=====>........................] - ETA: 14s - loss: 1.3777 - acc: 0.7620
11232/48000 [======>.......................] - ETA: 14s - loss: 1.3672 - acc: 0.7634
11360/48000 [======>.......................] - ETA: 14s - loss: 1.3589 - acc: 0.7645
11552/48000 [======>.......................] - ETA: 13s - loss: 1.3459 - acc: 0.7671
11680/48000 [======>.......................] - ETA: 13s - loss: 1.3377 - acc: 0.7682
11840/48000 [======>.......................] - ETA: 13s - loss: 1.3279 - acc: 0.7695
12000/48000 [======>.......................] - ETA: 13s - loss: 1.3180 - acc: 0.7712
12160/48000 [======>.......................] - ETA: 13s - loss: 1.3082 - acc: 0.7728
12352/48000 [======>.......................] - ETA: 13s - loss: 1.2966 - acc: 0.7751
12480/48000 [======>.......................] - ETA: 13s - loss: 1.2892 - acc: 0.7763
12640/48000 [======>.......................] - ETA: 13s - loss: 1.2799 - acc: 0.7775
12768/48000 [======>.......................] - ETA: 13s - loss: 1.2725 - acc: 0.7785
12928/48000 [=======>......................] - ETA: 13s - loss: 1.2628 - acc: 0.7801
13056/48000 [=======>......................] - ETA: 13s - loss: 1.2558 - acc: 0.7810
13216/48000 [=======>......................] - ETA: 13s - loss: 1.2471 - acc: 0.7825
13408/48000 [=======>......................] - ETA: 13s - loss: 1.2365 - acc: 0.7841
13568/48000 [=======>......................] - ETA: 13s - loss: 1.2284 - acc: 0.7852
13760/48000 [=======>......................] - ETA: 12s - loss: 1.2189 - acc: 0.7868
13888/48000 [=======>......................] - ETA: 12s - loss: 1.2116 - acc: 0.7881
13984/48000 [=======>......................] - ETA: 12s - loss: 1.2068 - acc: 0.7888
14112/48000 [=======>......................] - ETA: 12s - loss: 1.1995 - acc: 0.7900
14208/48000 [=======>......................] - ETA: 12s - loss: 1.1944 - acc: 0.7909
14304/48000 [=======>......................] - ETA: 12s - loss: 1.1897 - acc: 0.7916
14464/48000 [========>.....................] - ETA: 12s - loss: 1.1817 - acc: 0.7926
14592/48000 [========>.....................] - ETA: 12s - loss: 1.1756 - acc: 0.7936
14720/48000 [========>.....................] - ETA: 12s - loss: 1.1691 - acc: 0.7947
14880/48000 [========>.....................] - ETA: 12s - loss: 1.1617 - acc: 0.7953
15008/48000 [========>.....................] - ETA: 12s - loss: 1.1551 - acc: 0.7964
15168/48000 [========>.....................] - ETA: 12s - loss: 1.1478 - acc: 0.7973
15328/48000 [========>.....................] - ETA: 12s - loss: 1.1402 - acc: 0.7986
15456/48000 [========>.....................] - ETA: 12s - loss: 1.1348 - acc: 0.7992
15616/48000 [========>.....................] - ETA: 12s - loss: 1.1272 - acc: 0.8004
15776/48000 [========>.....................] - ETA: 12s - loss: 1.1212 - acc: 0.8011
15904/48000 [========>.....................] - ETA: 12s - loss: 1.1150 - acc: 0.8021
16032/48000 [=========>....................] - ETA: 12s - loss: 1.1104 - acc: 0.8027
16192/48000 [=========>....................] - ETA: 12s - loss: 1.1040 - acc: 0.8033
16352/48000 [=========>....................] - ETA: 12s - loss: 1.0968 - acc: 0.8045
16512/48000 [=========>....................] - ETA: 12s - loss: 1.0892 - acc: 0.8059
16672/48000 [=========>....................] - ETA: 12s - loss: 1.0828 - acc: 0.8067
16832/48000 [=========>....................] - ETA: 11s - loss: 1.0765 - acc: 0.8072
16992/48000 [=========>....................] - ETA: 11s - loss: 1.0704 - acc: 0.8080
17184/48000 [=========>....................] - ETA: 11s - loss: 1.0630 - acc: 0.8086
17376/48000 [=========>....................] - ETA: 11s - loss: 1.0557 - acc: 0.8096
17504/48000 [=========>....................] - ETA: 11s - loss: 1.0508 - acc: 0.8103
17632/48000 [==========>...................] - ETA: 11s - loss: 1.0462 - acc: 0.8110
17792/48000 [==========>...................] - ETA: 11s - loss: 1.0407 - acc: 0.8114
17952/48000 [==========>...................] - ETA: 11s - loss: 1.0350 - acc: 0.8123
18048/48000 [==========>...................] - ETA: 11s - loss: 1.0325 - acc: 0.8124
18240/48000 [==========>...................] - ETA: 11s - loss: 1.0256 - acc: 0.8134
18400/48000 [==========>...................] - ETA: 11s - loss: 1.0207 - acc: 0.8141
18528/48000 [==========>...................] - ETA: 11s - loss: 1.0163 - acc: 0.8147
18656/48000 [==========>...................] - ETA: 11s - loss: 1.0122 - acc: 0.8152
18848/48000 [==========>...................] - ETA: 11s - loss: 1.0056 - acc: 0.8161
19008/48000 [==========>...................] - ETA: 11s - loss: 1.0004 - acc: 0.8170
19136/48000 [==========>...................] - ETA: 11s - loss: 0.9959 - acc: 0.8177
19296/48000 [===========>..................] - ETA: 10s - loss: 0.9909 - acc: 0.8184
19456/48000 [===========>..................] - ETA: 10s - loss: 0.9862 - acc: 0.8189
19584/48000 [===========>..................] - ETA: 10s - loss: 0.9822 - acc: 0.8194
19744/48000 [===========>..................] - ETA: 10s - loss: 0.9767 - acc: 0.8202
19904/48000 [===========>..................] - ETA: 10s - loss: 0.9720 - acc: 0.8207
20064/48000 [===========>..................] - ETA: 10s - loss: 0.9667 - acc: 0.8216
20192/48000 [===========>..................] - ETA: 10s - loss: 0.9627 - acc: 0.8222
20352/48000 [===========>..................] - ETA: 10s - loss: 0.9580 - acc: 0.8229
20544/48000 [===========>..................] - ETA: 10s - loss: 0.9522 - acc: 0.8235
20704/48000 [===========>..................] - ETA: 10s - loss: 0.9476 - acc: 0.8243
20864/48000 [============>.................] - ETA: 10s - loss: 0.9428 - acc: 0.8252
21024/48000 [============>.................] - ETA: 10s - loss: 0.9385 - acc: 0.8259
21184/48000 [============>.................] - ETA: 10s - loss: 0.9343 - acc: 0.8264
21344/48000 [============>.................] - ETA: 10s - loss: 0.9294 - acc: 0.8270
21504/48000 [============>.................] - ETA: 10s - loss: 0.9247 - acc: 0.8277
21664/48000 [============>.................] - ETA: 9s - loss: 0.9205 - acc: 0.8281
21824/48000 [============>.................] - ETA: 9s - loss: 0.9163 - acc: 0.8288
21952/48000 [============>.................] - ETA: 9s - loss: 0.9128 - acc: 0.8293
22144/48000 [============>.................] - ETA: 9s - loss: 0.9086 - acc: 0.8298
22336/48000 [============>.................] - ETA: 9s - loss: 0.9035 - acc: 0.8305
22464/48000 [=============>................] - ETA: 9s - loss: 0.9002 - acc: 0.8310
22624/48000 [=============>................] - ETA: 9s - loss: 0.8961 - acc: 0.8316
22752/48000 [=============>................] - ETA: 9s - loss: 0.8928 - acc: 0.8320
22912/48000 [=============>................] - ETA: 9s - loss: 0.8888 - acc: 0.8327
23104/48000 [=============>................] - ETA: 9s - loss: 0.8841 - acc: 0.8334
23264/48000 [=============>................] - ETA: 9s - loss: 0.8799 - acc: 0.8342
23392/48000 [=============>................] - ETA: 9s - loss: 0.8771 - acc: 0.8345
23520/48000 [=============>................] - ETA: 9s - loss: 0.8738 - acc: 0.8350
23680/48000 [=============>................] - ETA: 9s - loss: 0.8703 - acc: 0.8355
23808/48000 [=============>................] - ETA: 9s - loss: 0.8671 - acc: 0.8360
23936/48000 [=============>................] - ETA: 9s - loss: 0.8643 - acc: 0.8364
24096/48000 [==============>...............] - ETA: 9s - loss: 0.8606 - acc: 0.8367
24224/48000 [==============>...............] - ETA: 8s - loss: 0.8584 - acc: 0.8368
24352/48000 [==============>...............] - ETA: 8s - loss: 0.8560 - acc: 0.8372
24512/48000 [==============>...............] - ETA: 8s - loss: 0.8523 - acc: 0.8377
24704/48000 [==============>...............] - ETA: 8s - loss: 0.8480 - acc: 0.8384
24896/48000 [==============>...............] - ETA: 8s - loss: 0.8440 - acc: 0.8388
25024/48000 [==============>...............] - ETA: 8s - loss: 0.8415 - acc: 0.8392
25184/48000 [==============>...............] - ETA: 8s - loss: 0.8382 - acc: 0.8397
25312/48000 [==============>...............] - ETA: 8s - loss: 0.8355 - acc: 0.8402
25504/48000 [==============>...............] - ETA: 8s - loss: 0.8318 - acc: 0.8409
25696/48000 [===============>..............] - ETA: 8s - loss: 0.8278 - acc: 0.8414
25856/48000 [===============>..............] - ETA: 8s - loss: 0.8243 - acc: 0.8419
26016/48000 [===============>..............] - ETA: 8s - loss: 0.8214 - acc: 0.8423
26208/48000 [===============>..............] - ETA: 8s - loss: 0.8174 - acc: 0.8430
26368/48000 [===============>..............] - ETA: 8s - loss: 0.8142 - acc: 0.8434
26528/48000 [===============>..............] - ETA: 8s - loss: 0.8110 - acc: 0.8438
26688/48000 [===============>..............] - ETA: 7s - loss: 0.8079 - acc: 0.8443
26816/48000 [===============>..............] - ETA: 7s - loss: 0.8057 - acc: 0.8446
26976/48000 [===============>..............] - ETA: 7s - loss: 0.8026 - acc: 0.8451
27104/48000 [===============>..............] - ETA: 7s - loss: 0.8000 - acc: 0.8455
27232/48000 [================>.............] - ETA: 7s - loss: 0.7978 - acc: 0.8458
27392/48000 [================>.............] - ETA: 7s - loss: 0.7949 - acc: 0.8462
27552/48000 [================>.............] - ETA: 7s - loss: 0.7924 - acc: 0.8465
27680/48000 [================>.............] - ETA: 7s - loss: 0.7900 - acc: 0.8469
27840/48000 [================>.............] - ETA: 7s - loss: 0.7870 - acc: 0.8475
28000/48000 [================>.............] - ETA: 7s - loss: 0.7841 - acc: 0.8479
28192/48000 [================>.............] - ETA: 7s - loss: 0.7801 - acc: 0.8486
28384/48000 [================>.............] - ETA: 7s - loss: 0.7764 - acc: 0.8492
28544/48000 [================>.............] - ETA: 7s - loss: 0.7739 - acc: 0.8495
28736/48000 [================>.............] - ETA: 7s - loss: 0.7704 - acc: 0.8500
28896/48000 [=================>............] - ETA: 7s - loss: 0.7680 - acc: 0.8502
29056/48000 [=================>............] - ETA: 7s - loss: 0.7657 - acc: 0.8504
29248/48000 [=================>............] - ETA: 6s - loss: 0.7626 - acc: 0.8509
29376/48000 [=================>............] - ETA: 6s - loss: 0.7606 - acc: 0.8512
29536/48000 [=================>............] - ETA: 6s - loss: 0.7581 - acc: 0.8516
29728/48000 [=================>............] - ETA: 6s - loss: 0.7553 - acc: 0.8520
29856/48000 [=================>............] - ETA: 6s - loss: 0.7535 - acc: 0.8521
30016/48000 [=================>............] - ETA: 6s - loss: 0.7508 - acc: 0.8526
30144/48000 [=================>............] - ETA: 6s - loss: 0.7484 - acc: 0.8530
30272/48000 [=================>............] - ETA: 6s - loss: 0.7461 - acc: 0.8534
30432/48000 [==================>...........] - ETA: 6s - loss: 0.7434 - acc: 0.8538
30560/48000 [==================>...........] - ETA: 6s - loss: 0.7414 - acc: 0.8541
30720/48000 [==================>...........] - ETA: 6s - loss: 0.7390 - acc: 0.8545
30880/48000 [==================>...........] - ETA: 6s - loss: 0.7367 - acc: 0.8547
31040/48000 [==================>...........] - ETA: 6s - loss: 0.7342 - acc: 0.8550
31200/48000 [==================>...........] - ETA: 6s - loss: 0.7316 - acc: 0.8554
31360/48000 [==================>...........] - ETA: 6s - loss: 0.7293 - acc: 0.8558
31520/48000 [==================>...........] - ETA: 6s - loss: 0.7269 - acc: 0.8562
31680/48000 [==================>...........] - ETA: 6s - loss: 0.7243 - acc: 0.8565
31840/48000 [==================>...........] - ETA: 5s - loss: 0.7217 - acc: 0.8570
32032/48000 [===================>..........] - ETA: 5s - loss: 0.7190 - acc: 0.8573
32192/48000 [===================>..........] - ETA: 5s - loss: 0.7165 - acc: 0.8578
32384/48000 [===================>..........] - ETA: 5s - loss: 0.7140 - acc: 0.8582
32512/48000 [===================>..........] - ETA: 5s - loss: 0.7120 - acc: 0.8586
32640/48000 [===================>..........] - ETA: 5s - loss: 0.7103 - acc: 0.8588
32800/48000 [===================>..........] - ETA: 5s - loss: 0.7080 - acc: 0.8592
32960/48000 [===================>..........] - ETA: 5s - loss: 0.7053 - acc: 0.8597
33120/48000 [===================>..........] - ETA: 5s - loss: 0.7035 - acc: 0.8599
33280/48000 [===================>..........] - ETA: 5s - loss: 0.7017 - acc: 0.8602
33408/48000 [===================>..........] - ETA: 5s - loss: 0.6997 - acc: 0.8605
33536/48000 [===================>..........] - ETA: 5s - loss: 0.6977 - acc: 0.8608
33696/48000 [====================>.........] - ETA: 5s - loss: 0.6954 - acc: 0.8613
33856/48000 [====================>.........] - ETA: 5s - loss: 0.6936 - acc: 0.8617
33984/48000 [====================>.........] - ETA: 5s - loss: 0.6918 - acc: 0.8619
34176/48000 [====================>.........] - ETA: 5s - loss: 0.6895 - acc: 0.8623
34336/48000 [====================>.........] - ETA: 5s - loss: 0.6871 - acc: 0.8628
34496/48000 [====================>.........] - ETA: 4s - loss: 0.6851 - acc: 0.8631
34656/48000 [====================>.........] - ETA: 4s - loss: 0.6832 - acc: 0.8634
34784/48000 [====================>.........] - ETA: 4s - loss: 0.6815 - acc: 0.8636
34912/48000 [====================>.........] - ETA: 4s - loss: 0.6798 - acc: 0.8639
35040/48000 [====================>.........] - ETA: 4s - loss: 0.6782 - acc: 0.8641
35200/48000 [=====================>........] - ETA: 4s - loss: 0.6762 - acc: 0.8644
35392/48000 [=====================>........] - ETA: 4s - loss: 0.6743 - acc: 0.8645
35552/48000 [=====================>........] - ETA: 4s - loss: 0.6720 - acc: 0.8650
35680/48000 [=====================>........] - ETA: 4s - loss: 0.6702 - acc: 0.8653
35840/48000 [=====================>........] - ETA: 4s - loss: 0.6679 - acc: 0.8658
35968/48000 [=====================>........] - ETA: 4s - loss: 0.6663 - acc: 0.8660
36160/48000 [=====================>........] - ETA: 4s - loss: 0.6641 - acc: 0.8664
36288/48000 [=====================>........] - ETA: 4s - loss: 0.6629 - acc: 0.8665
36480/48000 [=====================>........] - ETA: 4s - loss: 0.6608 - acc: 0.8667
36640/48000 [=====================>........] - ETA: 4s - loss: 0.6590 - acc: 0.8670
36800/48000 [======================>.......] - ETA: 4s - loss: 0.6569 - acc: 0.8674
36928/48000 [======================>.......] - ETA: 4s - loss: 0.6557 - acc: 0.8676
37088/48000 [======================>.......] - ETA: 4s - loss: 0.6537 - acc: 0.8679
37280/48000 [======================>.......] - ETA: 3s - loss: 0.6516 - acc: 0.8681
37440/48000 [======================>.......] - ETA: 3s - loss: 0.6498 - acc: 0.8683
37600/48000 [======================>.......] - ETA: 3s - loss: 0.6482 - acc: 0.8685
37760/48000 [======================>.......] - ETA: 3s - loss: 0.6467 - acc: 0.8686
37920/48000 [======================>.......] - ETA: 3s - loss: 0.6454 - acc: 0.8689
38112/48000 [======================>.......] - ETA: 3s - loss: 0.6430 - acc: 0.8693
38272/48000 [======================>.......] - ETA: 3s - loss: 0.6414 - acc: 0.8695
38432/48000 [=======================>......] - ETA: 3s - loss: 0.6398 - acc: 0.8697
38624/48000 [=======================>......] - ETA: 3s - loss: 0.6378 - acc: 0.8700
38784/48000 [=======================>......] - ETA: 3s - loss: 0.6361 - acc: 0.8703
38912/48000 [=======================>......] - ETA: 3s - loss: 0.6349 - acc: 0.8704
39072/48000 [=======================>......] - ETA: 3s - loss: 0.6333 - acc: 0.8706
39232/48000 [=======================>......] - ETA: 3s - loss: 0.6319 - acc: 0.8707
39392/48000 [=======================>......] - ETA: 3s - loss: 0.6303 - acc: 0.8710
39552/48000 [=======================>......] - ETA: 3s - loss: 0.6287 - acc: 0.8713
39744/48000 [=======================>......] - ETA: 3s - loss: 0.6269 - acc: 0.8716
39904/48000 [=======================>......] - ETA: 2s - loss: 0.6251 - acc: 0.8718
40064/48000 [========================>.....] - ETA: 2s - loss: 0.6237 - acc: 0.8720
40224/48000 [========================>.....] - ETA: 2s - loss: 0.6221 - acc: 0.8723
40384/48000 [========================>.....] - ETA: 2s - loss: 0.6205 - acc: 0.8726
40544/48000 [========================>.....] - ETA: 2s - loss: 0.6192 - acc: 0.8728
40704/48000 [========================>.....] - ETA: 2s - loss: 0.6176 - acc: 0.8730
40864/48000 [========================>.....] - ETA: 2s - loss: 0.6161 - acc: 0.8731
41024/48000 [========================>.....] - ETA: 2s - loss: 0.6146 - acc: 0.8734
41184/48000 [========================>.....] - ETA: 2s - loss: 0.6131 - acc: 0.8736
41344/48000 [========================>.....] - ETA: 2s - loss: 0.6120 - acc: 0.8738
41536/48000 [========================>.....] - ETA: 2s - loss: 0.6103 - acc: 0.8740
41696/48000 [=========================>....] - ETA: 2s - loss: 0.6087 - acc: 0.8742
41856/48000 [=========================>....] - ETA: 2s - loss: 0.6073 - acc: 0.8744
42016/48000 [=========================>....] - ETA: 2s - loss: 0.6059 - acc: 0.8746
42176/48000 [=========================>....] - ETA: 2s - loss: 0.6049 - acc: 0.8747
42336/48000 [=========================>....] - ETA: 2s - loss: 0.6033 - acc: 0.8749
42496/48000 [=========================>....] - ETA: 2s - loss: 0.6017 - acc: 0.8752
42656/48000 [=========================>....] - ETA: 1s - loss: 0.6006 - acc: 0.8754
42784/48000 [=========================>....] - ETA: 1s - loss: 0.5995 - acc: 0.8756
42944/48000 [=========================>....] - ETA: 1s - loss: 0.5980 - acc: 0.8758
43104/48000 [=========================>....] - ETA: 1s - loss: 0.5971 - acc: 0.8760
43232/48000 [==========================>...] - ETA: 1s - loss: 0.5957 - acc: 0.8762
43360/48000 [==========================>...] - ETA: 1s - loss: 0.5944 - acc: 0.8765
43520/48000 [==========================>...] - ETA: 1s - loss: 0.5930 - acc: 0.8767
43712/48000 [==========================>...] - ETA: 1s - loss: 0.5915 - acc: 0.8770
43872/48000 [==========================>...] - ETA: 1s - loss: 0.5898 - acc: 0.8772
44000/48000 [==========================>...] - ETA: 1s - loss: 0.5889 - acc: 0.8773
44160/48000 [==========================>...] - ETA: 1s - loss: 0.5874 - acc: 0.8775
44320/48000 [==========================>...] - ETA: 1s - loss: 0.5861 - acc: 0.8778
44512/48000 [==========================>...] - ETA: 1s - loss: 0.5844 - acc: 0.8781
44672/48000 [==========================>...] - ETA: 1s - loss: 0.5831 - acc: 0.8783
44864/48000 [===========================>..] - ETA: 1s - loss: 0.5817 - acc: 0.8785
45056/48000 [===========================>..] - ETA: 1s - loss: 0.5805 - acc: 0.8787
45216/48000 [===========================>..] - ETA: 1s - loss: 0.5795 - acc: 0.8787
45344/48000 [===========================>..] - ETA: 0s - loss: 0.5785 - acc: 0.8788
45472/48000 [===========================>..] - ETA: 0s - loss: 0.5775 - acc: 0.8790
45600/48000 [===========================>..] - ETA: 0s - loss: 0.5764 - acc: 0.8792
45760/48000 [===========================>..] - ETA: 0s - loss: 0.5751 - acc: 0.8794
45920/48000 [===========================>..] - ETA: 0s - loss: 0.5737 - acc: 0.8796
46080/48000 [===========================>..] - ETA: 0s - loss: 0.5722 - acc: 0.8798
46240/48000 [===========================>..] - ETA: 0s - loss: 0.5709 - acc: 0.8801
46400/48000 [============================>.] - ETA: 0s - loss: 0.5695 - acc: 0.8804
46560/48000 [============================>.] - ETA: 0s - loss: 0.5682 - acc: 0.8806
46720/48000 [============================>.] - ETA: 0s - loss: 0.5672 - acc: 0.8807
46848/48000 [============================>.] - ETA: 0s - loss: 0.5662 - acc: 0.8809
47040/48000 [============================>.] - ETA: 0s - loss: 0.5646 - acc: 0.8813
47168/48000 [============================>.] - ETA: 0s - loss: 0.5635 - acc: 0.8815
47328/48000 [============================>.] - ETA: 0s - loss: 0.5622 - acc: 0.8817
47488/48000 [============================>.] - ETA: 0s - loss: 0.5610 - acc: 0.8819
47648/48000 [============================>.] - ETA: 0s - loss: 0.5597 - acc: 0.8821
47776/48000 [============================>.] - ETA: 0s - loss: 0.5586 - acc: 0.8823
47936/48000 [============================>.] - ETA: 0s - loss: 0.5573 - acc: 0.8826
48000/48000 [==============================] - 18s 378us/step - loss: 0.5568 - acc: 0.8826 - val_loss: 0.2136 - val_acc: 0.9399
Epoch 2/2
48000/48000 [==============================] - 16s 343us/step - loss: 0.1835 - acc: 0.9485 - val_loss: 0.1705 - val_acc: 0.9515
20000/20000 [==============================] - 2s 85us/step
Test score: 0.3069405316725373
Test accuracy: 0.9057
20000/20000 [==============================] - 2s 80us/step
precision recall f1-score support
0.0 0.95 0.98 0.97 2000
1.0 0.93 0.96 0.94 2000
2.0 0.77 0.89 0.82 2000
3.0 0.89 0.82 0.85 2000
4.0 0.87 0.87 0.87 2000
5.0 0.97 0.95 0.96 2000
6.0 0.91 0.87 0.89 2000
7.0 0.98 0.93 0.95 2000
8.0 0.96 0.94 0.95 2000
9.0 0.91 0.91 0.91 2000
accuracy 0.91 20000
macro avg 0.91 0.91 0.91 20000
weighted avg 0.91 0.91 0.91 20000
|
.ipynb_checkpoints/Assignment-2-numpy-array-operations-checkpoint.ipynb | ###Markdown
Numpy Array Functions - Five Picks. The functions below are my five chosen functions from the Numpy module: function1 = np.concatenate, function2 = np.transpose, function3 = np.vstack, function4 = np.hsplit and function5 = np.linalg (demonstrated through np.linalg.matrix_rank), with a bonus look at np.datetime64 at the end. The recommended way to run this notebook is to click the "Run" button at the top of this page and select "Run on Binder"; this runs the notebook on mybinder.org, a free online service for running Jupyter notebooks.
###Code
!pip install jovian --upgrade -q
import jovian
jovian.commit(project='numpy-array-operations')
###Output
_____no_output_____
###Markdown
Let's begin by importing Numpy and listing out the functions covered in this notebook.
###Code
import numpy as np
# List of functions explained
function1 = np.concatenate
function2 = np.transpose
function3 = np.vstack
function4 = np.hsplit
function5 = np.linalg
###Output
_____no_output_____
###Markdown
Function 1 - np.concatenate: joins a sequence of arrays along an existing axis; all inputs must have the same shape except along that axis.
###Code
# Example 1 - working: the arrays share 2 rows, so they can be joined side by side (axis=1)
arr1 = [[1, 2],
        [3, 4.]]
arr2 = [[5, 6, 7],
        [8, 9, 10]]
np.concatenate((arr1, arr2), axis=1)
# Example 2 - working: two 2x3 arrays joined along axis=1 give a 2x6 array
arr1 = [[1, 2, 9],
        [3, 4., 0]]
arr2 = [[5, 6, 7],
        [8, 9, 10]]
np.concatenate((arr1, arr2), axis=1)
# Example 3 - breaking: axis=0 needs equal column counts, but arr1 has 2 and arr2 has 3 (ValueError)
arr1 = [[1, 2],
        [3, 4.]]
arr2 = [[5, 6, 7],
        [8, 9, 10]]
np.concatenate((arr1, arr2), axis=0)
###Output
_____no_output_____
###Markdown
Example 3 breaks because, along axis 0, every input must have the same number of columns: arr1 has 2 while arr2 has 3, so Numpy raises a ValueError. Use np.concatenate whenever you need to join arrays along an axis that already exists; the sketch below shows the fixed axis-0 case.
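The following is a minimal sketch of the fixed axis-0 case; the array values are illustrative only.
###Code
# Sketch: along axis 0 the column counts must match, so a (2, 3) and a (1, 3) array stack cleanly.
import numpy as np
arr1 = np.array([[1, 2, 3],
                 [4, 5, 6]])
arr2 = np.array([[7, 8, 9]])
print(np.concatenate((arr1, arr2), axis=0))   # result has shape (3, 3)
###Output
_____no_output_____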
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
Function 2 - np.transpose: reverses or permutes the axes of an array; for a 2-D array it returns the familiar matrix transpose, so rows become columns.
###Code
# Example 1 - working
arr1 = [[1, 2],
[3, 4.]]
i = np.transpose(arr1)
print(i)
###Output
_____no_output_____
###Markdown
Example 2 transposes a second 2x2 array: the rows of arr2 become the columns of the result, just as in example 1.
###Code
# Example 2 - working
arr2 = [[10, 2],
[30, 4.]]
i = np.transpose(arr2)
print(i)
###Output
_____no_output_____
###Markdown
Example 3 below is intentionally broken: it transposes a variable (`arr3`) that was never defined.
###Code
# Example 3 - breaking (to illustrate when it breaks)
arr2 = [[10, 2],
[30, 4.]]
i = np.transpose(arr3)
print(i)
###Output
_____no_output_____
###Markdown
The call fails with a NameError because `arr3` does not exist; passing the previously defined `arr2` fixes it. Use np.transpose whenever you need to swap axes, for example to match the expected input layout of another function.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
Function 3 - np.vstack `np.vstack` stacks arrays vertically (row-wise), while `np.hstack` stacks them horizontally (column-wise).
###Code
# Example 1 - working
a = [[1, 2],
[3, 4.]]
b = [[10, 2],
[3, 40.]]
i = np.vstack((a, b))
print(i)
# Example 2 - working
i = np.hstack((a, b))
print(i)
###Output
_____no_output_____
###Markdown
Example 1 stacks the two 2x2 arrays vertically into a 4x2 array; Example 2 uses `np.hstack` on the same arrays to show the horizontal counterpart (a 2x4 array).
###Code
# Example 3 - breaking (to illustrate when it breaks)
i = np.hstack((A, b))
print(i)
###Output
_____no_output_____
###Markdown
Example 3 raises a NameError because `A` (upper case) is not defined at this point - Python names are case sensitive, so the lower-case `a` should be used. vstack and hstack are convenient for assembling larger arrays from equally shaped blocks.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
Function 4 - np.hsplit `np.hsplit` splits an array into equal sub-arrays along the horizontal (column) axis; `np.vsplit` does the same along the vertical (row) axis.
###Code
# Example 1 - working
a = np.array([[1, 3, 5, 7, 9, 11],
[2, 4, 6, 8, 10, 12]])
# horizontal splitting
print("Splitting along horizontal axis into 2 parts:\n", np.hsplit(a, 2))
###Output
_____no_output_____
###Markdown
Example 1 splits the 2x6 array column-wise into two 2x3 sub-arrays.
###Code
# Example 2 - working
# vertical splitting
print("\nSplitting along vertical axis into 2 parts:\n", np.vsplit(a, 2))
###Output
_____no_output_____
###Markdown
Example 2 uses `np.vsplit` to split the same array row-wise into two 1x6 sub-arrays.
###Code
# Example 3 - breaking (to illustrate when it breaks)
print("\nSplitting along vertical axis into 2 parts:\n", np.vsplit(a, 2,1))
###Output
_____no_output_____
###Markdown
Example 3 breaks because `np.vsplit` accepts only the array and the number of sections (or split indices); the extra third argument causes a TypeError. Use these helpers to carve a large array into equally sized blocks.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
Bonus - np.datetime64 `np.datetime64` represents dates and times as NumPy scalars, which makes date arithmetic and date ranges straightforward.
###Code
# Example 1 - working
# creating a date
today = np.datetime64('2017-02-12')
print("Date is:", today)
print("Year is:", np.datetime64(today, 'Y'))
###Output
_____no_output_____
###Markdown
Example 1 creates a single date and extracts its year.
###Code
# Example 2 - working
# creating array of dates in a month
dates = np.arange('2017-02', '2017-03', dtype='datetime64[D]')
print("\nDates of February, 2017:\n", dates)
print("Today is February:", today in dates)
###Output
_____no_output_____
###Markdown
Example 2 builds an array containing every day of February 2017 and checks that `today` falls inside it.
###Code
# Example 3 - breaking (to illustrate when it breaks)
# arithmetic operation on dates
dur = np.datetim64('2014-05-22') - np.datetime64('2016-05-22')
print("\nNo. of days:", dur)
print("No. of weeks:", np.timedelta64(dur, 'W'))
###Output
_____no_output_____
###Markdown
Example 3 fails with an AttributeError because of the typo `np.datetim64`; with the spelling corrected, subtracting the two dates yields a (negative) timedelta64 that np.timedelta64 can convert to weeks. np.datetime64 is handy whenever you need vectorised date arithmetic.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
Function 5 - np.linalg `np.linalg` gathers linear-algebra routines such as matrix_rank, inv and matrix_power.
###Code
# Example 1 - working
A = np.array([[6, 1, 1],
[4, -2, 5],
[2, 8, 7]])
print("Rank of A:", np.linalg.matrix_rank(A))
print("\nInverse of A:\n", np.linalg.inv(A))
###Output
_____no_output_____
###Markdown
Example 1 computes the rank and the inverse of the 3x3 matrix A.
###Code
# Example 2 - working
print("\nMatrix A raised to power 3:\n", np.linalg.matrix_power(A, 3))
###Output
_____no_output_____
###Markdown
Example 2 raises A to the third power with np.linalg.matrix_power.
###Code
# Example 3 - breaking (to illustrate when it breaks)
print("\nMatrix A raised to power 3:\n", np.linalg.matrix_power(a, -3))
###Output
_____no_output_____
###Markdown
Example 3 breaks because `a` is the non-square 2x6 array defined earlier (and a negative power additionally requires an invertible square matrix); using the square matrix `A` makes the call work. Reach for np.linalg whenever you need ranks, inverses, powers or other linear-algebra routines.
###Code
jovian.commit()
###Output
_____no_output_____
###Markdown
Conclusion This notebook walked through five NumPy routines - concatenate, transpose, vstack, hsplit and the linalg module - with working and intentionally failing examples for each. Reference Links* Numpy official tutorial: https://numpy.org/doc/stable/user/quickstart.html* Numpy routines reference: https://numpy.org/doc/stable/reference/routines.html
###Code
jovian.commit()
###Output
_____no_output_____ |
chapter4_agent.ipynb | ###Markdown
Chapter 4: Write your first Agent The Source class The Source is the place where you implement the acquisition of your data. Regardless of the type of data, the framework will always consume a Source the same way. Many generic sources are provided by the framework and it's easy to create a new one. The Source strongly encourages streaming incoming data: whatever the size of the dataset, memory will never be an issue. Example Let's continue with our room sensors.
###Code
# this source provides us with a CSV line each second
from pyngsi.sources.more_sources import SourceSampleOrion
# init the source
src = SourceSampleOrion()
# iterate over the source
for row in src:
print(row)
###Output
_____no_output_____
###Markdown
Here we can see that a row is an instance of a Row class. For the vast majority of the Sources, the provider keeps the same value during the datasource lifetime. We'll go into details in the next chapters. **In practice you won't iterate the Source by hand. The framework will do it for you.** The Agent class Here comes the power of the framework. By using an Agent you delegate the processing of the Source to the framework. Basically an Agent needs a Source for input and a Sink for output. It also needs a function in order to convert incoming rows to NGSI entities. Once the Agent is initialized, you can run it! Let's continue with our rooms.
###Code
from pyngsi.sources.more_sources import SourceSampleOrion
from pyngsi.sink import SinkStdout
from pyngsi.agent import NgsiAgent
# init the source
src = SourceSampleOrion()
# for the convenience of the demo, the sink is the standard output
sink = SinkStdout()
# init the agent
agent = NgsiAgent.create_agent(src, sink)
#run the agent
agent.run()
###Output
_____no_output_____
###Markdown
Here you can see that incoming rows are output 'as is'. That is possible because SinkStdout writes whatever it receives on its input, whereas SinkOrion expects valid NGSI entities. So let's define a conversion function.
###Code
from pyngsi.sources.source import Row
from pyngsi.ngsi import DataModel
def build_entity(row: Row) -> DataModel:
id, temperature, pressure = row.record.split(';')
m = DataModel(id=id, type="Room")
m.add("dataProvider", row.provider)
m.add("temperature", float(temperature))
m.add("pressure", int(pressure))
return m
###Output
_____no_output_____
###Markdown
And use it in the Agent.
###Code
# init the Agent with the conversion function
agent = NgsiAgent.create_agent(src, sink, process=build_entity)
# run the agent
agent.run()
# obtain the statistics
print(agent.stats)
###Output
_____no_output_____
###Markdown
Congratulations! You have developed your first pyngsi Agent! Feel free to try SinkOrion instead of SinkStdout (see the sketch at the end of this chapter). Note that you get statistics for free. Inside your conversion function you can filter input rows just by returning None. For example, if you're not interested in Room3 you could write this function.
###Code
def build_entity(row: Row) -> DataModel:
id, temperature, pressure = row.record.split(';')
if id == "Room3":
return None
m = DataModel(id=id, type="Room")
m.add("dataProvider", row.provider)
m.add("temperature", float(temperature))
m.add("pressure", int(pressure))
return m
agent = NgsiAgent.create_agent(src, sink, process=build_entity)
agent.run()
print(agent.stats)
###Output
_____no_output_____
###Markdown
The side_effect() function As of v1.2.5 the Agent takes an optional `side_effect()` function as an argument. That function allows you to create entities aside from the main flow; a few use cases might need it.
###Code
def side_effect(row, sink, model) -> int:
# we can use as an input the current row or the model returned by our process() function
m = DataModel(id=f"Building:MainBuilding:Room:{model['id']}", type="Room")
sink.write(m.json())
return 1 # number of entities created in the function
agent = NgsiAgent.create_agent(src, sink, process=build_entity, side_effect=side_effect)
agent.run()
print(agent.stats)
###Output
_____no_output_____ |
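###Markdown
To push the same entities to a live Orion Context Broker instead of the console, swap the sink. The sketch below is an assumption-laden example: it assumes `SinkOrion` is importable from `pyngsi.sink` and accepts a hostname and port (the values shown are placeholders for your own broker), and it reuses the `build_entity` function defined above.
###Code
from pyngsi.sources.more_sources import SourceSampleOrion
from pyngsi.sink import SinkOrion
from pyngsi.agent import NgsiAgent

src = SourceSampleOrion()                          # fresh source for a new run
sink = SinkOrion(hostname="localhost", port=1026)  # placeholder host/port: point at your Orion broker
agent = NgsiAgent.create_agent(src, sink, process=build_entity)
agent.run()
print(agent.stats)
###Output
_____no_output_____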
Python/Operators/Bitwise operator.ipynb | ###Markdown
Bitwise operator Bitwise operators work on data at the bit level, treating each operand as a pattern of 0s and 1s; this is sometimes called bit-level programming and is mainly used in numerical code to speed up calculations. Python provides the following bitwise operators: 1. Bitwise AND (syntax: X & Y) 2. Bitwise OR (syntax: X | Y) 3. Bitwise NOT (syntax: ~X) 4. Bitwise XOR (syntax: X ^ Y) 5. Bitwise RIGHT SHIFT (syntax: X >> n) 6. Bitwise LEFT SHIFT (syntax: X << n) Bitwise AND Returns 1 if both bits are 1, else 0. Bitwise OR Returns 1 if either bit is 1, else 0. Bitwise NOT Returns the one's complement of the number. Bitwise XOR Returns 1 (true) if exactly one of the two bits is 1, else 0 (false). Bitwise left shift operator The left shift operator (<<) moves the bits of its first operand to the left by the number of places given by its second operand, inserting enough zero bits to fill the gap that arises on the right edge of the new bit pattern. Bitwise right shift operator The right shift operator (>>) is analogous to the left one, but pushes the bits to the right by the specified number of places; the rightmost bits are always dropped.
###Code
x=20
y=15
#Bitwise AND
print(x&y)
#Bitwise OR
print(x|y)
#Bitwise NOT
print(~x)
#Bitwise XOR
print(x^y)
x=-10
y=7
print(x>>1)
print(y>>1)
x=25
y=-20
print(x<<1)
print(y<<1)
###Output
-5
3
50
-40
|
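###Markdown
To make the bit-level behaviour described above concrete, here is a short added illustration that prints the binary patterns behind a few of the operators (the values are arbitrary examples):
###Code
x, y = 20, 15                               # 20 -> 0b00010100, 15 -> 0b00001111
print(f"x      = {x:08b}")
print(f"y      = {y:08b}")
print(f"x & y  = {x & y:08b} ({x & y})")    # 00000100 -> 4
print(f"x | y  = {x | y:08b} ({x | y})")    # 00011111 -> 31
print(f"x ^ y  = {x ^ y:08b} ({x ^ y})")    # 00011011 -> 27
print(f"x << 1 = {x << 1:08b} ({x << 1})")  # shifting left by 1 doubles: 40
print(f"x >> 1 = {x >> 1:08b} ({x >> 1})")  # shifting right by 1 halves: 10
###Output
_____no_output_____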
Preprocessing_clip_and_raster2points.ipynb | ###Markdown
Raster within quadrats to points (GeoDataFrame) Using GDAL Warp (clipping) and Rioxarray (conversion)
###Code
# Required imports (assumed: the `quadrats` GeoDataFrame and the `rasPath` raster path are defined in earlier cells)
import geopandas as gpd
import pandas as pd
import rioxarray
from osgeo import gdal

# Split quadrats by transect and write as separate shapefiles
quadratsSplit = [gpd.GeoSeries((quadrats[quadrats['Quadrat'].isin(['1_2', '1_3', '1_1'])])['geometry'].unary_union),
gpd.GeoSeries((quadrats[quadrats['Quadrat'].isin(['2_2', '2_7', '2_5', '2_9', '2_4', '2_10', '2_1', '2_3', '2_6', '2_8'])])['geometry'].unary_union),
gpd.GeoSeries((quadrats[quadrats['Quadrat'].isin(['3_3', '3_2', '3_1'])])['geometry'].unary_union)]
for i in range(3):
j = str(i+1)
outPath = f'E:\\Sync\\_Documents\\_Letter_invasives\\_Data\\_quadrats\\{j}.shp'
quadratsSplit[i].to_file(outPath)
# Clip raster using GDAL by transect
# https://gis.stackexchange.com/questions/45053/gdalwarp-cutline-along-with-shapefile
for i in range(1,4):
poly = f'E:\\Sync\\_Documents\\_Letter_invasives\\_Data\\_quadrats\\{i}.shp'
ds = gdal.Warp(f"E:\\Sync\\_Documents\\_Letter_invasives\\_Data\\_quadrats\\clippedQd{i}.tif", rasPath, cropToCutline=True, cutlineDSName=poly, format="GTiff")
ds = None # Close object
# Final conversion: read each clipped raster, turn its bands into columns, and build a points GeoDataFrame
outDfs =[]
for i in range(1,4):
tempDfs = []
for j in range(1,9):
path = "E:\\Sync\\_Documents\\_Letter_invasives\\_Data\\_quadrats\\clippedQd{}.tif"
rds = rioxarray.open_rasterio(path.format(str(i))).sel(band=j)
df = rds.to_dataframe('band'+str(j))
df.drop(columns=['spatial_ref'], axis=1, inplace=True)
df.reset_index(level=['x', 'y'], inplace=True, drop=False)
df.reset_index(inplace=True, drop=True)
tempDfs.append(df) # Conversion
dfConcat = pd.concat([tempDfs[0]['x'], tempDfs[0]['y'], tempDfs[0]['band1'], tempDfs[1]['band2'], tempDfs[2]['band3'], tempDfs[3]['band4'], tempDfs[4]['band5'], tempDfs[5]['band6'], tempDfs[6]['band7'], tempDfs[7]['band8']], axis=1)
dfConcat = dfConcat.loc[dfConcat['band1']>=0,:]
outDfs.append(dfConcat)
outDfs = pd.concat(outDfs, axis=0, ignore_index=True)
gdf = gpd.GeoDataFrame(outDfs, crs='EPSG:26911', geometry=gpd.points_from_xy(outDfs.x, outDfs.y))
gdf.to_file("E:\\Sync\\_Documents\\_Letter_invasives\\_Data\\_quadrats\\rasters2points.shp")
###Output
_____no_output_____ |
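###Markdown
A quick sanity check on the exported layer (an added sketch; the path simply mirrors the one written above):
###Code
# Read the exported points back and inspect the attribute table
check = gpd.read_file("E:\\Sync\\_Documents\\_Letter_invasives\\_Data\\_quadrats\\rasters2points.shp")
print(check.shape)
print(check.head())
###Output
_____no_output_____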
silx/processing/fit/FitWidget.ipynb | ###Markdown
FitWidget FitWidget is a widget to fit curves (1D data) with interactive configuration options, to set constraints, adjust initial estimated parameters... Creating a FitWidget First load the data.
###Code
# opening qt widgets in a Jupyter notebook
%gui qt
# in a regular terminal, run the following 2 lines:
# from silx.gui import qt
# app = qt.QApplication([])
#%pylab inline
import silx.io
specfile = silx.io.open("data/31oct98.dat")
xdata = specfile["/22.1/measurement/TZ3"]
ydata = specfile["/22.1/measurement/If4"]
from silx.gui.plot import Plot1D
plot=Plot1D()
plot.addCurve(x=xdata, y=ydata)
plot.show()
###Output
_____no_output_____
###Markdown
Then create a FitWidget.
###Code
from silx.gui.fit import FitWidget
fw = FitWidget()
fw.setData(x=xdata, y=ydata)
fw.show()
###Output
_____no_output_____
###Markdown
The selection of fit theories and background theories can be done through the interface. Additional configuration parameters can be set in a dialog, by clicking the configure button, to alter the behavior of the estimation function (peak search parameters) or to set global constraints. When the configuration is done, click the Estimate button. Now you may change individual constraints or adjust initial estimated parameters. You can also add peaks by selecting *Add* in the dropdown list in the *Constraints* column of any parameter, or reduce the number of peaks by selecting *Ignore*. When you are happy with the estimated parameters and the constraints, you can click the "Fit" button. At the end of the fit process, you can again adjust the constraints and estimated parameters, and fit again. Only click "Estimate" if you want to reset the estimation and all constraints (this will overwrite all adjustments you have done). Open the FitWidget through a PlotWindow Rather than instantiating your own FitWidget and loading the data into it, you can just select a curve and click the fit icon inside a PlotWindow or a Plot1D widget. A Plot1D always has the fit icon available, but for a PlotWindow you must specify an option `fit=True` when instantiating the widget.
###Code
from silx.gui.plot import PlotWindow
pw = PlotWindow(fit=True, control=True)
pw.addCurve(x=xdata, y=ydata)
pw.show()
###Output
_____no_output_____
###Markdown
A FitWidget opened through a PlotWindow is connected to the plot and will display the fit results in the PlotWindow, which is great for comparing the fit against the original data. Exercise Write a cubic polynomial function $y= ax^3 + bx^2 + cx + d$ and its corresponding estimation function (use $a=1, b=1, c=1, d=1$ for initial estimated parameters). Generate synthetic data. Create a FitWidget and add a cubic polynomial function to the dropdown list. Test it on the synthetic data. Polynomial function Tips: - Read the documentation for the module ``silx.math.fit.fittheory`` to use the correct signature for the polynomial and for the estimation functions. - Read the documentation for the module ``silx.math.fit.leastsq`` to use the correct format for constraints. Disable all constraints (set them to FREE). Links: - http://pythonhosted.org/silx/modules/math/fit/fittheory.html#silx.math.fit.fittheory.FitTheory.function - http://pythonhosted.org/silx/modules/math/fit/fittheory.html#silx.math.fit.fittheory.FitTheory.estimate - http://pythonhosted.org/silx/modules/math/fit/leastsq.html#silx.math.fit.leastsq
###Code
# fill-in the blanks
def cubic_poly(x, ...):
"""y = a*x^3 + b*x^2 + c*x +d
:param x: numpy array of abscissa data
:return: numpy array of y values
"""
return ...
def estimate_cubic_params(...):
initial_params = ...
constraints = ...
return initial_params, constraints
###Output
_____no_output_____
###Markdown
Synthetic dataTip: use the `cubic_poly` function
###Code
import numpy
x = numpy.linspace(0, 100, 250)
a, b, c, d = 0.02, -2.51, 76.76, 329.14
y = ...
###Output
_____no_output_____
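###Markdown
One possible completion (the Gaussian noise term is an addition of my own to make the fit non-trivial; its scale is arbitrary):
###Code
y = cubic_poly(x, a, b, c, d) + numpy.random.normal(0, 100, x.shape)
###Output
_____no_output_____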
###Markdown
FitWidget with custom functionTips:- you need to define a customized FitManager to initialize a FitWidget with custom functions Doc:- http://www.silx.org/doc/silx/dev/modules/math/fit/fitmanager.htmlsilx.math.fit.fitmanager.FitManager.addtheory
###Code
%gui qt
#from silx.gui import qt
from silx.gui.fit import FitWidget
from silx.math.fit import FitManager
# Uncomment this line if not in a jupyter notebook
# a = qt.QApplication([])
...
fitwidget = FitWidget(...)
fitwidget.setData(x=x, y=y)
fitwidget.show()
###Output
_____no_output_____ |
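###Markdown
A possible completion of the custom-function setup, reusing the cubic_poly and estimate_cubic_params sketched earlier. It assumes that `addtheory` accepts the `function`, `parameters` and `estimate` keywords shown in the FitManager documentation linked above, and that the manager is passed to the widget through the `fitmngr` argument - check the documentation of your silx version if a keyword differs.
###Code
fit = FitManager()
fit.addtheory("cubic polynomial",
              function=cubic_poly,
              parameters=["a", "b", "c", "d"],
              estimate=estimate_cubic_params)

fitwidget = FitWidget(fitmngr=fit)
fitwidget.setData(x=x, y=y)
fitwidget.show()
###Output
_____no_output_____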
src/e_full_10sec_100hosts.ipynb | ###Markdown
Dataset E: 100 hosts sample (among 4,626 hosts) for all dates Dask Setup
###Code
#
# workers x memory_per_worker <= available memory
# threads per worker == 1 if workload is CPU intensive
# dashboard port might need to change if running multiple dask instances within lab
#
# Sizing below is based on the basic jupyterlab environment provided by https://jupyter.olcf.ornl.gov
#
WORKERS = 16
MEMORY_PER_WORKER = "2GB"
THREADS_PER_WORKER = 1
DASHBOARD_PORT = ":8787"
###Output
_____no_output_____
###Markdown
Local Dask cluster setup* Install bokeh, spawn cluster, provide access point to dashboards* Access jupyter hub at the address - https://jupyter.olcf.ornl.gov/hub/user-redirect/proxy/8787/status")* Or access point for the Dask jupyter extension - /proxy/8787
###Code
# General prerequisites we want to have loaded from the get go
!pip install bokeh loguru
# Cleanup
try:
client.shutdown()
client.close()
except Exception as e:
pass
# Setup block
import os
import pwd
import glob
import pandas as pd
from distributed import LocalCluster, Client
import dask
import dask.dataframe as dd
#LOCALDIR = "/gpfs/alpine/stf218/scratch/shinw/.tmp/dask-interactive"
LOCALDIR = "/tmp/dask"
dask.config.set({'worker.memory': {'target': False, 'spill': False, 'pause': 0.8, 'terminate': 0.95}})
#dask.config.config
# Cluster creation
cluster = LocalCluster(processes=True, n_workers=WORKERS, threads_per_worker=THREADS_PER_WORKER,
dashboard_address=DASHBOARD_PORT, local_directory=LOCALDIR,
memory_limit=MEMORY_PER_WORKER)
client = Client(cluster)
cluster
print("Access jupyter hub at the address - https://jupyter.olcf.ornl.gov/hub/user-redirect/proxy/8787/status")
print("Dask jupyter extension - /proxy/8787")
client
###Output
/opt/conda/lib/python3.8/site-packages/distributed/node.py:160: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 34953 instead
warnings.warn(
###Markdown
Preloading tools & libraries
###Code
import sys
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
print("seaborn version: {}".format(sns.__version__))
print("Python version:\n{}\n".format(sys.version))
print("matplotlib version: {}".format(matplotlib.__version__))
print("pandas version: {}".format(pd.__version__))
print("numpy version: {}".format(np.__version__))
###Output
seaborn version: 0.11.2
Python version:
3.8.10 | packaged by conda-forge | (default, May 11 2021, 07:01:05)
[GCC 9.3.0]
matplotlib version: 3.4.2
pandas version: 1.3.1
numpy version: 1.19.5
###Markdown
File locations
###Code
DATA_BASE_PATH = "../data"
INPUT_FILES = f"{DATA_BASE_PATH}/powtemp_10sec_mean/**/*.parquet"
INPUT_PATH = f"{DATA_BASE_PATH}/powtemp_10sec_mean"
OUTPUT_PATH = f"{DATA_BASE_PATH}/e_full_10sec_100hosts"
!ls {INPUT_FILES}
###Output
../data/powtemp_10sec_mean/202001/20200101.parquet
../data/powtemp_10sec_mean/202001/20200102.parquet
../data/powtemp_10sec_mean/202001/20200103.parquet
../data/powtemp_10sec_mean/202001/20200106.parquet
../data/powtemp_10sec_mean/202001/20200107.parquet
../data/powtemp_10sec_mean/202001/20200108.parquet
../data/powtemp_10sec_mean/202001/20200109.parquet
../data/powtemp_10sec_mean/202001/20200110.parquet
../data/powtemp_10sec_mean/202001/20200111.parquet
../data/powtemp_10sec_mean/202001/20200112.parquet
../data/powtemp_10sec_mean/202001/20200113.parquet
../data/powtemp_10sec_mean/202001/20200114.parquet
../data/powtemp_10sec_mean/202001/20200115.parquet
../data/powtemp_10sec_mean/202001/20200116.parquet
../data/powtemp_10sec_mean/202001/20200117.parquet
../data/powtemp_10sec_mean/202001/20200118.parquet
../data/powtemp_10sec_mean/202001/20200119.parquet
../data/powtemp_10sec_mean/202001/20200120.parquet
../data/powtemp_10sec_mean/202001/20200121.parquet
../data/powtemp_10sec_mean/202001/20200122.parquet
../data/powtemp_10sec_mean/202001/20200123.parquet
../data/powtemp_10sec_mean/202001/20200124.parquet
../data/powtemp_10sec_mean/202001/20200125.parquet
../data/powtemp_10sec_mean/202001/20200126.parquet
../data/powtemp_10sec_mean/202001/20200127.parquet
../data/powtemp_10sec_mean/202001/20200128.parquet
../data/powtemp_10sec_mean/202001/20200129.parquet
../data/powtemp_10sec_mean/202001/20200131.parquet
../data/powtemp_10sec_mean/202008/20200801.parquet
../data/powtemp_10sec_mean/202008/20200802.parquet
../data/powtemp_10sec_mean/202008/20200803.parquet
../data/powtemp_10sec_mean/202008/20200804.parquet
../data/powtemp_10sec_mean/202008/20200805.parquet
../data/powtemp_10sec_mean/202008/20200806.parquet
../data/powtemp_10sec_mean/202008/20200807.parquet
../data/powtemp_10sec_mean/202008/20200808.parquet
../data/powtemp_10sec_mean/202008/20200809.parquet
../data/powtemp_10sec_mean/202008/20200810.parquet
../data/powtemp_10sec_mean/202008/20200811.parquet
../data/powtemp_10sec_mean/202008/20200812.parquet
../data/powtemp_10sec_mean/202008/20200813.parquet
../data/powtemp_10sec_mean/202008/20200814.parquet
../data/powtemp_10sec_mean/202008/20200815.parquet
../data/powtemp_10sec_mean/202008/20200816.parquet
../data/powtemp_10sec_mean/202008/20200817.parquet
../data/powtemp_10sec_mean/202008/20200818.parquet
../data/powtemp_10sec_mean/202008/20200819.parquet
../data/powtemp_10sec_mean/202008/20200820.parquet
../data/powtemp_10sec_mean/202008/20200821.parquet
../data/powtemp_10sec_mean/202008/20200822.parquet
../data/powtemp_10sec_mean/202008/20200823.parquet
../data/powtemp_10sec_mean/202008/20200824.parquet
../data/powtemp_10sec_mean/202008/20200825.parquet
../data/powtemp_10sec_mean/202008/20200826.parquet
../data/powtemp_10sec_mean/202008/20200827.parquet
../data/powtemp_10sec_mean/202008/20200828.parquet
../data/powtemp_10sec_mean/202008/20200829.parquet
../data/powtemp_10sec_mean/202008/20200830.parquet
../data/powtemp_10sec_mean/202008/20200831.parquet
../data/powtemp_10sec_mean/202102/20210201.parquet
../data/powtemp_10sec_mean/202102/20210202.parquet
../data/powtemp_10sec_mean/202102/20210203.parquet
../data/powtemp_10sec_mean/202102/20210204.parquet
../data/powtemp_10sec_mean/202102/20210205.parquet
../data/powtemp_10sec_mean/202102/20210206.parquet
../data/powtemp_10sec_mean/202102/20210207.parquet
../data/powtemp_10sec_mean/202102/20210208.parquet
../data/powtemp_10sec_mean/202102/20210209.parquet
../data/powtemp_10sec_mean/202102/20210210.parquet
../data/powtemp_10sec_mean/202102/20210211.parquet
../data/powtemp_10sec_mean/202102/20210212.parquet
../data/powtemp_10sec_mean/202102/20210213.parquet
../data/powtemp_10sec_mean/202102/20210214.parquet
../data/powtemp_10sec_mean/202102/20210215.parquet
../data/powtemp_10sec_mean/202102/20210216.parquet
../data/powtemp_10sec_mean/202102/20210217.parquet
../data/powtemp_10sec_mean/202102/20210218.parquet
../data/powtemp_10sec_mean/202102/20210219.parquet
../data/powtemp_10sec_mean/202102/20210220.parquet
../data/powtemp_10sec_mean/202102/20210221.parquet
../data/powtemp_10sec_mean/202102/20210222.parquet
../data/powtemp_10sec_mean/202102/20210223.parquet
../data/powtemp_10sec_mean/202102/20210224.parquet
../data/powtemp_10sec_mean/202102/20210225.parquet
../data/powtemp_10sec_mean/202102/20210226.parquet
../data/powtemp_10sec_mean/202102/20210227.parquet
../data/powtemp_10sec_mean/202102/20210228.parquet
../data/powtemp_10sec_mean/202108/20210801.parquet
../data/powtemp_10sec_mean/202108/20210802.parquet
../data/powtemp_10sec_mean/202108/20210803.parquet
../data/powtemp_10sec_mean/202108/20210804.parquet
../data/powtemp_10sec_mean/202108/20210805.parquet
../data/powtemp_10sec_mean/202108/20210806.parquet
../data/powtemp_10sec_mean/202108/20210807.parquet
../data/powtemp_10sec_mean/202108/20210808.parquet
../data/powtemp_10sec_mean/202108/20210809.parquet
../data/powtemp_10sec_mean/202108/20210810.parquet
../data/powtemp_10sec_mean/202108/20210811.parquet
../data/powtemp_10sec_mean/202108/20210812.parquet
../data/powtemp_10sec_mean/202108/20210813.parquet
../data/powtemp_10sec_mean/202108/20210814.parquet
../data/powtemp_10sec_mean/202108/20210815.parquet
../data/powtemp_10sec_mean/202108/20210816.parquet
../data/powtemp_10sec_mean/202108/20210824.parquet
../data/powtemp_10sec_mean/202108/20210825.parquet
../data/powtemp_10sec_mean/202108/20210826.parquet
../data/powtemp_10sec_mean/202108/20210827.parquet
../data/powtemp_10sec_mean/202108/20210828.parquet
../data/powtemp_10sec_mean/202108/20210829.parquet
../data/powtemp_10sec_mean/202108/20210830.parquet
../data/powtemp_10sec_mean/202108/20210831.parquet
../data/powtemp_10sec_mean/202201/20220101.parquet
../data/powtemp_10sec_mean/202201/20220102.parquet
../data/powtemp_10sec_mean/202201/20220103.parquet
../data/powtemp_10sec_mean/202201/20220104.parquet
../data/powtemp_10sec_mean/202201/20220105.parquet
../data/powtemp_10sec_mean/202201/20220106.parquet
../data/powtemp_10sec_mean/202201/20220107.parquet
../data/powtemp_10sec_mean/202201/20220108.parquet
../data/powtemp_10sec_mean/202201/20220109.parquet
../data/powtemp_10sec_mean/202201/20220110.parquet
../data/powtemp_10sec_mean/202201/20220111.parquet
../data/powtemp_10sec_mean/202201/20220112.parquet
../data/powtemp_10sec_mean/202201/20220113.parquet
../data/powtemp_10sec_mean/202201/20220114.parquet
../data/powtemp_10sec_mean/202201/20220115.parquet
../data/powtemp_10sec_mean/202201/20220116.parquet
../data/powtemp_10sec_mean/202201/20220117.parquet
../data/powtemp_10sec_mean/202201/20220118.parquet
../data/powtemp_10sec_mean/202201/20220119.parquet
../data/powtemp_10sec_mean/202201/20220120.parquet
../data/powtemp_10sec_mean/202201/20220121.parquet
../data/powtemp_10sec_mean/202201/20220122.parquet
../data/powtemp_10sec_mean/202201/20220123.parquet
../data/powtemp_10sec_mean/202201/20220124.parquet
../data/powtemp_10sec_mean/202201/20220125.parquet
../data/powtemp_10sec_mean/202201/20220126.parquet
../data/powtemp_10sec_mean/202201/20220127.parquet
../data/powtemp_10sec_mean/202201/20220128.parquet
../data/powtemp_10sec_mean/202201/20220129.parquet
../data/powtemp_10sec_mean/202201/20220130.parquet
../data/powtemp_10sec_mean/202201/20220131.parquet
###Markdown
Schema GlobalsSchema related global variables
###Code
# Developing a COLUMN filter we can use to process the data
RAW_COLUMN_FILTER = [
# Meta information
'timestamp',
'node_state',
'hostname',
# Node input power (power supply)
'ps0_input_power',
'ps1_input_power',
# Power consumption (Watts)
# - GPU power
'p0_gpu0_power',
'p0_gpu1_power',
'p0_gpu2_power',
'p1_gpu0_power',
'p1_gpu1_power',
'p1_gpu2_power',
# - CPU power
'p0_power',
'p1_power',
# Thermal (Celcius)
# - V100 core temperature
'gpu0_core_temp',
'gpu1_core_temp',
'gpu2_core_temp',
'gpu3_core_temp',
'gpu4_core_temp',
'gpu5_core_temp',
# - V100 mem temperature (HBM memory)
'gpu0_mem_temp',
'gpu1_mem_temp',
'gpu2_mem_temp',
'gpu3_mem_temp',
'gpu4_mem_temp',
'gpu5_mem_temp',
# - CPU core temperatures
'p0_core0_temp',
'p0_core1_temp',
'p0_core2_temp',
'p0_core3_temp',
'p0_core4_temp',
'p0_core5_temp',
'p0_core6_temp',
'p0_core7_temp',
'p0_core8_temp',
'p0_core9_temp',
'p0_core10_temp',
'p0_core11_temp',
'p0_core12_temp',
'p0_core14_temp',
'p0_core15_temp',
'p0_core16_temp',
'p0_core17_temp',
'p0_core18_temp',
'p0_core19_temp',
'p0_core20_temp',
'p0_core21_temp',
'p0_core22_temp',
'p0_core23_temp',
'p1_core0_temp',
'p1_core1_temp',
'p1_core2_temp',
'p1_core3_temp',
'p1_core4_temp',
'p1_core5_temp',
'p1_core6_temp',
'p1_core7_temp',
'p1_core8_temp',
'p1_core9_temp',
'p1_core10_temp',
'p1_core11_temp',
'p1_core12_temp',
'p1_core14_temp',
'p1_core15_temp',
'p1_core16_temp',
'p1_core17_temp',
'p1_core18_temp',
'p1_core19_temp',
'p1_core20_temp',
'p1_core21_temp',
'p1_core22_temp',
'p1_core23_temp',
]
# Column lists we actually end up using
COLS = [
# Meta information
'timestamp',
'node_state',
'hostname',
# Node input power (power supply)
'ps0_input_power',
'ps1_input_power',
# Power consumption (Watts)
# - GPU power
'p0_gpu0_power',
'p0_gpu1_power',
'p0_gpu2_power',
'p1_gpu0_power',
'p1_gpu1_power',
'p1_gpu2_power',
# - CPU power
'p0_power',
'p1_power',
# Thermal (Celcius)
# - V100 core temperature
'gpu0_core_temp',
'gpu1_core_temp',
'gpu2_core_temp',
'gpu3_core_temp',
'gpu4_core_temp',
'gpu5_core_temp',
# - V100 mem temperature (HBM memory)
'gpu0_mem_temp',
'gpu1_mem_temp',
'gpu2_mem_temp',
'gpu3_mem_temp',
'gpu4_mem_temp',
'gpu5_mem_temp',
]
# Columns in order to calculate the row-wise min,max,mean
P0_CORES = ["p0_core0_temp",
"p0_core1_temp",
"p0_core2_temp",
"p0_core3_temp",
"p0_core4_temp",
"p0_core5_temp",
"p0_core6_temp",
"p0_core7_temp",
"p0_core8_temp",
"p0_core9_temp",
"p0_core10_temp",
"p0_core11_temp",
"p0_core12_temp",
#"p0_core13_temp",
"p0_core14_temp",
"p0_core15_temp",
"p0_core16_temp",
"p0_core17_temp",
"p0_core18_temp",
"p0_core19_temp",
"p0_core20_temp",
"p0_core21_temp",
"p0_core22_temp",
"p0_core23_temp"]
P1_CORES = ["p1_core0_temp",
"p1_core1_temp",
"p1_core2_temp",
"p1_core3_temp",
"p1_core4_temp",
"p1_core5_temp",
"p1_core6_temp",
"p1_core7_temp",
"p1_core8_temp",
"p1_core9_temp",
"p1_core10_temp",
"p1_core11_temp",
"p1_core12_temp",
#"p1_core13_temp",
"p1_core14_temp",
"p1_core15_temp",
"p1_core16_temp",
"p1_core17_temp",
"p1_core18_temp",
"p1_core19_temp",
"p1_core20_temp",
"p1_core21_temp",
"p1_core22_temp",
"p1_core23_temp"]
###Output
_____no_output_____
###Markdown
Sampling & coarsening the data and creating a sampled dataset Utilize the map_partitions feature and create a few samples from the 4,626 nodes in 1 minute increments. We also try to randomize across the partitions to reduce the amount of I/O.
###Code
# Definition of the whole pipeline
import os
import shutil
import random
import glob
def find_work_to_do(output_path, input_path):
return [
os.path.basename(file).split(".")[0]
for file in sorted(glob.glob(f"{input_path}/**/*.parquet"))
if not os.access(
os.path.join(
output_path, os.path.basename(file)
), os.F_OK
)
]
def handle_part(df):
# Aggregate core temp
df['p0_temp_max'] = df.loc[:,tuple(P0_CORES)].max(axis=1)
df['p0_temp_min'] = df.loc[:,tuple(P0_CORES)].min(axis=1)
df['p0_temp_mean'] = df.loc[:,tuple(P0_CORES)].mean(axis=1)
df['p1_temp_max'] = df.loc[:,tuple(P1_CORES)].max(axis=1)
df['p1_temp_min'] = df.loc[:,tuple(P1_CORES)].min(axis=1)
df['p1_temp_mean'] = df.loc[:,tuple(P1_CORES)].mean(axis=1)
COL_LIST = COLS + ['p0_temp_max', 'p0_temp_mean', 'p0_temp_min', 'p1_temp_max', 'p1_temp_mean', 'p1_temp_min']
return df.loc[:, tuple(COL_LIST)]
def sample_hosts(output_path, input_path, hostnames=[], nhosts=1):
# Limiting the # of files
work_to_do = find_work_to_do(output_path, input_path)
print(work_to_do)
# Get random hostnames
if hostnames == []:
files = sorted(glob.glob(f"{input_path}/**/*.parquet"))
ddf = dd.read_parquet(
files[0],
index=False,
columns=RAW_COLUMN_FILTER,
engine="pyarrow",
split_row_groups=True,
gather_statistics=True)
df = ddf.get_partition(0).compute().set_index('hostname')
hostnames = random.sample(df.index.unique().to_list(), nhosts)
del ddf
del df
with open(f"{output_path}/hosts.txt", "w") as f:
for host in sorted(hostnames):
f.write(f"{host}\r\n")
for date_key in work_to_do:
print(f" - sample day working on {date_key}")
month_key = date_key[0:6]
day_input_path = f"{input_path}/{month_key}/{date_key}.parquet"
day_output_path = f"{output_path}/{date_key}.parquet"
print(f"Day output path {day_output_path}")
os.makedirs(os.path.dirname(day_output_path), exist_ok=True)
ddf = dd.read_parquet(
[day_input_path],
index=False,
columns=RAW_COLUMN_FILTER,
engine="pyarrow",
split_row_groups=True,
gather_statistics=True)
# Get only the hosts we are interested
hostname_mask = ddf['hostname'].isin(hostnames)
# Calculate the aggregates and dump the result
df = ddf[hostname_mask].map_partitions(handle_part).compute()
# Sort the day before sending it out
df = df.sort_values(['hostname', 'timestamp'])
# Write to the final file
df.to_parquet(day_output_path, engine="pyarrow")
sample_hosts(OUTPUT_PATH, INPUT_PATH, nhosts=100)
###Output
[]
###Markdown
Testing the output data
###Code
import glob
import pandas as pd
def get_host_dataframe(
input_path = OUTPUT_PATH,
hostnames = [],
months = ["202001", "202008", "202102", "202108", "202201"],
sort_values=["hostname", "timestamp"],
set_index=["hostname"],
columns=None,
):
print(f"[reading time series for {hostnames} during {months}]")
if columns != None:
if "hostname" not in columns:
columns.push("hostname")
if "timestamp" not in columns:
columns.push("timestamp")
# Iterate all the files and fetch data for only the hostnames we're interested
df_list = []
for month in months:
print(f"- reading {month}")
files = sorted(glob.glob(f"{input_path}/{month}*.parquet"))
for file in files:
df = pd.read_parquet(file, engine="pyarrow", columns=columns)
if hostnames != []:
mask = df['hostname'].isin(hostnames)
df_list.append(df[mask])
else:
df_list.append(df)
print("- merging dataframe")
df = pd.concat(df_list).reset_index(drop=True)
print(f"- sorting based on {sort_values}")
if sort_values != []:
df = df.sort_values(sort_values)
if set_index != []:
df = df.set_index(set_index)
print("- read success")
return df
df = get_host_dataframe(hostnames = ['f04n08', 'e34n12'], columns=["timestamp", "hostname", "ps0_input_power", "ps1_input_power"])
df.info()
df
###Output
_____no_output_____ |
004-CNN-Fashion.ipynb | ###Markdown
Convolutional Neural Network - Fashion MNIST CNNs allow us to extract features from an image while maintaining its spatial arrangement. Let's apply this to the Fashion MNIST example.
###Code
import numpy as np
import pandas as pd
import keras
import matplotlib.pyplot as plt
%matplotlib inline
import vis
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense, Dropout
from keras import backend as K
###Output
_____no_output_____
###Markdown
Get Data
###Code
from keras.datasets import fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
x_train.shape, y_train.shape, x_test.shape, y_test.shape
labels = vis.fashion_mnist_label()
###Output
_____no_output_____
###Markdown
**Step 1: Prepare the images and labels** Reshape the data for the convolutional network.
###Code
K.image_data_format()
x_train_conv = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_test_conv = x_test.reshape(x_test.shape[0], 28, 28, 1)
input_shape = (28, 28, 1)
###Output
_____no_output_____
###Markdown
Convert from 'uint8' to 'float32' and normalise the data to (0,1)
###Code
x_train_conv = x_train_conv.astype("float32") / 255
x_test_conv = x_test_conv.astype("float32") / 255
###Output
_____no_output_____
###Markdown
Convert class vectors to binary class matrices
###Code
# convert class vectors to binary class matrices
y_train_class = keras.utils.to_categorical(y_train, 10)
y_test_class = keras.utils.to_categorical(y_test, 10)
###Output
_____no_output_____
###Markdown
Model 1: Simple Convolution **Step 2: Craft the feature transformation and classifier model**
###Code
model_simple_conv = Sequential()
model_simple_conv.add(Conv2D(2, (3, 3), activation ="relu", input_shape=(28, 28, 1)))
model_simple_conv.add(Flatten())
model_simple_conv.add(Dense(100, activation='relu'))
model_simple_conv.add(Dense(10, activation='softmax'))
model_simple_conv.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 26, 26, 2) 20
_________________________________________________________________
flatten_1 (Flatten) (None, 1352) 0
_________________________________________________________________
dense_1 (Dense) (None, 100) 135300
_________________________________________________________________
dense_2 (Dense) (None, 10) 1010
=================================================================
Total params: 136,330
Trainable params: 136,330
Non-trainable params: 0
_________________________________________________________________
###Markdown
**Step 3: Compile and fit the model**
###Code
model_simple_conv.compile(loss='categorical_crossentropy', optimizer="sgd", metrics=['accuracy'])
%%time
output_simple_conv = model_simple_conv.fit(x_train_conv, y_train_class, batch_size=512, epochs=5, verbose=2,
validation_data=(x_test_conv, y_test_class))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
- 20s - loss: 0.8811 - acc: 0.7063 - val_loss: 0.6455 - val_acc: 0.7710
Epoch 2/5
- 25s - loss: 0.7154 - acc: 0.7725 - val_loss: 2.1814 - val_acc: 0.7042
Epoch 3/5
- 25s - loss: 8.1733 - acc: 0.4493 - val_loss: 9.7974 - val_acc: 0.3832
Epoch 4/5
- 22s - loss: 9.0385 - acc: 0.4202 - val_loss: 8.4062 - val_acc: 0.4716
Epoch 5/5
- 22s - loss: 9.7800 - acc: 0.3887 - val_loss: 9.8707 - val_acc: 0.3854
CPU times: user 1min 56s, sys: 5.1 s, total: 2min 1s
Wall time: 1min 54s
###Markdown
**Step 4: Check the performance of the model**
###Code
vis.metrics(output_simple_conv.history)
score = model_simple_conv.evaluate(x_test_conv, y_test_class, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: 9.870681315612792
Test accuracy: 0.3854
###Markdown
**Step 5: Make & Visualise the Prediction**
###Code
predict_classes_conv = model_simple_conv.predict_classes(x_test_conv)
pd.crosstab(y_test, predict_classes_conv)
proba_conv = model_simple_conv.predict_proba(x_test_conv)
i = 0
vis.imshow(x_test[i], labels[y_test[i]]) | vis.predict(proba_conv[i], y_test[i], labels)
###Output
_____no_output_____
###Markdown
Model 2: Convolution + Max Pooling **Step 2: Craft the feature transformation and classifier model**
###Code
model_pooling_conv = Sequential()
model_pooling_conv.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28,28,1)))
model_pooling_conv.add(MaxPooling2D(pool_size=(2, 2)))
model_pooling_conv.add(Conv2D(64, (3, 3), activation='relu'))
model_pooling_conv.add(MaxPooling2D(pool_size=(2, 2)))
model_pooling_conv.add(Flatten())
model_pooling_conv.add(Dense(128, activation='relu'))
model_pooling_conv.add(Dense(10, activation='softmax'))
model_pooling_conv.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_2 (Conv2D) (None, 26, 26, 32) 320
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 13, 13, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 11, 11, 64) 18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 5, 5, 64) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 1600) 0
_________________________________________________________________
dense_3 (Dense) (None, 128) 204928
_________________________________________________________________
dense_4 (Dense) (None, 10) 1290
=================================================================
Total params: 225,034
Trainable params: 225,034
Non-trainable params: 0
_________________________________________________________________
###Markdown
**Step 3: Compile and fit the model**
###Code
model_pooling_conv.compile(loss='categorical_crossentropy', optimizer="sgd", metrics=['accuracy'])
%%time
output_pooling_conv = model_pooling_conv.fit(x_train_conv, y_train_class, batch_size=512, epochs=5, verbose=2,
validation_data=(x_test_conv, y_test_class))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
- 89s - loss: 1.4153 - acc: 0.5399 - val_loss: 1.0333 - val_acc: 0.6821
Epoch 2/5
- 68s - loss: 0.7455 - acc: 0.7268 - val_loss: 0.6891 - val_acc: 0.7457
Epoch 3/5
- 90s - loss: 0.6430 - acc: 0.7596 - val_loss: 0.7239 - val_acc: 0.7280
Epoch 4/5
- 103s - loss: 0.5869 - acc: 0.7807 - val_loss: 0.5805 - val_acc: 0.7880
Epoch 5/5
- 99s - loss: 0.5475 - acc: 0.7957 - val_loss: 0.5970 - val_acc: 0.7657
CPU times: user 9min 24s, sys: 1min 43s, total: 11min 7s
Wall time: 7min 29s
###Markdown
**Step 4: Check the performance of the model**
###Code
vis.metrics(output_pooling_conv.history)
score = model_pooling_conv.evaluate(x_test_conv, y_test_class, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: 0.5969944985389709
Test accuracy: 0.7657
###Markdown
**Step 5: Make & Visualise the Prediction**
###Code
predict_classes_pooling = model_pooling_conv.predict_classes(x_test_conv)
pd.crosstab(y_test, predict_classes_pooling)
proba_pooling = model_pooling_conv.predict_proba(x_test_conv)
i = 5
vis.imshow(x_test[i], labels[y_test[i]]) | vis.predict(proba_pooling[i], y_test[i], labels)
###Output
_____no_output_____
###Markdown
Model 3: Convolution + Max Pooling + Dropout **Step 2: Craft the feature transformation and classifier model**
###Code
model_dropout_conv = Sequential()
model_dropout_conv.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)))
model_dropout_conv.add(MaxPooling2D(pool_size=(2, 2)))
model_dropout_conv.add(Conv2D(64, (3, 3), activation='relu'))
model_dropout_conv.add(MaxPooling2D(pool_size=(2, 2)))
model_dropout_conv.add(Dropout(0.25))
model_dropout_conv.add(Flatten())
model_dropout_conv.add(Dense(128, activation='relu'))
model_dropout_conv.add(Dropout(0.5))
model_dropout_conv.add(Dense(10, activation='softmax'))
model_dropout_conv.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_8 (Conv2D) (None, 26, 26, 32) 320
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 13, 13, 32) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 11, 11, 64) 18496
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 5, 5, 64) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 5, 5, 64) 0
_________________________________________________________________
flatten_4 (Flatten) (None, 1600) 0
_________________________________________________________________
dense_6 (Dense) (None, 128) 204928
_________________________________________________________________
dropout_4 (Dropout) (None, 128) 0
_________________________________________________________________
dense_7 (Dense) (None, 10) 1290
=================================================================
Total params: 225,034
Trainable params: 225,034
Non-trainable params: 0
_________________________________________________________________
###Markdown
**Step 3: Compile and fit the model**
###Code
model_dropout_conv.compile(loss='categorical_crossentropy', optimizer="sgd", metrics=['accuracy'])
%%time
output_dropout_conv = model_dropout_conv.fit(x_train_conv, y_train_class, batch_size=512, epochs=5, verbose=1,
validation_data=(x_test_conv, y_test_class))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
60000/60000 [==============================] - 112s 2ms/step - loss: 0.6454 - acc: 0.7582 - val_loss: 0.5631 - val_acc: 0.7825
Epoch 2/5
60000/60000 [==============================] - 137s 2ms/step - loss: 0.6352 - acc: 0.7639 - val_loss: 0.5547 - val_acc: 0.7872
Epoch 3/5
60000/60000 [==============================] - 137s 2ms/step - loss: 0.6313 - acc: 0.7649 - val_loss: 0.5493 - val_acc: 0.7889
Epoch 4/5
60000/60000 [==============================] - 84s 1ms/step - loss: 0.6239 - acc: 0.7688 - val_loss: 0.5470 - val_acc: 0.7933
Epoch 5/5
60000/60000 [==============================] - 97s 2ms/step - loss: 0.6194 - acc: 0.7717 - val_loss: 0.5416 - val_acc: 0.7945
CPU times: user 10min 15s, sys: 1min 4s, total: 11min 19s
Wall time: 9min 27s
###Markdown
**Step 4: Check the performance of the model**
###Code
vis.metrics(output_dropout_conv.history)
score = model_dropout_conv.evaluate(x_test_conv, y_test_class, verbose=1)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: 0.5416207976818085
Test accuracy: 0.7945
###Markdown
**Step 5: Make & Visualise the Prediction**
###Code
predict_classes_dropout = model_dropout_conv.predict_classes(x_test_conv)
pd.crosstab(y_test, predict_classes_dropout)
proba_dropout = model_dropout_conv.predict_proba(x_test_conv)
vis.imshow(x_test[i], labels[y_test[i]]) | vis.predict(proba_dropout[i], y_test[i], labels)
###Output
_____no_output_____ |
metadata/20190923_chapinhall/publications_export_template.ipynb | ###Markdown
Generating `publications.json` partitions This is a template notebook for generating metadata on publications - most importantly, the linkage between the publication and dataset (datasets are enumerated in `datasets.json`). The process goes as follows: 1. Import CSV with publication-dataset linkages. Your csv should have, at a minimum, the fields (spelled like below): * `dataset` to hold the dataset_ids, and * `title` for the publication title. Update the csv with these field names to ensure this code will run. We read in, dedupe and format the title. 2. Match to `datasets.json` -- alert if a given dataset doesn't exist yet. 3. Generate a list of dicts with publication metadata. 4. Write to a publications.json file. Import CSV containing publication-dataset linkages Set `linkages_path` to the location of the csv containing dataset-publication linkages and read in the csv
###Code
import pandas as pd
import os
import datetime
file_name = 'publication_dataset_linkages.csv'
rcm_subfolder = '20190923_chapinhall'
parent_folder = '/Users/sophierand/RichContextMetadata/metadata'
linkages_path = os.path.join(parent_folder,rcm_subfolder,file_name)
# linkages_path = os.path.join(os.getcwd(),'SNAP_DATA_DIMENSIONS_SEARCH_DEMO.csv')
linkages_csv = pd.read_csv(linkages_path)
###Output
_____no_output_____
###Markdown
Format/clean linkage data - apply `scrub_unicode` to `title` field.
###Code
import unicodedata
def scrub_unicode (text):
"""
try to handle the unicode edge cases encountered in source text,
as best as possible
"""
x = " ".join(map(lambda s: s.strip(), text.split("\n"))).strip()
x = x.replace('“', '"').replace('”', '"')
x = x.replace("‘", "'").replace("’", "'").replace("`", "'")
x = x.replace("`` ", '"').replace("''", '"')
x = x.replace('…', '...').replace("\\u2026", "...")
x = x.replace("\\u00ae", "").replace("\\u2122", "")
x = x.replace("\\u00a0", " ").replace("\\u2022", "*").replace("\\u00b7", "*")
x = x.replace("\\u2018", "'").replace("\\u2019", "'").replace("\\u201a", "'")
x = x.replace("\\u201c", '"').replace("\\u201d", '"')
x = x.replace("\\u20ac", "€")
x = x.replace("\\u2212", " - ") # minus sign
x = x.replace("\\u00e9", "é")
x = x.replace("\\u017c", "ż").replace("\\u015b", "ś").replace("\\u0142", "ł")
x = x.replace("\\u0105", "ą").replace("\\u0119", "ę").replace("\\u017a", "ź").replace("\\u00f3", "ó")
x = x.replace("\\u2014", " - ").replace('–', '-').replace('—', ' - ')
x = x.replace("\\u2013", " - ").replace("\\u00ad", " - ")
x = str(unicodedata.normalize("NFKD", x).encode("ascii", "ignore").decode("utf-8"))
# some content returns text in bytes rather than as a str ?
try:
assert type(x).__name__ == "str"
except AssertionError:
print("not a string?", type(x), x)
return x
###Output
_____no_output_____
###Markdown
Scrub titles of problematic characters, drop nulls and dedupe
###Code
linkages_csv = linkages_csv.loc[pd.notnull(linkages_csv.dataset)].drop_duplicates()
linkages_csv = linkages_csv.loc[pd.notnull(linkages_csv.title)].drop_duplicates()
linkages_csv['title'] = linkages_csv['title'].apply(scrub_unicode)
pub_metadata_fields = ['title']
original_metadata_cols = list(set(linkages_csv.columns.values.tolist()) - set(pub_metadata_fields)-set(['dataset']))
###Output
_____no_output_____
###Markdown
Generate list of dicts of metadata Read in `datasets.json`. Update `datasets_path` to your local.
###Code
import json
datasets_path = '/Users/sophierand/RCDatasets/datasets.json'
with open(datasets_path) as json_file:
datasets = json.load(json_file)
###Output
_____no_output_____
###Markdown
Create a list of dictionaries of publication metadata. `create_pub_dict` iterates through the `linkages_csv` dataframe, splits the `dataset` field (for when multiple datasets are listed), and prints an alert if a dataset isn't listed in `datasets.json` yet and needs to be added to the file.
###Code
def create_pub_dict(linkages_dataframe,datasets):
pub_dict_list = []
for i, r in linkages_dataframe.iterrows():
r['title'] = scrub_unicode(r['title'])
ds_id_list = [f for f in [d.strip() for d in r['dataset'].split(",")] if f not in [""," "]]
for ds in ds_id_list:
check_ds = [b for b in datasets if b['id'] == ds]
if len(check_ds) == 0:
print('dataset {} isnt listed in datasets.json. Please add to file'.format(ds))
required_metadata = r[pub_metadata_fields].to_dict()
required_metadata.update({'datasets':ds_id_list})
pub_dict = required_metadata
if len(original_metadata_cols) > 0:
original_metadata = r[original_metadata_cols].to_dict()
original_metadata.update({'date_added':datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')})
pub_dict.update({'original':original_metadata})
pub_dict_list.append(pub_dict)
return pub_dict_list
###Output
_____no_output_____
###Markdown
Generate publication metadata and export to json
###Code
linkage_list = create_pub_dict(linkages_csv,datasets)
###Output
_____no_output_____
###Markdown
Update `json_pub_path` so that the output file is named `<rcm_subfolder>_publications.json`
###Code
json_pub_path = os.path.join('/Users/sophierand/RCPublications/partitions/',rcm_subfolder+'_publications.json')
with open(json_pub_path, 'w') as outfile:
json.dump(linkage_list, outfile, indent=2)
###Output
_____no_output_____ |
07_trainRegression_OfferCompleted.ipynb | ###Markdown
Load Data
###Code
# After executing the cell above, Drive
# files will be present in "/content/drive/My Drive".
!ls "/content/drive/My Drive/Udacity-MLE-Capstone-Starbucks-data/"
import pandas as pd
import numpy as np
import math
import json
%matplotlib inline
from matplotlib import pyplot as plt
filePath = "/content/drive/My Drive/Udacity-MLE-Capstone-Starbucks-data/"
# read in the json files
portfolio = pd.read_json(filePath+'data/portfolio.json', orient='records', lines=True)
profile = pd.read_json(filePath+'data/profile.json', orient='records', lines=True)
transcript = pd.read_json(filePath+'data/transcript.json', orient='records', lines=True)
# Upload preprocessed data
transcriptTransformRec = pd.read_pickle(filePath+'preprocessedData/transcriptTransformRec_v1.pkl')
profileClean = pd.read_pickle(filePath+'preprocessedData/profileClean_v1.pkl')
portfolioClean = pd.read_pickle(filePath+'preprocessedData/portfolioClean_v1.pkl')
transcriptCleanOld = pd.read_pickle(filePath+'preprocessedData/transcriptClean_v1.pkl')
transcriptTransformRec
# Get indices of completed offers
offerCompIdx = transcriptTransformRec[transcriptTransformRec.offer_completed == 1].index.tolist()
# Statistics describing the reward amounts
transcriptTransformRec[transcriptTransformRec.offer_completed == 1].compTransAmt.describe()
# Statistics describing the reward amounts
transcriptTransformRec[transcriptTransformRec.offer_completed == 1].adjRev.describe()
fig = plt.figure()
ax = transcriptTransformRec[transcriptTransformRec.offer_completed == 1].compTransAmt.plot.hist(bins=1000)
plt.xlabel('Transaction Revenue ($)')
plt.ylabel('Frequency')
plt.title('Distribution of Completed Offer Transaction Revenue')
fig = plt.figure()
ax = transcriptTransformRec[transcriptTransformRec.offer_completed == 1].compTransAmt.plot.hist(bins=1000)
plt.xlabel('Transaction Revenue ($)')
plt.ylabel('Frequency')
plt.title('Distribution of Completed Offer Transaction Revenue')
plt.xlim(0, 60)
fig = plt.figure()
ax = transcriptTransformRec[transcriptTransformRec.offer_completed == 1].adjRev.plot.hist(bins=1000)
plt.xlabel('Completed Offer Revenue Minus Reward Amount ($)')
plt.ylabel('Frequency')
plt.title('Distribution of Transaction Amounts')
###Output
_____no_output_____
###Markdown
Additional Preprocessing
###Code
portfolio
from sklearn import preprocessing
def normalizePorfolio(df):
    # Min-max scale each column to the [0, 1] range
    normalized_df = (df - df.min()) / (df.max() - df.min())
    return normalized_df
normalizeColumns = ['difficulty', 'duration', 'reward']
normalizedPorfolio = normalizePorfolio(portfolioClean[normalizeColumns])
normalizedPorfolio
# Create cleaned and normalized 'profile' dataset
portfolioClean[normalizeColumns] = normalizedPorfolio[normalizeColumns]
portfolioClean
###Output
_____no_output_____
###Markdown
Merge Data
###Code
def merge_data(portfolio,profile,transcript):
"""
Merge cleaned data frames for EDA
Parameters
----------
portfolio : cleaned portfolio data frame
profile : cleaned profile data frame
transcript : cleaned transcript data frame
Returns
-------
merged_df: merged data frame
"""
#merged_df = pd.merge(transcript, profile, on='customer_id')
merged_df = pd.merge(portfolio, transcript, on='offer_id')
merged_df = pd.merge(merged_df, profile, on='customer_id')
return merged_df
merged_df = merge_data(portfolioClean,profileClean,transcriptTransformRec)
merged_df.head(5)
mergedTrain_df = merged_df.loc[(merged_df['offer_completed'] == 1) & (merged_df['offerTrans']< 50)].copy()
mergedTrain_df
###Output
_____no_output_____
###Markdown
Drop columns not needed for training
###Code
# Get target variable for training
y = mergedTrain_df['adjRev'].copy()
print(mergedTrain_df.columns)
# Rename 'reward_x' (created by the merge) back to 'reward'
mergedTrain_df.rename(columns={'reward_x': 'reward'}, inplace=True)
# Drop columns not needed for training
mergedTrain_df.drop(['offer_id', 'offerType', 'customer_id', 'event', 'amount', 'reward_y', 'time_days', 'joinDate', 'offer_completed',
'offer_viewed', 'offer_compViewed', 'offer_compNotViewed',
'compTransAmt', 'rewardReceived', 'adjRev', 'offerTrans'], axis=1, inplace=True)
X = mergedTrain_df.copy()
# View mergedTrain_df
mergedTrain_df
# Save off data
X.to_pickle(filePath+'dataOfferCompAdjRevX.pkl')
y.to_pickle(filePath+'dataOfferCompAdjRevY.pkl')
###Output
_____no_output_____
###Markdown
Define target and feature data Preparing and splitting the data
###Code
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
min_max_scaler = preprocessing.MinMaxScaler()
X = min_max_scaler.fit_transform(X)
# We split the dataset into 80% training and 20% testing sets (test_size=0.2).
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, shuffle=True)
# Then we split the training set further into 80% training and 20% validation sets.
X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size=0.2, shuffle=True)
X_train
Y_train
###Output
_____no_output_____
###Markdown
Train model Train Model using a gradient boosting algorithm The objective of this model is to use transaction, customer, and offer characteristic data to predict the adjusted revenue (transaction amount minus reward) of completed offers.
###Code
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier
from xgboost import XGBRegressor
from sklearn.model_selection import cross_val_score, KFold
from sklearn.metrics import mean_squared_error
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import plot_confusion_matrix
from sklearn.metrics import roc_curve
from sklearn.metrics import RocCurveDisplay
from sklearn.metrics import r2_score
from sklearn.metrics import max_error
from sklearn.metrics import explained_variance_score
# Skikit learn gradient boosting classifier
#clf = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
# max_depth=1, random_state=0)
# XGBoost Classifier
clf = XGBRegressor()
clf.learning_rate = 0.1
clf.n_estimators = 500
clf.objective = 'reg:squarederror'
clf.fit(X_train, Y_train)
clf.score(X_test, Y_test)
scores = cross_val_score(clf, X_train, Y_train,cv=10)
print("Mean cross-validation score: %.2f" % scores.mean())
# Calculate Mean Squared Error (MSE)
ypred = clf.predict(X_test)
mse = mean_squared_error(Y_test, ypred)
print("MSE: %.2f" % mse)
# Output: MSE: 3.35
print("RMSE: %.2f" % (mse**(1/2.0)))
# Output: RMSE: 1.83
fig1 = plt.figure()
x_ax = range(len(Y_test))
plt.plot(x_ax, Y_test, label="original")
plt.plot(x_ax, ypred, label="predicted")
plt.title("Test and predicted data")
plt.xlim(0, 100)
plt.legend()
plt.show()
fig2 = plt.figure()
x_ax = range(len(Y_test))
plt.plot(x_ax, Y_test, label="original")
plt.plot(x_ax, ypred, label="predicted")
plt.title("Test and predicted data")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Model Validation
###Code
def modelReg_eval(model, X_train, Y_train, X_test, Y_test, X_val, Y_val):
    '''Function to evaluate the performance of a regression model.'''
    print("Model Evaluation:\n")
    print(model)
    print('\n')
    # Note: for regressors, model.score() returns the R^2 coefficient of determination
    print("Accuracy score (training): {0:.3f}".format(model.score(X_train, Y_train)))
    print("Accuracy score (test): {0:.3f}".format(model.score(X_test, Y_test)))
    print("Accuracy score (validation): {0:.3f}".format(model.score(X_val, Y_val)))
    print('\n')
    y_pred = model.predict(X_val)
    ypred = y_pred
    # Compute mean cross-validation score on the validation set
    scores = cross_val_score(model, X_val, Y_val, cv=10)
    print("Mean cross-validation score: %.2f" % scores.mean())
    print("R2 score: %.2f" % r2_score(Y_val, y_pred))
    print("Max error: %.2f" % max_error(Y_val, y_pred))
    print("Explained Variance Score: %.2f" % explained_variance_score(Y_val, y_pred))
    print('\n')
    # Compute Mean Squared Error (MSE) and Root Mean Squared Error (RMSE)
    mse = mean_squared_error(Y_val, ypred)
    print("MSE: %.2f" % mse)
    print("RMSE: %.2f" % (mse**(1/2.0)))
    print('\n')
    # Plot the first 100 validation samples against their predictions
    fig1 = plt.figure()
    x_ax = range(len(Y_val))
    plt.plot(x_ax, Y_val, label="original")
    plt.plot(x_ax, ypred, label="predicted")
    plt.title("Validation and predicted data (first 100 samples)")
    plt.xlim(0, 100)
    plt.legend()
    plt.show()
    print('\n')
    # Plot all validation samples against their predictions
    fig2 = plt.figure()
    x_ax = range(len(Y_val))
    plt.plot(x_ax, Y_val, label="original")
    plt.plot(x_ax, ypred, label="predicted")
    plt.title("Validation and predicted data")
    plt.legend()
    plt.show()
modelReg_eval(clf, X_train, Y_train, X_test, Y_test, X_val, Y_val)
###Output
_____no_output_____
###Markdown
Plot Training Deviance
###Code
print(clf)
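# The heading above calls for a training-deviance plot, but print(clf) only shows the
# model's parameters. A minimal sketch (an assumption, not part of the original
# notebook): re-fit a copy of the model with eval_set so XGBoost records per-iteration
# RMSE on the training and validation sets, then plot the two curves.
import matplotlib.pyplot as plt
clf_curve = XGBRegressor(learning_rate=0.1, n_estimators=500, objective='reg:squarederror')
clf_curve.fit(X_train, Y_train,
              eval_set=[(X_train, Y_train), (X_val, Y_val)],
              verbose=False)
results = clf_curve.evals_result()  # e.g. {'validation_0': {'rmse': [...]}, 'validation_1': {'rmse': [...]}}
plt.figure()
plt.plot(results['validation_0']['rmse'], label='training RMSE')
plt.plot(results['validation_1']['rmse'], label='validation RMSE')
plt.xlabel('Boosting iteration')
plt.ylabel('RMSE')
plt.title('Training and validation deviance')
plt.legend()
plt.show()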
###Output
_____no_output_____
###Markdown
Plot feature importance of the XGBoost Regressor Model
###Code
from xgboost import plot_importance
from matplotlib import pyplot
# plot feature importance
plot_importance(clf)
pyplot.show()
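# plot_importance labels features as f0, f1, ... because MinMaxScaler turned X into a
# plain numpy array with no column names. A minimal sketch (assuming mergedTrain_df
# still holds the training columns in the same order as X) to map the scores back to
# readable names:
feature_names = list(mergedTrain_df.columns)
importance = clf.get_booster().get_score(importance_type='weight')
named_importance = {feature_names[int(k[1:])]: v for k, v in importance.items()}
print(sorted(named_importance.items(), key=lambda kv: kv[1], reverse=True))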
###Output
_____no_output_____
###Markdown
Evaluate Regression Performance on Adjusted Revenue Using the k-Nearest Neighbors (kNN) Algorithm
###Code
from sklearn.neighbors import KNeighborsRegressor
# Create KNN model
kNN = KNeighborsRegressor(n_neighbors=30)
# Train KNN model
kNN.fit(X_train, Y_train)
# Evaluate KNN model performance
modelReg_eval(kNN, X_train, Y_train, X_test, Y_test, X_val, Y_val)
# Follows example from https://www.tutorialspoint.com/scikit_learn/scikit_learn_kneighbors_classifier.htm
# to evaluate the best value of k
from sklearn import metrics
k_range = range(1,31)
scores = {}
scores_list = []
for k in k_range:
    classifier = KNeighborsRegressor(n_neighbors=k)
    classifier.fit(X_train, Y_train)
    y_pred = classifier.predict(X_test)
    scores[k] = r2_score(Y_test, y_pred)
    scores_list.append(r2_score(Y_test, y_pred))
    # Confusion matrix / classification report only apply to classifiers, so they are left disabled here:
    #result = metrics.confusion_matrix(Y_test, y_pred)
    #print("Confusion Matrix:")
    #print(result)
    #result1 = metrics.classification_report(Y_test, y_pred)
    #print("Classification Report:",)
    #print(result1)
# Plot data
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(k_range,scores_list)
plt.xlabel("Value of K")
plt.ylabel("R2 score (test set)")
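# A small follow-up sketch (not in the original notebook): pick the k that achieved
# the highest test-set R^2 in the sweep above.
best_k = max(scores, key=scores.get)
print("Best k: %d (R2 = %.3f)" % (best_k, scores[best_k]))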
###Output
_____no_output_____
###Markdown
Train a baseline model using a support vector machine (SVM) regression algorithm
###Code
from sklearn import svm
# Create a SVM regressor with linear kernel
clfSVM = svm.SVR(kernel='linear')
clfSVM.fit(X_train, Y_train)
modelReg_eval(clfSVM, X_train, Y_train, X_test, Y_test, X_val, Y_val)
###Output
Model Evaluation:
SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma='scale',
kernel='linear', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
Accuracy score (training): 0.419
Accuracy score (test): 0.421
Accuracy score (validation): 0.397
Mean cross-validation score: 0.40
R2 score: 0.40
Max error: 30.81
Explained Variance Score: 0.41
MSE: 36.84
RMSE: 6.07
###Markdown
Train a baseline linear model fitted by minimizing a regularized empirical loss with Stochastic Gradient Descent (SGD)
###Code
from sklearn.linear_model import SGDRegressor
# Create a linear model fitted by minimizing a regularized empirical loss with Stochastic Gradient Descent (SGD)
sgdReg = SGDRegressor()
sgdReg.fit(X_train, Y_train)
modelReg_eval(sgdReg, X_train, Y_train, X_test, Y_test, X_val, Y_val)
###Output
_____no_output_____