path (stringlengths 7-265) | concatenated_notebook (stringlengths 46-17M)
---|---|
MaterialCursoPython/Fase 5 - Modulos externos/Tema 18 - Matplotlib/03 - Limites.ipynb | ###Markdown
Limits: Sometimes we want to manipulate the lower and upper limits of the axes. We can do that with the limit functions: * xlim * ylim
###Code
import matplotlib.pyplot as plt
%matplotlib inline
x = range(1,11) # Numbers from 1 to 10
y = [n**2 for n in x] # Squares of the numbers 1 to 10
plt.plot(x, y)
###Output
_____no_output_____
###Markdown
Changing the limits changes the variation we perceive in the plot. If we increase them, it will look as if we are zooming out.
###Code
plt.plot(x, y)
plt.xlim(0, 20) # We only have 10 values
plt.ylim(0, 200) # The maximum is 10**2, which is 100
###Output
_____no_output_____
###Markdown
On the other hand, if we reduce them it will look as if we are zooming in. This is very useful for focusing on one section of the plot.
###Code
plt.plot(x, y)
plt.xlim(len(x)-5, len(x)) # Focus on the last 5 numbers
###Output
_____no_output_____ |
Notebooks/Obsolete_Maybe/Algebras.ipynb | ###Markdown
Algebras of Time and Space
###Code
import qualreas as qr
import os
# The variable 'path' assumes that you have defined an environment
# variable, PYPROJ, that is the path to the qualreas folder:
path = os.path.join(os.getenv('PYPROJ'), 'qualreas')
###Output
_____no_output_____
###Markdown
Algebras are serialized as JSON files. For more information on the algebras of time defined here, see the paper, ["Intervals, Points, and Branching Time"](https://github.com/alreich/timex/tree/master/docs) by Alfred J. Reich. This link includes links to the paper and a text file containing 4 transitivity tables (Allen's algebra of proper intervals and Reich's 3 generalizations of Allen's algebra). Domain and Range Abbreviations: * Pt : Time Point * PInt : Proper Time Interval (i.e., end points not included) * Int : Time Interval (i.e., Pt or PInt). Point Algebras
###Code
pt_alg = qr.Algebra(os.path.join(path, "linearPointAlgebra.json"))
right_pt_alg = qr.Algebra(os.path.join(path, "rightBranchingPointAlgebra.json"))
left_pt_alg = qr.Algebra(os.path.join(path, "leftBranchingPointAlgebra.json"))
pt_alg.print_info()
right_pt_alg.print_info()
left_pt_alg.print_info()
###Output
Algebra Name: Left-Branching Point Algebra
Description: Left-Branching Point Algebra
Type: Relation System
Equality Rels: [=]
Relations:
NAME (ABBREV) INVERSE (ABBREV) REFLEXIVE SYMMETRIC TRANSITIVE DOMAIN RANGE
LessThan ( <) GreaterThan ( >) False False True Pt Pt
Equals ( =) Equals ( =) True True True Pt Pt
GreaterThan ( >) LessThan ( <) False False True Pt Pt
Incomparable ( l~) Incomparable ( l~) False True False Pt Pt
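###Markdown
Since each algebra is serialized as a JSON file (as noted above), it can also be inspected directly with the standard `json` module. The snippet below is a small illustrative sketch, not part of the original notebook; it only assumes the file is ordinary JSON and does not rely on any particular qualreas schema.
###Code
import json
# Peek at the raw serialization of one algebra file and list its top-level keys.
with open(os.path.join(path, "linearPointAlgebra.json")) as f:
    algebra_spec = json.load(f)
print(list(algebra_spec.keys()))
###Output
_____no_output_____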
###Markdown
Allen's Algebra of Proper Time Intervals
###Code
int_alg = qr.Algebra(os.path.join(path, "IntervalAlgebra.json"))
int_alg.print_info()
###Output
Algebra Name: Linear Time Interval Algebra
Description: Allen's algebra of proper time intervals
Type: Relation System
Equality Rels: [E]
Relations:
NAME (ABBREV) INVERSE (ABBREV) REFLEXIVE SYMMETRIC TRANSITIVE DOMAIN RANGE
Before ( B) After ( BI) False False True PInt PInt
After ( BI) Before ( B) False False True PInt PInt
During ( D) Contains ( DI) False False True PInt PInt
Contains ( DI) During ( D) False False True PInt PInt
Equals ( E) Equals ( E) True True True PInt PInt
Finishes ( F) Finished-by ( FI) False False True PInt PInt
Finished-by ( FI) Finishes ( F) False False True PInt PInt
Meets ( M) Met-By ( MI) False False False PInt PInt
Met-By ( MI) Meets ( M) False False False PInt PInt
Overlaps ( O) Overlapped-By ( OI) False False False PInt PInt
Overlapped-By ( OI) Overlaps ( O) False False False PInt PInt
Starts ( S) Started-By ( SI) False False True PInt PInt
Started-By ( SI) Starts ( S) False False True PInt PInt
###Markdown
Generalized Algebras: Each of the following three algebras is a generalization of Allen's original algebra of proper time intervals. The first one, intpt_alg ("Interval and Point Algebra"), generalizes Allen's algebra to include Time Points. It adds 5 additional relations to the original 13 (their short_names begin with "Point-") and it generalizes the domain and/or range of 4 of the original 13 relations (After, Before, During, Contains) from the class of Proper Time Intervals to a new class, Time Intervals, which includes both Proper Time Intervals and Time Points.
###Code
intpt_alg = qr.Algebra(os.path.join(path, "IntervalAndPointAlgebra.json"))
left_intpt_alg = qr.Algebra(os.path.join(path, "LeftBranchingIntervalAndPointAlgebra.json"))
right_intpt_alg = qr.Algebra(os.path.join(path, "RightBranchingIntervalAndPointAlgebra.json"))
###Output
_____no_output_____
###Markdown
Algebra of Proper Intervals and Points for Linear Time
###Code
intpt_alg.print_info()
###Output
Algebra Name: Linear Time Interval & Point Algebra
Description: Reich's point extension to Allen's time interval algebra (see TIME-94 paper)
Type: Relation System
Equality Rels: [PE, E]
Relations:
NAME (ABBREV) INVERSE (ABBREV) REFLEXIVE SYMMETRIC TRANSITIVE DOMAIN RANGE
Before ( B) After ( BI) False False True Int Int
After ( BI) Before ( B) False False True Int Int
During ( D) Contains ( DI) False False True Int PInt
Contains ( DI) During ( D) False False True PInt Int
Equals ( E) Equals ( E) True True True PInt PInt
Finishes ( F) Finished-by ( FI) False False True PInt PInt
Finished-by ( FI) Finishes ( F) False False True PInt PInt
Meets ( M) Met-By ( MI) False False False PInt PInt
Met-By ( MI) Meets ( M) False False False PInt PInt
Overlaps ( O) Overlapped-By ( OI) False False False PInt PInt
Overlapped-By ( OI) Overlaps ( O) False False False PInt PInt
Point-Equals ( PE) Point-Equals ( PE) True True True Pt Pt
Point-Finishes ( PF) Point-Finished-By (PFI) False False False Pt PInt
Point-Finished-By (PFI) Point-Finishes ( PF) False False False PInt Pt
Point-Starts ( PS) Point-Started-By (PSI) False False False Pt PInt
Point-Started-By (PSI) Point-Starts ( PS) False False False PInt Pt
Starts ( S) Started-By ( SI) False False True PInt PInt
Started-By ( SI) Starts ( S) False False True PInt PInt
###Markdown
Algebra of Proper Intervals and Points for Right-Branching Time
###Code
right_intpt_alg.print_info()
###Output
Algebra Name: Right-Branching Time Interval & Point Algebra
Description: Reich's right-branching extension to Allen's time interval algebra (see TIME-94 paper)
Type: Relation System
Equality Rels: [E, PE]
Relations:
NAME (ABBREV) INVERSE (ABBREV) REFLEXIVE SYMMETRIC TRANSITIVE DOMAIN RANGE
Before ( B) After ( BI) False False True Int Int
After ( BI) Before ( B) False False True Int Int
During ( D) Contains ( DI) False False True Int PInt
Contains ( DI) During ( D) False False True PInt Int
Equals ( E) Equals ( E) True True True PInt PInt
Finishes ( F) Finished-by ( FI) False False True PInt PInt
Finished-by ( FI) Finishes ( F) False False True PInt PInt
Meets ( M) Met-By ( MI) False False False PInt PInt
Met-By ( MI) Meets ( M) False False False PInt PInt
Overlaps ( O) Overlapped-By ( OI) False False False PInt PInt
Overlapped-By ( OI) Overlaps ( O) False False False PInt PInt
Point-Equals ( PE) Point-Equals ( PE) True True True Pt Pt
Point-Finishes ( PF) Point-Finished-By (PFI) False False False Pt PInt
Point-Finished-By (PFI) Point-Finishes ( PF) False False False PInt Pt
Point-Starts ( PS) Point-Started-By (PSI) False False False Pt PInt
Point-Started-By (PSI) Point-Starts ( PS) False False False PInt Pt
Right-Before ( RB) Right-After (RBI) False False True PInt Int
Right-After (RBI) Right-Before ( RB) False False True Int PInt
Right-Overlaps ( RO) Right-Overlapped-By (ROI) False False False PInt PInt
Right-Overlapped-By (ROI) Right-Overlaps ( RO) False False False PInt PInt
Right-Starts ( RS) Right-Starts ( RS) False False False PInt PInt
Right-Incomparable ( R~) Right-Incomparable ( R~) False True False Int Int
Starts ( S) Started-By ( SI) False False True PInt PInt
Started-By ( SI) Starts ( S) False False True PInt PInt
###Markdown
Algebra of Proper Intervals and Points for Left-Branching Time
###Code
left_intpt_alg.print_info()
###Output
Algebra Name: Left-Branching Time Interval & Point Algebra
Description: Reich's left-branching extension to Allen's time interval algebra (see TIME-94 paper)
Type: Relation System
Equality Rels: [E, PE]
Relations:
NAME (ABBREV) INVERSE (ABBREV) REFLEXIVE SYMMETRIC TRANSITIVE DOMAIN RANGE
Before ( B) After ( BI) False False True Int Int
After ( BI) Before ( B) False False True Int Int
During ( D) Contains ( DI) False False True Int PInt
Contains ( DI) During ( D) False False True PInt Int
Equals ( E) Equals ( E) True True True PInt PInt
Finishes ( F) Finished-by ( FI) False False True PInt PInt
Finished-by ( FI) Finishes ( F) False False True PInt PInt
Left-Before ( LB) Left-After (LBI) False False True Int PInt
Left-After (LBI) Left-Before ( LB) False False True PInt Int
Left-Finishes ( LF) Left-Finishes ( LF) False False False PInt PInt
Left-Overlaps ( LO) Left-Overlapped-By (LOI) False False False PInt PInt
Left-Overlapped-By (LOI) Left-Overlaps ( LO) False False False PInt PInt
Left-Incomparable ( L~) Left-Incomparable ( L~) False True False Int Int
Meets ( M) Met-By ( MI) False False False PInt PInt
Met-By ( MI) Meets ( M) False False False PInt PInt
Overlaps ( O) Overlapped-By ( OI) False False False PInt PInt
Overlapped-By ( OI) Overlaps ( O) False False False PInt PInt
Point-Equals ( PE) Point-Equals ( PE) True True True Pt Pt
Point-Finishes ( PF) Point-Finished-By (PFI) False False False Pt PInt
Point-Finished-By (PFI) Point-Finishes ( PF) False False False PInt Pt
Point-Starts ( PS) Point-Started-By (PSI) False False False Pt PInt
Point-Started-By (PSI) Point-Starts ( PS) False False False PInt Pt
Starts ( S) Started-By ( SI) False False True PInt PInt
Started-By ( SI) Starts ( S) False False True PInt PInt
###Markdown
Region Connection Calculus
###Code
rcc8_alg = qr.Algebra(os.path.join(path, "rcc8Algebra.json"))
rcc8_alg.print_info()
###Output
Algebra Name: RCC8
Description: Region Connection Calculus 8 Algebra
Type: Relation System
Equality Rels: [EQ]
Relations:
NAME (ABBREV) INVERSE (ABBREV) REFLEXIVE SYMMETRIC TRANSITIVE DOMAIN RANGE
Disconnected ( DC) Disconnected ( DC) False True False Region Region
ExternallyConnected ( EC) ExternallyConnected ( EC) False True False Region Region
Equal ( EQ) Equal ( EQ) True True True Region Region
NonTangentialProperPart (NTPP) NonTangentialProperPartInverse (NTPPI) False False True Region Region
NonTangentialProperPartInverse (NTPPI) NonTangentialProperPart (NTPP) False False True Region Region
PartiallyOverlapping ( PO) PartiallyOverlapping ( PO) False True False Region Region
TangentialProperPart (TPP) TangentialProperPartInverse (TPPI) False False False Region Region
TangentialProperPartInverse (TPPI) TangentialProperPart (TPP) False False False Region Region
|
script-generation_TensorFlow-LSTM/full-simpsons-corpus-trained_script_gen.ipynb | ###Markdown
TV Script Generation: In this notebook, we generate our own [Simpsons](https://en.wikipedia.org/wiki/The_Simpsons) TV scripts using a multi-RNN cell built with TensorFlow. The dataset is the [Simpsons dataset](https://www.kaggle.com/wcukierski/the-simpsons-by-the-data) of scripts from 27 seasons.
###Code
import helper
data_dir = './data/simpsons/full_history.txt'
text = helper.load_data(data_dir)
text = text[81:]
text[0:500]
###Output
_____no_output_____
###Markdown
Explore the Data: View different parts of the data.
###Code
view_sentence_range = (0, 10)
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
###Output
Dataset Stats
Roughly the number of unique words: 136594
Number of scenes: 1
Average number of sentences in each scene: 158234.0
Number of lines: 158235
Average number of words in each line: 11.663614244636143
The sentences 0 to 10:
Miss Hoover: No, actually, it was a little of both. Sometimes when a disease is in all the magazines and all the news shows, it's only natural that you think you have it.
Lisa Simpson: (NEAR TEARS) Where's Mr. Bergstrom?
Miss Hoover: I don't know. Although I'd sure like to talk to him. He didn't touch my lesson plan. What did he teach you?
Lisa Simpson: That life is worth living.
Edna Krabappel-Flanders: The polls will be open from now until the end of recess. Now, (SOUR) just in case any of you have decided to put any thought into this, we'll have our final statements. Martin?
Martin Prince: (HOARSE WHISPER) I don't think there's anything left to say.
Edna Krabappel-Flanders: Bart?
Bart Simpson: Victory party under the slide!
(Apartment Building: Ext. apartment building - day)
Lisa Simpson: (CALLING) Mr. Bergstrom! Mr. Bergstrom!
###Markdown
Preprocessing Functions: Implement a couple of preprocessing functions below: - Lookup Table - Tokenize Punctuation. Lookup Table: To create a word embedding, we first need to transform the words to ids. In this function, two dictionaries are created: - a dictionary to go from a word to an id, which we'll call `vocab_to_int` - a dictionary to go from an id to a word, which we'll call `int_to_vocab`. Return these dictionaries in the following tuple `(vocab_to_int, int_to_vocab)`
###Code
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# done: Implement Function
# scenes = text.split('\n\n')
# sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
# words = []
# # punctuations will be dealt with later
# for sentence in sentences:
# words.append(sentence.split())
#comment: Note that "text" here is a different data type. It is not one long string as it is earlier in this notebook. It's already a list of words.
vocab = sorted(set(text))
vocab_to_int = {word: ii for ii, word in enumerate(vocab)}
int_to_vocab = {ii: word for ii, word in enumerate(vocab)}
return vocab_to_int, int_to_vocab
###Output
/home/thojo/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
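###Markdown
As a quick, illustrative sanity check (not part of the original notebook), `create_lookup_tables` can be run on a tiny hand-made word list; the toy data below is hypothetical.
###Code
# The function expects a list of words, as noted in the comment inside it.
example_words = ['to', 'be', 'or', 'not', 'to', 'be']
example_vocab_to_int, example_int_to_vocab = create_lookup_tables(example_words)
print(example_vocab_to_int)      # {'be': 0, 'not': 1, 'or': 2, 'to': 3}
print(example_int_to_vocab[3])   # 'to'
###Output
_____no_output_____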
###Markdown
Tokenize Punctuation: We split the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". The function `token_lookup` will return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". We create a dictionary for the following symbols, where the symbol is the key and the value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ). This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. We must make sure we don't use a token that could be confused with a word; instead of using the token "dash", we use something like "||dash||".
###Code
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
# done: Implement Function
punct = {'.':'||Period||',
',':'||Comma||',
'\"':'||Double_Quotation||',
';':'||Semi_Colon||',
'!':'||Exclamation_Mark||',
'?':'||Question_Mark||',
'(':'||Left_Parentheses||',
')':'||Right_Parentheses||',
'--':'||Dash||',
'\n':'||Newline||',
}
return punct
###Output
_____no_output_____
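###Markdown
A short illustrative check (not part of the original notebook) shows how the token dictionary is meant to be applied: each punctuation symbol is replaced by its token surrounded by spaces, so the token becomes its own "word". The example string below is hypothetical.
###Code
tokens = token_lookup()
example_line = 'bye! bye--'
for symbol, token in tokens.items():
    example_line = example_line.replace(symbol, ' {} '.format(token))
print(example_line)   # e.g. "bye ||Exclamation_Mark||  bye ||Dash|| "
###Output
_____no_output_____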
###Markdown
Preprocess all the data and save it: Running the code cell below will preprocess all the data and save it to file.
###Code
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
###Output
_____no_output_____
###Markdown
Check Point: The preprocessed data has been saved to disk.
###Code
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
###Output
_____no_output_____
###Markdown
Build the Neural Network: You'll build the components necessary to build an RNN by implementing the following functions below: - get_inputs - get_init_cell - get_embed - build_rnn - build_nn - get_batches. Check the Version of TensorFlow and Access to GPU
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.3'), 'Please use TensorFlow version 1.3 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
###Output
TensorFlow Version: 1.10.0
Default GPU Device: /device:GPU:0
###Markdown
Input: Implement the `get_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) `name` parameter. - Targets placeholder - Learning Rate placeholder. Return the placeholders in the following tuple `(Input, Targets, LearningRate)`
###Code
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# done: Implement Function
input_ = tf.placeholder( tf.int32, [None, None], name = 'input')
targets_ = tf.placeholder( tf.int32, [None, None], name = 'targets')
learning_rate = tf.placeholder( tf.float32, name = 'learning_rate')
return input_, targets_, learning_rate
# tests.test_get_inputs(get_inputs)
###Output
_____no_output_____
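###Markdown
The following quick check (not part of the original notebook) builds the placeholders in a throwaway graph just to confirm the tensor names; it assumes TensorFlow 1.x graph mode, as used throughout this notebook.
###Code
# Sanity check: the placeholders should be retrievable later as "input:0", etc.
with tf.Graph().as_default():
    check_input, check_targets, check_lr = get_inputs()
    print(check_input.name, check_targets.name, check_lr.name)   # input:0 targets:0 learning_rate:0
###Output
_____no_output_____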
###Markdown
Build RNN Cell and Initialize: Stack one or more [`BasicLSTMCells`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/BasicLSTMCell) in a [`MultiRNNCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell). - The RNN size should be set using `rnn_size` - Initialize the cell state using the MultiRNNCell's [`zero_state()`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/MultiRNNCell#zero_state) function - Apply the name "initial_state" to the initial state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity). Return the cell and initial state in the following tuple `(Cell, InitialState)`
###Code
def build_cell(num_units, keep_prob=0.8):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob) #adding dropout
return drop
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs (TJ: dimension of the hidden state of the LSTM cell)
:return: Tuple (cell, initialize state)
"""
# done: Implement Function
lstm_layers = 2
### Stack up multiple LSTM layers, for deep learning
# cell = tf.contrib.rnn.MultiRNNCell([ lstm ]*lstm_layers)
# lstm = tf.contrib.rnn.BasicLSTMCell( rnn_size )
# no dropout. If desired, add it like this and define cell as [drop]*lstm_layers
# drop = tf.contrib.rnn.DropoutWrapper( lstm, output_keep_prob=keep_prob)
# this is what I first had (works for TensorFlow 1.0. For later versions, do as below i.e., [cell]*lstm_layers doesn't work)
cell=tf.contrib.rnn.MultiRNNCell( [build_cell(rnn_size) for _ in range(lstm_layers)])
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state') #gotamist: why was this necessary? There are other scripts (Sentiment_RNN for example), where the initial state has been set without a name
return cell, initial_state
# tests.test_get_init_cell(get_init_cell)
###Output
_____no_output_____
###Markdown
Word Embedding: Apply embedding to `input_data` using TensorFlow. Return the embedded sequence.
###Code
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embedding = tf.Variable( tf.random_uniform(( vocab_size, embed_dim), -1, 1) )
embed = tf.nn.embedding_lookup( embedding, input_data)
return embed
# tests.test_get_embed(get_embed)
###Output
_____no_output_____
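###Markdown
A small illustrative shape check (not part of the original notebook; the toy sizes are hypothetical): the embedded output should add an `embed_dim`-sized dimension to the input.
###Code
with tf.Graph().as_default():
    toy_input = tf.placeholder(tf.int32, [None, None])
    toy_embed = get_embed(toy_input, vocab_size=10, embed_dim=4)
    print(toy_embed.get_shape().as_list())   # [None, None, 4]
###Output
_____no_output_____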
###Markdown
Build RNN: Time to use the cell created in the `get_init_cell()` function to create an RNN. - Build the RNN using [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) - Apply the name "final_state" to the final state using [`tf.identity()`](https://www.tensorflow.org/api_docs/python/tf/identity). Return the outputs and final state in the following tuple `(Outputs, FinalState)`
###Code
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
# done: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype='float32')
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
# tests.test_build_rnn(build_rnn)
###Output
_____no_output_____
###Markdown
Build the Neural Network: Apply the functions you implemented above to: - Apply embedding to `input_data` using your `get_embed(input_data, vocab_size, embed_dim)` function. - Build the RNN using `cell` and your `build_rnn(cell, inputs)` function. - Apply a fully connected layer with a linear activation and `vocab_size` as the number of outputs. Return the logits and final state in the following tuple `(Logits, FinalState)`
###Code
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
# logits = tf.layers.dense(outputs, units=vocab_size, activation=None) #trying the line below recommended by reviewer (includes initialization)
logits = tf.contrib.layers.fully_connected(outputs,vocab_size,activation_fn=None, weights_initializer=tf.truncated_normal_initializer(stddev= 0.1),biases_initializer=tf.zeros_initializer())
# logits=tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
# note: both lines are legit for logits. fully_connected, behind the scenes, just calls tf.layers.dense
return logits, final_state
# tests.test_build_nn(build_nn)
###Output
_____no_output_____
###Markdown
Batches: Implement `get_batches` to create batches of input and targets using `int_text`. The batches should be a Numpy array with the shape `(number of batches, 2, batch size, sequence length)`. Each batch contains two elements: - The first element is a single batch of **input** with the shape `[batch size, sequence length]` - The second element is a single batch of **targets** with the shape `[batch size, sequence length]`. If there isn't enough data to fill the last batch, drop the last batch.
###Code
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
# done: Implement Function
# I am assuming int_text will be a list rather than a 1-d array
words_per_batch = batch_size * seq_length
n_batches = len(int_text) // words_per_batch
x = np.array( int_text[:n_batches*words_per_batch] )# inputs
y=np.zeros_like(x, dtype = x.dtype) # targets
y[:-1], y[-1:] = x[1:],x[:1]
xarr = x.reshape((batch_size, -1))
yarr = y.reshape((batch_size, -1))
batches = np.zeros((n_batches, 2, batch_size, seq_length), dtype=int )
batch_idx = 0
for n in range(0, xarr.shape[1], seq_length):
input_batch = xarr[:, n:n+seq_length]
target_batch = yarr[:, n:n+seq_length]
this_batch=np.array([input_batch, target_batch])
batches[ batch_idx,:,:,:]=this_batch
batch_idx += 1
return batches
# tests.test_get_batches(get_batches)
###Output
_____no_output_____
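###Markdown
An illustrative call on a tiny hand-made id sequence (hypothetical data, not the corpus) makes the `(number of batches, 2, batch size, sequence length)` layout concrete.
###Code
demo_batches = get_batches(list(range(1, 13)), batch_size=2, seq_length=3)
print(demo_batches.shape)   # (2, 2, 2, 3)
print(demo_batches[0])      # first batch: inputs [[1 2 3], [7 8 9]], targets [[2 3 4], [8 9 10]]
###Output
_____no_output_____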
###Markdown
Neural Network Training Hyperparameters: Tune the following parameters: - Set `num_epochs` to the number of epochs. - Set `batch_size` to the batch size. - Set `rnn_size` to the size of the RNNs. - Set `embed_dim` to the size of the embedding. - Set `seq_length` to the length of sequence. - Set `learning_rate` to the learning rate. - Set `show_every_n_batches` to how often (in batches) the neural network should print progress.
###Code
# Number of Epochs
num_epochs = 25 #around 50 epochs were enough to get the loss under 1.0 with seq_length of 200. Loss goes down to 0.12 with 100 epochs.
# Batch Size
batch_size = 16
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 12 # had 200 at first, but this is an improvement suggested by the reviewer (corresponding to the average sequence length in the training data of 11.5 words)
# Learning Rate
learning_rate = 0.001
# Show stats for every n number of batches
show_every_n_batches = 1000
save_dir = './save'
###Output
_____no_output_____
###Markdown
Build the Graph: Build the graph using the neural network implemented above.
###Code
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
###Output
_____no_output_____
###Markdown
Train: Train the neural network on the preprocessed data.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
###Output
Epoch 0 Batch 0/13372 train_loss = 10.922
Epoch 0 Batch 1000/13372 train_loss = 5.093
Epoch 0 Batch 2000/13372 train_loss = 5.123
Epoch 0 Batch 3000/13372 train_loss = 5.306
Epoch 0 Batch 4000/13372 train_loss = 4.531
Epoch 0 Batch 5000/13372 train_loss = 4.511
Epoch 0 Batch 6000/13372 train_loss = 4.176
Epoch 0 Batch 7000/13372 train_loss = 5.096
Epoch 0 Batch 8000/13372 train_loss = 5.015
Epoch 0 Batch 9000/13372 train_loss = 4.582
Epoch 0 Batch 10000/13372 train_loss = 4.460
Epoch 0 Batch 11000/13372 train_loss = 4.304
Epoch 0 Batch 12000/13372 train_loss = 4.470
Epoch 0 Batch 13000/13372 train_loss = 4.670
Epoch 1 Batch 628/13372 train_loss = 4.382
Epoch 1 Batch 1628/13372 train_loss = 4.199
Epoch 1 Batch 2628/13372 train_loss = 3.889
Epoch 1 Batch 3628/13372 train_loss = 4.878
Epoch 1 Batch 4628/13372 train_loss = 4.781
Epoch 1 Batch 5628/13372 train_loss = 4.274
Epoch 1 Batch 6628/13372 train_loss = 4.667
Epoch 1 Batch 7628/13372 train_loss = 4.504
Epoch 1 Batch 8628/13372 train_loss = 4.331
Epoch 1 Batch 9628/13372 train_loss = 4.341
Epoch 1 Batch 10628/13372 train_loss = 4.275
Epoch 1 Batch 11628/13372 train_loss = 3.737
Epoch 1 Batch 12628/13372 train_loss = 4.572
Epoch 2 Batch 256/13372 train_loss = 4.177
Epoch 2 Batch 1256/13372 train_loss = 4.233
Epoch 2 Batch 2256/13372 train_loss = 4.446
Epoch 2 Batch 3256/13372 train_loss = 4.460
Epoch 2 Batch 4256/13372 train_loss = 4.309
Epoch 2 Batch 5256/13372 train_loss = 4.711
Epoch 2 Batch 6256/13372 train_loss = 3.388
Epoch 2 Batch 7256/13372 train_loss = 4.551
Epoch 2 Batch 8256/13372 train_loss = 4.939
Epoch 2 Batch 9256/13372 train_loss = 4.091
Epoch 2 Batch 10256/13372 train_loss = 3.988
Epoch 2 Batch 11256/13372 train_loss = 4.391
Epoch 2 Batch 12256/13372 train_loss = 4.687
Epoch 2 Batch 13256/13372 train_loss = 4.032
Epoch 3 Batch 884/13372 train_loss = 4.396
Epoch 3 Batch 1884/13372 train_loss = 4.017
Epoch 3 Batch 2884/13372 train_loss = 4.410
Epoch 3 Batch 3884/13372 train_loss = 3.787
Epoch 3 Batch 4884/13372 train_loss = 3.339
Epoch 3 Batch 5884/13372 train_loss = 4.691
Epoch 3 Batch 6884/13372 train_loss = 4.021
Epoch 3 Batch 7884/13372 train_loss = 3.996
Epoch 3 Batch 8884/13372 train_loss = 4.009
Epoch 3 Batch 9884/13372 train_loss = 4.401
Epoch 3 Batch 10884/13372 train_loss = 3.818
Epoch 3 Batch 11884/13372 train_loss = 4.467
Epoch 3 Batch 12884/13372 train_loss = 3.800
Epoch 4 Batch 512/13372 train_loss = 3.801
Epoch 4 Batch 1512/13372 train_loss = 3.922
Epoch 4 Batch 2512/13372 train_loss = 4.039
Epoch 4 Batch 3512/13372 train_loss = 3.818
Epoch 4 Batch 4512/13372 train_loss = 4.015
Epoch 4 Batch 5512/13372 train_loss = 4.117
Epoch 4 Batch 6512/13372 train_loss = 3.852
Epoch 4 Batch 7512/13372 train_loss = 4.000
Epoch 4 Batch 8512/13372 train_loss = 3.798
Epoch 4 Batch 9512/13372 train_loss = 4.067
Epoch 4 Batch 10512/13372 train_loss = 3.697
Epoch 4 Batch 11512/13372 train_loss = 4.290
Epoch 4 Batch 12512/13372 train_loss = 3.906
Epoch 5 Batch 140/13372 train_loss = 3.376
Epoch 5 Batch 1140/13372 train_loss = 3.670
Epoch 5 Batch 2140/13372 train_loss = 4.125
Epoch 5 Batch 3140/13372 train_loss = 4.270
Epoch 5 Batch 4140/13372 train_loss = 3.955
Epoch 5 Batch 5140/13372 train_loss = 4.131
Epoch 5 Batch 6140/13372 train_loss = 3.946
Epoch 5 Batch 7140/13372 train_loss = 3.778
Epoch 5 Batch 8140/13372 train_loss = 3.722
Epoch 5 Batch 9140/13372 train_loss = 3.676
Epoch 5 Batch 10140/13372 train_loss = 3.674
Epoch 5 Batch 11140/13372 train_loss = 4.030
Epoch 5 Batch 12140/13372 train_loss = 4.386
Epoch 5 Batch 13140/13372 train_loss = 4.142
Epoch 6 Batch 768/13372 train_loss = 3.963
Epoch 6 Batch 1768/13372 train_loss = 4.138
Epoch 6 Batch 2768/13372 train_loss = 3.869
Epoch 6 Batch 3768/13372 train_loss = 3.965
Epoch 6 Batch 4768/13372 train_loss = 3.604
Epoch 6 Batch 5768/13372 train_loss = 3.732
Epoch 6 Batch 6768/13372 train_loss = 4.051
Epoch 6 Batch 7768/13372 train_loss = 4.196
Epoch 6 Batch 8768/13372 train_loss = 3.443
Epoch 6 Batch 9768/13372 train_loss = 3.950
Epoch 6 Batch 10768/13372 train_loss = 4.057
Epoch 6 Batch 11768/13372 train_loss = 4.177
Epoch 6 Batch 12768/13372 train_loss = 3.711
Epoch 7 Batch 396/13372 train_loss = 3.942
Epoch 7 Batch 1396/13372 train_loss = 4.155
Epoch 7 Batch 2396/13372 train_loss = 4.272
Epoch 7 Batch 3396/13372 train_loss = 4.109
Epoch 7 Batch 4396/13372 train_loss = 4.322
Epoch 7 Batch 5396/13372 train_loss = 3.562
Epoch 7 Batch 6396/13372 train_loss = 3.752
Epoch 7 Batch 7396/13372 train_loss = 3.778
Epoch 7 Batch 8396/13372 train_loss = 4.123
Epoch 7 Batch 9396/13372 train_loss = 3.656
Epoch 7 Batch 10396/13372 train_loss = 4.227
Epoch 7 Batch 11396/13372 train_loss = 3.562
Epoch 7 Batch 12396/13372 train_loss = 3.421
Epoch 8 Batch 24/13372 train_loss = 3.538
Epoch 8 Batch 1024/13372 train_loss = 3.439
Epoch 8 Batch 2024/13372 train_loss = 4.115
Epoch 8 Batch 3024/13372 train_loss = 3.446
Epoch 8 Batch 4024/13372 train_loss = 3.954
Epoch 8 Batch 5024/13372 train_loss = 3.736
Epoch 8 Batch 6024/13372 train_loss = 3.993
Epoch 8 Batch 7024/13372 train_loss = 3.400
Epoch 8 Batch 8024/13372 train_loss = 3.894
Epoch 8 Batch 9024/13372 train_loss = 3.799
Epoch 8 Batch 10024/13372 train_loss = 3.757
Epoch 8 Batch 11024/13372 train_loss = 4.027
Epoch 8 Batch 12024/13372 train_loss = 4.091
Epoch 8 Batch 13024/13372 train_loss = 3.919
Epoch 9 Batch 652/13372 train_loss = 4.094
Epoch 9 Batch 1652/13372 train_loss = 3.251
Epoch 9 Batch 2652/13372 train_loss = 3.644
Epoch 9 Batch 3652/13372 train_loss = 3.776
Epoch 9 Batch 4652/13372 train_loss = 3.681
Epoch 9 Batch 5652/13372 train_loss = 3.535
Epoch 9 Batch 6652/13372 train_loss = 3.711
Epoch 9 Batch 7652/13372 train_loss = 4.019
Epoch 9 Batch 8652/13372 train_loss = 3.642
Epoch 9 Batch 9652/13372 train_loss = 4.427
Epoch 9 Batch 10652/13372 train_loss = 4.006
Epoch 9 Batch 11652/13372 train_loss = 3.567
Epoch 9 Batch 12652/13372 train_loss = 4.106
Epoch 10 Batch 280/13372 train_loss = 3.746
Epoch 10 Batch 1280/13372 train_loss = 3.453
Epoch 10 Batch 2280/13372 train_loss = 3.834
Epoch 10 Batch 3280/13372 train_loss = 3.491
Epoch 10 Batch 4280/13372 train_loss = 4.100
Epoch 10 Batch 5280/13372 train_loss = 3.800
Epoch 10 Batch 6280/13372 train_loss = 3.933
Epoch 10 Batch 7280/13372 train_loss = 3.733
Epoch 10 Batch 8280/13372 train_loss = 3.663
Epoch 10 Batch 9280/13372 train_loss = 3.611
Epoch 10 Batch 10280/13372 train_loss = 3.768
Epoch 10 Batch 11280/13372 train_loss = 3.616
Epoch 10 Batch 12280/13372 train_loss = 3.599
Epoch 10 Batch 13280/13372 train_loss = 3.835
Epoch 11 Batch 908/13372 train_loss = 3.934
Epoch 11 Batch 1908/13372 train_loss = 3.532
Epoch 11 Batch 2908/13372 train_loss = 3.637
Epoch 11 Batch 3908/13372 train_loss = 3.794
Epoch 11 Batch 4908/13372 train_loss = 3.715
Epoch 11 Batch 5908/13372 train_loss = 3.646
Epoch 11 Batch 6908/13372 train_loss = 4.437
Epoch 11 Batch 7908/13372 train_loss = 3.335
Epoch 11 Batch 8908/13372 train_loss = 3.984
Epoch 11 Batch 9908/13372 train_loss = 3.569
Epoch 11 Batch 10908/13372 train_loss = 3.841
Epoch 11 Batch 11908/13372 train_loss = 3.467
Epoch 11 Batch 12908/13372 train_loss = 3.700
Epoch 12 Batch 536/13372 train_loss = 4.041
Epoch 12 Batch 1536/13372 train_loss = 3.702
Epoch 12 Batch 2536/13372 train_loss = 3.639
Epoch 12 Batch 3536/13372 train_loss = 3.518
Epoch 12 Batch 4536/13372 train_loss = 3.926
Epoch 12 Batch 5536/13372 train_loss = 3.340
Epoch 12 Batch 6536/13372 train_loss = 4.109
Epoch 12 Batch 7536/13372 train_loss = 3.253
Epoch 12 Batch 8536/13372 train_loss = 3.984
###Markdown
Save Parameters: Save `seq_length` and `save_dir` for generating a new TV script.
###Code
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
###Output
_____no_output_____
###Markdown
Checkpoint
###Code
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
###Output
_____no_output_____
###Markdown
Implement Generate Functions. Get Tensors: Get tensors from `loaded_graph` using the function [`get_tensor_by_name()`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0". Return the tensors in the following tuple `(InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)`
###Code
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# done: Implement Function
InputTensor = loaded_graph.get_tensor_by_name("input:0")
InitialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
FinalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
ProbsTensor = loaded_graph.get_tensor_by_name("probs:0")
return (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# tests.test_get_tensors(get_tensors)
###Output
_____no_output_____
###Markdown
Choose Word: Implement the `pick_word()` function to select the next word using `probabilities`.
###Code
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# note: it would be less noisy to choose among words with the top 5 probabilities.
p = probabilities / np.sum(probabilities)
idx=np.random.choice(list(int_to_vocab.keys()), p=p)
return int_to_vocab[idx]
# tests.test_pick_word(pick_word)
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
###Output
_____no_output_____
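###Markdown
A toy demonstration of `pick_word` (not part of the original notebook; the vocabulary and probabilities below are made up) shows that it samples a word according to the given distribution.
###Code
toy_int_to_vocab = {0: 'homer', 1: 'marge', 2: 'bart'}
toy_probs = np.array([0.6, 0.3, 0.1])
print(pick_word(toy_probs, toy_int_to_vocab))   # usually 'homer'
###Output
_____no_output_____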
###Markdown
Generate TV Script: This will generate the TV script for you. Set `gen_length` to the length of TV script you want to generate.
###Code
gen_length = 1000
# homer_simpson, moe_szyslak, or Barney_Gumble
# prime_word = 'moe_szyslak'
prime_word = 'lisa'
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
# gen_sentences = [prime_word + ':']
gen_sentences = [prime_word]
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[0][dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
###Output
INFO:tensorflow:Restoring parameters from ./save
lisa simpson: trust us, i went in and the guts home is to proud of all.
host:(chuckles) hey boys, i'm a winner on this.(starting tear)
homer simpson:(worried grunt)
homer simpson: how did you see that?
lenny leonard: oh well, now, there's another drink way to let me in, you don't know anything.
homer simpson: i'll save page to the life of james madison. which one of us do are you ominous that?
adult lisa: it hides it for mad like that.
(courtroom: int. courtroom - continuous)
that time we have a week over.
adult lisa: well dad, that's right.
lisa simpson:(sobs)(singing) take your rap look / hysterical) mrs. little heck, see? me and i'm going back to sleep.
bart simpson: sings changing our vinegar?
bart simpson: hey you're pity!
marge simpson:(startled yell) continuous)
homer simpson: um...
marge simpson: i don't know if it died again.
little boy:(annoyed) faster?
bergstrom: well, they know.
selma bouvier: you have a disease.
homer simpson:(busboy chipper) take me your way licked with my butt!
homer simpson: aye carumba!!
(mildly chuckle to designer invite laughter)
homer simpson:(booming) weird?
girls the let(sputtering) decorate her two.
(kwik-e-mart: int. kwik-e-mart - afternoon)
patty bouvier: let's roll!
james brady: you lose our issue.
judge snyder: dr. prime highway owes a class or serve tough.
bart simpson: hey lis, dad, i'm going to need... didn't you get these go home?
mona simpson: oh, what part of the thought of ready so, but you're on inconsiderate.
judge snyder: by short his lunches are paying for bumpy belly to someone right now.
woman: now that's a part where you was ivy. but i've made that great cool work tonight.
(pavilion: int. clinic - continuous)
c. montgomery burns: so a short pardner. but i see the whole times of these others, i'd need to go back immediately.
adult lisa: i want to believe this that.
(kirk van houten: sing" sweeet"?
bart jr.".
(krusty the show of us is entitled this song.
marge simpson:(flirty)(shocked) hey, aren't that?
marge simpson: you could be a robotics. i suppose you can send me out to bed
it all right. that's not fun. thank you, very not a man!
homer puppet: broke is an marriage of religious mail.(bringing her in book) yeah, but you're patches up here.
lisa simpson:(mimicking) mr. sparkle carrot of yours!
marge simpson: um, but we came here for some bad news. i've could have it?
lisa simpson: maybe it'll be the big question, but you marge aren't beautiful like i owe what it's up to be.
seymour skinner:(eagerly) few a search" a the m. a.!
singing singers: kill yayyy!
homer simpson: not gone!
homer simpson: here i'm rich! / twenty-five dollars! / won the terror / this has been a buncha gary just one of you county /(dramatic isn't the benefit) i have no match for two,(perking up angrily) nervous squirrel stink!(loud sigh) left me out there burger. and left a tombstone between club engines)
crowd:(charmed son)
c. montgomery burns:
c. montgomery burns: and the road arrived after soon to the square offer, a pack a on why she's trying to consume your father as the poster.
adult lisa:(unimpressed) this is your sci-fi!.(animation)
rev. timothy lovejoy:(squeaky teenage voice) that actually wouldn't approve of their duff food is raising--
herman hermann: next.
young(chuckles) that's it. i'm gonna show you something, no!
crowd:(amid interested noises)
dante calabresi: that was the babysitter. we've finished the holy spirit of gettin' off!
homer simpson:(anguished moan) skin away!
teenage bart: i'm late!
(backstage: int. ballroom - later that night)
master of the quiet day. just relax.
marge simpson: it's about who'd pay so i really involve him that's it that'll be a life inspection and time to date.
mayor joe quimby: you know, the rubbin' is all people are the law, it's it, what a time. how much?
adult lisa: ohh, and this one's pork.
bart simpson:... and another very in.
girls:(screaming as excited) quit me, bigger pants. / this would've had women's rights, to get dressed.
produce i am. you must opener willie. i'll be back a few agent.
fbi agent #1:(impressed) i cracked my cigarettes.
lisa simpson: no, not in the sky!
lisa simpson:(upset sound)
bart
|
2_Ejercicios/Primera_parte/Entregables/2. Ejercios_Mod1/2020_12_15/Alcohol_Consumption.ipynb | ###Markdown
GroupBy Introduction: GroupBy can be summarized as Split-Apply-Combine. Check out this [Diagram](http://i.imgur.com/yjNkiwL.png). Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
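###Markdown
As a tiny illustration of split-apply-combine before touching the real dataset (the toy data below is made up): the rows are split by `continent`, the mean is applied to each group, and the results are combined into one Series.
###Code
toy = pd.DataFrame({'continent': ['EU', 'EU', 'AS'], 'beer_servings': [100, 200, 50]})
print(toy.groupby('continent')['beer_servings'].mean())
###Output
_____no_output_____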
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv).
###Code
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/drinks.csv'
###Output
_____no_output_____
###Markdown
Step 3. Assign it to a variable called drinks.
###Code
drinks = pd.read_csv(url, sep=',')
df_drinks = drinks.copy()
df_drinks.head(5)
###Output
_____no_output_____
###Markdown
Step 4. Which continent drinks more beer on average?
###Code
df_drinks.groupby('continent')['beer_servings'].mean()
###Output
_____no_output_____
###Markdown
Step 5. For each continent print the statistics for wine consumption.
###Code
df_drinks.groupby('continent')['wine_servings'].describe()
###Output
_____no_output_____
###Markdown
Step 6. Print the mean alcohol consumption per continent for every column
###Code
df_drinks.groupby('continent').mean()
###Output
_____no_output_____
###Markdown
Step 7. Print the median alcohol consumption per continent for every column
###Code
df_drinks.groupby('continent').median()
###Output
_____no_output_____
###Markdown
Step 8. Print the mean, min and max values for spirit consumption. This time output a DataFrame
###Code
df_drinks.groupby('continent')['spirit_servings'].aggregate([np.mean, 'min', max])
###Output
_____no_output_____ |
DataManipulationWithPandas/Codecademy-PageVisitsFunnel.ipynb | ###Markdown
Copyright 2021 Parker Dunn [email protected] Alternate: [email protected] & [email protected] July 20th, 2021 Codecademy - Page Visits Funnel Skill Path: Analyze Data with Python Section: Data Manipulation with Pandas Topic: Multiple tables in Pandas Assignment ContextThis mini-project was meant to be a demonstration of what I have learned about working with multiple pandas dataframe tables. It is one part of a larger lesson about manipulating data with pandas/dataframes. This assignment is sort of a continuation of two previous pandas projects: "Petal Power Inventory" and "A/B Testing for ShoeFly." The collection of three projects demonstrates many of the things that I have learned to do with the pandas library. Assignment Description"Cool T-Shirts Inc. has asked you to analyze data on visits to their website. Your job is to build a *funnel*, which is a description of how many people continue to the next step of a multi-step process. In this case, our funnel is going to describe the following process:1. A user visits CoolTShirts.com2. A user adds a t-shirt to their cart3. A user clicks "checkout"4. A user actually purchases a t-shirt"
###Code
# import codecademylib
# Data provided for the assignment in visits.csv, cart.csv, checkout.csv, and purchase.csv
###Output
_____no_output_____
###Markdown
"*import codecademylib*" was included the assignment template. This is not a package that can be imported here so that line is commented out.The contents of *codecademylib* were not explicitly provided. The webpage and contents can be downloaded but the contents of the package are not clearly specified.I was able to open and generate a copy of *visits.csv*, *cart.csv*, *checkout.csv*, and *purchase.csv* so that the files can still be imported and the script can be run here.
###Code
import pandas as pd
visits = pd.read_csv("visits.csv", parse_dates = [1])
cart = pd.read_csv("cart.csv", parse_dates = [1])
checkout = pd.read_csv("checkout.csv", parse_dates = [1])
purchase = pd.read_csv("purchase.csv", parse_dates = [1])
print("Visits - First 8 rows\n",visits.head(8))
print("\nCart - first 8 rows\n", cart.head(8))
print("\nCheckout - first 8 rows\n",checkout.head(8))
print("\nPurchase - first 8 rows \n", purchase.head(8))
# Requested to combine visits and cart using a left merge
# Order not specified but seems like we are looking at all visits then ...
# how many visits turned into users starting a cart
visits_cart = pd.merge(visits, cart, how="left")
print(visits_cart.head(5), visits_cart.tail(5))
print("\nNumber of entries in merged df: {}\n".format(visits_cart.user_id.count()))
print("\n",len(visits_cart)) # Alternate approach suggested
# Requested: "How many timestamps are null for the cart_time col?"
print("Number of null timestamps: {}"\
.format(visits_cart.cart_time.isnull().sum()))
###Output
Number of null timestamps: 1652
###Markdown
The fact that there are 1652 null timestamps for "cart_time" in the merged dataframe means 1652 occurances of someone visiting the company website and not adding anything to the cart. This also suggests that 348 of 2000 visitors to the website added a t-shirt to their cart. __Checking on some information about the length and contents of the dataframes__
###Code
# Visits.csv
print("There are {} entries in the visits dataframe".format(len(visits)))
print("\nThere are {} unique user IDs in the visits dataframe.".format(visits.user_id.nunique()))
# cart.csv
print("There are {} entries in the cart dataframe.".format(len(cart)))
print("\nThere are {} unique user IDs in the cart dataframe.".format(cart.user_id.nunique()))
# checkout.csv
print("There are {} entries in the checkout dataframe.".format(len(checkout)))
print("\nThere are {} unique user IDs in the checkout dataframe.".format(checkout.user_id.nunique()))
###Output
There are 360 entries in the checkout dataframe.
There are 226 unique user IDs in the checkout dataframe.
###Markdown
The instructions in the assignment do not indicate that I was supposed to account for the repeated user IDs in checkout. From the dataframe information printed in the three cells above, there are clearly repeated user IDs, which isn't necessarily a barrier to working with the data. However, later in the assignment (*see below*), I was instructed to: "Repeat the left merge for `cart` and `checkout` and count `null` values. What percentage of users put items in their cart, but did not proceed to checkout?" Since the assignment did not indicate at all that I should account for repeat user IDs in the `checkout` data, I will not change any of the calculations below. However, the next section will extract this checkout data and show some of the repeats for confirmation.
###Code
# Grouping checkout by user_id and displaying frequency of user_ids
checkout_by_user = checkout.groupby("user_id").checkout_time.count().reset_index()
print(checkout_by_user.head(20))
print(type(checkout_by_user))
# Requested: "What percent of users who visited Cool T-Shirts Inc. ended up not
# placing a t-shirt in their cart?"
per_no_cart = (float(visits_cart.cart_time.isnull().sum()) / len(visits_cart))
print("Percentage of users who did not add a t-shirt to their cart: {:.2%}"\
.format(per_no_cart))
# Asked to complete a similar "left" merge again with cart and checkout this time
# counting null values again and determining percentage of users
# who did not go to checkout
cart_checkout = pd.merge(cart, checkout, how="left")
# len(cart_checkout) should contain all entries from cart
# regardless of if there is a matching entry in checkout
checkout_time_na = cart_checkout.checkout_time.isnull().sum()
per_no_checkout = float(checkout_time_na) / len(cart_checkout)
print(len(cart_checkout))
print("Null time values for 'checkout_time': {}".format(checkout_time_na))
print("Percentage of users who did not checkout: {:.2%}"\
.format(per_no_checkout))
###Output
482
Null time values for 'checkout_time': 122
Percentage of users who did not checkout: 25.31%
###Markdown
*__I do not see a problem with my work at this moment, but the numbers don't seem to add up here.__* It seems odd that we could have 348 users who visited the site and placed an item in their cart and then end up with 482 users (a greater number) who had an item in their cart. Unless I'm missing something right now, the left merge that creates "cart_checkout" should not have more values than the number of carts we found were created when analyzing the "visits_cart" data frame. _____ Update: I added sections above demonstrating the lengths of the original dataframes. Based on those original data frames, I am not sure how the dataframe merge created a dataframe with more entries than either of the original data frames. I believe the use of a "left" merge should have included all `user_id` entries from *cart* and only matching rows from *checkout*. I will further investigate this issue if I have time at some point. However, I am not particularly interested in this particular subject and believe I understand how to use pandas the way that this lesson was trying to teach. Using a different variation of `merge()` in the next section to double check that the "left" merge happened as expected.
###Code
# Checking on the cart and checkout merge
cart_checkout_outer = pd.merge(cart, checkout, how="outer")
print("Size of outer merge: {}".format(len(cart_checkout_outer)))
cart_checkout_specific_cols = pd.merge(cart, checkout, left_on="user_id",\
right_on="user_id", how="left")
print("Size of left merge with specific cols: {}".format(len(cart_checkout_specific_cols)))
###Output
Size of outer merge: 482
Size of left merge with specific cols: 482
###Markdown
**well..... the left merge appears to be performing the same way as an outer merge .....** I'm not sure why the "left" merge doesn't seem to be working correctly. Based on the documentation on `.merge()`: "left: use only keys from the left frame"
###Code
# Checking on inner merge of cart and checkout
print("Unique entries in cart: {}".format(cart.user_id.nunique()))
print("Unique entries in checkout: {}".format(checkout.user_id.nunique()))
cart_checkout_inner = pd.merge(cart, checkout, how="inner")
print("\nSize of inner merge: {}".format(len(cart_checkout_inner)))
###Output
Unique entries in cart: 348
Unique entries in checkout: 226
Size of inner merge: 360
###Markdown
Looks like there are two issues going on here. 1. There are multiple entries for some user_ids in checkout, which was briefly mentioned above. 2. It seems that if you only include user_ids from cart, then there are 360 entries for these user_ids. 3. I believe there are many entries in checkout for user_ids that do not match those of the user_ids in cart. Maybe I'm misunderstanding the user_id system but this seems weird to me. To confirm the presence of new user_ids in the checkout, I'm going to do an explicit `for` loop to check on this fact.
###Code
# First checking out the list/series of user_ids from cart
user_ids_cart = cart.user_id.to_list()
print(user_ids_cart[:9])
print(type(user_ids_cart))
# Now interating through the list of user_ids from checkout
user_ids_checkout = checkout.user_id.to_list()
counter = 0
for id in user_ids_checkout:
if (id in user_ids_cart):
continue
else:
counter+=1
print(counter)
## Well counter ends up being 0 .... soooo where are the extra entries coming from ....
# Grouped checkout in an above section -- borrowring that df here
print("Total count of the grouped checkout entries: {}".format(checkout_by_user.checkout_time.sum()))
###Output
_____no_output_____
###Markdown
I cannot make sense of the values that I am getting for the checkout and cart merge. * Number of entries for left merge: 482 * Number of entries for the outer merge: 482 - this should be the theoretical max * Number of entries for inner merge: 360 * Number of user_ids in checkout but not in cart: 0 *NOTE: The number above would have been what I expected in practice but does not match the other numbers as far as I can tell so far.* * Number of unique entries in cart: 348 * Number of unique entries in checkout: 226. For the left merge, there were 122 null time values for "checkout_time." AHHHHH okay I figured it out: There are 348 user_ids in cart. These user_ids are the only ones associated with the user_ids in checkout. There are 360 instances of a user_id from cart clicking on checkout. There are 122 instances of a user_id from cart not clicking on checkout. Then, of the 348 users from cart, if we take out the 122 that did not select checkout, we end up with the 226 unique users from checkout!
###Code
# asked to combine all of the data now using a series of left merges
all_data = visits.merge(cart, how="left")\
.merge(checkout, how="left")\
.merge(purchase, how="left")
print(all_data.head(10))
# Asked to find "What percentage of users proceeded to checkout, but did not purchase a t-shirt?"
num_checkout = len(all_data) - all_data.checkout_time.isnull().sum()
num_purchase = len(all_data) - all_data.purchase_time.isnull().sum()
checkout_no_purchase = (num_checkout - num_purchase)
print("\nNumber of Checkouts: {} | Number of Purchases: {}\n"\
.format(num_checkout, num_purchase))
print("Number of users who started checkout and did not make a purchase: {}"\
.format(checkout_no_purchase))
print("\nPercentage of users who proceeded to checkout but did not make a purchase: {:4%}"\
.format(float(checkout_no_purchase)/num_checkout))
# Confirming there are no entries that ...
# have a purchase_time without a checkout_time
check_null = lambda purchase_time: pd.isnull(purchase_time)  # comparing against the string "NaT" would not detect real NaT values
# Checking on what "all_data.checkout_time.isnull()" looks like
print(all_data.checkout_time.isnull().head(10))
checkout_null = all_data[all_data.checkout_time.isnull()] # contains only entries with no "checkout_time"
# checkout_null["purchase_null"] = checkout_null.purchase_time.apply(check_null)
# print(checkout_null.head(10))
# purchase_without_checkout = checkout_null.groupby("purchase_null").count()
print("If the numbers below are the same size, then all null checkout_times have null purchase_times")
print(len(checkout_null))
print(checkout_null.purchase_time.isnull().sum())  # count of nulls; len(...isnull()) would always equal len(checkout_null)
###Output
If the numbers below are the same size, then all null checkout_times have null purchase_times
1774
1774
###Markdown
In summary... * 16.89% of people __did checkout__ but __did not purchase__ * 25.31% of users __did add an item to cart__ but __did not proceed to checkout__ * 82.6% of users __visited the site__ but __did not add an item to cart__. The weakest step of the funnel is getting users to add an item to their cart once they are on the site. As noted above, these are the numbers that were requested (as far as I could tell). However, these numbers do not reflect the fact that some user_ids appear in checkout multiple times. Thus, the true share of users who "added an item to their cart but did not proceed to checkout" is actually higher than the 25.31% suggests. In reality, 122 of the 348 users in cart did not select checkout. This would put the percentage of people who added an item to cart and did not check out at... (see below)
###Code
print("Percent of users in cart who did not checkout (using number of unique users in cart): {:.2%}"\
.format(122.0/348))
# Requested to provide the average time from initial visit to final purchase
all_data['time_to_purchase'] = all_data.purchase_time \
- all_data.visit_time
print(all_data.time_to_purchase)
print(all_data.time_to_purchase.mean())
###Output
0 NaT
1 0 days 00:44:00
2 NaT
3 NaT
4 NaT
...
2367 NaT
2368 NaT
2369 NaT
2370 NaT
2371 NaT
Name: time_to_purchase, Length: 2372, dtype: timedelta64[ns]
0 days 00:43:53.360160965
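###Markdown
To recap the funnel in one place, the drop-off at each step can be recomputed directly from `all_data`. The sketch below mirrors the non-null-timestamp counting used earlier; note that it assumes the cart merge contributed a `cart_time` column, which is an assumption since that column name is never printed above.
###Code
num_visits = all_data.visit_time.notnull().sum()
num_cart = all_data.cart_time.notnull().sum()        # assumed column name from the cart merge
num_checkout = all_data.checkout_time.notnull().sum()
num_purchase = all_data.purchase_time.notnull().sum()
print("Visit -> cart drop-off: {:.2%}".format(1 - float(num_cart) / num_visits))
print("Cart -> checkout drop-off: {:.2%}".format(1 - float(num_checkout) / num_cart))
print("Checkout -> purchase drop-off: {:.2%}".format(1 - float(num_purchase) / num_checkout))
###Output
_____no_output_____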
|
Chapter 7/Annexes/Correlation Function.ipynb | ###Markdown
Correlation Function
###Code
"""
Function to compute the correlation coefficient from a pandas dataframe
Takes the equivalent of the X and Y from a pandas Dataframe
"""
import pandas as pd
###Output
_____no_output_____
###Markdown
The Zscore Function
###Code
def zscore(x,mu,sigma):
    # number of standard deviations by which x deviates from the mean mu
    return (x-mu)/sigma
###Output
_____no_output_____
###Markdown
The Correlation Function
###Code
def correlation(dx,dy):
    if isinstance(dx, pd.Series) and isinstance(dy, pd.Series):
        #Compute the means
        mx = dx.mean()
        my = dy.mean()
        #Compute the standard deviations
        sx = dx.std()
        sy = dy.std()
        #Sum the products of the z-scores (the second part of the equation)
        Q=0
        for i,j in zip(dx,dy):
            Q = Q + (zscore(i,mx,sx)*zscore(j,my,sy))
        #multiply with the first part, 1/(n-1), and return the result
        return (1/(dx.shape[0]-1))*Q
    else:
        print('Inputs should be pandas Series (dataframe columns)')
###Output
_____no_output_____
###Markdown
* Example 1
###Code
df = pd.DataFrame({'x':[1,2,2,3],'y':[1,2,3,6]})
correlation(df['x'],df['y'])
###Output
_____no_output_____
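###Markdown
As a quick sanity check (a sketch added here, not part of the original derivation), the hand-rolled result can be compared against pandas' built-in Pearson correlation, which should return essentially the same value.
###Code
# Should closely match correlation(df['x'], df['y']) from the cell above
df['x'].corr(df['y'])
###Output
_____no_output_____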
###Markdown
* Example 2
###Code
df = pd.DataFrame({'x':[43,21,25,42,57,59],'y':[99,65,79,75,87,81]})
correlation(df['x'],df['y'])
###Output
_____no_output_____ |
Stock_Algorithms/t_SNE.ipynb | ###Markdown
t-distributed Stochastic Neighbor Embedding (t-SNE) t-SNE is a stochastic, non-linear dimensionality reduction algorithm. It takes both a local and a global view of the data; however, unlike PCA it does not work by simply truncating dimensions, and it has no deterministic closed-form solution.
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
# yahoo finance is used to fetch data
import yfinance as yf
yf.pdr_override()
# input
symbol = 'AMD'
start = '2014-01-01'
end = '2018-08-27'
# Read data
dataset = yf.download(symbol,start,end)
# View Columns
dataset.head()
dataset['Increase_Decrease'] = np.where(dataset['Volume'].shift(-1) > dataset['Volume'],1,0)
dataset['Buy_Sell_on_Open'] = np.where(dataset['Open'].shift(-1) > dataset['Open'],1,0)
dataset['Buy_Sell'] = np.where(dataset['Adj Close'].shift(-1) > dataset['Adj Close'],1,0)
dataset['Returns'] = dataset['Adj Close'].pct_change()
dataset = dataset.dropna()
dataset.tail()
dataset.shape
X = np.array(dataset)
# Import TSNE
from sklearn.manifold import TSNE
model = TSNE(learning_rate=200)
model.fit_transform(X)
X_embedded = TSNE(n_components=2).fit_transform(X)
X_embedded.shape
import time
time_start = time.time()
time_tsne = TSNE(random_state=123).fit_transform(X)
print('t-SNE done! Time elapsed: {} seconds'.format(time.time()-time_start))
###Output
t-SNE done! Time elapsed: 6.093719720840454 seconds
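###Markdown
A common next step (sketched below; it was not part of the original run) is to plot the 2-D embedding and color the points by one of the label columns, e.g. `Buy_Sell`, to see whether the classes separate at all. It reuses the `time_tsne` embedding computed above.
###Code
plt.figure(figsize=(8, 6))
plt.scatter(time_tsne[:, 0], time_tsne[:, 1], c=dataset['Buy_Sell'], cmap='coolwarm', s=10)
plt.colorbar(label='Buy_Sell')
plt.title('t-SNE embedding of {} daily features'.format(symbol))
plt.show()
###Output
_____no_output_____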
|
python/5-pandas-dataframe.ipynb | ###Markdown
Programming with Python Session 5 Pandas DataFrames https://swcarpentry.github.io/python-novice-gapminder/08-data-frames/ Questions- How can I do statistical analysis of tabular data? Objectives- Select individual values from a Pandas dataframe.- Select entire rows or entire columns from a dataframe.- Select a subset of both rows and columns from a dataframe in a single operation.- Select a subset of a dataframe by a single Boolean criterion. First note about Pandas DataFrames/Series A `DataFrame` is a collection of `Series`; the `DataFrame` is the way Pandas represents a table, and `Series` is the data structure Pandas uses to represent a column. Pandas is built on top of the Numpy library, which in practice means that most of the methods defined for Numpy's Arrays apply to Pandas' `Series`/`DataFrames`. What makes Pandas so attractive is the powerful interface to access individual records of the table, proper handling of missing values, and relational-database operations between `DataFrames`. Use `DataFrame.iloc[..., ...]` to select values by numerical index. Can specify location by numerical index analogously to the 2D version of character selection in strings.
###Code
import pandas
#Hints:
#index_col='country'
#data.iloc[0, 0]
# read file, make 'country' as an index for the column
# index of the indexes of the locations or rows: iloc
# similar to the columns, a list of indexes of rows can be passed
# the left side of , inside the square bracket is used for rows and right side is for column
# get value from the first cell
# get values from multiple cells
# get all the rows but a few columns
# get all the columns but a few rows
# in the last commands, ':' represents slicing
#: with a number defines where to slice, but without number it takes everything
#save the subset (sliced_data) in a variable
#get statistics of the sliced data
###Output
_____no_output_____
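###Markdown
One possible way to fill in the template above (a solution sketch only; it assumes the gapminder Europe file that is used later in this lesson):
###Code
# read the file, making 'country' the row index
data = pandas.read_csv('data/gapminder_gdp_europe.csv', index_col='country')
# get the value from the first cell (first row, first column)
print(data.iloc[0, 0])
# get values from multiple cells by passing lists of row and column positions
print(data.iloc[[0, 1], [0, 1]])
# all rows, but only the first two columns
print(data.iloc[:, 0:2])
# all columns, but only the first two rows
print(data.iloc[0:2, :])
###Output
_____no_output_____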
###Markdown
Use `DataFrame.loc[..., ...]` to select values by names.Can specify location by row name analogously to 2D version of dictionary keys.
###Code
# Using asia data here
# data.loc["India", "gdpPercap_1952"]
data = pandas.read_csv("data/gapminder_gdp_asia.csv", index_col='country')
data.loc["India", "gdpPercap_1952"]
# loc is like iloc, but with iloc you give integer positions for both rows and columns,
# whereas for loc, you have to use keys/index names
# again, the left side of the ',' inside the square brackets is reserved for rows, and the right side is for columns
###Output
_____no_output_____
###Markdown
Use `:` on its own to mean all columns or all rows.Just like Python’s usual slicing notation.
###Code
# Slice by "China", :
# Would get the same result printing data.loc["China"] (without a second index).
# Would get a column data["gdpPercap_1952"]
# Also get the same result printing data.gdpPercap_1952 (since it’s a column name)
###Output
_____no_output_____
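###Markdown
A possible solution sketch for the template above, using the Asia dataframe loaded in the previous cell:
###Code
# all columns for China (the ':' means "every column"); data.loc["China"] is equivalent
print(data.loc["China", :])
# a whole column, for all countries; data["gdpPercap_1952"] and data.gdpPercap_1952 are equivalent
print(data.loc[:, "gdpPercap_1952"])
###Output
_____no_output_____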
###Markdown
Select multiple columns or rows using `DataFrame.loc` and a named slice
###Code
# slice India to Israel, and include all columns by ':'
# take all the rows and get data from 1972 to 1982
# slice rows, India to Israel, and columns, 1972 to 1982
###Output
_____no_output_____
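###Markdown
A possible solution sketch for the named-slice template above (it assumes the countries are listed alphabetically, as in the gapminder files, and that column names follow the `gdpPercap_<year>` pattern):
###Code
# rows India through Israel, all columns
print(data.loc["India":"Israel", :])
# all rows, columns gdpPercap_1972 through gdpPercap_1982
print(data.loc[:, "gdpPercap_1972":"gdpPercap_1982"])
# both restrictions at once
print(data.loc["India":"Israel", "gdpPercap_1972":"gdpPercap_1982"])
###Output
_____no_output_____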
###Markdown
In the above code, we discover that slicing using loc is inclusive at both ends, which differs from slicing using iloc, where slicing indicates everything up to but not including the final index. Result of slicing can be used in further operations- Usually don’t just print a slice.- All the statistical operators that work on entire dataframes work the same way on slices. - E.g., calculate max of a slice.
###Code
# you can do multiple operations on your dataframe or its subset
# (assumes `subset3` was saved from one of the slicing steps above)
subset3.T.iloc[-1].to_csv('test_subset.csv')
###Output
_____no_output_____
###Markdown
Use comparisons to select data based on value- Comparison is applied element by element.- Returns a similarly-shaped dataframe of True and False.
###Code
# Use a subset of data (from previous section) to keep output readable.
# Which values were greater than 10000 ?
###Output
_____no_output_____
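###Markdown
A possible solution sketch for the comparison template above, reusing the India-to-Israel slice from the previous section:
###Code
subset = data.loc["India":"Israel", "gdpPercap_1972":"gdpPercap_1982"]
# element-by-element comparison returns a same-shaped dataframe of True/False
print(subset > 10000)
###Output
_____no_output_____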
###Markdown
Select values or NaN using a Boolean mask.- A frame full of Booleans is sometimes called a mask because of how it can be used. - Get the value where the mask is true, and NaN (Not a Number) where it is false.- Useful because NaNs are ignored by operations like max, min, average, etc.
###Code
# look at head() and describe() of the filtered subset
###Output
_____no_output_____
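###Markdown
A possible solution sketch for the mask template above, continuing with the `subset` frame from the previous sketch:
###Code
mask = subset > 10000
# keep values where the mask is True, NaN elsewhere
filtered = subset[mask]
print(filtered.head())
# NaNs are ignored by the summary statistics
print(filtered.describe())
###Output
_____no_output_____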
###Markdown
Exercise 1 - Selection of individual valuesAssume Pandas has been imported into your notebook and the Gapminder GDP data for Europe has been loaded:
###Code
import pandas
df = pandas.read_csv('data/gapminder_gdp_europe.csv', index_col='country')
###Output
_____no_output_____
###Markdown
Write an expression to find the Per Capita GDP of Serbia in 2007. Exercise 2 - Extent of slicing- Do the two statements below produce the same output?- Based on this, what rule governs what is included (or not) in numerical slices and named slices in Pandas? ```Pythonprint(data.iloc[0:2, 0:2])print(data.loc['Albania':'Belgium', 'gdpPercap_1952':'gdpPercap_1962'])``` Exercise 3 - Reconstructing data- Explain what each line in the following short program does: what is in first, second, etc.?
###Code
first = pandas.read_csv('data/gapminder_all.csv', index_col='country')
second = first[first['continent'] == 'Americas']
third = second.drop('Puerto Rico')
fourth = third.drop('continent', axis = 1)
fourth.to_csv('result.csv')
###Output
_____no_output_____
###Markdown
Exercise 4 - Selecting indicesExplain in simple terms what `idxmin` and `idxmax` do in the short program below. When would you use these methods?
###Code
data = pandas.read_csv('data/gapminder_gdp_europe.csv', index_col='country')
print(data.idxmin())
print(data.idxmax())
###Output
_____no_output_____
###Markdown
Exercise 5 - Practice with selectionAssume Pandas has been imported and the Gapminder GDP data for Europe has been loaded. Write an expression to select each of the following:- GDP per capita for all countries in 1982.- GDP per capita for Denmark for all years.- GDP per capita for all countries for years after 1985.- GDP per capita for each country in 2007 as a multiple of GDP per capita for that country in 1952.
###Code
df = pandas.read_csv('data/gapminder_all.csv', index_col='country')
###Output
_____no_output_____
###Markdown
Keypoints- Use `DataFrame.iloc[..., ...]` to select values by integer location.- Use `:` on its own to mean all columns or all rows.- Select multiple columns or rows using `DataFrame.loc` and a named slice.- Result of slicing can be used in further operations.- Use comparisons to select data based on value.- Select values or NaN using a Boolean mask.
###Code
#A demonstration of how you can create and modify a dataframe
import pandas
#Create an empty df with one column 'X'
df = pandas.DataFrame(columns = ["X"])
#Create an empty df with two columns 'X' and 'Y'
df1 = pandas.DataFrame(columns = ["X", "Y"])
#Create an empty df with two columns 'X' and 'Y' and 1 row 'a'
df2 = pandas.DataFrame(columns = ["X", "Y"], index=['a'])
#Create an empty df with two columns 'X' and 'Y' and multiple rows 'a', 'b', 'c'
df3 = pandas.DataFrame(columns = ["X", "Y"], index=['a', 'b', 'c'])
#Create a df with two columns 'X' and 'Y' and multiple rows 'a', 'b', 'c', and initialize cells with 0s or 1s
df4 = pandas.DataFrame(0, columns = ["X", "Y"], index=['a', 'b', 'c'])
#insert a value for the row a
df4.loc['a'] = 1 # this will add 1 for both the cells in the row 'a'
df4.loc['a'] = [1, 12] # this will add 1 for the cell X and 12 for the cell Y in the row a
#insert a value for the column X
df4['X'] = 1 # this will add 1 for all the cells in the column 'X'
df4['X'] = [1, 20, 24] # this will add different values in the cells of the column 'X'
#insert a value in the soecific cell
df4.loc['c', 'Y'] = 34
# you can use iloc similarly
df4.iloc[1, 1] = 29
# initialize a new column with values
df4['Z'] = [22, 38, 44]
# add an empty column
df4['ZZ'] = 0
df4
#sort data by first column
df4.sort_index()
#sort in reverse order
df4.sort_index(ascending=False)
#sort by a defined column
df4.sort_values(by='Y')
#sort by multiple columns
df4.sort_values(by=['Y', 'Z'])
# REMINDER: you can always pass multiple values for the commands in pandas using [] brackets
#create a mask for values more than 1 in the column X
df4[df4['X']>1]
#masking by multiple conditions (each condition needs its own parentheses because & binds tighter than >)
df4[(df4['X']>1) & (df4['Y']<30)]
#check null values
df4.isnull()
#replacing values
df5 = df4.replace(0, '66')
df5['ZZ'] = ['col1', 'col2', 'col3']
df5
df6 = df5.set_index('ZZ')
#groupby
df6 = df5.groupby('X')['Y'].mean() #group by the column 'X' and get a mean of the values in 'Y'
df6
# create a new dataframe ndf
ndf = pandas.DataFrame([['gene1', 299], ['gene2', 599], ['gene3', 678]], index=['col1', 'col2', 'col3'], columns=['Gene', 'Length'])
ndf
# merge df6 and ndf
df6_ndf_1 = pandas.concat([df6, ndf])
df6_ndf_1
# by default the concatanation of the dataframe happens in the axis 0, or rows
# merge df6 and ndf using the column axis (axis=1)
df6_ndf_2 = pandas.concat([df6, ndf], axis=1)
df6_ndf_2
# pandas
# Optionally use join
df6_ndf_3 = ndf.join(df6)
df6_ndf_3
# tryout the commands from your cheatsheets here
###Output
_____no_output_____ |
Exercise_III_Collaboration_and_Competition/.ipynb_checkpoints/Tennis-checkpoint.ipynb | ###Markdown
Collaboration and Competition---In this notebook, you will learn how to use the Unity ML-Agents environment for the third project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) program. 1. Start the EnvironmentWe begin by importing the necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
###Code
from unityagents import UnityEnvironment
import numpy as np
###Output
_____no_output_____
###Markdown
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.- **Mac**: `"path/to/Tennis.app"`- **Windows** (x86): `"path/to/Tennis_Windows_x86/Tennis.exe"`- **Windows** (x86_64): `"path/to/Tennis_Windows_x86_64/Tennis.exe"`- **Linux** (x86): `"path/to/Tennis_Linux/Tennis.x86"`- **Linux** (x86_64): `"path/to/Tennis_Linux/Tennis.x86_64"`- **Linux** (x86, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86"`- **Linux** (x86_64, headless): `"path/to/Tennis_Linux_NoVis/Tennis.x86_64"`For instance, if you are using a Mac, then you downloaded `Tennis.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:```env = UnityEnvironment(file_name="Tennis.app")```
###Code
env = UnityEnvironment(file_name="./Tennis_Windows_x86_64/Tennis.exe")
###Output
INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
Unity brain name: TennisBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 8
Number of stacked Vector Observation: 3
Vector Action space type: continuous
Vector Action space size (per agent): 2
Vector Action descriptions: ,
###Markdown
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
###Code
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
_____no_output_____
###Markdown
2. Examine the State and Action SpacesIn this environment, two agents control rackets to bounce a ball over a net. If an agent hits the ball over the net, it receives a reward of +0.1. If an agent lets a ball hit the ground or hits the ball out of bounds, it receives a reward of -0.01. Thus, the goal of each agent is to keep the ball in play.The observation space consists of 8 variables corresponding to the position and velocity of the ball and racket. Two continuous actions are available, corresponding to movement toward (or away from) the net, and jumping. Run the code cell below to print some information about the environment.
###Code
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
print(states)
state_size = states.shape[1]
print('\nThere are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
critic_size = num_agents * state_size
print('This is the critic_size: {}'. format(critic_size))
###Output
Number of agents: 2
Size of each action: 2
[[ 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. -6.65278625 -1.5
-0. 0. 6.83172083 6. -0. 0. ]
[ 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. -6.4669857 -1.5
0. 0. -6.83172083 6. 0. 0. ]]
There are 2 agents. Each observes a state with length: 24
The state for the first agent looks like: [ 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. -6.65278625 -1.5
-0. 0. 6.83172083 6. -0. 0. ]
This is the critic_size: 48
###Markdown
3. Take Random Actions in the EnvironmentIn the next code cell, you will learn how to use the Python API to control the agents and receive feedback from the environment.Once this cell is executed, you will watch the agents' performance, if they select actions at random with each time step. A window should pop up that allows you to observe the agents.Of course, as part of the project, you'll have to change the code so that the agents are able to use their experiences to gradually choose better actions when interacting with the environment!
###Code
# for i in range(1, 6): # play game for 5 episodes
# env_info = env.reset(train_mode=False)[brain_name] # reset the environment
# states = env_info.vector_observations # get the current state (for each agent)
# scores = np.zeros(num_agents) # initialize the score (for each agent)
# while True:
# actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
# actions = np.clip(actions, -1, 1) # all actions between -1 and 1
# env_info = env.step(actions)[brain_name] # send all actions to tne environment
# next_states = env_info.vector_observations # get next state (for each agent)
# rewards = env_info.rewards # get reward (for each agent)
# dones = env_info.local_done # see if episode finished
# scores += env_info.rewards # update the score (for each agent)
# states = next_states # roll over states to next time step
# if np.any(dones): # exit loop if episode finished
# break
# print('Score (max over agents) from episode {}: {}'.format(i, np.max(scores)))
###Output
_____no_output_____
###Markdown
When finished, you can close the environment.
###Code
# env.close()
###Output
_____no_output_____
###Markdown
4. It's Your Turn!Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:```pythonenv_info = env.reset(train_mode=True)[brain_name]```
###Code
from collections import deque
from itertools import count
import torch
import matplotlib.pyplot as plt
%matplotlib inline
import gym
import random
import torch
import numpy as np
from maddpg import MADDPG
the_agents = MADDPG(state_size, action_size, 2, num_agents)
checkpoint_actor = 'checkpoint_actor_cuda.pth'
checkpoint_critic = 'checkpoint_critic_cuda.pth'
def maddpg(n_episodes=5000, max_t=1000):
scores_deque = deque(maxlen=100)
scores_all = []
scores = []
max_score = -np.Inf
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name]
states = np.reshape(env_info.vector_observations, (1, state_size * num_agents)) # get states and combine them
the_agents.reset()
scores = np.zeros(num_agents)
while True:
actions = the_agents.all_agents_act(states)
env_info = env.step(actions)[brain_name] # send all actions to the environment
# get next state (for each agent)
next_states = np.reshape(env_info.vector_observations, (1, state_size * num_agents))
rewards = env_info.rewards # get reward (for each agent)
scores += np.max(rewards) # update the score (for each agent)
dones = env_info.local_done # see if episode finished
the_agents.step(states, actions, rewards, next_states, dones)
states = next_states
if np.any(dones): # exit loop if episode finished
break
score_max = np.max(scores)
scores_deque.append(score_max)
scores_all.append(score_max)
print('\rEpisode {}\tAverage Score: {:.3f}\tScore: {:.3f}'.format(i_episode, np.mean(scores_deque), score_max), end="")
if i_episode % 100 == 0:
if np.mean(scores_deque) >= 0.5:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.3f}'.format(i_episode, np.mean(scores_deque)))
for agent in the_agents.agents:
torch.save(agent.actor_local.state_dict(), checkpoint_actor)
torch.save(agent.critic_local.state_dict(), checkpoint_critic)
break
else:
for agent_index, agent in enumerate(the_agents.agents):
torch.save(agent.actor_local.state_dict(), 'checkpoint_actor_{}.pth'.format(agent_index))
torch.save(agent.critic_local.state_dict(), 'checkpoint_critic_{}.pth'.format(agent_index))
print('\rEpisode {}\tAverage Score: {:.3f}'.format(i_episode, np.mean(scores_deque)))
return scores_all
# # MADDPG function without using the MADDPG Manager class
# from maddpg_agent import Agent
# SOLVED_SCORE = 0.5
# CONSEC_EPISODES = 100
# PRINT_EVERY = 10
# ADD_NOISE = True
# def maddpg(n_episodes=2000, max_t=1000, train_mode=True):
# """Multi-Agent Deep Deterministic Policy Gradient (MADDPG)
# Params
# ======
# n_episodes (int) : maximum number of training episodes
# max_t (int) : maximum number of timesteps per episode
# train_mode (bool) : if 'True' set environment to training mode
# """
# scores_window = deque(maxlen=CONSEC_EPISODES)
# scores_all = []
# moving_average = []
# best_score = -np.inf
# best_episode = 0
# already_solved = False
# for i_episode in range(1, n_episodes+1):
# env_info = env.reset(train_mode=train_mode)[brain_name] # reset the environment
# states = np.reshape(env_info.vector_observations, (1,48)) # get states and combine them
# agent_0.reset()
# agent_1.reset()
# scores = np.zeros(num_agents)
# while True:
# actions = get_actions(states, ADD_NOISE) # choose agent actions and combine them
# env_info = env.step(actions)[brain_name] # send both agents' actions together to the environment
# next_states = np.reshape(env_info.vector_observations, (1, 48)) # combine the agent next states
# rewards = env_info.rewards # get reward
# done = env_info.local_done # see if episode finished
# agent_0.step(states, actions, rewards[0], next_states, done, 0) # agent 1 learns
# agent_1.step(states, actions, rewards[1], next_states, done, 1) # agent 2 learns
# scores += np.max(rewards) # update the score for each agent
# states = next_states # roll over states to next time step
# if np.any(done): # exit loop if episode finished
# break
# ep_best_score = np.max(scores)
# scores_window.append(ep_best_score)
# scores_all.append(ep_best_score)
# moving_average.append(np.mean(scores_window))
# # save best score
# if ep_best_score > best_score:
# best_score = ep_best_score
# best_episode = i_episode
# # print results
# if i_episode % PRINT_EVERY == 0:
# print('Episodes {:0>4d}-{:0>4d}\tMax Reward: {:.3f}\tMoving Average: {:.3f}'.format(
# i_episode-PRINT_EVERY, i_episode, np.max(scores_all[-PRINT_EVERY:]), moving_average[-1]))
# # determine if environment is solved and keep best performing models
# if moving_average[-1] >= SOLVED_SCORE:
# if not already_solved:
# print('<-- Environment solved in {:d} episodes! \
# \n<-- Moving Average: {:.3f} over past {:d} episodes'.format(
# i_episode-CONSEC_EPISODES, moving_average[-1], CONSEC_EPISODES))
# already_solved = True
# # save weights
# torch.save(agent_0.actor_local.state_dict(), 'models/checkpoint_actor_0.pth')
# torch.save(agent_0.critic_local.state_dict(), 'models/checkpoint_critic_0.pth')
# torch.save(agent_1.actor_local.state_dict(), 'models/checkpoint_actor_1.pth')
# torch.save(agent_1.critic_local.state_dict(), 'models/checkpoint_critic_1.pth')
# elif ep_best_score >= best_score:
# print('<-- Best episode so far!\
# \nEpisode {:0>4d}\tMax Reward: {:.3f}\tMoving Average: {:.3f}'.format(
# i_episode, ep_best_score, moving_average[-1]))
# # save weights
# torch.save(agent_0.actor_local.state_dict(), 'models/checkpoint_actor_0.pth')
# torch.save(agent_0.critic_local.state_dict(), 'models/checkpoint_critic_0.pth')
# torch.save(agent_1.actor_local.state_dict(), 'models/checkpoint_actor_1.pth')
# torch.save(agent_1.critic_local.state_dict(), 'models/checkpoint_critic_1.pth')
# elif (i_episode-best_episode) >= 200:
# # stop training if model stops converging
# print('<-- Training stopped. Best score not matched or exceeded for 200 episodes')
# break
# else:
# continue
# return scores_all, moving_average
# def get_actions(states, add_noise):
# '''gets actions for each agent and then combines them into one array'''
# print(states)
# action_0 = agent_0.act(states, add_noise) # agent 0 chooses an action
# action_1 = agent_1.act(states, add_noise) # agent 1 chooses an action
# return np.concatenate((action_0, action_1), axis=0).flatten()
# initialize agents
# agent_0 = Agent(state_size, action_size, num_agents=1, random_seed=0)
# agent_1 = Agent(state_size, action_size, num_agents=1, random_seed=0)
# import torch
# from torch import tensor
# values = [tensor([0.0718], device='cuda:0'),
# tensor([-0.0199], device='cuda:0'),
# tensor([0.0498], device='cuda:0'),
# tensor([0.0125], device='cuda:0'),
# tensor([0.0443], device='cuda:0' ),
# tensor([0.0130], device='cuda:0' ),
# tensor([0.0508], device='cuda:0'),
# tensor([0.0505], device='cuda:0'),
# tensor([0.1116], device='cuda:0'),
# tensor([0.1131], device='cuda:0'),
# tensor([0.1449], device='cuda:0'),
# tensor([0.1440], device='cuda:0'),
# tensor([0.1462], device='cuda:0'),
# tensor([0.1456], device='cuda:0' ),
# tensor([0.1458], device='cuda:0' ),
# tensor([0.0514], device='cuda:0' ),
# tensor(0.2463, device='cuda:0' ),
# tensor(0.9718, device='cuda:0' ),
# tensor(0.6775, device='cuda:0' ),
# tensor(0.9377, device='cuda:0' ),
# tensor(0.8323, device='cuda:0' ),
# tensor(0.8711, device='cuda:0' ),
# tensor(1.0203, device='cuda:0' ),
# tensor(0.9050, device='cuda:0' ),
# tensor(0.8278, device='cuda:0' ),
# tensor(0.6138, device='cuda:0' ),
# tensor(0.5070, device='cuda:0'),
# tensor(0.8741, device='cuda:0' ),
# tensor(0.3989, device='cuda:0' ),
# tensor(0.7831, device='cuda:0' ),
# tensor(0.4431, device='cuda:0' ),
# tensor(1.0598, device='cuda:0' )]
# #var1 = tensor([0.0514], device='cuda:0' )
# #var2 = tensor(0.6138, device='cuda:0' ).unsqueeze(0)
# for i, value in enumerate(values):
# if(value.size() == torch.Size([])):
# values[i] = value.unsqueeze(0)
# print('\n')
# #print(var1.size())
# #print(var2.size() == torch.Size([]))
# for value in values:
# print(value)
# print(torch.cat(values, 0))
scores = maddpg()
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
from unityagents import UnityEnvironment
from collections import deque
from itertools import count
import torch
import matplotlib.pyplot as plt
%matplotlib inline
import gym
import random
import torch
import numpy as np
from maddpg_agent import Agent
env = UnityEnvironment(file_name="./Tennis_Windows_x86_64/Tennis.exe")
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
# size of each action
action_size = brain.vector_action_space_size
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
critic_size = num_agents * state_size
agents = []
for i in range(num_agents):
agent = Agent(state_size, action_size, 2, num_agents)
agents.append(agent)
for agent_index, agent in enumerate(agents):
agent.actor_local.load_state_dict(torch.load('checkpoint_actor_{}.pth'.format(agent_index)))
agent.critic_local.load_state_dict(torch.load('checkpoint_critic_{}.pth'.format(agent_index)))
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
for i in range(1, 12):                                     # play game for 11 episodes
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
states = np.reshape(env_info.vector_observations, (1, state_size * num_agents)) # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
while True:
action_0 = agents[0].act(states, add_noise=False)
action_1 = agents[1].act(states, add_noise=False)
actions = np.concatenate((action_0, action_1), axis=0).flatten()
        env_info = env.step(actions)[brain_name]           # send all actions to the environment
next_states = np.reshape(env_info.vector_observations, (1, state_size * num_agents)) # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
break
print('Score (max over agents) from episode {}: {}'.format(i, np.max(scores)))
env.close()
###Output
Score (max over agents) from episode 1: 0.7000000104308128
Score (max over agents) from episode 2: 2.600000038743019
Score (max over agents) from episode 3: 0.0
Score (max over agents) from episode 4: 2.600000038743019
Score (max over agents) from episode 5: 0.0
|
vignettes/Basic_Usage.ipynb | ###Markdown
Basic usage **pdfmole**
###Code
from pdfmole import read_pdf, group_blocks, align_exact_match, kdplot
import numpy as np
import pandas as pd
###Output
_____no_output_____
###Markdown
Read the pdf-file
###Code
doc = read_pdf("../samples/cars.pdf")
###Output
_____no_output_____
###Markdown
Each pdf consists of several elements:
###Code
doc.elements
###Output
_____no_output_____
###Markdown
There is a get function for each element type `get_`
###Code
doc.get_metainfo()
###Output
_____no_output_____
###Markdown
By default the get methods return pandas `DataFrame` but the return type can also be changed to `list[dict]`.
###Code
doc.get_metainfo("list")
###Output
_____no_output_____
###Markdown
Transform the data into a rectangular format
###Code
d = doc.get_text()
d.head(3)
###Output
_____no_output_____
###Markdown
The method `get_text` returns the columns- `pid` ... an integer giving the page id.- `block` ... an integer giving the id of some grouping information provided by pdfminer.- `text` ... a character to be refined to the text.- `font` ... a string giving the font name.- `size` ... a float giving the font size.- `colorspace` ... a string giving the color space.- `color` ... a tuple of int giving the color.- `x0` ... a float giving the distance from the left of the page to the left edge of the bounding box.- `y0` ... a float giving the distance from the bottom of the page to the lower edge of the bounding box.- `x1` ... a float giving the distance from the left of the page to the right edge of the bounding box.- `y1` ... a float giving the distance from the bottom of the page to the upper edge of the bounding box.for each character in the document. 1. Remove control characters
###Code
deleted = d[~(d['size'] > 0)]
d = d[d['size'] > 0]
deleted.head(3)
###Output
_____no_output_____
###Markdown
2. Group the blocksIn many situations the grouping information provided by pdfminer is helpfull. So we start by grouping the blocks.
###Code
d = group_blocks(d)
d.head(3)
###Output
_____no_output_____
###Markdown
Group blocks adds a `rotated` column; the usage of this column is shown in the [`Rotated_Text`]() vignette. We won't use it here, therefore we remove it along with other unused columns.
###Code
rm_us = {"rotated", "size", "colorspace", "color"}
d = d[[k for k in d.columns if not k in rm_us]]
###Output
_____no_output_____
###Markdown
3. FilterInformation like the font name and font size can be used to remove headers and footers.
###Code
d.tail(3)
d = d[d['font'].str.startswith("Courier")]
###Output
_____no_output_____
###Markdown
4. Infer the row ids The **R** [**pdfmole**](https://github.com/FlorianSchwendinger/pdfmole) package also has an alignment method based on hierarchical clustering. This package currently only contains an exact-match alignment method. We round the `'y0'` coordinates to allow for at least some fuzziness.
###Code
d = d.assign(rowid = align_exact_match(d['y0'].round()))
###Output
_____no_output_____
###Markdown
5. Infer the column ids An easy approach to choosing the column ids is to provide the column widths, as in fixed-width csv files; here the splits are instead estimated from the density of character positions.
###Code
plt, grid = kdplot(d)
column_splits = np.array(range(len(grid) - 1))[(grid[:-1] - grid[1:]) > 30] + 1
plt.axvline(column_splits[0], color = "red")
plt.axvline(column_splits[1], color = "red")
plt.show()
d['colid'] = -1
for col_id, column_split in enumerate(column_splits):
b = (d['colid'] < 0) & ((d['x0'] + d['x1']) / 2 < column_split)
d.loc[b, 'colid'] = col_id
d.loc[d['colid']<0, 'colid'] = col_id + 1
###Output
_____no_output_____
###Markdown
6. Transform data into tabular formTo transform the data into a tabular form, we will first transform the `DataFrame` into a [coordinate list sparse matrix](https://en.wikipedia.org/wiki/Sparse_matrixCoordinate_list_(COO)) format (also known as simple triplet matrix format.)
###Code
group_keys = ['pid', 'rowid', 'colid']
d = d.sort_values(by = group_keys + ['x0'])
coo_df = d.groupby(group_keys)['text'].apply(' '.join).reset_index()
coo_df['row_id'] = (~coo_df[['pid', 'rowid']].duplicated()).cumsum() - 1
coo_df = coo_df[['row_id', 'colid', 'text']]
coo_df = coo_df.rename(columns={'colid': 'col_id'})
coo_df.head()
###Output
_____no_output_____
###Markdown
Second, we transform it to a matrix and then to a `DataFrame`.
###Code
ncol = coo_df['col_id'].max() + 1
nrow = coo_df['row_id'].max() + 1
mat = np.array((nrow * ncol) * ['NA'], dtype='object').reshape((nrow, ncol))
for _, row in coo_df.iterrows():
i, j, v = row
mat[i, j] = v
di = {mat[0,1]: mat[1:,1].astype(int), mat[0,2]: mat[1:,2].astype(int)}
df = pd.DataFrame(di)
df.head()
###Output
_____no_output_____ |
05_SlicerDockerNotebook.ipynb | ###Markdown
Slicer Jupyter using docker This notebook shows how views and the full application window can be displayed and configured to be used in JupyterLab when Slicer runs in docker or on a remote workstation. It relies on a remote desktop connection and web proxy set up in the [slicer-notebook docker image](https://github.com/Slicer/SlicerDocker/tree/master/slicer-notebook). This notebook can be [run in the web browser via Binder](https://mybinder.org/v2/gh/slicer/SlicerNotebooks/master?filepath=05_SlicerDockerNotebook.ipynb) or locally using a Jupyter server started by this command: docker run -p 8888:8888 -p49053:49053 -v path/to/my/notebooks:/home/sliceruser/work --rm -ti lassoan/slicer-notebook:latest Notes:- Replace `path/to/my/notebooks` by the actual local path to your notebook folder.- After the container is started, open the `https://127.0.0.1:8888` page in your web browser and copy-paste the token from the docker container's output.
###Code
# Read an image using SimpleITK
import JupyterNotebooksLib as slicernb
import SimpleITK as sitk
import sitkUtils as su
# Load 3D image using SimpleITK
reader = sitk.ImageFileReader()
reader.SetFileName("data/MRBrainTumor1.nrrd")
image = reader.Execute()
# Get the SimpleITK image into the Slicer scene
slicer.mrmlScene.Clear(False) # clear any previously loaded data from the scene
volumeNode = su.PushVolumeToSlicer(image)
# Show volume in slice views
slicernb.ViewDisplay("FourUp") # choose a layout where 3 slice views are present
slicer.util.setSliceViewerLayers(background=volumeNode, fit=True) # show this volume in slice viewers
# Create slice view widgets in the notebook
from ipywidgets import VBox
viewWidgets = VBox([slicernb.ViewSliceWidget('Red'), slicernb.ViewSliceWidget('Yellow'), slicernb.ViewSliceWidget('Green')])
viewWidgets.layout.max_width="400px"
display(viewWidgets)
# Apply some processing and view the updated results
# Process image
blurFilter = sitk.SmoothingRecursiveGaussianImageFilter()
blurFilter.SetSigma(1.0)
blurredImage = blurFilter.Execute(image)
# Update view widgets (without this, the user would need to move the sliders to get an updated image)
su.PushVolumeToSlicer(blurredImage, targetNode=volumeNode)
for viewWidget in viewWidgets.children:
viewWidget.sliceView.updateImage()
# Set up application window
app = slicernb.AppWindow()
# Hide patient information from slice view
slicernb.showSliceViewAnnotations(False)
# Show markups toolbar
slicer.modules.markups.toolBarVisible=True
# Show volume in 3D view using volume rendering
slicernb.showVolumeRendering(volumeNode, True)
display(app)
# Click on the toolbar buttons to create markups,
# then click in the viewers to place them.
# Display control point positions in each markup node.
from IPython.display import HTML
for markupsNode in getNodesByClass("vtkMRMLMarkupsNode"):
display(HTML(f"<h3>Markup: {markupsNode.GetName()}</h3>"))
display(dataframeFromMarkups(markupsNode))
# Show full markups module GUI
app.setContents("viewers")
slicer.util.findChild(slicer.util.mainWindow(), "PanelDockWidget").show()
slicer.modules.markups.toolBarVisible=True
selectModule("Markups")
# Show full application GUI
app.setContents("full")
# Create link that shows the application GUI in a new browser tab
from ipywidgets import HTML
HTML(f"""<a href="{slicernb.AppWindow.defaultDesktopUrl()}" target="_blank">
<b>Click here</b> to open application window in a new browser tab.</a>""")
# Show only viewers
app.setContents("viewers")
from ipywidgets import Button, HBox
import JupyterNotebooksLib as slicernb
fullButton = Button(description='Full')
fullButton.on_click(lambda b: slicernb.AppWindow().setContents("full"))
viewersButton = Button(description='Viewers')
viewersButton.on_click(lambda b: slicernb.AppWindow().setContents("viewers"))
markupsToolbarToggleButton = Button(description='Markups toolbar')
def toggleMarkupsToolBar(b):
slicer.modules.markups.toolBarVisible = not slicer.modules.markups.toolBarVisible
markupsToolbarToggleButton.on_click(toggleMarkupsToolBar)
HBox([fullButton, viewersButton, markupsToolbarToggleButton])
###Output
_____no_output_____ |
.ipynb_checkpoints/ISS Realtime Location-checkpoint.ipynb | ###Markdown
ISS Realtime Location Resources Original Code https://www.viralml.com/video-content.html?v=7c3nk69xsJw ISS Current Locationhttp://open-notify.org/Open-Notify-API/ISS-Location-Now/ Libraries
###Code
import requests
import pandas as pd
###Output
_____no_output_____
###Markdown
How Many People in the ISS? API: http://open-notify.org/Open-Notify-API/People-In-Space/
###Code
url = "http://api.open-notify.org/astros.json"
r = requests.get(url=url)
r.json()
df_people = r.json()['people']
df_people[0]
type(df_people)
df_people = pd.DataFrame(df_people)
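# The ISS current-location endpoint from the same Open Notify API (a small sketch;
# per the documentation linked above, the response carries an 'iss_position' dict
# with latitude/longitude plus a timestamp)
iss_url = "http://api.open-notify.org/iss-now.json"
iss_r = requests.get(url=iss_url)
iss_json = iss_r.json()
iss_now = pd.DataFrame([iss_json["iss_position"]])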
###Output
_____no_output_____ |
Conv Nets.ipynb | ###Markdown
Convolutional NetworksThis script demos using spatial convolutional neural networks to build a supervised classifier for the crop labels using the input temporal images as features and also compares them against other classifiers.
###Code
! pip install tqdm
! conda install -c conda-forge rtree --yes
from pylab import *
from keras.models import Model
from keras.layers import Conv2D, UpSampling2D, MaxPooling2D, concatenate, Input, BatchNormalization
from keras.optimizers import Adam, SGD
from keras import backend
import numpy as np
import common, preprocess
import tqdm, rtree
###Output
Using TensorFlow backend.
###Markdown
Because training a conv net can be computationally expensive, we're going to start with the previously selected features.
###Code
labels = common.loadNumpy('labels')
keyBands = preprocess.getKeyFeatures()
###Output
100%|██████████| 10/10 [01:22<00:00, 8.29s/it]
###Markdown
Sampling considerations Before we can build our conv net we need to appropriately sample the data. Because we are using spatial conv nets, and not 1D conv nets over the temporal axis, we need to take into account the significant amount of spatial autocorrelation in the data. A naive random sampling approach will show that the train/test scores are almost the same, because essentially there is no truly held-out data: some of the training data is virtually guaranteed to come from every part of the image. To correctly hold out some data, we need to divide the image into either strips or chunks that represent groups that are held out. This allows us to most closely emulate the process of training a model on some subset of spatial data and then applying it to a completely unseen area. I choose to hold out large rectangular tiles from the image. In each case, all of the tiles are entirely self-contained; no data leakage from nearby tiles is allowed.
###Code
h,w = labels.shape
groups = np.zeros((h,w),dtype='uint8')
groupSize = max(h//4,w//4)
group = 0
groupRTree = rtree.index.Index()
groupMap = {}
for i in range(0, h, groupSize):
for j in range(0,w,groupSize):
si,ei = i, i+groupSize
sj, ej = j, j+groupSize
if ei > h:
continue
if ej > w:
continue
if (i+2*groupSize) > h:
ei = h
if (j+2*groupSize) > w:
ej = w
groups[si:ei,sj:ej] = group
groupRTree.insert(group,(sj,si,ej,ei))
groupMap[group] = (si,ei,sj,ej)
# print(i,j,group)
group+=1
figure()
title('Plot of the "tiles" that will spatially group the data together')
imshow(groups)
colorbar()
###Output
_____no_output_____
###Markdown
In the above image, there are 12 groups. For our tests, we will hold out all of the data from one or more tiles at once, and use the remaining tiles to train the model. Because selecting which tiles to hold out is a potential source of bias, we will train in a cross-validation mode where we repeatedly train models on different subsets of the data by changing which tiles we are holding out. This will not allow us to say that we have a single model that performs best, but it will enable us to say in general this is how well we would expect a model trained this way to perform on totally unseen data, while removing the selection bias.
###Code
h,w = labels.shape
featureArray = np.zeros((h,w,len(keyBands)))
for k, key in enumerate(keyBands):
featureArray[:,:,k] = keyBands[key]
nclasses=int(labels.max())
print('there are n classes', nclasses)
# we need to use one hot labeling encoding for our labels
one_hot_labels = np.zeros((h,w,nclasses),dtype='f4')
figure(figsize=(40,40))
for k in range(nclasses):
mask = labels == k
im_ones = np.zeros_like(labels)
im_ones[mask] = 1
one_hot_labels[:,:,k] = im_ones
subplot(10,1,k+1)
imshow(im_ones,cmap='gray',vmax=1.0)
featureArray.shape
###Output
_____no_output_____
###Markdown
Now we load the data, and format it for our convolutional neural network. Our conv net will take 3d tiles of size 64x64x20 where 64x64 is the spatial dimension, and 20 is the number of features. I chose 64x64 because it gives us enough context to say what is going on nearby without making it so large that it becomes problematic to train the model. Because the tile shape does not evenly divide into the grouping shape, we have to be careful to ensure that we're not gathering data from outside of our group.
###Code
tileHeight = 64
tileWidth = 64
tiles = []
ytiles = []
tileGroups = []
tileInfo = {}
tileIndex = 0
for ty in range(0, h, tileHeight):
for tx in range(0, w, tileWidth):
sy = ty
sx = tx
ey = sy+tileHeight
ex = sx+tileWidth
groups = list(groupRTree.intersection((sx,sy,ex,ey)))
# if we span more than one group, then we need to make two tiles
# one for each group, but that is limited to each groups visible data
# this prevents data leakage from one group to the next .
for group in groups:
gsy,gey, gsx,gex = groupMap[group]
if ey > gey:
sy = gey-tileHeight
ey = gey
elif sy < gsy:
sy = gsy
ey = gsy + tileHeight
if ex > gex:
sx = gex-tileWidth
ex = gex
elif sx < gsx:
sx = gsx
ex = gsx+tileWidth
assert ex-sx == tileWidth
assert ey-sy == tileHeight
tileInfo[tileIndex] = (sy,ey,sx,ex, group)
tiles.append(featureArray[sy:ey,sx:ex])
ytiles.append(one_hot_labels[sy:ey,sx:ex])
tileGroups.append(group)
tileIndex+=1
tileArray = np.array(tiles)
yTileArray = np.array(ytiles)
groupArray = np.array(tileGroups)
del tiles
del ytiles
tileArray.shape, yTileArray.shape, groupArray.shape
from sklearn.model_selection import LeavePGroupsOut
###Output
_____no_output_____
###Markdown
Here's an example that shows how using the groups with the tiles will automatically break up the data for us.
###Code
lpo = LeavePGroupsOut(4)
print('n splits', lpo.get_n_splits(tileArray,yTileArray,groupArray))
k = 0
for train_index, test_index in lpo.split(tileArray, yTileArray,groupArray):
# print('train_index', train_index)
# print('test_index', test_index)
if k > 5:
break
figure(figsize=(40,40))
subplot(211)
title('Train data (white is selected)')
trainImg = zeros_like(labels)
for idx in train_index:
sy,ey,sx,ex, g = tileInfo[idx]
trainImg[sy:ey,sx:ex] = 1
imshow(trainImg,cmap='gray',vmax=1)
subplot(212)
title('Test data (white is selected)')
testImg = zeros_like(labels)
for idx in test_index:
sy,ey,sx,ex, g = tileInfo[idx]
testImg[sy:ey,sx:ex] = 1
imshow(testImg,cmap='gray',vmax=1)
k+=1
###Output
_____no_output_____
###Markdown
Now it's time to train our model. For our convolutional neural network we're going to use a vanilla U-net and treat this as a semantic segmentation problem.
###Code
def model(trainTiles, yTiles):
backend.clear_session()
nb,h,w,c = trainTiles.shape
nby,hy,wy,cy = yTiles.shape
assert nb == nby
assert h == hy
assert w == wy
i1 = Input((h,w,c))
# 64x64
c1 = Conv2D(32,3,padding='same',activation='relu')(i1)
c1 = Conv2D(32,3,padding='same',activation='relu')(c1)
p1 = MaxPooling2D()(c1)
b1 = BatchNormalization()(p1)
# 32x32
c2 = Conv2D(64,3,padding='same',activation='relu')(b1)
c2 = Conv2D(64,3,padding='same',activation='relu')(c2)
p2 = MaxPooling2D()(c2)
b2 = BatchNormalization()(p2)
# 16x16
c3 = Conv2D(128,3,padding='same',activation='relu')(b2)
c3 = Conv2D(128,3,padding='same',activation='relu')(c3)
p3 = MaxPooling2D()(c3)
b3 = BatchNormalization()(p3)
# 8x8
c4 = Conv2D(256,3,padding='same',activation='relu')(b3)
c4 = Conv2D(256,3,padding='same',activation='relu')(c4)
p4 = MaxPooling2D()(c4)
b4 = BatchNormalization()(p4)
# 4x4
c5 = Conv2D(512,3,padding='same',activation='relu')(b4)
c5 = Conv2D(512,3,padding='same',activation='relu')(c5)
# 8x8
u4 = UpSampling2D()(c5)
u4 = concatenate([c4,u4],axis=3)
u4 = Conv2D(256,3,padding='same',activation='relu')(u4)
u4 = Conv2D(256,3,padding='same',activation='relu')(u4)
u4 = BatchNormalization()(u4)
#16x16
u3 = UpSampling2D()(u4)
u3 = concatenate([c3,u3],axis=3)
u3 = Conv2D(128,3,padding='same',activation='relu')(u3)
u3 = Conv2D(128,3,padding='same',activation='relu')(u3)
u3 = BatchNormalization()(u3)
#32x32
u2 = UpSampling2D()(u3)
u2 = concatenate([c2,u2],axis=3)
u2 = Conv2D(64,3,padding='same',activation='relu')(u2)
u2 = Conv2D(64,3,padding='same',activation='relu')(u2)
u2 = BatchNormalization()(u2)
#64x64
u1 = UpSampling2D()(u2)
u1 = concatenate([c1,u1],axis=3)
u1 = Conv2D(32,3,padding='same',activation='relu')(u1)
u1 = Conv2D(32,3,padding='same',activation='relu')(u1)
u1 = BatchNormalization()(u1)
o1 = Conv2D(16, 3, padding='same',activation='relu')(u1)
o1 = Conv2D(cy, 1, padding='same',activation='softmax')(o1)
    return Model(inputs=[i1], outputs=[o1])
def lr(x, y):
return x[:,::-1,:], y[:,::-1,:]
def ud(x,y):
return x[::-1,:,:], y[::-1,:,:]
def udlr(x,y):
return ud(*lr(x,y))
def ident(x,y):
return x,y
def augmentGenerator(x,y,batchSize,augment=True):
# we have to use a generator function here because augmenting the raw data results in memory errors
# essentially you wind up with multiple copies of a 13000,64,64,20 array floating around, which is at least 4GB
# i think this has something to do with Jupyter holding onto variables beyond their normally expected lifespan, but
# i would have to debug this further to figure it out for sure.
b, h,w,c = x.shape
_, _, _, yc = y.shape
out_x = np.zeros((batchSize,h,w,c))
out_y = np.zeros((batchSize,h,w,yc))
funcs = [ident,lr,ud,udlr] if augment else [ident]
idx = np.arange(0,b*len(funcs))
print('len.idx', len(idx))
while True:
np.random.shuffle(idx)
for k in range(0,len(idx),batchSize):
if k + batchSize > len(idx):
break
out_x.fill(0)
out_y.fill(0)
group_idx = idx[k:k+batchSize]
didx = group_idx % b
fidx = group_idx // b
for j in range(batchSize):
func = funcs[fidx[j]]
ax,ay = func(x[didx[j]],y[didx[j]])
out_x[j] = ax
out_y[j] = ay
# bx = x[group_idx]
# by = y[group_idx]
# for j,func in enumerate(funcs):
# ax,ay = func(bx,by)
# out_x[j*ng:(j+1)*ng] = ax
# out_y[j*ng:(j+1)*ng] = ay
yield np.array(out_x), np.array(out_y)
def makeGenerators(x,y,batchSize,train_size=0.9):
xtrain,xval,ytrain,yval = train_test_split(x,y,train_size=train_size)
train_gen = augmentGenerator(xtrain,ytrain,batchSize)
val_gen = augmentGenerator(xval,yval,batchSize,augment=False)
return train_gen, val_gen
###Output
_____no_output_____
###Markdown
The summary of the model below shows the model schema.
###Code
m = model(tileArray, yTileArray)
m.summary()
###Output
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 64, 64, 20) 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 64, 64, 32) 5792 input_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 64, 64, 32) 9248 conv2d_1[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 32, 32, 32) 0 conv2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 32, 32, 32) 128 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 32, 32, 64) 18496 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 32, 32, 64) 36928 conv2d_3[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 16, 16, 64) 0 conv2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 16, 16, 64) 256 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 16, 16, 128) 73856 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 16, 16, 128) 147584 conv2d_5[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 8, 8, 128) 0 conv2d_6[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 8, 8, 128) 512 max_pooling2d_3[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 8, 8, 256) 295168 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 8, 8, 256) 590080 conv2d_7[0][0]
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 4, 4, 256) 0 conv2d_8[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 4, 4, 256) 1024 max_pooling2d_4[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 4, 4, 512) 1180160 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 4, 4, 512) 2359808 conv2d_9[0][0]
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D) (None, 8, 8, 512) 0 conv2d_10[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 8, 8, 768) 0 conv2d_8[0][0]
up_sampling2d_1[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 8, 8, 256) 1769728 concatenate_1[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 8, 8, 256) 590080 conv2d_11[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 8, 8, 256) 1024 conv2d_12[0][0]
__________________________________________________________________________________________________
up_sampling2d_2 (UpSampling2D) (None, 16, 16, 256) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 16, 16, 384) 0 conv2d_6[0][0]
up_sampling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 16, 16, 128) 442496 concatenate_2[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 16, 16, 128) 147584 conv2d_13[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 16, 16, 128) 512 conv2d_14[0][0]
__________________________________________________________________________________________________
up_sampling2d_3 (UpSampling2D) (None, 32, 32, 128) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
concatenate_3 (Concatenate) (None, 32, 32, 192) 0 conv2d_4[0][0]
up_sampling2d_3[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 32, 32, 64) 110656 concatenate_3[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 32, 32, 64) 36928 conv2d_15[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 32, 32, 64) 256 conv2d_16[0][0]
__________________________________________________________________________________________________
up_sampling2d_4 (UpSampling2D) (None, 64, 64, 64) 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
concatenate_4 (Concatenate) (None, 64, 64, 96) 0 conv2d_2[0][0]
up_sampling2d_4[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 64, 64, 32) 27680 concatenate_4[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 64, 64, 32) 9248 conv2d_17[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 64, 64, 32) 128 conv2d_18[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 64, 64, 16) 4624 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 64, 64, 10) 170 conv2d_19[0][0]
==================================================================================================
Total params: 7,860,154
Trainable params: 7,858,234
Non-trainable params: 1,920
__________________________________________________________________________________________________
###Markdown
To start, we're going to use the naive random sampling approach and show what happens with training and test validation.
###Code
from sklearn.model_selection import train_test_split
xtrain,xtest,ytrain,ytest = train_test_split(tileArray,yTileArray,train_size=0.5)
xtrain.shape, xtest.shape, ytrain.shape, ytest.shape
###Output
_____no_output_____
###Markdown
Do some basic augmentation here to help out a bit.
###Code
batchSize = 128
train_gen, val_gen = makeGenerators(xtrain,ytrain,batchSize)
# these are just some basic choices, with more time we could investigate more options
m.compile(Adam(lr=1e-4),metrics=['accuracy'],loss='categorical_crossentropy')
m.fit_generator(train_gen, steps_per_epoch=xtrain.shape[0]*4//batchSize, epochs=10, verbose=1, validation_data=val_gen, validation_steps=5, max_queue_size=25)
###Output
Epoch 1/10
len.idx 10080
len.idx 2520
78/78 [==============================] - 48s 613ms/step - loss: 1.4538 - acc: 0.5715 - val_loss: 1.1558 - val_acc: 0.6748
Epoch 2/10
78/78 [==============================] - 37s 470ms/step - loss: 1.0220 - acc: 0.6996 - val_loss: 0.9989 - val_acc: 0.7024
Epoch 3/10
78/78 [==============================] - 37s 472ms/step - loss: 0.9012 - acc: 0.7245 - val_loss: 0.9761 - val_acc: 0.6942
Epoch 4/10
78/78 [==============================] - 37s 471ms/step - loss: 0.8253 - acc: 0.7410 - val_loss: 0.9169 - val_acc: 0.7237
Epoch 5/10
78/78 [==============================] - 37s 473ms/step - loss: 0.7702 - acc: 0.7508 - val_loss: 0.8910 - val_acc: 0.7238
Epoch 6/10
78/78 [==============================] - 37s 473ms/step - loss: 0.7252 - acc: 0.7630 - val_loss: 0.8189 - val_acc: 0.7313
Epoch 7/10
78/78 [==============================] - 37s 475ms/step - loss: 0.6835 - acc: 0.7720 - val_loss: 0.8171 - val_acc: 0.7312
Epoch 8/10
78/78 [==============================] - 37s 475ms/step - loss: 0.6572 - acc: 0.7781 - val_loss: 0.8538 - val_acc: 0.7284
Epoch 9/10
78/78 [==============================] - 37s 474ms/step - loss: 0.6148 - acc: 0.7893 - val_loss: 0.8310 - val_acc: 0.7364
Epoch 10/10
78/78 [==============================] - 37s 474ms/step - loss: 0.5860 - acc: 0.7968 - val_loss: 0.8284 - val_acc: 0.7398
###Markdown
Normally, we'd let this run for longer and use early stopping to prevent overfitting, but since we don't want to consume a lot of resources right now we'll stop after 10 epochs, even at the risk of underfitting.
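For reference, here is a minimal sketch (not run in this notebook) of how early stopping could be wired in with a Keras callback; the `patience=5` and `epochs=100` values are illustrative choices, not ones used in this run:

```python
from keras.callbacks import EarlyStopping

# stop once validation loss has not improved for 5 consecutive epochs
early_stop = EarlyStopping(monitor='val_loss', patience=5)
m.fit_generator(train_gen, steps_per_epoch=xtrain.shape[0]*4//batchSize, epochs=100,
                validation_data=val_gen, validation_steps=5, callbacks=[early_stop])
```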
###Code
predictions = m.predict(tileArray)
results = np.zeros_like(labels)
for k, predictionTile in enumerate(predictions):
sy,ey,sx,ex,g = tileInfo[k]
results[sy:ey,sx:ex] = np.argmax(predictionTile,axis=2)
figure(figsize=(15,25))
subplot(211)
title('conv net prediction')
imshow(results,cmap='tab10',vmax=10)
colorbar()
subplot(212)
title('target labels')
imshow(labels,cmap='tab10',vmax=10)
colorbar()
savefig('raw_conv.png')
###Output
_____no_output_____
###Markdown
We split the data into a 50/50 train/validation split and get a pretty decent result. The biggest difference between the two images is that the predicted map looks smoothed relative to the target labels. However, the model has made some of the predicted labels more contiguous than what we see in the raw labels, and it has picked out some areas that look more reasonable to me than what we see in the raw data.
###Code
import stats
print(stats.prf1_score_img(labels, results))
###Output
Recall Precision F1 Support
0 0.608897 0.644966 0.626413 1141647
1 0.821104 0.919458 0.867502 8300189
2 0.768481 0.812564 0.789908 4721173
3 0.803243 0.838157 0.820329 1540735
4 0.406350 0.119031 0.184127 754481
5 0.707345 0.724010 0.715581 615181
6 0.621712 0.492624 0.549691 490124
7 0.657880 0.108087 0.185670 409151
8 0.389368 0.338505 0.362159 378777
9 0.559443 0.453447 0.500899 339901
10 0.000000 0.000000 0.000000 323087
###Markdown
Unfortunately, because there is so much noise in the target labels, the precision-recall and F1 stats don't look very good, but I'm not overly concerned about the inability to quantify how good the results are here. Ultimately, I'd have to know more about how "good" the target labels are in order to say whether we should be matching them better. Comparing these results to those we obtained from the Random Forest Classifier in the Feature Selection section, we see that these results are actually pretty comparable, and in most cases better. Keep in mind, though, that the conv net has seen significantly more data points than the RFC, so if the RFC were given enough data it might do better. Now we run the true held-out test, where we hold out entire groups of data from the model during the training phase and then predict only on those groups. This should give us a much better idea of how well the model will predict on new areas.
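The `lpo` splitter used below was set up earlier in the notebook; as a rough sketch of the idea (assuming a scikit-learn group splitter such as `LeavePGroupsOut`, with an illustrative `n_groups` value), the group-wise splitting looks like this:

```python
from sklearn.model_selection import LeavePGroupsOut

# hold out whole groups at a time, so test tiles never share a group with training tiles
lpo = LeavePGroupsOut(n_groups=2)  # the actual number of held-out groups comes from the earlier cell
for train_index, test_index in lpo.split(tileArray, yTileArray, groups=groupArray):
    pass  # train_index and test_index are disjoint at the group level
```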
###Code
def fitData(train_index):
xt = tileArray[train_index]
yt = yTileArray[train_index]
train_gen, val_gen = makeGenerators(xt,yt, 128)
temp_model = model(xt,yt)
temp_model.compile(Adam(lr=1e-4),metrics=['accuracy'],loss='categorical_crossentropy')
# normally I'd use early stopping but 10 epochs seems to be about right
temp_model.fit_generator(train_gen, steps_per_epoch=xt.shape[0]*4//batchSize, epochs=10, verbose=1,  # steps based on this fold's training subset (xt), not the global xtrain
validation_data=val_gen, validation_steps=5, max_queue_size=25)
return temp_model
nExamples = 0
kPermutation = 0
predictionImage = []
for train_index, test_index in lpo.split(tileArray, yTileArray,groupArray):
kPermutation += 1
if kPermutation % 10 != 0:
continue
if nExamples > 3:
break
temp_model = fitData(train_index)
predictions = temp_model.predict(tileArray[test_index])
results = np.zeros_like(labels)
mask = np.zeros(labels.shape,dtype='bool')
prediction_groups = set()
for k, predictionTile in enumerate(predictions):
sy,ey,sx,ex,g = tileInfo[test_index[k]]
prediction_groups.add(g)
results[sy:ey,sx:ex] = np.argmax(predictionTile,axis=2)
mask[sy:ey,sx:ex] = 1
f = figure(figsize=(40,40))
f.set_tight_layout(True)
nrows = len(prediction_groups)
for k, prediction_group in enumerate(prediction_groups):
sy,ey, sx, ex = groupMap[prediction_group]
subplot(nrows,2,2*k+1)
imshow(results[sy:ey,sx:ex],cmap='tab10')
ylabel('group: {0}'.format(prediction_group))
subplot(nrows,2,2*k+2)
imshow(labels[sy:ey,sx:ex],cmap='tab10')
f.savefig('conv_heldout_example_{:02d}.png'.format(nExamples))
print('**** Summary stats for Example: {} ****'.format(nExamples))
print(stats.prf1_score_img(labels[mask],results[mask]))
close(f)
predictions = None
del predictions
predictionImage.append(results)
nExamples+=1
###Output
/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/ipykernel/__main__.py:68: UserWarning: Update your `Model` call to the Keras 2 API: `Model(outputs=[<tf.Tenso..., inputs=[<tf.Tenso...)`
|
All_Source_Code/KNearestNeighbors/KNearestNeighbors.ipynb | ###Markdown
Machine Learning for Engineers: [KNearestNeighbors](https://www.apmonitor.com/pds/index.php/Main/KNearestNeighbors)
- [k-Nearest Neighbors Classifier](https://www.apmonitor.com/pds/index.php/Main/KNearestNeighbors)
  - Source Blocks: 3
  - Description: Introduction to K-Nearest Neighbors for Classification.
- [Course Overview](https://apmonitor.com/pds)
- [Course Schedule](https://apmonitor.com/pds/index.php/Main/CourseSchedule)
###Code
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(XA,yA)        # XA: training features, yA: training labels (placeholders from the course page)
yP = knn.predict(XB)  # XB: features to classify
from sklearn import datasets
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors=5)
# The digits dataset
digits = datasets.load_digits()
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
# Split into train and test subsets (50% each)
X_train, X_test, y_train, y_test = train_test_split(
data, digits.target, test_size=0.5, shuffle=False)
# Learn the digits on the first half of the digits
classifier.fit(X_train, y_train)
# Test on second half of data
n = np.random.randint(int(n_samples/2),n_samples)
print('Predicted: ' + str(classifier.predict(digits.data[n:n+1])[0]))
# Show number
plt.imshow(digits.images[n], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_blobs
n_samples = 100
features, label = make_blobs(n_samples=n_samples, centers=2,\
n_features=2,random_state=17)
data = pd.DataFrame({'x':features[:,0],'y':features[:,1],\
'z':label})
sns.pairplot(data,hue='z')
###Output
_____no_output_____ |
CycleGANColab/CycleGANColab.ipynb | ###Markdown
CycleGAN
This notebook makes the CycleGAN homework assignment runnable on Google Colab (free GPU), so you don't need a physical GPU to run this assignment.
Code available on https://github.com/wileyw/DeepLearningDemos.git
Homework Assignment: https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/assignments/a4-handout.pdf
###Code
!git clone https://github.com/wileyw/DeepLearningDemos.git
!wget http://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/assignments/a4-code.zip
!unzip -q a4-code.zip
!ls
!mv a4-code-v2-updated/emojis .
!mv a4-code-v2-updated/checker_files .
!python3 DeepLearningDemos/CycleGANSolution/a4-code-v2-updated/model_checker.py
import sys
sys.path.append('DeepLearningDemos/CycleGANSolution/a4-code-v2-updated')
import cycle_gan
from cycle_gan import *
sys.argv[:] = ['cycle_gan.py']
parser = create_parser()
opts = parser.parse_args()
opts.use_cycle_consistency_loss = True
batch_size = opts.batch_size
cycle_gan.batch_size = batch_size
print(opts)
main(opts)
from IPython.display import Image
import matplotlib.pyplot as plt
import glob
images = sorted(glob.glob('./samples_cyclegan/*X-Y.png'))
Image(images[-1])
from IPython.display import Image
import matplotlib.pyplot as plt
import glob
images = sorted(glob.glob('./samples_cyclegan/*Y-X.png'))
Image(images[-1])
import sys
sys.path.append('DeepLearningDemos/CycleGANSolution/a4-code-v2-updated')
import vanilla_gan
from vanilla_gan import *
# Run Vanilla GAN
sys.argv[:] = ['vanilla_gan.py']
parser = create_parser()
opts = parser.parse_args()
batch_size = opts.batch_size
vanilla_gan.batch_size = batch_size
print(opts)
main(opts)
# View images
from IPython.display import Image
import matplotlib.pyplot as plt
import glob
images = sorted(glob.glob('./samples_vanilla/*.png'))
Image(images[-1])
###Output
_____no_output_____ |
Code_SC_Challenge_2.ipynb | ###Markdown
Link to Colab
###Code
# connect to google colab
from google.colab import drive
drive.mount("/content/drive")
###Output
Mounted at /content/drive
###Markdown
Downloading Dependencies
###Code
# install torchaudio
!pip install torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
import os
import numpy as np
import pandas as pd
# for F1-score & confusion matrix later
from sklearn.metrics import classification_report
import seaborn as sns
# current torch version is 1.7.0+cu101
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import warnings
warnings.filterwarnings("ignore")
import torchaudio
from torchaudio.transforms import MelSpectrogram
from torchaudio.transforms import TimeMasking, FrequencyMasking
import matplotlib.pyplot as plt
import IPython.display as ipd
import time
import random
from tqdm import tqdm
# check if cuda GPU is available, make sure you're using GPU runtime on Google Colab
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device) # you should output "cuda"
# COLAB CONFIG
# change colab flag to false if train using jupyter notebook
COLAB_FLAG = True
COLAB_FILEPATH = './drive/My Drive/TIL2021/competition/SC/' if COLAB_FLAG == True else './'
pd.options.mode.chained_assignment = None # default='warn'
%matplotlib inline
###Output
_____no_output_____
###Markdown
Speech Classification Dataset
We will be providing the base dataset that will be used for the first task of the Speech Classification competition.
###Code
!gdown --id 1im5shxcavdoTRNT66mhVdtA_E0ZR8QLl
!unzip s1_train_release.zip
class CustomSpeechDataset(torch.utils.data.Dataset):
def __init__(self, path, typ='train', transforms=None):
assert typ == 'train' or typ == 'test', 'typ must be either "train" or "test"'
self.typ = typ
self.transforms = transforms
self.targets = []
if self.typ == 'train':
self.class_names = sorted(os.listdir(path))
num_classes = len(self.class_names)
for class_idx, class_name in enumerate(self.class_names):
class_dirx = os.path.join(path, class_name)
wav_list = os.listdir(class_dirx)
for wav_file in wav_list:
self.targets.append({
'filename': wav_file,
'path': os.path.join(class_dirx, wav_file),
'class': class_name
})
if self.typ == 'test':
wav_list = os.listdir(path)
for wav_file in wav_list:
self.targets.append({
'filename': wav_file,
'path': os.path.join(path, wav_file)
})
def __len__(self):
return len(self.targets)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
# sr is the sampling rate
signal, sr = torchaudio.load(self.targets[idx]['path'], normalization=True)
filename = self.targets[idx]['filename']
if self.transforms:
for transform in self.transforms:
signal = transform(signal)
if self.typ == 'train':
clx_name = self.targets[idx]['class']
return filename, signal, sr, clx_name
elif self.typ == 'test':
return filename, signal, sr
# helper function to check the time taken to train an epoch
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
fd = CustomSpeechDataset(path='s1_train_release', typ='train')
train_size = int(len(fd)*0.8)
valid_size = len(fd) - train_size
print(train_size, valid_size)
# train test split
#train_set, valid_set = torch.utils.data.random_split(fd, [train_size, valid_size])
# turn the custom dataset into a list of data
full_dataset = list(fd)
list(full_dataset)[:2]
len(full_dataset)
labels = fd.class_names
print(labels)
labels_to_indices = {}
for idx, l in enumerate(labels):
labels_to_indices[l] = idx
print(labels_to_indices)
###Output
_____no_output_____
###Markdown
Manual kFold implementation
Create the k-fold data used in a later part of this notebook.
###Code
# define a class that gets the complement of the list
class ComplementList(list):
def __getitem__(self, val):
if type(val) is slice and val.step == 'c':
copy = self[:]
copy[val.start:val.stop] = []
return copy
return super(ComplementList, self).__getitem__(val)
# testing the class implementation
l = ComplementList([1,2,3,4,5])
print(l[2:4:'c']) # [1, 5]
# define the k-fold constant
K = 5
# k is the k-fold constant, fd is the full dataset in list
def manual_k_fold(k, fd, fd_length):
val_list = []
train_list = []
# randomise the dataset first
random.shuffle(fd)
# get the combinations of splits for the validation set
for i in tqdm(range(k)):
val_list.append(fd[int((i/k)*fd_length):int(((i+1)/k)*fd_length)])
clist = ComplementList(fd)
train_list.append(clist[int((i/k)*fd_length):int(((i+1)/k)*fd_length):'c'])
return val_list, train_list
val_list, train_list = manual_k_fold(K, full_dataset, len(full_dataset))
len(train_list[0]) , len(val_list[0])
# an example
# [fold][item set][particular file]
#print(val_list[0][0][0])
#print(val_list[1][0][0])
#print(val_list[2][0][0])
#print(val_list[3][0][0])
#print(val_list[4][0][0])
###Output
_____no_output_____
###Markdown
Get the data distribution of each class
###Code
def train_data_distribution(data):
train_filename_list = []
train_label_list = []
for i in range(len(data)):
train_filename_list.append(data[i][0])
train_label_list.append(data[i][3])
# make to dataframe
train_tuple = list(zip(train_filename_list, train_label_list))
train_df = pd.DataFrame(train_tuple, columns=['train_filename', 'train_label'])
return train_df
for i in range(K):
print(f"Train Set {i+1}")
train_dist = train_data_distribution(train_list[i])
#print(train_dist.head(3))
# find the count of each label to check distribution of the labels
print(train_dist["train_label"].value_counts())
print()
###Output
_____no_output_____
###Markdown
Look at one example from the training set
###Code
train_list[0][1]
filename, waveform, sample_rate, label_id = train_list[0][1]
print("Shape of waveform: {}".format(waveform.size()))
print("Sample rate of waveform: {}".format(sample_rate))
# Let's plot the waveform using matplotlib
# We observe that the main audio activity happens at the later end of the clip
plt.plot(waveform.t().numpy());
# let's play the audio clip and hear it for ourselves!
ipd.Audio(waveform.numpy(), rate=sample_rate)
###Output
_____no_output_____
###Markdown
Constant Sample Lengths
In order to insert our features into a model, we have to ensure that the features are of the same size. Below, we check whether the sample length varies across the audio clips.
Let's pad the audio clips to a maximum sample length of 16000 (a sample length of 16000 equals 1 second at a 16,000 Hz sampling rate).
We will pad audio clips that are shorter than 1 second with parts of themselves.
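As a tiny illustration of this repetition padding (the `PadAudio` transform defined further down does the same thing inside a loop):

```python
import torch

short = torch.randn(1, 15800)                            # a clip just under 1 second at 16 kHz
pad_len = 16000 - short.size(1)
padded = torch.cat([short, short[:, :pad_len]], dim=1)   # reuse the start of the clip as padding
padded.shape                                             # torch.Size([1, 16000])
```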
###Code
# checking the first fold
audio_lens = []
for i in range(len(train_list[0])):
audio_lens.append(train_list[0][i][1].size(1))
print('Max Sample Length:', max(audio_lens))
print('Min Sample Length:', min(audio_lens))
###Output
_____no_output_____
###Markdown
Fixed Hyperparameters
###Code
# data augmentation -> have to fix this two, as it is a global implementation
# create multiple notebooks for different values of these two hyperparameters
TIME_MASK_PARAM = 5
FREQ_MASK_PARAM = 5
###Output
_____no_output_____
###Markdown
Data Augmentation
Since the minimum and maximum lengths turn out to be the same, no padding is actually needed here; otherwise we could run PadAudio and the shorter clips would be padded with repetitions of their own audio.
We will do a simple data augmentation process in order to increase the variation in our dataset.
In the audio domain, the augmentation technique known as [SpecAugment](https://arxiv.org/abs/1904.08779) is often used. It makes use of 3 steps:
- Time Warp (warps the spectrogram to the left or right)
- Frequency Masking (randomly masks a range of frequencies)
- Time Masking (randomly masks a range of time)
As Time Warp is computationally intensive and does not contribute significant improvement in results, we shall simply use Frequency and Time Masking in this example.
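As a rough, self-contained sketch of what these masking transforms do to a mel spectrogram (dummy audio and illustrative parameters; the actual values used in this notebook are set above):

```python
import torch
from torchaudio.transforms import MelSpectrogram, TimeMasking, FrequencyMasking

wave = torch.randn(1, 16000)                          # 1 second of dummy audio at 16 kHz
spec = MelSpectrogram(sample_rate=16000, n_mels=128)(wave)
spec = TimeMasking(time_mask_param=5)(spec)           # zeroes out up to 5 consecutive time frames
spec = FrequencyMasking(freq_mask_param=5)(spec)      # zeroes out up to 5 consecutive mel bins
spec.shape                                            # (1, 128, number_of_time_frames)
```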
###Code
# pad the audio to the same length
class PadAudio(torch.nn.Module):
def __init__(self, req_length = 16000):
super().__init__()
self.req_length = req_length
def forward(self, waveform):
while waveform.size(1) < self.req_length:
# example if audio length is 15800 and max is 16000, the remaining 200 samples will be concatenated
# with the FIRST 200 samples of the waveform itself again (repetition)
waveform = torch.cat((waveform, waveform[:, :self.req_length - waveform.size(1)]), axis=1)
return waveform
# Log-Mel Transform here to get ready to train with our RNN model later on
class LogMelTransform(torch.nn.Module):
def __init__(self, log_offset = 1e-6):
super().__init__()
self.log_offset = log_offset
def forward(self, melspectrogram):
return torch.log(melspectrogram + self.log_offset)
# define a transformation sequence
def transform_sequence(time_mask, freq_mask):
transformations = []
transformations.append(PadAudio())
transformations.append(MelSpectrogram(sample_rate = 16000, n_mels = 128))
transformations.append(LogMelTransform())
eval_transformations = transformations.copy()
# a maximum of time_mask consecutive time steps will be masked
transformations.append(TimeMasking(time_mask_param = time_mask))
# a maximum of freq_mask consecutive frequency channels will be masked
transformations.append(FrequencyMasking(freq_mask_param = freq_mask))
return transformations, eval_transformations
transformations, eval_transformations = transform_sequence(TIME_MASK_PARAM, FREQ_MASK_PARAM)
print(transformations)
###Output
[PadAudio(), MelSpectrogram(
(spectrogram): Spectrogram()
(mel_scale): MelScale()
), LogMelTransform(), TimeMasking(), FrequencyMasking()]
###Markdown
Data Loaders
Let's now set up our data loaders so that we can streamline the batch loading of data for our model training later on.
###Code
# Fixed parameters
NUM_WORKERS = 4
PIN_MEMORY = True if device == 'cuda' else False
def train_collate_fn(batch):
# A data tuple has the form:
# filename, waveform, sample_rate, label
tensors, targets, filenames = [], [], []
# Gather in lists, and encode labels as indices
for filename, waveform, sample_rate, label in batch:
# apply transformations
for transform in transformations:
waveform = transform(waveform)
waveform = waveform.squeeze().T
tensors += [waveform]
targets += [labels_to_indices[label]]
filenames += [filename]
# Group the list of tensors into a batched tensor
tensors = torch.stack(tensors)
targets = torch.LongTensor(targets)
return (tensors, targets, filenames)
def eval_collate_fn(batch):
# A data tuple has the form:
# filename, waveform, sample_rate, label
tensors, targets, filenames = [], [], []
# Gather in lists, and encode labels as indices
for filename, waveform, sample_rate, label in batch:
# apply transformations
for transform in eval_transformations:
waveform = transform(waveform)
waveform = waveform.squeeze().T
tensors += [waveform]
targets += [labels_to_indices[label]]
filenames += [filename]
# Group the list of tensors into a batched tensor
tensors = torch.stack(tensors)
targets = torch.LongTensor(targets)
return (tensors, targets, filenames)
def loader(train_set, valid_set, batch_size=64, num_workers=NUM_WORKERS, pin_mem=PIN_MEMORY):
# implementing the loader function
train_loader = torch.utils.data.DataLoader(
train_set,
batch_size=batch_size,
shuffle=True,
drop_last=False,
collate_fn=train_collate_fn,
num_workers=num_workers,
pin_memory=pin_mem,
)
valid_loader = torch.utils.data.DataLoader(
valid_set,
batch_size=batch_size,
shuffle=False,
drop_last=False,
collate_fn=eval_collate_fn,
num_workers=num_workers,
pin_memory=pin_mem,
)
return train_loader, valid_loader
###Output
_____no_output_____
###Markdown
Setting up and defining the Model and training loop
In this speech classification example, we will make use of a bidirectional Gated Recurrent Unit (GRU) recurrent neural network; the commented-out lines in the class below show the corresponding LSTM variant.
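For orientation, here is an illustrative sketch of the tensor shapes the model defined in the next cell expects, given the transforms above (the frame count is approximate and depends on the hop length):

```python
import torch

# each item is a log-mel spectrogram transposed to (time_frames, n_mels=128),
# so a batch has shape (batch_size, time_frames, 128)
dummy_batch = torch.randn(4, 81, 128)   # roughly 1 second of audio per item
rnn = RNN(input_size=128, hidden_size=256, num_layers=2,
          num_classes=len(labels), device='cpu')
rnn(dummy_batch).shape                  # (4, num_classes)
```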
###Code
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, num_classes, device, classes=None):
super(RNN, self).__init__()
self.hidden_size = hidden_size
self.num_layers = num_layers
self.gru = nn.GRU(input_size, hidden_size, num_layers, batch_first=True, bidirectional=True)
# hidden_size * 2 cuz bidirectional
self.fc = nn.Linear(hidden_size*2, num_classes)
self.device = device
self.classes = classes
def forward(self, x):
# Set initial hidden and cell states
batch_size = x.size(0)
# bidirectional so need to multiply the layers by 2 for initial state
h0 = torch.zeros(self.num_layers*2, batch_size, self.hidden_size).to(self.device)
# only applicable to LSTM because LSTM got 2 gates but GRU only has one!!!
#c0 = torch.zeros(self.num_layers*2, batch_size, self.hidden_size).to(self.device)
# Forward propagate GRU
out, _ = self.gru(x, h0.detach()) # shape = (batch_size, seq_length, hidden_size)
# lstm implementation
#out, _ = self.gru(x, (h0, c0)) # shape = (batch_size, seq_length, hidden_size)
# Decode the hidden state of the last time step
out = self.fc(out[:, -1, :])
return out
def predict(self, x):
'''Predict one label from one sample's features'''
# x: feature from a sample, LxN
# L is length of sequency
# N is feature dimension
x = torch.tensor(x[np.newaxis, :], dtype=torch.float32)
x = x.to(self.device)
outputs = self.forward(x)
_, predicted = torch.max(outputs.data, 1)
predicted_index = predicted.item()
return predicted_index
'''# initialize the model class
model = RNN(input_size=128,
hidden_size=HIDDEN_SIZE,
num_layers=NUM_LAYERS,
num_classes=len(labels),
device=device,
classes=labels).to(device)
print(model)'''
# define a function to train the model
def fit(model, train_loader, valid_loader, num_epochs, fold_num, optim_flag='adam'):
# fixed configuration
criterion = nn.CrossEntropyLoss()
# choice of optimizers
if optim_flag == 'adam':
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
elif optim_flag == 'sgd99':
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.99)
elif optim_flag == 'sgd90':
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
elif optim_flag == 'adamw':
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3) # included a weight decay of 1e-02 for adam
elif optim_flag == 'adagrad':
optimizer = torch.optim.Adagrad(model.parameters(), lr=1e-2)
elif optim_flag == 'adadelta':
optimizer = torch.optim.Adadelta(model.parameters(), lr=1)
elif optim_flag == 'adamax':
optimizer = torch.optim.Adamax(model.parameters(), lr=2e-3)
else:
return "No such optimizer, try again"
# set the LR scheduler
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer,
mode='min',
factor=0.1,
patience=3,
min_lr=1e-5,
verbose=True)
# fixed parameters to perform early stopping
n_epochs_stop = 5
epochs_no_improve = 0
early_stop = False
min_val_loss = np.inf
threshold = 0.0001
optimizer.zero_grad()
# declare the fold number
print(f'---------- Fold Number: {fold_num} ----------')
# start the training loop
for epoch in range(1,num_epochs+1):
start_time = time.time()
# training steps
model.train()
count_correct, count_total = 0, 0
for idx, (features, targets, filenames) in enumerate(train_loader):
features = features.to(device)
targets = targets.to(device)
# forward pass
outputs = model(features)
loss = criterion(outputs, targets)
# backward pass
loss.backward()
optimizer.step()
optimizer.zero_grad()
# training results
_, argmax = torch.max(outputs, 1)
count_correct += (targets == argmax.squeeze()).sum().item()
count_total += targets.size(0)
train_acc = count_correct / count_total
# evaluation steps
model.eval()
count_correct, count_total = 0, 0
val_pred_list, val_filename_list = [], []
with torch.no_grad():
for idx, (features, targets, filenames) in enumerate(valid_loader):
features = features.to(device)
targets = targets.to(device)
# forward pass
val_outputs = model(features)
val_loss = criterion(val_outputs, targets)
# validation results
_, argmax = torch.max(val_outputs, 1)
count_correct += (targets == argmax.squeeze()).sum().item()
count_total += targets.size(0)
val_pred_list += argmax.cpu().tolist()
val_filename_list += filenames
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
# print results
valid_acc = count_correct / count_total
if epoch<=3 or epoch%5==0:
print('Epoch [{}/{}]: Train loss = {:.4f}, Train accuracy = {:.2f}, Valid loss = {:.4f}, Valid accuracy = {:.2f}'
.format(epoch, num_epochs, loss.item(), 100*train_acc, val_loss.item(), 100*valid_acc))
print(f'Time taken for Epoch {epoch}: {epoch_mins}m {epoch_secs}s')
# add the scheduler code
scheduler.step(val_loss)
# --- EARLY STOPPING CONDITION ---
if val_loss.item() < min_val_loss+threshold: # give some threshold to make the learning less strict
# resets the no improve counter
epochs_no_improve = 0
min_val_loss = val_loss.item()
else: # val_loss no improvement
epochs_no_improve +=1
print(f"Validation Loss did not improve count: {epochs_no_improve}")
# set this line as min now
min_val_loss = val_loss.item()
if epoch > 5 and epochs_no_improve == n_epochs_stop:
print("Training stopped due to early stopping!")
break
else:
continue
return val_loss.item(), 100*valid_acc, val_pred_list, val_filename_list, loss.item(), 100*train_acc
###Output
_____no_output_____
###Markdown
Configuration for a trained model
- Declare hyperparameters
- Training the model
- Check and save the best fold
- F1-score analysis
- Confusion Matrix
- Load model back to try it on the test set
###Code
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
# ----- TRAINING THE MODEL -----
def train_helper(num_folds, train_list, val_list, batch_size, hidden_size, num_layers, epochs, optim_flag='adam'):
# data loader for all the k-fold datasets
train_loader_list = []
valid_loader_list = []
for i in range(num_folds):
train_loader, valid_loader = loader(train_list[i], val_list[i], batch_size)
train_loader_list.append(train_loader)
valid_loader_list.append(valid_loader)
# initialise list to store the kth folds results
loss_list = [] # validation
acc_list = [] # validation
val_pred_array = []
val_filename_array = []
model_state_list = []
train_loss_list = [] # train
train_acc_list = [] # train
# 5-fold loop
for i in range(num_folds):
# intialise the model class
model = RNN(input_size=128, hidden_size=hidden_size, num_layers=num_layers, num_classes=len(labels), device=device, classes=labels).to(device)
print(model)
print()
# fit the model for training
val_loss, val_acc, val_pred_list, val_filename_list, train_loss, train_acc = fit(model=model,
train_loader=train_loader_list[i],
valid_loader=valid_loader_list[i],
num_epochs=epochs,
fold_num=i+1,
optim_flag=optim_flag)
loss_list.append(val_loss)
acc_list.append(val_acc)
val_pred_array.append(val_pred_list)
val_filename_array.append(val_filename_list)
model_state_list.append(model)
train_loss_list.append(train_loss)
train_acc_list.append(train_acc)
return loss_list, acc_list, val_pred_array, val_filename_array, model_state_list, train_loss_list, train_acc_list
# ----- CHECK BEST FOLD BASED ON VALIDATION ACCURACY AND SAVE BEST FOLD MODEL -----
def save_best_fold(loss_list, acc_list, model_state_list, train_loss_list, train_acc_list, filename):
# ----- CHECK THE BEST FOLD MODEL BASED ON MODEL VALIDATION ACCURACY -----
print(f'Train Losses: {train_loss_list}')
print(f'Train Accuracies: {train_acc_list}')
print(f'Validation Losses: {loss_list}')
print(f'Validation Accuracies: {acc_list}')
best_fold_index = acc_list.index(max(acc_list))
print(f'Best fold: {best_fold_index+1}')
# ----- SAVE THE BEST FOLD MODEL -----
torch.save(model_state_list[best_fold_index].state_dict(), f'{COLAB_FILEPATH}model/model-{filename}.pt')
print(f'Model Saved for Fold {best_fold_index+1}')
return best_fold_index
# ----- F1 & CONFUSION MATRIX -----
def metrics_helper(val_list, val_filename_array, val_pred_array, best_fold_index, filename):
# ----- F1-Score Analysis -----
# get the pred and filename
val_tuple = list(zip(val_filename_array[best_fold_index], val_pred_array[best_fold_index]))
val_df = pd.DataFrame(val_tuple, columns=['val_filename', 'val_label_pred_num'])
val_df['val_label_pred'] = val_df['val_label_pred_num'].apply(lambda x: labels[x])
#val_df.head()
# make a combined data frame by iteratively appending the values to a list
val_filename_list_final = []
val_true_list = []
for i in range(len(val_list[best_fold_index])):
val_filename_list_final.append(val_list[best_fold_index][i][0])
val_true_list.append(val_list[best_fold_index][i][3])
# make to dataframe
val_tuple_final = list(zip(val_filename_list_final, val_true_list))
#print(result_tuple)
val_df_final = pd.DataFrame(val_tuple_final, columns=['val_filename', 'val_label_true'])
#val_df_final.head()
# append the predicted column into the final dataframe
val_df_final['val_label_pred'] = val_df['val_label_pred']
#val_df_final.head(3)
# save the table to csv
val_df_final.to_csv(f"{COLAB_FILEPATH}f1-table/f1-{filename}.csv", index=False)
# import the csv back again
#val_df_final = pd.read_csv(f"{COLAB_FILEPATH}f1-table/f1-{filename}.csv")
#val_df_final.head()
# simple check if length of data are equal
assert len(val_df_final["val_label_true"]) == len(val_df_final["val_label_pred"])
# get the multi-class F1-score to see the distribution of the predictions
print(classification_report(list(val_df_final['val_label_true']), list(val_df_final['val_label_pred']), target_names=labels, digits=4))
print()
# confusion matrix
cm_sns = pd.crosstab(val_df_final['val_label_true'], val_df_final['val_label_pred'], rownames=['Actual'], colnames=['Predicted'])
plt.subplots(figsize=(12,10))
sns.heatmap(cm_sns, annot=True, fmt="d", cmap='YlGnBu_r')
plt.show()
###Output
_____no_output_____
###Markdown
CONFIGURATION 1 - Adagrad & Batch size 32
- Hidden Size: 256
- Number of layers: 2
- Batch Size: 32
- Optimizer: Adagrad
- Epoch: 30
###Code
# ----- DECLARE HYPERPARAMETERS -----
HIDDEN_SIZE, NUM_LAYERS, BATCH_SIZE, OPTIM_NAME, EPOCHS = 256, 2, 32, 'adagrad', 30
# ----- EXPORTED FILENAME -----
FILENAME = f'SC-BiGru_lr-1e-03_TM-{TIME_MASK_PARAM}_FM-{FREQ_MASK_PARAM}_HS-{HIDDEN_SIZE}_NL-{NUM_LAYERS}_BS-{BATCH_SIZE}_OP-{OPTIM_NAME}_EP-{EPOCHS}'
print(f'Filename: {FILENAME}\n')
# ----- TRAINING THE K-FOLD MODEL -----
loss_list, acc_list, val_pred_array, val_filename_array, model_state_list, train_loss_list, train_acc_list = train_helper(num_folds=K,
train_list=train_list,
val_list=val_list,
batch_size=BATCH_SIZE,
hidden_size=HIDDEN_SIZE,
num_layers=NUM_LAYERS,
epochs=EPOCHS,
optim_flag=OPTIM_NAME)
# ----- SAVE THE BEST FOLD MODEL -----
best_fold_index = save_best_fold(loss_list, acc_list, model_state_list, train_loss_list, train_acc_list, FILENAME)
# ----- GET THE F1 SCORE AND CONFUSION MATRIX -----
metrics_helper(val_list, val_filename_array, val_pred_array, best_fold_index, FILENAME)
###Output
_____no_output_____
###Markdown
Test Set
###Code
# THIS TEST SET MIGHT BE WRONG
#!gdown --id 1AvP49xengGjnFTG209AgvAGj-by8WGSi
#!unzip -q -o s1_test.zip
# THIS TEST SET IS CORRECT
!unzip -q -o './drive/My Drive/TIL2021/competition/SC/s1_test_release.zip' -d "./content/"
# Initialise dataset object for test set
#test_set = CustomSpeechDataset(path='s1_test', typ='test')
test_set = CustomSpeechDataset(path='./content/s1_test_release', typ='test')
###Output
_____no_output_____
###Markdown
Load back any model that was saved earlier
###Code
# ----- MODEL CONFIGURATION -----
HIDDEN_SIZE = 256 #256
NUM_LAYERS = 2 #2
BATCH_SIZE = 32 #64
OPTIM_NAME = 'adagrad' #'adam'
EPOCHS = 30 #30
FILENAME = f'SC-BiGru_lr-1e-03_TM-{TIME_MASK_PARAM}_FM-{FREQ_MASK_PARAM}_HS-{HIDDEN_SIZE}_NL-{NUM_LAYERS}_BS-{BATCH_SIZE}_OP-{OPTIM_NAME}_EP-{EPOCHS}'
# ----- LOAD BACK THE MODEL -----
model = RNN(input_size=128, hidden_size=HIDDEN_SIZE, num_layers=NUM_LAYERS, num_classes=len(labels), device=device, classes=labels).to(device)
print(model.load_state_dict(torch.load(f'{COLAB_FILEPATH}model/model-{FILENAME}.pt')))
print(f'Loaded filepath: {COLAB_FILEPATH}model/model-{FILENAME}.pt')
###Output
_____no_output_____
###Markdown
Test the model with the test set
###Code
# define test collate function and set up test loader
def test_collate_fn(batch):
# A data tuple has the form:
# filename, waveform, sample_rate
tensors, filenames = [], []
# Gather in lists
for filename, waveform, sample_rate in batch:
# apply transformations
for transform in eval_transformations:
waveform = transform(waveform)
waveform = waveform.squeeze().T
tensors += [waveform]
filenames += [filename]
# Group the list of tensors into a batched tensor
tensors = torch.stack(tensors)
return (tensors, filenames)
test_loader = torch.utils.data.DataLoader(
test_set,
batch_size=64,
shuffle=False,
drop_last=False,
collate_fn=test_collate_fn,
num_workers=NUM_WORKERS,
pin_memory=PIN_MEMORY,
)
# pass test set through the RNN model
model.eval()
pred_list, filename_list = [], []
with torch.no_grad():
for idx, (features, filenames) in enumerate(test_loader):
features = features.to(device)
# forward pass
outputs = model(features)
# test results
_, argmax = torch.max(outputs, 1)
pred_list += argmax.cpu().tolist()
filename_list += filenames
print(pred_list[:5])
print(filename_list[:5])
###Output
_____no_output_____
###Markdown
Submission of Results
The submission csv file should contain only 2 columns, filename and label, in that order. The file should be sorted by filename and exclude headers. Refer to **sample_submission.csv** for an example.
###Code
result_tuple = list(zip(filename_list, pred_list))
#print(result_tuple)
submission = pd.DataFrame(result_tuple, columns=['filename', 'pred'])
submission = submission.sort_values('filename').reset_index(drop=True)
submission['label'] = submission['pred'].apply(lambda x: labels[x])
submission[['filename', 'label']].head()
submission["label"].value_counts()
submission[['filename', 'label']].to_csv(f'{COLAB_FILEPATH}submission/submission-{FILENAME}.csv', header=None, index=None)
###Output
_____no_output_____ |
day-2/04-ins_plotly/unsolved/supermarket-sales.ipynb | ###Markdown
Group By
Aggregate data BY a column.
How?
1. Group your data by a specific column by using the `.groupby()` function.
2. Aggregate data using one of the Pandas aggregation functions below:
- `count()` – Number of non-null observations
- `sum()` – Sum of values
- `mean()` – Mean of values
- `median()` – Arithmetic median of values
- `min()` – Minimum
- `max()` – Maximum
- `mode()` – Mode
- `std()` – Standard deviation
- `var()` – Variance
3. Optionally, use the `.reset_index()` function to switch the specified column from an index to a column.
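For example, a minimal sketch on a small hypothetical DataFrame (not the supermarket data used below):

```python
import pandas as pd

df = pd.DataFrame({"city": ["A", "A", "B"], "total": [10, 20, 30]})
totals = df.groupby("city")["total"].sum().reset_index()
#   city  total
# 0    A     30
# 1    B     30
```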
###Code
# 1. Use groupby
# 2. Use groupby with an aggregation function e.g. sum()
# 3. Use groupby with an aggregation function e.g. sum() and reset_index()
# save the full example to a new dataframe
# bar plot of total sales by city
# bar plot of total sales by product line and customer type
# line plot of total sales by date
###Output
_____no_output_____ |
demos/test.ipynb | ###Markdown
load test dataset
here we use the LHCO dataset, and cheat by calling some locally defined commands to load it.
###Code
lhco = load_LHCO()
data, sim, signal = add_vars(lhco['pythia_qcd']), add_vars(lhco['herwig_qcd']), add_vars(lhco['wprime'])
###Output
_____no_output_____
###Markdown
load data
the best way to do this is to specify a training and testing version of the features, signal tags, simulation vs data tags, and sb tags.
###Code
from sklearn.model_selection import train_test_split
from keras.backend import clear_session
from importlib import reload
import anomaly_detection_models
reload(anomaly_detection_models)
from anomaly_detection_models import SALAD, DataVsSim, CWoLa, SACWoLa
n_signal = 1500
cols = ['mJJ', 'maxmass', 'minmass', 'tau21a', 'tau21b']
test_frac = 0.5
x_data,x_data_test,x_sim,x_sim_test = train_test_split(data.loc[:,cols].values, sim.loc[:,cols].values, test_size=test_frac)
signal_idx = np.random.choice(len(signal), n_signal, replace=False)
x = np.concatenate([x_data, x_sim, signal.loc[:,cols].iloc[signal_idx].values])
x[:,:3]/=1000
y_signal = np.concatenate([np.zeros(len(x_data)), np.zeros(len(x_sim)), np.ones(len(signal_idx))])
y_sim = np.concatenate([np.ones(len(x_data)), np.zeros(len(x_sim)), np.ones(len(signal_idx))])
y_sr = ((x[:,0] <= 3.7) & (x[:,0] >= 3.3))
y_sb = ((x[:,0] > 3.0) & (x[:,0] < 3.3)) | ((x[:,0] > 3.7) & (x[:,0] < 4.0))
x_test = np.concatenate([x_data_test, x_sim_test, np.delete(signal.loc[:,cols].values, signal_idx, axis=0)])
x_test[:,:3]/=1000
y_sr_test = ((x_test[:,0] <= 3.7) & (x_test[:,0] >= 3.3))
y_sb_test = ((x_test[:,0] > 3.0) & (x_test[:,0] < 3.3)) | ((x_test[:,0] > 3.7) & (x_test[:,0] < 4.0))
y_signal_test = np.concatenate([np.zeros(len(x_data_test)), np.zeros(len(x_sim_test)), np.ones(len(signal) - len(signal_idx))])
y_sim_test = np.concatenate([np.ones(len(x_data_test)), np.zeros(len(x_sim_test)), np.ones(len(signal) - len(signal_idx))])
print(((y_signal==1)&(y_sim==1)&(y_sr==1)).sum()/np.sqrt(((y_signal==0)&(y_sim==1)&(y_sr==1)).sum()))
###Output
4.5665720237690675
###Markdown
create architectures
you can also pass finished models
###Code
def dctr_model():
sb_model = keras.models.Sequential()
sb_model.add(keras.layers.Dense(64, input_shape=(5,), activation='relu'))
sb_model.add(keras.layers.Dense(64, activation='relu'))
sb_model.add(keras.layers.Dense(64, activation='relu'))
sb_model.add(keras.layers.Dense(1, activation='sigmoid'))
return sb_model
def base_model():
base_model = keras.models.Sequential()
base_model.add(keras.layers.Dense(64, input_shape=(4,), activation='relu'))
base_model.add(keras.layers.Dense(64, activation='relu'))
base_model.add(keras.layers.Dense(64, activation='relu'))
base_model.add(keras.layers.Dense(1, activation='sigmoid'))
return base_model
# from importlib import reload
# import models
# reload(models)
# from models import *
from keras.backend import clear_session
from importlib import reload
import anomaly_detection_models
reload(anomaly_detection_models)
from anomaly_detection_models import SALAD, DataVsSim, CWoLa, SACWoLa
global_params = {
'epochs': 1,
'verbose': 1,
'batch_size': 200,
'compile': True
}
clear_session()
models = {
'salad': SALAD(**global_params, model=base_model(), sb_model=dctr_model(), sb_epochs=1, sb_batch_size=200,),
'dvsim': DataVsSim(**global_params, model=base_model(), ),
'cwola': CWoLa(**global_params, model=base_model(), ),
'sacwola': SACWoLa(**global_params, model=base_model(), lambda_=1.0)
}
preds = {}
for name,m in models.items():
print('training ', name)
m.fit(x[:,1:][y_sr | y_sb], y_sim=y_sim[y_sr | y_sb], y_sr=y_sr[y_sr | y_sb], m=x[:,0:1][y_sr | y_sb])
m.save('data/test/{}'.format(name))
preds[name] = m.predict(x_test[y_sr_test,1:])
# clear_session()
from sklearn.metrics import roc_curve
for name in models.keys():
fpr,tpr,thresh = roc_curve(y_signal_test[y_sr_test], preds[name])
plt.plot(tpr, tpr/np.sqrt(fpr), label=name)
plt.plot(tpr, tpr/np.sqrt(tpr), ls=':', color='tab:grey')
plt.legend()
plt.show()
# from importlib import reload
# import models
# reload(models)
# from models import *
from keras.backend import clear_session
from importlib import reload
import anomaly_detection_models
reload(anomaly_detection_models)
from anomaly_detection_models import SALAD, DataVsSim, CWoLa, SACWoLa
global_params = {
'epochs': 10,
'verbose': 1,
'batch_size': 200,
'compile': True
}
clear_session()
models = {
'salad': SALAD(**global_params, model=base_model(), sb_model=dctr_model(), sb_epochs=10, sb_batch_size=200,),
'dvsim': DataVsSim(**global_params, model=base_model(), ),
'cwola': CWoLa(**global_params, model=base_model(), ),
'sacwola': SACWoLa(**global_params, model=base_model(), lambda_=1.0)
}
preds = {}
for name,m in models.items():
print('training ', name)
m.fit(x[:,1:][y_sr | y_sb], y_sim=y_sim[y_sr | y_sb], y_sr=y_sr[y_sr | y_sb], m=x[:,0:1][y_sr | y_sb])
m.save('data/test10/{}'.format(name))
preds[name] = m.predict(x_test[y_sr_test,1:])
# clear_session()
from sklearn.metrics import roc_curve
for name,m in models.items():
yhat = m.predict(x_test[y_sr_test,1:])
fpr,tpr,thresh = roc_curve(y_signal_test[y_sr_test], yhat)
plt.plot(tpr, tpr/np.sqrt(fpr), label=name)
plt.plot(tpr, tpr/np.sqrt(tpr), ls=':', color='tab:grey')
plt.legend()
plt.show()
n_signal_test = n_signal*(test_frac/(1 - test_frac))
n_sel = x_data_test.shape[0] + x_sim_test.shape[0] + n_signal_test
y_sel_test = np.concatenate([np.ones(int(n_sel)), np.zeros(int(x_test.shape[0] - n_sel))]).astype(bool)
pvals = [0, 50, 90, 95, 99, 99.9]
window = (y_sr_test | y_sb_test) & y_sel_test
dtag = y_sim_test[window] == 1
stag = y_sim_test[window] == 0
for model_name,m in models.items():
yhat_all = m.predict(x_test[window,1:])
for p in pvals:
pct = np.percentile(yhat_all[dtag], p)
tag = yhat_all[dtag] >= pct
plt.hist(x_test[window,0][dtag][tag], histtype='step', label=p, density=0, bins=20)
plt.gca().axvline(3.3, color='tab:grey', ls=':')
plt.gca().axvline(3.7, color='tab:grey', ls=':')
plt.title('{}'.format(model_name))
plt.legend()
plt.yscale('log')
plt.show()
pvals = [0, 50, 90, 95, 99, 99.5]
for model_name,m in models.items():
yhat_all = m.predict(x_test[window,1:])
if hasattr(m, 'predict_weight'):
w = m.predict_weight(x_test[window])
else:
w = np.ones_like(y_sim_test[window])
for p in pvals:
pct = np.percentile(yhat_all[dtag], p)
tag = yhat_all[dtag] >= pct
tag_s = yhat_all[stag] >= pct
dcnts,bns = np.histogram(x_test[window,0][dtag][tag], bins=25,)
scnts,bns = np.histogram(x_test[window,0][stag][tag_s], weights=w[stag][tag_s], bins=25,)
xpt = bns[:-1] + np.diff(bns)*.5
val = (dcnts/scnts)
err = val*np.sqrt(1/dcnts + 1/scnts)
plt.plot(xpt, val)
plt.fill_between(xpt, val - err, val + err, alpha=.3, label=p)
plt.legend()
plt.axhline(1, color='tab:grey', ls=':')
plt.title(model_name)
plt.show()
w_sb = s.predict_weight(x_tr[sb])
plot_set(x_tr[sb], y_tr[sb], w_sb)
w_sr = s.predict_weight(x_tr[sr])
plot_set(x_tr[sr], y_tr[sr], w_sr)
%run models.py
###Output
_____no_output_____ |
docs/contents/tools/files/file_prmtop/to_molsysmt_MolSys.ipynb | ###Markdown
To molsysmt.MolSys
###Code
from molsysmt.tools import file_prmtop
#file_prmtop.to_molsysmt_MolSys(item)
###Output
_____no_output_____ |
Self Attention Time Complexity.ipynb | ###Markdown
We compare two implementations of SimpleSelfAttention. Both have at their core this self-attention operation: x is originally an input tensor of shape (input_channels, height, width) which gets reshaped to (input_channels, N), where N = height * width. W is a tensor of shape (input_channels, input_channels). W * x is implemented as a 1 * 1 convolution. We will show that the order of operations matters. SimpleSelfAttention1 multiplies matrices in the naive order:
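As a quick illustrative check (separate from the benchmark below) that a 1 * 1 convolution over the flattened tensor is exactly the matrix product W * x:

```python
import torch
import torch.nn as nn

C, N = 8, 16
x = torch.randn(1, C, N)                          # (minibatch, channels, N)
conv = nn.Conv1d(C, C, kernel_size=1, bias=False)
W = conv.weight.squeeze(-1)                       # (C, C)
print(torch.allclose(conv(x), W @ x, atol=1e-6))  # True
```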
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import tensor
from fastai.layers import conv1d  # assumed imports; the notebook's original import cell is not shown here

class SimpleSelfAttention1(nn.Module):
def __init__(self, n_in:int, ks=1):
super().__init__()
self.conv = conv1d(n_in, n_in, ks, padding=ks//2, bias=False)
self.gamma = nn.Parameter(tensor([0.]))
self.n_in = n_in
def forward(self,x):
size = x.size() #(Minibatch, Channels, Height, Width)
x = x.view(*size[:2],-1) #(Minibatch, Channels, N)
o = torch.bmm(x.permute(0,2,1).contiguous(),self.conv(x)) # x^T * (W * x)
o = self.gamma * torch.bmm(x,o) + x
return o.view(*size).contiguous()
###Output
_____no_output_____
###Markdown
While SimpleSelfAttention2 does it in a different order:
###Code
class SimpleSelfAttention2(nn.Module):
def __init__(self, n_in:int, ks=1):#, n_out:int):
super().__init__()
self.conv = conv1d(n_in, n_in, ks, padding=ks//2, bias=False)
self.gamma = nn.Parameter(tensor([0.]))
self.n_in = n_in
def forward(self,x):
size = x.size()
x = x.view(*size[:2],-1) # (C,N)
convx = self.conv(x) # (C,C) * (C,N) = (C,N) => O(NC^2)
xxT = torch.bmm(x,x.permute(0,2,1).contiguous()) # (C,N) * (N,C) = (C,C) => O(NC^2)
o = torch.bmm(xxT, convx) # (C,C) * (C,N) = (C,N) => O(NC^2)
o = self.gamma * o + x
return o.view(*size).contiguous()
###Output
_____no_output_____
###Markdown
The complexity of computing the product of two rectangular matrices of shape (n,m) and (m,p) is O(nmp). Therefore, the operation in SimpleSelfAttention1 is O(NC^2 + CN^2), while for SimpleSelfAttention2 it is O(NC^2). Remember that N = height * width, so having complexity increase with N^2 is very undesirable! Let's see if this works in practice:
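For a concrete sense of scale (just plugging numbers into the expressions above): with C = 64 channels and a 32 * 32 input, N = 1024, so NC^2 = 1024 * 64^2 ≈ 4.2M multiply-adds while CN^2 = 64 * 1024^2 ≈ 67M. Doubling both height and width multiplies the CN^2 term by 16 but the NC^2 term only by 4.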
###Code
n_in = 64
x = torch.randn(64, n_in,32,32) #minibatch,C,H,W
sa1 = SimpleSelfAttention1(n_in)
sa2 = SimpleSelfAttention2(n_in)
###Output
_____no_output_____
###Markdown
We first check that the two modules have the same output:
###Code
torch.equal(sa1(x),sa2(x))
###Output
_____no_output_____
###Markdown
Let's compare the runtimes:
###Code
%%timeit
sa1(x)
%%timeit
sa2(x)
###Output
216 ms ± 31.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Let's see what happens if we increase channel size by a factor of 16:
###Code
n_in = 1024
x = torch.randn(64, n_in,32,32) #minibatch,C,H,W
sa1 = SimpleSelfAttention1(n_in)
sa2 = SimpleSelfAttention2(n_in)
%%timeit
sa1(x)
%%timeit
sa2(x)
###Output
2.17 s ± 40.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
SimpleSelfAttention2 is more sensitive to channel size, something to keep in mind if we work with high channel counts at low input spatial dimensions. What happens if we just double the spatial dimensions? Back to 64 channels:
###Code
n_in = 64
x = torch.randn(64, n_in,64,64) #minibatch,C,H,W
sa1 = SimpleSelfAttention1(n_in)
sa2 = SimpleSelfAttention2(n_in)
%%timeit
sa1(x)
%%timeit
sa2(x)
###Output
435 ms ± 22.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Let's double them again:
###Code
n_in = 64
x = torch.randn(64, n_in,128,128) #minibatch,C,H,W
sa1 = SimpleSelfAttention1(n_in)
sa2 = SimpleSelfAttention2(n_in)
%%timeit
sa1(x)
%%timeit
sa2(x)
###Output
1.36 s ± 58.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
SimpleSelfAttention2 is much better at handling larger spatial dimensions! How does this compare to the original SelfAttention layer?
This is the original SelfAttention layer as currently implemented in fast.ai:
https://github.com/fastai/fastai/blob/5c51f9eabf76853a89a9bc5741804d2ed4407e49/fastai/layers.py
This implementation is based on the SAGAN paper:
https://arxiv.org/abs/1805.08318
###Code
class SelfAttention(nn.Module):
"Self attention layer for nd."
def __init__(self, n_channels:int):
super().__init__()
self.query = conv1d(n_channels, n_channels//8)
self.key = conv1d(n_channels, n_channels//8)
self.value = conv1d(n_channels, n_channels)
self.gamma = nn.Parameter(tensor([0.]))
def forward(self, x):
#Notation from https://arxiv.org/pdf/1805.08318.pdf
size = x.size()
x = x.view(*size[:2],-1)
f,g,h = self.query(x),self.key(x),self.value(x)
beta = F.softmax(torch.bmm(f.permute(0,2,1).contiguous(), g), dim=1)
o = self.gamma * torch.bmm(h, beta) + x
return o.view(*size).contiguous()
###Output
_____no_output_____
###Markdown
It doesn't seem that we can use the same reordering trick here, due to the presence of the softmax: the softmax is applied to the N * N attention matrix, so that matrix has to be materialized before it can be multiplied with the values, and the matrix products can no longer be re-associated. The outputs from SelfAttention and SimpleSelfAttention won't match, but we can compare runtimes:
###Code
n_in = 32
x = torch.randn(64, n_in,16,16) #minibatch,C,H,W
sa1 = SimpleSelfAttention1(n_in)
sa2 = SimpleSelfAttention2(n_in)
sa0 = SelfAttention(n_in)
%%timeit
sa0(x)
%%timeit
sa1(x)
%%timeit
sa2(x)
# Double image size
n_in = 32
x = torch.randn(64, n_in,32,32) #minibatch,C,H,W
sa1 = SimpleSelfAttention1(n_in)
sa2 = SimpleSelfAttention2(n_in)
sa0 = SelfAttention(n_in)
%%timeit
sa0(x)
%%timeit
sa1(x)
%%timeit
sa2(x)
# Double image size again
n_in = 32
x = torch.randn(64, n_in,64,64) #minibatch,C,H,W
sa1 = SimpleSelfAttention1(n_in)
sa2 = SimpleSelfAttention2(n_in)
sa0 = SelfAttention(n_in)
%%time
sa0(x);
%%timeit
sa1(x)
%%timeit
sa2(x)
###Output
296 ms ± 41.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
|
python/02_numpy_pandas/exam_02_linear_regression.ipynb | ###Markdown
Linear Regression Analysis
- Premier League data (goals scored, goals conceded, points)
- A model that predicts points from goals scored and goals conceded
- scikit-learn package
  - A tool for data mining, data analysis, and modeling
  - Open source that can be used commercially
###Code
import pickle
import numpy as np
import pandas as pd
# linear regression model
from sklearn import linear_model
# module that splits the data into training and test sets
from sklearn.model_selection import train_test_split
# module that evaluates the model
from sklearn.metrics import mean_absolute_error
###Output
_____no_output_____
###Markdown
Analysis procedure
- Load the data
- Preprocess the data
  - Split into independent and dependent variables
  - Split into training and test data
- Data analysis: linear regression model
- Performance evaluation: MAE
- Write the prediction code
###Code
# 1. Load the data
p_df = pd.read_csv("datas/premierleague.csv")
p_df.tail(1)
# 2. Preprocessing - 1: split into independent and dependent variables
df_x = p_df[["gf", "ga"]]
df_y = p_df[["points"]]
# 2. Preprocessing - 2: split into training and test data
train_x, test_x, train_y, test_y = train_test_split(
df_x, df_y, test_size=0.3, random_state=1)
# 3. Data analysis: linear regression model
# create the model object
model = linear_model.LinearRegression()
# fit() -> the function that trains the model
# running the line below completes the data analysis model
model.fit(train_x, train_y)
# 4. Performance evaluation: MAE (Mean Absolute Error)
# the mean of the absolute differences between test_y and pred_y
pred_y = model.predict(test_x)
pred_y
test_y["points"].values
pred_y = np.around(pred_y.flatten()).astype("int")
pred_y
mae = mean_absolute_error(test_y, pred_y)
round(mae, 2)
# 5. Build a prediction function
# the input must be provided as a DataFrame
def make_df(gf, ga):
return pd.DataFrame({"gf":[gf], "ga":[ga]})
gf, ga = 80, 30
result = int(model.predict(make_df(gf, ga)).flatten()[0])
result
p_df.head()
# save the model to a pickle file
with open("datas/p_model.pkl", "wb") as f:
pickle.dump(model, f)
with open("datas/p_model.pkl", "rb") as f:
load_model = pickle.load(f)
gf, ga = 80, 30
result = int(load_model.predict(make_df(gf, ga)).flatten()[0])
result
###Output
_____no_output_____ |
Numerical Methods/system_of_linear_eqns.ipynb | ###Markdown
Iterative algorithms to solve a system of linear equations:
* Gauss-Seidel method
* Jacobi method
###Code
import numpy as np
from numpy import linalg
import matplotlib.pyplot as plt
class Counter:
"""Class to count iterations taken by each algorithm"""
def __init__(self, limit=500):
self.iter = 0
self.limit = limit # Sets limit for loops in each algorithm
def count(self):
self.iter += 1
def get_count(self):
return(self.iter)
def reset_count(self):
self.iter = 0
###Output
_____no_output_____
###Markdown
Jacobi
$X^{k} = D^{-1}b - D^{-1}(L + U)X^{k - 1}$
###Code
#Jacobi Alg.
def jacobi(A, b, counter=Counter()):
"""
Solves system of linear equations numerically using Jacobi's alg
Parameters:
==========
A: Coefficients matrix
b: Solution of equations vector
Returns: "x_new" the roots vector
"""
b = b.reshape((len(A), 1))
x_old = np.zeros_like(b) #initial guess
L = np.tril(A) #lower triangular
U = np.triu(A) #Upper triangular
D = np.diag(A) #diagonal
LplusU = L + U - 2* np.diag(D)
D = D.reshape(b.shape)
check = False
eps = 1e-8 #tolerance
while check == False and counter.get_count() < counter.limit:
x_new = (b - (LplusU) @ x_old) / D #x_new has a new address
check = (linalg.norm(x_new - x_old, 2) / linalg.norm(x_new, 2)) < eps
x_old = x_new #copy memory address
counter.count()
return x_new
###Output
_____no_output_____
###Markdown
Gauss Seidel
$X^{k} = (D + L)^{-1}(b - UX^{k-1})$
###Code
#Gauss Seidel Alg.
def seidel(A, b, counter=Counter()):
"""
Solves system of linear equations numerically using Gauss Seidel's alg
Parameters:
==========
A: Coefficients matrix
b: Solution of equations vector
Returns: "x_new" the roots vector
"""
b = b.reshape((len(A), 1))
x_old = np.zeros_like(b) #initial guess
LD = np.tril(A) #Lower triangular
U = A - LD # Upper without diagonal
LD_inv = linalg.inv(LD)
check = False
eps = 1e-8 #tolerance
while check == False and counter.get_count() < counter.limit:
x_new = LD_inv @ (b - U @ x_old) #x_new has a new address
check = (linalg.norm(x_new - x_old, 2) / linalg.norm(x_new, 2)) < eps
x_old = x_new #copy memory address
counter.count()
return x_new
###Output
_____no_output_____
###Markdown
Power method
###Code
# Power method
def power_method(A, counter=Counter()):
"""
Power method Algorithm to solve for the dominant eigenvector and its eigenvalue
Parameters:
===========
A: Coefficient matrix, whose eigenvalues are required
Returns:
========
eig_v: The dominant eigenvalue
x_new: Dominant eigenvector
"""
x_old = np.random.randint(0, 20, (A.shape[0], 1)) #Initial guess of eigenvector
check = False
eig_v = 0
eps = 1e-8 #Tolerance
while check == False and counter.get_count() < counter.limit:
x_new = A @ x_old
eig_v = np.max(x_new) #Uniform norm "Infinity norm"
x_new = x_new / eig_v #New eigenvector
check = (linalg.norm(x_new - x_old, 2) / linalg.norm(x_new, 2)) < eps
x_old = x_new
counter.count()
return eig_v, x_new
###Output
_____no_output_____
###Markdown
Homework 3
Question 1
System modeling
$4p_1 - p_2 - p_3 = 0$
$5p_2 - p_1 - p_3 - p_4 - p_5 = 0$
$5p_3 - p_1 - p_2 - p_5 - p_6 = 0$
$5p_4 - p_2 - p_5 - p_7 - p_8 = 0$
$6p_5 - p_2 - p_3 - p_4 - p_6 - p_8 - p_9 = 0$
$5p_6 - p_3 - p_5 - p_9 - p_{10} = 0$
$4p_7 - p_4 - p_8 = 1$
$5p_8 - p_4 - p_5 - p_7 - p_9 = 1$
$5p_9 - p_5 - p_6 - p_8 - p_{10} = 1$
$4p_{10} - p_6 - p_9 = 1$
###Code
A = np.array([
[4, -1, -1, 0, 0, 0, 0, 0, 0, 0],
[-1, 5, -1, -1, -1, 0, 0, 0, 0, 0],
[-1, -1, 5, 0, -1, -1, 0, 0, 0, 0],
[0, -1, 0, 5, -1, 0, -1, -1, 0, 0],
[0, -1, -1, -1, 6, -1, 0, -1, -1, 0],
[0, 0, -1, 0, -1, 5, 0, 0, -1, -1],
[0, 0, 0, -1, 0, 0, 4, -1, 0, 0],
[0, 0, 0, -1, -1, 0, -1, 5, -1, 0],
[0, 0, 0, 0, -1, -1, 0, -1, 5, -1],
[0, 0, 0, 0, 0, -1, 0, 0, -1, 4]])
b = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
counter = Counter()
x = jacobi(A, b, counter)
print("Jacobi roots:\nIn ", counter.get_count(), "Iterations\n", x, "\n")
counter.reset_count()
x = seidel(A, b, counter)
print("Gauss Seidel roots:\nIn ", counter.get_count(), "Iterations\n", x, "\n\n")
###Output
Jacobi roots:
In 70 Iterations
[[0.09019607]
[0.18039214]
[0.18039214]
[0.2980392 ]
[0.33333332]
[0.2980392 ]
[0.45490195]
[0.52156861]
[0.52156861]
[0.45490195]]
Gauss Seidel roots:
In 38 Iterations
[[0.09019607]
[0.18039215]
[0.18039215]
[0.29803921]
[0.33333333]
[0.29803921]
[0.45490196]
[0.52156862]
[0.52156862]
[0.45490196]]
###Markdown
Question 2
###Code
A1 = np.array([
[0, 1, 2],
[0.5, 0, 0],
[0, 0.25, 0]])
A2 = np.array([
[0, 6, 8],
[0.5, 0, 0],
[0, 0.5, 0]])
#Question 1
print("Question 1:\nPower Method:")
print("=" * len("Power Method"))
l, v = power_method(A1)
print("Eigenvalue:\n", l, "\n")
print("Eigenvector:\n", v, "\n\n")
print("Check:")
print("=" * len("Check"))
print("Left hand side A @ v:\n", A1 @ v, "\n") #Transformation matrix (dot) Eigenvector
print("Right hand side: l * v\n", l * v, "\n\n") #Eigenvalue * Eigenvector
print("-" * 50, "\n")
#Question 2
print("Question 2:\nPower Method:")
print("=" * len("Power Method"))
l, v = power_method(A2)
print("Eigenvalue:\n", l, "\n")
print("Eigenvector:\n", v, "\n\n")
print("Check:")
print("=" * len("Check"))
print("Left hand side: A @ v\n", A2 @ v, "\n")
print("Right hand side: l * v\n", l * v, "\n\n")
###Output
Question 1:
Power Method:
============
Eigenvalue:
0.8846461812138321
Eigenvector:
[[1. ]
[0.56519771]
[0.15972423]]
Check:
=====
Left hand side A @ v:
[[0.88464617]
[0.5 ]
[0.14129943]]
Right hand side: l * v
[[0.88464618]
[0.5 ]
[0.14129943]]
--------------------------------------------------
Question 2:
Power Method:
============
Eigenvalue:
1.9999999742797248
Eigenvector:
[[1. ]
[0.25 ]
[0.0625]]
Check:
=====
Left hand side: A @ v
[[2.00000001]
[0.5 ]
[0.125 ]]
Right hand side: l * v
[[1.99999997]
[0.5 ]
[0.125 ]]
|
_build/jupyter_execute/notebooks/ddp/03 Asset replacement model with maintenance.ipynb | ###Markdown
Asset replacement model with maintenance**Randall Romero Aguilar, PhD**This demo is based on the original Matlab demo accompanying the Computational Economics and Finance 2001 textbook by Mario Miranda and Paul Fackler.Original (Matlab) CompEcon file: **demddp03.m**Running this file requires the Python version of CompEcon. This can be installed with pip by running !pip install compecon --upgrade Last updated: 2021-Oct-01 AboutConsider the preceding example, but suppose that the productivity of the asset may be enhanced by performing annual service maintenance. Specifically, at the beginning of each year, a manufacturer must decide whether to replace the asset with a new one or, if he elects to keep the asset, whether to service it. An asset that is $a$ years old and has been serviced $s$ times yields a profit contribution $p(a, s)$ up to an age of $n$ years, at which point the asset becomes unsafe and must be replaced by law. The cost of a new asset is $c$, and the cost of servicing an asset is $k$. What replacement-maintenance policy maximizes profits? Initial tasks
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from compecon import DDPmodel, gridmake, getindex
###Output
_____no_output_____
###Markdown
Model ParametersAssume a maximum asset age of 5 years, asset replacement cost $c = 75$, cost of servicing $k = 10$, and annual discount factor $\delta = 0.9$.
###Code
maxage = 5
repcost = 75
mancost = 10
delta = 0.9
###Output
_____no_output_____
###Markdown
State SpaceThis is an infinite horizon, deterministic model with time $t$ measured in years. The state variables$a \in \{1, 2, 3, \dots, n\}$$s \in \{0, 1, 2, \dots, n − 1\}$are the age of the asset in years and the number of servicings it has undergone, respectively.
###Code
s1 = np.arange(1, 1 + maxage) # asset age
s2 = s1 - 1 # servicings
S = gridmake(s1,s2) # combined state grid
S1, S2 = S
n = S1.size # total number of states
###Output
_____no_output_____
###Markdown
Here, the set of possible asset ages and servicings are generated individually, and then a two-dimensional state grid is constructed by forming their Cartesian product using the CompEcon routine gridmake. Action SpaceThe action variable$x \in \{\text{no action}, \text{service}, \text{replace}\}$is the hold-replacement-maintenance decision.
###Code
X = np.array(['no action', 'service', 'replace']) # vector of actions
m = len(X) # number of actions
###Output
_____no_output_____
###Markdown
Reward FunctionThe reward function is\begin{equation}f (a, s, x) =\begin{cases}p(a, s), &x = \text{no action}\\p(a, s + 1) − k, &x = \text{service}\\p(0, 0) − c, &x = \text{replace}\end{cases}\end{equation}Assuming a profit contribution $p(a) = 50 − 2.5a − 2.5a^2$ that is a function of the asset age $a$ in years. Here, the rows of the reward matrix, which correspond to the three admissible decisions (no action, service, replace), are computed individually.
###Code
f = np.zeros((m, n))
q = 50 - 2.5 * S1 - 2.5 * S1 ** 2
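# Rows of f: 0 = keep the asset as is, 1 = service it (pay mancost), 2 = replace it (pay repcost).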
f[0] = q * np.minimum(1, 1 - (S1 - S2) / maxage)
f[1] = q * np.minimum(1, 1 - (S1 - S2 - 1) / maxage) - mancost
f[2] = 50 - repcost
###Output
_____no_output_____
###Markdown
State Transition FunctionThe state transition function is\begin{equation}g(a, s, x) =\begin{cases}(a + 1, s), &x = \text{no action}\\(a + 1, s + 1), &x = \text{service}\\(1, 0), &x = \text{replace}\end{cases}\end{equation}Here, the routine ```getindex``` is used to find the index of the following period's state.
###Code
g = np.empty_like(f)
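# Rows of g: 0 = keep -> age +1; 1 = service -> age +1 and servicings +1; 2 = replace -> back to a new asset (1, 0).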
g[0] = getindex(np.c_[S1 + 1, S2], S)
g[1] = getindex(np.c_[S1 + 1, S2 + 1], S)
g[2] = getindex(np.c_[1, 0], S)
###Output
_____no_output_____
###Markdown
Model Structure The value of an asset of age $a$ that has undergone $s$ servicings satisfies the Bellman equation\begin{equation}V(a,s) = \max\{p(a,s) + \delta V(a+1,s),\quad p(a,s+1)−k + \delta V(a+1,s+1),\quad p(0,0) − c + \delta V(1,0)\}\end{equation}where we set $p(n, s) = −\infty$ for all $s$ to enforce replacement of an asset of age $n$. The Bellman equation asserts that if the manufacturer replaces an asset of age $a$ with servicings $s$, he earns $p(0,0) − c$ over the coming year and begins the subsequent year with an asset worth $V(1,0)$; if he services the asset, he earns $p(a, s + 1) − k$ over the coming year and begins the subsequent year with an asset worth $V(a + 1, s + 1)$. As with the previous example, the value $V(a, s)$ measures not only the current and future net earnings of the asset, but also the net earnings of all future assets that replace it. To solve and simulate this model, use the CompEcon class ```DDPmodel```.
###Code
model = DDPmodel(f, g, delta)
model.solve()
###Output
_____no_output_____
###Markdown
Analysis Simulate ModelThe paths are computed by performing a deterministic simulation of 12 years in duration using the ```simulate()``` method.
###Code
sinit = 0
nyrs = 12
t = np.arange(nyrs + 1)
spath, xpath = model.simulate(sinit, nyrs)
pd.Categorical(X[xpath], categories=X)
simul = pd.DataFrame({
'Year': t,
'Age of Asset': S1[spath],
'Number of Servicings': S2[spath]}).set_index('Year')
simul['Action'] = pd.Categorical(X[xpath], categories=X)
simul
###Output
_____no_output_____
###Markdown
Plot State Paths (Age and Servicings) and Action PathThe asset is replaced every four years, and is serviced twice, at the beginning of the second and third years of operation.
###Code
fig, axs = plt.subplots(3, 1, sharex=True, figsize=[9,9])
simul['Age of Asset'].plot(marker='o', ax=axs[0])
axs[0].set(title='Age of Asset')
simul['Number of Servicings'].plot(marker='o', ax=axs[1])
axs[1].set(title='Number of Servicings')
simul['Action'].cat.codes.plot(marker='o', ax=axs[2])
axs[2].set(title='Action Path', yticks=range(3), yticklabels=X);
###Output
_____no_output_____ |
Scikit-learn Linear Regression .ipynb | ###Markdown
Linear Regression As the training data for linear regression, we will use a file of US adults' heights and weights: the goal is to predict weight from height. First, let's import the relevant Python modules.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import sklearn
import scipy.stats as stats
###Output
_____no_output_____
###Markdown
Now let's load the height/weight file. From here on we will mainly use the pandas module to look at the data values and their distribution.
###Code
df = pd.read_csv('/Users/keeyong//Downloads/2018-march/weight-height.csv')
###Output
_____no_output_____
###Markdown
Once the training set is loaded, the first thing to do is to check which fields exist and the range and presence of the values in those fields. Let's start by looking at the first few records of the file.
###Code
df.head()
df['Gender'].unique()
df.describe().round(2)
###Output
_____no_output_____
###Markdown
Now let's visualize this data using matplotlib.
###Code
df['Height'].plot(kind='hist', title='Height')
plt.xlabel('Height')
df.plot(kind='scatter',
x='Height',
y='Weight',
title='Adult Height/Weight')
###Output
_____no_output_____
###Markdown
Let's draw a red line on top of this graph.
###Code
df.plot(kind='scatter',
x='Height',
y='Weight',
title='Adult Height/Weight')
plt.plot([55, 78], [75, 250], color='red', linewidth=3)
###Output
_____no_output_____
###Markdown
Now, by actually drawing straight lines on this graph, let's figure out which line best fits this pattern. First, define a line with slope w and y-intercept b.
###Code
def line(x, w=0, b=0):
return x * w + b
x = np.linspace(55, 80, 100)
x
yhat = line(x, w=0, b=0)
yhat
df.plot(kind='scatter',
x='Height',
y='Weight',
title='Adult Height/Weight')
plt.plot(x, yhat, color='red', linewidth=3)
###Output
_____no_output_____
###Markdown
Defining the cost function
###Code
def mean_squared_error(y_true, y_pred):
s = (y_true - y_pred)**2
return s.mean()
###Output
_____no_output_____
###Markdown
Store the training set's inputs (features) and labels in separate variables
###Code
X = df[['Height']].values
y_true = df['Weight'].values
y_true
y_pred = line(X)
y_pred
mean_squared_error(y_true, y_pred.ravel())
###Output
_____no_output_____
###Markdown
(Homework) Try changing the values of w and b yourself, plot the result, and see how the value of the cost function changes. Let's look at how the cost decreases when we fix w and change only b. We will draw the change in cost as a graph. First, on the weight/height graph we saw earlier, fix w at 2 and draw lines while varying b.
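A quick sketch for this homework (it reuses the `line` and `mean_squared_error` helpers defined above; the particular `w` values and `b=-100` are just an arbitrary illustration, not part of the original exercise):

    for w in [1, 2, 3, 4]:
        y_pred = line(X, w=w, b=-100)
        print(w, mean_squared_error(y_true, y_pred.ravel()))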
###Code
plt.figure(figsize=(10, 5))
ax1 = plt.subplot(121)
df.plot(kind='scatter',
x='Height',
y='Weight',
title='Weight and Height in adults', ax=ax1)
# Vary the value of b between -100 and +150 (in steps of 50)
bbs = np.array([-100, -50, 0, 50, 100, 150])
mses = [] # Store the cost values here; compute the gap between actual and predicted values with the cost function defined above
for b in bbs:
y_pred = line(X, w=2, b=b)
mse = mean_squared_error(y_true, y_pred)
mses.append(mse)
plt.plot(X, y_pred)
###Output
_____no_output_____
###Markdown
Let's plot the value of the cost function against the b values above
###Code
plt.plot(bbs, mses, 'o-')
plt.title('Cost as a function of b')
plt.xlabel('b')
###Output
_____no_output_____
###Markdown
Now let's run scikit-learn's linear regression. We build the training set features from the values of the gender and height fields.
###Code
X = df[['Gender', 'Height']].values
y_true = df['Weight'].values
###Output
_____no_output_____
###Markdown
First, encode the gender values as numbers
###Code
from sklearn.preprocessing import LabelEncoder
enc = LabelEncoder()
# The gender information is at index 0
label_encoder = enc.fit(X[:, 0])
print ("Categorical classes:", label_encoder.classes_)
# Print which numbers the gender values were mapped to
integer_classes = label_encoder.transform(label_encoder.classes_)
print ("Integer classes:", integer_classes)
# Now replace the gender values in the feature list with those numbers
t = label_encoder.transform(X[:, 0])
X[:, 0] = t
print(X)
###Output
Categorical classes: ['Female' 'Male']
Integer classes: [0 1]
[[1 73.847017017515]
[1 68.78190404589029]
[1 74.11010539178491]
...,
[0 63.8679922137577]
[0 69.03424313073461]
[0 61.944245879517204]]
###Markdown
Now we train the model using Scikit-Learn's LinearRegression: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html
###Code
from sklearn import linear_model
regr = linear_model.LinearRegression()
regr.fit(X, y_true)
regr.predict([[0, 70], [1, 70]])
###Output
_____no_output_____
###Markdown
Evaluating Model Performance using r2_score: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html
###Code
y_pred = regr.predict(X)
from sklearn.metrics import r2_score
print("The R2 score is {:0.3f}".format(r2_score(y_true, y_pred)))
###Output
The R2 score is 0.903
###Markdown
Train/test split (20% hold-out)
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y_true, test_size=0.2)
len(X)
len(X_train)
regr.fit(X_train, y_train)
y_test_pred = regr.predict(X_test)
print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred)))
from sklearn.metrics import mean_squared_error as mse
print("The Mean Squared Error on the Test set is:\t{:0.1f}".format(mse(y_test, y_test_pred)))
from sklearn.metrics import mean_squared_error as mse
y_train_pred = regr.predict(X_train)  # compute train-set predictions (needed for the train-set metrics below)
print("The Mean Squared Error on the Train set is:\t{:0.1f}".format(mse(y_train, y_train_pred)))
print("The Mean Squared Error on the Test set is:\t{:0.1f}".format(mse(y_test, y_test_pred)))
print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred)))
print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred)))
print('Coefficients: \n', regr.coef_)
print('Coefficients: \n', regr.intercept_)
###Output
Coefficients:
-243.035438791
|
src/apps/hello_bokeh_world/app.ipynb | ###Markdown
Minimal Bokeh Hello World Example
###Code
from bokeh.plotting import curdoc
from bokeh.models import Div
app=curdoc()
model=Div(text="<h1>Hello Bokeh World from .py Code File</h1>", sizing_mode="stretch_width")
app.add_root(model)
###Output
_____no_output_____ |
mlcc-exercises_ko/exercises/intro_to_sparse_data_and_embeddings.ipynb | ###Markdown
Copyright 2017 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Introduction to Sparse Data and Embeddings**Learning Objectives:*** Convert movie-review string data to a sparse feature vector* Implement a sentiment-analysis linear model using a sparse feature vector* Implement a sentiment-analysis DNN model with an embedding that projects the data into two dimensions* Visualize the embedding to see what the model has learned about the relationships between words In this exercise, we explore sparse data and work with embeddings using movie-review text data from the [ACL 2011 IMDB dataset](http://ai.stanford.edu/~amaas/data/sentiment/). This data has already been processed into `tf.Example` format. SetupFirst, let's import our dependencies and download the training and test data. [`tf.keras`](https://www.tensorflow.org/api_docs/python/tf/keras) includes a file download and caching tool that we can use to retrieve the datasets.
###Code
from __future__ import print_function
import collections
import io
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from IPython import display
from sklearn import metrics
tf.logging.set_verbosity(tf.logging.ERROR)
train_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/train.tfrecord'
train_path = tf.keras.utils.get_file(train_url.split('/')[-1], train_url)
test_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/test.tfrecord'
test_path = tf.keras.utils.get_file(test_url.split('/')[-1], test_url)
###Output
_____no_output_____
###Markdown
Building a Sentiment Analysis Model Let's train a sentiment-analysis model on this data that predicts whether a review is generally *positive* (label 1) or *negative* (label 0). To do so, we turn the string-valued `terms` into feature vectors by using a *vocabulary*, a list of each term we expect to see in our data. For the purposes of this exercise, we've created a small vocabulary with a limited set of terms. Most of these terms were found to be strongly indicative of *positive* or *negative*, but some were just added because they are interesting. Each term in the vocabulary is mapped to a coordinate in the feature vector. To convert the string-valued `terms` of an example into this vector format, we encode each coordinate as 0 if the vocabulary term does not appear in the example string, and as 1 if it does. Terms in an example that are not in the vocabulary are simply ignored. **NOTE:** *We could of course use a larger vocabulary, and there are specialized tools for building one. In addition, instead of just dropping out-of-vocabulary terms, we could introduce a small number of OOV (out-of-vocabulary) buckets and hash those terms into them. We could also use a __feature hashing__ approach that hashes each term instead of building an explicit vocabulary. This works well in practice but loses interpretability, which is useful for this exercise. See the tf.feature_column module for tools that handle this.* Building the Input Pipeline First, let's configure the input pipeline that imports our data into a TensorFlow model. We can use the following function to parse the training and test data, which is in [TFRecord](https://www.tensorflow.org/programmers_guide/datasets) format, and return a dict of the features and the corresponding labels.
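For intuition, here is a tiny sketch of that encoding in plain Python (the mini vocabulary and review below are made-up illustrations, not part of the dataset):

    vocab = ["great", "boring", "terrible"]
    review_terms = ["a", "great", "little", "film"]
    feature_vector = [1 if term in review_terms else 0 for term in vocab]
    # -> [1, 0, 0]; terms not in the vocabulary ("a", "little", "film") are simply ignored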
###Code
def _parse_function(record):
"""Extracts features and labels.
Args:
record: File path to a TFRecord file
Returns:
A `tuple` `(labels, features)`:
features: A dict of tensors representing the features
labels: A tensor with the corresponding labels.
"""
features = {
"terms": tf.VarLenFeature(dtype=tf.string), # terms are strings of varying lengths
"labels": tf.FixedLenFeature(shape=[1], dtype=tf.float32) # labels are 0 or 1
}
parsed_features = tf.parse_single_example(record, features)
terms = parsed_features['terms'].values
labels = parsed_features['labels']
return {'terms':terms}, labels
###Output
_____no_output_____
###Markdown
To confirm that the function works as expected, let's construct a `TFRecordDataset` for the training data and map the data to features and labels using the function above.
###Code
# Create the Dataset object.
ds = tf.data.TFRecordDataset(train_path)
# Map features and labels with the parse function.
ds = ds.map(_parse_function)
ds
###Output
_____no_output_____
###Markdown
Run the following cell to retrieve the first example from the training dataset.
###Code
n = ds.make_one_shot_iterator().get_next()
sess = tf.Session()
sess.run(n)
###Output
_____no_output_____
###Markdown
Now let's build a formal input function that we can pass to the `train()` method of a TensorFlow Estimator object.
###Code
# Create an input_fn that parses the tf.Examples from the given files,
# and split them into features and targets.
def _input_fn(input_filenames, num_epochs=None, shuffle=True):
# Same code as above; create a dataset and map features and labels.
ds = tf.data.TFRecordDataset(input_filenames)
ds = ds.map(_parse_function)
if shuffle:
ds = ds.shuffle(10000)
# Our feature data is variable-length, so we pad and batch
# each field of the dataset structure to whatever size is necessary.
ds = ds.padded_batch(25, ds.output_shapes)
ds = ds.repeat(num_epochs)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
###Output
_____no_output_____
###Markdown
Task 1: Use a Linear Model with Sparse Inputs and an Explicit Vocabulary For our first model, we'll build a [`LinearClassifier`](https://www.tensorflow.org/api_docs/python/tf/estimator/LinearClassifier) model using 50 informative terms; it's always good to start simple. The following code constructs the feature column for our terms. The [`categorical_column_with_vocabulary_list`](https://www.tensorflow.org/api_docs/python/tf/feature_column/categorical_column_with_vocabulary_list) function creates a feature column with the string-to-feature-vector mapping.
###Code
# 50 informative terms that compose our model vocabulary.
informative_terms = ("bad", "great", "best", "worst", "fun", "beautiful",
"excellent", "poor", "boring", "awful", "terrible",
"definitely", "perfect", "liked", "worse", "waste",
"entertaining", "loved", "unfortunately", "amazing",
"enjoyed", "favorite", "horrible", "brilliant", "highly",
"simple", "annoying", "today", "hilarious", "enjoyable",
"dull", "fantastic", "poorly", "fails", "disappointing",
"disappointment", "not", "him", "her", "good", "time",
"?", ".", "!", "movie", "film", "action", "comedy",
"drama", "family")
terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_list(key="terms", vocabulary_list=informative_terms)
###Output
_____no_output_____
###Markdown
Next, we'll construct the `LinearClassifier`, train it on the training set, and evaluate it on the evaluation set. Read through the code, run it, and check the results.
###Code
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
feature_columns = [ terms_feature_column ]
classifier = tf.estimator.LinearClassifier(
feature_columns=feature_columns,
optimizer=my_optimizer,
)
classifier.train(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([test_path]),
steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
###Output
_____no_output_____
###Markdown
Task 2: Use a Deep Neural Network (DNN) Model The model above is a linear model and it works quite well, but can we do better with a DNN model? Let's swap the `LinearClassifier` for a [`DNNClassifier`](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNClassifier). Run the following cell and check the results.
###Code
##################### Here's what we changed ##################################
classifier = tf.estimator.DNNClassifier( #
feature_columns=[tf.feature_column.indicator_column(terms_feature_column)], #
hidden_units=[20,20], #
optimizer=my_optimizer, #
) #
###############################################################################
try:
classifier.train(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([train_path]),
steps=1)
print("Training set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([test_path]),
steps=1)
print("Test set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
except ValueError as err:
print(err)
###Output
_____no_output_____
###Markdown
Task 3: Use an Embedding with a DNN Model In this task, we implement the DNN model using an embedding column. An embedding column takes sparse data as input and returns a lower-dimensional dense vector as output. **NOTE:** *An embedding_column is usually the computationally most efficient option for training a model on sparse data. In an [optional section](scrollTo=XDMlGgRfKSVz) at the end of this exercise, we discuss in more depth the implementation differences between `embedding_column` and `indicator_column` and their relative pros and cons.* In the code below, do the following: * Define the feature columns for the model using an `embedding_column` that projects the data into 2 dimensions (see the [TF docs](https://www.tensorflow.org/api_docs/python/tf/feature_column/embedding_column) for more detail on the function signature of `embedding_column`). * Define a `DNNClassifier` with the following specifications: * Two hidden layers of 20 units each * Adagrad optimization with a learning rate of 0.1 * A `gradient_clip_norm` of 5.0 **NOTE:** *In practice, we might project to a dimension higher than 2, like 50 or 100. But for now, 2 dimensions is easy to visualize.* Hint
###Code
# Here's a example code snippet you might use to define the feature columns:
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [ terms_embedding_column ]
###Output
_____no_output_____
###Markdown
Complete the code below
###Code
########################## YOUR CODE HERE ######################################
terms_embedding_column = # Define the embedding column
feature_columns = # Define the feature columns
classifier = # Define the DNNClassifier
################################################################################
classifier.train(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([test_path]),
steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
###Output
_____no_output_____
###Markdown
Solution Click below to see the solution.
###Code
########################## SOLUTION CODE ########################################
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [ terms_embedding_column ]
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[20,20],
optimizer=my_optimizer
)
#################################################################################
classifier.train(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([test_path]),
steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
###Output
_____no_output_____
###Markdown
Task 4: Convince yourself there's actually an embedding in there The `embedding_column` used in the model above seemed to work, but that doesn't tell us much about what's going on internally. How can we check that the model is actually using an embedding inside? To start, let's look at the tensors in the model.
###Code
classifier.get_variable_names()
###Output
_____no_output_____
###Markdown
We can now see that there is an embedding layer called `'dnn/input_from_feature_columns/input_layer/terms_embedding/...'`. What's interesting here is that this layer is trained simultaneously along with the rest of the model, just like any other hidden layer. Is the embedding layer the correct shape? Run the following code to find out. **NOTE:** *The embedding we use here is a matrix that projects a 50-dimensional vector down to 2 dimensions.*
###Code
classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights').shape
###Output
_____no_output_____
###Markdown
Take a moment to check the various layers and shapes yourself, to make sure everything is connected the way you would expect. Task 5: Examine the Embedding Now let's look at the actual embedding space and see where each term ends up in it. Do the following: 1. Run the following code to see the embedding trained in **Task 3**. Do things end up where you'd expect? 2. Re-run the code in **Task 3** to retrain the model, then run the embedding visualization below again. What stays the same? What changes? 3. Finally, retrain the model using only 10 steps (which yields a terrible model). Run the embedding visualization below again. What do you see now, and why?
###Code
import numpy as np
import matplotlib.pyplot as plt
embedding_matrix = classifier.get_variable_value('dnn/input_from_feature_columns/input_layer/terms_embedding/embedding_weights')
for term_index in range(len(informative_terms)):
# Create a one-hot encoding for our term. It has 0s everywhere, except for
# a single 1 in the coordinate that corresponds to that term.
term_vector = np.zeros(len(informative_terms))
term_vector[term_index] = 1
# We'll now project that one-hot vector into the embedding space.
embedding_xy = np.matmul(term_vector, embedding_matrix)
plt.text(embedding_xy[0],
embedding_xy[1],
informative_terms[term_index])
# Do a little setup to make sure the plot displays nicely.
plt.rcParams["figure.figsize"] = (15, 15)
plt.xlim(1.2 * embedding_matrix.min(), 1.2 * embedding_matrix.max())
plt.ylim(1.2 * embedding_matrix.min(), 1.2 * embedding_matrix.max())
plt.show()
###Output
_____no_output_____
###Markdown
Task 6: Try to improve the model's performance See if you can refine the model to improve performance. A couple of things you may want to try: * **Changing hyperparameters**, or **using a different optimizer** such as Adam (you may only gain one or two percentage points of accuracy with these strategies). * **Adding more terms to `informative_terms`.** There is a full vocabulary file with all 30,716 terms for this dataset at https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/terms.txt You can pick out additional terms from this vocabulary file, or use the whole thing via the `categorical_column_with_vocabulary_file` feature column.
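As a rough sketch of that last option (it assumes `terms_path` points to the downloaded vocabulary file, as in the cell below, which instead reads the file into a Python list):

    terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_file(
        key="terms", vocabulary_file=terms_path)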
###Code
# Download the vocabulary file.
terms_url = 'https://download.mlcc.google.com/mledu-datasets/sparse-data-embedding/terms.txt'
terms_path = tf.keras.utils.get_file(terms_url.split('/')[-1], terms_url)
# Create a feature column from "terms", using a full vocabulary file.
informative_terms = None
with io.open(terms_path, 'r', encoding='utf8') as f:
# Convert it to a set first to remove duplicates.
informative_terms = list(set(f.read().split()))
terms_feature_column = tf.feature_column.categorical_column_with_vocabulary_list(key="terms",
vocabulary_list=informative_terms)
terms_embedding_column = tf.feature_column.embedding_column(terms_feature_column, dimension=2)
feature_columns = [ terms_embedding_column ]
my_optimizer = tf.train.AdagradOptimizer(learning_rate=0.1)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
classifier = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
hidden_units=[10,10],
optimizer=my_optimizer
)
classifier.train(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([train_path]),
steps=1000)
print("Training set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
evaluation_metrics = classifier.evaluate(
input_fn=lambda: _input_fn([test_path]),
steps=1000)
print("Test set metrics:")
for m in evaluation_metrics:
print(m, evaluation_metrics[m])
print("---")
###Output
_____no_output_____ |
utility/workshop/Feature_Extractor_Class.ipynb | ###Markdown
PLOT and change hyper parameter
###Code
BASE_FOLDER = '../../'
%run -i ..\feature_extractor\JupyterLoad_feature_extractor.py
file_path = r'\dataset\6dB\pump\id_02\normal\00000004.wav'
## test:
fe_mel = feature_extractor_mel(BASE_FOLDER,'mel1')
print(fe_mel)
fe_mel.create_from_wav(file_path)
print(fe_mel)
fe_mel.plot()
plt.show()
fe_mel.set_hyperparamter(n_mels=12, n_fft=128)
print(fe_mel)
fe_mel.plot()
###Output
load feature_extractor_mother
load feature_extractor_mel_spectra
load feature_extractor_psd
<feature_extractor_type.MEL_SPECTRUM>[{'n_mels': 64, 'n_fft': 1024, 'power': 2.0, 'hop_length': 512, 'channel': 0}]wav=A:\Dev\NF_Prj_MIMII_Dataset
<feature_extractor_type.MEL_SPECTRUM>[{'n_mels': 64, 'n_fft': 1024, 'power': 2.0, 'hop_length': 512, 'channel': 0}]wav=A:\Dev\NF_Prj_MIMII_Dataset\dataset\6dB\pump\id_02\normal\00000004.wav
###Markdown
SAVE and Load from file
###Code
BASE_FOLDER = '../../'
%run -i ..\feature_extractor\JupyterLoad_feature_extractor.py
file_path = r'\dataset\6dB\pump\id_02\normal\00000004.wav'
## test:
fe_mel = feature_extractor_mel(BASE_FOLDER,'mel1')
fe_mel.set_hyperparamter(n_fft=512)
fe_mel.create_from_wav(file_path)
print(fe_mel)
fe_mel.save_to_file('test.pkl')
##-
fe_mel_read = feature_extractor_from_file('test.pkl','../../')
fe_mel_read.plot()
16000/2
###Output
_____no_output_____
###Markdown
3 outputs
###Code
BASE_FOLDER = '../../'
%run -i ..\feature_extractor\JupyterLoad_feature_extractor.py
file_path = r'\dataset\6dB\pump\id_02\normal\00000004.wav'
## test:
fe_mel = feature_extractor_mel(BASE_FOLDER,'mel1')
fe_mel.create_from_wav(file_path)
# flat feature simplest
#fe_mel.flat_feature().shape
fe_mel.get_feature({'function': 'flat'}).shape
fe_mel.get_feature({'function': 'flatzo'}).shape
frames = 5
print(fe_mel.feature_data.shape)
n_mels = fe_mel.feature_data.shape[0]
dims = n_mels * frames
vectorarray_size = len(fe_mel.feature_data[0, :]) - frames + 1
print(dims, vectorarray_size)
vectorarray = np.zeros((vectorarray_size, dims), float)
print(vectorarray.shape)
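# Each row of vectorarray will stack `frames` (= 5) consecutive mel-spectrogram columns
# side by side, i.e. a sliding window over time flattened into one feature vector.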
for t in range(frames):
vectorarray[:, n_mels * t: n_mels * (t + 1)] = fe_mel.feature_data[:, t: t + vectorarray_size].T
plt.imshow(vectorarray[:,0:n_mels].T)
fe_mel.feature_data.flatten().shape
###Output
_____no_output_____
###Markdown
ouput frame matrix
###Code
BASE_FOLDER = '../../'
%run -i ..\feature_extractor\JupyterLoad_feature_extractor.py
file_path = r'\dataset\6dB\pump\id_02\normal\00000004.wav'
## test:
fe_mel = feature_extractor_mel(BASE_FOLDER,'mel1')
fe_mel.create_from_wav(file_path)
#fframe_ = fe_mel.frame_pack_feature(3)
fframe_ = fe_mel.get_feature({'function': 'frame', 'frames': 3})
plt.imshow(fframe_)
plt.show()
plt.plot(fframe_[0,:])
plt.plot(fframe_[10,:])
plt.plot(fframe_[50,:])
###Output
load feature_extractor_mother
load feature_extractor_mel_spectra
load feature_extractor_psd
|
Chap_4_NLP_tokenization_pad_sequences.ipynb | ###Markdown
'Bangla' Natural Language Processing (NLP) in Plain Bangla - 2
###Code
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
train_data = [
"আচ্ছা, ডেটা কিভাবে কথা বলে?",
"পড়ছিলাম হান্স রোসলিং এর একটা বই, ফ্যাক্টফুলনেস।",
"ধারণা থেকে নয়, বরং ডেটাকে কথা বলতে দিলে আমাদের সব বিপদ কাটবে।",
"এই লোক পৃথিবীকে দেখিয়েছিলেন কিভাবে ২০০ বছরের ডেটা আমাদের বাঁচার সময় বাড়িয়েছে!"
]
test_data = [
"এই অ্যানিমেশন আমরা করবো আমাদের পিসিতে।",
"সরাসরি চালান নিচের লিংক থেকে, হচ্ছে তো?",
"পাল্টান প্যারামিটার, চালান নিজের মতো করে।"
]
num_words = 1000
oov_token = '<UNK>'
pad_type = 'post'
trunc_type = 'post'
# Tokenize our training data; in Bangla the danda '।' has to be dropped as a stopword/filter character.
tokenizer = Tokenizer(num_words=num_words, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n।', oov_token=oov_token)
tokenizer.fit_on_texts(train_data)
# Extract the word index of the training data
word_index = tokenizer.word_index
tokenizer.get_config()
# Turn the training sentences into sequences
train_sequences = tokenizer.texts_to_sequences(train_data)
# Find the maximum length among the training sequences
maxlen = max([len(x) for x in train_sequences])
# Add padding to the training sequences
train_padded = pad_sequences(train_sequences, padding=pad_type, truncating=trunc_type, maxlen=maxlen)
# Look at the outputs of our work in different ways; note that we dropped '।' as a Bangla stopword.
print("Word index:\n", word_index)
print("\nTraining sequences:\n", train_sequences)
print("\nPadded training sequences:\n", train_padded)
print("\nPadded training shape:", train_padded.shape)
print("Training sequences data type:", type(train_sequences))
print("Padded Training sequences data type:", type(train_padded))
test_sequences = tokenizer.texts_to_sequences(test_data)
test_padded = pad_sequences(test_sequences, padding=pad_type, truncating=trunc_type, maxlen=maxlen)
print("Testing sequences:\n", test_sequences)
print("\nPadded testing sequences:\n", test_padded)
print("\nPadded testing shape:",test_padded.shape)
for x, y in zip(test_data, test_padded):
print('{} -> {}'.format(x, y))
print("\nWord index (for reference):", word_index)
###Output
এই অ্যানিমেশন আমরা করবো আমাদের পিসিতে। -> [25 1 1 1 5 1 0 0 0 0 0 0]
সরাসরি চালান নিচের লিংক থেকে, হচ্ছে তো? -> [ 1 1 1 1 16 1 1 0 0 0 0 0]
পাল্টান প্যারামিটার, চালান নিজের মতো করে। -> [1 1 1 1 1 1 0 0 0 0 0 0]
Word index (for reference): {'<UNK>': 1, 'ডেটা': 2, 'কিভাবে': 3, 'কথা': 4, 'আমাদের': 5, 'আচ্ছা': 6, 'বলে': 7, 'পড়ছিলাম': 8, 'হান্স': 9, 'রোসলিং': 10, 'এর': 11, 'একটা': 12, 'বই': 13, 'ফ্যাক্টফুলনেস': 14, 'ধারণা': 15, 'থেকে': 16, 'নয়': 17, 'বরং': 18, 'ডেটাকে': 19, 'বলতে': 20, 'দিলে': 21, 'সব': 22, 'বিপদ': 23, 'কাটবে': 24, 'এই': 25, 'লোক': 26, 'পৃথিবীকে': 27, 'দেখিয়েছিলেন': 28, '২০০': 29, 'বছরের': 30, 'বাঁচার': 31, 'সময়': 32, 'বাড়িয়েছে': 33}
|
digital-assyriology-review/Digital-Assyriology-Day-1.ipynb | ###Markdown
Digital Assyriology Day 1
###Code
%pylab inline
from datascience import *
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import os
import re
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Notes- Changed all [he, she] into [he/ she]- Changed all [itself, themselves] to [itself/ themselves] Getting Started We first import our data into a table.
###Code
# We read in our data from `Enmerkat.txt`, and seperate entries by commas.
Enmerkar_table = Table.read_table('Enmerkar.txt', sep = ',')
Enmerkar_table = Enmerkar_table.drop(['text_name', 'etcsl_no'])
# We display the table below
Enmerkar_table
###Output
_____no_output_____
###Markdown
Sanitizing Input Deleting Spaces We ensure that the labels of our table do not have spaces. We define the function `remove_space_from_labels` which works for all tables.
###Code
def remove_space_from_labels(table):
for label in table.labels:
table.relabel(label, label.replace(' ', ''))
return table
Enmerkar_table = remove_space_from_labels(Enmerkar_table)
Enmerkar_table.labels
###Output
_____no_output_____
###Markdown
Valid Line Numbers We ensure we are only working with valid data: We drop rows of our table where the `l_no` field contains letters.
###Code
# drop rows of different translations
to_be_kept = []
# We iterate through fields in the column `l_no`
for i in Enmerkar_table['l_no']:
# We check if the label of a row contains a letter
if re.search('[a-zA-Z]', i):
# Here, we add an entry `False` to the list we created above, `to_be_kept`
to_be_kept.append(False)
else:
# If the entry did not contain a letter, we append true to our list
to_be_kept.append(True)
# We keep only the rows flagged True above (dropping rows whose line number contained a letter)
Enmerkar_table = Enmerkar_table.where(to_be_kept)
Enmerkar_table
###Output
_____no_output_____
###Markdown
Text Analysis Here, we define dictionaries mapping abbreviations ('CN') to full length phrases.
###Code
proper_nouns = {
'CN': 'Constellation Name (star)',
'DN': 'Deity Name',
'EN': 'Ethnicity Name',
'FN': 'Field Name',
'GN': 'Geographical Name (for regions and countries)',
'MN': 'Month Name',
'ON': 'Object Name (usually for objects associated with a god)',
'PN': 'Personal Name',
'RN': 'Royal Name',
'SN': 'Settlement Name',
'TN': 'Temple Name',
'WN': 'Water Name',
}
simple_terms = {
'AJ': 'Adjective',
'AV': 'Adverb',
'C': 'Conjunction',
'N': 'Noun',
'NU': 'Number',
'PD': 'Part of Speech',
'V': 'Verb',
}
###Output
_____no_output_____
###Markdown
Analyzing Proper Nouns and Speech Articles in text We define a few helper functions below.
###Code
# returns the meanings of words in a line of text
def term_finder (line):
# We filter out terms from the line, which is represented as a String
# We look for substrings which contain left and right brackets
terms = re.findall(r"(?<=\[)(.*?)(?=\])", line)
return terms
# returns a list of all the proper nouns in a line of text
def proper_noun_finder(line):
# We look for substrings which come after a colon and end just before an opening square bracket
nouns = re.findall(r"(?<=\:)(.*?)(?=\[)", line)
# We make sure the proper noun is valid, which is if it has length greater than 1
# and if the first character is upper case while the second isn't
nouns = [word for word in nouns if (len(word) > 1 and word[0].isupper() and not word[1].isupper())]
return nouns
#returns the speech articles for proper_nouns or all words
def speech_article_finder(line, proper_noun_filter = True):
# We look for substrings which come after a closing square bracket and end at the next whitespace
terms = re.findall(r"(?<=\])(.*?)(?=\s)", line)
if proper_noun_filter:
# If we only want proper nouns, we match the terms we found to the list of proper nouns
# Then, we only select those which are on our list of proper nouns
articles = [term for term in terms if term in proper_nouns]
else:
articles = terms
return articles
###Output
_____no_output_____
###Markdown
Here, we use the functions defined above to update our table. Read the functions above to understand how it all happened!
###Code
Enmerkar_table = Enmerkar_table.with_columns([
# We create the `terms` column by applying the `term_finder` function defined above to the data in the text column
'terms', Enmerkar_table.apply(term_finder, 'text'),
# We create the `proper_nouns` column from the function `proper_noun_finder` applied to the text column
'proper_nouns', Enmerkar_table.apply(proper_noun_finder, 'text'),
# We create the `speech_articles` column from the function `speech_article_finder` applied to the text column
'speech_articles', Enmerkar_table.apply(speech_article_finder, 'text')
])
Enmerkar_table.show()
###Output
_____no_output_____
###Markdown
Determining Sections Here, we define a helper function which helps us determine which section each line belongs to, based on the line number.
###Code
def partitioning(line_no):
ln = int(''.join(c for c in line_no if c.isdigit()))
if(ln <= 13):
return "1.1"
elif (ln <= 21):
return "1.2"
elif (ln <= 39):
return "2.1.1"
elif (ln <= 51):
return "2.1.2"
elif (ln <= 69):
return "2.1.3"
elif (ln <= 76):
return "2.2.1"
elif (ln <= 90):
return "2.2.2"
elif (ln <= 113):
return "2.2.3"
elif (ln <= 127):
return "2.3.1"
elif (ln <= 132):
return "2.3.2"
elif (ln <= 134):
return "2.3.3"
elif (ln <= 138):
return "3.1.1"
elif (ln <= 149):
return "3.1.2"
elif (ln <= 162):
return "3.1.3"
elif (ln <= 169):
return "3.1.4"
elif (ln <= 184):
return "3.2.1"
elif (ln <= 197):
return "3.2.2"
elif (ln <= 205):
return "3.2.3"
elif (ln <= 210):
return "3.2.4"
elif (ln <= 221):
return "3.2.5"
elif (ln <= 227):
return "4.1"
elif (ln <= 248):
return "4.2.1"
elif (ln <= 254):
return "4.2.2"
elif (ln <= 263):
return "4.2.3"
elif (ln <= 273):
return "4.2.4"
elif (ln <= 280):
return "5.1"
elif (ln <= 283):
return "5.2"
elif (ln <= 310):
return "B"
return "0"
def small_partition(line_no):
ln = int(''.join(c for c in line_no if c.isdigit()))
if(ln <= 13):
return "1.1"
elif (ln <= 21):
return "1.2"
elif (ln <= 69):
return "2.1"
elif (ln <= 113):
return "2.2"
elif (ln <= 134):
return "2.3"
elif (ln <= 169):
return "3.1"
elif (ln <= 221):
return "3.2"
elif (ln <= 227):
return "4.1"
elif (ln <= 273):
return "4.2"
elif (ln <= 280):
return "5.1"
elif (ln <= 283):
return "5.2"
elif (ln <= 310):
return "6"
return "0"
###Output
_____no_output_____
###Markdown
We apply the function defined above on our table.
###Code
# We create the column called `section` by applying our `partion` function
Enmerkar_table.append_column('section', Enmerkar_table.apply(partitioning, 'l_no'))
# We create a smaller table by selected only three columns
Enmerkar_graph = Enmerkar_table.select(['proper_nouns', 'speech_articles', 'section']).group('section', list)
# displays the table
Enmerkar_graph
###Output
_____no_output_____
###Markdown
Understanding Data Notice that in the table above, the columns `proper_nouns list` and `speech_articles list` actually contain a list of lists. We "flatten" this list of lists to create a single list. Run the cell below to see what happens!
###Code
def list_flattening(pn_list):
return [noun for nouns in pn_list for noun in nouns]
# First, we flatten the speech articles column
Enmerkar_graph.append_column('speech articles', Enmerkar_graph.apply(list_flattening, 'speech_articles list'))
# Now, we flatten the proper nouns column
Enmerkar_graph.append_column('proper nouns', Enmerkar_graph.apply(list_flattening, 'proper_nouns list'))
# We discard the old columns
Enmerkar_graph = Enmerkar_graph.drop(['proper_nouns list', 'speech_articles list'])
# and display below!
Enmerkar_graph
###Output
_____no_output_____
###Markdown
Examining Data Here, we create a new table which examines the correlation between speech articles and proper nouns. We define the function partitioner which will split the speech articles and proper nouns lists into individual rows.
###Code
def partitioner (i):
rows = []
section = Enmerkar_graph['section'][i]
speech_articles = Enmerkar_graph['speech articles'][i]
proper_nouns = Enmerkar_graph['proper nouns'][i]
for j in range(len(speech_articles)):
article = speech_articles[j]
proper_noun = proper_nouns[j]
rows.append([section, article, proper_noun])
return rows
Enmerkar_table_section = Table(['section', 'speech articles', 'proper nouns'])
# We run the partitioner on each row in our table
for i in range(Enmerkar_graph.num_rows):
Enmerkar_table_section = Enmerkar_table_section.with_rows(partitioner(i))
Enmerkar_table_section
###Output
_____no_output_____
###Markdown
Grouping by Proper Noun Now, we will determine the frequency of Proper Nouns based on the number of rows they appeared in, in the table above
###Code
proper_noun_by_section = Enmerkar_table_section.pivot('proper nouns', rows = 'section')
name_counts = []
for name in proper_noun_by_section.drop('section').labels:
# Here, we add an entry to our list `name_counts` defined above
# which is a list of the form: ['Proper_noun', # of entries]
name_counts.append([name, np.sum(proper_noun_by_section[name])])
top_7_names = ['Aratta', 'En-suhgir-ana', 'Enmerkar', 'Inana', 'Nisaba', 'Saŋburu', 'Unug']
# We display the sorted list below
sorted(name_counts, key = lambda x: x[1], reverse = True)
###Output
_____no_output_____
###Markdown
Displaying Data Now, we create a line graph which will show us the frequency of the proper nouns "Aratta" and "Unug" in each section.
###Code
names_section_graph = proper_noun_by_section.with_column('section', range(1, proper_noun_by_section.num_rows+1))
aratta_unug_section_graph = names_section_graph.select(['Aratta', 'Unug', 'section']).plot('section')
#notice Aratta is the only one mentioned in the section 4.2.3
###Output
_____no_output_____
###Markdown
We do the same as above, but now for the top 7 proper nouns.
###Code
top_7_names_graph = names_section_graph.select(top_7_names + ['section']).plot('section')
###Output
_____no_output_____
###Markdown
Here, we display all the proper nouns.
###Code
names_section_graph.plot('section')
###Output
_____no_output_____
###Markdown
Plot character arcs by line number
###Code
# Import more functions
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
###Output
_____no_output_____
###Markdown
We define a helper function which counts, for each entry of the `proper_nouns` column, how many times the given `noun` appears in it.
###Code
def noun_counts(noun, proper_nouns):
noun_count = []
# We iterate through the list proper_nouns
for i in np.arange(len(proper_nouns)):
# We count the number of times each entry in `noun` appeared in the proper_nouns entry
noun_count.append(proper_nouns[i].count(noun))
return noun_count
###Output
_____no_output_____
###Markdown
Now, we apply `noun_counts` to the tables we created above.
###Code
# We select only the line number and proper_nouns from our table
names_cumulative_graph = Enmerkar_table.select(['l_no', 'proper_nouns'])
# We create a list of unique proper nouns
unique_nouns = np.sort(list(set(list_flattening(names_cumulative_graph.column('proper_nouns')))))
for i in np.arange(1, len(unique_nouns)+1):
# We select a noun to look at
current_noun = unique_nouns[i-1]
# We add a column to the table `names_cumulative_graph` which gives a running count of the appearance of the proper noun
names_cumulative_graph.append_column(current_noun, np.cumsum(noun_counts(current_noun, names_cumulative_graph.column('proper_nouns'))))
# We drop the column `proper_noun`
names_cumulative_graph = names_cumulative_graph.drop('proper_nouns')
###Output
_____no_output_____
###Markdown
We define more helper functions so we can visualize the data below.
###Code
def plot(name, graph, prefix):
if name != 'None':
line_graph = graph.select([prefix] + [name])
plt.plot(line_graph[0], line_graph[1])
def plot_cumulative_characters(name1, name2, name3, name4):
plot(name1, names_cumulative_graph, 'l_no')
plot(name2, names_cumulative_graph, 'l_no')
plot(name3, names_cumulative_graph, 'l_no')
plot(name4, names_cumulative_graph, 'l_no')
def plot_section_characters(name1, name2, name3, name4):
plot(name1, names_section_graph, 'section')
plot(name2, names_section_graph, 'section')
plot(name3, names_section_graph, 'section')
plot(name4, names_section_graph, 'section')
unique_nouns = tuple(['None'] + list(unique_nouns))
###Output
_____no_output_____
###Markdown
Visualizing Data Here, compare the usage of different proper nouns by their cumulative appearance across the text.
###Code
interact(plot_cumulative_characters, name1=unique_nouns, name2=unique_nouns, name3=unique_nouns, name4=unique_nouns)
###Output
_____no_output_____
###Markdown
Now, compare the nouns by appearance in the various sections
###Code
interact(plot_section_characters, name1=unique_nouns, name2=unique_nouns, name3=unique_nouns, name4=unique_nouns)
###Output
_____no_output_____ |
usr/gre/08/08_strings-student.ipynb | ###Markdown
Strings (string) A string is a sequence of letters. It is delimited by quotation marks: * single * double. Indexing into a string Each element can be accessed with an **index** between square brackets.
###Code
fruit = 'banane'
fruit[0]
fruit[-1]
###Output
_____no_output_____
###Markdown
Indexing starts at zero. Index 1 points to the second letter.
###Code
fruit[1]
###Output
_____no_output_____
###Markdown
We can also use a variable as a string index.
###Code
i = 2
fruit[i]
###Output
_____no_output_____
###Markdown
Here is how to get the letter that comes after the i-th letter.
###Code
fruit[i+1]
###Output
_____no_output_____
###Markdown
String length - len The `len` function returns the length of a string.
###Code
fruit = 'pamplemousse'
n = len(fruit)
n
###Output
_____no_output_____
###Markdown
To get the last letter of the string, use the index `n-1`.
###Code
n = len(fruit)
fruit[n-1]
###Output
_____no_output_____
###Markdown
The index `-1` points to the last letter.
###Code
fruit[-1]
###Output
_____no_output_____
###Markdown
Traversal with a for loop We can use a `while` loop to access each letter, using an index.
###Code
fruit = 'pomme'
length =len(fruit)
print('longueur', length)
i = 0
while i < len(fruit):
print('La lettre ', i,' = ', fruit[i])
i += 1
###Output
longueur 5
La lettre 0 = p
La lettre 1 = o
La lettre 2 = m
La lettre 3 = m
La lettre 4 = e
###Markdown
Another, shorter way is to use a `for` loop. At each iteration, the next character of the string is assigned to `c`.
###Code
i=0
for c in fruit:
i= i +1
print(i, ' = ', c)
###Output
1 = p
2 = o
3 = m
4 = m
5 = e
###Markdown
Here is an example where each letter of the string **prefixes** is combined with the string **suffix** to form a new word.
###Code
prefixes = 'BLMPRST'
suffix = 'ack'
for c in prefixes:
print(c + suffix)
###Output
Back
Lack
Mack
Pack
Rack
Sack
Tack
###Markdown
String slices A subset of characters is called a **slice**. The selection is made with two indices and the `:` notation.
###Code
s = 'Monty Python'
s[0:5]
s[6:12]
###Output
_____no_output_____
###Markdown
If the first or the last index is omitted, the slice extends to the beginning or the end of the string.
###Code
s[:3]
s[3:]
###Output
_____no_output_____
###Markdown
If the two indices are identical, the substring is empty.
###Code
s[3:3]
###Output
_____no_output_____
###Markdown
Strings are immutable. You cannot reassign a slice.
###Code
s[3] = 'k'
###Output
_____no_output_____
###Markdown
If you want to change a letter, you have to build a new string.
###Code
s[:2]+'k'+s[3:]
###Output
_____no_output_____
###Markdown
Searching for a letter What does the `trouver` function do?
###Code
def trouver(mot, lettre):
i = 0
while i < len(mot):
if mot[i] == lettre:
return i
i += 1
return -1
trouver('Monty', 'y')
trouver('Monty', 'x')
###Output
_____no_output_____
###Markdown
If the letter exists in the word, its index is returned. If the letter does not exist, the value -1 is returned. Counting letters The following function counts the occurrences of a letter in a word.
###Code
def count(mot, lettre):
cnt = 0
for c in mot:
if c == lettre:
cnt += 1
return cnt
count('banana', 'a')
count('banana', 'c')
###Output
_____no_output_____
###Markdown
String methods The `string` type has many methods. Methods are attached to string objects with a dot. The `upper` method converts to uppercase, the `lower` method converts to lowercase.
###Code
s.upper()
s.lower()
###Output
_____no_output_____
###Markdown
The `find(c)` method finds the index of the character `c`. The `replace` method replaces one substring with another.
###Code
s.find('y')
s.find('th')
s.replace('n', 'ng')
###Output
_____no_output_____
###Markdown
The in operator The boolean operator `in` returns `True` if a character or a sequence is part of a string.
###Code
'ana' in 'banana'
###Output
_____no_output_____
###Markdown
The `common` function prints the characters that the two strings have in common.
###Code
def common(word1, word2):
for c in word1:
if c in word2:
print(c)
common('pommes', 'oranges')
###Output
_____no_output_____
###Markdown
String comparison The six comparison operators (`<`, `<=`, `==`, `!=`, `>=`, `>`) are also defined for strings.
###Code
def check(a, b):
if a < b:
print(a, 'is before', b)
else:
print(a, 'is after', b)
check('ananas', 'banana')
check('orange', 'banana')
check('Orange', 'banana')
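# Note: comparison is by character code, so every uppercase letter sorts before every
# lowercase letter; that is why 'Orange' < 'banana' evaluates to True.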
###Output
_____no_output_____
###Markdown
In Python, uppercase letters come before lowercase letters. Exercises ex 1 strip and replace The `strip` function removes spaces at the beginning and end of a string.
###Code
' monty '.strip()
'monty'.replace('t', 'ke')
###Output
_____no_output_____
###Markdown
ex 2 count There is a string method called count that is similar to the function from section 8.7. Read the documentation of this method and write an invocation that counts the number of a's in `'banane'`. ex 3 slice indexing A string slice can take a third index that specifies the **step size**; that is, the number of spaces between successive characters. A step size of 2 means every other character; 3 means every third character, and so on.
###Code
s = 'Monty Python'
s[::2]
###Output
_____no_output_____
###Markdown
A step size of `-1` goes through the word backwards, so the slice `[::-1]` generates a reversed string.
###Code
s[::-1]
###Output
_____no_output_____
###Markdown
Use this syntax to write a one-line version of the `is_palindrome` function.
###Code
def is_palindrome(s):
is_palindrome('hannah'), is_palindrome('harry')
###Output
_____no_output_____
###Markdown
ex 4 finding lowercase letters The following functions are all supposed to check whether a string contains at least one lowercase letter, but at least some of them are wrong. For each function, describe what the function actually does (assuming that the parameter is a string). For each of the 5 functions, describe what it really does.
###Code
def minusc1(s):
for c in s:
if c.islower():
return True
else:
return False
def minusc2(s):
for c in s:
if 'c'.islower():
return 'True'
else:
return 'False'
def minusc3(s):
for c in s:
drapeau = c.islower()
return drapeau
def minusc4(s):
drapeau = False
for c in s:
drapeau = drapeau or c.islower()
return drapeau
minusc4('Jim'), minusc4('JIM')
def minusc5(s):
for c in s:
if not c.islower():
return False
return True
###Output
_____no_output_____
###Markdown
ex 5 Caesar cipher A Caesar cipher is a weak form of encryption that involves "rotating" each letter by a fixed number of places. To rotate a letter means to shift its place in the alphabet, wrapping around to the beginning if necessary, so that after a rotation by 3, 'A' becomes 'D' and 'Z' shifted by 1 is 'A'. To rotate a word, shift each of its letters by the same amount. For example, the words *JAPPA* and *ZAPPA*, shifted by four letters, give *NETTE* and *DETTE* respectively, and the word *RAVIER*, shifted by 13 places, gives *ENIVRE* (and vice versa). Likewise, the word *OUI* shifted by 10 places becomes *YES*. In the movie 2001: A Space Odyssey, the ship's computer is called *HAL*, which is *IBM* shifted by -1. Write a function called `rotate_word` that takes a string and an integer as parameters and returns a new string containing the letters of the original string shifted by the given number.
###Code
def rotate_word(s, n):
rotate_word('JAPPA', 4), rotate_word('HAL', 1), rotate_word('RAVIER', 13), rotate_word('OUI', 10)
###Output
_____no_output_____ |
dev_course/dl2/11a_transfer_learning.ipynb | ###Markdown
Serializing the model
###Code
path = datasets.untar_data(datasets.URLs.IMAGEWOOF_160)
size = 128
bs = 64
tfms = [make_rgb, RandomResizedCrop(128,scale=(0.35,1)), np_to_float, PilRandomFlip()]
val_tfms = [make_rgb, CenterCrop(size), np_to_float]
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
ll.valid.x.tfms = val_tfms
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=8)
loss_func = LabelSmoothingCrossEntropy()
opt_func = adam_opt(mom=0.9, mom_sqr=0.99, eps=1e-6, wd=1e-2)
learn = cnn_learner(xresnet18, data, loss_func, opt_func, norm=norm_imagenette)
def sched_1cycle(lr, pct_start=0.3, mom_start=0.95, mom_mid=0.85, mom_end=0.95):
phases = create_phases(pct_start)
sched_lr = combine_scheds(phases, cos_1cycle_anneal(lr/10., lr, lr/1e5))
sched_mom = combine_scheds(phases, cos_1cycle_anneal(mom_start, mom_mid, mom_end))
return [ParamScheduler('lr', sched_lr),
ParamScheduler('mom', sched_mom)]
lr = 3e-3
pct_start = 0.5
cbsched = sched_1cycle(lr, pct_start)
learn.fit(40, cbsched)
st = learn.model.state_dict()
type(st)
', '.join(st.keys())
st['10.bias']
mdl_path = path/'models'
mdl_path.mkdir(exist_ok=True)
###Output
_____no_output_____
###Markdown
It's also possible to save the whole model, including the architecture, but it gets quite fiddly and we don't recommend it. Instead, just save the parameters, and recreate the model directly.
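Once the state dict is saved below, the load side of that pattern (mirroring what the Pets section later in this notebook does) is simply:

    learn = cnn_learner(xresnet18, data, loss_func, opt_func, norm=norm_imagenette)
    learn.model.load_state_dict(torch.load(mdl_path/'iw5'))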
###Code
torch.save(st, mdl_path/'iw5')
###Output
_____no_output_____
###Markdown
Pets
###Code
pets = datasets.untar_data(datasets.URLs.PETS)
pets.ls()
pets_path = pets/'images'
tfms = [make_rgb, RandomResizedCrop(128,scale=(0.35,1)), np_to_float, PilRandomFlip()]
il = ImageList.from_files(pets_path, tfms=tfms)
il
#export
def random_splitter(fn, p_valid): return random.random() < p_valid
random.seed(42)
sd = SplitData.split_by_func(il, partial(random_splitter, p_valid=0.1))
sd
n = il.items[0].name
re.findall(r'^(.*)_\d+.jpg$', n)[0]
def pet_labeler(fn): return re.findall(r'^(.*)_\d+.jpg$', fn.name)[0]
proc = CategoryProcessor()
ll = label_by_func(sd, pet_labeler, proc_y=proc)
', '.join(proc.vocab)
ll.valid.x.tfms = val_tfms
c_out = len(proc.vocab)
data = ll.to_databunch(bs, c_in=3, c_out=c_out, num_workers=8)
learn = cnn_learner(xresnet18, data, loss_func, opt_func, norm=norm_imagenette)
learn.fit(5, cbsched)
###Output
_____no_output_____
###Markdown
Custom head
###Code
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
st = torch.load(mdl_path/'iw5')
m = learn.model
m.load_state_dict(st)
cut = next(i for i,o in enumerate(m.children()) if isinstance(o,nn.AdaptiveAvgPool2d))
m_cut = m[:cut]
xb,yb = get_batch(data.valid_dl, learn)
pred = m_cut(xb)
pred.shape
ni = pred.shape[1]
#export
class AdaptiveConcatPool2d(nn.Module):
def __init__(self, sz=1):
super().__init__()
self.output_size = sz
self.ap = nn.AdaptiveAvgPool2d(sz)
self.mp = nn.AdaptiveMaxPool2d(sz)
def forward(self, x): return torch.cat([self.mp(x), self.ap(x)], 1)
nh = 40
m_new = nn.Sequential(
m_cut, AdaptiveConcatPool2d(), Flatten(),
nn.Linear(ni*2, data.c_out))
learn.model = m_new
learn.fit(5, cbsched)
###Output
_____no_output_____
###Markdown
adapt_model
###Code
def adapt_model(learn, data):
cut = next(i for i,o in enumerate(learn.model.children())
if isinstance(o,nn.AdaptiveAvgPool2d))
m_cut = learn.model[:cut]
xb,yb = get_batch(data.valid_dl, learn)
pred = m_cut(xb)
ni = pred.shape[1]
m_new = nn.Sequential(
m_cut, AdaptiveConcatPool2d(), Flatten(),
nn.Linear(ni*2, data.c_out))
learn.model = m_new
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
for p in learn.model[0].parameters(): p.requires_grad_(False)
learn.fit(3, sched_1cycle(1e-2, 0.5))
for p in learn.model[0].parameters(): p.requires_grad_(True)  # unfreeze the body before fine-tuning the whole model
learn.fit(5, cbsched, reset_opt=True)
###Output
_____no_output_____
###Markdown
Batch norm transfer
###Code
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
def apply_mod(m, f):
f(m)
for l in m.children(): apply_mod(l, f)
def set_grad(m, b):
if isinstance(m, (nn.Linear,nn.BatchNorm2d)): return
if hasattr(m, 'weight'):
for p in m.parameters(): p.requires_grad_(b)
apply_mod(learn.model, partial(set_grad, b=False))
learn.fit(3, sched_1cycle(1e-2, 0.5))
apply_mod(learn.model, partial(set_grad, b=True))
learn.fit(5, cbsched, reset_opt=True)
###Output
_____no_output_____
###Markdown
Pytorch already has an `apply` method we can use:
###Code
learn.model.apply(partial(set_grad, b=False));
###Output
_____no_output_____
###Markdown
Discriminative LR and param groups
###Code
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
def bn_splitter(m):
def _bn_splitter(l, g1, g2):
if isinstance(l, (nn.Linear,nn.BatchNorm2d)): g2 += l.parameters()
elif hasattr(l, 'weight'): g1 += l.parameters()
for ll in l.children(): _bn_splitter(ll, g1, g2)
g1,g2 = [],[]
_bn_splitter(m[0], g1, g2)
g2 += m[1:].parameters()
return g1,g2
a,b = bn_splitter(learn.model)
test_eq(len(a)+len(b), len(list(m.parameters())))
Learner.ALL_CBS
#export
from types import SimpleNamespace
cb_types = SimpleNamespace(**{o:o for o in Learner.ALL_CBS})
cb_types.after_backward
#export
class DebugCallback(Callback):
_order = 999
def __init__(self, cb_name, f=None): self.cb_name,self.f = cb_name,f
def __call__(self, cb_name):
if cb_name==self.cb_name:
if self.f: self.f(self.run)
else: set_trace()
#export
def sched_1cycle(lrs, pct_start=0.3, mom_start=0.95, mom_mid=0.85, mom_end=0.95):
phases = create_phases(pct_start)
sched_lr = [combine_scheds(phases, cos_1cycle_anneal(lr/10., lr, lr/1e5))
for lr in lrs]
sched_mom = combine_scheds(phases, cos_1cycle_anneal(mom_start, mom_mid, mom_end))
return [ParamScheduler('lr', sched_lr),
ParamScheduler('mom', sched_mom)]
disc_lr_sched = sched_1cycle([0,3e-2], 0.5)
learn = cnn_learner(xresnet18, data, loss_func, opt_func,
c_out=10, norm=norm_imagenette, splitter=bn_splitter)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
def _print_det(o):
print (len(o.opt.param_groups), o.opt.hypers)
raise CancelTrainException()
learn.fit(1, disc_lr_sched + [DebugCallback(cb_types.after_batch, _print_det)])
learn.fit(3, disc_lr_sched)
disc_lr_sched = sched_1cycle([1e-3,1e-2], 0.3)
learn.fit(5, disc_lr_sched)
###Output
_____no_output_____
###Markdown
Export
###Code
!./notebook2script.py 11a_transfer_learning.ipynb
###Output
Converted 11a_transfer_learning.ipynb to exp/nb_11a.py
|
Django2.0-Python-Full-Stack-Web-Developer/TravlersDayOneandTwo/2- Python Overview/2 - Statements Methods Functions/Functions.ipynb | ###Markdown
FunctionsIntroduction to FunctionsThis lecture will consist of explaining what a function is in Python and how to create one. Functions will be one of our main building blocks when we construct larger and larger amounts of code to solve problems.**So what is a function?**Formally, a function is a useful device that groups together a set of statements so they can be run more than once. They can also let us specify parameters that can serve as inputs to the functions.On a more fundamental level, functions allow us to not have to repeatedly write the same code again and again. If you remember back to the lessons on strings and lists, remember that we used a function len() to get the length of a string. Since checking the length of a sequence is a common task, you would want to write a function that can do this repeatedly on command.Functions will be one of the most basic levels of reusing code in Python, and they will also allow us to start thinking about program design (we will dive much deeper into the ideas of design when we learn about Object Oriented Programming). def StatementsLet's see how to build out a function's syntax in Python. It has the following form:
###Code
def name_of_function(arg1,arg2):
'''
This is where the function's Document String (docstring) goes
'''
# Do stuff here
#return desired result
###Output
_____no_output_____
###Markdown
We begin with def, then a space, followed by the name of the function. Try to keep names relevant, for example len() is a good name for a length() function. Also be careful with names, you wouldn't want to call a function the same name as a [built-in function in Python](https://docs.python.org/2/library/functions.html) (such as len).Next comes a pair of parentheses with a number of arguments separated by a comma. These arguments are the inputs for your function. You'll be able to use these inputs in your function and reference them. After this you put a colon.Now here is the important step: you must indent to begin the code inside your function correctly. Python makes use of *whitespace* to organize code. Lots of other programming languages do not do this, so keep that in mind.Next you'll see the docstring, which is where you write a basic description of the function. Using iPython and iPython Notebooks, you'll be able to read these docstrings by pressing Shift+Tab after a function name. Docstrings are not necessary for simple functions, but it's good practice to put them in so you or other people can easily understand the code you write.After all this you begin writing the code you wish to execute.The best way to learn functions is by going through examples. So let's try to go through examples that relate back to the various objects and data structures we learned about before. Example 1: A simple print 'hello' function
###Code
def say_hello():
print 'hello'
###Output
_____no_output_____
###Markdown
Call the function
###Code
say_hello()
###Output
hello
###Markdown
Example 2: A simple greeting functionLet's write a function that greets people with their name.
###Code
def greeting(name):
print 'Hello %s' %name
greeting('Jose')
###Output
Hello Jose
###Markdown
Using returnLet's see some examples that use a return statement. return allows a function to *return* a result that can then be stored as a variable, or used in whatever manner a user wants. Example 3: Addition function
###Code
def add_num(num1,num2):
return num1+num2
add_num(4,5)
# Can also save as variable due to return
result = add_num(4,5)
print result
###Output
9
###Markdown
What happens if we input two strings?
###Code
print add_num('one','two')
###Output
onetwo
###Markdown
Note that because we don't declare variable types in Python, this function could be used to add numbers or sequences together! We'll later learn about adding in checks to make sure a user puts the correct arguments into a function. Let's also start using *break*, *continue*, and *pass* statements in our code. We introduced these during the while lecture. Finally, let's go over a full example of creating a function to check if a number is prime (a common interview exercise). We know a number is prime if that number is only evenly divisible by 1 and itself. Let's write our first version of the function to check all the numbers from 1 to N and perform modulo checks.
###Code
def is_prime(num):
'''
Naive method of checking for primes.
'''
for n in range(2,num):
if num % n == 0:
print 'not prime'
break
else: # If never mod zero, then prime
print 'prime'
is_prime(16)
###Output
not prime
###Markdown
Note how we break the code after the print statement! We can actually improve this by only checking up to the square root of the target number, and we can disregard all even numbers after checking for 2. We'll also switch to returning a boolean value to get an example of using return statements:
###Code
import math
def is_prime(num):
'''
Better method of checking for primes.
'''
if num % 2 == 0 and num > 2:
return False
for i in range(3, int(math.sqrt(num)) + 1, 2):
if num % i == 0:
return False
return True
is_prime(14)
###Output
_____no_output_____ |
code/1_Sequence_to_Sequence_Learning_with_Neural_Networks.ipynb | ###Markdown
1 - Sequence to Sequence Learning with Neural Networks - In this Seq2Seq series we build machine learning models that convert one `seq` into another `seq` using PyTorch and torchtext. - In tutorial 1 we train a translation model that translates `German` into `English`. Besides translation, Seq2Seq models are also used for text summarization, STT (Speech to Text), and more. - In this notebook we implement a simple version of the model from Google's [Sequence to Sequence Learning with Neural Networks](https://arxiv.org/abs/1409.3215) paper. This paper was the first to apply the Seq2Seq idea to neural machine translation and is a very important paper in NLP, so it is well worth reading at least once :) - The paper that first proposed the Seq2Seq idea is [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation (2014)](https://arxiv.org/abs/1406.1078). You can learn more about it in [this post](https://happy-jihye.github.io/nlp/2_Learning_Phrase_Representations_using_RNN_Encoder_Decoder_for_Statistical_Machine_Translation/).> 2021/03/26 Happy-jihye 🌺> > **Reference** : [pytorch-seq2seq/1 - Sequence to Sequence Learning with Neural Networks](https://github.com/bentrevett/pytorch-seq2seq)--- Seq2Seq - The most common Seq2Seq model is the `encoder-decoder` model: the input sentence is encoded into a single vector by an RNN, and that single vector is then decoded by another RNN network. - This single vector is also called the **context vector** and can be thought of as an abstract representation of the entire input sentence.**Encoder** - The image above shows a typical translation example: the source sentence "guten morgen" passes through the yellow `embedding layer` and into the green `encoder`. - The `<sos>` token stands for *start of sequence* and the `<eos>` token for *end of sequence*; they mark the beginning and end of a sentence. - The encoder RNN takes the previous time step's hidden state and the current time step's embedding as input. As an equation: $h_t = \text{EncoderRNN}(e(x_t), h_{t-1})$ - Here the input sentence is written as $X = \{x_1, x_2, ..., x_T\}$, where $x_1$ is `<sos>` and $x_2$ is `guten`. - The initial hidden state $h_0$ is either all zeros or an initialized learned parameter. - For the RNN, architectures such as an LSTM (Long Short-Term Memory) or a GRU (Gated Recurrent Unit) can be used.**context vector** - Once the final word $x_T$, `<eos>`, has passed through the embedding layer into the RNN, we obtain the final hidden state $h_T$, which we call the context vector. - The context vector represents the whole sentence, and we write $h_T = z$.**Decoder** - Now we have to decode the context vector $z$ into the output/target sentence. For this, `<sos>` and `<eos>` tokens are added to the beginning and end of the sentence. - The decoding step is: $s_t = \text{DecoderRNN}(d(y_t), s_{t-1})$ - Here $d(y_t)$ is the embedding of the current word $y$, and the context vector $z = h_T$ is used as the first hidden state $s_0$. - We obtain a prediction by passing the decoder hidden state $s_t$ through the purple `Linear layer`: $\hat{y}_t = f(s_t)$ - The decoder generates one word per time step, in order; decoding stops once the `<eos>` token is produced. - The predictions $\hat{Y} = \{ \hat{y}_1, \hat{y}_2, ..., \hat{y}_T \}$ are compared with the actual target sentence $Y = \{ y_1, y_2, ..., y_T \}$ to compute the accuracy. 1. Preparing Data
###Code
!apt install python3.7
!pip install -U torchtext==0.6.0
!python -m spacy download en
!python -m spacy download de
import torch
import torch.nn as nn
import torch.optim as optim
from torchtext.datasets import Multi30k
from torchtext.data import Field, BucketIterator
import spacy
import numpy as np
import random
import math
import time
SEED = 1234
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
###Output
_____no_output_____
###Markdown
**Tokenizers** - Tokenizers are used to convert a sentence into its individual tokens. - e.g. "good morning!" becomes ["good", "morning", "!"] - We will tokenize with `spaCy`, a Python package that makes NLP easier.
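As a quick sanity check (a sketch assuming `spacy_en` is the English pipeline loaded in the next cell), the tokenization of the example above would look like:

```python
# Split "good morning!" into word-level tokens with spaCy's tokenizer.
tokens = [tok.text for tok in spacy_en.tokenizer("good morning!")]
print(tokens)  # ['good', 'morning', '!']
```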
###Code
spacy_de = spacy.load('de')
spacy_en = spacy.load('en')
###Output
_____no_output_____
###Markdown
**Reversing the order of the words** The paper reports that reversing the order of the source words makes optimization easier and improves performance. To do this, the source sentence (`German`) is tokenized and then stored in the list in reverse order.
###Code
def tokenize_de(text):
return [tok.text for tok in spacy_de.tokenizer(text)][::-1]
def tokenize_en(text):
return [tok.text for tok in spacy_en.tokenizer(text)]
###Output
_____no_output_____
###Markdown
Next, we use the **Field** class to define how the data should be processed.
###Code
SRC = Field(tokenize= tokenize_de,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
TRG = Field(tokenize= tokenize_en,
init_token = '<sos>',
eos_token = '<eos>',
lower = True)
###Output
_____no_output_____
###Markdown
- We use the [Multi30k dataset](https://github.com/multi30k/dataset), which contains about 30,000 English, German and French sentences, each around 12 words long. - `exts` specifies which languages to use as the source and the target.
###Code
train_data, valid_data, test_data = Multi30k.splits(exts= ('.de', '.en'),
fields = (SRC, TRG))
print(f"Number of training examples: {len(train_data.examples)}")
print(f"Number of validation examples: {len(valid_data.examples)}")
print(f"Number of testing examples: {len(test_data.examples)}")
###Output
Number of training examples: 29000
Number of validation examples: 1014
Number of testing examples: 1000
###Markdown
- Printing the data confirms that the source sentences are stored in reverse order.
###Code
print(len(vars(train_data.examples[0])['src']))
print(len(vars(train_data.examples[1])['src']))
print(vars(train_data.examples[0]))
print(vars(train_data.examples[1]))
###Output
13
8
{'src': ['.', 'büsche', 'vieler', 'nähe', 'der', 'in', 'freien', 'im', 'sind', 'männer', 'weiße', 'junge', 'zwei'], 'trg': ['two', 'young', ',', 'white', 'males', 'are', 'outside', 'near', 'many', 'bushes', '.']}
{'src': ['.', 'antriebsradsystem', 'ein', 'bedienen', 'schutzhelmen', 'mit', 'männer', 'mehrere'], 'trg': ['several', 'men', 'in', 'hard', 'hats', 'are', 'operating', 'a', 'giant', 'pulley', 'system', '.']}
###Markdown
Build Vocabulary - The `build_vocab` function indexes each token. Note that the source and target vocabularies are different. - With `min_freq = 2`, only words that appear at least twice are added to the vocabulary; words that appear only once are converted to the `<unk>` token. - The vocabulary must be built from the **training set** only *(never from the validation/test set!)*.
###Code
SRC.build_vocab(train_data, min_freq = 2)
TRG.build_vocab(train_data, min_freq = 2)
print(f"Unique tokens in source (de) vocabulary: {len(SRC.vocab)}")
print(f"Unique tokens in target (en) vocabulary: {len(TRG.vocab)}")
###Output
Unique tokens in source (de) vocabulary: 7855
Unique tokens in target (en) vocabulary: 5893
###Markdown
Create the iterators - `BucketIterator` groups the tokens into batches of the given batch size and converts the human-readable tokens into their vocabulary indices.
###Code
# for using GPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(train_data)
BATCH_SIZE = 128
train_iterator, valid_iterator, test_iterator = BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device
)
###Output
_____no_output_____
###Markdown
- The example below prints the first batch to get a sense of what the batch size means. When the data is batched with `BucketIterator`, each batch is a tensor of shape [sequence length, batch size], and the number of such tensors equals the size of train_data divided by the batch size. - In this example we get a total of 227 batches of size 128. - Within a batch, the `sequence length` is determined by the longest sentence in that batch; shorter sentences have the remaining positions filled with the `<pad>` token.
###Code
print(TRG.vocab.stoi[TRG.pad_token]) # index of the <pad> token = 1
for i, batch in enumerate(train_iterator):
src = batch.src
trg = batch.trg
src = src.transpose(1,0)
print(f"첫 번째 배치의 text 크기: {src.shape}")
print(src[0])
print(src[1])
break
print(len(train_iterator))
print(len(train_iterator)*128)
###Output
1
첫 번째 배치의 text 크기: torch.Size([128, 31])
tensor([ 2, 4, 4334, 14, 22, 69, 25, 66, 5, 3, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1], device='cuda:0')
tensor([ 2, 4, 1700, 118, 254, 23, 443, 10, 589, 0, 18, 98,
60, 16, 8, 3, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1], device='cuda:0')
torch.Size([128])
227
29056
###Markdown
Building the Seq2Seq Model Encoder - The encoder consists of 2 LSTM layers. (The paper uses 4 layers, but this tutorial uses 2 to keep training time down.) - In a plain RNN the first layer's hidden state is $h_t^1 = \text{EncoderRNN}^1(e(x_t), h_{t-1}^1)$ and the second layer's is $h_t^2 = \text{EncoderRNN}^2(h_t^1, h_{t-1}^2)$; an LSTM additionally takes a `cell state` $c_t$ as input. - The multi-layer LSTM equations are therefore: $(h_t^1, c_t^1) = \text{EncoderLSTM}^1(e(x_t), (h_{t-1}^1, c_{t-1}^1))$ $(h_t^2, c_t^2) = \text{EncoderLSTM}^2(h_t^1, (h_{t-1}^2, c_{t-1}^2))$ - The RNN architecture is described in detail in [this post](https://happy-jihye.github.io/nlp/2_Updated_Sentiment_Analysis/lstm-long-short-term-memory).
###Code
class Encoder(nn.Module):
def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(input_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.dropout = nn.Dropout(dropout)
def forward(self, src):
# src = [src len, batch size]
embedded = self.dropout(self.embedding(src))
# embedded = [src len, batch size, emb dim]
outputs, (hidden, cell) = self.rnn(embedded)
# hidden = [n layers * n directions, batch size, hid dim]
# cell = [n layer * n directions, batch size, hid dim]
# outputs = [src len, batch size, hid dim * n directions]
## the outputs always come from the top hidden layer
return hidden, cell
###Output
_____no_output_____
###Markdown
Decoder - Like the encoder, the decoder uses 2 LSTM layers (the paper uses 4). - The decoder layers are given by: $(s_t^1, c_t^1) = \text{DecoderLSTM}^1(d(y_t), (s_{t-1}^1, c_{t-1}^1))\\ (s_t^2, c_t^2) = \text{DecoderLSTM}^2(s_t^1, (s_{t-1}^2, c_{t-1}^2))$
###Code
class Decoder(nn.Module):
def __init__(self, output_dim, emb_dim, hid_dim, n_layers, dropout):
super().__init__()
self.output_dim = output_dim
self.hid_dim = hid_dim
self.n_layers = n_layers
self.embedding = nn.Embedding(output_dim, emb_dim)
self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout = dropout)
self.fc_out = nn.Linear(hid_dim, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, input, hidden, cell):
# input = [batch size]
## only one token is decoded at a time, so the input token length in forward is 1
# hidden = [n layers * n directions, batch size, hid dim]
# cell = [n layers * n directions, batch size, hid dim]
# n directions in the decoder will both always be 1, therefore:
# hidden = [n layers, batch size, hid dim]
# context = [n layers, batch size, hid dim]
input = input.unsqueeze(0)
# unsqueeze the input along dimension 0 to add a sentence-length dimension of 1
# input = [1, batch size]
embedded = self.dropout(self.embedding(input))
# apply dropout after passing through the embedding layer
# embedded = [1, batch size, emb dim]
output, (hidden, cell) = self.rnn(embedded, (hidden, cell))
# output = [seq len, batch size, hid dim * n directions]
# hidden = [n layers * n directions, batch size, hid dim]
# cell = [n layers * n directions, batch size, hid dim]
# seq len and n directions will always be 1 in the decoder, therefore:
# output = [1, batch size, hid dim]
# hidden = [n layers, batch size, hid dim]
# cell = [n layers, batch size, hid dim]
prediction = self.fc_out(output.squeeze(0))
#prediction = [batch size, output dim]
return prediction, hidden, cell
###Output
_____no_output_____
###Markdown
Seq2Seq To summarize, the seq2seq model works as follows: - the source (input) sentence is fed into the encoder; - the encoder is trained to produce a fixed-size context vector; - the context vector is passed to the decoder to generate the predicted target (output) sentence. - In this tutorial the encoder and decoder use the same number of layers and the same hidden/cell dimensions. This is not strictly required, but using different numbers of layers or dimensions raises extra questions to think about - e.g. if the encoder has 2 layers and the decoder has 1, should the average of the context vectors be passed to the decoder? - The tensors of the target sentence and the output sentence are as follows. **Teacher Forcing** - Teacher forcing is the idea of using the actual target output as the next input to the decoder, instead of the decoder's own prediction ([reference](https://tutorials.pytorch.kr/intermediate/seq2seq_translation_tutorial.html)). In other words, feeding the `target word` (ground truth) as the decoder's next input enables more accurate predictions during training. - [reference 2](https://blog.naver.com/PostView.nhn?blogId=sooftware&logNo=221790750668&categoryNo=0&parentCategoryNo=0&viewDate=&currentPage=1&postListTopCurrentPage=1&from=postView)
###Code
class Seq2Seq(nn.Module):
def __init__(self, encoder, decoder, device):
super().__init__()
self.encoder = encoder
self.decoder = decoder
self.device = device
assert encoder.hid_dim == decoder.hid_dim, \
"Hidden dimensions of encoder and decoder must be equal!"
assert encoder.n_layers == decoder.n_layers, \
"Encoder and decoder must have equal number of layers!"
def forward(self, src, trg, teacher_forcing_ratio = 0.5):
#src = [src len, batch size]
#trg = [trg len, batch size]
#teacher_forcing_ratio is probability to use teacher forcing
#e.g. if teacher_forcing_ratio is 0.75 we use ground-truth inputs 75% of the time
batch_size = trg.shape[1]
trg_len = trg.shape[0]
trg_vocab_size = self.decoder.output_dim
# create a tensor to store the decoder outputs (initially all zeros)
outputs = torch.zeros(trg_len, batch_size, trg_vocab_size).to(self.device)
# feed the src sentence into the encoder to get the hidden and cell states
hidden, cell = self.encoder(src)
# the first input to the decoder
# the first input is always the <sos> token
# trg[0,:].shape = BATCH_SIZE
input = trg[0,:]
'''batch_size tokens are processed independently at each step:
the loop runs trg_len times in total, and a sentence is only fully decoded once the loop finishes;
each iteration decodes the current token of all 128 sentences in the batch together'''
for t in range(1, trg_len):
# feed the input token embedding and the previous hidden/cell states into the decoder
# it returns new hidden/cell states and the predicted output
output, hidden, cell = self.decoder(input, hidden, cell)
#output = [batch size, output dim]
# store each prediction in the outputs tensor
outputs[t] = output
# decide if we are going to use teacher forcing or not
teacher_force = random.random() < teacher_forcing_ratio
# take the best predicted token from the predictions
# argmax over dimension 1 keeps only the highest-scoring index, so that dimension disappears
top1 = output.argmax(1)
# top1 = [batch size]
# if teacher forcing is used, feed the target as the next input
# otherwise, use the predicted output from the previous step as the next input
input = trg[t] if teacher_force else top1
return outputs
###Output
_____no_output_____
###Markdown
Training the Seq2Seq Model
###Code
INPUT_DIM = len(SRC.vocab)
OUTPUT_DIM = len(TRG.vocab)
ENC_EMB_DIM = 256
DEC_EMB_DIM = 256
HID_DIM = 512
N_LAYERS = 2
ENC_DROPOUT = 0.5
DEC_DROPOUT = 0.5
enc = Encoder(INPUT_DIM, ENC_EMB_DIM, HID_DIM, N_LAYERS, ENC_DROPOUT)
dec = Decoder(OUTPUT_DIM, DEC_EMB_DIM, HID_DIM, N_LAYERS, DEC_DROPOUT)
model = Seq2Seq(enc, dec, device).to(device)
###Output
_____no_output_____
###Markdown
- The initial weights are drawn from the uniform distribution $\mathcal{U}(-0.08, 0.08)$.
###Code
def init_weights(m):
for name, param in m.named_parameters():
nn.init.uniform_(param.data, -0.08, 0.08)
model.apply(init_weights)
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
###Output
The model has 13,899,013 trainable parameters
###Markdown
- We use `Adam` as the optimizer and `CrossEntropyLoss` as the loss function, configured so that no loss is computed on the `<pad>` token.
###Code
optimizer = optim.Adam(model.parameters())
TRG_PAD_IDX = TRG.vocab.stoi[TRG.pad_token]
criterion = nn.CrossEntropyLoss(ignore_index = TRG_PAD_IDX)
###Output
_____no_output_____
###Markdown
Training
###Code
def train(model, iterator, optimizer, criterion, clip):
model.train()
epoch_loss = 0
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
optimizer.zero_grad()
output = model(src, trg)
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), clip)
optimizer.step()
epoch_loss += loss.item()
return epoch_loss / len(iterator)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
def evaluate(model, iterator, criterion):
model.eval()
epoch_loss = 0
with torch.no_grad():
for i, batch in enumerate(iterator):
src = batch.src
trg = batch.trg
output = model(src, trg, 0) #turn off teacher forcing
#trg = [trg len, batch size]
#output = [trg len, batch size, output dim]
output_dim = output.shape[-1]
output = output[1:].view(-1, output_dim)
trg = trg[1:].view(-1)
#trg = [(trg len - 1) * batch size]
#output = [(trg len - 1) * batch size, output dim]
loss = criterion(output, trg)
epoch_loss += loss.item()
return epoch_loss / len(iterator)
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
###Output
_____no_output_____
###Markdown
Train the model through multiple epochs
###Code
N_EPOCHS = 10
CLIP = 1
best_valid_loss = float('inf')
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss = train(model, train_iterator, optimizer, criterion, CLIP)
valid_loss = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tut1-model.pt')
print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
model.load_state_dict(torch.load('tut1-model.pt'))
test_loss = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test PPL: {math.exp(test_loss):7.3f} |')
###Output
_____no_output_____ |
EDA/Get_dataset_for genus_mushpy_20210626.ipynb | ###Markdown
Load the 5 families dataset
###Code
csv_5fam = "/Users/Adrien/DataScientist/projet_Mushroom/reduced_dataset_5_families_with_genus.csv"
df = pd.read_csv(csv_5fam)
df.head()
###Output
_____no_output_____
###Markdown
Extract each family
###Code
df_label0 = df[df["family"] == 'Inocybaceae']
df_label0 = df_label0.reset_index(drop=True)
df_label1 = df[df["family"] == 'Omphalotaceae']
df_label1 = df_label1.reset_index(drop=True)
df_label2 = df[df["family"] == 'Fomitopsidaceae']
df_label2 = df_label2.reset_index(drop=True)
df_label3 = df[df["family"] == 'Physalacriaceae']
df_label3 = df_label3.reset_index(drop=True)
df_label4 = df[df["family"] == 'Marasmiaceae']
df_label4 = df_label4.reset_index(drop=True)
print("famille 1:", df_label0.shape)
print("famille 2:", df_label1.shape)
print("famille 3:", df_label2.shape)
print("famille 4:", df_label3.shape)
print("famille 5:", df_label4.shape)
###Output
famille 1: (3914, 6)
famille 2: (3816, 6)
famille 3: (3197, 6)
famille 4: (3168, 6)
famille 5: (3042, 6)
###Markdown
Family 1: a cell to keep only the genera that have more than 100 images! - We also take the opportunity to reset the index to 0, - rename the label column to label_family (avoids confusion)! - and adjust the label so that it is specific to each genus! Warning! run this cell only once! Function to get the genera with more than 100 images --> get_gender(dataframe). Function to update the dataframes --> update_df_label(df, liste1)
###Code
## Let's look at the genera that have more than 100 images!
def get_gender(dataframe):
df_genus = dataframe.groupby(dataframe['genus']).size().sort_values(ascending = False)
df_genus = df_genus[dataframe.groupby(dataframe['genus']).size().sort_values(ascending = False) > 100]
#display(df_genus)
liste1 = []
for x in range(len(df_genus)):
liste1.append(df_genus.index[x])
return liste1
def update_df_label(df, liste1):
df = df[df['genus'].isin(liste1)]
df = df.reset_index(drop=True)
df = df.rename(columns={'label': 'label_family'})
df['label'] = df['genus'].replace(liste1,list(range(len(liste1))))
return df
###Output
_____no_output_____
###Markdown
We now work with the functions defined above:
###Code
## Famille 1 --> df_label0
print("la shape originale est de :", df_label0.shape)
liste1 = get_gender(df_label0)
df_label0 = update_df_label(df_label0, liste1)
print("vérification de la suppression de lignes :", df_label0.shape, "\n")
## Famille 2 --> df_label1
print("la shape originale est de :", df_label1.shape)
liste1 = get_gender(df_label1)
df_label1 = update_df_label(df_label1, liste1)
print("vérification de la suppression de lignes :", df_label1.shape, "\n")
## Famille 3 --> df_label2
print("la shape originale est de :", df_label2.shape)
liste1 = get_gender(df_label2)
df_label2 = update_df_label(df_label2, liste1)
print("vérification de la suppression de lignes :", df_label2.shape, "\n")
## Famille 4 --> df_label3
print("la shape originale est de :", df_label3.shape)
liste1 = get_gender(df_label3)
df_label3 = update_df_label(df_label3, liste1)
print("vérification de la suppression de lignes :", df_label3.shape, "\n")
## Famille 5 --> df_label4
print("la shape originale est de :", df_label4.shape)
liste1 = get_gender(df_label4)
df_label4 = update_df_label(df_label4, liste1)
print("vérification de la suppression de lignes :", df_label4.shape, "\n")
###Output
la shape originale est de : (3914, 6)
vérification de la suppression de lignes : (3823, 7)
la shape originale est de : (3816, 6)
vérification de la suppression de lignes : (3483, 7)
la shape originale est de : (3197, 6)
vérification de la suppression de lignes : (3006, 7)
la shape originale est de : (3168, 6)
vérification de la suppression de lignes : (2873, 7)
la shape originale est de : (3042, 6)
vérification de la suppression de lignes : (2570, 7)
###Markdown
Save CSV files with the appropriate datasets
###Code
df_label0.to_csv('dataset_label0_Inocybaceae.csv', index = False)
df_label1.to_csv('dataset_label1_Omphalotaceae.csv', index = False)
df_label2.to_csv('dataset_label2_Fomitopsidaceae.csv', index = False)
df_label3.to_csv('dataset_label3_Physalacriaceae.csv', index = False)
df_label4.to_csv('dataset_label4_Marasmiaceae.csv', index = False)
###Output
_____no_output_____
###Markdown
Load files if needed
###Code
csv_label0 = "/Users/Adrien/DataScientist/projet_Mushroom/dataset_label0_Inocybaceae.csv"
csv_label1 = "/Users/Adrien/DataScientist/projet_Mushroom/dataset_label1_Omphalotaceae.csv"
csv_label2 = "/Users/Adrien/DataScientist/projet_Mushroom/dataset_label2_Fomitopsidaceae.csv"
csv_label3 = "/Users/Adrien/DataScientist/projet_Mushroom/dataset_label3_Physalacriaceae.csv"
csv_label4 = "/Users/Adrien/DataScientist/projet_Mushroom/dataset_label4_Marasmiaceae.csv"
df_label0 = pd.read_csv(csv_label0)
df_label1 = pd.read_csv(csv_label1)
df_label2 = pd.read_csv(csv_label2)
df_label3 = pd.read_csv(csv_label3)
df_label4 = pd.read_csv(csv_label4)
###Output
_____no_output_____
###Markdown
Plot the genus distribution
###Code
plt.figure(figsize=(18, 26))
plt.subplots_adjust(wspace=0.2, hspace=0.3)
plt.subplot(321)
sns.countplot(x= "genus", data=df_label0)
plt.title("Famille Inocybaceae (0)")
plt.xticks(rotation = 60);
plt.subplot(322)
sns.countplot(x= "genus", data=df_label1)
plt.title("Famille Omphalotaceae (1)")
plt.xticks(rotation = 60);
plt.subplot(323)
sns.countplot(x= "genus", data=df_label2)
plt.title("Famille Fomitopsidaceae (2)")
plt.xticks(rotation = 60);
plt.subplot(324)
sns.countplot(x= "genus", data=df_label3)
plt.title("Famille Physalacriaceae (3)")
plt.xticks(rotation = 60);
plt.subplot(325)
sns.countplot(x= "genus", data=df_label4)
plt.title("Famille Marasmiaceae (4)")
plt.xticks(rotation = 60);
liste1 = get_gender(df_label0)
print("label0 :", liste1)
liste1 = get_gender(df_label1)
print("label1 :", liste1)
liste1 = get_gender(df_label2)
print("label2 :", liste1)
liste1 = get_gender(df_label3)
print("label3 :", liste1)
liste1 = get_gender(df_label4)
print("label4 :", liste1)
###Output
label0 : ['Inocybe', 'Crepidotus', 'Simocybe', 'Flammulaster']
label1 : ['Gymnopus', 'Omphalotus', 'Rhodocollybia', 'Marasmiellus']
label2 : ['Fomitopsis', 'Laetiporus', 'Phaeolus', 'Postia', 'Ischnoderma', 'Piptoporus', 'Daedalea', 'Antrodia']
label3 : ['Armillaria', 'Hymenopellis', 'Flammulina', 'Strobilurus', 'Oudemansiella', 'Cyptotrama']
label4 : ['Marasmius', 'Megacollybia', 'Gerronema', 'Tetrapyrgos', 'Atheniella', 'Clitocybula']
|
02 Basics/02-code-comments.ipynb | ###Markdown
Examples Example 1: Single Line CommentSingle line comments simply start with a hashtag.
###Code
#This is a single line comment.
###Output
_____no_output_____
###Markdown
Example 2: Multi-Line CommentMulti-line comments can be created by wrapping your text in a set of triple quotation marks. As you will see, some IDEs like to automatically pair your quotes so creating just three can be something of a challenge.
###Code
"""
This is a
multi-line comment.
"""
###Output
_____no_output_____ |
best-fit-lines.ipynb | ###Markdown
Fitting lines***
###Code
# Numerical arrays and fitting lines.
import numpy as np
# Plots.
import matplotlib.pyplot as plt
# Nicer plot style.
plt.style.use("ggplot")
# Bigger plots.
plt.rcParams["figure.figsize"] = (18,10)
###Output
_____no_output_____
###Markdown
Fitting a straight line***
###Code
# Create some x values.
x = np.linspace(0.0, 10.0, 100)
# Have a look.
x
# Make sure we got the correct amount asked for.
len(x)
# Create y values based on the x values.
y = 3.0 * x + 2.0
# Have a look.
y
# There should be the same number of them.
len(y)
# Plot x vs y.
plt.plot(x, y)
# Ask numpy what, based only on the x and y values,
# what it thinks the relationship between x and y is.
np.polyfit(x, y, 1)
###Output
_____no_output_____
###Markdown
Include some noise***
###Code
# Add some noise - you might try increasing the noise yourself.
y = 3.0 * x + 2.0 + np.random.normal(0.0, 0.3, len(x))
# Have a look.
y
# Plot x vs y as points.
plt.plot(x, y, '.');
# Now what does numpy say the relationship is?
coeffs = np.polyfit(x, y, 1)
coeffs
# Plot the best fit line over the data points.
plt.plot(x, y, '.', label="Data")
plt.plot(x, coeffs[0] * x + coeffs[1], '-', label='Best fit')
plt.legend();
###Output
_____no_output_____
###Markdown
More than one dataset on one plot***
###Code
# Let's create two data sets, each with x and y values.
x1 = np.linspace(0.0, 0.5, 20)
y1 = 3.0 * x1 + 2.0 + np.random.normal(0.0, 1.0, len(x1))
x2 = np.linspace(0.0, 0.5, 30)
y2 = 6.0 * x2 + 5.0 + np.random.normal(0.0, 1.0, len(x2))
# Fit each using polyfit.
coeffs1 = np.polyfit(x1, y1, 1)
coeffs1
coeffs2 = np.polyfit(x2, y2, 1)
coeffs2
# Plot both on one plot.
plt.plot(x1, y1, '.', label="Data set 1")
plt.plot(x1, coeffs1[0] * x1 + coeffs1[1], '-', label='Best fit line 1')
plt.plot(x2, y2, '.', label="Data set 2")
plt.plot(x2, coeffs2[0] * x2 + coeffs2[1], '-', label='Best fit line 2')
plt.legend();
###Output
_____no_output_____
###Markdown
Combine data sets***
###Code
# How about we combine the datasets, for no particular reason.
x = np.concatenate([x1, x2])
y = np.concatenate([y1, y2])
# Fit a line to the combined data set.
coeffs = np.polyfit(x, y, 1)
coeffs
# What does the x/y relationship look like in the combined
# versus the original?
plt.plot(x1, y1, '.', label="Data set 1")
plt.plot(x1, coeffs1[0] * x1 + coeffs1[1], '-', label='Best fit line 1')
plt.plot(x2, y2, '.', label="Data set 2")
plt.plot(x2, coeffs2[0] * x2 + coeffs2[1], '-', label='Best fit line 2')
# plt.plot(x, y, '.', label="Combined")
plt.plot(x, coeffs[0] * x + coeffs[1], '-', label='Best fit combined')
plt.legend();
###Output
_____no_output_____ |
Day 7/Optimization_methods_v1b.ipynb | ###Markdown
Optimization MethodsUntil now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result. Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this: **Figure 1** : **Minimizing the cost is like finding the lowest point in a hilly landscape** At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. **Notations**: As usual, $\frac{\partial J}{\partial a } = $ `da` for any variable `a`.To get started, run the following code to import the libraries you will need. Updates to Assignment If you were working on a previous version* The current notebook filename is version "Optimization_methods_v1b". * You can find your work in the file directory as version "Optimization methods'.* To see the file directory, click on the Coursera logo at the top left of the notebook. List of Updates* op_utils is now opt_utils_v1a. Assertion statement in `initialize_parameters` is fixed.* opt_utils_v1a: `compute_cost` function now accumulates total cost of the batch without taking the average (average is taken for entire epoch instead).* In `model` function, the total cost per mini-batch is accumulated, and the average of the entire epoch is taken as the average cost. So the plot of the cost function over time is now a smooth downward curve instead of an oscillating curve.* Print statements used to check each function are reformatted, and 'expected output` is reformatted to match the format of the print statements (for easier visual comparisons).
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils_v1a import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils_v1a import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
1 - Gradient DescentA simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent. **Warm-up exercise**: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
###Code
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
"""
Update parameters using one step of gradient descent
Arguments:
parameters -- python dictionary containing your parameters to be updated:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients to update each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
learning_rate -- the learning rate, scalar.
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * grads["dW" + str(l + 1)]
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * grads["db" + str(l + 1)]
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 =\n" + str(parameters["W1"]))
print("b1 =\n" + str(parameters["b1"]))
print("W2 =\n" + str(parameters["W2"]))
print("b2 =\n" + str(parameters["b2"]))
###Output
W1 =
[[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]]
b1 =
[[ 1.74604067]
[-0.75184921]]
W2 =
[[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]]
b2 =
[[-0.88020257]
[ 0.02561572]
[ 0.57539477]]
###Markdown
**Expected Output**:```W1 =[[ 1.63535156 -0.62320365 -0.53718766] [-1.07799357 0.85639907 -2.29470142]]b1 =[[ 1.74604067] [-0.75184921]]W2 =[[ 0.32171798 -0.25467393 1.46902454] [-2.05617317 -0.31554548 -0.3756023 ] [ 1.1404819 -1.09976462 -0.1612551 ]]b2 =[[-0.88020257] [ 0.02561572] [ 0.57539477]]``` A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent. - **(Batch) Gradient Descent**:``` pythonX = data_inputY = labelsparameters = initialize_parameters(layers_dims)for i in range(0, num_iterations): Forward propagation a, caches = forward_propagation(X, parameters) Compute cost. cost += compute_cost(a, Y) Backward propagation. grads = backward_propagation(a, caches, parameters) Update parameters. parameters = update_parameters(parameters, grads) ```- **Stochastic Gradient Descent**:```pythonX = data_inputY = labelsparameters = initialize_parameters(layers_dims)for i in range(0, num_iterations): for j in range(0, m): Forward propagation a, caches = forward_propagation(X[:,j], parameters) Compute cost cost += compute_cost(a, Y[:,j]) Backward propagation grads = backward_propagation(a, caches, parameters) Update parameters. parameters = update_parameters(parameters, grads)``` In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this: **Figure 1** : **SGD vs GD** "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). **Note** also that implementing SGD requires 3 for-loops in total:1. Over the number of iterations2. Over the $m$ training examples3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)In practice, you'll often get faster results if you do not use neither the whole training set, nor only one training example, to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples. **Figure 2** : **SGD vs Mini-Batch GD** "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. **What you should remember**:- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.- You have to tune a learning rate hyperparameter $\alpha$.- With a well-turned mini-batch size, usually it outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large). 2 - Mini-Batch Gradient descentLet's learn how to build mini-batches from the training set (X, Y).There are two steps:- **Shuffle**: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. 
Note that the random shuffling is done synchronously between X and Y. Such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches. - **Partition**: Partition the shuffled (X, Y) into mini-batches of size `mini_batch_size` (here 64). Note that the number of training examples is not always divisible by `mini_batch_size`. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full `mini_batch_size`, it will look like this: **Exercise**: Implement `random_mini_batches`. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:```pythonfirst_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]...```Note that the last mini-batch might end up smaller than `mini_batch_size=64`. Let $\lfloor s \rfloor$ represents $s$ rounded down to the nearest integer (this is `math.floor(s)` in Python). If the total number of examples is not a multiple of `mini_batch_size=64` then there will be $\lfloor \frac{m}{mini\_batch\_size}\rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be ($m-mini_\_batch_\_size \times \lfloor \frac{m}{mini\_batch\_size}\rfloor$).
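As a quick worked example (with hypothetical numbers, not the assignment's data): if $m = 148$ and `mini_batch_size = 64`, there are 2 full mini-batches and a final one of 20 examples, which is consistent with the shapes printed by the test cell below.

```python
import math
m, mini_batch_size = 148, 64                # hypothetical sizes for illustration
num_full = math.floor(m / mini_batch_size)  # 2 full mini-batches of 64 examples
last_size = m - mini_batch_size * num_full  # final mini-batch of 20 examples
print(num_full, last_size)                  # 2 20
```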
###Code
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
"""
Creates a list of random minibatches from (X, Y)
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
mini_batch_size -- size of the mini-batches, integer
Returns:
mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
"""
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:,k * mini_batch_size:(k + 1) * mini_batch_size]
mini_batch_Y = shuffled_Y[:,k * mini_batch_size:(k + 1) * mini_batch_size]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
end = m - mini_batch_size * math.floor(m / mini_batch_size)
mini_batch_X = shuffled_X[:,num_complete_minibatches * mini_batch_size:]
mini_batch_Y = shuffled_Y[:,num_complete_minibatches * mini_batch_size:]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
###Output
shape of the 1st mini_batch_X: (12288, 64)
shape of the 2nd mini_batch_X: (12288, 64)
shape of the 3rd mini_batch_X: (12288, 20)
shape of the 1st mini_batch_Y: (1, 64)
shape of the 2nd mini_batch_Y: (1, 64)
shape of the 3rd mini_batch_Y: (1, 20)
mini batch sanity check: [ 0.90085595 -0.7612069 0.2344157 ]
###Markdown
**Expected Output**: **shape of the 1st mini_batch_X** (12288, 64) **shape of the 2nd mini_batch_X** (12288, 64) **shape of the 3rd mini_batch_X** (12288, 20) **shape of the 1st mini_batch_Y** (1, 64) **shape of the 2nd mini_batch_Y** (1, 64) **shape of the 3rd mini_batch_Y** (1, 20) **mini batch sanity check** [ 0.90085595 -0.7612069 0.2344157 ] **What you should remember**:- Shuffling and Partitioning are the two steps required to build mini-batches- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128. 3 - MomentumBecause mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations. Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill. **Figure 3**: The red arrows shows the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$. **Exercise**: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the `grads` dictionary, that is:for $l =1,...,L$:```pythonv["dW" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["W" + str(l+1)])v["db" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["b" + str(l+1)])```**Note** that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the `for` loop.
###Code
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
"""
Initializes the velocity as a python dictionary with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
Returns:
v -- python dictionary containing the current velocity.
v['dW' + str(l)] = velocity of dWl
v['db' + str(l)] = velocity of dbl
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros_like(parameters["W" + str(l+1)])
v["db" + str(l+1)] = np.zeros_like(parameters["b" + str(l+1)])
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] =\n" + str(v["dW1"]))
print("v[\"db1\"] =\n" + str(v["db1"]))
print("v[\"dW2\"] =\n" + str(v["dW2"]))
print("v[\"db2\"] =\n" + str(v["db2"]))
###Output
v["dW1"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] =
[[ 0.]
[ 0.]]
v["dW2"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] =
[[ 0.]
[ 0.]
[ 0.]]
###Markdown
**Expected Output**:```v["dW1"] =[[ 0. 0. 0.] [ 0. 0. 0.]]v["db1"] =[[ 0.] [ 0.]]v["dW2"] =[[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]]v["db2"] =[[ 0.] [ 0.] [ 0.]]``` **Exercise**: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$: $$ \begin{cases}v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \\W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}}\end{cases}\tag{3}$$$$\begin{cases}v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \\b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}} \end{cases}\tag{4}$$where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the `parameters` dictionary. Note that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift `l` to `l+1` when coding.
###Code
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
"""
Update parameters using Momentum
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- python dictionary containing the current velocity:
v['dW' + str(l)] = ...
v['db' + str(l)] = ...
beta -- the momentum hyperparameter, scalar
learning_rate -- the learning rate, scalar
Returns:
parameters -- python dictionary containing your updated parameters
v -- python dictionary containing your updated velocities
"""
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l+1)] = beta*v["dW" + str(l+1)] + (1-beta)*(grads["dW" + str(l+1)])
v["db" + str(l+1)] = beta*v["db" + str(l+1)] + (1-beta)*(grads["db" + str(l+1)])
# update parameters
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * v["dW" + str(l + 1)]
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * v["db" + str(l + 1)]
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = \n" + str(parameters["W1"]))
print("b1 = \n" + str(parameters["b1"]))
print("W2 = \n" + str(parameters["W2"]))
print("b2 = \n" + str(parameters["b2"]))
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = v" + str(v["db2"]))
###Output
W1 =
[[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]]
b1 =
[[ 1.74493465]
[-0.76027113]]
W2 =
[[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]]
b2 =
[[-0.87809283]
[ 0.04055394]
[ 0.58207317]]
v["dW1"] =
[[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] =
[[-0.01228902]
[-0.09357694]]
v["dW2"] =
[[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] = v[[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
###Markdown
**Expected Output**:```W1 = [[ 1.62544598 -0.61290114 -0.52907334] [-1.07347112 0.86450677 -2.30085497]]b1 = [[ 1.74493465] [-0.76027113]]W2 = [[ 0.31930698 -0.24990073 1.4627996 ] [-2.05974396 -0.32173003 -0.38320915] [ 1.13444069 -1.0998786 -0.1713109 ]]b2 = [[-0.87809283] [ 0.04055394] [ 0.58207317]]v["dW1"] = [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]]v["db1"] = [[-0.01228902] [-0.09357694]]v["dW2"] = [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]]v["db2"] = v[[ 0.02344157] [ 0.16598022] [ 0.07420442]]``` **Note** that:- The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.- If $\beta = 0$, then this just becomes standard gradient descent without momentum. **How do you choose $\beta$?**- The larger the momentum $\beta$ is, the smoother the update because the more we take the past gradients into account. But if $\beta$ is too big, it could also smooth out the updates too much. - Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default. - Tuning the optimal $\beta$ for your model might need trying several values to see what works best in term of reducing the value of the cost function $J$. **What you should remember**:- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$. 4 - AdamAdam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum. **How does Adam work?**1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction). 2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction). 3. It updates parameters in a direction based on combining information from "1" and "2".The update rule is, for $l = 1, ..., L$: $$\begin{cases}v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) (\frac{\partial \mathcal{J} }{\partial W^{[l]} })^2 \\s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}\end{cases}$$where:- t counts the number of steps taken of Adam - L is the number of layers- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages. - $\alpha$ is the learning rate- $\varepsilon$ is a very small number to avoid dividing by zeroAs usual, we will store all parameters in the `parameters` dictionary **Exercise**: Initialize the Adam variables $v, s$ which keep track of the past information.**Instruction**: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for `grads`, that is:for $l = 1, ..., L$:```pythonv["dW" + str(l+1)] = ... 
(numpy array of zeros with the same shape as parameters["W" + str(l+1)])v["db" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["b" + str(l+1)])s["dW" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["W" + str(l+1)])s["db" + str(l+1)] = ... (numpy array of zeros with the same shape as parameters["b" + str(l+1)])```
###Code
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
"""
Initializes v and s as two python dictionaries with:
- keys: "dW1", "db1", ..., "dWL", "dbL"
- values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.
Arguments:
parameters -- python dictionary containing your parameters.
parameters["W" + str(l)] = Wl
parameters["b" + str(l)] = bl
Returns:
v -- python dictionary that will contain the exponentially weighted average of the gradient.
v["dW" + str(l)] = ...
v["db" + str(l)] = ...
s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
s["dW" + str(l)] = ...
s["db" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l + 1)] = np.zeros_like(parameters["W" + str(l + 1)])
v["db" + str(l + 1)] = np.zeros_like(parameters["b" + str(l + 1)])
s["dW" + str(l+1)] = np.zeros_like(parameters["W" + str(l + 1)])
s["db" + str(l+1)] = np.zeros_like(parameters["b" + str(l + 1)])
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = \n" + str(v["db2"]))
print("s[\"dW1\"] = \n" + str(s["dW1"]))
print("s[\"db1\"] = \n" + str(s["db1"]))
print("s[\"dW2\"] = \n" + str(s["dW2"]))
print("s[\"db2\"] = \n" + str(s["db2"]))
###Output
v["dW1"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db1"] =
[[ 0.]
[ 0.]]
v["dW2"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
v["db2"] =
[[ 0.]
[ 0.]
[ 0.]]
s["dW1"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db1"] =
[[ 0.]
[ 0.]]
s["dW2"] =
[[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]]
s["db2"] =
[[ 0.]
[ 0.]
[ 0.]]
###Markdown
**Expected Output**:```v["dW1"] = [[ 0. 0. 0.] [ 0. 0. 0.]]v["db1"] = [[ 0.] [ 0.]]v["dW2"] = [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]]v["db2"] = [[ 0.] [ 0.] [ 0.]]s["dW1"] = [[ 0. 0. 0.] [ 0. 0. 0.]]s["db1"] = [[ 0.] [ 0.]]s["dW2"] = [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]]s["db2"] = [[ 0.] [ 0.] [ 0.]]``` **Exercise**: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$: $$\begin{cases}v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon}\end{cases}$$**Note** that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.
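Before coding it up, here is a tiny numerical aside (not part of the graded exercise) showing why the bias-correction division by $1 - \beta_1^t$ matters: with zero initialization, the raw moving average underestimates the gradient during the first few steps, while the corrected estimate does not. The constant gradient value below is made up purely for illustration.```python
beta1 = 0.9
g = 1.0   # pretend the gradient is a constant 1.0 (illustrative only)
v = 0.0   # zero-initialized, as in initialize_adam
for t in range(1, 6):
    v = beta1 * v + (1 - beta1) * g        # raw exponentially weighted average
    v_corrected = v / (1 - beta1 ** t)     # bias-corrected estimate
    print(t, round(v, 4), round(v_corrected, 4))
# v climbs slowly from 0.1 towards 1.0, while v_corrected equals 1.0 at every step
```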
###Code
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
"""
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameters:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l + 1)] = beta1 * v["dW" + str(l + 1)] + (1 - beta1) * grads['dW' + str(l + 1)]
v["db" + str(l + 1)] = beta1 * v["db" + str(l + 1)] + (1 - beta1) * grads['db' + str(l + 1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l + 1)] = v["dW" + str(l + 1)] / (1 - np.power(beta1, t))
v_corrected["db" + str(l + 1)] = v["db" + str(l + 1)] / (1 - np.power(beta1, t))
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l + 1)] = beta2 * s["dW" + str(l + 1)] + (1 - beta2) * np.power(grads['dW' + str(l + 1)], 2)
s["db" + str(l + 1)] = beta2 * s["db" + str(l + 1)] + (1 - beta2) * np.power(grads['db' + str(l + 1)], 2)
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l + 1)] = s["dW" + str(l + 1)] / (1 - np.power(beta2, t))
s_corrected["db" + str(l + 1)] = s["db" + str(l + 1)] / (1 - np.power(beta2, t))
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * v_corrected["dW" + str(l + 1)] / np.sqrt(s_corrected["dW" + str(l + 1)] + epsilon)
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * v_corrected["db" + str(l + 1)] / np.sqrt(s_corrected["db" + str(l + 1)] + epsilon)
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = \n" + str(parameters["W1"]))
print("b1 = \n" + str(parameters["b1"]))
print("W2 = \n" + str(parameters["W2"]))
print("b2 = \n" + str(parameters["b2"]))
print("v[\"dW1\"] = \n" + str(v["dW1"]))
print("v[\"db1\"] = \n" + str(v["db1"]))
print("v[\"dW2\"] = \n" + str(v["dW2"]))
print("v[\"db2\"] = \n" + str(v["db2"]))
print("s[\"dW1\"] = \n" + str(s["dW1"]))
print("s[\"db1\"] = \n" + str(s["db1"]))
print("s[\"dW2\"] = \n" + str(s["dW2"]))
print("s[\"db2\"] = \n" + str(s["db2"]))
###Output
W1 =
[[ 1.63178673 -0.61919778 -0.53561312]
[-1.08040999 0.85796626 -2.29409733]]
b1 =
[[ 1.75225313]
[-0.75376553]]
W2 =
[[ 0.32648046 -0.25681174 1.46954931]
[-2.05269934 -0.31497584 -0.37661299]
[ 1.14121081 -1.09245036 -0.16498684]]
b2 =
[[-0.88529978]
[ 0.03477238]
[ 0.57537385]]
v["dW1"] =
[[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] =
[[-0.01228902]
[-0.09357694]]
v["dW2"] =
[[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] =
[[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
s["dW1"] =
[[ 0.00121136 0.00131039 0.00081287]
[ 0.0002525 0.00081154 0.00046748]]
s["db1"] =
[[ 1.51020075e-05]
[ 8.75664434e-04]]
s["dW2"] =
[[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]
[ 1.57413361e-04 4.72206320e-04 7.14372576e-04]
[ 4.50571368e-04 1.60392066e-07 1.24838242e-03]]
s["db2"] =
[[ 5.49507194e-05]
[ 2.75494327e-03]
[ 5.50629536e-04]]
###Markdown
**Expected Output**:```W1 = [[ 1.63178673 -0.61919778 -0.53561312] [-1.08040999 0.85796626 -2.29409733]]b1 = [[ 1.75225313] [-0.75376553]]W2 = [[ 0.32648046 -0.25681174 1.46954931] [-2.05269934 -0.31497584 -0.37661299] [ 1.14121081 -1.09245036 -0.16498684]]b2 = [[-0.88529978] [ 0.03477238] [ 0.57537385]]v["dW1"] = [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]]v["db1"] = [[-0.01228902] [-0.09357694]]v["dW2"] = [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]]v["db2"] = [[ 0.02344157] [ 0.16598022] [ 0.07420442]]s["dW1"] = [[ 0.00121136 0.00131039 0.00081287] [ 0.0002525 0.00081154 0.00046748]]s["db1"] = [[ 1.51020075e-05] [ 8.75664434e-04]]s["dW2"] = [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04] [ 1.57413361e-04 4.72206320e-04 7.14372576e-04] [ 4.50571368e-04 1.60392066e-07 1.24838242e-03]]s["db2"] = [[ 5.49507194e-05] [ 2.75494327e-03] [ 5.50629536e-04]]``` You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference. 5 - Model with different optimization algorithmsLets use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)
###Code
train_X, train_Y = load_dataset()
###Output
_____no_output_____
###Markdown
We have already implemented a 3-layer neural network. You will train it with: - Mini-batch **Gradient Descent**: it will call your function: - `update_parameters_with_gd()`- Mini-batch **Momentum**: it will call your functions: - `initialize_velocity()` and `update_parameters_with_momentum()`- Mini-batch **Adam**: it will call your functions: - `initialize_adam()` and `update_parameters_with_adam()`
###Code
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
"""
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
m = X.shape[1] # number of training examples
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle differently the dataset after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
cost_total = 0
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost and add to the cost total
cost_total += compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
cost_avg = cost_total / m
# Print the cost every 1000 epoch
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost_avg))
if print_cost and i % 100 == 0:
costs.append(cost_avg)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
You will now run this 3 layer neural network with each of the 3 optimization methods. 5.1 - Mini-batch Gradient descentRun the following code to see how the model does with mini-batch gradient descent.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
Cost after epoch 0: 0.702405
Cost after epoch 1000: 0.668101
Cost after epoch 2000: 0.635288
Cost after epoch 3000: 0.600491
Cost after epoch 4000: 0.573367
Cost after epoch 5000: 0.551977
Cost after epoch 6000: 0.532370
Cost after epoch 7000: 0.514007
Cost after epoch 8000: 0.496472
Cost after epoch 9000: 0.468014
###Markdown
5.2 - Mini-batch gradient descent with momentumRun the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
Cost after epoch 0: 0.702413
Cost after epoch 1000: 0.668167
Cost after epoch 2000: 0.635388
Cost after epoch 3000: 0.600591
Cost after epoch 4000: 0.573444
Cost after epoch 5000: 0.552058
Cost after epoch 6000: 0.532458
Cost after epoch 7000: 0.514101
Cost after epoch 8000: 0.496652
Cost after epoch 9000: 0.468160
###Markdown
5.3 - Mini-batch with Adam modeRun the following code to see how the model does with Adam.
###Code
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
Cost after epoch 0: 0.702166
Cost after epoch 1000: 0.167966
Cost after epoch 2000: 0.141320
Cost after epoch 3000: 0.138782
Cost after epoch 4000: 0.136111
Cost after epoch 5000: 0.134327
Cost after epoch 6000: 0.131147
Cost after epoch 7000: 0.130245
Cost after epoch 8000: 0.129655
Cost after epoch 9000: 0.129159
|
AllIn.ipynb | ###Markdown
Analysis of Public Interest on Initial Public Offering (IPO) on Bursa Malaysia An Initial Public Offering (IPO) is where shares of a company are offered to investors, both institutional and the public.Here I look at public interest towards IPOs on Bursa Malaysia over the years 2017 till 2022 (up till January, i.e. only the Coraza and Sen Heng IPOs so far this year).Data was collected from the Internet for 2017 till 2022. Sources are online only, including the Bursa Malaysia and The Edge websites and many more.Data was collected manually, hence there could be errors in the numbers - read that again - data was collected manually, hence there could be errors in the numbers. I also made no effort to verify the data after initial collection.Only IPOs that could be subscribed by the public on the Main and ACE markets were included; e.g. IPOs on the LEAP market and ETFs were not included. This exercise is for entertainment purposes only. Do not trade shares based on this exercise. Trading shares involves capital loss and emotional pain. Please consult your investment advisor before embarking on any financial exercise.For the exercises below, I divided the analyses by listing price: below RM1 (<RM1) and RM1 or above (>=RM1).
###Code
import pandas as pd
from statistics import stdev
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import scipy
from scipy import stats
import warnings
warnings.filterwarnings('ignore')
df = pd.read_csv("IPO_2017to2022.csv")
df.head()
###Output
_____no_output_____
###Markdown
Note that here many columns such as NEWSHARES and TOTALAPPLICATIONS are not populated, as they are not used in the current analysis. Feature Engineering and Data Clean-Up
###Code
# Create following columns:
# % difference between list price vs opening price on IPO day
# % difference between list price vs highest price on IPO day
# % difference between opening price on IPO day vs closing price on IPO day
# % difference between opening price on IPO day vs highest price on IPO day
df['LISTVSOPENING_PCT'] = df['OPENPRICEIPODAY']/df['LISTPRICE']*100 - 100
df['LISTVSHIGHEST_PCT'] = df['HIGHESTPRICEIPODAY']/df['LISTPRICE']*100 - 100
df['LISTVSCLOSING_PCT'] = df['CLOSINGPRICEIPODAY']/df['LISTPRICE']*100 - 100
# remove Lotte Chemical as it was undersubscribed
df = df[df['TICKER']!='LOTTE']
df.shape
# grab all available years
years = df['YEAR'].unique().tolist()[::-1]
years
###Output
_____no_output_____
###Markdown
Measuring Subscription Interest: Oversubscription Rate by YearI first ask this question: how does public interest towards IPOs change over the years? I am interested in this question since I have encountered anecdotes from acquaintances and friends that it has been getting harder to get any units from IPO offerings in recent years.
###Code
# function to calculate bar height, bar error bars and counts, after dividing by RM<1 and RM>=1
def calculate_bardata(years, df, targetvar):
barheight = {}
barheight['lt1'] = {}
barheight['mt1'] = {}
barerr = {}
barerr['lt1'] = {}
barerr['mt1'] = {}
count = {}
count['lt1']={}
count['mt1']={}
for year in years:
# for < RM1
df_series_lt1 = df[((df['YEAR']==year) & (df['LISTPRICE']<1))][targetvar]
count['lt1'][year] = df_series_lt1.count()
mean_year_lt1 = round(df_series_lt1.mean(),2)
barheight['lt1'][year] = mean_year_lt1
if df_series_lt1.count() >1:
barerr['lt1'][year] = round(stdev(df_series_lt1),2)
else:
barerr['lt1'][year] = 0
df_series_mt1 = df[((df['YEAR']==year) & (df['LISTPRICE']>=1))][targetvar]
count['mt1'][year] = df_series_mt1.count()
mean_year_mt1 = round(df_series_mt1.mean(),2)
barheight['mt1'][year] = mean_year_mt1
if df_series_mt1.count() >1:
barerr['mt1'][year] = round(stdev(df_series_mt1),2)
else:
barerr['mt1'][year] = 0
return count, barheight, barerr
count, barheight, barerr = calculate_bardata(years, df, 'TOTALBALLOTINGOSCBED')
print("For listing price < RM1:")
for year in years:
print(str(year) + '(' + str(count['lt1'][year]) + '): mean ' + str(barheight['lt1'][year]) +
', stdev ' + str(barerr['lt1'][year]))
print()
print("For listing price >= RM1:")
for year in years:
print(str(year) + '(' + str(count['mt1'][year]) + '): mean ' + str(barheight['mt1'][year]) +
', stdev ' + str(barerr['mt1'][year]))
fig, ax = plt.subplots()
xpos = np.arange(len(barerr['lt1']))
width = 0.4
plt.bar(xpos-0.2, list(barheight['lt1'].values()), width, yerr=list(barerr['lt1'].values()), ecolor='black', capsize=10)
plt.bar(xpos+0.2, list(barheight['mt1'].values()), width, yerr=list(barerr['mt1'].values()), ecolor='black', capsize=10)
ax.set_xticks(xpos)
ax.set_xticklabels(years)
ax.set_ylabel('Oversubscription, times')
plt.title('Oversubscription Rate By Year')
plt.legend(["Listing Price < RM1", "Listing Price >= RM1"])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Here you see that from 2017 till 2019, oversubscription for IPOs goes down. Interestingly, things changed dramatically since then, where it more than doubled from 2019 to 2020, and again from 2020 to 2021 (for prices less than RM1).Also note that oversubscription is much lower in general for IPOs with listing price more than RM1.The huge increase in oversubscription rate starting in 2020 is consistent with the heightened interest from the public in the stock market as stay-at-home orders were issued due to Covid: we did not only see higher trading volume on the open market in general (as has been covered by many analysts and reports), we also see higher subscription to the IPOs offered.(Note that for 2022 we have only 2 IPOs so far, so the analysis for 2022 is ongoing). Measuring Speculative Activity on IPO Day: Volume on IPO day by YearI now ask the question: if there is an increase in public interest from 2019 to 2021 as measured by the oversubscription rate, does this translate into higher trading volume on the counter on IPO day?One scenario that I could imagine: could it be that people who did not get the IPO also participate on the counter on IPO day (although this can't be proven directly unless we track the trading tickets that day and compare their executors against the list of IPO awardees)?
###Code
count, barheight, barerr = calculate_bardata(years, df, 'VOLUMEIPODAY')
print("For listing price < RM1:")
for year in years:
print(str(year) + '(' + str(count['lt1'][year]) + '): mean ' + str(barheight['lt1'][year]) +
', stdev ' + str(barerr['lt1'][year]))
print()
print("For listing price >= RM1:")
for year in years:
print(str(year) + '(' + str(count['mt1'][year]) + '): mean ' + str(barheight['mt1'][year]) +
', stdev ' + str(barerr['mt1'][year]))
fig, ax = plt.subplots()
xpos = np.arange(len(barerr['lt1']))
width = 0.4
plt.bar(xpos-0.2, list(barheight['lt1'].values()), width, yerr=list(barerr['lt1'].values()), ecolor='black', capsize=10)
plt.bar(xpos+0.2, list(barheight['mt1'].values()), width, yerr=list(barerr['mt1'].values()), ecolor='black', capsize=10)
ax.set_xticks(xpos)
ax.set_xticklabels(years)
ax.set_ylabel('Volume on IPO day (millions)')
plt.title('Volume on IPO day (millions) by Year')
plt.legend(["Listing Price < RM1", "Listing Price >= RM1"])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Interestingly we see that 2020 is the year with the highest volume on IPOed counters on IPO day (even though 2021 was the year with the highest oversubscription). In 2021 the volume on IPOed counters decreased a bit vis-a-vis 2020, but was still slightly above 2019. Measuring Speculative Activity on IPO Day: % Difference between Listing vs Opening Price Now that we see a higher volume on average on the IPOed counter on IPO day in 2020 (and also 2021), does this translate into more price movement on that day? Specifically I am interested in seeing the percentage difference between:1. listing price versus opening price (first matched ticket at 9AM)2. listing price versus highest price at any time during IPO day.Let's look at (1) first.
###Code
count, barheight, barerr = calculate_bardata(years, df, 'LISTVSOPENING_PCT')
print("For listing price < RM1:")
for year in years:
print(str(year) + '(' + str(count['lt1'][year]) + '): mean ' + str(barheight['lt1'][year]) +
', stdev ' + str(barerr['lt1'][year]))
print()
print("For listing price >= RM1:")
for year in years:
print(str(year) + '(' + str(count['mt1'][year]) + '): mean ' + str(barheight['mt1'][year]) +
', stdev ' + str(barerr['mt1'][year]))
fig, ax = plt.subplots()
xpos = np.arange(len(barerr['lt1']))
width = 0.4
plt.bar(xpos-0.2, list(barheight['lt1'].values()), width, yerr=list(barerr['lt1'].values()), ecolor='black', capsize=10)
plt.bar(xpos+0.2, list(barheight['mt1'].values()), width, yerr=list(barerr['mt1'].values()), ecolor='black', capsize=10)
ax.set_xticks(xpos)
ax.set_xticklabels(years)
ax.set_ylabel('% Difference, List vs Opening Price')
plt.title('% Difference List vs Opening Price by Year')
plt.legend(["Listing Price < RM1", "Listing Price >= RM1"])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
We see that there is an increasing trend of higher opening prices in 2020 and 2021, as opposed to previous years, and it is only for IPOs priced at less than RM1. Also note the really high standard deviation bars for each year, which suggest that while the averages show an increasing trend, there is a huge variation within the same year itself.Also, be aware that while many shares trade higher than their listing price on their IPO day, there are also instances where the shares open lower than their listing prices. Measuring Speculative Activity on IPO Day: % Difference between Listing vs Highest Price And now let's look at the percentage difference between listing versus highest price on IPO day, per year
###Code
count, barheight, barerr = calculate_bardata(years, df, 'LISTVSHIGHEST_PCT')
print("For listing price < RM1:")
for year in years:
print(str(year) + '(' + str(count['lt1'][year]) + '): mean ' + str(barheight['lt1'][year]) +
', stdev ' + str(barerr['lt1'][year]))
print()
print("For listing price >= RM1:")
for year in years:
print(str(year) + '(' + str(count['mt1'][year]) + '): mean ' + str(barheight['mt1'][year]) +
', stdev ' + str(barerr['mt1'][year]))
fig, ax = plt.subplots()
xpos = np.arange(len(barerr['lt1']))
width = 0.4
plt.bar(xpos-0.2, list(barheight['lt1'].values()), width, yerr=list(barerr['lt1'].values()), ecolor='black', capsize=10)
plt.bar(xpos+0.2, list(barheight['mt1'].values()), width, yerr=list(barerr['mt1'].values()), ecolor='black', capsize=10)
ax.set_xticks(xpos)
ax.set_xticklabels(years)
ax.set_ylabel('% difference, Listing vs Highest Price')
plt.title('% difference, Listing vs Highest Price, per Year')
plt.legend(["Listing Price < RM1", "Listing Price >= RM1"])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Similar to the above (analysis of % difference of listing vs opening price), we also see that there is an increasing trend of a higher highest price in 2020 and 2021, as opposed to previous years, and it is more pronounced for IPOs at less than RM1. Again note the big standard deviation: the average suggests an increasing trend, yet there is a huge variation within the same year itself. The similar trend for the two previous analyses is not surprising, since the % difference of listing versus opening, and the % difference of listing versus highest prices, actually correlate strongly (i.e., if the price at open is higher than the listing price, it seems that the price will go at least as high/even higher than that, most of the time):
###Code
x_label = 'LISTVSOPENING_PCT'
y_label = 'LISTVSHIGHEST_PCT'
x = df[x_label]
y = df[y_label]
fig, ax = plt.subplots()
ax.scatter(x,y)
ax.set_xlabel('% difference of listing vs opening prices')
ax.set_ylabel('% difference of listing vs highest prices')
plt.title('(% difference of listing vs opening) vs (% difference of lisiting vs highest)')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Does high public interest during subscription translate into higher activity on IPO day? Let's now remove analysis by year from the equation, take the data in aggregate, and ask if a high oversubscription rate correlates with high volume on the counter on IPO day.
###Code
x_label = 'TOTALBALLOTINGOSCBED'
y_label = 'VOLUMEIPODAY'
x = df[x_label]
y = df[y_label]
fig, ax = plt.subplots()
ax.scatter(x,y)
sns.regplot(x, y)
ax.set_xlabel('Oversubscription, times')
ax.set_ylabel('Volume on IPO day, millions')
plt.title('Oversubscription vs Volume on IPO day')
plt.tight_layout()
plt.show()
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
print("R2: ", round(r_value,4))
print("P-value: ", round(p_value,4))
print("stderr: ", round(std_err,2))
###Output
R2: -0.0244
P-value: 0.8459
stderr: 3338.52
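###Markdown
(Side note: the values printed as "R2" and "P-value" above come from `scipy.stats.linregress`; the same correlation coefficient and P-value can also be obtained directly with `scipy.stats.pearsonr`. A minimal sketch, assuming `x` and `y` still hold the oversubscription and volume series from the cell above:)```python
from scipy.stats import pearsonr

r, p = pearsonr(x, y)   # x = oversubscription rate, y = volume on IPO day
print(round(r, 4), round(p, 4))
```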
###Markdown
From the plot and the correlation value we can see there is a poor correlation between oversubscription rate and volume on the counter on IPO day.English speak: the value printed as "R2" here is the correlation coefficient r, which ranges from -1 to 1, where -1 shows strong negative correlation, and 1 shows strong positive correlation. In this case the value is only around 0, suggesting there is no correlation. For the P-value, the range is from (close to) 0 to 1, where a value close to 0 suggests that the result is strongly significant, while a value near 1 suggests that it is probably not. What a good P-value cutoff is depends on the case, but for simple tests statisticians typically require P-values of at most 0.05 (or 0.01 if they want to be more strict). Here we have ~0.85, which is rather large.This suggests that even if a counter was highly oversubscribed, this does not mean that there will be more participants on the counter on IPO day. Does high public interest during subscription translate into higher speculative activity on IPO day?Okay, so it seems that subscription rate doesn't correlate with volume on IPO day. But does subscription rate correlate with higher speculative activity (not volume, because we just answered that) on IPO day? i.e., the % difference between listing vs opening, listing vs highest, and maybe also listing vs closing prices. Let's see % difference between listing vs opening prices first...
###Code
x_label = 'TOTALBALLOTINGOSCBED'
y_label = 'LISTVSOPENING_PCT'
x = df[x_label]
y = df[y_label]
fig, ax = plt.subplots()
ax.scatter(x,y)
sns.regplot(x, y)
ax.set_xlabel('Oversubscription, times')
ax.set_ylabel('% difference, listing vs opening price')
plt.title('Oversubscription vs %difference listing vs opening price')
plt.tight_layout()
plt.show()
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
print("R2: ", round(r_value,4))
print(f"P-value: {p_value:.4e}")
print("stderr: ", round(std_err,4))
print("equation: y = " + str(round(m,4)) + "*X + " + str(round(b,4)))
###Output
R2: 0.6555
P-value: 2.3387e-09
stderr: 0.132
equation: y = 0.9165*X + 8.1523
###Markdown
Noice. Looks like there's rather good correlation: the higher the public interest in subscribing to the IPO (as measured by the oversubscription rate), the higher the likelihood that we'd see a higher opening price on IPO day.We have a correlation value of ~0.66, quite far from 0 (no correlation) and about two thirds of the way to 1 (perfect correlation). Also the P-value is really really small, suggesting the correlation is highly significant.This tells you that while there is no correlation between oversubscription and volume per se, the ones that participate during IPO day indeed push up the price at market open. Let's remove points over 40X oversubscription rate, since outliers like those can easily skew the result.
###Code
df2 = df[df['TOTALBALLOTINGOSCBED']<40]
x_label = 'TOTALBALLOTINGOSCBED'
y_label = 'LISTVSOPENING_PCT'
x = df2[x_label]
y = df2[y_label]
fig, ax = plt.subplots()
ax.scatter(x,y)
sns.regplot(x, y)
ax.set_xlabel('Oversubscription, times')
ax.set_ylabel('% difference, listing vs opening price')
plt.title('Oversubscription vs %difference listing vs opening price (for oversubscription<=40X)')
plt.tight_layout()
plt.show()
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
print("R2: ", round(r_value,4))
print(f"P-value: {p_value:.4e}")
print("stderr: ", round(std_err,4))
print("equation: y = " + str(round(m,4)) + "*X + " + str(round(b,4)))
###Output
R2: 0.4411
P-value: 1.6991e-03
stderr: 0.4842
equation: y = 1.6143*X + 0.0212
###Markdown
Yes, the result still holds, albeit with a slightly lower R2 value. Noice. Let's look at oversubscription vs %difference listing-to-highest price. I suspect it will also correlate (since I've already shown that % difference listing-to-opening already correlates with % difference listing-to-highest prices).
###Code
x_label = 'TOTALBALLOTINGOSCBED'
y_label = 'LISTVSHIGHEST_PCT'
x = df[x_label]
y = df[y_label]
fig, ax = plt.subplots()
ax.scatter(x,y)
sns.regplot(x, y)
ax.set_xlabel('Oversubscription')
ax.set_ylabel('% difference, listing vs highest')
plt.title('Oversubscription vs %difference listing vs highest price on IPO day')
plt.tight_layout()
plt.show()
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
print("R2: ", round(r_value,4))
print(f"P-value: {p_value:.4e}")
print("stderr: ", round(std_err,4))
print("equation: y = " + str(round(m,4)) + "*X + " + str(round(b,4)))
###Output
R2: 0.5672
P-value: 6.8342e-07
stderr: 0.161
equation: y = 0.8871*X + 23.1671
###Markdown
Yup, same. The higher the public interest in subscribing to the IPO, the higher the likelihood that the price will be pushed to a really high level on IPO day.So, again, high public interest in the IPO does not translate into more volume, but those that participate during the IPO day push up the price at market open, and also push up the price to a high level during the day. Again, check for oversubscription less than 40X (it should still hold).
###Code
df2 = df[df['TOTALBALLOTINGOSCBED']<=40]
x_label = 'TOTALBALLOTINGOSCBED'
y_label = 'LISTVSHIGHEST_PCT'
x = df2[x_label]
y = df2[y_label]
fig, ax = plt.subplots()
ax.scatter(x,y)
sns.regplot(x, y)
ax.set_xlabel('Oversubscription')
ax.set_ylabel('% difference, listing vs highest')
plt.title('Oversubscription vs %difference listing vs highest price on IPO day, for oversubscription <=40X only')
plt.tight_layout()
plt.show()
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
print("R2: ", round(r_value,4))
print("P-value: ", round(p_value,4))
print("stderr: ", round(std_err,4))
print("equation: y = " + str(round(m,4)) + "*X + " + str(round(b,4)))
###Output
R2: 0.3502
P-value: 0.0147
stderr: 0.6356
equation: y = 1.6115*X + 14.4463
###Markdown
Result holds, again with lower R-square and P-values. Finally let's look at oversubscription vs %difference listing-to-closing price. Does the stock price tend to settle higher, if it has high oversubscription rate?
###Code
x_label = 'TOTALBALLOTINGOSCBED'
y_label = 'LISTVSCLOSING_PCT'
x = df[x_label]
y = df[y_label]
fig, ax = plt.subplots()
ax.scatter(x,y)
sns.regplot(x, y)
ax.set_xlabel('Oversubscription')
ax.set_ylabel('% difference, listing vs closing price')
plt.title('Oversubscription vs %difference listing vs closing price on IPO day')
plt.tight_layout()
plt.show()
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
print("R2: ", round(r_value,4))
print(f"P-value: {p_value:.4e}")
print("stderr: ", round(std_err,4))
print("equation: y = " + str(round(m,4)) + "*X + " + str(round(b,4)))
###Output
R2: 0.4235
P-value: 3.9548e-04
stderr: 0.1352
equation: y = 0.5056*X + 14.4064
###Markdown
So, again, high public interest in the IPO does not translate into more volume, but those that participate during the IPO day push up the price at market open, and also push up the price to a high level during the day.But you can see that the correlation is weaker now (R2 at ~0.42) vs (R2 at ~0.6 for oversubscription vs % difference listing-vs-opening). Again, just for completeness, let's verify for oversubscription less than 40X...
###Code
df = df[df['TOTALBALLOTINGOSCBED']<=40]
x_label = 'TOTALBALLOTINGOSCBED'
y_label = 'LISTVSCLOSING_PCT'
x = df[x_label]
y = df[y_label]
fig, ax = plt.subplots()
ax.scatter(x,y)
sns.regplot(x, y)
ax.set_xlabel('Oversubscription')
ax.set_ylabel('% difference, listing vs closing price')
plt.title('Oversubscription vs %difference listing vs closing price on IPO day, for oversubscription <=40X only')
plt.tight_layout()
plt.show()
m, b, r_value, p_value, std_err = scipy.stats.linregress(x, y)
print("R2: ", round(r_value,4))
print("P-value: ", round(p_value,4))
print("stderr: ", round(std_err,4))
print("equation: y = " + str(round(m,4)) + "*X + " + str(round(b,4)))
###Output
R2: 0.35
P-value: 0.0147
stderr: 0.5699
equation: y = 1.4443*X + 1.6476
|
BasicPython/Lec02_VariablesAndOperators.ipynb | ###Markdown
Variables and Operators
###Code
import os
from IPython.core.display import HTML
def load_style(directory = '../', name='customMac.css'):
styles = open(os.path.join(directory, name), 'r').read()
return HTML(styles)
load_style()
###Output
_____no_output_____
###Markdown
The Zen Of Python
###Code
import this
###Output
The Zen of Python, by Tim Peters
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
###Markdown
Variables A name that is used to denote something or a value is called a variable. In Python, variables can be declared and values can be assigned to them as follows,
###Code
x = 2
y = 5
xy = 'Hey'
print(x+y, xy)
###Output
7 Hey
###Markdown
Multiple variables can be assigned with the same value.
###Code
x = y = 1
print(x,y)
###Output
1 1
###Markdown
Operators Arithmetic Operators | Symbol | Task Performed ||----|---|| + | Addition || - | Subtraction || / | division || % | mod || * | multiplication || // | floor division || ** | to the power of |
###Code
1+2
2-1
1*2
1/2 # returns float value
###Output
_____no_output_____
###Markdown
In Python 2.7 the result will be 0. This is because both the numerator and denominator are integers, so integer division is performed and the fractional part is discarded. By changing either the numerator or the denominator to a float, the correct answer can be obtained. In Python 3, the / operator returns a float by default.
###Code
1/2.0
15%4 # returns remainder
###Output
_____no_output_____
###Markdown
Floor division is nothing but rounding the result of the division down to the nearest integer (the floor).
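A small illustration of the rounding direction (floor division always rounds towards negative infinity, which matters for negative operands):```python
print(7 // 2)    # 3
print(-7 // 2)   # -4, not -3: the result is rounded down, not truncated
```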
###Code
2.8//1.0 # return quotient
###Output
_____no_output_____
###Markdown
Relational Operators | Symbol | Task Performed ||----|---|| == | True, if it is equal || != | True, if not equal to || < | less than || > | greater than || <= | less than or equal to || >= | greater than or equal to |
###Code
z = 1
z == 1
z > 1
###Output
_____no_output_____
###Markdown
Bitwise Operators | Symbol | Task Performed ||----|---|| & | Bitwise AND || \| | Bitwise OR || ^ | Bitwise XOR || ~ | Bitwise NOT || >> | Right shift || << | Left shift |
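The cells below demonstrate `&` and `>>`; for completeness, here is a quick sketch of the remaining operators using the same `a = 2` (binary 10) and `b = 3` (binary 11):```python
a, b = 2, 3
print(a | b)    # 3  (10 | 11 = 11)
print(a ^ b)    # 1  (10 ^ 11 = 01)
print(~a)       # -3 (bitwise NOT of 2 in two's complement)
print(a << 2)   # 8  (10 shifted left twice = 1000)
```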
###Code
a = 2 # equivalent binary 10
b = 3 # equivalent binary 11
print(a & b) # bitwise operation
print(bin(a&b)) # convert to binary
5 >> 1
###Output
_____no_output_____
###Markdown
0000 0101 -> 5. Shifting the bits by 1 to the right with zero padding gives 0000 0010 -> 2
###Code
5 << 1
###Output
_____no_output_____
###Markdown
0000 0101 -> 5. Shifting the bits by 1 to the left with zero padding gives 0000 1010 -> 10 Built-in Functions Python comes loaded with pre-built functions Conversion from one system to another A hexadecimal value can be entered by adding the prefix **0x** (Python evaluates it to its decimal value), and the built-in **hex( )** converts a decimal value back to hexadecimal. Similarly, an octal value can be entered with the prefix **0o**, and the built-in function **oct( )** converts a decimal value back to octal.
###Code
hex(170)
0xAA
oct(8)
0o10
###Output
_____no_output_____
###Markdown
**int( )** accepts two values when used for conversion, one is the value in a different number system and the other is its base. Note that input number in the different number system should be of string type.
###Code
print(int('010',8))
print(int('0xaa',16))
print(int('1010',2))
###Output
8
170
10
###Markdown
**int( )** can also be used to get only the integer value of a float number or can be used to convert a number which is of type string to integer format. Similarly, the function **str( )** can be used to convert the integer back to string format
###Code
print(int(7.7))
print(int('7'))
###Output
7
7
###Markdown
Also note that function **bin( )** is used for binary and **float( )** for decimal/float values. **chr( )** is used for converting ASCII to its alphabet equivalent, **ord( )** is used for the other way round.
###Code
chr(98)
ord('b')
###Output
_____no_output_____
###Markdown
Simplifying Arithmetic Operations **round( )** function rounds the input value to a specified number of places or to the nearest integer.
###Code
print(round(5.6231))
print(round(4.55892, 2))
###Output
6
4.56
###Markdown
**complex( )** is used to define a complex number and **abs( )** outputs the absolute value of the same.
###Code
c =complex('4+3j')
print(abs(c))
###Output
5.0
###Markdown
**divmod(x,y)** outputs the quotient and the remainder in a tuple(you will be learning about it in the further chapters) in the format (quotient, remainder).
###Code
divmod(9,2)
###Output
_____no_output_____
###Markdown
**isinstance( )** returns True, if the first argument is an instance of that class. Multiple classes can also be checked at once.
###Code
print(isinstance(1, int))
print(isinstance(1.0,int))
print(isinstance(1.0,(int,float)))
###Output
True
False
True
###Markdown
**cmp(x,y)** was a Python 2 built-in (removed in Python 3), so we define an equivalent below.|x ? y|Output||---|---|| x < y | -1 || x == y | 0 || x > y | 1 |
###Code
def cmp(a,b):
return (a>b)-(a<b)
print(cmp(1,2))
print(cmp(2,1))
print(cmp(2,2))
###Output
-1
1
0
###Markdown
**pow(x,y,z)** can be used to find the power $x^y$ also the mod of the resulting value with the third specified number can be found i.e. : ($x^y$ % z).
###Code
print(pow(3,3))
print(pow(3,3,5))
###Output
27
2
###Markdown
**range( )** function outputs the integers of the specified range. It can also be used to generate a series by specifying the difference between the two numbers within a particular range. In Python 3 the elements are returned as a `range` object, which can be iterated over or converted to a list (we will be discussing lists in detail later).
###Code
print(range(3))
print(range(2,9))
print(range(2,27,8))
for i in range(3):
print(i)
###Output
range(0, 3)
range(2, 9)
range(2, 27, 8)
0
1
2
###Markdown
Accepting User Inputs **input( )** accepts input and stores it as a string. Hence, if the user inputs a integer, the code should convert the string to an integer and then proceed.
###Code
abc = input("Type something here and it will be stored in variable abc \t")
print(abc, type(abc))
###Output
4 <class 'str'>
###Markdown
Wrapping **input( )** in **int( )**, as below, accepts only integer inputs: the entered string is converted to an integer.
###Code
abc1 = int(input("Only integer can be stored in variable abc \t"))
print(abc1,type(abc1))
###Output
2 <class 'int'>
|
.ipynb_checkpoints/05_pytorch_spacy_simple-checkpoint.ipynb | ###Markdown
some constants
###Code
import random

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchtext import data

# NOTE: TwitterPipeline (used below) is a project-local helper class; its import is not shown in this checkpoint.

SEED = 762
# IN_FILE = 'germeval2018.try.txt'
IN_FILE = 'germeval2018.training.txt'
IN_FILE_TEST = 'germeval2018.test.txt'
BATCH_SIZE = 16
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
###Output
_____no_output_____
###Markdown
define torchtext.Field instances
###Code
# define Fields
f_text = data.Field(sequential=True, use_vocab=True)
f_pos_tag = data.Field(sequential=True, use_vocab=False,
pad_token=1, unk_token=0)
f_lemma = data.Field(sequential=True, use_vocab=True)
f_label = data.LabelField(tensor_type=torch.FloatTensor)
fields = [('text', f_text), ('pos', f_pos_tag),
('lemma', f_lemma), ('label', f_label)]
# HINT: don't specify a tokenizer here
# assign single fields to map
###Output
_____no_output_____
###Markdown
Spacy
###Code
# create a spacy pipeline
# HINT: a simple one - maybe even without setting the model to use - is easier
pipe = TwitterPipeline()
# pre-process training data
full_examples = pipe.process_data(
IN_FILE, fields)[0]
full_ds = data.Dataset(full_examples, fields)
###Output
_____no_output_____
###Markdown
Splitting of data
###Code
# do the splitting with torchtext
trn_ds, val_ds = full_ds.split(
split_ratio=[0.8, 0.2], stratified=True, random_state=random.seed(SEED))
test_examples = pipe.process_data(
IN_FILE_TEST, fields)[0]
tst_ds = data.Dataset(test_examples, fields)
print(f'train len {len(trn_ds.examples)}')
print(f'val len {len(val_ds.examples)}')
print(f'test len {len(tst_ds.examples)}')
###Output
train len 16
val len 4
test len 3398
###Markdown
Vocabulary
###Code
# build vocab
# vec = torchtext.vocab.Vectors('embed_tweets_de_100D_fasttext',
# cache='/Users/michel/Downloads/')
# build vocab
# validation + test data should by no means influence the model, so build the vocab just on trn
#f_text.build_vocab(trn_ds, vectors=vec)
f_text.build_vocab(trn_ds, max_size=20000)
f_lemma.build_vocab(trn_ds)
f_label.build_vocab(trn_ds)
print(f'text vocab size {len(f_text.vocab)}')
print(f'lemma vocab size {len(f_lemma.vocab)}')
print(f'label vocab size {len(f_label.vocab)}')
###Output
text vocab size 247
lemma vocab size 227
label vocab size 2
###Markdown
Iterator for Training loop
###Code
# create training iterators
# split the datasets into mini-batches
trn_iter, val_iter, tst_iter = data.BucketIterator.splits((trn_ds, val_ds, tst_ds),
batch_size=BATCH_SIZE,
device=-1,
sort_key=lambda t: len(
t.text),
sort_within_batch=False,
repeat=False)
###Output
_____no_output_____
###Markdown
The model
###Code
class SimpleRNN(nn.Module):
def __init__(self, vocab_dim, emb_dim=100, hidden_dim=200):
super().__init__()
self.embedding = nn.Embedding(vocab_dim, emb_dim)
self.rnn = nn.RNN(emb_dim, hidden_dim)
self.fc = nn.Linear(hidden_dim, 1) # 1 is output dim
def forward(self, x):
# x type is Tensor[sentence len, batch size]. Internally pytorch does not use 1-hot
embedded = self.embedding(x)
# embedded type is Tensor[sentence len, batch size, emb dim]
output, hidden_state = self.rnn(embedded)
# output type is Tensor[sentence len, batch size, hidden dim]
# hidden_state type is Tensor[1, batch size, hidden dim]
return self.fc(hidden_state.squeeze(0))
###Output
_____no_output_____
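###Markdown
As a quick sanity check of the tensor shapes described in the comments above (a minimal sketch that is not part of the original pipeline; the small dimensions below are made up purely for illustration):```python
import torch

demo_model = SimpleRNN(vocab_dim=100, emb_dim=8, hidden_dim=16)  # hypothetical small sizes
dummy_batch = torch.randint(0, 100, (12, 4))  # [sentence len = 12, batch size = 4]
out = demo_model(dummy_batch)
print(out.shape)  # torch.Size([4, 1]): one logit per example in the batch
```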
###Markdown
Evaluation metric
###Code
def binary_accuracy(preds, y):
"""
return accuracy per batch as ratio of correct/all
"""
# round predictions to the closest integer
rounded_preds = torch.round(F.sigmoid(preds))
# convert into float for division
pred_is_correct = (rounded_preds == y).float()
acc = pred_is_correct.sum()/len(pred_is_correct)
return acc
###Output
_____no_output_____
###Markdown
Training loop
###Code
def train(model, iterator, optimizer, criterion, metric): # optimizer: update interface; criterion: loss function (the optimization criterion); metric: tracked for monitoring only
epoch_loss = 0
epoch_meter = 0
    model.train() # enable training mode (activates regularizers such as dropout, which help prevent overfitting)
for batch in iterator:
optimizer.zero_grad()
        y_hat = model(batch.text).squeeze(1) # y_hat = the model's prediction
loss = criterion(y_hat, batch.label)
meter = metric(y_hat, batch.label)
loss.backward()
        optimizer.step() # training step: update the parameters
        epoch_loss += loss.item() # .item() extracts the native Python scalar value of a tensor
epoch_meter += meter.item()
return epoch_loss / len(iterator), epoch_meter / len(iterator)
###Output
_____no_output_____
###Markdown
Evaluation (on validation data)
###Code
def evaluate(model, iterator, criterion, metric):
epoch_loss = 0
epoch_meter = 0
    model.eval() # switch to evaluation mode (disables training-only behaviour such as dropout), since we want the best deterministic predictions
with torch.no_grad():
for batch in iterator:
y_hat = model(batch.text).squeeze(1)
loss = criterion(y_hat, batch.label)
meter = metric(y_hat, batch.label)
epoch_loss += loss.item()
epoch_meter += meter.item()
return epoch_loss / len(iterator), epoch_meter / len(iterator)
EMB_SIZE = 100
HID_SIZE = 200
NUM_LIN = 3
NUM_EPOCH = 5
# RNN variant SETUP
model = SimpleRNN(len(f_text.vocab), EMB_SIZE, HID_SIZE)
optimizer = optim.SGD(model.parameters(), lr=1e-3) # SGD: stochastic gradient descent (an older choice, but not bad at all)
criterion = nn.BCEWithLogitsLoss()
###Output
_____no_output_____
###Markdown
Training
###Code
for epoch in range(NUM_EPOCH):
train_loss, train_acc = train(
model, trn_iter, optimizer, criterion, binary_accuracy)
valid_loss, valid_acc = evaluate(
model, val_iter, criterion, binary_accuracy)
print(f'EPOCH: {epoch:02} - TRN_LOSS: {train_loss:.3f} - TRN_ACC: {train_acc*100:.2f}% - VAL_LOSS: {valid_loss:.3f} - VAL_ACC: {valid_acc*100:.2f}%')
test_loss, test_acc = evaluate(model, tst_iter, criterion, binary_accuracy)
print(f'TEST_LOSS: {test_loss:.3f}, TEST_ACC: {test_acc*100:.2f}%')
###Output
/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torch/nn/functional.py:1006: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/torchtext/data/field.py:322: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
return Variable(arr, volatile=not train)
|
mse/XRD_trends.ipynb | ###Markdown
X-ray diffraction (XRD) spectra trends*Authors: Enze Chen (University of California, Berkeley)*This is an interactive notebook for playing around with some experimental parameters ($a$, $\lambda$, $T$, etc.) and observing the effect on the resulting XRD spectra. I find XRD to be a particularly beautiful subject and I couldn't find any similar visualizations online. I hope this interactive demo will help you learn the _qualitative trends_ associated with powder XRD spectra. PrerequisitesTo get the most out of this notebook, you should already have: - Knowledge of XRD fundamentals such as Bragg's law and intensity factors. Learning goalsBy the end of this notebook, you should be able to *assess* how changing the following experimental inputs affects the XRD spectra: - Crystal structure- Lattice constant- X-ray wavelength- Temperature- Strain- Crystallite size Interested in coding?If you were looking for a more thorough review, including a **scaffolded programming exercise** to generate the XRD spectra, please see [my other notebook](https://colab.research.google.com/github/enze-chen/learning_modules/blob/master/mse/XRD_plotting.ipynb) that will walk you through most of the details. How to use this notebookIf you are viewing this notebook on [Google Colaboratory](https://colab.research.google.com/github/enze-chen/learning_modules/blob/master/mse/XRD_trends.ipynb), then everything is already set up for you (hooray).If you want to **save a copy** of this notebook for yourself, go to "File > Save a copy in Drive" and you will find it in your Google Drive account under "My Drive > Colab Notebooks."If you want to run the notebook locally, you can download it and make sure all the Python modules in the [`requirements.txt`](https://github.com/enze-chen/learning_modules/blob/master/requirements.txt) file are installed before running it.To run this notebook, run all the cells (e.g. `Runtime > Run all` in the menu) and then adjust the sliders at the bottom.I **strongly recommend** just running the code and experimenting with the inputs *before* reading the code in great detail. --------------------------- A few important equations (very quick review)By far the most important equation is [not surprisingly] **Bragg's law**, given by $$n\lambda = 2d \sin(\theta)$$where $n$ is the order (typically $1$), $\lambda$ is the wavelength, $d$ is the interplanar spacing, and $\theta$ is the Bragg angle. Here we will solve for $\theta$ as follows:$$ \lambda = 2d \sin(\theta) \longrightarrow \theta = \sin^{-1} \left( \frac{\lambda}{2d} \right), \quad d = \frac{a}{\sqrt{h^2 + k^2 + l^2}} $$where $h,k,l$ are the miller indices of the diffracting plane and $a$ is the lattice constant. 
The above formula for $d$ assumes a cubic structure.Another important equation is for the **Intensity**, given by$$ I = |F|^2 \times P \times L \times m \times A \times T $$where* $F$ is the structure factor (we take the modulus before squaring because it might be a complex number).* $P$ is the polarization factor.* $L$ is the Lorentz factor.* $m$ is the multiplicity factor.* $A$ is the absorption factor.* $T$ is the temperature factor.Furthermore, recall that size effects can be included through the **Scherrer equation**, given by $$ t = \frac{K\lambda}{\beta \cos(\theta)} $$ where $t$ is the crystallite/grain thickness, $K \sim 0.9$ is a shape factor, and $\beta$ is the full width at half maximum of the peak in radians.For more information, please reference [*Elements of X-Ray Diffraction* (3rd)](https://www.pearson.com/us/higher-education/program/Cullity-Elements-of-X-Ray-Diffraction-3rd-Edition/PGM113710.html) by Cullity and Stock, which is a fantastic textbook.
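As a quick worked example of Bragg's law and the Scherrer equation (a minimal sketch; the Cu K-alpha wavelength, Ni-like lattice constant, and crystallite size below are chosen purely for illustration):```python
import numpy as np

wavelength = 0.154   # nm, roughly Cu K-alpha (illustrative)
a = 0.352            # nm, lattice constant close to Ni (illustrative)
h, k, l = 1, 1, 1    # the (111) plane

d = a / np.sqrt(h**2 + k**2 + l**2)       # interplanar spacing for a cubic cell
theta = np.arcsin(wavelength / (2 * d))   # Bragg angle for n = 1
print(round(d, 4), round(np.degrees(2 * theta), 2))   # d ~ 0.2032 nm, 2-theta ~ 44.5 degrees

t = 20                                           # nm, assumed crystallite size
beta = 0.9 * wavelength / (t * np.cos(theta))    # peak FWHM in radians (Scherrer)
print(round(np.degrees(beta), 3))                # ~ 0.43 degrees of broadening
```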
###Code
### Python module imports
# General modules
import itertools
# Scientific computing modules
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
%matplotlib inline
# Interactivity modules
from IPython.display import HTML, display
from ipywidgets import interact_manual, interactive_output, fixed, \
IntSlider, FloatSlider, FloatLogSlider, RadioButtons, \
Button, Checkbox, Layout, GridspecLayout
###Output
_____no_output_____
###Markdown
------------------------------------- Widget functionOur widget will call `plot_XRD()` each time we interact with it. This function calculates the structure factor and the intensities and then plots the spectra on an $\text{Intensity}$ vs. $2\theta$ plot. I've tried my best to keep the code simple and yet illustrative. Please see the comments in the code for more information.
###Code
def plot_XRD(a, wavelength, cell_type, thickness, T=0, label=True, K=0.94):
"""This function is called by the widget to perform the plotting based on inputs.
Args:
a (float): Lattice constant in nanometers.
wavelength (float): X-ray wavelength in nanometers.
cell_type (str): Crystal structure, can be FCC, BCC, or DC.
thickness (float): Crystallite size in nanometers.
T (int): Temperature in Kelvin. Default = 0.
K (float): Scherrer equation parameter. Default = 0.94 (cubic).
Returns:
None, but a pyplot is displayed.
"""
# Crystallographic planes
planes = [[1,0,0], [1,1,0], [1,1,1], [2,0,0], [2,1,0], [2,1,1], [2,2,0],\
[2,2,1], [3,0,0], [3,1,0], [3,1,1], [2,2,2], [3,2,0], [3,2,1]]
planes_str = [f'$({p[0]}{p[1]}{p[2]})$' for p in planes] # string labels
# Set the basis
basis = []
    if cell_type == 'FCC':
basis = np.array([[0,0,0], [0.5,0.5,0], [0.5,0,0.5], [0,0.5,0.5]])
    elif cell_type == 'BCC':
basis = np.array([[0,0,0], [0.5,0.5,0.5]])
    elif cell_type == 'DC':
basis = np.array([[0,0,0], [0.5,0.5,0], [0.5,0,0.5], [0,0.5,0.5],
[0.25,0.25,0.25], [0.75,0.75,0.25], \
[0.75,0.25,0.75], [0.25,0.75,0.75]])
else:
raise ValueError('Cell type not yet supported.')
# Convert planes to theta values (see equation above)
s_vals = np.array([np.linalg.norm(p) for p in planes])
    # a += 1e-5 * T # thermal expansion estimate; omit b/c a is already an indep. var.
theta = np.arcsin(np.divide(wavelength/2, np.divide(a, s_vals)))
two_theta = 2 * np.degrees(theta)
# Scherrer equation calculations
beta = np.degrees(K * wavelength / thickness * np.divide(1, np.cos(theta)))
sigma = beta / 2.355 # proportionality for Gaussian distribution
# Structure-Temperature factor. Must... resist... for loops...
s = np.sin(theta) / (10*wavelength)
S = 2.210 * np.exp(-58.727*s**2) + 2.134 * np.exp(-13.553*s**2) + \
1.689 * np.exp(-2.609*s**2) + 0.524 * np.exp(-0.339*s**2)
f = 28 - 41.78214 * np.multiply(s**2, S) # formula from Ch. 12 of De Graef
F = np.multiply(f, np.sum(np.exp(2 * np.pi * 1j * \
np.dot(np.array(planes), basis.T)), axis=1))
# Multiplicity factor
mult = [2**np.count_nonzero(p) * \
len(set(itertools.permutations(p))) for p in planes]
# Lorentz-Polarization factor
Lp = np.divide(1 + np.cos(2 * theta)**2,
np.multiply(np.sin(theta)**2, np.cos(theta)))
# Final intensity
I = np.multiply(np.absolute(F)**2, np.multiply(mult, Lp))
# Plotting
plt.rcParams.update({'figure.figsize':(10,5), 'font.size':22, 'axes.linewidth':2,
'mathtext.fontset':'cm'})
xmin, xmax = (20, 160)
x = np.linspace(xmin, xmax, int(10*(xmax-xmin)))
fig, ax = plt.subplots()
# Thermal effects. These functional dependencies ARE NOT REAL!!!
thermal_diffuse = 3e1 * T * np.cbrt(x) # background signal
sigma += (T + 5) / 2000 # peak broadening from vibrations
# Save all the curves, then take a max envelope
all_curves = []
for i in range(len(sigma)):
y = stats.norm.pdf(x, two_theta[i], sigma[i])
normed_curve = y / max(y) * I[i]
# Don't include the curves that aren't selected by the Structure factor
if max(normed_curve) > 1e1:
max_ind = normed_curve.argmax()
if label:
                ax.annotate(planes_str[i],
                            xy=(x[max_ind], normed_curve[max_ind] + thermal_diffuse[max_ind]))
all_curves.append(normed_curve)
final_curve = np.max(all_curves, axis=0) + thermal_diffuse
plt.plot(x, final_curve, c='C0', lw=4, alpha=0.7)
# Some fine-tuned settings for visual appeal
for side in ['top', 'right']:
ax.spines[side].set_visible(False)
ax.set_xlim(xmin, xmax)
ax.set_ylim(0, 1.05 * ax.get_ylim()[1])
ax.tick_params(left=False, labelleft=False, direction='in', length=10, width=2)
ax.set_xlabel(r'$2\theta$ (degree)')
ax.set_ylabel('Intensity (a.u.)')
plt.show()
# Now we create each slider individually for readability and customization.
a_widget = FloatSlider(value=0.352, min=0.31, max=0.4, step=0.001,
description='Lattice constant (nm)', readout_format='.3f',
style={'description_width':'150px'}, continuous_update=False,
layout=Layout(width='400px', height='30px'))
w_widget = FloatSlider(value=0.154, min=0.13, max=0.16, step=0.001,
description='X-ray wavelength (nm)', readout_format='.3f',
style={'description_width':'150px'}, continuous_update=False,
layout=Layout(width='400px', height='30px'))
t_widget = FloatLogSlider(value=10, base=10, min=0, max=3, step=0.1,
description='Crystallite thickness (nm)', readout_format='d',
style={'description_width':'150px'}, continuous_update=False,
layout=Layout(width='400px', height='35px'))
T_widget = IntSlider(value=298, min=0, max=1000, step=1,
description='Temperature (K)', readout_format='d',
style={'description_width':'150px'}, continuous_update=False,
layout=Layout(width='400px', height='35px'))
c_widget = RadioButtons(options=['FCC', 'BCC', 'DC'], description='Crystal structure',
style={'description_width':'150px'}, continuous_update=False,
layout=Layout(width='350px', height='60px'))
l_widget = Checkbox(value=False, description='Annotate peaks?')
g = GridspecLayout(n_rows=4, n_columns=2, height='160px', width='820px')
g[0, 0] = a_widget
g[1, 0] = w_widget
g[2, 0] = t_widget
g[3, 0] = T_widget
g[0:2, 1] = c_widget
g[2, 1] = l_widget
###Output
_____no_output_____
###Markdown
Set your parameters below and see how the XRD spectrum changes!
###Code
out = interactive_output(plot_XRD, {'a':a_widget, 'wavelength':w_widget, 'cell_type':c_widget,
'thickness':t_widget, 'T':T_widget, 'label':l_widget, 'K':fixed(0.94)})
display(g, out)
###Output
_____no_output_____
###Markdown
-------------------------------------- Discussion questions* Can you rationalize all the trends you see? * Describe all the ways **temperature** affects the XRD spectra.* How do we account for **strain** in our model? What differences might we observe between isotropic and anisotropic strain?* If you're interested in scientific computing, try to understand how the structure factor ($F$) is calculated with clever [NumPy](https://numpy.org/) tools. -------------------------------------- ConclusionI hope you found this notebook helpful in learning more about XRD and what affects a powder XRD spectra. Please don't hesitate to reach out if you have any questions or ideas to contribute. AcknowledgementsI thank Laura Armstrong, Nathan Bieberdorf, Han-Ming Hau, and Divya Ramakrishnan for user testing and helpful suggestions. I also thank [Prof. Andrew Minor](https://mse.berkeley.edu/people_new/minor/) for teaching MATSCI 204: Materials Characterization and my advisor [Prof. Mark Asta](https://mse.berkeley.edu/people_new/asta/) for his unwavering encouragement for my education-related pursuits. Interactivity is enabled with the [`ipywidgets`](https://ipywidgets.readthedocs.io/en/stable/) library. This project is generously hosted on [GitHub](https://github.com/enze-chen/learning_modules) and [Google Colaboratory](https://colab.research.google.com/github/enze-chen/learning_modules/blob/master/mse/XRD_trends.ipynb). ---------------------------------------- Assumptions I've taken great liberties with* For the Structure factor, I greatly oversimplified the construction of the atomic scattering factor ($f$) and selected some numbers from the data for Ni.* I also combined part of the temperature factor into the structure factor. * I combined the Lorentz and polarization factors, as is commonly done in the literature.* I ignored the absorption factor since it is more or less independent of $\theta$.* I used a $\sqrt[3]{\theta}$ term to approximate the thermal background's general shape. I don't know the true analytical dependence, if there even is one.* I used a Gaussian distribution to model each peak to capture crystallite size effects. Peaks in general are *not* Gaussian. Known issues* It doesn't have great safeguards against numerical errors, such as invalid `arcsin` arguments and `NaN`. Please be gentle. ❤* There's a weird rendering error where for large intensities the upper limit (e.g. `1e6`) appears on the y-axis. **:shrug:*** I left out "simple cubic" as one of the candidate structures because it results in too many numerical instabilities to correct for. It's also a boring spectra.
###Code
# This is a clever snippet that hides all of the code - unfortunately doesn't work on Google Colaboratory
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off the raw code."></form>''')
###Output
_____no_output_____ |
notebooks/development/sensitivity/03_08_18_sensitivity_analysis.ipynb | ###Markdown
Parameter Analysis
###Code
pd_outputs[(pd_outputs['radius']==4.5) &
(pd_outputs['mean filter']==1.0) &
(pd_outputs['quality']==4.5) &
(pd_outputs['linking max D']==10.0) &
(pd_outputs['gap closing max D']==10.0) &
(pd_outputs['max frame gap']==2.0) &
(pd_outputs['track displacement']==10.0)]
#Change in radius
counter = 0
dN = []
dMSD = []
for gapD in gap_closing_max_distance:
for gap in max_frame_gap:
for link in linking_max_distance:
for qua in quality:
for rad in radius:
for disp in track_displacement:
currentMSD = pd_outputs[(pd_outputs['radius']==rad) &
(pd_outputs['quality']==qua) &
(pd_outputs['linking max D']==link) &
(pd_outputs['gap closing max D']==gapD) &
(pd_outputs['max frame gap']==gap) &
                                                (pd_outputs['track displacement']==disp)]['MSD'].to_numpy()
dMSD.append((currentMSD[1] - currentMSD[0]))
currentN = pd_outputs[(pd_outputs['radius']==rad) &
(pd_outputs['quality']==qua) &
(pd_outputs['linking max D']==link) &
(pd_outputs['gap closing max D']==gapD) &
(pd_outputs['max frame gap']==gap) &
                                              (pd_outputs['track displacement']==disp)]['particles'].to_numpy()
                        dN.append(currentN[1] - currentN[0])
counter = counter + 1
np.asarray(dMSD)
print('Mean dMSD is {} +/- {}'.format(np.mean(dMSD), stats.sem(dMSD)))
index=['radius', 'quality', 'linking', 'gap D', 'f gap', 'disp', 'filt']
sensitivity = {'Mean': np.array([-0.0240501, -0.059174, 0.0066154, 0.01095, 0.018117, 0, -0.017437]),
'SEM': np.array([0.000815328, 0.00096908, 0.0005201, 0.0004963, 0.0007438, 0.000004, 0.0012713])}
df = pd.DataFrame(data=sensitivity, index=index)
df
width = 0.7
ra = 0.075
fig = plt.figure(figsize=(6, 6))
p1 = plt.bar(index, df['Mean'], width=width, yerr=df['SEM'], capsize=8, error_kw={'elinewidth':3, 'capthick':3})
plt.axhline(y=0, color='k')
plt.ylim(-ra, ra)
plt.savefig('parameter_sensitivity.png', bbox_inches='tight')
aws.upload_s3('parameter_sensitivity.png', '{}/parameter_sensitivity.png'.format(s_folder))
frames = 651
fps = 100.02
t = np.linspace(0, frames-1, frames)/fps
fig = plt.figure(figsize=(10, 10))
for counter in range(0, len(all_params)):
try:
geo_file = 'geomean_{}_{}.csv'.format(name.split('.')[0], counter)
aws.download_s3('{}/{}'.format(s_folder, geo_file), geo_file)
gmean1 = np.genfromtxt(geo_file)
os.remove(geo_file)
plt.plot(t, np.exp(gmean1), 'k', linewidth=2, alpha=0.05)
except:
params = all_params[counter]
print("Missing data {}".format(params))
plt.xlim(0, 1)
plt.ylim(0, 20)
plt.xlabel('Tau (s)', fontsize=25)
plt.ylabel(r'Mean Squared Displacement ($\mu$m$^2$/s)', fontsize=25)
plt.savefig('MSD_sweep.png', bbox_inches='tight')
aws.upload_s3('MSD_sweep.png', '{}/MSD_sweep.png'.format(s_folder))
###Output
_____no_output_____ |
nn_from_scratchV3.ipynb | ###Markdown
###Code
import jax
import jax.numpy as jnp
from jax import value_and_grad
from jax import grad
from jax import vmap
from jax import jit
jax.device_count()
jax.devices()
import jax
key = jax.random.PRNGKey(0)
from numpy import random
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
#dir(jnp.zeros(5).device())
jnp.zeros(5).device().device_kind
#@title
# from https://github.com/google/jax/blob/main/examples/datasets.py
import array
import gzip
import os
from os import path
import struct
import urllib.request
import numpy as np
_DATA = "/tmp/jax_example_data/"
def _download(url, filename):
"""Download a url to a file in the JAX data temp directory."""
if not path.exists(_DATA):
os.makedirs(_DATA)
out_file = path.join(_DATA, filename)
if not path.isfile(out_file):
urllib.request.urlretrieve(url, out_file)
print("downloaded {} to {}".format(url, _DATA))
def _partial_flatten(x):
"""Flatten all but the first dimension of an ndarray."""
return np.reshape(x, (x.shape[0], -1))
def _one_hot(x, k, dtype=np.float32):
"""Create a one-hot encoding of x of size k."""
return np.array(x[:, None] == np.arange(k), dtype)
def mnist_raw():
"""Download and parse the raw MNIST dataset."""
# CVDF mirror of http://yann.lecun.com/exdb/mnist/
base_url = "https://storage.googleapis.com/cvdf-datasets/mnist/"
def parse_labels(filename):
with gzip.open(filename, "rb") as fh:
_ = struct.unpack(">II", fh.read(8))
return np.array(array.array("B", fh.read()), dtype=np.uint8)
def parse_images(filename):
with gzip.open(filename, "rb") as fh:
_, num_data, rows, cols = struct.unpack(">IIII", fh.read(16))
return np.array(array.array("B", fh.read()),
dtype=np.uint8).reshape(num_data, rows, cols)
for filename in ["train-images-idx3-ubyte.gz", "train-labels-idx1-ubyte.gz",
"t10k-images-idx3-ubyte.gz", "t10k-labels-idx1-ubyte.gz"]:
_download(base_url + filename, filename)
train_images = parse_images(path.join(_DATA, "train-images-idx3-ubyte.gz"))
train_labels = parse_labels(path.join(_DATA, "train-labels-idx1-ubyte.gz"))
test_images = parse_images(path.join(_DATA, "t10k-images-idx3-ubyte.gz"))
test_labels = parse_labels(path.join(_DATA, "t10k-labels-idx1-ubyte.gz"))
return train_images, train_labels, test_images, test_labels
def mnist(permute_train=False):
"""Download, parse and process MNIST data to unit scale and one-hot labels."""
train_images, train_labels, test_images, test_labels = mnist_raw()
train_images = _partial_flatten(train_images) / np.float32(255.)
test_images = _partial_flatten(test_images) / np.float32(255.)
train_labels = _one_hot(train_labels, 10)
test_labels = _one_hot(test_labels, 10)
if permute_train:
perm = np.random.RandomState(0).permutation(train_images.shape[0])
train_images = train_images[perm]
train_labels = train_labels[perm]
return {'train_img': train_images, 'train_lab': train_labels, 'test_img': test_images, 'test_lab': test_labels}
mnist_dat = mnist()
mnist_dat['train_img'][0].shape
import matplotlib.pyplot as plt
def show_img(img):
plt.imshow(img.reshape(28,28))
plt.colorbar()
demo_idx = 33
demo_img = mnist_dat['train_img'][demo_idx]
show_img(demo_img)
print(jnp.argmax(mnist_dat['train_lab'][demo_idx]))
#jnp.argmax(model.forward_single(demo_img))
def model_random_predict(img):
return int(random.random()*10)
model_random_predict('test')
def evaluate(pred_func, max=2000, progress_func=lambda x: x):
correct = 0
incorrect = 0
count = 0
for train_img, train_lab in progress_func(zip(mnist_dat['test_img'][:max], mnist_dat['test_lab'][:max])):
pred = pred_func(train_img)
if jnp.argmax(pred) == jnp.argmax(train_lab):
correct += 1
else:
incorrect += 1
count += 1
final_stats = {'correct': correct, 'total': count, 'accuracy': correct/count}
#tqdm.write(str(final_stats))
return final_stats
evaluate(model_random_predict)
class NNModel:
def __init__(self, input_size, hidden_size, out_size, ce_loss=False, use_mom=False):
self.input_size = input_size
self.hidden_size = hidden_size
self.out_size = out_size
self.cross_entropy_loss = ce_loss
self.momentum = use_mom
# use normed initialization?
self.weights_l1 = jnp.array(random.normal(size=(input_size, hidden_size))) / jnp.sqrt(hidden_size)
self.w1_m = jnp.zeros_like(self.weights_l1)
self.weights_l2 = jnp.array(random.normal(size=(hidden_size, out_size))) / jnp.sqrt(out_size)
self.w2_m = jnp.zeros_like(self.weights_l2)
self.bias_l1 = jnp.array(random.normal(size=(hidden_size,)))
self.b1_m = jnp.zeros_like(self.bias_l1)
self.bias_l2 = jnp.array(random.normal(size=(out_size,)))
self.b2_m = jnp.zeros_like(self.bias_l2)
@jit
def forward_single(weights_l1, weights_l2, bias_l1, bias_l2, img):
# matrix mult, bias, relu activation
hidden = jnp.maximum(img @ weights_l1 + bias_l1, 0.0)
out_raw = hidden @ weights_l2 + bias_l2
# softmax to probability distribution
out_exp = jnp.e ** out_raw
out_probs = out_exp / out_exp.sum()
return out_probs
@jit
def loss(w1, w2, b1, b2, img, lab):
preds = forward_single(w1, w2, b1, b2, img)
#print(preds)
#print(lab)
if self.cross_entropy_loss:
idx = jnp.argmax(lab)
return -jnp.log(preds[idx])
else:
diffs = (preds-lab) ** 2
return diffs.mean()
self.forward_single = lambda img: forward_single(
self.weights_l1, self.weights_l2,
self.bias_l1, self.bias_l2, img)
self.grad_f = value_and_grad(loss, argnums=[0,1,2,3])
self.grad_func = lambda img, lab: self.grad_f(
self.weights_l1,
self.weights_l2,
self.bias_l1,
self.bias_l2,
img, lab)
self.batched_loss = vmap(loss, in_axes=(None, None, None, None, 0, 0))
self._grad_batched = jit(value_and_grad(
lambda w1, w2, b1, b2, imgs, labs: self.batched_loss(w1, w2, b1, b2, imgs, labs).sum(),
argnums=[0,1,2,3]
))
self.grad_batched = lambda imgs, labs: self._grad_batched(
self.weights_l1, self.weights_l2,
self.bias_l1, self.bias_l2, imgs, labs)
def update(self, imgs, labs, lr_a, lr_b):
#loss, grads = self.grad_func(imgs, labs)
loss, grads = self.grad_batched(imgs, labs)
w1_d, w2_d, b1_d, b2_d = grads
if self.momentum:
# update momentum
self.w1_m = lr_b * self.w1_m + w1_d * (1-lr_b)
self.w2_m = lr_b * self.w2_m + w2_d * (1-lr_b)
self.b1_m = lr_b * self.b1_m + b1_d * (1-lr_b)
self.b2_m = lr_b * self.b2_m + b2_d * (1-lr_b)
# update parameters
self.weights_l1 -= self.w1_m * lr_a
self.weights_l2 -= self.w2_m * lr_a
self.bias_l1 -= self.b1_m * lr_a
self.bias_l2 -= self.b2_m * lr_a
else:
# update parameters
self.weights_l1 -= w1_d * lr_a #self.w1_m * lr_b
self.weights_l2 -= w2_d * lr_a #self.w2_m * lr_b
self.bias_l1 -= b1_d * lr_a #self.b1_m * lr_b
self.bias_l2 -= b2_d * lr_a #self.b2_m * lr_b
return loss
def create_fresh_mnist_model(hidden_size=256, use_momentum=False, use_ce_loss=False):
return NNModel(
mnist_dat['train_img'][0].shape[0], hidden_size, mnist_dat['train_lab'][0].shape[0],
ce_loss=use_ce_loss, use_mom=use_momentum)
model = create_fresh_mnist_model()
loss_single, grd_single = model.grad_func(mnist_dat['train_img'][demo_idx], mnist_dat['train_lab'][demo_idx])
for x in grd_single:
print(x.shape)
print(x.std())
loss_single
loss_b, grd_b = model.grad_batched(mnist_dat['train_img'][demo_idx:demo_idx + 1], mnist_dat['train_lab'][demo_idx:demo_idx + 1])
loss_b
for grds, grdb in zip(grd_single, grd_b):
print((grds-grdb).sum())
len(mnist_dat['train_img'])
def test_train_performance(train_its, test_interval, test_its, batch_size=16, hidden_size=256, use_momentum=False, use_ce_loss=False, alpha=0.005, beta=0.9):
mod = create_fresh_mnist_model(hidden_size=hidden_size, use_momentum=use_momentum, use_ce_loss=use_ce_loss)
accs = []
for i in tqdm(range(train_its)):
lb, ub = i*batch_size, (i+1)*batch_size
#loss = mod.update(mnist_dat['train_img'][i], mnist_dat['train_lab'][i], alpha, beta)
loss = mod.update(mnist_dat['train_img'][lb:ub], mnist_dat['train_lab'][lb:ub], alpha, beta)
if i % test_interval == 0:
accs.append(evaluate(mod.forward_single, max=test_its))
return {
'stats': accs,
'train_its': train_its, 'test_interval': test_interval,
'test_its': test_its, 'hidden_size': hidden_size,
'use_momentum': use_momentum, 'use_ce_loss': use_ce_loss,
'alpha': alpha, 'beta': beta }
experiments = {}
#experiments['lr_0.005'] = test_train_performance(2500, 500, 200, alpha=0.005)
#experiments['lr_0.015'] = test_train_performance(2500, 500, 200, alpha=0.015)
experiments['lr_0.05_b1'] = test_train_performance(2500, 500, 200, alpha=0.05, batch_size=1)
experiments['lr_0.05'] = test_train_performance(2500, 500, 200, alpha=0.05, batch_size=16)
#experiments['lr_0.1'] = test_train_performance(2500, 500, 200, alpha=0.1)
experiments['lr_0.05_momentum'] = test_train_performance(2500, 500, 200, use_momentum=True, alpha=0.05, beta=0.9, batch_size=1)
experiments['lr_0.005_ce_loss'] = test_train_performance(2500, 500, 200, use_ce_loss=True, alpha=0.005)
experiments['lr_0.005_ce_loss_momentum'] = test_train_performance(2500, 500, 200, use_ce_loss=True, use_momentum=True, alpha=0.005, beta=0.9)
experiments['lr_0.002_ce_loss_momentum'] = test_train_performance(2500, 500, 200, use_ce_loss=True, use_momentum=True, alpha=0.002, beta=0.9)
experiments['lr_0.002_momentum'] = test_train_performance(2500, 500, 200, use_momentum=True, alpha=0.002, beta=0.9)
for name, data in experiments.items():
nums = [s['accuracy'] for s in data['stats']]
plt.plot(list(range(len(nums))), nums, label=name)
plt.legend()
experiments
def funcy(x):
return jnp.sum(jnp.cos(x)+1.5)**2
def mag(f):
return lambda x: f(x).mean()
fg = grad(funcy)
bad = grad(grad(funcy))
mega = grad(mag(grad(mag(grad(funcy)))))
inp = jnp.arange(0,10, 0.1, dtype=jnp.float32)
fg(inp)
mega(inp)
def proc_vec(va, vb):
return va.mean() + vb.mean()
b_vec = vmap(value_and_grad(proc_vec))
ina = jax.random.normal(key, (16, 1784))
inb = jax.random.normal(key, (16, 64))
b_vec(ina, inb)
#proc_vec(jnp.arange(10, dtype=jnp.float32))
###Output
_____no_output_____ |
exercise 8/Programming Exercise 8 - Anomaly Detection and Recommender Systems.ipynb | ###Markdown
Part 1 - Anomaly detection
###Code
# Read the data
FOLDER = 'data'
FILE = 'ex8data1.mat'
path = os.path.join(FOLDER, FILE)
data = scipy.io.loadmat(path)
X = pd.DataFrame(data=data['X'], # values
columns=['x' + str(i) for i in range(data['X'].shape[1])]) # column names
print("X is of dimensions {0}".format(X.shape))
ax = X.plot.scatter('x0', 'x1', figsize=(8, 8), s=None, c='blue', title = 'The first dataset')
ax.set_xlabel('Latency (ms)');
ax.set_ylabel('Throughput (mb/s)');
###Output
_____no_output_____
###Markdown
1.1 Gaussian distribution The Gaussian distribution is given by $$ p(x;\mu,\sigma^2) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{- \frac{(x-\mu)^2}{2 \sigma^2}} $$ where $\mu$ is the mean and $\sigma^2$ controls the variance
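For a quick numerical feel for this formula (a standalone check, independent of the `gaus` implementation in the next cell): the standard normal density ($\mu = 0$, $\sigma^2 = 1$) evaluated at $x = 0$ should equal $1/\sqrt{2\pi} \approx 0.399$.

```python
import numpy as np

mu, sigma_sq, x = 0.0, 1.0, 0.0
p = np.exp(-(x - mu)**2 / (2 * sigma_sq)) / np.sqrt(2 * np.pi * sigma_sq)
print(p)  # ~0.3989, i.e. 1/sqrt(2*pi)
```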
###Code
def gaus(x, mu, sigma_sq):
numerator = np.exp((-(x-mu)**2) / (2*sigma_sq))
denominator = (2*np.pi*sigma_sq)**0.5
return np.product(numerator / denominator, axis = 1)
###Output
_____no_output_____
###Markdown
1.2 Estimating parameters for a Gaussian You can estimate the parameters $(\mu_i , \sigma^2_i )$ of the $i$-th feature by using the following equations. To estimate the mean, you will use: $$ \mu_i = \frac{1}{m} \sum^{m}_{j=1} x_i^{(j)} $$ and for the variance you will use: $$ \sigma_i^2 = \frac{1}{m} \sum^{m}_{j=1} (x_i^{(j)} - \mu_i)^2 $$
###Code
class Anomaly(object):
def __init__(self):
pass
def set_x(self, x):
# x is expected to be a dataframe
self.x = x
def calc_mean(self):
return self.x.mean(axis = 0)
def calc_variance(self):
return self.x.var(axis = 0, ddof = 0)
def estimate_parameters(self):
mu = self.calc_mean()
var = self.calc_variance()
return mu, var
cls = Anomaly()
cls.set_x(X)
mu, sigma = cls.estimate_parameters()
print('Mean')
print(mu)
print()
print('Sigma')
print(sigma)
fig = plt.figure(figsize = (8,8))
# Plot original data
X.plot.scatter('x0', 'x1', s=30, c='red', marker = '.', title = 'The first dataset', ax = plt.gca())
# Plot contours
delta = .5
xx = np.arange(0,30,delta)
yy = np.arange(0,30,delta)
meshx, meshy = np.meshgrid(xx, yy)
points = np.vstack([ entry.ravel() for entry in (meshx, meshy) ]).T
df_points = pd.DataFrame(points, columns = ['x'+str(i) for i in range(points.shape[1])])
probs = gaus(df_points , mu, sigma)
levels = [10**exp for exp in range(-20,0,3)]
cp = plt.contour(meshx, meshy, probs.reshape(len(xx), len(yy)), levels, colors = 'blue')
# Show plot
plt.title('Contour Plot')
plt.xlabel('Latency (ms)')
plt.ylabel('Throughput (mb/s)')
plt.show()
###Output
_____no_output_____
###Markdown
1.3 Selecting the threshold, $\epsilon$ The $F_1$ score is computed using precision $(prec)$ and recall $(rec)$: $$ F_1 = \frac{2 \cdot prec \cdot rec}{prec + rec} $$ You compute precision and recall by: $$ prec = \frac{tp}{tp + fp} $$ $$ rec = \frac{tp}{tp + fn}$$ where * tp are the True Positive examples* fp are the False Positive examples* fn are the False Negative examples
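As a small worked example (numbers chosen purely for illustration): with $tp = 8$, $fp = 2$, $fn = 4$ we get $prec = 0.8$, $rec \approx 0.667$, and $F_1 \approx 0.727$. The same arithmetic in code:

```python
# Illustrative counts only
tp, fp, fn = 8, 2, 4
prec = tp / (tp + fp)                  # 0.8
rec = tp / (tp + fn)                   # ~0.667
f1 = (2 * prec * rec) / (prec + rec)   # ~0.727
print(round(prec, 3), round(rec, 3), round(f1, 3))
```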
###Code
def f1_score(tp, fp, fn):
    if tp == 0: return 0  # no true positives -> precision and recall are zero/undefined, so F1 is 0
prec = tp / (tp + fp)
rec = tp / (tp + fn)
f1 = (2*prec*rec) / (prec + rec)
return f1
Anomaly.f1_score = staticmethod(f1_score)  # attach as a static method so it does not expect a `self` argument
###Output
_____no_output_____
###Markdown
Load the cross validation set
###Code
xcv = pd.DataFrame(data = data['Xval'],
columns = ['x'+str(i) for i in range(data['Xval'].shape[1])])
ycv = pd.DataFrame(data = data['yval'],
columns = ['y'+str(i) for i in range(data['yval'].shape[1])])
###Output
_____no_output_____
###Markdown
For every point in the cross-validation set, calculate the probability p
###Code
probs_cv = gaus(xcv.loc[:,['x0','x1']], mu, sigma)
df_probs = pd.DataFrame(data = probs_cv, columns = ['pval'])
df_probs['yval'] = ycv.y0
df_probs.head()
###Output
_____no_output_____
###Markdown
Define function to find optimal epsilon
###Code
def find_opt_epsilon(df, num_split):
# Return optimal epsilon and corresponding f_1 score
# Create list of epsilon values to test
min_eps = df['pval'].min()
max_eps = df['pval'].max()
eps_list = np.linspace(start = min_eps,
stop = max_eps,
num = num_split)
# Initialize optimal values
opt_eps = min_eps
opt_f1 = 0
# Travel the list and update the optimal values accordingly
for eps in eps_list:
df['p<eps'] = df.loc[:,'pval'] < eps
tp = df[(df['yval'] == 1) & df['p<eps']].shape[0]
fp = df[(df['yval'] == 0) & df['p<eps']].shape[0]
fn = df[(df['yval'] == 1) & ~df['p<eps']].shape[0]
f1_cur = f1_score(tp, fp, fn)
if f1_cur > opt_f1:
opt_f1 = f1_cur
opt_eps = eps
return opt_eps, opt_f1
best_eps, best_f1 = find_opt_epsilon(df_probs.copy(), 1000)
print('The optimal epsilon value is {0}, \nwith f_1 score {1}'.format(best_eps, round(best_f1,5)))
###Output
The optimal epsilon value is 8.999852631901394e-05,
with f_1 score 0.875
###Markdown
Circle the anomalies in the plot
###Code
fig = plt.figure(figsize = (8,8))
# Plot original data
X.plot.scatter('x0', 'x1', s=30, c='red', marker = '.', title = 'The first dataset', ax = plt.gca())
# Plot contours
delta = .5
xx = np.arange(0,30,delta)
yy = np.arange(0,30,delta)
meshx, meshy = np.meshgrid(xx, yy)
points = np.vstack([ entry.ravel() for entry in (meshx, meshy) ]).T
df_points = pd.DataFrame(points, columns = ['x'+str(i) for i in range(points.shape[1])])
probs = gaus(df_points , mu, sigma)
levels = [10**exp for exp in range(-20,0,3)]
cp = plt.contour(meshx,
meshy,
probs.reshape(len(xx), len(yy)),
levels,
colors = 'blue')
# Plot anomalies
probs_orig = gaus(X, mu, sigma)
anomalies = X[probs_orig < best_eps]
print(anomalies['x0'].dtype)
an = plt.scatter(anomalies['x0'],
anomalies['x1'],
s = 100,
facecolors = 'none',
edgecolors = 'black',
linewidth=1.5)
# Show plot
plt.title('Contour Plot with highlighted anomalies')
plt.xlabel('Latency (ms)')
plt.ylabel('Throughput (mb/s)')
plt.show()
###Output
float64
###Markdown
1.4 High dimensional dataset Load high dimensional data
###Code
# Read the data
FILE = 'ex8data2.mat'
path = os.path.join(FOLDER, FILE)
data = scipy.io.loadmat(path)
X_high_dim = pd.DataFrame(data=data['X'], # values
columns=['x' + str(i) for i in range(data['X'].shape[1])]) # column names
xcv_high_dim = pd.DataFrame(data = data['Xval'],
columns = ['x'+str(i) for i in range(data['Xval'].shape[1])])
ycv_high_dim = pd.DataFrame(data = data['yval'],
columns = ['y'+str(i) for i in range(data['yval'].shape[1])])
###Output
_____no_output_____
###Markdown
Estimate parameters
###Code
cls = Anomaly()
cls.set_x(X_high_dim)
mu, sigma = cls.estimate_parameters()
###Output
_____no_output_____
###Markdown
Calculate probability p
###Code
probs_cv = gaus(xcv_high_dim.copy(), mu, sigma)
df_probs = pd.DataFrame(data = probs_cv, columns = ['pval'])
df_probs['yval'] = ycv_high_dim.y0
df_probs.head()
###Output
_____no_output_____
###Markdown
Estimate optimal epsilon and find anomalies
###Code
best_eps, best_f1 = find_opt_epsilon(df_probs.copy(), 1000)
print('The optimal epsilon value is {0}, \nwith f_1 score {1}'.format(best_eps, round(best_f1,5)))
probs_orig_high_dimension = gaus(X_high_dim, mu, sigma)
anomalies = X_high_dim[probs_orig_high_dimension < best_eps]
print('There are {0} anomalies in the dataset'.format(anomalies.shape[0]))
###Output
There are 117 anomalies in the dataset
###Markdown
Part 2 - Recommender Systems
###Code
# Read the data
FILE = 'ex8_movies.mat'
path = os.path.join(FOLDER, FILE)
data = scipy.io.loadmat(path)
r_matrix = pd.DataFrame(data = data['R'],
columns = ['user'+str(i) for i in range(data['R'].shape[1])])
y_matrix = pd.DataFrame(data = data['Y'],
columns = ['user'+str(i) for i in range(data['Y'].shape[1])])
num_movies, num_users = r_matrix.shape
###Output
_____no_output_____
###Markdown
2.1 Movie ratings dataset
###Code
# Calculate the mean rating of the first movie (exclude the 0 ratings)
first_movie_ratings = y_matrix.loc[0,:]
first_movie_ratings[first_movie_ratings != 0].mean()
# Read the data
FILE = 'ex8_movieParams.mat'
path = os.path.join(FOLDER, FILE)
data = scipy.io.loadmat(path)
X = pd.DataFrame(data = data['X'],
columns = ['feature'+str(i) for i in range(data['X'].shape[1])])
theta = pd.DataFrame(data = data['Theta'],
columns = ['feature'+str(i) for i in range(data['Theta'].shape[1])])
###Output
_____no_output_____
###Markdown
2.2 Collaborative filtering learning algorithm The collaborative filtering algorithm in the setting of movie recommendations considers a set of $n$-dimensional parameter vectors $x^{(1)},...,x^{(n_m)}$ and $\theta^{(1)},...,\theta^{(n_u)}$ where the model predicts the rating for movie $i$ by user $j$ as $y^{(i,j)} = (\theta^{(j)})^T x^{(i)}$ 2.2.1 Collaborative filtering cost function The collaborative filtering cost function (without regularization) is given by $$ J(x^{(1)},...,x^{(n_m)},\theta^{(1)},...,\theta^{(n_u)})=\frac{1}{2} \sum_{(i,j):r(i,j)=1} ((\theta^{(j)})^T x^{(i)} - y^{(i,j)})^2 $$
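To make the prediction rule concrete, here is a tiny sketch with made-up 3-dimensional vectors (not taken from the dataset): the predicted rating is just the dot product of a movie's feature vector with a user's parameter vector.

```python
import numpy as np

x_i = np.array([0.9, 0.1, 0.4])      # hypothetical features of movie i
theta_j = np.array([4.0, 0.5, 1.0])  # hypothetical parameters of user j
y_ij = theta_j @ x_i                 # y^(i,j) = (theta^(j))^T x^(i)
print(y_ij)                          # 4.05
```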
###Code
def flat_param(x_df, theta_df):
# Flatten parameters to pass them to minimizer
return np.concatenate([x_df.values.flatten(),theta_df.values.flatten()])
def reshape_param(flat_params, n_m, n_u, n_f):
# Return parameters to their original shapes
x_vals = flat_params[:int(n_m*n_f)].reshape((n_m,n_f))
x_df = pd.DataFrame(data = x_vals,
columns = ['feature'+str(i) for i in range(x_vals.shape[1])])
theta_vals = flat_params[int(n_m*n_f):].reshape((n_u,n_f))
theta_df = pd.DataFrame(data = theta_vals,
columns = ['feature'+str(i) for i in range(theta_vals.shape[1])])
return x_df, theta_df
def cost_J(params, y, num_movies, num_users, num_features):
# Unflatten the parameters
x, theta = reshape_param(params, num_movies, num_users, num_features)
# Return the value of cost function J
pred = x.dot(theta.T)
# Only movies that have been rated by the user are taken into account
cost = (pred - y[y > 0].values)**2
return cost.sum().sum() / 2
# Test the function
test_users = 4
test_movies = 5
test_features = 3
X_test = X.iloc[:test_movies, :test_features]
theta_test = theta.iloc[:test_users, :test_features]
y_test = y_matrix.iloc[:test_movies, :test_users]
print('X_test shape: {0}'.format(X_test.shape))
print('theta_test shape: {0}'.format(theta_test.shape))
print('y_test shape: {0}'.format(y_test.shape))
print()
params = flat_param(X_test, theta_test)
print('The cost function for the test data set is: {0}'.format(cost_J(params,
y_test,
test_movies,
test_users,
test_features
))) # expected output: 22.22
###Output
X_test shape: (5, 3)
theta_test shape: (4, 3)
y_test shape: (5, 4)
The cost function for the test data set is: 22.22460372568567
###Markdown
2.2.2 Collaborative filtering gradient The gradients of the cost function are given by: $$ \frac{\partial J}{\partial x^{(i)}_{k}} = \sum_{j:r(i,j)=1} ((\theta^{(j)})^T x^{(i)} - y^{(i,j)}) \theta^{(j)}_k $$ $$ \frac{\partial J}{\partial \theta^{(j)}_{k}} = \sum_{i:r(i,j)=1} ((\theta^{(j)})^T x^{(i)} - y^{(i,j)}) x^{(i)}_k $$
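A common way to sanity-check gradient code like the functions in the next cell is a finite-difference comparison. The sketch below assumes the `grad` and `cost_J` helpers plus the `params`, `y_test`, `test_movies`, `test_users`, and `test_features` objects defined in this notebook, so run it only after the next cell has executed:

```python
import numpy as np

def numerical_grad_check(params, y, n_m, n_u, n_f, eps=1e-4, n_checks=5):
    """Compare a few analytic gradient entries against central finite differences."""
    analytic = grad(params, y, n_m, n_u, n_f)
    for idx in np.random.choice(len(params), n_checks, replace=False):
        p_plus, p_minus = params.copy(), params.copy()
        p_plus[idx] += eps
        p_minus[idx] -= eps
        numeric = (cost_J(p_plus, y, n_m, n_u, n_f) -
                   cost_J(p_minus, y, n_m, n_u, n_f)) / (2 * eps)
        print(idx, analytic[idx], numeric)  # the two values should be close

# numerical_grad_check(params, y_test, test_movies, test_users, test_features)
```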
###Code
def grad_x(params, y, num_movies, num_users, num_features):
# Unflatten the parameters
x, theta = reshape_param(params, num_movies, num_users, num_features)
# Return the value of cost function J
pred = x.dot(theta.T)
# Only movies that have been rated by the user are taken into account
cost = pred - y[y > 0].values
# Fill the NaNs with 0s so you can use the .dot() method
grad = cost.fillna(0).dot(theta)
return grad
def grad_theta(params, y, num_movies, num_users, num_features):
# Unflatten the parameters
x, theta = reshape_param(params, num_movies, num_users, num_features)
# Return the value of cost function J
pred = x.dot(theta.T)
# Only movies that have been rated by the user are taken into account
cost = (pred - y[y > 0].values)
# Fill the NaNs with 0s so you can use the .dot() method
grad = cost.fillna(0).T.dot(x)
return grad
def grad(params, y, num_movies, num_users, num_features):
gradient_x = grad_x(params, y, num_movies, num_users, num_features)
gradient_theta = grad_theta(params, y, num_movies, num_users, num_features)
return flat_param(gradient_x, gradient_theta)
###Output
_____no_output_____
###Markdown
2.2.3 Regularized cost function The cost function for collaborative filtering with regularization is given by $$ J(x^{(1)},...,x^{(n_m)},\theta^{(1)},...,\theta^{(n_u)}) = \frac{1}{2} \sum_{(i,j):r(i,j)=1} ((\theta^{(j)})^T x^{(i)} - y^{(i,j)})^2 + \left ( \frac{\lambda}{2} \sum_{j=1}^{n_u} \sum_{k=1}^{n} (\theta_k^{(j)})^2 \right ) + \left ( \frac{\lambda}{2} \sum_{i=1}^{n_m} \sum_{k=1}^{n} (x_k^{(i)})^2 \right ) $$
###Code
def regularized_cost_J(params, y, num_movies, num_users, num_features, l):
# Unflatten the parameters
x, theta = reshape_param(params, num_movies, num_users, num_features)
# Use the unregularized function and then add the regularizer terms
cost_tmp = cost_J(params, y, num_movies, num_users, num_features)
theta_reg = 0.5 * l * (theta**2).sum().sum()
x_reg = 0.5 * l * (x**2).sum().sum()
return cost_tmp + theta_reg + x_reg
regularizer = 1.5
params = flat_param(X_test, theta_test)
print('The regularized cost function for the test data set is: {0}'.format(regularized_cost_J(params,
y_test,
test_movies,
test_users,
test_features,
regularizer
))) # expected output: 31.34
###Output
The regularized cost function for the test data set is: 31.344056244274213
###Markdown
2.2.4 Regularized gradient The gradients for the regularized cost function are: $$ \frac{\partial J}{\partial x^{(i)}_{k}} = \sum_{j:r(i,j)=1} ((\theta^{(j)})^T x^{(i)} - y^{(i,j)}) \theta^{(j)}_k + \lambda x_k^{(i)} $$ $$ \frac{\partial J}{\partial \theta^{(j)}_{k}} = \sum_{i:r(i,j)=1} ((\theta^{(j)})^T x^{(i)} - y^{(i,j)}) x^{(i)}_k + \lambda \theta_k^{(j)} $$
###Code
def reg_grad_x(params, y, num_movies, num_users, num_features, l):
# Unflatten the parameters
x, theta = reshape_param(params, num_movies, num_users, num_features)
# Use the unregularized gradient and then add the regularization terms
grad_tmp = grad_x(params, y, num_movies, num_users, num_features)
reg_grad = grad_tmp + l*x
return reg_grad
def reg_grad_theta(params, y, num_movies, num_users, num_features, l):
# Unflatten the parameters
x, theta = reshape_param(params, num_movies, num_users, num_features)
# Use the unregularized gradient and then add the regularization terms
grad_tmp = grad_theta(params, y, num_movies, num_users, num_features)
reg_grad = grad_tmp + l*theta
return reg_grad
def reg_grad(params, y, num_movies, num_users, num_features, l):
reg_gradient_x = reg_grad_x(params, y, num_movies, num_users, num_features, l)
reg_gradient_theta = reg_grad_theta(params, y, num_movies, num_users, num_features, l)
return flat_param(reg_gradient_x, reg_gradient_theta)
###Output
_____no_output_____
###Markdown
2.3 Learning movie recommendations
###Code
# Load the movies file
FILE = 'movie_ids.txt'
path = os.path.join(FOLDER, FILE)
data = pd.read_csv(path,
header=None,
sep = '|',
encoding='latin-1')
data.columns = ['Title']
data.Title = data.Title.apply(lambda x: x.split(' ',1)[1])
print('There are {0} movies in our list'.format(data.shape[0]))
data.head()
###Output
_____no_output_____
###Markdown
Now add some user defined additional movie ratings
###Code
# Initialize ratings
my_ratings = np.zeros(data.shape)
# Check the file movie_idx.txt for id of each movie in our dataset
# For example, Toy Story (1995) has ID 1, so to rate it "4", you can set
my_ratings[0] = 4
# Or suppose did not enjoy Silence of the Lambs (1991), you can set
my_ratings[97] = 2
# A few movies have been selected and the corresponding ratings are as follows:
my_ratings[6] = 3
my_ratings[11]= 5
my_ratings[53] = 4
my_ratings[63]= 5
my_ratings[65]= 3
my_ratings[68] = 5
my_ratings[182] = 4
my_ratings[225] = 5
my_ratings[354]= 5
for index in np.nonzero(my_ratings)[0]:
print('User rated movie {0} with {1}'.format(data.loc[index].item(),
my_ratings[index].item()))
###Output
User rated movie Toy Story (1995) with 4.0
User rated movie Twelve Monkeys (1995) with 3.0
User rated movie Usual Suspects, The (1995) with 5.0
User rated movie Outbreak (1995) with 4.0
User rated movie Shawshank Redemption, The (1994) with 5.0
User rated movie While You Were Sleeping (1995) with 3.0
User rated movie Forrest Gump (1994) with 5.0
User rated movie Silence of the Lambs, The (1991) with 2.0
User rated movie Alien (1979) with 4.0
User rated movie Die Hard 2 (1990) with 5.0
User rated movie Sphere (1998) with 5.0
###Markdown
2.3.1 Recommendations
###Code
# Add user's ratings to the data matrix
y_matrix.loc[:,'new_user'] = my_ratings.astype(int)
r_matrix.loc[:,'new_user'] = np.where(my_ratings > 0, 1, 0)
# Normalize Ratings
mean_rating_per_row = y_matrix.replace(0, np.nan).mean(axis = 1)
norm_y_matrix = y_matrix.replace(0, np.nan).sub(mean_rating_per_row, axis = 0).fillna(0)
num_users = y_matrix.shape[1]
num_movies = y_matrix.shape[0]
num_features = 10
# Initialize dataframes for the experiments
X_exp = pd.DataFrame(data = np.random.randn(num_movies, num_features),
columns = ['feature'+str(i) for i in range(num_features)])
theta_exp = pd.DataFrame(data = np.random.randn(num_users, num_features),
columns = ['feature'+str(i) for i in range(num_features)])
# Create the flattened parameter structure
params = flat_param(X_exp, theta_exp)
# Set regularization parameter to 10
l = 10
# Regularization parameter of 10 is used (as used in the homework assignment)
mylambda = 10.
# Training the actual model with fmin_cg
result = opt.fmin_cg(regularized_cost_J,
x0=params,
fprime=reg_grad,
args=(y_matrix, num_movies, num_users, num_features, mylambda),
maxiter=50,
disp=True,
full_output=True)
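# Note: the optimization above fits the raw y_matrix; norm_y_matrix is computed
# earlier but not passed in, which is why the row means are added back to the
# predictions below and some predicted ratings end up above 5.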
# Get the optimal X and theta matrices
X_res, theta_res = reshape_param(result[0], num_movies, num_users, num_features)
# Make predictions
preds = X_res.dot(theta_res.T)
# Get the 'new user' predictions
preds_user = preds.iloc[:,-1] + mean_rating_per_row
top_movies = 10
print('Top recommendations for you:')
for i in range(top_movies):
print('Predicting rating {0} for movie {1}'.format(round(preds_user.sort_values(ascending = False).iloc[i],2) ,
data.loc[preds_user.sort_values(ascending = False).index[i]].item()))
print('\nOriginal ratings provided:')
for index in np.nonzero(my_ratings)[0]:
print('Rated {0} for {1}'.format(my_ratings[index].item(),
data.loc[index].item()))
###Output
Top recommendations for you:
Predicting rating 8.5 for movie Titanic (1997)
Predicting rating 8.46 for movie Star Wars (1977)
Predicting rating 8.29 for movie Shawshank Redemption, The (1994)
Predicting rating 8.23 for movie Schindler's List (1993)
Predicting rating 8.11 for movie Usual Suspects, The (1995)
Predicting rating 8.1 for movie Raiders of the Lost Ark (1981)
Predicting rating 8.09 for movie Good Will Hunting (1997)
Predicting rating 7.98 for movie Braveheart (1995)
Predicting rating 7.96 for movie Empire Strikes Back, The (1980)
Predicting rating 7.96 for movie Godfather, The (1972)
Original ratings provided:
Rated 4.0 for Toy Story (1995)
Rated 3.0 for Twelve Monkeys (1995)
Rated 5.0 for Usual Suspects, The (1995)
Rated 4.0 for Outbreak (1995)
Rated 5.0 for Shawshank Redemption, The (1994)
Rated 3.0 for While You Were Sleeping (1995)
Rated 5.0 for Forrest Gump (1994)
Rated 2.0 for Silence of the Lambs, The (1991)
Rated 4.0 for Alien (1979)
Rated 5.0 for Die Hard 2 (1990)
Rated 5.0 for Sphere (1998)
|
Generative Adversarial Networks (GANs) Specialization/evaluation_of_GANs/C2W1_Assignment.ipynb | ###Markdown
Evaluating GANs GoalsIn this notebook, you're going to gain a better understanding of some of the challenges that come with evaluating GANs and a response you can take to alleviate some of them called Fréchet Inception Distance (FID). Learning Objectives1. Understand the challenges associated with evaluating GANs.2. Write code to evaluate the Fréchet Inception Distance. Challenges With Evaluating GANs Loss is Uninformative of PerformanceOne aspect that makes evaluating GANs challenging is that the loss tells us little about their performance. Unlike with classifiers, where a low loss on a test set indicates superior performance, a low loss for the generator or discriminator suggests that learning has stopped. No Clear Non-human MetricIf you define the goal of a GAN as "generating images which look real to people" then it's technically possible to measure this directly: [you can ask people to act as a discriminator](https://arxiv.org/abs/1904.01121). However, this takes significant time and money so ideally you can use a proxy for this. There is also no "perfect" discriminator that can differentiate reals from fakes - if there were, a lot of machine learning tasks would be solved ;)In this notebook, you will implement Fréchet Inception Distance, one method which aims to solve these issues. Getting StartedFor this notebook, you will again be using [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html). You will start by loading a pre-trained generator which has been trained on CelebA. Here, you will import some useful libraries and packages. You will also be provided with the generator and noise code from earlier assignments.
###Code
import torch
import numpy as np
from torch import nn
from tqdm.auto import tqdm
from torchvision import transforms
from torchvision.datasets import CelebA
from torchvision.utils import make_grid
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
torch.manual_seed(0) # Set for our testing purposes, please do not change!
class Generator(nn.Module):
'''
Generator Class
Values:
z_dim: the dimension of the noise vector, a scalar
im_chan: the number of channels in the images, fitted for the dataset used, a scalar
(CelebA is rgb, so 3 is your default)
hidden_dim: the inner dimension, a scalar
'''
def __init__(self, z_dim=10, im_chan=3, hidden_dim=64):
super(Generator, self).__init__()
self.z_dim = z_dim
# Build the neural network
self.gen = nn.Sequential(
self.make_gen_block(z_dim, hidden_dim * 8),
self.make_gen_block(hidden_dim * 8, hidden_dim * 4),
self.make_gen_block(hidden_dim * 4, hidden_dim * 2),
self.make_gen_block(hidden_dim * 2, hidden_dim),
self.make_gen_block(hidden_dim, im_chan, kernel_size=4, final_layer=True),
)
def make_gen_block(self, input_channels, output_channels, kernel_size=3, stride=2, final_layer=False):
'''
Function to return a sequence of operations corresponding to a generator block of DCGAN;
a transposed convolution, a batchnorm (except in the final layer), and an activation.
Parameters:
input_channels: how many channels the input feature representation has
output_channels: how many channels the output feature representation should have
kernel_size: the size of each convolutional filter, equivalent to (kernel_size, kernel_size)
stride: the stride of the convolution
final_layer: a boolean, true if it is the final layer and false otherwise
(affects activation and batchnorm)
'''
if not final_layer:
return nn.Sequential(
nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
nn.BatchNorm2d(output_channels),
nn.ReLU(inplace=True),
)
else:
return nn.Sequential(
nn.ConvTranspose2d(input_channels, output_channels, kernel_size, stride),
nn.Tanh(),
)
def forward(self, noise):
'''
Function for completing a forward pass of the generator: Given a noise tensor,
returns generated images.
Parameters:
noise: a noise tensor with dimensions (n_samples, z_dim)
'''
x = noise.view(len(noise), self.z_dim, 1, 1)
return self.gen(x)
def get_noise(n_samples, z_dim, device='cpu'):
'''
Function for creating noise vectors: Given the dimensions (n_samples, z_dim)
creates a tensor of that shape filled with random numbers from the normal distribution.
Parameters:
n_samples: the number of samples to generate, a scalar
z_dim: the dimension of the noise vector, a scalar
device: the device type
'''
return torch.randn(n_samples, z_dim, device=device)
###Output
_____no_output_____
###Markdown
Loading the Pre-trained ModelNow, you can set the arguments for the model and load the dataset: * z_dim: the dimension of the noise vector * image_size: the image size of the input to Inception (more details in the following section) * device: the device type
###Code
z_dim = 64
image_size = 299
device = 'cuda'
transform = transforms.Compose([
transforms.Resize(image_size),
transforms.CenterCrop(image_size),
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
in_coursera = True # Set this to false if you're running this outside Coursera
if in_coursera:
import numpy as np
data = torch.Tensor(np.load('fid_images_tensor.npz', allow_pickle=True)['arr_0'])
dataset = torch.utils.data.TensorDataset(data, data)
else:
dataset = CelebA(".", download=True, transform=transform)
###Output
_____no_output_____
###Markdown
Then, you can load and initialize the model with weights from a pre-trained model. This allows you to use the pre-trained model as if you trained it yourself.
###Code
gen = Generator(z_dim).to(device)
gen.load_state_dict(torch.load(f"pretrained_celeba.pth", map_location=torch.device(device))["gen"])
gen = gen.eval()
###Output
_____no_output_____
###Markdown
Inception-v3 NetworkInception-V3 is a neural network trained on [ImageNet](http://www.image-net.org/) to classify objects. You may recall from the lectures that ImageNet has over 1 million images to train on. As a result, Inception-V3 does a good job detecting features and classifying images. Here, you will load Inception-V3 as `inception_model`.<!-- In the past, people would use a pretrained Inception network to identify the classes of the objects generated by a GAN and measure how similar the distribution of classes generated was to the true image (using KL divergence). This is known as inception score. However, there are many problems with this metric. Barratt and Sharma's 2018 "[A Note on the Inception Score](https://arxiv.org/pdf/1801.01973.pdf)" highlights many issues with this approach. Among them, they highlight its instability, its exploitability, and the widespread use of Inception Score on models not trained on ImageNet. -->
###Code
from torchvision.models import inception_v3
inception_model = inception_v3(pretrained=False)
inception_model.load_state_dict(torch.load("inception_v3_google-1a9a5a14.pth"))
inception_model.to(device)
inception_model = inception_model.eval() # Evaluation mode
###Output
_____no_output_____
###Markdown
Fréchet Inception DistanceFréchet Inception Distance (FID) was proposed as an improvement over Inception Score and still uses the Inception-v3 network as part of its calculation. However, instead of using the classification labels of the Inception-v3 network, it uses the output from an earlier layer—the layer right before the labels. This is often called the feature layer. Research has shown that deep convolutional neural networks trained on difficult tasks, like classifying many classes, build increasingly sophisticated representations of features going deeper into the network. For example, the first few layers may learn to detect different kinds of edges and curves, while the later layers may have neurons that fire in response to human faces.To get the feature layer of a convolutional neural network, you can replace the final fully connected layer with an identity layer that simply returns whatever input it received, unchanged. This essentially removes the final classification layer and leaves you with the intermediate outputs from the layer before.Optional hint for inception_model.fc1. You may find [torch.nn.Identity()](https://pytorch.org/docs/master/generated/torch.nn.Identity.html) helpful.
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL: inception_model.fc
# You want to replace the final fully-connected (fc) layer
# with an identity function layer to cut off the classification
# layer and get a feature extractor
#### START CODE HERE ####
inception_model.fc = nn.Identity()
#### END CODE HERE ####
# UNIT TEST
test_identity_noise = torch.randn(100, 100)
assert torch.equal(test_identity_noise, inception_model.fc(test_identity_noise))
print("Success!")
###Output
Success!
###Markdown
Fréchet Distance Fréchet distance uses the values from the feature layer for two sets of images, say reals and fakes, and compares different statistical properties between them to see how different they are. Specifically, Fréchet distance finds the shortest distance needed to walk along two lines, or two curves, simultaneously. The most intuitive explanation of Fréchet distance is as the "minimum leash distance" between two points. Imagine yourself and your dog, both moving along two curves. If you walked on one curve and your dog, attached to a leash, walked on the other at the same pace, what is the least amount of leash that you can give your dog so that you never need to give them more slack during your walk? Using this, the Fréchet distance measures the similarity between these two curves.The basic idea is similar for calculating the Fréchet distance between two probability distributions. You'll start by seeing what this looks like in one-dimensional, also called univariate, space. Univariate Fréchet DistanceYou can calculate the distance between two normal distributions $X$ and $Y$ with means $\mu_X$ and $\mu_Y$ and standard deviations $\sigma_X$ and $\sigma_Y$, as:$$d(X,Y) = (\mu_X-\mu_Y)^2 + (\sigma_X-\sigma_Y)^2 $$Pretty simple, right? Now you can see how it can be converted to be used in multi-dimensional, which is also called multivariate, space. Multivariate Fréchet Distance**Covariance**To find the Fréchet distance between two multivariate normal distributions, you first need to find the covariance instead of the standard deviation. The covariance, which is the multivariate version of variance (the square of standard deviation), is represented using a square matrix where the side length is equal to the number of dimensions. Since the feature vectors you will be using have 2048 values/weights, the covariance matrix will be 2048 x 2048. But for the sake of an example, this is a covariance matrix in a two-dimensional space:$\Sigma = \left(\begin{array}{cc} 1 & 0\\ 0 & 1\end{array}\right)$The value at location $(i, j)$ corresponds to the covariance of vector $i$ with vector $j$. Since the covariance of $i$ with $j$ and $j$ with $i$ are equivalent, the matrix will always be symmetric with respect to the diagonal. The diagonal is the covariance of that element with itself. In this example, there are zeros everywhere except the diagonal. That means that the two dimensions are independent of one another, they are completely unrelated.The following code cell will visualize this matrix.
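In one dimension this is straightforward to compute directly. Here is a tiny standalone sketch with arbitrary numbers, just to fix ideas before moving on to the multivariate case:

```python
import torch

def univariate_frechet(mu_x, mu_y, sigma_x, sigma_y):
    # d(X, Y) = (mu_X - mu_Y)^2 + (sigma_X - sigma_Y)^2
    return (mu_x - mu_y) ** 2 + (sigma_x - sigma_y) ** 2

# Example: N(0, 1^2) vs N(1, 2^2) -> (0 - 1)^2 + (1 - 2)^2 = 2
print(univariate_frechet(torch.tensor(0.), torch.tensor(1.),
                         torch.tensor(1.), torch.tensor(2.)))  # tensor(2.)
```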
###Code
#import os
#os.environ['KMP_DUPLICATE_LIB_OK']='True'
from torch.distributions import MultivariateNormal
import seaborn as sns # This is for visualization
mean = torch.Tensor([0, 0]) # Center the mean at the origin
covariance = torch.Tensor( # This matrix shows independence - there are only non-zero values on the diagonal
[[1, 0],
[0, 1]]
)
independent_dist = MultivariateNormal(mean, covariance)
samples = independent_dist.sample((10000,))
res = sns.jointplot(x=samples[:, 0], y=samples[:, 1], kind="kde")
plt.show()
###Output
_____no_output_____
###Markdown
Now, here's an example of a multivariate normal distribution that has covariance:$\Sigma = \left(\begin{array}{cc} 2 & -1\\ -1 & 2\end{array}\right)$And see how it looks:
###Code
mean = torch.Tensor([0, 0])
covariance = torch.Tensor(
[[2, -1],
[-1, 2]]
)
covariant_dist = MultivariateNormal(mean, covariance)
samples = covariant_dist.sample((10000,))
res = sns.jointplot(x=samples[:, 0], y=samples[:, 1], kind="kde")
plt.show()
###Output
_____no_output_____
###Markdown
**Formula**Based on the paper, "[The Fréchet distance between multivariate normal distributions](https://core.ac.uk/reader/82269844)" by Dowson and Landau (1982), the Fréchet distance between two multivariate normal distributions $X$ and $Y$ is:$d(X, Y) = \Vert\mu_X-\mu_Y\Vert^2 + \mathrm{Tr}\left(\Sigma_X+\Sigma_Y - 2 \sqrt{\Sigma_X \Sigma_Y}\right)$Similar to the formula for univariate Fréchet distance, you can calculate the distance between the means and the distance between the standard deviations. However, calculating the distance between the standard deviations changes slightly here, as it includes the matrix product and matrix square root. $\mathrm{Tr}$ refers to the trace, the sum of the diagonal elements of a matrix.Now you can implement this!Optional hints for frechet_distance1. You want to implement the above equation in code.2. You might find the functions `torch.norm` and `torch.trace` helpful here.3. A matrix_sqrt function is defined for you above -- you need to use it instead of `torch.sqrt()` which only gets the elementwise square root instead of the matrix square root.4. You can also use the `@` symbol for matrix multiplication.
###Code
import scipy
# This is the matrix square root function you will be using
def matrix_sqrt(x):
'''
Function that takes in a matrix and returns the square root of that matrix.
For an input matrix A, the output matrix B would be such that B @ B is the matrix A.
Parameters:
x: a matrix
'''
y = x.cpu().detach().numpy()
y = scipy.linalg.sqrtm(y)
return torch.Tensor(y.real, device=x.device)
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED FUNCTION: frechet_distance
def frechet_distance(mu_x, mu_y, sigma_x, sigma_y):
'''
Function for returning the Fréchet distance between multivariate Gaussians,
parameterized by their means and covariance matrices.
Parameters:
mu_x: the mean of the first Gaussian, (n_features)
mu_y: the mean of the second Gaussian, (n_features)
sigma_x: the covariance matrix of the first Gaussian, (n_features, n_features)
sigma_y: the covariance matrix of the second Gaussian, (n_features, n_features)
'''
#### START CODE HERE ####
return torch.norm(mu_x - mu_y)**2 + torch.trace(sigma_x + sigma_y - 2*matrix_sqrt(sigma_x@sigma_y))
#### END CODE HERE ####
# UNIT TEST
mean1 = torch.Tensor([0, 0]) # Center the mean at the origin
covariance1 = torch.Tensor( # This matrix shows independence - there are only non-zero values on the diagonal
[[1, 0],
[0, 1]]
)
dist1 = MultivariateNormal(mean1, covariance1)
mean2 = torch.Tensor([0, 0]) # Center the mean at the origin
covariance2 = torch.Tensor( # This matrix shows dependence
[[2, -1],
[-1, 2]]
)
dist2 = MultivariateNormal(mean2, covariance2)
assert torch.isclose(
frechet_distance(
dist1.mean, dist2.mean,
dist1.covariance_matrix, dist2.covariance_matrix
),
4 - 2 * torch.sqrt(torch.tensor(3.))
)
assert (frechet_distance(
dist1.mean, dist1.mean,
dist1.covariance_matrix, dist1.covariance_matrix
).item() == 0)
print("Success!")
###Output
Success!
###Markdown
Putting it all together!Now, you can apply FID to your generator from earlier.You will start by defining a bit of helper code to preprocess the image for the Inception-v3 network:
###Code
def preprocess(img):
img = torch.nn.functional.interpolate(img, size=(299, 299), mode='bilinear', align_corners=False)
return img
###Output
_____no_output_____
###Markdown
Then, you'll define a function to calculate the covariance of the features that returns a covariance matrix given a list of values:
###Code
import numpy as np
def get_covariance(features):
return torch.Tensor(np.cov(features.detach().numpy(), rowvar=False))
###Output
_____no_output_____
###Markdown
Finally, you can use the pre-trained Inception-v3 model to compute features of the real and fake images. With these features, you can then get the covariance and means of these features across many samples. First, you get the features of the real and fake images using the Inception-v3 model:
###Code
fake_features_list = []
real_features_list = []
gen.eval()
n_samples = 512 # The total number of samples
batch_size = 4 # Samples per iteration
dataloader = DataLoader(
dataset,
batch_size=batch_size,
shuffle=True)
cur_samples = 0
with torch.no_grad(): # You don't need to calculate gradients here, so you do this to save memory
try:
for real_example, _ in tqdm(dataloader, total=n_samples // batch_size): # Go by batch
real_samples = real_example
real_features = inception_model(real_samples.to(device)).detach().to('cpu') # Move features to CPU
real_features_list.append(real_features)
fake_samples = get_noise(len(real_example), z_dim).to(device)
fake_samples = preprocess(gen(fake_samples))
fake_features = inception_model(fake_samples.to(device)).detach().to('cpu')
fake_features_list.append(fake_features)
cur_samples += len(real_samples)
if cur_samples >= n_samples:
break
    except Exception as err:
        print("Error in loop:", err)
###Output
_____no_output_____
###Markdown
Then, you can combine all of the values that you collected for the reals and fakes into large tensors:
###Code
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# UNIT TEST COMMENT: Needed as is for autograding
fake_features_all = torch.cat(fake_features_list)
real_features_all = torch.cat(real_features_list)
###Output
_____no_output_____
###Markdown
And calculate the covariance and means of these real and fake features:
###Code
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# GRADED CELL
# Calculate the covariance matrix for the fake and real features
# and also calculate the means of the feature over the batch (for each feature dimension mean)
#### START CODE HERE ####
mu_fake = torch.mean(fake_features_all, 0)
mu_real = torch.mean(real_features_all, 0)
sigma_fake = get_covariance(fake_features_all)
sigma_real = get_covariance(real_features_all)
#### END CODE HERE ####
assert tuple(sigma_fake.shape) == (fake_features_all.shape[1], fake_features_all.shape[1])
assert torch.abs(sigma_fake[0, 0] - 2.5e-2) < 1e-2 and torch.abs(sigma_fake[-1, -1] - 5e-2) < 1e-2
assert tuple(sigma_real.shape) == (real_features_all.shape[1], real_features_all.shape[1])
assert torch.abs(sigma_real[0, 0] - 3.5768e-2) < 1e-4 and torch.abs(sigma_real[0, 1] + 5.3236e-4) < 1e-4
assert tuple(mu_fake.shape) == (fake_features_all.shape[1],)
assert tuple(mu_real.shape) == (real_features_all.shape[1],)
assert torch.abs(mu_real[0] - 0.3099) < 0.01 and torch.abs(mu_real[1] - 0.2721) < 0.01
assert torch.abs(mu_fake[0] - 0.37) < 0.05 and torch.abs(mu_real[1] - 0.27) < 0.05
print("Success!")
###Output
Success!
###Markdown
At this point, you can also visualize what the pairwise multivariate distributions of the inception features look like!
###Code
indices = [2, 4, 5]
fake_dist = MultivariateNormal(mu_fake[indices], sigma_fake[indices][:, indices])
fake_samples = fake_dist.sample((5000,))
real_dist = MultivariateNormal(mu_real[indices], sigma_real[indices][:, indices])
real_samples = real_dist.sample((5000,))
import pandas as pd
df_fake = pd.DataFrame(fake_samples.numpy(), columns=indices)
df_real = pd.DataFrame(real_samples.numpy(), columns=indices)
df_fake["is_real"] = "no"
df_real["is_real"] = "yes"
df = pd.concat([df_fake, df_real])
sns.pairplot(df, plot_kws={'alpha': 0.1}, hue='is_real')
###Output
_____no_output_____
###Markdown
Lastly, you can use your earlier `frechet_distance` function to calculate the FID and evaluate your GAN. You can see how similar/different the features of the generated images are to the features of the real images. The next cell might take five minutes or so to run in Coursera.
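For reference, the Fréchet distance between two multivariate Gaussians $\mathcal{N}(\mu_X, \Sigma_X)$ and $\mathcal{N}(\mu_Y, \Sigma_Y)$ is $$d^2 = \left\Vert \mu_X - \mu_Y \right\Vert^2 + \mathrm{Tr}\left(\Sigma_X + \Sigma_Y - 2\sqrt{\Sigma_X \Sigma_Y}\right),$$ so a lower value means the fake feature distribution sits closer to the real one.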
###Code
with torch.no_grad():
print(frechet_distance(mu_real, mu_fake, sigma_real, sigma_fake).item())
###Output
_____no_output_____ |
community/awards/teach_me_quantum_2018/bronze/bronze/B14_Python_Basics_Lists.ipynb | ###Markdown
prepared by Abuzer Yakaryilmaz (QuSoft@Riga) | November 02, 2018 I have some macros here. If there is a problem with displaying mathematical formulas, please run me to load these macros.$ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\inner}[2]{\langle #1,#2\rangle} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ Basics of Python: Lists We review using lists in Python here. Please run each cell and check the results. A list (or array) is a collection of objects (variables) separated by commas. The order is important, and we can access each element in the list with its index starting from 0.
###Code
# here is a list holding all even numbers between 10 and 20
L = [10, 12, 14, 16, 18, 20]
# let's print the list
print(L)
# let's print each element by using its index but in reverse order
print(L[5],L[4],L[3],L[2],L[1],L[0])
# let's print the length (size) of list
print(len(L))
# let's print each element and its index in the list
# we use a for-loop and the number of iterations is determined by the length of the list
# everything is automatic :-)
L = [10, 12, 14, 16, 18, 20]
for i in range(len(L)):
print(L[i],"is the element in our list with the index",i)
# let's replace each number in the list with its double value
L = [10, 12, 14, 16, 18, 20]
# let's print the list before doubling
print("the list before doubling",L)
for i in range(len(L)):
current_element=L[i] # get the value of the i-th element
L[i] = 2 * current_element # update the value of the i-th element
# let's shorten the replacing code
#L[i] = 2 * L[i]
# or
#L[i] *= 2
# let's print the list after doubling
print("the list after doubling",L)
# after each execution of this cell, the latest values will be doubled
# so the values in the list will be exponentially increased
# let's define two lists
L1 = [1,2,3,4]
L2 = [-5,-6,-7,-8]
# two lists can be concatenated
# the result is a new list
print("the concatenation of L1 and L2 is",L1+L2)
# the order of terms is important
print("the concatenation of L2 and L1 is",L2+L1) # this is a different list than L1+L2
# we can add a new element to a list, which increases its length/size by 1
L = [10, 12, 14, 16, 18, 20]
print(L,"the current length is",len(L))
# we add two values by showing two different methods
# L.append(value) directly adds the value as a new element to the list
L.append(-4)
# we can also use concatenation operator +
L = L + [-8] # here [-8] is a list with single element
print(L,"the new length is",len(L))
# a list can be multiplied with an integer
L = [1,2]
# we can consider the multiplication of L by an integer
# as a repeated summation (concatenation) of L by itself
# L * 1 is the list itself
# L * 2 is L + L (concatenation of L with itself)
# L * 3 is L + L + L (concatenation of L with itself twice)
# L * m is L + ... + L (concatenation of L with itself m-1 times)
# L * 0 is the empty list
# L * i is the same as i * L
# let's print the different cases
for i in range(6):
print(i,"* L is",i*L)
# this can be useful when initializing a list with the same value(s)
# let's create a list of prime numbers less than 100
# here is a function determining whether a given number is prime or not
def prime(number):
if number < 2: return False
if number == 2: return True
if number % 2 == 0: return False
for i in range(3,number,2):
if number % i == 0: return False
return True
# end of function
# let's start with an empty list
L=[]
# what can the length of this list be?
print("my initial length is",len(L))
for i in range(2,100):
if prime(i):
L.append(i)
# alternative methods:
#L = L + [i]
#L += [i]
# print the final list
print(L)
print("my final length is",len(L))
###Output
_____no_output_____
###Markdown
For a given integer $ n \geq 0 $, $ S(0) = 0 $, $ S(1)=1 $, and $ S(n) = 1 + 2 + \cdots + n $. We define the list $ L(n) $ such that the element with index $n$ holds $ S(n) $. In other words, the elements of $ L(n) $ are $ [ S(0)~~S(1)~~S(2)~~\cdots~~S(n) ] $. Let's build the list $ L(20) $.
###Code
# let's define the list with S(0)
L = [0]
# let's iteratively define n and S
# initial values
n = 0
S = 0
# the number of iterations
N = 20
while n < N: # we iterate n over the values 1 to 20, so the last element appended is S(20)
n = n + 1
S = S + n
L.append(S)
# print the final list
print(L)
###Output
_____no_output_____
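###Markdown
As a quick check of the printed list: the closed form $ S(n) = \dfrac{n(n+1)}{2} $ gives $ S(20) = \dfrac{20 \cdot 21}{2} = 210 $, so the last element of $ L(20) $ should be $ 210 $.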
###Markdown
Task 1 The Fibonacci sequence starts with $ 1 $ and $ 1 $. Then, each next element is equal to the sum of the previous two elements:$$ 1, 1, 2 , 3 , 5, 8, 13, 21, 34, 55 \ldots $$ Find the first 30 elements of the Fibonacci sequence, store them in a list, and then print the list. You can verify the first 10 elements of your result against the list above.
###Code
#
# your solution is here
#
F = [1,1]
###Output
_____no_output_____
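###Markdown
One possible way to solve Task 1 (a sketch; the notebook's own solution may differ) is to keep appending the sum of the last two elements until the list holds 30 numbers:
###Code
# a sketch for Task 1: build the first 30 Fibonacci numbers in a list
F = [1,1] # the first two elements
while len(F) < 30: # stop once the list has 30 elements
    F.append(F[-1] + F[-2]) # each new element is the sum of the previous two
print(F)
print("the length of F is",len(F))
###Output
_____no_output_____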
###Markdown
click for our solution Lists of different objects A list can have any type of values.
###Code
# the following list stores certain information about Asja
# name, surname, age, profession, height, weight, partner(s) if any, kid(s) if any, date
ASJA = ['Asja','Sarkane',34,'musician',180,65.5,[],['Eleni','Fyodor'],"October 24, 2018"]
print(ASJA)
# Remark that an element of a list can be another list as well.
###Output
_____no_output_____
###Markdown
Task 2 Define a list $ N $ with 11 elements such that $ N[i] $ is another list with the four elements $ [i, i^2, i^3, i^2+i^3] $. The index $ i $ should range from $ 0 $ to $ 10 $.
###Code
#
# your solution is here
#
###Output
_____no_output_____
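###Markdown
One possible way to solve Task 2 (a sketch; the notebook's own solution may differ) is below.
###Code
# a sketch for Task 2: N[i] = [i, i**2, i**3, i**2 + i**3] for i = 0,...,10
N = []
for i in range(11):
    N.append([i, i**2, i**3, i**2+i**3])
print(N)
###Output
_____no_output_____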
###Markdown
click for our solution Dictionaries The outcomes of a quantum program will be stored in a dictionary. Therefore, we briefly mention the dictionary data type here. A dictionary is a set of paired elements. Each pair is composed of a key and its value, and any value can be accessed by its key.
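For example, the measurement outcomes of a quantum program typically come back as a dictionary mapping each observed bit string to the number of times it was observed (the numbers below are made-up illustrative values, not real results):
###Code
# a small sketch: a dictionary of (hypothetical) measurement outcomes
counts = {'00': 498, '11': 526} # made-up example values
total_shots = sum(counts.values())
for outcome in counts:
    print(outcome,"is observed",counts[outcome],"times out of",total_shots)
###Output
_____no_output_____
###Markdown
Here is the basic dictionary syntax on a simple example pairing people with their ages: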
###Code
# let's define a dictionary pairing a person with her/his age
ages = {
'Asja':32,
'Balvis':28,
'Fyodor':43
}
# let's print all keys
for person in ages:
print(person)
# let's print the values
for person in ages:
print(ages[person])
###Output
_____no_output_____ |
13. Advanced CNN/13-3. VGG for cifar10.ipynb | ###Markdown
Lab 10.5.2 VGG for cifar10**Jonathan Choi 2021****[Deep Learning By Torch] End to End study scripts of Deep Learning by implementing code practice with Pytorch.**If you have any issue, please PR below.[[Deep Learning By Torch] - Github @JonyChoi](https://github.com/jonychoi/Deep-Learning-By-Torch) Here, we are going to apply our VGG network to the cifar10 dataset. Reference from https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html#sphx-glr-beginner-blitz-cifar10-tutorial-py Imports
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
device = 'cuda' if torch.cuda.is_available() else 'cpu'
torch.manual_seed(1)
if device == 'cuda':
torch.cuda.manual_seed_all(1)
###Output
_____no_output_____
###Markdown
What about data?We will use the CIFAR10 dataset. It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size. Training an image classifierWe will do the following steps in order:1. Load and normalize the CIFAR10 training and test datasets using torchvision2. Define a Convolutional Neural Network3. Define a loss function4. Train the network on the training data5. Test the network on the test data Load and normalize CIFAR10Using torchvision, it’s extremely easy to load CIFAR10.The output of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1]. Take a Moment!**CLASS** ```torchvision.transforms.Normalize(mean, std, inplace=False)```Normalize a tensor image with mean and standard deviation. This transform does not support PIL Image. Given mean: ```(mean[1],...,mean[n])``` and std: ```(std[1],..,std[n])``` for n channels, this transform will normalize each channel of the input ```torch.*Tensor``` i.e., ```output[channel] = (input[channel] - mean[channel]) / std[channel]```**NOTE**This transform acts out of place, i.e., it does not mutate the input tensor.**Parameters**> - mean (sequence) – Sequence of means for each channel.> - std (sequence) – Sequence of standard deviations for each channel.> - inplace (bool,optional) – Bool to make this operation in-place.Examples using ```Normalize```:[Tensor transforms and JIT](https://pytorch.org/vision/stable/auto_examples/plot_scripted_tensor_transforms.htmlsphx-glr-auto-examples-plot-scripted-tensor-transforms-py) **forward(tensor: torch.Tensor) → torch.Tensor**Parameters- tensor (Tensor) – Tensor image to be normalized.Returns- Normalized Tensor image.Return type- Tensor
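As a quick numerical check of that formula: with mean 0.5 and std 0.5 per channel, a pixel value of 0.0 maps to (0.0 - 0.5) / 0.5 = -1.0 and 1.0 maps to 1.0, which is how tensors in [0, 1] end up in [-1, 1]. A minimal sketch of this on a tiny one-channel tensor:
###Code
# a minimal sketch (separate from the CIFAR10 pipeline below): Normalize with mean=0.5, std=0.5
import torch
import torchvision.transforms as transforms
tiny = torch.tensor([[[0.0, 0.5, 1.0]]]) # shape (1, 1, 3): one channel, three pixel values
print(transforms.Normalize((0.5,), (0.5,))(tiny)) # expected: tensor([[[-1.,  0.,  1.]]])
###Output
_____no_output_____
###Markdown
The same normalization is applied to each of the three RGB channels below: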
###Code
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]
)
trainset = datasets.CIFAR10(root = './cifar10', train = True, download = True, transform = transform)
testset = datasets.CIFAR10(root = './cifar10', train=False, download = True, transform = transform)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
###Output
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Take a Moment!**CLASS** ```torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None, batch_sampler=None, num_workers=0, collate_fn=None, pin_memory=False, drop_last=False, timeout=0, worker_init_fn=None, multiprocessing_context=None, generator=None, *, prefetch_factor=2, persistent_workers=False)```Data loader. Combines a dataset and a sampler, and provides an iterable over the given dataset.The ```DataLoader``` supports both map-style and iterable-style datasets with single- or multi-process loading, customizing loading order and optional automatic batching (collation) and memory pinning.See ```torch.utils.data``` documentation page for more details.**Parameters**- **dataset** (Dataset) – dataset from which to load the data.- **batch_size** (int, optional) – how many samples per batch to load (default: 1).- **shuffle** (bool, optional) – set to True to have the data reshuffled at every epoch (default: False).- **sampler** (Sampler or Iterable, optional) – defines the strategy to draw samples from the dataset. Can be any Iterable with __len__ implemented. If specified, shuffle must not be specified.- **batch_sampler** (Sampler or Iterable, optional) – like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last.- **num_workers** (int, optional) – how many subprocesses to use for data loading. 0 means that the data will be loaded in the main process. (default: 0)- **collate_fn** (callable, optional) – merges a list of samples to form a mini-batch of Tensor(s). Used when using batched loading from a map-style dataset.- **pin_memory** (bool, optional) – If True, the data loader will copy Tensors into CUDA pinned memory before returning them. If your data elements are a custom type, or your collate_fn returns a batch that is a custom type, see the example below.- **drop_last** (bool, optional) – set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size. If False and the size of dataset is not divisible by the batch size, then the last batch will be smaller. (default: False)- **timeout** (numeric, optional) – if positive, the timeout value for collecting a batch from workers. Should always be non-negative. (default: 0)- **worker_init_fn** (callable, optional) – If not None, this will be called on each worker subprocess with the worker id (an int in [0, num_workers - 1]) as input, after seeding and before data loading. (default: None)- **generator** (torch.Generator, optional) – If not None, this RNG will be used by RandomSampler to generate random indexes and multiprocessing to generate base_seed for workers. (default: None)- **prefetch_factor** (int, optional, keyword-only arg) – Number of samples loaded in advance by each worker. 2 means there will be a total of 2 * num_workers samples prefetched across all workers. (default: 2)- **persistent_workers** (bool, optional) – If True, the data loader will not shutdown the worker processes after a dataset has been consumed once. This allows to maintain the workers Dataset instances alive. (default: False)
###Code
train_loader = torch.utils.data.DataLoader(dataset = trainset, shuffle = True, batch_size = 512, drop_last = False, num_workers = 1)
test_loader = torch.utils.data.DataLoader(dataset = testset, shuffle = True, batch_size = 4 , drop_last = False, num_workers = 1)
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.cpu().numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random images from the test loader
dataiter = iter(test_loader)
images, labels = next(dataiter) # the built-in next() also works on newer PyTorch versions
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
###Output
_____no_output_____
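###Markdown
A quick sanity check on the loader sizes: CIFAR10 has 50,000 training images, so with `batch_size = 512` and `drop_last = False` the train loader yields ceil(50000 / 512) = 98 mini-batches per epoch, which is the `MiniBatch: ... / 98` count reported in the training log below.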
###Markdown
Define Loss Tracker
###Code
import visdom
vis = visdom.Visdom()
vis.close(env = "main")
def loss_tracker(loss_plot, loss_value, num):
vis.line(X = num, Y = loss_value, win = loss_plot, update = 'append')
loss_plt = vis.line(Y=torch.Tensor(1).zero_(),opts=dict(title='loss_tracker', legend=['loss'], showlegend=True))
###Output
_____no_output_____
###Markdown
Define a Convolutional Neural Network We are going to use the VGG model that we defined in the previous section.
###Code
import torchvision.models.vgg as vgg
def make_layers(cfg, batch_norm = False):
layers = []
in_channels = 3
for v in cfg:
if v == 'M':
layers += [nn.MaxPool2d(kernel_size = 2, stride = 2)]
else:
conv2d = nn.Conv2d(in_channels, v, kernel_size = 3, padding = 1)
if batch_norm:
layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace = True)]
else:
layers += [conv2d, nn.ReLU(inplace = True)]
in_channels = v
return nn.Sequential(*layers)
cfg = {
    'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],                                        # 8 conv + 3 fc = VGG11
    'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],                               # 10 conv + 3 fc = VGG13
    'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],                # 13 conv + 3 fc = VGG16
    'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'], # 16 conv + 3 fc = VGG19
}
conv = make_layers(cfg['A'], batch_norm=True)
model = vgg.VGG(conv, num_classes = 10, init_weights = True).to(device)
###Output
_____no_output_____
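###Markdown
As a quick check of the layer counts encoded in `cfg` above, each VGG variant's name counts its convolutional layers plus the classifier's three fully connected layers:
###Code
# count the convolutional layers in each configuration; 'M' entries are max-pooling, not weight layers
for name, layer_cfg in cfg.items():
    n_conv = sum(1 for v in layer_cfg if v != 'M')
    print(name, ":", n_conv, "conv +", 3, "fc =", n_conv + 3, "weight layers")
###Output
_____no_output_____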
###Markdown
Define a Loss function and optimizer Let's use a Classification Cross-Entropy loss and SGD with momentum.
###Code
criterion = nn.CrossEntropyLoss().to(device)
optimizer = optim.SGD(model.parameters(), lr= 0.001, momentum = 0.9)
###Output
_____no_output_____
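###Markdown
For reference, with `momentum = 0.9` and `lr = 0.001`, PyTorch's SGD keeps a velocity buffer $v$ for every parameter $\theta$ and applies roughly $$v \leftarrow \mu v + g, \qquad \theta \leftarrow \theta - \eta v,$$ where $g$ is the current mini-batch gradient, $\mu = 0.9$, and $\eta = 0.001$; the momentum term smooths the noisy per-batch gradients.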
###Markdown
Train the network
###Code
total_batch = len(train_loader)
training_epochs = 100
for epoch in range(training_epochs): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(train_loader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 7 == 6: # print every 7 mini-batches
running_loss = running_loss / 7
loss_tracker(loss_plt, torch.Tensor([running_loss]), torch.Tensor([i + epoch*total_batch ]))
            print('Epoch: {} / {}, MiniBatch: {} / {} Cost: {:.6f}'.format(epoch + 1, training_epochs, i + 1, total_batch, running_loss))
running_loss = 0.0
print('Finished Training')
###Output
Epoch: 1 / 20, MiniBatch: 7 / 98 Cost: 2.538052
Epoch: 1 / 20, MiniBatch: 14 / 98 Cost: 2.382969
Epoch: 1 / 20, MiniBatch: 21 / 98 Cost: 2.276173
Epoch: 1 / 20, MiniBatch: 28 / 98 Cost: 2.150281
Epoch: 1 / 20, MiniBatch: 35 / 98 Cost: 2.073248
Epoch: 1 / 20, MiniBatch: 42 / 98 Cost: 1.995923
Epoch: 1 / 20, MiniBatch: 49 / 98 Cost: 1.923304
Epoch: 1 / 20, MiniBatch: 56 / 98 Cost: 1.889189
Epoch: 1 / 20, MiniBatch: 63 / 98 Cost: 1.804240
Epoch: 1 / 20, MiniBatch: 70 / 98 Cost: 1.767560
Epoch: 1 / 20, MiniBatch: 77 / 98 Cost: 1.767697
Epoch: 1 / 20, MiniBatch: 84 / 98 Cost: 1.704989
Epoch: 1 / 20, MiniBatch: 91 / 98 Cost: 1.684953
Epoch: 1 / 20, MiniBatch: 98 / 98 Cost: 1.618018
Epoch: 2 / 20, MiniBatch: 7 / 98 Cost: 1.631967
Epoch: 2 / 20, MiniBatch: 14 / 98 Cost: 1.557841
Epoch: 2 / 20, MiniBatch: 21 / 98 Cost: 1.570448
Epoch: 2 / 20, MiniBatch: 28 / 98 Cost: 1.544699
Epoch: 2 / 20, MiniBatch: 35 / 98 Cost: 1.478828
Epoch: 2 / 20, MiniBatch: 42 / 98 Cost: 1.477298
Epoch: 2 / 20, MiniBatch: 49 / 98 Cost: 1.493688
Epoch: 2 / 20, MiniBatch: 56 / 98 Cost: 1.457394
Epoch: 2 / 20, MiniBatch: 63 / 98 Cost: 1.486550
Epoch: 2 / 20, MiniBatch: 70 / 98 Cost: 1.453320
Epoch: 2 / 20, MiniBatch: 77 / 98 Cost: 1.419895
Epoch: 2 / 20, MiniBatch: 84 / 98 Cost: 1.406468
Epoch: 2 / 20, MiniBatch: 91 / 98 Cost: 1.425783
Epoch: 2 / 20, MiniBatch: 98 / 98 Cost: 1.425327
Epoch: 3 / 20, MiniBatch: 7 / 98 Cost: 1.328591
Epoch: 3 / 20, MiniBatch: 14 / 98 Cost: 1.336415
Epoch: 3 / 20, MiniBatch: 21 / 98 Cost: 1.331422
Epoch: 3 / 20, MiniBatch: 28 / 98 Cost: 1.303613
Epoch: 3 / 20, MiniBatch: 35 / 98 Cost: 1.293972
Epoch: 3 / 20, MiniBatch: 42 / 98 Cost: 1.325387
Epoch: 3 / 20, MiniBatch: 49 / 98 Cost: 1.295058
Epoch: 3 / 20, MiniBatch: 56 / 98 Cost: 1.288499
Epoch: 3 / 20, MiniBatch: 63 / 98 Cost: 1.288325
Epoch: 3 / 20, MiniBatch: 70 / 98 Cost: 1.257927
Epoch: 3 / 20, MiniBatch: 77 / 98 Cost: 1.245749
Epoch: 3 / 20, MiniBatch: 84 / 98 Cost: 1.243169
Epoch: 3 / 20, MiniBatch: 91 / 98 Cost: 1.231211
Epoch: 3 / 20, MiniBatch: 98 / 98 Cost: 1.221887
Epoch: 4 / 20, MiniBatch: 7 / 98 Cost: 1.173174
Epoch: 4 / 20, MiniBatch: 14 / 98 Cost: 1.163392
Epoch: 4 / 20, MiniBatch: 21 / 98 Cost: 1.159650
Epoch: 4 / 20, MiniBatch: 28 / 98 Cost: 1.122198
Epoch: 4 / 20, MiniBatch: 35 / 98 Cost: 1.124366
Epoch: 4 / 20, MiniBatch: 42 / 98 Cost: 1.138854
Epoch: 4 / 20, MiniBatch: 49 / 98 Cost: 1.159184
Epoch: 4 / 20, MiniBatch: 56 / 98 Cost: 1.160553
Epoch: 4 / 20, MiniBatch: 63 / 98 Cost: 1.112720
Epoch: 4 / 20, MiniBatch: 70 / 98 Cost: 1.110560
Epoch: 4 / 20, MiniBatch: 77 / 98 Cost: 1.164175
Epoch: 4 / 20, MiniBatch: 84 / 98 Cost: 1.113278
Epoch: 4 / 20, MiniBatch: 91 / 98 Cost: 1.108410
Epoch: 4 / 20, MiniBatch: 98 / 98 Cost: 1.103447
Epoch: 5 / 20, MiniBatch: 7 / 98 Cost: 1.037563
Epoch: 5 / 20, MiniBatch: 14 / 98 Cost: 1.037687
Epoch: 5 / 20, MiniBatch: 21 / 98 Cost: 1.021387
Epoch: 5 / 20, MiniBatch: 28 / 98 Cost: 1.026987
Epoch: 5 / 20, MiniBatch: 35 / 98 Cost: 0.987068
Epoch: 5 / 20, MiniBatch: 42 / 98 Cost: 1.014295
Epoch: 5 / 20, MiniBatch: 49 / 98 Cost: 1.006188
Epoch: 5 / 20, MiniBatch: 56 / 98 Cost: 1.019304
Epoch: 5 / 20, MiniBatch: 63 / 98 Cost: 1.020741
Epoch: 5 / 20, MiniBatch: 70 / 98 Cost: 0.990193
Epoch: 5 / 20, MiniBatch: 77 / 98 Cost: 1.006877
Epoch: 5 / 20, MiniBatch: 84 / 98 Cost: 1.004016
Epoch: 5 / 20, MiniBatch: 91 / 98 Cost: 0.992973
Epoch: 5 / 20, MiniBatch: 98 / 98 Cost: 0.996372
Epoch: 6 / 20, MiniBatch: 7 / 98 Cost: 0.906912
Epoch: 6 / 20, MiniBatch: 14 / 98 Cost: 0.907029
Epoch: 6 / 20, MiniBatch: 21 / 98 Cost: 0.900796
Epoch: 6 / 20, MiniBatch: 28 / 98 Cost: 0.887398
Epoch: 6 / 20, MiniBatch: 35 / 98 Cost: 0.923179
Epoch: 6 / 20, MiniBatch: 42 / 98 Cost: 0.907172
Epoch: 6 / 20, MiniBatch: 49 / 98 Cost: 0.903866
Epoch: 6 / 20, MiniBatch: 56 / 98 Cost: 0.914979
Epoch: 6 / 20, MiniBatch: 63 / 98 Cost: 0.906905
Epoch: 6 / 20, MiniBatch: 70 / 98 Cost: 0.892090
Epoch: 6 / 20, MiniBatch: 77 / 98 Cost: 0.889609
Epoch: 6 / 20, MiniBatch: 84 / 98 Cost: 0.859508
Epoch: 6 / 20, MiniBatch: 91 / 98 Cost: 0.893654
Epoch: 6 / 20, MiniBatch: 98 / 98 Cost: 0.870456
Epoch: 7 / 20, MiniBatch: 7 / 98 Cost: 0.770596
Epoch: 7 / 20, MiniBatch: 14 / 98 Cost: 0.778630
Epoch: 7 / 20, MiniBatch: 21 / 98 Cost: 0.767982
Epoch: 7 / 20, MiniBatch: 28 / 98 Cost: 0.811499
Epoch: 7 / 20, MiniBatch: 35 / 98 Cost: 0.786317
Epoch: 7 / 20, MiniBatch: 42 / 98 Cost: 0.796913
Epoch: 7 / 20, MiniBatch: 49 / 98 Cost: 0.777415
Epoch: 7 / 20, MiniBatch: 56 / 98 Cost: 0.768976
Epoch: 7 / 20, MiniBatch: 63 / 98 Cost: 0.797198
Epoch: 7 / 20, MiniBatch: 70 / 98 Cost: 0.820693
Epoch: 7 / 20, MiniBatch: 77 / 98 Cost: 0.764938
Epoch: 7 / 20, MiniBatch: 84 / 98 Cost: 0.768576
Epoch: 7 / 20, MiniBatch: 91 / 98 Cost: 0.808095
Epoch: 7 / 20, MiniBatch: 98 / 98 Cost: 0.789663
Epoch: 8 / 20, MiniBatch: 7 / 98 Cost: 0.698654
Epoch: 8 / 20, MiniBatch: 14 / 98 Cost: 0.673378
Epoch: 8 / 20, MiniBatch: 21 / 98 Cost: 0.640677
Epoch: 8 / 20, MiniBatch: 28 / 98 Cost: 0.673689
Epoch: 8 / 20, MiniBatch: 35 / 98 Cost: 0.687386
Epoch: 8 / 20, MiniBatch: 42 / 98 Cost: 0.708382
Epoch: 8 / 20, MiniBatch: 49 / 98 Cost: 0.678197
Epoch: 8 / 20, MiniBatch: 56 / 98 Cost: 0.666448
Epoch: 8 / 20, MiniBatch: 63 / 98 Cost: 0.711394
Epoch: 8 / 20, MiniBatch: 70 / 98 Cost: 0.678659
Epoch: 8 / 20, MiniBatch: 77 / 98 Cost: 0.664398
Epoch: 8 / 20, MiniBatch: 84 / 98 Cost: 0.665868
Epoch: 8 / 20, MiniBatch: 91 / 98 Cost: 0.683069
Epoch: 8 / 20, MiniBatch: 98 / 98 Cost: 0.698026
Epoch: 9 / 20, MiniBatch: 7 / 98 Cost: 0.546388
Epoch: 9 / 20, MiniBatch: 14 / 98 Cost: 0.551230
Epoch: 9 / 20, MiniBatch: 21 / 98 Cost: 0.556730
Epoch: 9 / 20, MiniBatch: 28 / 98 Cost: 0.563421
Epoch: 9 / 20, MiniBatch: 35 / 98 Cost: 0.555251
Epoch: 9 / 20, MiniBatch: 42 / 98 Cost: 0.589523
Epoch: 9 / 20, MiniBatch: 49 / 98 Cost: 0.570003
Epoch: 9 / 20, MiniBatch: 56 / 98 Cost: 0.570982
Epoch: 9 / 20, MiniBatch: 63 / 98 Cost: 0.559812
Epoch: 9 / 20, MiniBatch: 70 / 98 Cost: 0.564817
Epoch: 9 / 20, MiniBatch: 77 / 98 Cost: 0.587819
Epoch: 9 / 20, MiniBatch: 84 / 98 Cost: 0.570211
Epoch: 9 / 20, MiniBatch: 91 / 98 Cost: 0.576826
Epoch: 9 / 20, MiniBatch: 98 / 98 Cost: 0.609204
Epoch: 10 / 20, MiniBatch: 7 / 98 Cost: 0.456897
Epoch: 10 / 20, MiniBatch: 14 / 98 Cost: 0.477529
Epoch: 10 / 20, MiniBatch: 21 / 98 Cost: 0.467549
Epoch: 10 / 20, MiniBatch: 28 / 98 Cost: 0.442320
Epoch: 10 / 20, MiniBatch: 35 / 98 Cost: 0.457876
Epoch: 10 / 20, MiniBatch: 42 / 98 Cost: 0.470171
Epoch: 10 / 20, MiniBatch: 49 / 98 Cost: 0.462353
Epoch: 10 / 20, MiniBatch: 56 / 98 Cost: 0.461439
Epoch: 10 / 20, MiniBatch: 63 / 98 Cost: 0.455967
Epoch: 10 / 20, MiniBatch: 70 / 98 Cost: 0.460545
Epoch: 10 / 20, MiniBatch: 77 / 98 Cost: 0.491191
Epoch: 10 / 20, MiniBatch: 84 / 98 Cost: 0.484424
Epoch: 10 / 20, MiniBatch: 91 / 98 Cost: 0.486330
Epoch: 10 / 20, MiniBatch: 98 / 98 Cost: 0.497964
Epoch: 11 / 20, MiniBatch: 7 / 98 Cost: 0.361703
Epoch: 11 / 20, MiniBatch: 14 / 98 Cost: 0.373276
Epoch: 11 / 20, MiniBatch: 21 / 98 Cost: 0.386611
Epoch: 11 / 20, MiniBatch: 28 / 98 Cost: 0.360319
Epoch: 11 / 20, MiniBatch: 35 / 98 Cost: 0.381734
Epoch: 11 / 20, MiniBatch: 42 / 98 Cost: 0.336350
Epoch: 11 / 20, MiniBatch: 49 / 98 Cost: 0.367753
Epoch: 11 / 20, MiniBatch: 56 / 98 Cost: 0.355515
Epoch: 11 / 20, MiniBatch: 63 / 98 Cost: 0.366058
Epoch: 11 / 20, MiniBatch: 70 / 98 Cost: 0.400632
Epoch: 11 / 20, MiniBatch: 77 / 98 Cost: 0.376279
Epoch: 11 / 20, MiniBatch: 84 / 98 Cost: 0.363600
Epoch: 11 / 20, MiniBatch: 91 / 98 Cost: 0.360266
Epoch: 11 / 20, MiniBatch: 98 / 98 Cost: 0.386955
Epoch: 12 / 20, MiniBatch: 7 / 98 Cost: 0.261200
Epoch: 12 / 20, MiniBatch: 14 / 98 Cost: 0.270824
Epoch: 12 / 20, MiniBatch: 21 / 98 Cost: 0.254446
Epoch: 12 / 20, MiniBatch: 28 / 98 Cost: 0.278193
Epoch: 12 / 20, MiniBatch: 35 / 98 Cost: 0.291415
Epoch: 12 / 20, MiniBatch: 42 / 98 Cost: 0.274377
Epoch: 12 / 20, MiniBatch: 49 / 98 Cost: 0.280668
Epoch: 12 / 20, MiniBatch: 56 / 98 Cost: 0.261502
Epoch: 12 / 20, MiniBatch: 63 / 98 Cost: 0.281603
Epoch: 12 / 20, MiniBatch: 70 / 98 Cost: 0.272655
Epoch: 12 / 20, MiniBatch: 77 / 98 Cost: 0.285514
Epoch: 12 / 20, MiniBatch: 84 / 98 Cost: 0.297129
Epoch: 12 / 20, MiniBatch: 91 / 98 Cost: 0.296843
Epoch: 12 / 20, MiniBatch: 98 / 98 Cost: 0.298032
Epoch: 13 / 20, MiniBatch: 7 / 98 Cost: 0.202357
Epoch: 13 / 20, MiniBatch: 14 / 98 Cost: 0.200387
Epoch: 13 / 20, MiniBatch: 21 / 98 Cost: 0.204126
Epoch: 13 / 20, MiniBatch: 28 / 98 Cost: 0.195276
Epoch: 13 / 20, MiniBatch: 35 / 98 Cost: 0.205028
Epoch: 13 / 20, MiniBatch: 42 / 98 Cost: 0.205306
Epoch: 13 / 20, MiniBatch: 49 / 98 Cost: 0.201783
Epoch: 13 / 20, MiniBatch: 56 / 98 Cost: 0.199951
Epoch: 13 / 20, MiniBatch: 63 / 98 Cost: 0.198090
Epoch: 13 / 20, MiniBatch: 70 / 98 Cost: 0.192373
Epoch: 13 / 20, MiniBatch: 77 / 98 Cost: 0.192686
Epoch: 13 / 20, MiniBatch: 84 / 98 Cost: 0.213568
Epoch: 13 / 20, MiniBatch: 91 / 98 Cost: 0.198821
Epoch: 13 / 20, MiniBatch: 98 / 98 Cost: 0.205220
Epoch: 14 / 20, MiniBatch: 7 / 98 Cost: 0.144758
Epoch: 14 / 20, MiniBatch: 14 / 98 Cost: 0.125949
Epoch: 14 / 20, MiniBatch: 21 / 98 Cost: 0.136407
Epoch: 14 / 20, MiniBatch: 28 / 98 Cost: 0.132233
Epoch: 14 / 20, MiniBatch: 35 / 98 Cost: 0.131794
Epoch: 14 / 20, MiniBatch: 42 / 98 Cost: 0.135277
Epoch: 14 / 20, MiniBatch: 49 / 98 Cost: 0.130945
Epoch: 14 / 20, MiniBatch: 56 / 98 Cost: 0.134407
Epoch: 14 / 20, MiniBatch: 63 / 98 Cost: 0.137125
Epoch: 14 / 20, MiniBatch: 70 / 98 Cost: 0.144007
Epoch: 14 / 20, MiniBatch: 77 / 98 Cost: 0.133043
Epoch: 14 / 20, MiniBatch: 84 / 98 Cost: 0.136119
Epoch: 14 / 20, MiniBatch: 91 / 98 Cost: 0.136515
Epoch: 14 / 20, MiniBatch: 98 / 98 Cost: 0.141437
Epoch: 15 / 20, MiniBatch: 7 / 98 Cost: 0.101917
Epoch: 15 / 20, MiniBatch: 14 / 98 Cost: 0.095523
Epoch: 15 / 20, MiniBatch: 21 / 98 Cost: 0.097414
Epoch: 15 / 20, MiniBatch: 28 / 98 Cost: 0.096436
Epoch: 15 / 20, MiniBatch: 35 / 98 Cost: 0.095416
Epoch: 15 / 20, MiniBatch: 42 / 98 Cost: 0.086568
Epoch: 15 / 20, MiniBatch: 49 / 98 Cost: 0.098882
Epoch: 15 / 20, MiniBatch: 56 / 98 Cost: 0.084012
Epoch: 15 / 20, MiniBatch: 63 / 98 Cost: 0.093247
Epoch: 15 / 20, MiniBatch: 70 / 98 Cost: 0.084567
Epoch: 15 / 20, MiniBatch: 77 / 98 Cost: 0.104716
Epoch: 15 / 20, MiniBatch: 84 / 98 Cost: 0.083970
Epoch: 15 / 20, MiniBatch: 91 / 98 Cost: 0.104132
Epoch: 15 / 20, MiniBatch: 98 / 98 Cost: 0.099462
Epoch: 16 / 20, MiniBatch: 7 / 98 Cost: 0.078720
Epoch: 16 / 20, MiniBatch: 14 / 98 Cost: 0.072502
Epoch: 16 / 20, MiniBatch: 21 / 98 Cost: 0.066810
Epoch: 16 / 20, MiniBatch: 28 / 98 Cost: 0.062738
Epoch: 16 / 20, MiniBatch: 35 / 98 Cost: 0.069001
Epoch: 16 / 20, MiniBatch: 42 / 98 Cost: 0.072686
Epoch: 16 / 20, MiniBatch: 49 / 98 Cost: 0.072895
Epoch: 16 / 20, MiniBatch: 56 / 98 Cost: 0.075974
Epoch: 16 / 20, MiniBatch: 63 / 98 Cost: 0.072529
Epoch: 16 / 20, MiniBatch: 70 / 98 Cost: 0.074283
Epoch: 16 / 20, MiniBatch: 77 / 98 Cost: 0.074401
Epoch: 16 / 20, MiniBatch: 84 / 98 Cost: 0.078171
Epoch: 16 / 20, MiniBatch: 91 / 98 Cost: 0.071396
Epoch: 16 / 20, MiniBatch: 98 / 98 Cost: 0.079162
Epoch: 17 / 20, MiniBatch: 7 / 98 Cost: 0.054061
Epoch: 17 / 20, MiniBatch: 14 / 98 Cost: 0.055890
Epoch: 17 / 20, MiniBatch: 21 / 98 Cost: 0.057459
Epoch: 17 / 20, MiniBatch: 28 / 98 Cost: 0.050185
Epoch: 17 / 20, MiniBatch: 35 / 98 Cost: 0.053746
Epoch: 17 / 20, MiniBatch: 42 / 98 Cost: 0.041518
Epoch: 17 / 20, MiniBatch: 49 / 98 Cost: 0.049982
Epoch: 17 / 20, MiniBatch: 56 / 98 Cost: 0.052281
Epoch: 17 / 20, MiniBatch: 63 / 98 Cost: 0.046502
Epoch: 17 / 20, MiniBatch: 70 / 98 Cost: 0.048802
Epoch: 17 / 20, MiniBatch: 77 / 98 Cost: 0.050687
Epoch: 17 / 20, MiniBatch: 84 / 98 Cost: 0.055405
Epoch: 17 / 20, MiniBatch: 91 / 98 Cost: 0.052087
Epoch: 17 / 20, MiniBatch: 98 / 98 Cost: 0.048258
Epoch: 18 / 20, MiniBatch: 7 / 98 Cost: 0.038218
Epoch: 18 / 20, MiniBatch: 14 / 98 Cost: 0.037093
Epoch: 18 / 20, MiniBatch: 21 / 98 Cost: 0.036223
Epoch: 18 / 20, MiniBatch: 28 / 98 Cost: 0.037945
Epoch: 18 / 20, MiniBatch: 35 / 98 Cost: 0.039041
Epoch: 18 / 20, MiniBatch: 42 / 98 Cost: 0.031131
Epoch: 18 / 20, MiniBatch: 49 / 98 Cost: 0.037581
Epoch: 18 / 20, MiniBatch: 56 / 98 Cost: 0.036302
Epoch: 18 / 20, MiniBatch: 63 / 98 Cost: 0.028996
Epoch: 18 / 20, MiniBatch: 70 / 98 Cost: 0.035031
Epoch: 18 / 20, MiniBatch: 77 / 98 Cost: 0.036570
Epoch: 18 / 20, MiniBatch: 84 / 98 Cost: 0.038122
Epoch: 18 / 20, MiniBatch: 91 / 98 Cost: 0.034594
Epoch: 18 / 20, MiniBatch: 98 / 98 Cost: 0.036805
Epoch: 19 / 20, MiniBatch: 7 / 98 Cost: 0.026210
Epoch: 19 / 20, MiniBatch: 14 / 98 Cost: 0.022190
Epoch: 19 / 20, MiniBatch: 21 / 98 Cost: 0.027940
Epoch: 19 / 20, MiniBatch: 28 / 98 Cost: 0.026496
Epoch: 19 / 20, MiniBatch: 35 / 98 Cost: 0.025400
Epoch: 19 / 20, MiniBatch: 42 / 98 Cost: 0.027916
Epoch: 19 / 20, MiniBatch: 49 / 98 Cost: 0.024505
Epoch: 19 / 20, MiniBatch: 56 / 98 Cost: 0.025126
Epoch: 19 / 20, MiniBatch: 63 / 98 Cost: 0.030430
Epoch: 19 / 20, MiniBatch: 70 / 98 Cost: 0.025987
Epoch: 19 / 20, MiniBatch: 77 / 98 Cost: 0.023703
Epoch: 19 / 20, MiniBatch: 84 / 98 Cost: 0.024067
Epoch: 19 / 20, MiniBatch: 91 / 98 Cost: 0.027201
Epoch: 19 / 20, MiniBatch: 98 / 98 Cost: 0.026889
Epoch: 20 / 20, MiniBatch: 7 / 98 Cost: 0.019822
Epoch: 20 / 20, MiniBatch: 14 / 98 Cost: 0.020846
Epoch: 20 / 20, MiniBatch: 21 / 98 Cost: 0.016990
Epoch: 20 / 20, MiniBatch: 28 / 98 Cost: 0.021612
Epoch: 20 / 20, MiniBatch: 35 / 98 Cost: 0.019567
Epoch: 20 / 20, MiniBatch: 42 / 98 Cost: 0.019511
Epoch: 20 / 20, MiniBatch: 49 / 98 Cost: 0.019196
Epoch: 20 / 20, MiniBatch: 56 / 98 Cost: 0.021907
Epoch: 20 / 20, MiniBatch: 63 / 98 Cost: 0.021168
Epoch: 20 / 20, MiniBatch: 70 / 98 Cost: 0.021435
Epoch: 20 / 20, MiniBatch: 77 / 98 Cost: 0.021338
Epoch: 20 / 20, MiniBatch: 84 / 98 Cost: 0.021600
Epoch: 20 / 20, MiniBatch: 91 / 98 Cost: 0.021197
Epoch: 20 / 20, MiniBatch: 98 / 98 Cost: 0.021607
Epoch: 21 / 20, MiniBatch: 7 / 98 Cost: 0.016115
Epoch: 21 / 20, MiniBatch: 14 / 98 Cost: 0.015866
Epoch: 21 / 20, MiniBatch: 21 / 98 Cost: 0.016081
Epoch: 21 / 20, MiniBatch: 28 / 98 Cost: 0.013863
Epoch: 21 / 20, MiniBatch: 35 / 98 Cost: 0.014121
Epoch: 21 / 20, MiniBatch: 42 / 98 Cost: 0.014519
Epoch: 21 / 20, MiniBatch: 49 / 98 Cost: 0.016553
Epoch: 21 / 20, MiniBatch: 56 / 98 Cost: 0.014483
Epoch: 21 / 20, MiniBatch: 63 / 98 Cost: 0.012932
Epoch: 21 / 20, MiniBatch: 70 / 98 Cost: 0.013729
Epoch: 21 / 20, MiniBatch: 77 / 98 Cost: 0.012772
Epoch: 21 / 20, MiniBatch: 84 / 98 Cost: 0.015625
Epoch: 21 / 20, MiniBatch: 91 / 98 Cost: 0.013598
Epoch: 21 / 20, MiniBatch: 98 / 98 Cost: 0.015548
Epoch: 22 / 20, MiniBatch: 7 / 98 Cost: 0.013009
Epoch: 22 / 20, MiniBatch: 14 / 98 Cost: 0.010488
Epoch: 22 / 20, MiniBatch: 21 / 98 Cost: 0.014024
Epoch: 22 / 20, MiniBatch: 28 / 98 Cost: 0.012276
Epoch: 22 / 20, MiniBatch: 35 / 98 Cost: 0.011603
Epoch: 22 / 20, MiniBatch: 42 / 98 Cost: 0.012125
Epoch: 22 / 20, MiniBatch: 49 / 98 Cost: 0.011259
Epoch: 22 / 20, MiniBatch: 56 / 98 Cost: 0.012713
Epoch: 22 / 20, MiniBatch: 63 / 98 Cost: 0.010573
Epoch: 22 / 20, MiniBatch: 70 / 98 Cost: 0.009556
Epoch: 22 / 20, MiniBatch: 77 / 98 Cost: 0.011392
Epoch: 22 / 20, MiniBatch: 84 / 98 Cost: 0.013149
Epoch: 22 / 20, MiniBatch: 91 / 98 Cost: 0.010491
Epoch: 22 / 20, MiniBatch: 98 / 98 Cost: 0.011307
Epoch: 23 / 20, MiniBatch: 7 / 98 Cost: 0.011396
Epoch: 23 / 20, MiniBatch: 14 / 98 Cost: 0.009429
Epoch: 23 / 20, MiniBatch: 21 / 98 Cost: 0.013184
Epoch: 23 / 20, MiniBatch: 28 / 98 Cost: 0.009605
Epoch: 23 / 20, MiniBatch: 35 / 98 Cost: 0.008965
Epoch: 23 / 20, MiniBatch: 42 / 98 Cost: 0.008910
Epoch: 23 / 20, MiniBatch: 49 / 98 Cost: 0.010540
Epoch: 23 / 20, MiniBatch: 56 / 98 Cost: 0.012284
Epoch: 23 / 20, MiniBatch: 63 / 98 Cost: 0.011761
Epoch: 23 / 20, MiniBatch: 70 / 98 Cost: 0.008977
Epoch: 23 / 20, MiniBatch: 77 / 98 Cost: 0.010270
Epoch: 23 / 20, MiniBatch: 84 / 98 Cost: 0.009782
Epoch: 23 / 20, MiniBatch: 91 / 98 Cost: 0.009682
Epoch: 23 / 20, MiniBatch: 98 / 98 Cost: 0.009218
Epoch: 24 / 20, MiniBatch: 7 / 98 Cost: 0.006584
Epoch: 24 / 20, MiniBatch: 14 / 98 Cost: 0.008448
Epoch: 24 / 20, MiniBatch: 21 / 98 Cost: 0.006191
Epoch: 24 / 20, MiniBatch: 28 / 98 Cost: 0.006901
Epoch: 24 / 20, MiniBatch: 35 / 98 Cost: 0.006080
Epoch: 24 / 20, MiniBatch: 42 / 98 Cost: 0.006107
Epoch: 24 / 20, MiniBatch: 49 / 98 Cost: 0.006724
Epoch: 24 / 20, MiniBatch: 56 / 98 Cost: 0.007534
Epoch: 24 / 20, MiniBatch: 63 / 98 Cost: 0.006813
Epoch: 24 / 20, MiniBatch: 70 / 98 Cost: 0.008115
Epoch: 24 / 20, MiniBatch: 77 / 98 Cost: 0.007136
Epoch: 24 / 20, MiniBatch: 84 / 98 Cost: 0.007132
Epoch: 24 / 20, MiniBatch: 91 / 98 Cost: 0.006447
Epoch: 24 / 20, MiniBatch: 98 / 98 Cost: 0.008381
Epoch: 25 / 20, MiniBatch: 7 / 98 Cost: 0.005428
Epoch: 25 / 20, MiniBatch: 14 / 98 Cost: 0.005931
Epoch: 25 / 20, MiniBatch: 21 / 98 Cost: 0.005464
Epoch: 25 / 20, MiniBatch: 28 / 98 Cost: 0.005541
Epoch: 25 / 20, MiniBatch: 35 / 98 Cost: 0.007516
Epoch: 25 / 20, MiniBatch: 42 / 98 Cost: 0.005336
Epoch: 25 / 20, MiniBatch: 49 / 98 Cost: 0.004889
Epoch: 25 / 20, MiniBatch: 56 / 98 Cost: 0.005918
Epoch: 25 / 20, MiniBatch: 63 / 98 Cost: 0.007341
Epoch: 25 / 20, MiniBatch: 70 / 98 Cost: 0.005903
Epoch: 25 / 20, MiniBatch: 77 / 98 Cost: 0.005014
Epoch: 25 / 20, MiniBatch: 84 / 98 Cost: 0.006436
Epoch: 25 / 20, MiniBatch: 91 / 98 Cost: 0.004509
Epoch: 25 / 20, MiniBatch: 98 / 98 Cost: 0.006305
Epoch: 26 / 20, MiniBatch: 7 / 98 Cost: 0.004577
Epoch: 26 / 20, MiniBatch: 14 / 98 Cost: 0.004641
Epoch: 26 / 20, MiniBatch: 21 / 98 Cost: 0.005289
Epoch: 26 / 20, MiniBatch: 28 / 98 Cost: 0.005076
Epoch: 26 / 20, MiniBatch: 35 / 98 Cost: 0.004706
Epoch: 26 / 20, MiniBatch: 42 / 98 Cost: 0.005503
Epoch: 26 / 20, MiniBatch: 49 / 98 Cost: 0.005498
Epoch: 26 / 20, MiniBatch: 56 / 98 Cost: 0.005030
Epoch: 26 / 20, MiniBatch: 63 / 98 Cost: 0.006352
Epoch: 26 / 20, MiniBatch: 70 / 98 Cost: 0.006357
Epoch: 26 / 20, MiniBatch: 77 / 98 Cost: 0.006323
Epoch: 26 / 20, MiniBatch: 84 / 98 Cost: 0.005584
Epoch: 26 / 20, MiniBatch: 91 / 98 Cost: 0.005145
Epoch: 26 / 20, MiniBatch: 98 / 98 Cost: 0.005976
Epoch: 27 / 20, MiniBatch: 7 / 98 Cost: 0.005454
Epoch: 27 / 20, MiniBatch: 14 / 98 Cost: 0.004951
Epoch: 27 / 20, MiniBatch: 21 / 98 Cost: 0.003784
Epoch: 27 / 20, MiniBatch: 28 / 98 Cost: 0.004381
Epoch: 27 / 20, MiniBatch: 35 / 98 Cost: 0.004688
Epoch: 27 / 20, MiniBatch: 42 / 98 Cost: 0.005200
Epoch: 27 / 20, MiniBatch: 49 / 98 Cost: 0.004482
Epoch: 27 / 20, MiniBatch: 56 / 98 Cost: 0.003680
Epoch: 27 / 20, MiniBatch: 63 / 98 Cost: 0.003630
Epoch: 27 / 20, MiniBatch: 70 / 98 Cost: 0.003555
Epoch: 27 / 20, MiniBatch: 77 / 98 Cost: 0.003986
Epoch: 27 / 20, MiniBatch: 84 / 98 Cost: 0.003816
Epoch: 27 / 20, MiniBatch: 91 / 98 Cost: 0.003804
Epoch: 27 / 20, MiniBatch: 98 / 98 Cost: 0.003325
Epoch: 28 / 20, MiniBatch: 7 / 98 Cost: 0.004562
Epoch: 28 / 20, MiniBatch: 14 / 98 Cost: 0.004389
Epoch: 28 / 20, MiniBatch: 21 / 98 Cost: 0.003956
Epoch: 28 / 20, MiniBatch: 28 / 98 Cost: 0.003937
Epoch: 28 / 20, MiniBatch: 35 / 98 Cost: 0.004016
Epoch: 28 / 20, MiniBatch: 42 / 98 Cost: 0.003536
Epoch: 28 / 20, MiniBatch: 49 / 98 Cost: 0.003621
Epoch: 28 / 20, MiniBatch: 56 / 98 Cost: 0.003807
Epoch: 28 / 20, MiniBatch: 63 / 98 Cost: 0.002687
Epoch: 28 / 20, MiniBatch: 70 / 98 Cost: 0.004149
Epoch: 28 / 20, MiniBatch: 77 / 98 Cost: 0.003752
Epoch: 28 / 20, MiniBatch: 84 / 98 Cost: 0.003324
Epoch: 28 / 20, MiniBatch: 91 / 98 Cost: 0.003605
Epoch: 28 / 20, MiniBatch: 98 / 98 Cost: 0.004231
Epoch: 29 / 20, MiniBatch: 7 / 98 Cost: 0.003654
Epoch: 29 / 20, MiniBatch: 14 / 98 Cost: 0.002931
Epoch: 29 / 20, MiniBatch: 21 / 98 Cost: 0.003395
Epoch: 29 / 20, MiniBatch: 28 / 98 Cost: 0.004251
Epoch: 29 / 20, MiniBatch: 35 / 98 Cost: 0.003246
Epoch: 29 / 20, MiniBatch: 42 / 98 Cost: 0.002560
Epoch: 29 / 20, MiniBatch: 49 / 98 Cost: 0.005143
Epoch: 29 / 20, MiniBatch: 56 / 98 Cost: 0.003097
Epoch: 29 / 20, MiniBatch: 63 / 98 Cost: 0.003453
Epoch: 29 / 20, MiniBatch: 70 / 98 Cost: 0.002455
Epoch: 29 / 20, MiniBatch: 77 / 98 Cost: 0.002177
Epoch: 29 / 20, MiniBatch: 84 / 98 Cost: 0.003019
Epoch: 29 / 20, MiniBatch: 91 / 98 Cost: 0.003509
Epoch: 29 / 20, MiniBatch: 98 / 98 Cost: 0.002866
Epoch: 30 / 20, MiniBatch: 7 / 98 Cost: 0.003286
Epoch: 30 / 20, MiniBatch: 14 / 98 Cost: 0.003422
Epoch: 30 / 20, MiniBatch: 21 / 98 Cost: 0.002289
Epoch: 30 / 20, MiniBatch: 28 / 98 Cost: 0.002934
Epoch: 30 / 20, MiniBatch: 35 / 98 Cost: 0.003320
Epoch: 30 / 20, MiniBatch: 42 / 98 Cost: 0.003421
Epoch: 30 / 20, MiniBatch: 49 / 98 Cost: 0.002932
Epoch: 30 / 20, MiniBatch: 56 / 98 Cost: 0.002617
Epoch: 30 / 20, MiniBatch: 63 / 98 Cost: 0.004079
Epoch: 30 / 20, MiniBatch: 70 / 98 Cost: 0.002989
Epoch: 30 / 20, MiniBatch: 77 / 98 Cost: 0.003825
Epoch: 30 / 20, MiniBatch: 84 / 98 Cost: 0.003290
Epoch: 30 / 20, MiniBatch: 91 / 98 Cost: 0.003377
Epoch: 30 / 20, MiniBatch: 98 / 98 Cost: 0.003252
Epoch: 31 / 20, MiniBatch: 7 / 98 Cost: 0.003309
Epoch: 31 / 20, MiniBatch: 14 / 98 Cost: 0.002127
Epoch: 31 / 20, MiniBatch: 21 / 98 Cost: 0.001741
Epoch: 31 / 20, MiniBatch: 28 / 98 Cost: 0.003589
Epoch: 31 / 20, MiniBatch: 35 / 98 Cost: 0.001949
Epoch: 31 / 20, MiniBatch: 42 / 98 Cost: 0.003138
Epoch: 31 / 20, MiniBatch: 49 / 98 Cost: 0.002594
Epoch: 31 / 20, MiniBatch: 56 / 98 Cost: 0.002692
Epoch: 31 / 20, MiniBatch: 63 / 98 Cost: 0.002758
Epoch: 31 / 20, MiniBatch: 70 / 98 Cost: 0.002314
Epoch: 31 / 20, MiniBatch: 77 / 98 Cost: 0.003312
Epoch: 31 / 20, MiniBatch: 84 / 98 Cost: 0.003211
Epoch: 31 / 20, MiniBatch: 91 / 98 Cost: 0.003079
Epoch: 31 / 20, MiniBatch: 98 / 98 Cost: 0.004196
Epoch: 32 / 20, MiniBatch: 7 / 98 Cost: 0.003010
Epoch: 32 / 20, MiniBatch: 14 / 98 Cost: 0.003889
Epoch: 32 / 20, MiniBatch: 21 / 98 Cost: 0.003909
Epoch: 32 / 20, MiniBatch: 28 / 98 Cost: 0.004113
Epoch: 32 / 20, MiniBatch: 35 / 98 Cost: 0.002576
Epoch: 32 / 20, MiniBatch: 42 / 98 Cost: 0.004046
Epoch: 32 / 20, MiniBatch: 49 / 98 Cost: 0.004304
Epoch: 32 / 20, MiniBatch: 56 / 98 Cost: 0.003629
Epoch: 32 / 20, MiniBatch: 63 / 98 Cost: 0.003347
Epoch: 32 / 20, MiniBatch: 70 / 98 Cost: 0.002714
Epoch: 32 / 20, MiniBatch: 77 / 98 Cost: 0.002484
Epoch: 32 / 20, MiniBatch: 84 / 98 Cost: 0.003529
Epoch: 32 / 20, MiniBatch: 91 / 98 Cost: 0.002796
Epoch: 32 / 20, MiniBatch: 98 / 98 Cost: 0.002807
Epoch: 33 / 20, MiniBatch: 7 / 98 Cost: 0.003361
Epoch: 33 / 20, MiniBatch: 14 / 98 Cost: 0.002960
Epoch: 33 / 20, MiniBatch: 21 / 98 Cost: 0.002710
Epoch: 33 / 20, MiniBatch: 28 / 98 Cost: 0.001999
Epoch: 33 / 20, MiniBatch: 35 / 98 Cost: 0.002200
Epoch: 33 / 20, MiniBatch: 42 / 98 Cost: 0.001961
Epoch: 33 / 20, MiniBatch: 49 / 98 Cost: 0.002149
Epoch: 33 / 20, MiniBatch: 56 / 98 Cost: 0.002510
Epoch: 33 / 20, MiniBatch: 63 / 98 Cost: 0.002711
Epoch: 33 / 20, MiniBatch: 70 / 98 Cost: 0.002266
Epoch: 33 / 20, MiniBatch: 77 / 98 Cost: 0.002895
Epoch: 33 / 20, MiniBatch: 84 / 98 Cost: 0.002804
Epoch: 33 / 20, MiniBatch: 91 / 98 Cost: 0.002539
Epoch: 33 / 20, MiniBatch: 98 / 98 Cost: 0.002051
Epoch: 34 / 20, MiniBatch: 7 / 98 Cost: 0.002102
Epoch: 34 / 20, MiniBatch: 14 / 98 Cost: 0.002651
Epoch: 34 / 20, MiniBatch: 21 / 98 Cost: 0.002901
Epoch: 34 / 20, MiniBatch: 28 / 98 Cost: 0.002176
Epoch: 34 / 20, MiniBatch: 35 / 98 Cost: 0.002724
Epoch: 34 / 20, MiniBatch: 42 / 98 Cost: 0.001936
Epoch: 34 / 20, MiniBatch: 49 / 98 Cost: 0.002092
Epoch: 34 / 20, MiniBatch: 56 / 98 Cost: 0.002478
Epoch: 34 / 20, MiniBatch: 63 / 98 Cost: 0.002757
Epoch: 34 / 20, MiniBatch: 70 / 98 Cost: 0.002398
Epoch: 34 / 20, MiniBatch: 77 / 98 Cost: 0.001705
Epoch: 34 / 20, MiniBatch: 84 / 98 Cost: 0.002140
Epoch: 34 / 20, MiniBatch: 91 / 98 Cost: 0.002215
Epoch: 34 / 20, MiniBatch: 98 / 98 Cost: 0.002400
Epoch: 35 / 20, MiniBatch: 7 / 98 Cost: 0.002196
Epoch: 35 / 20, MiniBatch: 14 / 98 Cost: 0.001717
Epoch: 35 / 20, MiniBatch: 21 / 98 Cost: 0.001688
Epoch: 35 / 20, MiniBatch: 28 / 98 Cost: 0.002546
Epoch: 35 / 20, MiniBatch: 35 / 98 Cost: 0.001350
Epoch: 35 / 20, MiniBatch: 42 / 98 Cost: 0.001959
Epoch: 35 / 20, MiniBatch: 49 / 98 Cost: 0.002389
Epoch: 35 / 20, MiniBatch: 56 / 98 Cost: 0.001880
Epoch: 35 / 20, MiniBatch: 63 / 98 Cost: 0.002410
Epoch: 35 / 20, MiniBatch: 70 / 98 Cost: 0.001767
Epoch: 35 / 20, MiniBatch: 77 / 98 Cost: 0.001771
Epoch: 35 / 20, MiniBatch: 84 / 98 Cost: 0.002375
Epoch: 35 / 20, MiniBatch: 91 / 98 Cost: 0.001758
Epoch: 35 / 20, MiniBatch: 98 / 98 Cost: 0.002251
Epoch: 36 / 20, MiniBatch: 7 / 98 Cost: 0.002323
Epoch: 36 / 20, MiniBatch: 14 / 98 Cost: 0.001951
Epoch: 36 / 20, MiniBatch: 21 / 98 Cost: 0.001732
Epoch: 36 / 20, MiniBatch: 28 / 98 Cost: 0.001470
Epoch: 36 / 20, MiniBatch: 35 / 98 Cost: 0.002422
Epoch: 36 / 20, MiniBatch: 42 / 98 Cost: 0.001798
Epoch: 36 / 20, MiniBatch: 49 / 98 Cost: 0.001767
Epoch: 36 / 20, MiniBatch: 56 / 98 Cost: 0.001851
Epoch: 36 / 20, MiniBatch: 63 / 98 Cost: 0.002611
Epoch: 36 / 20, MiniBatch: 70 / 98 Cost: 0.002052
Epoch: 36 / 20, MiniBatch: 77 / 98 Cost: 0.002119
Epoch: 36 / 20, MiniBatch: 84 / 98 Cost: 0.001605
Epoch: 36 / 20, MiniBatch: 91 / 98 Cost: 0.001408
Epoch: 36 / 20, MiniBatch: 98 / 98 Cost: 0.002140
Epoch: 37 / 20, MiniBatch: 7 / 98 Cost: 0.001319
Epoch: 37 / 20, MiniBatch: 14 / 98 Cost: 0.002720
Epoch: 37 / 20, MiniBatch: 21 / 98 Cost: 0.001515
Epoch: 37 / 20, MiniBatch: 28 / 98 Cost: 0.001618
Epoch: 37 / 20, MiniBatch: 35 / 98 Cost: 0.001412
Epoch: 37 / 20, MiniBatch: 42 / 98 Cost: 0.001503
Epoch: 37 / 20, MiniBatch: 49 / 98 Cost: 0.001829
Epoch: 37 / 20, MiniBatch: 56 / 98 Cost: 0.001967
Epoch: 37 / 20, MiniBatch: 63 / 98 Cost: 0.001361
Epoch: 37 / 20, MiniBatch: 70 / 98 Cost: 0.001753
Epoch: 37 / 20, MiniBatch: 77 / 98 Cost: 0.002417
Epoch: 37 / 20, MiniBatch: 84 / 98 Cost: 0.001664
Epoch: 37 / 20, MiniBatch: 91 / 98 Cost: 0.002184
Epoch: 37 / 20, MiniBatch: 98 / 98 Cost: 0.001702
Epoch: 38 / 20, MiniBatch: 7 / 98 Cost: 0.001552
Epoch: 38 / 20, MiniBatch: 14 / 98 Cost: 0.001396
Epoch: 38 / 20, MiniBatch: 21 / 98 Cost: 0.001382
Epoch: 38 / 20, MiniBatch: 28 / 98 Cost: 0.001325
Epoch: 38 / 20, MiniBatch: 35 / 98 Cost: 0.001506
Epoch: 38 / 20, MiniBatch: 42 / 98 Cost: 0.001359
Epoch: 38 / 20, MiniBatch: 49 / 98 Cost: 0.001838
Epoch: 38 / 20, MiniBatch: 56 / 98 Cost: 0.001948
Epoch: 38 / 20, MiniBatch: 63 / 98 Cost: 0.002034
Epoch: 38 / 20, MiniBatch: 70 / 98 Cost: 0.001562
Epoch: 38 / 20, MiniBatch: 77 / 98 Cost: 0.002953
Epoch: 38 / 20, MiniBatch: 84 / 98 Cost: 0.001824
Epoch: 38 / 20, MiniBatch: 91 / 98 Cost: 0.001738
Epoch: 38 / 20, MiniBatch: 98 / 98 Cost: 0.001753
Epoch: 39 / 20, MiniBatch: 7 / 98 Cost: 0.002072
Epoch: 39 / 20, MiniBatch: 14 / 98 Cost: 0.001421
Epoch: 39 / 20, MiniBatch: 21 / 98 Cost: 0.001465
Epoch: 39 / 20, MiniBatch: 28 / 98 Cost: 0.001921
Epoch: 39 / 20, MiniBatch: 35 / 98 Cost: 0.001275
Epoch: 39 / 20, MiniBatch: 42 / 98 Cost: 0.002652
Epoch: 39 / 20, MiniBatch: 49 / 98 Cost: 0.001151
Epoch: 39 / 20, MiniBatch: 56 / 98 Cost: 0.001616
Epoch: 39 / 20, MiniBatch: 63 / 98 Cost: 0.001608
Epoch: 39 / 20, MiniBatch: 70 / 98 Cost: 0.001873
Epoch: 39 / 20, MiniBatch: 77 / 98 Cost: 0.001444
Epoch: 39 / 20, MiniBatch: 84 / 98 Cost: 0.001506
Epoch: 39 / 20, MiniBatch: 91 / 98 Cost: 0.001802
Epoch: 39 / 20, MiniBatch: 98 / 98 Cost: 0.001679
Epoch: 40 / 20, MiniBatch: 7 / 98 Cost: 0.002307
Epoch: 40 / 20, MiniBatch: 14 / 98 Cost: 0.001455
Epoch: 40 / 20, MiniBatch: 21 / 98 Cost: 0.001907
Epoch: 40 / 20, MiniBatch: 28 / 98 Cost: 0.001946
Epoch: 40 / 20, MiniBatch: 35 / 98 Cost: 0.001634
Epoch: 40 / 20, MiniBatch: 42 / 98 Cost: 0.001537
Epoch: 40 / 20, MiniBatch: 49 / 98 Cost: 0.001500
Epoch: 40 / 20, MiniBatch: 56 / 98 Cost: 0.001143
Epoch: 40 / 20, MiniBatch: 63 / 98 Cost: 0.001189
Epoch: 40 / 20, MiniBatch: 70 / 98 Cost: 0.001444
Epoch: 40 / 20, MiniBatch: 77 / 98 Cost: 0.001119
Epoch: 40 / 20, MiniBatch: 84 / 98 Cost: 0.001630
Epoch: 40 / 20, MiniBatch: 91 / 98 Cost: 0.001037
Epoch: 40 / 20, MiniBatch: 98 / 98 Cost: 0.001387
Epoch: 41 / 20, MiniBatch: 7 / 98 Cost: 0.001645
Epoch: 41 / 20, MiniBatch: 14 / 98 Cost: 0.001563
Epoch: 41 / 20, MiniBatch: 21 / 98 Cost: 0.001064
Epoch: 41 / 20, MiniBatch: 28 / 98 Cost: 0.001107
Epoch: 41 / 20, MiniBatch: 35 / 98 Cost: 0.001360
Epoch: 41 / 20, MiniBatch: 42 / 98 Cost: 0.001653
Epoch: 41 / 20, MiniBatch: 49 / 98 Cost: 0.001278
Epoch: 41 / 20, MiniBatch: 56 / 98 Cost: 0.001193
Epoch: 41 / 20, MiniBatch: 63 / 98 Cost: 0.001431
Epoch: 41 / 20, MiniBatch: 70 / 98 Cost: 0.000889
Epoch: 41 / 20, MiniBatch: 77 / 98 Cost: 0.001150
Epoch: 41 / 20, MiniBatch: 84 / 98 Cost: 0.001028
Epoch: 41 / 20, MiniBatch: 91 / 98 Cost: 0.001335
Epoch: 41 / 20, MiniBatch: 98 / 98 Cost: 0.001067
Epoch: 42 / 20, MiniBatch: 7 / 98 Cost: 0.001335
Epoch: 42 / 20, MiniBatch: 14 / 98 Cost: 0.001368
Epoch: 42 / 20, MiniBatch: 21 / 98 Cost: 0.001082
Epoch: 42 / 20, MiniBatch: 28 / 98 Cost: 0.001132
Epoch: 42 / 20, MiniBatch: 35 / 98 Cost: 0.001356
Epoch: 42 / 20, MiniBatch: 42 / 98 Cost: 0.001113
Epoch: 42 / 20, MiniBatch: 49 / 98 Cost: 0.001367
Epoch: 42 / 20, MiniBatch: 56 / 98 Cost: 0.001307
Epoch: 42 / 20, MiniBatch: 63 / 98 Cost: 0.001013
Epoch: 42 / 20, MiniBatch: 70 / 98 Cost: 0.001120
Epoch: 42 / 20, MiniBatch: 77 / 98 Cost: 0.001478
Epoch: 42 / 20, MiniBatch: 84 / 98 Cost: 0.000933
Epoch: 42 / 20, MiniBatch: 91 / 98 Cost: 0.001008
Epoch: 42 / 20, MiniBatch: 98 / 98 Cost: 0.001063
Epoch: 43 / 20, MiniBatch: 7 / 98 Cost: 0.001103
Epoch: 43 / 20, MiniBatch: 14 / 98 Cost: 0.000991
Epoch: 43 / 20, MiniBatch: 21 / 98 Cost: 0.001251
Epoch: 43 / 20, MiniBatch: 28 / 98 Cost: 0.000878
Epoch: 43 / 20, MiniBatch: 35 / 98 Cost: 0.001175
Epoch: 43 / 20, MiniBatch: 42 / 98 Cost: 0.001487
Epoch: 43 / 20, MiniBatch: 49 / 98 Cost: 0.000952
Epoch: 43 / 20, MiniBatch: 56 / 98 Cost: 0.001080
Epoch: 43 / 20, MiniBatch: 63 / 98 Cost: 0.001159
Epoch: 43 / 20, MiniBatch: 70 / 98 Cost: 0.001103
Epoch: 43 / 20, MiniBatch: 77 / 98 Cost: 0.000849
Epoch: 43 / 20, MiniBatch: 84 / 98 Cost: 0.001054
Epoch: 43 / 20, MiniBatch: 91 / 98 Cost: 0.000886
Epoch: 43 / 20, MiniBatch: 98 / 98 Cost: 0.001431
Epoch: 44 / 20, MiniBatch: 7 / 98 Cost: 0.001566
Epoch: 44 / 20, MiniBatch: 14 / 98 Cost: 0.000776
Epoch: 44 / 20, MiniBatch: 21 / 98 Cost: 0.001598
Epoch: 44 / 20, MiniBatch: 28 / 98 Cost: 0.001483
Epoch: 44 / 20, MiniBatch: 35 / 98 Cost: 0.000828
Epoch: 44 / 20, MiniBatch: 42 / 98 Cost: 0.001043
Epoch: 44 / 20, MiniBatch: 49 / 98 Cost: 0.001236
Epoch: 44 / 20, MiniBatch: 56 / 98 Cost: 0.001220
Epoch: 44 / 20, MiniBatch: 63 / 98 Cost: 0.001210
Epoch: 44 / 20, MiniBatch: 70 / 98 Cost: 0.001655
Epoch: 44 / 20, MiniBatch: 77 / 98 Cost: 0.001122
Epoch: 44 / 20, MiniBatch: 84 / 98 Cost: 0.001172
Epoch: 44 / 20, MiniBatch: 91 / 98 Cost: 0.001085
Epoch: 44 / 20, MiniBatch: 98 / 98 Cost: 0.001190
Epoch: 45 / 20, MiniBatch: 7 / 98 Cost: 0.001410
Epoch: 45 / 20, MiniBatch: 14 / 98 Cost: 0.001010
Epoch: 45 / 20, MiniBatch: 21 / 98 Cost: 0.001375
Epoch: 45 / 20, MiniBatch: 28 / 98 Cost: 0.001323
Epoch: 45 / 20, MiniBatch: 35 / 98 Cost: 0.000821
Epoch: 45 / 20, MiniBatch: 42 / 98 Cost: 0.001058
Epoch: 45 / 20, MiniBatch: 49 / 98 Cost: 0.001221
Epoch: 45 / 20, MiniBatch: 56 / 98 Cost: 0.000943
Epoch: 45 / 20, MiniBatch: 63 / 98 Cost: 0.000907
Epoch: 45 / 20, MiniBatch: 70 / 98 Cost: 0.001548
Epoch: 45 / 20, MiniBatch: 77 / 98 Cost: 0.001274
Epoch: 45 / 20, MiniBatch: 84 / 98 Cost: 0.001379
Epoch: 45 / 20, MiniBatch: 91 / 98 Cost: 0.001285
Epoch: 45 / 20, MiniBatch: 98 / 98 Cost: 0.001453
Epoch: 46 / 20, MiniBatch: 7 / 98 Cost: 0.001271
Epoch: 46 / 20, MiniBatch: 14 / 98 Cost: 0.000927
Epoch: 46 / 20, MiniBatch: 21 / 98 Cost: 0.001241
Epoch: 46 / 20, MiniBatch: 28 / 98 Cost: 0.001124
Epoch: 46 / 20, MiniBatch: 35 / 98 Cost: 0.000900
Epoch: 46 / 20, MiniBatch: 42 / 98 Cost: 0.001392
Epoch: 46 / 20, MiniBatch: 49 / 98 Cost: 0.000848
Epoch: 46 / 20, MiniBatch: 56 / 98 Cost: 0.000928
Epoch: 46 / 20, MiniBatch: 63 / 98 Cost: 0.001287
Epoch: 46 / 20, MiniBatch: 70 / 98 Cost: 0.000812
Epoch: 46 / 20, MiniBatch: 77 / 98 Cost: 0.000991
Epoch: 46 / 20, MiniBatch: 84 / 98 Cost: 0.001148
Epoch: 46 / 20, MiniBatch: 91 / 98 Cost: 0.001106
Epoch: 46 / 20, MiniBatch: 98 / 98 Cost: 0.001637
Epoch: 47 / 20, MiniBatch: 7 / 98 Cost: 0.001005
Epoch: 47 / 20, MiniBatch: 14 / 98 Cost: 0.001224
Epoch: 47 / 20, MiniBatch: 21 / 98 Cost: 0.001105
Epoch: 47 / 20, MiniBatch: 28 / 98 Cost: 0.000900
Epoch: 47 / 20, MiniBatch: 35 / 98 Cost: 0.000994
Epoch: 47 / 20, MiniBatch: 42 / 98 Cost: 0.001216
Epoch: 47 / 20, MiniBatch: 49 / 98 Cost: 0.001171
Epoch: 47 / 20, MiniBatch: 56 / 98 Cost: 0.000714
Epoch: 47 / 20, MiniBatch: 63 / 98 Cost: 0.001027
Epoch: 47 / 20, MiniBatch: 70 / 98 Cost: 0.001362
Epoch: 47 / 20, MiniBatch: 77 / 98 Cost: 0.000941
Epoch: 47 / 20, MiniBatch: 84 / 98 Cost: 0.000753
Epoch: 47 / 20, MiniBatch: 91 / 98 Cost: 0.000951
Epoch: 47 / 20, MiniBatch: 98 / 98 Cost: 0.001098
Epoch: 48 / 20, MiniBatch: 7 / 98 Cost: 0.000938
Epoch: 48 / 20, MiniBatch: 14 / 98 Cost: 0.001433
Epoch: 48 / 20, MiniBatch: 21 / 98 Cost: 0.000940
Epoch: 48 / 20, MiniBatch: 28 / 98 Cost: 0.000853
Epoch: 48 / 20, MiniBatch: 35 / 98 Cost: 0.001070
Epoch: 48 / 20, MiniBatch: 42 / 98 Cost: 0.000767
Epoch: 48 / 20, MiniBatch: 49 / 98 Cost: 0.000954
Epoch: 48 / 20, MiniBatch: 56 / 98 Cost: 0.000916
Epoch: 48 / 20, MiniBatch: 63 / 98 Cost: 0.000872
Epoch: 48 / 20, MiniBatch: 70 / 98 Cost: 0.001329
Epoch: 48 / 20, MiniBatch: 77 / 98 Cost: 0.000888
Epoch: 48 / 20, MiniBatch: 84 / 98 Cost: 0.000769
Epoch: 48 / 20, MiniBatch: 91 / 98 Cost: 0.001527
Epoch: 48 / 20, MiniBatch: 98 / 98 Cost: 0.000785
Epoch: 49 / 20, MiniBatch: 7 / 98 Cost: 0.001310
Epoch: 49 / 20, MiniBatch: 14 / 98 Cost: 0.001011
Epoch: 49 / 20, MiniBatch: 21 / 98 Cost: 0.001082
Epoch: 49 / 20, MiniBatch: 28 / 98 Cost: 0.001102
Epoch: 49 / 20, MiniBatch: 35 / 98 Cost: 0.000776
Epoch: 49 / 20, MiniBatch: 42 / 98 Cost: 0.001097
Epoch: 49 / 20, MiniBatch: 49 / 98 Cost: 0.000947
Epoch: 49 / 20, MiniBatch: 56 / 98 Cost: 0.000998
Epoch: 49 / 20, MiniBatch: 63 / 98 Cost: 0.000883
Epoch: 49 / 20, MiniBatch: 70 / 98 Cost: 0.000793
Epoch: 49 / 20, MiniBatch: 77 / 98 Cost: 0.000967
Epoch: 49 / 20, MiniBatch: 84 / 98 Cost: 0.000882
Epoch: 49 / 20, MiniBatch: 91 / 98 Cost: 0.000836
Epoch: 49 / 20, MiniBatch: 98 / 98 Cost: 0.001280
Epoch: 50 / 20, MiniBatch: 7 / 98 Cost: 0.000657
Epoch: 50 / 20, MiniBatch: 14 / 98 Cost: 0.000776
Epoch: 50 / 20, MiniBatch: 21 / 98 Cost: 0.001085
Epoch: 50 / 20, MiniBatch: 28 / 98 Cost: 0.000511
Epoch: 50 / 20, MiniBatch: 35 / 98 Cost: 0.000873
Epoch: 50 / 20, MiniBatch: 42 / 98 Cost: 0.000978
Epoch: 50 / 20, MiniBatch: 49 / 98 Cost: 0.001037
Epoch: 50 / 20, MiniBatch: 56 / 98 Cost: 0.000648
Epoch: 50 / 20, MiniBatch: 63 / 98 Cost: 0.000907
Epoch: 50 / 20, MiniBatch: 70 / 98 Cost: 0.001139
Epoch: 50 / 20, MiniBatch: 77 / 98 Cost: 0.000751
Epoch: 50 / 20, MiniBatch: 84 / 98 Cost: 0.001272
Epoch: 50 / 20, MiniBatch: 91 / 98 Cost: 0.000924
Epoch: 50 / 20, MiniBatch: 98 / 98 Cost: 0.000644
Epoch: 51 / 20, MiniBatch: 7 / 98 Cost: 0.001445
Epoch: 51 / 20, MiniBatch: 14 / 98 Cost: 0.001052
Epoch: 51 / 20, MiniBatch: 21 / 98 Cost: 0.001126
Epoch: 51 / 20, MiniBatch: 28 / 98 Cost: 0.000747
Epoch: 51 / 20, MiniBatch: 35 / 98 Cost: 0.001234
Epoch: 51 / 20, MiniBatch: 42 / 98 Cost: 0.000732
Epoch: 51 / 20, MiniBatch: 49 / 98 Cost: 0.000840
Epoch: 51 / 20, MiniBatch: 56 / 98 Cost: 0.000650
Epoch: 51 / 20, MiniBatch: 63 / 98 Cost: 0.001327
Epoch: 51 / 20, MiniBatch: 70 / 98 Cost: 0.001011
Epoch: 51 / 20, MiniBatch: 77 / 98 Cost: 0.000738
Epoch: 51 / 20, MiniBatch: 84 / 98 Cost: 0.000845
Epoch: 51 / 20, MiniBatch: 91 / 98 Cost: 0.000773
Epoch: 51 / 20, MiniBatch: 98 / 98 Cost: 0.001138
Epoch: 52 / 20, MiniBatch: 7 / 98 Cost: 0.000865
Epoch: 52 / 20, MiniBatch: 14 / 98 Cost: 0.000790
Epoch: 52 / 20, MiniBatch: 21 / 98 Cost: 0.000629
Epoch: 52 / 20, MiniBatch: 28 / 98 Cost: 0.001009
Epoch: 52 / 20, MiniBatch: 35 / 98 Cost: 0.000781
Epoch: 52 / 20, MiniBatch: 42 / 98 Cost: 0.000963
Epoch: 52 / 20, MiniBatch: 49 / 98 Cost: 0.000841
Epoch: 52 / 20, MiniBatch: 56 / 98 Cost: 0.001146
Epoch: 52 / 20, MiniBatch: 63 / 98 Cost: 0.000825
Epoch: 52 / 20, MiniBatch: 70 / 98 Cost: 0.001351
Epoch: 52 / 20, MiniBatch: 77 / 98 Cost: 0.001444
Epoch: 52 / 20, MiniBatch: 84 / 98 Cost: 0.001014
Epoch: 52 / 20, MiniBatch: 91 / 98 Cost: 0.000926
Epoch: 52 / 20, MiniBatch: 98 / 98 Cost: 0.001022
Epoch: 53 / 20, MiniBatch: 7 / 98 Cost: 0.000728
Epoch: 53 / 20, MiniBatch: 14 / 98 Cost: 0.000758
Epoch: 53 / 20, MiniBatch: 21 / 98 Cost: 0.001069
Epoch: 53 / 20, MiniBatch: 28 / 98 Cost: 0.000831
Epoch: 53 / 20, MiniBatch: 35 / 98 Cost: 0.000569
Epoch: 53 / 20, MiniBatch: 42 / 98 Cost: 0.000762
Epoch: 53 / 20, MiniBatch: 49 / 98 Cost: 0.000509
Epoch: 53 / 20, MiniBatch: 56 / 98 Cost: 0.000745
Epoch: 53 / 20, MiniBatch: 63 / 98 Cost: 0.000833
Epoch: 53 / 20, MiniBatch: 70 / 98 Cost: 0.000794
Epoch: 53 / 20, MiniBatch: 77 / 98 Cost: 0.000784
Epoch: 53 / 20, MiniBatch: 84 / 98 Cost: 0.000632
Epoch: 53 / 20, MiniBatch: 91 / 98 Cost: 0.001031
Epoch: 53 / 20, MiniBatch: 98 / 98 Cost: 0.000718
Epoch: 54 / 20, MiniBatch: 7 / 98 Cost: 0.001081
Epoch: 54 / 20, MiniBatch: 14 / 98 Cost: 0.000669
Epoch: 54 / 20, MiniBatch: 21 / 98 Cost: 0.000537
Epoch: 54 / 20, MiniBatch: 28 / 98 Cost: 0.000668
Epoch: 54 / 20, MiniBatch: 35 / 98 Cost: 0.000606
Epoch: 54 / 20, MiniBatch: 42 / 98 Cost: 0.000780
Epoch: 54 / 20, MiniBatch: 49 / 98 Cost: 0.000568
Epoch: 54 / 20, MiniBatch: 56 / 98 Cost: 0.000803
Epoch: 54 / 20, MiniBatch: 63 / 98 Cost: 0.000548
Epoch: 54 / 20, MiniBatch: 70 / 98 Cost: 0.000793
Epoch: 54 / 20, MiniBatch: 77 / 98 Cost: 0.000760
Epoch: 54 / 20, MiniBatch: 84 / 98 Cost: 0.000641
Epoch: 54 / 20, MiniBatch: 91 / 98 Cost: 0.000608
Epoch: 54 / 20, MiniBatch: 98 / 98 Cost: 0.000493
Epoch: 55 / 20, MiniBatch: 7 / 98 Cost: 0.000701
Epoch: 55 / 20, MiniBatch: 14 / 98 Cost: 0.000644
Epoch: 55 / 20, MiniBatch: 21 / 98 Cost: 0.000563
Epoch: 55 / 20, MiniBatch: 28 / 98 Cost: 0.000555
Epoch: 55 / 20, MiniBatch: 35 / 98 Cost: 0.000750
Epoch: 55 / 20, MiniBatch: 42 / 98 Cost: 0.000514
Epoch: 55 / 20, MiniBatch: 49 / 98 Cost: 0.000593
Epoch: 55 / 20, MiniBatch: 56 / 98 Cost: 0.000775
Epoch: 55 / 20, MiniBatch: 63 / 98 Cost: 0.000621
Epoch: 55 / 20, MiniBatch: 70 / 98 Cost: 0.000771
Epoch: 55 / 20, MiniBatch: 77 / 98 Cost: 0.000486
Epoch: 55 / 20, MiniBatch: 84 / 98 Cost: 0.000738
Epoch: 55 / 20, MiniBatch: 91 / 98 Cost: 0.000674
Epoch: 55 / 20, MiniBatch: 98 / 98 Cost: 0.000567
Epoch: 56 / 20, MiniBatch: 7 / 98 Cost: 0.000858
Epoch: 56 / 20, MiniBatch: 14 / 98 Cost: 0.000624
Epoch: 56 / 20, MiniBatch: 21 / 98 Cost: 0.001723
Epoch: 56 / 20, MiniBatch: 28 / 98 Cost: 0.000770
Epoch: 56 / 20, MiniBatch: 35 / 98 Cost: 0.000970
Epoch: 56 / 20, MiniBatch: 42 / 98 Cost: 0.000859
Epoch: 56 / 20, MiniBatch: 49 / 98 Cost: 0.000764
Epoch: 56 / 20, MiniBatch: 56 / 98 Cost: 0.000856
Epoch: 56 / 20, MiniBatch: 63 / 98 Cost: 0.000697
Epoch: 56 / 20, MiniBatch: 70 / 98 Cost: 0.000546
Epoch: 56 / 20, MiniBatch: 77 / 98 Cost: 0.000692
Epoch: 56 / 20, MiniBatch: 84 / 98 Cost: 0.000671
Epoch: 56 / 20, MiniBatch: 91 / 98 Cost: 0.000805
Epoch: 56 / 20, MiniBatch: 98 / 98 Cost: 0.000898
Epoch: 57 / 20, MiniBatch: 7 / 98 Cost: 0.000619
Epoch: 57 / 20, MiniBatch: 14 / 98 Cost: 0.000827
Epoch: 57 / 20, MiniBatch: 21 / 98 Cost: 0.000482
Epoch: 57 / 20, MiniBatch: 28 / 98 Cost: 0.000603
Epoch: 57 / 20, MiniBatch: 35 / 98 Cost: 0.000841
Epoch: 57 / 20, MiniBatch: 42 / 98 Cost: 0.000567
Epoch: 57 / 20, MiniBatch: 49 / 98 Cost: 0.000613
Epoch: 57 / 20, MiniBatch: 56 / 98 Cost: 0.000578
Epoch: 57 / 20, MiniBatch: 63 / 98 Cost: 0.000972
Epoch: 57 / 20, MiniBatch: 70 / 98 Cost: 0.000942
Epoch: 57 / 20, MiniBatch: 77 / 98 Cost: 0.001097
Epoch: 57 / 20, MiniBatch: 84 / 98 Cost: 0.000803
Epoch: 57 / 20, MiniBatch: 91 / 98 Cost: 0.000731
Epoch: 57 / 20, MiniBatch: 98 / 98 Cost: 0.000692
Epoch: 58 / 20, MiniBatch: 7 / 98 Cost: 0.000654
Epoch: 58 / 20, MiniBatch: 14 / 98 Cost: 0.000818
Epoch: 58 / 20, MiniBatch: 21 / 98 Cost: 0.000563
Epoch: 58 / 20, MiniBatch: 28 / 98 Cost: 0.000483
Epoch: 58 / 20, MiniBatch: 35 / 98 Cost: 0.000470
Epoch: 58 / 20, MiniBatch: 42 / 98 Cost: 0.000561
Epoch: 58 / 20, MiniBatch: 49 / 98 Cost: 0.000573
Epoch: 58 / 20, MiniBatch: 56 / 98 Cost: 0.000622
Epoch: 58 / 20, MiniBatch: 63 / 98 Cost: 0.000404
Epoch: 58 / 20, MiniBatch: 70 / 98 Cost: 0.000741
Epoch: 58 / 20, MiniBatch: 77 / 98 Cost: 0.000554
Epoch: 58 / 20, MiniBatch: 84 / 98 Cost: 0.000498
Epoch: 58 / 20, MiniBatch: 91 / 98 Cost: 0.000524
Epoch: 58 / 20, MiniBatch: 98 / 98 Cost: 0.000574
Epoch: 59 / 20, MiniBatch: 7 / 98 Cost: 0.000486
Epoch: 59 / 20, MiniBatch: 14 / 98 Cost: 0.000879
Epoch: 59 / 20, MiniBatch: 21 / 98 Cost: 0.000563
Epoch: 59 / 20, MiniBatch: 28 / 98 Cost: 0.001419
Epoch: 59 / 20, MiniBatch: 35 / 98 Cost: 0.001012
Epoch: 59 / 20, MiniBatch: 42 / 98 Cost: 0.000923
Epoch: 59 / 20, MiniBatch: 49 / 98 Cost: 0.000668
Epoch: 59 / 20, MiniBatch: 56 / 98 Cost: 0.000582
Epoch: 59 / 20, MiniBatch: 63 / 98 Cost: 0.000861
Epoch: 59 / 20, MiniBatch: 70 / 98 Cost: 0.000845
Epoch: 59 / 20, MiniBatch: 77 / 98 Cost: 0.000817
Epoch: 59 / 20, MiniBatch: 84 / 98 Cost: 0.000858
Epoch: 59 / 20, MiniBatch: 91 / 98 Cost: 0.000834
Epoch: 59 / 20, MiniBatch: 98 / 98 Cost: 0.000875
Epoch: 60 / 20, MiniBatch: 7 / 98 Cost: 0.000618
Epoch: 60 / 20, MiniBatch: 14 / 98 Cost: 0.000595
Epoch: 60 / 20, MiniBatch: 21 / 98 Cost: 0.000743
Epoch: 60 / 20, MiniBatch: 28 / 98 Cost: 0.000640
Epoch: 60 / 20, MiniBatch: 35 / 98 Cost: 0.000472
Epoch: 60 / 20, MiniBatch: 42 / 98 Cost: 0.000786
Epoch: 60 / 20, MiniBatch: 49 / 98 Cost: 0.000713
Epoch: 60 / 20, MiniBatch: 56 / 98 Cost: 0.000844
Epoch: 60 / 20, MiniBatch: 63 / 98 Cost: 0.001157
Epoch: 60 / 20, MiniBatch: 70 / 98 Cost: 0.000616
Epoch: 60 / 20, MiniBatch: 77 / 98 Cost: 0.001064
Epoch: 60 / 20, MiniBatch: 84 / 98 Cost: 0.000636
Epoch: 60 / 20, MiniBatch: 91 / 98 Cost: 0.000963
Epoch: 60 / 20, MiniBatch: 98 / 98 Cost: 0.000586
Epoch: 61 / 20, MiniBatch: 7 / 98 Cost: 0.001029
Epoch: 61 / 20, MiniBatch: 14 / 98 Cost: 0.000625
Epoch: 61 / 20, MiniBatch: 21 / 98 Cost: 0.000580
Epoch: 61 / 20, MiniBatch: 28 / 98 Cost: 0.000678
Epoch: 61 / 20, MiniBatch: 35 / 98 Cost: 0.000617
Epoch: 61 / 20, MiniBatch: 42 / 98 Cost: 0.000630
Epoch: 61 / 20, MiniBatch: 49 / 98 Cost: 0.000627
Epoch: 61 / 20, MiniBatch: 56 / 98 Cost: 0.000658
Epoch: 61 / 20, MiniBatch: 63 / 98 Cost: 0.000610
Epoch: 61 / 20, MiniBatch: 70 / 98 Cost: 0.000444
Epoch: 61 / 20, MiniBatch: 77 / 98 Cost: 0.000511
Epoch: 61 / 20, MiniBatch: 84 / 98 Cost: 0.000497
Epoch: 61 / 20, MiniBatch: 91 / 98 Cost: 0.000581
Epoch: 61 / 20, MiniBatch: 98 / 98 Cost: 0.000621
Epoch: 62 / 20, MiniBatch: 7 / 98 Cost: 0.000527
Epoch: 62 / 20, MiniBatch: 14 / 98 Cost: 0.000485
Epoch: 62 / 20, MiniBatch: 21 / 98 Cost: 0.000574
Epoch: 62 / 20, MiniBatch: 28 / 98 Cost: 0.000475
Epoch: 62 / 20, MiniBatch: 35 / 98 Cost: 0.000644
Epoch: 62 / 20, MiniBatch: 42 / 98 Cost: 0.000644
Epoch: 62 / 20, MiniBatch: 49 / 98 Cost: 0.000404
Epoch: 62 / 20, MiniBatch: 56 / 98 Cost: 0.000401
Epoch: 62 / 20, MiniBatch: 63 / 98 Cost: 0.000815
Epoch: 62 / 20, MiniBatch: 70 / 98 Cost: 0.000352
Epoch: 62 / 20, MiniBatch: 77 / 98 Cost: 0.000770
Epoch: 62 / 20, MiniBatch: 84 / 98 Cost: 0.000464
Epoch: 62 / 20, MiniBatch: 91 / 98 Cost: 0.000462
Epoch: 62 / 20, MiniBatch: 98 / 98 Cost: 0.000535
Epoch: 63 / 20, MiniBatch: 7 / 98 Cost: 0.000494
Epoch: 63 / 20, MiniBatch: 14 / 98 Cost: 0.000501
Epoch: 63 / 20, MiniBatch: 21 / 98 Cost: 0.000536
Epoch: 63 / 20, MiniBatch: 28 / 98 Cost: 0.000571
Epoch: 63 / 20, MiniBatch: 35 / 98 Cost: 0.000440
Epoch: 63 / 20, MiniBatch: 42 / 98 Cost: 0.000535
Epoch: 63 / 20, MiniBatch: 49 / 98 Cost: 0.000506
Epoch: 63 / 20, MiniBatch: 56 / 98 Cost: 0.000531
Epoch: 63 / 20, MiniBatch: 63 / 98 Cost: 0.000522
Epoch: 63 / 20, MiniBatch: 70 / 98 Cost: 0.000559
Epoch: 63 / 20, MiniBatch: 77 / 98 Cost: 0.000581
Epoch: 63 / 20, MiniBatch: 84 / 98 Cost: 0.000457
Epoch: 63 / 20, MiniBatch: 91 / 98 Cost: 0.000703
Epoch: 63 / 20, MiniBatch: 98 / 98 Cost: 0.000554
Epoch: 64 / 20, MiniBatch: 7 / 98 Cost: 0.000434
Epoch: 64 / 20, MiniBatch: 14 / 98 Cost: 0.000449
Epoch: 64 / 20, MiniBatch: 21 / 98 Cost: 0.000622
Epoch: 64 / 20, MiniBatch: 28 / 98 Cost: 0.001021
Epoch: 64 / 20, MiniBatch: 35 / 98 Cost: 0.000633
Epoch: 64 / 20, MiniBatch: 42 / 98 Cost: 0.000764
Epoch: 64 / 20, MiniBatch: 49 / 98 Cost: 0.000864
Epoch: 64 / 20, MiniBatch: 56 / 98 Cost: 0.000518
Epoch: 64 / 20, MiniBatch: 63 / 98 Cost: 0.001153
Epoch: 64 / 20, MiniBatch: 70 / 98 Cost: 0.000620
Epoch: 64 / 20, MiniBatch: 77 / 98 Cost: 0.000643
Epoch: 64 / 20, MiniBatch: 84 / 98 Cost: 0.000550
Epoch: 64 / 20, MiniBatch: 91 / 98 Cost: 0.000498
Epoch: 64 / 20, MiniBatch: 98 / 98 Cost: 0.000484
Epoch: 65 / 20, MiniBatch: 7 / 98 Cost: 0.000413
Epoch: 65 / 20, MiniBatch: 14 / 98 Cost: 0.000512
Epoch: 65 / 20, MiniBatch: 21 / 98 Cost: 0.000538
Epoch: 65 / 20, MiniBatch: 28 / 98 Cost: 0.000433
Epoch: 65 / 20, MiniBatch: 35 / 98 Cost: 0.000474
Epoch: 65 / 20, MiniBatch: 42 / 98 Cost: 0.000539
Epoch: 65 / 20, MiniBatch: 49 / 98 Cost: 0.000500
Epoch: 65 / 20, MiniBatch: 56 / 98 Cost: 0.000362
Epoch: 65 / 20, MiniBatch: 63 / 98 Cost: 0.000417
Epoch: 65 / 20, MiniBatch: 70 / 98 Cost: 0.000486
Epoch: 65 / 20, MiniBatch: 77 / 98 Cost: 0.000456
Epoch: 65 / 20, MiniBatch: 84 / 98 Cost: 0.000473
Epoch: 65 / 20, MiniBatch: 91 / 98 Cost: 0.000410
Epoch: 65 / 20, MiniBatch: 98 / 98 Cost: 0.000410
Epoch: 66 / 20, MiniBatch: 7 / 98 Cost: 0.000383
Epoch: 66 / 20, MiniBatch: 14 / 98 Cost: 0.000590
Epoch: 66 / 20, MiniBatch: 21 / 98 Cost: 0.000359
Epoch: 66 / 20, MiniBatch: 28 / 98 Cost: 0.000620
Epoch: 66 / 20, MiniBatch: 35 / 98 Cost: 0.000577
Epoch: 66 / 20, MiniBatch: 42 / 98 Cost: 0.000449
Epoch: 66 / 20, MiniBatch: 49 / 98 Cost: 0.000539
Epoch: 66 / 20, MiniBatch: 56 / 98 Cost: 0.000496
Epoch: 66 / 20, MiniBatch: 63 / 98 Cost: 0.000540
Epoch: 66 / 20, MiniBatch: 70 / 98 Cost: 0.000328
Epoch: 66 / 20, MiniBatch: 77 / 98 Cost: 0.000686
Epoch: 66 / 20, MiniBatch: 84 / 98 Cost: 0.000542
Epoch: 66 / 20, MiniBatch: 91 / 98 Cost: 0.000459
Epoch: 66 / 20, MiniBatch: 98 / 98 Cost: 0.000485
Epoch: 67 / 20, MiniBatch: 7 / 98 Cost: 0.000355
Epoch: 67 / 20, MiniBatch: 14 / 98 Cost: 0.000334
Epoch: 67 / 20, MiniBatch: 21 / 98 Cost: 0.000432
Epoch: 67 / 20, MiniBatch: 28 / 98 Cost: 0.000360
Epoch: 67 / 20, MiniBatch: 35 / 98 Cost: 0.000505
Epoch: 67 / 20, MiniBatch: 42 / 98 Cost: 0.000864
Epoch: 67 / 20, MiniBatch: 49 / 98 Cost: 0.000464
Epoch: 67 / 20, MiniBatch: 56 / 98 Cost: 0.000628
Epoch: 67 / 20, MiniBatch: 63 / 98 Cost: 0.000727
Epoch: 67 / 20, MiniBatch: 70 / 98 Cost: 0.000491
Epoch: 67 / 20, MiniBatch: 77 / 98 Cost: 0.000339
Epoch: 67 / 20, MiniBatch: 84 / 98 Cost: 0.000498
Epoch: 67 / 20, MiniBatch: 91 / 98 Cost: 0.000493
Epoch: 67 / 20, MiniBatch: 98 / 98 Cost: 0.000624
Epoch: 68 / 20, MiniBatch: 7 / 98 Cost: 0.000337
Epoch: 68 / 20, MiniBatch: 14 / 98 Cost: 0.000551
Epoch: 68 / 20, MiniBatch: 21 / 98 Cost: 0.000438
Epoch: 68 / 20, MiniBatch: 28 / 98 Cost: 0.000482
Epoch: 68 / 20, MiniBatch: 35 / 98 Cost: 0.000499
Epoch: 68 / 20, MiniBatch: 42 / 98 Cost: 0.000380
Epoch: 68 / 20, MiniBatch: 49 / 98 Cost: 0.000421
Epoch: 68 / 20, MiniBatch: 56 / 98 Cost: 0.000507
Epoch: 68 / 20, MiniBatch: 63 / 98 Cost: 0.000409
Epoch: 68 / 20, MiniBatch: 70 / 98 Cost: 0.000473
Epoch: 68 / 20, MiniBatch: 77 / 98 Cost: 0.000406
Epoch: 68 / 20, MiniBatch: 84 / 98 Cost: 0.000632
Epoch: 68 / 20, MiniBatch: 91 / 98 Cost: 0.000434
Epoch: 68 / 20, MiniBatch: 98 / 98 Cost: 0.000390
Epoch: 69 / 20, MiniBatch: 7 / 98 Cost: 0.001084
Epoch: 69 / 20, MiniBatch: 14 / 98 Cost: 0.000551
Epoch: 69 / 20, MiniBatch: 21 / 98 Cost: 0.000498
Epoch: 69 / 20, MiniBatch: 28 / 98 Cost: 0.000528
Epoch: 69 / 20, MiniBatch: 35 / 98 Cost: 0.000493
Epoch: 69 / 20, MiniBatch: 42 / 98 Cost: 0.000722
Epoch: 69 / 20, MiniBatch: 49 / 98 Cost: 0.000532
Epoch: 69 / 20, MiniBatch: 56 / 98 Cost: 0.000436
Epoch: 69 / 20, MiniBatch: 63 / 98 Cost: 0.000325
Epoch: 69 / 20, MiniBatch: 70 / 98 Cost: 0.000432
Epoch: 69 / 20, MiniBatch: 77 / 98 Cost: 0.000507
Epoch: 69 / 20, MiniBatch: 84 / 98 Cost: 0.000436
Epoch: 69 / 20, MiniBatch: 91 / 98 Cost: 0.000489
Epoch: 69 / 20, MiniBatch: 98 / 98 Cost: 0.000706
Epoch: 70 / 20, MiniBatch: 7 / 98 Cost: 0.000498
Epoch: 70 / 20, MiniBatch: 14 / 98 Cost: 0.000358
Epoch: 70 / 20, MiniBatch: 21 / 98 Cost: 0.000463
Epoch: 70 / 20, MiniBatch: 28 / 98 Cost: 0.000344
Epoch: 70 / 20, MiniBatch: 35 / 98 Cost: 0.000877
Epoch: 70 / 20, MiniBatch: 42 / 98 Cost: 0.000489
Epoch: 70 / 20, MiniBatch: 49 / 98 Cost: 0.000516
Epoch: 70 / 20, MiniBatch: 56 / 98 Cost: 0.000460
Epoch: 70 / 20, MiniBatch: 63 / 98 Cost: 0.000366
Epoch: 70 / 20, MiniBatch: 70 / 98 Cost: 0.000433
Epoch: 70 / 20, MiniBatch: 77 / 98 Cost: 0.000395
Epoch: 70 / 20, MiniBatch: 84 / 98 Cost: 0.000596
Epoch: 70 / 20, MiniBatch: 91 / 98 Cost: 0.000406
Epoch: 70 / 20, MiniBatch: 98 / 98 Cost: 0.000425
Epoch: 71 / 20, MiniBatch: 7 / 98 Cost: 0.000538
Epoch: 71 / 20, MiniBatch: 14 / 98 Cost: 0.000361
Epoch: 71 / 20, MiniBatch: 21 / 98 Cost: 0.000400
Epoch: 71 / 20, MiniBatch: 28 / 98 Cost: 0.000468
Epoch: 71 / 20, MiniBatch: 35 / 98 Cost: 0.000291
Epoch: 71 / 20, MiniBatch: 42 / 98 Cost: 0.000562
Epoch: 71 / 20, MiniBatch: 49 / 98 Cost: 0.000345
Epoch: 71 / 20, MiniBatch: 56 / 98 Cost: 0.000382
Epoch: 71 / 20, MiniBatch: 63 / 98 Cost: 0.000454
Epoch: 71 / 20, MiniBatch: 70 / 98 Cost: 0.000347
Epoch: 71 / 20, MiniBatch: 77 / 98 Cost: 0.000484
Epoch: 71 / 20, MiniBatch: 84 / 98 Cost: 0.000313
Epoch: 71 / 20, MiniBatch: 91 / 98 Cost: 0.000360
Epoch: 71 / 20, MiniBatch: 98 / 98 Cost: 0.000370
Epoch: 72 / 20, MiniBatch: 7 / 98 Cost: 0.000373
Epoch: 72 / 20, MiniBatch: 14 / 98 Cost: 0.000538
Epoch: 72 / 20, MiniBatch: 21 / 98 Cost: 0.000541
Epoch: 72 / 20, MiniBatch: 28 / 98 Cost: 0.000317
Epoch: 72 / 20, MiniBatch: 35 / 98 Cost: 0.000332
Epoch: 72 / 20, MiniBatch: 42 / 98 Cost: 0.000413
Epoch: 72 / 20, MiniBatch: 49 / 98 Cost: 0.000452
Epoch: 72 / 20, MiniBatch: 56 / 98 Cost: 0.000304
Epoch: 72 / 20, MiniBatch: 63 / 98 Cost: 0.000276
Epoch: 72 / 20, MiniBatch: 70 / 98 Cost: 0.000383
Epoch: 72 / 20, MiniBatch: 77 / 98 Cost: 0.000367
Epoch: 72 / 20, MiniBatch: 84 / 98 Cost: 0.000395
Epoch: 72 / 20, MiniBatch: 91 / 98 Cost: 0.000292
Epoch: 72 / 20, MiniBatch: 98 / 98 Cost: 0.000330
Epoch: 73 / 20, MiniBatch: 7 / 98 Cost: 0.000328
Epoch: 73 / 20, MiniBatch: 14 / 98 Cost: 0.000352
Epoch: 73 / 20, MiniBatch: 21 / 98 Cost: 0.000294
Epoch: 73 / 20, MiniBatch: 28 / 98 Cost: 0.000285
Epoch: 73 / 20, MiniBatch: 35 / 98 Cost: 0.000412
Epoch: 73 / 20, MiniBatch: 42 / 98 Cost: 0.000420
Epoch: 73 / 20, MiniBatch: 49 / 98 Cost: 0.000314
Epoch: 73 / 20, MiniBatch: 56 / 98 Cost: 0.000323
Epoch: 73 / 20, MiniBatch: 63 / 98 Cost: 0.000294
Epoch: 73 / 20, MiniBatch: 70 / 98 Cost: 0.000324
Epoch: 73 / 20, MiniBatch: 77 / 98 Cost: 0.000445
Epoch: 73 / 20, MiniBatch: 84 / 98 Cost: 0.000316
Epoch: 73 / 20, MiniBatch: 91 / 98 Cost: 0.000310
Epoch: 73 / 20, MiniBatch: 98 / 98 Cost: 0.000505
Epoch: 74 / 20, MiniBatch: 7 / 98 Cost: 0.000376
Epoch: 74 / 20, MiniBatch: 14 / 98 Cost: 0.000364
Epoch: 74 / 20, MiniBatch: 21 / 98 Cost: 0.000347
Epoch: 74 / 20, MiniBatch: 28 / 98 Cost: 0.000680
Epoch: 74 / 20, MiniBatch: 35 / 98 Cost: 0.000338
Epoch: 74 / 20, MiniBatch: 42 / 98 Cost: 0.000584
Epoch: 74 / 20, MiniBatch: 49 / 98 Cost: 0.000403
Epoch: 74 / 20, MiniBatch: 56 / 98 Cost: 0.000357
Epoch: 74 / 20, MiniBatch: 63 / 98 Cost: 0.000252
Epoch: 74 / 20, MiniBatch: 70 / 98 Cost: 0.000411
Epoch: 74 / 20, MiniBatch: 77 / 98 Cost: 0.000482
Epoch: 74 / 20, MiniBatch: 84 / 98 Cost: 0.000765
Epoch: 74 / 20, MiniBatch: 91 / 98 Cost: 0.000500
Epoch: 74 / 20, MiniBatch: 98 / 98 Cost: 0.000326
Epoch: 75 / 20, MiniBatch: 7 / 98 Cost: 0.000430
Epoch: 75 / 20, MiniBatch: 14 / 98 Cost: 0.000469
Epoch: 75 / 20, MiniBatch: 21 / 98 Cost: 0.000353
Epoch: 75 / 20, MiniBatch: 28 / 98 Cost: 0.000324
Epoch: 75 / 20, MiniBatch: 35 / 98 Cost: 0.000530
Epoch: 75 / 20, MiniBatch: 42 / 98 Cost: 0.000338
Epoch: 75 / 20, MiniBatch: 49 / 98 Cost: 0.000448
Epoch: 75 / 20, MiniBatch: 56 / 98 Cost: 0.000339
Epoch: 75 / 20, MiniBatch: 63 / 98 Cost: 0.000366
Epoch: 75 / 20, MiniBatch: 70 / 98 Cost: 0.000330
Epoch: 75 / 20, MiniBatch: 77 / 98 Cost: 0.000326
Epoch: 75 / 20, MiniBatch: 84 / 98 Cost: 0.000299
Epoch: 75 / 20, MiniBatch: 91 / 98 Cost: 0.000359
Epoch: 75 / 20, MiniBatch: 98 / 98 Cost: 0.000398
Epoch: 76 / 20, MiniBatch: 7 / 98 Cost: 0.000397
Epoch: 76 / 20, MiniBatch: 14 / 98 Cost: 0.000388
Epoch: 76 / 20, MiniBatch: 21 / 98 Cost: 0.000482
Epoch: 76 / 20, MiniBatch: 28 / 98 Cost: 0.000320
Epoch: 76 / 20, MiniBatch: 35 / 98 Cost: 0.000514
Epoch: 76 / 20, MiniBatch: 42 / 98 Cost: 0.000456
Epoch: 76 / 20, MiniBatch: 49 / 98 Cost: 0.000398
Epoch: 76 / 20, MiniBatch: 56 / 98 Cost: 0.000353
Epoch: 76 / 20, MiniBatch: 63 / 98 Cost: 0.000372
Epoch: 76 / 20, MiniBatch: 70 / 98 Cost: 0.000326
Epoch: 76 / 20, MiniBatch: 77 / 98 Cost: 0.000338
Epoch: 76 / 20, MiniBatch: 84 / 98 Cost: 0.000369
Epoch: 76 / 20, MiniBatch: 91 / 98 Cost: 0.000442
Epoch: 76 / 20, MiniBatch: 98 / 98 Cost: 0.000295
Epoch: 77 / 20, MiniBatch: 7 / 98 Cost: 0.000289
Epoch: 77 / 20, MiniBatch: 14 / 98 Cost: 0.000325
Epoch: 77 / 20, MiniBatch: 21 / 98 Cost: 0.000283
Epoch: 77 / 20, MiniBatch: 28 / 98 Cost: 0.000287
Epoch: 77 / 20, MiniBatch: 35 / 98 Cost: 0.000428
Epoch: 77 / 20, MiniBatch: 42 / 98 Cost: 0.000258
Epoch: 77 / 20, MiniBatch: 49 / 98 Cost: 0.000374
Epoch: 77 / 20, MiniBatch: 56 / 98 Cost: 0.000289
Epoch: 77 / 20, MiniBatch: 63 / 98 Cost: 0.000277
Epoch: 77 / 20, MiniBatch: 70 / 98 Cost: 0.000271
Epoch: 77 / 20, MiniBatch: 77 / 98 Cost: 0.000415
Epoch: 77 / 20, MiniBatch: 84 / 98 Cost: 0.000350
Epoch: 77 / 20, MiniBatch: 91 / 98 Cost: 0.000294
Epoch: 77 / 20, MiniBatch: 98 / 98 Cost: 0.000346
Epoch: 78 / 20, MiniBatch: 7 / 98 Cost: 0.000378
Epoch: 78 / 20, MiniBatch: 14 / 98 Cost: 0.000431
Epoch: 78 / 20, MiniBatch: 21 / 98 Cost: 0.000426
Epoch: 78 / 20, MiniBatch: 28 / 98 Cost: 0.000294
Epoch: 78 / 20, MiniBatch: 35 / 98 Cost: 0.000299
Epoch: 78 / 20, MiniBatch: 42 / 98 Cost: 0.000257
Epoch: 78 / 20, MiniBatch: 49 / 98 Cost: 0.000268
Epoch: 78 / 20, MiniBatch: 56 / 98 Cost: 0.000406
Epoch: 78 / 20, MiniBatch: 63 / 98 Cost: 0.000486
Epoch: 78 / 20, MiniBatch: 70 / 98 Cost: 0.000321
Epoch: 78 / 20, MiniBatch: 77 / 98 Cost: 0.000390
Epoch: 78 / 20, MiniBatch: 84 / 98 Cost: 0.000363
Epoch: 78 / 20, MiniBatch: 91 / 98 Cost: 0.000706
Epoch: 78 / 20, MiniBatch: 98 / 98 Cost: 0.000430
Epoch: 79 / 20, MiniBatch: 7 / 98 Cost: 0.000394
Epoch: 79 / 20, MiniBatch: 14 / 98 Cost: 0.000319
Epoch: 79 / 20, MiniBatch: 21 / 98 Cost: 0.000280
Epoch: 79 / 20, MiniBatch: 28 / 98 Cost: 0.000346
Epoch: 79 / 20, MiniBatch: 35 / 98 Cost: 0.000654
Epoch: 79 / 20, MiniBatch: 42 / 98 Cost: 0.000354
Epoch: 79 / 20, MiniBatch: 49 / 98 Cost: 0.000409
Epoch: 79 / 20, MiniBatch: 56 / 98 Cost: 0.000344
Epoch: 79 / 20, MiniBatch: 63 / 98 Cost: 0.000439
Epoch: 79 / 20, MiniBatch: 70 / 98 Cost: 0.000253
Epoch: 79 / 20, MiniBatch: 77 / 98 Cost: 0.000360
Epoch: 79 / 20, MiniBatch: 84 / 98 Cost: 0.000285
Epoch: 79 / 20, MiniBatch: 91 / 98 Cost: 0.000258
Epoch: 79 / 20, MiniBatch: 98 / 98 Cost: 0.000343
Epoch: 80 / 20, MiniBatch: 7 / 98 Cost: 0.000427
Epoch: 80 / 20, MiniBatch: 14 / 98 Cost: 0.000412
Epoch: 80 / 20, MiniBatch: 21 / 98 Cost: 0.000384
Epoch: 80 / 20, MiniBatch: 28 / 98 Cost: 0.001060
Epoch: 80 / 20, MiniBatch: 35 / 98 Cost: 0.000422
Epoch: 80 / 20, MiniBatch: 42 / 98 Cost: 0.000364
Epoch: 80 / 20, MiniBatch: 49 / 98 Cost: 0.000463
Epoch: 80 / 20, MiniBatch: 56 / 98 Cost: 0.000378
Epoch: 80 / 20, MiniBatch: 63 / 98 Cost: 0.000320
Epoch: 80 / 20, MiniBatch: 70 / 98 Cost: 0.000350
Epoch: 80 / 20, MiniBatch: 77 / 98 Cost: 0.000330
Epoch: 80 / 20, MiniBatch: 84 / 98 Cost: 0.000353
Epoch: 80 / 20, MiniBatch: 91 / 98 Cost: 0.000553
Epoch: 80 / 20, MiniBatch: 98 / 98 Cost: 0.000362
Epoch: 81 / 20, MiniBatch: 7 / 98 Cost: 0.000273
Epoch: 81 / 20, MiniBatch: 14 / 98 Cost: 0.000432
Epoch: 81 / 20, MiniBatch: 21 / 98 Cost: 0.000260
Epoch: 81 / 20, MiniBatch: 28 / 98 Cost: 0.000480
Epoch: 81 / 20, MiniBatch: 35 / 98 Cost: 0.000472
Epoch: 81 / 20, MiniBatch: 42 / 98 Cost: 0.000343
Epoch: 81 / 20, MiniBatch: 49 / 98 Cost: 0.000359
Epoch: 81 / 20, MiniBatch: 56 / 98 Cost: 0.000308
Epoch: 81 / 20, MiniBatch: 63 / 98 Cost: 0.000246
Epoch: 81 / 20, MiniBatch: 70 / 98 Cost: 0.000770
Epoch: 81 / 20, MiniBatch: 77 / 98 Cost: 0.000684
Epoch: 81 / 20, MiniBatch: 84 / 98 Cost: 0.000430
Epoch: 81 / 20, MiniBatch: 91 / 98 Cost: 0.000377
Epoch: 81 / 20, MiniBatch: 98 / 98 Cost: 0.000312
Epoch: 82 / 20, MiniBatch: 7 / 98 Cost: 0.000397
Epoch: 82 / 20, MiniBatch: 14 / 98 Cost: 0.000341
Epoch: 82 / 20, MiniBatch: 21 / 98 Cost: 0.000276
Epoch: 82 / 20, MiniBatch: 28 / 98 Cost: 0.000494
Epoch: 82 / 20, MiniBatch: 35 / 98 Cost: 0.000400
Epoch: 82 / 20, MiniBatch: 42 / 98 Cost: 0.000383
Epoch: 82 / 20, MiniBatch: 49 / 98 Cost: 0.000337
Epoch: 82 / 20, MiniBatch: 56 / 98 Cost: 0.000634
Epoch: 82 / 20, MiniBatch: 63 / 98 Cost: 0.000354
Epoch: 82 / 20, MiniBatch: 70 / 98 Cost: 0.000294
Epoch: 82 / 20, MiniBatch: 77 / 98 Cost: 0.000275
Epoch: 82 / 20, MiniBatch: 84 / 98 Cost: 0.000320
Epoch: 82 / 20, MiniBatch: 91 / 98 Cost: 0.000316
Epoch: 82 / 20, MiniBatch: 98 / 98 Cost: 0.000363
Epoch: 83 / 20, MiniBatch: 7 / 98 Cost: 0.000320
Epoch: 83 / 20, MiniBatch: 14 / 98 Cost: 0.000335
Epoch: 83 / 20, MiniBatch: 21 / 98 Cost: 0.000258
Epoch: 83 / 20, MiniBatch: 28 / 98 Cost: 0.000371
Epoch: 83 / 20, MiniBatch: 35 / 98 Cost: 0.000369
Epoch: 83 / 20, MiniBatch: 42 / 98 Cost: 0.000308
Epoch: 83 / 20, MiniBatch: 49 / 98 Cost: 0.000287
Epoch: 83 / 20, MiniBatch: 56 / 98 Cost: 0.000255
Epoch: 83 / 20, MiniBatch: 63 / 98 Cost: 0.000270
Epoch: 83 / 20, MiniBatch: 70 / 98 Cost: 0.000343
Epoch: 83 / 20, MiniBatch: 77 / 98 Cost: 0.001282
Epoch: 83 / 20, MiniBatch: 84 / 98 Cost: 0.000319
Epoch: 83 / 20, MiniBatch: 91 / 98 Cost: 0.000384
Epoch: 83 / 20, MiniBatch: 98 / 98 Cost: 0.000423
Epoch: 84 / 20, MiniBatch: 7 / 98 Cost: 0.000331
Epoch: 84 / 20, MiniBatch: 14 / 98 Cost: 0.000275
Epoch: 84 / 20, MiniBatch: 21 / 98 Cost: 0.000455
Epoch: 84 / 20, MiniBatch: 28 / 98 Cost: 0.000328
Epoch: 84 / 20, MiniBatch: 35 / 98 Cost: 0.000310
Epoch: 84 / 20, MiniBatch: 42 / 98 Cost: 0.000269
Epoch: 84 / 20, MiniBatch: 49 / 98 Cost: 0.000387
Epoch: 84 / 20, MiniBatch: 56 / 98 Cost: 0.000260
Epoch: 84 / 20, MiniBatch: 63 / 98 Cost: 0.000240
Epoch: 84 / 20, MiniBatch: 70 / 98 Cost: 0.000362
Epoch: 84 / 20, MiniBatch: 77 / 98 Cost: 0.000476
Epoch: 84 / 20, MiniBatch: 84 / 98 Cost: 0.000316
Epoch: 84 / 20, MiniBatch: 91 / 98 Cost: 0.000342
Epoch: 84 / 20, MiniBatch: 98 / 98 Cost: 0.000353
Epoch: 85 / 20, MiniBatch: 7 / 98 Cost: 0.000259
Epoch: 85 / 20, MiniBatch: 14 / 98 Cost: 0.000397
Epoch: 85 / 20, MiniBatch: 21 / 98 Cost: 0.000259
Epoch: 85 / 20, MiniBatch: 28 / 98 Cost: 0.000324
Epoch: 85 / 20, MiniBatch: 35 / 98 Cost: 0.000403
Epoch: 85 / 20, MiniBatch: 42 / 98 Cost: 0.000256
Epoch: 85 / 20, MiniBatch: 49 / 98 Cost: 0.000337
Epoch: 85 / 20, MiniBatch: 56 / 98 Cost: 0.000234
Epoch: 85 / 20, MiniBatch: 63 / 98 Cost: 0.000733
Epoch: 85 / 20, MiniBatch: 70 / 98 Cost: 0.000237
Epoch: 85 / 20, MiniBatch: 77 / 98 Cost: 0.000343
Epoch: 85 / 20, MiniBatch: 84 / 98 Cost: 0.000416
Epoch: 85 / 20, MiniBatch: 91 / 98 Cost: 0.000702
Epoch: 85 / 20, MiniBatch: 98 / 98 Cost: 0.000382
Epoch: 86 / 20, MiniBatch: 7 / 98 Cost: 0.000401
Epoch: 86 / 20, MiniBatch: 14 / 98 Cost: 0.000330
Epoch: 86 / 20, MiniBatch: 21 / 98 Cost: 0.000304
Epoch: 86 / 20, MiniBatch: 28 / 98 Cost: 0.000345
Epoch: 86 / 20, MiniBatch: 35 / 98 Cost: 0.000776
Epoch: 86 / 20, MiniBatch: 42 / 98 Cost: 0.000549
Epoch: 86 / 20, MiniBatch: 49 / 98 Cost: 0.000367
Epoch: 86 / 20, MiniBatch: 56 / 98 Cost: 0.000422
Epoch: 86 / 20, MiniBatch: 63 / 98 Cost: 0.000369
Epoch: 86 / 20, MiniBatch: 70 / 98 Cost: 0.000435
Epoch: 86 / 20, MiniBatch: 77 / 98 Cost: 0.000342
Epoch: 86 / 20, MiniBatch: 84 / 98 Cost: 0.000338
Epoch: 86 / 20, MiniBatch: 91 / 98 Cost: 0.000281
Epoch: 86 / 20, MiniBatch: 98 / 98 Cost: 0.000407
Epoch: 87 / 20, MiniBatch: 7 / 98 Cost: 0.000395
Epoch: 87 / 20, MiniBatch: 14 / 98 Cost: 0.000349
Epoch: 87 / 20, MiniBatch: 21 / 98 Cost: 0.000295
Epoch: 87 / 20, MiniBatch: 28 / 98 Cost: 0.000249
Epoch: 87 / 20, MiniBatch: 35 / 98 Cost: 0.000267
Epoch: 87 / 20, MiniBatch: 42 / 98 Cost: 0.000271
Epoch: 87 / 20, MiniBatch: 49 / 98 Cost: 0.000417
Epoch: 87 / 20, MiniBatch: 56 / 98 Cost: 0.000377
Epoch: 87 / 20, MiniBatch: 63 / 98 Cost: 0.000282
Epoch: 87 / 20, MiniBatch: 70 / 98 Cost: 0.000476
Epoch: 87 / 20, MiniBatch: 77 / 98 Cost: 0.000406
Epoch: 87 / 20, MiniBatch: 84 / 98 Cost: 0.000268
Epoch: 87 / 20, MiniBatch: 91 / 98 Cost: 0.000508
Epoch: 87 / 20, MiniBatch: 98 / 98 Cost: 0.000281
Epoch: 88 / 20, MiniBatch: 7 / 98 Cost: 0.000247
Epoch: 88 / 20, MiniBatch: 14 / 98 Cost: 0.000385
Epoch: 88 / 20, MiniBatch: 21 / 98 Cost: 0.000282
Epoch: 88 / 20, MiniBatch: 28 / 98 Cost: 0.000390
Epoch: 88 / 20, MiniBatch: 35 / 98 Cost: 0.000375
Epoch: 88 / 20, MiniBatch: 42 / 98 Cost: 0.000246
Epoch: 88 / 20, MiniBatch: 49 / 98 Cost: 0.000731
Epoch: 88 / 20, MiniBatch: 56 / 98 Cost: 0.000452
Epoch: 88 / 20, MiniBatch: 63 / 98 Cost: 0.000494
Epoch: 88 / 20, MiniBatch: 70 / 98 Cost: 0.000739
Epoch: 88 / 20, MiniBatch: 77 / 98 Cost: 0.000361
Epoch: 88 / 20, MiniBatch: 84 / 98 Cost: 0.000559
Epoch: 88 / 20, MiniBatch: 91 / 98 Cost: 0.000427
Epoch: 88 / 20, MiniBatch: 98 / 98 Cost: 0.000583
Epoch: 89 / 20, MiniBatch: 7 / 98 Cost: 0.000482
Epoch: 89 / 20, MiniBatch: 14 / 98 Cost: 0.000362
Epoch: 89 / 20, MiniBatch: 21 / 98 Cost: 0.000373
Epoch: 89 / 20, MiniBatch: 28 / 98 Cost: 0.000246
Epoch: 89 / 20, MiniBatch: 35 / 98 Cost: 0.000302
Epoch: 89 / 20, MiniBatch: 42 / 98 Cost: 0.000258
Epoch: 89 / 20, MiniBatch: 49 / 98 Cost: 0.000210
Epoch: 89 / 20, MiniBatch: 56 / 98 Cost: 0.000401
Epoch: 89 / 20, MiniBatch: 63 / 98 Cost: 0.000295
Epoch: 89 / 20, MiniBatch: 70 / 98 Cost: 0.000320
Epoch: 89 / 20, MiniBatch: 77 / 98 Cost: 0.000655
Epoch: 89 / 20, MiniBatch: 84 / 98 Cost: 0.000388
Epoch: 89 / 20, MiniBatch: 91 / 98 Cost: 0.000348
Epoch: 89 / 20, MiniBatch: 98 / 98 Cost: 0.000342
Epoch: 90 / 20, MiniBatch: 7 / 98 Cost: 0.000236
Epoch: 90 / 20, MiniBatch: 14 / 98 Cost: 0.000341
Epoch: 90 / 20, MiniBatch: 21 / 98 Cost: 0.000241
Epoch: 90 / 20, MiniBatch: 28 / 98 Cost: 0.000387
Epoch: 90 / 20, MiniBatch: 35 / 98 Cost: 0.000528
Epoch: 90 / 20, MiniBatch: 42 / 98 Cost: 0.000311
Epoch: 90 / 20, MiniBatch: 49 / 98 Cost: 0.000324
Epoch: 90 / 20, MiniBatch: 56 / 98 Cost: 0.000433
Epoch: 90 / 20, MiniBatch: 63 / 98 Cost: 0.000275
Epoch: 90 / 20, MiniBatch: 70 / 98 Cost: 0.000294
Epoch: 90 / 20, MiniBatch: 77 / 98 Cost: 0.000404
Epoch: 90 / 20, MiniBatch: 84 / 98 Cost: 0.000434
Epoch: 90 / 20, MiniBatch: 91 / 98 Cost: 0.000281
Epoch: 90 / 20, MiniBatch: 98 / 98 Cost: 0.000366
Epoch: 91 / 20, MiniBatch: 7 / 98 Cost: 0.000260
Epoch: 91 / 20, MiniBatch: 14 / 98 Cost: 0.000231
Epoch: 91 / 20, MiniBatch: 21 / 98 Cost: 0.000363
Epoch: 91 / 20, MiniBatch: 28 / 98 Cost: 0.000368
Epoch: 91 / 20, MiniBatch: 35 / 98 Cost: 0.000538
Epoch: 91 / 20, MiniBatch: 42 / 98 Cost: 0.000299
Epoch: 91 / 20, MiniBatch: 49 / 98 Cost: 0.000277
Epoch: 91 / 20, MiniBatch: 56 / 98 Cost: 0.000453
Epoch: 91 / 20, MiniBatch: 63 / 98 Cost: 0.000298
Epoch: 91 / 20, MiniBatch: 70 / 98 Cost: 0.000262
Epoch: 91 / 20, MiniBatch: 77 / 98 Cost: 0.000281
Epoch: 91 / 20, MiniBatch: 84 / 98 Cost: 0.000374
Epoch: 91 / 20, MiniBatch: 91 / 98 Cost: 0.000371
Epoch: 91 / 20, MiniBatch: 98 / 98 Cost: 0.000289
Epoch: 92 / 20, MiniBatch: 7 / 98 Cost: 0.000225
Epoch: 92 / 20, MiniBatch: 14 / 98 Cost: 0.000293
Epoch: 92 / 20, MiniBatch: 21 / 98 Cost: 0.000413
Epoch: 92 / 20, MiniBatch: 28 / 98 Cost: 0.000306
Epoch: 92 / 20, MiniBatch: 35 / 98 Cost: 0.000343
Epoch: 92 / 20, MiniBatch: 42 / 98 Cost: 0.000229
Epoch: 92 / 20, MiniBatch: 49 / 98 Cost: 0.000413
Epoch: 92 / 20, MiniBatch: 56 / 98 Cost: 0.000400
Epoch: 92 / 20, MiniBatch: 63 / 98 Cost: 0.000290
Epoch: 92 / 20, MiniBatch: 70 / 98 Cost: 0.000287
Epoch: 92 / 20, MiniBatch: 77 / 98 Cost: 0.000266
Epoch: 92 / 20, MiniBatch: 84 / 98 Cost: 0.000375
Epoch: 92 / 20, MiniBatch: 91 / 98 Cost: 0.000177
Epoch: 92 / 20, MiniBatch: 98 / 98 Cost: 0.000226
Epoch: 93 / 20, MiniBatch: 7 / 98 Cost: 0.000237
Epoch: 93 / 20, MiniBatch: 14 / 98 Cost: 0.000253
Epoch: 93 / 20, MiniBatch: 21 / 98 Cost: 0.000365
Epoch: 93 / 20, MiniBatch: 28 / 98 Cost: 0.000268
Epoch: 93 / 20, MiniBatch: 35 / 98 Cost: 0.000334
Epoch: 93 / 20, MiniBatch: 42 / 98 Cost: 0.000356
Epoch: 93 / 20, MiniBatch: 49 / 98 Cost: 0.000351
Epoch: 93 / 20, MiniBatch: 56 / 98 Cost: 0.000322
Epoch: 93 / 20, MiniBatch: 63 / 98 Cost: 0.000389
Epoch: 93 / 20, MiniBatch: 70 / 98 Cost: 0.000363
Epoch: 93 / 20, MiniBatch: 77 / 98 Cost: 0.000262
Epoch: 93 / 20, MiniBatch: 84 / 98 Cost: 0.000260
Epoch: 93 / 20, MiniBatch: 91 / 98 Cost: 0.000219
Epoch: 93 / 20, MiniBatch: 98 / 98 Cost: 0.000352
Epoch: 94 / 20, MiniBatch: 7 / 98 Cost: 0.000233
Epoch: 94 / 20, MiniBatch: 14 / 98 Cost: 0.000326
Epoch: 94 / 20, MiniBatch: 21 / 98 Cost: 0.000275
Epoch: 94 / 20, MiniBatch: 28 / 98 Cost: 0.000395
Epoch: 94 / 20, MiniBatch: 35 / 98 Cost: 0.000257
Epoch: 94 / 20, MiniBatch: 42 / 98 Cost: 0.000239
Epoch: 94 / 20, MiniBatch: 49 / 98 Cost: 0.000214
Epoch: 94 / 20, MiniBatch: 56 / 98 Cost: 0.000315
Epoch: 94 / 20, MiniBatch: 63 / 98 Cost: 0.000287
Epoch: 94 / 20, MiniBatch: 70 / 98 Cost: 0.000284
Epoch: 94 / 20, MiniBatch: 77 / 98 Cost: 0.000284
Epoch: 94 / 20, MiniBatch: 84 / 98 Cost: 0.000184
Epoch: 94 / 20, MiniBatch: 91 / 98 Cost: 0.000284
Epoch: 94 / 20, MiniBatch: 98 / 98 Cost: 0.000387
Epoch: 95 / 20, MiniBatch: 7 / 98 Cost: 0.000346
Epoch: 95 / 20, MiniBatch: 14 / 98 Cost: 0.000263
Epoch: 95 / 20, MiniBatch: 21 / 98 Cost: 0.000356
Epoch: 95 / 20, MiniBatch: 28 / 98 Cost: 0.000164
Epoch: 95 / 20, MiniBatch: 35 / 98 Cost: 0.000228
Epoch: 95 / 20, MiniBatch: 42 / 98 Cost: 0.000229
Epoch: 95 / 20, MiniBatch: 49 / 98 Cost: 0.000223
Epoch: 95 / 20, MiniBatch: 56 / 98 Cost: 0.000224
Epoch: 95 / 20, MiniBatch: 63 / 98 Cost: 0.000278
Epoch: 95 / 20, MiniBatch: 70 / 98 Cost: 0.000679
Epoch: 95 / 20, MiniBatch: 77 / 98 Cost: 0.000275
Epoch: 95 / 20, MiniBatch: 84 / 98 Cost: 0.000523
Epoch: 95 / 20, MiniBatch: 91 / 98 Cost: 0.000335
Epoch: 95 / 20, MiniBatch: 98 / 98 Cost: 0.000332
Epoch: 96 / 20, MiniBatch: 7 / 98 Cost: 0.000313
Epoch: 96 / 20, MiniBatch: 14 / 98 Cost: 0.000290
Epoch: 96 / 20, MiniBatch: 21 / 98 Cost: 0.000309
Epoch: 96 / 20, MiniBatch: 28 / 98 Cost: 0.000223
Epoch: 96 / 20, MiniBatch: 35 / 98 Cost: 0.000469
Epoch: 96 / 20, MiniBatch: 42 / 98 Cost: 0.000312
Epoch: 96 / 20, MiniBatch: 49 / 98 Cost: 0.000245
Epoch: 96 / 20, MiniBatch: 56 / 98 Cost: 0.000292
Epoch: 96 / 20, MiniBatch: 63 / 98 Cost: 0.000258
Epoch: 96 / 20, MiniBatch: 70 / 98 Cost: 0.000320
Epoch: 96 / 20, MiniBatch: 77 / 98 Cost: 0.000469
Epoch: 96 / 20, MiniBatch: 84 / 98 Cost: 0.000314
Epoch: 96 / 20, MiniBatch: 91 / 98 Cost: 0.000226
Epoch: 96 / 20, MiniBatch: 98 / 98 Cost: 0.000382
Epoch: 97 / 20, MiniBatch: 7 / 98 Cost: 0.000409
Epoch: 97 / 20, MiniBatch: 14 / 98 Cost: 0.000325
Epoch: 97 / 20, MiniBatch: 21 / 98 Cost: 0.000362
Epoch: 97 / 20, MiniBatch: 28 / 98 Cost: 0.000285
Epoch: 97 / 20, MiniBatch: 35 / 98 Cost: 0.000224
Epoch: 97 / 20, MiniBatch: 42 / 98 Cost: 0.000275
Epoch: 97 / 20, MiniBatch: 49 / 98 Cost: 0.000197
Epoch: 97 / 20, MiniBatch: 56 / 98 Cost: 0.000307
Epoch: 97 / 20, MiniBatch: 63 / 98 Cost: 0.000322
Epoch: 97 / 20, MiniBatch: 70 / 98 Cost: 0.000231
Epoch: 97 / 20, MiniBatch: 77 / 98 Cost: 0.000216
Epoch: 97 / 20, MiniBatch: 84 / 98 Cost: 0.000245
Epoch: 97 / 20, MiniBatch: 91 / 98 Cost: 0.000221
Epoch: 97 / 20, MiniBatch: 98 / 98 Cost: 0.000199
Epoch: 98 / 20, MiniBatch: 7 / 98 Cost: 0.000328
Epoch: 98 / 20, MiniBatch: 14 / 98 Cost: 0.000220
Epoch: 98 / 20, MiniBatch: 21 / 98 Cost: 0.000351
Epoch: 98 / 20, MiniBatch: 28 / 98 Cost: 0.000466
Epoch: 98 / 20, MiniBatch: 35 / 98 Cost: 0.000260
Epoch: 98 / 20, MiniBatch: 42 / 98 Cost: 0.000347
Epoch: 98 / 20, MiniBatch: 49 / 98 Cost: 0.000333
Epoch: 98 / 20, MiniBatch: 56 / 98 Cost: 0.000588
Epoch: 98 / 20, MiniBatch: 63 / 98 Cost: 0.000306
Epoch: 98 / 20, MiniBatch: 70 / 98 Cost: 0.000284
Epoch: 98 / 20, MiniBatch: 77 / 98 Cost: 0.000349
Epoch: 98 / 20, MiniBatch: 84 / 98 Cost: 0.000213
Epoch: 98 / 20, MiniBatch: 91 / 98 Cost: 0.000255
Epoch: 98 / 20, MiniBatch: 98 / 98 Cost: 0.000276
Epoch: 99 / 20, MiniBatch: 7 / 98 Cost: 0.000302
Epoch: 99 / 20, MiniBatch: 14 / 98 Cost: 0.000248
Epoch: 99 / 20, MiniBatch: 21 / 98 Cost: 0.000372
Epoch: 99 / 20, MiniBatch: 28 / 98 Cost: 0.000400
Epoch: 99 / 20, MiniBatch: 35 / 98 Cost: 0.000178
Epoch: 99 / 20, MiniBatch: 42 / 98 Cost: 0.000280
Epoch: 99 / 20, MiniBatch: 49 / 98 Cost: 0.000310
Epoch: 99 / 20, MiniBatch: 56 / 98 Cost: 0.000349
Epoch: 99 / 20, MiniBatch: 63 / 98 Cost: 0.000267
Epoch: 99 / 20, MiniBatch: 70 / 98 Cost: 0.000229
Epoch: 99 / 20, MiniBatch: 77 / 98 Cost: 0.000201
Epoch: 99 / 20, MiniBatch: 84 / 98 Cost: 0.000202
Epoch: 99 / 20, MiniBatch: 91 / 98 Cost: 0.000256
Epoch: 99 / 20, MiniBatch: 98 / 98 Cost: 0.000238
Epoch: 100 / 20, MiniBatch: 7 / 98 Cost: 0.000251
Epoch: 100 / 20, MiniBatch: 14 / 98 Cost: 0.000355
Epoch: 100 / 20, MiniBatch: 21 / 98 Cost: 0.000210
Epoch: 100 / 20, MiniBatch: 28 / 98 Cost: 0.000229
Epoch: 100 / 20, MiniBatch: 35 / 98 Cost: 0.000204
Epoch: 100 / 20, MiniBatch: 42 / 98 Cost: 0.000187
Epoch: 100 / 20, MiniBatch: 49 / 98 Cost: 0.000265
Epoch: 100 / 20, MiniBatch: 56 / 98 Cost: 0.000153
Epoch: 100 / 20, MiniBatch: 63 / 98 Cost: 0.000186
Epoch: 100 / 20, MiniBatch: 70 / 98 Cost: 0.000202
Epoch: 100 / 20, MiniBatch: 77 / 98 Cost: 0.000169
Epoch: 100 / 20, MiniBatch: 84 / 98 Cost: 0.000320
Epoch: 100 / 20, MiniBatch: 91 / 98 Cost: 0.000298
Epoch: 100 / 20, MiniBatch: 98 / 98 Cost: 0.000306
Finished Training
###Markdown
 Let's quickly save our trained model:
###Code
PATH = './models/cifar_nets.pth'
torch.save(model.state_dict(), PATH)
###Output
_____no_output_____
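###Markdown
 If you also plan to resume training later (an optional aside, not part of the original notebook), it is common to save the optimizer state alongside the model weights in a single checkpoint dict. The key names and path below are illustrative, and `optimizer` is assumed to be the optimizer used in the training loop above.
###Code
# Hedged sketch: save model + optimizer state together (checkpoint path is hypothetical)
CKPT_PATH = './models/cifar_checkpoint.pth'
torch.save({'model_state_dict': model.state_dict(),
            'optimizer_state_dict': optimizer.state_dict()}, CKPT_PATH)
###Output
_____no_output_____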
###Markdown
 Test the network on the test dataWe have trained the network for 100 epochs over the training dataset. But we need to check if the network has learnt anything at all.We will check this by predicting the class label that the neural network outputs, and checking it against the ground truth (the test set). If the prediction is correct, we add the sample to the list of correct predictions.Okay, first step. Let us display an image from the test set to get familiar.
###Code
dataiter = iter(test_loader)
images, labels = dataiter.next()
images = images.to(device)
labels = labels.to(device)
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
Next, let’s load back in our saved model (note: saving and re-loading the model wasn’t necessary here, we only did it to illustrate how to do so):
###Code
model = vgg.VGG(conv, num_classes = 10, init_weights = True).to(device)
model.load_state_dict(torch.load(PATH))
###Output
_____no_output_____
###Markdown
Okay, now let us see what the neural network thinks these examples above are:
###Code
outputs = model(images)
###Output
C:\Users\buddhalight\envs\buddhalight\lib\site-packages\torch\nn\functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at ..\c10/core/TensorImpl.h:1156.)
return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
###Markdown
The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks that the image is of the particular class. So, let’s get the index of the highest energy:
###Code
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
###Output
Predicted: horse cat ship bird
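###Markdown
 As a hedged aside (not in the original notebook): the raw energies can be turned into probabilities with a softmax, which makes the scores easier to read; the arg-max, and therefore the prediction, is unchanged.
###Code
# Illustrative sketch only: assumes `outputs` and `classes` from the cells above
import torch.nn.functional as F
probs = F.softmax(outputs, dim=1) # normalise the 10 energies per image into probabilities
conf, pred = torch.max(probs, 1) # highest probability and its class index for each image
print(' '.join('%5s (%.2f)' % (classes[pred[j]], conf[j]) for j in range(4)))
###Output
_____no_output_____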
###Markdown
 The results seem pretty poor. Let us look at how the network performs on the whole dataset.
###Code
correct = 0
total = 0
#Since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
for X, Y in test_loader:
X = X.to(device)
Y = Y.to(device)
#calculate outputs by running images through the network
output = model(X)
#the class with the highest energy is what we choose as prediction
_, predicted = torch.max(output, 1)
total += Y.size(0)
correct += (predicted == Y).sum().item()
print('Correct Label: {} out of Total Label: {}'.format(correct, total))
print('Accuracy of the network on the 10000 test images: {:.6f} %'.format(100 * correct / total))
###Output
Correct Label: 5394 out of Total Label: 10000
Accuracy of the network on the 10000 test images: 53.940000 %
###Markdown
That looks way better than chance, which is 10% accuracy (randomly picking a class out of 10 classes). Seems like the network learnt something.Hmmm, what are the classes that performed well, and the classes that did not perform well:
###Code
#prepare to count predictions for each class
correct_pred = {classname: 0 for classname in classes}
total_pred = {classname: 0 for classname in classes}
print(correct_pred, total_pred)
#again no gradients needed
with torch.no_grad():
for X, Y in test_loader:
X = X.to(device)
Y = Y.to(device)
output = model(X)
_, predictions = torch.max(output, 1)
#collect the correct predictions for each class
for label, prediction in zip(Y, predictions):
if label == prediction:
correct_pred[classes[label]] += 1
total_pred[classes[label]] += 1
#print accuracy for each class
for classname in classes:
accuracy = 100 * correct_pred[classname] / total_pred[classname]
print('Accuracy for class {} is {} %'.format(classname, accuracy))
###Output
{'plane': 0, 'car': 0, 'bird': 0, 'cat': 0, 'deer': 0, 'dog': 0, 'frog': 0, 'horse': 0, 'ship': 0, 'truck': 0} {'plane': 0, 'car': 0, 'bird': 0, 'cat': 0, 'deer': 0, 'dog': 0, 'frog': 0, 'horse': 0, 'ship': 0, 'truck': 0}
Accuracy for class plane is 57.4 %
Accuracy for class car is 67.6 %
Accuracy for class bird is 43.5 %
Accuracy for class cat is 39.3 %
Accuracy for class deer is 47.4 %
Accuracy for class dog is 44.8 %
Accuracy for class frog is 58.1 %
Accuracy for class horse is 58.2 %
Accuracy for class ship is 63.4 %
Accuracy for class truck is 61.6 %
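###Markdown
 As an optional extension (my own sketch, not part of the original notebook), a confusion matrix shows which classes get mixed up with which (for example cats vs dogs), using the same no-grad loop as above.
###Code
# Hedged sketch: reuses `model`, `test_loader`, `classes` and `device` defined earlier
confusion = torch.zeros(len(classes), len(classes), dtype=torch.long)
with torch.no_grad():
    for X, Y in test_loader:
        X = X.to(device)
        Y = Y.to(device)
        _, predictions = torch.max(model(X), 1)
        for label, prediction in zip(Y, predictions):
            confusion[label.item(), prediction.item()] += 1 # rows = true class, columns = predicted
print(confusion)
###Output
_____no_output_____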
|
notebooks/Subsetting_Solution.ipynb | ###Markdown
Step 1: Notebook Setup
###Code
# TODO: Make sure you run this cell before continuing!
%matplotlib inline
import matplotlib.pyplot as plt
def show_plot(x_datas, y_datas, x_label, y_label, legend=None, title=None):
"""
Display a simple line plot.
    :param x_datas: List of Numpy arrays containing data for the X axis (one per series)
    :param y_datas: List of Numpy arrays containing data for the Y axis (one per series)
    :param x_label: Label applied to X axis
    :param y_label: Label applied to Y axis
    :param legend: Optional list of series names for the legend
    :param title: Optional plot title
"""
fig = plt.figure(figsize=(16,8), dpi=100)
for (x_data, y_data) in zip(x_datas, y_datas):
plt.plot(x_data, y_data, '-', marker='|', markersize=2.0, mfc='b')
plt.grid(b=True, which='major', color='k', linestyle='-')
plt.xlabel(x_label)
fig.autofmt_xdate()
plt.ylabel (y_label)
if legend:
plt.legend(legend, loc='upper left')
if title:
plt.title(title)
plt.show()
return plt
def plot_box(bbox):
"""
Display a Green bounding box on an image of the blue marble.
:param bbox: Shapely Polygon that defines the bounding box to display
"""
min_lon, min_lat, max_lon, max_lat = bbox.bounds
import matplotlib.pyplot as plt1
from matplotlib.patches import Polygon
from mpl_toolkits.basemap import Basemap
map = Basemap()
map.bluemarble(scale=0.5)
poly = Polygon([(min_lon,min_lat),(min_lon,max_lat),(max_lon,max_lat),
(max_lon,min_lat)],facecolor=(0,0,0,0.0),edgecolor='green',linewidth=2)
plt1.gca().add_patch(poly)
plt1.gcf().set_size_inches(15,25)
plt1.show()
def show_plot_two_series(x_data_a, x_data_b, y_data_a, y_data_b, x_label, y_label_a,
y_label_b, series_a_label, series_b_label, align_axis=True):
"""
Display a line plot of two series
:param x_data_a: Numpy array containing data for the Series A X axis
:param x_data_b: Numpy array containing data for the Series B X axis
:param y_data_a: Numpy array containing data for the Series A Y axis
:param y_data_b: Numpy array containing data for the Series B Y axis
:param x_label: Label applied to X axis
:param y_label_a: Label applied to Y axis for Series A
:param y_label_b: Label applied to Y axis for Series B
:param series_a_label: Name of Series A
:param series_b_label: Name of Series B
:param align_axis: Use the same range for both y axis
"""
fig, ax1 = plt.subplots(figsize=(10,5), dpi=100)
series_a, = ax1.plot(x_data_a, y_data_a, 'b-', marker='|', markersize=2.0, mfc='b', label=series_a_label)
ax1.set_ylabel(y_label_a, color='b')
ax1.tick_params('y', colors='b')
ax1.set_ylim(min(0, *y_data_a), max(y_data_a)+.1*max(y_data_a))
ax1.set_xlabel(x_label)
ax2 = ax1.twinx()
series_b, = ax2.plot(x_data_b, y_data_b, 'r-', marker='|', markersize=2.0, mfc='r', label=series_b_label)
ax2.set_ylabel(y_label_b, color='r')
ax2.set_ylim(min(0, *y_data_b), max(y_data_b)+.1*max(y_data_b))
ax2.tick_params('y', colors='r')
if align_axis:
axis_min = min(0, *y_data_a, *y_data_b)
axis_max = max(*y_data_a, *y_data_b)
axis_max += .1*axis_max
ax1.set_ylim(axis_min, axis_max)
ax2.set_ylim(axis_min, axis_max)
plt.grid(b=True, which='major', color='k', linestyle='-')
plt.legend(handles=(series_a, series_b), bbox_to_anchor=(1.1, 1), loc=2, borderaxespad=0.)
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: List available Datasets
###Code
# TODO: Import the nexuscli python module.
import nexuscli
# TODO: Target your AWS NEXUS server using your public DNS name and port 8083
nexuscli.set_target("http://ec2-35-163-71-211.us-west-2.compute.amazonaws.com:8083", use_session=False)
# TODO: Call nexuscli.dataset_list() and print the results
nexuscli.dataset_list()
###Output
Target set to http://ec2-35-163-71-211.us-west-2.compute.amazonaws.com:8083
###Markdown
Step 3: Subset using Bounding Box
###Code
import time
import nexuscli
from datetime import datetime
from shapely.geometry import box
# TODO: Target your AWS NEXUS server using your public DNS name and port 8083
nexuscli.set_target("http://ec2-35-163-71-211.us-west-2.compute.amazonaws.com:8083", use_session=False)
# TODO: Create a bounding box using the box method imported above
bbox = box(-98, 17.8, -81.5, 30.8)
# TODO: Plot the bounding box using the helper method plot_box
plot_box(bbox)
# Do not modify this line ##
start = time.perf_counter()#
############################
# TODO: Call the subset method for the AVHRR_OI_L4_GHRSST_NCEI dataset using
# your bounding box and time period of 1 day
ds = "AVHRR_OI_L4_GHRSST_NCEI"
start_time = datetime(2016, 7, 13)
end_time = datetime(2016, 7, 14)
result = nexuscli.subset(ds, bbox, start_time, end_time, None, None)
print(len(result))
# Enter your code above this line
print("Subsetting data took {} seconds".format(time.perf_counter() - start))
###Output
5352
Subsetting data took 0.8999209944158792 seconds
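###Markdown
 To get a feel for what `subset` returns (a quick aside; the attribute names follow the way the points are used later in this notebook), you can peek at the first returned point:
###Code
# Assumes `result` from the cell above; each point exposes at least .time and .variable
first_point = result[0]
print(first_point.time, first_point.variable)
###Output
_____no_output_____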
###Markdown
 Step 4: Subset Using a Metadata Filter
###Code
import requests
import json
import time
import nexuscli
from datetime import datetime
# TODO: Target your AWS NEXUS server using your public DNS name and port 8083
nexuscli.set_target("http://ec2-35-163-71-211.us-west-2.compute.amazonaws.com:8083", use_session=False)
# River IDs for 9 Rivers in LA County
la_county_river_ids = [17575859, 17574289, 17575711, 17574677, 17574823,
948070361, 22560728, 22560730, 22560738]
ds = "RAPID_WSWM"
start_time = datetime(1997, 1, 1)
end_time = datetime(1998, 12, 31, 23, 59, 59)
la_county_river_data = list()
# Do not modify this line ##
start = time.perf_counter()#
############################
# TODO: Iterate over the list of River IDs
for river_id in la_county_river_ids:
# TODO: For each River ID, call the subset function passing in the metadata filter for that river
metadataFilter = "rivid_i:{}".format(river_id)
result = nexuscli.subset(ds, None, start_time, end_time, None, metadataFilter)
la_county_river_data.append(result)
print("Subsetting took {} seconds".format(time.perf_counter() - start))
# TODO: Graph the results using the show_plot helper method
show_plot([[point.time for point in river] for river in la_county_river_data], # x values
[[point.variable['variable'] for point in river] for river in la_county_river_data], # y values
'Time', # x axis label
'Discharge (m³s⁻¹)', # y axis label
legend=[str(r) for r in la_county_river_ids],
title='LA County Rivers'
)
###Output
_____no_output_____
###Markdown
Step 5: Averaging Results
###Code
import numpy
la_county_river_data = [nexuscli.subset(ds, None, start_time, end_time, None, "rivid_i:{}".format(river_id))
for river_id in la_county_river_ids]
discharge_rates = numpy.array([[point.variable['variable'] for point in river]
for river in la_county_river_data])
single_river_time_steps = numpy.array([point.time for point in next(iter(la_county_river_data))])
avg_discharge_rates = numpy.mean(discharge_rates, axis=0)
show_plot([single_river_time_steps], # x values
[avg_discharge_rates], # y values
'Time', # x axis label
'Discharge (m³s⁻¹)', # y axis label
title='Average Discharge of LA County Rivers'
)
###Output
_____no_output_____ |
GCP-notebooks/aws_mur_sst_tutorial_long.ipynb | ###Markdown
 Tutorial for MUR SST on AWS - Funding: Interagency Implementation and Advanced Concepts Team [IMPACT](https://earthdata.nasa.gov/esds/impact) for the Earth Science Data Systems (ESDS) program and AWS Public Dataset ProgramCredits: Tutorial development* [Dr. Chelle Gentemann](mailto:[email protected]) - [Twitter](https://twitter.com/ChelleGentemann) - Farallon Institute* [Dr. Rich Signell](mailto:[email protected]) - [Twitter](https://twitter.com/rsignell) - USGS* [Dr. Ryan Abernathey](mailto:[email protected]) - [Twitter](https://twitter.com/rabernat) - LDEOCredits: Creation of the Zarr MUR SST dataset. * [Aimee Barciauskas](mailto:[email protected]) - [Twitter](https://twitter.com/_aimeeb) - Development Seed* [Dr. Rich Signell](mailto:[email protected]) - [Twitter](https://twitter.com/rsignell) - USGS* [Dr. Chelle Gentemann](mailto:[email protected]) - [Twitter](https://twitter.com/ChelleGentemann) - Farallon Institute* [Joseph Flasher](mailto:[email protected]) [Twitter](https://twitter.com/joseph_flasher) - AWSCredits: Tutorial review and comments.* [Dr. Ed Armstrong](mailto:[email protected]) - JPL PODAAC------------- Please note that this is global, 1 km, daily data. This is a very large dataset and the analyses below can take up to 5-10 minutes [MUR SST](https://podaac.jpl.nasa.gov/Multi-scale_Ultra-high_Resolution_MUR-SST) [AWS Public dataset program](https://registry.opendata.aws/mur/) Access the MUR SST which is in an s3 bucket. This Pangeo binder is faster when run on AWS. This code is an example of how to read from an s3 bucket. Right now (2/16/2020) this takes ~1min on AWS and ~2 min on Google Cloud; there are a couple of issues here and we are working to solve both. 1. Some shortcomings in the s3fs and zarr formats have been identified. To work on these, git issues were raised to the developers [here](https://github.com/dask/s3fs/issues/285) and [here](https://github.com/zarr-developers/zarr-python/issues/536) To run this notebookCode is in the cells that have In [ ]: to the left of the cell and have a colored backgroundTo run the code:- option 1) click anywhere in the cell, then hold shift and press Enter- option 2) click on the Run button at the top of the page in the dashboard Structure of this tutorial1. Opening data2. Data exploration3. Data plotting 1. Opening data------------------- Import python packagesIt is nice to turn off warnings and set xarray display options
###Code
import warnings
import numpy as np
import pandas as pd
import xarray as xr
import fsspec
warnings.simplefilter('ignore') # filter some warning messages
xr.set_options(display_style="html") #display dataset nicely
###Output
_____no_output_____
###Markdown
 Start a cluster, a group of computers that will work together.(A cluster is the key to big data analysis on the Cloud.)- This will set up a [dask kubernetes](https://docs.dask.org/en/latest/setup/kubernetes.html) cluster for your analysis and give you a path that you can paste into the top of the Dask dashboard to visualize parts of your cluster. - You don't need to paste the link below into the Dask dashboard for this to work, but it will help you visualize progress.- Try 40 workers to start (during the tutorial) but you can increase to speed things up later
###Code
from dask_gateway import Gateway
from dask.distributed import Client
gateway = Gateway()
cluster = gateway.new_cluster(worker_memory=8)
cluster.adapt(minimum=1, maximum=60)
client = Client(cluster)
cluster
###Output
_____no_output_____
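###Markdown
 If you prefer a fixed pool of workers over adaptive scaling (a small aside; the cell above uses `adapt`), you can request a set number of workers up front instead:
###Code
# Optional alternative to cluster.adapt(): ask for ~40 workers directly
cluster.scale(40)
###Output
_____no_output_____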
###Markdown
** ☝️ Don’t forget to click the link above or copy it to the Dask dashboard  on the left to view the scheduler dashboard! ** Initialize DatasetHere we load the dataset from the zarr store. Note that this very large dataset initializes nearly instantly, and we can see the full list of variables and coordinates. Examine MetadataFor those unfamiliar with this dataset, the variable metadata is very helpful for understanding what the variables actually representPrinting the dataset will show you the dimensions, coordinates, and data variables with clickable icons at the end that show more metadata and size.
###Code
%%time
ds_sst = xr.open_zarr('https://mur-sst.s3.us-west-2.amazonaws.com/zarr-v1',consolidated=True)
ds_sst
###Output
_____no_output_____
###Markdown
 2. Explore the data Let's explore the data- look at all the SST data- look at the SST data masked to only ocean and ice-free data- With all data, it is important to explore it and understand what it contains before doing an analysis.- The ice mask used by MUR SST is from NSIDC and is based on satellite passive microwave estimates of sea ice concentration- The satellite data isn't available near land, so there is no estimate of sea ice concentration near land- For this data, it means that there are some erroneous SSTs near land, that is likely ice and this is something to be aware of.
###Code
sst = ds_sst['analysed_sst']
cond = (ds_sst.mask==1) & ((ds_sst.sea_ice_fraction<.15) | np.isnan(ds_sst.sea_ice_fraction))
sst_masked = ds_sst['analysed_sst'].where(cond)
sst_masked
###Output
_____no_output_____
###Markdown
 Using ``.groupby`` and ``.resample``Xarray has a lot of nice built-in methods, such as [.resample](http://xarray.pydata.org/en/stable/generated/xarray.Dataset.resample.htmlxarray-dataset-resample) which can upsample or downsample data and [.mean](http://xarray.pydata.org/en/stable/generated/xarray.DataArray.mean.htmlxarray-dataarray-mean). Here we use these to calculate a climatology and anomaly. Create a daily SST anomaly dataset- Calculate the daily climatology using ``.groupby``- Calculate the anomaly Create a monthly SST anomaly dataset- First create a monthly version of the dataset using ``.resample``. Two nice arguments for ``.resample``: ``keep_attrs`` which keeps the metadata and ``skipna`` which ensures that only data that is always present is included- Calculate the monthly climatology using ``.groupby``- Calculate the anomaly
###Code
%%time
#create a daily climatology and anomaly
climatology_mean = sst_masked.groupby('time.dayofyear').mean('time',keep_attrs=True,skipna=False)
sst_anomaly = sst_masked.groupby('time.dayofyear')-climatology_mean #take out annual mean to remove trends
#create a monthly dataset, climatology, and anomaly
sst_monthly = sst_masked.resample(time='1MS').mean('time',keep_attrs=True,skipna=False)
climatology_mean_monthly = sst_monthly.groupby('time.month').mean('time',keep_attrs=True,skipna=False)
sst_anomaly_monthly = sst_monthly.groupby('time.month')-climatology_mean_monthly #take out annual mean to remove trends
sst_anomaly
###Output
_____no_output_____
###Markdown
3. Data plotting ``xarray`` plotting functions rely on matplotlib internally, but they make use of all available metadata to make the plotting operations more intuitive and interpretable. More plotting examples are given [here](http://xarray.pydata.org/en/stable/plotting.html) Here we use ``holoviews`` and ``hvplot`` for interactive graphics
###Code
import hvplot.xarray
import holoviews as hv
from holoviews.operation.datashader import regrid
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Plot the SST timeseries in the Pacific Blob RegionPlot both the daily and monthly data
###Code
%%time
daily = sst.sel(lon=-140, lat=53).load()
monthly = sst_monthly.sel(lon=-140, lat=53).load()
daily.hvplot(grid=True) * monthly.hvplot(grid=True)
###Output
_____no_output_____
###Markdown
Plot the SST anomaly timeseries in the Pacific Blob Region
###Code
%%time
daily = sst_anomaly.sel(lon=-140, lat=53).drop('dayofyear').load()
monthly = sst_anomaly_monthly.sel(lon=-140, lat=53).drop('month').load()
daily.hvplot(grid=True) * monthly.hvplot(grid=True)
###Output
_____no_output_____
###Markdown
 Plot a global image of SST on 10/1/2015 Plotting on mapsFor plotting on maps, we rely on the excellent [cartopy](http://scitools.org.uk/cartopy/docs/latest/index.html) library. In cartopy you need to define the map projection you want to plot. - Common ones are Orthographic and PlateCarree.- You can add coastlines and gridlines to the axes as well.
###Code
import cartopy.crs as ccrs
import geoviews.feature as gf
%%time
sst_dy = sst.sel(time='2015-10-01T09').load()
basemap = gf.land() * gf.coastline() * gf.borders()
mur_whole = sst_dy.sel(lat=slice(-70,80)).hvplot.quadmesh(x='lon',
y='lat',
rasterize=True,
geo=True,
cmap='turbo',
frame_width=400)
mur_whole * basemap
###Output
_____no_output_____
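###Markdown
 A minimal pure-cartopy sketch (my addition, not from the original tutorial) of the idea described above: pick a projection, then add coastlines and gridlines to the axes.
###Code
# Illustrative only: plain matplotlib + cartopy, independent of the hvplot calls above
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
ax = plt.axes(projection=ccrs.PlateCarree()) # choose the map projection
ax.coastlines() # add coastlines
ax.gridlines(draw_labels=True) # add labelled gridlines
plt.show()
###Output
_____no_output_____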
###Markdown
Subset the El Niño/La Niña Region:
###Code
%%time
sst_elnino = sst.sel(lon=slice(-180,-70), lat=slice(-25,25))
###Output
_____no_output_____
###Markdown
Difference the monthly mean temperature fields for Jan 2016 (El Niño) and Jan 2014 (normal)
###Code
%%time
sst_jan2016 = sst_elnino.sel(time=slice('2016-01-01','2016-02-01')).mean(dim='time')
sst_jan2014 = sst_elnino.sel(time=slice('2014-01-01','2014-02-01')).mean(dim='time')
%%time
sst_diff = (sst_jan2016 - sst_jan2014).load()
sst_diff.hvplot.quadmesh(x='lon', y='lat',
geo=True,
rasterize=True,
cmap='rainbow',
tiles='EsriImagery')
%%time
sst_dy = sst_masked.sel(time='2015-10-01T09').load()
sst_dy.hvplot.quadmesh(x='lon', y='lat',
geo=True,
rasterize=True,
cmap='rainbow',
projection=ccrs.Orthographic(-80, 35),
coastline='110m')
###Output
_____no_output_____
###Markdown
Please close cluster
###Code
client.close()
cluster.close()
###Output
_____no_output_____ |
extra/Names gender 1.ipynb | ###Markdown
Thinking in tensors in PyTorchHands-on training by [Piotr Migdał](https://p.migdal.pl) (2019). Version 0.4 for Uniwersytet Śląski.**Work in progress** Extra: Text one-hot encoding, names part 1We use [US Baby Names - Kaggle Dataset](https://www.kaggle.com/kaggle/us-baby-names).If needed, you can use: `!wget https://www.dropbox.com/s/s14l44ptqevgech/NationalNames.csv.zip?dl=1`See also:* [The Most Unisex Names in US History](https://flowingdata.com/2013/09/25/the-most-unisex-names-in-us-history/)* [Why Most European Names Ending in A Are Female](http://blog-en.namepedia.org/2015/11/why-most-european-names-ending-in-a-are-female/)And for Polish names and surnames:* [Najpopularniejsze imiona w Polsce - Otwarte Dane](https://dane.gov.pl/dataset/219)* [Nazwiska występujące w rejestrze PESEL - Otwarte Dane](https://dane.gov.pl/dataset/568)* https://nazwiska-polskie.pl/* [List of polish first and last names - Kaggle Dataset](https://www.kaggle.com/djablo/list-of-polish-first-and-last-names/home)
###Code
%matplotlib inline
from collections import Counter
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
import h5py
names = pd.read_csv("NationalNames.csv")
names.info()
names.head()
names['Year'].max()
names2014 = names.loc[lambda df: df['Year'] == 2014]
names2014.shape
names2014.sample(5)
names2014['Gender'].value_counts()
names2014['Name'].apply(len).value_counts().sort_index()
y = names2014['Gender'].map({'F': 0, 'M': 1}).values.astype('int64')
y[:5]
X_text = list(names2014['Name'])
X_text[:5]
char_count = Counter()
for name in X_text:
char_count.update(name)
char_count.most_common(5)
char_count.keys()
char_count_lower = Counter()
for name in X_text:
char_count_lower.update(name.lower())
chars = sorted(char_count_lower.keys())
"".join(chars)
char2id = {c: i for i, c in enumerate(chars)}
char2id
max_len = 16
X = np.zeros((len(X_text), len(chars), max_len), dtype='float32')
for i, name in enumerate(X_text):
for j, c in enumerate(name.lower()):
X[i, char2id[c], j] = 1.
sns.heatmap(pd.DataFrame(X[1], index=chars))
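# (Added sketch, not in the original notebook) Sanity-check the encoding by decoding
# one row back to a string: arg-max over the character axis recovers each letter.
id2char = {i: c for c, i in char2id.items()}
decoded = ''.join(id2char[j] for j in X[1].argmax(axis=0)[:len(X_text[1])])
print(X_text[1].lower(), '->', decoded)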
len(X)
len(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=137)
with h5py.File("names.h5") as f:
f.create_dataset('X_train', data=X_train)
f.create_dataset('y_train', data=y_train)
f.create_dataset('X_test', data=X_test)
f.create_dataset('y_test', data=y_test)
f.create_dataset('characters', data=np.array(chars, dtype='S1'))
f.create_dataset('categories', data=np.array(['F', 'M'], dtype='S1'))
###Output
_____no_output_____ |
notebooks/Dataset E - Pima Indians Diabetes/Synthetic data evaluation/Utility/TSTR CTGAN Dataset E.ipynb | ###Markdown
TSTR CTGAN Dataset E
###Code
#import libraries
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import os
print('Libraries imported!!')
#define directory of functions and actual directory
HOME_PATH = '' #home path of the project
FUNCTIONS_DIR = 'EVALUATION FUNCTIONS/UTILITY'
ACTUAL_DIR = os.getcwd()
#change directory to functions directory
os.chdir(HOME_PATH + FUNCTIONS_DIR)
#import functions for data labelling analisys
from utility_evaluation import DataPreProcessor
from utility_evaluation import train_evaluate_model
#change directory to actual directory
os.chdir(ACTUAL_DIR)
print('Functions imported!!')
###Output
Functions imported!!
###Markdown
1. Read data
###Code
#read real dataset
train_data = pd.read_csv(HOME_PATH + 'SYNTHETIC DATASETS/CTGAN/E_PimaIndiansDiabetes_Synthetic_CTGAN.csv')
categorical_columns = ['Outcome']
for col in categorical_columns :
train_data[col] = train_data[col].astype('category')
train_data
#read test data
test_data = pd.read_csv(HOME_PATH + 'REAL DATASETS/TEST DATASETS/E_PimaIndiansDiabetes_Real_Test.csv')
for col in categorical_columns :
test_data[col] = test_data[col].astype('category')
test_data
target = 'Outcome'
#quick look at the breakdown of class values
print('Train data')
print(train_data.shape)
print(train_data.groupby(target).size())
print('#####################################')
print('Test data')
print(test_data.shape)
print(test_data.groupby(target).size())
###Output
Train data
(614, 9)
Outcome
0 211
1 403
dtype: int64
#####################################
Test data
(154, 9)
Outcome
0 99
1 55
dtype: int64
###Markdown
2. Pre-process training data
###Code
target = 'Outcome'
categorical_columns = None
numerical_columns = train_data.select_dtypes(include=['int64','float64']).columns.tolist()
categories = None
data_preprocessor = DataPreProcessor(categorical_columns, numerical_columns, categories)
x_train = data_preprocessor.preprocess_train_data(train_data.loc[:, train_data.columns != target])
y_train = train_data.loc[:, target]
x_train.shape, y_train.shape
###Output
_____no_output_____
###Markdown
3. Preprocess test data
###Code
x_test = data_preprocessor.preprocess_test_data(test_data.loc[:, test_data.columns != target])
y_test = test_data.loc[:, target]
x_test.shape, y_test.shape
###Output
_____no_output_____
###Markdown
4. Create a dataset to save the results
###Code
results = pd.DataFrame(columns = ['model','accuracy','precision','recall','f1'])
results
###Output
_____no_output_____
###Markdown
4. Train and evaluate Random Forest Classifier
###Code
rf_results = train_evaluate_model('RF', x_train, y_train, x_test, y_test)
results = results.append(rf_results, ignore_index=True)
rf_results
###Output
[Parallel(n_jobs=3)]: Using backend ThreadingBackend with 3 concurrent workers.
[Parallel(n_jobs=3)]: Done 44 tasks | elapsed: 0.1s
[Parallel(n_jobs=3)]: Done 100 out of 100 | elapsed: 0.2s finished
[Parallel(n_jobs=3)]: Using backend ThreadingBackend with 3 concurrent workers.
[Parallel(n_jobs=3)]: Done 44 tasks | elapsed: 0.0s
[Parallel(n_jobs=3)]: Done 100 out of 100 | elapsed: 0.0s finished
###Markdown
5. Train and Evaluate KNeighbors Classifier
###Code
knn_results = train_evaluate_model('KNN', x_train, y_train, x_test, y_test)
results = results.append(knn_results, ignore_index=True)
knn_results
###Output
_____no_output_____
###Markdown
6. Train and evaluate Decision Tree Classifier
###Code
dt_results = train_evaluate_model('DT', x_train, y_train, x_test, y_test)
results = results.append(dt_results, ignore_index=True)
dt_results
###Output
_____no_output_____
###Markdown
7. Train and evaluate Support Vector Machines Classifier
###Code
svm_results = train_evaluate_model('SVM', x_train, y_train, x_test, y_test)
results = results.append(svm_results, ignore_index=True)
svm_results
###Output
[LibSVM]
###Markdown
8. Train and evaluate Multilayer Perceptron Classifier
###Code
mlp_results = train_evaluate_model('MLP', x_train, y_train, x_test, y_test)
results = results.append(mlp_results, ignore_index=True)
mlp_results
###Output
Iteration 1, loss = 0.66525092
Iteration 2, loss = 0.62842573
Iteration 3, loss = 0.61250394
Iteration 4, loss = 0.60457144
Iteration 5, loss = 0.60035791
Iteration 6, loss = 0.59529669
Iteration 7, loss = 0.59059339
Iteration 8, loss = 0.58577982
Iteration 9, loss = 0.58251283
Iteration 10, loss = 0.57906775
Iteration 11, loss = 0.57642992
Iteration 12, loss = 0.57384669
Iteration 13, loss = 0.57114402
Iteration 14, loss = 0.56870136
Iteration 15, loss = 0.56530111
Iteration 16, loss = 0.56197226
Iteration 17, loss = 0.55872843
Iteration 18, loss = 0.55654354
Iteration 19, loss = 0.55367427
Iteration 20, loss = 0.55183118
Iteration 21, loss = 0.54857647
Iteration 22, loss = 0.54573532
Iteration 23, loss = 0.54285251
Iteration 24, loss = 0.54050451
Iteration 25, loss = 0.53826577
Iteration 26, loss = 0.53553356
Iteration 27, loss = 0.53413514
Iteration 28, loss = 0.53474493
Iteration 29, loss = 0.53066609
Iteration 30, loss = 0.52550152
Iteration 31, loss = 0.52302501
Iteration 32, loss = 0.52082901
Iteration 33, loss = 0.51908471
Iteration 34, loss = 0.51787647
Iteration 35, loss = 0.51662443
Iteration 36, loss = 0.51400386
Iteration 37, loss = 0.51016865
Iteration 38, loss = 0.50719337
Iteration 39, loss = 0.50316967
Iteration 40, loss = 0.50091755
Iteration 41, loss = 0.50603674
Iteration 42, loss = 0.50198726
Iteration 43, loss = 0.49590527
Iteration 44, loss = 0.49294547
Iteration 45, loss = 0.49030892
Iteration 46, loss = 0.49396653
Iteration 47, loss = 0.49668352
Iteration 48, loss = 0.49394851
Iteration 49, loss = 0.49026302
Iteration 50, loss = 0.48782682
Iteration 51, loss = 0.48346689
Iteration 52, loss = 0.47874712
Iteration 53, loss = 0.47817553
Iteration 54, loss = 0.47625916
Iteration 55, loss = 0.47297388
Iteration 56, loss = 0.47270152
Iteration 57, loss = 0.47038275
Iteration 58, loss = 0.46499118
Iteration 59, loss = 0.46097737
Iteration 60, loss = 0.45799790
Iteration 61, loss = 0.45715996
Iteration 62, loss = 0.46014731
Iteration 63, loss = 0.45091346
Iteration 64, loss = 0.44604701
Iteration 65, loss = 0.44580969
Iteration 66, loss = 0.44140029
Iteration 67, loss = 0.43918377
Iteration 68, loss = 0.43626313
Iteration 69, loss = 0.43340432
Iteration 70, loss = 0.43428653
Iteration 71, loss = 0.43676071
Iteration 72, loss = 0.43039249
Iteration 73, loss = 0.42487020
Iteration 74, loss = 0.42126500
Iteration 75, loss = 0.41960162
Iteration 76, loss = 0.41881699
Iteration 77, loss = 0.42145131
Iteration 78, loss = 0.41934812
Iteration 79, loss = 0.41639442
Iteration 80, loss = 0.40841491
Iteration 81, loss = 0.40966158
Iteration 82, loss = 0.40415383
Iteration 83, loss = 0.40015461
Iteration 84, loss = 0.39869488
Iteration 85, loss = 0.39569104
Iteration 86, loss = 0.39834058
Iteration 87, loss = 0.39212008
Iteration 88, loss = 0.39709330
Iteration 89, loss = 0.38793837
Iteration 90, loss = 0.38400560
Iteration 91, loss = 0.37925687
Iteration 92, loss = 0.37365649
Iteration 93, loss = 0.37020472
Iteration 94, loss = 0.36528869
Iteration 95, loss = 0.36722882
Iteration 96, loss = 0.36319720
Iteration 97, loss = 0.36253183
Iteration 98, loss = 0.35381577
Iteration 99, loss = 0.34924506
Iteration 100, loss = 0.34950760
Iteration 101, loss = 0.35117866
Iteration 102, loss = 0.34688767
Iteration 103, loss = 0.34257028
Iteration 104, loss = 0.34075813
Iteration 105, loss = 0.34024586
Iteration 106, loss = 0.33186279
Iteration 107, loss = 0.32615707
Iteration 108, loss = 0.32905152
Iteration 109, loss = 0.32982044
Iteration 110, loss = 0.32400605
Iteration 111, loss = 0.31555190
Iteration 112, loss = 0.31265416
Iteration 113, loss = 0.31328060
Iteration 114, loss = 0.31226786
Iteration 115, loss = 0.30873498
Iteration 116, loss = 0.30571639
Iteration 117, loss = 0.30156120
Iteration 118, loss = 0.31038544
Iteration 119, loss = 0.29395679
Iteration 120, loss = 0.30787815
Iteration 121, loss = 0.28642130
Iteration 122, loss = 0.28231335
Iteration 123, loss = 0.28265881
Iteration 124, loss = 0.27803161
Iteration 125, loss = 0.27315824
Iteration 126, loss = 0.27563729
Iteration 127, loss = 0.27404433
Iteration 128, loss = 0.26796484
Iteration 129, loss = 0.26230075
Iteration 130, loss = 0.26330986
Iteration 131, loss = 0.26601387
Iteration 132, loss = 0.25602034
Iteration 133, loss = 0.24413961
Iteration 134, loss = 0.24619846
Iteration 135, loss = 0.24982394
Iteration 136, loss = 0.23937927
Iteration 137, loss = 0.23991097
Iteration 138, loss = 0.23264432
Iteration 139, loss = 0.23112802
Iteration 140, loss = 0.22786494
Iteration 141, loss = 0.22255937
Iteration 142, loss = 0.23118608
Iteration 143, loss = 0.22869903
Iteration 144, loss = 0.25278292
Iteration 145, loss = 0.22724878
Iteration 146, loss = 0.22372948
Iteration 147, loss = 0.21471222
Iteration 148, loss = 0.20647363
Iteration 149, loss = 0.21149427
Iteration 150, loss = 0.20005817
Iteration 151, loss = 0.21038536
Iteration 152, loss = 0.20436509
Iteration 153, loss = 0.19685754
Iteration 154, loss = 0.18843017
Iteration 155, loss = 0.18584032
Iteration 156, loss = 0.18964351
Iteration 157, loss = 0.18419089
Iteration 158, loss = 0.18007450
Iteration 159, loss = 0.18010070
Iteration 160, loss = 0.17692380
Iteration 161, loss = 0.17409676
Iteration 162, loss = 0.16619604
Iteration 163, loss = 0.16857321
Iteration 164, loss = 0.16749531
Iteration 165, loss = 0.16015335
Iteration 166, loss = 0.16398049
Iteration 167, loss = 0.16747491
Iteration 168, loss = 0.15669308
Iteration 169, loss = 0.15442102
Iteration 170, loss = 0.14545701
Iteration 171, loss = 0.14393652
Iteration 172, loss = 0.14313234
Iteration 173, loss = 0.14206469
Iteration 174, loss = 0.14169968
Iteration 175, loss = 0.13437294
Iteration 176, loss = 0.14235352
Iteration 177, loss = 0.13929744
Iteration 178, loss = 0.14126868
Iteration 179, loss = 0.13328013
Iteration 180, loss = 0.13585010
Iteration 181, loss = 0.12993796
Iteration 182, loss = 0.13882923
Iteration 183, loss = 0.13284293
Iteration 184, loss = 0.12438135
Iteration 185, loss = 0.13063702
Iteration 186, loss = 0.12755641
Iteration 187, loss = 0.12013599
Iteration 188, loss = 0.11436596
Iteration 189, loss = 0.11166379
Iteration 190, loss = 0.11062447
Iteration 191, loss = 0.10936041
Iteration 192, loss = 0.10505666
Iteration 193, loss = 0.10704676
Iteration 194, loss = 0.09740118
Iteration 195, loss = 0.10622609
Iteration 196, loss = 0.09367720
Iteration 197, loss = 0.09745342
Iteration 198, loss = 0.09891881
Iteration 199, loss = 0.09048271
Iteration 200, loss = 0.08904289
Iteration 201, loss = 0.08698558
Iteration 202, loss = 0.08631819
Iteration 203, loss = 0.08571934
Iteration 204, loss = 0.08166796
Iteration 205, loss = 0.08322354
Iteration 206, loss = 0.07694290
Iteration 207, loss = 0.08089277
Iteration 208, loss = 0.07317044
Iteration 209, loss = 0.08131401
Iteration 210, loss = 0.08977113
Iteration 211, loss = 0.08287852
Iteration 212, loss = 0.08815675
Iteration 213, loss = 0.07522075
Iteration 214, loss = 0.06878875
Iteration 215, loss = 0.07143955
Iteration 216, loss = 0.07173229
Iteration 217, loss = 0.06521728
Iteration 218, loss = 0.06552625
Iteration 219, loss = 0.06071554
Iteration 220, loss = 0.05949483
Iteration 221, loss = 0.05932095
Iteration 222, loss = 0.05858813
Iteration 223, loss = 0.05776346
Iteration 224, loss = 0.05845545
Iteration 225, loss = 0.05466736
Iteration 226, loss = 0.05210378
Iteration 227, loss = 0.05386695
Iteration 228, loss = 0.05117846
Iteration 229, loss = 0.05190351
Iteration 230, loss = 0.05197578
Iteration 231, loss = 0.05299839
Iteration 232, loss = 0.05759302
Iteration 233, loss = 0.05886098
Iteration 234, loss = 0.04970810
Iteration 235, loss = 0.04878307
Iteration 236, loss = 0.05267114
Iteration 237, loss = 0.05298432
Iteration 238, loss = 0.05002623
Iteration 239, loss = 0.05286762
Iteration 240, loss = 0.05003693
Iteration 241, loss = 0.04432746
Iteration 242, loss = 0.05065832
Iteration 243, loss = 0.04272329
Iteration 244, loss = 0.04429591
Iteration 245, loss = 0.04148724
Iteration 246, loss = 0.03896921
Iteration 247, loss = 0.04297852
Iteration 248, loss = 0.04115834
Iteration 249, loss = 0.03921849
Iteration 250, loss = 0.04171771
Iteration 251, loss = 0.03879541
Iteration 252, loss = 0.04534562
Iteration 253, loss = 0.04112194
Iteration 254, loss = 0.03729806
Iteration 255, loss = 0.04063094
Iteration 256, loss = 0.03388950
Iteration 257, loss = 0.03371754
Iteration 258, loss = 0.03441758
Iteration 259, loss = 0.03130633
Iteration 260, loss = 0.03099968
Iteration 261, loss = 0.03417712
Iteration 262, loss = 0.03096492
Iteration 263, loss = 0.03021789
Iteration 264, loss = 0.02904076
Iteration 265, loss = 0.02810142
Iteration 266, loss = 0.02697656
Iteration 267, loss = 0.02647054
Iteration 268, loss = 0.02594407
Iteration 269, loss = 0.03089910
Iteration 270, loss = 0.03145890
Iteration 271, loss = 0.03484412
Iteration 272, loss = 0.03027657
Iteration 273, loss = 0.02886480
Iteration 274, loss = 0.02858050
Iteration 275, loss = 0.02602673
Iteration 276, loss = 0.02442472
Iteration 277, loss = 0.02445752
Iteration 278, loss = 0.02271733
Iteration 279, loss = 0.02148553
Iteration 280, loss = 0.02118517
Iteration 281, loss = 0.02138116
Iteration 282, loss = 0.02073974
Iteration 283, loss = 0.02047043
Iteration 284, loss = 0.01994883
Iteration 285, loss = 0.01954171
Iteration 286, loss = 0.02016037
Iteration 287, loss = 0.01917261
Iteration 288, loss = 0.01883878
Iteration 289, loss = 0.01907946
Iteration 290, loss = 0.01895694
Iteration 291, loss = 0.02102595
Iteration 292, loss = 0.02397867
Iteration 293, loss = 0.03728518
Iteration 294, loss = 0.03897445
Iteration 295, loss = 0.04019142
Iteration 296, loss = 0.03267026
Iteration 297, loss = 0.02474778
Iteration 298, loss = 0.05997186
Iteration 299, loss = 0.03108477
Training loss did not improve more than tol=0.000100 for 10 consecutive epochs. Stopping.
###Markdown
9. Save results file
###Code
results.to_csv('RESULTS/models_results_ctgan.csv', index=False)
results
###Output
_____no_output_____ |
solutions.jupyter/ch03.ipynb | ###Markdown
Solutions for exercises of chapter 3. https://mml-book.com Author: emil.vatai at gmail.com Date: 2019-12-24
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Exercise 3.1 - Symmetry: $\langle \mathbf{x}, \mathbf{y} \rangle = x_1 y_1 - (x_1 y_2 + x_2 y_1) + 2x_2 y_2 = y_1 x_1 - (y_1 x_2 + y_2 x_1) + 2 y_2 x_2 = \langle \mathbf{y}, \mathbf{x} \rangle$- Positive definite: $\langle \mathbf{x}, \mathbf{x} \rangle = x_1^2 - 2 x_1 x_2 + 2x_2^2 = (x_1 - x_2)^2 + x_2^2 \ge 0$ since $(x_1 - x_2)^2 \ge 0$ and $x_2^2 \ge 0$. Exercise 3.2 Exercise 3.3
###Code
x = np.array([1, 2, 3])
y = np.array([-1, -1, 0])
A = np.array([
[2, 1, 0],
[1, 3, -1],
[0, -1, 2]
])
sol33a_sqr = (x - y).dot(x - y)
sol33a = np.sqrt(sol33a_sqr)
print(f"a: sqrt({sol33a_sqr}) = {sol33a}")
sol33b_sqr = (x - y).dot(A).dot(x - y)
sol33b = np.sqrt(sol33b_sqr)
print(f"b: sqrt({sol33b_sqr}) = {sol33b}")
###Output
a: sqrt(22) = 4.69041575982343
b: sqrt(47) = 6.855654600401044
###Markdown
Exercise 3.4
###Code
x = np.array([1, 2])
y = np.array([-1, -1])
B = np.array([
[2, 1],
[1, 3]
])
x_norm = np.sqrt(x.dot(x))
y_norm = np.sqrt(y.dot(y))
sol34a_cos = x.dot(y) / (x_norm * y_norm)
sol34a = np.arccos(sol34a_cos)
print(f"a: \cos \omega = {sol34a_cos} => \omega = {sol34a}")
x_innerprod_y = x.dot(B).dot(y)
x_normB = np.sqrt(x.dot(B).dot(x))
y_normB = np.sqrt(y.dot(B).dot(y))
sol34b_cos = x_innerprod_y / (x_normB * y_normB)
sol34b = np.arccos(sol34b_cos)
print(f"b: \cos \omega = {sol34b_cos} => \omega = {sol34b}")
###Output
a: \cos \omega = -0.9486832980505138 => \omega = 2.819842099193151
b: \cos \omega = -0.9799578870122228 => \omega = 2.9410462957012875
###Markdown
Exercise 3.5
###Code
x = np.array([-1, -9, -1, 4, 1])
U = np.array([
[0, 1, -3, -1],
[-1, -3, 4, -3],
[2, 1, 1, 5],
[0, -1, 2, 0],
[2, 2, 1, 7]
])
tmp = np.vstack([
U[1, :],
U[0, :],
U[2, :],
U[3, :],
U[4, :],
])
tmp[2, :] += 2 * tmp[0, :]
tmp[4, :] += 2 * tmp[0, :]
tmp
tmp[2, :] += 5 * tmp[1, :]
tmp[3, :] += 1 * tmp[1, :]
tmp[4, :] += 4 * tmp[1, :]
tmp
tmp[2, :], tmp[3, :] = tmp[3, :], tmp[2, :]
tmp[3, :] += -1 * tmp[2, :]
tmp[4, :] += -3 * tmp[2, :]
tmp
###Output
_____no_output_____
###Markdown
Based on the Row Echelon form the last column vector is linearly dependent on the first three, so only those are used to construct $B$
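###Markdown
A quick numerical double-check of this conclusion (a sketch): the rank of $U$ should equal the rank of its first three columns, i.e. both should come out as 3.
```python
print(np.linalg.matrix_rank(U), np.linalg.matrix_rank(U[:, 0:3]))
```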
###Code
B = U[:, 0:3]
B
BTB = B.T.dot(B)
BTB
tmp = np.hstack([BTB.copy(), np.eye(3)])
tmp[1, :] -= tmp[0, :]
tmp[2, :] += 2*tmp[1, :]
print(tmp)
tmp[0, :] /= 9
tmp[1, :] /= 7
tmp[2, :] /= 3
tmp[0, :] -= tmp[1, :]
tmp[0, :] += -2 * tmp[2, :]
tmp[1, :] += 2 * tmp[2, :]
print(tmp)
BTBinv = tmp[:, 3:]
print("Almost identity:")
print(BTBinv.dot(BTB)) # Almost Identity (close enough)
proj_mat = B.dot(BTBinv.dot(B.T))
# proj_mat = np.vstack([proj_mat, np.zeros([2,5])])
print("Projection matrix:")
print(proj_mat)
proj_x = proj_mat.dot(x)
proj_x
proj_mat.dot(proj_x) # verify it's a projection
###Output
_____no_output_____
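###Markdown
Since the inverse above was computed by hand, a cross-check with NumPy seems worthwhile (a sketch): for a full-column-rank $B$, `B.dot(np.linalg.pinv(B))` is the same orthogonal projection matrix, so the two constructions and the resulting projections of $x$ should agree.
```python
proj_mat_np = B.dot(np.linalg.pinv(B))
print(np.allclose(proj_mat, proj_mat_np))
print(proj_mat_np.dot(x))
```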
###Markdown
Exercise 3.6
###Code
C = np.array([
[2, 1, 0],
[1, 2, -1],
[0, -1, 2]
])
U = np.array([
[1, 0],
[0, 0],
[0, 1]
])
x = np.array([0, 1, 0])
B = U.copy()
print("B matrix: \n", B)
BTCB = B.T.dot(C).dot(B)
print("BTCB: \n", BTCB)
BTCBinv = BTCB/4
print("Check inverse: ", np.all(BTCBinv.dot(BTCB) == np.eye(2)))
proj_mat = B.dot(BTCBinv).dot(B.T).dot(C) # B(BT C B)^{-1} BT C
print("Projection matrix:")
print(proj_mat)
print("Projection of e_2:", proj_mat.dot(x))
# Not 100% sure about this
###Output
B matrix:
[[1 0]
[0 0]
[0 1]]
BTCB:
[[2 0]
[0 2]]
Check inverse: True
Projection matrix:
[[ 1. 0.5 0. ]
[ 0. 0. 0. ]
[ 0. -0.5 1. ]]
Projection of e_2: [ 0.5 0. -0.5]
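###Markdown
Given the uncertainty noted in the code comment above, two quick consistency checks (a sketch): a projection matrix must be idempotent, and this particular one should map the columns of $B$ (the basis of the subspace) onto themselves.
```python
print(np.allclose(proj_mat.dot(proj_mat), proj_mat))  # idempotent: P @ P == P
print(np.allclose(proj_mat.dot(B), B))                # vectors already in span(B) are unchanged
```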
|
Notebooks/Week2 - Data Preprocessing and Regression/preprocessing.ipynb | ###Markdown
Data Preprocessing According to Kuhn and Johnson, data preparation is the process of addition, deletion, or transformation of training set data. Sometimes, preprocessing of data can lead to unexpected improvements in model accuracy. Data preparation is an important step, and you should experiment with data preprocessing steps that are appropriate for your data to see if you can get that desirable boost in model accuracy.
###Code
import pandas as pd
df = pd.read_csv('iris.csv')
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Basic Data Handling - The `apply` method offers a convenient way to manipulate pandas `DataFrame` entries along the column axis.- We can use a regular Python or lambda function as input to the apply method.- In this context, assume that our goal is to transform class labels from a string representation (e.g., "Iris-Setosa") to an integer representation (e.g., 0), which is a historical convention and a recommendation for compatibility with various machine learning tools.
###Code
df['species'] = df['species'].apply(lambda x: 0 if x=='setosa' else x)
df.head()
###Output
_____no_output_____
###Markdown
.map vs. .apply - If we want to map column values from one value to another, it is often more convenient to use the `map` method instead of `apply`.- To achieve the following with the `apply` method, we would have to call `apply` three times (a sketch of this is shown after the next code cell).
###Code
d = {'setosa': 0,
'versicolor': 1,
'virginica': 2}
df = pd.read_csv('iris.csv')
df['species'] = df['species'].map(d)
df.head()
###Output
_____no_output_____
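###Markdown
For comparison, here is roughly what the `apply`-based route would look like -- one `apply` call per class label. This is only a sketch on a temporary copy, so the already-mapped `df` above stays untouched:
```python
df_tmp = pd.read_csv('iris.csv')
df_tmp['species'] = df_tmp['species'].apply(lambda x: 0 if x == 'setosa' else x)
df_tmp['species'] = df_tmp['species'].apply(lambda x: 1 if x == 'versicolor' else x)
df_tmp['species'] = df_tmp['species'].apply(lambda x: 2 if x == 'virginica' else x)
df_tmp.head()
```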
###Markdown
- The `tail` method is similar to `head` but shows the last five rows by default; we use it to double check that the last class label (Iris-Virginica) was also successfully transformed
###Code
df.tail()
###Output
_____no_output_____
###Markdown
- It's actually not a bad idea to check if all row entries of the `species` column got transformed correctly.
###Code
import numpy as np
np.unique(df['species'])
###Output
_____no_output_____
###Markdown
NumPy Arrays - Pandas' data frames are built on top of NumPy arrays.- While many machine learning-related tools also support pandas `DataFrame` objects as inputs now, by convention, we usually use NumPy arrays most tasks.- We can access the NumPy array that is underlying a `DataFrame` via the `values` attribute.
###Code
y = df['species'].values
y
###Output
_____no_output_____
###Markdown
- There are many different ways to access columns and rows in a pandas `DataFrame`, which we won't discuss here; a good reference documentation can be found at https://pandas.pydata.org/pandas-docs/stable/indexing.html- The `iloc` attribute allows for integer-based indexing and slicing, which is similar to how we use indexing on NumPy arrays (Lecture 04).The following expression will select column 1, 2, 3, and 4 (sepal length, sepal width, petal length, petal width) from the `DataFrame` and then assign the underlying NumPy array to `X`.
###Code
X = df.iloc[:, 0:4].values
###Output
_____no_output_____
###Markdown
- Just as a quick check, we show the first 5 rows in the NumPy array:
###Code
X[:5]
###Output
_____no_output_____
###Markdown
Splitting a Dataset into Train, Validation, and Test Subsets - The following code cells in this section illustrate the process of splitting a dataset into several subsets.- One important step, prior to splitting a dataset, is shuffling it, otherwise, we may end up with unrepresentative class distributions if the dataset was sorted prior to splitting.
###Code
import numpy as np
indices = np.arange(X.shape[0])
rng = np.random.RandomState(123)
permuted_indices = rng.permutation(indices)
permuted_indices
train_size, valid_size = int(0.65*X.shape[0]), int(0.15*X.shape[0])
test_size = X.shape[0] - (train_size + valid_size)
print(train_size, valid_size, test_size)
train_ind = permuted_indices[:train_size]
valid_ind = permuted_indices[train_size:(train_size + valid_size)]
test_ind = permuted_indices[(train_size + valid_size):]
X_train, y_train = X[train_ind], y[train_ind]
X_valid, y_valid = X[valid_ind], y[valid_ind]
X_test, y_test = X[test_ind], y[test_ind]
X_train.shape
###Output
_____no_output_____
###Markdown
Stratification - Previously, we wrote our own code to shuffle and split a dataset into training, validation, and test subsets, which had one considerable downside.- If we are working with small datasets and split it randomly into subsets, it will affect the class distribution in the samples -- this is problematic since machine learning algorithms/models assume that training, validation, and test samples have been drawn from the same distributions to produce reliable models and estimates of the generalization performance.  - The method of ensuring that the class label proportions are the same in each subset after splitting, we use an approach that is usually referred to as "stratification."- Stratification is supported in scikit-learn's `train_test_split` method if we pass the class label array to the `stratify` parameter as shown below.
###Code
from sklearn.model_selection import train_test_split
X_temp, X_test, y_temp, y_test = \
train_test_split(X, y, test_size=0.2,
shuffle=True, random_state=123, stratify=y)
np.bincount(y_temp)
X_train, X_valid, y_train, y_valid = \
train_test_split(X_temp, y_temp, test_size=0.2,
shuffle=True, random_state=123, stratify=y_temp)
print('Train size', X_train.shape, 'class proportions', np.bincount(y_train))
print('Valid size', X_valid.shape, 'class proportions', np.bincount(y_valid))
print('Test size', X_test.shape, 'class proportions', np.bincount(y_test))
###Output
Train size (96, 4) class proportions [32 32 32]
Valid size (24, 4) class proportions [8 8 8]
Test size (30, 4) class proportions [10 10 10]
###Markdown
Data Scaling - In the case of the Iris dataset, all dimensions were measured in centimeters, hence "scaling" features would not be necessary in the context of *k*NN -- unless we want to weight features differently.- Whether or not to scale features depends on the problem at hand and requires your judgement.- However, there are several algorithms (especially gradient-descent, etc., which we will cover later in this course), which work much better (are more robust, numerically stable, and converge faster) if the data is centered and has a smaller range.- There are many different ways for scaling features; here, we only cover to of the most common "normalization" schemes: min-max scaling and z-score standardization. Normalization -- Min-max scaling - Min-max scaling squashes the features into a [0, 1] range, which can be achieved via the following equation for a single input $i$: $$x^{[i]}_{\text{norm}} = \frac{x^{[i]} - x_{\text{min}} }{ x_{\text{max}} - x_{\text{min}} }$$ - Below is an example of how we can implement and apply min-max scaling on 6 data instances given a 1D input vector (1 feature) via NumPy.
###Code
x = np.arange(6).astype(float)
x
x_norm = (x - x.min()) / (x.max() - x.min())
x_norm
###Output
_____no_output_____
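###Markdown
For reference, scikit-learn's `MinMaxScaler` produces the same result as the manual formula above. A small sketch (note that the scaler expects a 2D array, hence the reshape):
```python
from sklearn.preprocessing import MinMaxScaler

MinMaxScaler().fit_transform(x.reshape(-1, 1)).ravel()
```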
###Markdown
Standardization - Z-score standardization is a useful standardization scheme if we are working with certain optimization methods (e.g., gradient descent, later in this course). - After standardizing a feature, it will have the properties of a standard normal distribution, that is, unit variance and zero mean ($N(\mu=0, \sigma^2=1)$); however, this does not transform a feature from not following a normal distribution to a normal distributed one.- The formula for standardizing a feature is shown below, for a single data point $x^{[i]}$. $$x^{[i]}_{\text{std}} = \frac{x^{[i]} - \mu_x }{ \sigma_{x} }$$
###Code
x = np.arange(6).astype(float)
x
x_std = (x - x.mean()) / x.std()
x_std
###Output
_____no_output_____
###Markdown
- Conveniently, NumPy and Pandas both implement a `std` method, which computes the standard devation.- Note the different results shown below.
###Code
df = pd.DataFrame([1, 2, 1, 2, 3, 4])
df[0].std()
df[0].values.std()
###Output
_____no_output_____
###Markdown
- The results differ because Pandas computes the "sample" standard deviation ($s_x$), whereas NumPy computes the "population" standard deviation ($\sigma_x$). $$s_x = \sqrt{ \frac{1}{n-1} \sum^{n}_{i=1} (x^{[i]} - \bar{x})^2 }$$$$\sigma_x = \sqrt{ \frac{1}{n} \sum^{n}_{i=1} (x^{[i]} - \mu_x)^2 }$$ - In the context of machine learning, since we are typically working with large datasets, we typically don't care about Bessel's correction (subtracting one degree of freedom in the denominator).- Further, the goal here is not to model a particular distribution or estimate distribution parameters accurately; however, if you like, you can remove the extra degree of freedom via NumPy's `ddof` parameters -- it's not necessary in practice though.
###Code
df[0].values.std(ddof=1)
###Output
_____no_output_____
###Markdown
- A concept that is very important though is how we use the estimated normalization parameters (e.g., mean and standard deviation in z-score standardization).- In particular, it is important that we re-use the parameters estimated from the training set to transfrom validation and test sets -- re-estimating the parameters is a common "beginner-mistake" which is why we discuss it in more detail.
###Code
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train_std = (X_train - mu) / sigma
X_valid_std = (X_valid - mu) / sigma
X_test_std = (X_test - mu) / sigma
###Output
_____no_output_____
###Markdown
- Again, if we standardize the training dataset, we need to keep the parameters (mean and standard deviation for each feature). Then, we'd use these parameters to transform our test data and any future data later on.- Let's assume we have a simple training set consisting of 3 samples with 1 feature column (let's call the feature column "length in cm"):- example1: 10 cm -> class 2- example2: 20 cm -> class 2- example3: 30 cm -> class 1 Given the data above, we estimate the following parameters from this training set:- mean: 20- standard deviation: 8.2 If we use these parameters to standardize the same dataset, we get the following z-score values:- example1: -1.22 -> class 2- example2: 0 -> class 2- example3: 1.22 -> class 1 Now, let's say our model has learned the following hypothesis: it classifies samples with a standardized length value < 0.6 as class 2 (and class 1 otherwise). So far so good. Now, let's imagine we have 3 new unlabeled data points that we want to classify.- example4: 5 cm -> class ?- example5: 6 cm -> class ?- example6: 7 cm -> class ? If we look at the non-standardized "length in cm" values in the training dataset, it is intuitive to say that all of these examples (5 cm, 6 cm, and 7 cm) likely belong to class 2 because they are smaller than anything in the training set. However, if we standardize these by re-computing the standard deviation and mean from the new data, we will get similar values as before (i.e., properties of a standard normal distribution, just like the training set) and our classifier would (probably incorrectly) assign the "class 2" label to examples 4 and 5:- example4: -1.22 -> class 2- example5: 0 -> class 2- example6: 1.22 -> class 1 However, if we use the parameters from the "training set standardization," we will get the following standardized values:- example4: approx. -1.84- example5: approx. -1.71- example6: approx. -1.59 Note that these values are more negative than the value of example1 in the original training set, which makes much more sense now! Scikit-Learn Transformer API - The transformer API in scikit-learn is very similar to the estimator API; the main difference is that transformers are typically "unsupervised," meaning, they don't make use of class labels or target values.  - Typical examples of transformers in scikit-learn are the `MinMaxScaler` and the `StandardScaler`, which can be used to perform min-max scaling and z-score standardization as discussed earlier.
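###Markdown
To make the toy example above concrete, here is the arithmetic as a small sketch (using the population standard deviation of the training values, as in the discussion):
```python
train_cm = np.array([10., 20., 30.])
new_cm = np.array([5., 6., 7.])

mu_train, sigma_train = train_cm.mean(), train_cm.std()
print((train_cm - mu_train) / sigma_train)  # approx. [-1.22, 0.00, 1.22]
print((new_cm - mu_train) / sigma_train)    # approx. [-1.84, -1.71, -1.59]
```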
###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train_std = scaler.transform(X_train)
X_valid_std = scaler.transform(X_valid)
X_test_std = scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Categorical Data - When we preprocess a dataset as input to a machine learning algorithm, we have to be careful how we treat categorical variables.- There are two broad categories of categorical variables: nominal (no order implied) and ordinal (order implied).
###Code
df = pd.read_csv('categoricaldata.csv')
df
###Output
_____no_output_____
###Markdown
- In the example above, 'size' would be an example of an ordinal variable; i.e., if the letters refer to T-shirt sizes, it would make sense to come up with an ordering like M < L < XXL.- Hence, we can assign increasing values to ordinal values; however, the range and difference between categories depends on our domain knowledge and judgement.- To convert ordinal variables into a proper representation for numerical computations via machine learning algorithms, we can use the now familiar `map` method in Pandas, as shown below.
###Code
mapping_dict = {'M': 2,
'L': 3,
'XXL': 5}
df['size'] = df['size'].map(mapping_dict)
df
###Output
_____no_output_____
###Markdown
- Machine learning algorithms do not assume an ordering in the case of class labels.- Here, we can use the `LabelEncoder` from scikit-learn to convert class labels to integers as an alternative to using the `map` method
###Code
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
df['classlabel'] = le.fit_transform(df['classlabel'])
df
###Output
_____no_output_____
###Markdown
- Representing nominal variables properly is a bit more tricky.- Since machine learning algorithms usually assume an order if a variable takes on integer values, we need to apply a "trick" here such that the algorithm would not make this assumption.- this "trick" is also called "one-hot" encoding -- we binarize a nominal variable, as shown below for the color variable (again, we do this because some ordering like orange < red < blue would not make sense in many applications).
###Code
pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
- Note that executing the code above produced 3 new variables for "color," each of which takes on binary values.- However, there is some redundancy now (e.g., if we know the values for `color_green` and `color_red`, we automatically know the value for `color_blue`).- While collinearity may cause problems (i.e., the matrix inverse doesn't exist in e.g., the context of the closed-form of linear regression), again, in machine learning we typically would not care about it too much, because most algorithms can deal with collinearity (e.g., adding constraints like regularization penalties to regression models, which we learn via gradient-based optimization).- However, removing collinearity if possible is never a bad idea, and we can do this conveniently by dropping e.g., one of the columns of the one-hot encoded variable.
###Code
pd.get_dummies(df, drop_first=True)
###Output
_____no_output_____
###Markdown
Missing Data - There are many different ways for dealing with missing data.- The simplest approaches are removing entire columns or rows.- Another simple approach is to impute missing values via the feature means, medians, mode, etc.- There is no rule or best practice, and the choice of the approprite missing data imputation method depends on your judgement and domain knowledge.- Below are some examples for dealing with missing data.
###Code
df = pd.read_csv('missingdata.csv')
df
# missing values per column:
df.isnull().sum()
# drop rows with missing values:
df.dropna(axis=0)
# drop columns with missing values:
df.dropna(axis=1)
df.fillna(df.mean())
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
X = df.values
X = imputer.fit_transform(df.values)
X
###Output
_____no_output_____ |
notebooks/SIT_W2D1_Breast_Cancer_Model_Prediction_Client.ipynb | ###Markdown
Machine Learning Engineering: Creating and Working on a Machine Learning Project Propulsion Academy, 2021 Goal: Apply software engineering concepts (version control, logging, testing, modularization) in a Machine Learning engineering context. SIT introduction to Data Science | Machine Learning Engineering | MLE Development Workflow Set up
###Code
!pip install requests
import requests
import json
# Google Colab Local Server
# url = 'http://2dc9f9a75795.ngrok.io'
# Heroku Server
url = 'https://sitw2d2.herokuapp.com/'
features = [2.057e+01, 1.777e+01, 1.329e+02, 1.326e+03, 8.474e-02, 7.864e-02,
8.690e-02, 7.017e-02, 1.812e-01, 5.667e-02, 5.435e-01, 7.339e-01,
3.398e+00, 7.408e+01, 5.225e-03, 1.308e-02, 1.860e-02, 1.340e-02,
1.389e-02, 3.532e-03, 2.499e+01, 2.341e+01, 1.588e+02, 1.956e+03,
1.238e-01, 1.866e-01, 2.416e-01, 1.860e-01, 2.750e-01, 8.902e-02]
req = requests.post(url,
headers = {'Content-Type': 'application/json'},
data = json.dumps({'features': features})
)
req.json()['prediction']
###Output
_____no_output_____ |
VAR_model.ipynb | ###Markdown
You may skip straight to 4. Model Analysis with Impulse Response Function. Other analyses are conducted prior to section 4 for a clearer understanding of the data and model.
###Code
import numpy as np
import pandas as pd
import statsmodels.tsa.api as sm
from statsmodels.tsa.api import VAR
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
1. Loading and preparing the data
###Code
# load data
data = pd.read_csv("data.csv")
data.head()
data = data[[a for a in data.columns if a != "date"]] # Remove the date column
data.head()
###Output
_____no_output_____
###Markdown
2. Find the best order of VAR model for this data
###Code
# model determination
model = VAR(data)
print(model.select_order(5).summary())
selected_orders = model.select_order(5).selected_orders
print("Selected orders: ", selected_orders)
###Output
VAR Order Selection (* highlights the minimums)
=================================================
AIC BIC FPE HQIC
-------------------------------------------------
0 7.210 7.253 1353. 7.227
1 4.911* 5.123* 135.8* 4.995*
2 4.932 5.314 138.6 5.084
3 4.920 5.472 137.0 5.139
4 4.914 5.636 136.2 5.201
5 4.947 5.839 140.8 5.301
-------------------------------------------------
Selected orders: {'aic': 1, 'bic': 1, 'hqic': 1, 'fpe': 1}
###Markdown
All measures point towards a VAR model of order 1 3. Model analysis (without IRF yet)
###Code
# model estimation
results = model.fit(1,trend="nc")
results.summary()
print(results.summary())
print("================test_whiteness================\n")
print(results.test_whiteness().summary())
print("================results.roots================\n")
for root in results.roots:
print(str(root)+",")
print("\n================is_stable================\n")
print(str(results.is_stable()))
print("\n================granger causality on sales================\n")
print(results.test_causality('sales', ['review_num'],kind='f').summary())
print(results.test_causality('sales', ['rating'],kind='f').summary())
print(results.test_causality('sales', ['sentiment'],kind='f').summary())
print("\n================granger causality on review_num================\n")
print(results.test_causality('review_num', ['sales'],kind='f').summary())
print(results.test_causality('review_num', ['rating'],kind='f').summary())
print(results.test_causality('review_num', ['sentiment'],kind='f').summary())
print("\n================granger causality on rating================\n")
print(results.test_causality('rating', ['sales'],kind='f').summary())
print(results.test_causality('rating', ['review_num'],kind='f').summary())
print(results.test_causality('rating', ['sentiment'],kind='f').summary())
print("\n================granger causality on sentiment================\n")
print(results.test_causality('sentiment', ['sales'],kind='f').summary())
print(results.test_causality('sentiment', ['rating'],kind='f').summary())
print(results.test_causality('sentiment', ['sentiment'],kind='f').summary())
# forecast error decomposition
fevd = results.fevd(5)
answer = results.fevd(20).plot()
###Output
_____no_output_____
###Markdown
4. Model Analysis with Impulse Response Function 4.1. General IRF
###Code
irf = results.irf(20)
irf2 = irf.plot(orth=False)
###Output
_____no_output_____
###Markdown
4.2. Cumulative IRF
###Code
irf.plot_cum_effects(orth=False)
np.set_printoptions(suppress=True, precision = 5) # To better format the results
print(results.long_run_effects())
###Output
[[149.48919 -8.1003 347.53191 511.78824]
[ 2.14212 1.58128 5.74025 6.89961]
[ 3.62934 -0.01941 11.51555 11.40032]
[ 0.48527 0.00219 1.3463 2.84247]]
|
DAY 001 ~ 100/DAY060_[Programmers] 힙 더 맵게 (Python).ipynb | ###Markdown
Monday, April 6, 2020 Programmers - Heap: More Spicy (더 맵게) Problem: https://programmers.co.kr/learn/courses/30/lessons/42626 Blog: https://somjang.tistory.com/entry/Programmers-%ED%9E%99-%EB%8D%94-%EB%A7%B5%EA%B2%8C-Python First attempt
###Code
def solution(scoville, K):
answer = -1
count = 0
check_flag = False
while min(scoville) < K:
scoville = sorted(scoville, reverse=True)
scoville.append(scoville.pop() + (scoville.pop() * 2) )
if len(scoville) == 1 and scoville[0] < K:
check_flag = True
break
count = count + 1
if check_flag == False:
answer = count
return answer
###Output
_____no_output_____
###Markdown
--- Second attempt
###Code
import heapq
def solution(scoville, K):
answer = -1
count = 0
check_flag = False
    heapq.heapify(scoville)  # heapify works in place and returns None
while scoville[0] < K:
min_first = heapq.heappop(scoville)
min_second = heapq.heappop(scoville)
heapq.heappush(scoville, min_first + (min_second * 2))
if len(scoville) == 1 and scoville[0] < K:
check_flag = True
break
count = count + 1
if check_flag == False:
answer = count
return answer
###Output
_____no_output_____ |
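###Markdown
A quick sanity check with the sample case usually quoted for this problem (assumed here from memory, not taken from the notebook): `scoville = [1, 2, 3, 9, 10, 12]` with `K = 7` should need 2 mixes.
```python
print(solution([1, 2, 3, 9, 10, 12], 7))  # expected: 2
```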
assignments/essential/assignment_08.ipynb | ###Markdown
Assignment 1. Create a 3D array of shape 10x5x3 with values equal to zero. 2. Create another array with values 255 and shape 10x5x3. 3. Join the 2 arrays horizontally. 4. Create an array of random unsigned 8-bit integers with shape (50x50x3). 5. Convert the array from question 4: top part to red and bottom part to green. 6. Seed value is 11. 7. Display or show each array
###Code
import numpy as np
import matplotlib.pyplot as plt
# Question 01
array_0 = np.zeros((10,5,3), dtype='uint8')
# Question 02
array_255 = np.full((10,5,3), 255)
# Question 03
horinzontal_join = np.hstack((array_0, array_255))
# Display array using matplotlib
plt.figure(figsize=(15,5))
plt.subplot(1,3,1)
plt.imshow(array_0)
plt.title('Array of 0')
plt.subplot(1,3,2)
plt.imshow(array_255)
plt.title('Array of 255')
plt.subplot(1,3,3)
plt.imshow(horinzontal_join)
plt.title('Horizontally joined')
# Question 04
np.random.seed(87)
array_random = np.random.randint(0, 256, (50, 50, 3))
plt.figure(figsize=(5,5))
plt.imshow(array_random)
plt.title('RANDOM ARRAY')
plt.show()
# Question 05
# Work on a copy of the random array so the original stays unchanged,
# then overwrite the top 25 rows with red and the bottom 25 rows with green
half_red_green = array_random.copy()
half_red_green[:25, :, :] = [255, 0, 0]   # top half -> red
half_red_green[25:, :, :] = [0, 255, 0]   # bottom half -> green
# Display the array
plt.figure(figsize=(10,10))
plt.subplot(1,2,1)
plt.imshow(array_random)
plt.title('Original')
plt.subplot(1,2,2)
plt.imshow(half_red_green)
plt.title('Half red green array')
plt.show()
# Question 06
np.random.seed(11)
array_seed11 = np.random.randint(0, 256, (50, 50, 3))
# Question 07
plt.figure(figsize=(10,10))
plt.subplot(1,2,1)
plt.imshow(array_random)
plt.title('Original array')
plt.subplot(1,2,2)
plt.imshow(array_seed11)
plt.title('Array <seed=11>')
plt.show()
###Output
_____no_output_____ |
notebooks/examples/DQN_battery_example.ipynb | ###Markdown
The main aim of this notebook is to demo the low level energy_py API. All of the functionality exposed here is wrapped into a higher level experiment API (see readme). This higher level API wraps more functionality than is exposed in this example - i.e. generating log files and writing data to TensorBoard. For the scope of this low level example, we will just use data available locally - the episode rewards collected during training (which the higher level API tracks in its `Runner` class) and data for the last episode - the `info` dictionary. This notebook also demonstrates the ability of a DQN agent to learn to optimize simplified electric battery storage. This example involves a constant and repetitive electricity price profile, combined with a perfect forecast. The agent both has the ability to memorize this profile and lives in a near Markov environment. More interesting work randomly samples different price rollouts and uses realistic forecasts. A real world application of using reinforcement learning to control a battery would have to deal with both a variable price profile and a non-Markov understanding of what the price profile would do in the future. It could also involve additional reward signals, such as payments for fast frequency response, which would need to be balanced against price arbitrage.
###Code
from datetime import datetime
import os
import random
import numpy as np
import pandas as pd
import tensorflow as tf
import energypy
# define a total number of steps for the experiment to run
TOTAL_STEPS = 10000
# to setup the agent we use a dictionary
# a dictionary allows us to eaisly save the config to csv if we want
agent_config = {
'discount': 0.97, # the discount rate
'tau': 0.001, # parameter that controls the copying of weights from online to target network
'total_steps': TOTAL_STEPS,
'batch_size': 32, # size of the minibatches used for learning
'layers': (50, 50), # structure of the neural network used to approximate Q(s,a)
'learning_rate': 0.0001, # controls the stength of weight updates during learning
'epsilon_decay_fraction': 0.3, # a fraction as % of total steps where epsilon decayed from 1.0 to 0.1
'memory_fraction': 0.4, # the size of the replay memory as a % of total steps
'memory_type': 'deque', # the replay memory implementation we want
}
# keep all of the BatteryEnv variables (episode length, efficiency etc) at their defaults
# the env uses its default state.csv and observation.csv here; a data_path argument could point it at custom price data
env = energypy.make_env('battery')
# set seeds for reproducibility
env.seed(42)
def print_time():
print(datetime.now().strftime('%Y-%m-%d %H:%M:%S'))
print_time()
# reset the graph (without this nb needs to be restart each time)
tf.reset_default_graph()
# initialize Tensorflow machinery
with tf.Session() as sess:
# add the tf session and the environment to the agent config dictionary
# and initialize the agent
agent_config['sess'] = sess
agent_config['env'] = env
agent = energypy.make_agent(agent_id='dqn', **agent_config)
# initial values for the step and episode number
step, episode = 0, 0
# outer while loop runs through multiple episodes
rewards = []
while step < TOTAL_STEPS:
episode += 1
done = False
observation = env.reset()
# inner while loop runs through a single episode
episode_rewards = []
while not done:
step += 1
# select an action
action = agent.act(observation)
# take one step through the environment
next_observation, reward, done, info = env.step(action)
# store the experience
agent.remember(observation, action, reward,
next_observation, done)
# moving to the next time step
observation = next_observation
# saving the reward
episode_rewards.append(reward)
# we don't start learning until the memory is half full
if step > int(agent.memory.size * 0.5):
train_info = agent.learn()
rewards.append(sum(episode_rewards))
if episode % 10 == 1:
print('ep {} {:.2f} % rew {}'.format(episode, 100 * step / TOTAL_STEPS, sum(episode_rewards)))
print_time()
# results of the last episode
info = pd.DataFrame(info)
info
###Output
_____no_output_____ |
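###Markdown
As a small follow-up sketch (not part of the original run), the per-episode rewards collected in the `rewards` list can be plotted to see whether the agent improves over training:
```python
import matplotlib.pyplot as plt

plt.plot(rewards)
plt.xlabel('episode')
plt.ylabel('total episode reward')
plt.title('DQN battery agent - learning curve')
plt.show()
```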
LFPy-example-02.ipynb | ###Markdown
Example 2: Extracellular response of synaptic input This is an example of **``LFPy``** running in a **``Jupyter notebook``**. To run through this example code and produce output, press **``Shift+Enter``** in each code block below. First step is to import ``LFPy`` and other packages for analysis and plotting:
###Code
import LFPy
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
from pylab import *  # brings the numpy/pyplot names used below (mgrid, zeros, array, figure, colorbar, ...) into the namespace
###Output
_____no_output_____
###Markdown
Create some dictionarys with parameters for cell, synapse and extracellular electrode:
###Code
cellParameters = {
'morphology': 'morphologies/L5_Mainen96_LFPy.hoc',
'tstart': -50,
'tstop': 100,
'dt': 2**-4,
'passive': True,
}
synapseParameters = {
'syntype': 'Exp2Syn',
'e': 0.,
'tau1': 0.5,
'tau2': 2.0,
'weight': 0.005,
'record_current': True,
}
z = mgrid[-400:1201:100]
electrodeParameters = {
'x': zeros(z.size),
'y': zeros(z.size),
'z': z,
'sigma': 0.3,
}
###Output
_____no_output_____
###Markdown
Then, create the **`cell`**, **`synapse`** and **`electrode`** objects using the**`LFPy.Cell`**, **`LFPy.Synapse`**, **`LFPy.RecExtElectrode`** classes.
###Code
cell = LFPy.Cell(**cellParameters)
cell.set_pos(x=-10, y=0, z=0)
cell.set_rotation(z=np.pi)
synapse = LFPy.Synapse(cell,
idx = cell.get_closest_idx(z=800),
**synapseParameters)
synapse.set_spike_times(array([10, 30, 50]))
electrode = LFPy.RecExtElectrode(cell, **electrodeParameters)
###Output
_____no_output_____
###Markdown
Run the simulation using **`cell.simulate()`** probing the extracellular potential with the additional keyword argument **`probes=[electrode]`**
###Code
cell.simulate(probes=[electrode])
###Output
_____no_output_____
###Markdown
Then plot the **somatic potential** and the **prediction** obtained using the `RecExtElectrode` instance (now accessible as `electrode.data`):
###Code
fig = figure(figsize=(12, 6))
gs = GridSpec(2, 3)
ax0 = fig.add_subplot(gs[:, 0])
ax0.plot(cell.x.T, cell.z.T, 'k')
ax0.plot(synapse.x, synapse.z,
color='r', marker='o', markersize=10,
label='synapse')
ax0.plot(electrode.x, electrode.z, '.', color='g',
label='electrode')
ax0.axis([-500, 500, -450, 1250])
ax0.legend()
ax0.set_xlabel('x (um)')
ax0.set_ylabel('z (um)')
ax0.set_title('morphology')
ax1 = fig.add_subplot(gs[0, 1])
ax1.plot(cell.tvec, synapse.i, 'r'), title('synaptic current (pA)')
plt.setp(ax1.get_xticklabels(), visible=False)
ax2 = fig.add_subplot(gs[1, 1], sharex=ax1)
ax2.plot(cell.tvec, cell.somav, 'k'), title('somatic voltage (mV)')
ax3 = fig.add_subplot(gs[:, 2], sharey=ax0, sharex=ax1)
im = ax3.pcolormesh(cell.tvec, electrode.z, electrode.data,
vmin=-abs(electrode.data).max(), vmax=abs(electrode.data).max(),
shading='auto')
colorbar(im)
ax3.set_title('LFP (mV)')
ax3.set_xlabel('time (ms)')
#savefig('LFPy-example-02.pdf', dpi=300)
###Output
_____no_output_____ |
Capstone Project (6).ipynb | ###Markdown
**Capstone Project GRP4 NLP B** Installing Packages
###Code
!pip install -U textblob
!pip install translators
!pip install goslate
import goslate
!pip install translate
!pip install googletrans==4.0.0-rc1
import googletrans
from googletrans import Translator
!pip install langdetect
!pip install ftfy
!pip install rpy2
###Output
Requirement already up-to-date: textblob in /usr/local/lib/python3.6/dist-packages (0.15.3)
Requirement already satisfied, skipping upgrade: nltk>=3.1 in /usr/local/lib/python3.6/dist-packages (from textblob) (3.2.5)
Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from nltk>=3.1->textblob) (1.15.0)
Requirement already satisfied: translators in /usr/local/lib/python3.6/dist-packages (4.7.11)
Requirement already satisfied: requests>=2.23.0 in /usr/local/lib/python3.6/dist-packages (from translators) (2.23.0)
Requirement already satisfied: loguru>=0.4.1 in /usr/local/lib/python3.6/dist-packages (from translators) (0.5.3)
Requirement already satisfied: PyExecJS>=1.5.1 in /usr/local/lib/python3.6/dist-packages (from translators) (1.5.1)
Requirement already satisfied: lxml>=4.5.0 in /usr/local/lib/python3.6/dist-packages (from translators) (4.6.2)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests>=2.23.0->translators) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests>=2.23.0->translators) (2020.12.5)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests>=2.23.0->translators) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests>=2.23.0->translators) (3.0.4)
Requirement already satisfied: aiocontextvars>=0.2.0; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from loguru>=0.4.1->translators) (0.2.2)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from PyExecJS>=1.5.1->translators) (1.15.0)
Requirement already satisfied: contextvars==2.4; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from aiocontextvars>=0.2.0; python_version < "3.7"->loguru>=0.4.1->translators) (2.4)
Requirement already satisfied: immutables>=0.9 in /usr/local/lib/python3.6/dist-packages (from contextvars==2.4; python_version < "3.7"->aiocontextvars>=0.2.0; python_version < "3.7"->loguru>=0.4.1->translators) (0.14)
Requirement already satisfied: goslate in /usr/local/lib/python3.6/dist-packages (1.5.1)
Requirement already satisfied: futures in /usr/local/lib/python3.6/dist-packages (from goslate) (3.1.1)
Requirement already satisfied: translate in /usr/local/lib/python3.6/dist-packages (3.5.0)
Requirement already satisfied: tox in /usr/local/lib/python3.6/dist-packages (from translate) (3.20.1)
Requirement already satisfied: pre-commit in /usr/local/lib/python3.6/dist-packages (from translate) (2.9.3)
Requirement already satisfied: lxml in /usr/local/lib/python3.6/dist-packages (from translate) (4.6.2)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from translate) (7.1.2)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from translate) (2.23.0)
Requirement already satisfied: packaging>=14 in /usr/local/lib/python3.6/dist-packages (from tox->translate) (20.7)
Requirement already satisfied: six>=1.14.0 in /usr/local/lib/python3.6/dist-packages (from tox->translate) (1.15.0)
Requirement already satisfied: toml>=0.9.4 in /usr/local/lib/python3.6/dist-packages (from tox->translate) (0.10.2)
Requirement already satisfied: filelock>=3.0.0 in /usr/local/lib/python3.6/dist-packages (from tox->translate) (3.0.12)
Collecting pluggy>=0.12.0
Using cached https://files.pythonhosted.org/packages/a0/28/85c7aa31b80d150b772fbe4a229487bc6644da9ccb7e427dd8cc60cb8a62/pluggy-0.13.1-py2.py3-none-any.whl
Requirement already satisfied: py>=1.4.17 in /usr/local/lib/python3.6/dist-packages (from tox->translate) (1.9.0)
Requirement already satisfied: importlib-metadata<3,>=0.12; python_version < "3.8" in /usr/local/lib/python3.6/dist-packages (from tox->translate) (2.1.1)
Requirement already satisfied: virtualenv!=20.0.0,!=20.0.1,!=20.0.2,!=20.0.3,!=20.0.4,!=20.0.5,!=20.0.6,!=20.0.7,>=16.0.0 in /usr/local/lib/python3.6/dist-packages (from tox->translate) (20.2.2)
Requirement already satisfied: identify>=1.0.0 in /usr/local/lib/python3.6/dist-packages (from pre-commit->translate) (1.5.10)
Requirement already satisfied: cfgv>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from pre-commit->translate) (3.2.0)
Requirement already satisfied: importlib-resources; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from pre-commit->translate) (3.3.0)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.6/dist-packages (from pre-commit->translate) (5.3.1)
Requirement already satisfied: nodeenv>=0.11.1 in /usr/local/lib/python3.6/dist-packages (from pre-commit->translate) (1.5.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->translate) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->translate) (2020.12.5)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->translate) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->translate) (3.0.4)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging>=14->tox->translate) (2.4.7)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.6/dist-packages (from importlib-metadata<3,>=0.12; python_version < "3.8"->tox->translate) (3.4.0)
Requirement already satisfied: distlib<1,>=0.3.1 in /usr/local/lib/python3.6/dist-packages (from virtualenv!=20.0.0,!=20.0.1,!=20.0.2,!=20.0.3,!=20.0.4,!=20.0.5,!=20.0.6,!=20.0.7,>=16.0.0->tox->translate) (0.3.1)
Requirement already satisfied: appdirs<2,>=1.4.3 in /usr/local/lib/python3.6/dist-packages (from virtualenv!=20.0.0,!=20.0.1,!=20.0.2,!=20.0.3,!=20.0.4,!=20.0.5,!=20.0.6,!=20.0.7,>=16.0.0->tox->translate) (1.4.4)
[31mERROR: pytest 3.6.4 has requirement pluggy<0.8,>=0.5, but you'll have pluggy 0.13.1 which is incompatible.[0m
[31mERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.[0m
Installing collected packages: pluggy
Found existing installation: pluggy 0.7.1
Uninstalling pluggy-0.7.1:
Successfully uninstalled pluggy-0.7.1
Successfully installed pluggy-0.13.1
Requirement already satisfied: googletrans==4.0.0-rc1 in /usr/local/lib/python3.6/dist-packages (4.0.0rc1)
Requirement already satisfied: httpx==0.13.3 in /usr/local/lib/python3.6/dist-packages (from googletrans==4.0.0-rc1) (0.13.3)
Requirement already satisfied: httpcore==0.9.* in /usr/local/lib/python3.6/dist-packages (from httpx==0.13.3->googletrans==4.0.0-rc1) (0.9.1)
Requirement already satisfied: hstspreload in /usr/local/lib/python3.6/dist-packages (from httpx==0.13.3->googletrans==4.0.0-rc1) (2020.11.21)
Requirement already satisfied: certifi in /usr/local/lib/python3.6/dist-packages (from httpx==0.13.3->googletrans==4.0.0-rc1) (2020.12.5)
Requirement already satisfied: chardet==3.* in /usr/local/lib/python3.6/dist-packages (from httpx==0.13.3->googletrans==4.0.0-rc1) (3.0.4)
Requirement already satisfied: sniffio in /usr/local/lib/python3.6/dist-packages (from httpx==0.13.3->googletrans==4.0.0-rc1) (1.2.0)
Requirement already satisfied: idna==2.* in /usr/local/lib/python3.6/dist-packages (from httpx==0.13.3->googletrans==4.0.0-rc1) (2.10)
Requirement already satisfied: rfc3986<2,>=1.3 in /usr/local/lib/python3.6/dist-packages (from httpx==0.13.3->googletrans==4.0.0-rc1) (1.4.0)
Requirement already satisfied: h2==3.* in /usr/local/lib/python3.6/dist-packages (from httpcore==0.9.*->httpx==0.13.3->googletrans==4.0.0-rc1) (3.2.0)
Requirement already satisfied: h11<0.10,>=0.8 in /usr/local/lib/python3.6/dist-packages (from httpcore==0.9.*->httpx==0.13.3->googletrans==4.0.0-rc1) (0.9.0)
Requirement already satisfied: contextvars>=2.1; python_version < "3.7" in /usr/local/lib/python3.6/dist-packages (from sniffio->httpx==0.13.3->googletrans==4.0.0-rc1) (2.4)
Requirement already satisfied: hyperframe<6,>=5.2.0 in /usr/local/lib/python3.6/dist-packages (from h2==3.*->httpcore==0.9.*->httpx==0.13.3->googletrans==4.0.0-rc1) (5.2.0)
Requirement already satisfied: hpack<4,>=3.0 in /usr/local/lib/python3.6/dist-packages (from h2==3.*->httpcore==0.9.*->httpx==0.13.3->googletrans==4.0.0-rc1) (3.0.0)
Requirement already satisfied: immutables>=0.9 in /usr/local/lib/python3.6/dist-packages (from contextvars>=2.1; python_version < "3.7"->sniffio->httpx==0.13.3->googletrans==4.0.0-rc1) (0.14)
Requirement already satisfied: langdetect in /usr/local/lib/python3.6/dist-packages (1.0.8)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from langdetect) (1.15.0)
Requirement already satisfied: ftfy in /usr/local/lib/python3.6/dist-packages (5.8)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from ftfy) (0.2.5)
Requirement already satisfied: rpy2 in /usr/local/lib/python3.6/dist-packages (3.2.7)
Requirement already satisfied: cffi>=1.13.1 in /usr/local/lib/python3.6/dist-packages (from rpy2) (1.14.4)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.6/dist-packages (from rpy2) (2.11.2)
Requirement already satisfied: simplegeneric in /usr/local/lib/python3.6/dist-packages (from rpy2) (0.8.1)
Requirement already satisfied: pytz in /usr/local/lib/python3.6/dist-packages (from rpy2) (2018.9)
Requirement already satisfied: pytest in /usr/local/lib/python3.6/dist-packages (from rpy2) (3.6.4)
Requirement already satisfied: tzlocal in /usr/local/lib/python3.6/dist-packages (from rpy2) (1.5.1)
Requirement already satisfied: pycparser in /usr/local/lib/python3.6/dist-packages (from cffi>=1.13.1->rpy2) (2.20)
Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.6/dist-packages (from jinja2->rpy2) (1.1.1)
Collecting pluggy<0.8,>=0.5
Using cached https://files.pythonhosted.org/packages/f5/f1/5a93c118663896d83f7bcbfb7f657ce1d0c0d617e6b4a443a53abcc658ca/pluggy-0.7.1-py2.py3-none-any.whl
Requirement already satisfied: atomicwrites>=1.0 in /usr/local/lib/python3.6/dist-packages (from pytest->rpy2) (1.4.0)
Requirement already satisfied: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from pytest->rpy2) (1.15.0)
Requirement already satisfied: py>=1.5.0 in /usr/local/lib/python3.6/dist-packages (from pytest->rpy2) (1.9.0)
Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.6/dist-packages (from pytest->rpy2) (20.3.0)
Requirement already satisfied: more-itertools>=4.0.0 in /usr/local/lib/python3.6/dist-packages (from pytest->rpy2) (8.6.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from pytest->rpy2) (50.3.2)
ERROR: tox 3.20.1 has requirement pluggy>=0.12.0, but you'll have pluggy 0.7.1 which is incompatible.
ERROR: datascience 0.10.6 has requirement folium==0.2.1, but you'll have folium 0.8.3 which is incompatible.
Installing collected packages: pluggy
Found existing installation: pluggy 0.13.1
Uninstalling pluggy-0.13.1:
Successfully uninstalled pluggy-0.13.1
Successfully installed pluggy-0.7.1
###Markdown
Importing libraries
###Code
from google.colab import drive
import gc
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import LSTM, Embedding, Dense, TimeDistributed, Dropout, Bidirectional, Input, Flatten, GlobalMaxPool1D, SpatialDropout1D
from keras.preprocessing import sequence
from keras.callbacks import EarlyStopping
from gensim.test.utils import datapath, get_tmpfile
from gensim.models import KeyedVectors # Gensim contains word2vec models and processing tools
from gensim.scripts.glove2word2vec import glove2word2vec
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import binarize
from nltk import word_tokenize
from nltk.corpus import wordnet
import os
import nltk
import string
import re
from collections import Counter
from nltk.corpus import stopwords
from translate import Translator
from langdetect import detect
from langdetect import detect_langs
from langdetect import DetectorFactory
DetectorFactory.seed = 0
import random
random.seed(0)
import warnings
warnings.filterwarnings("ignore")
%load_ext rpy2.ipython
import random as rnd
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from IPython.display import display
from textblob import Word
from ftfy import fix_text
tf.__version__
#from googletrans import Translator
#translator = Translator(service_urls=['translate.googleapis.com'])
###Output
_____no_output_____
###Markdown
Downloading NLTK data
###Code
nltk.download('stopwords')
nltk.download('wordnet')
nltk.download('punkt')
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
Mount your Google Drive
###Code
#### mounting google drive ####
drive.mount("/content/drive/")
folder_path = ("/content/drive/MyDrive/Capstone Project - Ticket Routing NLP")
###Output
Drive already mounted at /content/drive/; to attempt to forcibly remount, call drive.mount("/content/drive/", force_remount=True).
###Markdown
Loading data and creating a pickle function
###Code
ticket_data = ""
import pickle
pickle_flag = False
## Get Pickle Data
def get_pickle_data(filename):
pickle_data = open(folder_path + "/" + filename,'rb')
return pickle.load(pickle_data)
## Dump Pickle Data
def pickle_dump(data_to_dump, filename):
filehandler = open((folder_path+ "/" + filename),"wb")
pickle.dump(data_to_dump,filehandler)
if not os.path.exists(folder_path + "/input_data.pickle"):
ticket_data = pd.read_excel(folder_path + "/input_data.xlsx")
print("picking from excel")
else:
ticket_data = get_pickle_data("input_data.pickle")
print("picking from pickle")
print(len(ticket_data))
pickle_dump(ticket_data, "input_data.pickle")
###Output
picking from pickle
8500
###Markdown
Data Analysis Begins
###Code
ticket_data.info()
ticket_data.head()
unique_callers = ticket_data['Caller'].unique()
unique_callers.shape
Func_group = ticket_data['Assignment group'].unique()
Func_group.shape
TargetGroupCnt=ticket_data['Assignment group'].value_counts()
TargetGroupCnt.describe()
ticket_data.Caller.value_counts()
sns.distplot(ticket_data['Assignment group'].value_counts())
sns.distplot(ticket_data['Caller'].value_counts())
sns.boxplot(ticket_data['Assignment group'].value_counts())
sns.countplot(y="Assignment group", data=ticket_data, order=ticket_data['Assignment group'].value_counts().index )
plt.style.use('ggplot')
%matplotlib inline
descending_order = ticket_data['Assignment group'].value_counts().sort_values(ascending=False).index
plt.subplots(figsize=(22,5))
#added code for x label rotate
ax=sns.countplot(x='Assignment group', data=ticket_data, color='royalblue',order=descending_order)
ax.set_xticklabels(ax.get_xticklabels(), rotation=45, ha="right")
plt.tight_layout()
plt.show()
ticket_data.isnull().values.any()
sns.heatmap(ticket_data.isnull(), yticklabels=False, cmap="Wistia")
ticket_data['Description'].fillna(value=' ', inplace=True)
ticket_data['Short description'].fillna(value=' ', inplace=True)
ticket_data.isnull().values.any()
###Output
_____no_output_____
###Markdown
TODO: Early in the EDA we should remove records where both Short description and Description are blank. Although no such record exists in this dataset, the safeguard should still be added (a sketch follows below).
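A minimal sketch of that safeguard (assuming the column names used in this notebook; not executed here):

```python
# Hypothetical safeguard: drop rows where both text fields are empty or whitespace-only
both_blank = (
    ticket_data["Short description"].fillna("").str.strip().eq("")
    & ticket_data["Description"].fillna("").str.strip().eq("")
)
ticket_data = ticket_data[~both_blank].reset_index(drop=True)
```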
###Code
summary_data=ticket_data.pivot_table(columns = "Assignment group",aggfunc='count')
summary_data
ticket_data['Assignment group'].unique()
len(ticket_data['Assignment group'].unique())
df_assg = ticket_data['Assignment group'].value_counts().reset_index()
df_assg['percentage'] = (df_assg['Assignment group']/df_assg['Assignment group'].sum())*100
df_assg.head()
sns.set(style="whitegrid")
plt.figure(figsize=(20,5))
ax = sns.countplot(x="Assignment group", data=ticket_data, order=ticket_data["Assignment group"].value_counts().index)
ax.set_xticklabels(ax.get_xticklabels(), rotation=90)
for p in ax.patches:
ax.annotate(str(format(p.get_height()/len(ticket_data.index)*100, '.2f')+"%"), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'bottom', rotation=90, xytext = (0, 10), textcoords = 'offset points')
###Output
_____no_output_____
###Markdown
Top 20 Assignment groups with highest number of tickets
###Code
df_top_assg = ticket_data['Assignment group'].value_counts().nlargest(20).reset_index()
df_top_assg
plt.figure(figsize=(12,6))
bars = plt.bar(df_top_assg['index'],df_top_assg['Assignment group'])
plt.title('Top 20 Assignment groups with highest number of Tickets')
plt.xlabel('Assignment Group')
plt.xticks(rotation=90)
plt.ylabel('Number of Tickets')
for bar in bars:
yval = bar.get_height()
plt.text(bar.get_x(), yval + .005, yval)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Bottom 20 assignment groups and the percentage distribution of groups by ticket-count range
###Code
df_bottom_assg = ticket_data['Assignment group'].value_counts().nsmallest(20).reset_index()
df_bottom_assg
plt.figure(figsize=(12,6))
bars = plt.bar(df_bottom_assg['index'],df_bottom_assg['Assignment group'])
plt.title('Bottom 20 Assignment groups with small number of Tickets')
plt.xlabel('Assignment Group')
plt.xticks(rotation=90)
plt.ylabel('Number of Tickets')
for bar in bars:
yval = bar.get_height()
plt.text(bar.get_x(), yval + .005, yval)
plt.tight_layout()
plt.show()
df_tickets = pd.DataFrame(columns=['Description','Ticket Count'])
one_ticket = {'Description':'1','Ticket Count':len(df_assg[df_assg['Assignment group'] < 2])}
_2_5_ticket = {'Description':'2-5',
'Ticket Count':len(df_assg[(df_assg['Assignment group'] > 1)& (df_assg['Assignment group'] < 6) ])}
_10_ticket = {'Description':' 6-10',
'Ticket Count':len(df_assg[(df_assg['Assignment group'] > 5)& (df_assg['Assignment group'] < 11)])}
_10_20_ticket = {'Description':' 11-20',
'Ticket Count':len(df_assg[(df_assg['Assignment group'] > 10)& (df_assg['Assignment group'] < 21)])}
_20_50_ticket = {'Description':' 21-50',
'Ticket Count':len(df_assg[(df_assg['Assignment group'] > 20)& (df_assg['Assignment group'] < 51)])}
_51_100_ticket = {'Description':' 51-100',
'Ticket Count':len(df_assg[(df_assg['Assignment group'] > 50)& (df_assg['Assignment group'] < 101)])}
_100_ticket = {'Description':' >100',
'Ticket Count':len(df_assg[(df_assg['Assignment group'] > 100)])}
#append row to the dataframe
df_tickets = df_tickets.append([one_ticket,_2_5_ticket,_10_ticket,
_10_20_ticket,_20_50_ticket,_51_100_ticket,_100_ticket], ignore_index=True)
df_tickets
plt.figure(figsize=(10, 8))
plt.pie(df_tickets['Ticket Count'],labels=df_tickets['Description'],autopct='%1.1f%%', startangle=15, shadow = True);
plt.title('Assignment Groups Distribution')
plt.axis('equal')
ticket_data[ticket_data['Short description'].isnull()]
ticket_data[ticket_data['Description'].isnull()]
!pip install goslate
from goslate import Goslate
# Define and construct the service urls
svc_domains = ['.com','.com.au','.com.ar','.co.kr','.co.in','.co.jp','.at','.de','.ru','.ch','.fr','.es','.ae']
svc_urls = ['http://translate.google' + domain for domain in svc_domains]
print('Original text: \033[1m%s\033[0m\nFixed text: \033[1m%s\033[0m' % ('电脑开机开不出来', 'Boot the computer does not really come out'))
from googletrans import Translator
translator = Translator(service_urls=[
'translate.google.com'
])
def translate_sentence(para):
    if para == "":
        return para
    lang = detect(para)
    if lang != "en":
        # googletrans returns a Translated object; keep only the translated text
        return translator.translate(para, src=lang, dest="en").text
    return para
translated_short_desc = ticket_data["Short description"].apply(translate_sentence)
translated_long_desc = ticket_data["Description"].apply(translate_sentence)
# List of column data to consider for translation
trans_cols = ['Short description','Description']
# Add a new column to store the detected language
ticket_data.insert(loc=2, column='Langu', value=np.nan, allow_duplicates=True)
for idx in range(ticket_data.shape[0]):
    # Instantiate Goslate class in each iteration
    gs = Goslate(service_urls=svc_urls)
    lang = gs.detect(' '.join(ticket_data.loc[idx, trans_cols].tolist()))
    row_iter = gs.translate(ticket_data.loc[idx, trans_cols].tolist(),
                            target_language='en',
                            source_language='auto')
    ticket_data.loc[idx, trans_cols] = list(row_iter)
    ticket_data.loc[idx, 'Langu'] = lang  # store the detected language in the 'Langu' column
ticket_data.head()
###Output
_____no_output_____
###Markdown
Begin Data Cleaning Replacing NaN
###Code
#Replace NaN values in Short Description and Description columns
ticket_data['Short description'] = ticket_data['Short description'].replace(np.nan, '', regex=True)
ticket_data['Description'] = ticket_data['Description'].replace(np.nan, '', regex=True)
###Output
_____no_output_____
###Markdown
Decoding the data
###Code
#Lets encode the string, to make it easier to be passed to language detection api.
def fn_decode_to_ascii(df):
text = df.encode().decode('utf-8').encode('ascii', 'ignore')
return text.decode("utf-8")
###Output
_____no_output_____
###Markdown
Prepping potential boilerplate text for removal
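The contraction map defined in the next cell is not applied anywhere in the code shown here; a minimal sketch of how it could be used to expand contractions before cleaning (assuming the `contractions` dict below; `expand_contractions` is a hypothetical helper):

```python
import re

def expand_contractions(text, contraction_map):
    # Build one regex over all keys, longest first so "can't've" matches before "can't"
    pattern = re.compile(
        "|".join(re.escape(key) for key in sorted(contraction_map, key=len, reverse=True))
    )
    return pattern.sub(lambda m: contraction_map[m.group(0)], text)

# Example (hypothetical): expand_contractions("i can't open the vpn", contractions)
# -> "i cannot open the vpn"
```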
###Code
# Map of common English contractions to their expanded forms, used for text normalization
contractions = {
"ain't": "am not / are not / is not / has not / have not",
"aren't": "are not / am not",
"can't": "cannot",
"can't've": "cannot have",
"'cause": "because",
"could've": "could have",
"couldn't": "could not",
"couldn't've": "could not have",
"didn't": "did not",
"doesn't": "does not",
"don't": "do not",
"hadn't": "had not",
"hadn't've": "had not have",
"hasn't": "has not",
"haven't": "have not",
"he'd": "he had / he would",
"he'd've": "he would have",
"he'll": "he shall / he will",
"he'll've": "he shall have / he will have",
"he's": "he has / he is",
"how'd": "how did",
"how'd'y": "how do you",
"how'll": "how will",
"how's": "how has / how is / how does",
"I'd": "I had / I would",
"I'd've": "I would have",
"I'll": "I shall / I will",
"I'll've": "I shall have / I will have",
"I'm": "I am",
"I've": "I have",
"isn't": "is not",
"it'd": "it had / it would",
"it'd've": "it would have",
"it'll": "it shall / it will",
"it'll've": "it shall have / it will have",
"it's": "it has / it is",
"let's": "let us",
"ma'am": "madam",
"mayn't": "may not",
"might've": "might have",
"mightn't": "might not",
"mightn't've": "might not have",
"must've": "must have",
"mustn't": "must not",
"mustn't've": "must not have",
"needn't": "need not",
"needn't've": "need not have",
"o'clock": "of the clock",
"oughtn't": "ought not",
"oughtn't've": "ought not have",
"shan't": "shall not",
"sha'n't": "shall not",
"shan't've": "shall not have",
"she'd": "she had / she would",
"she'd've": "she would have",
"she'll": "she shall / she will",
"she'll've": "she shall have / she will have",
"she's": "she has / she is",
"should've": "should have",
"shouldn't": "should not",
"shouldn't've": "should not have",
"so've": "so have",
"so's": "so as / so is",
"that'd": "that would / that had",
"that'd've": "that would have",
"that's": "that has / that is",
"there'd": "there had / there would",
"there'd've": "there would have",
"there's": "there has / there is",
"they'd": "they had / they would",
"they'd've": "they would have",
"they'll": "they shall / they will",
"they'll've": "they shall have / they will have",
"they're": "they are",
"they've": "they have",
"to've": "to have",
"wasn't": "was not",
"we'd": "we had / we would",
"we'd've": "we would have",
"we'll": "we will",
"we'll've": "we will have",
"we're": "we are",
"we've": "we have",
"weren't": "were not",
"what'll": "what shall / what will",
"what'll've": "what shall have / what will have",
"what're": "what are",
"what's": "what has / what is",
"what've": "what have",
"when's": "when has / when is",
"when've": "when have",
"where'd": "where did",
"where's": "where has / where is",
"where've": "where have",
"who'll": "who shall / who will",
"who'll've": "who shall have / who will have",
"who's": "who has / who is",
"who've": "who have",
"why's": "why has / why is",
"why've": "why have",
"will've": "will have",
"won't": "will not",
"won't've": "will not have",
"would've": "would have",
"wouldn't": "would not",
"wouldn't've": "would not have",
"y'all": "you all",
"y'all'd": "you all would",
"y'all'd've": "you all would have",
"y'all're": "you all are",
"y'all've": "you all have",
"you'd": "you had / you would",
"you'd've": "you would have",
"you'll": "you shall / you will",
"you'll've": "you shall have / you will have",
"you're": "you are",
"you've": "you have"
}
###Output
_____no_output_____
###Markdown
Defining Translation functions
###Code
##************************************************************************************************************
### Getting data from Translation object
def get_text_from_translation(translation):
print (translation.text)
translated_text = translation.text.split("text")[1]
translated_text = translated_text.split("Pronunciation")[0]
translated_text = translated_text.split("=")[1]
translated_text = translated_text.strip()
translated_text = translated_text.replace(translated_text[len(translated_text)-1],"")
return translated_text
##Translating word by word using Google and MyMemory
##TODO: Change the detect function to translator.detect
def translate_word_by_word(sentence):
words = sentence.split(" ")
new_words = []
for word in words:
lang = detect(word)
translator = Translator(provider='mymemory', to_lang="en", from_lang = lang, secret_access_key=None)
new_word = translator.translate(word)
new_words.append(new_word)
return " ".join(new_words)
##Translating entire sentences using mymemory (pypi)
def translate_sentence(sentence):
lang = detect(sentence)
translator = Translator(provider='mymemory', to_lang="en", from_lang = lang, secret_access_key=None)
return translator.translate(sentence)
##Translating entire column using the above functions
def translate_column(columnvalue):
try:
sentence_translated = translate_sentence(columnvalue)
return sentence_translated
except:
return columnvalue
#### NOTE: THESE FUNCTIONS ABOVE ARE NOT IN USE. WE ARE LEVERAGING THE ONES BELOW
##************************************************************************************************************
## Detecting language using google
def detect_lang(desc):
    try:
        if desc != "":
            return translator.detect(desc).lang  # return the language code (e.g. 'en', 'de')
        else:
            return "en"
    except:
        return "en"
# Function to translate the text to english.
def fn_translate(sentence):
    try:
        lang = translator.detect(sentence).lang
        if lang == "en":
            return sentence
        else:
            return translator.translate(sentence, src=lang, dest="en").text
    except:
        return sentence
#ticket_data['Description'] = ticket_data['Description'].apply(translate_column)
#ticket_data['Short description'] = ticket_data['Short description'].apply(translate_column)
###Output
_____no_output_____
###Markdown
Function for cleaning data
###Code
max_features = 10000
MAX_LENGTH = 300
embedding_size = 200
def clean_text(text):
if text != "":
'''Make text lowercase, remove text in square brackets,remove links,remove punctuation
and remove words containing numbers.'''
text=text.replace(('first name: ').lower(),'firstname')
text=text.replace(('last name: ').lower(),'lastname')
text=text.replace(('received from:').lower(),'')
text=text.replace('email:','')
text=text.replace('email address:','')
index1=text.find('from:')
index2=text.find('\nsddubject:')
text=text.replace(text[index1:index2],'')
index3=text.find('[cid:image')
index4=text.find(']')
text=text.replace(text[index3:index4],'')
text=text.replace('subject:','')
text=text.replace('received from:','')
text=text.replace('this message was sent from an unmonitored email address', '')
text=text.replace('please do not reply to this message', '')
text=text.replace('[email protected]','MonitoringTool')
text=text.replace('select the following link to view the disclaimer in an alternate language','')
text=text.replace('description problem', '')
text=text.replace('steps taken far', '')
text=text.replace('customer job title', '')
text=text.replace('sales engineer contact', '')
text=text.replace('description of problem:', '')
text=text.replace('steps taken so far', '')
text=text.replace('please do the needful', '')
text=text.replace('please note that ', '')
text=text.replace('please find below', '')
text=text.replace('date and time', '')
text=text.replace('kindly refer mail', '')
text=text.replace('name:', '')
text=text.replace('language:', '')
text=text.replace('customer number:', '')
text=text.replace('telephone:', '')
text=text.replace('summary:', '')
text=text.replace('sincerely', '')
text=text.replace('company inc', '')
text=text.replace('importance:', '')
text=text.replace('gmail.com', '')
text=text.replace('company.com', '')
text=text.replace('microsoftonline.com', '')
text=text.replace('company.onmicrosoft.com', '')
text=text.replace('hello', '')
text=text.replace('hallo', '')
text=text.replace('hi it team', '')
text=text.replace('hi team', '')
text=text.replace('hi', '')
text=text.replace('best', '')
text=text.replace('kind', '')
text=text.replace('regards', '')
text=text.replace('good morning', '')
text=text.replace('please', '')
text=text.replace('regards', '')
text = re.sub(r'\<a href', ' ', text)
text = re.sub(r'&', '', text)
text = re.sub(r'<br />', ' ', text)
text = re.sub(r'\S+@\S+', '', text)
text = re.sub(r'\d+','' ,text)
text = re.sub(r'#','', text)
text = re.sub(r'&;?', 'and',text)
text = re.sub(r'\&\w*;', '', text)
text = re.sub(r'https?:\/\/.*\/\w*', '', text)
custom_punctuation='!"#$%&\'()*+,-./:;<=>?@[\\]^`{|}~'
text = re.sub(r'\w*\d\w*', '', text)
text = re.sub(r'\[.*?\]', '', text)
text = re.sub(r'https?://\S+|www\.\S+', '', text)
text = re.sub(r'<.*?>+', '', text)
text= ''.join(c for c in text if c <= '\uFFFF')
text = text.strip()
text = re.sub(r'[%s]' % re.escape(string.punctuation), '', text)
text = ' '.join(re.sub("[^\u0030-\u0039\u0041-\u005a\u0061-\u007a]", " ", text).split())
text = re.sub(r'\r\n', '', text)
text = re.sub(r'\n', '', text)
text = re.sub(r'\S+@\S+', '', text)
text = text.lower()
return text
###Output
_____no_output_____
###Markdown
Data cleaning, stop word removal and lemmatization
###Code
ticket_data['Description'] = ticket_data['Description'].apply(fn_decode_to_ascii)
ticket_data['Short description'] = ticket_data['Short description'].apply(fn_decode_to_ascii)
ftfy_ShortDescription = []
for Short_Description in ticket_data['Short description']:
ftfy_ShortDescription.append(fix_text(Short_Description))
ticket_data['Short description']= ftfy_ShortDescription
ftfy_Description = []
for Description in ticket_data['Description']:
ftfy_Description.append(fix_text(Description))
ticket_data['Description']= ftfy_Description
ticket_data['lang_desc'] = ticket_data['Description'].apply(detect_lang)
ticket_data['lang_short_desc'] = ticket_data['Short description'].apply(detect_lang)
ticket_data['Description'] = ticket_data.apply(lambda x: fn_translate(x['Description']), axis=1)
ticket_data['Short description'] = ticket_data.apply(lambda x: fn_translate(x['Short description']), axis=1)
ticket_data["Description"] = ticket_data["Description"].apply(clean_text)
ticket_data["Short description"] = ticket_data["Short description"].apply(clean_text)
print(len(ticket_data))
ticket_pivot = ticket_data.pivot_table(aggfunc="count",columns ="Assignment group", values="Caller")
ticket_nums = np.array(ticket_pivot)
ticket_cols = ticket_pivot.columns
ticket_pivot_df = pd.DataFrame(data = ticket_nums, columns = ticket_cols, index = ["Number of tickets"])
ticket_pivot_df = ticket_pivot_df.transpose()
ticket_pivot_df.reset_index(inplace=True)
def filter_group_augment (groupname):
if groupname in list(ticket_pivot_df[(ticket_pivot_df["Number of tickets"]>20) & (ticket_pivot_df["Number of tickets"]<1000)]["Assignment group"]):
return True
else:
return False
def filter_group_large (groupname):
if groupname in list(ticket_pivot_df[(ticket_pivot_df["Number of tickets"]>=1000)]["Assignment group"]):
return True
else:
return False
def filter_group_small (groupname):
if groupname in list(ticket_pivot_df[(ticket_pivot_df["Number of tickets"]<=20)]["Assignment group"]):
return True
else:
return False
def filter_group_AI (groupname):
if groupname in list(ticket_pivot_df[(ticket_pivot_df["Number of tickets"]>20)]["Assignment group"]):
return True
else:
return False
data_to_be_augmented = ticket_data[ticket_data["Assignment group"].apply(filter_group_augment)]
data_large_tickets = ticket_data[ticket_data["Assignment group"].apply(filter_group_large)]
data_small_tickets = ticket_data[ticket_data["Assignment group"].apply(filter_group_small)]
data_to_be_augmented.shape
data_large_tickets.shape
data_small_tickets.shape
def replace_with_synonym(sentence):
if sentence != "":
synonym = ""
words = sentence.split(" ")
repl_num = 0
syn_idx = 0
for i in range(5):
if len(words) > 1:
repl_num = rnd.randint(0,len(words)-1)
elif len(words) == 1:
repl_num = 0
else:
return sentence
syns = wordnet.synsets(words[repl_num])
if len(syns) > 1:
syn_idx = rnd.randint(0,len(syns)-1)
synonym = syns[syn_idx].lemmas()[0].name()
elif len(syns) == 1:
syn_idx = 0
synonym = syns[syn_idx].lemmas()[0].name()
else:
synonym = ""
if synonym != "":
sentence = sentence.replace(words[repl_num], synonym)
return sentence
def drop_words(sentence):
if sentence != "":
words = sentence.split(" ")
repl_num = 0
for i in range(5):
if len(words) > 1:
repl_num = rnd.randint(0,len(words)-1)
elif len(words) == 1:
repl_num = 0
else:
return sentence
words[repl_num] = ""
sentence = " ".join(words)
return sentence
def scatter_sentences(para):
sentences = para.split(".")
i=0
for sentence in sentences:
sentences[i] = sentence.strip()
i = i+1
total = len(sentences)
i = 0
new_order = []
while (len(new_order) < total):
num = rnd.randint(0,total-1)
if num not in new_order:
new_order.append(num)
i = i + 1
new_sentences = [sentences[i] for i in new_order]
total_sentences = len(new_sentences)
return_para = ""
for i in range(total_sentences):
if not new_sentences[i] == "":
if return_para == "":
return_para = new_sentences[i]
else:
return_para = return_para + "." + new_sentences[i]
return return_para
ticket_pivot_df_augment = ticket_pivot_df[(ticket_pivot_df["Number of tickets"]>20) & (ticket_pivot_df["Number of tickets"]<1000)]
ticket_pivot_df_augment = ticket_pivot_df_augment.reset_index()
ticket_pivot_df_augment.drop(["index"], axis = 1, inplace = True)
def augment_data(df):
total_grps = len(ticket_pivot_df_augment)
ticket_data_updated = df.copy()
list_to_add = []
list_df = []
col_names = ticket_data_updated.columns.values.tolist()
for i in range(total_grps):
grpname = ticket_pivot_df_augment["Assignment group"][i]
group_df = ticket_data_updated[ticket_data_updated["Assignment group"]==grpname]
list_df = group_df.values.tolist()
group_df = None
records_added = ticket_pivot_df_augment["Number of tickets"][i]
while records_added < 330:
list_df = list_df + list_df
records_added = records_added + records_added
list_to_add = list_to_add + list_df
list_df = []
gc.collect()
#ticket_data_updated.info()
#ticket_data_updated = pd.DataFrame(data = final_df, columns= col_names)
df_to_add = pd.DataFrame(data = list_to_add, columns = ticket_data.columns)
df_to_add["Description"] = df_to_add["Description"].apply(replace_with_synonym)
df_to_add["Short description"] = df_to_add["Short description"].apply(replace_with_synonym)
ticket_data_updated = pd.concat([ticket_data_updated, df_to_add],axis = 0)
print("first", gc.collect())
df_to_add = pd.DataFrame(data = list_to_add, columns = ticket_data.columns)
df_to_add["Description"] = df_to_add["Description"].apply(scatter_sentences)
df_to_add["Short description"] = df_to_add["Short description"].apply(scatter_sentences)
ticket_data_updated = pd.concat([ticket_data_updated, df_to_add],axis = 0)
print("second:", gc.collect())
df_to_add = pd.DataFrame(data = list_to_add, columns = ticket_data.columns)
df_to_add["Description"] = df_to_add["Description"].apply(drop_words)
df_to_add["Short description"] = df_to_add["Short description"].apply(drop_words)
ticket_data_updated = pd.concat([ticket_data_updated, df_to_add],axis = 0)
print("third", gc.collect())
return ticket_data_updated
def check_small_group(grp):
if grp in list(ticket_pivot_df[ticket_pivot_df["Number of tickets"] <= 20]["Assignment group"]):
return True
return False
### Augment the data here
augmented_data = ""
if os.path.exists(folder_path + "/" + "augmented_data.pickle"):
augmented_data = get_pickle_data("augmented_data.pickle")
print("picking from pickle")
else:
augmented_data = augment_data(data_to_be_augmented)
print("augmented")
augmented_data = pd.concat([augmented_data,data_large_tickets], axis = 0)
augmented_data = pd.concat([augmented_data,data_small_tickets], axis = 0)
augmented_data = augmented_data.reset_index()
augmented_data.drop(["index"], axis = 1, inplace = True)
pickle_dump(augmented_data, "augmented_data.pickle")
augmented_data.head()
ticket_pivot = augmented_data.pivot_table(aggfunc="count",columns ="Assignment group", values="Caller")
ticket_nums = np.array(ticket_pivot)
ticket_cols = ticket_pivot.columns
ticket_pivot_df = pd.DataFrame(data = ticket_nums, columns = ticket_cols, index = ["Number of tickets"])
ticket_pivot_df = ticket_pivot_df.transpose()
ticket_pivot_df.reset_index(inplace=True)
deep_learning_df = " "
if os.path.exists(folder_path + "/deep_learning_df.pickle"):
deep_learning_df = get_pickle_data("deep_learning_df.pickle")
else:
deep_learning_df = augmented_data[augmented_data["Assignment group"].apply(filter_group_AI)]
deep_learning_df = deep_learning_df.reset_index()
deep_learning_df.drop(["index"], axis = 1, inplace = True)
deep_learning_df["Description_New"] = deep_learning_df["Short description"] + deep_learning_df["Description"]
pickle_dump(deep_learning_df, "deep_learning_df.pickle")
deep_learning_df.shape
augmented_data.shape
stop = stopwords.words('english')
augmented_data["Description"] = augmented_data["Description"].apply(lambda x: " ".join(x for x in str(x).split() if x not in stop))
augmented_data["Short description"] = augmented_data["Short description"].apply(lambda x: " ".join(x for x in str(x).split() if x not in stop))
augmented_data['Short description']= augmented_data['Short description'].apply(lambda x: " ".join([Word(word).lemmatize() for word in str(x).split()]))
augmented_data['Description']= augmented_data['Description'].apply(lambda x: " ".join([Word(word).lemmatize() for word in str(x).split()]))
pickle_dump(augmented_data, "lemmatized_data.pickle")
augmented_data.head()
ticket_data["lang_desc"].value_counts()
ticket_data["lang_short_desc"].value_counts()
x = ticket_data["lang_desc"].value_counts()
x=x.sort_index()
plt.figure(figsize=(10,6))
ax= sns.barplot(x.index, x.values, alpha=0.8)
plt.title("Distribution of text by language")
plt.ylabel('number of records')
plt.xlabel('Language')
rects = ax.patches
labels = x.values
for rect, label in zip(rects, labels):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2, height + 5, label, ha='center', va='bottom')
plt.show();
x = ticket_data["lang_short_desc"].value_counts()
x=x.sort_index()
plt.figure(figsize=(10,6))
ax= sns.barplot(x.index, x.values, alpha=0.8)
plt.title("Distribution of text by language")
plt.ylabel('number of records')
plt.xlabel('Language')
rects = ax.patches
labels = x.values
for rect, label in zip(rects, labels):
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2, height + 5, label, ha='center', va='bottom')
plt.show();
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
augmented_data['Description_New'] = augmented_data['Short description'] + augmented_data['Description']
augmented_data['num_wds'] = augmented_data['Description_New'].apply(lambda x: len(x.split()))
augmented_data['num_wds'].mean()
print(augmented_data['num_wds'].max())
print(augmented_data['num_wds'].min())
len(augmented_data[augmented_data['num_wds']==0])
augmented_data['uniq_wds'] = augmented_data['Description_New'].str.split().apply(lambda x: len(set(x)))
augmented_data['uniq_wds'].head()
assign_grps = augmented_data.groupby('Assignment group')
ax=assign_grps['num_wds'].aggregate(np.mean).plot(kind='bar', fontsize=14, figsize=(20,10))
ax.set_title('Mean Number of Words in tickets per Assignment Group\n', fontsize=20)
ax.set_ylabel('Mean Number of Words', fontsize=18)
ax.set_xlabel('Assignment Group', fontsize=18);
ax=assign_grps['uniq_wds'].aggregate(np.mean).plot(kind='bar', fontsize=14, figsize=(20,10))
ax.set_title('Mean Number of Unique Words per tickets in Assignment Group\n', fontsize=20)
ax.set_ylabel('Mean Number of Unique Words', fontsize=18)
ax.set_xlabel('Assignment Group', fontsize=18);
word_counts = Counter()
for i, row in augmented_data.iterrows():
word_counts.update(row['Description_New'].split())
word_counts.most_common(20)
tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
augmented_data['token_desc'] = augmented_data['Description_New'].apply(lambda x: tokenizer.tokenize(x))
augmented_data['token_desc'].head()
augmented_data.head()
###Output
_____no_output_____
###Markdown
Splitting the data into rule based and Machine learning based processing
###Code
rule_based_df = " "
machine_learning_df = " "
if os.path.exists(folder_path + "/rule_based_df.pickle"):
rule_based_df = get_pickle_data("rule_based_df.pickle")
else:
rule_based_df = augmented_data[augmented_data["Assignment group"].apply(filter_group_small)]
if os.path.exists(folder_path + "/machine_learning_df.pickle"):
machine_learning_df = get_pickle_data("machine_learning_df.pickle")
else:
machine_learning_df = augmented_data[augmented_data["Assignment group"].apply(filter_group_AI)]
machine_learning_df.shape
deep_learning_df.shape
rule_based_df.shape
augmented_data.shape
machine_learning_df = machine_learning_df.reset_index()
machine_learning_df.drop(["index"], axis = 1, inplace = True)
rule_based_df = rule_based_df.reset_index()
rule_based_df.drop(["index"], axis = 1, inplace = True)
pickle_dump(rule_based_df, "rule_based_df.pickle")
pickle_dump(machine_learning_df, "machine_learning_df.pickle")
###Output
_____no_output_____
###Markdown
Vocabularizing the Deep Learning Df
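The `vocabularize` helper in the next cell fits a Keras `Tokenizer` locally and returns only the padded sequences, so the fitted word-to-index mapping is lost. A minimal variant sketch (assuming the same `Tokenizer`/`pad_sequences` imports and the `max_features`/`MAX_LENGTH` values defined above) that also returns the tokenizer so new text can be encoded consistently at inference time:

```python
def vocabularize_with_tokenizer(texts, max_features, max_length):
    # Fit the tokenizer and keep it, so the same word-to-index mapping can be reused later
    tokenizer = Tokenizer(num_words=max_features, split=' ')
    tokenizer.fit_on_texts(texts)
    sequences = tokenizer.texts_to_sequences(texts)
    return pad_sequences(sequences, max_length), tokenizer

# Example (hypothetical usage):
# x, fitted_tokenizer = vocabularize_with_tokenizer(deep_learning_df["Description_New"], max_features, MAX_LENGTH)
```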
###Code
def vocabularize(text, max_features):
x = text
vocabSize = max_features
tokenizer = Tokenizer(num_words=vocabSize, split=' ')
tokenizer.fit_on_texts(x)
x = tokenizer.texts_to_sequences(x)
x = pad_sequences(x,MAX_LENGTH)
return x
x = vocabularize(deep_learning_df["Description_New"], max_features)
x
# build the vocabulary in one pass
from string import punctuation
from nltk import word_tokenize
stop_words = []
vocabulary = set()
def tokenize(text):
words = word_tokenize(text)
words = [w.lower() for w in words]
return [w for w in words if w not in stop_words and not w.isdigit()]
if os.path.exists(folder_path + "/vocabulary.pickle"):
vocabulary = get_pickle_data("vocabulary.pickle")
else:
stop_words = stopwords.words('english') + list(punctuation)
counter = len(machine_learning_df["token_desc"])
for i in range(counter):
words = tokenize(str(machine_learning_df["Description_New"][i]))
vocabulary.update(words)
vocabulary = list(vocabulary)
VOCABULARY_SIZE = len(vocabulary)
print(VOCABULARY_SIZE)
pickle_dump(vocabulary, "vocabulary.pickle")
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(stop_words=stop_words, tokenizer=tokenize, max_features=30, analyzer = 'word', ngram_range=(1, 2))
inc_tfidf = tfidf.fit_transform(machine_learning_df['Description_New'])
inc_tfidf.shape
# create a dictionary mapping the tokens to their tfidf values
'''tfidf = dict(zip(tfidf.get_feature_names(), tfidf.idf_))
tfidf = pd.DataFrame(columns=['tfidf']).from_dict(
dict(tfidf), orient='index')
tfidf.columns = ['tfidf']
###Output
_____no_output_____
###Markdown
Top 20 Words with highest tfidf score
###Code
'''tfidf.sort_values(by=['tfidf'], ascending=False).head(20)
###Output
_____no_output_____
###Markdown
Bottom 10 words with lowest tfidf score
###Code
''' tfidf.sort_values(by=['tfidf'], ascending=True).head(10)
###Output
_____no_output_____
###Markdown
Dimensionality Reduction
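The cells below are commented out; a minimal runnable sketch of the SVD step (assuming the `inc_tfidf` matrix computed above and scikit-learn available):

```python
from sklearn.decomposition import TruncatedSVD

svd = TruncatedSVD(n_components=10, random_state=42)
svd_tfidf = svd.fit_transform(inc_tfidf)  # dense array of shape (n_documents, 10)
```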
###Code
'''from sklearn.decomposition import TruncatedSVD
n_comp=10
svd = TruncatedSVD(n_components=n_comp, random_state=42)
svd_tfidf = svd.fit_transform(inc_tfidf)
'''from sklearn.manifold import TSNE
tsne_model = TSNE(n_components=2, verbose=1, random_state=42, n_iter=500)
tsne_tfidf = tsne_model.fit_transform(svd_tfidf)
'''from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
# create count vectorizer first
cvectorizer = CountVectorizer(min_df=4, max_features=4000, ngram_range=(1,2))
cvz = cvectorizer.fit_transform(machine_learning_df['Description_New'])
# generate topic models using Latent Dirichlet Allocation
lda_model = LatentDirichletAllocation(n_components=10, learning_method='online', max_iter=20, random_state=42)
X_topics = lda_model.fit_transform(cvz)
'''n_top_words = 10
topic_summaries = []
# get topics and topic terms
topic_word = lda_model.components_
vocab = cvectorizer.get_feature_names()
for i, topic_dist in enumerate(topic_word):
topic_words = np.array(vocab)[np.argsort(topic_dist)][:-(n_top_words+1):-1]
topic_summaries.append(' '.join(topic_words))
print('Topic {}: {}'.format(i, ' | '.join(topic_words)))
# collect the tfid matrix in numpy array
#array = inc_tfidf.todense()
array = inc_tfidf.todense()
# store the tf-idf array into pandas dataframe
df_inc = pd.DataFrame(array)
df_inc.head(10)
df_inc.shape
machine_learning_df.head()
machine_learning_df.head()
df_inc = df_inc.reset_index()
df_inc.drop(['index'],axis=1, inplace = True)
df_inc.head()
df_inc['num_wds']= machine_learning_df['num_wds']
df_inc['uniq_wds']= machine_learning_df['uniq_wds']
df_inc['Assignment group']= machine_learning_df['Assignment group']
df_inc.head()
features = df_inc.columns.tolist()
output = 'Assignment group'
# removing the output and the id from features
features.remove(output)
df_inc_sample = df_inc[df_inc['Assignment group'].map(df_inc['Assignment group'].value_counts()) > 100]
df_inc_sample['Assignment group'].value_counts()
def multiclass_logloss(actual, predicted, eps=1e-15):
"""Multi class version of Logarithmic Loss metric.
:param actual: Array containing the actual target classes
:param predicted: Matrix with class predictions, one probability per class
"""
# Convert 'actual' to a binary array if it's not already:
if len(actual.shape) == 1:
actual2 = np.zeros((actual.shape[0], predicted.shape[1]))
for i, val in enumerate(actual):
actual2[i, val] = 1
actual = actual2
clip = np.clip(predicted, eps, 1 - eps)
rows = actual.shape[0]
vsota = np.sum(actual * np.log(clip))
return -1.0 / rows * vsota
non_eng_text=machine_learning_df.loc[machine_learning_df['Caller']=="gusyjcer lvbxfimr"]
non_eng_text
###Output
_____no_output_____ |
Term2/Part1/2. SlidingControl.ipynb | ###Markdown
Sliding control$$\begin{cases}\hat{\textbf{J}}\;\dot{\textbf{w}} + \textbf{w}\times\hat{\textbf{J}}\;\textbf{w}\;=\;\textbf{M}_{ext}+\textbf{M}_{control}\\\dot{Q} = \frac{1}{2}Q\circ\textbf{w}\end{cases}\\(Rodrigues-Hamilton\;parameters)\\Q = Q_{ref}\circ {Q}_{error}\\\textbf{w} = \textbf{w}_{error} + \textbf{w}_{ref}\\We\;want\;\;\dot{\textbf{q}}_{error} + \lambda\:\dot{\textbf{w}}_{error} = -\hat{\textbf{C}}\;sign(\textbf{q}_{error}+ \lambda\:\textbf{w}_{error})\\Where\;\textbf{q}\;is\;vector\;part\;of\;Q\\\dot{Q}_{error}=\tilde{Q}_{ref}\circ\dot{Q} + \tilde{\dot{Q}}_{ref}\circ Q\\\dot{Q}=\frac{1}{2}Q\circ\textbf{w}\;\;\;\;\;\dot{Q}_{ref}=\frac{1}{2}Q_{ref}\circ\textbf{w}_{ref}\\\dot{\textbf{w}}_{error} = \dot{\textbf{w}} - \dot{\textbf{w}}_{ref}\\\textbf{M}_{control} = \textbf{w}\times\hat{\textbf{J}}\;\textbf{w} - \textbf{M}_{ext} - \frac{\hat{\textbf{J}}}{\lambda}(\dot{\textbf{q}}_{error}-\lambda\:\dot{\textbf{w}}_{ref}+\hat{\textbf{C}}\;sign(\textbf{q}_{error}+ \lambda\:\textbf{w}_{error}))\\$$
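A sketch of the step between the sliding condition and the control torque (just rearranging the equations above): substitute Euler's equation into the error dynamics,
$$\dot{\textbf{w}}_{error}=\dot{\textbf{w}}-\dot{\textbf{w}}_{ref}=\hat{\textbf{J}}^{-1}\left(\textbf{M}_{ext}+\textbf{M}_{control}-\textbf{w}\times\hat{\textbf{J}}\;\textbf{w}\right)-\dot{\textbf{w}}_{ref}$$
then plug this into $\dot{\textbf{q}}_{error}+\lambda\,\dot{\textbf{w}}_{error}=-\hat{\textbf{C}}\,sign(\textbf{q}_{error}+\lambda\,\textbf{w}_{error})$ and solve for $\textbf{M}_{control}$ to obtain the control torque above.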
###Code
# NOTE (assumption): the imports and the static_vars helper below were not present in the
# original cell but are required by the code in this notebook (they may have been defined
# in an earlier notebook of the same course).
import numpy as np
import quaternion  # numpy-quaternion package; registers the np.quaternion type used below
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d.axes3d as p3
import constants  # local module assumed to provide mmE, m0, muE, RE, max_to_zero


def static_vars(**kwargs):
    # Assumed helper: attach the given attributes to the decorated function (standard recipe)
    def decorate(func):
        for key, value in kwargs.items():
            setattr(func, key, value)
        return func
    return decorate


class Brain():
def __init__(self, time_discretization, deviations: np.ndarray = None):
self.time_disc = time_discretization
self.true_x = None
self.deviations = deviations
self._time_measured = -np.inf # -inf is to avoid first-iteration troubles
self._measured_y = None
self._measured_x = None
self._flag1 = False
self._flag2 = False
def set_true_x(self, t, x):
dt = t - self._time_measured
if dt >= self.time_disc:
self.true_x = np.copy(x)
self._time_measured = t
self._flag1 = True
self._flag2 = True
def _h(self, t, x):
if self.deviations:
noise = np.empty(x.size)
for i in range(noise.size):
noise[i] = np.random.normal(scale = self.deviations[i])
else:
noise = np.zeros(x.size)
y = x + noise
return y
def _g(self, t, y):
x = np.copy(y)
return x
def get_measured_values(self):
if self._flag1:
self._measured_y = self._h(self._time_measured, self.true_x)
self._flag1 = False
return np.copy(self._measured_y)
def get_measured_x(self):
# y = h(t, x) - measurements
# x = g(t, y) - phase calculation based on measurements
if self._flag2:
y = self.get_measured_values()
self._measured_x = self._g(self._time_measured, y)
self._flag2 = False
x = np.copy(self._measured_x)
return x
m04pi = constants.mmE * constants.m0 / (4 * np.pi)
T = 24 * 60 * 60
wE = 2 * np.pi / T # Earths angular speed
def RK4(f, x0x1, y0, step, save_steps :int = 1, step_process = lambda y: y):
# f MUST takes x and y as arguments: f(x, y)
# It solves equation y' = f(x, y), y(x0) = y0 (everything is a vector)
# from x0x1[0] to x0x1[1] on the grid with step step
# save_steps - INT >= 1; integrator will save results every save_steps steps (2 - every second step, 1 - every step, etc.)
# save_steps doesn't affect starting point and final point
# step_process - FUNCTION that is called after every step to somehow process results (normalization and so on)
# step_process - must take y argument (for single time step) as input and return the same shape numpy.ndarray
# by default it doesn't do anything
# returns array of (x, y) pairs
if not isinstance(save_steps, int):
raise TypeError("save_steps MUST be an integer")
elif save_steps <= 0:
raise ValueError("save_steps MUST be a natural number (1, 2, 3, ...)")
x0 = x0x1[0]
x1 = x0x1[1]
current_x = np.array(x0, dtype = np.float64)
current_y = np.array(y0, dtype = np.float64)
result = [[x0, *y0]]
h = step
h2 = h/2
h6 = h/6
stop = x1 - h
ind = 0
while current_x < stop:
ind += 1
k1 = np.array(f(current_x, current_y))
k2 = np.array(f(current_x + h2, current_y + k1*h2))
k3 = np.array(f(current_x + h2, current_y + k2*h2))
k4 = np.array(f(current_x + h, current_y + k3*h))
current_y += h6*(k1 + 2*k2 + 2*k3 + k4)
current_x += h
current_y = step_process(current_x, current_y)
if ind == save_steps:
result.append(np.array([current_x.copy(), *current_y.copy()]))
ind = 0
if current_x < x1 - constants.max_to_zero:
h = x1 - current_x
h2 = h/2
h6 = h/6
k1 = np.array(f(current_x, current_y))
k2 = np.array(f(current_x + h2, current_y + k1*h2))
k3 = np.array(f(current_x + h2, current_y + k2*h2))
k4 = np.array(f(current_x + h, current_y + k3*h))
current_y += h6*(k1 + 2*k2 + 2*k3 + k4)
current_x += h
current_y = step_process(current_x, current_y)
result.append(np.array([current_x.copy(), *current_y.copy()]))
return np.array(result)
@static_vars(T = T, wE = wE)
def ECEF2ECI(a, t, t0 = 0, fi0 = 0):
# converts a-vector from ECEF (Earth-bounded system) to ECI (Inertial system)
dt = (t - t0) % ECEF2ECI.T
fi = -(fi0 + ECEF2ECI.wE * dt)
# rotation matrix in terms of coordinates: x_new = M @ x_old
M = np.array([[np.cos(fi), np.sin(fi), 0],
[-np.sin(fi), np.cos(fi), 0],
[0, 0, 1]])
return M @ a
@static_vars(m04pi = m04pi)
def B1(r, t):
# r - np.ndarray; radius-vector [x, y, z]
# returns magnetic field vector B in ECI system
global m04pi
rr = np.linalg.norm(r)
k1 = np.array([0., 0., -1.])
B = -B1.m04pi * (3 * r[2] * r / rr**2 + k1) / rr**3
return B
@static_vars(m04pi = m04pi)
def B2(r, t, t0 = 0, theta = 9.5 * np.pi / 180, fi0 = 0):
    # r - np.ndarray; radius-vector [x, y, z]
    # theta - declination of the magnetic moment from z-axis
    # t0 - time moment when ECI and ECEF matched
    # fi0 - fi at t0
    # returns magnetic field vector in INERTIAL system
    k1 = np.array([np.sin(theta),
                   0,
                   -np.cos(theta)])
    k1 = ECEF2ECI(k1, t, t0 = t0, fi0 = fi0)
    rr = np.linalg.norm(r)
    B = B2.m04pi * (3 * np.dot(k1, r) * r / rr**2 - k1) / rr**3
    return B
def GravTorque(t, r, V, we, A, Ainv, params):
# xe = ST * xi = A.inverse * xi * A
# takes r, V in inertial reference system
# w, A in bounded system; A as quaternion!
# Conversion is supposed to be carried out in overlying function
# returns gravitational torque IN BOUNDED basis
rr = np.linalg.norm(r)
e3 = r / rr # in irs
e3 = (Ainv * np.quaternion(*e3) * A).vec # convertion to bounded basis
Me = 3 * params.mu / rr**3 * np.cross(e3, params.J @ e3)
return Me
def MagneticTorque(t, r, V, we, A, Ainv, m, params, Bfunc = B1):
# xe = ST * xi = A.inverse * xi * A
# takes r, V in inertial reference system
# w, A, Ainv in bounded system; A, Ainv as quaternions!
# Conversions is supposed to be carried out in overlying function
# returns magnetic torque IN BOUNDED basis
B = Bfunc(r, t)
B = (Ainv * np.quaternion(*B) * A).vec # convertion to bounded basis
Mm = np.cross(m, B)
return Mm
def ExternalTorqueAssesment(t, r, V, we, A, Ainv, params):
# now it calculates the exaxt torque, but in future it gonna be an assesment
M = np.zeros(3)
M += GravTorque(t, r, V, we, A, Ainv, params)
m = params.mm
M += MagneticTorque(t, r, V, we, A, Ainv, m, params)
return M
lmd = 100.
C = np.array([0.01, 0.01, 0.01])
@static_vars(lmd = lmd, C = C)
def SlidingControl(t, r, V, we, A, Ainv, params, Aref, wref, eref):
# we, A, Aref, wref, eref must be in bounded basis!
# returns in bounded basis
Aerr = Aref.conj() * A
werr = we - wref
qerr = Aerr.vec
A_dot = 0.5 * A * np.quaternion(*we)
Aref_dot = 0.5 * Aref * np.quaternion(*wref)
Aerr_dot = Aref.conj() * A_dot + Aref_dot.conj() * A
qerr_dot = Aerr_dot.vec
Mext = ExternalTorqueAssesment(t, r, V, we, A, Ainv, params)
Mcont = np.cross(we, params.J @ we) - Mext - \
params.J / SlidingControl.lmd @ \
(qerr_dot - SlidingControl.lmd * eref + C * np.sign(qerr + SlidingControl.lmd * werr))
return Mcont
def TorqueControl(t, r, V, we, A, Ainv, params):
Aref, wref, eref = params.desired_orientation(t, r, V, we, A, Ainv, params)
# Aref, wref, eref in bounded(!) basis!
return SlidingControl(t, r, V, we, A, Ainv, params, Aref, wref, eref)
def Torque(t, r, V, we, A, params):
# xe = ST * xi = A.inverse * xi * A
# takes r, V in inertial reference system
# w, A in bounded system as arrays
# returns torque IN BOUNDED basis
A = np.quaternion(*A)
Ainv = A.inverse()
M = np.zeros(3)
M += GravTorque(t, r, V, we, A, Ainv, params)
# in future here will be m() - function
m = params.mm
M += MagneticTorque(t, r, V, we, A, Ainv, m, params)
M += TorqueControl(t, r, V, we, A, Ainv, params)
return M
def QuatDot(Ai, we):
return np.quaternion(*Ai) * np.quaternion(*we) * 0.5
def Euler(t, r, V, we, A, params, M = lambda t, r, V, we, A: np.array([0, 0, 0])):
# return w' accroding to Euler's dinamic equation
# A - orientation QUATERNION
# M(t, w, A) - function of external moment in bounded axes
return params.Jinv @ (M(t, r, V, we, A, params) - np.cross(we, params.J @ we))
def f(t, x, params):
# x[:3] - r, x[3:6] - V, x[6:9] - we, x[9:] - Ai
r = x[:3]
V = x[3:6]
we = x[6:9]
A = x[9:]
res = np.empty(13)
res[:3] = V
res[3:6] = -params.mu * r / np.linalg.norm(r)**3
res[6:9] = Euler(t, r, V, we, A, params, Torque)
res[9:] = QuatDot(A, we).components
return res
def normalization(x):
x[9:] /= np.linalg.norm(x[9:])
return x
def step_processing(t, x, brain: Brain):
brain.set_true_x(t, x)
return normalization(x)
def Integrate(x0, t_start, t_final, step, params, brain):
"""
Input:
x0 - initial phase vector, flat(!) np.ndarray
t_start - initial time in seconds
t_final - final time in seconds
step - time step (constant step integration via RK4) in seconds
params - params argument in f
brain - Brain class instance
Returns:
time_points - np.ndarray of times with size [N,], where N - number of points = (t_final - t_start) // step + 2
x - np.ndarray of phase vector with size [N, x0.size]
"""
brain.set_true_x(t_start, x0)
res = RK4(lambda t, x: f(t, x, params), (t_start, t_final), x0,
step, step_process = lambda t, x: step_processing(t, x, brain))
time_points = res[:, 0]
x = res[:, 1:]
return time_points, x
###Output
_____no_output_____
###Markdown
To change desired orientation, change a function called in desired_orientation on the first line
###Code
from orientations import InertialOrientation, OrbitalOrientation, DZZOrientation
class parameters:
pass
@static_vars(Ae_prev = None)
def desired_orientation(t, r, V, we, A, Ainv, params):
# Ai, wi, ei = InertialOrientation(t)
# Ai, wi, ei = OrbitalOrientation(r, V) # - checked, converges
Ai, wi, ei = DZZOrientation(r, V, t)
wiq = np.quaternion(*wi)
eiq = np.quaternion(*ei)
Ad = 0.5 * wiq * Ai
Ae = Ai
we = (Ainv * wiq * A).vec
ee = (Ainv * eiq * A).vec +\
(Ad.conj() * wiq * A).vec +\
(Ainv * wiq * Ad).vec
# have to check Ae didn't change sign suddenly
if desired_orientation.Ae_prev:
dA = Ae - desired_orientation.Ae_prev
if dA.norm() >= 1:
# leap, A -> -A
Ae = -Ae
desired_orientation.Ae_prev = Ae.copy()
return Ae, we, ee
# desired_orientation(t0, r0, V0, w0, np.quaternion(*A0), np.quaternion(*A0), params)
J = np.zeros((3, 3))
J[0,0] = 1
J[1, 1] = 3
J[2, 2] = 2
Jinv = np.linalg.inv(J)
params = parameters()
params.desired_orientation = desired_orientation
params.J = J
params.Jinv = Jinv
params.mu = constants.muE
params.mm = np.array([0, 0, 10]) # self magnetic moment in bounded system; 10 A, 10 cm^2, 100 rounds
h0 = 1.5e5 # m
lat0 = 30 / 180 * np.pi
long0 = 60 / 180 * np.pi
ysc = 15 / 180 * np.pi
r0 = (constants.RE + h0) * np.array([np.cos(lat0) * np.cos(long0), np.cos(lat0) * np.sin(long0), np.sin(lat0)])
V0 = np.sqrt(constants.muE / np.linalg.norm(r0)) * np.array([-np.sin(long0 - ysc), np.cos(long0 - ysc), 0])
w0 = np.array([0., 0., 0.])
A0 = np.array([0., 0., 0., 1.])
x0 = np.hstack((r0, V0, w0, A0))
t0 = 0
t_final = 2 * 60 * 60
step = 1
time_disc = step
deviations = np.array([500., 500., 500., 1., 1., 1., 0.005, 0.005, 0.005, 0.001, 0.001, 0.001, 0.001])
brain = Brain(time_disc, deviations)
t_points, x_res = Integrate(x0, t0, t_final, step, params, brain)
x = x_res[:, 0]
y = x_res[:, 1]
z = x_res[:, 2]
fig = plt.figure()
ax = p3.Axes3D(fig)
ax.plot(x, y, z)
r = x_res[:, :3]
V = x_res[:, 3:6]
we = x_res[:, 6:9]
A = x_res[:, 9:]
Ades = []
wdes = []
edes = []
Aerr = []
for i in range(t_points.size):
Ade, wde, ede = desired_orientation(t_points[i], r[i], V[i], we[i], np.quaternion(*A[i]), np.quaternion(*A[i]).conj(), params)
Aerr.append((Ade.conj() * np.quaternion(*A[i])).components)
Ades.append(Ade.components)
wdes.append(wde)
edes.append(ede)
Ades = np.asarray(Ades)
wdes = np.asarray(wdes)
edes = np.asarray(edes)
Aerr = np.asarray(Aerr)
werr = we - wdes
A0 = np.array([-1, 0, 0, 0])
print("Max quat. error in the end:", np.max(np.linalg.norm((Aerr - A0)[-1000:], axis = 1)))
plt.figure(figsize=(10,10))
plt.plot(Aerr)
plt.title("Quaternion error")
y = np.linalg.norm(werr, axis = 1)
print("Max error in the end:", np.max(y[-1000:]))
plt.figure(figsize=(10,10))
plt.plot(t_points, y)
plt.title("Angular speed error")
###Output
Max error in the end: 3.658034870997935e-05
|
notebooks/2021-07-14 Exploration of a Best Fit run to refamiliarize.ipynb | ###Markdown
2021-07-14 Exploration of a Best Fit run to refamiliarizeIt's been over 8 months since I touched this codebase, so this notebook is to refamiliarize with the data and the code. In my mind I have that Best Fit is still better than DQN, but there are a couple of notebooks around august-september saying that DQN is being better. Here I'll figure out if best fit is doing what it is supposed to be doing, and see if I need to make changes to run the next batch of experiments. Another thing I need to remember is whether the August analysis were pre or post reward function.Remember to post findings in the Notion journal!Run used: https://wandb.ai/jotaporras/rl_warehouse_assignment/runs/14i4yoqy/overview?workspace=user-jotaporras Libraries
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Data loading
###Code
movement_detail_report = pd.read_csv("../data/results/gr_best_fit_few_warehouses/movement_detail_report.csv")
summary_movement_report = pd.read_csv("../data/results/gr_best_fit_few_warehouses/summary_movement_report.csv")
valid_dcs = pd.read_csv("../data/results/gr_best_fit_few_warehouses/valid_dcs.csv")
###Output
_____no_output_____
###Markdown
Data exploration
###Code
movement_detail_report.head()
summary_movement_report.head()
valid_dcs.head()
###Output
_____no_output_____
###Markdown
Total cost in summary movement reportFrom what I recall, the SMR (summary movement report) aggregates by timestep (using source_time as the start). I wish I had the reward here as well.
###Code
summary_movement_report  # TODO: explore further
###Output
_____no_output_____
###Markdown
Checking if there are any Big Ms.According to the wandb plots, there are spikes in reward as high as 2e+8, suggesting possible Big Ms. Yet no Big Ms are reported, which is confusing.The summary movement report is noisy but consistent, with no peaks above the Big M threshold (10K).Maybe something's wrong with the Big M calculation. A good next step would be to cross-check this with the export from wandb. A quick look shows they are indeed not consistent, so they're measuring different things.
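A quick sanity check on the threshold mentioned above (the 10K value is an assumption taken from this note, not from the codebase):

```python
BIG_M_THRESHOLD = 10_000  # assumed from the note above
num_spikes = (summary_movement_report["total_cost"] > BIG_M_THRESHOLD).sum()
print(f"Timesteps with total_cost above the Big M threshold: {num_spikes}")
```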
###Code
sns.lineplot(data=summary_movement_report.total_cost)
summary_movement_report.total_cost.describe().reset_index()
###Output
_____no_output_____ |
jupyter_notebooks/machine_learning/MultiLabel_Classification.ipynb | ###Markdown
**Terminology:** In multilabel classification, a feature record can be assigned to one *or more* labels. In multiclass classification, there is more than one class to choose from, but each feature record is assigned to exactly one of them.
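A small illustration of the multilabel case using scikit-learn's `MultiLabelBinarizer` (the same class used in the code below):

```python
from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
print(mlb.fit_transform([["new york"], ["london"], ["new york", "london"]]))
# [[0 1]
#  [1 0]
#  [1 1]]   columns are ['london', 'new york']; the last record carries both labels
```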
###Code
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn import preprocessing
X_train = np.array(["new york is a hell of a town",
"new york was originally dutch",
"the big apple is great",
"new york is also called the big apple",
"nyc is nice",
"people abbreviate new york city as nyc",
"the capital of great britain is london",
"london is in the uk",
"london is in england",
"london is in great britain",
"it rains a lot in london",
"london hosts the british museum",
"new york is great and so is london",
"i like london better than new york"])
y_train_text = [["new york"],["new york"],["new york"],["new york"],["new york"],
["new york"],["london"],["london"],["london"],["london"],
["london"],["london"],["new york","london"],["new york","london"]]
X_test = np.array(['nice day in nyc',
'welcome to london',
'london is rainy',
'it is raining in britian',
'it is raining in britian and the big apple',
'it is raining in britian and nyc',
'hello welcome to new york. enjoy it here and london too'])
lb = preprocessing.MultiLabelBinarizer()
Y = lb.fit_transform(y_train_text)
classifier = Pipeline([
('vectorizer', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', OneVsRestClassifier(LinearSVC()))])
classifier.fit(X_train, Y)
predicted = classifier.predict(X_test)
all_labels = lb.inverse_transform(predicted)
for item, labels in zip(X_test, all_labels):
print('%s => %s' % (item, ', '.join(labels)))
X_train.shape
X_test.shape
###Output
_____no_output_____
###Markdown
Follow-up The X_train and X_test data need to be 1-D arrays of strings; otherwise you will get an error. The restriction comes from `CountVectorizer`, which expects an iterable of raw documents rather than a 2-D feature matrix (see the sketch below for one way to work with the 2-D arrays used in the next cell).
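One way to make the 2-D arrays below work is to pass only the text column into the pipeline (a sketch, assuming the arrays defined in the next cell, where column 0 is a numeric id and column 1 is the text):

```python
# Hypothetical fix: feed only the text column to the pipeline
classifier.fit(X_train[:, 1], Y)
predicted = classifier.predict(X_test[:, 1])
```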
###Code
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.multiclass import OneVsRestClassifier
from sklearn import preprocessing
X_train = np.array([[1234, "new york is a hell of a town"],
[2345, "new york was originally dutch"],
[454, "the big apple is great"],
[1234, "new york is also called the big apple"],
[567, "nyc is nice"],
[423, "people abbreviate new york city as nyc"],
[9078, "the capital of great britain is london"],
[2134, "london is in the uk"],
[789, "london is in england"],
[6567, "london is in great britain"],
[6567, "it rains a lot in london"],
[7896, "london hosts the british museum"],
[3125, "new york is great and so is london"],
[5123, "i like london better than new york"]])
y_train_text = [["new york"],["new york"],["new york"],["new york"],["new york"],
["new york"],["london"],["london"],["london"],["london"],
["london"],["london"],["new york","london"],["new york","london"]]
X_test = np.array([[1234, 'nice day in nyc'],
[2134, 'welcome to london'],
[789, 'london is rainy'],
[9078, 'it is raining in britian'],
[3125, 'it is raining in britian and the big apple'],
[3100, 'it is raining in britian and nyc'],
[3025, 'hello welcome to new york. enjoy it here and london too']])
target_names = ['New York', 'London']
lb = preprocessing.MultiLabelBinarizer()
Y = lb.fit_transform(y_train_text)
classifier = Pipeline([
('vectorizer', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', OneVsRestClassifier(LinearSVC()))])
classifier.fit(X_train, Y)
predicted = classifier.predict(X_test)
all_labels = lb.inverse_transform(predicted)
for item, labels in zip(X_test, all_labels):
print('%s => %s' % (item, ', '.join(labels)))
X_train.shape
X_test.shape
###Output
_____no_output_____ |
Chapter05/generate_dataset.ipynb | ###Markdown
International trading prices for commodities from Federal Reserve St. Louis
###Code
# identifier for commodities
idnames = {
'bananas (U.S. Dollars per Metric Ton)': "PBANSOPUSDM",
'olive oil (U.S. Dollars per Metric Ton)': "POLVOILUSDM",
'sugar (U.S. Cents per Pound)': "PSUGAISAUSDM",
'uranium (U.S. Dollars per Pound)': "PURANUSDM",
'cotton (U.S. Cents per Pound)': "PCOTTINDUSDM",
'orange (U.S. Dollars per Metric Ton)': "PORANGUSDM",
'wheat (U.S. Dollars per Metric Ton)': "PWHEAMTUSDM",
'aluminium (U.S. Dollars per Metric Ton)': "PALUMUSDM",
'iron (U.S. Dollars per Metric Ton)': "PIORECRUSDM",
'corn (U.S. Dollars per Metric Ton)': "PMAIZMTUSDM"
}
url = "https://fred.stlouisfed.org/graph/fredgraph.csv?bgcolor=%23e1e9f0&chart_type=line&drp=0&fo=open%20sans&graph_bgcolor=%23ffffff&height=450&mode=fred&recession_bars=off&txtcolor=%23444444&ts=12&tts=12&width=1168&nt=0&thu=0&trc=0&show_legend=yes&show_axis_titles=yes&show_tooltip=yes&id={}&scale=left&cosd=1980-01-01&coed=2017-06-01&line_color=%234572a7&link_values=false&line_style=solid&mark_type=none&mw=0&lw=2&ost=-99999&oet=99999&mma=0&fml=a&fq=Monthly&fam=avg&fgst=lin&fgsnd=2009-06-01&line_index=1&transformation=lin&vintage_date=2018-10-18&revision_date=2018-10-18&nd=1980-01-01"
import requests
def download_and_get_dataset(name, idname):
r = requests.get(url.format(idname), allow_redirects=True)
open(idname + '.csv', 'wb').write(r.content)
df = pd.read_csv(idname + '.csv')
df['value'] = df[idname]
df['feature'] = name
df['commodity'] = name.split(' ')[0]
del df[idname]
return df
import pandas as pd
data = {k: download_and_get_dataset(name=k, idname=v) for k, v in idnames.items()}
dfs = pd.concat([v for k, v in data.items()])
dfs.head(3)
###Output
_____no_output_____
###Markdown
Save first dataset
###Code
dfs.to_csv('fsb_st_louis_commodities.csv', index=False)
###Output
_____no_output_____
###Markdown
Import value and volume of Bananas from USDA ```https://data.ers.usda.gov/reports.aspx?programArea=fruit&stat_year=2009&top=5&HardCopy=True&RowsPerPage=25&groupName=Noncitrus&commodityName=Bananas&ID=17851```
###Code
def assign_feature(idx):
if idx < 10:
return 'dried bananas'
elif idx < 20:
return 'fresh bananas'
elif idx < 30:
return 'frozen bananas'
else:
return 'preserved bananas'
data = []
filename = './us_import_value_of_bananas.csv'
with open(filename, 'r') as f:
for line in f.readlines():
data.append(line.split('\t'))
data_filtered = []
for idx, item in enumerate(data):
if item[-1]=='\n':
item.pop(-1)
item[-1] = item[-1].strip()
if idx!=0:
item = [item[0]] + [float(e.replace(',', '')) if e!='NA' else None for e in item[1:]]
data_filtered.append(item)
df = pd.DataFrame(data_filtered[1:], columns=data_filtered[0])
df['feature'] = df.index.map(lambda idx: assign_feature(idx))
df['commodity'] = 'banana'
df.head(10)
df.to_csv('import_value_of_bananas_in_thousand_usd.csv', index=False)
df_bananas_value = pd.melt(df, id_vars=['year', 'commodity', 'feature'],
value_vars=['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
df_bananas_value.head(10)
data = []
filename = './us_import_volume_of_bananas.csv'
with open(filename, 'r') as f:
for line in f.readlines():
data.append(line.split('\t'))
data_filtered = []
for idx, item in enumerate(data):
if item[-1]=='\n':
item.pop(-1)
item[-1] = item[-1].strip()
if idx!=0:
item = [item[0]] + [float(e.strip().replace(',', '')) if e!='NA' else None for e in item[1:] if len(e.strip())>0]
else:
item = [e.strip() for e in item if len(e.strip())>0 ]
data_filtered.append(item)
df = pd.DataFrame(data_filtered[1:], columns=data_filtered[0])
df['feature'] = df.index.map(lambda idx: assign_feature(idx))
df['commodity'] = 'banana'
df.head(10)
df.to_csv('import_volume_of_bananas_in_thousand_pounds.csv', index=False)
df_bananas_volume = pd.melt(df, id_vars=['year', 'feature', 'commodity'],
value_vars=['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
df_bananas_volume.head(10)
df_bananas = pd.merge(left=df_bananas_value,
right=df_bananas_volume,
on=['year', 'feature', 'variable', 'commodity'],
suffixes=['_bnn_1000_usd', '_bnn_1000_pounds'])
df_bananas['timestamp'] = df_bananas[['variable', 'year']].apply(lambda row: row['variable'] + ', ' + row['year'], axis=1)
df_bananas.head(10)
###Output
_____no_output_____
###Markdown
Import value and volume of Oranges from USDA ```https://data.ers.usda.gov/reports.aspx?programArea=fruit&stat_year=2009&top=5&HardCopy=True&RowsPerPage=25&groupName=Citrus&commodityName=Oranges&ID=17851```
###Code
def assign_feature(idx):
if idx < 4:
return 'fresh oranges'
elif idx < 8:
return 'orange juice'
else:
return 'preserved oranges'
import pandas as pd
data = []
filename = './us_import_value_of_oranges.csv'
with open(filename, 'r') as f:
for line in f.readlines():
data.append([e for e in line.split('\t') if len(e.strip())>0])
data_filtered = []
for idx, item in enumerate(data):
if item[-1]=='\n':
item.pop(-1)
item[-1] = item[-1].strip()
if idx!=0:
item = [item[0].strip()] + [float(e.strip().replace(',', '')) if e!='NA' else None for e in item[1:]]
else:
item = [e.strip() for e in item]
data_filtered.append([e for e in item])
df = pd.DataFrame(data_filtered[1:], columns=data_filtered[0])
df['feature'] = df.index.map(lambda idx: assign_feature(idx))
df['commodity'] = 'orange'
df.head(10)
df.to_csv('import_value_of_oranges_in_thousand_usd.csv', index=False)
df_oranges_value = pd.melt(
df,
id_vars=['year', 'feature', 'commodity'],
value_vars=['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug','Sep', 'Oct', 'Nov', 'Dec'])
df_oranges_value.year = df_oranges_value.year.map(lambda x: x.split('/')[0])
df_oranges_value.head(3)
import pandas as pd
data = []
filename = './us_import_volume_of_oranges.csv'
with open(filename, 'r') as f:
for line in f.readlines():
data.append([e for e in line.split('\t') if len(e.strip())>0])
data_filtered = []
for idx, item in enumerate(data):
if item[-1]=='\n':
item.pop(-1)
item[-1] = item[-1].strip()
if idx!=0:
item = [item[0].strip()] + [float(e.strip().replace(',', '')) if e!='NA' else None for e in item[1:]]
else:
item = [e.strip() for e in item]
data_filtered.append([e for e in item])
df = pd.DataFrame(data_filtered[1:], columns=data_filtered[0])
df['feature'] = df.index.map(lambda idx: assign_feature(idx))
df['commodity'] = 'orange'
df.head(3)
df.to_csv('import_volume_of_oranges_in_thousand_pounds.csv', index=False)
df_oranges_volume = pd.melt(
df,
id_vars=['year', 'feature', 'commodity'],
value_vars=['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug','Sep', 'Oct', 'Nov', 'Dec'])
df_oranges_volume.year = df_oranges_volume.year.map(lambda x: x.split('/')[0])
df_oranges_volume.head(3)
df_oranges = pd.merge(left=df_oranges_value,
right=df_oranges_volume,
on=['year', 'variable', 'feature', 'commodity'],
suffixes=['_orng_1000_usd', '_orng_1000_pounds'])
df_oranges['timestamp'] = df_oranges[['variable', 'year']].apply(lambda row: row['variable'] + ', ' + row['year'], axis=1)
df_oranges.head(10)
###Output
_____no_output_____
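###Markdown
The four USDA parsing cells above repeat almost the same tab-separated cleaning logic. As an aside (a sketch assuming the same layout of a year column followed by monthly columns), a reusable helper could reduce the duplication:
###Code
def read_usda_table(filename, feature_fn, commodity):
    # Parse one of the tab-separated USDA tables used above (sketch).
    rows = []
    with open(filename, 'r') as f:
        for idx, line in enumerate(f.readlines()):
            items = [e.strip() for e in line.split('\t') if len(e.strip()) > 0]
            if idx != 0:
                items = [items[0]] + [float(e.replace(',', '')) if e != 'NA' else None
                                      for e in items[1:]]
            rows.append(items)
    out = pd.DataFrame(rows[1:], columns=rows[0])
    out['feature'] = out.index.map(feature_fn)
    out['commodity'] = commodity
    return out
###Output
_____no_output_____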
###Markdown
Merge Bananas and Oranges, Import Value and Volume
###Code
df_merged = pd.merge(left=df_bananas, right=df_oranges, on=['timestamp', 'variable', 'year'])
df_merged.head(10)
###Output
_____no_output_____
###Markdown
Save second dataset
###Code
df_merged.to_csv('usda_oranges_and_bananas_data.csv', index=False)
###Output
_____no_output_____ |
examples/dash_visualize_example.ipynb | ###Markdown
Visualization Notebook NeuroLit Software Engineering for Data Scientists, Autumn 2017 Maggie Clarke, Patrick Donnelly, & Sritam Kethireddy
###Code
# import necessary databases
import dash, os
import dash_core_components as dcc
import dash_html_components as html
import plotly.graph_objs as go
import pandas as pd
import numpy as np
import neurolit as nlit
# pull data using dataset function in neurolit module
ilabs_data = nlit.Dataset(data_folder = os.path.join(nlit.__path__[0],'data'), token_file = 'neurolit_api_token.txt')
# shortens dataset variable for ease in workflow
df = ilabs_data.all_data
###Output
_____no_output_____
###Markdown
Dash app initialization
###Code
# define the layout for the Dash app
app = dash.Dash()
available_indicators = df.columns
app.layout = html.Div(html.Div(children=[
html.H1(
children='NeuroLit Visualization Tool',
style={
'textAlign': 'left',
'color': '#122850',
'fontSize': 55,
'font-family': 'arial'
}
),
html.Div(
children='''UW Reading and Dyslexia Research Program.''',
style={
'textAlign': 'left',
'color': '#1e3f78',
'fontSize': 28,
'font-family': 'arial'
}
),
html.Div(
dcc.Dropdown(
id='xaxis-column',
options=[{'label': i, 'value': i} for i in available_indicators],
placeholder='Choose first predictor variable',
#value='Perceived Reading Skill'
),
),
html.Div(
dcc.Dropdown(
id='yaxis-column',
options=[{'label': i, 'value': i} for i in available_indicators],
placeholder='Choose second predictor variable',
#value='WJ Basic Reading Skills'
),
),
html.Div(
dcc.Dropdown(
id='outcome-variable',
options=[{'label': i, 'value': i} for i in available_indicators],
placeholder='Choose Outcome variable',
#value='WJ Basic Reading Skills'
),
),
html.Div([
dcc.Graph(id='graph1'),
], style={'display': 'inline-block', 'width': '49%'}),
html.Div([
dcc.Graph(id='graph2'),
], style={'display': 'inline-block', 'width': '49%'}),
]))
@app.callback(
dash.dependencies.Output('graph1', 'figure'),
[dash.dependencies.Input('xaxis-column', 'value'),
dash.dependencies.Input('yaxis-column', 'value'),
dash.dependencies.Input('outcome-variable', 'value')])
def update_graph(xaxis_column_name, yaxis_column_name, outcome_variable):
return {
'data': [go.Scatter(
x=df[xaxis_column_name],
y=df[outcome_variable],
text=df[yaxis_column_name],
mode='markers',
marker={
'size': 15,
'opacity': 0.5,
'color': df[outcome_variable],
'line': {'width': 0.5, 'color': 'white'},
'colorscale': 'Viridis',
'showscale': True
}
)],
'layout': go.Layout(
xaxis={
'title': xaxis_column_name
},
yaxis={
'title': outcome_variable
})
}
@app.callback(
dash.dependencies.Output('graph2', 'figure'),
[dash.dependencies.Input('xaxis-column', 'value'),
dash.dependencies.Input('yaxis-column', 'value'),
dash.dependencies.Input('outcome-variable', 'value')])
def update_graph2(xaxis_column_name, yaxis_column_name, outcome_variable):
return {
'data': [go.Scatter(
x=df[yaxis_column_name],
y=df[outcome_variable],
text=df[yaxis_column_name],
mode='markers',
marker={
'size': 15,
'opacity': 0.5,
'color': df[outcome_variable],
'line': {'width': 0.5, 'color': 'white'},
                'colorscale': 'Viridis',  # use the same valid Plotly colorscale name as in the first graph
'showscale': True
}
)],
'layout': go.Layout(
xaxis={
'title': yaxis_column_name
},
yaxis={
'title': outcome_variable
})
}
if __name__ == '__main__':
app.run_server()
del app
###Output
_____no_output_____ |
Dask/NYC_taxis/NYC_Taxis.ipynb | ###Markdown
Analysis of 2013 taxi trips in New York City. The goal of this notebook is to explore the data of the taxi trips made during 2013 in New York City using the Dask library. To begin, we initialize Dask and import the dataset.
###Code
from dask.distributed import Client
client = Client(n_workers=4)
import dask.bag as db
import os
#files = [os.path.join('Datos','2015-01-0%d-*.json.gz' % i) for i in range(1,2)]
files = os.path.join('trip_data', '*_1.csv')
b = db.read_text(files)
print(files)
print(b)
###Output
trip_data/*_1.csv
dask.bag<bag-from-delayed, npartitions=1>
###Markdown
Now, let's look at the first file
###Code
b.take(2)
###Output
_____no_output_____
###Markdown
However, since the files are in CSV format, they can be read and handled more conveniently as dataframes
###Code
import dask.dataframe as dd
df = dd.read_csv('trip_data/*_8.csv',dtype={' passenger_count': 'float64',
' rate_code': 'float64',
' trip_time_in_secs': 'float64'})
df
###Output
_____no_output_____
###Markdown
Now, looking at the head of the first dataframe
###Code
df.head()
###Output
_____no_output_____
###Markdown
Now let's look at the stored data types
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
And while we're at it, let's look at the tail of the last dataframe
###Code
df.tail()
###Output
_____no_output_____
###Markdown
As we can see, the last record has NaN values, so we drop those rows in the same way we would in pandas
###Code
df = df.dropna()
df.tail()
###Output
_____no_output_____
###Markdown
Next, we examine the structure of the data using:
###Code
df.values
###Output
_____no_output_____
###Markdown
With this in place, we can start running queries and extracting information. First, we look at the number of records
###Code
%%time
len(df)
###Output
CPU times: user 7.41 s, sys: 944 ms, total: 8.35 s
Wall time: 1min 38s
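###Markdown
The timed queries in this notebook re-read the CSV files from disk on every `.compute()` call, which is why each one takes over a minute. As an optional optimization (a sketch, assuming the workers have enough memory to hold the data), the cleaned dataframe can be persisted in distributed memory so that later queries reuse the already-loaded partitions:
###Code
# Optional: keep the cleaned dataframe in the workers' memory so later
# queries reuse the loaded partitions instead of re-reading the CSV files.
# df = df.persist()
###Output
_____no_output_____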
###Markdown
Now, we extract the passenger-related information and compute the maximum number of passengers on a single trip.
###Code
%%time
pasajeros = df[' passenger_count']
pasajeros.max().compute()
###Output
CPU times: user 6.77 s, sys: 791 ms, total: 7.56 s
Wall time: 1min 26s
###Markdown
However, we can do the calculation directly and see that the time is slightly lower
###Code
%%time
df[' passenger_count'].max().compute()
###Output
CPU times: user 6.55 s, sys: 894 ms, total: 7.45 s
Wall time: 1min 20s
###Markdown
Now, let's rename the columns
###Code
df.columns.values
df = df.rename(columns={' hack_license': 'hack_license', ' trip_time_in_secs': 'trip_time_in_secs', ' trip_distance':'trip_distance'})
###Output
_____no_output_____
###Markdown
Now, let's keep just the license and trip-time information
###Code
df2 = df[['hack_license','trip_time_in_secs']]
df2
df2.head()
###Output
_____no_output_____
###Markdown
Now, let's find the longest trip for each driver
###Code
%%time
df2.groupby("hack_license").trip_time_in_secs.max().compute()
###Output
CPU times: user 6.29 s, sys: 896 ms, total: 7.19 s
Wall time: 1min 18s
###Markdown
We can also compute this directly, and the times are similar
###Code
%%time
df.groupby("hack_license").trip_time_in_secs.max().compute()
client.shutdown()
###Output
distributed.client - ERROR - Failed to reconnect to scheduler after 10.00 seconds, closing client
ERROR:asyncio:_GatheringFuture exception was never retrieved
future: <_GatheringFuture finished exception=CancelledError()>
concurrent.futures._base.CancelledError
|
experiments/2022_scientific_reports/notebooks/data_processing/Questionnaire_Processing.ipynb | ###Markdown
Questionnaire Data Processing Setup and Helper Functions
###Code
import json
import re
from pathlib import Path
import pandas as pd
import numpy as np
import pingouin as pg
import matplotlib.pyplot as plt
import seaborn as sns
import biopsykit as bp
from cft_analysis.datasets import CftDatasetRaw
%load_ext autoreload
%autoreload 2
%matplotlib widget
plt.close("all")
palette = bp.colors.fau_palette
sns.set_theme(context="notebook", style="ticks", palette=palette)
plt.rcParams["figure.figsize"] = (10, 5)
plt.rcParams["pdf.fonttype"] = 42
plt.rcParams["mathtext.default"] = "regular"
palette
###Output
_____no_output_____
###Markdown
Data Import
###Code
# build path to data folder
config_dict = json.load(Path("../../config.json").open(encoding="utf-8"))
base_path = Path("..").joinpath(config_dict["base_path"])
dataset = CftDatasetRaw(base_path)
dataset
data = dataset.questionnaire
data.head()
quest_path = base_path.joinpath("questionnaire")
# path to export processed questionnaire data in the Data repository
quest_path_export = quest_path.joinpath("processed")
quest_path_export_analysis = Path("../../data/questionnaire")
bp.utils.file_handling.mkdirs([quest_path_export, quest_path_export_analysis])
quest_path_export_analysis
###Output
_____no_output_____
###Markdown
Data Processing Metadata The following metadata are extracted:* Body Mass Index (BMI)* Age
###Code
bmi = bp.metadata.bmi(data, ["weight", "height"])
metadata = data[["age", "gender"]]
metadata = pd.concat([bmi, metadata], axis=1)
###Output
_____no_output_____
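###Markdown
As a quick cross-check (a sketch that assumes weight is stored in kg and height in cm), BMI can also be computed by hand as weight divided by the squared height in metres and compared against the biopsykit result:
###Code
# Manual BMI cross-check: kg / m^2 (height converted from cm to m)
bmi_manual = data["weight"] / (data["height"] / 100) ** 2
bmi_manual.head()
###Output
_____no_output_____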
###Markdown
Questionnaires The following questionnaire scores are computed:* Allgemeine Depressionsskala - Langform (ADS-L) (German version of the Center for Epidemiological Studies Depression Scale – CESD)* Perceived Stress Scale (PSS)* Multidimensionaler Befindlichkeitsfragebogen (MDBF) (German version of the Multidimensional Mood State Questionnaire – MDMQ): *pre* and *post* MIST
###Code
quest_dict = {
"ads_l": bp.questionnaires.utils.find_cols(data, regex_str="ADSL_\d+")[1],
"pss": bp.questionnaires.utils.find_cols(data, regex_str="PSS_\d+")[1],
"mdbf-pre": bp.questionnaires.utils.find_cols(data, regex_str="MDBF_Pre_\d+")[1],
"mdbf-post": bp.questionnaires.utils.find_cols(data, regex_str="MDBF_Post_\d+")[1],
}
data_recode = data.copy()
data_recode = bp.questionnaires.utils.convert_scale(data=data_recode, cols=quest_dict["ads_l"], offset=-1)
data_recode = bp.questionnaires.utils.convert_scale(data=data_recode, cols=quest_dict["pss"], offset=-1)
quest_data = bp.questionnaires.utils.compute_scores(data_recode, quest_dict)
quest_data_total = pd.concat([metadata, quest_data], axis=1)
quest_data_total.head()
###Output
_____no_output_____
###Markdown
Export
###Code
quest_data_total.to_csv(quest_path_export.joinpath("questionnaire_data.csv"))
quest_data_total.to_csv(quest_path_export_analysis.joinpath("questionnaire_data.csv"))
###Output
_____no_output_____ |
notebooks/custom_LSTM_RNN_AI_summer_experiment.ipynb | ###Markdown
Welcome to the AI Summer tutorial: Intuitive understanding of recurrent neural networks. This educational LSTM tutorial heavily borrows from the PyTorch example for time sequence prediction that can be found here: https://github.com/pytorch/examples/tree/master/time_sequence_prediction Basic imports
###Code
import numpy as np
import torch
from torch import nn
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import torch.optim as optim
###Output
_____no_output_____
###Markdown
Generate synthetic sine wave data
###Code
np.random.seed(2)
T = 20
L = 1000
N = 200
x = np.empty((N, L), 'int64')
x[:] = np.array(range(L)) + np.random.randint(-4 * T, 4 * T, N).reshape(N, 1)
data = np.sin(x / 1.0 / T).astype('float64')
torch.save(data, open('traindata.pt', 'wb'))
###Output
_____no_output_____
###Markdown
Our humble implementation of the LSTM cell
###Code
import torch
from torch import nn
class LSTM_cell_AI_SUMMER(torch.nn.Module):
"""
A simple LSTM cell network for educational AI-summer purposes
"""
def __init__(self, input_length=10, hidden_length=20):
super(LSTM_cell_AI_SUMMER, self).__init__()
self.input_length = input_length
self.hidden_length = hidden_length
# forget gate components
self.linear_forget_w1 = nn.Linear(self.input_length, self.hidden_length, bias=True)
self.linear_forget_r1 = nn.Linear(self.hidden_length, self.hidden_length, bias=False)
self.sigmoid_forget = nn.Sigmoid()
# input gate components
self.linear_gate_w2 = nn.Linear(self.input_length, self.hidden_length, bias=True)
self.linear_gate_r2 = nn.Linear(self.hidden_length, self.hidden_length, bias=False)
self.sigmoid_gate = nn.Sigmoid()
# cell memory components
self.linear_gate_w3 = nn.Linear(self.input_length, self.hidden_length, bias=True)
self.linear_gate_r3 = nn.Linear(self.hidden_length, self.hidden_length, bias=False)
self.activation_gate = nn.Tanh()
# out gate components
self.linear_gate_w4 = nn.Linear(self.input_length, self.hidden_length, bias=True)
self.linear_gate_r4 = nn.Linear(self.hidden_length, self.hidden_length, bias=False)
self.sigmoid_hidden_out = nn.Sigmoid()
self.activation_final = nn.Tanh()
def forget(self, x, h):
x = self.linear_forget_w1(x)
h = self.linear_forget_r1(h)
return self.sigmoid_forget(x + h)
def input_gate(self, x, h):
# Equation 1. input gate
x_temp = self.linear_gate_w2(x)
h_temp = self.linear_gate_r2(h)
i = self.sigmoid_gate(x_temp + h_temp)
return i
def cell_memory_gate(self, i, f, x, h, c_prev):
x = self.linear_gate_w3(x)
h = self.linear_gate_r3(h)
# new information part that will be injected in the new context
k = self.activation_gate(x + h)
g = k * i
# forget old context/cell info
c = f * c_prev
# learn new context/cell info
c_next = g + c
return c_next
def out_gate(self, x, h):
x = self.linear_gate_w4(x)
h = self.linear_gate_r4(h)
return self.sigmoid_hidden_out(x + h)
def forward(self, x, tuple_in ):
(h, c_prev) = tuple_in
# Equation 1. input gate
i = self.input_gate(x, h)
# Equation 2. forget gate
f = self.forget(x, h)
# Equation 3. updating the cell memory
c_next = self.cell_memory_gate(i, f, x, h,c_prev)
# Equation 4. calculate the main output gate
o = self.out_gate(x, h)
# Equation 5. produce next hidden output
h_next = o * self.activation_final(c_next)
return h_next, c_next
###Output
_____no_output_____
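###Markdown
A quick sanity check (a sketch, not part of the original tutorial): the cell can be driven manually for a single time step by passing an input of shape (batch, input_length) together with the previous hidden and cell states.
###Code
# One forward step through the custom cell with dummy data (batch of 4, input size 1)
cell = LSTM_cell_AI_SUMMER(input_length=1, hidden_length=51)
x_step = torch.randn(4, 1)
h_prev = torch.zeros(4, 51)
c_prev = torch.zeros(4, 51)
h_next, c_next = cell(x_step, (h_prev, c_prev))
print(h_next.shape, c_next.shape)  # both should be torch.Size([4, 51])
###Output
_____no_output_____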
###Markdown
Our even humbler implementation of the GRU cell. We will describe the GRU in part 2, but you can play around with it now if you want!
###Code
class GRU_cell_AI_SUMMER(torch.nn.Module):
"""
A simple GRU cell network for educational purposes
"""
def __init__(self, input_length=10, hidden_length=20):
super(GRU_cell_AI_SUMMER, self).__init__()
self.input_length = input_length
self.hidden_length = hidden_length
# reset gate components
self.linear_reset_w1 = nn.Linear(self.input_length, self.hidden_length, bias=True)
self.linear_reset_r1 = nn.Linear(self.hidden_length, self.hidden_length, bias=True)
self.linear_reset_w2 = nn.Linear(self.input_length, self.hidden_length, bias=True)
self.linear_reset_r2 = nn.Linear(self.hidden_length, self.hidden_length, bias=True)
self.activation_1 = nn.Sigmoid()
# update gate components
self.linear_gate_w3 = nn.Linear(self.input_length, self.hidden_length, bias=True)
self.linear_gate_r3 = nn.Linear(self.hidden_length, self.hidden_length, bias=True)
self.activation_2 = nn.Sigmoid()
self.activation_3 = nn.Tanh()
def reset_gate(self, x, h):
x_1 = self.linear_reset_w1(x)
h_1 = self.linear_reset_r1(h)
# gate update
reset = self.activation_1(x_1 + h_1)
return reset
def update_gate(self, x, h):
x_2 = self.linear_reset_w2(x)
h_2 = self.linear_reset_r2(h)
z = self.activation_2( h_2 + x_2)
return z
def update_component(self, x,h,r):
x_3 = self.linear_gate_w3(x)
h_3 = r * self.linear_gate_r3(h)
gate_update = self.activation_3(x_3+h_3)
return gate_update
def forward(self, x, h):
# Equation 1. reset gate vector
r = self.reset_gate(x, h)
# Equation 2: the update gate - the shared update gate vector z
z = self.update_gate(x, h)
# Equation 3: The almost output component
n = self.update_component(x,h,r)
# Equation 4: the new hidden state
h_new = (1-z) * n + z * h
return h_new
###Output
_____no_output_____
###Markdown
Putting the cells together
###Code
class Sequence(nn.Module):
def __init__(self, LSTM=True, custom=True):
super(Sequence, self).__init__()
self.LSTM = LSTM
if LSTM:
if custom:
print("AI summer LSTM cell implementation...")
self.rnn1 = LSTM_cell_AI_SUMMER(1, 51)
self.rnn2 = LSTM_cell_AI_SUMMER(51, 51)
else:
print("Official PyTorch LSTM cell implementation...")
self.rnn1 = nn.LSTMCell(1, 51)
self.rnn2 = nn.LSTMCell(51, 51)
#GRU
else:
if custom:
print("AI summer GRU cell implementation...")
self.rnn1 = GRU_cell_AI_SUMMER(1, 51)
self.rnn2 = GRU_cell_AI_SUMMER(51, 51)
else:
print("Official PyTorch GRU cell implementation...")
self.rnn1 = nn.GRUCell(1, 51)
self.rnn2 = nn.GRUCell(51, 51)
self.linear = nn.Linear(51, 1)
def forward(self, input, future=0):
outputs = []
h_t = torch.zeros(input.size(0), 51, dtype=torch.double)
c_t = torch.zeros(input.size(0), 51, dtype=torch.double)
h_t2 = torch.zeros(input.size(0), 51, dtype=torch.double)
c_t2 = torch.zeros(input.size(0), 51, dtype=torch.double)
for i, input_t in enumerate(input.chunk(input.size(1), dim=1)):
if self.LSTM:
h_t, c_t = self.rnn1(input_t, (h_t, c_t))
h_t2, c_t2 = self.rnn2(h_t, (h_t2, c_t2))
else:
h_t = self.rnn1(input_t, h_t)
h_t2 = self.rnn2(h_t, h_t2)
output = self.linear(h_t2)
outputs += [output]
# if we should predict the future
for i in range(future):
if self.LSTM:
                h_t, c_t = self.rnn1(output, (h_t, c_t))  # feed the previous prediction back in
h_t2, c_t2 = self.rnn2(h_t, (h_t2, c_t2))
else:
                h_t = self.rnn1(output, h_t)  # feed the previous prediction back in
h_t2 = self.rnn2(h_t, h_t2)
output = self.linear(h_t2)
outputs += [output]
outputs = torch.stack(outputs, 1).squeeze(2)
return outputs
###Output
_____no_output_____
###Markdown
Training code (based on the PyTorch example for time sequence prediction that can be found here: https://github.com/pytorch/examples/tree/master/time_sequence_prediction)
###Code
if __name__ == '__main__':
# set random seed to 0
np.random.seed(0)
torch.manual_seed(0)
# load data and make training set
data = torch.load('traindata.pt')
input = torch.from_numpy(data[3:, :-1])
print(input.shape)
target = torch.from_numpy(data[3:, 1:])
test_input = torch.from_numpy(data[:3, :-1])
test_target = torch.from_numpy(data[:3, 1:])
# build the model. LSTM=False means GRU cell
seq = Sequence(LSTM=True, custom=True)
seq.double()
criterion = nn.MSELoss()
# use LBFGS as optimizer since we can load the whole data to train
optimizer = optim.LBFGS(seq.parameters(), lr=0.8)
# begin to train
for i in range(20):
print('STEP: ', i)
def closure():
optimizer.zero_grad()
out = seq(input)
loss = criterion(out, target)
print('loss:', loss.item())
loss.backward()
return loss
optimizer.step(closure)
# begin to predict, no need to track gradient here
with torch.no_grad():
future = 1000
pred = seq(test_input, future=future)
loss = criterion(pred[:, :-future], test_target)
print('test loss:', loss.item())
y = pred.detach().numpy()
# draw the result
plt.figure(figsize=(30, 10))
plt.title('Predict future values for time sequences\n(Dashlines are predicted values)', fontsize=30)
plt.xlabel('x', fontsize=20)
plt.ylabel('y', fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
def draw(yi, color):
plt.plot(np.arange(input.size(1)), yi[:input.size(1)], color, linewidth=2.0)
plt.plot(np.arange(input.size(1), input.size(1) + future), yi[input.size(1):], color + ':', linewidth=2.0)
draw(y[0], 'r')
draw(y[1], 'g')
draw(y[2], 'b')
plt.savefig('predict%d.png' % i)
plt.close()
###Output
torch.Size([197, 999])
AI summer LSTM cell implementation...
STEP: 0
loss: 0.528113070229433
loss: 0.5170091480952497
loss: 0.9555554711921114
loss: 0.02929956013377312
loss: 0.02668833057236316
loss: 0.02578294447759208
loss: 0.024732702950737373
loss: 0.022838443305267297
loss: 0.019883604722129907
loss: 0.014934575438973216
loss: 0.01294144748634683
loss: 0.01125866273027107
loss: 0.010340490329751066
loss: 0.005540535925217467
loss: 0.003720403467427374
loss: 0.002060538268350741
loss: 0.0014695789705200751
loss: 0.0008976667789292632
loss: 0.0005588928839437537
loss: 0.000553251786021077
test loss: 0.00025937913340416294
STEP: 1
loss: 0.0004188821271592212
loss: 0.0004115594020828428
loss: 0.0004098080070695498
loss: 0.00040585335549792625
loss: 0.00039800889556492565
loss: 0.0003823670322268507
loss: 0.0003565621034820429
loss: 0.0003271524037103022
loss: 0.00031091980677155646
loss: 0.00030442897653582625
loss: 0.00030135321920862106
loss: 0.000297159804392728
loss: 0.00028623982689565544
loss: 0.00026367783932863267
loss: 0.00023431325251627166
loss: 0.000198653027681383
loss: 0.0001815602801224594
loss: 0.00017243991942617054
loss: 0.000167400305207888
loss: 0.00016676000062173038
test loss: 8.630216931841515e-05
STEP: 2
loss: 0.0001655738129420653
loss: 0.00016274375068161108
loss: 0.0001594271200927448
loss: 0.00015566651539603154
loss: 0.00015310920266744678
loss: 0.00015175200303974804
loss: 0.00015021718169219833
loss: 0.00014764850590202232
loss: 0.00014406411552009922
loss: 0.00014020286381351805
loss: 0.00013721598097735975
loss: 0.00013564592536166646
loss: 0.0001141601658816323
loss: 9.632621856127124e-05
loss: 8.540454699156185e-05
loss: 8.00886269558638e-05
loss: 7.866357692138397e-05
loss: 7.846270683516499e-05
loss: 7.803720795279226e-05
loss: 7.787795173710368e-05
test loss: 5.309506777407865e-05
STEP: 3
loss: 7.758916682904886e-05
loss: 7.72278791969809e-05
loss: 7.66369668169787e-05
loss: 7.512375054800206e-05
loss: 6.960300053678313e-05
loss: 5.974397824269719e-05
loss: 5.285812496968241e-05
loss: 5.879130528044276e-05
loss: 4.5344381081473816e-05
loss: 4.428458576933465e-05
loss: 4.289067757161094e-05
loss: 4.234652194868516e-05
loss: 4.2188541758263925e-05
loss: 4.211897040355653e-05
loss: 4.208079302286101e-05
loss: 4.20530790474463e-05
loss: 4.201103741671751e-05
loss: 4.171172915792721e-05
loss: 4.023048477710445e-05
loss: 3.8600785264667686e-05
test loss: 2.0303105849979015e-05
STEP: 4
loss: 3.655464978675086e-05
loss: 3.4447083416544624e-05
loss: 3.2380408661129566e-05
loss: 3.1224058027911596e-05
loss: 2.982218832569802e-05
loss: 2.9339563104859834e-05
loss: 2.8842857399152147e-05
loss: 2.8621022762785623e-05
loss: 2.8250528581550616e-05
loss: 2.7601681645153186e-05
loss: 2.666863185511221e-05
loss: 2.6065789274730817e-05
loss: 2.5220270154457496e-05
loss: 2.4755895857374163e-05
loss: 2.4459466972747967e-05
loss: 2.4289890236020474e-05
loss: 2.417871460234937e-05
loss: 2.405552084448569e-05
loss: 2.3803825009267305e-05
loss: 2.3391986899631617e-05
test loss: 1.0651709515802253e-05
STEP: 5
loss: 2.2722787639648107e-05
loss: 2.195350374047056e-05
loss: 2.0790937997244048e-05
loss: 2.0559415845903276e-05
loss: 1.8719478798203408e-05
loss: 1.775920559743769e-05
loss: 1.6894070432816332e-05
loss: 1.4953226300362585e-05
loss: 1.3812765688017387e-05
loss: 1.2995846153363898e-05
loss: 1.161801037515683e-05
loss: 1.0840213562266476e-05
loss: 1.0552324119695516e-05
loss: 1.0364055093154938e-05
loss: 1.019954593444896e-05
loss: 1.0006838183569344e-05
loss: 9.84015970834103e-06
loss: 9.512266385666233e-06
loss: 9.043037858738418e-06
loss: 8.695631981398065e-06
test loss: 8.771404789806152e-06
STEP: 6
loss: 8.351934709219552e-06
loss: 8.099359845597974e-06
loss: 8.023492137734603e-06
loss: 7.982697316393027e-06
loss: 7.934128231570328e-06
loss: 7.882304943013427e-06
loss: 7.817944812735015e-06
loss: 7.708381470080016e-06
loss: 7.545262800637035e-06
loss: 7.3590306414342304e-06
loss: 7.187149637024493e-06
loss: 7.041607611668718e-06
loss: 6.880853713534134e-06
loss: 6.64904209907784e-06
loss: 6.349631631734573e-06
loss: 6.083386331472478e-06
loss: 5.840510084289589e-06
loss: 5.641808262040041e-06
loss: 5.550992309084652e-06
loss: 5.476518943945725e-06
test loss: 7.558627393520033e-06
STEP: 7
loss: 5.424637104614732e-06
loss: 5.343169328026148e-06
loss: 5.278998892833016e-06
loss: 5.233559719895183e-06
loss: 5.1790714520945995e-06
loss: 5.102813685644669e-06
loss: 5.057816101579782e-06
loss: 5.021956007848406e-06
loss: 5.010760587860601e-06
loss: 4.984458411239595e-06
loss: 4.971447417822076e-06
loss: 4.962694540121956e-06
loss: 4.955367134554158e-06
loss: 4.941791844369365e-06
loss: 4.886708029358563e-06
loss: 4.8298561848093285e-06
loss: 4.729860370084668e-06
loss: 4.603941070965755e-06
loss: 4.510229859050638e-06
loss: 4.451572626333044e-06
test loss: 6.719516757378004e-06
STEP: 8
loss: 4.410182307722153e-06
loss: 4.383787406300294e-06
loss: 4.373314104222076e-06
loss: 4.366633444691338e-06
loss: 4.365811584282103e-06
test loss: 6.687451260034044e-06
STEP: 9
loss: 4.365811584282103e-06
loss: 4.364949992640159e-06
test loss: 6.680560301308882e-06
STEP: 10
loss: 4.364949992640159e-06
loss: 4.3634225594267824e-06
loss: 4.361092433937902e-06
loss: 4.354063689071793e-06
loss: 4.340617338459655e-06
loss: 4.313205800087283e-06
loss: 4.259797513366939e-06
loss: 4.178709907934998e-06
loss: 4.082023948515665e-06
loss: 4.296541790315014e-06
loss: 3.898617119222726e-06
loss: 3.863048014958528e-06
loss: 3.828496011390211e-06
loss: 3.7981087020646685e-06
loss: 3.7854740901244747e-06
loss: 3.774291574154713e-06
loss: 3.7639566498014944e-06
loss: 3.754666197950809e-06
loss: 3.744645470054434e-06
loss: 3.7340472375547916e-06
test loss: 5.650664426590775e-06
STEP: 11
loss: 3.7271341540426344e-06
loss: 3.7242201571422833e-06
loss: 3.721707122961544e-06
loss: 3.7209966604878027e-06
test loss: 5.674726447039287e-06
STEP: 12
loss: 3.7209966604878027e-06
test loss: 5.674726447039287e-06
STEP: 13
loss: 3.7209966604878027e-06
test loss: 5.674726447039287e-06
STEP: 14
loss: 3.7209966604878027e-06
test loss: 5.674726447039287e-06
STEP: 15
loss: 3.7209966604878027e-06
test loss: 5.674726447039287e-06
STEP: 16
loss: 3.7209966604878027e-06
test loss: 5.674726447039287e-06
STEP: 17
loss: 3.7209966604878027e-06
test loss: 5.674726447039287e-06
STEP: 18
loss: 3.7209966604878027e-06
test loss: 5.674726447039287e-06
STEP: 19
loss: 3.7209966604878027e-06
test loss: 5.674726447039287e-06
###Markdown
Zip files to download
###Code
!zip archive.zip predict*.png
###Output
adding: predict0.png (deflated 7%)
adding: predict10.png (deflated 7%)
adding: predict11.png (deflated 7%)
adding: predict12.png (deflated 7%)
adding: predict13.png (deflated 7%)
adding: predict14.png (deflated 7%)
adding: predict15.png (deflated 7%)
adding: predict16.png (deflated 7%)
adding: predict17.png (deflated 7%)
adding: predict18.png (deflated 7%)
adding: predict19.png (deflated 7%)
adding: predict1.png (deflated 7%)
adding: predict2.png (deflated 7%)
adding: predict3.png (deflated 7%)
adding: predict4.png (deflated 7%)
adding: predict5.png (deflated 7%)
adding: predict6.png (deflated 7%)
adding: predict7.png (deflated 7%)
adding: predict8.png (deflated 7%)
adding: predict9.png (deflated 7%)
|
.ipynb_checkpoints/UseCase_Safie-checkpoint.ipynb | ###Markdown
Project: Future Food Consumer Needs
###Code
#import libraries
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import itertools
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
# import data using pandas
df = pd.read_csv('Food_Preference.csv')
df1 = pd.read_csv('food_coded.csv')
#print(df)
#print(df1)
for (i,y) in enumerate(df1.columns):
print (i," : ",y)
# To visualize the columns to be used
df_new = df1.iloc[:,[17,18,14,36]]
df_new.head(5)
data = pd.read_csv('food_coded.csv', header=None)
X = data.iloc[1:,[17,18,14]].values
y = data.iloc[1:, [36]].values
print (X.shape)
print (y.shape)
# Splitting the data into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=10)
# imputation
from sklearn.impute import SimpleImputer
imputer = SimpleImputer()
"""
OR
from sklearn.preprocessing import Imputer
imputer = Imputer()
"""
X_train = imputer.fit_transform(X_train)
X_test = imputer.transform(X_test)
# scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Use any two classifiers
from sklearn.svm import SVC, LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
# instantiate under a different name so we do not shadow the SVC class itself
svc_model = SVC()
svc_model.fit(X_train, y_train)
y2_SVC_model = svc_model.predict(X_test)
print("SVC Accuracy :", accuracy_score(y_test, y2_SVC_model))
print(y2_SVC_model)
###Output
SVC Accuracy : 0.36
['2' '2' '2' '2' '2' '2' '3' '2' '2' '2' '3' '2' '2' '2' '2' '2' '2' '2'
'2' '2' '3' '2' '2' '2' '2']
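###Markdown
A possible follow-up (sketch): `classification_report` and `confusion_matrix` were already imported at the top of the notebook, so per-class performance can be inspected as well.
###Code
# Per-class precision/recall and the confusion matrix for the SVC predictions
print(confusion_matrix(y_test, y2_SVC_model))
print(classification_report(y_test, y2_SVC_model))
###Output
_____no_output_____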
|
site/ko/addons/tutorials/layers_weightnormalization.ipynb | ###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
TensorFlow Addons Layers: WeightNormalization  View on TensorFlow.org  Run in Google Colab  View source on GitHub  Download notebook  Overview: This notebook demonstrates how to use the weight normalization layer and how it can improve convergence. Weight normalization reparameterizes each weight vector as $\mathbf{w} = \frac{g}{\lVert\mathbf{v}\rVert}\,\mathbf{v}$, learning the scalar norm $g$ and the direction $\mathbf{v}$ separately. WeightNormalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks: Tim Salimans, Diederik P. Kingma (2016) > By reparameterizing the weights in this way we improve the conditioning of the optimization problem and speed up convergence of stochastic gradient descent. Our reparameterization is inspired by batch normalization but does not introduce any dependencies between the examples in a minibatch. This means that our method can also be applied successfully to recurrent models such as LSTMs and to noise-sensitive applications such as deep reinforcement learning or generative models, for which batch normalization is less well suited. Although our method is much simpler, it still provides much of the speed-up of full batch normalization. In addition, the computational overhead of our method is lower, permitting more optimization steps to be taken in the same amount of time. > https://arxiv.org/abs/1602.07868 Setup
###Code
!pip install -U tensorflow-addons
import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
from matplotlib import pyplot as plt
# Hyper Parameters
batch_size = 32
epochs = 10
num_classes=10
###Output
_____no_output_____
###Markdown
Build the models
###Code
# Standard ConvNet
reg_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(6, 5, activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(16, 5, activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(120, activation='relu'),
tf.keras.layers.Dense(84, activation='relu'),
tf.keras.layers.Dense(num_classes, activation='softmax'),
])
# WeightNorm ConvNet
wn_model = tf.keras.Sequential([
tfa.layers.WeightNormalization(tf.keras.layers.Conv2D(6, 5, activation='relu')),
tf.keras.layers.MaxPooling2D(2, 2),
tfa.layers.WeightNormalization(tf.keras.layers.Conv2D(16, 5, activation='relu')),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tfa.layers.WeightNormalization(tf.keras.layers.Dense(120, activation='relu')),
tfa.layers.WeightNormalization(tf.keras.layers.Dense(84, activation='relu')),
tfa.layers.WeightNormalization(tf.keras.layers.Dense(num_classes, activation='softmax')),
])
###Output
_____no_output_____
###Markdown
Load the data
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
# Convert class vectors to binary class matrices.
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
###Output
_____no_output_____
###Markdown
Train the models
###Code
reg_model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
reg_history = reg_model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
wn_model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
wn_history = wn_model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
reg_accuracy = reg_history.history['accuracy']
wn_accuracy = wn_history.history['accuracy']
plt.plot(np.linspace(0, epochs, epochs), reg_accuracy,
color='red', label='Regular ConvNet')
plt.plot(np.linspace(0, epochs, epochs), wn_accuracy,
color='blue', label='WeightNorm ConvNet')
plt.title('WeightNorm Accuracy Comparison')
plt.legend()
plt.grid(True)
plt.show()
###Output
_____no_output_____ |
Engineering/Notebooks/Predict Cible.ipynb | ###Markdown
Google Cloud adaptation
###Code
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
data = uploaded[fn]
with open(fn, 'wb') as file:
file.write(data)
import pandas as pd
import numpy as np
import pylab as plt
from math import sqrt
from sklearn.metrics import log_loss, accuracy_score, mean_squared_error
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
from sklearn.linear_model import LinearRegression,LogisticRegression, SGDClassifier # Linear regression
from sklearn.ensemble import RandomForestRegressor ,RandomForestClassifier # Random Forest
from sklearn.ensemble import ExtraTreesClassifier # Extra Trees
from sklearn.ensemble import GradientBoostingClassifier # Gradient Boosting
from sklearn.tree import DecisionTreeClassifier
data_train = pd.read_csv('train.csv',sep=',')
data_test = pd.read_csv('test.csv',sep=',')
data_train.shape
data_test.shape
data_train.head(10)
data_test.head(1)
data_train.v1.nunique()
data_test.v1.nunique()
#data_train.v1.value_counts()
%matplotlib inline
plt.hist(data_train.v1,bins=data_train.v1.nunique())
plt.title("Gaussian Histogram")
plt.xlabel("Value V1")
plt.ylabel("Frequency")
plt.legend(loc=2)
# create X and Y matrix from the initial data
def getXYMatrix(X,resultCol):
features = [col for col in X.columns if col != resultCol]
return X[features], X[resultCol]
# get X and Y
X, y = getXYMatrix(data_train,'cible')
print (X.shape, y.shape)
X.head(2)
y.head(2)
# split the dataset into two datasets, one for training with 0.7 of data and one for test with 0.3 data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=0.3)
#model = LinearRegression()
model = RandomForestRegressor(n_estimators=160,n_jobs=-1)
# 1) train the model
model.fit(X_train,y_train)
# 2) predict on the test dataset
y_pred = model.predict(X_test)
# show the prediction result
print(y_pred)
# work on a copy so we do not modify X_test in place
result = X_test.copy()
result['pred'] = y_pred
result.shape
result.tail(10)
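# Optional sanity check (sketch): quantify the fit with RMSE,
# using sqrt and mean_squared_error that are already imported above.
rmse = sqrt(mean_squared_error(y_test, y_pred))
print('RMSE:', rmse)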
###Output
_____no_output_____ |
Web_Scraping_Workshop_BN.ipynb | ###Markdown
**Web Scraping W/ Python** SPRING 2021 ANALYTICS WORKSHOP SERIES - TOPIC 2 UTDMSBA - BALC Brian Nguyen --- Agenda: 1. HTML Basics 2. Chrome DevTools 3. Compare Web-Scraping Packages 4. BeautifulSoup * Simple Exercise 5. Selenium * A-tad-harder Exercise 6. Advanced Scraping and Crawling Demo 7. Ethical & Efficiency Discussion Goals: 1. Inspect an HTML page & identify what to scrape. 2. Scrape with requests and BeautifulSoup. 3. Drive web crawling with Selenium. 4. How to be a responsible scraper. Why Scrape? 1. Build Datasets: * Texts * Numbers * Images 2. For Analysis 📈 * Sales * Marketing 3. For Machine Learning 🤖 4. End-to-end testing. 5. Etc. --- I. HTML Basics 1. Overview of HTML: 1. Hypertext Markup Language (HTML) 2. Standard markup language for documents. 3. Instructs web browsers how to display content. * Provides structure. * Cascading Style Sheets (CSS) = Style. * JavaScript (or any script) = Interactive. 4. Tags are the Elements. * Paired * Start: `<tagname>` * End: `</tagname>` 2. Common HTML Tags: 1. The `<!DOCTYPE html>` declaration defines this document to be HTML5. 2. The `<html>` element is the root element of an HTML page. 3. The `<div>` tag defines a division or a section in an HTML document. It's usually a container for other elements. 4. The `<head>` element contains meta information about the document. 5. The `<title>` element specifies a title for the document. 6. The `<body>` element contains the visible page content. 7. The `<h1>` element defines a large heading. 8. The `<p>` element defines a paragraph. 9. The `<a>` element defines a hyperlink (look for `href`). 10. And many more! 3. Make a Simple HTML Page Notes: 1. Note Repetitions 2. Note Styles 3. Note Buttons Tasks: 1. Change Size for Heading 2, 3 2. Fix Link 2, 3 3. Fix Typo in Title ("ZZZZZZ")
###Code
from IPython.core.display import display, HTML
display(HTML("""<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css" ">
<title>My Courses</title>
</head>
<body>
<h1>Let's Get Scrapin' zzzzzzzzzzzzz !</h1>
<div class="card" id="card-python-for-beginners">
<div class="card-header">
BeautifulSoup
</div>
<div class="card-body">
<h3 class="card-title">Web-Scraping for beginners</h5>
<p class="card-text">If you are new to web-scraping, you should learn this!</p>
<p>Ordered list:</p>
<ol>
<li>Data collection</li>
<li>Exploratory data analysis</li>
<li>Data analysis</li>
<li>Policy recommendations</li>
</ol>
<hr>
<a href="https://www.crummy.com/software/BeautifulSoup/bs4/doc/" class="btn btn-primary" >Start for 20$</a>
</div>
</div>
<div class="card" id="card-python-web-development">
<div class="card-header">
Selenium
</div>
<div class="card-body">
<h4 class="card-title">Web Automation with Selenium</h5>
<p class="card-text">If you feel enough confident with XPath, you are ready to learn the most versatile web-scraping tool!</p>
<p style="color:red">
Add colour to your paragraphs.
</p>
<a href="#" class="btn btn-primary">Start for 50$</a>
</div>
</div>
<div class="card" id="card-python-machine-learning">
<div class="card-header">
Scrapy
</div>
<div class="card-body">
<h5 class="card-title">Master Web-Scraping with Scrapy</h5>
<p class="card-text">Become a Web-Scraping master!</p>
<p>
That's a text paragraph. You can also <b>bold</b>, <mark>mark</mark>, <ins>underline</ins>, <del>strikethrough</del> and <i>emphasize</i> words.
You can also add links - here's one to <a href="https://en.wikipedia.org/wiki/Main_Page">Wikipedia</a>.
</p>
<a href="#" class="btn btn-primary">Start for 100$</a>
</div>
</div>
</body>
</html>
"""))
###Output
_____no_output_____
###Markdown
4. Chrome DevTools Overview: 1. Built in to Chrome. 2. Super useful tool: * View Source * Inspect Elements * Edit Webpage 3. Equivalents are available for other browsers. Quick Exercise: What is your favourite Website? a. IMDB b. Associated Press c. Reddit d. LinkedIn Tasks: 1. Find Logo 2. Find Text 3. Find a Button Shortcuts: Command + Option + C (Mac) or Control + Shift + C (Windows) or F12
###Code
# Imports
import requests
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
%matplotlib inline
# AP's homepage
ap_url = 'https://apnews.com/'
# Use requests to retrieve data from a given URL
ap_response = requests.get(ap_url)
# Parse the whole HTML page using BeautifulSoup
# Second Argument is the parser method. Here let's try "html.parser" first
ap_soup = BeautifulSoup(ap_response.text, 'html.parser')
# Take a look at the mess that is ap_soup right now
print(ap_soup.prettify())
# Title of the parsed page
ap_soup.title
# We can also get it without the HTML tags
ap_soup.title.string
###Output
_____no_output_____
###Markdown
"LXML" Parser Method + For LoopFind() VS Find_all()We will use the `.find_all()` method to search the HTML tree for particular tags and get a `list` with all the relevant objects.
###Code
# Parse the whole page again, but this time with the faster 'lxml' parser
ap_soup = BeautifulSoup(ap_response.text, 'lxml')
# Find top story (first one only)
top = ap_soup.find('h1')
# Show what we got
top.text
# Find all top stories
top_all = ap_soup.find_all('h1')
# Show what we got
top_all
for story in top_all:
title = story.text
print(title)
###Output
_____no_output_____
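###Markdown
A small extension (sketch): since pandas was already imported above, the scraped headlines can be collected into a DataFrame for later analysis.
###Code
# Collect the <h1> texts found above into a DataFrame
headlines_df = pd.DataFrame({'headline': [story.text.strip() for story in top_all]})
headlines_df.head()
###Output
_____no_output_____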
###Markdown
III. Web-Scraping W/ Selenium: Overview: 1. The most versatile of all web-scrapers. 2. In the right hands, it can become a powerful web automator (driver). 3. The only one of the three that handles JavaScript-rendered pages easily. 4. Can be very efficient when combined w/ Scrapy. 5. IMO, best combo right now: Selenium + XPath. "A-tad-harder" Exercise: Colab is not the optimal environment for Selenium and the Chromium driver, so let's do a DEMO.
###Code
# Install Selenium
%pip install selenium
# Part of Scrapy, Parsel lets you extract data from XML/HTML documents using XPath or CSS selector
%pip install parsel
# Install Driver
!apt update
!apt install chromium-chromedriver
# Initiate and set options
from parsel import Selector
from selenium import webdriver
options = webdriver.ChromeOptions()
options.add_argument('--headless')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome('chromedriver',options=options)
# URL to use in Selenium
driver.get('https://www.boxofficemojo.com/year/?ref_=bo_nb_di_secondarytab')
# assigning the source code for the webpage to variable sel
sel = Selector(text=driver.page_source)
# xpath to extract the text from the class containing the movie title and year
name = sel.xpath('//*[starts-with(@class, "a-link-normal")]/text()').getall()
print(name)
# xpath to extract the text from the class containing the movie Total Gross
gross = sel.xpath('//*[starts-with(@class, "a-text-right mojo-field-type-money")]/text()').getall()
print(gross)
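# Optional cleanup (good practice): close the browser session once scraping is done
# driver.quit()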
###Output
['$244,468,683', '$2,222,442', '$2,085,651,481', '$4,593,945', '$11,320,874,529', '$12,426,865', '$11,889,341,443', '$11,973,153', '$11,072,815,067', '$12,996,261', '$11,377,066,920', '$13,290,966', '$11,125,835,068', '$13,151,105', '$10,359,575,749', '$12,202,091', '$10,922,051,943', '$13,222,823', '$10,822,806,722', '$13,411,160', '$10,173,621,826', '$13,936,468', '$10,566,830,616', '$16,231,690', '$10,590,200,693', '$16,393,499', '$9,629,131,592', '$13,281,560', '$9,657,106,911', '$12,460,783', '$9,208,611,128', '$12,343,982', '$8,837,713,363', '$13,073,540', '$9,365,047,036', '$13,378,638', '$9,210,978,005', '$13,809,562', '$9,165,532,414', '$16,079,881', '$8,110,859,106', '$19,638,884', '$7,511,547,085', '$17,110,585', '$7,377,967,100', '$16,468,676', '$6,725,527,166', '$20,136,308', '$6,156,263,535', '$19,858,914', '$5,647,751,531', '$18,456,704', '$5,199,428,915', '$17,867,453', '$5,101,025,737', '$19,695,080', '$4,860,902,708', '$18,205,628', '$4,556,151,332', '$18,445,956', '$4,366,922,856', '$17,260,564', '$4,362,843,546', '$18,486,625', '$4,111,404,298', '$17,495,337', '$3,548,516,187', '$14,847,348', '$3,398,925,607', '$15,039,493', '$3,102,793,651', '$15,436,784', '$3,041,480,248', '$15,923,980', '$3,104,296,629', '$18,368,619', '$2,748,432,836', '$18,445,857', '$3,008,355,492', '$22,790,571', '$918,310,755', '$16,398,406', '$1,657,166,297', '$24,370,092', '$1,244,961,893', '$31,124,047', '$840,508,376', '$64,654,490', '$443,497,478', '$49,277,497']
|
LowLevelCallable Notebook.ipynb | ###Markdown
Since [scipy](https://scipy.org/) 0.19, scientific Python programmers have had access to [`scipy.LowLevelCallable`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.LowLevelCallable.html), a facility that allows you to use native compiled functions (be they written in C, Cython, or even [numba](https://ilovesymposia.com/2017/03/12/scipys-new-lowlevelcallable-is-a-game-changer/)) to speed things like numeric integration and image filtering up.`LowLevelCallable` supports loading functions from a Cython module natively, but this only works if there is a Cython module to load the function from. If you're working in a Jupyter notebook, you might want to use the `%%cython` magic, in which case there *is* no module in the first place.However, with some trickery, we can persuade `%%cython` magic code to hand over its functions to scipy. Let's say we want to compute the Gaussian integral numerically.$$f(a) = \int\limits_{-\infty}^{+\infty}\,e^{-ax^2}\,\mathrm{d}x = \sqrt{\frac{\pi}{a}}$$This is of course a silly example, but it's easy to verify the results are correct seeing as we know the analytical solution. A simple approach to integrating this as a pure Python function might look like this:
###Code
import numpy as np
from scipy.integrate import quad
def integrand(x, a):
return np.exp(-a*x**2)
@np.vectorize
def gauss_py(a):
y, abserr = quad(integrand, -np.inf, np.inf, (a,))
return y
###Output
_____no_output_____
###Markdown
Here I'm creating a simple Python function representing the integrand, and integrating it up with scipy's [`quad`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.quad.htmlscipy.integrate.quad) function. This is wrapped in a vectorized function to make computing the result for different values of `a` easier.
###Code
%%time
a = np.linspace(0.1, 10, 10000)
py_result = gauss_py(a)
%matplotlib inline
from matplotlib import pyplot as plt
plt.plot(a, np.sqrt(np.pi/a), label='analytical')
plt.plot(a, py_result, label='numerical (py)')
plt.title(f'{len(a)} points')
plt.xlabel('a')
plt.ylabel('Gaussian integral')
plt.legend()
###Output
_____no_output_____
###Markdown
This works very nicely, but several seconds to compute a mere 104 values of this simple integral doesn't bode well for more complex computations (suffice it to say I had a reason to learn how to use Cython for this purpose).There are three ways to construct a ``scipy.LowLevelCallable``: * from a function in a cython module * from a ctypes or cffi function pointer * from a [PyCapsule](https://docs.python.org/3/c-api/capsule.html) – a Python C API facility used to safely pass pointers through Python code The first option is out in a notebook. The second *might* be possible, but using a PyCapsule sounds like a safe bet. So let's do that! As Cython provides us with easy access to the CPython API, we can easily get access to the essential functions `PyCapsule_New` and `PyCapsule_GetPointer`.The main objective is to create an integrand function with the C signature double func(double x, void *user_data)to pass to `quad()`, with the ``user_data`` pointer containing the parameter `a`. With `quad()`, there's a simpler way to pass in arguments, but for sake of demonstration I'll use the method that works with `dblquad()` and `nquad()` as well.
###Code
%load_ext cython
%%cython
from cpython.pycapsule cimport (PyCapsule_New,
PyCapsule_GetPointer)
from cpython.mem cimport PyMem_Malloc, PyMem_Free
from libc.math cimport exp
import scipy
cdef double c_integrand(double x, void* user_data):
"""The integrand, written in Cython"""
# Extract a.
# Cython uses array access syntax for pointer dereferencing!
cdef double a = (<double*>user_data)[0]
return exp(-a*x**2)
#
# Now comes some classic C-style housekeeping
#
cdef object pack_a(double a):
"""Wrap 'a' in a PyCapsule for transport."""
# Allocate memory where 'a' will be saved for the time being
cdef double* a_ptr = <double*> PyMem_Malloc(sizeof(double))
a_ptr[0] = a
return PyCapsule_New(<void*>a_ptr, NULL, free_a)
cdef void free_a(capsule):
"""Free the memory our value is using up."""
PyMem_Free(PyCapsule_GetPointer(capsule, NULL))
def get_low_level_callable(double a):
# scipy.LowLevelCallable expects the function signature to
# appear as the "name" of the capsule
func_capsule = PyCapsule_New(<void*>c_integrand,
"double (double, void *)",
NULL)
data_capsule = pack_a(a)
return scipy.LowLevelCallable(func_capsule, data_capsule)
###Output
_____no_output_____
###Markdown
At this point, we should be able to use our `LowLevelCallable` from Python code!
###Code
@np.vectorize
def gauss_c(a):
c_integrand = get_low_level_callable(a)
y, abserr = quad(c_integrand, -np.inf, np.inf)
return y
%%time
a = np.linspace(0.1, 10, 10000)
c_result = gauss_c(a)
###Output
CPU times: user 154 ms, sys: 4.69 ms, total: 159 ms
Wall time: 159 ms
###Markdown
As you can see, even for such a simple function, using Cython like this results in a speed-up by more than an order of magnitude, and the results are, of course, the same:
###Code
%matplotlib inline
from matplotlib import pyplot as plt
plt.plot(a, np.sqrt(np.pi/a), label='analytical')
plt.plot(a, c_result, label='numerical (Cython)')
plt.title(f'{len(a)} points')
plt.xlabel('a')
plt.ylabel('Gaussian integral')
plt.legend()
###Output
_____no_output_____ |
exercises/CW_0_2.ipynb | ###Markdown
Deutsch–Jozsa algorithm
###Code
import numpy as np
import pandas as pd
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister, execute
from qiskit.tools import visualization
from qiskit import BasicAer as Aer
import matplotlib.pyplot as plt
%matplotlib auto
from IPython.display import Latex
from IPython.display import Math
import ancillary_functions as anf
###Output
Using matplotlib backend: Qt5Agg
###Markdown
Oracles* I have hidden oracles in ancillary_functions. * There are four of them and you can call them using 'anf.oracle(circuit,index)', where index is from 0 to 3. (If you won't pass index, it will choose one randomly.)* Function returns circuit with implemented oracle. Tasks a) Implement a circuit for two-qubit Deutsch-Jozsa algorithm with various oracles. b) Write a function which, depending on experimental outcomes, decides if the function is constant or balanced.* Hint: You may wish to use two-qubit qreg, but single-qubit creg.
###Code
# specify variables
qrs = 2                        # suggested value per the hint: two-qubit quantum register
crs = 1                        # suggested value per the hint: single-bit classical register
circuit_name = 'deutsch-jozsa'
nos = 1024                     # number of shots
backend_name = 'qasm_simulator'
backend = Aer.get_backend(backend_name)
circuit, qreg, creg = create_circuit_draft(qrs=qrs, crs=crs, circuit_name=circuit_name)
# add things to circuit
# ....
# perform experiments
job = execute(circuit, backend=backend, shots=nos)
results = job.result();
counts = results.get_counts()
print(counts)
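# A possible answer to task b) (sketch): in Deutsch-Jozsa, measuring only the
# all-zero bitstring on the query register indicates a constant oracle,
# anything else indicates a balanced one.
def is_constant(counts):
    total = sum(counts.values())
    zeros = sum(v for k, v in counts.items() if set(k.replace(' ', '')) == {'0'})
    return zeros == total
print('constant' if is_constant(counts) else 'balanced')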
###Output
_____no_output_____ |
Bootstrap.ipynb | ###Markdown
Run this if you need to download the Kaggle learning materials. You should only have to run this once.
###Code
!wget https://github.com/Kaggle/learntools/archive/master.zip
!apt-get update && apt-get install unzip
!unzip master.zip
!mv learntools-master kaggle-minicourses
###Output
_____no_output_____
###Markdown
Run this to install learntools. You'll have to do this every time you start up.
###Code
!pip install -e ./kaggle-minicourses/
###Output
_____no_output_____ |
1 - Neural Networks and Deep Learning/Building_your_Deep_Neural_Network_Step_by_Step.ipynb | ###Markdown
Building your Deep Neural Network: Step by StepWelcome to your week 4 assignment (part 1 of 2)! Previously you trained a 2-layer Neural Network with a single hidden layer. This week, you will build a deep neural network with as many layers as you want!- In this notebook, you'll implement all the functions required to build a deep neural network.- For the next assignment, you'll use these functions to build a deep neural network for image classification.**By the end of this assignment, you'll be able to:**- Use non-linear units like ReLU to improve your model- Build a deeper neural network (with more than 1 hidden layer)- Implement an easy-to-use neural network class**Notation**:- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer. - Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example.- Lowerscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).Let's get started! Table of Contents- [1 - Packages](1)- [2 - Outline](2)- [3 - Initialization](3) - [3.1 - 2-layer Neural Network](3-1) - [Exercise 1 - initialize_parameters](ex-1) - [3.2 - L-layer Neural Network](3-2) - [Exercise 2 - initialize_parameters_deep](ex-2)- [4 - Forward Propagation Module](4) - [4.1 - Linear Forward](4-1) - [Exercise 3 - linear_forward](ex-3) - [4.2 - Linear-Activation Forward](4-2) - [Exercise 4 - linear_activation_forward](ex-4) - [4.3 - L-Layer Model](4-3) - [Exercise 5 - L_model_forward](ex-5)- [5 - Cost Function](5) - [Exercise 6 - compute_cost](ex-6)- [6 - Backward Propagation Module](6) - [6.1 - Linear Backward](6-1) - [Exercise 7 - linear_backward](ex-7) - [6.2 - Linear-Activation Backward](6-2) - [Exercise 8 - linear_activation_backward](ex-8) - [6.3 - L-Model Backward](6-3) - [Exercise 9 - L_model_backward](ex-9) - [6.4 - Update Parameters](6-4) - [Exercise 10 - update_parameters](ex-10) 1 - PackagesFirst, import all the packages you'll need during this assignment. - [numpy](www.numpy.org) is the main package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- dnn_utils provides some necessary functions for this notebook.- testCases provides some test cases to assess the correctness of your functions- np.random.seed(1) is used to keep all the random function calls consistent. It helps grade your work. Please don't change the seed!
###Code
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases import *
from dnn_utils import sigmoid, sigmoid_backward, relu, relu_backward
from public_tests import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
###Output
_____no_output_____
###Markdown
2 - OutlineTo build your neural network, you'll be implementing several "helper functions." These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function will have detailed instructions to walk you through the necessary steps. Here's an outline of the steps in this assignment:- Initialize the parameters for a two-layer network and for an $L$-layer neural network- Implement the forward propagation module (shown in purple in the figure below) - Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$). - The ACTIVATION function is provided for you (relu/sigmoid) - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function. - Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.- Compute the loss- Implement the backward propagation module (denoted in red in the figure below) - Complete the LINEAR part of a layer's backward propagation step - The gradient of the ACTIVATE function is provided for you(relu_backward/sigmoid_backward) - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function- Finally, update the parametersFigure 1**Note**:For every forward function, there is a corresponding backward function. This is why at every step of your forward module you will be storing some values in a cache. These cached values are useful for computing gradients. In the backpropagation module, you can then use the cache to calculate the gradients. Don't worry, this assignment will show you exactly how to carry out each of these steps! 3 - InitializationYou will write two helper functions to initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one generalizes this initialization process to $L$ layers. 3.1 - 2-layer Neural Network Exercise 1 - initialize_parametersCreate and initialize the parameters of the 2-layer neural network.**Instructions**:- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*. - Use this random initialization for the weight matrices: `np.random.randn(shape)*0.01` with the correct shape- Use zero initialization for the biases: `np.zeros(shape)`
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
#(≈ 4 lines of code)
# W1 = ...
# b1 = ...
# W2 = ...
# b2 = ...
# YOUR CODE STARTS HERE
W1 = np.random.randn(n_h,n_x)*0.01
    b1 = np.zeros((n_h,1))
W2 = np.random.randn(n_y,n_h)*0.01
b2 = np.zeros((n_y,1))
# YOUR CODE ENDS HERE
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
initialize_parameters_test(initialize_parameters)
###Output
W1 = [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]]
b1 = [[0.]
[0.]]
W2 = [[ 0.01744812 -0.00761207]]
b2 = [[0.]]
All tests passed.
###Markdown
***Expected output***```W1 = [[ 0.01624345 -0.00611756 -0.00528172] [-0.01072969 0.00865408 -0.02301539]]b1 = [[0.] [0.]]W2 = [[ 0.01744812 -0.00761207]]b2 = [[0.]]``` 3.2 - L-layer Neural NetworkThe initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep` function, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. For example, if the size of your input $X$ is $(12288, 209)$ (with $m=209$ examples) then: Shape of W Shape of b Activation Shape of Activation Layer 1 $(n^{[1]},12288)$ $(n^{[1]},1)$ $Z^{[1]} = W^{[1]} X + b^{[1]} $ $(n^{[1]},209)$ Layer 2 $(n^{[2]}, n^{[1]})$ $(n^{[2]},1)$ $Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ $(n^{[2]}, 209)$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ Layer L-1 $(n^{[L-1]}, n^{[L-2]})$ $(n^{[L-1]}, 1)$ $Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ $(n^{[L-1]}, 209)$ Layer L $(n^{[L]}, n^{[L-1]})$ $(n^{[L]}, 1)$ $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$ $(n^{[L]}, 209)$ Remember that when you compute $W X + b$ in python, it carries out broadcasting. For example, if: $$ W = \begin{bmatrix} w_{00} & w_{01} & w_{02} \\ w_{10} & w_{11} & w_{12} \\ w_{20} & w_{21} & w_{22} \end{bmatrix}\;\;\; X = \begin{bmatrix} x_{00} & x_{01} & x_{02} \\ x_{10} & x_{11} & x_{12} \\ x_{20} & x_{21} & x_{22} \end{bmatrix} \;\;\; b =\begin{bmatrix} b_0 \\ b_1 \\ b_2\end{bmatrix}\tag{2}$$Then $WX + b$ will be:$$ WX + b = \begin{bmatrix} (w_{00}x_{00} + w_{01}x_{10} + w_{02}x_{20}) + b_0 & (w_{00}x_{01} + w_{01}x_{11} + w_{02}x_{21}) + b_0 & \cdots \\ (w_{10}x_{00} + w_{11}x_{10} + w_{12}x_{20}) + b_1 & (w_{10}x_{01} + w_{11}x_{11} + w_{12}x_{21}) + b_1 & \cdots \\ (w_{20}x_{00} + w_{21}x_{10} + w_{22}x_{20}) + b_2 & (w_{20}x_{01} + w_{21}x_{11} + w_{22}x_{21}) + b_2 & \cdots\end{bmatrix}\tag{3} $$ Exercise 2 - initialize_parameters_deepImplement initialization for an L-layer Neural Network. **Instructions**:- The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.- Use zeros initialization for the biases. Use `np.zeros(shape)`.- You'll store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for last week's Planar Data classification model would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers! - Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).```python if L == 1: parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01 parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))```
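As a quick aside (not part of the graded exercise), the broadcasting behaviour described above is easy to verify with a tiny NumPy sketch; the shapes below are illustrative choices only:
```python
import numpy as np

# Illustrative broadcasting check (example shapes, not assignment code)
W = np.random.randn(4, 3)   # (n_h, n_x)
X = np.random.randn(3, 5)   # (n_x, m): 5 example columns
b = np.zeros((4, 1))        # (n_h, 1) bias column

Z = np.dot(W, X) + b        # b is broadcast across all 5 columns
print(Z.shape)              # -> (4, 5)
```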
###Code
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
#(≈ 2 lines of code)
# parameters['W' + str(l)] = ...
# parameters['b' + str(l)] = ...
# YOUR CODE STARTS HERE
parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l],1))
# YOUR CODE ENDS HERE
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
initialize_parameters_deep_test(initialize_parameters_deep)
###Output
W1 = [[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]
b1 = [[0.]
[0.]
[0.]
[0.]]
W2 = [[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]
b2 = [[0.]
[0.]
[0.]]
All tests passed.
###Markdown
***Expected output***```W1 = [[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388] [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218] [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034] [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]b1 = [[0.] [0.] [0.] [0.]]W2 = [[-0.01185047 -0.0020565 0.01486148 0.00236716] [-0.01023785 -0.00712993 0.00625245 -0.00160513] [-0.00768836 -0.00230031 0.00745056 0.01976111]]b2 = [[0.] [0.] [0.]]``` 4 - Forward Propagation Module 4.1 - Linear Forward Now that you have initialized your parameters, you can do the forward propagation module. Start by implementing some basic functions that you can use again later when implementing the model. Now, you'll complete three functions in this order:- LINEAR- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid. - [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)The linear forward module (vectorized over all the examples) computes the following equations:$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$where $A^{[0]} = X$. Exercise 3 - linear_forward Build the linear part of forward propagation.**Reminder**:The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.
###Code
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python tuple containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
#(≈ 1 line of code)
# Z = ...
# YOUR CODE STARTS HERE
Z = np.dot(W,A) + b
# YOUR CODE ENDS HERE
cache = (A, W, b)
return Z, cache
t_A, t_W, t_b = linear_forward_test_case()
t_Z, t_linear_cache = linear_forward(t_A, t_W, t_b)
print("Z = " + str(t_Z))
linear_forward_test(linear_forward)
###Output
Z = [[ 3.26295337 -1.23429987]]
All tests passed.
###Markdown
***Expected output***```Z = [[ 3.26295337 -1.23429987]]``` 4.2 - Linear-Activation ForwardIn this notebook, you will use two activation functions:- **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. You've been provided with the `sigmoid` function which returns **two** items: the activation value "`a`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call: ``` pythonA, activation_cache = sigmoid(Z)```- **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. You've been provided with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what you'll feed in to the corresponding backward function). To use it you could just call:``` pythonA, activation_cache = relu(Z)``` For added convenience, you're going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you'll implement a function that does the LINEAR forward step, followed by an ACTIVATION forward step. Exercise 4 - linear_activation_forwardImplement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use `linear_forward()` and the correct activation function.
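The `sigmoid` and `relu` helpers themselves live in `dnn_utils` and are not shown in this notebook. Purely for illustration, a minimal sketch of such helpers (returning the activation together with a cache holding `Z`) might look like the following; the provided implementations may differ in detail:
```python
import numpy as np

def sigmoid_sketch(Z):
    # Illustrative stand-in for dnn_utils.sigmoid (not the graded code)
    A = 1 / (1 + np.exp(-Z))
    return A, Z   # second value plays the role of "activation_cache"

def relu_sketch(Z):
    # Illustrative stand-in for dnn_utils.relu
    A = np.maximum(0, Z)
    return A, Z
```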
###Code
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python tuple containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
#(≈ 2 lines of code)
# Z, linear_cache = ...
# A, activation_cache = ...
# YOUR CODE STARTS HERE
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
# YOUR CODE ENDS HERE
elif activation == "relu":
#(≈ 2 lines of code)
# Z, linear_cache = ...
# A, activation_cache = ...
# YOUR CODE STARTS HERE
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
# YOUR CODE ENDS HERE
cache = (linear_cache, activation_cache)
return A, cache
t_A_prev, t_W, t_b = linear_activation_forward_test_case()
t_A, t_linear_activation_cache = linear_activation_forward(t_A_prev, t_W, t_b, activation = "sigmoid")
print("With sigmoid: A = " + str(t_A))
t_A, t_linear_activation_cache = linear_activation_forward(t_A_prev, t_W, t_b, activation = "relu")
print("With ReLU: A = " + str(t_A))
linear_activation_forward_test(linear_activation_forward)
###Output
With sigmoid: A = [[0.96890023 0.11013289]]
With ReLU: A = [[3.43896131 0. ]]
All tests passed.
###Markdown
***Expected output***```With sigmoid: A = [[0.96890023 0.11013289]]With ReLU: A = [[3.43896131 0. ]]``` **Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers. 4.3 - L-Layer Model For even *more* convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID. Figure 2 : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model Exercise 5 - L_model_forwardImplement the forward propagation of the above model.**Instructions**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.) **Hints**:- Use the functions you've previously written - Use a for loop to replicate [LINEAR->RELU] (L-1) times- Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`.
###Code
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- activation value from the output (last) layer
caches -- list of caches containing:
every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
# The for loop starts at 1 because layer 0 is the input
for l in range(1, L):
A_prev = A
#(≈ 2 lines of code)
# A, cache = ...
# caches ...
# YOUR CODE STARTS HERE
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], activation = "relu")
caches.append(cache)
# YOUR CODE ENDS HERE
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
#(≈ 2 lines of code)
# AL, cache = ...
# caches ...
# YOUR CODE STARTS HERE
AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], activation = "sigmoid")
caches.append(cache)
# YOUR CODE ENDS HERE
return AL, caches
t_X, t_parameters = L_model_forward_test_case_2hidden()
t_AL, t_caches = L_model_forward(t_X, t_parameters)
print("AL = " + str(t_AL))
L_model_forward_test(L_model_forward)
###Output
AL = [[0.03921668 0.70498921 0.19734387 0.04728177]]
All tests passed.
###Markdown
***Expected output***```AL = [[0.03921668 0.70498921 0.19734387 0.04728177]]``` **Awesome!** You've implemented a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions. 5 - Cost FunctionNow you can implement forward and backward propagation! You need to compute the cost, in order to check whether your model is actually learning. Exercise 6 - compute_costCompute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
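Before implementing it, a tiny worked example of formula (7) can help (illustrative numbers only): with predictions $AL = [0.8, 0.1]$ and labels $Y = [1, 0]$, the cost is $-\frac{1}{2}(\log 0.8 + \log 0.9) \approx 0.164$, which you can confirm with a quick NumPy check:
```python
import numpy as np

# Hand-sized sanity check of the cross-entropy formula (illustrative values)
AL = np.array([[0.8, 0.1]])
Y = np.array([[1, 0]])
m = Y.shape[1]
cost = -np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL)) / m
print(round(float(cost), 4))   # -> 0.1643
```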
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
# (≈ 1 lines of code)
# cost = ...
# YOUR CODE STARTS HERE
cost = -1/m * (np.dot(Y,np.log(AL.T)) + np.dot(1 - Y,np.log(1 - AL).T))
# YOUR CODE ENDS HERE
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
return cost
t_Y, t_AL = compute_cost_test_case()
t_cost = compute_cost(t_AL, t_Y)
print("Cost: " + str(t_cost))
compute_cost_test(compute_cost)
###Output
Cost: 0.2797765635793422
All tests passed.
###Markdown
**Expected Output**: cost 0.2797765635793422 6 - Backward Propagation ModuleJust as you did for the forward propagation, you'll implement helper functions for backpropagation. Remember that backpropagation is used to calculate the gradient of the loss function with respect to the parameters. **Reminder**: Figure 3: Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.<!-- For those of you who are experts in calculus (which you don't need to be to do this assignment!), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.This is why we talk about **backpropagation**.!-->Now, similarly to forward propagation, you're going to build the backward propagation in three steps:1. LINEAR backward2. LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation3. [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model) For the next exercise, you will need to remember that:- `b` is a matrix(np.ndarray) with 1 column and n rows, i.e: b = [[1.0], [2.0]] (remember that `b` is a constant)- np.sum performs a sum over the elements of a ndarray- axis=1 or axis=0 specify if the sum is carried out by rows or by columns respectively- keepdims specifies if the original dimensions of the matrix must be kept.- Look at the following example to clarify:
###Code
A = np.array([[1, 2], [3, 4]])
print('axis=1 and keepdims=True')
print(np.sum(A, axis=1, keepdims=True))
print('axis=1 and keepdims=False')
print(np.sum(A, axis=1, keepdims=False))
print('axis=0 and keepdims=True')
print(np.sum(A, axis=0, keepdims=True))
print('axis=0 and keepdims=False')
print(np.sum(A, axis=0, keepdims=False))
###Output
axis=1 and keepdims=True
[[3]
[7]]
axis=1 and keepdims=False
[3 7]
axis=0 and keepdims=True
[[4 6]]
axis=0 and keepdims=False
[4 6]
###Markdown
6.1 - Linear BackwardFor layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.Figure 4The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need:$$ dW^{[l]} = \frac{\partial \mathcal{J} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$$$ db^{[l]} = \frac{\partial \mathcal{J} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$$A^{[l-1] T}$ is the transpose of $A^{[l-1]}$. Exercise 7 - linear_backward Use the 3 formulas above to implement `linear_backward()`.**Hint**:- In numpy you can get the transpose of an ndarray `A` using `A.T` or `A.transpose()`
###Code
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
# dW = ...
# db = ... sum by the rows of dZ with keepdims=True
# dA_prev = ...
# YOUR CODE STARTS HERE
dW = 1 / m * (np.dot(dZ,A_prev.T))
db = 1 / m * (np.sum(dZ,axis = 1,keepdims = True))
dA_prev = np.dot(W.T,dZ)
# YOUR CODE ENDS HERE
return dA_prev, dW, db
t_dZ, t_linear_cache = linear_backward_test_case()
t_dA_prev, t_dW, t_db = linear_backward(t_dZ, t_linear_cache)
print("dA_prev: " + str(t_dA_prev))
print("dW: " + str(t_dW))
print("db: " + str(t_db))
linear_backward_test(linear_backward)
###Output
dA_prev: [[-1.15171336 0.06718465 -0.3204696 2.09812712]
[ 0.60345879 -3.72508701 5.81700741 -3.84326836]
[-0.4319552 -1.30987417 1.72354705 0.05070578]
[-0.38981415 0.60811244 -1.25938424 1.47191593]
[-2.52214926 2.67882552 -0.67947465 1.48119548]]
dW: [[ 0.07313866 -0.0976715 -0.87585828 0.73763362 0.00785716]
[ 0.85508818 0.37530413 -0.59912655 0.71278189 -0.58931808]
[ 0.97913304 -0.24376494 -0.08839671 0.55151192 -0.10290907]]
db: [[-0.14713786]
[-0.11313155]
[-0.13209101]]
All tests passed.
###Markdown
**Expected Output**:```dA_prev: [[-1.15171336 0.06718465 -0.3204696 2.09812712] [ 0.60345879 -3.72508701 5.81700741 -3.84326836] [-0.4319552 -1.30987417 1.72354705 0.05070578] [-0.38981415 0.60811244 -1.25938424 1.47191593] [-2.52214926 2.67882552 -0.67947465 1.48119548]]dW: [[ 0.07313866 -0.0976715 -0.87585828 0.73763362 0.00785716] [ 0.85508818 0.37530413 -0.59912655 0.71278189 -0.58931808] [ 0.97913304 -0.24376494 -0.08839671 0.55151192 -0.10290907]]db: [[-0.14713786] [-0.11313155] [-0.13209101]] ``` 6.2 - Linear-Activation BackwardNext, you will create a function that merges the two helper functions: **`linear_backward`** and the backward step for the activation **`linear_activation_backward`**. To help you implement `linear_activation_backward`, two backward functions have been provided:- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:```pythondZ = sigmoid_backward(dA, activation_cache)```- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:```pythondZ = relu_backward(dA, activation_cache)```If $g(.)$ is the activation function, `sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}). \tag{11}$$ Exercise 8 - linear_activation_backwardImplement the backpropagation for the *LINEAR->ACTIVATION* layer.
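Like the forward helpers, `sigmoid_backward` and `relu_backward` are supplied by `dnn_utils`. As an illustration only, a minimal sketch of how equation (11) could be implemented is shown below; the provided versions may differ:
```python
import numpy as np

def relu_backward_sketch(dA, activation_cache):
    # Illustrative stand-in: the gradient passes through only where Z > 0
    Z = activation_cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

def sigmoid_backward_sketch(dA, activation_cache):
    # Illustrative stand-in: dZ = dA * s * (1 - s), where s = sigmoid(Z)
    Z = activation_cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)
```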
###Code
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
#(≈ 2 lines of code)
# dZ = ...
# dA_prev, dW, db = ...
# YOUR CODE STARTS HERE
dZ = relu_backward(dA,activation_cache)
dA_prev, dW, db = linear_backward(dZ,linear_cache)
# YOUR CODE ENDS HERE
elif activation == "sigmoid":
#(≈ 2 lines of code)
# dZ = ...
# dA_prev, dW, db = ...
# YOUR CODE STARTS HERE
dZ = sigmoid_backward(dA,activation_cache)
dA_prev, dW, db = linear_backward(dZ,linear_cache)
# YOUR CODE ENDS HERE
return dA_prev, dW, db
t_dAL, t_linear_activation_cache = linear_activation_backward_test_case()
t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache, activation = "sigmoid")
print("With sigmoid: dA_prev = " + str(t_dA_prev))
print("With sigmoid: dW = " + str(t_dW))
print("With sigmoid: db = " + str(t_db))
t_dA_prev, t_dW, t_db = linear_activation_backward(t_dAL, t_linear_activation_cache, activation = "relu")
print("With relu: dA_prev = " + str(t_dA_prev))
print("With relu: dW = " + str(t_dW))
print("With relu: db = " + str(t_db))
linear_activation_backward_test(linear_activation_backward)
###Output
With sigmoid: dA_prev = [[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]]
With sigmoid: dW = [[ 0.10266786 0.09778551 -0.01968084]]
With sigmoid: db = [[-0.05729622]]
With relu: dA_prev = [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]]
With relu: dW = [[ 0.44513824 0.37371418 -0.10478989]]
With relu: db = [[-0.20837892]]
All tests passed.
###Markdown
**Expected output:**```With sigmoid: dA_prev = [[ 0.11017994 0.01105339] [ 0.09466817 0.00949723] [-0.05743092 -0.00576154]]With sigmoid: dW = [[ 0.10266786 0.09778551 -0.01968084]]With sigmoid: db = [[-0.05729622]]With relu: dA_prev = [[ 0.44090989 0. ] [ 0.37883606 0. ] [-0.2298228 0. ]]With relu: dW = [[ 0.44513824 0.37371418 -0.10478989]]With relu: db = [[-0.20837892]]``` 6.3 - L-Model Backward Now you will implement the backward function for the whole network! Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you'll use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you'll iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. Figure 5: Backward pass**Initializing backpropagation**:To backpropagate through this network, you know that the output is: $A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.To do so, use this formula (derived using calculus which, again, you don't need in-depth knowledge of!):```pythondAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) derivative of cost with respect to AL```You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : $$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`. Exercise 9 - L_model_backwardImplement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
###Code
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
#(1 line of code)
# dAL = ...
# YOUR CODE STARTS HERE
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
# YOUR CODE ENDS HERE
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "dAL, current_cache". Outputs: "grads["dAL-1"], grads["dWL"], grads["dbL"]
#(approx. 5 lines)
# current_cache = ...
# dA_prev_temp, dW_temp, db_temp = ...
# grads["dA" + str(L-1)] = ...
# grads["dW" + str(L)] = ...
# grads["db" + str(L)] = ...
# YOUR CODE STARTS HERE
current_cache = caches[L-1]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(dAL, current_cache, activation = "sigmoid")
grads["dA" + str(L-1)] = dA_prev_temp
grads["dW" + str(L)] = dW_temp
grads["db" + str(L)] = db_temp
# YOUR CODE ENDS HERE
# Loop from l=L-2 to l=0
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 1)], current_cache". Outputs: "grads["dA" + str(l)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
#(approx. 5 lines)
# current_cache = ...
# dA_prev_temp, dW_temp, db_temp = ...
# grads["dA" + str(l)] = ...
# grads["dW" + str(l + 1)] = ...
# grads["db" + str(l + 1)] = ...
# YOUR CODE STARTS HERE
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l+1)], current_cache, activation = "relu")
grads["dA" + str(l)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
# YOUR CODE ENDS HERE
return grads
t_AL, t_Y_assess, t_caches = L_model_backward_test_case()
grads = L_model_backward(t_AL, t_Y_assess, t_caches)
print("dA0 = " + str(grads['dA0']))
print("dA1 = " + str(grads['dA1']))
print("dW1 = " + str(grads['dW1']))
print("dW2 = " + str(grads['dW2']))
print("db1 = " + str(grads['db1']))
print("db2 = " + str(grads['db2']))
L_model_backward_test(L_model_backward)
###Output
dA0 = [[ 0. 0.52257901]
[ 0. -0.3269206 ]
[ 0. -0.32070404]
[ 0. -0.74079187]]
dA1 = [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]]
dW1 = [[0.41010002 0.07807203 0.13798444 0.10502167]
[0. 0. 0. 0. ]
[0.05283652 0.01005865 0.01777766 0.0135308 ]]
dW2 = [[-0.39202432 -0.13325855 -0.04601089]]
db1 = [[-0.22007063]
[ 0. ]
[-0.02835349]]
db2 = [[0.15187861]]
All tests passed.
###Markdown
**Expected output:**```dA0 = [[ 0. 0.52257901] [ 0. -0.3269206 ] [ 0. -0.32070404] [ 0. -0.74079187]]dA1 = [[ 0.12913162 -0.44014127] [-0.14175655 0.48317296] [ 0.01663708 -0.05670698]]dW1 = [[0.41010002 0.07807203 0.13798444 0.10502167] [0. 0. 0. 0. ] [0.05283652 0.01005865 0.01777766 0.0135308 ]]dW2 = [[-0.39202432 -0.13325855 -0.04601089]]db1 = [[-0.22007063] [ 0. ] [-0.02835349]]db2 = [[0.15187861]]``` 6.4 - Update ParametersIn this section, you'll update the parameters of the model, using gradient descent: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. Exercise 10 - update_parametersImplement `update_parameters()` to update your parameters using gradient descent.**Instructions**:Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
###Code
# GRADED FUNCTION: update_parameters
def update_parameters(params, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
params -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
parameters = params.copy()
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
#(≈ 2 lines of code)
for l in range(L):
# parameters["W" + str(l+1)] = ...
# parameters["b" + str(l+1)] = ...
# YOUR CODE STARTS HERE
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW"+ str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db"+ str(l+1)]
# YOUR CODE ENDS HERE
return parameters
t_parameters, grads = update_parameters_test_case()
t_parameters = update_parameters(t_parameters, grads, 0.1)
print ("W1 = "+ str(t_parameters["W1"]))
print ("b1 = "+ str(t_parameters["b1"]))
print ("W2 = "+ str(t_parameters["W2"]))
print ("b2 = "+ str(t_parameters["b2"]))
update_parameters_test(update_parameters)
###Output
W1 = [[-0.59562069 -0.09991781 -2.14584584 1.82662008]
[-1.76569676 -0.80627147 0.51115557 -1.18258802]
[-1.0535704 -0.86128581 0.68284052 2.20374577]]
b1 = [[-0.04659241]
[-1.28888275]
[ 0.53405496]]
W2 = [[-0.55569196 0.0354055 1.32964895]]
b2 = [[-0.84610769]]
All tests passed.
|
notebooks/Module3-Classification/3.4.ML0101EN-Clas-SVM-cancer-py-v1.ipynb | ###Markdown
SVM (Support Vector Machines) In this notebook, you will use SVM (Support Vector Machines) to build and train a model using human cell records, and classify cells as to whether the samples are benign or malignant.SVM works by mapping data to a high-dimensional feature space so that data points can be categorized, even when the data are not otherwise linearly separable. A separator between the categories is found, and the data are then transformed in such a way that the separator can be drawn as a hyperplane. Following this, the characteristics of new data can be used to predict the group to which a new record should belong. Table of contents Load the Cancer data Modeling Evaluation Practice
###Code
import pandas as pd
import pylab as pl
import numpy as np
import scipy.optimize as opt
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
%matplotlib inline
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load the Cancer dataThe example is based on a dataset that is publicly available from the UCI Machine Learning Repository (Asuncion and Newman, 2007)[http://mlearn.ics.uci.edu/MLRepository.html]. The dataset consists of several hundred human cell sample records, each of which contains the values of a set of cell characteristics. The fields in each record are:| Field name | Description || ----------- | --------------------------- || ID | Patient identifier || Clump | Clump thickness || UnifSize | Uniformity of cell size || UnifShape | Uniformity of cell shape || MargAdh | Marginal adhesion || SingEpiSize | Single epithelial cell size || BareNuc | Bare nuclei || BlandChrom | Bland chromatin || NormNucl | Normal nucleoli || Mit | Mitoses || Class | Benign or malignant |For the purposes of this example, we're using a dataset that has a relatively small number of predictors in each record. To download the data, we will use `!wget` to download it from IBM Object Storage. **Did you know?** When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)
###Code
#Click here and press Shift+Enter
!wget -O cell_samples.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-Coursera/labs/Data_files/cell_samples.csv
###Output
--2021-05-30 05:57:53-- https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-Coursera/labs/Data_files/cell_samples.csv
Resolving cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)... 169.45.118.108
Connecting to cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud (cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud)|169.45.118.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 19975 (20K) [text/csv]
Saving to: ‘cell_samples.csv’
cell_samples.csv 100%[===================>] 19.51K --.-KB/s in 0s
2021-05-30 05:57:54 (278 MB/s) - ‘cell_samples.csv’ saved [19975/19975]
###Markdown
Load Data From CSV File
###Code
cell_df = pd.read_csv("cell_samples.csv")
cell_df.head()
###Output
_____no_output_____
###Markdown
The ID field contains the patient identifiers. The characteristics of the cell samples from each patient are contained in fields Clump to Mit. The values are graded from 1 to 10, with 1 being the closest to benign.The Class field contains the diagnosis, as confirmed by separate medical procedures, as to whether the samples are benign (value = 2) or malignant (value = 4).Let's look at the distribution of the classes based on Clump thickness and Uniformity of cell size:
###Code
ax = cell_df[cell_df['Class'] == 4][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='DarkBlue', label='malignant');
cell_df[cell_df['Class'] == 2][0:50].plot(kind='scatter', x='Clump', y='UnifSize', color='Yellow', label='benign', ax=ax);
plt.show()
###Output
_____no_output_____
###Markdown
Data pre-processing and selection Let's first look at the columns' data types:
###Code
cell_df.dtypes
###Output
_____no_output_____
###Markdown
It looks like the **BareNuc** column includes some values that are not numerical. We can drop those rows:
###Code
cell_df = cell_df[pd.to_numeric(cell_df['BareNuc'], errors='coerce').notnull()]
cell_df['BareNuc'] = cell_df['BareNuc'].astype('int')
cell_df.dtypes
feature_df = cell_df[['Clump', 'UnifSize', 'UnifShape', 'MargAdh', 'SingEpiSize', 'BareNuc', 'BlandChrom', 'NormNucl', 'Mit']]
X = np.asarray(feature_df)
X[0:5]
###Output
_____no_output_____
###Markdown
We want the model to predict the value of Class (that is, benign (=2) or malignant (=4)). As this field can have one of only two possible values, we need to change its measurement level to reflect this.
###Code
cell_df['Class'] = cell_df['Class'].astype('int')
y = np.asarray(cell_df['Class'])
y [0:5]
###Output
_____no_output_____
###Markdown
Train/Test dataset Okay, we split our dataset into train and test sets:
###Code
X_train, X_test, y_train, y_test = train_test_split( X, y, test_size=0.2, random_state=4)
print ('Train set:', X_train.shape, y_train.shape)
print ('Test set:', X_test.shape, y_test.shape)
###Output
Train set: (546, 9) (546,)
Test set: (137, 9) (137,)
###Markdown
Modeling (SVM with Scikit-learn) The SVM algorithm offers a choice of kernel functions for performing its processing. Basically, mapping data into a higher dimensional space is called kernelling. The mathematical function used for the transformation is known as the kernel function, and can be of different types, such as:```1.Linear2.Polynomial3.Radial basis function (RBF)4.Sigmoid```Each of these functions has its characteristics, its pros and cons, and its equation, but as there's no easy way of knowing which function performs best with any given dataset, we usually choose different functions in turn and compare the results. Let's just use the default, RBF (Radial Basis Function) for this lab.
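If you do want a rough feel for how the kernels compare on this data, one possible (optional) approach is a quick cross-validation loop on the training split; the snippet below is a sketch and is not part of the original lab:
```python
from sklearn import svm
from sklearn.model_selection import cross_val_score

# Optional sketch: compare kernels with 5-fold CV on the training data
for kernel_name in ('linear', 'poly', 'rbf', 'sigmoid'):
    clf_k = svm.SVC(kernel=kernel_name, gamma='auto')
    scores = cross_val_score(clf_k, X_train, y_train, cv=5)
    print(kernel_name, round(scores.mean(), 3))
```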
###Code
from sklearn import svm
clf = svm.SVC(kernel='rbf')
clf.fit(X_train, y_train)
###Output
/home/jupyterlab/conda/envs/python/lib/python3.6/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. Set gamma explicitly to 'auto' or 'scale' to avoid this warning.
"avoid this warning.", FutureWarning)
###Markdown
After being fitted, the model can then be used to predict new values:
###Code
yhat = clf.predict(X_test)
yhat [0:5]
###Output
_____no_output_____
###Markdown
Evaluation
###Code
from sklearn.metrics import classification_report, confusion_matrix
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, yhat, labels=[2,4])
np.set_printoptions(precision=2)
print (classification_report(y_test, yhat))
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Benign(2)','Malignant(4)'],normalize= False, title='Confusion matrix')
###Output
precision recall f1-score support
2 1.00 0.94 0.97 90
4 0.90 1.00 0.95 47
micro avg 0.96 0.96 0.96 137
macro avg 0.95 0.97 0.96 137
weighted avg 0.97 0.96 0.96 137
Confusion matrix, without normalization
[[85 5]
[ 0 47]]
###Markdown
You can also easily use the **f1_score** from the sklearn library:
###Code
from sklearn.metrics import f1_score
f1_score(y_test, yhat, average='weighted')
###Output
_____no_output_____
###Markdown
Let's try the Jaccard index for accuracy:
###Code
from sklearn.metrics import jaccard_similarity_score
jaccard_similarity_score(y_test, yhat)
###Output
_____no_output_____
###Markdown
PracticeCan you rebuild the model, but this time with a __linear__ kernel? You can use the __kernel='linear'__ option when you define the svm. How does the accuracy change with the new kernel function?
###Code
clf2 = svm.SVC(kernel='linear')
clf2.fit(X_train, y_train)
yhat2 = clf2.predict(X_test)
print("Avg F1-score: %.4f" % f1_score(y_test, yhat2, average='weighted'))
print("Jaccard score: %.4f" % jaccard_similarity_score(y_test, yhat2))
###Output
Avg F1-score: 0.9639
Jaccard score: 0.9635
|
01_Note_Books/04_Challenge_Risk_Assessment.ipynb | ###Markdown
Asset Management Principles For Modern Power Systems Topic 4: Risk assessment David L. Alvarez A, Ph.D [email protected] Copyright (c) 2021 dlalvareza IntroductionWith this notebook the asset management plan is assessed by using Montecarlo Simulations considering the probability of failure and their impact (criticality) on the net. CIGRE network benchmark DER in Medium Voltage Systems
###Code
#!conda install --yes pyarrow
###Output
_____no_output_____
###Markdown
Necessary libraries
###Code
import sys
import datetime
import pandas as pd
import calendar
from ipywidgets import interact
from ipywidgets import fixed
import plotly.express as px
from plotly.subplots import make_subplots
from bokeh.io import show, push_notebook, output_notebook
###Output
_____no_output_____
###Markdown
Load PywerAPM libraries
###Code
sys.path.insert(0,'../CASES/05_Challenge_Data/')
sys.path.insert(0,'../APM/BIN/')
from ST_AM_Contingencies_Analysis import Real_Time_Contingencies as Cont_Assessment
from APM_Module import APM
from APM_Run import run_condition
from ARM_Run import load_criticality
from PywerAM_Scenario_Assessment import Decision_Making
from PywerAM_bokeh_tools import plot_condition_assessment#,plot_decision_making, plot_scenario,plot_condition_forecast, Plot_HI_Forecast_Stacked
###Output
_____no_output_____
###Markdown
1. Normal opertaion 1.1 Import case settings
###Code
from PywerAPM_Case_Setting import*
_,_,assets = run_condition()
asset_list_id = list(assets.Asset_Portfolio.keys()) # List of asset id
df_ACP = load_criticality() # Montecarlo simulations
df_ACP['Cr'] = df_ACP['Cr']
# Load fixed criticality
df_Fixed_Cr = load_criticality(cr_type=case_settings['Cr'],assets=assets.Asset_Portfolio_List)
df_Fixed_Cr
###Output
_____no_output_____
###Markdown
1.2 Create contingencies assessment object
###Code
%time
DMS = Decision_Making(assets,DF_ACP=df_ACP,df_AC_Fixed=df_Fixed_Cr)
DMS.R = 0.13 # Discount rate
DMS.date_beg = date_beg
DMS.N_days = n_days
# Scenario do nothing
DMS.run_scenario_base()
#DMS.load_scenario_base()
output_notebook()
def update_HI_plot(Asset_Id):
asset = assets.Asset_Portfolio[Asset_Id]
df = DMS.scenario['Base'][Asset_Id]['Con']
print(df.head())
p = plot_condition_assessment(df)
show(p, notebook_handle=True)
push_notebook()
interact(update_HI_plot, Asset_Id=asset_list_id);
from bokeh.plotting import figure
def update_Cr_plot(Asset_Id):
asset_name = assets.Asset_Portfolio_List.loc[Asset_Id].Name
df = df_ACP[df_ACP[asset_name]==True]
p = figure(title="PyweAM - Asset: "+asset_name, plot_height=500, plot_width=950,background_fill_color='#808080',x_axis_type='datetime')
p.circle(df["Date"], df["Cr"], fill_alpha=0.2, size=10)
show(p, notebook_handle=True)
push_notebook()
interact(update_Cr_plot, Asset_Id=asset_list_id);
###Output
_____no_output_____
###Markdown
4 Risk Index\begin{equation*}RI_i \left( t \right) = POF_i \left( t \right) \times Cr_i \left( t \right)\end{equation*}
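Before using the PywerAM `OPT` module below, here is a rough, self-contained sketch of how this index could be evaluated elementwise; the probability-of-failure and criticality values are made up purely for illustration:
```python
import numpy as np

# Hypothetical trajectories over 5 periods (illustrative numbers only)
pof = np.array([0.01, 0.02, 0.04, 0.07, 0.11])       # POF_i(t)
cr = np.array([100.0, 100.0, 120.0, 120.0, 150.0])   # Cr_i(t)
ri = pof * cr                                        # RI_i(t) = POF_i(t) * Cr_i(t)
print(ri)
```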
###Code
from OPT_Module import OPT
asset_id = 1
l_asset = assets.Asset_Portfolio[asset_id]
data = DMS.scenario['Base'][asset_id]
opt_des = OPT(l_asset,data)
df_current = opt_des.Current_Con_Rel_asseesment(n_years)
df_current
from ST_AM_Contingencies_Analysis import Real_Time_Contingencies as Cont_Assessment
Cont_A = Cont_Assessment(case_settings,pp_case='json')
print(Cont_A.AM_Plan)
plan = Cont_A.AM_Plan
DMS.run_scenario('Test',plan)
def update_Cr_plot(Asset_Id,Scenario):
# # # # # # # # # #
asset_name = assets.Asset_Portfolio_List.loc[Asset_Id].Name
df = DMS.scenario[Scenario][Asset_Id]['RI']
p = figure(title="PyweAM - Asset: "+asset_name, plot_height=500, plot_width=950,background_fill_color='#808080',x_axis_type='datetime')
p.line(df["date"], -df["Cr"])
p.line(df["date"], -df["RI"], color='#FF0000',)
show(p, notebook_handle=True)
push_notebook()
    print('Risk without AM plan '+str(round(DMS.scenario['Base'][Asset_Id]['PV'],2)))
    print('Risk with an AM plan '+ str(round(DMS.scenario['Test'][Asset_Id]['PV'],2)))
interact(update_Cr_plot, Asset_Id=asset_list_id,Scenario=DMS.scenario.keys());
###Output
_____no_output_____ |
docs/tutorials/sims/indep_simulations.ipynb | ###Markdown
Independence Simulations This module contains a benchmark of 20 simulations of various dependency structures to allow benchmark comparisons between included tests. It also can be used to see when to use which test if some information about the shape of the data distribution is already known. Importing simulations from `hyppo` is easy! All simulations are contained withing the `hyppo.sims` module and can be imported like all other Python packages. In this tutorial, we will show you what all of the independence simulations look like and as such import all of them.
###Code
from hyppo.sims import (linear, exponential, cubic, joint_normal, step,
quadratic, w_shaped, spiral, uncorrelated_bernoulli, logarithmic,
fourth_root, sin_four_pi, sin_sixteen_pi, square, two_parabolas,
circle, ellipse, diamond, multiplicative_noise, multimodal_independence)
###Output
_____no_output_____
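Each imported simulation is just a function that returns a pair of samples `x, y`; as a quick (illustrative) usage example matching the call pattern used in the plotting code below:
```python
# Quick usage sketch: 100 noisy samples from the linear dependence simulation
x, y = linear(100, 1, noise=True)
print(x.shape, y.shape)   # inspect the sampled shapes
```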
###Markdown
These are some constants that are used in this notebook. If running this notebook, please only manipulate these constants.
###Code
# number of sample for noisy and not noisy simulations
NOISY = 100
NO_NOISE = 1000
# list containing simulations and plot titles
simulations = [
(linear, "Linear"),
(exponential, "Exponential"),
(cubic, "Cubic"),
(joint_normal, "Joint Normal"),
(step, "Step"),
(quadratic, "Quadratic"),
(w_shaped, "W-Shaped"),
(spiral, "Spiral"),
(uncorrelated_bernoulli, "Uncorrelated Bernoulli"),
(logarithmic, "Logarithmic"),
(fourth_root, "Fourth Root"),
(sin_four_pi, "Sine 4\u03C0"),
(sin_sixteen_pi, "Sine 16\u03C0"),
(square, "Square"),
(two_parabolas, "Two Parabolas"),
(circle, "Circle"),
(ellipse, "Ellipse"),
(diamond, "Diamond"),
(multiplicative_noise, "Multiplicative"),
(multimodal_independence, "Independence")
]
###Output
_____no_output_____
###Markdown
The following code plots each simulation with noise (where applicable) in blue, overlaid with the noise-free simulation in red. For the specific equations of a simulation, please refer to the documentation corresponding to that simulation.
###Code
def plot_sims():
# set the figure size
fig, ax = plt.subplots(nrows=4, ncols=5, figsize=(28,24))
count = 0
for i, row in enumerate(ax):
for j, col in enumerate(row):
count = 5*i + j
sim, sim_title = simulations[count]
# the multiplicative noise and independence simulation don't have a noise parameter
if sim_title == "Multiplicative" or sim_title == "Independence":
x, y = sim(NO_NOISE, 1)
x_no_noise, y_no_noise = x, y
else:
x, y = sim(NOISY, 1, noise=True)
x_no_noise, y_no_noise = sim(NO_NOISE, 1)
col.scatter(x, y, c='b', label="noise", alpha=0.5)
col.scatter(x_no_noise, y_no_noise, c='r', label="no noise")
# make the plot look pretty
col.set_title('{}'.format(sim_title))
col.set_xticks([])
col.set_yticks([])
if count == 16:
col.set_ylim([-1, 1])
sns.despine(left=True, bottom=True, right=True)
leg = plt.legend(bbox_to_anchor=(0.5, 0.1), bbox_transform=plt.gcf().transFigure,
ncol=5, loc='upper center')
leg.get_frame().set_linewidth(0.0)
for legobj in leg.legendHandles:
legobj.set_linewidth(5.0)
plt.subplots_adjust(hspace=.75)
plot_sims()
###Output
_____no_output_____ |
04-pooled_model.ipynb | ###Markdown
Bayesian Multilevel Modelling using PyStanThis is a tutorial, following through Chris Fonnesbeck's [primer on using PyStan with Bayesian Multilevel Modelling](http://mc-stan.org/documentation/case-studies/radon.html). 4. A Pooled Model
###Code
%pylab inline
import numpy as np
import pystan
import seaborn as sns
import clean_data
sns.set_context('notebook')
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Building the model in `Stan`We construct a model with complete pooling, where we treat all counties the same, and estimate a single radon level: $$y_i = \alpha + \beta x_i + \epsilon_i$$* $y_i$: measured log(radon) in household $i$* $\alpha$: prevailing radon level across the state* $\beta$: effect on measured log(radon) in moving from basement to ground floor measurement* $\epsilon_i$: error in the model prediction for household $i$
###Code
# Construct the data block.
pooled_data = """
data {
int<lower=0> N;
vector[N] x;
vector[N] y;
}
"""
###Output
_____no_output_____
###Markdown
Next we initialise parameters, which here are linear model coefficients (`beta`, a `vector` of length 2) that represent both $\alpha$ and $\beta$ in the pooled model definition, as `beta[1]` and `beta[2]` are assumed to lie on a Normal distribution, and the Normal distribution scale parameter `sigma` defining errors in the model's prediction of the output (`y`, defined later), which is constrained to be positive.
###Code
# Initialise parameters
pooled_parameters = """
parameters {
vector[2] beta;
real<lower=0> sigma;
}
"""
###Output
_____no_output_____
###Markdown
Finally we specify the model, with log(radon) measurements as a normal sample, having a mean that is a function of the choice of floor at which the measurement was made:$$y \sim N(\beta[1] + \beta[2]x, \sigma_e)$$
###Code
pooled_model = """
model {
y ~ normal(beta[1] + beta[2] * x, sigma);
}
"""
###Output
_____no_output_____
###Markdown
Running the pooled model with `pystan`We need to map Python variables (from `clean_data`) to those in the `Stan` model, and pass the data, parameters and model strings above to `Stan`. We also need to specify how many iterations of sampling we want, and how many parallel chains to sample (here, 1000 iterations of 2 chains).This is where explicitly-named local variables are convenient for definition of Stan models.Calling `pystan.stan` doesn't just define the model, ready to fit - it runs the fitting immediately.
###Code
pooled_data_dict = {'N': len(clean_data.log_radon),
'x': clean_data.floor_measure,
'y': clean_data.log_radon}
pooled_fit = pystan.stan(model_code=pooled_data + pooled_parameters + pooled_model,
data=pooled_data_dict,
iter=1000,
chains=2)
###Output
_____no_output_____
###Markdown
Inspecting the fitOnce the fit has been run, the sample can be extracted for visualisation and summarisation. Specifying `permuted=True` means that all fitting chains are merged and warmup samples are discarded, and that a dictionary is returned, with samples for each parameter:
###Code
# Collect the sample
pooled_sample = pooled_fit.extract(permuted=True)
###Output
_____no_output_____
###Markdown
The output is an `OrderedDict` with two keys of interest to us: `beta` and `sigma`. `sigma` describes the estimated error term, and `beta` describes the estimated values of $\alpha$ and $\beta$ for each iteration. These can be obtained through key values:
###Code
# Inspect the sample
pooled_sample['beta']
###Output
_____no_output_____
###Markdown
While it can be very interesting to see the results for individual iterations and how they vary (e.g. to obtain credibility intervals for parameter estimates), for now we are interested primarily in the mean values of these estimates:
###Code
# Get mean values for parameters, from the sample
# b0 = common radon value across counties (alpha)
# m0 = variation in radon level with change in floor (beta)
b0, m0 = pooled_sample['beta'].T.mean(1)
# What are the fitted parameters
print("alpha: {0}, beta: {1}".format(b0, m0))
###Output
alpha: 1.3639661176627098, beta: -0.5912920402438805
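Since complete pooling is effectively a simple linear regression, a quick ordinary-least-squares cross-check (a sketch, not part of the original tutorial) should land close to the posterior means above:
```python
# OLS sanity check: the slope/intercept should be close to beta/alpha above
m_ols, b_ols = np.polyfit(clean_data.floor_measure, clean_data.log_radon, 1)
print("OLS alpha: {0:.3f}, OLS beta: {1:.3f}".format(b_ols, m_ols))
```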
###Markdown
We can create a scatterplot with the mean fit overlaid, to visualise how well this pooled model fits the observed data:
###Code
# Plot the fitted model (red line) against observed values (blue points)
plt.scatter(clean_data.srrs_mn.floor, np.log(clean_data.srrs_mn.activity + 0.1))
xvals = np.linspace(-0.1, 1.2)
plt.plot(xvals, m0 * xvals + b0, 'r--')
plt.title("Fitted model")
plt.xlabel("Floor")
plt.ylabel("log(radon)");
###Output
_____no_output_____ |
Foot balll data.ipynb | ###Markdown
Set index
###Code
df[df['year']<2012][['team','wins','losses']]
df[df['year']<2012][['team','wins','losses']].set_index('team')
###Output
_____no_output_____ |
EEG-HandGrasp-Classification/4b. Cross Subject Classification with FBCSP Classifier.ipynb | ###Markdown
Clinical BCI Challenge-WCCI2020- [website link](https://sites.google.com/view/bci-comp-wcci/?fbclid=IwAR37WLQ_xNd5qsZvktZCT8XJerHhmVb_bU5HDu69CnO85DE3iF0fs57vQ6M) - [Dataset Link](https://github.com/5anirban9/Clinical-Brain-Computer-Interfaces-Challenge-WCCI-2020-Glasgow) - [FBCSP Github Repo Link](https://github.com/jesus-333/FBCSP-Python) I have changed the source code to give Cohen's kappa score instead of accuracy as an evaluation measure, and I have also made a few other small changes. Moreover, for cross-subject analysis I have adapted the implementation slightly. LinearSVM raised errors when making predictions from the evaluateTrial() method, so I replaced it with LogisticRegression.
###Code
from FBCSP.FBCSP_V4_CS import FBCSP_V4 as FBCSP
import mne
from scipy.io import loadmat
import scipy
import sklearn
import numpy as np
import pandas as pd
import glob
from mne.decoding import CSP
import os
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC, SVC
from sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV, StratifiedShuffleSplit
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as lda
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import cohen_kappa_score as kappa_score
import warnings
warnings.filterwarnings('ignore') # to ignore warnings
verbose = False # global variable to suppress output display of MNE functions
mne.set_log_level(verbose=verbose) # to suppress large info outputs
verbose_clf = True # control output of FBCSP function
freqs_band = np.linspace(8, 32, 7) # filter bank choice
# using kappa as evaluation metric
kappa = sklearn.metrics.make_scorer(sklearn.metrics.cohen_kappa_score) # kappa scorer
acc = sklearn.metrics.make_scorer(sklearn.metrics.accuracy_score) # accuracy scorer
scorer = kappa # just assign another scorer to replace kappa scorer
###Output
_____no_output_____
###Markdown
Data Loading and Conversion to MNE Datatypes[Mike Cohen Tutorials link for EEG Preprocessing](https://www.youtube.com/watch?v=uWB5tjhataY&list=PLn0OLiymPak2gDD-VDA90w9_iGDgOOb2o)
###Code
current_folder = globals()['_dh'][0] # a hack to get the path of the folder in which this jupyter notebook is located
data_path = os.path.join(current_folder, 'Data')
training_files = glob.glob(data_path + '/*T.mat')
len(training_files) # if this returns zero, then no files were loaded
###Output
_____no_output_____
###Markdown
Lets Append Epochs
###Code
def get_mne_epochs_complete(files_paths, verbose=verbose, t_start=2, fs=512, mode='train'):
'''
similar to get_mne_epochs, just appends data from all relevant files together to give a single
epoch object
'''
eeg_data = []
for filepath in files_paths:
mat_data = loadmat(filepath)
eeg_data.extend(mat_data['RawEEGData'])
idx_start = fs*t_start # fs*ts
eeg_data = np.array(eeg_data)
eeg_data = eeg_data[:, :, idx_start:]
event_id = {'left-hand': 1, 'right-hand': 2}
channel_names = ['F3', 'FC3', 'C3', 'CP3', 'P3', 'FCz', 'CPz', 'F4', 'FC4', 'C4', 'CP4', 'P4']
info = mne.create_info(ch_names=channel_names, sfreq=fs, ch_types='eeg')
epochs = mne.EpochsArray(eeg_data, info, verbose=verbose, tmin=t_start-3.0)
epochs.set_montage('standard_1020')
    epochs.filter(1., None) # high-pass required by ICA; the (7-30 Hz) band-pass is applied later
epochs.apply_baseline(baseline=(-.250, 0)) # linear baseline correction
if mode == 'train': # this in only applicable for training data
labels = []
for filepath in files_paths:
mat_data = loadmat(filepath)
labels.extend(mat_data['Labels'].ravel())
epochs.event_id = event_id
epochs.events[:,2] = labels
return epochs
###Output
_____no_output_____
###Markdown
Data Loading with Band Pass Filtering
###Code
# loading relevant files
training_epochs_all = get_mne_epochs_complete(training_files).filter(7,32) # for all training subjects
epochs = training_epochs_all.copy()
data, labels = epochs.get_data(), epochs.events[:,-1]
print('Shape of EEG Data: ', data.shape, '\t Shape of Labels: ', labels.shape)
###Output
Shape of EEG Data: (640, 12, 3072) Shape of Labels: (640,)
###Markdown
Training with Leave One Group Out CV
###Code
cv = LeaveOneGroupOut()
# group parameter for leave one group out cross validation in sklearn, each subject is given unique identifier
group_list = []
for subject in np.linspace(1,8,8):
group_list.extend([subject for _ in range(80)]) # since we have 80 samples in each training file
groups = np.array(group_list)
i = 1
fs = epochs.info['sfreq']
valid_scores_lda = []
for train_idx, valid_idx in cv.split(epochs, y=labels, groups=groups):
print('-'*20, "Iteration:", i, '-'*20)
train_epochs = epochs[train_idx]
valid_epochs = epochs[valid_idx]
valid_data, valid_labels = valid_epochs.get_data()[:,:,256+512:-256], valid_epochs.events[:,-1]
data_dict_train = {'left-hand': train_epochs['left-hand'].get_data()[:,:,256+512:-256], # [0.5, 4.5] sec data
'right-hand': train_epochs['right-hand'].get_data()[:,:,256+512:-256]}
# using LDA as classifier
fbcsp_clf_lda = FBCSP(data_dict_train, fs, freqs_band=freqs_band,
classifier=lda(), print_var=verbose_clf)
preds_fbcsp_clf_lda = fbcsp_clf_lda.evaluateTrial(valid_data)[0]
valid_scores_lda.append(kappa_score(preds_fbcsp_clf_lda, valid_labels))
i = i+1
print()
i = 1
valid_scores_logreg = []
for train_idx, valid_idx in cv.split(epochs, y=labels, groups=groups):
print('-'*20, "Iteration:", i, '-'*20)
train_epochs = epochs[train_idx]
valid_epochs = epochs[valid_idx]
valid_data, valid_labels = valid_epochs.get_data()[:,:,256+512:-256], valid_epochs.events[:,-1]
data_dict_train = {'left-hand': train_epochs['left-hand'].get_data()[:,:,256+512:-256], # [0.5, 4.5] sec data
'right-hand': train_epochs['right-hand'].get_data()[:,:,256+512:-256]}
fs = epochs.info['sfreq']
# using Logistic Regression as classifier
fbcsp_clf_logreg = FBCSP(data_dict_train, fs, freqs_band=freqs_band,
classifier=LogisticRegression(), print_var=verbose_clf)
preds_fbcsp_clf_logreg = fbcsp_clf_logreg.evaluateTrial(valid_data)[0]
valid_scores_logreg.append(kappa_score(preds_fbcsp_clf_logreg, valid_labels))
i = i+1
print()
print("FBCSP-LDA Cross Validation Score:", np.mean(valid_scores_lda))
print("FBCSP-Logreg Cross Validation Score:", np.mean(valid_scores_logreg))
# we aren't doing grid search here so wouldn't take max score
###Output
FBCSP-LDA Cross Validation Score: 0.39999999999999997
FBCSP-Logreg Cross Validation Score: 0.39374999999999993
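###Markdown
The averaged kappas hide a fair amount of subject-to-subject variability; a quick sketch that summarises the per-fold (per held-out subject) scores collected above:
###Code
print('FBCSP-LDA per-subject kappas:   ', np.round(valid_scores_lda, 3))
print('FBCSP-LogReg per-subject kappas:', np.round(valid_scores_logreg, 3))
print('LDA std: %.3f | LogReg std: %.3f' % (np.std(valid_scores_lda), np.std(valid_scores_logreg)))
###Output
_____no_output_____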
|
Industrial-Indexing and calculating profit.ipynb | ###Markdown
**QuipuSpicyCycle.**In this lesson you will:- Calculate the buy rate of the Tez/Quipu pair on the QuipuSwap exchange- Calculate the buy rate of the Tez/Quipu pair on the SpicySwap exchange- Calculate the possible profit from arbitrage As usual, let's start by installing the libraries
###Code
%%capture
## %%capture is a Colab built-in function to suppress the output of that particular cell whether it uses a command-line code or some python code
from matplotlib import pyplot as plt
from pandas import json_normalize
import json
import pandas as pd
from tqdm import tqdm
import requests
import time
###Output
_____no_output_____
###Markdown
We looked at the QuipuSwap contract in one of the previous lessons; now let's look at the SpicySwap contract
###Code
smart_contract ="KT1VLvdV1u268eVzBFEFr72JgEKhq2MU6NqJ"
request_url = f'https://api.tzkt.io/v1/contracts/{smart_contract}/storage/'
response = requests.get(request_url)
new_data = json.loads(response.text)
new_data
###Output
_____no_output_____
###Markdown
As you can see, the storage layout is different: reserve0 - Quipu pool, reserve1 - Tezos pool. **Collecting reserves**Let's write functions that get the pool reserves
###Code
def Qpools(smart_contract):
request_url = f'https://api.tzkt.io/v1/contracts/{smart_contract}/storage/'
response = requests.get(request_url)
new_data = json.loads(response.text)
return int(new_data['storage']['tez_pool']), int(new_data['storage']['token_pool'])
def Spools(smart_contract):
request_url = f'https://api.tzkt.io/v1/contracts/{smart_contract}/storage/'
response = requests.get(request_url)
new_data = json.loads(response.text)
return int(new_data['reserve1']),int(new_data['reserve0'])
###Output
_____no_output_____
###Markdown
Let's take the functions from the previous lessons to calculate the exchange rate of the pair
###Code
#1 tez = 1,000,000 micro tez
def tzToMutez(tezAmount):
return tezAmount*1000000
def TeztoQoutput(tezAmount,tez_pool,token_pool):
mutezAmount = tzToMutez(tezAmount)
tezInWithFee = mutezAmount*997
numerator = tezInWithFee*token_pool
denominator = tez_pool * 1000 + tezInWithFee
tokensOut = numerator/denominator
return tokensOut/(10*10*10*10*10*10)
###Output
_____no_output_____
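###Markdown
A quick sanity check of `TeztoQoutput` with made-up pool sizes (hypothetical numbers, not live on-chain data): swapping 1 tez against a 1,000 tez / 2,000 token pool should return slightly less than 2 tokens because of the 0.3% fee and the price impact
###Code
example_tez_pool = 1000 * 1000000    # 1,000 tez expressed in mutez
example_token_pool = 2000 * 1000000  # 2,000 tokens, assuming 6 decimals
print(TeztoQoutput(1, example_tez_pool, example_token_pool))
###Output
_____no_output_____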
###Markdown
Let's calculate the rate
###Code
def ex_rate_tq(amount, tez_pool, token_pool):
    return amount / TeztoQoutput(amount, tez_pool, token_pool)
def pair(qcontract,scontract,tezAmount):
q_tez_pool,q_token_pool = Qpools(qcontract)
q=TeztoQoutput(tezAmount,q_tez_pool,q_token_pool)
print("Quipu_rate: ",q)
s_tez_pool,s_token_pool = Spools(scontract)
s=TeztoQoutput(tezAmount,s_tez_pool,s_token_pool)
print("Spicy_rate: ",s)
###Output
_____no_output_____
###Markdown
Let's add the calculation of the reverse rate on the exchange and see the possible profit
###Code
def pair(qcontract,scontract,tezAmount):
q_tez_pool,q_token_pool = Qpools(qcontract)
q=TeztoQoutput(tezAmount,q_tez_pool,q_token_pool)
print("Quipu_rate: ",q)
s_tez_pool,s_token_pool = Spools(scontract)
s=TeztoQoutput(tezAmount,s_tez_pool,s_token_pool)
print("Spicy_rate: ",s)
#Exchange rate
reverseSpicy = tezAmount/s
print("Spicy reverse: ", reverseSpicy)
print(tezAmount - tezAmount*q*reverseSpicy)
###Output
_____no_output_____
###Markdown
Important!!! Transaction fees are not included
###Code
qcontract="KT1X3zxdTzPB9DgVzA3ad6dgZe9JEamoaeRy"
scontract="KT1VLvdV1u268eVzBFEFr72JgEKhq2MU6NqJ"
tezAmount= 1
pair(qcontract,scontract,tezAmount)
###Output
Quipu_rate: 1.7600589249150516
Spicy_rate: 1.7815362858469792
Spicy reverse: 0.5613132934446964
0.01205552819920086
|
StraddleMovementPredictor.ipynb | ###Markdown
Options Straddle Stock Price Predictor This library and script allow the user to see the current real-time market price of a straddle position. Using this information, the script provides the user with a visualization of the predicted price movement for a given stock. It also provides the exact percentage move being priced into the options chain in real time.
###Code
### LIBRARIES AND PACKAGES
# BASIC DATA HANDLING LIBRARIES
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import figure
import datetime as dt
import time
from dateutil.parser import parse
# FINANCE LIBRARIES
import yahoo_fin.stock_info as si
from yahoo_fin.options import *
from yahoo_fin import news
import yfinance as yf
import opstrat as op
# OPTIONS AND SETTINGS
pd.set_option("display.max_rows", None, "display.max_columns", None)
def get_straddle_predcition(ticker):
### GET EXPIRATION DATES AND OPTIONS CHAIN DATA
# Grabs options expiration dates
expiration_dates = get_expiration_dates(ticker)
chain = get_options_chain(ticker, expiration_dates[2])
# Stores Call and Put chains in Dataframes
calls_df = pd.DataFrame(chain['calls'])
puts_df = pd.DataFrame(chain['puts'])
# Rounding function for underlying price and strike price
def round_by5(x, base=5):
high = base * np.round(x/base)
low = base * np.round(x/base) - base
return int(low), int(high)
### GET CURRENT PRICE OF UNDERLYING
underlying = yf.Ticker(ticker)
lastclose = pd.Series(underlying.history(period='1d')['Close'])
price = lastclose.values
rounded_price = round(price[0], 2)
### DISPLAY AT THE MONEY OPTIONS
# ATM call
high_strike = (calls_df.where(calls_df['Strike'] == round_by5(price)[1]))
atm_call = (high_strike.dropna())
# ATM put
high_strike = (puts_df.where(puts_df['Strike'] == round_by5(price)[1]))
atm_put = (high_strike.dropna())
### VISUALIZE STRADDLE
# Format Date
parsed_date = parse(expiration_dates[2])
parsed_date_str = str(parsed_date).strip(' 00:00:00')
# Plot Straddle
op_1={'op_type': 'c', 'strike':round_by5(price)[1], 'tr_type': 'b'}
op_2={'op_type': 'p', 'strike':round_by5(price)[1], 'tr_type': 'b'}
op.yf_plotter(ticker=ticker, exp=parsed_date_str, op_list=[op_1, op_2])
### CALCULATE ANTICIAPTED MOVE
# Straddle Cost
call_price = atm_call['Last Price']
put_price = atm_put['Last Price']
    straddle_float = call_price + put_price  # straddle = ATM call + ATM put
straddle_cost = straddle_float.to_numpy()[0] * 100
# Calculate anticipated move
block_share_cost = price * 100
anticipated_move = (straddle_cost / block_share_cost)
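    # Worked example with hypothetical numbers: underlying at $100 gives a block cost of
    # $10,000; an $8 straddle costs $800 per contract, so anticipated_move = 800 / 10,000 = 8%,
    # i.e. an expected range of roughly $92 - $108 by expiry.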
#format_anticiapted_move = anticipated_move.astype(float).tolist()
pct_anticipated_move = (anticipated_move * 100)
pct_move = np.round(pct_anticipated_move[0], 2)
# Upper and Lower price boundaries
upper = (price*anticipated_move + price)
upper_price = round(upper[0], 2)
lower = (price - (price*anticipated_move))
lower_price = round(lower[0], 2)
print()
print(ticker, 'current price:', rounded_price)
print('Total cost of entering into an ATM straddle position: $', round(straddle_cost, 2))
print('Anticipated move for', ticker, 'stock by:', parsed_date_str, 'is approximately:', pct_move,'%')
print('According to this prediction, the price is predicted to stay between: $',lower_price,'and: $', upper_price)
get_straddle_predcition('NVDA')
###Output
_____no_output_____ |
notebooks/analysis_250_models/.ipynb_checkpoints/03_Feature_Extraction-checkpoint.ipynb | ###Markdown
SkinAnaliticAI, Skin Cancer Detection with AI Deep Learning __Evaluation of Harvard Dataset with different AI classification techniques using the FastClassAI pipeline__Author: __Pawel Rosikiewicz__ [email protected] License: __MIT__ https://opensource.org/licenses/MIT Copyright (C) 2021.01.30 Pawel Rosikiewicz standard imports
###Code
import os # allow changing, and navigating files and folders,
import sys
import shutil
import re # module to use regular expressions,
import glob # lists names in folders that match Unix shell patterns
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings("ignore")
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# setup basedir
basedir = os.path.dirname(os.getcwd())
os.chdir(basedir)
sys.path.append(basedir)
# set up paths for the project
PATH_raw = os.path.join(basedir, "data/raw")
PATH_interim = os.path.join(basedir, "data/interim")
PATH_models = os.path.join(basedir, "models")
PATH_interim_dataset_summary_tables = os.path.join(PATH_interim, "dataset_summary_tables") # create in that notebook,
# load functions,
from src.utils.feature_extraction_tools import encode_images
# load configs
from src.configs.project_configs import CLASS_DESCRIPTION # information on each class, including descriptive class name and diagnostic description - used to help with the project
from src.configs.tfhub_configs import TFHUB_MODELS # names of TF hub modules that I preselected for feature extraction, with all relevant info
from src.configs.dataset_configs import DATASET_CONFIGS # names created for classes, assigned to the original ones, and colors assigned to these classes
from src.configs.dataset_configs import CLASS_LABELS_CONFIGS # names created for classes, assigned to the original ones, and colors assigned to these classes
from src.configs.dataset_configs import DROPOUT_VALUE # str, special value to indicate samples to remove in class labels
from src.configs.config_functions import DEFINE_DATASETS # function that creates dataset subset collections for one dataset (custom made for this project)
# set project variables
PROJECT_NAME = "SkinAnaliticAI_Harvard_dataset_evaluation" #
DATASET_NAME = "HAM10000" # name used in config files to identify all info on that dataset variant
DATASET_VARIANTS = DATASET_CONFIGS[DATASET_NAME]["labels"] # class labels that will be used, SORT_FILES_WITH must be included
###Output
_____no_output_____
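###Markdown
Before running the (fairly slow) feature-extraction step below, it can be useful to list the dataset variants and TF Hub modules that will be processed; a small sketch using the config objects loaded above:
###Code
for dataset_variant in DATASET_VARIANTS:
    print("dataset variant:", f"{DATASET_NAME}__{dataset_variant}")
for key, cfg in TFHUB_MODELS.items():
    print(f" - {key}: module={cfg['module_name']}, input size={cfg['input_size']}")
###Output
_____no_output_____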
###Markdown
FEATURE EXTRACTION
###Code
# preset values
generator_batch_size = 3000 # no more than 3000 images will be taken, but we expect no more than 2000 in this task
use_url = "no" # the script is adapted only to use sys.path, but the configs carry urls and a url can be used with the feature extraction function
# extract features from images in each dataset variant using one or more tf hub modules,
for dv_i, dataset_variant in enumerate(DATASET_VARIANTS):
print(f"\n- {dv_i} - Extracting features from: {dataset_variant}")
    # find names of train/valid/test subsets in the dataset folder,
os.chdir(os.path.join(PATH_interim, f"{DATASET_NAME}__{dataset_variant}"))
subset_names_to_encode = []
for file in glob.glob(f"[train|valid|test]*"):
subset_names_to_encode.append(file)
    # Create lists with info required for feature extraction from images
    'this step is super useful when many models are used for feature extraction'
tfmodules = list(TFHUB_MODELS.keys()) # names of tf hub models used
module_names = [TFHUB_MODELS[x]['module_name'] for x in tfmodules]
module_file_names = [TFHUB_MODELS[x]['file_name'] for x in tfmodules]
img_imput_size = [TFHUB_MODELS[x]['input_size'] for x in tfmodules]
    # extract features from images in each subset, and store them together as one batch array,
for i, (one_module_name, one_module_file_name, one_img_input_size) in enumerate(zip(module_names, module_file_names, img_imput_size)):
'''
        all data subsets found in load_dir will be encoded automatically,
        - a logfile will be created for the dataset
        - a batch_labels csv file and an npz file with encoded features will be
          created for each data subset
'''
print("\n ................................................")
print(f" - {dv_i}/{i} module: {one_module_name}")
print(f" - {dv_i}/{i} filename or url: {one_module_file_name}")
print(f" - {dv_i}/{i} RGB image size : {one_img_input_size}")
print(f" - {dv_i}/{i} datset subsets : {subset_names_to_encode}")
print(f" - Cataloging subsets, then extracting features from all images")
print(f" - Important: Each subset will be saved as one matrix")
print("\n")
        # I am using modules saved in computer memory, thus I need to build the full path to them,
if use_url=="no":
one_module_full_path = os.path.join(PATH_models, one_module_file_name)
else:
one_module_full_path = one_module_file_name # here I am using module url, (no path)
# extract features
encode_images(
            # .. dataset name & directories,
dataset_name = f"{DATASET_NAME}__{dataset_variant}",# name used when saving encoded files, logfiles and other things, related to encoding,
            subset_names = subset_names_to_encode,# list, just names of files in the load_dir, if any,
load_dir = os.path.join(PATH_interim, f"{DATASET_NAME}__{dataset_variant}"), # full path to input data, ie. file folder with either folders with images names after class names, or folders with subsetnames, and folders names after each class in them,
save_dir = os.path.join(PATH_interim, f"{DATASET_NAME}__{dataset_variant}__extracted_features"), # all new files, will be saved as one batch, with logfile, if None, load_dir will be used,
# .. encoding module parameters,
module_name = one_module_name, # name used when saving files
module_location = one_module_full_path, # full path to a given module, or url,
img_target_size = one_img_input_size, # image resolution in pixels,
            generator_batch_size = generator_batch_size, # must be larger than or equal to the size of the largest subset
generator_shuffle = False,
# .. other,
save_files = True,
verbose = False
)
###Output
- 0 - Extracting features from: Cancer_Detection_And_Classification
................................................
- 0/0 module: MobileNet_v2
- 0/0 filename or url: imagenet_mobilenet_v2_100_224_feature_vector_2
- 0/0 RGB image size : (224, 224)
- 0/0 datset subsets : ['train_05', 'train_02', 'valid_01', 'train_03', 'train_04', 'test_01', 'train_01', 'train_06', 'valid_02', 'train_07', 'test_02']
- Cataloging subsets, then extracting features from all images
- Important: Each subset will be saved as one matrix
Found 744 images belonging to 7 classes.
Found 742 images belonging to 7 classes.
Found 740 images belonging to 7 classes.
Found 742 images belonging to 7 classes.
Found 744 images belonging to 7 classes.
Found 367 images belonging to 7 classes.
Found 742 images belonging to 7 classes.
Found 738 images belonging to 7 classes.
Found 741 images belonging to 7 classes.
Found 751 images belonging to 7 classes.
Found 367 images belonging to 7 classes.
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
................................................
- 0/1 module: BiT_M_Resnet101
- 0/1 filename or url: bit_m-r101x1_1
- 0/1 RGB image size : (224, 224)
- 0/1 datset subsets : ['train_05', 'train_02', 'valid_01', 'train_03', 'train_04', 'test_01', 'train_01', 'train_06', 'valid_02', 'train_07', 'test_02']
- Cataloging subsets, then extracting features from all images
- Important: Each subset will be saved as one matrix
Found 744 images belonging to 7 classes.
Found 742 images belonging to 7 classes.
Found 740 images belonging to 7 classes.
Found 742 images belonging to 7 classes.
Found 744 images belonging to 7 classes.
Found 367 images belonging to 7 classes.
Found 742 images belonging to 7 classes.
Found 738 images belonging to 7 classes.
Found 741 images belonging to 7 classes.
Found 751 images belonging to 7 classes.
Found 367 images belonging to 7 classes.
- 1 - Extracting features from: Cancer_Risk_Groups
................................................
- 1/0 module: MobileNet_v2
- 1/0 filename or url: imagenet_mobilenet_v2_100_224_feature_vector_2
- 1/0 RGB image size : (224, 224)
- 1/0 datset subsets : ['train_05', 'train_02', 'valid_01', 'train_03', 'train_04', 'test_01', 'train_01', 'train_06', 'valid_02', 'train_07', 'test_02']
- Cataloging subsets, then extracting features from all images
- Important: Each subset will be saved as one matrix
Found 743 images belonging to 3 classes.
Found 742 images belonging to 3 classes.
Found 741 images belonging to 3 classes.
Found 742 images belonging to 3 classes.
Found 743 images belonging to 3 classes.
Found 369 images belonging to 3 classes.
Found 741 images belonging to 3 classes.
Found 740 images belonging to 3 classes.
Found 741 images belonging to 3 classes.
Found 746 images belonging to 3 classes.
Found 370 images belonging to 3 classes.
INFO:tensorflow:Saver not created because there are no variables in the graph to restore
................................................
- 1/1 module: BiT_M_Resnet101
- 1/1 filename or url: bit_m-r101x1_1
- 1/1 RGB image size : (224, 224)
- 1/1 datset subsets : ['train_05', 'train_02', 'valid_01', 'train_03', 'train_04', 'test_01', 'train_01', 'train_06', 'valid_02', 'train_07', 'test_02']
- Cataloging subsets, then extracting features from all images
- Important: Each subset will be saved as one matrix
Found 743 images belonging to 3 classes.
Found 742 images belonging to 3 classes.
Found 741 images belonging to 3 classes.
Found 742 images belonging to 3 classes.
Found 743 images belonging to 3 classes.
Found 369 images belonging to 3 classes.
Found 741 images belonging to 3 classes.
Found 740 images belonging to 3 classes.
Found 741 images belonging to 3 classes.
Found 746 images belonging to 3 classes.
Found 370 images belonging to 3 classes.
|
notebooks/predict_flight_delays.ipynb | ###Markdown
Predicting Flight DelaysIn this notebook, we use the combined flight delay and weather data we have created to create and evaluate models to predict flight delays.**Note** the full flight delay dataset is very large (over 80GB uncompressed), so we are working with a smaller sample dataset. Hence our results may not be a true reflection of the results on the full dataset. Import required modulesImport and configure the required modules.
###Code
!pip install seaborn scikit-learn > /dev/null 2>&1
# Define required imports
import json
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_theme(style='darkgrid', palette='deep')
# These set pandas max column and row display in the notebook
pd.set_option('display.max_columns', 50)
pd.set_option('display.max_rows', 50)
MODEL_EXPORT_FOLDER = 'models'
from pathlib import Path
export_path = Path(MODEL_EXPORT_FOLDER)
export_path.mkdir(parents=True, exist_ok=True)
###Output
_____no_output_____
###Markdown
Read the dataWe start by reading in the merged flight delay and weather data
###Code
flight_path = 'data/jfk_flight_weather_features.csv'
flight_data = pd.read_csv(flight_path, parse_dates=['flight_date'])
flight_data.head()
flight_data['dest'].value_counts().tail(10)
dest_to_drop = ['MKE', 'HYA', 'ALB', 'PSP', 'BDL', 'TUS', 'DAB', 'BHM']
flight_data[flight_data['dest'].isin(dest_to_drop)]
flight_data.drop(flight_data[flight_data['dest'].isin(dest_to_drop)].index, inplace=True)
flight_data
###Output
_____no_output_____
###Markdown
Create train / test data splitThe first step in building our models is to split the dataset into training and test sets. We use a portion of the data for training, and another portion of data for our test sets.If we instead trained a model on the full dataset, the model would learn to be very good at making predictions on that particular dataset, essentially just copying the answers it knows. However, when presented with data the model has not seen, it would perform poorly since it has not learned how to generalize its answers.By training on a portion of the dataset and testing the model's performance on another portion of the dataset (data which the model has not seen in training), we try to avoid our models "over-fitting" the dataset and make them better at prediction when given unseen, future data. This process of splitting the dataset and evaluating a model's performance on "held-out" datasets is commonly known as _cross-validation_.By default here we use 80% of the data for the training set and 20% for the test set.**Note** for simplicity here we perform a random split. Technically, we have some time-dependent information leakage, since for earlier records, the model can use data from the future in training. In reality, a model at that point in time would not have information about the future available for training. For a better evaluation of the model performance on fully unseen, new data, the test set should be generated from _future_ data occurring after the time window in the training set.
###Code
from sklearn.model_selection import train_test_split
# Split the dataset into 80% training and 20% test sets, stratified by the 'delayed' field
df_train, df_test = train_test_split(
flight_data, train_size=0.8, random_state=24, stratify=flight_data[['delayed']])
# specify the target variable
y_train = df_train['delayed'].values
y_test = df_test['delayed'].values
print('Training set: {} rows'.format(len(df_train)))
print('Test set: {} rows'.format(len(df_test)))
###Output
_____no_output_____
###Markdown
Encode categorical variablesNext, we want to encode the various _categorical_ features we have - such as the flight departure time bucket, airline and airport ids, and so on - into numerical representations. We do this by assigning integer ids to each unique feature value. This is known as ordinal encoding.Note that certain models (e.g. linear models) will interpret these numerical values as having an ordinal structure. However, for our demonstration purposes we will use tree-based models, which can handle these types of integer ids directly. For linear models, we would prefer to use one-hot encoding for categorical features.
###Code
from sklearn.preprocessing import OrdinalEncoder
# specify columns for raw categorical features
cat_columns = [
'month',
'day_of_month',
'day_of_week',
'airline_name',
'dest',
'dep_time_bin',
'distance_bin'
]
# extract categorical data columns for training set
df_train_cat = df_train[cat_columns]
# extract categorical data columns for test set
df_test_cat = df_test[cat_columns]
ord_enc = OrdinalEncoder()
# fit and encode training features
X_train_cat = ord_enc.fit_transform(df_train_cat)
# encode test features
X_test_cat = ord_enc.transform(df_test_cat)
print('Training set categorical features: {} rows, {} features' .format(X_train_cat.shape[0], X_train_cat.shape[1]))
print('Test set categorical features: {} rows, {} features' .format(X_test_cat.shape[0], X_test_cat.shape[1]))
###Output
_____no_output_____
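###Markdown
For reference, if we were fitting a linear model instead, a one-hot encoding of the same columns could be produced as in the sketch below (it is not used further in this notebook):
###Code
from sklearn.preprocessing import OneHotEncoder

# One-hot encode the categorical columns (returns a sparse matrix)
one_hot_enc = OneHotEncoder(handle_unknown='ignore')
X_train_cat_ohe = one_hot_enc.fit_transform(df_train_cat)
X_test_cat_ohe = one_hot_enc.transform(df_test_cat)
print('One-hot encoded training set categorical features:', X_train_cat_ohe.shape)
###Output
_____no_output_____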
###Markdown
Encode numerical variablesThe next step is to encode numerical features. Depending on the models used, it can be very important to scale / normalize numerical features - such as `wind_speed` or `precip`. Again, linear models and neural networks are a good example of this. In our case we will use tree-based models, which again do not require feature scaling, hence we can use these numerical features directly without pre-processing. **Note** that the weather type features are also categorical. However, we have already encoded these as binary values in our pre-processing step, hence we can now treat these features as numerical.
###Code
num_columns = [
'visibility',
'wind_speed',
'wind_gust_speed',
'precip',
'rain',
'ice_pellets',
'mist',
'snow',
'drizzle',
'haze',
'fog',
'thunderstorm',
'smoke',
'unknown_precipitation'
]
# extract numerical data columns for training set
X_train_num = df_train[num_columns].values
# extract numerical data columns for validation set
X_test_num = df_test[num_columns].values
print('Training set numerical features: {} rows, {} features' .format(X_train_num.shape[0], X_train_num.shape[1]))
print('Test set numerical features: {} rows, {} features' .format(X_test_num.shape[0], X_test_num.shape[1]))
###Output
_____no_output_____
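###Markdown
Again for reference only: scaling the numerical features for a model that needs it (e.g. a logistic regression) would look like the sketch below; the tree-based models used later work on the raw values:
###Code
from sklearn.preprocessing import StandardScaler

# Fit the scaler on the training data only, then apply it to both sets
scaler = StandardScaler()
X_train_num_scaled = scaler.fit_transform(X_train_num)
X_test_num_scaled = scaler.transform(X_test_num)
print('Mean of first scaled training feature (should be ~0):', round(X_train_num_scaled[:, 0].mean(), 3))
###Output
_____no_output_____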
###Markdown
Combine categorical and numerical featuresWe can now combine the two sets of features by concatenating them ("horizontally stacking"):
###Code
X_train = np.hstack((X_train_cat, X_train_num))
X_test = np.hstack((X_test_cat, X_test_num))
print('Training set all features: {} rows, {} features' .format(X_train.shape[0], X_train.shape[1]))
print('Test set all features: {} rows, {} features' .format(X_test.shape[0], X_test.shape[1]))
###Output
_____no_output_____
###Markdown
Train and evaluate modelsNow that we have pre-processed all our features into numerical representations, we can pass them to our machine learning models.For simplicity, we will evaluate 3 tree-based models: a single decision tree; a random forest and a gradient-boosting tree (both of these are "ensemble" models made up of many smaller sub-models, typically themselves single decision trees).Tree ensemble models are very flexible and powerful, and typically perform well "out of the box", in particular on tabular datasets such as we have here. As we have seen, they also require less feature pre-processing and engineering in general than, for example, linear models.
###Code
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score, cross_validate
dt = DecisionTreeClassifier()
rf = RandomForestClassifier()
gb = GradientBoostingClassifier()
###Output
_____no_output_____
###Markdown
We have split our dataset into a training and test set. However, the test set itself should never be directly used in model training, but only to perform a final model evaluation. This gives an estimate on how the model might perform in the "real world".We would still like to perform model selection, which means we need to evaluate our models using the training set in some way. To avoid over-fitting on the training set, as well as to give a good estimate on how the model may perform on our test set, we will use K-fold cross-validation on our training set.This splits the dataset into `k` (in our case `5`) non-overlapping subsets (`folds`). In turn, the model is trained on 4 of these (80% of training data) and evaluated on 1 (20% of training data). This is repeated `k` times and the evaluation scores are averaged across each of the `k` runs. This averaged metric typically gives a fairly good indication of how the model performs on unseen data.`scikit-learn` provides us this functionality, built-in and easy to use!**Note** As we see in the analysis notebook, we are dealing with some degree of class imbalance - on-time flights are far more prevalent compared to delayed flights (80% / 20% split). So, we need to be cautious when evaluating the performance of such models. For example, if we use `accuracy` as a metric, then a simple rule that classifies all flights as `on-time` would achieve 80% accuracy, which sounds very good! However, the model is completely unable to actually predict whether a flight will be delayed, so is useless for any real-world application.A common metric used for binary classification is the area under the ROC curve (`roc_auc`). However, this metric can sometimes provide an unclear picture for imbalanced classes.There are a few metrics that try to alleviate this problem for binary classification problems. We will be using `F1 score` as our metric for selecting the model to use, since it can handle the class imbalance problem. _Note_ that the selection of metric also depends on the particular use case.
###Code
metric = 'f1'
scores = cross_val_score(dt, X_train, y_train, cv=5, scoring=metric)
dt_score = np.mean(scores)
scores = cross_val_score(rf, X_train, y_train, cv=5, scoring=metric)
rf_score = np.mean(scores)
scores = cross_val_score(gb, X_train, y_train, cv=5, scoring=metric)
gb_score = np.mean(scores)
cv_scores = [dt_score, rf_score, gb_score]
plt.figure(figsize=(16, 6))
sns.barplot(x=['DecisionTreeClassifier', 'RandomForestClassifier', 'GradientBoostingClassifier'], y=cv_scores)
plt.show()
print('Average {} for DecisionTreeClassifier: {}'.format(metric, dt_score))
print('Average {} for RandomForestClassifier: {}'.format(metric, rf_score))
print('Average {} for GradientBoostingClassifier: {}'.format(metric, gb_score))
###Output
_____no_output_____
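###Markdown
Before reading too much into these scores, it is worth making the class-imbalance caveat above concrete: a trivial baseline that always predicts the majority class ("on-time") reaches roughly 80% accuracy but an F1 score of zero; a quick sketch using scikit-learn's `DummyClassifier`:
###Code
from sklearn.dummy import DummyClassifier

dummy = DummyClassifier(strategy='most_frequent')
print('Baseline accuracy:', np.mean(cross_val_score(dummy, X_train, y_train, cv=5, scoring='accuracy')))
print('Baseline F1:      ', np.mean(cross_val_score(dummy, X_train, y_train, cv=5, scoring='f1')))
###Output
_____no_output_____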
###Markdown
Based on this, we will select the `DecisionTreeClassifier`.**Note** based on the `roc_auc` metric, we would have selected the `GradientBoostingClassifier` - try it out in the cells above to see and then compare the model performance later on.We can also evaluate the impact of adding our weather features on model performance:
###Code
scores = cross_val_score(dt, X_train_cat, y_train, cv=5, scoring=metric)
cat_score = np.mean(scores)
scores = cross_val_score(dt, X_train_num, y_train, cv=5, scoring=metric)
num_score = np.mean(scores)
scores = cross_val_score(dt, X_train, y_train, cv=5, scoring=metric)
all_score = np.mean(scores)
cv_scores = [cat_score, num_score, all_score]
plt.figure(figsize=(16, 6))
sns.barplot(x=['Flight features', 'Weather features', 'Flight + Weather features'], y=cv_scores)
plt.show()
print('Average {} for only flight delay features: {}'.format(metric, cat_score))
print('Average {} for only weather features: {}'.format(metric, num_score))
print('Average {} for all features: {}'.format(metric, all_score))
###Output
_____no_output_____
###Markdown
We see that using only weather features does little better than random guessing, while adding weather features to the flight features increases our metric by around `0.01`. This is not a very large amount, but it does indicate that information about weather helps a little with predictions. In some applications, even small increases in model performance can be significant.Finally, we re-train the model on the full training dataset and perform a final classification evaluation on the test set.
###Code
from sklearn.metrics import confusion_matrix, roc_auc_score, f1_score, classification_report
from sklearn.metrics import plot_roc_curve, plot_confusion_matrix, plot_precision_recall_curve
# fit on full data
dt.fit(X_train, y_train)
y_prob = dt.predict_proba(X_test)[:, 1]
y_pred = dt.predict(X_test)
f1_test = f1_score(y_test, y_pred)  # F1 is computed on hard class predictions, not probabilities
roc_auc_test = roc_auc_score(y_test, y_prob)
print('Final {} for test set: {}'.format(metric, f1_test))
###Output
_____no_output_____
###Markdown
We export the trained model and a few example rows from the test dataset, for potential use by downstream stages.
###Code
# save the model file for downstream tasks
from joblib import dump
dump(dt, '{}/model.joblib'.format(MODEL_EXPORT_FOLDER))
# also save a few example rows
np.save('data/test_rows.npy', X_test[:10])
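# Quick sanity check (a sketch, assuming the files just written above): reload the
# exported model and score the saved example rows, as a downstream consumer would
from joblib import load
reloaded_model = load('{}/model.joblib'.format(MODEL_EXPORT_FOLDER))
example_rows = np.load('data/test_rows.npy')
print('Predictions for saved example rows:', reloaded_model.predict(example_rows))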
# export metrics for KFP
metrics = {
'metrics': [
{
'name': 'f1_score',
'numberValue': f1_test,
'format': 'RAW'
},
{
'name': 'roc_auc_score',
'numberValue': roc_auc_test,
'format': 'RAW'
}
]
}
with open('mlpipeline-metrics.json', 'w') as f:
json.dump(metrics, f)
fig = plt.figure(figsize=(16, 6))
plt.subplot(121)
plot_roc_curve(dt, X_test, y_test, ax=fig.gca())
plt.subplot(122)
plot_precision_recall_curve(dt, X_test, y_test, ax=fig.gca())
plt.show()
print(classification_report(y_test, y_pred, target_names=['On-time', 'Delayed']))
cm = confusion_matrix(y_test, y_pred)
class_labels = ['On-time', 'Delayed']
labels = ['{0:0.0f}'.format(value) for value in
cm.flatten()]
labels = np.asarray(labels).reshape(2,2)
fig = plt.figure(figsize=(12, 8))
chart = sns.heatmap(
cm, annot=labels, fmt='', cmap='Blues',
xticklabels=class_labels, yticklabels=class_labels)
chart.set_xlabel('Predicted label')
chart.set_ylabel('True label')
chart.set_title('Confusion Matrix')
plt.show()
# export confusion matrix for KFP
cm_data = []
for target_index, target_row in enumerate(cm):
for predicted_index, count in enumerate(target_row):
cm_data.append((class_labels[target_index], class_labels[predicted_index], count))
ui_metadata = {
'outputs' : [{
'type': 'confusion_matrix',
'format': 'csv',
'schema': [
{'name': 'target', 'type': 'CATEGORY'},
{'name': 'predicted', 'type': 'CATEGORY'},
{'name': 'count', 'type': 'NUMBER'},
],
'source': pd.DataFrame(cm_data).to_csv(header=False, index=False),
'storage': 'inline',
'labels': ['Delayed', 'On-time'],
}]
}
with open('mlpipeline-ui-metadata.json', 'w') as f:
json.dump(ui_metadata, f)
###Output
_____no_output_____
###Markdown
If we investigate the various classification charts and reports, we can see that our problem of classifying whether a flight will be delayed is a tricky one.As one might expect, the model predicts most `on-time` flights as `on-time` (80%). However, it struggles to correctly predict `delayed` flights, instead classifying them as `on-time`. In fact it only correctly predicts delays 28% of the time! (this is the `recall` figure for `Delayed` in the classification report table). When it predicts a delayed flight, it is correct only 25% of the time (this is the `precision` field).Overall, we would say that our model is doing a mediocre job of predicting flight delays - we either need to do a lot more model tuning and hyper-parameter selection, or use more data and better features.Perhaps you can try to find ways to improve the performance!Finally, we can generate a list of "feature importances" to see what the model is focusing on for making predictions:
###Code
feat_names = list(df_train_cat.columns.values) + list(df_train[num_columns].columns.values)
feat_nb = dt.feature_importances_
plt.figure(figsize=(16, 8))
chart = sns.barplot(x=feat_names, y=feat_nb, palette='Blues')
chart.set_xticklabels(
chart.get_xticklabels(),
rotation=45,
horizontalalignment='right',
fontweight='light',
fontsize='large'
)
plt.show()
###Output
_____no_output_____ |
Mundo01/Desafio033.ipynb | ###Markdown
**Desafio 033****Python 3 - 1º Mundo**Description: Write a program that reads three numbers and shows which one is the largest and which one is the smallest.Link: https://www.youtube.com/watch?v=a_8FbW5oH6I
###Code
num1 = float(input('Digite o primeiro número: '))
num2 = float(input('Digite o segundo número: '))
num3 = float(input('Digite o terceiro número: '))
if num1 > num2 and num1 > num3:
print(f'O maior número é o {num1:.0f}')
if num2 > num1 and num2 > num3:
print(f'O maior número é o número {num2:.0f}')
if num3 > num1 and num3 > num2:
print(f'O maior número e o {num3:.0f}')
if num1 < num2 and num1 < num3:
print(f'O menor número é o {num1:.0f}')
if num2 < num1 and num2 < num3:
print(f'O menor número é o número {num2:.0f}')
if num3 < num1 and num3 < num2:
print(f'O menor número e o {num3:.0f}')
###Output
_____no_output_____ |
experiments/1_incident_points_3_ambo/10_qambo_noisy_bagging_3dqn.ipynb | ###Markdown
Noisy Bagging Duelling Double Deep Q Learning - A simple ambulance dispatch point allocation model Reinforcement learning introduction RL involves:* Trial and error search* Receiving and maximising reward (often delayed)* Linking state -> action -> reward* Must be able to sense something of their environment* Involves uncertainty in sensing and linking action to reward* Learning -> improved choice of actions over time* All models find a way to balance best predicted action vs. exploration Elements of RL* *Environment*: all observable and unobservable information relevant to us* *Observation*: sensing the environment* *State*: the perceived (or perceivable) environment * *Agent*: senses environment, decides on action, receives and monitors rewards* *Action*: may be discrete (e.g. turn left) or continuous (accelerator pedal)* *Policy* (how to link state to action; often based on probabilities)* *Reward signal*: aim is to accumulate maximum reward over time* *Value function* of a state: prediction of likely/possible long-term reward* *Q*: prediction of likely/possible long-term reward of an *action** *Advantage*: The difference in Q between actions in a given state (sums to zero for all actions)* *Model* (optional): a simulation of the environment Types of model* *Model-based*: have model of environment (e.g. a board game)* *Model-free*: used when environment not fully known* *Policy-based*: identify best policy directly* *Value-based*: estimate value of a decision* *Off-policy*: can learn from historic data from other agent* *On-policy*: requires active learning from current decisions Duelling Deep Q Networks for Reinforcement LearningQ = The expected future rewards discounted over time. This is what we are trying to maximise.The aim is to teach a network to take the current state observations and recommend the action with greatest Q.Duelling is very similar to Double DQN, except that the policy net splits into two. One component reduces to a single value, which will model the state *value*. The other component models the *advantage*, the difference in Q between different actions (the mean value is subtracted from all values, so that the advantage always sums to zero). These are aggregated to produce Q for each action. Q is learned through the Bellman equation, where the Q of any state and action is the immediate reward achieved + the discounted maximum Q value (the best action taken) of the next state's best action, where gamma is the discount rate.$$Q(s,a)=r + \gamma \max_{a'} Q(s',a')$$ Key DQN components General method for Q learning:Overall aim is to create a neural network that predicts Q. Improvement comes from improved accuracy in predicting 'current' understood Q, and in revealing more about Q as knowledge is gained (some rewards only discovered after time). Target networks are used to stabilise models, and are only updated at intervals. Changes to Q values may lead to changes in closely related states (i.e. states close to the one we are in at the time) and as the network tries to correct for errors it can become unstable and suddenly lose significant performance. Target networks (e.g. to assess Q) are updated only infrequently (or gradually), so do not have this instability problem. Training networksDouble DQN contains two networks. 
This amendment, from simple DQN, is to decouple training of Q for the current state and target Q derived from the next state, which are closely correlated when comparing input features.The *policy network* is used to select action (action with best predicted Q) when playing the game.When training, the predicted best *action* (best predicted Q) is taken from the *policy network*, but the *policy network* is updated using the predicted Q value of the next state from the *target network* (which is updated from the policy network less frequently). So, when training, the action is selected using Q values from the *policy network*, but the *policy network* is updated to better predict the Q value of that action from the *target network*. The *policy network* is copied across to the *target network* every *n* steps (e.g. 1000). Bagging (Bootstrap Aggregation)Each network is trained from the same memory, but each has different starting weights and is trained on different bootstrap samples from that memory. In this example actions are chosen randomly from each of the networks (an alternative could be to take the most common action recommended by the networks, or an average output). This bagging method may also be used to have some measure of uncertainty of action by looking at the distribution of actions recommended from the different nets. Bagging may also be used to aid exploration during stages where networks are providing different suggested actions. Noisy layersNoisy layers are an alternative to epsilon-greedy exploration (here, we leave the epsilon-greedy code in the model, but set it to reduce to zero immediately after the period of fully random action choice).For every weight in the layer we have a random value that we draw from the normal distribution. This random value is used to add noise to the output. The parameters for the extent of noise for each weight, sigma, are stored within the layer and get trained as part of the standard back-propagation.A modification to normal noisy layers is to use layers with ‘factorized gaussian noise’. This reduces the number of random numbers to be sampled (so is less computationally expensive). There are two random vectors, one with the size of the input, and the other with the size of the output. A random matrix is created by calculating the outer product of the two vectors. ReferencesDouble DQN: van Hasselt H, Guez A, Silver D. (2015) Deep Reinforcement Learning with Double Q-learning. arXiv:150906461 http://arxiv.org/abs/1509.06461Bagging:Osband I, Blundell C, Pritzel A, et al. (2016) Deep Exploration via Bootstrapped DQN. arXiv:160204621 http://arxiv.org/abs/1602.04621Noisy networks:Fortunato M, Azar MG, Piot B, et al. (2019) Noisy Networks for Exploration. arXiv:170610295 http://arxiv.org/abs/1706.10295Code for the noisy layers comes from:Lapan, M. (2020). Deep Reinforcement Learning Hands-On: Apply modern RL methods to practical problems of chatbots, robotics, discrete optimization, web automation, and more, 2nd Edition. Packt Publishing. Code structure
###Code
################################################################################
# 1 Import packages #
################################################################################
from amboworld.environment import Env
import math
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import random
import torch
import torch.nn as nn
import torch.optim as optim
from torch.nn import functional as F
# Use a double ended queue (deque) for memory
# When memory is full, this will replace the oldest value with the new one
from collections import deque
# Supress all warnings (e.g. deprecation warnings) for regular use
import warnings
warnings.filterwarnings("ignore")
################################################################################
# 2 Define model parameters #
################################################################################
# Set whether to display on screen (slows model)
DISPLAY_ON_SCREEN = False
# Discount rate of future rewards
GAMMA = 0.99
# Learing rate for neural network
LEARNING_RATE = 0.003
# Maximum number of game steps (state, action, reward, next state) to keep
MEMORY_SIZE = 10000000
# Sample batch size for policy network update
BATCH_SIZE = 5
# Number of game steps to play before starting training (all random actions)
REPLAY_START_SIZE = 50000
# Number of steps between policy -> target network update
SYNC_TARGET_STEPS = 1000
# Exploration rate (epsilon) is probability of choosing a random action
EXPLORATION_MAX = 1.0
EXPLORATION_MIN = 0.0
# Reduction in epsilon with each game step
EXPLORATION_DECAY = 0.0
# Training episodes
TRAINING_EPISODES = 50
# Set number of parallel networks
NUMBER_OF_NETS = 5
# Results filename
RESULTS_NAME = 'bagging_noisy_d3qn'
# SIM PARAMETERS
RANDOM_SEED = 42
SIM_DURATION = 5000
NUMBER_AMBULANCES = 3
NUMBER_INCIDENT_POINTS = 1
INCIDENT_RADIUS = 2
NUMBER_DISPTACH_POINTS = 25
AMBOWORLD_SIZE = 50
INCIDENT_INTERVAL = 60
EPOCHS = 2
AMBO_SPEED = 60
AMBO_FREE_FROM_HOSPITAL = False
################################################################################
# 3 Define DQN (Duelling Deep Q Network) class #
# (Used for both policy and target nets) #
################################################################################
"""
Code for noisy layers comes from:
Lapan, M. (2020). Deep Reinforcement Learning Hands-On: Apply modern RL methods
to practical problems of chatbots, robotics, discrete optimization,
web automation, and more, 2nd Edition. Packt Publishing.
"""
class NoisyLinear(nn.Linear):
"""
Noisy layer for network.
For every weight in the layer we have a random value that we draw from the
    normal distribution. Parameters for the noise, sigma, are stored within the
    layer and get trained as part of the standard back-propagation.
'register_buffer' is used to create tensors in the network that are not
updated during back-propogation. They are used to create normal
distributions to add noise (multiplied by sigma which is a paramater in the
network).
"""
def __init__(self, in_features, out_features,
sigma_init=0.017, bias=True):
super(NoisyLinear, self).__init__(
in_features, out_features, bias=bias)
w = torch.full((out_features, in_features), sigma_init)
self.sigma_weight = nn.Parameter(w)
z = torch.zeros(out_features, in_features)
self.register_buffer("epsilon_weight", z)
if bias:
w = torch.full((out_features,), sigma_init)
self.sigma_bias = nn.Parameter(w)
z = torch.zeros(out_features)
self.register_buffer("epsilon_bias", z)
self.reset_parameters()
def reset_parameters(self):
std = math.sqrt(3 / self.in_features)
self.weight.data.uniform_(-std, std)
self.bias.data.uniform_(-std, std)
def forward(self, input):
self.epsilon_weight.normal_()
bias = self.bias
if bias is not None:
self.epsilon_bias.normal_()
bias = bias + self.sigma_bias * \
self.epsilon_bias.data
v = self.sigma_weight * self.epsilon_weight.data + self.weight
return F.linear(input, v, bias)
class NoisyFactorizedLinear(nn.Linear):
"""
NoisyNet layer with factorized gaussian noise. This reduces the number of
random numbers to be sampled (so less computationally expensive). There are
two random vectors. One with the size of the input, and the other with the
    size of the output. A random matrix is created by calculating the outer
product of the two vectors.
'register_buffer' is used to create tensors in the network that are not
updated during back-propogation. They are used to create normal
distributions to add noise (multiplied by sigma which is a paramater in the
network).
"""
def __init__(self, in_features, out_features,
sigma_zero=0.4, bias=True):
super(NoisyFactorizedLinear, self).__init__(
in_features, out_features, bias=bias)
sigma_init = sigma_zero / math.sqrt(in_features)
w = torch.full((out_features, in_features), sigma_init)
self.sigma_weight = nn.Parameter(w)
z1 = torch.zeros(1, in_features)
self.register_buffer("epsilon_input", z1)
z2 = torch.zeros(out_features, 1)
self.register_buffer("epsilon_output", z2)
if bias:
w = torch.full((out_features,), sigma_init)
self.sigma_bias = nn.Parameter(w)
def forward(self, input):
self.epsilon_input.normal_()
self.epsilon_output.normal_()
func = lambda x: torch.sign(x) * torch.sqrt(torch.abs(x))
eps_in = func(self.epsilon_input.data)
eps_out = func(self.epsilon_output.data)
bias = self.bias
if bias is not None:
bias = bias + self.sigma_bias * eps_out.t()
noise_v = torch.mul(eps_in, eps_out)
v = self.weight + self.sigma_weight * noise_v
return F.linear(input, v, bias)
class DQN(nn.Module):
    """Deep Q Network. Used for both policy (action) and target (Q) networks."""
def __init__(self, observation_space, action_space):
"""Constructor method. Set up neural nets."""
        # neurons per hidden layer = 2 * max of observations or actions
neurons_per_layer = 2 * max(observation_space, action_space)
# Set starting exploration rate
self.exploration_rate = EXPLORATION_MAX
# Set up action space (choice of possible actions)
self.action_space = action_space
        # First layers will be common to both advantage and value
super(DQN, self).__init__()
self.feature = nn.Sequential(
nn.Linear(observation_space, neurons_per_layer),
nn.ReLU()
)
# Advantage has same number of outputs as the action space
self.advantage = nn.Sequential(
NoisyFactorizedLinear(neurons_per_layer, neurons_per_layer),
nn.ReLU(),
NoisyFactorizedLinear(neurons_per_layer, action_space)
)
# State value has only one output (one value per state)
self.value = nn.Sequential(
nn.Linear(neurons_per_layer, neurons_per_layer),
nn.ReLU(),
nn.Linear(neurons_per_layer, 1)
)
    def act(self, state):
        """Act either randomly or by predicting the action that gives max Q"""
# Act randomly if random number < exploration rate
if np.random.rand() < self.exploration_rate:
action = random.randrange(self.action_space)
else:
# Otherwise get predicted Q values of actions
q_values = self.forward(torch.FloatTensor(state))
# Get index of action with best Q
action = np.argmax(q_values.detach().numpy()[0])
return action
def forward(self, x):
x = self.feature(x)
advantage = self.advantage(x)
value = self.value(x)
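        # Duelling aggregation: Q(s, a) = V(s) + A(s, a) - mean over actions of A(s, a)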
action_q = value + advantage - advantage.mean()
return action_q
################################################################################
# 4 Define policy net training function #
################################################################################
def optimize(policy_net, target_net, memory):
"""
Update model by sampling from memory.
Uses policy network to predict best action (best Q).
Uses target network to provide target of Q for the selected next action.
"""
    # Do not try to train model if memory is less than required batch size
if len(memory) < BATCH_SIZE:
return
# Reduce exploration rate (exploration rate is stored in policy net)
policy_net.exploration_rate *= EXPLORATION_DECAY
policy_net.exploration_rate = max(EXPLORATION_MIN,
policy_net.exploration_rate)
# Sample a random batch from memory
batch = random.sample(memory, BATCH_SIZE)
for state, action, reward, state_next, terminal in batch:
state_action_values = policy_net(torch.FloatTensor(state))
# Get target Q for policy net update
if not terminal:
# For non-terminal actions get Q from policy net
expected_state_action_values = policy_net(torch.FloatTensor(state))
# Detach next state values from gradients to prevent updates
expected_state_action_values = expected_state_action_values.detach()
# Get next state action with best Q from the policy net (double DQN)
policy_next_state_values = policy_net(torch.FloatTensor(state_next))
policy_next_state_values = policy_next_state_values.detach()
best_action = np.argmax(policy_next_state_values[0].numpy())
# Get target net next state
next_state_action_values = target_net(torch.FloatTensor(state_next))
# Use detach again to prevent target net gradients being updated
next_state_action_values = next_state_action_values.detach()
best_next_q = next_state_action_values[0][best_action].numpy()
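            # Bellman update (double DQN): Q(s,a) = r + gamma * Q_target(s', action chosen by policy net)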
updated_q = reward + (GAMMA * best_next_q)
expected_state_action_values[0][action] = updated_q
else:
            # For terminal actions Q = reward (-1)
expected_state_action_values = policy_net(torch.FloatTensor(state))
# Detach values from gradients to prevent gradient update
expected_state_action_values = expected_state_action_values.detach()
# Set Q for all actions to reward (-1)
expected_state_action_values[0] = reward
# Set network to training mode
policy_net.train()
# Reset net gradients
policy_net.optimizer.zero_grad()
# calculate loss
loss_v = nn.MSELoss()(state_action_values, expected_state_action_values)
    # Backpropagate loss
loss_v.backward()
# Update network gradients
policy_net.optimizer.step()
return
################################################################################
# 5 Define memory class #
################################################################################
class Memory():
"""
Replay memory used to train model.
Limited length memory (using deque, double ended queue from collections).
- When memory full deque replaces oldest data with newest.
Holds, state, action, reward, next state, and episode done.
"""
def __init__(self):
"""Constructor method to initialise replay memory"""
self.memory = deque(maxlen=MEMORY_SIZE)
def remember(self, state, action, reward, next_state, done):
"""state/action/reward/next_state/done"""
self.memory.append((state, action, reward, next_state, done))
################################################################################
# 6 Define results plotting function #
################################################################################
def plot_results(run, exploration, score, mean_call_to_arrival,
mean_assignment_to_arrival):
"""Plot and report results at end of run"""
# Set up chart (ax1 and ax2 share x-axis to combine two plots on one graph)
fig = plt.figure(figsize=(6,6))
ax1 = fig.add_subplot(111)
ax2 = ax1.twinx()
# Plot results
lns1 = ax1.plot(
run, exploration, label='exploration', color='g', linestyle=':')
lns2 = ax2.plot(run, mean_call_to_arrival,
label='call to arrival', color='r')
lns3 = ax2.plot(run, mean_assignment_to_arrival,
label='assignment to arrival', color='b', linestyle='--')
# Get combined legend
lns = lns1 + lns2 + lns3
labs = [l.get_label() for l in lns]
ax1.legend(lns, labs, loc='upper center', bbox_to_anchor=(0.5, -0.1), ncol=3)
# Set axes
ax1.set_xlabel('run')
ax1.set_ylabel('exploration')
ax2.set_ylabel('Response time')
filename = 'output/' + RESULTS_NAME +'.png'
plt.savefig(filename, dpi=300)
plt.show()
################################################################################
# 7 Main program #
################################################################################
def qambo():
"""Main program loop"""
############################################################################
# 8 Set up environment #
############################################################################
sim = Env(
random_seed = RANDOM_SEED,
duration_incidents = SIM_DURATION,
number_ambulances = NUMBER_AMBULANCES,
number_incident_points = NUMBER_INCIDENT_POINTS,
incident_interval = INCIDENT_INTERVAL,
number_epochs = EPOCHS,
number_dispatch_points = NUMBER_DISPTACH_POINTS,
incident_range = INCIDENT_RADIUS,
max_size = AMBOWORLD_SIZE,
ambo_kph = AMBO_SPEED,
ambo_free_from_hospital = AMBO_FREE_FROM_HOSPITAL
)
# Get number of observations returned for state
observation_space = sim.observation_size
# Get number of actions possible
action_space = sim.action_number
############################################################################
# 9 Set up policy and target nets #
############################################################################
# Set up policy and target neural nets
policy_nets = [DQN(observation_space, action_space)
for i in range(NUMBER_OF_NETS)]
target_nets = [DQN(observation_space, action_space)
for i in range(NUMBER_OF_NETS)]
best_nets = [DQN(observation_space, action_space)
for i in range(NUMBER_OF_NETS)]
    # Set optimizer, copy weights from policy_net to target, and set target net to evaluation mode
for i in range(NUMBER_OF_NETS):
# Set optimizer
policy_nets[i].optimizer = optim.Adam(
params=policy_nets[i].parameters(), lr=LEARNING_RATE)
# Copy weights from policy -> target
target_nets[i].load_state_dict(policy_nets[i].state_dict())
# Set target net to eval rather than training mode
target_nets[i].eval()
############################################################################
# 10 Set up memory #
############################################################################
    # Set up memory
memory = Memory()
############################################################################
# 11 Set up + start training loop #
############################################################################
# Set up run counter and learning loop
run = 0
all_steps = 0
continue_learning = True
best_reward = -np.inf
# Set up list for results
results_run = []
results_exploration = []
results_score = []
results_mean_call_to_arrival = []
results_mean_assignment_to_arrival = []
# Continue repeating games (episodes) until target complete
while continue_learning:
########################################################################
# 12 Play episode #
########################################################################
# Increment run (episode) counter
run += 1
########################################################################
# 13 Reset game #
########################################################################
# Reset game environment and get first state observations
state = sim.reset()
# Reset total reward and rewards list
total_reward = 0
rewards = []
        # Reshape state into 2D array with state observations as first 'row'
state = np.reshape(state, [1, observation_space])
# Continue loop until episode complete
while True:
####################################################################
# 14 Game episode loop #
####################################################################
####################################################################
# 15 Get action #
####################################################################
            # Get actions to take (use evaluation mode)
actions = []
for i in range(NUMBER_OF_NETS):
policy_nets[i].eval()
actions.append(policy_nets[i].act(state))
# Randomly choose an action from net actions
random_index = random.randint(0, NUMBER_OF_NETS - 1)
action = actions[random_index]
####################################################################
# 16 Play action (get S', R, T) #
####################################################################
# Act
state_next, reward, terminal, info = sim.step(action)
total_reward += reward
# Update trackers
rewards.append(reward)
# Reshape state into 2D array with state observations as first 'row'
state_next = np.reshape(state_next, [1, observation_space])
# Update display if needed
if DISPLAY_ON_SCREEN:
sim.render()
####################################################################
# 17 Add S/A/R/S/T to memory #
####################################################################
# Record state, action, reward, new state & terminal
memory.remember(state, action, reward, state_next, terminal)
# Update state
state = state_next
####################################################################
# 18 Check for end of episode #
####################################################################
# Actions to take if end of game episode
if terminal:
# Get exploration rate
exploration = policy_nets[0].exploration_rate
# Clear print row content
clear_row = '\r' + ' ' * 79 + '\r'
print(clear_row, end='')
print(f'Run: {run}, ', end='')
print(f'Exploration: {exploration: .3f}, ', end='')
average_reward = np.mean(rewards)
print(f'Average reward: {average_reward:4.1f}, ', end='')
mean_assignment_to_arrival = np.mean(info['assignment_to_arrival'])
print(f'Mean assignment to arrival: {mean_assignment_to_arrival:4.1f}, ', end='')
mean_call_to_arrival = np.mean(info['call_to_arrival'])
print(f'Mean call to arrival: {mean_call_to_arrival:4.1f}, ', end='')
demand_met = info['fraction_demand_met']
print(f'Demand met {demand_met:0.3f}')
# Add to results lists
results_run.append(run)
results_exploration.append(exploration)
results_score.append(total_reward)
results_mean_call_to_arrival.append(mean_call_to_arrival)
results_mean_assignment_to_arrival.append(mean_assignment_to_arrival)
# Save model if best reward
total_reward = np.sum(rewards)
if total_reward > best_reward:
best_reward = total_reward
# Copy weights to best net
for i in range(NUMBER_OF_NETS):
best_nets[i].load_state_dict(policy_nets[i].state_dict())
################################################################
# 18b Check for end of learning #
################################################################
if run == TRAINING_EPISODES:
continue_learning = False
# End episode loop
break
####################################################################
# 19 Update policy net #
####################################################################
# Avoid training model if memory is not of sufficient length
if len(memory.memory) > REPLAY_START_SIZE:
# Update policy net
for i in range(NUMBER_OF_NETS):
optimize(policy_nets[i], target_nets[i], memory.memory)
################################################################
# 20 Update target net periodically #
################################################################
# Use load_state_dict method to copy weights from policy net
if all_steps % SYNC_TARGET_STEPS == 0:
for i in range(NUMBER_OF_NETS):
target_nets[i].load_state_dict(
policy_nets[i].state_dict())
############################################################################
# 21 Learning complete - plot and save results #
############################################################################
# Target reached. Plot results
plot_results(results_run, results_exploration, results_score,
results_mean_call_to_arrival, results_mean_assignment_to_arrival)
# SAVE RESULTS
run_details = pd.DataFrame()
run_details['run'] = results_run
    run_details['exploration'] = results_exploration
run_details['mean_call_to_arrival'] = results_mean_call_to_arrival
run_details['mean_assignment_to_arrival'] = results_mean_assignment_to_arrival
filename = 'output/' + RESULTS_NAME + '.csv'
run_details.to_csv(filename, index=False)
############################################################################
# Test best model #
############################################################################
print()
print('Test Model')
print('----------')
for i in range(NUMBER_OF_NETS):
best_nets[i].eval()
best_nets[i].exploration_rate = 0
# Set up results dictionary
results = dict()
results['call_to_arrival'] = []
results['assign_to_arrival'] = []
results['demand_met'] = []
# Replicate model runs
for run in range(30):
# Reset game environment and get first state observations
state = sim.reset()
state = np.reshape(state, [1, observation_space])
# Continue loop until episode complete
while True:
            # Get actions to take (use evaluation mode)
actions = []
for i in range(NUMBER_OF_NETS):
actions.append(best_nets[i].act(state))
# Randomly choose an action from net actions
random_index = random.randint(0, NUMBER_OF_NETS - 1)
action = actions[random_index]
# Act
state_next, reward, terminal, info = sim.step(action)
# Reshape state into 2D array with state observations as first 'row'
state_next = np.reshape(state_next, [1, observation_space])
# Update state
state = state_next
if terminal:
print(f'Run: {run}, ', end='')
mean_assignment_to_arrival = np.mean(info['assignment_to_arrival'])
print(f'Mean assignment to arrival: {mean_assignment_to_arrival:4.1f}, ', end='')
mean_call_to_arrival = np.mean(info['call_to_arrival'])
print(f'Mean call to arrival: {mean_call_to_arrival:4.1f}, ', end='')
demand_met = info['fraction_demand_met']
print(f'Demand met: {demand_met:0.3f}')
# Add to results
results['call_to_arrival'].append(mean_call_to_arrival)
results['assign_to_arrival'].append(mean_assignment_to_arrival)
results['demand_met'].append(demand_met)
# End episode loop
break
results = pd.DataFrame(results)
filename = './output/results_' + RESULTS_NAME +'.csv'
results.to_csv(filename, index=False)
print()
print(results.describe())
return run_details
######################## MODEL ENTRY POINT #####################################
# Run model and return last run results
last_run = qambo()
###Output
Run: 1, Exploration: 1.000, Average reward: -699.3, Mean assignment to arrival: 24.5, Mean call to arrival: 30.8, Demand met 1.000
Run: 2, Exploration: 1.000, Average reward: -725.6, Mean assignment to arrival: 24.9, Mean call to arrival: 30.7, Demand met 1.000
Run: 3, Exploration: 1.000, Average reward: -714.0, Mean assignment to arrival: 24.7, Mean call to arrival: 30.0, Demand met 1.000
Run: 4, Exploration: 1.000, Average reward: -724.6, Mean assignment to arrival: 24.9, Mean call to arrival: 30.1, Demand met 1.000
Run: 5, Exploration: 1.000, Average reward: -714.0, Mean assignment to arrival: 24.8, Mean call to arrival: 30.6, Demand met 1.000
Run: 6, Exploration: 1.000, Average reward: -721.2, Mean assignment to arrival: 24.8, Mean call to arrival: 30.5, Demand met 1.000
Run: 7, Exploration: 1.000, Average reward: -728.5, Mean assignment to arrival: 24.9, Mean call to arrival: 30.3, Demand met 1.000
Run: 8, Exploration: 1.000, Average reward: -713.8, Mean assignment to arrival: 24.7, Mean call to arrival: 30.4, Demand met 1.000
Run: 9, Exploration: 1.000, Average reward: -714.6, Mean assignment to arrival: 24.7, Mean call to arrival: 30.8, Demand met 1.000
Run: 10, Exploration: 1.000, Average reward: -723.4, Mean assignment to arrival: 24.8, Mean call to arrival: 30.6, Demand met 1.000
Run: 11, Exploration: 0.000, Average reward: -301.1, Mean assignment to arrival: 15.2, Mean call to arrival: 18.9, Demand met 0.999
Run: 12, Exploration: 0.000, Average reward: -229.5, Mean assignment to arrival: 13.0, Mean call to arrival: 15.9, Demand met 1.000
Run: 13, Exploration: 0.000, Average reward: -274.8, Mean assignment to arrival: 14.3, Mean call to arrival: 18.2, Demand met 1.000
Run: 14, Exploration: 0.000, Average reward: -241.9, Mean assignment to arrival: 13.3, Mean call to arrival: 16.1, Demand met 1.000
Run: 15, Exploration: 0.000, Average reward: -238.7, Mean assignment to arrival: 13.3, Mean call to arrival: 16.2, Demand met 1.000
Run: 16, Exploration: 0.000, Average reward: -193.7, Mean assignment to arrival: 11.6, Mean call to arrival: 14.5, Demand met 1.000
Run: 17, Exploration: 0.000, Average reward: -177.5, Mean assignment to arrival: 11.1, Mean call to arrival: 13.4, Demand met 1.000
Run: 18, Exploration: 0.000, Average reward: -181.1, Mean assignment to arrival: 11.2, Mean call to arrival: 13.9, Demand met 1.000
Run: 19, Exploration: 0.000, Average reward: -183.2, Mean assignment to arrival: 11.2, Mean call to arrival: 14.2, Demand met 1.000
Run: 20, Exploration: 0.000, Average reward: -195.2, Mean assignment to arrival: 11.7, Mean call to arrival: 14.9, Demand met 1.000
Run: 21, Exploration: 0.000, Average reward: -190.5, Mean assignment to arrival: 11.5, Mean call to arrival: 14.4, Demand met 1.000
Run: 22, Exploration: 0.000, Average reward: -196.5, Mean assignment to arrival: 11.9, Mean call to arrival: 14.8, Demand met 1.000
Run: 23, Exploration: 0.000, Average reward: -192.1, Mean assignment to arrival: 11.7, Mean call to arrival: 14.3, Demand met 1.000
Run: 24, Exploration: 0.000, Average reward: -172.6, Mean assignment to arrival: 10.9, Mean call to arrival: 13.3, Demand met 1.000
Run: 25, Exploration: 0.000, Average reward: -212.7, Mean assignment to arrival: 12.4, Mean call to arrival: 15.0, Demand met 1.000
Run: 26, Exploration: 0.000, Average reward: -175.2, Mean assignment to arrival: 11.1, Mean call to arrival: 13.6, Demand met 1.000
Run: 27, Exploration: 0.000, Average reward: -172.5, Mean assignment to arrival: 11.2, Mean call to arrival: 13.3, Demand met 1.000
Run: 28, Exploration: 0.000, Average reward: -190.2, Mean assignment to arrival: 11.9, Mean call to arrival: 14.8, Demand met 1.000
Run: 29, Exploration: 0.000, Average reward: -210.3, Mean assignment to arrival: 12.6, Mean call to arrival: 15.1, Demand met 1.000
Run: 30, Exploration: 0.000, Average reward: -212.2, Mean assignment to arrival: 12.6, Mean call to arrival: 15.7, Demand met 1.000
Run: 31, Exploration: 0.000, Average reward: -223.2, Mean assignment to arrival: 13.0, Mean call to arrival: 15.9, Demand met 1.000
Run: 32, Exploration: 0.000, Average reward: -224.6, Mean assignment to arrival: 13.0, Mean call to arrival: 15.9, Demand met 1.000
Run: 33, Exploration: 0.000, Average reward: -220.8, Mean assignment to arrival: 12.9, Mean call to arrival: 16.2, Demand met 1.000
Run: 34, Exploration: 0.000, Average reward: -203.3, Mean assignment to arrival: 12.2, Mean call to arrival: 15.0, Demand met 1.000
Run: 35, Exploration: 0.000, Average reward: -276.5, Mean assignment to arrival: 14.7, Mean call to arrival: 17.8, Demand met 1.000
Run: 36, Exploration: 0.000, Average reward: -265.7, Mean assignment to arrival: 14.6, Mean call to arrival: 17.9, Demand met 1.000
Run: 37, Exploration: 0.000, Average reward: -285.9, Mean assignment to arrival: 15.2, Mean call to arrival: 18.6, Demand met 1.000
Run: 38, Exploration: 0.000, Average reward: -289.3, Mean assignment to arrival: 15.1, Mean call to arrival: 18.2, Demand met 1.000
Run: 39, Exploration: 0.000, Average reward: -238.2, Mean assignment to arrival: 13.4, Mean call to arrival: 16.3, Demand met 1.000
Run: 40, Exploration: 0.000, Average reward: -225.0, Mean assignment to arrival: 13.0, Mean call to arrival: 15.7, Demand met 1.000
Run: 41, Exploration: 0.000, Average reward: -220.1, Mean assignment to arrival: 12.8, Mean call to arrival: 16.0, Demand met 1.000
Run: 42, Exploration: 0.000, Average reward: -297.6, Mean assignment to arrival: 15.1, Mean call to arrival: 18.5, Demand met 1.000
Run: 43, Exploration: 0.000, Average reward: -228.8, Mean assignment to arrival: 12.9, Mean call to arrival: 15.8, Demand met 1.000
Run: 44, Exploration: 0.000, Average reward: -295.3, Mean assignment to arrival: 15.1, Mean call to arrival: 18.0, Demand met 1.000
Run: 45, Exploration: 0.000, Average reward: -233.9, Mean assignment to arrival: 13.3, Mean call to arrival: 16.3, Demand met 1.000
Run: 46, Exploration: 0.000, Average reward: -307.2, Mean assignment to arrival: 15.0, Mean call to arrival: 18.2, Demand met 1.000
Run: 47, Exploration: 0.000, Average reward: -250.7, Mean assignment to arrival: 13.7, Mean call to arrival: 16.7, Demand met 1.000
Run: 48, Exploration: 0.000, Average reward: -351.6, Mean assignment to arrival: 16.4, Mean call to arrival: 19.8, Demand met 1.000
Run: 49, Exploration: 0.000, Average reward: -323.0, Mean assignment to arrival: 15.6, Mean call to arrival: 19.1, Demand met 1.000
Run: 50, Exploration: 0.000, Average reward: -223.5, Mean assignment to arrival: 12.6, Mean call to arrival: 15.5, Demand met 1.000
|
finalproj_allmodels_winequalityprediction/regression/regression-submission.ipynb | ###Markdown
Regression models on the wine quality dataset. In this notebook, we show how to use regression models to predict wine quality. In particular, we run the models on the white wine dataset; running the same experiments on the red wine dataset is analogous.
###Code
import pandas as pd
import sklearn
red = pd.read_csv('winequality-red.csv', header=0)
white = pd.read_csv('winequality-white.csv', header=0)
# show the columns of the the data
white.columns
# find X and y for the white wine dataset
whiteX = white.loc[:, 'fixed acidity': 'alcohol'].values
whitey = white.loc[:, 'quality'].values
# find X and y for the red wine dataset
redX = red.loc[:, 'fixed acidity': 'alcohol'].values
redy = red.loc[:, 'quality'].values
###Output
_____no_output_____
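###Markdown
The red-wine arrays are prepared in exactly the same way, so any model below can be pointed at `redX`/`redy` instead of `whiteX`/`whitey`. As a quick sanity check, a plain cross-validated linear baseline can be run on either dataset (a minimal sketch; the tuned models below are the real experiments).
###Code
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
# Mean absolute error of a simple linear baseline (negated, per sklearn's scoring convention)
print(cross_val_score(LinearRegression(), whiteX, whitey,
                      scoring='neg_mean_absolute_error', cv=5).mean())
print(cross_val_score(LinearRegression(), redX, redy,
                      scoring='neg_mean_absolute_error', cv=5).mean())
###Output
_____no_output_____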
###Markdown
First model: SVM. We use the SVR regressor from sklearn; make sure to set `max_iter` so the grid search does not take forever.
###Code
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
svr = SVR(max_iter=1000000, gamma=0.03, epsilon=0.1, C=1)
param_SVM = {'kernel': ['linear', 'poly', 'rbf', 'sigmoid'], 'degree':[2,3,4,5,6]}
SVM_grid = GridSearchCV(svr, param_SVM, scoring = 'neg_mean_absolute_error', n_jobs=8, cv=5, verbose=True, return_train_score=True)
SVM_grid.fit(whiteX, whitey)
SVM_grid.cv_results_
SVM_grid.best_score_, SVM_grid.best_params_
###Output
_____no_output_____
###Markdown
Multi-layer Perceptron Regressor. Again, we use the regressor API from sklearn.
###Code
from sklearn.neural_network import MLPRegressor
MLP = MLPRegressor(early_stopping=True)
param_MLP = {'learning_rate': ['invscaling', 'adaptive'],
'solver':['lbfgs', 'sgd', 'adam'],
'hidden_layer_sizes': [(9000, ), (4000, ), (1000, )],
'activation': ['tanh', 'relu']}
MLP_grid = GridSearchCV(MLP, param_MLP, cv=5, scoring = 'neg_mean_absolute_error', n_jobs=8, verbose=True)
MLP_grid.fit(whiteX, whitey)
MLP_grid.best_score_, MLP_grid.best_estimator_, MLP_grid.best_params_
###Output
_____no_output_____
###Markdown
Random Forest Regressor
###Code
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(random_state=0, n_jobs=8)
param_rf = {'max_depth':[1,2,3,4,5], 'n_estimators': [10, 50]}
rf_grid = GridSearchCV(rf, param_rf, cv=5, scoring = 'neg_mean_absolute_error', n_jobs=8, verbose=True)
rf_grid.fit(whiteX, whitey)
rf_grid.best_score_
rf_grid.best_params_
###Output
_____no_output_____
###Markdown
Gradient Boosting RegressorBased on our experiments, Gradient Boosting consistently gives remarkable results.
###Code
from sklearn.ensemble import GradientBoostingRegressor
gb = GradientBoostingRegressor()
param_gb = {'max_depth':[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 15, 20], 'n_estimators': [12, 18, 22, 25, 30, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 50, 55, 60, 65, 70]}
gb_grid = GridSearchCV(gb, param_gb, cv=5, scoring = 'neg_mean_absolute_error', n_jobs=8, verbose=True)
gb_grid.fit(whiteX, whitey)
gb_grid.best_score_
gb_grid.best_params_
###Output
_____no_output_____
###Markdown
AdaBoost It is not too bad either.
###Code
from sklearn.ensemble import AdaBoostRegressor
ada = AdaBoostRegressor()
param_ada = {'n_estimators': [10, 100, 1000, 10000, 50000]}
ada_grid = GridSearchCV(ada, param_ada, cv=5, scoring = 'neg_mean_absolute_error', n_jobs=8, verbose=True)
ada_grid.fit(whiteX, whitey)
ada_grid.best_score_
ada_grid.best_params_
###Output
_____no_output_____
###Markdown
More linear modelsThe following models are different types of linear models. Their performances, unsurprisingly, are similar. Lasso Regression
###Code
from sklearn.linear_model import Lasso
lasso = Lasso()
param_lasso = {'alpha': [0.1, 0.5, 1]}
lasso_grid = GridSearchCV(lasso, param_lasso, cv=5, scoring = 'neg_mean_absolute_error', n_jobs=8, verbose=True)
lasso_grid.fit(whiteX, whitey)
lasso_grid.best_score_, lasso_grid.best_params_
###Output
_____no_output_____
###Markdown
Ridge Regression
###Code
from sklearn.linear_model import Ridge
ridge = Ridge()
param_ridge = {'alpha': [0.1, 0.2, 0.3, 0.5, 1]}
ridge_grid = GridSearchCV(ridge, param_ridge, cv=5, scoring = 'neg_mean_absolute_error', n_jobs=8, verbose=True)
ridge_grid.fit(whiteX, whitey)
ridge_grid.best_score_, ridge_grid.best_params_
###Output
_____no_output_____
###Markdown
Multiple Linear Regression
###Code
from sklearn.linear_model import LinearRegression
lr = LinearRegression(n_jobs=8)
param_lr = {'normalize': [True, False]}
lr_grid = GridSearchCV(lr, param_lr, cv=5, scoring = 'neg_mean_absolute_error', n_jobs=8, verbose=True)
lr_grid.fit(whiteX, whitey)
lr_grid.best_score_, lr_grid.best_params_
###Output
_____no_output_____
###Markdown
Elastic nets
###Code
from sklearn.linear_model import ElasticNet
enet = ElasticNet()
param_enet = {'alpha': [0.1, 0.5, 1], 'l1_ratio': [0.5, 0.7, 0.3]}
enet_grid = GridSearchCV(enet, param_enet, cv=5, scoring = 'neg_mean_absolute_error', n_jobs=8, verbose=True)
enet_grid.fit(whiteX, whitey)
enet_grid.best_score_, enet_grid.best_params_
###Output
_____no_output_____ |
twitter_nlp/ethan/Untitled1.ipynb | ###Markdown
NLP1: Improve on document-term occurrence matrices / one-hot encoded vectors for word and document representations with embeddings. The advantage of embeddings over matrix representations followed by dimensionality reduction is that embeddings encode word similarity directly, without the dimensionality-reduction step. Suppose you have the following sentences: "Mathematician can run", "Mathematician likes coffee", "Mathematician majored in physics", "Physicist can run", "Physicist likes coffee", "Physicist majored in physics". We can give each word a vector, one for mathematician and one for physicist, and we can see that the words surrounding these two words are similar. We can use a dot product and the cosine of the angle between the two vectors to represent similarity. This concept of explicit word vectors doesn't scale, so we use word embeddings, which are small dense vectors learned as part of a deep-learning model.
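###Markdown
A minimal sketch of the similarity idea above, using plain NumPy; the two 4-dimensional "word vectors" below are made-up numbers for illustration only, not learned embeddings.
###Code
import numpy as np

# Hypothetical word vectors (illustrative values only)
mathematician = np.array([2.3, 9.4, 0.2, 1.1])
physicist = np.array([2.5, 9.1, 0.3, 1.0])

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: u.v / (|u| |v|)."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine_similarity(mathematician, physicist))  # close to 1 -> similar words
###Output
_____no_output_____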
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(42)
vocab = set()
with open('howard.txt','r') as fh:
for x in fh:
#print(x)
vocab = vocab | set(x.split())
print (len(vocab))
index_to_word = []
word_to_index = {}
# create index-to-word and word-to-index mappings
for index, word in enumerate(vocab):
    index_to_word.append(word)
    word_to_index[word] = index
###Output
2823
|
notebooks/.ipynb_checkpoints/20190621-checking-heston-trap-checkpoint.ipynb | ###Markdown
NEED TO MANIPULATE OWN CF UNTIL STABLE. THEN WILL KNOW HOW TO MANIPULATE OTHER
###Code
import os
os.chdir(r'/Users/ryanmccrickerd/desktop/jdheston')
import numpy as np
import pandas as pd
from jdheston import jdheston as jdh
from jdheston import config as cfg
from matplotlib import pyplot as plt
from scipy.stats import norm
# import mpl
# %matplotlib inline
nx = np.newaxis
cfg.config(scale=1.5,print_keys=False)
σ,ρ,v,κ = 0.1,-0.7,1,1.
α,β,γ,v0,ρ = σ*v,κ,σ**2,σ**2,ρ
θ = α,β,γ,v0,ρ
T = np.array([1/52,1/12,3/12,6/12,9/12,1])[:,nx]
M = ['1W','1M','3M','6M','9M','1Y']
Δ = np.linspace(5,95,19)[nx,:]/100
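# Map the call-delta grid to log-strikes via the Gaussian quantile, k = N^{-1}(Δ) * σ * sqrt(T) (flat-volatility approximation)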
k = norm.ppf(Δ)*σ*np.sqrt(T)
pd.DataFrame(k,index=M,columns=Δ[0,:])
C = jdh.pricer(T,k,θ)
BSV = jdh.surface(T,k,C)
pd.DataFrame(BSV,index=M,columns=Δ[0,:])
plot,axes = plt.subplots()
for i in range(len(T[:,0])):
axes.plot(k[i,:],100*BSV[i,:])
axes.set_xlabel(r'$k$')
axes.set_ylabel(r'$\bar{\sigma}(k,\tau)$')
###Output
_____no_output_____ |
Notebooks/Exemple_Rolling_NeuralNetwork.ipynb | ###Markdown
Rolling Predictive Neural Network ------The idea of the Rolling Neural Network is to train the model by following the temporal structure of the data. For example, we start by training over 1 year (from 01/01/2000 to 31/12/2000), predict signs of returns 3 months ahead (from 01/01/2001 to 31/03/2001), then move 3 months ahead to retrain the model over one year again (from 01/04/2000 to 31/03/2001) and predict signs of returns 3 months ahead (from 01/04/2001 to 30/06/2001), and so on until the present. ---In this example we use a very basic Keras neural network that is rolled along the time axis. At each period we train models over one year and predict signs of daily returns three months ahead. The data used to train the models are only past data from the SP500 (close price), transformed with several methods (first difference, moving average, financial indicators, etc.).
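###Markdown
A minimal sketch of the rolling scheme described above (illustration only; the actual rolling is handled internally by `RollMultiNeuralNet`): each step fits on the last `train_period` observations and predicts the next `estim_period` observations out of sample.
###Code
# Illustrative window generation (assumed parameters: 252 training days, 63 estimation days)
n_obs, train_period, estim_period = 1000, 252, 63
windows = [(t - train_period, t, t + estim_period)
           for t in range(train_period, n_obs - estim_period + 1, estim_period)]
windows[:3]  # (train_start, train_end / estim_start, estim_end) index triples
###Output
_____no_output_____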
###Code
# Import extern packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import time
from jupyterthemes import jtplot
from IPython.display import clear_output
from keras.models import Model
from keras.layers import Input, Dense, Dropout
from keras.optimizers import Adam
from keras import regularizers, initializers, constraints
from keras import backend as K
# Import local packages
import fynance as fy
from Core.ML_tools import set_nn_model, display_perf
from Core.roll_multi_neural_network import RollMultiNeuralNet
def sign(X, alpha=0.05):
inf_alpha = np.abs(X) < alpha
not_inf_alpha = np.abs(X) >= alpha
X[inf_alpha] = 0.
X[not_inf_alpha] = np.sign(X[not_inf_alpha])
return X
###Output
_____no_output_____
###Markdown
---Load Data (SP500)---
###Code
path = '/home/arthur/GitHub/ML_Finance/Data/daily_finance_data.xlsx'
sheet_name = 'Sheet4'
UNDERLY = 'SP500'
df = pd.read_excel(path, sheet_name=sheet_name).set_index('Date')
df = df.loc[:, [UNDERLY]].dropna()
axis_date = df.index.values[1:]
###Output
_____no_output_____
###Markdown
---Set Target---
###Code
""" Y is the sign of log returns from 1 to T """
I = np.log(df.values)
lr = I[1:] - I[: -1]
y = lr.flatten()
y = y.reshape([y.size, 1])
y.flatten()
###Output
_____no_output_____
###Markdown
---Set Features (Transfo Data)---
###Code
""" X is set with data from 0 to T - 1 """
# THIS PART OF SCRIPT IS NOT AVAILABLE
###Output
_____no_output_____
###Markdown
---Set Neural Network Model ------We used a function that we didn't display to set parameters of our neural network model. Just know that we used a very basic and simple neural network.
###Code
models = []
for i in range(1):
model = set_nn_model(
X, SEED=None, activation='tanh'
)
models += [model]
###Output
_____no_output_____
###Markdown
RollNeuralNet Strategy on SP500 ------We train and make predictions with just one neural network. The first figure displays the loss function of the neural network (graph at the top) and the performance of the strategy on the training and testing periods (graph at the bottom). On the second figure we compare the backtests of the following strategies: - If the predicted daily return is $> 0$ we take a long position, if the prediction is $< 0$ we take a short position, else we take a neutral position (no position). We backtest this strategy in red on the plots; it is noted as 'Strategy'. - We also backtest a similar strategy where we apply a coefficient computed from the past volatility, shown in green on the plots; it is noted as 'Strat Iso-Vol'. - We compare them with the index value of the SP500, shown in blue on the plots, which is equivalent to a long position over the whole period. The graph at the top is the performance with an initial value of 100$, the second plot is the drawdown in percent, and the last graph is the rolling Sharpe ratio over one year.
###Code
%matplotlib notebook
# Set variables: Train on one year and predict on three months
n = 63 * 4
s = 21 * 3
params = {'batch_size': s, 'epochs': 1, 'shuffle': False, 'verbose': 0}
init_params = {'batch_size': n, 'epochs': 5, 'shuffle': False, 'verbose': 0}
# Train model
print("Let run RollNeuralNetwork Strategy on {} !".format(UNDERLY))
print("======================================{}==".format('=' * len(UNDERLY)))
rmnn = RollMultiNeuralNet(
train_period=n, estim_period=s, params=params, init_params=init_params
)
rmnn = rmnn.run(y=y, X=X, NN=models, x_axis=axis_date)
# Results
y_estim = np.mean(np.sign(rmnn.y_estim.copy()), axis=1)
# Plot perf. on estimating set
perf_idx, perf_est, perf_ivo = display_perf(
y[n: - s, 0], sign(y_estim[n: - s].copy(), alpha=0.1), period=252,
title='Perf. of RollNeuralNet Strategy on {}'.format(UNDERLY),
params_iv={'leverage': 2., 'half_life': 5},
x_axis=axis_date[n: -s], underlying=UNDERLY,
)
###Output
Let run RollNeuralNetwork Strategy on SP500 !
=============================================
###Markdown
---Set Multi Neural Network Models ------We make a loop over the setting neural network function to initialize five neurals networks.
###Code
models = []
for i in range(5):
model = set_nn_model()
models += [model]
###Output
_____no_output_____
###Markdown
RollMultiNeuralNet Strategy on SP500------We train several (five here) rolling neural networks simultaneously and aggregate their results to obtain improved, more stable results. The first figure displays the loss functions of the neural networks (graph at the top) and the performances of the strategies for each neural network on the training and testing periods (graph at the bottom). On the second figure we compare the backtests of the following strategies: - We take the mean of the daily return predictions of the neural networks; if it is $> 0.3$ we take a long position, if it is $< -0.3$ we take a short position, else we take a neutral position (no position). We backtest this strategy in red on the plots; it is noted as 'Strategy'. The short example below illustrates this thresholding. - We also backtest a similar strategy where we apply a coefficient computed from the past volatility, shown in green on the plots; it is noted as 'Strat Iso-Vol'. - We compare them with the index value of the SP500, shown in blue on the plots, which is equivalent to a long position over the whole period. The graph at the top is the performance with an initial value of 100$, the second plot is the drawdown in percent, and the last graph is the rolling Sharpe ratio over one year. The third figure displays in red the mean of the neural networks' predictions and in blue the signal of the final strategy.
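###Markdown
A minimal illustration of the thresholding above, using the `sign` helper defined at the top of the notebook; the prediction values here are made up for illustration.
###Code
# Aggregated (mean) predictions from the five networks -> positions with a +/-0.3 neutral band
example_preds = np.array([0.45, -0.02, -0.40, 0.15, 0.31])
sign(example_preds.copy(), alpha=0.3)  # -> array([ 1., 0., -1., 0., 1.])
###Output
_____no_output_____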
###Code
%matplotlib notebook
# Set variables
n = 63 * 4
s = 21 * 3
params = {'batch_size': s, 'epochs': 1, 'shuffle': False, 'verbose': 0}
init_params = {'batch_size': n, 'epochs': 5, 'shuffle': False, 'verbose': 0}
# Train model
print("Let run RollNeuralNetwork Strategy on {} !".format(UNDERLY))
print("======================================{}==".format('=' * len(UNDERLY)))
rmnn = RollMultiNeuralNet(
train_period=n, estim_period=s, params=params, init_params=init_params
)
rmnn = rmnn.run(y=y, X=X, NN=models, x_axis=axis_date)
# AGGREGATION
y_estim = np.mean(np.sign(rmnn.y_estim.copy()), axis=1)
# Plot perf. on estimating set
perf_idx, perf_est, perf_ivo = display_perf(
y[n: - s, 0], sign(y_estim[n: - s].copy(), alpha=0.3), period=252,
title='Perf. of RollNeuralNet Strategy on {}'.format(UNDERLY),
params_iv={'leverage': 2., 'half_life': 5},
x_axis=axis_date[n: -s], underlying=UNDERLY,
)
# Plot signal
f, ax = plt.subplots(1, 1, figsize=(9, 4))
ax.plot(
axis_date[n: -s],
sign(y_estim[n: - s].copy(), alpha=0.1),
color=sns.xkcd_rgb["denim blue"],
LineWidth=3.5
)
ax.plot(
axis_date[n: -s],
y_estim[n: -s],
color=sns.xkcd_rgb["pale red"],
LineWidth=2.
)
ax.set_yticks([-1, 0, 1])
ax.set_title('Strategy Signal')
ax.legend(['Position', 'Aggregated positions'], fontsize=15)
ax.set_xlabel('Date')
ax.set_ylabel('Position')
plt.show()
###Output
Let run RollNeuralNetwork Strategy on SP500 !
=============================================
|
05_Expanded Image Analysis Lab.ipynb | ###Markdown
Expanded Image Analysis Lab IntroImage segmentation can be useful for edge detection. You saw how to segment images with k-Means awhile ago. Now, expand upon that by thinking through the process with the eventual goal being able to perform edge detection on objects and detect their shape. The step you may want to take after that is image recognition, but we'll leave that for homework. Goals for lab* Become comfortable building out a solution in notebooks with R and making it **reproducible*** Gain/regain the experience of a "hackathon" style lab involving collaboration* Become more comfortable and familiar with the notebook shortcuts and tools InstructionsLibraries needed: `jpeg` (could replace with `png`), `grid` and `gridExtra` (if not installed general installation of packages is outline in [01.Installing stuff](notebook_basics/01.Installing stuff.ipynb))**PART 1.** Think through a general process of finding edges or borders in an image. If it's helpful think of it in the context of these pictures: These pictures live in the following folders. Don't worry, these personal photos are not copyrighted. Their file names are (including folder) are:* images/05_Expanded_Image_Analysis_Lab/hummingbird_mharris.jpg* images/05_Expanded_Image_Analysis_Lab/a_query.jpg* images/05_Expanded_Image_Analysis_Lab/whidbey_mharris.jpg > Bonus points to those who also figure out how to rotate the middle image clockwise by 90 degrees. Code example to follow taken from Ryan Walker wrote a marvelous blog [post](http://www.r-bloggers.com/color-quantization-in-r/) on color quantization and segmentation in R.http://www.r-bloggers.com/color-quantization-in-r/ **PART 2.** Perform image processing task such as segmentation (we saw k-Means earlier) ```Rlibrary(jpeg)myimg <- readJPEG('images/05_Expanded_Image_Analysis_Lab/hummingbird_mharris.jpg') copy the image three timesmyimg.R = myimgmyimg.G = myimgmyimg.B = myimg zero out the non-contributing channels for each image copymyimg.R[,,2:3] = 0myimg.G[,,1]=0myimg.G[,,3]=0myimg.B[,,1:2]=0...``` Sample code (directly from Ryan Walker)
###Code
library(jpeg)
library("grid")
library("gridExtra")
myimg <- readJPEG('images/05_Expanded_Image_Analysis_Lab/hummingbird_mharris.jpg')
### EX 3: show the 3 channels in separate images
# copy the image three times
myimg.R = myimg
myimg.G = myimg
myimg.B = myimg
# zero out the non-contributing channels for each image copy
myimg.R[,,2:3] = 0
myimg.G[,,1]=0
myimg.G[,,3]=0
myimg.B[,,1:2]=0
# build the image grid
img1 = rasterGrob(myimg.R)
img2 = rasterGrob(myimg.G)
img3 = rasterGrob(myimg.B)
grid.arrange(img1, img2, img3, nrow=1)
# Now let’s segment this image. First, we need to reshape the
# array into a data frame with one row for each pixel and three columns for the RGB channels:
# reshape image into a data frame
df = data.frame(
red = matrix(myimg[,,1], ncol=1),
green = matrix(myimg[,,2], ncol=1),
blue = matrix(myimg[,,3], ncol=1)
)
# Now, we apply k-means to our data frame. We’ll choose k=4 to break the image into 4 color regions.
### compute the k-means clustering
K = kmeans(df,4)
df$label = K$cluster
### Replace the color of each pixel in the image with the mean
### R,G, and B values of the cluster in which the pixel resides:
# get the coloring
colors = data.frame(
label = 1:nrow(K$centers),
R = K$centers[,"red"],
G = K$centers[,"green"],
B = K$centers[,"blue"]
)
# merge color codes on to df
# IMPORTANT: we must maintain the original order of the df after the merge!
df$order = 1:nrow(df)
df = merge(df, colors)
df = df[order(df$order),]
df$order = NULL
# Finally, we have to reshape our data frame back into an image:
# get mean color channel values for each row of the df.
R = matrix(df$R, nrow=dim(myimg)[1])
G = matrix(df$G, nrow=dim(myimg)[1])
B = matrix(df$B, nrow=dim(myimg)[1])
# reconstitute the segmented image in the same shape as the input image
myimg.segmented = array(dim=dim(myimg))
myimg.segmented[,,1] = R
myimg.segmented[,,2] = G
myimg.segmented[,,3] = B
# View the result
grid.raster(myimg.segmented)
###Output
_____no_output_____ |
nb/2019_spring/Lecture_2.ipynb | ###Markdown
CME 193 - Lecture 2 Example: Rational NumbersLet's continue with our example of rational numbers (fractions), that is, numbers of the form$$r = \frac{p}{q}$$where $p$ and $q$ are integers. Let's make it support addition using the formula:$$ \frac{p_1}{q_1} + \frac{p_2}{q_2} = \frac{p_1 q_2 + p_2 q_1}{q_1 q_2}$$
###Code
import math
class Rational:
def __init__(self, p, q=1):
if q == 0:
raise ValueError('Denominator must not be zero')
if not isinstance(p, int):
raise TypeError('Numerator must be an integer')
if not isinstance(q, int):
raise TypeError('Denominator must be an integer')
g = math.gcd(p, q)
self.p = p // g
self.q = q // g
# method to convert rational to float
def __float__(self):
return float(self.p) / float(self.q)
# method to convert rational to string for printing
def __str__(self):
return '%d / %d' % (self.p, self.q)
# method to add two rationals - interprets self + other
def __add__(self, other):
if isinstance(other, Rational):
return Rational(self.p * other.q + other.p * self.q, self.q * other.q)
# -- if it's an integer...
elif isinstance(other, int):
return Rational(self.p + other * self.q, self.q)
# -- otherwise, we assume it will be a float
return float(self) + float(other)
def __radd__(self, other): # interprets other + self
return self + other # addition commutes!
r = Rational(3)
print(r)
r = Rational(3, 2)
print('Integer adding:')
print('right add')
print(r + 4)
print(float(r + 4))
print('left add')
print(4 + r)
print(float(4 + r))
###Output
_____no_output_____
###Markdown
Exercise 3 Add more operations to `Rational`You can read about the available operations that you can overload [here](https://docs.python.org/3.7/reference/datamodel.htmlemulating-numeric-types)Add the following operations to the `Rational` class:* `*` - use `__mul__`* `/` - use `__truediv__`* `-` - use `__sub__`You only need to define these operations between two `Rational` types - use an `if isinstance(other, Rational):` block.Make a few examples to convince yourself that this works.
###Code
class Rational:
def __init__(self, p, q=1):
if q == 0:
raise ValueError('Denominator must not be zero')
if not isinstance(p, int):
raise TypeError('Numerator must be an integer')
if not isinstance(q, int):
raise TypeError('Denominator must be an integer')
g = math.gcd(p, q)
self.p = p // g
self.q = q // g
# method to convert rational to float
def __float__(self):
return float(self.p) / float(self.q)
# method to convert rational to string for printing
def __str__(self):
return '%d / %d' % (self.p, self.q)
# method to add two rationals - interprets self + other
def __add__(self, other):
if isinstance(other, Rational):
return Rational(self.p * other.q + other.p * self.q, self.q * other.q)
# -- if it's an integer...
elif isinstance(other, int):
return Rational(self.p + other * self.q, self.q)
# -- otherwise, we assume it will be a float
return float(self) + float(other)
def __radd__(self, other): # interprets other + self
return self + other # addition commutes!
# subtraction
def __sub__(self, other):
raise NotImplementedError('Subtraction not implemented yet')
# multiplication
def __mul__(self, other):
        raise NotImplementedError('Multiplication not implemented yet')
# division
def __truediv__(self, other):
raise NotImplementedError('Division not implemented yet')
# Write some examples to test your code
###Output
_____no_output_____
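###Markdown
One possible reference sketch for the three operations (try the exercise first); it is written as plain functions so the exercise class above is left untouched, and each follows the same pattern as `__add__`, restricted to the `Rational`-`Rational` case.
###Code
# One possible implementation sketch of the arithmetic rules:
def rational_mul(a, b):
    """(p1/q1) * (p2/q2) = (p1*p2) / (q1*q2)"""
    return Rational(a.p * b.p, a.q * b.q)

def rational_sub(a, b):
    """(p1/q1) - (p2/q2) = (p1*q2 - p2*q1) / (q1*q2)"""
    return Rational(a.p * b.q - b.p * a.q, a.q * b.q)

def rational_div(a, b):
    """(p1/q1) / (p2/q2) = (p1*q2) / (q1*p2)"""
    return Rational(a.p * b.q, a.q * b.p)

print(rational_mul(Rational(3, 2), Rational(2, 5)))  # 3 / 5 after gcd reduction
###Output
_____no_output_____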
###Markdown
Exercise 4 Square root of rationals using the Babylonian methodImplement the [Babylonian Method](https://en.wikipedia.org/wiki/Methods_of_computing_square_rootsBabylonian_method) for computing the square root of a number $S$.
###Code
def babylonian(S, num_iters=5):
raise NotImplementedError('Not implemented yet')
math.sqrt(24)
babylonian(24)
###Output
_____no_output_____
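###Markdown
One possible solution sketch (try the exercise first): start from a rough guess and repeatedly average the guess with $S$ divided by the guess, $x_{k+1} = \frac{1}{2}(x_k + S / x_k)$.
###Code
def babylonian_sketch(S, num_iters=5):
    x = S / 2  # any positive initial guess works
    for _ in range(num_iters):
        x = (x + S / x) / 2
    return x

babylonian_sketch(24), math.sqrt(24)
###Output
_____no_output_____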
###Markdown
NumPyThis is a good segue into NumPy. Python provides only a handful of numeric types: ints, longs, floats, and complex numbers. We just declared a class that implements rational numbers. NumPy implements one very useful numeric type: multidimensional arrays.
###Code
# Quick note on importing
import math
math.sin(5)
import math as m
m.sin(5)
import numpy as np
x = np.array([[0, 1], [1, 5]])
x
y = np.array([[4, 0], [0, 4]])
y
x + y
x ** 2
x @ y # Matrix multiplication
np.sum(x)
###Output
_____no_output_____ |
src/Results_Analysis.ipynb | ###Markdown
Importations
###Code
# Importations
import os
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Plot styler
def plt_style(titlesize=16,
labelsize=14,
legendsize=12,
fontsize=14,
figsize=(15,10)):
# Font sizes
plt.rcParams['font.size'] = fontsize
plt.rcParams['axes.labelsize'] = labelsize
plt.rcParams['axes.titlesize'] = titlesize
plt.rcParams['xtick.labelsize'] = labelsize
plt.rcParams['ytick.labelsize'] = labelsize
plt.rcParams['legend.fontsize'] = fontsize
plt.rcParams['figure.titlesize'] = titlesize
# Figure size
plt.figure(1)
plt.figure(figsize = figsize)
# axes
ax = plt.subplot(111)
ax.spines["top"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
# Return axis
return ax
###Output
_____no_output_____
###Markdown
Data
###Code
# Results
RESULTS_PATH = os.path.join('..', 'results')
DF = pd.read_excel(os.path.join(RESULTS_PATH, 'ClassProbs.xls'))
print(DF.shape)
DF.head()
###Output
(17579, 34)
###Markdown
Order by descending $P(Fire)$
###Code
# Sort
DFS = DF.sort_values('ProbF', ascending=False)
DFS.head()
# Get frequency for a row
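# Note: this cell uses `lcovers` and `LCoverDict`, which are defined in cells further below; run those first.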
R = DFS[lcovers].sum() / DFS.shape[0]
R = R[R > 0.01]
R.index = [LCoverDict[i] for i in R.index]
# Colors from CMAP
cmap = plt.get_cmap("tab20c")
norm = matplotlib.colors.Normalize(vmin=0.0, vmax=16.0)
Tabcolors = [cmap(norm(v)) for v in range(0, len(R))]
Tabcolors = np.array(Tabcolors)
colors = Tabcolors
angle = 150
ax = plt_style()
explode = [0.03 for i in range(R.shape[0])]
R.plot(kind='pie',
y='freq',
ax=ax,
autopct='%1.1f%%',
startangle=angle,
fontsize=30,
pctdistance=.8,
explode=explode,
colors=colors,
labeldistance=1.07)
plt.title('')
#ax.get_legend().remove()
plt.ylabel('')
#draw circle
centre_circle = plt.Circle((0,0),0.70,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
plt.tight_layout()
plt.savefig('All_Samples_LCover.pdf', dpi=300)
###Output
_____no_output_____
###Markdown
Risk
###Code
import matplotlib
cmap = matplotlib.cm.get_cmap('Set2')
norm = matplotlib.colors.Normalize(vmin=0, vmax=len(lcovers) - 1)
# Categories
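# risk_level encoding: 3 = high (ProbF >= 0.7), 1 = medium (0.3 <= ProbF < 0.7), 2 = low (ProbF < 0.3)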
DFS.eval('risk_level = 3 * (ProbF >= .7) + 2 * (ProbF < .3) + 1 * ((ProbF < .7) & (ProbF >= .3))',
inplace=True)
DFS
###Output
_____no_output_____
###Markdown
High risk
###Code
HR = DFS[DFS.ProbF >= .7]
features = ['Lat','Long', 'n_inc', 'n_comp', 'n_comp8']
HR[features].describe()
lcovers = ['lcV' + str(i) for i in range(0, 11)]
HR[lcovers].describe()
# Proportions (check)
HR[lcovers].sum() / HR.shape[0]
plt_style(fontsize=30, labelsize=30, titlesize=30)
plt.bar(x=lcovers, height=HR[lcovers].describe().iloc[1,:].values, fc=cmap(1))
plt.xticks(rotation=90)
plt.title("High-risk: average landcover proportions")
# Land cover dictionary
LCoverDict = {'lcV0':'NData',
'lcV1': 'Crops',
'lcV2': 'Native Forest',
'lcV3': 'Plantations',
'lcV4': 'Meadows',
'lcV5': 'Bushes',
'lcV6': 'Wetlands',
'lcV7': 'Water bodies',
'lcV8': 'Water',
'lcV9': 'Bare floor',
'lcV10': 'Snow',
'lcV11': 'Clouds',}
# Get frequency for a row
R = HR[lcovers].sum() / HR.shape[0]
R = R[R > 0.01]
R.index = [LCoverDict[i] for i in R.index]
# Colors from CMAP
cmap = plt.get_cmap("tab20c")
norm = matplotlib.colors.Normalize(vmin=0.0, vmax=16.0)
Tabcolors = [cmap(norm(v)) for v in range(0, len(R))]
Tabcolors = np.array(Tabcolors)
colors = Tabcolors
angle = 150
ax = plt_style()
explode = [0.03 for i in range(R.shape[0])]
R.plot(kind='pie',
y='freq',
ax=ax,
autopct='%1.1f%%',
startangle=angle,
fontsize=30,
pctdistance=.8,
explode=explode,
colors=colors,
labeldistance=1.07)
plt.title('')
#ax.get_legend().remove()
plt.ylabel('')
#draw circle
centre_circle = plt.Circle((0,0),0.70,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
plt.tight_layout()
plt.savefig('HR_Samples_LCover.pdf', dpi=300)
###Output
_____no_output_____
###Markdown
Medium risk
###Code
MR = DFS[(DFS.ProbF >= .3) & (DFS.ProbF < .7)]
MR[features].describe()
MR[lcovers].describe()
plt_style(fontsize=30, labelsize=30, titlesize=30)
plt.bar(x=lcovers, height=MR[lcovers].describe().iloc[1,:].values, fc=cmap(6))
plt.xticks(rotation=90)
plt.title("Medium-risk: average landcover proportions")
# Get frequency for a row
R = MR[lcovers].sum() / MR.shape[0]
R = R[R > 0.01]
R.index = [LCoverDict[i] for i in R.index]
# Colors from CMAP
cmap = plt.get_cmap("tab20c")
norm = matplotlib.colors.Normalize(vmin=0.0, vmax=16.0)
Tabcolors = [cmap(norm(v)) for v in range(0, len(R))]
Tabcolors = np.array(Tabcolors)
colors = Tabcolors
angle = 150
ax = plt_style()
explode = [0.03 for i in range(R.shape[0])]
R.plot(kind='pie',
y='freq',
ax=ax,
autopct='%1.1f%%',
startangle=angle,
fontsize=30,
pctdistance=.8,
explode=explode,
colors=colors,
labeldistance=1.07)
plt.title('')
#ax.get_legend().remove()
plt.ylabel('')
#draw circle
centre_circle = plt.Circle((0,0),0.70,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
plt.tight_layout()
plt.savefig('MR_Samples_LCover.pdf', dpi=300)
###Output
_____no_output_____
###Markdown
Low risk
###Code
LR = DFS[DFS.ProbF < .3]
LR[features].describe()
LR[lcovers].describe()
plt_style(fontsize=30, labelsize=30, titlesize=30)
plt.bar(x=lcovers, height=LR[lcovers].describe().iloc[1,:].values, fc=cmap(0))
plt.xticks(rotation=90)
plt.title("Low-risk: average landcover proportions")
# Get frequency for a row
R = LR[lcovers].sum() / LR.shape[0]
R = R[R > 0.01]
R.index = [LCoverDict[i] for i in R.index]
# Colors from CMAP
cmap = plt.get_cmap("tab20c")
norm = matplotlib.colors.Normalize(vmin=0.0, vmax=16.0)
Tabcolors = [cmap(norm(v)) for v in range(0, len(R))]
Tabcolors = np.array(Tabcolors)
colors = Tabcolors
angle = 150
ax = plt_style()
explode = [0.03 for i in range(R.shape[0])]
R.plot(kind='pie',
y='freq',
ax=ax,
autopct='%1.1f%%',
startangle=angle,
fontsize=30,
pctdistance=.8,
explode=explode,
colors=colors,
labeldistance=1.07)
plt.title('')
#ax.get_legend().remove()
plt.ylabel('')
#draw circle
centre_circle = plt.Circle((0,0),0.70,fc='white')
fig = plt.gcf()
fig.gca().add_artist(centre_circle)
plt.tight_layout()
plt.savefig('LR_Samples_LCover.pdf', dpi=300)
###Output
_____no_output_____
###Markdown
Plots
###Code
# Latitude
plt_style(fontsize=30, labelsize=30, titlesize=30)
var = 'Lat'
sns.distplot(HR[var], color='red', label='HR')
sns.distplot(MR[var], color='orange', label='MR')
sns.distplot(LR[var], color='green', label='LR')
plt.legend()
#plt.title('Density plot')
plt.xlabel('Latitude')
plt.savefig('Latitud_Density.pdf', dpi=300)
# Longitude
var = 'Long'
plt_style(fontsize=30, labelsize=30, titlesize=30)
sns.distplot(HR[var], color='red', label='HR')
sns.distplot(MR[var], color='orange', label='MR')
sns.distplot(LR[var], color='green', label='LR')
plt.legend()
#plt.title('Density plot')
plt.xlabel('Longitude')
plt.savefig('Longitude_Density.pdf', dpi=300)
# Ncomp
var = 'n_comp'
plt_style(fontsize=30, labelsize=30, titlesize=30)
sns.distplot(HR[var], color='red', label='HR')
sns.distplot(MR[var], color='orange', label='MR')
sns.distplot(LR[var], color='green', label='LR')
plt.legend()
#plt.title('Density plot')
plt.xlabel('Number of components')
plt.savefig('NComponents_Density.pdf', dpi=300)
# Ncomp2
var = 'n_comp8'
plt_style(fontsize=30, labelsize=30, titlesize=30)
sns.distplot(HR[var], color='red', label='HR')
sns.distplot(MR[var], color='orange', label='MR')
sns.distplot(LR[var], color='green', label='LR')
plt.legend()
#plt.title('Density plot')
plt.xlabel('Number of components (8 neighbors)')
plt.savefig('NComponents8_Density.pdf', dpi=300)
# Ninc
var = 'n_inc'
plt_style(fontsize=30, labelsize=30, titlesize=30)
sns.distplot(HR[var], color='red', label='HR')
sns.distplot(MR[var], color='orange', label='MR')
sns.distplot(LR[var], color='green', label='LR')
plt.legend()
plt.xlim([-1,20])
#plt.title('Density plot')
plt.xlabel('Number of fires in the time horizon (no outliers)')
plt.savefig('ninc_Density.pdf', dpi=300)
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/tutorial_linear_model-checkpoint.ipynb | ###Markdown
Tutorial for ridge regression model Demo code for how to use the linear encoding model used in the study ‘Single-trial neural dynamics are dominated by richly varied movements’ by Musall, Kaufman et al., 2019. This code shows how to build a design matrix based on task and movement events, run the linear model, analyze the fitted beta weights, and quantify cross-validated explained variance. To run, add the repo to your MATLAB path and download our widefield dataset from the CSHL repository, under http://repository.cshl.edu/38599/. Download the data folder 'Widefield' and set its directory below. Adapted to Python by Michael Sokoletsky, 2021
###Code
# Get some data from example recording
import h5py
import re
import matplotlib.pyplot as plt
from os.path import join as pjoin
from scipy import io
from make_design_matrix import *
from array_shrink import *
from cross_val_model import *
from model_corr import *
local_disk = 'D:\Churchland\Widefield'
animal = 'test' # example animal
rec = 'test' # example recording
f_path = pjoin(local_disk, animal, rec) # path to demo recording
# opts = io.loadmat(pjoin(f_path, 'opts2.mat'))['opts'] # load some options
# # to do - figure out how to elegantly convert NumPy structured arrays loaded using the loadmat function into dictionaries
# opts = {re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower(): opts[name][0][0][0].squeeze()[0] if not len(opts[name][0][0][0]) else opts[name][0][0][0].squeeze() for name in opts.dtype.names} # convert to a Python dict
# with h5py.File(pjoin(f_path, 'Vc.mat'),'r') as f_spa: # load spatial components
# U = np.array(f_spa['U']).T
# f_tem = io.loadmat(pjoin(f_path, 'interpVc.mat')) # load adjusted temporal components
# Vc = f_tem['Vc']
# frames = f_tem['frames'][0][0]
# mask = np.squeeze(np.isnan(U[:,:,0]))
# frame_rate = opts['frame_rate']
# Assign some basic options for the model
"""
There are three event types when building the design matrix.
Event type 1 will span the rest of the current trial. Type 2 will span
frames according to sPostTime. Type 3 will span frames before the event
according to mPreTime and frames after the event according to mPostTime.
"""
opts['s_post_time'] = (6 * frame_rate).astype(int) # follow stim events for s_post_stim in frames (used for event_type 2)
opts['m_pre_time'] = (0.5 * frame_rate).astype(int) # precede motor events to capture preparatory activity in frames (used for event_type 3)
opts['m_post_time'] = (2 * frame_rate).astype(int) # follow motor events for m_post_stim in frames (used for event_type 3)
opts['frames_per_trial'] = frames # nr of frames per trial
opts['folds'] = 10 # nr of folds for cross-validation
# Get some events
with h5py.File(pjoin(f_path, 'orgRegData.mat'),'r') as f_des: # load design matrix to isolate example events and video data
full_R = np.array(f_des['fullR']).T
rec_labels = [''.join([chr(char[0]) for char in f_des[label[0]]]) for label in f_des['recLabels'] ]
rec_labels = [re.sub(r'(?<!^)(?=[A-Z])', '_', label).lower() for label in rec_labels] # Convert to snake case
rec_idx = np.array(f_des['recIdx'])-1 # -1 to convert to Python indexing
idx = np.array(f_des['idx'])
rec_idx = rec_idx[idx == 0]
vid_R = full_R[:,-400:] # last 400 PCs are video components
# Task events
task_labels = ['time', 'l_vis_stim', 'r_vis_stim', 'choice', 'prev_reward'] # some task variables
task_event_type = [1, 2, 2, 1, 1] # different type of events.
task_events = np.empty((np.size(full_R,0),len(task_labels)))
task_events[:,0] = full_R[:, np.where(rec_idx == rec_labels.index(task_labels[0]))[0][0]] # find time regressor. This happens every first frame in every trial.
task_events[:,1] = full_R[:, np.where(rec_idx == rec_labels.index(task_labels[1]))[0][0]] # find event regressor for left visual stimulus
task_events[:,2] = full_R[:, np.where(rec_idx == rec_labels.index(task_labels[2]))[0][0]] # find event regressor for right visual stimulus
task_events[:,3] = full_R[:, np.where(rec_idx == rec_labels.index(task_labels[3]))[0][0]] # find choice reressor. This is true when the animal responded on the left.
task_events[:,4] = full_R[:, np.where(rec_idx == rec_labels.index(task_labels[4]))[0][0]] # find previous reward regressor. This is true when previous trial was rewarded.
# Movement events
move_labels = ['l_grab', 'r_grab', 'l_lick', 'r_lick', 'nose', 'whisk'] # some movement variables
move_event_type = [3, 3, 3, 3, 3, 3] # different type of events. these are all peri-event variables.
move_events = np.empty((np.size(full_R,0),len(move_labels)))
for x in range(len(move_labels)):
move_events[:,x] = full_R[:, np.where(rec_idx == rec_labels.index(move_labels[x]))[0][0]+15] # find movement regressor.
del full_R # clear old design matrix
# Make design matrix
task_R, task_idx = make_design_matrix(task_events, task_event_type, opts) # make design matrix for task variables
move_R, move_idx = make_design_matrix(move_events, move_event_type, opts) # make design matrix for movement variables
full_R = np.hstack((task_R, move_R, vid_R)) # make new, single design matrix
move_labels.append('video')
reg_idx = np.concatenate((task_idx, move_idx + np.max(task_idx) + 1, np.full(np.size(vid_R,1),np.max(move_idx)+np.max(task_idx)+2))) # regressor index
reg_labels = task_labels + move_labels
# Run QR and check for rank-deficiency. This will show whether a given regressor is highly collinear with other regressors in the design matrix.
"""
The resulting plot ranges from 0 to 1 for each regressor, with 1 being
fully orthogonal to all preceeding regressors in the matrix and 0 being
fully redundant. Having fully redundant regressors in the matrix will
break the model, so in this example those regressors are removed. In
practice, you should understand where the redundancy is coming from and
change your model design to avoid it in the first place!
"""
%matplotlib inline
rej_idx = np.zeros((1,np.size(full_R,1)), dtype=bool)
full_QRR = LA.qr(np.divide(full_R,np.sqrt(np.sum(full_R**2,0))),mode='r') # orthogonalize normalized design matrix
plt.plot(abs(np.diagonal(full_QRR)),linewidth=2)
plt.ylim(0,1.1)
plt.title('Regressor orthogonality') # this shows how orthogonal individual regressors are to the rest of the matrix
plt.ylabel('Norm. vector angle')
plt.xlabel('Regressors')
if np.sum(abs(np.diagonal(full_QRR)) > np.max(np.shape(full_R)) * abs(np.spacing(full_QRR[0,0]))) < np.size(full_R,1): # check if design matrix is full rank
temp = ~(abs(np.diagonal(full_QRR)) > max(np.shape(full_R)) * abs(np.spacing(full_QRR[0,0])))
    print(f'Design matrix is rank-deficient. Removing {np.sum(temp)}/{np.sum(~rej_idx)} additional regressors.')
    rej_idx[~rej_idx] = temp # reject regressors that cause a rank-deficient matrix
# np.savez(pjoin(f_path, 'reg_data.py'), full_R=full_R, reg_idx=reg_idx, reg_labels=reg_labels, full_QRR=full_QRR) # save some model variables
# Run cross-validation
# full model - this will take a moment
[V_full, full_beta, _, full_idx, full_ridge, full_labels] = cross_val_model(full_R, Vc, reg_labels, reg_idx, reg_labels, opts['folds'])
# np.savez(pjoin(f_path, 'cv_full.py'), V_full=V_full, full_beta=full_beta, full_R=full_R, full_idx=full_idx, full_ridge=full_ridge, full_labels=full_labels) # save some results
full_mat = model_corr(Vc, V_full,U)[0] ** 2 # compute explained variance
full_mat = array_shrink(full_mat, mask,'split') # recreate full frame
# task model alone - this will take a moment
[V_task, task_beta, task_R, task_idx, task_ridge, task_labels] = cross_val_model(full_R, Vc, task_labels, reg_idx, reg_labels, opts['folds'])
# np.savez(pjoin(f_path, 'cv_task.py'), V_task=V_task, task_beta=task_beta, task_R=task_R, task_idx=task_idx, task_ridge=task_ridge, task_labels=task_labels) # save some results
task_mat = model_corr(Vc,V_task,U)[0] ** 2 # compute explained variance
task_mat = array_shrink(task_mat,mask,'split') # recreate task frame
# movement model alone - this will take a moment
[V_move, move_beta, move_R, move_idx, move_ridge, move_labels] = cross_val_model(full_R, Vc, move_labels, reg_idx, reg_labels, opts['folds'])
# save([fPath 'cvMove.mat'], 'Vmove', 'moveBeta', 'moveR', 'moveIdx', 'moveRidge', 'moveLabels') # save some results
# np.savez(pjoin(f_path, 'cv_move.py'), V_move=V_move, move_beta=move_beta, move_R=move_R, move_idx=move_idx, move_ridge=move_ridge, move_labels=move_labels) # save some results
move_mat = model_corr(Vc,V_move, U)[0] ** 2 # compute explained variance
move_mat = array_shrink(move_mat,mask,'split') # recreate move frame
# Show R^2 results
%matplotlib inline
curr_cmap = copy.copy(plt.cm.get_cmap('inferno'))
curr_cmap.set_bad(color='white') # make nan values white
# cross-validated R^2
fig, (ax1, ax2, ax3) = plt.subplots(1, 3)
fig.tight_layout()
ax1.imshow(full_mat,cmap=curr_cmap, vmin=0,vmax=0.75);
ax1.set_title('cVR$^2$ - Full model')
ax1.axis('off');
ax2.imshow(task_mat,cmap=curr_cmap, vmin=0,vmax=0.75);
ax2.set_title('cVR$^2$ - Task model')
ax2.axis('off');
ax3.imshow(move_mat,cmap=curr_cmap, vmin=0,vmax=0.75);
ax3.set_title('cVR$^2$ - Movement model')
ax3.axis('off');
# unique R^2
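# (Added note) Each map below subtracts a single-category model's cvR^2 from the full model's:
# full_mat - move_mat is the variance the full model explains beyond movements alone (attributed
# to the task regressors), and full_mat - task_mat is the converse for the movement regressors.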
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.tight_layout()
ax1.imshow(full_mat - move_mat,cmap=curr_cmap, vmin=0,vmax=0.4);
ax1.set_title('deltaR$^2$ - Task model')
ax1.axis('off');
ax2.imshow(full_mat - task_mat,cmap=curr_cmap, vmin=0,vmax=0.4);
ax2.set_title('deltaR$^2$ - Movement model')
ax2.axis('off');
###Output
_____no_output_____ |
Model_Attempts/Second_Try_Model.ipynb | ###Markdown
King County Dataset Linear Regression Model 2
In this model I am going to try to clean up a bit, maybe starting with 'date' and 'sqft_basement'.
###Code
import pandas as pd
data = pd.read_csv("kc_house_data.csv")
data.head()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
The following features are all right-skewed, with outliers that may be keeping the data from being normal.
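As a quick, hedged check (an illustrative sketch, not part of the original workflow), pandas' skew() confirms the right skew before transforming:
###Code
# Sketch: positive skewness values indicate right-skewed distributions
skewed_cols = ["bedrooms", "sqft_living", "sqft_lot", "sqft_living15", "sqft_lot15"]
print(data[skewed_cols].skew())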
###Code
# bedrooms, sqft_living, sqft_lot, sqft_living15, sqft_lot15
# Perform log transformation
logbedrooms = np.log(data["bedrooms"])
logliving = np.log(data["sqft_living"])
loglot = np.log(data["sqft_lot"])
loglivingnear = np.log(data["sqft_living15"])
loglotnear = np.log(data["sqft_lot15"])
# Standardize the log-transformed features and write them back into the original columns
data["bedrooms"] = (logbedrooms-np.mean(logbedrooms))/np.sqrt(np.var(logbedrooms))
data["sqft_living"] = (logliving-np.mean(logliving))/np.sqrt(np.var(logliving))
data["sqft_lot"] = (loglot-np.mean(loglot))/np.sqrt(np.var(loglot))
data["sqft_living15"] = (loglivingnear-np.mean(loglivingnear))/np.sqrt(np.var(loglivingnear))
data["sqft_lot15"] = (loglotnear-np.mean(loglotnear))/(np.sqrt(np.var(loglotnear)))
data.head()
# Drop columns whose NaNs trip up the model (also dropping 'waterfront' and 'yr_renovated')
X = data.drop(["date","sqft_basement", "view", "waterfront", "yr_renovated"], axis=1)
y = pd.DataFrame(data, columns = ['price'])
# Perform a train-test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
# A brief preview of our train test split
print(len(X_train), len(X_test), len(y_train), len(y_test))
# Apply your model to the train set
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X_train, y_train)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
# Calculate predictions on training and test sets
y_hat_train = linreg.predict(X_train)
y_hat_test = linreg.predict(X_test)
# Calculate training and test residuals
train_residuals = y_hat_train - y_train
test_residuals = y_hat_test - y_test
#Calculate the Mean Squared Error (MSE)
from sklearn.metrics import mean_squared_error
train_mse = mean_squared_error(y_train, y_hat_train)
test_mse = mean_squared_error(y_test, y_hat_test)
print('Train Mean Squared Error:', train_mse)
print('Test Mean Squared Error:', test_mse)
#Evaluate the effect of train-test split
import random
random.seed(8)
train_err = []
test_err = []
t_sizes = list(range(5,100,5))
for t_size in t_sizes:
temp_train_err = []
temp_test_err = []
for i in range(100):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=t_size/100)
linreg.fit(X_train, y_train)
y_hat_train = linreg.predict(X_train)
y_hat_test = linreg.predict(X_test)
temp_train_err.append(mean_squared_error(y_train, y_hat_train))
temp_test_err.append(mean_squared_error(y_test, y_hat_test))
train_err.append(np.mean(temp_train_err))
test_err.append(np.mean(temp_test_err))
plt.scatter(t_sizes, train_err, label='Training Error')
plt.scatter(t_sizes, test_err, label='Testing Error')
plt.legend()
import statsmodels.api as sm
from statsmodels.formula.api import ols
formula = "price ~ id+bedrooms+bathrooms+sqft_living+sqft_lot+floors+yr_built+zipcode+lat+long+sqft_living15+sqft_lot15"
model = ols(formula= formula, data=data).fit()
model.summary()
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score
cv_5_results = np.mean(cross_val_score(linreg, X, y, cv=5, scoring='neg_mean_squared_error'))
cv_5_results
###Output
_____no_output_____ |
brain_image_segmentation.ipynb | ###Markdown
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Importing all the required libraries
import os
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use("ggplot")
%matplotlib inline
import cv2
from tqdm import tqdm_notebook, tnrange
from glob import glob
from itertools import chain
from skimage.io import imread, imshow, concatenate_images
from skimage.transform import resize
from skimage.morphology import label
from sklearn.model_selection import train_test_split
import tensorflow as tf
from skimage.color import rgb2gray
from tensorflow.keras import Input
from tensorflow.keras.models import Model, load_model, save_model
from tensorflow.keras.layers import Input, Activation, BatchNormalization, Dropout, Lambda, Conv2D, Conv2DTranspose, MaxPooling2D, concatenate
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras import backend as K
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
# declaring the image dimensions
im_width = 256
im_height = 256
# Separate the image paths from their corresponding mask paths in the dataset
train_files = []
mask_files = glob('../input/lgg-mri-segmentation/kaggle_3m/*/*_mask*')
for i in mask_files:
train_files.append(i.replace('_mask',''))
print(train_files[:10])
print(mask_files[:10])
# Visualizing the datasets
rows,cols=3,3
fig=plt.figure(figsize=(10,10))
for i in range(1,rows*cols+1):
fig.add_subplot(rows,cols,i)
img_path=train_files[i]
msk_path=mask_files[i]
img=cv2.imread(img_path)
img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
msk=cv2.imread(msk_path)
plt.imshow(img)
plt.imshow(msk,alpha=0.4)
plt.show()
# Creating the dataframe of the training images and the mask files
df = pd.DataFrame(data={"filename": train_files, 'mask' : mask_files})
df_train, df_test = train_test_split(df,test_size = 0.1)
df_train, df_val = train_test_split(df_train,test_size = 0.2)
print(df_train.values.shape)
print(df_val.values.shape)
print(df_test.values.shape)
# Creating the data generator to load the dataset
def train_generator(data_frame, batch_size, aug_dict,
image_color_mode="rgb",
mask_color_mode="grayscale",
image_save_prefix="image",
mask_save_prefix="mask",
save_to_dir=None,
target_size=(256,256),
seed=1):
image_datagen = ImageDataGenerator(**aug_dict)
mask_datagen = ImageDataGenerator(**aug_dict)
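    # Note (added): both generators share aug_dict and, below, the same seed, so the random
    # augmentations applied to each image stay aligned with those applied to its mask.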
# Using flow from dataframe in order to load images from folder
image_generator = image_datagen.flow_from_dataframe(
data_frame,
x_col = "filename",
class_mode = None,
color_mode = image_color_mode,
target_size = target_size,
batch_size = batch_size,
save_to_dir = save_to_dir,
save_prefix = image_save_prefix,
seed = seed)
    # Using flow_from_dataframe in order to load the corresponding masks from folder
mask_generator = mask_datagen.flow_from_dataframe(
data_frame,
x_col = "mask",
class_mode = None,
color_mode = mask_color_mode,
target_size = target_size,
batch_size = batch_size,
save_to_dir = save_to_dir,
save_prefix = mask_save_prefix,
seed = seed)
# zipping both the generators so as to load them together
train_gen = zip(image_generator, mask_generator)
for (img, mask) in train_gen:
img, mask = adjust_data(img, mask)
yield (img,mask)
def adjust_data(img,mask):
img = img / 255
mask = mask / 255
mask[mask > 0.5] = 1
mask[mask <= 0.5] = 0
return (img, mask)
# Define the loss and metric functions for image segmentation: Dice coefficient and IoU
smooth=100
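# Reference (added note): Dice = (2*|X ∩ Y| + s) / (|X| + |Y| + s) and IoU = (|X ∩ Y| + s) / (|X ∪ Y| + s),
# where s = smooth guards against division by zero on empty masks; the Dice loss below is the negative
# Dice coefficient, so minimizing the loss maximizes mask overlap.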
def dice_coef(y_true, y_pred):
y_truef=K.flatten(y_true)
y_predf=K.flatten(y_pred)
And=K.sum(y_truef* y_predf)
return((2* And + smooth) / (K.sum(y_truef) + K.sum(y_predf) + smooth))
def dice_coef_loss(y_true, y_pred):
return -dice_coef(y_true, y_pred)
def iou(y_true, y_pred):
intersection = K.sum(y_true * y_pred)
sum_ = K.sum(y_true + y_pred)
jac = (intersection + smooth) / (sum_ - intersection + smooth)
return jac
def jac_distance(y_true, y_pred):
y_truef=K.flatten(y_true)
y_predf=K.flatten(y_pred)
return - iou(y_true, y_pred)
# Defining the U net model for image segmentation
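# (Added note) The model below follows the standard U-Net layout: a contracting path of
# Conv2D + BatchNorm + MaxPooling blocks (64 -> 1024 filters), an expanding path of
# Conv2DTranspose upsampling blocks concatenated with the matching encoder features
# (skip connections), and a final 1x1 convolution with sigmoid for the binary mask.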
def unet(input_size=(256,256,3)):
inputs = Input(input_size)
conv1 = Conv2D(64, (3, 3), padding='same')(inputs)
bn1 = Activation('relu')(conv1)
conv1 = Conv2D(64, (3, 3), padding='same')(bn1)
bn1 = BatchNormalization(axis=3)(conv1)
bn1 = Activation('relu')(bn1)
pool1 = MaxPooling2D(pool_size=(2, 2))(bn1)
conv2 = Conv2D(128, (3, 3), padding='same')(pool1)
bn2 = Activation('relu')(conv2)
conv2 = Conv2D(128, (3, 3), padding='same')(bn2)
bn2 = BatchNormalization(axis=3)(conv2)
bn2 = Activation('relu')(bn2)
pool2 = MaxPooling2D(pool_size=(2, 2))(bn2)
conv3 = Conv2D(256, (3, 3), padding='same')(pool2)
bn3 = Activation('relu')(conv3)
conv3 = Conv2D(256, (3, 3), padding='same')(bn3)
bn3 = BatchNormalization(axis=3)(conv3)
bn3 = Activation('relu')(bn3)
pool3 = MaxPooling2D(pool_size=(2, 2))(bn3)
conv4 = Conv2D(512, (3, 3), padding='same')(pool3)
bn4 = Activation('relu')(conv4)
conv4 = Conv2D(512, (3, 3), padding='same')(bn4)
bn4 = BatchNormalization(axis=3)(conv4)
bn4 = Activation('relu')(bn4)
pool4 = MaxPooling2D(pool_size=(2, 2))(bn4)
conv5 = Conv2D(1024, (3, 3), padding='same')(pool4)
bn5 = Activation('relu')(conv5)
conv5 = Conv2D(1024, (3, 3), padding='same')(bn5)
bn5 = BatchNormalization(axis=3)(conv5)
bn5 = Activation('relu')(bn5)
up6 = concatenate([Conv2DTranspose(512, (2, 2), strides=(2, 2), padding='same')(bn5), conv4], axis=3)
conv6 = Conv2D(512, (3, 3), padding='same')(up6)
bn6 = Activation('relu')(conv6)
conv6 = Conv2D(512, (3, 3), padding='same')(bn6)
bn6 = BatchNormalization(axis=3)(conv6)
bn6 = Activation('relu')(bn6)
up7 = concatenate([Conv2DTranspose(256, (2, 2), strides=(2, 2), padding='same')(bn6), conv3], axis=3)
conv7 = Conv2D(256, (3, 3), padding='same')(up7)
bn7 = Activation('relu')(conv7)
conv7 = Conv2D(256, (3, 3), padding='same')(bn7)
bn7 = BatchNormalization(axis=3)(conv7)
bn7 = Activation('relu')(bn7)
up8 = concatenate([Conv2DTranspose(128, (2, 2), strides=(2, 2), padding='same')(bn7), conv2], axis=3)
conv8 = Conv2D(128, (3, 3), padding='same')(up8)
bn8 = Activation('relu')(conv8)
conv8 = Conv2D(128, (3, 3), padding='same')(bn8)
bn8 = BatchNormalization(axis=3)(conv8)
bn8 = Activation('relu')(bn8)
up9 = concatenate([Conv2DTranspose(64, (2, 2), strides=(2, 2), padding='same')(bn8), conv1], axis=3)
conv9 = Conv2D(64, (3, 3), padding='same')(up9)
bn9 = Activation('relu')(conv9)
conv9 = Conv2D(64, (3, 3), padding='same')(bn9)
bn9 = BatchNormalization(axis=3)(conv9)
bn9 = Activation('relu')(bn9)
conv10 = Conv2D(1, (1, 1), activation='sigmoid')(bn9)
return Model(inputs=[inputs], outputs=[conv10])
model = unet()
model.summary()
# defining the parameters for model training.
EPOCHS = 150
BATCH_SIZE = 32
learning_rate = 1e-4
# Training the model
train_generator_args = dict(rotation_range=0.2,
width_shift_range=0.05,
height_shift_range=0.05,
shear_range=0.05,
zoom_range=0.05,
horizontal_flip=True,
fill_mode='nearest')
train_gen = train_generator(df_train, BATCH_SIZE,
train_generator_args,
target_size=(im_height, im_width))
test_gener = train_generator(df_val, BATCH_SIZE,
dict(),
target_size=(im_height, im_width))
model = unet(input_size=(im_height, im_width, 3))
decay_rate = learning_rate / EPOCHS
opt = Adam(lr=learning_rate, beta_1=0.9, beta_2=0.999, epsilon=None, decay=decay_rate, amsgrad=False)
model.compile(optimizer=opt, loss=dice_coef_loss, metrics=["binary_accuracy", iou, dice_coef])
callbacks = [ModelCheckpoint('unet_brain_mri_seg.hdf5', verbose=1, save_best_only=True)]
history = model.fit(train_gen,
steps_per_epoch=len(df_train) / BATCH_SIZE,
epochs=EPOCHS,
callbacks=callbacks,
validation_data = test_gener,
validation_steps=len(df_val) / BATCH_SIZE)
# Visualizing the model performance
a = history.history
list_traindice = a['dice_coef']
list_testdice = a['val_dice_coef']
list_trainjaccard = a['iou']
list_testjaccard = a['val_iou']
list_trainloss = a['loss']
list_testloss = a['val_loss']
plt.figure(1)
plt.plot(list_testloss, 'b-')
plt.plot(list_trainloss,'r-')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.title('loss graph', fontsize = 15)
plt.figure(2)
plt.plot(list_traindice, 'r-')
plt.plot(list_testdice, 'b-')
plt.xlabel('iteration')
plt.ylabel('Dice coefficient')
plt.title('Dice coefficient graph', fontsize = 15)
plt.show()
###Output
_____no_output_____ |
_jupyter/covidml/.ipynb_checkpoints/covidpred-checkpoint.ipynb | ###Markdown
Machine learning for proteomics: an easy introduction
In a lot of recent proteomics publications, machine learning algorithms are used to perform a variety of tasks such as sample classification, image segmentation, or prediction of important features in a set of samples.
In this series I want to explore how to employ machine learning in omics/proteomics, along with some general do's and don'ts of machine learning applications, plus some Python 3 code to exemplify the ideas.
The only prerequisite is a basic understanding of Python. I will drop in explanations of the things I reckon are important, but feel free to reach out with curiosities or similar.
Case study: using random forest to predict COVID19 severity
Random forest is one of the most basic learning algorithms around the block and one of the easiest to apply. For a detailed explanation, there are several resources such as [Sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html).
One of the major applications where random forests are employed in proteomics papers is scoring of proteins across samples and sample classification.
Random forests can be conceptualized as a hierarchical stack of decision trees. A decision tree boils down to a series of questions that uniquely separate a group of samples from the others.
To calculate which features (e.g. petal length or width in the classic iris dataset) contribute more to the classification, we can observe how many misclassified samples are left after every decision.
This concept is known as Gini impurity and relates to how pure a leaf is (how many samples of only one class are present) compared to the remaining ones.
Features leading to purer leaves are more important to the overall classification than others. __So we can retrieve how important a feature is, and this will tell us about important proteins/analytes in our sample.__
For this example I will use the COVID data from a recent Cell paper where a random forest classifier was used to classify severe and non-severe COVID19 patients based on metabolites and proteins.
All data is available [here](https://www.cell.com/cell/fulltext/S0092-8674(20)30627-9supplementaryMaterial).
So let's start coding!
Import packages and prepare training and test datasets
We import and prepare the data. Luckily for us, in the publication the data is already separated into test and training sets.
Every ML model needs to be trained on a set of data, and then its generalization capabilities (i.e. how well the model learned our data) are tested on an independent dataset.
If test data is not available, the training data is usually split 0.75/0.25 into training/test, or 0.66/0.33 if there are sufficient data points.
Anyway, let's continue with our example. In the publication the data is already merged; we only need to get the positive (severe COVID-19) and negative (non-severe COVID-19) samples and assign them a label.
For this, we will use supplementary table 1, which has the patient information.
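Before loading the data, here is a minimal, hedged sketch of the feature-importance idea described above (using the iris dataset the petal measurements allude to, not the paper's data):
###Code
# Sketch: Gini-based feature importances from a fitted RandomForestClassifier (illustration only)
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

iris = load_iris()
clf_demo = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)
importances = pd.Series(clf_demo.feature_importances_, index=iris.feature_names)
print(importances.sort_values(ascending=False))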
###Code
import pandas as pd
import numpy as np
import joblib
from sklearn.model_selection import cross_validate, StratifiedShuffleSplit
from sklearn.model_selection import RepeatedStratifiedKFold, KFold, RepeatedKFold
from sklearn.metrics import confusion_matrix, make_scorer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
train = pd.read_excel('mmc3.xlsx', sheet_name='Prot_and_meta_matrix')
test = pd.read_excel('mmc4.xlsx', sheet_name='Prot_and_meta_matrix')
print(train.shape, test.shape)
###Output
(1639, 33) (1590, 12)
###Markdown
As seen, there is a different number of features in the training (1639) and test (1590) matrices. While for some algorithms this doesn't matter, a random forest needs the same features in both, so we will quickly fix it by keeping only the shared ones.
###Code
train = train[train['Proteins/Metabolites'].isin(test['Proteins/Metabolites'])]
test = test[test['Proteins/Metabolites'].isin(train['Proteins/Metabolites'])]
# now we sort test and training
# test.sort_values(by=['Proteins/Metabolites'], inplace=True)
def fix_headers(df):
"""
retrieve first row then create new headers
"""
# trick to get column names
oldcol = list(df)
newcol = list(df.iloc[0])
newcol[:2] = oldcol[:2]
df.columns = newcol
df.drop(df.index[0], inplace=True)
return df
# fix headers and sort
train = fix_headers(train).sort_values(['Proteins/Metabolites'])
test = fix_headers(test).sort_values(['Proteins/Metabolites'])
def target_features_split(info, df, sheet_name):
"""
utility function to preprocess data and retrieve target (labels) and features (numerical values)
"""
df.fillna(0, inplace=True)
ids = df[['Proteins/Metabolites', 'Gene Symbol']]
df.drop(['Proteins/Metabolites', 'Gene Symbol'], axis=1, inplace=True)
# df needs to have features as columns and samples as row (i.e wide format)
df = df.T
df['class'] = df.index.map(info)
return df.drop('class', axis=1).values, df['class'].values
# we need to have positive and negative labels (i.e. severe and non-severe patients)
info = pd.read_excel('mmc1.xlsx', index_col=0, sheet_name='Clinical_information')
info = info[info['Group d'].isin([2,3])]
info['class'] = [0 if x==2 else 1 for x in list(info['Group d'])]
info = dict(zip(info.index, info['class']))
# get training and test data in format for ml
Xtrain,ytrain = target_features_split(info, train, sheet_name='Prot_and_meta_matrix')
Xtest, ytest = target_features_split(info, test, sheet_name='Prot_and_meta_matrix')
Xtrain.shape, ytrain.shape, Xtest.shape, ytest.shape
###Output
_____no_output_____
###Markdown
Base machine learning
We can start by fitting a very simple model with all default parameters. For this, we initialize an empty classifier and then fit it to the data.
It is very important in machine learning to always set a seed (random_state in sklearn) to ensure reproducibility of the results.
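As a tiny, hedged demonstration on toy data (not part of the COVID analysis), fixing random_state makes the fitted forest fully reproducible:
###Code
# Sketch: two forests trained with the same seed give identical predictions
from sklearn.datasets import make_classification
toy_X, toy_y = make_classification(n_samples=50, random_state=0)
pred_a = RandomForestClassifier(random_state=42).fit(toy_X, toy_y).predict(toy_X)
pred_b = RandomForestClassifier(random_state=42).fit(toy_X, toy_y).predict(toy_X)
print(np.array_equal(pred_a, pred_b))  # True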
###Code
def evaluate(model, test_features, test_labels):
"""
Evaluate model accuracy
"""
predictions = model.predict(test_features)
ov = np.equal(test_labels, predictions)
tp = ov[np.where(ov ==True)].shape[0]
return tp/predictions.shape[0]
# now we train on the features from the training set
clf_rf_default = RandomForestClassifier(random_state=42)
clf_rf_default.fit(Xtrain, ytrain)
print(evaluate(clf_rf_default, Xtest, ytest))
###Output
1.0
###Markdown
Here we can see we got 70% recall and made two classification mistakes, where we predicted non-severe COVID (i.e. 0) instead of severe COVID (1). Let's see if we can improve this with some parameter optimization. For this we will use a grid search: we generate a set of candidate parameters and test random combinations when training our model.
Alternatively, GridSearchCV can be used, where instead of random combinations all possible ones are tested.
RandomizedSearchCV and GridSearchCV use what is known as __cross validation__ or CV, a machine learning technique where the data is split into equal parts (folds), all but one fold are used to train the model, and the remaining fold is used to predict. In this way a more robust estimate of model performance can be obtained, but __the final model should be trained on all available data, which usually yields the best performance__.
For a more in-depth explanation of cross validation, there is an excellent introduction [here](https://scikit-learn.org/stable/modules/cross_validation.html).
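As a small, hedged illustration of the k-fold idea on toy indices (separate from the modelling below), KFold simply partitions the samples so that each fold is held out once:
###Code
# Sketch: 5-fold split of 10 toy samples; each fold is held out once while the rest train
from sklearn.model_selection import KFold
toy_idx = np.arange(10).reshape(-1, 1)
for fold, (tr, te) in enumerate(KFold(n_splits=5).split(toy_idx)):
    print(f"fold {fold}: train {tr}, test {te}")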
###Code
def random_search():
# Number of trees in random forest
n_estimators = [1000, 5000, 10000]
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(20, 400, num = 40)]
# Number of features to consider at every split
max_features = ['sqrt', 'log2']
# % samples required to split a node
min_samples_split = [2, 4, 6, 8, 10]
# Minimum % samples required at each leaf node
min_samples_leaf = [1, 2, 4, 8]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
return random_grid
# initialize parameter space
grid = random_search()
clf = RandomForestClassifier(random_state=0)
# try 5 fold CV on 100 combinations aka 500 fits
rf_opt = RandomizedSearchCV(estimator = clf,
param_distributions = grid,
cv = 5,
n_iter=100,
verbose=1,
n_jobs = -1,
scoring='roc_auc')
# Fit the random search model
rf_opt_grid=rf_opt.fit(Xtrain, ytrain)
# retrieve best performing model
clf_rf_opt = rf_opt.best_estimator_
evaluate(clf_rf_opt, Xtest, ytest)
'Performance increase of {:.1%}'.format(evaluate(clf_rf_opt, Xtest, ytest) - evaluate(clf_rf_default, Xtest, ytest))
# save classifier to a file
joblib.dump(clf_rf_opt, 'RF_covid.clf')
###Output
_____no_output_____
###Markdown
We can manually inspect the performance at every combination of parameters directly and compare it with the one reported in the paper (AUC 0.957), since we used the same score ('roc_auc') in RandomizedSearchCV.
###Code
res = pd.DataFrame.from_dict(rf_opt_grid.cv_results_)
res['mean_test_score'].max()
###Output
_____no_output_____ |
s1/codes/s1c.ipynb | ###Markdown
Class Text
###Code
import nltk
nltk.download('gutenberg')
from nltk.corpus import gutenberg
from nltk.text import Text
crp = Text(gutenberg.words('blake-poems.txt'))
# counting words
crp.count('love')
crp.count('Love')
# show concordance lines (context) for a word in the corpus
crp.concordance('love')
# words in the same context
crp.similar('love')
# contexts shared by both words
crp.common_contexts(['love', 'laugh'])
# dispersion plot
crp.dispersion_plot(['love','laugh'])
###Output
_____no_output_____ |