path | concatenated_notebook
---|---
docs/code/examples/cli.ipynb | ###Markdown
Command line interface

This Jupyter notebook gives an overview of the command line interface (CLI) that comes with the Python package [mevis](https://pypi.org/project/mevis) after installation with pip. The .ipynb file can be found [here](https://github.com/robert-haas/mevis/tree/master/examples).

Show the help text
###Code
!mevis -h
###Output
usage: mevis [-h] -i input_filepath [-o output_filepath] [-f] [-v]
[-b backend] [-l layout_method] [-cd capture_delay]
[-ft filter_target] [-fc filter_context] [-fm filter_mode] [-gua]
[-gud] [-nl node_label] [-nc node_color] [-no node_opacity]
[-ns node_size] [-nsh node_shape] [-nbc node_border_color]
[-nbs node_border_size] [-nlc node_label_color]
[-nls node_label_size] [-nh node_hover] [-ncl node_click]
[-ni node_image] [-np node_properties] [-el edge_label]
[-ec edge_color] [-eo edge_opacity] [-es edge_size]
[-elc edge_label_color] [-els edge_label_size] [-eh edge_hover]
[-ecl edge_click] [--kwargs [KWARGS [KWARGS ...]]]
Visualize an OpenCog Atomspace as graph with two kinds of vertices.
optional arguments:
-h, --help show this help message and exit
-i input_filepath path of input file (.scm) containing an Atomspace
-o output_filepath path of output file, with following cases
- none create plot and display it in webbrowser
- .html create plot and export it to a HTML file
- .jpg create plot and export it to a JPG file
- .png create plot and export it to a PNG file
- .svg create plot and export it to a SVG file
works only with backend d3
- .gml create graph and export it to GML file
- .gml.gz same but file is compressed with gzip
- .gml.bz2 same but file is compressed with bzip2
-f, --force overwrite output_filepath if it already exists
-v, --verbose print messages about intermediary results
-b backend backend library for graph visualization
"d3" = d3.js
"vis" = vis-network.js
"three" = 3d-force-directed.js using three.js
-l layout_method layout method to calculate node coordinates
- "dot"
- "neato"
- "twopi"
- "circo"
- "fdp"
- "sfdp"
- "bipartite"
- "circular"
- "kamada_kawai"
- "planar"
- "random"
- "shell"
- "spring"
- "spectral"
- "spiral"
-cd capture_delay delay in seconds when capturing a static image
for JPG/PNG/SVG export. Default: 3.5
-ft filter_target filter target to select Atoms
-fc filter_context filter context to expand selected Atoms to
- "atom" = selected Atoms
- "in" = selection + incoming neighbors
- "out" = selection + outgoing neighbors
- "both" = selection + incoming and outgoing
neighbors
- "in-tree" = selection + repeated incoming
neighbors
- "out-tree" = selection + repeated outgoing
neighbors
- "('in', n)" = selection + incoming neighbors
within distance n
- "('out', n)" = selection + outgoing neighbors
within distance n
- "('both', n)" = selection + incoming and outgoing
neighbors within distance n
-fm filter_mode filter mode deciding how to use the selection
- "include" = include selected Atoms to output
- "exclude" = exclude selected Atoms from output
-gua graph is unannotated, no properties are added
-gud graph is undirected, no arrows are drawn
-nl node_label text shown below node
-nc node_color
-no node_opacity
-ns node_size
-nsh node_shape
-nbc node_border_color
-nbs node_border_size
-nlc node_label_color
-nls node_label_size
-nh node_hover text shown when hovering with mouse over a node
-ncl node_click text shown in div below plot when clicking on a node
-ni node_image image drawn inside node, URL or data URL
-np node_properties other annotations for a node given as key/val dict
-el edge_label text shown in midpoint of edge
-ec edge_color
-eo edge_opacity
-es edge_size
-elc edge_label_color
-els edge_label_size
-eh edge_hover text shown when hovering with mouse over an edge
-ecl edge_click text shown in div below plot when clicking on an edge
--kwargs [KWARGS [KWARGS ...]]
optional keyword arguments forwarded to plot function
###Markdown
Use it minimalistically

- `-i`: An **input file** (.scm) to load an AtomSpace from.
- `-o`: An optional **output file**.

The following three cases are possible.

1. No output file: Creates a graph visualization and **displays it** in the default webbrowser.
###Code
!mevis -i moses.scm
###Output
_____no_output_____
###Markdown
2. Output file ending with `.html`: Creates a graph visualization and stores it in an **HTML file**. (The output below shows the error raised when the target file already exists; `--force` is covered further down.)
###Code
!mevis -i moses.scm -o moses.html
###Output
argparse.ArgumentTypeError: The provided output_filepath "moses.html" already exists. You can use --force to overwrite it.
###Markdown
3. Output file ending with `.gml`, `.gml.gz` or `.gml.bz2`: Creates a graph representation and stores it in a **GML file** that can be **compressed with gzip or bzip2** to reduce its size considerably. A quick check of the exported files is sketched after the cell below.
###Code
!mevis -i moses.scm -o moses.gml
!mevis -i moses.scm -o moses.gml.gz
!mevis -i moses.scm -o moses.gml.bz2
###Output
_____no_output_____
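###Markdown
A quick check of the exports (a minimal sketch, assuming the three files above were created): networkx can read the GML files back and transparently decompresses the `.gz` and `.bz2` variants, so node and edge counts and file sizes can be compared.
###Code
# Hypothetical check of the exported GML files from the previous cell
import os
import networkx as nx

for path in ["moses.gml", "moses.gml.gz", "moses.gml.bz2"]:
    g = nx.read_gml(path, label="id")  # .gz/.bz2 are decompressed based on the file extension
    print(path, g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges,",
          os.path.getsize(path), "bytes")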
###Markdown
Show status messages and overwrite existing files

- `--verbose`: If provided, messages are printed about individual steps and their intermediate results.
- `--force`: If provided, output files are overwritten if they already exist.
###Code
!mevis -i moses.scm -o moses.html --force --verbose
###Output
Importing an Atomspace from file "moses.scm".
###Markdown
Choose another backend

- `-b`: If provided, the chosen backend is used to create the visualization. For available options, please look at the help text.
###Code
!mevis -i moses.scm -o moses_d3.html -b d3
!mevis -i moses.scm -o moses_three.html -b three
!mevis -i moses.scm -o moses_vis.html -b vis
###Output
_____no_output_____
###Markdown
Calculate a layout

- `-l`: If provided, the chosen method is used for calculating x and y coordinates for nodes. For available options, please look at the help text.
###Code
!mevis -i moses.scm -o moses_layout1.html -l dot --verbose
!mevis -i moses.scm -o moses_layout2.html -l neato
!mevis -i moses.scm -o moses_layout3.html -l twopi
!mevis -i moses.scm -o moses_layout4.html -l bipartite
!mevis -i moses.scm -o moses_layout5.html -l shell
###Output
_____no_output_____
###Markdown
Filter the AtomSpace

- `-ft`: Filter target which selects Atoms. There are three options on the command line:
  - A name that is compared against Atom name and type name.
  - A list of multiple names.
  - A lambda function that gets an Atom as input and must return True or False to select or deselect it.
- `-fc`: Filter context which can expand the selection.
- `-fm`: Filter mode which decides whether the selection is included in or excluded from the result.

Some possible targets
###Code
!mevis -i moses.scm -o moses_filtered1.html -ft PredicateNode --verbose
!mevis -i moses.scm -o moses_filtered2.html -ft "['AndLink', 'OrLink', 'NotLink']"
!mevis -i moses.scm -o moses_filtered3.html -ft "lambda atom: atom.is_link()"
###Output
Importing an Atomspace from file "moses.scm".
###Markdown
Some possible contexts
###Code
!mevis -i moses.scm -o moses_filtered4.html -ft PredicateNode -fc both
!mevis -i moses.scm -o moses_filtered5.html -ft PredicateNode -fc "('in', 2)"
!mevis -i moses.scm -o moses_filtered6.html -ft OrLink -fc "out-tree"
###Output
_____no_output_____
###Markdown
Two possible modes
###Code
!mevis -i moses.scm -o moses_filtered7.html -ft PredicateNode -fc both -fm include
!mevis -i moses.scm -o moses_filtered8.html -ft PredicateNode -fc both -fm exclude
###Output
_____no_output_____
###Markdown
Annotate the graph to modify visual elements
###Code
# Create an unannotated graph
!mevis -i moses.scm -o moses_unannotated.html -gua
# Create an undirected graph and set its node color, node size, edge color, edge size with constants
!mevis -i moses.scm -o moses_annotated1.html -gud -nc blue -ns 20 -ec blue -es 4
# Set node color, node size, edge color, edge size with lambda functions
!mevis -i moses.scm -o moses_annotated2.html \
-nc "lambda atom: '#33339a' if atom.is_link() else 'green'" \
-ec "lambda atom1, atom2: '#33339a' if atom2.is_link() else 'green'" \
-ns "lambda atom: 12 if atom.is_node() else 18" \
-es "lambda atom1, atom2: 1 if atom2.is_node() else 3"
# Adjust all possible annotations (see advanced.ipynb for the same example in Python instead of Bash)
!mevis -i moses.scm -o moses_annotated3.html -f \
-b d3 \
-gud \
-nl "lambda atom: atom.name if atom.is_node() else atom.type_name.replace('Link', '')" \
-nc "lambda atom: 'red' if atom.is_node() \
else 'blue' if atom.type_name == 'AndLink' \
else 'green' if atom.type_name == 'OrLink' \
else 'orange'" \
-no 0.9 \
-ns "lambda atom: 20 if atom.type_name in ['AndLink', 'OrLink'] else 12" \
-nsh "lambda atom: 'rectangle' if atom.type_name == 'AndLink' \
else 'hexagon' if atom.type_name == 'OrLink' \
else 'circle'" \
-nbc white \
-nbs 2.0 \
-nlc "lambda atom: 'red' if atom.is_node() \
else 'blue' if atom.type_name == 'AndLink' \
else 'green' if atom.type_name == 'OrLink' \
else 'orange'" \
-nls 12.0 \
-nh "lambda atom: 'A {} with Atomese code:\n{}'.format(atom.type_name, atom.short_string())" \
-ncl "lambda atom: atom.short_string()" \
-np "lambda atom: dict(x=-300) if atom.is_node() else dict(x=-300+200*len(atom.out))"\
-el "lambda atom1, atom2: '{}{}'.format(atom1.type_name[0], atom2.type_name[0])" \
-ec "lambda atom1, atom2: 'lightgray' if atom2.is_node() \
else 'red' if atom1.is_node() \
else 'blue' if atom1.type_name == 'AndLink' \
else 'green' if atom1.type_name == 'OrLink' \
else 'orange'" \
-eo 0.5 \
-es "lambda atom1, atom2: 5 if atom2.is_node() else 2.5" \
-elc "lambda atom1, atom2: 'red' if atom1.is_node() \
else 'blue' if atom1.type_name == 'AndLink' \
else 'green' if atom1.type_name == 'OrLink' \
else 'orange'" \
-els 8 \
-eh "lambda atom1, atom2: '{} to {}'.format(atom1.type_name, atom2.type_name)" \
-ecl "lambda atom1, atom2: 'Edge connects {} with {}'.format(atom1.type_name, atom2.type_name)" \
--kwargs edge_curvature=0.2 show_edge_label=True many_body_force_strength=-1000
###Output
_____no_output_____ |
doc/keras_mnist.ipynb | ###Markdown
Build a Keras Model
###Code
# shameless copy of keras example : https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py
import numpy as np
import keras
from keras import backend as K
from keras.datasets import mnist
from keras.layers import Conv2D, Dense, Dropout, Flatten, MaxPooling2D
from keras.models import Sequential

# use only a 10% sample of MNIST to keep training fast
sample = True
batch_size = 128
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
if(sample):
indices = np.random.choice(x_train.shape[0], x_train.shape[0] // 10, replace=False)
x_train = x_train[indices, : , :, :]
y_train = y_train[indices]
indices = np.random.choice(x_test.shape[0], x_test.shape[0] // 10, replace=False)
x_test = x_test[indices, : , :, :]
y_test = y_test[indices]
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
def create_model():
model = Sequential()
model.add(Conv2D(8, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
# model.add(Conv2D(4, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
return model
from keras.wrappers.scikit_learn import KerasClassifier
clf = KerasClassifier(build_fn=create_model, epochs=epochs, batch_size=batch_size, verbose=1)
clf.fit(x_train, y_train ,
batch_size=batch_size,
epochs=12,
verbose=1,
validation_data=(x_test, y_test))
print(x_test.shape)
# predict the class of the first test image
preds = clf.predict(x_test[0, :].reshape(1, 28, 28, 1))
print(preds)
###Output
(1000, 28, 28, 1)
1/1 [==============================] - 0s 2ms/step
[2]
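###Markdown
A rough accuracy check on the sampled test set (a minimal sketch, assuming the cell above has been run): `y_test` is one-hot encoded, so it is converted back to class indices before comparing it with the wrapper's integer predictions.
###Code
# Hypothetical accuracy check on the sampled test set
import numpy as np

preds_all = clf.predict(x_test)          # integer class predictions from KerasClassifier
true_labels = np.argmax(y_test, axis=1)  # undo the one-hot encoding
print("sampled test accuracy:", (preds_all == true_labels).mean())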
###Markdown
Generate SQL Code from the Model
###Code
import json, requests, base64, dill as pickle, sys
sys.setrecursionlimit(200000)
pickle.settings['recurse'] = False
# No luck with the web service: pickling TensorFlow and/or Keras objects does not seem to be a priority;
# both projects have many open GitHub issues matching the keyword "pickle".
def test_ws_sql_gen(pickle_data):
WS_URL="http://localhost:1888/model"
b64_data = base64.b64encode(pickle_data).decode('utf-8')
data={"Name":"model1", "PickleData":b64_data , "SQLDialect":"postgresql"}
r = requests.post(WS_URL, json=data)
print(r.__dict__)
content = r.json()
# print(content)
lSQL = content["model"]["SQLGenrationResult"][0]["SQL"]
return lSQL;
def test_sql_gen(keras_regressor, metadata):
    import sklearn2sql.PyCodeGenerator as codegen
    cg1 = codegen.cAbstractCodeGenerator()
    lSQL = cg1.generateCodeWithMetadata(keras_regressor, metadata, dsn=None, dialect="postgresql")
    return lSQL[0]
# commented .. see above
# pickle_data = pickle.dumps(clf)
# lSQL = test_ws_sql_gen(pickle_data)
# print(lSQL[0:2000])
lMetaData = {}
NC = x_test.shape[1] * x_test.shape[2] * x_test.shape[3]
lMetaData['features'] = ["X_" + str(x+1) for x in range(0 , NC)]
lMetaData["targets"] = ['TGT']
lMetaData['primary_key'] = 'KEY'
lMetaData['table'] = 'mnist'
lSQL = test_sql_gen(clf , lMetaData)
print(lSQL[0:50000])
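# Hypothetical next step, left commented out so the cell's output is unchanged: materialize the
# flattened test images as the "mnist" table the query expects and score them in the database.
# The connection string and the pandas/sqlalchemy usage below are assumptions, not part of sklearn2sql.
# import pandas as pd
# from sqlalchemy import create_engine
# engine = create_engine("postgresql://user:password@localhost/dbname")
# df = pd.DataFrame(x_test.reshape(x_test.shape[0], -1), columns=lMetaData['features'])
# df[lMetaData['primary_key']] = range(df.shape[0])
# df.to_sql(lMetaData['table'], engine, if_exists="replace", index=False)
# sql_scores = pd.read_sql(lSQL, engine)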
###Output
WITH keras_input AS
(SELECT "ADS"."KEY" AS "KEY", "ADS"."X_1" AS "X_1", "ADS"."X_2" AS "X_2", "ADS"."X_3" AS "X_3", "ADS"."X_4" AS "X_4", "ADS"."X_5" AS "X_5", "ADS"."X_6" AS "X_6", "ADS"."X_7" AS "X_7", "ADS"."X_8" AS "X_8", "ADS"."X_9" AS "X_9", "ADS"."X_10" AS "X_10", "ADS"."X_11" AS "X_11", "ADS"."X_12" AS "X_12", "ADS"."X_13" AS "X_13", "ADS"."X_14" AS "X_14", "ADS"."X_15" AS "X_15", "ADS"."X_16" AS "X_16", "ADS"."X_17" AS "X_17", "ADS"."X_18" AS "X_18", "ADS"."X_19" AS "X_19", "ADS"."X_20" AS "X_20", "ADS"."X_21" AS "X_21", "ADS"."X_22" AS "X_22", "ADS"."X_23" AS "X_23", "ADS"."X_24" AS "X_24", "ADS"."X_25" AS "X_25", "ADS"."X_26" AS "X_26", "ADS"."X_27" AS "X_27", "ADS"."X_28" AS "X_28", "ADS"."X_29" AS "X_29", "ADS"."X_30" AS "X_30", "ADS"."X_31" AS "X_31", "ADS"."X_32" AS "X_32", "ADS"."X_33" AS "X_33", "ADS"."X_34" AS "X_34", "ADS"."X_35" AS "X_35", "ADS"."X_36" AS "X_36", "ADS"."X_37" AS "X_37", "ADS"."X_38" AS "X_38", "ADS"."X_39" AS "X_39", "ADS"."X_40" AS "X_40", "ADS"."X_41" AS "X_41", "ADS"."X_42" AS "X_42", "ADS"."X_43" AS "X_43", "ADS"."X_44" AS "X_44", "ADS"."X_45" AS "X_45", "ADS"."X_46" AS "X_46", "ADS"."X_47" AS "X_47", "ADS"."X_48" AS "X_48", "ADS"."X_49" AS "X_49", "ADS"."X_50" AS "X_50", "ADS"."X_51" AS "X_51", "ADS"."X_52" AS "X_52", "ADS"."X_53" AS "X_53", "ADS"."X_54" AS "X_54", "ADS"."X_55" AS "X_55", "ADS"."X_56" AS "X_56", "ADS"."X_57" AS "X_57", "ADS"."X_58" AS "X_58", "ADS"."X_59" AS "X_59", "ADS"."X_60" AS "X_60", "ADS"."X_61" AS "X_61", "ADS"."X_62" AS "X_62", "ADS"."X_63" AS "X_63", "ADS"."X_64" AS "X_64", "ADS"."X_65" AS "X_65", "ADS"."X_66" AS "X_66", "ADS"."X_67" AS "X_67", "ADS"."X_68" AS "X_68", "ADS"."X_69" AS "X_69", "ADS"."X_70" AS "X_70", "ADS"."X_71" AS "X_71", "ADS"."X_72" AS "X_72", "ADS"."X_73" AS "X_73", "ADS"."X_74" AS "X_74", "ADS"."X_75" AS "X_75", "ADS"."X_76" AS "X_76", "ADS"."X_77" AS "X_77", "ADS"."X_78" AS "X_78", "ADS"."X_79" AS "X_79", "ADS"."X_80" AS "X_80", "ADS"."X_81" AS "X_81", "ADS"."X_82" AS "X_82", "ADS"."X_83" AS "X_83", "ADS"."X_84" AS "X_84", "ADS"."X_85" AS "X_85", "ADS"."X_86" AS "X_86", "ADS"."X_87" AS "X_87", "ADS"."X_88" AS "X_88", "ADS"."X_89" AS "X_89", "ADS"."X_90" AS "X_90", "ADS"."X_91" AS "X_91", "ADS"."X_92" AS "X_92", "ADS"."X_93" AS "X_93", "ADS"."X_94" AS "X_94", "ADS"."X_95" AS "X_95", "ADS"."X_96" AS "X_96", "ADS"."X_97" AS "X_97", "ADS"."X_98" AS "X_98", "ADS"."X_99" AS "X_99", "ADS"."X_100" AS "X_100", "ADS"."X_101" AS "X_101", "ADS"."X_102" AS "X_102", "ADS"."X_103" AS "X_103", "ADS"."X_104" AS "X_104", "ADS"."X_105" AS "X_105", "ADS"."X_106" AS "X_106", "ADS"."X_107" AS "X_107", "ADS"."X_108" AS "X_108", "ADS"."X_109" AS "X_109", "ADS"."X_110" AS "X_110", "ADS"."X_111" AS "X_111", "ADS"."X_112" AS "X_112", "ADS"."X_113" AS "X_113", "ADS"."X_114" AS "X_114", "ADS"."X_115" AS "X_115", "ADS"."X_116" AS "X_116", "ADS"."X_117" AS "X_117", "ADS"."X_118" AS "X_118", "ADS"."X_119" AS "X_119", "ADS"."X_120" AS "X_120", "ADS"."X_121" AS "X_121", "ADS"."X_122" AS "X_122", "ADS"."X_123" AS "X_123", "ADS"."X_124" AS "X_124", "ADS"."X_125" AS "X_125", "ADS"."X_126" AS "X_126", "ADS"."X_127" AS "X_127", "ADS"."X_128" AS "X_128", "ADS"."X_129" AS "X_129", "ADS"."X_130" AS "X_130", "ADS"."X_131" AS "X_131", "ADS"."X_132" AS "X_132", "ADS"."X_133" AS "X_133", "ADS"."X_134" AS "X_134", "ADS"."X_135" AS "X_135", "ADS"."X_136" AS "X_136", "ADS"."X_137" AS "X_137", "ADS"."X_138" AS "X_138", "ADS"."X_139" AS "X_139", "ADS"."X_140" AS "X_140", "ADS"."X_141" AS "X_141", "ADS"."X_142" AS "X_142", "ADS"."X_143" AS "X_143", "ADS"."X_144" AS 
"X_144", "ADS"."X_145" AS "X_145", "ADS"."X_146" AS "X_146", "ADS"."X_147" AS "X_147", "ADS"."X_148" AS "X_148", "ADS"."X_149" AS "X_149", "ADS"."X_150" AS "X_150", "ADS"."X_151" AS "X_151", "ADS"."X_152" AS "X_152", "ADS"."X_153" AS "X_153", "ADS"."X_154" AS "X_154", "ADS"."X_155" AS "X_155", "ADS"."X_156" AS "X_156", "ADS"."X_157" AS "X_157", "ADS"."X_158" AS "X_158", "ADS"."X_159" AS "X_159", "ADS"."X_160" AS "X_160", "ADS"."X_161" AS "X_161", "ADS"."X_162" AS "X_162", "ADS"."X_163" AS "X_163", "ADS"."X_164" AS "X_164", "ADS"."X_165" AS "X_165", "ADS"."X_166" AS "X_166", "ADS"."X_167" AS "X_167", "ADS"."X_168" AS "X_168", "ADS"."X_169" AS "X_169", "ADS"."X_170" AS "X_170", "ADS"."X_171" AS "X_171", "ADS"."X_172" AS "X_172", "ADS"."X_173" AS "X_173", "ADS"."X_174" AS "X_174", "ADS"."X_175" AS "X_175", "ADS"."X_176" AS "X_176", "ADS"."X_177" AS "X_177", "ADS"."X_178" AS "X_178", "ADS"."X_179" AS "X_179", "ADS"."X_180" AS "X_180", "ADS"."X_181" AS "X_181", "ADS"."X_182" AS "X_182", "ADS"."X_183" AS "X_183", "ADS"."X_184" AS "X_184", "ADS"."X_185" AS "X_185", "ADS"."X_186" AS "X_186", "ADS"."X_187" AS "X_187", "ADS"."X_188" AS "X_188", "ADS"."X_189" AS "X_189", "ADS"."X_190" AS "X_190", "ADS"."X_191" AS "X_191", "ADS"."X_192" AS "X_192", "ADS"."X_193" AS "X_193", "ADS"."X_194" AS "X_194", "ADS"."X_195" AS "X_195", "ADS"."X_196" AS "X_196", "ADS"."X_197" AS "X_197", "ADS"."X_198" AS "X_198", "ADS"."X_199" AS "X_199", "ADS"."X_200" AS "X_200", "ADS"."X_201" AS "X_201", "ADS"."X_202" AS "X_202", "ADS"."X_203" AS "X_203", "ADS"."X_204" AS "X_204", "ADS"."X_205" AS "X_205", "ADS"."X_206" AS "X_206", "ADS"."X_207" AS "X_207", "ADS"."X_208" AS "X_208", "ADS"."X_209" AS "X_209", "ADS"."X_210" AS "X_210", "ADS"."X_211" AS "X_211", "ADS"."X_212" AS "X_212", "ADS"."X_213" AS "X_213", "ADS"."X_214" AS "X_214", "ADS"."X_215" AS "X_215", "ADS"."X_216" AS "X_216", "ADS"."X_217" AS "X_217", "ADS"."X_218" AS "X_218", "ADS"."X_219" AS "X_219", "ADS"."X_220" AS "X_220", "ADS"."X_221" AS "X_221", "ADS"."X_222" AS "X_222", "ADS"."X_223" AS "X_223", "ADS"."X_224" AS "X_224", "ADS"."X_225" AS "X_225", "ADS"."X_226" AS "X_226", "ADS"."X_227" AS "X_227", "ADS"."X_228" AS "X_228", "ADS"."X_229" AS "X_229", "ADS"."X_230" AS "X_230", "ADS"."X_231" AS "X_231", "ADS"."X_232" AS "X_232", "ADS"."X_233" AS "X_233", "ADS"."X_234" AS "X_234", "ADS"."X_235" AS "X_235", "ADS"."X_236" AS "X_236", "ADS"."X_237" AS "X_237", "ADS"."X_238" AS "X_238", "ADS"."X_239" AS "X_239", "ADS"."X_240" AS "X_240", "ADS"."X_241" AS "X_241", "ADS"."X_242" AS "X_242", "ADS"."X_243" AS "X_243", "ADS"."X_244" AS "X_244", "ADS"."X_245" AS "X_245", "ADS"."X_246" AS "X_246", "ADS"."X_247" AS "X_247", "ADS"."X_248" AS "X_248", "ADS"."X_249" AS "X_249", "ADS"."X_250" AS "X_250", "ADS"."X_251" AS "X_251", "ADS"."X_252" AS "X_252", "ADS"."X_253" AS "X_253", "ADS"."X_254" AS "X_254", "ADS"."X_255" AS "X_255", "ADS"."X_256" AS "X_256", "ADS"."X_257" AS "X_257", "ADS"."X_258" AS "X_258", "ADS"."X_259" AS "X_259", "ADS"."X_260" AS "X_260", "ADS"."X_261" AS "X_261", "ADS"."X_262" AS "X_262", "ADS"."X_263" AS "X_263", "ADS"."X_264" AS "X_264", "ADS"."X_265" AS "X_265", "ADS"."X_266" AS "X_266", "ADS"."X_267" AS "X_267", "ADS"."X_268" AS "X_268", "ADS"."X_269" AS "X_269", "ADS"."X_270" AS "X_270", "ADS"."X_271" AS "X_271", "ADS"."X_272" AS "X_272", "ADS"."X_273" AS "X_273", "ADS"."X_274" AS "X_274", "ADS"."X_275" AS "X_275", "ADS"."X_276" AS "X_276", "ADS"."X_277" AS "X_277", "ADS"."X_278" AS "X_278", "ADS"."X_279" AS "X_279", "ADS"."X_280" AS "X_280", 
"ADS"."X_281" AS "X_281", "ADS"."X_282" AS "X_282", "ADS"."X_283" AS "X_283", "ADS"."X_284" AS "X_284", "ADS"."X_285" AS "X_285", "ADS"."X_286" AS "X_286", "ADS"."X_287" AS "X_287", "ADS"."X_288" AS "X_288", "ADS"."X_289" AS "X_289", "ADS"."X_290" AS "X_290", "ADS"."X_291" AS "X_291", "ADS"."X_292" AS "X_292", "ADS"."X_293" AS "X_293", "ADS"."X_294" AS "X_294", "ADS"."X_295" AS "X_295", "ADS"."X_296" AS "X_296", "ADS"."X_297" AS "X_297", "ADS"."X_298" AS "X_298", "ADS"."X_299" AS "X_299", "ADS"."X_300" AS "X_300", "ADS"."X_301" AS "X_301", "ADS"."X_302" AS "X_302", "ADS"."X_303" AS "X_303", "ADS"."X_304" AS "X_304", "ADS"."X_305" AS "X_305", "ADS"."X_306" AS "X_306", "ADS"."X_307" AS "X_307", "ADS"."X_308" AS "X_308", "ADS"."X_309" AS "X_309", "ADS"."X_310" AS "X_310", "ADS"."X_311" AS "X_311", "ADS"."X_312" AS "X_312", "ADS"."X_313" AS "X_313", "ADS"."X_314" AS "X_314", "ADS"."X_315" AS "X_315", "ADS"."X_316" AS "X_316", "ADS"."X_317" AS "X_317", "ADS"."X_318" AS "X_318", "ADS"."X_319" AS "X_319", "ADS"."X_320" AS "X_320", "ADS"."X_321" AS "X_321", "ADS"."X_322" AS "X_322", "ADS"."X_323" AS "X_323", "ADS"."X_324" AS "X_324", "ADS"."X_325" AS "X_325", "ADS"."X_326" AS "X_326", "ADS"."X_327" AS "X_327", "ADS"."X_328" AS "X_328", "ADS"."X_329" AS "X_329", "ADS"."X_330" AS "X_330", "ADS"."X_331" AS "X_331", "ADS"."X_332" AS "X_332", "ADS"."X_333" AS "X_333", "ADS"."X_334" AS "X_334", "ADS"."X_335" AS "X_335", "ADS"."X_336" AS "X_336", "ADS"."X_337" AS "X_337", "ADS"."X_338" AS "X_338", "ADS"."X_339" AS "X_339", "ADS"."X_340" AS "X_340", "ADS"."X_341" AS "X_341", "ADS"."X_342" AS "X_342", "ADS"."X_343" AS "X_343", "ADS"."X_344" AS "X_344", "ADS"."X_345" AS "X_345", "ADS"."X_346" AS "X_346", "ADS"."X_347" AS "X_347", "ADS"."X_348" AS "X_348", "ADS"."X_349" AS "X_349", "ADS"."X_350" AS "X_350", "ADS"."X_351" AS "X_351", "ADS"."X_352" AS "X_352", "ADS"."X_353" AS "X_353", "ADS"."X_354" AS "X_354", "ADS"."X_355" AS "X_355", "ADS"."X_356" AS "X_356", "ADS"."X_357" AS "X_357", "ADS"."X_358" AS "X_358", "ADS"."X_359" AS "X_359", "ADS"."X_360" AS "X_360", "ADS"."X_361" AS "X_361", "ADS"."X_362" AS "X_362", "ADS"."X_363" AS "X_363", "ADS"."X_364" AS "X_364", "ADS"."X_365" AS "X_365", "ADS"."X_366" AS "X_366", "ADS"."X_367" AS "X_367", "ADS"."X_368" AS "X_368", "ADS"."X_369" AS "X_369", "ADS"."X_370" AS "X_370", "ADS"."X_371" AS "X_371", "ADS"."X_372" AS "X_372", "ADS"."X_373" AS "X_373", "ADS"."X_374" AS "X_374", "ADS"."X_375" AS "X_375", "ADS"."X_376" AS "X_376", "ADS"."X_377" AS "X_377", "ADS"."X_378" AS "X_378", "ADS"."X_379" AS "X_379", "ADS"."X_380" AS "X_380", "ADS"."X_381" AS "X_381", "ADS"."X_382" AS "X_382", "ADS"."X_383" AS "X_383", "ADS"."X_384" AS "X_384", "ADS"."X_385" AS "X_385", "ADS"."X_386" AS "X_386", "ADS"."X_387" AS "X_387", "ADS"."X_388" AS "X_388", "ADS"."X_389" AS "X_389", "ADS"."X_390" AS "X_390", "ADS"."X_391" AS "X_391", "ADS"."X_392" AS "X_392", "ADS"."X_393" AS "X_393", "ADS"."X_394" AS "X_394", "ADS"."X_395" AS "X_395", "ADS"."X_396" AS "X_396", "ADS"."X_397" AS "X_397", "ADS"."X_398" AS "X_398", "ADS"."X_399" AS "X_399", "ADS"."X_400" AS "X_400", "ADS"."X_401" AS "X_401", "ADS"."X_402" AS "X_402", "ADS"."X_403" AS "X_403", "ADS"."X_404" AS "X_404", "ADS"."X_405" AS "X_405", "ADS"."X_406" AS "X_406", "ADS"."X_407" AS "X_407", "ADS"."X_408" AS "X_408", "ADS"."X_409" AS "X_409", "ADS"."X_410" AS "X_410", "ADS"."X_411" AS "X_411", "ADS"."X_412" AS "X_412", "ADS"."X_413" AS "X_413", "ADS"."X_414" AS "X_414", "ADS"."X_415" AS "X_415", "ADS"."X_416" AS "X_416", "ADS"."X_417" AS 
"X_417", "ADS"."X_418" AS "X_418", "ADS"."X_419" AS "X_419", "ADS"."X_420" AS "X_420", "ADS"."X_421" AS "X_421", "ADS"."X_422" AS "X_422", "ADS"."X_423" AS "X_423", "ADS"."X_424" AS "X_424", "ADS"."X_425" AS "X_425", "ADS"."X_426" AS "X_426", "ADS"."X_427" AS "X_427", "ADS"."X_428" AS "X_428", "ADS"."X_429" AS "X_429", "ADS"."X_430" AS "X_430", "ADS"."X_431" AS "X_431", "ADS"."X_432" AS "X_432", "ADS"."X_433" AS "X_433", "ADS"."X_434" AS "X_434", "ADS"."X_435" AS "X_435", "ADS"."X_436" AS "X_436", "ADS"."X_437" AS "X_437", "ADS"."X_438" AS "X_438", "ADS"."X_439" AS "X_439", "ADS"."X_440" AS "X_440", "ADS"."X_441" AS "X_441", "ADS"."X_442" AS "X_442", "ADS"."X_443" AS "X_443", "ADS"."X_444" AS "X_444", "ADS"."X_445" AS "X_445", "ADS"."X_446" AS "X_446", "ADS"."X_447" AS "X_447", "ADS"."X_448" AS "X_448", "ADS"."X_449" AS "X_449", "ADS"."X_450" AS "X_450", "ADS"."X_451" AS "X_451", "ADS"."X_452" AS "X_452", "ADS"."X_453" AS "X_453", "ADS"."X_454" AS "X_454", "ADS"."X_455" AS "X_455", "ADS"."X_456" AS "X_456", "ADS"."X_457" AS "X_457", "ADS"."X_458" AS "X_458", "ADS"."X_459" AS "X_459", "ADS"."X_460" AS "X_460", "ADS"."X_461" AS "X_461", "ADS"."X_462" AS "X_462", "ADS"."X_463" AS "X_463", "ADS"."X_464" AS "X_464", "ADS"."X_465" AS "X_465", "ADS"."X_466" AS "X_466", "ADS"."X_467" AS "X_467", "ADS"."X_468" AS "X_468", "ADS"."X_469" AS "X_469", "ADS"."X_470" AS "X_470", "ADS"."X_471" AS "X_471", "ADS"."X_472" AS "X_472", "ADS"."X_473" AS "X_473", "ADS"."X_474" AS "X_474", "ADS"."X_475" AS "X_475", "ADS"."X_476" AS "X_476", "ADS"."X_477" AS "X_477", "ADS"."X_478" AS "X_478", "ADS"."X_479" AS "X_479", "ADS"."X_480" AS "X_480", "ADS"."X_481" AS "X_481", "ADS"."X_482" AS "X_482", "ADS"."X_483" AS "X_483", "ADS"."X_484" AS "X_484", "ADS"."X_485" AS "X_485", "ADS"."X_486" AS "X_486", "ADS"."X_487" AS "X_487", "ADS"."X_488" AS "X_488", "ADS"."X_489" AS "X_489", "ADS"."X_490" AS "X_490", "ADS"."X_491" AS "X_491", "ADS"."X_492" AS "X_492", "ADS"."X_493" AS "X_493", "ADS"."X_494" AS "X_494", "ADS"."X_495" AS "X_495", "ADS"."X_496" AS "X_496", "ADS"."X_497" AS "X_497", "ADS"."X_498" AS "X_498", "ADS"."X_499" AS "X_499", "ADS"."X_500" AS "X_500", "ADS"."X_501" AS "X_501", "ADS"."X_502" AS "X_502", "ADS"."X_503" AS "X_503", "ADS"."X_504" AS "X_504", "ADS"."X_505" AS "X_505", "ADS"."X_506" AS "X_506", "ADS"."X_507" AS "X_507", "ADS"."X_508" AS "X_508", "ADS"."X_509" AS "X_509", "ADS"."X_510" AS "X_510", "ADS"."X_511" AS "X_511", "ADS"."X_512" AS "X_512", "ADS"."X_513" AS "X_513", "ADS"."X_514" AS "X_514", "ADS"."X_515" AS "X_515", "ADS"."X_516" AS "X_516", "ADS"."X_517" AS "X_517", "ADS"."X_518" AS "X_518", "ADS"."X_519" AS "X_519", "ADS"."X_520" AS "X_520", "ADS"."X_521" AS "X_521", "ADS"."X_522" AS "X_522", "ADS"."X_523" AS "X_523", "ADS"."X_524" AS "X_524", "ADS"."X_525" AS "X_525", "ADS"."X_526" AS "X_526", "ADS"."X_527" AS "X_527", "ADS"."X_528" AS "X_528", "ADS"."X_529" AS "X_529", "ADS"."X_530" AS "X_530", "ADS"."X_531" AS "X_531", "ADS"."X_532" AS "X_532", "ADS"."X_533" AS "X_533", "ADS"."X_534" AS "X_534", "ADS"."X_535" AS "X_535", "ADS"."X_536" AS "X_536", "ADS"."X_537" AS "X_537", "ADS"."X_538" AS "X_538", "ADS"."X_539" AS "X_539", "ADS"."X_540" AS "X_540", "ADS"."X_541" AS "X_541", "ADS"."X_542" AS "X_542", "ADS"."X_543" AS "X_543", "ADS"."X_544" AS "X_544", "ADS"."X_545" AS "X_545", "ADS"."X_546" AS "X_546", "ADS"."X_547" AS "X_547", "ADS"."X_548" AS "X_548", "ADS"."X_549" AS "X_549", "ADS"."X_550" AS "X_550", "ADS"."X_551" AS "X_551", "ADS"."X_552" AS "X_552", "ADS"."X_553" AS "X_553", 
"ADS"."X_554" AS "X_554", "ADS"."X_555" AS "X_555", "ADS"."X_556" AS "X_556", "ADS"."X_557" AS "X_557", "ADS"."X_558" AS "X_558", "ADS"."X_559" AS "X_559", "ADS"."X_560" AS "X_560", "ADS"."X_561" AS "X_561", "ADS"."X_562" AS "X_562", "ADS"."X_563" AS "X_563", "ADS"."X_564" AS "X_564", "ADS"."X_565" AS "X_565", "ADS"."X_566" AS "X_566", "ADS"."X_567" AS "X_567", "ADS"."X_568" AS "X_568", "ADS"."X_569" AS "X_569", "ADS"."X_570" AS "X_570", "ADS"."X_571" AS "X_571", "ADS"."X_572" AS "X_572", "ADS"."X_573" AS "X_573", "ADS"."X_574" AS "X_574", "ADS"."X_575" AS "X_575", "ADS"."X_576" AS "X_576", "ADS"."X_577" AS "X_577", "ADS"."X_578" AS "X_578", "ADS"."X_579" AS "X_579", "ADS"."X_580" AS "X_580", "ADS"."X_581" AS "X_581", "ADS"."X_582" AS "X_582", "ADS"."X_583" AS "X_583", "ADS"."X_584" AS "X_584", "ADS"."X_585" AS "X_585", "ADS"."X_586" AS "X_586", "ADS"."X_587" AS "X_587", "ADS"."X_588" AS "X_588", "ADS"."X_589" AS "X_589", "ADS"."X_590" AS "X_590", "ADS"."X_591" AS "X_591", "ADS"."X_592" AS "X_592", "ADS"."X_593" AS "X_593", "ADS"."X_594" AS "X_594", "ADS"."X_595" AS "X_595", "ADS"."X_596" AS "X_596", "ADS"."X_597" AS "X_597", "ADS"."X_598" AS "X_598", "ADS"."X_599" AS "X_599", "ADS"."X_600" AS "X_600", "ADS"."X_601" AS "X_601", "ADS"."X_602" AS "X_602", "ADS"."X_603" AS "X_603", "ADS"."X_604" AS "X_604", "ADS"."X_605" AS "X_605", "ADS"."X_606" AS "X_606", "ADS"."X_607" AS "X_607", "ADS"."X_608" AS "X_608", "ADS"."X_609" AS "X_609", "ADS"."X_610" AS "X_610", "ADS"."X_611" AS "X_611", "ADS"."X_612" AS "X_612", "ADS"."X_613" AS "X_613", "ADS"."X_614" AS "X_614", "ADS"."X_615" AS "X_615", "ADS"."X_616" AS "X_616", "ADS"."X_617" AS "X_617", "ADS"."X_618" AS "X_618", "ADS"."X_619" AS "X_619", "ADS"."X_620" AS "X_620", "ADS"."X_621" AS "X_621", "ADS"."X_622" AS "X_622", "ADS"."X_623" AS "X_623", "ADS"."X_624" AS "X_624", "ADS"."X_625" AS "X_625", "ADS"."X_626" AS "X_626", "ADS"."X_627" AS "X_627", "ADS"."X_628" AS "X_628", "ADS"."X_629" AS "X_629", "ADS"."X_630" AS "X_630", "ADS"."X_631" AS "X_631", "ADS"."X_632" AS "X_632", "ADS"."X_633" AS "X_633", "ADS"."X_634" AS "X_634", "ADS"."X_635" AS "X_635", "ADS"."X_636" AS "X_636", "ADS"."X_637" AS "X_637", "ADS"."X_638" AS "X_638", "ADS"."X_639" AS "X_639", "ADS"."X_640" AS "X_640", "ADS"."X_641" AS "X_641", "ADS"."X_642" AS "X_642", "ADS"."X_643" AS "X_643", "ADS"."X_644" AS "X_644", "ADS"."X_645" AS "X_645", "ADS"."X_646" AS "X_646", "ADS"."X_647" AS "X_647", "ADS"."X_648" AS "X_648", "ADS"."X_649" AS "X_649", "ADS"."X_650" AS "X_650", "ADS"."X_651" AS "X_651", "ADS"."X_652" AS "X_652", "ADS"."X_653" AS "X_653", "ADS"."X_654" AS "X_654", "ADS"."X_655" AS "X_655", "ADS"."X_656" AS "X_656", "ADS"."X_657" AS "X_657", "ADS"."X_658" AS "X_658", "ADS"."X_659" AS "X_659", "ADS"."X_660" AS "X_660", "ADS"."X_661" AS "X_661", "ADS"."X_662" AS "X_662", "ADS"."X_663" AS "X_663", "ADS"."X_664" AS "X_664", "ADS"."X_665" AS "X_665", "ADS"."X_666" AS "X_666", "ADS"."X_667" AS "X_667", "ADS"."X_668" AS "X_668", "ADS"."X_669" AS "X_669", "ADS"."X_670" AS "X_670", "ADS"."X_671" AS "X_671", "ADS"."X_672" AS "X_672", "ADS"."X_673" AS "X_673", "ADS"."X_674" AS "X_674", "ADS"."X_675" AS "X_675", "ADS"."X_676" AS "X_676", "ADS"."X_677" AS "X_677", "ADS"."X_678" AS "X_678", "ADS"."X_679" AS "X_679", "ADS"."X_680" AS "X_680", "ADS"."X_681" AS "X_681", "ADS"."X_682" AS "X_682", "ADS"."X_683" AS "X_683", "ADS"."X_684" AS "X_684", "ADS"."X_685" AS "X_685", "ADS"."X_686" AS "X_686", "ADS"."X_687" AS "X_687", "ADS"."X_688" AS "X_688", "ADS"."X_689" AS "X_689", "ADS"."X_690" AS 
"X_690", "ADS"."X_691" AS "X_691", "ADS"."X_692" AS "X_692", "ADS"."X_693" AS "X_693", "ADS"."X_694" AS "X_694", "ADS"."X_695" AS "X_695", "ADS"."X_696" AS "X_696", "ADS"."X_697" AS "X_697", "ADS"."X_698" AS "X_698", "ADS"."X_699" AS "X_699", "ADS"."X_700" AS "X_700", "ADS"."X_701" AS "X_701", "ADS"."X_702" AS "X_702", "ADS"."X_703" AS "X_703", "ADS"."X_704" AS "X_704", "ADS"."X_705" AS "X_705", "ADS"."X_706" AS "X_706", "ADS"."X_707" AS "X_707", "ADS"."X_708" AS "X_708", "ADS"."X_709" AS "X_709", "ADS"."X_710" AS "X_710", "ADS"."X_711" AS "X_711", "ADS"."X_712" AS "X_712", "ADS"."X_713" AS "X_713", "ADS"."X_714" AS "X_714", "ADS"."X_715" AS "X_715", "ADS"."X_716" AS "X_716", "ADS"."X_717" AS "X_717", "ADS"."X_718" AS "X_718", "ADS"."X_719" AS "X_719", "ADS"."X_720" AS "X_720", "ADS"."X_721" AS "X_721", "ADS"."X_722" AS "X_722", "ADS"."X_723" AS "X_723", "ADS"."X_724" AS "X_724", "ADS"."X_725" AS "X_725", "ADS"."X_726" AS "X_726", "ADS"."X_727" AS "X_727", "ADS"."X_728" AS "X_728", "ADS"."X_729" AS "X_729", "ADS"."X_730" AS "X_730", "ADS"."X_731" AS "X_731", "ADS"."X_732" AS "X_732", "ADS"."X_733" AS "X_733", "ADS"."X_734" AS "X_734", "ADS"."X_735" AS "X_735", "ADS"."X_736" AS "X_736", "ADS"."X_737" AS "X_737", "ADS"."X_738" AS "X_738", "ADS"."X_739" AS "X_739", "ADS"."X_740" AS "X_740", "ADS"."X_741" AS "X_741", "ADS"."X_742" AS "X_742", "ADS"."X_743" AS "X_743", "ADS"."X_744" AS "X_744", "ADS"."X_745" AS "X_745", "ADS"."X_746" AS "X_746", "ADS"."X_747" AS "X_747", "ADS"."X_748" AS "X_748", "ADS"."X_749" AS "X_749", "ADS"."X_750" AS "X_750", "ADS"."X_751" AS "X_751", "ADS"."X_752" AS "X_752", "ADS"."X_753" AS "X_753", "ADS"."X_754" AS "X_754", "ADS"."X_755" AS "X_755", "ADS"."X_756" AS "X_756", "ADS"."X_757" AS "X_757", "ADS"."X_758" AS "X_758", "ADS"."X_759" AS "X_759", "ADS"."X_760" AS "X_760", "ADS"."X_761" AS "X_761", "ADS"."X_762" AS "X_762", "ADS"."X_763" AS "X_763", "ADS"."X_764" AS "X_764", "ADS"."X_765" AS "X_765", "ADS"."X_766" AS "X_766", "ADS"."X_767" AS "X_767", "ADS"."X_768" AS "X_768", "ADS"."X_769" AS "X_769", "ADS"."X_770" AS "X_770", "ADS"."X_771" AS "X_771", "ADS"."X_772" AS "X_772", "ADS"."X_773" AS "X_773", "ADS"."X_774" AS "X_774", "ADS"."X_775" AS "X_775", "ADS"."X_776" AS "X_776", "ADS"."X_777" AS "X_777", "ADS"."X_778" AS "X_778", "ADS"."X_779" AS "X_779", "ADS"."X_780" AS "X_780", "ADS"."X_781" AS "X_781", "ADS"."X_782" AS "X_782", "ADS"."X_783" AS "X_783", "ADS"."X_784" AS "X_784"
FROM mnist AS "ADS"),
"layer_conv2d_2_Filter0" AS
(SELECT keras_input."KEY" AS "KEY", 0.017863758 + keras_input."X_1" * 0.12594759464263916 + keras_input."X_2" * 0.6201684474945068 + keras_input."X_3" * 0.48617687821388245 + keras_input."X_29" * -0.1793716996908188 + keras_input."X_30" * -0.1397959291934967 + keras_input."X_31" * -0.02336321771144867 + keras_input."X_57" * -0.8183432221412659 + keras_input."X_58" * -0.23505675792694092 + keras_input."X_59" * -0.31184646487236023 AS output_0_1_1, 0.017863758 + keras_input."X_2" * 0.12594759464263916 + keras_input."X_3" * 0.6201684474945068 + keras_input."X_4" * 0.48617687821388245 + keras_input."X_30" * -0.1793716996908188 + keras_input."X_31" * -0.1397959291934967 + keras_input."X_32" * -0.02336321771144867 + keras_input."X_58" * -0.8183432221412659 + keras_input."X_59" * -0.23505675792694092 + keras_input."X_60" * -0.31184646487236023 AS output_0_1_2, 0.017863758 + keras_input."X_3" * 0.12594759464263916 + keras_input."X_4" * 0.6201684474945068 + keras_input."X_5" * 0.48617687821388245 + keras_input."X_31" * -0.1793716996908188 + keras_input."X_32" * -0.1397959291934967 + keras_input."X_33" * -0.02336321771144867 + keras_input."X_59" * -0.8183432221412659 + keras_input."X_60" * -0.23505675792694092 + keras_input."X_61" * -0.31184646487236023 AS output_0_1_3, 0.017863758 + keras_input."X_4" * 0.12594759464263916 + keras_input."X_5" * 0.6201684474945068 + keras_input."X_6" * 0.48617687821388245 + keras_input."X_32" * -0.1793716996908188 + keras_input."X_33" * -0.1397959291934967 + keras_input."X_34" * -0.02336321771144867 + keras_input."X_60" * -0.8183432221412659 + keras_input."X_61" * -0.23505675792694092 + keras_input."X_62" * -0.31184646487236023 AS output_0_1_4, 0.017863758 + keras_input."X_5" * 0.12594759464263916 + keras_input."X_6" * 0.6201684474945068 + keras_input."X_7" * 0.48617687821388245 + keras_input."X_33" * -0.1793716996908188 + keras_input."X_34" * -0.1397959291934967 + keras_input."X_35" * -0.02336321771144867 + keras_input."X_61" * -0.8183432221412659 + keras_input."X_62" * -0.23505675792694092 + keras_input."X_63" * -0.31184646487236023 AS output_0_1_5, 0.017863758 + keras_input."X_6" * 0.12594759464263916 + keras_input."X_7" * 0.6201684474945068 + keras_input."X_8" * 0.48617687821388245 + keras_input."X_34" * -0.1793716996908188 + keras_input."X_35" * -0.1397959291934967 + keras_input."X_36" * -0.02336321771144867 + keras_input."X_62" * -0.8183432221412659 + keras_input."X_63" * -0.23505675792694092 + keras_input."X_64" * -0.31184646487236023 AS output_0_1_6, 0.017863758 + keras_input."X_7" * 0.12594759464263916 + keras_input."X_8" * 0.6201684474945068 + keras_input."X_9" * 0.48617687821388245 + keras_input."X_35" * -0.1793716996908188 + keras_input."X_36" * -0.1397959291934967 + keras_input."X_37" * -0.02336321771144867 + keras_input."X_63" * -0.8183432221412659 + keras_input."X_64" * -0.23505675792694092 + keras_input."X_65" * -0.31184646487236023 AS output_0_1_7, 0.017863758 + keras_input."X_8" * 0.12594759464263916 + keras_input."X_9" * 0.6201684474945068 + keras_input."X_10" * 0.48617687821388245 + keras_input."X_36" * -0.1793716996908188 + keras_input."X_37" * -0.1397959291934967 + keras_input."X_38" * -0.02336321771144867 + keras_input."X_64" * -0.8183432221412659 + keras_input."X_65" * -0.23505675792694092 + keras_input."X_66" * -0.31184646487236023 AS output_0_1_8, 0.017863758 + keras_input."X_9" * 0.12594759464263916 + keras_input."X_10" * 0.6201684474945068 + keras_input."X_11" * 0.48617687821388245 + keras_input."X_37" * -0.1793716996908188 + 
keras_input."X_38" * -0.1397959291934967 + keras_input."X_39" * -0.02336321771144867 + keras_input."X_65" * -0.8183432221412659 + keras_input."X_66" * -0.23505675792694092 + keras_input."X_67" * -0.31184646487236023 AS output_0_1_9, 0.017863758 + keras_input."X_10" * 0.12594759464263916 + keras_input."X_11" * 0.6201684474945068 + keras_input."X_12" * 0.48617687821388245 + keras_input."X_38" * -0.1793716996908188 + keras_input."X_39" * -0.1397959291934967 + keras_input."X_40" * -0.02336321771144867 + keras_input."X_66" * -0.8183432221412659 + keras_input."X_67" * -0.23505675792694092 + keras_input."X_68" * -0.31184646487236023 AS output_0_1_10, 0.017863758 + keras_input."X_11" * 0.12594759464263916 + keras_input."X_12" * 0.6201684474945068 + keras_input."X_13" * 0.48617687821388245 + keras_input."X_39" * -0.1793716996908188 + keras_input."X_40" * -0.1397959291934967 + keras_input."X_41" * -0.02336321771144867 + keras_input."X_67" * -0.8183432221412659 + keras_input."X_68" * -0.23505675792694092 + keras_input."X_69" * -0.31184646487236023 AS output_0_1_11, 0.017863758 + keras_input."X_12" * 0.12594759464263916 + keras_input."X_13" * 0.6201684474945068 + keras_input."X_14" * 0.48617687821388245 + keras_input."X_40" * -0.1793716996908188 + keras_input."X_41" * -0.1397959291934967 + keras_input."X_42" * -0.02336321771144867 + keras_input."X_68" * -0.8183432221412659 + keras_input."X_69" * -0.23505675792694092 + keras_input."X_70" * -0.31184646487236023 AS output_0_1_12, 0.017863758 + keras_input."X_13" * 0.12594759464263916 + keras_input."X_14" * 0.6201684474945068 + keras_input."X_15" * 0.48617687821388245 + keras_input."X_41" * -0.1793716996908188 + keras_input."X_42" * -0.1397959291934967 + keras_input."X_43" * -0.02336321771144867 + keras_input."X_69" * -0.8183432221412659 + keras_input."X_70" * -0.23505675792694092 + keras_input."X_71" * -0.31184646487236023 AS output_0_1_13, 0.017863758 + keras_input."X_14" * 0.12594759464263916 + keras_input."X_15" * 0.6201684474945068 + keras_input."X_16" * 0.48617687821388245 + keras_input."X_42" * -0.1793716996908188 + keras_input."X_43" * -0.1397959291934967 + keras_input."X_44" * -0.02336321771144867 + keras_input."X_70" * -0.8183432221412659 + keras_input."X_71" * -0.23505675792694092 + keras_input."X_72" * -0.31184646487236023 AS output_0_1_14, 0.017863758 + keras_input."X_15" * 0.12594759464263916 + keras_input."X_16" * 0.6201684474945068 + keras_input."X_17" * 0.48617687821388245 + keras_input."X_43" * -0.1793716996908188 + keras_input."X_44" * -0.1397959291934967 + keras_input."X_45" * -0.02336321771144867 + keras_input."X_71" * -0.8183432221412659 + keras_input."X_72" * -0.23505675792694092 + keras_input."X_73" * -0.31184646487236023 AS output_0_1_15, 0.017863758 + keras_input."X_16" * 0.12594759464263916 + keras_input."X_17" * 0.6201684474945068 + keras_input."X_18" * 0.48617687821388245 + keras_input."X_44" * -0.1793716996908188 + keras_input."X_45" * -0.1397959291934967 + keras_input."X_46" * -0.02336321771144867 + keras_input."X_72" * -0.8183432221412659 + keras_input."X_73" * -0.23505675792694092 + keras_input."X_74" * -0.31184646487236023 AS output_0_1_16, 0.017863758 + keras_input."X_17" * 0.12594759464263916 + keras_input."X_18" * 0.6201684474945068 + keras_input."X_19" * 0.48617687821388245 + keras_input."X_45" * -0.1793716996908188 + keras_input."X_46" * -0.1397959291934967 + keras_input."X_47" * -0.02336321771144867 + keras_input."X_73" * -0.8183432221412659 + keras_input."X_74" * -0.23505675792694092 + keras_input."X_75" * 
-0.31184646487236023 AS output_0_1_17, 0.017863758 + keras_input."X_18" * 0.12594759464263916 + keras_input."X_19" * 0.6201684474945068 + keras_input."X_20" * 0.48617687821388245 + keras_input."X_46" * -0.1793716996908188 + keras_input."X_47" * -0.1397959291934967 + keras_input."X_48" * -0.02336321771144867 + keras_input."X_74" * -0.8183432221412659 + keras_input."X_75" * -0.23505675792694092 + keras_input."X_76" * -0.31184646487236023 AS output_0_1_18, 0.017863758 + keras_input."X_19" * 0.12594759464263916 + keras_input."X_20" * 0.6201684474945068 + keras_input."X_21" * 0.48617687821388245 + keras_input."X_47" * -0.1793716996908188 + keras_input."X_48" * -0.1397959291934967 + keras_input."X_49" * -0.02336321771144867 + keras_input."X_75" * -0.8183432221412659 + keras_input."X_76" * -0.23505675792694092 + keras_input."X_77" * -0.31184646487236023 AS output_0_1_19, 0.017863758 + keras_input."X_20" * 0.12594759464263916 + keras_input."X_21" * 0.6201684474945068 + keras_input."X_22" * 0.48617687821388245 + keras_input."X_48" * -0.1793716996908188 + keras_input."X_49" * -0.1397959291934967 + keras_input."X_50" * -0.02336321771144867 + keras_input."X_76" * -0.8183432221412659 + keras_input."X_77" * -0.23505675792694092 + keras_input."X_78" * -0.31184646487236023 AS output_0_1_20, 0.017863758 + keras_input."X_21" * 0.12594759464263916 + keras_input."X_22" * 0.6201684474945068 + keras_input."X_23" * 0.48617687821388245 + keras_input."X_49" * -0.1793716996908188 + keras_input."X_50" * -0.1397959291934967 + keras_input."X_51" * -0.02336321771144867 + keras_input."X_77" * -0.8183432221412659 + keras_input."X_78" * -0.23505675792694092 + keras_input."X_79" * -0.31184646487236023 AS output_0_1_21, 0.017863758 + keras_input."X_22" * 0.12594759464263916 + keras_input."X_23" * 0.6201684474945068 + keras_input."X_24" * 0.48617687821388245 + keras_input."X_50" * -0.1793716996908188 + keras_input."X_51" * -0.1397959291934967 + keras_input."X_52" * -0.02336321771144867 + keras_input."X_78" * -0.8183432221412659 + keras_input."X_79" * -0.23505675792694092 + keras_input."X_80" * -0.31184646487236023 AS output_0_1_22, 0.017863758 + keras_input."X_23" * 0.12594759464263916 + keras_input."X_24" * 0.6201684474945068 + keras_input."X_25" * 0.48617687821388245 + keras_input."X_51" * -0.1793716996908188 + keras_input."X_52" * -0.1397959291934967 + keras_input."X_53" * -0.02336321771144867 + keras_input."X_79" * -0.8183432221412659 + keras_input."X_80" * -0.23505675792694092 + keras_input."X_81" * -0.31184646487236023 AS output_0_1_23, 0.017863758 + keras_input."X_24" * 0.12594759464263916 + keras_input."X_25" * 0.6201684474945068 + keras_input."X_26" * 0.48617687821388245 + keras_input."X_52" * -0.1793716996908188 + keras_input."X_53" * -0.1397959291934967 + keras_input."X_54" * -0.02336321771144867 + keras_input."X_80" * -0.8183432221412659 + keras_input."X_81" * -0.23505675792694092 + keras_input."X_82" * -0.31184646487236023 AS output_0_1_24, 0.017863758 + keras_input."X_25" * 0.12594759464263916 + keras_input."X_26" * 0.6201684474945068 + keras_input."X_27" * 0.48617687821388245 + keras_input."X_53" * -0.1793716996908188 + keras_input."X_54" * -0.1397959291934967 + keras_input."X_55" * -0.02336321771144867 + keras_input."X_81" * -0.8183432221412659 + keras_input."X_82" * -0.23505675792694092 + keras_input."X_83" * -0.31184646487236023 AS output_0_1_25, 0.017863758 + keras_input."X_26" * 0.12594759464263916 + keras_input."X_27" * 0.6201684474945068 + keras_input."X_28" * 0.48617687821388245 + keras_input."X_54" * 
-0.1793716996908188 + keras_input."X_55" * -0.1397959291934967 + keras_input."X_56" * -0.02336321771144867 + keras_input."X_82" * -0.8183432221412659 + keras_input."X_83" * -0.23505675792694092 + keras_input."X_84" * -0.31184646487236023 AS output_0_1_26, 0.017863758 + keras_input."X_29" * 0.12594759464263916 + keras_input."X_30" * 0.6201684474945068 + keras_input."X_31" * 0.48617687821388245 + keras_input."X_57" * -0.1793716996908188 + keras_input."X_58" * -0.1397959291934967 + keras_input."X_59" * -0.02336321771144867 + keras_input."X_85" * -0.8183432221412659 + keras_input."X_86" * -0.23505675792694092 + keras_input."X_87" * -0.31184646487236023 AS output_0_2_1, 0.017863758 + keras_input."X_30" * 0.12594759464263916 + keras_input."X_31" * 0.6201684474945068 + keras_input."X_32" * 0.48617687821388245 + keras_input."X_58" * -0.1793716996908188 + keras_input."X_59" * -0.1397959291934967 + keras_input."X_60" * -0.02336321771144867 + keras_input."X_86" * -0.8183432221412659 + keras_input."X_87" * -0.23505675792694092 + keras_input."X_88" * -0.31184646487236023 AS output_0_2_2, 0.017863758 + keras_input."X_31" * 0.12594759464263916 + keras_input."X_32" * 0.6201684474945068 + keras_input."X_33" * 0.48617687821388245 + keras_input."X_59" * -0.1793716996908188 + keras_input."X_60" * -0.1397959291934967 + keras_input."X_61" * -0.02336321771144867 + keras_input."X_87" * -0.8183432221412659 + keras_input."X_88" * -0.23505675792694092 + keras_input."X_89" * -0.31184646487236023 AS output_0_2_3, 0.017863758 + keras_input."X_32" * 0.12594759464263916 + keras_input."X_33" * 0.6201684474945068 + keras_input."X_34" * 0.48617687821388245 + keras_input."X_60" * -0.1793716996908188 + keras_input."X_61" * -0.1397959291934967 + keras_input."X_62" * -0.02336321771144867 + keras_input."X_88" * -0.8183432221412659 + keras_input."X_89" * -0.23505675792694092 + keras_input."X_90" * -0.31184646487236023 AS output_0_2_4, 0.017863758 + keras_input."X_33" * 0.12594759464263916 + keras_input."X_34" * 0.6201684474945068 + keras_input."X_35" * 0.48617687821388245 + keras_input."X_61" * -0.1793716996908188 + keras_input."X_62" * -0.1397959291934967 + keras_input."X_63" * -0.02336321771144867 + keras_input."X_89" * -0.8183432221412659 + keras_input."X_90" * -0.23505675792694092 + keras_input."X_91" * -0.31184646487236023 AS output_0_2_5, 0.017863758 + keras_input."X_34" * 0.12594759464263916 + keras_input."X_35" * 0.6201684474945068 + keras_input."X_36" * 0.48617687821388245 + keras_input."X_62" * -0.1793716996908188 + keras_input."X_63" * -0.1397959291934967 + keras_input."X_64" * -0.02336321771144867 + keras_input."X_90" * -0.8183432221412659 + keras_input."X_91" * -0.23505675792694092 + keras_input."X_92" * -0.31184646487236023 AS output_0_2_6, 0.017863758 + keras_input."X_35" * 0.12594759464263916 + keras_input."X_36" * 0.6201684474945068 + keras_input."X_37" * 0.48617687821388245 + keras_input."X_63" * -0.1793716996908188 + keras_input."X_64" * -0.1397959291934967 + keras_input."X_65" * -0.02336321771144867 + keras_input."X_91" * -0.8183432221412659 + keras_input."X_92" * -0.23505675792694092 + keras_input."X_93" * -0.31184646487236023 AS output_0_2_7, 0.017863758 + keras_input."X_36" * 0.12594759464263916 + keras_input."X_37" * 0.6201684474945068 + keras_input."X_38" * 0.48617687821388245 + keras_input."X_64" * -0.1793716996908188 + keras_input."X_65" * -0.1397959291934967 + keras_input."X_66" * -0.02336321771144867 + keras_input."X_92" * -0.8183432221412659 + keras_input."X_93" * -0.23505675792694092 + 
keras_input."X_94" * -0.31184646487236023 AS output_0_2_8, 0.017863758 + keras_input."X_37" * 0.12594759464263916 + keras_input."X_38" * 0.6201684474945068 + keras_input."X_39" * 0.48617687821388245 + keras_input."X_65" * -0.1793716996908188 + keras_input."X_66" * -0.1397959291934967 + keras_input."X_67" * -0.02336321771144867 + keras_input."X_93" * -0.8183432221412659 + keras_input."X_94" * -0.23505675792694092 + keras_input."X_95" * -0.31184646487236023 AS output_0_2_9, 0.017863758 + keras_input."X_38" * 0.12594759464263916 + keras_input."X_39" * 0.6201684474945068 + keras_input."X_40" * 0.48617687821388245 + keras_input."X_66" * -0.1793716996908188 + keras_input."X_67" * -0.1397959291934967 + keras_input."X_68" * -0.02336321771144867 + keras_input."X_94" * -0.8183432221412659 + keras_input."X_95" * -0.23505675792694092 + keras_input."X_96" * -0.31184646487236023 AS output_0_2_10, 0.017863758 + keras_input."X_39" * 0.12594759464263916 + keras_input."X_40" * 0.6201684474945068 + keras_input."X_41" * 0.48617687821388245 + keras_input."X_67" * -0.1793716996908188 + keras_input."X_68" * -0.1397959291934967 + keras_input."X_69" * -0.02336321771144867 + keras_input."X_95" * -0.8183432221412659 + keras_input."X_96" * -0.23505675792694092 + keras_input."X_97" * -0.31184646487236023 AS output_0_2_11, 0.017863758 + keras_input."X_40" * 0.12594759464263916 + keras_input."X_41" * 0.6201684474945068 + keras_input."X_42" * 0.48617687821388245 + keras_input."X_68" * -0.1793716996908188 + keras_input."X_69" * -0.1397959291934967 + keras_input."X_70" * -0.02336321771144867 + keras_input."X_96" * -0.8183432221412659 + keras_input."X_97" * -0.23505675792694092 + keras_input."X_98" * -0.31184646487236023 AS output_0_2_12, 0.017863758 + keras_input."X_41" * 0.12594759464263916 + keras_input."X_42" * 0.6201684474945068 + keras_input."X_43" * 0.48617687821388245 + keras_input."X_69" * -0.1793716996908188 + keras_input."X_70" * -0.1397959291934967 + keras_input."X_71" * -0.02336321771144867 + keras_input."X_97" * -0.8183432221412659 + keras_input."X_98" * -0.23505675792694092 + keras_input."X_99" * -0.31184646487236023 AS output_0_2_13, 0.017863758 + keras_input."X_42" * 0.12594759464263916 + keras_input."X_43" * 0.6201684474945068 + keras_input."X_44" * 0.48617687821388245 + keras_input."X_70" * -0.1793716996908188 + keras_input."X_71" * -0.1397959291934967 + keras_input."X_72" * -0.02336321771144867 + keras_input."X_98" * -0.8183432221412659 + keras_input."X_99" * -0.23505675792694092 + keras_input."X_100" * -0.31184646487236023 AS output_0_2_14, 0.017863758 + keras_input."X_43" * 0.12594759464263916 + keras_input."X_44" * 0.6201684474945068 + keras_input."X_45" * 0.48617687821388245 + keras_input."X_71" * -0.1793716996908188 + keras_input."X_72" * -0.1397959291934967 + keras_input."X_73" * -0.02336321771144867 + keras_input."X_99" * -0.8183432221412659 + keras_input."X_100" * -0.23505675792694092 + keras_input."X_101" * -0.31184646487236023 AS output_0_2_15, 0.017863758 + keras_input."X_44" * 0.12594759464263916 + keras_input."X_45" * 0.6201684474945068 + keras_input."X_46" * 0.48617687821388245 + keras_input."X_72" * -0.1793716996908188 + keras_input."X_73" * -0.1397959291934967 + keras_input."X_74" * -0.02336321771144867 + keras_input."X_100" * -0.8183432221412659 + keras_input."X_101" * -0.23505675792694092 + keras_input."X_102" * -0.31184646487236023 AS output_0_2_16, 0.017863758 + keras_input."X_45" * 0.12594759464263916 + keras_input."X_46" * 0.6201684474945068 + keras_input."X_47" * 
0.48617687821388245 + keras_input."X_73" * -0.1793716996908188 + keras_input."X_74" * -0.1397959291934967 + keras_input."X_75" * -0.02336321771144867 + keras_input."X_101" * -0.8183432221412659 + keras_input."X_102" * -0.23505675792694092 + keras_input."X_103" * -0.31184646487236023 AS output_0_2_17, 0.017863758 + keras_input."X_46" * 0.12594759464263916 + keras_input."X_47" * 0.6201684474945068 + keras_input."X_48" * 0.48617687821388245 + keras_input."X_74" * -0.1793716996908188 + keras_input."X_75" * -0.1397959291934967 + keras_input."X_76" * -0.02336321771144867 + keras_input."X_102" * -0.8183432221412659 + keras_input."X_103" * -0.23505675792694092 + keras_input."X_104" * -0.31184646487236023 AS output_0_2_18, 0.017863758 + keras_input."X_47" * 0.12594759464263916 + keras_input."X_48" * 0.6201684474945068 + keras_input."X_49" * 0.48617687821388245 + keras_input."X_75" * -0.1793716996908188 + keras_input."X_76" * -0.1397959291934967 + keras_input."X_77" * -0.02336321771144867 + keras_input."X_103" * -0.8183432221412659 + keras_input."X_104" * -0.23505675792694092 + keras_input."X_105" * -0.31184646487236023 AS output_0_2_19, 0.017863758 + keras_input."X_48" * 0.12594759464263916 + keras_input."X_49" * 0.6201684474945068 + keras_input."X_50" * 0.48617687821388245 + keras_input."X_76" * -0.1793716996908188 + keras_input."X_77" * -0.1397959291934967 + keras_input."X_78" * -0.02336321771144867 + keras_input."X_104" * -0.8183432221412659 + keras_input."X_105" * -0.23505675792694092 + keras_input."X_106" * -0.31184646487236023 AS output_0_2_20, 0.017863758 + keras_input."X_49" * 0.12594759464263916 + keras_input."X_50" * 0.6201684474945068 + keras_input."X_51" * 0.48617687821388245 + keras_input."X_77" * -0.1793716996908188 + keras_input."X_78" * -0.1397959291934967 + keras_input."X_79" * -0.02336321771144867 + keras_input."X_105" * -0.8183432221412659 + keras_input."X_106" * -0.23505675792694092 + keras_input."X_107" * -0.31184646487236023 AS output_0_2_21, 0.017863758 + keras_input."X_50" * 0.12594759464263916 + keras_input."X_51" * 0.6201684474945068 + keras_input."X_52" * 0.48617687821388245 + keras_input."X_78" * -0.1793716996908188 + keras_input."X_79" * -0.1397959291934967 + keras_input."X_80" * -0.02336321771144867 + keras_input."X_106" * -0.8183432221412659 + keras_input."X_107" * -0.23505675792694092 + keras_input."X_108" * -0.31184646487236023 AS output_0_2_22, 0.017863758 + keras_input."X_51" * 0.12594759464263916 + keras_input."X_52" * 0.6201684474945068 + keras_input."X_53" * 0.48617687821388245 + keras_input."X_79" * -0.1793716996908188 + keras_input."X_80" * -0.1397959291934967 + keras_input."X_81" * -0.02336321771144867 + keras_input."X_107" * -0.8183432221412659 + keras_input."X_108" * -0.23505675792694092 + keras_input."X_109" * -0.31184646487236023 AS output_0_2_23, 0.017863758 + keras_input."X_52" * 0.12594759464263916 + keras_input."X_53" * 0.6201684474945068 + keras_input."X_54" * 0.48617687821388245 + keras_input."X_80" * -0.1793716996908188 + keras_input."X_81" * -0.1397959291934967 + keras_input."X_82" * -0.02336321771144867 + keras_input."X_108" * -0.8183432221412659 + keras_input."X_109" * -0.23505675792694092 + keras_input."X_110" * -0.31184646487236023 AS output_0_2_24, 0.017863758 + keras_input."X_53" * 0.12594759464263916 + keras_input."X_54" * 0.6201684474945068 + keras_input."X_55" * 0.48617687821388245 + keras_input."X_81" * -0.1793716996908188 + keras_input."X_82" * -0.1397959291934967 + keras_input."X_83" * -0.02336321771144867 + keras_input."X_109" * 
-0.8183432221412659 + keras_input."X_110" * -0.23505675792694092 + keras_input."X_111" * -0.31184646487236023 AS output_0_2_25, 0.017863758 + keras_input."X_54" * 0.12594759464263916 + keras_input."X_55" * 0.6201684474945068 + keras_input."X_56" * 0.48617687821388245 + keras_input."X_82" * -0.1793716996908188 + keras_input."X_83" * -0.1397959291934967 + keras_input."X_84" * -0.02336321771144867 + keras_input."X_110" * -0.8183432221412659 + keras_input."X_111" * -0.23505675792694092 + keras_input."X_112" * -0.31184646487236023 AS output_0_2_26, 0.017863758 + keras_input."X_57" * 0.12594759464263916 + keras_input."X_58" * 0.6201684474945068 + keras_input."X_59" * 0.48617687821388245 + keras_input."X_85" * -0.1793716996908188 + keras_input."X_86" * -0.1397959291934967 + keras_input."X_87" * -0.02336321771144867 + keras_input."X_113" * -0.8183432221412659 + keras_input."X_114" * -0.23505675792694092 + keras_input."X_115" * -0.31184646487236023 AS output_0_3_1, 0.017863758 + keras_input."X_58" * 0.12594759464263916 + keras_input."X_59" * 0.6201684474945068 + keras_input."X_60" * 0.48617687821388245 + keras_input."X_86" * -0.1793716996908188 + keras_input."X_87" * -0.1397959291934967 + keras_input."X_88" * -0.02336321771144867 + keras_input."X_114" * -0.8183432221412659 + keras_input."X_115" * -0.23505675792694092 + keras_input."X_116" * -0.31184646487236023 AS output_0_3_2, 0.017863758 + keras_input."X_59" * 0.12594759464263916 + keras_input."X_60" * 0.6201684474945068 + keras_input."X_61" * 0.48617687821388245 + keras_input."X_87" * -0.1793716996908188 + keras_input."X_88" * -0.1397959291934967 + keras_input."X_89" * -0.02336321771144867 + keras_input."X_115" * -0.8183432221412659 + keras_input."X_116" * -0.23505675792694092 + keras_input."X_117" * -0.31184646487236023 AS output_0_3_3, 0.017863758 + keras_input."X_60" * 0.12594759464263916 + keras_input."X_61" * 0.6201684474945068 + keras_input."X_62" * 0.48617687821388245 + keras_input."X_88" * -0.1793716996908188 + keras_input."X_89" * -0.1397959291934967 + keras_input."X_90" * -0.02336321771144867 + keras_input."X_116" * -0.8183432221412659 + keras_input."X_117" * -0.23505675792694092 + keras_input."X_118" * -0.31184646487236023 AS output_0_3_4, 0.017863758 + keras_input."X_61" * 0.12594759464263916 + keras_input."X_62" * 0.6201684474945068 + keras_input."X_63" * 0.48617687821388245 + keras_input."X_89" * -0.1793716996908188 + keras_input."X_90" * -0.1397959291934967 + keras_input."X_91" * -0.02336321771144867 + keras_input."X_117" * -0.8183432221412659 + keras_input."X_118" * -0.23505675792694092 + keras_input."X_119" * -0.31184646487236023 AS output_0_3_5, 0.017863758 + keras_input."X_62" * 0.12594759464263916 + keras_input."X_63" * 0.6201684474945068 + keras_input."X_64" * 0.48617687821388245 + keras_input."X_90" * -0.1793716996908188 + keras_input."X_91" * -0.1397959291934967 + keras_input."X_92" * -0.02336321771144867 + keras_input."X_118" * -0.8183432221412659 + keras_input."X_119" * -0.23505675792694092 + keras_input."X_120" * -0.31184646487236023 AS output_0_3_6, 0.017863758 + keras_input."X_63" * 0.12594759464263916 + keras_input."X_64" * 0.6201684474945068 + keras_input."X_65" * 0.48617687821388245 + keras_input."X_91" * -0.1793716996908188 + keras_input."X_92" * -0.1397959291934967 + keras_input."X_93" * -0.02336321771144867 + keras_input."X_119" * -0.8183432221412659 + keras_input."X_120" * -0.23505675792694092 + keras_input."X_121" * -0.31184646487236023 AS output_0_3_7, 0.017863758 + keras_input."X_64" * 0.12594759464263916 + 
keras_input."X_65" * 0.6201684474945068 + keras_input."X_66" * 0.48617687821388245 + keras_input."X_92" * -0.1793716996908188 + keras_input."X_93" * -0.1397959291934967 + keras_input."X_94" * -0.02336321771144867 + keras_input."X_120" * -0.8183432221412659 + keras_input."X_121" * -0.23505675792694092 + keras_input."X_122" * -0.31184646487236023 AS output_0_3_8, 0.017863758 + keras_input."X_65" * 0.12594759464263916 + keras_input."X_66" * 0.6201684474945068 + keras_input."X_67" * 0.48617687821388245 + keras_input."X_93" * -0.1793716996908188 + keras_input."X_94" * -0.1397959291934967 + keras_input."X_95" * -0.02336321771144867 + keras_input."X_121" * -0.8183432221412659 + keras_input."X_122" * -0.23505675792694092 + keras_input."X_123" * -0.31184646487236023 AS output_0_3_9, 0.017863758 + keras_input."X_66" * 0.12594759464263916 + keras_input."X_67" * 0.6201684474945068 + keras_input."X_68" * 0.48617687821388245 + keras_input."X_94" * -0.1793716996908188 + keras_input."X_95" * -0.1397959291934967 + keras_input."X_96" * -0.02336321771144867 + keras_input."X_122" * -0.8183432221412659 + keras_input."X_123" * -0.23505675792694092 + keras_input."X_124" * -0.31184646487236023 AS output_0_3_10, 0.017863758 + keras_input."X_67" * 0.12594759464263916 + keras_input."X_68" * 0.6201684474945068 + keras_input."X_69" * 0.48617687821388245 + keras_input."X_95" * -0.1793716996908188 + keras_input."X_96" * -0.1397959291934967 + keras_input."X_97" * -0.02336321771144867 + keras_input."X_123" * -0.8183432221412659 + keras_input."X_124" * -0.23505675792694092 + keras_input."X_125" * -0.31184646487236023 AS output_0_3_11, 0.017863758 + keras_input."X_68" * 0.12594759464263916 + keras_input."X_69" * 0.6201684474945068 + keras_input."X_70" * 0.48617687821388245 + keras_input."X_96" * -0.1793716996908188 + keras_input."X_97" * -0.1397959291934967 + keras_input."X_98" * -0.02336321771144867 + keras_input."X_124" * -0.8183432221412659 + keras_input."X_125" * -0.23505675792694092 + keras_input."X_126" * -0.31184646487236023 AS output_0_3_12, 0.017863758 + keras_input."X_69" * 0.12594759464263916 + keras_input."X_70" * 0.6201684474945068 + keras_input."X_71" * 0.48617687821388245 + keras_input."X_97" * -0.1793716996908188 + keras_input."X_98" * -0.1397959291934967 + keras_input."X_99" * -0.02336321771144867 + keras_input."X_125" * -0.8183432221412659 + keras_input."X_126" * -0.23505675792694092 + keras_input."X_127" * -0.31184646487236023 AS output_0_3_13, 0.017863758 + keras_input."X_70" * 0.12594759464263916 + keras_input."X_71" * 0.6201684474945068 + keras_input."X_72" * 0.48617687821388245 + keras_input."X_98" * -0.1793716996908188 + keras_input."X_99" * -0.1397959291934967 + keras_input."X_100" * -0.02336321771144867 + keras_input."X_126" * -0.8183432221412659 + keras_input."X_127" * -0.23505675792694092 + keras_input."X_128" * -0.31184646487236023 AS output_0_3_14, 0.017863758 + keras_input."X_71" * 0.12594759464263916 + keras_input."X_72" * 0.6201684474945068 + keras_input."X_73" * 0.48617687821388245 + keras_input."X_99" * -0.1793716996908188 + keras_input."X_100" * -0.1397959291934967 + keras_input."X_101" * -0.02336321771144867 + keras_input."X_127" * -0.8183432221412659 + keras_input."X_128" * -0.23505675792694092 + keras_input."X_129" * -0.31184646487236023 AS output_0_3_15, 0.017863758 + keras_input."X_72" * 0.12594759464263916 + keras_input."X_73" * 0.6201684474945068 + keras_input."X_74" * 0.48617687821388245 + keras_input."X_100" * -0.1793716996908188 + keras_input."X_101" * -0.1397959291934967 + 
keras_input."X_102" * -0.02336321771144867 + keras_input."X_128" * -0.8183432221412659 + keras_input."X_129" * -0.23505675792694092 + keras_input."X_130" * -0.31184646487236023 AS output_0_3_16, 0.017863758 + keras_input."X_73" * 0.12594759464263916 + keras_input."X_74" * 0.6201684474945068 + keras_input."X_75" * 0.48617687821388245 + keras_input."X_101" * -0.1793716996908188 + keras_input."X_102" * -0.1397959291934967 + keras_input."X_103" * -0.02336321771144867 + keras_input."X_129" * -0.8183432221412659 + keras_input."X_130" * -0.23505675792694092 + keras_input."X_131" * -0.31184646487236023 AS output_0_3_17, 0.017863758 + keras_input."X_74" * 0.12594759464263916 + keras_input."X_75" * 0.6201684474945068 + keras_input."X_76" * 0.48617687821388245 + keras_input."X_102" * -0.1793716996908188 + keras_input."X_103" * -0.1397959291934967 + keras_input."X_104" * -0.02336321771144867 + keras_input."X_130" * -0.8183432221412659 + keras_input."X_131" * -0.23505675792694092 + keras_input."X_132" * -0.31184646487236023 AS output_0_3_18, 0.017863758 + keras_input."X_75" * 0.12594759464263916 + keras_input."X_76" * 0.6201684474945068 + keras_input."X_77" * 0.48617687821388245 + keras_input."X_103" * -0.1793716996908188 + keras_input."X_104" * -0.1397959291934967 + keras_input."X_105" * -0.02336321771144867 + keras_input."X_131" * -0.818343222141
###Markdown
Execute the SQL Code
###Code
# save the dataset in a database table
import sqlalchemy as sa
# engine = sa.create_engine('sqlite://' , echo=False)
engine = sa.create_engine("postgresql://db:db@localhost/db?port=5432", echo=False)
conn = engine.connect()
NR = x_test.shape[0]
lTable = pd.DataFrame(x_test.reshape(NR , NC));
lTable.columns = lMetaData['features']
lTable['TGT'] = None
lTable['KEY'] = range(NR)
lTable.to_sql(lMetaData['table'] , conn, if_exists='replace', index=False)
sql_output = pd.read_sql(lSQL , conn);
sql_output = sql_output.sort_values(by='KEY').reset_index(drop=True)
conn.close()
sql_output.sample(12, random_state=1960)
###Output
_____no_output_____
###Markdown
Keras Prediction
###Code
keras_output = pd.DataFrame()
keras_output_key = pd.DataFrame(list(range(x_test.shape[0])), columns=['KEY']);
keras_output_score = pd.DataFrame(columns=['Score_' + str(x) for x in range(num_classes)]);
keras_output_proba = pd.DataFrame(clf.predict_proba(x_test), columns=['Proba_' + str(x) for x in range(num_classes)])
keras_output = pd.concat([keras_output_key, keras_output_score, keras_output_proba] , axis=1)
for class_label in range(num_classes):
keras_output['LogProba_' + str(class_label)] = np.log(keras_output_proba['Proba_' + str(class_label)])
keras_output['Decision'] = clf.predict(x_test)
keras_output.sample(12, random_state=1960)
###Output
1000/1000 [==============================] - 0s 272us/step
1000/1000 [==============================] - 0s 266us/step
###Markdown
Comparing the SQL and Keras Predictions
###Code
sql_keras_join = keras_output.join(sql_output , how='left', on='KEY', lsuffix='_keras', rsuffix='_sql')
sql_keras_join.head(12)
condition = (sql_keras_join.Decision_sql != sql_keras_join.Decision_keras)
sql_keras_join[condition]
###Output
_____no_output_____ |
data_codes/.ipynb_checkpoints/Concat dataset-checkpoint.ipynb | ###Markdown
Part 1. Combining several Excel files into one 1. Loading the files
###Code
import numpy as np
import pandas as pd
f1 = pd.read_excel("1-1.xlsx", encoding = 'cp949')
f2 = pd.read_excel("1-2.xlsx", encoding = 'cp949')
f3 = pd.read_excel("2.xlsx", encoding = 'cp949')
f4 = pd.read_excel("3-1.xlsx", encoding = 'cp949')
f5 = pd.read_excel("3-2.xlsx", encoding = 'cp949')
f6 = pd.read_excel("3-3.xlsx", encoding = 'cp949')
f7 = pd.read_excel("3-4.xlsx", encoding = 'cp949')
f8 = pd.read_excel("4.xlsx", encoding = 'cp949')
f9 = pd.read_excel("5.xlsx", encoding = 'cp949')
f10 = pd.read_excel("6.xlsx", encoding = 'cp949')
###Output
_____no_output_____
###Markdown
2. Deleting unnecessary columns
###Code
f3.head()
del f3['대분류']
del f3['소분류']
del f3['상황']
del f3['Set Nr.']
del f3['발화자']
f3.head()
f4.head()
del f4['ID']
del f4['날짜']
del f4['자동분류1']
del f4['자동분류2']
del f4['자동분류3']
del f4['URL']
del f4['언론사']
f4.head()
f5.head()
del f5['ID']
del f5['날짜']
del f5['자동분류1']
del f5['자동분류2']
del f5['자동분류3']
del f5['URL']
del f5['언론사']
f6.head()
del f6['ID']
del f6['날짜']
del f6['자동분류1']
del f6['자동분류2']
del f6['자동분류3']
del f6['URL']
del f6['언론사']
f7.head()
del f7['ID']
del f7['날짜']
del f7['자동분류1']
del f7['자동분류2']
del f7['자동분류3']
del f7['URL']
del f7['언론사']
f7.head()
f8.head()
del f8['ID']
del f8['키워드']
f9.head()
del f9['ID']
del f9['지자체']
f10.head()
del f10['ID']
del f10['지자체']
f10.head()
del f1['SID']
del f2['SID']
###Output
_____no_output_____
###Markdown
3. Concatenating f1 ~ f10 (concat)
###Code
data = pd.concat([f1,f2,f3,f4,f5,f6,f7,f8,f9,f10])
len(data)
data = data.reset_index(drop=True) # each concatenated frame kept its own index (0-200k, 0-200k, ...); reset it and assign a fresh index
data.rename(columns={'원문': 'Korean', '번역문': 'English'}, inplace=True)
data['English'].isnull().sum()
len(data)
data
###Output
_____no_output_____
###Markdown
4. Saving the data as an object with pickle
###Code
#data.to_pickle("concat_file.pkl")
###Output
_____no_output_____ |
tutorial/common/tuto-common02-colors.ipynb | ###Markdown
Shapash with custom colorsWith this tutorial you will understand how to manipulate colors with Shapash plotsContents:- Build a Regressor- Compile Shapash SmartExplainer- Use `palette_name` parameter- Use `colors_dict` parameter- Change the colors after compiling the explainerData from Kaggle [House Prices](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data)
###Code
import pandas as pd
from category_encoders import OrdinalEncoder
from lightgbm import LGBMRegressor
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Building Supervized Model
###Code
from shapash.data.data_loader import data_loading
house_df, house_dict = data_loading('house_prices')
y_df=house_df['SalePrice'].to_frame()
X_df=house_df[house_df.columns.difference(['SalePrice'])]
house_df.head()
from category_encoders import OrdinalEncoder
categorical_features = [col for col in X_df.columns if X_df[col].dtype == 'object']
encoder = OrdinalEncoder(
cols=categorical_features,
handle_unknown='ignore',
return_df=True).fit(X_df)
X_df=encoder.transform(X_df)
Xtrain, Xtest, ytrain, ytest = train_test_split(X_df, y_df, train_size=0.75, random_state=1)
regressor = LGBMRegressor(n_estimators=200).fit(Xtrain,ytrain)
y_pred = pd.DataFrame(regressor.predict(Xtest),columns=['pred'],index=Xtest.index)
###Output
_____no_output_____
###Markdown
Shapash with different colors Option 1 : use `palette_name` parameter
###Code
from shapash.explainer.smart_explainer import SmartExplainer
xpl = SmartExplainer(
features_dict=house_dict,
palette_name='blues' # Other available name : 'default'
)
xpl.compile(
x=Xtest,
model=regressor,
preprocessing=encoder, # Optional: compile step can use inverse_transform method
y_pred=y_pred # Optional
)
xpl.plot.features_importance()
###Output
_____no_output_____
###Markdown
Option 2 : define user-specific colors with `colors_dict` parameterThe colors declared will replace the ones in the palette used.In the example below, we replace the colors used in the features importance bar plot:
###Code
# first, let's print the colors used in the previous explainer:
xpl.colors_dict['featureimp_bar']
# Now we replace these colors using the colors_dict parameter
xpl2 = SmartExplainer(
features_dict=house_dict,
colors_dict=dict(
featureimp_bar={
'1': 'rgba(100, 120, 150, 1)',
'2': 'rgba(120, 103, 50, 0.8)'
},
featureimp_line='rgba(150, 150, 54, 0.8)'
)
)
xpl2.compile(
x=Xtest,
model=regressor,
preprocessing=encoder,
y_pred=y_pred
)
xpl2.plot.features_importance()
###Output
_____no_output_____
###Markdown
Option 3 : redefine colors after compiling shapash
###Code
xpl3 = SmartExplainer(
features_dict=house_dict,
)
xpl3.compile(
x=Xtest,
model=regressor,
preprocessing=encoder,
y_pred=y_pred
)
xpl3.plot.features_importance()
xpl3.plot.contribution_plot('1stFlrSF')
###Output
_____no_output_____
###Markdown
- **We redefine the colors with the `blues` palette and custom colors for the features importance plot**
###Code
xpl3.define_style(
palette_name='blues',
colors_dict=dict(
featureimp_bar={
'1': 'rgba(100, 120, 150, 1)',
'2': 'rgba(120, 103, 50, 0.8)'
}
))
xpl3.plot.features_importance()
xpl3.plot.contribution_plot('1stFlrSF')
###Output
_____no_output_____ |
nbs/dl1/Haider-ubt-lesson1-pets-v11_testing_pretrainedmodels_v2.ipynb | ###Markdown
Lesson 1 - What's your pet
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
###Code
from fastai import *
from fastai.vision import *
gpu_device = 1
defaults.device = torch.device(f'cuda:{gpu_device}')
torch.cuda.set_device(gpu_device)
path = untar_data(URLs.PETS); path
path.ls()
path_anno = path/'annotations'
path_img = path/'images'
###Output
_____no_output_____
###Markdown
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like.The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this, `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
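As a quick illustration (with a made-up filename, not one taken from the dataset), this is how the regular expression used below extracts the breed label from a filename:
```python
import re

# Hypothetical example filename, used only to show how the label is captured.
example_fname = '/images/Abyssinian_12.jpg'
pat = re.compile(r'/([^/]+)_\d+.jpg$')
print(pat.search(example_fname).group(1))  # -> 'Abyssinian'
```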
###Code
fnames = get_image_files(path_img)
fnames[:5]
np.random.seed(2)
pat = re.compile(r'/([^/]+)_\d+.jpg$')
###Output
_____no_output_____
###Markdown
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
###Code
bs = 64
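# bs = 16   # uncomment this line if you run out of GPU memory (see the note above)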
###Output
_____no_output_____
###Markdown
Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)).Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let us use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
###Code
data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(),
size=299, bs=bs//2).normalize(imagenet_stats)
import pretrainedmodels
pretrainedmodels.model_names
# this works
def get_model(pretrained=True, model_name = 'resnet50', **kwargs ):
if pretrained:
arch = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')
else:
arch = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained=None)
return arch
# get_model()
custom_head = create_head(nf=2048*2, nc=37, ps=0.5, bn_final=False)
# Although that original resnet50 last layer in_features=2048 as you can see below, but the modified fastai head should be in_features = 2048 *2 since it has 2 Pooling
# AdaptiveConcatPool2d( 12 (0): AdaptiveConcatPool2d((ap): AdaptiveAvgPool2d(output_size=1) + (mp): AdaptiveMaxPool2d(output_size=1)
children(models.resnet50())[-2:]
custom_head
fastai_resnet50=nn.Sequential(*list(children(get_model(model_name = 'resnet50'))[:-2]),custom_head)
learn = Learner(data, fastai_resnet50, metrics=error_rate) # It seems `Learner' is not using transfer learning. Jeremy: It’s better to use create_cnn, so that fastai will create a version you can use for transfer learning for your problem.
# https://forums.fast.ai/t/lesson-5-advanced-discussion/30865/21
# fastai_resnet50
learn2 = create_cnn(data,models.resnet50, metrics=error_rate)
# learn2
learn.lr_find()
learn.recorder.plot()
# learn.fit_one_cycle(8)
learn.fit_one_cycle(5)
learn2.lr_find()
learn2.recorder.plot()
learn2.fit_one_cycle(5)
learn.save('stage-1-50')
###Output
_____no_output_____
###Markdown
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
###Code
learn.unfreeze()
# learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
learn.fit_one_cycle(1, max_lr=slice(1e-6,1e-4)) # for benchmark: https://forums.fast.ai/t/lesson-1-pets-benchmarks/27681
###Output
_____no_output_____
###Markdown
If it doesn't, you can always go back to your previous model.
###Code
learn.load('stage-1-50');
###Output
_____no_output_____ |
courses/modsim2018/tasks/Desiree/Task13_MotorControl.ipynb | ###Markdown
Task 13 - Motor Control Introduction to modeling and simulation of human movementhttps://github.com/BMClab/bmc/blob/master/courses/ModSim2018.md Desiree Miraldo Renato Watanabe(Caio Lima)* Implement the Knee simulation of the Nigg and Herzog book (chapter 4.8.6, [knee.m](http://isbweb.org/~tgcs/resources/software/bogert/knee.m)) based on code from Lecture 12
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import math
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Muscle properties
###Code
Lslack = .223 # slack length of the SEE
Umax = .04 # SEE strain at Fmax
Lce_o = .093 # optimal length of the CE
width = .63#*Lce_o
Fmax = 7400 #maximal isometric force
a = 1 # initial condition for the activation
u = 1 # initial condition for the neural excitation (brain signal)
#b = .25*10#*Lce_o
###Output
_____no_output_____
###Markdown
Joint Properties
###Code
m = 10 #segment mass
g = 9.81 #acceleration of gravity
Rcm = 0.264 #distance knee joint to centre of mass
I = 0.1832 #moment of inertia
Rf = 0.033 #moment arm of quadriceps
alpha = 0 #pennation angle
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
phi = np.pi/2 #start as 90 degree flexion
phid = 0 #zero velocity
Lm0 = 0.31 #initial total length of the muscle
Lnorm_ce = .087/Lce_o #norm
t0 = 0 #Initial time
tf = 0.15 #Final Time
h = 5e-4 #integration step size and step counter
t = np.arange(t0,tf,h)
F = np.empty(t.shape)
Fkpe = np.empty(t.shape)
FiberLen = np.empty(t.shape)
TendonLen = np.empty(t.shape)
a_dynamics = np.empty(t.shape)
phi_dynamics = np.empty(t.shape)
#defining u
form = 'sinusoid'
def createinput_u(form, plot=True):
if (form == 'sinusoid'):
u = .2*np.sin(np.pi*t) +.7
elif (form == 'step'):
u = np.ones(t.shape)*.01
        u[:int(1/h)] = 1
        u[int(1/h):int(3/h)] = .5
elif (form == 'pulse'):
u = np.ones(t.shape)*.01
        u[int(1/h):int(3/h)] = 1
if plot:
plt.figure()
plt.plot(u)
plt.title('u wave form')
return u
u = createinput_u (form)#,plot=False)
###Output
_____no_output_____
###Markdown
Simulation - Series for i in range (len(t)): ramp if t[i]<=1: Lm = 0.31 elif t[i]>1 and t[i]<2: Lm = .31 + .1*(t[i]-1) print(Lm) shortening at 4cm/s Lsee = Lm - Lce if Lsee<Lslack: F[i] = 0 else: F[i] = Fmax*((Lsee-Lslack)/(Umax*Lslack))**2 isometric force at Lce from CE force length relationship F0 = max([0, Fmax*(1-((Lce-Lce_o)/width)**2)]) calculate CE velocity from Hill's equation if F[i]>F0: print('Error: cannot do eccentric contractions') Lcedot = -b*(F0-F[i])/(F[i]+a) vel is negative for shortening --- Euler integration step Lce += h*Lcedot
###Code
def TendonForce (Lnorm_see,Lslack, Lce_o):
'''
Compute tendon force
Inputs:
Lnorm_see = normalized tendon length
Lslack = slack length of the tendon (non-normalized)
Lce_o = optimal length of the fiber
Output:
Fnorm_tendon = normalized tendon force
'''
Umax = .04
if Lnorm_see<Lslack/Lce_o:
Fnorm_tendon = 0
else:
Fnorm_tendon = ((Lnorm_see-Lslack/Lce_o)/(Umax*Lslack/Lce_o))**2
return Fnorm_tendon
def ParallelElementForce (Lnorm_ce):
'''
Compute parallel element force
Inputs:
Lnorm_ce = normalized contractile element length
Output:
Fnorm_kpe = normalized parallel element force
'''
Umax = 1
if Lnorm_ce< 1:
Fnorm_kpe = 0
else:
Fnorm_kpe = ((Lnorm_ce-1)/(Umax*1))**2
return Fnorm_kpe
def ForceLengthCurve (Lnorm_ce,width):
F0 = max([0, (1-((Lnorm_ce-1)/width)**2)])
return F0
def ContractileElementDot(F0, Fnorm_CE, a):
'''
Compute Contractile Element Derivative
Inputs:
F0 = Force-Length Curve
Fce = Contractile element force
Output:
Lnorm_cedot = normalized contractile element length derivative
'''
FMlen = 1.4 # young adults
Vmax = 10 # young adults
Af = 0.25 #force-velocity shape factor
Fnorm_CE = min(FMlen*a*F0 - 0.001, Fnorm_CE)
if Fnorm_CE > a*F0:
b = ((2 + 2/Af)*(a*F0*FMlen - Fnorm_CE))/(FMlen-1)
elif Fnorm_CE <= a*F0:
b = a*F0 + Fnorm_CE/Af
Lnorm_cedot = (.25 + .75*a)*Vmax*((Fnorm_CE - a*F0)/b)
return Lnorm_cedot
def ContractileElementForce(Fnorm_tendon,Fnorm_kpe, alpha):
'''
Compute Contractile Element force
Inputs:
Fnorm_tendon = normalized tendon force
Fnorm_kpe = normalized parallel element force
Output:
Fnorm_CE = normalized contractile element force
'''
Fnorm_CE = Fnorm_tendon/np.cos(alpha) - Fnorm_kpe
return Fnorm_CE
def tendonLength(Lm,Lce_o,Lnorm_ce, alpha):
'''
Compute tendon length
Inputs:
Lm =
Lce_o = optimal length of the fiber
Lnorm_ce = normalized contractile element length
Output:
Lnorm_see = normalized tendon length
'''
Lnorm_see = Lm/Lce_o - Lnorm_ce*np.cos(alpha)
return Lnorm_see
def activation(a,u,dt):
'''
Compute activation
Inputs:
u = idealized muscle excitation signal, 0 <= u <= 1
a = muscular activation
dt = time step
Output:
a = muscular activation
'''
tau_deact = 50e-3 #young adults
tau_act = 15e-3
if u>a:
tau_a = tau_act*(0.5+1.5*a)
elif u <=a:
tau_a = tau_deact/(0.5+1.5*a)
#-------
dadt = (u-a)/tau_a # euler
a = a + dadt*dt
#-------
return a
def totalLenght(Lm0,phi,Rf,Rcm):
'''
Inputs:
    Lm0 = initial length of the muscle-tendon unit
    phi = joint flexion angle (rad)
    Rf = moment arm of the quadriceps
    Rcm = distance from the knee joint to the centre of mass (not used here)
    Output:
    Lm = total muscle-tendon length
'''
Lm = Lm0 - (phi-(np.pi/2))*Rf #total muscle-tendon length from joint angle
return Lm
def momentJoint(Rf,Fnorm_tendon,Fmax,m,g):
'''
Inputs:
RF = Moment arm
Fnorm_tendon = Normalized tendon force
m = Segment Mass
    g = Acceleration of gravity
Fmax= maximal isometric force
Output:
M = Total moment with respect to joint
'''
M=Rf*Fnorm_tendon*Fmax - m*g*Rcm*np.sin(phi-(np.pi/2))
return M
def angularAcelerationJoint (M,I):
'''
Inputs:
M = Total moment with respect to joint
I = Moment of Inertia
Output:
    phidd = angular acceleration of the joint
'''
phidd = M/I
return phidd
###Output
_____no_output_____
###Markdown
Simulation - Parallel
###Code
# constant, full excitation over the whole simulation (this overrides the waveform defined above)
u = np.ones(t.shape)
for i in range (len(t)):
"""
#ramp
if t[i]<=1:
Lm = 0.31
elif t[i]>1 and t[i]<2:
Lm = .31 - .04*(t[i]-1)
#print(Lm)
"""
Lm = totalLenght(Lm0,phi,Rf,Rcm)
Lnorm_see = tendonLength(Lm,Lce_o,Lnorm_ce, alpha)
Fnorm_tendon = TendonForce (Lnorm_see,Lslack, Lce_o)
Fnorm_kpe = ParallelElementForce (Lnorm_ce)
#isometric force at Lce from CE force length relationship
F0 = ForceLengthCurve (Lnorm_ce,width)
Fnorm_CE = ContractileElementForce(Fnorm_tendon,Fnorm_kpe, alpha) #Fnorm_CE = ~Fm
#computing activation
a = activation(a,u[i],h)
#calculate CE velocity from Hill's equation
Lnorm_cedot = ContractileElementDot(F0, Fnorm_CE,a)
#Compute MomentJoint
M = momentJoint(Rf,Fnorm_tendon,Fmax,m,g)
#Compute Angular Aceleration Joint
phidd = angularAcelerationJoint (M,I)
# Euler integration steps
Lnorm_ce = Lnorm_ce + h*Lnorm_cedot
phid= phid + h*phidd
phi = phi +h*phid
# Store variables in vectors
F[i] = Fnorm_tendon*Fmax
Fkpe[i] = Fnorm_kpe*Fmax
FiberLen[i] = Lnorm_ce*Lce_o
TendonLen[i] = Lnorm_see*Lce_o
a_dynamics[i] = a
phi_dynamics[i] = phi
###Output
_____no_output_____
###Markdown
Plots
###Code
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,phi_dynamics*180/np.pi,c='orange')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Joint angle (deg)')
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,a_dynamics,c='magenta')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Activation dynamics')
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,F,c='red')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Force (N)')
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,FiberLen, label = 'fiber')
ax.plot(t,TendonLen, label = 'tendon')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Length (m)')
ax.legend(loc='best')
fig, ax = plt.subplots(1, 3, figsize=(12,4), sharex=True, sharey=True)
ax[0].plot(t,FiberLen, label = 'fiber')
ax[1].plot(t,TendonLen, label = 'tendon')
ax[2].plot(t,FiberLen + TendonLen, label = 'muscle (tendon + fiber)')
ax[1].set_xlabel('time (s)')
ax[0].set_ylabel('Length (m)')
#plt.legend(loc='best')
###Output
_____no_output_____
###Markdown
Task 13 - Motor Control Introduction to modeling and simulation of human movementhttps://github.com/BMClab/bmc/blob/master/courses/ModSim2018.md Desiree Miraldo Renato Watanabe* Implement the Knee simulation of the Nigg and Herzog book (chapter 4.8.6, [knee.m](http://isbweb.org/~tgcs/resources/software/bogert/knee.m)) based on code from Lecture 12
###Code
import numpy as np
#import pandas as pd
import matplotlib.pyplot as plt
import math
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Muscle properties
###Code
Lslack = .223
Umax = .04
Lce_o = .093 # optimal length of the CE
width = .63#*Lce_o
Fmax = 7400
a = 0
u = 0.5
#b = .25*10#*Lce_o
###Output
_____no_output_____
###Markdown
Parameters for the equation of motion (Nigg & Herzog, p. 562)
###Code
m = 10 #segment mass
g = 9.81 #acceleration of gravity
Rcm = 0.264 #distance knee joint to centre of mass
I = 0.1832 #moment of inertia
Rf = 0.033 #moment arm of quadriceps
alpha = 0 #pennation angle
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
Lnorm_ce = .087/Lce_o #norm
#Lnorm_ce = 0.31 - Lslack
t0 = 0
tf = 3
h = 5e-4
#start as 90 degree flexion and zero velocity
phi = np.pi/2 #rad
phid = 0
t = np.arange(t0,tf,h)
F = np.empty(t.shape)
Fkpe = np.empty(t.shape)
FiberLen = np.empty(t.shape)
TendonLen = np.empty(t.shape)
a_dynamics = np.empty(t.shape)
phi_dynamics = np.empty(t.shape)
#defining u
form = 'sinusoid'
def createinput_u(form, plot=True):
if (form == 'sinusoid'):
u = .2*np.sin(np.pi*t) +.7
elif (form == 'step'):
u = np.ones(t.shape)*.01
        u[:int(1/h)] = 1
        u[int(1/h):int(3/h)] = .5
elif (form == 'pulse'):
u = np.ones(t.shape)*.01
        u[int(1/h):int(3/h)] = 1
if plot:
plt.figure()
plt.plot(u)
plt.title('u wave form')
return u
u = createinput_u (form)#,plot=False)
###Output
_____no_output_____
###Markdown
Simulation - Series for i in range (len(t)): ramp if t[i]<=1: Lm = 0.31 elif t[i]>1 and t[i]<2: Lm = .31 + .1*(t[i]-1) print(Lm) shortening at 4cm/s Lsee = Lm - Lce if Lsee<Lslack: F[i] = 0 else: F[i] = Fmax*((Lsee-Lslack)/(Umax*Lslack))**2 isometric force at Lce from CE force length relationship F0 = max([0, Fmax*(1-((Lce-Lce_o)/width)**2)]) calculate CE velocity from Hill's equation if F[i]>F0: print('Error: cannot do eccentric contractions') Lcedot = -b*(F0-F[i])/(F[i]+a) vel is negative for shortening --- Euler integration step Lce += h*Lcedot
###Code
def TendonForce (Lnorm_see,Lslack, Lce_o):
'''
Compute tendon force
Inputs:
Lnorm_see = normalized tendon length
Lslack = slack length of the tendon (non-normalized)
Lce_o = optimal length of the fiber
Output:
Fnorm_tendon = normalized tendon force
'''
Umax = .04
if Lnorm_see<Lslack/Lce_o:
Fnorm_tendon = 0
else:
Fnorm_tendon = ((Lnorm_see-Lslack/Lce_o)/(Umax*Lslack/Lce_o))**2
return Fnorm_tendon
def ParallelElementForce (Lnorm_ce):
'''
Compute parallel element force
Inputs:
Lnorm_ce = normalized contractile element length
Output:
Fnorm_kpe = normalized parallel element force
'''
Umax = 1
if Lnorm_ce< 1:
Fnorm_kpe = 0
else:
Fnorm_kpe = ((Lnorm_ce-1)/(Umax*1))**2
return Fnorm_kpe
def ForceLengthCurve (Lnorm_ce,width):
F0 = max([0, (1-((Lnorm_ce-1)/width)**2)])
return F0
###Output
_____no_output_____
###Markdown
def ContractileElementDot(F0, Fnorm_CE, a, b): ''' Compute Contractile Element Derivative Inputs: F0 = Force-Length Curve Fce = Contractile element force Output: Lnorm_cedot = normalized contractile element length derivative ''' if Fnorm_CE>F0: print('Error: cannot do eccentric contractions') Lnorm_cedot = -b*(F0-Fnorm_CE)/(Fnorm_CE + a) vel is negative for shortening return Lnorm_cedot
###Code
def ContractileElementDot(F0, Fnorm_CE, a):
'''
Compute Contractile Element Derivative
Inputs:
F0 = Force-Length Curve
Fce = Contractile element force
Output:
Lnorm_cedot = normalized contractile element length derivative
'''
FMlen = 1.4 # young adults
Vmax = 10 # young adults
Af = 0.25 #force-velocity shape factor
Fnorm_CE = min(FMlen*a*F0 - 0.001, Fnorm_CE)
if Fnorm_CE > a*F0:
b = ((2 + 2/Af)*(a*F0*FMlen - Fnorm_CE))/(FMlen-1)
elif Fnorm_CE <= a*F0:
b = a*F0 + Fnorm_CE/Af
Lnorm_cedot = (.25 + .75*a)*Vmax*((Fnorm_CE - a*F0)/b)
return Lnorm_cedot
def ContractileElementForce(Fnorm_tendon,Fnorm_kpe, alpha):
'''
Compute Contractile Element force
Inputs:
Fnorm_tendon = normalized tendon force
Fnorm_kpe = normalized parallel element force
Output:
Fnorm_CE = normalized contractile element force
'''
Fnorm_CE = Fnorm_tendon/np.cos(alpha) - Fnorm_kpe
return Fnorm_CE
def tendonLength(Lm,Lce_o,Lnorm_ce, alpha):
'''
Compute tendon length
Inputs:
Lm =
Lce_o = optimal length of the fiber
Lnorm_ce = normalized contractile element length
Output:
Lnorm_see = normalized tendon length
'''
Lnorm_see = Lm/Lce_o - Lnorm_ce*np.cos(alpha)
return Lnorm_see
def activation(a,u,dt):
'''
Compute activation
Inputs:
u = idealized muscle excitation signal, 0 <= u <= 1
a = muscular activation
dt = time step
Output:
a = muscular activation
'''
tau_deact = 50e-3 #young adults
tau_act = 15e-3
if u>a:
tau_a = tau_act*(0.5+1.5*a)
elif u <=a:
tau_a = tau_deact/(0.5+1.5*a)
#-------
dadt = (u-a)/tau_a # euler
a += dadt*dt
#-------
return a
###Output
_____no_output_____
###Markdown
Simulation - Parallel
###Code
import time
#Normalizing
#alpha = 25*np.pi/180
for i in range (len(t)):
"""
#ramp
if t[i]<=1:
Lm = 0.31
elif t[i]>1 and t[i]<2:
Lm = .31 - .04*(t[i]-1)
#print(Lm)
"""
Lm = 0.31 - (phi-np.pi/2)*Rf #total muscle-tendon length from joint angle
#shortening at 4cm/s
Lnorm_see = tendonLength(Lm,Lce_o,Lnorm_ce, alpha)
Fnorm_tendon = TendonForce (Lnorm_see,Lslack, Lce_o)
Fnorm_kpe = ParallelElementForce (Lnorm_ce)
#isometric force at Lce from CE force length relationship
F0 = ForceLengthCurve (Lnorm_ce,width)
Fnorm_CE = ContractileElementForce(Fnorm_tendon,Fnorm_kpe, alpha) #Fnorm_CE = ~Fm
#computing activation
a = activation(a,u[i],h)
#calculate CE velocity from Hill's equation
Lnorm_cedot = ContractileElementDot(F0, Fnorm_CE,a)
#apply this muscle force, and gravity, to equation of motion
    M = Rf*Fnorm_tendon*Fmax - m*g*Rcm*np.sin(phi-np.pi/2) #total moment with respect to the knee joint (Fnorm_tendon is normalized, hence the factor Fmax)
phidd = M/I #angular acceleration
# --- Euler integration step
Lnorm_ce += h*Lnorm_cedot
phid = phid + phidd*h
phi = phi + phid*h
F[i] = Fnorm_tendon*Fmax
Fkpe[i] = Fnorm_kpe*Fmax
FiberLen[i] = Lnorm_ce*Lce_o
TendonLen[i] = Lnorm_see*Lce_o
a_dynamics[i] = a
phi_dynamics[i] = phi
###Output
_____no_output_____
###Markdown
Plots
###Code
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,phi_dynamics*180/np.pi,c='tab:orange')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Phi dynamics (deg)')
#ax.legend()
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,a_dynamics,c='magenta')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Activation dynamics')
#ax.legend()
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,F,c='red')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Force (N)')
#ax.legend()
fig, ax = plt.subplots(1, 1, figsize=(6,6), sharex=True)
ax.plot(t,FiberLen, label = 'fiber')
ax.plot(t,TendonLen, label = 'tendon')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Length (m)')
ax.legend(loc='best')
fig, ax = plt.subplots(1, 3, figsize=(12,4), sharex=True, sharey=True)
ax[0].plot(t,FiberLen, label = 'fiber')
ax[1].plot(t,TendonLen, label = 'tendon')
ax[2].plot(t,FiberLen + TendonLen, label = 'muscle (tendon + fiber)')
ax[1].set_xlabel('time (s)')
ax[0].set_ylabel('Length (m)')
#plt.legend(loc='best')
###Output
_____no_output_____ |
write-out-mean-predictions.ipynb | ###Markdown
Obtain mean head pose on *trainval* split and use as predictions for test datasetImplicitly documents test submission format
###Code
import os
from dd_pose.dataset import Dataset
from dd_pose.dataset_item import DatasetItem
from dd_pose.image_decorator import ImageDecorator
from dd_pose.jupyter_helpers import showimage
from dd_pose.evaluation_helpers import T_headfrontal_camdriver, T_camdriver_headfrontal
import transformations as tr
import json
import numpy as np
class MeanPredictor:
def __init__(self):
self.mean_T_camdriver_head = None
def get_name(self):
return 'Prior'
def get_dirname(self):
return 'prior'
def get_metadata(self):
return dict()
def initialize_from_dataset(self, dataset):
# get mean translation and rotation across the whole dataset
# we compute roll, pitch, yaw wrt. 'headfrontal' frame in order to avoid gimbal lock averaging later on
xyzs = []
rpys = []
for di_dict in dataset.get_dataset_items():
di = DatasetItem(di_dict)
print(di)
for stamp in di.get_stamps():
T_camdriver_head = di.get_T_camdriver_head(stamp)
T_headfrontal_head = np.dot(T_headfrontal_camdriver, T_camdriver_head)
rpy = tr.euler_from_matrix(T_headfrontal_head)
rpys.append(rpy)
xyzs.append(T_camdriver_head[0:3,3])
# rpy mean in headfrontal frame
mean_rpy = np.mean(np.array(rpys), axis=0)
print(mean_rpy)
# xyz mean in camdriver frame
mean_xyz = np.mean(np.array(xyzs), axis=0)
print(mean_xyz)
# rotational component from mean rpy to camdriver frame
mean_T_headfrontal_head = tr.euler_matrix(*mean_rpy)
self.mean_T_camdriver_head = np.dot(T_camdriver_headfrontal, mean_T_headfrontal_head)
# translational component from mean xyz in camdriver frame
self.mean_T_camdriver_head[0:3,3] = mean_xyz
def get_T_camdriver_head(self, stamp):
return self.mean_T_camdriver_head
# initialize mean predictor with measurements from trainval split
mean_predictor = MeanPredictor()
mean_predictor.initialize_from_dataset(Dataset(split='trainval'))
mean_predictor.get_T_camdriver_head(0)
###Output
_____no_output_____
###Markdown
Draw mean head pose onto sample image
###Code
d = Dataset(split='trainval')
di_dict = d.get(subject_id=1, scenario_id=3, humanhash='sodium-finch-fillet-spring')
di = DatasetItem(di_dict)
print(di)
stamp = di.get_stamps()[50]
stamp
img, pcm = di.get_img_driver_left(stamp, shift=True)
img_bgr = np.dstack((img, img, img))
image_decorator = ImageDecorator(img_bgr, pcm)
image_decorator.draw_axis(mean_predictor.get_T_camdriver_head(stamp), use_gray=False)
image_decorator.draw_axis(di.get_T_camdriver_head(stamp), use_gray=True)
###Output
_____no_output_____
###Markdown
colored axis: mean head posegray axis: measurement (ground truth)
###Code
showimage(img_bgr)
prediction_output_base_dir = os.path.join(os.environ['DD_POSE_DATA_ROOT_DIR'], '10-predictions')
try:
os.makedirs(prediction_output_base_dir)
except:
pass
###Output
_____no_output_____
###Markdown
Write out mean head pose predictions for test dataset
###Code
d = Dataset(split='test')
predictor = mean_predictor
predictor.get_name(), predictor.get_dirname()
predictor_predictions_dir = os.path.join(prediction_output_base_dir, predictor.get_dirname())
assert not os.path.exists(predictor_predictions_dir), "Predictions already written out. Aborting. %s" % predictor_predictions_dir
for di_dict in d.get_dataset_items():
di = DatasetItem(di_dict)
print(di)
predictions_dir = os.path.join(predictor_predictions_dir, 'subject-%02d' % di.get_subject(), 'scenario-%02d' % di.get_scenario(), di.get_humanhash())
try:
os.makedirs(predictions_dir)
except OSError as e:
pass
predictions = dict()
for stamp in di.get_stamps():
predictions[stamp] = predictor.get_T_camdriver_head(stamp).tolist()
# write out predictions
with open(os.path.join(predictions_dir, 't-camdriver-head-predictions.json'), 'w') as fp:
json.dump(predictions, fp, sort_keys=True, indent=4)
metadata = {
'name': predictor.get_name(),
'dirname': predictor.get_dirname(),
'metadata': predictor.get_metadata()
}
with open(os.path.join(predictor_predictions_dir, 'metadata.json'), 'w') as fp:
json.dump(metadata, fp, sort_keys=True, indent=4)
# now tar.gz the predictions in the format expected by the benchmark website
print('pushd %s; tar czf %s subject-* metadata.json; popd' % (predictor_predictions_dir,\
os.path.join(predictor_predictions_dir, 'predictions.tar.gz')))
###Output
_____no_output_____ |
restoration_student.ipynb | ###Markdown
Part 3 : Restoration In this part of the TP, we are going to look at image restoration. We will look at several types of noise and ways to remove this noise. We first define some helper functions. Your task In the lab work, you must fill in the code in the places marked FILL IN CODE, or answer the written questions directly on the notebook.
###Code
from matplotlib import pyplot as plt
import numpy as np
import imageio
from skimage import color
is_colab = True
def read_image(file_name):
img_color = imageio.imread(file_name)
img_gray = color.rgb2gray(img_color)
return img_gray,img_color
def write_image(img_in,file_name_out):
imageio.imwrite(file_name_out, np.uint8(255.0*img_in))
def display_image(img_in):
plt.figure(figsize=(10, 10))
if (img_in.ndim == 2):
plt.imshow(img_in,cmap='gray')
elif (img_in.ndim == 3):
        # careful, in this case we suppose the pixel values are between 0 and 255
plt.imshow(np.uint8(img_in))
else:
print('Error, unknown number of dimensions in image')
return
file_dir = 'images/'
file_name = 'palma'
file_ext = '.png'
if (is_colab == True):
!wget "https://perso.telecom-paristech.fr/anewson/doc/images/palma.png"
img_gray,_ = read_image(file_name+file_ext)
else:
img_gray,_ = read_image(file_dir+file_name+file_ext)
display_image(img_gray)
img_gray.shape
###Output
--2021-01-16 17:58:16-- https://perso.telecom-paristech.fr/anewson/doc/images/palma.png
Resolving perso.telecom-paristech.fr (perso.telecom-paristech.fr)... 137.194.2.165, 2001:660:330f:2::a5
Connecting to perso.telecom-paristech.fr (perso.telecom-paristech.fr)|137.194.2.165|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 770583 (753K) [image/png]
Saving to: ‘palma.png’
palma.png 100%[===================>] 752.52K 1.99MB/s in 0.4s
2021-01-16 17:58:17 (1.99 MB/s) - ‘palma.png’ saved [770583/770583]
###Markdown
We will look at the following noise types :- Gaussian noise- Impulse (salt-and-pepper, 0 or 1) noise- Missing pixelsFill in the following functions to add this noise to a gray-level image. Do not forget to clip the pixel values to the range $(0,1)$ (np.clip).
###Code
def add_gaussian_noise(img_in,sigma_noise = 0.01):
# FILL IN CODE
eta = np.random.normal(0,1, img_in.shape)
img_out = np.clip(img_in + sigma_noise*eta, 0, 1)
return img_out
# we define the impulse probability p as the probability of a pixel not being affected
def add_impulse_noise(img_in,p=0.9):
n, m = img_in.shape
    img_flat = img_in.flatten()  # flatten() returns a copy, so the input image is left untouched
    for i in range(img_flat.shape[0]):
        s = np.random.binomial(1, p)  # s == 0 means the pixel is corrupted
        if s == 0:
            # salt-and-pepper: a corrupted pixel is set to 0 or 1 with equal probability
            img_flat[i] = np.random.binomial(1, 0.5)
    img_out = img_flat.reshape((n, m))
return img_out
def add_missing_pixels_noise(img_in,p=0.9):
# FILL IN CODE
n, m = img_in.shape
img_in = img_in.flatten()
for i in range(img_in.flatten().shape[0]):
s = np.random.binomial(1,p)
if s == 0:
img_in[i] *= s
img_out = img_in.reshape((n,m))
return img_out
###Output
_____no_output_____
###Markdown
Add the different noises to the input image, and display (or write) the results. Use the following parameters :- sigma_noise=0.05 for the gaussian noise- $p=0.9$, the probability of a pixel __not__ being affected, for the impulse noise and missing pixels
###Code
sigma_noise = 0.05
img_gray_gaussian = add_gaussian_noise(img_gray, sigma_noise)
write_image(img_gray_gaussian,file_name+'_gaussian_noise.png')
img_gray_impulse = add_impulse_noise(img_gray, 0.9)
write_image(img_gray_impulse,file_name+'_impulse_noise.png')
img_gray_missing = add_missing_pixels_noise(img_gray, 0.9)
write_image(img_gray_missing,file_name+'_missing_pixels.png')
###Output
_____no_output_____
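Optionally, a minimal sketch (not part of the assignment) to view the three corrupted images side by side, assuming the images created above:
```python
# Quick visual check of the three corrupted images defined above.
fig, axes = plt.subplots(1, 3, figsize=(18, 6))
for ax, (img, title) in zip(axes, [(img_gray_gaussian, 'Gaussian noise'),
                                   (img_gray_impulse, 'Impulse noise'),
                                   (img_gray_missing, 'Missing pixels')]):
    ax.imshow(img, cmap='gray')
    ax.set_title(title)
    ax.axis('off')
plt.show()
```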
###Markdown
__Question__ For each type of noise, propose a restoration filter (see course slides). __Answer__ For Gaussian noise the solution is a Gaussian filter; for impulse noise and randomly missing pixels a median filter works well.
Implement these restoration filters in appropriately named functions, and write the corresponding output images. Try to find the parameters which give the best results (visually).__IMPORTANT NOTE__, you can use the filtering functions of the ``scipy.ndimage`` package, where the filtering is already implemented (do not re-implement the filters)
###Code
from scipy import ndimage
def gaussian(img_in,sigma):
    img_out = ndimage.gaussian_filter(img_in, sigma)
plt.imshow(img_out)
return img_out
def median(img_in,s):
img_out = ndimage.median_filter(img_in, size= s)
plt.imshow(img_out)
return img_out
sigma = 0.8
s = 4
img_gray_gaussian_restored = gaussian(img_gray_gaussian, sigma)
write_image(img_gray_gaussian_restored,file_name+'_gaussian_noise_restored.png')
img_gray_impulse_restored = median(img_gray_impulse, s)
write_image(img_gray_impulse_restored,file_name+'_impulse_noise_restored.png')
img_gray_missing_restored = median(img_gray_missing, s=5)
write_image(img_gray_missing_restored,file_name+'_missing_pixels_restored.png')
# FILL IN CODE : CREATE THE FUNCTIONS TO CARRY OUT THE RESTORATION FILTERS AND WRITE THE RESULTS
###Output
_____no_output_____
###Markdown
__Question__ Roughly speaking, what is the tradeoff which you are trying to achieve by tuning the parameters ? __Answer__ A tradeoff between removing the pixels affected by the noise and preserving the visibility of the image details. Evaluation A commonly used metric for denoising is the ''Peak Signal-to-Noise Ratio'' (PSNR). This is linked to the commonly known mean squared error. The mean squared error is defined, for a reference image $Y$ and a restored image $I$, of size $m \times n$ as :- MSE$(Y,I) = \frac{1}{mn} \sum_{x,y} \left( I_{x,y} - Y_{x,y}\right)^2 $The PSNR is defined, in decibels, as :PSNR$(Y,I) = 10 \log_{10}{\left( \frac{I_{max}^2}{MSE(Y,I)} \right)}$,where $I_{max}$ is the maximum value of the image. For us (normalised to 1), this gives :PSNR$(Y,I) = -10 \log_{10}{ \left({MSE(Y,I)} \right)}$.Implement this in a function, and write code to plot the PSNR for several values of the parameter, __in the Gaussian case only (first filter)__.
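For example, with images normalised to 1, an MSE of $10^{-3}$ corresponds to a PSNR of $-10\log_{10}(10^{-3}) = 30$ dB, and halving the MSE gains roughly 3 dB.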
###Code
def mse(Y,I):
return ((Y - I)**2).mean(axis=None)
def PSNR(img,img_ref):
    # PSNR in decibels: note the base-10 logarithm
    psnr = -10 * np.log10(mse(img, img_ref))
    return psnr
# the reference image is the clean, noise-free image img_gray
print(PSNR(gaussian(img_gray_gaussian, sigma), img_gray),
      PSNR(gaussian(img_gray_gaussian, 1), img_gray),
      PSNR(gaussian(img_gray_gaussian, 0.9), img_gray),
      PSNR(gaussian(img_gray_gaussian, 2), img_gray),
      PSNR(gaussian(img_gray_gaussian, 50), img_gray))
# FILL IN CODE : TEST THE PSNR FOR SEVERAL VALUES OF SIGMA
###Output
60.84428980194963 58.13633409710011 59.28892120252594 52.87468680729035 35.999781010232006
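A minimal sketch of the requested plot, assuming the images and functions defined above; the range of sigma values swept here is arbitrary:
```python
# Sweep the Gaussian filter width sigma and plot the PSNR of the restored image,
# computed against the clean reference image img_gray.
sigmas = np.linspace(0.2, 3.0, 15)
psnrs = [PSNR(ndimage.gaussian_filter(img_gray_gaussian, s), img_gray) for s in sigmas]
plt.figure()
plt.plot(sigmas, psnrs, 'o-')
plt.xlabel('sigma')
plt.ylabel('PSNR (dB)')
plt.title('PSNR of the restored image vs. Gaussian filter width')
plt.show()
```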
###Markdown
FURTHER RESTORATION TECHNIQUES (THIS IS NOT OBLIGATORY) Deconvolution In this part of the TP, we are going to try and invert a convolution operation. This is called __deconvolution__, and can be carried out in the Fourier domain, as follows. For an image $I$, filter $f$ and an output $Y$, if we have :$Y = I \ast f$,then using the convolution theorem (see lesson), we have :$I = \text{IFFT}\left(\frac{\hat{Y}}{\hat{f}}\right)$where $\hat{Y}$ and $\hat{f}$ are the Fourier transforms of $Y$ and $f$ respectively.To simplify the problem, we take a square image for this part of the TP.
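As a quick, self-contained sanity check of this identity (on a random 1-D signal, independent of the image data used below), multiplying the FFTs is equivalent to circular convolution:
```python
# Sanity check of the convolution theorem on a random 1-D signal (circular convolution).
N = 64
sig = np.random.random(N)
ker = np.random.random(N)
conv_fourier = np.real(np.fft.ifft(np.fft.fft(sig) * np.fft.fft(ker)))
conv_direct = np.array([sum(sig[m] * ker[(n - m) % N] for m in range(N)) for n in range(N)])
print(np.allclose(conv_fourier, conv_direct))  # should print True
```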
###Code
file_dir = 'images/'
file_name = 'boston'
file_ext = '.png'
if (is_colab == True):
!wget "https://perso.telecom-paristech.fr/anewson/doc/images/boston.png"
img_gray,_ = read_image(file_name+file_ext)
else:
img_gray,_ = read_image(file_dir+file_name+file_ext)
img_gray.shape
display_image(img_gray)
img_gray.shape
###Output
_____no_output_____
###Markdown
Now, let us consider the following filter, defined in the Fourier domain :
###Code
from scipy import signal
img_size = img_gray.shape[0]
h_size = int(np.floor(img_size/2.0))
f_hat = np.zeros((img_size,img_size))
X = np.asarray(range(-h_size,h_size))
f_hat = np.tile( np.expand_dims( np.exp(-( X**2) / (2.0*(20.0**2))) , axis=1), (1,img_size)).T
f_hat /= f_hat.sum()
f_hat = np.fft.ifftshift(f_hat)
plt.imshow( np.log( np.abs(f_hat)+1), cmap = 'gray')
print(f_hat.shape)
###Output
_____no_output_____
###Markdown
Using the convolution theorem and the inverse Fourier transform, carry out the convolution of the input image with $f$ (in the Fourier domain, so using $\hat{f}$) and write the result ``img_convolved`` to an output file__Question__ What does this filter do (you can use the visualisation method from the first part of the TP to see what the filter spectrum looks like) ? How can this happen in real life ?
###Code
# convolution via the convolution theorem: multiply the spectra, then take the inverse FFT
img_convolved = np.real(np.fft.ifft2(np.fft.fft2(img_gray) * f_hat))
img_convolved -= img_convolved.min()
img_convolved /= img_convolved.max()
write_image(img_convolved,file_name+'_convolution.png')
img_convolved.max()
###Output
_____no_output_____
###Markdown
__Answer__ The spectrum $\hat{f}$ is a Gaussian that decays along one frequency axis only, so it is a low-pass filter acting in a single direction: it blurs the image (here along the horizontal axis). In real life this kind of degradation occurs for instance with motion blur or a defocused lens. Now, create a function which carries out a deconvolution in the Fourier domain, given an image and the Fourier transform of the filter $\hat{f}$. You can reuse the code in the first part of the TP. Carry out the deconvolution on ``img_convolved`` and write the result to an output file. Remember to renormalise the output image.__Important note__ : You will have a problem with very small values of $\hat{f}$ (division by 0). Propose a simple method to address this issue.
###Code
def deconvolve_fourier(img,f_hat):
    # Simple way to avoid dividing by (almost) zero: clamp the very small values of
    # f_hat to a fraction of its maximum before dividing (a crude regularisation).
    eps = 1e-3 * np.abs(f_hat).max()
    f_hat_safe = np.where(np.abs(f_hat) < eps, eps, f_hat)
    img_out = np.fft.ifft2(np.fft.fft2(img) / f_hat_safe)
    return img_out
img_out = np.real(deconvolve_fourier(img_convolved,f_hat))
img_out -= img_out.min()
img_out /= img_out.max()
write_image(img_out,file_name+'_deconvolved.png')
###Output
_____no_output_____ |
notebooks/ROI/01_Offshore/07_TCs_RBFs.ipynb | ###Markdown
... ***CURRENTLY UNDER DEVELOPMENT*** ... Before running this notebook, you must already have the numerically simulated waves associated with the representative cases of the synthetic simulated TCs (obtained with the MaxDiss algorithm in notebook 06). Inputs required: * Synthetic simulation of historical TCs parameters (copulas obtained in *notebook 06*) * MaxDiss selection of synthetic simulated TCs (parameters obtained in *notebook 06*) * Simulated waves for the above selected TCs (**outside TeslaKit**) In this notebook: * RBF interpolation of wave conditions based on TCs parameters (from the SWAN-simulated TCs waves)
###Code
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# common
import os
import os.path as op
# pip
import xarray as xr
import numpy as np
# DEV: override installed teslakit
import sys
sys.path.insert(0, op.join(os.path.abspath(''), '..', '..', '..'))
# teslakit
from teslakit.database import Database
from teslakit.rbf import RBF_Reconstruction, RBF_Validation
###Output
_____no_output_____
###Markdown
Database and Site parameters
###Code
# --------------------------------------
# Teslakit database
p_data = r'/Users/nico/Projects/TESLA-kit/TeslaKit/data'
db = Database(p_data)
# set site
db.SetSite('ROI')
# --------------------------------------
# load data and set parameters
# r2 TCs Copula Simulated (dataset)
TCs_r2_sim_params = db.Load_TCs_r2_sim_params()
# r2 TCs MDA selection and solved simulations (not solved inside teslakit)
TCs_r2_MDA_params = db.Load_TCs_r2_mda_params()
TCs_sims = db.Load_TCs_r2_mda_Simulations()
###Output
_____no_output_____
###Markdown
Simulated TCs - Radial Basis Function
###Code
# --------------------------------------
# prepare dataset and subset
# RBFs training subset (TCs numerically solved)
subset = np.column_stack(
(TCs_r2_MDA_params['pressure_min'], TCs_r2_MDA_params['velocity_mean'],
TCs_r2_MDA_params['gamma'], TCs_r2_MDA_params['delta'])
)
# RBFs dataset to interpolate
dataset = np.column_stack(
(TCs_r2_sim_params['pressure_min'], TCs_r2_sim_params['velocity_mean'],
TCs_r2_sim_params['gamma'], TCs_r2_sim_params['delta'])
)
# --------------------------------------
# Extract waves data from TCs simulations (this is the RBFs training target)
print(TCs_sims)
print()
# Normalize data
d_maxis = {}
d_minis = {}
tcp = TCs_sims.copy()
for k in ['hs', 'tp', 'ss', 'twl']:
v = tcp[k].values[:]
mx = np.max(v)
mn = np.min(v)
tcp[k] =(('storm',), (v-mn)/(mx-mn))
# store maxs and mins for denormalization
d_maxis[k] = mx
d_minis[k] = mn
tcp['dir'] = tcp['dir'] * np.pi/180
print(tcp)
print()
# Build RBF target numpy array
target = np.column_stack(
(tcp['hs'], tcp['tp'], tcp['ss'], tcp['twl'], tcp['dir'], tcp['mu'])
)
# --------------------------------------
# RBF Interpolation
# subset - scalar / directional indexes
ix_scalar_subset = [0,1] # scalar (pmean, vmean)
ix_directional_subset = [2,3] # directional (delta, gamma)
# target - scalar / directional indexes
ix_scalar_target = [0,1,2,3,5] # scalar (Hs, Tp, SS, TWL, MU)
ix_directional_target = [4] # directional (Dir)
output = RBF_Reconstruction(
subset, ix_scalar_subset, ix_directional_subset,
target, ix_scalar_target, ix_directional_target,
dataset)
# --------------------------------------
# Reconstructed TCs
# denormalize RBF output
TCs_RBF_out = xr.Dataset(
{
'hs':(('storm',), output[:,0] * (d_maxis['hs']-d_minis['hs']) + d_minis['hs'] ),
'tp':(('storm',), output[:,1] * (d_maxis['tp']-d_minis['tp']) + d_minis['tp'] ),
'ss':(('storm',), output[:,2] * (d_maxis['ss']-d_minis['ss']) + d_minis['ss'] ),
'twl':(('storm',), output[:,3] * (d_maxis['twl']-d_minis['twl']) + d_minis['twl'] ),
'dir':(('storm',), output[:,4] * 180 / np.pi),
'mu':(('storm',), output[:,5]),
},
coords = {'storm': np.arange(output.shape[0])}
)
print(TCs_RBF_out)
# store data as xarray.Dataset
db.Save_TCs_sim_r2_rbf_output(TCs_RBF_out)
# --------------------------------------
# RBF Validation
# subset - scalar / directional indexes
ix_scalar_subset = [0,1] # scalar (pmean, vmean)
ix_directional_subset = [2,3] # directional (delta, gamma)
# target - scalar / directional indexes
ix_scalar_target = [0,1,2,3,5] # scalar (Hs, Tp, SS, TWL, MU)
ix_directional_target = [4] # directional (Dir)
output = RBF_Validation(
subset, ix_scalar_subset, ix_directional_subset,
target, ix_scalar_target, ix_directional_target)
###Output
RBFs Kfold Validation: 1/3
ix_scalar: 0, optimization: 56.86 | interpolation: 0.04
ix_scalar: 1, optimization: 51.49 | interpolation: 0.04
ix_scalar: 2, optimization: 63.19 | interpolation: 0.04
ix_scalar: 3, optimization: 51.19 | interpolation: 0.04
ix_scalar: 5, optimization: 63.90 | interpolation: 0.04
ix_directional: 4, optimization: 136.66 | interpolation: 0.09
mean squared error : 126.98985025207504
RBFs Kfold Validation: 2/3
ix_scalar: 0, optimization: 51.08 | interpolation: 0.04
ix_scalar: 1, optimization: 51.14 | interpolation: 0.04
ix_scalar: 2, optimization: 58.10 | interpolation: 0.04
ix_scalar: 3, optimization: 58.08 | interpolation: 0.04
ix_scalar: 5, optimization: 69.60 | interpolation: 0.04
ix_directional: 4, optimization: 128.08 | interpolation: 0.09
mean squared error : 62.62639397064555
RBFs Kfold Validation: 3/3
ix_scalar: 0, optimization: 74.74 | interpolation: 0.05
ix_scalar: 1, optimization: 81.92 | interpolation: 0.05
ix_scalar: 2, optimization: 77.92 | interpolation: 0.06
ix_scalar: 3, optimization: 59.18 | interpolation: 0.05
ix_scalar: 5, optimization: 75.20 | interpolation: 0.05
ix_directional: 4, optimization: 140.94 | interpolation: 0.09
mean squared error : 129.37619995064833
|
notebooks/dev/n01_first_look_at_the_data.ipynb | ###Markdown
After creating a script to download the data, and running it, I will look at the data and test some of the functions that I implemented for its analysis (most of them were implemented to solve the Machine Learning for Trading assignments).
###Code
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
data_df = pd.read_pickle('../../data/data_df.pkl')
print(data_df.shape)
data_df.head(25)
###Output
(30120, 503)
###Markdown
What if I only want the 'Close' value, maybe in a range, for only some symbols?
###Code
data_df.xs('Close', level='feature')
data_df.loc[dt.datetime(1993,2,4):dt.datetime(1993,2,7)]
symbols = ['SPY', 'AMD', 'IBM']
data_df.xs('Close', level='feature').loc[dt.datetime(1993,2,4):dt.datetime(1993,2,7),symbols]
###Output
_____no_output_____
###Markdown
Let's test the function to fill the missing data
###Code
from utils import preprocessing
select = ['SPY', 'GOOG', 'GM']
selected_df = data_df.xs('Close', level='feature').loc[:,select]
selected_df.plot()
selected_df = preprocessing.fill_missing(selected_df)
selected_df.plot()
###Output
_____no_output_____
###Markdown
A useful function to show the evolution of a portfolio value
###Code
from utils import analysis
analysis.assess_portfolio(start_date = dt.datetime(2006,1,22),
end_date = dt.datetime(2016,12,31),
symbols = ['GOOG','AAPL','AMD','XOM'],
allocations = [0.1,0.2,0.3,0.4],
initial_capital = 1000,
risk_free_rate = 0.0,
sampling_frequency = 252.0,
data = data_df,
gen_plot=True,
verbose=True)
from utils import marketsim
###Output
_____no_output_____
###Markdown
Limit leverage to 2.0
###Code
value_df, constituents_df = marketsim.simulate_orders('../../data/orders.csv',
data_df,
initial_cap=100000,
leverage_limit=2.0,
from_csv=True)
analysis.value_eval(value_df, verbose=True, graph=True, data_df=data_df)
constituents_df.plot()
###Output
_____no_output_____
###Markdown
No leverage limit
###Code
value_df, constituents_df = marketsim.simulate_orders('../../data/orders.csv',
data_df,
initial_cap=100000,
leverage_limit=None,
from_csv=True)
analysis.value_eval(value_df, verbose=True, graph=True, data_df=data_df)
constituents_df.plot()
analysis.assess_portfolio(start_date = dt.datetime(1993,1,22),
end_date = dt.datetime(2016,12,31),
symbols = ['SPY'],
allocations = [1.0],
initial_capital = 1000,
risk_free_rate = 0.0,
sampling_frequency = 252.0,
data = data_df,
gen_plot=True,
verbose=True)
###Output
_____no_output_____ |
ch6/6.2 Understanding recurrent neural networks.ipynb | ###Markdown
Deep Learning with Python 6.2 Understanding recurrent neural networks > Understanding recurrent neural networks. The fully connected networks and the convolutional networks we have used so far are all so-called feedforward networks. Such networks have no memory: each input is processed independently, and no state is kept between inputs. To handle a sequence (time series, text, ...) with such a network, the whole sequence has to be packed into one big tensor and fed to the network in a single pass, so that the model sees the entire sequence at once. This is clearly different from the way we humans read and learn from text: we do not take in a whole book at a glance, we read it word by word, and while our eyes keep moving to pick up new data we remember what came before and connect the new content with the old to understand the meaning of the sentence. More abstractly, we keep an internal model of what we are processing, built from past information and continuously updated as new information arrives; we all process information in this incremental way. Following this idea we obtain a new kind of model, the **recurrent neural network** (RNN): a network that iterates over the elements of a sequence while keeping a state that records information about everything seen so far. When the next sequence is processed, the RNN state is reset. With an RNN we can still feed a whole sequence to the network at once, but internally the data is no longer processed in one shot: the network automatically iterates over the sequence elements. To understand recurrent networks, let's hand-write a toy version of the RNN forward pass in Numpy. Consider one sequence of shape `(timesteps, input_features)`: the RNN iterates over the timesteps, combines the input_features of the current timestep with the state obtained at the previous step to compute the output of this step, and then stores this output as the new state for the next step. At the first step there is no previous state, so the state is initialized to an all-zero vector, called the initial state of the network. In pseudocode:
```python
state_t = 0
for input_t in input_sequence:
    output_t = f(input_t, state_t)
    state_t = output_t
```
Here `f(...)` is quite similar to our Dense layer, except that it does not act on the input alone: it must also take the influence of the state into account. It therefore needs 3 parameters: the matrices W and U, acting on the input and on the state respectively, and a bias vector b:
```python
def f(input_t, state_t):
    return activation( dot(W, input_t) + dot(U, state_t) + b )
```
Here is a diagram representing this program. Now let's write it as real code:
###Code
import numpy as np
# define the sizes of the different dimensions
timesteps = 100
input_features = 32
output_features = 64
inputs = np.random.random((timesteps, input_features))
state_t = np.zeros((output_features))
W = np.random.random((output_features, input_features))
U = np.random.random((output_features, output_features))
b = np.random.random((output_features))
successive_outputs = []
for input_t in inputs: # input_t: (input_features, )
output_t = np.tanh( # output_t: (output_features, )
np.dot(W, input_t) + np.dot(U, state_t) + b
)
successive_outputs.append(output_t)
state_t = output_t
final_output_sequence = np.stack(successive_outputs, axis=0) # (timesteps, output_features)
print(successive_outputs[-1].shape)
print(final_output_sequence.shape)
###Output
(64,)
(100, 64)
###Markdown
Here the final output has shape (timesteps, output_features): it is the stack of the results of all the timesteps. In practice, however, we generally only need the last result, `successive_outputs[-1]`, since it already incorporates the results of all the previous steps, i.e. the information of the whole sequence. Recurrent layers in Keras: the recurrent layers in Keras refine this toy version so that it can take inputs of shape `(batch_size, timesteps, input_features)` and process them in batches; this gives the `SimpleRNN` layer in Keras:
```python
from tensorflow.keras.layers import SimpleRNN
```
This SimpleRNN layer, like all the other recurrent layers in Keras, has two possible output modes:

| Output shape | Description | Usage |
| --- | --- | --- |
| `(batch_size, timesteps, output_features)` | returns the full sequence of outputs for every timestep | return_sequences=True |
| `(batch_size, output_features)` | returns only the final output for each sequence | return_sequences=False (default) |
###Code
# Return only the output of the last timestep
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32))
model.summary()
# Return the full sequence of states
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.summary()
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_2 (Embedding) (None, None, 32) 320000
_________________________________________________________________
simple_rnn_2 (SimpleRNN) (None, None, 32) 2080
=================================================================
Total params: 322,080
Trainable params: 322,080
Non-trainable params: 0
_________________________________________________________________
###Markdown
When stacking several RNN layers, the intermediate layers must return their full sequence of states:
###Code
# Stack several RNN layers; the intermediate layers return the full sequence of states
model = Sequential()
model.add(Embedding(10000, 32))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32)) # the last layer only needs the final output
model.summary()
###Output
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_3 (Embedding) (None, None, 32) 320000
_________________________________________________________________
simple_rnn_3 (SimpleRNN) (None, None, 32) 2080
_________________________________________________________________
simple_rnn_4 (SimpleRNN) (None, None, 32) 2080
_________________________________________________________________
simple_rnn_5 (SimpleRNN) (None, None, 32) 2080
_________________________________________________________________
simple_rnn_6 (SimpleRNN) (None, 32) 2080
=================================================================
Total params: 328,320
Trainable params: 328,320
Non-trainable params: 0
_________________________________________________________________
###Markdown
Next, let's try tackling the IMDB problem again with an RNN. First, prepare the data:
###Code
# Prepare the IMDB data
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing import sequence
max_features = 10000
maxlen = 500
batch_size = 32
print('Loading data...')
(input_train, y_train), (input_test, y_test) = imdb.load_data(num_words=max_features)
print(len(input_train), 'train sequences')
print(len(input_test), 'test sequences')
print('Pad sequences (samples x time)')
input_train = sequence.pad_sequences(input_train, maxlen=maxlen)
input_test = sequence.pad_sequences(input_test, maxlen=maxlen)
print('input_train shape:', input_train.shape)
print('input_train shape:', input_test.shape)
###Output
Loading data...
25000 train sequences
25000 test sequences
Pad sequences (samples x time)
input_train shape: (25000, 500)
input_train shape: (25000, 500)
###Markdown
Build and train the network:
###Code
# Train a model with an Embedding layer and a SimpleRNN layer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(SimpleRNN(32))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(input_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
###Output
Model: "sequential_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_4 (Embedding) (None, None, 32) 320000
_________________________________________________________________
simple_rnn_7 (SimpleRNN) (None, 32) 2080
_________________________________________________________________
dense (Dense) (None, 1) 33
=================================================================
Total params: 322,113
Trainable params: 322,113
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
157/157 [==============================] - 17s 107ms/step - loss: 0.6445 - acc: 0.6106 - val_loss: 0.6140 - val_acc: 0.6676
Epoch 2/10
157/157 [==============================] - 20s 129ms/step - loss: 0.4139 - acc: 0.8219 - val_loss: 0.4147 - val_acc: 0.8194
Epoch 3/10
157/157 [==============================] - 20s 124ms/step - loss: 0.3041 - acc: 0.8779 - val_loss: 0.4529 - val_acc: 0.8012
Epoch 4/10
157/157 [==============================] - 18s 115ms/step - loss: 0.2225 - acc: 0.9151 - val_loss: 0.3957 - val_acc: 0.8572
Epoch 5/10
157/157 [==============================] - 18s 115ms/step - loss: 0.1655 - acc: 0.9391 - val_loss: 0.4416 - val_acc: 0.8246
Epoch 6/10
157/157 [==============================] - 17s 111ms/step - loss: 0.1167 - acc: 0.9601 - val_loss: 0.4614 - val_acc: 0.8606
Epoch 7/10
157/157 [==============================] - 17s 109ms/step - loss: 0.0680 - acc: 0.9790 - val_loss: 0.4754 - val_acc: 0.8408
Epoch 8/10
157/157 [==============================] - 15s 95ms/step - loss: 0.0419 - acc: 0.9875 - val_loss: 0.5337 - val_acc: 0.8352
Epoch 9/10
157/157 [==============================] - 16s 99ms/step - loss: 0.0246 - acc: 0.9935 - val_loss: 0.5796 - val_acc: 0.8468
Epoch 10/10
157/157 [==============================] - 15s 96ms/step - loss: 0.0174 - acc: 0.9952 - val_loss: 0.7274 - val_acc: 0.7968
###Markdown
Plot the training curves:
###Code
# Plot the results
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo-', label='Training acc')
plt.plot(epochs, val_acc, 'rs-', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo-', label='Training loss')
plt.plot(epochs, val_loss, 'rs-', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Hmm, honestly this model's results are not even as good as the chapter 3 model that just stacked a few fully connected layers. There are several reasons: one is that here we only consider the first 500 words of each sequence, another is that SimpleRNN is simply not good at handling long sequences. Next we will look at some recurrent layers that perform better.

LSTM and GRU layers

Besides SimpleRNN, the recurrent layers in Keras also include the less "simple" LSTM and GRU layers, and these two are used far more often in practice. SimpleRNN has a problem: in theory, by the time it reaches timestep t it should retain information seen many steps earlier, but in practice it cannot learn such long-term dependencies, because of a phenomenon called the vanishing gradient problem. The vanishing gradient problem also occurs in feedforward networks with many layers; its main symptom is that once the network gets deep enough it becomes untrainable. LSTM and GRU layers were designed to fight this problem.

The **LSTM** layer is based on the LSTM (long short-term memory) algorithm, which was developed specifically to deal with vanishing gradients. Its core idea is to save information for later use, preventing what was learned earlier from gradually fading away during later processing. On top of SimpleRNN, LSTM adds a way to carry information across many timesteps. This new mechanism works like a conveyor belt running alongside the sequence: information from the sequence can hop onto the belt at any point, be transported to a later timestep, and jump back off, intact, when it is needed.

Here the SimpleRNN weights W and U are renamed Wo and Uo (o for output), and a "carry track" data flow is added; this carry track is what carries information across timesteps. At timestep t the carry track holds the information c_t (c for carry), which is combined with the input and the state and thus influences the state passed to the next timestep:

```python
output_t = activation(dot(state_t, Uo) + dot(input_t, Wo) + dot(C_t, Vo) + bo)
i_t = activation(dot(state_t, Ui) + dot(input_t, Wi) + bi)
f_t = activation(dot(state_t, Uf) + dot(input_t, Wf) + bf)
k_t = activation(dot(state_t, Uk) + dot(input_t, Wk) + bk)
c_t_next = i_t * k_t + c_t * f_t
```

We won't go into more details of the LSTM internals. You really don't need to understand the specific architecture of an LSTM cell; that's not a job for a human. Just remember what an LSTM cell does: it allows past information to be re-injected at a later time, thereby fighting the vanishing gradient problem. (P.S. The author basically says to take this on faith. 🤪 Well, that's my loose paraphrase; the original line is: "it may seem a bit arbitrary, but bear with me.")

**GRU** (Gated Recurrent Unit) gets less coverage in the book; see the post [人人都能看懂的GRU](https://zhuanlan.zhihu.com/p/32481747). Roughly speaking, GRU is a variant of LSTM: the two share much the same principle and give similar results in practice. GRU is newer than LSTM and makes some simplifications, so it is cheaper to compute, but its representational power may be slightly weaker.

Using LSTM in Keras

Let's keep using the IMDB data we prepared earlier and run an LSTM:
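Before the Keras version, here is a rough NumPy sketch of the simplified carry-track pseudocode above, in the same spirit as the toy SimpleRNN earlier. It is purely illustrative: the shapes and random weights are assumptions, tanh stands in for the generic `activation`, and the real Keras LSTM uses sigmoid gates and differs in several details.

```python
import numpy as np

timesteps, input_features, output_features = 100, 32, 64

def rand(*shape):
    return np.random.random(shape)

inputs = np.random.random((timesteps, input_features))
state_t = np.zeros((output_features,))
c_t = np.zeros((output_features,))  # the "carry" track

# Weight matrices for the output and for the i / f / k gates (Vo acts on the carry)
Wo, Uo, Vo, bo = rand(output_features, input_features), rand(output_features, output_features), rand(output_features, output_features), rand(output_features)
Wi, Ui, bi = rand(output_features, input_features), rand(output_features, output_features), rand(output_features)
Wf, Uf, bf = rand(output_features, input_features), rand(output_features, output_features), rand(output_features)
Wk, Uk, bk = rand(output_features, input_features), rand(output_features, output_features), rand(output_features)

for input_t in inputs:
    output_t = np.tanh(np.dot(Wo, input_t) + np.dot(Uo, state_t) + np.dot(Vo, c_t) + bo)
    i_t = np.tanh(np.dot(Wi, input_t) + np.dot(Ui, state_t) + bi)
    f_t = np.tanh(np.dot(Wf, input_t) + np.dot(Uf, state_t) + bf)
    k_t = np.tanh(np.dot(Wk, input_t) + np.dot(Uk, state_t) + bk)
    c_t = i_t * k_t + c_t * f_t  # update the carry track
    state_t = output_t           # the output becomes the next state

print(state_t.shape)  # (64,)
```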
###Code
from tensorflow.keras.layers import LSTM
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(LSTM(32))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(input_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
# Plot the results
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo-', label='Training acc')
plt.plot(epochs, val_acc, 'rs-', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo-', label='Training loss')
plt.plot(epochs, val_loss, 'rs-', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Much better than SimpleRNN. But it's still not much better than the earlier fully connected network, and it's slower (computationally more expensive). The main reason is that for a problem like sentiment analysis, using an LSTM to analyse global long-term structure doesn't help much; LSTM shines on harder natural-language problems such as machine translation. The fully connected approach essentially just looks at which words appear and how often, which works reasonably well for this simple problem.

Next, let's also try GRU, which the book doesn't cover:
###Code
# Swap the LSTM for a GRU
from tensorflow.keras.layers import GRU
model = Sequential()
model.add(Embedding(max_features, 32))
model.add(GRU(32))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(input_train, y_train,
epochs=10,
batch_size=128,
validation_split=0.2)
# Plot the results
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo-', label='Training acc')
plt.plot(epochs, val_acc, 'rs-', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo-', label='Training loss')
plt.plot(epochs, val_loss, 'rs-', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
Model: "sequential_6"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_6 (Embedding) (None, None, 32) 320000
_________________________________________________________________
gru (GRU) (None, 32) 6336
_________________________________________________________________
dense_2 (Dense) (None, 1) 33
=================================================================
Total params: 326,369
Trainable params: 326,369
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
157/157 [==============================] - 37s 238ms/step - loss: 0.5119 - acc: 0.7386 - val_loss: 0.3713 - val_acc: 0.8434
Epoch 2/10
157/157 [==============================] - 36s 232ms/step - loss: 0.2971 - acc: 0.8806 - val_loss: 0.3324 - val_acc: 0.8722
Epoch 3/10
157/157 [==============================] - 37s 235ms/step - loss: 0.2495 - acc: 0.9034 - val_loss: 0.3148 - val_acc: 0.8722
Epoch 4/10
157/157 [==============================] - 34s 217ms/step - loss: 0.2114 - acc: 0.9200 - val_loss: 0.3596 - val_acc: 0.8738
Epoch 5/10
157/157 [==============================] - 36s 231ms/step - loss: 0.1872 - acc: 0.9306 - val_loss: 0.5291 - val_acc: 0.8084
Epoch 6/10
157/157 [==============================] - 35s 226ms/step - loss: 0.1730 - acc: 0.9359 - val_loss: 0.3976 - val_acc: 0.8802
Epoch 7/10
157/157 [==============================] - 34s 217ms/step - loss: 0.1523 - acc: 0.9452 - val_loss: 0.4303 - val_acc: 0.8532
Epoch 8/10
157/157 [==============================] - 34s 217ms/step - loss: 0.1429 - acc: 0.9486 - val_loss: 0.4019 - val_acc: 0.8542
Epoch 9/10
157/157 [==============================] - 34s 217ms/step - loss: 0.1258 - acc: 0.9562 - val_loss: 0.3476 - val_acc: 0.8746
Epoch 10/10
157/157 [==============================] - 34s 216ms/step - loss: 0.1191 - acc: 0.9585 - val_loss: 0.3558 - val_acc: 0.8812
|
06_running_an_image_data_pod.ipynb | ###Markdown
[](https://mybinder.org/v2/gh/bitfount/tutorials/main?labpath=06_running_an_image_data_pod.ipynb)

Federated Learning - Part 6: An image data pod

Welcome to the Bitfount federated learning tutorials! In this sequence of tutorials, you will learn how federated learning works on the Bitfount platform. This is the sixth notebook in the series.

By the end of this notebook, you should have run a pod that uses image data.

Let's import the relevant pieces...
###Code
import logging
import nest_asyncio
from bitfount import CSVSource, Pod
from bitfount.runners.config_schemas import (
DataSplitConfig,
PodDataConfig,
PodDetailsConfig,
)
from bitfount.runners.utils import setup_loggers
nest_asyncio.apply() # Needed because Jupyter also has an asyncio loop
###Output
_____no_output_____
###Markdown
Let's set up the loggers.
###Code
loggers = setup_loggers([logging.getLogger("bitfount")])
###Output
_____no_output_____
###Markdown
We now specify the config for the pod to run. You'll need to download some data to run the image pod. For this tutorial we will be using a subset of MNIST:
###Code
# Download and extract MNIST images and labels
!curl https://bitfount-hosted-downloads.s3.eu-west-2.amazonaws.com/mnist_images.zip -o mnist_images.zip
!curl https://bitfount-hosted-downloads.s3.eu-west-2.amazonaws.com/mnist_labels.csv -o mnist_labels.csv
!unzip -o mnist_images.zip
###Output
_____no_output_____
###Markdown
Image datasets are slightly different from the tabular datasets we have been using up until this point. In particular, we have a specific column (in our case "file") which points to the image file corresponding to that entry. We need to tell the pod where all these image files are located (via a path-prefix modifier on the `file` column) and which column corresponds to their names (`file`). Otherwise the setup is very similar to the pods we have run previously.
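Conceptually, the prefix modifier used below just prepends the image folder to every entry of the `file` column — something along the lines of this illustration (not Bitfount's actual internals):

```python
import pandas as pd

labels = pd.read_csv("mnist_labels.csv")
# e.g. "img_001.png" becomes "mnist_images/img_001.png" (illustrative only)
labels["file"] = "mnist_images/" + labels["file"]
```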
###Code
pod = Pod(
name="mnist-demo",
datasource=CSVSource(
"mnist_labels.csv", modifiers={"file": {"prefix": "mnist_images/"}}
),
pod_details_config=PodDetailsConfig(
display_name="MNIST demo pod",
is_public=True,
description="This pod contains a subset of the MNIST data.",
),
data_config=PodDataConfig(
force_stypes={"mnist-demo": {"categorical": ["target"], "image": ["file"]}}
),
)
###Output
_____no_output_____
###Markdown
That's the setup done. Let's run the pod. You'll notice that the notebook cell doesn't complete. That's because the pod is set to run until it is interrupted!
###Code
pod.start()
###Output
_____no_output_____ |
notebooks/figure/figI1.ipynb | ###Markdown
- Some of the calculation here also depends on the `awesome_cluster_finder` package by Christopher Bradshaw
- It will be available [in this Github repo](https://github.com/Christopher-Bradshaw/awesome_cluster_finder)
- If you don't have access to `acf` or don't have space for downloading the data, you can load the saved data in this folder to reproduce the figure.
###Code
import awesome_cluster_finder as acf
import jianbing
from jianbing import scatter
fig_dir = jianbing.FIG_DIR
data_dir = jianbing.DATA_DIR
sim_dir = jianbing.SIM_DIR
bin_dir = jianbing.BIN_DIR
res_dir = jianbing.RES_DIR
###Output
_____no_output_____
###Markdown
Estimate the $\sigma_{\mathcal{O}}$

M*100
###Code
# Assuming alpha=0.35
# Bin 1
print(np.round(scatter.sigo_to_sigm(0.20, alpha=0.35), 2))
# Bin 2
print(np.round(scatter.sigo_to_sigm(0.35, alpha=0.35), 2))
# Bin 3
print(np.round(scatter.sigo_to_sigm(0.35, alpha=0.35), 2))
###Output
0.41
0.5
0.5
###Markdown
M*[50, 100]
###Code
# Bin 1
print(np.round(scatter.sigo_to_sigm(0.30, alpha=0.66), 2))
# Bin 2
print(np.round(scatter.sigo_to_sigm(0.42, alpha=0.66), 2))
# Bin 2
print(np.round(scatter.sigo_to_sigm(0.44, alpha=0.66), 2))
###Output
0.36
0.43
0.44
###Markdown
Richness
###Code
# Bin 1
print(np.round(scatter.sigo_to_sigm(0.26, alpha=0.86), 2))
# Bin 2
print(np.round(scatter.sigo_to_sigm(0.38, alpha=0.86), 2))
###Output
0.27
0.35
###Markdown
UniverseMachine: logMvir vs. in-situ & ex-situ stellar mass
###Code
um_cat = np.load(
'/Users/song/Dropbox/work/project/asap/data/umachine/um_smdpl_insitu_exsitu_0.7124_basic_logmp_11.5.npy')
um_cen = um_cat[um_cat['upid'] == -1]
logm_ins = um_cen['logms_gal']
logm_exs = um_cen['logms_icl']
logmh_vir = um_cen['logmh_vir']
###Output
_____no_output_____
###Markdown
Ex-situ
###Code
mask = um_cen['logmh_vir'] >= 13.6
x_err = np.full(len(um_cen[mask]), 0.03)
y_err = np.full(len(um_cen[mask]), 0.04)
w = 1. / (y_err ** 2.)
x_arr, y_arr = um_cen[mask]['logmh_vir'], um_cen[mask]['logms_icl']
reg = LinearRegression().fit(
x_arr.reshape(-1, 1), y_arr, sample_weight=w)
print(reg.coef_, reg.intercept_)
plt.scatter(x_arr, y_arr, s=2)
x_grid = np.linspace(13.6, 15.2, 100)
plt.plot(x_grid, reg.coef_ * x_grid + reg.intercept_, linewidth=2.0, linestyle='--', c='k')
lts_linefit.lts_linefit(x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
scatter.sigo_to_sigm(0.25, alpha=0.867)
###Output
_____no_output_____
###Markdown
In-situ
###Code
mask = um_cen['logmh_vir'] >= 14.0
x_err = np.full(len(um_cen[mask]), 0.03)
y_err = np.full(len(um_cen[mask]), 0.04)
w = 1. / (y_err ** 2.)
x_arr, y_arr = um_cen[mask]['logmh_vir'], um_cen[mask]['logms_gal']
reg = LinearRegression().fit(
x_arr.reshape(-1, 1), y_arr, sample_weight=w)
print(reg.coef_, reg.intercept_)
plt.scatter(x_arr, y_arr, s=2)
x_grid = np.linspace(13.8, 15.2, 100)
plt.plot(x_grid, reg.coef_ * x_grid + reg.intercept_, linewidth=2.0, linestyle='--', c='k')
plt.ylim(8.9, 12.2)
lts_linefit.lts_linefit(x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
scatter.sigo_to_sigm(0.37, alpha=0.141)
###Output
_____no_output_____
###Markdown
Ex-situ mass in Illustris and IllustrisTNG
###Code
def exsitu_frac_sum(gal):
"""Summarize the ex-situ fraction of a galaxy."""
summary = {}
# Central flag
summary['cen'] = gal['info']['cen_flag']
# Total stellar mass
summary['logms'] = gal['info']['logms']
# Total halo mass
summary['logm_200c'] = gal['info']['logm200c']
c_200c = concentration.concentration(
10.0 ** summary['logm_200c'], '200c', 0.4, model='diemer19')
mvir, rvir, cvir = mass_defs.changeMassDefinition(
10.0 ** summary['logm_200c'], c_200c, 0.4, '200c', 'vir')
summary['logm_vir'] = np.log10(mvir)
summary['logm_ins'] = gal['info']['logms_map_ins']
summary['logm_exs'] = gal['info']['logms_map_exs']
# Total ex-situ fraction
summary['fexs_tot'] = (10.0 ** gal['info']['logms_map_exs'] / 10.0 ** gal['info']['logms_map_gal'])
# 5kpc, 10kpc, 100kpc stellar mass
summary['logms_5'] = np.log10(gal['aper']['maper_gal'][6])
summary['logms_10'] = np.log10(gal['aper']['maper_gal'][9])
summary['logms_30'] = np.log10(gal['aper']['maper_gal'][12])
summary['logms_40'] = np.log10(gal['aper']['maper_gal'][13])
summary['logms_60'] = np.log10(gal['aper']['maper_gal'][14])
summary['logms_100'] = np.log10(gal['aper']['maper_gal'][16])
summary['logms_150'] = np.log10(gal['aper']['maper_gal'][17])
summary['logms_30_100'] = np.log10(gal['aper']['maper_gal'][16] - gal['aper']['maper_gal'][12])
summary['logms_40_100'] = np.log10(gal['aper']['maper_gal'][16] - gal['aper']['maper_gal'][13])
summary['logms_60_100'] = np.log10(gal['aper']['maper_gal'][16] - gal['aper']['maper_gal'][14])
summary['logms_30_150'] = np.log10(gal['aper']['maper_gal'][17] - gal['aper']['maper_gal'][12])
summary['logms_40_150'] = np.log10(gal['aper']['maper_gal'][17] - gal['aper']['maper_gal'][13])
summary['logms_60_150'] = np.log10(gal['aper']['maper_gal'][17] - gal['aper']['maper_gal'][14])
# Mass fraction in 5 and 10 kpc
summary['fmass_5'] = gal['aper']['maper_gal'][6] / gal['aper']['maper_gal'][16]
summary['fmass_10'] = gal['aper']['maper_gal'][9] / gal['aper']['maper_gal'][16]
# Ex-situ fraction within 5, 10, 100 kpc
summary['fexs_5'] = gal['aper']['maper_exs'][6] / gal['aper']['maper_gal'][6]
summary['fexs_10'] = gal['aper']['maper_exs'][9] / gal['aper']['maper_gal'][9]
    summary['fexs_100'] = gal['aper']['maper_exs'][16] / gal['aper']['maper_gal'][16]  # 100 kpc aperture (index 16, matching logms_100 above)
# In-situ and ex-situ mass profile
summary['rad'] = gal['aper']['rad_mid']
summary['mprof_ins'] = gal['aper']['mprof_ins']
summary['mprof_exs'] = gal['aper']['mprof_exs']
return summary
###Output
_____no_output_____
###Markdown
Illustris @ z=0.4
###Code
data_dir = '/Volumes/astro6/massive/simulation/riker/ori/sum'
xy_list = glob.glob(os.path.join(data_dir, '*xy_sum.npy'))
xy_sum = [np.load(gal, allow_pickle=True) for gal in xy_list]
print("# There are %d Illustris massive galaxies" % len(xy_list))
ori_cat = Table([exsitu_frac_sum(gal) for gal in xy_sum])
mask = ori_cat['cen'] & (ori_cat['logm_vir'] >= 13.2)
x_arr = ori_cat['logm_vir'][mask]
y_arr = ori_cat['logm_exs'][mask]
x_err = np.full(len(x_arr), 0.02)
y_err = np.full(len(y_arr), 0.01)
lts_linefit.lts_linefit(
x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
mask = ori_cat['cen'] & (ori_cat['logm_vir'] >= 13.2)
x_arr = ori_cat['logm_vir'][mask]
y_arr = ori_cat['logm_ins'][mask]
x_err = np.full(len(x_arr), 0.02)
y_err = np.full(len(y_arr), 0.01)
lts_linefit.lts_linefit(
x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
mask = ori_cat['cen'] & (ori_cat['logm_vir'] >= 13.2)
x_arr = ori_cat['logm_vir'][mask]
y_arr = ori_cat['logms_40_100'][mask]
x_err = np.full(len(x_arr), 0.02)
y_err = np.full(len(y_arr), 0.02)
lts_linefit.lts_linefit(
x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
mask = ori_cat['cen'] & (ori_cat['logm_vir'] >= 13.2)
x_arr = ori_cat['logm_vir'][mask]
y_arr = ori_cat['logms_100'][mask]
x_err = np.full(len(x_arr), 0.02)
y_err = np.full(len(y_arr), 0.02)
lts_linefit.lts_linefit(
x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
mask = ori_cat['cen'] & (ori_cat['logm_vir'] >= 13.2)
x_arr = ori_cat['logm_vir'][mask]
y_arr = ori_cat['logms_10'][mask]
x_err = np.full(len(x_arr), 0.02)
y_err = np.full(len(y_arr), 0.02)
lts_linefit.lts_linefit(
x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
###Output
sig_int: 0.0000 31.2502
Computing sig_int
sig_int: 0.0000 31.2502
sig_int: 0.1637 -0.4382
sig_int: 0.1615 -0.4226
sig_int: 0.1009 0.4391
sig_int: 0.1318 -0.1405
sig_int: 0.1206 0.0215
sig_int: 0.1221 -0.0025
sig_int: 0.1219 -0.0000
sig_int: 0.1218 0.0009
Computing sig_int error
sig_int: 0.1219 0.2020
sig_int: 0.1637 -0.2361
sig_int: 0.1412 -0.0468
sig_int: 0.1376 -0.0076
sig_int: 0.1369 0.0001
sig_int: 0.1370 -0.0007
Repeat at best fitting solution
sig_int: 0.1219 -0.0000
################# Values and formal errors ################
intercept: 11.444 +/- 0.018
slope: 0.389 +/- 0.070
scatter: 0.122 +/- 0.015
Observed rms scatter: 0.12
y = a + b*(x - pivot) with pivot = 13.44
Spearman r=0.53 and p=6.1e-05
Pearson r=0.62 and p=1.3e-06
##########################################################
seconds 17.43
###Markdown
TNG100 @ z=0.4
###Code
data_dir = '/Volumes/astro6/massive/simulation/riker/tng/sum'
xy_list = glob.glob(os.path.join(data_dir, '*xy_sum.npy'))
xy_sum = [np.load(gal, allow_pickle=True) for gal in xy_list]
print("# There are %d TNG massive galaxies" % len(xy_list))
tng_cat = Table([exsitu_frac_sum(gal) for gal in xy_sum])
mask = tng_cat['cen'] & (tng_cat['logm_vir'] >= 13.2)
x_arr = tng_cat['logm_vir'][mask]
y_arr = tng_cat['logm_exs'][mask]
x_err = np.full(len(x_arr), 0.02)
y_err = np.full(len(y_arr), 0.01)
lts_linefit.lts_linefit(
x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
mask = tng_cat['cen'] & (tng_cat['logm_vir'] >= 13.2)
x_arr = tng_cat['logm_vir'][mask]
y_arr = tng_cat['logm_ins'][mask]
x_err = np.full(len(x_arr), 0.02)
y_err = np.full(len(y_arr), 0.01)
lts_linefit.lts_linefit(
x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
mask = tng_cat['cen'] & (tng_cat['logm_vir'] >= 13.2)
x_arr = tng_cat['logm_vir'][mask]
y_arr = tng_cat['logms_40_100'][mask]
x_err = np.full(len(x_arr), 0.02)
y_err = np.full(len(y_arr), 0.02)
lts_linefit.lts_linefit(
x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
mask = tng_cat['cen'] & (tng_cat['logm_vir'] >= 13.2)
x_arr = tng_cat['logm_vir'][mask]
y_arr = tng_cat['logms_100'][mask]
x_err = np.full(len(x_arr), 0.02)
y_err = np.full(len(y_arr), 0.02)
lts_linefit.lts_linefit(
x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
mask = tng_cat['cen'] & (tng_cat['logm_vir'] >= 13.2)
x_arr = tng_cat['logm_vir'][mask]
y_arr = tng_cat['logms_10'][mask]
x_err = np.full(len(x_arr), 0.02)
y_err = np.full(len(y_arr), 0.02)
lts_linefit.lts_linefit(
x_arr, y_arr, x_err, y_err, pivot=np.nanmedian(x_arr), clip=4.0)
# Illustris
logm_exs_ori = 11.528 + 0.800 * (logmh_grid - 13.44)
sigm_exs_ori = 0.21
logm_ins_ori = 11.384 + 0.360 * (logmh_grid - 13.44)
sigm_ins_ori = 0.14
logm_out_ori = 10.994 + 0.920 * (logmh_grid - 13.44)
sigm_out_ori = 0.21
logm_100_ori = 11.740 + 0.594 * (logmh_grid - 13.44)
sigm_100_ori = 0.14
logm_10_ori = 11.444 + 0.389 * (logmh_grid - 13.44)
sigm_10_ori = 0.12
# TNG
logm_exs_tng = 11.437 + 0.784 * (logmh_grid - 13.48)
sigm_exs_tng = 0.19
logm_ins_tng = 11.182 + 0.543 * (logmh_grid - 13.48)
sigm_ins_tng = 0.16
logm_out_tng = 10.860 + 0.921 * (logmh_grid - 13.48)
sigm_out_tng = 0.21
logm_100_tng = 11.610 + 0.660 * (logmh_grid - 13.48)
sigm_100_tng = 0.15
logm_10_tng = 11.296 + 0.541 * (logmh_grid - 13.48)
sigm_10_tng = 0.13
###Output
_____no_output_____
###Markdown
Assign CAMIRA richness to ASAP halos

- Based on the Mvir-richness relation of CAMIRA clusters from Murata et al. (2019)
- $P(\ln N \mid M, z)=\frac{1}{\sqrt{2 \pi} \sigma_{\ln N \mid M, z}} \exp \left(-\frac{x^{2}(N, M, z)}{2 \sigma_{\ln N \mid M, z}^{2}}\right)$
- $\begin{aligned} x(N, M, z) & \equiv \ln N-\left[A+B \ln \left(\frac{M}{M_{\text {pivot }}}\right)\right.\\ & \left.+B_{z} \ln \left(\frac{1+z}{1+z_{\text {pivot }}}\right)+C_{z}\left[\ln \left(\frac{1+z}{1+z_{\text {pivot }}}\right)\right]^{2}\right] \end{aligned}$
- $\begin{aligned} \sigma_{\ln N \mid M, z} &=\sigma_{0}+q \ln \left(\frac{M}{M_{\text {pivot }}}\right) \\ &+q_{z} \ln \left(\frac{1+z}{1+z_{\text {pivot }}}\right)+p_{z}\left[\ln \left(\frac{1+z}{1+z_{\text {pivot }}}\right)\right]^{2} \end{aligned}$
- Parameters for low-z ($0.1 < z < 0.4$) clusters using Planck cosmology:
    - $A = 3.34^{+0.25}_{-0.20}$
    - $B = 0.85^{+0.08}_{-0.07}$
    - $\sigma_0 = 0.36^{+0.07}_{-0.21}$
    - $q = -0.06^{+0.09}_{-0.11}$
- Parameters for the full redshift range using Planck cosmology:
    - $A = 3.15^{+0.07}_{-0.08}$
    - $B = 0.86^{+0.05}_{-0.05}$
    - $B_{z} = -0.21^{+0.35}_{-0.42}$
    - $C_{z} = 3.61^{+1.96}_{-2.23}$
    - $\sigma_0 = 0.32^{+0.06}_{-0.06}$
    - $q = -0.06^{+0.09}_{-0.11}$
    - $q_{z} = 0.03^{+0.31}_{-0.30}$
    - $p_{z} = 0.70^{+1.71}_{-1.60}$
- Pivot redshift and mass
    - $M_{\rm Pivot} = 3\times 10^{14} h^{-1} M_{\odot}$
    - $z_{\rm Pivot} = 0.6$
- Here, $M \equiv M_{200m}$ and $h=0.68$.
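As a quick sanity check of these numbers, using nothing but the relation above: at the pivot mass and pivot redshift all the logarithmic terms vanish, so for the full-redshift-range parameters

$$\langle \ln N \rangle = A = 3.15 \;\Rightarrow\; N \approx e^{3.15} \approx 23, \qquad \sigma_{\ln N \mid M, z} = \sigma_0 = 0.32 .$$

This is the behaviour the `mean_ln_N` and `sig_ln_N` helpers below should reproduce at the pivot point.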
###Code
def mean_ln_N(m200m, z=None, m_pivot=3e14, h=0.68, A=3.15, B=0.86,
z_pivot=0.6, B_z=-0.21, C_z=3.61):
"""
Estimate the mean ln(N) for CAMIRA clusters based on the halo mass-richness
relation calibrated by Murata et al. (2019).
"""
lnN = A + B * np.log(m200m / m_pivot / h)
if z is None:
return lnN
z_term = np.log((1 + z) / (1 + z_pivot))
return lnN + B_z * z_term + C_z * (z_term ** 2)
def sig_ln_N(m200m, z=None, m_pivot=3e14, h=0.68, sig0=0.32, z_pivot=0.6,
q=-0.06, q_z=0.03, p_z=0.70):
"""
    Estimate the scatter of ln(N) for CAMIRA clusters based on the halo mass-richness
relation calibrated by Murata et al. (2019).
"""
sig_lnN = sig0 + q * np.log(m200m / m_pivot / h)
if z is None:
return sig_lnN
z_term = np.log((1 + z) / (1 + z_pivot))
return sig_lnN + q_z * z_term + p_z * (z_term ** 2)
lnN = np.random.normal(
loc=mean_ln_N(um_cen['m200b_hlist'], z=0.4),
scale=sig_ln_N(um_cen['m200b_hlist'], z=0.4))
log10_N = np.log10(np.exp(lnN))
x_arr = np.log10(um_cen['mvir'])
y_arr = log10_N
mask = x_arr >= 14.0
x_err = np.full(len(x_arr), 0.02)
y_err = np.full(len(y_arr), 0.01)
reg = LinearRegression().fit(
x_arr[mask].reshape(-1, 1), y_arr[mask])
print(reg.coef_, reg.intercept_)
plt.scatter(np.log10(um_cen['mvir']), log10_N, s=2, alpha=0.1)
plt.xlim(13.8, 15.2)
plt.ylim(0.1, 2.6)
lts_linefit.lts_linefit(
x_arr[mask], y_arr[mask], x_err[mask], y_err[mask], pivot=np.nanmedian(x_arr[mask]), clip=4.0)
logmh_grid = np.linspace(13.6, 15.3, 30)
# Relation for M*ex-situ
logm_exs = 11.7441 + 0.867 * (logmh_grid - 14.17)
sigm_exs = 0.25
# Relation for M*in-situ
logm_ins = 11.0242 + 0.141 * (logmh_grid - 14.17)
sigm_ins = 0.37
# Relation for M*[50-100]
logm_out = 10.7474 + 0.66 * (logmh_grid - 13.77)
sigm_out = 0.3
# Relation for richness
nmem_cam = 1.1615 + 0.864 * (logmh_grid - 14.17)
sign_cam = 0.16
fig = plt.figure(figsize=(7.2, 10))
fig.subplots_adjust(
left=0.175, bottom=0.09, right=0.855, top=0.99, wspace=0, hspace=0)
ax1 = fig.add_subplot(2, 1, 1)
ax1.fill_between(
logmh_grid, logm_exs - sigm_exs, logm_exs + sigm_exs,
alpha=0.3, edgecolor='none', linewidth=1.0,
label=r'__no_label__', facecolor='skyblue', linestyle='-', rasterized=True)
l1 = ax1.plot(logmh_grid, logm_exs, linestyle='-', linewidth=5, alpha=0.7,
label=r'$\rm UM\ ex\ situ$', color='dodgerblue', zorder=100)
ax1.fill_between(
logmh_grid, logm_out - sigm_out + 0.52, logm_out + sigm_out + 0.52,
alpha=0.3, edgecolor='none', linewidth=1.0,
label=r'__no_label__', facecolor='grey', linestyle='-', rasterized=True)
l2 = ax1.plot(logmh_grid, logm_out + 0.52, linestyle='-.', linewidth=5, alpha=0.8,
label=r'$\rm M_{\star,[50,100]} + 0.5\ \rm dex$', color='grey', zorder=100)
ax1.set_ylabel(r'$\log_{10} (M_{\star}/M_{\odot})$', fontsize=32)
#------------------------------------------------------------------------------------#
ax2=ax1.twinx()
ax2.yaxis.label.set_color('orangered')
ax2.tick_params(axis='y', colors='orangered', which='both')
ax2.spines['right'].set_color('red')
ax2.set_ylabel(r"$\log_{10} N_{\rm CAMIRA}$", color="red", fontsize=32)
ax2.fill_between(
logmh_grid, nmem_cam - sign_cam, nmem_cam + sign_cam,
alpha=0.2, edgecolor='none', linewidth=2.0, zorder=0,
label=r'__no_label__', facecolor='red', linestyle='-', rasterized=True)
l3 = ax2.plot(logmh_grid, nmem_cam, linestyle='--', linewidth=5, alpha=0.7,
label=r'$\rm N_{\rm CAMIRA}$', color='red', zorder=100)
ax2.set_ylim(0.45, 2.3)
ax1.set_ylim(11.01, 12.95)
ax1.set_xticklabels([])
custom_lines = [Line2D([0], [0], color="dodgerblue", lw=4, ls='-'),
Line2D([0], [0], color="grey", lw=4, ls='-.'),
Line2D([0], [0], color="red", lw=4, ls='--')]
ax1.legend(custom_lines,
[r'$\rm UM\ ex\ situ$', r'$M_{\star,[50,100]} + 0.5\ \rm dex$',
r'$\rm N_{\rm CAMIRA}$'],
loc='best', fontsize=18)
#------------------------------------------------------------------------------------#
ax3 = fig.add_subplot(2, 1, 2)
# Universe Machine
ax3.fill_between(
logmh_grid, logm_ins - sigm_ins, logm_ins + sigm_ins,
alpha=0.1, edgecolor='none', linewidth=1.0, zorder=0,
label=r'__no_label__', facecolor='orange', linestyle='-', rasterized=True)
l1 = ax3.plot(logmh_grid, logm_ins, linestyle='-', linewidth=5, alpha=0.7,
label=r'$\rm UM\ ins$', color='orangered', zorder=100)
ax3.fill_between(
logmh_grid, logm_exs - sigm_exs, logm_exs + sigm_exs,
alpha=0.1, edgecolor='none', linewidth=1.0, zorder=0,
label=r'__no_label__', facecolor='dodgerblue', linestyle='-', rasterized=True)
l2 = ax3.plot(logmh_grid, logm_exs, linestyle='-', linewidth=5, alpha=0.7,
label=r'$\rm UM\ exs$', color='dodgerblue')
# Illustris
ax3.fill_between(
logmh_grid, logm_ins_ori - sigm_ins_ori, logm_ins_ori + sigm_ins_ori,
alpha=0.1, edgecolor='none', linewidth=1.0, zorder=0,
label=r'__no_label__', facecolor='coral', linestyle='--', rasterized=True)
l3 = ax3.plot(logmh_grid, logm_ins_ori, linestyle='-.', linewidth=6, alpha=0.6,
label=r'$\rm Illustris\ ins$', color='coral', zorder=100)
ax3.fill_between(
logmh_grid, logm_exs_ori - sigm_exs_ori, logm_exs_ori + sigm_exs_ori,
alpha=0.05, edgecolor='none', linewidth=1.0, zorder=0,
label=r'__no_label__', facecolor='steelblue', linestyle='--', rasterized=True)
l4 = ax3.plot(logmh_grid, logm_exs_ori, linestyle='-.', linewidth=5, alpha=0.7,
label=r'$\rm Illustris\ exs$', color='steelblue', zorder=100)
# TNG
ax3.fill_between(
logmh_grid, logm_ins_tng - sigm_ins_tng, logm_ins_tng + sigm_ins_tng,
alpha=0.1, edgecolor='none', linewidth=1.0, zorder=0,
label=r'__no_label__', facecolor='goldenrod', linestyle='-', rasterized=True)
l3 = ax3.plot(logmh_grid, logm_ins_tng, linestyle='--', linewidth=6, alpha=0.7,
label=r'$\rm TNG\ ins$', color='goldenrod')
ax3.fill_between(
logmh_grid, logm_exs_tng - sigm_exs_tng, logm_exs_tng + sigm_exs_tng,
alpha=0.05, edgecolor='grey', linewidth=1.0,
label=r'__no_label__', facecolor='royalblue', linestyle='--', rasterized=True)
l4 = ax3.plot(logmh_grid, logm_exs_tng, linestyle='--', linewidth=6, alpha=0.5,
label=r'$\rm TNG\ exs$', color='royalblue')
ax3.legend(loc='best', ncol=2, fontsize=16, handletextpad=0.5, labelspacing=0.3)
ax1.set_xlim(13.59, 15.25)
ax2.set_xlim(13.59, 15.25)
ax3.set_xlim(13.59, 15.25)
ax3.set_ylim(10.61, 13.45)
ax3.set_xlabel(r'$\log_{10} (M_{\rm vir}/ M_{\odot})$', fontsize=32)
ax3.set_ylabel(r"$\log_{10} (M_{\star}/ M_{\odot})$", fontsize=32)
fig.savefig(os.path.join(fig_dir, 'fig_13.png'), dpi=120)
fig.savefig(os.path.join(fig_dir, 'fig_13.pdf'), dpi=120)
###Output
_____no_output_____ |
GoogleCloudPlatform/DataProc-Training/PySpark-analysis-file.ipynb | ###Markdown
Migrating from Spark to BigQuery via Dataproc -- Part 1

* [Part 1](01_spark.ipynb): The original Spark code, now running on Dataproc (lift-and-shift).
* [Part 2](02_gcs.ipynb): Replace HDFS by Google Cloud Storage. This enables job-specific clusters. (cloud-native)
* [Part 3](03_automate.ipynb): Automate everything, so that we can run in a job-specific cluster. (cloud-optimized)
* [Part 4](04_bigquery.ipynb): Load CSV into BigQuery, use BigQuery. (modernize)
* [Part 5](05_functions.ipynb): Using Cloud Functions, launch analysis every time there is a new file in the bucket. (serverless)

Copy data to HDFS

The Spark code in this notebook is based loosely on the [code](https://github.com/dipanjanS/data_science_for_all/blob/master/tds_spark_sql_intro/Working%20with%20SQL%20at%20Scale%20-%20Spark%20SQL%20Tutorial.ipynb) accompanying [this post](https://opensource.com/article/19/3/apache-spark-and-dataframes-tutorial) by Dipanjan Sarkar. I am using it to illustrate migrating a Spark analytics workload to BigQuery via Dataproc.

The data itself comes from the 1999 KDD competition. Let's grab 10% of the data to use as an illustration.

Reading in data

The data are CSV files. In Spark, these can be read using textFile and splitting rows on commas.
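The staging step itself isn't shown in this copy of the notebook. A typical way to fetch the 10% sample and make it available to the cluster looks roughly like the cell below — the download URL and the HDFS/GCS destination are assumptions here, so adjust them to your environment:

```python
# Illustrative staging cell (assumed, not from this copy of the notebook):
!wget http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz
!hadoop fs -put kddcup.data_10_percent.gz /    # lift-and-shift: stage it on the cluster's HDFS
# or, for the GCS-based variant actually read below:
# !gsutil cp kddcup.data_10_percent.gz gs://<your-bucket>/
```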
###Code
%%writefile -a spark_analysis.py
from pyspark.sql import SparkSession, SQLContext, Row
gcs_bucket='qwiklabs-gcp-ffc84680e86718f5'
spark = SparkSession.builder.appName("kdd").getOrCreate()
sc = spark.sparkContext
data_file = "gs://"+gcs_bucket+"//kddcup.data_10_percent.gz"
raw_rdd = sc.textFile(data_file).cache()
raw_rdd.take(5)
%%writefile -a spark_analysis.py
csv_rdd = raw_rdd.map(lambda row: row.split(","))
parsed_rdd = csv_rdd.map(lambda r: Row(
duration=int(r[0]),
protocol_type=r[1],
service=r[2],
flag=r[3],
src_bytes=int(r[4]),
dst_bytes=int(r[5]),
wrong_fragment=int(r[7]),
urgent=int(r[8]),
hot=int(r[9]),
num_failed_logins=int(r[10]),
num_compromised=int(r[12]),
su_attempted=r[14],
num_root=int(r[15]),
num_file_creations=int(r[16]),
label=r[-1]
)
)
parsed_rdd.take(5)
###Output
Appending to spark_analysis.py
###Markdown
Spark analysis

One way to analyze data in Spark is to call methods on a dataframe.
###Code
%%writefile -a spark_analysis.py
sqlContext = SQLContext(sc)
df = sqlContext.createDataFrame(parsed_rdd)
connections_by_protocol = df.groupBy('protocol_type').count().orderBy('count', ascending=False)
connections_by_protocol.show()
###Output
Appending to spark_analysis.py
###Markdown
Another way is to use Spark SQL
###Code
%%writefile -a spark_analysis.py
df.registerTempTable("connections")
attack_stats = sqlContext.sql("""
SELECT
protocol_type,
CASE label
WHEN 'normal.' THEN 'no attack'
ELSE 'attack'
END AS state,
COUNT(*) as total_freq,
ROUND(AVG(src_bytes), 2) as mean_src_bytes,
ROUND(AVG(dst_bytes), 2) as mean_dst_bytes,
ROUND(AVG(duration), 2) as mean_duration,
SUM(num_failed_logins) as total_failed_logins,
SUM(num_compromised) as total_compromised,
SUM(num_file_creations) as total_file_creations,
SUM(su_attempted) as total_root_attempts,
SUM(num_root) as total_root_acceses
FROM connections
GROUP BY protocol_type, state
ORDER BY 3 DESC
""")
attack_stats.show()
%%writefile -a spark_analysis.py
# %matplotlib inline
ax = attack_stats.toPandas().plot.bar(x='protocol_type', subplots=True, figsize=(10,25))
%%writefile -a spark_analysis.py
ax[0].get_figure().savefig('report.png');
%%writefile -a spark_analysis.py
import google.cloud.storage as gcs
bucket = gcs.Client().get_bucket(BUCKET)
for blob in bucket.list_blobs(prefix='sparktobq/'):
blob.delete()
bucket.blob('sparktobq/report.pdf').upload_from_filename('report.png')
%%writefile -a spark_analysis.py
connections_by_protocol.write.format("csv").mode("overwrite").save(
"gs://{}/sparktobq/connections_by_protocol".format(BUCKET))
BUCKET_list = !gcloud info --format='value(config.project)'
BUCKET=BUCKET_list[0]
print('Writing to {}'.format(BUCKET))
!python spark_analysis.py --bucket=$BUCKET
!gsutil ls gs://$BUCKET/sparktobq/**
!gsutil cp spark_analysis.py gs://$BUCKET/sparktobq/spark_analysis.py
###Output
Copying file://spark_analysis.py [Content-Type=text/x-python]...
/ [1 files][ 2.8 KiB/ 2.8 KiB]
Operation completed over 1 objects/2.8 KiB.
|
material/notebooks/sampling.ipynb | ###Markdown
Use PercentageSelector

In this notebook we will import several samplers, such as the `PercentageSelector` class, and use one of them to create a training set. This selector needs to be given a percentage. It will then copy the corresponding percentage of each class from the `in_path` to the `out_path`. In this example we pass `0.01`, i.e. we copy 1% of the images from each class (a conceptual sketch of what this does follows below).
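Conceptually, a percentage-based sampler does something along the lines of the sketch below. This is only an illustration under assumptions — the real implementation lives in the repo's `sampler` module:

```python
# Illustrative sketch only -- not the actual PercentageSelector code.
import os, math, random, shutil

def sample_fraction(fraction, in_path, out_path):
    """Copy roughly `fraction` of the images of every class folder from in_path to out_path."""
    for label in os.listdir(in_path):
        src = os.path.join(in_path, label)
        dst = os.path.join(out_path, label)
        os.makedirs(dst, exist_ok=True)
        files = os.listdir(src)
        for name in random.sample(files, math.ceil(len(files) * fraction)):
            shutil.copy(os.path.join(src, name), os.path.join(dst, name))
```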
###Code
# First we import the PercentageSelector class from the sampler module.
import sys
sys.path.append("..")
from sampler import PercentageSelector
# Here we instantiate the ps object from the PercentageSelector class and, for each image class, copy 1% of its images.
in_path = "../data/train/"
out_path = "../data/sample/train"
ps = PercentageSelector()
ps.sample(0.01, in_path, out_path)
###Output
Creating out_label directory at ../data/sample/train
Creating out_label directory at ../data/sample/train\1
Creating out_label directory at ../data/sample/train\10
Creating out_label directory at ../data/sample/train\100
Creating out_label directory at ../data/sample/train\101
Creating out_label directory at ../data/sample/train\102
Creating out_label directory at ../data/sample/train\103
Creating out_label directory at ../data/sample/train\104
Creating out_label directory at ../data/sample/train\105
Creating out_label directory at ../data/sample/train\106
Creating out_label directory at ../data/sample/train\107
Creating out_label directory at ../data/sample/train\108
Creating out_label directory at ../data/sample/train\109
Creating out_label directory at ../data/sample/train\11
Creating out_label directory at ../data/sample/train\110
Creating out_label directory at ../data/sample/train\111
Creating out_label directory at ../data/sample/train\112
Creating out_label directory at ../data/sample/train\113
Creating out_label directory at ../data/sample/train\114
Creating out_label directory at ../data/sample/train\115
Creating out_label directory at ../data/sample/train\116
Creating out_label directory at ../data/sample/train\117
Creating out_label directory at ../data/sample/train\118
Creating out_label directory at ../data/sample/train\119
Creating out_label directory at ../data/sample/train\12
Creating out_label directory at ../data/sample/train\120
Creating out_label directory at ../data/sample/train\121
Creating out_label directory at ../data/sample/train\122
Creating out_label directory at ../data/sample/train\123
Creating out_label directory at ../data/sample/train\124
Creating out_label directory at ../data/sample/train\125
Creating out_label directory at ../data/sample/train\126
Creating out_label directory at ../data/sample/train\127
Creating out_label directory at ../data/sample/train\128
Creating out_label directory at ../data/sample/train\13
Creating out_label directory at ../data/sample/train\14
Creating out_label directory at ../data/sample/train\15
Creating out_label directory at ../data/sample/train\16
Creating out_label directory at ../data/sample/train\17
Creating out_label directory at ../data/sample/train\18
Creating out_label directory at ../data/sample/train\19
Creating out_label directory at ../data/sample/train\2
Creating out_label directory at ../data/sample/train\20
Creating out_label directory at ../data/sample/train\21
Creating out_label directory at ../data/sample/train\22
Creating out_label directory at ../data/sample/train\23
Creating out_label directory at ../data/sample/train\24
Creating out_label directory at ../data/sample/train\25
Creating out_label directory at ../data/sample/train\26
Creating out_label directory at ../data/sample/train\27
Creating out_label directory at ../data/sample/train\28
Creating out_label directory at ../data/sample/train\29
Creating out_label directory at ../data/sample/train\3
Creating out_label directory at ../data/sample/train\30
Creating out_label directory at ../data/sample/train\31
Creating out_label directory at ../data/sample/train\32
Creating out_label directory at ../data/sample/train\33
Creating out_label directory at ../data/sample/train\34
Creating out_label directory at ../data/sample/train\35
Creating out_label directory at ../data/sample/train\36
Creating out_label directory at ../data/sample/train\37
Creating out_label directory at ../data/sample/train\38
Creating out_label directory at ../data/sample/train\39
Creating out_label directory at ../data/sample/train\4
Creating out_label directory at ../data/sample/train\40
Creating out_label directory at ../data/sample/train\41
Creating out_label directory at ../data/sample/train\42
Creating out_label directory at ../data/sample/train\43
Creating out_label directory at ../data/sample/train\44
Creating out_label directory at ../data/sample/train\45
Creating out_label directory at ../data/sample/train\46
Creating out_label directory at ../data/sample/train\47
Creating out_label directory at ../data/sample/train\48
Creating out_label directory at ../data/sample/train\49
Creating out_label directory at ../data/sample/train\5
Creating out_label directory at ../data/sample/train\50
Creating out_label directory at ../data/sample/train\51
Creating out_label directory at ../data/sample/train\52
Creating out_label directory at ../data/sample/train\53
Creating out_label directory at ../data/sample/train\54
Creating out_label directory at ../data/sample/train\55
Creating out_label directory at ../data/sample/train\56
Creating out_label directory at ../data/sample/train\57
Creating out_label directory at ../data/sample/train\58
Creating out_label directory at ../data/sample/train\59
Creating out_label directory at ../data/sample/train\6
Creating out_label directory at ../data/sample/train\60
Creating out_label directory at ../data/sample/train\61
Creating out_label directory at ../data/sample/train\62
Creating out_label directory at ../data/sample/train\63
Creating out_label directory at ../data/sample/train\64
Creating out_label directory at ../data/sample/train\65
Creating out_label directory at ../data/sample/train\66
Creating out_label directory at ../data/sample/train\67
Creating out_label directory at ../data/sample/train\68
Creating out_label directory at ../data/sample/train\69
Creating out_label directory at ../data/sample/train\7
Creating out_label directory at ../data/sample/train\70
Creating out_label directory at ../data/sample/train\71
Creating out_label directory at ../data/sample/train\72
Creating out_label directory at ../data/sample/train\73
Creating out_label directory at ../data/sample/train\74
Creating out_label directory at ../data/sample/train\75
Creating out_label directory at ../data/sample/train\76
Creating out_label directory at ../data/sample/train\77
Creating out_label directory at ../data/sample/train\78
Creating out_label directory at ../data/sample/train\79
Creating out_label directory at ../data/sample/train\8
Creating out_label directory at ../data/sample/train\80
Creating out_label directory at ../data/sample/train\81
Creating out_label directory at ../data/sample/train\82
Creating out_label directory at ../data/sample/train\83
Creating out_label directory at ../data/sample/train\84
Creating out_label directory at ../data/sample/train\85
Creating out_label directory at ../data/sample/train\86
Creating out_label directory at ../data/sample/train\87
Creating out_label directory at ../data/sample/train\88
Creating out_label directory at ../data/sample/train\89
Creating out_label directory at ../data/sample/train\9
Creating out_label directory at ../data/sample/train\90
Creating out_label directory at ../data/sample/train\91
Creating out_label directory at ../data/sample/train\92
Creating out_label directory at ../data/sample/train\93
Creating out_label directory at ../data/sample/train\94
Creating out_label directory at ../data/sample/train\95
Creating out_label directory at ../data/sample/train\96
Creating out_label directory at ../data/sample/train\97
Creating out_label directory at ../data/sample/train\98
Creating out_label directory at ../data/sample/train\99
|
S) RoadMap 19 - Appendix 2 - Fashion Classification with Monk.ipynb | ###Markdown
Fashion Classification with Monk and Densenet

Blog Post -- [LINK]()

Explanation of Dense blocks and Densenets

[BLOG LINK](https://towardsdatascience.com/review-densenet-image-classification-b6631a8ef803)

This is an excellent read comparing Densenets with other architectures and explaining why Dense blocks achieve better accuracy while training fewer parameters.

Setup Monk

We begin by setting up Monk and installing its dependencies for Colab.
###Code
!git clone https://github.com/Tessellate-Imaging/monk_v1
cd monk_v1
!pip install -r installation/requirements_cu10.txt
cd ..
###Output
_____no_output_____
###Markdown
Prepare Dataset

Next we grab the dataset. Credits to the original dataset -- [Kaggle](https://www.kaggle.com/paramaggarwal/fashion-product-images-small)
###Code
!wget https://www.dropbox.com/s/wzgyr1dx4sejo5u/dataset.zip
%%capture
!unzip dataset.zip
###Output
_____no_output_____
###Markdown
**Note** : Pytorch backend requires the images to have 3 channels when loading. We prepare a modified dataset for the same.
###Code
!mkdir mod_dataset
!mkdir mod_dataset/images
import cv2
import numpy as np
from glob import glob
from tqdm import tqdm
def convert23channel(imagePath):
    """Read an image and re-save its B, G, R planes into an explicit 3-channel array,
    so every image on disk has the 3 channels the PyTorch backend expects."""
    # gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    img = cv2.imread(imagePath)   # OpenCV loads images as 3-channel BGR arrays
    img2 = np.zeros_like(img)
    b, g, r = cv2.split(img)      # split and copy the channels explicitly
    img2[:, :, 0] = b
    img2[:, :, 1] = g
    img2[:, :, 2] = r
    return img2
imageList = glob("./dataset/images/*.jpg")
for i in tqdm(imageList):
inPath = i
out = convert23channel(inPath)
outPath = "./mod_dataset/images/{}".format(inPath.split('/')[-1])
cv2.imwrite(outPath,out)
###Output
_____no_output_____
###Markdown
Data exploration [DOCUMENTATION](https://clever-noyce-f9d43f.netlify.com//compare_experiment)
###Code
import pandas as pd
gt = pd.read_csv("./dataset/styles.csv",error_bad_lines=False)
gt.head()
###Output
_____no_output_____
###Markdown
The dataset labels have multiple classification categories. We will train the sub category labels. Extract the sub category labels for images. The image id fields require image names with extension.
###Code
label_gt = gt[['id','subCategory']]
label_gt['id'] = label_gt['id'].astype(str) + '.jpg'
label_gt.to_csv('./mod_dataset/subCategory.csv',index=False)
###Output
_____no_output_____
###Markdown
Pytorch with Monk

Create an Experiment [DOCS](https://clever-noyce-f9d43f.netlify.com//quick_mode/quickmode_pytorch)

Import Monk library
###Code
import os
import sys
sys.path.append("./monk_v1/monk/");
import psutil
from pytorch_prototype import prototype
###Output
_____no_output_____
###Markdown
Experiment 1 with Densenet121

Create a new experiment
###Code
ptf = prototype(verbose=1);
ptf.Prototype("fashion", "exp1");
###Output
_____no_output_____
###Markdown
Load the training images and ground truth labels for sub category classification. We select **densenet121** as our neural architecture and set the number of epochs to **5**.
###Code
ptf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory.csv", model_name="densenet121", freeze_base_network=True, num_epochs=5);
###Output
_____no_output_____
###Markdown
**Note**: The dataset has a few missing images. We can find the missing and corrupt images by performing EDA.

EDA documentation [DOCS](https://clever-noyce-f9d43f.netlify.com//aux_functions)
###Code
ptf.EDA(check_missing=True, check_corrupt=True);
###Output
_____no_output_____
###Markdown
Clean the labels file
###Code
corruptImageList = ['39403.jpg','39410.jpg','39401.jpg','39425.jpg','12347.jpg']
def cleanCSV(csvPath, labelColumnName, imageIdColumnName, appendExtension=False, extension='.jpg', corruptImageList=[]):
    """Keep only the image-id and label columns, optionally append the file extension
    to the ids, and drop any known corrupt images."""
    gt = pd.read_csv(csvPath, error_bad_lines=False)
    print("LABELS\n{}".format(gt[labelColumnName].unique()))
    label_gt = gt[[imageIdColumnName, labelColumnName]]
    if appendExtension:
        label_gt[imageIdColumnName] = label_gt[imageIdColumnName].astype(str) + extension
    for i in corruptImageList:
        label_gt = label_gt[label_gt[imageIdColumnName] != i]
    print("Total images : {}".format(label_gt.shape[0]))
    return label_gt
subCategory_gt = cleanCSV('./dataset/styles.csv','subCategory','id',True,'.jpg',corruptImageList)
subCategory_gt.to_csv("./mod_dataset/subCategory_cleaned.csv",index=False)
###Output
_____no_output_____
###Markdown
Update the experiment [DOCS](https://clever-noyce-f9d43f.netlify.com//update_mode/update_dataset)

Now that we have a clean ground truth labels file and modified images, we can update the experiment to take these as our inputs.

**Note**: Remember to reload the experiment after any updates. Check out the docs -- [DOCUMENTATION](https://clever-noyce-f9d43f.netlify.com//update_mode/update_dataset)
###Code
ptf.update_dataset(dataset_path="./mod_dataset/images",path_to_csv="./mod_dataset/subCategory_cleaned.csv");
ptf.Reload()
###Output
_____no_output_____
###Markdown
Start Training
###Code
ptf.Train()
###Output
_____no_output_____
###Markdown
After training for 5 epochs we reach a validation accuracy of 89%, which is quite good. Let's see if other Densenet architectures can help improve this performance.

Experiment 2 with Densenet169
###Code
ptf = prototype(verbose=1);
ptf.Prototype("fashion", "exp2");
ptf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet169", freeze_base_network=True, num_epochs=5);
ptf.Train()
###Output
_____no_output_____
###Markdown
We do improve the validation accuracy, but not by much. Next we run the experiment with densenet201.

Experiment 3 with Densenet201
###Code
ptf = prototype(verbose=1);
ptf.Prototype("fashion", "exp3");
ptf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet201", freeze_base_network=True, num_epochs=5);
ptf.Train()
###Output
_____no_output_____
###Markdown
We can see that the 3 versions of Densenet give us quite similar results. We can quickly compare the experiments to see variations in losses and training times and choose a fitting experiment.

Compare experiments [DOCS](https://clever-noyce-f9d43f.netlify.com//compare_experiment)
###Code
from compare_prototype import compare
ctf = compare(verbose=1);
ctf.Comparison("Fashion_Pytorch_Densenet");
ctf.Add_Experiment("fashion", "exp1");
ctf.Add_Experiment("fashion", "exp2");
ctf.Add_Experiment("fashion", "exp3");
ctf.Generate_Statistics();
###Output
_____no_output_____
###Markdown
Gluon with Monk

Let's repeat the same experiments, but using a different backend framework: **Gluon**.
###Code
from gluon_prototype import prototype
###Output
_____no_output_____
###Markdown
Experiment 4 with Densenet121
###Code
%%capture
gtf = prototype(verbose=1);
gtf.Prototype("fashion", "exp4");
gtf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet121", freeze_base_network=True, num_epochs=5);
gtf.Train()
###Output
_____no_output_____
###Markdown
Experiment 5 with Densenet169
###Code
%%capture
gtf = prototype(verbose=1);
gtf.Prototype("fashion", "exp5");
gtf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet169", freeze_base_network=True, num_epochs=5);
gtf.Train()
###Output
_____no_output_____
###Markdown
Experiment 6 with Densenet201
###Code
%%capture
gtf = prototype(verbose=1);
gtf.Prototype("fashion", "exp6");
gtf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet201", freeze_base_network=True, num_epochs=5);
gtf.Train()
###Output
_____no_output_____
###Markdown
Let's compare the performance of the Gluon backend across the Densenet architectures.
###Code
ctf = compare(verbose=1);
ctf.Comparison("Fashion_Gluon_Densenet");
ctf.Add_Experiment("fashion", "exp4");
ctf.Add_Experiment("fashion", "exp5");
ctf.Add_Experiment("fashion", "exp6");
ctf.Generate_Statistics();
###Output
_____no_output_____
###Markdown
We can also compare how Pytorch and Gluon fared with our training, but before that let's use the Keras backend to train the Densenets and compare all three frameworks together.

Keras with Monk
###Code
from keras_prototype import prototype
###Output
_____no_output_____
###Markdown
Experiment 7 with Densenet121
###Code
%%capture
ktf = prototype(verbose=1);
ktf.Prototype("fashion", "exp7");
ktf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet121", freeze_base_network=True, num_epochs=5);
ktf.Train()
###Output
_____no_output_____
###Markdown
Experiment 8 with Densenet169
###Code
%%capture
ktf = prototype(verbose=1);
ktf.Prototype("fashion", "exp8");
ktf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet169", freeze_base_network=True, num_epochs=5);
ktf.Train()
###Output
_____no_output_____
###Markdown
Experiment 9 with Densenet201
###Code
%%capture
ktf = prototype(verbose=1);
ktf.Prototype("fashion", "exp9");
ktf.Default(dataset_path="./mod_dataset/images", path_to_csv="./mod_dataset/subCategory_cleaned.csv", model_name="densenet201", freeze_base_network=True, num_epochs=5);
ktf.Train()
###Output
_____no_output_____
###Markdown
Compare experiments

After using different architectures and backend frameworks, let's compare their performance on accuracy, losses and resource usage.
###Code
ctf = compare(verbose=1);
ctf.Comparison("Fashion_Densenet_Compare");
ctf.Add_Experiment("fashion", "exp1");
ctf.Add_Experiment("fashion", "exp2");
ctf.Add_Experiment("fashion", "exp3");
ctf.Add_Experiment("fashion", "exp4");
ctf.Add_Experiment("fashion", "exp5");
ctf.Add_Experiment("fashion", "exp6");
ctf.Add_Experiment("fashion", "exp7");
ctf.Add_Experiment("fashion", "exp8");
ctf.Add_Experiment("fashion", "exp9");
ctf.Generate_Statistics();
###Output
_____no_output_____
###Markdown
You can find the generated plots inside **workspace/comparison/Fashion_Densenet_Compare**. Let's visualise the training accuracy and GPU utilisation plots.
###Code
from IPython.display import Image
Image('workspace/comparison/Fashion_Densenet_Compare/train_accuracy.png')
Image('workspace/comparison/Fashion_Densenet_Compare/stats_training_time.png')
###Output
_____no_output_____ |
doc/examples/contains_debug.ipynb | ###Markdown
Calculate Bearing per Row as Average

Note that the `asending` and `dsending` variables refer to the direction of the moving window with regard to the bearing calculation, ***not*** the ascending/descending paths of the satellite orbit.
###Code
asending = bearing(ls8.lat_CTR[248:497].values, ls8.lon_CTR[248:497].values,
ls8.lat_CTR[247:496].values, ls8.lon_CTR[247:496].values)
# 180 degree offset
dsending = bearing(ls8.lat_CTR[247:496].values, ls8.lon_CTR[247:496].values,
ls8.lat_CTR[248:497].values, ls8.lon_CTR[248:497].values) + 180.
means = np.mean([asending[0:-1], dsending[1:]], axis=0)
# Replace invalid first value with non-averaged valid value
means[0] = dsending[1]
# Same for last, but on other array
means[-1] = asending[-2]
len(means)
plot(bearing(ls8.lat_CTR[1:248].values, ls8.lon_CTR[1:248].values,
ls8.lat_CTR[0:247].values, ls8.lon_CTR[0:247].values))
plot(means)
###Output
_____no_output_____
###Markdown
Comparing Landsat footprints
###Code
def get_corners(i):
corners = np.zeros((4,2))
row = ls8[i:i+1]
corners[0,1] = row.lat_UL.values
corners[1,1] = row.lat_UR.values
corners[2,1] = row.lat_LL.values
corners[3,1] = row.lat_LR.values
corners[0,0] = row.lon_UL.values
corners[1,0] = row.lon_UR.values
corners[2,0] = row.lon_LL.values
corners[3,0] = row.lon_LR.values
return corners
ls8[20:21]
get_corners(20)
def compare(idx):
ref = get_corners(idx)
scene = ls8[idx:idx+1]
corners, contains, _, _, _ = representation(radians(scene.lon_CTR.values)[0],
radians(scene.lat_CTR.values)[0],
means[scene.row.values -1][0],
185, 180)
calc = np.degrees(corners)
lat_error, lon_error = np.mean(np.abs(ref[ref[:,0].argsort()] - calc[calc[:,0].argsort()]), axis=0)
return lat_error, lon_error
lats_err, lons_err = [], []
for i in range(19,100):
lat, lon = compare(i)
lats_err.append(lat), lons_err.append(lon)
(np.array(lats_err) < 0.01).all() == True
plot(lats_err)
plot(lons_err)
# Swapping swath dimensions...
scene = ls8[20:21]
# Change scene index above... need i and i+1 since pandas expects slice
corners1, contains, _, _, _ = representation(radians(scene.lon_CTR.values)[0],
radians(scene.lat_CTR.values)[0],
means[scene.row.values -1][0],
len_lon=185,
len_lat=180)
np.degrees(corners1)
plot(get_corners(20)[:,0],get_corners(20)[:,1],'k*')
plot(np.degrees(corners1[:,0]),
np.degrees(corners1[:,1]), 'b*')
# Swapping swath dimensions...
scene = ls8[50:51]
# Change scene index above... need i and i+1 since pandas expects slice
corners1, contains, _, _, _ = representation(radians(scene.lon_CTR.values)[0],
radians(scene.lat_CTR.values)[0],
means[scene.row.values -1][0],
len_lon=185,
len_lat=180)
np.degrees(corners1)
plot(get_corners(50)[:,0],get_corners(50)[:,1],'k*')
plot(np.degrees(corners1[:,0]),
np.degrees(corners1[:,1]), 'b*')
###Output
_____no_output_____
###Markdown
Jonathan's Contains Code. Let $C$ be the center point and $\theta$ the tilt angle; there is a corresponding "unit tangent vector" to the sphere with that tilt. Call this vector $v$. Moving $C$ along $v$ by a distance $D$ on a sphere of radius $R$ is something like $$P_1 = \cos(A) \cdot C + R \cdot \sin(A) \cdot v,$$ where $A = D/R$ is the angle in radians corresponding to 90 km of arc here. Since $v$ is a unit vector orthogonal to $C$ and $|C| = R$, this keeps $P_1$ on the sphere: $|P_1|^2 = \cos^2(A)\,R^2 + R^2\sin^2(A) = R^2$. $P_1$ is `midpt_1` in the code below. Moving in the direction $-v$ yields $$P_2 = \cos(A) \cdot C - R \cdot \sin(A) \cdot v,$$ which is referred to as `midpt_2` below.
###Code
import numpy as np
import functools
lat_deg, lon_deg = 77.875, -20.975
lat, lon, R, theta = lat_deg*(2*np.pi)/360, lon_deg*(2*np.pi)/360, 6371, -70 * 2 * np.pi / 360
boulder_lat, boulder_lon = lat, lon
x, y, z = (R * np.cos(lat) * np.sin(lon), R * np.cos(lat) * np.cos(lon), R * np.sin(lat))
C = np.array([x,y,z])
###Output
_____no_output_____
###Markdown
Computing $v$ from $\theta$. At a point $C=[x,y,z]$, a tilt can be thought of as moving through lat and lon along a line with direction vector $d=(d_{lon}, d_{lat})$, so in the parameter $t$ we have $$\big(x(t), y(t), z(t)\big) = \Big(R\cos(lat_0 + t\,d_{lat})\cos(lon_0 + t\,d_{lon}),\; R\cos(lat_0 + t\,d_{lat})\sin(lon_0 + t\,d_{lon}),\; R\sin(lat_0 + t\,d_{lat})\Big).$$ Differentiating with respect to $t$ (ignoring the $R$ scaling, as we want a normalized $v$) we see $v$ is parallel to $$R\cdot\Big(-\sin(lat_0)\cos(lon_0)\,d_{lat} - \cos(lat_0)\sin(lon_0)\,d_{lon},\; -\sin(lat_0)\sin(lon_0)\,d_{lat} + \cos(lat_0)\cos(lon_0)\,d_{lon},\; \cos(lat_0)\,d_{lat}\Big).$$ (Note that the code below uses the convention $x = R\cos(lat)\sin(lon)$, $y = R\cos(lat)\cos(lon)$, so its first two components appear in the opposite order.)
###Code
dlat, dlon = np.sin(theta), np.cos(theta)
v = np.array([-np.sin(lat) * np.sin(lon) * dlat + np.cos(lat) * np.cos(lon) * dlon,
-np.sin(lat) * np.cos(lon) * dlat - np.cos(lat) * np.sin(lon) * dlon,
np.cos(lat) * dlat])
v /= np.linalg.norm(v)
np.sum(v*C)
###Output
_____no_output_____
###Markdown
The angle $A$ is $$\frac{A}{2\pi} = \frac{90km}{2 \pi \cdot 6371km}$$
###Code
A = 90/R
A
midpt_1 = np.cos(A) * C + R * np.sin(A) * v
np.linalg.norm(midpt_1 - C), np.dot(midpt_1, C) / R**2, np.cos(A)
###Output
_____no_output_____
###Markdown
To find the next corner, we move $\perp$ to $v$. That direction can be found by $$v \times P_1.$$ Let $v^{\perp}$ be the unit vector in this direction.
###Code
v_perp = np.cross(midpt_1, v) # == np.cross(C, v)
v_perp /= np.linalg.norm(v_perp)
v_perp
###Output
_____no_output_____
###Markdown
We will then move 92.5 km from $P_1$ in the direction $v^{\perp}$, giving a corner $$\cos(B) \cdot P_1 + R \cdot \sin(B) \cdot v^{\perp},$$ where $$\frac{B}{2\pi} = \frac{92.5\,\text{km}}{2\pi \cdot 6371\,\text{km}},$$ i.e. $B = 92.5/6371$ radians, matching the code below.
###Code
B = 92.5/6371
corners = [np.cos(B) * midpt_1 + R * np.sin(B) * v_perp]
corners.append(np.cos(B) * midpt_1 - R * np.sin(B) * v_perp)
v_perp = np.cross(midpt_1, v) # == np.cross(C, v)
v_perp /= np.linalg.norm(v_perp)
v_perp
midpt_2 = np.cos(A) * C - R * np.sin(A) * v
corners.append(np.cos(B) * midpt_2 + R * np.sin(B) * v_perp)
corners.append(np.cos(B) * midpt_2 - R * np.sin(B) * v_perp)
corners
[np.linalg.norm(corner) for corner in corners]
###Output
_____no_output_____
###Markdown
We can find another corner, $$\cos(B) \cdot P_1 - R \cdot \sin(B) \cdot v^{\perp},$$ and similarly the remaining corners. Now convert back to lat/lon.
###Code
lat_degs = [np.arcsin(z_ / R) / (2 * np.pi) * 360 for x_, y_, z_ in corners]
lat_degs
lon_degs = [np.arctan2(x_ / R, y_ / R) / (2 * np.pi) * 360 for x_, y_, z_ in corners]
lon_degs
%matplotlib inline
import matplotlib.pyplot as plt
plt.scatter(lon_degs, lat_degs)
plt.scatter([lon_deg], [lat_deg])
###Output
_____no_output_____
###Markdown
A representation of the scene that implements `contains`
###Code
def representation(center_lon, # in radians
center_lat, # in radians
instrument_tilt, # in degrees, rotation clockwise
len_lon=180, # extent in km
len_lat=185, # extent in km
R=6371): # "radius" of earth
tilt_deg = instrument_tilt * 2 * np.pi / 360
x, y, z = (R * np.cos(center_lat) *
np.sin(center_lon),
R * np.cos(center_lat) *
np.cos(center_lon), R * np.sin(center_lat))
C = np.array([x,y,z]) # center of scene
dlat, dlon = np.sin(-tilt_deg), np.cos(-tilt_deg)
dir_lon = np.array([-np.sin(center_lat) * np.sin(center_lon) * dlat +
np.cos(center_lat) * np.cos(center_lon) * dlon,
-np.sin(center_lat) * np.cos(center_lon) * dlat -
np.cos(center_lat) * np.sin(center_lon) * dlon,
np.cos(center_lat) * dlat])
dir_lon /= np.linalg.norm(dir_lon)
A = len_lon / 2 / R
midpt_1 = np.cos(A) * C + R * np.sin(A) * dir_lon
dir_lat = np.cross(midpt_1, dir_lon)
dir_lat /= np.linalg.norm(dir_lat)
B = len_lat/ 2 / R
corners = [np.cos(B) * midpt_1 + R * np.sin(B) * dir_lat]
corners.append(np.cos(B) * midpt_1 - R * np.sin(B) * dir_lat)
midpt_2 = np.cos(A) * C - R * np.sin(A) * dir_lon
corners.append(np.cos(B) * midpt_2 + R * np.sin(B) * dir_lat)
corners.append(np.cos(B) * midpt_2 - R * np.sin(B) * dir_lat)
corners = np.array(corners)
corners_lon_lat = np.array([(np.arctan2(x_ / R, y_ / R),
np.arcsin(z_ / R)) for x_, y_, z_ in corners])
# now work out halfspace
# these are the edge segments in lon/lat space
supports = [corners_lon_lat[0]-corners_lon_lat[1],
corners_lon_lat[0]-corners_lon_lat[2],
corners_lon_lat[1]-corners_lon_lat[3],
corners_lon_lat[2]-corners_lon_lat[3]]
# normals to each edge segment
normals = np.array([(s[1],-s[0]) for s in supports])
pts = [corners_lon_lat[0], # a point within each edge
corners_lon_lat[0],
corners_lon_lat[1],
corners_lon_lat[3]]
bdry_values = np.array([np.sum(n * p) for n, p in zip(normals, pts)])
center_values = [np.sum(n * [center_lon, center_lat]) for n in normals]
center_signs = np.sign(center_values - bdry_values)
def _check(normals, center_signs, bdry_values, lon_lat_vals):
normal_mul = np.asarray(lon_lat_vals).dot(normals.T)
values_ = normal_mul - bdry_values[None,:]
signs_ = np.sign(values_) * center_signs[None,:]
return np.squeeze(np.all(signs_ == 1, 1))
_check = functools.partial(_check, normals, center_signs, bdry_values)
return corners_lon_lat, _check, normals, bdry_values, center_signs
###Output
_____no_output_____
###Markdown
What needs to be stored: we need to store `normals`, `bdry_values` and `center_signs` for each scene.
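A minimal sketch of one way to persist them, assuming one compressed `.npz` file per scene (the file layout and function names here are illustrative assumptions, not part of the notebook):

```python
import numpy as np

def save_scene(path, normals, bdry_values, center_signs):
    # normals: (4, 2), bdry_values: (4,), center_signs: (4,)
    np.savez_compressed(path, normals=normals,
                        bdry_values=bdry_values, center_signs=center_signs)

def load_scene(path):
    d = np.load(path)
    return d["normals"], d["bdry_values"], d["center_signs"]
```

The `contains` callable can then be rebuilt from the loaded arrays with `functools.partial(_check, normals, center_signs, bdry_values)`, exactly as in the cells below.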
###Code
corners, contains, normals, bdry_values, center_signs = representation(radians(-10.647337),
radians(79.129883),
49.27267632,
len_lat=200,
len_lon=200)
###Output
_____no_output_____
###Markdown
How `contains` is determined - the function can check several query points at once.
###Code
def _check(normals, center_signs, bdry_values, lon_lat_vals):
normal_mul = np.asarray(lon_lat_vals).dot(normals.T)
values_ = normal_mul - bdry_values[None,:]
signs_ = np.sign(values_) * center_signs[None,:]
return np.squeeze(np.all(signs_ == 1, 1))
import functools
contains = functools.partial(_check, normals, center_signs, bdry_values)
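# Hedged usage sketch (hypothetical query points): `contains` accepts an (N, 2)
# array of [lon, lat] pairs in radians and returns one boolean per row, e.g.
#   contains(np.radians([[-10.6, 79.1], [0.0, 0.0]]))
# A single [lon, lat] pair also works and yields a single boolean.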
###Output
_____no_output_____ |
Project Programming for Data Analysis - Affairs Dataset - The_Cheaters_Game.ipynb | ###Markdown
Project Programming for Data Analysis

**For this project you must create a data set by simulating a real-world phenomenon of your choosing. You may pick any phenomenon you wish – you might pick one that is of interest to you in your personal or professional life. Then, rather than collect data related to the phenomenon, you should model and synthesise such data using Python. We suggest you use the numpy.random package for this purpose.**

**Specifically, in this project you should:**

**• Choose a real-world phenomenon that can be measured and for which you could collect at least one-hundred data points across at least four different variables.**

> The Fair's Extramarital Affairs Data contains infidelity data, known as Fair's Affairs: cross-section data from a survey conducted by Psychology Today in 1969. Format: a data frame containing 601 observations on 9 variables.

**• Investigate the types of variables involved, their likely distributions, and their relationships with each other.**

> To investigate the types of variables involved, their likely distributions, and their relationships with each other, I have devised a game called 'The Players App / Fidelity Reckoner', which uses just four variables from the dataset: Gender, Age, Occupation and Education background.

> Using the Python module Pandas, the user's responses will be used to datamine and filter data from https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv and a percentage will be calculated for your partner's likely fidelity.

> Version 1 of The Players App / Fidelity Reckoner

> This version uses the original dataset of the 1969 survey Fair's Affairs. Playing the game of choosing a partner by sex, then by age, then by occupation, then by educational level, the user of the game is advised how many people in the survey responded this way, and what percentage of those people, according to the 1969 data, had cheated.

> The app will tell you how statistically likely it is that your partner will cheat on you. The result is based on cross-section data from a survey conducted by Psychology Today in 1969.

**• Synthesise/simulate a data set as closely matching their properties as possible.**

> CHALLENGES WITH THE FAIR'S AFFAIRS DATASET

> I shall begin this assignment with my conclusion about the data set. A cursory observation of the occupations of the people interviewed for this dataset makes me doubt that this dataset is reliable for being synthesised and extrapolated.

> 21.57% of clerical and sales workers were cheaters, from a sample group size of 204
> 30.88% of micro business owners were cheaters, from a sample group size of 68
> 31.91% of semi-skilled workers were cheaters, from a sample group size of 47
> **38.46% of small business owners were cheaters, from a sample group size of 13**
> 20.35% of students / housewives were cheaters, from a sample group size of 113
> 27.27% of technicians were cheaters, from a sample group size of 143
> **23.08% of unskilled workers were cheaters, from a sample group size of 13**

> Such a small number of small business owners and unskilled workers were interviewed, and if you break those numbers down even further, only a small number of men and women were interviewed in each of these occupation categories; break them down further still and many age categories would not be represented in these occupations at all. Sometimes it comes down to 1 or 2 males or females being representative of the whole of society for their age, gender, occupational and educational background.

> For example, only one unskilled worker was interviewed in the age bracket 0-34, and they had an affair.
I think it would be very prejudicial to assume from this dataset that all unskilled workers aged between 0-34 will cheat on their partners. Only one small business owner was interviewed in the 40-44 age bracket, and they didn't have an affair; it is also, in my view, wrong to assume that all small business owners aged 40-44 are, statistically speaking, 100% likely to be faithful to their partners.

> This leaves me with two issues: generate completely random data that doesn't take into account the answers of others in the original survey, and effectively generate nonsense, or try to extrapolate patterns in the real data that would instill prejudice against under-represented niches in the original data.

> Version 2 of The Players App / Fidelity Reckoner

> This version creates a randomised dataset which simply randomises every survey response using the unique values of every column. It does not try to synthesise similarity to the original dataset in any way.

> Version 3 of The Players App / Fidelity Reckoner

> For this version of the app I try to make the random data that the app generates for the dataset match the probabilities of the original data as closely as possible. Rather than the random choice being decided from the unique columns, it needs to be proportional to the real data:

> Filter the original dataset file according to the first random selection: a random selection from gender, e.g. male / female;
> filter all females;
> filter all females of a randomly chosen age from the previously filtered data;
> filter all females of that age and a randomly chosen occupation from the previously filtered data;
> filter all females of that age, that occupation and a randomly chosen education level from the previously filtered data;
> randomly select, from the rows remaining after the education filter, the values of 'yearsmarried', 'children', 'religiousness' and 'rating'.

> In using my method described above I hoped to create a pattern of data that is similar to the original data and can be extrapolated.

**• Detail your research and implement the simulation in a Jupyter notebook – the data set itself can simply be displayed in an output cell within the notebook. Note that this project is about simulation – you must synthesise a data set. Some students may already have some real-world data sets in their own files. It is okay to base your synthesised data set on these should you wish (please reference it if you do), but the main task in this project is to create a synthesised data set. The next section gives an example project idea.**

> In between my introduction and 'The Players App / Fidelity Reckoner' I shall show some graphical observations and filtered tables of the dataset.

Adultery - A Global Phenomenon

> 'Several scientists have offered theories for the evolution of human adultery. I have proposed that during prehistory, philandering males disproportionately reproduced, selecting for the biological underpinnings of the roving eye in contemporary men. Unfaithful females reaped economic resources from their extra-dyadic partnerships, as well as additional males to help with parenting duties if their primary partner died or deserted them. Moreover, if an ancestral woman bore a child with this extra-marital partner, she also increased genetic variety in her descendants. Infidelity had unconscious biological payoffs for both males and females throughout prehistory, thus perpetuating the biological underpinnings and taste for infidelity in both sexes today.'
Helen Fisher, https://ideas.ted.com/10-facts-about-infidelity-helen-fisher/

Data Points Used For Investigating The Adultery Phenomenon

Fair's Extramarital Affairs Data: https://vincentarelbundock.github.io/Rdatasets/doc/AER/Affairs.html

The Fair's Extramarital Affairs Data contains infidelity data, known as Fair's Affairs: cross-section data from a survey conducted by Psychology Today in 1969.

Format: a data frame containing 601 observations on 9 variables.

affairs: numeric. How often engaged in extramarital sexual intercourse during the past year? 0 = none, 1 = once, 2 = twice, 3 = 3 times, 7 = 4–10 times, 12 = monthly, 12 = weekly, 12 = daily.

gender: factor indicating gender.

age: numeric variable coding age in years: 17.5 = under 20, 22 = 20–24, 27 = 25–29, 32 = 30–34, 37 = 35–39, 42 = 40–44, 47 = 45–49, 52 = 50–54, 57 = 55 or over.

yearsmarried: numeric variable coding number of years married: 0.125 = 3 months or less, 0.417 = 4–6 months, 0.75 = 6 months–1 year, 1.5 = 1–2 years, 4 = 3–5 years, 7 = 6–8 years, 10 = 9–11 years, 15 = 12 or more years.

children: factor. Are there children in the marriage?

religiousness: numeric variable coding religiousness: 1 = anti, 2 = not at all, 3 = slightly, 4 = somewhat, 5 = very.

education: numeric variable coding level of education: 9 = grade school, 12 = high school graduate, 14 = some college, 16 = college graduate, 17 = some graduate work, 18 = master's degree, 20 = Ph.D., M.D., or other advanced degree.

occupation: numeric variable coding occupation according to the Hollingshead classification (reverse numbering).

rating: numeric variable coding self rating of marriage: 1 = very unhappy, 2 = somewhat unhappy, 3 = average, 4 = happier than average, 5 = very happy.

**Why I chose to analyse the Fair's Extramarital Affairs dataset**

Some who know me might think that it could come down to the fact that as both an analyst and a divorcee I had a personal interest in this area. Personally it is interesting analysis for me: as a person who has had a previous relationship affected by adultery, it is a cause of insecurity. However, statistical findings offer little guarantee that you can protect yourself from this phenomenon, other than that people who rate themselves as happily married are less likely to cheat (I'll provide data on this later), so just work hard at making your marriage happy!

There was no underlying reason for me choosing this dataset other than that the Iris dataset has been frequently demonstrated on our course and I wanted to try analysing something new. After much googling around for datasets I went to the course-recommended list of the Rdatasets at https://vincentarelbundock.github.io/Rdatasets/articles/data.html and the Affairs dataset was listed at the top.

Many available datasets are interest/industry specific; sometimes you have to have specialist knowledge to understand the terminology of a dataset. For example, a dataset that contains data relating to dangerous asteroids orbiting the earth contains very scientific headers, whereas the Fair's Extramarital Affairs Data contains data which can be easily understood by most lay people / non-scientific professionals. As an analyst starting out, being able to find common indicators for behaviour patterns of people can be applied to many uses.
If I hadn't chosen adultery, I might have researched the topical phenomenon of the spread of COVID; however, much of the data about COVID and the published information on the affected population is still in its infancy and observational. The Fair's Extramarital Affairs Data doesn't just profile the interviewees by age, education and occupation (in the same way that common data on COVID may show summary information like age, underlying health condition and location); it is brilliant because it attempts to provide a psychological insight into the behaviour drivers of adulterers and faithful spouses: do they consider themselves religious on a scale of 1–5, do they rate themselves as happily married on a scale of 1–5? While this is a very old 1969 data set, its column header fields could very well be applied to collecting data for understanding behaviour in other areas of social activity.

A Snapshot View Of The Data
###Code
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
%matplotlib inline
df = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
print(df.head())
###Output
Unnamed: 0 affairs gender age yearsmarried children religiousness \
0 4 0 male 37.0 10.00 no 3
1 5 0 female 27.0 4.00 no 4
2 11 0 female 32.0 15.00 yes 1
3 16 0 male 57.0 15.00 yes 5
4 23 0 male 22.0 0.75 no 2
education occupation rating
0 18 7 4
1 14 6 4
2 12 1 4
3 18 6 5
4 17 6 3
###Markdown
Affairs - Trying to find patterns in the data
###Code
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
%matplotlib inline
df = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
sns.pairplot(data=df, hue="affairs")
###Output
_____no_output_____
###Markdown
TOO MANY AFFAIRS CATEGORIES MAKE THE VISUALISATION ABOVE DIFFICULT TO READ

The problem with reading the visualisation above is that there are too many sub-categories of affairs and other data to know what a given panel means.

**Affairs data is graded numerically as follows:** how often engaged in extramarital sexual intercourse during the past year? 0 = none, 1 = once, 2 = twice, 3 = 3 times, 7 = 4–10 times, 12 = monthly, 12 = weekly, 12 = daily.

Below I have tried to reduce the categories of adultery frequencies.
###Code
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
df = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
#np.where(condition, value if condition is true, value if condition is false)
#https://www.dataquest.io/blog/tutorial-add-column-pandas-dataframe-based-on-if-else-condition/
#https://stackoverflow.com/questions/39109045/numpy-where-with-multiple-conditions/39111919
df['had_an_affair'] = np.where(df['affairs']== 0, 'None',(np.where(df['affairs']<= 3, 'between one and three', 'Four or more')))
analysed = df.groupby(['had_an_affair','gender','education','occupation']).size().reset_index(name='count')
sns.pairplot(data=analysed, hue="had_an_affair")
###Output
_____no_output_____
###Markdown
FEWER CATEGORIES = EASIER VISUALISATION

I will start by making my analysis of adultery a 'No' or 'Yes' scenario. For many lay people thinking about adultery, the biggest issue, I would say, is whether their partner has broken their trust.

Committed adultery - YES
Stayed faithful - NO
###Code
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
pd.set_option('display.max_rows', None)
df = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
df['affair_yes_or_no'] = np.where(df['affairs']!= 0, 'Yes', 'No')
sns.pairplot(data=df, hue="affair_yes_or_no")
###Output
C:\Users\Owner\anaconda3\lib\site-packages\seaborn\distributions.py:283: UserWarning: Data must have variance to compute a kernel density estimate.
warnings.warn(msg, UserWarning)
###Markdown
While it may be difficult to understand the data above (committed adultery in orange dots, faithful people represented by blue dots), understand this: the orange dots are prevalent in most, if not all, areas of the graph, which means that sadly adultery has manifested itself in all segments of occupation, age and education, even among the religious and people who rate themselves as happily married.

This makes the adultery dataset very different from Fisher's Iris dataset, where you can easily segregate types of data as being different types of flower using a Seaborn pair plot. The best that I can hope to achieve from this study is to filter down the data to try to indicate in which sectors of society adultery is more or less prevalent. One thing of encouragement a reader can see when the data is presented in written form below is that more people admit to having stayed faithful than admit to having committed adultery.

AMENDING THE DATA NOMINAL NUMBER DESCRIPTIONS

Hollingshead Four-Factor Index of Socioeconomic Status (SES-Child): http://fcon_1000.projects.nitrc.org/indi/enhanced/assessments/ses-child.html

9 = higher executive, proprietor of large businesses, major professional,
8 = administrators, lesser professionals, proprietor of medium-sized business,
7 = smaller business owners, farm owners, managers, minor professionals,
6 = technicians, semi-professionals, small business owners (business valued at 50,000-70,000 dollars),
5 = clerical and sales workers, small farm and business owners (business valued at 25,000-50,000 dollars),
4 = smaller business owners,
3 = machine operators and semi-skilled workers,
2 = unskilled workers,
1 = farm laborers, menial service workers, students, housewives (dependent on welfare, no regular occupation),
0 = not applicable or unknown

**In my code, rather than show the CSV results as numbers 0-9, I have used an abbreviation of the terminology used in the Hollingshead Four-Factor Index of Socioeconomic Status:**

9: 'higher executive/large business', 8: 'administrators', 7: 'small business owners', 6: 'technicians', 5: 'clerical and sales workers', 4: 'micro business owners', 3: 'semi-skilled workers', 2: 'unskilled workers', 1: 'students, housewives', 0: 'not applicable'

**For graphing and table purposes I have also changed the original nominal number codes to wordier descriptions.**
###Code
import pandas as pd
import numpy as np
import seaborn as sns
%matplotlib inline
pd.set_option('display.max_rows', None)
df = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
df['affair_yes_or_no'] = np.where(df['affairs']!= 0, 'Yes', 'No')
#Change the values in the education column
new_education_column = df.education.replace({
9: 'grade school',
12: 'high school graduate',
14: 'some college',
16: 'college graduate',
17: 'some graduate work',
18: 'masters degree',
20: 'Ph.D., M.D.' })
df.education = new_education_column
new_occupation_column = df.occupation.replace({
9: 'higher executive/large business',
8: 'administrators',
7: 'small business owners',
6: 'technicians',
5: 'clerical and sales workers',
4: 'micro business owners',
3: 'semi-skilled workers',
2: 'unskilled workers',
1: 'students, housewives',
0: 'not applicable'})
df.occupation = new_occupation_column
new_rating_column = df.rating.replace({
1: 'very unhappy',
2: 'somewhat unhappy',
3: 'average',
4: 'happier than average',
5: 'very happy'})
new_age_column = df.age.replace({
17.5: 'under 20',
22: '20–24',
27: '25–29',
32: '0–34',
37: '35–39',
42: '40–44',
47: '45–49',
52: '50–54',
57: '55 or over'})
df.age = new_age_column
df.rating = new_rating_column
gender_affairs = df.groupby(['gender','affair_yes_or_no']).size().reset_index(name='counts')
occupation_affairs = df.groupby(['occupation','affair_yes_or_no']).size().reset_index(name='counts')
rating_affairs = df.groupby(['rating','affair_yes_or_no']).size().reset_index(name='counts')
age_affairs = df.groupby(['age','affair_yes_or_no']).size().reset_index(name='counts')
age_occupation_affairs = df.groupby(['age','occupation','affair_yes_or_no']).size().reset_index(name='counts')
rating_occupation_affair = df.groupby(['rating','occupation','affair_yes_or_no']).size().reset_index(name='counts')
rating_occupation_very_happy = rating_occupation_affair[(rating_occupation_affair['rating']=='very happy')]
###Output
_____no_output_____
###Markdown
Gender Affairs
###Code
print(gender_affairs)
###Output
gender affair_yes_or_no counts
0 female No 243
1 female Yes 72
2 male No 208
3 male Yes 78
###Markdown
Age Affairs. As you will see from the table below, affairs happen in every age group; however, very little data was collected on people under 20 (only 6 people were interviewed), and when this data is further sub-categorised by gender, education and occupation, an individual can end up representing the whole of society in their profile.
###Code
print(age_affairs)
###Output
age affair_yes_or_no counts
0 0–34 No 77
1 0–34 Yes 38
2 20–24 No 101
3 20–24 Yes 16
4 25–29 No 117
5 25–29 Yes 36
6 35–39 No 65
7 35–39 Yes 23
8 40–44 No 38
9 40–44 Yes 18
10 45–49 No 16
11 45–49 Yes 7
12 50–54 No 15
13 50–54 Yes 6
14 55 or over No 19
15 55 or over Yes 3
16 under 20 No 3
17 under 20 Yes 3
###Markdown
Occupation Affairs
###Code
print(occupation_affairs)
###Output
occupation affair_yes_or_no counts
0 clerical and sales workers No 160
1 clerical and sales workers Yes 44
2 micro business owners No 47
3 micro business owners Yes 21
4 semi-skilled workers No 32
5 semi-skilled workers Yes 15
6 small business owners No 8
7 small business owners Yes 5
8 students, housewives No 90
9 students, housewives Yes 23
10 technicians No 104
11 technicians Yes 39
12 unskilled workers No 10
13 unskilled workers Yes 3
###Markdown
Affairs Grouped By Age & Occupation. If you look at row 12 you will see that only one unskilled worker was interviewed in the age bracket 0-34, and they had an affair. I think it would be very prejudicial to assume from this dataset that all unskilled workers aged between 0-34 will cheat on their partners. If you look at row 53 you will see that only one small business owner was interviewed in the 40-44 age bracket, and they didn't have an affair; it is also, in my view, wrong to assume that all small business owners aged 40-44 are statistically 100% likely to be faithful to their partners.
###Code
print(age_occupation_affairs)
###Output
age occupation affair_yes_or_no counts
0 0–34 clerical and sales workers No 27
1 0–34 clerical and sales workers Yes 10
2 0–34 micro business owners No 5
3 0–34 micro business owners Yes 8
4 0–34 semi-skilled workers No 6
5 0–34 semi-skilled workers Yes 3
6 0–34 small business owners No 1
7 0–34 small business owners Yes 1
8 0–34 students, housewives No 20
9 0–34 students, housewives Yes 7
10 0–34 technicians No 18
11 0–34 technicians Yes 8
12 0–34 unskilled workers Yes 1
13 20–24 clerical and sales workers No 41
14 20–24 clerical and sales workers Yes 3
15 20–24 micro business owners No 17
16 20–24 micro business owners Yes 1
17 20–24 semi-skilled workers No 9
18 20–24 semi-skilled workers Yes 5
19 20–24 students, housewives No 21
20 20–24 students, housewives Yes 2
21 20–24 technicians No 11
22 20–24 technicians Yes 3
23 20–24 unskilled workers No 2
24 20–24 unskilled workers Yes 2
25 25–29 clerical and sales workers No 42
26 25–29 clerical and sales workers Yes 12
27 25–29 micro business owners No 12
28 25–29 micro business owners Yes 3
29 25–29 semi-skilled workers No 8
30 25–29 semi-skilled workers Yes 5
31 25–29 small business owners Yes 2
32 25–29 students, housewives No 28
33 25–29 students, housewives Yes 7
34 25–29 technicians No 23
35 25–29 technicians Yes 7
36 25–29 unskilled workers No 4
37 35–39 clerical and sales workers No 22
38 35–39 clerical and sales workers Yes 9
39 35–39 micro business owners No 7
40 35–39 micro business owners Yes 1
41 35–39 semi-skilled workers No 2
42 35–39 small business owners No 3
43 35–39 small business owners Yes 2
44 35–39 students, housewives No 11
45 35–39 technicians No 20
46 35–39 technicians Yes 11
47 40–44 clerical and sales workers No 9
48 40–44 clerical and sales workers Yes 5
49 40–44 micro business owners No 3
50 40–44 micro business owners Yes 2
51 40–44 semi-skilled workers No 6
52 40–44 semi-skilled workers Yes 1
53 40–44 small business owners No 1
54 40–44 students, housewives No 5
55 40–44 students, housewives Yes 5
56 40–44 technicians No 14
57 40–44 technicians Yes 5
58 45–49 clerical and sales workers No 6
59 45–49 clerical and sales workers Yes 2
60 45–49 micro business owners No 1
61 45–49 micro business owners Yes 2
62 45–49 small business owners No 1
63 45–49 students, housewives No 1
64 45–49 technicians No 6
65 45–49 technicians Yes 3
66 45–49 unskilled workers No 1
67 50–54 clerical and sales workers No 6
68 50–54 clerical and sales workers Yes 2
69 50–54 micro business owners Yes 1
70 50–54 small business owners No 1
71 50–54 students, housewives No 4
72 50–54 students, housewives Yes 1
73 50–54 technicians No 3
74 50–54 technicians Yes 2
75 50–54 unskilled workers No 1
76 55 or over clerical and sales workers No 6
77 55 or over clerical and sales workers Yes 1
78 55 or over micro business owners No 1
79 55 or over micro business owners Yes 2
80 55 or over semi-skilled workers No 1
81 55 or over small business owners No 1
82 55 or over technicians No 8
83 55 or over unskilled workers No 2
84 under 20 clerical and sales workers No 1
85 under 20 micro business owners No 1
86 under 20 micro business owners Yes 1
87 under 20 semi-skilled workers Yes 1
88 under 20 students, housewives Yes 1
89 under 20 technicians No 1
###Markdown
Rating Affairs. Just as a very brief deviation from the main four variables that I am working with, I had a quick look at the marriage rating. Out of 232 people who said they were very happy in their marriage, 34 said they had cheated. That is 14.65%!
###Code
print(rating_affairs)
###Output
rating affair_yes_or_no counts
0 average No 66
1 average Yes 27
2 happier than average No 146
3 happier than average Yes 48
4 somewhat unhappy No 33
5 somewhat unhappy Yes 33
6 very happy No 198
7 very happy Yes 34
8 very unhappy No 8
9 very unhappy Yes 8
###Markdown
A Further Analysis Of The Occupations Of Very Happy Marriages. Every occupation group shows some percentage of cheating, even among those who rate their marriage as very happy.
###Code
print(rating_occupation_very_happy)
###Output
rating occupation affair_yes_or_no counts
40 very happy clerical and sales workers No 74
41 very happy clerical and sales workers Yes 11
42 very happy micro business owners No 26
43 very happy micro business owners Yes 5
44 very happy semi-skilled workers No 13
45 very happy semi-skilled workers Yes 3
46 very happy students, housewives No 39
47 very happy students, housewives Yes 5
48 very happy technicians No 44
49 very happy technicians Yes 8
50 very happy unskilled workers No 2
51 very happy unskilled workers Yes 2
###Markdown
Affairs Grouped By Rating & Occupation
###Code
print(rating_occupation_affair)
###Output
rating occupation affair_yes_or_no counts
40 very happy clerical and sales workers No 74
41 very happy clerical and sales workers Yes 11
42 very happy micro business owners No 26
43 very happy micro business owners Yes 5
44 very happy semi-skilled workers No 13
45 very happy semi-skilled workers Yes 3
46 very happy students, housewives No 39
47 very happy students, housewives Yes 5
48 very happy technicians No 44
49 very happy technicians Yes 8
50 very happy unskilled workers No 2
51 very happy unskilled workers Yes 2
52 very unhappy clerical and sales workers No 2
53 very unhappy clerical and sales workers Yes 2
54 very unhappy micro business owners Yes 2
55 very unhappy semi-skilled workers No 2
56 very unhappy semi-skilled workers Yes 1
57 very unhappy students, housewives No 2
58 very unhappy students, housewives Yes 2
59 very unhappy technicians No 2
60 very unhappy technicians Yes 1
###Markdown
A more detailed examination of percentages of occupational affairs. The number counts above are fine, but it is useful to understand the counts of individuals as a percentage of their group; see the printout of the code below.
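The same per-occupation rates can also be computed more compactly with a row-normalised crosstab; a minimal sketch (using the raw numeric occupation codes rather than the renamed labels):

```python
import pandas as pd
import numpy as np

df = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
df['affair_yes_or_no'] = np.where(df['affairs'] != 0, 'Yes', 'No')
# Share of 'Yes'/'No' within each occupation code, as fractions of that group
rates = pd.crosstab(df['occupation'], df['affair_yes_or_no'], normalize='index')
print((rates['Yes'] * 100).round(2))
```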
###Code
import pandas as pd
import numpy as np
pd.set_option('display.max_rows', None)
df = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
"""-------------------------------Analysing Cheaters ---------------------------------------------------------------"""
df = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
cheaters = df[(df['affairs']>=1)]
cheaters = cheaters[['gender','education','occupation','age']]
#Change the values in the education column
new_education_column = cheaters.education.replace({
9: 'grade school',
12: 'high school graduate',
14: 'some college',
16: 'college graduate',
17: 'some graduate work',
18: 'masters degree',
20: 'Ph.D., M.D.' })
cheaters.education = new_education_column
new_occupation_column = cheaters.occupation.replace({
9: 'higher executive/large business',
8: 'administrators',
7: 'small business owners',
6: 'technicians',
5: 'clerical and sales workers',
4: 'micro business owners',
3: 'semi-skilled workers',
2: 'unskilled workers',
1: 'students, housewives',
0: 'not applicable'})
cheaters.occupation = new_occupation_column
#print(cheaters.columns)
#print(cheaters)
ac = cheaters.groupby(['occupation']).size().reset_index(name='counts')
aq = ac.sort_values(by=['occupation'])
"""-------------------------------Analysing Faithful ---------------------------------------------------------------"""
faithfull = df[(df['affairs']==0)]
faithfull = faithfull[['gender','education','occupation','age']]
#Change the values in the education column
new_education_column2 = faithfull.education.replace({9: 'grade school', 12: 'high school graduate', 14: 'some college', 16: 'college graduate', 17: 'some graduate work', 18: 'masters degree', 20: 'Ph.D., M.D.' })
faithfull.education = new_education_column2
new_occupation_column2 = faithfull.occupation.replace({
9: 'higher executive/large business',
8: 'administrators',
7: 'small business owners',
6: 'technicians',
5: 'clerical and sales workers',
4: 'micro business owners',
3: 'semi-skilled workers',
2: 'unskilled workers',
1: 'students, housewives',
0: 'not applicable'})
faithfull.occupation = new_occupation_column2
#print(faithfull.columns)
#print(faithfull)
acb = faithfull.groupby(['occupation']).size().reset_index(name='counts')
aq2 = acb.sort_values(by=['occupation'])
"""-------------------------------Print Setup ---------------------------------------------------------------"""
print("A Count Of Occupation of Faithfull")
print(aq2)
print("\n A Count Of Occupation of Cheaters")
print(aq)
#https://www.shanelynn.ie/select-pandas-dataframe-rows-and-columns-using-iloc-loc-and-ix/
clerical_and_sales_workers_cheaters = aq.iloc[:1,1]
clerical_and_sales_workers_faithful= aq2.iloc[:1,1]
micro_business_owners_cheaters = aq.iloc[1:2,1]
micro_busines_owners_workers_faithful = aq2.iloc[1:2,1]
semi_skilled_workers_cheaters = aq.iloc[2:3,1]
semi_skilled_workers_faithful = aq2.iloc[2:3,1]
small_business_owners_cheaters = aq.iloc[3:4,1]
small_business_owners_faithful = aq2.iloc[3:4,1]
students_housewives_cheaters = aq.iloc[4:5,1]
students_housewives_faithful = aq2.iloc[4:5,1]
technicians_cheaters = aq.iloc[5:6,1]
technicians_faithful = aq2.iloc[5:6,1]
unskilled_workers_cheaters = aq.iloc[6:7,1]
unskilled_workers_faithful = aq2.iloc[6:7,1]
total_clerical = (int(clerical_and_sales_workers_cheaters)+int(clerical_and_sales_workers_faithful))
total_micro_business = (int(micro_business_owners_cheaters)+int(micro_busines_owners_workers_faithful))
total_semi_skilled_workers = (int(semi_skilled_workers_cheaters)+int(semi_skilled_workers_faithful))
total_small_business_owners = (int(small_business_owners_cheaters)+int(small_business_owners_faithful))
total_students_housewives = (int(students_housewives_cheaters)+int(students_housewives_faithful))
total_technicians = (int(technicians_cheaters)+int(technicians_faithful))
total_unskilled_workers = (int(unskilled_workers_cheaters)+int(unskilled_workers_faithful))
pc_clerical_cheats = int(clerical_and_sales_workers_cheaters) / total_clerical
pc_micro_business_cheats = int(micro_business_owners_cheaters) / total_micro_business
pc_semi_skilled_workers_cheats = int(semi_skilled_workers_cheaters) / total_semi_skilled_workers
pc_small_business_owners_cheats = int(small_business_owners_cheaters) / total_small_business_owners
pc_students_housewives_cheats = int(students_housewives_cheaters) / total_students_housewives
pc_technicians_cheats = int(technicians_cheaters) / total_technicians
pc_unskilled_workers_cheats = int(unskilled_workers_cheaters) / total_unskilled_workers
#a_number = percentage_clerical_cheats
#percentage = "{:.0%}".format(a_number)
#print(percentage)
#print(int(clerical_and_sales_workers_cheaters))
#print(int(clerical_and_sales_workers_faithful))
#print(int(micro_business_owners_cheaters))
#print(int(micro_busines_owners_workers_faithful))
print("-------------------------------------------------------------------------------")
print ("{:.2%} clerical and sales workers were cheaters, from sample group size of {}".format(pc_clerical_cheats,total_clerical))
print ("{:.2%} micro business owners were cheaters, from sample group size of {}".format(pc_micro_business_cheats,total_micro_business))
print ("{:.2%} semi-skilled workers were cheaters, from sample group size of {}".format(pc_semi_skilled_workers_cheats,total_semi_skilled_workers))
print ("{:.2%} small_business_owners were cheaters, from sample group size of {}".format(pc_small_business_owners_cheats,total_small_business_owners))
print ("{:.2%} students / housewives were cheaters, from sample group size of {}".format(pc_students_housewives_cheats,total_students_housewives))
print ("{:.2%} technicians were cheaters, from sample group size of {}".format(pc_technicians_cheats,total_technicians))
print ("{:.2%} unskilled workers were cheaters, from sample group size of {}".format(pc_unskilled_workers_cheats,total_unskilled_workers))
###Output
A Count Of Occupation of Faithfull
occupation counts
0 clerical and sales workers 160
1 micro business owners 47
2 semi-skilled workers 32
3 small business owners 8
4 students, housewives 90
5 technicians 104
6 unskilled workers 10
A Count Of Occupation of Cheaters
occupation counts
0 clerical and sales workers 44
1 micro business owners 21
2 semi-skilled workers 15
3 small business owners 5
4 students, housewives 23
5 technicians 39
6 unskilled workers 3
-------------------------------------------------------------------------------
21.57% clerical and sales workers were cheaters, from sample group size of 204
30.88% micro business owners were cheaters, from sample group size of 68
31.91% semi-skilled workers were cheaters, from sample group size of 47
38.46% small_business_owners were cheaters, from sample group size of 13
20.35% students / housewives were cheaters, from sample group size of 113
27.27% technicians were cheaters, from sample group size of 143
23.08% unskilled workers were cheaters, from sample group size of 13
###Markdown
The Players App / Fidelity Reckoner

To investigate the types of variables involved, their likely distributions, and their relationships with each other, I have devised a game which uses just four variables from the dataset: **Gender, Age, Occupation and Education background**. I have called it 'The Players App / Fidelity Reckoner'.

When the code below is run, you will be asked four questions about the **Gender, Age, Occupation and Education background** of your current / potential partner. Using the Python module Pandas, the user's responses will be used to datamine and filter data from https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv and a percentage will be calculated for your partner's likely fidelity.

The data used was gathered in the infidelity survey known as Fair's Affairs; the app will tell you how statistically likely it is that your partner will cheat on you. The result is based on cross-section data from a survey conducted by Psychology Today in 1969.

**To play the game you will need to run the code below!**

The Players App / Fidelity Reckoner - VERSION 1 - Reckoner Uses The Fair's Affairs 1969 Dataset
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
#%matplotlib inline
import matplotlib.image as mpimg # used for being able to add background images to graphs
for_ref = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
def dictionary_generator(column):
dictionary_choice = {}
count = 1
for item in column:
print(count, item)
dictionary_choice[count]=item
count +=1
return dictionary_choice
def number_selector(dictionary_choice):
number = int(input('type number to indicate your interest: '))
while number not in dictionary_choice.keys():
try:
number = int(input('type number to indicate your interest: '))
except ValueError:
number = int(input('type number to indicate your interest: '))
return number
def percentage_calculation(x):
#Faithful analysis
seriesObj2 = for_ref.apply(lambda x: True if x['affairs'] == 0 else False , axis=1)
numOfRows2 = len(seriesObj2[seriesObj2 == True].index)
percentage_faithful = (numOfRows2 / x)
#Cheat analysis
seriesObj = for_ref.apply(lambda x: True if x['affairs'] > 0 else False , axis=1)
numOfRows = len(seriesObj[seriesObj == True].index)
percentage_cheat = (numOfRows / x)
keys = ['percentage_faithful', 'percentage_cheat']
values = [percentage_faithful,percentage_cheat]
#https://www.kite.com/python/answers/how-to-rotate-axis-labels-in-matplotlib-in-python
plt.xticks(rotation=45)
plt.yticks(rotation=90)
#https://showmecode.info/matplotlib/bar/change-bar-color/
plt.bar(keys, values, color=['blue', 'red'])
plt.ylabel('Percentage')
plt.title('Fidelity Reckoner')
#https://www.kite.com/python/answers/how-save-a-matplotlib-plot-as-a-pdf-file-in-python
plt.savefig("plots.pdf")
plt.show()
print("Likelihood of person being faithful {:.2%} ".format(percentage_faithful))
print("Likelihood of person having an affair {:.2%} ".format(percentage_cheat))
new_age_column = for_ref.age.replace({
17.5: 'under 20',
22: '20–24',
27: '25–29',
32: '0–34',
37: '35–39',
42: '40–44',
47: '45–49',
52: '50–54',
57: '55 or over'})
for_ref.age = new_age_column
new_occupation_column = for_ref.occupation.replace({
9: 'higher executive/large business',
8: 'administrators',
7: 'small business owners',
6: 'technicians',
5: 'clerical and sales workers',
4: 'micro business owners',
3: 'semi-skilled workers',
2: 'unskilled workers',
1: 'students, housewives',
0: 'not applicable'})
for_ref.occupation = new_occupation_column
#Change the values in the education column
new_education_column = for_ref.education.replace({
9: 'grade school',
12: 'high school graduate',
14: 'some college',
16: 'college graduate',
17: 'some graduate work',
18: 'masters degree',
20: 'Ph.D., M.D.' })
for_ref.education = new_education_column
print("The following simulation is based on Infidelity data, known as Fair's Affairs. \nCross-section data from a survey conducted by Psychology Today in 1969.")
column_gender = for_ref['gender'].unique()
print("What is the gender of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_gender)
for_processing = number_selector(dictionary_choice)#
gender_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['gender'] == gender_type]
total_row = for_ref.shape[0]
print('Found ' , total_row , gender_type,"(s)" )
percentage_calculation(total_row)
column_age = for_ref['age'].unique()
print("What is the age of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_age)
for_processing = number_selector(dictionary_choice)#
age_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['age'] == age_type]
total_row2 = for_ref.shape[0]
print('Found ' , total_row2 , gender_type,"(s) in the age", age_type , "that you are interested" )
percentage_calculation(total_row2)
column_occupation = for_ref['occupation'].unique()
print("What is the career of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_occupation)
for_processing = number_selector(dictionary_choice)#
occupation_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['occupation'] == occupation_type]
total_row3 = for_ref.shape[0]
print ('Found ' , total_row3 , gender_type,"(s) in age category",age_type," working in", occupation_type)
percentage_calculation(total_row3)
column_education = for_ref['education'].unique()
print("What is the educational background of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_education)
for_processing = number_selector(dictionary_choice)#
education_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['education'] == education_type]
total_row4 = for_ref.shape[0]
print('Found ' , total_row4 , gender_type,"(s) in age category",age_type," working in", occupation_type,
"educated at",education_type,"level")
percentage_calculation(total_row4)
#https://thispointer.com/pandas-count-rows-in-a-dataframe-all-or-those-only-that-satisfy-a-condition/
###Output
The following simulation is based on Infidelity data, known as Fair's Affairs.
Cross-section data from a survey conducted by Psychology Today in 1969.
What is the gender of your partner / potential partner?
1 male
2 female
type number to indicate your interest: 1
Found 286 male (s)
###Markdown
Synthesise/simulate a data set. To be able to synthesise/simulate a data set we first need to find the unique column values.
###Code
import pandas as pd
df = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
column_affairs = df['affairs'].unique()
column_gender = df['gender'].unique()
column_age = df['age'].unique()
column_yearsmarried = df['yearsmarried'].unique()
column_children = df['children'].unique()
column_religiousness = df['religiousness'].unique()
column_education = df['education'].unique()
column_occupation = df['occupation'].unique()
column_rating = df['rating'].unique()
print('affairs', column_affairs)
print('gender',column_gender)
print('age',column_age)
print('yearsmarried',column_yearsmarried)
print('children',column_children)
print('religiousness',column_religiousness)
print('education',column_education)
print('occupation',column_occupation)
print('rating',column_rating)
###Output
affairs [ 0 3 7 12 1 2]
gender ['male' 'female']
age [37. 27. 32. 57. 22. 47. 42. 52. 17.5]
yearsmarried [10. 4. 15. 0.75 1.5 7. 0.417 0.125]
children ['no' 'yes']
religiousness [3 4 1 5 2]
education [18 14 12 17 16 20 9]
occupation [7 6 1 5 4 3 2]
rating [4 5 3 2 1]
###Markdown
We then need to investigate how the unique column values can be generated randomly to create the synthesised dataset. Because the numbers I am working with are nominal codes rather than measurements, the random results have to be drawn from the available unique column values.
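Note that `np.random.choice` can also draw values in proportion to how often they occur, via its `p` argument; a small sketch of that idea (not used in the cell below, but relevant to the later versions that try to match the real distribution):

```python
import numpy as np
import pandas as pd

df = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
gender_freq = df['gender'].value_counts(normalize=True)
# 'male'/'female' drawn with the same proportions as in the survey
sampled_gender = np.random.choice(gender_freq.index, p=gender_freq.values)
print(sampled_gender)
```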
###Code
import numpy as np
number_of_affairs = np.random.choice(column_affairs)
gender_type = np.random.choice(column_gender)
age_type = np.random.choice(column_age)
yearsmarried_type = np.random.choice(column_yearsmarried)
children_type = np.random.choice(column_children)
religiousness_type = np.random.choice(column_religiousness)
education_type = np.random.choice(column_education)
occupation_type = np.random.choice(column_occupation)
rating_type = np.random.choice(column_rating)
my_array = [number_of_affairs,gender_type,age_type,yearsmarried_type,children_type,religiousness_type,
education_type, occupation_type, rating_type]
print(my_array)
###Output
[3, 'female', 52.0, 0.75, 'yes', 5, 9, 7, 5]
###Markdown
METHOD ONE - GENERATE COMPLETELY RANDOM SURVEY DATA
###Code
import pandas as pd
import numpy as np
# initialize list of lists
data = []
count = 0
while count < 10:
for_ref = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
column_affairs = for_ref['affairs'].unique()
column_gender = for_ref['gender'].unique()
column_age = for_ref['age'].unique()
column_yearsmarried = for_ref['yearsmarried'].unique()
column_children = for_ref['children'].unique()
column_religiousness = for_ref['religiousness'].unique()
column_education = for_ref['education'].unique()
column_occupation = for_ref['occupation'].unique()
column_rating = for_ref['rating'].unique()
number_of_affairs = np.random.choice(column_affairs)
gender_type = np.random.choice(column_gender)
age_type = np.random.choice(column_age)
yearsmarried_type = np.random.choice(column_yearsmarried)
children_type = np.random.choice(column_children)
religiousness_type = np.random.choice(column_religiousness)
education_type = np.random.choice(column_education)
occupation_type = np.random.choice(column_occupation)
rating_type = np.random.choice(column_rating)
my_array = [number_of_affairs,gender_type,age_type,yearsmarried_type,children_type,religiousness_type,
education_type, occupation_type, rating_type]
data.append(my_array)
count += 1
# Create the pandas DataFrame
df = pd.DataFrame(data, columns = ['affairs','gender','age','yearsmarried','children','religiousness',
'education','occupation','rating'])
#df['affair_yes_or_no'] = np.where(df['affairs']!= 0, 'Yes', 'No')
#Change the values in the age column
new_age_column = df.age.replace({
17.5: 'under 20',
22: '20–24',
27: '25–29',
32: '0–34',
37: '35–39',
42: '40–44',
47: '45–49',
52: '50–54',
57: '55 or over'})
df.age = new_age_column
#Change the values in the occupation column
new_occupation_column = df.occupation.replace({
9: 'higher executive/large business',
8: 'administrators',
7: 'small business owners',
6: 'technicians',
5: 'clerical and sales workers',
4: 'micro business owners',
3: 'semi-skilled workers',
2: 'unskilled workers',
1: 'students, housewives',
0: 'not applicable'})
df.occupation = new_occupation_column
#Change the values in the education column
new_education_column = df.education.replace({
9: 'grade school',
12: 'high school graduate',
14: 'some college',
16: 'college graduate',
17: 'some graduate work',
18: 'masters degree',
20: 'Ph.D., M.D.' })
df.education = new_education_column
print(df)
###Output
affairs gender age yearsmarried children religiousness \
0 12 female 35–39 10.000 no 2
1 7 male 20–24 15.000 yes 2
2 2 female 20–24 4.000 no 4
3 12 female 45–49 0.417 yes 1
4 7 female 25–29 4.000 no 2
5 1 female 40–44 0.417 no 2
6 1 female 35–39 15.000 no 3
7 1 male 20–24 15.000 yes 5
8 1 female 20–24 4.000 yes 4
9 2 female 0–34 15.000 yes 2
education occupation rating
0 grade school unskilled workers 2
1 Ph.D., M.D. students, housewives 1
2 some college technicians 5
3 college graduate semi-skilled workers 2
4 Ph.D., M.D. small business owners 5
5 some graduate work technicians 4
6 grade school unskilled workers 4
7 some college unskilled workers 4
8 high school graduate small business owners 4
9 college graduate micro business owners 4
###Markdown
Version 2 Of The Players App / Fidelity Reckoner. This version creates a randomised dataset which simply randomises every survey response using the unique values of every column. It does not try to synthesise similarity to the original dataset in any way.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import matplotlib.image as mpimg # used for being able to add background images to graphs
# initialize list of lists
data = []
count = 0
while count < 100:
for_ref = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
column_affairs = for_ref['affairs'].unique()
column_gender = for_ref['gender'].unique()
column_age = for_ref['age'].unique()
column_yearsmarried = for_ref['yearsmarried'].unique()
column_children = for_ref['children'].unique()
column_religiousness = for_ref['religiousness'].unique()
column_education = for_ref['education'].unique()
column_occupation = for_ref['occupation'].unique()
column_rating = for_ref['rating'].unique()
number_of_affairs = np.random.choice(column_affairs)
gender_type = np.random.choice(column_gender)
age_type = np.random.choice(column_age)
yearsmarried_type = np.random.choice(column_yearsmarried)
children_type = np.random.choice(column_children)
religiousness_type = np.random.choice(column_religiousness)
education_type = np.random.choice(column_education)
occupation_type = np.random.choice(column_occupation)
rating_type = np.random.choice(column_rating)
my_array = [number_of_affairs,gender_type,age_type,yearsmarried_type,children_type,religiousness_type,
education_type, occupation_type, rating_type]
data.append(my_array)
count += 1
# Create the pandas DataFrame
df = pd.DataFrame(data, columns = ['affairs','gender','age','yearsmarried','children','religiousness',
'education','occupation','rating'])
"""---------------------------------------------------------------------------------------------------------------"""
# App code below / dataset generation code above
for_ref = df
def dictionary_generator(column):
dictionary_choice = {}
count = 1
for item in column:
print(count, item)
dictionary_choice[count]=item
count +=1
return dictionary_choice
def number_selector(dictionary_choice):
number = int(input('type number to indicate your interest: '))
while number not in dictionary_choice.keys():
try:
number = int(input('type number to indicate your interest: '))
except ValueError:
number = int(input('type number to indicate your interest: '))
return number
def percentage_calculation(x):
#Faithful analysis
seriesObj2 = for_ref.apply(lambda x: True if x['affairs'] == 0 else False , axis=1)
numOfRows2 = len(seriesObj2[seriesObj2 == True].index)
percentage_faithful = (numOfRows2 / x)
#Cheat analysis
seriesObj = for_ref.apply(lambda x: True if x['affairs'] > 0 else False , axis=1)
numOfRows = len(seriesObj[seriesObj == True].index)
percentage_cheat = (numOfRows / x)
keys = ['percentage_faithful', 'percentage_cheat']
values = [percentage_faithful,percentage_cheat]
#https://www.kite.com/python/answers/how-to-rotate-axis-labels-in-matplotlib-in-python
plt.xticks(rotation=45)
plt.yticks(rotation=90)
#https://showmecode.info/matplotlib/bar/change-bar-color/
plt.bar(keys, values, color=['blue', 'red'])
plt.ylabel('Percentage')
plt.title('Fidelity Reckoner')
#https://www.kite.com/python/answers/how-save-a-matplotlib-plot-as-a-pdf-file-in-python
plt.savefig("plots.pdf")
plt.show()
print("Likelihood of person being faithful {:.2%} ".format(percentage_faithful))
print("Likelihood of person having an affair {:.2%} ".format(percentage_cheat))
new_age_column = for_ref.age.replace({
17.5: 'under 20',
22: '20–24',
27: '25–29',
    32: '30–34',
37: '35–39',
42: '40–44',
47: '45–49',
52: '50–54',
57: '55 or over'})
for_ref.age = new_age_column
new_occupation_column = for_ref.occupation.replace({
9: 'higher executive/large business',
8: 'administrators',
7: 'small business owners',
6: 'technicians',
5: 'clerical and sales workers',
4: 'micro business owners',
3: 'semi-skilled workers',
2: 'unskilled workers',
1: 'students, housewives',
0: 'not applicable'})
for_ref.occupation = new_occupation_column
#Change the values in the education column
new_education_column = for_ref.education.replace({
9: 'grade school',
12: 'high school graduate',
14: 'some college',
16: 'college graduate',
17: 'some graduate work',
18: 'masters degree',
20: 'Ph.D., M.D.' })
for_ref.education = new_education_column
print("The following simulation generates data for every survey response randomly.")
column_gender = for_ref['gender'].unique()
print("What is the gender of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_gender)
for_processing = number_selector(dictionary_choice)#
gender_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['gender'] == gender_type]
total_row = for_ref.shape[0]
print('Found ' , total_row , gender_type,"(s)" )
percentage_calculation(total_row)
column_age = for_ref['age'].unique()
print("What is the age of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_age)
for_processing = number_selector(dictionary_choice)#
age_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['age'] == age_type]
total_row2 = for_ref.shape[0]
print('Found ', total_row2, gender_type, "(s) in the age group", age_type, "that you are interested in")
percentage_calculation(total_row2)
column_occupation = for_ref['occupation'].unique()
print("What is the career of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_occupation)
for_processing = number_selector(dictionary_choice)#
occupation_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['occupation'] == occupation_type]
total_row3 = for_ref.shape[0]
print ('Found ' , total_row3 , gender_type,"(s) in age category",age_type," working in", occupation_type)
percentage_calculation(total_row3)
column_education = for_ref['education'].unique()
print("What is the educational background of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_education)
for_processing = number_selector(dictionary_choice)#
education_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['education'] == education_type]
total_row4 = for_ref.shape[0]
print('Found ' , total_row4 , gender_type,"(s) in age category",age_type," working in", occupation_type,
"educated at",education_type,"level")
percentage_calculation(total_row4)
#https://thispointer.com/pandas-count-rows-in-a-dataframe-all-or-those-only-that-satisfy-a-condition/
###Output
The following simulation generates data for every survey response randomly.
What is the gender of your partner / potential partner?
1 male
2 female
type number to indicate your interest: 2
Found 59 female (s)
###Markdown
Making random survey data that replicates patterns in the real data

Rather than drawing each random choice from a column's unique values, the choices need to be proportional to the real data (see the sketch below). The approach:
* Make a random selection from gender (e.g. male / female) and filter the original dataset on it, e.g. keep all females.
* From that filtered data, filter on a randomly chosen age.
* From that filtered data, filter on a randomly chosen occupation.
* From that filtered data, filter on a randomly chosen education level.
* Randomly select the remaining fields ('yearsmarried', 'children', 'religiousness', 'rating') from the rows that survive the education filter.

References:
https://stackoverflow.com/questions/56321765/append-values-from-dataframe-column-to-list
https://cmdlinetips.com/2018/02/how-to-subset-pandas-dataframe-based-on-values-of-a-column/

The Players App / Fidelity Reckoner - Version 3 - uses randomly filtered data from the real data to create random user profiles more closely linked to the real data.
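As a minimal sketch of the difference (not part of the original app): drawing from a column's raw values keeps the real data's proportions, while drawing from its unique values treats every value as equally likely.
```python
# Sketch only: proportional sampling (Version 3 idea) versus uniform sampling
# over unique values (Version 2 idea).
import numpy as np
import pandas as pd

source = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')

uniform_choice = np.random.choice(source['gender'].unique())   # every unique value equally likely
proportional_choice = np.random.choice(source['gender'])       # reflects how often each value occurs
print(uniform_choice, proportional_choice)
```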
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import matplotlib.image as mpimg # used for being able to add background images to graphs
# initialize list of lists
data = []
count = 0
while count < 1000:
# Four Data Points
for_ref = pd.read_csv('https://vincentarelbundock.github.io/Rdatasets/csv/AER/Affairs.csv')
"----Filter Data From Real Data To Create Random User Profiles More Closely Linked To Real Data-------------"
column_gender = for_ref['gender'].tolist()
gender_type = np.random.choice(column_gender)
for_ref = for_ref.loc[for_ref['gender'] == gender_type]
column_age = for_ref['age'].tolist()
age_type = np.random.choice(column_age)
    for_ref = for_ref.loc[for_ref['age'] == age_type]
column_education = for_ref['education'].tolist()
education_type = np.random.choice(column_education)
for_ref = for_ref.loc[for_ref['education'] == education_type]
column_occupation = for_ref['occupation'].tolist()
occupation_type = np.random.choice(column_occupation)
for_ref = for_ref.loc[for_ref['occupation'] == occupation_type]
column_affairs = for_ref['affairs'].tolist()
number_of_affairs = np.random.choice(column_affairs)
for_ref = for_ref.loc[for_ref['affairs'] == number_of_affairs]
"-----------------------------------------------------------------------------------------------------"
column_yearsmarried = for_ref['yearsmarried'].tolist()
yearsmarried_type = np.random.choice(column_yearsmarried)
column_children = for_ref['children'].tolist()
children_type = np.random.choice(column_children)
column_religiousness = for_ref['religiousness'].tolist()
religiousness_type = np.random.choice(column_religiousness)
column_rating = for_ref['rating'].tolist()
rating_type = np.random.choice(column_rating)
my_new_array = [number_of_affairs,gender_type,age_type,yearsmarried_type,children_type,religiousness_type,education_type,occupation_type,rating_type]
data.append(my_new_array)
count += 1
# Create the pandas DataFrame
df = pd.DataFrame(data, columns = ['affairs','gender','age','yearsmarried','children','religiousness','education','occupation','rating'])
#Change the values in the age column
new_age_column = df.age.replace({
17.5: 'under 20',
22: '20–24',
27: '25–29',
    32: '30–34',
37: '35–39',
42: '40–44',
47: '45–49',
52: '50–54',
57: '55 or over'})
df.age = new_age_column
#Change the values in the occupation column
new_occupation_column = df.occupation.replace({
9: 'higher executive/large business',
8: 'administrators',
7: 'small business owners',
6: 'technicians',
5: 'clerical and sales workers',
4: 'micro business owners',
3: 'semi-skilled workers',
2: 'unskilled workers',
1: 'students, housewives',
0: 'not applicable'})
df.occupation = new_occupation_column
#Change the values in the education column
new_education_column = df.education.replace({
9: 'grade school',
12: 'high school graduate',
14: 'some college',
16: 'college graduate',
17: 'some graduate work',
18: 'masters degree',
20: 'Ph.D., M.D.' })
df.education = new_education_column
""" --------------------------------------------------------------------------------------------------------------"""
for_ref = df
def dictionary_generator(column):
dictionary_choice = {}
count = 1
for item in column:
print(count, item)
dictionary_choice[count]=item
count +=1
return dictionary_choice
def number_selector(dictionary_choice):
    # Keep prompting until a valid listed number is entered; non-numeric
    # input is caught rather than crashing the app.
    number = None
    while number not in dictionary_choice.keys():
        try:
            number = int(input('type number to indicate your interest: '))
        except ValueError:
            continue
    return number
def percentage_calculation(x):
#Faithful analysis
seriesObj2 = for_ref.apply(lambda x: True if x['affairs'] == 0 else False , axis=1)
numOfRows2 = len(seriesObj2[seriesObj2 == True].index)
percentage_faithful = (numOfRows2 / x)
#Cheat analysis
seriesObj = for_ref.apply(lambda x: True if x['affairs'] > 0 else False , axis=1)
numOfRows = len(seriesObj[seriesObj == True].index)
percentage_cheat = (numOfRows / x)
keys = ['percentage_faithful', 'percentage_cheat']
values = [percentage_faithful,percentage_cheat]
#https://www.kite.com/python/answers/how-to-rotate-axis-labels-in-matplotlib-in-python
plt.xticks(rotation=45)
plt.yticks(rotation=90)
#https://showmecode.info/matplotlib/bar/change-bar-color/
plt.bar(keys, values, color=['blue', 'red'])
plt.ylabel('Percentage')
plt.title('Fidelity Reckoner')
#https://www.kite.com/python/answers/how-save-a-matplotlib-plot-as-a-pdf-file-in-python
plt.savefig("plots.pdf")
plt.show()
print("Likelyhood of person being faithful {:.2%} ".format(percentage_faithful))
print("Likelyhood of person having an affair {:.2%} ".format(percentage_cheat))
new_age_column = for_ref.age.replace({
17.5: 'under 20',
22: '20–24',
27: '25–29',
    32: '30–34',
37: '35–39',
42: '40–44',
47: '45–49',
52: '50–54',
57: '55 or over'})
for_ref.age = new_age_column
new_occupation_column = for_ref.occupation.replace({
9: 'higher executive/large business',
8: 'administrators',
7: 'small business owners',
6: 'technicians',
5: 'clerical and sales workers',
4: 'micro business owners',
3: 'semi-skilled workers',
2: 'unskilled workers',
1: 'students, housewives',
0: 'not applicable'})
for_ref.occupation = new_occupation_column
#Change the values in the education column
new_education_column = for_ref.education.replace({
9: 'grade school',
12: 'high school graduate',
14: 'some college',
16: 'college graduate',
17: 'some graduate work',
18: 'masters degree',
20: 'Ph.D., M.D.' })
for_ref.education = new_education_column
print("The following simulation is based on randomly filtering and extracting actual survey profiles from\n the Infidelity data, known as Fair's Affairs. This methodolgy creates a random dataset more similar\nto the original Fair's Affairs Cross-section data from a survey conducted by Psychology Today in 1969.")
column_gender = for_ref['gender'].unique()
print("What is the gender of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_gender)
for_processing = number_selector(dictionary_choice)#
gender_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['gender'] == gender_type]
total_row = for_ref.shape[0]
print('Found ' , total_row , gender_type,"(s)" )
percentage_calculation(total_row)
column_age = for_ref['age'].unique()
print("What is the age of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_age)
for_processing = number_selector(dictionary_choice)#
age_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['age'] == age_type]
total_row2 = for_ref.shape[0]
print('Found ', total_row2, gender_type, "(s) in the age group", age_type, "that you are interested in")
percentage_calculation(total_row2)
column_occupation = for_ref['occupation'].unique()
print("What is the career of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_occupation)
for_processing = number_selector(dictionary_choice)#
occupation_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['occupation'] == occupation_type]
total_row3 = for_ref.shape[0]
print ('Found ' , total_row3 , gender_type,"(s) in age category",age_type," working in", occupation_type)
percentage_calculation(total_row3)
column_education = for_ref['education'].unique()
print("What is the educational background of your partner / potential partner?")
dictionary_choice = dictionary_generator(column_education)
for_processing = number_selector(dictionary_choice)#
education_type = dictionary_choice[for_processing]
for_ref = for_ref.loc[for_ref['education'] == education_type]
total_row4 = for_ref.shape[0]
print('Found ' , total_row4 , gender_type,"(s) in age category",age_type," working in", occupation_type,
"educated at",education_type,"level")
percentage_calculation(total_row4)
#https://thispointer.com/pandas-count-rows-in-a-dataframe-all-or-those-only-that-satisfy-a-condition/
###Output
The following simulation is based on randomly filtering and extracting actual survey profiles from
the Infidelity data, known as Fair's Affairs. This methodology creates a random dataset more similar
to the original Fair's Affairs Cross-section data from a survey conducted by Psychology Today in 1969.
What is the gender of your partner / potential partner?
1 male
2 female
type number to indicate your interest: 2
Found 541 female (s)
|
Chapter1/Chapter1-4.ipynb | ###Markdown
Load the breast cancer dataset
###Code
from sklearn.datasets import load_breast_cancer
# Load the breast cancer dataset
X_dataset, y_dataset = load_breast_cancer(return_X_y=True)
###Output
_____no_output_____
###Markdown
Data preprocessing
###Code
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
# Split X_dataset into X_train and X_test,
# and y_dataset into y_train and y_test
X_train, X_test, y_train, y_test = train_test_split(
X_dataset, y_dataset, test_size=0.2, random_state=42)
# Scale the data to the range 0-1
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Convert the label data to one-hot format
from tensorflow.python import keras
y_train = keras.utils.to_categorical(y_train, 2)
y_test = keras.utils.to_categorical(y_test, 2)
###Output
_____no_output_____
###Markdown
Build the model with Keras
###Code
from tensorflow.python.keras.models import Sequential
# Stack layers linearly with the Sequential model
model = Sequential()
from tensorflow.python.keras.layers import Dense
model.add(Dense(units=4, activation='relu', input_dim=30))
model.add(Dense(units=4, activation='relu'))
model.add(Dense(units=2, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model
###Code
model.fit(X_train, y_train, epochs=50, batch_size=32)
###Output
Epoch 1/50
455/455 [==============================] - 1s 3ms/step - loss: 0.6827 - acc: 0.6462
Epoch 2/50
455/455 [==============================] - 0s 200us/step - loss: 0.6663 - acc: 0.7187
Epoch 3/50
455/455 [==============================] - 0s 190us/step - loss: 0.6455 - acc: 0.7582
Epoch 4/50
455/455 [==============================] - 0s 199us/step - loss: 0.6226 - acc: 0.7956
Epoch 5/50
455/455 [==============================] - 0s 202us/step - loss: 0.5972 - acc: 0.8154
Epoch 6/50
455/455 [==============================] - 0s 198us/step - loss: 0.5700 - acc: 0.8330
Epoch 7/50
455/455 [==============================] - 0s 228us/step - loss: 0.5436 - acc: 0.8374
Epoch 8/50
455/455 [==============================] - 0s 203us/step - loss: 0.5169 - acc: 0.8418
Epoch 9/50
455/455 [==============================] - 0s 202us/step - loss: 0.4902 - acc: 0.8615
Epoch 10/50
455/455 [==============================] - 0s 206us/step - loss: 0.4647 - acc: 0.8659
Epoch 11/50
455/455 [==============================] - 0s 211us/step - loss: 0.4415 - acc: 0.8747
Epoch 12/50
455/455 [==============================] - 0s 210us/step - loss: 0.4186 - acc: 0.8747
Epoch 13/50
455/455 [==============================] - 0s 215us/step - loss: 0.3967 - acc: 0.8813
Epoch 14/50
455/455 [==============================] - 0s 207us/step - loss: 0.3757 - acc: 0.8813
Epoch 15/50
455/455 [==============================] - 0s 206us/step - loss: 0.3571 - acc: 0.8791
Epoch 16/50
455/455 [==============================] - 0s 216us/step - loss: 0.3399 - acc: 0.8813
Epoch 17/50
455/455 [==============================] - 0s 212us/step - loss: 0.3234 - acc: 0.8901
Epoch 18/50
455/455 [==============================] - 0s 214us/step - loss: 0.3090 - acc: 0.8967
Epoch 19/50
455/455 [==============================] - 0s 212us/step - loss: 0.2958 - acc: 0.9033
Epoch 20/50
455/455 [==============================] - 0s 240us/step - loss: 0.2841 - acc: 0.8879
Epoch 21/50
455/455 [==============================] - 0s 211us/step - loss: 0.2720 - acc: 0.9011
Epoch 22/50
455/455 [==============================] - 0s 213us/step - loss: 0.2622 - acc: 0.9187
Epoch 23/50
455/455 [==============================] - 0s 207us/step - loss: 0.2522 - acc: 0.9143
Epoch 24/50
455/455 [==============================] - 0s 203us/step - loss: 0.2435 - acc: 0.9165
Epoch 25/50
455/455 [==============================] - 0s 206us/step - loss: 0.2357 - acc: 0.9187
Epoch 26/50
455/455 [==============================] - 0s 203us/step - loss: 0.2282 - acc: 0.9121
Epoch 27/50
455/455 [==============================] - 0s 204us/step - loss: 0.2212 - acc: 0.9187
Epoch 28/50
455/455 [==============================] - 0s 196us/step - loss: 0.2149 - acc: 0.9165
Epoch 29/50
455/455 [==============================] - 0s 203us/step - loss: 0.2086 - acc: 0.9209
Epoch 30/50
455/455 [==============================] - 0s 205us/step - loss: 0.2026 - acc: 0.9209
Epoch 31/50
455/455 [==============================] - 0s 208us/step - loss: 0.1967 - acc: 0.9187
Epoch 32/50
455/455 [==============================] - 0s 205us/step - loss: 0.1921 - acc: 0.9231
Epoch 33/50
455/455 [==============================] - 0s 241us/step - loss: 0.1874 - acc: 0.9275
Epoch 34/50
455/455 [==============================] - 0s 202us/step - loss: 0.1842 - acc: 0.9319
Epoch 35/50
455/455 [==============================] - 0s 195us/step - loss: 0.1790 - acc: 0.9275
Epoch 36/50
455/455 [==============================] - 0s 203us/step - loss: 0.1758 - acc: 0.9319
Epoch 37/50
455/455 [==============================] - 0s 208us/step - loss: 0.1713 - acc: 0.9341
Epoch 38/50
455/455 [==============================] - 0s 210us/step - loss: 0.1675 - acc: 0.9363
Epoch 39/50
455/455 [==============================] - 0s 208us/step - loss: 0.1644 - acc: 0.9407
Epoch 40/50
455/455 [==============================] - 0s 208us/step - loss: 0.1600 - acc: 0.9385
Epoch 41/50
455/455 [==============================] - 0s 215us/step - loss: 0.1577 - acc: 0.9451
Epoch 42/50
455/455 [==============================] - 0s 209us/step - loss: 0.1544 - acc: 0.9385
Epoch 43/50
455/455 [==============================] - 0s 207us/step - loss: 0.1518 - acc: 0.9429
Epoch 44/50
455/455 [==============================] - 0s 222us/step - loss: 0.1487 - acc: 0.9495
Epoch 45/50
455/455 [==============================] - 0s 218us/step - loss: 0.1477 - acc: 0.9495
Epoch 46/50
455/455 [==============================] - 0s 236us/step - loss: 0.1457 - acc: 0.9495
Epoch 47/50
455/455 [==============================] - 0s 202us/step - loss: 0.1409 - acc: 0.9538
Epoch 48/50
455/455 [==============================] - 0s 204us/step - loss: 0.1386 - acc: 0.9582
Epoch 49/50
455/455 [==============================] - 0s 202us/step - loss: 0.1359 - acc: 0.9604
Epoch 50/50
455/455 [==============================] - 0s 213us/step - loss: 0.1336 - acc: 0.9626
###Markdown
Calculate the accuracy
###Code
# Calculate the accuracy
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
('Test loss:', 0.1039138252013608)
('Test accuracy:', 0.9824561403508771)
###Markdown
Load the Fashion-MNIST data
###Code
try: # tensorflow v1.8 and earlier
from tensorflow.python.keras._impl.keras.datasets import fashion_mnist
except: # tensorflow v1.9 and later
from tensorflow.python.keras.datasets import fashion_mnist
(X_train, y_train), (X_test, y_test) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
Visualize the data
###Code
import matplotlib.pyplot as plt
plt.axis('off')
plt.set_cmap('gray_r')
plt.imshow(X_train[0])
###Output
_____no_output_____
###Markdown
Data preprocessing
###Code
from tensorflow.python import keras
# Convert the label data to one-hot format
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
# Reshape to 28x28 pixels x 1 channel (grayscale)
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
# Convert pixel values from the 0-255 range to the 0-1 range
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
###Output
('X_train shape:', (60000, 28, 28, 1))
(60000, 'train samples')
(10000, 'test samples')
###Markdown
Build and train the CNN. Note: with a CPU-only setup (no GPU), training takes a very long time.
###Code
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Dropout, Flatten
from tensorflow.python.keras.layers import Conv2D, MaxPooling2D
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=(28, 28, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, kernel_size=(3, 3),
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
# Train the model
model.fit(X_train, y_train,
batch_size=128,
epochs=12,
verbose=1,
validation_data=(X_test, y_test))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/12
60000/60000 [==============================] - 8s 127us/step - loss: 0.5232 - acc: 0.8088 - val_loss: 0.3926 - val_acc: 0.8595
Epoch 2/12
60000/60000 [==============================] - 6s 105us/step - loss: 0.3414 - acc: 0.8752 - val_loss: 0.3261 - val_acc: 0.8767
Epoch 3/12
60000/60000 [==============================] - 6s 104us/step - loss: 0.2920 - acc: 0.8923 - val_loss: 0.3123 - val_acc: 0.8878
Epoch 4/12
60000/60000 [==============================] - 6s 104us/step - loss: 0.2634 - acc: 0.9028 - val_loss: 0.2763 - val_acc: 0.8999
Epoch 5/12
60000/60000 [==============================] - 6s 104us/step - loss: 0.2395 - acc: 0.9114 - val_loss: 0.2559 - val_acc: 0.9098
Epoch 6/12
60000/60000 [==============================] - 6s 104us/step - loss: 0.2198 - acc: 0.9189 - val_loss: 0.2557 - val_acc: 0.9061
Epoch 7/12
60000/60000 [==============================] - 6s 104us/step - loss: 0.2039 - acc: 0.9234 - val_loss: 0.2461 - val_acc: 0.9116
Epoch 8/12
60000/60000 [==============================] - 6s 104us/step - loss: 0.1868 - acc: 0.9308 - val_loss: 0.2442 - val_acc: 0.9117
Epoch 9/12
60000/60000 [==============================] - 6s 105us/step - loss: 0.1722 - acc: 0.9357 - val_loss: 0.2374 - val_acc: 0.9131
Epoch 10/12
60000/60000 [==============================] - 6s 103us/step - loss: 0.1612 - acc: 0.9399 - val_loss: 0.2354 - val_acc: 0.9183
Epoch 11/12
60000/60000 [==============================] - 6s 104us/step - loss: 0.1439 - acc: 0.9463 - val_loss: 0.2446 - val_acc: 0.9172
Epoch 12/12
60000/60000 [==============================] - 6s 103us/step - loss: 0.1340 - acc: 0.9498 - val_loss: 0.2441 - val_acc: 0.9167
###Markdown
Calculate the accuracy
###Code
# Calculate the accuracy
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
('Test loss:', 0.24411798218488692)
('Test accuracy:', 0.9167)
|
20201013 Introduction to Statistics.ipynb | ###Markdown
5.3 Introduction to Statistics OverviewToday's class reviews summary statistics previously taught in Unit 1 and covers the implementation of these statistical measures in Python. This lesson also introduces new statistical concepts such as sample versus population, standard error, Pearson correlation coefficient, and linear regression. Class ObjectivesBy the end of this class, students will be able to:* Calculate summary statistics such as mean, median, mode, variance and standard deviation using Python.* Plot, characterize, and quantify a normally distributed dataset using Python.* Qualitatively and quantitatively identify potential outliers in a dataset.* Differentiate between a sample and a population in regards to a dataset.* Define and quantify correlation between two factors.* Calculate and plot a linear regression in Python. 1. Welcome & Intro Presentation 📣 1.1 Instructor Do: Welcome Students * Welcome to Day 3 of Matplotlib. Today's lesson will focus on bringing together our knowledge of fundamental statistics with Matplotlib and SciPy. 📣 1.2 Instructor Do: Summary Statistics in Python * The most common measures of central tendency are the **mean**, **median** and **mode**. * The **mean** of a dataset is what is known as the arithmetic average of a dataset. It is calculated from the sum all of the numbers divided by the number of elements in a dataset. * The **median** of a dataset is the middle element. It is calculated from listing the data numerically and selecting the middle element. For even-length datasets, the average of the 2 center elements is the median of the dataset. * The **mode** of a dataset is the most frequently occurring element. The mode can be used for either numeric or categorical data. * With Python, there are a number of ways to measure the central tendency of the data. However, for this class we will be looking at the NumPy and SciPy packages and modules. * We will use the NumPy package to test for `mean` and `median` and use the SciPy package to test for `mode`. * The reason we need to use both NumPy and SciPy modules to calculate the measures of central tendency is that mode is not a function in NumPy. This is likely because NumPy is a very lightweight module and calculating the mode can be computationally intensive.* Pandas also provides functions to measure central tendency, but students will need to look at the documentation on their own. * The reason we would want to plot new data as soon as possible is to identify key characteristics about the data. * Key characteristics can include if the data is normally distributed, if the data is multimodal, or if there are clusters in the data. * Data is considered normally distributed when measurements are obtained independent of one another. * Another characteristic of normally distributed data is that its distribution follows a characteristic bell-curve shape. * **Variance** is the measurement of how far each number in the dataset is away from the mean of the dataset. * **Standard deviation** is the square root of the variance. 
* When calculating the variance and standard deviation in Python, we will use the NumPy module.```python Dependenciesimport pandas as pdimport matplotlib.pyplot as pltimport scipy.stats as stsimport numpy as np Read in the LAX temperature datatemperature_df = pd.read_csv('Resources/lax_temperature.csv')temperatures = temperature_df['HourlyDryBulbTemperature'] Demonstrate calculating measures of central tendencymean_numpy = np.mean(temperatures)print(f"The mean temperature at the LAX airport is {mean_numpy}")median_numpy = np.median(temperatures)print(f"The median temperature at the LAX airport is {median_numpy}")mode_scipy = sts.mode(temperatures)print(f"The mode temperature at the LAX airport is {mode_scipy}")```* This first dataset contains National Oceanic and Atmospheric Administration temperature measurements taken at the Los Angeles International (LAX) airport. * To calculate the mean, NumPy provides a decimal with far too much precision. Therefore we should always round the output of `numpy.mean`. In most cases, rounding the mean to the nearest hundredth decimal is sufficient. * To calculate the median, NumPy also can provide a decimal with far too much precision. However, with this dataset, the median was already rounded. * To calculate the mode, the `scipy.stats` module returns 2 arrays, one for all mode values, another for the frequency of each mode.* The easiest way to assert if a dataset has multiple modes, clusters of values, or if the dataset is normally distributed, is to plot the data using Matplotlib.```python Characterize the data set using matplotlib and stats.normaltestplt.hist(temperatures)plt.xlabel('Temperature (°F)')plt.ylabel('Counts')plt.show()print(sts.normaltest(temperatures.sample(50)))```* There only appears to be one mode in the dataset. Furthermore, the distribution of temperatures around the mode seems to form a bell curve. * This bell-curve characteristic is known in statistics as a **normal distribution**. * The theory behind a **normal distribution** is outside of the scope of this lesson, but it is important to know whether your data is normally distributed.* Many statistical tests assume that the data is normally distributed. Using such statistical tests when the data is _not_ normally distributed can cause us to draw incorrect conclusions. * The `stats.normaltest` function offers a more quantitative verification of normal distribution. * When we used `stats.normaltest` in our example code, we also used the Pandas `DataFrame.sample` function. * Because `stats.normaltest` function assumes a relatively small sample size, we could not run the test on our entire temperature data. Therefore, we must test on a subset of randomly selected values using Pandas's `DataFrame.sample` function. * We interpret the results of `stats.normaltest` using the **p** value. A **p** value 0.05 or larger indicates normally distributed data. * Because our **p** value is approximately 0.05 or greater, we can conclude that this distribution is normal.```python Demonstrate calculating the variance and standard deviation using the different modulesvar_numpy = np.var(temperatures,ddof = 0)print(f"The population variance using the NumPy module is {var_numpy}")sd_numpy = np.std(temperatures,ddof = 0)print(f"The population standard deviation using the NumPy module is {sd_numpy}")```* Point out that to calculate the total variance or standard deviation in NumPy, we must provide the list of numbers as well as `ddof =0`. 
* The `ddof = 0` argument is to ensure we calculate the population variance and standard deviation. * We will talk about sample versus population later in the class.* Execute the next code block.```python Calculate the 68-95-99.7 rule using the standard deviationprint(f"Roughly 68% of the data is between {round(mean_numpy-sd_numpy,3)} and {round(mean_numpy+sd_numpy,3)}")print(f"Roughly 95% of the data is between {round(mean_numpy-2*sd_numpy,3)} and {round(mean_numpy+2*sd_numpy,3)}")print(f"Roughly 99.7% of the data is between {round(mean_numpy-3*sd_numpy,3)} and {round(mean_numpy+3*sd_numpy,3)}")```* When we have a dataset that is normally distributed, we can use the **68-95-99.7** rule to characterize the data. * The **68-95-99.7** rule states that roughly 68% of all values in normally distributed data fall within one standard deviation of the mean (in either direction). Additionally, 95% of the values fall within two standard deviations, and 99.7% of the values fall within three standard deviations. * The z-score is the number of standard deviations a given number is from the mean of the dataset. * To calculate a z-score in Python, we must use the SciPy `stats.zscore` function.```python Demonstrate calculating the z-scores using SciPyz_scipy = sts.zscore(temperatures)print(f"The z-scores using the SciPy module are {z_scipy}")```* The output of `stats.zscore` is a list of z-scores that is equal in length to the list of temperatures. Therefore, if we want to know the z-score for any given value, we must find use index of that value from the temperature list.
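* As an aside not spelled out in the lesson notes above, Pandas also exposes the same central-tendency measures directly on a Series; a minimal sketch, assuming the same CSV used in the cell below:
```python
# Minimal sketch: measures of central tendency via Pandas Series methods.
import pandas as pd

temperatures = pd.read_csv('Resources/lax_temperature.csv')['HourlyDryBulbTemperature']
print(f"Pandas mean:   {round(temperatures.mean(), 2)}")
print(f"Pandas median: {temperatures.median()}")
print(f"Pandas mode:   {temperatures.mode()[0]}")  # .mode() returns a Series of modes
```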
###Code
# Dependencies
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as sts
import numpy as np
# Read in the LAX temperature data
temperature_df = pd.read_csv('Resources/lax_temperature.csv')
temperatures = temperature_df['HourlyDryBulbTemperature']
temperature_df.shape
temperature_df.head()
# Demonstrate calculating measures of central tendency
mean_numpy = np.mean(temperatures)
print(f"The mean temperature at the LAX airport is {mean_numpy}")
median_numpy = np.median(temperatures)
print(f"The median temperature at the LAX airport is {median_numpy}")
mode_scipy = sts.mode(temperatures)
print(f"The mode temperature at the LAX airport is {mode_scipy}")
# Characterize the data set using matplotlib and stats.normaltest
plt.hist(temperatures)
plt.xlabel('Temperature (°F)')
plt.ylabel('Counts')
plt.show()
print(sts.normaltest(temperatures.sample(50)))
# Demonstrate calculating the variance and standard deviation using the different modules
var_numpy = np.var(temperatures,ddof = 0)
print(f"The population variance using the NumPy module is {var_numpy}")
sd_numpy = np.std(temperatures,ddof = 0)
print(f"The population standard deviation using the NumPy module is {sd_numpy}")
# Calculate the 68-95-99.7 rule using the standard deviation
print(f"Roughly 68% of the data is between {round(mean_numpy-sd_numpy,3)} and {round(mean_numpy+sd_numpy,3)}")
print(f"Roughly 95% of the data is between {round(mean_numpy-2*sd_numpy,3)} and {round(mean_numpy+2*sd_numpy,3)}")
print(f"Roughly 99.7% of the data is between {round(mean_numpy-3*sd_numpy,3)} and {round(mean_numpy+3*sd_numpy,3)}")
# Demonstrate calculating the z-scores using SciPy
z_scipy = sts.zscore(temperatures)
print(f"The z-scores using the SciPy module are {z_scipy}")
###Output
The z-scores using the SciPy module are [-0.99457041 -1.17044048 -0.99457041 ... 0.06065001 0.06065001
0.06065001]
###Markdown
📣 1.3 Instructor Do: Quantiles and Outliers in Python * **Quantiles** are a way to divide our data into well-defined regions based on their order in a ranked list. The 2 most common quantiles used are **quartiles** and **percentiles**. * **Quartiles** divide the sorted data into 4 equal-sized groups and the median is known as the second quartile. * An **outlier** is an extreme value in a dataset that can skew a dataset. An **outlier** is typically identified as a value that is 1.5 * IQR (**interquartile range**) beyond the first and third quartiles. * We can visually identify quartiles and outliers using a box and whisker plot. Alternatively, we can identify quartiles using the `1.5 * IQR` rule. * When datasets are too large to identify the outliers visually, or when analysis requires more quantitative measures, we should calculate the interquartile range manually using Python modules.* Execute the first 2 blocks of code.```python Dependenciesimport pandas as pdimport numpy as npimport matplotlib.pyplot as plt Example outlier plot of reaction timestimes = [96,98,100,105,85,88,95,100,101,102,97,98,5]fig1, ax1 = plt.subplots()ax1.set_title('Reaction Times at Baseball Batting Cage')ax1.set_ylabel('Reaction Time (ms)')ax1.boxplot(times)plt.show()```* This first dataset is a theoretical collection of reaction times measured at a baseball batting cage.* A box and whisker plot is widely used in data science due to the amount of information it provides at-a-glance. * We render a box and whisker plot in Matplotlib using the `pyplot.boxplot` function. * The `pyplot.boxplot` function simply requires a list of numbers to draw. * The red line in the box plot is the median of the data. * The box surrounding the median is the IQR. * The whiskers that protrude from the box in the plot can be modified depending on the use, but by default represent 1.5 * IQR, or the outlier boundaries. * The data points that are located beyond the whiskers in the plot are potential outliers. * In this dataset, the 2 smallest data points appear to be outliers.* Execute the next block of code.```python We need to sort the data to determine which could be outlierstimes.sort()print(times)```* Once we have identified potential outliers in a box and whisker plot, we can use the sorted dataset to estimate which of the data points fall outside the outlier boundary.* Point out that the 5 ms and 85 ms times are outside of the whiskers and may merit investigation.* Execute the next block of code.```python The second example again looks at the LAX temperature data set and computes quantilestemperature_df = pd.read_csv('../Resources/lax_temperature.csv')temperatures = temperature_df['HourlyDryBulbTemperature']fig1, ax1 = plt.subplots()ax1.set_title('Temperatures at LAX')ax1.set_ylabel('Temperature (°F)')ax1.boxplot(temperatures)plt.show()```* This example is looking back at the LAX temperatures from NOAA. * This dataset has over 3,000 data points and we already know it to be normally distributed. 
* When we know a dataset is normally distributed, we can expect at least a few data points to be potential outliers.* We can also identify potential outliers using Pandas.* We can use Pandas to easily calculate the interquartile range to generate the outlier boundaries.* Execute the next block of code.```python If the data is in a dataframe, we use pandas to give quartile calculationsquartiles = temperatures.quantile([.25,.5,.75])lowerq = quartiles[0.25]upperq = quartiles[0.75]iqr = upperq-lowerqprint(f"The lower quartile of temperatures is: {lowerq}")print(f"The upper quartile of temperatures is: {upperq}")print(f"The interquartile range of temperatures is: {iqr}")print(f"The the median of temperatures is: {quartiles[0.5]} ")lower_bound = lowerq - (1.5*iqr)upper_bound = upperq + (1.5*iqr)print(f"Values below {lower_bound} could be outliers.")print(f"Values above {upper_bound} could be outliers.")```* In order to properly calculate the lower and upper quartiles of a dataset we would need to calculate the median of our dataset. Once we split our data into two groups using the median, we would then need to find the median of the lower and upper groups to determine the quartiles.* A very common practice in data science is to approximate the median-of-a-median quartile values by using prebuilt quantile functions such as Pandas's `quantile` method.* Pandas's `quantile` method requires decimal values between 0 and 1. In addition you must pass the quantile as the index instead of relative index values.```python You cannot pass a 0 index to retrieve the first element, it requires the actual value of 0.25lowerq = quartiles[0.25]```* Once you have calculated the IQR, you can create the boundaries to quantitatively determine any potential outliers.* Slack out the solution notebook for students to refer to in the next activity.
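* As an optional aside (not in the original lesson), the same quartiles and outlier boundaries can also be approximated with NumPy's `percentile` function; a minimal sketch, assuming the same CSV used in the cell below:
```python
# Sketch: quartiles and 1.5 * IQR outlier bounds computed with NumPy.
import numpy as np
import pandas as pd

temperatures = pd.read_csv('Resources/lax_temperature.csv')['HourlyDryBulbTemperature']
q1, q3 = np.percentile(temperatures, [25, 75])
iqr = q3 - q1
print(f"Potential outliers fall below {q1 - 1.5 * iqr} or above {q3 + 1.5 * iqr}")
```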
###Code
# Dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Example outlier plot of reaction times
times = [96,98,100,105,85,88,95,100,101,102,97,98,5]
fig1, ax1 = plt.subplots()
ax1.set_title('Reaction Times at Baseball Batting Cage')
ax1.set_ylabel('Reaction Time (ms)')
ax1.boxplot(times)
plt.show()
# We need to sort the data to determine which could be outliers
times.sort()
print(times)
# The second example again looks at the LAX temperature data set and computes quantiles
temperature_df = pd.read_csv('Resources/lax_temperature.csv')
temperatures = temperature_df['HourlyDryBulbTemperature']
fig1, ax1 = plt.subplots()
ax1.set_title('Temperatures at LAX')
ax1.set_ylabel('Temperature (°F)')
ax1.boxplot(temperatures)
plt.show()
# If the data is in a dataframe, we use pandas to give quartile calculations
quartiles = temperatures.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
print(f"The lower quartile of temperatures is: {lowerq}")
print(f"The upper quartile of temperatures is: {upperq}")
print(f"The interquartile range of temperatures is: {iqr}")
print(f"The the median of temperatures is: {quartiles[0.5]} ")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
###Output
The lower quartile of temperatures is: 54.0
The upper quartile of temperatures is: 60.0
The interquartile range of temperatures is: 6.0
The the median of temperatures is: 57.0
Values below 45.0 could be outliers.
Values above 69.0 could be outliers.
###Markdown
✏️ 2.1 Student Do: Summary Statistics in Python

Instructions
* Using Pandas, import the California housing dataset from the Resources folder.
* Determine the most appropriate measure of central tendency to describe the population. Calculate this value.
* Use both data visualization and a quantitative measurement to find whether the age of houses in California is considered normally distributed.
* Inspect the average occupancy of housing in California and determine if there are any potential outliers in the dataset.
  * **Hint**: This dataset is very large.
* If there are potential outliers in the average occupancy, find the minimum and maximum of the median housing prices across the outliers.

Bonus
Plot the latitude and longitude of the California housing data using Matplotlib and color the data points using the median income of the block. Does any location seem to be an outlier?

- - -
###Code
# Dependencies
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as sts
# Read in the california housing data set
california_df = pd.read_csv("Resources/California_Housing.csv")
# Determine which measure of central tendency is most appropriate to describe the Population
population = california_df['Population']
plt.hist(population)
plt.show()
print(np.mean(population))
print(np.median(population))
print(sts.mode(population))
# Determine if the house age in California is considered normally distributed
#california_df.head(5)
house_age= california_df['HouseAge']
sts.normaltest(house_age.sample(100))
california_df.head()
# Determine if there are any potential outliers in the average occupancy in California
average_occup = california_df['AveOccup']
fig1, ax1 = plt.subplots()
ax1.boxplot(average_occup)
plt.show()
np.std(average_occup)
np.sort(average_occup)
# With the potential outliers, what is the lowest and highest median income (in $1000s) observed?
quartiles = average_occup.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Occupancy values below {lower_bound} or above {upper_bound} could be outliers.")
outliers = california_df.loc[(california_df["AveOccup"]<lower_bound) | (california_df["AveOccup"]>upper_bound)]
print(f"Lowest median income among the outliers (in $1000s): {outliers['MedInc'].min()*1000}")
print(f"Highest median income among the outliers (in $1000s): {outliers['MedInc'].max()*1000}")
# Bonus - plot the latitude and longitude of the California housing data using Matplotlib, color the data points using the median income of the block.
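# One possible sketch for the bonus (not part of the original starter code).
# The 'Latitude'/'Longitude' column names are assumed to follow the
# scikit-learn California housing naming used elsewhere in this dataset.
plt.scatter(california_df['Longitude'], california_df['Latitude'],
            c=california_df['MedInc'], cmap='viridis', s=5)
plt.colorbar(label='Median Income')
plt.xlabel('Longitude')
plt.ylabel('Latitude')
plt.show()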
###Output
_____no_output_____
###Markdown
📣 3.1 Instructor Do: Sample, Population, and SEM* Weeks before Election Day, a local newspaper in a hypothetical city wants to predict the winner of the mayoral election. The newspaper will poll voters for their intended candidate. Point out the following: * It would be prohibitively expensive to ask every voter in the city whom they will vote for, nor is it possible to know exactly which people will go out and vote on Election Day. * The newspaper must therefore ask a _subset_ of all eligible voters in the city about their voting habits and _extrapolate_ information from the results. * In this scenario, the newspaper decides to poll 1,000 eligible voters shopping at grocery stores across the city. * By using the polling results from the 1,000 eligible voters, the newspaper can try to make an accurate prediction of the mayoral election outcome.* This hypothetical scenario is an example of a **sample** data set versus a **population** data set. * In statistics, a **population** is a complete data set that contains all possible elements of a study or experiment. * In this scenario, the population data set would be the voting habits of all eligible voters in the city. * In statistics, a **sample** is a subset of a population dataset, where not all elements of a study or experiment are collected or measured. * In this scenario, the sample dataset is the 1,000 eligible voters polled across the city. * In data science, the concept of sample versus population does not strictly apply to people or animals. Any comprehensive dataset is considered a population, and any dataset that is a subset of a larger data set is considered a sample.* Execute the first 2 blocks of code to bring in the fuel economy dataset.```python Dependenciesimport pandas as pdimport randomimport matplotlib.pyplot as pltimport numpy as npfrom scipy.stats import sem Set the seed so our data is reproduciblerandom.seed(42) Sample versus population example fuel economyfuel_economy = pd.read_csv('../Resources/2019_fuel_economy.csv') First overview the data set - how many factors, etc.print(fuel_economy.head())```* In this example we will be looking at 2019 vehicle fuel economy data from [fueleconomy.gov](https://https://www.fueleconomy.gov/feg/download.shtml). Our population data contains the fuel economy data for all 1,242 different 2019 model vehicles tested by the U.S. Department of Energy in 2018.* Calculate the population mean and standard deviation using the notebook.```python Calculate the summary statistics and plot the histogram of the entire population dataprint(f"The mean MPG of all vehicles is: {round(fuel_economy.Combined_MPG.mean(),2)}")print(f"The standard deviation of all vehicle's MPG is: {round(fuel_economy.Combined_MPG.std(),2)}")```* The mean miles per gallon of all vehicles tested is 23.33, while the standard deviation of all vehicles tested is 5.94.* Plot the histogram of the fuel efficiency of all vehicles tested using the notebook.```pythonplt.hist(fuel_economy.Combined_MPG)plt.xlabel("Fuel Economy (MPG)")plt.ylabel("Number of Vehicles")plt.show()```* When it comes to selecting a sample dataset, it is important to obtain a dataset that is representative of the entire population.* Subset the fuel economy data set using `fuel_economy.iloc[range(766,856)]` and calculate the mean and standard deviation of this sample. 
Plot the histogram of the sample data.```python Calculate the summary statistics and plot the histogram of the sample data using ilocsubset = fuel_economy.iloc[range(766,856)]print(f"The mean MPG of all vehicles is: {round(subset.Combined_MPG.mean(),2)}")print(f"The standard deviation of all vehicle's MPG is: {round(subset.Combined_MPG.std(),2)}")plt.hist(subset.Combined_MPG)plt.xlabel("Fuel Economy (MPG)")plt.ylabel("Number of Vehicles")plt.show()```* This sample data contains 90 data points from the fuel economy population dataset. * This sample data does not represent the population dataset well; the sample mean is much lower than the population mean and the sample standard deviation is far smaller than the population standard deviation. * The reason this sample does not represent the population data well is because it was not obtained using **random sampling**. * The random sampling is a technique in data science in which every subject or data point has an equal chance of being included in the sample. * This technique increases the likelihood that even a small sample size will include individuals from each group in the population.* Subset the fuel economy dataset using `fuel_economy.sample(90)` and calculate the mean and standard deviation of this sample. Plot the histogram of the sample data.```python Calculate the summary statistics and plot the histogram of the sample data using random samplingsubset = fuel_economy.sample(90)print(f"The mean MPG of all vehicles is: {round(subset.Combined_MPG.mean(),2)}")print(f"The standard deviation of all vehicle's MPG is: {round(subset.Combined_MPG.std(),2)}")plt.hist(subset.Combined_MPG)plt.xlabel("Fuel Economy (MPG)")plt.ylabel("Number of Vehicles")plt.show()```* Pandas' `DataFrame.sample()` function uses random sampling to subset the DataFrame, creating a sample that is far more likely to represent the population data.* Compare and contrast the calculated sample mean, standard deviations, and plots from both sample data sets. * Visually, the random sample has the same right skew to the distribution as the population data compared to the more normal distribution from the sliced sample. * The mean and standard deviation of the random sample are far closer to the population mean and standard deviation compared to the sliced sample.* When describing a sample dataset using summary statistics such as the mean, quartiles, variance, and standard deviation, these statistical values are imperfect. * Fortunately, there are ways of quantifying the trustworthiness of a sample dataset. * The population mean mpg in the fuel economy data set is 23.33, while the population standard deviation of all vehicles is 5.94. * The standard deviation is seemingly large compared to the mean, especially considering there are 1,242 vehicles in the dataset. The larger standard deviation is most likely due to the variety of vehicle types in the dataset.* In order for us to estimate how well a sample is representative of the total population, we calculate the **standard error** (**standard error of the mean**, or SEM) of the sample. * The standard error describes how far a sample's mean is from the population's "true" mean. * The standard error is a function of sample size; as sample size increases, the standard error decreases.* The formula for standard error is unimportant. 
There is a [function in SciPy](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.sem.html) that calculates standard error for us.* Using the notebook, create a new sample dataset from the fuel economy population data using `fuel_economy.sample(30)`. Calculate the SEM value using SciPy's `stats.sem` function.```python Generate a new 30 vehicle sample and calculate the SEM of the samplesample = fuel_economy.sample(30)print(f"The SEM value for the sample fuel economy data is {sem(sample.Combined_MPG)}")```* One of the most common uses of SEM in data science is to compare and contrast sample data across a sample set. One easy way to visualize the differences in standard error across samples is to generate **error bars** on a scatter or line plot.* Use the notebook to create a sample set of 10 samples, each containing 30 vehicles from the fuel economy population data.```python Create a sample set of 10, each with 30 vehiclesvehicle_sample_set = [fuel_economy.sample(30) for x in range(0,10)]```* Calculate the mean and SEM of each sample using list comprehension and plot the data using Matplotlib's `pyplot.errorbar` function.```python Generate the plot data for each samplemeans = [sample.Combined_MPG.mean() for sample in vehicle_sample_set]standard_errors = [sem(sample.Combined_MPG) for sample in vehicle_sample_set]x_axis = np.arange(0, len(vehicle_sample_set), 1) + 1 Setting up the plotfig, ax = plt.subplots()ax.errorbar(x_axis, means, standard_errors, fmt="o")ax.set_xlim(0, len(vehicle_sample_set) + 1)ax.set_ylim(20,28)ax.set_xlabel("Sample Number")ax.set_ylabel("Mean MPG")plt.show()```* The standard error essentially tells us how likely it is that the sample's mean is "close" to the population's mean—the one we actually care seek to estimate. * The error bars that are the largest are the samples whose mean is the least likely to represent the population mean. * If the standard error of the samples is too large, we can increase the number of data points in the sample to reduce the standard error.* Slack out the solution notebook for students to refer to during the next activity.
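* As an optional extra (not in the original lesson plan), a quick sketch shows that the standard error shrinks as the sample size grows; the sample sizes below are arbitrary choices:
```python
# Sketch: SEM decreases as the sample size increases.
import pandas as pd
from scipy.stats import sem

fuel_economy = pd.read_csv('Resources/2019_fuel_economy.csv')
for n in [10, 30, 100, 300]:
    sample = fuel_economy.Combined_MPG.sample(n, random_state=1)
    print(f"n = {n:>3}: SEM = {sem(sample):.3f}")
```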
###Code
# Dependencies
import pandas as pd
import random
import matplotlib.pyplot as plt
import numpy as np
from scipy.stats import sem
# Set the seed so our data is reproducible
random.seed(42)
# Sample versus population example fuel economy
fuel_economy = pd.read_csv('Resources/2019_fuel_economy.csv')
# First overview the data set - how many factors, etc.
print(fuel_economy.head())
# Calculate the summary statistics and plot the histogram of the entire population data
print(f"The mean MPG of all vehicles is: {round(fuel_economy.Combined_MPG.mean(),2)}")
print(f"The standard deviation of all vehicle's MPG is: {round(fuel_economy.Combined_MPG.std(),2)}")
plt.hist(fuel_economy.Combined_MPG)
plt.xlabel("Fuel Economy (MPG)")
plt.ylabel("Number of Vehicles")
plt.show()
# Calculate the summary statistics and plot the histogram of the sample data using iloc
subset = fuel_economy.iloc[range(766,856)] #this is the sample using random, but it is not so good as 'sample' method
print(f"The mean MPG of all vehicles is: {round(subset.Combined_MPG.mean(),2)}")
print(f"The standard deviation of all vehicle's MPG is: {round(subset.Combined_MPG.std(),2)}")
plt.hist(subset.Combined_MPG)
plt.xlabel("Fuel Economy (MPG)")
plt.ylabel("Number of Vehicles")
plt.show()
# Calculate the summary statistics and plot the histogram of the sample data using random sampling
subset = fuel_economy.sample(90)
print(f"The mean MPG of all vehicles is: {round(subset.Combined_MPG.mean(),2)}")
print(f"The standard deviation of all vehicle's MPG is: {round(subset.Combined_MPG.std(),2)}")
plt.hist(subset.Combined_MPG)
plt.xlabel("Fuel Economy (MPG)")
plt.ylabel("Number of Vehicles")
plt.show()
# Generate a new 30 vehicle sample and calculate the SEM of the sample
sample = fuel_economy.sample(30)
print(f"The SEM value for the sample fuel economy data is {sem(sample.Combined_MPG)}")
# Generate a new 30 vehicle sample and calculate the SEM of the sample
sample = fuel_economy.sample(90)
print(f"The SEM value for the sample fuel economy data is {sem(sample.Combined_MPG)}")
# Create a sample set of 10, each with 30 vehicles
vehicle_sample_set = [fuel_economy.sample(30) for _ in range(0,10)]
#vehicle_sample_set is a list of Data Frames. Each Data Frames corresponds to a sample
# Generate the plot data for each sample
means = [sample.Combined_MPG.mean() for sample in vehicle_sample_set]
standard_errors = [sem(sample.Combined_MPG) for sample in vehicle_sample_set]
x_axis = np.arange(0, len(vehicle_sample_set), 1) + 1
# Setting up the plot
fig, ax = plt.subplots()
ax.errorbar(x_axis, means, standard_errors, fmt="o")
ax.set_xlim(0, len(vehicle_sample_set) + 1)
ax.set_ylim(20,28)
ax.set_xlabel("Sample Number")
ax.set_ylabel("Mean MPG")
plt.show()
###Output
_____no_output_____
###Markdown
✏️ 3.2 Student Do: SEM and Error Bars

Instructions
Work with a partner on this activity. Be sure to compare your calculated values as you progress through the activity.
* Execute the starter code to import the Boston housing data set from scikit-learn.
* Create a sample set of median housing prices using Pandas. Be sure to create samples of size 20.
* Calculate the means and standard errors for each sample.
* Create a plot displaying the means for each sample, with the standard error as error bars.
* Calculate the range of SEM values across the sample set.
* Determine which sample's mean is closest to the population mean.
* Compare this sample's mean to the population's mean.
* Rerun your sampling code a few times to generate new sample sets. Try changing the sample size and rerunning the sampling code.
* Discuss with your partner what changes you observe when sample size changes.

- - -
###Code
# Dependencies
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston
from scipy.stats import sem
# Import the Boston housing data set from sklearn and get description
boston_dataset = load_boston()
# Read Boston housing data into a Pandas dataframe
housing_data = pd.DataFrame(data=boston_dataset.data,columns=boston_dataset.feature_names)
housing_data['MEDV'] = boston_dataset.target
# Create a bunch of samples, each with sample size of 20
# Calculate standard error of means
# Determine which sample's mean is closest to the population mean
# Compare to the population mean
# Plot sample means with error bars
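# One possible approach (a sketch, not part of the original starter code);
# the choice of 25 samples is arbitrary.
samples = [housing_data['MEDV'].sample(20) for _ in range(25)]
means = [s.mean() for s in samples]
sems = [sem(s) for s in samples]

# Which sample's mean lands closest to the population mean?
pop_mean = housing_data['MEDV'].mean()
closest = min(range(len(means)), key=lambda i: abs(means[i] - pop_mean))
print(f"Sample {closest + 1} mean {means[closest]:.2f} vs population mean {pop_mean:.2f}")
print(f"SEM values range from {min(sems):.3f} to {max(sems):.3f}")

# Plot the sample means with their standard errors as error bars.
x_axis = np.arange(1, len(samples) + 1)
plt.errorbar(x_axis, means, yerr=sems, fmt="o")
plt.xlabel("Sample Number")
plt.ylabel("Mean MEDV")
plt.show()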
###Output
_____no_output_____
###Markdown
📣 4.1 Instructor Do: Correlation Conundrum

* Often in data analysis we will ask the question "Is there any relationship between Factor A and Factor B?" This concept is known in statistics as **correlation**.
* This is an example of a **positive correlation**. When two factors are positively correlated, they move in the same direction.
  * When the factor on the x-axis increases, the factor on the y-axis increases as well.
* This is an example of a **negative correlation**. When two factors are negatively correlated, they move in opposite directions.
  * When the factor on the x-axis increases, the factor on the y-axis decreases.
* This is an example of two factors with **no correlation**. When two factors are not correlated, their values are completely independent of one another.
* With real-world data, it can be difficult to determine if two factors are correlated.
  * In statistics we can calculate the degree of correlation using the **Pearson correlation coefficient**.
  * The Pearson correlation coefficient is a quantitative measure that describes the simultaneous movement (variability) of two factors.
  * The correlation coefficient, which is often indicated with the letter _r_, will always fall between –1 and 1.
  * An _r_ value of 1 indicates a perfect positive correlation, while an _r_ value of –1 indicates a perfect negative correlation.
  * An _r_ value of 0 means that there is no relationship between the two factors.
  * Most of the time, real-world data will not be the ideal case of –1, 0, or 1. However, we can look at the correlation coefficient to determine how strongly or weakly two factors are related.

```python
# Import the WHO dataset, drop missing data
who_data = pd.read_csv('../Resources/WHO_data.csv')
who_data = who_data.dropna()
who_data.head()
```

* For this example, we are looking at a dataset from the World Health Organization. This dataset contains a number of factors collected by WHO for each country regarding health, population, wealth, and social tendencies.
* Execute the next four blocks of code to produce plots of different pairs of factors. Ask the class which pairs of factors they believe to be correlated.
* All four of these pairs of factors are correlated with one another to varying degrees.
  * We will use the **Pearson correlation coefficient** to quantify the degree of correlation.
  * We do not need to know the mathematical equation to derive the correlation coefficient, because most programming languages and analytical software have correlation functions built in or available through an imported module or package.
* Return to the notebook and execute the next block of code. This time, we will take the same pairs of factors and use SciPy's `stats.pearsonr` function to quantify the correlation.

```python
# The next example will compute the Pearson correlation coefficient between "Income per Capita" and "Average Alcohol Consumed"
income = who_data.iloc[:,1]
alcohol = who_data.iloc[:,8]
correlation = st.pearsonr(income,alcohol)
print(f"The correlation between both factors is {round(correlation[0],2)}")
```

* SciPy's `stats.pearsonr` function simply takes two numerical lists of values (i.e., two factors) and computes the Pearson correlation coefficient.
  * The output of the `stats.pearsonr` function returns both the _r_ value and a _p_ value. For now, we will only look at the _r_ value.
* Execute the next few blocks of code to reproduce the previous example's plots, but this time we accompany the plots with the Pearson's _r_ statistic.
* Across all four pairs of factors, we see the Pearson correlation coefficient range between .28 and .82. This means all four pairs of factors are positively correlated to varying degrees.
* There is a general rule of thumb for describing the strength of a correlation in terms of the absolute value of _r_. Show the students the strength-of-correlation table (a sketch of one common convention follows after this list).
* We can use this table along with our calculated _r_ values to describe whether there is any relationship between two factors.
* Calculating correlations across an entire dataset is a great way to try to find relationships between factors that one could test or investigate with more depth. But caution the students that correlations are not designed to determine the outcome of one variable from another; remember the saying that "correlation does not equal causation."
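To make the rule of thumb concrete, here is a minimal sketch; the exact cut-offs vary by textbook, so treat these thresholds as an assumption rather than the course's official table:

```python
def describe_correlation(r):
    """Rough qualitative label for a Pearson r value (thresholds are one common convention)."""
    strength = abs(r)
    if strength < 0.3:
        label = "weak or no"
    elif strength < 0.5:
        label = "moderate"
    else:
        label = "strong"
    direction = "positive" if r >= 0 else "negative"
    return f"{label} {direction} correlation"

print(describe_correlation(0.82))  # e.g., "strong positive correlation"
```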
###Code
# Dependencies
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as st
# Import the WHO dataset, drop missing data
who_data = pd.read_csv('../Resources/WHO_data.csv')
who_data = who_data.dropna()
who_data.head()
# For the first example, determine which pairs of factors are correlated.
plt.scatter(who_data.iloc[:,1],who_data.iloc[:,8])
plt.xlabel('Income Per Capita')
plt.ylabel('Average Alcohol Consumed Per Person Per Year (L)')
plt.show()
plt.scatter(who_data.iloc[:,3],who_data.iloc[:,10])
plt.xlabel('Population Median Age')
plt.ylabel('Cell Phones Per 100 People')
plt.show()
plt.scatter(who_data.iloc[:,5],who_data.iloc[:,7])
plt.xlabel('% Government Expenditure on Health')
plt.ylabel('Male Life Expectancy')
plt.show()
plt.scatter(who_data.iloc[:,1],who_data.iloc[:,12])
plt.xlabel('Income Per Capita')
plt.ylabel('% Measles Immunization')
plt.show()
# The next example will compute the Pearson correlation coefficient between "Income per Capita" and "Average Alcohol Consumed"
income = who_data.iloc[:,1]
alcohol = who_data.iloc[:,8]
correlation = st.pearsonr(income,alcohol)
print(f"The correlation between both factors is {round(correlation[0],2)}")
# Compare the calculated Pearson's r to the plots
plt.scatter(income,alcohol)
plt.xlabel('Income Per Capita')
plt.ylabel('Average Alcohol Consumed Per Person Per Year (L)')
print(f"The correlation between both factors is {round(correlation[0],2)}")
plt.show()
age = who_data.iloc[:,3]
cell_phones = who_data.iloc[:,10]
correlation = st.pearsonr(age,cell_phones)
plt.scatter(age,cell_phones)
plt.xlabel('Population Median Age')
plt.ylabel('Cell Phones Per 100 People')
print(f"The correlation between both factors is {round(correlation[0],2)}")
plt.show()
government = who_data.iloc[:,5]
life = who_data.iloc[:,7]
correlation = st.pearsonr(government,life)
plt.scatter(government,life)
plt.xlabel('% Government Expenditure on Health')
plt.ylabel('Male Life Expectancy')
print(f"The correlation between both factors is {round(correlation[0],2)}")
plt.show()
income = who_data.iloc[:,1]
measles = who_data.iloc[:,12]
correlation = st.pearsonr(income,measles)
plt.scatter(income,measles)
plt.xlabel('Income Per Capita')
plt.ylabel('% Measles Immunization')
print(f"The correlation between both factors is {round(correlation[0],2)}")
plt.show()
###Output
_____no_output_____
###Markdown
✏️ 4.2 Student Do: Correlation Conquerors

This activity gives students an opportunity to use SciPy to compare factors across scikit-learn's wine recognition dataset.

The wine recognition dataset is "the results of a chemical analysis of wines grown in the same region in Italy by three different cultivators." Measurements of * different constituents are taken for three types of wine.

Instructions

* Execute the starter code to import the wine recognition dataset from scikit-learn.
* Using the dataset, plot the factors malic acid versus flavanoids on a scatter plot. Is this relationship positively correlated, negatively correlated, or not correlated? How strong is the correlation?
* Calculate the Pearson's correlation coefficient for malic acid versus flavanoids. Compare the correlation coefficient to the Strength of Correlation table below. Was your prediction correct?
* Plot the factors alcohol versus color intensity on a scatter plot. Is this relationship positively correlated, negatively correlated, or not correlated? How strong is the correlation?
* Calculate the Pearson's correlation coefficient for alcohol versus color intensity. Compare the correlation coefficient to the Strength of Correlation table. Was your prediction correct?

Bonus

* Look at the [Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/) to find how to generate a correlation matrix. This matrix will contain the Pearson's correlation coefficient for all pairs of factors in the DataFrame (one possible approach is sketched below).
* Generate the correlation matrix and try to find the pair of factors that generate the strongest positive and strongest negative correlations.

- - -
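For instructors, a minimal sketch of one way the bonus can be approached, assuming pandas' `DataFrame.corr` method and the `wine_data` DataFrame created in the starter code below:

```python
# Pearson correlation for every pair of factors
corr_matrix = wine_data.corr()

# Flatten to (factor_a, factor_b) -> r, drop self-correlations, and sort
pairs = corr_matrix.unstack()
pairs = pairs[pairs.index.get_level_values(0) != pairs.index.get_level_values(1)]
print(pairs.sort_values().head(2))   # strongest negative pair (listed twice, once per ordering)
print(pairs.sort_values().tail(2))   # strongest positive pair (listed twice, once per ordering)
```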
###Code
# Dependencies
import pandas as pd
import sklearn.datasets as dta
import scipy.stats as st
import matplotlib.pyplot as plt
# Read in the wine recognition data set from sklearn and load into Pandas
data = dta.load_wine()
wine_data = pd.DataFrame(data.data,columns=data.feature_names)
# Plot malic_acid versus flavanoids on a scatterplot
# Calculate the correlation coefficient between malic_acid and flavanoids
# Plot alcohol versus color_intensity on a scatterplot
# Calculate the correlation coefficient between alcohol and color_intensity
# BONUS: Generate the correlation matrix and find the strongest positive and negative correlations
###Output
_____no_output_____
###Markdown
📣 5.1 Instructor Do: Fits and Regression

* The final important statistical topic for the day is **linear regression**. However, before we can discuss linear regression, we must first talk about the equation of a line, _y = mx + b_, which defines the relationship between x-values and y-values.
  * When it comes to variables in the equation, we refer to the _x_ in the equation as the **independent variable**, and the _y_ as the **dependent variable**.
  * The **slope** of a line is denoted as _m_ in the equation, and the **_y_-intercept** is denoted as _b_ in the equation.
  * Knowing the slope and y-intercept of a line, we can determine any value of _y_ given the value for _x_. This is why we say _y_ is dependent on _x_.
* The first plot shows the ideal linear relationship of _y_ and _x_, where the _x_ and _y_ values are the same value.
  * In this plot, the equation of the line is _y = x_ because the slope is equal to 1 and the _y_-intercept is equal to 0.
  * If we look at the _x_ value of 7 (denoted by the vertical dashed line), the corresponding _y_ value is also 7 (denoted by the horizontal dashed line).
* In the next plot's linear relationship between _x_ and _y_, the slope is much smaller, but the _y_-intercept is much larger.
  * If you plug an _x_ value of 7 into the equation, the resulting _y_ value is 6.4.
* This idea of relating _x_ values and _y_ values using the equation of a line is the general concept of **linear regression**.
  * **Linear regression** is used in data science to model and predict the relationship between two factors.
  * Although this may sound similar to correlation, there is a big difference between the two concepts: correlation quantifies whether "factor Y" and "factor X" are related, while regression predicts "factor Y" values given values from "factor X."
  * By fitting the relationship of two factors to a linear equation, linear regression allows us to predict where data points we did not measure might end up if we had collected more data.
  * Linear regression is a truly powerful tool: it provides us the means to predict house prices, stock market movements, and the weather based on other data.
* We will not dive into the mathematical details of linear regression; rather, we will focus on how to use [SciPy's linregress function](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html) to perform a linear regression, and visualize the linear regression using Matplotlib.

```python
# Dependencies
from matplotlib import pyplot as plt
from scipy.stats import linregress
import numpy as np
from sklearn import datasets
import pandas as pd

# This example compares different factors in the Boston housing data set
boston_data = datasets.load_boston()
housing_data = pd.DataFrame(data=boston_data.data,columns=boston_data.feature_names)
housing_data['MEDV'] = boston_data.target

# Plot out rooms versus median house price
x_values = housing_data['RM']
y_values = housing_data['MEDV']
plt.scatter(x_values,y_values)
plt.xlabel('Rooms in House')
plt.ylabel('Median House Prices ($1000)')
plt.show()
```

* We are once again looking at the Boston housing dataset from scikit-learn. Specifically, we have plotted two factors from the Boston housing dataset in a scatter plot: rooms in a house versus the median housing prices.
* Visually we can see that there is a strong positive correlation between the two factors. We could say that, overall, when there are more rooms in a house, the median house price goes up.
* We can model this relationship using SciPy's `linregress` function by providing it both factors.

```python
# Add the linear regression equation and line to plot
x_values = housing_data['RM']
y_values = housing_data['MEDV']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.xlabel('Rooms in House')
plt.ylabel('Median House Prices ($1000)')
plt.show()
```

* `linregress` produces a number of calculated values: the slope, the intercept, the r-value (which is the correlation coefficient), the _p_ value, and the standard error. The slope, intercept, and standard error are values we have already discussed today.
* We can use the slope and intercept from the `linregress` function to generate our equation of a line. This linear equation can then be used to determine the corresponding _y_ values in order to plot the linear regression over our scatter plot.
* Overall the regression line does a good job of predicting the _y_ values versus the _x_ values. However, some of the actual median housing prices are underestimated between 5 and 7 rooms, and there are expensive houses across the entire dataset regardless of the number of rooms. Explain that these values are not accurately predicted by the regression model.
* If we want to quantify how well the linear regression model predicts the actual values of the dataset, we look at the **r-squared** value, which is determined by squaring the correlation coefficient (`rvalue`).
  * The r-squared value is also known as the **coefficient of determination**, and it represents the percent of data that is closest to the line of best fit.
  * The r-squared value ranges between 0 and 1, where 0 means that none of the actual _y_ values are predicted by the _x_ values in the equation. Conversely, an r-squared value of 1 means that all of the actual _y_ values are predicted by the _x_ values in the equation.
  * The r-squared value is also the squared value of Pearson's correlation coefficient _r_. Therefore, the r-squared statistic can be used to describe the overall relationship between the two factors.
* Execute the next block of code to reproduce the rooms versus price plot with the addition of the r-squared value.

```python
# Print out the r-squared value along with the plot.
x_values = housing_data['RM']
y_values = housing_data['MEDV']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.xlabel('Rooms in House')
plt.ylabel('Median House Prices ($1000)')
print(f"The r-squared is: {rvalue**2}")
plt.show()
```

* The r-squared value of the relationship is 0.48. This means the linear equation is predictive of only 48% of all _y_ values, which is not ideal for predicting housing prices based on the number of rooms.
* We could use the linear equation to predict median house prices for numbers of rooms that were not in the dataset, but with an r-squared this low, relying on the linear equation could lead to incorrect conclusions (see the short prediction sketch after this section).
* Execute the next two blocks of code in the notebook.

```python
# The next example looks at a diabetes data set with less linear relationships
diabetes_data = datasets.load_diabetes()
data = pd.DataFrame(diabetes_data.data,columns=diabetes_data.feature_names)
data['1Y_Disease_Progress'] = diabetes_data.target

# Plot out the different factors in a scatter plot
x_values = data['bp']
y_values = data['1Y_Disease_Progress']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(0,50),fontsize=15,color="red")
plt.xlabel('Normalized Blood Pressure')
plt.ylabel('1Y_Disease_Progress')
print(f"The r-squared is: {rvalue**2}")
plt.show()
```

* This dataset comes from the diabetes dataset from scikit-learn.
  * With this dataset, we want to quantify the relationship between the blood pressure of patients and the progression of diabetes one year after diagnosis.
  * In this plot, we can visually see there is a moderate positive correlation between blood pressure and disease progression. If we look at the linear regression model, the line does trend with the data, but the _y_ values are not well predicted by the linear equation.
  * The regression model produces an r-squared value of 0.19. This means that the equation only predicts the actual _y_ values approximately 19% of the time. Considering that blood pressure and disease progression demonstrate a weak correlation, the simple linear model is not robust enough to adequately predict disease progression.
  * It is unwise to use poor linear models to predict values. Doing so can lead to incorrect conclusions.
* From these examples we now understand the relationship between correlation and regression: the weaker the correlation is between two factors, the less predictive a linear regression model can be.
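As a small extension (not part of the original lesson code), a sketch of using the fitted slope and intercept to predict a price for a room count of interest; it reuses `slope` and `intercept` from the Boston housing regression in the cells below, and the 7.5-room value is just an illustrative choice:

```python
# Assumes slope and intercept from the Boston housing linregress call below
rooms = 7.5
predicted_price = slope * rooms + intercept
print(f"Predicted median price for {rooms} rooms: about ${predicted_price * 1000:,.0f}")
```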
###Code
# Dependencies
from matplotlib import pyplot as plt
from scipy.stats import linregress
import numpy as np
from sklearn import datasets
import pandas as pd
# This example compares different factors in the Boston housing data set
boston_data = datasets.load_boston()
housing_data = pd.DataFrame(data=boston_data.data,columns=boston_data.feature_names)
housing_data['MEDV'] = boston_data.target
# Plot out rooms versus median house price
x_values = housing_data['RM']
y_values = housing_data['MEDV']
plt.scatter(x_values,y_values)
plt.xlabel('Rooms in House')
plt.ylabel('Median House Prices ($1000)')
plt.show()
# Add the linear regression equation and line to plot
x_values = housing_data['RM']
y_values = housing_data['MEDV']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.xlabel('Rooms in House')
plt.ylabel('Median House Prices ($1000)')
plt.show()
# Print out the r-squared value along with the plot.
x_values = housing_data['RM']
y_values = housing_data['MEDV']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.xlabel('Rooms in House')
plt.ylabel('Median House Prices ($1000)')
print(f"The r-squared is: {rvalue**2}")
plt.show()
# The next example looks at a diabetes data set with less linear relationships
diabetes_data = datasets.load_diabetes()
data = pd.DataFrame(diabetes_data.data,columns=diabetes_data.feature_names)
data['1Y_Disease_Progress'] = diabetes_data.target
x_values = data['bp']
y_values = data['1Y_Disease_Progress']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(0,50),fontsize=15,color="red")
plt.xlabel('Normalized Blood Pressure')
plt.ylabel('1Y_Disease_Progress')
print(f"The r-squared is: {rvalue**2}")
plt.show()
###Output
_____no_output_____
###Markdown
✏️ 5.2 Student Do: Fits and Regression

This activity gives students an opportunity to use SciPy to fit data and Matplotlib to display the fit.

Instructions

* Generate a scatter plot with Matplotlib using the year as the independent (*x*) variable and the violent crime rate as the dependent (*y*) variable.
* Use `stats.linregress` to perform a linear regression with the year as the independent variable (*x*) and the violent crime rate as the dependent variable (*y*).
* Use the information returned by `stats.linregress` to create the equation of a line from the model.
* Calculate the predicted violent crime rate of the linear model using the year as the *x* values.
* Plot the linear model of year versus violent crime rate on top of your scatter plot.
  * **Hint**: Your scatter plot and line plot share the same axis.
  * **Hint**: In order to overlay plots in a notebook, the plots must be in the same code block.
* Repeat the process of generating a scatter plot, calculating the linear regression model, and plotting the regression line over the scatter plot for the following pairs of variables:
  * Year versus murder rate.
  * Year versus aggravated assault.

Bonus

* Use `pyplot.subplots` from Matplotlib to create a new figure that displays all three pairs of variables on the same plot. For each pair of variables, there should be a scatter plot and a regression line (a brief sketch of this layout follows below).
  * **Hint**: All three plots share the same x-axis.
* Use the regression lines you created to predict what the violent crime rate, murder rate, and assault rate will be in 20*.

Hints

* See the documentation for [stats.linregress](https://docs.scipy.org/doc/scipy-0.*.0/reference/generated/scipy.stats.linregress.html).
* Recall that `stats.linregress` returns a slope, called *m*, and a *y*-intercept, called *b*. These let you define a line for each fit by simply writing `y-values = m * x-values + b` for each linear regression you perform.

- - -
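For the bonus, a minimal sketch of the stacked-subplot approach. The column names (`year`, `violent_crime_rate`, `murder_rate`, `aggravated_assault`) are assumptions for illustration; use whatever the crime dataset actually provides:

```python
fig, axes = plt.subplots(3, 1, sharex=True, figsize=(8, 10))

# Hypothetical column names -- adjust to match crime_data.csv
pairs = ['violent_crime_rate', 'murder_rate', 'aggravated_assault']
for ax, col in zip(axes, pairs):
    x = crime_data['year']
    y = crime_data[col]
    slope, intercept, rvalue, pvalue, stderr = stats.linregress(x, y)
    ax.scatter(x, y)
    ax.plot(x, slope * x + intercept, "r-")
    ax.set_ylabel(col)

axes[-1].set_xlabel('year')
plt.show()
```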
###Code
# Dependencies
from matplotlib import pyplot as plt
from scipy import stats
import numpy as np
import pandas as pd
# Load crime data set into pandas
crime_data = pd.read_csv("Resources/crime_data.csv")
# Generate a scatter plot of violent crime rate versus year
# Perform a linear regression on violent crime rate versus year
# Create equation of line to calculate predicted violent crime rate
# Plot the linear model on top of scatter plot
# Repeat plotting scatter and linear model for murder rate versus year
# Repeat plotting scatter and linear model for aggravated assault versus year
# Generate a facet plot of all 3 figures
# Calculate the crime rates for 2019
###Output
_____no_output_____ |
AI/AI_for_Medical_Diagnosis/week01/utf-8''AI4M_C1_W1_lecture_ex_01.ipynb | ###Markdown
AI for Medicine Course 1 Week 1 lecture exercises

Data Exploration

In the first assignment of this course, you will work with chest x-ray images taken from the public [ChestX-ray8 dataset](https://arxiv.org/abs/1705.02315). In this notebook, you'll get a chance to explore this dataset and familiarize yourself with some of the techniques you'll use in the first graded assignment.

The first step before jumping into writing code for any machine learning project is to explore your data. A standard Python package for analyzing and manipulating data is [pandas](https://pandas.pydata.org/docs/). With the next two code cells, you'll import `pandas` and a package called `numpy` for numerical manipulation, then use `pandas` to read a csv file into a dataframe and print out the first few rows of data.
###Code
# Import necessary packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
import seaborn as sns
sns.set()
# Read csv file containing training data
train_df = pd.read_csv("nih/train-small.csv")
# Print first 5 rows
print(f'There are {train_df.shape[0]} rows and {train_df.shape[1]} columns in this data frame')
train_df.head()
###Output
There are 1000 rows and 16 columns in this data frame
###Markdown
Have a look at the various columns in this csv file. The file contains the names of chest x-ray images (the "Image" column), and the columns filled with ones and zeros identify which diagnoses were given based on each x-ray image.

Data types and null values check

Run the next cell to explore the data types present in each column and whether any null values exist in the data.
###Code
# Look at the data type of each column and whether null values are present
train_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1000 entries, 0 to 999
Data columns (total 16 columns):
Image 1000 non-null object
Atelectasis 1000 non-null int64
Cardiomegaly 1000 non-null int64
Consolidation 1000 non-null int64
Edema 1000 non-null int64
Effusion 1000 non-null int64
Emphysema 1000 non-null int64
Fibrosis 1000 non-null int64
Hernia 1000 non-null int64
Infiltration 1000 non-null int64
Mass 1000 non-null int64
Nodule 1000 non-null int64
PatientId 1000 non-null int64
Pleural_Thickening 1000 non-null int64
Pneumonia 1000 non-null int64
Pneumothorax 1000 non-null int64
dtypes: int64(15), object(1)
memory usage: 125.1+ KB
###Markdown
Unique IDs check

"PatientId" has an identification number for each patient. One thing you'd like to know about a medical dataset like this is if you're looking at repeated data for certain patients or whether each image represents a different person.
###Code
print(f"The total patient ids are {train_df['PatientId'].count()}, from those the unique ids are {train_df['PatientId'].value_counts().shape[0]} ")
###Output
_____no_output_____
###Markdown
As you can see, the number of unique patients in the dataset is less than the total number, so there must be some overlap. For patients with multiple records, you'll want to make sure they do not show up in both training and test sets in order to avoid data leakage (covered later in this week's lectures).

Explore data labels

Run the next two code cells to create a list of the names of each patient condition or disease.
###Code
columns = train_df.keys()
columns = list(columns)
print(columns)
# Remove unnecesary elements
columns.remove('Image')
columns.remove('PatientId')
# Get the total classes
print(f"There are {len(columns)} columns of labels for these conditions: {columns}")
###Output
There are 14 columns of labels for these conditions: ['Atelectasis', 'Cardiomegaly', 'Consolidation', 'Edema', 'Effusion', 'Emphysema', 'Fibrosis', 'Hernia', 'Infiltration', 'Mass', 'Nodule', 'Pleural_Thickening', 'Pneumonia', 'Pneumothorax']
###Markdown
Run the next cell to print out the number of positive labels (1's) for each condition
###Code
# Print out the number of positive labels for each class
for column in columns:
print(f"The class {column} has {train_df[column].sum()} samples")
###Output
The class Atelectasis has 106 samples
The class Cardiomegaly has 20 samples
The class Consolidation has 33 samples
The class Edema has 16 samples
The class Effusion has 128 samples
The class Emphysema has 13 samples
The class Fibrosis has 14 samples
The class Hernia has 2 samples
The class Infiltration has 175 samples
The class Mass has 45 samples
The class Nodule has 54 samples
The class Pleural_Thickening has 21 samples
The class Pneumonia has 10 samples
The class Pneumothorax has 38 samples
###Markdown
Have a look at the counts for the labels in each class above. Does this look like a balanced dataset? (A small sketch for quantifying this follows below.)

Data Visualization

Using the image names listed in the csv file, you can retrieve the image associated with each row of data in your dataframe. Run the cell below to visualize a random selection of images from the dataset.
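Before moving on to the visualization cell, a quick sketch for the balance question above, using the `columns` list and `train_df` DataFrame already defined in this notebook:

```python
# Fraction of positive labels for each condition; values far below 0.5 indicate imbalance
for column in columns:
    print(f"{column}: {train_df[column].mean():.3f} positive fraction")
```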
###Code
# Extract numpy values from Image column in data frame
images = train_df['Image'].values
# Extract 9 random images from it
random_images = [np.random.choice(images) for i in range(9)]
# Location of the image dir
img_dir = 'nih/images-small/'
print('Display Random Images')
# Adjust the size of your images
plt.figure(figsize=(20,10))
# Iterate and plot random images
for i in range(9):
plt.subplot(3, 3, i + 1)
img = plt.imread(os.path.join(img_dir, random_images[i]))
plt.imshow(img, cmap='gray')
plt.axis('off')
# Adjust subplot parameters to give specified padding
plt.tight_layout()
###Output
Display Random Images
###Markdown
Investigate a single image

Run the cell below to look at the first image in the dataset and print out some details of the image contents.
###Code
# Get the first image that was listed in the train_df dataframe
sample_img = train_df.Image[0]
raw_image = plt.imread(os.path.join(img_dir, sample_img))
plt.imshow(raw_image, cmap='gray')
plt.colorbar()
plt.title('Raw Chest X Ray Image')
print(f"The dimensions of the image are {raw_image.shape[0]} pixels width and {raw_image.shape[1]} pixels height, one single color channel")
print(f"The maximum pixel value is {raw_image.max():.4f} and the minimum is {raw_image.min():.4f}")
print(f"The mean value of the pixels is {raw_image.mean():.4f} and the standard deviation is {raw_image.std():.4f}")
###Output
The dimensions of the image are 1024 pixels width and 1024 pixels height, one single color channel
The maximum pixel value is 0.9804 and the minimum is 0.0000
The mean value of the pixels is 0.4796 and the standard deviation is 0.2757
###Markdown
Investigate pixel value distribution

Run the cell below to plot the distribution of pixel values in the image shown above.
###Code
# Plot a histogram of the distribution of the pixels
sns.distplot(raw_image.ravel(),
label=f'Pixel Mean {np.mean(raw_image):.4f} & Standard Deviation {np.std(raw_image):.4f}', kde=False)
plt.legend(loc='upper center')
plt.title('Distribution of Pixel Intensities in the Image')
plt.xlabel('Pixel Intensity')
plt.ylabel('# Pixels in Image')
###Output
_____no_output_____
###Markdown
Image Preprocessing in Keras

Before training, you'll first modify your images to be better suited for training a convolutional neural network. For this task you'll use the Keras [ImageDataGenerator](https://keras.io/preprocessing/image/) function to perform data preprocessing and data augmentation.

Run the next two cells to import this function and create an image generator for preprocessing.
###Code
# Import data generator from keras
from keras.preprocessing.image import ImageDataGenerator
# Normalize images
image_generator = ImageDataGenerator(
samplewise_center=True, #Set each sample mean to 0.
samplewise_std_normalization= True # Divide each input by its standard deviation
)
###Output
_____no_output_____
###Markdown
Standardization

The `image_generator` you created above will act to adjust your image data such that the new mean of the data will be zero, and the standard deviation of the data will be 1. In other words, the generator will replace each pixel value in the image with a new value calculated by subtracting the mean and dividing by the standard deviation:

$$\frac{x_i - \mu}{\sigma}$$

Run the next cell to pre-process your data using the `image_generator`. In this step you will also be reducing the image size down to 320x320 pixels.
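For intuition, a minimal NumPy sketch of the same per-image (samplewise) standardization, using the `raw_image` loaded earlier; this is a rough equivalent for illustration, not the Keras implementation itself:

```python
import numpy as np

def standardize_image(image):
    # Subtract the image's own mean and divide by its own standard deviation
    return (image - image.mean()) / image.std()

standardized = standardize_image(raw_image)
print(standardized.mean(), standardized.std())  # approximately 0 and 1
```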
###Code
# Flow from directory with specified batch size and target image size
generator = image_generator.flow_from_dataframe(
dataframe=train_df,
directory="nih/images-small/",
x_col="Image", # features
y_col= ['Mass'], # labels
class_mode="raw", # 'Mass' column should be in train_df
batch_size= 1, # images per batch
shuffle=False, # shuffle the rows or not
target_size=(320,320) # width and height of output image
)
###Output
Found 1000 validated image filenames.
###Markdown
Run the next cell to plot up an example of a pre-processed image
###Code
# Plot a processed image
sns.set_style("white")
generated_image, label = generator.__getitem__(0)
plt.imshow(generated_image[0], cmap='gray')
plt.colorbar()
plt.title('Raw Chest X Ray Image')
print(f"The dimensions of the image are {generated_image.shape[1]} pixels width and {generated_image.shape[2]} pixels height")
print(f"The maximum pixel value is {generated_image.max():.4f} and the minimum is {generated_image.min():.4f}")
print(f"The mean value of the pixels is {generated_image.mean():.4f} and the standard deviation is {generated_image.std():.4f}")
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
Run the cell below to see a comparison of the distribution of pixel values in the new pre-processed image versus the raw image.
###Code
# Include a histogram of the distribution of the pixels
sns.set()
plt.figure(figsize=(10, 7))
# Plot histogram for original image
sns.distplot(raw_image.ravel(),
label=f'Original Image: mean {np.mean(raw_image):.4f} - Standard Deviation {np.std(raw_image):.4f} \n '
f'Min pixel value {np.min(raw_image):.4} - Max pixel value {np.max(raw_image):.4}',
color='blue',
kde=False)
# Plot histogram for generated image
sns.distplot(generated_image[0].ravel(),
label=f'Generated Image: mean {np.mean(generated_image[0]):.4f} - Standard Deviation {np.std(generated_image[0]):.4f} \n'
f'Min pixel value {np.min(generated_image[0]):.4} - Max pixel value {np.max(generated_image[0]):.4}',
color='red',
kde=False)
# Place legends
plt.legend()
plt.title('Distribution of Pixel Intensities in the Image')
plt.xlabel('Pixel Intensity')
plt.ylabel('# Pixel')
###Output
_____no_output_____ |
Code/9_Gradient_5.ipynb | ###Markdown
Baseline model

Random Forest Classifier CV -> check whether the data is separable or not
###Code
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import scale
x = scale(df.drop('class', axis=1).values)
y = df['class'].values
model = RandomForestClassifier()
cross_val_score(model, x, y) # cross validation # the data is separable
###Output
/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_split.py:1978: FutureWarning: The default value of cv will change from 3 to 5 in version 0.22. Specify it explicitly to silence this warning.
warnings.warn(CV_WARNING, FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/ensemble/forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/ensemble/forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
/usr/local/lib/python3.6/dist-packages/sklearn/ensemble/forest.py:245: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
###Markdown
Logistic Regression Keras Model
###Code
x_train , x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=42)
import keras.backend as K
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import Adam, SGD
K.clear_session()
model = Sequential()
model.add(Dense(1, input_shape=(4,), activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='SGD', metrics=['accuracy'])
model.summary()
history = model.fit(x_train, y_train, epochs=10)
result = model.evaluate(x_test, y_test)
result
my_tup = ('swarna','shahin','tanvir','pial','razin','jahid')
my_tup2 = ('shahin','shahin')
print(my_tup2.count('shahins')) # returns how many times the value occurs in the tuple
print(my_tup.index('jahid'))
history_df = pd.DataFrame(history.history, index=history.epoch)
history_df.plot(ylim=(0,1))
plt.title("Test accuracy: {:3.1f} %".format(result[1]*100), fontsize=15)
###Output
_____no_output_____
###Markdown
Different LR
###Code
dflist = []
learning_rates = [0.01, 0.05, 0.1, 0.5]
for lr in learning_rates:
K.clear_session()
model = Sequential()
model.add(Dense(1, input_shape=(4,), activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=SGD(lr=lr),
metrics=['accuracy'])
h = model.fit(x_train, y_train, batch_size=16, verbose=0)
dflist.append(pd.DataFrame(h.history, index=h.epoch))
historydf = pd.concat(dflist, axis=1)
historydf.head()
metrics_reported = dflist[0].columns
idx = pd.MultiIndex.from_product([learning_rates, metrics_reported],
names=['learning_rate', 'metric'])
historydf.columns = idx
historydf
plt.subplots(figsize=(10,10))
ax = plt.subplot(211)
historydf.xs('loss', axis=1, level='metric').plot(ylim=(0,1), ax=ax)
plt.title("Loss")
ax = plt.subplot(212)
historydf.xs('acc', axis=1, level='metric').plot(ylim=(0,1), ax=ax)
plt.title("Accuracy")
plt.xlabel("Epochs")
plt.tight_layout()
###Output
/usr/local/lib/python3.6/dist-packages/pandas/plotting/_core.py:1001: UserWarning: Attempting to set identical left==right results
in singular transformations; automatically expanding.
left=0.0, right=0.0
ax.set_xlim(left, right)
###Markdown
Batch Size
###Code
dflist = []
batch_sizes = [16, 32, 64, 128]
for batch_size in batch_sizes:
K.clear_session()
model = Sequential()
model.add(Dense(1, input_shape=(4,), activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
h = model.fit(x_train, y_train, batch_size=batch_size, verbose=0)
dflist.append(pd.DataFrame(h.history, index=h.epoch))
historydf = pd.concat(dflist, axis=1)
metrics_reported = dflist[0].columns
idx = pd.MultiIndex.from_product([batch_sizes, metrics_reported],
names=['batch_size', 'metric'])
historydf.columns = idx
historydf
plt.subplots(figsize=(10,10))
ax = plt.subplot(211)
historydf.xs('loss', axis=1, level='metric').plot(ylim=(0,1), ax=ax)
plt.title("Loss")
ax = plt.subplot(212)
historydf.xs('acc', axis=1, level='metric').plot(ylim=(0,1), ax=ax)
plt.title("Accuracy")
plt.xlabel("Epochs")
plt.tight_layout()
###Output
/usr/local/lib/python3.6/dist-packages/pandas/plotting/_core.py:1001: UserWarning: Attempting to set identical left==right results
in singular transformations; automatically expanding.
left=0.0, right=0.0
ax.set_xlim(left, right)
###Markdown
Optimizer
###Code
from keras.optimizers import SGD, Adam, Adagrad, RMSprop
dflist = []
optimizers = ['SGD(lr=0.01)',
'SGD(lr=0.01, momentum=0.3)',
'SGD(lr=0.01, momentum=0.3, nesterov=True)',
'Adam(lr=0.01)',
'Adagrad(lr=0.01)',
'RMSprop(lr=0.01)']
for opt_name in optimizers:
K.clear_session()
model = Sequential()
model.add(Dense(1, input_shape=(4,), activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=eval(opt_name),
metrics=['accuracy'])
h = model.fit(x_train, y_train, batch_size=16, epochs=5, verbose=0)
dflist.append(pd.DataFrame(h.history, index=h.epoch))
historydf = pd.concat(dflist, axis=1)
metrics_reported = dflist[0].columns
idx = pd.MultiIndex.from_product([optimizers, metrics_reported],
names=['optimizers', 'metric'])
historydf.columns = idx
plt.subplots(figsize=(10,10))
ax = plt.subplot(211)
historydf.xs('loss', axis=1, level='metric').plot(ylim=(0,1), ax=ax)
plt.title("Loss")
ax = plt.subplot(212)
historydf.xs('acc', axis=1, level='metric').plot(ylim=(0,1), ax=ax)
plt.title("Accuracy")
plt.xlabel("Epochs")
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Initialization of weights
###Code
dflist = []
initializers = ['zeros', 'uniform', 'normal',
'he_normal', 'lecun_uniform']
for init in initializers:
K.clear_session()
model = Sequential()
model.add(Dense(1, input_shape=(4,),
kernel_initializer=init,
activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
h = model.fit(x_train, y_train, batch_size=16, epochs=5, verbose=0)
dflist.append(pd.DataFrame(h.history, index=h.epoch))
historydf = pd.concat(dflist, axis=1)
metrics_reported = dflist[0].columns
idx = pd.MultiIndex.from_product([initializers, metrics_reported],
names=['initializers', 'metric'])
historydf.columns = idx
plt.subplots(figsize=(10,10))
ax = plt.subplot(211)
historydf.xs('loss', axis=1, level='metric').plot(ylim=(0,1), ax=ax)
plt.title("Loss")
ax = plt.subplot(212)
historydf.xs('acc', axis=1, level='metric').plot(ylim=(0,1), ax=ax)
plt.title("Accuracy")
plt.xlabel("Epochs")
plt.tight_layout()
###Output
_____no_output_____ |
Kaggle-Computer-Vision/5-exercise-custom-convnets.ipynb | ###Markdown
**This notebook is an exercise in the [Computer Vision](https://www.kaggle.com/learn/computer-vision) course. You can reference the tutorial at [this link](https://www.kaggle.com/ryanholbrook/custom-convnets).**

---

Introduction

In these exercises, you'll build a custom convnet with performance competitive to the VGG16 model from Lesson 1.

Get started by running the code cell below.
###Code
# Setup feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.computer_vision.ex5 import *
# Imports
import os, warnings
import matplotlib.pyplot as plt
from matplotlib import gridspec
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image_dataset_from_directory
# Reproducibility
def set_seed(seed=31415):
np.random.seed(seed)
tf.random.set_seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
os.environ['TF_DETERMINISTIC_OPS'] = '1'
set_seed()
# Set Matplotlib defaults
plt.rc('figure', autolayout=True)
plt.rc('axes', labelweight='bold', labelsize='large',
titleweight='bold', titlesize=18, titlepad=10)
plt.rc('image', cmap='magma')
warnings.filterwarnings("ignore") # to clean up output cells
# Load training and validation sets
ds_train_ = image_dataset_from_directory(
'../input/car-or-truck/train',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=True,
)
ds_valid_ = image_dataset_from_directory(
'../input/car-or-truck/valid',
labels='inferred',
label_mode='binary',
image_size=[128, 128],
interpolation='nearest',
batch_size=64,
shuffle=False,
)
# Data Pipeline
def convert_to_float(image, label):
image = tf.image.convert_image_dtype(image, dtype=tf.float32)
return image, label
AUTOTUNE = tf.data.experimental.AUTOTUNE
ds_train = (
ds_train_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
ds_valid = (
ds_valid_
.map(convert_to_float)
.cache()
.prefetch(buffer_size=AUTOTUNE)
)
###Output
Found 5117 files belonging to 2 classes.
Found 5051 files belonging to 2 classes.
###Markdown
Design a Convnet

Let's design a convolutional network with a block architecture like we saw in the tutorial. The model from the example had three blocks, each with a single convolutional layer. Its performance on the "Car or Truck" problem was okay, but far from what the pretrained VGG16 could achieve. It might be that our simple network lacks the ability to extract sufficiently complex features. We could try improving the model either by adding more blocks or by adding convolutions to the blocks we have.

Let's go with the second approach. We'll keep the three-block structure, but increase the number of `Conv2D` layers in the second block to two, and in the third block to three.

1) Define Model

Given the design described above, complete the model by defining the layers of the third block.
###Code
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
# Block One
layers.Conv2D(filters=32, kernel_size=3, activation='relu', padding='same',
input_shape=[128, 128, 3]),
layers.MaxPool2D(),
# Block Two
layers.Conv2D(filters=64, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Block Three
# YOUR CODE HERE
# ____,
layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'),
layers.Conv2D(filters=128, kernel_size=3, activation='relu', padding='same'),
layers.MaxPool2D(),
# Head
layers.Flatten(),
layers.Dense(6, activation='relu'),
layers.Dropout(0.2),
layers.Dense(1, activation='sigmoid'),
])
# Check your answer
q_1.check()
# Lines below will give you a hint or solution code
#q_1.hint()
#q_1.solution()
model.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_4 (Conv2D) (None, 128, 128, 32) 896
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 64, 64, 32) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 64, 64, 64) 18496
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 32, 32, 64) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 32, 32, 128) 73856
_________________________________________________________________
conv2d_7 (Conv2D) (None, 32, 32, 128) 147584
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 16, 16, 128) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 32768) 0
_________________________________________________________________
dense_2 (Dense) (None, 6) 196614
_________________________________________________________________
dropout_1 (Dropout) (None, 6) 0
_________________________________________________________________
dense_3 (Dense) (None, 1) 7
=================================================================
Total params: 437,453
Trainable params: 437,453
Non-trainable params: 0
_________________________________________________________________
###Markdown
2) Compile

To prepare for training, compile the model with an appropriate loss and accuracy metric for the "Car or Truck" dataset.
###Code
model.compile(
optimizer=tf.keras.optimizers.Adam(epsilon=0.01),
# YOUR CODE HERE: Add loss and metric
loss='binary_crossentropy',
metrics=['binary_accuracy']
)
# Check your answer
q_2.check()
model.compile(
optimizer=tf.keras.optimizers.Adam(epsilon=0.01),
loss='binary_crossentropy',
metrics=['binary_accuracy'],
)
q_2.assert_check_passed()
# Lines below will give you a hint or solution code
#q_2.hint()
#q_2.solution()
###Output
_____no_output_____
###Markdown
Finally, let's test the performance of this new model. First run this cell to fit the model to the training set.
###Code
history = model.fit(
ds_train,
validation_data=ds_valid,
epochs=50,
)
###Output
Epoch 1/50
80/80 [==============================] - 17s 203ms/step - loss: 0.6879 - binary_accuracy: 0.5587 - val_loss: 0.6691 - val_binary_accuracy: 0.5785
Epoch 2/50
80/80 [==============================] - 4s 52ms/step - loss: 0.6676 - binary_accuracy: 0.5748 - val_loss: 0.6612 - val_binary_accuracy: 0.5785
Epoch 3/50
80/80 [==============================] - 4s 51ms/step - loss: 0.6643 - binary_accuracy: 0.5748 - val_loss: 0.6505 - val_binary_accuracy: 0.5785
Epoch 4/50
80/80 [==============================] - 4s 52ms/step - loss: 0.6541 - binary_accuracy: 0.5748 - val_loss: 0.6450 - val_binary_accuracy: 0.5785
Epoch 5/50
80/80 [==============================] - 4s 51ms/step - loss: 0.6531 - binary_accuracy: 0.5841 - val_loss: 0.6383 - val_binary_accuracy: 0.6034
Epoch 6/50
80/80 [==============================] - 4s 51ms/step - loss: 0.6419 - binary_accuracy: 0.6249 - val_loss: 0.6342 - val_binary_accuracy: 0.5947
Epoch 7/50
80/80 [==============================] - 4s 51ms/step - loss: 0.6358 - binary_accuracy: 0.6405 - val_loss: 0.6231 - val_binary_accuracy: 0.6353
Epoch 9/50
80/80 [==============================] - 4s 51ms/step - loss: 0.6313 - binary_accuracy: 0.6453 - val_loss: 0.6233 - val_binary_accuracy: 0.6375
Epoch 10/50
80/80 [==============================] - 4s 52ms/step - loss: 0.6206 - binary_accuracy: 0.6606 - val_loss: 0.6014 - val_binary_accuracy: 0.6700
Epoch 11/50
80/80 [==============================] - 4s 51ms/step - loss: 0.6097 - binary_accuracy: 0.6622 - val_loss: 0.5903 - val_binary_accuracy: 0.6842
Epoch 12/50
80/80 [==============================] - 4s 52ms/step - loss: 0.5991 - binary_accuracy: 0.6739 - val_loss: 0.5867 - val_binary_accuracy: 0.6838
Epoch 13/50
80/80 [==============================] - 4s 51ms/step - loss: 0.5823 - binary_accuracy: 0.6924 - val_loss: 0.5699 - val_binary_accuracy: 0.7088
Epoch 14/50
80/80 [==============================] - 4s 51ms/step - loss: 0.5657 - binary_accuracy: 0.7168 - val_loss: 0.5732 - val_binary_accuracy: 0.6923
Epoch 15/50
80/80 [==============================] - 4s 53ms/step - loss: 0.5558 - binary_accuracy: 0.7278 - val_loss: 0.5425 - val_binary_accuracy: 0.7278
Epoch 16/50
80/80 [==============================] - 4s 51ms/step - loss: 0.5331 - binary_accuracy: 0.7369 - val_loss: 0.5334 - val_binary_accuracy: 0.7315
Epoch 17/50
80/80 [==============================] - 4s 51ms/step - loss: 0.5108 - binary_accuracy: 0.7522 - val_loss: 0.5115 - val_binary_accuracy: 0.7553
Epoch 18/50
80/80 [==============================] - 4s 51ms/step - loss: 0.4825 - binary_accuracy: 0.7677 - val_loss: 0.4899 - val_binary_accuracy: 0.7636
Epoch 19/50
80/80 [==============================] - 4s 51ms/step - loss: 0.4433 - binary_accuracy: 0.7938 - val_loss: 0.4776 - val_binary_accuracy: 0.7767
Epoch 20/50
80/80 [==============================] - 4s 52ms/step - loss: 0.4446 - binary_accuracy: 0.7899 - val_loss: 0.4488 - val_binary_accuracy: 0.7897
Epoch 21/50
80/80 [==============================] - 4s 51ms/step - loss: 0.4006 - binary_accuracy: 0.8190 - val_loss: 0.4330 - val_binary_accuracy: 0.7990
Epoch 22/50
80/80 [==============================] - 4s 51ms/step - loss: 0.3800 - binary_accuracy: 0.8280 - val_loss: 0.4385 - val_binary_accuracy: 0.8113
Epoch 23/50
80/80 [==============================] - 4s 52ms/step - loss: 0.3405 - binary_accuracy: 0.8537 - val_loss: 0.4576 - val_binary_accuracy: 0.8127
Epoch 24/50
80/80 [==============================] - 4s 51ms/step - loss: 0.3356 - binary_accuracy: 0.8482 - val_loss: 0.4356 - val_binary_accuracy: 0.8204
Epoch 25/50
80/80 [==============================] - 4s 51ms/step - loss: 0.2928 - binary_accuracy: 0.8754 - val_loss: 0.4359 - val_binary_accuracy: 0.8236
Epoch 26/50
80/80 [==============================] - 4s 51ms/step - loss: 0.2715 - binary_accuracy: 0.8877 - val_loss: 0.5846 - val_binary_accuracy: 0.7573
Epoch 27/50
80/80 [==============================] - 4s 51ms/step - loss: 0.3016 - binary_accuracy: 0.8801 - val_loss: 0.4742 - val_binary_accuracy: 0.7987
Epoch 28/50
80/80 [==============================] - 4s 52ms/step - loss: 0.2773 - binary_accuracy: 0.8799 - val_loss: 0.6449 - val_binary_accuracy: 0.7420
Epoch 29/50
80/80 [==============================] - 4s 51ms/step - loss: 0.2794 - binary_accuracy: 0.8759 - val_loss: 0.4547 - val_binary_accuracy: 0.8264
Epoch 30/50
80/80 [==============================] - 4s 52ms/step - loss: 0.2787 - binary_accuracy: 0.8815 - val_loss: 0.4249 - val_binary_accuracy: 0.8284
Epoch 31/50
80/80 [==============================] - 4s 52ms/step - loss: 0.2444 - binary_accuracy: 0.8991 - val_loss: 0.4168 - val_binary_accuracy: 0.8345
Epoch 32/50
80/80 [==============================] - 4s 51ms/step - loss: 0.2411 - binary_accuracy: 0.9010 - val_loss: 0.4651 - val_binary_accuracy: 0.8313
Epoch 33/50
80/80 [==============================] - 4s 51ms/step - loss: 0.2320 - binary_accuracy: 0.9092 - val_loss: 0.5111 - val_binary_accuracy: 0.8234
Epoch 34/50
80/80 [==============================] - 4s 52ms/step - loss: 0.2015 - binary_accuracy: 0.9143 - val_loss: 0.5736 - val_binary_accuracy: 0.8238
Epoch 35/50
80/80 [==============================] - 4s 51ms/step - loss: 0.1718 - binary_accuracy: 0.9365 - val_loss: 0.6557 - val_binary_accuracy: 0.8113
Epoch 36/50
80/80 [==============================] - 4s 52ms/step - loss: 0.1625 - binary_accuracy: 0.9288 - val_loss: 0.5820 - val_binary_accuracy: 0.8188
Epoch 37/50
80/80 [==============================] - 4s 51ms/step - loss: 0.1627 - binary_accuracy: 0.9320 - val_loss: 0.7982 - val_binary_accuracy: 0.7959
Epoch 38/50
80/80 [==============================] - 4s 52ms/step - loss: 0.1630 - binary_accuracy: 0.9366 - val_loss: 0.7301 - val_binary_accuracy: 0.7888
Epoch 39/50
80/80 [==============================] - 4s 51ms/step - loss: 0.1559 - binary_accuracy: 0.9403 - val_loss: 0.6244 - val_binary_accuracy: 0.8177
Epoch 40/50
80/80 [==============================] - 4s 51ms/step - loss: 0.1529 - binary_accuracy: 0.9350 - val_loss: 0.4775 - val_binary_accuracy: 0.8422
Epoch 41/50
80/80 [==============================] - 4s 51ms/step - loss: 0.1572 - binary_accuracy: 0.9378 - val_loss: 0.4923 - val_binary_accuracy: 0.8442
Epoch 42/50
80/80 [==============================] - 4s 52ms/step - loss: 0.1277 - binary_accuracy: 0.9525 - val_loss: 0.4893 - val_binary_accuracy: 0.8404
Epoch 43/50
80/80 [==============================] - 4s 51ms/step - loss: 0.1147 - binary_accuracy: 0.9568 - val_loss: 0.5523 - val_binary_accuracy: 0.8335
Epoch 44/50
80/80 [==============================] - 4s 51ms/step - loss: 0.1203 - binary_accuracy: 0.9530 - val_loss: 0.5565 - val_binary_accuracy: 0.8307
Epoch 45/50
80/80 [==============================] - 4s 51ms/step - loss: 0.0901 - binary_accuracy: 0.9696 - val_loss: 0.5025 - val_binary_accuracy: 0.8030
Epoch 46/50
80/80 [==============================] - 4s 52ms/step - loss: 0.0968 - binary_accuracy: 0.9609 - val_loss: 0.6244 - val_binary_accuracy: 0.8468
Epoch 47/50
80/80 [==============================] - 4s 52ms/step - loss: 0.0750 - binary_accuracy: 0.9707 - val_loss: 0.6598 - val_binary_accuracy: 0.8495
Epoch 48/50
80/80 [==============================] - 4s 51ms/step - loss: 0.0632 - binary_accuracy: 0.9781 - val_loss: 0.6518 - val_binary_accuracy: 0.8507
Epoch 49/50
80/80 [==============================] - 4s 51ms/step - loss: 0.0617 - binary_accuracy: 0.9797 - val_loss: 0.7031 - val_binary_accuracy: 0.8487
Epoch 50/50
80/80 [==============================] - 4s 51ms/step - loss: 0.0531 - binary_accuracy: 0.9816 - val_loss: 0.7169 - val_binary_accuracy: 0.8480
###Markdown
And now run the cell below to plot the loss and metric curves for this training run.
###Code
import pandas as pd
history_frame = pd.DataFrame(history.history)
history_frame.loc[:, ['loss', 'val_loss']].plot()
history_frame.loc[:, ['binary_accuracy', 'val_binary_accuracy']].plot();
###Output
_____no_output_____
###Markdown
3) Train the Model

How would you interpret these training curves? Did this model improve upon the model from the tutorial?
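One way to make the interpretation concrete, as a small sketch using the `history_frame` computed above (the train/validation gap heuristic for overfitting is a general rule of thumb, not part of the course checker):

```python
best_epoch = history_frame['val_loss'].idxmin()
gap = history_frame['binary_accuracy'] - history_frame['val_binary_accuracy']
print(f"Validation loss bottoms out at epoch {best_epoch}")
print(f"Best validation accuracy: {history_frame['val_binary_accuracy'].max():.3f}")
print(f"Train/validation accuracy gap at the final epoch: {gap.iloc[-1]:.3f}")
```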
###Code
# View the solution (Run this code cell to receive credit!)
q_3.check()
###Output
_____no_output_____ |
Day18/EDA/Banking_EDA/Exploratory Data Analysis.ipynb | ###Markdown
No NaN or null values in any column
###Code
#Let's check the success percent
count = df.groupby('is_success').size()
percent = count / len(df)*100
print(percent)
###Output
is_success
no 88.30152
yes 11.69848
dtype: float64
###Markdown
The data is highly imbalanced, with only about 11.7 percent 'yes'.
###Code
#checking multicollinearity
sns.pairplot(df)
###Output
_____no_output_____
###Markdown
There seems to be no multicollinearity, but we can clearly see some outliers in `previous` and `pdays`. We will start analyzing each variable now.
###Code
#Age
sns.boxplot(x='is_success', y = 'age', data=df)
# Balance
sns.boxplot(x='is_success', y = 'balance', data=df)
# Impute outliers function
def impute_outliers(df, column , minimum, maximum):
col_values = df[column].values
df[column] = np.where(np.logical_or(col_values<minimum, col_values>maximum), col_values.mean(), col_values)
return df
#Balance has lot of outliers let's fix it
df_new = df
min_val = df['balance'].min()
max_val= 20000 #as most values are under it
df_new = impute_outliers(df=df_new, column='balance', minimum=min_val, maximum=max_val)
#day
sns.boxplot(x='is_success', y='day', data=df)
#duration
sns.boxplot(x='is_success', y='duration', data=df)
#Fixing Duration
min_val = df_new["duration"].min()
max_val = 2000
df_new = impute_outliers(df=df_new, column='duration' , minimum=min_val, maximum=max_val)
#Campaign
sns.boxplot(x='is_success', y='campaign', data=df)
#Fixing campaign column
min_val = df_new['campaign'].min()
max_val = 20
df_new = impute_outliers(df=df_new, column='campaign', minimum=min_val, maximum=max_val)
#pdays
sns.boxplot(x='is_success', y='pdays', data=df)
#Fixing pdays column
min_val = df_new['pdays'].min()
max_val = 250
df_new = impute_outliers(df=df_new, column='pdays', minimum=min_val, maximum = max_val)
#previous
sns.boxplot(x='is_success', y='previous', data=df)
#Fixing previous
min_val = df_new['previous'].min()
max_val = 15
df_new = impute_outliers(df = df_new, column='previous', minimum=min_val, maximum=max_val)
df_new.describe()
###Output
_____no_output_____
###Markdown
The data seems fine now. Categorical variables have 'unknown' values in them; let's fix those too.
###Code
#Impute unknowns function
def impute_unknowns(df, column):
col_values = df[column].values
df[column] = np.where(col_values=='unknown', df[column].mode(), col_values)
return df
#job
job = pd.crosstab(df['job'], df['is_success'])
job.plot(kind='bar')
print(df.groupby(['job']).size()/len(df)*100)
#Fixing job
df_new = impute_unknowns(df=df_new, column='job')
#marital
marital = pd.crosstab(df['marital'], df['is_success'])
marital.plot(kind='bar')
print(df.groupby(['marital']).size()/len(df)*100)
#education
education = pd.crosstab(df['education'], df['is_success'])
education.plot(kind='bar')
print(df.groupby(['education']).size()/len(df)*100)
#Fixing education column
df_new = impute_unknowns(df=df_new, column='education')
#default
default = pd.crosstab(df['default'], df['is_success'])
default.plot(kind='bar')
print(df.groupby(['default']).size()/len(df)*100)
#highly unbalanced hence drop this
df.drop(['default'], axis=1, inplace=True)
#housing
housing = pd.crosstab(df['housing'], df['is_success'])
housing.plot(kind='bar')
print(df.groupby(['housing']).size()/len(df)*100)
#contact
contact = pd.crosstab(df['contact'], df['is_success'])
contact.plot(kind='bar')
#print(df.groupby(['contact']).size()/len(df)*100)
df.drop(['contact'], axis=1, inplace=True) #doesn't seem like an important feature
#month
month = pd.crosstab(df['month'], df['is_success'])
month.plot(kind='bar')
print(df.groupby(['month']).size()/len(df)*100)
#poutcome
poutcome = pd.crosstab(df['poutcome'], df['is_success'])
poutcome.plot(kind='bar')
df.groupby(['poutcome']).size()/len(df)*100
df.drop(['poutcome'], axis=1, inplace=True) #most of the values in this column are missing
#Loan
loan = pd.crosstab(df['loan'], df['is_success'])
loan.plot(kind='bar')
print(df.groupby(['loan']).size()/len(df)*100)
#Updated dataset
df_new.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 45211 entries, 0 to 45210
Data columns (total 14 columns):
age 45211 non-null int64
job 45211 non-null object
marital 45211 non-null object
education 45211 non-null object
balance 45211 non-null float64
housing 45211 non-null object
loan 45211 non-null object
day 45211 non-null int64
month 45211 non-null object
duration 45211 non-null float64
campaign 45211 non-null float64
pdays 45211 non-null float64
previous 45211 non-null float64
is_success 45211 non-null object
dtypes: float64(5), int64(2), object(7)
memory usage: 4.8+ MB
###Markdown
Feature Engineering
###Code
#separating target variable from the dataset before creating dummy variable
y = df_new['is_success']
X = df_new[df_new.columns[0:12]]
print(X.head())
#creating dummy variables
X_dummy = pd.get_dummies(X)
print(X_dummy.head())
X = np.array(X_dummy.values)
Y = np.array(y.values)
X.shape, y.shape
#splitting the validation dataset
size = 0.20
seed = 7
X_train, X_validation, y_train, Y_validation = model_selection.train_test_split(X, y, test_size=size, random_state = seed)
#scaling the values
X_t = scale(X_train)
#let's use all of our variables as components i.e. 39
pca = PCA(n_components=39)
pca.fit(X_t)
#Amount of variance by each principal component
var = pca.explained_variance_ratio_
#cumulative variance
cum_var = np.cumsum(np.round(pca.explained_variance_ratio_, decimals=4)*100)
#let's plot the cumilative variance
plt.plot(cum_var)
###Output
_____no_output_____
###Markdown
From the plot we can see that the first 32 components explain essentially 100% of the variance in the data. Let's proceed with these 32 components.
###Code
pca = PCA(n_components=32)
pca.fit(X_t)
X_train_PC = pca.fit_transform(X_t)
###Output
_____no_output_____
###Markdown
Let's train our models
###Code
#Test options
seed = 7
scoring = 'accuracy'
#Algorithms
models=[]
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('K-NN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))
#evaluating each model in turn
results = []
names = []
for name, model in models:
kfold = model_selection.KFold(n_splits=10, random_state = seed)
cv_results = model_selection.cross_val_score(model, X_train_PC, y_train, cv=kfold, scoring=scoring)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
###Output
LR: 0.891755 (0.005013)
LDA: 0.891202 (0.004555)
K-NN: 0.883958 (0.004866)
CART: 0.855508 (0.004585)
NB: 0.859241 (0.004832)
SVM: 0.893553 (0.005163)
###Markdown
"SVM" has highest Accuracy but is slowest while "Logistic Regression" is almost as accurate but faster.
###Code
#Comparing Algorithms
fig = plt.figure()
fig.suptitle('Algorithm Wars')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
###Output
_____no_output_____
###Markdown
Logistic Regression is the best model when we consider both accuracy and speed. Let's make predictions on the validation set.
###Code
X_val = scale(X_validation)
#apply the PCA that was fitted on the training data; re-fitting it on the validation set would leak information
X_validation_PC = pca.transform(X_val)
#Making predictions
lr = LogisticRegression()
lr.fit(X_train_PC, y_train)
predictions = lr.predict(X_validation_PC)
print("Accuracy: ", accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))
###Output
Accuracy: 0.8828928452947031
[[7744 283]
[ 776 240]]
precision recall f1-score support
no 0.91 0.96 0.94 8027
yes 0.46 0.24 0.31 1016
avg / total 0.86 0.88 0.87 9043
|
notebooks/Rudin-continua-no-diferenciable.ipynb | ###Markdown
A function continuous on all of $\mathbb{R}$ and differentiable nowhere. The construction is taken from the book _Principles of Mathematical Analysis_ by Walter Rudin.
###Code
import numpy as np
import matplotlib.pyplot as plt
import gif
M = 10000
###Output
_____no_output_____
###Markdown
Define $\phi(x) = \lvert x \rvert$ for $-1 \leq x \leq 1$. Extend the definition of $\phi(x)$ to all of the reals by requiring that $\phi(x + 2) = \phi(x)$. For all $s, t$ we have $$\lvert \phi(s) - \phi(t) \rvert \leq \lvert s - t\rvert. $$In particular, $\phi$ is continuous on $\mathbb{R}$. Now define $$f(x):= \sum_{n=0}^\infty \left( \frac{3}{4} \right)^n \phi(4^n x).$$This function is continuous on all of $\mathbb{R}$ but is differentiable at no point of $\mathbb{R}$. For more information, see this post.
###Code
def phi(x):
fl = np.floor(x)
li = np.where(np.mod(fl, 2) == 0, fl-1, fl)
d = np.where(li <= -1.0, -1.0-li, np.where(li >= 1.0, -1.0-li, 0) )
xs = d+x
return np.abs(xs)
def wr(x, n=10):
c=np.power(0.75, n)
xs=np.power(4, n)*x
return c*phi(xs)
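# Added note: wr above returns only the single n-th term (3/4)^n * phi(4^n x) of the series.
# As an illustrative sketch (not part of the original notebook), the partial sum
# f_N(x) = sum_{n=0}^{N} (3/4)^n * phi(4^n x) from the definition above could be computed as:
def wr_partial_sum(x, N=10):
    return sum(np.power(0.75, n) * phi(np.power(4, n) * x) for n in range(N + 1))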
@gif.frame
def plot(n):
x = np.linspace(-1,1,M)
y = wr(x, n=n)
plt.figure(figsize=(12,6))
plt.plot(x, y, 'k-', lw=0.3)
plt.axis('off')
frames = []
for n in [0,1,2,3,4,5,6,7]:
frame = plot(n)
frames.append(frame)
gif.save(frames, 'wr-connodif.gif', duration=512)
###Output
_____no_output_____ |
JetbotOriginalNotebooks/basic_motion/basic_motion.ipynb | ###Markdown
Basic Motion Welcome to JetBot's browser based programming interface! This document is called a *Jupyter Notebook*, which combines text, code, and graphic display all in one! Pretty neat, huh? If you're unfamiliar with *Jupyter* we suggest clicking the ``Help`` drop down menu in the top toolbar. This has useful references for programming with *Jupyter*. In this notebook, we'll cover the basics of controlling JetBot. Importing the Robot class To get started programming JetBot, we'll need to import the ``Robot`` class. This class allows us to easily control the robot's motors! This is contained in the ``jetbot`` package.> If you're new to Python, a *package* is essentially a folder containing > code files. These code files are called *modules*. To import the ``Robot`` class, highlight the cell below and press ``ctrl + enter`` or the ``play`` icon above. This will execute the code contained in the cell.
###Code
from jetbot import Robot
###Output
_____no_output_____
###Markdown
Now that we've imported the ``Robot`` class we can initialize the class *instance* as follows.
###Code
robot = Robot()
###Output
_____no_output_____
###Markdown
Commanding the robot Now that we've created our ``Robot`` instance we named "robot", we can use this instance to control the robot. To make the robot spin counterclockwise at 30% of its max speed, we can call the following.> WARNING: This next command will make the robot move! Please make sure the robot has clearance.
###Code
robot.left(speed=0.3)
###Output
_____no_output_____
###Markdown
Cool, you should see the robot spin counterclockwise!> If your robot didn't turn left, that means one of the motors is wired backwards! Try powering down your> robot and swapping the terminals that the ``red`` and ``black`` cables of the incorrect motor are connected to.> > REMINDER: Always be careful to check your wiring, and don't change the wiring on a running system! Now, to stop the robot you can call the ``stop`` method.
###Code
robot.stop()
###Output
_____no_output_____
###Markdown
Maybe we only want to run the robot for a set period of time. For that, we can use the Python ``time`` package.
###Code
import time
###Output
_____no_output_____
###Markdown
This package defines the ``sleep`` function, which causes the code execution to block for the specified number of seconds before running the next command. Try the following to make the robot turn left only for half a second.
###Code
robot.left(0.3)
time.sleep(0.5)
robot.stop()
###Output
_____no_output_____
###Markdown
Great. You should see the robot turn left for a bit and then stop.> Wondering what happened to the ``speed=`` inside the ``left`` method? Python allows > us to set function parameters by either their name, or the order that they are defined> (without specifying the name). The ``Robot`` class also has the methods ``right``, ``forward``, and ``backward``. Try creating your own cell to make the robot move forward at 50% speed for one second. Create a new cell by highlighting an existing cell and pressing ``b`` or the ``+`` icon above. Once you've done that, type in the code that you think will make the robot move forward at 50% speed for one second. Controlling motors individually Above we saw how we can control the robot using commands like ``left``, ``right``, etc. But what if we want to set each motor speed individually? Well, there are two ways you can do this. The first way is to call the ``set_motors`` method. For example, to turn along a left arc for a second we could set the left motor to 30% and the right motor to 60% as follows.
###Code
robot.set_motors(0.3, 0.6)
time.sleep(1.0)
robot.stop()
###Output
_____no_output_____
###Markdown
Great! You should see the robot move along a left arc. But actually, there's another way that we could accomplish the same thing. The ``Robot`` class has two attributes named ``left_motor`` and ``right_motor`` that represent each motor individually. These attributes are ``Motor`` class instances, each of which contains a ``value`` attribute. This ``value`` attribute is a [traitlet](https://github.com/ipython/traitlets) which generates ``events`` when assigned a new value. In the motor class, we attach a function that updates the motor commands whenever the value changes. So, to accomplish the exact same thing we did above, we could execute the following.
###Code
robot.left_motor.value = 0.3
robot.right_motor.value = 0.6
time.sleep(1.0)
robot.left_motor.value = 0.0
robot.right_motor.value = 0.0
###Output
_____no_output_____
###Markdown
You should see the robot move in the same exact way! Link motors to traitlets A really cool feature about these [traitlets](https://github.com/ipython/traitlets) is that we can also link them to other traitlets! This is super handy because Jupyter Notebooks allow us to make graphical ``widgets`` that use traitlets under the hood. This means we can attach our motors to ``widgets`` to control them from the browser, or just visualize the value. To show how to do this, let's create and display two sliders that we'll use to control our motors.
###Code
import ipywidgets.widgets as widgets
from IPython.display import display
# create two sliders with range [-1.0, 1.0]
left_slider = widgets.FloatSlider(description='left', min=-1.0, max=1.0, step=0.01, orientation='vertical')
right_slider = widgets.FloatSlider(description='right', min=-1.0, max=1.0, step=0.01, orientation='vertical')
# create a horizontal box container to place the sliders next to eachother
slider_container = widgets.HBox([left_slider, right_slider])
# display the container in this cell's output
display(slider_container)
###Output
_____no_output_____
###Markdown
You should see two ``vertical`` sliders displayed above. > HELPFUL TIP: In Jupyter Lab, you can actually "pop" the output of cells into an entirely separate window! It will still be > connected to the notebook, but displayed separately. This is helpful if we want to pin the output of code we executed elsewhere.> To do this, right click the output of the cell and select ``Create New View for Output``. You can then drag the new window> to a location you find pleasing. Try clicking and dragging the sliders up and down. Notice that nothing happens when we move the sliders currently. That's because we haven't connected them to motors yet! We'll do that by using the ``link`` function from the traitlets package.
###Code
import traitlets
left_link = traitlets.link((left_slider, 'value'), (robot.left_motor, 'value'))
right_link = traitlets.link((right_slider, 'value'), (robot.right_motor, 'value'))
###Output
_____no_output_____
###Markdown
Now try dragging the sliders (slowly at first). You should see the respective motor turn! The ``link`` function that we created above actually creates a bi-directional link! That means, if we set the motor values elsewhere, the sliders will update! Try executing the code block below.
###Code
robot.forward(0.3)
time.sleep(1.0)
robot.stop()
###Output
_____no_output_____
###Markdown
You should see the sliders respond to the motor commands! If we want to remove this connection we can call the ``unlink`` method of each link.
###Code
left_link.unlink()
right_link.unlink()
###Output
_____no_output_____
###Markdown
But what if we don't want a *bi-directional* link? Let's say we only want to use the sliders to display the motor values, but not control them. For that we can use the ``dlink`` function. The left input is the ``source`` and the right input is the ``target``.
###Code
left_link = traitlets.dlink((robot.left_motor, 'value'), (left_slider, 'value'))
right_link = traitlets.dlink((robot.right_motor, 'value'), (right_slider, 'value'))
###Output
_____no_output_____
###Markdown
Now try moving the sliders. You should see that the robot doesn't respond. But when we set the motors using a different method, the sliders will update and display the value! Attach functions to events Another way to use traitlets is by attaching functions (like ``forward``) to events. These functions will get called whenever a change to the object occurs, and will be passed some information about that change, like the ``old`` value and the ``new`` value. Let's create and display some buttons that we'll use to control the robot.
###Code
# create buttons
button_layout = widgets.Layout(width='100px', height='80px', align_self='center')
stop_button = widgets.Button(description='stop', button_style='danger', layout=button_layout)
forward_button = widgets.Button(description='forward', layout=button_layout)
backward_button = widgets.Button(description='backward', layout=button_layout)
left_button = widgets.Button(description='left', layout=button_layout)
right_button = widgets.Button(description='right', layout=button_layout)
# display buttons
middle_box = widgets.HBox([left_button, stop_button, right_button], layout=widgets.Layout(align_self='center'))
controls_box = widgets.VBox([forward_button, middle_box, backward_button])
display(controls_box)
###Output
_____no_output_____
###Markdown
You should see a set of robot controls displayed above! But right now they won't do anything. To do that, we'll need to create some functions that we'll attach to each button's ``on_click`` event.
###Code
def stop(change):
robot.stop()
def step_forward(change):
robot.forward(0.4)
time.sleep(0.5)
robot.stop()
def step_backward(change):
robot.backward(0.4)
time.sleep(0.5)
robot.stop()
def step_left(change):
robot.left(0.3)
time.sleep(0.5)
robot.stop()
def step_right(change):
robot.right(0.3)
time.sleep(0.5)
robot.stop()
###Output
_____no_output_____
###Markdown
Now that we've defined the functions, let's attach them to the on-click events of each button
###Code
# link buttons to actions
stop_button.on_click(stop)
forward_button.on_click(step_forward)
backward_button.on_click(step_backward)
left_button.on_click(step_left)
right_button.on_click(step_right)
###Output
_____no_output_____
###Markdown
Now when you click each button, you should see the robot move! Heartbeat Killswitch Here we show how to connect a 'heartbeat' to stop the robot from moving. This is a simple way to detect if the robot connection is alive. You can lower the slider below to reduce the period (in seconds) of the heartbeat. If a round-trip communication between the browser and the robot cannot be made within two heartbeats, the ``status`` attribute of the heartbeat will be set to ``dead``. As soon as the connection is restored, the ``status`` attribute will return to ``alive``.
###Code
from jetbot import Heartbeat
heartbeat = Heartbeat()
# this function will be called when heartbeat 'alive' status changes
def handle_heartbeat_status(change):
if change['new'] == Heartbeat.Status.dead:
robot.stop()
heartbeat.observe(handle_heartbeat_status, names='status')
period_slider = widgets.FloatSlider(description='period', min=0.001, max=0.5, step=0.01, value=0.5)
traitlets.dlink((period_slider, 'value'), (heartbeat, 'period'))
display(period_slider, heartbeat.pulseout)
###Output
_____no_output_____
###Markdown
Try executing the code below to start the motors, and then lower the slider to see what happens. You can also try disconnecting your robot or PC.
###Code
robot.left(0.2)
# now lower the `period` slider above until the network heartbeat can't be satisfied
###Output
_____no_output_____ |
general/rydberg_transitions.ipynb | ###Markdown
Calculating Rydberg atom transition frequencies The wavelength of the transition between the $n_1$th and $n_2$th levels is given by,\begin{equation} \frac{1}{\lambda} = R_{M} \left( \frac{1}{(n_1-\delta_1)^2} - \frac{1}{(n_2-\delta_2)^2} \right)\end{equation}where $\delta_x$ are the quantum defects, and $R_{M}$ is the Rydberg constant corrected for the reduced mass of the system,\begin{equation} R_{M} = \frac{R_{\infty}}{1+\frac{m_e}{M}}\end{equation}where $R_{\infty}$ is the Rydberg constant with an infinite mass nucleus, $m_e$ is the electron mass, and $M$ is the mass of the nucleus. $R_{\infty}$ is given by,\begin{equation} R_{\infty} = \frac{m_e e^4}{8 \epsilon_0^2 h^3 c} = 1.0973731568508 \times 10^7 m^{-1}\end{equation} The frequency of the transition is then,\begin{equation} f = \frac{c}{\lambda}\end{equation}where $c$ is the speed of light.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
class RydbergHelium:
def __init__(self):
self.z = 2
self.r_inf = 1.0973731568508 * 10**7
self.c = 2.99792458 * 10**8
self.h = 6.62607004 * 10**-34
self.e = 1.60217662 * 10**-19
self.m_e = 9.10938356 * 10**-31
self.m_p = 1.6726219 * 10**-27
def energy_level(self, n, l):
r_m = self.r_inf / (1 + (self.m_e/(2*self.z*self.m_p)))
defect = self.quantum_defect(n, l)
wavelength = 1 / ( r_m * ( 1/float(n-defect)**2 ) )
return self.h * self.c / wavelength
def energy_transition(self, n_from, n_to, l_from=6, l_to=6):
return np.abs(self.energy_level(n_from, l_from) - self.energy_level(n_to, l_to))
def quantum_defect(self, n, l):
# Routine to calculate the quantum defects of the triplet Rydberg states of helium
# From Martin, Phys. Rev. A, vol. 36, pp. 3575-3589 (1987)
# s p d f g h +
a = [0.29665486, 0.06835886, 0.00289043, 0.00043924, 0.00012568, 0.00004756, 0]
b = [0.03824614, -0.01870111, -0.0064691, -0.0017850, -0.0008992, -0.000552 , 0]
c = [0.0082574, -0.0117730, 0.001362, 0.000465, 0.0007, 0.00112 , 0]
d = [0.000359, -0.008540, -0.00325, 0, 0, 0 , 0]
        if l <= 5:
            idx = l
        else:
            idx = 6
        m = n - a[idx]
        return a[idx] + b[idx]*m**(-2) + c[idx]*m**(-4) + d[idx]*m**(-6)
def energy_ionisation(self):
# E/hc = 1/lambda (cm^-1)
return (self.h * self.c) * (198310.6663720 * 100)
def energy_1s3p(self):
# E/hc = 1/lambda (cm^-1)
return (self.h * self.c) * (185564.561920 * 100) # J = 2
def energy_1s2s(self):
# E/hc = 1/lambda (cm^-1)
return (self.h * self.c) * (159855.9743297 * 100)
def energy_1s3p_nl(self, n, l):
return (self.energy_ionisation() - self.energy_1s3p()) - self.energy_level(n, l)
def frequency(self, E):
return E / self.h
def wavelength(self, E):
return self.h * self.c / E
atom = RydbergHelium()
abs_55s = atom.wavelength(atom.energy_1s3p_nl(55, 0)) * 10**9
ref_55s = 786.8166
offset = ref_55s - abs_55s
atom.wavelength(atom.energy_1s3p_nl(70, 0)) * 10**9 + offset
atom.frequency(atom.energy_transition(70,72,0,0)) / 10**9
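# Added illustration (not part of the original calculation): the same helpers can be reused for
# other transitions, e.g. the 55s -> 56s interval in GHz and the corresponding wavelength in metres.
f_55s_56s_GHz = atom.frequency(atom.energy_transition(55, 56, 0, 0)) / 10**9
wavelength_55s_56s_m = atom.wavelength(atom.energy_transition(55, 56, 0, 0))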
###Output
_____no_output_____ |
PythonEssentials/Week1.ipynb | ###Markdown
---_You are currently looking at **version 1.1** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-data-analysis/resources/0dhYG) course resource._--- The Python Programming Language: Functions `add_numbers` is a function that takes two numbers and adds them together.
###Code
def add_numbers(x, y):
return x + y
add_numbers(1, 2)
###Output
_____no_output_____
###Markdown
`add_numbers` updated to take an optional 3rd parameter. Using `print` allows printing of multiple expressions within a single cell.
###Code
def add_numbers(x,y,z=None):
if (z==None):
return x+y
else:
return x+y+z
print(add_numbers(1, 2))
print(add_numbers(1, 2, 3))
###Output
3
6
###Markdown
`add_numbers` updated to take an optional flag parameter.
###Code
def add_numbers(x, y, z=None, flag=False):
if (flag):
print('Flag is true!')
if (z==None):
return x + y
else:
return x + y + z
print(add_numbers(1, 2, flag=True))
###Output
Flag is true!
3
###Markdown
Assign function `add_numbers` to variable `a`.
###Code
def add_numbers(x,y):
return x+y
a = add_numbers
a(1,2)
###Output
_____no_output_____
###Markdown
The Python Programming Language: Types and Sequences Use `type` to return the object's type.
###Code
type('This is a string')
type(None)
type(1)
type(1.0)
type(add_numbers)
###Output
_____no_output_____
###Markdown
Tuples are an immutable data structure (cannot be altered).
###Code
x = (1, 'a', 2, 'b')
type(x)
###Output
_____no_output_____
###Markdown
Lists are a mutable data structure.
###Code
x = [1, 'a', 2, 'b']
type(x)
###Output
_____no_output_____
###Markdown
Use `append` to append an object to a list.
###Code
x.append(3.3)
print(x)
###Output
[1, 'a', 2, 'b', 3.3]
###Markdown
This is an example of how to loop through each item in the list.
###Code
for item in x:
print(item)
###Output
1
a
2
b
3.3
###Markdown
Or using the indexing operator:
###Code
i=0
while( i != len(x) ):
print(x[i])
i = i + 1
###Output
1
a
2
b
3.3
###Markdown
Use `+` to concatenate lists.
###Code
[1,2] + [3,4]
###Output
_____no_output_____
###Markdown
Use `*` to repeat lists.
###Code
[1]*3
###Output
_____no_output_____
###Markdown
Use the `in` operator to check if something is inside a list.
###Code
1 in [1, 2, 3]
###Output
_____no_output_____
###Markdown
Now let's look at strings. Use bracket notation to slice a string.
###Code
x = 'This is a string'
print(x[0]) #first character
print(x[0:1]) #first character, but we have explicitly set the end character
print(x[0:2]) #first two characters
###Output
T
T
Th
###Markdown
This will return the last element of the string.
###Code
x[-1]
###Output
_____no_output_____
###Markdown
This will return the slice starting from the 4th element from the end and stopping before the 2nd element from the end.
###Code
x[-4:-2]
###Output
_____no_output_____
###Markdown
This is a slice from the beginning of the string and stopping before the 3rd element.
###Code
x[:3]
###Output
_____no_output_____
###Markdown
And this is a slice starting from the 4th element of the string and going all the way to the end.
###Code
x[3:]
firstname = 'Christopher'
lastname = 'Brooks'
print(firstname + ' ' + lastname)
print(firstname*3)
print('Chris' in firstname)
###Output
Christopher Brooks
ChristopherChristopherChristopher
True
###Markdown
`split` returns a list of all the words in a string, or a list split on a specific character.
###Code
firstname = 'Christopher Arthur Hansen Brooks'.split(' ')[0] # [0] selects the first element of the list
lastname = 'Christopher Arthur Hansen Brooks'.split(' ')[-1] # [-1] selects the last element of the list
print(firstname)
print(lastname)
###Output
Christopher
Brooks
###Markdown
Make sure you convert objects to strings before concatenating.
###Code
'Chris' + 2
'Chris' + str(2)
###Output
_____no_output_____
###Markdown
Dictionaries associate keys with values.
###Code
x = {'Christopher Brooks': '[email protected]', 'Bill Gates': '[email protected]'}
x['Christopher Brooks'] # Retrieve a value by using the indexing operator
x['Kevyn Collins-Thompson'] = None
x['Kevyn Collins-Thompson']
###Output
_____no_output_____
###Markdown
Iterate over all of the keys:
###Code
for name in x:
print(x[name])
###Output
[email protected]
[email protected]
None
###Markdown
Iterate over all of the values:
###Code
for email in x.values():
print(email)
###Output
[email protected]
[email protected]
None
###Markdown
Iterate over all of the items in the list:
###Code
for name, email in x.items():
print(name)
print(email)
###Output
Christopher Brooks
[email protected]
Bill Gates
[email protected]
Kevyn Collins-Thompson
None
###Markdown
You can unpack a sequence into different variables:
###Code
x = ('Christopher', 'Brooks', '[email protected]')
fname, lname, email = x
fname
lname
###Output
_____no_output_____
###Markdown
Make sure the number of values you are unpacking matches the number of variables being assigned.
###Code
x = ('Christopher', 'Brooks', '[email protected]', 'Ann Arbor')
fname, lname, email = x
###Output
_____no_output_____
###Markdown
The Python Programming Language: More on Strings
###Code
print('Chris' + 2)
print('Chris' + str(2))
###Output
Chris2
###Markdown
Python has a built in method for convenient string formatting.
###Code
sales_record = {
'price': 3.24,
'num_items': 4,
'person': 'Chris'}
sales_statement = '{} bought {} item(s) at a price of {} each for a total of {}'
print(sales_statement.format(sales_record['person'],
sales_record['num_items'],
sales_record['price'],
sales_record['num_items']*sales_record['price']))
###Output
Chris bought 4 item(s) at a price of 3.24 each for a total of 12.96
###Markdown
Reading and Writing CSV files Let's import our datafile mpg.csv, which contains fuel economy data for 234 cars.* mpg : miles per gallon* class : car classification* cty : city mpg* cyl : number of cylinders* displ : engine displacement in liters* drv : f = front-wheel drive, r = rear wheel drive, 4 = 4wd* fl : fuel (e = ethanol E85, d = diesel, r = regular, p = premium, c = CNG)* hwy : highway mpg* manufacturer : automobile manufacturer* model : model of car* trans : type of transmission* year : model year
###Code
import csv
%precision 2
with open('mpg.csv') as csvfile:
mpg = list(csv.DictReader(csvfile))
mpg[:3] # The first three dictionaries in our list.
###Output
_____no_output_____
###Markdown
`csv.DictReader` has read in each row of our csv file as a dictionary. `len` shows that our list is comprised of 234 dictionaries.
###Code
len(mpg)
###Output
_____no_output_____
###Markdown
`keys` gives us the column names of our csv.
###Code
mpg[0].keys()
###Output
_____no_output_____
###Markdown
This is how to find the average cty fuel economy across all cars. All values in the dictionaries are strings, so we need to convert to float.
###Code
sum(float(d['cty']) for d in mpg) / len(mpg)
###Output
_____no_output_____
###Markdown
Similarly this is how to find the average hwy fuel economy across all cars.
###Code
sum(float(d['hwy']) for d in mpg) / len(mpg)
###Output
_____no_output_____
###Markdown
Use `set` to return the unique values for the number of cylinders the cars in our dataset have.
###Code
cylinders = set(d['cyl'] for d in mpg)
cylinders
###Output
_____no_output_____
###Markdown
Here's a more complex example where we are grouping the cars by number of cylinders, and finding the average cty mpg for each group.
###Code
CtyMpgByCyl = []
for c in cylinders: # iterate over all the cylinder levels
summpg = 0
cyltypecount = 0
for d in mpg: # iterate over all dictionaries
if d['cyl'] == c: # if the cylinder level type matches,
summpg += float(d['cty']) # add the cty mpg
cyltypecount += 1 # increment the count
CtyMpgByCyl.append((c, summpg / cyltypecount)) # append the tuple ('cylinder', 'avg mpg')
CtyMpgByCyl.sort(key=lambda x: x[0])
CtyMpgByCyl
###Output
_____no_output_____
###Markdown
Use `set` to return the unique values for the class types in our dataset.
###Code
vehicleclass = set(d['class'] for d in mpg) # what are the class types
vehicleclass
###Output
_____no_output_____
###Markdown
And here's an example of how to find the average hwy mpg for each class of vehicle in our dataset.
###Code
HwyMpgByClass = []
for t in vehicleclass: # iterate over all the vehicle classes
summpg = 0
vclasscount = 0
for d in mpg: # iterate over all dictionaries
if d['class'] == t: # if the cylinder amount type matches,
summpg += float(d['hwy']) # add the hwy mpg
vclasscount += 1 # increment the count
HwyMpgByClass.append((t, summpg / vclasscount)) # append the tuple ('class', 'avg mpg')
HwyMpgByClass.sort(key=lambda x: x[1])
HwyMpgByClass
###Output
_____no_output_____
###Markdown
The Python Programming Language: Dates and Times
###Code
import datetime as dt
import time as tm
###Output
_____no_output_____
###Markdown
`time` returns the current time in seconds since the Epoch. (January 1st, 1970)
###Code
tm.time()
###Output
_____no_output_____
###Markdown
Convert the timestamp to datetime.
###Code
dtnow = dt.datetime.fromtimestamp(tm.time())
dtnow
###Output
_____no_output_____
###Markdown
Handy datetime attributes:
###Code
dtnow.year, dtnow.month, dtnow.day, dtnow.hour, dtnow.minute, dtnow.second # get year, month, day, etc. from a datetime
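# Added note: a datetime can also be formatted as a string, e.g. dtnow.strftime('%Y-%m-%d %H:%M:%S')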
###Output
_____no_output_____
###Markdown
`timedelta` is a duration expressing the difference between two dates.
###Code
delta = dt.timedelta(days = 100) # create a timedelta of 100 days
delta
###Output
_____no_output_____
###Markdown
`date.today` returns the current local date.
###Code
today = dt.date.today()
today - delta # the date 100 days ago
today > today-delta # compare dates
###Output
_____no_output_____
###Markdown
The Python Programming Language: Objects and map() An example of a class in python:
###Code
class Person:
department = 'School of Information' #a class variable
def set_name(self, new_name): #a method
self.name = new_name
def set_location(self, new_location):
self.location = new_location
person = Person()
person.set_name('Christopher Brooks')
person.set_location('Ann Arbor, MI, USA')
print('{} lives in {} and works in the department {}'.format(person.name, person.location, person.department))
###Output
Christopher Brooks lives in Ann Arbor, MI, USA and works in the department School of Information
###Markdown
Here's an example of mapping the `min` function between two lists.
###Code
store1 = [10.00, 11.00, 12.34, 2.34]
store2 = [9.00, 11.10, 12.34, 2.01]
cheapest = map(min, store1, store2)
cheapest
###Output
_____no_output_____
###Markdown
Now let's iterate through the map object to see the values.
###Code
for item in cheapest:
print(item)
###Output
9.0
11.0
12.34
2.01
###Markdown
The Python Programming Language: Lambda and List Comprehensions Here's an example of lambda that takes in three parameters and adds the first two.
###Code
my_function = lambda a, b, c : a + b
my_function(1, 2, 3)
###Output
_____no_output_____
###Markdown
Let's iterate from 0 to 999 and return the even numbers.
###Code
my_list = []
for number in range(0, 1000):
if number % 2 == 0:
my_list.append(number)
my_list
###Output
_____no_output_____
###Markdown
Now the same thing but with list comprehension.
###Code
my_list = [number for number in range(0,1000) if number % 2 == 0]
my_list
###Output
_____no_output_____
###Markdown
The Python Programming Language: Numerical Python (NumPy)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Creating Arrays Create a list and convert it to a numpy array
###Code
mylist = [1, 2, 3]
x = np.array(mylist)
x
###Output
_____no_output_____
###Markdown
Or just pass in a list directly
###Code
y = np.array([4, 5, 6])
y
###Output
_____no_output_____
###Markdown
Pass in a list of lists to create a multidimensional array.
###Code
m = np.array([[7, 8, 9], [10, 11, 12]])
m
###Output
_____no_output_____
###Markdown
Use the shape method to find the dimensions of the array. (rows, columns)
###Code
m.shape
###Output
_____no_output_____
###Markdown
`arange` returns evenly spaced values within a given interval.
###Code
n = np.arange(0, 30, 2) # start at 0 count up by 2, stop before 30
n
###Output
_____no_output_____
###Markdown
`reshape` returns an array with the same data with a new shape.
###Code
n = n.reshape(3, 5) # reshape array to be 3x5
n
###Output
_____no_output_____
###Markdown
`linspace` returns evenly spaced numbers over a specified interval.
###Code
o = np.linspace(0, 4, 9) # return 9 evenly spaced values from 0 to 4
o
###Output
_____no_output_____
###Markdown
`resize` changes the shape and size of array in-place.
###Code
o.resize(3, 3)
o
###Output
_____no_output_____
###Markdown
`ones` returns a new array of given shape and type, filled with ones.
###Code
np.ones((3, 2))
###Output
_____no_output_____
###Markdown
`zeros` returns a new array of given shape and type, filled with zeros.
###Code
np.zeros((2, 3))
###Output
_____no_output_____
###Markdown
`eye` returns a 2-D array with ones on the diagonal and zeros elsewhere.
###Code
np.eye(3)
###Output
_____no_output_____
###Markdown
`diag` extracts a diagonal or constructs a diagonal array.
###Code
np.diag(y)
###Output
_____no_output_____
###Markdown
Create an array using repeating list (or see `np.tile`)
###Code
np.array([1, 2, 3] * 3)
###Output
_____no_output_____
###Markdown
Repeat elements of an array using `repeat`.
###Code
np.repeat([1, 2, 3], 3)
###Output
_____no_output_____
###Markdown
Combining Arrays
###Code
p = np.ones([2, 3], int)
p
###Output
_____no_output_____
###Markdown
Use `vstack` to stack arrays in sequence vertically (row wise).
###Code
np.vstack([p, 2*p])
###Output
_____no_output_____
###Markdown
Use `hstack` to stack arrays in sequence horizontally (column wise).
###Code
np.hstack([p, 2*p])
###Output
_____no_output_____
###Markdown
Operations Use `+`, `-`, `*`, `/` and `**` to perform element wise addition, subtraction, multiplication, division and power.
###Code
print(x + y) # elementwise addition [1 2 3] + [4 5 6] = [5 7 9]
print(x - y) # elementwise subtraction [1 2 3] - [4 5 6] = [-3 -3 -3]
print(x * y) # elementwise multiplication [1 2 3] * [4 5 6] = [4 10 18]
print(x / y) # elementwise divison [1 2 3] / [4 5 6] = [0.25 0.4 0.5]
print(x**2) # elementwise power [1 2 3] ^2 = [1 4 9]
###Output
[1 4 9]
###Markdown
**Dot Product:** $ \begin{bmatrix}x_1 & x_2 & x_3\end{bmatrix}\cdot\begin{bmatrix}y_1 \\ y_2 \\ y_3\end{bmatrix}= x_1 y_1 + x_2 y_2 + x_3 y_3$
###Code
x.dot(y) # dot product 1*4 + 2*5 + 3*6
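# Added note: the same dot product can also be written as np.dot(x, y) or, in Python 3.5+,
# with the matrix-multiplication operator x @ y; all three forms give the same result here.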
z = np.array([y, y**2])
print(len(z)) # number of rows of array
###Output
2
###Markdown
Let's look at transposing arrays. Transposing permutes the dimensions of the array.
###Code
z = np.array([y, y**2])
z
###Output
_____no_output_____
###Markdown
The shape of array `z` is `(2,3)` before transposing.
###Code
z.shape
###Output
_____no_output_____
###Markdown
Use `.T` to get the transpose.
###Code
z.T
###Output
_____no_output_____
###Markdown
The number of rows has swapped with the number of columns.
###Code
z.T.shape
###Output
_____no_output_____
###Markdown
Use `.dtype` to see the data type of the elements in the array.
###Code
z.dtype
###Output
_____no_output_____
###Markdown
Use `.astype` to cast to a specific type.
###Code
z = z.astype('f')
z.dtype
###Output
_____no_output_____
###Markdown
Math Functions Numpy has many built in math functions that can be performed on arrays.
###Code
a = np.array([-4, -2, 1, 3, 5])
a.sum()
a.max()
a.min()
a.mean()
a.std()
###Output
_____no_output_____
###Markdown
`argmax` and `argmin` return the index of the maximum and minimum values in the array.
###Code
a.argmax()
a.argmin()
###Output
_____no_output_____
###Markdown
Indexing / Slicing
###Code
s = np.arange(13)**2
s
###Output
_____no_output_____
###Markdown
Use bracket notation to get the value at a specific index. Remember that indexing starts at 0.
###Code
s[0], s[4], s[-1]
###Output
_____no_output_____
###Markdown
Use `:` to indicate a range. `array[start:stop]` Leaving `start` or `stop` empty will default to the beginning/end of the array.
###Code
s[1:5]
###Output
_____no_output_____
###Markdown
Use negatives to count from the back.
###Code
s[-4:]
###Output
_____no_output_____
###Markdown
A second `:` can be used to indicate step-size. `array[start:stop:stepsize]` Here we are starting at the 5th element from the end, and counting backwards by 2 until the beginning of the array is reached.
###Code
s[-5::-2]
###Output
_____no_output_____
###Markdown
Let's look at a multidimensional array.
###Code
r = np.arange(36)
r.resize((6, 6))
r
###Output
_____no_output_____
###Markdown
Use bracket notation to slice: `array[row, column]`
###Code
r[2, 2]
###Output
_____no_output_____
###Markdown
And use : to select a range of rows or columns
###Code
r[3, 3:6]
###Output
_____no_output_____
###Markdown
Here we are selecting all the rows up to (and not including) row 2, and all the columns up to (and not including) the last column.
###Code
r[:2, :-1]
###Output
_____no_output_____
###Markdown
This is a slice of the last row, and only every other element.
###Code
r[-1, ::2]
###Output
_____no_output_____
###Markdown
We can also perform conditional indexing. Here we are selecting values from the array that are greater than 30. (Also see `np.where`)
###Code
r[r > 30]
###Output
_____no_output_____
###Markdown
Here we are assigning all values in the array that are greater than 30 to the value of 30.
###Code
r[r > 30] = 30
r
###Output
_____no_output_____
###Markdown
Copying Data Be careful with copying and modifying arrays in NumPy! `r2` is a slice of `r`.
###Code
r2 = r[:3,:3]
r2
###Output
_____no_output_____
###Markdown
Set this slice's values to zero ([:] selects the entire array)
###Code
r2[:] = 0
r2
###Output
_____no_output_____
###Markdown
`r` has also been changed!
###Code
r
###Output
_____no_output_____
###Markdown
To avoid this, use `r.copy` to create a copy that will not affect the original array
###Code
r_copy = r.copy()
r_copy
###Output
_____no_output_____
###Markdown
Now when r_copy is modified, r will not be changed.
###Code
r_copy[:] = 10
print(r_copy, '\n')
print(r)
###Output
[[10 10 10 10 10 10]
[10 10 10 10 10 10]
[10 10 10 10 10 10]
[10 10 10 10 10 10]
[10 10 10 10 10 10]
[10 10 10 10 10 10]]
[[ 0 0 0 3 4 5]
[ 0 0 0 9 10 11]
[ 0 0 0 15 16 17]
[18 19 20 21 22 23]
[24 25 26 27 28 29]
[30 30 30 30 30 30]]
###Markdown
Iterating Over Arrays Let's create a new 4 by 3 array of random numbers 0-9.
###Code
test = np.random.randint(0, 10, (4,3))
test
###Output
_____no_output_____
###Markdown
Iterate by row:
###Code
for row in test:
print(row)
###Output
[0 6 8]
[5 3 0]
[7 5 9]
[9 3 6]
###Markdown
Iterate by index:
###Code
for i in range(len(test)):
print(test[i])
###Output
[0 6 8]
[5 3 0]
[7 5 9]
[9 3 6]
###Markdown
Iterate by row and index:
###Code
for i, row in enumerate(test):
print('row', i, 'is', row)
###Output
row 0 is [0 6 8]
row 1 is [5 3 0]
row 2 is [7 5 9]
row 3 is [9 3 6]
###Markdown
Use `zip` to iterate over multiple iterables.
###Code
test2 = test**2
test2
for i, j in zip(test, test2):
print(i,'+',j,'=',i+j)
###Output
[0 6 8] + [ 0 36 64] = [ 0 42 72]
[5 3 0] + [25 9 0] = [30 12 0]
[7 5 9] + [49 25 81] = [56 30 90]
[9 3 6] + [81 9 36] = [90 12 42]
|
notebooks/pipeline_img_registration_miri/jwst_level3_register_and_combine_miri_example.ipynb | ###Markdown
Image Registration and Combination using the JWST Level 3 Pipeline - MIRI example Stage 3 image (Image3, calwebb_image3) processing is intended for combining the calibrated data from multiple exposures (e.g., a dither or mosaic pattern) into a single distortion corrected product. Before being combined, the exposures receive additional corrections for the purpose of astrometric alignment, background matching, and outlier rejection. > **Inputs**: The inputs to calwebb_image3 will usually be in the form of an association (ASN) file that lists multiple associated 2D calibrated exposures to be processed and combined into a single product. The individual exposures should be calibrated ("cal") from calwebb_image2 processing. It is also possible use a single "cal" file as input, in which case only the resample and source_catalog steps will be applied.> **Outputs**: A resampled/rectified 2D image product with suffix "i2d" is created, containing the rectified single exposure or the rectified and combined association of exposures (the direct output of the resample step). A source catalog produced from the "i2d" product is saved as an ASCII file in "ecsv" format, with a suffix of "cat". If the outlier_detection step is applied, a new version of each input calibrated exposure product is created, which contains a DQ array that has been updated to flag pixels detected as outliers. This updated product is known as a CR-flagged product and the file is identified by including the association candidate ID in the original input "cal" file name and changing the suffix to "crf". Level 3 pipeline steps:**Tweakreg** (jwst.tweakreg, tweakreg_step, TweakRegStep)**Sky Match** (jwst.skymatch, skymatch_step, SkyMatchStep)**Outlier Detection** (jwst.outlier_detection, outlier_detection_step, OutlierDetectionStep)**Resample** (jwst.resample, resample_step, ResampleStep)**Source Catalog** (jwst.source_catalog, source_catalog_step, SourceCatalogStep)(for more information on individual steps see: https://jwst-pipeline.readthedocs.io/en/latest/jwst/package_index.html) Table of Contents:> * [Resources and Documentation](resources)> * [Create Association table](association)> * [Using Configuration Files](pipeline_configs)> * [Run Pipeline with Configuration Files](pipeline_with_cfgs)> * [Run Pipeline with Parameters Set Programmatically](pipeline_no_configs)> * [Run Individual Steps with Configuration Files](steps_with_config_files)> * [Run Individual Steps with Parameters Set Programmatically](steps_no_configs) *** 1. Resources and Documentation There are several different places to find information on installing and running the pipeline. This notebook will give a shortened description of the steps pulled from the detailed pipeline information pages, but to find more in-depth instructions use the links below. >1. JDox: https://jwst-docs.stsci.edu/display/JDAT/JWST+Data+Reduction+Pipeline>2. Installation page: http://astroconda.readthedocs.io/en/latest/releases.htmlpipeline-install>3. Detailed pipeline information: https://jwst-pipeline.readthedocs.io/en/latest/jwst/introduction.html>4. Help Desk (click on Pipeline Support): https://stsci.service-now.com/jwst?id=sc_category>5. GitHub README installation instructions: https://github.com/spacetelescope/jwst/blob/master/README.mdIf this is your first time trying to run the pipeline from a jupyter notebook, you need to install the jupyter notebook in your pipeline environment:>1. 
In a new terminal, change the directory to your working directory, terminal command: cd [your working directory]>2. Terminal command: source activate jwst_dev (or whatever your environment name for the pipeline is)>3. Terminal command: conda install jupyter>4. Terminal command: jupyter notebook. First, we must define environment variables for the CRDS server. This is necessary if you are not on the STScI internal network.
###Code
import os
os.environ['CRDS_SERVER_URL'] = 'https://jwst-crds.stsci.edu/'
os.environ['CRDS_PATH'] = '.'
import requests
from astropy.io import fits
from astropy.utils.data import download_file
from astropy.visualization import LogStretch, ImageNormalize, ManualInterval
import matplotlib.pyplot as plt
%matplotlib inline
# Import pipeline
from jwst import datamodels
from jwst.pipeline import Image3Pipeline
# from jwst.associations.asn_from_list import asn_from_list # perhaps can be done in the future
# Import individual pipeline steps
from jwst.tweakreg import tweakreg_step
from jwst.skymatch import skymatch_step
from jwst.outlier_detection import outlier_detection_step
from jwst.resample import resample_step
from jwst.source_catalog import source_catalog_step
###Output
_____no_output_____
###Markdown
Loading Data An example dataset to be used with this notebook is present in our Box repository. The cells below download: 1. the association file to be used as input to the pipeline, and 2. the FITS files listed in the association file.
###Code
box_path = 'https://stsci.box.com/shared/static/'
association_file_link = '2vlo7yqk00wmpu8x32ipg127i8lynpr2.json'
fits_box_links = ['1voplv0ooacf0eb0v8ebx6kbm8udxp56.fits',
'gqqjnx560jq8a71nbwsumh1nrfdz30ez.fits',
'hmvf8fykpkliyin89swtbfzul28nisqz.fits',
'9tqp5v8sfwwmgrc639000nvcs7inzfxg.fits']
def download_file(url):
"""Download into the current working directory the
file from Box given the direct URL
Parameters
----------
url : str
URL to the file to be downloaded
Returns
-------
download_filename : str
Name of the downloaded file
"""
response = requests.get(url, stream=True)
if response.status_code != 200:
raise RuntimeError("Wrong URL - {}".format(url))
download_filename = response.headers['Content-Disposition'].split('"')[1]
with open(download_filename, 'wb') as f:
for chunk in response.iter_content(chunk_size=1024):
if chunk:
f.write(chunk)
return download_filename
###Output
_____no_output_____
###Markdown
Download association file
###Code
association_url = os.path.join(box_path, association_file_link)
association_file = download_file(association_url)
print(association_file)
###Output
_____no_output_____
###Markdown
Download FITS files
###Code
# Grab a copy of the data used in this notebook from the Box repository
for boxfile in fits_box_links:
file_url = os.path.join(box_path, boxfile)
fits_file = download_file(file_url)
print("Downloading {}".format(fits_file))
###Output
_____no_output_____
###Markdown
*** 2. Create an Association Table An association table is a **json** file that should contain all of the files to be combined in a single mosaic. Files that cannot be combined (e.g. NIRCam shortwave and longwave data) must be placed in separate association tables. An example association table
###Code
{
"asn_type": "None",
"asn_rule": "DMS_Level3_Base",
"version_id": null,
"code_version": "0.10.1a.dev241",
"degraded_status": "No known degraded exposures in association.",
"program": "noprogram",
"constraints": "No constraints",
"asn_id": "a3001",
"target": "none",
"asn_pool": "none",
"products": [
{
"name": "jw10002_short",
"members": [
{
"expname": "jw10002001001_01101_00001_nrcb1_cal.fits",
"exptype": "science"
},
{
"expname": "jw10002001001_01101_00001_nrcb2_cal.fits",
"exptype": "science"
},
{
"expname": "jw10002001001_01101_00001_nrcb3_cal.fits",
"exptype": "science"
},
{
"expname": "jw10002001001_01101_00001_nrcb4_cal.fits",
"exptype": "science"
},
{
"expname": "jw10002001001_01102_00001_nrcb1_cal.fits",
"exptype": "science"
},
{
"expname": "jw10002001001_01102_00001_nrcb2_cal.fits",
"exptype": "science"
},
{
"expname": "jw10002001001_01102_00001_nrcb3_cal.fits",
"exptype": "science"
},
{
"expname": "jw10002001001_01102_00001_nrcb4_cal.fits",
"exptype": "science"
}
]
}
]
}
###Output
_____no_output_____
###Markdown
3. Using Configuration Files Configuration files are optional inputs for each step of the pipeline, as well as for the pipeline itself. These files list step-specific parameters, and can also be used to control which steps are run as part of the pipeline.You can get the full compliment of configuration files using the `collect_pipeline_cfgs` convenience function from the command line:>`$ collect_pipeline_cfgs ./`This creates a copy of all configuration files, for all steps and all JWST Instruments. Note that default parameters in the config files are not necessarily optimized for any particular instrument. Each of these configuration files can be customized to control pipeline behavior. For example, the configuration file for the Level 3 imaging pipeline is called **calwebb_image3.cfg** and contains a list (not necessarily in order) of the steps run as part of the Level 3 imaging pipeline. name = "Image3Pipeline" class = "jwst.pipeline.Image3Pipeline" [steps] [[tweakreg]] config_file = tweakreg.cfg skip = True [[skymatch]] config_file = skymatch.cfg [[outlier_detection]] config_file = outlier_detection.cfg [[resample]] config_file = resample.cfg [[source_catalog]] config_file = source_catalog.cfg save_results = true In this example, the ***tweakreg*** step will be skipped (`skip = True`), and the output from the ***source_catalog*** step will be saved (`save_results = True`).Note that **calwebb_image3.cfg** lists a configuration file for each pipeline step. You can customize a particular pipeline step by editing the parameters in its configuration file. For example, the source catalog configuration file, shown below, contains details on the kernel size and FWHM, as well as the signal to noise threshold to use in the identification of sources in the final combined image. name = "source_catalog" class = "jwst.source_catalog.SourceCatalogStep" kernel_fwhm = 3. kernel_xsize = 5. kernel_ysize = 5. snr_threshold = 3. npixels = 50 deblend = False 3.5 Running the pipeline on MIRI data The dataset being used in this notebook is a set of four files, each with 5 point sources, two files each at two different dither positions. The files can be combined by running them through the pipeline. The final output catalog has one extra position listed, if everything is run with defaults. The files can be found at https://stsci.box.com/s/to6mcfmyap8kn7z9ordmcyb1dcbh1ps2. This repository also includes rate files (output of calwebb_detector1) and the cal files (output of calwebb_image2) as well as the files used to create the simulations in case those are helpful.The association file is 'det_dithered_5stars.json' and has the following content:
###Code
{
"asn_type": "None",
"code_version": "0.9.19",
"asn_id": "a3001",
"products": [
{
"name": "det_dithered_5stars_tweak.fits",
"members": [
{
"expname": "det_image_1_MIRIMAGE_F770Wexp1_5stars_cal.fits",
"exptype": "science"
},
{
"expname": "det_image_1_MIRIMAGE_F770Wexp2_5stars_cal.fits",
"exptype": "science"
},
{
"expname": "det_image_2_MIRIMAGE_F770Wexp1_5stars_cal.fits",
"exptype": "science"
},
{
"expname": "det_image_2_MIRIMAGE_F770Wexp2_5stars_cal.fits",
"exptype": "science"
}
]
}
],
"asn_pool": "none",
"version_id": null,
"asn_rule": "DMS_Level3_Base",
"degraded_status": "No known degraded exposures in association.",
"program": "noprogram",
"constraints": "No constraints",
"target": "none"
}
###Output
_____no_output_____
###Markdown
The combined image is exported as: det_dithered_5stars_tweak_i2d.fits *** 4. Run Pipeline with Configuration Files Once you have edited the configuration files to customize the Level 3 pipeline, the command below will run the pipeline. This will generate a final source catalog ***cat.ecsv***, a final 2D image ***i2d.fits***, individual exposures with their DQ arrays flagged for outliers ***crf.fits***, and blot images from the outlier detection step ***blot.fits***.
###Code
m = Image3Pipeline.call(association_file, config_file='calwebb_image3.cfg')
###Output
_____no_output_____
###Markdown
Examine Outputs Combined Image
###Code
# Output combined image
combined_image_file = 'det_dithered_5stars_tweak_i2d.fits'
combined_image = fits.getdata(combined_image_file)
norm = ImageNormalize(combined_image, interval=ManualInterval(vmin=-25, vmax=25), stretch=LogStretch())
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(1, 1, 1)
im = ax.imshow(combined_image, origin='lower', norm=norm)
fig.colorbar(im)
plt.show()
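# Added, hedged extra check (not in the original notebook): the source catalog produced by the
# source_catalog step can be read with astropy. The file name below is an assumption based on
# the association product name plus the standard 'cat.ecsv' suffix; adjust it to the actual output.
from astropy.table import Table
source_catalog = Table.read('det_dithered_5stars_tweak_cat.ecsv', format='ascii.ecsv')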
###Output
_____no_output_____
###Markdown
*** 5. Run Pipeline with Parameters Set Programmatically You can also run the pipeline without relying on configuration files by setting parameters programmatically, and relying on the defaults in the pipeline.
###Code
m = Image3Pipeline()
# You can skip steps and change parameter values
m.tweakreg.skip = False
m.source_catalog.snr_threshold = 10
# run the pipeline with these parameters
m.run(association_file)
###Output
_____no_output_____
###Markdown
Combined Image
###Code
combined_image_file = 'det_dithered_5stars_tweak_i2d.fits' ## need to load the data again
combined_image = fits.getdata(combined_image_file) ## need to load data again
norm = ImageNormalize(combined_image, interval=ManualInterval(vmin=-25, vmax=25), stretch=LogStretch())
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(1, 1, 1)
im = ax.imshow(combined_image, origin='lower', norm=norm)
fig.colorbar(im)
plt.show()
###Output
_____no_output_____
###Markdown
*** 6. Run Individual Steps with Configuration Files
###Code
m = tweakreg_step.TweakRegStep.call(association_file, config_file='tweakreg.cfg')
m = skymatch_step.SkyMatchStep.call(m, config_file='skymatch.cfg')
m = outlier_detection_step.OutlierDetectionStep.call(m, config_file='outlier_detection.cfg')
m = resample_step.ResampleStep.call(m, config_file='resample.cfg', output_file='jw10002_short_step_by_step_i2d.fits')
m = source_catalog_step.SourceCatalogStep.call(m, config_file='source_catalog.cfg', output_file='jw10002_short_step_by_step_cat.ecsv')
combined_image_file = 'det_dithered_5stars_tweak_i2d.fits' ## need to load the data again
combined_image = fits.getdata(combined_image_file) ## need to load data again
norm = ImageNormalize(combined_image, interval=ManualInterval(vmin=-25, vmax=25), stretch=LogStretch())
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(1, 1, 1)
im = ax.imshow(combined_image, origin='lower', norm=norm)
fig.colorbar(im)
plt.show()
###Output
_____no_output_____ |
results/dcgan_vbn/DCGAN_vbn.ipynb | ###Markdown
DCGAN Imports
###Code
import numpy as np
import itertools
import time
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.utils import shuffle
import pdb
from tensorflow.examples.tutorials.mnist import input_data
from google.colab import files
import warnings
###Output
_____no_output_____
###Markdown
Load data
###Code
IMAGE_SIZE = 28
tf.reset_default_graph()
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=[])
X_train = mnist.train.images
X_train = (X_train - 0.5) / 0.5
def leaky_relu(X, leak=0.2):
f1 = 0.5 * (1 + leak)
f2 = 0.5 * (1 - leak)
return f1 * X + f2 * tf.abs(X)
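# Added note: the expression above is algebraically equivalent to tf.maximum(leak * X, X),
# i.e. a leaky ReLU with slope `leak` for negative inputs and slope 1 for positive inputs.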
###Output
WARNING:tensorflow:From <ipython-input-4-c4739963f5b0>:5: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-images-idx3-ubyte.gz
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/learn/python/learn/datasets/mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
###Markdown
Placeholder
###Code
x = tf.placeholder(tf.float32, shape=(None, IMAGE_SIZE, IMAGE_SIZE, 1))
noise = tf.placeholder(tf.float32, shape=(None, 1, 1, 100))
Training = tf.placeholder(dtype=tf.bool)
keep_prob = tf.placeholder(dtype=tf.float32, name='keep_prob')
###Output
_____no_output_____
###Markdown
Generator
###Code
def Generator(x, keep_prob=keep_prob, Training=True, reuse=False):
with tf.variable_scope('Generator', reuse=reuse):
W = tf.truncated_normal_initializer(mean=0.0, stddev=0.02)
b = tf.constant_initializer(0.0)
g_init = tf.random_normal_initializer(1., 0.2)
out_1 = tf.layers.conv2d_transpose(x, 256, [7, 7], strides=(1, 1), padding='valid', kernel_initializer=W, bias_initializer=b)
out_1 = tf.layers.dropout(out_1, keep_prob)
out_1 = tf.contrib.gan.features.VBN(out_1, gamma_initializer=g_init)(out_1)
out_1 = leaky_relu(out_1, 0.2)
out_2 = tf.layers.conv2d_transpose(out_1, 128, [5, 5], strides=(2, 2), padding='same', kernel_initializer=W, bias_initializer=b)
out_2 = tf.layers.dropout(out_2, keep_prob)
out_2 = tf.contrib.gan.features.VBN(out_2, gamma_initializer=g_init)(out_2)
out_2 = leaky_relu(out_2, 0.2)
out_3 = tf.layers.conv2d_transpose(out_2, 1, [5, 5], strides=(2, 2), padding='same', kernel_initializer=W, bias_initializer=b)
out_3 = tf.nn.tanh(out_3)
return out_3
###Output
_____no_output_____
###Markdown
Discriminator
###Code
def Discriminator(x, keep_prob=keep_prob, Training=True, reuse=False):
with tf.variable_scope('Discriminator', reuse=reuse):
W = tf.truncated_normal_initializer(mean=0.0, stddev=0.02)
b = tf.constant_initializer(0.0)
d_init = tf.random_normal_initializer(1., 0.2)
out_1 = tf.layers.conv2d(x, 128, [5, 5], strides=(2, 2), padding='same', kernel_initializer=W, bias_initializer=b)
out_1 = tf.layers.dropout(out_1, keep_prob)
out_1 = tf.contrib.gan.features.VBN(out_1, gamma_initializer=d_init)(out_1)
out_1 = leaky_relu(out_1, 0.2)
out_2 = tf.layers.conv2d(out_1, 256, [5, 5], strides=(2, 2), padding='same', kernel_initializer=W, bias_initializer=b)
out_2 = tf.layers.dropout(out_2, keep_prob)
out_2 = tf.contrib.gan.features.VBN(out_2, gamma_initializer=d_init)(out_2)
out_2 = leaky_relu(out_2, 0.2)
logits = tf.layers.conv2d(out_2, 1, [7, 7], strides=(1, 1), padding='valid', kernel_initializer=W, bias_initializer=b)
out_3 = tf.nn.sigmoid(logits)
return out_3 ,logits
###Output
_____no_output_____
###Markdown
Parameters
###Code
EPOCH = 20
BATCH_SIZE = 200
keep_prob_train = 0.6
BETA1 = 0.5
lr = 0.0002
label_smooth = 1
###Output
_____no_output_____
###Markdown
Loss function
###Code
# Generate images
G_noise = Generator(noise, keep_prob, Training)
# D
D_real, D_real_logits = Discriminator(x, Training)
D_fake, D_fake_logits = Discriminator(G_noise, Training, reuse=True)
# D real loss
Dis_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_real_logits, labels=tf.multiply(tf.ones_like(D_real_logits), (label_smooth))))
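# Added note: with label_smooth = 1 the real labels above are plain ones (no smoothing);
# choosing e.g. label_smooth = 0.9 would apply one-sided label smoothing to the real labels.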
# D generated image loss
Dis_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_fake_logits, labels=tf.zeros([BATCH_SIZE, 1, 1, 1])))
# D total loss
Dis_loss = Dis_loss_real + Dis_loss_fake
# G loss
Gen_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=D_fake_logits, labels=tf.ones([BATCH_SIZE, 1, 1, 1])))
# get all variables
tf_vars = tf.trainable_variables()
Dis_vars = [var for var in tf_vars if var.name.startswith('Discriminator')]
Gen_vars = [var for var in tf_vars if var.name.startswith('Generator')]
# optimise
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
D_optim = tf.train.AdamOptimizer(lr, beta1=BETA1).minimize(Dis_loss, var_list=Dis_vars)
G_optim = tf.train.AdamOptimizer(lr, beta1=BETA1).minimize(Gen_loss, var_list=Gen_vars)
###Output
WARNING:tensorflow:From <ipython-input-6-11069f37253b>:8: conv2d_transpose (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d_transpose instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From <ipython-input-6-11069f37253b>:9: dropout (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dropout instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/contrib/gan/python/features/python/virtual_batchnorm_impl.py:227: to_float (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From <ipython-input-7-c39997bdce1a>:8: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv2d instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
###Markdown
Training
###Code
saver = tf.train.Saver()
num_examples = len(X_train)
k = num_examples % BATCH_SIZE
num_examples = num_examples - k
G_loss = []
D_loss = []
D_r = []
D_f = []
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(EPOCH):
start = time.time()
X_train = shuffle(X_train)
for offset in range(0, num_examples, BATCH_SIZE):
train_d = True
train_g = True
end = offset + BATCH_SIZE
batch = X_train[offset:end]
noise_ = np.random.normal(0, 1, (BATCH_SIZE, 1, 1, 100))
#calculate loss
d_ls = sess.run(Dis_loss,{noise: noise_, x: batch, Training: False})
g_ls = sess.run(Gen_loss,{noise: noise_, x: batch, Training: False})
      # global loss
# d_ls_real, d_ls_fake = sess.run([Dis_loss_real, Dis_loss_fake], {noise: noise_,x: batch, Training: False})
d_r = sess.run([D_real], {x: batch, Training: False})
d_f = sess.run([D_fake], {noise: noise_, Training: False})
d_r = np.mean(d_r)
d_f = np.mean(d_f)
#break
D_r.append(d_r)
D_f.append(d_f)
D_loss.append(d_ls)
G_loss.append(g_ls)
      # skip updating a network when its loss is already much lower than the other's
      if g_ls * 2 < d_ls:
        train_g = False
      if d_ls * 2 < g_ls:
        train_d = False
      # update D
      if train_d:
        sess.run(D_optim, {x: batch, noise: noise_, keep_prob: keep_prob_train, Training: True})
      # update G
      if train_g:
        sess.run(G_optim, {noise: noise_, x: batch, keep_prob: keep_prob_train, Training: True})
end = time.time()
elapsed = end - start
#break
    if ((i + 1) % 2 == 0) or (i == 0):
print("EPOCH {} ...".format(i+1))
print("G_loss = {:.3f} D_loss = {:.3f} Time used = {:.3f}".format(g_ls, d_ls,elapsed))
print()
saver.save(sess, './lenet')
print("Model saved")
###Output
EPOCH 1 ...
G_loss = 1.767 D_loss = 0.845 Time used = 113.370
EPOCH 2 ...
G_loss = 1.042 D_loss = 0.782 Time used = 107.270
EPOCH 4 ...
G_loss = 0.373 D_loss = 1.422 Time used = 115.721
EPOCH 6 ...
G_loss = 0.850 D_loss = 0.842 Time used = 118.628
EPOCH 8 ...
G_loss = 1.247 D_loss = 0.807 Time used = 117.079
EPOCH 10 ...
G_loss = 1.723 D_loss = 1.147 Time used = 115.210
EPOCH 12 ...
G_loss = 1.179 D_loss = 0.769 Time used = 135.254
EPOCH 14 ...
G_loss = 2.002 D_loss = 0.944 Time used = 243.581
EPOCH 16 ...
G_loss = 1.999 D_loss = 0.973 Time used = 233.918
EPOCH 18 ...
G_loss = 1.757 D_loss = 0.714 Time used = 296.843
EPOCH 20 ...
G_loss = 1.679 D_loss = 0.891 Time used = 385.637
Model saved
###Markdown
D real and fake loss
###Code
D_r_mean = []
D_f_mean = []
N = len(D_r)
length = N // (EPOCH)
for k in range(0,EPOCH):
D_r_mean.append( np.mean(D_r[(k+1)*length -10 : (k+1)*length + 10] ))
D_f_mean.append( np.mean(D_f[(k+1)*length -10 : (k+1)*length + 10] ))
print("Average D real loss")
print(D_r_mean)
print("Average D fake loss")
print(D_f_mean)
index = np.arange(1,EPOCH+1,1)
f_d = plt.figure(1)
plt.plot(index, D_r_mean, 'r',label='D Real')
plt.plot(index, D_f_mean, 'b',label='D Fake')
plt.ylabel("D Loss")
plt.xlabel("EPOCH")
plt.legend(framealpha=1, frameon=True)
plt.show()
f_d.savefig('Real and fake Loss.png', dpi=600)
files.download('Real and fake Loss.png')
###Output
Average D real loss
[0.6111774, 0.65038663, 0.5889667, 0.58471406, 0.5752351, 0.63564235, 0.6320566, 0.67431223, 0.6234814, 0.5919114, 0.63561, 0.61221105, 0.63412607, 0.5786182, 0.5541294, 0.5995804, 0.6481639, 0.65736026, 0.6014099, 0.6721688]
Average D fake loss
[0.26092383, 0.23363683, 0.24135864, 0.26581687, 0.2586019, 0.2595662, 0.28155565, 0.27300572, 0.25612384, 0.2790418, 0.24700737, 0.28945822, 0.28991547, 0.2380435, 0.21027383, 0.24724276, 0.2812943, 0.24410598, 0.2844206, 0.31637686]
###Markdown
Plot loss
###Code
d_s_mean = []
g_s_mean = []
N = len(D_loss)
length = N // (EPOCH)
for k in range(0,EPOCH):
d_s_mean.append( np.mean(D_loss[(k+1)*length -10 : (k+1)*length + 10] ))
g_s_mean.append( np.mean(G_loss[(k+1)*length -10 : (k+1)*length + 10] ))
print("Average D loss")
print(d_s_mean)
print("Average G loss")
print(g_s_mean)
index = np.arange(1,EPOCH+1,1)
f = plt.figure(1)
plt.plot(index, d_s_mean, 'r',label='D Loss')
plt.plot(index, g_s_mean, 'b',label='G Loss')
plt.ylabel("Loss")
plt.xlabel("EPOCH")
plt.legend(framealpha=1, frameon=True)
plt.show()
f.savefig('Loss.png', dpi=600)
files.download('Loss.png')
###Output
Average D loss
[0.8753274, 0.7545825, 0.9339137, 1.0064814, 1.0097498, 0.822755, 0.9321219, 0.7622112, 0.8681854, 1.0316948, 0.8052949, 0.9875237, 0.90757483, 0.9504061, 0.95082915, 0.9013006, 0.86372834, 0.76692235, 1.086581, 0.8737558]
Average G loss
[1.5216542, 1.5928601, 1.6921208, 1.6420473, 1.6956627, 1.5057422, 1.553818, 1.4169589, 1.5873225, 1.614451, 1.5550375, 1.530342, 1.4578869, 1.7069432, 1.8458618, 1.6338545, 1.4870266, 1.5901834, 1.6885693, 1.3255904]
###Markdown
Visualization
###Code
def plot_images(images, save=True):
assert len(images) == 100
img_shape = (28, 28)
    # Create a figure with 10x10 sub-plots.
fig, axes = plt.subplots(10, 10)
fig.subplots_adjust(hspace=0.1, wspace=0.1)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
plt.show()
if save:
fig.savefig('G images_white.png', dpi=600)
files.download('G images_white.png')
n = np.random.normal(0.0, 1.0, [100,1,1,100]).astype(np.float32)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
images = sess.run(G_noise, {noise: n, Training: False})
plot_images(images)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from ./lenet
|
sample1.2_python_tutorial.ipynb | ###Markdown
Sample 1.2 for Astrostatistics
This sample displays some simple but important code examples from which you can quickly learn how to program in Python, in the context of astronomy.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Here, I show frequently used data types in Python.
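As a small supplement (not part of the original cell), the sketch below shows array slicing and boolean masking, which are used repeatedly later in this notebook:

```python
import numpy as np

a = np.arange(10)           # array([0, 1, ..., 9])
print(a[2:7])               # slicing: elements 2..6
print(a[::2])               # every second element
mask = (a > 3) & (a < 8)    # elementwise boolean mask
print(a[mask])              # -> [4 5 6 7]
```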
###Code
'''
Data type
'''
#list
a = [1, 3., 'rrr']
print(a)
print(type(a))
print(np.shape(a))
a.append(2.)
print(np.shape(a))
a.append([2,3.,'rr'])
print(a)
#ndarray
a = [1, 3., 'rrr']
b = np.array(a)
print(type(b))
print(b)
#array operation
a = np.array([[1., 2., 3.],[4.,5.,6.]])
print(a)
print(np.shape(a))
b = a.T
print(b)
print(np.shape(b))
c = a*a
print(c)
d = np.dot(a,b**2)
print(d)
#tuple
a = (23, [34,1.,7],'ee')
print(type(a))
#dictionary
d = {'one':1, 'two':2, 'three':3}
print(d)
print(d['one'])
d['four'] = 4
print(d)
d['three'] = 33
print(d)
###Output
{'one': 1, 'two': 2, 'three': 3}
1
{'one': 1, 'two': 2, 'three': 3, 'four': 4}
{'one': 1, 'two': 2, 'three': 33, 'four': 4}
###Markdown
How to write a loop in Python
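Besides `range`, a few other loop patterns come up often; the following is a small supplementary sketch, not part of the original tutorial:

```python
# loop over a list together with the index of each element
for i, name in enumerate(['Vega', 'Sirius', 'Deneb']):
    print(i, name)

# loop over two sequences in parallel
for name, mag in zip(['Vega', 'Sirius'], [0.03, -1.46]):
    print(name, mag)

# while loop
n = 0
while n < 3:
    n += 1
print(n)
```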
###Code
'''
for loop
'''
for i in range(10):
print(i)
print(' ')
for i in range(0,10,2):
print(i)
###Output
0
1
2
3
4
5
6
7
8
9
0
2
4
6
8
###Markdown
if...else...
###Code
'''
if...else
'''
for i in range(10):
if np.mod(i,2)==0:
print(i/2)
elif np.mod(i,3)==0:
print(i/3)
else:
print('***')
###Output
0.0
***
1.0
1.0
2.0
***
3.0
***
4.0
3.0
###Markdown
A quick demo of how to draw plots in Python. `%matplotlib inline` draws plots inline in the notebook, while `%pylab` draws them in a pop-up window. Comment out one of the two magic lines at the top of the next cell to choose between the two plotting modes.
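If you are unsure which mode is currently active, you can query the backend; switching is done with the corresponding magic command (a minimal sketch; the magics only work inside Jupyter/IPython, and the exact backend names depend on your installation):

```python
import matplotlib
print(matplotlib.get_backend())  # e.g. an inline backend or 'Qt5Agg'

# inside the notebook you would switch with one of:
# %matplotlib inline   # draw figures inline
# %matplotlib qt       # draw figures in a pop-up window (requires a Qt binding)
```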
###Code
'''comment either of the two lines to choose one mode of plotting'''
#%matplotlib inline
%pylab
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rc('xtick', labelsize=12)
matplotlib.rc('ytick', labelsize=12)
x = np.random.rand(10)
x = np.sort(x)
y = np.arctan(x)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y,'k+',markersize=12)
ax.plot(x,y,'r--')
ax.set_xlabel('X',fontsize=12)
ax.set_ylabel(r'$\arctan{\theta}$',fontsize=12)
fig.show()
x = np.random.normal(0.,10.,size=10000)
xgrid = np.arange(-100,105,5)
xcenter = (xgrid[1:]+xgrid[:-1])/2.
hx,xedge = np.histogram(x,bins=xgrid)
fig = plt.figure(figsize=[14,6])
ax = fig.add_subplot(121)
ax.plot(x,'k.')
ax = fig.add_subplot(122)
ax.step(xedge[1:],hx,'k-')
ax.plot(xcenter,hx,'r-')
ax.set_xlabel(r'$y$',fontsize=12)
fig.show()
###Output
_____no_output_____
###Markdown
Two frequently used figure types are the contour plot and the density plot (image). Here I show these two types of plots in an oversimplified way. You can look up the detailed documentation to learn how to polish them for publication.
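A related option worth knowing is `pcolormesh`, which takes the bin edges directly and therefore avoids the `extent`/orientation bookkeeping needed with `imshow`. A minimal sketch with a 2D histogram similar to the one below (the covariance values here are arbitrary example numbers):

```python
import numpy as np
import matplotlib.pyplot as plt

xy = np.random.multivariate_normal([1., 1.], [[1., 0.25], [0.25, 0.2]], 100000)
xgrid = np.arange(-2., 4., 0.2)
ygrid = np.arange(-2., 4., 0.2)
h, xedge, yedge = np.histogram2d(xy[:, 0], xy[:, 1], bins=[xgrid, ygrid])

fig, ax = plt.subplots()
m = ax.pcolormesh(xedge, yedge, h.T)  # transpose because histogram2d returns h[x, y]
fig.colorbar(m)
ax.set_xlabel(r'$x_1$', fontsize=20)
ax.set_ylabel(r'$x_2$', fontsize=20)
fig.show()
```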
###Code
import scipy.stats as stats
import time
start = time.time()
mu = np.array([1.,1.])
s1 = 1.
s2 = 0.2
rho = 0.8
sig = np.array([[s1, rho*np.sqrt(s1*s2)],[rho*np.sqrt(s1*s2),s2]])
#generate random numbers from 2D normal distribution
xx = np.random.multivariate_normal(mu,sig,100000)
xgrid = np.arange(-2.,4.,0.2)
ygrid = np.arange(-2.,4.,0.2)
xcenter = (xgrid[0:-1]+xgrid[1:])/2.
ycenter = (ygrid[0:-1]+ygrid[1:])/2.
#make 2d histogram
hxx,xedge,yedge = np.histogram2d(xx[:,0],xx[:,1],bins=[xgrid,ygrid])
fig = plt.figure(figsize=[14,6])
plt.set_cmap('jet')
ax = fig.add_subplot(121)
ax.contour(xcenter,ycenter,hxx.T)
ax.set_xlabel(r'$x_1$',fontsize=20)
ax.set_ylabel(r'$x_2$',fontsize=20)
ax = fig.add_subplot(122)
e = ax.imshow(hxx.T,extent=[xcenter[0],xcenter[-1],ycenter[-1],ycenter[0]])
plt.colorbar(e)
ax.set_ylim([ycenter[0],ycenter[-1]])
ax.set_xlabel(r'$x_1$',fontsize=20)
ax.set_ylabel(r'$x_2$',fontsize=20)
fig.show()
print('elapsed %(s).3f sec' % {'s': time.time() - start})
###Output
elapsed 0.129 sec
###Markdown
From this cell onwards, I demonstrate how to access data files.
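Besides the approaches shown below, NumPy itself can read simple whitespace-separated ASCII tables; a minimal sketch (assuming every row of the file has the same number of columns) is:

```python
import numpy as np

# dtype=None lets genfromtxt guess a type for each column (numbers and strings)
data = np.genfromtxt('Riess1998_Tab5.txt', dtype=None, encoding=None)
print(data.shape, data.dtype)
```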
###Code
'''
Read files
'''
f = open('Riess1998_Tab5.txt')
# line = f.readline()
# print(line)
for line in f:
cols = line.split()
print(cols)
f.close()
'''
read ASCII file in a more comfortable way
'''
from astropy.table import Table
data = Table.read('Riess1998_Tab5.txt',format="ascii.no_header")
print(data)
'''
write ASCII file
'''
import csv
rows = [[32, 33, 34],['a','b','c']]
with open("test.csv", "wt", newline="") as f:  # newline="" avoids extra blank lines when writing CSV
c_w = csv.writer(f, quoting=csv.QUOTE_NONE)
c_w.writerows(rows)
'''
Read a fits image file
'''
import astropy.io.fits as fits
def readImage(filename):
hdulist = fits.open(filename)
im = hdulist[0].data.copy()
hdulist.close()
return im
im = readImage('image.fits')
fig = plt.figure()
plt.set_cmap('gray')
ax = fig.add_subplot(111)
e = ax.imshow(np.log(im))
plt.colorbar(e)
#ax.set_ylim()
fig.show()
'''
Write an image from a fits file
'''
import numpy
from astropy.io import fits as pyfits
fitsfile = pyfits.open('image.fits', mode='update')
image = fitsfile[0].data
header = fitsfile[0].header
'''
write image to a fits file
'''
pyfits.writeto('image_2.fits',image,header)
'''
Read table data from a fit file
'''
def loadData(filename):
'''
Read fits data
'''
tchfits = fits.open(filename)
tabl = tchfits[1].data.copy()
return tabl
filename = 'ComaCluster.fits'
coma = loadData(filename)
cz = coma.czA[(coma.czA<20000) & (coma.czA>0)]
sig_cz = np.var(cz)
print(np.sqrt(sig_cz))
zgrid =np.arange(2000.,12000.,750.)
h, xedge = np.histogram(cz, bins=zgrid)
fig = plt.figure(figsize=[4,4])
ax = fig.add_subplot(111)
ax.plot(zgrid[0:-1]+250.,h,'k*-')
ax.set_xlabel('redshift (km/s)')
fig.show()
from astropy.table import Table
filename = 'ComaCluster.fits'
coma = Table.read(filename)
print(coma)
cz = coma['czA'][(coma['czA']<20000) & (coma['czA']>0)]
sig_cz = np.var(cz)
print(np.sqrt(sig_cz))
zgrid =np.arange(2000.,12000.,750.)
h, xedge = np.histogram(cz, bins=zgrid)
fig = plt.figure(figsize=[4,4])
ax = fig.add_subplot(111)
ax.plot(zgrid[0:-1]+250.,h,'k*-')
ax.set_xlabel('redshift (km/s)')
fig.show()
'''
read a spectrum fits file
'''
filename = '351110104.fits'
tchfits = fits.open(filename)
log_wv0 = tchfits[0].header['CRVAL1']
log_dwv = tchfits[0].header['CD1_1']
sp = tchfits[0].data.copy()
N = len(sp[0,:])
wv = 10**(log_wv0+np.arange(0,N,1)*log_dwv)
fig = plt.figure(figsize=[10,4])
ax = fig.add_subplot(111)
ax.plot(wv,sp[0,:],'k-')
ax.set_xlim([4000,9000])
fig.show()
'''
write table fits
'''
from astropy.table import Table
t = Table([[1, 2], [4, 5], [7, 8]], names=('a', 'b', 'c'))
t.write('table1.fits', format='fits')
c1 = fits.Column(name='a', array=np.array([1, 2]), format='K')
c2 = fits.Column(name='b', array=np.array([4, 5]), format='K')
c3 = fits.Column(name='c', array=np.array([7, 8]), format='K')
t = fits.BinTableHDU.from_columns([c1, c2, c3])
t.writeto('table2.fits')
###Output
_____no_output_____ |
00_read_data.ipynb | ###Markdown
Read data
Get file paths by year
###Code
COMMENTS_DIR = '../data/comments/by_date/'
YEAR = 2019
# export
def get_comments_paths_year(COMMENTS_DIR, YEAR):
comments_dir_path = Path(COMMENTS_DIR)
comments_paths = list(comments_dir_path.glob(f'{YEAR}*.csv'))
return comments_paths
get_comments_paths_year(COMMENTS_DIR, '2019')
assert len(get_comments_paths_year(COMMENTS_DIR, '2019')) == 48
assert len(get_comments_paths_year(COMMENTS_DIR, '2020')) == 48
comment_paths_year = get_comments_paths_year(COMMENTS_DIR, YEAR)
###Output
_____no_output_____
###Markdown
by subreddit
###Code
COMMENTS_DIR_SUBR = '../data/comments/subr/'
SUBR = 'conspiracy'
# export
def get_comments_paths_subr(COMMENTS_DIR_SUBR, SUBR):
comments_subr_dir_path = Path(COMMENTS_DIR_SUBR)
comments_subr_paths = list(comments_subr_dir_path.glob(f'{SUBR}*.csv'))
return comments_subr_paths
comments_paths_subr = get_comments_paths_subr(COMMENTS_DIR_SUBR, SUBR)
###Output
_____no_output_____
###Markdown
Read comments
Read a single comments `csv` file
###Code
fpath = comment_paths_year[0]
# export
def read_comm_csv(fpath):
try:
# removed because new method for writing retrieved data out already does date conversion beforehand
# date_parser = lambda x: pd.to_datetime(x, unit='s', errors='coerce')
comments = pd.read_csv(
fpath,
usecols=['id', 'created_utc', 'author', 'subreddit', 'body'],
dtype={
'id': 'string',
# 'created_utc': int, s. above
'author': 'string',
'subreddit': 'string',
'body': 'string'
},
parse_dates=['created_utc'],
# date_parser=date_parser,
low_memory=False,
lineterminator='\n'
)
comments_clean = comments\
.dropna()\
.drop_duplicates(subset='id')
return comments_clean
except FileNotFoundError:
print(f'{fpath} not found on disk')
except pd.errors.EmptyDataError:
print(f'{fpath} is empty')
comments = read_comm_csv(fpath)
comments.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 200000 entries, 0 to 199999
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 author 200000 non-null string
1 body 200000 non-null string
2 created_utc 200000 non-null datetime64[ns]
3 id 200000 non-null string
4 subreddit 200000 non-null string
dtypes: datetime64[ns](1), string(4)
memory usage: 9.2 MB
###Markdown
Read multiple comment `csv` files
###Code
# export
def read_comm_csvs(fpaths: list):
comments_lst = []
for fpath in fpaths:
comments = read_comm_csv(fpath)
comments_lst.append(comments)
comments_concat = pd.concat(
comments_lst,
axis=0,
ignore_index=True
)
return comments_concat
comments = read_comm_csvs(comment_paths_year)
comments.value_counts('subreddit')
###Output
_____no_output_____
###Markdown
Inspect comments
###Code
COMMENTS_DIR = '../data/comments/by_date/'
YEAR = 2019
fpaths = get_comments_paths_year(COMMENTS_DIR, '2019')
comments = read_comm_csvs(fpaths)
pd.set_option('display.max_rows', 500)
pd.options.display.max_colwidth = 200
lex = 'spreader'
hits = (comments
.filter(['body'])
.query('body.str.contains(@lex)')
.head(100)
)
# hits.style.set_properties(**{'text-align': 'left'})
hits_tok = [hit.split(' ') for hit in hits.body]
for comm in hits_tok:
for i, tok in enumerate(comm):
        if tok == lex:
            # note: a comment is printed once per exact occurrence of the lexeme,
            # so a comment containing it several times appears several times below
            print(" ".join(comm))
# print(comm[i-1], tok, comm[i+1])
###Output
No, record two separate takes of the same vocals. Copying and pasting vocals to two tracks will only make it louder. Recording two separate takes and making one lower in volume will give the ‘Doubler’ effect that is widely used and adds width and depth to vocals. Have both separate tracks go to a vocal mix bus and add the spreader in that.
And here I was going to ask if it fitted into your garage or did you have to take the salt spreader off ...🤣
They definitely don’t “age”—I had one running for my entire Interactions run and it worked like a champ the entire time.
When you say they aren’t producing much mana, what do you mean? Do they fill up when you check them with the wand after they eat a bucket? I would look into the rest of your Botania chain after the petropetunia to see if there’s an issue elsewhere... at one point I had put a tank in the line of sight of my mana spreader and it caused me to waste a bunch of mana, so my experience is that there may be issues elsewhere.
I have yet to see a monohull without a backstay. The swept back spreader puts a slight bend in the mast which helps in performance. Downwind does suck which is why you use a spinnaker instead.
System agent chip is under palmrest and covered by the heat spreader on the left side so likely its that.
Monitor has a Cooper heat spreader on it.
It does have one. It has a copper heat spreader
As a snow plow and spreader operator, I could buy a mansion and yacht after working one Russian winter.
I slipped in it and it spreader everywhere
Toast nerd here. I only toast one slice at a time. Butter must be slightly pre-softened. Toaster must be on ideal setting. Must use special wide spreader (narrow "butter knife" will not do. ) when the toast pops up, it's CRITICAL to spread butter promptly. Also helps if plate is warmed. Mmmmm... toast..
Saryn=cancer spreader
An electrical insulator of some sort. Perhaps a wire spreader to keep house or barn electrical service entrance wires with long runs from shorting during high wind. Pretty small wire size . Maybe a barn or water well pump electrical feed. Intriguing
I think you mean spatula. Specula is the plural of speculum, which a tool used to open vaginas for examination.
But hey, if you wanna use a cooch spreader to smooth your mousse, go for it.
Homemade butter separator dish. You would pour homemade butter in the dish and the spout strains off the excess liquid from the solids. You could use it as a butter dish afterwards. The hole would have been a place to put a spreader knife.
I remember my grandmother having something similar. The mini butter knife had a small cork on the end that fit into the hole. The hole was closer to the handle if I recall. That's the only thing I can think of.
That’s horrible news! This means you need to find a new wife!! Let this be a valuable lesson to everyone... always cover all your ass spreader bases before nuptials.
Hi /u/Hazmataz13,
Unfortunately your submission has been removed for the following reason(s):
* **All selling and swapping posts must have a timestamp - see [here](https://www.reddit.com/r/HardwareSwapUK/wiki/timestamps). These must be dated within 5 days.**
If you feel this was removed in error or are unsure about why this was removed then please [modmail](https://www.reddit.com/message/compose?to=%2Fr%2FHardwareSwapUK) us.
**Please don't comment with the timestamps or edit the post as the mods won't see this. Please make a new post.**
---
**Title:** [[SG] [H] 8gb ddr4 2800[W] £30 PayPal](/r/HardwareSwapUK/comments/bf4tjo/sg_h_8gb_ddr4_2800w_30_paypal/)
**Username:** /u/Hazmataz13
**Original Post:**
it is single channel and overclocked easily from 2400 to 2800.
also comes with a custom heat spreader that can be removed if wanted.
*I am a bot, and this action was performed automatically. Please [contact the moderators of this subreddit](/message/compose/?to=/r/HardwareSwapUK) if you have any questions or concerns.*
I think we may have covered this in another thread? (I had the emergency leaking-tarp situation that lead to a shitty night of double hammocking.)
I have heard a few stories of people that do this on purpose, but I cannot imagine it working well. The spreader bars are a good option, however, if you want to fit two hammocks under one large tarp, which is something I honestly do want to try.
Hammocks are so comfortable because you're not on the ground, and because you have free range to kick/roll/stretch etc and all you have to worry about is not tipping out. The one time I slept in a hammock with my S/O, it was awful. Our shoulders were shoved against the other, an elbow always below you or jabbing into the other, and oh...don't try to get out to use the bathroom!
Tl;DR: Invest in separate hammocks and camp together with them. If you do this the first few nights you may want to venture into solo hammocks, and from there you have tons of options!
If I need some low cost equipment, I start off with some old Deutz Fahr tractors that are on ModHub. There's a couple that are admittedly low horsepower, but cost under $20k. Then get a small tipper, Amazone seeder... there's a cheap fertilizer spreader on ModHub as well. I forget the brand but it's white and spreads via a small pipe on the bottom that goes back and forth quickly. For a harvester, I'll usually go with the small Rostelmash or the old Case.
If I play a map with large fields like Marwell, I'll usually use the government subsidy sign to afford some big stuff to start with
Reinstall the heating system, doing it properly.
The radiant loop in the kitchen needs work...heat spreader plates installed, a proper radiant barrier \[not just foil underlayment\], insulation, etc. Doing so would save a fortune on heating costs, to the point it'll pay for itself in a year or two.
> *"Tell the boys not to wrap the straps around the gussets."*
How bout' that rigging angle?
Or maybe a spreader bar?
Jesus this sub is just turning into place for wood framers to post their favorite ways to kill people.
Reminder guys, you keep doing shit like this that you don't know how to do right and,
"Welp, we've always done it this way and this is the first time we killed anybody".
Isn't a good defense in court.
Cheers for this. Just a quick Q: I've seen that some people have to remove the heat spreader on their RAM modules for the heatsink to fit so did you have to remove yours too? If not it did it have to do with the mobo that you used?
Manure spreader
oh yeah? lol how so oh false rumor spreader and facebook harasser of a guy that parents that just got divorced
For example, I use tape saturation and compressor on the master bus nearly every mix. Sometimes I use frequency-based stereo spreader kind of thing. Sometimes I use an EQ. In a nutshell, everything you can imagine. Actually, I don't want to let things to the mastering engineer. He just fixes me, balances the master volume and brings his own color a little bit but doesn't try to change the song.
Someone at DE is probably feeling sad that despite their efforts the community hasn't caught on yet. My Gaze has actually become my go-to secondary for a couple of months ever since the chain buff. I built it to be a status spreader for CO and anti-nullifier, but it's more effective than I expected. Corrosive straight up kills specific targets while gas deletes crowds
I understand what you are saying. If I was single again for whatever reason I'd probably want the exact same thing as you are talking about. I agree that swingers are not for me, neither would be polyamory. With swingers, there are too many permutations for me to keep track of to be comfortable, because from a bacteria and body chemistry standpoint, you're not just having sex with them, you're also having sex with everyone they've had sex with.
This is going to be a really weird mentality for people to grasp, but let me put it this way. If bacteria from a third party is transmitted to me sexually, that feels like a violation. If I were to get involved in something like an orgy, that's different, because that's with the understanding that anything that anyone has in that room is going to get back around to me. If I'm having sex with one person however and they're having different partners regularly, from a viral/bacterial standpoint that's no different than being involved in an orgy even though I never agreed to that.
Your own personal reasons may be different, and I know that swingers go to a lot of effort to keep themselves and their partners safe. That being said, I know how bad some people's habits are, even when they are trying to be careful. Just like how an entire office building gets sick just because all it takes it one jackass to answer a communal company phone because it rings right after they just sneezed into their hand.
Our culture still shakes hands upon meeting people, even though it's a proven spreader of disease. It's a holdover from showing someone that you weren't going to attack them with a weapon, which is obsolete since the invention of small-scale guns. We still do it though, because it's culture and part of good manners, and get sick as a result.
How this applies to sex and swinging is that you may have every intention of keeping yourself and others safe, you may get tested regularly and use protection during sex, even including things like dental dams. That being said, if we can't stop doing things like a handshake because it would be rude, how are people going to stop themselves from making mistakes in a far more intimate and less rational situation?
I'm sure that will offend many swingers who are super safe and would never put others in danger, but that is my personal reason for avoiding it. I can masturbate all I want to orgy porn, but frankly the idea of actually having sex with a bunch of different people disgusts me on an intellectual level. I would far rather have a whole bunch of people in a room masturbating together and not touching than actually having sex. It doesn't matter how careful people are, to err is human and mistakes will happen. That's why despite having a high sex drive beyond what any partner has ever been able to match, I stay monogamous and just use masturbation to take care of the extra urges.
Which manure spreader and map is this?
B rank
These stages use their concepts well, but are held back by either poor stage design or lackluster concepts in general.
17. Fancy Spew
This stage would be the first to introduce spreaders, but it certainly wouldn't be the last. With the giant spreader and its cohorts always creating more of both sides ink, it was never truly over for one side. However, that also gave a sense of... staleness. As if there was nothing you could do about the spreaders unless you literally spent the whole match watching them like a hawk. Not really that fun. Thus, B is where it resides.
16. The Chronicles of Rolonium
Rolonium finally got its debut outside of octo canyon in the time travel v teleportation splatfest, and it was... fine? I guess. The main issue with this one was that the rolonium didnt play a massive role in this one. Yeah sure, people focused it as soon as it recharged, but for the most part, its role in most battles was suprisingly weak. It didnt do too much damage, it wasn't too hard to clean up after, and it took a long time to recharge. Overall, not a bad map, but it didn't leave much of an impression.
15. Flooders in the Attic
This stage gave us the Flooders, powerful OHKO enemies that both teams could use to pressure the enemy to retreat from. The flooders were used pretty well and had a presence throughout the battle. But the middle... dear god the middle. This map had one of the worst middle sections of the entire selection of shify stations. Thanks to it being raised, as well as the enemy flooder locking on to any allies in the middle, it could make it downright impossible to regain traction on this map if you lost it. Was definitely the downfall of this map imo.
14. Wayslide Cool
As our first taste of shifty station, this stage was not all that bad. It had a fun concept of horizontally moving platforms, which could prevent players from inking entire swathes of turf. While it certainly wasn't groundbreaking (especially compared with what was to come), it still was a fun and interesting map nonetheless.
13.Railway Chillin
This stage was done quite well. With Grindrails place everywhere on the map and areas only accesible by grindrail, it used them to their full extent. Why then, you're probably asking, is it so low? The answer: its middle. This map had one of the worst middles of any splatoon map... ever. Losing the middle here could lead to an extreme disadvantage, and breaking the other teams defense could end up overwhelmingly difficult. If the middle problem was fixed, the stage would be way higher, but for now, it stays around the middle.
12. The Bouncy Twins
While the Bounce pads were fun in this stage, I feel as if most of that fun came from just running straight into the enemy base. Playing a game of tag through their base while they silently screamed through their monitors was fun, but apart from that, they felt underused and the middle was kinda forgettable. Regardless, out of all the middle of the road stages, this one was the best.
A rank
These stages had good concepts, and executed them well--but were lacked the consistency of the higher up stages.
11. A Swiftly Tilting Balance
This was the final map for the TMNT splatfest tournament that happened, and after two repeat stations (neither of which were good ones), this map need to impress. And you know what? It succeeded! With an interesting concept and ample enough use, this maps only downside is its simplicity once the gimmick is discovered. Other than that, enjoy taking the weight off of your shoulders!
10. The Ink is Spreading
Now THIS was the best spreader stage. Now with switches to control the ink, this stage made the concept of spreaders interesting. While Fancy Spew was good in its own right, it doesnt copmare to the competitive epicenter of TIiS. This is probably the most all-around good stage of them all.
9. Cannon Fire Pearlie
Although these aren't exactly the cannons from single-player, they still serve thebsame purpose. And, as evident by this stages placing, just as fun. The cannons strong effect on the middle, combined with the giant slopes make this stage a truly chaotic one. The only issue i have with it is that the middle is very dense compared to the bases, so it can make comebacks difficult sometimes, but apart from that, good stage.
S rank
For stages that used their concepts greatly throughout the stage and used them very well, only to be held back from X by their lack of dynamic-ness.
8. Grapplink Girl
This stage used Grapplinks superbly, allowing players to quickly move to different parts of the map and even jump the entire map with specials such as Bubble Blower or Sting Ray. It had grapplinks that were the only way into the enemy base(or at least the less risky way) making it less prone to annoying teammates that try to sneak by the enemy and paint but end up dying. So thats good.
7. The Switches
Now here's a fun stage. The concept of having the switches block off the enemies from your spawn was a good one, and it made pushes that much more fun. There's really not that much to say about this stage. Its just a great overall stage.
6. Geyser Town
The one con about this stage is that the middle two geysers really aren't much more than a blockade. Apart from that, this stage is a fun one that puts geysers to full use, and even shows a few new things about them, like that fact that they instantly detonate bombs(who knew?).
5. Zappy Longshockings
This map truly felt like a game of time management. Between the wall blocking off players(if you couldnt guess i love area denial), the ink switches allowing you to enter enemy spawn, and the middle corridor of this saltspray rig shaped stage, it truly tests the player on where they had to be and when. Spend too long in one place, the enemy can easily take another. The only issue is that the switches feel a bit... lethargic... at times, but its not too much to worry about.
4. Furler in the Ashes
The second to last stage, appropriately put just before X rank. Finally using the long awaited furlers was good enough, but the suprise return of Mahi Mahi's water mechanics?! Now that was just icing on the cake. With the giant furlers allowing for stupid stuff such as being able to move way too fast with the splatlings or the best use of a sprinkler ever, this map only barely, barely misses out on X because it pales in comparison to the final 3.
X rank
The best of the best. These maps mechanics are so important to the stage that it makes every second on them important. They're dynamic, their fun, and they test the players ability to do more than just turf. They are what made shifty station stand out the most.
3. The Bunker Games
This stage. This stage was the first glimpse we had into how truly awesome the stations could be. With bunkers that appeared to outright deny players of swathes of turf, it made everyone invested for the whole match.
2. Bridge to Terraswitchia
This stage somehow did what The Bunker Games did even better, now having the area denial fully in the hands of the players. It perfectly handled the balance between offense and defense, and showed to players that if they got their butts handed to them in the first half, they would have to turn it up to 11 in order to fix that. Truly a great stage.
1. Sweet Valley Tentacles
Now, there was one problem with the previous stages that I decided not to mention until now. A problem that, while small, still brought them down a peg. And that problem was movement. In a game all about movement, its kind of boring to stanf in one place and wait to fight enemies. If you want to claim turf, then you should have to overcome your enemies head on to do it. And thats what Sweet Valley Tentacles does to put it above them all. It had everything: Area Denial, vast usage of a octo valley mechanic that none of us expected, large amounts of turf, and constantly shifting places of importance. If you pushed when needed, and defended your tentacles when they were under attack, you were guaranteed to succeed. It made sneakers(see grapplink girl to see what I'm talking about) unable to be annoying without getting some serious targets on their backs, it made scaredy-cats who were too afraid to push hold their team back. It truly made for a dynamic stage, and I honestly feel like the concept of this map would make an excellent ranked mode. It truly made turf war feel like more than it was.
B rank
These stages use their concepts well, but are held back by either poor stage design or lackluster concepts in general.
17. Fancy Spew
This stage would be the first to introduce spreaders, but it certainly wouldn't be the last. With the giant spreader and its cohorts always creating more of both sides ink, it was never truly over for one side. However, that also gave a sense of... staleness. As if there was nothing you could do about the spreaders unless you literally spent the whole match watching them like a hawk. Not really that fun. Thus, B is where it resides.
16. The Chronicles of Rolonium
Rolonium finally got its debut outside of octo canyon in the time travel v teleportation splatfest, and it was... fine? I guess. The main issue with this one was that the rolonium didnt play a massive role in this one. Yeah sure, people focused it as soon as it recharged, but for the most part, its role in most battles was suprisingly weak. It didnt do too much damage, it wasn't too hard to clean up after, and it took a long time to recharge. Overall, not a bad map, but it didn't leave much of an impression.
15. Flooders in the Attic
This stage gave us the Flooders, powerful OHKO enemies that both teams could use to pressure the enemy to retreat from. The flooders were used pretty well and had a presence throughout the battle. But the middle... dear god the middle. This map had one of the worst middle sections of the entire selection of shify stations. Thanks to it being raised, as well as the enemy flooder locking on to any allies in the middle, it could make it downright impossible to regain traction on this map if you lost it. Was definitely the downfall of this map imo.
14. Wayslide Cool
As our first taste of shifty station, this stage was not all that bad. It had a fun concept of horizontally moving platforms, which could prevent players from inking entire swathes of turf. While it certainly wasn't groundbreaking (especially compared with what was to come), it still was a fun and interesting map nonetheless.
13.Railway Chillin
This stage was done quite well. With Grindrails place everywhere on the map and areas only accesible by grindrail, it used them to their full extent. Why then, you're probably asking, is it so low? The answer: its middle. This map had one of the worst middles of any splatoon map... ever. Losing the middle here could lead to an extreme disadvantage, and breaking the other teams defense could end up overwhelmingly difficult. If the middle problem was fixed, the stage would be way higher, but for now, it stays around the middle.
12. The Bouncy Twins
While the Bounce pads were fun in this stage, I feel as if most of that fun came from just running straight into the enemy base. Playing a game of tag through their base while they silently screamed through their monitors was fun, but apart from that, they felt underused and the middle was kinda forgettable. Regardless, out of all the middle of the road stages, this one was the best.
A rank
These stages had good concepts, and executed them well--but were lacked the consistency of the higher up stages.
11. A Swiftly Tilting Balance
This was the final map for the TMNT splatfest tournament that happened, and after two repeat stations (neither of which were good ones), this map need to impress. And you know what? It succeeded! With an interesting concept and ample enough use, this maps only downside is its simplicity once the gimmick is discovered. Other than that, enjoy taking the weight off of your shoulders!
10. The Ink is Spreading
Now THIS was the best spreader stage. Now with switches to control the ink, this stage made the concept of spreaders interesting. While Fancy Spew was good in its own right, it doesnt copmare to the competitive epicenter of TIiS. This is probably the most all-around good stage of them all.
9. Cannon Fire Pearlie
Although these aren't exactly the cannons from single-player, they still serve thebsame purpose. And, as evident by this stages placing, just as fun. The cannons strong effect on the middle, combined with the giant slopes make this stage a truly chaotic one. The only issue i have with it is that the middle is very dense compared to the bases, so it can make comebacks difficult sometimes, but apart from that, good stage.
S rank
For stages that used their concepts greatly throughout the stage and used them very well, only to be held back from X by their lack of dynamic-ness.
8. Grapplink Girl
This stage used Grapplinks superbly, allowing players to quickly move to different parts of the map and even jump the entire map with specials such as Bubble Blower or Sting Ray. It had grapplinks that were the only way into the enemy base(or at least the less risky way) making it less prone to annoying teammates that try to sneak by the enemy and paint but end up dying. So thats good.
7. The Switches
Now here's a fun stage. The concept of having the switches block off the enemies from your spawn was a good one, and it made pushes that much more fun. There's really not that much to say about this stage. Its just a great overall stage.
6. Geyser Town
The one con about this stage is that the middle two geysers really aren't much more than a blockade. Apart from that, this stage is a fun one that puts geysers to full use, and even shows a few new things about them, like that fact that they instantly detonate bombs(who knew?).
5. Zappy Longshockings
This map truly felt like a game of time management. Between the wall blocking off players(if you couldnt guess i love area denial), the ink switches allowing you to enter enemy spawn, and the middle corridor of this saltspray rig shaped stage, it truly tests the player on where they had to be and when. Spend too long in one place, the enemy can easily take another. The only issue is that the switches feel a bit... lethargic... at times, but its not too much to worry about.
4. Furler in the Ashes
The second to last stage, appropriately put just before X rank. Finally using the long awaited furlers was good enough, but the suprise return of Mahi Mahi's water mechanics?! Now that was just icing on the cake. With the giant furlers allowing for stupid stuff such as being able to move way too fast with the splatlings or the best use of a sprinkler ever, this map only barely, barely misses out on X because it pales in comparison to the final 3.
X rank
The best of the best. These maps mechanics are so important to the stage that it makes every second on them important. They're dynamic, their fun, and they test the players ability to do more than just turf. They are what made shifty station stand out the most.
3. The Bunker Games
This stage. This stage was the first glimpse we had into how truly awesome the stations could be. With bunkers that appeared to outright deny players of swathes of turf, it made everyone invested for the whole match.
2. Bridge to Terraswitchia
This stage somehow did what The Bunker Games did even better, now having the area denial fully in the hands of the players. It perfectly handled the balance between offense and defense, and showed to players that if they got their butts handed to them in the first half, they would have to turn it up to 11 in order to fix that. Truly a great stage.
1. Sweet Valley Tentacles
Now, there was one problem with the previous stages that I decided not to mention until now. A problem that, while small, still brought them down a peg. And that problem was movement. In a game all about movement, its kind of boring to stanf in one place and wait to fight enemies. If you want to claim turf, then you should have to overcome your enemies head on to do it. And thats what Sweet Valley Tentacles does to put it above them all. It had everything: Area Denial, vast usage of a octo valley mechanic that none of us expected, large amounts of turf, and constantly shifting places of importance. If you pushed when needed, and defended your tentacles when they were under attack, you were guaranteed to succeed. It made sneakers(see grapplink girl to see what I'm talking about) unable to be annoying without getting some serious targets on their backs, it made scaredy-cats who were too afraid to push hold their team back. It truly made for a dynamic stage, and I honestly feel like the concept of this map would make an excellent ranked mode. It truly made turf war feel like more than it was.
the ridgerunner isn't an UL rig at all, no, but it's also not ridiculously heavy. it's still well under 2 lbs for the double layer with spreader bars (i think)
if you want ultra light, hit up towns end bridge hammock
tzlibre is a spreader of FUD, scammer of decent XTZ holders.
My primary surgeon destroyed my nose. I went to David Hacker (Toronto Rhinoplasty Clinic) and my nose was ruined. I couldn't breathe at all and it was collapsing. He left me with various issues (Inverted-v, alar collapse, etc). He was the most miserable and horrible man I ever met. I can't believe he's a doctor, but it seems that many others have posted and left reviews of their poor experiences with him...so I can hope others can avoid being botched by him. That year was horrible and all I could do was research and go to consultations with other surgeons to find a way to fix this mess.
I would not recommend going to Oakley Smith. I ended up going to him (Oakley Smith) and he seemed good. But he's really just a salesman and he sold me on having him do my surgery...choosing him was mistake #2 I made. He fixed some issues (inverted-v poorly..., flared nostrils) and worsened other issues + created new ones (alar collapse worsened, pinched tip and crooked nose). He gave multiple excuses for the issues he left me with and then tried to convince me it was in my head or well my airways were "open/not blocked" so I should be able to breathe well (I can literally see my alar cartilage that has sunken in and blocked my airway + it sticks to my septum). I ended up getting several second opinions to confirm this. So I have a revision surgeon that will be fixing the issues oakley smith left me with (thicker spreader graft, alar strut grafts, tip repositioning).
At my one year appointment he was more focused on my reviews/posts I left, kept asking why I was so angry (or was surprised at how frustrated/angry I was)...like do you think this is a joke? I paid you 18k to fix the function and look, you did a half-assed and lazy job. Idk he kept bringing these reviews/posts up and I realized then and there that he cares more about these posts/reviews than his patients. I'm sure they're keeping an eye on any other posts I make as well based on how he kept bringing these posts/reviews up...He decided to try lying to my face and state that perhaps I had taken him talking/thinking out loud as stating that x is the issue. And dude no, you did your poor checkup and then straight up said to my face the issue is x (then it was y the next time, then z, etc). But whatever, so I had stressed I prioritized function over the look since the start. And he had the gall to be shocked about how angry I am, like he should know how hard the recovery is for rhinoplasty (let alone a revision), taking time off work/school, etc. And he went off on how he is the best at what he does and basically the gist was he's the best at rhinoplasty? And went on to state that any surgeon that tells me they can fix this is lying and whatnot. He was triggered tbh and just was going on and on about how he's the best, etc.
It was a bit amusing to hear tbh (like I was trying to keep my eyes from rolling to the back of my head...), because based on my experience + the experience of all these numerous people I have met that have poor results from smith (and ones who did a second revision with him to end off worse)...He's not the best and he shouldn't be telling these patients that no other doctor can fix the mess he made (especially if he's basing it off whether it can fixed on his own skill+experience...which again is lacking as they caused the issues in the first place). There's no reason to scare these patients off from going to get a second opinion or getting a revision elsewhere.
Either way he gave out some random percentage success rate (like I predicted) and said he would do a revision. A few days later I received a message and basically he thinks I have good results + revision is not worth it and he'll discuss things further/options in the new year. So having a nose that does not function and is crooked is a good result for him. Take that as you will if anyone is considering him. Either way I have a revision surgeon that will be fixing the issues Oakley Smith left me with. There was honestly no way I would trust him to have the skill/ability or the ethics to do another revision. He would definitely leave me worse off and I don't trust him not to do something I already declined (i.e. he and his student kept trying to push me to do a turbinate reduction, knowing that the alar collapse + other issues are causing the breathing problems, and I said no numerous times and they kept pushing). You want a doctor that see patients as people and not just a profit. Funny thing is that, I have stated smith uses fake reviews and it seems that he had many of the negative ones he received removed (example: he had like 380 reviews on ratemd a while ago and now it's down to 360 or something with like all his negative reviews removed). So please be careful if you choose to go to him as he has them removed. If I could go back in time I would never touch my nose (or go to Hacker or Smith). I could live with a crooked nose that's the thing; it's the breathing issues that are the worst.
This girl is also someone that had their surgery with Smith btw:
https://www.realself.com/question/niagara-fillers-fix-curved-nose
This one is the same girl as above, but after Smith put filler in her nose.
https://www.realself.com/question/niagara-nose-swollen
and this one I posted somewhere else, but not sure if you saw it:
https://www.realself.com/review/18-year-girl-successful-revision-rhinoplasty-move-life-2017
I would have never touched my nose if I could go back in time. Otherwise I wouldn’t have gone to hacker or smith. I wish I had known smith manipulated his reviews and that I should have gone elsewhere (probably out of the country tbh…)
Well if you’re more then 30 feet, yes you are bad with shottys, they spread and the mastiff spreader left-right
That looks like a great deal, but I am height-constrained and need my RAM to be 31mm to less. So far I have only found that the Corsair Vengeance fits, but I had also heard about removing the heat spreader. Any idea if the bare board would fit, and it the missing thermal spreader would hurt performance?
The porn industry is, like, the biggest spreader of misoginy worldwide.
But our community overlaps with channer culture, therefore, we are up to no good — rite?
The box needs to be either on a spreader bracket OR have a support wire (314.23 (D) 1;2) and as long as the mc has an insulated ground and its continuous through the box theres no problem using the plastic box per 314.3 Exception 1.
So very evil... "Here is your cancer medicine, and here is your cancer spreader *points at Fallout 76* free now with every purchase!"
That bumper could've been saved if you wanted to get extremely involved. We would have to do that from time to time when used car managers wanted to save some money on a beater car. If you're interested look into plastic welding. Or what we would do is melt a bondo spreader into the crack with a soldering iron and grind it to get it close, then bondo it up to find tune it. Feather fill primer is your best friend on repairs like that too. Awesome job on the repair!
99.99% sure fertilizer doesn't effect trees, but if I grow grass there as well, it would help that. Might need to use a solid fertilizer spreader though as all the boom arms on liquid spreaders would get stuck all the time.
Not everyone has spreader bars on hand.
Personally, knowing that we don't have them with our rotator I would have ran a line from a light duty to the front of that truck, lift it up the exact same way and use the light duty to pull on the nose when it's ready to be upright. I'd do it that way to avoid additional damage (it's totaled anyway but we've got the equipment on scene) and I'd hook it while it's on the ground to avoid having someone get under a 6500lb fish.
We've had to do this with cars that end up in washes where you have to lift it out and can't just drag it out. It's great when they have full coverage (easily around 750-1000 towing/labor) but it sucks when it's a liability beater.
The person who was a large spreader of this photo is this guy, who posted the caption:
"Here's a photo of Jason Mamoa getting arrested while defending sacred land in Hawaii."
He has since responded:
"So I've been informed that this photo is staged. But I'm going to leave it up because it still brings attention to #WeAreMaunaKea"
[https://twitter.com/the\_green\_city/status/1161448144980365317](https://twitter.com/the_green_city/status/1161448144980365317)
**POSTER BOY FOR THE ATTITUDE OF THE MOVEMENT.**
It is factually incorrect. I know it is factually incorrect. But **meh**, it brings attention to the movement so I am leaving the post as is, saying he was arrested.
I was set to agree with you but your post history is 100% negative, easy questions that have been answered 500 times in the past or complaining about other games. You are a negativity spreader when it suits you.
I made my own spreader bar using rods, nuts and bolts from Home Depot.
spreader bar? may i ask what that is?
|
micropython/impact.ipynb | ###Markdown
Project impact
The idea started at the _BeTogetherConference_ in 2019 at the SSIS in Saigon. To investigate the trajectory of projectile motion, we equip a rubber duck with a battery-powered microcomputer (e.g. micro:bit or ESP8266) that constantly measures acceleration with a gyroscope. The duck is fired with a catapult, and the challenge for the students is to design an impact protection that helps the duck survive the flight.
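The calculation in the next cell uses the standard projectile-motion relations for launch speed $v$, launch angle $\alpha$ and release height $h$, neglecting air resistance:

$$
\begin{aligned}
v_h &= v\cos\alpha, \qquad v_v = v\sin\alpha,\\
t_\mathrm{up} &= \frac{v_v}{g}, \qquad h_\mathrm{max} = h + v_v\,t_\mathrm{up} - \tfrac{1}{2}\,g\,t_\mathrm{up}^2,\\
t_\mathrm{down} &= \sqrt{\frac{2\,h_\mathrm{max}}{g}}, \qquad d = v_h\,(t_\mathrm{up} + t_\mathrm{down}).
\end{aligned}
$$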
###Code
import math
# parameters
m = 0.25 # mass of the rubber duck in kg
v = 4.3 # velocity of the rubber duck when leaving the catapult
alpha = 45 # release angle of the duck from the catapult, against horizontal
h = 1.5 # release height of the rubber duck from the catapult
g = 9.81 # vertical acceleration due to gravity in m/s^2
# calculations
alpha_rad = alpha * math.pi / 180 # convert the angle to radiant for calculation
v_h = v * math.cos( alpha_rad ) # horizontal velocity
v_v = v * math.sin( alpha_rad ) # vertical velocity of rubber duck on release
t_up = v_v / g # time to reach the highest point (v_v = 0)
h_max = h + v_v * t_up - 0.5 * g * t_up**2 # maximal hight of projectile
t_down = math.sqrt( 2 * h_max / g) # time from highest point to touchdown
distance = v_h * ( t_up + t_down)
print(distance)
###Output
2.869928897613528
###Markdown
Now we are going to use pyplot to visualize the trajectory.
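The cell that follows is only a pyplot smoke test. A minimal sketch of what the actual trajectory plot could look like, reusing the values computed in the cell above (`v_h`, `v_v`, `h`, `g`, `t_up`, `t_down`), is:

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0.0, t_up + t_down, 100)  # flight time from release to touchdown
x = v_h * t                               # horizontal position in m
y = h + v_v * t - 0.5 * g * t**2          # height above ground in m

plt.plot(x, y)
plt.xlabel('horizontal distance in m')
plt.ylabel('height in m')
plt.ylim(bottom=0)
plt.show()
```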
###Code
import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4])
plt.ylabel('some numbers')
plt.show()
###Output
_____no_output_____
###Markdown
This code needs to be adjusted for the MPU6050 that is connected to the ESP32 via I2C.
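A minimal sketch of reading raw accelerometer values from the MPU6050 over I2C in MicroPython is shown below; the SCL/SDA pin numbers and the 0x68 device address are assumptions and must match the actual wiring:

```python
from machine import I2C, Pin
from time import sleep
import struct

MPU_ADDR = 0x68                             # default MPU6050 address (AD0 pulled low)
i2c = I2C(0, scl=Pin(22), sda=Pin(21))      # assumed ESP32 pins

i2c.writeto_mem(MPU_ADDR, 0x6B, b'\x00')    # wake the sensor (PWR_MGMT_1 = 0)

while True:
    raw = i2c.readfrom_mem(MPU_ADDR, 0x3B, 6)   # ACCEL_XOUT_H .. ACCEL_ZOUT_L
    ax, ay, az = struct.unpack('>hhh', raw)     # big-endian signed 16-bit values
    print(ax / 16384, ay / 16384, az / 16384)   # scale factor for the +/-2 g range
    sleep(0.1)
```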
###Code
# just a test if the code can be written in MicroPython
from machine import Pin
from time import sleep
led = Pin(2, Pin.OUT)
while True:
led.value(not led.value())
sleep(0.5)
###Output
_____no_output_____ |
notebooks/data_structuring/borehole_data_2_spatialite.ipynb | ###Markdown
This notebook demonstrates how to create an analysis-ready spatialite database for borehole data. All data have been processed and filtered, and the depths corrected to metres below ground level. Induction and gamma data are resampled to 5 cm intervals and stored in the same table.
Neil Symington [email protected]
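The resampling onto a common 0.05 m depth grid mentioned above can be done in many ways; a minimal sketch with pandas/NumPy (assuming a DataFrame with a `Depth` column in metres and one column per log; the helper name and example values are purely illustrative) could look like this:

```python
import numpy as np
import pandas as pd

def resample_log(df, step=0.05):
    """Linearly interpolate a depth-indexed log onto a regular depth grid."""
    grid = np.arange(0.0, df['Depth'].max() + step, step)
    out = pd.DataFrame({'Depth': grid})
    for col in df.columns.drop('Depth'):
        out[col] = np.interp(grid, df['Depth'], df[col])
    return out

# example with synthetic values
df = pd.DataFrame({'Depth': [0.0, 0.12, 0.31, 0.55],
                   'Apparent_conductivity': [10.0, 12.5, 11.0, 9.5]})
print(resample_log(df).head())
```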
###Code
import shapely.wkb
import shapely.wkt
from shapely.geometry import Point
import os, glob
import pandas as pd
# sqlite/spatialite
from sqlalchemy import create_engine, event, ForeignKey
from sqlalchemy import Column, Integer, String, Float, Date, Boolean
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
from sqlite3 import dbapi2 as sqlite
import sys
from pyproj import Proj, transform
import lasio
import sqlite3
import numpy as np
import matplotlib.pyplot as plt
import sys
import datetime
import math
# deal with the different setup of hydrogeol_utils
if os.getlogin().lower() == 'u19955':
sys.path.append(r'\\prod.lan\active\proj\futurex\Common\Working\Mike\GitHub\hydrogeol_utils\\')
from hydrogeol_utils.borehole_utils import extract_all_boredata_by_simple_query
from hydrogeol_utils.plotting_utils import drawCompLog
from hydrogeol_utils.db_utils import makeCon, closeCon
# Neil Symington's local configuration
DB_ROOT = r"\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\spatialite"
SPATIALITE_PATH = r'C:\mod_spatialite-4.3.0a-win-amd64'
# Add spatialite dll to path
os.environ['PATH'] = SPATIALITE_PATH + ';' + os.environ['PATH']
DB_PATH = os.path.join(DB_ROOT, r"East_Kimberley_borehole_data.sqlite")
if os.path.exists(DB_PATH):
os.remove(DB_PATH)
engine = create_engine('sqlite:///' + DB_PATH, module=sqlite, echo=False)
@event.listens_for(engine, 'connect')
def connect(dbapi_connection, connection_rec):
dbapi_connection.enable_load_extension(True)
dbapi_connection.execute('SELECT load_extension("mod_spatialite")')
# create spatialite metadata
print('creating spatial metadata...')
engine.execute("SELECT InitSpatialMetaData(1);")
# Create schema
Base = declarative_base()
class Boreholes(Base):
__tablename__ = 'borehole'
borehole_id = Column(Integer, index=True, primary_key=True)
borehole_name = Column("Borehole_name", String(20))
alternative_name = Column("Alternative_name", String(20))
easting = Column("Easting", Float)
northing = Column("Northing", Float)
elevation = Column("Ground_elevation_mAHD", Float)
induction = Column("Induction_acquired", Boolean)
gamma = Column("Gamma_acquired", Boolean)
javelin = Column("Javelin_acquired", Boolean)
hylogger_core = Column("Hylogger_acquired_on_core", Boolean)
hylogger_chips = Column("Hylogger_acquired_on_chips", Boolean)
lithology = Column("Lithology_available", Boolean)
ECpH = Column("EC_pH_acquired", Boolean)
swl = Column("SWL_available", Boolean)
construction = Column("Construction_available", Boolean)
magsus = Column("MagSus_available", Boolean)
AEM = Column("AEM_conductivity_available", Boolean)
geometry = Column(String)
class Induction_gamma_data(Base):
__tablename__ = 'induction_gamma_data'
induction_gamma_id = Column(Integer, index=True, primary_key=True)
depth = Column("Depth", Float)
conductivity = Column("Apparent_conductivity", Float)
gamma_calibrated = Column("Gamma_calibrated", Float)
K = Column("K", Float)
U = Column("U", Float)
Th = Column("Th", Float)
GR = Column("GR", Float)
borehole_id = Column(Integer, ForeignKey('borehole.borehole_id'))
borehole_header = relationship("Boreholes")
class Borehole_NMR_data(Base):
__tablename__ = 'boreholeNMR_data'
bNMR_id = Column(Integer, index=True, primary_key=True)
depth = Column("Depth", Float)
totalf = Column("Total_water_content", Float)
clayf = Column("Clay_water_content", Float)
capf = Column("Capillary_water_content", Float)
free = Column("Free_water_content", Float)
T2 = Column("T2", Float)
K = Column("K_sdr", Float)
borehole_id = Column(Integer, ForeignKey('borehole.borehole_id'))
borehole_header = relationship("Boreholes")
class Lithology(Base):
__tablename__ = 'borehole_lithology'
lithology_id = Column(Integer, index=True, primary_key=True)
depth_from = Column("Depth_from", Float)
depth_to = Column("Depth_to", Float)
lithology_type = Column("Lithology_type", String(40))
lithdescription = Column("Lithology_description", String(250))
clay_frac = Column("Clay_fraction", String(1))
silt_frac = Column("Silt_fraction", String(1))
fsand_frac = Column("Fine_sand_fraction", String(1))
msand_frac = Column("Medium_sand_fraction", String(1))
csand_frac = Column("Coarse_sand_fraction", String(1))
granule_frac = Column("Granule_fraction", String(1))
pebble_frac = Column("Pebble_fraction", String(1))
sorting = Column("Sorting", String(1))
rounding = Column("Rounding", String(1))
weathering = Column("Weathering", String(1))
borehole_id = Column(Integer, ForeignKey('borehole.borehole_id'))
borehole_header = relationship("Boreholes")
class EC_pH(Base):
__tablename__ = 'pore_fluid_EC_pH'
EC_pH_id = Column(Integer, index=True, primary_key=True)
depth = Column("Depth", Float)
EC = Column("EC", Float)
pH = Column("pH", Float)
borehole_id = Column(Integer, ForeignKey('borehole.borehole_id'))
borehole_header = relationship("Boreholes")
class SWL(Base):
__tablename__ = 'standing_water_level'
SWL_id = Column(Integer, index=True, primary_key=True)
date = Column("Date", Date)
depth = Column("Depth", Float)
Measurer = Column("Measurer", String(30))
borehole_id = Column(Integer, ForeignKey('borehole.borehole_id'))
borehole_header = relationship("Boreholes")
class Construction(Base):
__tablename__ = 'borehole_construction'
construction_id = Column(Integer, index=True, primary_key=True)
depth_from = Column("Depth_from", Float)
depth_to = Column("Depth_to", Float)
Measurer = Column("Measurer", String(30))
Construction_name = Column("Construction_name", String(20))
Construction_type = Column("Construction_type", String(20))
Construction_materials = Column("Construction_materials", String(20))
Internal_diameter = Column("Internal_diameter", Float)
Property = Column("Property", String(5))
Property_size = Column("Property_size", Float)
borehole_id = Column(Integer, ForeignKey('borehole.borehole_id'))
borehole_header = relationship("Boreholes")
class MagSus(Base):
__tablename__ = 'magnetic_susceptibility'
magsus_id = Column(Integer, index=True, primary_key=True)
depth = Column("Depth", Float)
magsus = Column("Magnetic_susceptibility", Float)
borehole_id = Column(Integer, ForeignKey('borehole.borehole_id'))
borehole_header = relationship("Boreholes")
class AEM_conductivity(Base):
__tablename__ = "representative_AEM_bulk_conductivity"
bulk_conductivity_id = Column(Integer, index=True, primary_key=True)
depth_from = Column("Depth_from", Float)
depth_to = Column("Depth_to", Float)
conductivity = Column("Bulk_conductivity", Float)
borehole_id = Column(Integer, ForeignKey('borehole.borehole_id'))
borehole_header = relationship("Boreholes")
Base.metadata.create_all(engine)
infile = os.path.join(DB_ROOT, "Boreholes_header.csv")
df_header = pd.read_csv(infile)
df_header["Induction_acquired"] = 0
df_header["Gamma_acquired"] = 0
df_header["Javelin_acquired"] = 0
df_header["Hylogger_chips_acquired"] = 0
df_header["Hylogger_core_acquired"] = 0
df_header["lithology_description"] = 0
df_header["EC_pH_acquired"] = 0
df_header['SWL_available'] = 0
df_header['Construction_available'] = 0
df_header['MagSus_available'] = 0
df_header['AEM_conductivity_available'] = 0
df_header['easting'] = [shapely.wkt.loads(x).x for x in df_header["geometry"]]
df_header['northing'] = [shapely.wkt.loads(y).y for y in df_header["geometry"]]
df_header[df_header['ENO'] == 627064]
def update_availability_flag(df_header, channels, eno):
# find index for given eno
index = df_header[df_header["ENO"] == eno].index
    # Check induction
if ("INDUCTION_CALIBRATED" in channels) or ("INDUCTION_BOREHOLE_COMPENSATED" in channels):
df_header.at[index, "Induction_acquired"] = 1
# Check gamma
if ("GAMMA_CALIBRATED" in channels) or ("GR" in channels) or ("K" in channels) or \
("U" in channels) or ("Th" in channels):
df_header.at[index, "Gamma_acquired"] = 1
return df_header
# Now let's read in the induction gamma data
las_dir = r"\\prod.lan\active\proj\futurex\East_Kimberley\Data\Processed\Geophysics\Induction_gamma\EK_filtered_induction_gamma"
# Create empty dataframe into which to append the data
df_indgam = pd.DataFrame(columns = ["borehole_id", "Depth_mBGL"])
# Iterate through the las files
os.chdir(las_dir)
for file in glob.glob('*.LAS'):
las = lasio.read(file)
df_logs = las.df()
# Get the eno and ref datum
datum = las.well.APD.value
eno = las.well.UWI.value
# Update the df_header dataframe with the inclusion or otherwise of
# induction and gamma
df_header = update_availability_flag(df_header, df_logs.columns, eno)
df_logs["borehole_id"] = eno
    # Now convert the depth reference to mBGL
df_logs["Depth_mBGL"] = df_logs.index - datum
# Append
df_indgam = df_indgam.append(df_logs)
df_indgam.reset_index(inplace=True)
df_indgam.columns
#Convert to S/m
df_indgam['INDUCTION_BOREHOLE_COMPENSATED'] = df_indgam['INDUCTION_BOREHOLE_COMPENSATED'].values /1000.
df_indgam['INDUCTION_CALIBRATED'] = df_indgam['INDUCTION_CALIBRATED'].values /1000.
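# The LAS logs used here are assumed to have already been resampled onto a regular 5 cm
# depth grid (see the notebook description above). A minimal sketch of how such a
# resampling could be done with pandas is included below for reference; it is not called
# in this workflow, and the column name and step are illustrative.
def resample_log_to_depth_grid(df_log, depth_col='Depth_mBGL', step=0.05):
    """Interpolate a single-hole log onto a regular depth grid with spacing `step` (m)."""
    grid = np.arange(0.0, df_log[depth_col].max() + step, step)
    indexed = df_log.set_index(depth_col).sort_index()
    return (indexed.reindex(indexed.index.union(grid))
                   .interpolate(method='index')
                   .reindex(grid))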
# Now we import the javelin data
infile = r"\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\borehole_NMR\bNMR_data_compiled.csv"
df_bnmr_data = pd.read_csv(infile)
# Now update the flag for NMR data
bnmr_enos = df_bnmr_data.borehole_id.unique()
for index, row in df_header.iterrows():
if row['ENO'] in bnmr_enos:
df_header.at[index, "Javelin_acquired"] = 1
# Now bring in the lithology data
infile = r"R:\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\sonic_lithology\ALLHOLES_RAW_NS.csv"
df_lithology = pd.read_csv(infile)
lithology_enos = df_lithology.borehole_id.unique()
# the header table flag is set to 1 if lithology data is available
# for this site
for index, row in df_header.iterrows():
if row['ENO'] in lithology_enos:
df_header.at[index, "lithology_description"] = 1
# Now we bring in the hylogger data
hylog_dir = r"\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\hylogger"
df_hylogs = pd.read_csv(os.path.join(hylog_dir, "EK_hylogg_results_core.csv"))
df_hychips = pd.read_csv(os.path.join(hylog_dir, "EK_hylogg_results_chips.csv"))
df_hylogs.columns
# Now update the flags for hylogger data
hylog_core_enos = df_hylogs.borehole_id.unique()
hylog_chips_enos = df_hychips.borehole_id.unique()
for index, row in df_header.iterrows():
if row['ENO'] in hylog_core_enos:
df_header.at[index, "Hylogger_core_acquired"] = 1
if row['ENO'] in hylog_chips_enos:
df_header.at[index, "Hylogger_chips_acquired"] = 1
# Remove from memory
df_hylogs = None
df_hychips = None
# Bring in the EC pH data
df_ECpH = pd.read_csv(r"\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\EC_pH\EC_pH_sonic.csv")
ECpH_enos = df_ECpH['Borehole_eno'].values
# Update the flags
for index, row in df_header.iterrows():
if row['ENO'] in ECpH_enos:
df_header.at[index, "EC_pH_acquired"] = 1
# Convert to S/m
df_ECpH['EC Value'] = df_ECpH['EC Value'].values * 0.1
# Bring in the SWL data
df_swl = pd.read_csv(r"\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\SWLs\EK_adjusted_SWL.csv")
swl_enos = df_swl['ENO'].values
# Create datetime object
df_swl['Date Measured'] = pd.to_datetime(df_swl['Date Measured'], dayfirst = True,
format = "%d/%m/%Y")
# Update the flags
for index, row in df_header.iterrows():
if row['ENO'] in swl_enos:
df_header.at[index, "SWL_available"] = 1
# Bring in construction
df_const = pd.read_csv(r"\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\construction\GA_EK_Boreholes_Construction.csv")
constr_enos = df_const['ENO'].values
# Update the flags
for index, row in df_header.iterrows():
if row['ENO'] in constr_enos:
df_header.at[index, "Construction_available"] = 1
df_magsus = pd.read_csv(r"\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\mag_sus\mag_sus_vfinal.csv")
magsus_enos = df_magsus['Borehole_eno'].values
# Update the flags
for index, row in df_header.iterrows():
if row['ENO'] in magsus_enos:
df_header.at[index, 'MagSus_available'] = 1
df_aem = pd.read_csv(r"\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\AEM\EK_AEM_borehole_interpolated_duplicates_removed.csv")
aem_enos = df_aem['borehole_id'].values
# Update the flags
for index, row in df_header.iterrows():
if row['ENO'] in aem_enos:
df_header.at[index, 'AEM_conductivity_available'] = 1
df_header
# Now that the data has been loaded we write it to the spatialite database
# Add header data to a list
all_bores = []
for index, row in df_header.iterrows():
bore = Boreholes(borehole_id = row['ENO'],
borehole_name = row['BOREHOLE_NAME'],
alternative_name = row['ALTERNATE_NAMES'],
easting = row['easting'],
northing = row['northing'],
elevation = row['ground_elevation_(mAHD)'],
induction = row['Induction_acquired'],
gamma = row['Gamma_acquired'],
javelin = row['Javelin_acquired'],
hylogger_chips = row['Hylogger_chips_acquired'],
hylogger_core = row['Hylogger_core_acquired'],
lithology = row['lithology_description'],
ECpH = row["EC_pH_acquired"],
swl = row["SWL_available"],
construction = row["Construction_available"],
magsus = row['MagSus_available'],
AEM = row['AEM_conductivity_available'],
geometry = row['geometry'])
all_bores.append(bore)
# Add nmr data to a list
all_nmr_data = []
for index, row in df_bnmr_data.iterrows():
nmr_data = Borehole_NMR_data(bNMR_id = index,
depth = row['Depth_mBGL'],
totalf = row["Total_water_content"],
clayf = row["Clay_water_content"],
capf = row["Capillary_water_content"],
free = row["Free_water_content"],
T2 = row["T2"],
K = row['Ksdr'],
borehole_id = row['borehole_id'])
all_nmr_data.append(nmr_data)
# Add induction gamma data to a list
all_indgam_data = []
for index, row in df_indgam.iterrows():
    # Conductivity will be whichever value is available
if not pd.isnull(row['INDUCTION_BOREHOLE_COMPENSATED']):
conductivity = row['INDUCTION_BOREHOLE_COMPENSATED']
elif not pd.isnull(row['INDUCTION_CALIBRATED']):
conductivity = row['INDUCTION_CALIBRATED']
else:
conductivity = np.nan
indgam_data = Induction_gamma_data(induction_gamma_id = index,
depth = row['Depth_mBGL'],
conductivity = conductivity,
gamma_calibrated = row['GAMMA_CALIBRATED'],
K = row["K"],
U = row["U"],
Th = row["TH"],
GR = row['GR'],
borehole_id = row['borehole_id'])
all_indgam_data.append(indgam_data)
all_lithology_data = []
for index, row in df_lithology.iterrows():
lithology_data = Lithology(lithology_id = index,
depth_from = row['Depth_from'],
depth_to = row["Depth_to"],
lithology_type = row['Extra Fields for Oracle: EM1 Lithology Type (eg, soil, muddy sand, sandstone), see lookup tab'],
lithdescription = row['lithology: eg. Sand, fine; Clay; interbedded sand and silt and clay etc.'],
clay_frac = row['grain size: clay.1'],
silt_frac = row['grain size: silt'],
fsand_frac = row['grain size: very fine - fine'],
msand_frac = row['grain size: medium'],
csand_frac = row['grain size: coarse-very coarse'],
granule_frac = row['grain size: granule'],
pebble_frac = row['grain size: pebble'],
sorting = row['sort: Sorting'],
rounding = row['round: Rounding'],
weathering = row['wth: Weathering'],
borehole_id = row['borehole_id'])
all_lithology_data.append(lithology_data)
df_lithology[df_lithology['borehole_id'] == 635728]
all_EC_pH_data = []
for index, row in df_ECpH.iterrows():
ECpH_data = EC_pH(EC_pH_id = index,
depth = row['Depth'],
EC = row["EC Value"],
pH = row['pH'],
borehole_id = row['Borehole_eno'])
all_EC_pH_data.append(ECpH_data)
all_swl_data = []
for index, row in df_swl.iterrows():
swl_data = SWL(SWL_id = index,
depth = row['SWL_m'],
date = row["Date Measured"],
Measurer = row['Who_Measured'],
borehole_id = row['ENO'])
all_swl_data.append(swl_data)
all_construction_data = []
for index, row in df_const.iterrows():
construction_data = Construction(construction_id = index,
depth_from = row["Depth_from"],
depth_to = row["Depth_to"],
Construction_name = row["Construction_name"],
Construction_type = row['Construction_type'],
Construction_materials =row['Construction_materials'],
Internal_diameter = row["Internal_diameter"],
Property = row["Property"],
Property_size = row['Property_size'],
borehole_id = row['ENO'])
all_construction_data.append(construction_data)
all_magsus_data = []
for index, row in df_magsus.iterrows():
magsus_data = MagSus(magsus_id = index,
depth = row['Sample_Depth_(m)'],
magsus = row['Mag_sus_(unitless)'],
borehole_id = row['Borehole_eno'])
all_magsus_data.append(magsus_data)
all_aem_data = []
for index, row in df_aem.iterrows():
aem_data = AEM_conductivity(bulk_conductivity_id = index,
depth_from = row['Depth_from'],
depth_to = row['Depth_to'],
conductivity = row['conductivity'],
borehole_id = row['borehole_id'])
all_aem_data.append(aem_data)
from sqlalchemy.orm import sessionmaker
Session = sessionmaker(bind=engine)
session = Session()
session.add_all(all_bores)
session.add_all(all_nmr_data)
session.add_all(all_indgam_data)
session.add_all(all_lithology_data)
session.add_all(all_EC_pH_data)
session.add_all(all_swl_data)
session.add_all(all_construction_data)
session.add_all(all_magsus_data)
session.add_all(all_aem_data)
session.commit()
# Add the spatialite geometry to the borehole table:
# add a Spatialite geometry column called 'geom' to the table, using EPSG 28352,
# data type POINT and 2 dimensions (x, y)
engine.execute("SELECT AddGeometryColumn('borehole', 'geom', 28352, 'POINT', 'XY', 1);")
# update the yet empty geom column by parsing the well-known-text (WKT) strings from the
# geometry column into Spatialite geometry objects
engine.execute("UPDATE borehole SET geom=GeomFromText(geometry, 28352);")
# Now we will add the hylogging data to the database. Note that this could be done
# with the declarative base using a similar approach to that used above, but
# the number of columns and my unfamiliarity with the data make this too tedious a task
#df_hylogs.to_sql("Hylogging_data_from_core", engine, if_exists='replace', index = False)
#df_hychips.to_sql("Hylogging_data_from_chips", engine, if_exists='replace', index = False)
# Create a metadata table and add it
df_metadata = pd.DataFrame(data = {"Depths": ['metres below ground level'],
"Conductivity": ["S/m"],
"GAMMA_CALIBRATED": ["counts per second"],
"GR": ["American Petroleum Index"],
"Magnetic_susceptibility": ['Unitless_(SI)'],
"U": ["ppm"],
"Th": ["ppm"],
"K": ["%"],
"water content": ["fraction"],
"Ksd": ["metres per day"],
"EC": ["S/m"]})
df_metadata.to_sql("Units", engine, if_exists="replace", index=False)
###Output
_____no_output_____
###Markdown
Composite Log Creation Loop
###Code
df_interp = pd.read_csv(r"\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\stratigraphy\base_of_cenozoic_picked.csv")
df_interp
# Define the database path. Check this to ensure it's up to date with the variables above.
# Variable definition applied here just so this cell runs standalone
DB_ROOT = r"\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\spatialite"
DB_PATH = os.path.join(DB_ROOT, r"East_Kimberley_borehole_data.sqlite")
# connect to database
con = makeCon(DB_PATH)
# make query to extract all ENOs and borehole names from database
bh_list_query = 'select borehole_id, Borehole_name, Alternative_name from borehole'
# run query
bh_list = pd.read_sql_query(bh_list_query,con)
# Loop through each hole by ENO
for i, (bhid, bhname, altname) in bh_list.iterrows():
plt.close('all')
# extract all the data for that borehole into a dict of dataframes
data = extract_all_boredata_by_simple_query(con, bhid)
    # draw the composite log for that hole. When an output path is supplied, pass only the file
    # stem without extension, as drawCompLog creates the .svg and .png itself and only outputs
    # to file; here output_path=None is passed so the figure is returned, annotated with the
    # interpretation below, and then saved with plt.savefig
if altname is None:
output_path = r'\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\GAHoles_CompositeLogs\{}_complog'.format(bhname)
else:
output_path = r'\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\GAHoles_CompositeLogs\{}_{}_complog'.format(bhname, altname)
fig, axs = drawCompLog(data, output_path = None)
#plt.show()
if bhid in df_interp.borehole_id.values:
for ax in axs:
ax.axhline(y=df_interp[df_interp.borehole_id == bhid]['base_of_cenozoic_depth'].values[0],
color='red')
plt.savefig(output_path)
# Close the DB connection
closeCon(con, DB_PATH)
###Output
Connected to \\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\spatialite\East_Kimberley_borehole_data.sqlite. Temporary working copy created.
###Markdown
Testing cell for a single hole's composite log
###Code
# Define the database path. Check this to ensure it's up to date with the variables above.
# Variable definition applied here just so this cell runs standalone
DB_ROOT = r"\\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\spatialite"
DB_PATH = os.path.join(DB_ROOT, r"East_Kimberley_borehole_data.sqlite")
# connect to database
con = makeCon(DB_PATH)
bhid = bhname = 635735 # ENO for KR33
data = extract_all_boredata_by_simple_query(con, bhid)
drawCompLog(data)
# Close the DB connection
closeCon(con, DB_PATH)
###Output
Connected to \\prod.lan\active\proj\futurex\East_Kimberley\Working\SharedWorkspace\Bores_working\compilation\spatialite\East_Kimberley_borehole_data.sqlite. Temporary working copy created.
|
scripts/plot_temperature_dependence.ipynb | ###Markdown
Get the data sorted: Temperature from environment DB
###Code
temp = pd.read_csv("/home/prokoph/CTA/ArrayClockSystem/WRS/MonitoringWRSS/weather_Jan13.csv",index_col=0, parse_dates=True)
print(temp.shape)
temp.tail(4)
###Output
(8109, 3)
###Markdown
PtPData (incl RTT) from telegraf DB
###Code
ptp = pd.read_csv("/home/prokoph/CTA/ArrayClockSystem/WRS/MonitoringWRSS/PtpData_Jan13.csv",index_col=0, parse_dates=True)
print(ptp.shape)
ptp.tail(4)
wrs2_ptp = ptp['agent_host'].map(lambda x: x == '192.168.4.32')
rtt2 = ptp[wrs2_ptp]
rtt2 = rtt2[np.isfinite(rtt2['wrsPtpRTT'])]
wrs3_ptp = ptp['agent_host'].map(lambda x: x == '192.168.4.33')
rtt3 = ptp[wrs3_ptp]
rtt3 = rtt3[np.isfinite(rtt3['wrsPtpRTT'])]
wrs4_ptp = ptp['agent_host'].map(lambda x: x == '192.168.4.34')
rtt4 = ptp[wrs4_ptp]
rtt4 = rtt4[np.isfinite(rtt4['wrsPtpRTT'])]
###Output
_____no_output_____
###Markdown
snmp fields from telegraf DB
###Code
snmp = pd.read_csv("/home/prokoph/CTA/ArrayClockSystem/WRS/MonitoringWRSS/snmp_Jan13.csv",index_col=0, parse_dates=True)
print(snmp.shape)
#snmp.tail(4)
snmp.head(4)
wrs1_snmp = snmp['agent_host'].map(lambda x: x == '192.168.4.31')
wrs2_snmp = snmp['agent_host'].map(lambda x: x == '192.168.4.32')
wrs3_snmp = snmp['agent_host'].map(lambda x: x == '192.168.4.33')
wrs4_snmp = snmp['agent_host'].map(lambda x: x == '192.168.4.34')
wrs5_snmp = snmp['agent_host'].map(lambda x: x == '192.168.4.35')
wrs6_snmp = snmp['agent_host'].map(lambda x: x == '192.168.4.165')
# make selection for one variable only (and remove all NaN to make plots look nicer)
cpu5_snmp = snmp[wrs5_snmp]
cpu5_snmp = cpu5_snmp[np.isfinite(cpu5_snmp['wrsCPULoadAvg15min'])]
cpu6_snmp = snmp[wrs6_snmp]
cpu6_snmp = cpu6_snmp[np.isfinite(cpu6_snmp['wrsCPULoadAvg15min'])]
###Output
_____no_output_____
###Markdown
Timing from oszi DB
###Code
oszi = pd.read_csv("/home/prokoph/CTA/ArrayClockSystem/WRS/MonitoringWRSS/timing_Jan13.csv",index_col=0, parse_dates=True)
print(oszi.shape)
oszi.tail(4)
wrs2_oszi = oszi['link'].map(lambda x: x == 'wrs1-wrs2')
wrs2_oszi = oszi[wrs2_oszi]
wrs3_oszi = oszi['link'].map(lambda x: x == 'wrs1-wrs3')
wrs3_oszi = oszi[wrs3_oszi]
wrs4_oszi = oszi['link'].map(lambda x: x == 'wrs1-wrs4')
wrs4_oszi = oszi[wrs4_oszi]
###Output
_____no_output_____
###Markdown
Get the plotting running
###Code
#temp.temperature.plot(figsize=(15,8))
#rtt3.wrsPtpRTT.rolling("1h").mean().plot()
wrs2_oszi['skew'].loc['2020-01-03':'2020-01-13'].plot()
fig = plt.figure(figsize=(15,10))
ax1 = fig.add_subplot(1, 1, 1)
ax2 = ax1.twinx()
#plt.ticklabel_format(style='sci', axis='y', scilimits=(0,0))
plt.grid(True)
ax1.plot(temp['temperature'],
alpha=0.3,color='red',label='Temperature')
ax1.plot(temp['temperature'].rolling("3h").mean(),
color='red',label='3h average')
ax1.set_ylabel('Temperature (Celsius)',fontsize='large')
ax1.set_title('Temperature dependence of a short optical fiber between two WR switches')
rtt3mean = rtt3['wrsPtpRTT'].mean()
ax2.plot(rtt3['wrsPtpRTT']-rtt3mean,
alpha=0.3,label='Round trip time')
ax2.plot((rtt3['wrsPtpRTT']-rtt3mean).rolling("3h").mean(),
color='blue',label='3h average')
ax2.set_ylabel('Round trip time difference (ps)')
text = ('removed mean of \n%i ps' % rtt3mean )
plt.gca().text(0.87, 0.04, text, transform=plt.gca().transAxes, color='blue')
# plot all labels from different axes into one legend
lines, labels = ax1.get_legend_handles_labels()
lines2, labels2 = ax2.get_legend_handles_labels()
ax2.legend(lines + lines2, labels + labels2, loc=0)
temploc = temp['temperature'].loc['2020-01-03':'2020-01-13']
rtt2loc = rtt2['wrsPtpRTT'].loc['2020-01-03':'2020-01-13']
rtt3loc = rtt3['wrsPtpRTT'].loc['2020-01-03':'2020-01-13']
rtt4loc = rtt4['wrsPtpRTT'].loc['2020-01-03':'2020-01-13']
skw2loc = rtt2['wrsPtpSkew'].loc['2020-01-03':'2020-01-13']
skw3loc = rtt3['wrsPtpSkew'].loc['2020-01-03':'2020-01-13']
skw4loc = rtt4['wrsPtpSkew'].loc['2020-01-03':'2020-01-13']
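# A sketch (not part of the recorded analysis) of relating the one-way delay directly to
# temperature: resample both series onto a common 1-minute grid and average the delay in
# temperature bins. The bin count and the variable names are illustrative.
t_1min = temploc.resample('1min').mean()
d_1min = ((rtt2loc - rtt2loc.mean()) / 2).resample('1min').mean()
combined = pd.concat([t_1min, d_1min], axis=1, keys=['temp', 'delay']).dropna()
delay_by_temp = combined.groupby(pd.cut(combined['temp'], bins=10))['delay'].mean()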
# Creates three subplots
f, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True, figsize=(10,8))
ax1.grid(), ax2.grid(), ax3.grid()
ax1.set_title('WR performance for a short optical fiber (obtained via WR software)')
ax1.plot(temploc,alpha=0.3,color='red',label='Temperature')
ax1.plot(temploc.rolling("1h").mean(),color='red',label='1h average')
ax1.set_ylabel('Temperature (Celsius)')
mymean = rtt2loc.mean()
ax2.plot(rtt2loc-mymean,alpha=0.3, color='blue')
ax2.plot((rtt2loc-mymean).rolling("2h").mean(),color='blue',label='2h average')
ax2.set_ylabel('Round trip time (ps)')
text = ('removed mean of %.1f ps' % mymean )
ax2.text(0.02, 0.87, text, transform=ax2.transAxes, color='blue')
ax3.plot(skw2loc,alpha=0.3, color='blue')
ax3.plot(skw2loc.rolling("2h").mean(),color='blue',label='2h average')
ax3.set_ylabel('PPS skew (ps)')
ax1.legend(), ax2.legend(), ax3.legend()
# make selection for one variable only (and remove all NaN to make plots look nicer)
fpga3_snmp = snmp[wrs3_snmp]
fpga3_snmp = fpga3_snmp[np.isfinite(fpga3_snmp['wrsTempFPGA'])]
tfp3loc = fpga3_snmp['wrsTempFPGA'].loc['2020-01-03':'2020-01-13']
#tfp3loc.plot()
fpga1_snmp = snmp[wrs1_snmp]
fpga1_snmp = fpga1_snmp[np.isfinite(fpga1_snmp['wrsTempFPGA'])]
tfp1loc = fpga1_snmp['wrsTempFPGA'].loc['2020-01-03':'2020-01-13']
tfp1loc.plot()
temploc = temp['temperature'].loc['2020-01-03':'2020-01-13']
osz2loc = wrs2_oszi['skew'].loc['2020-01-03':'2020-01-13']
osz3loc = wrs3_oszi['skew'].loc['2020-01-03':'2020-01-13']
osz4loc = wrs4_oszi['skew'].loc['2020-01-03':'2020-01-13']
rms2loc = wrs2_oszi['skewrms'].loc['2020-01-03':'2020-01-13']
rms3loc = wrs3_oszi['skewrms'].loc['2020-01-03':'2020-01-13']
rms4loc = wrs4_oszi['skewrms'].loc['2020-01-03':'2020-01-13']
# Creates four subplots
f, (ax1, ax2, ax22, ax3) = plt.subplots(4, 1, sharex=True, figsize=(10,10))
ax1.grid(), ax2.grid(), ax3.grid()
ax22.grid()
ax1.set_title('WR performance for a short optical fiber (obtained via scope measurements)')
ax1.plot(temploc,alpha=0.3,color='red',label='Temperature')
ax1.plot(temploc.rolling("1h").mean(),color='red',label='1h average')
ax1.set_ylabel('Temperature (Celsius)')
mymean = osz3loc.mean()
ax2.plot(osz3loc-mymean,alpha=0.3, color='green')
ax2.plot((osz3loc-mymean).rolling("2h").mean(),color='green',label='2h average')
text = ('removed mean of %.1f ps' % mymean )
ax2.text(0.02, 0.87, text, transform=ax2.transAxes, color='green')
ax2.set_ylabel('PPS Skew (ps)')
ax22.plot(tfp1loc,alpha=0.3, label='WRS master', color='yellow')
ax22.plot(tfp1loc.rolling("2h").mean(),color='yellow',label='2h average')
ax22.plot(tfp3loc,alpha=0.3, color='orange', label='WRS slave')
ax22.plot(tfp3loc.rolling("2h").mean(),color='orange',label='2h average')
#text = ('removed mean of %.1f ps' % mymean )
#ax2.text(0.02, 0.87, text, transform=ax2.transAxes, color='green')
ax22.set_ylabel('FPGA temperature')
ax22.legend()
ax3.plot(rms2loc,alpha=0.3, color='green')
ax3.plot(rms2loc.rolling("2h").mean(),color='green',label='2h average')
ax3.set_ylabel('RMS of PPS skew (ps)')
ax1.legend(), ax2.legend(), ax3.legend()
fig = plt.figure()
# TODO: confirm that the software RTT is twice the one-way delay we measure with the scope
# (hence the division by 2 below)... but even then it's not matching expectations
mymean = rtt3loc.mean()
myval = (rtt3loc-mymean)/2
ax = myval.hist(alpha=0.3, color='blue', bins=100, label='Measured in software')
(osz3loc-osz3loc.mean()).hist(alpha=0.3, color='green', bins=50, label='Measured with scope', ax=ax)
mydiff = myval.max() - myval.min()
text = ('mean = %.1f ps\nsigma = %.1f ps\nMTIE = %0.1f ps' % (myval.mean(),myval.std(), mydiff) )
myval = osz3loc-osz3loc.mean()
mydiff = myval.max() - myval.min()
text2 = ('mean = %.1f ps\nsigma = %.1f ps\nMTIE = %0.1f ps' % (myval.mean(),myval.std(), mydiff) )
plt.gca().text(0.05, 0.8, text, color='blue', transform=plt.gca().transAxes)
plt.gca().text(0.05, 0.6, text2, transform=plt.gca().transAxes, color='green')
plt.legend()
fig = plt.figure()
mymean = rtt4loc.mean()
myval = (rtt4loc-mymean)/2
ax = myval.hist(alpha=0.3, color='blue', bins=100, label='Measured in software')
(osz4loc-osz4loc.mean()).hist(alpha=0.3, color='green', bins=50, label='Measured with scope', ax=ax)
#osz2loc.hist(alpha=0.3, color='green', bins=50, label='Measured with scope', ax=ax)
mydiff = myval.max() - myval.min()
text = ('mean = %.1f ps\nsigma = %.1f ps\nMTIE = %0.1f ps' % (myval.mean(),myval.std(), mydiff) )
myval = osz4loc-osz4loc.mean()
mydiff = myval.max() - myval.min()
text2 = ('mean = %.1f ps\nsigma = %.1f ps\nMTIE = %0.1f ps' % (myval.mean(),myval.std(), mydiff) )
plt.gca().text(0.05, 0.8, text, color='blue', transform=plt.gca().transAxes)
plt.gca().text(0.05, 0.6, text2, transform=plt.gca().transAxes, color='green')
plt.legend()
# Creates four subplots
f, (ax1, ax2, ax3, ax4) = plt.subplots(4, 1, sharex=True, figsize=(12,10))
ax1.grid(), ax2.grid(), ax3.grid(), ax4.grid()
ax1.set_title('WR link performance for a long optical fiber')
ax1.plot(temploc,alpha=0.3,color='red',label='Temperature')
ax1.plot(temploc.rolling("1h").mean(),color='red',label='1h average')
ax1.set_ylabel('Temperature (Celsius)')
ax11 = ax1.twinx()
ax11.plot(rtt2loc,alpha=0.2, color='blue')
ax11.plot(rtt2loc.rolling("2h").mean(),color='blue',label='2h average')
ax11.set_ylabel('Round trip time (ps)')
ax2.plot(skw2loc,alpha=0.3, color='blue')
ax2.plot(skw2loc.rolling("2h").mean(),color='blue',label='2h average')
ax2.set_ylabel('PPS skew (ps)')
ax3.plot(osz2loc,alpha=0.3, color='green')
ax3.plot(osz2loc.rolling("2h").mean(),color='green',label='2h average')
ax3.set_ylabel('PPS Skew (ps)')
ax4.plot(rms2loc,alpha=0.3, color='green')
ax4.plot(rms2loc.rolling("2h").mean(),color='green',label='2h average')
ax4.set_ylabel('RMS of PPS skew (ps)')
# plot all labels from different axes into one legend
lines, labels = ax1.get_legend_handles_labels()
lines2, labels2 = ax11.get_legend_handles_labels()
ax11.legend(lines + lines2, labels + labels2, loc=0)
ax2.legend(), ax3.legend(), ax4.legend()
###Output
_____no_output_____
###Markdown
Group by constant temperature
###Code
temploc = temp['temperature'].loc['2020-01-06 14':'2020-01-06 22']
rtt2loc = rtt2['wrsPtpRTT'].loc['2020-01-06 14':'2020-01-06 22']
rtt3loc = rtt3['wrsPtpRTT'].loc['2020-01-06 14':'2020-01-06 22']
rtt4loc = rtt4['wrsPtpRTT'].loc['2020-01-06 14':'2020-01-06 22']
skw2loc = rtt2['wrsPtpSkew'].loc['2020-01-06 14':'2020-01-06 22']
skw3loc = rtt3['wrsPtpSkew'].loc['2020-01-06 14':'2020-01-06 22']
skw4loc = rtt4['wrsPtpSkew'].loc['2020-01-06 14':'2020-01-06 22']
osz2loc = wrs2_oszi['skew'].loc['2020-01-06 14':'2020-01-06 22']
osz3loc = wrs3_oszi['skew'].loc['2020-01-06 14':'2020-01-06 22']
osz4loc = wrs4_oszi['skew'].loc['2020-01-06 14':'2020-01-06 22']
rms2loc = wrs2_oszi['skewrms'].loc['2020-01-06 14':'2020-01-06 22']
rms3loc = wrs3_oszi['skewrms'].loc['2020-01-06 14':'2020-01-06 22']
rms4loc = wrs4_oszi['skewrms'].loc['2020-01-06 14':'2020-01-06 22']
#temploc.plot()
temploc.mean(), temploc.std()
((rtt3loc-rtt3loc.mean())/2).plot()
# Creates three subplots
f, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True, figsize=(10,8))
ax1.grid(), ax2.grid(), ax3.grid()
ax1.set_title('WR performance for a long optical fiber (ambient temperature variations)')
ax1.plot(temploc,alpha=0.3,color='red',label='Temperature')
ax1.plot(temploc.rolling("1h").mean(),color='red',label='1h average')
ax1.set_ylabel('Temperature (Celsius)')
mymean = rtt2loc.mean()
ax2.plot(rtt2loc-mymean,alpha=0.3, color='blue')
ax2.plot((rtt2loc-mymean).rolling("1h").mean(),color='blue',label='1h average')
ax2.set_ylabel('Round trip time (ps)')
text = ('removed mean of %.1f ps' % mymean )
ax2.text(0.02, 0.07, text, transform=ax2.transAxes, color='blue')
mymean = osz2loc.mean()
ax3.plot(osz2loc-mymean,alpha=0.3, color='green')
ax3.plot((osz2loc-mymean).rolling("1h").mean(),color='green',label='1h average')
ax3.set_ylabel('PPS Skew (ps)')
text = ('removed mean of %.1f ps' % mymean )
ax3.text(0.02, 0.87, text, transform=ax3.transAxes, color='green')
ax1.legend(), ax2.legend(), ax3.legend()
fig = plt.figure()
mymean = rtt2loc.mean()
myval = (rtt2loc-mymean)/2
ax = myval.hist(alpha=0.3, color='blue', bins=100, label='Measured in software\n(half round trip time)')
(osz2loc-osz2loc.mean()).hist(alpha=0.8, color='green', bins=20, label='Measured with scope\n(PPS skew)', ax=ax)
#osz2loc.hist(alpha=0.3, color='green', bins=50, label='Measured with scope', ax=ax)
mydiff = myval.max() - myval.min()
text = ('mean = %.1f ps\nsigma = %.1f ps\nMTIE = %0.1f ps' % (myval.mean(),myval.std(), mydiff) )
myval = osz2loc-osz2loc.mean()
mydiff = myval.max() - myval.min()
text2 = ('mean = %.1f ps\nsigma = %.1f ps\nMTIE = %0.1f ps' % (myval.mean(),myval.std(), mydiff) )
plt.gca().text(0.05, 0.8, text, color='blue', transform=plt.gca().transAxes)
plt.gca().text(0.05, 0.6, text2, transform=plt.gca().transAxes, color='green')
plt.title('Time jitter for a long link (temperature variable)')
plt.legend()
fig = plt.figure(figsize=(15,8))
plt.grid(True)
ax1 = fig.add_subplot(1, 1, 1)
ax1.plot(rtt4.wrsPtpSkew,alpha=0.3,label='PTP skew')
ax1.plot(rtt4.wrsPtpSkew.rolling("3h").mean(), color='blue',label='3h average')
ax1.set_ylabel('PPS skew (ps)')
#ax3.plot(rtt4.wrsPtpRTT,alpha=0.2,color='orange',label='Round Trip Time (RTT)')
#ax3.plot(rtt4.wrsPtpRTT.rolling("3h").mean(),color='green',label='3h RTT average')
#ax2.set_ylabel('Uncalibrated Round Trip Time (ps)')
# plot all labels from different axes into one legend
#lines, labels = ax1.get_legend_handles_labels()
#lines2, labels2 = ax2.get_legend_handles_labels()
#ax2.legend(lines + lines2, labels + labels2, loc=0)
# NOTE: df, wrs5/wrs6 and cpu5/cpu6 refer to an earlier snmp export (with a 'cpu15min'
# column) that is not created in the cells above; only the snmp[...] selections are
# defined there.
ax = df[wrs5].hist(column='cpu15min',bins=24)
df[wrs6].hist(column='cpu15min',color='green',bins=5,ax=ax)
print(cpu5.cpu15min.mean(),cpu5.cpu15min.median())
print(cpu6.cpu15min.mean(),cpu6.cpu15min.median())
df[["cpu15min"]].rolling("1h").median().plot()
###Output
_____no_output_____
###Markdown
Plot with cuts on time range
###Code
# remember that during 2020-01-07 we switched from 1 snmp call to ~13 snmp calls
ax = df[wrs5].loc['2020-01-05':'2020-01-05'].hist(column='cpu15min',bins=30, range=(0,30), alpha=0.5, label='one snmp call (2020-01-05)')
df[wrs5].loc['2020-01-08':'2020-01-08'].hist(column='cpu15min',bins=30, range=(0,30), ax=ax, alpha=0.5, label='10 snmp calls (2020-01-08)')
snmp[wrs5_snmp].loc['2020-01-13':'2020-01-13'].hist(column='wrsCPULoadAvg15min',bins=30, range=(0,30), ax=ax, alpha=0.5, label='one telegraf call (2020-01-13)')
plt.title('15min average CPU load (v5.0.1)')
plt.legend()
#histtype='step'
ax = df[wrs6].loc['2020-01-05':'2020-01-05'].hist(column='cpu15min',bins=30, range=(0,30), alpha=0.5, label='one snmp call (2020-01-05)')
df[wrs6].loc['2020-01-08':'2020-01-08'].hist(column='cpu15min',bins=30, range=(0,30), ax=ax, alpha=0.5, label='10 snmp calls (2020-01-08)')
snmp[wrs6_snmp].loc['2020-01-13':'2020-01-13'].hist(column='wrsCPULoadAvg15min',bins=30, range=(0,30), ax=ax, alpha=0.5, label='one telegraf call (2020-01-13)')
# get current axis to draw some text in
#plt.gca().text(0.1, 0.9, "test", transform=plt.gca().transAxes)
plt.title('15min average CPU load (v4.2)')
plt.legend()
###Output
_____no_output_____ |
M5 Pridictive Modeling/M5 W3 Linear Dicriminant Analytics LDA/PM Week-1 Practice Exercise LR -1 Student File.ipynb | ###Markdown
Practice Exercise Linear Regression We will be using the Boston house price dataset for this exercise. This dataset is built into the scikit-learn library, but for this exercise we have already downloaded it as a csv file. **Importing Libraries**
###Code
from sklearn.datasets import load_boston
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.api as sm
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from statsmodels.stats.outliers_influence import variance_inflation_factor
import math
###Output
_____no_output_____
###Markdown
**Load the dataset**
###Code
df = pd.read_csv("Boston.csv")
df.head()
###Output
_____no_output_____
###Markdown
**Check the data description**
###Code
boston = load_boston()
print(boston.DESCR)
###Output
.. _boston_dataset:
Boston house prices dataset
---------------------------
**Data Set Characteristics:**
:Number of Instances: 506
:Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.
:Attribute Information (in order):
- CRIM per capita crime rate by town
- ZN proportion of residential land zoned for lots over 25,000 sq.ft.
- INDUS proportion of non-retail business acres per town
- CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
- NOX nitric oxides concentration (parts per 10 million)
- RM average number of rooms per dwelling
- AGE proportion of owner-occupied units built prior to 1940
- DIS weighted distances to five Boston employment centres
- RAD index of accessibility to radial highways
- TAX full-value property-tax rate per $10,000
- PTRATIO pupil-teacher ratio by town
- B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
- LSTAT % lower status of the population
- MEDV Median value of owner-occupied homes in $1000's
:Missing Attribute Values: None
:Creator: Harrison, D. and Rubinfeld, D.L.
This is a copy of UCI ML housing dataset.
https://archive.ics.uci.edu/ml/machine-learning-databases/housing/
This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University.
The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic
prices and the demand for clean air', J. Environ. Economics & Management,
vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics
...', Wiley, 1980. N.B. Various transformations are used in the table on
pages 244-261 of the latter.
The Boston house-price data has been used in many machine learning papers that address regression
problems.
.. topic:: References
- Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261.
- Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.
###Markdown
**Check the shape of the dataset**
###Code
df.shape
###Output
_____no_output_____
###Markdown
**Get the info data types column wise**
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 506 entries, 0 to 505
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 CRIM 506 non-null float64
1 ZN 506 non-null float64
2 INDUS 506 non-null float64
3 CHAS 506 non-null int64
4 NOX 506 non-null float64
5 RM 506 non-null float64
6 AGE 506 non-null float64
7 DIS 506 non-null float64
8 RAD 506 non-null int64
9 TAX 506 non-null int64
10 PTRATIO 506 non-null float64
11 B 506 non-null float64
12 LSTAT 506 non-null float64
13 MEDV 506 non-null float64
dtypes: float64(11), int64(3)
memory usage: 55.5 KB
###Markdown
**Get the summary statistics of the dataset**
###Code
df.describe()
###Output
_____no_output_____
###Markdown
**Get the Correlation Heatmap**
###Code
plt.figure(figsize=(12,8))
sns.heatmap(df.iloc[:,0:13].corr(),annot=True,fmt='.2f',cmap='rainbow',mask=np.triu(df.iloc[:,0:13].corr(),+1))
plt.show()
###Output
_____no_output_____
###Markdown
**Split the dataset**
###Code
X = df.iloc[:,0:13]
X.head()
Y = df['MEDV']
Y.head()
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.30 , random_state=1)
###Output
_____no_output_____
###Markdown
Using Statsmodels OLS
###Code
# This adds the constant term beta0 (the intercept) to the linear regression.
X_con=sm.add_constant(X)
X_trainc, X_testc, y_trainc, y_testc = train_test_split(X_con, Y, test_size=0.30 , random_state=1)
###Output
_____no_output_____
###Markdown
**Make the linear model using OLS**
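As a reminder, OLS picks the coefficients that minimise the residual sum of squares; in matrix form the estimate is $\hat{\beta} = (X^{T}X)^{-1}X^{T}y$, and `sm.OLS(...).fit()` below returns these estimates together with their standard errors and the usual diagnostics.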
###Code
model = sm.OLS(y_trainc,X_trainc).fit()
model.summary()
###Output
_____no_output_____
###Markdown
**Get the value of coefficient of determination**
###Code
print('The variation in the dependent variable which is explained by the independent variables is',round(model.rsquared*100,4),'%')
###Output
The variation in the dependent variable which is explained by the independent variables is 71.0388 %
###Markdown
**Get the Predictions on test set**
###Code
ypred = model.predict(X_testc)
print(ypred)
###Output
307 32.391465
343 27.944013
47 17.837628
67 21.669414
362 18.936396
...
467 17.329959
95 28.360234
122 20.794228
260 33.698157
23 13.518827
Length: 152, dtype: float64
###Markdown
**Calculate MSE for training set** **Get the RMSE on training set**
###Code
print("The Root Mean Square Error (RMSE) of the model is for the training set is",mean_squared_error(model.fittedvalues,y_trainc,squared=False))
###Output
The Root Mean Square Error (RMSE) of the model for the training set is 4.849055005805464
###Markdown
**Get the RMSE on test set**
###Code
## Calculating the RMSE values with the code shown in the videos
math.sqrt(np.mean((model.predict(X_trainc)-y_trainc)**2))
model.predict(X_trainc)
print("The Root Mean Square Error (RMSE) of the model is for testing set is",np.sqrt(mean_squared_error(y_test,ypred)))
###Output
The Root Mean Square Error (RMSE) of the model for the testing set is 4.45323743719813
###Markdown
Using the Linear Model from the scikit-learn library **Fit the model to the training set**
###Code
regression_model = LinearRegression()
regression_model.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
**Get the score on training set**
###Code
print('The coefficient of determination R^2 of the prediction on Train set',regression_model.score(X_train, y_train))
###Output
The coefficient of determination R^2 of the prediction on Train set 0.7103879080674731
###Markdown
**Get the score on test set**
###Code
print('The coefficient of determination R^2 of the prediction on Test set',regression_model.score(X_test, y_test))
###Output
The coefficient of determination R^2 of the prediction on Test set 0.7836295385076292
###Markdown
**Get the RMSE on test set**
###Code
print("The Root Mean Square Error (RMSE) of the model is for testing set is",np.sqrt(mean_squared_error(y_test,regression_model.predict(X_test))))
###Output
The Root Mean Square Error (RMSE) of the model for the testing set is 4.453237437198149
###Markdown
**Check Multi-collinearity using VIF**
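For reference, the variance inflation factor of predictor $i$ is $VIF_i = \frac{1}{1-R_i^{2}}$, where $R_i^{2}$ comes from regressing predictor $i$ on all the other predictors; values above roughly 5-10 are commonly read as a sign of strong multi-collinearity.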
###Code
vif = [variance_inflation_factor(X.values, ix) for ix in range(X.shape[1])]
for column, v in zip(X.columns, vif):
    print(column, "--->", v)
###Output
CRIM ---> 2.1003728199615233
ZN ---> 2.8440132669462628
INDUS ---> 14.485757706539308
CHAS ---> 1.1529518589418775
NOX ---> 73.89494652814788
RM ---> 77.94828304638538
AGE ---> 21.38685048994314
DIS ---> 14.6996523837492
RAD ---> 15.167724857920897
TAX ---> 61.227274009649456
PTRATIO ---> 85.02954731061801
B ---> 20.104942636229136
LSTAT ---> 11.102024772203539
|
Imagenet Image Classification using Keras/Fine_Tunning.ipynb | ###Markdown
EXPERIMENT 6: Applying Fine Tuning of the DenseNet201 network to the CIFAR-10 dataset. FINE TUNING: The task of fine-tuning a network is to tweak the parameters of an already trained network so that it adapts to the new task at hand. The initial layers learn very general features and as we go higher up the network, the layers tend to learn patterns more specific to the task it is being trained on. Thus, for fine-tuning, we keep the initial layers intact and retrain the later layers of the model (a minimal sketch of this layer-freezing approach is included below, right after the pre-trained DenseNet201 base is loaded). Dataset: CIFAR-10. The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class. http://www.cs.utoronto.ca/~kriz/cifar.html Importing Libraries & Loading the Dataset
###Code
import numpy as np
import matplotlib.pyplot as plt
from keras.datasets import cifar10
import keras
#Load the dataset:
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
print("There are {} train images and {} test images.".format(X_train.shape[0], X_test.shape[0]))
print('There are {} unique classes to predict.'.format(np.unique(y_train).shape[0]))
###Output
There are 50000 train images and 10000 test images.
There are 10 unique classes to predict.
###Markdown
One-hot encoding the labels
###Code
num_classes = 10
from keras.utils import np_utils
y_train = np_utils.to_categorical(y_train, num_classes)
y_test = np_utils.to_categorical(y_test, num_classes)
###Output
_____no_output_____
###Markdown
Visualizing & Displaying the first eight images in the training data
###Code
fig = plt.figure(figsize=(10, 10))
for i in range(1, 9):
img = X_train[i-1]
fig.add_subplot(2, 4, i)
plt.imshow(img)
###Output
_____no_output_____
###Markdown
Building up a Sequential model
###Code
#Importing the necessary libraries
from keras.models import Sequential
from keras.layers import Dense, Conv2D, MaxPooling2D
from keras.layers import Dropout, Flatten, GlobalAveragePooling2D
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu',input_shape = X_train.shape[1:]))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(GlobalAveragePooling2D())
model.add(Dense(10, activation='softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_13 (Conv2D) (None, 30, 30, 32) 896
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 15, 15, 32) 0
_________________________________________________________________
conv2d_14 (Conv2D) (None, 13, 13, 32) 9248
_________________________________________________________________
max_pooling2d_9 (MaxPooling2 (None, 6, 6, 32) 0
_________________________________________________________________
conv2d_15 (Conv2D) (None, 4, 4, 64) 18496
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 2, 2, 64) 0
_________________________________________________________________
global_average_pooling2d_2 ( (None, 64) 0
_________________________________________________________________
dense_5 (Dense) (None, 10) 650
=================================================================
Total params: 29,290
Trainable params: 29,290
Non-trainable params: 0
_________________________________________________________________
###Markdown
Compiling the Model
###Code
# NOTE: with 10 one-hot encoded classes, 'categorical_crossentropy' is the appropriate loss;
# 'binary_crossentropy' treats each of the 10 outputs as an independent binary problem,
# which inflates the reported accuracy (compare with the categorical run further below).
model.compile(loss='binary_crossentropy', optimizer='adam',
              metrics=['accuracy'])
# NOTE: these rescaled copies are not used in the fit below, which runs on the raw 0-255 images.
X_train_scratch = X_train/255.
X_test_scratch = X_test/255.
###Output
_____no_output_____
###Markdown
Creating a checkpointer A checkpointer is used to save the weights of the best model (i.e. the model with minimum loss).
###Code
checkpointer =keras.callbacks.ModelCheckpoint(filepath='scratchmodel.best.hdf5',
verbose=1,save_best_only=True)
# keras.callbacks.ModelCheckpoint(filepath ='scratchmodel.best.hdf5', monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1)
#Fitting the model on the train data and labels.
model.fit(X_train, y_train, batch_size=32, epochs=10,
verbose=1, callbacks=[checkpointer], validation_split=0.2, shuffle=True)
###Output
Train on 40000 samples, validate on 10000 samples
Epoch 1/10
40000/40000 [==============================] - 17s 416us/step - loss: 0.4729 - acc: 0.8898 - val_loss: 0.2475 - val_acc: 0.9060
Epoch 00001: val_loss improved from inf to 0.24749, saving model to scratchmodel.best.hdf5
Epoch 2/10
40000/40000 [==============================] - 12s 293us/step - loss: 0.2258 - acc: 0.9131 - val_loss: 0.2192 - val_acc: 0.9146
Epoch 00002: val_loss improved from 0.24749 to 0.21916, saving model to scratchmodel.best.hdf5
Epoch 3/10
40000/40000 [==============================] - 12s 293us/step - loss: 0.2070 - acc: 0.9199 - val_loss: 0.1976 - val_acc: 0.9233
Epoch 00003: val_loss improved from 0.21916 to 0.19756, saving model to scratchmodel.best.hdf5
Epoch 4/10
40000/40000 [==============================] - 12s 299us/step - loss: 0.1922 - acc: 0.9253 - val_loss: 0.1915 - val_acc: 0.9256
Epoch 00004: val_loss improved from 0.19756 to 0.19146, saving model to scratchmodel.best.hdf5
Epoch 5/10
40000/40000 [==============================] - 12s 293us/step - loss: 0.1799 - acc: 0.9303 - val_loss: 0.1863 - val_acc: 0.9281
Epoch 00005: val_loss improved from 0.19146 to 0.18632, saving model to scratchmodel.best.hdf5
Epoch 6/10
40000/40000 [==============================] - 12s 295us/step - loss: 0.1716 - acc: 0.9336 - val_loss: 0.1837 - val_acc: 0.9289
Epoch 00006: val_loss improved from 0.18632 to 0.18371, saving model to scratchmodel.best.hdf5
Epoch 7/10
40000/40000 [==============================] - 12s 293us/step - loss: 0.1644 - acc: 0.9363 - val_loss: 0.1749 - val_acc: 0.9327
Epoch 00007: val_loss improved from 0.18371 to 0.17486, saving model to scratchmodel.best.hdf5
Epoch 8/10
40000/40000 [==============================] - 12s 293us/step - loss: 0.1572 - acc: 0.9393 - val_loss: 0.1813 - val_acc: 0.9311
Epoch 00008: val_loss did not improve from 0.17486
Epoch 9/10
40000/40000 [==============================] - 12s 296us/step - loss: 0.1520 - acc: 0.9413 - val_loss: 0.1723 - val_acc: 0.9343
Epoch 00009: val_loss improved from 0.17486 to 0.17232, saving model to scratchmodel.best.hdf5
Epoch 10/10
40000/40000 [==============================] - 12s 294us/step - loss: 0.1477 - acc: 0.9429 - val_loss: 0.1675 - val_acc: 0.9364
Epoch 00010: val_loss improved from 0.17232 to 0.16753, saving model to scratchmodel.best.hdf5
###Markdown
Evaluating the model on the test data & Printing the accuracy
###Code
score = model.evaluate(X_test, y_test)
#Accuracy on test data
print('Accuracy on the Test Images: ', score[1])
# from keras_applications import densenet
# from keras.applications.imagenet_utils import preprocess_input as _preprocess_input
# from keras.applications import DenseNet201
# from skimage import data, io, filters, transform
#Importing the Densenet201 model
from keras.applications.densenet import DenseNet201, preprocess_input
#Loading the Densenet201 model with pre-trained ImageNet weights
model = DenseNet201(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
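# A minimal sketch of the layer-freezing fine-tuning approach described in the
# introduction; it is not used in the recorded run below (which extracts bottleneck
# features and trains a separate CNN), and the number of unfrozen layers is illustrative.
from keras.models import Model
base = model                          # the pre-trained DenseNet201 base loaded above
for layer in base.layers[:-20]:       # keep the initial, generic layers frozen
    layer.trainable = False
for layer in base.layers[-20:]:       # let the last layers adapt to CIFAR-10
    layer.trainable = True
x = GlobalAveragePooling2D()(base.output)
out = Dense(num_classes, activation='softmax')(x)
ft_model = Model(inputs=base.input, outputs=out)
ft_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# ft_model.fit(preprocess_input(X_train.astype('float32')), y_train,
#              batch_size=32, epochs=10, validation_split=0.2)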
# NOTE: this helper is not called anywhere in this notebook and depends on the generic
# DenseNet builder from keras_applications (see the commented-out imports above); calling
# it as-is would raise a NameError because DenseNet is not imported.
def DenseNetImageNet201(input_shape=None,
bottleneck=True,
reduction=0.5,
dropout_rate=0.0,
weight_decay=1e-4,
include_top=True,
weights=None,
input_tensor=None,
pooling=None,
classes=1000,
activation='softmax'):
return DenseNet(input_shape, depth=201, nb_dense_block=4, growth_rate=32,
nb_filter=64, nb_layers_per_block=[6, 12, 48, 32],
bottleneck=bottleneck, reduction=reduction,
dropout_rate=dropout_rate, weight_decay=weight_decay,
subsample_initial_block=True, include_top=include_top,
weights=weights, input_tensor=input_tensor,
pooling=pooling, classes=classes, activation=activation)
###Output
_____no_output_____
###Markdown
Reshaping & Preprocessing the training data
###Code
# from keras.applications.imagenet_utils import preprocess_input as _preprocess_input
import numpy as np
# skimage and scipy.misc.imresize are not used below; imresize was removed in SciPy 1.3,
# so its import is commented out here to keep the cell runnable on newer SciPy versions.
# import skimage
# from scipy.misc import imresize
#Preprocessing the data, so that it can be fed to the pre-trained DenseNet201 model.
densenet_train_input = preprocess_input(X_train)
#Creating bottleneck features for the training data
train_features = model.predict(densenet_train_input)
#Saving the bottleneck features
np.savez('densenet_train_input', features=train_features)
###Output
_____no_output_____
###Markdown
Reshaping & Preprocessing the testing data
###Code
#Preprocessing the data, so that it can be fed to the pre-trained DenseNet201 model.
densenet_test_input = preprocess_input(X_test)
#Creating bottleneck features for the testing data
test_features = model.predict(densenet_test_input)
#Saving the bottleneck features
np.savez('densenet_features_test', features=test_features)
#print(X_train.shape[1:])
print(X_train.shape[1:])
from keras.callbacks import LearningRateScheduler
from keras import regularizers
from keras.layers import Dense, Activation, Flatten, Dropout, BatchNormalization
def lr_schedule(epoch):
lrate = 0.001
if epoch > 75:
lrate = 0.0005
if epoch > 100:
lrate = 0.0003
return lrate
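# lr_schedule above is not attached to any training run in this notebook; it would be
# used by passing a LearningRateScheduler callback to model.fit, for example:
lr_callback = LearningRateScheduler(lr_schedule)
# model.fit(..., callbacks=[checkpointer, lr_callback])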
###Output
_____no_output_____
###Markdown
Fine tunning the model
###Code
weight_decay = 1e-4
model = Sequential()
model.add(Conv2D(32, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay), input_shape=X_train.shape[1:]))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(32, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(64, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.3))
model.add(Conv2D(128, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(Conv2D(128, (3,3), padding='same', kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('elu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_7 (Conv2D) (None, 32, 32, 32) 896
_________________________________________________________________
activation_1 (Activation) (None, 32, 32, 32) 0
_________________________________________________________________
batch_normalization_1 (Batch (None, 32, 32, 32) 128
_________________________________________________________________
conv2d_8 (Conv2D) (None, 32, 32, 32) 9248
_________________________________________________________________
activation_2 (Activation) (None, 32, 32, 32) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 32, 32, 32) 128
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 16, 16, 32) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 16, 16, 32) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 16, 16, 64) 18496
_________________________________________________________________
activation_3 (Activation) (None, 16, 16, 64) 0
_________________________________________________________________
batch_normalization_3 (Batch (None, 16, 16, 64) 256
_________________________________________________________________
conv2d_10 (Conv2D) (None, 16, 16, 64) 36928
_________________________________________________________________
activation_4 (Activation) (None, 16, 16, 64) 0
_________________________________________________________________
batch_normalization_4 (Batch (None, 16, 16, 64) 256
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 8, 8, 64) 0
_________________________________________________________________
dropout_3 (Dropout) (None, 8, 8, 64) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 8, 8, 128) 73856
_________________________________________________________________
activation_5 (Activation) (None, 8, 8, 128) 0
_________________________________________________________________
batch_normalization_5 (Batch (None, 8, 8, 128) 512
_________________________________________________________________
conv2d_12 (Conv2D) (None, 8, 8, 128) 147584
_________________________________________________________________
activation_6 (Activation) (None, 8, 8, 128) 0
_________________________________________________________________
batch_normalization_6 (Batch (None, 8, 8, 128) 512
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 4, 4, 128) 0
_________________________________________________________________
dropout_4 (Dropout) (None, 4, 4, 128) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 2048) 0
_________________________________________________________________
dense_4 (Dense) (None, 10) 20490
=================================================================
Total params: 309,290
Trainable params: 308,394
Non-trainable params: 896
_________________________________________________________________
###Markdown
Compiling the Model with DenseNet201
###Code
model.compile(loss='categorical_crossentropy', optimizer='adam',
metrics=['accuracy'])
#Creating a checkpointer
checkpointer =keras.callbacks.ModelCheckpoint(filepath='scratchmodel.best.hdf5',
verbose=1,save_best_only=True)
#Fitting the model on the train data and labels.
model.fit(X_train, y_train, batch_size=32, epochs=10,
verbose=1, callbacks=[checkpointer], validation_split=0.2, shuffle=True)
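# Note: the lr_schedule function defined above is not wired into this fit call; one
# possible way to use it (not part of the original run) would be:
# lr_scheduler = LearningRateScheduler(lr_schedule)
# model.fit(X_train, y_train, batch_size=32, epochs=10, verbose=1,
#           callbacks=[checkpointer, lr_scheduler], validation_split=0.2, shuffle=True)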
###Output
Train on 40000 samples, validate on 10000 samples
Epoch 1/10
40000/40000 [==============================] - 50s 1ms/step - loss: 1.7532 - acc: 0.4640 - val_loss: 1.2133 - val_acc: 0.5905
Epoch 00001: val_loss improved from inf to 1.21334, saving model to scratchmodel.best.hdf5
Epoch 2/10
40000/40000 [==============================] - 46s 1ms/step - loss: 1.1185 - acc: 0.6364 - val_loss: 0.9951 - val_acc: 0.6829
Epoch 00002: val_loss improved from 1.21334 to 0.99510, saving model to scratchmodel.best.hdf5
Epoch 3/10
40000/40000 [==============================] - 46s 1ms/step - loss: 0.9460 - acc: 0.6967 - val_loss: 0.9960 - val_acc: 0.6833
Epoch 00003: val_loss did not improve from 0.99510
Epoch 4/10
40000/40000 [==============================] - 46s 1ms/step - loss: 0.8454 - acc: 0.7349 - val_loss: 0.9138 - val_acc: 0.7154
Epoch 00004: val_loss improved from 0.99510 to 0.91378, saving model to scratchmodel.best.hdf5
Epoch 5/10
40000/40000 [==============================] - 46s 1ms/step - loss: 0.7860 - acc: 0.7595 - val_loss: 0.7532 - val_acc: 0.7730
Epoch 00005: val_loss improved from 0.91378 to 0.75317, saving model to scratchmodel.best.hdf5
Epoch 6/10
40000/40000 [==============================] - 46s 1ms/step - loss: 0.7458 - acc: 0.7799 - val_loss: 0.8630 - val_acc: 0.7487
Epoch 00006: val_loss did not improve from 0.75317
Epoch 7/10
40000/40000 [==============================] - 46s 1ms/step - loss: 0.7088 - acc: 0.7962 - val_loss: 0.7519 - val_acc: 0.7874
Epoch 00007: val_loss improved from 0.75317 to 0.75186, saving model to scratchmodel.best.hdf5
Epoch 8/10
40000/40000 [==============================] - 46s 1ms/step - loss: 0.6916 - acc: 0.8077 - val_loss: 0.7736 - val_acc: 0.7912
Epoch 00008: val_loss did not improve from 0.75186
Epoch 9/10
40000/40000 [==============================] - 46s 1ms/step - loss: 0.6617 - acc: 0.8221 - val_loss: 0.7039 - val_acc: 0.8135
Epoch 00009: val_loss improved from 0.75186 to 0.70392, saving model to scratchmodel.best.hdf5
Epoch 10/10
40000/40000 [==============================] - 46s 1ms/step - loss: 0.6482 - acc: 0.8304 - val_loss: 0.7480 - val_acc: 0.8097
Epoch 00010: val_loss did not improve from 0.70392
###Markdown
Evaluating the model on the test data & printing the accuracy using DenseNet201
###Code
scores = model.evaluate(X_test, y_test, verbose=1)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
###Output
10000/10000 [==============================] - 3s 283us/step
Test loss: 0.76271309633255
Test accuracy: 0.805
|
02-cloud-datawarehouses/02-redshift/L3 Exercise 3 - Parallel ETL - Solution.ipynb | ###Markdown
Exercise 3: Parallel ETL
###Code
%load_ext sql
from time import time
import configparser
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
STEP 1: Get the params of the created redshift cluster - We need: - The redshift cluster endpoint - The IAM role ARN that gives Redshift access to read from S3
###Code
config = configparser.ConfigParser()
config.read_file(open('dwh.cfg'))
KEY=config.get('AWS','key')
SECRET= config.get('AWS','secret')
DWH_DB= config.get("DWH","DWH_DB")
DWH_DB_USER= config.get("DWH","DWH_DB_USER")
DWH_DB_PASSWORD= config.get("DWH","DWH_DB_PASSWORD")
DWH_PORT = config.get("DWH","DWH_PORT")
# FILL IN THE REDSHIFT ENDPOINT HERE
# e.g. DWH_ENDPOINT="redshift-cluster-1.csmamz5zxmle.us-west-2.redshift.amazonaws.com"
DWH_ENDPOINT="dwhcluster.ct5uhpj0b2pz.us-west-2.redshift.amazonaws.com"
#FILL IN THE IAM ROLE ARN you got in step 2.2 of the previous exercise
#e.g DWH_ROLE_ARN="arn:aws:iam::988332130976:role/dwhRole"
DWH_ROLE_ARN="arn:aws:iam::596951707262:role/dwhRole"
###Output
_____no_output_____
###Markdown
STEP 2: Connect to the Redshift Cluster
###Code
conn_string="postgresql://{}:{}@{}:{}/{}".format(DWH_DB_USER, DWH_DB_PASSWORD, DWH_ENDPOINT, DWH_PORT,DWH_DB)
print(conn_string)
%sql $conn_string
import boto3
s3 = boto3.resource('s3',
region_name="us-west-2",
aws_access_key_id=KEY,
aws_secret_access_key=SECRET
)
sampleDbBucket = s3.Bucket("udacity-labs")
for obj in sampleDbBucket.objects.filter(Prefix="tickets"):
print(obj)
###Output
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/full/')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/full/full.csv.gz')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/split/')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/split/part-00000-d33afb94-b8af-407d-abd5-59c0ee8f5ee8-c000.csv.gz')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/split/part-00001-d33afb94-b8af-407d-abd5-59c0ee8f5ee8-c000.csv.gz')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/split/part-00002-d33afb94-b8af-407d-abd5-59c0ee8f5ee8-c000.csv.gz')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/split/part-00003-d33afb94-b8af-407d-abd5-59c0ee8f5ee8-c000.csv.gz')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/split/part-00004-d33afb94-b8af-407d-abd5-59c0ee8f5ee8-c000.csv.gz')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/split/part-00005-d33afb94-b8af-407d-abd5-59c0ee8f5ee8-c000.csv.gz')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/split/part-00006-d33afb94-b8af-407d-abd5-59c0ee8f5ee8-c000.csv.gz')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/split/part-00007-d33afb94-b8af-407d-abd5-59c0ee8f5ee8-c000.csv.gz')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/split/part-00008-d33afb94-b8af-407d-abd5-59c0ee8f5ee8-c000.csv.gz')
s3.ObjectSummary(bucket_name='udacity-labs', key='tickets/split/part-00009-d33afb94-b8af-407d-abd5-59c0ee8f5ee8-c000.csv.gz')
###Markdown
STEP 3: Create Tables
###Code
%%sql
DROP TABLE IF EXISTS "sporting_event_ticket";
CREATE TABLE "sporting_event_ticket" (
"id" double precision DEFAULT nextval('sporting_event_ticket_seq') NOT NULL,
"sporting_event_id" double precision NOT NULL,
"sport_location_id" double precision NOT NULL,
"seat_level" numeric(1,0) NOT NULL,
"seat_section" character varying(15) NOT NULL,
"seat_row" character varying(10) NOT NULL,
"seat" character varying(10) NOT NULL,
"ticketholder_id" double precision,
"ticket_price" numeric(8,2) NOT NULL
);
###Output
* postgresql://dwhuser:***@dwhcluster.ct5uhpj0b2pz.us-west-2.redshift.amazonaws.com:5439/dwh
Done.
Done.
###Markdown
STEP 4: Load Partitioned data into the cluster
###Code
%%time
qry = """
copy sporting_event_ticket from 's3://udacity-labs/tickets/split/part'
credentials 'aws_iam_role={}'
gzip delimiter ';' compupdate off region 'us-west-2';
""".format(DWH_ROLE_ARN)
%sql $qry
###Output
* postgresql://dwhuser:***@dwhcluster.ct5uhpj0b2pz.us-west-2.redshift.amazonaws.com:5439/dwh
Done.
CPU times: user 8.36 ms, sys: 452 µs, total: 8.82 ms
Wall time: 29.2 s
###Markdown
STEP 4: Create Tables for the non-partitioned data
###Code
%%sql
DROP TABLE IF EXISTS "sporting_event_ticket_full";
CREATE TABLE "sporting_event_ticket_full" (
"id" double precision DEFAULT nextval('sporting_event_ticket_seq') NOT NULL,
"sporting_event_id" double precision NOT NULL,
"sport_location_id" double precision NOT NULL,
"seat_level" numeric(1,0) NOT NULL,
"seat_section" character varying(15) NOT NULL,
"seat_row" character varying(10) NOT NULL,
"seat" character varying(10) NOT NULL,
"ticketholder_id" double precision,
"ticket_price" numeric(8,2) NOT NULL
);
###Output
* postgresql://dwhuser:***@dwhcluster.ct5uhpj0b2pz.us-west-2.redshift.amazonaws.com:5439/dwh
Done.
Done.
###Markdown
STEP 5: Load non-partitioned data into the cluster - Note how it's slower than loading partitioned data
###Code
%%time
qry = """
copy sporting_event_ticket_full from 's3://udacity-labs/tickets/full/full.csv.gz'
credentials 'aws_iam_role={}'
gzip delimiter ';' compupdate off region 'us-west-2';
""".format(DWH_ROLE_ARN)
%sql $qry
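# Optional sanity check once both loads have finished (sketch, not part of the original
# run); the two tables should end up with the same row count:
# %sql SELECT COUNT(*) FROM sporting_event_ticket;
# %sql SELECT COUNT(*) FROM sporting_event_ticket_full;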
###Output
* postgresql://dwhuser:***@dwhcluster.ct5uhpj0b2pz.us-west-2.redshift.amazonaws.com:5439/dwh
Done.
CPU times: user 8.23 ms, sys: 869 µs, total: 9.1 ms
Wall time: 23.3 s
|
notebooks/examples/summarization-tf.ipynb | ###Markdown
If you're opening this notebook on Colab, you will probably need to install 🤗 Transformers and 🤗 Datasets as well as other dependencies. Right now this requires the current master branch of both. Uncomment the following cell and run it.
###Code
#! pip install git+https://github.com/huggingface/transformers.git
#! pip install git+https://github.com/huggingface/datasets.git
#! pip install rouge-score nltk
###Output
_____no_output_____
###Markdown
If you're opening this notebook locally, make sure your environment has an installation of the latest version of those libraries. To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow. First you have to store your authentication token from the Hugging Face website (sign up [here](https://huggingface.co/join) if you haven't already!), then uncomment the following cell and input your username and password (this only works on Colab; in a regular notebook, you need to do this in a terminal):
###Code
from huggingface_hub import notebook_login
notebook_login()
###Output
Login successful
Your token has been saved to /home/matt/.huggingface/token
###Markdown
Then you need to install Git-LFS and setup Git if you haven't already. Uncomment the following instructions and adapt with your name and email:
###Code
# !apt install git-lfs
# !git config --global user.email "[email protected]"
# !git config --global user.name "Your Name"
###Output
_____no_output_____
###Markdown
Make sure your version of Transformers is at least 4.8.1 since the functionality was introduced in that version:
###Code
import transformers
print(transformers.__version__)
###Output
4.15.0.dev0
###Markdown
You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq). Fine-tuning a model on a summarization task In this notebook, we will see how to fine-tune one of the [🤗 Transformers](https://github.com/huggingface/transformers) models for a summarization task. We will use the [XSum dataset](https://arxiv.org/pdf/1808.08745.pdf) (for extreme summarization), which contains BBC articles accompanied by single-sentence summaries. We will see how to easily load the dataset for this task using 🤗 Datasets and how to fine-tune a model on it using Keras.
###Code
model_checkpoint = "t5-small"
###Output
_____no_output_____
###Markdown
This notebook is built to run with any model checkpoint from the [Model Hub](https://huggingface.co/models) as long as that model has a sequence-to-sequence version in the Transformers library. Here we picked the [`t5-small`](https://huggingface.co/t5-small) checkpoint. Loading the dataset We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions `load_dataset` and `load_metric`.
###Code
from datasets import load_dataset, load_metric
raw_datasets = load_dataset("xsum")
metric = load_metric("rouge")
###Output
Using custom data configuration default
###Markdown
The `dataset` object itself is [`DatasetDict`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasetdict), which contains one key for the training, validation and test set:
###Code
raw_datasets
###Output
_____no_output_____
###Markdown
To access an actual element, you need to select a split first, then give an index:
###Code
raw_datasets["train"][0]
###Output
_____no_output_____
###Markdown
To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset.
###Code
import datasets
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=5):
assert num_examples <= len(
dataset
), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset) - 1)
while pick in picks:
pick = random.randint(0, len(dataset) - 1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
for column, typ in dataset.features.items():
if isinstance(typ, datasets.ClassLabel):
df[column] = df[column].transform(lambda i: typ.names[i])
display(HTML(df.to_html()))
show_random_elements(raw_datasets["train"])
###Output
_____no_output_____
###Markdown
The metric is an instance of [`datasets.Metric`](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Metric):
###Code
metric
###Output
_____no_output_____
###Markdown
You can call its `compute` method with your predictions and labels, which need to be list of decoded strings:
###Code
fake_preds = ["hello there", "general kenobi"]
fake_labels = ["hello there", "general kenobi"]
metric.compute(predictions=fake_preds, references=fake_labels)
###Output
_____no_output_____
###Markdown
Preprocessing the data Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers `Tokenizer`, which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in a format the model expects, as well as generate the other inputs that the model requires. To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained` method, which will ensure: - we get a tokenizer that corresponds to the model architecture we want to use, - we download the vocabulary used when pretraining this specific checkpoint. That vocabulary will be cached, so it's not downloaded again the next time we run the cell.
###Code
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
###Output
_____no_output_____
###Markdown
By default, the call above will use one of the fast tokenizers (backed by Rust) from the 🤗 Tokenizers library. You can directly call this tokenizer on one sentence or a pair of sentences:
###Code
tokenizer("Hello, this one sentence!")
###Output
_____no_output_____
###Markdown
Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later); you can learn more about them in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you're interested. Instead of one sentence, we can pass along a list of sentences:
###Code
tokenizer(["Hello, this one sentence!", "This is another sentence."])
###Output
_____no_output_____
###Markdown
To prepare the targets for our model, we need to tokenize them inside the `as_target_tokenizer` context manager. This will make sure the tokenizer uses the special tokens corresponding to the targets:
###Code
with tokenizer.as_target_tokenizer():
print(tokenizer(["Hello, this one sentence!", "This is another sentence."]))
###Output
{'input_ids': [[8774, 6, 48, 80, 7142, 55, 1], [100, 19, 430, 7142, 5, 1]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1]]}
###Markdown
If you are using one of the five T5 checkpoints, you have to prefix the inputs with "summarize:" (the model can also translate, and it needs the prefix to know which task it has to perform).
###Code
if model_checkpoint in ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"]:
prefix = "summarize: "
else:
prefix = ""
###Output
_____no_output_____
###Markdown
We can then write the function that will preprocess our samples. We just feed them to the `tokenizer` with the argument `truncation=True`. This will ensure that an input longer than what the selected model can handle will be truncated to the maximum length accepted by the model. The padding will be dealt with later on (in a data collator), so we pad examples to the longest length in the batch and not the whole dataset.
###Code
max_input_length = 1024
max_target_length = 128
def preprocess_function(examples):
inputs = [prefix + doc for doc in examples["document"]]
model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
# Setup the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(
examples["summary"], max_length=max_target_length, truncation=True
)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
###Output
_____no_output_____
###Markdown
This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:
###Code
preprocess_function(raw_datasets["train"][:2])
###Output
_____no_output_____
###Markdown
To apply this function on all the pairs of sentences in our dataset, we just use the `map` method of our `dataset` object we created earlier. This will apply the function on all the elements of all the splits in `dataset`, so our training, validation and testing data will be preprocessed in one single command.
###Code
tokenized_datasets = raw_datasets.map(preprocess_function, batched=True)
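# To bypass the cache and force the preprocessing to run again (see the note below),
# one option is:
# tokenized_datasets = raw_datasets.map(preprocess_function, batched=True, load_from_cache_file=False)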
###Output
_____no_output_____
###Markdown
Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus requires not using the cached data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass `load_from_cache_file=False` in the call to `map` to not use the cached files and force the preprocessing to be applied again. Note that we passed `batched=True` to encode the texts by batches together. This is to leverage the full benefit of the fast tokenizer we loaded earlier, which will use multi-threading to treat the texts in a batch concurrently. Fine-tuning the model Now that our data is ready, we can download the pretrained model and fine-tune it. Since our task is of the sequence-to-sequence kind, we use the `TFAutoModelForSeq2SeqLM` class. Like with the tokenizer, the `from_pretrained` method will download and cache the model for us.
###Code
from transformers import TFAutoModelForSeq2SeqLM, DataCollatorForSeq2Seq
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)
###Output
2021-12-16 13:51:32.011280: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-16 13:51:32.018655: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-16 13:51:32.019939: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-16 13:51:32.021348: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-12-16 13:51:32.023736: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-16 13:51:32.024400: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-16 13:51:32.025046: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-16 13:51:32.339728: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-16 13:51:32.340404: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-16 13:51:32.341041: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:937] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-12-16 13:51:32.341650: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1510] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 21864 MB memory: -> device: 0, name: GeForce RTX 3090, pci bus id: 0000:21:00.0, compute capability: 8.6
2021-12-16 13:51:32.543965: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2021-12-16 13:51:33.001314: I tensorflow/stream_executor/cuda/cuda_blas.cc:1760] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
All model checkpoint layers were used when initializing TFT5ForConditionalGeneration.
All the layers of TFT5ForConditionalGeneration were initialized from the model checkpoint at t5-small.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training.
###Markdown
Note that we don't get a warning like in our classification example. This means we used all the weights of the pretrained model and there is no randomly initialized head in this case. Next we set some parameters like the learning rate and the `batch_size` and customize the weight decay. The last two variables are there to set everything up so we can push the model to the [Hub](https://huggingface.co/models) at the end of training. Remove the two of them if you didn't follow the installation steps at the top of the notebook; otherwise you can change the value of `push_to_hub_model_id` to something you would prefer.
###Code
batch_size = 8
learning_rate = 2e-5
weight_decay = 0.01
num_train_epochs = 1
model_name = model_checkpoint.split("/")[-1]
push_to_hub_model_id = f"{model_name}-finetuned-xsum"
###Output
_____no_output_____
###Markdown
Then, we need a special kind of data collator, which will not only pad the inputs to the maximum length in the batch, but also the labels. Note that our data collators are multi-framework, so make sure you set `return_tensors='tf'` so you get `tf.Tensor` objects back and not something else!
###Code
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, return_tensors="tf")
tokenized_datasets["train"]
###Output
_____no_output_____
###Markdown
Now we convert our input datasets to TF datasets using this collator. There's a built-in method for this: `to_tf_dataset()`. Make sure to specify the collator we just created as our `collate_fn`!
###Code
train_dataset = tokenized_datasets["train"].to_tf_dataset(
batch_size=batch_size,
columns=["input_ids", "attention_mask", "labels"],
shuffle=True,
collate_fn=data_collator,
)
validation_dataset = tokenized_datasets["validation"].to_tf_dataset(
batch_size=8,
columns=["input_ids", "attention_mask", "labels"],
shuffle=False,
collate_fn=data_collator,
)
###Output
_____no_output_____
###Markdown
Now we initialize our loss and optimizer and compile the model. Note that most Transformers models compute loss internally - we can train on this as our loss value simply by not specifying a loss when we `compile()`.
###Code
from transformers import AdamWeightDecay
import tensorflow as tf
optimizer = AdamWeightDecay(learning_rate=learning_rate, weight_decay_rate=weight_decay)
model.compile(optimizer=optimizer)
###Output
No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! Please ensure your labels are passed as the 'labels' key of the input dict so that they are accessible to the model during the forward pass. To disable this behaviour, please pass a loss argument, or explicitly pass loss=None if you do not want your model to compute a loss.
###Markdown
Now we can train our model. We can also add a callback to sync up our model with the Hub - this allows us to resume training from other machines and even test the model's inference quality midway through training! Make sure to change the `username` if you do. If you don't want to do this, simply remove the callbacks argument in the call to `fit()`.
###Code
from transformers.keras_callbacks import PushToHubCallback
callback = PushToHubCallback(
output_dir="./summarization_model_save",
tokenizer=tokenizer,
hub_model_id=push_to_hub_model_id,
)
model.fit(train_dataset, validation_data=validation_dataset, epochs=1, callbacks=[callback])
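# If you skipped the Hub/Git-LFS setup above, a minimal variant without the callback
# would be (sketch, not part of the original run):
# model.fit(train_dataset, validation_data=validation_dataset, epochs=1)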
###Output
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
###Markdown
Hopefully you saw your loss value declining as training continued, but that doesn't really tell us much about the quality of the model. Let's use the ROUGE metric we loaded earlier to quantify our model's ability in more detail. First we need to get the model's predictions for the validation set.
###Code
import numpy as np
decoded_predictions = []
decoded_labels = []
for batch in validation_dataset:
labels = batch["labels"]
predictions = model.predict_on_batch(batch)["logits"]
predicted_tokens = np.argmax(predictions, axis=-1)
decoded_predictions.extend(
tokenizer.batch_decode(predicted_tokens, skip_special_tokens=True)
)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels.extend(tokenizer.batch_decode(labels, skip_special_tokens=True))
###Output
_____no_output_____
###Markdown
Now we need to prepare the data as the metric expects, with one sentence per line.
###Code
import nltk
import numpy as np
# Rouge expects a newline after each sentence
decoded_predictions = [
"\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_predictions
]
decoded_labels = [
"\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels
]
result = metric.compute(
predictions=decoded_predictions, references=decoded_labels, use_stemmer=True
)
# Extract a few results
result = {key: value.mid.fmeasure * 100 for key, value in result.items()}
# Add mean generated length
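# Note: `predictions` still holds the raw logits of the last validation batch at this
# point, so the length below is counted over logit entries rather than generated token ids.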
prediction_lens = [
np.count_nonzero(pred != tokenizer.pad_token_id) for pred in predictions
]
result["gen_len"] = np.mean(prediction_lens)
print({k: round(v, 4) for k, v in result.items()})
###Output
{'rouge1': 37.4199, 'rouge2': 13.9768, 'rougeL': 34.361, 'rougeLsum': 35.0781, 'gen_len': 1060224.0}
|
autox/autox_ts/demo/kdd_cup_2022_autox.ipynb | ###Markdown
Import packages
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import os
from tqdm import tqdm
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Read the data and preprocess it
###Code
df = pd.read_csv('~/KDD_CUP_2022/input/sdwpf134_initial_kddcup.csv')
def get_date(k):
cur_date = "2020-01-01"
one_day = timedelta(days=k-1)
return str(datetime.strptime(cur_date, '%Y-%m-%d') + one_day)[:10]
df['Day'] = df['Day'].apply(lambda x: get_date(x))
def cols_concat(df, con_list):
name = 't1'
df[name] = df[con_list[0]].astype(str)
for item in con_list[1:]:
df[name] = df[name] + ' ' + df[item].astype(str)
return df
df = cols_concat(df, ["Day", "Tmstamp"])
df = df[['TurbID', 't1', 'Wspd', 'Wdir', 'Etmp', 'Itmp', 'Ndir', 'Pab1', 'Pab2', 'Pab3', 'Prtv', 'Patv']]
df['t1'] = pd.to_datetime(df['t1'])
df
###Output
_____no_output_____
###Markdown
AutoX.AutoTS
###Code
from autox import AutoTS
autots = AutoTS(df = df,
id_col = 'TurbID',
time_col = 't1',
target_col = 'Patv',
time_varying_cols = ['Wspd', 'Wdir', 'Etmp', 'Itmp', 'Ndir', 'Pab1', 'Pab2', 'Pab3', 'Prtv', 'Patv'],
time_interval_num = 15,
time_interval_unit = 'minute',
forecast_period = 4*24*2)
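# forecast_period = 4 * 24 * 2 = 192 fifteen-minute steps, i.e. a two-day forecast horizon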
sub = autots.get_result()
###Output
INFO -> [+] feature engineer
INFO -> [+] fe_rolling_stat
100%|██████████| 4/4 [00:19<00:00, 4.92s/it]
INFO -> [+] fe_lag
INFO -> [+] fe_diff
INFO -> [+] fe_time
INFO -> [+] feature combination
100%|██████████| 4/4 [00:00<00:00, 4.85it/s]
INFO -> [+] construct data
0%| | 0/192 [00:00<?, ?it/s] INFO -> [+] sample data, frac=0.2891976131248519
1%| | 1/192 [00:04<13:34, 4.26s/it] INFO -> [+] sample data, frac=0.2891976131248519
1%| | 2/192 [00:08<13:55, 4.40s/it] INFO -> [+] sample data, frac=0.2891976131248519
2%|▏ | 3/192 [00:13<13:53, 4.41s/it] INFO -> [+] sample data, frac=0.2891976131248519
2%|▏ | 4/192 [00:17<13:48, 4.41s/it] INFO -> [+] sample data, frac=0.2891976131248519
3%|▎ | 5/192 [00:22<13:48, 4.43s/it] INFO -> [+] sample data, frac=0.2891976131248519
3%|▎ | 6/192 [00:26<13:43, 4.43s/it] INFO -> [+] sample data, frac=0.2891976131248519
4%|▎ | 7/192 [00:30<13:34, 4.40s/it] INFO -> [+] sample data, frac=0.2891976131248519
4%|▍ | 8/192 [00:35<14:02, 4.58s/it] INFO -> [+] sample data, frac=0.2891976131248519
5%|▍ | 9/192 [00:40<13:58, 4.58s/it] INFO -> [+] sample data, frac=0.2891976131248519
5%|▌ | 10/192 [00:45<14:02, 4.63s/it] INFO -> [+] sample data, frac=0.2891976131248519
6%|▌ | 11/192 [00:49<14:06, 4.67s/it] INFO -> [+] sample data, frac=0.2891976131248519
6%|▋ | 12/192 [00:54<14:20, 4.78s/it] INFO -> [+] sample data, frac=0.2891976131248519
7%|▋ | 13/192 [00:59<14:24, 4.83s/it] INFO -> [+] sample data, frac=0.2891976131248519
7%|▋ | 14/192 [01:04<14:29, 4.89s/it] INFO -> [+] sample data, frac=0.2891976131248519
8%|▊ | 15/192 [01:10<14:42, 4.98s/it] INFO -> [+] sample data, frac=0.2891976131248519
8%|▊ | 16/192 [01:15<14:55, 5.09s/it] INFO -> [+] sample data, frac=0.2891976131248519
9%|▉ | 17/192 [01:20<15:05, 5.18s/it] INFO -> [+] sample data, frac=0.2891976131248519
9%|▉ | 18/192 [01:26<15:18, 5.28s/it] INFO -> [+] sample data, frac=0.2891976131248519
10%|▉ | 19/192 [01:31<15:20, 5.32s/it] INFO -> [+] sample data, frac=0.2891976131248519
10%|█ | 20/192 [01:36<15:04, 5.26s/it] INFO -> [+] sample data, frac=0.2891976131248519
11%|█ | 21/192 [01:42<15:12, 5.33s/it] INFO -> [+] sample data, frac=0.2891976131248519
11%|█▏ | 22/192 [01:48<15:29, 5.47s/it] INFO -> [+] sample data, frac=0.2891976131248519
12%|█▏ | 23/192 [01:53<15:32, 5.52s/it] INFO -> [+] sample data, frac=0.2891976131248519
12%|█▎ | 24/192 [01:59<15:15, 5.45s/it] INFO -> [+] sample data, frac=0.2891976131248519
13%|█▎ | 25/192 [02:04<15:05, 5.42s/it] INFO -> [+] sample data, frac=0.2891976131248519
14%|█▎ | 26/192 [02:09<14:58, 5.41s/it] INFO -> [+] sample data, frac=0.2891976131248519
14%|█▍ | 27/192 [02:15<15:07, 5.50s/it] INFO -> [+] sample data, frac=0.2891976131248519
15%|█▍ | 28/192 [02:21<15:06, 5.53s/it] INFO -> [+] sample data, frac=0.2891976131248519
15%|█▌ | 29/192 [02:26<15:11, 5.59s/it] INFO -> [+] sample data, frac=0.2891976131248519
16%|█▌ | 30/192 [02:32<15:09, 5.62s/it] INFO -> [+] sample data, frac=0.2891976131248519
16%|█▌ | 31/192 [02:38<15:22, 5.73s/it] INFO -> [+] sample data, frac=0.2891976131248519
17%|█▋ | 32/192 [02:44<15:26, 5.79s/it] INFO -> [+] sample data, frac=0.2891976131248519
17%|█▋ | 33/192 [02:50<15:38, 5.90s/it] INFO -> [+] sample data, frac=0.2891976131248519
18%|█▊ | 34/192 [02:56<15:42, 5.97s/it] INFO -> [+] sample data, frac=0.2891976131248519
18%|█▊ | 35/192 [03:02<15:41, 6.00s/it] INFO -> [+] sample data, frac=0.2891976131248519
19%|█▉ | 36/192 [03:08<15:40, 6.03s/it] INFO -> [+] sample data, frac=0.2891976131248519
19%|█▉ | 37/192 [03:15<15:43, 6.08s/it] INFO -> [+] sample data, frac=0.2891976131248519
20%|█▉ | 38/192 [03:21<15:50, 6.17s/it] INFO -> [+] sample data, frac=0.2891976131248519
20%|██ | 39/192 [03:27<15:53, 6.23s/it] INFO -> [+] sample data, frac=0.2891976131248519
21%|██ | 40/192 [03:34<15:54, 6.28s/it] INFO -> [+] sample data, frac=0.2891976131248519
21%|██▏ | 41/192 [03:40<16:03, 6.38s/it] INFO -> [+] sample data, frac=0.2891976131248519
22%|██▏ | 42/192 [03:47<16:13, 6.49s/it] INFO -> [+] sample data, frac=0.2891976131248519
22%|██▏ | 43/192 [03:54<16:20, 6.58s/it] INFO -> [+] sample data, frac=0.2891976131248519
23%|██▎ | 44/192 [04:01<16:22, 6.64s/it] INFO -> [+] sample data, frac=0.2891976131248519
23%|██▎ | 45/192 [04:08<16:25, 6.70s/it] INFO -> [+] sample data, frac=0.2891976131248519
24%|██▍ | 46/192 [04:14<16:28, 6.77s/it] INFO -> [+] sample data, frac=0.2891976131248519
24%|██▍ | 47/192 [04:21<16:30, 6.83s/it] INFO -> [+] sample data, frac=0.2891976131248519
25%|██▌ | 48/192 [04:28<16:29, 6.87s/it] INFO -> [+] sample data, frac=0.2891976131248519
26%|██▌ | 49/192 [04:35<16:31, 6.93s/it] INFO -> [+] sample data, frac=0.2891976131248519
26%|██▌ | 50/192 [04:43<16:32, 6.99s/it] INFO -> [+] sample data, frac=0.2891976131248519
27%|██▋ | 51/192 [04:50<16:41, 7.11s/it] INFO -> [+] sample data, frac=0.2891976131248519
27%|██▋ | 52/192 [04:57<16:52, 7.23s/it] INFO -> [+] sample data, frac=0.2891976131248519
28%|██▊ | 53/192 [05:05<16:55, 7.31s/it] INFO -> [+] sample data, frac=0.2891976131248519
28%|██▊ | 54/192 [05:12<16:55, 7.36s/it] INFO -> [+] sample data, frac=0.2891976131248519
29%|██▊ | 55/192 [05:20<16:54, 7.41s/it] INFO -> [+] sample data, frac=0.2891976131248519
29%|██▉ | 56/192 [05:28<16:55, 7.47s/it] INFO -> [+] sample data, frac=0.2891976131248519
30%|██▉ | 57/192 [05:35<16:57, 7.54s/it] INFO -> [+] sample data, frac=0.2891976131248519
30%|███ | 58/192 [05:43<17:07, 7.67s/it] INFO -> [+] sample data, frac=0.2891976131248519
31%|███ | 59/192 [05:51<17:05, 7.71s/it] INFO -> [+] sample data, frac=0.2891976131248519
31%|███▏ | 60/192 [05:59<16:56, 7.70s/it] INFO -> [+] sample data, frac=0.2891976131248519
32%|███▏ | 61/192 [06:07<16:57, 7.77s/it] INFO -> [+] sample data, frac=0.2891976131248519
32%|███▏ | 62/192 [06:15<17:00, 7.85s/it] INFO -> [+] sample data, frac=0.2891976131248519
33%|███▎ | 63/192 [06:23<17:02, 7.92s/it] INFO -> [+] sample data, frac=0.2891976131248519
33%|███▎ | 64/192 [06:31<17:02, 7.99s/it] INFO -> [+] sample data, frac=0.2891976131248519
34%|███▍ | 65/192 [06:39<16:54, 7.99s/it] INFO -> [+] sample data, frac=0.2891976131248519
34%|███▍ | 66/192 [06:47<16:53, 8.04s/it] INFO -> [+] sample data, frac=0.2891976131248519
35%|███▍ | 67/192 [06:55<16:54, 8.12s/it] INFO -> [+] sample data, frac=0.2891976131248519
35%|███▌ | 68/192 [07:04<16:55, 8.19s/it] INFO -> [+] sample data, frac=0.2891976131248519
36%|███▌ | 69/192 [07:12<16:57, 8.27s/it] INFO -> [+] sample data, frac=0.2891976131248519
36%|███▋ | 70/192 [07:21<16:53, 8.31s/it] INFO -> [+] sample data, frac=0.2891976131248519
37%|███▋ | 71/192 [07:29<17:01, 8.44s/it] INFO -> [+] sample data, frac=0.2891976131248519
38%|███▊ | 72/192 [07:38<16:53, 8.45s/it] INFO -> [+] sample data, frac=0.2891976131248519
38%|███▊ | 73/192 [07:46<16:51, 8.50s/it] INFO -> [+] sample data, frac=0.2891976131248519
39%|███▊ | 74/192 [07:55<16:58, 8.63s/it] INFO -> [+] sample data, frac=0.2891976131248519
39%|███▉ | 75/192 [08:04<16:58, 8.71s/it] INFO -> [+] sample data, frac=0.2891976131248519
40%|███▉ | 76/192 [08:13<16:58, 8.78s/it] INFO -> [+] sample data, frac=0.2891976131248519
40%|████ | 77/192 [08:22<17:03, 8.90s/it] INFO -> [+] sample data, frac=0.2891976131248519
41%|████ | 78/192 [08:31<16:59, 8.95s/it] INFO -> [+] sample data, frac=0.2891976131248519
41%|████ | 79/192 [08:41<17:00, 9.03s/it] INFO -> [+] sample data, frac=0.2891976131248519
42%|████▏ | 80/192 [08:50<17:01, 9.12s/it] INFO -> [+] sample data, frac=0.2891976131248519
42%|████▏ | 81/192 [08:59<17:04, 9.23s/it] INFO -> [+] sample data, frac=0.2891976131248519
43%|████▎ | 82/192 [09:09<17:01, 9.28s/it] INFO -> [+] sample data, frac=0.2891976131248519
43%|████▎ | 83/192 [09:18<16:57, 9.34s/it] INFO -> [+] sample data, frac=0.2891976131248519
44%|████▍ | 84/192 [09:28<16:46, 9.32s/it] INFO -> [+] sample data, frac=0.2891976131248519
44%|████▍ | 85/192 [09:37<16:47, 9.42s/it] INFO -> [+] sample data, frac=0.2891976131248519
45%|████▍ | 86/192 [09:47<16:44, 9.47s/it] INFO -> [+] sample data, frac=0.2891976131248519
45%|████▌ | 87/192 [09:57<17:05, 9.77s/it] INFO -> [+] sample data, frac=0.2891976131248519
46%|████▌ | 88/192 [10:08<17:13, 9.94s/it] INFO -> [+] sample data, frac=0.2891976131248519
46%|████▋ | 89/192 [10:18<17:14, 10.05s/it] INFO -> [+] sample data, frac=0.2891976131248519
47%|████▋ | 90/192 [10:29<17:22, 10.22s/it] INFO -> [+] sample data, frac=0.2891976131248519
47%|████▋ | 91/192 [10:38<16:57, 10.07s/it] INFO -> [+] sample data, frac=0.2891976131248519
48%|████▊ | 92/192 [10:48<16:45, 10.06s/it] INFO -> [+] sample data, frac=0.2891976131248519
48%|████▊ | 93/192 [10:59<16:45, 10.16s/it] INFO -> [+] sample data, frac=0.2891976131248519
49%|████▉ | 94/192 [11:09<16:40, 10.21s/it] INFO -> [+] sample data, frac=0.2891976131248519
49%|████▉ | 95/192 [11:19<16:27, 10.18s/it] INFO -> [+] sample data, frac=0.2891976131248519
50%|█████ | 96/192 [11:30<16:23, 10.25s/it] INFO -> [+] sample data, frac=0.2891976131248519
51%|█████ | 97/192 [11:40<16:15, 10.27s/it] INFO -> [+] sample data, frac=0.2891976131248519
51%|█████ | 98/192 [11:50<16:10, 10.33s/it] INFO -> [+] sample data, frac=0.2891976131248519
52%|█████▏ | 99/192 [12:01<16:08, 10.42s/it] INFO -> [+] sample data, frac=0.2891976131248519
52%|█████▏ | 100/192 [12:12<16:05, 10.49s/it] INFO -> [+] sample data, frac=0.2891976131248519
53%|█████▎ | 101/192 [12:22<15:57, 10.52s/it] INFO -> [+] sample data, frac=0.2891976131248519
53%|█████▎ | 102/192 [12:33<16:00, 10.67s/it] INFO -> [+] sample data, frac=0.2891976131248519
54%|█████▎ | 103/192 [12:44<15:54, 10.73s/it] INFO -> [+] sample data, frac=0.2891976131248519
54%|█████▍ | 104/192 [12:55<15:49, 10.79s/it] INFO -> [+] sample data, frac=0.2891976131248519
55%|█████▍ | 105/192 [13:06<15:49, 10.92s/it] INFO -> [+] sample data, frac=0.2891976131248519
55%|█████▌ | 106/192 [13:18<15:48, 11.02s/it] INFO -> [+] sample data, frac=0.2891976131248519
56%|█████▌ | 107/192 [13:28<15:31, 10.95s/it] INFO -> [+] sample data, frac=0.2891976131248519
56%|█████▋ | 108/192 [13:40<15:30, 11.08s/it] INFO -> [+] sample data, frac=0.2891976131248519
57%|█████▋ | 109/192 [13:51<15:21, 11.11s/it] INFO -> [+] sample data, frac=0.2891976131248519
57%|█████▋ | 110/192 [14:02<15:12, 11.13s/it] INFO -> [+] sample data, frac=0.2891976131248519
58%|█████▊ | 111/192 [14:14<15:12, 11.27s/it] INFO -> [+] sample data, frac=0.2891976131248519
58%|█████▊ | 112/192 [14:25<15:05, 11.31s/it] INFO -> [+] sample data, frac=0.2891976131248519
59%|█████▉ | 113/192 [14:36<14:51, 11.29s/it] INFO -> [+] sample data, frac=0.2891976131248519
59%|█████▉ | 114/192 [14:48<14:56, 11.50s/it] INFO -> [+] sample data, frac=0.2891976131248519
60%|█████▉ | 115/192 [15:00<14:48, 11.54s/it] INFO -> [+] sample data, frac=0.2891976131248519
60%|██████ | 116/192 [15:12<14:39, 11.58s/it] INFO -> [+] sample data, frac=0.2891976131248519
61%|██████ | 117/192 [15:24<14:44, 11.80s/it] INFO -> [+] sample data, frac=0.2891976131248519
61%|██████▏ | 118/192 [15:36<14:29, 11.75s/it] INFO -> [+] sample data, frac=0.2891976131248519
62%|██████▏ | 119/192 [15:47<14:17, 11.74s/it] INFO -> [+] sample data, frac=0.2891976131248519
62%|██████▎ | 120/192 [15:59<14:15, 11.88s/it] INFO -> [+] sample data, frac=0.2891976131248519
63%|██████▎ | 121/192 [16:11<14:01, 11.85s/it] INFO -> [+] sample data, frac=0.2891976131248519
64%|██████▎ | 122/192 [16:24<14:00, 12.01s/it] INFO -> [+] sample data, frac=0.2891976131248519
64%|██████▍ | 123/192 [16:36<13:47, 12.00s/it] INFO -> [+] sample data, frac=0.2891976131248519
65%|██████▍ | 124/192 [16:48<13:37, 12.02s/it] INFO -> [+] sample data, frac=0.2891976131248519
65%|██████▌ | 125/192 [17:00<13:37, 12.20s/it] INFO -> [+] sample data, frac=0.2891976131248519
66%|██████▌ | 126/192 [17:13<13:30, 12.28s/it] INFO -> [+] sample data, frac=0.2891976131248519
66%|██████▌ | 127/192 [17:26<13:31, 12.49s/it] INFO -> [+] sample data, frac=0.2891976131248519
67%|██████▋ | 128/192 [17:38<13:18, 12.48s/it] INFO -> [+] sample data, frac=0.2891976131248519
67%|██████▋ | 129/192 [17:50<13:02, 12.42s/it] INFO -> [+] sample data, frac=0.2891976131248519
68%|██████▊ | 130/192 [18:04<13:01, 12.60s/it] INFO -> [+] sample data, frac=0.2891976131248519
68%|██████▊ | 131/192 [18:16<12:46, 12.56s/it] INFO -> [+] sample data, frac=0.2891976131248519
69%|██████▉ | 132/192 [18:29<12:47, 12.80s/it] INFO -> [+] sample data, frac=0.2891976131248519
69%|██████▉ | 133/192 [18:42<12:38, 12.85s/it] INFO -> [+] sample data, frac=0.2891976131248519
70%|██████▉ | 134/192 [18:55<12:21, 12.78s/it] INFO -> [+] sample data, frac=0.2891976131248519
70%|███████ | 135/192 [19:08<12:18, 12.96s/it] INFO -> [+] sample data, frac=0.2891976131248519
71%|███████ | 136/192 [19:21<12:06, 12.98s/it] INFO -> [+] sample data, frac=0.2891976131248519
71%|███████▏ | 137/192 [19:35<12:10, 13.28s/it] INFO -> [+] sample data, frac=0.2891976131248519
72%|███████▏ | 138/192 [19:48<11:55, 13.26s/it] INFO -> [+] sample data, frac=0.2891976131248519
72%|███████▏ | 139/192 [20:02<11:40, 13.22s/it] INFO -> [+] sample data, frac=0.2891976131248519
73%|███████▎ | 140/192 [20:15<11:33, 13.34s/it] INFO -> [+] sample data, frac=0.2891976131248519
73%|███████▎ | 141/192 [20:28<11:16, 13.26s/it] INFO -> [+] sample data, frac=0.2891976131248519
74%|███████▍ | 142/192 [20:42<11:15, 13.51s/it] INFO -> [+] sample data, frac=0.2891976131248519
74%|███████▍ | 143/192 [20:56<11:04, 13.56s/it] INFO -> [+] sample data, frac=0.2891976131248519
75%|███████▌ | 144/192 [21:09<10:46, 13.46s/it] INFO -> [+] sample data, frac=0.2891976131248519
76%|███████▌ | 145/192 [21:23<10:38, 13.58s/it] INFO -> [+] sample data, frac=0.2891976131248519
76%|███████▌ | 146/192 [21:37<10:26, 13.61s/it] INFO -> [+] sample data, frac=0.2891976131248519
77%|███████▋ | 147/192 [21:51<10:26, 13.91s/it] INFO -> [+] sample data, frac=0.2891976131248519
77%|███████▋ | 148/192 [22:06<10:15, 13.99s/it] INFO -> [+] sample data, frac=0.2891976131248519
78%|███████▊ | 149/192 [22:23<10:47, 15.07s/it] INFO -> [+] sample data, frac=0.2891976131248519
78%|███████▊ | 150/192 [22:41<11:03, 15.79s/it] INFO -> [+] sample data, frac=0.2891976131248519
79%|███████▊ | 151/192 [22:58<11:06, 16.26s/it] INFO -> [+] sample data, frac=0.2891976131248519
79%|███████▉ | 152/192 [23:15<11:04, 16.61s/it] INFO -> [+] sample data, frac=0.2891976131248519
80%|███████▉ | 153/192 [23:32<10:45, 16.56s/it] INFO -> [+] sample data, frac=0.2891976131248519
80%|████████ | 154/192 [23:46<10:00, 15.79s/it] INFO -> [+] sample data, frac=0.2891976131248519
81%|████████ | 155/192 [24:01<09:32, 15.46s/it] INFO -> [+] sample data, frac=0.2891976131248519
81%|████████▏ | 156/192 [24:15<09:01, 15.04s/it] INFO -> [+] sample data, frac=0.2891976131248519
82%|████████▏ | 157/192 [24:29<08:41, 14.90s/it] INFO -> [+] sample data, frac=0.2891976131248519
82%|████████▏ | 158/192 [24:44<08:22, 14.77s/it] INFO -> [+] sample data, frac=0.2891976131248519
83%|████████▎ | 159/192 [24:59<08:08, 14.81s/it] INFO -> [+] sample data, frac=0.2891976131248519
83%|████████▎ | 160/192 [25:13<07:49, 14.68s/it] INFO -> [+] sample data, frac=0.2891976131248519
84%|████████▍ | 161/192 [25:28<07:38, 14.79s/it] INFO -> [+] sample data, frac=0.2891976131248519
84%|████████▍ | 162/192 [25:43<07:23, 14.78s/it] INFO -> [+] sample data, frac=0.2891976131248519
85%|████████▍ | 163/192 [25:58<07:10, 14.84s/it] INFO -> [+] sample data, frac=0.2891976131248519
85%|████████▌ | 164/192 [26:13<06:56, 14.86s/it] INFO -> [+] sample data, frac=0.2891976131248519
86%|████████▌ | 165/192 [26:29<06:51, 15.23s/it] INFO -> [+] sample data, frac=0.2891976131248519
86%|████████▋ | 166/192 [26:44<06:37, 15.27s/it] INFO -> [+] sample data, frac=0.2891976131248519
87%|████████▋ | 167/192 [26:59<06:18, 15.16s/it] INFO -> [+] sample data, frac=0.2891976131248519
88%|████████▊ | 168/192 [27:15<06:07, 15.30s/it] INFO -> [+] sample data, frac=0.2891976131248519
88%|████████▊ | 169/192 [27:30<05:50, 15.24s/it] INFO -> [+] sample data, frac=0.2891976131248519
89%|████████▊ | 170/192 [27:46<05:44, 15.65s/it] INFO -> [+] sample data, frac=0.2891976131248519
89%|████████▉ | 171/192 [28:02<05:27, 15.61s/it] INFO -> [+] sample data, frac=0.2891976131248519
90%|████████▉ | 172/192 [28:17<05:08, 15.44s/it] INFO -> [+] sample data, frac=0.2891976131248519
90%|█████████ | 173/192 [28:36<05:15, 16.58s/it] INFO -> [+] sample data, frac=0.2891976131248519
91%|█████████ | 174/192 [28:56<05:16, 17.60s/it] INFO -> [+] sample data, frac=0.2891976131248519
91%|█████████ | 175/192 [29:16<05:11, 18.29s/it] INFO -> [+] sample data, frac=0.2891976131248519
92%|█████████▏| 176/192 [29:34<04:51, 18.22s/it] INFO -> [+] sample data, frac=0.2891976131248519
92%|█████████▏| 177/192 [29:50<04:21, 17.41s/it] INFO -> [+] sample data, frac=0.2891976131248519
93%|█████████▎| 178/192 [30:06<04:00, 17.15s/it] INFO -> [+] sample data, frac=0.2891976131248519
93%|█████████▎| 179/192 [30:22<03:38, 16.81s/it] INFO -> [+] sample data, frac=0.2891976131248519
94%|█████████▍| 180/192 [30:39<03:20, 16.70s/it] INFO -> [+] sample data, frac=0.2891976131248519
94%|█████████▍| 181/192 [30:55<03:01, 16.51s/it] INFO -> [+] sample data, frac=0.2891976131248519
95%|█████████▍| 182/192 [31:12<02:47, 16.78s/it] INFO -> [+] sample data, frac=0.2891976131248519
95%|█████████▌| 183/192 [31:28<02:29, 16.64s/it] INFO -> [+] sample data, frac=0.2891976131248519
96%|█████████▌| 184/192 [31:44<02:11, 16.38s/it] INFO -> [+] sample data, frac=0.2891976131248519
96%|█████████▋| 185/192 [32:01<01:55, 16.46s/it] INFO -> [+] sample data, frac=0.2891976131248519
97%|█████████▋| 186/192 [32:17<01:38, 16.34s/it] INFO -> [+] sample data, frac=0.2891976131248519
97%|█████████▋| 187/192 [32:34<01:22, 16.49s/it] INFO -> [+] sample data, frac=0.2891976131248519
98%|█████████▊| 188/192 [32:50<01:05, 16.49s/it] INFO -> [+] sample data, frac=0.2891976131248519
98%|█████████▊| 189/192 [33:07<00:49, 16.62s/it] INFO -> [+] sample data, frac=0.2891976131248519
99%|█████████▉| 190/192 [33:24<00:33, 16.64s/it] INFO -> [+] sample data, frac=0.2891976131248519
99%|█████████▉| 191/192 [33:41<00:16, 16.78s/it] INFO -> [+] sample data, frac=0.2891976131248519
100%|██████████| 192/192 [33:57<00:00, 10.61s/it]
INFO -> [+] fe_time_add
INFO -> [+] fe_time_add
INFO -> [+] feature_filter
100%|██████████| 375/375 [00:28<00:00, 13.18it/s]
INFO -> [+] train model
###Markdown
Inspect the feature importances
###Code
autots.feature_importances.head(10)
###Output
_____no_output_____
###Markdown
Export the results
###Code
sub.to_csv("./autox_kdd.csv", index=False)
###Output
_____no_output_____
###Markdown
Plot the results
###Code
cur_TurbID = 1
plt.plot(df.loc[df['TurbID'] == cur_TurbID, 't1'], df.loc[df['TurbID'] == cur_TurbID, 'Patv'], color = 'b')
plt.plot(sub.loc[sub['TurbID'] == cur_TurbID, 't2'], sub.loc[sub['TurbID'] == cur_TurbID, 'y_mean'], color = 'r')
cur_TurbID = 3
plt.plot(df.loc[df['TurbID'] == cur_TurbID, 't1'], df.loc[df['TurbID'] == cur_TurbID, 'Patv'], color = 'b')
plt.plot(sub.loc[sub['TurbID'] == cur_TurbID, 't2'], sub.loc[sub['TurbID'] == cur_TurbID, 'y_mean'], color = 'r')
###Output
_____no_output_____ |
Soluciones/2. Numpy/1. Soluciones.ipynb | ###Markdown
NumPy - Exercises 1. Create a 3x3 matrix with values between 0 and 8.
###Code
import numpy as np
array = np.arange(0,9).reshape(3,3)
array
###Output
_____no_output_____
###Markdown
2. Create a 5x5 matrix with the values 1, 2, 3, 4 below the main diagonal.
###Code
array_identidad= np.eye(5, k=-1)
array_identidad[2,1]=2
array_identidad[3,2]=3
array_identidad[4,3]=4
array_identidad
matriz = np.zeros((5,5))
matriz[[1,2,3,4],[0,1,2,3]] = [1,2,3,4]
matriz
array_1 = np.eye(5, k=-1)
array_2 = np.array([1,2,3,4,0])
array_3 = array_1 * array_2
array_3
array_1 = np.eye(5, k = -1)
array_1[array_1 == 1] = [1,2,3,4]
array_1
array_identidad= np.eye(5, k=-1)
array_identidad[2]=array_identidad[2] * 2
array_identidad[3]=array_identidad[3]*3
array_identidad[4]=array_identidad[4]*4
array_identidad
np.diag([1,2,3,4], k = -1)
###Output
_____no_output_____
###Markdown
3. Create a function that takes a number n as a parameter and returns a square (two-dimensional) n x n matrix where the elements of the main diagonal are 1 and the rest are n.
###Code
def array_ex_3(n):
array_ex_3 = np.eye(n)
array_ex_3[array_ex_3 == 0] = n
return array_ex_3
print(array_ex_3(5))
def ejercicio_3(n):
matriz = np.ones((n,n)) * n
np.fill_diagonal(matriz,1)
return (matriz)
ejercicio_3(5)
def ejercicio_3(n):
matriz = np.ones((n,n)) * n
matriz = matriz - np.identity(n)*(n-1)
return matriz
ejercicio_3(5)
import numpy as np
###Output
_____no_output_____
###Markdown
4. Create a function that takes two numbers n and m, which define the width and height of an n x m matrix of sequential numeric elements, and a boolean that determines whether to return the generated matrix (False) or its transpose (True).
###Code
def matriz(n,m, traspuesta):
matriz_1=np.arange(0,n*m)
matriz_2=matriz_1.reshape(n,m)
if traspuesta:
return matriz_2.T
else:
return matriz_2
matriz(4,5,False)
###Output
_____no_output_____
###Markdown
5. Create a function that takes two arrays and generates an array that has 0 in every element and 1 in those elements of the first array that are greater than their counterpart in the second array.
###Code
def doble_array (array1, array2):
array3 = np.empty_like(array1) # np.empty(array1.shape)
array3[array1 > array2] = 1
array3[array1 <= array2] = 0
return array3
doble_array (np.array([[3,5],[2,2]]), np.array([[2,2],[8,8]]))
def doble_array (array1, array2):
array3 = np.zeros_like(array1) # np.zeros(array1.shape)
array3[array1 > array2] = 1
return array3
def ejercicio5(array_1,array_2):
array_3=array_1>array_2
array_4=np.array(array_3, dtype=np.int8)
return array_4
###Output
_____no_output_____
###Markdown
6. Create a function that takes an array and a tuple of integers and returns the matrix that results from selecting the tuple's indices applied from the end.
###Code
def filasfinal(array,tupla_filas):
n,m = array.shape
lista_filas = list(tupla_filas)
if max(lista_filas) > n:
return ('La tupla tiene algún índce mayor que el número de filas')
else:
lista_filas_final = [- fila for fila in lista_filas]
return(array[lista_filas_final])
array = np.random.randn(8,4)
tupla = (5,3,6)
print(array)
filasfinal(array,tupla)
###Output
[[-1.57759398 -1.81286255 2.99780851 -0.09115207]
[-0.11539132 1.17114408 0.53318104 -0.16539265]
[-1.13773899 0.94320294 0.5636784 -0.3592956 ]
[-1.15636182 0.76753663 0.54604084 0.68437726]
[ 0.48875272 0.37858245 0.74001313 0.85257251]
[-0.72356253 -1.46022264 -1.03207395 1.27874088]
[-1.14776473 0.13525921 -0.13950328 -0.29280242]
[-0.05340892 0.31454825 1.44982422 -1.08944671]]
[-5, -3, -6]
###Markdown
7. Replicate the function from exercise 5 using the np.where function.
###Code
def ejercicio_7(array_5_1, array_5_2):
return np.where(array_5_1 > array_5_2, 1,0)
array_5_1 = np.random.randn(4,4)
array_5_2 = np.random.randn(4,4)
ejercicio_7(array_5_1,array_5_2)
###Output
_____no_output_____
###Markdown
8. Create a function that takes a matrix and checks whether or not it contains any negative value.
###Code
def ejercicio_8(array_8):
if (array_8 < 0).sum() != 0: #np.sum(array_8 < 0)
return "Hay un valor negativo"
else:
return "No hay un valor negativo"
def ejercicio_8(array_8):
if (array_8 < 0).any(): #np.sum(array_8 < 0)
return "Hay un valor negativo"
else:
return "No hay un valor negativo"
def ejercicio_8(array_8):
return (array_8 < 0).any()
array_8 = np.random.randn(4,4)
ejercicio_8(array_8)
###Output
_____no_output_____
###Markdown
9. Create a function that, using the one from the previous exercise, sums the absolute value of all the negative elements of a matrix received as a parameter, or returns "No hay negativos" ("there are no negatives") if there are none.
###Code
def funcion(a):
if ejercicio_8(a):
b = np.where(a<0,np.abs(a),0)
return np.sum(b)
else:
return 'No hay negativos'
funcion(np.array([0,3,-4,-1,9]))
def funcion(a):
if ejercicio_8(a):
        return -np.sum(a[a < 0])
else:
return 'No hay negativos'
###Output
_____no_output_____
###Markdown
10. Create a function that takes two matrices and returns the sorted sequence of the union of the unique elements of both.
###Code
def union(array1, array2):
return np.union1d(array1,array2)
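# Example usage (illustrative): union(np.array([3, 1, 2, 2]), np.array([2, 4])) returns array([1, 2, 3, 4])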
###Output
_____no_output_____
###Markdown
11. Create a function that takes a one-dimensional array of names, a matrix of numeric values with the same width as the names array, and one of the names from the first list. It should return, in a tuple, the descriptive statistics (minimum, maximum, mean, median and standard deviation) of the vector of values "associated" with the selected name by position. Keep in mind that the given name may not be in the list, and handle that case.
###Code
def descriptivos(nombres,numerico,nombre):
if nombre not in nombres:
return "El nombre no existe"
else:
num_col = numerico.shape[1]
if num_col != nombres.size:
return "Los tamaños de los arrays no coinciden"
else:
nombre_selec= numerico[:, nombres==nombre] #numerico.T[nombres==nombre]
datos=(np.min(nombre_selec),np.max(nombre_selec),np.median(nombre_selec),np.mean(nombre_selec),np.std(nombre_selec))
return datos
nombres_ej=np.array(["a","b","c","d"])
numerico_ej=np.random.randn(8,4)
nombre_ej="a"
descriptivos(nombres_ej,numerico_ej,nombre_ej)
###Output
(-1.2248285355325208, 1.229659988262854, 0.01892412209714324, 0.08538476439638658, 0.7123988849157867)
|
SMS Spam Collection Data Set.ipynb | ###Markdown
https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection
Data Set Information:
This corpus has been collected from free or free-for-research sources on the Internet:
-> A collection of 425 SMS spam messages was manually extracted from the Grumbletext Web site. This is a UK forum in which cell phone users make public claims about SMS spam messages, most of them without reporting the very spam message received. The identification of the text of spam messages in the claims is a very hard and time-consuming task, and it involved carefully scanning hundreds of web pages. The Grumbletext Web site is: [Web Link].
-> A subset of 3,375 randomly chosen SMS ham messages from the NUS SMS Corpus (NSC), which is a dataset of about 10,000 legitimate messages collected for research at the Department of Computer Science at the National University of Singapore. The messages largely originate from Singaporeans and mostly from students attending the University. These messages were collected from volunteers who were made aware that their contributions were going to be made publicly available. The NUS SMS Corpus is available at: [Web Link].
-> A list of 450 SMS ham messages collected from Caroline Tag's PhD Thesis available at [Web Link].
-> Finally, we have incorporated the SMS Spam Corpus v.0.1 Big. It has 1,002 SMS ham messages and 322 spam messages and it is publicly available at: [Web Link]. This corpus has been used in the following academic research:
[1] Gómez Hidalgo, J.M., Cajigas Bringas, G., Puertas Sanz, E., Carrero García, F. Content Based SMS Spam Filtering. Proceedings of the 2006 ACM Symposium on Document Engineering (ACM DOCENG'06), Amsterdam, The Netherlands, 10-13, 2006.
[2] Cormack, G. V., Gómez Hidalgo, J. M., and Puertas Sánz, E. Feature engineering for mobile (SMS) spam filtering. Proceedings of the 30th Annual International ACM Conference on Research and Development in Information Retrieval (ACM SIGIR'07), New York, NY, 871-872, 2007.
[3] Cormack, G. V., Gómez Hidalgo, J. M., and Puertas Sánz, E. Spam filtering for short messages. Proceedings of the 16th ACM Conference on Information and Knowledge Management (ACM CIKM'07). Lisbon, Portugal, 313-320, 2007.
Attribute Information:
The collection is composed of just one text file, where each line has the correct class followed by the raw message. We offer some examples below:
ham What you doing?how are you?
ham Ok lar... Joking wif u oni...
ham dun say so early hor... U c already then say...
ham MY NO. IN LUTON 0125698789 RING ME IF UR AROUND! H*
ham Siva is in hostel aha:-.
ham Cos i was out shopping wif darren jus now n i called him 2 ask wat present he wan lor. Then he started guessing who i was wif n he finally guessed darren lor.
spam FreeMsg: Txt: CALL to No: 86888 & claim your reward of 3 hours talk time to use from your phone now! ubscribe6GBP/ mnth inc 3hrs 16 stop?txtStop
spam Sunshine Quiz! Win a super Sony DVD recorder if you canname the capital of Australia? Text MQUIZ to 82277. B
spam URGENT! Your Mobile No 07808726822 was awarded a L2,000 Bonus Caller Prize on 02/09/03! This is our 2nd attempt to contact YOU! Call 0871-872-9758 BOX95QU
Note: the messages are not chronologically sorted.
###Code
import pandas as pd
messages=pd.read_csv('/media/gaurav/DATASCIENCE/data science/NLP/sms spam or ham/SMSSpamCollection',sep='\t',names=['Label','Message'])
messages
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
ps=PorterStemmer()
corpus=[]
for i in range(0,len(messages)):
    review=re.sub('[^a-zA-Z]',' ',messages['Message'][i])  # keep letters only
    review=review.lower()
    review=review.split()
    review=[ps.stem(word) for word in review if not word in stopwords.words('english')]
    review=' '.join(review)  # rejoin with spaces so CountVectorizer can tokenize the words
    corpus.append(review)
from sklearn.feature_extraction.text import CountVectorizer
cv=CountVectorizer()
x=cv.fit_transform(corpus).toarray()
x
y=pd.get_dummies(messages['Label'])
y=y.iloc[:,1].values
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.30,random_state=0)
from sklearn.naive_bayes import MultinomialNB
NB=MultinomialNB()
NB.fit(x_train,y_train)
y_pred=NB.predict(x_test)
from sklearn.metrics import confusion_matrix,accuracy_score
cm = confusion_matrix(y_test,y_pred)
cm
score=accuracy_score(y_test,y_pred)
score
from xgboost import XGBClassifier
xgb=XGBClassifier().fit(x_train,y_train)
y_pred_xgb=xgb.predict(x_test)
cm_xgb=confusion_matrix(y_test,y_pred_xgb)
cm_xgb
score_xgb=accuracy_score(y_test,y_pred_xgb)
score_xgb
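# A minimal usage sketch (assumptions: `cv`, `ps` and `NB` are the fitted objects from above,
# and label 1 means "spam" because get_dummies orders the columns ham, spam).
new_message = "Congratulations! You have won a free prize, call now to claim"
cleaned = re.sub('[^a-zA-Z]', ' ', new_message).lower().split()
cleaned = ' '.join(ps.stem(word) for word in cleaned if word not in stopwords.words('english'))
NB.predict(cv.transform([cleaned]).toarray())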
###Output
_____no_output_____ |
ist256-in-class-examples/Lesson09-lists.ipynb | ###Markdown
Input: a file of known passwords. Input: your password. Output: whether or not your password is on the list of known passwords. 1) load the file of known passwords into a list 2) input your password 3) check for your password in the list 4) if password in the list print "not a good password" 5) else print "password not on the list"
###Code
def get_passwords():
passwords = []
pwfile = 'bad-passwords.txt'
with open(pwfile,'r') as f:
for line in f:
passwords.append(line.strip())
return passwords
def strip_numbers(pw):
    # Strip the digits out of the password so that, e.g., "123mike456" is
    # checked against the list as "mike" (a common way to pad a weak password).
    numbers = "0123456789"
    for letter in pw:
        if letter in numbers:
            pw = pw.replace(letter,"")
    return pw
from getpass import getpass
passwords = get_passwords()
pw = getpass("Enter your password: ")
pw = strip_numbers(pw)
if pw in passwords:
print("Uh-Oh! Your password is too simple! It's on the list! ")
else:
print("Your password is not on the list")
# pw=""
###Output
Enter PW: 123mike456
23mike456
3mike456
mike456
mike56
mike6
mike
|
class02c_find_R_template.ipynb | ###Markdown
CS446/546 Class Session 2 - Adjacency Forests Comparing asymptotic running time for testing two vertices for an edge In this exercise, we'll compare the asymptotic computational running time for testing if there is an edge between a pair of vertices, averaged over all pairs of vertices in the graph. We'll do it for a series of undirected graphs (each generated using a Barabási-Albert model), each with 1000 vertices. We will vary the number of edges in the graph; each graph will have a different average number of neighbors per vertex (i.e., each graph will have a different average vertex degree). We will time how long it takes to test all possible pairs of vertices for whether or not there is an edge between them, for each of four different graph data structures (adjacency matrix, adjacency list, edge list, and adjacency forest). First, we import all the R packages that we will need for this exercise:
###Code
suppressPackageStartupMessages(library(igraph))
###Output
_____no_output_____
###Markdown
We'll need to start by creating a function `get_adj_tree` that will accept an adjacency list data structure and will create an "adjacency forest" data structure representing the graph. NOTE: I have deleted the line of code that creates a new environment; see `?new.env` for help.
###Code
get_adj_tree <- function(adj_list) {
n <- length(adj_list)
myforest <- list()
for (i in 1:n) {
FILL IN HERE
for (j in as.vector(adj_list[[i]])) {
FILL IN HERE # convert j to character string and use it as a key into newenv; insert a 1
}
myforest[[i]] <- newenv
}
myforest
}
###Output
_____no_output_____
###Markdown
Now, define a function that will test whether vertices `i` and `j` are connected, using an adjacency matrix:
###Code
find_matrix <- function(gmat, i, j) {
FILL IN HERE
}
###Output
_____no_output_____
###Markdown
Now, define a function that will test whether vertices `i` and `j` are connected, using an adjacency list. You may find the function `%in%` useful:
###Code
find_adj_list <- function(adj_list, i, j) {
FILL IN HERE
}
###Output
_____no_output_____
###Markdown
Now, define a function that will test whether vertices `i` and `j` are connected, using an edge list. You may find the function `any` useful:
###Code
find_edge_list <- function(edge_list, i, j) {
any((edge_list[,1] == i) & (edge_list[,2] == j)) |
any((edge_list[,2] == i) & (edge_list[,1] == j))
}
###Output
_____no_output_____
###Markdown
Now, define a function that will test whether vertices `i` and `j` are connected, using an adjacency forest. You may find the function ``is.null`` useful:
###Code
find_adj_tree <- function(adj_tree, i, jstr) {
FILL IN HERE
}
###Output
_____no_output_____
###Markdown
This is the simulation code; note that we now have two parameters, "n" and "k" (n is the number of vertices in the graph, and k is the average vertex degree. We'll actually be keeping n fixed and varying k for this exercise.
###Code
do_sim <- function(n, k) {
nrep <- 1
nsubrep <- 1
simdf <- do.call(rbind,
replicate(nrep, {
g <- sample_pa(n, out.seq=rep(k, n), directed=FALSE)
g_matrix <- as.matrix(as_adjacency_matrix(g))
g_adj_list <- as_adj_list(g)
g_edge_list <- as_edgelist(g)
g_adj_tree <- get_adj_tree(g_adj_list)
# this is for setting up the (admittedly weird) R way of doing a
# double "for" loop (see "mapply" below)
allvals <- expand.grid(1:n, 1:n)
# need this because "as.character" is kind of slow
jstrs <- as.character(1:n)
time_mat <- system.time(
replicate(nsubrep, {
mapply(function(i, j) {
find_matrix(g_matrix, i, j)
}, allvals$Var1, allvals$Var2)
})
)[1]
time_adj_list <- system.time(
replicate(nsubrep, {
mapply(function(i, j) {
find_adj_list(g_adj_list, i, jstrs[j])
}, allvals$Var1, allvals$Var2)
})
)[1]
time_adjacency_forest <- system.time(
replicate(nsubrep, {
mapply(function(i, j) {
find_adj_tree(g_adj_tree, i, jstrs[j])
}, allvals$Var1, allvals$Var2)
})
)[1]
rowdf <- data.frame(matrix=time_mat,
adjlist=time_adj_list,
adjforest=time_adjacency_forest)
rowdf
}, simplify=FALSE)
)
# average over replicates
simres <- apply(simdf, 2, mean)
# get results in microseconds, on a per-vertex-pair basis
1000000*simres/(n*(n-1)/2)
}
###Output
_____no_output_____
###Markdown
Call the do_sim function for four different values of "k" (the average vertex degree), and convert the resulting list (of single-row data frames) to a four-row data frame:
###Code
sim_data_df <- do.call(rbind, lapply(c(1, 5, 10, 100),
function(k) {do_sim(1000, k)}))
sim_data_df
###Output
_____no_output_____ |
test/Notebooks/telem.ipynb | ###Markdown
Annotations
###Code
link = 'https://docs.google.com/spreadsheets/d/e/2PACX-1vQ5JRPuanz8kRkVKU6BsZReBNENKglrLQDj1CTWnM1AqpxdWdWb3BEEzSeIcuPq9rSLNwzux_1l7mJb/pub?gid=1668794547&single=true&output=csv'
observation = pd.read_csv(link, parse_dates=["Timestamp_Overrode"], index_col=["Timestamp_Overrode"])
observation.index = observation.index.tz_localize('America/New_York',ambiguous='infer')
notes= pd.DataFrame(observation[['note','sensor','Coord_X_m', 'Coord_Y_m', 'Coord_Z_m','Position_HumanReadable']])
notes.sort_index( inplace=True )
notes = notes["2020-01-01 ":"2020- "]
queryv = '''
SELECT *
FROM values;
'''
values = pd.read_sql(queryv,engine)
values
###Output
_____no_output_____
###Markdown
Telemetry
###Code
queryt = '''
SELECT * FROM telemetry
order by epoch DESC
limit 10;
'''
tele = pd.read_sql(queryt,engine)
tele
queryt = '''
SELECT "epoch", "sensor", "ID", "Ver","ZeroWind","Lat", "Lng", "Coord_X", "Coord_Y", "Coord_Z"
FROM telemetry
WHERE "Ver" = '0.7.9.0B_GBP'
order by epoch DESC
limit 10;
'''
tele = pd.read_sql(queryt,engine)
tele
queryt = '''
SELECT DISTINCT "epoch", "sensor", "ID", "Ver","ZeroWind","Lat", "Lng", "Coord_X", "Coord_Y", "Coord_Z"
FROM telemetry
WHERE "Ver" = '0.6.9.9B_GBP'
--WHERE "Ver" = '0.7.9.0B_GBP'
order by sensor DESC
limit 10;
'''
tele = pd.read_sql(queryt,engine)
tele
###Output
_____no_output_____
###Markdown
FW_version
###Code
#ver = "\''0.6.9.9B_GBP\'"
ver = "\'0.7.9.1B_GBP\'"
querya = " SELECT DISTINCT \"sensor\", \"ID\", \"Ver\",\"ZeroWind\",\"Lat\", \"Lng\" FROM telemetry WHERE \"Ver\" = "
queryb = " order by sensor;"
queryt= querya+ver+queryb
tele = pd.read_sql(queryt,engine)
tele
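# Hedged alternative (assumes a PostgreSQL/psycopg2 connection behind `engine`; other drivers
# use a different placeholder style): let the driver bind the firmware version instead of
# splicing escaped quotes into the SQL string by hand.
query_param = 'SELECT DISTINCT "sensor", "ID", "Ver", "ZeroWind", "Lat", "Lng" FROM telemetry WHERE "Ver" = %(ver)s ORDER BY sensor;'
pd.read_sql(query_param, engine, params={'ver': '0.7.9.1B_GBP'})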
###Output
_____no_output_____ |
_notebooks/2020-07-28-Meeting.ipynb | ###Markdown
JATA Tools V1> Data Fetchers for Interactivity: AJHS Records, Finding Guides, and Loeb Images.- toc:true- branch: master- badges: true- comments: true- author: Blaise- permalink: /JATA1/- categories: [fastpages, jupyter, dev, wip, check-in]
###Code
#hide
!pip install requests
!pip install selenium
#hide
!pip install beautifulsoup4
#hide
import numpy as np
import pandas as pd
import requests
from bs4 import BeautifulSoup
import bs4
import lxml.etree as xml
import urllib.request
import re
from pandas.io.html import read_html
from selenium import webdriver
from timeit import default_timer as timer
#hide
def striplist(list):
    """Pull the inner text out of a list of HTML tag strings, e.g. '<h4>Name</h4>' -> 'Name'."""
out = []
# print(list)
for i in list:
stin = str(i)
split = (stin.split('>'))
otherside = (split[1].split('<'))
out_app = otherside[0]
out.append(out_app)
# b = str(i.split('>')[1])
# print(b)
# out.append(i)
return out
def find_between( s, first, last ):
    """Return the substring of s between first and last; if last is missing, return everything after first; on any other error, print it and return 'NA'."""
try:
try:
start = s.index( first ) + len( first )
end = s.index( last, start )
return s[start:end]
except ValueError:
start = s.index( first ) + len( first )
# end = s.index( last, start )
return s[start:]
# return "ERROR"
except BaseException as e:
print(e, first, last)
return 'NA'
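# Quick illustration of find_between on a made-up string:
find_between("Dates 1900 - 1950 Extent", "Dates", "Extent")  # -> ' 1900 - 1950 '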
###Output
_____no_output_____
###Markdown
Center For Jewish History Archives Scraper - Built-in support for AJHS - Can be easily used for any repo on CJH's ArchivesSpace - Could also be used in other ArchivesSpace scraping situations.
###Code
#collapse-hide
class CJH_Archives:
    # Defined as a @staticmethod so that get_meta_data below can call it through self.
    @staticmethod
    def scrape_all_records(object_type='records', start_page=1, stop_after_pages=0):
if start_page <= 0:
print("Must start at minimum of page 1")
start_page=1
page=start_page
else:
page = start_page
if object_type.upper() == 'RECORDS':
print("Scraping All Individual Records")
# page = start_page
headless_url = "https://archives.cjh.org/repositories/3/objects?q%5B%5D=%2A&op%5B%5D=OR&field%5B%5D=keyword&from_year%5B%5D=&to_year%5B%5D=&limit=digital_object,archival_object&sort=title_sort%20asc&page="
base_URL = str(headless_url + str(page))
elif object_type.upper() == 'COLLECTIONS':
# page = start_page
print("Scraping Collections (Finding Aids)")
headless_url = "https://archives.cjh.org/repositories/3/resources?q[]=%2A&op[]=&field[]=title&from_year[]=&to_year[]=&limit=resource&sort=title_sort%20asc&page="
base_URL = str(headless_url + str(page))
def scrape_record(name, link, web_page, object_type):
# print(web_page, link)
# (.+?)
# meta_dict = find_between(str(i),'<script type="application/ld+json">',' </script>' )
# meta_dict = re.findall(r'>(', str(web_page))
title = (web_page.title)
part_of = web_page.find_all('ul',{'class':'breadcrumb'})
part_of = part_of[0].find_all('a')
location_tupes = []
for i in part_of:
link = (str(i).split('"')[1])
found_loc_name = (str(i).split('>')[1]).split('<')[0]
tupp = (found_loc_name,link)
location_tupes.append(tupp)
# location_name = (str(i.split('>')[1])).split('<')[0]
# stri = "<a href="
# part_of = list(map(lambda st: str.replace(st,stri, ""), part_of))
locs = (location_tupes)
subnotes = web_page.find_all('div', {'class': 'upper-record-details'})[0].text
# print(subnotes)
div_data_1 = [("Name", name), ("Link",link)]
acord = web_page.find_all('div', {'class': 'acc_holder clear'})[0].text
acc_data = []
if object_type.upper() == 'RECORDS':
possible_fields_1=[
"Scope and Contents",
"Dates",
"Language of Materials",
"Access Restrictions",
"Extent",
]
possible_fields_2 = [
"Related Names",
"Digital Material",
"Physical Storage Information",
"Repository Details",
]
elif object_type.upper() == 'COLLECTIONS':
possible_fields_1=[
"Scope and Content Note",
"Dates",
"Creator",
"Access Restrictions",
"Use Restrictions",
"Conditions Governing Access",
"Conditions Governing Use",
"Extent",
"Language of Materials"
]
possible_fields_2 = [
"Additional Description",
"Subjects",
"Related Names",
"Finding Aid & Administrative Information",
'Physical Storage Information',
'Repository Details',
]
##subnotes
b1 = []
for i in possible_fields_1:
if i in str(subnotes):
out=True
else:
out = False
missingTuple = (i, '')
div_data_1.append(missingTuple)
b1.append(out)
##accordian
b2=[]
for i in possible_fields_2:
if i in str(acord):
out=True
else:
out = False
missingTuple = (i, '')
div_data_1.append(missingTuple)
b2.append(out)
# print(b1, b2)
xs=possible_fields_1
ys=b1
# sec_1_heads = [x for x, y in zip(xs, ys) if y == 'True']
filtered1 = np.array(xs)[np.array(ys)]
xs=possible_fields_2
ys=b2
filtered2 = np.array(xs)[np.array(ys)]
# sec_2_heads = [x for x, y in zip(xs, ys) if y == 'True']
# print(filtered1,filtered2,'xyz')
indexer = 0
for i in filtered1:
# print(len(filtered1),len(filtered2), (indexer))
first = i
try:
next = filtered1[indexer+1]
except BaseException as e:
next = '$$$'
# print(first, next)
value = find_between(subnotes, first, next)
# print(first, next, value)
value = value.replace('\n',' ').strip().replace('\t', ' ')
# print(first, next, value)
val = (i,value)
div_data_1.append(val)
indexer+=1
# print(indexer, first, next)
indexer = 0
for i in filtered2:
first = i
try:
next = filtered1[indexer+1]
except BaseException as e:
next = '$$$'
# print(first,next)
value = find_between(acord, first, next)
# print(first, next, value)
value = value.replace('\n',' ').strip().replace('\t', ' ')
val = (i,value)
div_data_1.append(val)
indexer+=1
# print(indexer, first, next)
# exit
bigList = (div_data_1)
return tuple(bigList)
URL = base_URL
web_page = BeautifulSoup(requests.get(URL, {}).text, "lxml")
pagnation = web_page.find_all('ul',{'class':'pagination'})[0].find_all('li')
next_link = (web_page.find_all('li',{'class':'next'})[0]).find('a',href=True)
linkky = str(next_link)
nextPage_ = str("https://archives.cjh.org" + (linkky.split('"')[1]))
# exit
# print(pagnation)
pageList = []
s_pages = []
for i in pagnation:
number = str(i).split('>')[2].split('<')[0]
pageList.append((number))
# print("Pages", pageList)
# break
test_list=[]
for i in pageList:
try:
# print(i)
# print( int(i))
test_list.append(int(i))
except:
pass
# test_list = [int(i) for i in pageList if not (i).isdigit()]
# print(test_list)
last_page__ = (max(test_list))
__lastPage = last_page__ - (last_page__ - stop_after_pages)
print()
# exit
page_counter = 1
while page_counter < __lastPage:
row_list = []
pagez= page_counter
print("Scraping Page", page_counter)
page_current = page_counter
URL = str(headless_url + str(page_current))
web_page = BeautifulSoup(requests.get(URL, {}).text, "lxml")
h3s = web_page.find_all('h3')
# summs = web_page
tupleList = []
for i in h3s:
# print(i)
try:
link = ((str(i).split('href="')[1]).split('"'))[0]
name = (str(i).split('">'))[1].split("</a")[0]
# print(link, name)
# break
data_tuple = (name ,str("https://archives.cjh.org" + link), link)
tupleList.append(data_tuple)
except BaseException as e:
print(e, i)
page_counter+=1
archIndex = pd.DataFrame.from_records(tupleList, columns = ['Names', 'Link', 'Location'])
# ...
counter = 0
for i in archIndex.itertuples():
counter +=1
name = i.Names
link = i.Link
link123 = link
Location=i.Location
web_page = BeautifulSoup(requests.get(link, {}).text, "lxml")
record_row = scrape_record(name, link123, web_page,object_type.upper() )
row_list.extend(record_row)
print("Record: ",counter, link123)
s_pages.extend(row_list)
d = {}
for x, y in s_pages:
d.setdefault(x, []).append(y)
df = pd.DataFrame.from_records(d).drop_duplicates()
if object_type.upper() == 'RECORDS':
df[['Date_1','Date_2']] = (df['Dates'].str.split('–', n=1,expand=True))
else:
df['Use Terms'] = df['Use Restrictions']+df['Conditions Governing Use']
# df1.replace('NA',np.nan,inplace=True)
df['Access Terms'] = df[ 'Access Restrictions']+df['Conditions Governing Access']
dropThese = [
'Use Restrictions',
'Conditions Governing Use',
'Access Restrictions',
'Conditions Governing Access',
]
df.drop(columns=dropThese,inplace=True)
# df1 = df1.apply(lambda x: None if x.isnull().all() else ';'.join(x.dropna()), axis=1)
return (df)
def __init__(self, repo):
self.repo = repo
def get_meta_data(self, object_type,page_to_start_at,maximum_pages_to_scrape):
if self.repo.upper() == 'AJHS':
print('Creating CJHA Scraper Object for AJHS')
            self.meta_df = self.scrape_all_records(object_type, page_to_start_at, maximum_pages_to_scrape)
return self.meta_df
else:
print("WIP WIP WIP WIP WIP WIP")
pass
###Output
_____no_output_____
###Markdown
Building AJHS Archive DatasetsThe below line of code can be used to scrape the archive for a given number of pages (input 0 for all records). There are two object types, records and collections. Collections are digitized finding aids and records are all contained in some sort of collection. Some are under multiple collections. The below lines of code generate dataframes for the first 3 pages of records and collections
###Code
#collapse-hide
# %%capture
#records
ajhs_recs = CJH_Archives('ajhs').get_meta_data('records', 1, 3)
#collections
ajhs_cols= CJH_Archives('ajhs').get_meta_data('collections', 1, 3)
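# Optional: persist the scraped metadata (illustrative filenames) so the pages
# do not have to be re-fetched for later analysis.
ajhs_recs.to_csv('ajhs_records_sample.csv', index=False)
ajhs_cols.to_csv('ajhs_collections_sample.csv', index=False)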
###Output
_____no_output_____
###Markdown
Output For Records
###Code
#hide-input
ajhs_recs
###Output
_____no_output_____
###Markdown
Output for Collections
###Code
#hide-input
ajhs_cols
###Output
_____no_output_____
###Markdown
Loeb Data Scraper The [Loeb data scraper](https://loebjewishportraits.com) fetches metadata and can download images for paintings, silhouettes, and photographs from the archive (or all of the above).
###Code
#collapse-hide
class loeb:
"""
This class can be used to interact with the loeb image data base.
The init funciton takes 1 argument which is the type of data to retreive.
The input should be one of the following : 'paintings', silhouettes, photographs, or 'all'
"""
def __init__(self, data_set='paintings'):
def scrape_loeb(URL):
requests.get(URL)
web_page = bs4.BeautifulSoup(requests.get(URL, {}).text, "lxml")
table = web_page.find_all('portfolio')
div = web_page.find(id="portfolio")
linkList = web_page.find_all('div',{'class':'work-info'})
df_dict = []
for links in linkList:
twolinks = links.find_all('a', href=True)
details = str(twolinks[0]).split('"')[1]
img = str(twolinks[1]).split('"')[3]
new_df_tuple = {'info_link':details, 'img_link':img}
df_dict.append(new_df_tuple)
listOfDfs = []
counter = 0
df = pd.DataFrame.from_records(df_dict)
for i in df.itertuples():
img = i.img_link
info = i.info_link
# print(info)
# print(info)
# download_image(img,'test.jpg')
profile = bs4.BeautifulSoup(requests.get(info, {}).text, "lxml")
img = str(profile.find_all('img',src=True)[0]).split('"')[3]
# print(img)
# print(profile)
# print(profile)
a = profile.find_all('h4')
# print(a)
b = profile.find_all("h3")
# bio = profile
linkts = str(profile.find_all('a',{'id':'viewzoom'},href=True)[1]).split('"')[1]
def scrape_bio_loeb(url):
bio = bs4.BeautifulSoup(requests.get(url, {}).text, "lxml")
abc=str(bio.find_all('p')[1]).replace("<p>", " ")
abcd=(str(abc).replace('</p>', " "))
bio_text = str(str(abcd.replace('<i>',' ')).replace("</i>",' '))
s = bio_text
bio_plain = re.sub(r'<.+?>', '', s)
if 'Lorem ipsum dolor sit amet,' in bio_plain:
bio_plain = ''
if "Lorem ipsum dolor sit amet," in s:
s = ''
# bio_escapes = re.sub(r'>.+?<', '', s)
return bio_plain, s
bio__ = scrape_bio_loeb(linkts)
# print(bio__)
# print(linkts,len(linkts), "hkgdfsjhsfgakljhashlf")
# break
headers4 = striplist((a))
headers4_ = ['Name']
for i in headers4:
headers4_.append(i)
# headers4_ = .extend(headers4)
headers3 = striplist( b)
# print(headers4_, headers3)
# break
headers4_ = headers4_[:-1]
headers4_.append('Bio_Plain')
headers3.append(bio__[0])
headers4_.append('Bio_Links')
headers3.append(bio__[1])
df1 = pd.DataFrame({'Label':headers4_ , 'Value': headers3})
# name_for_file = headers[0][1]
# print(name_for_file, headers, headers[0])
self.image_cache.append((img, df1))
listOfDfs.append(df1)
# download_image(img, str(str(counter) + '.jpg'))
counter+=1
self.list_of_dfs.extend(listOfDfs)
self.list_of_dfs = []
self.image_cache = []
if data_set.upper() == 'ALL':
data_options = ['paintings', 'silhouettes', 'photographs']
for i in data_options:
print(i)
URL = str("http://loebjewishportraits.com/" + i + '/')
scrape_loeb(URL)
else:
try:
URL = str("http://loebjewishportraits.com/" + data_set + '/')
scrape_loeb(URL)
except BaseException as e:
print(e)
print("Could not find a data set for: ", data_set, "Make sure you input either 'paintings', 'silhouettes', or 'photographs'!")
def get_meta_data(self, export=False):
"""
returns a meta dataframe with each painting as an entry in a row
export can be csv or excel
"""
listy = self.list_of_dfs
transposed = [thing.transpose() for thing in listy]
cc = 1
newList = []
for i in transposed:
# print(len(i.columns))
new_cols = (i.iloc[0])
i.columns = new_cols
i.drop(i.index[0], inplace= True)
long_df_of_entrys = pd.concat(transposed)
long_df_of_entrys.set_index('Name')
return long_df_of_entrys.reset_index()
def download_images(self):
def download_image(link,filename):
urllib.request.urlretrieve(link, filename)
# print('image saved to temp directory')
for i in self.image_cache:
name = (i[1].Value.iloc[0])
fileName = str(name + '.jpg')
try:
download_image(i[0],fileName)
print('Saved', fileName, 'to current directory')
except BaseException as e:
print("Could not download:", fileName, "Error:",e)
###Output
_____no_output_____
###Markdown
Scraping Meta Data and Download Locations for Selected Image Type
###Code
paintings = loeb()
###Output
_____no_output_____
###Markdown
Building a MetaData Dataset for the Paintings
###Code
meta_data = paintings.get_meta_data()
###Output
_____no_output_____
###Markdown
Output For Painting MetaData
###Code
#hide-input
meta_data
###Output
_____no_output_____
###Markdown
Batch Downloading Paintings (Takes a while!)
###Code
# paintings.download_images()
###Output
_____no_output_____ |
complete_model.ipynb | ###Markdown
Training and ExportIn this notebook, I train and export a model to identify dog breeds from photos.
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
import utils
###Output
_____no_output_____
###Markdown
DataGetting the data here is easy, since I did all of the hard work in the data processing script. First, I load in the label vocabulary from a saved numpy array.
###Code
label_vocab = np.load('data/labelvocab.npy')
n_classes = np.shape(label_vocab)[0]
###Output
_____no_output_____
###Markdown
Then, I load in the basis for the transfer learning model so I can get its input size. I'm using one of the pre-trained MobileNet V2 models from TensorFlow Hub because it works very well on limited resources, so I won't need anything fancy (or expensive) to serve the model.
###Code
image_col = hub.image_embedding_column("image", "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/2")
height, width = hub.get_expected_image_size(image_col.module_spec)
depth = hub.get_num_image_channels(image_col.module_spec)
size = (height, width, depth)
###Output
INFO:tensorflow:Using /tmp/tfhub_modules to cache modules.
INFO:tensorflow:Downloading TF-Hub Module 'https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/2'.
INFO:tensorflow:Downloaded TF-Hub Module 'https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/2'.
###Markdown
The input function here is pretty straightforward. It just loads the TFRecords at the given filename, decodes them, shuffles them, and batches them. The function returns a lambda function so I can make versions for both training and validation data.
###Code
def make_input_fn(fname, repeat=1, batch_size=256):
ds = (tf.data.TFRecordDataset(fname)
.map(lambda im:
utils.decode_image_example(im, size))
.shuffle(batch_size*2) # arbitrary
.repeat(repeat)
.batch(batch_size)
.prefetch(2))
return lambda: ds.make_one_shot_iterator().get_next()
train_input_fn = make_input_fn('data/dogs224_train.tfrecord', 3)
valid_input_fn = make_input_fn('data/dogs224_valid.tfrecord')
###Output
_____no_output_____
###Markdown
ModelHere's the fun (and slow) part: training the model. Keeping with my theme of simplicity, I train a canned linear classifier that consumes the output of MobileNet and outputs a prediction in terms of our labels.
###Code
est = tf.estimator.LinearClassifier(
[image_col],
n_classes=n_classes,
label_vocabulary=list(label_vocab)
)
###Output
INFO:tensorflow:Using default config.
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpwppe1d9r
INFO:tensorflow:Using config: {'_model_dir': '/tmp/tmpwppe1d9r', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fe7515dc1d0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
###Markdown
I turn down log verbosity here because TF Hub modules produce a monumental amount of log spam when they first load in. I also periodically print evaluation metrics from the validation data.
###Code
tf.logging.set_verbosity(tf.logging.WARN)
for _ in range(5):
est.train(train_input_fn)
print(est.evaluate(valid_input_fn))
###Output
{'accuracy': 0.8022352, 'average_loss': 1.9217477, 'loss': 439.43964, 'global_step': 96}
{'accuracy': 0.8070943, 'average_loss': 1.7060735, 'loss': 390.12216, 'global_step': 192}
{'accuracy': 0.81341106, 'average_loss': 1.6882304, 'loss': 386.04202, 'global_step': 288}
{'accuracy': 0.81341106, 'average_loss': 1.680952, 'loss': 384.3777, 'global_step': 384}
{'accuracy': 0.8158406, 'average_loss': 1.6717596, 'loss': 382.2757, 'global_step': 480}
###Markdown
My serving input function takes in a vector (of unknown length) of strings that represent encoded images. They're then preprocessed and resized in the same manner as the training data (with the same function) before being sent to the model for prediction.
###Code
def serving_input_fn():
receiver = tf.placeholder(tf.string, shape=(None))
examples = tf.parse_example(
receiver,
{
"image": tf.FixedLenFeature((), tf.string),
}
)
decode_and_prep = lambda image: utils.preprocess_image(image, size[:-1])
images = tf.map_fn(decode_and_prep, examples["image"],
tf.float32)
return tf.estimator.export.ServingInputReceiver(
{"image": images},
receiver,
)
export_dir = est.export_savedmodel("serving/model/", serving_input_fn)
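# Hedged follow-up: inspect the exported signature with TensorFlow's SavedModel CLI to see
# the exact input/output tensor names before wiring up a client. `export_dir` is the
# timestamped directory captured above (returned as bytes in TF 1.x, hence the decode).
export_path = export_dir.decode() if isinstance(export_dir, bytes) else export_dir
!saved_model_cli show --dir {export_path} --all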
###Output
_____no_output_____ |
src/lab2/nemo/tutorials/asr/03_Speech_Commands.ipynb | ###Markdown
IntroductionThis Speech Command recognition tutorial is based on the MatchboxNet model from the paper ["MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition"](https://arxiv.org/abs/2004.08531). MatchboxNet is a modified form of the QuartzNet architecture from the paper "[QuartzNet: Deep Automatic Speech Recognition with 1D Time-Channel Separable Convolutions](https://arxiv.org/pdf/1910.10261.pdf)" with a modified decoder head to suit classification tasks.The notebook will follow the steps below: - Dataset preparation: Preparing Google Speech Commands dataset - Audio preprocessing (feature extraction): signal normalization, windowing, (log) spectrogram (or mel scale spectrogram, or MFCC) - Data augmentation using SpecAugment "[SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779)" to increase the number of data samples. - Develop a small Neural classification model that can be trained efficiently. - Model training on the Google Speech Commands dataset in NeMo. - Evaluation of error cases of the model by audibly hearing the samples
###Code
# Some utility imports
import os
from omegaconf import OmegaConf
# This is where the Google Speech Commands directory will be placed.
# Change this if you don't want the data to be extracted in the current directory.
# Select the version of the dataset required as well (can be 1 or 2)
DATASET_VER = 1
data_dir = './google_dataset_v{0}/'.format(DATASET_VER)
if DATASET_VER == 1:
MODEL_CONFIG = "matchboxnet_3x1x64_v1.yaml"
else:
MODEL_CONFIG = "matchboxnet_3x1x64_v2.yaml"
if not os.path.exists(f"configs/{MODEL_CONFIG}"):
!wget -P configs/ "https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/examples/asr/conf/matchboxnet/{MODEL_CONFIG}"
###Output
_____no_output_____
###Markdown
Data PreparationWe will be using the open-source Google Speech Commands Dataset (we will use V1 of the dataset for the tutorial but require minor changes to support the V2 dataset). These scripts below will download the dataset and convert it to a format suitable for use with NeMo. Download the datasetThe dataset must be prepared using the scripts provided under the `{NeMo root directory}/scripts` sub-directory. Run the following command below to download the data preparation script and execute it.**NOTE**: You should have at least 4GB of disk space available if you’ve used --data_version=1; and at least 6GB if you used --data_version=2. Also, it will take some time to download and process, so go grab a coffee.**NOTE**: You may additionally pass a `--rebalance` flag at the end of the `process_speech_commands_data.py` script to rebalance the class samples in the manifest.
###Code
if not os.path.exists("process_speech_commands_data.py"):
!wget https://raw.githubusercontent.com/NVIDIA/NeMo/$BRANCH/scripts/dataset_processing/process_speech_commands_data.py
###Output
_____no_output_____
###Markdown
Preparing the manifest fileThe manifest file is a simple file that has the full path to the audio file, the duration of the audio file, and the label that is assigned to that audio file. This notebook is only a demonstration, and therefore we will use the `--skip_duration` flag to speed up construction of the manifest file.**NOTE: When replicating the results of the paper, do not use this flag and prepare the manifest file with correct durations.**
###Code
!mkdir {data_dir}
!python process_speech_commands_data.py --data_root={data_dir} --data_version={DATASET_VER} --skip_duration --log
print("Dataset ready !")
###Output
_____no_output_____
###Markdown
Prepare the path to manifest files
###Code
dataset_path = 'google_speech_recognition_v{0}'.format(DATASET_VER)
dataset_basedir = os.path.join(data_dir, dataset_path)
train_dataset = os.path.join(dataset_basedir, 'train_manifest.json')
val_dataset = os.path.join(dataset_basedir, 'validation_manifest.json')
test_dataset = os.path.join(dataset_basedir, 'validation_manifest.json')
###Output
_____no_output_____
###Markdown
Read a few rows of the manifest file Manifest files are the data structure used by NeMo to declare a few important details about the data :1) `audio_filepath`: Refers to the path to the raw audio file 2) `command`: The class label (or speech command) of this sample 3) `duration`: The length of the audio file, in seconds.
###Code
!head -n 5 {train_dataset}
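# Each printed line is a JSON object with the three fields described above, e.g. (hypothetical values):
# {"audio_filepath": "<path>/yes/0a7c2a8d_nohash_0.wav", "duration": 1.0, "command": "yes"}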
###Output
_____no_output_____
###Markdown
Training - PreparationWe will be training a MatchboxNet model from the paper ["MatchboxNet: 1D Time-Channel Separable Convolutional Neural Network Architecture for Speech Commands Recognition"](https://arxiv.org/abs/2004.08531). The benefit of MatchboxNet over JASPER models is that they use 1D Time-Channel Separable Convolutions, which greatly reduce the number of parameters required to obtain good model accuracy.MatchboxNet models generally follow the model definition pattern QuartzNet-[BxRXC], where B is the number of blocks, R is the number of convolutional sub-blocks, and C is the number of channels in these blocks. Each sub-block contains a 1-D masked convolution, batch normalization, ReLU, and dropout.An image of QuartzNet, the base configuration of MatchboxNet models, is provided below.
###Code
# NeMo's "core" package
import nemo
# NeMo's ASR collection - this collections contains complete ASR models and
# building blocks (modules) for ASR
import nemo.collections.asr as nemo_asr
###Output
_____no_output_____
###Markdown
Model ConfigurationThe MatchboxNet Model is defined in a config file which declares multiple important sections.They are:1) `model`: All arguments that will relate to the Model - preprocessors, encoder, decoder, optimizer and schedulers, datasets and any other related information2) `trainer`: Any argument to be passed to PyTorch Lightning
###Code
# This line will print the entire config of the MatchboxNet model
config_path = f"configs/{MODEL_CONFIG}"
config = OmegaConf.load(config_path)
config = OmegaConf.to_container(config, resolve=True)
config = OmegaConf.create(config)
print(OmegaConf.to_yaml(config))
# Preserve some useful parameters
labels = config.model.labels
sample_rate = config.sample_rate
###Output
_____no_output_____
###Markdown
Setting up the datasets within the configIf you'll notice, there are a few config dictionaries called `train_ds`, `validation_ds` and `test_ds`. These are configurations used to setup the Dataset and DataLoaders of the corresponding config.
###Code
print(OmegaConf.to_yaml(config.model.train_ds))
###Output
_____no_output_____
###Markdown
`???` inside configsYou will often notice that some configs have `???` in place of paths. This is used as a placeholder so that the user can change the value at a later time.Let's add the paths to the manifests to the config above.
###Code
config.model.train_ds.manifest_filepath = train_dataset
config.model.validation_ds.manifest_filepath = val_dataset
config.model.test_ds.manifest_filepath = test_dataset
###Output
_____no_output_____
###Markdown
Building the PyTorch Lightning TrainerNeMo models are primarily PyTorch Lightning modules - and therefore are entirely compatible with the PyTorch Lightning ecosystem!Lets first instantiate a Trainer object!
###Code
import torch
import pytorch_lightning as pl
print("Trainer config - \n")
print(OmegaConf.to_yaml(config.trainer))
# Lets modify some trainer configs for this demo
# Checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
config.trainer.gpus = cuda
# Reduces maximum number of epochs to 5 for quick demonstration
config.trainer.max_epochs = 5
# Remove distributed training flags
config.trainer.accelerator = None
trainer = pl.Trainer(**config.trainer)
###Output
_____no_output_____
###Markdown
Setting up a NeMo ExperimentNeMo has an experiment manager that handles logging and checkpointing for us, so let's use it !
###Code
from nemo.utils.exp_manager import exp_manager
exp_dir = exp_manager(trainer, config.get("exp_manager", None))
# The exp_dir provides a path to the current experiment for easy access
exp_dir = str(exp_dir)
exp_dir
###Output
_____no_output_____
###Markdown
Building the MatchboxNet ModelMatchboxNet is an ASR model with a classification task - it generates one label for the entire provided audio stream. Therefore we encapsulate it inside the `EncDecClassificationModel` as follows.
###Code
asr_model = nemo_asr.models.EncDecClassificationModel(cfg=config.model, trainer=trainer)
###Output
_____no_output_____
###Markdown
Training a MatchboxNet ModelAs MatchboxNet is inherently a PyTorch Lightning Model, it can easily be trained in a single line - `trainer.fit(model)` ! Monitoring training progressBefore we begin training, let's first create a Tensorboard visualization to monitor progress
###Code
try:
from google import colab
COLAB_ENV = True
except (ImportError, ModuleNotFoundError):
COLAB_ENV = False
# Load the TensorBoard notebook extension
if COLAB_ENV:
%load_ext tensorboard
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
if COLAB_ENV:
%tensorboard --logdir {exp_dir}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
###Output
_____no_output_____
###Markdown
Training for 5 epochsWe see below that the model begins to get modest scores on the validation set after just 5 epochs of training
###Code
trainer.fit(asr_model)
###Output
_____no_output_____
###Markdown
Evaluation on the Test setLets compute the final score on the test set via `trainer.test(model)`
###Code
trainer.test(asr_model, ckpt_path=None)
###Output
_____no_output_____
###Markdown
Fast TrainingWe can dramatically improve the time taken to train this model by using Multi GPU training along with Mixed Precision.For multi-GPU training, take a look at [the PyTorch Lightning Multi-GPU training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html)For mixed-precision training, take a look at [the PyTorch Lightning Mixed-Precision training section](https://pytorch-lightning.readthedocs.io/en/latest/advanced/amp.html)```python Mixed precision:trainer = Trainer(amp_level='O1', precision=16) Trainer with a distributed backend:trainer = Trainer(gpus=2, num_nodes=2, accelerator='ddp') Of course, you can combine these flags as well.``` Evaluation of incorrectly predicted samplesGiven that we have a trained model, which performs reasonably well, let's try to listen to the samples where the model is least confident in its predictions.For this, we need the support of the librosa library.**NOTE**: The following code depends on librosa. To install it, run the following code block first.
###Code
!pip install librosa
###Output
_____no_output_____
###Markdown
Extract the predictions from the modelWe want to possess the actual logits of the model instead of just the final evaluation score, so we can define a function to perform the forward step for us without computing the final loss. Instead, we extract the logits per batch of samples provided. Accessing the data loadersWe can utilize the `setup_test_data` method in order to instantiate a data loader for the dataset we want to analyze.For convenience, we can access these instantiated data loaders using the following accessors - `asr_model._train_dl`, `asr_model._validation_dl` and `asr_model._test_dl`.
###Code
asr_model.setup_test_data(config.model.test_ds)
test_dl = asr_model._test_dl
###Output
_____no_output_____
###Markdown
Partial Test StepBelow we define a utility function to perform most of the test step. For reference, the test step is defined as follows:```python def test_step(self, batch, batch_idx, dataloader_idx=0): audio_signal, audio_signal_len, labels, labels_len = batch logits = self.forward(input_signal=audio_signal, input_signal_length=audio_signal_len) loss_value = self.loss(logits=logits, labels=labels) correct_counts, total_counts = self._accuracy(logits=logits, labels=labels) return {'test_loss': loss_value, 'test_correct_counts': correct_counts, 'test_total_counts': total_counts}```
###Code
@torch.no_grad()
def extract_logits(model, dataloader):
logits_buffer = []
label_buffer = []
# Follow the above definition of the test_step
for batch in dataloader:
audio_signal, audio_signal_len, labels, labels_len = batch
logits = model(input_signal=audio_signal, input_signal_length=audio_signal_len)
logits_buffer.append(logits)
label_buffer.append(labels)
print(".", end='')
print()
print("Finished extracting logits !")
logits = torch.cat(logits_buffer, 0)
labels = torch.cat(label_buffer, 0)
return logits, labels
cpu_model = asr_model.cpu()
cpu_model.eval()
logits, labels = extract_logits(cpu_model, test_dl)
print("Logits:", logits.shape, "Labels :", labels.shape)
# Compute accuracy - `_accuracy` is a PyTorch Lightning Metric !
acc = cpu_model._accuracy(logits=logits, labels=labels)
print("Accuracy : ", float(acc[0]*100))
###Output
_____no_output_____
###Markdown
Filtering out incorrect samplesLet us now filter out the incorrectly labeled samples from the total set of samples in the test set
###Code
import librosa
import json
import IPython.display as ipd
# First let's create a utility class to remap the integer class labels to actual string label
class ReverseMapLabel:
def __init__(self, data_loader):
self.label2id = dict(data_loader.dataset.label2id)
self.id2label = dict(data_loader.dataset.id2label)
def __call__(self, pred_idx, label_idx):
return self.id2label[pred_idx], self.id2label[label_idx]
# Next, let's get the indices of all the incorrectly labeled samples
sample_idx = 0
incorrect_preds = []
rev_map = ReverseMapLabel(test_dl)
# Remember, evaluated_tensor = (loss, logits, labels)
probs = torch.softmax(logits, dim=-1)
probas, preds = torch.max(probs, dim=-1)
total_count = cpu_model._accuracy.total_counts_k[0]
incorrect_ids = (preds != labels).nonzero()
for idx in incorrect_ids:
proba = float(probas[idx][0])
pred = int(preds[idx][0])
label = int(labels[idx][0])
idx = int(idx[0]) + sample_idx
incorrect_preds.append((idx, *rev_map(pred, label), proba))
print(f"Num test samples : {total_count.item()}")
print(f"Num errors : {len(incorrect_preds)}")
# First lets sort by confidence of prediction
incorrect_preds = sorted(incorrect_preds, key=lambda x: x[-1], reverse=False)
###Output
_____no_output_____
###Markdown
Examine a subset of incorrect samplesLet's print out the (test id, predicted label, ground truth label, confidence) tuple of first 20 incorrectly labeled samples
###Code
for incorrect_sample in incorrect_preds[:20]:
print(str(incorrect_sample))
###Output
_____no_output_____
###Markdown
Define a threshold below which we designate a model's prediction as "low confidence"
###Code
# Filter out how many such samples exist
low_confidence_threshold = 0.25
count_low_confidence = len(list(filter(lambda x: x[-1] <= low_confidence_threshold, incorrect_preds)))
print(f"Number of low confidence predictions : {count_low_confidence}")
###Output
_____no_output_____
###Markdown
Let's hear the samples which the model has least confidence in !
###Code
# First let's create a helper function to parse the manifest files
def parse_manifest(manifest):
data = []
for line in manifest:
line = json.loads(line)
data.append(line)
return data
# Next, let's create a helper function to actually listen to certain samples
def listen_to_file(sample_id, pred=None, label=None, proba=None):
# Load the audio waveform using librosa
filepath = test_samples[sample_id]['audio_filepath']
audio, sample_rate = librosa.load(filepath)
if pred is not None and label is not None and proba is not None:
print(f"Sample : {sample_id} Prediction : {pred} Label : {label} Confidence = {proba: 0.4f}")
else:
print(f"Sample : {sample_id}")
return ipd.Audio(audio, rate=sample_rate)
# Now let's load the test manifest into memory
test_samples = []
with open(test_dataset, 'r') as test_f:
test_samples = test_f.readlines()
test_samples = parse_manifest(test_samples)
# Finally, let's listen to all the audio samples where the model made a mistake
# Note: This list of incorrect samples may be quite large, so you may choose to subsample `incorrect_preds`
count = min(count_low_confidence, 20) # replace this line with just `count_low_confidence` to listen to all samples with low confidence
for sample_id, pred, label, proba in incorrect_preds[:count]:
ipd.display(listen_to_file(sample_id, pred=pred, label=label, proba=proba))
###Output
_____no_output_____
###Markdown
Fine-tuning on a new datasetWe currently trained our dataset on all 30/35 classes of the Google Speech Commands dataset (v1/v2).We will now show an example of fine-tuning a trained model on a subset of the classes, as a demonstration of fine-tuning. Preparing the data-subsetsLet's select 2 of the classes, `yes` and `no` and prepare our manifests with this dataset.
###Code
import json
def extract_subset_from_manifest(name: str, manifest_path: str, labels: list):
manifest_dir = os.path.split(manifest_path)[0]
labels = set(labels)
manifest_values = []
print(f"Parsing manifest: {manifest_path}")
with open(manifest_path, 'r') as f:
for line in f:
val = json.loads(line)
if val['command'] in labels:
manifest_values.append(val)
print(f"Number of files extracted from dataset: {len(manifest_values)}")
outpath = os.path.join(manifest_dir, name)
with open(outpath, 'w') as f:
for val in manifest_values:
json.dump(val, f)
f.write("\n")
f.flush()
print("Manifest subset written to path :", outpath)
print()
return outpath
labels = ["yes", "no"]
train_subdataset = extract_subset_from_manifest("train_subset.json", train_dataset, labels)
val_subdataset = extract_subset_from_manifest("val_subset.json", val_dataset, labels)
test_subdataset = extract_subset_from_manifest("test_subset.json", test_dataset, labels)
###Output
_____no_output_____
###Markdown
Saving/Restoring a checkpointThere are multiple ways to save and load models in NeMo. Since all NeMo models are inherently Lightning Modules, we can use the standard way that PyTorch Lightning saves and restores models.NeMo also provides a more advanced model save/restore format, which encapsulates all the parts of the model that are required to restore that model for immediate use.In this example, we will explore both ways of saving and restoring models, but we will focus on the PyTorch Lightning method. Saving and Restoring via PyTorch Lightning CheckpointsWhen using NeMo for training, it is advisable to utilize the `exp_manager` framework. It is tasked with handling checkpointing and logging (Tensorboard as well as WandB optionally!), as well as dealing with multi-node and multi-GPU logging.Since we utilized the `exp_manager` framework above, we have access to the directory where the checkpoints exist. `exp_manager` with the default settings will save multiple checkpoints for us - 1) A few checkpoints from certain steps of training. They will have `--val_loss=` tags2) A checkpoint at the last epoch of training denotes by `-last`.3) If the model finishes training, it will also have a `--end` checkpoint.
###Code
import glob
print(exp_dir)
# Let's list all the checkpoints we have
checkpoint_dir = os.path.join(exp_dir, 'checkpoints')
checkpoint_paths = list(glob.glob(os.path.join(checkpoint_dir, "*.ckpt")))
checkpoint_paths
# We want the checkpoint saved after the final step of training
final_checkpoint = list(filter(lambda x: "-last.ckpt" in x, checkpoint_paths))[0]
print(final_checkpoint)
###Output
_____no_output_____
###Markdown
Restoring from a PyTorch Lightning checkpointTo restore a model using the `LightningModule.load_from_checkpoint()` class method.
###Code
restored_model = nemo_asr.models.EncDecClassificationModel.load_from_checkpoint(final_checkpoint)
###Output
_____no_output_____
###Markdown
Prepare the model for fine-tuningRemember, the original model was trained for a 30/35 way classification task. Now we require only a subset of these models, so we need to modify the decoder head to support fewer classes.We can do this easily with the convenient function `EncDecClassificationModel.change_labels(new_label_list)`.By performing this step, we discard the old decoder head, but still, preserve the encoder!
###Code
restored_model.change_labels(labels)
###Output
_____no_output_____
###Markdown
Prepare the data loadersThe restored model, upon restoration, will not attempt to set up any data loaders. This is so that we can manually set up any datasets we want - train and val to finetune the model, test in order to just evaluate, or all three to do both!The entire config that we used before can still be accessed via `ModelPT.cfg`, so we will use it in order to set up our data loaders. This also gives us the opportunity to set any additional parameters we wish to setup!
###Code
import copy
train_subdataset_cfg = copy.deepcopy(restored_model.cfg.train_ds)
val_subdataset_cfg = copy.deepcopy(restored_model.cfg.validation_ds)
test_subdataset_cfg = copy.deepcopy(restored_model.cfg.test_ds)
# Set the paths to the subset of the dataset
train_subdataset_cfg.manifest_filepath = train_subdataset
val_subdataset_cfg.manifest_filepath = val_subdataset
test_subdataset_cfg.manifest_filepath = test_subdataset
# Setup the data loader for the restored model
restored_model.setup_training_data(train_subdataset_cfg)
restored_model.setup_multiple_validation_data(val_subdataset_cfg)
restored_model.setup_multiple_test_data(test_subdataset_cfg)
# Check data loaders are correct
print("Train dataset labels :", restored_model._train_dl.dataset.labels)
print("Val dataset labels :", restored_model._validation_dl.dataset.labels)
print("Test dataset labels :", restored_model._test_dl.dataset.labels)
###Output
_____no_output_____
###Markdown
Setting up a new Trainer and Experiment ManagerA restored model has a utility method to attach the Trainer object to it, which is necessary in order to correctly set up the optimizer and scheduler!**Note**: The restored model does not contain the trainer config with it. It is necessary to create a new Trainer object suitable for the environment where the model is being trained. The template can be replicated from any of the training scripts.Here, since we already had the previous config object that prepared the trainer, we could have used it, but for demonstration, we will set up the trainer config manually.
###Code
# Setup the new trainer object
# Let's modify some trainer configs for this demo
# Checks if we have GPU available and uses it
cuda = 1 if torch.cuda.is_available() else 0
trainer_config = OmegaConf.create(dict(
gpus=cuda,
max_epochs=5,
max_steps=None, # computed at runtime if not set
num_nodes=1,
accumulate_grad_batches=1,
checkpoint_callback=False, # Provided by exp_manager
logger=False, # Provided by exp_manager
log_every_n_steps=1, # Interval of logging.
val_check_interval=1.0, # Set to 0.25 to check 4 times per epoch, or an int for number of iterations
))
print(trainer_config.pretty())
trainer_finetune = pl.Trainer(**trainer_config)
###Output
_____no_output_____
###Markdown
Setting the trainer to the restored modelAll NeMo models provide a convenience method `set_trainer()` in order to setup the trainer after restoration
###Code
restored_model.set_trainer(trainer_finetune)
exp_dir_finetune = exp_manager(trainer_finetune, config.get("exp_manager", None))
exp_dir_finetune = str(exp_dir_finetune)
exp_dir_finetune
###Output
_____no_output_____
###Markdown
Setup optimizer + schedulerFor a fine-tuning experiment, let's set up the optimizer and scheduler!We will use a much lower learning rate than before, and also swap out the scheduler from PolyHoldDecay to CosineDecay.
###Code
optim_sched_cfg = copy.deepcopy(restored_model.cfg.optim)
# Struct mode prevents us from popping off elements from the config, so let's disable it
OmegaConf.set_struct(optim_sched_cfg, False)
# Lets change the maximum learning rate to previous minimum learning rate
optim_sched_cfg.lr = 0.001
# Lets change the scheduler
optim_sched_cfg.sched.name = "CosineAnnealing"
# "power" isnt applicable to CosineAnnealing so let's remove it
optim_sched_cfg.sched.pop('power')
# "hold_ratio" isnt applicable to CosineAnnealing, so let's remove it
optim_sched_cfg.sched.pop('hold_ratio')
# Set "min_lr" to lower value
optim_sched_cfg.sched.min_lr = 1e-4
print(optim_sched_cfg.pretty())
# Now lets update the optimizer settings
restored_model.setup_optimization(optim_sched_cfg)
# We can also just directly replace the config inplace if we choose to
restored_model.cfg.optim = optim_sched_cfg
###Output
_____no_output_____
###Markdown
Fine-tune training stepWe fine-tune on the subset classification problem. Note, the model was originally trained on these classes (the subset defined here has already been trained on above).When fine-tuning on a truly new dataset, we will not see such a dramatic improvement in performance. However, it should still converge a little faster than if it was trained from scratch. Monitor training progress via Tensorboard
###Code
if COLAB_ENV:
%tensorboard --logdir {exp_dir_finetune}
else:
print("To use tensorboard, please use this notebook in a Google Colab environment.")
###Output
_____no_output_____
###Markdown
Fine-tuning for 5 epochs
###Code
trainer_finetune.fit(restored_model)
###Output
_____no_output_____
###Markdown
Evaluation on the Test setLet's compute the final score on the test set via `trainer.test(model)`
###Code
trainer_finetune.test(restored_model, ckpt_path=None)
###Output
_____no_output_____
###Markdown
Advanced Usage: Exporting a model in its entiretyWhile most models can be easily serialized via the Experiment Manager as a PyTorch Lightning checkpoint, there are certain models where this is insufficient. Consider the case where a Model contains artifacts such as tokenizers or other intermediate file objects that cannot be so easily serialized into a checkpoint.For such cases, NeMo offers two utility functions that enable serialization of a Model + artifacts - `save_to` and `restore_from`.Further documentation regarding these methods can be obtained from the documentation pages on NeMo.
###Code
import tarfile
# Save a model as a tarfile
restored_model.save_to(os.path.join(exp_dir_finetune, "model.nemo"))
# The above object is just a tarfile which can store additional artifacts.
with tarfile.open(os.path.join(exp_dir_finetune, 'model.nemo')) as blob:
for item in blob:
print(item)
# Restore a model from a tarfile
restored_model_2 = nemo_asr.models.EncDecClassificationModel.restore_from(os.path.join(exp_dir_finetune, "model.nemo"))
###Output
_____no_output_____ |
materials/Module 7/7_netcdf.ipynb | ###Markdown
NetCDF filesNetCDF is a binary storage format for many different kinds of rectangular data. Examples include atmosphere and ocean model output, satellite images, and timeseries data. NetCDF files are intended to be device independent, and the dataset may be queried in a fast, random-access way. More information about NetCDF files can be found [here](http://www.unidata.ucar.edu/software/netcdf/). The [CF conventions](http://cfconventions.org) are used for storing NetCDF data for earth system models, so that programs can be aware of the coordinate axes used by the data cubes.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import cartopy
#import cmocean.cm as cmo
import netCDF4
###Output
_____no_output_____
###Markdown
Sea surface temperature exampleAn example NetCDF file containing monthly means of sea surface temperature over 160 years can be found [here](http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.v4.html). We'll use the NetCDF4 package to read this file, which has already been saved into the `data` directory.
###Code
nc = netCDF4.Dataset('../data/sst.mnmean.v4.nc')
nc['sst'].shape
print(nc)
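# Illustration of the random-access reads mentioned above: slicing a variable reads only the
# requested hyperslab from disk (variable names are those listed by print(nc)).
sst_first_month = nc['sst'][0, :, :]
sst_first_month.shape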
###Output
<class 'netCDF4._netCDF4.Dataset'>
root group (NETCDF4_CLASSIC data model, file format HDF5):
history: created 10/2014 by CAS using NCDC's ERSST V4 ascii values
title: NOAA Extended Reconstructed Sea Surface Temperature (ERSST), Version 4 (in situ only)
climatology: Climatology is based on 1971-2000 SST, Xue, Y., T. M. Smith, and R. W. Reynolds, 2003: Interdecadal changes of 30-yr SST normals during 1871.2000. Journal of Climate, 16, 1601-1612.
description: In situ data: ICOADS2.5 before 2007 and NCEP in situ data from 2008 to present. Ice data: HadISST ice before 2010 and NCEP ice after 2010.
citation: Huang et al, 2014: Extended Reconstructed Sea Surface Temperatures Version 4 (ERSST.v4), Part I. Upgrades and Intercomparisons. Journal of Climate.
comment: SSTs were observed by conventional thermometers in Buckets (insulated or un-insulated canvas and wooded buckets) or Engine Room Intaker
Conventions: CF-1.2
institution: This version written at NOAA/ESRL PSD: obtained from NOAA/NESDIS/National Climatic Data Center
keywords_vocabulary: NASA Global Change Master Directory (GCMD) Science Keywords
keywords: Earth Science > Oceans > Ocean Temperature > Sea Surface Temperature >
platform: Ship and Buoy SSTs from ICOADS R2.5 and NCEP GTS
instrument: Conventional thermometers
source: ICOADS R2.5 SST, NCEP GTS SST, HadISST ice, NCEP ice
source_comment: SSTs were observed by conventional thermometers in Buckets (insulated or un-insulated canvas and wooded buckets) or Engine Room Intaker
geospatial_lon_min: -1.0
geospatial_lon_max: 359.0
geospatial_laty_max: 89.0
geospatial_laty_min: -89.0
geospatial_lat_max: 89.0
geospatial_lat_min: -89.0
geospatial_lat_units: degrees_north
geospatial_lon_units: degrees_east
cdm_data_type: Grid
project: NOAA Extended Reconstructed Sea Surface Temperature (ERSST)
license: No constraints on data access or use
original_publisher_url: http://www.ncdc.noaa.gov
References: http://www.ncdc.noaa.gov/data-access/marineocean-data/extended-reconstructed-sea-surface-temperature-ersst-v4 at NCDC and http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.v4.html
dataset_title: Extended Reconstructed Sea Surface Temperature (ERSST) v4
dimensions(sizes): lon(180), lat(89), nbnds(2), time(1946)
variables(dimensions): float32 lat(lat), float32 lon(lon), float64 time_bnds(time,nbnds), float64 time(time), float32 sst(time,lat,lon)
groups:
###Markdown
The representation of the object shows some of the attributes of the netCDF file. The final few lines show the dimensions and the variable names (with corresponding dimensions). Another representation of the file can be seen using the `ncdump` command. This is similar to the output of the command (at a command-line prompt, not within python) $ ncdump -h ../data/sst.mnmean.v4.nc netcdf sst.mnmean.v4 { dimensions: lon = 180 ; lat = 89 ; nbnds = 2 ; time = UNLIMITED ; // (1946 currently) variables: float lat(lat) ; lat:units = "degrees_north" ; lat:long_name = "Latitude" ; lat:actual_range = 88.f, -88.f ; lat:standard_name = "latitude" ; lat:axis = "Y" ; lat:coordinate_defines = "center" ; float lon(lon) ; lon:units = "degrees_east" ; lon:long_name = "Longitude" ; lon:actual_range = 0.f, 358.f ; lon:standard_name = "longitude" ; lon:axis = "X" ; lon:coordinate_defines = "center" ; double time_bnds(time, nbnds) ; time_bnds:long_name = "Time Boundaries" ; double time(time) ; time:units = "days since 1800-1-1 00:00:00" ; time:long_name = "Time" ; time:delta_t = "0000-01-00 00:00:00" ; time:avg_period = "0000-01-00 00:00:00" ; time:prev_avg_period = "0000-00-07 00:00:00" ; time:standard_name = "time" ; time:axis = "T" ; time:actual_range = 19723., 78923. ; float sst(time, lat, lon) ; sst:long_name = "Monthly Means of Sea Surface Temperature" ; sst:units = "degC" ; sst:var_desc = "Sea Surface Temperature" ; sst:level_desc = "Surface" ; sst:statistic = "Mean" ; sst:missing_value = -9.96921e+36f ; sst:actual_range = -1.8f, 33.95f ; sst:valid_range = -5.f, 40.f ; sst:dataset = "NOAA Extended Reconstructed SST V4" ; sst:parent_stat = "Individual Values" ; // global attributes: :history = "created 10/2014 by CAS using NCDC\'s ERSST V4 ascii values" ; [....and so on....] You can access terminal commands from with the Jupyter notebook by putting "!" first:
###Code
!ncdump -h ../data/sst.mnmean.v4.nc
###Output
netcdf ../data/sst.mnmean.v4 {
dimensions:
lon = 180 ;
lat = 89 ;
nbnds = 2 ;
time = UNLIMITED ; // (1946 currently)
variables:
float lat(lat) ;
lat:units = "degrees_north" ;
lat:long_name = "Latitude" ;
lat:actual_range = 88.f, -88.f ;
lat:standard_name = "latitude" ;
lat:axis = "Y" ;
lat:coordinate_defines = "center" ;
float lon(lon) ;
lon:units = "degrees_east" ;
lon:long_name = "Longitude" ;
lon:actual_range = 0.f, 358.f ;
lon:standard_name = "longitude" ;
lon:axis = "X" ;
lon:coordinate_defines = "center" ;
double time_bnds(time, nbnds) ;
time_bnds:long_name = "Time Boundaries" ;
double time(time) ;
time:units = "days since 1800-1-1 00:00:00" ;
time:long_name = "Time" ;
time:delta_t = "0000-01-00 00:00:00" ;
time:avg_period = "0000-01-00 00:00:00" ;
time:prev_avg_period = "0000-00-07 00:00:00" ;
time:standard_name = "time" ;
time:axis = "T" ;
time:actual_range = 19723., 78923. ;
float sst(time, lat, lon) ;
sst:long_name = "Monthly Means of Sea Surface Temperature" ;
sst:units = "degC" ;
sst:var_desc = "Sea Surface Temperature" ;
sst:level_desc = "Surface" ;
sst:statistic = "Mean" ;
sst:missing_value = -9.96921e+36f ;
sst:actual_range = -1.8f, 33.95f ;
sst:valid_range = -5.f, 40.f ;
sst:dataset = "NOAA Extended Reconstructed SST V4" ;
sst:parent_stat = "Individual Values" ;
// global attributes:
:history = "created 10/2014 by CAS using NCDC\'s ERSST V4 ascii values" ;
:title = "NOAA Extended Reconstructed Sea Surface Temperature (ERSST), Version 4 (in situ only)" ;
:climatology = "Climatology is based on 1971-2000 SST, Xue, Y., T. M. Smith, and R. W. Reynolds, 2003: Interdecadal changes of 30-yr SST normals during 1871.2000. Journal of Climate, 16, 1601-1612." ;
:description = "In situ data: ICOADS2.5 before 2007 and NCEP in situ data from 2008 to present. Ice data: HadISST ice before 2010 and NCEP ice after 2010." ;
:citation = "Huang et al, 2014: Extended Reconstructed Sea Surface Temperatures Version 4 (ERSST.v4), Part I. Upgrades and Intercomparisons. Journal of Climate." ;
:comment = "SSTs were observed by conventional thermometers in Buckets (insulated or un-insulated canvas and wooded buckets) or Engine Room Intaker" ;
:Conventions = "CF-1.2" ;
:institution = "This version written at NOAA/ESRL PSD: obtained from NOAA/NESDIS/National Climatic Data Center" ;
:keywords_vocabulary = "NASA Global Change Master Directory (GCMD) Science Keywords" ;
:keywords = "Earth Science > Oceans > Ocean Temperature > Sea Surface Temperature >" ;
:platform = "Ship and Buoy SSTs from ICOADS R2.5 and NCEP GTS" ;
:instrument = "Conventional thermometers" ;
:source = "ICOADS R2.5 SST, NCEP GTS SST, HadISST ice, NCEP ice" ;
:source_comment = "SSTs were observed by conventional thermometers in Buckets (insulated or un-insulated canvas and wooded buckets) or Engine Room Intaker" ;
:geospatial_lon_min = -1.f ;
:geospatial_lon_max = 359.f ;
:geospatial_laty_max = 89.f ;
:geospatial_laty_min = -89.f ;
:geospatial_lat_max = 89.f ;
:geospatial_lat_min = -89.f ;
:geospatial_lat_units = "degrees_north" ;
:geospatial_lon_units = "degrees_east" ;
:cdm_data_type = "Grid" ;
:project = "NOAA Extended Reconstructed Sea Surface Temperature (ERSST)" ;
:license = "No constraints on data access or use" ;
:original_publisher_url = "http://www.ncdc.noaa.gov" ;
:References = "http://www.ncdc.noaa.gov/data-access/marineocean-data/extended-reconstructed-sea-surface-temperature-ersst-v4 at NCDC and http://www.esrl.noaa.gov/psd/data/gridded/data.noaa.ersst.v4.html" ;
:dataset_title = "Extended Reconstructed Sea Surface Temperature (ERSST) v4" ;
}
###Markdown
Mapping the netcdf object to the python objectWe can query the data within the NetCDF file using the NetCDF object. The structure of the object (the composition of the methods and attributes) is designed to mirror the data structure in the file. See how these queries give the same information as the textual representation above.
###Code
# `Global` attributes of the file
nc.history
# Variables are stored in a dictionary
nc.variables['lon'] # this is a variable object, just a pointer to the variable. NO DATA HAS BEEN LOADED!
# Variable objects also have attributes
nc.variables['lon'].units
# we can also query the dimensions
nc.dimensions['lon']
# to find the length of a dimension, do
len(nc.dimensions['lon'])
# A list of the dimensions can be found by looking at the keys in the dimensions dictionary
nc.dimensions.keys()
# Same for variables
nc.variables.keys()
# Let's take a look at the main 3D variable
nc['sst'] # A shorthand for nc.variables['sst']
nc['sst'].units
###Output
C:\Users\dhenrichs\AppData\Local\Continuum\anaconda3\envs\ocng_669\lib\site-packages\ipykernel_launcher.py:1: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
"""Entry point for launching an IPython kernel.
###Markdown
--- *Exercise*> Inspect the NetCDF object. > 1. What are the units of the time variable?> 1. What are the dimensions of the latitude variable?> 1. What is the length of the latitude dimension?---
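One way those questions might be answered, following the same access patterns used above (a sketch, using the variable and dimension names of this particular file):
```python
# Units of the time variable
print(nc.variables['time'].units)      # "days since 1800-1-1 00:00:00"
# Dimensions of the latitude variable
print(nc.variables['lat'].dimensions)  # ('lat',)
# Length of the latitude dimension
print(len(nc.dimensions['lat']))       # 89
```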
###Code
# We can extract data from the file by indexing:
# This reads in the data so be careful how much you read in at once
lon = nc['lon'][:]
lat = nc['lat'][:]
sst = nc['sst'][0] # same as nc['sst'][0, :, :], gets the first 2D time slice in the series.
# Extract the time variable using the convenient num2date, which converts from time numbers to datetime objects
time = netCDF4.num2date(nc['time'][:], nc['time'].units)
time
###Output
C:\Users\dhenrichs\AppData\Local\Continuum\anaconda3\envs\ocng_669\lib\site-packages\ipykernel_launcher.py:2: DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
###Markdown
There are lots of operations you can do with `datetime` objects, some of which you've used before, possibly in other packages like `pandas`.For example, you can find the difference between two datetimes. This is given in a `datetime.timedelta` object.
###Code
time[2] - time[1]
###Output
_____no_output_____
###Markdown
You can also specify the time unit of measurement you get out of this difference:
###Code
(time[2] - time[1]).days
###Output
_____no_output_____
###Markdown
Note that asking for the number of seconds (from 0 to 60 for that datetime object) is different than asking for the number of total seconds (total number of seconds in the time measurement):
###Code
(time[2] - time[1]).seconds
(time[2] - time[1]).total_seconds()
###Output
_____no_output_____
###Markdown
--- *Exercise*> Practice with `datetime`:> 1. Find the number of days between several successive `datetimes` in the `time` variable. You will need to extract this number from the `timedelta` object.> 1. One way you want present the date and time contained within a `datetime` object is with: time[0].isoformat() Test this, and also try using the following to display your datetime as a string: time[0].strftime([input formats]) where you choose time formats from the options which can be seen at `strftime.org`.--- Let's use the data that we have read in to make a plot.
###Code
proj = cartopy.crs.Mollweide(central_longitude=180)
pc = cartopy.crs.PlateCarree()
fig = plt.figure(figsize=(14,6))
ax = fig.add_subplot(111, projection=proj)
mappable = ax.contourf(lon, lat, sst, 100, cmap=plt.get_cmap('magma'), transform=pc)
###Output
_____no_output_____
###Markdown
--- *Exercise*> Finish the plot above. Add:> * Land> * Colorbar with proper label and units> * Title with nicely formatting date and time--- THREDDS example. Loading data from a remote dataset.The netCDF library can be compiled such that it is 'THREDDS enabled', which means that you can put in a URL instead of a filename. This allows access to large remote datasets, without having to download the entire file. You can find a large list of datasets served via an OpenDAP/THREDDs server [here](http://apdrc.soest.hawaii.edu/data/data.php).Let's look at the ESRL/NOAA 20th Century Reanalysis – Version 2. You can access the data by the following link (this is the link of the `.dds` and `.das` files without the extension.):
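One possible way to finish the SST plot from the exercise above (a sketch; it assumes the `fig`, `ax`, `mappable`, `nc` and `time` objects from the earlier plotting cells are still defined):
```python
import cartopy.feature

ax.add_feature(cartopy.feature.LAND, facecolor='0.8')  # draw land over the SST field
fig.colorbar(mappable).set_label('Sea surface temperature [%s]' % nc['sst'].units)
ax.set_title(time[0].strftime('%B %Y'))                # date of the plotted time slice
```
With that aside done, here is how to open the reanalysis dataset linked above: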
###Code
loc = 'http://apdrc.soest.hawaii.edu/dods/public_data/Reanalysis_Data/esrl/daily/monolevel/V2c/cprat'
nc_cprat = netCDF4.Dataset(loc)
nc_cprat['cprat'].long_name
time = netCDF4.num2date(nc_cprat['time'][:], nc_cprat['time'].units) # convert to datetime objects
time
cprat = nc_cprat['cprat'][-1] # get the last time, datetime.datetime([year], 12, 31, 0, 0)
lon = nc_cprat['lon'][:]
lat = nc_cprat['lat'][:]
proj = cartopy.crs.Sinusoidal(central_longitude=180)
fig = plt.figure(figsize=(14,6))
ax = fig.add_subplot(111, projection=proj)
ax.coastlines(linewidth=0.25)
mappable = ax.contourf(lon, lat, cprat, 20, cmap=plt.get_cmap('Greens'), transform=pc)  # cmocean's cmo.tempo would also work, but its import is commented out above
ax.set_title(time[-1].isoformat()[:10])
fig.colorbar(mappable).set_label('%s' % nc_cprat['cprat'].long_name)
###Output
_____no_output_____
###Markdown
--- *Exercise*> Pick another [variable](http://apdrc.soest.hawaii.edu/dods/public_data/Reanalysis_Data/esrl/daily/monolevel) from this dataset. Inspect and plot the variable in a similar manner to precipitation.> Find another dataset on a THREDDS server at SOEST (or elsewhere), pick a variable, and plot it.--- Creating NetCDF filesWe can also create a NetCDF file to store data. It is a bit of a pain. Later we will see an easier way to do this.
###Code
from matplotlib import tri
Ndatapoints = 1000
Ntimes = 20
Nbad = 200
xdata = np.random.rand(Ndatapoints)
ydata = np.random.rand(Ndatapoints)
time = np.arange(Ntimes)
# create a progressive wave
fdata = np.sin((xdata+ydata)[np.newaxis, :]*5.0 +
time[:, np.newaxis]/3.0)
# remove some random 'bad' data.
idx = np.arange(fdata.size)
np.random.shuffle(idx)
fdata.flat[idx[:Nbad]] = np.nan
ygrid, xgrid = np.mgrid[0:1:60j, 0:1:50j]
fgrid = np.ma.empty((Ntimes, 60, 50), 'd')
# interpolate
for n in range(Ntimes):
igood = ~np.isnan(fdata[n])
t = tri.Triangulation(xdata[igood], ydata[igood])
interp = tri.LinearTriInterpolator(t, fdata[n][igood])
fgrid[n] = interp(xgrid, ygrid)
# create netCDF file
nc = netCDF4.Dataset('foo.nc', 'w')
nc.author = 'Me'
nc.createDimension('x', 50)
nc.createDimension('y', 60)
nc.createDimension('time', None) # An 'unlimited' dimension.
nc.createVariable('f', 'd', ('time', 'y', 'x'))
nc.variables['f'][:] = fgrid
nc.variables['f'].units = 'meters sec-1'
nc.createVariable('x', 'd', ('x',))
nc.variables['x'][:] = xgrid[0, :]
nc.variables['x'].units = 'meters'
nc.createVariable('y', 'd', ('y',))
nc.variables['y'][:] = ygrid[:, 0]
nc.variables['y'].units = 'meters'
nc.createVariable('time', 'd', ('time',))
nc.variables['time'][:] = time
nc.variables['time'].units = 'seconds'
nc.close()
nc = netCDF4.Dataset('foo.nc')
nc
###Output
_____no_output_____
###Markdown
See also- [Xarray](http://xarray.pydata.org/en/stable/): NetCDF + PANDAS + CF conventions. Awesome.- [pygrib](https://github.com/jswhit/pygrib): Reading GRIB files.- [ncview](http://meteora.ucsd.edu/~pierce/ncview_home_page.html): Not python, but a very useful NetCDF file viewer. `xarray``xarray` expands the utility of the time series analysis package `pandas` into more than one dimension. It is actively being developed so some functionality isn't yet available, but for certain analysis it is very useful.
###Code
import xarray as xr
###Output
_____no_output_____
###Markdown
In the previous material, we used `netCDF` directly to read in a data file, then access the data:
###Code
nc = netCDF4.Dataset('../data/sst.mnmean.v4.nc')
print(nc['sst'].shape)
###Output
(1946, 89, 180)
###Markdown
However, as was pointed out in class, in this approach if we want to pull out the sea surface temperature data at a particular time, we need to first know which time index that particular time corresponds to. How can we find this?First we convert the time numbers from the file into datetimes, like before:
###Code
# Extract the time variable using the convenient num2date
time = netCDF4.num2date(nc['time'][:], nc['time'].units)
time
time.fill_value = 9
###Output
_____no_output_____
###Markdown
Say we want to search for the time index corresponding to May 1, 1954.
###Code
from datetime import datetime
date = datetime(1954, 5, 1, 0, 0)
###Output
_____no_output_____
###Markdown
Now we search for the time index:
###Code
tind = np.where(time==date)[0][0]
print(tind)
###Output
1204
###Markdown
Great! So the time index we want is 1204. We can now make our sea surface temperature plot:
###Code
proj = cartopy.crs.Mollweide(central_longitude=180)
pc = cartopy.crs.PlateCarree()
fig = plt.figure(figsize=(14,6))
ax = fig.add_subplot(111, projection=proj)
mappable = ax.contourf(nc['lon'][:], nc['lat'][:], nc['sst'][tind], 100, cmap=plt.get_cmap('inferno'), transform=pc)
###Output
_____no_output_____
###Markdown
What if instead we want the index corresponding to May 23, 1954?
###Code
date = datetime(1954, 5, 23, 0, 0)
np.where(time==date)
###Output
_____no_output_____
###Markdown
What is the problem here? There is no data at that exact time.So what should we do?A few options:
###Code
# index of date that minimizes time between model times and desired date
tidx = np.abs(time - date).argmin()
tidx
time[tidx]
np.where(time<=date)[0][-1]
###Output
_____no_output_____
###Markdown
So, you can do this but it's a little annoying and takes extra effort. Now let's access this data using a different package called `xarray`:
###Code
ds = xr.open_dataset('../data/sst.mnmean.v4.nc') # similar way to read in — also works for nonlocal data addresses
ds
###Output
_____no_output_____
###Markdown
Now we can search for data in May 1954:
###Code
ds['sst'].sel(time=slice('1954-05','1954-05'))
###Output
_____no_output_____
###Markdown
Or we can search for the nearest output to May 23, 1954:
###Code
ds['sst'].sel(time='1954-05-23', method='nearest')
###Output
_____no_output_____
###Markdown
Let's plot it!
###Code
sst = ds['sst'].sel(time='1954-05-23', method='nearest')
fig = plt.figure(figsize=(14,6))
ax = fig.add_subplot(111, projection=proj)
mappable = ax.contourf(nc['lon'][:], nc['lat'][:], sst, 10, cmap=plt.get_cmap('inferno'), transform=pc)
###Output
_____no_output_____
###Markdown
Note that you can also just plot against the included coordinates with built-in convenience functions (this is analogous to `pandas` which was for one dimension):
###Code
fig = plt.figure(figsize=(14,6))
ax = fig.add_subplot(111, projection=proj)
sst.plot(transform=pc)  # transform gives the data's coordinate system; the axes use the Mollweide projection defined above
###Output
_____no_output_____
###Markdown
GroupByLike in `pandas`, we can use the `groupby` method to do some neat things. Let's group by season and save a new file.
###Code
seasonal_mean = ds.groupby('time.season').mean('time')
seasonal_mean
###Output
_____no_output_____
###Markdown
Do you remember how many lines of code were required to save a netCDF file from scratch? It is straight-forward, but tedious. Once you are working with data using `xarray`, you can save new, derived files very easily from your data array:
###Code
fname = 'test.nc'
seasonal_mean.to_netcdf(fname)
d = netCDF4.Dataset(fname)
d
###Output
_____no_output_____ |
docs/examples/nrb_cube.ipynb | ###Markdown
Exploring S1-NRB data cubes Introduction **Sentinel-1 Normalised Radar Backscatter** Sentinel-1 Normalised Radar Backscatter (S1-NRB) is a newly developed Analysis Ready Data (ARD) product for the European Space Agency that offers high-quality, radiometrically terrain corrected (RTC) Synthetic Aperture Radar (SAR) backscatter and is designed to be compliant with the CEOS ARD for Land (CARD4L) [NRB specification](https://ceos.org/ard/files/PFS/NRB/v5.5/CARD4L-PFS_NRB_v5.5.pdf).You can find more detailed information about the S1-NRB product [here](https://sentinel.esa.int/web/sentinel/sentinel-1-ard-normalised-radar-backscatter-nrb-product). **SpatioTemporal Asset Catalog (STAC)** All S1-NRB products include metadata in JSON format compliant with the [SpatioTemporal Asset Catalog (STAC)](https://stacspec.org/) specification. STAC uses several sub-specifications ([Item](https://github.com/radiantearth/stac-spec/blob/master/item-spec/item-spec.md), [Collection](https://github.com/radiantearth/stac-spec/blob/master/collection-spec/collection-spec.md) & [Catalog](https://github.com/radiantearth/stac-spec/blob/master/catalog-spec/catalog-spec.md)) to create a hierarchical structure that enables efficient querying and access of large volumes of geospatial data. **This example notebook will give a short demonstration of how S1-NRB products can be explored as on-the-fly data cubes with little effort by utilizing the STAC metadata provided with each product. It is not intended to demonstrate how to process the S1-NRB products in the first place. For this information please refer to the [usage instructions](https://s1-nrb.readthedocs.io/en/docs/general/usage.html).** Getting started After following the [installation instructions](https://s1-nrb.readthedocs.io/en/latest/general/installation.html) you need to install a few additional packages into the activated conda environment to reproduce all steps presented in the following example notebook.```bashconda activate nrb_env conda install jupyterlab stackstac rioxarray xarray_leaflet``` Instead of importing all packages now, they will successively be imported throughout the notebook:
###Code
import numpy as np
import stackstac
from S1_NRB.metadata.stac import make_catalog
###Output
_____no_output_____
###Markdown
Let's assume you have a collection of S1-NRB scenes located on your local disk, a fileserver or somewhere in the cloud. As mentioned in the [Introduction](Introduction), each S1-NRB scene includes metadata as a STAC Item, describing the scene's temporal, spatial and product specific properties.The **only step necessary to get started** with analysing your collection of scenes, is the creation of STAC Collection and Catalog files, which connect individual STAC Items and thereby create a hierarchy of STAC objects. `S1_NRB` includes the utility function [make_catalog](https://s1-nrb.readthedocs.io/en/latest/api.htmlS1_NRB.metadata.stac.make_catalog), which will create these files for you. Please note that `make_catalog` expects a directory structure based on MGRS tile IDs, which allows for efficient data querying and access. After user confirmation it will take care of reorganizing your S1-NRB scenes if this directory structure doesn't exist yet.
###Code
nrb_catalog = make_catalog(directory='./NRB_thuringia', silent=True)
###Output
WARNING:
./NRB_thuringia
and the NRB products it contains will be reorganized into subdirectories based on unique MGRS tile IDs if this directory structure does not yet exist.
Do you wish to continue? [yes|no] yes
###Markdown
The STAC Catalog can then be used with libraries such as [stackstac](https://github.com/gjoseph92/stackstac), which _"turns a STAC Collection into a lazy xarray.DataArray, backed by dask"._ The term _lazy_ describes a [method of execution](https://tutorial.dask.org/01x_lazy.html) that only computes results when actually needed and thereby enables computations on larger-than-memory datasets. _[xarray](https://xarray.pydata.org/en/stable/index.html)_ is a Python library for working with labeled multi-dimensional arrays of data, while the Python library _[dask](https://docs.dask.org/en/latest/)_ facilitates parallel computing in a flexible way.Compatibility with [odc-stac](https://github.com/opendatacube/odc-stac), a very [similar library](https://github.com/opendatacube/odc-stac/issues/54) to stackstac, will be tested in the near future.
###Code
aoi = (10.638066, 50.708415, 11.686751, 50.975775)
ds = stackstac.stack(items=nrb_catalog, bounds_latlon=aoi,
dtype=np.dtype('float32'), chunksize=(-1, 1, 1024, 1024))
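# Because the stack is a lazy, dask-backed xarray.DataArray, reductions such as the one
# below are only evaluated when .compute() is called (a sketch, not part of the original example):
#   ds_tmean = ds.mean(dim='time')
#   ds_tmean.compute()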
ds
###Output
_____no_output_____ |
lectures/lecture-03-differentiation-01.ipynb | ###Markdown
Lecture 3 Differentiation I: Introduction and Interpretation
###Code
import numpy as np
##################################################
##### Matplotlib boilerplate for consistency #####
##################################################
from ipywidgets import interact
from ipywidgets import FloatSlider
from matplotlib import pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg')
global_fig_width = 10
global_fig_height = global_fig_width / 1.61803399
font_size = 12
plt.rcParams['axes.axisbelow'] = True
plt.rcParams['axes.edgecolor'] = '0.8'
plt.rcParams['axes.grid'] = True
plt.rcParams['axes.labelpad'] = 8
plt.rcParams['axes.linewidth'] = 2
plt.rcParams['axes.titlepad'] = 16.0
plt.rcParams['axes.titlesize'] = font_size * 1.4
plt.rcParams['figure.figsize'] = (global_fig_width, global_fig_height)
plt.rcParams['font.sans-serif'] = ['Computer Modern Sans Serif', 'DejaVu Sans', 'sans-serif']
plt.rcParams['font.size'] = font_size
plt.rcParams['grid.color'] = '0.8'
plt.rcParams['grid.linestyle'] = 'dashed'
plt.rcParams['grid.linewidth'] = 2
plt.rcParams['lines.dash_capstyle'] = 'round'
plt.rcParams['lines.dashed_pattern'] = [1, 4]
plt.rcParams['xtick.labelsize'] = font_size
plt.rcParams['xtick.major.pad'] = 4
plt.rcParams['xtick.major.size'] = 0
plt.rcParams['ytick.labelsize'] = font_size
plt.rcParams['ytick.major.pad'] = 4
plt.rcParams['ytick.major.size'] = 0
##################################################
###Output
_____no_output_____
###Markdown
GradientsWe often want to know about the *rate* at which one quantity changes over time.Examples:1. The rate of disappearance of substrate with time in an enzyme reaction. 1. The rate of decay of a radioactive substance (how long will it have activity above a certain level?)1. The rate of bacterial cell growth over time.1. How quickly an epidemic is growing. Defining the gradient* The **gradient of a curve** at a point $P$ is the slope of the tangent of the curve at that point.* The **tangent** is the line that "just touches" (but doesn't cross) the curve.* The gradient is also known as the **rate of change** or **derivative**, and the process of finding the gradient is called **differentiation**.* The gradient of the curve $\;y = f(x)\;$ is denoted in a few different ways, the three most common are:$$ y', \quad f'(x), \quad \frac{dy}{dx}. $$ Example, $y = x^2$
###Code
x1_widget = FloatSlider(value=1.0, min=-3., max=3., step=0.2, continuous_update=False)
_x = np.linspace(-5,5,50)
def add_line(x):
plt.title('$y=x^2$')
plt.xlabel('$x$')
plt.ylabel('$y=x^2$')
plt.xlim((-4.,4.))
plt.ylim((-5.,17.))
plt.plot(_x, _x**2);
plt.plot([x-10., x, x+10.], [x*x-20.*x, x*x, x*x+20.*x]);
plt.plot(x, x*x, 'ko')
interact(add_line, x=x1_widget, continuous_update=False);
###Output
_____no_output_____
###Markdown
Example, $y = \log(x)$
###Code
x_widget = FloatSlider(value=1.0, min=.4, max=1.8, step=0.05, continuous_update=False)
_x2 = np.linspace(0.2,2.,50)
def add_line(x):
plt.title('$y=log(x)$')
plt.xlabel('$x$')
plt.ylabel('$y=\log(x)$')
plt.xlim((0.2,2.))
plt.ylim((-2.,1.))
plt.plot(_x2, np.log(_x2));
plt.plot([x-10., x, x+10.], [np.log(x)-10./x, np.log(x), np.log(x)+10./x]);
plt.plot(x, np.log(x), 'ko')
interact(add_line, x=x_widget, continuous_update=False);
###Output
_____no_output_____
###Markdown
Algebraic exampleIf we want to find $y'(x)$ for $y = x^3 + 2$:$$ \text{Gradient} = \frac{y_2 - y_1}{x_2-x_1} = \frac{\Delta y}{\Delta x}$$Try with$x_1 = 1.5,\;1.9,\;1.99,\;\ldots$$x_2 = 2.5,\;2.1,\;2.01,\;\ldots$
###Code
x_1 = 1.5; x_2 = 2.5
y_1 = x_1**3 + 2; y_2 = x_2**3 + 2
print((y_2-y_1)/(x_2-x_1))
x_1 = 1.9; x_2 = 2.1
y_1 = x_1**3 + 2; y_2 = x_2**3 + 2
print((y_2-y_1)/(x_2-x_1))
x_1 = 1.99; x_2 = 2.01
y_1 = x_1**3 + 2; y_2 = x_2**3 + 2
print((y_2-y_1)/(x_2-x_1))
###Output
_____no_output_____
###Markdown
As the difference between $x_1$ and $x_2$ gets smaller, the gradient stabilises. The value it converges to is the gradient at the midway point of $x_1$ and $x_2$. Calculating gradients exactly$\text{Gradient} \approx \frac{\Delta y}{\Delta x} = \frac{f(x+h) - f(x)}{h}$This is called a finite difference approximation to the gradient. The approximation becomes more accurate the smaller $h$ is.When using the approximation, we denote the changes as $\frac{\Delta y}{\Delta x}$; in the limit as $h$ goes to 0, this becomes $\frac{dy}{dx}$. In this way, $\frac{d}{dx}$ is an operator acting on $y$.Note that the $d$s cannot be cancelled out: they aren't variables, they denote an infinitely small change.
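A small numerical sketch of this formula (not part of the original lecture code), using the $y = x^3 + 2$ example from the previous cell at $x = 2$, where the exact gradient is $3 \times 2^2 = 12$:
```python
def finite_difference(f, x, h):
    # Forward finite-difference approximation to the gradient of f at x
    return (f(x + h) - f(x)) / h

f = lambda x: x**3 + 2
for h in [1.0, 0.1, 0.01, 0.001]:
    print(h, finite_difference(f, 2.0, h))  # approaches 12 as h shrinks
```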
###Code
h_widget = FloatSlider(value=5.0, min=0.05, max=9., step=0.05, continuous_update=False)
_x3 = np.linspace(-2,11,50)
def add_line(h):
plt.title('$y=x^2$')
plt.xlabel('$x$')
plt.ylabel('$y=x^2$')
plt.xlim((-2.,11.))
plt.ylim((-15.,121.))
plt.plot(_x3, _x3**2);
plt.plot([-8., 12.], [-36., 44.]);
plt.plot([12, -8], [4. + 10.*((2+h)**2-4)/h, 4. - 10.*((2+h)**2-4)/h]);
plt.plot([2., 2.+h], [4., (2.+h)**2], 'ko')
interact(add_line, h=h_widget, continuous_update=False);
###Output
_____no_output_____
###Markdown
ExampleFind the gradient of $y = f(x) = x^3 + 2$. $\frac{dy}{dx} = \frac{f(x+h) - f(x)}{h}$$\frac{dy}{dx} = \frac{(x+h)^3 + 2 - (x^3 + 2)}{h}$$\frac{dy}{dx} = \frac{x^3 + 3x^2 h + 3xh^2 + h^3 + 2 - x^3 - 2}{h}$$\frac{dy}{dx} = \frac{3x^2h + 3xh^2 + h^3}{h}$$\frac{dy}{dx} = 3x^2 + 3xh + h^2$Now this is only exactly right when $h \rightarrow 0$. So letting that happen, we have$\frac{dy}{dx} = 3x^2$ Derivative of polynomial functionsUsing techniques like the one above (which is called differentiation from first principles), one can generalise the connection between powers of $x$ and their derivatives:If $y = a x^n$, then its **derivative** is$\frac{dy}{dx} = y'(x) = a n x^{n-1}$ Examples to try1. $y = x^4$2. $y = 7x^5$3. $y = x^{-2} = \frac{1}{x^2}$4. $y = \sqrt{1/x} = (1/x)^{1/2} = x^{-1/2}$ Summing and multiplying derivatives Summing$(f(x) \pm g(x))' = f'(x) \pm g'(x)$e.g.$y = x^2 + x^3, \quad y' = 2x + 3x^2$ Multiplying (by a scalar)$ (a f(x))' = a f'(x)$e.g.$y = 6x^3, \quad y' = 6 \cdot 3x^2 = 18 x^2$**This only works for scalars**.In most circumstances $(f(x) g(x))' \neq f(x)' g(x)'$e.g.$y = x\cdot x = x^2, \quad y' \neq 1$ Higher-order derivativesYou can take a derivative of a function multiple times in a row. This is usually denoted either $y''(x),\;\;f''(x)\;$ or $\;\frac{d^2 y}{dx^2}\;$ for second-order derivatives (differentiating twice), and similar for higher orders. e.g.$y = x^3$$y' = 3x^2$$y'' = \frac{d^2 y}{dx^2} = 6 x$ Interpreting derivatives:The sign of the first derivative $\;f'(x)\;$ tells us how $\;f(x)\;$ is growing- Positive gradient: If $\;y' > 0\;$ then $\;y\;$ is **increasing** at $\;x\;$- Negative gradient: If $\;y' < 0\;$ then $\;y\;$ is **decreasing** at $\;x\;$- Zero gradient: If $\;y' = 0\;$ then $\;y\;$ is not changing (flat) at $\;x\;$ Extreme values (turning points and points of inflection)(a) Local maximum: $\;\frac{dy}{dx} = 0,\;$ and $\;\frac{d^2y}{dx^2} < 0\;$(b) Local minimum: $\;\frac{dy}{dx} = 0,\;$ and $\;\frac{d^2y}{dx^2} > 0\;$(c) Inflection: $\;\frac{d^2y}{dx^2} = 0\;$ Example: Find the stationary points of $\;y = 2x^3 - 5x^2 - 4x\;$To do this, we need to know both $\;y'(x)\;$ and $\;y''(x)\;$.$y'(x) = 6x^2 - 10x - 4$$y''(x) = 12x - 10$Stationary points occur when $\;y'(x) = 0\;$$6x^2 - 10x - 4 = 0$$(3x + 1)(2x - 4) = 0$$x = -1/3,\;2$ At $x = -1/3$$y''(-1/3) = 12 \times -1/3 - 10 = -14 < 0$So this point is a **maximum**.At $x = 2$$y''(2) = 12 \times 2 - 10 = 14 > 0$So this point is a **minimum**.Inflection points occur whenever $y''(x) = 0$$y''(x) = 12x - 10 = 0$$x = \frac{10}{12} = \frac{5}{6}$This is an **inflection point**
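A quick symbolic check of this worked example (a sketch; it assumes `sympy` is available, which the lecture code itself does not use):
```python
import sympy as sp

xs = sp.Symbol('x')
ys = 2*xs**3 - 5*xs**2 - 4*xs
print(sp.diff(ys, xs))                   # 6*x**2 - 10*x - 4
print(sp.solve(sp.diff(ys, xs), xs))     # stationary points: -1/3 and 2
print(sp.solve(sp.diff(ys, xs, 2), xs))  # inflection point: 5/6
```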
###Code
x = np.linspace(-2, 3.5, 100)
y = 2*x**3 - 5*x**2 - 4*x
plt.plot(x,y, label='y = 2x^3 - 5x^2 - 4x')
plt.plot([2., -1./3., 5./6.], [-12., 19./27., -305./54.], 'ko')
plt.xlabel('x')
plt.ylabel('y')
###Output
_____no_output_____ |
example/ratio_mar.ipynb | ###Markdown
###Code
#Google Colab 上からブラウザ経由で利用する場合、初回はインストールが必要(約20秒)
!pip install simpleOption
from simpleOption import *
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
p = Portfolio(
"""
03/C22000[1]
03/C22250[-2]
""")
x = np.arange(21300, 22400)
setting(21300, 16, 20190221) # market information 1 (assume IV = 16%)
plt.plot(x, np.vectorize(p.v)(x), label= 'Ratio_feb21' )
setting(evaluationDate=20190225)
plt.plot(x, np.vectorize(p.v)(x), label= 'Ratio_feb25' )
setting(evaluationDate=20190305)
plt.plot(x, np.vectorize(p.v)(x), label= 'Ratio_Mar05' )
plt.plot(x, np.vectorize(p.pay)(x), label= 'Payoff',linestyle="dashed" )
plt.legend(loc="best")
###Output
_____no_output_____ |
coco JSON.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
!git clone https://github.com/Tony607/labelme2coco.git
!pip install pyqt5
!pip install labelme
import os
import argparse
import json
from labelme import utils
import numpy as np
import glob
import PIL.Image
import PIL.ImageDraw
class labelme2coco(object):
def __init__(self, labelme_json=[], save_json_path="./coco.json"):
"""
:param labelme_json: the list of all labelme json file paths
:param save_json_path: the path to save new json
"""
self.labelme_json = labelme_json
self.save_json_path = save_json_path
self.images = []
self.categories = []
self.annotations = []
self.label = []
self.annID = 1
self.height = 0
self.width = 0
self.save_json()
def data_transfer(self):
for num, json_file in enumerate(self.labelme_json):
with open(json_file, "r") as fp:
data = json.load(fp)
self.images.append(self.image(data, num))
for shapes in data["shapes"]:
label = shapes["label"].split("_")
if label not in self.label:
self.label.append(label)
points = shapes["points"]
self.annotations.append(self.annotation(points, label, num))
self.annID += 1
# Sort all text labels so they are in the same order across data splits.
self.label.sort()
for label in self.label:
self.categories.append(self.category(label))
for annotation in self.annotations:
annotation["category_id"] = self.getcatid(annotation["category_id"])
def image(self, data, num):
image = {}
img = utils.img_b64_to_arr(data["imageData"])
height, width = img.shape[:2]
img = None
image["height"] = height
image["width"] = width
image["id"] = num
image["file_name"] = data["imagePath"].split("/")[-1]
self.height = height
self.width = width
return image
def category(self, label):
category = {}
category["supercategory"] = label[0]
category["id"] = len(self.categories)
category["name"] = label[0]
return category
def annotation(self, points, label, num):
annotation = {}
contour = np.array(points)
x = contour[:, 0]
y = contour[:, 1]
area = 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
annotation["segmentation"] = [list(np.asarray(points).flatten())]
annotation["iscrowd"] = 0
annotation["area"] = area
annotation["image_id"] = num
annotation["bbox"] = list(map(float, self.getbbox(points)))
annotation["category_id"] = label[0] # self.getcatid(label)
annotation["id"] = self.annID
return annotation
def getcatid(self, label):
for category in self.categories:
if label == category["name"]:
return category["id"]
print("label: {} not in categories: {}.".format(label, self.categories))
exit()
return -1
def getbbox(self, points):
polygons = points
mask = self.polygons_to_mask([self.height, self.width], polygons)
return self.mask2box(mask)
def mask2box(self, mask):
index = np.argwhere(mask == 1)
rows = index[:, 0]
clos = index[:, 1]
left_top_r = np.min(rows) # y
left_top_c = np.min(clos) # x
right_bottom_r = np.max(rows)
right_bottom_c = np.max(clos)
return [
left_top_c,
left_top_r,
right_bottom_c - left_top_c,
right_bottom_r - left_top_r,
]
def polygons_to_mask(self, img_shape, polygons):
mask = np.zeros(img_shape, dtype=np.uint8)
mask = PIL.Image.fromarray(mask)
xy = list(map(tuple, polygons))
PIL.ImageDraw.Draw(mask).polygon(xy=xy, outline=1, fill=1)
mask = np.array(mask, dtype=bool)
return mask
def data2coco(self):
data_coco = {}
data_coco["images"] = self.images
data_coco["categories"] = self.categories
data_coco["annotations"] = self.annotations
return data_coco
def save_json(self):
print("save coco json")
self.data_transfer()
self.data_coco = self.data2coco()
print(self.save_json_path)
os.makedirs(
os.path.dirname(os.path.abspath(self.save_json_path)), exist_ok=True
)
json.dump(self.data_coco, open(self.save_json_path, "w"), indent=4)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(
description="labelme annotation to coco data json file."
)
parser.add_argument(
"labelme_images",
help="Directory to labelme images and annotation json files.",
type=str,
)
parser.add_argument(
"--output", help="Output json file path.", default="trainval.json"
)
args, unknown = parser.parse_known_args()
#args = parser.parse_args()
labelme_json = glob.glob(os.path.join(args.labelme_images, "*.json"))
labelme2coco(labelme_json, args.output)
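    # Example invocation when run as a script (a sketch; the paths are placeholders):
    #   python labelme2coco.py path/to/labelme_annotations --output trainval.json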
###Output
save coco json
trainval.json
|
M4_Convolutional_Neural_Networks/Week3/codes/Autonomous_driving_application_Car_detection.ipynb | ###Markdown
Autonomous Driving - Car DetectionWelcome to the Week 3 programming assignment! In this notebook, you'll implement object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: [Redmon et al., 2016](https://arxiv.org/abs/1506.02640) and [Redmon and Farhadi, 2016](https://arxiv.org/abs/1612.08242). **By the end of this assignment, you'll be able to**:- Detect objects in a car detection dataset- Implement non-max suppression to increase accuracy- Implement intersection over union- Handle bounding boxes, a type of image annotation popular in deep learning Table of Content- [Packages](0)- [1 - Problem Statement](1)- [2 - YOLO](2) - [2.1 - Model Details](2-1) - [2.2 - Filtering with a Threshold on Class Scores](2-2) - [Exercise 1 - yolo_filter_boxes](ex-1) - [2.3 - Non-max Suppression](2-3) - [Exercise 2 - iou](ex-2) - [2.4 - YOLO Non-max Suppression](2-4) - [Exercise 3 - yolo_non_max_suppression](ex-3) - [2.5 - Wrapping Up the Filtering](2-5) - [Exercise 4 - yolo_eval](ex-4)- [3 - Test YOLO Pre-trained Model on Images](3) - [3.1 - Defining Classes, Anchors and Image Shape](3-1) - [3.2 - Loading a Pre-trained Model](3-2) - [3.3 - Convert Output of the Model to Usable Bounding Box Tensors](3-3) - [3.4 - Filtering Boxes](3-4) - [3.5 - Run the YOLO on an Image](3-5)- [4 - Summary for YOLO](4)- [5 - References](5) PackagesRun the following cell to load the packages and dependencies that will come in handy as you build the object detector!
###Code
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
from PIL import ImageFont, ImageDraw, Image
import tensorflow as tf
from tensorflow.python.framework.ops import EagerTensor
from tensorflow.keras.models import load_model
from yad2k.models.keras_yolo import yolo_head
from yad2k.utils.utils import draw_boxes, get_colors_for_classes, scale_boxes, read_classes, read_anchors, preprocess_image
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 - Problem StatementYou are working on a self-driving car. Go you! As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds as you drive around. Pictures taken from a car-mounted camera while driving around Silicon Valley. Dataset provided by drive.ai.You've gathered all these images into a folder and labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like: Figure 1: Definition of a box If there are 80 classes you want the object detector to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1, and the rest of which are 0. The video lectures used the latter representation; in this notebook, you'll use both representations, depending on which is more convenient for a particular step. In this exercise, you'll discover how YOLO ("You Only Look Once") performs object detection, and then apply it to car detection. Because the YOLO model is very computationally expensive to train, the pre-trained weights are already loaded for you to use. 2 - YOLO "You Only Look Once" (YOLO) is a popular algorithm because it achieves high accuracy while also being able to run in real time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes. 2.1 - Model Details Inputs and outputs- The **input** is a batch of images, and each image has the shape (m, 608, 608, 3)- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers. Anchor Boxes* Anchor boxes are chosen by exploring the training data to choose reasonable height/width ratios that represent the different classes. For this assignment, 5 anchor boxes were chosen for you (to cover the 80 classes), and stored in the file './model_data/yolo_anchors.txt'* The dimension for anchor boxes is the second to last dimension in the encoding: $(m, n_H,n_W,anchors,classes)$.* The YOLO architecture is: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85). EncodingLet's look in greater detail at what this encoding represents. Figure 2 : Encoding architecture for YOLO If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object. Since you're using 5 anchor boxes, each of the 19 x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.For simplicity, you'll flatten the last two dimensions of the shape (19, 19, 5, 85) encoding, so the output of the Deep CNN is (19, 19, 425). Figure 3 : Flattening the last two last dimensions Class scoreNow, for each box (of each cell) you'll compute the following element-wise product and extract a probability that the box contains a certain class. The class score is $score_{c,i} = p_{c} \times c_{i}$: the probability that there is an object $p_{c}$ times the probability that the object is a certain class $c_{i}$. 
Figure 4: Find the class detected by each box Example of figure 4* In figure 4, let's say for box 1 (cell 1), the probability that an object exists is $p_{1}=0.60$. So there's a 60% chance that an object exists in box 1 (cell 1). * The probability that the object is the class "category 3 (a car)" is $c_{3}=0.73$. * The score for box 1 and for category "3" is $score_{1,3}=0.60 \times 0.73 = 0.44$. * Let's say you calculate the score for all 80 classes in box 1, and find that the score for the car class (class 3) is the maximum. So you'll assign the score 0.44 and class "3" to this box "1". Visualizing classesHere's one way to visualize what YOLO is predicting on an image:- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across the 80 classes, one maximum for each of the 5 anchor boxes).- Color that grid cell according to what object that grid cell considers the most likely.Doing this results in this picture: Figure 5: Each one of the 19x19 grid cells is colored according to which class has the largest predicted probability in that cell. Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm. Visualizing bounding boxesAnother way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this: Figure 6: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. Non-Max suppressionIn the figure above, the only boxes plotted are ones for which the model had assigned a high probability, but this is still too many boxes. You'd like to reduce the algorithm's output to a much smaller number of detected objects. To do so, you'll use **non-max suppression**. Specifically, you'll carry out these steps: - Get rid of boxes with a low score. Meaning, the box is not very confident about detecting a class, either due to the low probability of any object, or low probability of this particular class.- Select only one box when several boxes overlap with each other and detect the same object. 2.2 - Filtering with a Threshold on Class ScoresYou're going to first apply a filter by thresholding, meaning you'll get rid of any box for which the class "score" is less than a chosen threshold. The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It's convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables: - `box_confidence`: tensor of shape $(19, 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.- `boxes`: tensor of shape $(19, 19, 5, 4)$ containing the midpoint and dimensions $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes in each cell.- `box_class_probs`: tensor of shape $(19, 19, 5, 80)$ containing the "class probabilities" $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell. Exercise 1 - yolo_filter_boxesImplement `yolo_filter_boxes()`.1. Compute box scores by doing the elementwise product as described in Figure 4 ($p \times c$). 
The following code may help you choose the right operator: ```pythona = np.random.randn(19, 19, 5, 1)b = np.random.randn(19, 19, 5, 80)c = a * b shape of c will be (19, 19, 5, 80)```This is an example of **broadcasting** (multiplying vectors of different sizes).2. For each box, find: - the index of the class with the maximum box score - the corresponding box score **Useful References** * [tf.math.argmax](https://www.tensorflow.org/api_docs/python/tf/math/argmax) * [tf.math.reduce_max](https://www.tensorflow.org/api_docs/python/tf/math/reduce_max) **Helpful Hints** * For the `axis` parameter of `argmax` and `reduce_max`, if you want to select the **last** axis, one way to do so is to set `axis=-1`. This is similar to Python array indexing, where you can select the last position of an array using `arrayname[-1]`. * Applying `reduce_max` normally collapses the axis for which the maximum is applied. `keepdims=False` is the default option, and allows that dimension to be removed. You don't need to keep the last dimension after applying the maximum here.3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be `True` for the boxes you want to keep. 4. Use TensorFlow to apply the mask to `box_class_scores`, `boxes` and `box_classes` to filter out the boxes you don't want. You should be left with just the subset of boxes you want to keep. **One more useful reference**: * [tf.boolean mask](https://www.tensorflow.org/api_docs/python/tf/boolean_mask) **And one more helpful hint**: :) * For the `tf.boolean_mask`, you can keep the default `axis=None`.
###Code
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(boxes, box_confidence, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
boxes -- tensor of shape (19, 19, 5, 4)
box_confidence -- tensor of shape (19, 19, 5, 1)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold],
then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
x = 10
y = tf.constant(100)
# Step 1: Compute box scores
##(≈ 1 line)
box_scores = box_confidence * box_class_probs
# Step 2: Find the box_classes using the max box_scores, keep track of the corresponding score
##(≈ 2 lines)
box_classes = tf.math.argmax(box_scores, axis=-1)
box_class_scores = tf.math.reduce_max(box_scores,keepdims=False,axis=-1)
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
## (≈ 1 line)
filtering_mask = box_class_scores >= threshold
# Step 4: Apply the mask to box_class_scores, boxes and box_classes
## (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask, axis=None)
boxes = tf.boolean_mask(boxes, filtering_mask, axis=None)
classes = tf.boolean_mask(box_classes, filtering_mask, axis=None)
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
return scores, boxes, classes
tf.random.set_seed(10)
box_confidence = tf.random.normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random.normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random.normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(boxes, box_confidence, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].numpy()))
print("boxes[2] = " + str(boxes[2].numpy()))
print("classes[2] = " + str(classes[2].numpy()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
assert type(scores) == EagerTensor, "Use tensorflow functions"
assert type(boxes) == EagerTensor, "Use tensorflow functions"
assert type(classes) == EagerTensor, "Use tensorflow functions"
assert scores.shape == (1789,), "Wrong shape in scores"
assert boxes.shape == (1789, 4), "Wrong shape in boxes"
assert classes.shape == (1789,), "Wrong shape in classes"
assert np.isclose(scores[2].numpy(), 9.270486), "Values are wrong on scores"
assert np.allclose(boxes[2].numpy(), [4.6399336, 3.2303846, 4.431282, -2.202031]), "Values are wrong on boxes"
assert classes[2].numpy() == 8, "Values are wrong on classes"
print("\033[92m All tests passed!")
###Output
scores[2] = 9.270486
boxes[2] = [ 4.6399336 3.2303846 4.431282 -2.202031 ]
classes[2] = 8
scores.shape = (1789,)
boxes.shape = (1789, 4)
classes.shape = (1789,)
[92m All tests passed!
###Markdown
**Expected Output**: scores[2] 9.270486 boxes[2] [ 4.6399336 3.2303846 4.431282 -2.202031 ] classes[2] 8 scores.shape (1789,) boxes.shape (1789, 4) classes.shape (1789,) **Note** In the test for `yolo_filter_boxes`, you're using random numbers to test the function. In real data, the `box_class_probs` would contain non-zero values between 0 and 1 for the probabilities. The box coordinates in `boxes` would also be chosen so that lengths and heights are non-negative. 2.3 - Non-max SuppressionEven after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS). Figure 7 : In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) of the 3 boxes. Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU. Figure 8 : Definition of "Intersection over Union". Exercise 2 - iouImplement `iou()` Some hints:- This code uses the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) is the lower-right corner. In other words, the (0,0) origin starts at the top left corner of the image. As x increases, you move to the right. As y increases, you move down.- For this exercise, a box is defined using its two corners: upper left $(x_1, y_1)$ and lower right $(x_2,y_2)$, instead of using the midpoint, height and width. This makes it a bit easier to calculate the intersection.- To calculate the area of a rectangle, multiply its height $(y_2 - y_1)$ by its width $(x_2 - x_1)$. Since $(x_1,y_1)$ is the top left and $x_2,y_2$ are the bottom right, these differences should be non-negative.- To find the **intersection** of the two boxes $(xi_{1}, yi_{1}, xi_{2}, yi_{2})$: - Feel free to draw some examples on paper to clarify this conceptually. - The top left corner of the intersection $(xi_{1}, yi_{1})$ is found by comparing the top left corners $(x_1, y_1)$ of the two boxes and finding a vertex that has an x-coordinate that is closer to the right, and y-coordinate that is closer to the bottom. - The bottom right corner of the intersection $(xi_{2}, yi_{2})$ is found by comparing the bottom right corners $(x_2,y_2)$ of the two boxes and finding a vertex whose x-coordinate is closer to the left, and the y-coordinate that is closer to the top. - The two boxes **may have no intersection**. You can detect this if the intersection coordinates you calculate end up being the top right and/or bottom left corners of an intersection box. Another way to think of this is if you calculate the height $(y_2 - y_1)$ or width $(x_2 - x_1)$ and find that at least one of these lengths is negative, then there is no intersection (intersection area is zero). - The two boxes may intersect at the **edges or vertices**, in which case the intersection area is still zero. This happens when either the height or width (or both) of the calculated intersection is zero.**Additional Hints**- `xi1` = **max**imum of the x1 coordinates of the two boxes- `yi1` = **max**imum of the y1 coordinates of the two boxes- `xi2` = **min**imum of the x2 coordinates of the two boxes- `yi2` = **min**imum of the y2 coordinates of the two boxes- `inter_area` = You can use `max(height, 0)` and `max(width, 0)`
###Code
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (box1_x1, box1_y1, box1_x2, box_1_y2)
box2 -- second box, list object with coordinates (box2_x1, box2_y1, box2_x2, box2_y2)
"""
(box1_x1, box1_y1, box1_x2, box1_y2) = box1
(box2_x1, box2_y1, box2_x2, box2_y2) = box2
# Calculate the (yi1, xi1, yi2, xi2) coordinates of the intersection of box1 and box2. Calculate its Area.
##(≈ 7 lines)
xi1 = max(box1[0],box2[0])
yi1 = max(box1[1],box2[1])
xi2 = min(box1[2],box2[2])
yi2 = min(box1[3],box2[3])
    inter_width = max(xi2 - xi1, 0)
    inter_height = max(yi2 - yi1, 0)
inter_area = inter_width * inter_height
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
## (≈ 3 lines)
box1_area = (box1[3] - box1[1]) * (box1[2] - box1[0])
box2_area = (box2[3] - box2[1]) * (box2[2] - box2[0])
union_area = box1_area + box2_area - inter_area
# compute the IoU
## (≈ 1 line)
iou = inter_area / union_area
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
return iou
## Test case 1: boxes intersect
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou for intersecting boxes = " + str(iou(box1, box2)))
assert iou(box1, box2) < 1, "The intersection area must be always smaller or equal than the union area."
## Test case 2: boxes do not intersect
box1 = (1,2,3,4)
box2 = (5,6,7,8)
print("iou for non-intersecting boxes = " + str(iou(box1,box2)))
assert iou(box1, box2) == 0, "Intersection must be 0"
## Test case 3: boxes intersect at vertices only
box1 = (1,1,2,2)
box2 = (2,2,3,3)
print("iou for boxes that only touch at vertices = " + str(iou(box1,box2)))
assert iou(box1, box2) == 0, "Intersection at vertices must be 0"
## Test case 4: boxes intersect at edge only
box1 = (1,1,3,3)
box2 = (2,3,3,4)
print("iou for boxes that only touch at edges = " + str(iou(box1,box2)))
assert iou(box1, box2) == 0, "Intersection at edges must be 0"
print("\033[92m All tests passed!")
###Output
iou for intersecting boxes = 0.14285714285714285
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
[92m All tests passed!
###Markdown
**Expected Output**:```iou for intersecting boxes = 0.14285714285714285iou for non-intersecting boxes = 0.0iou for boxes that only touch at vertices = 0.0iou for boxes that only touch at edges = 0.0``` 2.4 - YOLO Non-max SuppressionYou are now ready to implement non-max suppression. The key steps are: 1. Select the box that has the highest score.2. Compute the overlap of this box with all other boxes, and remove boxes that overlap significantly (iou >= `iou_threshold`).3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain. Exercise 3 - yolo_non_max_suppressionImplement `yolo_non_max_suppression()` using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):**Reference documentation**: - [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)```tf.image.non_max_suppression( boxes, scores, max_output_size, iou_threshold=0.5, name=None)```Note that in the version of TensorFlow used here, there is no parameter `score_threshold` (it's shown in the documentation for the latest version) so trying to set this value will result in an error message: *got an unexpected keyword argument `score_threshold`.*- [tf.gather()](https://www.tensorflow.org/api_docs/python/tf/gather)```keras.gather( reference, indices)```
###Code
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None,), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
Note: The "None" dimension of the output tensors has to be less than or equal to max_boxes.
"""
max_boxes_tensor = tf.Variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
##(≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes, iou_threshold)
# Use tf.gather() to select only nms_indices from scores, boxes and classes
##(≈ 3 lines)
scores = tf.gather(scores, nms_indices)
boxes = tf.gather(boxes, nms_indices)
classes = tf.gather(classes, nms_indices)
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
return scores, boxes, classes
tf.random.set_seed(10)
scores = tf.random.normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random.normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random.normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
assert type(scores) == EagerTensor, "Use tensorflow functions"
print("scores[2] = " + str(scores[2].numpy()))
print("boxes[2] = " + str(boxes[2].numpy()))
print("classes[2] = " + str(classes[2].numpy()))
print("scores.shape = " + str(scores.numpy().shape))
print("boxes.shape = " + str(boxes.numpy().shape))
print("classes.shape = " + str(classes.numpy().shape))
assert type(scores) == EagerTensor, "Use tensorflow functions"
assert type(boxes) == EagerTensor, "Use tensorflow functions"
assert type(classes) == EagerTensor, "Use tensorflow functions"
assert scores.shape == (10,), "Wrong shape"
assert boxes.shape == (10, 4), "Wrong shape"
assert classes.shape == (10,), "Wrong shape"
assert np.isclose(scores[2].numpy(), 8.147684), "Wrong value on scores"
assert np.allclose(boxes[2].numpy(), [ 6.0797963, 3.743308, 1.3914018, -0.34089637]), "Wrong value on boxes"
assert np.isclose(classes[2].numpy(), 1.7079165), "Wrong value on classes"
print("\033[92m All tests passed!")
###Output
scores[2] = 8.147684
boxes[2] = [ 6.0797963 3.743308 1.3914018 -0.34089637]
classes[2] = 1.7079165
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
[92m All tests passed!
###Markdown
**Expected Output**: scores[2] 8.147684 boxes[2] [ 6.0797963 3.743308 1.3914018 -0.34089637] classes[2] 1.7079165 scores.shape (10,) boxes.shape (10, 4) classes.shape (10,) 2.5 - Wrapping Up the FilteringIt's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented. Exercise 4 - yolo_evalImplement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementational detail you have to know. There're a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which are provided): ```pythonboxes = yolo_boxes_to_corners(box_xy, box_wh) ```which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes````pythonboxes = scale_boxes(boxes, image_shape)```YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image -- for example, the car detection dataset had 720x1280 images -- this step rescales the boxes so that they can be plotted on top of the original 720x1280 image. Don't worry about these two functions; you'll see where they need to be called below.
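As a quick sanity check of the midpoint-to-corner conversion, here is the same arithmetic on a single hand-made box in plain NumPy (a sketch only; the assignment uses the provided helpers): a box centred at (x, y) = (0.5, 0.5) with width 0.2 and height 0.4 has corners (y_min, x_min, y_max, x_max) = (0.3, 0.4, 0.7, 0.6), matching the concatenation order in the `yolo_boxes_to_corners` helper shown below.

```python
import numpy as np

box_xy = np.array([0.5, 0.5])   # (x, y) centre predicted by YOLO
box_wh = np.array([0.2, 0.4])   # (width, height) predicted by YOLO

box_mins = box_xy - box_wh / 2.0   # (x_min, y_min) = (0.4, 0.3)
box_maxes = box_xy + box_wh / 2.0  # (x_max, y_max) = (0.6, 0.7)

# Reorder to (y_min, x_min, y_max, x_max), matching yolo_boxes_to_corners below
corners = np.array([box_mins[1], box_mins[0], box_maxes[1], box_maxes[0]])
print(corners)  # [0.3 0.4 0.7 0.6]
```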
###Code
def yolo_boxes_to_corners(box_xy, box_wh):
"""Convert YOLO box predictions to bounding box corners."""
box_mins = box_xy - (box_wh / 2.)
box_maxes = box_xy + (box_wh / 2.)
return tf.keras.backend.concatenate([
box_mins[..., 1:2], # y_min
box_mins[..., 0:1], # x_min
box_maxes[..., 1:2], # y_max
box_maxes[..., 0:1] # x_max
])
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720, 1280), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
# Retrieve outputs of the YOLO model (≈1 line)
box_xy, box_wh, box_confidence, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions (convert boxes box_xy and box_wh to corner coordinates)
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(boxes, box_confidence, box_class_probs, score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with
# maximum number of boxes set to max_boxes and a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)
# YOUR CODE STARTS HERE
# YOUR CODE ENDS HERE
print(scores)
return scores, boxes, classes
tf.random.set_seed(10)
yolo_outputs = (tf.random.normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random.normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random.normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random.normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].numpy()))
print("boxes[2] = " + str(boxes[2].numpy()))
print("classes[2] = " + str(classes[2].numpy()))
print("scores.shape = " + str(scores.numpy().shape))
print("boxes.shape = " + str(boxes.numpy().shape))
print("classes.shape = " + str(classes.numpy().shape))
assert type(scores) == EagerTensor, "Use tensorflow functions"
assert type(boxes) == EagerTensor, "Use tensorflow functions"
assert type(classes) == EagerTensor, "Use tensorflow functions"
assert scores.shape == (10,), "Wrong shape"
assert boxes.shape == (10, 4), "Wrong shape"
assert classes.shape == (10,), "Wrong shape"
assert np.isclose(scores[2].numpy(), 171.60194), "Wrong value on scores"
assert np.allclose(boxes[2].numpy(), [-1240.3483, -3212.5881, -645.78, 2024.3052]), "Wrong value on boxes"
assert np.isclose(classes[2].numpy(), 16), "Wrong value on classes"
print("\033[92m All tests passed!")
###Output
tf.Tensor(
[174.1641 171.88335 171.60194 153.03522 142.66139 137.75563 137.1806
136.07318 133.38445 133.06157], shape=(10,), dtype=float32)
scores[2] = 171.60194
boxes[2] = [-1240.3483 -3212.5881 -645.78 2024.3052]
classes[2] = 16
scores.shape = (10,)
boxes.shape = (10, 4)
classes.shape = (10,)
[92m All tests passed!
###Markdown
**Expected Output**: scores[2] 171.60194 boxes[2] [-1240.3483 -3212.5881 -645.78 2024.3052] classes[2] 16 scores.shape (10,) boxes.shape (10, 4) classes.shape (10,) 3 - Test YOLO Pre-trained Model on ImagesIn this section, you are going to use a pre-trained model and test it on the car detection dataset. 3.1 - Defining Classes, Anchors and Image ShapeYou're trying to detect 80 classes, and are using 5 anchor boxes. The information on the 80 classes and 5 boxes is gathered in two files: "coco_classes.txt" and "yolo_anchors.txt". You'll read class names and anchors from text files. The car detection dataset has 720x1280 images, which are pre-processed into 608x608 images.
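The actual `read_classes` and `read_anchors` helpers are provided with the assignment and are used in the next cell. Purely for intuition, a rough sketch of what such readers might look like is shown here, assuming one class name per line and a single comma-separated line of anchor values (both file-format details are assumptions).

```python
import numpy as np

def read_classes_sketch(path):
    # Assumes one class name per line, e.g. "person", "bicycle", "car", ...
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def read_anchors_sketch(path):
    # Assumes one line of comma-separated floats, grouped into (width, height) pairs
    with open(path) as f:
        values = [float(v) for v in f.readline().split(',')]
    return np.array(values).reshape(-1, 2)
```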
###Code
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
model_image_size = (608, 608) # Same as yolo_model input layer size
###Output
_____no_output_____
###Markdown
3.2 - Loading a Pre-trained ModelTraining a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pre-trained Keras YOLO model stored in "yolo.h5". These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but are simply referred to as "YOLO" in this notebook.Run the cell below to load the model from this file.
###Code
yolo_model = load_model("model_data/", compile=False)
###Output
_____no_output_____
###Markdown
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains:
###Code
yolo_model.summary()
###Output
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 608, 608, 3) 0
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 608, 608, 32) 864 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 608, 608, 32) 128 conv2d[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 608, 608, 32) 0 batch_normalization[0][0]
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 304, 304, 32) 0 leaky_re_lu[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 304, 304, 64) 18432 max_pooling2d[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 304, 304, 64) 256 conv2d_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 304, 304, 64) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 152, 152, 64) 0 leaky_re_lu_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 152, 152, 128 73728 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 152, 152, 128 512 conv2d_2[0][0]
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 152, 152, 128 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 152, 152, 64) 8192 leaky_re_lu_2[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 152, 152, 64) 256 conv2d_3[0][0]
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 152, 152, 64) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 152, 152, 128 73728 leaky_re_lu_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 152, 152, 128 512 conv2d_4[0][0]
__________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 152, 152, 128 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 76, 76, 128) 0 leaky_re_lu_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 76, 76, 256) 294912 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 76, 76, 256) 1024 conv2d_5[0][0]
__________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 76, 76, 128) 32768 leaky_re_lu_5[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 76, 76, 128) 512 conv2d_6[0][0]
__________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 76, 76, 128) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 76, 76, 256) 294912 leaky_re_lu_6[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 76, 76, 256) 1024 conv2d_7[0][0]
__________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 76, 76, 256) 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 38, 38, 256) 0 leaky_re_lu_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 38, 38, 512) 1179648 max_pooling2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 38, 38, 512) 2048 conv2d_8[0][0]
__________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_8[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 38, 38, 256) 1024 conv2d_9[0][0]
__________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_9[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 38, 38, 512) 2048 conv2d_10[0][0]
__________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 38, 38, 256) 131072 leaky_re_lu_10[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 38, 38, 256) 1024 conv2d_11[0][0]
__________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, 38, 38, 256) 0 batch_normalization_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 38, 38, 512) 1179648 leaky_re_lu_11[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 38, 38, 512) 2048 conv2d_12[0][0]
__________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, 38, 38, 512) 0 batch_normalization_12[0][0]
__________________________________________________________________________________________________
max_pooling2d_4 (MaxPooling2D) (None, 19, 19, 512) 0 leaky_re_lu_12[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 19, 19, 1024) 4718592 max_pooling2d_4[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 19, 19, 1024) 4096 conv2d_13[0][0]
__________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_13[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_13[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 19, 19, 512) 2048 conv2d_14[0][0]
__________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_14[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_14[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 19, 19, 1024) 4096 conv2d_15[0][0]
__________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_15[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 19, 19, 512) 524288 leaky_re_lu_15[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 19, 19, 512) 2048 conv2d_16[0][0]
__________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 19, 19, 512) 0 batch_normalization_16[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 19, 19, 1024) 4718592 leaky_re_lu_16[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 19, 19, 1024) 4096 conv2d_17[0][0]
__________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_17[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 19, 19, 1024) 4096 conv2d_18[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 38, 38, 64) 32768 leaky_re_lu_12[0][0]
__________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_18[0][0]
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 38, 38, 64) 256 conv2d_20[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 19, 19, 1024) 9437184 leaky_re_lu_18[0][0]
__________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, 38, 38, 64) 0 batch_normalization_20[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 19, 19, 1024) 4096 conv2d_19[0][0]
__________________________________________________________________________________________________
space_to_depth_x2 (Lambda) (None, 19, 19, 256) 0 leaky_re_lu_20[0][0]
__________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_19[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 19, 19, 1280) 0 space_to_depth_x2[0][0]
leaky_re_lu_19[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 19, 19, 1024) 11796480 concatenate[0][0]
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, 19, 19, 1024) 4096 conv2d_21[0][0]
__________________________________________________________________________________________________
leaky_re_lu_21 (LeakyReLU) (None, 19, 19, 1024) 0 batch_normalization_21[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 19, 19, 425) 435625 leaky_re_lu_21[0][0]
==================================================================================================
Total params: 50,983,561
Trainable params: 50,962,889
Non-trainable params: 20,672
__________________________________________________________________________________________________
###Markdown
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do -- this is fine!**Reminder**: This model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2). 3.3 - Convert Output of the Model to Usable Bounding Box TensorsThe output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. You will need to call `yolo_head` to format the encoding of the model you got from `yolo_model` into something decipherable:yolo_model_outputs = yolo_model(image_data) yolo_outputs = yolo_head(yolo_model_outputs, anchors, len(class_names))The variable `yolo_outputs` will be defined as a set of 4 tensors that you can then use as input by your yolo_eval function. If you are curious about how yolo_head is implemented, you can find the function definition in the file `keras_yolo.py`. The file is also located in your workspace in this path: `yad2k/models/keras_yolo.py`. 3.4 - Filtering Boxes`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. To perform filtering and select only the best boxes, you will call `yolo_eval`, which you had previously implemented, to do so: out_scores, out_boxes, out_classes = yolo_eval(yolo_outputs, [image.size[1], image.size[0]], 10, 0.3, 0.5) 3.5 - Run the YOLO on an ImageLet the fun begin! You will create a graph that can be summarized as follows:`yolo_model.input` is given to `yolo_model`. The model is used to compute the output `yolo_model.output``yolo_model.output` is processed by `yolo_head`. It gives you `yolo_outputs``yolo_outputs` goes through a filtering function, `yolo_eval`. It outputs your predictions: `out_scores`, `out_boxes`, `out_classes`. Now, we have implemented for you the `predict(image_file)` function, which runs the graph to test YOLO on an image to compute `out_scores`, `out_boxes`, `out_classes`.The code below also uses the following function: image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))which opens the image file and scales, reshapes and normalizes the image. It returns the outputs: image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it. image_data: a numpy-array representing the image. This will be the input to the CNN.
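For reference, a rough sketch of what a preprocessing step like `preprocess_image` typically does is shown below; the provided helper should be used in the assignment, and the exact resizing filter and dtype handling here are assumptions.

```python
import numpy as np
from PIL import Image

def preprocess_image_sketch(img_path, model_image_size=(608, 608)):
    image = Image.open(img_path)                              # PIL image, later used for drawing boxes
    resized = image.resize(model_image_size)                  # scale to the network input size
    image_data = np.array(resized, dtype='float32') / 255.0   # normalize pixel values to [0, 1]
    image_data = np.expand_dims(image_data, 0)                # add a batch dimension -> (1, 608, 608, 3)
    return image, image_data
```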
###Code
def predict(image_file):
"""
Runs the graph to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
yolo_model_outputs = yolo_model(image_data)
yolo_outputs = yolo_head(yolo_model_outputs, anchors, len(class_names))
out_scores, out_boxes, out_classes = yolo_eval(yolo_outputs, [image.size[1], image.size[0]], 10, 0.3, 0.5)
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), "images/" + image_file))
# Generate colors for drawing bounding boxes.
colors = get_colors_for_classes(len(class_names))
# Draw bounding boxes on the image file
#draw_boxes2(image, out_scores, out_boxes, out_classes, class_names, colors, image_shape)
draw_boxes(image, out_boxes, out_classes, class_names, out_scores)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=100)
# Display the results in the notebook
output_image = Image.open(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
###Output
_____no_output_____
###Markdown
Run the following cell on the "test.jpg" image to verify that your function is correct.
###Code
out_scores, out_boxes, out_classes = predict("test.jpg")
###Output
tf.Tensor(
[0.8912789 0.7958998 0.74370307 0.69599354 0.6702379 0.664843
0.60223454 0.44434837 0.3722607 0.35717058], shape=(10,), dtype=float32)
Found 10 boxes for images/test.jpg
car 0.89 (367, 300) (745, 648)
car 0.80 (761, 282) (942, 412)
car 0.74 (159, 303) (346, 440)
car 0.70 (947, 324) (1280, 705)
bus 0.67 (5, 266) (220, 407)
car 0.66 (706, 279) (786, 350)
car 0.60 (925, 285) (1045, 374)
car 0.44 (336, 296) (378, 335)
car 0.37 (965, 273) (1022, 292)
traffic light 0.36 (681, 195) (692, 214)
|
growing.ipynb | ###Markdown
Growing classesWhen implementing much of the functionality and running the research whose artifacts live in this repository, the authors found it best to document the iterations of the research and development. However, Python insists classes should be defined in one block, complicating the iterative development of its methods. We thus write here a decorator that allows for the definition of classes one method at a time, across multiple code cells.
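A usage sketch of the pattern implemented below: decorate a class once, then keep adding methods to it from later cells.

```python
@growing
class Model:
    def base(self):
        return 1

# ...possibly several cells later...
@Model.method
def double_base(self):
    return 2 * self.base()

assert Model().double_base() == 2
```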
###Code
%load_ext pycodestyle_magic
%flake8_on --max_line_length 120 --ignore W293,E302
from contextlib import contextmanager
from dask.delayed import Delayed
import dask
from functools import reduce
import inspect
from jupytest import Suite, Report, Magic, summarize_results, assert_, eq, belong_to, is_any_of, not_
import operator as op
import re
from typing import Callable, Sequence, Optional, cast, Set, Union
suite = Suite()
if __name__ == "__main__":
suite |= Report()
suite |= Magic()
Decorator = Callable[[Callable], Callable]
def growing(klass: type) -> type:
def add_method(
fn_: Optional[Callable] = None,
name: str = "",
wrapped_in: Union[Decorator, Sequence[Decorator]] = []
) -> Callable:
def add_to_class(fn: Callable):
name_method = name or fn.__name__
method_new = reduce(lambda f, w: w(f), wrapped_in if hasattr(wrapped_in, "__iter__") else [wrapped_in], fn)
setattr(klass, name_method, method_new)
return getattr(klass, name_method)
if fn_ is None:
return add_to_class
return add_to_class(cast(Callable, fn_))
def add_class_method(
fn_: Optional[Callable] = None,
name: str = "",
wrapped_in: Union[Decorator, Sequence[Decorator]] = []
) -> Callable:
wrappers = wrapped_in if hasattr(wrapped_in, "__iter__") else [wrapped_in]
return add_method(fn_, name, wrappers + [classmethod])
setattr(klass, "method", staticmethod(add_method))
setattr(klass, "classmethod", staticmethod(add_class_method))
return klass
###Output
_____no_output_____
###Markdown
Tests
###Code
def user_members(klass) -> Set[str]:
return {m for m in dir(klass) if not re.match(r"^__.*__$", m)}
%%test Add method
@growing
class MyClass:
def f(self):
return 5
assert_(op.le, {"f", "method"}, user_members(MyClass), msg="User members before adding method g")
assert_(not_(belong_to(user_members(MyClass))), "g")
@MyClass.method
def g(self, x):
return self.f() + x
assert_(op.le, {"f", "g", "method"}, user_members(MyClass), msg="User members after adding method g")
assert_(eq, obtained=MyClass().g(3), expected=8)
%%test Add Dask Delayed method
@growing
class MyClass:
def f(self):
return 5
@MyClass.method(wrapped_in=dask.delayed(pure=True))
def h(self, x, y):
return self.f() * x + y
assert_(belong_to(user_members(MyClass)), "h")
assert_(is_any_of(Delayed), MyClass().h(4, 5))
assert_(eq, expected=25, obtained=MyClass().h(4, 5).compute(scheduler="single-threaded"))
%%test Multiple method wrappers
@growing
class MyClass:
def f(self):
return 5
def wrapper1(fn):
return lambda self, x: fn(self, x) + x
def wrapper2(fn):
return lambda self, x: fn(self, x) * x
@MyClass.method(wrapped_in=[wrapper1, wrapper2])
def double_wrapped(self, x):
return x / 3 + self.f()
assert_(belong_to(user_members(MyClass)), "double_wrapped")
assert_(eq, expected=153.0, obtained=MyClass().double_wrapped(9))
%%test Add class method, inelegant
@growing
class MyClass:
C = 34
def f(self):
return 5
try:
@MyClass.method
@classmethod
def cm(cls):
return cls.C
fail()
except AttributeError:
pass
@MyClass.method(wrapped_in=classmethod)
def cm(cls):
return cls.C
assert_(eq, expected=MyClass.C, obtained=MyClass.cm())
%%test Add class method, preferred approach
@growing
class MyClass:
C = 34
def f(self):
return 5
@MyClass.classmethod
def cm(cls):
return cls.C
assert_(eq, expected=MyClass.C, obtained=MyClass.cm())
%%test Add class method that acts as context manager
@growing
class MyClass:
C = 34
def f(self):
return 5
@MyClass.classmethod(wrapped_in=contextmanager)
def changing_C(cls, num: int):
old = cls.C
try:
cls.C = num
yield
finally:
cls.C = old
assert_(eq, expected=34, obtained=MyClass.C)
with MyClass.changing_C(45):
assert_(eq, expected=45, obtained=MyClass.C)
assert_(eq, expected=34, obtained=MyClass.C)
%%test Add method, then redefine it
@growing
class C:
def f(self):
return 56
assert_(eq, expected=56, obtained=C().f())
@C.method
def f(self):
return 890
assert_(eq, expected=890, obtained=C().f())
if __name__ == "__main__":
_ = summarize_results(suite)
###Output
7 passed, [37m0 failed[0m, [37m0 raised an error[0m
|
Chapter 4 - Quarterback Performance-Copy1.ipynb | ###Markdown
Initial Datasets
###Code
# imports used throughout this notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
from scipy.interpolate import UnivariateSpline, InterpolatedUnivariateSpline

# read dataset
qb = pd.read_csv('data/qb_yearly.csv')
qb.dtypes
# we don't need a few of these columns
qb = qb.drop(['gs', 'pos', 'pass_cmp_perc'], axis=1)
# drop seasons with less than 100 pass attempts
# this should filter out non-QBs who threw some passes
# as well as very marginal players
qb = qb.loc[qb['pass_att'] >= 100, :]
# rename some columns
renames = {
'source_player_name': 'player',
'source_player_id': 'player_id',
'pass_adj_yds_per_att': 'aya',
'pass_adj_net_yds_per_att': 'anya'
}
qb = qb.rename(columns=renames)
# convert columns to string
qb['player'] = qb['player'].astype('string')
qb['player_id'] = qb['player_id'].astype('string')
# check missing values
qb.loc[qb.isna().any(axis=1), :]
###Output
_____no_output_____
###Markdown
QB Metrics: Adjusted Net Yards Per Attempt
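The dataset already ships with `pass_adj_net_yds_per_att` (renamed to `anya` above), so nothing needs to be recomputed; for reference, the standard definition is (pass yards + 20 * pass TD - 45 * INT - sack yards) / (attempts + sacks). The sketch below assumes hypothetical sack columns that this yearly file does not include.

```python
# Sketch only -- 'pass_sacked' and 'pass_sacked_yds' are assumed column names,
# not columns present in the qb dataframe above.
def adjusted_net_yards_per_attempt(df):
    numerator = df['pass_yds'] + 20 * df['pass_td'] - 45 * df['pass_int'] - df['pass_sacked_yds']
    denominator = df['pass_att'] + df['pass_sacked']
    return numerator / denominator
```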
###Code
# anya identifies all-time greats like Manning, Brady, Rodgers
# also highlights massive seasons like Mahomes 2018, Ryan 2016, Foles 2013
qb.sort_values('anya', ascending=False).head(10)
# let's look at how anya is distributed
# we have 960 QB seasons
# 25th percentile is 4.6, median is 5.5, 75th is 6.44
qb['anya'].describe()
# looks like anya is normally distributed
# skew and kurtosis near zero, histogram looks normal
from scipy.stats import skew, kurtosis
print(kurtosis(qb['anya']))
print(skew(qb['anya']))
qb['anya'].hist()
###Output
0.09052486510917301
-0.12965253547380817
###Markdown
Create Age Curves with "Delta Method" Unadjusted Delta Method
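The lag/delta mechanics are easiest to see on a tiny throwaway frame first; the cells below then apply the same steps to the real data.

```python
import pandas as pd

toy = pd.DataFrame({
    'player_id': ['A', 'A', 'A', 'B', 'B'],
    'season_year': [2018, 2019, 2021, 2018, 2019],
    'anya': [6.0, 6.5, 5.0, 4.0, 4.8],
})
toy['anya_lag'] = toy.groupby('player_id')['anya'].shift(1)
toy['anya_d'] = toy['anya'] - toy['anya_lag']
toy['season_d'] = toy['season_year'] - toy.groupby('player_id')['season_year'].shift(1)
# keep only consecutive-season pairs (drops player A's 2019 -> 2021 gap)
print(toy.loc[toy['season_d'] == 1])
```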
###Code
# delta method starts with calculating the change or delta in a metric
# from one year to the next
# here, we will start with adjusted net yards per attempt
# will be easier if we sort the data at the beginning
qb = qb.sort_values(['player_id', 'season_year'])
# create two new columns
# anya_lag shows the anya from the previous year
# anya_d shows the change in anya from the previous year
# a positive anya_d means improved, negative means regressed
qb['anya_lag'] = qb.groupby(['player_id'])['anya'].shift(1)
qb['anya_d'] = qb['anya'] - qb['anya_lag']
# the delta method doesn't allow for gaps in seasons
# so we also need to measure the change in season_year
qb['season_lag'] = qb.groupby(['player_id'])['season_year'].shift(1)
qb['season_d'] = qb['season_year'] - qb['season_lag']
# now we can filter out the na rows
# which are the first row of that player in the dataset
qb = qb.loc[~qb.isna().any(axis=1), :]
# we can also filter out rows where season_d > 1
# so we ensure consecutive seasons
qb = qb.loc[qb['season_d'] == 1, :]
# now we'll make a dataframe of age and anya_d
qb_age_curve = (
qb.groupby('age')['anya_d']
.agg(['count', 'mean'])
.reset_index()
)
qb_age_curve.plot(x='age', y='mean', kind='scatter')
###Output
_____no_output_____
###Markdown
Weighted Delta Method
###Code
# as before, we will use adjusted net yards / attempt as the metric
# will be easier if we sort the data at the beginning
# that way we can visually see the lag
qb = qb.sort_values(['player_id', 'season_year'])
# create two new columns
# anya_lag shows the anya from the previous year
# anya_d shows the change in anya from the previous year
# a positive anya_d means improved, negative means regressed
qb['anya_lag'] = qb.groupby(['player_id'])['anya'].shift(1)
qb['anya_d'] = qb['anya'] - qb['anya_lag']
# the delta method doesn't allow for gaps in seasons
# so we also need to measure the change in season_year
qb['season_lag'] = qb.groupby(['player_id'])['season_year'].shift(1)
qb['season_d'] = qb['season_year'] - qb['season_lag']
# now we can filter out the na rows
# which are the first row of that player in the dataset
qb = qb.loc[~qb.isna().any(axis=1), :]
# we can also filter out rows where season_d > 1
# so we ensure consecutive seasons
qb = qb.loc[qb['season_d'] == 1, :]
# index by age so the weighted means line up with the age groups
qb_age_curve = qb_age_curve.set_index('age')
qb_age_curve['anya_d_wm'] = (
    qb
    .groupby('age')
    .apply(lambda df_: np.average(df_.anya_d, weights=df_.pass_att))
)
qb_age_curve
qb_age_curve.reset_index().plot(x='age', y='anya_d_wm', kind='scatter')
# polynomial fit of the weighted deltas
x = qb_age_curve.index
y = qb_age_curve['anya_d_wm']
poly_params = np.polyfit(x, y, 3)
poly_3 = np.poly1d(poly_params)
xpoly = np.linspace(x.min(), x.max(), 100)
ypoly = poly_3(xpoly)
plt.plot(x, y, 'o', xpoly, ypoly)
###Output
_____no_output_____
###Markdown
Create Age Curves with Peak Method
###Code
# idea here is to identify the player's peak year and then
# express every other season as a % of the player's peak
# so if Manning's best season was 10 aya
# a season with 9.2 aya would be 92 (we are using 1-100 scale)
# as before, we will use adjusted net yards / attempt as the metric
# will be easier if we sort the data at the beginning
# that way we can visually check the calculations
qb = qb.sort_values(['player_id', 'season_year'])
# create two new columns
# peak shows the maximum anya for the player
# normally, groupby produces one row per group
# but we want the peak value for every row
# tranform produces series of the same length as the original series
# so if there are 5 Aikman rows, it sets the peak in all of those rows
display(qb.groupby(['player_id'])['anya'].max().head())
display(qb.groupby(['player_id'])['anya'].transform('max').head())
qb['peak'] = qb.groupby(['player_id'])['anya'].transform('max')
# anya_d shows the difference between peak and anya for this row
from math import floor
qb['anya_d'] = qb.apply(lambda df_: floor((df_.anya / df_.peak) * 100), axis=1)
# now we'll make a dataframe of age and anya_d
# we want to use the weighted average of anya_d
# meaning that a QB that throws 600 passes will contribute
# more to the average than one who throws 350 passes.
qb_age_curve = (
qb.query('(age > 21) & (age < 40)')
.groupby('age')
.agg({'anya_d': ['count', 'mean']})
)
qb_age_curve.columns = ['_'.join([el for el in c if el])
for c in qb_age_curve.columns.to_flat_index()]
x = qb_age_curve.index
y = qb_age_curve['anya_d_mean']
poly_params = np.polyfit(x, y, 3)
poly_3 = np.poly1d(poly_params)
xpoly = np.linspace(x.min(), x.max(), 100)
ypoly = poly_3(xpoly)
fig, ax = plt.subplots(figsize=(9, 5))
plt.plot(x, y, 'o', xpoly, ypoly)
plt.xticks(range(21, 40))
# try the same plot with a spline
x = qb_age_curve.index
y = qb_age_curve['anya_d_mean']
spl = UnivariateSpline(x, y, s=25)
xx = np.linspace(x.min(), x.max(), 100)
plt.plot(x, y, 'bo', xx, spl(xx))
x = qb_age_curve.index
y = qb_age_curve['anya_d_mean']
spl = InterpolatedUnivariateSpline(x, y)
xx = np.linspace(x.min(), x.max(), 100)
plt.plot(x, y, 'bo', xx, spl(xx))
# weighted mean
qb_age_curve['anya_d_wm'] = (
qb
.groupby('age')
.apply(lambda df_: np.average(df_.anya_d, weights=df_.pass_att))
)
x = qb_age_curve.index
y = qb_age_curve.anya_d_wm
poly_params = np.polyfit(x, y, 3)
poly_3 = np.poly1d(poly_params)
xx = np.linspace(x.min(), x.max(), 100)
yy = poly_3(xx)
fig, ax = plt.subplots(figsize=(9, 5))
plt.plot(x, y, 'o', xx, yy)
plt.xticks(range(21, 40))
# try the same plot with a spline
x = qb_age_curve.index
y = qb_age_curve['anya_d_wm']
spl = UnivariateSpline(x, y, s=25)
xx = np.linspace(x.min(), x.max(), 100)
yy = spl(xx)
fig, ax = plt.subplots(figsize=(9, 5))
plt.plot(x, y, 'o', xx, yy)
plt.xticks(range(21, 40))
x = qb_age_curve.index
y = qb_age_curve['anya_d_wm']
spl = InterpolatedUnivariateSpline(x, y)
xx = np.linspace(x.min(), x.max(), 100)
yy = spl(xx)
fig, ax = plt.subplots(figsize=(9, 5))
plt.plot(x, y, 'o', xx, yy)
plt.xticks(range(21, 40))
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
# calculate fantasy points
def qb_points(row, add_bonus=False):
"""Calculates qb fantasy points from row in dataframe"""
# assume 4 points pass TD, 1 point per 25 yards
# NOTE: our dataset does not have fumbles
points = 0
points += row.pass_yds * .04
points += row.pass_td * 4
points -= row.pass_int
points += row.rush_yds * .10
points += row.rush_td * 6
if add_bonus and row.pass_yds >= 300:
points += 3
return points
# add fantasy points
def add_fantasy_points(df):
"""Adds fantasy points columns to dataframe"""
df['fpts'] = df.apply(qb_points, axis=1)
df['dkpts'] = df.apply(qb_points, args=(True,), axis=1)
return df
def yearly_stats(df):
statcols = ['pass_att', 'pass_cmp', 'pass_int', 'pass_td', 'pass_yds', 'rush_att',
'rush_td', 'rush_yds', 'air_yards', 'fpts', 'dkpts']
return df.groupby(['nflid', 'player', 'season_year'])[statcols].sum()
def age_as_of_game(df):
"""Player age as of game date"""
# calculate the age by subtracting birthdate from gamedate
# convert the timedelta to days, then divide by 365
return df.apply(lambda df_: (df_.game_date - df_.birthdate).days / 365, axis=1)
def age_as_of_season(df):
"""Player age as of season start (Sept 1)"""
# create index that is cross join of nflid and seasons
idx = pd.MultiIndex.from_product(
[df.nflid.unique(), df.season_year.unique()],
names = ["nflid", "season_year"]
)
df = pd.DataFrame(idx).reset_index().join(df, how='left', on='nflid')
return (
df
.assign(start_date=lambda df_: df_.season_year.apply(lambda x: datetime(x, 9, 1)))
.assign(age=lambda df_: df_.apply(lambda row: (row.start_date - row.birthdate).days / 365, axis=1))
.drop(['birthdate', 'start_date'], axis=1)
.set_index(['nflid', 'season_year'])
)
###Output
_____no_output_____ |
Jupyter/SIT742P07B-MLlib_DataType.ipynb | ###Markdown
SIT742: Modern Data Science **(Week 07: Big Data Platform (II))**---- Materials in this module include resources collected from various open-source online repositories.- You are free to use, change and distribute this package.- If you found any issue/bug for this document, please submit an issue at [tulip-lab/sit742](https://github.com/tulip-lab/sit742/issues)Prepared by **SIT742 Teaching Team**--- Session 7B - Spark MLlib (1): Data TypesThe purpose of this session is to demonstrate different [coefficient and linear regression](https://statisticsbyjim.com/glossary/regression-coefficient/). Content Part 1 Vectors1.1 Dense and Sparse Vectors1.2 Labeled Points Part 2 Matrix Data Types2.1 Local Matrix2.2 Row Matrix2.3 Indexed Row Matrix2.4 Coordinate Matrix2.5 Block Matrix Part 3 Matrix Conversions3.1 Indexed Row Matrix Conversions3.2 Coordinate Matrix Conversions3.3 Block Matrix Conversions Part1. Vectors 1.1.Dense and Sparse VectorsSpark has many libraries, namely under MLlib (Machine Learning Library). It allows for quick and easy scalability of practical machine learning.In this lab exercise, you will learn about the basic Data Types that are used in Spark MLlib. This lab will help you develop the building blocks required to continue developing knowledge in machine learning with Spark.Import the following libraries: numpy as np scipy.sparse as sps Vectors from pyspark.mllib.linalg
###Code
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q https://archive.apache.org/dist/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
!tar xf spark-2.4.0-bin-hadoop2.7.tgz
!pip install -q findspark
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.4.0-bin-hadoop2.7"
import findspark
findspark.init()
import numpy as np
import scipy.sparse as sps
from pyspark.mllib.linalg import Vectors
import time
###Output
_____no_output_____
###Markdown
A local vector has integer-typed and 0-based indices and double-typed values, stored on a single machine. MLlib supports two types of **local vectors**: **dense** and **sparse**. A dense vector is backed by a double array representing its entry values, while a sparse vector is backed by two parallel arrays: indices and values. For example, a vector (1.0, 0.0, 3.0) can be represented in dense format as [1.0, 0.0, 3.0] or in sparse format as (3, [0, 2], [1.0, 3.0]), where 3 is the size of the vector.First, we will be dealing with Dense Vectors. For example, we assume that the dense vectors will be modeled having the values: 8.0, 312.0, -9.0, 1.3. There are 2 types of dense vectors that we can create. The first dense vector we will create is as easy as creating a numpy array, which is using the np.array function, create a dense vector called dense_vector1. Note: numpy's array function takes an array as input
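To see the two representations from the example above side by side before the exercises (a small illustration, not one of the lab tasks):

```python
from pyspark.mllib.linalg import Vectors

# The vector (1.0, 0.0, 3.0) from the text, in both representations
dv = Vectors.dense([1.0, 0.0, 3.0])
sv = Vectors.sparse(3, [0, 2], [1.0, 3.0])   # size 3, non-zero indices [0, 2], values [1.0, 3.0]
print(dv)             # [1.0,0.0,3.0]
print(sv)             # (3,[0,2],[1.0,3.0])
print(sv.toArray())   # back to a dense numpy array
```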
###Code
dense_vector1 = np.array([8.0, 312.0, -9.0, 1.3])
print (dense_vector1)
type(dense_vector1)
###Output
_____no_output_____
###Markdown
The second dense vector is easier than the first, and is made by creating an array, which is to create a dense vector called dense_vector2
###Code
dense_vector2 = [8.0, 312.0, -9.0, 1.3]
print (dense_vector2)
type (dense_vector2)
###Output
_____no_output_____
###Markdown
Next, we will be dealing with sparse vectors. There are 2 types of sparse vectors we can create. The sparse vectors we will be creating will follow these values: 7.0, 0.0, 0.0, 2.0, 0.0, 1.0, 0.0, 0.0, 0.0, 6.5 First, create a sparse vector called sparse_vector1 using Vector's sparse function. Parameters to Vector.sparse: 1st parameter: Size of the sparse vector 2nd parameter: Indicies of array 3rd parameter: Values placed where the indices are
###Code
# Size of the sparse vector = 10
# Indices of array: [0, 3, 5, 9]. Because the index of 7.0 is 0, the index of 2.0 is 3, the index of 1.0 is 5,
# and the index of 6.5 is 9
# Values placed where the indices are: [7.0, 2.0, 1.0, 6.5]
sparse_vector1 = Vectors.sparse(10, [0, 3, 5, 9], [7.0, 2.0, 1.0, 6.5])
print(sparse_vector1)
type(sparse_vector1)
###Output
_____no_output_____
###Markdown
Next we will create a sparse vector called sparse_vector2 using a single-column SciPy csc_matrix The inputs to sps.csc_matrix are: 1st: A tuple consisting of the three inputs: 1st: Data Values (in a numpy array) (values placed at the specified indices) 2nd: Indicies of the array (in a numpy array) (where the values will be placed) 3rd: Index pointer of the array (in a numpy array) 2nd: Shape of the array (rows, columns) Use 10 rows and 1 column shape = (\_,\_) Note: You may get a deprecation warning. Please Ignore it.
###Code
# [7.0, 0.0, 0.0, 2.0, 0.0, 1.0, 0.0, 0.0, 0.0, 6.5]
# Data values [7.0, 2.0, 1.0, 6.5]: all non-zero values
# Indices of the array [0, 3, 5, 9]: row indices of all non-zero values
# Index pointer of the array [0, 4]: the single column spans entries 0 to 4 (all 4 non-zero values)
# Shape (10, 1): 10 rows and 1 column
sparse_vector2 = sps.csc_matrix((np.array([7.0, 2.0, 1.0, 6.5]), np.array([0, 3, 5, 9]), np.array([0, 4])), shape = (10, 1))
print (sparse_vector2)
print (type(sparse_vector2))
print (sparse_vector2.toarray())
###Output
_____no_output_____
###Markdown
You can also try the **sps.csr_matrix** function. Its syntax is similar to csc_matrix; only the definition of the shape is different.
###Code
# [7.0, 0.0, 0.0, 2.0, 0.0, 1.0, 0.0, 0.0, 0.0, 6.5]
# Data values [7.0, 2.0, 1.0, 6.5]: all non-zero values
# Indices of the array [0, 3, 5, 9]: column indices of all non-zero values
# Index pointer of the array [0, 4]: the single row spans entries 0 to 4 (all 4 non-zero values)
# Shape (1, 10): 1 row and 10 columns
sparse_vector3 = sps.csr_matrix((np.array([7.0, 2.0, 1.0, 6.5]), np.array([0, 3, 5, 9]), np.array([0, 4])), shape = (1, 10))
print (sparse_vector3)
print (type(sparse_vector3))
print (sparse_vector3.toarray())
###Output
_____no_output_____
###Markdown
1.2 Labeled PointsSo the next data type will be Labeled points. A labeled point is a local vector, either dense or sparse, associated with a label/response. In MLlib, labeled points are used in supervised learning algorithms. We use a double to store a label, so we can use labeled points in both regression and classification. For binary classification, a label should be either 0 (negative) or 1 (positive). For multiclass classification, labels should be class indices starting from zero: 0, 1, 2, ....Start by importing the following libraries: SparseVector from pyspark.mllib.linalg LabeledPoint from pyspark.mllib.regressionRemember that this data type is mainly used for **classification algorithms in supervised learning**.
###Code
from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.regression import LabeledPoint
###Output
_____no_output_____
###Markdown
Remember that with a labeled point, we can create binary or multiclass classification. In this lab, we will deal with binary classification for ease. The LabeledPoint function takes in 2 inputs: 1st: Label of the Point. In this case (for binary classification), we will be using 1.0 for positive and 0.0 for negative 2nd: Vector of features for the point (We will input a Dense or Sparse Vector using any of the methods defined in the Dense and Sparse Vectors section of this lab). Using the LabeledPoint class, create a dense feature vector with a positive label called pos_class with the values: 5.0, 2.0, 1.0, 9.0
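The section above also mentions multiclass labels; as a small aside before the binary exercises, class indices simply start from zero:

```python
from pyspark.mllib.regression import LabeledPoint

# Multiclass illustration: label 2.0 (the third class) with a dense feature vector
multi_class_point = LabeledPoint(2.0, [0.5, 1.5, 3.0])
print(multi_class_point.label)      # 2.0
print(multi_class_point.features)   # [0.5,1.5,3.0]
```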
###Code
# 1.0 means a positive label
# [5.0, 2.0, 1.0, 9.0] is the dense vector of features for the point
pos_class = LabeledPoint(1.0, [5.0, 2.0, 1.0, 9.0])
print(pos_class)
type(pos_class)
###Output
_____no_output_____
###Markdown
Next we will create a sparse feature vector with a negative label called neg_class with the values: 1.0, 0.0, 0.0, 4.0, 0.0, 2.0
###Code
neg_class = LabeledPoint(0.0, SparseVector(6, [0, 3, 5], [1.0, 4.0, 2.0]))
print(neg_class)
type(neg_class)
###Output
_____no_output_____
###Markdown
--- 2. Matrix Data TypesIn this next section, we will be dealing creating the following matrices: Local Matrix Row Matrix Indexed Row Matrix Coordinate Matrix Block Matrix Throughout this section, we will be modelling the following matricies: For a Dense Matrix: $$ \begin{pmatrix} 1.00 & 6.00 & 3.00 & 0.00 \\ 3.00 & 2.00 & 5.00 & 1.00 \\ 9.00 & 4.00 & 0.00 & 3.00 \end{pmatrix}$$For a Sparse Matrix: $$ \begin{pmatrix} 1.00 & 0.00 & 3.00 & 0.00 \\ 3.00 & 0.00 & 0.00 & 1.00 \\ 0.00 & 4.00 & 0.00 & 0.00 \end{pmatrix}$$ 2.1 Local MatrixA local matrix has integer-typed row and column indices and double-typed values, stored on a single machine. MLlib supports dense matrices, whose entry values are stored in a single double array in column-major order, and sparse matrices, whose non-zero entry values are stored in the Compressed Sparse Column (CSC) format in column-major order. Import the following Library: pyspark.mllib.linalg as laMat
###Code
import pyspark.mllib.linalg as laMat
###Output
_____no_output_____
###Markdown
Create a dense local matrix called dense_LM The inputs into the laMat.Matrices.dense function are: 1st: Number of Rows 2nd: Number of Columns 3rd: Values in an array format (Read as Column-Major)
###Code
#3 Rows
#4 Columns
# [1.0, 3.0, 9.0, 6.0, 2.0, 4.0, 3.0, 5.0, 0.0, 0.0, 1.0, 3.0] are the values, read in column-major order
dense_LM = laMat.Matrices.dense(3,4, [1.0, 3.0, 9.0, 6.0, 2.0, 4.0, 3.0, 5.0, 0.0, 0.0, 1.0, 3.0])
print(dense_LM)
type(dense_LM)
###Output
_____no_output_____
###Markdown
Next we will do the same thing with a sparse matrix, calling the output sparse_LMThe inputs into the laMat.Matrices.sparse function are: 1st: Number of Rows 2nd: Number of Columns 3rd: Column Pointers (in a list) 4th: Row Indices (in a list) 5th: Values of the Matrix (in a list) Note: Remember that this is column-major so all arrays should be read as columns first (top down, left to right)
###Code
# For a sparse Matrix
# ([[1., 0., 3., 0.],
# [3., 0., 0., 1.],
# [0., 4., 0., 0.]])
#1st: Number of Rows = 3
#2nd: Number of Columns= 4
#3rd: Column Pointers (in a list) = [0, 2, 3, 4, 5]
#4th: Row Indices (in a list) = [0, 1, 2, 0, 1]
#5th: Values of the Matrix (in a list) = [1.0, 3.0, 4.0, 3.0, 1.0]
sparse_LM = laMat.Matrices.sparse(3, 4, [0, 2, 3, 4, 5], [0, 1, 2, 0, 1], [1.0, 3.0, 4.0, 3.0, 1.0])
print(sparse_LM)
type(sparse_LM)
print(sparse_LM.toDense())
###Output
_____no_output_____
###Markdown
Make sure the output of sparse_LM matches the original matrix.Please refer the sample on the webpage for the understanding: https://stackoverflow.com/questions/44825193/how-to-create-a-sparse-cscmatrix-using-spark 2.2 Row MatrixA RowMatrix is a row-oriented distributed matrix without meaningful row indices, backed by an RDD of its rows, where each row is a local vector. Since each row is represented by a local vector, the number of columns is limited by the integer range but it should be much smaller in practice.Import the following library: RowMatrix from pyspark.mllib.linalg.distributed
###Code
from pyspark.mllib.linalg.distributed import RowMatrix
from pyspark import SparkContext
from pyspark.sql import SQLContext
sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)
###Output
_____no_output_____
###Markdown
Now, let's create an RDD of vectors called rowVecs, using the SparkContext's parallelize function on the Dense Matrix.The input into sc.parallelize is: A list (The list we will be creating will be a list of the row values (each row is a list)) Note: An RDD is a fault-tolerant collection of elements that can be operated on in parallel.
###Code
rowVecs = sc.parallelize([[1.0, 6.0, 3.0, 0.0],
[3.0, 2.0, 5.0, 1.0],
[9.0, 4.0, 0.0, 3.0]])
###Output
_____no_output_____
###Markdown
Next, create a variable called rowMat by using the RowMatrix function and passing in the RDD.
###Code
rowMat = RowMatrix(rowVecs)
###Output
_____no_output_____
###Markdown
Now we will retrieve the row numbers (save it as m) and column numbers (save it as n) from the RowMatrix. To get the number of rows, use numRows() on rowMat To get the number of columns, use numCols() on rowMat
###Code
m = rowMat.numRows()
n = rowMat.numCols()
###Output
_____no_output_____
###Markdown
Print out m and n. The results should be: Number of Rows: 3 Number of Columns: 4
###Code
print(m)
print(n)
###Output
_____no_output_____
###Markdown
2.3 Indexed Row MatrixAn IndexedRowMatrix is similar to a RowMatrix but with meaningful row indices. It is backed by an RDD of indexed rows, so that each row is represented by its index (long-typed) and a local vector.Import the following Library: IndexedRow, IndexedRowMatrix from pyspark.mllib.linalg.distributed
###Code
from pyspark.mllib.linalg.distributed import IndexedRow, IndexedRowMatrix
###Output
_____no_output_____
###Markdown
Now, create a RDD called indRows by using the SparkContext's parallelize function on the Dense Matrix. There are two different inputs you can use to create the RDD: Method 1: A list containing multiple IndexedRow inputs Input into IndexedRow: 1. Index for the given row (row number) 2. row in the matrix for the given index ex. sc.parallelize([IndexedRow(0,[1, 2, 3]), ...]) Method 2: A list containing multiple tuples Values in the tuple: 1. Index for the given row (row number) (type:long) 2. List containing the values in the row for the given index (type:vector) ex. sc.parallelize([(0, [1, 2, 3]), ...])
###Code
# Method 1: Using IndexedRow class
indRows = sc.parallelize([IndexedRow(0, [1.0, 6.0, 3.0, 0.0]),
IndexedRow(1, [3.0, 2.0, 5.0, 1.0]),
IndexedRow(2, [9.0, 4.0, 0.0, 3.0])])
# Method 2: Using (long, vector) tuples
indRows = sc.parallelize([(0, [1.0, 6.0, 3.0, 0.0]),
(1, [3.0, 2.0, 5.0, 1.0]),
(2, [9.0, 4.0, 0.0, 3.0])])
###Output
_____no_output_____
###Markdown
Now, create the IndexedRowMatrix called indRowMat by using the IndexedRowMatrix function and passing in the indRows RDD
###Code
indRowMat = IndexedRowMatrix(indRows)
###Output
_____no_output_____
###Markdown
Now we will retrieve the row numbers (save it as m2) and column numbers (save it as n2) from the IndexedRowMatrix. To get the number of rows, use numRows() on indRowMat To get the number of columns, use numCols() on indRowMat
###Code
m2 = indRowMat.numRows()
n2 = indRowMat.numCols()
###Output
_____no_output_____
###Markdown
Print out m2 and n2. The results should be: Number of Rows: 3 Number of Columns: 4
###Code
print(m2)
print(n2)
###Output
_____no_output_____
###Markdown
2.4 Coordinate MatrixNow it's time to create a different type of matrix, whose use is appropriate when both dimensions of the matrix are very large and the data in the matrix is sparse. Note: In this case, we will be using the small, sparse matrix above, just to get the idea of how to initialize a CoordinateMatrixA CoordinateMatrix is a distributed matrix backed by an RDD of its entries. Each entry is a tuple of (i: Long, j: Long, value: Double), where i is the row index, j is the column index, and value is the entry value. A CoordinateMatrix should be used only when both dimensions of the matrix are huge and the matrix is very sparse.Import the following libraries: CoordinateMatrix, MatrixEntry from pyspark.mllib.linalg.distributed
###Code
from pyspark.mllib.linalg.distributed import CoordinateMatrix, MatrixEntry
###Output
_____no_output_____
###Markdown
Now, create a RDD called coordRows by using the SparkContext's parallelize function on the Sparse Matrix. There are two different inputs you can use to create the RDD: Method 1: A list containing multiple MatrixEntry inputs Input into MatrixEntry: 1. Row index of the matrix (row number) (type: long) 2. Column index of the matrix (column number) (type: long) 3. Value at the (Row Index, Column Index) entry of the matrix (type: float) ex. sc.parallelize([MatrixEntry(0, 0, 1,), ...]) Method 2: A list containing multiple tuples Values in the tuple: 1. Row index of the matrix (row number) (type: long) 2. Column index of the matrix (column number) (type: long) 3. Value at the (Row Index, Column Index) entry of the matrix (type: float) ex. sc.parallelize([(0, 0, 1), ...])
###Code
# Method 1. Using MatrixEntry class
coordRows = sc.parallelize([MatrixEntry(0, 0, 1.0),
MatrixEntry(0, 2, 3.0),
MatrixEntry(1, 0, 3.0),
MatrixEntry(1, 3, 1.0),
MatrixEntry(2, 2, 4.0)])
# Method 2. Using (long, long, float) tuples
coordRows = sc.parallelize([(0, 0, 1.0),
(0, 2, 3.0),
(1, 1, 3.0),
(1, 3, 1.0),
(2, 2, 4.0)])
###Output
_____no_output_____
###Markdown
Now, create the CoordinateMatrix called coordMat by using the CoordinateMatrix function and passing in the coordRows RDD
###Code
coordMat = CoordinateMatrix(coordRows)
###Output
_____no_output_____
###Markdown
Now we will retrieve the row numbers (save it as m3) and column numbers (save it as n3) from the CoordinateMatrix. To get the number of rows, use numRows() on coordMat To get the number of columns, use numCols() on coordMat
###Code
m3 = coordMat.numRows()
n3 = coordMat.numCols()
###Output
_____no_output_____
###Markdown
Print out m3 and n3. The results should be: Number of Rows: 3 Number of Columns: 4
###Code
print(m3)
print(n3)
###Output
_____no_output_____
###Markdown
Now, we can get the entries of coordMat by accessing its entries attribute (note that it is accessed without parentheses). Store this in a variable called coordEnt.
###Code
coordEnt = coordMat.entries
###Output
_____no_output_____
###Markdown
Check out the type of coordEnt.
###Code
type(coordEnt)
###Output
_____no_output_____
###Markdown
It should be a PipelinedRDD type, which has many methods that are associated with it. One of them is first(), which will get the first element in the RDD. Run coordEnt.first()
###Code
coordEnt.first()
###Output
_____no_output_____
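###Markdown
Optional follow-up (a sketch, not part of the original exercise): each element of coordEnt is a MatrixEntry, which carries the fields i, j and value, so you can map over the RDD to pull the raw coordinates back out.
###Code
# Extract (row, column, value) triples from the first few MatrixEntry objects
coordEnt.map(lambda entry: (entry.i, entry.j, entry.value)).take(3)
###Output
_____no_output_____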
###Markdown
2.4 Block Matrix. A BlockMatrix is essentially a matrix built out of blocks, where each block is a sub-matrix (a partition) of the full matrix being created. Import the following libraries: Matrices from pyspark.mllib.linalg and BlockMatrix from pyspark.mllib.linalg.distributed. A BlockMatrix is a distributed matrix backed by an RDD of MatrixBlocks, where a MatrixBlock is a tuple of ((Int, Int), Matrix): the (Int, Int) is the index of the block, and Matrix is the sub-matrix at the given index with size rowsPerBlock x colsPerBlock. BlockMatrix supports methods such as add and multiply with another BlockMatrix. BlockMatrix also has a helper function validate which can be used to check whether the BlockMatrix is set up properly.
###Code
from pyspark.mllib.linalg import Matrices
from pyspark.mllib.linalg.distributed import BlockMatrix
###Output
_____no_output_____
###Markdown
Now create an RDD of sub-matrix blocks. This will be done using SparkContext's parallelize function. The input into sc.parallelize requires a list of tuples. The tuples are the sub-matrices, which consist of two inputs: 1st: A tuple containing the row index and column index (row, column), denoting where the sub-matrix will start 2nd: The sub-matrix, which will come from Matrices.dense. The sub-matrix requires 3 inputs: 1st: Number of rows 2nd: Number of columns 3rd: A list containing the elements of the sub-matrix. These values are read into the sub-matrix in column-major fashion (ex. ((51, 2), Matrices.dense(2, 2, [61.0, 43.0, 1.0, 74.0])) would be one element of the list, i.e. one tuple). The matrix we will be modelling is the Dense Matrix from above. Create the following sub-matrices: Row: 0, Column: 0, Values: 1.0, 3.0, 6.0, 2.0, with 2 Rows and 2 Columns Row: 2, Column: 0, Values: 9.0, 4.0, with 1 Row and 2 Columns Row: 0, Column: 2, Values: 3.0, 5.0, 0.0, 0.0, 1.0, 3.0, with 3 Rows and 2 Columns
###Code
blocks = sc.parallelize([((0, 0), Matrices.dense(2, 2, [1.0, 3.0, 6.0, 2.0])),
((2, 0), Matrices.dense(1, 2, [9.0, 4.0])),
((0, 2), Matrices.dense(3, 2, [3.0, 5.0, 0.0, 0.0, 1.0, 3.0]))])
###Output
_____no_output_____
###Markdown
Now that we have the RDD, it's time to create the BlockMatrix called blockMat using the BlockMatrix class. The BlockMatrix class requires 3 inputs: 1st: The RDD of sub-matrices 2nd: The rows per block. Keep this value at 1 3rd: The columns per block. Keep this value at 1
###Code
blockMat = BlockMatrix(blocks, 1, 1)
###Output
_____no_output_____
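###Markdown
Optional inspection (a sketch, not required by the exercise): a BlockMatrix also exposes how many block rows and block columns it holds, plus its backing RDD of ((row, col), sub-matrix) pairs; the validate helper mentioned above checks that the declared rows/columns per block are consistent with the stored blocks.
###Code
# Inspect the block structure of blockMat
print(blockMat.numRowBlocks, blockMat.numColBlocks)  # number of block rows / block columns
print(blockMat.blocks.first())                       # the first ((row, col), sub-matrix) pair
###Output
_____no_output_____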
###Markdown
Now we will retrieve the row numbers (save it as m4) and column numbers (save it as n4) from the BlockMatrix. To get the number of rows, use numRows() on blockMat To get the number of columns, use numCols() on blockMat
###Code
m4 = blockMat.numRows()
n4 = blockMat.numCols()
###Output
_____no_output_____
###Markdown
Print out m4 and n4. The results should be: Number of Rows: 3 Number of Columns: 4
###Code
print(m4)
print(n4)
###Output
_____no_output_____
###Markdown
Now, we need to check if our matrix is correct. We can do this by first converting blockMat into a LocalMatrix, by using the .toLocalMatrix() function on our matrix. Store the result into a variable called locBMat
###Code
locBMat = blockMat.toLocalMatrix()
###Output
_____no_output_____
###Markdown
Now print out locBMat and its type. The result should model the original Dense Matrix and the type should be a DenseMatrix.
###Code
print(locBMat)
print(type(locBMat))
###Output
_____no_output_____
###Markdown
**Conclusion** Distributed matrix: A distributed matrix has long-typed row and column indices and double-typed values, stored distributively in one or more RDDs. It is very important to choose the right format to store large and distributed matrices. Converting a distributed matrix to a different format may require a global shuffle, which is quite expensive. Four types of distributed matrices have been implemented so far. The basic type is called **RowMatrix**. A RowMatrix is a row-oriented distributed matrix without meaningful row indices, e.g., a collection of feature vectors. It is backed by an RDD of its rows, where each row is a local vector. We assume that the number of columns is not huge for a RowMatrix so that a single local vector can be reasonably communicated to the driver and can also be stored / operated on using a single node. An **IndexedRowMatrix** is similar to a RowMatrix but with row indices, which can be used for identifying rows and executing joins. A **CoordinateMatrix** is a distributed matrix stored in coordinate list (COO) format, backed by an RDD of its entries. A **BlockMatrix** is a distributed matrix backed by an RDD of MatrixBlock, which is a tuple of ((Int, Int), Matrix). **Note**: The underlying RDDs of a distributed matrix must be deterministic, because we cache the matrix size. In general the use of non-deterministic RDDs can lead to errors. --- 3. Matrix Conversions. In this bonus section, we will talk about the relationships between the different [types of matrices](https://www.emathzone.com/tutorials/algebra/types-of-matrices.html) discussed above. You can convert between these matrices with the following functions. .toRowMatrix() converts the matrix to a RowMatrix .toIndexedRowMatrix() converts the matrix to an IndexedRowMatrix .toCoordinateMatrix() converts the matrix to a CoordinateMatrix .toBlockMatrix() converts the matrix to a BlockMatrix 3.1 Indexed Row Matrix Conversions. The following conversions are supported for an IndexedRowMatrix: IndexedRowMatrix -> RowMatrix IndexedRowMatrix -> CoordinateMatrix IndexedRowMatrix -> BlockMatrix
###Code
# Convert to a RowMatrix
rMat = indRowMat.toRowMatrix()
print(type(rMat))
# Convert to a CoordinateMatrix
cMat = indRowMat.toCoordinateMatrix()
print(type(cMat))
# Convert to a BlockMatrix
bMat = indRowMat.toBlockMatrix()
print(type(bMat))
###Output
_____no_output_____
###Markdown
3.2 Coordinate Matrix Conversions. The following conversions are supported for a CoordinateMatrix: CoordinateMatrix -> RowMatrix CoordinateMatrix -> IndexedRowMatrix CoordinateMatrix -> BlockMatrix
###Code
# Convert to a RowMatrix
rMat2 = coordMat.toRowMatrix()
print(type(rMat2))
# Convert to an IndexedRowMatrix
iRMat = coordMat.toIndexedRowMatrix()
print(type(iRMat))
# Convert to a BlockMatrix
bMat2 = coordMat.toBlockMatrix()
print(type(bMat2))
###Output
_____no_output_____
###Markdown
3.3 Block Matrix Conversions. The following conversions are supported for a BlockMatrix: BlockMatrix -> LocalMatrix (can display the matrix) BlockMatrix -> IndexedRowMatrix BlockMatrix -> CoordinateMatrix
###Code
# Convert to a LocalMatrix
lMat = blockMat.toLocalMatrix()
print(type(lMat))
# Convert to an IndexedRowMatrix
iRMat2 = blockMat.toIndexedRowMatrix()
print(type(iRMat2))
# Convert to a CoordinateMatrix
cMat2 = blockMat.toCoordinateMatrix()
print(type(cMat2))
###Output
_____no_output_____ |
train_efficientdet.ipynb | ###Markdown
setup dataset
###Code
# import stuff
import os
import numpy as np
import time
import pandas as pd
import torch
import torch.utils.data as data
from itertools import product as product
import torch
import torch.nn as nn
import torch.nn.init as init
import torch.nn.functional as F
from torch.autograd import Function
from utils.to_fp16 import network_to_half
# import dataset
from utils.dataset import VOCDataset, DatasetTransform, make_datapath_list, Anno_xml2list, od_collate_fn
## meta settings
# select from efficientnet backbone or resnet backbone
backbone = "efficientnet-b2"
scale = 1
# scale==1: resolution 300
# scale==2: resolution 512 (matches input_size below)
useBiFPN = True
HALF = False # enable FP16
DATASET = "VOC"
retina = False # for trying retinanets
###Output
_____no_output_____
###Markdown
make data.Dataset for training
###Code
if not DATASET == "COCO":
# load files
# set your VOCdevkit path here.
vocpath = "../VOCdevkit/VOC2007"
train_img_list, train_anno_list, val_img_list, val_anno_list = make_datapath_list(vocpath)
vocpath = "../VOCdevkit/VOC2012"
train_img_list2, train_anno_list2, _, _ = make_datapath_list(vocpath)
train_img_list.extend(train_img_list2)
train_anno_list.extend(train_anno_list2)
print("trainlist: ", len(train_img_list))
print("vallist: ", len(val_img_list))
# make Dataset
voc_classes = ['aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat', 'chair',
'cow', 'diningtable', 'dog', 'horse',
'motorbike', 'person', 'pottedplant',
'sheep', 'sofa', 'train', 'tvmonitor']
color_mean = (104, 117, 123) # mean color values (BGR)
if scale == 1:
input_size = 300 # use a 300x300 input image size
else:
input_size = 512
## apply DatasetTransform
transform = DatasetTransform(input_size, color_mean)
transform_anno = Anno_xml2list(voc_classes)
# Dataset objects to feed into the DataLoader.
# Indexing them preprocesses an image and its ground truth (GT) and returns both.
train_dataset = VOCDataset(train_img_list, train_anno_list, phase = "train", transform=transform, transform_anno = transform_anno)
val_dataset = VOCDataset(val_img_list, val_anno_list, phase="val", transform=DatasetTransform(
input_size, color_mean), transform_anno=Anno_xml2list(voc_classes))
else:
from dataset.coco import COCODetection
import torch.utils.data as data
from utils.dataset import VOCDataset, COCODatasetTransform, make_datapath_list, Anno_xml2list, od_collate_fn
color_mean = (104, 117, 123) # mean color values (BGR)
if scale == 1:
input_size = 300 # use a 300x300 input image size
else:
input_size = 512
## apply the dataset transform
transform = COCODatasetTransform(input_size, color_mean)
train_dataset = COCODetection("../data/coco/", image_set="train2014", phase="train", transform=transform)
val_dataset = COCODetection("../data/coco/", image_set="val2014", phase="val", transform=transform)
batch_size = int(32/scale)
train_dataloader = data.DataLoader(
train_dataset, batch_size=batch_size, shuffle=True, collate_fn=od_collate_fn, num_workers=8)
val_dataloader = data.DataLoader(
val_dataset, batch_size=batch_size, shuffle=False, collate_fn=od_collate_fn, num_workers=8)
# bundle the dataloaders into a dict
dataloaders_dict = {"train": train_dataloader, "val": val_dataloader}
# sanity check
batch_iterator = iter(dataloaders_dict["val"]) # turn the dataloader into an iterator
images, targets = next(batch_iterator) # fetch the first batch
print(images.size()) # torch.Size([4, 3, 300, 300])
print(len(targets))
print(targets[1].shape) # list with one entry per sample in the minibatch; each entry is [n, 5] where n is the number of objects
###Output
torch.Size([32, 3, 300, 300])
32
torch.Size([1, 5])
###Markdown
define EfficientDet model
###Code
from utils.efficientdet import EfficientDet
if not DATASET == "COCO":
num_class = 21
else:
num_class = 81
if scale==1:
ssd_cfg = {
'num_classes': num_class, # total number of classes, including the background class
'input_size': 300*scale, # input image size
'bbox_aspect_num': [4, 6, 6, 6, 4, 4], # number of DBox aspect-ratio variants per source
'feature_maps': [37, 18, 9, 5, 3, 1], # feature-map size of each source
'steps': [8, 16, 32, 64, 100, 300], # determines the DBox sizes
'min_sizes': [30, 60, 111, 162, 213, 264], # determines the DBox sizes
'max_sizes': [60, 111, 162, 213, 264, 315], # determines the DBox sizes
'aspect_ratios': [[2], [2, 3], [2, 3], [2, 3], [2], [2]],
}
elif scale==2:
ssd_cfg = {
'num_classes': num_class, # total number of classes, including the background class
'input_size': 512, # input image size
'bbox_aspect_num': [4, 6, 6, 6, 4, 4], # number of DBox aspect-ratio variants per source
'feature_maps': [64, 32, 16, 8, 4, 2], # feature-map size of each source
'steps': [8, 16, 32, 64, 100, 300], # determines the DBox sizes
'min_sizes': [s*scale for s in [30, 60, 111, 162, 213, 264]], # determines the DBox sizes (scaled element-wise; list*scale would repeat the list instead of scaling it)
'max_sizes': [s*scale for s in [60, 111, 162, 213, 264, 315]], # determines the DBox sizes (scaled element-wise)
'aspect_ratios': [[2], [2, 3], [2, 3], [2, 3], [2], [2]],
}
# test if net works
net = EfficientDet(phase="train", cfg=ssd_cfg, verbose=True, backbone=backbone, useBiFPN=useBiFPN)
#out = net(torch.rand([1,3,input_size,input_size]))
#print(out[0].size())
net = EfficientDet(phase="train", cfg=ssd_cfg, verbose=False, backbone=backbone, useBiFPN=useBiFPN)
# call retinanet for test purpose
if retina:
from utils.retinanet import RetinaFPN
ssd_cfg = {
'num_classes': num_class, # total number of classes, including the background class
'input_size': 300*scale, # input image size
'bbox_aspect_num': [4, 6, 6, 6, 4, 4], # number of DBox aspect-ratio variants per source
'feature_maps': [38, 19, 10, 5, 3, 1], # feature-map size of each source
'steps': [8, 16, 32, 64, 100, 300], # determines the DBox sizes
'min_sizes': [30, 60, 111, 162, 213, 264], # determines the DBox sizes
'max_sizes': [60, 111, 162, 213, 264, 315], # determines the DBox sizes
'aspect_ratios': [[2], [2, 3], [2, 3], [2, 3], [2], [2]],
}
net = RetinaFPN("train", ssd_cfg)
# check whether a GPU is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("using:", device)
print("set weights!")
# FP16..
if HALF:
net = network_to_half(net)
# Freeze backbone layers
#for param in net.layer0.parameters():
# param.requires_grad = False
#for param in net.layer2.parameters():
# param.requires_grad = False
#for param in net.layer3.parameters():
# param.requires_grad = False
#for param in net.layer4.parameters():
# param.requires_grad = False
#for param in net.layer5.parameters():
# param.requires_grad = False
from utils.ssd_model import MultiBoxLoss
# define loss
criterion = MultiBoxLoss(jaccard_thresh=0.5,neg_pos=3, device=device, half=HALF)
# optim
import torch.optim as optim
optimizer = optim.SGD(filter(lambda p: p.requires_grad, net.parameters()), lr=1e-3, momentum=0.9, weight_decay=5e-4)
# while the original efficientdet uses cosine annealining lr scheduling, we utilize epoch-based lr decreasing for simplicity.
def get_current_lr(epoch):
if DATASET == "COCO":
reduce = [120, 180]
lr = 1e-3
else:
reduce = [120,180]
lr = 1e-3
for i,lr_decay_epoch in enumerate(reduce):
if epoch >= lr_decay_epoch:
lr *= 0.1
return lr
def adjust_learning_rate(optimizer, epoch):
lr = get_current_lr(epoch)
print("lr is:", lr)
for param_group in optimizer.param_groups:
param_group['lr'] = lr
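# NOTE (sketch only, not wired into the training loop below): the cosine-annealing schedule used by
# the original EfficientDet could be approximated with PyTorch's built-in scheduler, e.g.
#   scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=num_epochs)
# and then calling scheduler.step() once per epoch instead of adjust_learning_rate(optimizer, epoch).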
# train script. nothing special..
def train_model(net, dataloaders_dict, criterion, optimizer, num_epochs):
# check whether a GPU is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print("used device:", device)
# move the network to the GPU
net.to(device)
# cudnn autotuning speeds things up when the network structure is mostly static
torch.backends.cudnn.benchmark = True
# set the iteration counter
iteration = 1
epoch_train_loss = 0.0 # running loss over the epoch
epoch_val_loss = 0.0 # running loss over the epoch
logs = []
# loop over epochs
for epoch in range(num_epochs+1):
adjust_learning_rate(optimizer, epoch)
# record the start time
t_epoch_start = time.time()
t_iter_start = time.time()
print('-------------')
print('Epoch {}/{}'.format(epoch+1, num_epochs))
print('-------------')
# training and validation loop for each epoch
for phase in ['train', 'val']:
if phase == 'train':
net.train() # put the model in training mode
print('(train)')
else:
if((epoch+1) % 10 == 0):
net.eval() # put the model in evaluation mode
print('-------------')
print('(val)')
else:
# run validation only once every 10 epochs
continue
# loop over minibatches from the dataloader
for images, targets in dataloaders_dict[phase]:
# send the data to the GPU if one is available
images = images.to(device)
targets = [ann.to(device)
           for ann in targets] # move every tensor in the list to the GPU
if HALF:
images = images.half()
targets = [ann.half() for ann in targets]
# reset the optimizer gradients
optimizer.zero_grad()
# forward pass
with torch.set_grad_enabled(phase == 'train'):
# forward pass
outputs = net(images)
#print(outputs[0].type())
# compute the losses
loss_l, loss_c = criterion(outputs, targets)
loss = loss_l + loss_c
# backpropagate during training
if phase == 'train':
loss.backward() # compute the gradients
# clip the gradients at 2.0, since very large gradients make training unstable
nn.utils.clip_grad_value_(
net.parameters(), clip_value=2.0)
optimizer.step() # update the parameters
if (iteration % 10 == 0): # print the loss once every 10 iterations
t_iter_finish = time.time()
duration = t_iter_finish - t_iter_start
print('Iter {} || Loss: {:.4f} || 10iter: {:.4f} sec.'.format(
iteration, loss.item(), duration))
t_iter_start = time.time()
# filter inf..
if not loss.item() == float("inf"):
epoch_train_loss += loss.item()
iteration += 1
# validation phase
else:
if not loss.item() == float("inf"):
epoch_val_loss += loss.item()
# loss and accuracy for each phase of this epoch
t_epoch_finish = time.time()
print('-------------')
print('epoch {} || Epoch_TRAIN_Loss:{:.4f} ||Epoch_VAL_Loss:{:.4f}'.format(
epoch+1, epoch_train_loss, epoch_val_loss))
print('timer: {:.4f} sec.'.format(t_epoch_finish - t_epoch_start))
t_epoch_start = time.time()
# save the log
log_epoch = {'epoch': epoch+1,
'train_loss': epoch_train_loss, 'val_loss': epoch_val_loss}
logs.append(log_epoch)
df = pd.DataFrame(logs)
df.to_csv("log/"+DATASET+"_"+backbone+"_" + str(300*scale) +"log_output.csv")
epoch_train_loss = 0.0 # reset the running epoch loss
epoch_val_loss = 0.0 # reset the running epoch loss
# save the network weights
if ((epoch+1) % 5 == 0):
if useBiFPN:
word="BiFPN"
else:
word="FPN"
torch.save(net.state_dict(), 'weights/'+DATASET+"_"+backbone+"_" + str(300*scale) + "_" + word + "_" +
str(epoch+1) + '.pth')
if DATASET == "COCO":
num_epochs = 200
else:
num_epochs = 200
train_model(net, dataloaders_dict, criterion, optimizer, num_epochs=num_epochs)
###Output
used device: cuda:0
lr is: 0.001
-------------
Epoch 1/200
-------------
(train)
|
Udemy - Python for Data Science & Machine Learning Boot Camp/10. Missing Values - Pandas(Basic).ipynb | ###Markdown
Missing Values
###Code
import numpy as np
import pandas as pd
d = {'A':[1,2,np.nan],'B':[4,np.nan,6],'C':[7,8,9]}
d
df = pd.DataFrame(d)
df
###Output
_____no_output_____
###Markdown
Dropping Null values
###Code
df.dropna()
df.dropna(axis=1)
df.dropna(thresh=2)
df.dropna(thresh=3)
###Output
_____no_output_____
###Markdown
Filling missing values
###Code
df.fillna(value='New Value')
###Output
_____no_output_____
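###Markdown
As a side note (a sketch beyond the original lesson), missing values can also be filled from neighbouring rows: ffill propagates the last valid value forward, while bfill fills backwards from the next valid value.
###Code
df.ffill()   # forward-fill: copy the previous valid value down
df.bfill()   # back-fill: copy the next valid value up
###Output
_____no_output_____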
###Markdown
Filling with mean values
###Code
df['A'].fillna(value = df['A'].mean())
###Output
_____no_output_____ |
.ipynb_checkpoints/206 Retrieval v5-checkpoint.ipynb | ###Markdown
Define User Interface
###Code
import pandas as pd
import numpy as np
import functools
import collections
import operator
pd.options.mode.chained_assignment = None
df = pd.read_excel('song_data2.xlsx')
# Replace NaN values with empty strings
df['entity'] = df['entity'].fillna('')
df['Google_Entities'] = df['Google_Entities'].fillna('')
# Function to convert the genre value in each row to a list
def list_converter (value):
return value.split(', ')
df['genre'] = df['genre'].apply(list_converter)
# Here we define the user interface function the includes all the filters created above
def UserInterface1(df):
print("In the following steps you will filter the song dataset on the following variables: ")
[print(i) for i in df.columns.values]
print("")
####################
### ARTIST ###
####################
unique_artists = list(df['artist'].unique())
print ('\nArtist options: ', unique_artists)
artist_input = input('\nPlease select your artists (or type * to select all): ')
if artist_input == '*':
artist_input = unique_artists
else:
artist_input = artist_input.split(', ')
# Filtering for artist
df_filtered = df.loc[df['artist'].isin(artist_input)]
####################
### ALBUM ###
####################
unique_albums = list(df_filtered['album'].unique())
print ('\nAlbum options: ', unique_albums)
album_input = input('\nPlease select your albums (or type * to select all): ')
if album_input == '*':
album_input = unique_albums
else:
album_input = album_input.split(', ')
df_filtered = df_filtered.loc[df_filtered['album'].isin(album_input)]
####################
### COUNTRY ###
####################
unique_countries = list(df_filtered['home_country'].unique())
print ('\nCountry options: ', unique_countries)
country_input = input('\nPlease select your countries (or type * to select all): ')
if country_input == '*':
country_input = unique_countries
else:
country_input = country_input.split(', ')
df_filtered = df_filtered.loc[df_filtered['home_country'].isin(country_input)]
###################
### GENRE ###
###################
unflattened_genre_list = list(df_filtered['genre'])
unique_genres = set([item for sublist in unflattened_genre_list for item in sublist])
print ('\nGenre options: ', unique_genres)
genre_input = input('\nPlease select your genres (or type * to select all): ')
if genre_input == '*':
genre_input = list(unique_genres)
else:
genre_input = genre_input.split(', ')
df_filtered['genre_match'] = False
for count, each_row in enumerate(df_filtered['genre']):
for item in each_row:
if item in genre_input:
df_filtered['genre_match'].iloc[count] = True
df_filtered = df_filtered.loc[df_filtered['genre_match'] == True]
####################
### RELEASE YEAR ###
####################
print ('\nRelease date options: \n 1960-1969 \n 1970-1979 \n 1980-1989 \n 1990-1999 \n 2000-2009 \n 2010-Present')
release_input = input('\nPlease select your release year (or type * to select all): ')
release_year_list = []
if release_input == '*':
for elem in range (1960, 2018):
release_year_list.append(elem)
else:
release_input = release_input.split(', ')
for elem in release_input:
if '1960' in elem:
for elem in range (1960, 1970):
release_year_list.append(elem)
elif '1970' in elem:
for elem in range (1970, 1980):
release_year_list.append(elem)
elif '1980' in elem:
for elem in range (1980, 1990):
release_year_list.append(elem)
elif '1990' in elem:
for elem in range (1990, 2000):
release_year_list.append(elem)
elif '2000' in elem:
for elem in range (2000, 2010):
release_year_list.append(elem)
elif '2010' in elem:
for elem in range (2010, 2018):
release_year_list.append(elem)
release_input = release_year_list
df_filtered = df_filtered.loc[df_filtered['released'].isin(release_input)]
##################
### SINGER AGE ###
##################
print ('\nSinger age options: \n 10-19 \n 20-29 \n 30-39 \n 40-49 \n 50-59')
age_input = input('\nPlease select an age range for singers (or type * to select all): ')
age_year_list = []
if age_input == '*':
for elem in range (10, 56):
age_year_list.append(elem)
else:
age_input = age_input.split(', ')
for elem in age_input:
if '10' in elem:
for elem in range (10, 20):
age_year_list.append(elem)
elif '20' in elem:
for elem in range (20, 30):
age_year_list.append(elem)
elif '30' in elem:
for elem in range (30, 40):
age_year_list.append(elem)
elif '40' in elem:
for elem in range (40, 50):
age_year_list.append(elem)
elif '50' in elem:
for elem in range (50, 57):
age_year_list.append(elem)
age_input = age_year_list
df_filtered = df_filtered.loc[df_filtered['singer_age_at_release'].isin(age_input)]
# ____ _ _
# / __ \ | | | |
# | | | |_ _| |_ _ __ _ _| |_
# | | | | | | | __| '_ \| | | | __|
# | |__| | |_| | |_| |_) | |_| | |_
# \____/ \__,_|\__| .__/ \__,_|\__|
# | |
# |_|
print("\n===================================================================\n===================================================================\n")
## Print average sentiment of songs reslting from filtering ##
####
print ('The average sentiment of the songs resulting from your search is: ' + str(round(df_filtered['overall_sentiment'].mean(), 2)))
print("\n===================================================================\n")
## Return the Top 5 topics and their sentiment for the filtered songs
####
print('\nThe top 5 most prevalent topics of the songs resulting from your search are:\n')
df_sentiment_copy = df_filtered.copy()
df_sentiment_copy = df_sentiment_copy.sort_values(by=['overall_sentiment'], axis=0, ascending=False)[:5]
df_sentiment_copy = df_sentiment_copy.reset_index()
TopTopics = []
for i in df_filtered['Google_Entities']:  # use the filtered songs, since the message above refers to the search results
if i != '':
for L in eval(i):
for key,value in L.items():
TopTopics.append([key,value])
li = sorted(TopTopics, key=operator.itemgetter(1), reverse = True) # Descending order (most prevalent first)
for i in li[:5]:
print(i[0])
#counter = 0
#while counter < len(df_sentiment_copy['artist']):
# print ('Song: "' + (df_sentiment_copy['song'][counter]) + '", Artist: ' + str((df_sentiment_copy['artist'][counter])) + ', '
# + 'Sentiment: ' + str((df_sentiment_copy['overall_sentiment'][counter])))
# print("")
# counter +=1
#
return df_filtered
###Output
_____no_output_____
###Markdown
Run User Interface 1
###Code
# Test User Interface
df_filtered = UserInterface1(df)
df_filtered.head()
###Output
_____no_output_____
###Markdown
Define User Interface 2
###Code
def makeInvertedIndex(df,voctrain):
InvertInd = {}
for word in voctrain:
InvertInd[word] = []
for j in voctrain:
for ind, val in enumerate(df['topic']):
if df['topic'][ind] != '':
evalue = eval(val)
#print(evalue)
for dual in evalue:
for key,value in dual.items():
keySplit = key.split(' ')
for kS in keySplit:
if j == kS:
InvertInd[j].append(ind)
for j in voctrain:
for ind, val in enumerate(df['Google_Entities']):
if df['Google_Entities'][ind] != '':
evalue = eval(val)
for dual in evalue:
for key,value in dual.items():
keySplit = key.split(' ')
for kS in keySplit:
if j == kS:
InvertInd[j].append(ind)
for j in voctrain:
for ind, val in enumerate(df['entity']):
if df['entity'][ind] != '':
evalue = eval(val)
for dual in evalue:
for key,value in dual.items():
keySplit = key.split(' ')
for kS in keySplit:
if j == kS:
InvertInd[j].append(ind)
return InvertInd
def orSearch(invertedIndex, query, df):
results = []
print("Query is:",query,"\n")
for ask in query:
print("Check for:",ask)
if ask in invertedIndex:
word = invertedIndex[ask]
print("Found...\n")
for res in word:
#print(res)
if res not in results:
results.append(res)
else:
print("Not found...\n")
for i in results:
print(df['song'].iloc[i]," by ", df['artist'].iloc[i])
#print("Results: ",results)
def andSearch(invertedIndex, query, df):
results = {}
resultCheck = []
for ask in query:
results[ask] = []
print("Query is:",query,"\n")
Match = False
for ask in query:
print("Check for:",ask,"...")
if ask in invertedIndex:
word = invertedIndex[ask]
print("Found\n")
for res in word:
results[ask].append(res)
if res not in resultCheck:
resultCheck.append(res)
else:
print("Not found\n")
print("Results: ",results)
Matches = []
for check in resultCheck:
Check = True
for key,value in results.items():
if check in value:
Check = Check and True
else:
Check = Check and False
if Check == True:
Matches.append(check)
Matches = set (Matches)
print('\n===================================\n')
print("Common matches:",Matches,"\n")
for num in Matches:
print("Found in title: ",df.iloc[num][2], " by ", df.iloc[num][1])
def VocBuilder(df):
# Create vocabulary for 'entity'
entity_vocabulary = []
for i in range(len(df['entity'])):
if df['entity'][i] != '':
test = eval(df['entity'][i])
for j in test:
for key,value in j.items():
temp = key.split(' ')
for item in temp:
entity_vocabulary.append(item)
entity_vocabulary = list(set(entity_vocabulary))
#print(list(set(entity_vocabulary)))
# Create vocabulary for 'topic'
topic_vocabulary = []
for i in range(len(df['topic'])):
test = eval(df['topic'][i])
for j in test:
for key,value in j.items():
temp = key.split(' ')
for item in temp:
topic_vocabulary.append(item)
topic_vocabulary = list(set(topic_vocabulary))
#print(list(set(topic_vocabulary)))
# Create vocabulary for 'Google_Entities'
google_entity_vocabulary = []
for i in range(len(df['Google_Entities'])):  # fixed: this block previously re-read the 'topic' column
if df['Google_Entities'][i] != '':
test = eval(df['Google_Entities'][i])
for j in test:
for key,value in j.items():
temp = key.split(' ')
for item in temp:
google_entity_vocabulary.append(item)
google_entity_vocabulary = list(set(google_entity_vocabulary))
# Create vocabularies of individual columns 'entity','topic' and 'Google_Entity'
vocabulary_train = entity_vocabulary + topic_vocabulary + google_entity_vocabulary
vocabulary_train = list(set(vocabulary_train))
return vocabulary_train
def UserInterface2(df):
print("In the following steps you will be able to search songs, artists, and albums by Topics and Entities: ")
print("")
print ("Lookup topics such as Mars, Fire, Weapons, Love, Death")
topics_input = input('\nPlease enter topics (or type * to select some default topics): ')
if topics_input == '*':
topics_input = ['Love', 'War']
else:
topics_input = topics_input.split(', ')
print ("Would you like to filter the song dataset on the following variables")
filter_input = input('\nPlease select 1 to filter or 0 to search entire dataset: ')
if filter_input == '1':
df_filtered = UserInterface1(df)
choice_df = df_filtered.reset_index()
#print(choice_df.head())
else:
choice_df = df
vocabulary_train = VocBuilder(choice_df)
# take the choice of df, either the filtered one or the entire df and pass to inverted index.
invertind = makeInvertedIndex(choice_df,vocabulary_train)
andor_input = input('\nPlease select OR to perform OR search, AND to perform AND search!: ')
# search inverted index using whichever method AND/OR
if andor_input == 'OR':
orSearch(invertind,topics_input, choice_df)
else:
andSearch(invertind,topics_input, choice_df)
###Output
_____no_output_____
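###Markdown
Before running the full interface, here is an optional, self-contained sanity check of the helpers above. The toy DataFrame below is hypothetical (it is not part of the song dataset) and only exists to show how VocBuilder, makeInvertedIndex and orSearch fit together: the vocabulary is built from the stringified entity/topic dictionaries, the inverted index maps each word to row positions, and orSearch prints any song whose index entry matches a query word.
###Code
# Hypothetical toy data, just to exercise the helper functions
toy = pd.DataFrame({
    'song': ['Song A', 'Song B'],
    'artist': ['Artist X', 'Artist Y'],
    'entity': ["[{'love': 1}]", ''],
    'topic': ["[{'love story': 2}]", "[{'war': 1}]"],
    'Google_Entities': ['', "[{'war machine': 3}]"],
})
toy_index = makeInvertedIndex(toy, VocBuilder(toy))
orSearch(toy_index, ['love'], toy)
###Output
_____no_output_____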
###Markdown
Run User Interface 2
###Code
UserInterface2(df)
###Output
In the following steps you will be able to search songs, artists, and albums by Topics and Entities:
Lookup topics such as Mars, Fire, Weapons, Love, Death
Please enter topics (or type * to select some default topics): God
Would you like to filter the song dataset on the following variables
Please select 1 to filter or 0 to search entire dataset: 1
In the following steps you will filter the song dataset on the following variables:
ID
artist
song
album
released
genre
home_country
singer_age_at_release
entity
topic
Google_Entities
overall_sentiment
Artist options: ['Aerosmith', 'Bruno Mars', 'Coldplay', 'Doors', 'Elton John', 'Elvis Presley', 'Grateful Dead', 'Jimi Hendrix', 'John Legend', 'Lady Gaga', 'Linkin Park', 'Maroon 5', 'Metallica', 'Michael Jackson', 'Nickelback', 'Outkast', 'Santana', 'Stevie Wonder', 'Weezer', 'Wu-Tang Clan']
Please select your artists (or type * to select all): Aerosmith
Album options: ['Pump', 'Permanent Vacation', 'Nine Lives', 'Aerosmith', "Honkin' on Bobo", 'Just Push Play', 'Draw the Line', 'Rock in a Hard Place']
Please select your albums (or type * to select all): *
Country options: ['United States']
Please select your countries (or type * to select all): *
Genre options:
Please select your genres (or type * to select all): *
Release date options:
1960-1969
1970-1979
1980-1989
1990-1999
2000-2009
2010-Present
Please select your release year (or type * to select all): *
Singer age options:
10-19
20-29
30-39
40-49
50-59
Please select an age range for singers (or type * to select all): *
===================================================================
===================================================================
The average sentiment of the songs resulting from your search is: -0.29
===================================================================
The top 5 most prevalent topics of the songs resulting from your search are:
Song: "Taste Of India", Artist: Aerosmith, Sentiment: 0.6
Song: "Dream On", Artist: Aerosmith, Sentiment: 0.0
Song: "Lightning Strikes", Artist: Aerosmith, Sentiment: 0.0
Song: "Jaded", Artist: Aerosmith, Sentiment: -0.3
Song: "Magic Touch", Artist: Aerosmith, Sentiment: -0.6
['India', 'Hell', 'songs', 'English-language', 'Dagger', 'a', 'Ship', 'Got', 'abuse', 'Television', 'Music', 'Sailing', 'an', 'saidI', 'Eye', 'industry', 'Alcohol', 'Age', "Janie's", 'Dream', '10', 'Spit', 'Appetite', 'Glam', 'Creative', 'American', 'Shotgun', 'originating', 'Drama', 'Weapons', 'babe', '(landform)', 'Sweetness', 'objects', 'Fiction', 'Lightning', 'Truth', 'Recorded', 'Déjà', "Hell's", 'Cutting', 'Manhattan', 'Novels', 'recording', 'child', 'Blade', 'Ages', 'Works', 'blues', 'produced', 'Velvet', '(botany)', 'Love', 'Alcoholic', 'rock', 'episodes', 'Pop', 'Crops', 'didn', 'Thunder', 'Aerosmith', 'Vindaloo', 'and', 'Sin', 'Concubinage', 'works', 'Funk', 'singles', 'Starch', 'Vikings', 'Personal', 'Guillotine', 'from', 'ballads', 'music', 'wine', 'Hand', 'weapons', 'Europe', 'Parchment', 'Janie', 'Rock', 'Geffen', 'experience', 'Melee', 'media', 'Kings', 'Literature', 'Yin', 'recordings', 'Culture', 'Song', 'Songs', 'Dawn', 'Rumble', 'metal', 'behaviour', 'series', 'Labellum', 'Improvised', 'identifier', 'Albums', 'albums', 'television', 'yang', 'Knife', 'Grape', 'Ready', 'Hard', 'Medieval', 'Middle', 'firearm', 'Greed', 'callI', 'about', 'Gun', 'Religious', 'Entertainment', 'Knives', 'Long', 'Fermented', 'people', 'Wine', 'tools', 'Early', 'for', 'Honey', 'Laughter', 'Location', 'Kitchen,', 'Viking', 'Artificial', 'Perfume', 'of', 'drinks', 'Records', 'Incense', 'I', 'vu', 'written', 'to', 'Lordy', 'eye', 't', 'Denmark', 'hard', 'Ships', 'ship', 'Sound']
Please select OR to perform OR search, AND to perform AND search!: OR
Query is: ['God']
Check for: God
Not found...
###Markdown
Define Main User Interface
###Code
def UserInterface3(df):
print ("WELCOME!\n")
key_choice = input('\nTo Perform Filtered Search press 1, To perform Topic Search Press 2: ')
if key_choice == '1':
df_filtered = UserInterface1(df)
else:
UserInterface2(df)
# else:
# print("Did not catch that!\n")
# key_choice = input('\nTo Perform Filtered Search press 1, To perform Topic Search Press 2: ')
###Output
_____no_output_____
###Markdown
Run Main User Interface
###Code
UserInterface3(df)
#df
###Output
_____no_output_____ |
Lab/09--ca.ipynb | ###Markdown
CX 4230, Spring 2016: [09] Cellular AutomataThe following exercises accompany the class slides on Wolfram's 1-D nearest neighbor cellular automata model. You can download a copy of those slides here: [PDF (0.7 MiB)](https://t-square.gatech.edu/access/content/group/gtc-59b8-dc03-5a67-a5f4-88b8e4d5b69a/cx4230-sp16--09-cellular-automata.pdf) Setup
###Code
import numpy as np
import scipy as sp
import scipy.sparse
import matplotlib.pyplot as plt # Core plotting support
%matplotlib inline
def show_grid (grid):
plt.matshow (grid)
###Output
_____no_output_____
###Markdown
Wolfram's 1-D near-neighbor CA Let's evolve a 1-D region of length `N` over `T` time steps.Start by creating a 2-D Numpy array (or _matrix_) `X[0:N, 0:T]`, which will eventually hold the sequence of all state changes over time. Our convention will be to store either a `0` or a `1` value in every cell.
###Code
N = 10
T = 20
X = np.zeros ((N, T), dtype=int) # X[i, t] == cell i at time t
###Output
_____no_output_____
###Markdown
As the initial state of the 1-D system, let's put a single `1` bit at or close to the center.
###Code
# Initial conditions
i_center = int (X.shape[0]/2)
X[i_center, 0] = 1
show_grid (X.transpose ())
###Output
_____no_output_____
###Markdown
Sparse matrices Suppose you are given a 1-D neighborhood as a 3-bit pattern, `011`$_2$. This value is the binary representation of the decimal value, $(2^2 \cdot 0) + (2^1 \cdot 1) + (2^0 \cdot 1) = 3$. More generally, given a 3-bit string, $b_2b_1b_0$, let its _neighborhood index_ be the decimal integer $k$ such that$$ k \equiv (4 \cdot b_2) + (2 \cdot b_1) + (1 \cdot b_0).$$Given one of Wolfram's rules, you could then build a lookup table to convert every possible neighborhood index into the corresponding `0` or `1` state. To implement this idea, try this notional trick from linear algebra. Let $\vec{x}$ denote the 1-D grid of $n$ cells, represented as a _vector_ of $n$ bits,$$\begin{eqnarray} \vec{x} & = & \left(\begin{array}{c} x_0 \\ x_1 \\ \vdots \\ x_{n-1} \end{array}\right).\end{eqnarray}$$ From this vector, you can enumerate all neighborhood indices using a _(sparse) matrix-vector product_. Let $k_i$ denote the neighborhood index of cell (bit) $x_i$. Then,$$\begin{eqnarray} k_0 & = & 2 x_0 + x_1 \\ k_1 & = & 4 x_0 + 2 x_1 + x_2 \\ k_2 & = & 4 x_1 + 2 x_2 + x_3 \\ & \vdots & \\ k_i & = & 4 x_{i-1} + 2 x_i + x_{i+1} \\ & \vdots & \\ k_{n-2} & = & 4 x_{n-3} + 2 x_{n-2} + x_{n-1} \\ k_{n-1} & = & 4 x_{n-2} + 2 x_{n-1}\end{eqnarray}$$This system of equations can be written in matrix form as $\vec{k} \equiv A \cdot \vec{x}$, where$$\vec{k} \equiv \left(\begin{array}{c} k_0 \\ k_1 \\ k_2 \\ \vdots \\ k_i \\ \vdots \\ k_{n-2} \\ k_{n-1} \end{array}\right)= \underbrace{\left(\begin{array}{cccccccc} 2 & 1 & & & & & & \\ 4 & 2 & 1 & & & & & \\ & 4 & 2 & 1 & & & & \\ & & & \ddots & & & & \\ & & & 4 & 2 & 1 & & \\ & & & & & \ddots & & \\ & & & & & 4 & 2 & 1 \\ & & & & & & 4 & 2 \end{array}\right)}_{\equiv A}\cdot \underbrace{\left(\begin{array}{c} x_0 \\ x_1 \\ x_2 \\ \vdots \\ x_i \\ \vdots \\ x_{n-2} \\ x_{n-1} \end{array}\right)}_{= \vec{x}}.$$The matrix $A$ is _sparse_ because it is mostly zero.> Sparsity does not have a precise formal definition. However, one typically expects that the number of non-zeros in $n \times n$ sparse matrix $A$ is $\mathrm{nnz}(A) = \mathcal{O}(n)$.In fact, $A$ has a more specific structure: it is _tridiagonal_, meaning that all of its non-zero entries are contained in the diagonal of $A$ plus the first sub- and super-diagonals. Numpy and Scipy, Numpy's "parent" library, have an especially handy function, `scipy.sparse.diags()`, which can easily construct sparse matrices consisting only of diagonals: http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.diags.htmlscipy.sparse.diagsHere is a one-line statement to construct a sparse matrix $A$ as the variable `A`, which references a sparse matrix object.
###Code
A = sp.sparse.diags ([4, 2, 1], [-1, 0, 1], shape=(N, N), dtype=int)
print ("=== A (sparse) ===", A, sep="\n")
print ("=== A (dense) ===", A.toarray (), sep="\n")
###Output
=== A (dense) ===
[[2 1 0 0 0 0 0 0 0 0]
[4 2 1 0 0 0 0 0 0 0]
[0 4 2 1 0 0 0 0 0 0]
[0 0 4 2 1 0 0 0 0 0]
[0 0 0 4 2 1 0 0 0 0]
[0 0 0 0 4 2 1 0 0 0]
[0 0 0 0 0 4 2 1 0 0]
[0 0 0 0 0 0 4 2 1 0]
[0 0 0 0 0 0 0 4 2 1]
[0 0 0 0 0 0 0 0 4 2]]
###Markdown
As a sanity check, let's multiply $A$ by the initial 1-D grid. Denote this initial grid mathematically as $\vec{x}(t=0)$, which is just the first column of the array `X`, i.e., `X[:, 0]`. **Exercise.** Compute $\vec{k}(0) \leftarrow A \cdot \vec{x}(0)$ by hand.
###Code
print (X[:, 0])
###Output
[0 0 0 0 0 1 0 0 0 0]
###Markdown
> Answer: `[0, 0, 0, 0, 1, 2, 4, 0, 0, 0]` Let's check your answer using the Python code below to compute $\vec{k}(0)$. It uses the `A` object's `dot()` member function.
###Code
K0 = A.dot (X[:, 0])
print (X[:, 0])
print (K0)
###Output
[0 0 0 0 0 1 0 0 0 0]
[0 0 0 0 1 2 4 0 0 0]
###Markdown
**Exercise.** Recall that the rule number is an integer between 0 and 255, inclusive. Its bit pattern determines which neighborhood patterns map to which states. Complete the following function: given a rule number, it should build and return a lookup table, `bits[:]`, that maps a neighborhood index `k` in `[0, 8)` to the output bit `bits[k]`.
###Code
def gen_rule_bits (rule_num):
"""
Computes a bit lookup table for one of Wolfram's 1-D
cellular automata (CA), given a rule number.
That is, let `k` be an integer in [0, 8) corresponding
to a 3-bit neighborhood pattern. Then this function
returns a 1-D lookup table `bits[:]` such that
`bits[k]` is either a 0 or 1, according to the output
of a CA for rule number `rule_num`.
"""
assert (0 <= rule_num < 256)
# Initialize output array
bits = np.zeros (8, dtype=int)
# @YOUSE: Compute `bits[:]`
for i in range(8):
bits[i] = rule_num%2
rule_num = int(rule_num/2)
print (bits)
return bits
# Test code:
def rev (x):
return list (reversed (x))
assert all (gen_rule_bits (90) == rev ([0, 1, 0, 1, 1, 0, 1, 0]))
assert all (gen_rule_bits (150) == rev ([1, 0, 0, 1, 0, 1, 1, 0]))
###Output
[0 1 0 1 1 0 1 0]
[0 1 1 0 1 0 0 1]
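###Markdown
Equivalent shortcut (a sketch, same convention as `gen_rule_bits` above): bit `k` of the rule number is exactly the output state for neighborhood index `k`, so the lookup table can also be produced with bit shifts.
###Code
rule_bits_alt = np.array ([(90 >> k) & 1 for k in range (8)])
assert all (rule_bits_alt == gen_rule_bits (90))
###Output
_____no_output_____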
###Markdown
**Exercise.** Write some code to compute the state at time 1, `X[:, 1]`.
###Code
RULE = 90
RULE_BITS = gen_rule_bits (RULE)
# @YOUSE: Complete this code:
K0 = A.dot (X[:, 0])
X[:, 1] = RULE_BITS[K0]
# Code to test your implementation:
print ("Rule:", RULE, "==>", rev (RULE_BITS))
print ("x(0):", X[:, 0])
print ("k(0):", K0)
print ("==>\nx(1):", X[:, 1])
###Output
[0 1 0 1 1 0 1 0]
Rule: 90 ==> [0, 1, 0, 1, 1, 0, 1, 0]
x(0): [0 0 0 0 0 1 0 0 0 0]
k(0): [0 0 0 0 1 2 4 0 0 0]
==>
x(1): [0 0 0 0 1 0 1 0 0 0]
###Markdown
**Exercise.** Complete the following function, which runs a 1-D `n`-cell CA for `t_max` time steps, given an initial state `x0` and a rule number `rule_num`.
###Code
def run_ca (rule_num, n, t_max, x0=None):
bits = gen_rule_bits (rule_num)
cells = np.zeros ((n, t_max), dtype=int)
# Initial condition (default: centered impulse)
if not x0:
cells[int (n/2), 0] = 1
else:
cells[:, 0] = x0
cells2idx = sp.sparse.diags ([4, 2, 1], [-1, 0, 1], \
shape=(n, n), dtype=int)
for t in range (1, t_max):
# @YOUSE: Complete this loop body
Kt = cells2idx.dot (cells[:, t-1])
cells[:, t] = bits[Kt]
return cells
###Output
_____no_output_____
###Markdown
Check your results against these patterns: https://commons.wikimedia.org/wiki/Elementary_cellular_automata
###Code
# Some test code:
def irun_ca (rule_num=90, n=100, t_max=100):
show_grid (run_ca (rule_num, n, t_max).transpose ())
irun_ca (90) # Try 90, 169, and 37
from ipywidgets import interact
interact (irun_ca
, rule_num=(0, 255, 1)  # keep within the 0..255 range asserted by gen_rule_bits
, n=(10, 100, 10)
, t_max=(10, 100, 10))
###Output
[1 0 0 1 0 1 0 1]
|
playground/mirco_nani/words2words_matcher_tests/domain_specific/notebook/results.ipynb | ###Markdown
Results Plot Code
###Code
import matplotlib.pyplot as plt
import pandas as pd
import os
def get_results_folder():
results_folder = os.path.join('..', 'assets', 'results')
fallback_results_folder = 'results'
if os.path.exists(results_folder):
return results_folder
else:
return fallback_results_folder
def load_results():
df_results = []
results_folder = get_results_folder()
for filename in os.listdir(results_folder):
model_name = os.path.splitext(filename)[0]
results_path = os.path.join(results_folder, filename)
df_result = pd.read_csv(results_path)
df_result["model"] = model_name
df_results.append(df_result)
return pd.concat(df_results)
def plot_results_metrics(df_results, metrics, y_scale=None, figsize=(30,5)):
fig, axes = plt.subplots(nrows=1, ncols=len(metrics),figsize=figsize)
for i,metric_name in enumerate(metrics):
ax = axes[i]
if y_scale != None:
ax.set_ylim(y_scale[0], y_scale[1])
df_results.pivot(index="uncertainty_threshold", columns="model", values=metric_name).plot(title=metric_name, ax=axes[i])  # use the function argument, not the global df
def plot_results(df_results):
plot_results_metrics(df_results, ["precision","recall","accuracy","F1_score"], y_scale=(0.0, 1.0))
plot_results_metrics(df_results, ["TP", "FP", "FN"])
###Output
_____no_output_____
###Markdown
Plot
###Code
df=load_results()
plot_results(df)
###Output
_____no_output_____
###Markdown
Dummy model testTest to visualize results from multiple models
###Code
df_dummy = df[["TP","FP","FN","precision","recall","accuracy","F1_score"]]/2
df_dummy['uncertainty_threshold'] = df['uncertainty_threshold']
df_dummy["model"] = "dummy"
for c in ["TP","FP","FN"]:
df_dummy[c] = df_dummy[c].astype(int)
df_dummy
df = pd.concat([df,df_dummy])
plot_results(df)  # plot the concatenated results so both models appear
###Output
_____no_output_____ |
tep-meets-lstm.ipynb | ###Markdown
Step 0 - Setup and helper functions
###Code
# Setup
# NOTE: Uncomment the lines bellow in order to run the notebook in Colab (RECOMMENDED)
#from google.colab import drive
#drive.mount('/content/drive/', force_remount=True) # follow the instructions to get the key
#%cd drive
#%cd MyDrive
#!git clone https://github.com/gmxavier/TEP-meets-LSTM.git # clone the repo
#%cd TEP-meets-LSTM
#!ls # check the repo folder contents
#%tensorflow_version 1.x # set the Colab tf version
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn import metrics
import os
from functools import reduce
# Normalised input features
INPUT_SIGNAL_TYPES = ["XMV(1)",
"XMV(2)",
"XMV(3)",
"XMV(4)",
"XMV(5)",
"XMV(6)",
"XMV(7)",
"XMV(8)",
"XMV(9)",
"XMV(10)",
"XMV(11)"]
# Output classes
LABELS = ["NORMAL ",
"FAULT 1",
"FAULT 2",
"FAULT 3",
"FAULT 4",
"FAULT 5",
"FAULT 7"]
# Input folders paths
DATA_PATH = "tep/input/"
DATASET_PATH = DATA_PATH + "tep_dataset/"
TRAIN = "train/"
TEST = "test/"
X_train_signals_paths = [
DATASET_PATH + TRAIN + signal + ".txt" for signal in INPUT_SIGNAL_TYPES
]
X_test_signals_paths = [
DATASET_PATH + TEST + signal + ".txt" for signal in INPUT_SIGNAL_TYPES
]
y_train_path = DATASET_PATH + TRAIN + "idv.txt"
y_test_path = DATASET_PATH + TEST + "idv.txt"
# Helper functions
def load_X(X_signals_paths):
# Function returns the input features tensor.
X_signals = []
for signal_type_path in X_signals_paths:
file = open(signal_type_path, 'r')
# Read dataset from disk, dealing with text files' syntax
X_signals.append(
[np.array(serie, dtype=np.float32) for serie in [
row.split(' ') for row in file
]]
)
file.close()
return np.transpose(np.array(X_signals), (1, 2, 0))
def load_y(y_path):
# Function returns the fault labels vector.
file = open(y_path, 'r')
# Read dataset from disk, dealing with text file's syntax
y_ = np.array(
[elem for elem in [
row.split(' ') for row in file
]],
dtype=np.int32
)
file.close()
return y_
def LSTM_RNN(_X, _weights, _biases):
# Function returns a tensorflow LSTM (RNN) artificial neural network from given parameters.
# Moreover, two LSTM cells are stacked which adds deepness to the neural network.
# Note, some code of this notebook is inspired from an slightly different
# RNN architecture used on another dataset, some of the credits goes to
# "aymericdamien" under the MIT license.
# (NOTE: This step could be greatly optimised by shaping the dataset once
# input shape: (batch_size, n_steps, n_input)
_X = tf.transpose(_X, [1, 0, 2]) # permute n_steps and batch_size
# Reshape to prepare input to hidden activation
_X = tf.reshape(_X, [-1, n_input])
# new shape: (n_steps*batch_size, n_input)
# Linear activation
_X = tf.nn.relu(tf.matmul(_X, _weights['hidden']) + _biases['hidden'])
# Split data because rnn cell needs a list of inputs for the RNN inner loop
_X = tf.split(_X, n_steps, 0)
# new shape: n_steps * (batch_size, n_hidden)
# Define two stacked LSTM cells (two recurrent layers deep) with tensorflow
lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)
lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)
# Get LSTM cell output
outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)
# Get last time step's output feature for a "many to one" style classifier,
# as in the image describing RNNs at the top of this page
lstm_last_output = outputs[-1]
# Linear activation
return tf.matmul(lstm_last_output, _weights['out']) + _biases['out']
def extract_batch_size(_train, step, batch_size):
# Function to fetch a "batch_size" amount of data from "(X|y)_train" data.
shape = list(_train.shape)
shape[0] = batch_size
batch_s = np.empty(shape)
for i in range(batch_size):
# Loop index
index = ((step-1)*batch_size + i) % len(_train)
batch_s[i] = _train[index]
return batch_s
def one_hot(y_):
# Function to encode output labels from number indexes
# e.g.: [[5], [0], [3]] --> [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]
y_ = y_.reshape(len(y_))
n_values = int(np.max(y_)) + 1
return np.eye(n_values)[np.array(y_, dtype=np.int32)] # Returns FLOATS
def model_size():
# Function to print the number of trainable variables
size = lambda v: reduce(lambda x, y: x*y, v.get_shape().as_list())
n = sum(size(v) for v in tf.trainable_variables())
print("Overall model size: %d" % (n,))
def parameter_size():
# Function to print the size of trainable variables
print("Parameters sizes:")
for tf_var in tf.trainable_variables():
print(tf_var.shape)
###Output
_____no_output_____
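###Markdown
Optional sanity check (a sketch): `one_hot` infers the number of classes from the largest label it sees, so the example from its docstring can be reproduced directly.
###Code
print(one_hot(np.array([[5], [0], [3]])))
###Output
_____no_output_____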
###Markdown
Step 1 - Load the data
###Code
# Input features tensors
X_train = load_X(X_train_signals_paths)
X_test = load_X(X_test_signals_paths)
# Fault labels
y_train = load_y(y_train_path)
y_test = load_y(y_test_path)
# Some debugging info
print("Some useful info to get an insight on dataset's shape and normalisation:")
print("(X shape, y shape, every X's mean, every X's standard deviation)")
print(X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))
print("The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.")
print("")
unique_elements, counts_elements = np.unique(y_train, return_counts=True)
print('Faults distribution in the training set:')
print(np.asarray((unique_elements, counts_elements)))
unique_elements, counts_elements = np.unique(y_test, return_counts=True)
print('Faults distribution in the test set:')
print(np.asarray((unique_elements, counts_elements)))
# Input tensor data
training_data_count = len(X_train) # 5733 training sequences (with 50% overlap between each sequence)
test_data_count = len(X_test) # 2458 testing sequences
n_steps = len(X_train[0]) # 128 timesteps per sequence
n_input = len(X_train[0][0]) # 11 input features per timestep
###Output
_____no_output_____
###Markdown
Step 2 - Build the LSTM network
###Code
# LSTM internal structure
n_hidden = 32 # Hidden layer num of features
n_classes = 8 # Total classes (due to one-hot encoding it should be 8, not 7,
# as fault 6 is omitted)
# Training hyperparameters
learning_rate = 0.0025
lambda_loss_amount = 0.0015
training_iters = training_data_count * 300 # Loop 300 times on the dataset
batch_size = 1500
display_iter = 30000 # To show test set accuracy during training
# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
# Graph weights
weights = {
'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights
'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))
}
biases = {
'hidden': tf.Variable(tf.random_normal([n_hidden])),
'out': tf.Variable(tf.random_normal([n_classes]))
}
pred = LSTM_RNN(x, weights, biases)
# Loss, optimizer and evaluation
l2 = lambda_loss_amount * sum(
tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()
) # L2 loss prevents this overkill neural network to overfit the data
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2 # Softmax loss
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer
correct_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
###Output
_____no_output_____
###Markdown
Step 3 - Train the LSTM network
###Code
# To keep track of training's performance
test_losses = []
test_accuracies = []
train_losses = []
train_accuracies = []
X_ = np.append(X_train, X_test, axis=0)
y_ = np.append(y_train, y_test, axis=0)
nfold = 5
dr = []
ks = np.array_split(np.arange(len(y_)), nfold)
for k in ks:
# Launch the graph
sess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))
init = tf.global_variables_initializer()
sess.run(init)
# Some useful info
print("Some useful info ...")
model_size()
parameter_size()
print("")
print("Starting training ...")
# Perform Training steps with "batch_size" amount of example data at each loop
step = 1
while step * batch_size <= training_iters:
batch_xs = extract_batch_size(np.delete(X_, k, axis=0), step, batch_size)
batch_ys = one_hot(extract_batch_size(np.delete(y_, k, axis=0), step, batch_size))
# Fit training using batch data
_, loss, acc = sess.run(
[optimizer, cost, accuracy],
feed_dict={
x: batch_xs,
y: batch_ys
}
)
train_losses.append(loss)
train_accuracies.append(acc)
# Evaluate network only at some steps for faster training:
if (step*batch_size % display_iter == 0) or (step == 1) or (step * batch_size > training_iters):
# To not spam console, show training accuracy/loss in this "if"
print("Iteration #" + str(step*batch_size) + "\n" + \
"TRAINING SET: " + \
"Batch Loss = {:.6f}".format(loss) + \
", Accuracy = {:.6f}".format(acc))
# Evaluation on the test set (no learning made here - just evaluation for diagnosis)
loss, acc = sess.run(
[cost, accuracy],
feed_dict={
x: X_[k],
y: one_hot(y_)[k]
}
)
test_losses.append(loss)
test_accuracies.append(acc)
print(" TEST SET: " + \
"Batch Loss = {:.6f}".format(loss) + \
", Accuracy = {:.6f}".format(acc))
step += 1
print("Optimization finished!")
# Accuracy for test data
one_hot_predictions, final_acc, final_loss = sess.run(
[pred, accuracy, cost],
feed_dict={
x: X_[k],
y: one_hot(y_)[k]
}
)
test_losses.append(final_loss)
test_accuracies.append(final_acc)
print("FINAL RESULT: " + \
"Batch Loss = {:.6f}".format(final_loss) + \
", Accuracy = {:.6f}".format(final_acc))
predictions = one_hot_predictions.argmax(1)
aux = metrics.confusion_matrix(y_[k], predictions, labels = np.unique(y_))
dr.append(100*aux.diagonal()/(np.sum(aux, axis = 1)+1e-12))
print("Cross-validation fold #" + str(len(dr)) + " of " + str(nfold))
sess.close()
###Output
_____no_output_____
###Markdown
Step 4 - Plot the training progress
###Code
# (Inline plots: )
%matplotlib inline
font = {
'family' : 'Bitstream Vera Sans',
'weight' : 'bold',
'size' : 18
}
matplotlib.rc('font', **font)
width = 12
height = 12
plt.figure(figsize=(width, height))
indep_train_axis = np.array(range(batch_size, (len(train_losses)+1)*batch_size, batch_size))
plt.plot(indep_train_axis, np.array(train_losses), "b--", label="Train losses")
plt.plot(indep_train_axis, np.array(train_accuracies), "g--", label="Train accuracies")
indep_test_axis = np.append(
np.array(range(batch_size, len(test_losses)*display_iter, display_iter)[:-1]),
[training_iters]
)
plt.plot(indep_test_axis, np.array(test_losses), "b-", label="Test losses")
plt.plot(indep_test_axis, np.array(test_accuracies), "g-", label="Test accuracies")
plt.title("Training session's progress over iterations and folds")
plt.legend(loc='upper right', shadow=True)
plt.ylabel('Training Progress (Loss or Accuracy values)')
plt.xlabel('Training iteration')
plt.show()
###Output
_____no_output_____
###Markdown
Step 5 - Print and plot the final results
###Code
# Print results
predictions = one_hot_predictions.argmax(1)
print("Testing accuracy: {:.2f}%".format(100*final_acc))
print("")
print("Precision: {:.2f}%".format(100*metrics.precision_score(y_[k], predictions, average="weighted")))
print("Recall: {:.2f}%".format(100*metrics.recall_score(y_[k], predictions, average="weighted")))
print("f1_score: {:.2f}%".format(100*metrics.f1_score(y_[k], predictions, average="weighted")))
print("")
print("Confusion matrix:")
confusion_matrix = metrics.confusion_matrix(y_[k], predictions)
print(confusion_matrix)
print("")
print("Confusion matrix (normalised to % of total test data):")
normalised_confusion_matrix = np.array(confusion_matrix, dtype=np.float32)/np.sum(confusion_matrix)*100
print(np.array_str(normalised_confusion_matrix, precision=2, suppress_small=True))
# Plot results
width = 12
height = 12
plt.figure(figsize=(width, height))
res = plt.imshow(np.array(confusion_matrix), cmap=plt.cm.summer, interpolation='nearest')
for i, row in enumerate(confusion_matrix):
for j, c in enumerate(row):
if c>0:
plt.text(j-.2, i+.1, c, fontsize=16)
plt.title('Confusion Matrix')
plt.colorbar()
_ = plt.xticks(range(n_classes), [l for l in LABELS], rotation=90)
_ = plt.yticks(range(n_classes), [l for l in LABELS])
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
###Output
_____no_output_____ |
Inferencial_Stats.ipynb | ###Markdown
Clustering
###Code
from sklearn.cluster import KMeans, SpectralClustering, AgglomerativeClustering
from sklearn.model_selection import train_test_split
X_train,X_test = train_test_split(df, test_size=0.2,random_state = 42)
clusters = ['IX','X','XI','XII']
y_train = X_train[clusters]
X_train = X_train.drop(clusters,axis=1)
y_test = X_test[clusters]
X_test = X_test.drop(clusters,axis=1)
#label encoding the one hot encoded variable
y_test = pd.DataFrame([pd.Series(y_test.loc[y_test.index[i]]).nonzero()[0][0] for i in range(len(y_test.index))])
y_train = pd.DataFrame([pd.Series(y_train.loc[y_train.index[i]]).nonzero()[0][0] for i in range(len(y_train.index))])
kmeans = KMeans(n_clusters=4,random_state=1).fit(X_train)
y_k = kmeans.predict(X_test)
kmeans.score(X_test,y_test)
spec = SpectralClustering(n_clusters=4, random_state=1).fit(X_train)
y_spec = spec.fit_predict(X_test)  # SpectralClustering has no predict(); fit_predict re-fits the model on X_test
agg = AgglomerativeClustering(n_clusters=4).fit(X_train)
y_agg = agg.fit_predict(X_test)  # AgglomerativeClustering also has no predict(); this re-fits on X_test
###Output
_____no_output_____
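###Markdown
A note on the scores above: `KMeans.score` ignores the labels entirely and the cluster IDs returned by the three algorithms are arbitrary, so these values cannot be read as accuracies. The cell below is a small sketch, not part of the original analysis, that compares each set of cluster assignments (`y_k`, `y_spec`, `y_agg` from above) against the grade labels in `y_test` using the adjusted Rand index, which is invariant to how the clusters are numbered.
###Code
from sklearn.metrics import adjusted_rand_score

# 1.0 = clusters match the grade labels perfectly, ~0.0 = no better than chance
true_labels = y_test.values.ravel()
for name, pred in [('KMeans', y_k), ('Spectral', y_spec), ('Agglomerative', y_agg)]:
    print(name, adjusted_rand_score(true_labels, pred))
###Output
_____no_output_____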
###Markdown
Visualizing the reduced data in 3 dimensions
###Code
grade = df[clusters]
df_new = df.drop(clusters, axis=1)
pca = PCA(n_components=3)
principal_comp = pca.fit_transform(df_new)
pcadf = pd.DataFrame(data = principal_comp, columns=['component_1', 'component_2','component_3'])
g = pd.DataFrame([pd.Series(grade.loc[grade.index[i]]).nonzero()[0][0] for i in range(len(grade.index))])
finaldf = pd.concat([pcadf, g],axis=1)
c = {1:finaldf[finaldf[0]==0],2:finaldf[finaldf[0]==1],3:finaldf[finaldf[0]==2],4:finaldf[finaldf[0]==3]}
colors = {1:'red',2:'blue',3:'green',4:'yellow'}
fig = plt.figure(figsize=(16,8))
ax = plt.axes(projection='3d')
for key in c:
ax.scatter(c[key]['component_1'],c[key]['component_2'],c[key]['component_3'], c=colors[key], linewidth=3)
ax.legend(['IX','X','XI','XII'])
plt.show()
###Output
_____no_output_____
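###Markdown
Before reading too much into the 3-D scatter plot, it helps to check how much of the original variance the three components actually retain; below is a quick sketch using the `pca` object fitted above.
###Code
# Variance captured by each of the three principal components
print(pca.explained_variance_ratio_)
print("Total variance retained: {:.2%}".format(pca.explained_variance_ratio_.sum()))
###Output
_____no_output_____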
###Markdown
Clustering on the reduced data set
###Code
x_new = finaldf.drop(g.columns, axis=1)  # drop the label column, keeping only the three components
x_train,x_test, ytr, yts = train_test_split(x_new,g, test_size=0.2,random_state=42)
knew = KMeans(n_clusters=4,random_state=1).fit(x_train)
y_h = knew.predict(x_test)
knew.score(x_test,yts)
###Output
_____no_output_____
###Markdown
Discretizing the marks and doing PCA on it
###Code
bins= [0,10,20,30,40,50,60,70,80,90,100]
labels = [1,2,3,4,5,6,7,8,9,10]
s =pd.cut(df.Performance, bins =bins,labels =labels)
df_bins = df.assign(Performance_bins = s.values)
df_bins = df_bins.drop(['Performance'],axis=1)
df_bins
per_bins = df_bins['Performance_bins']
df1 = df_bins.drop(['Performance_bins'],axis=1)
# 3 dimensional PCA on the binned feature matrix (df1, which excludes the Performance columns)
pca = PCA(n_components=3)
principal_comp = pca.fit_transform(df1)
pca_new = pd.DataFrame(data = principal_comp, columns=['component_1', 'component_2','component_3'])
final = pd.concat([pca_new,per_bins],axis=1)
classes = {i: final[final['Performance_bins'] == i] for i in range(1, 11)}
color_classes = {1:'red',2:'blue',3:'green',4:'yellow',5:'brown',6:'black',7:'purple',8:'orange',9:'pink',10:'magenta'}
%matplotlib notebook
fig = plt.figure(figsize=(16,8))
ax = plt.axes(projection='3d')
for key in classes:
ax.scatter(classes[key]['component_1'],classes[key]['component_2'],classes[key]['component_3'], c=color_classes[key], linewidth=3)
ax.legend(['1','2','3','4','5','6','7','8','9','10'])
plt.show()
###Output
_____no_output_____ |
OOP_CONRAD.ipynb | ###Markdown
**Conrad Ully R. Esconde** Class with Multiple Objects Application: Create a Python program that displays the names of 3 students (Student 1, Student 2, Student 3) and their grades. Create a class named "Person" with attributes std1, std2, std3, pre, mid, fin. Compute the average grade of each term using the Grade() method. Information about students' grades must be hidden from others.
###Code
import random
class Person:
def __init__ (self, student, pre, mid, fin):
self.student = student
self.pre = pre *0.30
self.mid = mid *0.30
self.fin = fin *0.40
def Grade (self):
print (self.student, "-> Prelim Grade of: ", self.pre)
print (self.student, "-> Midterm Grade of: ", self.mid)
print (self.student, "-> Final Grade of: ", self.fin)
std1 = Person ("Conrad Ully", random.randint(70,99), random.randint(70,99), random.randint(70,99))
std2 = Person ("Kurt Bazolny", random.randint(70,99), random.randint(70,99), random.randint(70,99))
std3 = Person ("Edward Kenway", random.randint(70,99), random.randint(70,99), random.randint(70,99))
std1.Grade()
###Output
Conrad Ully -> Prelim Grade of: 24.3
Conrad Ully -> Midterm Grade of: 22.8
Conrad Ully -> Final Grade of: 38.800000000000004
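###Markdown
The exercise also asks for an average grade per term and for the grade information to be hidden from outside the class. The cell below is an illustrative sketch of one way to do that (the class name, sample scores, and the simple average are assumptions, not part of the original submission): double-underscore attributes are name-mangled, so the raw grades are not directly accessible from outside.
###Code
class PersonHidden:
    def __init__(self, student, pre, mid, fin):
        self.student = student
        # double underscores hide the raw term grades from outside the class
        self.__pre = pre
        self.__mid = mid
        self.__fin = fin

    def Grade(self):
        # only the class itself can read the hidden grades
        average = (self.__pre + self.__mid + self.__fin) / 3
        print(self.student, "-> Average Grade of:", round(average, 2))

std4 = PersonHidden("Sample Student", 85, 90, 88)
std4.Grade()
# std4.__pre  # would raise AttributeError because the grade is hidden
###Output
_____no_output_____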
|
eager.ipynb | ###Markdown
[View in Colaboratory](https://colab.research.google.com/github/sbeleidy/openmined-designs/blob/master/eager.ipynb) **Copyright 2018 The TensorFlow Authors.**Licensed under the Apache License, Version 2.0 (the "License");
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Get Started with Eager ExecutionNote: you can run **[this notebook, live in Google Colab](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/eager.ipynb)** with zero setup.This tutorial describes how to use machine learning to *categorize* Iris flowers by species. It uses [TensorFlow](https://www.tensorflow.org)'s eager execution to (1) build a *model*, (2) *train* the model on example data, and (3) use the model to make *predictions* on unknown data. Machine learning experience isn't required to follow this guide, but you'll need to read some Python code. TensorFlow programmingThere are many [TensorFlow APIs](https://www.tensorflow.org/api_docs/python/) available, but we recommend starting with these high-level TensorFlow concepts:* Enable an [eager execution](https://www.tensorflow.org/programmers_guide/eager) development environment,* Import data with the [Datasets API](https://www.tensorflow.org/programmers_guide/datasets),* Build models and layers with TensorFlow's [Keras API](https://keras.io/getting-started/sequential-model-guide/).This tutorial shows these APIs and is structured like many other TensorFlow programs:1. Import and parse the data sets.2. Select the type of model.3. Train the model.4. Evaluate the model's effectiveness.5. Use the trained model to make predictions.To learn more about using TensorFlow, see the [Getting Started guide](https://www.tensorflow.org/get_started/) and the [example tutorials](https://www.tensorflow.org/tutorials/). If you'd like to learn about the basics of machine learning, consider taking the [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/). Run the notebookThis tutorial is available as an interactive [Colab notebook](https://colab.research.google.com) for you to run and change the Python code directly in the browser. The notebook handles setup and dependencies while you "play" cells to execute the code blocks. This is a fun way to explore the program and test ideas. If you are unfamiliar with Python notebook environments, there are a couple of things to keep in mind:1. Executing code requires connecting to a runtime environment. In the Colab notebook menu, select *Runtime > Connect to runtime...*2. Notebook cells are arranged sequentially to gradually build the program. Typically, later code cells depend on prior code cells, though you can always rerun a code block. To execute the entire notebook in order, select *Runtime > Run all*. To rerun a code cell, select the cell and click the *play icon* on the left. Setup program Install the latest version of TensorFlowThis tutorial uses eager execution, which is available in [TensorFlow 1.8](https://www.tensorflow.org/install/). (You may need to restart the runtime after upgrading.)
###Code
!pip install --upgrade tensorflow
###Output
Collecting tensorflow
[?25l Downloading https://files.pythonhosted.org/packages/22/c6/d08f7c549330c2acc1b18b5c1f0f8d9d2af92f54d56861f331f372731671/tensorflow-1.8.0-cp36-cp36m-manylinux1_x86_64.whl (49.1MB)
[K 100% |████████████████████████████████| 49.1MB 1.2MB/s
[?25hRequirement not upgraded as not directly required: absl-py>=0.1.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.2.0)
Requirement not upgraded as not directly required: six>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.11.0)
Requirement not upgraded as not directly required: wheel>=0.26 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.31.0)
Requirement not upgraded as not directly required: grpcio>=1.8.6 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.11.0)
Requirement not upgraded as not directly required: termcolor>=1.1.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.1.0)
Requirement not upgraded as not directly required: gast>=0.2.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.2.0)
Requirement not upgraded as not directly required: numpy>=1.13.3 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (1.14.3)
Requirement not upgraded as not directly required: astor>=0.6.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (0.6.2)
Collecting tensorboard<1.9.0,>=1.8.0 (from tensorflow)
[?25l Downloading https://files.pythonhosted.org/packages/59/a6/0ae6092b7542cfedba6b2a1c9b8dceaf278238c39484f3ba03b03f07803c/tensorboard-1.8.0-py3-none-any.whl (3.1MB)
[K 100% |████████████████████████████████| 3.1MB 11.2MB/s
[?25hRequirement not upgraded as not directly required: protobuf>=3.4.0 in /usr/local/lib/python3.6/dist-packages (from tensorflow) (3.5.2.post1)
Requirement not upgraded as not directly required: werkzeug>=0.11.10 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow) (0.14.1)
Requirement not upgraded as not directly required: markdown>=2.6.8 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow) (2.6.11)
Requirement not upgraded as not directly required: bleach==1.5.0 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow) (1.5.0)
Requirement not upgraded as not directly required: html5lib==0.9999999 in /usr/local/lib/python3.6/dist-packages (from tensorboard<1.9.0,>=1.8.0->tensorflow) (0.9999999)
Requirement not upgraded as not directly required: setuptools in /usr/local/lib/python3.6/dist-packages (from protobuf>=3.4.0->tensorflow) (39.1.0)
Installing collected packages: tensorboard, tensorflow
Found existing installation: tensorboard 1.7.0
Uninstalling tensorboard-1.7.0:
Successfully uninstalled tensorboard-1.7.0
Found existing installation: tensorflow 1.7.0
Uninstalling tensorflow-1.7.0:
Successfully uninstalled tensorflow-1.7.0
Successfully installed tensorboard-1.8.0 tensorflow-1.8.0
###Markdown
Configure imports and eager executionImport the required Python modules, including TensorFlow, and enable eager execution for this program. Eager execution makes TensorFlow evaluate operations immediately, returning concrete values instead of creating a [computational graph](https://www.tensorflow.org/programmers_guide/graphs) that is executed later. If you are used to a REPL or the `python` interactive console, you'll feel at home.Once eager execution is enabled, it *cannot* be disabled within the same program. See the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager) for more details.
###Code
from __future__ import absolute_import, division, print_function
import os
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
print("TensorFlow version: {}".format(tf.VERSION))
print("Eager execution: {}".format(tf.executing_eagerly()))
###Output
TensorFlow version: 1.8.0
Eager execution: True
###Markdown
The Iris classification problemImagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to statistically classify flowers. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their [sepals](https://en.wikipedia.org/wiki/Sepal) and [petals](https://en.wikipedia.org/wiki/Petal).The Iris genus entails about 300 species, but our program will classify only the following three:* Iris setosa* Iris virginica* Iris versicolor <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> Figure 1. Iris setosa (by Radomil, CC BY-SA 3.0), Iris versicolor, (by Dlanglois, CC BY-SA 3.0), and Iris virginica (by Frank Mayfield, CC BY-SA 2.0). Fortunately, someone has already created a [data set of 120 Iris flowers](https://en.wikipedia.org/wiki/Iris_flower_data_set) with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems. Import and parse the training datasetWe need to download the dataset file and convert it to a structure that can be used by this Python program. Download the datasetDownload the training dataset file using the [tf.keras.utils.get_file](https://www.tensorflow.org/api_docs/python/tf/keras/utils/get_file) function. This returns the file path of the downloaded file.
###Code
train_dataset_url = "http://download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("Local copy of the dataset file: {}".format(train_dataset_fp))
###Output
Downloading data from http://download.tensorflow.org/data/iris_training.csv
8192/2194 [================================================================================================================] - 0s 0us/step
Local copy of the dataset file: /content/.keras/datasets/iris_training.csv
###Markdown
Inspect the dataThis dataset, `iris_training.csv`, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the `head -n5` command to take a peek at the first five entries:
###Code
!head -n5 {train_dataset_fp}
###Output
120,4,setosa,versicolor,virginica
6.4,2.8,5.6,2.2,2
5.0,2.3,3.3,1.0,1
4.9,2.5,4.5,1.7,2
4.9,3.1,1.5,0.1,0
###Markdown
From this view of the dataset, we see the following:1. The first line is a header containing information about the dataset: * There are 120 total examples. Each example has four features and one of three possible label names. 2. Subsequent rows are data records, one *[example](https://developers.google.com/machine-learning/glossary/example)* per line, where: * The first four fields are *[features](https://developers.google.com/machine-learning/glossary/feature)*: these are characteristics of an example. Here, the fields hold float numbers representing flower measurements. * The last column is the *[label](https://developers.google.com/machine-learning/glossary/label)*: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.Each label is associated with string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginicaFor more information about features and labels, see the [ML Terminology section of the Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/framing/ml-terminology). Parse the datasetSince our dataset is a CSV-formatted text file, we'll parse the feature and label values into a format our Python model can use. Each line—or row—in the file is passed to the `parse_csv` function which grabs the first four feature fields and combines them into a single tensor. Then, the last field is parsed as the label. The function returns *both* the `features` and `label` tensors:
###Code
def parse_csv(line):
example_defaults = [[0.], [0.], [0.], [0.], [0]] # sets field types
parsed_line = tf.decode_csv(line, example_defaults)
# First 4 fields are features, combine into single tensor
features = tf.reshape(parsed_line[:-1], shape=(4,))
# Last field is the label
label = tf.reshape(parsed_line[-1], shape=())
return features, label
###Output
_____no_output_____
###Markdown
Create the training tf.data.DatasetTensorFlow's [Dataset API](https://www.tensorflow.org/programmers_guide/datasets) handles many common cases for feeding data into a model. This is a high-level API for reading data and transforming it into a form used for training. See the [Datasets Quick Start guide](https://www.tensorflow.org/get_started/datasets_quickstart) for more information.This program uses [tf.data.TextLineDataset](https://www.tensorflow.org/api_docs/python/tf/data/TextLineDataset) to load a CSV-formatted text file and is parsed with our `parse_csv` function. A [tf.data.Dataset](https://www.tensorflow.org/api_docs/python/tf/data/Dataset) represents an input pipeline as a collection of elements and a series of transformations that act on those elements. Transformation methods are chained together or called sequentially—just make sure to keep a reference to the returned `Dataset` object.Training works best if the examples are in random order. Use `tf.data.Dataset.shuffle` to randomize entries, setting `buffer_size` to a value larger than the number of examples (120 in this case). To train the model faster, the dataset's *[batch size](https://developers.google.com/machine-learning/glossary/batch_size)* is set to `32` examples to train at once.
###Code
train_dataset = tf.data.TextLineDataset(train_dataset_fp)
train_dataset = train_dataset.skip(1) # skip the first header row
train_dataset = train_dataset.map(parse_csv) # parse each row
train_dataset = train_dataset.shuffle(buffer_size=1000) # randomize
train_dataset = train_dataset.batch(32)
# View a single example entry from a batch
features, label = iter(train_dataset).next()
print("example features:", features[0])
print("example label:", label[0])
###Output
_____no_output_____
###Markdown
Select the type of model Why model?A *[model](https://developers.google.com/machine-learning/crash-course/glossarymodel)* is the relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.Could you determine the relationship between the four features and the Iris species *without* using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach *determines the model for you*. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you. Select the modelWe need to select the kind of model to train. There are many types of models and picking a good one takes experience. This tutorial uses a neural network to solve the Iris classification problem. *[Neural networks](https://developers.google.com/machine-learning/glossary/neural_network)* can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more *[hidden layers](https://developers.google.com/machine-learning/glossary/hidden_layer)*. Each hidden layer consists of one or more *[neurons](https://developers.google.com/machine-learning/glossary/neuron)*. There are several categories of neural networks and this program uses a dense, or *[fully-connected neural network](https://developers.google.com/machine-learning/glossary/fully_connected_layer)*: the neurons in one layer receive input connections from *every* neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer: <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> Figure 2. A neural network with features, hidden layers, and predictions. When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called *[inference](https://developers.google.com/machine-learning/crash-course/glossaryinference)*. For this example, the sum of the output predictions are 1.0. In Figure 2, this prediction breaks down as: `0.03` for *Iris setosa*, `0.95` for *Iris versicolor*, and `0.02` for *Iris virginica*. This means that the model predicts—with 95% probability—that an unlabeled example flower is an *Iris versicolor*. Create a model using KerasThe TensorFlow [tf.keras](https://www.tensorflow.org/api_docs/python/tf/keras) API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together. See the [Keras documentation](https://keras.io/) for details.The [tf.keras.Sequential](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential) model is a linear stack of layers. 
Its constructor takes a list of layer instances, in this case, two [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense) layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's `input_shape` parameter corresponds to the amount of features from the dataset, and is required.
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)), # input shape required
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(3)
])
###Output
_____no_output_____
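###Markdown
Note that the final `Dense(3)` layer returns raw logits rather than the probabilities described above. The cell below is a small sketch (using the untrained `model` just defined and a made-up flower measurement, so the numbers themselves are meaningless) showing how `tf.nn.softmax` turns those logits into three class probabilities that sum to 1.0.
###Code
# A made-up measurement: sepal length, sepal width, petal length, petal width
example = tf.convert_to_tensor([[5.1, 3.3, 1.7, 0.5]])

logits = model(example)                # raw, unnormalized scores from the output layer
probabilities = tf.nn.softmax(logits)  # normalized so the three values sum to 1.0

print("Logits:        ", logits.numpy())
print("Probabilities: ", probabilities.numpy())
###Output
_____no_output_____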
###Markdown
The *[activation function](https://developers.google.com/machine-learning/crash-course/glossaryactivation_function)* determines the output of a single neuron to the next layer. This is loosely based on how brain neurons are connected. There are many [available activations](https://www.tensorflow.org/api_docs/python/tf/keras/activations), but [ReLU](https://developers.google.com/machine-learning/crash-course/glossaryReLU) is common for hidden layers.The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively. Train the model*[Training](https://developers.google.com/machine-learning/crash-course/glossarytraining)* is the stage of machine learning when the model is gradually optimized, or the model *learns* the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn *too much* about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called *[overfitting](https://developers.google.com/machine-learning/crash-course/glossaryoverfitting)*—it's like memorizing the answers instead of understanding how to solve a problem.The Iris classification problem is an example of *[supervised machine learning](https://developers.google.com/machine-learning/glossary/supervised_machine_learning)*: the model is trained from examples that contain labels. In *[unsupervised machine learning](https://developers.google.com/machine-learning/glossary/unsupervised_machine_learning)*, the examples don't contain labels. Instead, the model typically finds patterns among the features. Define the loss and gradient functionBoth training and evaluation stages need to calculate the model's *[loss](https://developers.google.com/machine-learning/crash-course/glossaryloss)*. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value.Our model will calculate its loss using the [tf.losses.sparse_softmax_cross_entropy](https://www.tensorflow.org/api_docs/python/tf/losses/sparse_softmax_cross_entropy) function which takes the model's prediction and the desired label. The returned loss value is progressively larger as the prediction gets worse.
###Code
def loss(model, x, y):
y_ = model(x)
return tf.losses.sparse_softmax_cross_entropy(labels=y, logits=y_)
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, model.variables)
###Output
_____no_output_____
###Markdown
The `grad` function uses the `loss` function and the [tf.GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape) to record operations that compute the *[gradients](https://developers.google.com/machine-learning/crash-course/glossarygradient)* used to optimize our model. For more examples of this, see the [eager execution guide](https://www.tensorflow.org/programmers_guide/eager). Create an optimizerAn *[optimizer](https://developers.google.com/machine-learning/crash-course/glossaryoptimizer)* applies the computed gradients to the model's variables to minimize the `loss` function. You can think of a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions. <img src="http://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> Figure 3. Optimization algorithms visualized over time in 3D space. (Source: Stanford class CS231n, MIT License) TensorFlow has many [optimization algorithms](https://www.tensorflow.org/api_guides/python/train) available for training. This model uses the [tf.train.GradientDescentOptimizer](https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer) that implements the *[stochastic gradient descent](https://developers.google.com/machine-learning/crash-course/glossarygradient_descent)* (SGD) algorithm. The `learning_rate` sets the step size to take for each iteration down the hill. This is a *hyperparameter* that you'll commonly adjust to achieve better results.
###Code
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
###Output
_____no_output_____
###Markdown
Training loopWith all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:1. Iterate each epoch. An epoch is one pass through the dataset.2. Within an epoch, iterate over each example in the training `Dataset` grabbing its *features* (`x`) and *label* (`y`).3. Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.4. Use an `optimizer` to update the model's variables.5. Keep track of some stats for visualization.6. Repeat for each epoch.The `num_epochs` variable is the amount of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. `num_epochs` is a *[hyperparameter](https://developers.google.com/machine-learning/glossary/hyperparameter)* that you can tune. Choosing the right number usually requires both experience and experimentation.
###Code
## Note: Rerunning this cell uses the same model variables
# keep results for plotting
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tfe.metrics.Mean()
epoch_accuracy = tfe.metrics.Accuracy()
# Training loop - using batches of 32
for x, y in train_dataset:
# Optimize the model
grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.variables),
global_step=tf.train.get_or_create_global_step())
# Track progress
epoch_loss_avg(loss(model, x, y)) # add current batch loss
# compare predicted label to actual label
epoch_accuracy(tf.argmax(model(x), axis=1, output_type=tf.int32), y)
# end epoch
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
###Output
_____no_output_____
###Markdown
Visualize the loss function over time While it's helpful to print out the model's training progress, it's often *more helpful* to see this progress. [TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard) is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the `matplotlib` module.Interpreting these charts takes some experience, but you really want to see the *loss* go down and the *accuracy* go up.
###Code
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('Training Metrics')
axes[0].set_ylabel("Loss", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("Accuracy", fontsize=14)
axes[1].set_xlabel("Epoch", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
###Output
_____no_output_____
###Markdown
Evaluate the model's effectivenessNow that the model is trained, we can get some statistics on its performance.*Evaluating* means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's prediction against the actual label. For example, a model that picked the correct species on half the input examples has an *[accuracy](https://developers.google.com/machine-learning/glossary/accuracy)* of `0.5`. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:
| Example features | Label | Model prediction |
| --- | --- | --- |
| 5.9, 3.0, 4.3, 1.5 | 1 | 1 |
| 6.9, 3.1, 5.4, 2.1 | 2 | 2 |
| 5.1, 3.3, 1.7, 0.5 | 0 | 0 |
| 6.0, 3.4, 4.5, 1.6 | 1 | 2 |
| 5.5, 2.5, 4.0, 1.3 | 1 | 1 |
Figure 4. An Iris classifier that is 80% accurate. Setup the test datasetEvaluating the model is similar to training the model. The biggest difference is the examples come from a separate *[test set](https://developers.google.com/machine-learning/crash-course/glossarytest_set)* rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.The setup for the test `Dataset` is similar to the setup for training `Dataset`. Download the CSV text file and parse the values, then give it a little shuffle:
###Code
test_url = "http://download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_dataset = tf.data.TextLineDataset(test_fp)
test_dataset = test_dataset.skip(1) # skip header row
test_dataset = test_dataset.map(parse_csv)      # parse each row with the function created earlier
test_dataset = test_dataset.shuffle(1000) # randomize
test_dataset = test_dataset.batch(32) # use the same batch size as the training set
###Output
_____no_output_____
###Markdown
Evaluate the model on the test datasetUnlike the training stage, the model only evaluates a single [epoch](https://developers.google.com/machine-learning/glossary/epoch) of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set.
###Code
test_accuracy = tfe.metrics.Accuracy()
for (x, y) in test_dataset:
prediction = tf.argmax(model(x), axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("Test set accuracy: {:.3%}".format(test_accuracy.result()))
###Output
_____no_output_____
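###Markdown
Beyond a single accuracy number, it can be useful to see which species get confused with each other. The cell below is a hedged sketch (reusing the trained `model` and `test_dataset` from above) that accumulates predictions and labels over the test set and builds a 3x3 confusion matrix with `tf.confusion_matrix`.
###Code
all_labels = []
all_predictions = []

# Collect true labels and predicted labels batch by batch
for (x, y) in test_dataset:
    all_labels.append(y)
    all_predictions.append(tf.argmax(model(x), axis=1, output_type=tf.int32))

# Rows are true species (0, 1, 2), columns are predicted species
confusion = tf.confusion_matrix(tf.concat(all_labels, axis=0),
                                tf.concat(all_predictions, axis=0),
                                num_classes=3)
print(confusion.numpy())
###Output
_____no_output_____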
###Markdown
Use the trained model to make predictionsWe've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on [unlabeled examples](https://developers.google.com/machine-learning/glossary/unlabeled_example); that is, on examples that contain features but not a label.In real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:* `0`: Iris setosa* `1`: Iris versicolor* `2`: Iris virginica
###Code
class_ids = ["Iris setosa", "Iris versicolor", "Iris virginica"]
predict_dataset = tf.convert_to_tensor([
[5.1, 3.3, 1.7, 0.5,],
[5.9, 3.0, 4.2, 1.5,],
[6.9, 3.1, 5.4, 2.1]
])
predictions = model(predict_dataset)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
name = class_ids[class_idx]
print("Example {} prediction: {}".format(i, name))
###Output
_____no_output_____ |
first_project/automated-model-tuning.ipynb | ###Markdown
Introduction: Automated Hyperparameter TuningIn this notebook, we will talk through a complete example of using automated hyperparameter tuning to optimize a machine learning model. In particular, we will use Bayesian Optimization and the Hyperopt library to tune the hyperparameters of a gradient boosting machine. __Additional Notebooks__ If you haven't checked out my other work on this problem, here is a complete list of the notebooks I have completed so far:* [A Gentle Introduction](https://www.kaggle.com/willkoehrsen/start-here-a-gentle-introduction)* [Manual Feature Engineering Part One](https://www.kaggle.com/willkoehrsen/introduction-to-manual-feature-engineering)* [Manual Feature Engineering Part Two](https://www.kaggle.com/willkoehrsen/introduction-to-manual-feature-engineering-p2)* [Introduction to Automated Feature Engineering](https://www.kaggle.com/willkoehrsen/automated-feature-engineering-basics)* [Advanced Automated Feature Engineering](https://www.kaggle.com/willkoehrsen/tuning-automated-feature-engineering-exploratory)* [Feature Selection](https://www.kaggle.com/willkoehrsen/introduction-to-feature-selection)* [Intro to Model Tuning: Grid and Random Search](https://www.kaggle.com/willkoehrsen/intro-to-model-tuning-grid-and-random-search)* [Automated Model Tuning](https://www.kaggle.com/willkoehrsen/automated-model-tuning)There are four approaches to tuning the hyperparameters of a machine learning model1. __Manual__: select hyperparameters based on intuition/experience/guessing, train the model with the hyperparameters, and score on the validation data. Repeat process until you run out of patience or are satisfied with the results.2. __Grid Search__: set up a grid of hyperparameter values and for each combination, train a model and score on the validation data. In this approach, every single combination of hyperparameters values is tried which can be very inefficient!3. __Random search__: set up a grid of hyperparameter values and select random combinations to train the model and score. The number of search iterations is set based on time/resources.4. __Automated Hyperparameter Tuning__: use methods such as gradient descent, Bayesian Optimization, or evolutionary algorithms to conduct a guided search for the best hyperparameters.These are listed in general order of least to most efficient. While we already conquered 2 and 3 [in this notebook](https://www.kaggle.com/willkoehrsen/intro-to-model-tuning-grid-and-random-search) (we didn't even try method 1), we have yet to take on automated hyperparameter tuning. There are a number of methods to do this including genetic programming, Bayesian optimization, and gradient based methods. Here we will focus only on Bayesian optimization, using the Tree Parzen Esimator (don't worry, you don't need to understand this in detail) in the [Hyperopt open-source Python library](https://hyperopt.github.io/hyperopt/).For a little more background (we'll cover everything you need below), [here is an introductory article](https://towardsdatascience.com/an-introductory-example-of-bayesian-optimization-in-python-with-hyperopt-aae40fff4ff0) on Bayesian optimization, and [here is an article on automated hyperparameter tuning](https://towardsdatascience.com/automated-machine-learning-hyperparameter-tuning-in-python-dfda59b72f8a) using Bayesian optimization. 
Here we'll get right into automated hyperparameter tuning, so for the necessary background on model tuning, refer to [this kernel](https://www.kaggle.com/willkoehrsen/intro-to-model-tuning-grid-and-random-search) Bayesian Optimization PrimerThe problem with grid and random search is that these are __uninformed methods__ because they do not use the past results from different values of hyperparameters in the objective function (remember the objective function takes in the hyperparameters and returns the model cross validation score). We record the results of the objective function for each set of hyperparameters, but the algorithms do not select the next hyperparameter values from this information. Intuitively, if we have the past results, we should use them to reason about what hyperparameter values work the best and choose the next values wisely to try and spend more iterations evaluating promising values. Evaluating hyperparameters in the objective function is very time-consuming, and the __concept of Bayesian optimization is to limit calls to the evaluation function by choosing the next hyperparameter values based on the previous results.__ This allows the algorithm to spend __more time evaluating promising hyperparameter values and less time in low-scoring regions of the hyperparameter space__. For example, consider the image below:If you were choosing the next number of trees to try for the random forest, where would you concentrate your search? Probably around 100 trees because that is where the lowest errors have tended to occur (imagine this is a problem where we want to minimize the error). In effect, you have just done Bayesian hyperparameter optimization in your head! You formed a probability model of the error as a function of the hyperparameters and then selected the next hyperparameter values by maximizing the probability of a low error. Bayesian optimization works by building a surrogate function (in the form of a probability model) of the objective function $P(\text{score} \mid \text{hyperparameters})$. The surrogate function is much cheaper to evaluate than the objective, so the algorithm chooses the next values to try in the objective based on maximizing a criterion on the surrogate (usually expected improvement), exactly what you would have done with respect to the image above. The surrogate function is based on past evaluation results - pairs of (score, hyperparameter) records - and is continually updated with each objective function evaluation. Bayesian optimization therefore uses Bayesian reasoning: form an initial model (called a prior) and then update it with more evidence. The idea is that as the data accumulates, the surrogate function gets closer and closer to the objective function, and the hyperparameter values that are the best in the surrogate function will also do the best in the objective function. Bayesian optimization methods differ in the algorithm used to build the surrogate function and choose the next hyperparameter values to try. Some of the common choices are Gaussian Process (implemented in Spearmint), Random Forest Regression (in SMAC), and the Tree Parzen Estimator (TPE) in Hyperopt (technical details can be found in this article, although they won't be necessary to use the methods). Four Parts of Bayesian OptimizationBayesian hyperparameter optimization requires the same four parts as we implemented in grid and random search:1.
__Objective Function__: takes in an input (hyperparameters) and returns a score to minimize or maximize (the cross validation score)2. __Domain space__: the range of input values (hyperparameters) to evaluate3. __Optimization Algorithm__: the method used to construct the surrogate function and choose the next values to evaluate4. __Results__: score, value pairs that the algorithm uses to build the surrogate functionThe only differences are that now our objective function will return a score to minimize (this is just convention in the field of optimization), our domain space will be probability distributions rather than a hyperparameter grid, and the optimization algorithm will be an __informed method__ that uses past results to choose the next hyperparameter values to evaluate. HyperoptHyperopt is an open-source Python library that implements Bayesian Optimization using the Tree Parzen Estimator algorithm to construct the surrogate function and select the next hyperparameter values to evaluate in the objective function. There are a number of other libraries such as Spearmint (Gaussian process surrogate function) and SMAC (random forest regression surrogate function) sharing the same problem structure. The four parts of an optimization problem that we develop here will apply to all the libraries with only a change in syntax. Moreover, the optimization methods as applied to the Gradient Boosting Machine will translate to other machine learning models or any problem where we have to minimize a function. Gradient Boosting MachineWe will use the gradient boosting machine (GBM) as our model to tune in the LightGBM library. The GBM is our choice of model because it performs extremely well for these types of problems (as shown on the leaderboard) and because the performance is heavily dependent on the choice of hyperparameter values. For more details of the Gradient Boosting Machine (GBM), check out this high-level blog post, or this in depth technical article. Cross Validation with Early StoppingAs with random and grid search, we will evaluate each set of hyperparameters using 5 fold cross validation on the training data. The GBM model will be trained with early stopping, where estimators are added to the ensemble until the validation score has not decreased for 100 iterations (estimators added). Cross validation and early stopping will be implemented using the LightGBM `cv` function. We will use 5 folds and 100 early stopping rounds. Dataset and ApproachAs before, we will work with a limited section of the data - 10000 observations for training and 6000 observations for testing. This will allow the optimization within the notebook to finish in a reasonable amount of time. Later in the notebook, I'll present results from 1000 iterations of Bayesian hyperparameter optimization on the reduced dataset and we then will see if these results translate to a full dataset (from [this kernel](https://www.kaggle.com/jsaguiar/updated-0-792-lb-lightgbm-with-simple-features)). The functions developed here can be taken and run on any dataset, or used with any machine learning model (just with minor changes in the details) and working with a smaller dataset will allow us to learn all of the concepts. I am currently running 500 iterations of Bayesian hyperparameter optimization on a complete dataset and will make the results available when the search is completed. With the background details out of the way, let's get started with Bayesian optimization applied to automated hyperparameter tuning!
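Before wiring these four parts up for the GBM, the next cell is a tiny illustrative sketch (not from the original kernel) that shows them on a toy problem, minimizing a simple quadratic, so the `fmin` call pattern is clear before the real objective, domain, and trials are built below.
###Code
from hyperopt import fmin, tpe, hp, Trials

# 1. objective: value to minimize; 2. domain: a uniform distribution for x;
# 3. algorithm: Tree Parzen Estimator; 4. results: stored in a Trials object
toy_trials = Trials()
toy_best = fmin(fn=lambda x: (x - 2) ** 2,
                space=hp.uniform('x', -5, 5),
                algo=tpe.suggest,
                max_evals=50,
                trials=toy_trials)

print(toy_best)  # should be close to {'x': 2.0}
###Output
_____no_output_____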
###Code
# Data manipulation
import pandas as pd
import numpy as np
# Modeling
import lightgbm as lgb
# Evaluation of the model
from sklearn.model_selection import KFold, train_test_split
from sklearn.metrics import roc_auc_score
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
plt.rcParams['font.size'] = 18
%matplotlib inline
# Governing choices for search
N_FOLDS = 5
MAX_EVALS = 5
###Output
_____no_output_____
###Markdown
The code below reads in the data and creates a smaller version for training and a set for testing. We can only use the training data __a single time__ when we evaluate the final model. Hyperparameter tuning must be done on the training data using cross validation!
###Code
features = pd.read_csv('../input/home-credit-default-risk/application_train.csv')
# Sample 16000 rows (10000 for training, 6000 for testing)
features = features.sample(n = 16000, random_state = 42)
# Only numeric features
features = features.select_dtypes('number')
# Extract the labels
labels = np.array(features['TARGET'].astype(np.int32)).reshape((-1, ))
features = features.drop(columns = ['TARGET', 'SK_ID_CURR'])
# Split into training and testing data
train_features, test_features, train_labels, test_labels = train_test_split(features, labels, test_size = 6000, random_state = 42)
print('Train shape: ', train_features.shape)
print('Test shape: ', test_features.shape)
train_features.head()
###Output
_____no_output_____
###Markdown
Baseline Model First we can create a model with the default value of hyperparameters and score it using cross validation with early stopping. Using the `cv` LightGBM function requires creating a `Dataset`.
###Code
model = lgb.LGBMClassifier(random_state=50)
# Training set
train_set = lgb.Dataset(train_features, label = train_labels)
test_set = lgb.Dataset(test_features, label = test_labels)
# Default hyperparamters
hyperparameters = model.get_params()
# Using early stopping to determine number of estimators.
del hyperparameters['n_estimators']
# Perform cross validation with early stopping
cv_results = lgb.cv(hyperparameters, train_set, num_boost_round = 10000, nfold = N_FOLDS, metrics = 'auc',
early_stopping_rounds = 100, verbose_eval = False, seed = 42)
# Highest score
best = cv_results['auc-mean'][-1]
# Standard deviation of best score
best_std = cv_results['auc-stdv'][-1]
print('The maximium ROC AUC in cross validation was {:.5f} with std of {:.5f}.'.format(best, best_std))
print('The ideal number of iterations was {}.'.format(len(cv_results['auc-mean'])))
###Output
_____no_output_____
###Markdown
Now we can evaluate the baseline model on the testing data.
###Code
# Optimal number of estimators found in cv
model.n_estimators = len(cv_results['auc-mean'])
# Train and make predictions with model
model.fit(train_features, train_labels)
preds = model.predict_proba(test_features)[:, 1]
baseline_auc = roc_auc_score(test_labels, preds)
print('The baseline model scores {:.5f} ROC AUC on the test set.'.format(baseline_auc))
###Output
_____no_output_____
###Markdown
Objective FunctionThe first part to write is the objective function which takes in a set of hyperparameter values and returns the cross validation score on the training data. An objective function in Hyperopt must return either a single real value to minimize, or a dictionary with a key "loss" with the score to minimize (and a key "status" indicating if the run was successful or not). Optimization is typically about minimizing a value, and because our metric is Receiver Operating Characteristic Area Under the Curve (ROC AUC) where higher is better, the objective function will return $1 - \text{ROC AUC Cross Validation}$. The algorithm will try to drive this value as low as possible (raising the ROC AUC) by choosing the next hyperparameters based on the past results. The complete objective function is shown below. As with random and grid search, we write to a `csv` file on each call of the function in order to track results as the search progress and so we have a saved record of the search. (The `subsample` and `boosting_type` logic will be explained when we get to the domain).
###Code
import csv
from hyperopt import STATUS_OK
from timeit import default_timer as timer
def objective(hyperparameters):
"""Objective function for Gradient Boosting Machine Hyperparameter Optimization.
Writes a new line to `outfile` on every iteration"""
# Keep track of evals
global ITERATION
ITERATION += 1
# Using early stopping to find number of trees trained
if 'n_estimators' in hyperparameters:
del hyperparameters['n_estimators']
# Retrieve the subsample
subsample = hyperparameters['boosting_type'].get('subsample', 1.0)
# Extract the boosting type and subsample to top level keys
hyperparameters['boosting_type'] = hyperparameters['boosting_type']['boosting_type']
hyperparameters['subsample'] = subsample
# Make sure parameters that need to be integers are integers
for parameter_name in ['num_leaves', 'subsample_for_bin', 'min_child_samples']:
hyperparameters[parameter_name] = int(hyperparameters[parameter_name])
start = timer()
# Perform n_folds cross validation
cv_results = lgb.cv(hyperparameters, train_set, num_boost_round = 10000, nfold = N_FOLDS,
early_stopping_rounds = 100, metrics = 'auc', seed = 50)
run_time = timer() - start
# Extract the best score
best_score = cv_results['auc-mean'][-1]
# Loss must be minimized
loss = 1 - best_score
# Boosting rounds that returned the highest cv score
n_estimators = len(cv_results['auc-mean'])
# Add the number of estimators to the hyperparameters
hyperparameters['n_estimators'] = n_estimators
# Write to the csv file ('a' means append)
of_connection = open(OUT_FILE, 'a')
writer = csv.writer(of_connection)
writer.writerow([loss, hyperparameters, ITERATION, run_time, best_score])
of_connection.close()
# Dictionary with information for evaluation
return {'loss': loss, 'hyperparameters': hyperparameters, 'iteration': ITERATION,
'train_time': run_time, 'status': STATUS_OK}
###Output
_____no_output_____
###Markdown
Domain Specifying the domain (called the space in Hyperopt) is a little trickier than in grid search. In Hyperopt, and other Bayesian optimization frameworks, the domain is not a discrete grid but instead has probability distributions for each hyperparameter. For each hyperparameter, we will use the same limits as with the grid, but instead of being defined at each point, the domain represents probabilities for each hyperparameter. This will probably become clearer in the code and the images!
###Code
from hyperopt import hp
from hyperopt.pyll.stochastic import sample
###Output
_____no_output_____
###Markdown
First we will go through an example of the learning rate. We are using a log-uniform space for the learning rate, defined here from 0.005 to 0.2 (the full search space later uses 0.01 to 0.5). The log-uniform distribution has the values evenly placed in logarithmic space rather than linear space. This is useful for variables that differ over several orders of magnitude such as the learning rate. For example, under a log-uniform distribution there is an equal chance of drawing a value from 0.005 to 0.05 as from 0.05 to 0.5, because both spans cover a factor of 10 (in linear space far more values would be drawn from the latter since the linear distance is much larger; in logarithmic space the two spans are exactly the same size).
###Code
# Create the learning rate
learning_rate = {'learning_rate': hp.loguniform('learning_rate', np.log(0.005), np.log(0.2))}
###Output
_____no_output_____
###Markdown
We can visualize the learning rate by drawing 10000 samples from the distribution.
###Code
learning_rate_dist = []
# Draw 10000 samples from the learning rate domain
for _ in range(10000):
learning_rate_dist.append(sample(learning_rate)['learning_rate'])
plt.figure(figsize = (8, 6))
sns.kdeplot(learning_rate_dist, color = 'red', linewidth = 2, shade = True);
plt.title('Learning Rate Distribution', size = 18); plt.xlabel('Learning Rate', size = 16); plt.ylabel('Density', size = 16);
###Output
_____no_output_____
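###Markdown
A quick numerical sanity check, a sketch using the `learning_rate_dist` samples drawn above: under a log-uniform prior on [0.005, 0.2], probability mass is proportional to the ratio of the interval endpoints, so the decade from 0.005 to 0.05 should receive roughly log(10)/log(40), or about 62%, of the samples, far more than the roughly 23% it would get under a linear-uniform prior.
###Code
samples = np.array(learning_rate_dist)

# Fraction of samples falling in each sub-range of the 0.005 - 0.2 domain
print('Fraction in [0.005, 0.05): {:.3f}'.format(np.mean(samples < 0.05)))
print('Fraction in [0.05, 0.20]:  {:.3f}'.format(np.mean(samples >= 0.05)))
###Output
_____no_output_____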
###Markdown
The number of leaves on the other hand is a discrete uniform distribution.
###Code
# Discrete uniform distribution
num_leaves = {'num_leaves': hp.quniform('num_leaves', 30, 150, 1)}
num_leaves_dist = []
# Sample 10000 times from the number of leaves distribution
for _ in range(10000):
num_leaves_dist.append(sample(num_leaves)['num_leaves'])
# kdeplot
plt.figure(figsize = (8, 6))
sns.kdeplot(num_leaves_dist, linewidth = 2, shade = True);
plt.title('Number of Leaves Distribution', size = 18); plt.xlabel('Number of Leaves', size = 16); plt.ylabel('Density', size = 16);
###Output
_____no_output_____
###Markdown
Conditional DomainIn Hyperopt, we can use nested conditional statements to indicate hyperparameters that depend on other hyperparameters. For example, the "goss" `boosting_type` cannot use subsampling, so when we set up the `boosting_type` categorical variable, we have to set the subsample to 1.0 while for the other boosting types it's a float between 0.5 and 1.0.
###Code
# boosting type domain
boosting_type = {'boosting_type': hp.choice('boosting_type',
[{'boosting_type': 'gbdt', 'subsample': hp.uniform('subsample', 0.5, 1)},
{'boosting_type': 'dart', 'subsample': hp.uniform('subsample', 0.5, 1)},
{'boosting_type': 'goss', 'subsample': 1.0}])}
# Draw a sample
hyperparams = sample(boosting_type)
hyperparams
###Output
_____no_output_____
###Markdown
We need to set both the boosting_type and subsample as top-level keys in the parameter dictionary. We can use the Python dict.get method with a default value of 1.0. This means that if the key is not present in the dictionary, the value returned will be the default (1.0).
###Code
# Retrieve the subsample if present otherwise set to 1.0
subsample = hyperparams['boosting_type'].get('subsample', 1.0)
# Extract the boosting type
hyperparams['boosting_type'] = hyperparams['boosting_type']['boosting_type']
hyperparams['subsample'] = subsample
hyperparams
###Output
_____no_output_____
###Markdown
The gbm cannot use the nested dictionary so we need to set the `boosting_type` and `subsample` as top level keys. Nested conditionals allow us to use a different set of hyperparameters depending on other hyperparameters. For example, we can explore different models with completely different sets of hyperparameters by using nested conditionals. The only requirement is that the first nested statement must be based on a choice hyperparameter (the choice could be the type of model). Complete Bayesian DomainNow we can define the entire domain. Each variable needs to have a label and a few parameters specifying the type and extent of the distribution. For the variables such as boosting type that are categorical, we use the choice variable. Other variables types include quniform, loguniform, and uniform. For the complete list, check out the documentation for Hyperopt. Altogether there are 10 hyperparameters to optimize.
###Code
# Define the search space
space = {
'boosting_type': hp.choice('boosting_type',
[{'boosting_type': 'gbdt', 'subsample': hp.uniform('gdbt_subsample', 0.5, 1)},
{'boosting_type': 'dart', 'subsample': hp.uniform('dart_subsample', 0.5, 1)},
{'boosting_type': 'goss', 'subsample': 1.0}]),
'num_leaves': hp.quniform('num_leaves', 20, 150, 1),
'learning_rate': hp.loguniform('learning_rate', np.log(0.01), np.log(0.5)),
'subsample_for_bin': hp.quniform('subsample_for_bin', 20000, 300000, 20000),
'min_child_samples': hp.quniform('min_child_samples', 20, 500, 5),
'reg_alpha': hp.uniform('reg_alpha', 0.0, 1.0),
'reg_lambda': hp.uniform('reg_lambda', 0.0, 1.0),
'colsample_bytree': hp.uniform('colsample_by_tree', 0.6, 1.0),
'is_unbalance': hp.choice('is_unbalance', [True, False]),
}
###Output
_____no_output_____
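###Markdown
As noted above, nested conditionals can also switch between entirely different sets of hyperparameters, for example different model families. The cell below is purely illustrative (the model names and parameter ranges are assumptions and are not part of this notebook's actual search space): the top-level `hp.choice` picks a model type, and only the hyperparameters relevant to that model are sampled.
###Code
# Hypothetical domain that first chooses a model family, then samples
# only the hyperparameters that belong to it
model_space = hp.choice('model_type', [
    {'model': 'gbm',
     'learning_rate': hp.loguniform('gbm_learning_rate', np.log(0.01), np.log(0.5)),
     'num_leaves': hp.quniform('gbm_num_leaves', 20, 150, 1)},
    {'model': 'random_forest',
     'max_depth': hp.quniform('rf_max_depth', 3, 20, 1),
     'max_features': hp.uniform('rf_max_features', 0.5, 1.0)}
])

sample(model_space)
###Output
_____no_output_____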
###Markdown
Example of Sampling from the DomainLet's sample from the domain (using the conditional logic) to see the result of each draw. Every time we run this code, the results will change. (Again notice that we need to assign the top level keys to the keywords understood by the GBM).
###Code
# Sample from the full space
x = sample(space)
# Conditional logic to assign top-level keys
subsample = x['boosting_type'].get('subsample', 1.0)
x['boosting_type'] = x['boosting_type']['boosting_type']
x['subsample'] = subsample
x
x = sample(space)
subsample = x['boosting_type'].get('subsample', 1.0)
x['boosting_type'] = x['boosting_type']['boosting_type']
x['subsample'] = subsample
x
###Output
_____no_output_____
###Markdown
Let's test the objective function with the domain to make sure it works. (Every time the `of_connection` line is run, the `outfile` will be overwritten, so use a different name for each trial to save the results.)
###Code
# Create a new file and open a connection
OUT_FILE = 'bayes_test.csv'
of_connection = open(OUT_FILE, 'w')
writer = csv.writer(of_connection)
ITERATION = 0
# Write column names
headers = ['loss', 'hyperparameters', 'iteration', 'runtime', 'score']
writer.writerow(headers)
of_connection.close()
# Test the objective function
results = objective(sample(space))
print('The cross validation loss = {:.5f}.'.format(results['loss']))
print('The optimal number of estimators was {}.'.format(results['hyperparameters']['n_estimators']))
###Output
_____no_output_____
###Markdown
Optimization Algorithm The optimization algorithm is the method for constructing the surrogate function (probability model) and selecting the next set of hyperparameters to evaluate in the objective function. Hyperopt has two choices: random search and the Tree Parzen Estimator (TPE). The technical details of TPE can be found in [this article](https://papers.nips.cc/paper/4443-algorithms-for-hyper-parameter-optimization.pdf) and a conceptual explanation is in [this article](https://towardsdatascience.com/a-conceptual-explanation-of-bayesian-model-based-hyperparameter-optimization-for-machine-learning-b8172278050f). Although this is the most technical part of Bayesian hyperparameter optimization, defining the algorithm in Hyperopt is simple.
###Code
from hyperopt import tpe
# Create the algorithm
tpe_algorithm = tpe.suggest
###Output
_____no_output_____
###Markdown
Results History The final part is the history of objective function evaluations. Although Hyperopt internally keeps track of the results for the algorithm to use, if we want to monitor the results and have a saved copy of the search, we need to store the results ourselves. Here, we are using two methods to make sure we capture all the results: 1. a `Trials` object that stores the dictionary returned from the objective function, and 2. adding a line to a csv file every iteration. The csv file option also lets us monitor the results of an on-going experiment. However, do not use Excel to open the file while training is on-going; instead, check the results using `tail results/out_file.csv` from bash or open the file in Sublime Text or Notepad.
###Code
from hyperopt import Trials
# Record results
trials = Trials()
###Output
_____no_output_____
###Markdown
The `Trials` object will hold everything returned from the objective function in the `.results` attribute. We can use this after the search is complete to inspect the results, but an easier method is to read in the `csv` file because that will already be in a dataframe.
###Code
# Create a file and open a connection
OUT_FILE = 'bayes_test.csv'
of_connection = open(OUT_FILE, 'w')
writer = csv.writer(of_connection)
ITERATION = 0
# Write column names
headers = ['loss', 'hyperparameters', 'iteration', 'runtime', 'score']
writer.writerow(headers)
of_connection.close()
###Output
_____no_output_____
###Markdown
Automated Hyperparameter Optimization in Practice We have all four parts we need to run the optimization. To run Bayesian optimization we use the `fmin` function (a good reminder that we need a metric to minimize!).
###Code
from hyperopt import fmin
###Output
_____no_output_____
###Markdown
`fmin` takes the four parts defined above as well as the maximum number of iterations `max_evals`.
###Code
# Global variable
global ITERATION
ITERATION = 0
# Run optimization
best = fmin(fn = objective, space = space, algo = tpe.suggest, trials = trials,
max_evals = MAX_EVALS)
best
###Output
_____no_output_____
###Markdown
The `best` object holds only the hyperparameters that returned the lowest loss in the objective function. Although this is ultimately what we are after, if we want to understand how the search progresses, we need to inspect the `Trials` object or the `csv` file. For example, we can sort the `results` returned from the objective function by the lowest loss:
###Code
# Sort the trials with lowest loss (highest AUC) first
trials_dict = sorted(trials.results, key = lambda x: x['loss'])
trials_dict[:1]
###Output
_____no_output_____
###Markdown
An easier method is to read in the csv file since this will be a dataframe.
###Code
results = pd.read_csv(OUT_FILE)
###Output
_____no_output_____
###Markdown
The function below takes in the results, trains a model on the training data, and evaluates it on the testing data. It returns a dataframe of hyperparameters from the search. Saving the results to a csv file converts the dictionary of hyperparameters to a string. We need to map this back to a dictionary using `ast.literal_eval`.
###Code
import ast
def evaluate(results, name):
"""Evaluate model on test data using hyperparameters in results
Return dataframe of hyperparameters"""
new_results = results.copy()
# String to dictionary
new_results['hyperparameters'] = new_results['hyperparameters'].map(ast.literal_eval)
# Sort with best values on top
new_results = new_results.sort_values('score', ascending = False).reset_index(drop = True)
# Print out cross validation high score
print('The highest cross validation score from {} was {:.5f} found on iteration {}.'.format(name, new_results.loc[0, 'score'], new_results.loc[0, 'iteration']))
# Use best hyperparameters to create a model
hyperparameters = new_results.loc[0, 'hyperparameters']
model = lgb.LGBMClassifier(**hyperparameters)
# Train and make predictions
model.fit(train_features, train_labels)
preds = model.predict_proba(test_features)[:, 1]
print('ROC AUC from {} on test data = {:.5f}.'.format(name, roc_auc_score(test_labels, preds)))
# Create dataframe of hyperparameters
hyp_df = pd.DataFrame(columns = list(new_results.loc[0, 'hyperparameters'].keys()))
# Iterate through each set of hyperparameters that were evaluated
for i, hyp in enumerate(new_results['hyperparameters']):
hyp_df = hyp_df.append(pd.DataFrame(hyp, index = [0]),
ignore_index = True)
# Put the iteration and score in the hyperparameter dataframe
hyp_df['iteration'] = new_results['iteration']
hyp_df['score'] = new_results['score']
return hyp_df
bayes_results = evaluate(results, name = 'Bayesian')
bayes_results
###Output
_____no_output_____
###Markdown
Continue Optimization Hyperopt can continue searching where a previous search left off if we pass in a `Trials` object that already has results. The algorithms used in Bayesian optimization are black-box optimizers because they have no internal state. All they need is the previous results of objective function evaluations (the input values and loss) and they can build up the surrogate function and select the next values to evaluate in the objective function. This means that any search can be continued as long as we have the history in a `Trials` object.
###Code
MAX_EVALS = 10
# Continue training
best = fmin(fn = objective, space = space, algo = tpe.suggest, trials = trials,
max_evals = MAX_EVALS)
###Output
_____no_output_____
###Markdown
To save the `Trials` object so it can be read in later for more training, we can use the `json` format.
###Code
import json
# Save the trial results
with open('trials.json', 'w') as f:
f.write(json.dumps(trials_dict))
###Output
_____no_output_____
###Markdown
To start the training from where it left off, simply load in the `Trials` object and pass it to an instance of `fmin`. (You might even be able to tweak the hyperparameter distribution and continue searching with the `Trials` object because the algorithm does not maintain an internal state. Someone should check this and let me know in the comments!). Next Steps Now that we have developed all the necessary parts for automated hyperparameter tuning using Bayesian optimization, we can apply these to any dataset or any machine learning method. The functions developed here can be put in a script and run on a full dataset. Next, we will go through results from 1000 evaluations on a reduced-size dataset to see how the search progresses. We can then compare these results to random search to see how a method that uses __reasoning__ about past results differs from a method that does not. After examining the tuning results from the reduced dataset, we will take the best performing hyperparameters and see if these translate to a full dataset, the features from the [Updated 0.792 LB LightGBM with Simple Features](https://www.kaggle.com/jsaguiar/updated-0-792-lb-lightgbm-with-simple-features) kernel (I did not develop these features and want to give credit to the numerous people, including [Aguiar](https://www.kaggle.com/jsaguiar) and [olivier](https://www.kaggle.com/ogrellier), who have worked on these features. Please check out their [kernels](https://www.kaggle.com/ogrellier/lighgbm-with-selected-features)!). We saw in the random and grid search notebook that the best hyperparameter values from the small datasets do not necessarily perform well on the full datasets. I am currently running the Bayesian hyperparameter optimization for 500 iterations on the features referenced above and will make the results publicly available when the search is finished. For now, we will turn to the 1000 trials from the smaller dataset. These results can be generated by running the cell below, but I can't guarantee that this will finish within the kernel time limit!
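The saving cell above stores only `trials_dict` (the sorted list of results) as JSON, which is not enough to rebuild a `Trials` object. A minimal sketch of one way to persist and reload the full object (an assumption on my part, not code from the original notebook, using `pickle`):

```python
import pickle
from hyperopt import fmin, tpe

# Save the complete Trials object (assumption: pickling it, rather than the JSON dump above)
with open('trials.pkl', 'wb') as f:
    pickle.dump(trials, f)

# Later, reload it and hand it straight back to fmin to keep searching
with open('trials.pkl', 'rb') as f:
    trials = pickle.load(f)

best = fmin(fn=objective, space=space, algo=tpe.suggest,
            trials=trials, max_evals=len(trials.trials) + 10)
```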
###Code
# MAX_EVALS = 1000
# # Create a new file and open a connection
# OUT_FILE = 'bayesian_trials_1000.csv'
# of_connection = open(OUT_FILE, 'w')
# writer = csv.writer(of_connection)
# # Write column names
# headers = ['loss', 'hyperparameters', 'iteration', 'runtime', 'score']
# writer.writerow(headers)
# of_connection.close()
# # Record results
# trials = Trials()
# global ITERATION
# ITERATION = 0
# best = fmin(fn = objective, space = space, algo = tpe.suggest,
# trials = trials, max_evals = MAX_EVALS)
# # Sort the trials with lowest loss (highest AUC) first
# trials_dict = sorted(trials.results, key = lambda x: x['loss'])
# print('Finished, best results')
# print(trials_dict[:1])
# # Save the trial results
# with open('trials.json', 'w') as f:
# f.write(json.dumps(trials_dict))
###Output
_____no_output_____
###Markdown
Search Results Next we will go through the results from 1000 search iterations on the reduced dataset. We will look at the scores, the distribution of hyperparameter values tried, the evolution of values over time, and compare the hyperparameters values to those from random search.After examining the search results, we will use the optimized hyperparameters (at least optimized for the smaller dataset) to make predictions on a full dataset. These can then be submitted to the competition to see how well the methods do on a small sample of the data. Learning Rate Distribution
###Code
bayes_results = pd.read_csv('../input/home-credit-model-tuning/bayesian_trials_1000.csv').sort_values('score', ascending = False).reset_index()
random_results = pd.read_csv('../input/home-credit-model-tuning/random_search_trials_1000.csv').sort_values('score', ascending = False).reset_index()
random_results['loss'] = 1 - random_results['score']
bayes_params = evaluate(bayes_results, name = 'Bayesian')
random_params = evaluate(random_results, name = 'random')
###Output
_____no_output_____
###Markdown
We can see that the Bayesian search did worse in cross validation but then found hyperparameter values that did better on the test set! We will have to see if these results translate to the actual competition data. First though, we can get all the scores in a dataframe in order to plot them over the course of training.
###Code
# Dataframe of just scores
scores = pd.DataFrame({'ROC AUC': random_params['score'], 'iteration': random_params['iteration'], 'search': 'Random'})
scores = scores.append(pd.DataFrame({'ROC AUC': bayes_params['score'], 'iteration': bayes_params['iteration'], 'search': 'Bayesian'}))
scores['ROC AUC'] = scores['ROC AUC'].astype(np.float32)
scores['iteration'] = scores['iteration'].astype(np.int32)
scores.head()
###Output
_____no_output_____
###Markdown
We can also find the best scores for plotting the best hyperparameter values.
###Code
best_random_params = random_params.iloc[random_params['score'].idxmax(), :].copy()
best_bayes_params = bayes_params.iloc[bayes_params['score'].idxmax(), :].copy()
###Output
_____no_output_____
###Markdown
Below is the code showing the progress of scores versus the iteration. For random search we do not expect to see a pattern, but for Bayesian optimization, we expect to see the scores increasing with the search as more promising hyperparameter values are tried.
###Code
# Plot of scores over the course of searching
sns.lmplot('iteration', 'ROC AUC', hue = 'search', data = scores, size = 8);
plt.scatter(best_bayes_params['iteration'], best_bayes_params['score'], marker = '*', s = 400, c = 'orange', edgecolor = 'k')
plt.scatter(best_random_params['iteration'], best_random_params['score'], marker = '*', s = 400, c = 'blue', edgecolor = 'k')
plt.xlabel('Iteration'); plt.ylabel('ROC AUC'); plt.title("Validation ROC AUC versus Iteration");
###Output
_____no_output_____
###Markdown
Sure enough, we see that the Bayesian hyperparameter optimization scores increase as the search continues. This shows that more promising values (at least in cross validation on the reduced dataset) were tried as the search progressed. Random search does record a better score, but the results do not improve over the course of the search. In this case, it looks like if we were to continue searching with Bayesian optimization, we would eventually reach higher scores on the cross validation data. For fun, we can make the same plot in Altair.
###Code
import altair as alt
alt.renderers.enable('notebook')
c = alt.Chart(scores).mark_circle().encode(x = 'iteration', y = alt.Y('ROC AUC',
scale = alt.Scale(domain = [0.64, 0.74])),
color = 'search')
c.title = 'Validation ROC AUC vs Iteration'
c
###Output
_____no_output_____
###Markdown
Same chart, just in a different library for practice! Learning Rate Distribution Next we can start plotting the distributions of hyperparameter values searched. We expect random search to align with the search domain, while the Bayesian hyperparameter optimization should tend to focus on more promising values, wherever those happen to be in the search domain. The dashed vertical lines indicate the "optimal" value of the hyperparameter.
###Code
plt.figure(figsize = (20, 8))
plt.rcParams['font.size'] = 18
# Density plots of the learning rate distributions
sns.kdeplot(learning_rate_dist, label = 'Sampling Distribution', linewidth = 4)
sns.kdeplot(random_params['learning_rate'], label = 'Random Search', linewidth = 4)
sns.kdeplot(bayes_params['learning_rate'], label = 'Bayes Optimization', linewidth = 4)
plt.vlines([best_random_params['learning_rate'], best_bayes_params['learning_rate']],
ymin = 0.0, ymax = 50.0, linestyles = '--', linewidth = 4, colors = ['orange', 'green'])
plt.legend()
plt.xlabel('Learning Rate'); plt.ylabel('Density'); plt.title('Learning Rate Distribution');
###Output
_____no_output_____
###Markdown
Distribution of all Numeric Hyperparameters We can make the same chart now for all of the hyperparameters. For each setting, we plot the values tried by random search and Bayesian optimization, as well as the sampling distribution.
###Code
# Iterate through each hyperparameter
for i, hyper in enumerate(random_params.columns):
if hyper not in ['class_weight', 'n_estimators', 'score', 'is_unbalance',
'boosting_type', 'iteration', 'subsample', 'metric', 'verbose', 'loss', 'learning_rate']:
plt.figure(figsize = (14, 6))
# Plot the random search distribution and the bayes search distribution
if hyper != 'loss':
sns.kdeplot([sample(space[hyper]) for _ in range(1000)], label = 'Sampling Distribution', linewidth = 4)
sns.kdeplot(random_params[hyper], label = 'Random Search', linewidth = 4)
sns.kdeplot(bayes_params[hyper], label = 'Bayes Optimization', linewidth = 4)
plt.vlines([best_random_params[hyper], best_bayes_params[hyper]],
ymin = 0.0, ymax = 10.0, linestyles = '--', linewidth = 4, colors = ['orange', 'green'])
plt.legend(loc = 1)
plt.title('{} Distribution'.format(hyper))
plt.xlabel('{}'.format(hyper)); plt.ylabel('Density');
plt.show();
###Output
_____no_output_____
###Markdown
Evolution of Search An interesting series of plots to make is the evolution of the hyperparameters over the search. This can show us what values the Bayesian optimization tended to focus on. The average cross validation score continued to improve throughout Bayesian optimization, indicating that "more promising" values of the hyperparameters were being evaluated and maybe a longer search would prove useful (or there could be a plateau in the validation scores with a longer search).
###Code
fig, axs = plt.subplots(1, 4, figsize = (24, 6))
i = 0
# Plot of four hyperparameters
for i, hyper in enumerate(['colsample_bytree', 'learning_rate', 'min_child_samples', 'num_leaves']):
# Scatterplot
sns.regplot('iteration', hyper, data = bayes_params, ax = axs[i])
axs[i].scatter(best_bayes_params['iteration'], best_bayes_params[hyper], marker = '*', s = 200, c = 'k')
axs[i].set(xlabel = 'Iteration', ylabel = '{}'.format(hyper), title = '{} over Search'.format(hyper));
plt.tight_layout()
fig, axs = plt.subplots(1, 4, figsize = (24, 6))
i = 0
# Scatterplot of next three hyperparameters
for i, hyper in enumerate(['reg_alpha', 'reg_lambda', 'subsample_for_bin', 'subsample']):
sns.regplot('iteration', hyper, data = bayes_params, ax = axs[i])
axs[i].scatter(best_bayes_params['iteration'], best_bayes_params[hyper], marker = '*', s = 200, c = 'k')
axs[i].set(xlabel = 'Iteration', ylabel = '{}'.format(hyper), title = '{} over Search'.format(hyper));
plt.tight_layout()
###Output
_____no_output_____
###Markdown
The final plot is just a bar chart of the `boosting_type`.
###Code
fig, axs = plt.subplots(1, 2, sharey = True, sharex = True)
# Bar plots of boosting type
random_params['boosting_type'].value_counts().plot.bar(ax = axs[0], figsize = (14, 6), color = 'orange', title = 'Random Search Boosting Type')
bayes_params['boosting_type'].value_counts().plot.bar(ax = axs[1], figsize = (14, 6), color = 'green', title = 'Bayes Optimization Boosting Type');
###Output
_____no_output_____
###Markdown
The Bayes optimization spent many more iterations using the `dart` boosting type than would be expected from a uniform distribution. We can use information such as this in further hyperparameter tuning. For example, we could use the distributions from Bayesian hyperparameter optimization to make a more focused hyperparameter grid for grid or even random search. We can also make this chart in Altair for practice.
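Before the Altair version of the chart, here is a rough sketch of that "more focused grid" idea (not part of the original notebook; it assumes the `bayes_params` dataframe built earlier, and the chosen columns and ranges are only illustrative):

```python
import numpy as np

# Narrow the learning rate range to the middle 90% of values that the
# Bayesian search actually explored, and favor the boosting type it preferred.
lr_low, lr_high = np.percentile(bayes_params['learning_rate'], [5, 95])
focused_grid = {
    'boosting_type': ['dart'],                               # favored by the Bayesian search
    'learning_rate': list(np.linspace(lr_low, lr_high, 10)),
    'num_leaves': list(range(20, 151, 10)),                  # original range kept, for comparison
}
focused_grid
```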
###Code
bars = alt.Chart(random_params, width = 500).mark_bar(color = 'orange').encode(x = 'boosting_type', y = alt.Y('count()', scale = alt.Scale(domain = [0, 400])))
text = bars.mark_text(size = 20, align = 'center', baseline = 'bottom').encode(text = 'count()')
bars + text
bars = alt.Chart(bayes_params, width = 500).mark_bar(color = 'green').encode(x = 'boosting_type', y = alt.Y('count()', scale = alt.Scale(domain = [0, 800])))
text = bars.mark_text(size = 20, align = 'center', baseline = 'bottom').encode(text = 'count()')
bars + text
###Output
_____no_output_____
###Markdown
Applied to Full Dataset Now, we can take the best hyperparameters found from 1000 iterations of Bayesian hyperparameter optimization on the smaller dataset and apply these to a full dataset of features from the [Updated 0.792 LB LightGBM with Simple Features](https://www.kaggle.com/jsaguiar/updated-0-792-lb-lightgbm-with-simple-features) kernel. The best hyperparameters from the smaller dataset will not necessarily be the best on the full dataset (because the small dataset does not perfectly represent the entire data), but we can at least try them out. We will train a model using the optimal hyperparameters from Bayesian optimization, using early stopping to determine the number of estimators.
###Code
# Read in full dataset
train = pd.read_csv('../input/home-credit-simple-featuers/simple_features_train.csv')
test = pd.read_csv('../input/home-credit-simple-featuers/simple_features_test.csv')
# Extract the test ids and train labels
test_ids = test['SK_ID_CURR']
train_labels = np.array(train['TARGET'].astype(np.int32)).reshape((-1, ))
train = train.drop(columns = ['SK_ID_CURR', 'TARGET'])
test = test.drop(columns = ['SK_ID_CURR'])
print('Training shape: ', train.shape)
print('Testing shape: ', test.shape)
random_results['hyperparameters'] = random_results['hyperparameters'].map(ast.literal_eval)
bayes_results['hyperparameters'] = bayes_results['hyperparameters'].map(ast.literal_eval)
###Output
_____no_output_____
###Markdown
Random Search on the Full Dataset
###Code
train_set = lgb.Dataset(train, label = train_labels)
hyperparameters = dict(**random_results.loc[0, 'hyperparameters'])
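# drop n_estimators so that early stopping in lgb.cv below determines it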
del hyperparameters['n_estimators']
# Cross validation with n_folds and early stopping
cv_results = lgb.cv(hyperparameters, train_set,
num_boost_round = 10000, early_stopping_rounds = 100,
metrics = 'auc', nfold = N_FOLDS)
print('The cross validation score on the full dataset for Random Search= {:.5f} with std: {:.5f}.'.format(
cv_results['auc-mean'][-1], cv_results['auc-stdv'][-1]))
print('Number of estimators = {}.'.format(len(cv_results['auc-mean'])))
###Output
_____no_output_____
###Markdown
Then we can make predictions on the test data. The predictions are saved to a csv file that can be submitted to the competition.
###Code
model = lgb.LGBMClassifier(n_estimators = len(cv_results['auc-mean']), **hyperparameters)
model.fit(train, train_labels)
preds = model.predict_proba(test)[:, 1]
submission = pd.DataFrame({'SK_ID_CURR': test_ids, 'TARGET': preds})
submission.to_csv('submission_random_search.csv', index = False)
###Output
_____no_output_____
###Markdown
Submitting these to the competition results in a score of __0.787__, which compares to the original score from the kernel of __0.792__. Bayesian Optimization on the Full Dataset
###Code
hyperparameters = dict(**bayes_results.loc[0, 'hyperparameters'])
del hyperparameters['n_estimators']
# Cross validation with n_folds and early stopping
cv_results = lgb.cv(hyperparameters, train_set,
num_boost_round = 10000, early_stopping_rounds = 100,
metrics = 'auc', nfold = N_FOLDS)
print('The cross validation score on the full dataset for Bayesian optimization = {:.5f} with std: {:.5f}.'.format(
cv_results['auc-mean'][-1], cv_results['auc-stdv'][-1]))
print('Number of estimators = {}.'.format(len(cv_results['auc-mean'])))
model = lgb.LGBMClassifier(n_estimators = len(cv_results['auc-mean']), **hyperparameters)
model.fit(train, train_labels)
preds = model.predict_proba(test)[:, 1]
submission = pd.DataFrame({'SK_ID_CURR': test_ids, 'TARGET': preds})
submission.to_csv('submission_bayesian_optimization.csv', index = False)
###Output
_____no_output_____ |
docs/source/notebooks/zipline_algo_example.ipynb | ###Markdown
Zipline Algorithm Here's an example where we run an algorithm with zipline, then produce tear sheets for that algorithm. Imports & Settings Import pyfolio and zipline, and ingest the pricing data for backtesting. You may have to install [Zipline](https://zipline.ml4trading.io/) first; you can do so using either:
###Code
# !pip install zipline-reloaded
###Output
_____no_output_____
###Markdown
or:
###Code
# !conda install -c ml4t zipline-reloaded
import pyfolio as pf
%matplotlib inline
# silence warnings
import warnings
warnings.filterwarnings('ignore')
import zipline
%load_ext zipline
###Output
_____no_output_____
###Markdown
Ingest Zipline Bundle If you have not yet downloaded [data for Zipline](https://zipline.ml4trading.io/bundles.html), you need to do so first (uncomment and execute the following cell):
###Code
# !zipline ingest
###Output
_____no_output_____
###Markdown
Run Zipline algorithm This algorithm can also be adjusted to execute a modified, or completely different, trading strategy.
###Code
%%zipline --start 2004-1-1 --end 2010-1-1 -o results.pickle --no-benchmark
# Zipline trading algorithm
# Taken from zipline.examples.olmar
import numpy as np
from zipline.finance import commission, slippage
STOCKS = ['AMD', 'CERN', 'COST', 'DELL', 'GPS', 'INTC', 'MMM']
# On-Line Portfolio Moving Average Reversion
# More info can be found in the corresponding paper:
# http://icml.cc/2012/papers/168.pdf
def initialize(algo, eps=1, window_length=5):
algo.stocks = STOCKS
algo.sids = [algo.symbol(symbol) for symbol in algo.stocks]
algo.m = len(algo.stocks)
algo.price = {}
algo.b_t = np.ones(algo.m) / algo.m
algo.eps = eps
algo.window_length = window_length
algo.set_commission(commission.PerShare(cost=0))
algo.set_slippage(slippage.FixedSlippage(spread=0))
def handle_data(algo, data):
m = algo.m
x_tilde = np.zeros(m)
b = np.zeros(m)
# find relative moving average price for each asset
mavgs = data.history(algo.sids, 'price', algo.window_length, '1d').mean()
for i, sid in enumerate(algo.sids):
price = data.current(sid, "price")
# Relative mean deviation
x_tilde[i] = mavgs[sid] / price
###########################
# Inside of OLMAR (algo 2)
x_bar = x_tilde.mean()
# market relative deviation
mark_rel_dev = x_tilde - x_bar
# Expected return with current portfolio
exp_return = np.dot(algo.b_t, x_tilde)
weight = algo.eps - exp_return
variability = (np.linalg.norm(mark_rel_dev)) ** 2
# test for divide-by-zero case
if variability == 0.0:
step_size = 0
else:
step_size = max(0, weight / variability)
b = algo.b_t + step_size * mark_rel_dev
b_norm = simplex_projection(b)
np.testing.assert_almost_equal(b_norm.sum(), 1)
rebalance_portfolio(algo, data, b_norm)
# update portfolio
algo.b_t = b_norm
def rebalance_portfolio(algo, data, desired_port):
# rebalance portfolio
for i, sid in enumerate(algo.sids):
algo.order_target_percent(sid, desired_port[i])
def simplex_projection(v, b=1):
"""Projection vectors to the simplex domain
Implemented according to the paper: Efficient projections onto the
l1-ball for learning in high dimensions, John Duchi, et al. ICML 2008.
Implementation Time: 2011 June 17 by Bin@libin AT pmail.ntu.edu.sg
Optimization Problem: min_{w}\| w - v \|_{2}^{2}
s.t. sum_{i=1}^{m}=z, w_{i}\geq 0
Input: A vector v \in R^{m}, and a scalar z > 0 (default=1)
Output: Projection vector w
:Example:
>>> proj = simplex_projection([.4 ,.3, -.4, .5])
>>> print(proj)
array([ 0.33333333, 0.23333333, 0. , 0.43333333])
>>> print(proj.sum())
1.0
Original matlab implementation: John Duchi ([email protected])
Python-port: Copyright 2013 by Thomas Wiecki ([email protected]).
"""
v = np.asarray(v)
p = len(v)
# Sort v into u in descending order
v = (v > 0) * v
u = np.sort(v)[::-1]
sv = np.cumsum(u)
rho = np.where(u > (sv - b) / np.arange(1, p + 1))[0][-1]
theta = np.max([0, (sv[rho] - b) / (rho + 1)])
w = (v - theta)
w[w < 0] = 0
return w
###Output
_____no_output_____
###Markdown
Extract metrics Get the returns, positions, and transactions from the zipline backtest object.
###Code
import pandas as pd
results = pd.read_pickle('results.pickle')
returns, positions, transactions = pf.utils.extract_rets_pos_txn_from_zipline(results)
###Output
_____no_output_____
###Markdown
Single plot example Make one plot of the top 5 drawdown periods.
###Code
pf.plot_drawdown_periods(returns, top=5).set_xlabel('Date');
###Output
_____no_output_____
###Markdown
Full tear sheet example Create a full tear sheet for our algorithm. As an example, set the live start date to something arbitrary.
###Code
pf.create_full_tear_sheet(returns, positions=positions, transactions=transactions,
live_start_date='2009-10-22', round_trips=True)
###Output
_____no_output_____
###Markdown
Suppressing symbol output When sharing tear sheets it might be undesirable to display which symbols were used by a strategy. To suppress these in the tear sheet you can pass `hide_positions=True`.
###Code
pf.create_full_tear_sheet(returns, positions=positions, transactions=transactions,
live_start_date='2009-10-22', hide_positions=True)
###Output
Entire data start date: 2004-01-02
Entire data end date: 2009-12-31
In-sample months: 69
Out-of-sample months: 2
|
notebooks/01g_Phase_1_ML_FullFeatureSet_ak.ipynb | ###Markdown
**This is a preliminary ML script to determine the best models for our project data.**
###Code
%matplotlib inline
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
###Output
_____no_output_____
###Markdown
**Import 2017 sample of 25,000 observations.** Note import warning: "Columns (29,30,39,40) have mixed types. Specify dtype option on import or set low_memory=False."
###Code
# Fetch the data if required
filepath = os.path.abspath(os.path.join( "..", "fixtures", "hmda2017sample.csv"))
DATA = pd.read_csv(filepath, low_memory=False)
DATA.describe(include='all')
###Output
_____no_output_____
###Markdown
**Drop features which are redundant or have mostly missing data + Drop first column. ALSO: drop msamd_name and census_tract_number, as they make each bin of data too granular**
###Code
DATA = DATA.drop(DATA.columns[0], axis=1)
DATA = DATA.drop(['rate_spread',
'state_name',
'sequence_number',
'respondent_id',
'msamd_name',
'edit_status_name',
'denial_reason_name_3',
'denial_reason_name_2',
'denial_reason_name_1',
'co_applicant_race_name_5',
'co_applicant_race_name_4',
'co_applicant_race_name_3',
'co_applicant_race_name_2',
'census_tract_number',
'application_date_indicator',
'applicant_race_name_5',
'applicant_race_name_4',
'applicant_race_name_3',
'applicant_race_name_2',
'agency_name'],
axis=1)
###Output
_____no_output_____
###Markdown
**Write the initial script using a subset of features which are already int or float, plus the target** **IDEAS: discard file closed, call 'application approved but not accepted' a 1 or discard, discard 'application withdrawn by applicant'. Concern about overfitting if we leave too much stuff in.**
###Code
DATA['action_taken'] = DATA.action_taken_name.apply(lambda x: 1 if x in ['Loan purchased by the institution', 'Loan originated'] else 0)
pd.crosstab(DATA['action_taken_name'],DATA['action_taken'], margins=True)
###Output
_____no_output_____
###Markdown
**ACTION: look at imputing income using hud household median income rather than mean**
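A possible sketch of that idea (my guess at what is meant, not part of the original script; it assumes `hud_median_family_income` is in dollars while `applicant_income_000s` is in thousands):

```python
# Hypothetical alternative to the mean imputation used below: fill missing applicant
# income with the tract's HUD median family income, converted to $000s.
DATA['applicant_income_000s'] = DATA['applicant_income_000s'].fillna(
    DATA['hud_median_family_income'] / 1000.0  # assumption: HUD figure is in dollars
)
```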
###Code
DATA_targ_numeric = DATA[['action_taken',
'tract_to_msamd_income',
'population',
'minority_population',
'number_of_owner_occupied_units',
'number_of_1_to_4_family_units',
'loan_amount_000s',
'hud_median_family_income',
'applicant_income_000s'
]]
#resolve missing values in applicant_income_000s
DATA_targ_numeric.fillna(DATA_targ_numeric.mean(), inplace=True)
DATA_targ_numeric.info()
DATA_basefile = DATA_targ_numeric
###Output
C:\Users\akx00\Anaconda3\lib\site-packages\pandas\core\generic.py:6130: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._update_inplace(new_data)
###Markdown
**Use one-hot encoding via Pandas, concatenate to the rest of the data frame.** Reference link: https://stackoverflow.com/questions/37292872/how-can-i-one-hot-encode-in-python
###Code
DATA = DATA.drop(['action_taken_name'], axis=1)
DATA.columns
non_categorical_features = ['action_taken',
'tract_to_msamd_income',
'population',
'minority_population',
'number_of_owner_occupied_units',
'number_of_1_to_4_family_units',
'loan_amount_000s',
'hud_median_family_income',
'applicant_income_000s'
]
for categorical_feature in list(DATA.columns):
if categorical_feature not in non_categorical_features:
DATA[categorical_feature] = DATA[categorical_feature].astype('category')
dummies = pd.get_dummies(DATA[categorical_feature], prefix=categorical_feature)
DATA_basefile = pd.concat([DATA_basefile, dummies], axis=1)
DATA_basefile.info(verbose=True)
tofilepath = os.path.abspath(os.path.join( "..", "fixtures", "hmda2017sample_alltest.csv"))
DATA_basefile.to_csv(tofilepath, index=False)
# Determine the shape of the data
print("{} instances with {} features\n".format(*DATA_basefile.shape))
# Determine the frequency of each class
print(pd.crosstab(index=DATA['action_taken'], columns="count"))
###Output
25000 instances with 1255 features
col_0 count
action_taken
0 8673
1 16327
###Markdown
Classification
###Code
from sklearn import metrics
from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression, SGDClassifier
from sklearn.svm import LinearSVC, NuSVC, SVC
from sklearn import tree
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import BaggingClassifier, ExtraTreesClassifier, RandomForestClassifier
from yellowbrick.classifier import ClassificationReport
X = DATA_basefile[DATA_basefile.columns[1:]]
y = DATA_basefile['action_taken']
def score_model(X, y, estimator, **kwargs):
"""
Test various estimators.
"""
#NOTE: for capstone add X_test, X_train, Y_test, Y_train for capstone code.
#Bake into model to see if it does cross validation, if not there do CV.
scores = {'precision':[], 'recall':[], 'accuracy':[], 'f1':[]}
# Instantiate the classification model and visualizer
model.fit(X, y, **kwargs)
expected = y
predicted = model.predict(X)
# Append our scores to the tracker
scores['precision'].append(metrics.precision_score(expected, predicted, average="binary"))
scores['recall'].append(metrics.recall_score(expected, predicted, average="binary"))
scores['accuracy'].append(metrics.accuracy_score(expected, predicted))
scores['f1'].append(metrics.f1_score(expected, predicted, average="binary"))
# Compute and return F1 (harmonic mean of precision and recall), Precision, Recall, Accuracy
print("{}".format(estimator.__class__.__name__))
print("Validation scores are as follows:\n")
print(pd.DataFrame(scores).mean())
# Try them all!
models = [
GaussianNB(),
MultinomialNB(),
BernoulliNB(),
tree.DecisionTreeClassifier(),
LinearDiscriminantAnalysis(),
LogisticRegression(solver='lbfgs', max_iter=6000),
LogisticRegressionCV(cv=3, max_iter=6000),
BaggingClassifier(),
ExtraTreesClassifier(n_estimators=100),
RandomForestClassifier(n_estimators=100)
]
for model in models:
score_model(X, y, model)
svc_models = [
LinearSVC(max_iter=6000)
]
for model in svc_models:
score_model(X, y, model)
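# Hedged sketch (not in the original notebook): the NOTE inside score_model above
# suggests a held-out split or cross validation instead of scoring on the training data.
# One minimal way to sanity-check a single model with 3-fold cross validation:
from sklearn.model_selection import cross_val_score
cv_f1 = cross_val_score(LogisticRegression(solver='lbfgs', max_iter=6000),
                        X, y, cv=3, scoring='f1')
print('Example 3-fold CV F1 (LogisticRegression): {:.3f}'.format(cv_f1.mean()))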
def visualize_model(X, y, estimator):
"""
Test various estimators.
"""
# Instantiate the classification model and visualizer
visualizer = ClassificationReport(
model, classes=[1,0],
cmap="Blues", size=(600, 360)
)
visualizer.fit(X, y)
visualizer.score(X, y)
visualizer.poof()
for model in models:
visualize_model(X, y, model)
###Output
_____no_output_____ |
examples/usage_dataframe.ipynb | ###Markdown
Example Usage for DataFrame========================
###Code
# remove comment to use latest development version
import sys; sys.path.insert(0, '../')
# import libraries
import raccoon as rc
###Output
_____no_output_____
###Markdown
Initialize----------
###Code
# empty DataFrame
df = rc.DataFrame()
df
# with columns and indexes but no data
df = rc.DataFrame(columns=['a', 'b', 'c'], index=[1, 2, 3])
df
# with data
df = rc.DataFrame(data={'a': [1, 2, 3], 'b': [4, 5, 6]}, index=[10, 11, 12], columns=['a', 'b'])
df
###Output
_____no_output_____
###Markdown
Print-----
###Code
df.print()
print(df)
###Output
index a b
------- --- ---
10 1 4
11 2 5
12 3 6
###Markdown
Setters and Getters-------------------
###Code
# columns
df.columns
df.columns = ['first', 'second']
print(df)
# columns can be renamed with a dict()
df.rename_columns({'second': 'b', 'first': 'a'})
df.columns
# index
df.index
#indexes can be any non-repeating unique values
df.index = ['apple', 'pear', 7.7]
df.print()
df.index = [10, 11, 12]
print(df)
# the index can also have a name, by default it is "index"
df.index_name
df.index_name = 'units'
df.index_name
# data is a shallow copy, be careful on how this is used
df.index_name = 'index'
df.data
###Output
_____no_output_____
###Markdown
Select Index------------
###Code
df.select_index(11)
###Output
_____no_output_____
###Markdown
Set Values----------
###Code
# set a single cell
df.set(10, 'a', 100)
print(df)
# setting a value outside the current range creates a new row and/or column. Can also use [] for setting
df[13, 'c'] = 9
df.print()
# set column
df['b'] = 55
print(df)
# set a subset of column
df[[10, 12], 'b'] = 66
print(df)
# using boolean list
df.set([True, False, True, False], 'b', [88, 99])
print(df)
# setting with slices
df[12:13, 'a'] = 33
print(df)
df[10:12, 'c'] = [1, 2, 3]
print(df)
# append a row, DANGEROUS as there is no validation checking, but can be used for speed
df.append_row(14, {'a': 44, 'c': 100, 'd': 99})
print(df)
# append rows, again use caution
df.append_rows([15, 16], {'a': [55, 56], 'd': [100,101]})
print(df)
###Output
index a b c d
------- --- --- --- ---
10 100 88 1
11 2 55 2
12 33 99 3
13 33 55 9
14 44 100 99
15 55 100
16 56 101
###Markdown
Get Values----------
###Code
# get a single cell
df[10, 'a']
# get an entire column
df['c'].print()
# get list of columns
df[['a', 'c']].print()
# get subset of the index
df[[11, 12, 13], 'b'].print()
# get using slices
df[11:13, 'b'].print()
# get a matrix
df[10:11, ['a', 'c']].print()
# get a column, return as a list
df.get(columns='a', as_list=True)
# get a row and return as a dictionary
df.get_columns(index=13, columns=['a', 'b'], as_dict=True)
###Output
_____no_output_____
###Markdown
Set and Get by Location-----------------------Locations are the index of the index, in other words the index locations from 0...len(index)
###Code
# get a single cell
df.get_location(2, 'a')
# get an entire row when the columns is None
print(df.get_location(2))
print(df.get_location(0, ['b', 'c'], as_dict=True))
df.get_location(-1).print()
df.get_locations(locations=[0, 2]).print()
df.set_locations(locations=[0, 2], column='a', values=-9)
df.print()
###Output
index a b c d
------- --- --- --- ---
10 -9 88 1
11 2 55 2
12 -9 99 3
13 33 55 9
14 44 100 99
15 55 100
16 56 101
###Markdown
Head and Tail-------------
###Code
df.head(2).print()
df.tail(2).print()
###Output
index a b c d
------- --- --- --- ---
15 55 100
16 56 101
###Markdown
Delete columns and rows------------------------
###Code
df.delete_rows([10, 13])
print(df)
df.delete_columns('b')
print(df)
###Output
index a c d
------- --- --- ---
11 2 2
12 -9 3
14 44 100 99
15 55 100
16 56 101
###Markdown
Convert-------
###Code
# return a dict
df.to_dict()
# exclude the index
df.to_dict(index=False)
# return an OrderedDict()
df.to_dict(ordered=True)
# return a list of just one column
df['c'].to_list()
# convert to JSON
string = df.to_json()
print(string)
# construct DataFrame from JSON
df_from_json = rc.DataFrame.from_json(string)
print(df_from_json)
###Output
index a c d
------- --- --- ---
11 2 2
12 -9 3
14 44 100 99
15 55 100
16 56 101
###Markdown
Sort by Index and Column------------------------
###Code
df = rc.DataFrame({'a': [4, 3, 2, 1], 'b': [6, 7, 8, 9]}, index=[25, 24, 23, 22])
print(df)
# sort by index. Sorts are inplace
df.sort_index()
print(df)
# sort by column
df.sort_columns('b')
print(df)
# sort by column in reverse order
df.sort_columns('b', reverse=True)
print(df)
# sorting with a key function is available, see tests for examples
###Output
_____no_output_____
###Markdown
Append------
###Code
df1 = rc.DataFrame({'a': [1, 2], 'b': [5, 6]}, index=[1, 2])
df1.print()
df2 = rc.DataFrame({'b': [7, 8], 'c': [11, 12]}, index=[3, 4])
print(df2)
df1.append(df2)
print(df1)
###Output
index a b c
------- --- --- ---
1 1 5
2 2 6
3 7 11
4 8 12
###Markdown
Math Methods------------
###Code
df = rc.DataFrame({'a': [1, 2, 3], 'b': [2, 8, 9]})
# test for equality
df.equality('a', value=3)
# all math methods can operate on a subset of the index
df.equality('b', indexes=[1, 2], value=2)
# add two columns
df.add('a', 'b')
# subtract
df.subtract('b', 'a')
# multiply
df.multiply('a', 'b', [0, 2])
# divide
df.divide('b', 'a')
###Output
_____no_output_____
###Markdown
Multi-Index-----------Raccoon does not have true hierarchical multi-index capabilities like Pandas, but attempts to mimic some of the capabilities with the use of tuples as the index. Raccoon does not provide any checking to make sure the indexes are all the same length or any other integrity checking.
###Code
tuples = [('a', 1, 3), ('a', 1, 4), ('a', 2, 3), ('b', 1, 4), ('b', 2, 1), ('b', 3, 3)]
df = rc.DataFrame({'a': [1, 2, 3, 4, 5, 6]}, index=tuples)
print(df)
###Output
index a
----------- ---
('a', 1, 3) 1
('a', 1, 4) 2
('a', 2, 3) 3
('b', 1, 4) 4
('b', 2, 1) 5
('b', 3, 3) 6
###Markdown
The `select_index` method works with tuples by allowing a wild card entry (`None` in the examples below) to match any value in that position.
###Code
compare = ('a', None, None)
df.select_index(compare)
compare = ('a', None, 3)
df.select_index(compare, 'boolean')
compare = (None, 2, None)
df.select_index(compare, 'value')
compare = (None, None, 3)
df.select_index(compare, 'value')
compare = (None, None, None)
df.select_index(compare)
###Output
_____no_output_____
###Markdown
Reset Index-----------
###Code
df = rc.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]}, columns=['a', 'b'])
print(df)
df.reset_index()
df
df = rc.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]}, columns=['a', 'b'], index=['x', 'y', 'z'], index_name='jelo')
print(df)
df.reset_index()
print(df)
df = rc.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]}, columns=['a', 'b'],
index=[('a', 10, 'x'), ('b', 11, 'y'), ('c', 12, 'z')], index_name=('melo', 'helo', 'gelo'))
print(df)
df.reset_index()
print(df)
df = rc.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]}, columns=['a', 'b'], index=['x', 'y', 'z'], index_name='jelo')
print(df)
df.reset_index(drop=True)
print(df)
###Output
index a b
------- --- ---
0 1 4
1 2 5
2 3 6
###Markdown
Iterators---------
###Code
df = rc.DataFrame({'a': [1, 2, 'c'], 'b': [5, 6, 'd']}, index=[1, 2, 3])
for row in df.iterrows():
print(row)
for row in df.itertuples():
print(row)
###Output
Raccoon(index=1, a=1, b=5)
Raccoon(index=2, a=2, b=6)
Raccoon(index=3, a='c', b='d')
###Markdown
Sorted DataFrames-----------------DataFrames will be set to sorted by default if no index is given at initialization. If an index is given at initialization then the parameter `sort` must be set to `True`.
###Code
df = rc.DataFrame({'a': [3, 5, 4], 'b': [6, 8, 7]}, index=[12, 15, 14], sort=True)
###Output
_____no_output_____
###Markdown
When `sort=True` on initialization the data will be sorted by index to start
###Code
df.print()
df[16, 'b'] = 9
print(df)
df.set(indexes=13, values={'a': 3.5, 'b': 6.5})
print(df)
###Output
index a b
------- --- ---
12 3 6
13 3.5 6.5
14 4 7
15 5 8
16 9
|
SD202/sql-intro-master/sql-intro-4.ipynb | ###Markdown
Create the tables for this section.
###Code
%load_ext sql
# Connect to an empty SQLite database
%sql sqlite://
%%sql
DROP TABLE IF EXISTS Purchase;
-- Create tables
CREATE TABLE Purchase (
Product VARCHAR(255),
Date DATE,
Price FLOAT,
Quantity INT
);
-- Insert tuples
INSERT INTO Purchase VALUES ('Bagel', '10/21', 1, 20);
INSERT INTO Purchase VALUES ('Bagel', '10/25', 1.5, 20);
INSERT INTO Purchase VALUES ('Banana', '10/3', 0.5, 10);
INSERT INTO Purchase VALUES ('Banana', '10/10', 1, 10);
SELECT * FROM Purchase;
###Output
Done.
Done.
Done.
1 rows affected.
1 rows affected.
1 rows affected.
Done.
###Markdown
Aggregation Operations

SQL supports several __aggregation__ operations:

* SUM, COUNT, MIN, MAX, AVG
* Except COUNT, all aggregations apply to a single attribute

COUNT

Syntax

```mysql
SELECT COUNT(column_name)
FROM table_name
WHERE condition;
```

> __Example:__ Find the number of purchases

| Product | Date  | Price | Quantity |
|---------|-------|-------|----------|
| Bagel   | 10/21 | 1     | 20       |
| Bagel   | 10/25 | 1.5   | 20       |
| Banana  | 10/3  | 0.5   | 10       |
| Banana  | 10/10 | 1     | 10       |
###Code
%%sql
SELECT COUNT(Product)
FROM Purchase;
###Output
Done.
###Markdown
* Count applies to duplicates, unless otherwise stated * Same as ```COUNT(*)```. Why? > __Example:__ Find the number of different product purchases* Use DISTINCT
###Code
%%sql
SELECT COUNT(DISTINCT Product)
FROM Purchase;
###Output
Done.
###Markdown
SUM

Syntax

```mysql
SELECT SUM(column_name)
FROM table_name
WHERE condition;
```

> __Example:__ How many units of all products have been purchased?

| Product | Date  | Price | Quantity |
|---------|-------|-------|----------|
| Bagel   | 10/21 | 1     | 20       |
| Bagel   | 10/25 | 1.5   | 20       |
| Banana  | 10/3  | 0.5   | 10       |
| Banana  | 10/10 | 1     | 10       |
###Code
%%sql
SELECT SUM(Quantity)
FROM Purchase;
###Output
Done.
###Markdown
> __Example:__ How many Bagels have been purchased?
###Code
%%sql
SELECT SUM(Quantity)
FROM Purchase
WHERE Product = 'Bagel'
###Output
Done.
###Markdown
AVG

Syntax

```mysql
SELECT AVG(column_name)
FROM table_name
WHERE condition;
```

> __Example:__ What is the average sell price of Bagels?

| Product | Date  | Price | Quantity |
|---------|-------|-------|----------|
| Bagel   | 10/21 | 1     | 20       |
| Bagel   | 10/25 | 1.5   | 20       |
| Banana  | 10/3  | 0.5   | 10       |
| Banana  | 10/10 | 1     | 10       |
###Code
%%sql
SELECT AVG(Price)
FROM Purchase
WHERE Product = 'Bagel';
###Output
Done.
###Markdown
Simple Aggregations> __Example:__ Total earnings from Bagels sold?
###Code
%%sql
SELECT SUM(Price * Quantity)
FROM Purchase
WHERE Product = 'Bagel';
###Output
Done.
###Markdown
GROUP BY

Used with aggregate functions (COUNT, MAX, MIN, SUM, AVG) to group the result-set by one or more columns.

Syntax

```mysql
SELECT column_name(s)
FROM table_name
WHERE condition
GROUP BY column_name(s)
[ORDER BY column_name(s)];
```

> __Example:__ Find total sales after 10/1 per product

| Product | Date  | Price | Quantity |
|---------|-------|-------|----------|
| Bagel   | 10/21 | 1     | 20       |
| Bagel   | 10/25 | 1.5   | 20       |
| Banana  | 10/3  | 0.5   | 10       |
| Banana  | 10/10 | 1     | 10       |
###Code
%%sql
SELECT Product, SUM(price * quantity) AS TotalSales
FROM Purchase
WHERE Date > '10/1'
GROUP BY Product;
###Output
Done.
###Markdown
Grouping and Aggregation: Semantics of the Query __1.__ Compute the FROM and WHERE clauses
###Code
%%sql
SELECT *
FROM Purchase
WHERE Date > '10/1'
###Output
Done.
###Markdown
__2.__ Group attributes according to GROUP BY

|__Product__| __Date__ |__Price__|__Quantity__|
|:-------:|:--------:|:-----:|:--------:|
| Bagel  | 10/21/17 | 1   | 20 |
|        | 10/25/17 | 1.5 | 20 |
| Banana | 10/03/17 | 0.5 | 10 |
|        | 10/10/17 | 1   | 10 |

__Caution:__ SQL _only_ displays one row if no aggregation function is used
###Code
%%sql
SELECT *
FROM Purchase
WHERE Date > '10/1'
GROUP BY Product;
%%sql
SELECT Product, Count(Product)
FROM Purchase
WHERE Date > '10/1'
GROUP BY Product;
###Output
Done.
###Markdown
__3.__ Compute the SELECT clause: grouped attributes and aggregates
###Code
%%sql -- Find total sales after '10/1' per product
SELECT Product, SUM(price * quantity) AS TotalSales
FROM Purchase
WHERE Date > '10/1'
GROUP BY Product;
###Output
Done.
###Markdown
GROUP BY vs Nested Queries
###Code
%%sql
SELECT DISTINCT x.Product, (SELECT Sum(y.price*y.quantity)
FROM Purchase y
WHERE x.product = y.product
AND y.date > '10/1') AS TotalSales
FROM Purchase x
WHERE x.date > '10/1';
###Output
Done.
###Markdown
HAVING

* HAVING clauses contain conditions on __aggregates__
* WHERE clauses condition on __individual tuples__

Syntax

```mysql
SELECT column_name(s)
FROM table_name
WHERE condition
GROUP BY column_name(s)
HAVING condition
[ORDER BY column_name(s)];
```

> __Example:__ Same query as before, except that we consider only products with more than 30 units sold
###Code
%%sql
SELECT Product, SUM(price * quantity) AS TotalSales
FROM Purchase
WHERE Date > '10/1'
GROUP BY Product
HAVING SUM(Quantity) > 30;
###Output
Done.
###Markdown
Advanced\* Topics

In this section:

* Relational Division in SQL
* Nulls (revisited)
* Outer Joins

Relational Division in SQL

* Not supported as a primitive operator, but useful for expressing queries like:

> _"Find suppliers who sell the x parts..."_
> _"Find buyers who bought all products from a given category..."_

* Let $A$ have 2 fields, $x$ and $y$, $B$ have only field $y$

```mysql
A(x, y)
B(y)
```

* $A/B$ contains all $x$ tuples such that for every $y$ tuple in $B$, there is an $xy$ tuple in $A$
* Or: If the set of $y$ values associated with an $x$ value in $A$ contains all $y$ values in $B$, the $x$ value is in $A/B$.

Classic Option 1

```mysql
%%sql
SELECT T1.x
FROM A AS T1
WHERE NOT EXISTS(
    SELECT T2.y FROM B AS T2
    EXCEPT
    SELECT T3.y FROM A AS T3 WHERE T3.y=T1.y);
```

Classic Option 2 (without EXCEPT)

```mysql
%%sql
SELECT DISTINCT T1.x
FROM A AS T1
WHERE NOT EXISTS(SELECT T2.y FROM B AS T2
                 WHERE NOT EXISTS (SELECT T3.x FROM A AS T3
                                   WHERE T3.x=T1.x AND T3.y=T2.y));
```
###Code
%%sql
DROP TABLE IF EXISTS A;
-- Create tables
CREATE TABLE A (
x VARCHAR,
y VARCHAR);
DROP TABLE IF EXISTS B1;
-- Create tables
CREATE TABLE B1 (
y VARCHAR);
DROP TABLE IF EXISTS B2;
-- Create tables
CREATE TABLE B2 (
y VARCHAR);
DROP TABLE IF EXISTS B3;
-- Create tables
CREATE TABLE B3 (
y VARCHAR);
-- Insert tuples
INSERT INTO A VALUES ('x1', 'y1');
INSERT INTO A VALUES ('x1', 'y2');
INSERT INTO A VALUES ('x1', 'y3');
INSERT INTO A VALUES ('x1', 'y4');
INSERT INTO A VALUES ('x2', 'y1');
INSERT INTO A VALUES ('x2', 'y2');
INSERT INTO A VALUES ('x3', 'y2');
INSERT INTO A VALUES ('x4', 'y2');
INSERT INTO A VALUES ('x4', 'y4');
INSERT INTO B1 VALUES ('y2');
INSERT INTO B2 VALUES ('y2');
INSERT INTO B2 VALUES ('y4');
INSERT INTO B3 VALUES ('y1');
INSERT INTO B3 VALUES ('y2');
INSERT INTO B3 VALUES ('y4');
%%sql
SELECT * FROM A;
%%sql
SELECT * FROM B1;
%%sql -- Change bellow to perform: A/B1, A/B2, A/B3
SELECT DISTINCT T1.x
FROM A AS T1
WHERE NOT EXISTS(SELECT T2.y
FROM B1 AS T2
WHERE NOT EXISTS (SELECT T3.x
FROM A AS T3
WHERE T3.x=T1.x
AND T3.y=T2.y
)
);
###Output
Done.
###Markdown
Yet another option[“A Simpler (and Better) SQL Approach to Relational Division”](https://users.dcc.uchile.cl/~cgutierr/cursos/BD/divisionSQL.pdf)Journal of Information Systems Education, Vol. 13(2) Null Values* For _numerical operations_, NULL -> NULL: * If x is NULL then ```4*(3-x)/7``` is still NULL* For _boolean operations_, in SQL there are three values:```FALSE = 0UNKNOWN = 0.5TRUE = 1```* If x is NULL then ```x = “Joe”``` is UNKNOWN ```C1 AND C2 = min(C1, C2)C1 OR C2 = max(C1, C2)NOT C1 = 1 – C1```> __Example:__```mysqlSELECT *FROM PersonWHERE (age < 25) AND (height > 6 AND weight > 190);```Won’t return: age=20height=NULLweight=200__Rule in SQL:__ include only tuples that yield TRUE (1.0) > __Example:__ Unexpected behavior```mysqlSELECT *FROM PersonWHERE age = 25;```Some tuples from _Person_ are not included Test for NULL expliitly:* x IS NULL* x IS NOT NULL>```mysqlSELECT *FROM PersonWHERE age = 25 OR age IS NULL;```Now it includes all tuples in _Person_ Inner Joins + NULLS = Lost data?* By default, joins in SQL are __inner joins__```Product(name, category)Purchase(prodName, store)```Syntax 1```mysqlSELECT Product.name, Purchase.storeFROM ProductJOIN Purchase ON Product.name = Purchase.prodName;```Syntax 2```mysqlSELECT Product.name, Purchase.storeFROM Product, PurchaseWHERE Product.name = Purchase.prodName;```* Both equivalent, both _inner joins_* __However:__ Products that never sold (with no Purchase tuple) will be lost! Outer Joins* An __outer join__ returns tuples from the joined relations that don’t have a corresponding tuple in the other relations * i.e. If we join relations A and B on a.X = b.X, and there is an entry in A with X=5, but none in B with X=5 LEFT [OUTER] JOIN will return a tuple __(a, NULL)__ Syntax```mysqlSELECT column_name(s)FROM table1LEFT OUTER JOIN table2 ON table1.column_name = table2.column_name;```
###Code
%%sql
-- Create tables
DROP TABLE IF EXISTS Product;
CREATE TABLE Product (
name VARCHAR(255) PRIMARY KEY,
category VARCHAR(255)
);
DROP TABLE IF EXISTS Purchase;
CREATE TABLE Purchase(
prodName varchar(255),
store varchar(255)
);
-- Insert tuples
INSERT INTO Product VALUES ('Gizmo', 'Gadget');
INSERT INTO Product VALUES ('Camera', 'Photo');
INSERT INTO Product VALUES ('OneClick', 'Photo');
INSERT INTO Purchase VALUES ('Gizmo', 'Wiz');
INSERT INTO Purchase VALUES ('Camera', 'Ritz');
INSERT INTO Purchase VALUES ('Camera', 'Wiz');
%%sql
SELECT *
FROM Product;
%%sql
SELECT *
FROM Purchase;
%%sql
SELECT Product.name, Purchase.store
FROM Product
INNER JOIN Purchase
ON Product.name = Purchase.prodName;
###Output
Done.
###Markdown
Outer Joins

* __Left outer join__
  * Include the left tuple even if there is no match
* __Right outer join__
  * Include the right tuple even if there is no match
* __Full outer join__
  * Include both left and right tuples even if there is no match

Summary

* The relational model has rigorously defined query languages that are simple and powerful.
* Several ways of expressing a given query; a query optimizer should choose the most efficient version.
* SQL is the lingua franca (common language) for accessing relational database systems.
* SQL is a rich language that handles the way data is processed ___declaratively___
  * Expresses the logic of a computation without describing its control flow

___
###Code
# Modify the css style
from IPython.core.display import HTML
def css_styling():
styles = open("./style/custom.css").read()
return HTML(styles)
css_styling()
###Output
_____no_output_____ |
hunkim_ReinforcementLearning/.ipynb_checkpoints/Lecture04-checkpoint.ipynb | ###Markdown
Lecture 4. Q-Learning: exploit & exploration and discounted reward. In this lecture we learn how to make Q-Learning work properly. In Lecture 3 the problem was that we never traded off exploitation against exploration, so this time we train while combining exploitation and exploration. Exploration: the E-greedy policy. Decaying E-greedy: decrease the epsilon value over time so that the probability of exploring shrinks as learning progresses. Random Noise: add a random value to each action's estimate so that the argmax can pick a different action. Q-hat converges to Q.
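A minimal sketch of the decaying E-greedy action selection described above (illustration only; inside the training loop below, which actually uses the random-noise variant, the same `Q`, `state`, `env` and episode index `i` are in scope):

```python
# decaying epsilon: explore a lot early on, exploit more in later episodes
e = 1 / ((i / 100) + 1)
if np.random.rand(1) < e:
    action = env.action_space.sample()   # explore: pick a random action
else:
    action = np.argmax(Q[state, :])      # exploit: pick the best known action
```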
###Code
import gym
import numpy as np
import matplotlib.pyplot as plt
from gym.envs.registration import register
register(
id='FrozenLake-v3',
entry_point='gym.envs.toy_text:FrozenLakeEnv',
kwargs={'map_name': '4x4',
'is_slippery': False
}
)
env = gym.make('FrozenLake-v3')
Q = np.zeros([env.observation_space.n, env.action_space.n])
dis = 0.99
num_episodes = 2000
rList = []
for i in range(num_episodes):
state = env.reset()
rAll = 0
done = False
e = 1 / ((i / 100) +1)
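    # note: e is the decaying epsilon described above; this loop uses the
    # random-noise approach for exploration instead, so e is not used below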
while not done:
action = np.argmax(Q[state, :] + np.random.randn(1, env.action_space.n) / (i + 1))
new_state, reward, done, _ = env.step(action)
Q[state, action] = reward + dis * np.max(Q[new_state, :])
rAll += reward
state = new_state
rList.append(rAll)
print("Success rate: " + str(sum(rList)/num_episodes))
print("Final Q-Table Values")
print(Q)
plt.bar(range(len(rList)), rList, color="blue")
plt.show()
###Output
Success rate: 0.9845
Final Q-Table Values
[[0. 0.95099005 0. 0. ]
[0. 0. 0. 0. ]
[0. 0. 0. 0. ]
[0. 0. 0. 0. ]
[0. 0.96059601 0. 0. ]
[0. 0. 0. 0. ]
[0. 0. 0. 0. ]
[0. 0. 0. 0. ]
[0. 0. 0.970299 0. ]
[0. 0.9801 0. 0. ]
[0. 0.99 0. 0. ]
[0. 0. 0. 0. ]
[0. 0. 0. 0. ]
[0. 0. 0.99 0. ]
[0. 0. 1. 0. ]
[0. 0. 0. 0. ]]
|
_notebooks/2020-07-01-karuba-junior.ipynb | ###Markdown
Karuba Junior> Calculating the odds of winning a cooperative children's game.- comments: true- categories: [board games, monte carlo]- image: "https://cf.geekdo-images.com/imagepage/img/gzLtKeHmprKcxhBGhJ_hY1o4kR4=/fit-in/900x600/filters:no_upscale()/pic3742928.jpg" [Karuba Junior](https://www.boardgamegeek.com/boardgame/234439/karuba-junior) is a cooperative tile-placement adventure game.Having lost twice in a row I wanted to calculate the odds of winning this game.As expected for a child games the rules are dead simple. The players take turns in drawing a card and using it to extend or end one of the 4 starting roads.Tigers and treasures end a road.Forks and crossroads add one and two additional roads, respectively, as long as they are placed in such a way that none of the roads are blocked by another card, which in practice is always possible.The game is won if all 3 treasures are found.The game is lost if there is no open road left, or if the pirates advanced a total of 9 fields, which happens by drawing the corresponding pirate cards. Let's find the odds of winning the game through Monte Carlo. ImplementationWe need 3 counters: the number of treasures found, the number of open roads, and the number of pirate moves.For each card we define how the counters change in form of a 3-component vector. Then we can accumulate the changes in the random order they are drawn and determine which win/loss condition occurs first.There are 28 cards:* 3 treasures* 3 tigers* 11 straight and curved roads* 3 forks* 1 crossroads* 6 pirate cards: 3 cards with one movement point, 2 two's and 1 three
###Code
#collapse-show
import numpy as np
# card = (#treasure, #roads, #pirates)
cards = np.concatenate([
np.repeat([[1, -1, 0]], 3, axis=0), # treasure
np.repeat([[0, -1, 0]], 3, axis=0), # tiger
np.repeat([[0, 0, 0]], 11, axis=0), # simple road
np.repeat([[0, 1, 0]], 4, axis=0), # fork
np.repeat([[0, 2, 0]], 1, axis=0), # crossroad
np.repeat([[0, 0, 1]], 3, axis=0), # pirate 1
np.repeat([[0, 0, 2]], 2, axis=0), # pirate 2
np.repeat([[0, 0, 3]], 1, axis=0), # pirate 3
])
def simulate():
"""Simulate a game and determine the win or loss condition"""
np.random.shuffle(cards)
# all counter start from 0
(treasures, roads, pirates) = cards.cumsum(axis=0).T
# round when all 3 treasures found
i_treasure = np.where(treasures == 3)[0][0]
# round when pirates arrive at the beach
i_pirates = np.where(pirates >= 9)[0][0]
# check if all roads are blocked
if (roads == -4).any():
i_roads = np.where(roads <= -4)[0][0]
else:
i_roads = np.inf
# note: the case that the third treasure also closes the last road is correctly registered as a win
return np.argmin([i_treasure, i_roads, i_pirates])
n = 100000
res = [simulate() for i in range(n)]
frequency = np.bincount(res) / n
#hide_input
print('Probability of outcomes')
print(f'Win: p={frequency[0]:.3f}')
print(f'Loss (roads blocked): p={frequency[1]:.3f}')
print(f'Loss (pirates): p={frequency[2]:.3f}')
###Output
Probability of outcomes
Win: p=0.508
Loss (roads blocked): p=0.052
Loss (pirates): p=0.441
|
data structures & algorithms/11.ipynb | ###Markdown
Naming a slice Problem: Cleaning up a messy dataset Solution:If we are pulling a set of data fields out of a record string with fixed fields, we are likely to end up with a lot of hardcoded slice indices scattered through the code.- As a general rule, writing code with a lot of hardcoded index values leads to a readability and maintenance mess.- A better approach is to use the built-in `slice()`, which creates a named slice object that can be used anywhere a slice is expected.
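As a minimal sketch of that record-parsing case (the record layout and field positions below are made up purely for illustration):

```python
# A made-up fixed-width record: 20 filler characters, a 3-character share count,
# a separator, 7 more filler characters, then a 6-character price field.
record = '.' * 20 + '100 ' + '.' * 7 + '513.25 ' + '.' * 10

SHARES = slice(20, 23)   # instead of the magic numbers record[20:23]
PRICE = slice(31, 37)    # instead of record[31:37]

cost = int(record[SHARES]) * float(record[PRICE])
print(cost)   # 51325.0
```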
###Code
items = [0, 1, 2, 3, 4, 5, 6]
a = slice(2,4)
items[2:4]
items[a]
items[a]
del items[a]
items
###Output
_____no_output_____
###Markdown
If we have a `slice` instance `s`, we can get more information about it by looking at its `s.start`, `s.stop`, and `s.step` attributes. - For example,
###Code
a = slice(5, 50, 2)
a.start
a.stop
a.step
###Output
_____no_output_____
###Markdown
We can map a slice onto a sequence of a specific size by using the `indices(size)` method.This returns a tuple `(start, stop, step)` where all values have been suitably limited to fit within bounds.- Example:
###Code
s = 'HelloWorld'
a.indices(len(s))
for i in range(*a.indices(len(s))):
print(s[i])
###Output
W
r
d
|
Jan_2020/a05_jan22/a02_ngals10k_20k_compare.ipynb | ###Markdown
Table of Contents 1 Introduction2 Imports3 Load the final text cleancat15 data4 Plot g00 vs g205 compare ngals10k vs ngals20k Kernel Author: Bhishan Poudel, Ph.D Contd. Astrophysics Update: Jan 13, 2020 Date : Jan 10, 2020 IntroductionDate: Dec 10, 2019 Mon**Update** 1. Looked at gm0 vs gc0 (and gm1 vs gc1) 45 degree line and removed outliers.2. Find the weights for g_sq for given magnitude bins using smooth fitting curve.**Usual Filtering** ```pythondf = df.query('calib_psfCandidate == 0.0')df = df.query('deblend_nChild == 0.0')df['ellip'] = np.hypot( df['ext_shapeHSM_HsmShapeRegauss_e1'] , df['ext_shapeHSM_HsmShapeRegauss_e2'] )df = df.query('ellip < 2.0') it was 1.5 beforeselect only few columns after filtering:cols_select = ['base_SdssCentroid_x', 'base_SdssCentroid_y', 'base_SdssCentroid_xSigma','base_SdssCentroid_ySigma', 'ext_shapeHSM_HsmShapeRegauss_e1','ext_shapeHSM_HsmShapeRegauss_e2', 'base_SdssShape_flux']df = df[cols_select] drop all nansdf = df.dropna() additional columnsdf['radius'] = df.eval(""" ( (ext_shapeHSM_HsmSourceMoments_xx * ext_shapeHSM_HsmSourceMoments_yy) \ - (ext_shapeHSM_HsmSourceMoments_xy**2 ) )**0.25 """)```**Shape filtering** https://github.com/LSSTDESC/DC2-analysis/blob/master/tutorials/object_gcr_2_lensing_cuts.ipynb```pythondf = df.query('ext_shapeHSM_HsmShapeRegauss_resolution >= 0.3')df = df.query('ext_shapeHSM_HsmShapeRegauss_sigma <= 0.4')df = df.query('ext_shapeHSM_HsmShapeRegauss_flag== 0.0')```**Filter strongly lensed objects** - Take the objects with centroids >154 pixels (remove strong lens objects).```python exclude strong lens objects <=154 distance The shape of lsst.fits file is 3998,3998 and center is 1699,1699.df['x_center'] = 1699df['y_center'] = 1699df['distance'] = ( (df['x[0]'] - df['x_center'])**2 + (df['x[1]'] - df['y_center'])**2 )**0.5df = df[df.distance > 154]```**Imcat script** ```bash create new columns and cleaning (four files)lc -C -n fN -n id -N '1 2 x' -N '1 2 errx' -N '1 2 g' -n ellip -n flux -n radius "${M9C}".cat merge 4 catalogsmergecats 5 "${MC}".cat "${M9C}".cat "${LC}".cat "${L9C}".cat > ${catalogs}/merge.cat && lc -b +all 'x = %x[0][0] %x[1][0] + %x[2][0] + %x[3][0] + 4 / %x[0][1] %x[1][1] + %x[2][1] + %x[3][1] + 4 / 2 vector''gm = %g[0][0] %g[1][0] + 2 / %g[0][1] %g[1][1] + 2 / 2 vector' 'gc = %g[2][0] %g[3][0] + 2 / %g[2][1] %g[3][1] + 2 / 2 vector' 'gmd = %g[0][0] %g[1][0] - 2 / %g[0][1] %g[1][1] - 2 / 2 vector' 'gcd = %g[2][0] %g[3][0] - 2 / %g[2][1] %g[3][1] - 2 / 2 vector' ${final}/final_${i}.cat```**Notes** final_text.txt is created by imcat program after merging four lsst files (m,m9,l,l9) after cleaning. Imports
###Code
import json, os,sys
import numpy as np
import pandas as pd
import seaborn as sns
sns.set(color_codes=True)
import plotly
import ipywidgets
pd.set_option('display.max_columns',200)
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
print([(x.__name__, x.__version__) for x in [np,pd,sns,plotly,ipywidgets]])
%%javascript
IPython.OutputArea.auto_scroll_threshold = 9999;
###Output
_____no_output_____
###Markdown
Load the final text cleancat15 data```g_sq = g00*g00 + g10*g10``` ```gmd_sq = gmd0**2 + gmd1**2```
###Code
!head -2 ../data/cleancat/final_text_cleancat15_000_099.txt
names = "fN[0][0] fN[1][0] fN[2][0] fN[3][0] id[0][0] id[1][0] id[2][0] id[3][0] x[0] x[1] errx[0][0] errx[0][1] errx[1][0] errx[1][1] errx[2][0] errx[2][1] errx[3][0] errx[3][1] g[0][0] g[0][1] g[1][0] g[1][1] g[2][0] g[2][1] g[3][0] g[3][1] ellip[0][0] ellip[1][0] ellip[2][0] ellip[3][0] flux[0][0] flux[1][0] flux[2][0] flux[3][0] radius[0][0] radius[1][0] radius[2][0] radius[3][0] mag[0][0] mag[1][0] mag[2][0] mag[3][0] gm[0] gm[1] gc[0] gc[1] gmd[0] gmd[1] gcd[0] gcd[1]"
print(names)
names = ['fN[0][0]','fN[1][0]','fN[2][0]','fN[3][0]',
'id[0][0]','id[1][0]','id[2][0]','id[3][0]',
'x[0]','x[1]',
'errx[0][0]','errx[0][1]','errx[1][0]','errx[1][1]','errx[2][0]',
'errx[2][1]','errx[3][0]','errx[3][1]',
'g[0][0]','g[0][1]','g[1][0]','g[1][1]','g[2][0]','g[2][1]','g[3][0]','g[3][1]',
'ellip[0][0]','ellip[1][0]','ellip[2][0]','ellip[3][0]',
'flux[0][0]','flux[1][0]','flux[2][0]','flux[3][0]',
'radius[0][0]','radius[1][0]','radius[2][0]','radius[3][0]',
'mag[0][0]','mag[1][0]','mag[2][0]','mag[3][0]',
'gm[0]','gm[1]','gc[0]', 'gc[1]',
'gmd[0]','gmd[1]','gcd[0]','gcd[1]']
def read_data(ifile):
df = pd.read_csv(ifile,comment='#',engine='python',sep=r'\s\s+',
header=None,names=names)
print(df.shape)
# new columns
# df['g_sq'] = df['g[0][0]'] **2 + df['g[1][0]']**2 # only for imcat 00 and 10
# df['gmd_sq'] = df['gmd[0]'] **2 + df['gmd[1]']**2
df['g_sq'] = df['g[0][0]'] **2 + df['g[0][1]']**2
df['gmd_sq'] = df['gmd[0]'] **2 + df['gmd[1]']**2
df['gm_sq'] = df['gm[0]']**2 + df['gm[1]']**2
df['gc_sq'] = df['gc[0]']**2 + df['gc[1]']**2
df['mag_mono'] = (df['mag[0][0]'] + df['mag[1][0]'] ) / 2
df['mag_chro'] = (df['mag[2][0]'] + df['mag[3][0]'] ) / 2
return df
file_path = f'../data/cleancat/final_text_cleancat15_000_099.txt'
df = read_data(file_path)
df.head()
###Output
(56861, 50)
###Markdown
Plot g00 vs g20
###Code
df.head(2)
def plot_g00_20(df,start,end):
fig,ax = plt.subplots(1,2,figsize=(12,8))
x = df['gm[0]']
y = df['gc[0]']-df['gm[0]']
xx = df['g[0][0]']
yy = df['g[2][0]']-df['g[0][0]']
ax[0].scatter(x,y)
ax[1].scatter(xx,yy)
ax[0].set_ylabel('gc0-gm0')
ax[0].set_xlabel('gm0')
ax[1].set_ylabel('g20-g00')
ax[1].set_xlabel('g00')
plt.suptitle(f'gm vs gc plot from {start} to {end}',weight='bold',fontsize=24);
# file ranges ('000' to '099', etc.) are passed explicitly for the plot titles (start/end were previously undefined)
file_path = f'../data/cleancat/final_text_cleancat15_000_099.txt'
df0 = read_data(file_path)
plot_g00_20(df0, '000', '099')
file_path = f'../data/cleancat/final_text_cleancat15_000_167_ngals20k.txt'
df1 = read_data(file_path)
plot_g00_20(df1, '000', '167')
file_path = f'../data/cleancat/final_text_cleancat15_000_199_ngals10k.txt'
df2 = read_data(file_path)
plot_g00_20(df2, '000', '199')
###Output
(113970, 50)
###Markdown
compare ngals10k vs ngals20k
###Code
fig,ax = plt.subplots(1,3,sharey=True,figsize=(12,12))
ax[0].scatter(x = df0['gm[0]'],
y = df0['gc[0]']-df0['gm[0]'])
ax[1].scatter(x = df2['gm[0]'],
y = df2['gc[0]']-df2['gm[0]'])
ax[2].scatter(x = df1['gm[0]'],
y = df1['gc[0]']-df1['gm[0]'])
ax[0].set_title('ngals10k 000 to 099')
ax[1].set_title('ngals10k 000 to 199')
ax[2].set_title('ngals10k 000 to 099 + ngals20k 100 to 167')
###Output
_____no_output_____ |
ML_practice.ipynb | ###Markdown
###Code
###Output
_____no_output_____ |
Aqueduct/lab/AQ_2_V3_API_layers.ipynb | ###Markdown
Registering Datasets and Layers into the API- All Aqueduct Water Risk Atlas datasets: https://staging-api.globalforestwatch.org/v1/dataset?application=aqueduct-water-risk&status=saved&page[size]=1000- All Aqueduct Water Risk Atlas layers: https://staging-api.globalforestwatch.org/v1/layer?application=aqueduct-water-risk&status=saved&page[size]=1000
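Before registering anything, it can be handy to see what is already there. A minimal sketch that queries the two listing endpoints above (same query string as in the URLs; the helper name `list_saved` is ours, and the response shape is assumed to match the `{'data': [...]}` structure shown in the outputs below):

```python
import requests

def list_saved(resource):
    """List saved aqueduct-water-risk datasets or layers from the staging API."""
    url = (f'https://staging-api.globalforestwatch.org/v1/{resource}'
           '?application=aqueduct-water-risk&status=saved&page[size]=1000')
    data = requests.get(url).json().get('data', [])
    return {item['id']: item['attributes']['name'] for item in data}

# e.g. len(list_saved('dataset')), len(list_saved('layer'))
```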
###Code
import json
import requests
from pprint import pprint
import random
import getpass
s = requests.session()
s.cookies.clear()
OAUTH = getpass.getpass('RW Token_ID:')
###Output
RW Token_ID: ···············································································································································································································································································································································································································································································································
###Markdown
Register Dataset **Annual indicator layers**
###Code
payload = {"dataset": {
"name": "Annual indicator layers",
"application": ["aqueduct-water-risk"],
"connectorType": "rest",
"provider": "cartodb",
"connectorUrl": "https://wri-rw.carto.com/tables/water_risk_indicators_annual/public",
"tableName": "water_risk_indicators_annual",
"status": "saved",
"published": True,
"overwrite": False,
"verified": False,
"env": "production",
}
}
###Output
_____no_output_____
###Markdown
**Monthly indicator layers**
###Code
payload = {"dataset": {
"name": "Monthly indicator layers",
"application": ["aqueduct-water-risk"],
"connectorType": "rest",
"provider": "cartodb",
"connectorUrl": "https://wri-rw.carto.com/tables/water_risk_indicators_monthly/public",
"tableName": "water_risk_indicators_monthly",
"status": "saved",
"published": True,
"overwrite": False,
"verified": False,
"env": "production",
}
}
###Output
_____no_output_____
###Markdown
**Projected indicator layers**
###Code
payload = {"dataset": {
"name": "Projected indicator layers",
"application": ["aqueduct-water-risk"],
"connectorType": "rest",
"provider": "cartodb",
"connectorUrl": "https://wri-rw.carto.com/tables/water_risk_indicators_projections/public",
"tableName": "water_risk_indicators_projections",
"status": "saved",
"published": True,
"overwrite": False,
"verified": False,
"env": "production",
}
}
###Output
_____no_output_____
###Markdown
**Predefined weights layers**
###Code
payload = {"dataset": {
"name": "Predefined weights layers",
"application": ["aqueduct-water-risk"],
"connectorType": "rest",
"provider": "cartodb",
"connectorUrl": "https://wri-rw.carto.com/tables/water_risk_indicators_annual/public",
"tableName": "water_risk_indicators_annual",
"status": "saved",
"published": True,
"overwrite": False,
"verified": False,
"env": "production",
}
}
###Output
_____no_output_____
###Markdown
**Custom weights layers**
###Code
payload = {"dataset": {
"name": "Custom weights layers",
"application": ["aqueduct-water-risk"],
"connectorType": "rest",
"provider": "cartodb",
"connectorUrl": "https://wri-rw.carto.com/tables/water_risk_indicators_annual/public",
"tableName": "water_risk_indicators_normalized",
"status": "saved",
"published": True,
"overwrite": False,
"verified": False,
"env": "production",
}
}
###Output
_____no_output_____
###Markdown
**FAO hydrobasins layer**
###Code
payload = {"dataset": {
"name": "FAO hydrobasins",
"application": ["aqueduct-water-risk"],
"connectorType": "rest",
"provider": "cartodb",
"connectorUrl": "https://wri-rw.carto.com/tables/hydrobasins_fao_fiona_merged_v01/public",
"tableName": "hydrobasins_fao_fiona_merged_v01",
"status": "saved",
"published": True,
"overwrite": False,
"verified": False,
"env": "production",
}
}
###Output
_____no_output_____
###Markdown
**Aquifers layer**
###Code
payload = {"dataset": {
"name": "Aquifers",
"application": ["aqueduct-water-risk"],
"connectorType": "rest",
"provider": "cartodb",
"connectorUrl": "https://wri-rw.carto.com/tables/aquifer_names_simple_v01/public",
"tableName": "aquifer_names_simple_v01",
"status": "saved",
"published": True,
"overwrite": False,
"verified": False,
"env": "production",
}
}
#Post new dataset
url = f'https://staging-api.globalforestwatch.org/v1/dataset'
#url = f'https://api.resourcewatch.org/v1/dataset'
headers = {'Authorization': 'Bearer ' + OAUTH, 'Content-Type': 'application/json', 'Cache-Control': 'no-cache'}
r = requests.post(url, data=json.dumps(payload), headers=headers)
print(r.json())
###Output
{'data': {'id': 'ee0aefe6-176e-43e7-b349-b44b70d95d22', 'type': 'dataset', 'attributes': {'name': 'Pop. at risk of hunger', 'slug': 'Pop-at-risk-of-hunger', 'type': None, 'subtitle': None, 'application': ['aqueduct'], 'dataPath': None, 'attributesPath': None, 'connectorType': 'rest', 'provider': 'cartodb', 'userId': '5b60606f5a4e04a7f54ff857', 'connectorUrl': 'https://wri-01.carto.com/tables/combined01_prepared/public', 'tableName': 'combined01_prepared', 'status': 'pending', 'published': True, 'overwrite': False, 'verified': False, 'blockchain': {}, 'mainDateField': None, 'env': 'production', 'geoInfo': False, 'protected': False, 'legend': {'date': [], 'region': [], 'country': [], 'nested': [], 'integer': [], 'short': [], 'byte': [], 'double': [], 'float': [], 'half_float': [], 'scaled_float': [], 'boolean': [], 'binary': [], 'text': [], 'keyword': []}, 'clonedHost': {}, 'errorMessage': None, 'taskId': None, 'createdAt': '2019-06-12T13:58:39.055Z', 'updatedAt': '2019-06-12T13:58:39.055Z', 'dataLastUpdated': None, 'widgetRelevantProps': [], 'layerRelevantProps': []}}}
###Markdown
Updating Datasets
###Code
""" Copy the old one and change desired values (in this case attributes.description)
NOTE that you cannot update id, or type, only attributes!
"""
payload = {
"published": False,
}
#Update layers
dataset_id = 'a205e78b-8867-4677-b32b-871dfa6bbdd6'
url = f'https://staging-api.globalforestwatch.org/v1/dataset/{dataset_id}'
headers = {'Authorization': 'Bearer ' + OAUTH, 'Content-Type': 'application/json'}
r = requests.patch(url, data=json.dumps(payload), headers=headers)
pprint(r.json())
###Output
{'data': {'attributes': {'application': ['aqueduct'],
'attributesPath': None,
'blockchain': {},
'clonedHost': {},
'connectorType': 'rest',
'connectorUrl': 'https://wri-rw.carto.com/tables/combined01_prepared/public',
'createdAt': '2019-07-04T14:42:14.276Z',
'dataLastUpdated': None,
'dataPath': None,
'env': 'production',
'errorMessage': None,
'geoInfo': False,
'layerRelevantProps': [],
'legend': {'binary': [],
'boolean': [],
'byte': [],
'country': [],
'date': [],
'double': [],
'float': [],
'half_float': [],
'integer': [],
'keyword': [],
'nested': [],
'region': [],
'scaled_float': [],
'short': [],
'text': []},
'mainDateField': None,
'name': 'On average, groundwater tables have declined '
'by x% in areas where [crop] is grown since '
'1990.',
'overwrite': False,
'protected': False,
'provider': 'cartodb',
'published': False,
'slug': 'On-average-groundwater-tables-have-declined-by-x-in-areas-where-crop-is-grown-since-1990',
'status': 'saved',
'subtitle': None,
'tableName': 'combined01_prepared',
'taskId': None,
'type': None,
'updatedAt': '2019-07-04T14:44:23.089Z',
'userId': '58333dcfd9f39b189ca44c75',
'verified': False,
'widgetRelevantProps': []},
'id': 'a205e78b-8867-4677-b32b-871dfa6bbdd6',
'type': 'dataset'}}
###Markdown
Registering Layers
###Code
payload = {
"name": "Pop. at risk of hunger",
"description": "",
"application": [
"aqueduct"
],
"iso": [],
"provider": "cartodb",
"default": True,
"protected": False,
"published": True,
"env": "production",
"layerConfig": {
"body": {
"use_cors": False,
"url": "https://wri-rw.carto.com/api/v2/sql?q=with s as (SELECT iso, region, value, commodity FROM combined01_prepared {{where}} and impactparameter='Share Pop. at risk of hunger' and scenario='SSP2-MIRO' and iso is not null ), r as (SELECT iso, region, sum(value) as value FROM s group by iso, region), d as (SELECT centroid as geometry, iso, value, region FROM impact_regions_159 t inner join r on new_region=iso) select json_build_object('type','FeatureCollection','features',json_agg(json_build_object('geometry',cast(geometry as json),'properties', json_build_object('value',value,'country',region,'iso',iso, 'unit', 'percentage'),'type','Feature'))) as data from d"
},
"sql_config": [
{
"key_params": [
{
"required": True,
"key": "year"
}
],
"key": "where"
}
],
"params_config": [],
"type": "leaflet"
},
"legendConfig": {
"type": "cluster",
"description": "Pop. at risk of hunger represents the percentage of each country's population at risk of suffering from malnourishment. Note that it is not crop specific.",
"units": "%",
"items": [
{
"name": "Pop. at risk of hunger",
"color": "rgba(53, 89, 161, 0.85)"
}
]
},
}
#Register layers
dataset_id = 'ee0aefe6-176e-43e7-b349-b44b70d95d22'
url = f'https://staging-api.globalforestwatch.org/v1/dataset/{dataset_id}/layer/'
headers = {'Authorization': 'Bearer ' + OAUTH, 'Content-Type': 'application/json'}
r = requests.post(url, data=json.dumps(payload), headers=headers)
print(r.url)
pprint(r.json())
layer = r.json().get('data',None)
###Output
https://staging-api.globalforestwatch.org/v1/dataset/ee0aefe6-176e-43e7-b349-b44b70d95d22/layer/
{'data': {'attributes': {'application': ['aqueduct'],
'applicationConfig': {},
'dataset': 'ee0aefe6-176e-43e7-b349-b44b70d95d22',
'default': True,
'description': '',
'env': 'production',
'interactionConfig': {},
'iso': [],
'layerConfig': {'body': {'url': 'https://wri-rw.carto.com/api/v2/sql?q=with '
's as (SELECT iso, '
'region, value, '
'commodity FROM '
'combined01_prepared '
'{{where}} and '
"impactparameter='Share "
'Pop. at risk of '
"hunger' and "
"scenario='SSP2-MIRO' "
'and iso is not null '
'), r as (SELECT iso, '
'region, sum(value) '
'as value FROM s '
'group by iso, '
'region), d as '
'(SELECT centroid as '
'geometry, iso, '
'value, region FROM '
'impact_regions_159 t '
'inner join r on '
'new_region=iso) '
'select '
"json_build_object('type','FeatureCollection','features',json_agg(json_build_object('geometry',cast(geometry "
'as '
"json),'properties', "
"json_build_object('value',value,'country',region,'iso',iso, "
"'unit', "
"'percentage'),'type','Feature'))) "
'as data from d',
'use_cors': False},
'params_config': [],
'sql_config': [{'key': 'where',
'key_params': [{'key': 'year',
'required': True}]}],
'type': 'leaflet'},
'legendConfig': {'description': 'Pop. at risk of '
'hunger represents '
'the percentage of '
"each country's "
'population at risk '
'of suffering from '
'malnourishment. Note '
'that it is not crop '
'specific.',
'items': [{'color': 'rgba(53, 89, '
'161, 0.85)',
'name': 'Pop. at risk of '
'hunger'}],
'type': 'cluster',
'units': '%'},
'name': 'Pop. at risk of hunger',
'protected': False,
'provider': 'cartodb',
'published': True,
'slug': 'Pop-at-risk-of-hunger',
'staticImageConfig': {},
'updatedAt': '2019-06-12T14:08:26.061Z',
'userId': '5b60606f5a4e04a7f54ff857'},
'id': '8b8c9f05-0cfe-4c82-ba91-1b4e8d7c9942',
'type': 'layer'}}
###Markdown
Updating Layers
###Code
# aqueduct_projections_20150309 is the Carto table joined in the SQL of the payload below
""" Copy the old one and change desired values (in this case attributes.description)
NOTE that you cannot update id, or type, only attributes!
"""
payload = {
"layerConfig": {
"sql_config": [],
"params_config": [
{
"required": True,
"key": "year"
},
{
"required": True,
"key": "scenario"
}
],
"body": {
"layers": [
{
"options": {
"cartocss_version": "2.3.0",
"cartocss": "#water_risk_indicators_projections{ polygon-fill:transparent; polygon-opacity: 1; line-color:transparent; line-width: 1; line-opacity: 1; } #water_risk_indicators_projections [label='1.7x or greater decrease'] { polygon-fill:#0099CD; line-color:#0099CD } #water_risk_indicators_projections [label='1.4x decrease'] { polygon-fill: #74AFD1; line-color: #74AFD1 } #water_risk_indicators_projections [label='1.2x decrease'] { polygon-fill: #AAC7D8; line-color: #AAC7D8 } #water_risk_indicators_projections [label='Near normal'] { polygon-fill: #DEDEDD; line-color: #DEDEDD } #water_risk_indicators_projections [label='1.2x increase'] { polygon-fill: #F8AB95; line-color: #F8AB95 } #water_risk_indicators_projections [label='1.4x increase'] { polygon-fill: #F27454; line-color: #F27454 } #water_risk_indicators_projections [label='1.7x or greater increase'] { polygon-fill: #ED2924; line-color: #ED2924 } #water_risk_indicators_projections [label='No data'] { polygon-fill: #4F4F4F; line-color: #4F4F4F }",
"sql": "with r as (SELECT basinid, label FROM water_risk_indicators_projections WHERE year = {{year}} and type = 'change_from_baseline' and indicator = 'water_demand' and scenario = '{{scenario}}') SELECT s.cartodb_id, s.basinid, s.the_geom, s.the_geom_webmercator, r.label FROM aqueduct_projections_20150309 s LEFT JOIN r on s.basinid=r.basinid WHERE s.the_geom is not null and r.label is not null"
},
"type": "cartodb"
}
]
},
"account": "wri-rw"
}
}
#Update layers
dataset_id = '17f3b259-b3b9-4bd6-910d-852fb3c1c510'
layer_id = 'a3795c06-d2eb-4aa3-8e24-62965b69e5ce'
url = f'https://staging-api.globalforestwatch.org/v1/dataset/{dataset_id}/layer/{layer_id}'
headers = {'Authorization': 'Bearer ' + OAUTH, 'Content-Type': 'application/json'}
r = requests.patch(url, data=json.dumps(payload), headers=headers)
pprint(r.json())
###Output
{'data': {'attributes': {'application': ['aqueduct-water-risk'],
'applicationConfig': {},
'createdAt': '2020-01-17T14:07:27.841Z',
'dataset': '17f3b259-b3b9-4bd6-910d-852fb3c1c510',
'default': True,
'env': 'production',
'interactionConfig': {'output': [{'column': 'label',
'format': None,
'prefix': '',
'property': 'Category',
'suffix': '',
'type': 'string'}]},
'iso': [],
'layerConfig': {'account': 'wri-rw',
'body': {'layers': [{'options': {'cartocss': '#water_risk_indicators_projections{ '
'polygon-fill:transparent; '
'polygon-opacity: '
'1; '
'line-color:transparent; '
'line-width: '
'1; '
'line-opacity: '
'1; '
'} '
'#water_risk_indicators_projections '
"[label='1.7x "
'or '
'greater '
"decrease'] "
'{ '
'polygon-fill:#0099CD; '
'line-color:#0099CD '
'} '
'#water_risk_indicators_projections '
"[label='1.4x "
"decrease'] "
'{ '
'polygon-fill: '
'#74AFD1; '
'line-color: '
'#74AFD1 '
'} '
'#water_risk_indicators_projections '
"[label='1.2x "
"decrease'] "
'{ '
'polygon-fill: '
'#AAC7D8; '
'line-color: '
'#AAC7D8 '
'} '
'#water_risk_indicators_projections '
"[label='Near "
"normal'] "
'{ '
'polygon-fill: '
'#DEDEDD; '
'line-color: '
'#DEDEDD '
'} '
'#water_risk_indicators_projections '
"[label='1.2x "
"increase'] "
'{ '
'polygon-fill: '
'#F8AB95; '
'line-color: '
'#F8AB95 '
'} '
'#water_risk_indicators_projections '
"[label='1.4x "
"increase'] "
'{ '
'polygon-fill: '
'#F27454; '
'line-color: '
'#F27454 '
'} '
'#water_risk_indicators_projections '
"[label='1.7x "
'or '
'greater '
"increase'] "
'{ '
'polygon-fill: '
'#ED2924; '
'line-color: '
'#ED2924 '
'} '
'#water_risk_indicators_projections '
"[label='No "
"data'] "
'{ '
'polygon-fill: '
'#4F4F4F; '
'line-color: '
'#4F4F4F '
'}',
'cartocss_version': '2.3.0',
'sql': 'with '
'r '
'as '
'(SELECT '
'basinid, '
'label '
'FROM '
'water_risk_indicators_projections '
'WHERE '
'year '
'= '
'{{year}} '
'and '
'type '
'= '
"'change_from_baseline' "
'and '
'indicator '
'= '
"'water_demand' "
'and '
'scenario '
'= '
"'{{scenario}}') "
'SELECT '
's.cartodb_id, '
's.basinid, '
's.the_geom, '
's.the_geom_webmercator, '
'r.label '
'FROM '
'aqueduct_projections_20150309 '
's '
'LEFT '
'JOIN '
'r '
'on '
's.basinid=r.basinid '
'WHERE '
's.the_geom '
'is '
'not '
'null '
'and '
'r.label '
'is '
'not '
'null'},
'type': 'cartodb'}]},
'params_config': [{'key': 'year',
'required': True},
{'key': 'scenario',
'required': True}],
'sql_config': []},
'legendConfig': {'disclaimer': [{'color': '#4E4E4E',
'name': 'No data'}],
'items': [{'color': '#0099CD',
'name': '1.7x or greater '
'decrease'},
{'color': '#74AFD1',
'name': '1.4x decrease'},
{'color': '#AAC7D8',
'name': '1.2x decrease'},
{'color': '#DEDEDD',
'name': 'Near normal'},
{'color': '#F8AB95',
'name': '1.2x increase'},
{'color': '#F27454',
'name': '1.4x increase'},
{'color': '#ED2924',
'name': '1.7x or greater '
'increase'}],
'type': 'choropleth'},
'name': 'Projected Change in Water Demand',
'protected': False,
'provider': 'cartodb',
'published': True,
'slug': 'Projected-Change-in-Water-Demand',
'staticImageConfig': {},
'updatedAt': '2020-01-17T14:07:27.842Z',
'userId': '5b60606f5a4e04a7f54ff857'},
'id': 'a3795c06-d2eb-4aa3-8e24-62965b69e5ce',
'type': 'layer'}}
###Markdown
Registering Metadata
###Code
payload = {
"application": "aqueduct",
"language": "en",
"name": "Crops",
"description": "These crops were selected based on their importance in the global commodities market and for food security. 'All crops' represent all of the crops that are included in the tool as displayed in the menu. Pixels are shaded if they contain at least 10 hectares of crop area within the 10x10 km pixel. If there are multiple crops meeting this criteria per pixel, the predominant crop (based on production) is displayed. If a single crop is selected, and the pixel colors are shaded by level of production. The crop layers displayed on the map reflect 2010 data regardless of the timeframe selected.",
"source": "MapSPAM 2010",
"info": {
"sources": [
{
"source-url": "http://mapspam.info/",
"source-name": "MapSPAM 2010"
}
]
},
"status": "published"
}
#Register metadata
dataset_id = 'a57a457a-cee7-44a6-af0a-5c27176e0ec0'
url = f'https://staging-api.globalforestwatch.org/v1/dataset/{dataset_id}/metadata/'
headers = {'Authorization': 'Bearer ' + OAUTH, 'Content-Type': 'application/json'}
r = requests.post(url, data=json.dumps(payload), headers=headers)
print(r.url)
pprint(r.json())
layer = r.json().get('data',None)
###Output
https://staging-api.globalforestwatch.org/v1/dataset/a57a457a-cee7-44a6-af0a-5c27176e0ec0/metadata/
{'data': [{'attributes': {'application': 'aqueduct',
'createdAt': '2019-05-30T09:48:36.558Z',
'dataset': 'a57a457a-cee7-44a6-af0a-5c27176e0ec0',
'description': 'These crops were selected based on '
'their importance in the global '
'commodities market and for food '
"security. 'All crops' represent all "
'of the crops that are included in '
'the tool as displayed in the menu. '
'Pixels are shaded if they contain at '
'least 10 hectares of crop area '
'within the 10x10 km pixel. If there '
'are multiple crops meeting this '
'criteria per pixel, the predominant '
'crop (based on production) is '
'displayed. If a single crop is '
'selected, and the pixel colors are '
'shaded by level of production. The '
'crop layers displayed on the map '
'reflect 2010 data regardless of the '
'timeframe selected.',
'info': {'sources': [{'source-name': 'MapSPAM 2010',
'source-url': 'http://mapspam.info/'}]},
'language': 'en',
'name': 'Crops',
'resource': {'id': 'a57a457a-cee7-44a6-af0a-5c27176e0ec0',
'type': 'dataset'},
'source': 'MapSPAM 2010',
'status': 'published',
'updatedAt': '2019-05-30T09:48:36.558Z'},
'id': '5cefa6f46613e100100c8eb5',
'type': 'metadata'}]}
###Markdown
Updating Widget
###Code
""" Copy the old one and change desired values (in this case attributes.description)
NOTE that you cannot update id, or type, only attributes!
"""
payload = {
"widgetConfig": {
"interaction_config": [
{
"config": {
"fields": [
{
"suffix": "%",
"label": "Percentage",
"key": "percentage"
}
]
},
"name": "tooltip"
}
],
"params_config": [
{
"required": True,
"key": "crop_name"
},
{
"required": True,
"key": "year"
},
{
"required": True,
"key": "countryName"
}
],
"titleConfig": {
"baseline": "Percentage of {{crop_name}} area that is irrigated vs. rainfed in {{countryName}}",
"future": "Projected percentage of {{crop_name}} area that will be irrigated vs. rainfed in {{countryName}} in {{year}}"
},
"sqlParams": [],
"sql_config": [
{
"key_params": [
{
"required": True,
"key": "year"
},
{
"required": True,
"key": "iso"
},
{
"required": False,
"key": "commodity"
}
],
"key": "and"
}
],
"padding": {
"bottom": 30,
"right": 30,
"left": 30,
"top": 30
},
"legends": [
{
"properties": {
"categories": {
"y": {
"offset": 30,
"value": 0,
"field": {
"group": "height"
}
},
"x": {
"offset": -108,
"value": 0
}
},
"symbols": {
"shape": {
"value": "square"
},
"size": {
"value": 100
},
"y": {
"value": 304
},
"x": {
"scale": "legend-series-x"
}
},
"labels": {
"fontSize": {
"value": 14
},
"text": {
"template": "{{datum.data|left:1|upper}}{{datum.data|slice:1|truncate:15}}"
},
"y": {
"value": 308
},
"x": {
"offset": 10,
"scale": "legend-series-x"
}
}
},
"fill": "color"
}
],
"scales": [
{
"domain": {
"field": "value",
"data": "table"
},
"range": [
0,
100
],
"type": "sqrt",
"name": "r"
},
{
"domain": {
"field": "category",
"data": "categories"
},
"range": "cropColor",
"type": "ordinal",
"name": "color"
},
{
"padding": 1,
"points": True,
"domain": {
"field": "category",
"data": "categories"
},
"range": "width",
"type": "ordinal",
"name": "horizontal"
},
{
"domain": {
"field": "category",
"data": "categories"
},
"range": [
175,
330
],
"type": "ordinal",
"name": "legend-series-x"
}
],
"height": 300,
"marks": [
{
"properties": {
"enter": {
"outerRadius": {
"value": 225,
"mult": 0.47
},
"innerRadius": {
"value": 150,
"mult": 0.38
},
"startAngle": {
"field": "angle_start"
},
"endAngle": {
"field": "angle_end"
},
"fill": {
"scale": "color",
"field": "a.category"
},
"y": {
"field": {
"group": "height"
},
"mult": 0.475
},
"x": {
"field": {
"group": "width"
},
"mult": 0.525
}
}
},
"type": "arc",
"from": {
"data": "layout"
}
},
{
"properties": {
"enter": {
"fontWeight": {
"value": "medium"
},
"fontSize": {
"value": 12
},
"baseline": {
"value": "bottom"
},
"radius": {
"field": {
"group": "height"
},
"mult": 0.45
},
"theta": {
"field": "angle_mid"
},
"align": {
"value": "left"
},
"text": {
"template": "{{datum.percentage}}%"
},
"font": {
"value": "\"Roboto\""
},
"fill": {
"value": "#758290"
},
"y": {
"field": {
"group": "height"
},
"mult": 0.5
},
"x": {
"field": {
"group": "width"
},
"mult": 0.5
}
}
},
"type": "text",
"from": {
"data": "layout"
}
}
],
"name": "arc",
"data": [
{
"format": {
"property": "rows",
"type": "json"
},
"name": "table",
"url": "https://wri-01.carto.com/api/v2/sql?q=SELECT sum(value) as value, irrigation as category FROM combined01_prepared where impactparameter='Area' and scenario='SSP2-MIRO' {{and}} and commodity<>'All Cereals' and commodity<>'All Pulses' and region <> 'World' group by irrigation"
},
{
"transform": [
{
"summarize": {
"value": "sum"
},
"type": "aggregate"
}
],
"source": "table",
"name": "summary"
},
{
"transform": [
{
"with": "summary",
"type": "cross"
},
{
"field": "a.value",
"type": "pie"
},
{
"field": "percentage",
"type": "formula",
"expr": "round(datum.a.value / datum.b.sum_value * 100) === 0 ? '<1' : round(datum.a.value / datum.b.sum_value * 100)"
},
{
"field": "angle_start",
"type": "formula",
"expr": "2*PI-datum.layout_end"
},
{
"field": "angle_end",
"type": "formula",
"expr": "datum.angle_start+datum.layout_end-datum.layout_start"
},
{
"field": "angle_mid",
"type": "formula",
"expr": "2*PI-datum.layout_mid"
}
],
"source": "table",
"name": "layout"
},
{
"values": [
{
"category": "rainfed"
},
{
"category": "irrigated"
}
],
"name": "categories"
}
]
},
}
#Update widget
dataset_id = '3e7154d1-2aad-4f64-a7e7-01bcf8f7f488'
widget_id = '25facdf3-acf6-4f0e-9a0a-5ec94216407f'
url = f'https://staging-api.globalforestwatch.org/v1/dataset/{dataset_id}/widget/{widget_id}'
headers = {'Authorization': 'Bearer ' + OAUTH, 'Content-Type': 'application/json'}
r = requests.patch(url, data=json.dumps(payload), headers=headers)
pprint(r.json())
###Output
{'data': {'attributes': {'application': ['aqueduct'],
'authors': '',
'createdAt': '2017-02-07T09:28:28.398Z',
'dataset': '3e7154d1-2aad-4f64-a7e7-01bcf8f7f488',
'default': True,
'defaultEditableWidget': False,
'description': '',
'env': 'production',
'freeze': False,
'layerId': None,
'name': 'Percentage of {{crop_name}} area that '
'{{year_verb}}',
'protected': False,
'published': True,
'queryUrl': 'query/3e7154d1-2aad-4f64-a7e7-01bcf8f7f488?sql=SELECT '
'* FROM crops_stats',
'slug': 'rainfed-vs-irrigated',
'source': '',
'sourceUrl': '',
'template': False,
'thumbnailUrl': 'http://wri-api-backups.s3.amazonaws.com/resourcewatch/staging/thumbnails/25facdf3-acf6-4f0e-9a0a-5ec94216407f-1560856213000.png',
'updatedAt': '2019-06-18T14:31:23.514Z',
'userId': '5858f37140621f11066fb2f7',
'verified': False,
'widgetConfig': {'data': [{'format': {'property': 'rows',
'type': 'json'},
'name': 'table',
'url': 'https://wri-01.carto.com/api/v2/sql?q=SELECT '
'sum(value) as '
'value, irrigation '
'as category FROM '
'combined01_prepared '
'where '
"impactparameter='Area' "
'and '
"scenario='SSP2-MIRO' "
'{{and}} and '
"commodity<>'All "
"Cereals' and "
"commodity<>'All "
"Pulses' and region "
"<> 'World' group "
'by irrigation'},
{'name': 'summary',
'source': 'table',
'transform': [{'summarize': {'value': 'sum'},
'type': 'aggregate'}]},
{'name': 'layout',
'source': 'table',
'transform': [{'type': 'cross',
'with': 'summary'},
{'field': 'a.value',
'type': 'pie'},
{'expr': 'round(datum.a.value '
'/ '
'datum.b.sum_value '
'* '
'100) '
'=== '
'0 '
'? '
"'<1' "
': '
'round(datum.a.value '
'/ '
'datum.b.sum_value '
'* '
'100)',
'field': 'percentage',
'type': 'formula'},
{'expr': '2*PI-datum.layout_end',
'field': 'angle_start',
'type': 'formula'},
{'expr': 'datum.angle_start+datum.layout_end-datum.layout_start',
'field': 'angle_end',
'type': 'formula'},
{'expr': '2*PI-datum.layout_mid',
'field': 'angle_mid',
'type': 'formula'}]},
{'name': 'categories',
'values': [{'category': 'rainfed'},
{'category': 'irrigated'}]}],
'height': 300,
'interaction_config': [{'config': {'fields': [{'key': 'percentage',
'label': 'Percentage',
'suffix': '%'}]},
'name': 'tooltip'}],
'legends': [{'fill': 'color',
'properties': {'categories': {'x': {'offset': -108,
'value': 0},
'y': {'field': {'group': 'height'},
'offset': 30,
'value': 0}},
'labels': {'fontSize': {'value': 14},
'text': {'template': '{{datum.data|left:1|upper}}{{datum.data|slice:1|truncate:15}}'},
'x': {'offset': 10,
'scale': 'legend-series-x'},
'y': {'value': 308}},
'symbols': {'shape': {'value': 'square'},
'size': {'value': 100},
'x': {'scale': 'legend-series-x'},
'y': {'value': 304}}}}],
'marks': [{'from': {'data': 'layout'},
'properties': {'enter': {'endAngle': {'field': 'angle_end'},
'fill': {'field': 'a.category',
'scale': 'color'},
'innerRadius': {'mult': 0.38,
'value': 150},
'outerRadius': {'mult': 0.47,
'value': 225},
'startAngle': {'field': 'angle_start'},
'x': {'field': {'group': 'width'},
'mult': 0.525},
'y': {'field': {'group': 'height'},
'mult': 0.475}}},
'type': 'arc'},
{'from': {'data': 'layout'},
'properties': {'enter': {'align': {'value': 'left'},
'baseline': {'value': 'bottom'},
'fill': {'value': '#758290'},
'font': {'value': '"Roboto"'},
'fontSize': {'value': 12},
'fontWeight': {'value': 'medium'},
'radius': {'field': {'group': 'height'},
'mult': 0.45},
'text': {'template': '{{datum.percentage}}%'},
'theta': {'field': 'angle_mid'},
'x': {'field': {'group': 'width'},
'mult': 0.5},
'y': {'field': {'group': 'height'},
'mult': 0.5}}},
'type': 'text'}],
'name': 'arc',
'padding': {'bottom': 30,
'left': 30,
'right': 30,
'top': 30},
'params_config': [{'key': 'crop_name',
'required': True},
{'key': 'year',
'required': True},
{'key': 'countryName',
'required': True}],
'scales': [{'domain': {'data': 'table',
'field': 'value'},
'name': 'r',
'range': [0, 100],
'type': 'sqrt'},
{'domain': {'data': 'categories',
'field': 'category'},
'name': 'color',
'range': 'cropColor',
'type': 'ordinal'},
{'domain': {'data': 'categories',
'field': 'category'},
'name': 'horizontal',
'padding': 1,
'points': True,
'range': 'width',
'type': 'ordinal'},
{'domain': {'data': 'categories',
'field': 'category'},
'name': 'legend-series-x',
'range': [175, 330],
'type': 'ordinal'}],
'sqlParams': [],
'sql_config': [{'key': 'and',
'key_params': [{'key': 'year',
'required': True},
{'key': 'iso',
'required': True},
{'key': 'commodity',
'required': False}]}],
'titleConfig': {'baseline': 'Percentage '
'of '
'{{crop_name}} '
'area '
'that is '
'irrigated '
'vs. '
'rainfed '
'in '
'{{countryName}}',
'future': 'Projected '
'percentage '
'of '
'{{crop_name}} '
'area that '
'will be '
'irrigated '
'vs. '
'rainfed '
'in '
'{{countryName}} '
'in '
'{{year}}'}}},
'id': '25facdf3-acf6-4f0e-9a0a-5ec94216407f',
'type': 'widget'}}
|
dash-2019-coronavirus/notebook/Coordinates_database.ipynb | ###Markdown
This is the script for saving all coordinates as my own database. By doing so, `opencage.geocoder` does not need to go through all regions every time, as most regions already have coordinates in this database.
###Code
# Import coordinate database
GeoDB = pd.read_csv('./coordinatesDB.csv')
GeoDB.head()
# Import xlsx file and store each sheet in to a df list
xl_file = pd.ExcelFile('./data.xls',)
dfs = {sheet_name: xl_file.parse(sheet_name)
for sheet_name in xl_file.sheet_names}
# Data from each sheet can be accessed via key
keyList = list(dfs.keys())
# Data cleansing
for key, df in dfs.items():
dfs[key].loc[:,'Confirmed'].fillna(value=0, inplace=True)
dfs[key].loc[:,'Deaths'].fillna(value=0, inplace=True)
dfs[key].loc[:,'Recovered'].fillna(value=0, inplace=True)
dfs[key]=dfs[key].astype({'Confirmed':'int64', 'Deaths':'int64', 'Recovered':'int64'})
    # Rename 'Mainland China' to 'China' for the coordinate search
    dfs[key]=dfs[key].replace({'Country/Region':'Mainland China'}, 'China')
    # Prepend a zero to the date so it can be parsed by datetime.strptime as a zero-padded date
    dfs[key]['Last Update'] = '0' + dfs[key]['Last Update']
# Convert time as Australian eastern daylight time
dfs[key]['Date_last_updated_AEDT'] = [datetime.strptime(d, '%m/%d/%Y %H:%M') for d in dfs[key]['Last Update']]
dfs[key]['Date_last_updated_AEDT'] = dfs[key]['Date_last_updated_AEDT'] + timedelta(hours=16)
# Save the latest data into targetData
targetData = dfs[keyList[0]]
targetData
# Assign coordinates to regions from coordinates database
resultData = pd.merge(targetData, GeoDB, how='left', on=['Province/State', 'Country/Region'])
# Find regions that do not have coordinates
queryData = resultData.loc[resultData['lat'].isnull()]
queryData = queryData[['Province/State', 'Country/Region']]
queryData
# Use opencage.geocoder to look up coordinates for these regions
# Add coordinates for each area in the list for the latest table sheet
# As there is a request limit for the free account, we only look up coordinates for the latest table sheet
from opencage.geocoder import OpenCageGeocode
import time
import random
import progressbar
key = 'b33700b33d0a446aa6e16c0b57fc82d1' # get api key from: https://opencagedata.com
geocoder = OpenCageGeocode(key)
list_lat = [] # create empty lists
list_long = []
for index, row in queryData.iterrows(): # iterate over rows in dataframe
City = row['Province/State']
State = row['Country/Region']
# Note that 'nan' is float
if type(City) is str:
query = str(City)+','+str(State)
results = geocoder.geocode(query)
lat = results[0]['geometry']['lat']
long = results[0]['geometry']['lng']
list_lat.append(lat)
list_long.append(long)
else:
query = str(State)
results = geocoder.geocode(query)
lat = results[0]['geometry']['lat']
long = results[0]['geometry']['lng']
list_lat.append(lat)
list_long.append(long)
# create new columns from lists
queryData['lat'] = list_lat
queryData['lon'] = list_long
queryData
print('Coordinate data are generated!')
# Add the new coordinates into coordinates database
catList = [GeoDB, queryData]
GeoDB = pd.concat(catList, ignore_index=True)
GeoDB
# Save the coordinates database
GeoDB.to_csv('./coordinatesDB.csv', index = False)
# Assign coordinates to all regions using the latest coordinates database
finalData = pd.merge(targetData, GeoDB, how='left', on=['Province/State', 'Country/Region'] )
finalData
# Check if there are still any regions without coordinates (there should not be)
testData = finalData.loc[finalData['lat'].isnull()]
testData
# Save the data for heroku app
finalData.to_csv('./{}_data.csv'.format(keyList[0]), index = False)
# A variable for use in bash
# Refer to https://stackoverflow.com/questions/19579546/can-i-access-python-variables-within-a-bash-or-script-ipython-notebook-c
fileName = keyList[0]
%%bash -s "$fileName"
cp ./$1_data.csv ../../heroku_app/dash_coronavirus_2019/
echo "All files have been transferred to heroku folder.\nYou are now good to update heroku app!"
###Output
_____no_output_____ |
Notebook/Lesson-logistic-regression/solution-code/solution-code.ipynb | ###Markdown
Introduction to Logistic Regression_Authors: Kiefer Katovich, Matt Brems, Noelle Brown_--- Learning Objectives- Distinguish between regression and classification problems.- Understand how logistic regression is similar to and different from linear regression.- Fit, generate predictions from, and evaluate a logistic regression model in `sklearn`.- Understand how to interpret the coefficients of logistic regression.- Know the benefits of logistic regression as a classifier. Introduction---Logistic regression is a natural bridge to connect regression and classification.- Logistic regression is the most common binary classification algorithm.- Because it is a regression model, logistic regression will predict continuous values. - Logistic regression will predict continuous probabilities between 0 and 1. - Example: What is the probability that someone shows up to vote?- However, logistic regression almost always operates as a classification model. - Logistic regression will use these continuous predictions to classify something as 0 or 1. - Example: Based on the predicted probability, do we predict that someone votes?In this lecture, we'll only be reviewing the binary outcome case with two classes, but logistic regression can be generalized to predicting outcomes with 3 or more classes.**Some examples of when logistic regression could be used:**- Will a user purchase a product, given characteristics like income, age, and number of family members?- Does this patient have a specific disease based on their symptoms?- Will a person default on their loan?- Is the iris flower in front of me an "_Iris versicolor_?"- Given one's GPA and the prestige of a college, will a student be admitted to a specific graduate program?And many more.
###Code
# imports
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
# Import train_test_split.
from sklearn.model_selection import train_test_split
# Import logistic regression
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
Graduate School Admissions---Today, we'll be applying logistic regression to solve the following problem: "Given one's GPA, will a student be admitted to a specific graduate program?"
###Code
# Read in the data.
admissions = pd.read_csv('../data/grad_admissions.csv')
# Check first five rows.
admissions.head()
admissions.shape
###Output
_____no_output_____
###Markdown
The columns are:- `admit`: A binary 0/1 variable indicating whether or not a student was admitted, where 1 means admitted and 0 means not admitted.- `gre`: The student's [GRE (Graduate Record Exam)](https://en.wikipedia.org/wiki/Graduate_Record_Examinations) score.- `gpa`: The student's GPA.
###Code
admissions.info()
# How many missing values do we have in each column?
admissions.isnull().sum()
# Drop every row that has an NA.
admissions.dropna(inplace=True)
###Output
_____no_output_____
###Markdown
What assumption are we making when we drop rows that have at least one NA in them? - We assume that what we drop looks like what we have observed. That is, there's nothing special about the rows we happened to drop.- We might say that what we dropped is a random sample of our whole data.- It's not important to know this now, but the formal term is that our data is missing completely at random. Recap of NotationYou're quite familiar with **linear** regression:$$\begin{eqnarray*}\hat{\mathbf{y}} &=& \hat{\beta}_0 + \hat{\beta}_1x_1 + \hat{\beta}_2x_2 + \cdots + \hat{\beta}_px_p \\&=& \hat{\beta}_0 + \sum_{j=1}^p\hat{\beta}_jX_j\end{eqnarray*}$$Where:- $\hat{\mathbf{y}}$ is the vector of predicted values of $\mathbf{y}$ based on all of the inputs $x_j$.- $x_1$, $x_2$, $\ldots$, $x_p$ are the predictors.- $\hat{\beta}_0$ is the estimated intercept.- $\hat{\beta}_j$ is the estimated coefficient for the predictor $x_j$, the $j$th column in variable matrix $X$. What if we predicted `admit` with `gpa` using Linear Regression?Looking at the plot below, what are the problems with using linear regression here?
###Code
plt.figure(figsize = (12, 5))
sns.regplot(admissions['gpa'], admissions['admit'], admissions,
ci = False, scatter_kws = {'s': 2},
line_kws = {'color': 'orange'})
plt.ylim(-0.1, 1.1);
###Output
C:\Users\Home\anaconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y, data. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
###Markdown
Predicting a Binary Class---In our case we have two classes: `1=admitted` and `0=rejected`.The logistic regression is still solving for $\hat{y}$. However, in our binary classification case, $\hat{y}$ will be the probability of $y$ being one of the classes.$$\hat{y} = P(y = 1)$$We'll still try to fit a "line" of best fit to this... except it won't be perfectly linear. We need to *guarantee* that the right-hand side of the regression equation will evaluate to a probability. (That is, some number between 0 and 1!) The Logit Link Function (advanced)---We will use something called a **link function** to effectively "bend" our line of best fit so that it is a curve of best fit that matches the range or set of values in which we're interested.For logistic regression, that specific link function that transforms ("bends") our line is known as the **logit** link.$$\text{logit}\left(P(y = 1)\right) = \beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p$$$$\log\left(\frac{P(y = 1)}{1 - P(y = 1)}\right) = \beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p$$Equivalently, we assume that each independent variable $x_i$ is linearly related to the **log of the odds of success**.Remember, the purpose of the link function is to bend our line of best fit.- This is convenient because we can have any values of $X$ inputs that we want, and we'll only ever predict between 0 and 1!- However, interpreting a one-unit change gets a little harder. (More on this later.) [*image source*](https://twitter.com/ChelseaParlett/status/1279111984433127425?s=20) Fitting and making predictions with the logistic regression model.We can follow the same steps to build a logistic regression model that we follow to build a linear regression model.1. Define X & y2. Instantiate the model.3. Fit the model.4. Generate predictions.5. Evaluate model.
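As a quick numerical illustration of the logit link (not part of the original lesson code), the logit maps probabilities in $(0, 1)$ to the whole real line, and its inverse, the sigmoid, maps any linear-predictor value back into $(0, 1)$:

```python
# Logit (log-odds) and its inverse, the sigmoid
def logit(p):
    return np.log(p / (1 - p))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for p in [0.1, 0.5, 0.9]:
    z = logit(p)
    print(f'p = {p:.1f}  ->  log-odds = {z:+.3f}  ->  sigmoid(log-odds) = {sigmoid(z):.1f}')
```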
###Code
# Step 1: Split into training & testing sets
X = admissions[['gpa']]
y = admissions['admit']
X_train, X_test, y_train, y_test = train_test_split(X, y,
random_state = 50)
# Step 2: Instantiate our model.
logreg = LogisticRegression()
# Step 3: Fit our model.
logreg.fit(X_train, y_train)
print(f'Logistic Regression Intercept: {logreg.intercept_}')
print(f'Logistic Regression Coefficient: {logreg.coef_}')
###Output
Logistic Regression Intercept: [-17.50520394]
Logistic Regression Coefficient: [[4.92045156]]
###Markdown
There are two methods in `sklearn` to be aware of when using logistic regression:- `.predict()`- `.predict_proba()`
###Code
# Step 4 (part 1): Generate predicted values.
logreg.predict(X_test)[:10]
# Step 4 (part 2): Generate predicted probabilities.
np.round(logreg.predict_proba(X_test), 3)
###Output
_____no_output_____
###Markdown
How would you interpret the predict_proba() output? - This shows the probability of being rejected ($P(Y=0)$) and the probability of being admitted ($P(Y=1)$) for each observation in the testing dataset.- The first array corresponds to the first testing observation. - The `.predict()` value for this observation is 0. This is because $P(Y=0) > P(Y=1)$.- The second array corresponds to the second testing observation. - The `.predict()` value for this observation is 0. This is because $P(Y=0) > P(Y=1)$.
###Code
# Visualizing logistic regression probabilities.
plt.figure(figsize = (10, 5))
plt.scatter(X_test, y_test, s = 10);
plt.plot(X_test.sort_values('gpa'),
logreg.predict_proba(X_test.sort_values('gpa'))[:,1],
color = 'grey', alpha = 0.8, lw = 3)
plt.xlabel('GPA')
plt.ylabel('Admit')
plt.title('Predicting Admission from GPA');
# Step 5: Evaluate model.
logreg.score(X_train, y_train)
logreg.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
By default, the `.score()` method for classification models gives us the accuracy score.$$\begin{eqnarray*}\text{Accuracy} = \frac{\text{number of correct predictions}}{\text{number of total predictions}}\end{eqnarray*}$$ Remind me: what does .score() tell me for a regression model? - The $R^2$ score.- Remember that $R^2$ is the proportion of variance in our $Y$ values that are explained by our model. Using the log-odds —the natural logarithm of the odds.The combination of converting the "probability of success" to "odds of success," then taking the logarithm of that is called the **logit link function**.$$\text{logit}\big(P(y=1)\big) = \log\bigg(\frac{P(y=1)}{1-P(y=1)}\bigg) = \beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p$$We've bent our line how we want... but how do we interpret our coefficients? OddsProbabilities and odds represent the same thing in different ways. The odds for probability **p** is defined as:$$\text{odds}(p) = \frac{p}{1-p}$$The odds of a probability is a measure of how many times as likely an event is to happen than it is to not happen.**Example**: Suppose I'm looking at the probability and odds of a specific horse, "Secretariat," winning a race.- When **`p = 0.5`**: **`odds = 1`** - The horse Secretariat is as likely to win as it is to lose.- When **`p = 0.75`**: **`odds = 3`** - The horse Secretariat is three times as likely to win as it is to lose.- When **`p = 0.40`**: **`odds = 0.666..`** - The horse Secretariat is two-thirds as likely to win as it is to lose. Interpreting a one-unit change in $x_i$.$$\log\bigg(\frac{P(y=1)}{1-P(y=1)}\bigg) = \beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p$$Given this model, a one-unit change in $x_i$ implies a $\beta_i$ unit change in the log odds of success.**This is annoying**.We often convert log-odds back to "regular odds" when interpreting our coefficient... our mind understands odds better than the log of odds.**(BONUS)** So, let's get rid of the log on the left-hand side. Mathematically, we do this by "exponentiating" each side.$$\begin{eqnarray*}\log\bigg(\frac{P(y=1)}{1-P(y=1)}\bigg) &=& \beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p \\\Rightarrow e^{\Bigg(\log\bigg(\frac{P(y=1)}{1-P(y=1)}\bigg)\Bigg)} &=& e^{\Bigg(\beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p\Bigg)} \\\Rightarrow \frac{P(y=1)}{1-P(y=1)} &=& e^{\Bigg(\beta_0 + \beta_1x_1 + \beta_2x_2 + \cdots + \beta_px_p\Bigg)} \\\end{eqnarray*}$$**Interpretation**: A one-unit change in $x_i$ means that success is $e^{\beta_i}$ times as likely.
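A tiny sketch of the probability-to-odds conversion, using the Secretariat probabilities from the text:

```python
def odds(p):
    return p / (1 - p)

for p in [0.5, 0.75, 0.40]:
    print(f'p = {p:.2f}  ->  odds = {odds(p):.3f}')

# p = 0.50  ->  odds = 1.000   (as likely to win as to lose)
# p = 0.75  ->  odds = 3.000   (three times as likely to win as to lose)
# p = 0.40  ->  odds = 0.667   (two-thirds as likely to win as to lose)
```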
###Code
logreg.coef_
###Output
_____no_output_____
###Markdown
I want to interpret the coefficient $\hat{\beta}_1$ for my logistic regression model. How would I interpret this coefficient? - Our model is that $\log\bigg(\frac{P(admit=1)}{1-P(admit=1)}\bigg) = \beta_0 + \beta_1\text{GPA}$.- As GPA increases by 1, the log-odds of someone being admitted increases by 4.92.- As GPA increases by 1, someone is $e^{4.92}$ times as likely to be admitted.- As GPA increases by 1, someone is about 137.06 times as likely to be admitted to grad school.> Hint: Use the [np.exp](https://docs.scipy.org/doc/numpy/reference/generated/numpy.exp.html) function.
###Code
# Use np.exp() to exponentiate the coefficient.
np.exp(logreg.coef_)
###Output
_____no_output_____ |
Global Wheat Detection/yolov5-pseudo-labeling.ipynb | ###Markdown
YoloV5 Pseudo Labeling + OOF EvaluationThis notebook is a clean-up of [Yolov5 Pseudo Labeling](https://www.kaggle.com/nvnnghia/yolov5-pseudo-labeling), with OOF evaluation to search for the best `score_threshold` for the final prediction. References: Awesome original Pseudo Labeling notebook: https://www.kaggle.com/nvnnghia/yolov5-pseudo-labeling Evaluation Script: https://www.kaggle.com/pestipeti/competition-metric-details-script OOF-Evaluation: https://www.kaggle.com/shonenkov/oof-evaluation-mixup-efficientdet Bayesian Optimization (though it failed to improve my results): https://www.kaggle.com/shonenkov/bayesian-optimization-wbf-efficientdet
###Code
!ls ../input/best-yolov5x-fold0/wheat0.yaml
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
from tqdm.auto import tqdm
import shutil as sh
import torch
import sys
import glob
NMS_IOU_THR = 0.6
NMS_CONF_THR = 0.5
# WBF
best_iou_thr = 0.6
best_skip_box_thr = 0.43
# Box conf threshold
best_final_score = 0
best_score_threshold = 0
EPO = 15
WEIGHTS = '../input/best-yolov5x-fold0pt/best_yolov5x_fold0.pt'
CONFIG = '../input/best-yolov5x-fold0pt/yolov5x.yaml'
DATA = '../input/best-yolov5x-fold0pt/wheat0.yaml'
is_TEST = len(os.listdir('../input/global-wheat-detection/test/'))>11
is_AUG = True
is_ROT = True
VALIDATE = True
PSEUDO = True
# For OOF evaluation
marking = pd.read_csv('../input/global-wheat-detection/train.csv')
bboxs = np.stack(marking['bbox'].apply(lambda x: np.fromstring(x[1:-1], sep=',')))
for i, column in enumerate(['x', 'y', 'w', 'h']):
marking[column] = bboxs[:,i]
marking.drop(columns=['bbox'], inplace=True)
!cp -r ../input/yolov5train/* .
#sys.path.insert(0, "../input/yolov5tta/")
sys.path.insert(0, "../input/weightedboxesfusion")
def convertTrainLabel():
df = pd.read_csv('../input/global-wheat-detection/train.csv')
bboxs = np.stack(df['bbox'].apply(lambda x: np.fromstring(x[1:-1], sep=',')))
for i, column in enumerate(['x', 'y', 'w', 'h']):
df[column] = bboxs[:,i]
df.drop(columns=['bbox'], inplace=True)
df['x_center'] = df['x'] + df['w']/2
df['y_center'] = df['y'] + df['h']/2
df['classes'] = 0
from tqdm.auto import tqdm
import shutil as sh
df = df[['image_id','x', 'y', 'w', 'h','x_center','y_center','classes']]
index = list(set(df.image_id))
source = 'train'
if True:
for fold in [0]:
val_index = index[len(index)*fold//5:len(index)*(fold+1)//5]
for name, mini in tqdm(df.groupby('image_id')):
path2save = 'val2017/' if name in val_index else 'train2017/'
os.makedirs('convertor/fold{}/labels/'.format(fold)+path2save, exist_ok=True)
with open('convertor/fold{}/labels/'.format(fold)+path2save+name+".txt", 'w+') as f:
row = mini[['classes','x_center','y_center','w','h']].astype(float).values
row = row/1024
row = row.astype(str)
for j in range(len(row)):
text = ' '.join(row[j])
f.write(text)
f.write("\n")
os.makedirs('convertor/fold{}/images/{}'.format(fold,path2save), exist_ok=True)
sh.copy("../input/global-wheat-detection/{}/{}.jpg".format(source,name),'convertor/fold{}/images/{}/{}.jpg'.format(fold,path2save,name))
from ensemble_boxes import *
def run_wbf(boxes, scores, image_size=1024, iou_thr=0.5, skip_box_thr=0.7, weights=None):
labels = [np.zeros(score.shape[0]) for score in scores]
boxes = [box/(image_size) for box in boxes]
    # pass the caller's weights through to WBF instead of hardcoding None
    boxes, scores, labels = weighted_boxes_fusion(boxes, scores, labels, weights=weights, iou_thr=iou_thr, skip_box_thr=skip_box_thr)
boxes = boxes*(image_size)
return boxes, scores, labels
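# Hypothetical usage sketch of run_wbf (dummy boxes, not from the competition data):
# two models predict one overlapping box each, and WBF fuses them into a single box.
#   boxes  = [np.array([[100., 100., 200., 200.]]), np.array([[110., 105., 205., 195.]])]
#   scores = [np.array([0.9]), np.array([0.8])]
#   fused_boxes, fused_scores, fused_labels = run_wbf(
#       boxes, scores, image_size=1024,
#       iou_thr=best_iou_thr, skip_box_thr=best_skip_box_thr)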
def TTAImage(image, index):
image1 = image.copy()
if index==0:
rotated_image = cv2.rotate(image1, cv2.ROTATE_90_CLOCKWISE)
return rotated_image
elif index==1:
rotated_image2 = cv2.rotate(image1, cv2.ROTATE_90_CLOCKWISE)
rotated_image2 = cv2.rotate(rotated_image2, cv2.ROTATE_90_CLOCKWISE)
return rotated_image2
elif index==2:
rotated_image3 = cv2.rotate(image1, cv2.ROTATE_90_CLOCKWISE)
rotated_image3 = cv2.rotate(rotated_image3, cv2.ROTATE_90_CLOCKWISE)
rotated_image3 = cv2.rotate(rotated_image3, cv2.ROTATE_90_CLOCKWISE)
return rotated_image3
elif index == 3:
return image1
def rotBoxes90(boxes, im_w, im_h):
ret_boxes =[]
for box in boxes:
x1, y1, x2, y2 = box
x1, y1, x2, y2 = x1-im_w//2, im_h//2 - y1, x2-im_w//2, im_h//2 - y2
x1, y1, x2, y2 = y1, -x1, y2, -x2
x1, y1, x2, y2 = int(x1+im_w//2), int(im_h//2 - y1), int(x2+im_w//2), int(im_h//2 - y2)
x1a, y1a, x2a, y2a = min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2)
ret_boxes.append([x1a, y1a, x2a, y2a])
return np.array(ret_boxes)
def detect1Image(img, img0, model, device, aug):
img = img.transpose(2,0,1)
img = torch.from_numpy(img).to(device)
img = img.float() # uint8 to fp16/32
img /= 255.0
if img.ndimension() == 3:
img = img.unsqueeze(0)
# Inference
pred = model(img, augment=aug)[0]
# Apply NMS
pred = non_max_suppression(pred, NMS_CONF_THR, NMS_IOU_THR, merge=True, classes=None, agnostic=False)
boxes = []
scores = []
for i, det in enumerate(pred): # detections per image
# save_path = 'draw/' + image_id + '.jpg'
if det is not None and len(det):
# Rescale boxes from img_size to im0 size
det[:, :4] = scale_coords(img.shape[2:], det[:, :4], img0.shape).round()
# Write results
for *xyxy, conf, cls in det:
boxes.append([int(xyxy[0]), int(xyxy[1]), int(xyxy[2]), int(xyxy[3])])
scores.append(conf)
return np.array(boxes), np.array(scores)
# validate
import pandas as pd
import numpy as np
import numba
import re
import cv2
import ast
import matplotlib.pyplot as plt
from numba import jit
from typing import List, Union, Tuple
@jit(nopython=True)
def calculate_iou(gt, pr, form='pascal_voc') -> float:
"""Calculates the Intersection over Union.
Args:
gt: (np.ndarray[Union[int, float]]) coordinates of the ground-truth box
pr: (np.ndarray[Union[int, float]]) coordinates of the predicted box
form: (str) gt/pred coordinates format
- pascal_voc: [xmin, ymin, xmax, ymax]
- coco: [xmin, ymin, w, h]
Returns:
(float) Intersection over union (0.0 <= iou <= 1.0)
"""
if form == 'coco':
gt = gt.copy()
pr = pr.copy()
gt[2] = gt[0] + gt[2]
gt[3] = gt[1] + gt[3]
pr[2] = pr[0] + pr[2]
pr[3] = pr[1] + pr[3]
# Calculate overlap area
dx = min(gt[2], pr[2]) - max(gt[0], pr[0]) + 1
if dx < 0:
return 0.0
dy = min(gt[3], pr[3]) - max(gt[1], pr[1]) + 1
if dy < 0:
return 0.0
overlap_area = dx * dy
# Calculate union area
union_area = (
(gt[2] - gt[0] + 1) * (gt[3] - gt[1] + 1) +
(pr[2] - pr[0] + 1) * (pr[3] - pr[1] + 1) -
overlap_area
)
return overlap_area / union_area
@jit(nopython=True)
def find_best_match(gts, pred, pred_idx, threshold = 0.5, form = 'pascal_voc', ious=None) -> int:
"""Returns the index of the 'best match' between the
ground-truth boxes and the prediction. The 'best match'
is the highest IoU. (0.0 IoUs are ignored).
Args:
gts: (List[List[Union[int, float]]]) Coordinates of the available ground-truth boxes
pred: (List[Union[int, float]]) Coordinates of the predicted box
pred_idx: (int) Index of the current predicted box
threshold: (float) Threshold
form: (str) Format of the coordinates
ious: (np.ndarray) len(gts) x len(preds) matrix for storing calculated ious.
Return:
(int) Index of the best match GT box (-1 if no match above threshold)
"""
best_match_iou = -np.inf
best_match_idx = -1
for gt_idx in range(len(gts)):
if gts[gt_idx][0] < 0:
# Already matched GT-box
continue
iou = -1 if ious is None else ious[gt_idx][pred_idx]
if iou < 0:
iou = calculate_iou(gts[gt_idx], pred, form=form)
if ious is not None:
ious[gt_idx][pred_idx] = iou
if iou < threshold:
continue
if iou > best_match_iou:
best_match_iou = iou
best_match_idx = gt_idx
return best_match_idx
@jit(nopython=True)
def calculate_precision(gts, preds, threshold = 0.5, form = 'coco', ious=None) -> float:
"""Calculates precision for GT - prediction pairs at one threshold.
Args:
gts: (List[List[Union[int, float]]]) Coordinates of the available ground-truth boxes
preds: (List[List[Union[int, float]]]) Coordinates of the predicted boxes,
sorted by confidence value (descending)
threshold: (float) Threshold
form: (str) Format of the coordinates
ious: (np.ndarray) len(gts) x len(preds) matrix for storing calculated ious.
Return:
(float) Precision
"""
n = len(preds)
tp = 0
fp = 0
# for pred_idx, pred in enumerate(preds_sorted):
for pred_idx in range(n):
best_match_gt_idx = find_best_match(gts, preds[pred_idx], pred_idx,
threshold=threshold, form=form, ious=ious)
if best_match_gt_idx >= 0:
# True positive: The predicted box matches a gt box with an IoU above the threshold.
tp += 1
# Remove the matched GT box
gts[best_match_gt_idx] = -1
else:
# No match
# False positive: indicates a predicted box had no associated gt box.
fp += 1
# False negative: indicates a gt box had no associated predicted box.
fn = (gts.sum(axis=1) > 0).sum()
return tp / (tp + fp + fn)
@jit(nopython=True)
def calculate_image_precision(gts, preds, thresholds = (0.5, ), form = 'coco') -> float:
"""Calculates image precision.
Args:
gts: (List[List[Union[int, float]]]) Coordinates of the available ground-truth boxes
preds: (List[List[Union[int, float]]]) Coordinates of the predicted boxes,
sorted by confidence value (descending)
thresholds: (float) Different thresholds
form: (str) Format of the coordinates
Return:
(float) Precision
"""
n_threshold = len(thresholds)
image_precision = 0.0
ious = np.ones((len(gts), len(preds))) * -1
# ious = None
for threshold in thresholds:
precision_at_threshold = calculate_precision(gts.copy(), preds, threshold=threshold,
form=form, ious=ious)
image_precision += precision_at_threshold / n_threshold
return image_precision
# Numba typed list!
iou_thresholds = numba.typed.List()
for x in [0.5, 0.55, 0.6, 0.65, 0.7, 0.75]:
iou_thresholds.append(x)
def validate():
source = 'convertor/fold0/images/val2017'
weights = 'weights/best.pt'
if not os.path.exists(weights):
weights = WEIGHTS
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# Load model
model = torch.load(weights, map_location=device)['model'].float() # load to FP32
model.to(device).eval()
dataset = LoadImages(source, img_size=1024)
results = []
for path, img, img0, vid_cap in dataset:
image_id = os.path.basename(path).split('.')[0]
img = img.transpose(1,2,0) # [H, W, 3]
enboxes = []
enscores = []
# only rot, no flip
if is_ROT:
for i in range(4):
img1 = TTAImage(img, i)
boxes, scores = detect1Image(img1, img0, model, device, aug=False)
for _ in range(3-i):
boxes = rotBoxes90(boxes, *img.shape[:2])
enboxes.append(boxes)
enscores.append(scores)
# flip
boxes, scores = detect1Image(img, img0, model, device, aug=is_AUG)
enboxes.append(boxes)
enscores.append(scores)
#boxes, scores, labels = run_wbf(enboxes, enscores, image_size=1024, iou_thr=WBF_IOU_THR, skip_box_thr=WBF_CONF_THR)
#boxes = boxes.astype(np.int32).clip(min=0, max=1024)
#boxes[:, 2] = boxes[:, 2] - boxes[:, 0]
#boxes[:, 3] = boxes[:, 3] - boxes[:, 1]
#boxes = boxes[scores >= 0.05].astype(np.int32)
#scores = scores[scores >= float(0.05)]
records = marking[marking['image_id'] == image_id]
gtboxes = records[['x', 'y', 'w', 'h']].values
gtboxes = gtboxes.astype(np.int32).clip(min=0, max=1024)
gtboxes[:, 2] = gtboxes[:, 0] + gtboxes[:, 2]
gtboxes[:, 3] = gtboxes[:, 1] + gtboxes[:, 3]
result = {
'image_id': image_id,
'pred_enboxes': enboxes, # xyhw
'pred_enscores': enscores,
'gt_boxes': gtboxes, # xyhw
}
results.append(result)
return results
def calculate_final_score(all_predictions, iou_thr, skip_box_thr, score_threshold):
final_scores = []
for i in range(len(all_predictions)):
gt_boxes = all_predictions[i]['gt_boxes'].copy()
enboxes = all_predictions[i]['pred_enboxes'].copy()
enscores = all_predictions[i]['pred_enscores'].copy()
image_id = all_predictions[i]['image_id']
pred_boxes, scores, labels = run_wbf(enboxes, enscores, image_size=1024, iou_thr=iou_thr, skip_box_thr=skip_box_thr)
pred_boxes = pred_boxes.astype(np.int32).clip(min=0, max=1024)
indexes = np.where(scores>score_threshold)
pred_boxes = pred_boxes[indexes]
scores = scores[indexes]
image_precision = calculate_image_precision(gt_boxes, pred_boxes,thresholds=iou_thresholds,form='pascal_voc')
final_scores.append(image_precision)
return np.mean(final_scores)
def show_result(sample_id, preds, gt_boxes):
sample = cv2.imread(f'../input/global-wheat-detection/train/{sample_id}.jpg', cv2.IMREAD_COLOR)
sample = cv2.cvtColor(sample, cv2.COLOR_BGR2RGB)
fig, ax = plt.subplots(1, 1, figsize=(16, 8))
for pred_box in preds:
cv2.rectangle(
sample,
(pred_box[0], pred_box[1]),
(pred_box[2], pred_box[3]),
(220, 0, 0), 2
)
for gt_box in gt_boxes:
cv2.rectangle(
sample,
(gt_box[0], gt_box[1]),
(gt_box[2], gt_box[3]),
(0, 0, 220), 2
)
ax.set_axis_off()
ax.imshow(sample)
ax.set_title("RED: Predicted | BLUE - Ground-truth")
# Bayesian Optimize
from skopt import gp_minimize, forest_minimize
from skopt.utils import use_named_args
from skopt.plots import plot_objective, plot_evaluations, plot_convergence, plot_regret
from skopt.space import Categorical, Integer, Real
def log(text):
print(text)
def optimize(space, all_predictions, n_calls=10):
@use_named_args(space)
def score(**params):
log('-'*10)
log(params)
final_score = calculate_final_score(all_predictions, **params)
log(f'final_score = {final_score}')
log('-'*10)
return -final_score
return gp_minimize(func=score, dimensions=space, n_calls=n_calls)
from utils.datasets import *
from utils.utils import *
def makePseudolabel():
source = '../input/global-wheat-detection/test/'
weights = WEIGHTS
imagenames = os.listdir(source)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# Load model
model = torch.load(weights, map_location=device)['model'].float() # load to FP32
model.to(device).eval()
dataset = LoadImages(source, img_size=1024)
path2save = 'train2017/'
if not os.path.exists('convertor/fold0/labels/'+path2save):
os.makedirs('convertor/fold0/labels/'+path2save)
if not os.path.exists('convertor/fold0/images/{}'.format(path2save)):
os.makedirs('convertor/fold0/images/{}'.format(path2save))
for path, img, img0, vid_cap in dataset:
image_id = os.path.basename(path).split('.')[0]
img = img.transpose(1,2,0) # [H, W, 3]
enboxes = []
enscores = []
# only rot, no flip
if is_ROT:
for i in range(4):
img1 = TTAImage(img, i)
boxes, scores = detect1Image(img1, img0, model, device, aug=False)
for _ in range(3-i):
boxes = rotBoxes90(boxes, *img.shape[:2])
enboxes.append(boxes)
enscores.append(scores)
# flip
boxes, scores = detect1Image(img, img0, model, device, aug=is_AUG)
enboxes.append(boxes)
enscores.append(scores)
boxes, scores, labels = run_wbf(enboxes, enscores, image_size=1024, iou_thr=best_iou_thr, skip_box_thr=best_skip_box_thr)
boxes = boxes.astype(np.int32).clip(min=0, max=1024)
boxes[:, 2] = boxes[:, 2] - boxes[:, 0]
boxes[:, 3] = boxes[:, 3] - boxes[:, 1]
indices = scores >= best_score_threshold
boxes = boxes[indices]
scores = scores[indices]
lineo = ''
for box in boxes:
x1, y1, w, h = box
xc, yc, w, h = (x1+w/2)/1024, (y1+h/2)/1024, w/1024, h/1024
lineo += '0 %f %f %f %f\n'%(xc, yc, w, h)
fileo = open('convertor/fold0/labels/'+path2save+image_id+".txt", 'w+')
fileo.write(lineo)
fileo.close()
sh.copy("../input/global-wheat-detection/test/{}.jpg".format(image_id),'convertor/fold0/images/{}/{}.jpg'.format(path2save,image_id))
if PSEUDO or VALIDATE:
convertTrainLabel()
if PSEUDO:
# this gives worse results
'''
if VALIDATE and is_TEST:
all_predictions = validate()
for score_threshold in tqdm(np.arange(0, 1, 0.01), total=np.arange(0, 1, 0.01).shape[0]):
final_score = calculate_final_score(all_predictions, score_threshold)
if final_score > best_final_score:
best_final_score = final_score
best_score_threshold = score_threshold
print('-'*30)
print(f'[Best Score Threshold]: {best_score_threshold}')
print(f'[OOF Score]: {best_final_score:.4f}')
print('-'*30)
'''
makePseudolabel()
if is_TEST:
!python train.py --weights {WEIGHTS} --img 1024 --batch 4 --epochs {EPO} --data {DATA} --cfg {CONFIG}
else:
!python train.py --weights {WEIGHTS} --img 1024 --batch 4 --epochs 1 --data {DATA} --cfg {CONFIG}
if VALIDATE and is_TEST:
all_predictions = validate()
# Bayesian Optimization: worse results.
'''
space = [
Real(0, 1, name='iou_thr'),
Real(0.25, 1, name='skip_box_thr'),
Real(0, 1, name='score_threshold'),
]
opt_result = optimize(
space,
all_predictions,
n_calls=50,
)
best_final_score = -opt_result.fun
best_iou_thr = opt_result.x[0]
best_skip_box_thr = opt_result.x[1]
best_score_threshold = opt_result.x[2]
print('-'*13 + 'WBF' + '-'*14)
print("[Baseline score]", calculate_final_score(all_predictions, 0.6, 0.43, 0))
print(f'[Best Iou Thr]: {best_iou_thr:.3f}')
print(f'[Best Skip Box Thr]: {best_skip_box_thr:.3f}')
print(f'[Best Score Thr]: {best_score_threshold:.3f}')
print(f'[Best Score]: {best_final_score:.4f}')
print('-'*30)
'''
for score_threshold in tqdm(np.arange(0, 1, 0.01), total=np.arange(0, 1, 0.01).shape[0]):
final_score = calculate_final_score(all_predictions, best_iou_thr, best_skip_box_thr, score_threshold)
if final_score > best_final_score:
best_final_score = final_score
best_score_threshold = score_threshold
print('-'*30)
print(f'[Best Score Threshold]: {best_score_threshold}')
print(f'[OOF Score]: {best_final_score:.4f}')
print('-'*30)
def format_prediction_string(boxes, scores):
pred_strings = []
for j in zip(scores, boxes):
pred_strings.append("{0:.4f} {1} {2} {3} {4}".format(j[0], j[1][0], j[1][1], j[1][2], j[1][3]))
return " ".join(pred_strings)
def detect():
source = '../input/global-wheat-detection/test/'
weights = 'weights/best.pt'
if not os.path.exists(weights):
weights = WEIGHTS
imagenames = os.listdir(source)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
# Load model
model = torch.load(weights, map_location=device)['model'].float() # load to FP32
model.to(device).eval()
dataset = LoadImages(source, img_size=1024)
results = []
fig, ax = plt.subplots(5, 2, figsize=(30, 70))
count = 0
for path, img, img0, vid_cap in dataset:
image_id = os.path.basename(path).split('.')[0]
img = img.transpose(1,2,0) # [H, W, 3]
enboxes = []
enscores = []
# only rot, no flip
if is_ROT:
for i in range(4):
img1 = TTAImage(img, i)
boxes, scores = detect1Image(img1, img0, model, device, aug=False)
for _ in range(3-i):
boxes = rotBoxes90(boxes, *img.shape[:2])
enboxes.append(boxes)
enscores.append(scores)
# flip
boxes, scores = detect1Image(img, img0, model, device, aug=is_AUG)
enboxes.append(boxes)
enscores.append(scores)
boxes, scores, labels = run_wbf(enboxes, enscores, image_size=1024, iou_thr=best_iou_thr, skip_box_thr=best_skip_box_thr)
boxes = boxes.astype(np.int32).clip(min=0, max=1024)
boxes[:, 2] = boxes[:, 2] - boxes[:, 0]
boxes[:, 3] = boxes[:, 3] - boxes[:, 1]
indices = scores >= best_score_threshold
boxes = boxes[indices]
scores = scores[indices]
if count<10:
img_ = cv2.imread(path) # BGR
for box, score in zip(boxes,scores):
cv2.rectangle(img_, (box[0], box[1]), (box[2]+box[0], box[3]+box[1]), (220, 0, 0), 2)
cv2.putText(img_, '%.2f'%(score), (box[0], box[1]), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255,255,255), 2, cv2.LINE_AA)
ax[count%5][count//5].imshow(img_)
count+=1
result = {
'image_id': image_id,
'PredictionString': format_prediction_string(boxes, scores)
}
results.append(result)
return results
results = detect()
test_df = pd.DataFrame(results, columns=['image_id', 'PredictionString'])
test_df.to_csv('submission.csv', index=False)
test_df.head()
!rm -rf convertor
###Output
_____no_output_____ |
VeriAnaliziOlculeri.ipynb | ###Markdown
What Are the Median, Mode, Standard Deviation, Variance, Covariance and Correlation? Median: the median is the value located in the middle of a data set. The data must be sorted. If there is an even number of values, the median is the average of the two middle values.
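As a quick cross-check, Python's standard library also provides `statistics.median`, which sorts the data internally:

```python
import statistics

lst1 = [1, 4, 4, 7, 12, 48, 55, 58, 600, 1500]
statistics.median(lst1)   # (12 + 48) / 2 = 30.0
```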
###Code
lst1 = [1,4,4,7,12,48,55,58,600,1500]
lst2 = [5,2,56,2,45,7,36,56,1500]
# the median of lst1 is (12+48)/2 = 30
import numpy as np
import math
###Output
_____no_output_____
###Markdown
Finding the median with NumPy
###Code
np.median([lst1])
sorted(lst2)
np.median([sorted(lst2)])
###Output
_____no_output_____
###Markdown
Mode: the mode is the most frequently repeated value in the data.
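For example, one way to find the mode(s) without writing the counting loops by hand is `collections.Counter` (a small sketch using the same values as below):

```python
from collections import Counter

lst2 = [5, 2, 56, 2, 45, 7, 36, 56, 1500]
counts = Counter(lst2)
max_count = max(counts.values())
modes = [value for value, count in counts.items() if count == max_count]
print(modes, max_count)   # [2, 56] 2
```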
###Code
print("liste1 : ",lst1)
print("liste2 : ",lst2)
s = {}
for i in lst1:
s[str(i)] = 0
for j in lst1:
if j == i:
s[str(i)] = s[str(i)] +1
s
a = [key for m in [max(s.values())] for key,val in s.items() if val == m]
b = max(s.values())
print("Mod: ",a," tekrar sayısı: ",b)
s = {}
for i in lst2:
s[str(i)] = 0
for j in lst2:
if j == i:
s[str(i)] = s[str(i)] +1
s
a = [key for m in [max(s.values())] for key,val in s.items() if val == m]
b = max(s.values())
print("Mod: ",a," tekrar sayısı: ",b)
###Output
Mod: ['2', '56'] tekrar sayısı: 2
###Markdown
Standard Deviation: the standard deviation is a measure of dispersion around the center. It is used to understand whether the data are consistent. If the standard deviation is high, the values are far from each other and the data are less consistent. If the standard deviation is low, the values are close to each other and the data are more consistent. The standard deviation is useful because it is on the same scale as the data it is calculated from. The standard deviation is the square root of the variance. For grouped data the standard deviation is calculated as follows: → find the arithmetic mean of the distribution → take each measurement's difference from the arithmetic mean → square the differences and sum them → divide the sum by the number of elements (N) → when calculating the sample standard deviation, (N-1) is used instead of N → take the square root of the result.
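For example, NumPy's `ddof` argument switches between the population formula (divide by N) and the sample formula (divide by N-1) described above:

```python
import numpy as np

lst1 = [10, 20, 20, 30, 10, 40, 80, 90, 100, 100]
np.std(lst1)          # population standard deviation, divides by N
np.std(lst1, ddof=1)  # sample standard deviation, divides by N - 1
```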
###Code
lst1 = [10,20,20,30,10,40,80,90,100,100]
lst2 = [50,60,50,70,40,50,30,40,40,70]
ort = sum(lst1)/10
ort
a = []
for i in lst1:
b = i-ort
a.append(b)
a
akare = []
for i in a:
b = i*i
akare.append(b)
akare
kareToplam =sum(akare)
kareToplam
s = math.sqrt(kareToplam/10)
print("Liste1 için Standart Sapma: ",s)
###Output
Liste1 için Standart Sapma: 36.05551275463989
###Markdown
Finding the standard deviation with NumPy
###Code
np.std(lst1)
lst2
ort = sum(lst2)/10
ort
a = []
for i in lst2:
b = i-ort
a.append(b)
a
akare = []
for i in a:
b = i*i
akare.append(b)
akare
kareToplam =sum(akare)
kareToplam
s = math.sqrt(kareToplam/10)
print("Liste2 için Standart Sapma: ",s)
###Output
Liste2 için Standart Sapma: 12.649110640673518
###Markdown
Finding the standard deviation with NumPy
###Code
np.std(lst2)
###Output
_____no_output_____
###Markdown
Variance: variance is a measure of how much the data spread out from the mean of the distribution. It tells how far the data points are from the mean of the distribution. It can be defined as the average of the squared differences between each data point and the mean of the distribution. It is the square of the standard deviation. Key differences between variance and standard deviation: → variance is a numerical value that describes the variability of observations from their arithmetic mean, while standard deviation is a measure of the dispersion of the observations in a data set → variance is nothing but the average of the squared deviations; standard deviation, on the other hand, is the root-mean-square deviation → variance is denoted by sigma squared (σ²) while standard deviation is labelled sigma (σ) → variance is expressed in squared units, which are usually larger than the values in the data set, unlike the standard deviation, which is expressed in the same units as the values in the data set → variance measures how spread out the individuals in a group are; conversely, standard deviation measures how much the observations of a data set differ from the mean. Finding the variance with NumPy
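A quick check of the relationship stated above (the variance is the square of the standard deviation), before the next cell computes both with NumPy:

```python
import numpy as np

lst1 = [10, 20, 20, 30, 10, 40, 80, 90, 100, 100]
np.isclose(np.var(lst1), np.std(lst1) ** 2)   # True
```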
###Code
print("liste1 : ",lst1)
print("liste2 : ",lst2)
stdSapma = np.std(lst1)
print("Liste1 için Standart Sapma: ",stdSapma)
Varyans =np.var(lst1)
print("Liste1 için Varyans: ",Varyans)
stdSapma = np.std(lst2)
print("Liste2 için Standart Sapma: ",stdSapma)
Varyans =np.var(lst2)
print("Liste2 için Varyans: ",Varyans)
###Output
Liste2 için Varyans: 160.0
###Markdown
Covariance: covariance is a concept that measures the variability of the linear relationship between two variables. If the value computed with the covariance formula is positive, it means there is a positive relationship; if it is negative, it means there is a negative relationship. The magnitude of the covariance value does not mean anything by itself; it depends entirely on the data. How is covariance calculated? COV(x, y): the covariance of the variables x and y; xi, yi: the individual observations; x̄, ȳ: the means of x and y; n: the total number of data points. Covariance matrix: the covariance matrix is a matrix that contains the pairwise covariance values of these variables. In an example covariance matrix, note that the main diagonal consists of the variances of the variables.
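NumPy can also compute the covariance directly; `np.cov` normalizes by N-1 by default, matching the step-by-step calculation in the next cells:

```python
import numpy as np

liste1 = [1, 3, 5, 4, 6, 8, 5, 9, 10, 9]
liste2 = [10, 20, 20, 30, 10, 40, 80, 90, 100, 100]
np.cov(liste1, liste2)[0, 1]   # off-diagonal entry of the covariance matrix, ~86.67
```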
###Code
liste1 = [1,3,5,4,6,8,5,9,10,9]
liste2 = [10, 20, 20, 30, 10, 40, 80, 90, 100, 100]
ortx = sum(liste1)/10
ortx
orty = sum(liste2)/10
orty
a = []
for i in liste1:
b = i-ortx
a.append(b)
a
c = []
for i in liste2:
b = i-orty
c.append(b)
c
liste3 = [a*b for a,b in zip(a,c)]
liste3
Kovaryans = sum(liste3)/9
print("Kovaryans : ",Kovaryans)
###Output
Kovaryans : 86.66666666666667
###Markdown
Correlation: correlation is used to determine the direction of the relationship between two variables. It can also be described as a standardized form of covariance. It always gives a value between -1 and +1. A positive correlation means the relationship between the two variables is in the same direction, while a negative one means it is in the opposite direction. Covariance and correlation come to the fore mostly in calculations that measure the link between financial variables. These methods measure the link between variables and its direction, but they do not give a cause-and-effect relationship. For example, we cannot say that rising inflation causes the price of rebar to increase, but we can see how much the increase in inflation is related to the increase in rebar prices. Covariance vs. correlation: → covariance determines the direction of the relationship between two variables; it can be +, – or 0 → there is no upper or lower bound for the computed covariance value; it depends entirely on the values in the data set → correlation determines both the direction and the strength of the relationship between two variables → correlation is the standardized form of covariance and is always in the (-1, +1) range → correlation does not establish cause and effect! For example, a correlation may be found between ice cream consumption and an increase in crime rates, but we cannot say that crime rates rise because ice cream consumption rises; a different underlying cause, such as rising temperatures, may explain it → a high correlation does not mean it is statistically valid; this should be tested against the size of the data set. How is correlation calculated?
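The cells below compute it step by step. NumPy's `np.corrcoef` also computes the Pearson correlation directly; note that it uses the same normalization for the covariance and the standard deviations, so its value can differ slightly from a calculation that mixes the sample covariance (N-1) with population standard deviations (N):

```python
import numpy as np

liste1 = [1, 3, 5, 4, 6, 8, 5, 9, 10, 9]
liste2 = [10, 20, 20, 30, 10, 40, 80, 90, 100, 100]
np.corrcoef(liste1, liste2)[0, 1]   # Pearson correlation coefficient
```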
###Code
print("liste1 : ",liste1)
print("liste2 : ",liste2)
print("Kovaryans : ",Kovaryans)
###Output
Kovaryans : 86.66666666666667
###Markdown
Finding the standard deviations with NumPy (used below to compute the correlation)
###Code
sx = np.std(liste1)
sx
sy = np.std(liste2)
sy
Kor = Kovaryans/(sx*sy)
print("Korelasyon : ",Kor)
###Output
Korelasyon : 0.8606629658238706
|
case3/case3_going_to_be_final-Emilinsaadotuusi.ipynb | ###Markdown
Case 3. Heart Disease Classification Cognitive Systems for Health Technology Applications 19.3.2019, Emil Rantanen and Wille Tuovinen, Metropolia University of Applied Sciences 1. ObjectivesThis is the code made for the Case 3 exercise of the Cognitive Systems for Health Technology Applications course. The goal of this assignment was to use recurrent and convolutional neural networks to process text data and predict ratings from review text. Links to code used to complete this assignment: Case 3. Data analysis and first classification experiments https://github.com/sakluk/cognitive-systems-for-health-technology/blob/master/Week%206.%20Case%203%20-%20First%20classification%20experiments.ipynb Embedding, LSTM, GRU and Conv1D https://github.com/sakluk/cognitive-systems-for-health-technology/blob/master/Week%207.%20embedding-lstm-gru-and-conv1d.ipynb There were also pieces of code based on the previous cases.
###Code
# Import libraries that are needed in this assignment
from __future__ import print_function
import numpy as np # linear algebra
import pandas as pd # data processing
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense, Embedding, LSTM, Bidirectional, Dropout
from keras.preprocessing import sequence
from keras.models import Sequential
#For CNN
from keras.layers import Conv1D, Activation, MaxPooling1D, Dropout, Flatten, Dense
from keras import optimizers
import os
print(os.listdir("A:/Downloads/Case3/"))
# Create dataframes train and test from the data
train = pd.read_csv('A:/Downloads/Case3/drugsComTrain_raw.csv')
test = pd.read_csv('A:/Downloads/Case3/drugsComTest_raw.csv')
train.head(10)
test.head(10)
print("Training data shape:", train.shape)
print("Test data shape:", test.shape)
###Output
Training data shape: (161297, 7)
Test data shape: (53766, 7)
###Markdown
Create labels based on the original article, Grässer et al. (2018): labels between 1 and 5
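The interval mapping in the next cell can also be written more compactly; a small equivalent sketch (the name `labels_alt` is just illustrative, and it assumes integer ratings 1-10):

```python
import pandas as pd

ratings = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
labels_alt = (ratings + 1) // 2   # -> 1, 1, 2, 2, 3, 3, 4, 4, 5, 5
```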
###Code
r = train['rating']
labels = 1*(( 1 <= r ) & ( r <= 2 )) + 2*(( 3 <= r ) & ( r <= 4 )) \
+ 3*(( 5 <= r ) & ( r <= 6 )) + 4*(( 7 <= r ) & ( r <= 8 )) \
+ 5*(( 9 <= r ) & ( r <= 10))
# Add the label column to the data
train['label'] = labels
# Check the new data
train.head(10)
# Check ratings to labels conversion
import matplotlib.pyplot as plt
train.plot(x = 'rating', y = 'label', kind = 'scatter')
plt.show()
# Plot distribution of labels
train.hist(column = 'label', bins = np.arange(1, 7), align = 'left');
###Output
_____no_output_____
###Markdown
Convert reviews to padded sequences
###Code
# Read a part of the reviews and create training sequences (x_train)
samples = train['review'].iloc[:10000]
tokenizer = Tokenizer(num_words = 1000)
tokenizer.fit_on_texts(samples)
sequences = tokenizer.texts_to_sequences(samples)
x_train = pad_sequences(sequences, maxlen = 500)
# Read a part of the reviews and create testing sequences (x_test)
test_samples = test['review'].iloc[:10000]
test_tokenizer = Tokenizer(num_words = 1000)
test_tokenizer.fit_on_texts(test_samples)
test_sequences = tokenizer.texts_to_sequences(test_samples)
x_test = pad_sequences(test_sequences, maxlen = 500)
###Output
_____no_output_____
###Markdown
Convert labels to one-hot-categories
###Code
# Convert the labels to one_hot_category values
one_hot_labels = to_categorical(labels[:10000], num_classes = 7)
###Output
_____no_output_____
###Markdown
Check the shapes
###Code
# Check the training and label sets
x_train.shape, one_hot_labels.shape
print(one_hot_labels)
print(" ")
print(x_train)
# We use the same plotting commands several times, so create a function for that purpose
def plot_history(history):
f, ax = plt.subplots(1, 2, figsize = (16, 7))
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.sca(ax[0])
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.sca(ax[1])
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
# Similarly create a function for model training, for demonstration purposes we use constant values
def train_model(model, x, y, e = 10, bs = 32, v = 1, vs = 0.25):
h = model.fit(x, y, epochs = e, batch_size = bs, verbose = v, validation_split = vs)
return h
# First model: Embedding layer -> Flatten -> Dense classifier
m0 = Sequential()
m0.add(Embedding(1000, 64, input_length = 500)) # 1000 = num_words, 64 = Embedding layers, 500 = sequence length
m0.add(Flatten())
m0.add(Dense(32, activation = 'relu'))
m0.add(Dense(7, activation = 'softmax'))
m0.compile(optimizer = 'rmsprop', loss = 'categorical_crossentropy', metrics = ['acc'])
m0.summary()
#Train the first model and plot the history
h0 = train_model(m0, x_train, one_hot_labels)
plot_history(h0)
###Output
Train on 7500 samples, validate on 2500 samples
Epoch 1/10
7500/7500 [==============================] - 3s 351us/step - loss: 0.8376 - acc: 0.6629 - val_loss: 0.7720 - val_acc: 0.6664
Epoch 2/10
7500/7500 [==============================] - 2s 254us/step - loss: 0.5941 - acc: 0.7760 - val_loss: 0.6888 - val_acc: 0.7436
Epoch 3/10
7500/7500 [==============================] - 2s 249us/step - loss: 0.4054 - acc: 0.8496 - val_loss: 0.8014 - val_acc: 0.6724
Epoch 4/10
7500/7500 [==============================] - 2s 252us/step - loss: 0.2240 - acc: 0.9259 - val_loss: 0.9369 - val_acc: 0.7140
Epoch 5/10
7500/7500 [==============================] - 2s 247us/step - loss: 0.1078 - acc: 0.9693 - val_loss: 1.1679 - val_acc: 0.7096
Epoch 6/10
7500/7500 [==============================] - 2s 249us/step - loss: 0.0498 - acc: 0.9879 - val_loss: 1.4092 - val_acc: 0.6860
Epoch 7/10
7500/7500 [==============================] - 2s 252us/step - loss: 0.0248 - acc: 0.9931 - val_loss: 1.6195 - val_acc: 0.6976
Epoch 8/10
7500/7500 [==============================] - 2s 251us/step - loss: 0.0136 - acc: 0.9963 - val_loss: 1.9483 - val_acc: 0.7028
Epoch 9/10
7500/7500 [==============================] - 2s 254us/step - loss: 0.0091 - acc: 0.9973 - val_loss: 2.2918 - val_acc: 0.7096
Epoch 10/10
7500/7500 [==============================] - 2s 251us/step - loss: 0.0074 - acc: 0.9973 - val_loss: 2.6184 - val_acc: 0.7100
###Markdown
New model using CNN
###Code
# Lets Create a basic Sequential model with several Conv1D layers
model = Sequential()
model.add(Embedding(1000, 64, input_length = 500))
model.add(Conv1D(32, (3), activation = 'relu', input_shape = (161297, 7)))
model.add(Conv1D(32, (3), activation = 'relu'))
model.add(MaxPooling1D(pool_size = (2)))
model.add(Dropout(0.25))
model.add(Conv1D(64, (3), activation = 'relu'))
model.add(Conv1D(64, (3), activation = 'relu'))
model.add(MaxPooling1D(pool_size = (2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(32, activation = 'relu'))
model.add(Dropout(0.5))
model.add(Dense(7, activation = 'softmax'))
# Try a custom metrics, needs to be calculated in backend (Tensorflow)
from keras import backend
def rmse(y_true, y_pred):
return backend.sqrt(backend.mean(backend.square(y_pred - y_true), axis=-1))
sgd = optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer = sgd,
loss='categorical_crossentropy',
metrics = ["accuracy", "mse", rmse])
model.summary()
h1 = train_model(model, x_train, one_hot_labels)
plot_history(h1)
# Train the first model and plot the history
#h1 = train_model(model2, x_train, one_hot_labels)
#plot_history(h1)
###Output
_____no_output_____ |
Coursera/Python for Data Science-IBM/Quiz/Week-3/Objects-and-Classes.ipynb | ###Markdown
1. Consider the class Points, what are the data attributes:
###Code
class Point(object):
def __init__(self,x,y):
self.x=x
self.y=y
def print_point(self):
print('x=',self.x,'y=',self.y)
###Output
_____no_output_____
###Markdown
Ans: self.x self.y 2. What is the result of running the following lines of code ?
###Code
class Points(object):
def __init__(self,x,y):
self.x=x
self.y=y
def print_point(self):
print('x=',self.x,' y=',self.y)
p1=Points("A","B")
p1.print_point()
###Output
x= A y= B
###Markdown
Ans: x= A y= B 3. What is the result of running the following lines of code ?
###Code
class Points(object):
def __init__(self,x,y):
self.x=x
self.y=y
def print_point(self):
print('x=',self.x,' y=',self.y)
p2=Points(1,2)
p2.x='A'
p2.print_point()
###Output
x= A y= 2
|
notebooks/birdsong/05.0-model-song-MI.ipynb | ###Markdown
Model MI for each species 1. load datasets 2. fit models to each species 3. calculate curvature for each model
###Code
import pandas as pd
import numpy as np
from parallelspaper.config.paths import DATA_DIR
from parallelspaper import model_fitting as mf
from tqdm.autonotebook import tqdm
from parallelspaper.quickplots import plot_model_fits
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
load MI_DF
###Code
MI_DF = pd.read_pickle(DATA_DIR / 'MI_DF/birdsong/birdsong_MI_DF.pickle')
# prep for new data in dataframe
MI_DF = MI_DF.assign(**{i:np.nan for i in ['exp_results', 'pow_results', 'concat_results',
'R2_exp', 'R2_concat', 'R2_power', 'AICc_exp',
'AICc_concat', 'AICc_power', 'bestfitmodel', 'curvature', 'min_peak']})
MI_DF['curvature'] = MI_DF['curvature'].astype(object)
n = 100 # max distance for computation
for idx, row in tqdm(MI_DF.iterrows(), total=len(MI_DF)):
# get signal
sig = np.array(row.MI-row.MI_shuff)
distances = row.distances
sig = sig
# fit models
results_power, results_exp, results_pow_exp, best_fit_model = mf.fit_models(distances, sig)
# get fit results
R2_exp, R2_concat, R2_power, AICc_exp, \
AICc_pow, AICc_concat = mf.fit_results(sig, distances,
results_exp, results_power,
results_pow_exp)
# get model y
distances_mod = np.logspace(0,np.log10(n), base=10, num=1000)
if best_fit_model == 'pow_exp':
y_model = mf.get_y(mf.pow_exp_decay, results_pow_exp, distances_mod)
elif best_fit_model == 'exp':
y_model = mf.get_y(mf.exp_decay, results_exp, distances_mod)
elif best_fit_model == 'pow':
y_model = mf.get_y(mf.powerlaw_decay, results_power, distances_mod)
# get curvature of model_y
curvature_model = mf.curvature(np.log(y_model))
# if the best fit model is pow_exp, then grab the min peak
if best_fit_model == 'pow_exp':
# get peaks of curvature
peaks = np.where((
(curvature_model[:-1] < curvature_model[1:])[1:] & (curvature_model[1:] < curvature_model[:-1])[:-1]
))
min_peak = peaks[0][0]
else:
min_peak = np.nan
# get save model fit results to MI_DF
MI_DF.loc[idx, np.array(['exp_results', 'pow_results', 'concat_results',
'R2_exp', 'R2_concat', 'R2_power', 'AICc_exp',
'AICc_concat', 'AICc_power', 'bestfitmodel', 'curvature', 'min_peak'])] = [
results_exp, results_power, results_pow_exp,
R2_exp, R2_concat, R2_power, AICc_exp,
AICc_concat, AICc_pow, best_fit_model,
curvature_model, min_peak
]
# quick plot of model fitting
plot_model_fits(row.MI, row.MI_shuff, distances, results_power, results_exp, results_pow_exp)
print(row.species, row.type, best_fit_model, row.n_elements)
MI_DF.to_pickle((DATA_DIR / 'MI_DF/birdsong/birdsong_MI_DF_fitted.pickle'))
###Output
_____no_output_____ |
code-colab/Tensorflow - Introduction to Computer Vision Fashion MNIST.ipynb | ###Markdown
Import Module
###Code
from tensorflow import keras
import numpy as np
import tensorflow as tf
###Output
2021-08-21 21:46:46.229459: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-08-21 21:46:46.229572: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
###Markdown
Load Data
###Code
keras.__version__, tf.__version__, np.__version__
fashion_mnist = keras.datasets.fashion_mnist
fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
train_images.shape, train_labels.shape
np.set_printoptions(linewidth=200)
train_images[0], train_labels[0]
import matplotlib.pyplot as plt
plt.imshow(train_images[0]), train_labels[0]
plt.imshow(train_images[1])
###Output
_____no_output_____
###Markdown
Normalize Data
###Code
train_images = train_images / 255.0
test_images = test_images / 255.0
train_images.shape, test_images.shape
###Output
_____no_output_____
###Markdown
Create Model
###Code
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28,28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 784) 0
_________________________________________________________________
dense (Dense) (None, 128) 100480
_________________________________________________________________
dense_1 (Dense) (None, 10) 1290
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
_________________________________________________________________
###Markdown
Compile Model
###Code
model.compile(optimizer = tf.optimizers.Adam(),
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Fit the Model
###Code
model.fit(train_images, train_labels, epochs=15)
model.evaluate(test_images, test_labels)
classifications = model.predict(test_images)
classifications[0]
print(test_labels[0])
plt.imshow(test_images[0])
###Output
_____no_output_____
###Markdown
Fashion MNIST with Callback
###Code
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('accuracy')>0.6):
print("\nReached 60% accuracy so cancelling training!.\n")
self.model.stop_training = True
mnist = tf.keras.datasets.fashion_mnist
# load data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# normalize
x_train, x_test = x_train / 255.0, x_test / 255.0
callbacks = myCallback()
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28,28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
# compile
model.compile(optimizer=tf.optimizers.Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# fit
model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])
model.evaluate(x_test, y_test)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.4075 - accuracy: 0.8539
|
Quantum Simulator Cross sell Recommender.ipynb | ###Markdown
Quantum Simulator:Cross-sell RecommenderThis is a state-of-the-art quantum-simulator-based solution that identifies the products that have a higher probability of purchase by a buyer based on past purchase patterns. This solution helps businesses to achieve better cross-sell and improved customer lifetime value. It will also help businesses such as Retail, e-Commerce, etc. to plan their marketing strategy and product promotions based on historical purchase patterns.This sample notebook shows you how to deploy Quantum Simulator:Cross-sell Recommender using Amazon SageMaker.> **Note**: This is a reference notebook and it cannot run unless you make changes suggested in the notebook. Pre-requisites:1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.1. Ensure that IAM role used has **AmazonSageMakerFullAccess**1. To deploy this ML model successfully, ensure that: 1. Either your IAM role has these three permissions and you have authority to make AWS Marketplace subscriptions in the AWS account used: 1. **aws-marketplace:ViewSubscriptions** 1. **aws-marketplace:Unsubscribe** 1. **aws-marketplace:Subscribe** 2. or your AWS account has a subscription to this model package. If so, skip step: [Subscribe to the model package](1.-Subscribe-to-the-model-package) Contents:1. [Subscribe to the model package](1.-Subscribe-to-the-model-package)2. [Create an endpoint and perform real-time inference](2.-Create-an-endpoint-and-perform-real-time-inference) 1. [Create an endpoint](A.-Create-an-endpoint) 2. [Create input payload](B.-Create-input-payload) 3. [Perform real-time inference](C.-Perform-real-time-inference) 4. [Output result](D.-Output-result) 5. [Delete the endpoint](E.-Delete-the-endpoint)3. [Perform batch inference](3.-Perform-batch-inference) 4. [Clean-up](4.-Clean-up) 1. [Delete the model](A.-Delete-the-model) 2. [Unsubscribe to the listing (optional)](B.-Unsubscribe-to-the-listing-(optional)) Usage instructionsYou can run this notebook one cell at a time (by using Shift+Enter to run a cell). 1. Subscribe to the model package To subscribe to the model package:1. Open the model package listing page for **Quantum Simulator:Cross-sell Recommender**1. On the AWS Marketplace listing, click on the **Continue to subscribe** button.1. On the **Subscribe to this software** page, review and click on **"Accept Offer"** if you and your organization agree with EULA, pricing, and support terms. 1. Once you click on the **Continue to configuration** button and then choose a **region**, you will see a **Product Arn** displayed. This is the model package ARN that you need to specify while creating a deployable model using Boto3. Copy the ARN corresponding to your region and specify the same in the following cell.
###Code
model_package_arn='arn:aws:sagemaker:us-east-2:786796469737:model-package/mphasis-marketplace-quantum-cross-sell-v1'
import base64
import json
import uuid
from sagemaker import ModelPackage
import sagemaker as sage
from sagemaker import get_execution_role
from sagemaker import ModelPackage
#from urllib.parse import urlparse
import boto3
from IPython.display import Image
from PIL import Image as ImageEdit
#import urllib.request
import numpy as np
role = get_execution_role()
sagemaker_session = sage.Session()
bucket=sagemaker_session.default_bucket()
bucket
###Output
_____no_output_____
###Markdown
2. Create an endpoint and perform real-time inference If you want to understand how real-time inference with Amazon SageMaker works, see [Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-hosting.html).
###Code
model_name='quantum-cross-sell'
content_type='application/zip'
real_time_inference_instance_type='ml.m5.xlarge'
batch_transform_inference_instance_type='ml.m5.large'
###Output
_____no_output_____
###Markdown
A. Create an endpoint
###Code
def predict_wrapper(endpoint, session):
return sage.RealTimePredictor(endpoint, session,content_type=content_type)
#create a deployable model from the model package.
model = ModelPackage(role=role,
model_package_arn=model_package_arn,
sagemaker_session=sagemaker_session,
predictor_cls=predict_wrapper)
#Deploy the model
predictor = model.deploy(1, real_time_inference_instance_type, endpoint_name=model_name)
###Output
_____no_output_____
###Markdown
Once endpoint has been created, you would be able to perform real-time inference. B. Create input payload
###Code
import pandas as pd
file_name = './data/data.zip'
###Output
_____no_output_____
###Markdown
###Code
output_file_name = 'output.csv'
df = pd.read_csv("data.csv")
#df = df.drop('Unnamed: 0',1)
df.head(10)
###Output
_____no_output_____
###Markdown
C. Perform real-time inference
###Code
!aws sagemaker-runtime invoke-endpoint \
--endpoint-name 'quantum-cross-sell' \
--body fileb://$file_name \
--content-type 'application/zip' \
--region us-east-2 \
output.csv
###Output
_____no_output_____
###Markdown
D. Output result
###Code
df = pd.read_csv("output.csv")
#df = df.drop('Unnamed: 0',1)
df.head(10)
###Output
_____no_output_____
###Markdown
E. Delete the endpoint Now that you have successfully performed a real-time inference, you do not need the endpoint any more. You can terminate the endpoint to avoid being charged.
###Code
predictor=sage.RealTimePredictor(model_name, sagemaker_session,content_type)
predictor.delete_endpoint(delete_endpoint_config=True)
###Output
_____no_output_____
###Markdown
3. Perform batch inference In this section, you will perform batch inference using multiple input payloads together. If you are not familiar with batch transform, and want to learn more, see these links:1. [How it works](https://docs.aws.amazon.com/sagemaker/latest/dg/ex1-batch-transform.html)2. [How to run a batch transform job](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html)
###Code
#upload the batch-transform job input files to S3
transform_input_folder = "./data/data.zip"
transform_input = sagemaker_session.upload_data(transform_input_folder, key_prefix=model_name)
print("Transform input uploaded to " + transform_input)
#Run the batch-transform job
transformer = model.transformer(1, batch_transform_inference_instance_type)
transformer.transform(transform_input, content_type=content_type)
transformer.wait()
#output is available on following path
transformer.output_path
s3_conn = boto3.client("s3")
bucket_name="sagemaker-us-east-2-786796469737"
with open('./data/output/output.csv', 'wb') as f:
s3_conn.download_fileobj(bucket_name, 'mphasis-marketplace'+'/data.zip.out', f)
print("Output file loaded from bucket")
df = pd.read_csv("output.csv")
#df = df.drop('Unnamed: 0',1)
df.head(10)
###Output
_____no_output_____
###Markdown
4. Clean-up A. Delete the model
###Code
model.delete_model()
###Output
_____no_output_____ |
exercises/Exercise_2.3.ipynb | ###Markdown
Exercise 2.3: Longest common substring
###Code
seq_1 = 'AGTCATGCATGCACTGTGACCAGTTA'
seq_2 = 'AGTCATGCAGAGATCAGTACTATGCATAGCATGA'
def longest_common_substring (s1, s2):
"""return the longest common substring between two sequences"""
#Start with the longest substring of s1 and loop through s2, shortening the substring and testing each variation each loop
substr_len = len(s1)
for i in range(len(s1)):
# Try all substrings
for j in range(len(s1) - substr_len + 1):
if s1[j:j+substr_len] in s2:
return s1[j:j+substr_len]
substr_len -= 1
# If we haven't returned, there is no common substring
return ''
longest_common_substring (seq_1, seq_2)
def longest_common_substring(s1, s2):
    """return the longest common continuous portion of the two strings"""
    # Alternative approach: grow a substring from every start index i of s1 and
    # keep the longest slice s1[i:j] that also occurs in s2
    longest = ''
    for i in range(len(s1)):
        for j in range(i + 1, len(s1) + 1):
            if s1[i:j] in s2 and j - i > len(longest):
                longest = s1[i:j]
    return longest
###Output
_____no_output_____ |
Assignment-2018-Numpy.ipynb | ###Markdown
Programming for Data Analysis - Assignment 2018*****NumPy** is the fundamental package for scientific computing with Python. It contains among other things a powerful N-dimensional array object. Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data*** Problem statement1. Explain the overall purpose of the package. 2. Explain the use of the “Simple random data” and “Permutations” functions. 3. Explain the use and purpose of at least five “Distributions” functions. 4. Explain the use of seeds in generating pseudorandom numbers.*** References - **Numpy Library** *https://docs.scipy.org/doc/numpy/reference/routines.random.html* - *https://en.wikipedia.org* - *https://en.wikipedia.org/wiki/Mersenne_Twister* - *www.stat.yale.edu/Courses/1997-98/101/sample.htm* - *https://realpython.com* *** 1. Purpose of the Package ("random")*****Random Sampling** is a sampling method in which all members of a group (population) have an equal and independent chance of being selected. The numpy.random package helps generate random numbers without writing the underlying code yourself. ***The **random package** is a key part of any simulation; it helps to generate random numbers, and for this purpose NumPy provides various routines in the submodule **random**. The **Mersenne Twister** algorithm is used in the numpy package to generate pseudorandom numbers. The Mersenne Twister is a pseudorandom number generator (PRNG) and the most widely used general-purpose PRNG.***Possible situations where random generation is required: - Picking a random element from a list - Picking a random card from a deck - Flipping a coin, etc. *** 2. Use of "Simple random data" and "Permutations" functions*** 2(a). Simple random data *** **Simple random sampling/data** is the basic sampling technique for selecting a group of samples for study from a larger population. Each individual is chosen entirely by chance and each member of the population has an equal chance of being included in the sample. Every possible sample of a given size has the same chance of selection. - All simple random data functions in numpy create an array of the given shape with random sample values***- Random functions available in the numpy package and examples - rand, randn, randint, random_integers, random_sample - random, ranf, sample - choice - bytes*****Use** - Generating data encryption keys - Simulating and modeling complex phenomena and selecting random samples from larger data sets.***
###Code
# Usage of simple random data functions
# rand, randn, randint, random_integers,random_sample,random,ranf,sample,choice and bytes
import numpy as np
#Ignore the depricated warnings
import warnings
warnings.filterwarnings("ignore")
#Print the function name and the results
print ("Results of rand function"+ " : "+ '{0!r}'.format(np.random.rand(2)))
print ("Results of randn function"+ " : "+ '{0!r}'.format(np.random.randn(5)))
print ("Results of randint function"+ " : "+ '{0!r}'.format(np.random.randint(5, size=5)))
print ("Results of random_integers function"+ " : "+ '{0!r}'.format(np.random.random_integers(5)))
print ("Results of random_sample function"+ " : "+ '{0!r}'.format(np.random.random_sample(5,)))
print ("Results of random function"+ " : "+ '{0!r}'.format(np.random.random(5,)))
print ("Results of ranf function"+ " : "+ '{0!r}'.format(np.random.ranf(5,)))
print ("Results of sample function"+ " : "+ '{0!r}'.format(np.random.sample(5,)))
print ("Results of choice function"+ " : "+ '{0!r}'.format(np.random.choice(5,)))
print ("Results of bytes function"+ " : "+ '{0!r}'.format(np.random.bytes(5)))
# Usage of simple random data functions
# rand, randn, randint, random_integers,random_sample,random,ranf,sample,choice and bytes
import numpy as np
## SELECT Random County from a given list
counties = ['Dublin','Meath','Cork','Derry','Antrim','Carlow','Cavan','Clare','Donegal','Sligo','Louth','Mayo']
np.random.choice(counties)
###Output
_____no_output_____
###Markdown
*** 2(b). Permutations The two permutation functions in numpy.random are listed below. shuffle: Modify a sequence in-place by shuffling its contents. permutation: Randomly permute a sequence, or return a permuted range.***
###Code
# Usage of Permutations functions # import numpy library
import numpy as np
#Define the size, #Define the size,
arr = np.arange(5)
np.random.shuffle(arr)
#Print the function name and the results
print ("Results of shuffle function"+ " : "+ '{0!r}'.format(arr))
print ("Results of permutation function"+ " : "+ '{0!r}'.format(np.random.permutation([1, 2, 3, 4, 5])))
###Output
Results of shuffle function : array([3, 4, 0, 2, 1])
Results of permutation function : array([2, 3, 4, 5, 1])
###Markdown
3. Use and purpose of five “Distributions” functions*** ** 3 (a). numpy.random.power *******Power distribution** is a functional relationship between two quantities, where a relative change in one quantity results in a proportional relative change in the other quantity, independent of the initial size of those quantities: one quantity varies as a power of another. Example: considering the area of a square in terms of the length of its side, if the length is doubled, the area is multiplied by a factor of four**Purpose** - Generate samples in [0, 1] from a power distribution with positive exponent a - 1. - The power function distribution is just the inverse of the Pareto distribution **Use** - Used in modeling the over-reporting of insurance claims. ***
###Code
#import numpy library and maplot library
import numpy as np
import matplotlib.pyplot as plt
#variable declarations and value definitions. a -> exponent; samples -> Number of Samples
#Use random power to generate samples and store the values in the variable s
a, samples = 10, 500
s, x = np.random.power(a, samples), np.linspace(0, 1, 100)
# calculate the Y value using power distribution funtion
y = a*x**(a-1.)
# Plot the histogram with the values of s; ignore the warning messages
count, bins, ignored = plt.hist(s)
#calculate normalised y and plot the graph
normed_y = samples*np.diff(bins)[0]*y
plt.plot(x, normed_y)
plt.show()
###Output
_____no_output_____
###Markdown
*****3 (b). numpy.random.normal *******Normal distribution** is a probability function that describes how the values of a variable are distributed. It is a symmetric distribution where most of the observations cluster around the central peak and the probabilities for values further away from the mean taper off equally in both directions.**Purpose**- Generate random samples from a normal (Gaussian) distribution.**Use** - Statistical theory - The normal distribution occurs often in nature. It describes the commonly occurring distribution of samples influenced by a large number of tiny, random disturbances, each with its own unique distribution. ***
###Code
#import numpy library and maplot library
import numpy as np
import matplotlib.pyplot as plt
# mean and standard deviation
mu, sigma = 0, 0.1
s = np.random.normal(mu, sigma, 1000)
#histogram and plot the graph
count, bins, ignored = plt.hist(s, 30, density=True)
plt.plot(bins, 1/(sigma * np.sqrt(2 * np.pi)) * np.exp( - (bins - mu)**2 / (2 * sigma**2) ), linewidth=2, color='r')
plt.show()
###Output
_____no_output_____
###Markdown
*****3 (c). numpy.random.ZIPF *******Purpose**- Generate samples which conform to Zipf's law. The Zipf distribution is also known as the zeta distribution. - It is a discrete probability distribution that satisfies Zipf's law: the frequency of an item is inversely proportional to its rank in a frequency table.**Use** - Linguistics and insurance modelling - A library has a few books that everyone wants to borrow (best sellers), a greater number of books that get borrowed occasionally (classics), and a vast number of books that hardly ever get borrowed.***
###Code
import matplotlib.pyplot as plt
from scipy import special
# parameter and random sample generation
a = 2
s = np.random.zipf(a, 100)
x = np.arange(1., 100.)
y = x**(-a) / special.zetac(a)
#histogram and plot the graph
count, bins, ignored = plt.hist(s[s<50], 50, density=True)
plt.plot(x, y/max(y), linewidth=2, color='r')
plt.show()
###Output
_____no_output_____
###Markdown
***** 3 (d). numpy.random.binomial*******Purpose**- Generate samples from a binomial distribution with specified parameters, - n trials and p probability of success, where n is an integer >= 0 and p is in the interval [0,1]. - Each replication of the process results in one of two possible outcomes (success or failure)- The replications are independent, meaning here that a success in one patient does not influence the probability of success in another.*****Use with an example**Say the likelihood of a hurricane causing a direct hit to a city is 0.03 (3%); if there are 5 potential hurricanes this season, calculate the probability that all hurricanes will miss the city. - Input parameter: success = a direct hurricane hit (p = 0.03)- Input parameter: number of hurricanes n = 5 - Output: calculate the probability that all will miss ***
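Before simulating, the exact answer can be computed directly, since each hurricane misses with probability 1 - p = 0.97 and the hits are assumed independent:

```python
p_all_miss = (1 - 0.03) ** 5
print(p_all_miss)   # ~0.8587, i.e. about an 86% chance that all five hurricanes miss
```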
###Code
import numpy as np
#Calculate the percentage of survival probability
percentage = (sum(np.random.binomial(5, 0.03, 100) == 0)/100) * 100
print ("There is an " + str(percentage) + "% probability that all hurricans will miss hitting the city directly")
###Output
There is an 77.0% probability that all hurricans will miss hitting the city directly
###Markdown
*****Observation:** The simulation estimates a high probability that all hurricanes will miss hitting the city directly (77% in the run above; the exact value is 0.97^5 ≈ 0.86, and the estimate varies between runs because only 100 simulations are used). ***** 3 (e). numpy.random.uniform*******Uniform Distribution**, also called a rectangular distribution, is a probability distribution that has constant probability.**Purpose**- Generate uniformly distributed samples over the interval [low, high) (includes low, but excludes high). **Use with an example**- Say, for example, the amount of time a person waits for a bus is uniformly distributed between zero and 15 minutes - The probability that a person waits fewer than 12.5 minutes is computed below - Let X = the number of minutes a person must wait for a bus. a = 0 and b = 15. - X ~ U(0, 15). - f(x) = 1/(b-a) = 1/15 - Probability the person waits fewer than 12.5 minutes = (1/15) * 12.5 = 0.833
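A quick Monte Carlo check of the bus-wait example (an estimate only; the exact value is 12.5/15 ≈ 0.833):

```python
import numpy as np

waits = np.random.uniform(0, 15, 100000)   # X ~ U(0, 15)
print(np.mean(waits < 12.5))               # close to 0.833
```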
###Code
import numpy as np
average = []
for y in range (0,5):
for i in range (0,100):
lst = np.random.uniform(1,5000000,100)
s= sum (lst)
l = len (lst)
average.append(s/l)
print(sum(average)/len(average))
###Output
2503116.1919538435
2508602.0480472273
2502747.1928388188
2498473.507762343
2496030.826059405
###Markdown
*****Observation:** The sample values generated by the function average out to roughly the midpoint of the "low" and "high" input parameters. - example: np.random.uniform(1, 5000000, 100) :: low -> 1 and high -> 5000000 - the running averages printed above all stay close to (low + high)/2 ≈ 2,500,000*** 4. Use of Seeds in generating Pseudorandom numbers*** A **random seed** is a number used to initialize a pseudorandom number generator; the same seed always reproduces the same sequence of pseudorandom numbers. **Use**- Random seeds are used in the field of computer security: if a secret encryption key is pseudorandomly generated, having the seed allows one to obtain the key. - Random seeds are often generated from the state of the computer system or from a hardware random number generator. - CSPRNG -> Cryptographically Secure Pseudo-Random Number Generator. CSPRNGs are suitable for generating sensitive data such as passwords, authenticators, and tokens. Given a random string, there is realistically no way for Malicious Joe to determine what string came before or after that string in a sequence of random strings. - entropy -> refers to the amount of randomness introduced or desired.
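For example, fixing the seed makes the "random" sequence reproducible; the `default_rng` generator API in the sketch below assumes NumPy >= 1.17:

```python
import numpy as np

np.random.seed(42)
print(np.random.rand(3))       # the same three numbers on every run

rng = np.random.default_rng(42)
print(rng.uniform(1, 50, 3))   # also reproducible, independent of the global state
```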
###Code
# Usage of RandomState and See
# The random values generate will not change with the mutliple execution of the script
import numpy as np
from time import time
#Instantiate RandomState object and call the "uniform" distribution function
R = np.random.RandomState(5)
y = 1
#Print the function name and the results
for i in range (1,3):
R = np.random.RandomState(5)
print ("Results of uniform function"+ " : "+ '{0!r}'.format(R.uniform(1,50,3)))
###Output
Results of uniform function : array([11.87766538, 43.665883 , 11.12923861])
Results of uniform function : array([11.87766538, 43.665883 , 11.12923861])
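###Markdown
Besides RandomState, the same reproducibility can be obtained with np.random.seed or with the newer Generator API. A small sketch for illustration (the seed value 5 is arbitrary, and the Generator draws will differ from the RandomState ones above):
###Code
import numpy as np

np.random.seed(5)                   # seed the legacy global generator
print(np.random.uniform(1, 50, 3))  # same values on every run of the script

rng = np.random.default_rng(5)      # newer Generator API with an explicit seed
print(rng.uniform(1, 50, 3))        # also reproducible, independent of the global state
###Output
_____no_output_____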
|
chap05.ipynb | ###Markdown
Python Machine Learning Chapter 3 - A Tour of Machine Learning Classifiers Using Scikit-Learn Overview - [Decision tree learning](Decision-tree-learning) - [Maximizing information gain – getting the most bang for the buck](Maximizing-information-gain-–-getting-the-most-bang-for-the-buck) - [Building a decision tree](Building-a-decision-tree) - [Combining weak to strong learners via random forests](Combining-weak-to-strong-learners-via-random-forests)- [K-nearest neighbors – a lazy learning algorithm](K-nearest-neighbors-–-a-lazy-learning-algorithm)
###Code
from IPython.display import Image
%matplotlib inline
from sklearn import datasets
import numpy as np
iris = datasets.load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train) # not on the test set
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import warnings
def versiontuple(v):
return tuple(map(int, (v.split("."))))
def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap(idx),
edgecolor='black',
marker=markers[idx],
label=cl)
# highlight test samples
if test_idx:
# plot all samples
if not versiontuple(np.__version__) >= versiontuple('1.9.0'):
X_test, y_test = X[list(test_idx), :], y[list(test_idx)]
warnings.warn('Please update to NumPy 1.9.0 or newer')
else:
X_test, y_test = X[test_idx, :], y[test_idx]
plt.scatter(X_test[:, 0],
X_test[:, 1],
c='',
alpha=1.0,
edgecolor='black',
linewidths=1,
marker='o',
s=55, label='test set')
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
###Output
_____no_output_____
###Markdown
Decision tree learning A machine learning model that breaks down the data with a series of questions. In particular, we ask the questions *most* helpful for dividing different classes- e.g. "sepal width $\ge$ 2.8cm?" - Yes ==> Class 1 - No ==> Class 2. Need to find the feature *most* helpful for dividing the data Information Gain (IG) $$ IG(D_p, f) = I(D_p) - \sum_{j=1}^m \frac{N_j}{N_p} I(D_j)$$- $f$: the feature to split- $D_p$: the data at the parent node- $D_j$: the data at the $j$th child node- $I$: impurity measure- $N_p$: no. samples at the parent node- $N_j$: no. samples at the $j$th child node. The lower the impurity at the child nodes, the larger the information gain. $$ IG(D_p, f) = I(D_p) - \frac{N_{\text{left}}}{N_p} I(D_{\text{left}}) - \frac{N_{\text{right}}}{N_p} I(D_{\text{right}})$$ Impurity Measures- Shannon Entropy$$ I_H(t) = - \sum_{i=1}^c p(i|t) \log_2 p(i|t)$$ - $p(i|t)$: the proportion of the samples that belong to class $i$ at node $t$ - Entropy is 0 (min value) if $p(i=1|t)=1$ or $p(i=0|t)=1$. - Entropy is 1 (max value) if $p(i=1|t)=0.5$ and $p(i=0|t)=0.5$ (uniform distribution) - Gini Index$$ I_G(t) = \sum_{i=1}^c p(i|t)[ 1 - p(i|t) ] = 1 - \sum_{i=1}^c p(i|t)^2$$ - Gini index is 0 (min value) if $p(i=1|t)=1$ or $p(i=0|t)=1$ - Gini index is 0.5 (max value) if $p(i=1|t)=0.5$ and $p(i=0|t)=0.5$ - In practice, the Gini index and entropy produce very similar results in decision trees - Classification error$$ I_E(t) = 1 - \max_{i \in \{1,\dots,c\}}\{ p(i|t) \}$$
###Code
Image(filename='./images/03_22.png', width=600)
###Output
_____no_output_____
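###Markdown
To make the impurity formulas concrete, here is a small numeric check (a sketch for the binary case only) of entropy, Gini index and classification error at a node with class proportions p and 1-p:
###Code
import numpy as np

def impurities(p):
    """Entropy, Gini index and classification error for a binary node."""
    probs = np.array([p, 1 - p])
    entropy = -np.sum(probs * np.log2(probs)) if 0 < p < 1 else 0.0
    gini = 1 - np.sum(probs ** 2)
    error = 1 - probs.max()
    return entropy, gini, error

for p in [0.5, 0.75, 1.0]:
    print("p=%.2f -> entropy=%.3f, gini=%.3f, error=%.3f" % ((p,) + impurities(p)))
###Output
_____no_output_____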
###Markdown
$$ I_E(D_p) = 1 - \frac12 = 0.5 \qquad I_G(D_p) = 1 - (0.5^2 + 0.5^2) = 0.5$$A:\begin{alignat}{2} & I_E(D_\text{left}) = 1- \frac14 = 0.25 & & I_G(D_\text{left}) = 1 - (\frac34^2 + \frac14^2) = 0.375\\ & I_E(D_\text{right}) = 1 - \frac14 = 0.25 & & I_G(D_\text{right}) = 1 - (\frac14^2 + \frac34^2) = 0.375\\ & IG_E = 0.5 - \frac48 0.25 - \frac48 0.25 = 0.25 \quad & & IG_G = 0.5 - \frac48 0.375 - \frac48 0.375 = 0.125\end{alignat}B:\begin{alignat}{2} & I_E(D_\text{left}) = 1- \frac46 = \frac13 & & I_G(D_\text{left}) = 1 - (\frac26^2 + \frac46^2) = \frac49\\\\ & I_E(D_\text{right}) = 1 - 1 = 0 & & I_G(D_\text{right}) = 1 - (1^2 + 0^2) = 0\\\\ & IG_E = 0.5 - \frac68 \frac13 - 0 = 0.25 \quad & & IG_G = 0.5 - \frac68 \frac49 - \frac28 0 = 0.167\end{alignat}
###Code
import matplotlib.pyplot as plt
import numpy as np
def gini(p):
return p * (1 - p) + (1 - p) * (1 - (1 - p))
def entropy(p):
return - p * np.log2(p) - (1 - p) * np.log2((1 - p))
def error(p):
return 1 - np.max([p, 1 - p])
x = np.arange(0.0, 1.0, 0.01)
ent = [entropy(p) if p != 0 else None for p in x]
sc_ent = [e * 0.5 if e else None for e in ent]
err = [error(i) for i in x]
fig = plt.figure()
ax = plt.subplot(111)
for i, lab, ls, c, in zip([ent, sc_ent, gini(x), err],
['Entropy', 'Entropy (scaled)',
'Gini Impurity', 'Misclassification Error'],
['-', '-', '--', '-.'],
['black', 'lightgray', 'red', 'green', 'cyan']):
line = ax.plot(x, i, label=lab, linestyle=ls, lw=2, color=c)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.15),
ncol=3, fancybox=True, shadow=False)
ax.axhline(y=0.5, linewidth=1, color='k', linestyle='--')
ax.axhline(y=1.0, linewidth=1, color='k', linestyle='--')
plt.ylim([0, 1.1])
plt.xlabel('p(i=1)')
plt.ylabel('Impurity Index')
plt.tight_layout()
#plt.savefig('./figures/impurity.png', dpi=300, bbox_inches='tight')
plt.show()
###Output
_____no_output_____
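###Markdown
As a numeric check of the worked example above (a sketch using class counts: parent (4,4); split A children (3,1) and (1,3); split B children (2,4) and (2,0)), the Gini information gains should come out as 0.125 and about 0.167:
###Code
import numpy as np

def gini_counts(counts):
    probs = np.array(counts, dtype=float) / np.sum(counts)
    return 1 - np.sum(probs ** 2)

def info_gain(parent, children):
    n = np.sum(parent)
    weighted = sum(np.sum(c) / n * gini_counts(c) for c in children)
    return gini_counts(parent) - weighted

parent = (4, 4)
print("IG_G(split A) = %.3f" % info_gain(parent, [(3, 1), (1, 3)]))  # 0.125
print("IG_G(split B) = %.3f" % info_gain(parent, [(2, 4), (2, 0)]))  # about 0.167
###Output
_____no_output_____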
###Markdown
Building a decision tree
###Code
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion='entropy', max_depth=1
, random_state=0)
tree.fit(X_train, y_train)
X_combined = np.vstack((X_train, X_test))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X_combined, y_combined,
classifier=tree, test_idx=range(105, 150))
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/decision_tree_decision.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
###Code
from sklearn.tree import export_graphviz
export_graphviz(tree,
out_file='tree.dot',
feature_names=['petal length', 'petal width'])
Image(filename='./images/03_18.png', width=600)
###Output
_____no_output_____
###Markdown
We typically need to **prune** the tree to avoid overfitting Combining weak to strong learners via random forests Combine **weak learners** to build a **strong** learner (ensemble models). Steps:- Draw a random **bootstrap** sample of size n (choose n random samples out of the total n samples, with replacement)- Build a **weak** decision tree from the bootstrap sample. At each node: - Choose $d$ features at random without replacement (default: $d = \sqrt{m}$) - Split the node using the best feature amongst the $d$ features, e.g. to maximize the information gain- Repeat the above steps $k$ times (building $k$ trees)- Aggregate the predictions of the trees by **majority voting** Pros: - No need to prune the random forest in general, since the ensemble model is quite robust to the noise from individual decision trees - The larger the number of trees $k$, the better the performance of the random forest Cons: - Large computational cost for large $k$ Hyperparameters: - $k$: the number of trees - $d$: the number of features randomly chosen for each split
###Code
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(criterion='entropy',
n_estimators=10,
max_features=2, #auto, sqrt, log2, None
max_depth=None,
random_state=1,
n_jobs=2)
forest.fit(X_train, y_train)
plot_decision_regions(X_combined, y_combined,
classifier=forest, test_idx=range(105, 150))
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/random_forest.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
K-nearest neighbors (kNN) A **lazy** learner:- KNN doesn't learn a discriminative function from the training data KNN is an example of **instance-based learning**- learning is performed by memorizing the training dataset KNN- Choose the number $k$ of neighbors- Choose the distance metric- Find the $k$ nearest neighbors of the sample we want to classify- Assign the class label by majority voting
###Code
Image(filename='./images/03_20.png', width=400)
###Output
_____no_output_____
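###Markdown
The four KNN steps listed above fit in a few lines of numpy. A minimal sketch (Euclidean distance, toy data, not using scikit-learn):
###Code
import numpy as np

def knn_predict(X_tr, y_tr, x_query, k=3):
    # distances from the query point to every training sample
    distances = np.sqrt(np.sum((X_tr - x_query) ** 2, axis=1))
    nearest = np.argsort(distances)[:k]              # indices of the k nearest neighbors
    labels, counts = np.unique(y_tr[nearest], return_counts=True)
    return labels[np.argmax(counts)]                 # majority vote

X_demo = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0]])
y_demo = np.array([0, 0, 1, 1])
print(knn_predict(X_demo, y_demo, np.array([0.95, 1.05])))  # predicts class 1
###Output
_____no_output_____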
###Markdown
Pros:- The classifier immediately adapts as we collect new training data Cons:- Computational complexity grows linearly with the number of samples in the training data in the worst case- Susceptible to overfitting, especially when the input dimension is high (**curse of dimensionality**: for a fixed-size training set, the feature space becomes increasingly sparse as the dimension increases.) Metrics- Minkowski distance ($\ell_p$-norm)$$ d(x^{(i)}, x^{(j)}) = \left( \sum_k \left|x^{(i)}_k - x^{(j)}_k \right|^p \right)^{1/p}$$ - $p=2$ : Euclidean distance - $p=1$ : Manhattan distance ($\ell_1$-norm)
###Code
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=2, p=2, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=range(105, 150))
plt.xlabel('petal length [standardized]')
plt.ylabel('petal width [standardized]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/k_nearest_neighbors.png', dpi=300)
plt.show()
###Output
_____no_output_____ |
Python/1_Limpeza-e-tratamento-de-dados.ipynb | ###Markdown
DATA CLEANING AND PREPARATION - 80% of a data scientist's time is spent cleaning and preparing data. - Why do data have problems? * Operational systems and databases without input constraints. * Direct updates to databases. * Legacy systems, different encodings. * Inconsistencies in the load processes: - The source of the information is diverse and not standardized. - Changes in the process. - Errors in the process. - Problems usually found: * Duplication * Consistency * Completeness * Conformity * Integrity - Operational vs. analytical: * In operations, an individual value cannot be changed to a default value. - E.g.: a health-plan customer has a missing value. We cannot fill it with the median, because that influences the price of the plan. * In analytics, a value matters collectively rather than individually, so it can be corrected for the "good" of the model. - E.g.: a model to predict customers' cost to the health plan, but the algorithm does not support missing data. A missing age can therefore be changed to the median, since this does not affect operations and does not bias the model.
###Code
# Importing the libraries
import pandas as pd
import seaborn as sns
import statistics as sts
# Import the data
dataset = pd.read_csv('Dados/Churn.csv', sep=';')
dataset.head()
# Size
dataset.shape
# First task is to give the columns names
dataset.columns = ["ID","Score","Estado","Gênero","Idade","Patrimônio","Saldo","Produtos","TemCardCredito","Ativo","Salário","Saiu"]
dataset.head()
# Explore categorical data
# Estado (state)
agrupado = dataset.groupby(['Estado']).size()
agrupado
agrupado.plot.bar(color = 'gray')
# Gênero (gender)
agrupado = dataset.groupby(['Gênero']).size()
agrupado
agrupado.plot.bar(color = 'gray')
# Explore numeric columns
# Score
dataset['Score'].describe()
sns.boxplot(dataset['Score']).set_title('Score')
sns.distplot(dataset['Score']).set_title('Score')
# Idade (age)
dataset['Idade'].describe()
sns.boxplot(dataset['Idade']).set_title('Idade')
sns.distplot(dataset['Score']).set_title('Score')
# Saldo (balance)
dataset['Saldo'].describe()
sns.boxplot(dataset['Saldo']).set_title('Saldo')
sns.distplot(dataset['Saldo']).set_title('Saldo')
# Salário (salary)
dataset['Salário'].describe()
sns.boxplot(dataset['Salário']).set_title('Salário')
sns.distplot(dataset['Salário']).set_title('Salário')
# Count NA values
# Gender and salary
dataset.isnull().sum()
# Salário (salary)
# Remove NAs and replace them with the median
dataset['Salário'].describe()
mediana = sts.median(dataset['Salário'])
mediana
# Replace NAs with the median
dataset['Salário'].fillna(mediana, inplace = True)
# Check that no NAs remain
dataset['Salário'].isnull().sum()
# Gênero (gender)
# Lack of standardization and NAs
agrupado = dataset.groupby(["Gênero"]).size()
agrupado
# Total number of NAs
dataset['Gênero'].isnull().sum()
# Standardize according to the domain
dataset.loc[dataset['Gênero'] == 'M', 'Gênero'] = 'Masculino'
dataset.loc[dataset['Gênero'].isin(['Fem','F']), 'Gênero'] = 'Feminino'
# View the result
agrupado = dataset.groupby(['Gênero']).size()
agrupado
# Ages outside the valid domain
dataset['Idade'].describe()
# View them
dataset.loc[(dataset['Idade']<0) | (dataset['Idade']>120)]
# Compute the median
mediana = sts.median(dataset['Idade'])
mediana
# Replace with the median
dataset.loc[(dataset['Idade']<0) | (dataset['Idade']>120), 'Idade'] = mediana
# Check whether any ages outside the domain remain
dataset.loc[(dataset['Idade']<0) | (dataset['Idade']>120)]
# Duplicated records, searched by ID
dataset[dataset.duplicated(['ID'], keep=False)]
# Drop duplicates by ID
dataset.drop_duplicates(subset='ID', keep='first', inplace=True)
# Search for duplicates again
dataset[dataset.duplicated(['ID'], keep=False)]
# States outside the domain
agrupado = dataset.groupby(['Estado']).size()
agrupado
# Map invalid values to RS (the mode)
dataset.loc[dataset['Estado'].isin(['RP','SP','TD']), 'Estado'] = 'RS'
agrupado = dataset.groupby(['Estado']).size()
agrupado
# Salary outliers; we will use 2 standard deviations
desv = sts.stdev(dataset['Salário'])
desv
# Define outliers as values greater than 2 standard deviations
# Check whether any rows meet the criterion
dataset.loc[dataset['Salário'] > 2 * desv]
# We'll update outlier salaries to the median; compute it
mediana = sts.median(dataset['Salário'])
mediana
# Assign it
dataset.loc[dataset['Salário'] >= 2 * desv, 'Salário'] = mediana
# Check whether any rows still meet the criterion
dataset.loc[dataset['Salário'] >= 2 * desv]
dataset.head()
dataset.shape
###Output
_____no_output_____ |
2 - Matplotlib.ipynb | ###Markdown
Matplotlib What is it? There's no way to explain it better than the official site does, but basically it is a simpler library for making plots with Python, inspired by MATLAB. https://matplotlib.org/ Basic way to make a plot
###Code
#np.linspace takes 'n' evenly spaced points between the two endpoints passed.
np.linspace(1,20,10)
#I want 10 evenly spaced points starting at 1 and ending at 20, in this example.
#We can operate on this numpy array, just as we can operate on ordinary Python lists.
np.linspace(1,20,10) ** 2 #squaring them all (** in Python is exponentiation)
x = np.linspace(0,50,500) #Saying that my x will be an array of 500 evenly spaced points from 0 to 50.
y = x ** 2 # Saying that my y will be my x with its values squared.
#Essentially we are looking at the graph y = x² for x >= 0
plt.plot(x,y)
###Output
_____no_output_____
###Markdown
That was the basic way to do it, which is rarely used. We usually use the **object-oriented** approach, since it gives us more possibilities. Object-oriented approach and Multiplots First we create something called a Figure object, which in practice is a blank (empty) canvas.
###Code
plt.figure()
###Output
_____no_output_____
###Markdown
Note that it shows an image of size 432x288 being displayed, but with 0 axes (Axes is the plural of axis). That's why we don't see anything, but it's there. **And with that we wrap up matplotlib.** Just kidding. We can assign this figure to a variable and create the axes manually. Let's go.
###Code
fig = plt.figure()
axes = fig.add_axes([1,1,1,1])
###Output
_____no_output_____
###Markdown
OK. This time we see something appear, but I bet you are wondering what on earth we just did. I called fig.add_axes, adding axes to the figure. **The list that is passed contains 4 elements and is organized as follows:** 1. The left margin, as a percentage; 2. The margin from the bottom of the plot, as a percentage; 3. The width of the plot; 4. The height of the plot. We'll understand this better in a moment when we talk about **Multiplots**, but first let's plot something on these axes so you can compare with the previous code.
###Code
fig = plt.figure()
axes = fig.add_axes([1,1,1,1])
axes.plot(x,y)
#note that X and Y are the same as before, I didn't change them. Remember that the Jupyter Notebook is interactive.
#So we are looking at the same graph as before.
###Output
_____no_output_____
###Markdown
If you still haven't understood the values passed in the **add_axes** list, you probably will now. This time, let's put two plots in a single figure! How cool. Now I'll create two sets of two-dimensional axes (before we had only one set, the variable axes).
###Code
fig = plt.figure()
#Primeiro
axes1 = fig.add_axes([0.1,0.1,0.8,0.8])
#0.1 distance from the left, 0.1 distance from the bottom, 0.8 overall scale (width and height)
axes2 = fig.add_axes([0.2,0.5,0.4,0.3])
#0.2 distance from the left (a bit farther than the previous one, so it can fit)
#0.5 distance from the bottom (if it were smaller, it would probably end up too close to the other plot)
#0.4 width
#0.3 height
#In practice, you will spend some time figuring out the best values for your case.
#The plots are just the "inverse" of each other.
axes1.plot(x,y)
axes2.plot(y,x)
###Output
_____no_output_____
###Markdown
Interesting, isn't it? Now let's customize our plot a bit. How about naming things properly? Let's give the X axis a name, the Y axis a name, and a decent title. Note that we can choose which of the two sets of axes (each drawing, simply speaking) to operate on.
###Code
fig = plt.figure()
axes1 = fig.add_axes([0.1,0.1,0.8,0.8])
axes2 = fig.add_axes([0.2,0.5,0.4,0.3])
axes1.set_xlabel('Teste')
axes1.set_ylabel('Teste y')
axes1.set_title('Titulo Maior')
axes2.set_title('Titulo menor')
axes1.plot(x,y)
axes2.plot(y,x)
###Output
_____no_output_____
###Markdown
And what if now, instead of having one plot inside another, we want several plots **side by side**? Subplots - subplots(nrows,ncols), tuple unpacking For subplots, we pass how many rows and columns we want in our grid of plots (sets of 2 axes) and then say what to plot in each of them. It's quite simple.
###Code
x = np.linspace(0,5,11)
y = x ** 2
#We've already seen plenty of this part
fig,axes = plt.subplots() #Python's tuple unpacking concept. Worth studying.
#Basically the subplots() function returns a tuple according to your requirements.
axes.plot(x,y)
###Output
_____no_output_____
###Markdown
By default, the number of rows and columns is 1. That's why we could make the "same" plot from the beginning (x and y changed their values). Now let's experiment with specifying the number of rows and columns.
###Code
fig,axes = plt.subplots(nrows=2,ncols=3)
###Output
_____no_output_____
###Markdown
Nice, now we have a grid of 2 rows and 3 columns of plots. But there is a problem. And if you don't see the problem, you probably need a new monitor or an eye exam, because the axis values are clearly overlapping. Matplotlib offers a very simple solution to this with a **MAGIC function!** called **tight_layout**. It is considered good practice to call this function at the end of every plot that uses matplotlib, sometimes even in Seaborn, which we'll study shortly.
###Code
fig,axes = plt.subplots(nrows=2,ncols=3)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
There we go, everything neat and well separated. Now let's take a look at our variable "axes".
###Code
axes
###Output
_____no_output_____
###Markdown
Note that axes is a NumPy array (in this case 2-dimensional; just count the number of brackets at the beginning or end) in which each item is an axes object identical to what we had with our Multiplots earlier.
###Code
fig,axes = plt.subplots(nrows=2,ncols=3)
axes[0,1].plot(x,y)
#Equivalent to axes[0][1].plot(x,y)
axes[1,2].plot(y,x)
#Equivalent to axes[1][2].plot(y,x)
#This kind of lookup using a single pair of brackets is inherited from NumPy; unfortunately we didn't cover any NumPy
#due to lack of time, but you can read up on NumPy arrays.
#Think of axes[1,2] as "the graph in the row with index 1 and the column with index 2";
#remember that indices start at 0, so axes[1,2] is the graph in the second row and third column.
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Customization 1 - tight_layout, figure size, aspect ratio, dpi, savefig, legend (loc codes) There is so much to cover about customization here that I decided to leave the keywords in the title and split it into two parts. Below is the matplotlib legend documentation for reference if needed. https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html To explore customization, let's go back to our simple introductory example of the object-oriented way of plotting in matplotlib.
###Code
fig = plt.figure()
axes = fig.add_axes([1,1,1,1])
axes.plot(x,y)
###Output
_____no_output_____
###Markdown
We can change the "aspect ratio" of our plot however we like. In the **plt.figure** call, just pass a **figsize** argument with a tuple (pair) containing **width and height**. We can also set the **dpi**, with an argument whose name is not at all intuitive, called... **dpi**.
###Code
fig = plt.figure(figsize=(7,3), dpi=100)
#7 wide
#3 tall
#100 dots per inch (dpi)
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y)
###Output
_____no_output_____
###Markdown
We can also plot several different functions on the same set of axes.
###Code
fig = plt.figure(figsize=(6,3),dpi=100)
ax = fig.add_axes([0.1,0.1,0.8,0.8])
ax.plot(x,y)
ax.plot(y,x)
ax.plot(y*3,x*2)
#Function to save our figures. Just call savefig('file name')
fig.savefig('teste.png')
###Output
_____no_output_____
###Markdown
From this point on, I think it's worth considering adding legends for each of these functions. To do that, just pass the **label** argument for each function and then call the **legend** function, passing the legend location code (check the documentation above).
###Code
fig = plt.figure(figsize=(6,3),dpi=100)
ax = fig.add_axes([0.1,0.1,0.8,0.8])
ax.plot(x,y, label='Normal')
ax.plot(y,x, label='Invertido')
ax.plot(y*3,x*2, label='Improviso')
ax.legend(loc=4)
#"loc = 0" picks what matplotlib judges to be the best place for the legend.
#It's worth checking the documentation.
fig.savefig('teste.png')
###Output
_____no_output_____
###Markdown
If you read the documentation, you'll see that it's also possible to pass a tuple to indicate the legend's X and Y position (actually the distance from the left and from the bottom, in the same style as the add_axes list earlier). Let's see an example:
###Code
fig = plt.figure(figsize=(6,3),dpi=100)
ax = fig.add_axes([0.1,0.1,0.8,0.8])
ax.plot(x,y, label='Normal')
ax.plot(y,x, label='Invertido')
ax.plot(y*3,x*2, label='Improviso')
ax.legend( loc=(1,1) )
fig.savefig('teste.png')
###Output
_____no_output_____
###Markdown
Note that the legend ended up exactly 100% away from the left and 100% away from the bottom, at the extreme top-right corner. **Note that I am only changing the value of the "loc" argument passed to the "legend" method.** We can customize this however we want:
###Code
fig = plt.figure(figsize=(6,3),dpi=100)
ax = fig.add_axes([0.1,0.1,0.8,0.8])
ax.plot(x,y, label='Normal')
ax.plot(y,x, label='Invertido')
ax.plot(y*3,x*2, label='Improviso')
ax.legend( loc=(0.5,0.6) )
fig.savefig('teste.png')
###Output
_____no_output_____
###Markdown
Customization 2 - Colors, LineWidth, alpha, linestyle, marker, markersize, markerfacecolor, markeredgewidth, markeredgecolor, set_ylim([0,1]), set_xlim([7,40]) Same scheme as the previous section; I'll leave the keywords. Below is the matplotlib line-style reference. There is a lot of interesting material on the matplotlib site. https://matplotlib.org/gallery/lines_bars_and_markers/line_styles_reference.html?highlight=line%20style%20reference
###Code
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y, color='purple') #simply changing the color
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y, color='purple',linewidth=10, alpha=0.5, linestyle='steps')
#Changing the color, the line width, the opacity (alpha) and the line style (linestyle).
#The string 'steps' is just one of the styles matplotlib recognizes.
#alpha=0.5 means the line has 50% opacity.
###Output
_____no_output_____
###Markdown
We can abbreviate **linestyle** to just **ls** and **linewidth** to just **lw**. We can also place markers at each point of our plot (this may not be great when there are many points, but in our case it's fine).
###Code
len(x) #only 11 points
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y, color='purple',lw=1.5, ls='--',marker='o',markersize=10, markerfacecolor='y',
markeredgewidth=3, markeredgecolor='g')
#This is just exploring the matplotlib documentation.
#matplotlib accepts colors passed as arguments in several forms; in this case I passed abbreviations
# y = yellow, g = green
#marker = 'o' is this circular kind of marker. Try changing it to 'v' at home and see the difference.
#markersize only changes the size of the marker itself.
#markeredgewidth changes the size of the marker's BORDER.
#markerfacecolor changes the color of the marker itself
#markeredgecolor changes the color of the marker's border
###Output
_____no_output_____
###Markdown
Of course you're not supposed to memorize all of this; I'm just showing how far we can take it. To finish, I'll show that we can "zoom" into our plot by limiting the X and/or Y axes with the **set_ylim** and **set_xlim** methods. In the keywords I wrote the argument as a list, but matplotlib also understands if you just pass raw values.
###Code
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y, color='purple',lw=1.5, ls='--',marker='o',markersize=10, markerfacecolor='y',
markeredgewidth=3, markeredgecolor='g')
#ax.set_xlim(2.5,4)
ax.set_ylim(5,20)
###Output
_____no_output_____
###Markdown
Just demonstrating that it's the same thing if you pass a list:
###Code
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.plot(x,y, color='purple',lw=1.5, ls='--',marker='o',markersize=10, markerfacecolor='y',
markeredgewidth=3, markeredgecolor='g')
#ax.set_xlim(2.5,4)
ax.set_ylim([5,20])
###Output
_____no_output_____ |
LAB03_Ingesting_and_Transforming_Data_to_Datalake_using_Python.ipynb | ###Markdown
In this lab, the idea is to download a public dataset, make a transformation (aggregate, join and filter data to create a new table), convert the files to optimized formats, and then write everything to our data lake:  1) Install required packages
###Code
!pip install pandas
!pip install pyarrow
!pip install s3fs
!pip install simplejson
###Output
_____no_output_____
###Markdown
2) Import packages
###Code
import urllib.request
from zipfile import ZipFile
import pandas as pd
import os
###Output
_____no_output_____
###Markdown
3) Retrieve your account number and set a bucket name
###Code
import simplejson
with open('/opt/ml/metadata/resource-metadata.json') as fh:
metadata = simplejson.loads(fh.read())
accountid = metadata['ResourceArn'].split(':')[4]
%set_env accountid={accountid}
%set_env bucket_name=lab-{accountid}
###Output
_____no_output_____
###Markdown
4) Download MovieLens 1M Dataset
###Code
print("downloading file from movielens website...")
urllib.request.urlretrieve(
'http://files.grouplens.org/datasets/movielens/ml-1m.zip',
'/tmp/ml-1m.zip')
###Output
_____no_output_____
###Markdown
5) Extract the zip file
###Code
print("extracting dataset into tmp folder...")
with ZipFile('/tmp/ml-1m.zip', 'r') as zipObj:
zipObj.extractall('/tmp/')
###Output
_____no_output_____
###Markdown
6) Ingesting RAW data
###Code
import datetime
x = datetime.datetime.now()
etl_date = x.strftime("%Y%m%d_%H%M%S")
print(etl_date)
%set_env etl_date={etl_date}
%%bash
aws s3 cp /tmp/ml-1m/movies.dat s3://$bucket_name/data/landing/movies/movies_$etl_date.dat
aws s3 cp /tmp/ml-1m/ratings.dat s3://$bucket_name/data/landing/ratings/ratings_$etl_date.dat
###Output
_____no_output_____
###Markdown
7) Read the CSV
###Code
print("reading csv files...")
movies_df = pd.read_csv("/tmp/ml-1m/movies.dat", sep="::",
engine='python',
header=None,
names=['movieid', 'title', 'genres'])
print("movies_df has %s lines" % movies_df.shape[0])
ratings_df = pd.read_csv("/tmp/ml-1m/ratings.dat", sep="::",
engine='python',
header=None,
names=['userid', 'movieid', 'rating', 'timestamp'])
print("ratings_df has %s lines" % ratings_df.shape[0])
movies_df[0:5]
ratings_df[0:5]
###Output
_____no_output_____
###Markdown
8) Join both dataframes
###Code
print("merging dataframes...")
merged_df = pd.merge(movies_df, ratings_df, on='movieid')
###Output
_____no_output_____
###Markdown
9) Aggregate data from dataframes, counting votes...
###Code
print("aggregating data...")
aggregation_df = merged_df.groupby('title').agg({'rating': ['count', 'mean']})
aggregation_df.columns = aggregation_df.columns.droplevel(level=0)
aggregation_df = aggregation_df.rename(columns={
"count": "rating_count", "mean": "rating_mean"
})
###Output
_____no_output_____
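###Markdown
The droplevel/rename step above flattens the column MultiIndex that pandas creates when several aggregations are requested for one column. A tiny illustration on made-up data (the titles and ratings here are hypothetical):
###Code
import pandas as pd

toy = pd.DataFrame({"title": ["A", "A", "B"], "rating": [5, 3, 4]})
agg = toy.groupby("title").agg({"rating": ["count", "mean"]})
print(agg.columns)  # MultiIndex: [('rating', 'count'), ('rating', 'mean')]
agg.columns = agg.columns.droplevel(level=0)
agg = agg.rename(columns={"count": "rating_count", "mean": "rating_mean"})
print(agg)
###Output
_____no_output_____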
###Markdown
10) Sorting data and filtering only movies with more than 1000 votes...
###Code
print("sorting data...")
aggregation_df = aggregation_df.sort_values(
'rating_mean',
ascending=False).loc[aggregation_df['rating_count'] > 1000].head()
###Output
_____no_output_____
###Markdown
11) Writing files to s3...
###Code
print("writing files to s3...")
movies_df.to_parquet(
"s3://" +
os.getenv('bucket_name') +
"/data/analytics/movies/movies_" +
etl_date +
".parquet.snappy")
ratings_df.to_parquet(
"s3://" +
os.getenv('bucket_name') +
"/data/analytics/ratings/ratings_" +
etl_date +
".parquet.snappy")
aggregation_df.to_parquet(
"s3://" +
os.getenv('bucket_name') +
"/data/analytics/best_movies/best_movies_" +
etl_date +
".parquet.snappy")
###Output
_____no_output_____
###Markdown
12) Reading data...
###Code
print("reading file from s3 and printing result...")
result_df = pd.read_parquet(
"s3://" +
os.getenv('bucket_name') +
"/data/analytics/best_movies/best_movies_" + etl_date + ".parquet.snappy")
print("result_df has %s lines" % result_df.size)
print("Top 5 movies: ")
result_df[0:5]
###Output
_____no_output_____ |
src/run_all_tests_SLDS-SSM-smooth.ipynb | ###Markdown
Fit SLDS with rank r = 4 & r = 6
###Code
for N in N_array[N_array > 6]:
data = scipy.io.loadmat("../data/test_data_smooth_N_%d_M_%d_sigma_0.200000.mat" % (N, num_steps))
X = data['X']
thetas = data['thetas'].flatten()
U = data['U']
err_inf, err_2, err_fro, err_mse = \
fit_slds_and_return_errors(X.T, thetas, U, Kmax=Kmax, r=6, num_iters=2000)
print("N = %d : err_inf = %f, err_2 = %f, err_fro = %f, err_mse = %f" % \
(N, err_inf, err_2, err_fro, err_mse))
new_row = dict(zip(error_table.columns,
[N, np.nan, 4, err_inf, err_2, err_fro, err_mse, np.nan]))
error_table = error_table.append(new_row, ignore_index=True)
## TODO!!!! load output again and re-append, remake plots, save csv
error_table = error_table.append(error_table_save, ignore_index=True)
print(error_table)
data = error_table
import matplotlib
#plt.loglog(data['N'], data['err_2'])
plot_type = 'err_2'
plt.rc('text', usetex=True)
matplotlib.rcParams['mathtext.fontset'] = 'custom'
matplotlib.rcParams['mathtext.rm'] = 'Bitstream Vera Sans'
matplotlib.rcParams['mathtext.it'] = 'Bitstream Vera Sans:italic'
matplotlib.rcParams['mathtext.bf'] = 'Bitstream Vera Sans:bold'
matplotlib.rcParams.update({'font.size': 16})
for plot_type in ['err_inf', 'err_2', 'err_fro', 'model_MSE']:
fig, ax = plt.subplots(figsize=(8,6))
for key, grp in data.groupby(['model']):
grp = grp.groupby(['N']).mean()
if key == 1:
keystr = 'indep(N)'
elif key == 2:
keystr = 'indep(4)'
elif key == 3:
keystr = 'TVART(4)'
elif key == 4:
keystr = 'SLDS(6)'
elif key == 5:
keystr = 'SLDS(6)'
ax = grp.plot(ax=ax, kind='line', y=plot_type, label=keystr, logx=True, logy=True)
#plt.ylim([1e-2, 1e-1])
plt.legend(loc='upper right', fontsize=14)
plt.xlabel('N', fontsize=20)
if plot_type == 'err_inf':
plt.ylabel("$\| \mathbf{A} - \hat{\mathbf{A}} \|_\mathrm{max}$", fontsize=20)
elif plot_type == 'err_2':
plt.ylabel("$\| \mathbf{A} - \hat{\mathbf{A}} \|_{2}$", fontsize=20)
plt.ylim([10**-1.1, 10**1.5])
elif plot_type == 'err_fro':
plt.ylabel("$\| \mathbf{A} - \hat{\mathbf{A}} \|_F$", fontsize=20)
plt.ylim([10**-1.1, 10**1.5])
elif plot_type == 'model_MSE':
plt.ylabel("prediction MSE", fontsize=20)
plt.plot([min(grp.index), max(grp.index)], [0.25, 0.25], 'k--')
#plt.ylim([10**-1, 10**0.5])
plt.xlim([10, 4*10**3])
plt.savefig("../figures/smooth_compare_" + plot_type + ".eps")
plt.show()
#data.plot.line(x='N', y='err_inf', logx=True, logy=True)
error_table.tail()
error_table.to_csv(output_file, header=True, index=False)
norm_vector = np.zeros((len(N_array), 3))
for i, N in enumerate(N_array):
data = scipy.io.loadmat("test_data_N_%d_M_201_sigma_0.500000.mat" % N)
X = data['X']
A1 = data['A1']
A2 = data['A2']
norm_vector[i, 0] = 0.5 * (np.max(np.abs(A1.ravel())) + np.max(np.abs(A2.ravel())))
norm_vector[i, 1] = 0.5 * (np.linalg.norm(A1, 2) + np.linalg.norm(A2, 2))
norm_vector[i, 2] = 0.5 * (np.linalg.norm(A1, 'fro') + np.linalg.norm(A2, 'fro'))
data = error_table
#plt.loglog(data['N'], data['err_2'])
fig, ax = plt.subplots()
for key, grp in data.groupby(['model']):
grp = grp.groupby(['N']).mean()
if key == 1:
keystr = 'indep(N)'
elif key == 2:
keystr = 'indep(6)'
elif key == 3:
keystr = 'TVART(4)'
elif key == 4:
keystr = 'SLDS(4)'
elif key == 5:
keystr = 'SLDS(6)'
grp = grp.iloc[grp.index >= min(N_array)]
grp['err_inf'] /= norm_vector[i, 0]
grp['err_2'] /= norm_vector[i, 1]
grp['err_fro'] /= norm_vector[i, 2]
ax = grp.plot(ax=ax, kind='line', y='err_inf', label=keystr, logx=True, logy=True)
plt.legend(loc='best')
#plt.ylim([1e-2, 1e-1])
plt.show()
###Output
_____no_output_____ |
Nate Silver ELO/NBA ELO Replicate.ipynb | ###Markdown
Nate Silver's NBA Elo Algorithm Single Season: $$R_{0}=1300$$$$R_{i+1}=R_{i}+K(S_{\text{team}}-E_{\text{team}})$$ where $R$ is the Elo rating, $S=1$ if the team wins and $S=0$ for a loss. $E$ represents the expected win probability in Nate's formula and is defined as $$E_{\text{team}}=\frac{1}{1+10^{\frac{\text{opp_elo}-\text{team_elo}}{400}}}.$$ In chess K is a fixed constant, but Nate changes K to handle margin of victory. Nate Silver's K is $$\text{K}=20\frac{(\text{MOV}_{winner}+3)^{0.8}}{7.5+0.006(\text{elo_difference}_{winner})}.$$ where $$\text{elo difference}_{winner}=\text{winning_elo}-\text{losing_elo}.$$ Nate also takes into account home advantage by increasing the rating of the home team by 100, as in $R_{\text{home}}=R_\text{team}+100$. The only other consideration is what to do in between seasons. Nate handles this by reverting each team towards a mean of 1505, as in the following formula $$R_{s=i+1}=(0.75)R_{s=i}+(0.25)1505.$$
###Code
import scipy.stats as st
from collections import defaultdict
import numpy as np
def silverK(MOV, elo_diff):
K_0=20
if MOV>0:
multiplier=(MOV+3)**(0.8)/(7.5+0.006*(elo_diff))
else:
multiplier=(-MOV+3)**(0.8)/(7.5+0.006*(-elo_diff))
return K_0*multiplier,K_0*multiplier
def silverS(home_score, away_score):
S_home,S_away=0,0
if home_score>away_score:
S_home=1
elif away_score>home_score:
S_away=1
else:
S_home,S_away=.5,.5
return S_home,S_away
def silver_elo_update(home_score, away_score, home_rating, away_rating):
HOME_AD=100.
home_rating+=HOME_AD
E_home = elo_prediction(home_rating,away_rating)
E_away=1-E_home
elo_diff=home_rating-away_rating
MOV=home_score-away_score
S_home,S_away = silverS(home_score,away_score)
if S_home>0:
K_home,K_away = silverK(MOV,elo_diff)
else:
K_home,K_away = silverK(MOV,elo_diff)
return K_home*(S_home-E_home),K_away*(S_away-E_away)
def elo_prediction(home_rating,away_rating):
E_home = 1./(1 + 10 ** ((away_rating - home_rating) / (400.)))
return E_home
def score_prediction(home_rating,away_rating):
return (home_rating-away_rating)/28.
class HeadToHeadModel(object):
def __init__(self, events, update_function, prediction_function=None):
self.update_function=update_function
self.events=events
self.ratings={}
self.prediction_function = prediction_function
self.predictions = []
self.curr_season=defaultdict(lambda: self.events[0][1]['year_id'])
def train(self):
for idx, event in self.events:
new_year=event['year_id']
label_i=event['fran_id']
label_j=event['opp_fran']
if self.ratings.get(label_i,False)==False:
self.ratings[label_i]=elo_lookup(label_i,event['gameorder'])
if self.ratings.get(label_j,False)==False:
self.ratings[label_j]=elo_lookup(label_j,event['gameorder'])
if self.curr_season[label_i]!=new_year:
self.curr_season[label_i]=new_year
self.ratings[label_i]=self.ratings[label_i]*.75+1505.*.25
elif self.curr_season[label_j]!=new_year:
self.curr_season[label_j]=new_year
self.ratings[label_j]=self.ratings[label_j]*.75+1505.*.25
#todo change below to just use event
update=self.update_function(event['pts'],event['opp_pts'], self.ratings[label_i], self.ratings[label_j])
self.ratings[label_i]+=update[0]
self.ratings[label_j]+=update[1]
def power_rankings(self):
from operator import itemgetter
power_rankings = sorted(self.ratings.items(), key=itemgetter(1), reverse=True)
power = []
for i, x in enumerate(power_rankings):
power.append((i + 1, x))
return power
STARTING_LOC=0
def elo_lookup(fran_id,gameorder):
return full_df[(full_df['fran_id']==fran_id)&(full_df['gameorder']>=gameorder)]['elo_i'].iloc[0]
m=HeadToHeadModel(list(games[games['gameorder']>STARTING_LOC][:2].iterrows()), silver_elo_update, elo_prediction)
m.train()
m.power_rankings()
m=HeadToHeadModel(list(games[games['gameorder']>STARTING_LOC][:1].iterrows()), silver_elo_update, elo_prediction)
m.train()
m.power_rankings()
elo_lookup("Knicks",1)
games
full_df
SSE=0
my_scores=[]
nate_scores=[]
for team,rating in m.ratings.items():
nate_final_rating=full_df[full_df['fran_id']==team]['elo_n'].iloc[-1]
my_scores.append(rating)
nate_scores.append(nate_final_rating)
plt.scatter(my_scores, nate_scores)
plt.ylabel("Nate Silver's Final Elo Ratings")
plt.xlabel("My Final Elo Ratings")
plt.title("Comparison of Nate Silver's Elo and my Implementation")
import statsmodels.api as sm
X=my_scores
X=sm.add_constant(X)
Y=nate_scores
model=sm.OLS(Y,X)
results=model.fit()
results.summary()
###Output
C:\Users\Matteo\Anaconda3\envs\betting\lib\site-packages\statsmodels\stats\stattools.py:72: ValueWarning: omni_normtest is not valid with less than 8 observations; 2 samples were given.
"samples were given." % int(n), ValueWarning)
C:\Users\Matteo\Anaconda3\envs\betting\lib\site-packages\statsmodels\regression\linear_model.py:1549: RuntimeWarning: divide by zero encountered in true_divide
return 1 - (np.divide(self.nobs - self.k_constant, self.df_resid)
C:\Users\Matteo\Anaconda3\envs\betting\lib\site-packages\statsmodels\regression\linear_model.py:1550: RuntimeWarning: invalid value encountered in double_scalars
* (1 - self.rsquared))
C:\Users\Matteo\Anaconda3\envs\betting\lib\site-packages\statsmodels\regression\linear_model.py:1558: RuntimeWarning: divide by zero encountered in double_scalars
return self.ssr/self.df_resid
C:\Users\Matteo\Anaconda3\envs\betting\lib\site-packages\statsmodels\regression\linear_model.py:1510: RuntimeWarning: divide by zero encountered in double_scalars
return np.dot(wresid, wresid) / self.df_resid
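###Markdown
As a quick sanity check of the update rule, here is a sketch of a single made-up game (it assumes the cells above defining elo_prediction and silver_elo_update have been run; the ratings and score are hypothetical):
###Code
# Hypothetical game: a 1500-rated home team beats a 1400-rated visitor 105-95.
home_elo, away_elo = 1500.0, 1400.0
print("Home win probability:", elo_prediction(home_elo + 100., away_elo))  # +100 for home court
d_home, d_away = silver_elo_update(105, 95, home_elo, away_elo)
print("Rating changes -> home: %.2f, away: %.2f" % (d_home, d_away))       # changes sum to zero
print("New ratings -> home: %.2f, away: %.2f" % (home_elo + d_home, away_elo + d_away))
###Output
_____no_output_____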
###Markdown
Accuracy Win/Loss
###Code
class HeadToHeadModel(object):
def __init__(self, events, update_function, prediction_function=None):
self.update_function=update_function
self.events=events
self.ratings={}
self.prediction_function = prediction_function
self.predictions = []
self.curr_season=defaultdict(lambda: self.events[0][1]['year_id'])
def train(self):
for idx, event in self.events:
new_year=event['year_id']
label_i=event['fran_id']
label_j=event['opp_fran']
if self.ratings.get(label_i,False)==False:
self.ratings[label_i]=elo_lookup(label_i,event['gameorder'])
if self.ratings.get(label_j,False)==False:
self.ratings[label_j]=elo_lookup(label_j,event['gameorder'])
if self.curr_season[label_i]!=new_year:
self.curr_season[label_i]=new_year
self.ratings[label_i]=self.ratings[label_i]*.75+1505.*.25
elif self.curr_season[label_j]!=new_year:
self.curr_season[label_j]=new_year
self.ratings[label_j]=self.ratings[label_j]*.75+1505.*.25
self.predictions.append(elo_prediction(self.ratings[label_i]+100, self.ratings[label_j]))
#todo change below to just use event
update=self.update_function(event['pts'],event['opp_pts'], self.ratings[label_i], self.ratings[label_j])
self.ratings[label_i]+=update[0]
self.ratings[label_j]+=update[1]
def power_rankings(self):
from operator import itemgetter
power_rankings = sorted(self.ratings.items(), key=itemgetter(1), reverse=True)
power = []
for i, x in enumerate(power_rankings):
power.append((i + 1, x))
return power
STARTING_LOC=0
m=HeadToHeadModel(list(games[games['gameorder']>STARTING_LOC].iterrows()), silver_elo_update, elo_prediction)
m.train()
m.power_rankings()
games['prediction']=m.predictions
games['predictedWinner']=games['prediction'].apply(lambda x: 1 if x>=.5 else 0)
games['winner']=games.apply(lambda x: x['pts']>=x['opp_pts'],axis=1)
from sklearn.metrics import confusion_matrix
conf_matrix=confusion_matrix(games['winner'],games['predictedWinner'])
conf_matrix
success_rate=np.trace(conf_matrix)/(np.sum(conf_matrix))
success_rate
###Output
_____no_output_____
###Markdown
Against the Spread (ATS) We only have spread data from 2011 onwards so we will change our starting location for this run.
###Code
class HeadToHeadModel(object):
def __init__(self, events, update_function, prediction_function=None):
self.update_function=update_function
self.events=events
self.ratings={}
self.prediction_function = prediction_function
self.predictions = []
self.curr_season=defaultdict(lambda: self.events[0][1]['year_id'])
def train(self):
for idx, event in self.events:
new_year=event['year_id']
label_i=event['fran_id']
label_j=event['opp_fran']
if self.ratings.get(label_i,False)==False:
self.ratings[label_i]=elo_lookup(label_i,event['gameorder'])
if self.ratings.get(label_j,False)==False:
self.ratings[label_j]=elo_lookup(label_j,event['gameorder'])
if self.curr_season[label_i]!=new_year:
self.curr_season[label_i]=new_year
self.ratings[label_i]=self.ratings[label_i]*.75+1505.*.25
elif self.curr_season[label_j]!=new_year:
self.curr_season[label_j]=new_year
self.ratings[label_j]=self.ratings[label_j]*.75+1505.*.25
#todo change below to just use event
self.predictions.append(score_prediction(self.ratings[label_i]+100, self.ratings[label_j]))
update=self.update_function(event['pts'],event['opp_pts'], self.ratings[label_i], self.ratings[label_j])
self.ratings[label_i]+=update[0]
self.ratings[label_j]+=update[1]
def power_rankings(self):
from operator import itemgetter
power_rankings = sorted(self.ratings.items(), key=itemgetter(1), reverse=True)
power = []
for i, x in enumerate(power_rankings):
power.append((i + 1, x))
return power
matchups.columns
matchups.columns
games['SEASON'].unique()
matchups['game_id']=matchups.apply(lambda x: x['game_datetime'].split(" ")[0].replace("-","")+"0"+x['home_name'],axis=1)
games_w_odds=matchups.merge(games)
games_w_odds.tail(1)
m=HeadToHeadModel(list(games_w_odds.iterrows()), silver_elo_update, elo_prediction)
m.train()
m.power_rankings()
games_w_odds['predictedHomeMOV']=m.predictions
games_w_odds['homeMOV']=games_w_odds['pts']-games_w_odds['opp_pts']
games_w_odds['homeCover']=(games_w_odds['homeMOV']+games_w_odds['home_line'])>0
games_w_odds.head(1)
len(games_w_odds)
games_w_odds['SEASON'].unique()
from sklearn.metrics import accuracy_score
def bettingFunction(row):
'''
if algo favors team more than vegas predicted score bet for, True. Else bet for the team True
'''
return (row['predictedHomeMOV']+row['home_line'])>0
games_w_odds['bets']=games_w_odds.apply(bettingFunction,axis=1)
for season, data in games_w_odds.groupby("season"):
print(season)
conf_matrix=confusion_matrix(data['homeCover'],data['bets'])
print(accuracy_score(data['homeCover'],data['bets']))
display(pd.DataFrame(conf_matrix, columns=["Bet on Away","Bet on Home"], index=["Away Covers","Home Covers"]))
print("____________________")
conf_matrix=confusion_matrix(games_w_odds['homeCover'],games_w_odds['bets'])
pd.DataFrame(conf_matrix, columns=["Bet on Away","Bet on Home"], index=["Away Covers","Home Covers"])
print(pd.DataFrame(conf_matrix, columns=["Bet on Away","Bet on Home"], index=["Away Covers","Home Covers"]).to_html())
success_rate=np.trace(conf_matrix)/(np.sum(conf_matrix))
success_rate
###Output
_____no_output_____
###Markdown
Analysis of Error Profile Although the expected value is no better than a coin flip there could be an advantage in terms of the error profile
###Code
bets=np.random.binomial(1,.5,len(games_w_odds))
truths=np.random.binomial(1,.5,len(games_w_odds))
conf_matrix=confusion_matrix(truths,bets)
pd.DataFrame(conf_matrix, columns=["Bet on Away","Bet on Home"], index=["Away Covers","Home Covers"])
pd.DataFrame(conf_matrix, columns=["Bet on Away","Bet on Home"], index=["Away Covers","Home Covers"]).to_html()
###Output
_____no_output_____ |
.ipynb_checkpoints/PA4-ME16B001-checkpoint.ipynb | ###Markdown
PA_4: Feedforward Neural Network Aim Train and test a feedforward neural network for MNIST digit classification. Procedure* Download `mnist_file.rar`, which contains the MNIST data as a *pickle* file, and read `read_mnist.py` for loading partial MNIST data.* Run the `read_mnist.py` file, which will give 1000 train and 500 test images per class.* x_train, y_train give the $784\times1$ image and the corresponding label for the training data; similarly for the test data.* Write1. A neural network model using library functions.2. Your own neural network model and train it with backpropagation 1. On the training data and report accuracy. 2. Train with five-fold cross validation (4 folds for training and 1 fold for testing, repeating this 5 times while changing the test fold each time) and report the average accuracy as the train accuracy.* Test both models with the test data.* Find the confusion matrix and report the accuracy (see the sketch at the end of this notebook).
###Code
import numpy as np
from utils import visualise
from read_mnist import load_data
import random
y_train,x_train,y_test,x_test=load_data()
print("Train data label dim: {}".format(y_train.shape))
print("Train data features dim: {}".format(x_train.shape))
print("Test data label dim: {}".format(y_test.shape))
print("Test data features dim:{}".format(x_test.shape))
visualise(x_train)
import numpy as np
import random
import itertools
import time
from sklearn.metrics import f1_score, precision_score, recall_score
from read_mnist import load_data
import pickle
def sigmoid(x):
return 1.0/(1+ np.exp(-x))
def sigmoid_grad(x):
return x * (1 - x)
def tanh_grad(x):
return 1-np.power(x,2)
def softmax(x):
e_x = np.exp(x - np.max(x))
return e_x /np.sum(e_x, axis=1, keepdims=True)
def relu(x):
return x * (x > 0)
def relu_grad(x):
return (x>0)*1
def cross_entropy(y_,y):
n = y.shape[0]
nll = -np.log(y_[range(n),y])
return np.mean(nll)
def delta_cross_entropy(y_,y):
n = y.shape[0]
y_[range(n),y] -= 1
return y_/n
class NN:
def __init__(self, hidden_layers, hidden_neurons, hidden_activation, lr=0.01):
self.hidden_layers = hidden_layers
self.hidden_neurons = hidden_neurons
self.hidden_activation = hidden_activation
self.lr=lr
np.random.seed(786)
self.W1 = 0.1* np.random.randn(x_train.shape[1],self.hidden_neurons)
self.b1 = np.zeros((1,self.hidden_neurons))
self.W2 = 0.1* np.random.randn(self.hidden_neurons,10)
self.b2 = np.zeros((1,10))
def forward(self,x_train):
s1=np.dot(x_train, self.W1) + self.b1
if self.hidden_activation == 'sigmoid':
a1 = sigmoid(s1)
elif self.hidden_activation=='tanh':
a1 = np.tanh(s1)
elif self.hidden_activation=='relu':
a1 = relu(s1)
else:
raise Exception('Error: Activation not implemented')
s2 = np.dot(a1, self.W2) + self.b2
a2 = softmax(s2)
loss=cross_entropy(a2,y_train)
return(loss,s1,a1,s2,a2)
def backward(self, s1, a1, s2, a2):
delta3=delta_cross_entropy(a2,y_train)
dW2 = np.dot(a1.T, delta3)
db2 = np.sum(delta3, axis=0, keepdims=True)
if self.hidden_activation=='sigmoid':
delta2 = delta3.dot(self.W2.T) * sigmoid_grad(a1)
elif self.hidden_activation == 'tanh':
delta2 = delta3.dot(self.W2.T) * tanh_grad(a1)
elif self.hidden_activation == 'relu':
delta2 = delta3.dot(self.W2.T) * relu_grad(a1)
else:
raise Exception('Error: Activation not implemented')
dW1 = np.dot(x_train.T, delta2)
db1 = np.sum(delta2, axis=0)
self.W1 += -self.lr * dW1
self.b1 += -self.lr * db1
self.W2 += -self.lr * dW2
self.b2 += -self.lr * db2
    def predict(self, x):
        # Use the configured hidden activation and include the biases so that
        # inference matches the forward pass used during training.
        act = {'sigmoid': sigmoid, 'tanh': np.tanh, 'relu': relu}[self.hidden_activation]
        a1 = act(np.dot(x, self.W1) + self.b1)
        a2 = softmax(np.dot(a1, self.W2) + self.b2)
        return np.argmax(a2, axis=1)
def save_model(self, name):
params = { 'W1': self.W1, 'b1': self.b1, 'W2': self.W2, 'b2': self.b2}
with open(name, 'wb') as handle:
pickle.dump(params, handle, protocol=pickle.HIGHEST_PROTOCOL)
epochs=1000
# hyperparameter variation
lr=0.1
neurons = [32,64,128,256]
activations = ['sigmoid', 'relu', 'tanh']
experiments = list(itertools.product(neurons, activations))
for (hidden_neurons,hidden_activation) in experiments:
print('\n############ Activation function: {} No. of neurons: {} ############'.format(hidden_activation, hidden_neurons))
model=NN(hidden_layers=5,hidden_neurons=hidden_neurons,hidden_activation=hidden_activation, lr=lr)
print('\nTraining started!')
start = time.time()
for epoch in range(epochs):
loss,s1,a1,s2,a2 = model.forward(x_train)
if epoch%100==0:
print("Loss: {} Training progress: {}/{}".format(loss,epoch,epochs))
model.backward(s1, a1, s2, a2)
name = 'model_'+str(hidden_activation)+'_'+str(hidden_neurons)+'.pickle'
model.save_model(name=name)
stop = time.time()
print('Training finished in {} s'.format(stop - start))
test_preds = model.predict(x_test)
print('Test Results-Accuracy: {} F1-Score: {}, Precision: {} Recall: {}'.format( np.mean(test_preds == y_test), f1_score(y_test, test_preds, average='micro'), precision_score(y_test, test_preds, average='micro'), recall_score(y_test, test_preds, average='micro') ))
# copying the original data
y_train_or = np.copy(y_train)
x_train_or = np.copy(x_train)
y_test_or = np.copy(y_test)
x_test_or = np.copy(x_test)
# five fold cross validation
folds=5
val_acc=[]
for (hidden_neurons,hidden_activation) in experiments:
print('\n############(Val) Activation function: {} No. of neurons: {} ############'.format(hidden_activation, hidden_neurons))
for fold in range(0,folds):
# x_train.shape[0]=10000
start=int(fold*(x_train_or.shape[0]/folds))
stop=int((fold+1)*(x_train_or.shape[0]/folds))
del x_train
del y_train
del x_test
del y_test
x_test=x_train_or[start:stop]
y_test=y_train_or[start:stop]
x_train=np.vstack((x_train_or[:start], x_train_or[stop:]))
y_train=np.append(y_train_or[:start],y_train_or[stop:])
# print(x_train.shape, y_train.shape)
model=NN(hidden_layers=5,hidden_neurons=hidden_neurons,hidden_activation=hidden_activation, lr=lr)
print('\nTraining started (validation set {}/{})!'.format(fold+1,folds))
for epoch in range(epochs):
loss,s1,a1,s2,a2 = model.forward(x_train)
if epoch%100==0:
print("Loss: {} Training progress: {}/{}".format(loss,epoch,epochs))
model.backward(s1, a1, s2, a2)
train_preds= model.predict(x_train)
val_acc.append(np.mean(train_preds == y_train))
print("5 fold validation accuracy",val_acc)
print('Train Results-Average validation accuracy: {}'.format(np.mean(np.array(val_acc)) ))
val_acc.clear()
# Load model parameters
for (hidden_neurons,hidden_activation) in experiments:
name = 'model_'+str(hidden_activation)+'_'+str(hidden_neurons)+'.pickle'
print(name)
with open(name, 'rb') as handle:
b = pickle.load(handle)
###Output
model_sigmoid_32.pickle
model_relu_32.pickle
model_tanh_32.pickle
model_sigmoid_64.pickle
model_relu_64.pickle
model_tanh_64.pickle
model_sigmoid_128.pickle
model_relu_128.pickle
model_tanh_128.pickle
model_sigmoid_256.pickle
model_relu_256.pickle
model_tanh_256.pickle
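###Markdown
The aim above also asks for a confusion matrix, which the cells in this notebook never compute. A minimal sketch (it assumes the training cells have been run in this session, so `model`, `x_test_or` and `y_test_or` are still in memory; `model` is simply the last network trained in the cross-validation loop):
###Code
from sklearn.metrics import confusion_matrix, accuracy_score

preds = model.predict(x_test_or)           # original held-out test set saved earlier
print("Test accuracy:", accuracy_score(y_test_or, preds))
print(confusion_matrix(y_test_or, preds))  # rows: true digit, columns: predicted digit
###Output
_____no_output_____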
|
Convolutional Neural Networks/week1/.ipynb_checkpoints/Convolution_model_Step_by_Step_v2a-checkpoint.ipynb | ###Markdown
Convolutional Neural Networks: Step by StepWelcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. **Notation**:- Superscript $[l]$ denotes an object of the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.- Superscript $(i)$ denotes an object from the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input. - Subscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer. - $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. - $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! Updates If you were working on the notebook before this update...* The current notebook is version "v2a".* You can find your original work saved in the notebook with the previous version name ("v2") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* clarified example used for padding function. Updated starter code for padding function.* `conv_forward` has additional hints to help students if they're stuck.* `conv_forward` places code for `vert_start` and `vert_end` within the `for h in range(...)` loop; to avoid redundant calculations. Similarly updated `horiz_start` and `horiz_end`. **Thanks to our mentor Kevin Brown for pointing this out.*** `conv_forward` breaks down the `Z[i, h, w, c]` single line calculation into 3 lines, for clarity.* `conv_forward` test case checks that students don't accidentally use n_H_prev instead of n_H, use n_W_prev instead of n_W, and don't accidentally swap n_H with n_W* `pool_forward` properly nests calculations of `vert_start`, `vert_end`, `horiz_start`, and `horiz_end` to avoid redundant calculations.* `pool_forward' has two new test cases that check for a correct implementation of stride (the height and width of the previous layer's activations should be large enough relative to the filter dimensions so that a stride can take place). * `conv_backward`: initialize `Z` and `cache` variables within unit test, to make it independent of unit testing that occurs in the `conv_forward` section of the assignment.* **Many thanks to our course mentor, Paul Mielke, for proposing these test cases.** 1 - PackagesLet's first import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
###Code
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
###Output
_____no_output_____
###Markdown
2 - Outline of the AssignmentYou will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:- Convolution functions, including: - Zero Padding - Convolve window - Convolution forward - Convolution backward (optional)- Pooling functions, including: - Pooling forward - Create mask - Distribute value - Pooling backward (optional) This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:**Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. 3 - Convolutional Neural NetworksAlthough programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. 3.1 - Zero-PaddingZero-padding adds zeros around the border of an image: **Figure 1** : **Zero-Padding** Image (3 channels, RGB) with a padding of 2. The main benefits of padding are the following:- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer. - It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels as the edges of an image.**Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:```pythona = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), mode='constant', constant_values = (0,0))```
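As a small aside (not part of the graded exercise), the "same" convolution mentioned above corresponds, for a stride of 1 and an odd filter size, to choosing `pad = (f - 1) / 2`; the sketch below checks this with illustrative numbers that are assumptions, not values used later in the assignment.

```python
# For stride 1, "same" padding keeps the height/width unchanged: pad = (f - 1) / 2 (f odd)
f = 3                                          # illustrative filter size
stride = 1
pad = (f - 1) // 2                             # -> 1
n_H_prev = 5                                   # illustrative input height
n_H = (n_H_prev - f + 2 * pad) // stride + 1
print(pad, n_H)                                # 1 5  -> height is preserved
```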
###Code
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
"""
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
"""
### START CODE HERE ### (≈ 1 line)
X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), mode='constant', constant_values=(0, 0))
### END CODE HERE ###
return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =\n", x.shape)
print ("x_pad.shape =\n", x_pad.shape)
print ("x[1,1] =\n", x[1,1])
print ("x_pad[1,1] =\n", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
###Output
x.shape =
(4, 3, 3, 2)
x_pad.shape =
(4, 7, 7, 2)
x[1,1] =
[[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
x_pad[1,1] =
[[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
###Markdown
**Expected Output**:```x.shape = (4, 3, 3, 2)x_pad.shape = (4, 7, 7, 2)x[1,1] = [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]]x_pad[1,1] = [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]]``` 3.2 - Single step of convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: - Takes an input volume - Applies a filter at every position of the input- Outputs another volume (usually of different size) **Figure 2** : **Convolution operation** with a filter of 3x3 and a stride of 1 (stride = amount you move the window each time you slide) In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. **Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html). **Note**: The variable b will be passed in as a numpy array. If we add a scalar (a float or integer) to a numpy array, the result is a numpy array. In the special case when a numpy array contains a single value, we can cast it as a float to convert it to a scalar.
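The note above about casting `b` is easy to see on a toy example; in this sketch the numbers are made up purely for illustration.

```python
import numpy as np

Z = 3.5                      # pretend this is the summed element-wise product
b = np.array([[[0.25]]])     # the bias arrives as a (1, 1, 1) numpy array

print(Z + b)         # [[[3.75]]]  -> still a (1, 1, 1) array
print(Z + float(b))  # 3.75        -> a plain scalar, which is what we want for Z
```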
###Code
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
"""
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, the result of convolving the sliding window (W, b) on a slice x of the input data
"""
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice_prev and W. Do not add the bias yet.
s = np.multiply(a_slice_prev, W)
# Sum over all entries of the volume s.
Z = np.sum(s)
# Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = Z + float(b)  # cast b to a plain float so that Z is a scalar, as described above
### END CODE HERE ###
return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
###Output
Z = -6.99908945068
###Markdown
**Expected Output**: **Z** -6.99908945068 3.3 - Convolutional Neural Networks - Forward passIn the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: **Exercise**: Implement the function below to convolve the filters `W` on an input activation `A_prev`. This function takes the following inputs:* `A_prev`, the activations output by the previous layer (for a batch of m inputs); * Weights are denoted by `W`. The filter window size is `f` by `f`.* The bias vector is `b`, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. **Hint**: 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:```pythona_slice_prev = a_prev[0:2,0:2,:]```Notice how this gives a 3D slice that has height 2, width 2, and depth 3. Depth is the number of channels. This will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define.2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find out how each of the corner can be defined using h, w, f and s in the code below. **Figure 3** : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** This figure shows only a single channel. **Reminder**:The formulas relating the output shape of the convolution to the input shape is:$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_C = \text{number of filters used in the convolution}$$For this exercise, we won't worry about vectorization, and will just implement everything with for-loops. Additional Hints if you're stuck* You will want to use array slicing (e.g.`varname[0:1,:,3:5]`) for the following variables: `a_prev_pad` ,`W`, `b` Copy the starter code of the function and run it outside of the defined function, in separate cells. Check that the subset of each array is the size and dimension that you're expecting. * To decide how to get the vert_start, vert_end; horiz_start, horiz_end, remember that these are indices of the previous layer. Draw an example of a previous padded layer (8 x 8, for instance), and the current (output layer) (2 x 2, for instance). The output layer's indices are denoted by `h` and `w`. * Make sure that `a_slice_prev` has a height, width and depth.* Remember that `a_prev_pad` is a subset of `A_prev_pad`. Think about which one should be used within the for loops.
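Before filling in `conv_forward`, it can help to see the corner indices in isolation; the sketch below uses made-up sizes (a single 7x7x3 padded example, `f = 3`, `stride = 2`) just to show how `h`, `w`, `stride` and `f` define slices of the right shape.

```python
import numpy as np

a_prev_pad = np.random.randn(7, 7, 3)   # one padded example (illustrative size)
f, stride = 3, 2

for h in range(3):                       # output rows: floor((7 - 3) / 2) + 1 = 3
    vert_start = h * stride
    vert_end = vert_start + f
    for w in range(3):                   # output columns
        horiz_start = w * stride
        horiz_end = horiz_start + f
        a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
        assert a_slice_prev.shape == (f, f, 3)   # every slice matches the filter size
print("all slices have shape (f, f, n_C_prev)")
```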
###Code
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
"""
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer,
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
"""
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape (≈1 line)
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters" (≈2 lines)
stride = hparameters["stride"]
pad = hparameters["pad"]
# Compute the dimensions of the CONV output volume using the formula given above.
# Hint: use int() to apply the 'floor' operation. (≈2 lines)
n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
n_W = int((n_W_prev - f + 2 * pad) / stride) + 1
# Initialize the output volume Z with zeros. (≈1 line)
Z = np.zeros((m, n_H, n_W, n_C))
# Create A_prev_pad by padding A_prev
A_prev_pad = zero_pad(A_prev, pad)
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation
for h in range(n_H): # loop over vertical axis of the output volume
# Find the vertical start and end of the current "slice" (≈2 lines)
vert_start = h * stride
vert_end = vert_start + f
for w in range(n_W): # loop over horizontal axis of the output volume
# Find the horizontal start and end of the current "slice" (≈2 lines)
horiz_start = w * stride
horiz_end = horiz_start + f
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈3 line)
weights = W[..., c]
biases = b[..., c]
Z[i, h, w, c] = conv_single_step(a_slice_prev, weights, biases)
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,5,7,4)
W = np.random.randn(3,3,4,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 1,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =\n", np.mean(Z))
print("Z[3,2,1] =\n", Z[3,2,1])
print("cache_conv[0][1][2][3] =\n", cache_conv[0][1][2][3])
###Output
Z's mean =
0.692360880758
Z[3,2,1] =
[ -1.28912231 2.27650251 6.61941931 0.95527176 8.25132576
2.31329639 13.00689405 2.34576051]
cache_conv[0][1][2][3] =
[-1.1191154 1.9560789 -0.3264995 -1.34267579]
###Markdown
**Expected Output**:```Z's mean = 0.692360880758Z[3,2,1] = [ -1.28912231 2.27650251 6.61941931 0.95527176 8.25132576 2.31329639 13.00689405 2.34576051]cache_conv[0][1][2][3] = [-1.1191154 1.9560789 -0.3264995 -1.34267579]``` Finally, CONV layer should also contain an activation, in which case we would add the following line of code:```python Convolve the window to get back one output neuronZ[i, h, w, c] = ... Apply activationA[i, h, w, c] = activation(Z[i, h, w, c])```You don't need to do it here. 4 - Pooling layer The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to its position in the input. The two types of pooling layers are: - Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the $f \times f$ window you would compute a *max* or *average* over. 4.1 - Forward PoolingNow, you are going to implement MAX-POOL and AVG-POOL, in the same function. **Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below.**Reminder**:As there's no padding, the formulas binding the output shape of the pooling to the input shape is:$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$$$ n_C = n_{C_{prev}}$$
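As a quick numeric check of the formulas above (optional), the sketch below plugs in the sizes used by the test cells further down, `n_H_prev = n_W_prev = 5` and `f = 3`, for strides 1 and 2.

```python
def pool_output_shape(n_H_prev, n_W_prev, f, stride):
    # floor((n_prev - f) / stride) + 1, as in the reminder above
    n_H = int(1 + (n_H_prev - f) / stride)
    n_W = int(1 + (n_W_prev - f) / stride)
    return n_H, n_W

print(pool_output_shape(5, 5, f=3, stride=1))  # (3, 3) -> matches Case 1 below
print(pool_output_shape(5, 5, f=3, stride=2))  # (2, 2) -> matches Case 2 below
```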
###Code
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
"""
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
"""
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
### START CODE HERE ###
for i in range(m): # loop over the training examples
for h in range(n_H): # loop on the vertical axis of the output volume
# Find the vertical start and end of the current "slice" (≈2 lines)
vert_start = h * stride
vert_end = vert_start + f
for w in range(n_W): # loop on the horizontal axis of the output volume
# Find the vertical start and end of the current "slice" (≈2 lines)
horiz_start = w * stride
horiz_end = horiz_start + f
for c in range (n_C): # loop over the channels of the output volume
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
# Compute the pooling operation on the slice.
# Use an if statement to differentiate the modes.
# Use np.max and np.mean.
if mode == "max":
A[i, h, w, c] = np.max(a_prev_slice)
elif mode == "average":
A[i, h, w, c] = np.mean(a_prev_slice)
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
# Case 1: stride of 1
np.random.seed(1)
A_prev = np.random.randn(2, 5, 5, 3)
hparameters = {"stride" : 1, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A.shape = " + str(A.shape))
print("A =\n", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A.shape = " + str(A.shape))
print("A =\n", A)
###Output
mode = max
A.shape = (2, 3, 3, 3)
A =
[[[[ 1.74481176 0.90159072 1.65980218]
[ 1.74481176 1.46210794 1.65980218]
[ 1.74481176 1.6924546 1.65980218]]
[[ 1.14472371 0.90159072 2.10025514]
[ 1.14472371 0.90159072 1.65980218]
[ 1.14472371 1.6924546 1.65980218]]
[[ 1.13162939 1.51981682 2.18557541]
[ 1.13162939 1.51981682 2.18557541]
[ 1.13162939 1.6924546 2.18557541]]]
[[[ 1.19891788 0.84616065 0.82797464]
[ 0.69803203 0.84616065 1.2245077 ]
[ 0.69803203 1.12141771 1.2245077 ]]
[[ 1.96710175 0.84616065 1.27375593]
[ 1.96710175 0.84616065 1.23616403]
[ 1.62765075 1.12141771 1.2245077 ]]
[[ 1.96710175 0.86888616 1.27375593]
[ 1.96710175 0.86888616 1.23616403]
[ 1.62765075 1.12141771 0.79280687]]]]
mode = average
A.shape = (2, 3, 3, 3)
A =
[[[[ -3.01046719e-02 -3.24021315e-03 -3.36298859e-01]
[ 1.43310483e-01 1.93146751e-01 -4.44905196e-01]
[ 1.28934436e-01 2.22428468e-01 1.25067597e-01]]
[[ -3.81801899e-01 1.59993515e-02 1.70562706e-01]
[ 4.73707165e-02 2.59244658e-02 9.20338402e-02]
[ 3.97048605e-02 1.57189094e-01 3.45302489e-01]]
[[ -3.82680519e-01 2.32579951e-01 6.25997903e-01]
[ -2.47157416e-01 -3.48524998e-04 3.50539717e-01]
[ -9.52551510e-02 2.68511000e-01 4.66056368e-01]]]
[[[ -1.73134159e-01 3.23771981e-01 -3.43175716e-01]
[ 3.80634669e-02 7.26706274e-02 -2.30268958e-01]
[ 2.03009393e-02 1.41414785e-01 -1.23158476e-02]]
[[ 4.44976963e-01 -2.61694592e-03 -3.10403073e-01]
[ 5.08114737e-01 -2.34937338e-01 -2.39611830e-01]
[ 1.18726772e-01 1.72552294e-01 -2.21121966e-01]]
[[ 4.29449255e-01 8.44699612e-02 -2.72909051e-01]
[ 6.76351685e-01 -1.20138225e-01 -2.44076712e-01]
[ 1.50774518e-01 2.89111751e-01 1.23238536e-03]]]]
###Markdown
** Expected Output**```mode = maxA.shape = (2, 3, 3, 3)A = [[[[ 1.74481176 0.90159072 1.65980218] [ 1.74481176 1.46210794 1.65980218] [ 1.74481176 1.6924546 1.65980218]] [[ 1.14472371 0.90159072 2.10025514] [ 1.14472371 0.90159072 1.65980218] [ 1.14472371 1.6924546 1.65980218]] [[ 1.13162939 1.51981682 2.18557541] [ 1.13162939 1.51981682 2.18557541] [ 1.13162939 1.6924546 2.18557541]]] [[[ 1.19891788 0.84616065 0.82797464] [ 0.69803203 0.84616065 1.2245077 ] [ 0.69803203 1.12141771 1.2245077 ]] [[ 1.96710175 0.84616065 1.27375593] [ 1.96710175 0.84616065 1.23616403] [ 1.62765075 1.12141771 1.2245077 ]] [[ 1.96710175 0.86888616 1.27375593] [ 1.96710175 0.86888616 1.23616403] [ 1.62765075 1.12141771 0.79280687]]]]mode = averageA.shape = (2, 3, 3, 3)A = [[[[ -3.01046719e-02 -3.24021315e-03 -3.36298859e-01] [ 1.43310483e-01 1.93146751e-01 -4.44905196e-01] [ 1.28934436e-01 2.22428468e-01 1.25067597e-01]] [[ -3.81801899e-01 1.59993515e-02 1.70562706e-01] [ 4.73707165e-02 2.59244658e-02 9.20338402e-02] [ 3.97048605e-02 1.57189094e-01 3.45302489e-01]] [[ -3.82680519e-01 2.32579951e-01 6.25997903e-01] [ -2.47157416e-01 -3.48524998e-04 3.50539717e-01] [ -9.52551510e-02 2.68511000e-01 4.66056368e-01]]] [[[ -1.73134159e-01 3.23771981e-01 -3.43175716e-01] [ 3.80634669e-02 7.26706274e-02 -2.30268958e-01] [ 2.03009393e-02 1.41414785e-01 -1.23158476e-02]] [[ 4.44976963e-01 -2.61694592e-03 -3.10403073e-01] [ 5.08114737e-01 -2.34937338e-01 -2.39611830e-01] [ 1.18726772e-01 1.72552294e-01 -2.21121966e-01]] [[ 4.29449255e-01 8.44699612e-02 -2.72909051e-01] [ 6.76351685e-01 -1.20138225e-01 -2.44076712e-01] [ 1.50774518e-01 2.89111751e-01 1.23238536e-03]]]]```
###Code
# Case 2: stride of 2
np.random.seed(1)
A_prev = np.random.randn(2, 5, 5, 3)
hparameters = {"stride" : 2, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A.shape = " + str(A.shape))
print("A =\n", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A.shape = " + str(A.shape))
print("A =\n", A)
###Output
mode = max
A.shape = (2, 2, 2, 3)
A =
[[[[ 1.74481176 0.90159072 1.65980218]
[ 1.74481176 1.6924546 1.65980218]]
[[ 1.13162939 1.51981682 2.18557541]
[ 1.13162939 1.6924546 2.18557541]]]
[[[ 1.19891788 0.84616065 0.82797464]
[ 0.69803203 1.12141771 1.2245077 ]]
[[ 1.96710175 0.86888616 1.27375593]
[ 1.62765075 1.12141771 0.79280687]]]]
mode = average
A.shape = (2, 2, 2, 3)
A =
[[[[-0.03010467 -0.00324021 -0.33629886]
[ 0.12893444 0.22242847 0.1250676 ]]
[[-0.38268052 0.23257995 0.6259979 ]
[-0.09525515 0.268511 0.46605637]]]
[[[-0.17313416 0.32377198 -0.34317572]
[ 0.02030094 0.14141479 -0.01231585]]
[[ 0.42944926 0.08446996 -0.27290905]
[ 0.15077452 0.28911175 0.00123239]]]]
###Markdown
**Expected Output:** ```mode = maxA.shape = (2, 2, 2, 3)A = [[[[ 1.74481176 0.90159072 1.65980218] [ 1.74481176 1.6924546 1.65980218]] [[ 1.13162939 1.51981682 2.18557541] [ 1.13162939 1.6924546 2.18557541]]] [[[ 1.19891788 0.84616065 0.82797464] [ 0.69803203 1.12141771 1.2245077 ]] [[ 1.96710175 0.86888616 1.27375593] [ 1.62765075 1.12141771 0.79280687]]]]mode = averageA.shape = (2, 2, 2, 3)A = [[[[-0.03010467 -0.00324021 -0.33629886] [ 0.12893444 0.22242847 0.1250676 ]] [[-0.38268052 0.23257995 0.6259979 ] [-0.09525515 0.268511 0.46605637]]] [[[-0.17313416 0.32377198 -0.34317572] [ 0.02030094 0.14141479 -0.01231585]] [[ 0.42944926 0.08446996 -0.27290905] [ 0.15077452 0.28911175 0.00123239]]]]``` Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. The remainder of this notebook is optional, and will not be graded. 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we will briefly present them below. 5.1 - Convolutional layer backward pass Let's start by implementing the backward pass for a CONV layer. 5.1.1 - Computing dA:This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:$$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, we multiply the the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. In code, inside the appropriate for-loops, this formula translates into:```pythonda_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]``` 5.1.2 - Computing dW:This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:$$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. 
In code, inside the appropriate for-loops, this formula translates into:```pythondW[:,:,:,c] += a_slice * dZ[i, h, w, c]``` 5.1.3 - Computing db:This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into:```pythondb[:,:,:,c] += dZ[i, h, w, c]```**Exercise**: Implement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
###Code
def conv_backward(dZ, cache):
"""
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
"""
### START CODE HERE ###
# Retrieve information from "cache"
(A_prev, W, b, hparameters) = cache
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters"
stride = hparameters["stride"]
pad = hparameters["pad"]
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = dZ.shape
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
dW = np.zeros((f, f, n_C_prev, n_C))
db = np.zeros((1, 1, 1, n_C))
# Pad A_prev and dA_prev
A_prev_pad = zero_pad(A_prev, pad)
dA_prev_pad = zero_pad(dA_prev, pad)
for i in range(m): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = A_prev_pad[i]
da_prev_pad = dA_prev_pad[i]
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the slice from a_prev_pad
a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[..., c] * dZ[i, h, w, c]
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
db[:,:,:,c] += dZ[i, h, w, c]
# Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
# We'll run conv_forward to initialize the 'Z' and 'cache_conv",
# which we'll use to test the conv_backward function
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
# Test conv_backward
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
###Output
dA_mean = 1.45243777754
dW_mean = 1.72699145831
db_mean = 7.83923256462
###Markdown
 **Expected Output:** **dA_mean** 1.45243777754 **dW_mean** 1.72699145831 **db_mean** 7.83923256462 5.2 Pooling layer - backward pass Next, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. 5.2.1 Max pooling - backward pass Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: $$ X = \begin{bmatrix}1 & 3 \\4 & 2\end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix}0 & 0 \\1 & 0\end{bmatrix}\tag{4}$$As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. **Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward. Hints:- `np.max()` may be helpful. It computes the maximum of an array.- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:```A[i,j] = True if X[i,j] = xA[i,j] = False if X[i,j] != x```- Here, you don't need to consider cases where there are several maxima in a matrix.
###Code
def create_mask_from_window(x):
"""
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
"""
### START CODE HERE ### (≈1 line)
mask = x == np.max(x)
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
###Output
x = [[ 1.62434536 -0.61175641 -0.52817175]
[-1.07296862 0.86540763 -2.3015387 ]]
mask = [[ True False False]
[False False False]]
###Markdown
**Expected Output:** **x =**[[ 1.62434536 -0.61175641 -0.52817175] [-1.07296862 0.86540763 -2.3015387 ]] **mask =**[[ True False False] [False False False]] Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. 5.2.2 - Average pooling - backward pass In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix}1/4 && 1/4 \\1/4 && 1/4\end{bmatrix}\tag{5}$$This implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. **Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)
###Code
def distribute_value(dz, shape):
"""
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
"""
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = shape
# Compute the value to distribute on the matrix (≈1 line)
average = dz / (n_H * n_W)
# Create a matrix where every entry is the "average" value (≈1 line)
a = np.ones(shape) * average
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
###Output
distributed value = [[ 0.5 0.5]
[ 0.5 0.5]]
###Markdown
**Expected Output**: distributed_value =[[ 0.5 0.5] [ 0.5 0.5]] 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer.**Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dA.
###Code
def pool_backward(dA, cache, mode = "max"):
"""
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
"""
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = cache
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = hparameters["stride"]
f = hparameters["f"]
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
m, n_H, n_W, n_C = dA.shape
# Initialize dA_prev with zeros (≈1 line)
dA_prev = np.zeros(A_prev.shape)
for i in range(m): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = A_prev[i]
for h in range(n_H): # loop on the vertical axis
for w in range(n_W): # loop on the horizontal axis
for c in range(n_C): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
                    vert_start = h * stride   # scale by stride so the slice matches the one used in the forward pass
vert_end = vert_start + f
                    horiz_start = w * stride
horiz_end = horiz_start + f
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
# Create the mask from a_prev_slice (≈1 line)
mask = create_mask_from_window(a_prev_slice)
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += np.multiply(mask, dA[i, h, w, c])
elif mode == "average":
# Get the value a from dA (≈1 line)
da = dA[i, h, w, c]
# Define the shape of the filter as fxf (≈1 line)
shape = (f, f)
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
###Output
mode = max
mean of dA = 0.145713902729
dA_prev[1,1] = [[ 0. 0. ]
[ 5.05844394 -1.68282702]
[ 0. 0. ]]
mode = average
mean of dA = 0.145713902729
dA_prev[1,1] = [[ 0.08485462 0.2787552 ]
[ 1.26461098 -0.25749373]
[ 1.17975636 -0.53624893]]
|
articles_summarization_transformers.ipynb | ###Markdown
Articles Summarization using Transformers and BeautifulSoupTo scrape a given article from the web using BeautifulSoup, and to summarize the article using Transformers.References for this tutorial:- https://www.youtube.com/watch?v=JctmnczWg0U- https://github.com/nicknochnack/Longform-Summarization-with-Hugging-Face/blob/main/LongSummarization.ipynb- https://blog.finxter.com/classification-of-star-wars-lego-images-using-cnn-and-transfer-learning/ Install and Import Modules Enable GPU:- Method 1: Edit tab --> Notebook settings --> GPU --> Save.- Method 2: Runtime tab --> Change runtime type --> GPU --> Save.
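If you want to confirm that the GPU runtime is actually active before running the heavier cells, a quick check such as the one below can be used; it assumes PyTorch is available in the runtime (it comes preinstalled on Colab), which is an assumption about the environment rather than a requirement of this tutorial.

```python
import torch

# True only if the notebook is running on a GPU runtime
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device name:", torch.cuda.get_device_name(0))
```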
###Code
!pip install transformers
from transformers import pipeline
from bs4 import BeautifulSoup
import requests
###Output
_____no_output_____
###Markdown
Load Model
###Code
summarizer = pipeline("summarization")
###Output
_____no_output_____
###Markdown
Web Scraping
###Code
URL = "https://blog.finxter.com/classification-of-star-wars-lego-images-using-cnn-and-transfer-learning/"
r = requests.get(URL)
soup = BeautifulSoup(r.text, 'html.parser')
results = soup.find_all(['h1', 'p'])
text = [result.text for result in results]
ARTICLE = ' '.join(text)
ARTICLE
###Output
_____no_output_____
###Markdown
Text Processing
###Code
content = ARTICLE.split('How to Classify Star Wars Lego Images using CNN and Transfer Learning ')
content
content = content[1]
content
content = content.replace('.', '.<eos>')
content = content.replace('?', '?<eos>')
content = content.replace('!', '!<eos>')
content
sentences = content.split('<eos>')
sentences
max_word_len = 500
current_block = 0
blocks = []
# group sentences into chunks of at most ~500 words; the summarization model only accepts
# a limited input length (on the order of 1024 tokens), so we stay safely below that limit
for sentence in sentences:
if len(blocks) == current_block + 1:
if len(blocks[current_block]) + len(sentence.split(' ')) <= max_word_len:
blocks[current_block].extend(sentence.split(' '))
else:
current_block += 1
blocks.append(sentence.split(' '))
else:
blocks.append(sentence.split(' '))
# join words of each block into a sentence string
for block_id in range(len(blocks)):
blocks[block_id] = ' '.join(blocks[block_id])
print(f"Total number of blocks: {len(blocks)}")
blocks[0]
blocks[1]
blocks[2]
blocks[3]
###Output
_____no_output_____
###Markdown
Text Summarization
###Code
res = summarizer(blocks, max_length=120, min_length=30, do_sample=False)
# summarized text for block 1
res[0]
text = ' '.join([summ['summary_text'] for summ in res])
text
# save as text file
with open('blogsummary.txt', 'w') as f:
f.write(text)
###Output
_____no_output_____
###Markdown
Bonus: Make It An Audiobook!
###Code
!pip install gtts
from gtts import gTTS
from IPython.display import Audio
tts = gTTS(text)
tts.save('my_audiobook.mp3')  # gTTS produces MP3 audio, so save with an .mp3 extension
sound_file = 'my_audiobook.mp3'
Audio(sound_file, autoplay=True)
###Output
_____no_output_____ |
FundaML-CourseraAndrewNg/1.3 Regularized Linear Regression and Bias vs Variance.ipynb | ###Markdown
 Regularized Linear Regression and Bias vs. Variance This dataset is divided into three parts: - A training set that your model will learn on: X, y - A cross-validation set for determining the regularization parameter: Xval, yval - A test set for evaluating performance. These are "unseen" examples which your model did not see during training: Xtest, ytest
###Code
# %load ../../../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.io import loadmat
from scipy.optimize import minimize
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures
pd.set_option('display.notebook_repr_html', False)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', 150)
pd.set_option('display.max_seq_items', None)
#%config InlineBackend.figure_formats = {'pdf',}
#%matplotlib inline
import seaborn as sns
sns.set_context('notebook')
sns.set_style('white')
data = loadmat('data/ex5data1.mat')
data.keys()
y_train = data['y']
X_train = np.c_[np.ones_like(data['X']), data['X']] # adding the intercept np.ones_like(data['X'])
yval = data['yval']
Xval = np.c_[np.ones_like(data['Xval']), data['Xval']] # adding the intercept np.ones_like(data['X'])
print('X_train:', X_train.shape)
print('y_train:', y_train.shape)
print('Xval:', Xval.shape)
print('yval:', yval.shape)
###Output
X_train: (12, 2)
y_train: (12, 1)
Xval: (21, 2)
yval: (21, 1)
###Markdown
Regularized Linear Regression
###Code
plt.scatter(X_train[:,1], y_train, s=50, c='r', marker='x', linewidths=1)
plt.xlabel('Change in water level (x)')
plt.ylabel('Water flowing out of the dam (y)')
plt.ylim(ymin=0);
###Output
_____no_output_____
###Markdown
Regularized Cost function
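For reference (a restatement of the standard objective, matching the code below rather than adding anything new): the regularized linear regression cost is $$ J(\theta) = \frac{1}{2m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2 $$ where $h_\theta(x) = \theta^T x$, $\lambda$ corresponds to the `reg` argument, and the intercept term $\theta_0$ is not regularized.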
###Code
def linearRegCostFunction(theta, X, y, reg):
m = y.size
    h = X.dot(theta.reshape(-1,1))  # reshape theta so h is (m,1) and broadcasts correctly against y, even when theta arrives as a flat array
J = (1/(2*m))*np.sum(np.square(h-y)) + (reg/(2*m))*np.sum(np.square(theta[1:]))
return(J)
###Output
_____no_output_____
###Markdown
Gradient
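For reference, the regularized gradient implemented below is $$ \frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} + \frac{\lambda}{m}\theta_j \quad (j \geq 1), \qquad \frac{\partial J}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_0^{(i)} $$ i.e. the same expression for every component except that the intercept term receives no regularization, again with $\lambda$ given by `reg`.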
###Code
def lrgradientReg(theta, X, y, reg):
m = y.size
h = X.dot(theta.reshape(-1,1))
grad = (1/m)*(X.T.dot(h-y))+ (reg/m)*np.r_[[[0]],theta[1:].reshape(-1,1)]
return(grad.flatten())
# Example without regularization
initial_theta = np.ones((X_train.shape[1],1))
cost = linearRegCostFunction(initial_theta, X_train, y_train, 0)
gradient = lrgradientReg(initial_theta, X_train, y_train, 0)
print(cost)
print(gradient)
def trainLinearReg(X, y, reg):
#initial_theta = np.zeros((X.shape[1],1))
initial_theta = np.array([[15],[15]])
# For some reason the minimize() function does not converge when using
# zeros as initial theta.
res = minimize(linearRegCostFunction, initial_theta, args=(X,y,reg), method=None, jac=lrgradientReg,
options={'maxiter':5000})
return(res)
fit = trainLinearReg(X_train, y_train, 0)
fit
###Output
_____no_output_____
###Markdown
Comparison: coefficients and cost obtained with LinearRegression in Scikit-learn
###Code
regr = LinearRegression(fit_intercept=False)
regr.fit(X_train, y_train.ravel())
print(regr.coef_)
print(linearRegCostFunction(regr.coef_, X_train, y_train, 0))
plt.plot(np.linspace(-50,40), (fit.x[0]+ (fit.x[1]*np.linspace(-50,40))), label='Scipy optimize')
#plt.plot(np.linspace(-50,40), (regr.coef_[0]+ (regr.coef_[1]*np.linspace(-50,40))), label='Scikit-learn')
plt.scatter(X_train[:,1], y_train, s=50, c='r', marker='x', linewidths=1)
plt.xlabel('Change in water level (x)')
plt.ylabel('Water flowing out of the dam (y)')
plt.ylim(ymin=-5)
plt.xlim(xmin=-50)
plt.legend(loc=4);
def learningCurve(X, y, Xval, yval, reg):
m = y.size
error_train = np.zeros((m, 1))
error_val = np.zeros((m, 1))
for i in np.arange(m):
res = trainLinearReg(X[:i+1], y[:i+1], reg)
error_train[i] = linearRegCostFunction(res.x, X[:i+1], y[:i+1], reg)
error_val[i] = linearRegCostFunction(res.x, Xval, yval, reg)
return(error_train, error_val)
t_error, v_error = learningCurve(X_train, y_train, Xval, yval, 0)
plt.plot(np.arange(1,13), t_error, label='Training error')
plt.plot(np.arange(1,13), v_error, label='Validation error')
plt.title('Learning curve for linear regression')
plt.xlabel('Number of training examples')
plt.ylabel('Error')
plt.legend();
###Output
_____no_output_____
###Markdown
Polynomial regression (Scikit-learn)
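The cells below build the degree-8 features and the regression objects by hand; an equivalent, slightly more idiomatic way to wire this up in scikit-learn is a `Pipeline`. The sketch below is an optional alternative (it reuses `X_train`, `y_train` and `np` defined earlier, and the `alpha` value simply mirrors the Ridge model below).

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

# Chain the feature expansion and the regularized regression into one estimator
poly_ridge = make_pipeline(PolynomialFeatures(degree=8), Ridge(alpha=20))
poly_ridge.fit(X_train[:, 1].reshape(-1, 1), y_train)
print(poly_ridge.predict(np.array([[-20.0]])))  # predicted outflow for a water level of -20
```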
###Code
poly = PolynomialFeatures(degree=8)
X_train_poly = poly.fit_transform(X_train[:,1].reshape(-1,1))
regr2 = LinearRegression()
regr2.fit(X_train_poly, y_train)
regr3 = Ridge(alpha=20)
regr3.fit(X_train_poly, y_train)
# plot range for x
plot_x = np.linspace(-60,45)
# using coefficients to calculate y
plot_y = regr2.intercept_+ np.sum(regr2.coef_*poly.fit_transform(plot_x.reshape(-1,1)), axis=1)
plot_y2 = regr3.intercept_ + np.sum(regr3.coef_*poly.fit_transform(plot_x.reshape(-1,1)), axis=1)
plt.plot(plot_x, plot_y, label='Scikit-learn LinearRegression')
plt.plot(plot_x, plot_y2, label='Scikit-learn Ridge (alpha={})'.format(regr3.alpha))
plt.scatter(X_train[:,1], y_train, s=50, c='r', marker='x', linewidths=1)
plt.xlabel('Change in water level (x)')
plt.ylabel('Water flowing out of the dam (y)')
plt.title('Polynomial regression degree 8')
plt.legend(loc=4);
###Output
/home/annguyen/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/ridge.py:112: LinAlgWarning: scipy.linalg.solve
Ill-conditioned matrix detected. Result is not guaranteed to be accurate.
Reciprocal condition number2.356626e-26
overwrite_a=True).T
|
2.Instalar_Python_ML_DL_Anaconda.ipynb | ###Markdown
 ⚙️ How to Install Python for Machine Learning and Deep Learning___ In this notebook we show how to **install Python** and the machine learning (**scikit-learn**) and deep learning (**TensorFlow**, **Keras**, **PyTorch** and **Theano**) libraries for Python on Windows, macOS and Linux. To do so, we use the **Anaconda** Python distribution, which also includes the IDEs **Jupyter Notebook**, **JupyterLab** and **Spyder**, and the NumPy, SciPy, Matplotlib and Pandas libraries, among many others. Let's get started. Contents 1. Introduction: Required Software and Libraries 2. Download Anaconda 3. Install Anaconda 4. Check the Installation and Update Anaconda 5. Update the Machine Learning Library scikit-learn 6. Install the Deep Learning Libraries 7. Final Check 8. Next Step Extra 1: Problems? Getting Help Extra 2. How to Manage Packages with Conda Extra 3. Anaconda Navigator 1. Introduction: Required Software and Libraries___ If we want to use Python for machine learning or deep learning, we need to install the following: - **Python**: The Python interpreter. - The SciPy libraries: - **NumPy**: N-dimensional arrays and operations on them. Mathematical functions. - **SciPy**: Mathematical functions (for integration, interpolation, optimization, linear algebra and statistics). - **Matplotlib**: Data visualization. - **Pandas**: Data structures and functions for data manipulation and analysis. These libraries are used by many of the machine learning and deep learning libraries. They also provide useful functions for data analysis. - **Machine learning library** (many different algorithms): **scikit-learn**. This is the main general-purpose machine learning library for Python. - **Deep learning libraries** (neural networks): - **TensorFlow** - **Keras** - **PyTorch** - Theano It is only necessary to install these libraries if we are going to use neural networks for deep learning. If that is our case, we do not need to install all of the deep learning libraries. - **Integrated development environments** (IDEs): **Jupyter Notebook**, **JupyterLab**, Spyder, etc. Therefore, to install Python and the machine learning and deep learning libraries, we would first have to install Python and then install each of the libraries. How do we do this? There are several ways. One very quick way, and the one we cover in this notebook, is to use **Anaconda**, a **Python distribution** that already includes many of these libraries. Anaconda is free and open source, and it is available for Windows, macOS and Linux. What does Anaconda include? - **Python** and many **preinstalled Python libraries**. Of the libraries and software mentioned above, Anaconda installs by default: Python, NumPy, SciPy, Matplotlib, Pandas, scikit-learn, Jupyter Notebook, JupyterLab and Spyder. In other words, with what it ships we already have everything we need for machine learning in Python. - **The Conda package manager**. - It lets you install, update and uninstall packages. A package may contain software, libraries, etc. By default, packages or libraries installed with Conda are downloaded from the [Anaconda repository](https://repo.anaconda.com/pkgs/). - It automatically manages dependencies between packages. - It lets you manage virtual environments, each with different versions of software and libraries. 
 Therefore, we can use Conda to install whichever deep learning libraries we need (TensorFlow, Keras, PyTorch, etc.) and other Python libraries. 2. Download Anaconda___ We download the Anaconda installer: 1. Go to the [Anaconda download page](https://www.anaconda.com/distribution/). 2. Click the "Download" button or scroll down the download page. 3. Select our operating system: Windows, macOS or Linux. 4. Select the latest available Python version (no button needs to be clicked). In this case, version 3.7, although a more recent version may be shown nowadays. 5. Select the installer: - Windows: Select the graphical installer. Click "64-Bit Graphical Installer". - macOS: Select the graphical installer "64-Bit Graphical Installer". If we want to install it by running a script on the command line, we can select the "64-Bit Command Line Installer". - Linux: Currently, Anaconda has no graphical installer for Linux, so we select "64-Bit (x86) Installer". This installer is a script that we will run in the terminal or command line. 6. After selecting the installer, it will start downloading: - Windows: A `.exe` file (graphical installer). - macOS: A `.pkg` file (graphical installer) or a `.sh` file (script). - Linux: A `.sh` file (script). 3. Install Anaconda___ We now have the Anaconda installer downloaded. Before installing Anaconda, a few clarifications: - By default, Anaconda is installed locally for the specific user we are using in our operating system, so administrator permissions are not needed to install Anaconda. If you need to install Anaconda system-wide (for all users), administrator permissions are required. - If we already have a version of Python installed on our operating system, we can install Anaconda without any problem. We do not need to uninstall other Python versions or packages before installing Anaconda. 3.1. Graphical Installer (Windows and macOS) 1. Double-click the installer file: - Windows: The `.exe` file. - macOS: The `.pkg` file. 2. Follow the instructions of the installation wizard. At some point during the installation we will be presented with the following options (not necessarily in this exact order): - On Windows, select the option to install Anaconda locally for our user ("*Just Me*"). - Accept the default path where Anaconda is installed: - macOS: `/Users//opt/Anaconda3/`. - Windows: `C:\Users\\Anaconda3\`. Important: On Windows, if our user name (``) contains spaces, accents or the letter "ñ", choose another location (whose path does not contain these characters) to install Anaconda. - Checkbox to add Anaconda to the PATH ("*Add Anaconda to my PATH environment variable*"): - Windows: Do not check the box. If we add Anaconda to the PATH on Windows, it can interfere with other software. Even if we want to use Python from the command line, we do not add Anaconda to the PATH. - macOS: Do check the box. If the checkbox does not appear, don't worry, it is checked automatically. - On Windows, check the box "*Register Anaconda as my default Python 3.7*" (it is already checked by default). This sets Anaconda Python as the Python installation used by default. Leave the rest of the settings at their defaults. 3.2. Command-Line Installer (macOS and Linux) 1. 
 Open a terminal. 2. Run the installation script. If we are in the directory where the installer is located, we run: `bash ./.sh` Replace `` with the name of the Anaconda installer script. Note: The script does not need to be run with administrator permissions, that is, we do not use `sudo` to run it. 3. The installer shows "*In order to continue the installation process, please review the license agreement*". Press `Enter`. 4. The installer shows the license text. Scroll down to the end of the text, type "*yes*" and press `Enter`. 5. Press `Enter` to accept the default installation path: - macOS: `/Users//anaconda3`. - Linux: `/home//anaconda3`. 6. The installer asks whether we want it to initialize Anaconda ("*Do you wish the installer to initialize Anaconda by running conda init?*"). Type "*yes*" and press `Enter`. This way, the installer adds Anaconda to the PATH. 7. The installation finishes. 8. Close and reopen the terminal so the Anaconda installation takes effect. 4. Check the Installation and Update Anaconda___ Once the installation is done: - On macOS and Linux, open a terminal. - On Windows, open Anaconda Prompt.  If the Anaconda Prompt icon does not appear, restart Windows. Note: On Windows we do not use the Windows terminal, but Anaconda Prompt. Anaconda Prompt is used just like a terminal. All the commands shown from now on must be run in the terminal (on macOS and Linux) or in Anaconda Prompt (on Windows), and they are exactly the same commands for Windows, macOS and Linux. 4.1. Check the Anaconda Installation 1. Check that the Conda package manager is installed by running: `conda -V` If it is installed, the installed version number is shown: ________________________________________________________ conda 4.7.12 ________________________________________________________ Note: If an error message is shown on macOS or Linux, make sure you closed and reopened the terminal after installing Anaconda. 2. Check that Python was installed correctly by running: `python -V` This shows the Python version we have installed: ________________________________________________________ Python 3.7.5 ________________________________________________________ 3. By default, the word "*(base)*" appears in the terminal prompt (macOS and Linux) and in Anaconda Prompt (Windows) after installing Anaconda. If we do not want it to appear, we can run: `conda config --set changeps1 False` and then close and reopen the terminal or Anaconda Prompt. 4.2. Update Conda and Anaconda 1. Update the Conda package manager: `conda update conda` If a new version of Conda is available, we are asked whether we want to install it: ________________________________________________________ Proceed ([y]/n)? ________________________________________________________ To install the updates, type "*y*" and press `Enter`. If no updates are available, we are not asked to install anything. 2. Update all packages to the latest Anaconda version: `conda update anaconda` 4.3. Check that the SciPy, NumPy, Matplotlib and Pandas Libraries Are Installed 1. Open a text editor and create the following file (you can copy and paste the text):
###Code
# SciPy
import scipy
print('scipy: %s' % scipy.__version__)
# NumPy
import numpy
print('numpy: %s' % numpy.__version__)
# Matplotlib
import matplotlib
print('matplotlib: %s' % matplotlib.__version__)
# Pandas
import pandas
print('pandas: %s' % pandas.__version__)
###Output
_____no_output_____
###Markdown
Guardamos el fichero anterior con el nombre "versiones_scipy.py". Esto es un script o programa en Python que imprime por pantalla las versiones instaladas de las librerías SciPy, NumPy, Matplotlib y Pandas. 2. Desde el terminal (macOS y Linux) o Anaconda Prompt (Windows), navegamos hacia el directorio en el que hemos guardado el fichero anterior. 3. Ejecutamos el script de Python mediante: `python versiones_scipy.py` Esto muestra las versiones instaladas de SciPy, NumPy, Matplotlib y Pandas: ________________________________________________________ scipy: 1.3.1 numpy: 1.17.3 matplotlib: 3.1.1 pandas: 0.25.2 ________________________________________________________ Nota: Puede que nos aparezcan números de versión más recientes que estos, ya que las librerías se actualizan frecuentemente. 5. Actualizar la Librería de Machine Learning scikit-learn___ La librería scikit-learn ya viene instalada con Anaconda. La actualizamos y comprobamos la versión instalada: 1. Actualizamos la librería scikit-learn: `conda update scikit-learn` 2. Abrimos un editor de textos y creamos el siguiente fichero (podemos copiar y pegar el texto):
###Code
import sklearn
print('scikit-learn: %s' % sklearn.__version__)
###Output
_____no_output_____
###Markdown
We save the previous file with the name "version_scikit-learn.py". 3. From the terminal (macOS and Linux) or Anaconda Prompt (Windows), we navigate to the directory where we saved the file. 4. We run the Python script with: `python version_scikit-learn.py` This shows the installed version of the scikit-learn library: ________________________________________________________ scikit-learn: 0.21.3 ________________________________________________________ Note: a newer version number than this may appear, since the library is updated frequently. At this point, with NumPy, SciPy, Matplotlib, Pandas and scikit-learn, we have everything we need to start practicing machine learning with Python. If we also want to use the main deep learning libraries for Python, the next section shows how to install them. 6. Install the Deep Learning Libraries___ Main deep learning libraries for Python: - **TensorFlow** - **Keras**: a high-level API for neural networks. Under the hood it uses TensorFlow. - **PyTorch** Other available libraries that are no longer maintained: - **Theano** There are more libraries; here we show how to install the main ones. It is not necessary to install all of them, only the ones we are going to use. 6.1. Install TensorFlow 1. We install TensorFlow: `conda install tensorflow` Or, if we want to run TensorFlow on GPUs: `conda install tensorflow-gpu` 2. We confirm that TensorFlow was installed correctly. We open a text editor and create the following file (we can copy and paste the text):
###Code
import tensorflow
print('tensorflow: %s' % tensorflow.__version__)
###Output
_____no_output_____
###Markdown
We save the previous file with the name "version_tensorflow.py". 3. From the terminal (macOS and Linux) or Anaconda Prompt (Windows), we navigate to the directory where we saved the file. 4. We run the Python script with: `python version_tensorflow.py` This shows the installed version of the TensorFlow library: ________________________________________________________ tensorflow: 2.0.0 ________________________________________________________ Note: a newer version number than this may appear. 6.2. Install Keras Some clarifications about Keras: - **Multi-backend Keras**: - Previously, Keras could be used on top of different libraries or backends (TensorFlow, Theano or CNTK). It can still be used with any of these libraries, but multi-backend Keras will not be maintained in the future. - [Keras 2.3.0](https://github.com/keras-team/keras/releases/tag/2.3.0) is the last multi-backend release, i.e. the last version of Keras that supports TensorFlow, Theano and CNTK. - Multi-backend Keras has been replaced by **tf.keras** (included within TensorFlow). Bugs in multi-backend Keras will only be fixed until April 2020; after that, the Keras team will not maintain multi-backend Keras and development will focus solely on **tf.keras**. The Keras development team therefore recommends that multi-backend Keras users switch to **tf.keras**. - Keras on PyPI will become **tf.keras**. - [**tf.keras**](https://www.tensorflow.org/guide/keras): - Keras (**tf.keras**) is part of the TensorFlow library as of TensorFlow 2.0. - **tf.keras** is where Keras development is currently focused. - **tf.keras** implements the same API as Keras 2.3.0 and also includes additional functionality for TensorFlow. We therefore recommend using Keras from within the TensorFlow library. To do so: 1. We install TensorFlow, following the instructions in the previous section. 2. We check that we have at least TensorFlow 2.0 and check the Keras version: we open a text editor and create the following file (we can copy and paste the text):
###Code
# TensorFlow
import tensorflow
print('tensorflow: %s' % tensorflow.__version__)
# Keras
print('keras: %s' % tensorflow.keras.__version__)
###Output
_____no_output_____
###Markdown
We save the previous file with the name "version_keras.py". 3. From the terminal (macOS and Linux) or Anaconda Prompt (Windows), we navigate to the directory where we saved the file. 4. We run the Python script with: `python version_keras.py` This shows the installed version of the Keras library: ________________________________________________________ tensorflow: 2.0.0 keras: 2.2.4-tf ________________________________________________________ Note: newer version numbers than these may appear. If we have a TensorFlow version older than 2.0, we update it: `conda update tensorflow` Note: we do not have to install Keras separately, since it is included within TensorFlow. 5. To use Keras inside a Python script, we import TensorFlow:
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
And now we can use Keras. Example:
###Code
Dense = tf.keras.layers.Dense
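# A slightly fuller usage sketch (hypothetical layer sizes, not from the original
# document): build and compile a tiny fully connected network with tf.keras,
# reusing the `tf` import from the previous cell and the `Dense` alias above.
model = tf.keras.Sequential([
    Dense(8, activation="relu", input_shape=(4,)),  # hidden layer, 4 input features
    Dense(1),                                       # single regression output
])
model.compile(optimizer="adam", loss="mse")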
###Output
_____no_output_____
###Markdown
6.3. Install PyTorch 1. [We install PyTorch](https://pytorch.org/get-started/locally/): If our system does not have a GPU, or we do not need to run PyTorch on GPUs: `conda install pytorch torchvision cpuonly -c pytorch` If we want to run PyTorch on GPUs, we run the command that matches our system's CUDA version: - CUDA 10.1: `conda install pytorch torchvision cudatoolkit=10.1 -c pytorch` - CUDA 9.2: `conda install pytorch torchvision cudatoolkit=9.2 -c pytorch` 2. We confirm that PyTorch was installed correctly. We open a text editor and create the following file (we can copy and paste the text):
###Code
import torch
print('PyTorch: %s' % torch.__version__)
print('PyTorch with CUDA? %s' % ("Yes" if torch.cuda.is_available() else "No"))
###Output
_____no_output_____
###Markdown
We save the previous file with the name "version_pytorch.py". 3. From the terminal (macOS and Linux) or Anaconda Prompt (Windows), we navigate to the directory where we saved the file. 4. We run the Python script with: `python version_pytorch.py` This shows the installed version of the PyTorch library and whether we installed it with GPU support or not: ________________________________________________________ PyTorch: 1.3.1 PyTorch with CUDA? No ________________________________________________________ Note: a newer version number than this may appear. 6.4. Install Theano Note: Theano is an open source library and is still available, but it has not been maintained or supported since version 1.0 (released in November 2017). We simply leave the installation instructions here in case someone wants to try it. 1. We install Theano: `conda install theano` 2. We confirm that Theano was installed correctly. We open a text editor and create the following file (we can copy and paste the text):
###Code
import theano
print('theano: %s' % theano.__version__)
###Output
_____no_output_____
###Markdown
We save the previous file with the name "version_theano.py". 3. From the terminal (macOS and Linux) or Anaconda Prompt (Windows), we navigate to the directory where we saved the file. 4. We run the Python script with: `python version_theano.py` This shows the installed version of the Theano library: ________________________________________________________ theano: 1.0.4 ________________________________________________________ 7. Final Check___ We can check that all the libraries were installed correctly with a single script (shown below). To do so, we create a script that prints the versions of the libraries we have installed. If we did not install one of the libraries, we simply delete the lines corresponding to it. We save the script with the name "versiones.py" and run it with `python versiones.py`.
###Code
# Python
import platform
print('python: %s' % platform.python_version())
# SciPy
import scipy
print('scipy: %s' % scipy.__version__)
# NumPy
import numpy
print('numpy: %s' % numpy.__version__)
# Matplotlib
import matplotlib
print('matplotlib: %s' % matplotlib.__version__)
# Pandas
import pandas
print('pandas: %s' % pandas.__version__)
# scikit-learn
import sklearn
print('sklearn: %s' % sklearn.__version__)
# TensorFlow
import tensorflow
print('tensorflow: %s' % tensorflow.__version__)
# Keras
print('keras: %s' % tensorflow.keras.__version__)
# PyTorch
import torch
print('pytorch: %s' % torch.__version__)
print('pytorch with CUDA? %s' % ("Yes" if torch.cuda.is_available() else "No"))
# Theano
import theano
print('theano: %s' % theano.__version__)
###Output
python: 3.7.5
scipy: 1.3.1
numpy: 1.17.3
matplotlib: 3.1.1
pandas: 0.25.2
sklearn: 0.21.3
tensorflow: 2.0.0
keras: 2.2.4-tf
pytorch: 1.3.1
pytorch with CUDA? No
theano: 1.0.4
tf0/Demo_Autodiff_Algorithm.ipynb | ###Markdown
Implement the `Scalar` class
###Code
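# The original cell is empty in this export. The sketch below is a hypothetical,
# minimal reverse-mode autodiff Scalar (in the spirit of micrograd), not necessarily
# the notebook author's implementation.
class Scalar:
    """A number that remembers how it was computed, so gradients can be
    obtained by reverse-mode automatic differentiation."""

    def __init__(self, value, parents=()):
        self.value = value                 # numeric value of this node
        self.grad = 0.0                    # accumulated d(output)/d(self)
        self._parents = parents            # nodes this one was computed from
        self._backward = lambda: None      # propagates self.grad to the parents

    def __add__(self, other):
        other = other if isinstance(other, Scalar) else Scalar(other)
        out = Scalar(self.value + other.value, (self, other))

        def _backward():                   # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Scalar) else Scalar(other)
        out = Scalar(self.value * other.value, (self, other))

        def _backward():                   # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad
        out._backward = _backward
        return out

    __radd__ = __add__
    __rmul__ = __mul__

    def __sub__(self, other):
        return self + (-1.0) * other

    def backward(self):
        # Visit the graph in topological order, then apply the chain rule backwards.
        order, seen = [], set()

        def visit(node):
            if node not in seen:
                seen.add(node)
                for parent in node._parents:
                    visit(parent)
                order.append(node)

        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            node._backward()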
###Output
_____no_output_____
###Markdown
Apply `Scalar` to linear regression
###Code
###Output
_____no_output_____
###Markdown
Make linear regression data
###Code
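# Hypothetical data-generation sketch (the original cell is empty in this export):
# a one-feature linear problem y = w_true * x + noise, so gradient descent should
# recover a slope close to w_true. The names x_data, y_data and w_true are assumptions.
import random

random.seed(0)
w_true = 3.0
x_data = [random.uniform(-1.0, 1.0) for _ in range(50)]
y_data = [w_true * x + random.gauss(0.0, 0.1) for x in x_data]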
###Output
_____no_output_____
###Markdown
Implement the mean squared error calculation
###Code
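# Mean squared error written in terms of Scalar, so that loss.backward() can give
# d(MSE)/dw (sketch; assumes the Scalar class and the data defined above).
def mse(w, xs, ys):
    """MSE of the no-intercept model y_hat = w * x over the dataset (xs, ys)."""
    total = Scalar(0.0)
    for x, y in zip(xs, ys):
        err = w * x - y          # prediction error for one sample
        total = total + err * err
    return total * (1.0 / len(xs))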
###Output
_____no_output_____
###Markdown
Confirm that gradient descent reduces MSE
###Code
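# Gradient-descent sketch: evaluate the MSE, backpropagate, and step w against the
# gradient. The learning rate and number of steps are arbitrary illustrative choices.
w = Scalar(0.0)
learning_rate = 0.5
for step in range(21):
    w.grad = 0.0                          # clear the gradient from the previous step
    loss = mse(w, x_data, y_data)
    loss.backward()                       # fills in w.grad = d(MSE)/dw
    w.value -= learning_rate * w.grad     # plain gradient-descent update
    if step % 5 == 0:
        print('step %2d  MSE = %.4f' % (step, loss.value))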
###Output
_____no_output_____
###Markdown
Compare the value of `w` to the analytical solution: the ordinary least squares solution is $ (X^TX)^{-1}X^Ty $
###Code
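# With a single feature and no intercept, the closed form (X^T X)^{-1} X^T y reduces
# to sum(x*y) / sum(x*x); compare it with the w found by gradient descent
# (sketch; assumes x_data, y_data and w from the previous cells).
w_ols = sum(x * y for x, y in zip(x_data, y_data)) / sum(x * x for x in x_data)
print('gradient descent w: %.4f' % w.value)
print('OLS closed-form  w: %.4f' % w_ols)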
###Output
_____no_output_____ |